The intervention: what are we doing and why?
During the two months that I’ve been in Rwanda, I’ve seen too many injuries and deaths from traffic accidents. It seems like every week I either pass by another calamitous accident on the side of the road or talk to someone who has had a near-death experience on the back of a motorcycle. Riding around on public buses, I find myself holding my breath as drivers try to pass cars on curving mountain roads. When I ride on the back of the motos that routinely speed and weave through traffic, I always have a newfound gratitude for being alive when I reach my destination. What I’m trying to say is that reckless driving is common here, and the result is a high rate of traffic deaths and injuries.
In order to help reduce traffic deaths and make getting around a less terrifying experience, some professors from Georgetown University’s Initiative on Innovation, Development, and Evaluation (Gui2de) have come up with a clever intervention that is intended to ‘nudge’ drivers and passengers towards safer behaviors. They put these stickers in buses.
These stickers are supposed to encourage passengers to tell the driver to slow down, and we also think that they may affect the drivers more directly by encouraging them to slow down before they get yelled at. In Kenya, Gui2de already conducted an RCT (a randomized controlled trial) that found that these stickers reduced traffic accidents by 50%, at a cost of only $6 per year of life saved. To determine whether this intervention works elsewhere, they are replicating the experiment in Uganda, Tanzania, and here in Rwanda.
In the world of development, the experimental method is often referred to as the gold standard of program evaluation. Since I’ve been in Rwanda, I have begun to realize that it should carry that name not only for its rigor in determining causal impact, but also because it is as expensive as the name implies. While the cost of saving a year of life with stickers is only $6, the cost of evaluating the effectiveness of stickers is much, much higher. If we weren’t doing an experiment, we could just throw the stickers in buses and be done with it. However, we are doing an experiment, so we have to collect a lot of data and spend a lot of time building relationships with the many people and organizations that allow us to do research and access information.
At first, I started to wonder whether an experiment was worth all of this extra money. If the intervention was so effective in Kenya, and if it’s so cheap, then is the rigor of an RCT really necessary? But then I realized that the evidence of the program’s effectiveness is not just important for the academics and experts who run the project. In order for local stakeholders to buy in to the project, they need to know that it works in Rwanda. They couldn’t care less what effect it had in Kenya. And without this buy-in, the project would fall apart in the long run. We need the police to be on board because they give permission for the stickers to be in buses; we need insurance companies to care because they may be the ones who fund the project when they realize how much money it saves them; and we need bus owners to care because they will be the ones to ensure that the stickers stay in their buses. And if we can’t convince them that the sticker will have an impact in this country, then none of that will happen.
In order for an RCT to work, you need a lot of data. And it has been my job to get that data for the baseline (pre-sticker placement), so we can observe changes in accident rates, deaths and injuries, and costs to the insurance companies. Some of it has come from police reports. With the help of a translator, I had the pleasure of reading through hundreds of police reports on fatal accidents in the country (and I thought I was nervous about moto rides before…). However, believe it or not, many people like to avoid interacting with the police if possible, so the police data is very incomplete. Because there is much more of an incentive to go to your insurance provider than to the police after an accident, most of the data comes from insurance companies. Some of that data was from electronic files, but most came straight from the physical claims packets… thousands of them.
Essentially, I needed to design a system that would allow us to collect the data we need from these physical forms and then sync it with the electronic files. Thankfully, most of the files were in Kinyarwanda, so the tedious task of entering the data into the tablets that we have did not fall on me. Rather, we hired six enumerators, and it was instead my job to train and supervise them.
I’ve learned a few things since I started this data collection process. For one, supervising enumerators without being able to participate can leave you with a lot of free time. I’ve gotten pretty good at pretending to be busy on my computer while actually just doing Duolingo (or writing blogs…). More importantly, I’ve learned how, even when you try to be as careful as possible, the quality of data you get from the field is inevitably going to be highly imperfect. During our first ‘self-audit’, where we re-entered 10% of the files, I found that so many of them failed to match that we had to start all over. Since then, quality has been better, but you can always count on mistakes. In trying to match observations in Stata, I always end up pulling my hair out, not understanding where some data has come from and where other data has gone. It can all be very frustrating, but at the end of the day I’ve just had to accept that it won’t be perfect. At the very least, I know that bad data won’t cause bias: the imperfections will be spread equally across the control and treatment groups of the experiment.
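To make the self-audit idea concrete, here is a minimal sketch of the kind of check involved: comparing a 10% re-entered sample against the originals and reporting how many records disagree. Everything here is hypothetical for illustration — the field names, claim IDs, and values are invented, and the real workflow used tablets and Stata rather than this toy function.

```python
# Hypothetical sketch of a double-entry self-audit: compare re-entered
# claims records to the originals and report the mismatch rate.
# All record contents below are invented for illustration.

def audit_mismatch_rate(originals, reentered, fields):
    """Compare re-entered records to originals, keyed on claim ID.

    Returns the fraction of audited records that disagree on any field,
    plus a list of (claim_id, field, original_value, reentered_value).
    """
    by_id = {rec["claim_id"]: rec for rec in originals}
    discrepancies = []
    mismatched = 0
    for rec in reentered:
        orig = by_id.get(rec["claim_id"])
        if orig is None:
            continue  # re-entered record has no matching original
        diffs = [(rec["claim_id"], f, orig[f], rec[f])
                 for f in fields if orig[f] != rec[f]]
        if diffs:
            mismatched += 1
            discrepancies.extend(diffs)
    return mismatched / len(reentered), discrepancies

# Toy example: two of three audited claims disagree with the originals.
originals = [
    {"claim_id": "A1", "injuries": 2, "payout": 150000},
    {"claim_id": "A2", "injuries": 0, "payout": 80000},
    {"claim_id": "A3", "injuries": 1, "payout": 40000},
]
reentered = [
    {"claim_id": "A1", "injuries": 2, "payout": 150000},
    {"claim_id": "A2", "injuries": 1, "payout": 80000},   # entry typo on injuries
    {"claim_id": "A3", "injuries": 1, "payout": 45000},   # entry typo on payout
]
rate, diffs = audit_mismatch_rate(originals, reentered, ["injuries", "payout"])
print(f"mismatch rate: {rate:.0%}")  # → mismatch rate: 67%
```

The same logic is what a Stata `merge` with a comparison of entered values does at scale; the pain comes when the claim IDs themselves are mistyped, so records fail to match at all.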
Aside from data collection, I’ve also helped a bit with the methodology, where we try to figure out how implementation will work in a practical sense. There are a lot of puzzles to work through: how are we supposed to actually find all the buses to put stickers in them? What is the most cost-effective way to get a random sample of bus drivers to interview (there is a qualitative aspect of the study that involves surveys)? And can we record bus passengers’ reactions to the stickers without asking permission and thus biasing results?
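The sampling puzzle, at least, has a simple core once you have a roster to draw from. The sketch below assumes a hypothetical roster of registered drivers (the names and counts are invented); in practice, building that sampling frame is the expensive part, not the draw itself.

```python
# Hypothetical sketch: drawing a reproducible simple random sample of
# bus drivers to interview. The roster here is invented; in the field,
# assembling a complete roster is the hard part.
import random

def sample_drivers(roster, n, seed=2024):
    """Draw a simple random sample of n drivers, reproducible via seed."""
    rng = random.Random(seed)  # fixed seed so the draw can be audited later
    return rng.sample(roster, n)

roster = [f"driver_{i:03d}" for i in range(1, 201)]  # say, 200 registered drivers
interviewees = sample_drivers(roster, 20)
print(len(interviewees))  # → 20
```

Fixing the seed matters for research: anyone checking the study can re-run the draw and confirm that the interviewed drivers really were selected at random rather than by convenience.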
Overall, it has been a great experience to apply a lot of the research skills that I have learned at GHD, and to get a better idea of how field research is actually conducted. However, the work I am doing may end up serving little more purpose than to give me experience; the Gui2de team has been working here for almost a year and still has not received permission from the police to put up the stickers. If they don’t get it soon, they may not be able to renew funding for a second year. So while this project has the potential to save many lives and create the evidence needed to sustain it, it may in the end be shut down by the stubborn resistance of bureaucracy.