Reflections on the Use of Randomized Controlled Trials in Development

By Katie Martin
posted on February 9, 2014 in Global Development

Proponents of Randomized Controlled Trials (RCTs), like The Abdul Latif Jameel Poverty Action Lab at MIT and Yale Economics Professor Dean Karlan of Innovations for Poverty Action, are working to bring RCTs to the forefront of the international development discourse.

The concept behind the use of RCTs is fairly straightforward: evaluate the efficacy of development interventions prior to scale-up, using methods similar to those employed by scientists and pharmaceutical companies. RCTs enable development practitioners to reliably assess a program’s performance through the use of control groups, which provide a baseline for comparison against the group that receives the treatment. In turn, these results can help inform the development of future projects. As a result, an “unsuccessful” intervention can be just as illuminating as a project that achieves marked success on the first try. RCTs, while perhaps time- and resource-intensive, are therefore both worthwhile and relatively simple to explain.

For example, consider a hypothetical agricultural intervention aimed at increasing the yield of a certain crop in South Asia by employing the use of fertilizers. Development Company X implements a yearlong testing phase for the project, providing subsidized fertilizer to a select group of farmers. At the end of the year, the Monitoring & Evaluation team conducts post-phase assessments and finds that crop yield doubled during the test phase and farmer income experienced a similar boost.

This led to a variety of knock-on effects, including improved school attendance for dependents of farming families and increased investment in healthcare. Development Company X lauds the project and advises its implementation team to begin scale-up efforts across the region. A year later, however, they find that the project’s measurable impact has essentially disappeared altogether. What happened? To answer the question, let’s take a look at Development Company Y’s experience with a comparable project.

Development Company Y observes a similar gap in fertilizer use in its region of operation. After conducting some initial research, they decide to run a yearlong experiment involving the provision of subsidized fertilizer to a select group of farmers. At a weekly town hall meeting, they take down the names of 100 farmers interested in the program.

A member of their Monitoring & Evaluation team conducts a few pre-program surveys to ensure that the participants are relatively similar in education, income, and a few other critical factors. The team then randomly selects 50 farmers who will receive the subsidized fertilizer, with the remaining farmers receiving a placebo bag of a compound that will neither harm nor promote the growth of their crops. (Note that it is important that the farmers in the control group are unaware that they are receiving a placebo fertilizer. This helps protect against work-effort bias, such as a farmer not tending to their crops as carefully due to frustration over being in the control group.)
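For readers curious what the assignment step actually looks like, here is a minimal sketch in Python. Everything in it is hypothetical: the farmer names, the income figures, and the `assign_groups` helper are invented for illustration, not drawn from any real program.

```python
import random
import statistics

def assign_groups(farmers, treatment_size, seed=0):
    """Randomly split enrolled participants into treatment and control groups."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = farmers[:]
    rng.shuffle(shuffled)
    return shuffled[:treatment_size], shuffled[treatment_size:]

# Hypothetical enrollment list: 100 farmers with a pre-program income survey.
farmers = [{"name": f"farmer_{i}", "income": 1000 + (i % 7) * 50}
           for i in range(100)]

treatment, control = assign_groups(farmers, treatment_size=50)

# Balance check: after randomization, the groups should look similar
# on pre-program measures such as income.
mean_treatment_income = statistics.mean(f["income"] for f in treatment)
mean_control_income = statistics.mean(f["income"] for f in control)
```

The random shuffle is what does the methodological work: because no one chooses who gets the fertilizer, any pre-existing differences between the groups are due to chance alone, and the balance check simply confirms the split looks reasonable.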

At the end of the year, Development Company Y’s Monitoring & Evaluation team conducts post-test assessments. They find that the farmers who received the real fertilizer experienced a doubling of their crop yield and a similar income boost. However, farmers in the control group experienced a slightly lower, yet still impressive, crop yield boost as well. The Monitoring & Evaluation team concludes that the crop yield increase cannot confidently be attributed to the fertilizer, and recommends that scale-up be delayed until further testing can confirm positive results. Figure 1 shows the basic design of an RCT from a back-to-work program used in the UK (all copyrights belong to the original owner of the image below).

So what happened?

Did farmers in the control group accidentally receive a bag of real fertilizer here and there? Was record rainfall in the region responsible for increasing crop yields across the board? Did a competing development organization implement an overlapping fertilizer program? Did the local government invest in public infrastructure that provided easier access to reliable irrigation? Here’s the kicker: we don’t know. We have no idea why the intervention didn’t achieve markedly successful results. But we don’t have to. At this phase, what is important is recognizing that this project is unlikely to produce transformative effects after a massive scale-up. The next step will involve more testing, investigating the causes of the crop-yield boost, or both. This will enable Development Company Y to use the information to design a more successful intervention in the future.
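The rainfall scenario above can be made concrete with a small simulation. This is a sketch with made-up numbers (a hypothetical 80% regionwide boost from rainfall and a 10% true fertilizer effect), intended only to show why the control group changes the conclusion:

```python
import random
import statistics

def simulate_yields(n, fertilizer_effect, regional_shock, seed=1):
    """Simulate one growing season. Both groups experience the regional
    shock (e.g., record rainfall); only the treatment group also gets
    the fertilizer effect. Yields include random farm-level noise."""
    rng = random.Random(seed)
    base = 100  # hypothetical baseline yield per farm
    treatment = [base * regional_shock * fertilizer_effect * rng.uniform(0.9, 1.1)
                 for _ in range(n)]
    control = [base * regional_shock * rng.uniform(0.9, 1.1)
               for _ in range(n)]
    return treatment, control

# Scenario: rainfall boosts everyone by 80%; fertilizer adds only 10%.
treatment, control = simulate_yields(50, fertilizer_effect=1.10,
                                     regional_shock=1.80)

mean_t = statistics.mean(treatment)
mean_c = statistics.mean(control)

# A naive before/after comparison (Company X's view) sees yields nearly
# double in the treatment group. Comparing against the control group
# instead isolates the fertilizer's contribution.
estimated_effect = mean_t / mean_c
```

Without a control group, the near-doubling of yields would be credited entirely to the fertilizer; with one, the ratio of group means recovers something close to the modest true effect, and the rest is correctly attributed to whatever lifted the whole region.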

Critics of RCTs argue that experimenting on human beings is unethical. Denying a critical intervention to someone in need simply does not sit well with most people. While this argument is well-intentioned, it is perhaps equally unethical to waste dwindling resources on a program that may have zero net effect. Such funds could alternatively be used to scale up a program that has been proven to work through the use of RCTs or another rigorous evaluation method.

On the subject of constrained resources, the argument that RCTs sound like a lot of work should also be addressed. Such trials involve more time, thought, and most likely, money during a period of stretched budgets. Investing in a program that is ineffective, however, is not only fiscally irresponsible; it is reckless. People’s livelihoods depend upon these types of programs. While it may benefit donors to share positive results with stakeholders and the greater development community, negative results can be equally illuminating. They help practitioners to refine their approach and inform future projects so they can be more successful.

Benjamin Franklin is often credited with remarking, of his many failed experiments with electricity, “I didn’t fail the test, I just found 100 ways to do it wrong.” It would behoove the development industry to take this attitude to heart. While it may take more upfront effort to “do” development the right way, the opportunity cost of not doing it is simply too high for the billions of poor people whom these programs seek to serve. Eventually, the development community will find the right way, and that has the capacity to truly transform the world and dramatically reduce the number of people living in extreme poverty.


Katie Martin is a defense consultant and former intern at the White House. You can follow her on Twitter @katiesusette90.