3ie replication consultation: feedback for the future

01 June 2018

A few years ago, 3ie established the impact evaluation replication programme to highlight the benefits of internal replication studies of impact evaluations in the development sector. The programme soon became the subject of much debate that revolved around the scope, methods and timing of replication research. To get feedback from its stakeholders on the way ahead, 3ie recently hosted a consultation event in Washington DC that brought together both the proponents and the critics of replication. The event featured rich discussions and raised some important ideas and questions about the future of replication research and the role 3ie could play.

Three years ago, 3ie established a programme to highlight the benefits of internal replication studies of impact evaluations in the development sector. The goal was to incentivise the conduct of such studies and thus improve the quality and reliability of impact evaluation evidence used for policymaking. We quickly found ourselves in the midst of much debate, as there is substantial disagreement among researchers about the scope, methods and timing of replication research (see here for a relevant discussion of the debate).

With a handful of 3ie-funded replication studies now complete, we started thinking about next steps. We were eager to use stakeholder feedback in our decision-making process but were somewhat apprehensive about the idea of sitting in a room filled with replication proponents and critics.

We did it anyway. That is to say, we invited an array of social science researchers to #3ieRepCon, a day-long consultation on internal replication held on 29 May in Washington, DC. Bruce McCullough's keynote address was on the past, present and future of replication in the social sciences, while Brian Nosek's was on the work that the Center for Open Science has undertaken in the areas of replication and reproducibility.

Two researchers involved in 3ie-funded replication studies presented their results and discussed the process of conducting replication research. The discussant, also an original author of one of these studies, added an important perspective from ‘the other side.’ 

Benjamin Wood presented a forthcoming paper, co-authored with Annette Brown, on 3ie’s replication programme design and an assessment of our work to date contextualised within larger conversations in this space. A closing panel shared perspectives from organisations such as Experiments in Governance and Politics and Innovations for Poverty Action and provided recommendations for 3ie’s future replication work.

It turned out our apprehension was misplaced. As one participant commented toward the end of the day, “We’ve remained almost disappointingly civil.” 

The 3ie replication team was not disappointed. We appreciated the active dialogue throughout the day, enhanced by everyone's efforts to maintain a safe space where participants could ask questions and engage in thoughtful debate. We left the consultation with a lot of food for thought.

Many topics were discussed. One that dominated much of the conversation was the question of how internal replication research should be defined. What constitutes a replication study? Where do you draw the line between replication research and a new study? Although 3ie has defined various aspects of replication, Benjamin Wood noted that we have failed to make them stick. A takeaway from these discussions was that the word 'replication' itself may be problematic. Rather than getting stuck on the word, we should focus more on the questions that need to be answered.

Another recurring theme was incentives, particularly the lack of incentives for replication researchers to conduct these studies, for impact evaluators to take the time to clean and share their data, and for journals and other third parties to encourage and enforce these practices and research transparency more broadly.

Participants also discussed questions of responsibility. Who is responsible for ensuring the quality of research? What happens if and when this responsibility is neglected?

Throughout the event, 3ie received a lot of praise for its work to date in a 'dangerous space'. Where participants differed was on the question of where we should go in the future. Some encouraged us to continue on our current path, albeit with more guidance to the stakeholders involved in the replication process.

Others encouraged 3ie to venture into new areas, for instance publishing guidelines and best practices for replication research, defining Journal of Economic Literature codes for replication studies, advocating for journals to enforce replicability of published articles, and facilitating external replication research (repeating studies in new contexts or with new data).

3ie is further exploring these ideas to see how they align with our mandate to improve the quality of impact evaluation evidence used for development policymaking. Annette Brown and Benjamin Wood will post a blog soon on upcoming changes to our programme. Watch this space.

(Jennifer Ludwig is a Program Manager at 3ie)