
Do implementation challenges run impact evaluations into the ground?

Déo-Gracias Houndolo and Jyotsna Puri

Economists use a variety of tools to understand the impact caused by development programmes. Theories of change, qualitative analysis, quantitative techniques and advanced econometrics are all arrows in the quiver. But are these methods sufficient to ensure high-quality impact evaluations?

To put it another way, is there a difference between a project designed solely for implementation and one that is prepped for an impact evaluation? And if so, does this difference compromise the external validity of impact evaluation results?

We discuss these questions, using some examples from the field. 

Our first example comes from a field trip to the study site of a 3ie-supported impact evaluation in India. The programme being implemented was a conditional cash transfer project. The proposed design was a randomised controlled trial (RCT) with a pipeline approach: a randomly selected group of villages in a large Indian state was supposed to receive a debt-swapping intervention targeting self-help groups in the first year. The intervention provided low-interest loans to rural villagers in self-help groups so that they could pay off the high-interest loans they had taken from local moneylenders. The villages that served as a control group were supposed to receive the intervention later.
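
For readers less familiar with the pipeline approach, the sketch below shows, in simplified form, how villages might be randomly split into an early-treatment group and a later-treatment (control) group. The village list, sample size and 50/50 split are hypothetical and are not drawn from the actual study.

```python
# Illustrative sketch only: random assignment of villages to treatment
# phases under a pipeline design. Village IDs and the 50/50 split are
# hypothetical, not details of the study described above.
import random

random.seed(42)  # fix the seed so the assignment is reproducible

villages = [f"village_{i:03d}" for i in range(1, 201)]  # hypothetical sampling frame
random.shuffle(villages)

cutoff = len(villages) // 2
phase_1 = villages[:cutoff]   # receive the debt-swapping intervention in year one
phase_2 = villages[cutoff:]   # serve as controls and receive it later

print(len(phase_1), "villages in phase 1;", len(phase_2), "held back as controls")
```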

This was how it was planned on paper. During a field monitoring visit, when we spoke to one of the principal investigators of the project, who was also a member of the implementation team, we realised that he did not know what random allocation meant. He also did not know that the project needed to be implemented in batches (the pipeline approach). In fact, he proudly talked about the near-universal coverage of the debt-swapping programme. Worse, baseline data collection, which was supposed to have started two years earlier, was still being planned.

All this while, the lead research team, located in a far-off western country, was designing survey instruments, planning survey roll-outs and preparing for the evaluation.

Epilogue: The project design has since changed to include a randomly assigned ‘encouragement’* to administrators to follow up on implementing the debt-swapping intervention.

In the case of another 3ie-supported study, examining factors affecting institutional deliveries, the study team selected a survey firm that was commercially well known for market research. The firm very quickly provided the study team with an exhaustive dataset. But when the study team went to the field with a research assistant, who was also trained in the vernacular, they found that the firm's enumerators were not very skilled in survey techniques.

While collecting voice data, surveyors had been coaching their illiterate respondents to give the ‘right’ answer. Since the surveyors needed to reach their target numbers each day, they had little patience with their respondents. A senior researcher from the study team said, “After the question was asked, the surveyor would put the tape recorder away. They would then gently tell the respondent how to respond and push the tape recorder towards the person.” Later, when confronted with this obvious data tampering, the survey team told the senior researcher, “Tell us what you expect to see in the data, and we’ll show it to you.”

Epilogue: The study team has decided to drop the survey collection team.

Our third example comes from a 3ie-supported study examining the impact of mobile phone use on savings accounts. Surveyors were scheduled to survey target groups, who had received mobile phones and an initial start-up amount, about their savings habits.

During the course of our field monitoring, we found that the surveyors had been visiting households in their target group every month, and these regular visits carried on for several months. Not only did the surveyors end up knowing every member of the households they were visiting, but they also knew their savings habits. They also knew the ‘secret pin codes’ and chatted openly about the use of these savings accounts. For their part, households would wait for the surveyors to come and operate their savings accounts.

It was not difficult to see the reasons for the uptake of phone-based savings accounts in the population. The ‘surveyor effect’ was easy to predict, but, interestingly, the research team had not accounted for it. They saw the uptake as proof that the concept was working.

Epilogue: A separate group has been identified to look at survey effects.

Our fourth and final example comes from an impact evaluation of an agro-business package intervention in Niger. One of us was part of the research team for this project. The programme provided production inputs to agricultural cooperatives, along with measures to ease access to the market. It was predicted that the intervention would reduce costs, improve producers’ technical skills and consequently lead to an increase in production and exports. The proposed design was an RCT.

In this case, while the evaluation team was still developing baseline data collection tools, the implementing agency went ahead and delivered the intervention. The intervention was demand-based and delivered on a first-come, first-served basis, so farmers who submitted their applications earliest received the agricultural subsidy package. The implementing agency’s monitoring and evaluation team did not grasp that such a demand-based process for identifying beneficiaries jeopardised the RCT.

Epilogue: A programme manager was appointed and the research team used an alternative design: propensity score matching.
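
For readers less familiar with the fallback design, here is a minimal, purely illustrative sketch of propensity score matching on simulated data. The variables (cooperative size, distance to market) and all numbers are hypothetical and are not taken from the Niger study.

```python
# Minimal sketch of propensity score matching on simulated data.
# Variable names and values are hypothetical, not from the actual study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
# Simulated cooperative characteristics and a self-selected treatment flag
size = rng.normal(50, 10, n)       # cooperative size
distance = rng.normal(20, 5, n)    # distance to market (km)
applied = rng.binomial(1, 1 / (1 + np.exp(-(0.05 * size - 0.1 * distance))))  # demand-based uptake
outcome = 2 * applied + 0.1 * size + rng.normal(0, 1, n)  # production outcome

X = np.column_stack([size, distance])

# 1. Estimate propensity scores: probability of applying given observables
ps = LogisticRegression().fit(X, applied).predict_proba(X)[:, 1]

# 2. Match each treated cooperative to the nearest untreated one on the score
treated, control = applied == 1, applied == 0
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# 3. Average treated-minus-matched-control outcome gap (ATT estimate)
att = (outcome[treated] - outcome[control][idx.ravel()]).mean()
print(f"Estimated effect on the treated (ATT): {att:.2f}")
```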

Lessons learnt: too little too late? 

In each of the cases described here, a solution was found to address the problem at hand. But the big message we take away from these case studies is that there is an obvious disconnect between implementers and researchers. 

To help bridge this gap, we think it is critical to have a field manager liaise between the implementation team and the research team. This field manager should be carefully chosen: he or she should know the operational intricacies of the project and the political pressures it faces, but should also understand the evaluation design and be well versed in managing evaluations.

Our own epilogue: When we recommended the idea of having a research team on the ground to one of 3ie’s grantees, we got the following response: "We note that we are only responsible for the evaluation of the program…the implementing agency has its own monitoring system…".

(Déo-Gracias Houndolo is an Evaluation Specialist and Jyotsna (Jo) Puri is Deputy Executive Director and Head of Evaluation at 3ie)

Footnote: *In encouragement designs, participants are randomly encouraged to participate in the programme. This is usually done when randomising access to, or participation in, the programme is not practical or desirable.
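
As a purely illustrative aside, the sketch below shows, on simulated data, how an encouragement design is typically analysed: random encouragement serves as an instrument for actual programme take-up, and a simple Wald estimator recovers the effect on those who take up because of the encouragement. The numbers and take-up rates are invented and do not describe the study above.

```python
# Illustrative encouragement-design analysis on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

encouraged = rng.binomial(1, 0.5, n)           # randomly assigned encouragement
# Take-up is imperfect: encouragement raises participation but does not force it
takeup = rng.binomial(1, 0.2 + 0.4 * encouraged)
outcome = 1.5 * takeup + rng.normal(0, 1, n)   # true effect of participation = 1.5

# Wald / IV estimator: intent-to-treat effect divided by the first-stage take-up gap
itt = outcome[encouraged == 1].mean() - outcome[encouraged == 0].mean()
first_stage = takeup[encouraged == 1].mean() - takeup[encouraged == 0].mean()
print(f"Estimated effect on compliers (LATE): {itt / first_stage:.2f}")
```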

Published on: 30 May 2013