Implementing impact evaluations: trade-offs and compromises


In June this year, 3ie and the International Fund for Agricultural Development organised a workshop where we had several productive discussions around two key questions: Are impact evaluations answering policy-relevant questions and generating useful evidence? What are the challenges faced in designing and implementing impact evaluations of cash transfers and agricultural innovation programmes?

The workshop brought together a fairly diverse group of participants: researchers, implementing agencies and donors involved in impact evaluations of different kinds of development programmes. What was clear from the conversations was that many stakeholders are invested in impact evaluations, and they often differ in their motivations for, and expectations of, these evaluations. Coordinating among several stakeholders to address these different interests is therefore a challenge at every stage of the impact evaluation cycle. There are several lessons to be learned.

Balancing rigour, feasibility and policy relevance

There are several trade-offs to be made when choosing the method and the research questions of an impact evaluation. How does one balance methodological rigour against implementation feasibility? And how does this balance translate into the evaluation questions? Should an impact evaluation answer only sexy research questions, or focus only on policy-relevant ones?

Take the example of a farmer field school programme: it may well be that only farmers with basic knowledge or entrepreneurial skills end up participating. This kind of selection bias threatens the quality of the impact evaluation. Controlling for unobserved characteristics that affect the decision to participate is more challenging when quasi-experimental methods are used. However, experimental methods are not always feasible. To align the interests of researchers and implementers, or to ask policy-relevant questions, impact evaluators may need to be flexible and choose methods other than randomised controlled trials. There are thus tough choices to be made about rigour, feasibility and policy relevance.
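The selection problem above can be illustrated with a small, purely hypothetical simulation (every number here is invented for illustration): when more-skilled farmers are more likely to enrol, a naive comparison of participants against non-participants bundles the programme effect together with the pre-existing skill gap, and so overstates the true impact.

```python
import math
import random

random.seed(42)

TRUE_EFFECT = 5.0   # hypothetical yield gain from attending the field school
N = 10_000          # hypothetical number of farmers

treated, control = [], []
for _ in range(N):
    skill = random.gauss(0, 1)  # unobserved entrepreneurial skill
    # Self-selection: more-skilled farmers are more likely to enrol
    p_enrol = 1 / (1 + math.exp(-2 * skill))
    enrolled = random.random() < p_enrol
    # Skill raises yields on its own, independent of the programme
    crop_yield = 20 + 3 * skill + (TRUE_EFFECT if enrolled else 0) + random.gauss(0, 1)
    (treated if enrolled else control).append(crop_yield)

naive_estimate = sum(treated) / len(treated) - sum(control) / len(control)
# The naive participant-vs-non-participant gap exceeds the true effect,
# because participants were already more skilled to begin with
print(f"true effect: {TRUE_EFFECT:.1f}, naive estimate: {naive_estimate:.1f}")
```

Randomised assignment removes exactly this gap, which is why quasi-experimental designs have to work harder to control for it.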

Balancing interests of stakeholders

The choice of evaluation method and research questions is a complex exercise, as it requires aligning the interests of different stakeholders: researchers, implementing agencies, donors and so on. Interventions can also be very complex, with multiple components and different activities within these components. This means that, at the time of designing an impact evaluation, evaluating the programme as a whole may not be feasible, and particular components need to be chosen. This, in turn, restricts the types of research questions that can be answered and limits the evaluator’s ability to make recommendations about scaling up the programme.

Managing multiple timelines may come at a cost

The timeline for programme implementation may be subject to delays, which naturally also shifts the timeline of the evaluation. In the case of agricultural programmes, timelines also need to be aligned with agricultural seasons. These sorts of constraints can shorten the follow-up period of the impact evaluation, which may force evaluators to change the outcome measures used to assess impact.

Short-term outcome measures may not give a true picture of the programme’s impact as several interventions tend to take time to produce results. Cash transfers, for instance, may improve children’s enrolment in schools in the short term. However, it is unclear whether the increase in enrolment will translate into improved learning and labour market outcomes in the long term.

Dealing with the challenge of measuring long-term impacts

While it may be important to assess the long-term impacts of programmes, this may not align with the priorities of donors or implementing agencies. Assessing long-term impacts can be costly, and designing an impact evaluation that lasts longer than four years may not be appealing to donors or implementers. Sometimes evidence is needed quickly to inform development programmes. Another hurdle to assessing long-term impacts is the attrition of programme participants. In the case of conditional cash transfers, impact evaluations have often focussed on short-term impacts. A possible reason for this is the reluctance of evaluators to deal with high attrition rates. For example, in the case of the Mexican cash-transfer programme Oportunidades, the attrition rate after 10 years was 60 per cent. The attrition rate therefore needs to be an important consideration when assessing the impact of a programme.
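To see why high attrition worries evaluators, a rough back-of-the-envelope calculation helps (the baseline sample size below is invented; the 60 per cent attrition figure is the one cited above). Because the smallest effect a study can reliably detect scales roughly with one over the square root of the sample size, losing 60 per cent of respondents makes the follow-up survey noticeably less sensitive:

```python
import math

baseline_n = 1_000        # hypothetical baseline sample
attrition_rate = 0.60     # the 10-year attrition rate reported for Oportunidades
remaining_n = int(baseline_n * (1 - attrition_rate))

# The minimum detectable effect scales roughly with 1 / sqrt(n),
# so the study can now only detect effects about 1.6 times larger
mde_inflation = math.sqrt(baseline_n / remaining_n)
print(f"{remaining_n} respondents remain; minimum detectable effect inflated {mde_inflation:.2f}x")
```

And this simple calculation assumes attrition is random; if drop-outs differ systematically from those who stay, the remaining sample is biased as well as smaller.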

Are impact evaluations still worth it?

Given the number of challenges outlined in this blog, are impact evaluations still worth the time and effort invested? The answer is yes. An impact evaluation can be challenging. It may take time and effort to align the interests of stakeholders and reach an agreement among the different parties involved. However, the evidence that impact evaluations generate can inform many different aspects of policy and programme design. Impact evaluations can also improve the efficiency of projects, influence the scale-up of good projects and improve the quality of data systems. Balancing the interests and expectations of implementers, researchers and donors requires effort, commitment and compromises. The engagement of implementers through the entire cycle of the impact evaluation is a decisive factor that influences not just the quality of the impact evaluation but also the relevance and the use of the evidence generated.

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
