Today we are launching, at 3ie, a yearlong social media campaign called ‘2020 Hindsight: What Works in Development.’ For our non-American readership, let me briefly explain where the idiom ‘Hindsight is 20/20’ comes from. 20/20 vision means you can see clearly at 20 feet what a person with normal eyesight sees at that distance. 20/20 hindsight means that one can evaluate past choices more clearly than at the time they were made – indeed with a perfect clarity of vision that one did not have at the time.
To explain why we are launching this campaign and what we hope to accomplish, join me first in a little experiment of travelling back in time. In March 2006, the Evaluation Gap Working Group published a report aptly titled When Will We Ever Learn? Improving Lives Through Impact Evaluation, which was one of the fundamental reasons why 3ie was established in 2008. The report concluded grimly, and correctly, that:
‘Each year billions of dollars are spent on thousands of programs to improve health, education and other social sector outcomes in the developing world. But very few programs benefit from studies that could determine whether or not they actually made a difference. This absence of evidence is an urgent problem: it not only wastes money but denies poor people crucial support to improve their lives.’
3ie was established to help address this evidence gap by supporting the production of new and rigorous evidence of what works, where, why, for whom and at what cost. Several other important players also came into existence during the first decade of the new millennium: Innovations for Poverty Action, the Center for Effective Global Action and the Abdul Latif Jameel Poverty Action Lab, two of whose founders won the 2019 Nobel Prize in economics.
Fast forward to 2020 and thousands of development effectiveness studies have been implemented in low- and middle-income countries (L&MICs), examining what works, for whom, why, and at what cost. 3ie was an important contributor to this achievement. Most of these studies, which now constitute a large public good of evidence, are available in 3ie’s searchable repositories of impact evaluations.
So what are the 10 things that work? Surely 3ie should have something to say about this, given its role in addressing the evidence gap described more than a decade ago?
Yes, but...not so fast! The studies we are referring to here are individual impact evaluations. These can inform us about whether a certain intervention or development program worked, but their findings may be of limited applicability for significantly different countries, contexts, time-periods, or implementation approaches.
The good news is that 3ie has gone a step further. We have produced or supported more than 50 systematic reviews thanks to the foresight of our founding executive director, Howard White. And since then 3ie has curated the world’s largest repository of systematic reviews of development-related questions.
Systematic reviews use standardized methods to collect secondary data, critically appraise research studies, and synthesize findings qualitatively or quantitatively. They are designed to provide an exhaustive summary of the current rigorous evidence relevant to a research question. This means you can draw on all of the existing relevant studies on a topic, rather than just one. Done properly, most people will agree that several data points are better than one!
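To make the quantitative side of this concrete, here is a toy sketch of the inverse-variance (fixed-effect) pooling that underlies many meta-analyses in systematic reviews. The study estimates and standard errors below are entirely made up for illustration and do not come from any real evaluation; the point is only that more precise studies get more weight, and the pooled estimate is more precise than any single study.

```python
# Toy fixed-effect (inverse-variance) meta-analysis.
# All numbers are hypothetical, for illustration only.

studies = [
    # (effect estimate, standard error) for each hypothetical study
    (0.12, 0.05),
    (0.08, 0.04),
    (0.15, 0.06),
]

# Each study is weighted by its precision (1 / variance),
# so tighter studies count for more in the pooled answer.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5  # SE of the pooled estimate

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note that the pooled standard error here is smaller than that of any individual study, which is the statistical sense in which several data points beat one.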
So you can ask 3ie whether, say, a conditional cash transfer programme can increase the rate at which people attend health check-ups, and we can answer affirmatively with reasonable confidence based on the available evidence. We can say this because we have systematic reviews (see 1 and 2) on the topic. Without a systematic review of the evidence, we would more likely say ‘well, this worked here and that worked there.'
While the evidence gaps are still large, for example in sectors like climate change, or in contexts of fragility and conflict, we need to hold ourselves to account. What have we learnt 14 years after the report asked ‘when will we ever learn?’ A back-of-the-envelope calculation suggests that since then, about three billion US dollars have been spent on the impact evaluations and systematic reviews captured in the 3ie repositories.* This is not pocket change, but it is only a very small fraction of the amount spent over the same timeframe on development interventions, which may or may not work. Nevertheless, the policymakers and donors who funded this research deserve answers about what works. More importantly, people in L&MICs deserve development interventions that are based on the best available evidence to achieve their intended outcomes.
We therefore embark on a yearlong social media campaign, 2020 Hindsight: What Works in Development. We will present insights from programmes and policies that work based mainly on evidence from systematic reviews.
My team has warned me of the challenges. (i) Many of the impact evaluations included in systematic reviews are of relatively low quality, so we should be careful about the conclusions. (ii) Many systematic reviews find conflicting evidence, due to a lack of evidence or to moderating factors like context, implementation and intervention design. (iii) There may be evidence that interventions mostly do not work. (iv) And even when programs create change, they may not achieve the desired development goals. Elsewhere, we have also blogged about the fact that impact evidence alone is not enough: cost evidence is essential for deciding how scarce resources can best be used, and it is still largely missing.
Nonetheless, the year 2020 will be our year to reflect on what the growing body of evidence has taught us. We hope that our partners at Campbell, EPPI, GESI, ACE, Cochrane and other evidence producers, as well as you, dear reader, will join us!
*This is of course only a sub-sample, as many studies have not passed the inclusion threshold for these repositories. There is also a wider range of evaluations out there – performance evaluations, programmatic evaluations, corporate evaluations, process evaluations, formative evaluations, qualitative assessments and so on – none of which are captured in the repositories, and on which the largest share of resources is being spent.
Comments
I look forward to reading more on this over the next year. From my perspective, effectiveness could be massively improved in almost every project by applying a much more rigorous and systematic project management approach. It amazes me that in an industry almost completely based on projects, project management as a skill/profession is so undervalued. I only qualified as a project manager after I stopped managing development projects, and I was amazed at how much I didn't know. I now evaluate projects regularly, and effectiveness and efficiency are almost always key criteria to be evaluated, but none of my clients ever want you to ask the hard questions about management systems, as they equate the management system with the manager and then it gets personal. Assessing the effectiveness of M&E systems is often included, but this is only one facet of project management (quality control). If we want to get stuff done and get it done well, we need evidence to build on, but we also need strong systems to ensure implementation is as good as it can be.
Ms. Nancy Birdsall's message remains relevant to many countries, governments and institutions, among others. Her question is yet to be answered.
Bottom line: it takes leadership. M&E/IE reveals whether progress is happening or not. Honesty endears. Lies make a system sick, like COVID-19 – it knows no border, race, religion or politics. Happy holidays!
With best regards to Dr. Howard White. He is an inspiration next to Ms. Nancy B.
Sincerely,
Eva Benita A. Tuzon
A very interesting topic of discussion. As a person involved in a $29.6m programme, I ask myself the same questions of what works, where and why. A clear observation is that the operational model has a lot to do with the impact generated, or its potential. My programme is implemented through government departments at the district level. There is a clear difference in expectations between these departments and the project staff. Implementing staff take it as business as usual, while project staff have their eyes on achieving set targets. Clearly, there is every likelihood of failure, but we are doing everything possible to steer the project towards success.