The Research Department of Agence Française de Développement (AFD) has published two papers in its Ex Post collection: 'Insuring health or insuring wealth? An experimental evaluation of health insurance in rural Cambodia' and 'Sky impact evaluation, Cambodia, 2010, village monographs'.
Ian Ramage and David Levine, co-authors of these studies, will be speaking at the Health Financing in Developing Countries conference (Financement de la santé dans les pays en développement) on 27 and 28 May at AFD.
The latest issue of the Journal of Development Effectiveness features papers on the impact evaluation design for the Millennium Villages project in northern Ghana and on the effects of solar ovens on fuel use, emissions and health, based on a randomised controlled trial. The paper 'Effects of the Frontiers Prevention Project in Ecuador on sexual behaviours and sexually transmitted infections amongst men who have sex with men and female sex workers: challenges on evaluating complex interventions' is open access.
How should the impact of development work be measured? Panelists at a recent discussion on the issue offer suggestions to demystify the process and ensure it supports both accountability and learning.
Gareth Roberts of the University of the Witwatersrand presents the findings of this study, which led to a policy change in South Africa.
The study finds that those who were allocated a wage subsidy voucher were more likely to be employed both one year and two years after allocation. The impact of the voucher thus persisted even after it was no longer valid. This suggests that those young people who entered jobs earlier than they would have – because of the voucher – were more likely to stay in jobs. This confirms the important dynamic impacts of youth employment. It also suggests that government interventions which successfully create youth employment are important and have virtuous long-term effects.
This report gathers together examples which attempt to explain how and why change happens as a result of cash transfers (CTs). It first presents a selection of theories of change about how cash transfers are expected to work in general, drawn from academic literature, and then a selection of theories as used in a few specific cash transfer programmes.
Podcast of a presentation by Jyotsna Puri at the 28th Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) meeting.
In a video interview, Phil Davies talks about the importance of timing for researchers wanting to engage and inform policymakers. He talks about 3ie GapMap as a visual and engaging tool for understanding what is known and what isn't.
In Part 2 of the video, he stresses the importance of physical access, encouraging efforts towards ensuring that policymakers are aware of, and can get hold of, the kinds of evidence that they need to make good decisions.
Kirsty Newman underlines three important advantages of synthesised evidence in international development: it can tell us that something is true which we didn't realise was true; it can tell us that something we all thought was true is actually not; and it can tell us that something we thought we fully understood, we actually don't have a clue about.
A revised paper by Abhijit Banerjee, Esther Duflo, Rachel Glennerster and Cynthia Kinnan. It reports on the first randomized evaluation of the impact of introducing the standard microcredit group-based lending product in a new market.
A DFID guidance note on the best ways to assess evidence in international development. It offers some rules on:
- understanding different types of empirical research evidence
- appreciating the principles of high quality evidence
- considering how the context of research findings affects the way that staff may use them
- understanding how to make sense of inconsistent or conflicting evidence
Theme: What evidence-based development has to learn from evidence-based medicine
Speaker: Chris Whitty
Theme: What we have learned from 3ie's experience in evidence-based development
Speaker: Howard White
William Savedoff and Ted Collins find 14 searchable evaluation databases that provide information for policymaking and programme development. But how accessible are these databases? Can they do a better job?
The search is on for databases that include studies that attribute impact to particular programmes, interventions or policies and whose findings are relevant to low- and middle-income countries.
Howard White discusses different designs of randomised controlled trials and addresses criticisms of RCTs, which he argues mostly rest on misunderstandings of the approach.
The standard approach to policymaking and advice in economics implicitly or explicitly ignores politics and political economy, and maintains that, if possible, any market failure should be rapidly removed. This NBER working paper by Daron Acemoglu and James A. Robinson argues that this conclusion may be incorrect.
This CGD Working Paper by Tessa Bold, Mwangi Kimenyi, Germano Mwabu, Alice Ng'ang'a, and Justin Sandefur looks at how a Kenyan education programme proven successful in a randomised controlled trial failed to have similar outcomes when implemented.
See the blog by Justin Sandefur, 'Finding what works in development: what is the what?'
Esther Duflo discusses a 3ie-supported study on Gujarat pollution control in an interview with Yale Insights.
This paper by Paul Shaffer presents a selective review of empirical examples of the use of mixed method, or Q-Squared, approaches in impact assessment.
The Centre for Development Impact (CDI) will design innovative approaches and share learning on impact evaluations. "It is also dedicated to expanding the tool box of methods for impact evaluation, critically evaluating established and new methods as we go along," says Lawrence Haddad, Director, Institute of Development Studies.
Chris Whitty talks about the importance of evidence synthesis in communicating research findings to policymakers, at the 2013 STEPS Centre Symposium.
Most scientists are unsure of how to engage with policy making processes. Better insight into it is the first step to engaging with policy making more effectively. Scientific evidence gains traction when it tells a clear and relevant story.
3ie's Philip Davies cited in the article.
Markus Goldstein highlights the findings from two 3ie-supported studies -- a systematic review on the impact of daycare programmes and the impact evaluation of Save the Children's early childhood development programme in Mozambique.
"The purpose of this document is to advance the Foundation's existing work so that our evaluation practices become more consistent across the organization. We hope to create more common understanding of our philosophy, purpose, and expectations regarding evaluation as well as clarify staff roles and available support."
Bill Gates discusses the importance of measurement in improving the human condition. "From the fight against polio to fixing education, what's missing is often good measurement and a commitment to follow the data. We can do better. We have the tools at hand."
Howard White, 3ie Executive Director, discusses "closing the evidence gap", randomised controlled trials, and the value of impact evaluation with Dereck Rooken-Smith of ODE, AusAID.
Louise Shaxson, Research Fellow, ODI on how "Pressure to demonstrate concrete impacts on public policy is encouraging researchers to make grand claims about what we/they are likely to achieve."
Chris Whitty, DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon, its Chief Economist, respond to Chris Roche and Rosalind Eyben's concerns over the results agenda.
3ie finds a mention in the article by Natasha Gilbert. "To aid such reviews, the International Initiative for Impact Evaluation (3ie), a non-profit organization based in Washington DC that funds and conducts aid-assessment research, is setting up a database in which researchers can register studies. Expected to launch later this year, the initiative aims eventually to provide a complete listing of assessments for various types of aid interventions, says Howard White, executive director of 3ie."
Rosalind Eyben, Research Fellow at IDS, and Chris Roche, Associate Professor at La Trobe University, kick off a discussion on the implications of evidence-based approaches.
Nina Cromeyer Dieke shares tips and lessons the development community can learn from mainstream media. The 3ie policy impact toolkit is mentioned in the story.
Berk Ozler on the pros and cons of using surveys to measure impact.
"...So, evaluating a large government program using an unrelated routine government survey may be fine (although I suspect that they too will have biases depending on what the respondents think the survey is for, how large, important, and ‘in the news’ the intervention is, etc.), but evaluating your own experiment that aims to change some behavior by asking study participants whether they have changed that behavior is unacceptable."
How can systematic reviews contribute evidence for policy? Blogs on this page take up the debate on conducting and using systematic reviews.
Tracey Koehlmoos, adjunct professor at George Mason University, Washington DC, and adjunct scientist at ICDDR,B blogs on sessions at the Dhaka Colloquium on Systematic Reviews.
"...Perhaps the most controversial session that I have attended so far was provocatively named “Rapid reviews: opportunity or oxymoron?” 3ie deputy director, Phil Davies presented on “rapid evidence assessment” and their place in the pantheon of evidence synthesis efforts aimed at informing policy making. Serious questions remain about rapid reviews being biased compared to systematic reviews—and how the process would even allow the developers of these reviews to recognize any biases. However, Davies pointed out that “all evidence is probabilistic.” ..."
"In our view, the vast majority of aid interventions have almost no rigorous evidence behind them. A very small set of interventions – including LLIN distribution – have a broad, impressive evidence base. Deworming is somewhere in between. The studies discussed here are rigorous, have highly encouraging findings, held up to the best scrutiny we could bring to them. At the same time, many questions remain unanswered. This is one of the areas in which an additional long-term study would have the most effect on our views."
JDEff Volume 4, Issue 4 includes a host of interesting articles analysing the impact of development interventions in Nigeria, Yemen and Ethiopia. This issue also includes a piece on using evidence from impact evaluations to influence policy.
Duncan Green on Lant Pritchett's critique of 'RCT randomistas'.
"RCTs are a tool to cut funding, not to increase learning. ‘Randomization is a weapon of the weak’ – a sign of how politically vulnerable the argument for aid has become since the end of the Cold War. ‘Henry Kissinger wouldn’t have demanded an RCT before approving aid to some country.’ And I can’t see the military running RCTs to assess the value for money of new weaponry before asking for more cash (mind you, if they did, that might at least save some money on Trident….)."
David McKenzie takes on Angus Deaton on trial registries. "...because the general points that identification of genuine impacts requires not exploring 1000 different patterns in the data and choosing the one which has a large t-value is not specific to RCTs at all – indeed 3ie is building a trial registry that will attempt to register non-experimental impact evaluations as well."
The special issue highlights how impact evaluation can be applied to a range of policy questions. The papers cover issues such as education, agriculture, migration and microfinance.
"...The MCC brief makes explicit the argument that impact evaluations need to be designed for learning, not just accountability (which had been the primary goal when these set of evaluations started)...."
The Millennium Challenge Corporation has released its first five impact evaluations for farmer training activities in Armenia, El Salvador, Ghana, Honduras, and Nicaragua.
Systematic reviews hold great promise and are relevant to policymakers because they are systematic. Adam Wagstaff uses the Waddington et al. toolkit to analyse two health systems systematic reviews, showing that it is possible to do a systematic review unsystematically and give a biased view of the literature.
Evidence alone is not enough: policymakers must be able to access relevant evidence if their policy is to work
It is not enough to look for evidence of a previous policy success. Jeremy Hardie and Nancy Cartwright argue that exactly what evidence is needed, and of what, is the key question that needs to be asked for making real evidence-based social policy interventions.
When we (rigorously) measure effectiveness, what do we find? Initial results from an Oxfam experiment
Dr Karl Hughes on Oxfam GB's effectiveness reviews: "Currently, there is considerable interest in how to evaluate the impact of interventions that don’t lend themselves to statistical approaches, such as those that are seeking to bring about policy change (aka “small n” interventions). See a recent paper by Howard White and Daniel Phillips. We have attempted to address this by developing an evaluation protocol based on a methodology called process tracing used by some case study researchers."
"This discussion paper aims to support appropriate and effective use of impact evaluations in AusAID by providing AusAID staff with information on impact evaluation. It provides staff who commission impact evaluations with a definition, guidance and minimum standards."
"Is in danger of being messed up. Here is why: There are two fundamental reasons for doing impact evaluation: learning and judgment. Judgment is simple – thumbs up, thumbs down: program continues or not. Learning is more amorphous – we do impact evaluation to see if a project works, but we try and build in as many ways to understand the results as possible, maybe do a couple of treatment arms so we see what works better than what."
This issue highlights why systematic reviews should be an important component of evidence-informed development policy and practice. Papers by 3ie researchers and other authors demonstrate how reviews can be made to live up to the promises generated around them.
3ie recently participated in an IFAD learning event on impact evaluations for environmental and climate change interventions.
William Savedoff blogs on the challenge that impact evaluation poses for organisations. "...Other times, the concerns reflect an unwillingness to clearly state their goals, be explicit about their theories of change, or put their beliefs about what works to an objective test. Yet, this is exactly what is at stake with evaluation: are you willing to be proven wrong?"
World Bank supported evaluation of a public works programme in Ethiopia shows that it's an effective way to help people manage risk and help stabilise income generation.
Cash on Delivery for the world's poorest: "If the project is ambitious, what’s really fascinating is that Dfid has tried to evaluate Tuungane I rigorously, using a randomised controlled trial. Villages were enrolled in Tuungane through a public lottery. With 1.8 million people in the treatment group and a large control group, such an evaluation would be challenging to conduct in a rich country."
The Millennium Village Project, Jeffrey Sachs's integrated approach to rural development, has been subject to criticism following serious flaws in its evaluation methods. A new project site in northern Ghana will be independently assessed.
Dr. Aditi Mukherji of International Water Management Institute, wins the Norman Borlaug Award in recognition of her impact evaluation on groundwater resources in agriculture. This 3ie-supported study led to major policy changes benefiting marginal farmers in West Bengal.
GiveWell lists the principles for deciding how much weight to put on a study’s claims.
"In today’s social sciences environment – in which preregistration is rare – we think that the property of being an RCT is probably the single most encouraging (easily observed) property a study can have, which has a practical implication: we often conduct surveys of research by focusing/starting on finding RCTs (while also trying to include the strongest and most prominent non-RCTs). ...And we think it’s possible that if preregistration were more common, we’d consider preregistration to be a more important and encouraging property of a study than randomization."
"Public debate about two prominent poverty-alleviation programs shows that over the past 15 years international development has become much more scientific," say Dean Karlan and Caroline Fiennes.
Dr Andrew Clappison blogs on the discussions at the recent Theory of Change workshop and suggests ways DFID may want to consider improving the theories of change for its programmes.
The paper titled 'Challenges in Banking the Rural Poor: Evidence from Kenya's Western Province' combines experimental and survey evidence to document some of the supply and demand factors behind low levels of financial inclusion in rural Kenya.
The World Bank Human Development Network has launched an Impact Evaluation Toolkit, a hands-on guide on how to design and implement impact evaluations, especially those related to maternal and child health and those involving results-based financing. It can also be adapted for impact evaluation in other fields.
Lawrence Haddad on a 3ie-IDS randomised controlled trial on how to communicate research findings.
The key questions posed in the research were:
- Do policy briefs influence readers?
- Does the presence of an op-ed type commentary within the brief lead to more or less influence? and
- Does it matter if the commentary is assigned to a well-known name in the field?
A 3ie-supported study on the impact of water and sanitation on child health has been published as a Population Council working paper.
Using a combination of qualitative and quantitative data, this paper, 'The Impact of Water Supply and Sanitation on Child Health: Evidence from Egypt,' investigates whether access to improved sources of water and sanitation is an effective treatment for the incidence of diarrhea among children under five years of age in Egypt.
"In the eternal quest for better data, one of the most exciting modes of data collection as part of survey efforts across the globe is known as Computer-Assisted Personal Interviewing, or CAPI.
"CAPI means the integration of interviewing and data entry process via the use of a handheld device, such as a tablet computer or a netbook, preloaded with an electronic questionnaire."
"Potential biases can arise when collecting qualitative data, in deciding which questions are asked, in what order, how they are asked and how the replies of the respondents are recorded.
"There can also be biases in how the responses are interpreted and analyzed, and finally which results are chosen for presentation. Of course quantitative data and analysis is also prone to bias, such as sampling bias and selection bias. But methodologies have been developed to explicitly deal with these biases. Indeed evaluation designs are judged on precisely how well they deal with these biases."
"The Cochrane Collaboration’s recent summary of the evidence on treating school-age children for soil-transmitted intestinal worms (or STH) is incomplete and misleading. While we do not comment on the evidence of the health and cognitive outcomes reviewed, we continue to find that the educational benefits alone justify mass school-based deworming. We strongly endorse the WHO and Copenhagen Consensus’s recommendation to mass treat children for STH."
A new systematic review by Cochrane Collaboration questions the effectiveness of deworming drugs in improving nutritional status, school performance, and cognitive test scores. Schistosomiasis Control Initiative responds to this new evidence.
The Norad Evaluation Department’s recently released Annual Report emphasises the need to ensure that more of the interventions funded or supported by Norwegian aid build in rigorous evaluations. The report calls for funds, time and human resources to be set aside for this purpose.
Lawrence Haddad discusses two new impact evaluation designs – Howard White and Daniel Phillips' 'small n' approach, and the methods prescribed by Stern et al.
Alexandra Fielden of Innovations for Poverty Action blogs on the benefits of the chlorine dispenser system in Kenya
"One inexpensive, safe, and effective option to improve water quality while protecting against re-contamination is to treat water with chlorine using IPA’s innovative Chlorine Dispenser System."
"Installed at a communal water source, users simply turn a valve on the dispenser to release a metered dose of chlorine into their jerricans, which they then fill with water as usual. The chlorine mixes with the water and kills the germs that cause many diarrheal diseases. The chlorine provides protection from recontamination for up to 48 hours, and achieves an average diarrhea reduction of 41 percent."
IDEAS fifth national review, “Implementing a Subnational Results-Oriented Management and Budgeting System: Lessons from Medellin, Colombia” by Rafael Gomez, Mauricio Olivera and Mario A. Velasco is now available online.
The first evaluation of early childhood development programmes in Africa shows that children who go to preschool are much more likely to be interested in mathematics and writing, recognize shapes, and show respect for other children than those who do not.
Carolyn Miles, CEO, Save the Children, blogs about the impact of preschool programmes in Mozambique.
Which studies should someone be paid to re-examine? asks David Roodman, research fellow at the Center for Global Development and author of Due Diligence.
"Repeating experiments can also resolve debates about studies some experts think were poorly done. They help the U.S. government, which spent $15 billion on international aid in 2011, and private donors, who spend billions more, decide how to spend their budgets," says Innovation News Daily.
Watch the presentations from the Monitoring and Evaluation Roundtable 2 on Theory of Change co-hosted by IDRC, CLEAR at J-PAL SA at IFMR.