Nina Cromeyer Dieke shares tips and lessons the development community can learn from mainstream media. The 3ie policy impact toolkit finds a mention in the story.
Berk Ozler blogs on the pros and cons of using surveys to measure impact.
"...So, evaluating a large government program using an unrelated routine government survey may be fine (although I suspect that they too will have biases depending on what the respondents think the survey is for, how large, important, and ‘in the news’ the intervention is, etc.), but evaluating your own experiment that aims to change some behavior by asking study participants whether they have changed that behavior is unacceptable."
How can systematic reviews contribute evidence for policy? Blogs on this page take up the debate on conducting and using systematic reviews.
Tracey Koehlmoos, adjunct professor at George Mason University, Washington DC, and adjunct scientist at ICDDR,B, blogs on sessions at the Dhaka Colloquium on Systematic Reviews.
"...Perhaps the most controversial session that I have attended so far was provocatively named “Rapid reviews: opportunity or oxymoron?” 3ie deputy director, Phil Davies presented on “rapid evidence assessment” and their place in the pantheon of evidence synthesis efforts aimed at informing policy making. Serious questions remain about rapid reviews being biased compared to systematic reviews—and how the process would even allow the developers of these reviews to recognize any biases. However, Davies pointed out that “all evidence is probabilistic.” ..."
"This discussion paper aims to support appropriate and effective use of impact evaluations in AusAID by providing AusAID staff with information on impact evaluation. It provides staff who commission impact evaluations with a definition, guidance and minimum standards."
3ie recently participated in an IFAD learning event on impact evaluations for environmental and climate change interventions.
William Savedoff blogs on the challenge that impact evaluation poses for organisations. "...Other times, the concerns reflect an unwillingness to clearly state their goals, be explicit about their theories of change, or put their beliefs about what works to an objective test. Yet, this is exactly what is at stake with evaluation: are you willing to be proven wrong?"
"Potential biases can arise when collecting qualitative data, in deciding which questions are asked, in what order, how they are asked and how the replies of the respondents are recorded.
"There can also be biases in how the responses are interpreted and analyzed, and finally which results are chosen for presentation. Of course quantitative data and analysis is also prone to bias, such as sampling bias and selection bias. But methodologies have been developed to explicitly deal with these biases. Indeed evaluation designs are judged on precisely how well they deal with these biases."