Economists have experiments figured out. What’s next?

Berk Ozler on the pros and cons of using surveys to measure impact.

"...So, evaluating a large government program using an unrelated routine government survey may be fine (although I suspect that they too will have biases depending on what the respondents think the survey is for, how large, important, and ‘in the news’ the intervention is, etc.), but evaluating your own experiment that aims to change some behavior by asking study participants whether they have changed that behavior is unacceptable."


Published on: 14 Jan 2013


Systematic reviews in international development

Tracey Koehlmoos, adjunct professor at George Mason University, Washington DC, and adjunct scientist at ICDDR,B, blogs on sessions at the Dhaka Colloquium on Systematic Reviews.

"...Perhaps the most controversial session that I have attended so far was provocatively named “Rapid reviews: opportunity or oxymoron?” 3ie deputy director, Phil Davies presented on “rapid evidence assessment” and their place in the pantheon of evidence synthesis efforts aimed at informing policy making. Serious questions remain about rapid reviews being biased compared to systematic reviews—and how the process would even allow the developers of these reviews to recognize any biases. However, Davies pointed out that “all evidence is probabilistic.” ..."


Published on: 13 Dec 2012


Impact Evaluations: A discussion paper for AusAID practitioners

"This discussion paper aims to support appropriate and effective use of impact evaluations in AusAID by providing AusAID staff with information on impact evaluation. It provides staff who commission impact evaluations with a definition, guidance and minimum standards."


Published on: 07 Oct 2012


Impact evaluations everywhere: What's a small NGO to do?

William Savedoff blogs on the challenge that impact evaluation poses for organisations. "...Other times, the concerns reflect an unwillingness to clearly state their goals, be explicit about their theories of change, or put their beliefs about what works to an objective test. Yet, this is exactly what is at stake with evaluation: are you willing to be proven wrong?"


Published on: 17 Sep 2012


Dr Howard White's guest post on the World Bank impact blog

"Potential biases can arise when collecting qualitative data, in deciding which questions are asked, in what order, how they are asked and how the replies of the respondents are recorded. 

"There can also be biases in how the responses are interpreted and analyzed, and finally which results are chosen for presentation. Of course quantitative data and analysis is also prone to bias, such as sampling bias and selection bias. But methodologies have been developed to explicitly deal with these biases. Indeed evaluation designs are judged on precisely how well they deal with these biases."


Published on: 23 Jul 2012

