How to peer review replication research

“The 3ie replication process differs in important ways from the standard research community-led peer-review process in academic journals. We have been explicitly instructed by 3ie staff not to discuss our experiences with the replication process at any length in this note, including our views on the weaknesses of their current system and the review standards they employ.”

Myths about microcredit and meta-analysis

It is widely claimed that microcredit lifts people out of poverty and empowers women. But evidence to support such claims is often anecdotal. A typical micro-finance organisation website paints a picture of very positive impact through stories: “Small loans enable them (women) to transform their lives, their children’s futures and their communities.”

Impact evaluations of agriculture & rural development programmes | Markus Olapade

Agricultural productivity in Africa remains low, even though several technologies for increasing yields are now available.

Evaluating the impact of an education programme | Radhika Menon

Many education programmes have helped increase children’s enrolment and attendance in school, but there is less evidence on what works to improve learning.

Demand creation for voluntary medical male circumcision: how can we influence emotional choices?

This year, in anticipation of World AIDS Day, UNAIDS is focusing more attention on reducing new infections rather than expanding treatment. As the Center for Global Development’s Mead Over explains in his blog post, reducing new infections is crucial for easing the strain on government treatment budgets and for eventually reaching “the AIDS transition”, the point at which the total number of people living with HIV begins to decline.

How big is big? The need for sector knowledge in judging effect sizes and performing power calculations

A recent Innovations for Poverty Action (IPA) newsletter reported findings from a new study in Ghana on using SMS reminders to encourage people to complete their course of antimalarial pills. The researchers concluded that the intervention worked, but that more research is needed to tailor the messages to make them even more effective.

Calculating success: the role of policymakers in setting the minimum detectable effect

When you think about how sample sizes are decided for an impact evaluation, the mental image is that of a lonely researcher labouring away at a computer, running calculations in Stata or Excel. This scenario is not too far removed from reality. But this reality is problematic. Researchers should actually be talking to government officials or NGO implementers while making these calculations.
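One way to see why that conversation matters: given the sample size a budget allows, the minimum detectable effect follows mechanically from the power calculation, but judging whether an effect of that size is worth detecting is a policy question. A minimal sketch of that calculation in Python using statsmodels (the sample size, significance level, and power below are illustrative assumptions, not figures from any study mentioned here):

```python
# Minimal sketch: given an affordable sample size, what is the smallest
# effect the evaluation could reliably detect? (Illustrative values only.)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical assumptions: 400 units per arm, 5% significance, 80% power,
# equal-sized treatment and control groups.
mde = analysis.solve_power(nobs1=400, alpha=0.05, power=0.8, ratio=1.0)

print(f"Minimum detectable effect: {mde:.2f} standard deviations")
```

Whether an effect of roughly that size would justify the programme is exactly the judgement that implementers and government officials, rather than the researcher alone, are placed to make.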

“Well, that didn’t work. Let’s do it again.”

Suppose you toss a coin and it comes up heads. Do you conclude that it is a double-headed coin? No, you don’t. Suppose it comes up heads twice, and then a third time. Do you now conclude the coin is double-headed? Again, no, you don’t. There is a one in eight chance (12.5 per cent) that a fair coin will come up heads three times in a row. So, though it is not that likely, it can and does happen.
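The one-in-eight figure is just 0.5 × 0.5 × 0.5 = 0.125, and a quick simulation makes the same point. A minimal sketch in Python (the number of simulated sequences is an arbitrary choice):

```python
# Minimal sketch: how often does a fair coin land heads three times in a row?
import random

analytic = 0.5 ** 3  # one in eight, i.e. 12.5 per cent

trials = 100_000  # arbitrary number of simulated three-toss sequences
three_heads = sum(
    all(random.random() < 0.5 for _ in range(3)) for _ in range(trials)
)

print(f"Analytic probability: {analytic:.3f}")
print(f"Simulated frequency:  {three_heads / trials:.3f}")
```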

Requiring fuel gauges: A pitch for justifying impact evaluation sample size assumptions

We expect researchers to defend their assumptions when they write papers or present at seminars. Well, we expect them to defend most of their assumptions. However, the assumptions behind their sample size, determined by their power calculations, are rarely discussed. Sample sizes and power calculations matter: power calculations determine the minimum sample size a study needs, which then has to be reconciled with the available budget.
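To make those assumptions concrete, here is a minimal sketch of the calculation they feed into, in Python using statsmodels (the effect size, significance level, and power below are illustrative assumptions that a researcher would need to defend, not values from any particular study):

```python
# Minimal sketch: required sample size per arm for a two-sample comparison,
# given assumptions that deserve the same scrutiny as any others.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical assumptions: effect of 0.2 standard deviations, 5% significance,
# 80% power, equal-sized treatment and control groups.
n_per_arm = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8, ratio=1.0)

print(f"Required sample size per arm: {n_per_arm:.0f}")
```

Because the required sample grows with the inverse square of the assumed effect size, halving that assumption roughly quadruples the sample the budget must cover, which is why these choices are worth defending in public.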
