If you want your study included in a systematic review, this is what you should report

Impact evaluation evidence continues to accumulate, and policymakers need to understand the full range of evidence, not just individual studies. Across all sectors of international development, systematic reviews and meta-analysis (the statistical technique many systematic reviews use to pool results across studies) are increasingly used to synthesise the evidence on the effects of programmes.
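To see what the meta-analysis step actually involves, here is a minimal sketch of fixed-effect, inverse-variance pooling, the simplest way a review combines effect estimates from several studies. The effect sizes and standard errors below are invented for illustration and do not come from any actual review.

```python
import numpy as np

# Hypothetical effect estimates (e.g., standardised mean differences)
# and their standard errors from three illustrative studies.
effects = np.array([0.20, 0.35, 0.10])
ses = np.array([0.08, 0.12, 0.10])

# Fixed-effect meta-analysis: weight each study by the inverse of its
# variance, so more precise studies contribute more to the pooled estimate.
weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, "
      f"{pooled + 1.96 * pooled_se:.3f}]")
```

Real reviews typically also report heterogeneity statistics and often use random-effects models, but the weighting logic is the same.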

What did I learn about the demand for impact evaluations at the What Works Global Summit?

At the recently concluded What Works Global Summit (WWGS), which 3ie co-sponsored, many sessions featured presentations on new impact evaluations and systematic reviews. WWGS was a perfect opportunity to learn lessons about the demand for and supply of high-quality evidence for decision-making because it brought together a diverse set of stakeholders: donors, knowledge intermediaries, policymakers, programme managers, researchers and service providers, from developed and developing countries alike.

Is impact evaluation still on the rise?

Since 2014, 3ie’s impact evaluation repository (IER) has been a comprehensive database of published impact evaluations of development programmes in low- and middle-income countries. We call the database comprehensive because we build it through a systematic search and screening process: we search more than 35 indexes and websites and screen for all evaluations of development programmes, experimental or quasi-experimental, that use a counterfactual method to estimate net impact.
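In its simplest form, a counterfactual method estimates net impact by comparing outcomes for people who received a programme with outcomes for a comparison group that stands in for what would have happened without it. Here is a minimal sketch on simulated data; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated outcomes for a hypothetical programme. The comparison group
# serves as the counterfactual: an estimate of what participants'
# outcomes would have been without the programme.
n = 1000
treated = rng.normal(loc=12.0, scale=3.0, size=n)  # outcomes with the programme
control = rng.normal(loc=10.0, scale=3.0, size=n)  # counterfactual outcomes

# Net impact is the difference in mean outcomes between the two groups.
net_impact = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)

print(f"Estimated net impact: {net_impact:.2f} (SE {se:.2f})")
```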

The pitfalls of going from pilot to scale, or why ecological validity matters

The hip word in development research these days is scale. For many, the goal of experimenting has become to quickly learn what works and then scale up the things that do. Talk of scaling up has become so popular that some have started using upscale as a verb, which might seem a bit awkward to those who live in upscale neighbourhoods or own upscale boutiques.

Seven impact evaluations on demand creation for VMMC: how a focused thematic window can meet multiple evidence needs

On World AIDS Day 2015, we are marking the culmination of 3ie’s third thematic window, which funded seven pilot programmes and rapid impact evaluations looking for ways to increase demand for voluntary medical male circumcision (VMMC). In late 2013, we awarded grants to project teams of implementers and researchers to pilot innovative programmes for increasing VMMC demand and to conduct rapid impact evaluations of those programmes.

Twelve tips for selling randomised controlled trials to reluctant policymakers and programme managers

I recently wrote a blog on ten things that can go wrong with randomised controlled trials (RCTs). As a sequel, which may seem a bit of a non sequitur, here are twelve tips for selling RCTs to reluctant policymakers and programme managers.

Ten things that can go wrong with randomised controlled trials

I am often in meetings with staff of implementing agencies in which I say things like ‘a randomised design will allow you to draw the strongest conclusions about causality’. So I am no ‘unrandomista’.
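Why randomisation supports the strongest causal conclusions can be shown with a quick simulation: a coin-flip assignment is independent of everything about the units, so it balances confounders, observed and unobserved, across arms in expectation, and a simple difference in means recovers the true effect. The data and effect size below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A baseline characteristic that also drives the outcome: a confounder
# that would bias a naive comparison if assignment depended on it.
n = 10_000
baseline = rng.normal(size=n)

# Random assignment: a coin flip per unit, independent of baseline.
treat = rng.integers(0, 2, size=n).astype(bool)

# The outcome depends on the confounder and on a true treatment effect.
true_effect = 1.5
outcome = 2.0 * baseline + true_effect * treat + rng.normal(size=n)

# Randomisation balances the confounder across arms in expectation...
print(f"Baseline difference (treat - control): "
      f"{baseline[treat].mean() - baseline[~treat].mean():+.3f}")
# ...so the simple difference in mean outcomes estimates the causal effect.
print(f"Estimated effect: "
      f"{outcome[treat].mean() - outcome[~treat].mean():.3f} "
      f"(true effect: {true_effect})")
```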

When will researchers ever learn?

I was recently sent a link to this 1985 World Health Organization (WHO) paper, which examines the case for using experimental and quasi-experimental designs to evaluate water supply and sanitation (WSS) interventions in developing countries.

The paper came out nearly 30 years ago, yet the problems it identifies in impact evaluation study designs are still encountered today. What are these problems?

Can we learn more from clinical trials than simply methods?

What if scientists tested their drug ideas directly on humans without first demonstrating their potential efficacy in the lab? The question sounds hypothetical because we all know that untested drugs can be dangerous. By the same logic, should we not exercise similar caution with randomised controlled trials (RCTs) of social and economic development interventions involving human subjects?

Tips on selling randomised controlled trials

Development programme staff often throw up their hands in horror when they are told to randomise assignment of their intervention. “It is not possible, it is not ethical, it will make implementation of the programme impossible”, they exclaim.

In a new paper in the Journal of Development Effectiveness, I outline how different randomised controlled trial (RCT) designs overcome all of these objections. Randomisation need not be a big deal.