Making participation count

Toilets get converted into temples, and schools are used as cattle sheds. Such stories are part of development lore. They illustrate the poor participation of ‘beneficiaries’ in well-intentioned development programmes. So, it is rather disturbing that millions of dollars are spent on development programmes with low participation, when we have evidence that participation matters for impact.

Does development need a nudge, or a big push?

Sending people persuasive reminder letters to pay their taxes recovered £210 million in revenue for the UK government. Getting the long-term unemployed to write about their experiences increased their chances of getting a job. Placing healthy food choices, like fruit instead of chocolate, in obvious locations improved children’s eating habits.

Moving impact evaluations beyond controlled circumstances

The constraints imposed by an intervention can make designing an evaluation quite challenging. If a large-scale programme is rolled out nationally, for instance, it becomes very hard to find a credible comparison group. Many evaluators shy away from programmes where a plausible counterfactual is hard to construct. Since the findings of such evaluations are also harder to publish, the incentives for evaluating these programmes are weak.

Placing economics on the science spectrum

Where does economics fit on the spectrum of sciences? ‘Hard scientists’ argue that the subjectivity of economics research differentiates it from biology, chemistry, or other disciplines that require strict laboratory experimentation. Meanwhile, many economists try to separate their field from the ‘social sciences’ by lumping sociology, psychology, and the like into a quasi-mathematical abyss reserved for ‘touchy-feely’ subjects, unable to match the rigor required of economics research.

Tips on selling randomised controlled trials

When we randomise, we obviously don’t do it across the whole population; we randomise only across the eligible population. Conducting an RCT therefore requires that we first define and identify who is eligible. This is a good thing: designing an RCT can improve targeting by forcing programmes to identify the eligible population properly.
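The two steps described above, first identifying the eligible population and only then randomising within it, can be sketched in a few lines of Python. The households, the income cut-off, and the variable names are all hypothetical, chosen purely for illustration:

```python
import random

# Hypothetical data: households with income below a threshold are
# eligible for the programme.
households = [
    {"id": 1, "income": 120},
    {"id": 2, "income": 450},
    {"id": 3, "income": 90},
    {"id": 4, "income": 300},
    {"id": 5, "income": 75},
]

ELIGIBILITY_THRESHOLD = 200  # assumed cut-off, for illustration only

# Step 1: define and identify the eligible population.
eligible = [h for h in households if h["income"] < ELIGIBILITY_THRESHOLD]

# Step 2: randomise treatment ONLY within the eligible population.
rng = random.Random(42)  # fixed seed so the assignment is reproducible
rng.shuffle(eligible)
half = (len(eligible) + 1) // 2
treatment, control = eligible[:half], eligible[half:]
```

The point of the sketch is the order of operations: eligibility screening comes first, and the random assignment never touches ineligible households, which is exactly why designing an RCT doubles as a targeting exercise.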

Collaborations key to improved impact evaluation designs

Do funding agencies distort impact evaluations? A session organised by BetterEvaluation on choosing and using evaluation methods, at the recent South Asian Conclave of Evaluators in Kathmandu, focused on this issue. Participants were quite candid about funding agencies dictating terms to researchers: “The terms of reference often define the log frame of evaluation (i.e. the approach to designing, executing and assessing projects) and grants are awarded on the basis of budgets that applicants submit.”

Using the causal chain to make sense of the numbers

At 3ie, we stress the need for a good theory of change to underpin evaluation designs. Many 3ie-supported study teams illustrate the theory of change through some sort of flow chart linking inputs to outcomes. They lay out the assumptions behind their little arrows to a varying extent. But what they almost invariably fail to do is to collect data along the causal chain. Or, in the rare cases where they do have indicators across the causal chain, they don’t present them as such.

Can we do small n impact evaluations?

3ie was set up to fill ‘the evaluation gap’, the lack of evidence about ‘what works in development’. Our founding document stated that 3ie will be issues-led, not methods-led, seeking the best available method to answer the evaluation question at hand. We have remained true to this vision in that we have already funded close to 100 studies in over 30 countries around the world.

Evidence to policy: bridging gaps and reducing divides

Evidence-based policy-making is important but not always straightforward in practice. The complex reality of policy-making processes means that the availability of high-quality research is a necessary, but not sufficient, ingredient for evidence-informed policy.