Latest blogs

Making impact evidence matter for people’s welfare

The plenary session at the Making Impact Evaluation Matter conference in Manila made clear that impact evidence – in the form of single evaluations and syntheses of rigorous evidence – does indeed matter. Two key themes emerged: (1) that strong evidence about the causal effects of programmes and policies matters to making decisions that improve the welfare of people living in low- and middle-income countries, and (2) that, to make impact evaluation matter more, we need to continue building capacity to generate, understand and use such evidence in those same countries.

Early engagement improves REDD+ and early warning system design and proposals

At 3ie, our mission is to fund the generation and sharing of sound, useful evidence on the impacts of development programmes and policies. Actually, we’re more curious (or nosy) than that. For impact evaluation that matters, we need to know which bits of a programme worked, which didn’t, why and through which mechanisms, in which contexts and at what cost.

Twelve tips for selling randomised controlled trials to reluctant policymakers and programme managers

I recently wrote a blog on ten things that can go wrong with randomised controlled trials (RCTs). As a sequel, which may seem something of a non sequitur, here are twelve tips for selling RCTs to reluctant policymakers and programme managers.

Gearing up for Making Impact Evaluation Matter

Over the last week, 3ie staff in Delhi, London and Washington were busy coordinating conference logistics, finalising the conference programme, figuring out how to balance 3ie publications and clothing in their suitcases, and putting the last touches to their presentations. This is the usual preparation, but for a conference that is going to be different. Why is this conference different? The participant mix – more than 500 people – is balanced among policymakers, programme managers and implementers, and researchers.

Ten things that can go wrong with randomised controlled trials

From the vantage point of 3ie having funded over 150 studies in the last few years, there are some pitfalls to watch for in order to design and implement randomised controlled trials (RCTs) that lead to better policies and better lives. If we don’t watch out for these, we will just end up wasting the time and money of funders, researchers and the intended beneficiaries.

How fruity should you be?

A couple of months back, the BBC reported a new study that questioned the existing advice to eat five portions of fresh fruit and vegetables a day. Five was not enough, according to the study authors; it should be seven. I really do try to eat five portions each day. Where was I going to find the time and space for these extra two portions? But this looked like a sound study published in a respected academic journal, with data from over 65,000 people.

Unexpected and disappearing outcomes: Why relying on proxy outcomes is often not enough

In the early years of the Second World War, British intelligence undertook one of its first exercises in strategic deception. To divert the attention of occupying Italian forces from a planned attack on Eritrea by troops based in Sudan, the British engaged in various activities to make the Italians think an attack was going to be launched on British Somaliland from Egypt. The British were successful in making the Italians believe that an attack was coming.

If the answer isn’t 42, how do we find it?

Those of you around my age may be familiar with Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, in which the answer to the question ‘What is the meaning of life, the universe and everything?’ turns out to be the number 42. We wish that systematic reviews could be like that: throw all the evidence into a big number cruncher and out pops a single answer.

The efficacy – effectiveness continuum and impact evaluation

This week we proudly launch the Impact Evaluation Repository, a comprehensive index of around 2,400 impact evaluations in international development that have met our explicit inclusion criteria. In creating these criteria we set out to establish an objective, binary (yes or no) measure of whether a study is an impact evaluation, as defined by 3ie, or not.

Is independence always a good thing?

Evaluation departments of development agencies have traditionally guarded their independence jealously. They are separate from the operational side of agencies, sometimes entirely distinct, as in the case of the UK’s Independent Commission for Aid Impact or the recently disbanded Swedish Agency for Development Evaluation. Staff from the evaluation department, or at least its head, are often not permitted to stay on in any other department of the agency once their term ends.

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
