External validity: policy demand is there but research needs to boost supply

A randomised controlled trial (RCT) in a northern district of Uganda finds that young adults who receive cash transfers use them to buy more food for their families, football shirts, and airtime for their mobile phones, compared to those in control areas. Would the pattern be the same if young adults in central Uganda were given cash transfers? Would the findings replicate if the cash transfers were given to young women in Senegal? This stylised example points to the crucial question of the generalisability of programme impacts to other contexts – commonly referred to as external validity.

Sounds good... but what will it cost? Making the case for rigorous costing in impact evaluation research

Imagine two government programmes—a job training programme and a job matching programme—that perform equally well in terms of boosting employment outcomes. Now think about which is more cost-effective. If your answer is ‘no idea’, you’re not alone! Most of the time, we don’t have the cost evidence available to discern this important difference.

The tricky business of measuring latrine use: lessons from 3ie’s evidence programme

There has been a fair bit of hoopla around India being declared open-defecation free last month. In the media debates, the measurement of India’s open-defecation-free status has come under a good deal of scrutiny. Leaving aside the politics of the debate, there remains an important question: how can latrine use be measured rigorously?

Is it possible to combine capacity development with a rapid synthesis response to an evidence request?

Two of the most important and long-standing challenges for evidence synthesis in international development are: 1) the need to provide timely (i.e. rapid) responses to demands for evidence to inform decisions; and 2) developing capacity to do high-quality, policy-relevant syntheses, especially in L&MICs. At present, these challenges are typically addressed in isolation.

How can a rethink of lessons from field experiments inform future research in transparency, participation and accountability?

The conference also made us consider the value of using our 3ie blog to extend the audience for the topic and start a conversation around a selected presentation or remarks. We have invited one of the panellists from the ‘Reality Check’ session, Jonathan Fox, to share a version of his remarks as a blog.

Be careful what you wish for: cautionary tales on using single studies to inform policymaking

For a development evaluator, the holy grail is to have evidence from one’s study be taken up and used in policy or programming decisions that improve people’s lives. It’s not easy. Decisions are based on many factors. The availability of evidence is just one of them. And of course, even when evidence is taken up, it does not mean that it will lead to the right decision.

A shot in the arm: why engaging with a range of stakeholders matters

When 3ie set up the Innovations in Increasing Immunisation Evidence Programme, we realised early on that we had to walk the talk. The evidence programme funded formative and impact evaluations of community-based approaches for boosting coverage in countries with low or stagnating immunisation rates. But community members and local decision makers had not figured prominently in our approach to how our grantees engage with stakeholders.

Putting government in the driver’s seat to generate and use impact evaluations in the Philippines

Impact evaluations are sometimes criticised for being supply-driven. It is hard to know for sure: there is no counterfactual to what would have happened without the impact evaluation. Regardless of whether this criticism is true, one way to make an impact evaluation more demand-driven is to put the government in the driver’s seat for increasing the demand for evaluation.

From capacity building to developing capacities: dreaming big about improving evidence use

Knowledge and information sharing are two of the main reasons people join the Africa Evidence Network (AEN). More specifically, demand for more capacity development for evidence production and use comes up again and again in our webinars, in blogs from our members and in what they share with us via the AEN website and newsletter. We see this demand in the high turnout for workshops at our biennial AEN conference. Yet, people we talk with are frustrated with the types of capacity development on offer in our region.

Field notes on latrine use promotion in Odisha, India

Odisha has one of the lowest rates of sanitation coverage in India, with over 50 per cent of rural households in need of latrines. 3ie’s Neeta Goel and Radhika Menon joined programme implementers from Emory University and Gram Vikas for a learning exchange to understand what works and what doesn’t in sanitation behaviour change programmes. Representatives from all three organisations authored this blog, capturing the key lessons learned that can inform sanitation policy.