Sounds good... but what will it cost? Making the case for rigorous costing in impact evaluation research

Imagine two government programmes—a job training programme and a job matching programme—that perform equally well in boosting employment outcomes. Now think about which is more cost-effective. If your answer is ‘no idea’, you’re not alone! Most of the time, we don’t have the cost evidence available to discern this important difference.

3ie’s Agricultural Risk Insurance Evidence Programme: a structured approach to impact evaluations

As climate change takes hold, agricultural productivity has suffered considerably. This has put at risk the livelihoods of the majority of the world’s poor, who depend on agriculture and related activities. Various risk-mitigation solutions, such as improved seeds and irrigation, have shown promising results, but the role of transferring risk through agricultural insurance demands deeper exploration.

Too difficult, too disruptive and too slow? Innovative approaches to common challenges in conducting humanitarian impact evaluations

Over 200 million people across the world are in urgent need of humanitarian assistance today. In 2017, the UN-coordinated appeals reported a shortfall of 41 per cent, despite receiving a record amount of funding. As the demands on these limited funds increase, so too does the need for high-quality evidence on the most effective ways to improve humanitarian programming.

Learning power lessons: verifying the viability of impact evaluations

Learning from one’s past mistakes is a sign of maturity. By that metric, 3ie is growing up. We now require pilot research before funding most full impact evaluation studies. Our pilot studies requirement was developed to address a number of issues, including assessing whether there is sufficient intervention uptake, identifying or verifying whether the expected or detectable effect size is reasonable, and determining the similarity of participants within clusters.

If you want your study included in a systematic review, this is what you should report

Impact evaluation evidence continues to accumulate, and policymakers need to understand the range of evidence, not just individual studies. Across all sectors of international development, systematic reviews and meta-analysis (the statistical analysis used in many systematic reviews) are increasingly used to synthesise the evidence on the effects of programmes.

What did I learn about the demand for impact evaluations at the What Works Global Summit?

At the recently concluded What Works Global Summit (WWGS), which 3ie co-sponsored, a significant number of the sessions featured presentations on new impact evaluations and systematic reviews. Because it brought together such a diverse set of stakeholders, WWGS was a perfect opportunity to learn lessons about the demand for and supply of high-quality evidence for decision-making. There were donors, knowledge intermediaries, policymakers, programme managers, researchers and service providers, from developed and developing countries alike.

Let’s bring back theory to theory of change

Anyone who has ever applied for a grant from 3ie knows that we care about theory of change. Many others in development care about theory of change as well. Craig Valters of the Overseas Development Institute explains that development professionals are using the term theory of change in three ways: for discourse, as a tool, and as an approach.

The pitfalls of going from pilot to scale, or why ecological validity matters

The hip word in development research these days is scale. For many, the goal of experimenting has become to quickly learn what works and then scale up those things that do. It is so popular to talk about scaling up these days that some have started using upscale as a verb, which might seem a bit awkward to those who live in upscale neighbourhoods or own upscale boutiques.

Private outcomes and the public interest: a call for more impact evaluations?

The 2015 Year of Evaluation has now come and gone. There were many noteworthy events (more than 80 conferences, workshops, seminars and the like, according to some accounts), most of which focused on evaluation needs in developing countries. Participants included some of the best-known members of the evaluation community from the public and non-governmental sectors. However, the interesting question these events raised was: where was the private sector?

When is an error not an error?

In their now-famous replication study of Reinhart and Rogoff’s (R&R) seminal article on public debt and economic growth, Thomas Herndon, Michael Ash, and Robert Pollin (HAP) use the word “error” 45 times.