At 3ie, we believe that decisions informed by evidence can improve programmes and policies and impact people’s lives in low- and middle-income countries. But our work on evaluations, systematic reviews, evidence gap maps and other types of studies only has an impact when it reaches the hands of decision-makers, because evidence can only have an impact when it is used. It might seem obvious what we mean: positive findings from an evaluation can lead a programme to be continued or expanded, while negative findings can lead a programme to be shut down.
But that is only part of the story. Policymaking is a complex, messy process, and many of the ways that evaluation evidence can shape policy choices do not fit into such a simple narrative. In fact, most of the decision-making that we have been able to link to 3ie-supported studies was informed by evidence in a more nuanced way, rather than by a simple thumbs-up or thumbs-down. So, when we talk about evidence uptake and use, what do we mean?
When do we consider evidence to be impactful?
Given the variety of ways in which evidence is taken up and used, we found that ‘you will know evidence uptake and use when you see it’ is not a good enough standard. We are in the business of evidence, not anecdotes. So, we developed a detailed typology of the ways decision-makers take up and use research evidence, particularly findings and lessons from impact evaluations.
We created our typology based on our experiences with studies we have supported. We used researchers’ reports of engagement with implementing agencies or other decision-makers to look for instances where those decision-makers appeared to have used evidence. Since we wanted to be careful and consistent, we also verified use cases using contribution tracing. (We will talk about that methodology in a forthcoming blog post, or you can read about it here.) Our typology of evidence uptake and use reflects what we have learnt about how evaluation and systematic review evidence gets used in the "real world."
We have identified seven types of evidence uptake and use:
Change policies or programmes: When decision-makers use findings from an evaluation or systematic review to alter their programming. Examples include changes to targeting, cash transfer amounts, training modules or other factors that were inhibiting the policy or programme’s ability to achieve its intended impacts. The first blog in this series described just this type of evidence use by the Ugandan government.
Close a programme: When evaluation or review findings inform decisions to stop implementation or planned scale-up of a programme or its components.
Improve the culture of evidence use: When decision-makers demonstrate positive attitudinal changes towards evidence use or towards information the research team provides. Examples include strengthening monitoring and evaluation systems and integrating them more firmly into programming, increasing understanding of evidence and openness to using it, or commissioning another evaluation or review.
Inform discussions of policies and programmes: When subsequent phases of the evaluated programme or policy draw from the findings of the evaluation or review, or the study team participates in informing the design discussions for a next phase.
Inform global guidelines and policy discussions: When global policy discussions or actions mention or refer to findings from an evaluation or review. Examples include governments, multilateral donors, or others mentioning findings in policy documents or debates.
Inform the design of other programmes: When findings from an evaluation or review inform the design of a programme other than the one that was evaluated.
Scale up a programme: When programmes found to be effective in the evaluation or review are scaled up.
We value each type equally
These seven types recognise that reviews, evaluations and other types of studies contribute in several ways apart from directly causing the expansion or closure of a programme. Some studies end up having more than one impact, sometimes of more than one type. We value each type of use equally. Given the complex nature of decision-making, particularly in government, any verifiable contribution of evidence to culture, discussion, design or scale is a remarkable feat.
You can find interesting examples of all seven types of evidence impact in our new evidence impact summaries portal. As we monitor the use of evaluation and review evidence and track the contributions of different types of studies, such as formative evaluations and evidence gap maps, we keep updating our typology: the seven types of uptake and use and their definitions evolve to reflect what we see in the real world. Have you also wondered about what counts as evidence use and evidence impact? Are we missing anything in our typology? Email us your thoughts at influence@3ieimpact.org.