The importance of evidence for decision-making is widely acknowledged, and an increasing number of organizations are stating this officially. This type of organizational signaling can be an important lever for improving the culture of evidence use. At the same time, acknowledging the importance of evidence is just the beginning; it does not entail any specific decision, action, or behavior. Fundamentally, it is a commitment to structuring and justifying decisions based on a sound logical/theoretical framework supported by empirical facts, as part of a multi-criteria decision-making process (explicit or implicit).
Individuals and organizations that aim to integrate evidence into decision-making must be able to distinguish actual relevant evidence from other data points, pieces of information, or anecdotes. Here, we lay out the simple framework we use to answer the question: "What is evidence?"
Evidence versus information
Just as statistics can be manipulated to support nearly any argument, it is critical to think about how we define "evidence" when we consider its implications in practice. Though sometimes used interchangeably with "data" or "information", evidence, by definition, has a higher bar to clear than simply anything we might want to look at or think about to inform a decision. In both the dictionary definition and 3ie's usage, there is a more specific interpretation – i.e., evidence is information that substantiates (or disproves) a specific proposition. All evidence is information, but not all information is evidence.
A closer look at what we mean by "evidence"
The distinction between information and evidence highlights two critical attributes of the latter:
- The existence of a carefully constructed proposition - or series of propositions (e.g., there is a need for A; intervention B works; program C is politically feasible, etc.). Information does not become evidence until it is attached to a specific proposition.
- The appropriateness of the evidence for addressing that proposition (both in terms of the nature and sufficiency of the type of evidence and in terms of how it was collected and analyzed). Information cannot be evidence if it does not have a direct and credible bearing upon a proposition.
This definitional distinction may seem merely rhetorical, but it has important and specific implications for practice. Many people and organizations are vague on one or both counts: they may express interest in evidence without clearly stating the proposition, and/or they emphasize the importance of "evidence" without considering its suitability for the specific question or need at hand. Either lapse can lead to suboptimal outcomes, such as gathering information that is not actually useful or selectively presenting specific bits of information to justify or bolster pre-existing views.
There is also a third, often invisible but critically important, attribute of evidence:
- The context in which the evidence is being generated and/or used. This includes institutional or cultural differences in epistemology, axiology, priorities on voice/representation, research/evaluation objectives, norms, etc. Context is often embedded in the subtext of decision-making processes, public statements, and documents, and is rarely stated explicitly. This facet can also be a point of debate and contention, especially when there are disagreements about the evidence to be generated, how it is generated, its interpretation, etc.
Constructing 'evidence + context' frameworks
Like the legs of the proverbial three-legged stool, all three of these attributes of evidence need to be recognized, clarified, and addressed in decision-making processes to ensure:
- A clear understanding of the evidence need(s) and decision(s) to be made, reflected as a series of substantiable propositions (e.g., Y is a binding constraint; Y is recognized by key stakeholders as a problem; X has worked to address the problem in similar contexts; X is socially acceptable, politically viable, practically feasible, etc.);
- The right match of evidence to question(s)/need(s);
- A contextually responsive approach to the first two points throughout, to help ensure that the evidence is useful and used by key stakeholders (a group that often extends beyond the commissioners of the evidence).
These three attributes constitute the basic scaffolding of an 'evidence + context' framework. Thinking through and constructing this framework can be led internally within an evidence-generating organization or facilitated by an external evidence partner. The latter can be beneficial in bringing a neutral "outsider's" perspective, engaging multiple internal stakeholder groups, and offering suggestions that internal stakeholders may, for various reasons, find difficult to raise themselves.
What do you think? We invite your thoughts, comments, suggestions, and feedback. For example, what does a 'commitment to evidence' mean to you in practice? Do you have your own version of an 'evidence + context' framework? Whether you're a generator, commissioner or user of evidence, what approaches do you use to integrate context into your conversations about evidence?
Implications for evidence champions
What does this mean for all of us? Making a formal organizational commitment to using evidence for decision-making is not – and should not be – about committing to a particular set of methods in all cases. The range of specific questions about program funding, design, implementation, etc., is too broad, and the use cases of some individual methods are too constrained, for that to be sensible. Rather, it is a commitment to being deliberate, systematic, rigorous, and transparent in how we gather, interpret, and apply information to inform specific decisions.
At 3ie, we use a flexible, conversational approach to help our partners sketch and, usually iteratively, build out 'evidence + context' frameworks. We start with the (relatively) easy stuff in the 'evidence' portion of the conversation. This includes clarifying or re-framing the request or inquiry in terms of specific decision-making need(s). We then break these down into a series of substantiable propositions – structured in the form of research questions – that may be particularly useful for informing the decision at hand. For each research question, we note the primary users of the evidence, their intended uses, recommended methodological approach(es), and relevant knowledge translation needs, including the format of the output(s), timing requirements to inform decision-making processes, etc.
Though much commissioned research focuses almost exclusively on the 'evidence' portion of the conversation, we often find it useful to dig below the surface to learn about the broader contextual environment in which decisions are made. In this portion of the conversation, we probe to better understand overarching or underlying organizational objectives, incentives, training and/or resource needs and constraints, relevant policies, processes, systems, institutional norms and/or culture, and key points of variation between organizational levels and units. Such intra-organizational dynamics are infrequently mentioned in research that is made publicly available and are typically not discussed at conferences, webinars, and other public fora, yet they can make all the difference in whether a study is used or left on the figurative shelf. In practice, delving into these contextual aspects of evidence use requires an interdisciplinary approach spanning research, evaluation, and organizational development (at a minimum).
Ideally, every researcher or evaluator working directly with a research commissioner or decision-maker should be adept in leading and facilitating both the 'evidence' and 'context' conversations. Some do this naturally and intuitively, while others benefit from some guidance. At 3ie, we work with both groups to develop structured guidance. In an upcoming blog, we will introduce the 3ie 'Evidence + Context' Conversation Guide and describe how it works in practice.