3ie’s recently published working paper ‘Incorporating process evaluation into impact evaluation – What, why and how’ by Senior Research Fellows Vibecke Dixon and Michael Bamberger lays down guidelines that provide impact evaluators with tools and ideas for exploring and adding relevant elements of process evaluation to experimental and quasi-experimental impact evaluation designs. This blog is the second of a two-part series on the design of process evaluations.
In Part I, we discussed why a process evaluation is important and how it strengthens an impact evaluation. In Part II, we describe how to design and use a process evaluation.
The design of a process evaluation requires flexibility: it must adapt to time and budget constraints, make creative use of whatever information is available, and respond to changes in project implementation and in the environment in which the project operates. For example, a general election may bring in a new government with different priorities for the project, or changing migration patterns or political unrest may affect the design or implementation of the project or the attitudes of the target population. The six-step approach we describe in this blog (see Figure 2) should therefore be treated as a design framework to be adapted to each project context.
Step 1: Define the impact evaluation scenario. There are three main impact evaluation scenarios: retrospective impact evaluations conducted towards the end of the project; pre-test–post-test comparison group designs, where baseline data are compared with end-of-project data; and formative or real-time evaluations that continue throughout all, or a significant part, of project implementation [p39, PE guidelines]. In the first two scenarios, a counterfactual design is used in which the project (treatment) group is compared with a matched comparison group. Where possible, a randomized controlled trial design is used; in the many cases where random assignment is not possible, the two groups are matched using statistical procedures such as propensity score matching.
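Where random assignment is not possible, the matching step might look like the following minimal sketch in Python (the DataFrame, the `treated` column, and the covariate names are hypothetical; real applications also need balance checks and careful covariate selection):

```python
# A minimal sketch of nearest-neighbour propensity score matching,
# assuming a pandas DataFrame with a binary `treated` column and a
# list of covariate columns (all names hypothetical).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_comparison_group(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    # Estimate each unit's probability of treatment (the propensity score).
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    untreated = df[df["treated"] == 0]

    # Match each treated unit to its nearest untreated neighbour on the score.
    nn = NearestNeighbors(n_neighbors=1).fit(untreated[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    return untreated.iloc[idx.ravel()]
```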
Step 2: Define the dimensions of implementation to be evaluated. According to the 3ie process evaluation guidelines, most process evaluations focus on one or more of the following dimensions:
- Adequacy of the implementation design [p7, section 2.1, PE guidelines]: This is assessed both in terms of adherence to the project design protocol (implementation fidelity) and in terms of how well the design will contribute to the achievement of broad development goals (such as the SDGs).
- How effectively the project was implemented [p19, section 2.2, PE guidelines]: This can address (a) how adequately the actual implementation process complies with the implementation protocol (sometimes called implementation fidelity), and (b) how adequately implementation contributes to the project’s development goals (some of which may not have been included in the original project design).
- How organizational structures and processes affect impacts [p26, section 2.3, PE guidelines]: Projects involve multiple funding, planning, and implementation partners, and the relationships among them significantly affect implementation and outcomes.
- Influence of context and external factors [p30, section 2.4, PE guidelines]: Implementation and outcomes are affected by multiple contextual factors.
Step 3: Select the process evaluation design [p36, section 3, PE guidelines]. There is no standard process evaluation design, and flexibility is required to adapt the wide range of design options to each program context. There are at least four design considerations:
- Clarifying the key questions to be addressed [p44, section 3.3, PE guidelines]: It is important to ensure the evaluation is question-driven rather than methods-driven. Many kinds of information could be collected, so it is important to focus on the priority questions and information needs (see Step 2).
- Integrating the findings into the impact evaluation: It is important to coordinate with the impact evaluation team to agree on how the findings from the process evaluation will be used. Will the information be used as general background to help understand how implementation affects outcomes, or are there specific questions that the findings should address? Also, there must be agreement as to whether the findings will be used descriptively, or whether any of the data should be converted into a format that can be incorporated into the regression analysis (see Steps 5 and 6).
- Articulating the theory of change [p9, 2.1.2, PE guidelines]: The guidelines stress the importance of basing the evaluation on a theoretical framework. We recommend the use of a theory of change (ToC) but there are other possible frameworks (such as a results framework). The ToC describes the steps through which the project is intended to achieve its outcomes, the different internal and external factors that can influence implementation, and the key assumptions at different stages of the process that must be tested. It helps structure the evaluation design so that it focuses on the most important questions.
- Selecting the appropriate set of data collection instruments [p56, section 3.5, and Annex 2 Part B, PE guidelines]: Process evaluations mainly use qualitative methods, but these can be combined with quantitative methods where appropriate. The guidelines identify and describe nine categories of qualitative methods, all of which can be used in process evaluations:
- Case-based methods, including qualitative comparative analysis (QCA; see the sketch after this list).
- Qualitative interviews, including key informant and in-depth interviews.
- Focus groups.
- Observation and participant observation.
- Social network analysis.
- Self-reporting (diaries, time-use, and calendars).
- Analysis of documents and artefacts.
- Participatory group consultations (including participatory rural appraisal).
- Bottleneck analysis (Bamberger and Segone 2011, pp. 45-50).
Often, a combination of several different methods is used.
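To give a flavor of the first method on this list, the core of QCA is a truth table that groups cases by their configuration of conditions and checks how consistently each configuration is associated with the outcome. Here is a minimal sketch in Python with entirely hypothetical case data (full QCA would add Boolean minimization of the configurations):

```python
# A minimal sketch of the truth-table step in qualitative comparative
# analysis (QCA), using hypothetical binary case data.
import pandas as pd

# Hypothetical cases: 1 = condition present, 0 = absent.
cases = pd.DataFrame({
    "community_buyin": [1, 1, 0, 1, 0, 0, 1, 0],
    "trained_staff":   [1, 0, 1, 1, 0, 1, 0, 0],
    "outcome":         [1, 0, 0, 1, 0, 1, 0, 0],
})

# Truth table: one row per configuration of conditions, with the
# number of cases and the share of cases showing the outcome.
truth_table = (
    cases.groupby(["community_buyin", "trained_staff"])["outcome"]
    .agg(n_cases="count", consistency="mean")
    .reset_index()
)
print(truth_table)
```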
Figure 1. The integrated process/impact evaluation design
Step 4: Design a mixed-methods framework to strengthen the evaluation design [p65, section 3.6.3, PE guidelines]. Mixed-methods designs combine qualitative and quantitative tools to strengthen the representativeness and validity of both qualitative and quantitative components. Most qualitative methods have two limitations: they tend to collect information from a relatively small sample of individuals or groups, and the samples are often not selected to ensure representativeness, since the goal is to collect rich, in-depth information from subjects accessible to the interviewers. These factors make it difficult to generalize from the findings to the total project population.
Mixed methods strengthen the generalizability and validity of qualitative data in two ways. First, case selection draws on the sampling frames used in the impact evaluation to ensure that the cases studied in the qualitative analysis are broadly representative. Second, mixed methods use two or more independent sources of data to compare estimates (triangulation), which increases validity. Mixed methods also strengthen quantitative designs by using in-depth methods (such as observation, unstructured interviews, or focus groups) to validate survey data.
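On the first point, the sketch below illustrates one way cases for qualitative follow-up might be drawn from the impact evaluation's own sampling frame; the `survey` DataFrame and its column names are hypothetical:

```python
# A minimal sketch of drawing a broadly representative set of cases
# for qualitative follow-up from the impact evaluation's sampling
# frame. All names are hypothetical.
import pandas as pd

def select_cases(survey: pd.DataFrame, strata: list, per_stratum: int,
                 seed: int = 42) -> pd.DataFrame:
    # Sample a fixed number of cases within each stratum (e.g. region
    # by treatment status) so the qualitative sample mirrors the
    # survey design rather than relying on convenience selection.
    return (
        survey.groupby(strata, group_keys=False)
        .apply(lambda g: g.sample(min(per_stratum, len(g)), random_state=seed))
    )

# Example: three cases per region-by-treatment cell.
# cases = select_cases(survey, ["region", "treated"], per_stratum=3)
```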
Step 5: Analyze the data. The analysis addresses two dimensions: how closely the process of project implementation conformed to the project design protocol (sometimes called “implementation fidelity”); and how adequately the design and implementation contributed to the achievement of broader development goals, such as the SDGs. The analysis can also be conducted at two levels: descriptive analysis; and conversion of the findings into scales and other ordinal measures that can be incorporated into the impact evaluation design [p68, PE guidelines]. While these scales are ordinal and do not permit statistics such as means or standard deviations, they provide a useful way to compare performance on different dimensions or to compare the overall performance of different projects.
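The conversion step might look like the following minimal sketch, with entirely hypothetical ratings; the median is used for comparison because it is an order-based statistic that remains valid for ordinal data:

```python
# A minimal sketch of converting descriptive implementation ratings
# into an ordinal scale and summarizing them with medians (means and
# standard deviations are not meaningful for ordinal data).
import pandas as pd

SCALE = {"poor": 1, "adequate": 2, "good": 3, "excellent": 4}

ratings = pd.DataFrame({
    "site": ["A", "A", "B", "B"],
    "dimension": ["fidelity", "outreach", "fidelity", "outreach"],
    "rating": ["good", "adequate", "excellent", "poor"],
})

ratings["score"] = ratings["rating"].map(SCALE)

# Compare performance across dimensions using the median.
print(ratings.groupby("dimension")["score"].median())
```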
Step 6: Integrate the process evaluation data into the impact evaluation. The process evaluation findings can be incorporated into the impact evaluation in at least three main ways:
- Providing rich descriptive data to help interpret the impact evaluation findings, including deviations or variations in estimated impacts among sub-groups.
- Transforming the findings into ordinal scales, which can be used either to identify variations in performance on different indicators or to be incorporated into the impact analysis (see the sketch after this list).
- Selecting a set of case studies to illustrate and help explain differences among the main groups identified in the analysis.
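On the second point, one illustration: an ordinal fidelity score built in Step 5 could enter the impact regression as a categorical covariate interacted with treatment. The sketch assumes a DataFrame `df` with hypothetical column names; it is an illustration, not the method prescribed by the guidelines:

```python
# A minimal sketch of incorporating an ordinal implementation-fidelity
# score into the impact regression. `df`, `outcome`, `treated`, and
# `fidelity_score` are hypothetical.
import statsmodels.formula.api as smf

# Entering fidelity as a categorical term avoids treating the ordinal
# scale as if its intervals were equal; the interaction lets the
# estimated treatment effect vary with implementation quality.
model = smf.ols("outcome ~ treated * C(fidelity_score)", data=df).fit()
print(model.summary())
```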
As emphasized in Part I of this series, the findings of the integrated analysis can be used in four main ways: to understand how the implementation process affects impacts; to provide recommendations for improving the implementation of future projects; to refine the design of future impact evaluations; and, through selected case studies, to explore some aspects of the analysis in more depth.