Understanding real-world complexities for greater uptake of evaluation findings
Most evaluations largely ignore the complexities of the context in which development programs are designed and implemented. This two-part blog series by 3ie Senior Fellow Michael Bamberger underscores the need for, and the challenges of, designing ‘complexity-responsive’ evaluations. Bamberger discusses solutions and techniques that help evaluators take into account the real-world factors that affect project implementation and outcomes.
Most development programs are designed and implemented in complex political, socio-cultural, economic and ecological contexts, where outcomes are influenced by many factors over which program management has little or no control. These factors interact in different ways in different project locations. Consequently, even a project with a clear design and implementation strategy may produce significantly different outcomes in different locations or at different points in time.
Despite the widespread acknowledgement by evaluators and stakeholders that project evaluations are ‘complex’, most evaluations apply designs that assume the project is ‘simple’, with a clearly defined relationship between the project inputs and a defined (and usually limited) set of outcomes. For example, most quantitative evaluations adopt a pre-test post-test comparison group design, where outcomes are measured for the project and a comparison group at the start of the project (baseline) and at the end (endline). Project impacts are defined as the difference in the rate of change of the two groups over the life of the project. A variant of this design provides the underlying logic for randomized controlled trials, propensity score matching, double difference, instrumental variable estimation, regression discontinuity and pipeline designs.
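To make the underlying double-difference logic concrete, here is a minimal sketch in Python. All group means are hypothetical, purely for illustration:

```python
# Minimal sketch of the pre-test post-test comparison group logic described
# above. The outcome means below are hypothetical, not from any real study.
baseline = {"project": 42.0, "comparison": 41.5}  # mean outcomes at baseline
endline = {"project": 55.0, "comparison": 47.0}   # mean outcomes at endline

change_project = endline["project"] - baseline["project"]            # 13.0
change_comparison = endline["comparison"] - baseline["comparison"]   # 5.5

# Impact = difference in the rate of change of the two groups
impact_estimate = change_project - change_comparison                 # 7.5
print(f"Double-difference impact estimate: {impact_estimate:.1f}")
```

The same subtraction of changes underlies the more sophisticated designs listed above; they differ mainly in how the comparison group is constructed.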
What unites these quantitative evaluation designs is the lack of any systematic consideration of the multiple factors affecting a program’s implementation and outcomes. In fact, most widely used quantitative evaluation designs eliminate complexity by matching the project and control (comparison) groups so as to eliminate the effects of external factors and estimate the effects of the project intervention ‘when everything else is held constant’. Such evaluations assess how the project would operate under controlled laboratory conditions, but largely fail to assess how projects perform in the face of real-world complexities.
Space does not permit a discussion of how different qualitative evaluations treat complexity, nor of the different challenges in the treatment of complexity in project, sector-program, country-level and policy evaluations.
Figure 1: Complexity Map
Mapping complexity
One of the challenges in understanding complexity is that many discussions are highly technical and theoretical. Breaking complexity into the following four interrelated components (figure 1) makes it more understandable to policymakers and program staff:
- Intervention dimensions. Projects and programs have subcomponents that can be rated in terms of level of complexity (see box 1 for examples).
- Interactions among stakeholders and other actors. Large programs may involve funding agencies, consultants, government agencies and line ministries, civil society, and community, business and special interest groups. Their interactions, differing priorities and approaches can turn even a ‘simple’ project into a complex evaluation challenge.
- Internal and external systems. These include political, economic, administrative and legal systems, socio-cultural influences, public services and infrastructure, and the influence of social media. They operate within unique historical traditions that influence how programs work in a particular sector or country and often resist change (inertia).
- Process of causality and change. This can vary from relatively simple (with a clear linear relationship between program inputs and a limited set of outcomes) to relatively complex systems with multiple pathways linking program inputs to a wide range of outcomes.
A ‘complexity-responsive evaluation’ requires an analysis of the level of complexity of each of these four components, and of how interactions among them affect the implementation and outcomes of the project. In collaboration with colleagues, I have developed a complexity checklist that can be used to rate the level of complexity of each component and sub-component. This helps policymakers, managers and other non-specialists to understand the dimensions of complexity, and to rate projects on whether they are sufficiently complex to justify the use of a complexity-responsive evaluation.
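The checklist itself is not reproduced in this post, but the rating logic can be illustrated with a short sketch. The component names follow figure 1; the 1–5 scale and the decision threshold are assumptions for illustration only, not values from the published checklist:

```python
# Illustrative sketch of a complexity-rating exercise. The 1 (simple) to
# 5 (highly complex) scale and the threshold are assumed for illustration;
# they are not taken from the actual checklist.
ratings = {
    "intervention dimensions": 2,
    "stakeholder interactions": 4,
    "internal and external systems": 5,
    "causality and change": 3,
}

average_rating = sum(ratings.values()) / len(ratings)
THRESHOLD = 3.0  # hypothetical cutoff separating 'simple' from 'complex'

print(f"Average complexity rating: {average_rating:.2f}")
if average_rating >= THRESHOLD:
    print("A complexity-responsive evaluation design is worth considering.")
else:
    print("A conventional evaluation design may be adequate.")
```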
Box 1: Examples of intervention dimensions rated by level of complexity
Implications of ignoring complexity
The lack of attention to complexity can have serious implications for the practical utility of evaluation findings. When the multiple factors promoting and constraining the achievement of desired outcomes are not well understood, it becomes difficult to use the findings to design more effective projects and policies. Moreover, policymakers and managers may consider the findings of simple evaluations too unrealistic to be of practical use [1]. Because projects and policies produce a much wider range of outcomes than those addressed in conventional evaluations, project designers miss the opportunity to broaden the range of positive program outcomes. In addition, projects have different effects – both positive and negative – on different groups, but conventional evaluations often capture only the average effects for the total population, frequently overlooking the consequences for vulnerable or less vocal groups. A one-size-fits-all approach thus fails to adapt projects to different contexts and to the needs of different groups.
Why do so many evaluations ignore complexity?
Despite the widespread recognition that most development programs involve elements of complexity, only a very small proportion of evaluations actually address complexity systematically. There are several reasons for this:
- The evaluation profession tends to be conservative, with many evaluators continuing to use the methods they know and in which they have invested considerable time and resources – methods mainly designed to assess simple, linear models.
- Many clients continue to request the kinds of evaluation with which they are familiar and there has been relatively little demand for complexity-responsive evaluations.
- Evaluation procurement procedures are often very rigid, constraining evaluators' ability to propose new methods or to address new issues such as complexity.
- Finally, many complexity-oriented evaluation methods, such as systems analysis, system dynamics and social network analysis, require access to large quantities of data which, until recently, were not available to most evaluators. The increasing availability of big data and data analytics means that many kinds of large data sets are now becoming accessible (see the sketch below for a taste of one such method).
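As an illustration of what these methods involve, the sketch below applies one of them – social network analysis – to a hypothetical stakeholder network, using the open-source networkx library. All actors and ties are invented for illustration; a real application would build the network from large administrative, survey or social media data sets:

```python
# Illustrative social network analysis of program stakeholders.
# All actors and ties below are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("funding agency", "line ministry"),
    ("funding agency", "consultants"),
    ("line ministry", "local government"),
    ("consultants", "local government"),
    ("local government", "community groups"),
    ("civil society", "community groups"),
])

# Degree centrality flags actors whose many ties make them potential
# brokers or bottlenecks in implementation.
for actor, score in sorted(nx.degree_centrality(G).items(),
                           key=lambda item: -item[1]):
    print(f"{actor}: {score:.2f}")
```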
Building on this discussion, the next piece in this blog series offers practical approaches for the evaluation of complex programs. It reviews the wide range of complexity-focused evaluation tools, and the new sources of big data, that are making it possible for evaluators to begin addressing complexity in a more systematic way.
[1] A 2021 report by CECAN (Centre for the Evaluation of Complexity Across the Nexus) found that many UK policymakers considered most evaluations to be of very limited practical utility because they were based on simple (linear) models that largely ignored the complex contexts in which most policies are designed and implemented.
Comments
Dear Michael,
Thank you for posting a nice article to promote "complexity based evaluations".
I would not blame the evaluation first but the programme design that is being evaluated.
It is quite possible to design a project that is more manageable and achieves desired results, but it takes considered preparation, contextual knowledge and consultations – not a donor who has a theme or "universal" value to impose. The current trend is to talk of complexity as though the world suddenly became complex. The inscrutability of many settings comes from a lack of effort by the outsider to try to understand. Every intervention is disruptive, but well-designed ones can also be helpful. Good evaluations require focused time and access on site, not masses of random data.
Re the use of the measure of complexity generated by the checklist, it says above "This helps policymakers, managers and other non-specialists to understand the dimensions of complexity, and to rate projects on whether they are sufficiently complex to justify the use of a complexity-responsive evaluation". How does the checklist score enable the latter? Is there a particular cutoff value, and if so how is its choice justified?
Responding to Aftab's comment
Thank you for these important points. Many impact evaluations focus on the "counterfactual" (how to select a closely matched control/comparison group to exclude selection and other kinds of sample bias), but they pay very little attention to the "factual" (how the project was designed and how well it was implemented). There are very few instances where a project was implemented exactly as planned, yet many impact evaluations ignore this. Consequently, if an impact evaluation does not find a statistically significant difference in outcomes between the project and control groups, it is assumed this must be due to "design failure"; the alternative explanation – that the failure was due to poor implementation – is often not considered.
3ie is currently working on guidelines for incorporating "process evaluation" – how the project is actually implemented on the ground – to address this important gap in many impact evaluations. The guidelines should be available by early September.
Re addressing complexity by breaking it into four components and a five-step approach (why not five components and four steps is never explained): this simplified explanation helps non-specialists (policymakers, managers) to differentiate between weak complexity, which must be dismissed, and strong complexity, which must be simplified into four categories and five steps.
First, why is the privilege of understanding complexity reserved for specialists? Social complexity operates between facts and values, so it must remain open and integrative. Complexity needs evaluative, not merely expert, understanding.
Second, the complexity map draws a line between ignoring complexity and domesticating it in a non-complex way for the non-complex mind. This is complexity for dummies, who insist on comprehending the world in non-complex (or at least not radically complex) ways. When faced with complexity, evaluators should not see clients as dummies. They need to elaborate an evaluative understanding of complex reality in a resolutely uncompromising, radically complex way. This is possible, I propose, when complexity is conceptualised in the middle, between pro et contra, and at the meso level, with mesoscopic reasoning.