Evidence impact: informing better monitoring and measurement of interventions
How helpful can an evaluation be if it shows an intervention had no effects on desired outcomes? The first evaluation of a community-driven reconstruction (CDR) program called Tuungane, implemented in the Democratic Republic of Congo (DRC), is one such study. The evaluation did not show positive effects on all desired social and behavioural outcomes, but it did help the implementer, the International Rescue Committee (IRC), revamp some of its systems. Here’s how.
IRC started the first phase of Tuungane, which means “let’s unite” in Kiswahili, in 2007. This phase of the CDR program took place in 1,200 conflict-affected communities in eastern DRC. Tuungane aimed to rebuild community assets while promoting, among other goals, joint problem solving, stronger local governance, trust and social cohesion. In some areas, IRC also introduced a mandate to involve women as equal partners in decision-making bodies called village development committees.
However, a 3ie-supported impact evaluation found that the program did not significantly improve social cohesion, governance or welfare even though it succeeded in completing a large number of projects.
The findings also showed that the gender parity requirement was not necessary for women’s representation in this context. Village development committees without such a requirement had similar levels of women’s representation to those with one. Neither attitudes about women’s roles and responsibilities nor the projects selected varied significantly between villages with and without the requirement.
In a good example of using null findings, IRC and the donor met with the field team to interpret these ‘disappointing’ results. These meetings highlighted the need for better monitoring and evaluation to understand the challenges faced during implementation and to better capture impact. They also revealed the need to review the CDR theory of change to strengthen the pathways to desired social outcomes.
The findings prompted IRC to revamp its monitoring and evaluation systems to capture program impacts better. They also informed IRC’s ongoing review of program theories of change and designs to improve the chances that interventions reach desired social objectives.
“The findings themselves I think were important, but I think where it had a lot [of] and maybe in some ways more impact was immediately realizing that we needed to…have much better M&E (monitoring and evaluation) even as we had impact evaluation,” said Marie-France Guimond, then Research Systems Advisor at IRC.
In 2015, IRC launched a research program to further investigate the CDR program’s theory of change and apply the learning to future programming. The experience of the Tuungane phase one evaluation led IRC to embed qualitative components in monitoring and evaluation to better understand the implementation context and impacts.
While this evaluation ended in 2013, years before the complexity-focused evaluation movement, it shows that null findings can help inform programming, program evaluation and decision-making. If you would like to read more stories of evidence informing decision-makers, head to the new evidence impact summaries section of our evidence hub.