Governments want results. Taxpayers want results. Beneficiaries want results.
The results agenda gained momentum in development circles during the 1990s, becoming firmly established with the widespread adoption of the Millennium Development Goals. This focus on results is welcome. Simply measuring success by the volume of spending, or even by the number of teachers trained, kilometres of road built and women’s groups formed, is not a satisfactory approach. Input monitoring does not ensure that development spending makes a difference to people’s lives. Spending that makes a difference: that is what we mean by a result. So we would expect this agenda to go hand in hand with impact evaluations. But that has not been the case.
The response of the development community to the results agenda has largely been outcome monitoring: tracking indicators such as infant mortality, business profits, and female empowerment. USAID was the first to adopt this approach, in the mid-1990s. It was also the first to abandon it, when the Government Accountability Office (GAO) objected that such outcome monitoring told us nothing about whether observed changes in outcomes were the result of the interventions supported with US dollars. Yet outcome monitoring remains widespread amongst those claiming to be interested in results. There remains a view that ‘attribution is difficult’. But attribution is precisely what impact evaluation is about.
It is not being suggested that impact evaluations be carried out for all development programmes. But they should be in place for pilot projects and other innovative interventions, for large-scale and flagship programmes, and for a sample of representative programmes of the sort the agency typically supports.
Only with the widespread adoption of impact evaluation across development agencies can we truly demonstrate results. And, at the same time, build the evidence base about what works and why, so as to achieve even better results in the future.