Bridging the Knowledge Gap: What Works in Development, Why and When

For the first time, an extraordinary gathering of 700 people from different continents, disciplines, sectors and methodological traditions shared their experiences and insights on how to evaluate development effectiveness. The Cairo international conference (March 29 to April 2, 2009) brought people from diverse perspectives together to draw lessons learned and engage with different approaches within real-world political, financial, and time constraints.

“There is a glaring need for better evidence to inform decision making. These are vulnerable people, people in need – the decisions we take every day affect their lives so they need to be right,” said Nick York, the Chair of NONIE, the Network of Networks on Impact Evaluation.

New steps for sound evaluation

Amongst the outcomes of the conference were: the launch of the Journal of Development Effectiveness; the release of the NONIE Guidance on Impact Evaluation, which provides evaluators with a framework for weighing the comparative advantages of different tools and their uses in impact evaluation; and the official formalisation of the African Evaluation Association (AfrEA), which celebrated its tenth anniversary with the election of its new Board and President, Florence Etta.

“This conference was the epitome of partnership”, said Dr Etta. It was jointly organised by AfrEA, NONIE, 3ie and UNICEF and also served as the fifth pan-African evaluation conference since the establishment of AfrEA in 1999. Hundreds of representatives from African associations and the newly established Egyptian evaluation group participated, with more than 200 sponsored by the organisers.

Closing the gap between evaluators, policy-makers and development practitioners

For the Country Representative of UNICEF Egypt, Dr Erma Manoncourt, “Reliable data management systems and quality evaluation are still a challenge in many developing countries; hence there is a need to close the gap between evaluators on the one hand and development practitioners and policy makers on the other.” She also stressed that the emphasis on impact evaluation is central to improving development effectiveness and, at its best, can be a positive force for change and for the fulfilment of human rights.

Two million children die every year from lack of sanitation, around 5,000 every day. There is already evidence of what works, but sustainability needs to be established. “This calls for more research on quantifying the health impacts of sanitation interventions”, stressed Howard White, Executive Director of 3ie.

Commenting on the impact of the Mexican Government’s flagship program Progresa/Oportunidades, which provides cash transfers to poor households conditional upon children’s regular school attendance, health clinic visits by all family members, and nutritional monitoring of children under three years old, Paul Gertler of the University of California, Berkeley praised the fact that “Impact evaluation has now become the currency in Mexico... Policy evaluation generated interest because it is the key to sustainability. How can an incumbent party eliminate a program that has proven to be successful without harming their own reputation and risking the loss of support?”

The evaluation of Progresa/Oportunidades not only strengthened support for the program by showing how it directly contributed to poverty reduction, but has also influenced the design of similar programs throughout the world (Morley and Coady 2003).

Applying a range of credible and appropriate designs and methods

Asked about ethical concerns in relation to the use of randomized controlled trials, Rebecca Thornton of the University of Michigan emphasized that the real risk lies in continuing to carry out programs whose impact and benefits are unknown. Given the budget and institutional constraints most governments and programs face, full-scale implementation for all potential beneficiaries is usually unfeasible, which leaves room for a randomized and transparent scaling up of an intervention.

Martin Ravallion from the World Bank cautioned against starting an impact evaluation by focusing on a method, arguing that impact evaluation should instead begin with policy-relevant questions and be eclectic about methods. “By constraining evaluation research to situations in which one favourite method is feasible, research may exclude many of the most important and pressing development questions”, he warned. For Ravallion, standard methods neither address all the policy-relevant questions nor examine the potential disparities in impacts among different groups affected by an intervention, such as men and women. It is also important to compare a range of policy options rather than simply comparing the intervention with the option of non-intervention.

Jennifer Greene from the University of Illinois also underscored the importance of getting the evaluation questions right, as well as understanding context and prioritising values and voices. “Methodology is ever the servant of substance”, she said.

The usefulness of a mixed methods approach was a recurrent theme. Strong voices also called for methods for causal attribution to include a range of experimental, quasi-experimental and non-experimental techniques. “I read this conference as having disabused the view that there is in fact a ‘gold standard’ or ‘hierarchy of methods’ when it comes to what constitutes acceptable methods for impact evaluation”, said Odette Ramsingh, Director-General of the Public Service Commission in South Africa in a summary of the conference. “We should approach the debate on methods from a constructive, developmental perspective and try to combine methods in ways that answer the most important questions”, she added.

People First

Robert Chambers from the Institute of Development Studies (IDS) shared some practical tips on how to ‘count the uncountable’ and ‘seek surprise’ (Guijt 2008) throughout the evaluation process.

“The most important is to put poor people first”, he stressed. “How they themselves learn and gain is an under-recognised externality of participatory evaluation processes. Many innovations in participatory methodologies are coming from Africa and Asia. These have their own rigour and credibility. Complementary or often alternative to other approaches, they can be win-wins – revealing unexpected realities, quantifying the qualitative, empowering poor participants through their own analysis, and providing policy-makers with a richer range of relevant and grounded insights.”

At the end of the five days, participants left with some answers as to how evaluation can best be conducted and used to inform better policies and interventions that make a difference in people’s lives.

For further information, please visit www.impactevaluation2009.org

Published on: 02 April 2009
