Tips on using impact evaluation to measure agency performance: applying the triple A principles


Impact evaluation has grown in popularity as governments and development agencies have come to realise that it is the best way to assess whether their programmes work. But will these evaluations help politicians, managers and funders know whether an agency as a whole is ‘working’?

Some years ago, Howard White proposed a triple A standard for agency-wide performance measurement systems (AWPMS). A recent paper we wrote applies these ideas to the role of impact evaluation in agency performance measurement. The triple A principles are:

  1. Alignment: The outcomes being measured have to be the same as the outcomes reflected in the agency’s goals. If the agency’s vision statement is ‘a world free of hunger’ then impact evaluations have to measure impact on food security, nutrition and so on.
  2. Aggregation: It should be possible to add up across project-level ratings. This has usually been done the way the World Bank does it: projects are rated as satisfactory or unsatisfactory, so aggregation is the per cent of the portfolio rated satisfactory (a simple numerical sketch follows this list). Impact evaluation enters this system indirectly through the ‘no benefit of the doubt’ principle, by which a project can only be rated satisfactory if there is rigorous evidence to back up that claim. But the holy grail of AWPMS is to be able to say how many people have been lifted out of poverty, how many girls empowered and how many children’s lives saved directly as a result of the agency’s efforts.
  3. Attribution: Evaluation findings should be able to make credible causal statements linking the intervention to the outcome. This is the trump card of impact evaluation and the reason most agencies still need more of them.
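
To make the aggregation arithmetic concrete, here is a minimal illustrative sketch. The project names, ratings and evidence flags are made up for illustration, not drawn from any agency's data; the point is simply how project-level ratings roll up to a portfolio figure under the ‘no benefit of the doubt’ rule.

```python
# Illustrative sketch only: hypothetical project ratings, not real agency data.
projects = [
    {"name": "Project A", "reported": "satisfactory",   "rigorous_evidence": True},
    {"name": "Project B", "reported": "satisfactory",   "rigorous_evidence": False},
    {"name": "Project C", "reported": "unsatisfactory", "rigorous_evidence": True},
    {"name": "Project D", "reported": "satisfactory",   "rigorous_evidence": True},
]

def rated_satisfactory(project):
    """'No benefit of the doubt': count a project as satisfactory only if its
    reported rating is satisfactory AND rigorous evidence backs that up."""
    return project["reported"] == "satisfactory" and project["rigorous_evidence"]

# Aggregation: the share of the portfolio rated satisfactory.
satisfactory_share = sum(rated_satisfactory(p) for p in projects) / len(projects)
print(f"{satisfactory_share:.0%} of the portfolio rated satisfactory")
# With the made-up data above, 2 of 4 projects qualify, i.e. 50%.
```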

In discussing the application of the triple A principles in a range of agencies, we identified a number of key actions for agencies planning and implementing a programme of impact evaluations. The experiences of a few agencies are summarised in papers in a special issue of the Journal of Development Effectiveness.

  • Decide how many impact evaluations need to be done, and decide on a system for choosing them. Some impact evaluations are better than none. You can start opportunistically. But once you are doing a considerable number, you need to pick them strategically. If you want to make agency-wide statements at the outcome level, then that strategy will involve representative sampling of programmes for impact evaluation. This is what Oxfam GB has done (a sampling sketch follows this list).
  • Value lesson learning as well as accountability. The purpose of AWPMS is to improve the accountability of an agency. However, impact evaluations also provide valuable lesson learning opportunities that can be missed if the sole focus is on the accountability function.
  • Balance independence and influence. Integrity in evaluation is more important than independence. Too great a distance between the evaluator and evaluee can undermine relevance and limit use of the study findings.
  • Build use into evaluation processes. Both of the previous points are ways to ensure that we are not just producing reports that sit unread on shelves. User engagement starts at the design stage, ensuring the study design is useful to the commissioning agency. Planning communication and engagement from the start also increases the chances that findings will be understood and used.
  • Build better agency incentive structures. The penalties should not be for failure but for failing to learn from failure.
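
As a concrete illustration of the first point above, representative selection can be as simple as drawing a random sample from the programme portfolio. The sketch below uses a hypothetical flat list of programmes; Oxfam GB's actual selection procedure is described in its own publications, and real portfolios would typically be stratified by sector or region before sampling.

```python
import random

# Hypothetical portfolio of 200 programmes; in practice this would come from
# the agency's programme database, possibly stratified by sector or region.
portfolio = [f"programme_{i:03d}" for i in range(1, 201)]

random.seed(42)  # fixing the seed keeps the selection reproducible and auditable

# Draw a representative sample of programmes for impact evaluation.
sample_size = 20
selected = random.sample(portfolio, sample_size)

print(f"Selected {len(selected)} of {len(portfolio)} programmes for impact evaluation")
print(selected[:5])
```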

3ie has been at the forefront of fostering a steady increase in government agency commitment to doing impact evaluations. The growing acceptance of their value is an important step forward. Just as importantly, agencies also need to think about how they will use their evaluation findings to measure and improve agency performance. And, for most, that journey is just beginning.


Authors

Howard White, Director, GDN Evaluation and Evidence Synthesis Programme
Richard Manning, Former chair of 3ie’s Board of Commissioners

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
