There is plenty of interest from policymakers in having cost analyses accompany impact evaluations, as we learned in last week’s Virtual Evidence Weeks panel (video here, summary blog post here). On Thursday, more than 300 people joined us to discuss the methodological and practical barriers to including cost analyses in impact evaluations. Our expert panelists agreed that overcoming these barriers will require pressure from funding organizations, efforts to standardize methods, and increased respect for the intellectual merit of cost analyses.

“The biggest problem is that we just haven’t taken costs very seriously, particularly from the academic side,” said Craig McIntosh, professor and co-director of the Policy Design and Evaluation Lab, University of California, San Diego.

In academia, there are no professional incentives to include cost analyses, panelists agreed.

Advancing the quality and status of cost analyses requires a recognition that there are intellectually interesting questions regarding how to model and measure costs, said Caitlin Tulloch, International Rescue Committee’s Associate Director for the Best Use of Resources.

Part of that should include a reconceptualization of costs not as simple accounting figures but rather as a function of scale, input prices, and other variables, she said.

“I’m excited about developing a structural understanding of the cost side of the equation,” Tulloch said.

She described an IRC analysis of 10 latrine construction programs conducted by the same NGO in the same country around the same time. The cost per person per year varied from US$5 to US$110, and the difference was all about scale. She said that economies of scale should be no surprise to an economist, but such issues are often overlooked.
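To make the scale point concrete, here is a minimal sketch with entirely made-up figures (not the IRC's actual data or cost model) of how spreading a fixed setup cost over more beneficiaries drives down the cost per person per year:

```python
# Illustrative only: hypothetical fixed and variable costs, not IRC figures.

def cost_per_person_per_year(fixed_cost, variable_cost_per_person, n_people, years=1):
    """Average cost per person per year when total cost = fixed + variable * n."""
    total_cost = fixed_cost + variable_cost_per_person * n_people
    return total_cost / (n_people * years)

# Same cost structure, different scale.
small = cost_per_person_per_year(fixed_cost=50_000, variable_cost_per_person=5, n_people=500)
large = cost_per_person_per_year(fixed_cost=50_000, variable_cost_per_person=5, n_people=25_000)

print(f"Small program: ${small:.0f} per person per year")  # $105
print(f"Large program: ${large:.0f} per person per year")  # $7
```

With identical unit costs, the only difference between the two runs is how many people share the fixed cost.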

At the moment, most cost analyses are too cursory to support that kind of modeling, panelists said.

“We have a lot of trouble with the quality and the transparency of the cost evidence when it is produced,” said Elizabeth Brown, who leads the cost transparency initiative at the Center for Effective Global Action.

Often, existing analyses don’t provide even a minimal level of detail.

“It was almost impossible to tell what was underneath the hood of the numbers that you did see,” Tulloch said, referring to evaluations she had read. “Is that the average cost or the marginal cost of adding one more person?”
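That distinction is easy to show with a toy example (hypothetical numbers, not drawn from any of the evaluations discussed): for a program with a large fixed cost, the two figures can differ by an order of magnitude.

```python
# Hypothetical cost structure to illustrate average vs. marginal cost.
fixed_cost = 50_000           # incurred regardless of enrollment
variable_cost_per_person = 5  # incurred for each additional participant
n_people = 1_000

average_cost = (fixed_cost + variable_cost_per_person * n_people) / n_people
marginal_cost = variable_cost_per_person  # cost of adding one more person

print(f"Average cost per person:  ${average_cost:.0f}")   # $55
print(f"Marginal cost per person: ${marginal_cost:.0f}")  # $5
```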

Panelists agreed that health researchers are ahead of others in incorporating cost analyses into evaluations. One advantage in that sector is that different types of health interventions are evaluated using common outcome measures: the Quality-Adjusted Life Year (QALY) or the Disability-Adjusted Life Year (DALY). These measures make it possible to compare programs addressing very different medical issues, such as an anti-malaria program and a program to combat hypertension.

“What QALYs and DALYs do is they allow us not only to measure the outcome but to compare very different types of programs,” said Catherine Pitt, assistant professor of health economics, London School of Hygiene and Tropical Medicine.
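In practice that comparison is usually expressed as a cost-effectiveness ratio such as cost per DALY averted. A minimal sketch, with invented costs and outcomes purely for illustration:

```python
# Invented figures for illustration; not results from any actual evaluation.

def cost_per_daly_averted(total_cost, dalys_averted):
    """Cost-effectiveness ratio: total program cost divided by DALYs averted."""
    return total_cost / dalys_averted

programs = {
    "anti-malaria program": cost_per_daly_averted(total_cost=200_000, dalys_averted=4_000),
    "hypertension program": cost_per_daly_averted(total_cost=200_000, dalys_averted=1_250),
}

for name, ratio in sorted(programs.items(), key=lambda item: item[1]):
    print(f"{name}: ${ratio:.0f} per DALY averted")
```

Because both ratios share the same denominator, two very different programs can be ranked on a single scale.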

Panelists said it is unlikely that similar common outcome measures can be developed to work for all the different types of social interventions, however.

“I think it’s important to recognize the tremendous diversity of interests people have (in the development community),” McIntosh said. “We have donors with very different desires and programs that achieve very different outcomes.”

The initial pressure to conduct good cost analyses needs to come from donors, he said.

“This should just be a pro forma part of what any publicly-funded evaluation should be doing,” McIntosh said. “It is legitimate for donors to simply require this from the academic community.”

He and other panelists called for the field’s experts to produce a straightforward guide to best practices for evaluation researchers who do not specialize in cost analyses.

“I think it’s unrealistic that most people are going to dive really deeply into this,” McIntosh said. “I think we want it done competently in every big study.”

There should be more explicit training on how to conduct cost analyses as part of master’s and doctoral programs, said Constantine Manda, co-founder of the Network for Impact Evaluation Researchers in Africa (NIERA) and a Ph.D. candidate in Yale University's political science department.

"Trying to build capacity on this dimension is very, very key, especially for researchers in these developing countries," Manda said.

Watch the video of the full panel discussion here, and sign up here to join our last Virtual Evidence Weeks panel on scaling interventions up.
