Making participation count

09 January 2014

Toilets get converted into temples, and schools are used as cattle sheds. These are stories that are part of development lore. They illustrate the poor participation of ‘beneficiaries’ in well-intentioned development programmes.

So, it is rather disturbing that millions of dollars are spent on development programmes with low participation, when we have evidence that participation matters for impact. Yet many impact evaluations do not analyse participation at all. Often, they fail to report data on take-up rates, and even when they do, they do not examine why participation is low.

Evaluators often do not clearly distinguish between the impact on those who actually took part in the programme (treatment effects on the treated) and the impact on the entire population targeted by the intervention (the intention-to-treat effect). According to AidGrade, only 45 per cent of impact evaluations clearly distinguish between intention-to-treat effects and treatment effects on the treated.

For a policymaker, this distinction is crucial to deciding whether to scale up. A programme with high intention-to-treat effects is worth expanding. But when positive treatment effects on the treated are accompanied by low participation, the programme cannot be considered a success.
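The gap between the two measures can be sketched with illustrative numbers. Under one-sided non-compliance (no one outside the offered group can take up the programme, and the offer alone does nothing for non-participants), the intention-to-treat effect is roughly the effect on the treated scaled by the take-up rate. The figures below are hypothetical, chosen only to show how low participation shrinks population-level impact:

```python
# Hypothetical numbers: a programme that works well for participants
# but reaches few of the people it is offered to.
take_up_rate = 0.30          # only 30% of those offered actually participate
effect_on_treated = 10.0     # e.g. a 10 percentage-point gain for participants

# Under one-sided non-compliance, ITT = effect on the treated x take-up rate.
itt_effect = effect_on_treated * take_up_rate
print(itt_effect)  # 3.0 -- the population-level effect is far smaller
```

A policymaker looking only at the 10-point effect on participants would overstate what scaling up the programme as designed could achieve.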

Voluntary male medical circumcision (VMMC), for instance, is widely advocated as an efficacious clinical measure to reduce the risk of contracting HIV. But studies show that the demand for circumcision is quite low. According to a 3ie scoping study, fear of pain and complications during and after the surgery, concern about long healing periods, and financial and opportunity costs are major barriers responsible for the low demand. It is clear that VMMC will not significantly impact HIV rates if we don't carefully look at the facilitators and barriers influencing a man's decision to get circumcised.

So, why do poor people not participate in development programmes that presumably have clear benefits for them? Do they not know about the programme? Are they not aware of the benefits?  Are the socio-economic costs of participation excessive?  Do they not perceive the intervention as a benefit?

At 3ie, we are seeing several of our funded impact evaluations report low take-up of a programme. However, only a fraction of these evaluations actually delve into why participation is low. We are tackling this limitation by looking into what evaluators can do to unearth answers to these questions. For starters, there is an urgent need for impact evaluation designs to adopt a more systematic approach to understanding the various dimensions of participation. Here are a few pointers that we offer when we discuss evaluation designs with our grantees.

Conduct mixed-methods impact evaluations

A 3ie-supported impact evaluation of an inventory credit and storage facility programme for palm oil producers in Sierra Leone showed that the intervention did not have an impact on either the storage or the sales of palm oil. Focus group discussions conducted as part of the evaluation revealed that many of the palm oil producers had never interacted with bank officials. A general lack of trust contributed to the low take-up of the intervention. Qualitative research in this case contributed to a richer understanding of why there was low participation. To analyse take-up, evaluators need to include qualitative methods.

Map out the assumptions underlying the theory of change and analyse them

Many 3ie-supported research teams illustrate a programme theory of change using a flow chart. Unfortunately, while charting out the causal chain, they often do not consider the underlying assumptions. These assumptions involve a whole host of structural and contextual factors that are easy to overlook.

For example, while implementing a women's self-help group programme, it would be important to consider the possibility that information about the existence of the programme may not reach potential participants. It is also likely that the women may consider attendance at meetings to have a big opportunity cost. They may have to give up several hours of work on their farm or at home to attend a self-help group meeting. Assumptions about participation may also run counter to what a woman is able to do in her community. Women may have to break gender-related social norms in the community to attend this meeting unaccompanied by their spouse or a male relative.

All these assumptions need to be tested and analysed in the evaluation. Evaluators can only analyse reasons for low take-up and the funnel of attrition when these assumptions are made explicit before the start of the evaluation.
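The funnel-of-attrition logic can be sketched numerically: each assumption in the theory of change is a stage at which potential participants may drop out, and the stage-wise rates multiply. The stages and rates below are hypothetical, loosely following the self-help group example above:

```python
# Hypothetical funnel of attrition for a women's self-help group programme.
# Each assumption in the causal chain is a stage where potential
# participants can drop out; rates are illustrative only.
funnel = {
    "hears about the programme": 0.60,
    "considers the opportunity cost acceptable": 0.50,
    "can attend given gender-related social norms": 0.70,
}

reached = 1.0
for stage, rate in funnel.items():
    reached *= rate
    print(f"{stage}: {reached:.0%} of the target population remain")
# With these rates, only about 21% of the target population participates.
```

Making each assumption explicit up front tells the evaluation team where along the chain participation is being lost, rather than leaving a single unexplained low take-up figure at the end.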

Conduct formative research

Before conducting an impact evaluation, implementers and evaluators should assess the demand for an intervention. Formative research can help in understanding how demand works, particularly the local context that affects demand. If existing demand is low, formative research could also spark thinking on additional interventions for increasing take-up.

This is what happened in the case of a planned 3ie-supported impact evaluation of the Philippines Open High School Programme. Focus group discussions and in-depth interviews with students and teachers ahead of the impact evaluation threw up various reasons for the existing low take-up of this programme, for example, lack of information and the printing costs for study modules. All these useful insights prompted the evaluators to work with the Department of Education and design a factorial impact evaluation that could inform the design of the programme. The researchers are now considering a randomised controlled trial to test whether the provision of information about the programme is sufficient, or whether the additional subsidisation of printing costs for study modules would be required to spur participation.
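A factorial design of the kind described above crosses the interventions so that each can be estimated on its own and in combination. The sketch below is a minimal, hypothetical illustration of such a 2x2 assignment (it is not the actual design of the Philippines evaluation):

```python
import random

# Illustrative 2x2 factorial assignment: each student is independently
# randomised to receive programme information and/or a printing-cost
# subsidy, yielding four arms (neither, information only, subsidy only,
# both). Student IDs and probabilities are hypothetical.
random.seed(0)  # fixed seed so the assignment is reproducible

students = [f"student_{i}" for i in range(8)]
assignment = {
    s: {
        "information": random.random() < 0.5,
        "subsidy": random.random() < 0.5,
    }
    for s in students
}

for s, arms in assignment.items():
    print(s, arms)
```

Comparing arms then shows whether information alone is sufficient to spur participation, or whether the printing subsidy is also required.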

It is particularly important to increase the participation of people who could benefit from an intervention. An impact evaluation of a vocational training programme in Nicaragua showed that those who self-select for the programme actually benefit less than relatively poorer people who do not participate in the vocational training due to low aspirations. So, in such cases, improved targeting is key to achieving improved take-up.

Final thoughts

Some of the more practical challenges related to the take-up of a programme may be easy to fix with small design tweaks, such as marketing tools for raising awareness. But development problems tend to be complex, long-standing and deep. Addressing them involves making social, political and behavioural changes. We therefore require a fundamental shift in our thinking about the design and evaluation of programmes. Whatever the case may be, evaluators need to first analyse the drivers and blockers of demand. A good start is to look at participation systematically and to dig deeper into why participation levels were what they were.

Please sign up to receive email alerts when a new blog gets posted on Evidence Matters and feel free to leave a comment below.  



Radhika Menon, Senior Policy and Advocacy Officer


Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.