What if scientists tested their drug ideas directly on humans without first demonstrating potential efficacy in the lab? The question sounds hypothetical because we all know that untested drugs can be dangerous. By the same logic, should we not exercise similar caution with randomized controlled trials (RCTs) of social and economic development interventions involving human subjects?
Development economists have readily adopted the methods used in clinical trials, but they have not implemented all the rigorous practices that are integral to these trials. For instance, do we establish clinical equipoise before conducting impact evaluations? Not quite. Do we conduct formative research to inform the design of a programme before it is implemented? Not really.
Over the last fifteen years, development economists have been using RCTs to evaluate whether development interventions involving human subjects work. They argue that scientific methods such as clinical trials, which have been used for decades in epidemiology and public health, should also be used to determine which interventions work. Development interventions, in this context, are being equated with drugs. While there is no doubt that development economists should use more of these methods to assess the effectiveness of development interventions, they should not simply copy the methods; they should also learn from how clinical trials are conducted.
Establishing clinical equipoise
In public health research, the justification for randomly assigning participants rests on clinical equipoise. This means that clinical trials are implemented only when the researchers have substantial uncertainty (doubt) about the expected impact (efficacy) of the intervention (drug). The researchers may arrive at this conclusion after reviewing the available research in the field. Clinical equipoise is thus a necessary condition for the ethical justification of conducting RCTs. Hence, in public health, the first function of the Institutional Review Board is to ensure that clinical equipoise exists for new RCTs.
But in the development sector, economists are largely unaware of the need to establish clinical equipoise before conducting RCTs of development interventions. Since RCTs are being used increasingly by development economists, we should start thinking about how clinical equipoise can be established for impact evaluations of development interventions. A first step might be to become more familiar with systematic reviews. By examining all the existing evidence on a particular development intervention, systematic reviews tell us what has been done in other fields and identify where there is a gap for impact evaluations to fill.
There is clearly a need to establish clinical equipoise in impact evaluations of development interventions, especially where a substantial body of research already exists. But it is also important that formative research, which is part of many clinical trials, be embedded in the design of impact evaluations run by development economists.
Rationale for formative research
There is a lot that can be learnt from a clinical drug trial. Before the efficacy of a new drug is tested on people, a great deal of scientific research is done to establish whether the drug has the potential to be beneficial and safe. No such process exists for the design of development interventions. The long process behind the development of a new drug could, in some ways, be compared with the use of theory to design interventions. However, as pointed out in a recent article, there is a concern that some interventions implemented and rigorously evaluated by development economists are not based on theory.
In the absence of theory-based interventions, formative research can feed into the design of an intervention by revealing the needs of recipients and examining whether recipients would find the intervention acceptable. For instance, we know that constructing schools might increase the enrolment rate in remote areas. However, an important question that needs to be addressed is how to get potential students into schools in complex and difficult environments.
The answers to such questions can be unearthed by formative research, which will enable us to understand how all these factors interact. It can certainly serve to refine the intervention's theory of change. Given that development economics usually has no lab in which to test ideas before conducting an RCT, the design of the intervention should be preceded by formative research.
Carrying out formative research
In public health, formative research studies are used extensively before conducting an RCT. They can draw on several methods, including small surveys and checklists, semi-structured interviews, and focus group discussions.
The overall aim of formative research is to help design the trial's interventions. Formative research can thus enable us to better understand communities by identifying barriers to, and facilitators of, the issues that the proposed intervention intends to address; to develop effective strategies for the recruitment and retention of participants; and to design evaluation instruments.
For instance, in preparing for a clinical trial to assess the efficacy of disease-modifying antirheumatic drugs in patients with early inflammatory arthritis, formative research was conducted to determine patients' educational needs and to assess their interest in enrolling in a hypothetical prevention trial. This sort of study of the acceptability and feasibility of an intervention is embedded in most public health research (e.g. HIV prevention, malaria). It is important for judging whether the intervention has the potential for acceptable take-up and for knowing whether it can be implemented as planned. Formative research will certainly help in avoiding flawed studies that show very low take-up of programmes, as seems to have been the case with several impact evaluations run by development economists.
Encouraging formative research
At 3ie, we have tried to identify ways to address the lack of formative research in designing interventions for impact evaluations by introducing innovative grant programmes like Thematic Window 2 and Thematic Window 3 (launched in the spring of 2013). Thematic Window 2 phase 1 funds formative research studies which are meant to inform the design and implementation of pilot programmes using oral HIV self-tests. These programmes will subsequently be rigorously evaluated if results from formative research are promising. Thematic Window 3 funds both the implementation and impact evaluations of innovative interventions aiming to increase the demand for Voluntary Medical Male Circumcision (VMMC) for HIV prevention in 14 priority countries in eastern and southern Africa.
For both of these thematic windows, a lot of work went into identifying the issues that formative research could take up. In the case of Thematic Window 2, we identified formative research questions by reviewing the self-testing and rapid diagnostic testing literature to date, and from information gathered in a series of meetings we held in Kenya. For Thematic Window 3, the launch of the call for proposals was preceded by a scoping paper and the Eastern and Southern Africa Regional Meeting on Demand Creation for VMMC in Lusaka, Zambia. The purpose of this meeting was to bring together implementers of VMMC programmes, stakeholders, and researchers to share experiences and lessons, and to think of ways forward by designing innovative interventions to increase the uptake of VMMC. Both of these grant windows illustrate the steps that can be taken to use formative research in designing interventions.
While researchers may see the value of formative research for informing the design of a proposed intervention and its theory of change, the big question is whether they will stay committed to using this rigorous approach. Researchers would certainly feel encouraged to adopt it if there were a platform where formative research studies could be published. The way ahead, then, lies in providing more incentives.
(Eric Djimeu is Evaluation Specialist, HIV/AIDS, at 3ie)