Ami Bhavsar and Hugh Waddington, 3ie Systematic Reviews Office
3ie has been funding fewer systematic reviews than the number of questions proposed in its calls, owing to a lack of sufficiently high-scoring applications. This outcome led the 3ie systematic review office to ask: what are the features of high-scoring systematic review applications? And what are the pitfalls that keep proposals from scoring well during the review phase?
3ie unravelled answers to these questions by analysing the application scores from 3ie’s requests for proposals since 2010 (including joint calls with partner organisations and agencies), shown in the figure below. The final score given to a proposal is a weighted average of the scores for the following review criteria: study staff (20%), technical proposal (40%), project budget and plan for communicating results for uptake into policy and practice (25%), and the number and degree of involvement of developing country researchers (15%) (see Appendix 1 of the Systematic Reviews call 6 Request for Proposals – Deadline 30 August 2013).
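As a simple illustration of the weighting described above, the snippet below computes a final score from the four criteria. The weights are taken from the call document; the sub-scores are invented purely for this example.

```python
# Criterion weights from the Request for Proposals (Appendix 1).
weights = {
    "study_staff": 0.20,
    "technical_proposal": 0.40,
    "budget_and_policy_uptake": 0.25,
    "developing_country_researchers": 0.15,
}

# Hypothetical sub-scores out of 100 -- invented for illustration only.
scores = {
    "study_staff": 80,
    "technical_proposal": 70,
    "budget_and_policy_uptake": 85,
    "developing_country_researchers": 50,
}

# The final score is the weighted average of the criterion scores.
final_score = sum(weights[c] * scores[c] for c in weights)
print(final_score)  # 72.75
```

Note how the 40 per cent weight on the technical proposal means a weak technical section drags the total down more than any other criterion.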
The analysis showed that proposals tended to score most weakly on the developing-country researcher involvement criterion (accounting for 15 per cent of the total score). Scores on the technical proposal criterion and on the quality of proposed staff are also generally lower. While the variation in scoring has come down over time, largely because 3ie receives fewer low-scoring proposals, the average score has not changed much over the three calls for proposals.
Using the findings from the review of past application scores, 3ie has prepared this summary of the key sticking points where applications have fallen short. For each of the review criteria, 3ie offers tips on what applicants can do to avoid these weaknesses and strengthen their scores. This is not a guide for completing an application; issues related to budget and contractual matters, for example, are addressed in the SR6 call frequently asked questions (FAQs).
1. Putting together a winning team
Get the right mix of skills and experience. Having the right mix of skills in the team is crucial to completing a high-quality systematic review, and the scoring of proposed teams reflects the importance attached to the experience on offer. An optimal mix means having team members with both experience in conducting systematic reviews and substantive subject expertise. 3ie commissions effectiveness reviews that incorporate the full range of experimental and quasi-experimental designs and that undertake meta-analysis of effect sizes. Hence, a statistician or econometrician will be required on the review team. It is also important to have an information specialist to design the search.
Conducting a systematic review is time consuming, complex, and process-oriented. So, teams need principal investigators and research assistants who can dedicate significant blocks of time to undertake searching, data collection and analysis. Effective team management includes having clear lines of reporting and frequent team meetings (which may have to be undertaken remotely). Scoring takes into account how much time has been accorded to key personnel and how the review process will be managed.
Systematic reviews in international development are still fairly new, and people who have both methodological and substantive subject expertise may not always be available in the traditional international development sector. Applicants therefore need to recognise when their teams require collaboration between the international development and systematic review communities. 3ie tries to facilitate team recruitment by encouraging matchmaking through the systematic review community listserv, email@example.com.
Include developing-country researchers. 3ie awards up to 15 per cent of the available points to teams based in, or incorporating members from, low- and middle-income countries (LMICs). Teams based in LMICs may be constrained by limited access to information via libraries and the internet, limited full-text download capability, and difficulties in coordinating work across long distances. To overcome these problems, such teams should build information retrieval costs (including the costs of an information specialist or librarian) into their applications, as well as a budget for travel. Free and low-cost applications, such as Dropbox, Elluminate and Skype, facilitate remote working and can help keep the proposal budget down. Organisations such as the EPPI-Centre also provide online review management software to enable remote working.
2. Getting the technical proposal right
Propose appropriate review question(s). To get a useful answer, you need a good question. The main issue in setting the question is its breadth. Everyone would like to know the answer to the question, ‘How do we end global poverty?’, but it is clearly too broad for a systematic review. A systematic review is most successful when the methodology is applied to a clearly defined research question. Many of the systematic review questions that international development researchers have attempted to answer are too broad, which inevitably leads to challenges. Among reviewers, this debate is known as lumping versus splitting. Lumping reviews evidence on a broad scope of interventions, outcomes and/or study designs (research questions); splitting usually focuses on a single intervention or research question.
So, how does one decide whether to lump or split? Within international development, rigorous evaluation studies are still thin on the ground for many interventions. Where this is the case, it is easier to lump. When there is more evidence, the case for splitting is stronger. Hence, 3ie and its partners frequently propose broad questions, like the ones in the Systematic Review 6 call. Some of these questions may indeed be too broad for a systematic review of sufficient analytical depth to be easily undertaken.
Under these circumstances, the best applications will undertake a preliminary assessment (or scoping exercise) of the existing evidence to propose answerable review questions. Stronger applications have paid close attention to the definition, refinement and scope of the review question(s). Within systematic reviews, this is commonly referred to using the acronym PICOS, which stands for the Population (or sub-groups) to be included; the Intervention(s) in question; the Comparison groups or counterfactuals to be considered; the Outcomes to be investigated; and the Study designs eligible. Reviews which aim to answer multiple sub-questions usually need a separate PICOS specification for each question.
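To make the PICOS framework concrete, the sketch below records one hypothetical sub-question as a structured specification. Every field value is invented for illustration and is not drawn from any actual 3ie call.

```python
# Hypothetical PICOS specification for a single review sub-question.
# All field values are invented examples, not from any actual call document.
picos = {
    "population": "smallholder farmers in low- and middle-income countries",
    "intervention": "farmer field school training programmes",
    "comparison": "no training, or standard agricultural extension services",
    "outcomes": ["crop yields", "household income", "adoption of practices"],
    "study_designs": ["randomised controlled trials",
                      "quasi-experimental designs with a valid counterfactual"],
}

def summarise(p):
    """Render a one-line eligibility summary for a protocol draft."""
    return (f"{p['intervention']} versus {p['comparison']}, "
            f"among {p['population']}, measuring {', '.join(p['outcomes'])}")

print(summarise(picos))
```

Writing each sub-question out in this form makes it immediately clear whether two sub-questions truly share the same population, comparison and eligible study designs, or need separate specifications.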
Use a theory-based research approach. A common pitfall seen in the applications that were analysed was that they did not specify or use a theory of change. Theory of change analysis, in the context of a systematic effectiveness review, uses the programme theory (or logic model or conceptual framework) to identify the activities, mechanisms, outputs and intermediate and ‘endpoint’ outcomes, together with other key features of the implementation process that have to be in place for an intervention to be successful. This then allows data to be collected and analysed along the causal chain to help understand what factors affect the success or failure of an intervention.
Theory-based systematic reviews aim to combine the best of both traditional summative systematic reviews, which identify ‘what works’ in a transparent and replicable manner, and formative evaluation approaches, which help explain why and under which circumstances the intervention may or may not be effective.
Strong proposals for effectiveness reviews engage with all of the summative evaluative evidence on efficacy and effectiveness. 3ie has laid out the steps for conducting a good effectiveness review in international development (available here). A fully integrated synthesis would also incorporate broader evidence, including observational studies and qualitative research, to analyse the early stages of the causal chain, such as targeting, take-up and adherence. This approach can be particularly useful when the types of interventions, evaluations and data sets are not the same, as 3ie has adopted in its systematic review of farmer field schools evidence – see here for discussion.
3. Ensuring adequate user involvement and dissemination plan
Provide a policy influence plan and adequate staffing and budget. In evaluation, usefulness is the golden principle. It is a key 3ie objective to fund evaluations and reviews that help strengthen development policies and improve development practice. Consequently, the required policy influence plan is a vital component of all 3ie-funded studies. A clear plan to influence the users of evidence includes adequate staffing and budgeting of the plan. 3ie requires these plans because it knows that sound evidence alone does not, as a rule, affect policy or practice. A policy influence plan needs to start at the very beginning of the review process and remain integrated throughout. For this reason, 3ie expects to see users of evidence included in the review's advisory group. Consultation with and involvement of policy actors and practitioners throughout the review cycle helps to ensure that the review asks (and hopefully answers) questions that are relevant to policymakers and programme implementers. See the International Development Coordinating Group advisory group guidance here.
Communication planning helps to ensure that key audiences and policy influence actors are aware of and engage with the evidence generated by the review and with the team producing it. Ongoing engagement of team members with policy actors and influencers helps to ensure understanding of, interest in and trust in the review findings. Identification of key actors and attention to engagement with them make for a stronger plan and higher score.
Make your ‘front ends’ friendly. 3ie expects the findings to be presented in two formats: one for the registry and one that follows the 3ie template for writing a clear and accessible report. The latter is the version that is important for influencing policy and practice. It provides a more user-friendly narrative than the standard technical systematic review report does. It engages more closely with information on implementation and includes policy recommendations based on the findings. The budget needs to cover the production of both types of reports.
4. Final words on housekeeping
Stick to the word count and edit before posting to the online form. 3ie mandates a strict word count on the application form. This means that applicants working on their applications offline should be careful about adhering to the word limit while updating the information online. The system will not record words that exceed the count for a section, so that information will not be reviewed. Put the most important information first, and edit each section while keeping track of the word count as you finalise the content. The analysis found some entries that appeared cut off or poorly constructed, most likely because applicants did not edit the content before loading it, or ran out of time before the strict application deadline.
Read 3ie’s terms and conditions. There have been cases where 3ie was unable to commission a review, despite the application receiving an excellent score, because the successful institution could not abide by the terms of the 3ie grant agreement. To avoid such problems, be sure to read the terms of the 3ie grant agreement, and ensure that the applicant institution is able and willing to abide by its conditions before submitting an application. Help is also available in the guidelines (found on the right-hand panel of the online application system). An email can also be sent to 3iehelp (at) 3ieimpact (dot) org.
3ie also welcomes feedback from applicants on ways to make Systematic Review grant windows more accessible. Please email us at firstname.lastname@example.org