The Malaysian Evaluation Society (MES) will hold its 6th International Evaluation Conference in Kuala Lumpur from 24-27 March 2014. The theme of the conference is "Results Based Management and Evaluation: Lessons Learnt-Experiences Gained". The MES conference organising committee invites expressions of interest from potential presenters for paper, workshop, and poster presentations at the conference. Submission deadline: 15 December 2013.
The International Energy Policies and Programmes Evaluation Conference will be held 9-11 September 2014 in Berlin. This is an international conference for policymakers, programme implementers, evaluators, researchers and those interested in evaluation issues related to energy efficiency and low-carbon policies and programmes. Abstracts are invited for presentation at the conference (see the conference topics). Deadline for submission: 23:59 GMT, 16 December 2013.
The Brooks World Poverty Institute and the Institute for Development Policy and Management at the University of Manchester announce a call for papers for an International Workshop entitled: What Works for Africa’s Poorest? Practice and Policy, to be held in London on January 13, 2014. Deadline for submissions: 19 December 2013.
The Leverhulme Centre for Integrative Research on Agriculture and Health (LCIRAH), an interdisciplinary centre established by the London International Development Centre, invites submission of abstracts for its 4th annual research conference, to be held at Birkbeck College, London, on 3-4 June 2014. The theme for the conference is agri-food policy and governance for nutrition and health. Deadline for submissions: 20 December 2013.
ALNAP invites panel proposals for its 29th Annual Meeting on engaging with crisis-affected people in humanitarian action. The meeting will be hosted by African Humanitarian Action in Addis Ababa on 11-13 March 2014.
Representatives of communities affected by crises, humanitarian practitioners, academics, and other suitably qualified individuals and organisations can propose and organise panels. Panels should focus on new learning and emerging best practice related to the topic of the meeting. Deadline for applications: 2 December 2013.
The International Association for Impact Assessment annual conference will be held 8-11 April in Chile. The theme this year is 'Impact Assessment for Social and Economic Development'. Posters and paper submissions are being accepted. Deadline for abstracts: 6 December 2013.
The conference organised by the Finland Futures Research Centre, University of Turku, will be held 11-12 June, 2014 in Helsinki. The conference will bring together experts from the fields of Futures Studies, Climate Change and Sustainable Development and enhance interaction and cooperation between the scientific community, policymakers and representatives of companies.
Theoretical, methodological and empirical contributions are solicited. Deadline for submission is 16 December 2013.
European Association of Development Research and Training Institutes (EADI) conference: Call for papers
EADI is requesting papers on development evaluation for the 14th EADI conference on Responsible Development in a Polycentric World, to be held 24-26 June 2014 in Bonn. Deadline for abstracts: 15 December 2013.
Following the devastation caused by Typhoon Haiyan in the Philippines, Evidence Aid is working with colleagues in the disaster community to compile evidence-based resources.
Abstracts are being accepted for the AfrEA 2014 biennial conference, 3-7 March 2014. The theme is 'Evaluation for Development: From Analysis to Impact'. Deadline for abstracts: 15 November 2013.
The Campbell Collaboration requests submissions for its annual colloquium to be held 16-19 June in Belfast, Northern Ireland. The theme is 'Better evidence for a better world'.
Proposals for oral paper presentations on any topic relevant to the development and application of systematic review methods, as well as topics related to the iterative cycle of evidence gathering and decision-making about social interventions are welcome.
Campbell is also inviting poster and oral presentations on research related to interventions for improving the well-being of society or individuals in any of the key Campbell areas: Crime and Justice, Education, International Development, Social Welfare, Research Synthesis Methods (or with implications for synthesis), and Knowledge Translation and Implementation.
Deadline for submissions: 20 December 2013
In Impact magazine, published by Population Services International, Howard White writes about the lessons 3ie has learned from conducting and managing impact evaluations over the past decade.
A 3ie-supported study by Grassroot Soccer on voluntary medical male circumcision in Zimbabwe was featured in The Huffington Post.
The Pacific Conference for Development Economics (PacDev) 2014 is accepting papers. Please submit via this site by 2 December, 5 pm PST. Submissions must include the following:
● Name, Co-authors
● Paper Abstract (300 words maximum)
● Paper to Upload
Completed papers will receive priority. If you are interested in proposing an entire session, please submit papers individually online and include the proposed session title that groups them together.
The conference will be held 15 March 2014 at the University of California, Los Angeles. The goal of PacDev is to bring together graduate students, faculty and practitioners to present and discuss various issues facing developing economies.
An article in The Hindu newspaper (dated 31 October 2013) covers the 3ie-ASCI conference on Measuring Results.
Howard White, 3ie Executive Director, was part of a discussion on the evaluation of social sector programmes on Rajya Sabha TV with Ajay Chhibber, Director General, Independent Evaluation Office, Government of India; Thoriq Ali Luthfee, Minister of Health, Maldives; and Biraj Patnaik, Principal Advisor, Right to Food, in the office of the Commissioners to the Supreme Court. The discussion was anchored by Girish Nikam.
Two 3ie-supported studies were cited in The Economist. These are 'Girl power: cash transfers and adolescent welfare. Evidence from a cluster-randomized experiment in Malawi', by Sarah J. Baird, Ephraim Chirwa, Jacobus de Hoop and Berk Özler, and 'Relative effectiveness of conditional and unconditional cash transfers for schooling outcomes in developing countries: a systematic review', by Sarah Baird, Francisco H. G. Ferreira, Berk Özler and Michael Woolcock.
The latest issue of the Journal of Development Effectiveness features, among other papers, 'Sound expectations: from impact evaluation to policy change' by Gala Diaz Langou and Vanesa Weyruch, and 'Do for-profit microfinance institutions achieve better financial efficiency and social impact? A generalised estimating equations panel data approach' by Philippe Louisa and Bart Baesens.
Girl power: cash transfers and adolescent welfare. Evidence from a cluster-randomized experiment in Malawi
This 3ie-supported study now has a paper in the NBER working paper series.
This study summarises evidence on the short-term impacts of a cash transfer programme on the empowerment of adolescent girls in Malawi during and immediately after the two-year intervention.
Getting children into school is only part of the education battle. We must also ensure they learn once they are there. Blog by Howard White in the Guardian
The University of East Anglia is offering a one-year MSc course in Impact Evaluation in International Development.
The course has been designed for students who are interested in designing and implementing development projects and programmes and/or in researching development effectiveness, and who need to develop and enhance their skills for undertaking high-quality rigorous impact evaluations.
3ie is offering a scholarship for this course.
This 3ie-supported study now has a paper in the NBER Working Paper Series.
This study evaluates the impact of a programme that provides pre-fabricated housing to members of extremely poor population groups in Latin America. Findings show a positive effect on beneficiaries' general well-being. There is also evidence of improvement in children's health in two countries.
3ie and Innovations for Poverty Action (IPA) are pleased to announce a research and grant-making programme designed to increase the evidence base on transforming conflict-affected regions into stable settings.
Together, 3ie and IPA are fundraising for this groundbreaking programme. The goal is to support over 14 new evaluations, thereby significantly increasing the available evidence on programmes aimed at reducing violence around the world. The broader programme will include an international evidence review, technical guidance notes related to research in conflict-affected settings, and dissemination activities to share newly generated evidence with key policymakers.
Please contact Annette Brown (email@example.com) or Kelly Bidwell (firstname.lastname@example.org) if you would like to support this programme or learn more.
A 3ie study on the impact of daycare in Brazil shows that it has the potential to improve household welfare, especially for the poor. Household income went up (8%), as did the labour supply of the carer (usually the mother). Children also fared better in terms of cognitive and anthropometric outcomes.
This paper by Marie Gaarder and Jeannie Annan argues that it is both possible and important to carry out impact evaluations even in settings of violent conflict, and it presents some examples from a collection of impact evaluations of conflict prevention and peacebuilding interventions. This Policy Research Working Paper is published by the Independent Evaluation Group, World Bank.
The Campbell Collaboration training videos were recorded at the May 2013 Campbell Colloquium. The introductory videos are intended for researchers conducting systematic reviews as well as for policymakers interested in evidence-based policies. The advanced methods / applied topics videos are for researchers carrying out systematic reviews.
Transfer Project's annual research workshop in April, 2013 featured evaluation results from nine cash transfer programmes from sub-Saharan Africa.
3ie researchers participated in the M&E Roundtable series on Systematic Reviews organised by CLEAR South Asia and the Community of Evaluators in New Delhi.
Do conditions moderate the effects of cash transfer programs? Preliminary findings from a systematic review
The preliminary findings from the systematic review indicate that both conditional cash transfer (CCT) and unconditional cash transfer (UCT) programmes improve school enrolment: by 23% in the case of UCT programmes and 41% for CCT programmes.
An impact evaluation of the loans offered by Compartamos, the largest microlender in Mexico, shows that loan recipients grew their business revenues and expenses, were happier and more trusting, had greater household decision-making power, and were better able to manage liquidity and risk. However, there was little evidence that the loans had an impact on measures of wealth such as household income, business profits, or consumption.
The study adds to the evidence that microcredit is generally beneficial, but not necessarily transformative.
The Research Department of the Agence Française de Développement (AFD) has published the following papers in its Ex Post collection: 'Insuring health or insuring wealth? An experimental evaluation of health insurance in rural Cambodia' and 'Sky impact evaluation, Cambodia, 2010, village monographs'.
Ian Ramage and David Levine, co-authors of these studies, will be speaking at the Health Financing in Developing Countries Conference (Financement de la santé dans les pays en développement) on 27 and 28 May at AFD.
The latest issue of Journal of Development Effectiveness features papers on impact evaluation design for the Millennium Villages project in Northern Ghana and the effect of solar ovens on fuel use, emissions and health: results from a randomised controlled trial. The paper on 'Effects of the Frontiers Prevention Project in Ecuador on sexual behaviours and sexually transmitted infections amongst men who have sex with men and female sex workers: challenges on evaluating complex interventions,' is open access.
How should impact of development work be measured? Panelists at the recent discussion on the issue offer suggestions to demystify the process and ensure it supports accountability and learning.
Gareth Roberts of the University of the Witwatersrand presents the findings of this study, which led to a policy change in South Africa.
The study finds that those who were allocated a wage subsidy voucher were more likely to be employed both one year and two years after allocation. The impact of the voucher thus persisted even after it was no longer valid. This suggests that those young people who entered jobs earlier than they would have – because of the voucher – were more likely to stay in jobs. This confirms the important dynamic impacts of youth employment. It also suggests that government interventions which successfully create youth employment are important and have virtuous long-term effects.
This report gathers together examples which attempt to explain how and why change happens as a result of cash transfers (CTs). It first presents a selection of theories of change about how cash transfers are expected to work in general, drawn from academic literature, and then a selection of theories as used in a few specific cash transfer programmes.
Podcast of a presentation by Jyotsna Puri at the 28th Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) meeting.
In a video interview, Phil Davies talks about the importance of timing for researchers wanting to engage and inform policymakers. He talks about 3ie GapMap as a visual and engaging tool for understanding what is known and what isn't.
In Part 2 of the video, he stresses the importance of physical access, encouraging efforts towards ensuring that policymakers are aware of, and can get hold of, the kinds of evidence that they need to make good decisions.
Kirsty Newman underlines three important advantages of synthesised evidence in international development -- it can tell us something is true, which we didn't realise was true; it can also tell us something that we all thought was true is actually not true; it can also tell us that something we thought we fully understood, we actually don’t have a clue about.
A revised paper by Abhijit Banerjee, Esther Duflo, Rachel Glennerster and Cynthia Kinnan. It reports on the first randomized evaluation of the impact of introducing the standard microcredit group-based lending product in a new market.
A DFID guidance note on the best ways to assess evidence in international development. It offers some rules on:
- understanding different types of empirical research evidence
- appreciating the principles of high quality evidence
- considering how the context of research findings affects the way that staff may use them
- understanding how to make sense of inconsistent or conflicting evidence
Theme: What evidence-based development has to learn from evidence-based medicine
Speaker: Chris Whitty
Theme: What we have learned from 3ie's experience in evidence- based development
Speaker: Howard White
William Savedoff and Ted Collins find 14 searchable evaluation databases that provide information for policymaking and programme development. But how accessible are these databases? Can they do a better job?
The search is on for databases that include studies that attribute impact to particular programmes, interventions or policies and whose findings are relevant to low- and middle-income countries.
Howard White discusses different designs of randomised controlled trials and addresses criticisms of RCTs, which he argues mostly rest on misunderstandings of the approach.
The standard approach to policymaking and advice in economics implicitly or explicitly ignores politics and political economy, and maintains that, if possible, any market failure should be rapidly removed. This NBER working paper by Daron Acemoglu and James A. Robinson argues that this conclusion may be incorrect.
This CGD working paper by Tessa Bold, Mwangi Kimenyi, Germano Mwabu, Alice Ng'ang'a, and Justin Sandefur looks at how a Kenyan education programme, proven successful in a randomised controlled trial, failed to achieve similar outcomes when implemented at scale.
See the blog by Justin Sandefur, 'Finding what works in development: what is the what?'
Esther Duflo discusses a 3ie-supported study on Gujarat pollution control in an interview to Yale Insights.
This paper by Paul Shaffer presents a selective review of empirical examples of the use of mixed method, or Q-Squared, approaches in impact assessment.
The Centre for Development Impact (CDI) will design innovative approaches and share learning on impact evaluations. "It is also dedicated to expanding the tool box of methods for impact evaluation, critically evaluating established and new methods as we go along," says Lawrence Haddad, Director, Institute of Development Studies.
Chris Whitty talks about the importance of evidence synthesis in communicating research findings to policymakers, at the 2013 STEPS Centre Symposium.
Most scientists are unsure of how to engage with policy making processes. Better insight into it is the first step to engaging with policy making more effectively. Scientific evidence gains traction when it tells a clear and relevant story.
3ie's Philip Davies cited in the article.
Markus Goldstein highlights the findings from two 3ie-supported studies -- a systematic review on the impact of daycare programmes and the impact evaluation of Save the Children's early childhood development programme in Mozambique.
"The purpose of this document is to advance the Foundation's existing work so that our evaluation practices become more consistent across the organization. We hope to create more common understanding of our philosophy, purpose, and expectations regarding evaluation, as well as clarify staff roles and available support."
Bill Gates discusses the importance of measurement in improving the human condition. "From the fight against polio to fixing education, what's missing is often good measurement and a commitment to follow the data. We can do better. We have the tools at hand."
Howard White, 3ie Executive Director, discusses "closing the evidence gap", randomised controlled trials, and the value of impact evaluation with Dereck Rooken-Smith of ODE, AusAID.
Louise Shaxson, Research Fellow, ODI on how "Pressure to demonstrate concrete impacts on public policy is encouraging researchers to make grand claims about what we/they are likely to achieve."
Chris Whitty, DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon, its Chief Economist, respond to Chris Roche and Rosalind Eyben's concerns over the results agenda.
3ie finds a mention in the article by Natasha Gilbert. "To aid such reviews, the International Initiative for Impact Evaluation (3ie), a non-profit organization based in Washington DC that funds and conducts aid-assessment research, is setting up a database in which researchers can register studies. Expected to launch later this year, the initiative aims eventually to provide a complete listing of assessments for various types of aid interventions, says Howard White, executive director of 3ie."
Rosalind Eyben, Research Fellow at IDS and Chris Roche, Associate Professor, La Trobe University kick off a discussion on the implications of evidence-based approaches.
Nina Cromeyer Dieke shares tips and lessons the development community can learn from mainstream media. The 3ie policy impact toolkit finds a mention in the story.
Berk Özler on the pros and cons of using surveys to measure impact.
"...So, evaluating a large government program using an unrelated routine government survey may be fine (although I suspect that they too will have biases depending on what the respondents think the survey is for, how large, important, and ‘in the news’ the intervention is, etc.), but evaluating your own experiment that aims to change some behavior by asking study participants whether they have changed that behavior is unacceptable."
How can systematic reviews contribute evidence for policy? Blogs on this page take up the debate on conducting and using systematic reviews.
Tracey Koehlmoos, adjunct professor at George Mason University, Washington DC, and adjunct scientist at ICDDR,B blogs on sessions at the Dhaka Colloquium on Systematic Reviews.
"...Perhaps the most controversial session that I have attended so far was provocatively named “Rapid reviews: opportunity or oxymoron?” 3ie deputy director, Phil Davies presented on “rapid evidence assessment” and their place in the pantheon of evidence synthesis efforts aimed at informing policy making. Serious questions remain about rapid reviews being biased compared to systematic reviews—and how the process would even allow the developers of these reviews to recognize any biases. However, Davies pointed out that “all evidence is probabilistic.” ..."
"In our view, the vast majority of aid interventions have almost no rigorous evidence behind them. A very small set of interventions – including LLIN distribution – have a broad, impressive evidence base. Deworming is somewhere in between. The studies discussed here are rigorous, have highly encouraging findings, held up to the best scrutiny we could bring to them. At the same time, many questions remain unanswered. This is one of the areas in which an additional long-term study would have the most effect on our views."
The Journal of Development Effectiveness, Volume 4, Issue 4, includes a host of interesting articles analysing the impact of development interventions in Nigeria, Yemen and Ethiopia. This issue also includes a piece on using evidence from impact evaluations to influence policy.
Duncan Green on Lant Pritchett's critique of 'RCT randomistas'.
"RCTs are a tool to cut funding, not to increase learning. ‘Randomization is a weapon of the weak’ – a sign of how politically vulnerable the argument for aid has become since the end of the Cold War. ‘Henry Kissinger wouldn’t have demanded an RCT before approving aid to some country.’ And I can’t see the military running RCTs to assess the value for money of new weaponry before asking for more cash (mind you, if they did, that might at least save some money on Trident….)."
David McKenzie takes on Angus Deaton on trial registries. "...because the general points that identification of genuine impacts requires not exploring 1000 different patterns in the data and choosing the one which has a large t-value is not specific to RCTs at all – indeed 3ie is building a trial registry that will attempt to register non-experimental impact evaluations as well."
The special issue focuses on how impact evaluation can be applied to a range of policy questions. The papers address issues such as education, agriculture, migration and microfinance.
"...The MCC brief makes explicit the argument that impact evaluations need to be designed for learning, not just accountability (which had been the primary goal when these set of evaluations started)...."
The Millennium Challenge Corporation has released its first five impact evaluations for farmer training activities in Armenia, El Salvador, Ghana, Honduras, and Nicaragua.
Systematic reviews hold great promise and are relevant to policymakers because they are systematic. Adam Wagstaff uses the Waddington et al. toolkit to analyse two health systems systematic reviews, showing that it is possible to do a systematic review unsystematically and give a biased view of the literature.
Evidence alone is not enough: policymakers must be able to access relevant evidence if their policy is to work
It is not enough to look for evidence of a previous policy success. Jeremy Hardie and Nancy Cartwright argue that exactly what evidence is needed, and of what, is the key question that needs to be asked for making real evidence-based social policy interventions.
When we (rigorously) measure effectiveness, what do we find? Initial results from an Oxfam experiment
Dr Karl Hughes on Oxfam GB's effectiveness reviews: "Currently, there is considerable interest in how to evaluate the impact of interventions that don’t lend themselves to statistical approaches, such as those that are seeking to bring about policy change (aka “small n” interventions). See a recent paper by Howard White and Daniel Phillips. We have attempted to address this by developing an evaluation protocol based on a methodology called process tracing used by some case study researchers."
"This discussion paper aims to support appropriate and effective use of impact evaluations in AusAID by providing AusAID staff with information on impact evaluation. It provides staff who commission impact evaluations with a definition, guidance and minimum standards."
"Is in danger of being messed up. Here is why: There are two fundamental reasons for doing impact evaluation: learning and judgment. Judgment is simple – thumbs up, thumbs down: program continues or not. Learning is more amorphous – we do impact evaluation to see if a project works, but we try and build in as many ways to understand the results as possible, maybe do a couple of treatment arms so we see what works better than what."
This issue highlights why systematic reviews should be an important component of evidence-informed development policy and practice. Papers by 3ie researchers and other authors demonstrate how reviews can be made to live up to the promises generated around them.
3ie recently participated at an IFAD learning event on impact evaluations for environmental and climate change interventions.
William Savedoff blogs on the challenge that impact evaluation poses for organisations. "...Other times, the concerns reflect an unwillingness to clearly state their goals, be explicit about their theories of change, or put their beliefs about what works to an objective test. Yet, this is exactly what is at stake with evaluation: are you willing to be proven wrong?"
World Bank supported evaluation of a public works programme in Ethiopia shows that it's an effective way to help people manage risk and help stabilise income generation.
Cash on Delivery for the world's poorest: "If the project is ambitious, what’s really fascinating is that Dfid has tried to evaluate Tuungane I rigorously, using a randomised controlled trial. Villages were enrolled in Tuungane through a public lottery. With 1.8 million people in the treatment group and a large control group, such an evaluation would be challenging to conduct in a rich country."
The Millennium Villages Project, Jeffrey Sachs's integrated approach to rural development, has been subject to criticism following serious flaws in its evaluation methods. A new project site in northern Ghana will be independently assessed.
Dr. Aditi Mukherji of International Water Management Institute, wins the Norman Borlaug Award in recognition of her impact evaluation on groundwater resources in agriculture. This 3ie-supported study led to major policy changes benefiting marginal farmers in West Bengal.
GiveWell lists the principles for deciding how much weight to put on a study’s claims.
"In today’s social sciences environment – in which preregistration is rare – we think that the property of being an RCT is probably the single most encouraging (easily observed) property a study can have, which has a practical implication: we often conduct surveys of research by focusing/starting on finding RCTs (while also trying to include the strongest and most prominent non-RCTs). ...And we think it’s possible that if preregistration were more common, we’d consider preregistration to be a more important and encouraging property of a study than randomization."
"Public debate about two prominent poverty-alleviation programs shows that over the past 15 years international development has become much more scientific," say Dean Karlan and Caroline Fiennes.
Dr Andrew Clappison blogs on the discussions at the recent Theory of Change workshop and suggests ways DFID may want to consider improving the theories of change for its programmes.
The paper titled 'Challenges in Banking the Rural Poor: Evidence from Kenya's Western Province' combines experimental and survey evidence to document some of the supply and demand factors behind low levels of financial inclusion in rural Kenya.
The World Bank Human Development Network has launched an Impact Evaluation Toolkit, a hands-on guide on how to design and implement impact evaluations especially those related to maternal and child health and those involving results-based financing. It can also be adapted for impact evaluation in other fields.
Lawrence Haddad on a 3ie-IDS randomised controlled trial on how to communicate research findings.
The key questions posed in the research were:
- Do policy briefs influence readers?
- Does the presence of an op-ed type commentary within the brief lead to more or less influence? and
- Does it matter if the commentary is assigned to a well-known name in the field?
A 3ie-supported study on the impact of water and sanitation on child health has been published as a Population Council working paper.
Using a combination of qualitative and quantitative data, this paper, 'The Impact of Water Supply and Sanitation on Child Health: Evidence from Egypt,' investigates whether access to improved sources of water and sanitation is an effective treatment for the incidence of diarrhea among children under five years of age in Egypt.
"In the eternal quest for better data, one of the most exciting modes of data collection as part of survey efforts across the globe is known as Computer-Assisted Personal Interviewing, or CAPI.
"CAPI means the integration of interviewing and data entry process via the use of a handheld device, such as a tablet computer or a netbook, preloaded with an electronic questionnaire."
"Potential biases can arise when collecting qualitative data, in deciding which questions are asked, in what order, how they are asked and how the replies of the respondents are recorded.
"There can also be biases in how the responses are interpreted and analyzed, and finally which results are chosen for presentation. Of course quantitative data and analysis is also prone to bias, such as sampling bias and selection bias. But methodologies have been developed to explicitly deal with these biases. Indeed evaluation designs are judged on precisely how well they deal with these biases."
"The Cochrane Collaboration’s recent summary of the evidence on treating school-age children for soil-transmitted intestinal worms (or STH) is incomplete and misleading. While we do not comment on the evidence of the health and cognitive outcomes reviewed, we continue to find that the educational benefits alone justify mass school-based deworming. We strongly endorse the WHO and Copenhagen Consensus’s recommendation to mass treat children for STH."
A new systematic review by Cochrane Collaboration questions the effectiveness of deworming drugs in improving nutritional status, school performance, and cognitive test scores. Schistosomiasis Control Initiative responds to this new evidence.
The Norad Evaluation Department’s recently released Annual Report emphasises the need to ensure that more of the interventions funded or supported by Norwegian aid build in rigorous evaluations. The report calls for funds, time and human resources to be set aside for this purpose.
Lawrence Haddad discusses two new impact evaluation designs – Howard White and Daniel Phillips' 'small n' approach, and the methods prescribed by Stern et al.
Alexandra Fielden of Innovations for Poverty Action blogs on the benefits of the chlorine dispenser system in Kenya
"One inexpensive, safe, and effective option to improve water quality while protecting against re-contamination is to treat water with chlorine using IPA’s innovative Chlorine Dispenser System."
"Installed at a communal water source, users simply turn a valve on the dispenser to release a metered dose of chlorine into their jerricans, which they then fill with water as usual. The chlorine mixes with the water and kills the germs that cause many diarrheal diseases. The chlorine provides protection from recontamination for up to 48 hours, and achieves an average diarrhea reduction of 41 percent."
IDEAS fifth national review, “Implementing a Subnational Results-Oriented Management and Budgeting System: Lessons from Medellin, Colombia” by Rafael Gomez, Mauricio Olivera and Mario A. Velasco is now available online.
The first evaluation of early childhood development programmes in Africa shows that children who attend preschool are much more likely than those who do not to be interested in mathematics and writing, recognize shapes, and show respect for other children.
Carolyn Miles, CEO, Save the Children, blogs about the impact of preschool programmes in Mozambique.
Which studies should someone be paid to re-examine? asks David Roodman, research fellow, Center for Global Development and author of Due Diligence.
"Repeating experiments can also resolve debates about studies some experts think were poorly done. They help the U.S. government, which spent $15 billion on international aid in 2011, and private donors, who spend billions more, decide how to spend their budgets," says Innovation News Daily.
Watch the presentations from the Monitoring and Evaluation Roundtable 2 on Theory of Change, co-hosted by IDRC and CLEAR at J-PAL SA at IFMR.