3ie

› Context: the results agenda
› The call for evidence
› Rigorous impact evaluation
› The international response
› Reactions
Methodological developments

Results agenda
› US: Government Performance and Results Act (GPRA), 1993 – USAID adopted six 'strategic development goals' (e.g. broad-based economic growth), defining outcome indicators for each (e.g. per capita growth)
› UK: Public Service Agreements and Service Delivery Agreements (very similar to Report Cards and Development Indicators in South Africa)
› MDGs
› Focus on outcomes
› Raises questions about attribution
Goals were ‘so broad, and progress affected by many factors other than USAID programmes, that the indicators cannot realistically serve as measures of the agency’s specific efforts’
In 2000 USAID abandoned the goal outcome measures as a basis for monitoring its performance
› Timescale, exacerbated by data lags
› Decentralized programs not aligned with higher-level objectives
› Data quality
› Impact?
Development agencies may not know exactly what impact their efforts are having given the wide range of other agencies and external political, economic and social factors involved.
› Use existing data systems, but with a view to data quality and timeliness
› Invest in improving existing data systems rather than making new (often parallel) ones
› Know the advantages and limitations of different sorts of data (administrative versus survey data)
› Avoid indicator proliferation
› Don't be all M and no E
Outcome monitoring is not a valid basis for rigorous performance measurement
… hence the need for impact evaluation
International agency response
› World Bank: DIME and IEG
› Inter-American Development Bank*
› At country level, the response is strongest in Latin America
› Progresa
› In Mexico and Colombia, social programs legally require impact evaluations to secure funding
Note: * See IDS Bulletin March 2008 for discussion
After decades in which development agencies have disbursed billions of dollars for social programs, and developing country governments and nongovernmental organizations (NGOs) have spent hundreds of billions more, it is deeply disappointing to recognize that we know relatively little about the net impact of most of these social programs.
› A UNICEF review (1995) found 15% of evaluations had 'impact assessments', but 'many evaluations were unable to properly assess impact because of methodological shortcomings'
› An ILO review of 127 studies of 258 community health programs found only 2 which could give robust conclusions on impact on access
› A recent (2008) NORAD review found many reports draw impact conclusions on the basis of weak evidence (or none at all)
Billions of dollars are being spent with no idea whether they fund the best intervention to achieve the desired outcome – or even whether they help achieve it at all. Rigorous impact evaluation – and only rigorous impact evaluation – will provide the evidence needed for optimal resource allocation.
When will we ever learn?
› An approach which can attribute change in outcomes to the intervention
› Requires a well-defined control group
› Strongest design is developed at the project design stage
› Where feasible, a randomized approach should be considered

Poverty Action Lab
› Active promotion of randomized controlled trials (RCTs)
› Evaluations in conjunction with NGOs, especially in India and Kenya
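The attribution logic above can be sketched in a few lines of code. This is a hypothetical simulation, not any agency's method: the outcome function, the assumed true effect of 5.0, and all numbers are invented for illustration. The point is that when assignment to treatment is random, external factors average out across the two groups, so a simple difference in group means recovers the program's average effect.

```python
import random
import statistics

# Illustrative only: simulate an outcome driven partly by a program effect
# and partly by unrelated background factors (the "other agencies and
# external factors" problem). All parameters below are assumptions.
random.seed(42)

TRUE_EFFECT = 5.0  # assumed average program impact on the outcome


def outcome(treated: bool) -> float:
    background = random.gauss(50, 10)  # external factors, unrelated to the program
    return background + (TRUE_EFFECT if treated else 0.0)


# Randomly assign 2,000 units to treatment or control (the control group)
assignments = [random.random() < 0.5 for _ in range(2000)]
treated = [outcome(True) for t in assignments if t]
control = [outcome(False) for t in assignments if not t]

# With random assignment, difference in means estimates the program effect
estimate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated impact: {estimate:.1f}")
```

Without the randomized control group, comparing outcomes before and after the program (or against a self-selected comparison group) would mix the program effect with the background factors, which is exactly the attribution problem outcome monitoring runs into.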
Joint AfrEA, NONIE, 3ie impact evaluation conference Cairo May 2009
NONIE versus 3ie

History
› NONIE: origins in a DAC meeting, November 2006
› 3ie: Evaluation Gap Working Group

Purpose
› NONIE: demand generation, guidance, coordination
› 3ie: promote quality impact studies, including financing; advocacy

Membership
› NONIE: evaluation networks
› 3ie: governments and NGOs; IE practitioners are associates

Engagement
› NONIE: OPSC represented
› 3ie: sign up, membership and studies
Websites
› World Bank (PREM, IEG and NONIE)
› Poverty Action Lab
› 3ie – www.3ieimpact.org

Training courses
› PAL
› IPDET
› Africa Impact Evaluation Initiative

Opportunities to conduct IEs
› DIME
› 3ie
› Own resources
› Each year, two themes and an open window
› Themes being determined – see the 3ie website for details and to submit proposed questions
› Southern-driven, issues-based, with southern-led evaluation teams
› Under each of these three windows:
  › 10-12 'quick wins'
  › 6-8 large studies
  › 6 baseline studies
  › 6 synthetic reviews
› 'Only promoting RCTs' – not true
› 'RCTs seen as gold standard' – not exactly true: issues-led not methods-led, but where feasible an RCT should certainly be considered (best available method)
› 'A positivist approach' – true, but evidence-based policy making is an inherently positivist approach
› 'Another northern initiative' – partly true, but 3ie offers great potential for (1) Southern ownership of the evaluation agenda, and (2) harmonization around quality standards
The current policy framework in South Africa implicitly demands quality impact evaluations, and resources are available to help supply them
› The move to performance-based management leads to questions being asked about attribution
› Impact evaluation is the approach to be adopted to address attribution
› Impact evaluation has to adopt certain technical standards if it is to provide proper evidence
› The new international initiatives exist to support efforts to expand programs of impact evaluation
Visit www.3ieimpact.org