EVALUATION OF LARGE SCALE HEALTH PROGRAMS
By: Adam F. Izzeldin; BPEH, MPH, PhD candidate.
Department of International Health, TMDU
Cesar G. Victora et al.: Evaluation of Large-Scale Health Programs. In: Michael H. Merson, Robert E. Black, Anne J. Mills (eds.), Global Health: Diseases, Programs, Systems and Policies, 2011.
Contents
Planning the Evaluation
Impact models
Types of inference and choice of design
Defining the indicators and obtaining the data
Carrying out the evaluation
Disseminating evaluation findings
Working in large-scale evaluations
Why Do We Need Large-Scale Evaluations?
• In spite of large investments aimed at improving health outcomes in low- and middle-income countries, few programs have been properly evaluated ("Evaluation," 2011; Evaluation Gap Working Group, 2006; Oxman et al., 2010).
• Each year billions of dollars are spent on thousands of programs to improve health, education and other social sector outcomes in the developing world, but very few programs benefit from studies that could determine whether or not they actually made a difference (Evaluation Gap Working Group, 2006).
Types of evaluations
• External evaluation:
Independent
Carried out by researchers not involved in implementation
Funded by third party
• Internal evaluation:
Dependent
Carried out by implementing institutions
Funded by implementers themselves
• Two categories of evaluation: formative and summative.
Examples of large-scale evaluations
• The Multi-Country IMCI Evaluation
• Accelerated Child Survival and Development Initiative
• Tanzanian National Voucher Scheme for Insecticide-Treated Nets
1. Planning the evaluation
• Who Will Carry Out the Evaluation?
• What Are the Evaluation Objectives?
• When to Plan the Evaluation?
• How Long Will the Evaluation Take?
• Where Will the Evaluation Be Carried Out?
Who Will Carry Out the Evaluation?
• For internal evaluation: carried out by the implementing institutions themselves, sometimes with the help of external consultants for specific tasks.
• For external evaluation: a national or international research institution is recruited (e.g., UNICEF commissioned the Bloomberg School of Public Health at Johns Hopkins University to conduct an independent retrospective evaluation of ACSD in Benin, Ghana, and Mali).
What Are the Evaluation Objectives?
• To review the available documentation on program objectives and goals, and to turn these items into evaluation objectives.
• The ultimate objective of an evaluation is to influence decisions.
• Funders interested in impact outcomes:
(Their decisions will be whether to continue funding, or whether the strategy needs to be reformulated)
• Local implementers interested in quality of service and population coverage:
(Their decisions are related to improving the program through specific actions)
When to Plan the Evaluation?
• Before implementation; at the time the program is being designed
• Early onset, prospective evaluations allow collection of baseline data.
• Allows thorough, continuing documentation of program inputs and the contextual variables that may affect the program's impact.
• Early planning may enable the evaluation team to influence how the program is rolled out, thereby improving the validity of future comparisons.
• A disadvantage of prospective evaluations is that program implementation may change over time for reasons outside the evaluators' control.
How Long Will the Evaluation Take?
• The answer depends on whether the evaluation is retrospective, prospective, or a mixture of both.
Fully prospective evaluations include sequential steps:
1. Collect baseline information
2. Wait until the large-scale program is fully implemented and reaches high population coverage
3. Allow time for a biological effect to take place in participating individuals
4. Wait until such an effect can be measured in an endline survey
5. Clean the data and conduct the analysis
Where Will the Evaluation Be Carried Out?
• Many large-scale programs are implemented simultaneously in more than one country
• This decision is usually taken in agreement with the implementation agencies
• Selection criteria should include characteristics that are desirable in all participating countries (geography, health system strength, epidemiological profile, etc.)
• The rationale for selecting some countries and not others should be made explicit, because it will affect the external validity (generalizability) of the evaluation findings.
2. Developing an Impact Model
• The model helps to clarify the expectations of program planners and implementers
• Contributes to the development of the evaluation proposal
• Helps guide the analyses and attribution of the results
• Can help track changes in assumptions as these evolve in response to early evaluation findings.
• Helps implementers and evaluators stay honest about what was expected
Common framework for evaluation:
• Inputs: staff, drugs, equipment, teaching materials
• Process: training, logistics, and management
• Outputs: health services attendance rates or mosquito nets
• Outcomes: percentage of women giving birth at a healthcare facility, or the proportion of children sleeping under an insecticide-treated mosquito net
• Impacts: reduced mortality or improved nutrition
The IMCI Impact Model
[Diagram] The introduction of IMCI works through three components: training of health workers, health system improvements, and family and community interventions. These are expected to produce improved quality of care in health facilities, improved care-seeking and utilization, improved household compliance/care, and improved preventive practices, leading to increased coverage of curative and preventive interventions and, ultimately, improved health/nutrition and reduced mortality.
Development of an Impact Model
Steps in the development of an impact model:
• Learn about the program: read documents; interview planners and implementers; carry out field visits; use special techniques as needed (e.g., card-sorting exercises)
• Develop drafts of the model: focus on intentions and assumptions; document responses from implementers; record iterations and changes as the model develops
• Quantify and check assumptions: review existing evidence and literature; identify early results from the evaluation (documentation: what was actually done? outcomes: are the assumptions confirmed?)
• Use and evaluate the model: develop an evaluation design, testing each assumption if possible; plan for analysis, including contextual factors; analyze; interpret results with participation by implementers
A Stepwise Approach to Impact Evaluations
1. Policies and results-based planning: Are the interventions and plans for delivery technically sound and appropriate for the epidemiological and health system context?
2. Provision: Are adequate services being provided at the health facility and community levels?
3. Utilization: Are these services being used by the population?
4. Effective coverage: Have adequate levels of effective coverage been reached in the population? (see the sketch below)
5. Impact: Is there an impact on health and nutrition?
6. Cost-effectiveness: Is the program cost-effective?
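As an illustration of step 4, here is a minimal Python sketch of effective coverage approximated as crude coverage discounted by the quality of the service received. The indicator names, the figures, and the simple multiplicative adjustment are illustrative assumptions, not a formula prescribed by the chapter.

```python
# Sketch: effective coverage as crude coverage discounted by quality.
# All figures are hypothetical; the multiplicative adjustment is one
# common simplification, not the only definition of effective coverage.

def effective_coverage(crude_coverage: float, quality_score: float) -> float:
    """Crude coverage (0-1) times the share of users receiving adequate-quality care."""
    return crude_coverage * quality_score

# Hypothetical household-survey results for two tracer interventions
crude = {"facility_delivery": 0.62, "itn_use_children": 0.48}
quality = {"facility_delivery": 0.70, "itn_use_children": 0.90}

for intervention in crude:
    ec = effective_coverage(crude[intervention], quality[intervention])
    print(f"{intervention}: crude {crude[intervention]:.0%}, effective {ec:.0%}")
```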
3. Types of inference and choice of design
• Adequacy Evaluations (no comparison group: did the expected changes occur?)
• Plausibility Evaluations (comparison group)
• Before-and-After Study in Program and Comparison Areas (see the difference-in-differences sketch below)
• The Ecological Dose-Response Design
• Randomized (Probability) Evaluation Designs
• Stepped Wedge Design
[Figure 16-3: Simplified Conceptual Framework of Factors Affecting Health, from the Standpoint of Evaluation Design]
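A before-and-after comparison between program and nonprogram areas is often summarized as a difference-in-differences. The following minimal Python sketch uses invented mortality figures; a real analysis would use regression with survey weights and confidence intervals.

```python
# Sketch: before-and-after design with a comparison area, summarized as a
# simple difference-in-differences. All mortality figures are invented.

baseline = {"program": 112.0, "comparison": 115.0}  # under-5 deaths per 1,000
endline  = {"program":  84.0, "comparison": 103.0}

change_program    = endline["program"] - baseline["program"]        # -28.0
change_comparison = endline["comparison"] - baseline["comparison"]  # -12.0

# The extra change in program areas, beyond the secular trend seen in the
# comparison areas, is the plausible program effect.
did = change_program - change_comparison
print(f"Change in program areas:    {change_program:+.1f} per 1,000")
print(f"Change in comparison areas: {change_comparison:+.1f} per 1,000")
print(f"Difference-in-differences:  {did:+.1f} per 1,000")
```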
4. Defining the indicators and obtaining the data
• Documentation of Program Implementation
• Measuring Coverage (household surveys; see the sketch after this list)
• Measuring or Modeling Impact
• Describing Contextual Factors
• Measuring Costs (unit costs, operations, utilization)
• Patient-Level Costs (severity of illness)
• Facility-Level Characteristics (quality, scope of services)
• Contextual Variables (transport, supervision, patients' ability to access care)
• Data Collection Methods (costs) and Allocation Methods
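Since coverage is typically estimated from household surveys, here is a minimal Python sketch of a point estimate with a normal-approximation confidence interval. The counts are invented, and the simple-random-sampling assumption ignores the design effect that a real cluster survey would have to account for.

```python
# Sketch: coverage indicator from a household survey with a 95% CI.
# Assumes simple random sampling; cluster surveys need a design-effect
# adjustment that is omitted here for brevity.
import math

def coverage_ci(covered: int, sampled: int, z: float = 1.96):
    p = covered / sampled
    se = math.sqrt(p * (1 - p) / sampled)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical survey: children under five who slept under an ITN last night
p, lo, hi = coverage_ci(covered=412, sampled=900)
print(f"ITN coverage: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```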
5. Carrying Out the Evaluation
• Starting the evaluation clock
• Feedback to implementers and midstream corrections
• Linking the independent evaluation to routine monitoring and evaluation
• Data Analyses
• Analyzing Costs and Cost-Effectiveness (process, intermediate, and outcome indicators)
• Interpretation and Attribution
Types of process, intermediate, and outcome indicators and the data needed:

Type of indicator | Indicator | What is measured | Additional data needed
Process | Cost-effectiveness | Expected costs and value for money | Budget projections, work plans, coverage
Process | Total cost per person treated | Services provided | Utilization rates
Process | Total cost per preventive item | Services provided | Utilization rates
Process | Cost per capita | Services provided, program effort | Population
Intermediate | Cost of quality improvement | Treatment leading to health gains | Utilization rates adjusted by quality
Outcome | Cost per death averted | Mortality reduction | Mortality rates
Outcome | Cost per life-year gained | Mortality reduction | Mortality rates and age at death (and life expectancy)
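To make the cost indicators in the table concrete, here is a minimal Python sketch computing them from invented totals. A real analysis would draw on program accounting data, utilization statistics, and measured mortality, and would discount life-years gained.

```python
# Sketch: process and outcome cost indicators from the table above.
# All inputs are hypothetical round numbers for illustration only.

total_cost      = 2_500_000.0  # total program cost, USD
persons_treated = 180_000      # from utilization records
population      = 1_200_000    # catchment population
deaths_averted  = 850          # from the measured mortality reduction
life_years_per_death = 30.0    # assumed (discounted) life expectancy at age of death

print(f"Cost per person treated:   ${total_cost / persons_treated:,.2f}")
print(f"Cost per capita:           ${total_cost / population:,.2f}")
print(f"Cost per death averted:    ${total_cost / deaths_averted:,.0f}")
print(f"Cost per life-year gained: ${total_cost / (deaths_averted * life_years_per_death):,.2f}")
```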
Joint interpretation of findings from adequacy and plausibility analyses. Rows: how program areas fared relative to nonprogram areas (plausibility assessment). Columns: how impact indicators changed over time in the program areas (adequacy assessment).

Plausibility \ Adequacy | Improved | No change | Worsened
Better | Both areas improved, but the program led to faster improvement | Program provided a safety net | Program provided a partial safety net
Same | Both areas improved; no evidence of an additional program impact | No change in either area; no evidence of program impact | Indicators worsened in both areas; no evidence of a safety net
Worse | Both areas improved; presence of the program may have precluded the deployment of a more effective strategy | Program precluded progress; presence of the program may have hindered the deployment of more effective strategies | Program was detrimental; presence of the program may have hindered the deployment of more effective strategies
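The table above is essentially a lookup from the pair of assessments to an interpretation; the following Python sketch encodes it that way, with wording condensed from the table. The function and its names are conveniences for illustration, not part of the chapter.

```python
# Sketch: the joint adequacy/plausibility interpretation table as a lookup.
# Keys: (plausibility: program vs. comparison areas, adequacy: trend in
# program areas). Wording is condensed from the table above.

INTERPRETATION = {
    ("better", "improved"):  "Both areas improved; the program led to faster improvement",
    ("better", "no change"): "Program provided a safety net",
    ("better", "worsened"):  "Program provided a partial safety net",
    ("same",   "improved"):  "Both improved; no evidence of an additional program impact",
    ("same",   "no change"): "No change in either area; no evidence of program impact",
    ("same",   "worsened"):  "Worsened in both areas; no evidence of a safety net",
    ("worse",  "improved"):  "Program may have precluded a more effective strategy",
    ("worse",  "no change"): "Program precluded progress",
    ("worse",  "worsened"):  "Program was detrimental",
}

def interpret(plausibility: str, adequacy: str) -> str:
    """Look up the joint interpretation for one pair of assessments."""
    return INTERPRETATION[(plausibility.lower(), adequacy.lower())]

print(interpret("Better", "No change"))  # -> Program provided a safety net
```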
6. Disseminating Evaluation Findings and Promoting Their Uptake
• Policy makers and program implementers at country level
• Global scientific and public health communities
7. Working in Large-Scale Evaluations
• First, good evaluations require effective communications
• Second, good evaluations require a broad range of skills and techniques, as well as an interdisciplinary approach.
• Third, good evaluations require patience and flexibility.
8. Conclusion
• Conducting large-scale evaluations is not for the fainthearted. This chapter has focused on the technical aspects of designing and conducting an evaluation, mentioning only in passing some of the political and personal challenges involved.
Take-home message
• Ideal designs (based on textbooks like this one) must often be modified to reflect what is possible and affordable in specific country contexts.
Thank you for listening