
Monitoring, Evaluation and Learning at USAID: A Review

Presented at the Global Health Knowledge Collaborative Meeting, October 25, 2017

Bamikale Feyisetan, PhD, Snr. Evaluation & Sustainability Advisor, USAID/GH/PRH

Marc Cunningham, MPH

MEL Advisor, USAID/GH/PPP

1

I. USAID Program Cycle

II. Monitoring and Tracking Performance

III. Program Evaluation

IV. Collaboration, Learning and Adaptation

V. Contact, Resources & Training in MEL

2

ME&L At USAID: An Overview

This presentation provides an overview of Monitoring, Evaluation and Learning (MEL) at USAID. As part of the overview, we will briefly review the USAID Program Cycle and its key principles, look at how MEL fits into the Program Cycle and USAID activities in general, and consider the role of implementing partners in the process.

2

• Intended to shift the focus of Government officials and managers from program inputs toward program execution — what results (outcomes & outputs) are being achieved, and how well programs are meeting expected objectives

REQUIRES

• Long-term goals

• Annual performance targets

• Annual reporting of actual performance

• Requires agencies to publish strategic and performance plans and reports in machine-readable formats (GPRAMA).

3

Government Performance and Results Act (GPRA) of 1993 and GPRA Modernization Act of 2010

To understand why we monitor, we need to know about the Government Performance and Results Act (GPRA) of 1993 and the GPRA Modernization Act (GPRAMA) of 2010. These acts were intended to shift the focus of Government officials and managers from reporting program inputs toward program implementation: what results (outcomes and outputs) are being achieved, and how well programs are meeting expected objectives. The shift from reporting program inputs to measuring results and outcomes was a major conceptual change across the U.S. Government. At USAID we began to learn about and use strategic planning and managing for results. In summary, both GPRA and GPRAMA require that all USG agencies set long-term goals and annual performance targets and report annually on actual performance; GPRAMA additionally requires that agencies publish strategic plans, performance plans, and reports in an electronic format.

3

The Program Cycle is USAID’s operational model for planning, delivering, assessing, and adapting development programming in a given region or country in order to achieve more effective and sustainable results to advance U.S. foreign policy.

– ADS 201

Please note the four program stages underlined – planning, delivering (program execution/implementation), assessing (examining results) and adapting (using the results to inform program decisions)

4

5

1. Apply Analytic Rigor to Support Evidence-based Decision-Making

2. Manage Adaptively through Continuous Learning

3. Promote Sustainability through Local Ownership

4. Utilize a Range of Approaches to Achieve Results

5

Four Principles of the Program Cycle

Adherence to these principles may depend greatly on MEL activities. For this reason, USAID has made monitoring, evaluation and learning core components of USAID’s Program Cycle: from setting strategy, to designing and implementing projects, to learning and adapting.

The graphic shows that M&E are expected to guide high-level strategic planning at the country and regional level, as well as activity design and implementation. M&E activities are also INFORMED by these processes. Learning and adapting happen in the context of program development and M&E.

Guided by USG and Agency policies and strategies as well as Presidential Initiatives, USAID Missions are required to develop and use Country Development Cooperation Strategies (CDCS). These five-year, country-based strategies show how Agency assistance is synchronized with other agencies' efforts.

Projects are then designed based on the development objectives (DOs) and other considerations such as budget and country absorptive capacity. Implementation is usually done through acquisition (contract) and assistance (grant) mechanisms, which require M&E. M&E are required at the CDCS and project levels to enable us to learn from the implementation of both the strategy and the projects. M&E provide information to feed into new policies and strategies, including new CDCS and new project designs. M&E are also required of partners at the activity level.

One of the most important things to take away from this graphic is that MEL should always feed back into existing strategic plans, activities and program implementation for course correction where necessary, and inform subsequent program design.

5

6

Rationale for M&E

• M&E help to make informed decisions regarding ongoing programs

– Facilitate effective and efficient use of resources

– Determine whether a program is right on track and where changes need to be considered

• M&E help stakeholders conclude whether the program is a success

• M&E documentation preserves institutional memory

Please note the last bullet – for program continuity

Definition – Performance Monitoring

• “Performance monitoring is the ongoing and systematic collection of performance indicator data and other quantitative or qualitative information to reveal whether implementation is on track and whether expected results are being achieved.”

– ADS 201

Monitoring is like taking snapshots of the progress of your program or project.

7

Why do we monitor?

As noted earlier, the Program Cycle is USAID’s operational model for planning, delivering, assessing, and adapting development programming in a given region or country to advance U.S. foreign policy. It is described in USAID policy ADS 201. Monitoring plays a critical role throughout the Program Cycle and is used to determine whether USAID is accomplishing what it sets out to achieve, what effects programming is having in a region, and how to adapt to changing environments.

Performance monitoring and context monitoring occur throughout the Program Cycle, from Country or Regional Development Cooperation Strategies (CDCS/RDCS), to projects, to activities. Data from monitoring are used to: assess whether programming is achieving expected results; adapt existing activities, projects, and strategies as necessary; and apply Agency learning to the designs of future strategies and programming. This slide primarily focuses on how USAID staff use monitoring and on the various monitoring requirements. We highlight three boxes with yellow borders for discussion: monitoring’s role in reporting on results; its role in learning and adapting; and the interaction between USAID staff and implementing partners on monitoring through Activity MEL Plans.

8

Requirements for Monitoring

What are the Performance Monitoring Requirements? (See ADS 201)

• Mission-wide PMP for the CDCS – with Results Framework and CLA plan

• Project MEL Plan – for the PAD

• Activity MEL Plan – by implementing partner within 90 days following award

• Performance Indicator Reference Sheet (PIRS) – for all indicators in the Monitoring Plan

• DQAs – for all indicators reported externally

• Prepare the annual Performance Plan and Report (PPR) and other reports

Monitoring data are also used to:

• Assess achievements over time and identify gaps in Portfolio Reviews

• Adaptively manage programs

9

10

10

Linking the Results Framework with the Logical Framework

At the country level, project goals are linked to the Mission’s Development Objectives (DOs). Here we are looking at the link between the Results Framework in the PMP and the Logical Framework in the PAD. Note that there are a variety of ways to put together a logical framework, so projects might look slightly different, but there should still be some alignment.

Appropriate indicators should be chosen to monitor the extent to which desired results/outcomes/outputs are attained.

11

Indicators are quantifiable measures of a characteristic or condition of people, institutions, systems, or processes that may change over time.

• Baseline: value of an indicator before an intervention or activity

• Target: specific, planned level of result to be achieved within a specific timeframe with a given level of resources

• Disaggregation: indicator data broken down by key categories of interest (e.g., demographic characteristics)

In USAID, there are 21 Standard Indicators for health as well as many recommended ‘custom’ indicators. Please use them when applicable. A brief illustrative sketch of these terms follows below.
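To make the baseline, target, and disaggregation terms concrete, here is a minimal sketch in Python. The class and field names are our own illustration and the numbers are invented; this is not part of any USAID system.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """Hypothetical performance indicator record (illustrative only)."""
    name: str
    baseline: float          # value of the indicator before the intervention
    target: float            # planned level of result for the reporting period
    actuals: dict = field(default_factory=dict)  # disaggregated values, e.g. by sex

    def total(self) -> float:
        """Aggregate the disaggregated actuals into one reported value."""
        return sum(self.actuals.values())

    def progress_toward_target(self) -> float:
        """Share of the planned baseline-to-target change achieved so far."""
        planned_change = self.target - self.baseline
        if planned_change == 0:
            return 1.0
        return (self.total() - self.baseline) / planned_change

# Example with invented numbers, disaggregated by sex.
ind = Indicator(name="New users of a modern contraceptive method",
                baseline=0, target=10_000,
                actuals={"female": 4_200, "male": 1_300})
print(f"{ind.name}: {ind.progress_toward_target():.0%} of target achieved")
```

Reporting a single total while keeping the disaggregated values is what lets the same record serve both aggregate reporting (e.g., the PPR) and more detailed analysis.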

11

Indicators

Log frames and PMPs have indicators to help us track progress. Please don’t forget that indicators are proxies for results. Using the Standard Indicators allows USAID to aggregate indicators from across the entire portfolio to better tell the story of our work.

• Precise indicator definitions and clear explanations of the unit of analysis for performance indicators help to guard against variation in data collection from site to site and over time.

• A Performance Indicator Reference Sheet (PIRS) documents the definition, purpose, and methodology of an indicator.

• PIRS have already been developed for the 21 standard (F) indicators in the Performance Plan and Report (PPR).

12

Performance Indicator Reference Sheet

As stated earlier, USAID tracks indicators at multiple levels: activity, project, program and strategy.

Precise indicator definitions and clear explanations of the unit of analysis for performance indicators help to guard against variation in data collection from site to site and over time. The Performance Indicator Reference Sheet (PIRS) documents the definition, purpose, and methodology of the indicator to ensure all parties collecting and using the indicator have the same understanding of its content.

12

13

Validity

Reliability

Timeliness

Precision

Integrity

13

Data Quality Assessments

Each checklist question is answered Yes or No, with comments.

VALIDITY – Data should clearly and adequately represent the intended result.
1. Does the information collected measure what it is supposed to measure? (E.g., a valid measure of overall nutrition is healthy variation in diet; age is not a valid measure of overall health.)
2. Do results collected fall within a plausible range?
3. Is there reasonable assurance that the data collection methods being used do not produce systematically biased data (e.g., consistently over- or under-counting)?
4. Are sound research methods being used to collect the data?

RELIABILITY – Data should reflect stable and consistent data collection processes and analysis methods over time.
1. When the same data collection method is used to measure/observe the same thing multiple times, is the same result produced each time? (E.g., a ruler used over and over always indicates the same length for an inch.)
2. Are data collection and analysis methods documented in writing and being used to ensure the same procedures are followed each time?

TIMELINESS – Data should be available at a useful frequency, should be current, and should be timely enough to influence management decision making.
1. Are data available frequently enough to inform program management decisions?
2. Are the data reported the most current practically available?
3. Are the data reported as soon as possible after collection?

PRECISION – Data have a sufficient level of detail to permit management decision making; e.g., the margin of error is less than the anticipated change.
1. Is the margin of error less than the expected change being measured?

Data quality assessments (DQAs) are conducted regularly by activity and project managers to:
• Verify the quality of the data collected
• Identify strengths and weaknesses of data
• Determine extent to which data integrity can be trusted to inform management decisions

USAID and State worked together to develop one streamlined DQA checklist that can be used for all indicators to assess: validity, integrity, precision, reliability, and timeliness.

When?
• Conduct assessment of data quality for each performance indicator reported to external entities
• After data has been collected on a new indicator and within 12 months of the new indicator data being reported
• Every 3 years thereafter, and more frequently if needed
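As a rough illustration only (not an official USAID tool; the field names are invented), the sketch below shows how yes/no answers and comments from such a checklist could be captured electronically and summarized to flag the data quality standards that need follow-up.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """One question on a hypothetical DQA checklist (field names invented)."""
    standard: str      # validity, reliability, timeliness, precision, or integrity
    question: str
    answer: bool       # True = "Yes"
    comment: str = ""

def flag_follow_up(items):
    """Count 'No' answers per data quality standard to show where follow-up is needed."""
    flags = {}
    for item in items:
        if not item.answer:
            flags[item.standard] = flags.get(item.standard, 0) + 1
    return flags

dqa = [
    ChecklistItem("validity", "Does the information collected measure what it is supposed to measure?", True),
    ChecklistItem("reliability", "Are data collection and analysis methods documented in writing?", False,
                  "Written SOP drafted but not yet shared with all data collectors"),
    ChecklistItem("timeliness", "Are data available frequently enough to inform program decisions?", True),
]
print(flag_follow_up(dqa))   # -> {'reliability': 1}
```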

Evaluation - Definition

Evaluation is the systematic collection and analysis of information about the characteristics and outcomes of strategies, projects, and activities as a basis for decisions to improve effectiveness, and timed to inform decisions about current and future programming.

– Evaluation Policy

14

Performance Evaluations vs. Impact Evaluations

• Use – Performance evaluations: explore a range of issues linked to program design and implementation, e.g., how a project is being implemented or what it has achieved; lack a rigorously defined counterfactual. Impact evaluations: more narrowly defined; provide a quantifiable measurement of change attributable to a given intervention with a high level of confidence; require a rigorous counterfactual or comparison group.

• Questions – Performance evaluations: generally descriptive and normative. Impact evaluations: cause-and-effect.

• Design – Performance evaluations: wide variety depending on purpose and questions. Impact evaluations: experimental (randomization) or quasi-experimental.

• Methods – Performance evaluations: mix of qualitative and quantitative methods. Impact evaluations: quantitative, though often including qualitative methods.

Types of Evaluation at USAID

Impact Evaluation: measures the change in a development outcome that is attributable to a defined intervention. Impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change.

Performance Evaluation: encompasses a broad range of methods to address descriptive, normative and/or cause-and-effect questions. The majority of evaluations at USAID are performance evaluations.
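To illustrate what a quantifiable measurement of change attributable to an intervention can look like when a comparison group supplies the counterfactual, here is a toy difference-in-differences calculation in Python. The numbers are made up, and this is only one of several quasi-experimental designs an impact evaluation might use.

```python
# Toy difference-in-differences calculation with invented numbers. The comparison
# group supplies the counterfactual trend, so the change attributable to the
# intervention is the treatment group's change minus the comparison group's change.
def diff_in_diff(treat_before, treat_after, comp_before, comp_after):
    treatment_change = treat_after - treat_before
    counterfactual_change = comp_after - comp_before
    return treatment_change - counterfactual_change

# e.g., modern contraceptive prevalence (%) before and after an activity
effect = diff_in_diff(treat_before=22.0, treat_after=31.0,
                      comp_before=23.0, comp_after=27.0)
print(f"Estimated attributable change: {effect:.1f} percentage points")  # 5.0
```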

15

They can help determine:

• What a particular strategy, project or activity has achieved
• How it is being implemented
• How it is perceived and valued
• The contribution of USAID assistance to the results achieved
• Possible unintended outcomes from USAID assistance

15

Why do we Evaluate?

Here again, we see evaluations at different program levels and stages as well as how evaluation contributes to learning. The box with the yellow borders highlights the role of implementers in evaluations.

16

1. Integrated Into Design of Strategies, Projects and Activities

✓ USAID’s renewed focus on evaluation has a complementary and reinforcing relationship with other efforts to focus projects and activities on achieving measurable results. For this reason, consideration for evaluation should be given at the project design phase.

2. Unbiased in Measurement and Reporting

✓ Undertaken in such a way that they are not subject to the perception or reality of biased measurement or reporting due to conflict of interest or other factors. Evaluations conducted to meet the evaluation requirements of the ADS will be external, and the contract or grant for the evaluation will be managed by the Program Office. Independence has to do with the engagement of evaluators who are not otherwise affiliated with the program. In some cases an expert from within USAID might serve on the evaluation team; however, the lead should be an outside expert.

3. Relevant

✓ Linking evaluation to future decisions to be made by USAID leadership, partners or other key stakeholders.

4. Based on Best Methods

✓ Utilize methods that generate the highest quality and most credible evidence for the questions being asked. Many of our programs are complex and/or carried out in the context of larger national or global initiatives. Methods such as developmental evaluation or contribution analysis can help us to improve program design and understand its contribution in the greater landscape of health initiatives.

17

5. Oriented toward Reinforcing Local Ownership

✓ Evaluations should be consistent with institutional aims of local ownership through respectful engagement with all partners, including local beneficiaries and stakeholders.

6. Transparent

✓ Findings from evaluations should be shared as widely as possible, with a commitment to full and active disclosure.

17

At the operating unit (OU) level, all required evaluations are to be external evaluations and adhere to the principle of evaluation independence.

Requirements for evaluation were revised under ADS 201. The two new requirements (Requirements 1 and 3) state:

Requirement 1: Each Mission and Washington OU that manages program funds and designs and implements projects as described in 201.3.3 must conduct at least one evaluation per project. The evaluation may address the project as a whole, a single activity or intervention, a set of activities or interventions within the project, questions related to the project that were identified in the PMP or Project MEL Plan, or cross-cutting issues within the project.

Requirement 3: Each Mission must conduct at least one “whole-of-project” performance evaluation within their CDCS timeframe. Whole-of-project performance evaluations examine an entire project, including all its constituent activities and progress toward the achievement of the Project Purpose. A whole-of-project evaluation may count as one of the evaluations required under Requirement 1.

[Note: Requirement 2 is the “pilot intervention requirement,” which is essentially unchanged from the previous ADS guidance.]

18

The new ADS (201) changes “Integrated into design of projects” (from the previous Evaluation Policy) to “Integrated into the Design of Strategies, Projects, and Activities” and changed “Oriented toward reinforcing local capacity” (from the previous Evaluation Policy) to “Oriented toward Reinforcing Local Ownership.”

18

Definition - Learning

• Learning is a continuous process of analyzing a wide variety of information sources and knowledge including evaluation findings, monitoring data, research, analyses conducted by USAID or others, and experiential knowledge of staff and development actors.

Please note the word ‘continuous’ – deliberate efforts should be made to promote the culture of continuous learning among staff. Having a learning agenda could promote the culture of continuously improving staff knowledge.

19

Learning in the CLA Framework

Learning systematically takes place when USAID and stakeholders analyze a variety of information (including data from monitoring, portfolio reviews, research findings, evaluations, analyses conducted by USAID or third parties, knowledge gained from experience) and take time to pause and reflect on implementation. This process helps us to draw on evidence and experience from many sources to test and strengthen theories of change as well as employ participatory development methodologies that catalyze learning for ourselves and our stakeholders.

The graphic shows the enabling conditions for CLA as well as the CLA in the program cycle. Integrating CLA into our work helps to ensure that our programs are coordinated with others, grounded in a strong evidence base, and iteratively adapted to remain relevant throughout implementation.

In the simplest terms, the CLA framework directs our attention to the following:

Collaboration: Are we collaborating with the right partners at the right time to promote synergy over stove-piping?

Learning: Are we asking the most important questions and finding answers that are relevant to decision making?

Adapting: Are we using the information that we gather through collaboration and learning activities to make better decisions and make adjustments as necessary?

Enabling conditions: Are we working in an organizational environment that supports our collaborating, learning and adapting efforts?

20

Learning: A commitment to continuous improvement must be grounded on broad contextual awareness as well as a deep understanding of the questions that matter most; however, it could be unreasonable to try to keep on top of every new development relevant to a local context or technical sector. Creating a learning agenda is one way to decide on priority questions and consider how monitoring, evaluation, and other types of analysis can help answer those questions.

20

Learning - How

• Tracking, using, and contributing to the technical evidence base

• Testing and exploring our theories of change

• Ensuring our monitoring and evaluation (M&E) are designed to help us learn from implementation, in addition to meeting established reporting requirements

21

Principles for Collaborating, Learning, and Adapting

• CLA efforts should build upon and reinforce existing processes and practices.

• Collaboration and coordination should be approached strategically.

• Tacit, experiential, and contextual knowledge are crucial complements to research and evidence-based knowledge. USAID will value and use all forms of knowledge in the development of strategies, projects, and activities and the ways to manage them adaptively.

• Implementing partners and local and regional actors play a central role in USAID’s efforts to be a learning organization.

• Knowledge and learning should be documented, disseminated, and used to help spread effective practices widely for improved development.

22

Learning Agenda

While not required by the ADS, learning agendas can be effective tools to drive continued program adaptation and improvement.

A learning agenda includes: (1) a set of questions addressing critical knowledge gaps; (2) a set of associated activities to answer them; and (3) products aimed at disseminating findings, designed with usage and application in mind. A learning agenda can help you:

• Test and explore assumptions and hypotheses throughout implementation and stay open to the possibility that your assumptions and hypotheses are not accurate;

• Fill knowledge gaps that remain during implementation start-up;

• Make more informed decisions and support making your work more effective and efficient.

A learning agenda can also help to guide performance management planning by setting knowledge and information priorities. For example, a learning agenda can assist with prioritizing evaluations and research activities as well as with determining key indicators.
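One way to picture the three components working together is as a simple structure that ties each priority question to the activities that will answer it and the products that will carry findings to users. The sketch below is purely illustrative; the questions, activities, and products are invented examples, not taken from any actual learning agenda.

```python
# Illustrative-only sketch of a learning agenda: each priority question is paired
# with the activities that will answer it and the dissemination products.
learning_agenda = [
    {
        "question": "Which service-delivery approaches best sustain uptake after handover?",
        "activities": ["mid-term performance evaluation", "routine monitoring trend analysis"],
        "products": ["evidence brief for Mission leadership", "partner learning webinar"],
    },
    {
        "question": "Are our assumptions about provider training costs holding during scale-up?",
        "activities": ["cost study", "quarterly portfolio review discussion"],
        "products": ["updated Project MEL Plan annex"],
    },
]

for item in learning_agenda:
    print(f"- {item['question']} "
          f"({len(item['activities'])} activities, {len(item['products'])} products)")
```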

A learning agenda can also be a useful process for promoting collaboration with peers and colleagues, filling gaps in knowledge, and generating new evidence that we can then use to adapt our work. Ideally, a learning agenda should be developed during the design phase of a strategy, project, or activity, once the results framework or development hypotheses have been developed.

At the strategy (CDCS) level, a learning agenda can be a part of the Mission’s required Collaborating, Learning, and Adapting (CLA) Plan. The same is true for the required Monitoring, Evaluation, and Learning (MEL) Plans at the project and activity levels. Whatever the level, the goal in formulating a learning agenda is to create a list of prioritized learning questions that, when answered, can lead to better, more informed programming decisions. To do so, it is important to involve both the generators of knowledge and the users (e.g., program staff, implementing partners, monitoring and evaluation staff, decision-makers, etc.).

23

Recommendations for Learning

• Learning can take place through a range of processes and use a variety of sources including monitoring data, evaluation findings, research findings, lessons from implementation, and observation.

• The Project MEL Plan in the PAD should define a learning plan to fill gaps in technical knowledge and inform adjustments during implementation.

• Activity MEL Plans should include a learning section as well. It should be informed by the Project MEL Plan and the CDCS CLA section.

24

Open data paves the way for an increasingly robust, data-rich environment to create breakthrough insights and solutions:

• Establishes the Development Data Library (DDL)

• Requires USAID staff and implementing partners to submit datasets generated with USAID funding to the DDL

• Defines a data clearance process to ensure that USAID makes as much data publicly available as possible, while still affording necessary protections

ADS 579 Open Data

When discussing ME&L, we should also discuss the open data policy. USAID is committed to supporting the availability of data – when possible without compromising necessary protections of privacy & confidentiality or security. This is codified in the ADS 579 guidance.

25

Within 90 days of an activity being awarded, the Activity Monitoring, Evaluation, and Learning (MEL) Plan is drafted:

• Aligned, where possible, with PPR (21 Standard F Indicators).

• Aligned with the Project MEL Plan (from USAID) and Mission or Office PMP.

• A PIRS developed for each indicator

Evaluation Dissemination

• Uploaded to DEC (within 90 days)

• Uploading quantitative data to the DDL (open data policy)

Activity MEL Guide available: https://usaidlearninglab.org/sites/default/files/resource/files/cleared_-_how-to_note_-_activity_mel_plan_sep2017.pdf

Expectations from Partners

26

• National

• Sub-national

• Program

• Service Delivery Point

Use of Monitoring Data at All Levels

Let’s take a few moments to discuss your experience with the use of monitoring data at all levels.

27

Learning: High Impact Practices (HIPs)

FP/RH’s High Impact Practices are examples of how USAID uses evidence and continued learning to improve our programs.

28

• Global Health Bureau M&E Plan 2014-2016

– https://programnet.usaid.gov/library/global-health-bureau-me-plan-2014-2016

• ADS 201: Programming - https://www.usaid.gov/ads/policy/200/201

• ProgramNet - https://programnet.usaid.gov/

• Toolkits

– https://usaidlearninglab.org/content/monitoring-toolkit

– https://usaidlearninglab.org/evaluation

– https://usaidlearninglab.org/cla-toolkit

• USAID Evaluation Policy, Link to Reports

– http://www.usaid.gov/evaluation, https://dec.usaid.gov

Key Resources in MEL

The toolkits contain a wealth of information in easily accessible formats.

29

Questions?

30