
Monitoring? Evaluation? Impact Evaluation? Appreciating and Taking Advantage of the Differences

Workshop at the Cairo Conference on Impact Evaluation, 29 March 2009

Burt Perrin
La Masque
30770 Vissec, FRANCE
+33 4 67 81 50 11
[email protected]

Alternative title:

Putting the “and” back in MandE

Plan for the workshop

Participative approach – small group exercises, your real-world examples, general discussion
Consider differences between monitoring and evaluation
Strengths and limitations of each
Use and misuse of performance indicators

How to use monitoring and evaluation approaches appropriately and in a complementary fashion
What is “impact evaluation” and where does it fit in?

What do we mean by Monitoring, and by Evaluation?

Monitoring – the concept and common definitions

Tracking progress in accordance with previously identified objectives, indicators, or targets (plan vs. reality)
RBM, performance measurement, performance indicators …
In French: “suivi” (tracking) vs. “contrôle” (control)
Some other uses of the term: any ongoing activity involving data collection and performance (usually internal, sometimes seen as self-evaluation)
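Monitoring in this sense boils down to comparing planned targets with observed values, indicator by indicator. As a purely illustrative sketch (the indicator names, targets, and values below are hypothetical, not drawn from any actual RBM system), that comparison can be expressed in a few lines of Python:

```python
# Minimal sketch of monitoring as a plan-vs-reality comparison.
# Indicator names, targets, and actual values are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Indicator:
    name: str
    target: float   # the previously identified target (the "plan")
    actual: float   # the value observed so far (the "reality")

    def progress(self) -> float:
        """Share of the target achieved so far."""
        return self.actual / self.target if self.target else 0.0


indicators = [
    Indicator("children enrolled in school", target=500, actual=430),
    Indicator("teachers trained", target=40, actual=12),
]

for ind in indicators:
    print(f"{ind.name}: {ind.progress():.0%} of target")
```

Note what such a sketch can and cannot do: it tracks progress against pre-set targets, but it says nothing about why the numbers look the way they do, or whether the targets themselves were appropriate.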

Evaluation – some initial aspects

Systematic, data-based
Often can use data from monitoring as one source of information
Can consider any aspect of a policy, programme, project
Major focus on assessing the impact of the intervention (i.e. attribution, cause)
E-valua-tion

Frequent status of M&E

(Diagram: in practice, monitoring and evaluation are often run together as a single undifferentiated “monitoringandevaluation”, or RBM (monitoring) and evaluation are treated as separate, unconnected activities.)

Ideal situation – Monitoring and Evaluation complementary

(Diagram: monitoring and evaluation shown as distinct but complementary, interconnected activities.)

Monitoring and Evaluation

Monitoring:
Periodic, using data routinely gathered or readily obtainable; generally internal
Assumes appropriateness of programme, activities, objectives, indicators
Tracks progress against a small number of targets/indicators (one at a time)
Usually quantitative
Cannot indicate causality
Difficult to use for impact assessment

Evaluation:
Generally episodic, often external
Can question the rationale and relevance of the program and its objectives
Can identify unintended as well as planned impacts and effects
Can address “how” and “why” questions
Can provide guidance for future directions
Can use data from different sources and from a wide variety of methods


MONITORING, EVALUATION AND IMPACT ASSESSMENT

Inputs: investments (resources, staff …) and activities
Outputs: products
Outcomes: immediate achievements of the project
Impact: long-term, sustainable changes

Monitoring: what has been invested, done and produced, and how are we progressing towards the achievement of the objectives?

Evaluation: what occurred and what has been achieved as a result of the project?

Impact assessment: what long-term, sustainable changes have been produced (e.g. the contribution towards the elimination of child labour)?

Evaluation vs. Research

Research: primary objective is knowledge generation
Evaluation: reference to a particular type of situation; utilisation in some form is an essential component

But: evaluation makes use of research methodologies

Monitoring data: quantitative only, or also qualitative?

Some/most guidelines specify quantitative only
Some nominally allow qualitative information, but:

(Typical reporting template: Indicator | Q1 | Q2 | Q3 | Q4 | Year)
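Where qualitative information is allowed, one simple way to keep it from being squeezed out by the quarterly grid is to record a short narrative note alongside each quantitative reading. The sketch below is hypothetical; the field names and example entries are invented for illustration only:

```python
# Sketch of a quarterly indicator record that keeps a qualitative note
# alongside the quantitative value. Field names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class QuarterlyReading:
    quarter: str          # e.g. "Q1"
    value: float          # the quantitative reading
    note: str = ""        # qualitative commentary: context, caveats, surprises


@dataclass
class IndicatorRecord:
    name: str
    readings: list[QuarterlyReading] = field(default_factory=list)


record = IndicatorRecord("girls completing primary school")
record.readings.append(QuarterlyReading("Q1", 118, "enrolment drive held in March"))
record.readings.append(QuarterlyReading("Q2", 95, "flooding closed two schools"))

for r in record.readings:
    print(r.quarter, r.value, "-", r.note or "no commentary")
```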

Performance Indicators: a consideration of their limitations and potential for misuse

See, for example:

Burt Perrin, Effective Use and Misuse of Performance Measurement, American Journal of Evaluation, Vol. 19, No. 3, pp. 367-369, 1998.

Burt Perrin, Performance Measurement: Does the Reality Match the Rhetoric? American Journal of Evaluation, Vol. 20, No. 1, pp. 101-114, 1999.

Common flaws, limitations, and misuse of performance indicators - 1

Goal displacement
Terms and measures interpreted differently
Distorted or inaccurate data
Meaningless and irrelevant data
Cost shifting vs. cost savings
Critical subgroup differences hidden

Common flaws, limitations, and misuse of performance indicators -2

Do not take into account the larger context/complexities
Limitations of objective-based approaches to evaluation
Useless for decision making and resource allocations
Can result in less focus on innovation, improvement and outcomes

The process of developing indicators – should include:

Involvement of stakeholders in the development, interpretation and revision of indicators
Allocation of time and resources to the development of indicators
Provision of training and expertise
Thinking about potential forms of misuse in advance
Pretesting, testing, review and revision

Using indicators appropriately – some basic strategic considerations

First, do no harm
Meaningful and useful at the grassroots – the program, staff, local stakeholders
NOT linked to budget allocations or managerial rewards
Use only when it makes sense, e.g. Mintzberg, Pollitt/OECD:
Standardised programmes – recurrent products/services
Established programmes with a basis for identifying meaningful indicators and targets
NOT for tangible individual services
NOT for non-tangible ideal services

Using indicators appropriately – strategic considerations – 2

Use indicators as indicators
At best, a window vs. reality
To raise questions rather than to provide the “answer”
Different levels (e.g. input, activities, outputs, outcomes where it makes sense)

Using indicators appropriately – strategic considerations – 3

Focus on results vs. busy-ness
Performance information vs. performance data
Descriptive vs. numerical indicators
Performance MANAGEment vs. MEASUREment (original intent diverted from management to control)
Periodically review the overall picture – ask if the “data” makes sense, identify questions arising
Indicators as part of a broad evaluation strategy

Using indicators appropriately – operational considerations

Look at subgroup differences

Indicators/targets indicating direction vs. assessing performance
If the latter, don’t set up the programme for failure

Dynamic vs. static
Never right the first time
Constantly reassess validity and meaningfulness
Pre-test, pre-test, pre-test
Update and revise

Provide feedback – and assistance as needed

Using indicators appropriately - reporting

More vs. less information in reports

Performance story vs. list of numbers

Identify limitations – provide qualifications

Combine with other information

Request/provide feedback

Evaluation

A strategic approach to evaluation

Raison d’être of evaluation: social betterment, sensemaking

More generally, raison d’être of evaluation: to be used! Improved policies, programmes, projects, services, thinking

Monitoring and Evaluation

Monitoring:
Periodic, using data routinely gathered or readily obtainable
Assumes appropriateness of programme, activities, objectives, indicators
Tracks progress against a small number of targets/indicators (one at a time)
Usually quantitative
Cannot indicate causality
Difficult to use for impact assessment

Evaluation:
Generally episodic
Can question the rationale and relevance of the program and its objectives
Can identify unintended as well as planned impacts and effects
Can provide guidance for future directions
Can address “how” and “why” questions
Can use data from different sources and from a wide variety of methods

Future orientation - Dilemma

“The greatest dilemma of mankind is that all knowledge is about past events and all decisions about the future.

The objective of this planning, long-term and imperfect as it may be, is to make reasonably sure that, in the future, we may end up approximately right instead of exactly wrong.”

Questions for evaluation

Start with the questions – choice of methods to follow

How to identify questions:
Who can use evaluation information? What information can be used? How?
Different stakeholders – different questions
Consider responses to hypothetical findings
Develop the theory of change (logic model)

The three key evaluation questions

What’s happening? (planned and unplanned, little or big at any level)

Why?

So what?

Some uses for evaluation

Programme improvement
Identify new policies, programme directions, strategies
Programme formation
Decision making at all levels
Accountability
Learning
Identification of needs
Advocacy
Instilling evaluative/questioning culture

Different types of evaluation

Ex-ante vs. ex-post
Process vs. outcome
Formative vs. summative
Descriptive vs. judgemental
Accountability vs. learning (vs. advocacy vs. pro-forma)
Short-term actions vs. long-term thinking
Etc.

Results chain: Inputs → Processes → Outputs → Reach → Outcomes → Impact

Intervention logic model: Inputs → Activities → Outputs → Results/Intermediate Outcomes → Ultimate Impacts

Generic logic model (simplified): Inputs → Activities → Outputs → Intermediate results (1) → Intermediate results (2) → Impacts, with process outputs produced along the way
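A generic logic model of this kind can also be written down as an ordered chain of stages, which makes it easy to check that every stage has actually been specified before monitoring or evaluation is designed around it. The sketch below is illustrative only; the stage entries are hypothetical and not taken from any real programme:

```python
# Minimal sketch of a generic logic model as an ordered chain of stages.
# The example entries are hypothetical, not from any actual programme.
from collections import OrderedDict

logic_model = OrderedDict([
    ("inputs",                 ["funds", "staff", "training materials"]),
    ("activities",             ["teacher training workshops"]),
    ("outputs",                ["40 teachers trained"]),
    ("intermediate results 1", ["teachers use new curriculum"]),
    ("intermediate results 2", ["children stay in school longer"]),
    ("impacts",                ["reduction in child labour"]),
])

# Walk the chain in order, flagging any stage left unspecified.
for stage, entries in logic_model.items():
    status = ", ".join(entries) if entries else "NOT YET SPECIFIED"
    print(f"{stage}: {status}")
```

The point of writing it out this explicitly is the same as the slide’s: monitoring tends to live at the left of the chain, while evaluation and impact assessment have to reach the right-hand stages.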

Generic logic model – in context: the same chain (Inputs → Activities → Outputs → Intermediate results (1) → Intermediate results (2) → Impacts) sits within a wider setting of needs, environment and context, and knowledge, and alongside other factors, other interventions, and other results that also influence what happens at each stage.

ProLL Model – evaluation points

(Diagram: the ProLL model maps formative evaluation (FE) and summative evaluation (SE) points along the programme chain: clients/target groups/stakeholders → basic needs/problems → policy (mission, goals, objectives) → program/activity → inputs (funds, manpower, materials, equipment) → process → outputs (services/products) → outcomes/impact/changes (positive, negative, unintended, derivative). Criteria attached at these points include relevance, validity and applicability; relevance, appropriateness, scope and coverage; efficiency, effectiveness and economy; quantity/quality, timeliness/cost and appropriateness; objective achievement (rate/level); and needs fulfillment/problem alleviation (degree/level). Related management tools noted on the slide: Clients’ Charter, performance targets, micro accounting, TQM and quality, productivity measures, service recovery, counter service, zero defect, ISO 9000.)

© 1992 Arunaselam Rasappan, reproduced with permission.

Progressive Elimination of Child Labour (CL) with Emphasis on the Worst Forms

Children engaged in CL or at risk successfully integrated into the education system, via four outcome areas:

1. Availability of relevant, high-quality education (formal and non-formal, NFE) and skills training
2. Access to education and training available for children in CL or at risk
3. Communities participate actively in support of appropriate and accessible quality education
4. Education system guided by CL-sensitive policies

Outcomes under 1, in the areas of: support/alternatives for children; competent, motivated teachers; competent, motivated principals/administrators; relevant, flexible, adapted curricula.

Outcomes under 2, in the areas of: NFE as well as formal education; social exclusion barriers addressed; free and compulsory school system in place; expanded access to education; school monitoring system.

Outcomes under 3, in the areas of: availability of community resources; school management by the community; community-based monitoring; community demand for education.

Outcomes under 4, in the areas of: political commitment; education management and planning; CL considerations integrated into mainstream education policy; other policy areas take action to facilitate CL-appropriate education.

Progressive Elimination of Child Labour (CL) with Emphasis on the Worst Forms – expanded view

Children engaged in CL or at risk successfully integrated into the education system, via the same four outcome areas: availability of relevant, high-quality education (formal and NFE) and skills training; access to education and training for children in CL or at risk; communities participating actively in support of appropriate and accessible quality education; and an education system guided by CL-sensitive policies.

Community-level outcomes shown in the expanded model:

Community resources mobilised
Private organisations demonstrate social responsibility
Schools adopted by the community
Respect shown for teachers, especially for female teachers
Community-based monitoring undertaken regularly
Schools are managed by the community
Demand for education for children in CL or at risk increased
Police/law enforcement agencies participate in child labour monitoring (CLM)
The community as a whole is engaged in CLM
Trade unions participate in CLM
Monitoring is used to provide for increased transparency and accountability and decreased corruption
Municipalities/local authorities involved in school management
Different sectors within the community form alliances or cooperate in other ways
Education-oriented NGOs integrate CL concerns in their work
Community schools established
Increased awareness of the importance of education by families and the community
Social mobilisation and awareness-raising activities undertaken
CL Days/enrolment drives held
Parents involved, e.g. through PTAs
Children’s clubs created/used to create interest in education

Making evaluation useful - 1

Be strategic – e.g. start with the big picture, identify questions arising
Focus on priority questions and information requirements
Consider needs and preferences of key evaluation users
Don’t be limited to stated/intended effects
Don’t try to do everything in one evaluation

Making evaluation useful - 2

Primary focus: how evaluation can be relevant and useful
Bear the beneficiaries in mind
Take into account diversity, including differing world views, logics, and values
Be an (appropriate) advocate
Don’t be too broad
Don’t be too narrow

How else can one practice evaluation so that it is useful?

Follow the Golden Rule: “There are no golden rules.” (European Commission)
Art as much as science
Be future oriented
Involve stakeholders
Use multiple and complementary methods, qualitative and quantitative
Recognize differences between monitoring and evaluation

To think about …
Constructive approach, emphasis on learning vs. punishment
Good practices (not just problems)
Take into account complexity theory, systems approach, chaos theory
Synthesis, knowledge management
Establishing how/if the intervention in fact is responsible for results (attribution or cause)

Impact evaluation/assessment: what does this mean?

OECD/DAC definition of impact: positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.

Development objective: intended impact contributing to physical, financial, institutional, social, environmental, or other benefits to a society, community, or group of people via one or more development interventions.

But beware! ‘Impact’ and ‘impact assessment’ are frequently used in very different ways.

Determining attribution – some alternative approaches

Experimental/quasi-experimental designs (randomisation)
Eliminate rival plausible hypotheses
Physical (qualitative) causality
Theory of change approach
“Reasonable attribution”
“Contribution” vs. “cause”
Contribution analysis
(Use the simplest approach that gives the needed confidence)

Some considerations for meaningful impact evaluation

Need information about inputs and activities as well as about outcomes
Check, don’t assume, that what is mandated in (Western) capitals is what actually takes place on the ground

Check: are data sources really accurate?

Dealing with responsiveness – a problem or a strength?

Internal vs. external validity

Some questions about impact evaluation

What is possible with multiple interventions?

Changing situation
Strategies/policies vs. projects
Time frame?

Monitoring and Evaluation in Combination

How Monitoring and Evaluation can be complementary

Ongoing monitoring

Can identify questions, issues for (in-depth) evaluation

Can provide data for evaluation

Evaluation

Can identify what should be monitored in the future

Monitoring vs. Evaluation

Start with the purpose and question(s), e.g. control vs. learning/improvement
Identify information requirements (for whom, how they would be used …)
Articulate the theory of change
Use the most appropriate method(s) given the above: some form of monitoring approach? and/or some form of evaluation?

Do not use monitoring when evaluation is most appropriate – and vice versa

Consider costs (financial, staff time) and timeliness – monitoring is usually – but not always! – less costly and quicker

Mon. and Eval. in combination

Multi-method approach to evaluation usually most appropriate – can include monitoring

Generally monitoring is most appropriate as part of an overall evaluation approach, e.g. use evaluation to expand upon the “what” information from monitoring, and to address “why” and “so what” questions

Strategic questions → strategic methods
Seek the minimum amount of information that addresses the right questions and that will actually be used

Tell the performance story
Take a contribution analysis approach

Contribution Analysis (Mayne: Using performance measures sensibly)

1. Develop the results chain
2. Assess the existing evidence on results
3. Assess the alternative explanations
4. Assemble the performance story
5. Seek out additional evidence
6. Revise and strengthen the performance story
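The steps are iterative: assembling the performance story typically sends you back to gather more evidence and to address rival explanations. Purely as an illustration of the ingredients involved (the chain links, evidence, and alternative explanations below are hypothetical, not from any real contribution analysis), the raw material can be kept track of like this:

```python
# Illustrative sketch only: the ingredients of a contribution analysis
# (results chain links, supporting evidence, alternative explanations),
# so the performance story can be assembled and revisited.
from dataclasses import dataclass, field


@dataclass
class ChainLink:
    claim: str                                              # one link in the results chain
    evidence: list[str] = field(default_factory=list)       # what supports the claim
    alternatives: list[str] = field(default_factory=list)   # rival explanations still open

    def needs_work(self) -> bool:
        """A link is weak if it has no evidence or unaddressed alternative explanations."""
        return not self.evidence or bool(self.alternatives)


chain = [
    ChainLink("Training delivered to teachers",
              evidence=["attendance records"]),
    ChainLink("Teachers changed classroom practice",
              evidence=["classroom observation"],
              alternatives=["new national curriculum introduced the same year"]),
    ChainLink("Children stayed in school longer",
              alternatives=["household incomes rose over the same period"]),
]

# Step 5 of the process: seek out additional evidence where the story is weakest.
for link in chain:
    if link.needs_work():
        print("Needs more evidence or rival explanations addressed:", link.claim)
```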

Conclusion

Go forward, monitor and evaluate – and help to make a difference.

Thank you / Merci pour votre participation.

Burt Perrin – [email protected]