IEA DSM Task 24, Subtask 3: evaluation conundrums

Posted on 29-Nov-2014


DESCRIPTION

Dr Ruth Mourik presented some of our conundrums with Subtask 3, developing an evaluation tool for behaviour changers, at the IEA DSM Task 24 workshop in Oxford on 5 September 2014.

TRANSCRIPT

Subtasks of Task XXIV

IEA DSM Implementing Agreement Task 24

Ruth Mourik

After Behave workshop, 5 September 2014

Subtask 3: monitoring and evaluating 'behaviour' change

Thoughts and struggles …

Life seemed easy…

What is it?
• Monitoring: measuring progress, achievements and the production of planned outputs
• Evaluation: a structured process of assessing success in meeting goals and reflecting on learnings

Why do it the way we do now?
• To establish the effect of policies
• To assess the need for improvements
• To assess value for money
• To contribute to the evidence base for the effectiveness of behavioural interventions at population level

How to do it…

It's getting challenging…

• The evaluation team is often not included in the design, and often not even part of the programme
• Often only a snapshot at the end, or just after; not longitudinal, so sustainability and rebound are often not assessed
• No insight into the formation of networks supporting lasting change
• Large-scale M&E of actual behaviour is too costly, so we rely on modelling or self-reporting (at best), or on 'proxies' such as savings or, even better, cost effectiveness
• Proxies are NOT actual behaviour change; they only tell us about value for money etc.
• No participatory process or feedback loops in traditional M&E

To make life more difficult..

Many of us increasingly value interventions that are:
• tailored
• multidisciplinary
• varied
• qualitative and iterative
• systemic
• and that have outcomes beyond the duration of the project (and beyond energy, etc.)

And at the same time we judge the 'behaviour' of policymakers who demand simple, focused, quantitative and upscaled evaluations that define success in efficiency and effectiveness terms.

But what could M&E look like that is relevant to end-users, 'cost effective' and doable, and that captures lasting actual behavioural change, the formation of networks, a focus on alignment, and the processes underpinning that change?

What now?

• No unified way of both designing and monitoring/evaluating interventions
• Different disciplinary approaches have different methods and foci of M&E, each pertinent to what it aims at
• Perhaps more fruitful to focus on learning processes?

1. Single loop = instrumental, focused on short-term learning about effectiveness in meeting goals; outcome focused
2. Double loop = process oriented, focused on the how and why; long term

Way forward or dead end?

M&E of single loop learning is doable to undertake, and fine for low-hanging fruit and non-habitual change.

Double loop learning is much more difficult, but more relevant to our aims…? We want to focus on:
• Interaction
• Participation quality
• Learning by doing and doing by learning
• Aligning
• Iteration

• Can or should one central body do this?
• Or do we need user-generated content: a decentralised, collective, participatory M&E?
