IEA DSM Task 24 Subtask 3 Evaluation conundrums


DESCRIPTION

Dr Ruth Mourik presented some of our conundrums with Subtask 3 (developing an evaluation tool for behaviour changers) at the IEA DSM Task 24 workshop in Oxford on 5 September 2014.

TRANSCRIPT

Page 1: IEA DSM Task 24 Subtask 3 Evaluation conundrums

Subtasks of Task XXIV

IEA DSM Implementing Agreement Task 24

Ruth Mourik

After Behave workshop, 5 September 2014

Subtask 3: Monitoring and evaluating 'behaviour' change

Thoughts and struggles…

Pages 2-5: IEA DSM Task 24 Subtask 3 Evaluation conundrums

Life seemed easy…

What is it?
• Monitoring: measuring progress and achievements and the production of planned outputs
• Evaluation: a structured process of assessing success in meeting goals and reflecting on learnings

Why do it the way we do now?
• To establish the effect of policies
• To assess the need for improvements
• To assess value for money
• To contribute to the evidence base for the effectiveness of behavioural interventions at population level

How to do it…

Pages 6-11: IEA DSM Task 24 Subtask 3 Evaluation conundrums

It's getting challenging…

• The evaluation team is often not included in the design
• It is often not even part of the programme…
• There is often only a snapshot at the end or just after
• Evaluations are not longitudinal; sustainability/rebound is often not assessed
• No insight into the formation of networks supporting lasting change
• Large-scale M&E of actual behaviour is too costly
• Modelling or self-reported data (at best)
• 'Proxies', such as savings or, even better, cost effectiveness…
• Proxies are NOT actual behaviour change; they are only about value for money etc.
• No participatory process or feedback loops in traditional M&E

Pages 12-16: IEA DSM Task 24 Subtask 3 Evaluation conundrums

To make life more difficult…

Many of us increasingly value interventions that are:
• tailored,
• multidisciplinary,
• varied,
• qualitative and iterative,
• systemic,
• and have outcomes beyond the duration of the project (and beyond energy, etc.).

And at the same time we judge the 'behaviour' of policymakers who demand simple, focused, quantitative and upscaled evaluations that define success in efficiency and effectiveness terms.

But what could M&E look like that is relevant to end users, 'cost effective' and doable, and that captures lasting actual behavioural change, the formation of networks, alignment, and the processes underpinning that change?

Pages 17-19: IEA DSM Task 24 Subtask 3 Evaluation conundrums

What now?

• There is no unified way of both designing and monitoring/evaluating interventions
• Different disciplinary approaches have different methods and foci of M&E, all pertinent to what they aim for
• Perhaps it is more fruitful to focus on learning processes?

1. Single loop = instrumental, focused on short-term learning about effectiveness in meeting goals / outcome focused
2. Double loop = process oriented, focused on the how and why, long term

Pages 20-22: IEA DSM Task 24 Subtask 3 Evaluation conundrums

Way forward or dead end?

M&E of single-loop learning is doable and fine for low-hanging fruit and non-habitual change.

Double-loop learning is much more difficult but more relevant to our aims…? We want to focus on:
• Interaction
• Participation quality
• Learning by doing and doing by learning
• Aligning
• Iteration

• Can or should one central body do this?
• Or do we need user-generated content? A decentralized, collective, participatory M&E?