SATORI Deliverable D12.2
SATORI evaluation and reflection principles and criteria
Brent Mittelstadt (De Montfort University)
Mark Coeckelbergh (De Montfort University)
Kutoma Wakunuma (De Montfort University)
Tilimbe Jiya (De Montfort University)
December 2014
This publication and the work described in it are part of the project
Stakeholders Acting Together on the Ethical Impact Assessment of Research and
Innovation – SATORI which received funding from the European Community’s
Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 612231
Contact details for corresponding author:
Prof. dr. Mark Coeckelbergh, Centre for Computing and Social Responsibility
De Montfort University, The Gateway, Leicester, LE1 9BH United Kingdom
http://satoriproject.eu/
CONTENTS
1 Abstract
2 Executive summary
3 Introduction
4 Background
4.1 DMU's Role
4.2 Principles of Good Practice
4.3 Scope of Tasks 12.2 and 12.3
5 Preliminary evaluation activities
5.1 Pre-evaluation Questionnaire
5.1.1 Methodology
5.1.2 Results
5.1.2.1 Question 1: Which SATORI Tasks are you currently working on, or will work on in the future?
5.1.2.2 Question 2: How do you interpret your role in SATORI?
5.1.2.3 Question 3: Practically speaking, what support do you hope to receive from us (the evaluators)?
5.1.2.4 Question 4: Please rank the following outputs in terms of importance to your organisation and its participation in SATORI
5.1.2.5 Question 5: Please comment on any other aspects of your/your organisation's participation in SATORI that you think are important for evaluating the project
5.1.3 Discussion and Conclusions
5.2 Consortium Meeting Observation
5.2.1 DMU presentation of results deliverable 12.1 and work in progress
5.2.2 Results of Deliverable 12.1
5.2.3 Survey Outcome
5.2.3.1 Interpretation of questionnaire
5.2.4 Work in progress
5.2.4.1 What DMU could do
5.2.4.2 Future Work
5.2.5 Observations from Rome meeting and discussion of problems in the project
5.2.5.1.1 Concerns with interpreting legal documents
5.2.5.1.2 Delays in completing work and/or not doing work at all
5.2.5.1.3 Concerns with budget, especially travel in order to complete for example interviews
5.2.5.1.4 Communication and peer review
5.2.5.1.5 The question of DMU's role
5.2.5.1.6 Conclusion
6 Classification of SATORI Tasks for Evaluation
7 Evaluation and reflection criteria
7.1 Evaluation vs. Verification
7.2 Subjects of Evaluation
7.2.1 Mutual Learning and Mobilisation
7.3 Evaluation Principles and Criteria
7.3.1 Principles and criteria for evaluating stakeholder engagement
7.3.2 Principles and criteria for evaluating recruitment
7.3.3 Principles and criteria for evaluating surveys, interviews and case studies
7.3.4 Principles and criteria for evaluating desk research
7.3.5 Principles and criteria for evaluating recommendations/tools
7.3.6 Principles and criteria for evaluating dissemination/impact
7.3.7 Principles and criteria for evaluating evaluation
7.3.8 Principles and criteria for evaluating administration
7.3.9 Evaluating 'Internal' Activities
7.4 SUMMARY OF EVALUATION AND REFLECTION CRITERIA
8 Conclusion
9 References
10 Annexes
ANNEX 1 – Pre-Evaluation Questionnaire
1 ABSTRACT
This report describes the findings of research activities carried out by De Montfort University in
‘Task 12.2 – SATORI evaluation and reflection principles and criteria’ of the SATORI Mobilisation
and Mutual Learning Action (MML). The report builds upon the findings of ‘D12.1 – Good practice
in evaluation, reflection and civil society engagement’ to specify a range of evaluation and reflection
principles and criteria to be used in evaluating SATORI. The criteria are developed through a series
of preliminary evaluation activities, a classification of SATORI tasks by ‘activity type’, and
application of the principles of good practice and evaluative frameworks considered in D12.1 to the
task classification. A set of criteria applicable to different activity types was specified, and a set
of criteria that we consider applicable to SATORI is offered. These are based on the analysis
undertaken in D12.1, a pre-evaluation questionnaire distributed to partners, and observations made
during the Rome workshop held in October 2014. These criteria will be further
‘operationalized’ for each individual task through input from the consortium concerning appropriate
discipline-specific ‘Indicators of Success’ in a questionnaire to be conducted during Task 12.3. The
principles and criteria reported here will further inform the SATORI evaluation and reflection
strategy to be developed in Task 12.3. This strategy will then be implemented in evaluating
SATORI in Task 12.4.
2 EXECUTIVE SUMMARY
This report describes the findings of research activities carried out by De Montfort University in
‘Task 12.2 – SATORI evaluation and reflection principles and criteria’ of the SATORI Mobilisation
and Mutual Learning Action (MML). Mobilisation and Mutual Learning Actions are a new type of
engagement project funded by the European Commission under the 7th Framework Programme
Mobilisation and Mutual Learning Action Plan (MMLAP). One point of consensus concerning the
activities necessary in an MML is the need for on-going evaluation and reflection (or
monitoring) on project progress, methods and impact.
Despite the general consensus on the need for evaluation in MMLs, it remains unclear how, when
and according to which measures MMLs should be evaluated; the EC has not openly identified
specific requirements to be met by evaluation, beyond having a separate work package dedicated to
evaluation. Following on from a literature review and empirical study into this apparent gap, which
identified 21 principles of good practice (see: Deliverable 12.1), and taking into account observations
made by DMU during the previous months and at the Rome workshop in October 2014, this report
specifies a set of criteria for evaluation and reflection in SATORI.
In particular, in its conclusion this report proposes to focus on: evaluating stakeholder involvement
(the main criterion given this type of project); evaluating recruitment (in particular
representativeness); evaluating surveys, interviews and case studies (in particular engagement);
evaluating and monitoring the way the project produces and deals with legal and other
documentation; evaluating recommendations (how they are arrived at, and whether they acknowledge
all stakeholders' values and perspectives); facilitating the self-evaluation of impact by means of
'Indicators of Success' questionnaires; encouraging partners to reflect on their research (again
self-evaluation, e.g. via workshops); improving and stimulating collaboration (e.g. monitoring
whether feedback is given to other partners); and focusing on engagement and reflectiveness within
the previous criteria. These criteria do not stand on their own but are interrelated. They are also
not yet final; they will be further developed and operationalized in Task 12.3.
A full evaluation will take place at a later stage, but we already began preliminary evaluation
activities before and during the Rome workshop. Our observations during the workshop indicate that
collaboration in SATORI is a criterion that needs extra monitoring, since problems were raised
concerning completion of assigned work, feedback to other partners, and the roles and
responsibilities of researchers involved in the project.
During the coming months, DMU will continue to carry out evaluation activities, organize its
evaluation and reflection workshop in Brussels, and focus on Task 12.3, in which an evaluation
strategy will be developed.
3 INTRODUCTION
This report describes the findings of research activities carried out by De Montfort University in
‘Task 12.2 – SATORI evaluation and reflection principles and criteria’ of the SATORI Mobilisation
and Mutual Learning Action (MML). The report builds upon the findings of ‘D12.1 – Good practice
in evaluation, reflection and civil society engagement’ to specify a range of evaluation and reflection
principles and criteria to be used in evaluating SATORI. In its conclusion it suggests a set of criteria
that may be used for the evaluation of SATORI. These criteria will be further developed and
operationalized in Task 12.3.
The report is structured as follows: Section 4 provides background on SATORI and the scope of
Task 12.2. Section 5 discusses preliminary evaluation activities carried out in Project Months 7 to
12, including a pre-evaluation questionnaire and observations of a consortium meeting in Rome.
Section 6 categorises the tasks undertaken in SATORI as described in the Description of Work for
purposes of evaluation. Section 7 then specifies principles and criteria for evaluating each type of
activity based on the findings of D12.1. Section 8 concludes the report and suggests a number of
criteria for evaluating SATORI.
4 BACKGROUND
As an MML project, SATORI conceptualises ethical assessment as a form of engagement between
research and society. In this context SATORI aims to “improve respect of ethics principles and laws
in research and innovation, and to make sure that they are adequately adapted to the evolution of
technologies and societal concerns.” These proposed aims will be accomplished through the
development of an ethics assessment framework based upon “thorough analysis, commonly accepted
ethical principles, participatory processes and engagement with stakeholders, including the public, in
Europe and beyond."¹ To this end the project will first create an inventory of current practice and
principles in ethics assessment. Stakeholders in ethical assessment in existing research projects will
be identified, with due attention paid to the impact of globalisation and movement towards
conducting research in non-European countries to benefit from ‘looser’ standards of ethical review.
An initial framework will then be developed, with a subsequent review of the prospects for
standardisation and sustaining a network to promote adoption of the framework in future research.
Policy and other developments meriting ethical review will be monitored throughout the duration of
the project, providing an initial channel for promotion and adoption of the framework beyond the
SATORI partners. Results of the MML will be disseminated to stakeholders identified in the project,
including representatives of government and research or policy requiring ethics assessment.
Dissemination activities undertaken in the project seek to engage members of society with a stake in
ethics assessment.
4.1 DMU’S ROLE
As leader of WP12, DMU will provide an independent evaluation of these activities for the duration
of the project, advising the SATORI consortium on potentially beneficial changes to project
activities when necessary. Principles of good practice in MML evaluation and reflection have been
determined in Task 12.1 through review of literature concerning good practice in evaluation of
public participation and participatory research, as well as review of project documents and empirical
study with partners from existing MMLs. In Task 12.2 these principles will be adapted to SATORI’s
specific aims and activities to create a set of customised evaluation and reflection principles and
criteria. A complementary evaluation strategy will then be created in Task 12.3 to employ the
principles and criteria in carrying out evaluation of the project, which will take place in Task 12.4.
4.2 PRINCIPLES OF GOOD PRACTICE
D12.1 described 21 principles of good practice in evaluation and reflection of MMLs, including five
principles related to evaluation criteria:
Criteria Principles
1. Evaluative criteria should be specified according to the context of the particular MML,
including potentially engaging the consortium to identify appropriate discipline-specific or
task-specific criteria for particular activities and deliverables.
2. Evaluation should address the 'generic' qualities of participatory processes such as those
areas of consensus in evaluation literature identified by Chilvers². Evaluation should also
address impacts and evidence which demonstrate that key MML activities and desired
outcomes have been realised—mutual learning and the facilitation of collaboration and
cooperation among stakeholders—using criteria and typologies such as those specified by
Haywood & Besley³ and Walter et al.⁴

¹ SATORI, THEME [SiS.2013.1.2-1] Mobilisation and Mutual Learning (MML) Action Plans: Mainstreaming Science in Society Actions in Research - Annex 1 - Description of Work.
² Chilvers, "Reflexive Engagement?".
3. The success of an MML should be ‘stakeholder oriented’, meaning evaluative criteria should
be linked to factors such as the reaction of stakeholders to engagement events, the new
connections established between engaged stakeholders for communication and collaboration,
the effectiveness of training in building capacities, and the empowerment of underrepresented
groups in MML and societal discourses.
4. Project management should be evaluated, meaning assessing whether objectives, milestones
and deliverables are delivered on time and to acceptable quality, as defined in the
Description of Work (DoW).
5. The ability of the MML to get target stakeholder groups in attendance at engagement events
may be used as an evaluative measure.
Eight ‘methodology’ principles were identified which are relevant to the principles and criteria that
will guide SATORI’s evaluation strategy:
6. In general, evaluation should aim to assist in developing research activities during the life of
the project (e.g. through feedback from evaluators to partners), improve the design of future
related activities, assess project impact,⁵ and provide stakeholders with a better idea of the
value of their participation by tracking influence on the process.⁶ MML evaluation should, at
a minimum, seek to meet these generic aims.
7. Evaluation should consider data beyond the deliverables, including stakeholders in assessing
the quality of dialogue facilitated by the project wherever possible. This approach is
necessary because fairness, competence and learning all have an implicit component of
subjectivity, requiring the perspectives of participants (or ‘learners’) to be collected and
assessed.
8. Despite methodological and epistemic difficulties, an explicit method for evaluating societal
impact should be adopted or designed, with particular attention paid to evidence of mutual
learning (e.g. changes in stakeholder perspectives, beliefs and actions).
9. The evaluation process should be conducted transparently for the benefit of the consortium,
including identifying its scope (e.g. summative/formative, technical/holistic) and the position
of the evaluator in relation to the consortium (e.g. internal, external, and independent) as
early as possible. This approach will help reduce resistance to recommendations made by the
evaluators.
10. The entire consortium should be involved in providing data for evaluation beyond writing
deliverables (e.g. interviews, surveys, reflective meetings, etc. conducted with consortium
partners). Broad engagement allows for assessment of mutual learning between project
partners.
11. Initial templates or indicators of success should be created with consortium input prior to
the start of each research task, and potentially added to or revised according to challenges
faced. This approach can ensure that discipline-specific perspectives inform the assessment
of the success or quality of project activities while remaining responsive to the practical
challenges of engagement.

³ Haywood and Besley, "Education, Outreach, and Inclusive Engagement".
⁴ Walter et al., "Measuring Societal Effects of Transdisciplinary Research Projects".
⁵ Research Councils UK, Evaluation: Practical Guidelines - A Guide for Evaluating Public Engagement Activities.
⁶ Rowe and Frewer, "Evaluating Public-Participation Exercises: A Research Agenda".
12. A clear ‘endpoint’ should be specified at which point project impacts can start to be identified
and evaluated.
13. Evaluation should occur before, during and after the project to ensure all processes and
impacts are evaluated to some degree.
Five further mutual learning principles were identified which are relevant to assessing the quality of
mutual learning occurring within an MML, and thus to the criteria needed to assess mutual learning:
Mutual Learning Principles
14. Data collection and analysis methods conducive to evaluating learning or attitudinal change
over time should be employed, meaning explicit and implicit evidence of mutual learning
should be sought by asking project partners and participants to reflect on changes to their
attitudes and behaviours caused by participating in the project and engaging with unfamiliar
ideas and perspectives.
15. Mutual learning outcomes among project participants should be assessed, for example by
monitoring changes in participant perspectives, beliefs and actions over time. Mutual
learning conceived of as societal impact can also be evaluated according to the extent to
which project outputs have reached and influenced society more broadly.
16. In evaluating the quality of mutual learning that has occurred, the possibility of mutual
learning without absolute consensus should be recognised.
17. A participatory approach to evaluation conducive to mutual learning between stakeholders
and project partners should be used. The appropriate degree of stakeholder involvement,
from designing to carrying out the evaluation and reporting on its findings, must be decided
on a project-specific basis according to the willingness of the stakeholders and the expertise
required to perform the evaluation.
18. A reflexive account of the conception of mutual learning adopted should be provided,
including its theoretical basis (where appropriate), and criteria for evaluating mutual learning
should be consistent with the theoretical approach taken.
Finally, a further three principles concerning the role of reflection in an MML were described which
suggest criteria for evaluating the quality of reflection in SATORI:
Reflection Principles
19. The evaluator should transparently report on perceived pressures and influence from project
partners, to identify, as far as possible, any influence on the evaluation outcomes.
20. When conducting a formative evaluation, the evaluator should provide critical feedback and
recommendations to the consortium to improve ongoing research activities.
21. The evaluator, coordinator and/or work package leaders should encourage partners to
critically reflect on their progress and changes to attitudes and behaviours (e.g. implicit
learning) through formal or informal methods such as interviews, project management
meetings, or peer review of deliverables.
Rather than prescribing specific evaluative actions or criteria to be adopted, these principles are
intended to provide guidance for consortia in planning evaluation and reflection activities. The principles are
therefore broad enough for specification into criteria responsive to the unique characteristics of
individual MMLs. Accordingly, the next step in applying the principles is to select and specify
certain ones into a project-specific set of principles and evaluative criteria, before designing or
choosing evaluation tools and methods appropriate to assessing the project along these lines.
Specifically, the next step of the planning undertaken by DMU in WP12 is to identify which
principles are relevant to engaging stakeholders around ethical assessment frameworks, and to
specify evaluative criteria on this basis in Task 12.2. An evaluation and reflection strategy can then
be created in Task 12.3. Task 12.4, in which an evaluation strategy built upon these principles is
applied in evaluating the success of the SATORI project, is therefore a first step towards empirical
validation of the principles recommended in this report.
4.3 SCOPE OF TASKS 12.2 AND 12.3
In Task 12.2 DMU will identify which principles from D12.1 are applicable to SATORI, and how
they may be specified into particular criteria for evaluating and reflecting upon the success of project
activities. The adaptation of the principles will focus on how the quality of the multitude of project
activities can be assessed using criteria designed for particular work packages, task types or
individual tasks. As SATORI will begin to develop an ethics assessment framework, the method of
development and application of the framework across other project activities and beyond the
consortium will be assessed, for example in terms of how project activities have supported the
adoption or further development of the framework by relevant non-consortium stakeholders.
Evaluative criteria will be developed according to the principles of good practice and evaluative
frameworks identified in Deliverable 12.1 along with the variety of aims pursued in SATORI. For
example, criteria will concern questions of quality and feasibility, the prospects for incorporation of
the ethical framework into the EU system of research and innovation, support for transnational
co-operation in ethical assessment provided by the project, and the contribution to sustainable and
inclusive solutions to key challenges facing European society made by the framework and supporting
project activities. As SATORI demands intensive and prolific deliberation among participants (in
line with its being an MML), criteria focusing on the 'fairness' and 'competence' of these processes,
including the quality of debate facilitated and the representativeness of stakeholders engaged, will
likely feature in the final set of evaluation and reflection criteria.⁷
Each of the above examples concerns the quality of tasks (and underlying methods) completed by the
consortium. However, criteria will also be developed to assess the impact of SATORI as a whole
and its ethical impact assessment framework in particular on citizens and society. For the latter,
DMU will coordinate with the leaders of WP6 who will be assessing the impacts of the ethical
assessment framework in particular. In doing so, the principles identified in D12.1 will also be
considered to ensure that a realistic range of impacts is chosen, one which can be practically
assessed given the timeframe of the evaluation.
The output of Task 12.2 will be a SATORI-specific set of principles and criteria adapted from the 21
principles identified in Deliverable 12.1, and a set of evaluative criteria to be applied in the
evaluation and reflection strategy developed in Task 12.3 (see Conclusion of this deliverable). These
outputs of Task 12.2 will be presented to the consortium at a reflection workshop, organized by
DMU in February in Brussels, at which time partners can voice their comments, concerns and
potential revisions to the evaluation and reflection principles and criteria to be used by DMU in its
evaluation.

⁷ See: SATORI, THEME [SiS.2013.1.2-1] Mobilisation and Mutual Learning (MML) Action Plans: Mainstreaming Science in Society Actions in Research - Annex 1 - Description of Work, 43.

This workshop will form the first instance of 'second order reflection' for the consortium,
which will be evident in other evaluation activities planned in Deliverable 12.3. The overall
evaluation and reflection strategy will describe the activities to be undertaken by DMU to measure
the success of SATORI against the principles and criteria identified in Deliverable 12.2. The
evaluation strategy will cover the most important aspects of all WPs, concentrating on the high-level
activities of fact-finding on ethics assessment in Europe and beyond, dialogue and participation
among the partners and stakeholders, various communication activities, policy watch, and heritage or
sustainability activities. The strategy created in Task 12.3 will be flexible to ensure it can respond to
changes in project activities. Additionally, to ensure reflection among the consortium it will describe
how, when and what type of feedback will be provided to consortium partners based on evaluation of
completed tasks and outputs, which is intended to improve future activities in their WP(s).
The outputs of Tasks 12.2 and 12.3 will be subject to peer review by the consortium via e-mail
feedback and discussion at future consortium meetings (as described above). The strategy will also
be reviewed every six months to ensure it remains relevant to changing project activities. However,
peer review and the subsequent six-monthly reviews will not occur during the timeframe (M07-12)
described in this document.
5 PRELIMINARY EVALUATION ACTIVITIES
Although DMU's evaluation of SATORI is not set to begin until Project Month 18 according to the
DoW, a series of preliminary evaluation activities has been planned which will inform the
development of the evaluation and reflection strategy in Task 12.3. These activities include a
'Pre-Evaluation Questionnaire', consortium meeting observations (so far carried out especially at the
meeting in Rome), and an 'Indicators of Success Questionnaire'. The first two of these activities
are described in this deliverable, as they directly inform the proposed evaluation/reflection criteria
by providing initial insight into the consortium's expectations of our input as evaluators. The
'Indicators of Success Questionnaire' will take place between Months 13 and 18 as part of
Task 12.3.
5.1 PRE-EVALUATION QUESTIONNAIRE
Preliminary evaluation of SATORI has started with a pre-evaluation questionnaire distributed to all
individuals currently working on the project via online survey. The decision to use the questionnaire
was based on recommendations provided by evaluators for other MMLs encountered in the empirical
study described in Deliverable 12.1. The questionnaire (see: Annex 1) concerns each partner's
interpretation of their role in the project, expectations of the evaluators and evaluation, and the
relative importance of a variety of project outputs and impacts on an individual and organisational
level. The sample was purposefully broad, including all partners and not only WP leaders, to gather
as many perspectives as possible concerning our role and input as evaluators.
The questionnaire aimed to encourage partner participation in the evaluation before it officially
begins, as several evaluators in the D12.1 study reported difficulties with establishing a long-lasting
and honest collaboration with consortium partners. In the words of the evaluator who inspired the
questionnaire, it is intended to get evaluation "on their radar" without requiring significant effort
from the partners, due to its brevity. Additionally, the questionnaire provides an
initial indication of each partner’s expectations concerning their role in the project and the role of the
evaluators, along with early indications of how the success of particular activities can be measured.
Each of these potential contributions of the questionnaire will assist in the creation of an
evaluation/reflection strategy (Task 12.3) that matches the expectations and needs of the consortium
as far as possible.
5.1.1 Methodology
An online questionnaire was constructed on surveymonkey.com and distributed to all 61 partners
currently listed as working on SATORI (the DMU team not included). Partners were identified via
an up-to-date consortium contact list provided by Trilateral. Partners were invited to participate via
e-mail, which is the primary communication tool for the consortium, meaning invitations can
reasonably be expected to reach partners. Three invitations were sent, with the latter two as
‘reminder e-mails’ requesting partners to complete the survey. Invitations were sent over a period of
6 weeks.
The questionnaire consisted of four open-ended questions and one ranking exercise. Questions were
initially based on the pre-evaluation questionnaire encountered in D12.1. Revisions were made by
the DMU team in lieu of piloting, given the questionnaire’s relative simplicity and brevity. One
open-ended question was solely descriptive, asking respondents to indicate the work packages and
tasks for which they are responsible. The other three open-ended questions were interpretive and
descriptive, asking respondents to describe their interpretation of their role in the project, their
expectations of the evaluators, and any other comments relevant to evaluating their (organisation’s)
role in the project. The ranking exercise posed 9 types of outputs and impacts and asked the respondent
to rank them in order of importance. An ‘Other’ option was included to allow the respondent to
input outputs/impacts not present.
Analysis was a mix of qualitative interpretive and quantitative analysis. Responses to the three
open-ended interpretive questions were subjected to thematic analysis; words and phrases with
similar meanings were grouped together into themes and presented narratively below.8 The ranking
exercise was analysed quantitatively to discover the relative importance of each output type according to
respondents, with the average ranking for each type of output calculated by adding together the
individual rankings and dividing by 21 (the number of responses).
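The averaging described in the paragraph above can be sketched in a few lines of Python. The output types and rank values below are invented purely for illustration (the actual survey covered 9 output types and 21 responses):

```python
# Sketch of the average-ranking calculation described above.
# Each response maps an output type to the rank the respondent gave it;
# the average rank of an output is the sum of the ranks it received
# divided by the number of responses. All data here are hypothetical.

def average_rankings(responses):
    """Return the mean rank per output type across all responses."""
    totals = {}
    for response in responses:
        for output, rank in response.items():
            totals[output] = totals.get(output, 0) + rank
    n = len(responses)
    return {output: total / n for output, total in totals.items()}

# Three invented responses ranking two output types (1 = most important):
sample = [
    {"Policy impacts": 1, "Journal articles": 2},
    {"Policy impacts": 2, "Journal articles": 1},
    {"Policy impacts": 1, "Journal articles": 2},
]
averages = average_rankings(sample)
```

In the survey itself the divisor was fixed at 21 (the number of responses received); the sketch simply divides by however many responses are supplied.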
5.1.2 Results
Of the 61 partners invited to participate in the survey, 21 responded. Questions 1-4 were completed
by all respondents, except for one respondent who did not answer Question 2. Solely for the sake
of simplicity, all respondents are referred to as ‘her’ in the discussion of results, although this should
not be taken as an accurate representation of any respondent’s gender, which was not considered
necessary for the analysis of responses.
5.1.2.1 Question 1: Which SATORI Tasks are you currently working on, or will work on in
the future?
Responses to this question were used solely for interpreting answers to the other questions. The
question was posed to allow for analysis of responses based on the respondent’s ‘objective’ role in
the project, as opposed to their interpretation of their role as provided in response to Question 2. The
responses to Question 1 will therefore not be analysed separately.
5.1.2.2 Question 2: How do you interpret your role in SATORI?
Responses to this question varied considerably, which was expected given the broad range of tasks
and disciplines found across the project and consortium. Many of the responses merely mirrored the
task descriptions found in the DoW, and thus were an ‘objective’ representation of the partner’s role
following on from Question 1, as opposed to an ‘interpretation’ wherein partners described their
work in terms of normative responsibilities or desired outcomes beyond what is described in the
DoW. The discussion here focuses on ‘interpretive’ responses rather than those merely describing
what is written in the DoW.
Some respondents emphasised their role in recruiting specific stakeholder groups; for example, one
partner emphasised a responsibility for “engaging different stakeholders in Serbia.” Others
identified with particular ‘perspectives’ to ‘represent’ in the project, for example in representing “the
industry point of view...[and to] help to understand the valuable contribution of ethical and societal
assessment of industry policy on R&I.” Others saw themselves as providing “advice from a
consumer/NGO point of view,” from “legal and human rights perspectives,” or “providing general
guidance on EU-related issues.”
One respondent emphasised the importance of different aspects of her role as the project progresses:
“being responsible for project logo and web design is a vital part of the project start up…press
releases and feature stories will grow in important throughout the project.”
8 Miles and Huberman, An Expanded Source Book: Qualitative Data Analysis.
One respondent seemed to suggest that her value to the project is as an “ethical expert, involved in
the practice of ethical vetting of research.” Another identified herself as a “roadmapping expert,
[providing] insights into how socio-technical changes evolve and manifest.” Another saw her role as
“communicating with the public via traditional and social media [as] an important aspect of
SATORI.”
Interestingly, one respondent flagged up a potential early problem with her involvement in the
project, saying that while she is only responsible for doing interviews, she sometimes feels
“uncomfortable with the instruments of social sciences,” suggesting a gap between the work required
in the DoW and the partner’s skillset. She added that she can “only help on practical tasks, because
at the moment none of my professional competences are required.”
5.1.2.3 Question 3: Practically speaking, what support do you hope to receive from us (the
evaluators)?
From the perspective of planning an evaluation and reflection strategy, this question is perhaps the
most important in terms of gauging the consortium’s expectations of DMU. Several themes were
found in the responses.
Perhaps the most commonly requested form of support was feedback, albeit on a variety of topics.
Numerous respondents hope to receive feedback regarding the “suitability of reports” and “tasks” in
terms of fit with “project objectives,” “the content and quality of plans,” adherence to “standards of
the project…and the required level for the EC,” and how individual teams fulfil their tasks, including
problems encountered. The emphasis in these responses is on the evaluators providing a ‘critical’
view of consortium activities and outputs, suggesting that the “evaluation process [should] be
organised in such a way that we can gain from it during our tasks” by identifying problems or
weaknesses and feeding these back to individual partners to “improve on [their] inputs and
deliverables” and “better focus [their] actions.” One partner described this as “objective feedback on
the different activities,” although the feasibility of such feedback is questionable given the variation
in indicators of quality and work procedures across the various tasks and disciplines represented
within SATORI.
Feedback need not, however, come entirely from the evaluators as indicated by the peer-review of
deliverables. In supporting feedback between consortium partners, one partner suggested that the
evaluators can provide “feedback and practical tips on how to improve cooperation between different
partners, how to implement their suggestions/approaches,” suggesting evaluators take up a role as
‘mediators’ in the channels between individual partner organisations through which feedback and
review are provided.
This aspect of feedback hints at another theme in the data which focuses on monitoring the quality of
communication within the consortium and to external stakeholders. This need may stem from the
size of the consortium and complexity of the project: “since it is a large project continuous
communication on tasks and evaluations performed are needed…also to ensure that all relevant
knowledge is shared throughout the project.” The latter concern was shared by another partner, who
requested “support regarding how the project is proceeding internally,” for example by reporting on
work and progress made by other partners (especially within other work packages). One aspect of
such support is in pre-emptively identifying barriers to cooperation, so that partners will “receive as
early as possible help in identifying potential obstacles that may hinder further cooperation with
partners or achieving the project’s goals.” Doing so was seen to “optimise the links between the
work packages by providing constructive suggestions on challenges and possible improvements.”
As seen in the emphasis on feedback, a third theme concerns improving the quality of outputs, or
“insights into the usefulness and practicality of our results.” An explicit link was highlighted
between outputs and “the added value of the project in comparison to other activities in the field of
RRI” (responsible research & innovation)9, and the “sustainability and impact” of the project. This
theme was further operationalized in Question 4 in which outputs (and thus, sources of impact) were
ranked in terms of importance.
5.1.2.4 Question 4: Please rank the following outputs in terms of importance to your
organisation and its participation in SATORI
Tables 1 and 2 show the average ranking and ranking breakdown for each output type across the
entire sample of responses. Given the diversity of respondents in the consortium, further analysis on
the basis of ‘interpreted roles’ or work packages was not possible. Furthermore, this aspect was not
seen as relevant to defining DMU’s role as evaluator as we have an equal responsibility towards all
partners, meaning emphasising the importance of impacts as ranked by a particular sub-set of the
consortium would be inappropriate.
Table 1 – Average Ranking of Outputs
9 cf. Stahl, “Responsible Research and Innovation”.
Table 2 – Ranking Breakdown of Output Types
Four respondents identified a further type of output as ‘Other’: standards, learning about ethics and
ethics assessment, (influence on) research ethics committees on the EU-level, and defining a new
role for science journalism in the ethical debate about RRI. The last of these was ranked as most
important by the respondent suggesting it, while, interestingly, the others were ranked least or second-to-least
important. This, along with the small number of ‘Other’ responses overall, suggests
(but does not prove) that the outputs identified in the table were all seen as relevant to the project,
and sufficient to cover the interests of the consortium in terms of impact. This finding was further
discussed at the consortium meeting in Rome (see: Section 5.2).
As shown in the tables, policy impacts were identified as the most important impact, followed by
refereed academic journal articles/conference proceedings, and networking with governmental
bodies/policy-makers. While policy impacts were clearly the most important, being ranked
first in importance by 8 of 21 respondents, it is interesting to note that four of the nine outputs
achieved an average rank of 5+, suggesting that numerous outputs are seen as highly important
across the consortium rather than a single type. Recognising the size and discipline-diversity of the
consortium, this distribution should perhaps have been expected; however, it confirms that
evaluating the impact of the project along one or two variables will not be sufficient. In preparing
the evaluation and reflection strategy it may therefore be necessary to give equal attention to the
outputs deemed important by all partners, regardless of their relative importance in the table above.
Another interesting finding is the emphasis on networking and influence; following policy impacts
and academic outputs, networking/influence with policy-makers was ranked highly. This suggests
that partners hope to achieve greater influence on the policy-making process through participation in
SATORI. Tracking this type of influence as a third party may be difficult given that ‘influence’ can
be intangible and informal, occurring for example through a chance conversation or a project
output finding its way onto the desk of a relevant policy-maker. In evaluating this type of impact
going forward it may therefore be necessary to rely on self-reporting on impact by partners, or
drawing correlations between other activities and likely future impact, as mentioned in D12.1
(Section 5.1.5 on quality of processes and impact).
5.1.2.5 Question 5: Please comment on any other aspects of your/your organisation’s
participation in SATORI that you think are important for evaluating the project
Of the 21 respondents, 8 chose to answer this optional question. Answers tended to be specific to
understanding the role of particular organisations and thus will become relevant when evaluation is
actually undertaken for particular partners and tasks at a later stage of the project. The answers will
therefore not be discussed here, as their relevance and application to tasks undertaken in the project
will remain unclear until work has begun and been reported upon for relevant tasks, in deliverables,
discussions at consortium meetings, and communication between partners.
5.1.3 Discussion and Conclusions
The responses of partners provide significant valuable input for the evaluation and reflection strategy
which will be developed in Task 12.3. Briefly, the consortium expects significant critical feedback
from the evaluators and from fellow consortium partners (although how welcome ‘honest feedback’
will be once given is questionable if it is negative), early identification of obstacles to completing
tasks according to schedule, assistance in communicating and keeping up-to-date with the progress
of other partners, and assessment of the quality and potential impact of outputs. Some of these activities may be
fulfilled by other partners; for instance, feedback can be provided not only by the evaluators but
partners peer-reviewing each other’s work, as well as the project coordinator and EC project officer,
the latter of which can also ‘verify’ delivery against deadlines and agreed upon objectives. It will
therefore be important in developing the strategy in Task 12.3 to clearly distinguish between the
responsibilities of DMU as evaluators and those of the partner organisations, project coordinator and
EC project officer.
One role not mentioned here that would appear to be a relevant aspect of evaluation according to
the results of D12.1 is the evaluator assessing the quality of project outputs and activities in terms of
‘responsibility’ to the stakeholders engaged (and potentially, the EC). In particular, DMU as
evaluators may need to provide intervention when ‘stakeholder engagement activities’ fail in some
way (see: Section 0) or when the quality of outputs is sufficiently lacking in terms of potential future
impact, although the latter will be subject to a significant degree of interpretation given the
unpredictability of future impact.
In terms of relevance to principles and criteria for evaluation and reflection as described in this
deliverable, the questionnaire seems to have less to say. However, insight can be gained from
analysis of the output ranking exercise—the range of outputs seen as important suggests that impact
criteria will need to be applicable to a broad range of types of impact. Beyond this, the emphasis on
communication and ‘work updates’ between partners suggests that criteria will be required for
evaluating the quality of project consortium meetings, perhaps in terms of ‘mutual learning’
opportunities and the quality of discussion. Criteria are also needed concerning the uptake of
feedback by partners; as mentioned in the DoW, this does not mean that partners will be expected to
make all adjustments and revisions suggested by evaluators (and peer-review partners), but rather
that a description of the adjustments made and the reasoning for accepting and rejecting certain
feedback will be required following evaluation reports—in other words, criteria for evaluating the
quality of dialogue between the evaluators and partners will be required.
5.2 CONSORTIUM MEETING OBSERVATION
The second preliminary evaluation activity involved observation and note-taking at the SATORI
consortium meeting on October 13, 2014 in Rome. Generally speaking, consortium meetings and
SATORI events will be observed and evaluated by DMU wherever practically feasible. The scope
and purpose of these observations will be confirmed in Deliverable 12.3, meaning the observation of
the October 13 meeting involved basic reflection on the success of the meeting by the observer, as well as
reactions to the brief presentation by DMU regarding results of the pre-evaluation questionnaire. One
key issue reported on in these observations is potential conflicts or discrepancies between our role as
evaluators found in the pre-evaluation questionnaire versus the opinions shared at the meeting.
5.2.1 DMU presentation of results deliverable 12.1 and work in progress
During the meeting in Rome, DMU gave an update on the work that has been done in WP12. The
work completed was outlined in Deliverable 12.1, the aims of which were to analyse good practice in
evaluation and reflection, review possible evaluation and/or reflection methods and criteria for those
methods, and identify principles of good practice in MML evaluation. In Deliverable 12.1,
MMLs were described in general; they will later be specified in relation to SATORI in Task 12.3.
The methods used in Deliverable 12.1 included a review of the literature on the evaluation of public
participation research and a thorough review of project documents. In addition, an empirical
study of existing MML partners was conducted, in which interviews and questionnaires were used to
collect data from participants.
5.2.2 Results of Deliverable 12.1
In Deliverable 12.1, we identified 21 principles of good practice in evaluation and reflection. Of
these, 13 were criteria principles on how to evaluate MMLs, and 5 were mutual learning
principles on how stakeholders can be engaged and how civil society can be involved in the
evaluation of MMLs. The remaining 3 were reflection principles, with emphasis on the
role of reflection and of the evaluator in an MML.
The key points under the criteria principles included, firstly, that evaluative criteria should be
specific to the context of the MML. Secondly, evaluation of MMLs should address the qualities of
participatory processes and learn from research, for example through reflexive engagement. Thirdly,
the success of an MML should be stakeholder-oriented; for instance, it should facilitate new
connections and collaborations among participants. Lastly, the management of the project should
also be evaluated in order to measure the ability of an MML.
Within the criteria principles, some principles relate to the methodology of conducting the evaluation
process. These include that the methodology should assist the development of research activities
during the life of the project, for example through feedback between the evaluator and partners; that
the methodology should consider data beyond the deliverables, including involving
stakeholders in evaluating the quality of dialogue; and that the evaluation process should be
transparent, with the entire consortium providing data for evaluation.
Among the results of WP12.1 were mutual learning principles. These principles suggest that
evaluators should collect evidence of mutual learning by asking partners and participants to reflect
on the changes to their attitudes and behaviours due to participation in the project. In addition,
evaluators should monitor changes in participants’ perspectives. Lastly,
evaluators should involve stakeholders in the evaluation process.
The last dimension of the 21 principles is based on reflection. Under the reflection principles,
there should be transparent reporting on perceived pressure and influence of partners on the
evaluation, the feedback given among participants should be critical, and partners should be
encouraged to critically reflect on their progress and on changes in attitudes and behaviours. This can be
achieved through interviews, project management meetings and peer review of deliverables.
5.2.3 Survey Outcome
A questionnaire was distributed to the 61 partners and there were 21 responses. The questions in the
questionnaire included: What tasks are participants working on? How do participants interpret their
role in SATORI? What support do participants hope to receive from us, the evaluators?
The questionnaire also touched on outputs, asking participants how important particular
outputs are for their organisation and its participation in SATORI. The outputs referred to
included, among others, refereed academic articles, project reports, policy impact, networking with
NGOs, and media attention.
The rationale of the questionnaire questions was to look at the following areas:
Tasks - To identify each partner’s role in the project.
Interpretation of roles - To understand each partner’s interpretation of their role and to allow
partners to indicate their expectations of the evaluators and evaluation.
Outputs - To understand the relative importance of a variety of project outputs and impacts
on an individual and organisational level.
The sample was purposefully broad, including all partners and not only WP leaders. This was done
in this way in order to gather as many perspectives as possible concerning DMU’s role and input as
evaluators.
5.2.3.1 Interpretation of questionnaire
The responses received were interpreted, and the main themes deduced from the
results showed that the following was expected from the evaluator: critical feedback to the
consortium (WP12), early identification of obstacles to completing tasks on time, and assistance in
communicating and keeping up to date with the progress of other partners.
The results of the questionnaire were interpreted in relation to evaluators providing a ‘critical’ view
of consortium activities and ‘outputs’. In addition to the types of ‘outputs’, four respondents
identified a further type of output as ‘Other’. This included standards, learning about ethics and
ethics assessment, influence on research ethics committees on the EU-level and defining a new role
for science journalism in the ethical debate about RRI.
5.2.4 Work in progress
We explained at the workshop that DMU is working on Deliverable 12.2, ‘SATORI evaluation and
reflection principles and criteria’, and that this deliverable will be submitted in month 12. We also
explained that Deliverable 12.2 builds on Deliverable 12.1, ‘Good practice in evaluation, reflection
and citizen society engagement’. The work in Deliverable 12.2 applies the principles identified in 12.1 to
SATORI’s aims and objectives. Consequently, Deliverable 12.2 contains a choice of appropriate
principles and criteria for SATORI’s evaluation (see the Conclusion of this deliverable). We
communicated that Deliverable 12.2 will define which of the 21 principles are applicable to SATORI.
5.2.4.1 What DMU could do
We proposed that, as an evaluator, DMU could give critical feedback at partner events and
consortium meetings, which could help remove obstacles to completing tasks. DMU could also act as
a coordinator, in the sense that DMU could ask partners to report on progress and difficulties, and
could assess the influence and impact of stakeholder involvement. DMU could also evaluate if and how mutual learning
is taking place. This could be achieved through communication and keeping up to date with progress.
5.2.4.2 Future Work
The future tasks that will be carried out by DMU are Task 12.3 and Task 12.4. Task 12.3 will
produce the SATORI evaluation and reflection strategy. In this task, DMU will develop the chosen
principles/criteria into a practical evaluation/reflection methodology and a strategy for involving
stakeholders in the evaluation. In Task 12.4, DMU will put into practice the evaluation and reflection
strategy developed in the preceding task. The evaluation will likely include some of the following
activities: procedural and substantive evaluation, a second pre-evaluation questionnaire, attendance
at project meetings (at least one person from DMU), interviews with project partners (at project
meetings), feedback to project partners to improve ongoing work, learning outcomes assessment
and impact assessment. Finally, preparation for a workshop in Brussels to take place in February
2015 is under way.
5.2.5 Observations from Rome meeting and discussion of problems in the project
During the Rome meeting DMU made a number of observations regarding problems in the project.
Based on these observations and DMU’s presentation, DMU triggered and facilitated a discussion
about the following issues:
5.2.5.1.1 Concerns with interpreting legal documents
The meeting raised the issue of interpreting legal documents and sought guidelines on how
this could be resolved. Specifically, the issue was that the interpretation of legal documents varies from
country to country and that documents may be in different languages. To deal with this issue, it was
suggested that country reports should only briefly summarize the legal framework for ethical
assessment, drawing on translators or legal experts where necessary. In addition, it was
agreed that the European legal system could be used as a point of departure.
5.2.5.1.2 Delays in completing work and/or not doing work at all
Concerns were raised about delays in completion of work by some partners. It was felt that some
partners were not taking the completion of work or doing assigned work seriously, which was then
left to other partners to pick up. Some of these issues may stem from a lack of
understanding of what an EU-funded project entails in terms of completing work and meeting
deadlines, which is closely related to different working cultures. The fact that some partners
are new to EU-funded projects and, in addition, do not come from a research-oriented environment,
may contribute to delays in meeting required tasks. This does not make things easier for those
partners who are focused on meeting deadlines and completing their
assigned work.
There was a suggestion that, in order to avoid impacting the project negatively, the consortium could
vote against erring partners. It was, however, emphasised that this was a last resort and should be
avoided if at all possible. It was recommended that, as a first step, the task leaders and, if necessary, the
coordinator should intervene when deadlines are not met and/or work is not done.
5.2.5.1.3 Concerns with budget especially travel in order to complete for example interviews
Budgetary issues remained a concern, with most partners worried about how they would be able to
travel, particularly with regard to conducting interviews. One contribution to a solution is for
different partners to share the same invited stakeholders, though it remains to be seen whether this will work. A
problem with this potential solution is that shared stakeholders may give similar answers.
5.2.5.1.4 Communication and peer review
The problem of feedback on WPs was raised as a concern, particularly as most partners failed to
communicate in a timely fashion with regard to feedback, or failed to review assigned tasks
completely. The meeting learnt that a partner would normally send out their work for peer review to all
partners with the expectation that they would all give feedback. This was seen as problematic,
because some partners were never going to review any work without being explicitly assigned that
task. It was suggested that WP leaders should specify who is to act as peer reviewer, preferably
members with expertise in the area and from outside the WP, as this would ensure that members did not
peer review their own work. Furthermore, the number of reviewers should be reduced to about 4 or 5
partners, rather than the whole consortium. In addition, it would be ideal for there to be a deadline in
order to ensure a timely turnaround for reviews. With regard to other consortium documents, it was
agreed that all partners should endeavour to engage with the documents and give feedback. This
would help partners keep abreast of developments in the SATORI project.
5.2.5.1.5 The question of DMU’s role
In light of the concerns raised above, primarily with regards to delays in completion of tasks and
undertaking the required work as well as the issue of peer review, DMU was asked what its role
might be in this respect. There was a tacit implication that DMU’s role would require ensuring that
partners completed their work and undertook the required peer review. However, DMU reminded the
partners at the meeting and the coordinator that DMU’s role could not be that of a coordinator, but
rather that of an evaluator, which could signal problems in SATORI with the aim of ensuring that the
quality of the work in the project was upheld. To this end, DMU encouraged partners to contact the
evaluators if they are having problems undertaking or completing their work. DMU would
subsequently try to broker solutions to differences or misunderstandings between partners. In
addition, DMU was of the view that, in order to complete work and give relevant reviews, partners
ought to have the necessary expertise, capacity and personnel to be able to carry out the required
work.
DMU was also asked how it will evaluate SATORI and how to measure MMLs and their impact. As
evaluator, DMU informed the meeting that the chosen evaluation criteria for SATORI will be
outlined in Deliverable 12.2. This will be done after analysing Deliverable 12.1, which covered several aspects of
conducting evaluation and measuring MMLs. The choice of evaluation criteria will also take into account
questionnaire answers provided by partners.
5.2.5.1.6 Conclusion
DMU used the Rome meeting to present its results to the partners and to facilitate discussion about
ongoing concerns in the project. The meeting was also used to get feedback on DMU’s role. DMU
encouraged partners to approach DMU with their concerns, but also indicated that it will not take
over the tasks of the coordinator and the task leaders. DMU also proposed criteria for evaluation and
offered the partners a chance to indicate how they wanted to be evaluated. At the next workshop in
Brussels (February 2015), the criteria specific to SATORI will be presented and feedback will be
sought on our suggestions for DMU’s evaluation strategy.
6 CLASSIFICATION OF SATORI TASKS FOR EVALUATION
As suggested in D12.1, MMLs are defined as interdisciplinary activities, meaning evaluation must
account for different ways of doing research and measuring success across the different disciplines
represented in a project’s consortium. Following on from good practice discussed in the empirical
study described in D12.1, evaluation of SATORI will occur in part at a task-specific level so as to
allow for discipline-specific methods and metrics of quality.
While task-specific evaluation is desirable for this reason, it presents a much more complex
challenge for the evaluator, who is responsible for conducting separate evaluations and, perhaps,
defining task-specific quality criteria. To simplify this complex task somewhat, similarities were
identified between tasks in terms of the general methods and purposes of the tasks undertaken in
SATORI, as described in the DoW. These similarities allow for ‘mid-level’ criteria to be defined at
the level of ‘task types’, which are derived from ‘high-level’ evaluative frameworks (see: D12.1) and
specified into ‘low-level’ task-specific ‘indicators of success’, or characteristics of successful
activities or outputs.
Classifications, or ‘task types’, were assigned by comparing the description of each Task in the
SATORI Description of Work to the classifications defined below. The classifications are derived
from D12.1 and the description of the aims and characteristics of MMLs as a new type of research
and coordination action. The list is not intended as a comprehensive representation of MML
activities. The classifications are instead based on the authors’ interpretation of the defining
characteristics and ‘ends’ of MMLs, and the general types of activities undertaken. Here,
‘stakeholders’ refers to individuals, groups and organisations engaged through the MML, whereas
‘partners’ refers to the consortium partners and contracted personnel responsible for organising and
carrying out the activities described. The types of activities are:
Desk research refers to any analysis of extant data that does not explicitly involve empirical data collection or a structured literature review. Desk research can include analysis of empirical data and literature, so long as it does not involve structured collection of either. For example, secondary analysis or consideration of findings from a prior task or consortium partner is considered desk research.
Recruitment refers to any activities which recruit stakeholder participants to MML activities
(e.g. stakeholder engagement, capacity building, survey, interviews, etc.).
Stakeholder engagement – dialogue refers to activities in which stakeholders (civil society
organisations, members of the public, ethical committee members, regulators, etc.) are
engaged in a process of dialogue or deliberation. It is expected that stakeholders and coordinators will learn from one another. Intended outcomes include mutual learning between all participants, facilitation of network/relationship building, and exchange of ideas/perspectives (and therefore awareness raising).
Stakeholder engagement - capacity building refers to stakeholder engagement activities in
which stakeholders are engaged in training programmes, educational programmes and similar
activities in which ‘one-way’ flow of information is expected. This does not preclude the
possibility of mutual learning; rather, the purpose of stakeholder engagement - capacity
building is to help stakeholders develop particular skills, awareness or knowledge. Activities
focus on passing the desired information or skills from the partners or activity leaders to
stakeholders. The ends are therefore distinguishable from stakeholder engagement, which
focuses on mutual learning, although overlap can be expected.
Stakeholder engagement - feedback refers to stakeholder engagement activities in which feedback is specifically sought on a tool, framework or other deliverable created by the consortium. Mutual learning and dialogue between stakeholders are possible, but the explicit goal of the activity is to discover how stakeholders feel about the output considered.
Survey refers to any type of questionnaire or survey instrument intended to collect data from ‘stakeholders’ (i.e. not consortium partners) identified in the MML.
Interviews refers to any type of interview instrument intended to collect data from ‘stakeholders’ (i.e. not consortium partners) identified in the MML.
Case studies refers to any type of case study protocol intended to collect and analyse data from ‘stakeholders’ (i.e. not consortium partners) identified in the MML. Case studies may include interviews, document analysis, surveys, etc., but are distinguished from other activities by the explicit usage of a case study methodology.
Recommendations/tools refers to the creation of practice-oriented recommendations or tools, for example for refining legal, regulatory and ethical frameworks. No restrictions are placed on the source of recommendations, which may be developed with input from both partners and stakeholders.
Dissemination/impact refers to any activity related to disseminating outputs of the MML or
encouraging/supporting/developing impact of the project and its outputs. Examples include
support activities related to heritage and sustainability of stakeholder networks established
through the project.
Evaluation refers to any activities undertaking or supporting evaluation of the project
including its ongoing activities, outputs and impact. Evaluation may involve data collection
from both stakeholders and consortium partners.
Administration refers to administrative activities undertaken in coordination of the project.
Table 3 classifies the tasks undertaken in SATORI.
Task
Number
Title Description Type Deliverables
WP 1
Comparative analysis of
ethics assessment practices
1.1 Criteria and tools for analysis The partners will identify the criteria, categories, methods and tools necessary for a productive analysis of current practices related to ethics assessment in scientific research and related innovation activities. This will provide a broad conceptual tool to review, categorise and compare ethics assessment approaches.
Desk research D1.1
1.2 Inventory of approaches
within fields
Using the criteria and tools of analysis developed within Task 1.1, the partners
will make a systematised inventory of current practices and principles of ethics
assessment across five fields.
Literature review D1.1
1.3 Comparison between fields Using the criteria and tools of analysis developed in Task 1.1 and the findings of
Task 1.2, the partners will study and determine what differences and similarities
exist between frameworks and practices for ethics assessment across the five
different fields in Task 1.2
Desk research D1.1
1.4 Inventory and comparison of
approaches by different
stakeholders
The partners will study how principles and practices of ethics assessment differ for
different types of stakeholders who engage in ethics assessment, including
researchers, research ethics committees, research funding bodies, industry,
policy-makers, legislators and civil society actors. The partners will interview at
least three representatives of each of the main stakeholder groups. In addition, a
survey will be undertaken of the national ethics committees in all 27 Member
States to determine how Member States have established ethical frameworks of
values and principles governing different types of research.
(1) Desk research
(2) Interviews
(3) Survey
D1.1
1.5 International comparison Drawing on the results of the previous tasks, the partners will make an up-to-
date and comparative analysis of EU and international practices related to ethics
assessment in scientific research and related innovation activities.
Desk research D1.1
WP 2 Dialogue and participation
2.1 Landscape of existing MML
projects and other relevant,
ethics-related projects
The partners will identify existing MML projects (2) as well as other relevant,
ethics-related projects (3) and explore their approaches to participatory
processes with a view to understanding their incorporation of and/or interaction
with different stakeholders. The output of this task will be (1) a list of MML and other relevant projects detailing their findings regarding participatory processes and (2) a short handbook of participatory processes. The partners will base their
report of what has worked well and what has not in part on interviews with MML
and other project leaders and key contributors.
(1) Desk research
(2) Interviews
D2.1
2.2 Survey of MML actors and
other stakeholders
The partners will identify and characterise stakeholders relevant to this project.
The output of this task will be a confirmation of the various stakeholder categories
explored in WP1 and a description of their drivers and concerns as well as an
assessment of the “scale” and significance of each category. The partners will
also compile a contact list of at least 300 individual stakeholders. Each of these
stakeholders will be sent a web-based survey that will build on the output of WP1
and will provide insight into their motivations, activities and concerns.
(1) Desk research
(2) Recruitment
(3) Survey
D2.2
2.3 Assessment of capacity
building and training needs The partners will evaluate the different capacity building and training needs of
various stakeholder groups identified in Task 2.2 and develop a programme of
activities that would address such needs. The partners will be especially
mindful of the “Year of Citizens” challenge of raising citizens’ awareness of how
they can benefit from EU policies, and to stimulate citizens’ active participation in
EU policy-making and specifically the Horizon 2020 challenge on “Inclusive, innovative and secure societies”. Citizens will be engaged in dialogue and
public debate in order to discuss expected impact on citizens and civil society of
ethics assessment. As far as possible, the partners will attempt to implement the
programme of activities during the course of the SATORI project and, where they
cannot (because of a shortage of time or resources), they will include those outstanding items in their recommendations (Task 9.3).
(1) Desk research
(2) Stakeholder engagement -
capacity building
(3) Stakeholder engagement -
dialogue
D2.3
WP 3 Legal aspects and impacts
of globalisation
3.1 Legal and regulatory aspects The partners will identify and analyse the legal frameworks that currently guide and/or
constrain ethical procedures within research and research assessment in the EU.
Desk research D3.1
3.2 International differences in
research cultures, ethical
standards and legal frameworks
Drawing on the results of Tasks 1.5 and 2.2, the partners will analyse the
broader background conditions that contribute to observable differences in
ethical assessment frameworks and practices across the EU and in selected
non-EU jurisdictions. The partners will further enrich their analysis with a focus on
existing international frameworks.
Desk research D3.1
3.3 Impact of globalisation on
research activities and resulting
problems for research ethics
The partners will determine how globalisation is changing research agendas, activities
and assessment procedures, and how these changes may cause problems for ethics
assessment. The partners will conduct at least six case studies within the fields
covered in WP1, considering the globalisation of research activities by various
stakeholders, within academia as well as industry. The minimum data collection is likely to include two interviews per case, plus publicly available data such as website content or publications.
(1) Desk research
(2) Case studies
D3.1
3.4 Policy and legal options for
developing research ethics
within the context of
globalisation
Based on the outcomes of the earlier tasks, the partners will study what policy and legal
options may be available for overcoming obstacles resulting from legal developments
and globalisation for the development and implementation of a common ethics
assessment framework for research within the EU. The partners will propose policy
and legal options that could minimise unethical conduct and absence of due
attention to scientific responsibility. The initial proposal of policy and legal
options will be discussed in an open workshop comprising consortium members,
legislators and other policy-makers as well as affected stakeholders. The workshop
will allow the collection of feedback on the initial draft of the policy and legal
options which will be used to revise these and develop them into a policy briefing
document.
(1) Desk research
(2) Stakeholder engagement -
feedback
(3) Recommendations/tools
D3.1
WP 4
Roadmap for a common EU
ethics assessment
framework
4.1 Common ethical values and
principles
The partners will investigate to what extent a set of ethical values and
principles can be derived based on the ethics assessment frameworks
established by ethics assessors in the Member States. The correlations
discovered in WP1, and their further analysis here, will be used to propose
and defend a set of shared ethical values and principles for European ethics
assessment.
Desk research D4.1
4.2 Cultural diversity and
national differences
The partners will conduct a more detailed analysis of the results from Tasks
1.5 and 4.1 in order to account for the commonalities and differences and, in
particular, to what extent cultural diversity and national differences play a
role. The partners will assess how easy or difficult it might be to establish a
common EU ethics framework as well as the pros and cons of a common
framework.
Desk research D4.1
4.3 Outline of a common ethics
assessment framework and
workshop
Based on the findings of Tasks 4.1 and 4.2, the partners will develop a
detailed outline for a common ethics assessment framework. The partners will
convene a large workshop of stakeholders to whom we will present the outline
framework and invite their views.
(1) Recommendations/tools
(2) Stakeholder engagement -
feedback
D4.2
4.4 Roadmap towards adoption
of a fully developed
framework
Taking into account the feedback from stakeholders, the partners will
generate a roadmap for producing a fully developed framework, obtaining
consensus from as many stakeholders as possible and adoption of the
framework at the EU and MS levels as well as among other stakeholders.
(1) Recommendations/tools
(2) Stakeholder engagement - dialogue
D4.3
4.5 Training sessions on the new
framework
The partners will conduct training sessions on the new detailed framework.
The training sessions will focus on a mix of relevant, interested stakeholders,
including representatives from national ethics committees. The goal of the
training sessions is to train ethics assessors in using the ethics framework
developed in the SATORI project in their professional activities of ethics
assessment. A secondary goal is to get feedback from participants that may be helpful in fine-tuning and implementing the framework on future occasions.
The focus for the project will be on ethics assessors, including:
1. research ethics committees, both local and national
2. policy makers in science and innovation policy, both in government and in
academic bodies
3. research funding bodies (including national contact points for FP7/H2020 and
members of future (regulatory)
programme committees)
4. industry
5. NGOs/Civil Society actors (including science journalists)
6. individual scientists and innovators, both profit and non-profit
(1) Stakeholder engagement -
capacity building
(2) Stakeholder engagement -
feedback
D4.4
WP 5
Risk-benefit analysis of ethics
assessment activities
5.1 Cost-effectiveness and risk-
benefit of ethics assessment
The partners will examine the cost-effectiveness and risk-benefit of ethics
assessment. They will do so through a set of six case studies, a literature review
and interviews with leading ethicists in various Member States. The aim of this
task is to develop an evidence base demonstrating the cost-effectiveness
and risk-benefit of ethics assessment.
(1) Case studies
(2) Literature review
(3) Interviews
D5.1
5.2 Methodology for assessing cost-
effectiveness and risk-benefit
Drawing on the evidence base compiled in Task 5.1, the partners will develop a
methodology for examining the cost-effectiveness and analysing the risk-benefit of the
ethics assessment activities.
Recommendations/tools D5.2
5.3 Workshop on the cost-
effectiveness and risk benefit of
ethics assessment
The partners will convene a workshop to which we will invite a group of about 10
people selected to be representative of the various stakeholder categories and to
whom we will present the findings of the evidence base as well as the
methodology. The aim of the workshop will be to “test” our results on the
stakeholders, to see if they have any views on how the evidence base or
methodology could be improved or “tweaked” so that the latter will be useful
for stakeholders aiming to demonstrate the cost-effectiveness and risk-benefit of
ethics assessment to others. The partners may seek to enhance the evidence base
or modify the methodology based on the outcome of the workshop. Having
done so, the partners will prepare an explanatory memo or short article (10
pages or so) which they will send to all stakeholders, inviting comments.
(1) Stakeholder engagement -
feedback
(2) Recommendations/tools
D5.3
WP 6 Measuring the impact of
ethics assessment
6.1 Identifying the different types
of impacts of ethics
assessment
Before we can measure the impacts of ethics assessment, we first need to
identify the different types of impacts. Thus, in this task, the partners will examine
the different types of impacts that ethics assessment could have. The
identification of impacts of ethics assessment will draw on and interact with
the interviews conducted and the case studies developed in WP1, the survey
sent out to stakeholders in WP2 and the case studies developed in WP3.
Desk research D6.1
6.2 Methodology for measuring
the impacts of ethics
assessment
In this task, the partners will develop a methodology for measuring the impacts
of ethics assessment. We will develop the criteria for enabling the measurement
of impacts. The criteria will include questions of quality and feasibility, the
prospects for incorporation into the EU system of research and innovation,
improved transnational co-operation, and the contribution to sustainable and
inclusive solutions to key challenges facing European society, as well as
positive impact on citizens and civil society. The partners will also develop
criteria for assessing the impact of SATORI as a whole and its ethical impact
assessment framework, in particular on citizens and civil society.
(1) Recommendations/tools
(2) Evaluation
D6.1
6.3 Stakeholder views on ethical
impact assessment
The partners will elicit the views of stakeholders on the outputs of both Tasks 6.1
and Task 6.2 in at least two ways – first, by sending a summary paper to our
contacts with a cover page with a set of questions (“e-mail solicitation”) aimed at
getting their views on the identified impacts and methodology for measuring
those impacts. Second, we envisage a workshop with stakeholders to discuss
whether there are any impacts we might have overlooked and whether they think
the methodology is user friendly and adequate in measuring the impacts and
how or in what circumstances such a methodology could prove to be of value.
(1) Survey
(2) Stakeholder engagement –
feedback
D6.2
6.4 Pilot impact study on FP
ethics review
In this pilot study, the partners propose to collaborate with the Commission by
including relevant work from our project, especially with regard to the ethics
assessment framework, in the EC’s Intensive Ethics Review training course,
which helps diffuse information and awareness on the EC’s Ethics Review
procedure (2). The course targets the research community, Commission staff,
National Contact Points (NCPs), related national authorities, and ethics
reviewers themselves.
Stakeholder engagement –
capacity building
D6.2
WP 7
Standardizing operating
procedures and certification
for ethics assessment
7.1 General study of
standardising operating
procedures
In this task, the partners will review standardisation as a horizontal activity.
The partners will review and analyse the progress within ISO in developing a
privacy impact assessment standard. The partners will also explore the
possibility of a CEN Workshop Agreement. Finally, the consortium will explore
standards related to ethics or social responsibility. All partners are expected to
contribute to this task.
Desk research D7.1
7.2 General study of certification
in assessment procedures
As in Task 7.1, the partners will study to what extent assessment procedures in general have become subject to certification, including the related obstacles and problems.
Desk research D7.2
7.3 Development of a framework
for standardising operating
procedures for ethics
assessment
Taking into account the findings of Task 7.1, the partners will prepare a report
on the feasibility of standardising operating procedures for ethics assessment...
Our report will be based on interviews with several national standards bodies
and representatives from the national ethics committees. The partners envisage
a series of workshops involving the public and the project partners; thus, the
standardisation process will be open to any interested parties. In this task, to
which the development of the possible CWA is allocated, the contribution of all
partners will be requested, e.g., by contributing to the development of the
business plan for the CWA and its contents, and by attending several
workshops. Relevant external stakeholders, who may be instrumental in
progressing the standardisation process, will be invited to the workshops.
(1) Desk research
(2) Interviews
(3) Stakeholder engagement – dialogue
(4) Stakeholder engagement - feedback
D7.1
7.4 Development of a framework
for certification for ethics
assessment
Taking into account our findings from Task 7.2, the partners will examine and
assess the feasibility of certification for ethics assessment. If certification seems
desirable and feasible, the partners will take steps to initiate such certification
measures.
(1) Desk research
(2) Dissemination/impact
D7.2
WP 8 Heritage (sustainability)
8.1 Strategy for sustainability of
the SATORI network
The partners will develop a strategy to ensure the sustainability of the SATORI initiatives and give future participants wishing to
pursue this work a roadmap for efficiently implementing the MML
recommendations, at least those that are not likely to be implemented in the
immediate future. Once the partners have drafted the strategy, we will e-mail it
to our network of contacts in order to get some feedback (“e-mail solicitation”).
(1) Dissemination/impact
(2) Stakeholder engagement -
feedback
D8.1
8.2 Identifying competent leaders
willing to take SATORI into
the future
The partners will identify those stakeholders who have been most active and most
effective in promoting the goals of the SATORI project, and will interview and assess them as potential leaders for taking SATORI into the future once the EC-funded project comes to an end.
(1) Recruitment
(2) Interviews
(3) Dissemination/impact
D8.1
8.3 Attracting other sources of
financing The partners will identify possible sources of financial support from other national
and EU sources. The partners will build a business case which also takes into
account the findings of previous work packages.
Dissemination/impact D8.1
8.4 Workshop of SATORI
stakeholders regarding
sustainability
If the outputs from the first three Tasks seem promising, the partners will
convene a workshop of key SATORI stakeholders and funding bodies to
“concretise” the strategy developed in Task 8.1 – i.e., to agree an action plan
and who will do what to make the SATORI network sustainable – and the
business plan developed in Task 8.3.
(1) Stakeholder engagement –
feedback
(2) Dissemination/impact
D8.2
WP 9 Policy watch and policy
recommendations
9.1 Identification and inclusion of
relevant EU strategic priorities
and policy developments
The partners will consider EU strategic priorities and will monitor, throughout the project, other EU-related initiatives and policy developments at local, national and European levels, in order to better connect ethics assessment with policy cycles. The partners will monitor key EU websites, CORDIS News,
Commissioners’ speeches, Council agendas and notes, impact assessments
and other means in order to identify EU strategic priorities that potentially raise
ethical issues. Partners will similarly monitor key policy developments at local
and national levels of their home countries.
Desk research D9.1
9.2 Posting news of EU-related
initiatives and policy
developments
The partners will publish a “newsletter” containing news of the policy developments
and other EU, national and local initiatives which the consortium believes merit
drawing to the attention of its contact list. It will also publish the same items as a blog
on the project’s website. Where and when appropriate, the consortium will encourage
its contacts to make their views known to policy-makers, especially where the
consortium and its contacts believe policy developments and other initiatives raise
ethical issues which should be subject to an ethical impact assessment.
(1) Stakeholder engagement –
dialogue
(2) Dissemination/impact
D9.2
9.3 The SATORI consortium’s
integrated assessment framework
and recommendations
The partners will create an integrated ethics impact assessment framework. The
partners will also draw together the recommendations from all previous work
packages and synthesise them in a single report. The full set of policy
recommendations will be discussed with policy-makers during a dedicated policy
workshop. The final version of the consortium’s recommendations will include input
and feedback received during the high-level event for policy-makers envisaged in this
task. The partners will present the final version of the report from this task at the
project’s final conference for all stakeholders (Task 10.7).
(1) Recommendations/tools
(2) Stakeholder engagement –
feedback
(3) Dissemination/impact
D9.3
WP 10 Communication
10.1 Elaborate the consortium’s
communications strategy The consortium has already established a high-level strategy to disseminate the
project’s findings and to engage stakeholders. This task aims to further
elaborate that strategy and then proceed with its implementation. The strategy
consists of the following main activities:
1. Identify stakeholder categories and decide on the level of granularity of
stakeholder types.
2. Identify stakeholders’ motivations and why we want to reach each of the
stakeholder types.
3. Identify different means, ways, and media of reaching each of the different
stakeholder types.
4. Identify individual stakeholders and compile their e-mail addresses.
5. Evaluate the cost-effectiveness of each of the different ways of reaching out to
stakeholders.
6. Translate and distribute press releases and selected other materials in order
to reach out to stakeholders in all Member States.
7. Initiate and implement the plan.
(1) Desk research
(2) Dissemination/impact
D10.1
10.2 Establish and maintain the
project’s website
The project’s website is one of the main sources of information available to most
stakeholders. The consortium will post its deliverables on the website as well as
other items aimed at stakeholders, such as press releases (Task 10.3) and the blog
(Task 9.2).
Dissemination/impact D10.2
10.3 Press releases and feature
stories The consortium will prepare and translate press releases for distribution to the
media and other stakeholders identified in Task 2.2 in all Member States. The
consortium will also prepare other information material, such as two-sided fact
sheets, leaflets and/or brochures for distribution at workshops and conferences
and information for blogs on other high profile websites.
Dissemination/impact D10.3
10.4 Journal articles The consortium will prepare articles for peer-reviewed journals based on some or
all of the project’s deliverables (at least those which the partners think will be of
interest to journals).
Dissemination/impact N/A
10.5 Presentations at third-party
workshops and conferences The consortium will prepare and present papers and slide shows for
presentation at third-party workshops and conferences attended by relevant
stakeholders.
Dissemination/impact N/A
10.6 Social media The consortium will explore use of social media such as Twitter, Facebook,
LinkedIn, Ming and/or Wikipedia to distribute information about the project and
its findings and to engage stakeholders.
Dissemination/impact N/A
10.7 The project’s final conference Final SATORI conference. (Note: detail not provided in the DoW.) Dissemination/impact (?) D10.4
WP 11 Project management
11.1 Project co-ordination Coordination of project activities including monitoring and supervision of work
progress, maintaining project implementation plan, and maintaining contact with EC’s
project officer.
Administration D11.1
11.2 Project operational support Various coordination activities, including monitoring timely production of deliverables
and orchestrating “peer review” of deliverables, and preparing reports to EC project
officer.
(1) Administration
(2) Evaluation
D11.1
11.3 Project financial administration The financial administration includes managing diligently the project funding. The
project co-ordinator receives the funding from the Commission and distributes it
to partners in a timely fashion.
Administration D11.1
WP 12 Evaluation
12.1 Good practice in evaluation
and reflection
Report on good practice in evaluation and reflection in MMLs based on literature
review and interviews with MML consortia. Presentation of results to consortium
at evaluation workshops to validate ongoing evaluation of project and future plan.
(1) Desk research
(2) Evaluation
D12.1
12.2 SATORI evaluation and
reflection principles and
criteria
Specification of evaluation and reflection principles and criteria for SATORI.
Based on findings from Task 12.1. Presentation of results to consortium at
evaluation workshops to validate ongoing evaluation of project and future plan.
(1) Desk research
(2) Evaluation
D12.2
12.3 SATORI evaluation and
reflection strategy
Specification of evaluation and reflection strategy for the lifespan of SATORI,
including methods of data collection and timing of evaluation reports and
workshops. Presentation of results to consortium at evaluation workshops to
validate ongoing evaluation of project and future plan.
(1) Desk research
(2) Evaluation
D12.3
12.4 Evaluation and reflection This task will put into practice the evaluation and reflection strategy developed in
the preceding task. The actual activity of the task will be defined in detail in Task 12.3. Stakeholders will likely be involved, at a minimum, to evaluate the quality
of stakeholder engagement activities.
(1) Evaluation
(2) Stakeholder engagement -
feedback
D12.4
12.5 Remedial action The six-monthly evaluation report will include suggestions for improvement,
where required. WP12 will communicate this to the consortium and the rest of the
consortium, led by UT, will commit itself to react appropriately to the report and
the remedial suggestions. The consortium does not have to follow the suggestions
but, where it decides not to do so, will justify this decision in writing
Evaluation N/A
Table 3 – Classification of SATORI Tasks
The classifications assigned to each task are intended only to facilitate evaluation by grouping tasks
similar in terms of methods and purpose. The classifications are not necessarily a realistic portrayal
of the content of each task as it will actually occur, given the scope for considerable departure from the DoW in completing any given task. A key distinction is made between (1) stakeholder
engagement activities through which mutual learning and mobilisation primarily occur, (2) other
forms of ‘fact-finding’ such as desk research, and (3) tasks concerned with the output and public
perception (e.g. impact) of the project. Each group is amenable to procedural and substantive
evaluation; however, the distinction between these three groups helps distinguish between (1)
activities where mutual learning and mobilisation should be expected in line with the general aims of
MMLs, (2) ‘traditional’ forms of research without an explicit focus on mutual learning and
mobilisation and thus more appropriately evaluated in terms of traditional ‘project evaluation’ (see:
Section 7.2), and (3) activities concerning social, political and other forms of impact which present
unique problems with ‘tracking’ and ‘predicting’ impact.
Another important factor to note is that the classification only addresses ‘Tasks’ that are part of a
‘Work Package’. Consortium and Advisory Board meetings, informal communications and project
conferences were not classified. These activities create opportunities for mutual learning between
consortium partners. Their quality will therefore also be evaluated.
7 EVALUATION AND REFLECTION CRITERIA
As described in D12.1, evaluation can be defined as “the process of determining the merit, worth and value of things”10, and can be used to describe many “different kinds of judgments, from informal assessment that relies on intuition or opinion, to well-defined and systematic research that makes use of social science research methods”11. Concerning research and engagement projects, evaluation will at a minimum focus on the “design, implementation and effectiveness” of the project12, as well as outputs (e.g. reports, deliverables), outcomes (e.g. products, processes) and impacts (e.g. follow-on products, processes and research)13. In general, evaluation aims to assist in developing research activities during the life of the project (e.g. through feedback from evaluators to partners), improve the design of future related activities, assess project impact14, and provide participants with a better idea of the value of their participation by tracking influence on the process15. Evaluation will often be required or called for by sponsors or funders of research programmes, who wish to pose certain questions or have an indication of the relative merit of the funded research16. Criteria against which the success of the project can be assessed are typically used17, and can be pre-defined or developed in situ. Methods of data collection and analysis are often guided by prescriptive discipline-specific guidelines18.
An initial distinction can be drawn in the evaluation of participatory research between procedural evaluation, which examines the quality of procedures or methods of stakeholder engagement and mutual learning while they occur, and substantive evaluation, which assesses the outcomes of procedures19. The former can contribute to the refinement of research and participatory processes by identifying weaknesses or areas for improvement prior to the project’s conclusion20, thereby acting as a feedback mechanism or ‘double loop’ to refine project activities21 through assessment of deliverables, communications and events. The latter, on the other hand, evaluates the quality of a project’s outputs and outcomes to evaluate its success22, for example in meeting its stated objectives, impacts or success indicators, or against a pre-defined framework for assessing quality (see: Section [reference not found]). As with procedural evaluation, post-project substantive evaluation can inform the design and conduct of future participatory processes and research. In
10 Scriven, Evaluation Thesaurus.
11 Joss, “Evaluating Consensus Conferences: Necessity or Luxury”, 89.
12 Tuominen et al., “Evaluating the Achievements and Impacts of EC Framework Programme Transport Projects”, 61.
13 Ibid.; Arnold, “Understanding Long-Term Impacts of R&D Funding”; Bornmann and Marx, “How Should the Societal Impact of Research Be Generated and Measured?”; Nagarajan and Vanheukelen, “Evaluating EU Expenditure Programmes: A Guide. Ex Post and Intermediate Evaluation. XIX/02–Budgetary Overview and Evaluation”.
14 Research Councils UK, Evaluation: Practical Guidelines - A Guide for Evaluating Public Engagement Activities.
15 Rowe and Frewer, “Evaluating Public-Participation Exercises: A Research Agenda”.
16 O’Sullivan, “Collaborative Evaluation within a Framework of Stakeholder-Oriented Evaluation Approaches”, 518.
17 e.g. Rowe and Frewer, “Public Participation Methods”; Webler, “‘Right’ Discourse in Citizen Participation: An Evaluative Yardstick”; EuropeAid, Evaluation - Guidelines.
18 e.g. O’Sullivan, “Collaborative Evaluation within a Framework of Stakeholder-Oriented Evaluation Approaches”; EuropeAid, Evaluation - Guidelines.
19 Abelson and Gauvin, Assessing the Impacts of Public Participation, 12; Merkx et al., “Evaluation of Research in Context: A Quick Scan of an Emerging Field”; Mickwitz, “A Framework for Evaluating Environmental Policy Instruments: Context and Key Concepts”; Rowe and Frewer, “Public Participation Methods”, 10; Tuominen et al., “Evaluating the Achievements and Impacts of EC Framework Programme Transport Projects”.
20 Chess, “Evaluating Environmental Public Participation”, 773.
21 Abelson and Gauvin, Assessing the Impacts of Public Participation, 3.
22 Ibid.
defining criteria for SATORI’s evaluation and reflection activities, it is important to recognise these fundamental distinctions and prepare relevant criteria for each type.
Each type of evaluation focuses in some way on the ‘quality’ of research activities, participatory processes, outcomes, impact, etc. Discourse in this area tends to focus on establishing and justifying evaluative criteria to describe the quality of the process (or procedure) and its outcomes23. Many potential criteria of quality can be used, and as with the evaluation of any form of public participation, such criteria must be carefully chosen to ensure a match with the aims and field of the project being evaluated24. Frameworks and criteria for project evaluation should be understood as helping assess the quality (or, as is sometimes said, effectiveness or success) of the activity25; while the criteria vary across disciplines, frameworks and projects, the central purpose of evaluative criteria as facilitating assessment of quality does not change.
As reported in D12.1, a universally accepted framework to define effectiveness does not exist in the evaluation literature26, due perhaps to participatory processes occurring across numerous disciplines with different epistemic frameworks and standards of quality27. MMLs can be thought of as interdisciplinary projects lacking a single real-world problem focus28. While SATORI broadly focuses on ethical assessment frameworks, the myriad of contexts in which ethical assessment is required precludes a single discipline-specific focus. Following from this, the criteria chosen to evaluate SATORI will not be based in a single discipline, and will be specified in Task 12.3 for evaluating individual tasks according to discipline-specific methods of good practice and indicators of quality.
7.1 EVALUATION VS. VERIFICATION
Before describing evaluation criteria, an initial distinction between evaluation and verification needs to be drawn to avoid confusion. All activities and deliverables of SATORI will be ‘verified’ in terms of compliance with the milestones and requirements set out in the DoW. No matter the evaluative framework used, a basic distinction in approaches to evaluation adopted by MMLs can be seen here: between ‘verification’ as an administrative or management process to ensure the requirements of the DoW are being met on time, and evaluation as a normative exercise through which the quality of deliverables, activities and impacts is evaluated, in some cases to refine the project’s ongoing activities. Verification will be carried out by the project coordinator and a European Commission project officer. Verification results will be included but only briefly discussed in forthcoming evaluations of the project, due to their lack of normative assessments of quality. ‘Normative’ evaluation moves beyond mere administrative concerns about meeting contractual objectives and deadlines, and instead deals with the procedural and substantive quality of processes and outputs. It is this normative type of evaluation which will be the primary concern of DMU as evaluators.
23 Rowe and Frewer, “Evaluating Public-Participation Exercises: A Research Agenda”.
24 Rowe and Frewer, “Public Participation Methods”, 3; Smith, Nell, and Prystupa, “FORUM”.
25 Rowe and Frewer, “Evaluating Public-Participation Exercises: A Research Agenda”, 517.
26 Rowe and Frewer, “Evaluating Public-Participation Exercises: A Research Agenda”.
27 Klein, “Evaluation of Interdisciplinary and Transdisciplinary Research”; Bergmann et al., Quality Criteria of Transdisciplinary Research: A Guide for the Formative Evaluation of Research Projects; Bornmann and Marx, “How Should the Societal Impact of Research Be Generated and Measured?”.
28 Mittelstadt, Stahl, and Wakunuma, SATORI Deliverable D12.1 - Good Practice in Evaluation, Reflection and Civil Society Engagement.
7.2 SUBJECTS OF EVALUATION
For MMLs, certain subjects of evaluation seem necessitated by the types of activities undertaken (see: Section 6). As discovered through interviews with MML consortia in Task 12.1, however, evaluating mutual learning is not enough by itself. Two general types of evaluation are needed in MMLs: (1) generic project evaluation akin to the EuropeAid framework; and (2) evaluation of the quality of participatory processes, network building, capacity building, and the facilitation of communication, collaboration and mutual learning between stakeholders. The latter form of evaluation must necessarily look at the impacts of the project, as learning and the exchange of ideas between stakeholders are continual processes occurring throughout and beyond the life of the project. The respondents also made clear that MMLs are not unique in the difficulties faced in evaluating impact. However, the attempts among respondents to begin to evaluate societal and policy impact in particular, in some cases in terms of mutual learning, show that the methodological and practical difficulties of impact evaluation are not undermining the desire among practitioners to track the influence of research. Given the emphasis on mutual learning, capacity and network building in MMLs, attempts to evaluate project influence in these terms should be encouraged and perhaps supported through the development of evaluation frameworks and methods for MMLs.
To summarise, at a minimum generic project evaluation criteria appear to be necessary, such as those specified in the OECD framework. Criteria to assess mutual learning are also required, mutual learning being a key characteristic of MMLs. The same may be said for assessing the quality of the networks of stakeholders and communication/collaboration channels built through MMLs.
7.2.1 Mutual Learning and Mobilisation
The conception of mutual learning adopted by evaluators will significantly influence the criteria
chosen to evaluate the quality of learning occurring in participatory processes, as well as the types of
evidence sufficient to demonstrate changes in attitudes, beliefs or behaviour that may indicate
learning has occurred. For example, a one-way ‘deficit model’ of learning would logically lead to
success being evaluated against the degree of learning among societal participants while lacking
reflexive assessment of learning among researchers (or evaluators).
D12.1 reviewed four theories of learning: organisational, expansive, transformative or reflective, and social. In SATORI, mutual learning will be conceived of within the transformative and reflective traditions (for example, transformative learning theory builds upon expansive learning; see: D12.1), as these theories best account for the ‘mutual learning’ and ‘mobilisation’ aims of MMLs, as explained
below. The theories adopted frame the choice of criteria for evaluating mutual learning occurring
throughout the project, but in particular through participatory events and in terms of impact. Prior to
discussing criteria for evaluating the different types of SATORI events below, it is worth briefly
revisiting these approaches.
Transformative and reflective learning are built upon expansive learning theory, which does not conceive of learning as knowledge acquisition or “becoming an active participator of cultural practices,” but rather views learning as a form of knowledge creation29. This view of learning is equivalent to the ‘double-loop learning’ described in theories of organisational learning30, by which existing conceptions and frames of reference are broken down and re-created to better account for unfamiliar phenomena or perspectives. Expansive learning encourages dialogue between a diversity
29 Saari and Kallio, “Developmental Impact Evaluation for Facilitating Learning in Innovation Networks”, 228.
30 Mittelstadt, Stahl, and Wakunuma, SATORI Deliverable D12.1 - Good Practice in Evaluation, Reflection and Civil Society Engagement.
of stakeholders (including researchers and evaluators) to create ‘double-loops’ through which prevailing research strategies and accepted wisdom can be criticised and refined31. In practical terms, expansive learning theory suggests that learning will occur when participants in research are encouraged to engage with unfamiliar perspectives and phenomena in some sort of dialogue (which, as a necessarily two-way activity, implies ‘mutual’ learning).
Transformative and reflective learning are closely related to expansive learning; in all three, learning is initiated by encountering unfamiliar perspectives or phenomena requiring interpretation. Transformative learning is a process through which an individual’s frame of reference, understood in hermeneutic terms as the set of preconceptions and understandings which frames interpretation of phenomena such as speech32, is changed33. The approach is closely related to reflective learning, which can be described as a process through which a learner comes to understand the pre-analytic assumptions held by himself and others which frame how phenomena are interpreted34. Transformative or reflective learning occurs when these assumptions are analysed, challenged and potentially changed, through (for example) contact with unfamiliar perspectives or values in dialogue with others.
According to these approaches, learning is possible when the ability to interpret a phenomenon fails or is incomplete due to a lack of relevant experiences, meanings, evidence or beliefs in one’s frame of reference. Learning occurs when the frame of reference is changed to accommodate new phenomena. A critical distinction here is between ‘points of view’ and ‘habits of mind’; while both are products of experience and ‘cultural assimilation’, the former are more transient than the latter35. In the process of problem solving, new points of view are adopted and tested to interpret an unfamiliar phenomenon or idea; in this sense a person’s point of view may frequently change, whereas a habit of mind consists of the cultural and personal ‘backdrop’ which cannot be abandoned or escaped in the act of interpretation36. Transformative learning occurs when both points of view and habits of mind are changed, meaning the entire frame of reference is critically assessed to identify and change the assumptions underlying interpretation.
Transformative learning becomes reflective when the “legitimacy of other sources of knowledge and the perspectives of other actors” is acknowledged37, meaning the learner’s frame of reference or assumptions are opened to modification. To understand this idea a distinction is required between reflection and reflexivity: reflection requires that attention be given to the “broadly salient attributes of the objective in question”, including consideration of alternatives and consequences, whereas reflexivity requires “self-awareness and self-reflection that comes from recognizing that attributes of the subject construct and condition the object”38. In terms of learning, reflective learning becomes reflexive when the learner is willing to question the frame of reference behind his own interpretations and not only those of others. Therefore, reflective learning, when practiced reflexively, is inherently
31 Saari and Kallio, “Developmental Impact Evaluation for Facilitating Learning in Innovation Networks”, 231.
32 cf. Gadamer, Truth and Method; Heidegger, Being and Time.
33 Mezirow, “Transformative Learning”.
34 Chilvers, “Reflexive Engagement?”.
35 Mezirow, “Transformative Learning”.
36 Ibid.; Gadamer, Truth and Method; Patterson and Williams, Collecting and Analyzing Qualitative Data: Hermeneutic Principles, Methods and Case Examples.
37 Chilvers, “Reflexive Engagement?”.
38 Stirling, “9. Precaution, Foresight and Sustainability: Reflection and Reflexivity in the Governance of Science and Technology”, 226.
critical because it requires analysis of the basic, potentially unacknowledged beliefs, values, judgments and structures which frame interpretation of the world.
When operationalised into criteria for the evaluation of participatory processes and MMLs in general, a reflexive approach to learning requires self-critical thought and questioning, ideally among all those involved in a participatory discourse, in particular the project partners responsible for interpreting and presenting findings based on the perspectives of other stakeholders. Reflexivity, then, may need to be practiced through explicit reflection events in which dialogue between partners is constructed so as to require decisions made across the consortium to be subjected to critical questioning. Reflexivity would require participants in the dialogue to openly acknowledge “their underlying assumptions, motives, and commitments relating to the forms of public dialogue they orchestrate or are exposed to.” Such admissions would require a degree of openness and humility from participants, particularly in admitting which stakeholders have been excluded as irrelevant39, and how the desired outcomes of the participatory processes serve their interests.
While simple to describe, such reflexivity is likely difficult to achieve in practice, particularly when openness reveals weaknesses or biases that could undermine one’s position and the credibility of the outcomes of the process40. Reflexivity requires a considerable effort in critical self-questioning (akin to philosophical inquiry), which raises questions over the appropriateness of requiring reflexivity among participants in participatory research. In particular, the limited time in which participants are typically involved in a participatory process (for instance, a focus group) likely creates a practical barrier to reflexivity. As a result, for the purposes of SATORI the quality of participatory processes will not hinge upon the reflexivity implicit in the process; reflexivity will be recognised as desirable or ‘above and beyond’ what can be expected, but not required for ‘high-quality’ stakeholder engagement. Instead, transformative learning will be used as a benchmark, evidenced by changes to the behaviours and perspectives of stakeholders achieved through contact and dialogue with alternative perspectives and frameworks of understanding.
This type of learning is ‘mutual learning’ in the sense that participants learn from one another. According to one MML partner encountered in D12.1, the mutual learning aspect of MMLs is “not an expert talking to an audience, but let us say it is one of the created forms of debate, wherein knowledge from various perspectives are mutually exposed to one another.” Another commented that it is important that “all partners feel they’re contributing to the project equally but in different ways,” referring to the necessity of two-way learning rather than one-way information exchange. This suggests the success of learning in the project can be assessed by the exchange of concepts and ideas in both directions in the connections and dialogues between stakeholders fostered by the MML. Following on from such ‘mutual learning’, transformative learning theory suggests that by expanding participants’ awareness of other stakeholders and alternative perspectives/interests, they may be ‘mobilised’, or more likely to collaborate with (or at least consider) stakeholders with whom they are newly brought into contact via stakeholder engagement activities, and the interests these stakeholders represent beyond their pre-existing concerns. Transformative learning theory therefore provides an
39 Chilvers, “Reflexive Engagement?”, 300.
40 It is worth explaining why a social learning approach was not adopted. Social learning follows on from transformative and reflective learning, but further requires participants in a dialogue to adopt collective interests and pursue a collective good. Given the definition of MMLs provided in D12.1, such a ‘collective interest’ and focus on communitarian problem solving is not clearly required (although it is certainly possible, for example in MMLs concerning ‘global’ problems such as ocean pollution; see: D12.1 Appendix 1).
appropriate framework for evaluating stakeholder interactions within MMLs, as it is responsive to
the two key aims of MMLs: ‘mutual learning’ and ‘mobilisation’.41
Before translating these considerations into a set of evaluative criteria for SATORI, it must be noted that the same practical limitations on reflexivity noted above cannot be said to exist for the SATORI consortium; indeed, as the partners are researchers who regularly examine bias (seen for example in research methods discourses), reflexivity may more reasonably be expected of them. For SATORI, while the quality of the project in terms of participant experiences, impact, etc. will only be evaluated in terms of transformative ‘mutual learning’, the quality of inter-consortium dialogue will be evaluated in terms of (reflexive) reflective learning. The same arguments for reflective learning as ‘mutual learning’ and ‘mobilisation’ apply, with the added expectation that researchers have greater abilities (or responsibilities) for self-criticism (or reflexivity).
7.3 EVALUATION PRINCIPLES AND CRITERIA
The criteria described here are derived from the findings of D12.1, where two evaluative frameworks42 were recommended as ideally suited to assessing the quality of MMLs. Specifically, they follow on from a key principle of good practice in MML evaluation identified in D12.1: MMLs should ensure evaluation addresses ‘generic’ evaluation criteria for participatory processes such as those identified by Chilvers (2008) (representativeness and inclusivity, fair deliberation, access to resources, transparency and accountability, learning and efficiency), while also including criteria to assess the impacts and evidence of mutual learning and the facilitation of collaboration and cooperation among stakeholders, such as those specified by Haywood & Besley (2013) or Walter et al. (2007) (network building, trust in others, understanding of others, community identification, distribution of knowledge, system knowledge, goal knowledge, transformation knowledge).
The following sub-sections detail how these frameworks can be applied to evaluate the quality of the different types of activities described in Table 3, identifying specific criteria that will be implemented in the evaluation and reflection strategy described in D12.3. These criteria operate on assumptions regarding the defining characteristics and purposes of MMLs established in D12.1: specifically, that mutual learning and mobilisation are primary ends and therefore key components of the ‘quality’ of MMLs. Justification for these assumptions is not repeated here, but can be found in D12.1.
An overarching theme in the criteria stems from the purpose of SATORI, understood as the development of an ethics assessment framework. The method of development and application of the framework across other project activities and beyond the consortium will therefore be assessed, for
41 At this stage of planning the evaluation, it is worth considering the difficulty of translating such a theory into particular evaluation activities for measuring learning. As noted by participants in the D12.1 study, evaluation of mutual learning is “very difficult” because “learning is a continual process,” meaning it is difficult to attribute attitude and behaviour change to specific project influence and activities. Additionally, “measuring it is really difficult, because things are slow to change, and obviously, changing one person can catalyse change in others through a social process that isn't actually learned first-hand.” Furthermore, mutual learning can be ‘implicit learning’ in the sense that it occurs without the explicit recognition or effort of the learner. It is therefore more than the mere exchange of ideas between two parties, and very difficult to measure in evaluation. As a result, it is important to view the criteria developed to assess mutual learning as an ideal which the actual evaluation may be unable to meet due to the practical difficulties of tracking learning (e.g. changes in behaviour and perspectives) over time. Self-assessment activities by stakeholders before, during and after stakeholder engagement events may, however, offer a way forward. Additionally, tracking changes to ‘mindset’ may be feasible, where mindset is understood as understanding, beliefs, aims, questions asked and ways to address and answer those questions. Mindset change may be evident when partners reassess the feasibility of objectives to achieve during the project, as well as the overall aims of their involvement.
42 Haywood and Besley, “Education, Outreach, and Inclusive Engagement”; Chilvers, “Reflexive Engagement?”.
example in terms of how project activities have supported the adoption or further development of the framework by relevant non-consortium stakeholders.
For example, criteria address questions of quality and feasibility, the prospects for incorporation of the ethical framework into the EU system of research and innovation, the support for transnational co-operation in ethical assessment provided by the project, and the contribution to sustainable and inclusive solutions to key challenges facing European society made by the framework and supporting project activities. These activities are ways of specifying the concepts ‘mutual learning’ and ‘mobilisation’ for SATORI; for example, evaluation would be concerned with the ‘mobilisation’ of stakeholders in terms of encouragement within the various social and professional networks engaged in the project to adopt the framework produced, or in terms of adaptation of the framework for feasible adoption within the various organisational and disciplinary contexts seen in recruited stakeholder ‘groups’. As SATORI demands intensive and prolific deliberation among participants (in line with being an MML), criteria focusing on the ‘fairness’ and ‘competence’ of these processes, including the quality of the debate facilitated and the representativeness of the stakeholders engaged, will likely feature in the final set of evaluation and reflection criteria43.
Each of the above examples concerns the quality of tasks (and underlying methods) completed by the consortium. However, criteria also concern the impact of SATORI as a whole and the impact of the ethical assessment framework on citizens and society. For the latter, DMU will coordinate with (or receive data from) VTT (leaders of WP6), who will be assessing the impacts of the ethical assessment framework in particular, throughout the evaluation.
7.3.1 Principles and criteria for evaluating stakeholder engagement
D12.1 concluded that ‘mutual learning’ is a core concept in evaluating the quality of an MML. Starting from this assumption, a primary task of this deliverable is to develop not only traditional evaluation criteria for the project, but also criteria for assessing mutual learning (and therefore reflection) within SATORI. Stakeholder engagement activities can be understood as the locus of MMLs, directly facilitating dialogue and network-building (or mobilisation) between stakeholders (and partners). The purpose of stakeholder engagement is to encourage mutual learning and mobilisation among all those involved (stakeholders as well as partners) (see: D12.1). As explained above (see: Section 7.2.1), these concepts can be operationalised in terms of the degree of exchange of perspectives between those involved, relationship or network building, etc. The criteria described here can be seen as indicators of high-quality mutual learning and mobilisation in stakeholder engagement activities, achieved primarily by establishing a fair and competent dialogue44.
Several criteria stem from the Rowe & Frewer framework45, which addresses the fairness and acceptability of participatory processes in research. Following on from Webler, these criteria focus on fairness, which concerns the “perceptions of those involved in the engagement exercise and/or the wider public, and whether they believe that the exercise has been honestly conducted with serious intent to collect the views of an appropriate sample of the affected population and to act on those views,” and competence (or for Rowe & Frewer, acceptance), which concerns “the appropriate
43 See: SATORI, THEME [SiS.2013.1.2-1] Mobilisation and Mutual Learning (MML) Action Plans: Mainstreaming Science in Society Actions in Research - Annex 1 - Description of Work, 43.
44 cf. Rowe and Frewer, “Public Participation Methods”; Webler, “‘Right’ Discourse in Citizen Participation: An Evaluative Yardstick”.
45 Rowe and Frewer, “Public Participation Methods: A Framework for Evaluation”.
elicitation, transfer, and combination of public and/or sponsor views.” The latter of the two is synonymous with effectiveness as used by Rowe & Frewer, understood as “maximizing the relevant information from the maximum number of all relevant sources and transferring it (with minimal information loss) to the other parties, with the efficient processing of that information by the receivers (the sponsors and participants) and the combining of it into an accurate composite.” The competency or effectiveness of a public participation process can therefore be disrupted when information is suboptimal, distorted, inaccurate, incomplete, or when it is misunderstood by participants46, perhaps due to a mismatch between the language or concepts in the information and the background expertise of the participants. Criteria derived from these authors for SATORI are intended to ensure stakeholder engagement events are fair and accessible to affected stakeholder groups, which (as discussed in D12.1) can be linked to a concern for a ‘fair discourse’ in MMLs47.
The criteria include:
‘Representativeness’ – The degree to which participants in stakeholder engagement events “comprise a broadly representative sample of the population of the affected public,” including both demographic and organisational groups. For SATORI, such groups will include representatives of the multiple disciplines at whom the ethical assessment framework is aimed, as well as participants in research potentially subjected to review under the framework. This criterion aims to ensure that participants are exposed to a range of relevant perspectives, both in making decisions within the participatory process and to inform subsequent decision-making beyond the research project. Without a representative sample, MMLs may suffer from insular and circular dialogue, in which individuals with similar backgrounds and perspectives merely reinforce one another’s views, thereby limiting the potential for transformative ‘mutual learning’. Where particular stakeholder groups are named by the organisers of an event, representativeness will be based on whether these stakeholders were actually present, as well as whether a clear bias is evident within the sample (e.g. ethics review panels only from the UK, despite the EU-wide scope of the framework). This requirement correlates with principles that suggest evaluation should address the ‘generic’ qualities of participatory processes, such as the areas of consensus in the evaluation literature identified by Chilvers48, and, relatedly, that a participatory approach to evaluation conducive to mutual learning between stakeholders and project partners should be used, with an appropriate degree of stakeholder involvement from designing the evaluation to carrying it out and reporting on its findings.
Transparency’ – The degree to which the decision-making processes within the engagement
event are transparent to stakeholders, including making the biases or underlying assumptions
of the organisers or decision-makers known as far as possible to participants. This criterion is
necessary to maximise the mobilising effect of the event, as stakeholders may be less likely to
‘champion’ the outputs of the event if it is unclear whether their interests have been taken
seriously. This correlates with the criteria principle in D12.1 that suggests that the
evaluation process should be conducted transparently for the benefit of the consortium,
including identifying its scope (e.g. summative/formative, technical/holistic) and the position
of the evaluator in relation to the consortium (e.g. internal, external, independent) as early as
possible, which helps reduce resistance to recommendations made by the evaluators.
46 Rowe and Frewer, “A Typology of Public Engagement Mechanisms”, 262–3.
47 cf. Habermas, The Theory of Communicative Action: Volume 1: Reason and the Rationalization of Society.
48 Chilvers, “Reflexive Engagement?”.
‘Accessibility’ – Concerns the availability of relevant learning and information materials to
the participants, so as to allow for an informed dialogue between stakeholders49. Further, any
recognised barriers to participation should be removed as far as possible50 to ensure the
resources are understandable for participants, for example barriers of language or
comprehension of technical terminology. This criterion can be used to assess the quality of
outcomes of a participatory process—the quality of judgments produced in consultation with
stakeholders can be assessed for the necessary understanding of a technology, technique or
other relevant field of expertise; for example, do stakeholders know enough about the topic
for their views to be taken seriously? Procedurally, the arrangements made to support
stakeholders in the processes can be assessed; for example, was sufficient information made
available to ensure stakeholders could learn and become competent participants? The
criterion also includes a time aspect; e.g. were participants given materials near to the start of
the event, or given enough time during the event to read, comprehend and ask questions?
The evaluation will be dependent upon the need for such materials in the event.
‘Task Definition’ – Concerns the clarity of instructions or guidance given to the participants.
Were participants made aware of the scope, purpose and their role in the event?
Other criteria not derived directly from Rowe & Frewer, but representing similar concerns with
fairness and acceptability include:
‘Fair Deliberation’ – Concerns the degree to which participants are allowed to “enter the
discourse and put forward their views in interactive deliberation that develops mutual
understanding between participants”51. Following on from the transformative approach to
mutual learning described above with reference to ‘Representativeness’, this criterion adds a
further dimension by emphasising the quality of the process in which the sample participates.
Specifically, mutual learning may be hampered if dialogues are dominated by a particular
perspective or stakeholder group, as it limits opportunities for learning from interaction with
multiple unfamiliar perspectives and phenomena. As suggested by Chilvers52, “While
recognizing the role of consensus, the deliberative process should emphasize diversity and
difference through representing alternative viewpoints, exploring uncertainties, and exposing
underlying assumptions.” In evaluating according to this criterion, events may be assessed in
terms of how they are structured so as to minimise opportunities for particular ‘outspoken’
stakeholders to dominate the dialogue (which can be inhibited by a moderator), or in terms of
opportunities for one-to-one dialogues between stakeholders. For SATORI, it is important to
note that ‘feedback’ style events may be particularly vulnerable to emphasis on the views of
the organisers, which will be implicit in tools or frameworks presented for review to the
stakeholders.
‘Iteration’ – The degree to which dialogue and topics within the event are revised in response
to the “needs, issues and concerns expressed by participants”53. Practically
49 Rowe and Frewer, “Public Participation Methods”; Haywood and Besley, “Education, Outreach, and Inclusive Engagement”.
50 Chilvers, “Deliberating Competence: Theoretical and Practitioner Perspectives on Effective Participatory Appraisal Practice”.
51 Ibid.
52 Ibid.
53 Ibid.
speaking, this criterion can be assessed in terms of opportunities for stakeholders to propose
topics for dialogue outside of a pre-defined framework of events and topics54. Again, this
criterion emphasises opportunities for encountering multiple unfamiliar perspectives and
phenomena to encourage mutual learning.
‘Criticalness’ – Rather than referring to questioning of participant claims by the organisers,
this criterion refers to the degree to which participants are encouraged to challenge and
negotiate with the perspectives and values of other members of the dialogue55. This
willingness to question the claims of others is critical to learning about the underlying
assumptions and values held by others, both of which are necessary for transformative
learning. Practically speaking, these qualities could be assessed in terms of the effectiveness
of the ‘rules’ established for discourse in the process or the learning materials provided to
participants56, and the degree to which this structure contributed to a high quality discourse in
which social or mutual learning occurred between participants (and organisers). This is not
to say participants must disagree with one another or come to a consensus, but rather that the
act of questioning itself has value for the learning process. This criterion is further seen as
necessary to emphasise the empowerment of participants to question the claims of others and
prevent dominance of the discourse by a particular view or stakeholder57. According to
transformative learning, mutual learning can only occur when participants do more than
merely agree or disagree—respect for alternative views and trust in the integrity of others is
required, such that the participant feels compelled to offer reasons and counter-arguments58.
These considerations suggest specific requirements to be met in participatory discourses
when mutual learning is conceived of as a type of transformative learning; specifically,
participants should be ‘open-minded’, meaning they are willing to consider the views of
others as legitimate, and should be seen to offer reasons in support of, and criticisms of,
particular views rather than mere opinions or ultimatums. Power relationships within a
discourse also need to be considered, as the perception of authority, or favouring by
facilitators of the views of a particular stakeholder, can undermine trust among participants,
respect for other views, and the overall perception of a fair discourse59 conducive to
transformative learning.
‘Ownership of Outcomes’ – The degree to which participants are engaged in disseminating
results of the event, and thus feel that they ‘own’ or ‘control’ the decisions reached (or other
outputs)60. Ownership is seen to encourage mobilisation of stakeholders and ideas (or
dissemination of ideas) by encouraging a feeling of ‘belonging’ to the process and
‘responsibility’ for its outcomes.
‘Participant Satisfaction’ – To avoid an entirely theoretical approach to evaluation, assessing
the participants’ satisfaction with the event can ensure aspects of the event not covered by the
above criteria are still considered in the evaluation. Simply put, asking a participant what
they found ‘good or bad’ about a particular engagement event, and to explain why,
encourages identification of unforeseen aspects of the event linked to its quality. This is
suggested by a criteria principle that proposes that data collection and analysis methods
54 Abelson et al., “Deliberations about Deliberative Methods”.
55 Haywood and Besley, “Education, Outreach, and Inclusive Engagement”.
56 Webler, Kastenholz, and Renn, “Public Participation in Impact Assessment”, 456.
57 Ibid., 443.
58 e.g. Stagl, “Multicriteria Evaluation and Public Participation”.
59 Ibid.
60 Haywood and Besley, “Education, Outreach, and Inclusive Engagement”.
conducive to evaluating learning or attitudinal change over time should be employed in
evaluation, meaning explicit and implicit evidence of mutual learning should be sought in
evaluation by asking project partners and participants to reflect on changes to their attitudes
and behaviours caused by participating in the project and engaging with unfamiliar ideas and
perspectives. In terms of the aims of MML, participant satisfaction can be reasonably linked
with the mobilising effects of the event, similar to ‘Ownership’.
Each of these criteria is intended to assess the quality of stakeholder engagement events, understood
both procedurally and substantively. These criteria may further suggest impacts of the events, as
discussed below.
7.3.2 Principles and criteria for evaluating recruitment
Several of the aforementioned criteria can be used to specifically assess the quality of recruitment in
an MML. The ‘Representativeness’ of the stakeholders engaged throughout the project, not only in
stakeholder engagement events, is obviously relevant. Furthermore, in terms of ‘Accessibility’ and
‘Criticalness’, enhancing the ‘voice’ of underrepresented stakeholders by ensuring equal
representation in the project suggests that MMLs should empower the stakeholders involved, not
only through capacity building and learning but also by ensuring traditionally underrepresented
stakeholders are involved in the discourse. These latter criteria can be considered generic for the
MML mechanism, as it is designed to tackle large societal challenges which involve a multitude of
stakeholders, some of whom will inevitably, as in any discourse, have less influence or power than
others61. The criteria mentioned above relate to a criteria principle established in D12.1, which
suggests that the success of an MML should be ‘stakeholder oriented’, meaning evaluative criteria
should be linked to factors such as the reaction of stakeholders to engagement events, the new
connections established between engaged stakeholders for communication and collaboration, the
effectiveness of training in building capacities, and the empowerment of underrepresented groups in
MML and societal discourses. The MML should also be able to bring target stakeholder groups into
attendance at engagement events, which can later be used as an evaluative measure.
7.3.3 Principles and criteria for evaluating surveys, interviews and case studies
Evaluation of these empirical activities will likely consist solely of peer-review of deliverables by
partners, as suggested in the DoW. Empirical activities here refer to research activities, and one of
the criteria principles in D12.1 suggests that evaluation should aim to assist in developing research
activities during the life of the project (e.g. through feedback from evaluators to partners), improve
the design of future related activities, assess project impact, and provide stakeholders with a better
idea of the value of their participation by tracking their influence on the process. In addition, indicators of
success for evaluating the findings and methods of surveys, interviews and case studies will need to
be developed by the partners conducting these activities, so as to ensure a fit between the partner’s
disciplinary background and the evaluation. This calls for creation of initial templates or indicators
of success with consortium input prior to the start of each research task, and for potentially adding to
or revising these according to the challenges faced. This approach can ensure that discipline-specific
perspectives inform the assessment of the success or quality of project activities while being
responsive to the practical challenges of engagement.
61 cf. Foucault, Discipline & Punish; Habermas, The Theory of Communicative Action: Volume 1: Reason and the Rationalization of Society.
Broadly speaking, quality of these activities can be described in terms of three criteria derived from
the ‘Quality and Rigor’ criteria proposed by Haywood and Besley62, understood as the “degree to
which the project is perceived as rigorous and credible. This may be peer-review for project leaders
or the plausibility of research recommendations for citizens.” Specifically, these empirical activities
will be evaluated according to (1) ‘Methodological Rigour’, meaning the empirical instrument is
informed by a sound methodological basis and philosophical paradigm—these are a minimum of
good practice in any type of research, (2) ‘Credibility’, meaning a strong connection is established
between the data or interpretations of the data (as well as any theoretical or conceptual frameworks
necessary to understand the data) and the findings or recommendations produced, and (3)
‘Transparency’, or the presence of sufficient detail in describing and analysing the data for the reader
to follow the authoring partner’s line of reasoning63.
7.3.4 Principles and criteria for evaluating desk research
Evaluating the quality of desk research is dependent upon the disciplinary assumptions and accepted
practice of the authoring partner. Practically speaking, such evaluation can be difficult due to the
evaluator potentially coming from a different discipline. Additionally, undertaking observations
(which would be necessary to evaluate the quality of the process of desk research) is perhaps
unrealistic and unnecessary, as the outputs of desk research (e.g. deliverables, tools,
recommendations) will be assessed elsewhere in SATORI—either through peer review or workshops
with consortium partners and stakeholders. This reflects a criteria principle from D12.1, which
provides that evaluation should consider data beyond the deliverables, including stakeholders in
assessing the quality of dialogue facilitated by the project wherever possible. This introduces a
subjective element into the evaluation of desk research, by collecting different perspectives from
participants and assessing them.
With that said, the quality of deliverables is key to the success of the project, because they inform
subsequent dissemination and impact as a key point of contact with stakeholders outside of
face-to-face engagement activities. Discipline-specific ‘indicators of success’ (or quality) will need
to be derived and applied for each deliverable. This relates to a principle that proposes that evaluative
criteria should be specified according to the context of the particular MML, including potentially
engaging the consortium to identify appropriate discipline-specific or task-specific criteria for
specified: where input from other deliverables and WPs is explicitly mentioned in the DoW, quality
can be assessed in terms of ‘Collaborative Influence’, or evidence of influence from the named
sources (e.g. references, concepts). This can be conceived of as evidence of mutual learning between
consortium partners evidenced by the exchange of ideas across work packages (and disciplines), and
should therefore be encouraged as far as possible through feedback from the evaluators and
peer-reviewers. The latter conforms to the need for evaluation to address the ‘generic’ qualities of
participatory processes, such as the areas of consensus in the evaluation literature identified by
Chilvers64. Evaluation should also address impacts and evidence which demonstrate that key MML
activities and desired outcomes have been realised—mutual learning and the facilitation of
collaboration and cooperation among stakeholders—using criteria and typologies such as those
62 Haywood and Besley, “Education, Outreach, and Inclusive Engagement”.
63 See also: Kuper, Lingard, and Levinson, “Critically Appraising Qualitative Research”; Guba and Lincoln, Naturalistic Inquiry; Finlay, “‘Rigour’, ‘Ethical Integrity’ or ‘Artistry’?”.
64 Chilvers, “Reflexive Engagement?”.
specified by Haywood & Besley65 and Walter et al.66, and, relatedly, in line with the principle that
suggests the use of a participatory approach to evaluation conducive to mutual learning between
stakeholders and project partners. The appropriate degree of stakeholder involvement, from
designing to carrying out the evaluation and reporting on its findings, must be decided on a
project-specific basis according to the willingness of the stakeholders and the expertise required to
perform the evaluation.
7.3.5 Principles and criteria for evaluating recommendations/tools
The quality of tools, frameworks and recommendations produced by SATORI can be linked in part
to the quality of stakeholder engagement events as discussed above. Specifically,
recommendations/tools can be assessed in terms of the degree to which the recommendations or tools
produced by the consortium reflect or address the concerns raised by stakeholders67, even where
these concerns are rejected or otherwise dismissed. Practically, tools/recommendations can be
compared with event minutes and reports. In the pursuit of mutual learning, deliverables should
compared with event minutes and reports. In the pursuit of mutual learning, deliverables should
demonstrate at a minimum that stakeholder interests have been considered. This requirement is
partially covered by the ‘Transparency’ criterion above in terms of the decision-making processes
preceding recommendations/tools; however, acknowledgement of all stakeholder values/perspectives
encountered in such events which inform recommendations/tools can be described in terms of
‘Relevance’. This satisfies the criteria principle that suggests that the possibility of mutual learning
without consensus should be recognised when evaluating the quality of the mutual learning that has
occurred.
7.3.6 Principles and criteria for evaluating dissemination/impact
As noted in D12.1, impact is fundamentally difficult to evaluate due to a lack of causal links between
project outputs and future policy/decisions, as well as the restrictive timeline of projects versus
future impacts. With this in mind, a clear ‘endpoint’ should be specified at which point project
impacts can start to be identified and evaluated. Impact is understood here as “positive and negative,
primary and secondary long-term effects produced by a [project], directly or indirectly, intended or
unintended”68. It may initially be easier to speak of impact in terms of dissemination, which can
refer to the spread of ideas, knowledge, tools, deliverables, publications and other outputs of a
project. One possible criterion for measuring dissemination (and thus impact) is ‘Quantity’, which
refers to the number of times project outputs are referenced in journal articles, conference
proceedings, policy documents and news stories. Quantity alone is, however, a very crude measurement
of dissemination and impact, as the mere mention of a SATORI output does not necessarily indicate
uptake of the ethics assessment framework or prove that the project’s activities have influenced
policy and the behaviour of stakeholders. ‘Influence’ can therefore be suggested as a complementary
criterion, referring to the uptake of project ideas and tools in the public and policy discourses which
consortium partners expect to influence through their involvement in SATORI. This criterion would
therefore require specification in the ‘Indicators of Success’ questionnaire, through partners
identifying specific discourses and channels where they hope to see the framework and other outputs
adopted. Furthermore, it is likely only assessable through self-assessment by partners and review of
news items or forthcoming policy where a direct connection can be established between the project
and (a partner in) the decision-making process behind the relevant policy or decision.
65 Haywood and Besley, “Education, Outreach, and Inclusive Engagement”.
66 Walter et al., “Measuring Societal Effects of Transdisciplinary Research Projects”.
67 OECD, “Evaluation of Development Programmes”; Haywood and Besley, “Education, Outreach, and Inclusive Engagement”.
68 OECD, “Evaluation of Development Programmes”.
Both transformative and reflective learning (see: Section 7.2.1) are useful in assessing the impact of
research as a form of learning, where the research is seen as contributing unfamiliar perspectives or
phenomena to be interpreted by the learner. Impact evaluation guided by this approach would require
methods capable of assessing the quality of such dialogues between stakeholders whose behaviours,
attitudes or mindset have been affected by participating in or coming into contact with the project.
The impacts of the project can therefore be seen through changes to the participants once the project
has ended69. It follows that a link can be drawn in evaluation between the potential impacts of the
project and the quality of outputs and events which introduce participants to unfamiliar
phenomena in such a way that co-creation of knowledge is encouraged, for example through
dialogue with researchers rather than a non-interactive presentation of results. Such evaluation of
impact focuses on predicting impact rather than retrospectively assessing it, which may address
practical difficulties (such as those noted in D12.1) with evaluating the impact of a project during its
lifespan; evaluation should therefore occur before, during and after the project to ensure all processes and
impacts are evaluated to some degree. This approach will be adopted in SATORI, meaning that a
direct link is made between the quality of participatory processes and (publicly available)
deliverables and the impact of the project.
A criterion of impact following on from this is ‘Behaviour Adjustment’, understood as the extent to
which capacities, skills and knowledge have been built among stakeholders through training and
dialogue, and become evident in behaviour and lifestyles following participation in the project70. An
objective measure for this criterion is unlikely to exist, due to the difficulty of ‘measuring’
behavioural change noted in D12.1; instead, the criterion is intended as a ‘lens’ for the subjective
assessment (or perhaps self-assessment) of subsequent stakeholder behaviours. An analogous criterion
for change in perspectives or opinions, ‘Perspective Adjustment’, will also be used. Both criteria are
proposed to focus the evaluation on evidence of exchange of perspectives or ‘mutual learning’
among stakeholders, under the assumption that perspectives and behaviours will only change if
‘serious consideration’ is given to alternative views and interests, and that such change is a form of
learning. This is in line with the principle that suggests that mutual learning outcomes among project
participants should be assessed, for example by monitoring changes in participant perspectives,
beliefs and actions over time. Mutual learning conceived of as societal impact can also be evaluated
according to the extent to which project outputs have reached and influenced stakeholders outside the project.
Impact will also be measured in terms of ‘Network Expansion’, or networking between stakeholders
facilitated by the project, which may be measurable through self-assessments of changes to a
participant’s social or professional networks following stakeholder engagement events. This has
previously been described by Haywood & Besley as the “degree to which the project facilitates new
networks and relationships among project members or reinforces existing bonds”71. Equating quality
with an increase in linkages among involved organisations also provides a useful way to assess the
success of network building facilitated by an MML with quantitative measurement. On this basis,
MMLs can be said to be more successful as they increase the links or channels of communication
between the stakeholders involved. This relates to a criteria principle that proposes the involvement of the
entire consortium in providing data for evaluation beyond writing deliverables (e.g. interviews,
69 For such a methodology, see: Saari and Kallio, “Developmental Impact Evaluation for Facilitating Learning in Innovation Networks”.
70 Haywood and Besley, “Education, Outreach, and Inclusive Engagement”.
71 Ibid.
surveys, reflective meetings, etc. conducted with consortium partners). Broad engagement allows for
assessment of mutual learning between project partners.
As suggested in D12.1, it may be possible to evaluate the likelihood of future impacts according to
the quality of the procedures through which mutual learning occurs. Impact should be distinguished
from outputs and outcomes—impact refers to the mid- and long-term effects of R&D on society and
ongoing research, measured for example in terms of societal products, use and benefits, or social,
cultural, environmental and economic returns72. In contrast, outputs and outcomes can be understood
as short-term products, processes, recommendations and knowledge created by a project, such as
meetings, reports and other deliverables73, 74. A basic distinction can be drawn between ‘academic’
impacts via publications and presentation of results, and ‘societal impacts’ wherein policy, social
attitudes or behaviours outside the project are influenced75. One approach to consider when
evaluating impact is to create initial templates or indicators of success with consortium input
prior to the start of each research task, and potentially to add to or revise these according to challenges
encountered. This approach can ensure that discipline-specific perspectives inform the assessment of
the success or quality of project activities while being responsive to the practical challenges of
engagement. The evaluation of impact is therefore inherently connected to the quality of the project
activities preceding the impact. For stakeholder engagement activities, impact can also be seen
through change in stakeholder behaviours, attitudes or knowledge resulting from participation76.
The mutual learning evident in stakeholder engagement processes and suggested by the criteria above is
therefore key to evaluating impact. As suggested in D12.1, one of the criteria principles
recommends that an explicit method for evaluating societal impact should be adopted or designed,
with particular attention paid to evidence of mutual learning. If learning is understood as a process
through which actors are introduced to new experiences, perspectives and knowledge which
transform or affect their ‘framework of understanding’77 or future perspectives, beliefs and actions,
it follows that learning outcomes may influence MML participants in the long-term. While this
argument does not suggest a particular criterion for impact evaluation, it does show how procedural
and substantive evaluation of stakeholder engagement during the life of SATORI can inform the
assessment of impact.
7.3.7 Principles and criteria for evaluating evaluation
In a recent study, Chilvers noted that evaluators of public participation exercises were doubtful about
the possibility of operationalising the principles in evaluation (described above), due to practical
institutional, cultural, political and economic constraints. It was therefore suggested that, to
overcome these constraints, evaluators should be increasingly critical78, identifying limitations not
only with the process itself but with the context in which it and its evaluation occur. In D12.1, one of
the criteria principles was suggested in relation to limitations; it proposes that the evaluator should
transparently report on perceived pressures and the influence of project partners on the evaluation
72 Bornmann and Marx, “How Should the Societal Impact of Research Be Generated and Measured?”, 211.
73 Abelson et al., “Deliberations about Deliberative Methods”; Walter et al., “Measuring Societal Effects of Transdisciplinary Research Projects”.
74 The use of vocabulary in the literature is not consistent. In some sources, outputs refer to products, processes, reports and deliverables, whereas in others products and processes are ‘outcomes’ and reports and deliverables are ‘outputs’. Both terms are used here to refer to all four types of results.
75 Penfield et al., “Assessment, Evaluations, and Definitions of Research Impact”, 21.
76 Walter et al., “Measuring Societal Effects of Transdisciplinary Research Projects”.
77 cf. Gadamer, Truth and Method; Heidegger, Being and Time.
78 Chilvers, “Deliberating Competence: Theoretical and Practitioner Perspectives on Effective Participatory Appraisal Practice”, 180.
to identify, as far as possible, influence on the evaluation outcomes. Limitations can stem
aspects of the process or context of evaluation, such as resistance from the consortium partners or
limitations established in the DoW. If evaluations of MMLs are understood as a component of
mutual learning, for example by providing critical and reflective feedback to partners to revise
ongoing project activities (see: D12.1), then reporting on limitations in evaluation in the same
fashion as other project activities is necessary to be able to holistically evaluate the quality of mutual
learning in the MML. In this light, a criteria principle described in D12.1 indicates that the evaluator,
coordinator and/or work package leaders should encourage partners to critically reflect on their
progress and changes to attitudes and behaviours (e.g. implicit learning) through formal or informal
methods such as interviews, project management meetings, or peer review of deliverables. On this
basis the quality of evaluation can be assessed in terms of ‘Restrictiveness’, established through
critical self-assessment of limitations imposed on the evaluators and evaluation by the project’s
broader context, description of work, consortium, coordinator or other relevant sources.
Another potentially relevant factor, depending on the strategy developed in Task 12.3, concerns the
quality of stakeholder participation in evaluation. A participatory approach to evaluation can in
general be advocated in MMLs due to its conduciveness to two-way learning compared to ‘objective’
approaches to evaluation. This affirms the criteria principle that proposes the use of a participatory
approach to evaluation conducive to mutual learning between stakeholders and project partners. It is
not yet clear whether SATORI will use a participatory approach to evaluation; however, if
stakeholders are included as evaluators, the ‘Representativeness’ of the participants recruited for
evaluation takes on new importance, as groups not participating lack a voice in both the process
itself and its evaluation. Careful consideration is needed of who is considered a stakeholder,
understood broadly as an individual or group of individuals affected by a decision, issue or other
topic of research79.
7.3.8 Principles and criteria for evaluating administration
While administration is a fact of any type of collaborative research project, it should still be
evaluated in MMLs. In D12.1, two criteria principles were described that point to evaluating the
administration of projects. The first principle suggests that project management should be evaluated,
meaning that objectives, milestones and deliverables should be delivered on time and be of
acceptable quality according to how they are defined in the Description of Work (DoW). Poor coordination or
administration can lead to a breakdown in communication and project agendas among partners,
undermining the quality of all other processes. The quality of administration and coordination can be
assessed in terms of ‘Quality of Collaboration’, wherein breakdowns in communication or conflicts
between partners reduce the quality of collaboration and thus jeopardise the project. The second
criteria principle that relates to evaluating administration suggests that when conducting a formative
evaluation, the evaluator should provide critical feedback and recommendations to the consortium to
improve ongoing research activities. It may be possible to assess this criterion through qualitative
observations of partner workshops and meetings (e.g. body language, atmosphere, mood, etc.) as
well as self-assessment reports by partners concerning any practical barriers to collaboration they
have encountered.
7.3.9 Evaluating ‘Internal’ Activities
It should also be noted that certain administrative tasks concern inter-consortium communication and
collaboration, many aspects of which are considered ‘coordination’ of the project. Such activities
include consortium meetings, peer review and informal communication, none of which were
classified in Table 3 because they are not included in a particular Work Package.
79 cf. Renn et al., “Public Participation in Decision Making”, 190–1.
Additionally,
consortium interaction at stakeholder engagement events may also be key to facilitating mutual
learning not only among stakeholders, but the consortium itself. Recognising this, these types of
activities, which may be considered ‘Internal’ activities, will also be evaluated in terms of the mutual
learning that occurs between consortium partners. Criteria concerning mutual learning specified
with reference to stakeholder engagement activities will therefore also be relevant for these activities.
Additionally, such activities will be assessed in terms of ‘Reflectiveness’, or the degree to which
partners show respect for alternative views and trust in the integrity of other partners, which is
necessary if communication and collaboration are to progress beyond mere (dis)agreements on
proposed actions.80 Therefore, two criteria could be relevant to evaluating internal activities:
using a participatory approach to evaluation conducive to mutual learning between stakeholders
and project partners, and deciding the appropriate degree of stakeholder involvement (from
designing the evaluation to carrying it out and reporting on its findings) on a project-specific
basis, according to the willingness of the stakeholders and the expertise required to perform the
evaluation. As suggested by Chilvers,81 participants and partners alike will need to acknowledge
“their underlying assumptions, motives, and commitments relating to the forms of public dialogue
they orchestrate or are exposed to,” which requires a significant degree of openness and humility.
Decisions across the consortium
should be systematically subjected to reflexive (critical) questioning, perhaps through scheduled
events or workshops in which progress and proposed changes are discussed. This is suggested in a
criteria principle described in D12.1, which proposes that the evaluator, coordinator and/or work
package leaders should encourage partners to critically reflect on their progress and changes to
attitudes and behaviours through formal or informal methods such as interviews, project
management meetings, or peer review of deliverables.
7.4 SUMMARY OF EVALUATION AND REFLECTION CRITERIA
The criteria specified above have been developed on the basis of a literature review, empirical study
and application of principles developed in Task 12.1. The set of criteria is therefore both
theoretically and empirically grounded, providing an evaluative framework responsive to the
particular aims and activities of SATORI based upon broader principles of good practice in MMLs.
While all of the criteria specified above are relevant to the evaluation of SATORI as a whole, this
does not imply that all will be included in the evaluation and reflection strategy created in Task 12.3.
For example, practical reasons may exist for limiting the set of criteria used for evaluating a
particular stakeholder engagement event based upon the availability of stakeholders to provide
feedback, the need for learning/training materials, or the necessary limitation of ‘dialogue’ to one-
way communication of information or tools/recommendations for further consideration/application
by stakeholders in the future.
Table 4 summarises all of the criteria specified above, showing which criteria are relevant to which
types of activities. To develop practical methods of applying these criteria in the SATORI
evaluation and reflection strategy developed in Task 12.3, a further ‘Indicators of Success’
questionnaire will be carried out, which may provide further criteria for evaluation but,
primarily, specific metrics or measures of the criteria described here.
80 e.g. Stagl, “Multicriteria Evaluation and Public Participation”.
81 Chilvers, “Reflexive Engagement?”, 300.
Activity Type: Criteria
Stakeholder Engagement: Representativeness; Transparency; Accessibility; Task Definition; Fair Deliberation; Iteration; Criticalness; Ownership of Outcomes; Participant Satisfaction
Recruitment: Representativeness; Accessibility; Criticalness
Surveys, Interviews and Case Studies: Methodological Rigour; Credibility; Transparency
Desk Research: Collaborative Influence
Recommendations/Tools: Transparency; Relevance
Dissemination/Impact: Quantity; Behaviour Adjustment; Perspective Adjustment; Network Expansion
Evaluation: Restrictiveness; Representativeness
Administration: Quality of Collaboration
‘Internal’: Stakeholder engagement criteria (as above); Reflectiveness
Table 4 – SATORI Evaluation and Reflection Criteria by Activity Type
8 CONCLUSION
This deliverable has described a set of evaluation and reflection criteria which will be further applied
in developing a strategy for evaluation and reflection in SATORI. The findings of this deliverable
follow on from the activities of Task 12.1, and will eventually inform DMU’s work in Tasks 12.3
and 12.4. An evaluation and reflection strategy specifying activities and timelines to be undertaken
in the evaluation of SATORI will be described in Task 12.3, and applied in Task 12.4.
Task 12.2 looked at different areas of evaluation and reflection and considered the feasibility and
quality of the criteria for evaluating both SATORI as a whole and the impact of its ethical impact
assessment framework on citizens and civil society. The task undertook some preliminary evaluative work in
which a pre-evaluative questionnaire was distributed to participants and observations were conducted
in a consortium meeting that took place in Rome. The results from both activities were analysed and
informed the criteria for evaluating SATORI. In addition, a classification of tasks to be evaluated
was developed, which further informed the recommended evaluation and reflection criteria for
SATORI.
Consequently, with reference to fairness and competence in evaluation, together with the efficiency,
transparency and effectiveness of the evaluation process and reflective outcomes from the SATORI
consortium (see section 5.25), the following criteria principles are recommended for the evaluation
of SATORI. It should be noted that the criteria principles defined here are not yet prescriptive or
final; they will be discussed at the evaluation and reflection workshop in February 2015 (Brussels)
and will inform the SATORI evaluation and reflection strategy in Task 12.3.
Thus, based on the research presented in this deliverable and on the observations we made during the
Rome workshop, we propose to select the following criteria and propose to interpret them for
SATORI in the following way:
The first and main criterion that will be used in evaluating SATORI is that which focuses on
evaluating stakeholder involvement. This criterion will assess mutual learning among stakeholders
of the SATORI project. As described in D12.1, assessing mutual learning is paramount in an MML
because it captures the quality of mutual learning and mobilisation in stakeholder
engagement activities. This criterion will include a concern for representativeness among participants
in stakeholder engagement events, transparency in decision-making processes within engagement
activities and accessibility of relevant information material to participants of engagement activities.
In addition, this criterion will address the clarity of tasks and of instructions or guidelines given
to participants in relation to an event. The criterion will also evaluate the extent of fair
deliberation, which relates to the degree to which participants are allowed to put forward their
views. Relatedly, it will evaluate the extent to which dialogue and topics are revised in response to
needs, issues and concerns expressed by participants, and will assess engagement in terms of
criticalness, which refers to the degree to which participants are encouraged to challenge and
negotiate the views and values of other members of the engagement process. Lastly, under this
criterion the evaluation and reflection will consider the level of ownership of outcomes among
participants and participant satisfaction.
In relation to the above, the second criterion for SATORI evaluation and reflection is the criteria
principle for evaluating recruitment. The focus here is on representativeness and is connected to
the first criterion, stakeholder involvement. This criterion will be used to ensure that there is
equal representation and that stakeholders are empowered not only through capacity building and
learning but also by ensuring that underrepresented stakeholders are involved in the discourse,
thereby tackling large societal challenges which involve an array of stakeholders.
Thirdly, SATORI will be evaluated using the principle and criteria for evaluating surveys,
interviews and case studies. This will use indicators of success for specific project activities while
being responsive to challenges of engagement. The results from the evaluation process will aid
reflection on SATORI’s activities in terms of rigour and quality.
Fourthly, since desk research is not a novel activity in SATORI, we will not use this criterion in our
evaluation. We prefer to focus on the novel and primary aspects of (the evaluation of) SATORI such
as stakeholder involvement and the interviews and surveys.
However, since SATORI is also active with regard to legal documents and documentation (e.g. on
ethical training) and given our observations in the Rome workshop, it may be essential to evaluate
and monitor legal and other related documentation as a way of observing ethical considerations.
Furthermore, the recommendations given by SATORI should be relevant to the project aims and
transparent in terms of the decision-making processes that precede the recommendations – it should
be clear how the recommendations are arrived at – and they should acknowledge all stakeholders’
values and perspectives.
Further, although it is difficult to assess impact, it is possible to use an ‘Indicators of Success’
questionnaire which asks stakeholders to evaluate the impact of SATORI. For example, questions
can cover the following: Has the impact been positive? Has the behaviour of participants been
affected by the project? Has there been any expansion in their network? Is there more recruitment?
These are all relevant impact assessment criteria.
Likewise, we need to facilitate and encourage partners to reflect on their research. In practice the WP
leaders will play an important role in this self-evaluation. But as evaluators we will facilitate this
during the workshops.
In addition, so far our observations indicate that collaboration in SATORI could be improved. For
example, partners did not always accept responsibility for assigned work or give feedback to other
partners, or did not do so to a sufficient degree. There was also discussion about the role and
responsibilities of, for example, WP leaders. This is something we need to monitor. In line with this,
we have already made suggestions for improvement at the Rome workshop.
Finally, engagement and reflectiveness are also important criteria but we feel that they have been
covered in the previous criteria, or at least in the way we interpreted them.
This also shows that the criteria cannot and should not stand on their own; they are all related.
In conclusion, the criteria described here should not yet be considered finalised for other reasons as
well; practicalities of evaluating the project and methodological difficulties with evaluating certain
aspects (in particular, learning over time and impact) may preclude evaluation of particular criteria.
This should not be taken as a weakness of the principles and criteria described here or the strategy to
be developed; rather, as with all evaluative frameworks, the framework created here is a theoretical
ideal around which evaluation activities responsive to the real-world limitations of the project
context will be defined. Regardless of the eventual shape of the evaluation strategy, the primary
contribution of this deliverable (and thus, D12.1) has been to provide a theoretically and empirically
sound basis for the evaluation of SATORI based upon best practice as currently understood.
9 REFERENCES
Abelson, Julia, Pierre-Gerlier Forest, John Eyles, Patricia Smith, Elisabeth Martin, and Francois-
Pierre Gauvin, “Deliberations about Deliberative Methods: Issues in the Design and
Evaluation of Public Participation Processes”, Social Science & Medicine, Vol. 57, No. 2,
July 2003, pp. 239–251.
Abelson, Julia, and François-Pierre Gauvin, Assessing the Impacts of Public Participation: Concepts,
Evidence and Policy Implications, Canadian Policy Research Networks Ottawa, 2006.
https://desafios2.ipea.gov.br/participacao/images/pdfs/participacao/abelson_julia_gauvin_francois_assessing_impacts_public_participation.pdf.
Arnold, Erik, “Understanding Long-Term Impacts of R&D Funding: The EU Framework
Programme”, Research Evaluation, Vol. 21, No. 5, December 2012, pp. 332–343.
Bergmann, Matthias, Bettina Brohmann, Esther Hoffmann, M. Celine Liobl, Regine Rehaag,
Englebert Schramm, and Jon-Peter Voss, Quality Criteria of Transdisciplinary Research: A
Guide for the Formative Evaluation of Research Projects, Evalunet, Frankfurt, 2005.
http://isoe.eu/ftp/evalunet_guide.pdf.
Bornmann, Lutz, and Werner Marx, “How Should the Societal Impact of Research Be Generated and
Measured? A Proposal for a Simple and Practicable Approach to Allow Interdisciplinary
Comparisons”, Scientometrics, Vol. 98, No. 1, January 2014, pp. 211–219.
Chess, Caron, “Evaluating Environmental Public Participation: Methodological Questions”, Journal
of Environmental Planning and Management, Vol. 43, No. 6, November 2000, pp. 769–784.
Chilvers, Jason, “Deliberating Competence: Theoretical and Practitioner Perspectives on Effective
Participatory Appraisal Practice”, Science, Technology & Human Values, Vol. 33, No. 2,
March 1, 2008, pp. 155–185.
———, “Reflexive Engagement? Actors, Learning, and Reflexivity in Public Dialogue on Science
and Technology”, Science Communication, Vol. 35, No. 3, June 1, 2013, pp. 283–310.
EuropeAid, Evaluation - Guidelines, 2005.
http://ec.europa.eu/europeaid/evaluation/methodology/guidelines/gbb_det_en.htm.
Finlay, Linda, “‘Rigour’, ‘Ethical Integrity’ or ‘Artistry’? Reflexively Reviewing Criteria for
Evaluating Qualitative Research”, The British Journal of Occupational Therapy, Vol. 69, No.
7, 2006, pp. 319–326.
Foucault, M., Discipline & Punish: The Birth of the Prison, Knopf Doubleday Publishing Group,
2012.
Gadamer, H.G., Truth and Method, Continuum International Publishing Group, 2004.
Guba, E. G., and Yvonna S. Lincoln, Naturalistic Inquiry, SAGE, 1985.
Habermas, J., The Theory of Communicative Action: Volume 1: Reason and the Rationalization of
Society, Beacon, Boston, 1984.
Haywood, Benjamin K., and John C. Besley, “Education, Outreach, and Inclusive Engagement:
Towards Integrated Indicators of Successful Program Outcomes in Participatory Science”,
Public Understanding of Science, Vol. 23, No. 1, January 1, 2014, pp. 92–106.
Heidegger, Martin, Being and Time, Blackwell, 1967.
Joss, Simon, “Evaluating Consensus Conferences: Necessity or Luxury”, Public Participation in
Science: The Role of Consensus Conferences in Europe, Science Museum London, 1995, pp.
89–108.
Klein, Julie T., “Evaluation of Interdisciplinary and Transdisciplinary Research”, American Journal
of Preventive Medicine, Vol. 35, No. 2, August 2008, pp. S116–S123.
Kuper, Ayelet, Lorelei Lingard, and Wendy Levinson, “Critically Appraising Qualitative Research”,
BMJ, Vol. 337, 2008.
Merkx, Femke, Inge van der Weijden, Anne-Marie Oostveen, Peter van den Besselaar, and Jack
Spaapen, “Evaluation of Research in Context: A Quick Scan of an Emerging Field”, Den
Haag: Rathenau Instituut/ERiC, 2007. http://www.social-informatics.net/quickscan.pdf.
Mezirow, Jack, “Transformative Learning: Theory to Practice”, New Directions for Adult and
Continuing Education, Vol. 1997, No. 74, 1997, pp. 5–12.
Mickwitz, Per, “A Framework for Evaluating Environmental Policy Instruments: Context and Key
Concepts”, Evaluation, Vol. 9, No. 4, 2003, pp. 415–436.
Miles, M., and A. Huberman, An Expanded Source Book: Qualitative Data Analysis, Sage, London,
1994.
Mittelstadt, Brent, Bernd Carsten Stahl, and Kutoma Wakunuma, SATORI Deliverable D12.1 - Good
Practice in Evaluation, Reflection and Civil Society Engagement, SATORI, De Montfort
University, Leicester, 2014.
Nagarajan, Nigel, and Marc Vanheukelen, “Evaluating EU Expenditure Programmes: A Guide. Ex
Post and Intermediate Evaluation. XIX/02–Budgetary Overview and Evaluation”,
Directorate-General XIX–Budgets. European Commission, 1997.
OECD, “Evaluation of Development Programmes”, 1991.
http://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm.
O’Sullivan, Rita G., “Collaborative Evaluation within a Framework of Stakeholder-Oriented
Evaluation Approaches”, Evaluation and Program Planning, Vol. 35, No. 4, Collaborative
Evaluation: Theory and Practice, November 2012, pp. 518–522.
Patterson, Michael E., and Daniel R. Williams, Collecting and Analyzing Qualitative Data:
Hermeneutic Principles, Methods and Case Examples, Vol. 9, Vol. 9, Advances in Tourism
Application Series, Sagamore Publishing, Inc., Champaign, IL, 2002.
http://www.treesearch.fs.fed.us/pubs/29421.
Penfield, Teresa, Matthew J. Baker, Rosa Scoble, and Michael C. Wykes, “Assessment, Evaluations,
and Definitions of Research Impact: A Review”, Research Evaluation, Vol. 23, No. 1,
January 1, 2014, pp. 21–32.
Renn, Ortwin, Thomas Webler, Horst Rakel, Peter Dienel, and Branden Johnson, “Public
Participation in Decision Making: A Three-Step Procedure”, Policy Sciences, Vol. 26, No. 3,
August 1993, pp. 189–214.
Research Councils UK, Evaluation: Practical Guidelines - A Guide for Evaluating Public
Engagement Activities, 2014. http://www.rcuk.ac.uk/RCUK-
prod/assets/documents/publications/evaluationguide.pdf.
Rowe, Gene, and Lynn Frewer, “Public Participation Methods: A Framework for Evaluation”,
Science, Technology & Human Values, Vol. 25, No. 1, 2000, pp. 3–29.
Rowe, Gene, and Lynn J. Frewer, “A Typology of Public Engagement Mechanisms”, Science,
Technology & Human Values, Vol. 30, No. 2, April 1, 2005, pp. 251–290.
———, “Evaluating Public-Participation Exercises: A Research Agenda”, Science, Technology &
Human Values, Vol. 29, No. 4, 2004, pp. 512–556.
Saari, E., and K. Kallio, “Developmental Impact Evaluation for Facilitating Learning in Innovation
Networks”, American Journal of Evaluation, Vol. 32, No. 2, June 1, 2011, pp. 227–245.
SATORI, THEME [SiS.2013.1.2-1] Mobilisation and Mutual Learning (MML) Action Plans:
Mainstreaming Science in Society Actions in Research - Annex 1 - Description of Work,
2013.
Scriven, Michael, Evaluation Thesaurus, Sage, 1991.
Smith, L. Graham, Carla Y. Nell, and Mark V. Prystupa, “FORUM: The Converging Dynamics of
Interest Representation in Resources Management”, Environmental Management, Vol. 21,
No. 2, March 1, 1997, pp. 139–146.
Stagl, Sigrid, “Multicriteria Evaluation and Public Participation: The Case of UK Energy Policy”,
Land Use Policy, Vol. 23, No. 1, Resolving Environmental Conflicts: Combining
Participation and Multi-Criteria Analysis, January 2006, pp. 53–62.
Stahl, B. C., “Responsible Research and Innovation: The Role of Privacy in an Emerging
Framework”, Science and Public Policy, 2013, p. sct067.
Stirling, Andy, “9. Precaution, Foresight and Sustainability: Reflection and Reflexivity in the
Governance of Science and Technology”, Reflexive Governance for Sustainable
Development, 2006, p. 225.
Tuominen, Anu, Tuuli Järvi, Kirsi Hyytinen, Evangelos Mitsakis, Maria Eugenia Lopez-Lambas,
Lissy La Paix, Jan van der Waard, Anne Binsted, and Anatolij Sitov, “Evaluating the
Achievements and Impacts of EC Framework Programme Transport Projects”, European
Transport Research Review, Vol. 3, No. 2, July 1, 2011, pp. 59–74.
Walter, Alexander I., Sebastian Helgenberger, Arnim Wiek, and Roland W. Scholz, “Measuring
Societal Effects of Transdisciplinary Research Projects: Design and Application of an
Evaluation Method”, Evaluation and Program Planning, Vol. 30, No. 4, November 2007, pp.
325–338.
Webler, Thomas, “‘Right’ Discourse in Citizen Participation: An Evaluative Yardstick”, Fairness
and Competence in Citizen Participation, Springer, 1995, pp. 35–86.
Webler, Thomas, Hans Kastenholz, and Ortwin Renn, “Public Participation in Impact Assessment: A
Social Learning Perspective”, Environmental Impact Assessment Review, Vol. 15, No. 5,
September 1995, pp. 443–463.
10 ANNEXES
ANNEX 1 – PRE-EVALUATION QUESTIONNAIRE
SATORI – Pre-Evaluation Form for Participants
Name: _________________________________
Organisation:____________________________
To assist in De Montfort University's evaluation of SATORI we would kindly ask you to complete the following pre-evaluation questionnaire so that we may better tailor our evaluation approach to your expectations and needs. All current members of SATORI have been asked to complete the questionnaire, so your answers can reflect your individual perspectives as well as those of the organisation/group you represent. Any questions/comments about the survey itself should be directed to Brent Mittelstadt ([email protected]). Your responses will be used solely by DMU to create its evaluation strategy for SATORI. Your responses will not be shared beyond the evaluation team at DMU. By completing the survey you consent to the data you provide being used for this purpose only.
1. Which SATORI Tasks are you currently working on, or will work on in the future?
2. How do you interpret your role in SATORI?
3. Practically speaking, what support do you hope to receive from us (the evaluators)?
4. Please Rank Outputs in Terms of Importance to Your Organisation and Its
Participation in SATORI:
___Refereed academic journal articles/conference proceedings
___Project reports (posted on the internet)
___Policy impact around issues of importance to your organisation
___Networking with NGOs and civil society organisations (CSO)
___Networking with governmental bodies/policy-makers
___Media attention (print, web, etc.)
___Networking with lay members of the public (non-media)
___Enhanced influence and decision-making impact for your organisation
___Other (please specify)
5. Please comment on any other aspects of your/your organisation’s participation in
SATORI that you think are important for evaluating the project: