Beyond Patents and Publications:
Performance Monitoring Indicators for ICT
Research in the EU‐funded RTD
Contract:
Development of Performance and Monitoring Concepts and Tools for ICT in the EU‐funded RTD
SMART 2010/0028
Document title: Final Report
Reference: D4
Status: Final
Date: 14.12.2011
Authors: Michael Dinges, Martijn Poel, Nicolai Søndergaard Laugesen
Executive Summary
This report presents the findings of a study that aimed to identify a set of performance-monitoring measures providing the Evaluation and Monitoring Unit of Directorate General Information Society and Media (DG INFSO) of the European Commission (EC) with a comprehensive picture of the added value of EU-funded research and innovation activities in the area of Information and Communication Technologies (ICT). The study also serves as an input for the preparation of the Horizon 2020 programme and its associated monitoring system, and provides guidance on which monitoring activities the Evaluation and Monitoring Unit of DG INFSO can establish.
The study portrays the intervention logic of the ICT part of the 7th Framework Programme (FP7-ICT) and of the ICT Policy Support Programme of the Competitiveness and Innovation Framework Programme (CIP ICT-PSP). Against the background of the EC evaluative criteria and the present monitoring system, existing performance monitoring methodologies and concepts in a number of countries within and outside the EU are presented. Gaps in the evidence base of the performance monitoring system are identified, and a proposal for key performance measures suitable for implementation in future monitoring exercises is provided.
The purpose and scope of performance monitoring
Performance monitoring systems need to provide a continuous assessment of programme functions, organised internally by programme management or a monitoring unit. Through their data gathering approaches, they need to enable the development of interim performance measures of programme progress, outputs and outcomes.
For the Monitoring Unit of DG INFSO key performance indicators should provide information on the relevance,
effectiveness, efficiency, utility, and sustainability of EU ICT funding, which are the main evaluative criteria for
EU funded research activities.
Performance monitoring cannot substitute for evaluations, because a number of data sources and a combination of evaluation methods are needed to capture the outcomes and impacts of large European Union programmes such as FP7-ICT and CIP ICT-PSP. However, the data and indicators obtained from a performance monitoring system should have the capacity to feed into the interim assessments and ex post evaluations of the programme. Therefore, a performance monitoring system for EU funded ICT research needs to consider the means by which EU ICT research activities unfold their impact.
A performance monitoring system should further allow for comparisons between different instruments and objectives of the multi-annual Work Programmes of FP7-ICT. For reasons of efficiency, and in order to minimise the burden on programme participants and programme management, it should primarily exploit existing data sources on a timely, annual basis.
Performance Monitoring and the Intervention Logic of FP7-ICT and CIP ICT-PSP
Both FP7-ICT and CIP ICT-PSP are characterised by a high degree of complexity and very specific technological objectives. The logic chart analysis allowed the objectives of FP7-ICT and CIP ICT-PSP to be structured and linked to the impact chain of the programmes (i.e. desired inputs, outputs, outcomes and impacts). The logic charts also showed that FP7-ICT and CIP ICT-PSP unfold their impact through
scientific/technological knowledge creation, knowledge transfer through research networks and collaboration,
skill formation of human capital, and a strengthening of the economic and social impact of research.
The analysis of the Work Programme and the interviews conducted in the study also revealed that the various technological domains of the Work Programme differ regarding a) the state of the art of scientific and technological development, b) market maturity, and c) the balance of actors engaged. Some domains seek to explore more radical scientific knowledge, technologies and market opportunities, whereas other domains focus on the reinforcement and enhancement of existing capabilities, incremental knowledge creation, and technological deployment. Therefore, a performance monitoring system needs to keep track of the different types of actors involved in the portfolio of projects and to gather information about their roles.
Due to the specificity of the technological objectives, a performance monitoring system employed by the Evaluation and Monitoring Unit of DG INFSO should not try to monitor specific indicators for each instrument and each specific Work Programme challenge; this would increase the administrative burden unjustifiably. Instead, the performance monitoring system should be structured along the impact chain of a programme (inputs, outputs, outcomes, and impacts) and the relevant impact channels (knowledge, networks, human resources, economic and social objectives), and provide information on progress made towards achieving the common and most relevant objectives of ICT research and innovation. These are:
To increase technological excellence of EU ICT industry;
To increase quality of ICT research;
To re‐inforce coordination and pooling of resources;
To ensure training, dissemination and knowledge management;
To focus on research on themes of major societal needs.
The current monitoring system provides only limited information on the achievement of these objectives. It concentrates on inputs, while the outputs and outcomes of EU funded research and innovation activities are largely neglected. Although a number of good and relevant data are gathered, the linkages to the main objectives of FP7-ICT and CIP ICT-PSP are missing.
Results from the comparative analysis
The comparative analysis revealed that a substantial number of indicators are available for monitoring
increased knowledge, networks, human capital and economic mechanisms. More indicators have been
identified at the level of outputs and outcomes than at the level of impact. This reflects that it takes time for
an impact to emerge, which makes it difficult ‐ if not impossible ‐ to monitor (expected) impacts while projects
are on‐going.
For many indicators, weaknesses regarding the possibilities for benchmarking and international comparisons became evident. For example, many indicators count outputs (publications, patents, research networks, etc.) but fail to provide information on the quality of the research performed, the effectiveness of programme implementation, and the sustainability of a programme. In addition, a series of indicators is only available in qualitative form. Hence, when establishing an indicator system, care needs to be taken that qualitative information is collected in a standardised manner so that it is ready to use for analytical purposes.
Based upon the results of the intervention logic and the comparative analysis, a number of input, output, outcome and impact indicators have been established which have the potential to extend the scope and usefulness of a performance monitoring system for ICT research and innovation. The strengths, weaknesses, opportunities and threats associated with the indicators have been highlighted by conducting SWOT analyses focusing on the usefulness and the implementation of the proposed measures.
Main criteria for the proposal of performance monitoring indicators have been:
The feasibility of the proposed monitoring concepts and data collection approaches. The
administrative burden has to be minimised.
The timeliness of the set of indicators. Indicators should be retrieved on an annual basis, allowing for
real time performance monitoring, not ex‐post assessment procedures.
The ability to compare and contextualise the performance measures (e.g. with indicators from similar
programmes, other EU funding mechanisms etc.).
Key input indicators
In the present monitoring system data regarding participation and funding are well monitored and published
regularly and consistently. Nevertheless, shortcomings as regards key aspects of FP7‐ICT and CIP ICT‐PSP exist.
These relate to:
The ability to create a well balanced portfolio of projects given the objectives of the individual
instruments and the annual Work Programmes (i.e. as to provide incentives for new actors to be
engaged in European research endeavours and to integrate European research actors), and
The involvement of relevant industrial actors and research organisations (as to provide a baseline for
evaluations and impact assessments).
In order to remedy these shortcomings, the project team suggests extending the existing monitoring system by including a number of additional input indicators that focus on i) the composition of the project portfolio, and ii) the inclusion of industrial actors and research organisations.
Of particular interest regarding the composition of the project portfolio are indicators that provide information on the effectiveness and sustainability of EU ICT funding and show whether the ICT programme is open to new participants (a prerequisite for the integration of the ERA) and able to build up and structure research agendas in a European dimension. Therefore, the proposed key input indicators are i) the number and share of new participants, ii) new coordinators, iii) new collaborations, iv) the frequency of participation of individual organisations, and v) repetitive cooperations. The indicators are relevant for all funding instruments and thematic areas and can be compared with each other.
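Most of these portfolio indicators reduce to simple counting operations over participation records. The following is a minimal, illustrative sketch of how such counts could be derived; the record fields (project_id, org_id, call_year) are hypothetical placeholders, not a description of the EC administrative system.

```python
from collections import defaultdict
from itertools import combinations

def portfolio_indicators(participations):
    """participations: list of dicts with hypothetical keys project_id, org_id, call_year."""
    first_year = {}                      # org_id -> year of first participation
    per_year = defaultdict(set)          # call_year -> organisations active in that call year
    per_project = defaultdict(set)       # project_id -> organisations involved
    for rec in sorted(participations, key=lambda r: r["call_year"]):
        first_year.setdefault(rec["org_id"], rec["call_year"])
        per_year[rec["call_year"]].add(rec["org_id"])
        per_project[rec["project_id"]].add(rec["org_id"])

    # i) share of new participants per call year
    share_new = {year: sum(1 for org in orgs if first_year[org] == year) / len(orgs)
                 for year, orgs in per_year.items()}

    # iv) frequency of participation of individual organisations
    frequency = defaultdict(int)
    for rec in participations:
        frequency[rec["org_id"]] += 1

    # v) repetitive cooperations: organisation pairs collaborating in more than one project
    pair_counts = defaultdict(int)
    for orgs in per_project.values():
        for pair in combinations(sorted(orgs), 2):
            pair_counts[pair] += 1
    repetitive = sum(1 for n in pair_counts.values() if n > 1)

    return {"share_new_participants": share_new,
            "participation_frequency": dict(frequency),
            "repetitive_cooperations": repetitive}
```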
The proposed organisational level indicators focus on the provision of i) key economic data of participating organisations, and ii) key research and innovation data. Both types of data are critical for econometric analyses to be performed in interim and ex post evaluations. At the stage of project execution they allow project participants to be characterised in more detail than at present, when only the type of organisation and a differentiation between SMEs and non-SMEs are recorded.
Prerequisites for implementation
With the exception of the indicator on repetitive cooperations, all indicators concerning the project portfolio can easily be established by making use of the EC administrative system. For the implementation of the indicator on repetitive cooperations, the project team suggests retrieving specific collaboration data at the stage of project start by using key field data in the reporting system. An additional option is to pursue network analysis, which can inform policy makers of the changing organisational landscape of ICT research and innovation. Network analysis can point to “hub” organisations and their critical roles in maintaining the connectivity of the network and the progress towards the ERA. However, the complexity of implementing network analysis goes beyond the regular capacities of a monitoring unit. Therefore, network analysis should rather be part of interim and ex post evaluations and specific studies.
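If network analysis is pursued in such an evaluation or study, a simple starting point is to build an organisation-level collaboration graph from project participation data and rank organisations by a centrality measure. The sketch below is purely illustrative and uses the open-source networkx library; the input format and the choice of betweenness centrality as the “hub” measure are assumptions of this example, not part of the present monitoring system.

```python
import networkx as nx
from itertools import combinations

def hub_organisations(projects, top_n=10):
    """projects: dict mapping a project id to the organisations participating in it (assumed input)."""
    graph = nx.Graph()
    for orgs in projects.values():
        for a, b in combinations(sorted(set(orgs)), 2):
            # edge weight = number of joint projects between the two organisations
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1
            else:
                graph.add_edge(a, b, weight=1)
    # Betweenness centrality flags organisations whose removal would fragment the
    # collaboration network, i.e. candidate "hubs" holding the structure together.
    centrality = nx.betweenness_centrality(graph)
    return sorted(centrality, key=centrality.get, reverse=True)[:top_n]
```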
Data on participating organisations may be retrieved either from commercial databases or through the European Commission's own initiatives. Commercially available databases contain information on i) cash flow, ii) turnover, iii) number of employees, iv) ownership, v) industry classification, vi) location, and vii) the year of incorporation. Attention needs to be paid to the feasibility of linkages between commercial databases and EC databases.
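One practical aspect of this linkage question is that commercial databases and EC participant records rarely share a common identifier, so matching often has to fall back on normalised legal names and country codes. The following sketch only illustrates that fallback; the field names (legal_name, country, org_id, name) are assumptions, and any real linkage exercise would need manual review of unmatched and ambiguous cases.

```python
import re

def normalise(name):
    """Crude name normalisation: lower-case, strip common legal-form suffixes and punctuation."""
    name = name.lower()
    name = re.sub(r"\b(gmbh|ag|ltd|limited|plc|s\.?a\.?|b\.?v\.?|s\.?r\.?l\.?|inc\.?)\b", " ", name)
    return re.sub(r"[^a-z0-9]+", " ", name).strip()

def link_records(ec_participants, commercial_records):
    """Return a mapping from EC organisation id to the matching commercial record (or None)."""
    index = {(normalise(r["name"]), r["country"]): r for r in commercial_records}
    return {p["org_id"]: index.get((normalise(p["legal_name"]), p["country"]))
            for p in ec_participants}
```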
Data on research and innovation behaviour are rarely found in commercial databases. Therefore, data gathering should take place at the time of project proposal acceptance, after negotiation has taken place. The data should be retrieved via the Electronic Proposal Submission System of the European Commission, which makes it possible to ensure the completeness of the data and to build up a database with key data on participating organisations and their research and innovation patterns.
Key output indicators
Output indicators need to show how a programme is progressing towards meeting its objectives. The current monitoring system regularly provides information only on the number of scientific articles and open access journals and on the number of patent applications. Hence, there is considerable room to improve the performance monitoring of outputs.
The analysis of the multi-annual Work Programmes for FP7-ICT identified the following relevant areas for output measurement: knowledge, networks and human capital. For knowledge indicators, tangible knowledge
outputs (new technology, methodology, components, algorithms, etc.) prevail. However, apart from peer
reviewed publications and all forms of IPRs, international benchmarking possibilities are limited. Output
indicators often focus on counts of outputs, but quality aspects and appropriateness of results are not
monitored.
Based upon the analysis of the intervention logic and the comparative analysis, the project team recommends
that the performance monitoring system for ICT research should retrieve:
Performance indicators stemming from the periodic project reviews and the final project reviews, and
Output counts stemming from information provided in the project reports.
The proposed key output indicators consist of i) the overall satisfaction with the project progress/results, ii) the achievement of project results in relation to the funding objectives, iii) evidence of contributions to tangible and intangible knowledge outputs, iv) satisfaction with the dissemination of project results, and v) tangible knowledge results including peer reviewed publications, conference
proceedings, publications in non‐peer reviewed journals, and qualifications obtained in the course of the
project.
Indicators i) to iv) should be retrieved from the project reviews and indicator v) should be retrieved from the
project reports.
The indicators stemming from the project reviews measure the quality of outputs and the progress made as
indicated by the external project reviewers.
The main aim of collecting information on tangible project outputs is to establish databases for evaluative studies. The tangible output indicators provide information at the level of projects and participants and can easily be adapted for different types of programmes. In order to utilise output counts also for monitoring purposes, the European Commission should consider ranking peer reviewed publications and conference presentations/proceedings into prestige tiers. However, it is not suggested to implement such an approach in the short run: considerable additional research efforts are needed in order to establish a tier-based system for both peer-reviewed publications and conference proceedings.
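By way of illustration only, once an agreed venue ranking exists, a tier-based output count could be derived as sketched below. The venue-to-tier table, the field names and the default tier are placeholders; producing the actual ranking is precisely the additional research effort referred to above.

```python
# Hypothetical venue-to-tier table; a real table would come from an agreed ranking exercise.
VENUE_TIERS = {
    "example top-tier conference": 1,
    "example mid-tier journal": 2,
}

def tier_counts(publications, default_tier=3):
    """publications: iterable of dicts with an assumed 'venue' field; returns counts per tier."""
    counts = {}
    for pub in publications:
        tier = VENUE_TIERS.get(pub["venue"].strip().lower(), default_tier)
        counts[tier] = counts.get(tier, 0) + 1
    return counts   # e.g. {1: 4, 2: 11, 3: 23}
```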
Prerequisites for implementation
In order to fully exploit the information contained in the periodic review reports, these need to be carefully adapted:
The periodic review reports and the final project review reports need to be in an electronic format, using a standardised reporting tool.
Standardised performance monitoring questions need to be included in the reports. In addition to text-based assessments, questions about project performance should be asked either as yes/no questions or as Likert-scale questions. Structured electronic fields need to be used, which are linked to a repository of results; linkages to the project database, the participant database, the instruments and the specific challenges need to be provided (a minimal sketch of such a structured record follows this list).
The performance monitoring questions should be the same for all instruments and challenges of the ICT FP and the CIP ICT-PSP programme. Hence, questions should be fairly broad, and the administrators of the Work Programmes should be able to use the indicators stemming from the periodic reviews as a source of information and for adapting their plans.
In order to ensure acceptance of this type of measurement, a limited number of concrete questions has to be formulated and tested in the field.
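As announced above, the following is a minimal sketch of what such a structured, machine-readable review record could look like. The field names, the 1-5 Likert coding and the question set are illustrative placeholders that would have to be agreed and field-tested; they do not describe the Commission's actual reporting format.

```python
from dataclasses import dataclass
from typing import Optional

LIKERT_SCALE = (1, 2, 3, 4, 5)   # 1 = very unsatisfactory ... 5 = excellent (assumed coding)

@dataclass
class PeriodicReviewRecord:
    project_id: str                      # link to the project database
    instrument: str                      # e.g. STREP, IP, NoE
    challenge: str                       # Work Programme challenge
    overall_satisfaction: int            # Likert-scale question
    results_vs_funding_objectives: int   # Likert-scale question
    tangible_outputs_evidenced: bool     # yes/no question
    dissemination_satisfaction: int      # Likert-scale question
    reviewer_comments: Optional[str] = None   # free-text assessment remains possible

    def __post_init__(self):
        for name in ("overall_satisfaction", "results_vs_funding_objectives",
                     "dissemination_satisfaction"):
            if getattr(self, name) not in LIKERT_SCALE:
                raise ValueError(f"{name} must be a value on the 1-5 Likert scale")
```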
A minimum prerequisite for bibliometric analysis is that the basic information on publications provided in the current project reports (title, main author, title of the periodical or the series, number, date or frequency, publisher, place of publication, year of publication, permanent identifiers) is stored in an electronic repository. The repository data have to be linked with the project databases. Information on publications obtained during the lifetime of a project should be collected on an annual basis.
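The bibliographic fields listed above map naturally onto a simple repository record linked to the project database. The sketch below only illustrates that mapping; the storage technology and the exact field names are left open and are not prescribed by the report.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PublicationRecord:
    project_id: str                              # link to the project database
    title: str
    main_author: str
    periodical_or_series: str
    issue_number: Optional[str] = None
    date_or_frequency: Optional[str] = None
    publisher: Optional[str] = None
    place_of_publication: Optional[str] = None
    year_of_publication: Optional[int] = None
    permanent_identifiers: List[str] = field(default_factory=list)   # e.g. DOI, handle
```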
Key outcome and impact indicators
Programme outcomes and impacts are measured in interim and ex post evaluations rather than in monitoring exercises. Attribution problems and timing are the main reasons why outcomes and impacts are rarely considered in programme monitoring.
Key dimensions addressed by outcome and impact performance measures are matters of relevance,
effectiveness and sustainability. These relate to knowledge, networks, human capital, and socio‐economic
outcomes and impacts.
Based upon the intervention logic and the comparative analysis, the project team suggests including indicators based upon self-assessments of project coordinators. The following outcome and impact indicators should be included: i) innovation impacts at the level of participants and non-participants, ii) the ability to develop different types of product/process innovations, iii) the capability to introduce organisational innovations, iv) increased creativity and skills, and v) different tangible economic results at the level of participants and the economy at large.
Prerequisites for implementation
The information on project outcomes and project impacts should be collected via the project final reports. To some extent, benchmarks may be achieved through a comparison with Community Innovation Survey data. Information can be distilled from the project final report if standardised electronic queries geared at project coordinators are included in these reports. For establishing an electronic repository of results from the project final reports, the same rules as for the periodic reports apply.
Table of Contents
Executive Summary ................................................................................................................................................ 2
Table of Contents ................................................................................................................................................... 8
List of Figures ........................................................................................................................................................ 10
List of Tables ......................................................................................................................................................... 11
1 Introduction ................................................................................................................................................. 12
1.1 Context of the Study ........................................................................................................................... 12
1.2 Challenges for Monitoring and Evaluation of EU‐funded ICT ............................................................. 13
1.3 Objectives of the study ....................................................................................................................... 15
1.4 Scope and Structure of the Report...................................................................................................... 15
1.5 Methodology ....................................................................................................................................... 16
1.5.1 Intervention Logic ........................................................................................................................... 17
1.5.2 Comparative approach ................................................................................................................... 18
1.5.3 Performance measure development .............................................................................................. 18
2 The Intervention Logic of ICT Research and Innovation in the European Union ......................................... 19
2.1 The EU strategy towards ICT research and innovation ....................................................................... 19
2.1.1 Europe 2020 .................................................................................................................................... 19
2.1.2 The Digital Agenda for Europe ........................................................................................................ 21
2.1.3 The Innovation Union ..................................................................................................................... 22
2.2 Logic frameworks for ICT in FP7 and CIP‐ICT PSP ................................................................................ 23
2.2.1 The Intervention Logic of ICT research in FP7 ................................................................................ 24
2.2.2 The intervention logic of the different instruments for ICT research and innovation.................... 27
2.2.3 The thematic priorities in the intervention logic of FP7‐ICT ........................................................... 30
2.2.4 Additional EU ICT research initiatives ............................................................................................. 34
2.2.5 The Community Innovation Programme (CIP) and the ICT Policy Support Programme (ICT‐PSP) . 36
2.2.6 The ICT Policy Support Programme ................................................................................................ 37
2.3 Conclusions ......................................................................................................................................... 39
3 Comparative analysis: approaches to monitor and evaluate (ICT) research and innovation ...................... 41
3.1 Performance indicators in monitoring and evaluation systems ......................................................... 41
3.1.1 The scope of performance monitoring and the identification of programme performance
indicators and evaluation questions ............................................................................................................ 41
3.1.2 Designing good indicators ............................................................................................................... 45
3.2 The EU performance monitoring system ............................................................................................ 48
3.2.1 FP7 ICT monitoring ......................................................................................................................... 48
3.2.2 Evaluation of FP7 ICT ...................................................................................................................... 52
3.3 Trends and indicators in global good practice: results from a comparative cross country analysis ... 53
3.3.1 Increased knowledge ...................................................................................................................... 54
3.3.2 Human capital ................................................................................................................................. 56
3.3.3 Networks ......................................................................................................................................... 57
3.3.4 Economic indicators ........................................................................................................................ 59
3.4 Conclusions ......................................................................................................................................... 61
3.4.1 Intervention logics and performance monitoring ........................................................................... 61
3.4.2 Impact channels .............................................................................................................................. 61
3.4.3 Connection between indicators and the evaluation grid of DG Budget ......................................... 62
3.4.4 The present monitoring system ...................................................................................................... 62
4 Beyond patents and publications? Performance monitoring indicators for EU funded ICT research and
innovation ............................................................................................................................................................. 64
4.1 Input indicators ................................................................................................................................... 64
4.1.1 Structuring effects – Well balanced portfolio of collaborative projects ......................................... 66
4.1.2 Inter‐relation with national research funding ................................................................................ 66
4.1.3 Involve relevant industrial actors and research organisations ....................................................... 67
4.1.4 List of input indicators .................................................................................................................... 68
4.2 Output indicators ................................................................................................................................ 72
4.2.1 FP7 and ICT PSP outputs – Results from the intervention logic analysis ........................................ 72
4.2.2 Knowledge indicators ..................................................................................................................... 76
4.2.3 Network indicators ......................................................................................................................... 80
4.2.4 Human capital indicators ................................................................................................................ 80
4.2.5 Efficiency indicators ........................................................................................................................ 80
4.2.6 List of output and efficiency indicators .......................................................................................... 82
4.3 Outcome and Impact Indicators .......................................................................................................... 88
4.3.1 FP7 and ICT PSP outcomes and impacts – results from the intervention logic analysis ................. 88
4.3.2 Knowledge indicators ..................................................................................................................... 91
4.3.3 Human capital indicators ................................................................................................................ 92
4.3.4 Network indicators ......................................................................................................................... 93
4.3.5 Economic indicators ........................................................................................................................ 94
4.3.6 Long list of outcome/impact indicators .......................................................................................... 97
5 Proposal for key performance monitoring indicators ................................................................................ 103
5.1 Principles for an appropriate performance monitoring system ........................................................ 103
5.2 Key Input Indicators .......................................................................................................................... 104
5.2.1 Project portfolio indicators ........................................................................................................... 106
5.2.2 Indicators on key actors ................................................................................................................ 107
5.3 Key Output Indicators ....................................................................................................................... 109
5.3.1 Results of the project reviews ...................................................................................................... 110
5.3.2 Quantitative output indicators ..................................................................................................... 113
5.4 Key Outcome and Impact Indicators ................................................................................................. 115
5.4.1 Outcome and impact indicators ................................................................................................... 116
6 References ................................................................................................................................................. 119
List of Figures
Figure 1: Europe 2020: high policy objectives relevant for ICT R&D and Innovation ........................................... 20
Figure 2: The Digital Agenda ‐ high policy objectives relevant for ICT R&D and Innovation ................................ 22
Figure 3: The Innovation Union ‐ high policy objectives relevant for ICT R&D and Innovation ........................... 23
Figure 4: The overarching policy objectives of Europe 2020 and the flagship initiatives Digital Agenda, and
Innovation Union .................................................................................................................................................. 23
Figure 5: A logic model for the FP7 Cooperation Programme ‐ ICT ...................................................................... 25
Figure 6: A logic model for Specific Targeted Research Projects (STREPs) within FP7 – ICT ................................ 29
Figure 7: A logic model for Integrated Projects (IPs) within FP7 – ICT ................................................................. 29
Figure 8: A logic model for Networks of Excellence (NoEs) within FP7‐ICT .......................................................... 30
Figure 9: A logic model for the Future and Emerging Technologies Programme ................................................. 31
Figure 10: A logic model for ICT PSP Programme ................................................................................................. 38
Figure 11: Objectives Hierarchy, Indicators and Evaluation ................................................................................. 43
Figure 12: Effects of R&D subsidies on total R&D expenditures .......................................................................... 95
List of Tables
Table 1: ICT‐RTDI relevant challenges of the Digital Agenda ................................................................................ 22
Table 2: Linkages between specific objectives of FP7‐ICT and the challenges of the Work Programme ............. 33
Table 3: The indicator system of FP7‐ICT research ............................................................................................... 49
Table 4 Results from the international cases ‐ Knowledge indicators .................................................................. 54
Table 5 Results from the international cases ‐ Human capital ............................................................................. 56
Table 6 Results from international cases ‐ Networks indicators .......................................................................... 57
Table 7 Results from international cases – Economic indicators ......................................................................... 59
Table 8: Summary of input/activity dimensions in the intervention logic of FP7‐ICT and ICT‐PSP ...................... 64
Table 9: Input Indicators ....................................................................................................................................... 69
Table 10: STREPS objectives and output .............................................................................................................. 73
Table 11: IPs objectives and output ...................................................................................................................... 73
Table 12: NoEs objectives and output .................................................................................................................. 74
Table 13: Summary of outputs from ICT Work Programmes ............................................................................... 75
Table 14: Summary of outputs from Work Programmes; FET .............................................................................. 75
Table 15: CIP ICT PSP – Operational objectives and outputs from Work Programmes ........................................ 76
Table 16: Example for project evaluation criteria: integrated project proposals in FET proactive ...................... 78
Table 17: Long list of output indicators ................................................................................................................ 84
Table 18: Efficiency Indicators .............................................................................................................................. 86
Table 19: Relevance of FP7‐ICT outcomes and impacts (vertical axis) for different FP7‐ICT instruments and
nexus to impact channels (horizontal axis) .......................................................................................................... 89
Table 20: Relevance of FP7‐ICT outcomes and impacts at the specific objective level ........................................ 90
Table 21: Identified indicators for outcomes and impacts ................................................................................... 98
Table 22: Key input measures – project portfolio .............................................................................................. 105
Table 23: Key input measures – relevant actors ................................................................................................. 105
Table 24: SWOT analysis for proposed indicators on the portfolio of collaborative projects ............................ 106
Table 25: Tracking previous RTD collaborations ................................................................................................. 107
Table 26: SWOT analysis for proposed indicators involvement of key actors .................................................... 108
Table 27: Key output indicators retrieved by periodic reviews and final reviews .............................................. 109
Table 28: SWOT analysis for results of independent project reviews ................................................................ 110
Table 29: Pilot questionnaire for periodic reviews and final project reviews .................................................... 111
Table 30: SWOT analysis for a prestige tiers based publication measurement ................................................. 113
Table 31: Descriptors for ICT conference Tiers ................................................................................................... 115
Table 32: Key outcome and impact measures .................................................................................................... 116
Table 33: SWOT analysis for additional outcome and impact indicators based upon a structured query ......... 116
Table 34: Pilot questionnaire to operationalise a key number of outcome and impact indicators ................... 117
1 Introduction
This report proposes a key list of performance monitoring measures for EU-funded ICT research and innovation activities. The proposal is based upon a research endeavour in which the intervention logic of the ICT theme of the European Seventh Framework Programme for R&D (FP7-ICT) and of the ICT Policy Support Programme within the Competitiveness and Innovation Framework Programme (CIP ICT PSP), international good practices in performance monitoring, and the academic literature on performance measurement for research and innovation programmes served to create an extensive list of performance measurement metrics.
The report incorporates the findings of the First Interim Report and the Second Interim Report, including a
workshop held on the Intervention Logic of FP7‐ICT, and the findings of the final project meeting. It lays out
the general requirements for performance monitoring concepts and tools for research and innovation in the
field of ICT. The report details the requirements for a performance monitoring system, which has to be
implemented while projects are on‐going. It then provides a long list of possible performance indicators
covering aspects such as the meaning of each indicator and its use, its potential for a comparative dimension,
its relation to the intervention logic of the programme, its alignment to the broader evaluation criteria of the
European Commission, its current availability, and the respective data gathering requirements.
In the final section the proposal for a key list of performance measures is provided, aiming to complement the
current monitoring system. The proposal discusses concrete cases for implementing additional performance
measures based upon SWOT analyses. Thereby, the contribution of each short‐listed indicator/performance
measure to the monitoring system as a whole is exemplified. The usefulness of the indicators and the
operationalisation of the monitoring system are discussed. Conclusions for setting up a performance
measurement system and concrete recommendations for future exercises are provided.
1.1 Context of the Study
The ICT sector is one of the most dynamic economic sectors in the EU, displaying above average economic
growth rates and R&D intensity. ICT is a general‐purpose technology important to nearly all modern human
activities – private, social and economic. ICT is therefore considered to be among the most important driving
forces for (future) socio‐economic development/prosperity and an important factor driving other technologies
and sectors. As a result R&D and innovation in ICT rank highly on the agenda of policy makers in most
developed national economies and the European Union as a whole.
It is the declared objective of the European Commission, with key strategies such as the Europe 2020 strategy,
the Flagship Initiatives Digital Agenda for Europe and the Innovation Union, that Europe has to be among the
technological leaders in the ICT area. In addition to technological leadership, there are key challenges for other
economic sectors including health care, transport, energy, learning, culture, etc., all of which benefit from
recent advances in ICT research.
The FP7 ICT and CIP ICT‐PSP constitute the main pillars of policy delivery to pursue the strategic goals outlined
above.
Under FP7, the objective of ICT research is to improve the competitiveness of European industry as well as to enable Europe to master and shape the future development of these technologies so that the demands of its society and economy can be met. FP-funded ICT research supports European leadership in generic and applied technologies, aims to stimulate and drive innovation through ICT use, and aspires to transform ICT progress into benefits for all European stakeholders, including citizens, businesses, and governments.
Within the “Cooperation” programme of FP7, initiatives in the field of ICT are endowed with a total of € 9.1
billion. This makes ICT the largest research theme within the programme, which is itself the largest Specific
Programme in FP7.
The ICT part of the Competitiveness and Innovation Framework Programme (CIP), the ICT Policy Support Programme (ICT-PSP), aims at stimulating a wider uptake of innovative ICT-based services and the exploitation of digital content across Europe by citizens, governments and businesses, in particular SMEs. The focus is on driving this uptake in areas
of public interest while addressing EU challenges such as moving towards a low carbon economy and coping
with an ageing society. The programme contributes to a better environment for developing ICT‐based services
and helps overcome hurdles such as the lack of interoperability and market fragmentation. Funding mainly
goes to pilot actions, involving both public and private organisations, for validating in real settings, innovative
and interoperable ICT‐based services in areas such as a) ICT for health, ageing and inclusion; b) Digital
Libraries; c) ICT for improved public services; d) ICT for energy efficiency and smart mobility; e) Multilingual
web and Internet evolution.
1.2 Challenges for Monitoring and Evaluation of EU-funded ICT
Europe has reaped significant benefits from its investment in ICT research and deployment, many of which
have been captured in the periodic evaluation exercises such as the Interim Evaluation of FP71 or the 5‐year ex
post assessments of the Programme2. The regulations establishing FP6 and FP7 explicitly require the
Commission to continually and systematically monitor the overall Programme and its specific themes such as
FP7‐ICT. Using both the internal evaluation capabilities of the Commission and external expertise, the
evaluation and monitoring activities aim at providing a systematic analysis of the inputs, outputs, outcomes,
and impacts of FP7‐ICT, a well‐founded judgment regarding the achievement of the operational objectives, and
a reasonably substantiated opinion regarding future investments in IST‐RTD.
Since 2003, the Commission has gradually adopted a formal impact assessment procedure for all major
initiatives included in the Annual Policy Strategy or in its Work Programme. The implementation of FP7
commits to interim evaluations of the Framework Programme and its Specific Programmes regarding research
quality and progress towards meeting their objectives, as well as a final evaluation of the FP, covering its
rationale, implementation and achievements two years after the completion of the programme. In short, the
Framework Programme as a whole and its constituent parts are now subject to systematic appraisal that
includes all three legs of the evaluation:
priority setting and ex‐ante impact assessments;
monitoring of progress, including interim evaluation; and
evaluation of results (ex‐post).
1 http://ec.europa.eu/research/evaluations/pdf/archive/other_reports_studies_and_documents/fp7_interim_evaluation_expert_group_report.pdf#view=fit&pagemode=none
2 http://ec.europa.eu/research/reports/2004/fya_en.html
The specific Commission reforms3 in the past few years have made evaluative activities even more important
in an effort to enhance accountability and transparency, initiate systematic ex‐ante appraisals of proposed
programmes, and further extend programme monitoring and ex‐post impact assessment. A particularly
important development for designing monitoring and evaluation activities is the closer coupling of the impact
assessment exercises with the ex‐ante evaluation for budgetary purposes as a means to increase efficiency, set
priorities, and reduce fragmentation.
DG INFSO puts considerable efforts into setting up effective roadmaps for ex‐ante, interim, and ex‐post
evaluations. At the beginning of a programme, evaluation milestones and their timing are defined, blending
ambitious, yet feasible, methodological indicators and targets, while respecting the idiosyncrasies of the
programmes in question. The main pillars of the present monitoring system are the annual portfolio monitoring report (STREAM report) and the project-specific monitoring of on-going projects. The STREAM report provides information on the portfolio of projects in each challenge and instrument as well as an analysis of beneficiaries. The monitoring of on-going projects includes periodic activity reports (summary of activities, deliverables produced, milestones), periodic reviews, and a final report. However, in terms of quantitative indicators, the current monitoring system of DG INFSO only incorporates a number of input data about the projects (participating organisations, project abstracts, and funding information), a limited number of output data (patents, publications and papers) and occasional data about estimated impacts (increase in market share, development of new products and services, enhancement of citizens' quality of life through development of new products and services).
In this regard, both the recent FP7 Interim Evaluation4 and the First CIP ICT‐PSP Interim evaluation5 identified a
number of challenges regarding the development of a monitoring system including relevant indicators.
The expert panel that led the Interim Evaluation of CIP ICT‐PSP states that the extent of the policies and
practices covered by the themes of ICT‐PSP and their accompanying target objectives will present challenges
to those evaluating the eventual impact of ICT PSP. However, impact indicators may be developed on bases
such as:
the achievement of viability, sustainability and scalability beyond the phases of work undertaken
through CIP ICT PSP;
securing the support of public bodies and developing the capacity to build support and consensus
across the EU;
ensuring the free availability of innovations achieved so creating the necessary components and
building blocks for interoperability; and
assuring the openness of the networks towards relevant outside organisations.
However, the above indicators are remarkably broad in scope and the expert panel recommends clarifying and embedding project-level indicators. Thus, the Commission should give the project applicants the responsibility for framing specific indicators through which their delivery of impact in relation to CIP ICT PSP's objectives can be tracked. These should be quantitative, qualitative and time-based. They must be accompanied by indications of when they will be monitored and reported and of who has the responsibility to do so.
3 SEC(2007)213, Responding to Strategic Needs: Reinforcing the use of evaluation, http://ec.europa.eu/dgs/information_society/evaluation/data/pdf/sec_2007_0213_en.pdf
4 http://ec.europa.eu/research/evaluations/pdf/archive/other_reports_studies_and_documents/fp7_interim_evaluation_expert_group_report.pdf
5 http://ec.europa.eu/dgs/information_society/evaluation/non_rtd/programmes/cip_ict-psp_interim_evaluation_report.pdf
The observations and recommendations of the FP7 ICT interim evaluation, which are outlined in the terms of reference of our study, are also geared in the same direction:
a strong focus on input indicators and measures of administrative performance in the current data
collection practices;
outputs and outcomes cannot always be directly correlated with objectives and their assessment
relies on (or is partly influenced by) the perceptions of stakeholders; and
a limited number of traditional output indicators (such as patents and publications) collected by
surveys.
1.3 Objectives of the study
This study aims to identify performance-monitoring indicators that overcome these shortcomings. The proposed set of indicators for the Evaluation and Monitoring Unit of DG INFSO should give a more comprehensive picture of the added value of EU-funded research and innovation activities in the area of ICT. Thus, the study should serve as an input for the preparation of the next Framework Programme (Horizon 2020) and provide guidance on which monitoring activities the Evaluation Unit of DG INFSO can establish.
To develop such a performance monitoring system, the first objective of the study is to detail the intervention
logic of the ICT part of the Framework Programme as it serves as a basis for structuring performance measures
along the evaluation criteria set by the Framework Programme.
A second objective of the study is to analyse and assess existing methodologies that are relevant to the
monitoring of public interventions with special emphasis on the ICT sector. This objective includes a review of
global good practices aimed at providing inspiring examples of evaluation and indicator measures from
countries outside the EU or from national research programmes carried out by selected Member States. The
analyses of the existing methodologies and indicators are compared with the findings of the present
monitoring system of ICT research in FP7 to identify gaps in the evidence base and to contribute to setting up a
long list of performance measures.
Based on the findings of the activities outlined above, the project team has set up a long list of performance
indicators and a proposal for key performance measures for EU funded ICT research that serve the purpose of
future monitoring exercises.
1.4 Scope and Structure of the Report
The report reviews the strategic policy objectives of the EU‐funded ICT research and innovation activities and
provides a detailed picture on the intervention logic of FP7‐ICT and CIP ICT‐PSP. It provides a comparative
analysis of approaches to monitoring ICT research and innovation activities and thereby sets up a
methodological framework that allows structuring indicators and links them to evaluative questions relevant
for EU‐funded ICT research endeavours. Consequently, the report links performance measures and indicators
to the intervention logic of the current programme and puts forward a long list of performance indicators for
the input dimension, the output dimension, and the outcome and impact dimension of EU‐funded ICT research
and innovation. The report provides the basis for the Final Study Report, which will include all the results of
the analyses, conclusions and concrete recommendations for setting up a performance monitoring system.
Chapter 2 provides a short review of the strategic policy objectives of ICT R&D, linking the overall policy objectives to the specific ICT instruments. The first step details the intervention logic of the overarching policy
priorities of the FP7‐ICT research programme to highlight and understand the hierarchical objectives of DG
INFSO policies. This information serves as a basis for devising the logic framework for the monitoring system.
The second step sets up logic models for FP7 and its specific instruments. Based on the Work Programmes for the thematic priorities of FP7-ICT, an overview of the specific outputs, outcomes and impacts to be expected from the thematic priorities is provided. A validation of the logic models was achieved through
interviews with Commission Officials6 and an intervention logic workshop. This also enabled us to take into
account insights into the specificities of the thematic priorities and challenges in terms of specific outputs,
outcomes and impacts. The logic models presented in this report reflect these findings. In addition to FP7-ICT, Chapter 2 also provides a logic model for the CIP ICT Policy Support Programme.
Chapter 3 provides a comparative analysis of approaches to monitor ICT research and innovation. The main
aim is to develop a reference framework for performance monitoring concepts on which a long list of
performance measures may be based. Taking into account the findings of Chapter 2, Section 3.1 first defines
the use of quantitative performance indicators in monitoring and evaluation systems based on recent
academic literature and evaluation handbooks. Section 3.2 contains a brief description of the present EC‐ICT
related monitoring system. Later in the report, the present monitoring system is compared with the
international experiences described in Section 3.3. This section provides international examples of how performance indicators are actually used in practice. In this way, gaps and weaknesses in the current monitoring system are identified. The underlying country studies are included in Annex 2.
Chapter 4 provides a long list of possible performance indicators for EU‐funded ICT research and innovation
activities. The indicators are structured along two dimensions, namely the results chain of the programme
(inputs/activities, outputs, outcomes and impacts), and the relevant impact channels through which the
programme delivers impact. The meaning of each indicator and its potential use are discussed against the
background of the intervention logic of the programme and the broader evaluation criteria of the European
Commission. The chapter discusses current availability, the different data gathering requirements, and the
availability of international benchmarks of the indicators.
Chapter 5 provides a proposal for a limited number of additional performance indicators, which have the
potential to extend the existing monitoring system of FP7‐ICT research and of CIP ICT‐PSP. The performance
monitoring indicators should provide additional knowledge on the inputs, outputs, outcomes and impacts produced by the programmes and by the partnerships that run the projects. The proposal of these performance measures is seen as a tool for depicting possible variants of the future monitoring and evaluation of EU-funded ICT research activities.
1.5 Methodology
In general the project methodology is based upon desk research, expert interviews and workshops with EC
officials and international experts, and literature reviews concentrating on the academic indicators literature
and international cases on monitoring systems. For each chapter a structuring technique has been applied as
summarised below.
6 In the course of the project, a number of interviews with Commission officials and a focus group workshop with representatives of the European Commission and two external experts (Prof. Terttu Luukkonen and Prof. John Rigby) were conducted.
1.5.1 Intervention Logic
The primary purpose of the information collected on the intervention logic is to highlight and understand the
hierarchical objectives of DG INFSO policies, which serves as the basis for devising the logic framework for the
monitoring system.
The first step in the intervention logic outline is to establish the policy strategy for ICT research and innovation.
This provides a potential baseline for the desired long‐term strategic impact of ICT research and innovation.
Establishing this baseline requires that relevant policy objectives are analysed and categorised into strategic
and specific objectives. To this end we have used a logic modelling approach.7 A logic model presents a
plausible description of how a programme will work under certain conditions to solve identified problems
(Jordan 2010). It is intended to provide a clear diagram of the basic elements of a programme, sub‐programme
or project revealing what it is to do, how it is to do it, and with what intended consequences.
In order to provide an overview of the policy objectives, these objectives are integrated into logic charts of the
specific ICT instruments, taking into account the thematic priorities of FP7-ICT and the CIP ICT-PSP, along the
following hierarchy (a schematic sketch follows the list below):
Strategic objectives: refer to the overarching policy goals as outlined in the main policy documents
Europe 2020, Digital Agenda for Europe, and the Innovation Union.
Specific objectives: refer to the main programme objectives of a research and innovation programme
such as FP7 and ICT‐PSP. The specific objectives are detailed in the programme documents.
Operational objectives: refer to the concrete objectives that are considered for the set‐up and the
implementation of programmes. The operational objectives refer to the specific instruments used in
ICT programmes and the specific challenges outlined in the Work Programmes for ICT research.
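Read schematically, this hierarchy can be represented as a simple nested structure. The sketch below is purely illustrative: the class names and the example entries are hypothetical placeholders and not part of the validated logic charts.

from dataclasses import dataclass, field
from typing import List

@dataclass
class OperationalObjective:
    instrument: str        # e.g. "STREP", "IP", "NoE"
    description: str

@dataclass
class SpecificObjective:
    programme: str         # e.g. "FP7-ICT" or "CIP ICT-PSP"
    description: str
    operational: List[OperationalObjective] = field(default_factory=list)

@dataclass
class StrategicObjective:
    source_document: str   # "Europe 2020", "Digital Agenda for Europe" or "Innovation Union"
    description: str
    specific: List[SpecificObjective] = field(default_factory=list)

# Hypothetical example of one chain of objectives
example = StrategicObjective(
    source_document="Digital Agenda for Europe",
    description="Reinforce Europe's technology strength in key ICT fields",
    specific=[SpecificObjective(
        programme="FP7-ICT",
        description="Increase technological excellence of the ICT industry",
        operational=[OperationalObjective("IP", "Large-scale integrating projects on network technologies")],
    )],
)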
To devise the logic chart, the following procedure is applied:
At the level of strategic EU policy objectives and the specific objectives of ICT research and
innovation, the most relevant policy objectives outlined in Europe 2020, the Digital Agenda, and the
Innovation Union were selected and then linked with the operational objectives of the FP7
Cooperation Programme.
At the activities/inputs level, the critical activities of the FP7 Cooperation Programme are portrayed.
At the output level, the boxes indicate that funding has been provided for a series of projects in areas
of significant relevance.
The outcomes level relates to the effects of the projects that occur at the level of the group of
participants in a medium‐ to long‐term timeframe.
Following the objectives of FP7 and the logic chart diagram of the FP7‐ICT Cooperation Programme and the
performance measurement levels (inputs/outputs/outcomes/impact), the project team distilled the different
performance dimensions of FP7, which also need to be taken into account for structuring a performance
monitoring system for EU‐funded ICT research. These performance dimensions include
the Scientific/technological Knowledge Dimension
the new Instrumentation and Methodologies Dimension
7 The development of a logic model is a recommended starting point in planning evaluation and identifying performance monitoring indicators (see for example Ruegg and Feller 2003, Laredo 2006). It is a systematic and visual way to present and share the understanding of the relationships among the resources available to operate a program, the activities planned, and the changes or results that should be achieved (Kellogg Foundation 2004).
the Knowledge Circulation Dimension via Research Networks and Collaboration
the Human Capital and Capacity Building Dimension
the Economic and Social Impact Dimension.8
The logic modelling approach provides a structured starting point for analysing and developing performance indicators.
1.5.2 Comparative approach
In order to develop performance indicators, the next step was to identify the tasks of performance
monitoring systems and the type of indicators used. The approach was therefore to outline the present
monitoring system of FP7‐ICT research and compare this system to other systems from international R&D
programmes.
The information for the case studies was collected by desk‐ and internet‐research and complemented by direct
communication with persons involved in the design or evaluation of the cases. The cases include concrete
examples of the main objectives of the programmes/policy initiatives and the use and usefulness of the
monitoring/evaluation systems applied9. The comparative cross-country analysis focuses exclusively on creating
a long list of indicators based on the four-level terminology of the logic charts:
Inputs, Outputs, Outcomes, and Impacts. In the individual case studies, specificities of performance
measurement systems were considered10. In addition, the indicators have been structured along relevant
impact channels. Impact channels are mechanisms through which research programmes lead to outcome and
impact. Given the objectives of this study, the relevant impact channels are: knowledge, human capital,
networks, economic (and societal)11.
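To make this two-dimensional structuring concrete, the following minimal sketch (in Python, purely illustrative; the indicator entries are hypothetical examples and not the study's long list) shows how each candidate indicator can be tagged with a results-chain level and a primary impact channel:

RESULT_LEVELS = ("input", "output", "outcome", "impact")
IMPACT_CHANNELS = ("knowledge", "human capital", "networks", "economic/societal")

# Hypothetical example entries; each indicator is positioned in the channel
# for which it is most relevant (cf. footnote 11).
indicators = [
    {"name": "Peer-reviewed publications per project", "level": "output", "channel": "knowledge"},
    {"name": "Number of PhDs trained in projects", "level": "output", "channel": "human capital"},
    {"name": "New or improved products introduced to the market", "level": "outcome", "channel": "economic/societal"},
]

def indicators_in_cell(level, channel):
    # Return all indicators positioned in one cell of the level-by-channel grid
    return [i for i in indicators if i["level"] == level and i["channel"] == channel]

print(indicators_in_cell("output", "knowledge"))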
1.5.3 Performance measure development
For the creation of the long list of performance indicators, the findings from the international experiences and
findings from additional sources are taken into account12. The long list of possible performance indicators is
structured along inputs, outputs, outcomes and impacts. In addition, a section on management indicators is
provided, because efficient programme implementation may also contribute to the overall performance of a
programme. In the output, outcome and impact sections, the relevant impact channels serve as a structuring
element for each analysis. The impact channels indicate the relevant dimensions through which the
programmes unfold their impact. Furthermore, the long list of indicators discusses which key evaluative issues,
instruments/specific objectives, and benchmarks can be addressed by the performance measures. The current
availability, sources and key assumptions for setting up the indicators are also discussed. For the short list of
indicators, a further refinement of the proposed performance measures is provided; the usability of the
measures is exemplified by a SWOT analysis of the implementation of the proposed measures.
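As an illustration of the kind of record such a SWOT-based refinement produces for each short-listed measure, a minimal sketch follows; the measure and all SWOT entries are hypothetical examples rather than the study's actual proposals.

from dataclasses import dataclass
from typing import List

@dataclass
class IndicatorSWOT:
    # One SWOT record per proposed performance measure on the short list
    measure: str
    strengths: List[str]
    weaknesses: List[str]
    opportunities: List[str]
    threats: List[str]

# Hypothetical example
example = IndicatorSWOT(
    measure="Share of projects reporting prototypes taken up by users",
    strengths=["Maps directly onto the outcome level of the logic model"],
    weaknesses=["Relies on self-reported project data"],
    opportunities=["Can reuse existing project reporting templates"],
    threats=["Uptake may only become visible well after project end"],
)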
8 These channels reflect by and large the "Results-based Approach for Impact Assessment" of research activities, which has been developed by researchers at the Science and Technology Policy Research Unit (SPRU) of the University of Sussex from 1996 onward.
9 A table with persons interviewed in the course of the project is provided in Annex 2 of this report.
10 For instance, Canada differentiates between three levels of outcomes, which we have taken together as one level: outcomes. Furthermore, only a few countries use indicators for Inputs.
11 In Annex 2, it is made explicit that one indicator (such as number of PhDs) can be relevant for several impact channels. In the cross-country tables below, we have positioned the indicator in the impact channel for which it is most relevant.
12 Major sources have been the indicator study of Braun et al. (2009), the Community Innovation Survey, and the Report on R&D in ICT in the European Union (EC - IPTS 2011).
2 The Intervention Logic of ICT Research and Innovation in the European Union
Chapter 2 provides a short review of the strategic policy objectives of ICT R&D, linking the overall policy
objectives to the specific ICT instruments, thus highlighting crucial implications for the performance monitoring of
EU-funded ICT research and innovation activities.
The first part of the chapter details the information collected on the intervention logic of the underlying policy
priorities of the FP7‐ICT research programme. The main aim is to highlight and understand the hierarchical
objectives of DG INFSO policies, which will serve as the basis for devising the logic framework for the monitoring
system.
The second part of the chapter provides the logic frame matrices for FP7 and its specific instruments, the ICT
policy interventions based on the Work Programmes for the thematic priority of the EU Framework
Programme FP7‐ICT, and the Information Communication Technologies Policy Support Programme (ICT‐PSP).
Wherever appropriate, this part of the work provides links to the main strategic policy objectives of EU‐funded
research and innovation.
2.1 The EU strategy towards ICT research and innovation
The European Union policy towards research and innovation in ICT is embedded in the top policy strategy
documents of the European Commission, notably the Europe 2020 strategy13, the Digital Agenda for Europe14,
and the Innovation Union.15 The three documents provide a potential baseline for the desired long‐term
strategic impact of ICT research and innovation conducted within the framework of the EU‐funded
Framework Programme FP7‐ICT, the Competitiveness and Innovation Framework Programme (CIP) and the
Information Communication Technologies Policy Support Programme (ICT‐PSP).
2.1.1 Europe 2020
In the midst of the global economic crisis, Europe 2020 has set out a broad vision to achieve high levels of
employment, a low carbon economy, productivity and social cohesion, to be implemented through concrete
actions at EU and national levels under the headings of 1) Smart, 2) Sustainable, and 3) Inclusive growth.
Under the heading “Smart growth”, Europe 2020 sketches a strategy seeking to develop an economy based on
knowledge and innovation, which are drivers of our future growth. Key challenges mentioned under this
heading relate to the areas “Innovation, education, training and lifelong learning”, and the “Digital Society”.
In the innovation area, Europe 2020 acknowledges that Europe not only exhibits a lower rate of R&D spending
(below 2%) than its main competitors, and therefore needs to increase private R&D investment in particular,
but also needs to "focus on the impact and composition of research spending, and to improve the
conditions for private sector R&D in the EU". Whereas the challenge of education, training and lifelong
13 COM(2010) 2020 final, Communication from the Commission: EUROPE 2020 - A strategy for smart, sustainable and inclusive growth, Brussels, 3.3.2010.
14 COM(2010) 245 final/2, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: A Digital Agenda for Europe, Brussels, 26.8.2010.
15 COM(2010) 546 final, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Europe 2020 Flagship Initiative - Innovation Union, Brussels, 6.10.2010.
learning mainly addresses Europe's lag in tertiary education levels, the main challenge regarding the
digital society is that Europe underutilises the market potential of ICT and is also falling behind on the
provision of high‐speed internet (with negative consequences for its ability to innovate) as well as on‐line
dissemination of knowledge and on‐line market services.
To overcome these obstacles, Europe 2020 outlines actions at EU and Member State level within three
Flagship Initiatives. Among these the “Innovation Union” and the “Digital Agenda for Europe” initiatives play a
distinct role for addressing key challenges for research in general and ICT research in particular.
With regard to the Flagship Initiative “Innovation Union” the main aim is “to re‐focus R&D and innovation
policy on the challenges facing our society, such as climate change, energy and resource efficiency, health and
demographic change. Every link should be strengthened in the innovation chain, from 'blue sky' research to
commercialisation”. The main tasks of the European Commission will focus in this respect on the “completion
of the European Research Area and the development of a strategic research agenda focused on challenges
such as energy security, transport, climate change and resource efficiency, health and ageing, environmentally‐
friendly production methods and land management, and to enhance joint programming with Member States
and regions". It further stresses the need to improve framework conditions for business to innovate (e.g.
create the single EU Patent and a specialised Patent Court, improve access to capital, make full use of demand
side policies, etc.) and to launch “European Innovation Partnerships” between the EU and national levels to
speed up the development and deployment of the technologies needed to meet the challenges identified.
With regard to the flagship initiative "A Digital Agenda for Europe", Europe 2020 stresses that the main aim is
“to deliver sustainable economic and social benefits from a Digital Single Market based on fast and ultra fast
internet and interoperable applications, with broadband access for all by 2013, access for all to much higher
internet speeds (30 Mbps or above) by 2020, and 50% or more of European households subscribing to internet
connections above 100 Mbps". The main initiatives that are relevant to ICT research and innovation within the
European Framework Programmes (FPs) and the Community Innovation Programme (CIP) concern the reform
of the research and innovation funds and increased support in the field of ICTs so as to reinforce Europe's
technology strength in key strategic fields and create the conditions for high growth SMEs to lead emerging
markets and to stimulate ICT innovation across all business sectors. Moreover, the Commission work will pay
attention to an array of measures targeting the legal framework in order to “stimulate investments in an open
and competitive high speed internet infrastructure; to develop an efficient spectrum policy; to facilitate the use
of the EU's structural funds in pursuit of this agenda; to create a true single market for online content and
services; to promote internet access and uptake by all European citizens, especially through actions in support
of digital literacy and accessibility”.
Thus, the review of the Europe 2020 Agenda allows us to distil key missions relevant to EU‐funded R&D and
Innovation activities and performance monitoring. This is illustrated in Figure 1.
Figure 1: Europe 2020: high policy objectives relevant for ICT R&D and Innovation
Source: Own compilation based on the Europe 2020 flagship initiative
2.1.2 The Digital Agenda for Europe16
Whereas Europe 2020 sets out the overall policy objectives for the next decade of the Commission and
Member States’ work, the Digital Agenda for Europe, which is one of the flagship initiatives launched under
the Europe 2020 strategy, encompasses the overall ICT strategy for the European Union. The overall aim of the
Digital Agenda is "to deliver sustainable economic and social benefits from a digital single market based on fast
and ultra fast internet and interoperable applications".
As the Digital Agenda is an overarching strategy, it does not only contain measures related to ICT research and
innovation, it also outlines the general strategy to mobilise benefits from ICT via EU and Member State
measures. This includes issues related to market regulation, procurement policies, and provision of ultra‐fast
internet access, etc. Therefore, the Digital Agenda frames its key actions around the following seven problem
areas, acknowledging the three growth dimensions set out in Europe 2020:
fragmented digital markets;
lack of interoperability;
rising cybercrime and risk of low trust in networks;
lack of investment in networks;
insufficient research and innovation efforts;
lack of digital literacy and skills; and
missed opportunities in addressing societal challenges.
Within the problem area “insufficient research and innovation efforts” the Digital Agenda acknowledges the
ICT investment gap with respect to its major trading partners such as the US. It highlights that Europe must
invest more in RTDI to ensure that Europe's best ideas reach the market, as ICT represents a significant share of
total value‐added in European industrial strengths such as automobile (25%), consumer appliances (41%) or
health and medical (33%). The lack of investment in ICT RTDI is a threat to the entire European manufacturing
and service sectors.
The Digital Agenda relates the investment gap to weak and dispersed public R&D efforts, market
fragmentation, dispersion of financing means, and a slow uptake of ICT‐based innovations – notably in areas of
strong public interest. For these challenges, the EU sets out concrete actions, which may be translated into
strategic objectives for ICT research and innovation including links to the EU Framework Programme and CIP‐
ICT PSP (see Table 1).
16 COM(2010) 245 final/2, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: A Digital Agenda for Europe, Brussels, 26.8.2010
Table 1: ICT-RTDI relevant challenges of the Digital Agenda

Challenge 1: Weak and dispersed public R&D efforts in ICT
- Strategic ICT RTDI objectives: to leverage public and private investments for ICT R&D; to ensure light-access financing to EU-funded ICT
- Linkage to FP7 and CIP: 20% yearly increase of the ICT R&D budget at least for the duration of FP7 (input additionality of FP7-ICT research activities); to overcome cumbersome application procedures in FP and ICT-PSP

Challenge 2: Market fragmentation and dispersion of financing means for innovators
- Strategic ICT RTDI objectives: to reinforce the coordination and pooling of resources of Member States and industrial actors
- Linkage to FP7 and CIP: pooling of resources through strategic thematic priority setting within FP7-ICT and ICT-PSP

Challenge 3: Slow uptake of ICT-based innovations, notably in areas of public interest
- Strategic ICT RTDI objectives: to ensure uptake of ICT research through participation of public and private users
- Linkage to FP7 and CIP: increase participation of users and public authorities of Member States in FP7-ICT and ICT-PSP

Source: Own compilation
Furthermore, the Digital Agenda also stresses “that through the smart use of technology and exploitation of
information, challenges facing society like climate change and the ageing population should be addressed”. An
overarching goal for EU‐funded ICT research and innovation is to target research and deployment activities
towards the needs of society. This includes critical elements such as supporting an ageing society, climate
change, reducing energy consumption, improving transportation efficiency and mobility, empowering patients
and ensuring the inclusion of persons with disabilities. This way the Digital Agenda also has links to the
thematic priorities within the FPs and ICT‐PSP.
Hence, the main strategic objectives relevant to FP7‐ICT and ICT‐PSP may be summarised as shown in Figure 2.
Figure 2: The Digital Agenda ‐ high policy objectives relevant for ICT R&D and Innovation
Source: Own compilation based on the Digital Agenda for Europe Flagship Initiative
2.1.3 The Innovation Union17
Like the Digital Agenda, the Innovation Union is one of the seven flagship initiatives announced in the Europe
2020 strategy. The Innovation Union defines the key strategic priorities for Europe’s research and innovation
policy, and aims “to improve the conditions and access to finance for research and innovation, to ensure that
innovative ideas can be turned into products and services that create growth and jobs”. In this vein, the
Innovation Union sets out broad targets to develop an inclusive, business-oriented research and innovation
policy to tackle major societal challenges, raise competitiveness and generate new jobs. Reducing the costly
fragmentation of Europe's research and innovation activities, and hence better aligning EU and national
research and innovation systems, are therefore prerequisites.
The headings of the Innovation Union outline that the EU must “a) tackle unfavourable framework conditions,
b) avoid fragmentation of effort, c) focus on innovations that address major societal needs, d) pursue a broad
17 COM(2010) 546 final, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Europe 2020 Flagship Initiative - Innovation Union, Brussels, 6.10.2010.
concept of innovation (including research‐driven innovation, innovation in business models, and user driven
innovation), and e) involve all actors and all regions in the innovation cycle”.
The Innovation Union acknowledges the role of the European Framework Programmes as regards their potential
to contribute to the integration of EU research efforts via agenda setting and pooling of resources, its
contribution towards education and training and strengthening the science base, as well as its ability to align
R&D programmes towards major societal needs.
Figure 3 presents the key objectives that may be distilled from the Innovation Union document.
Figure 3: The Innovation Union ‐ high policy objectives relevant for ICT R&D and Innovation
Source: Own compilation based on the Innovation Union Flagship Initiative
2.2 Logic frameworks for ICT in FP7 and CIP ICT-PSP
The information retrieved from the strategic policy documents Europe 2020, the Digital Agenda, and the
Innovation Union provides a coherent cluster of key strategic objectives for EU research and innovation
activities and specific objectives conducted within the framework of FP7‐ICT and CIP ICT‐PSP, although the
relevant passages on the main policy objectives are somewhat dispersed across the strategic documents. Therefore,
the policy documents have been analysed and transferred into strategic and specific objectives (Figure 4).
Figure 4: The overarching policy objectives of Europe 2020 and the flagship initiatives Digital Agenda, and
Innovation Union
Source: Own compilation based on Europe 2020, the Digital Agenda and the Innovation Union Flagship Initiative
The strategic objectives outlined in the main EU policy documents do not relate directly to the specific
activities addressed within FP7 and CIP ICT‐PSP, but the strategy documents acknowledge that there are
several different and complementary mechanisms/channels via which European research programmes have an
impact. Hence, it is of vital importance to combine these overall objectives with the more specific objectives of
FP7 ‐ in particular the ICT‐relevant part of FP7 located mainly in the FP7 Cooperation Programme and CIP ICT‐
PSP, taking into account the different mechanisms that lead to impact.
This is done in the following sections, first for FP7-ICT and then for CIP ICT-PSP, using a logic modelling
approach.18 A logic model presents a plausible description of how a programme will work under certain
18 The development of a logic model is a recommended starting point in planning evaluation and identifying performance monitoring indicators (see for example Ruegg and Feller 2003). It is a systematic and visual way to present and share the understanding of the relationships among the resources available to operate a program, the activities planned, and the changes or results that should be achieved (Kellogg Foundation 2004).
conditions to solve identified problems (Jordan 2010). It is intended to provide a clear diagram of the basic
elements of a programme, sub‐programme or project revealing what it is to do, how it is to do it, and with
what intended consequences.
Developing a logic model that captures the essence of a programme in a hierarchical manner and includes the
complete performance spectrum of a programme can help identify a set of “key” indicators for measuring and
reporting whether a programme is on track to achieve its goals (Jordan 2010). Therefore, the overall policy
objectives are integrated into logic charts of the specific ICT instruments, taking into account the thematic
priorities of FP7‐ICT and the ICT‐PSP, along the following hierarchy:
Strategic objectives: refer to the overarching policy goals as outlined in the main policy documents
Europe 2020, Digital Agenda for Europe, and the Innovation Union.
Specific objectives: refer to the main programme objectives of a research and innovation programme
such as FP7 and ICT‐PSP. The specific objectives are detailed in the programme documents.
Operational objectives: refer to the concrete objectives that are considered for the set‐up and the
implementation of programmes. The operational objectives refer to the specific instruments used in
ICT programmes and the specific challenges outlined in the Work Programmes for ICT research.
2.2.1 The Intervention Logic of ICT research in FP7
The overriding aim of the Seventh Framework Programme is to contribute to the Union becoming the world's
leading research area.19 This requires the Framework Programme to focus strongly on promoting and investing in
world-class, state-of-the-art research based primarily on the principle of excellence in research. The objectives
of the Seventh Framework Programme should be geared towards the creation of the European Research Area
and towards carrying it further in the development of a knowledge-based economy and society in Europe.
Key programme objectives of the European Framework Programme are:
to support trans-national cooperation at every scale across the EU;
to enhance the dynamism, creativity and excellence of European research at the frontier of
knowledge, including investigator‐driven basic research based on excellence;
to strengthen human potential in research and technology and facilitate scientific careers;
to intensify the dialogue between science and society;
to strengthen the research and innovation capacities throughout Europe; and
to ensure a wide use and dissemination of the knowledge generated by publicly funded research
activity.
The European Framework Programme therefore has set up four types of activities:
trans‐national cooperation on policy‐defined themes (the "Cooperation" programme);
investigator‐driven research based on the initiative of the research community (the "Ideas"
programme);
support for individual researchers (the "People" programme); and
support for research capacities (the "Capacities" programme).
19 The specific objectives of FP7 stem from the following source and are extracted thereof: Decision No 1982/2006/EC of the European Parliament and of the Council of 18 December 2006 concerning the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007-2013), 30.12.2006.
Within FP7, initiatives for Information and Communication Technologies (ICT) are mainly located within the
Cooperation Programme. In the Cooperation Programme, the Member States have earmarked a total of € 9.1
billion for funding ICT during the lifetime of FP7. This makes it the largest research theme in the programme,
which is itself the largest Specific Programme of FP7 (with 64% of the total budget). In this programme,
support should be provided for appropriate trans‐national cooperation across the EU and beyond, in a number
of thematic areas corresponding to major fields of the progress of knowledge and technology, where research
should be supported and strengthened to address European social, economic, environmental, public health
and industrial challenges, serve the public good and support developing countries. Where possible, this
programme will allow flexibility for mission‐oriented schemes that cut across the thematic priorities.
Within the FP7 “Capacities” programme, the ICT element “e‐Infrastructures” has a budget of approximately €
600 million. Additional ICT research will be carried out through two long‐term private‐public partnerships in
the form of Joint Technology Initiatives (JTIs) in the areas of Embedded Computing Systems (ARTEMIS) and
Nano Electronics (ENIAC). Furthermore, the Ambient Assisted Living Initiative (Art. 169) seeks to ensure
Community participation in jointly implemented research aimed at enhancing the quality of life of older people
by using new ICT.
As the Cooperation Programme is the main pillar of ICT research in FP7, Figure 5 provides a logic model for the Cooperation part of FP7, taking into account the specific instruments and the thematically oriented challenges or priorities defined in the Work Programmes of the FP7-ICT Cooperation Programme.
Figure 5: A logic model for the FP7 Cooperation Programme ‐ ICT
Source: Own compilation
In order to highlight how the three main strategic objectives (economic competitiveness, the advancement of
the European Research Area, and social welfare) are transferred into specific and operational objectives and
then outputs, outcomes and impacts the main mechanisms are indicated via different coloured arrows:
The red arrows refer to economic and industry related mechanisms, such as new products, services
and markets, and spin‐offs. This can be combined with increased knowledge, e.g. in (new)
collaborations with research organisations. It can also involve increased human capital, e.g. more and
better‐trained researchers working in industry.
The green arrows refer to scientific and research related mechanisms, e.g. fundamental and applied
R&D that leads to increased knowledge, human capital, new instruments and methodologies. In the
context of applied R&D and innovation (such as pilots), economic and industry-related mechanisms
also play a role.
The blue arrows refer to a combination of mechanisms ‐ such as increased knowledge and networks ‐
that are primarily driven by societal challenges. Often, this involves participation of a variety of actors
(including civil society and public organisations) and/or dissemination of knowledge to a variety of
stakeholders.
It should be stressed that the three main strategic objectives cannot be completely separated. For instance,
firms collaborating with universities from different countries may be successful in addressing societal
challenges. This would cover all three strategic objectives. Likewise, it is not possible completely to separate
the economic mechanisms from mechanisms related to increased knowledge, human capital and networks. In
Networks of Excellence, the focus is on networking, but scientific output is important as is training and the
uptake by industry. As mechanisms cannot completely be singled out, the three sets of mechanisms (the
arrows in three colours) provide only tentative links between specific inputs/activities, outputs, outcomes,
impacts, operational objectives, and the three strategic objectives. This also reveals how the relative
importance of strategic objectives and of mechanisms is different for different parts of FP7‐ICT and CIP ICT‐
PSP.
To devise the logic chart, the following procedure was applied (a schematic summary of the resulting structure follows the list below):
At the level of strategic EU policy objectives and the specific objectives of ICT research and
innovation, the most relevant policy objectives outlined in Europe 2020, the Digital Agenda, and the
Innovation Union were selected and then linked with the operational objectives of the FP7
Cooperation Programme.
At the activities/inputs level, the critical activities of the FP7 Cooperation Programme are portrayed.
The boxes at the activity level indicate a logic flow from left to right, ranging from the programme
management work of the European Commission to the activities performed by the participants of
FP7. On behalf of the European Commission, they include the design of the specific funding
instruments with their specific emphases, the selection of appropriate thematic priorities executed
within the annual Work Programmes, including an allocation of funds to the specific areas, and the
selection of projects. On behalf of the research project participants, the input comprises the set-up of
international cooperative research teams, including the definition of a research agenda and a specific
work plan incorporating qualification measures. The incorporation of qualification measures, exchange
schemes, etc. is shown as a single box to demonstrate that the human resource dimension is an
integral part of many of the specific FP7 instruments.
At the output level, the boxes indicate that funding has been provided for a series of projects in areas
of significant relevance. The conduct of the research activities will also result in various forms of new
scientific and technological knowledge, and a capacity formation at the level of participating
organisations and individuals will take place. These outputs include tangible/countable outputs and
non-tangible outputs. Tangible scientific and technological outputs include, e.g., publications in peer-
reviewed and non-peer-reviewed journals, working papers, conference presentations, lectures, patent
applications, IPRs, prototypes and models. Non-tangible scientific and technological outputs include, e.g.,
new hypotheses and theories, new problems and hunches, and new practically oriented ideas. There are
also countable and non‐countable outputs with regard to the capacity/human resources side.
Measurable outputs include, e.g., the number of PhDs resulting from the project.
The outcomes level relates to the effects of the projects that occur at the level of the group of
participants in a medium- to long-term timeframe. Central aspects at the intermediate level are that
the knowledge acquired in the projects will be transferred into new products/processes and
services or product improvements, spin-off products, granted patents and licences - to name a few -
which are taken up by users. The research results stemming from the projects are supposed to have
leveraged additional privately funded R&D activities, further strengthening the capacities of
participating organisations. The knowledge generated through the portfolio of projects will also lead
to an accumulation of knowledge, strengthening the technological leadership of European industry
and visible scientific excellence in key research areas. It is also expected that a sustainable change in
the behaviour of project participants can be traced during this time. Further integration of the
activities of project partners should have taken place, and new strategic business alliances across
Europe should have emerged.
The impact level depicted in the figure relates to the fundamental intended (or unintended) change
occurring at the level of the European Community. The indicated boxes link back to the strategic
objectives of FP7.
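To summarise the structure sketched by the bullets above, the levels of a logic chart can be captured per instrument in a compact record. The sketch below is purely illustrative: the field names and the example content are our own placeholders, not the content of Figures 5 to 9.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicChart:
    instrument: str                       # e.g. "STREP", "IP", "NoE", "FET"
    operational_objectives: List[str]
    inputs_activities: List[str]
    outputs: List[str]
    outcomes: List[str]
    impacts: List[str]
    mechanisms: List[str] = field(default_factory=list)   # e.g. "economic", "scientific", "societal"

# Hypothetical example entry for a STREP-type project line
example = LogicChart(
    instrument="STREP",
    operational_objectives=["Sharply focused, objective-driven research with a limited number of participants"],
    inputs_activities=["Set-up of an international cooperative research team, research agenda and work plan"],
    outputs=["Publications, prototypes, patent applications"],
    outcomes=["New or improved products and services taken up by users"],
    impacts=["Strengthened technological leadership of European industry"],
    mechanisms=["scientific", "economic"],
)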
The objectives of FP7 and the logic chart diagram of the FP7-ICT Cooperation Programme also highlight not only
that different performance measurement levels exist (inputs/outputs/outcomes/impact), but also
that there are different performance dimensions of FP7, which need to be taken into account
for structuring a performance monitoring system for EU‐funded ICT research. These performance dimensions
include a) the Scientific/technological Knowledge Dimension, b) the new Instrumentation and Methodologies
Dimension, c) the Knowledge Circulation Dimension via Research Networks and Collaboration, d) the Human
Capital and Capacity Building Dimension, e) the Economic and Social Impact Dimension.20
As ICT-funded research in FP7 consists of a range of different funding instruments with different emphases on
the performance domains, the intervention logics of the different instruments applied for EU-funded ICT
research and innovation are presented in the following sections.
2.2.2 The intervention logic of the different instruments for ICT research and innovation
As highlighted in Figure 5, the Cooperation Programme for ICT Research consists of a range of instruments with different focuses and purposes:
20 These channels reflect by and large the "Results-based Approach for Impact Assessment" of research activities, which has been developed by researchers at the Science and Technology Policy Research Unit (SPRU) of the University of Sussex from 1996 onward.
Collaborative projects (CPs) provide support to research projects carried out by consortia with
participants from different countries aiming at developing new knowledge, new technology,
products, demonstration activities or common resources for research. The funding scheme allows
for two types of projects to be financed: a) “Specific Targeted Research Projects” (STREPs), b) “Large‐
scale Integrating Projects" (IPs). Whereas STREPs target a specific research objective in a sharply
focused approach, IPs have a comprehensive programme approach, including a coherent set of
activities dealing with multiple issues aimed at specific deliverables. As opposed to STREPs, IP
activities also include activities to disseminate research results and prepare for their uptake and use,
including knowledge management and IPR protection, as well as training activities for researchers and key
staff, which should improve the professional development of the personnel concerned.
Networks of Excellence (NoEs) provide support to a Joint Programme of Activities implemented by a
number of research organisations integrating their activities in a given field, carried out by research
teams in the framework of long‐term cooperation. The main aim of the NoEs is to support the long‐
term durable integration of research resources and capacities (researchers, services, teams,
organisations, and institutions) to consolidate or establish European leadership at world level in their
respective fields by integrating at European level the resources and expertise needed for that
purpose. Hence, NoEs seek in particular to overcome fragmentation of European research efforts,
achieve higher scientific excellence, and make Europe more competitive at international level.
Coordination and support actions (CSA) provide support to activities aimed at coordinating or
supporting research activities and policies (networking, exchanges, coordination of funded projects,
trans‐national access to research infrastructures, studies, conferences, etc.). These actions may also
be implemented by other means than calls for proposals. The funding scheme allows for two types of
projects to be financed: a) "Coordination Actions" (CA) and b) "Specific Support Actions" (SA).
The outline of the specific instruments of FP7 shows that a performance measurement system must take into
account the differences in their rationales, organisation, and intended outcomes. Therefore, specific logic
charts for three major funding instruments are provided (STREPs, IPs and NoEs). As the specific instruments
adhere to the strategic EU policy objectives and the specific objectives of ICT research in FP7, the following
logic charts start at the operational objectives level.
Figure 6 provides a logic model for Specific Targeted Research Projects (STREPs) within FP7-ICT, and Figure 7 provides a logic
model for Integrated Projects (IPs). In the operational objective area, STREPs focus on very specific, objective-
driven research activities, with a limited number of participants (approx. 6‐15). Due to the limited time range,
STREPs also target results that lead to specific useful new technological and scientific knowledge of societal
interest. Hence, the logic model foresees that the cooperation performed within STREPs will not lead to large
new networks, but a strengthening of cooperation of a limited number of actors and a portfolio of projects
that addresses specific solutions for certain challenges. The Logic Chart for STREPs also acknowledges that
STREPs do not incorporate exchange schemes, training measures and preparatory activities for IPRs into their
project activities. Hence, a specific channel focused on human resources development has been omitted, as
opposed to Integrated Projects that have a much wider scope and activity agenda than STREPs.
Figure 6: A logic model for Specific Targeted Research Projects (STREPs) within FP7 – ICT
Source: Own compilation
Figure 7: A logic model for Integrated Projects (IPs) within FP7 – ICT
Source: Own compilation
Figure 8 provides a logic chart for Networks of Excellence funded within FP7-ICT. As illustrated in the logic
chart, the main emphasis is that the NoEs seek to pool resources to overcome fragmentation of European
research activities. A considerable number of networking activities, including joint training programmes, and
an aligned research agenda are core elements as illustrated in the logic chart.
Figure 8: A logic model for Networks of Excellence (NoEs) within FP7‐ICT
Source: Own compilation
2.2.3 The thematic priorities in the intervention logic of FP7-ICT
Apart from the different instruments, the FP7‐ICT research agenda is defined in the annual Work Programmes
for ICT research. The ICT Work Programmes define the priorities for each call for proposals to be launched in
the following year. The priorities of the annual Work Programmes are faithful to the FP7 Framework
Programme and Specific Programme decisions and in line with the main ICT policy priorities as defined in the
Digital Agenda.
The ICT Work Programme 2011‐12 under FP7 is divided into eight challenges of strategic interest to European
society and research into ‘Future and emerging technologies’ and support for horizontal actions, such as
international cooperation and pre‐commercial procurement:
Challenge 1 ‐ Pervasive and Trusted Network and Service Infrastructures
Challenge 2 ‐ Cognitive Systems and Robotics
Challenge 3 ‐ Alternative Paths to Components and Systems
Challenge 4 ‐ Technologies for Digital Content and Languages
Challenge 5 ‐ ICT for Health, Ageing Well, Inclusion and Governance
Challenge 6 ‐ ICT for low carbon economy
Challenge 7 ‐ ICT for the Enterprise and Manufacturing
Challenge 8 ‐ ICT for Learning and Access to Cultural Resources
In contrast, the ICT Work Programmes for 2007-08 and 2009-10 were divided into seven challenges plus
research into 'Future and emerging technologies': Challenge 7 of those Work Programmes (ICT for
independent living, inclusion and governance) moved to Challenge 5 in the Work Programme 2011-12, and 'ICT
for Learning and Access to Cultural Resources' was added to the list of challenges.
A logic chart for the Future and Emerging Technologies scheme
Figure 9 provides the logic chart for the Future and Emerging Technologies (FET) scheme, which is different
from the other challenges outlined in the Work Programme 2011-2012, which seek to a) overcome
technological roadblocks and reinforce EU industrial strengths (Challenges 1, 2, 3 and 7), and also feature b) specific
socio-economic goals relevant for EU citizens (Challenges 4, 5, 6 and 8).
Figure 9: A logic model for the Future and Emerging Technologies Programme21
Source: Own compilation
The FET scheme aims to lay new foundations for future ICT by exploring new unconventional ideas that can
challenge the understanding of the scientific concepts behind ICT and can impact future industrial ICT research
agendas. As such, the FET scheme is very different from the IPs and STREPs. FET projects aim at the realisation
of bold ideas involving high risks through high‐quality, long‐term, visionary research with sound and well‐
targeted objectives. It has no pre-defined Work Programme and is open to any research related to ICT. The
early-stage nature of the research activities, their novelty and the long-term research approach make the
scheme more comparable to ERC-type funding, but the FET scheme does not fund individual researchers; it
supports only collaborative research.
The logic chart of the FET scheme reflects the key characteristics of FET research. It should lay new foundations
for future ICT. It should explore new, unconventional ideas and scientific paradigms that are too risky for
present industrial research. The FET scheme has a strong emphasis on high-risk, purpose-driven research, which
means that the research conducted within the FET scheme should a) make an impact on future industrial ICT
research agendas, and b) should also lead to scientific and technological breakthroughs. Apart from the
collaborative approach, which should affect the strategic research agendas of its participants, the
multidisciplinary approach of FET is also reflected in the logic chart. The logic chart also acknowledges that two
complementary schemes, i.e. FET Proactive Initiatives (thematic, top‐down) and FET Open (open, bottom‐up),
are used.
21 This logic model has been elaborated based on the following source: Lieshout, M. et al. (2009): The Impact of FET Research Initiatives (IFETRI), Final report: Part I- Methodology, Delft, 16.02.2011
The links between FP7 ICT and the Work Programme challenges
As indicated in the overall logic chart of the Cooperation Programme FP7‐ICT (Figure 5), each specific challenge
of the Work Programme is operationalised via an implementation of the specific instruments available in FP7
(Figure 6 – Figure 9).
Each challenge of the Work Programme provides a set of top‐down defined research agendas that will result in
distinct outputs and outcomes, which may be considered for the development of performance measurement
indicators. Annex 1 lists tables on outputs, outcomes and impacts stemming from the specific challenges based
on an analysis of the latest multi‐annual FP7 Work Programme (2011‐2012). The analysis performed in Annex 1
shows that each challenge of the Work Programme has a distinct research/technological oriented focus,
outputs, outcomes and impacts. Some challenges also have a particular societal oriented focus.
All challenges adhere to the overall targeted objectives and are faithful to the FP7 Cooperation Programme
and its instruments. Consequently, the summary Table 2 indicates by which means the specific objectives of
FP7‐ICT as outlined in Figure 5 relate to the Work Programme challenges.
Table 2 shows that the challenges of the Work Programmes relate to concrete tasks and measures that are
planned for distinct technologies, markets, and societal challenges, whereas increased quality in ICT research,
an upgrade of human capital and skills, and reinforced coordination are common to all specific challenges –
and may be measured by similar means.
Differences in the directly involved target groups at the input/activity level and the market areas, in which the
portfolio of projects will exert its influence, have also been confirmed in the interviews. For instance, in
Challenge 1 “Pervasive and Trusted Network and Service Infrastructures” there is a relatively small group of
core actors, consisting of very large European companies, which the EU Framework Programmes have allowed
to engage in collaborative R&D endeavours. The joint R&D activities of these companies have contributed to
an alignment of R&D agendas and have resulted in considerable advancement of knowledge of the
participating organisations. The results of this type of work are above all various forms of IPRs, which are then used
internally by the enterprises for the development of new products and processes. The more societally oriented
challenges (4, 5, 6 and 8) have a more demand-driven approach. They build much more on the interaction
with potential users of services and have a clear application orientation in mind. This also has consequences in
terms of direct outputs. In the technically oriented challenges of FP7-ICT, direct technological results are
more often specific technologies, tools and methods, prototypes, algorithms, and IPRs, whereas in the societally
oriented challenges new system applications prevail.
The analysis of the Work Programme made it clear that the current state of the level of interaction between
research organisations, technology providers and potential users of new technologies is different between the
challenges. In this respect, the overview of challenges provided in Annex 1 of this report reveals that similar
concepts are sometimes positioned at different levels. For example, "Consensus building between research
organisations and commercial organisations" is an output in Challenge 4; new research agendas are mostly
positioned as an outcome (e.g. in Challenges 1 and 2) and roadmaps and innovation scenarios are seen as
impacts in Challenges 1 and 7 respectively. To some extent, this reflects the maturity of a domain and existing
links between researchers, business and other stakeholders. A similar situation emerged for standardisation
and interoperability. These concepts are phrased differently and positioned differently, across the eight
challenges.
Table 2: Linkages between specific objectives of FP7-ICT and the challenges of the Work Programme

Specific objective: Increase technological excellence of ICT industry
- Challenge 1 (Networks): Technological field: software engineering, network technologies. Market area: Internet services, Cloud Computing.
- Challenge 2 (Cognitive Systems/Robotics): Technological field: robotics systems. Market area: logistics, manufacturing technologies, transport.
- Challenge 3 (Components and Systems): Technological field: several (manufacturing industry). Market area: IT equipment, components.
- Challenge 4 (Digital Content and Languages): Technological field: software engineering. Market area: Internet services.
- Challenge 5 (Health): Technological field: software, data management, sensors. Market area: healthcare services.
- Challenge 6 (Low Carbon Energy): Technological field: IT systems/tools for specific sectors. Market area: automotive, transport, energy, water.
- Challenge 7 (Enterprise and Manufacturing): Technological field: sensors, methods, architectures. Market area: manufacturing sector.
- Challenge 8 (Learning and Cultural Resources): Technological field: software applications, technologies. Market area: software, services.

Specific objective: Increase quality of ICT research
- Challenges 1-8: Increased technological capabilities.

Specific objective: Reinforce coordination/pooling of resources
- Challenge 1 (Networks): Common technological platforms through cooperation of major industrial actors.
- Challenge 2 (Cognitive Systems/Robotics): Integrated EU research community in robotics; cross-fertilisation of academic and industrial communities.
- Challenge 3 (Components and Systems): Integrated EU research communities; integration of businesses, researchers, users.
- Challenge 4 (Digital Content and Languages): Integrated EU research communities; integration of businesses, researchers, users.
- Challenge 5 (Health): Integration of businesses, researchers, users.
- Challenge 6 (Low Carbon Energy): Integration of businesses, researchers, users.
- Challenge 7 (Enterprise and Manufacturing): Cross-fertilisation of academic and industrial communities; integration of businesses, researchers, users.
- Challenge 8 (Learning and Cultural Resources): Integration of businesses, researchers, users.

Specific objective: Foster human capital and skills
- Challenges 1-8: Depends more on instrument than on challenge.

Specific objective: Research on themes of major societal needs
- Challenge 1 (Networks): General: economic competitiveness, social welfare.
- Challenge 2 (Cognitive Systems/Robotics): General: economic competitiveness, social welfare. Specific: new tools assisting citizens.
- Challenge 3 (Components and Systems): General: economic competitiveness, social welfare.
- Challenge 4 (Digital Content and Languages): General: economic competitiveness, social welfare. Specific: new opportunities for SMEs, lower language barriers for citizens.
- Challenge 5 (Health): General: economic competitiveness, social welfare. Specific: better healthcare and information systems.
- Challenge 6 (Low Carbon Energy): General: economic competitiveness, social welfare. Specific: lower CO2 emissions, higher energy efficiency, costs saved.
- Challenge 7 (Enterprise and Manufacturing): General: economic competitiveness, social welfare. Specific: manufacturing industry.
- Challenge 8 (Learning and Cultural Resources): Specific: new possibilities to engage with cultural resources, new training/tutoring possibilities.
2.2.4 Additional EU ICT research initiatives
Apart from the FP7‐ICT Cooperation Programme, there are a number of other EU‐funded ICT research
activities within the Framework Programme, i.e. a) the Joint Technology Initiatives, b) the ERA‐NETs and c)
Joint Programming.
Joint Technology Initiatives22
The Joint Technology Initiatives (JTIs) originate in the European Technology Platforms (ETPs) set up since the
6th Framework Programme for Research and Development to contribute to Europe’s growth, competitiveness
and sustainability objectives by better focusing research funding on the needs of industry. The ETPs are means
to define R&D priorities, timeframes and action plans in a number of strategically important areas. The ETPs’
Strategic Research Agendas (SRA) were set up with a key role for industry.
A limited number of these ETPs offer opportunities for significant technology advances but require a long-term public-private partnership to implement (parts of) their ambitious SRAs. Where the loose coordination through the European Technology Platforms and the support through the regular instruments of the European Framework Programme for Research and Development are not sufficient, the European Commission proposed to set up Joint Technology Initiatives. Hence, JTIs primarily grew out of the work of the ETPs (SEC (2005) 800, p.9).
JTIs have been implemented as so‐called Joint Undertakings in accordance with Article 171 of the Treaty. The
Joint Undertakings have the legal status of Community Bodies. In the field of ICT research, two JTIs have been
set up:
The ARTEMIS Joint Undertaking has been established to run the JTI in embedded systems.
The ENIAC Joint Undertaking has been established to run the JTI in nanoelectronics.
The general objectives common to all JTIs as stated in the programming documents (SEC (2005) 800, p.9)
include:
coherent implementation of European research efforts in the strategic technological fields for the
future;
accelerating the generation of new knowledge, innovation and the uptake of research into strategic
technologies, leading to enhanced productivity and strengthened industrial competitiveness;
concentrating efforts on key projects that can help meet Europe’s industrial competitiveness goals;
enhancing the technology verification process to identify and remove obstacles to future market
penetration; and
pooling user requirements to guide investment in research and development towards operational and
marketable solutions.
At the same time, the JTIs are not intended to have a restrictive effect on competition. They are intended to
enhance downstream competitiveness in key technologies by addressing market failures arising from the high
costs and risks associated with long‐term, pre‐competitive, multidisciplinary research. For the established JTIs,
the above objectives are further elaborated in the Council Regulations.
22 The information on the JTIs objectives and intended results are extracted from the following specific study: Dinges, M. et al. (2008), Evaluation Manual for ICT research in FP7: The Joint Technology Initiatives, Study performed for the European Commission SMART 2008/034, Final Report, Vienna. http://ec.europa.eu/dgs/information_society/evaluation/studies/s2008_05/jti_eval_manual_final_report.pdf
The respective Joint Technology Initiative shall contribute to the implementation of the Seventh Framework
Programme and the theme X (the respective field of research) of the Specific Programme ‘Cooperation’. It
shall, in particular:
Define and implement a Research Agenda for the development of key competences or technologies
for theme X across different application areas in order to strengthen European competitiveness and
sustainability and allow for the emergence of new markets and societal applications. Activities for the
implementation of the Research Agenda are hereinafter referred to as ‘R & D Activities’.
Support the implementation of the R & D Activities notably by awarding funding to participants in
selected projects following competitive calls for proposals.
Promote a public‐private partnership aimed at mobilising and pooling Community, national and
private efforts, increasing overall R & D investments in the respective field, and fostering
collaboration between the public and private sectors.
Achieve synergy and coordination of European R & D efforts in the respective field including, when
added value can be created, the progressive integration in the respective Joint Technology Initiative
of the related activities in this field currently implemented through intergovernmental R&D schemes
(Eureka).
Promote the involvement of SMEs in its activities in line with the objectives of the Seventh
Framework Programme.
The ex‐ante assessments of the current JTIs of DG INFSO outline the expected results stemming from the JTI
activities and indicate corresponding result (outcome) indicators, i.e.:
Leveraging resources and integrating national efforts. By providing incentives to industry and Member
States, additional national support will be attracted and greater industry funding will be leveraged.
o Indicators: (i) number of countries that commit funding to the JTI; (ii) commitments and
payment appropriations; (iii) national funding committed and spent on projects selected by
the JTI; (iv) resources invested by industry in R&D work for projects selected by the JTI.
Focusing on common R&D agendas more effectively than is currently possible.
o Indicators: (v) number and level of partnerships.
Raising programme efficiency by removing uncertainty as to the availability of national budgets and
by removing duplicative evaluation and monitoring procedures.
o Indicators: (vi) time interval between proposal submission and project selection decision by
the JTI; (vii) number of organisations, including SMEs participating in Calls for proposals; (viii)
overhead costs for operating the programme; (ix) results transferred to the marketplace.
Providing significant economic and social benefits by making progress towards its technological and
economic objectives of the JTI. This progress will be subject to periodic independent evaluation.
o Indicators: (x) patents filed resulting from projects; (xi) number of publications resulting from
projects.
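Purely as an illustration of how such result indicators could be derived from project-level monitoring data, the following sketch computes indicator (i), the number of countries committing funding to the JTI, and indicator (iv), the resources invested by industry in R&D work for selected projects, from a hypothetical list of participation records. The record fields and function names are assumptions made for the sake of the example and are not taken from the JTI ex-ante assessments.

from typing import Iterable, Mapping

# Hedged sketch: deriving two JTI result indicators from hypothetical participation records.
# The field names (country, sector, national_commitment, industry_rnd_investment) are assumptions.
def countries_committing_funding(participations: Iterable[Mapping]) -> int:
    """Indicator (i): number of countries that commit national funding to JTI projects."""
    return len({p["country"] for p in participations if p.get("national_commitment", 0) > 0})

def industry_rnd_investment(participations: Iterable[Mapping]) -> float:
    """Indicator (iv): resources invested by industry in R&D work for selected projects (EUR)."""
    return sum(p.get("industry_rnd_investment", 0.0) for p in participations if p.get("sector") == "industry")

# Example with made-up records:
records = [
    {"country": "AT", "sector": "industry", "national_commitment": 1.2e6, "industry_rnd_investment": 2.5e6},
    {"country": "DE", "sector": "research", "national_commitment": 0.8e6, "industry_rnd_investment": 0.0},
    {"country": "DE", "sector": "industry", "national_commitment": 0.0, "industry_rnd_investment": 1.1e6},
]
print(countries_committing_funding(records))  # 2 countries committing national funding
print(industry_rnd_investment(records))       # 3600000.0 EUR invested by industry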
ERA-NETs, Article 169 initiatives and Joint Programming
The main aims of the ERA-NET scheme23 and of the participation of the Community in jointly implemented national research programmes (under Article 169 of the Treaty) are to:
23 Decision no 1982/2006/EC of the European Parliament and the Council concerning the 7th framework programme of the European Community for Research, Technological Development and Demonstration Activities (2007-2013) (OJ L412/1 30.12.2006), http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:412:0001:0041:EN:PDF
Step up the cooperation and coordination of research activities carried out at national or regional level in the Member States and Associated States through the networking of research activities and the mutual opening of research programmes;
Improve the coherence and coordination across Europe of such research programmes.
Where appropriate, ERA‐NETs could be used for programme coordination between European regions and
Member States to enable their cooperation with large‐scale initiatives, and in a limited number of cases,
provide additional Community financial support to those participants that pool resources for the purpose of
joint calls for proposals between their respective national and regional programmes (‘ERA‐NET PLUS’).
The Joint Programming concept was introduced by the European Commission in July 2008. It is one of the five
initiatives for implementing the European Research Area (ERA). The aim of Joint Programming is to increase
the value of relevant national and EU R&D funding by concerted and joint planning, implementation and
evaluation of national research programmes. Even common financing could be considered in this context.
Within the concept of Joint Programming, Member States must coordinate national research activities, bundle
resources, benefit from complementarities and develop common research agendas in order to face the grand
societal challenges – all in variable geometry and therefore on a voluntary basis. Joint Programming intends to
tackle the challenges that cannot be solved solely at the national level and allows Member States to
participate in joint initiatives that seem useful for them.
2.2.5 The Community Innovation Programme (CIP) and the ICT Policy Support Programme (ICT-PSP)
CIP’s overarching aim is to contribute to the enhancement of competitiveness and innovation capacity in the
EU, the advancement of the knowledge society, and sustainable development based on balanced economic
growth. A significant part of CIP consists of encouraging the competitiveness of European enterprises,
especially SMEs.
With a proposed budget of EUR 4,212.6 million, CIP will fund actions in three different Work Programmes:
The Entrepreneurship and Innovation Programme, with a special focus on SMEs;
The ICT Policy Support Programme, supporting the use of ICT in businesses;
The Intelligent Energy Europe Programme.
FP7 and CIP are mutually reinforcing components of the EU's efforts to reach the Lisbon goals and support
Europe's competitiveness and innovative capacity.
CIP will primarily focus on the technological and non‐technological aspects of innovation and the downstream
parts of the innovation and research process. It aims to provide a common framework for sub‐programmes
integrating a number of existing measures in a much simpler structure. The programmes include the multi‐
annual programme for enterprise and entrepreneurship, the LIFE programme, i2010 (former eEurope action
plan), CIP ICT PSP – eContentplus (former European Digital Content programme), financial aid to the
establishment of TEN‐T/Transport infrastructure and the Intelligent Energy ‐ Europe Programme.
2.2.6 The ICT Policy Support Programme
ICT PSP aims at stimulating smart, sustainable and inclusive growth by accelerating the wider uptake and best
use of innovative digital technologies and content by citizens, governments and businesses. It provides EU
funding to support the realisation of the Digital Agenda for Europe.
The programme addresses obstacles preventing further and better use of ICT‐based products and services and
barriers to the development of high growth businesses, notably SMEs, in this field. In addition to illustrating
and validating the high value of digital technologies for the economy and society, it will foster the
development of EU‐wide markets for innovations enabling every company in Europe to benefit from the
largest internal market in the world.
Particular emphasis is placed on areas of public interest given their weight in the European economy and the
unique solutions that ICT can bring to the societal challenges that lie ahead such as health and ageing,
inclusion, energy efficiency, sustainable mobility, culture preservation and learning as well as efficient public
administrations. The main challenges include the relatively slow uptake of ICT innovations in the public sector
and the high fragmentation of relevant markets due notably to a lack of interoperability between ICT solutions
deployed across the Member States and Associated Countries. ICT PSP covers technological and non‐
technological innovations that have moved beyond the final research demonstration phase. ICT PSP does not
support research activities; but it may cover, when needed, technical adaptation and integration work in order
to achieve the objectives.
Networking actions for sharing experiences and preparing the deployment of innovative ICT‐based solutions in
such areas are also supported, as well as the monitoring of the Information Society through benchmarking,
analyses and awareness raising actions.
Figure 10 provides a logic chart for the ICT-PSP programme. The logic chart shows that the ICT PSP programme has established four types of instruments:
Pilot (Type A): Building on initiatives in Member States or associated countries, with interoperable service pilots as the central theme. The aim is to implement an open, common interoperable solution with results widely disseminated and available to all Member States.
Pilot (Type B): Stimulating the uptake of innovative ICT-based services and products through a first implementation of an ICT-based innovative service carried out under realistic conditions. The operational pilot service should demonstrate significant impact potential and engage a complete value chain of stakeholders.
Thematic Networks (TN): Providing a forum for experience exchange and consensus building that brings together relevant stakeholders, expertise and facilities to support the implementation of Information Society policies using ICT, pursuing clear objectives and measurable outcomes.
Best Practice Networks (BPN) (up to 2011): Making European digital libraries more accessible and usable by combining the consensus-building and awareness-raising function of a network with a large-scale implementation in a real-life context.
Figure 10: A logic model for ICT PSP Programme
Source: Own compilation
With regard to the Framework Programmes for research and innovation, the different instruments of ICT PSP place specific emphasis on the various objectives of ICT PSP. Whereas the Pilot A instrument seeks to gather a large, representative number of public authorities, NGOs and industry for setting up a Europe-wide service pilot, the Pilot B projects involve a considerably smaller number of actors, including regional authorities and firms, which seek to implement operational pilots at a much smaller scale, usually in a limited number of countries. In the Thematic Networks, a central aim is to build a critical mass of European public administration bodies to respond jointly to collective needs. The Thematic Networks can be seen as common interest groups, which seek to stimulate a Europe-wide discussion and actions towards the harmonisation of EU-wide rules and regulations, and joint activities for ICT-based, policy-driven applications.
ICT PSP is furthermore organised in annual Work Programmes that lay out the main themes of actions. In each
Work Programme, community support is concentrated in a limited set of actions in predefined themes where
Community funding is needed in order to achieve the highest impact. The themes highlighted in the annual
Work Programme all reflect societal challenges relevant to citizens and public administrations, which are
the core entities where the Pilots are implemented. In the course of the Work Programmes from 2007‐2011,
the main themes addressed were:
Efficient and interoperable eGovernment services including EU‐wide e‐procurement, pan‐European
recognition of electronic IDs, innovative solutions for inclusive e‐government, user friendly
administrations, public services and inclusion, public sector information.
ICT for health, ageing and inclusion including interoperable health services, accessibility of ICT for all,
e‐Prescription, patient summary.
ICT for energy efficiency and sustainability in urban areas/environment and ICT for low carbon
economy and smart mobility
ICT for digital libraries
Multilingual web
From the perspective of relevant issues to be considered for a performance monitoring system for ICT PSP, the following picture emerges:
The programme applies innovative funding instruments in the promotion of demand and user‐
oriented innovation policies. Hence, the involvement of end‐users and the uptake of new services and
their wider deployment are key factors for a monitoring system.
The central aim of the Pilot A projects is to address issues of interoperability in specific, tightly defined areas, assembling a wide network from a large number of countries in order to create interoperable solutions/standards across Europe. The projects therefore need to gather a well-balanced mix of Member States to pave the way for Europe-wide dissemination. In terms of a monitoring system, this means that the origin of participants and their tasks/roles in the programmes need to be well documented and specified.
Pilot B projects, which focus geographically on a more limited number of actors in the Member States, also have the objective of promoting the wider adoption of their results. Hence, mechanisms that lay out the approach taken and the progress towards this objective have to be implemented and reported in a synthesised way.
The different thematic objectives of ICT‐PSP also mean that outputs, outcomes and impacts target
different domains. This needs to be taken into account when measuring the socio‐economic impact
dimension.
2.3 Conclusions
FP7‐ICT and CIP ICT‐PSP are characterised by a high degree of complexity and address very sector specific
technological issues and societal targets. Hence, from both a technological and a societal perspective there are
large variances between the different instruments and thematic areas of the Work Programmes.
Differences in the target groups, market areas, and market maturity have been observed. In some thematic areas, previous programmes have already allowed a core set of research agendas and involved actors to form, and the new Work Programmes build largely on what has already been achieved. In others, coordination and cooperation between the different actors is at a much lower level and is therefore seen not as an input, but as an impact of EU-funded ICT research.
However, in spite of the very specific challenges of the Work Programmes, which relate to concrete tasks and
measures that are planned for distinct technologies, markets, and societal challenges, the following relevant
commonalities for establishing a performance monitoring system have been identified:
The instruments and thematic areas are characterised by a common set of overarching targets. These are: i) to increase the technological excellence of the EU ICT industry, ii) to increase the quality of ICT research, iii) to reinforce coordination and pooling of resources, iv) to ensure training, dissemination and knowledge management, and v) to focus research on themes of major societal needs.
The differences between the thematic challenges and their sub‐domains reflect different emphases of
their work (e.g. knowledge exploration vs. use, network creation vs. network exploitation, etc.), but
the differences between each of them may not be characterised as fundamental: all challenges are
part of FP7 ICT and make use of the same set of instruments.
The differences at the outcomes and impact level are largely the result of a specification of general
concepts to the specific domain of a challenge (e.g. strengthened competitiveness of nano‐industry,
increased competitiveness of EU industry in flexible, organic, large area electronics, or improved
competitiveness in a multilingual digital market).
Hence, against this background of complexity, the logic chart analysis provided linkages between the specific objectives of the instruments and thematic priorities and the inputs, outputs, outcomes and impacts (i.e. the impact chain of the programme). Together with the Work Programme analysis, the findings provide the European Commission with a concept for performance measurement that is based upon the impact chain of the programme and the relevant impact channels (knowledge, research networks and collaboration, human capital, economic and social impact).
Qualitative approaches, such as evaluations by expert panels, may make extensive use of domain-specific concepts and indicators. This adds precision. However, quantitative approaches, such as monitoring systems that should cover all EU-funded ICT research, cannot rely on concepts for very specific domains. In terms of feasibility, it is already a considerable challenge for the Commission to develop indicators that are able to track, on a timely basis, the 'domain-independent' outputs, outcomes and impacts that are common throughout the ICT programme.
A performance monitoring system should also remain stable over the years. It should collect reasonable data that allow developments to be traced, and should therefore not be bound to the specificities of the ICT Work Programmes at the level of specific objectives. By nature, these are more likely to change than the portfolio of instruments and the overarching targets of the Framework Programme24.
For specific challenges or domains in the successor to FP7, it is possible to add examples from a specific domain and to select or emphasise different indicators. It is also possible to position an indicator at a different level: in emerging domains, consensus on a research agenda or a standard-setting process is an outcome or impact, whereas in other domains it is an output. This allows for a customised approach for specific domains. However, the strategic objectives, in particular of ERA-NETs and Joint Programming, are fundamentally different from the objectives of the Cooperation Programme of FP7 and FP7-ICT, as they mainly address public authorities responsible for R&D programming and seek to pool resources for funding at a European level. The actual programme design, selection procedures, and the research and innovation activities within these programmes are placed at the level of national authorities. Therefore, the principles for performance monitoring also have to be defined at the level of the executing agencies. The EC may therefore only monitor the progress of national coordination. In addition, the European Commission currently supports setting up learning processes for effective implementation, monitoring and evaluation of the Joint Programming Initiatives via specific FP7 coordination and support actions.
24 As indicated in an interview conducted in the course of this stage of the project, Horizon is likely to drop the division of EU-funded ICT research into eight thematic challenges. Instead, a division into three broad areas called 1) ICT for Science, 2) ICT for Industry, and 3) ICT for Society is envisaged. For the list of interviews conducted with EC members, see Annex 3 of this report.
3 Comparative analysis: approaches to monitor and evaluate (ICT) research and innovation
The main aim of this part of the study is to describe the use of quantitative performance indicators in monitoring and evaluation systems, based on recent theoretical literature and available handbooks. The section highlights the tasks of performance monitoring systems and the types of indicators used, and provides a reference framework for devising performance monitoring concepts for EU-funded ICT research. In addition, it outlines the present monitoring system of FP7-ICT research in the light of international experiences.
3.1 Performance indicators in monitoring and evaluation systems
3.1.1 The scope of performance monitoring and the identification of programme performance indicators and evaluation questions
Monitoring can be defined as a continuous assessment of key programme functions organised internally by
programme management (or a monitoring unit) and carried out on an on‐going basis. It entails setting up a
data collection system for compiling key data on programme activities, participants, interim achievements, and
outputs. The resulting data from programme monitoring should enable the development of interim
performance metrics or “indicators” of programme progress, outputs, and outcomes. The data are also useful
for keeping a programme on track and for guiding interim corrections (in the form of providing information to
help programme managers make decisions to design or revise their programme, re‐direct existing R&D funds,
or allocate new funds). Data obtained from monitoring exercises are also a key parameter in ex‐post
evaluation studies that seek to identify the impacts of a programme:
“Good monitoring systems are supposed to collect all relevant data ‐ and only that ‐ and to document it, as far
as possible, in a straightforward, systematic and gender‐sensitive manner. For one thing, this set of data serves
the purpose of project controlling in scientific and financial terms; for another, it should also give evaluators an
appropriate insight into the respective project. This can considerably enhance the quality of the database of an
evaluation while avoiding the same set of data being collected twice. It will be up to funding agencies to
determine to what extent appropriate programme management information systems can be employed.”25
Performance monitoring requirements should be derived directly from the connection between programme
objectives, programme inputs, programme outputs and outcomes and the underlying evaluation parameters.
As performed in Chapter 2 for the FP7 ICT programme and ICT PSP programme, the development of logic
charts is a recommended starting point in planning evaluations and identifying underlying performance
monitoring indicators.
The logic charts for FP7 ICT and ICT PSP have shown that performance indicators may be devised for the
following evaluative levels:
Performance indicators related to the input/activity dimension of a programme including resources
and the programme design, implementation and management of the programme;
Performance indicators related to the output dimension of the programme including scientific
outputs and technological outputs;
25 Platform fteval (2005), Evaluation Standards in Research and Technology Policy, Vienna, 2005.
Performance indicators related to outcome dimension of a programme;
Performance indicators related to the impact dimension of a programme; and
Performance indicators related to structuring effects of a programme.
The different evaluative levels largely reflect the results chain of a programme and are augmented by indicators related to the structuring effects of a programme, as one main target of the European Framework Programmes is the advancement of the ERA, for which specific performance indicators need to be developed.26
Performance indicators along the results chain are supposed to "show results relative to what was planned" at each level of the chain (OECD-DAC 2001). For the development of performance indicators, the literature shows that evaluative questions linked to the programme logic are a key input for the creation of performance indicators (see Ruegg and Feller 2003, Kellogg Foundation 2004). In this respect, DG Budget's evaluation guide (2004) provides evaluation criteria which may be translated into the following overarching evaluative questions that can be applied to any interim and ex-post evaluation exercise of funding initiatives at European level:
Relevance: To what extent are the objectives of the intervention in line with the needs of the beneficiaries and/or the social, economic and environmental problems that the intervention aimed to address?
Effectiveness: To what extent do the activities/outputs/outcomes/long-term impacts of the intervention correspond with its objectives?
Efficiency: To what extent are the objectives of the intervention achieved at a reasonable cost?
Utility: To what extent do the outcomes and impacts of the intervention correspond with the needs, problems, and issues over and beyond those embodied in the strictly stated objectives?
Sustainability: To what extent are the impacts of the intervention likely to continue into the future in the absence of funding assistance?
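As a minimal illustration of how these criteria might be handled in a monitoring tool, for instance to tag each performance indicator with the criteria it informs, the sketch below stores the five criteria together with their overarching evaluative questions in a simple mapping. The structure and the function name are assumptions and are not prescribed by the DG Budget guide.

# Hedged sketch: the five EC evaluation criteria and their overarching evaluative questions,
# stored so that each performance indicator can later be tagged with the criteria it informs.
EVALUATION_CRITERIA = {
    "relevance": "To what extent are the objectives in line with the needs of beneficiaries and the problems addressed?",
    "effectiveness": "To what extent do activities/outputs/outcomes/long-term impacts correspond with the objectives?",
    "efficiency": "To what extent are the objectives achieved at a reasonable cost?",
    "utility": "To what extent do outcomes and impacts correspond with needs beyond the strictly stated objectives?",
    "sustainability": "To what extent are impacts likely to continue in the absence of funding assistance?",
}

def questions_for(indicator_tags: list) -> dict:
    """Return the evaluative questions an indicator speaks to, given its criterion tags."""
    return {tag: EVALUATION_CRITERIA[tag] for tag in indicator_tags if tag in EVALUATION_CRITERIA}

# Example: a publication indicator is assumed here to inform effectiveness only.
print(questions_for(["effectiveness"]))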
26 This procedure was suggested by Braun et al. (2009).
Figure 11: Objectives Hierarchy, Indicators and Evaluation
Source: Evaluating EU Activities: A Practical Guide for the Commission Services, DG Budget, Evaluation Unit, July 2004
The evaluation criteria set out by DG Budget for the evaluation of Community policy initiatives can be incorporated into the performance chain of a programme, so that each criterion is linked to a specific stage of that chain. This is illustrated in Figure 11.
The logic charts provided in Figures 6‐10 (Chapter 2) provide the necessary logical structure to develop a
hierarchy of evaluative questions, in which each box of the logic chart represents a potential measurement
area related to the relevance, the efficiency, and the effectiveness of the programme. Several studies have
already outlined evaluative questions related to the main evaluation criteria of efficiency, effectiveness and
relevance. For example, questions related to effectiveness could be detailed in the following way,
distinguishing between outputs, outcomes and impacts of the Framework Programme (Braun et al. 2009):
Have the immediate outputs been produced as specified in programme planning?
Have the outcomes been created as specified in the objectives for the FP in its key planning and
decision documents?
Has the FP made the desired contributions to the ultimate objectives as specified by Europe’s
(research) policy decision makers?
In the context of FP7, a core set of evaluative questions was outlined in the following manner (Polt et al. 2006):
Programme Rationale
o Objectives, rationale and intervention logic for the IST‐RTD Programme
o Policy Mix/Portfolio
Programme Implementation
o Has the implementation of the Programme been satisfactory?
o Has the implementation of the Programme by the Commission been satisfactory and has it
lessened the burden of the constituents?
o Were the activities carried out efficiently and were they cost‐effective?
o Did the activities constitute the best way of achieving the objectives set?
o Were the overall legal framework (including rules for participation and contracts), policy
instruments and the modalities for implementation clear, appropriate and effective?
o Were the levels of funding and other available resources adequate?
o Were the targeted industrial and research communities, including SMEs, able to respond
appropriately?
Programme Achievements
o What were the outputs, outcomes and impacts of the IST‐RTD Programme?
o Has the Programme affected the industrial organisation and behaviour of individual players in the
affected sectors? These could be reflected in:
achieving critical mass;
inducing participants to activities that would not have been carried out without the
Programme (input and behavioural additionality);
disseminating knowledge more efficiently;
integrating core organisations with more peripheral ones and integrating European organisations with global "knowledge hubs";
advancing regional innovativeness and entrepreneurship;
advancing ERA.
Answering all the evaluative questions relevant for an evaluation will require a combination of various evaluation methods. Thus, different methods may be used at different levels of data aggregation or to capture immediate and long-term impacts. The use of more than one method has advantages, as it allows for cross-checking the robustness of conclusions about the observed effects of the intervention, i.e. it permits triangulation (European Commission 2006). The most relevant analytical methods may be classified into qualitative approaches, including case studies, focus groups and interviews, expert judgements and peer reviews, and historical tracing, and quantitative approaches, including sociometric and social network analysis, bibliometric analysis, and econometric and statistical analyses.
The analytical methods used for evaluations all come with certain data requirements: some methods, like bibliometrics, social network analysis or econometric estimations, depend critically on the availability of large datasets and their (often time-consuming) exploitation. Other approaches, like case studies or panel-based evaluations (the chosen methodology for FP evaluations), are essentially based on expert judgement and thus depend less on the availability of indicators (Braun et al. 2009). As there is neither a single 'best practice method' for all types of evaluations nor a reason for a general preference for quantitative methods over more qualitative ones, indicators can fulfil two main functions in the evaluation process (Braun et al. 2009):
On a generic level, indicators are used as descriptors for programme inputs, outputs or impacts,
complementing or supporting peer review opinions. They provide basic information about
programme activities, and ensure the necessary overview and transparency over large, complex
programmes like the FPs, where no single expert panel can assess scientific production of all different
programme elements in the same depth. This basic characterisation function is central for a sound
evaluation, because it enables panels and evaluators to focus on the judgment aspects of the
evaluation. Therefore, a core set of indicators should ideally be produced already in the FP monitoring
and made available to evaluation panels.
On a more complex level, composite indicators and quantitative methods can address specific
evaluation questions, in most cases through a careful combination of basic data and indicators. For
this purpose, customised indicators must be designed and produced ad hoc for a specific evaluation
exercise, based on the identified specific evaluation questions and requirements.
In monitoring systems, both quantitative and qualitative data may be generated. Usable quantitative data
mainly stem from databases that provide basic information on projects and project participants. In addition,
quantitative data may also stem from specific surveys conducted on a regular basis while the projects are in
progress (i.e. annually, or at the beginning of a project, in the mid‐term of a project, at the end of a project).
Another opportunity for generating quantitative data is to include survey‐structured elements in project
review and reporting systems. Examples of this type of information are:
The quality of proposals and projects: quantitative ratings of, e.g., the project quality at the proposal
stage, the implementation phase and the results.
The specific objectives of the projects and their achievement: technological objectives, scientific objectives, the (socio-)economic objectives of the project, the expected time frame for the realisation of impact, and the risk associated with achieving the project objectives.
The specific objectives of the involved project participants.
In addition, monitoring systems may also collect qualitative information via project reviews and reports, interviews, and focus groups. The main challenge with this type of monitoring approach, which holds in particular for large programmes such as the European Framework Programmes, is that the results of the interviews must be synthesised and reported regularly and in a uniform manner to allow for their use in evaluation processes.
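To make the idea of survey-structured elements in project review and reporting more concrete, the sketch below outlines a record that a monitoring database could collect per project and per reporting stage. All field names and the rating scale are illustrative assumptions rather than the actual DG INFSO reporting format.

from dataclasses import dataclass, field
from typing import Optional

# Hedged sketch of a survey-structured monitoring record, collected per project and per reporting
# stage (proposal, mid-term, final). Field names and the 1-5 rating scale are assumptions.
@dataclass
class ProjectMonitoringRecord:
    project_id: str
    stage: str                                  # e.g. "proposal", "mid-term", "final"
    quality_rating: int                         # reviewer rating, e.g. 1 (poor) to 5 (excellent)
    technological_objectives: list = field(default_factory=list)
    scientific_objectives: list = field(default_factory=list)
    socio_economic_objectives: list = field(default_factory=list)
    expected_time_to_impact_years: Optional[float] = None
    risk_of_not_achieving_objectives: str = "medium"    # e.g. "low" / "medium" / "high"
    participant_objectives: dict = field(default_factory=dict)  # participant -> stated objective

# Example record at the mid-term review of a hypothetical project:
record = ProjectMonitoringRecord(
    project_id="ICT-2011-XYZ",
    stage="mid-term",
    quality_rating=4,
    technological_objectives=["interoperable sensor platform"],
    expected_time_to_impact_years=3.0,
)
print(record.stage, record.quality_rating)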
3.1.2 Designing good indicators
A number of methodological issues need to be considered when indicators are designed for the use of
performance measurements and evaluations. These issues concern the bandwidth or different levels of
indicators, the peculiarities of research activities in general, and the particularities of the organisation of
funding programmes.
Consider the bandwidth of indicators
When designing performance indicators, three different levels of indicators outlined below should be
considered. The underlying evaluation questions determine the extent to which each of these levels is
addressed in a specific evaluation. Questions related to broader policy goals – like the FP’s contribution to the
ERA development – typically require the use of macro‐level indicators, whereas evaluations of programme
efficiency will concentrate on programme‐level indicators (Braun et al 2009).
Micro level indicators cover the project level, including participating institutions and project teams. At this level, indicators specify and measure the inputs, outcomes and impacts of a programme. Management-related issues and their impacts on output can also be considered.
FP-participant level indicators (covering institutions, teams or authors) aim at identifying and measuring the profiles of participants, the likelihood that researchers and research groups will succeed in bringing about the required new knowledge, and the effects of the programme on its participants. This is a simpler approach in terms of indicators, provided one can unambiguously describe programme participation. However, the relationship between programme funding, participation and effects on participants (and even more so on non-participants) is more difficult to establish.
Meso level indicators cover the level of R&D programmes (i.e. the entire Framework Programme, the ICT-specific part of the FP, and the specific objectives at the challenge level). While these types of data are easily covered from an input point of view (budgets, number of projects in a field), it is a challenge to take into account specific outputs and outcomes at this level.
Macro level indicators refer to the general Community objectives as well as the general R&D policy objectives, operational guidelines for policy implementation and policy monitoring. Most of the time, macro level indicators are composite indicators aimed at characterising the (socio-)economic, scientific and technological production context, such as the contribution of a programme towards the ERA, and at characterising the progress achieved.
Examples for such macro‐level indicators are provided in the course of monitoring the i2010 initiative and the
Digital Agenda for Europe Initiative. The i2010 benchmarking framework27 proposes a conceptual framework
for collection of statistics on the information society as well as a list of core indicators to be used for
benchmarking in the future European Digital Agenda. It monitors the development of ICT through
consideration of a supply, use, and impact framework. The monitoring takes place via household and business
surveys conducted by EUROSTAT. The majority of the provided indicators can also be found in Europe’s Digital
Competitiveness Report 2010.28 The problem with macro level indicators is that progress towards certain targets can hardly be attributed solely to FP7-ICT or the ICT-PSP programmes, with their different funding instruments and thematic priorities targeting specific outcomes. Measurability and data limitations prevail.
Consider measurability and data limitations
Due to the nature of science and research activities, certain limitations exist as regards attempts to identify
and measure the outcomes and impacts of R&D investment measures (see Martin and Tang 2007):
First, there is no linear chain of causation linking inputs to research activities, technological
development and innovation as illustrated by a simple linear model of science impacts.
Second, innovation processes and technological development depend on a number of factors other than research inputs, such as the accumulated experience (tacit knowledge) of actors and non-technical inputs. This means that attribution problems arise in particular in connection with
measurement of socio‐economic benefits, because it is difficult to disentangle what fraction of a
specific economic or social benefit should be attributed to a set of research activities launched by a
programme.
Third, attribution problems for socio‐economic benefits also arise because innovations increasingly
draw on outputs from research and development in other countries, and it is virtually impossible to
relate economic and social benefits in a country to distinct investments in a programme or a country.
Fourth, the time from research to innovation and socio‐economic benefits may be very long.
Attempts to do an early assessment will capture only short‐term benefits from research and ignore
long‐term and potentially more substantial benefits from the research.
These principal limitations must be taken into account when designing evaluations and setting up a
performance monitoring system and performance indicators to be implemented while projects are in progress.
Simple measures, e.g. for productivity, will not acknowledge these distinct characteristics concerning the
impact chain of research activities. Due to the long time lags between research activities, outputs, outcomes
and impacts, Braun et al. (2009), among others, suggest that "for longer‐term and wider societal impacts, the
idea of measuring these directly has even to be questioned. Typically, only proxies can be provided which give a
very rough, indirect insight into progress achieved in areas like the innovation performance of European
industry, based on assumptions about the relationship between such progress and European research funding”.
27http://ec.europa.eu/information_society/eeurope/i2010/docs/benchmarking/benchmarking_digital_europe_2011-2015.pdf 28 http://ec.europa.eu/information_society/digital-agenda/documents/edcr.pdf
A performance indicator framework must therefore be checked for its validity with respect to the evaluative purposes and the questions to be answered. Indicators should not be used mechanically; they should adhere to certain properties that suit the needs of evaluators, programme management, those implementing the measures, and policy makers. For this purpose, Bach et al. (2006) propose that the following criteria be considered:
From the point of view of evaluators, the key point is that the indicators should be scientifically sound. This particularly means:
Explanatory power: they should be based on a clear explanatory framework, avoiding any
meaningless list of single indicators.
Normativity of indicators: it must be possible to associate some kind of reference to assess if the
indicator is showing “bad” or “good” results.
Coverage/Representativity: indicators must concern most of the selected cases to be studied.
Robustness: indicators should be sensitive only to the change in the facts that they are supposed to
measure.
Reliability: the likelihood of achieving the same results even if different people or organisations use the indicator.
Balance: a relevant combination of different types of indicators, enabling as comprehensive an understanding of the phenomena as possible from different points of view.
Comparability: there must be comparability of indicators used in different situations or for different
organisations.
For data providers and data collectors, the key point is the very possibility to handle the data used to feed the
indicators and avoid any useless burden for data providers already affected by “evaluation fatigue”. This
particularly means:
Availability of data: exploiting what exists and not reinventing indicators when one has already been
validated by relevant experts and information/statistical systems.
Limited number of indicators: not too difficult and time consuming to feed.
Freshness and time adequacy of information: it is important to choose the right time to extract the right information, i.e. in some cases data are available only a long time after their collection, while other data should be collected very quickly before they are "forgotten".
For policy‐makers and more generally for those who are using the results for designing/implementing
programmes, the key words are usefulness and being operational at all levels. This particularly means:
Ease‐of‐use: the system of indicators must be simple and easy to absorb, and easy to understand
without ambiguity by everyone who uses it.
Synthetic: the system of indicators should highlight the key elements in a limited number of indicators, while keeping the possibility of studying these key elements further with a range of more detailed indicators.
Comparability: the indicators should provide a way to compare results with other existing studies.
Transferability: referring to EU context, the indicators could potentially be used for analysis at
national level.
Relevance: the indicators have to measure phenomena that are directly relevant to the decision‐
making process for which the assessment is made.
The criteria for the selection of indicators provide guidance for elaborating a long list of indicators and for making a critical assessment of how to arrive at a key list of performance indicators. For the long list of indicators presented in Chapter 4, the following key features have been taken into account (a minimal data sketch follows the list):
The key evaluative issues: which of the EC evaluation criteria are covered by the
indicator/performance measure?
The impact channel: what is the indicator supposed to measure?
The FP7‐ICT instrument: is the indicator relevant for all different instruments of FP7‐ICT or just for
some specific instruments?
Availability of benchmarks: does the indicator allow for comparisons?
Assumptions: short discussion about pre‐requisites and use of the indicator.
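The sketch below shows one possible way to record each entry of such a long list, with one field per key feature named above. The field names and the example entry are illustrative assumptions and do not reproduce the actual long list of Chapter 4.

from dataclasses import dataclass

# Hedged sketch: one record per candidate indicator in the long list, with a field per key feature.
@dataclass
class IndicatorEntry:
    name: str
    evaluation_criteria: list      # EC evaluative issues covered, e.g. ["effectiveness", "efficiency"]
    impact_channel: str            # what the indicator is supposed to measure, e.g. "knowledge"
    instruments: list              # FP7-ICT instruments for which it is relevant, or ["all"]
    benchmark_available: bool      # does the indicator allow for comparisons?
    assumptions: str = ""          # prerequisites and caveats for using the indicator

# Illustrative (made-up) entry:
example = IndicatorEntry(
    name="Peer-reviewed publications per project",
    evaluation_criteria=["effectiveness"],
    impact_channel="knowledge",
    instruments=["all"],
    benchmark_available=True,
    assumptions="Reported in project final reports; subject to publication time lags.",
)
print(example.name, example.impact_channel)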
For the short list of indicators, the needs of the European Commission Services, in particular for the thematic
priorities, must be taken into account in order to ensure that the performance indicators will be accepted and
used in a meaningful manner.
3.2 The EU performance monitoring system
This section provides a brief description of the present EC‐ICT related monitoring system with primary focus on
FP7‐ICT RTD monitoring and the overall evaluation guidelines.
3.2.1 FP7 ICT monitoring
The monitoring and evaluation requirements for FP7 were set out in the ex-ante Impact Assessment of FP7, with some of the key details confirmed in the formal FP7 Decisions (European Parliament 2006). The implementation of FP7 and its Specific Programmes must be continually and systematically monitored.
In FP7, there is therefore more focus on outputs and impacts compared to FP6, including verifiable objectives and indicators; a higher-quality "evidence base"; more focus on "systemic" effects, notably on the research-innovation-competitiveness links and on "knowledge networks"; more attention to the EU "added value"; linked ex-ante and ex-post evaluations; and adequate resources for an expanded programme of evaluation studies.
The progress of FP7 is monitored in the FP7 Monitoring Reports.29 So far, three monitoring reports have been
published. They cover the implementation of the Framework Programme based on the FP7 monitoring system
designed as an internal management tool using a core set of performance indicators. First, the monitoring
reports provide a detailed factual analysis of the main elements of the overall implementation of FP7. Second,
a closer look at some of the elements of the Framework Programme that deserve a special focus is provided.
Special attention was given to the simplification of the FP7 application process as well as to the perception of simplification in FP7 by National Contact Points. Early achievements were also taken into account in the monitoring report.
The evaluation logic model presented in Chapter 2 shows that performance indicators may be devised for the
following evaluative domains: inputs, outputs, outcomes, impacts and structuring effects. As outlined in
“Evaluating EU Activities – A practical guide for the commission services” (DG Budget 2004), monitoring is to
29 The third FP7 monitoring report is available at: http://www.era.gv.at/space/11442/directory/14031/doc/20788.html
some extent limited to inputs, outputs, and results. The table below, which outlines the monitoring of FP7 ICT,
also focuses on:
Indicators related to the input/activity dimension of the programme that includes resources plus the
programme design, implementation and management of the programme. The input descriptions
cover issues related to projects and the performance of the programme management, e.g.
performance of the proposal evaluation and redress procedure.
Indicators related to the output dimension of the programme, including scientific outputs and
technological outputs.
Indicators related to the intermediate results dimension of the programme.
Table 3: The indicator system of FP7-ICT research
(Columns: activity description; indicators; means of verification; assumptions, e.g. impact channels)

Input
- Activity: Selection of thematic priorities. Indicators: impact analysis, portfolio of projects, recommendations from ISTAG. Means of verification: IPPA. Assumptions/impact channels: Relevance.
- Activity: Selection of projects. Indicators: ranked list of proposals and funding allocated to each proposal corresponding to the budget allocated for each objective/funding instrument. Means of verification: Evaluation Reports. Assumptions/impact channels: Relevance, Quality.
- Activity: Programme management and quality of on-going projects. Indicators: average results of the independent project review process by priority area; percentage of projects by priority area covered by reviews. Means of verification: data from the new reporting system. Assumptions/impact channels: Efficiency.
- Activity: Performance of proposal evaluation and redress procedure. Indicators: quality assessment of proposal evaluators; quality assessment of FP evaluation and other systems; time to grant; percentage of experts reimbursed timely. Means of verification: annual evaluators survey, CORDA, DG RTD. Assumptions/impact channels: Efficiency.
- Activity: FP7 activity. Indicators: total number of projects; average financial size of projects; projects by type of organisation, area, country and funding scheme. Means of verification: CORDA. Assumptions/impact channels: Effectiveness.
- Activity: SME participation. Indicators: percentage of SME grants. Means of verification: CORDA. Assumptions/impact channels: Economic, societal.

Output
- Activity: Research output. Indicators: number of articles published/accepted for publication in peer-reviewed journals; number of articles provided in open access (journals/repositories). Means of verification: project final report. Assumptions/impact channels: Knowledge, Effectiveness.
- Activity: Patents and IPR. Indicators: number of new patent applications ('priority filings'); number of Intellectual Property Rights applications made for trademark/registered design/other. Means of verification: project final report. Assumptions/impact channels: Knowledge, Effectiveness.

Outcomes
- Activity: Number and level of researchers (as a direct result of public and other funding). Indicators: number of men/women as scientific coordinator, work package leaders, experienced researchers (i.e. PhD holders), PhD students, other; number of additional researchers (in companies and universities) recruited specifically for the project. Means of verification: project final report. Assumptions/impact channels: Knowledge, human capital, Effectiveness.
- Activity: Achieving gender equality. Indicators: specific gender equality actions during the project; number of men/women as scientific coordinator, work package leaders, experienced researchers (i.e. PhD holders), PhD students, other. Means of verification: project final report. Assumptions/impact channels: Societal, Effectiveness.
- Activity: Sound ethical principles. Indicators: ethics review (and/or screening); specific issues covered (research on humans, research on human embryo/foetus, privacy, research on animals, developing countries, military use/terrorist abuse). Means of verification: project final report. Assumptions/impact channels: Societal, Effectiveness.
- Activity: Interdisciplinarity. Indicators: engagement with societal actors beyond the research community (citizens' panels/juries) or organised civil society (NGOs, patients' groups etc.); engagement with government/public bodies or policy makers (including international organisations). Means of verification: project final report. Assumptions/impact channels: Knowledge, societal, network, Effectiveness.
- Activity: Synergies with science education. Indicators: involvement with students and/or school pupils (e.g. open days, participation in science festivals and events, prizes/competitions or joint projects); development of science education material (e.g. kits, websites, explanatory booklets, DVDs). Means of verification: project final report. Assumptions/impact channels: Knowledge, societal, Effectiveness.

Impact
- Activity: Employment and spin-offs. Indicators: employment effect resulting directly from the project in full-time equivalents (FTE = one person working full time for a year); number of spin-off companies created/planned as a direct result of the project. Means of verification: project final report. Assumptions/impact channels: Economic, Effectiveness.
Regarding the intermediate results, it should be noted that international cooperation activities are listed in the
third monitoring report. However, the monitoring results collected via the DG INFSO project final report primarily focus on synergies with science education and on interdisciplinarity, i.e. engagement with societal actors beyond the research community, with citizens (citizens' panels/juries) or organised civil society (NGOs, patients' groups, etc.), and with government/public bodies or policy makers (including international organisations). There are no particular references to the promotion of cross-disciplinary research in the overall
objectives of FP7. However, such an objective is clearly stated in the annex of the decision for the Cooperation
Programme30: ‘Special attention will be paid to ensuring there is effective coordination between the thematic
areas and to priority scientific areas which cut across themes, such as forestry research, cultural heritage,
marine sciences and technologies. Multi‐disciplinarity will be encouraged by joint cross‐thematic approaches to
research and technology subjects relevant to more than one theme, with joint calls being an important inter‐
thematic form of cooperation’.
Another dimension in the table above is SME participation. For FP7, the Commission is required to pay special
attention to the EU Contribution to SMEs within this particular programme in accordance with the target
introduced by the Parliament and the Council in the Decision establishing FP731:“…Particular attention should
be paid to ensuring the adequate participation of SMEs, in particular knowledge‐intensive SME in transnational
cooperation. Concrete measures, including support actions to facilitate SME participation, will be taken
throughout the ‘Cooperation’ part of the programme in the framework of a strategy to be developed under
each theme. These strategies will be accompanied by quantitative and qualitative monitoring against the
objectives set. The aim will be to enable at least 15 % of the funding available under the ‘Cooperation’ part of
the programme to go to SMEs.” Reports analysing SME participation in FP7 are produced on a biannual basis, comprising the overall SME participation rates across the themes of the Cooperation Programme and the EU contribution to SMEs in FP7 Grant Agreements (GAs).
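As an illustration of how the quantitative monitoring against this target could be operationalised, the sketch below computes the share of 'Cooperation' funding going to SMEs from a hypothetical list of grant agreement records and compares it with the 15 % aim. The record fields are assumptions and do not reflect the actual CORDA data model.

# Hedged sketch: SME share of the EU contribution under the 'Cooperation' programme vs. the 15 % aim.
# The grant agreement record fields (eu_contribution, beneficiary_type) are illustrative assumptions.
SME_TARGET = 0.15

def sme_funding_share(grant_agreements: list) -> float:
    """Share of the total EU contribution that goes to beneficiaries flagged as SMEs."""
    total = sum(ga["eu_contribution"] for ga in grant_agreements)
    sme = sum(ga["eu_contribution"] for ga in grant_agreements if ga.get("beneficiary_type") == "SME")
    return sme / total if total else 0.0

grants = [
    {"eu_contribution": 2.0e6, "beneficiary_type": "research"},
    {"eu_contribution": 0.5e6, "beneficiary_type": "SME"},
    {"eu_contribution": 1.0e6, "beneficiary_type": "industry"},
]
share = sme_funding_share(grants)
print(f"SME share: {share:.1%} (aim: {SME_TARGET:.0%})")   # SME share: 14.3% (aim: 15%)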
Regarding the means of verification, the primary instrument for analysis in the preparation of the Work Programme is the Integrated Programme Portfolio Analysis Report (IPPA). According to the intervention logic of FP7-ICT, the research areas are divided into specific challenges. The portfolio analyses monitor projects from each challenge as well as from each instrument, i.e. Integrated Projects (IPs), STREPs and Collaborative Projects, as well as Networks of Excellence (NoEs) and CSAs. The analyses also break down beneficiaries by group (industry, research, SMEs) and by geography.
Monitoring of the on-going projects and their intermediate results relies on a periodic report comprising a summary of activities, deliverables produced, milestones, publications and patents. After the project, the information from the periodic reports is consolidated in the final report, which also includes the main results and potential impact. Finally, a detailed questionnaire collects information regarding workforce statistics, gender statistics, and educational and societal implications.
The output measurement of FP7 ICT research projects is based on publications and patents. The beneficiaries
report on publications and patents achieved during the projects.
30 OJ L 412, 30.12.2006, p. 007.
31 SME Participation in FP7 - Mid Term Report, DG Research, 2010, http://ec.europa.eu/transparency/regdoc/rep/eims/RTD/137033/SME%20Mid%20Term%20FP7%20-%20final%20-%20October%202010.pdf
3.2.2 Evaluation of FP7 ICT
Evaluation is “judgement of interventions according to their results, impacts and needs they aim to satisfy” (DG
Budget 2004). The key notion in this definition is that it is a process that culminates in a judgement (or
assessment) of an intervention. Moreover, the focus of evaluation is primarily on the needs, results and
impacts of an intervention. The main reasons for carrying out evaluations are:
to contribute to the design of interventions, including providing input for setting political priorities;
to assist in an efficient allocation of resources;
to improve the quality of the intervention; and
to report on the achievements of the intervention (i.e. accountability).
This leads to the following steps in the evaluation of FP7: an evidence-based interim evaluation, preceded by a progress report, and, two years following the completion of FP7, an external evaluation of its rationale, implementation and achievements conducted by independent experts. Altogether, this includes at least five evaluation exercises (ex-ante, mid-term and final), including a specific progress report before the interim evaluation.
The ex-ante evaluation was incorporated into an Impact Assessment of the Commission’s proposal for FP7 in 2005. It was proposed that the new FP be built on a robust hierarchy of logically interdependent outcome objectives with a limited number of realistic and appropriate indicators. When defining outcome objectives and indicators, priority should be given to establishing separate components:
The starting point or baseline for the change;
The vision for, or expected development of, the area concerned;
The area for the intervention; and
The role of the Community.
Indicators should be both quantitative and qualitative, and progressive, so that they show the path or direction of expected change and allow progress to be monitored.
Mid-term evaluations follow the ex-post evaluation of the previous FP two years after its end (2008) and are supported by a progress report before the interim evaluation (before 2010). The mid-term evaluations draw on a coherent set of independent studies, the interim evaluation (science panels) and other evaluation activities carried out over the lifetime of the FP, as listed above.
The final evaluation is set two years after completion of the FP (2015), supported by specific studies, the
interim evaluation, and other evaluation activities carried out throughout the FP period.
Following this definition, the key question is: by which means does the monitoring system provide insight into the relevance, efficiency, effectiveness and sustainability of EU-ICT funding?
Relevance is defined as the extent to which an intervention’s objectives are pertinent to the needs and issues to be addressed. The question of relevance arises primarily in ex-ante and interim evaluations. As Table 3 shows, the relevance of EU-ICT funding is monitored and analysed in the evaluation reports and through overall data on the success rate, defined as the share of proposals retained for negotiations; the success rate gives an indication of the relevance of the ICT calls and proposals. On the one hand, low success rates may be explained by annual programmes having a broad definition of priorities (calling for many different
types of projects) or by the emergence of new and vibrant research topics. On the other hand, high success
rates may be due to the novelty of some specific sub‐programmes, poor communication of funding
possibilities, or very narrow calls. These causes need to be properly diagnosed for them to be redressed, as
over‐ or under‐subscription indicates that there is a problem in the balance between resources and demand.
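To illustrate how such a success rate could be computed from call-level data, the following minimal Python sketch derives the share of eligible proposals retained for negotiation per theme and instrument. The record structure and field names are invented for illustration and do not reflect the actual EC data warehouse.

from collections import defaultdict

# Hypothetical proposal records: (theme, instrument, retained_for_negotiation)
proposals = [
    ("Challenge 1", "STREP", True),
    ("Challenge 1", "STREP", False),
    ("Challenge 1", "IP", False),
    ("Challenge 2", "NoE", True),
    ("Challenge 2", "STREP", False),
]

def success_rates(records):
    """Share of eligible proposals retained for negotiation, per (theme, instrument)."""
    submitted = defaultdict(int)
    retained = defaultdict(int)
    for theme, instrument, is_retained in records:
        key = (theme, instrument)
        submitted[key] += 1
        if is_retained:
            retained[key] += 1
    return {key: retained[key] / submitted[key] for key in submitted}

for (theme, instrument), rate in sorted(success_rates(proposals).items()):
    print(f"{theme} / {instrument}: {rate:.0%}")

Tracking these rates per call over time would support the diagnosis of over- or under-subscription discussed above.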
The efficiency of ICT R&D funding can be measured as delivering policy objectives at minimum cost. The
monitoring system contains data regarding proposals, applicants and success rates by funding scheme,
applicant activity type and nationality. They are based on (i) eligible proposal and participants’ data submitted
to single‐stage calls for proposals and (ii) second‐stage eligible proposal and participants data for FP7 calls for
proposals involving two‐stage proposal submission and evaluation procedures.
Effectiveness can be defined as the extent to which objectives are achieved. Patents as an output are often perceived as the tangible measure of R&D effectiveness; the argument is that the more patents filed, the more productive the R&D department. However, the ratio of patents per R&D euro also reflects the activity of a company's lawyers and administrators, and not only that of its engineers and product developers. Furthermore, the objective of R&D funding is not just R&D, but the outcomes of R&D in terms of competitiveness, employment and marketable products; as such, the patent is only a proxy for these objectives.
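As a purely illustrative sketch of this proxy character, a ratio such as patent applications per million euro of funding could be computed as follows (figures and field names are invented); the caveat above still applies, since the ratio reflects patenting behaviour as much as research productivity.

# Illustrative project records: EC funding (EUR) and patent applications reported
projects = [
    {"project": "A", "funding_eur": 3_200_000, "patent_applications": 2},
    {"project": "B", "funding_eur": 1_500_000, "patent_applications": 0},
    {"project": "C", "funding_eur": 6_800_000, "patent_applications": 5},
]

total_funding = sum(p["funding_eur"] for p in projects)
total_patents = sum(p["patent_applications"] for p in projects)

# Patents per million euro of funding: a proxy for effectiveness, not a measure of it
patents_per_meur = total_patents / (total_funding / 1_000_000)
print(f"Patent applications per million EUR of funding: {patents_per_meur:.2f}")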
Sustainability of funding and the extent to which positive effects are likely to last after an intervention has
terminated are only an issue in the ex‐post evaluations. The monitoring system does not provide relevant data
for this task.
Taking stock of the current FP7 in the interim evaluation from 2010, the evaluation comes at a point when the
Framework Programme has reached its mid‐point in calendar terms, even though many of the projects funded
in its early years are still in progress and most of the funds have not been allocated. Some of the projects
initiated in the latter years of FP7 are expected to continue for as long as five years after its formal end. This
means that the final impact can only be evaluated after 2017‐2018. Therefore, only tentative conclusions can
be reached about the outcome of FP7 and the impact it will have on Europe’s science, economy and societies.
Even though seeing the impact of FP7 is difficult, the interim evaluation states that ‘…an interim evaluation has
a vital role to play in taking stock, in putting forward proposals for the remaining years of FP7 and in drawing
out lessons that can feed into the planning – already in its early stages – of a successor programme.’
3.3 Trends and indicators in global good practice: results from a comparative cross-country analysis
This section presents the results of a cross‐country analysis of monitoring practices in six countries, namely
Australia, Canada, Denmark, the Netherlands, the UK and the US. The countries have been selected in
discussions between the research team and the client. In addition, Australia was recommended by a
monitoring expert who was contacted for the UK examples. The information for the case studies was collected
by desk and internet research and complemented by direct communication with persons involved in the
design or evaluation of the cases.
The main aim of pursuing the international case studies was to contribute to the development of possible
performance measures for the successor of FP7 ICT and CIP ICT‐PSP. The project team aimed at variety to
highlight differences in the scope of performance monitoring, and illustrate main trends in monitoring of
research programmes. Some of the cases (e.g. Australia, the NSF) go beyond performance monitoring of R&D programmes, but they have been considered relevant because they include performance measures applied at the level of research groups and institutions or provide rich information on performance measures for a funding institution.
Annex 2 contains the individual case studies. The cases include concrete examples of the main objectives of
the programmes/policy initiative and the use and usefulness of the monitoring/evaluation system applied.
The comparative cross‐country analysis in this section focuses exclusively on the main purpose of creating a
long list of indicators. Therefore, the four level terminology presented in the logic charts in Chapter 2 was
applied: Inputs, Outputs, Outcomes, and Impacts. In the individual case studies, specificities of performance
measurement systems were considered32. In addition, the indicators have been structured along relevant
impact channels. Impact channels are mechanisms via which research programmes lead to outcome and
impact. Given the objectives of our study, the relevant impact channels are: knowledge, human capital,
networks, economic (and societal)33.
3.3.1 Increased knowledge
Not surprisingly, given the nature of R&D and innovation, all countries use indicators to monitor increased
knowledge. Two key output indicators are patents and publications. However, the examples illustrate how
patents can be monitored and analysed in more detail by adding patent applications (and patents issued) and
license agreements (and associated revenues).
Table 4 presents several more indicators, not just for patents and publications. This provides inspiration for improving the current monitoring system for FP7-ICT, because the existing set of indicators has a strong focus on patents and publications to capture increased knowledge. This improvement would be relevant for all
instruments and activities (‘from FET to CIP ICT‐PSP’), because increased knowledge is always a crucial impact
channel that is often combined with other impact channels (e.g. networks and human capital). Hence,
indicators for increased knowledge are highly relevant in the set of indicators for monitoring ICT R&D and
innovation.
Table 4 Results from the international cases - Knowledge indicators
(Columns: activity description; indicators; means of verification; country example; coverage in existing FP7-ICT monitoring.)

Outputs
R&D level and quality. Indicators: % of supported projects deemed additional; % of supported projects which met technical objectives; innovative quality of projects (peer review ratings; standard survey question about 'how innovative was the project', by firm standards, by regional/UK standards, by EU standards, by global standards). Means of verification: not specified. Country example: UK. Coverage: to some extent.
Research output. Indicators: number of publications: books, book chapters, journal articles, conference papers (peer reviewed), non-traditional research output (e.g. events), presentations (at events or self-organised events) and prototypes. Means of verification: universities provide data bi-annually. Country example: Australia + UK + Netherlands. Coverage: yes, but less specified.
Patents. Indicators: number of patents benchmarked to EU, US and other regions; number of license agreements (and revenues); number of patent applications (and patents issued). Means of verification: universities provide data bi-annually; statistical reports (NCE) + annual progress reports (CERC). Country example: Australia (NCE + CERC) + UK + Canada. Coverage: yes.

Outcomes
Business capabilities. Indicators: number (and/or %) of supported businesses achieving a significant increase in: increased/improved use of ICT; number (and/or %) of supported businesses achieving a significant increase in: innovation capability, technology absorptive capacity. Means of verification: not specified. Country example: UK. Coverage: no.
Rating of research groups. Indicators: rating for each research group, based on research outputs and other indicators; each institution is scored on a five-point scale, and the scores are used for the entire field of research and benchmarked to world standards. Means of verification: universities. Country example: Australia.
International knowledge position. Indicators: international ranking of the institutes and companies involved. Means of verification: bibliometric analyses/citation analysis; interviews only; publicly available secondary data, consultancy reports etc. Country example: Netherlands. Coverage: no.
Leading-edge research findings that are relevant to the needs of the user sector (industry, government, nongovernmental organizations, and others) and Canada's socio-economic development. Indicators: extent to which policies and practices of the user sector have been influenced by leading-edge research findings; expert opinion by mid-term expert panel on quality and relevance of findings; number and nature of national and international prizes and awards to NCE researchers for NCE research; number of invitations as guest speakers at major international conferences and congresses. Means of verification: annual progress reports, mid-term reports. Country example: Canada (NCE). Coverage: no.

32 For instance, Canada differentiates between three levels of outcomes, which we have taken together as one level: outcomes. Furthermore, only a few countries use indicators for Inputs.
33 In Annex 2, it is made explicit that one indicator (such as number of PhDs) can be relevant for several impact channels. In the cross-country tables below, we have positioned the indicator in the impact channel for which it is most relevant.
In terms of the European Commission’s evaluative dimensions, the knowledge indicators are most relevant for
assessing the effectiveness of research programmes. Are FP7‐ICT and ICT PSP effective in stimulating more,
new, leading‐edge knowledge? Monitoring knowledge is also needed for an analysis of the efficiency of
research programmes, e.g. the ratio between research budgets (public and private) and output indicators such
as patents, publications and the position of European actors in international rankings. There is a more indirect
link between knowledge indicators and the evaluative dimensions of relevance and utility. For instance, the
number of publications in (peer‐) reviewed journals is an indication of the relevance of the research
programme (its objectives, focus and output) for academia, and the number of patents (especially license
agreements) is an indicator for economic relevance (and utility).
3.3.2 Human capital
A substantial number of indicators were identified for the impact channel of human capital. Although human
capital is a crucial input for any R&D and innovation process, most countries position human capital indicators
at the outcome level. Public programmes are meant to provide incentives to increase human capital. It is not
only a matter of counting PhDs and other researchers. The attraction of world‐class researchers or the esteem
of researchers are both indicators that go beyond number and level as they also include quality and
recognition. Outcome indicators for human capital are not covered by the current monitoring system of FP7‐
ICT. Similar to increased knowledge, human capital is a relevant impact channel in all parts of FP7‐ICT and CIP
ICT‐PSP. However, the relevance is higher for some instruments and programmes (e.g. in STREP and FET) than
in other instruments and programmes (e.g. CIP ICT‐PSP).
Table 5 Results from the international cases - Human capital
(Columns: activity description; indicators; means of verification; country example; coverage in existing FP7-ICT monitoring.)

Outcome
Attract and retain top talents/business leaders. Indicators: number of top researchers and business leaders (domestic/international) participating in centres; importance of the centre to come/stay in Canada. Means of verification: annual progress report. Country example: Canada (NCE). Coverage: no.
Esteem count – reflecting research recognition. Indicators: editorial positions; fellowships. Means of verification: universities provide data + expert panel. Country example: Australia. Coverage: no.
Attraction/retention of world class researchers (essential for economic and societal development). Indicators: number of post-doc researchers working on NCE projects + area of work; number of research personnel attracted to Canada + area of work. Means of verification: statistical reports. Country example: Canada (NCE). Coverage: no.
Development of a pool of highly qualified personnel (in areas essential for economic and social development). Indicators: number of post-doc researchers, graduates, undergraduates and others working on NCE projects; number of students/trainees employed in skilled jobs (by sector and area of work). Means of verification: statistical reports. Country example: Canada (NCE) + DK. Coverage: to some extent.
Create centres with a strong research orientation that yield significant public benefits. Indicators: number of centres with a strong research orientation; significant public benefits. Means of verification: annual progress report. Country example: Canada (CECR). Coverage: no.
Number and level of researchers involved. Indicators: number of FTEs and staffing profiles in five academic levels. Means of verification: universities provide data bi-annually. Country example: Australia + Netherlands + DK. Coverage: yes.
Increase the number of highly educated employees (PhDs) in companies. Indicators: number of PhDs in companies. Means of verification: survey and interviews (annually for large companies, samples for small and medium-sized enterprises). Country example: DK. Coverage: no.
Monitoring human capital is relevant for assessing the effectiveness of FP7‐ICT and ICT PSP. As mentioned in
Chapter 2, human capital is an important impact channel in most instruments and challenges of FP7‐ICT and
ICT PSP. Although indicators for human capital are often less extensive and less quantitative than indicators for knowledge, they are needed to assess the efficiency of research programmes, e.g. by exploring the ratio between public support and the number of PhD students, post docs, fellows, etc. Indicators for human capital
(and for networks) are highly relevant for the evaluative dimension of sustainability. For instance, PhDs and
world‐class researchers may continue to apply their skills and contacts in subsequent research projects as well
as in businesses and public sectors. Indicators for human capital allow for a prediction of the outcomes and
impact of a specific research programme as well as the potential for future outcomes and impact, possibly in
different organisations and sectors.
3.3.3 Networks
Networks are an impact channel closely linked to increased knowledge (e.g., networks facilitate knowledge
transfer) that may lead to relatively sustainable impacts (e.g. partners that continue their collaboration after
an FP7 project) and this is crucial for the European Research Area (e.g. less fragmentation).
In Canada, we looked at two large programmes that stimulate networks. Network indicators were also
identified in R&D programmes in Denmark, the Netherlands and the UK. Network indicators for outcomes cover different types of networks (e.g. national/international, industry/science) as well as links between different research programmes and sectors (e.g. different granting agencies). Given the objective of the ERA, networks
are relevant for all instruments and programmes in FP7‐ICT and CIP ICT‐PSP. This is especially the case for
Networks of Excellence. However, the current set of indicators for FP7‐ICT only has one indicator for networks
(covering the interdisciplinarity and engagement with societal actors) and uses open, qualitative questions to
cover other types of networking.
Table 6 Results from international cases - Networks indicators
(Columns: activity description; indicators; means of verification; country example; coverage in existing FP7-ICT monitoring.)

Inputs
Agreements with networks. Indicators: nature and number of agreements. Means of verification: funding and network agreements. Country example: Canada (NCE + CERC) + Netherlands. Coverage: yes.
Funded networks. Indicators: number of funded networks + resources allocated. Means of verification: selection committee. Country example: Canada (NCE + CERC) + Netherlands. Coverage: yes.

Outputs
Multi-regional interdisciplinary research teams. Indicators: distribution of researchers by province, institution, discipline and sector. Means of verification: statistical reports. Country example: Canada (NCE). Coverage: yes.
Multidisciplinary and integrated approach to program delivery. Indicators: share of each granting agency. Means of verification: NCE annual report. Country example: Canada (NCE). Coverage: no.

Outcomes
R&D collaboration and effects on market processes, networks, supply chain relationships. Indicators: number and share (pct.) of supported collaborations which had not collaborated on R&D previously/with each other/plan to collaborate again. Country example: UK. Coverage: no.
Links between industry and science base. Indicators: number and share (pct.) of supported collaborations which had not collaborated on R&D previously/with each other/plan to collaborate again. Country example: UK. Coverage: no.
Increased networking and collaboration, national/international. Indicators: number of joint authorship publications (by sector/country). Means of verification: statistical reports (NCE) + annual progress reports (CERC). Country example: Canada (NCE + CERC). Coverage: no.
Active cooperation. Indicators: collaboration between programme organisations (database, Senternovem); collaborations between organisations in R&D projects funded by EUREKA (database, Senternovem); collaboration between organisations funded by FP (database, Senternovem); collaboration in research coordination activities (official websites of relevant European ETPs and projects). Country example: Netherlands. Coverage: to some extent.
Acceleration of exchange within networks and use of knowledge by organisations to harness it for economic and social development. Indicators: list of new/improved products, services and processes from networks; number of transfer agreements; number of license agreements; number and magnitude of international agreements. Means of verification: statistical reports (NCE) + annual progress reports (CERC); annual progress reports and mid-term reports. Country example: Canada (NCE + CERC). Coverage: no.
Domestic collaboration and spill-over to other sectors and regions. Indicators: number of domestic collaborations by firm, sector and region. Means of verification: annual progress reports. Country example: Canada (CERC). Coverage: no.
Training that promotes multidisciplinary and multisectoral research approaches and encourages trainees to consider the economic, social, environmental and ethical implications of their work. Indicators: number of graduate students working on NCE projects and list of degrees/disciplines; sectors of partners involved; others not counted above working on NCE projects and discipline of work. Means of verification: statistical reports. Country example: Canada (NCE). Coverage: no.
Increased cooperation and knowledge-sharing between R&D and companies. Indicators: cooperation between companies and universities. Means of verification: survey and interviews (annually for large companies, samples for small and medium-sized enterprises). Country example: DK. Coverage: no.
Increased networking and collaboration, national/international. Indicators: distribution of researchers by province, institution, discipline, sector and country. Means of verification: statistical reports. Country example: Canada (NCE). Coverage: no.
Draw on existing strength, infrastructure, networks and funding sources to enhance capacity. Indicators: number of partnerships and collaborations. Means of verification: annual progress reports. Country example: Canada (CERC). Coverage: no.
Indicators for networks – and subsequent network analyses – are crucial to assess the effectiveness of
Networks of Excellence, Integrated Projects and other FP7‐instruments, and for thematic networks in ICT PSP.
To some extent, this information is also relevant for assessing efficiency. This is not straightforward, because
market transactions, location, different research and networking programmes and other factors influence the
many different types of linkages between the various types of actors. There is no ratio between an X amount
of funding and a new or extended network. As mentioned in section 3.3.2, indicators for networks are highly
relevant for the evaluative dimension of sustainability. For instance, networks may persist, evolve over time,
and support the creation and dissemination of knowledge in the same field of technology or in adjacent fields.
Network indicators are also used for assessing the relevance of research programmes. The number of
research, business and societal actors that join a network is an indication of the relevance of the network and
– hence – the programmes that support the networks in question.
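As an illustration of how consortium composition data could feed a simple network indicator, the following sketch derives the share of collaboration links that bridge industry and the science base from project partner lists. Organisation names, types and the data structure are invented for illustration.

from itertools import combinations

# Hypothetical consortia: each project lists (organisation, type) pairs
consortia = {
    "P1": [("UniA", "research"), ("FirmX", "industry"), ("FirmY", "industry")],
    "P2": [("UniA", "research"), ("UniB", "research"), ("FirmX", "industry")],
}

links = set()             # undirected collaboration links across all projects
industry_science = set()  # links that bridge industry and the science base

for partners in consortia.values():
    for (org1, type1), (org2, type2) in combinations(partners, 2):
        link = frozenset((org1, org2))
        links.add(link)
        if {type1, type2} == {"industry", "research"}:
            industry_science.add(link)

share = len(industry_science) / len(links)
print(f"Industry-science links: {len(industry_science)} of {len(links)} ({share:.0%})")

More elaborate network analyses (e.g. persistence of links over successive programmes) would build on the same kind of link extraction.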
3.3.4 Economic indicators
The indicators for economic mechanisms primarily concern outcomes and impact. Examples of economic
mechanisms are spin‐offs and new firms that can accelerate innovation while at the same time being an
economic impact themselves. Economic mechanisms are intertwined with other mechanisms. For example,
spin-offs can be partly due to increased knowledge and networks. Outcome indicators include research commercialisation and business performance; examples include research commercialisation income, gross value added or increased productivity. The current FP7 monitoring system does not include any outcome
indicators in this respect. At the impact level, economic mechanisms are to some extent included, e.g.
employment effects and spin‐off companies. The relevance of economic mechanisms ‐ and associated
indicators ‐ is especially high for FP7‐ICT instruments and programmes that focus on industry and applied R&D
(e.g. Integrated Projects) and the innovation centred CIP ICT‐PSP.
Table 7 Results from international cases – Economic indicators
(Columns: activity description; indicators; means of verification; country example; coverage in existing FP7-ICT monitoring.)

Outcomes
Level of R&D expenditures. Indicators: wage costs of personnel; total R&D expenditures (and increased R&D expenditures) before taxes. Means of verification: survey, interviews, national statistics. Country example: Netherlands + UK. Coverage: no.
Research commercialisation. Indicators: research commercialisation income (Australian dollars per year). Means of verification: data provided by universities. Country example: Australia.
Acceleration of exchange within networks and use of knowledge by organisations to harness it for economic and social development. Indicators: number of new companies and existing companies developed or maintained. Means of verification: statistical reports. Country example: Canada (NCE). Coverage: no.
Create centres with a strong commercial orientation expected to become self-sufficient. Indicators: number of self-sufficient, commercially oriented centres; revenues of commercialisation; significant public benefits created. Means of verification: annual progress reports. Country example: Canada (CERC). Coverage: no.
Indicators of business performance outcomes and effects on firms. Indicators: sales; gross value added; productivity (GVA or sales per employee); exports; start-up survival rates. Means of verification: not specified. Country example: UK. Coverage: no.
Indicators of business performance and productivity. Indicators: increased productivity per employee; increased productivity in companies; added value in companies. Means of verification: survey and interviews (annually for large companies, samples for small and medium-sized enterprises). Country example: DK. Coverage: no.
Innovation. Indicators: introduced new [improved] products or processes; significant increase in the % of turnover derived from products less than 18 months old. Means of verification: not specified. Country example: UK. Coverage: no.
Market access and creation/development. Indicators: entrance to new markets (product, geographical); gained access to significant new contacts, new customers etc. Means of verification: not specified. Country example: UK. Coverage: no.

Impacts
Deliver economic, societal and environmental benefits. Indicators: significant public benefits created. Means of verification: annual progress report. Country example: Canada (CERC). Coverage: to some extent.
National and international reputation. Indicators: foreign direct investments in programme-related sectors; number of foreign companies located in the Netherlands. Means of verification: national statistics, survey and interviews. Country example: Netherlands. Coverage: no.
Production, employment and export added. Indicators: production/added value; employment; exports in related sectors. Means of verification: national statistics, survey and interviews. Country example: Netherlands. Coverage: no.
Increased productivity and economic growth. Indicators: number of jobs created (outside the network); companies created in new/underdeveloped sectors; case studies of the impact of network innovations in existing industries. Means of verification: annual statistical reports; progress reports. Country example: Canada (NCE). Coverage: to some extent.
Indicators for economic impact channels can be used to assess the effectiveness and relevance of research programmes; this mainly concerns relevance for business. For instance, to what extent do
research programmes – and the knowledge that is created – lead to new firms, new products, and increased productivity? Assessing efficiency is more challenging. On the one hand, as Table 7 shows, there are no output indicators; only when outcome and impact data become available – e.g. at the end of a research project – is it possible to set them against the inputs and activities. On the other hand, once the outcomes and impacts emerge and are measured, this information can be used for assessing sustainability. Caveats include that it is difficult to monitor or predict whether new firms will survive, whether new products will be successful, etc.
3.4 Conclusions
The comparative analysis in this chapter has shown that performance monitoring systems need to provide a
continuous assessment of programme functions organised internally by programme management or a
monitoring unit. Through its data gathering approaches monitoring needs to enable the development of
interim performance measures of programme progress, outputs and outcomes.
Conclusions from the comparative analysis relate to i) the connection between the intervention logic and
performance, ii) the number of indicators for the different impact channels, iii) the connection between
indicators and the evaluation grid of DG Budget, and iv) the present monitoring system of EU funded ICT
research and innovation.
3.4.1 Intervention logics and performance monitoring
The international case studies have illustrated that clear intervention logics provide a good baseline for
developing performance measures. However, it is worth noting that even if a consistent intervention logic is established, it can be difficult to identify the right indicators: in one of the case studies, for example, a programme goal was to set international standards, but the indicators chosen were not suitable for international benchmarking.
Across the analysed countries, the extent of monitoring varies with the complexity and duration of the programmes. The Canadian programme, for example, is characterised by project durations
of 5 or more years. This means that on‐going monitoring is required and programme administrators rely on
detailed annual statistics, which have to be provided by the project participants. This can be burdensome and
may only be justified in the case of long project durations. In general, the case studies show the importance of balancing the need for detailed monitoring against the administrative burden on project participants.
The Danish case does not provide specific information on ICT relevant indicators. Instead, it shows that a generic set of performance measures can be established which is valid across the Danish innovation programmes. Interestingly, the foreseen performance measures already cover all impact channels and are closely tied to the performance chain of R&D programmes. Furthermore, it has to be stressed that the development of performance measures is embedded in a series of government actions which try to develop standards for the scope and implementation of evaluations and impact assessments.
3.4.2 Impact channels
A substantial number of indicators are available for monitoring increased knowledge, networks, human capital
and economic mechanisms. More indicators have been identified at the level of outputs and outcomes than at
the level of impact. This reflects that it takes time for an impact to emerge, which makes it difficult ‐ if not
impossible ‐ to monitor (expected) impacts while projects are on‐going.
For many indicators, weaknesses regarding the possibilities for benchmarking and international comparison became evident. Many indicators count the number of outputs (publications, patents, research networks, etc.) but fail to provide information on the quality of the research performed, the effectiveness of programme implementation, and the sustainability of a programme. In addition, a series of indicators is only available in a qualitative manner. Hence, when establishing an indicator system, care needs to be taken that qualitative information is collected in a standardised manner so that it is ready for use in analysis.
3.4.3 Connection between indicators and the evaluation grid of DG Budget
Another conclusion can be drawn regarding the connection between the indicators and the evaluation grid of DG Budget, in particular for relevance, effectiveness and efficiency. Relevance is a key criterion for ex ante and interim evaluations in FP7, and the evaluation reports monitor the relevance aspect. In this respect, indicators include the success rate, i.e. the share of submitted proposals retained for negotiations. The international comparison shows that the current FP7 monitoring covers the issue of relevance well compared to other funding schemes. Other relevance indicators for interim evaluations could include a number of quality issues concerning the merit review process or a number of transparency issues, but these are only linked indirectly to the question of whether the intervention’s objectives are relevant to needs and issues.
Efficiency is a question of delivering policy objectives at minimum cost. The current monitoring data provide
limited options for assessing efficiency. The international comparison provides a few additional examples of
indicators related to efficiency. When studying individual proposals, a timeframe for ‘time to decision’ could
be further explored on the input side. However, the overall conclusion from the international comparison is, first, that efficiency indicators require clearly defined objectives against which achievement can be assessed. Second, the question is whether these achievements have been reached at minimum cost. Again, the success rate is an important indicator that is already included in the current monitoring system. This leads to the conclusion that further examples of efficiency indicators are necessary; the subject will be explored further in the next chapter.
The third criterion of particular interest is effectiveness and the extent to which objectives are achieved.
Output indicators such as patents and publications are often used in this context; as objectives relate to outcomes, other relevant indicators are, for instance, competitiveness, employment and marketable products. Patents are only proxies for these outcomes, but other funding schemes do include marketable products – commercialisation – even though these require a longer timeframe for monitoring. Our comparison of other funding schemes shows that the commercialisation of R&D is used as an indicator in several programmes. However, its relevance in FP7 largely depends on the instrument (STREPs, NoEs, IPs) and the challenge concerned.
3.4.4 The present monitoring system
The analysis of the existing system has revealed that at present very few indicators tackle distinct impact
channels that are crucial in the intervention logic of ICT research and innovation. Moreover, a series of
indicators is only available in a qualitative manner. The international cases have highlighted gaps in the current
monitoring system of EU funded ICT research and provide useful pointers on how to fill these gaps and improve the current monitoring system.
The current monitoring system concentrates on inputs, while the outputs and outcomes of EU funded research and innovation activities are neglected, except for some tangible knowledge outputs (publications) and, to some extent, the human capital and economic impact channels.
The cross‐country analysis has identified indicators that can also be used to monitor networks, e.g. between
research and industry, between sectors, and between countries in the European Research Area (and its global
partners). In addition, there appears to be room for improving the indicators for human capital and economic
channels, e.g. quantitative indicators for PhDs (in the project, in firms, etc.) and for spin‐off firms.
4 Beyond patents and publications? Performance monitoring indicators for EU funded ICT research and innovation
Based on the findings from the international experiences and the analysis of the present monitoring system, this chapter provides a broad list of possible performance indicators for inputs, outputs, outcomes and impacts of EU funded ICT research and innovation. This allows linkages to be drawn to the logic charts of the FP7-ICT instruments. In addition, a section on management indicators is
provided, because efficient programme implementation may also contribute to the overall performance of a
programme.
In the output, outcome and impact sections, the relevant impact channels serve as a structuring element. The impact channels indicate the relevant dimensions through which the programmes unfold their impact.
For the creation of the long list of performance indicators, the findings from the international experiences and findings from additional sources are taken into account34. In each section, the relation of the indicators to the intervention logic of the EU funded ICT activities is considered, as well as the relation to the broader evaluation criteria of the European Commission. In addition, the respective data gathering requirements and the availability of international benchmarks are discussed.
4.1 Input indicators
Input indicators provide the necessary information on the programme including the patterns of participation
(applied projects/granted projects) and the patterns of funding in the different thematic priorities and the
different instruments.
Collecting data and creating indicators on the patterns of participation and funding are crucial for assessing the
relevance, efficiency, and effectiveness of funding. Furthermore, a comprehensive collection and presentation
of data concerning programme funding and programme participation is also a prerequisite for mid‐term and
ex‐post evaluative endeavours in which specific indicators are necessary for answering questions related to the
outcomes and impacts of the programme (i.e. the structuring effects of the programme in the light of advancing the ERA). Table 8 lists the main performance criteria established in the logic charts for FP7-ICT and ICT-PSP.
Table 8: Summary of input/activity dimensions in the intervention logic of FP7-ICT and ICT-PSP
FP7-ICT STREPs: well balanced collaborative projects (size, actors, funding); involve relevant industrial actors; relevant research organisations and researchers; incorporate training and knowledge sharing.
FP7-ICT IPs: well balanced collaborative projects (size, actors, funding); involve relevant industrial actors; involve relevant research organisations and researchers; establish joint research and activity plan; incorporate training and knowledge sharing.
FP7-ICT NoEs: well balanced collaborative projects (size, actors, funding); European Research networks in specific themes; involve relevant industrial actors; involve relevant research organisations and researchers; implementation of joint programme activities; joint training and knowledge sharing.
ICT-PSP: well balanced participation of EU Member States and public authorities; involve key industrial actors; involve key research organisations.

34 Major sources have been the indicator study of Braun et al. (2009), the Community Innovation Survey, and the Report on R&D in ICT in the European Union (EC - IPTS 2011).
The table shows that performance indicators for inputs have to provide information to the effect that there is
a well‐balanced set of collaborative projects, that relevant industrial and academic research organisations are
involved, and that certain measures (training and knowledge sharing, implementation of joint activities) are
considered.
In the present FP7 monitoring system, participation and funding patterns are monitored at the level of
programmes, calls for proposals, instruments (funding schemes), by type of organisations (including
participation of SMEs), and country (differentiating between EU Member States and participation of third
countries). The data collected via the internal data warehouse are presented in the annual FP7 Monitoring
Reports. With regard to EU funded ICT research, the “ Stream report”35 also provides specific information on
the level of challenges and strategic objectives. Data and indicators include:
Number of participations per project and instrument (average/total);
Funding per project and instrument (average/total);
Funding per participant and instrument (average);
% of funding per legal status of organisation (per instrument and per challenge);
Top 50 participants in the ICT programme, top 20 participants per challenge and respective funding
Participation per country (% of participation and % of funding);
SME participation per instrument, challenge, strategic objective and country;
Project and budget distribution by challenge and strategic objective.
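A minimal aggregation sketch of the kind of breakdowns listed above is given below; the participant-level record structure, the legal-status codes and the figures are invented for illustration and are not taken from the EC data warehouse.

from collections import defaultdict

# Hypothetical participant-level extract (one row per participation)
participations = [
    {"org_type": "HES", "is_sme": False, "funding_eur": 400_000},
    {"org_type": "PRC", "is_sme": True,  "funding_eur": 250_000},
    {"org_type": "PRC", "is_sme": False, "funding_eur": 900_000},
    {"org_type": "REC", "is_sme": False, "funding_eur": 600_000},
]

total = sum(p["funding_eur"] for p in participations)

# % of funding per legal status of organisation
funding_by_type = defaultdict(float)
for p in participations:
    funding_by_type[p["org_type"]] += p["funding_eur"]

for org_type, amount in sorted(funding_by_type.items()):
    print(f"{org_type}: {amount / total:.0%} of EU contribution")

# SME share of EU contribution, e.g. for comparison with the 15 % target mentioned above
sme_share = sum(p["funding_eur"] for p in participations if p["is_sme"]) / total
print(f"SME share of EU contribution: {sme_share:.0%}")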
A breakdown per organisation type, instrument and thematic area, country and region is possible for all data.
The indicators used in FP7‐ICT monitoring show that data regarding participation and funding are well
monitored and published regularly and consistently. This information alone allows checking against some targets for the different thematic priorities related to the portfolio of programme participants.
Nevertheless, a number of additional input indicators can be used to take into account the structuring effects
of the FP7‐ICT programme and involvement of key actors. These indicators are important because all logic
charts of the FP instruments emphasise the role of research consortia in which relevant industrial actors and research organisations should take part.
According to Braun et al. (2009), input data should also provide information on whether funding has an impact
on research structures and networks at a European level and has synergies with national and private funding
sources. It also serves as a background for interpretation of results of FP funding (outputs and outcomes).
35 For the purpose of the project the Commission services provided the 2009 Integrated Programme Portfolio Analysis report 2009 and its successor, the Stream Report 2010, to the project team.
4.1.1 Structuring effects – Well balanced portfolio of collaborative projects
Demographic input indicators on consortia participation and their success rates can be used as proxies for
measuring structuring effects and formation of clusters within ERA. In this respect, the following participation
indicators can be implemented by exploiting EC administrative databases:
Frequency of participation: As mentioned in the interviews with the Commission Services responsible for certain thematic priorities, one key achievement of EU funded ICT research activities is that the programmes allow the EU to build up and structure research agendas in a European dimension. This should also be reflected in the participation of institutions. Indicators monitoring the frequency of participation allow the identification of a core of participants, structuring effects and dropouts.
Repetitive cooperation: Another dimension of structuring effects is the patterns of collaboration. The
actual number of collaborative proposals between institutions provides information on whether
sustainable research networks (and critical masses of involved actors) are evolving.
New entrants: Ratio of new entrants participating in proposals. This indicator shows whether the FP7‐
ICT programme is open to new participants.
New coordinators: Ratio of new coordinators participating in proposals.
New collaborations: Ratio of partners who have collaborated for the first time in an ICT related FP
project.
While these data relate to the frequency of participation, the probability of success should also be taken into account. Success rates, e.g. of coordinators, of different types of organisations, of countries and of individual institutions, allow for identifying poles of excellence in distinct fields, but are also an issue as regards undesirable selection biases. The following indicators on success rates can be identified:
Success rates of different themes and instruments: higher rates of oversubscription allow identification of areas of high attractiveness and potentially stronger European competences.
Success rates by type of institution and country: provide an indication of attractiveness for different types of actors, and of the research capacities of countries in a field. It is worth noting that the frequency of applications could also relate to the existence or non-existence of national funding opportunities.
Success rates of individual institutions: long-term success rates of institutions show specific competencies in certain fields.
A breakdown per institution, instrument, thematic challenge, and region is possible for all data. For some of
the indicators, data may only be utilised if long time series are constructed. This reflects the evolution of ICT
research funding in the European Union.
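A minimal sketch of how two of the participation indicators above (the new-entrant ratio and repetitive cooperation) could be derived from call-by-call consortium lists is given below; organisation names and the data structure are invented for illustration.

from itertools import combinations
from collections import Counter

# Hypothetical participants per call, in chronological order; each inner set is one consortium
calls = {
    "Call 1": [{"UniA", "FirmX", "UniB"}, {"FirmY", "UniA"}],
    "Call 2": [{"UniA", "FirmX"}, {"InstZ", "UniB", "FirmY"}],
}

seen = set()
pair_counts = Counter()

for call, consortia in calls.items():
    participants = set().union(*consortia)
    new_entrants = participants - seen        # organisations not seen in any earlier call
    ratio = len(new_entrants) / len(participants)
    print(f"{call}: {ratio:.0%} new entrants")
    seen |= participants
    for consortium in consortia:
        for pair in combinations(sorted(consortium), 2):
            pair_counts[pair] += 1

# Repetitive cooperation: partner pairs that appear in more than one proposal/project
repeat_pairs = [pair for pair, n in pair_counts.items() if n > 1]
print("Pairs cooperating repeatedly:", repeat_pairs)

As noted above, a workable definition of a core group of partners would be needed before such counts could be interpreted at programme level.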
4.1.2 Inter-relation with national research funding
For most Member States, FP7-ICT is not the only source of research funding in the field of ICT, and monitoring of FP funding should therefore also take into account its interrelationship with national research funding, including potential complementarities and substitutions. Braun et al. (2009) suggest that, for public research laboratories in particular, which depend largely on project-based research funding, the question of selecting between the different possible funding sources is of utmost importance and may have a profound impact on the trajectory of these laboratories. Therefore, the study suggests that the following funding indicators should be taken into account:
a specialisation index of FP funding vs. national project funding per sector of performance and for
each country;
correlation indices between national funding instruments and FP funding;
more detailed charts looking at the breakdown of funding per sector for national project funding and
EU funding for each country; and
At a micro level, detailed matrices of funding profiles of laboratories against the different funding
instruments.
Although these types of indicators may be very good at elaborating on the relevance of EU funding, combining EU funding and national funding data at a disaggregated ICT level is at present well beyond established routines. Data limitations at the EU and national levels are considerable because joint definitions and data gathering routines remain to be established. So far, only experimental exercises at OECD level have been conducted; these exercises try to measure the extent and scope of research project funding. At a micro (firm/institutional) level, data can only be gathered via extensive use of surveys.
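One plausible operationalisation of the specialisation index mentioned above is sketched below: the share of FP funding received by a sector of performance in a given country, relative to the share the same sector receives from national project funding. This is an assumption for illustration and not necessarily the definition used by Braun et al. (2009); the figures are invented.

# One plausible operationalisation of a specialisation index per sector of performance:
# the FP funding share of a sector relative to its national project funding share
# (an index above 1 would indicate relative specialisation of FP funding in that sector).

fp_funding = {"higher education": 120.0, "business": 60.0, "government labs": 20.0}          # MEUR, invented
national_funding = {"higher education": 300.0, "business": 250.0, "government labs": 150.0}  # MEUR, invented

fp_total = sum(fp_funding.values())
nat_total = sum(national_funding.values())

for sector in fp_funding:
    fp_share = fp_funding[sector] / fp_total
    nat_share = national_funding[sector] / nat_total
    print(f"{sector}: specialisation index = {fp_share / nat_share:.2f}")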
4.1.3 Involve relevant industrial actors and research organisations
Another important issue of participation and funding is attracting the most relevant industrial actors and
research organisations (in terms of achieving the desired goals of FP7‐ICT).
Output data, such as counts and statistical analyses of publications, conference appearances, patents and other forms of IPRs, provide information on whether the participants have achieved the desired results. However, an overall performance framework should take into account the previous/continuous performance of participating organisations. For this purpose, the following types of data at firm/organisation level, which can for example be used to measure the additionality of R&D subsidies, can be retrieved:
General data characterising the organisation: annual turnover of the firm (last three years), cash flow (last three years), exports (last three years), number of employees (last three years), year of foundation, area of economic activity (NACE code), location, type of organisation (already applied in the EC data system).
Research and innovation data: R&D expenditures (last three years), R&D personnel (last three years), cooperation with other enterprises/public institutions, number of patents applied for and patents granted in the last three years, number of publications published in the last three years, number of PhD/Master theses in the last three years, public funding of research and innovation.
Data gathering in this respect would allow the inputs provided by the Commission to be positioned against the overall inputs and the research and innovation activities actually taking place in the participating organisations. Hence, at the level of participating organisations, the following series of input indicators could be elaborated (a computational sketch follows the list):
Number and share of R&D personnel/researchers financed by the FP7‐ICT participation;
Total R&D expenditures and share of R&D expenditures financed by FP7‐ICT participation;
Number of PhDs/Master students commencing (future output) and share of total students enrolled;
Ratio of FP7‐ICT funding to other public funding sources of companies; and
Ratio of Commission support to total project volume announced.
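A minimal sketch of how the first two ratios above could be derived from combined survey and administrative data is given below; the record structure and figures are invented for illustration.

# Hypothetical survey/administrative record for one participating organisation
org = {
    "rd_personnel_fte": 48.0,          # total R&D personnel (FTE)
    "fp7_financed_fte": 6.0,           # FTE financed through FP7-ICT participation
    "rd_expenditure_eur": 5_400_000,   # total annual R&D expenditure
    "fp7_contribution_eur": 450_000,   # annual EC contribution received
}

personnel_share = org["fp7_financed_fte"] / org["rd_personnel_fte"]
expenditure_share = org["fp7_contribution_eur"] / org["rd_expenditure_eur"]

print(f"Share of R&D personnel financed by FP7-ICT: {personnel_share:.1%}")
print(f"Share of R&D expenditure financed by FP7-ICT: {expenditure_share:.1%}")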
Another important dimension for addressing relevant industrial and academic organisations is the project
selection process. The main purpose of project selection processes is to ensure that the best projects are selected, thereby serving the purpose and effectiveness of the programme in an optimum manner. Hence, the
effectiveness criterion is closely linked to the project selection process, which usually includes different forms
of expert review processes.
For the purpose of programme management, it should be ensured that the selection process is 1) fair (there
must be no bias (e.g. personal bias, insider bias, dominant group bias)); 2) reliable (different experts with the
same scientific background and quality should come to similar opinions about a project); and it should be 3)
valid externally and internally (the selection process should guarantee optimum results in terms of quality of
outputs (external validity) and should reflect the set of evaluative criteria which has been established for the
selection process (internal validity)). Of course, the external validity can only be assessed when projects are
finished, but the absence of bias and reliability may be considered right away in the selection process.
In order to allow the selection process itself to be subject to evaluation, monitoring systems can gather two types of information (a sketch of how such review data could be analysed is given after the list):
Characterisation of reviewers: nationality, scientific merit, age, gender, institutional affiliation.
A quantitative characterisation of review results for the different performance dimensions, which
typically could include: the scientific relevance of the proposal, the novelty of the proposal, the
appropriateness of methods, the appropriateness of costs, the quality of the foreseen collaborations,
the quality of the scientific personnel, and the overall quality.
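As an illustration of how such quantitative review data could be used to check the reliability of the selection process, the following sketch flags proposals where reviewer scores diverge strongly; the scoring scale, threshold and data structure are invented for illustration.

from statistics import mean, stdev

# Hypothetical review scores (0-5) per proposal, one entry per reviewer
scores = {
    "Proposal A": [4.5, 4.0, 4.5],
    "Proposal B": [2.0, 4.5, 3.0],   # wide spread: candidate for moderation/consensus discussion
}

for proposal, marks in scores.items():
    spread = stdev(marks)  # rough reliability check: do reviewers broadly agree?
    flag = "review" if spread > 1.0 else "ok"
    print(f"{proposal}: mean={mean(marks):.2f}, spread={spread:.2f} -> {flag}")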
4.1.4 List of input indicators
The discussion of possible input indicators has shown that the majority of input indicators must stem from administrative data held by the European Commission and from data that need to be retrieved directly from the participating organisations in FP7-ICT and the ICT PSP programme. Table 9 presents a summary of input and activity indicators that have been identified as relevant for FP7-ICT and the ICT PSP programme.
Table 9: Input Indicators
Proposed Indicator Key Evaluative Issues
Impact Channel
InstrumentChallenge
Availability of Benchmarks
Present Availability Sources
Assumptions
Project characteristics: ‐ Average number of participations ‐ Average project funding ‐ Average funding per participant ‐ % of funding per legal status of organisation
‐ Top 50 participants in the ICT programme ‐ Participation per country ‐ SME participation ‐ Project and budget distribution by challenge and strategic objective: differentiation by legal unit (enterprises, higher education institutes, government R&D organisations, private non‐profit organisations, public administration)
‐ Relevance ‐ Effectiveness
‐ Economic ‐ Human resources
‐ Networks
All
‐ Different themes and instruments
‐ Potentially planned vs. achieved
‐ Yes ‐ EC administrative system
‐ Core input indicators, which provide basic facts upon funded projects. The indicators are comparatively well covered in the StReAM reports.
‐ Additional emphasis could be put on the different participating organisation types. Also large companies, higher education institutions, non‐profit research organisations, public administration, and other should be portrayed because one of the differentiating factors of the different funding instruments concerns their profiles of participant types.
‐ The degree to which the funding instruments have attracted their expected participants is an important factor of relevance and effectiveness.
‐ Frequency of participation of individual institutions
‐ Relevance ‐ Effectiveness
‐ Networks ‐ Economic
All ‐ Different themes and instruments
‐ No ‐ EC administrative system
‐ Indication of ability to attract a core set of organisations
‐ Repetitive cooperation: Nr. of collaborative projects with same partners
‐ Effectiveness ‐ Sustainability
‐ Networks All ‐ Different themes and instruments
‐ No ‐ partly EC administrative system
‐ Survey
‐ Stability of co‐operation; ability to attract a core set of organisations
‐ Requires a definition of a nucleus of partners, because it is unlikely that all the same partners apply for a series of projects, although core partners may be the same.
‐ Operationalisation is difficult via EC administrative system because knowledge about core partners may not exist.
‐ New entrants: % of new entrants submitting proposals
‐ Effectiveness ‐ Networks All ‐ Different themes and instruments
‐ No ‐ EC administrative system
‐ Ability to attract new organisations – outreach of FP7‐ICT
‐ New coordinators: % of new coordinators in proposals
‐ Effectiveness ‐ Networks All ‐ Different themes and instruments
‐ No ‐ EC administrative systems
‐ Ability to attract new organisations – outreach of FP7‐ICT
‐ New collaborations: % of partners who have collaborated for the first time in an ICT related FP project
‐ Effectiveness ‐ Networks All ‐ Different themes and instruments
‐ No ‐ EC administrative system
‐ Ability to create new research networks
‐ Nr. and share of R&D personnel/researchers financed by the FP7‐ICT participation
‐ Effectiveness, Relevance
‐ Human Resources
All ‐ Different themes and instruments
‐ No ‐ EC administrative system
‐ Relevant for potential outputs and outcomes
‐ Total R&D expenditures of firm and share of R&D expenditures financed by FP7‐ICT participation
‐ Relevance, Effectiveness
‐ Economic All ‐ Different themes and instruments
‐ No ‐ Survey and EC administrative system
‐ Relevant for leverage effects and additionality
‐ Nr. of PhDs/Master students commencing (future output) and share of total students enrolled
‐ Effectiveness, Relevance
‐ Human Resources
All ‐ Different themes and instruments
‐ No ‐ Survey and EC administrative system
‐ Relevant for potential outputs and outcomes
‐ Ratio of FP7‐ICT funding to other public funding sources of companies
‐ Relevance, Efficiency
‐ Knowledge All ‐ Different themes and instruments
‐ No ‐ Survey and EC administrative system
‐ Have relevant priorities been addressed by FP7‐ICT?
‐ Ratio of commission support to total project volume announced
‐ Effectiveness ‐ Efficiency
‐ Economic All ‐ Different themes and instruments
‐ No ‐ EC administrative system
‐ First indication of ability of the FP project to leverage private funds, although triggered by financing rules and prone to creative accounting
‐ General data characterising the organisation:
‐ Annual Turnover of the firm (last three years)
‐ Cash flow (last three years) ‐ Exports (last three years) ‐ Number of employees (last three years),
‐ Effectiveness ‐ Efficiency
‐ Economic All
‐Different themes and instruments
‐ Sectors (NACE classification)
‐ Regions
‐ No ‐ EC administrative system
‐ Eurostat
‐ Relevant data for impact assessments and reference framework for outcomes and impacts
‐ Year of foundation ‐ Area of economic activity (NACE code), location
‐ Research and innovation data: ‐ R&D expenditures (last three years) ‐ R&D personnel (last three years) ‐ Co‐operation with other enterprises/public institutions
‐ Nr. of patent applications and patents granted in the last three years
‐ Nr. of publications published in the last three years
‐ Nr. of PhD/Master theses in the last three years
‐ Amount of public funding of research and innovation
‐ Effectiveness, Efficiency
‐ Knowledge ‐ Human Resources
‐ Economic
All
‐Different themes and instruments
‐ Sectors (NACE classification)
‐ Regions
‐ No ‐ EC administrative system
‐ Eurostat
‐ Relevant data for impact assessments and reference framework for outcomes and impacts
‐ Success rates of different themes and instruments
‐ Relevance, Efficiency
‐ Knowledge All ‐ Different themes and instruments
‐ No ‐EC administrative system
‐ Attractiveness of research area; indication of European competencies
‐ Success rates by type of institution and country
‐ Efficiency, Sustainability
‐ Knowledge ‐ All ‐ Different themes and instruments
‐ EC administrative system
‐ Attractiveness for different types of actors, research capacities
‐ Existence of national funding
‐ Average results of the proposal evaluation process
‐ Quality ‐ Effectiveness
‐ Knowledge ‐ Networks ‐ Human Resources
All ‐ Different themes and instruments
‐ No
‐ Provides information on potential impact of the portfolio of projects
‐ Appropriateness of scientific and formal requirements
‐ Relevance ‐ Quality
‐ Human resources
‐ Networks ‐ Economic
All ‐ Different themes and instruments
‐ No ‐ Participant survey
‐ Indication of the relevance and quality of project selection and project performance criteria
4.2 Output indicators
The primary goal for output indicators is to provide precise information on how the programme is performing
and show progress towards meeting programme objectives. Measuring and understanding the output is
crucial for a subsequent analysis of relevance, effectiveness and efficiency. Relevant indicators addressing the
performance of the programme should potentially answer the following questions:
To what extent is the programme reaching its objectives?
What are the immediate results of the programme?
Output indicators should tell us how the programme is performing. Output indicators are often confused with outcome and impact indicators, but there is a difference:
Output indicators demonstrate the work of the project and show progress towards meeting
objectives.
Outcome indicators demonstrate changes that take place because of work and show progress
towards meeting specific aims.
Impact indicators demonstrate long‐term change(s) relating to the overall aim or mission of the
programme.
The international comparison revealed that the number of output indicators used in research programmes is
limited and that they are often of the same type: various kinds of publications and patents are used to assess
different programmes with different purposes. On the one hand, for performance
measurements in particular, outcomes are the preferred reference point, because they refer more closely to
the intervention rationale of a programme. On the other hand, measuring output is more feasible than
measuring outcome and assessing impact. Outcomes and impact cannot always be attributed to the policy
intervention. They can be the result of several factors, many of which are outside a policymaker or a
programme’s control. Outcomes reflect changes that take place only after outputs have been produced.
Hence, outputs give a better idea of the current situation. Outcome/impact indicators can be more difficult
and more costly to collect.
The usual output indicators are related to technological and scientific knowledge creation. This knowledge
creation has both tangible and non‐tangible outputs. The most common tangible outputs are publications and
patents, whereas non‐tangible outputs include new theories, new processes, and newly identified problems or scientific
challenges.
In the following sections, an overview of the output indicators resulting from the intervention logic is provided.
Then, manifest output indicators for the impact channels of the research programmes (i.e. knowledge indicators,
network indicators, and human capital indicators) and for programme implementation efficiency are discussed.
4.2.1 FP7 and ICT PSP outputs – Results from the intervention logic analysis
In the following, each of the specific instruments (STREPS, IPs, NoEs and FET) is discussed in terms of its
intervention logic, objectives and outputs. Each instrument varies in scope and has specific output requirements.
The resulting gross list provides a number of alternative output indicators for the specific instruments.
Specific Targeted Research Projects (STREPS)
STREPS are focused on narrowly defined, research‐driven objectives, and the outputs related to these objectives
are therefore more specifically defined. Table 10 shows objectives and outputs for STREPS.
Table 10: STREPS objectives and outputs
Operational objectives:
‐ Increase technological excellence of EU ICT industry
‐ Increase quality of research
‐ Reinforce European coordination and pooling of resources
‐ Research on themes of major societal needs
Outputs:
‐ Critical mass of projects in specific areas
‐ Increased European collaboration
‐ Patents, IPRs, product and process innovation
‐ Publications as new codified scientific knowledge
The critical mass of projects, patents and publications are all quantifiable and measurable outputs that can be
monitored via the current set‐up. The challenge is to monitor the actual quality of the research. The number of
publications is used as a proxy for the quality of research. However, citation counts and the impact factors of the
publications are more precise and can furthermore be used in benchmarking the research. With regard to the
objective of reinforcing European coordination and pooling of resources, the output is perceived as increased
European collaboration. Since the specific scope of STREPS is not to create large new networks but
cooperation between a limited number of actors, the relevant indicator should focus on the actual results of
the collaboration. Moreover, the available data on the number of STREPS allow analysis of development trends
concerning STREPS over time.
Integrated Projects (IPs)
Compared with STREPS, IPs have a broader scope and also include objectives related to increased human
capital and a critical mass of actors and collaborations.
Table 11: IPs objectives and outputs
Operational objectives:
‐ Increase technological excellence of EU ICT industry
‐ Increase quality of research
‐ Reinforce European coordination and pooling of resources
‐ Research on themes of major societal needs
‐ Ensure training, dissemination and knowledge management
Outputs:
‐ Pooled resources and critical mass of projects in certain themes
‐ Critical mass of actors and collaboration
‐ Patents, IPRs, product and process innovation
‐ Publications as new codified scientific knowledge
‐ Increased human capital
‐ Dissemination of content
Regarding the critical mass of actors, the international comparison of performance indicators suggests that the
set of network indicators could include more precise indicators that measure whether partners have
collaborated previously or have received funding previously. This would indicate not only the collaboration itself
but also the increase in European collaboration and the attainment of a critical mass of actors and collaborations.
Another IP objective is to ensure training and, to that end, increased human capital is a relevant output indicator.
Networks of Excellence (NoEs)
The overall objective of NoEs is to pool resources to overcome fragmentation of European research capacities
and establish long‐term cooperation between European research organisations. These objectives result in a
different set of relevant output indicators, as described in Table 12.
Table 12: NoEs objectives and outputs
Operational objectives:
‐ Integration of research activities in a field of strategic EU importance
‐ Long‐term cooperation of European research organisations
‐ Strengthen scientific excellence
‐ Overcome fragmentation of research capacities
Outputs:
‐ Research networks in areas of strategic importance
‐ Integrated research portfolio of network of participants
‐ Pooled resources of network partners
‐ Cluster of scientific research results
‐ Visibility of research results
‐ Trained researchers
To obtain precise indicators, the first objective, integration of research activities in a field of strategic EU
importance, evidently requires that the strategic fields in question are sufficiently defined. Another important
objective is the long‐term cooperation between research organisations, but the long‐term dimension is not
reflected in the output indicators; a possible indicator would be the number of recurrent collaborations. Related
to long‐term cooperation between European research organisations is the objective of overcoming fragmented
research capacities. The relevant output indicators here are the cluster of research results and the pooled
resources of network partners. To this could be added the geographical distribution of actors, in order to
monitor which countries/regions and actors are particularly strong in the European research community.
An interesting conclusion with regard to the above‐mentioned instruments is that none of these instruments
has operational objectives related to commercialisation of R&D. With regard to STREPS, there is an output
indicator related to products and process innovations and outcomes related to new products, processes and
technology services. The share and value of commercialisation of research are not included, as this is seen as a
long‐term strategic objective, which may only occur after projects have been finalised.
Annual Work Programmes for ICT research
The ICT Work Programme for FP7 2011‐12 is divided into eight challenges and a scheme for Future and
Emerging Technologies (FET). Each of the eight challenges has a set of defined research activities and
subsequent outputs and outcomes (cf. Section 2.2.3). These distinct and detailed outputs can be translated
into performance indicators.
Table 13 summarises these outputs across the relevant impact channels (i.e. knowledge, network and human
capital). Common for the first category is that the Work Programme focuses on tangible knowledge outputs
(new technology, methodology, components, algorithms, etc.). However, apart from publications and all forms
of IPRs, no quantifiable indicators can be benchmarked across the Work Programmes or R&D programmes. For
intangible outputs, there is a similar challenge of creating sound indicators. Only entry and exit surveys of
programme participants can provide information on which types of knowledge results are expected to stem from
the projects and to what extent these have been achieved.
The network and human capital indicators are comparable to indicators in the other instruments.
Table 13: Summary of outputs from ICT Work Programmes
Outputs by impact channel:
Knowledge:
‐ New technologies
‐ New methodologies
‐ New modelling, simulation, design
‐ New components
‐ New algorithms
‐ New systems analysis
‐ New IPRs applied
‐ New theories and concepts
Network:
‐ Multi‐disciplinary cooperation
‐ Coordination with national and regional programmes
‐ Bringing together research communities and commercial organisations
‐ Dissemination
Human capital:
‐ Training of human resources
The FET scheme differs from the other challenges in the sense that FET is aimed at R&D exploring
unconventional ideas and risky scientific paradigms for industrial research. The objectives are therefore to
explore foundational new areas that lead to technological breakthroughs.
Table 14: Summary of outputs from Work Programmes: FET
Operational objectives:
‐ Explore foundational new knowledge
‐ Purpose‐driven research with future impact on industrial R&D
‐ Ensure international collaboration
‐ Ensure synergies and cross‐fertilisation between different disciplines
Outputs:
‐ New scientific knowledge
‐ Scientific/technological breakthroughs
‐ New/changed collaboration
‐ Trained researchers
Even though the FET scheme differs in its objectives from the challenges in the annual Work Programme, its
outputs face the same difficulties in establishing measurable indicators related to new technology and scientific
knowledge. The two other outputs, new/changed collaboration and trained researchers, are again similar to
those of the other instruments.
Community Innovation Programme (CIP) and ICT Policy Support Programme (ICT PSP)
The overarching aim of ICT PSP is to contribute to the enhancement of competitiveness and innovation
capacity in the EU, the advancement of the knowledge society and sustainable development based on
balanced economic growth. ICT PSP does not cover research activities; its objectives cover increased ICT
uptake in the public sector and in areas of high societal relevance, reducing fragmentation of markets, and
increasing the interoperability of public services.
Table 15: CIP ICT PSP – Operational objectives and outputs from Work Programmes
Operational objectives:
‐ Increase ICT uptake in the public sector in areas of societal relevance
‐ Reduce fragmentation of markets
‐ Increase interoperability
Outputs:
‐ Industry and authorities across Europe brought together
‐ Interoperability across Member States demonstrated
‐ Operational pilot services demonstrating significant potential
‐ Alignment of national and regional initiatives allowing Europe‐wide fertilisation
Compared to the FP7 instruments, the objectives and outputs of ICT PSP are not related to R&D but to the use
of ICT technology and to alignment and interoperability across Europe. From a monitoring point of view, the
objectives are measurable and quantifiable, i.e. an increase in the uptake of ICT in the public sector can both be
monitored and benchmarked. Measuring increased interoperability is more challenging, as it requires a precise
definition of the ICT solutions for which interoperability is assessed. Turning to the output side of Table 15,
interoperability is included but ICT uptake is not covered. As described in Section 2.2.6 on the intervention
logic, the ICT PSP programme includes the following instruments: Pilot A and Pilot B as well as best practice and
thematic networks. The number of Member States that participate in Pilot A can also qualify as a proxy for
cooperation. The Pilot B instrument does include the stimulation of ICT uptake, but this is not reflected on the
output side.
4.2.2 Knowledge indicators
The main objective of publicly funded R&D programmes is to generate new scientific knowledge, and the usual
way to indicate whether knowledge has been generated is to count the number of patents and publications.
The challenge is to add indicators that also measure the quality of the generated knowledge and provide
indicators that mirror the objectives of the R&D programme. To that end, patents and publications are not
always sufficient or adequate and have to be supported by other indicators.
Quantitative performance measures
Using the number of publications as an indicator can be refined through bibliometric indicators, even though
bibliometrics is more than an indicator: it is an actual quantitative research assessment exercise of academic
output. The general publication indicator can be supported by indicators such as the following (a sketch of how
some of these could be computed follows the list):
Number of peer reviewed publications used as a proxy for quality of research;
Average number of citations per publication, the number (and fraction) of publications that have not
been cited, and their interrelations;
Number of ISI publications;
Number of publications in top journals;
Degree of specialisation;
Number of publications published with at least one co‐author with an address in the USA, China,
India, Japan, South Korea;
Number of publications published with at least one co‐author with an address in another EU Member
State; and
Number of internationally co‐authored publications and share of publications with US, China, India,
Japan, South Korea
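The following sketch illustrates how the co‐authorship indicators above could be computed from publication records; the record structure, country sets and data are invented for illustration and are not the monitoring system's actual data model.

```python
# Illustrative sketch: each publication record lists the countries of its co-authors' addresses.
publications = [
    {"title": "Paper A", "countries": {"DE", "FR"}},
    {"title": "Paper B", "countries": {"AT", "US"}},
    {"title": "Paper C", "countries": {"NL"}},
]

EU_EXAMPLE = {"DE", "FR", "AT", "NL", "IT", "ES"}   # illustrative subset of EU Member States
NON_EU_PARTNERS = {"US", "CN", "IN", "JP", "KR"}    # USA, China, India, Japan, South Korea

def share(pubs, predicate):
    """Share of publications for which the predicate holds."""
    return sum(predicate(p) for p in pubs) / len(pubs) if pubs else 0.0

intl_coauthored = share(publications, lambda p: len(p["countries"]) > 1)
with_non_eu = share(publications, lambda p: bool(p["countries"] & NON_EU_PARTNERS))
with_other_eu = share(publications, lambda p: len(p["countries"] & EU_EXAMPLE) > 1)

print(intl_coauthored, with_non_eu, with_other_eu)  # ~0.67, ~0.33, ~0.33 for this toy data
```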
Furthermore, publications could also include conference presentations, articles and workshops, which are
difficult to quantify and even harder to use in a quality assessment of the conducted research.
In some respects, a publication is a direct output of research whereas citation can be considered as an
outcome. Citation analysis can be biased because older publications tend to be cited more than new
publications and strong variations between different fields of science exist. Possible citation indicators include:
Citations and journal impact factors, i.e. the average number of citations received per paper
published in the journal in question during the two preceding years. These measures can be used for
benchmarking and a more precise measure of the actual publication.
Mean Observed Citation Rate (MOCR), i.e. the ratio of citation count to publication count.
Mean Expected Citation Rate (MECR) of a single publication, i.e. the average citation rate of all
publications published in the same journal in the same year.
Relative Citation Rate (RCR), i.e. RCR = MOCR/MECR (a worked example follows below).
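To make these definitions concrete, the sketch below computes MOCR, MECR and RCR for a small, hypothetical set of project publications; the citation counts and journal baselines are invented for illustration.

```python
from statistics import mean

# Hypothetical project publications: citation count plus the average citation rate of
# all papers published in the same journal in the same year (the expected rate).
project_pubs = [
    {"citations": 10, "journal_year_avg": 4.0},
    {"citations": 2,  "journal_year_avg": 5.0},
    {"citations": 6,  "journal_year_avg": 3.0},
]

mocr = mean(p["citations"] for p in project_pubs)         # Mean Observed Citation Rate
mecr = mean(p["journal_year_avg"] for p in project_pubs)  # Mean Expected Citation Rate
rcr = mocr / mecr                                          # Relative Citation Rate

print(mocr, mecr, round(rcr, 2))  # -> 6 4.0 1.5: this set is cited 50% above journal expectation
```

An RCR above 1 indicates that the publications are cited more often than the average paper in the same journals and years, which is why the measure is useful for benchmarking.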
In the current monitoring system, the output indicators cover the number of articles and open access
channels, whereas benchmarks and bibliometrics only occur on an ad hoc basis in broader analyses of R&D
programmes. It should be noted that the scientific output and productivity of FP7‐ICT projects have been
analysed. For FP7‐ICT, an analysis by Breschi (2010) shows a skewed distribution of scientific productivity, with
few projects accounting for a large number of publications and a majority of projects still characterised by
limited or no publications. The conclusion is that much of the scientific output produced by FP7‐ICT projects is
represented by documents that have been published or made public through other means than peer‐reviewed
scientific journals. In particular, articles published in conference proceedings and articles published as book
chapters or in specialised journals constitute important outlets of the conducted research (Breschi 2010).
Patents as an indicator domain include both the number of patent applications and the number of patents
granted, but this domain can also be seen as part of a broader category of intellectual property rights.
Patent applications are particularly valuable in a broader global innovation context, even though hasty
generalisations on the changing geography of innovation patterns should be avoided. A full assessment of
patenting activities requires further econometric, classificatory survey research and interdisciplinary
interpretation. DG INFSO’s monitoring system also covers patent applications and granted patents and could
extend the use of benchmarking, with the above caution in mind.36
Considering intellectual property as an output more broadly, other indicators in the knowledge domain include
the share and value of commercialisation and the number and value of fees for services‐clients (paying for
expert advice, contract research and sales). Commercialisation is not relevant for all FP7 instruments, which
we explore in further detail in section 4.3.4. ICT and Computer Implemented Inventions (CIIs) are patentable in
Europe as long as the applications adhere to the strict criteria set down by the EPO. It is worth noting that ICT
companies can generate large royalty revenue streams by patenting. The European Patent Office (EPO) does
not grant patents for computer programs or computer‐implemented business methods that make no technical
contribution: computer programs "as such" are excluded from patentability by virtue of Art. 52(2)(c) and (3)
EPC. According to this legislation, a computer program is not patentable if it does not have the potential
to cause a "further technical effect" that goes beyond the inherent technical interactions between
hardware and software. However, a CII (even in the form of a computer program) that can provide a further
technical effect may be patentable, subject to the other patentability requirements such as novelty and an
inventive step. In this case, it is recognised as patentable because it provides a technical solution to a
technical problem.
36 Breschi (2010) has calculated the following indicators: (a) the number and the percentage distribution of scientific articles reported by DG INFSO projects and published in the period 1996-2009 among the 15 scientific fields; (b) the number and the percentage distribution of scientific articles reported by DG INFSO projects and published in the period 2004-2008 among the 15 scientific fields; (c) the total number and the percentage distribution of scientific articles published in the period 2004-2008 in the same set of ISI-WoS journals in which the DG INFSO articles have been published, among the 15 scientific fields.
In general, patent data provide no information regarding commercialisation of R&D, but the relevance of
commercialisation of intellectual property as an output indicator of course depends on the programme
objectives.
Qualitative performance measures
Apart from patents and (journal) publications, which are only relevant for very specific fields of ICT research, the
main indicators for business‐related R&D and innovation programmes such as FP7‐ICT and CIP ICT‐PSP are:
The number of technologies brought to a higher readiness level (more mature technology, set
standard, etc.);
The number and value of the technology developed by the programme – and used in industry; and
Leverage of private R&D to develop technology further.
The embedded challenge with these indicators is to retrieve the relevant data. The annual Work Programmes
also set new technologies and processes as an output, but these are highly specific and cannot be counted and
measured easily. One method that can be used to retrieve at least some valuable information on project
outputs is to collect qualitative performance measures based upon expert judgements. For the overall
quality of the proposals, and the overall quality of on‐going projects, a series of indicators stemming from the
proposal evaluation, the project periodic report, and the project review report can be constructed. During the
project selection phase, FP7 evaluation criteria usually include three broad domains:
1. Scientific and/or technological excellence (relevant to the topics addressed by the call).
2. Quality and efficiency of the implementation and the management.
3. The potential impact through the development, dissemination and use of project results.
For each of the three dimensions there is a numeric rating from 1.0 (very poor) to 5.0 (excellent). The criteria,
against which eligible proposals are assessed by independent experts, are generally the same for all proposals
throughout FP7. In addition, there are sub‐dimensions on more detailed aspects covered by the individual
assessment but there are no numeric ratings available for these. An illustration of these additional aspects is
provided in Table 16.
Table 16: Example of project evaluation criteria: integrated project proposals in FET proactive
1. S/T Quality: "Scientific and/or technological excellence (relevant to the topics addressed by the call)"
‐ Clarity of objectives and their relevance towards the long‐term vision
‐ Novelty and foundational character
‐ Specific contribution to progress in science and technology
‐ Quality and effectiveness of the S/T methodology
2. Implementation: "Quality and efficiency of the implementation and the management"
‐ Quality of work plan and management
‐ Quality of the individual participants
‐ Quality of the consortium as a whole (complementarity, balance)
‐ Appropriate allocation and justification of the resources to be committed (person months, budget)
3. Impact: "Potential impact through the development, dissemination and use of project results"
‐ Transformational impact of the results on science, technology and/or society
‐ Impact towards the targeted objective in the Work Programme
‐ Appropriateness of measures envisaged for the dissemination and/or use of project results
Source: European Commission (2011), Guide for Applicants ‐ Information and Communication Technologies, Funding scheme: Collaborative projects, http://ec.europa.eu/research/participants/portal/page/home
At present, there is a ranked list of proposals and allocated funding for each proposal, corresponding to the
budget allocated for each objective/funding instrument. These sources allow for indicators such as (see the
sketch below):
% of proposals above the overall threshold and the mean overall score of proposals in a call;
% of proposals above the threshold for each of the evaluation criteria (quality, implementation and impact) and
the mean score per evaluation criterion.
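A minimal sketch of how these two indicators could be derived from proposal scores is shown below; the scores and the threshold value are illustrative assumptions, not the actual FP7 thresholds.

```python
from statistics import mean

# Hypothetical overall scores (scale 1.0-5.0) for proposals submitted to one call.
scores = [4.5, 3.0, 2.5, 4.0, 3.5]
THRESHOLD = 3.5  # illustrative threshold value

share_above_threshold = sum(s >= THRESHOLD for s in scores) / len(scores)
mean_overall_score = mean(scores)

print(f"{share_above_threshold:.0%} above threshold, mean score {mean_overall_score:.2f}")
# -> 60% above threshold, mean score 3.50
```

The same computation can be repeated per evaluation criterion (quality, implementation, impact) and per call, instrument or thematic challenge to allow the comparisons discussed above.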
One drawback of the present project evaluation criteria is that the implementation dimension does not
distinguish in its numeric ratings between the quality of the work plan and management and the quality of
human resources. The box below illustrates how this can be achieved.
The case of the Austrian Science Fund illustrates how performance measures on the quality of proposals can be
included. To assess its project proposals, the Austrian Science Fund records quantitative ratings on the
following performance dimensions:
1. Scientific quality of the project
a. Position in the appropriate international scientific community
b. Extent to which the project could break new ground scientifically
c. Importance of the expected results for the discipline
d. Clarity of the goals
e. Appropriateness of the methods
f. Quality of the collaborations
2. Scientific quality of the scientists involved
a. Scientific qualifications and/or potentials of the scientists involved
b. Expected importance of the project for the career development of the participants
3. Financial aspects
a. Appropriateness of personnel and non‐personnel costs of the worthwhile parts
b. What cuts could be made without jeopardizing the success of these parts
c. Suggestions for improvement to the equipment requested
The Austrian Science Fund records quantitative ratings in the three main quality dimensions for each proposal.
The rating scale runs from 0 to 100: ratings of 80‐100 indicate excellent projects, 60‐80 very good projects, 40‐60
acceptable projects, and below 40 problematic projects. In addition, a post‐project review takes place for all
funded projects. Combined with data characterising the project (team), this makes it possible to include
qualitative reviewers' judgements in quantitative project performance analyses.
Source: Dinges, M. (2005), The Austrian Science Fund – Ex post evaluation and performance of FWF funded research projects, InTeReg
Research Report No. 42‐2005, Vienna.
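The sketch below illustrates how such 0‐100 reviewer ratings could be averaged per quality dimension and mapped to the quality bands described in the box; the example ratings are invented.

```python
from statistics import mean

def category(rating: float) -> str:
    """Map a 0-100 rating to the quality bands used by the Austrian Science Fund."""
    if rating >= 80:
        return "excellent"
    if rating >= 60:
        return "very good"
    if rating >= 40:
        return "acceptable"
    return "problematic"

# One proposal, rated on the three main quality dimensions by two reviewers each.
ratings = {
    "scientific quality of the project": [85, 78],
    "scientific quality of the scientists": [90, 82],
    "financial aspects": [70, 65],
}

for dimension, values in ratings.items():
    avg = mean(values)
    print(f"{dimension}: {avg:.1f} ({category(avg)})")
# -> e.g. "scientific quality of the project: 81.5 (excellent)"
```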
As all FP7 projects are subject to periodic reviews and final project reviews, a number of indicators could be
distilled from this type of information, which is at present only available in a non‐standardised text format. For
real time performance indicators that provide information on the quality and relevance of the portfolio of on‐
going projects, consolidated review reports are needed, in which stylised facts about the projects are gathered
in a standardised electronic form. The information provided in the interim review reports and the final review
report suggests that the main challenge, as regards monitoring quality‐related aspects in the course of project
selection and project performance, is not to create new information, but to establish routines that allow
quantifiable information to be distilled and used for assessment purposes.
4.2.3 Network indicators
One of the main objectives of programmes facilitating collaboration is to develop knowledge that allows better
understanding of complex issues. Therefore, these programmes are very knowledge creation oriented and use
the same results indicators (patents, publications, citations) as described above. However, there is a need for
indicators that show the value‐added of collaborations to solve complex issues that would not have been
solved by individual researchers. The first step in measuring value‐added is to provide examples of value‐
added to research by networking, collaboration and multi‐disciplinary research as indicated in the list of result
indicators.
The indicators derived from the case studies on a multidisciplinary and integrated approach to programme delivery are (a small sketch of how new collaborations could be detected follows below):
Number of collaboration projects;
Number of supported collaborations between partners that had not collaborated before. These collaborations include
educational institutes, industry, research centres, NGOs, etc. The indicator also measures whether the
same collaborations and participants repeatedly receive funding for R&D.
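The following sketch shows how partner pairs that have never collaborated in earlier ICT‐related FP projects could be identified from project participant lists; the project data are invented and the pairwise definition of a "collaboration" is an assumption for illustration.

```python
from itertools import combinations

# Partner sets of earlier projects and of newly funded projects (invented examples).
past_projects = [
    {"A", "B", "C"},
    {"B", "D"},
]
new_projects = [
    {"A", "B"},        # A-B have collaborated before
    {"A", "D", "E"},   # all three pairs are new
]

# Build the set of partner pairs already seen in earlier projects.
known_pairs = {frozenset(p) for proj in past_projects for p in combinations(sorted(proj), 2)}

new_pairs = set()
for proj in new_projects:
    for p in combinations(sorted(proj), 2):
        if frozenset(p) not in known_pairs:
            new_pairs.add(frozenset(p))

total_pairs = sum(len(list(combinations(proj, 2))) for proj in new_projects)
print(len(new_pairs), f"{len(new_pairs) / total_pairs:.0%}")  # -> 3 new pairs, 75% of all pairs
```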
In general, there is a lack of indicators related to the results of collaborations but plenty of indicators on the
number of collaborations. Consequently, network and collaboration indicators are used as a proxy for
knowledge transfer. Indicators showing the value added of collaborative teams over individual researchers
would be a valuable addition to any list of result indicators.
The network indicators currently used for FP7 are related to interdisciplinarity and include outputs regarding
engagement with societal actors, and engagement with public bodies or policy makers in the sense of either
taking part in the project or being addressed by projects. Thus, the final project reporting currently includes
interdisciplinarity questions such as engagement with societal actors beyond the research community and
engagement with government/public bodies or policy makers including international organisations.
4.2.4 Human capital indicators
Most R&D programmes perceive human capital as related to outcomes, whereas the headcount of
researchers involved in the project as well as gender aspects may be perceived as an output of the
programme. When the programme objectives include issues like nurturing a pool of excellent researchers,
human capital output indicators are relevant. They include:
The number of graduates and doctorate fellows trained, i.e. a proxy for increased human capital.
Human capital indicators are also represented in the current monitoring system. In the current monitoring
template for final project reporting, the project workforce indicators include a particular focus on the number of
Ph.D. students, experienced researchers, etc., working on the project. Furthermore, there is also particular
emphasis on the gender perspective and the ratio of men/women participating in the projects.
4.2.5 Efficiency indicators
A last dimension of performance measurement that is crucial for the success of publicly funded research
activities is programme management. Efficient implementation matters for all phases of a programme: call
dissemination, project selection, negotiation (where an overly long time span bears the risk that researchers
have already shifted their priorities), and the project execution phase, for which timely delivery of funding is
essential for project delivery.
Indicators in the programme management domain should report on time and resources required for the
implementation of EU funded ICT research and innovation activities. In this respect, the following indicators
are provided in the overall FP7 monitoring:
Time to Grant (TTG): defined as the time elapsed from the deadline of the call for submission of
proposals until the signature of the grant agreement (a computation sketch follows after this list). The third
FP7 monitoring report states that the average TTG for the whole of FP7 is 350 days, and the median is 335 days.
Timeliness of experts' reimbursements: defined as the percentage of expert payments for
reviewers, monitoring experts, assessment experts and evaluation observers which have been made
on time. For the payment authority PMO, 73.4% of payments were on time, and for DG RTD 82.6%.
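The sketch below illustrates how TTG could be computed per grant agreement and summarised for a call; the dates are invented examples, not actual FP7 data.

```python
from datetime import date
from statistics import mean, median

# Deadline of the call and signature dates of the resulting grant agreements (invented).
call_deadline = date(2011, 1, 18)
grant_signatures = [date(2011, 11, 30), date(2012, 1, 10), date(2011, 12, 20)]

# Time to Grant in days for each project.
ttg_days = [(signed - call_deadline).days for signed in grant_signatures]

print(ttg_days)                          # -> [316, 357, 336]
print(mean(ttg_days), median(ttg_days))  # average and median TTG in days
```

Computed per call, instrument or thematic priority, such figures would support the benchmarking described in the following paragraph.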
Furthermore, in the course of the overall FP7 monitoring an annual survey of the National Contact Points is
conducted in order to collect their views on the promotion and implementation of FP7.
TTG is the only quantitative metric established at present. The indicator would allow for benchmarking
between the different activities of the cooperation programme and between different thematic priorities of
EU‐funded ICT research. The indicator could be fine‐tuned by distinguishing between the time from project
submission to funding recommendation (reviewer decision) and the project negotiation phase. A qualitative
benchmark based on project participants could also be established, in which participants estimate whether
TTG is shorter or longer than in national R&D programmes of similar type and size.
Apart from the indicators applied by the FP7 monitoring system, the following additional indicators covering
the time and resources spent by the Commission services can be identified:
% of calls which have been announced according to the foreseen Work Programme
% of calls which have finalised the project selection and negotiation phase according to schedule
Ratio of planned costs for operating a call to project decision vs. actual costs accrued for operating a
call
Ratio of actual costs for operating a call vs. amount of funding distributed in a call
Average time needed to perform periodic reviews and final project reviews.
A breakdown of all these indicators is possible for the ICT cooperation programme as a whole, each call, and
the different thematic priorities.
While the indicators mentioned above refer to the efficiency of implementation on the side of the Commission
services and National Contact Points, another important aspect to be considered is the opinion of
programme participants (applicants and beneficiaries). Programme participants are the group most
affected by programme management, and the opinions of this group should be carefully and systematically
monitored via standardised surveys, as no other ready‐made datasets are available in this field.
Braun et al. (2009) suggest a series of aspects of the administrative process that can be analysed by indicators
stemming from surveys of programme participants at the level of specific challenges, instruments, calls, and
research sectors. Based on this study, the following aspects are considered relevant for performance
monitoring of ICT‐related research and innovation:
The quality of information provided by the Commission services during calls for proposals: Is the
information provided by the Commission services easy to understand? Does it require competencies
not normally found within the different groups of participants (e.g. universities or SMEs)?
The time and resources needed by applicants to establish a proposal and the satisfaction with the
process: a quantitative and qualitative assessment of invested resources for both successful and
unsuccessful applicants, including, for example, the real time needed to establish a proposal and the
appropriateness of the time needed with respect to potential funding (i.e. comparison with national
funding opportunities).
Bottlenecks in the project proposal and implementation phases, i.e. the phases of the proposal process
and the project that make it most difficult for proposers/participants to keep deadlines (e.g. what are
the most critical phases in the course of a project?).
The appropriateness of formal and scientific requirements for proposals.
The timeliness of handling contracts and the provision of financing/economic support, in order to receive
information on, for example, potential bottlenecks on the side of the Commission services which prevent
participants from performing contracts optimally, or on the efficiency of programme implementation
as compared with national research and innovation programmes.
Administrative costs borne by project participants, such as the ratio of administrative costs per
project and participant, the ratio of administrative costs funded from the EU over total administrative costs as
reported by the coordinator, and administrative costs as reported by coordinators out of total EU funding
for the project.
4.2.6 List of output and efficiency indicators
Following the discussion of the different types of output indicators and the objectives of the various
instruments in FP7 and ICT PSP, we present a long list of output indicators and a long list of programme
management (efficiency) indicators. For evaluation and impact assessment, it is important to link objectives
and outputs. The question of effectiveness is closely related to outputs, as it concerns the extent to which the
activities and outputs correspond to the objectives of the funding initiative.
The evaluative criterion of efficiency is specifically related to resources and their use by programme managers
(i.e. the ratio between costs and output) and is therefore only indirectly linked to the outputs. The evaluative
question of relevance mainly concerns whether the outputs are in line with the objectives of the programme
and its instruments, and the underlying needs and/or problems.
The benchmarking opportunities of the indicators are another important feature. By nature, the benchmarking
options depend on two aspects: first, the type of the output indicator (quantitative or qualitative);
second, the specificity of the indicator. Thus, patents allow for benchmarking, whereas technologies
brought to a higher readiness level can be monitored but are specifically tied to the technology, which makes
them difficult to benchmark across funding instruments and technology challenges addressed by the funding
initiative. The OECD Patent Database was set up to develop patent indicators that are suitable for statistical
analyses that can help address S&T policy issues.
The OECD Patent Database covers data on patent applications to the European Patent Office (EPO), the US
Patent and Trademark Office (USPTO), patent applications filed under the Patent Cooperation Treaty (PCT)
that designate the EPO as well as Triadic Patent Families. Data mainly derives from the latest version of the
EPO's Worldwide Patent Statistical Database (PATSTAT). The following patent statistics were updated on the
OECD's statistical portal in January 2011:37
37 http://www.esds.ac.uk/international/support/user_guides/oecd/sti_manual.pdf
Patents by country and technology fields (EPO, PCT, USPTO, Triadic Patent Families);
Patents by regions and selected technology fields (EPO, PCT); and
Indicators of international cooperation in patents (EPO, PCT, USPTO)
The table below provides a summary of indicators, evaluative issues, and availability of benchmarks.38
A broader set of patent‐related indicators is available online, along with methodological issues, at www.oecd.org/sti/ipr-statistics, covering notably patents by main technology classes, patents by regions, as well as indicators on international cooperation in patenting. For further details on patent data, refer to the OECD Patent Statistics Manual, 2009.
38 The table does not explore patent and bibliometric indicators in detail, as this would eventually lead to a report of its own.
Table 17: Long list of output indicators
Columns: Proposed Indicator | Key Evaluative Issue | Impact Channels | Instruments / Challenges | Availability of Benchmarks | Availability | Sources | Feasibility & Assumptions
Nr. of publications per project by type of publication: ‐ Nr. of publication in peer reviewed journals ‐ Nr. of articles presented in conferences ‐ Nr. of articles published as book chapters ‐ Nr. of articles directed at non‐research communities
Programme results: Effectiveness
Knowledge All
‐ Yes, by field of science, international, between thematic challenges
‐ Partly ‐ Current monitoring
‐ Primary output indicator in most R&D programmes ‐ Counts tangible outputs only ‐ Proxy for research quality through consideration of Journal Impact Factors ‐ Proxy for interaction with knowledge users
‐ Average results of independent project review process
Programme results:
‐ Quality ‐ Effectiveness
Knowledge Networks Economic
All ‐ Different themes and instruments
‐ Yes ‐ EC administrative systems: SESAM
‐ Provides information on potential impact of the portfolio of projects
‐ Average Nr. of publications/project Programme results: Efficiency
Knowledge All ‐ Yes, between thematic challenges
‐ Can be derived from current monitoring
‐ May depend on project size, thematic area. ‐ Can lead to undesired behaviour of R&D actors
‐ Nr. of Citations
Programme results: Effectiveness
Knowledge All ‐ Yes ‐ Partly ‐ Cross regional analysis
‐ Unlikely that citations occur in a phase while projects are on‐going ‐ Citations of research teams/institutions may provide estimate for expected results
‐ Bibliometrics
Programme results: Effectiveness, Efficiency
Knowledge All
‐ Yes, by field of science, international, between thematic challenges
‐ Yes ‐ Cross regional analysis
‐ Limited nr. of readymade indicators ‐ Requires further analysis
‐ Number of patent applications
Programme results
Knowledge All ‐ Yes, PATSTAT ‐ Yes ‐ Cross regional analysis
‐ Primary output indicator in most R&D programmes ‐ Depends strongly upon technological domain
‐ Number of patents issued
Programme results
Knowledge All ‐ Yes, PATSTAT ‐ Yes ‐ Current monitoring
‐ Primary output indicator in most R&D programmes ‐ May likely arise only after project has finished
‐ Patent benchmark Programme results
Knowledge All ‐ Yes, PATSTAT
‐ Yes ‐ Benchmark to other funding schemes
‐ Caution on benchmarking across global regions
‐ Share and value of commercialisation
Effectiveness Relevance, utility
Knowledge, economic
All, in
particular
ICT‐PSP ‐ No
‐ No ‐ Participant survey
‐ Relates to use of R&D and relevance ‐ Differs for the programme instruments ‐ May arise only after project has finalised
‐ Number and value of fee‐for services clients (paying for expert advice, contract research and sales)
Relevance, utility Knowledge, economic
All ‐ No ‐ No ‐ Participant survey
‐ Difficult to benchmark
‐ Number of technologies brought to a higher readiness level (more mature technology, set standard etc.)
Programme effectiveness
Knowledge All ‐ No ‐ No ‐ Participant survey
‐ Difficult to benchmark and requires sector and technological analysis
‐ Number and value of technology developed by programme – and used in industry
Effectiveness, relevance, utility
Knowledge All ‐ No ‐ No ‐ Participant survey
‐ Difficult to benchmark and requires sector and technological analysis
‐ Leverage of private R&D to develop technology further
Relevance, utility Knowledge All ‐ No ‐ No ‐ Participant survey
‐ Difficult to benchmark and requires sector and technological analysis
‐ Number of collaboration projects Effectiveness Network All ‐ Yes, by instrument, challenge, similar R&D programmes
‐ Yes ‐ Benchmark to other funding schemes
‐ Relevant output indicator in several R&D programmes
Number of supported educational‐industry collaborations which: ‐ had not collaborated on R&D previously ‐ had not collaborated with each other previously ‐ planned to collaborate with each other again
Effectiveness Relevance
Network All ‐ Yes, by instrument, challenge, similar R&D programmes
‐ Yes ‐ Current monitoring
‐ The scope for STREPS and IPs differs in terms of objectives and focus on long‐term collaborations and the number of collaborations. The relevance of these network output indicators varies accordingly, but for both instruments it is feasible to retrieve and analyse the data.
Number of researchers trained: ‐ graduates ‐ Ph.D.’s ‐ doctorate fellows
Effectiveness Human capital All ‐ Yes, by instrument, challenge, similar R&D programmes
‐ Yes ‐ Current monitoring
‐ Depends on instruments, type of organisation involved.
Table 18: Efficiency Indicators
Columns: Proposed Indicator | Key Evaluative Issues | Impact Channel | Instrument / Challenge | Availability of Benchmarks | Availability | Sources | Feasibility & Assumptions
‐ Time to Grant: Time elapsed from deadline of call until signature of grant agreement
‐ Programme implementation efficiency: time
‐ All
‐ Different themes and instruments
‐ National R&D programmes
‐ Yes ‐ Current Monitoring
‐ Given the development cycles of products, an overly long TTG may have a negative impact on innovation.
‐ A long TTG may prevent organisations from participating in certain funding schemes
‐ Timeliness of experts reimbursements: % of in time expert reimbursements
‐ Programme implementation efficiency: time
‐ All ‐ Different themes and instruments
‐ Yes ‐ Current Monitoring
‐ Little relevance for overall performance of R&D activities
‐ % of calls announced according to foreseen work schedule
‐ Programme implementation efficiency: time
‐ All ‐ Different themes and instruments
‐ No ‐ EC administrative systems
‐ Precondition for achieving desired long term impacts
‐ % of calls which have finalised the project selection and negotiation phase according to schedule
‐ Programme implementation efficiency: time
‐ All ‐ Different themes and instruments
‐ No ‐ EC administrative systems
‐ Precondition for achieving desired long term impacts
‐ Ratio of planned costs for operating a call to project decision vs. actual costs accrued for operating a call
‐ Programme implementation efficiency: costs
‐ All Different themes and instruments
‐ No ‐ EC administrative systems
‐ Requires full cost accounting in operating calls
‐ Ratio of actual costs for operating a call vs. amount of funding distributed in a call
‐ Programme implementation efficiency: costs
‐ All
‐ Different themes and instruments
‐ National R&D programmes
‐ Other agencies
‐ No ‐ EC administrative systems
‐ Requires full cost accounting in operating calls
‐ Average time needed to perform periodic reviews and final project reviews
‐ Programme implementation efficiency: time
‐ All ‐ Different themes and instruments
‐ No ‐ Survey among reviewers
‐ Indication for burden put on reviewers and administration: if time needed to produce reviews is too high, there is a
higher potential
‐ Overall quality assessment of the proposal evaluators on the FP proposal evaluation process
‐ Programme implementation: quality effectiveness
‐ All ‐ Different themes and instruments
‐ Yes ‐ Annual Evaluators' Survey
‐ Indication for possible impact of the portfolio of projects
‐ Assessment of quality by the evaluators between the FP evaluation process and other equivalent systems
‐Programme implementation: quality, effectiveness
‐ All ‐ Different themes and instruments
‐ Yes ‐ Annual Evaluators' Survey
‐ Provides information about which type of organisations are attracted by FP
‐ % of projects covered by reviews
‐Programme implementation: quality, effectiveness
‐ All ‐ Different themes and instruments
‐ Yes ‐ EC administrative systems: SESAM
‐ Provides information on level of accuracy of programme management
‐ Quality of information provided by EC in project proposal phase
‐ Programme implementation: quality, effectiveness
‐ All ‐ Different themes and instruments
‐ Partly – NCP survey in monitoring system
‐ Participant survey
‐ Indication for quality of commission services
‐ Real time needed to establish a proposal
‐ Programme implementation efficiency: time and costs
‐ Economic All
‐ Different themes and instruments
‐ International comparison: National funding schemes
‐ No ‐ Participant survey
‐ Electronic Proposal System
‐ Indication for attractiveness of EC funding
‐ Ratio of administrative costs per project vs. total project costs
‐ Programme implementation efficiency: time and costs
‐ Economic All
‐ Different themes and instruments
‐ International comparison: National funding schemes
‐ No ‐ EC administrative system
‐ Quantitative indication of administrative burden
‐ High usefulness
4.3 Outcome and Impact Indicators
Outcomes refer to changes which take place as a result of an intervention while impacts relate to the overall
objective or mission of a programme, addressing the anticipated needs of the policy intervention. Outcomes
and impacts are measured in interim and ex post evaluations rather than in monitoring exercises. As outlined
in Chapter 3, attribution problems and timing are main reasons why outcomes and impacts are rarely
considered in programme monitoring. For attributing outcomes and impacts to a public intervention, it is wise
to draw at least upon a number of finalised projects and make use of a mix of well‐defined quantitative and
qualitative evaluation methods. But even in this case difficulties arise, because FP7‐ICT and ICT‐PSP are most
probably not the only instruments in which participants operate.
Nevertheless, performance measurements are also important in the course of monitoring. Anticipated
outcomes represent the bottom‐line for interim and ex post evaluations. They relate to the desired results of a
programme as defined in the strategic, socio‐economic objectives addressed by the portfolio of instruments,
and the thematic priorities of the Work Programme. In order to circumvent methodological problems, it is
necessary to establish indicators primarily from existing sources of data and surveys: how do the main outputs
of the EU‐funded ICT research activities unfold an impact on the economy, society, and the environment in
terms of established networks, knowledge, and human capital?
Key dimensions addressed by outcome and impact performance measures are matters of relevance,
effectiveness and sustainability. These relate to the impact channels knowledge, networks, human capital, and
socio‐economic factors. These four impact channels will be used to structure the analysis of outcome and
impact performance measures and indicators.
In the following sections, the nexus between the identified outcome indicator domains and the FP7‐ICT instruments and challenges is presented first. Then, relevant indicators for the different impact channels and the long list of indicators are provided.
4.3.1 FP7 and ICT PSP outcomes and impacts – results from the intervention logic analysis
Table 19 shows that outcomes and impacts of FP7‐ICT relate to all identified impact channels of the
programme, but with a different emphasis. Sustainable research networks and increased mobility are
emphasized in all European research and innovation programmes, but with different focus. Integrated Projects
and Networks of Excellence have the strongest focus on the networking channel. FET projects and STREPS are
co‐operative, but operate at much smaller project scale. The CIP also has support schemes focussing explicitly
on networking, but as opposed to NoEs and IPs, the CIP is more oriented towards knowledge transfer,
deployment and bringing innovations to the market.
Table 19: Relevance of FP7‐ICT outcomes and impacts (vertical axis) for different FP7‐ICT instruments and nexus to impact channels (horizontal axis)

FP7‐ICT outcome / impact | Impact channel | STREPS | IPs | NoEs | FET | CIP ICT‐PSP
Sustainable European research networks and increased mobility | Networks | + | +++ | +++ | + | +++
Stronger S‐I linkages | Networks, Knowledge | +++ | +++ | + | + | +++
Increased private R&D investments | Knowledge, Economic | +++ | +++ | + | + | +
New products, processes, services | Economic | +++ | +++ | + | + | ++
More and better human resources | Knowledge | ++ | +++ | +++ | +++ | +
EU leadership in ICT research | Knowledge | +++ | +++ | +++ | +++ | +++
EU leadership in ICT technology | Knowledge, Economic | +++ | +++ | + | ++ | +++
Societal challenges successfully tackled | Economic and social | Depends on Work Programme challenges | Depends on Work Programme challenges | Depends on Work Programme challenges | + | Depends on Work Programme challenges

+ Low importance, ++ medium importance, +++ high importance
Stronger science‐industry linkages also relate to uptake (see above) and diffusion of knowledge, and networks.
STREPS, IPs and the CIP have a strong emphasis on science and industry collaborations, whereas the NoEs and
the FET scheme concentrate more on academic actors.
Increased private R&D investments and the development of new products, processes and services are particularly important for IPs and STREPS, in which industrial R&D organisations are core participants. Both NoEs and FETs
have a more academic orientation, although with different purposes. The FET acts as a pathfinder for ICT
research, and may stimulate private R&D only in the long term. The NoEs concentrate on broad research
agendas, which might contribute to overall long‐term development of industrial sectors, but key impacts are to
be expected in terms of scientific/technological progress and the development of human resources.
In addition to the NoEs, the Integrated Projects and the FET scheme are supposed to have a bigger impact on human resources than STREPS and the CIP, in which human resources also matter but which have a less distinct specialisation on these issues.
The outcomes of the specific challenges of the Work Programme can also be linked to the impact channels. In the present Work Programme of FP7‐ICT research, the challenges tackle different research areas, some of which have a more generic technological/research‐oriented approach, while others have a more system‐oriented approach. The table below shows which of the different challenges tackle certain impact channels, and by which means these are reflected in the Work Programme.
Performance and Monitoring Concepts and Tools for ICT in the EU funded RTD ‐ SMART 2010/0028
Version: 4.0 Status: Final Deliverable: D3
Prepared by: Joanneum, DTI, TNO Date: 14/12/2011 Page: 90 of 121
Table 20: Relevance of FP7‐ICT outcomes and impacts at the specific objective level

Challenge 1: Network and service infrastructure
‐ Knowledge: New network infrastructure; Global standards achieved; Advanced engineering capabilities
‐ Networks: Strategic alliances; Standards prepared
‐ Human Resources: Advanced skills and capabilities; New education and training capabilities
‐ Economic/Social: Cloud computing services; Faster broadband access; Safer i‐net applications; New companies

Challenge 2: Cognitive systems/robotics
‐ Knowledge: New, improved functionalities, enhanced systems; Increased innovation capacities of firms
‐ Networks: Fostered communication across research communities; Consensus on standards; Stronger cohesion between research and industry
‐ Human Resources: Advanced skills and capabilities; New education and training capabilities
‐ Economic/Social: Robotic systems operating in real world environments

Challenge 3: Components and systems
‐ Knowledge: Increased skills and innovation capacities of firms; New paradigms for system designs, hardware, software; Excellence in multi‐core architectures
‐ Networks: Fostered communication across research communities; Consensus on standards; Stronger cohesion between research and industry
‐ Human Resources: Advanced skills and capabilities; New education and training capabilities
‐ Economic/Social: Emergence of new products, devices and applications; Shorter time to market; Increased competitiveness

Challenge 4: Digital libraries
‐ Knowledge: New technologies ready to use
‐ Networks: New partnerships formed; Joint/aligned research agendas; Knowledge transfer between research/content providers
‐ Human Resources: Advanced skills and capabilities; New education and training capabilities
‐ Economic/Social: New business opportunities for SMEs; Better services for citizens; New technologies ready to use

Challenge 5: ICT for Health
‐ Knowledge: New technologies ready to use
‐ Networks: Knowledge transfer between research/content providers
‐ Human Resources: Advanced skills and capabilities; New education and training capabilities
‐ Economic/Social: New ICT solutions for public policy; New devices, products, services; Increased competitiveness of EU industry

Challenge 6: ICT for a low carbon economy
‐ Knowledge: Strengthened excellence in engineering; New standards
‐ Networks: Reinforced coordination between EU industry; Strengthened partnerships between industry and applicants
‐ Human Resources: Advanced skills and capabilities; New education and training capabilities
‐ Economic/Social: New control systems; New tools, system devices; New traffic/management control; Increased competitiveness of EU industry; Measurable reduction in water, energy consumption

Challenge 7: ICT for enterprise and manufacturing
‐ Knowledge: New systems, tools for process automation; Increased interoperability; New engineering platforms
‐ Networks: Reinforced coordination between EU industry
‐ Human Resources: Advanced skills and capabilities; New education and training capabilities
‐ Economic/Social: Strengthened position of EU manufacturing industry; Reinforced competitiveness; New business models

Challenge 8: ICT for learning and access to cultural resources
‐ Knowledge: New application services for schools, workplaces, museums; Better technologies for digitisation
‐ Networks: Coordination between IT‐producers and cultural sector/users
‐ Human Resources: More and better trained citizens; New education and training capabilities
‐ Economic/Social: Access to new markets
4.3.2 Knowledge indicators
In terms of impacts, the assessment of publication patterns generated by scientific research is the prevalent
method in the evaluation of impacts from (basic) research (Kanninen and Lemola 2006): “publications are the
main channel through which research results are disseminated in the scientific community, and they are also
an important element in the scientific reward system”. Bibliometric methods and patent analyses, as applied for example by Breschi et al. (2010) in the field of ICT research, produce quantified, measurable results and can be attributed to individual researchers, organisations, and scientific disciplines.
However, the task of this study is not to focus on publications and patents, but to go beyond these
straightforward tangible outputs, and concentrate on knowledge outcomes and impacts as addressed in the
FP7 and the specific challenges of ICT research in FP7 and ICT PSP. For the impact channel “knowledge” these
relate to the following main dimensions, for which no ready‐made indicator systems exist:
The creation of new capabilities at the level of participating firms and research organisations
Key research areas and key technologies
The knowledge impact channel relates to:
The micro‐level of firms and participating research organisations, and
The meso level of research areas and technologies, which can have an ultimate effect even at the macro‐level, i.e. the international knowledge position of Europe.
For both areas non‐tangible performance measures prevail.
Participants
Firm level knowledge indicators are relevant for all ICT research activities (from fundamental research to
pilots, by all types of actors, across a range of ICT domains). Methods to address research performing
organisations include (before and after) comparisons of knowledge creation capacities via self‐assessments,
surveys and interviews. In order to allow for before and after comparison approaches, the knowledge creation
and innovation capabilities have to be tracked from the start of a project (see section 4.1) to the end of a
project. Relevant indicators are:
Business/Research capabilities: % of participating organisations achieving a significant increase in innovation capabilities, % of participating organisations achieving a significantly improved use of ICT.
International knowledge position: Increased reputation/recognition in the scientific community
through participation in the project, changes in international rankings of institutes and involved
companies
Performance and Monitoring Concepts and Tools for ICT in the EU funded RTD ‐ SMART 2010/0028
Version: 4.0 Status: Final Deliverable: D3
Prepared by: Joanneum, DTI, TNO Date: 14/12/2011 Page: 92 of 121
Programme contribution to develop product or process innovations: Increase range of goods or
services, replace outdated products or processes, enter new markets or increase market share,
improve quality of goods or services, improve flexibility for producing goods or services, increase
capacity for producing goods or services, reduce labour costs per unit output, reduce material and
energy costs per unit output, reduce environmental impacts, improve health or safety of your
employees.
Capability to introduce different types of organisational innovations: new business practices, new
methods of organising work, new methods of organising external relations.
Increased in‐house creativity and skills: Software development, graphic arts, design of objects, web
design, engineering/applied sciences, and mathematics/statistics/database management.
Benchmarks for this type of indicator can be established if the terminology of standardised surveys such as the Community Innovation Survey 2010 (CIS 2010) is used. Anonymous and non‐anonymous micro data would allow for specific analyses of the effects of participation in the Framework Programmes and the CIP ICT PSP.
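To illustrate how such before and after comparisons could be operationalised once entry and exit data are collected, a minimal sketch is given below. The record structure, the 1–5 self-assessment scale and the threshold are assumptions made for illustration; they are not elements of the existing monitoring system.

```python
# Minimal sketch (illustrative only): deriving the indicator "% of participating
# organisations achieving a significant increase in innovation capabilities" from
# hypothetical entry and exit self-assessment scores on a 1-5 scale.
from dataclasses import dataclass

@dataclass
class CapabilityRecord:
    participant_id: str
    entry_score: int  # self-assessed innovation capability at project start (1-5)
    exit_score: int   # self-assessed innovation capability at project end (1-5)

def share_with_significant_increase(records, min_increase=1):
    """Share of organisations whose exit score exceeds the entry score by at least min_increase."""
    if not records:
        return 0.0
    improved = sum(1 for r in records if r.exit_score - r.entry_score >= min_increase)
    return improved / len(records)

records = [
    CapabilityRecord("org-A", 2, 4),
    CapabilityRecord("org-B", 3, 3),
    CapabilityRecord("org-C", 4, 5),
]
print(f"{share_with_significant_increase(records):.0%} report a significant increase")
```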
Key research areas and technologies
The emergence of scientific excellence in key research areas and technologies is a central aim of FP7 and its
specific challenges and instruments. Mappings of excellence in key research areas can be achieved through
advanced bibliometric analyses and patent analyses39. While these types of analyses may appropriately reflect
the emergence of scientific excellence of European research in certain fields/technologies, other points of
reference, as approached for instance in the course of the FP6 Impact Analysis Study, are (European Commission 2009):
Research priorities of FP7‐ICT research themes in terms of knowledge enhancement and knowledge
integration
Contributions to Standards.
Whether or not FP7‐ICT research projects have contributed to a knowledge enhancement and knowledge
integration at the level of research areas and technologies can be answered via specific surveys and expert judgements, and via before and after group comparison approaches (if data are gathered at the beginning of a project;
see section on input indicators).
4.3.3 Human capital indicators
Outcomes and impacts in the impact channel human capital mainly relate to the quality and qualifications of
involved researchers and mobility. Through the funding of programmes and projects, a number of researchers are financed, and PhDs and master students commence their work in research teams consisting of several academic research organisations and private enterprises. Hence, in terms of human capital outcomes the
following indicators can be considered while projects are ongoing:
Nr./% of PhD and master graduates taking up employment with industry
The existence of formal qualification and exchange schemes between participating organisations
Nr./% of researchers participating in qualification and exchange schemes
Nr./% of professional training courses, workshops for industry and end‐users (as a proxy for outcome)
Nr./% of training courses for programme participants (as a proxy for outcomes)
39 For an advanced discussion on options and limitations of bibliometric analyses in this respect, see: Braun et al. (2009), Tools and Indicators for Community Research Evaluation and Monitoring, Final Report, Vol. 1 & Vol. 2. Bad Camberg, 2009.
The attraction/retention of world class researchers (e.g. nr. of newly employed researchers with a track record above a certain threshold level).
4.3.4 Network indicators
The creation of research networks is a key impact channel. Structuring effects are a desired outcome of the
collaborative FPs and in particular of their bigger research and knowledge transfer instruments such as the
Networks of Excellence and the Integrated Projects.
Network analysis methods and indicators have been widely used in the analysis of impact of the FPs, but there
are few techniques that allow for monitoring the performance of networks. Exceptions include techniques that
rely on participant surveys and participant data retrieved from the European Commission's administrative systems.
Quantitative network analyses are not apt for regular monitoring, but play an important role in interim and ex
post evaluations. However, by making use of participant data (as highlighted in the section 4.1 on programme
participants) proxies for the emergence of new research networks can be developed. Proxies include: the
frequency of participation, new entrants, new collaborations, repetitive co‐operations of research groups.
Survey based data may provide evidence on the effects of such collaboration and complement input data.
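As a rough illustration of how such proxies could be derived from participant data, a minimal sketch follows. The project and organisation identifiers are hypothetical and the data model is an assumption for illustration, not the structure of the EC administrative systems.

```python
# Minimal sketch (illustrative only): computing two of the proxies mentioned above,
# frequency of participation and repetitive co-operations, from an assumed
# project-to-participants mapping; all identifiers are hypothetical.
from collections import Counter
from itertools import combinations

projects = {
    "P1": ["org-A", "org-B", "org-C"],
    "P2": ["org-A", "org-B", "org-D"],
    "P3": ["org-C", "org-E"],
}

# How often each organisation participates across projects
participation = Counter(p for members in projects.values() for p in members)

# Pairs of organisations that co-operate in more than one project
pair_counts = Counter(
    pair for members in projects.values() for pair in combinations(sorted(members), 2)
)
repeated_pairs = {pair: n for pair, n in pair_counts.items() if n > 1}

print("Frequency of participation:", dict(participation))
print("Repetitive co-operations:", repeated_pairs)
```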
Performance measures that can be considered are of the following type:
Nr./% of project participants willing to cooperate again in research endeavours;
Nr./% of project participants establishing joint research agendas (i.e. develop a pool of research
strands together);
Nr./% of project participants launching new market alliances;
Nr. /% of project participants establishing joint training and qualification measures.
Further positive network effects are diffusion and uptake of knowledge among participants (raising the
absorptive capacity), behavioural impacts of project participants, and the knowledge transfer between
knowledge producers and knowledge users.
Effects regarding knowledge diffusion among participants may be monitored via surveys focussing on
implementation of joint learning and programming activities (see above).
Impact of knowledge that is generated in FP7‐ICT may only emerge if it diffuses to different groups of end‐users (e.g. public authorities, firms, citizens). While projects are on‐going, it is most likely that no diffusion
can be monitored. Instead one has to limit oneself to proxies in terms of outputs that are geared towards end‐
users. These are:
Nr. of publications for non‐academic audiences
Presentations, lectures and discussion meetings regarding research results
Patents (e.g. licenses, revenues)
Products, services and consulting.
According to Kanninen and Lemola (2006), outputs that are targeted at (final) users serve as a proxy to measure impact on actors outside academia. The main advantage is that the data can be collected from researchers, and no potential end‐users have to be addressed. Data gathering may be achieved by annual
reporting and/or surveys. Different types of users can thereby be indicated, which also allows estimates of the impact of different types of funding schemes on user groups.
4.3.5 Economic indicators
Ultimately, FP7‐ICT and CIP ICT‐PSP seek to have economic, social and – to some extent – environmental
impacts. Economic growth, improved quality of life, improved health, increased social welfare and the
concrete economic impacts on participating organisations, constitute the main underlying rationales for
investing in large scale research. However, these impacts are most difficult to measure. In the following
section we concentrate on economic impacts, as an area in which the monitoring system may be improved.
For a performance measurement of socio‐economic impacts, the key question is: to what extent are the
measurable effects due to the nature of the intervention of a programme? Polt et al. (2006) show that the
problem of attribution has different facets:
Publicly subsidized research projects frequently turn out to be one of many projects within a company's research portfolio. If these projects are complementary to each other, then new products and processes, and consequently the income stemming from these research activities, are hard to attribute to one project/programme.
Indirect effects can play a major role. While direct effects are related to the program goals, indirect
effects come about when the effects go beyond the program goals – for instance when the research
project enhances the reputation of the firm or contributes to improvements in management. These
can lead to an improvement in the company’s competitive position.
Rising turnover, increased value added or cost savings are generally the result of multiple influences
both internal and external to the firm. Research activities as such are generally only a small part of
these. Whether or not an improved process or a successfully introduced new product leads to a mid‐range increase in value‐added is, for instance, greatly dependent upon how quickly competitors react to the new situation or the level of demand elasticity.
Apart from attribution problems, it is also important to decide upon the appropriate level of analysis.
Performance measures can concentrate on the micro‐level of participating firms, the meso level (a set of ICT
related projects in certain instruments, the different technological and societal pillars of the ICT Work
Programme, the competitiveness of Europe’s sectors) or on the macro‐level of global outcomes and impacts
stemming from the whole ICT Work Programme.
In order to assess the economic outcomes and impact of FP7‐ICT and the CIP, the main actors need to be well
identified and specifically characterised in terms of their innovation and economic capabilities. The basic
characteristics for this purpose need to be gathered at the stage of project application (see section 4.2).
Secondly, the scientific and technological outputs need to be characterised as well. A grouping of outputs may
consider the geographical location, the industrial branch, and the types of institutions. This allows the inputs and outputs of the programme to be positioned.
R&D additionality
For analytical purposes, additionality is a key concept for measuring the effects of R&D programmes. Three forms are distinguished: input additionality, output additionality and behavioural additionality.
Studies dealing with input additionality investigate whether financial public R&D support has a positive impact on total private R&D expenditures – and if so, whether it boosts them by an amount which is larger than the amount of taxpayers' money used in this way (Streicher et al. 2004). For FP7‐ICT, evaluative
questions in this regard are:
Has FP7‐ICT significantly increased the level of R&D investments of participating firms?
Has FP7‐ICT significantly increased the level of overall R&D investments in the ICT sector?
If public funding is directed towards projects that the firm would have undertaken anyway, this would lead to a misallocation of resources. On the other hand, a complementary relationship between private and public funding would legitimize public intervention. Hence, the desirable situation is one in which public R&D
subsidies are crowding in additional private R&D investment. The figure below demonstrates by which means public R&D subsidies may influence private R&D expenditures. The following effects can be distinguished: full crowding out, partial crowding out, no influence, and crowding in.
Figure 12: Effects of R&D subsidies on total R&D expenditures
Source: Streicher et al. (2004)
Full crowding out is the most unfavourable situation. In this situation, the public subsidy does not alter the R&D plans of firms; firms rather reduce their own R&D spending. Partial crowding out occurs if firms raise their
total R&D expenditures, but by less than the full amount of the subsidy. According to Streicher et al. (2004),
this effect is probably the likeliest effect for firms which are not “liquidity constrained”, meaning that their
R&D plans are not kept down by (external) budget constraints (e.g. the inability to get bank credit). In the
presence of liquidity constraints, a possible reaction to a subsidy might be an unchanged level of own R&D
expenditures: the firm would like to do more R&D than it is able to afford because of banks’ unwillingness to
finance it. In this case, the firm would use the subsidy to extend total research by the full amount of the
subsidy. Finally, crowding in may happen if benefits from subsidised research “spill over” to other R&D, via
cumulated know‐how or shared equipment, say, or by firms using subsidies to build up and expand their R&D
departments. Of course, crowding in is the most favoured outcome of an R&D subsidy.
(Figure 12 comprises four panels: full crowding out, no influence on private R&D, partial crowding out, and crowding in. Each panel plots the level of total R&D over time and shows the R&D subsidy, private R&D, and the level of "counterfactual" R&D.)
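The four cases can be expressed as a simple decision rule on observed versus counterfactual R&D spending. The sketch below is a stylised illustration of that rule with invented numbers; in practice the counterfactual level is unobservable and has to be estimated, which is precisely why econometric techniques are required.

```python
# Minimal sketch (illustrative only): classifying the effect of an R&D subsidy on a
# firm's total R&D spending relative to the counterfactual (no-subsidy) level,
# following the four cases distinguished above; the numbers are hypothetical.
def classify_additionality(counterfactual_rd, total_rd, subsidy, tol=1e-9):
    """Classify the subsidy effect by comparing observed total R&D with the counterfactual."""
    if total_rd <= counterfactual_rd + tol:
        return "full crowding out"            # total R&D no higher than without the subsidy
    if total_rd < counterfactual_rd + subsidy - tol:
        return "partial crowding out"         # total rises, but by less than the subsidy
    if abs(total_rd - (counterfactual_rd + subsidy)) <= tol:
        return "no influence on private R&D"  # subsidy adds on fully, own R&D unchanged
    return "crowding in"                      # own R&D increases on top of the subsidy

print(classify_additionality(counterfactual_rd=10.0, total_rd=13.0, subsidy=2.0))  # crowding in
print(classify_additionality(counterfactual_rd=10.0, total_rd=10.0, subsidy=2.0))  # full crowding out
```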
According to Braun et al. (2009), proxies for measuring crowding in effects by simple indicators are as follows (a minimal computation sketch is given after the list):
Total EU‐contribution as a share of total costs in all research projects by specific programme and by R&D performing sector (universities, research institutes, firms, others).
Total R&D costs in FP7‐ICT as a share of the total GERD in EU27.
Annual total R&D costs of national participants in FP7‐ICT as a share of national GERD for all EU27.
Annual total R&D costs of national participants from universities in the FP7 as a share of national
HERD for all EU27.
Annual total R&D costs of national participants from research institutes in the FP7 as a share of
national GOVERD for all EU27.
Annual total R&D costs of national participants from firms in the FP7 as a share of national BERD for
all EU27.
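The computation behind these proxies is straightforward once participant cost data and the Eurostat aggregates are linked. The sketch below illustrates it for the GERD-based proxy; all figures are invented placeholders, not real data.

```python
# Minimal sketch (illustrative only): computing one of the Braun et al. (2009) style
# proxies, annual FP7-ICT R&D costs of national participants as a share of national
# GERD. All figures below are invented placeholders, not real data.
fp7_ict_costs_meur = {"AT": 55.0, "DE": 410.0, "FI": 60.0}          # assumed participant R&D costs
national_gerd_meur = {"AT": 8300.0, "DE": 70000.0, "FI": 7000.0}    # assumed GERD (e.g. from Eurostat)

for country, costs in fp7_ict_costs_meur.items():
    share = costs / national_gerd_meur[country]
    print(f"{country}: FP7-ICT participant R&D costs are {share:.2%} of national GERD")
```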
However, for demonstrating crowding in effects, micro‐econometric simulation techniques and regression
analyses have to be performed. Therefore, the indicators presented above may only provide a vague indication
of input additionality.
On the other hand, output additionality refers to the effects of a subsidy for research in terms of impact on a
firm’s turnover, profit, etc. For measuring impacts on private and social returns to R&D a number of
techniques, including accounting approaches and econometric modelling exist, but these cannot be
implemented on a regular basis in terms of a performance monitoring system40. In order to allow for these
methods to be implemented, performance monitoring needs to capture data on firm size, R&D
behaviour, turnover, market structure, type of research etc., and complement them with FP7‐ICT funding data.
Econometric modelling based upon firm data and funding data may provide insight into differences in firm performance of FP7‐ICT participants and non‐participants. These differences could be represented by the
following indicators:
Sales
Income gained through royalties/IPRs stemming from FP7‐ICT
Gross value added of a firm
Exports of a firm
Increased productivity in companies/per employee
Added value in companies.
Although output additionality may not be measured while projects are ongoing, indications can be provided a) by means of surveys, in which questions are asked regarding the feasibility of the achieved technological outputs in the absence of a public subsidy, and b) by counting different types of achieved technological and scientific outputs and estimating their economic valorisation potential.
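Once firm-level outcome data are linked with funding data, even a simple regression can describe differences between participants and non-participants, although establishing additionality requires proper counterfactual designs. The sketch below uses synthetic data and a hypothetical specification purely to illustrate the mechanics; it is not the study's model and makes no causal claim.

```python
# Minimal sketch (illustrative only): a naive participant vs. non-participant comparison
# of a firm-level outcome via OLS with a participation dummy. The data are synthetic and
# the specification is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
participated = rng.integers(0, 2, n)        # 1 = hypothetical FP7-ICT participant
firm_size = rng.normal(50.0, 10.0, n)       # control variable, e.g. number of employees
outcome = 0.5 * participated + 0.02 * firm_size + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), participated, firm_size])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Estimated participation coefficient: {coef[1]:.2f}")
```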
Finally, behavioural additionality refers to differences in the innovation behaviour of participating firms. Behavioural additionality is not part of the economic impact channel (although it may ultimately affect output additionality), but refers to changes in the mode of knowledge creation and in innovation networks. Hence, aspects of behavioural additionality are covered in those impact channels.
40 See e.g. the following source for an overview of analytical methods: European Commission (2005), Impact Assessment and ex ante evaluation, Commission Staff Working Paper, SEC(2005) 430.
4.3.6 Long list of outcome/impact indicators
The discussion of outcome and impact indicators has shown that the respective data need to be retrieved
directly from the participating organizations in FP7 ICT and the ICT PSP programme via surveys and
standardised reporting procedures.
A number of outcome and impact indicators have been identified for the impact channels knowledge,
networks, human capital, and economic. The key evaluative issues are relevance, effectiveness and
sustainability.
The table below presents a summary of the outcome and impact indicators that have been identified as relevant for FP7‐ICT and the ICT PSP programme.
Table 21: Identified indicators for outcomes and impacts
Proposed Indicator Key
Evaluative Issues
Impact Channel
Instrument/Challenge
Availability of Benchmarks
Availability Sources
Feasibility & Assumptions
Business/Research capabilities: ‐ % of participating organisations achieving significant increase in innovation capabilities
‐ % of participating organisations achieving significant improved use of ICT
‐ Relevance ‐ Effectiveness
‐ Knowledge All
‐ Partly, comparisons possible, if accordance with innovation surveys granted
‐ No ‐ Survey
‐ Main aim of R&D programmes, should be checked and validated in evaluations through case studies, focus groups etc.
‐ Research activities in FP7‐ICT seek to increase capabilities of involved actors. Surveys can provide information on achieved capabilities.
International knowledge position: ‐ Increased reputation/recognition in the scientific community
‐ Changes in international rankings of institutes and involved companies
‐ Relevance ‐ Effectiveness
‐ Knowledge FET
‐ Yes, via creation of institutional indicators i.e. through bibliometrics
‐ No ‐ Survey, Institutional rankings
‐ Relevant as regards international competitiveness of research, but requires up‐front profiling in order to be able to check for changes
‐ Attribution problems virulent: FP7‐ICT will only account for fraction of research output
Increased ability to develop product or process innovations:
‐ increase range of goods or services ‐ replace outdated products or processes, ‐ enter new markets or increase market share ‐ improve quality of goods or services ‐ improve flexibility for producing goods or services
‐ Increase capacity for producing goods or services,
‐ reduced labour costs per unit output ‐ reduce material and energy costs per unit output
‐ reduce environmental impacts ‐ improve health or safety of your employees Capability to introduce different types of organisational innovations:
‐ new business practices ‐ new methods of organising work ‐ new methods of organising external relations
‐ Relevance ‐ Effectiveness
‐ Knowledge ‐ Economic
All ‐ Partly: Via linkage with CIS data
‐ No ‐ Survey, CIS survey
‐ Availability of benchmarking surveys and firm micro data need to be granted
‐ Main aim of R&D programmes ‐ Can most probably only be checked in interim and ex post evaluations, but FP7‐ICT microdata on existing capabilities may provide benchmarks for such activities. Potentially, entry and exit surveys would allow implementation
Increased in‐house creativity and skills: ‐ software development ‐ graphic arts ‐ design of objects, web design, engineering/applied sciences, mathematics/statistics/database management
‐ Effectiveness ‐ Sustainability
‐ Knowledge ‐ Human Resources
All ‐ No ‐No ‐ Survey
‐ Availability of benchmarking surveys and firm micro data need to be granted
‐ Nr. and share of PhD and master graduates taking up employment with industry
‐ Effectiveness ‐ Sustainability
‐ Human Resources
All
‐ No readymade benchmarks
‐ European mobility surveys needed
‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Relevant knowledge transfer indicator i.e.: “the best knowledge transfer is a pair of shoes”
‐ Existence of formal qualification and exchange schemes between participating organisations
‐ Effectiveness ‐ Sustainability
‐ Human Resources
All IPs
‐ No ‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Relevance is highly skewed for different instruments
‐ Indicates sustainability of knowledge transfer
‐ Nr. of researchers participating in qualification and exchange schemes
‐ Effectiveness ‐ Human Resources
All NoEs
‐ No ‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Indicates potential effect on human resources
‐ The number of professional training courses, workshops for industry and end‐users
‐ Effectiveness ‐ Human Resources
‐ Networks
All NoEs IPs
‐ No ‐ No ‐ Exit survey
‐ Comparison between instruments and specific challenges possible
‐ Indication for knowledge transfer
‐ The number of training courses for programme participants
‐ Efficiency ‐ Human Resources
‐ Networks
All NoEs IPs
‐ Comparison between instruments and specific challenges possible
‐ The attraction/retention of world class researchers
‐ Effectiveness ‐ Human Resources
All ‐ No ‐ No ‐ Survey
‐ Attribution problems and definition problems
‐ Nr./% of project participants willing to cooperate again in research endeavours
‐ Effectiveness ‐ Sustainability
‐ Networks All ‐ No ‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Indication for project/programme failure. Good indication for further analyses about reasons
‐ Nr./% of project participants establishing joint research agendas
‐ Effectiveness ‐ Sustainability
‐ Networks All ‐ No ‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Important for sustainability, indication of programme achievements
‐ Nr./% of project participants launching new market alliances
‐ Effectiveness ‐ Sustainability
‐ Networks ‐ Economic
All ‐ No ‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Relevant for overall market impact and economic growth
‐ Nr. /% of project participants establishing joint training and qualification measures.
‐ Effectiveness ‐ Sustainability
‐ Networks ‐ Economic
NoEs ‐ No ‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Indication for knowledge transfer
‐ Nr. of publications for non‐academic audiences ‐ Effectiveness ‐ Economic All ‐ No ‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Indication for knowledge transfer
‐ Presentations, lectures and discussion meetings regarding research results
‐ Effectiveness ‐ Economic All ‐ No ‐ No ‐ Survey
‐ Comparison between instruments and specific challenges possible
‐ Indication for knowledge transfer
‐ Share of the ICT sector in the economy measured as a proportion of GDP and of total employment.
‐ Effectiveness ‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
‐ Growth of the ICT sector measured as a % change of value added at current and constant prices.
‐ Effectiveness ‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
‐ Ratio of the productivity level in the ICT sector with respect to the entire economy
‐ Effectiveness ‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
‐ Productivity growth in the ICT sector. ‐ Effectiveness ‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
‐ Size and nominal growth of ICT markets (IT and telecom) ‐ Effectiveness ‐ Economic Total FP7‐ICT
‐ World ‐ Leading markets
‐ No ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
‐ R&D expenditure by the ICT sector as a % of GDP: Benchmarking of FP7‐ICT participants vs. non participants
‐ Effectiveness ‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ EC admin system ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
R&D expenditure in the ICT sector as a % of total R&D expenditure in the business sector (BERD)
‐ Effectiveness ‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ EC admin system ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
R&D expenditure in the ICT sector as a % of value added (in the ICT sector)
‐ Effectiveness ‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ EC admin system ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
Imports and exports of ICT goods and services as a % of total imports and exports
‐ Effectiveness ‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ EC admin system ‐ Eurostat
‐ Digital Agenda indicator. Problems of attribution, but possibility to benchmark
‐ Classification by NACE codes of participating firms allows higher level of detail
Total EU‐contribution as a share of total costs in all research projects by specific programme and by R&D performing sector (universities, research institutes, firms, others);
‐ Relevance ‐ Effectiveness
‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ EC admin system ‐ Eurostat
‐ Macro level indicator, which provides information on relevance of EU funded ICT research in certain areas/certain industries
Total R&D costs in FP7‐ICT as a share of the total GERD in EU27
‐ Relevance ‐ Effectiveness
‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ EC admin system ‐ Eurostat
‐ Macro level indicator, which provides information on relevance of EU funded ICT research in certain areas/certain industries
Annual total R&D costs of national participants in FP7‐ICT as a share of national GERD for all EU27
‐ Relevance ‐ Effectiveness
‐ Economic Total FP7‐ ICT
‐ World ‐ Leading markets
‐ No ‐ EC admin system ‐ Eurostat
‐ Macro level indicator, which provides information on relevance of EU funded ICT research in certain areas/certain industries
Annual total R&D costs of national participants from universities in the FP7 as a share of national HERD for all EU27
‐ Relevance ‐ Effectiveness
‐ Economic Total FP7‐ ICT
‐ Non participants
‐ No ‐ EC admin system ‐ Eurostat
‐ Macro level indicator, which provides information on relevance of EU funded ICT research in certain areas/certain industries
Annual total R&D costs of national participants from research institutes in the FP7 as a share of national GOVERD for all EU27;
‐ Relevance ‐ Effectiveness
‐ Economic Total FP7‐ ICT
‐ Non participants
‐ No ‐ EC admin system ‐ Eurostat
‐ Macro level indicator, which provides information on relevance of EU funded ICT research in certain areas/certain industries
Annual total R&D costs of national participants from firms in the FP7 as a share of national BERD for all EU27.
‐ Relevance ‐ Effectiveness
‐ Economic Total FP7‐ ICT
‐ Non participants
‐ No ‐ EC admin system ‐ Eurostat
‐ Macro level indicator, which provides information on relevance of EU funded ICT research in certain areas/certain industries
‐ Level of R&D investments of participating firms ‐ Level of overall R&D investments in the ICT sector
‐ Effectiveness ‐ Economic All ‐ Non participants
‐ No ‐ Survey/ EC admin system
‐ Allows for before/after comparison, pre‐requisite for econometric modelling
Income gained through royalties/IPRs stemming from FP7‐ICT
‐ Effectiveness ‐ Economic All ‐ Technology Balance of Payments
‐ No ‐ Survey/ EC admin system
‐ Classic commercialisation indicator
Gross value added of a firm ‐ Effectiveness ‐ Economic All ‐ Non participants
‐ No ‐ Survey/ EC admin system
‐ Allows for before/after comparison, pre‐requisite for econometric modelling
Exports of a firm ‐ Effectiveness ‐ Economic All ‐ Non participants
‐ No ‐ Survey/ EC admin system
‐ Allows for before/after comparison, pre‐requisite for econometric modelling
5 Proposal for key performance monitoring indicators
This chapter provides a proposal for a limited number of performance indicators, which have the potential to
extend the existing monitoring system of FP7‐ICT research and CIP ICT‐PSP. The performance monitoring
indicators provide knowledge on inputs, outputs, outcomes and impacts produced by the programmes, and by the partnerships that run the projects. The proposed performance measures are elaborated from the broad list of indicators in Chapter 4. They should be understood as a tool to depict variants for the future monitoring and evaluation of EU funded ICT research activities.
Chapter 5 is structured as follows: First, general principles for setting up a performance monitoring system are
provided. Then, a number of key performance indicators are presented, and the operationalisation of the
proposed indicators is exemplified. Third, strengths, weaknesses, opportunities and threats associated with
the indicators are highlighted by conducting SWOT analyses focusing on the usefulness and the
implementation of the proposed indicators.
5.1 Principles for an appropriate performance monitoring system
The role of a performance monitoring system is to provide a continuous assessment of programme
achievements organised internally by programme management or a monitoring unit. Through its data gathering approaches, monitoring aims to develop interim performance measures of programme progress, outputs and outcomes. The key performance indicators should provide information on the
relevance, effectiveness, efficiency, utility, and sustainability of EU ICT funding, which are the main evaluative
criteria for EU funded research activities.
Yet, performance indicators cannot substitute evaluations, because a number of data sources and a
combination of evaluation methods are needed to capture outcomes and impacts of a programme. However,
the data and indicators obtained from monitoring should have the capacity to feed into the interim
assessments and ex post evaluations.
FP7‐ICT and CIP ICT‐PSP are characterised by a high degree of complexity, and the targets set in the multi‐
annual Work Programmes are highly specific from a technological point of view. Due to the specificity of the
technological objectives, a performance monitoring system employed by the Evaluation and Monitoring Unit
of DG INFSO should not try to monitor specific indicators for each instrument and each specific Work
Programme challenge. This would increase the administrative burden in an unjustified way. Instead, a set of
more generic key performance measures should be applied across the different instruments and challenges,
and interpreted against the aims set in the Work Programmes.
For this aim, the logic chart analysis has shown that it is possible to develop a concept for performance
measurement, which is based upon:
The impact chain of the programmes (inputs, outputs, outcomes and impacts), and
relevant impact channels (knowledge, research networks and collaboration, human capital, economic
and social impact).
The project team observed that the present monitoring system provides only limited information on the key objectives of EU funded ICT research and innovation, and on the progress made towards achieving the most relevant objectives. These are:
To increase technological excellence of EU ICT industry;
To increase quality of ICT research;
To re‐inforce coordination and pooling of resources;
To ensure training, dissemination and knowledge management;
To focus on research on themes of major societal needs.
The current monitoring system concentrates on inputs, while outputs and outcomes of EU funded research
and innovation activities are neglected. Relevant data of good quality are provided, but they are not linked to the main objectives of FP7‐ICT and CIP ICT‐PSP. The current monitoring system does not fully
utilise information that is collected on a regular basis, because it is only collected in lengthy text format.
In order to minimise the burden on the project participants and the EC administration, the project team
recommends utilising existing data sources better. This can be achieved by introducing structured queries in
project proposals, the project reports, and the periodic and final project reviews. The project reports are
drafted by project coordinators. The periodic reviews and the final project reviews are based upon information
provided by the project leaders, drafted by external reviewers, and reviewed by project officers. Hence, one
has to be aware that the specific contribution of each participant can be assessed only to a limited extent
during monitoring.
Main criteria for devising the proposal of performance monitoring indicators have been:
The feasibility of the proposed monitoring concepts and data collection approaches. The
administrative burden has to be minimised.
The timeliness of the set of indicators. Indicators should be retrieved on an annual basis, allowing for
real time performance monitoring, not ex‐post assessment procedures.
The ability to compare and contextualise the performance measures (e.g. with indicators from similar
programmes, other EU funding mechanisms etc.).
5.2 Key Input Indicators
In the present monitoring system, data on participation and funding are well monitored. Nevertheless, the study also identified shortcomings of the monitoring system as regards the consideration of key aspects of FP7‐ICT and CIP ICT‐PSP. These relate to:
1. The ability to create a well balanced portfolio of projects given the objectives of the individual
instruments and the annual Work Programmes (i.e. as to provide incentives for new actors to be
engaged in European research endeavours and to integrate European research actors).
2. The involvement of relevant industrial actors and research organisations (as to provide a baseline for
evaluations and impact assessments).
The analysis has further shown that the groups of actors to be involved in the different thematic areas and the specific instruments differ considerably from each other. This means that for each thematic area (specific objective) a baseline for desired participation should be set up by the Commission officials in the course of the preparation of the multi‐annual Work Programmes. This should serve as a reference framework for performance measurement.
For the first criterion, the project team suggests establishing five additional key input measures, which demonstrate whether EU funded ICT research and innovation programmes are able to create a well balanced portfolio of projects given the objectives of the individual instruments (Table 22). The indicators should be provided on an annual basis. A differentiation by specific objective and funding instrument is suggested.
Table 22: Key input measures – project portfolio

Nr. | Proposed Indicator | Key Evaluative Issues | Impact | Instrument | Benchmarks | Availability | Sources
1 | New entrants: % of new entrants submitting proposals | Effectiveness | Networks | All | Different thematic area and instrument | No | EC administrative system
2 | New coordinators: % of new coordinators in proposals | Effectiveness | Networks | All | Different thematic area and instrument | No | EC administrative systems
3 | New collaborations: % of partners who have collaborated for the first time in an ICT related FP project | Effectiveness | Networks | All | Different thematic area and instrument | No | EC administrative system
4 | Frequency of participation of individual institutions (per year) | Relevance, Effectiveness | Networks, Economic | All | Different thematic area and instrument | No | EC administrative system
5 | Repetitive cooperation: Nr. of collaborative projects with the same core partners | Effectiveness, Sustainability | Networks | All | Different thematic area and instrument | No | Partly EC administrative system; Survey
For the second criterion, the project team suggests incorporating a number of organisation level data (Table 23). The data provide information on the types of funded firms. They are critical for econometric analyses to be performed in interim and ex post evaluations, and allow project participants to be characterised in more detail than at present.
Table 23: Key input measures – relevant actors

Nr. | Proposed Indicator | Key Evaluative Issues | Impact | Instrument | Benchmarks | Availability | Sources
6 | General data characterising the organisation: annual turnover of the firm (last three years), cash flow (last three years), exports (last three years), number of employees (last three years), year of foundation, area of economic activity (NACE code), location | Effectiveness, Efficiency | Economic | All | Different thematic area and instrument; Sectors (NACE classification); Regions | No | EC administrative system; Eurostat
7 | Research and innovation data: R&D expenditures (last three years), R&D personnel (last three years), nr. of patents applied for and granted in the last three years, nr. of publications published in the last three years, nr. of researchers (PhDs) employed in the last three years, amount of public funding of research and innovation | Effectiveness, Efficiency | Knowledge, Human Resources, Economic | All | Different themes and instruments; Sectors (NACE classification); Regions | No | EC administrative system; Eurostat
In the following, the relevance of these two groups of performance indicators is presented in detail. A SWOT analysis highlights the strengths, weaknesses, opportunities and threats associated with each group of indicators. Prerequisites for implementation are discussed.
5.2.1 Project portfolio indicators
In the annual STREAM reports, the Evaluation and Monitoring Unit of DG INFSO provides a number of core
input indicators for FP7‐ICT and CIP ICT‐PSP differentiated by strategic objectives and funding instrument. At
the level of instruments and strategic objectives, the STREAM reports offer information on the participation of
different types of organisations (enterprises, higher education institutions, non‐profit research organisations,
public administration, and other). The data are relevant, because one of the differentiating characteristics of
the funding instruments concerns the profile of participants. The degree to which funding instruments have
attracted their expected participants is important for the relevance and effectiveness of the respective
instruments.
The present STREAM indicators can easily be obtained through the EC reporting systems. In addition to these,
the five proposed indicators provide information on desired criteria of FP7‐ICT and CIP ICT‐PSP related funding.
Table 24 presents the results of the SWOT analysis of the indicator category “project portfolio”.
Table 24: SWOT analysis for proposed indicators on the portfolio of collaborative projects

Strengths:
‐ The indicators provide information on whether the ICT programme is open to new participants (a pre‐requisite for integration of ERA) and able to build up and structure research agendas in a European dimension.
‐ The indicators provide information on relevance and effectiveness of EU‐ICT programmes.
‐ Indicators 1‐4 are easy to establish with the existing EC administrative system.
‐ No additional information needs to be retrieved by project participants.

Weaknesses:
‐ No international benchmarks are available for the indicators.
‐ Operationalisation of the indicator "repetitive cooperation" is difficult:
  ‐ It is unlikely that all project participants appear again in a number of projects.
  ‐ No automatic rule/threshold can be established for this indicator.
  ‐ Considerable efforts for EC project officers to judge case by case.

Opportunities:
‐ The indicators have the potential to be considered in the planning for the annual Work Programmes in terms of targets.
‐ The indicators can be transferred to other areas of EU research and innovation funding.

Threats:
‐ The different thematic challenges have different stages of maturity, numbers of actors etc. This could have an effect on the representativeness of data collected.
‐ If indicators are applied at strategic objective level, it is difficult to ensure consistency of presentation of results as strategic objectives are likely to change.
Prerequisites for implementation
Except for the indicator "repetitive cooperation", all indicators can be retrieved from existing EC administrative systems. Prerequisites for the implementation of the indicators are that a) each project participant
has a unique identifier and that b) changes of instruments and strategic objectives are taken into account in
order to present consistent information on “new entrants”, “new coordinators”, and “new collaborations”. For
the assignment of participant identifiers, the data gathering process should adhere to the standards defined in
the Oslo Manual (OECD, Eurostat 2005). However, “legal entity” is not in all cases the correct level for
assignment. The “legal entity” may be considered the primary statistical unit if it represents an independent
economic entity with decision‐making autonomy. In particular for enterprise groups, enterprises that have
several activity areas, and multinational enterprises, the statistical unit should be the “kind of activity unit – an
enterprise or part of an enterprise which engages in one kind of economic activity without being restricted to
the geographic area in which that activity is carried out” (OECD, Eurostat 2005).
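Provided that unique participant identifiers are in place, indicators such as "% of new entrants submitting proposals" reduce to simple set operations over the administrative data. The sketch below assumes a simplified record structure and hypothetical identifiers purely for illustration.

```python
# Minimal sketch (illustrative only): computing indicator 1, "% of new entrants
# submitting proposals", per call year from an assumed (year, proposal, participant)
# structure with unique participant identifiers. All identifiers are hypothetical.
proposals = [
    (2010, "prop-1", "org-A"), (2010, "prop-1", "org-B"),
    (2011, "prop-2", "org-A"), (2011, "prop-2", "org-C"),
    (2011, "prop-3", "org-D"),
]

seen = set()
for year in sorted({y for y, _, _ in proposals}):
    participants = {p for y, _, p in proposals if y == year}
    new_entrants = participants - seen
    share = len(new_entrants) / len(participants)
    print(f"{year}: {share:.0%} new entrants {sorted(new_entrants)}")
    seen |= participants
```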
The implementation of the indicator on repetitive co‐operations among partners cannot be achieved through
the present EC administrative system. A first option to implement this indicator is to gather data at the stage
of a project proposal, or at the stage of project start through the project coordinator. The following questions
could be applied:
Table 25: Tracking previous RTD collaborations

Engagement of partners in collaborative research:
1. The core partners of the project have collaborated previously (Yes / No)
2. The core partners of the project have collaborated in research projects which were entirely funded by the partners (Never / Sometimes / Often)
3. The core partners of the project have collaborated in nationally funded ICT research projects (Never / Sometimes / Often)
4. The core partners of the project have collaborated in EU funded ICT research projects (Never / Sometimes / Often)

In which research field did this cooperation take place:
1. The cooperation took place in a similar/the same research field (Yes / No)
A second option is to pursue network analysis on participating institutions. Network analysis can inform policy
makers of the changing organisational landscape of ICT research and innovation, and point to the effects of EU
funded ICT research in that transformation. Network analysis may reveal “hub” organisations and their critical
roles in maintaining the connectivity of the network and the progress towards the ERA. However, the
complexity of implementation of network analysis goes beyond the regular capacities of a monitoring unit.
Instead, network analysis should be part of interim and ex post evaluations.
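As a purely illustrative sketch (not part of the study's proposals), the following Python fragment shows how repeated co‐operation between partners and simple network "hubs" could be derived from a project–participant table once unique participant identifiers exist; the field names, the example data and the threshold of more than one joint project are assumptions.

# Minimal sketch: derive repeated partner pairs and simple "hub" organisations
# from (project_id, participant_id) records. All identifiers are hypothetical.
from itertools import combinations
from collections import Counter

participations = [
    ("P1", "ORG-A"), ("P1", "ORG-B"), ("P1", "ORG-C"),
    ("P2", "ORG-A"), ("P2", "ORG-B"),
    ("P3", "ORG-B"), ("P3", "ORG-D"),
]

# Group participants by project.
projects = {}
for project, org in participations:
    projects.setdefault(project, set()).add(org)

# Count how often each (undirected) pair of organisations appears together.
pair_counts = Counter()
for orgs in projects.values():
    for pair in combinations(sorted(orgs), 2):
        pair_counts[pair] += 1

# "Repetitive cooperation": pairs that collaborate in more than one project.
repetitive_pairs = {pair: n for pair, n in pair_counts.items() if n > 1}

# Simple "hub" measure: number of distinct partners per organisation (network degree).
degree = Counter()
for a, b in pair_counts:
    degree[a] += 1
    degree[b] += 1

print(repetitive_pairs)       # e.g. {('ORG-A', 'ORG-B'): 2}
print(degree.most_common(3))  # organisations with the most distinct partners

A full network analysis would add measures such as centrality and connectivity, which is one reason why the study assigns it to interim and ex post evaluations rather than to routine monitoring.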
5.2.2 Indicators on key actors
Another important issue of participation and funding is to attract relevant organisations in terms of achieving
the desired goals of FP7‐ICT. Output data, such as counts and statistical analyses of publications, conference
proceedings, patents and other forms of IPRs, provide information on whether the participants have achieved the
desired results. However, an overall performance framework should take into account the previous
performance of organisations to detect whether ICT research and innovation funding has had effects on R&D
inputs, outputs and behaviour. Therefore, the project team suggests including a) general data characterising
the participating organisations, and b) data characterising the research and innovation behaviour of the
participating organisations. Table 26 presents the SWOT analysis for the inclusion of these indicators.
Table 26: SWOT analysis for the proposed indicators on the involvement of key actors
Strengths:
‐ Highly relevant for assessment of outputs, outcomes, and impact of EU ICT funding.
‐ Provides a characterisation of differences between instruments and strategic objectives.
‐ If collected at the application phase, much higher level of completeness than through surveys.
Weaknesses:
‐ Data gathering can only take place at the level of the project participant.
‐ No short term usability of indicators.
‐ Explanatory power depends upon international benchmarks: these are restricted to the research and innovation behaviour of firms.
‐ Considerable time is needed to prepare these indicators for use.
Opportunities:
‐ The indicators can be transferred to other areas of EU research and innovation funding.
‐ In the course of mid‐term and ex post evaluations, data collection via additional surveys can be restricted to relevant questions on present activities and results of research funding.
‐ Participant databases allow for micro‐econometric modelling.
‐ Allows tracking the long term research and innovation behaviour of firms.
‐ Allows employing econometric modelling based upon participant databases.
Threats:
‐ The additional burden on participants may further lower the attractiveness of EU funding schemes.
‐ The participant databases need to be maintained and checked regularly. This may cause additional burden on EC administrators.
‐ Data graveyard, if databases are not provided to evaluators of mid‐term and ex post evaluations.
‐ Comparator data may not be provided because of data protection regulations, which hampers the usefulness of the indicators.
Prerequisites for implementation
Data characterising the participating organisations may be retrieved either from commercial databases or through the European Commission’s own initiatives. Commercially available databases contain information
on cashflow, turnover, number of employees, ownership, industry classification, location, and the year of
incorporation. Attention needs to be paid to the feasibility of linkages between commercial databases and EC
databases.
Data on research and innovation behaviour are rarely found in commercial databases. Therefore, data
gathering should take place at the time of project proposal acceptance, after negotiation has taken place. The
data should be retrieved via the Electronic Proposal Submission System of the European Commission, which
makes it possible to ensure completeness of data and to build up a database with key data on participating organisations
and their research and innovation patterns.
The burden of data provision rests exclusively with the participants. The EC services would need to establish a database which gathers this information in electronic form. As each participant is supposed to have a unique identifier, data need only be entered once for a number of project applications. To allow for
comparability of data and international benchmarking, the data gathering process should adhere to the
standards defined in the Oslo Manual (OECD, Eurostat 2005) for innovation data, and the Frascati Manual
(OECD 2002) for research funding and human resources. International comparability of the collected data has
to be assured.
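For illustration only, a participant master record along these lines could be structured as sketched below; the field names are hypothetical and would need to be aligned with the actual EC administrative systems and with the Oslo and Frascati Manual definitions.

# Minimal sketch of a participant master record with a unique identifier and a few
# Oslo/Frascati-style characteristics. Field names are illustrative assumptions,
# not the schema of any existing EC system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParticipantRecord:
    participant_id: str                  # unique identifier, stable across calls and programmes
    legal_name: str
    statistical_unit: str                # "legal entity" or "kind of activity unit"
    country: str
    organisation_type: str               # e.g. university, research organisation, SME, large enterprise
    nace_code: Optional[str] = None      # industry classification (enterprises only)
    employees: Optional[int] = None
    turnover_eur: Optional[float] = None
    rd_expenditure_eur: Optional[float] = None   # Frascati-style R&D input
    rd_personnel_fte: Optional[float] = None

# Because the identifier is reused for every proposal an organisation submits,
# the characterisation data need only be entered once and updated periodically.
example = ParticipantRecord(
    participant_id="EU-ICT-000123",
    legal_name="Example Research Institute",
    statistical_unit="kind of activity unit",
    country="AT",
    organisation_type="research organisation",
    employees=250,
    rd_expenditure_eur=12_000_000.0,
    rd_personnel_fte=180.0,
)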
5.3 Key Output Indicators
Output indicators need to show how a programme is progressing towards meeting its objectives. The current
monitoring system regularly provides information only on the number of scientific articles and open access journals and on the number of patent applications. Hence, there is considerable room to improve the
performance monitoring of outputs.
The analysis of the multi‐annual Work Programmes for FP7‐ICT identified the following relevant impact
channels for the Work Programmes: knowledge, networks and human capital. For knowledge indicators,
tangible knowledge outputs (new technology, methodology, components, algorithms, etc.) prevail. However,
apart from peer reviewed publications and all forms of IPRs, no quantifiable indicators can be benchmarked
across the Work Programmes or R&D programmes. Output indicators often focus on counts of outputs, but quality aspects and the appropriateness of results are not monitored.
Furthermore, when using patent applications and journal publications as an output measure, one has to be
aware that these might not be a key output for all the different challenges and strategic objectives of the FP7‐
ICT programme. For a scheme like FET, which aims at R&D exploring unconventional ideas and risky scientific
paradigms for industrial research, peer reviewed journal publications are a key output measure, but for other
parts of the Work Programmes which are much more industry oriented, publications may only be key for
academic partners. For intangible outputs (knowledge base, competencies, learning effects), there is a similar
challenge of creating sound indicators. For network indicators, the project team has shown that network
effects primarily occur at the level of research teams; they are covered by FP7‐ICT at the level of inputs.
For CIP ICT‐PSP the analysis has demonstrated that one of its central features is to pilot interoperable cross‐border services and to create interoperable standards/solutions in Europe. CIP ICT‐PSP does not cover
research activities. Its objectives cover increased uptake of digital services in the public sector, in areas of high
societal relevance, reducing fragmentation of markets, and increasing the interoperability of public services.
Compared to the FP7 instruments, the objectives and outputs of CIP ICT PSP are not related to R&D but to the
deployment of ICT technology and to alignment and interoperability across Europe. From a monitoring point of
view, the issue of CIP ICT‐PSP project characteristics does not need special consideration as regards input
indicators, if the involvement of stakeholder groups (public entities, agencies etc.) is covered. As regards its
specific objectives, the desired outputs are measurable and quantifiable, i.e. the increase in uptake of ICT in the public sector can be both monitored and benchmarked.
Based upon the analysis of the intervention logic and the comparative analysis, the project team recommends
that the performance monitoring system for ICT research should retrieve:
‐ Performance indicators measuring the quality of outputs and the progress made, based upon the periodic project reviews and the final project reviews;
‐ Output counts based upon information provided in the project reports.
The proposed indicators in Table 27 focus on the quality and effectiveness of the programme implementation,
and have the capacity to provide additional information on the knowledge, networks, human resources and economic impact channels. The periodic reports and the project review reports constitute the main data
gathering methods for the indicators.
Table 27: Key output indicators retrieved by periodic reviews and final reviews
1. Mean score of satisfaction with overall project progress/project results
   Key evaluative issues: Quality, Effectiveness; Instrument: All; Benchmarks: different themes and instruments; Sources: periodic reviews
2. Achievement of project results in relation to the funding objectives
   Key evaluative issues: Quality, Effectiveness; Impact: Knowledge, Human Resources, Networks; Instrument: All; Benchmarks: different themes and instruments; Sources: periodic reviews
3. Evidence on contribution to the following types of outputs: tangible knowledge outputs (publications, PhDs, new tools, proofs of concept, proofs of technology); intangible knowledge outputs (enhanced skills, enhanced knowledge bases); patents, licences, IPRs; creation of interoperable standards; creation of interoperable services
   Key evaluative issues: Effectiveness; Impact: Knowledge, Human Resources, Economic; Instrument: All; Benchmarks: different themes and instruments, other programmes; Sources: final project reviews
4. Mean scores on satisfaction with dissemination (plans) of project results
   Key evaluative issues: Effectiveness; Impact: Economic, Social; Instrument: All; Benchmarks: different themes and instruments; Sources: final project reviews
5. Expected economic impacts which can be achieved within the next 3 years
   Key evaluative issues: Effectiveness; Impact: Economic; Instrument: STREPs, IPs, CIP ICT‐PSP; Benchmarks: different themes and instruments; Sources: final project reviews
6. Tangible knowledge results through periodic and final project reports: number of publications in peer reviewed journals; number of publications in non‐peer reviewed journals; number of articles presented in scientific conferences in different tiers; number of qualifications obtained in the course of the project (MAs, PhDs)
   Key evaluative issues: Effectiveness; Impact: Knowledge, Human Resources; Instrument: All; Benchmarks: different themes and instruments; Sources: final project reports
In the following, the relevance of the performance measures for EU‐ICT related research programmes is
discussed. The SWOT analysis shows strengths, weaknesses, opportunities and threats of the indicators.
Prerequisites for implementation are presented.
5.3.1 Results of the project reviews
All FP7‐ICT projects and CIP ICT‐PSP projects are subject to periodic reviews and final project reviews, which
provide qualitative assessments about the projects. If these reviews contain standardised electronic queries,
which are stored in a database/repository, a number of indicators can be distilled from this type of
information.
Table 28 shows the results of the SWOT analysis for the inclusion of this type of indicators.
Table 28: SWOT analysis for results of independent project reviews
Strengths:
‐ Data based upon information provided by project participants and assessed by external reviewers
‐ No additional burden on reviewers, as reports have to be produced on a regular basis
‐ Capability to include a number of performance items which cannot be monitored otherwise
‐ Ability to include all relevant impact channels
‐ Performance measurement at the level of projects, not participants
‐ Can be easily adapted for different types of programmes
Weaknesses:
‐ Restricted to qualitative assessments
‐ Limited opportunities for benchmarking
‐ Requires feasibility tests in order to ensure acceptance by reviewers
Opportunities:
‐ Indicators can complement mid‐term and ex‐post evaluations
‐ Indicators can be used for programme learning purposes
‐ Indicators provide working hypotheses for mid‐term and ex‐post evaluations
Threats:
‐ Performance measures are ultimately based upon answers from project participants. These are prone to positive bias.
‐ Data graveyard, if databases are not provided to evaluators of mid‐term and ex post evaluations.
Prerequisites for implementation
In order to fully exploit the information contained in the periodic review reports, these need to be carefully
adapted:
‐ The periodic review reports and the final project review reports need to be in an electronic format, using a standardised reporting tool.
‐ Standardised performance monitoring questions need to be included in the reports. In addition to text based assessments, questions about project performance should be asked either as yes/no questions or as Likert scale questions. Structured electronic fields need to be used, which are linked to a repository of results. Linkages to the project database, the participant database, the instruments and the specific challenges need to be provided.
‐ The performance monitoring questions should be the same for all instruments and challenges of the ICT FP and the CIP ICT‐PSP programme. Hence, questions should be fairly broad, and the administrators of the Work Programmes should be able to use the indicators stemming from the periodic reviews as a source of information and for the adaptation of plans.
‐ In order to ensure acceptance of this type of measurement, a limited number of concrete questions has to be formulated and tested in the field.
Table 29 provides a set of pilot questions, which need to be answered by the project reviewers, based upon
the information retrieved from project coordinators.
Preferably, for each project the questionnaire has to be filled in at mid‐term and at the end of a project.
The questionnaire results should be published annually, differentiated by specific challenges of the Work
Programme and by funding instrument.
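To illustrate how the structured review answers could be turned into the proposed mean satisfaction scores, the sketch below aggregates hypothetical Likert responses by challenge and funding instrument; the numeric coding (1 = very low to 5 = very high) and the field names are assumptions, not features of the existing review process.

# Minimal sketch: aggregate reviewer Likert answers into mean satisfaction scores
# per challenge and funding instrument. Coding and field names are hypothetical;
# "don't know" and "not applicable" answers are simply omitted.
from collections import defaultdict
from statistics import mean

reviews = [
    # (challenge, instrument, question_id, score)
    ("Challenge 1", "STREP", "overall_progress", 4),
    ("Challenge 1", "STREP", "overall_progress", 5),
    ("Challenge 1", "IP",    "overall_progress", 3),
    ("Challenge 5", "STREP", "overall_progress", 2),
]

scores = defaultdict(list)
for challenge, instrument, question, score in reviews:
    scores[(challenge, instrument, question)].append(score)

for (challenge, instrument, question), values in sorted(scores.items()):
    print(f"{challenge} / {instrument} / {question}: mean = {mean(values):.2f} (n = {len(values)})")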
Table 29: Pilot questionnaire for periodic reviews and final project reviews
Degree of satisfaction
For each of the following categories, please indicate the level of satisfaction with the progress made or the results achieved in the project (scale: Very low / Low / Moderate / High / Very high / Don't know / Not applicable).
A Overall project progress/results
1 The current status of the project
2 Achievement of project results in relation to the targets of the funding instrument
3 Achievement of project results in relation to the funded activities/project plan
B The balance of the partnership between the different actors involved
4 The composition of the project team has proven to be adequate, in order to successfully carry out the project activities
5 The resources for carrying out the project activities are allocated in an appropriate manner
C Dissemination of project results
6 The exploitation plan is likely to ensure an adequate dissemination of project results
7 The appropriate set of stakeholders (e.g. governmental bodies, consumers, enterprises) have been involved
8 Due to the dissemination activities, the project is likely to get access to relevant target markets
For each of the following categories, please describe the scale of the impacts associated with the reviewed project (scale: Very low impact / Low impact / Moderate impact / High impact / Very high impact / Don't know / Not applicable).
D The project is likely to succeed/has succeeded in producing the following outputs
9 An appropriate number of 'tangible' knowledge outputs, e.g. publications, PhDs, new tools and techniques etc.
10 A significant amount of 'intangible' knowledge outputs, e.g. enhanced knowledge bases, improved skills and capabilities etc.
11 Patents, licences, copyrights and other IPRs
12 Open source software
13 Open source algorithms
14 Creation of interoperable standards
15 Creation of interoperable services
16 Creation of plans for sustained services
17 New or improved products
18 New or improved processes
19 New or improved services
20 New or improved standards, regulations or policies
21 New start-up companies or spin-offs
5.3.2 Quantitative output indicators
The present monitoring system concentrates on peer reviewed publications and dissemination activities. The
project team suggests systematically collecting the following outputs:
‐ Number of publications in peer reviewed journals
‐ Number of publications in non‐peer reviewed journals
‐ Number of articles presented in scientific conferences/conference proceedings in different tiers
‐ Number of qualifications obtained in the course of the project (MAs, PhDs)
The main aim of collecting information on tangible project outputs is to establish databases for evaluative
studies. So far, systematic bibliometric studies have been conducted based upon the information gathered in
the monitoring system. In order to utilise output counts also for monitoring purposes, the European
Commission should consider a ranking of peer reviewed publications and conference
presentations/proceedings into prestige tiers (see footnote 41).
It is not suggested to implement such an approach in the short run, but it is worthwhile to explore methods which not only count journal publications and articles in conference proceedings but also provide an indication of the quality and effectiveness of EU funded ICT research by means of output counts.
Table 30 presents the SWOT analysis for establishing a publication indicator system, based upon prestige tiers.
Table 30: SWOT analysis for a prestige tiers based publication measurement
Strengths:
‐ Indicators based upon information provided by project participants
‐ No additional burden on participants, as the data have to be provided in the project final report.
‐ Performance measurement at the level of projects and participants
‐ Can be easily adapted for different types of programmes
Weaknesses:
‐ High administrative costs for set‐up and implementation
‐ Long time needed for creating a reference list of conference venues/journals. No clear rules for the rating of venues exist.
‐ Subject to change and therefore requires constant updates
‐ Attribution problems: can publications/conference publications be linked to the research project?
Opportunities:
‐ Indicators can serve as a reference framework for European ICT research and innovation.
‐ The impact of a reference tier system may exceed the level of EU funded ICT research and allow for benchmarking of ICT research activities across Europe.
‐ Indicators can be extended and further tested by the academic community.
Threats:
‐ Low acceptance by research organisations, if the system is not reasonable for the academic and industrial research community
‐ Too large a diversity among ICT research actors may decrease the likelihood of acceptance
‐ Unintended behaviour of researchers (e.g. addressing only highly ranked venues, although other means of dissemination are of similar importance)
41 The implementation of this type of approach is described in detail in Butler, L. (2008), ICT assessment: moving beyond journal outputs, Scientometrics, Vol. 74, No. 1, 39-55.
Prerequisites for implementation
Full exploitation of output counts demands the set‐up of standardised repositories for publications. The
repository needs to include basic information on the publications as provided in the current project reports
(title, main author, title of the periodical or the series, number, date or frequency, publisher, place of
publication, year of publication, permanent identifiers). The repository data have to be linked with the project
databases. Information on publications obtained during the lifetime of a project should be collected on an annual basis.
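A minimal sketch of such a repository, linked to the project database through a project identifier, is given below; the table and column names are illustrative assumptions and do not describe an existing EC system.

# Minimal sketch of a publication repository linked to the project database.
# Table and column names are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project (
    project_id  TEXT PRIMARY KEY,
    instrument  TEXT,
    challenge   TEXT
);
CREATE TABLE publication (
    publication_id INTEGER PRIMARY KEY,
    project_id     TEXT REFERENCES project(project_id),
    title          TEXT,
    main_author    TEXT,
    venue          TEXT,     -- periodical, series or conference
    venue_type     TEXT,     -- 'journal' or 'conference'
    peer_reviewed  INTEGER,  -- 1 = yes, 0 = no
    year           INTEGER,
    doi            TEXT      -- permanent identifier
);
""")

# Example query: peer reviewed journal publications per challenge and instrument.
rows = conn.execute("""
    SELECT p.challenge, p.instrument, COUNT(*) AS n_publications
    FROM publication pub
    JOIN project p ON p.project_id = pub.project_id
    WHERE pub.peer_reviewed = 1 AND pub.venue_type = 'journal'
    GROUP BY p.challenge, p.instrument
""").fetchall()
print(rows)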
Additional efforts need to be undertaken, if a tier based system is to be applied. For journal publications, the
“Field Normalised Journal Impact Factors” could serve as a basis for creating a tier based system. The journal
impact factor is a measure of the frequency with which the "average article" in a journal has been cited in a
given period of time. Journal impact factors are produced regularly by the main bibliometric data providers
Thomson‐Reuters (ISI) and Elsevier (SCOPUS). The main methodological problem of journal impact factors is
that a) the citation behaviour varies among fields of science and therefore leads to systematic differences, and
b) there are no statistics to inform us whether differences are significant (Leydesdorff and Opthof, 2011).
Therefore, several attempts at field normalisation have been made (Moed et al., 1995; Moed, 2010). Despite the existence of journal rankings, considerable research efforts are needed to implement such a system tailored to the needs of European ICT research and innovation outputs.
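Purely to illustrate the normalisation idea (not any specific method from the cited literature), a journal's impact factor can be related to the average impact factor of the journals in its field, so that values become comparable across fields with different citation behaviour; all figures in the sketch are invented.

# Minimal sketch of field normalisation: divide a journal's impact factor by the
# mean impact factor of its field. Values above 1.0 indicate above-average citation
# rates within the field. All numbers are invented examples.
def field_normalised_if(journal_if: float, field_ifs: list) -> float:
    field_average = sum(field_ifs) / len(field_ifs)
    return journal_if / field_average

software_engineering_ifs = [0.8, 1.2, 1.5, 2.0, 3.1]
print(round(field_normalised_if(2.0, software_engineering_ifs), 2))  # ~1.16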
The prerequisites for implementing a tier based system for conference proceedings are quite extensive and would require considerable resources from the European Commission services and the research community. This is exemplified by the Australian example, which arrived at a comprehensive list of ICT conferences classified into prestige tiers through a five stage process comprising the following steps:
‐ Identifying relevant conferences/events;
‐ Preliminary ranking of conferences;
‐ Extensive consultation with the research community;
‐ Testing live data; and
‐ Assessing the measures.
In particular, ranking the relevant conferences proved problematic because of a lack of information regarding issues such as acceptance rates, meaning that the process had to rely more heavily on peer opinions than expected at the beginning. The delineation of research areas also presented significant problems, because ICT is a fast developing field in which no accepted, consensus‐based classification scheme of research areas exists. However,
the project managed to arrive at a four tier system, which may serve as an example for the European Union as
well. The suggested tiers are presented in Table 31.
Although the Australian example indicates that the efforts for implementing such a system are quite
burdensome, the article stressed that the ICT community engaged actively in the process not only because
standard bibliometric measures were deemed inappropriate for an assessment of the ICT discipline, but also
because, at the individual level, researchers were encountering major hurdles (e.g. relating to promotions).
As tier based systems of publication counts are the only option which would allow a monitoring unit to
establish publication indicators which also take into account research quality, it could be worthwhile to further
explore this approach at a European scale. However, for both peer reviewed publications and conferences, a
truly limiting factor is that considerable research efforts are needed for establishing such an approach.
Table 31: Descriptors for ICT conference Tiers
Overall criterion: Quality of the papers presented at the conference
Tier 1
Typically a Tier 1 conference would be one of the very best in its field or subfield in which to publish and would
typically cover the entire field/subfield. These are conferences where most of the work is important (it will
really shape the field), where researchers boast about getting accepted, and where attendees would get value
from attending even if they didn’t have a paper themselves. Acceptance rates would typically be low and the
program committee would be dominated by field leaders, including many from top institutions (such as
Stanford, MIT, CMU, UC Berkeley, Cornell, UWashington, UTexas, UIllinois, Oxford, Cambridge, Edinburgh,
Imperial College, Microsoft Research, IBM Research, and so on). Tier 1 conferences would be well represented
in the CV of a junior academic (assistant professor) aiming for tenure at a top 10 US university.
Tier 2
Publishing in a Tier 2 conference would add to the author’s respect, showing they have real engagement with
the global research community and that they have something to say about problems of some significance.
Attending a Tier 2 conference would be worth travelling to if a paper got accepted. Typical signs of a Tier 2
conference are lower acceptance rates and a program committee and speaker list which includes a reasonable
fraction of well‐known researchers from top institutions (as well as a substantial number from weaker
institutions), and a real effort by the program committee to look at the significance of the work.
Tier 3
Tier 3 covers conferences where one has some confidence that research was done, so publishing there is
evidence of research‐active status (that is, there is some research contribution claimed, and a program
committee that takes its job seriously enough to remove anything not in line with the state of art), but it is not
particularly significant. This is where PhD students might be expected to send early work. It also includes
places whose main function is the social cohesion of a community. Typical examples would be regional
conferences or international conferences with high acceptance rates, and those with program committees that
have very few leading researchers from top international institutions.
Tier 4
All the rest.
Source: Butler, L. (2008), ICT assessment: moving beyond journal outputs, Scientometrics, Vol. 74, No. 1, 39‐55.
5.4 Key Outcome and Impact Indicators
Programme outcomes and impacts are measured in interim and ex post evaluations rather than in monitoring
exercises. Attribution problems and timing are the main reasons why outcomes and impacts are rarely
considered in programme monitoring. Key dimensions addressed by outcome and impact performance
measures are matters of relevance, effectiveness and sustainability. These relate to the impact channels
knowledge, networks, human capital, and socio‐economic factors which have been used to structure the
analysis of outcome and impact performance measures and indicators.
A key feature of outcome and impact measures is that these emerge at the level of individual participants and ‐ in the long run ‐ at the level of industry branches and society rather than at the project level. The
project team therefore suggests including a number of outcome and impact indicators, which provide
information on a) the level of outcomes and impacts and b) the nature of outcomes and impacts. The
proposed measures are presented in Table 32.
Table 32: Key outcome and impact measures
1. Impacts on participants and non‐participants
   Key evaluative issues: Effectiveness, Sustainability; Impact: Knowledge, Economic; Instrument: All; Benchmarks: no; Availability: no; Sources: final project reports
2. Ability to develop product/services or process innovations
   Key evaluative issues: Relevance, Effectiveness; Impact: Knowledge, Economic; Instrument: All; Benchmarks: partly (via linkage with CIS data); Availability: no; Sources: final project reports, CIS survey
3. Capability to introduce different types of organisational innovations
   Key evaluative issues: Relevance, Effectiveness; Impact: Knowledge, Economic, Networks; Instrument: All; Benchmarks: partly (via linkage with CIS data); Availability: no; Sources: final project reports, CIS survey
4. Increased creativity and skills
   Key evaluative issues: Effectiveness, Sustainability; Impact: Knowledge, Human Resources; Instrument: All; Benchmarks: no; Availability: no; Sources: final project reports
5. Specific types of tangible economic results at the level of participants and the economy at large
   Key evaluative issues: Effectiveness, Sustainability; Impact: Economic; Instrument: All; Benchmarks: no; Availability: no; Sources: final project reports
In the following, the implementation of outcome and impact indicators is discussed.
5.4.1 Outcome and impact indicators
The information on project outcomes and project impacts should be collected via the project final reports. To
some extent benchmarks may be achieved through a comparison with Community Innovation Survey data.
Information can be distilled from the project final report if structured electronic queries, geared towards project coordinators, are included in these reports. Facts about the projects need to be gathered in a
standardised electronic manner.
The query should focus on the effects of the project on the participants and beyond, and provide indications of
the functioning of the programme. The query should be an integral part of the project final report.
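As an illustration of the aggregate benchmarking against Community Innovation Survey data mentioned above, the sketch below compares the share of projects reporting high or very high innovation impacts with a CIS-style share of innovating enterprises; the coded answers and the benchmark value are hypothetical.

# Minimal sketch: compare final-report answers on product/process innovation impact
# (coded 1 = very low ... 5 = very high, "don't know"/"not applicable" omitted)
# with a hypothetical aggregate CIS benchmark.
answers = [5, 4, 2, 4, 3, 5, 1, 4]

share_high_impact = sum(1 for a in answers if a >= 4) / len(answers)

# Hypothetical benchmark: share of enterprises with product or process innovations
# in the relevant CIS wave and sector.
cis_share_innovators = 0.45

print(f"Projects reporting high/very high innovation impact: {share_high_impact:.0%}")
print(f"CIS benchmark (share of innovating enterprises):     {cis_share_innovators:.0%}")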
Table 33 presents the SWOT analysis for an outcome oriented indicator system, applied at the end of a project and based upon a structured query among project coordinators.
Table 33: SWOT analysis for additional outcome and impact indicators based upon a structured query
Strengths:
‐ Indicators based upon information provided by the project coordinator
‐ Performance measurement at the level of projects
‐ Can be easily adapted for different types of programmes
‐ By focussing on capabilities and results, timely information on expected outcomes and impacts can be provided
‐ Higher return rates than in surveys of an interim and ex post nature, due to the higher commitment of participants
Weaknesses:
‐ Limited benchmarking possibilities except for Community Innovation Survey data, which may only be retrieved at an aggregate level
‐ Additional burden on the project coordinator, who has to fill in the questionnaire
‐ A database system needs to be set up and implemented
Opportunities:
‐ Baseline information for interim and ex post evaluations
‐ Useful for the strategy/development of future programmes/initiatives
Threats:
‐ Positive bias in answers due to fear of negative consequences: as opposed to interim and ex post surveys, no anonymity can be granted.
Prerequisites for implementation
The project final reports need to be carefully adapted:
‐ The project final reports need to be in an electronic format, using a standardised reporting tool.
‐ Standardised performance monitoring questions need to be included in the reports. The questions about project performance should be asked either as yes/no questions or as Likert scale questions. Structured electronic fields need to be used, which are linked to a repository of results. Linkages to the project database, the participant database, the instruments and the specific challenges need to be provided.
‐ The performance monitoring questions should be the same for all instruments and challenges of the ICT FP and the CIP ICT‐PSP programme. Hence, questions should be fairly broad, and the administrators of the Work Programmes and external evaluators in interim and ex post evaluations should be able to use the indicators as a source of information for their specific tasks.
‐ In order to ensure acceptance of this type of measurement, a limited number of concrete questions has to be formulated and tested in the field.
Table 34 provides a set of pilot questions, which need to be answered by the project coordinators. For each
project, the questionnaire has to be filled in at the end of the project. The questionnaire results should be published
on an annual basis, differentiated by specific challenge of the Work Programme and by funding instrument.
Table 34: Pilot questionnaire to operationalize a set of key outcome and impact indicators
For each of the following impact types, please describe the scale of the impacts associated with your project (scale: Very low impact / Low impact / Moderate impact / High impact / Very high impact / Don't know / Not applicable).
A Innovation impacts on participants and non-participants
1 Direct impacts on academic participants
2 Direct impacts on industrial participants
3 Direct impacts on other types of participant
4 Indirect impacts on non-participants as a consequence of the activities of participants (e.g. via the dissemination of project results, the diffusion of technologies or the resultant modification of standards, regulations and policies)
B Impacts on the capability to develop product/services or process innovations
5 Increase the range of goods or services
6 Replace outdated products or processes
7 Enter new markets or increase market share
8 Improve quality of existing goods or services
9 Improve flexibility for producing goods or services
10 Increase technological capacity for producing goods or services
11 Reduce environmental impacts
12 Improve health or safety of your employees
C Impacts on the capability to introduce different types of organisational innovations
13 New business practices
14 New methods of organising work
15 New methods of organising external relations
16 New business partners and long term alliances
D Impacts on skills
17 New skills in software development
18 New skills in graphic arts, design of objects, web design
19 New skills in engineering/applied sciences, mathematics/statistics/database management
E Expected economic impacts, which can be achieved within the next 3 years
20 New or improved products
21 New or improved processes
22 New or improved services
23 New or improved standards, regulations or policies
24 New start-up companies or spin-offs
25 Improved innovation performance amongst participants
26 Improved innovation performance of the economy at large
27 Improved turnover, profitability and market sales of participants
28 Improved turnover, profitability and market sales within the economy at large
29 Enhanced competitiveness of participants
30 Enhanced competitiveness of the European economy at large
6 References
Agentschap NL (2010). Monitoringsrapportage Innovatieprogramma's over 2009. The Hague: Agentschap NL. Available upon request.
Australian Research Council (2009). ERA Indicator Benchmark Methodology. Available in the 2009 section of the Excellence in Research for Australia website [www.arc.gov.au/era]
Bach, L., Matt, M., Müller, M., Van Hee, N., Wolff, S. (2006), Analyzing and Evaluating the Impact on Innovation of Publicly Funded Research Programmes, LOT 1 – Evaluation of the Impact of Projects of Community FP5 and FP6, ULP L. Pasteur University of Strasbourg.
Butler, L. (2008), ICT assessment: moving beyond journal outputs, Scientometrics, Vol. 74. No 1 (2008) 39‐55.
Braun et al. (2009), Tools and Indicators for Community Research Evaluation and Monitoring, Final Report, Vol. 1 & Vol. 2. Bad Camberg, 2009.
Campos, A. de (2010). Note on the economic impact assessment of Research Councils. Prepared for Research Councils UK. [www.rcuk.ac.uk/Publications/policy/Pages/impactassessment.aspx]
Donovan, C. (2008). The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environment, and Cultural Returns of Publicly Funded Research. In: New Directions for Evaluation, no.118, pp.47‐60.
DTI (2001). Common core impact indicators for assessing effects of DTI industrial support policies. Prepared by the former DTI and available at the Department for Business Innovation & Skills. [www.bis.gov.uk]
European Commission (2004), Evaluating EU Activities: A practical Guide for the Commission Services, Brussels. [http://ec.europa.eu/dgs/secretariat_general/evaluation/docs/eval_activities_en.pdf]
The European Commission (2005), Impact Assessment and ex ante evaluation, Commission Staff Working Paper, SEC(2005) 430, Brussels.
European Commission (2009), CIP ICT‐PSP Interim Evaluation, Report from an independent expert panel, Brussels.
European Commission (2010), COM(2010) 2020 final, Communication from the Commission: EUROPE 2020 ‐ A strategy for smart, sustainable and inclusive growth, Brussels, 3.3.2010.
European Commission (2010), COM(2010) 245 final/2, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: A Digital Agenda for Europe, Brussels, 26.8.2010.
European Commission (2010), COM(2010) 546 final, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Europe 2020 Flagship Initiative ‐ Innovation Union, Brussels, 6.10.2010.
European Parliament (2006) Decision No 1982/2006/EC of the European Parliament and of the Council of 18 December 2006 concerning the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007‐2013), 30.12.2006.
Forsknings‐ og Innovationsstyrelsen (2010): InnovationDanmark 2010‐2013: Viden til virksomheder skaber vækst (knowledge for companies creates growth) [http://www.fi.dk/publikationer/2010/innovationdanmark‐2010‐2013/innovationdanmark‐2010‐3013‐viden‐til‐virksomheder‐skaber‐vaekst]
Forsknings‐ og Innovationsstyrelsen (2011): Central InnovationsManual for effektmåling (Central InnovationManual for impact assessment and performance measurement) – draft version, not publicly available
Forsknings‐ og innovationsstyrelsen (2011): Verdens bedste effektmålinger af forsknings‐ og
innovationspolitik? (World’s best impact assessment and performance measurement?), presentation, [http://www.druid.dk/fileadmin/images/dokumenter/VTU_Oplaeg_12_april_2011.pdf]
HEFCE (2009). Research Excellence Framework, Second consultation on the assessment and funding of research. Prepared in the context of the Research Excellence Framework, by the Higher Education Funding Council for England. [www.hefce.ac.uk/pubs/hefce/2009/09_38/]
HEFCE (2011). Decisions on assessing research impact. Prepared in the context of the Research Excellence Framework, by the Higher Education Funding Council for England. [www.hefce.ac.uk/research/ref/pubs/2011/01_11/]
Jordan, G. (2010), Logic Models – A method for Programme Planning and Evaluation: Applications to Research, Technology Development and Deployment Policies and Programmes, Platform FTEVAL Nr. 35/March 2010, Vienna.
Kanninen, S., Lemola, T. (2006), Methods for Evaluating the Impact of Basic Research Funding: An Analysis of Recent International Evaluation Activity, Academy of Finland, www.aka.fi/publications.
Kellog Foundation (2004), Logic Model Development Guide ‐ Using Logic Models to Bring Together Planning, Evaluation, and Action, www.wkkf.org, Battle Creek.
Lieshout, M. et al. (2009): The Impact of FET Research Initiatives (IFETRI), Final report: Part I‐ Methodology, Delft, 16.02.2011.
Leydesdorff, L., Opthof, T. (2011, forthcoming), Scopus’s Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations, Journal of the American Society for Information Science & Technology (forthcoming).
Martin, B., Tang, P. (2007), The Benefits from Publicly Funded Research, SPRU Electronic Working Paper Series, Paper No. 161, June 2007, University of Sussex.
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, in print.
Moed, H. F., De Bruin, R. E., & Van Leeuwen, T. N. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33(3), 381‐422.
National Science Foundation (2011): FY 2012 Budget Request to Congress: Performance information [http://www.nsf.gov/about/budget/fy2012/pdf/44_fy2012.pdf]
Office of Project and Programme Advice & Training (2008). Note 1/2008: Project & Programme Identified Evaluation Lessons. [www.offpat.org/readingroom/Index.do]
Platform fteval (2005), Evaluation Standards in Research and Technology Policy, Vienna.
Polt, W., Vonortas, N. (Coordinators), (2006), IST Evaluation and Monitoring, Joanneum Research, Vienna.
Research Council Economic Impact Group, led by Peter Warry (2006). Increasing the Economic Impact of Research Councils. Advice to the Director General of Science and Innovation, DTI. [www.vitae.ac.uk/cms/files/DTI‐Warry‐Report‐July‐2006.pdf]
Ruegg, R., Feller, I. (2003), A Toolkit for Evaluating Public R&D Investment – Models, Methods, and Findings from ATP’s First Decade, Gaithersburg.
Streicher, G., Schibany, A., Gretzmacher, N. Falk, M., Falk, R., Wörter, M. (2004), Evaluation of the Austrian Industrial Research Promotion Fund – FFF: Impact Analysis, InTeReg Research Report Nr. 33, Joanneum Research, Vienna http://www.joanneum.at/uploads/tx_publicationlibrary/img2082.pdf.
SQW Consulting (2009). Pushing the boundaries of impact evaluation. Report on knowledge development possibilities. [www.sqw.co.uk/file_download/169]
Technopolis (2007). Baseline study Point One: Manual. Technopolis Group Netherlands. Available upon
request.
The Networks of Centres of Excellence Canada Secretariat (2007). Joint Results‐based Management and Accountability Framework and Risk‐Based Audit Framework for the Class Grants Program for Centres of Excellence for Commercialization and Research (CECR Program).
The Networks of Centres of Excellence Canada Secretariat (2008). Joint Results‐based Management and Accountability Framework and Risk‐Based Audit Framework for the Class Grant Networks of Centres of Excellence Program (NCE Program). [www.nce‐rce.gc.ca/ReportsPublications‐RapportsPublications/PerformanceFrameworks‐CadresPerformance_eng.asp]