Assessment, evaluations and definitions of research impact
A review
Teresa Penfield,1 Matthew J. Baker,1 Rosa Scoble2 and Michael C. Wykes1
1University of Exeter, Innovation Centre, Rennes Drive, Devon EX4 4RN, UK and 2Brunel University, Kingston Lane, Uxbridge UB8 3PH, UK
Corresponding author. Email: m.c.wykes@exeter.ac.uk
This article aims to explore what is understood by the term 'research impact' and to provide a comprehensive assimilation of available literature and information, drawing on global experiences to understand the potential for methods and frameworks of impact assessment being implemented for UK impact assessment. We take a more focused look at the impact component of the UK Research Excellence Framework taking place in 2014, some of the challenges to evaluating impact, the role that systems might play in the future for capturing the links between research and impact, and the requirements we have for these systems.
Keywords: impact; research evaluation; assessment; evidence.
1 Introduction: what is meant by impact?
When considering the impact that is generated as a result of research, a number of authors and government recommendations have advised that a clear definition of impact is required (Duryea, Hochman, and Parfitt 2007; Grant et al. 2009; Russell Group 2009). From the outset, we note that the understanding of the term 'impact' differs between users and audiences. There is a distinction between 'academic impact', understood as the intellectual contribution to one's field of study within academia, and 'external socio-economic impact' beyond academia. In the UK, evaluation of academic and broader socio-economic impact takes place separately. 'Impact' has become the term of choice in the UK for research influence beyond academia. This distinction is not so clear in impact assessments outside of the UK, where academic outputs and socio-economic impacts are often viewed as one to give an overall assessment of the value and change created through research.
The Oxford English Dictionary defines impact as a 'marked effect or influence'; this is clearly a very broad definition. In terms of research impact, organizations and stakeholders may be interested in specific aspects of impact, dependent on their focus. In this case a specific definition may be required, for example, in the Research Excellence Framework (REF) 'Assessment framework and guidance on submissions' (REF2014 2011b), which defines 'impact' as

an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia
Impact is assessed alongside research outputs and environment to provide an evaluation of research taking place within an institution. As such, research outputs, for example knowledge generated and publications, can be translated into outcomes, for example new products and services, and impacts or added value (Duryea et al. 2007). Although some might find the distinction somewhat marginal or even confusing, this differentiation between outputs, outcomes, and impacts is important and has been highlighted not only for the impacts derived from university research (Kelly and McNicoll 2011) but also for work done in the charitable sector (Ebrahim and Rangan 2010; Berg and Mansson 2011; Kelly and McNicoll 2011). The Social Return on Investment (SROI) guide (The SROI Network 2012) suggests that 'The language varies: "impact", "returns", "benefits", "value", but the questions around what sort of difference and how much of a difference we are making are the same'.

Research Evaluation 23 (2014), pp. 21–32. doi:10.1093/reseval/rvt021. Advance Access published on 9 October 2013.

© The Author 2013. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
It is perhaps assumed here that a positive or beneficial effect will be considered as an impact, but what about changes that are perceived to be negative? Wooding et al. (2007) adapted the terminology of the Payback Framework, developed for the health and biomedical sciences, from 'benefit' to 'impact' when modifying the framework for the social sciences, arguing that the positive or negative nature of a change was subjective and can also change with time, as has commonly been highlighted with the drug thalidomide, which was introduced in the 1950s to help with, among other things, morning sickness, but due to teratogenic effects, which resulted in birth defects, was withdrawn in the early 1960s. Thalidomide has since been found to have beneficial effects in the treatment of certain types of cancer. Clearly, the impact of thalidomide would have been viewed very differently in the 1950s compared with the 1960s or today.

In viewing impact evaluations, it is important to consider
not only who has evaluated the work but the purpose of
the evaluation to determine the limits and relevance of an
assessment exercise. In this article, we draw on a broad
range of examples with a focus on methods of evaluation
for research impact within Higher Education Institutions
(HEIs). As part of this review, we aim to explore the following questions:

- What are the reasons behind trying to understand and evaluate research impact?
- What are the methodologies and frameworks that have been employed globally to assess research impact, and how do these compare?
- What are the challenges associated with understanding and evaluating research impact?
- What indicators, evidence, and impacts need to be captured within developing systems?
2 Why evaluate research impact?
What are the reasons behind trying to understand and evaluate research impact? Throughout history, the activities of a university have been to provide both education and research, but the fundamental purpose of a university was perhaps described in the writings of mathematician and philosopher Alfred North Whitehead (1929):

'The justification for a university is that it preserves the connection between knowledge and the zest of life, by uniting the young and the old in the imaginative consideration of learning. The university imparts information, but it imparts it imaginatively. At least, this is the function which it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration, transforms knowledge.'
In undertaking excellent research, we anticipate that great things will come and, as such, one of the fundamental reasons for undertaking research is that we will generate and transform knowledge that will benefit society as a whole.
One might consider that by funding excellent research, impacts (including those that are unforeseen) will follow, and traditionally the assessment of university research focused on academic quality and productivity. Aspects of impact, such as the value of Intellectual Property, are currently recorded by universities in the UK through their Higher Education Business and Community Interaction Survey return to the Higher Education Statistics Agency; however, as with other public and charitable sector organizations, showcasing impact is an important part of attracting and retaining donors and support (Kelly and McNicoll 2011).
The reasoning behind the move towards assessing research impact is undoubtedly complex, involving both political and socio-economic factors, but nevertheless we can differentiate between four primary purposes:
(1) HEIs overview. To enable research organizations, including HEIs, to monitor and manage their performance and understand and disseminate the contribution that they are making to local, national, and international communities.
(2) Accountability. To demonstrate to government, stakeholders, and the wider public the value of research. There has been a drive from the UK government, through the Higher Education Funding Council for England (HEFCE) and the Research Councils (HM Treasury 2004), to account for the spending of public money by demonstrating the value of research to tax payers, voters, and the public in terms of socio-economic benefits (European Science Foundation 2009), in effect justifying this expenditure (Davies, Nutley, and Walter 2005; Hanney and Gonzalez-Block 2011).
(3) Inform funding. To understand the socio-economic value of research and subsequently inform funding decisions. By evaluating the contribution that research makes to society and the economy, future funding can be allocated where it is perceived to bring about the desired impact. As Donovan (2011) comments, 'Impact is a strong weapon for making an evidence based case to governments for enhanced research support'.
(4) Understand. To understand the method and routes by which research leads to impacts, to maximize on the findings that come out of research and develop better ways of delivering impact.
The growing trend for accountability within the university system is not limited to research and is mirrored in assessments of teaching quality, which now feed into
evaluation of universities to ensure fee-paying students' satisfaction. In demonstrating research impact, we can provide accountability upwards to funders and downwards to users on a project and strategic basis (Kelly and McNicoll 2011). Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes, and this will influence the way in which evaluation is approached.
It is important to emphasize that 'Not everyone within the higher education sector itself is convinced that evaluation of higher education activity is a worthwhile task' (Kelly and McNicoll 2011). The University and College Union (2011) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals once plans for the new assessment of university research were released. This petition was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society (University and College Union 2011). Impact assessments raise concerns over the steer of research towards disciplines and topics in which impact is more easily evidenced and that provide economic impacts, which could subsequently lead to a devaluation of 'blue skies' research.
Johnston (1995) notes that by developing relationships between researchers and industry, new research strategies can be developed. This raises the questions of whether UK business and industry should not invest in the research that will deliver them impacts, and who will fund basic research if not the government. Donovan (2011) asserts that there should be no disincentive for conducting basic research. By asking academics to consider the impact of the research they undertake, and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. Professor James Ladyman, at the University of Bristol, a vocal adversary of awarding funding based on the assessment of research impact, has been quoted as saying that '... inclusion of impact in the REF will create "selection pressure", promoting academic research that has "more direct economic impact" or which is easier to explain to the public' (Corbyn 2009).

Despite the concerns raised, the broader socio-economic impacts of research will be included and count for 20% of the overall research assessment as part of the REF in 2014. From an international perspective, this represents a step change in the comprehensive nature to which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. Understanding what impact looks like across the various strands of research, and the variety of indicators and proxies used to evidence impact, will be important to developing a meaningful assessment.
3 Evaluating research impact
What are the methodologies and frameworks that have been employed globally to evaluate research impact, and how do these compare? The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review (Grant 2006). Evidence of academic impact may be derived through various bibliometric methods, one example of which is the h-index, which incorporates factors such as the number of publications and citations. These metrics may be used in the UK to understand the benefits of research within academia and are often incorporated into the broader perspective of impact seen internationally, for example within Excellence in Research for Australia and using Star Metrics in the USA, in which quantitative measures are used to assess impact, for example publications, citations, and research income. These 'traditional' bibliometric techniques can be regarded as giving only a partial picture of full impact (Bornmann and Marx 2013), with no link to causality. Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment, are each considered by some (Vonortas and Link 2012) to have shortcomings when used to measure impacts.
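To make the bibliometric example concrete, the h-index mentioned above can be computed with a few lines of code: it is the largest h such that an author has h publications each cited at least h times. The sketch below uses invented citation counts for illustration:

```python
def h_index(citations):
    """Compute the h-index: the largest h such that the author has
    h publications with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Example: five papers with invented citation counts
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

As the example shows, the indicator ignores where or why a paper was cited, which is one reason such metrics give only a partial picture of impact with no link to causality.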
Incorporating assessment of the wider socio-economic impact began using metrics-based indicators such as Intellectual Property registered and commercial income generated (Australian Research Council 2008). In the UK, more sophisticated assessments of impact, incorporating wider socio-economic benefits, were first investigated within the fields of Biomedical and Health Sciences (Grant 2006), an area of research that wanted to be able to justify the significant investment it received. Frameworks for assessing impact have been designed and are employed at an organizational level, addressing the specific requirements of the organization and stakeholders. As a result, numerous and widely varying models and frameworks for assessing impact exist. Here we outline a few of the most notable models that demonstrate the contrast in approaches available.
The Payback Framework is possibly the most widely used and adapted model for impact assessment (Wooding et al. 2007; Nason et al. 2008), developed during the mid-1990s by Buxton and Hanney, working at Brunel University. It incorporates both academic outputs and wider societal benefits (Donovan and Hanney 2011) to assess outcomes of health sciences research. The Payback Framework systematically links research with the associated benefits (Scoble et al. 2010; Hanney and Gonzalez-Block 2011) and can be thought of in two parts: first, a model that allows the research and subsequent dissemination process to be broken into specific components, within which the benefits of research can be
studied; and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed (Hanney and Gonzalez-Block 2011). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institute of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong (Bernstein et al. 2006; Nason et al. 2008; CAHS 2009; Spaapen et al. n.d.). The Payback Framework enables health and medical research and impact to be linked, and the process by which impact occurs to be traced. For more extensive reviews of the Payback Framework, see Davies et al. (2005), Wooding et al. (2007), Nason et al. (2008), and Hanney and Gonzalez-Block (2011).

A very different approach, known as Social Impact
Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context and has a central theme of capturing 'productive interactions' between researchers and stakeholders by analysing the networks that evolve during research programmes (Spaapen and Drooge 2011; Spaapen et al. n.d.). SIAMPI is based on the widely held assumption that interactions between researchers and stakeholders are an important pre-requisite to achieving impact (Donovan 2011; Hughes and Martin 2012; Spaapen et al. n.d.). This framework is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research.
SIAMPI has been used within the Netherlands Institute for Health Services Research (SIAMPI n.d.). 'Productive interactions', which can perhaps be viewed as instances of knowledge exchange, are widely valued and supported internationally as mechanisms for enabling impact and are often supported financially, for example by Canada's Social Sciences and Humanities Research Council, which aims to support knowledge exchange (financially) with a view to enabling long-term impact. In the UK, the Department for Business, Innovation and Skills provided funding of £150 million for knowledge exchange in 2011–12 to 'help universities and colleges support the economic recovery and growth and contribute to wider society' (Department for Business, Innovation and Skills 2012). While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less-suitable method where showcasing the benefits of research is desirable or where justification of funding based on impact is required.
The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework, a pilot assessment was undertaken by the Australian Technology Network. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel (Duryea et al. 2007), who concluded that the researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from their research (Duryea et al. 2007). To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF.
In developing the UK REF, HEFCE commissioned a report in 2009 from RAND to review international practice for assessing research impact and provide recommendations to inform the development of the REF. RAND selected four frameworks to represent the international arena (Grant et al. 2009). One of these, the RQF, they identified as providing a 'promising basis for developing an impact approach for the REF' using the case study approach. HEFCE developed an initial methodology that was then tested through a pilot exercise. The case study approach recommended by the RQF was combined with 'significance' and 'reach' as criteria for assessment. The criteria for assessment were also supported by a model developed by Brunel for 'measurement' of impact that used similar measures, defined as depth and spread. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. Evaluation of impact in terms of reach and significance allows all disciplines of research and types of impact to be assessed side-by-side (Scoble et al. 2010).
The range and diversity of frameworks developed reflect the variation in purpose of evaluation, including the stakeholders for whom the assessment takes place, along with the type of impact and evidence anticipated. The most appropriate type of evaluation will vary according to the stakeholder whom we are wishing to inform. Studies (Buxton, Hanney, and Jones 2004) into the economic gains from biomedical and health sciences determined that different methodologies provide different ways of
considering economic benefits. A discussion on the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006).
Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues; for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact (Scoble et al., working paper).
4 Impact and the REF
Although based on the RQF, the REF did not adopt all of the suggestions held within, for example, the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable (Donovan 2008). In 2009–10, the REF team conducted a pilot study for the REF involving 29 institutions submitting case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) (REF2014 2010). These case studies were reviewed by expert panels and, as with the RQF, they found that it was possible to assess impact and develop 'impact profiles' using the case study approach (REF2014 2010).
From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:
(1) Outputs
(2) Impact
(3) Environment
Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, using impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a). HEFCE indicated that impact should merit a 25% weighting within the REF (REF2014 2011b); however, this has been reduced for the 2014 REF to 20%, perhaps as a result of feedback and lobbying, for example from the Russell Group and Million+ group of universities, who called for impact to count for 15% (Russell Group 2009; Jump 2011), and following guidance from the expert panels undertaking the pilot exercise, who suggested that during the 2014 REF, impact assessment would be in a developmental phase and that a lower weighting for impact would be appropriate, with the expectation that this would be increased in subsequent assessments (REF2014 2010).
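The weighting scheme can be made concrete with a small sketch. The 20% impact figure comes from the discussion above; the 65% outputs and 15% environment split is the one used in REF 2014, and the sub-profile scores below are invented for illustration:

```python
def overall_score(outputs, impact, environment,
                  weights=(0.65, 0.20, 0.15)):
    """Combine the three REF sub-profile scores (e.g. average star
    ratings on the 0-4 scale) into an overall weighted score.
    The default weights are those used in REF 2014."""
    w_out, w_imp, w_env = weights
    return w_out * outputs + w_imp * impact + w_env * environment

# Invented sub-profile scores for a hypothetical submission
print(round(overall_score(3.0, 2.0, 3.0), 2))  # → 2.8
```

Even this toy calculation shows why the weighting attracted lobbying: moving impact from 15% to 25% of the total changes how much a weak or strong impact profile can shift an institution's overall result.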
The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although it can be envisaged that the range of impacts derived from research of different disciplines is likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example from business development to cultural changes or saving lives. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impact from specific disciplines.
Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that it is expected that impact and evidence will vary according to discipline (REF2014 2012). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature, by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group's impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (2002), and a review by Kuruvilla et al. (2006) on the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline.
5 The challenges of impact evaluation
What are the challenges associated with understanding and evaluating research impact? In endeavouring to assess or evaluate impact, a number of difficulties emerge, and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.
5.1 Time lag
The time lag between research and impact varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In development of the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and
impact was difficult. In the UK, the Russell Group Universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact (Russell Group 2009). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this time window may be insufficient in some instances, with architecture being granted an additional 5-year period (REF2014 2012); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF, as to what the view of the panel will be and whether, if deemed inappropriate, this will render the case study 'unclassified'.
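The eligibility windows described above lend themselves to a simple check. The dates are those stated for the REF; the assumption that the architecture dispensation extends the start of the research window by 5 years is our illustrative reading, made explicit in the sketch:

```python
from datetime import date

# Windows stated in the REF guidance discussed above
RESEARCH_WINDOW = (date(1993, 1, 1), date(2013, 12, 31))
IMPACT_WINDOW = (date(2008, 1, 1), date(2013, 7, 31))

def case_study_eligible(research_date, impact_date, extension_years=0):
    """Check a case study against the REF 2014 windows. `extension_years`
    models a discipline-specific dispensation (e.g. architecture's extra
    5 years), assumed here to move the research window start earlier."""
    r_start, r_end = RESEARCH_WINDOW
    r_start = r_start.replace(year=r_start.year - extension_years)
    i_start, i_end = IMPACT_WINDOW
    return (r_start <= research_date <= r_end
            and i_start <= impact_date <= i_end)

print(case_study_eligible(date(1995, 6, 1), date(2010, 3, 1)))     # → True
print(case_study_eligible(date(1990, 6, 1), date(2010, 3, 1)))     # → False
print(case_study_eligible(date(1990, 6, 1), date(2010, 3, 1), 5))  # → True
```

The hard cut-offs make the time-lag problem visible: research from 1990 that only produced impact within the assessment window is excluded unless a dispensation applies.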
5.2 The developmental nature of impact
Impact is not static; it will develop and change over time, and this development may be an increase or decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, and then regulatory approval is granted before the drug is used to deliver potential health benefits. Clearly, there is the possibility that the potential new drug will fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits; the time at which an impact assessment takes place will therefore influence the degree of impact that has taken place. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again, the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporal and dissipated impact will be valued in comparison with longer-term impact.
5.3 Attribution
Impact is derived not only from targeted research but from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual piece of research,
funding strategy, or organization to an impact is not straightforward. Husbands-Fealing suggests that to assist identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, which shows how later phases result from earlier ones. Such a framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work spanning research communities (Duryea et al. 2007). Concerns over how to attribute impacts have been raised many times (The Allen Consulting Group 2005; Duryea et al. 2007; Grant et al. 2009), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.
Figure 1, replicated from Hughes and Martin (2012), illustrates how the ease with which impact can be attributed decreases with time, whereas the impact or effect of complementary assets increases. This highlights the problem that it may take a considerable amount of time for the full impact of a piece of research to develop, but because of this time and the increase in complexity of the networks involved in translating the research and interim impacts, it is more difficult to attribute and link back to a contributing piece of research.
This presents particular difficulties in research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking: that 'one is able to see further by standing on the shoulders of giants'.
5.4 Knowledge creep
It is acknowledged that one of the outcomes of developing new knowledge through research can be 'knowledge creep', where new data or information becomes accepted and gets absorbed over time. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al. 2005; Wooding et al. 2007). This is recognized as being particularly problematic within the social sciences, where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a 'distinctive contribution' (REF2014 2011a). The difficulty then is how to determine what the contribution has been in the absence
of adequate evidence, and how we ensure that research that results in impacts that cannot be evidenced is valued and supported.
5.5 Gathering evidence
Gathering evidence of the links between research and impact is not only a challenge where that evidence is lacking. The introduction of impact assessments with the requirement to collate evidence retrospectively poses difficulties because evidence, measurements, and baselines have in many cases not been collected and may no longer be available. While, looking forward, we will be able to reduce this problem in the future, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.
6 Developing systems and taxonomies for capturing impact
Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Decker et al. (2007) surveyed researchers in the top US research institutions during 2005; the survey of more than 6,000 researchers found that, on average, more than 40% of their time was spent on administrative tasks. It is desirable that the assignation of administrative tasks to researchers is limited, and therefore, to assist the tracking and collating of impact data, systems are being developed through numerous projects and developments internationally, including STAR Metrics in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil (Lane 2010; Mugabushaka and Papazoglou 2012).
Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System (Mugabushaka and Papazoglou 2012), are being developed to be CERIF-compatible.
In the UK there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and the evidence for it, that can be used universally is seen to be very valuable. However, the Achilles heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that reward.
Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF, by Research Councils UK, in unpublished draft case studies undertaken at King's College London, or outlined in relevant publications (MICE Project n.d.). A taxonomy of impact categories was
Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).
then produced, onto which impact could be mapped. What emerged on testing the MICE taxonomy (Cooke and Nadim 2011), by mapping impacts from case studies, was that detailed categorization of impact was too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was 'a unique collection'. Where quantitative data were available, for example audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmiths report (Cooke and Nadim 2011) recommended making indicators 'value free', enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation; re-use and influence; collaboration and boundary work; and innovation and invention.
While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can be placed. It is worth considering the degree to which indicators are defined, and providing broader definitions with greater flexibility.
It is possible to incorporate both metrics and narratives within systems, for example within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary (Davies et al. 2005). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research and the various contributions to impact made during the pathway.
Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances controls are not feasible, as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.
It is now possible to use data-mining tools to extract specific data from narratives or unstructured data sources (Mugabushaka and Papazoglou 2012). This is being done for the collation of academic impact and outputs, for example by the Research Portfolio Online Reporting Tools, which use PubMed and text mining to cluster research projects, and by STAR Metrics in the USA, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). These techniques have the potential to transform data capture and impact assessment (Jones and Grant 2013). It is acknowledged in the article by Mugabushaka and Papazoglou (2012) that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture the interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable. If knowledge exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or a diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
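The kind of mining described above can be sketched in a few lines of Python. The snippet below is a minimal illustration only: it is not the STAR Metrics or ERC pipeline, and the indicator vocabulary and project identifiers are invented for the example. It simply tallies mentions of known indicator terms in unstructured impact narratives, the most basic form of text mining on which such systems build.

```python
import re
from collections import defaultdict

# Hypothetical indicator vocabulary -- real systems derive far richer
# term lists and use full natural-language-processing pipelines.
INDICATOR_TERMS = {"policy", "patent", "spin-out", "guideline", "jobs"}

def mine_narratives(narratives):
    """Tally indicator-term mentions per project from unstructured text."""
    counts = defaultdict(lambda: defaultdict(int))
    for project_id, text in narratives.items():
        for token in re.findall(r"[a-z\-]+", text.lower()):
            if token in INDICATOR_TERMS:
                counts[project_id][token] += 1
    return {p: dict(c) for p, c in counts.items()}

# Invented example narratives for two hypothetical projects.
narratives = {
    "proj-1": "Findings informed a national policy guideline; a patent was filed.",
    "proj-2": "The spin-out created new jobs and a second patent application.",
}
print(mine_narratives(narratives))
```

Even this crude tally shows why context matters: the counts say nothing about the significance of each mention, which is precisely the baseline problem discussed above.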
The transition to routine capture of impact data requires not only the development of tools and systems to help with implementation, but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.
7 Indicators, evidence, and impact within systems
What indicators, evidence, and impacts need to be captured within developing systems? There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia for Advancing Standards in Research Administration Information (CASRAI), for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators that can be incorporated into systems internationally, and seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.
7.1 Metrics
Metrics have commonly been used as a measure of impact, for example in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
The desire of some organizations to demonstrate the monetary value of investment and impact is perhaps best indicated by social return on investment (SROI). SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used for demonstration of worth. This is a metric that has been used within the charitable sector (Berg and Mansson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012). More details on SROI can be found in 'A Guide to Social Return on Investment', produced by The SROI Network (2012).
Although metrics can provide evidence of quantitative changes or impacts resulting from our research, they are unable to adequately evidence the qualitative impacts that take place and hence are not suitable for all of the impacts we will encounter. The main risks associated with the use of standardized metrics are that:
(1) The full impact will not be realized, as we focus on easily quantifiable indicators.
(2) We will focus attention towards generating results that enable boxes to be ticked rather than delivering real value for money and innovative research.
(3) They risk being monetized or converted into a lowest common denominator in an attempt to compare the cost of a new theatre against that of a hospital.
7.2 Narratives
Narratives can be used to describe impact; the use of narratives enables a story to be told, allows the impact to be placed in context, and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a more complete picture of impact can be developed, again from a particular perspective, but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
By allowing impact to be placed in context, we answer the 'so what?' question that can result from quantitative data analyses; but is there a risk that the full picture may not be presented, in order to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which changes of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed. The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
7.4 Citations (outside of academia) and documentation
Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or reference to a piece of research being cited
Table 1. The advantages and disadvantages of the case study approach

| Benefits | Considerations |
| Uses quantitative and qualitative data | Automated collation of evidence is difficult |
| Allows evidence to be contextualized and a story told | Incorporating perspective can make it difficult to assess critically |
| Enables assessment in the absence of quantitative data | Time-consuming to prepare and assess |
| Allows collation of unique datasets | Difficult to compare like with like |
| Preserves distinctive account or disciplinary perspective | Rewards those who can write well and/or afford to pay for external input |
within the media. A collation of several indicators of impact may be enough to convince an assessor that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas and may be considered, alongside other evidence, as contributing to or an indicator of impact.
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
8 Conclusions and recommendations
The understanding of the term impact varies considerably, and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. Over the past year, a number of new posts have been created within universities, such as for writing impact case studies, and a number of companies are now offering this as a contract service. A key concern here is that we could find that universities that can afford to employ either consultants or impact 'administrators' will generate the best case studies.
The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and ensure that
Figure 2. Overview of the types of information that systems need to capture and link.
the time and capability required for the capture of information is considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools to enable researchers to capture much of this would be valuable. However, it must be remembered that, in the case of the UK REF, impact is only considered where it is based on research that has taken place within the institution submitting the case study. It is therefore in an institution's interest to have a process by which all the necessary information is captured, to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link:
(1) Research findings, including outputs (e.g. presentations and publications)
(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)
(4) Research developments (based on stakeholder input and discussions)
(5) Outcomes (e.g. commercial and cultural citations)
(6) Impacts (changes, e.g. behavioural and economic)
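As a rough illustration of how such linked records might be traversed, the sketch below uses hypothetical record types; the field names and identifiers are assumptions for the example, not the CERIF schema or the data model of any real research information system. It walks the links backwards from an impact to the research outputs that contributed to it, which is the core operation an evidence database must support.

```python
from dataclasses import dataclass, field

# Illustrative record type only -- field names are assumptions, not a
# real research-information-system schema.
@dataclass
class Record:
    id: str
    kind: str          # e.g. 'output', 'interaction', 'outcome', 'impact'
    description: str
    links: list = field(default_factory=list)  # ids of upstream records

def trace_to_research(records, impact_id):
    """Walk links backwards from an impact to the originating outputs."""
    by_id = {r.id: r for r in records}
    chain, stack = [], [impact_id]
    while stack:
        r = by_id[stack.pop()]
        chain.append(r.id)
        stack.extend(r.links)
    return chain

# Invented example chain: output -> knowledge-exchange event -> impact.
records = [
    Record("out-1", "output", "journal article"),
    Record("int-1", "interaction", "workshop with policymakers", ["out-1"]),
    Record("imp-1", "impact", "change in guidance", ["int-1"]),
]
print(trace_to_research(records, "imp-1"))  # prints ['imp-1', 'int-1', 'out-1']
```

The design point is that impacts are linked to interim records rather than directly to outputs, so the full pathway, including knowledge exchange events, is preserved even after a researcher has left the institution.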
Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are impractical to demonstrate; in these cases it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10]
References
Australian Research Council (2008) ERA Indicator Principles (Pubd online) <http://www.arc.gov.au/pdf/ERA_Indicator_Principles.pdf> accessed 6 Oct 2012.
Berg, L. O. and Mansson, C. (2011) A White Paper on Charity Impact Measurement (Pubd online) <http://www.charitystar.org/wp-content/uploads/2011/05/Return_on_donations_a_white_paper_on_charity_impact_measurement.pdf> accessed 6 Aug 2012.
Bernstein, A. et al. (2006) A Framework to Measure the Impact of Investments in Health Research (Pubd online) <http://www.oecd.org/science/innovationinsciencetechnologyandindustry/37450246.pdf> accessed 26 Oct 2012.
Bornmann, L. and Marx, W. (2013) 'How Good is Research Really?', European Molecular Biology Organization (EMBO) Reports, 14: 226–30.
Buxton, M., Hanney, S. and Jones, T. (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review', Bulletin of the World Health Organization, 82: 733–9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research (Pubd online) <http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf> accessed 26 Oct 2012.
Cooke, J. and Nadim, T. (2011) Measuring Impact Under CERIF at Goldsmiths (Pubd online) <http://mice.cerch.kcl.ac.uk/wp-uploads/2011/07/MICE_report_Goldsmiths_final.pdf> accessed 26 Oct 2012.
Corbyn, Z. (2009) 'Anti-Impact Campaign's "Poster Boy" Sticks up for the Ivory Tower', Times Higher Education (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies, H., Nutley, S. and Walter, I. (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues (Pubd online) <http://www.odi.org.uk/rapid/Events/ESRC/docs/background_paper.pdf> accessed 26 Oct 2012.
Decker, R. et al. (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty (Pubd online) <http://www.iscintelligence.com/archivos_subidos/usfacultyburden_5.pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012–2013 (Pubd online) <http://www.bis.gov.uk/About> accessed 24 Oct 2012.
Donovan, C. (2008) 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental and Cultural Returns of Publicly Funded Research'. In: Coryn, C. L. S. and Scriven, M. (eds) Reforming the Evaluation of Research, New Directions for Evaluation, 118, pp. 47–60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support, but a State-of-the-Art Approach to Measurement is Needed', LSE Blog (Pubd online) <http://blogs.lse.ac.uk/impactofsocialsciences/tag/claire-donovan> accessed 26 Oct 2012.
Donovan, C. and Hanney, S. (2011) 'The "Payback Framework" Explained', Research Evaluation, 20: 181–3.
Duryea, M., Hochman, M. and Parfitt, A. (2007) 'Measuring the Impact of Research', Research Global, 27: 8–9 (Pubd online) <http://www.atn.edu.au/docs/Research%20Global%20-%20Measuring%20the%20impact%20of%20research.pdf> accessed 26 Oct 2012.
Ebrahim, A. and Rangan, K. (2010) 'The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance', Working Paper, Harvard Business School (Pubd online) <http://www.hbs.edu/research/pdf/10-099.pdf> accessed 26 Oct 2012.
European Science Foundation (2009) Evaluation in National Research Funding Agencies: Approaches, Experiences and Case Studies (Pubd online) <http://www.esf.org/index.php?eID=tx_ccdamdl_file&p[file]=25668&p[dl]=1&p[pid]=6767&p[site]=European%20Science%20Foundation&p[t]=1351858982&hash=93e987c5832f10aeee3911bac23b4e0f&l=en> accessed 26 Oct 2012.
Jones, M. M. and Grant, J. (2013) 'Methodologies for Assessing and Evidencing Research Impact', RAND Europe. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Grant, J. (2006) Measuring the Benefits from Research, RAND Europe (Pubd online) <http://www.rand.org/pubs/research_briefs/2007/RAND_RB9202.pdf> accessed 26 Oct 2012.
Grant, J. et al. (2009) Capturing Research Impacts: A Review of International Practice (Pubd online) <http://www.rand.org/pubs/documented_briefings/2010/RAND_DB578.pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004–2014'. London: HM Treasury.
Hanney, S. and Gonzalez-Block, M. A. (2011) 'Yes, Research can Inform Health Policy; But can we Bridge the "Do-Knowing it's been Done" Gap?', Health Research Policy and Systems, 9: 23.
Hughes, A. and Martin, B. (2012) Enhancing Impact: The Value of Public Sector R&D, Council for Industry and Higher Education / UK Innovation Research Centre, Summary report (Pubd online) <http://ukirc.ac.uk/object/report/8025/doc/CIHE_0612ImpactReport_summary.pdf> accessed 26 Oct 2012.
Husbands-Fealing, K. (2013) 'Assessing Impacts of Higher Education Systems', University of Minnesota. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Johnston, R. (1995) 'Research Impact Quantification', Scientometrics, 34: 415–26.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF', Times Higher Education (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) Through a Glass, Darkly: Measuring the Social Value of Universities, National Co-ordinating Centre for Public Engagement (Pubd online) <http://www.publicengagement.ac.uk/sites/default/files/80096%20NCCPE%20Social%20Value%20Report.pdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework', BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) Maximising the Impacts of Your Research: A Handbook for Social Scientists (Pubd online) <http://www2.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific', Nature, 464: 488–9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog (Pubd online) <http://mice.cerch.kcl.ac.uk> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information Systems of Research Funding Agencies in the "Era of the Big Data": The Case Study of the Research Information System of the European Research Council'. In: Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (June 6–9, 2012), pp. 103–12. Prague, Czech Republic.
Nason, E. et al. (2008) Health Research – Making an Impact: The Economic and Social Benefits of HRB-funded Research, RAND Europe. Dublin: Health Research Board.
Reeves, M. (2002) Measuring the Economic and Social Impact of the Arts: A Review, Arts Council England (Pubd online) <http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf> accessed 26 Oct 2012.
REF2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/researchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanels/re01_10.pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation (Pubd online) <http://www.russellgroup.ac.uk/uploads/REF-consultation-response-FINAL-Dec09.pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot (Pubd online) <http://www.libsearch.com/visit/1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research', Journal of Higher Education Policy and Management, 32: 499–511.
SIAMPI Website (2011) (Pubd online) <http://www.siampi.eu/Pages/SIA/12/625.bGFuZz1FTkc.html> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report (Pubd online) <http://www.siampi.eu/Content/SIAMPI/SIAMPI_Final%20report.pdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment (Pubd online) <http://www.thesroinetwork.org/publications/doc_details/241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals (Pubd online) <http://www.ucu.org.uk/media/pdf/n/q/ucu_REFstatement_finalsignatures.pdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their Function'. In: The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) Policy and Practice Impacts of Research Funded by the Economic and Social Research Council, RAND Europe (Pubd online) <http://www.esrc.ac.uk/_images/Case_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563.pdf> accessed 26 Oct 2012.
and how much of a difference we are making are the same'.
It is perhaps assumed here that a positive or beneficial effect will be considered as an impact, but what about changes that are perceived to be negative? Wooding et al. (2007) adapted the terminology of the Payback Framework, developed for the health and biomedical sciences, from 'benefit' to 'impact' when modifying the framework for the social sciences, arguing that the positive or negative nature of a change was subjective and can also change with time. This has commonly been highlighted with the drug thalidomide, which was introduced in the 1950s to help with, among other things, morning sickness, but, due to teratogenic effects that resulted in birth defects, was withdrawn in the early 1960s. Thalidomide has since been found to have beneficial effects in the treatment of certain types of cancer. Clearly, the impact of thalidomide would have been viewed very differently in the 1950s compared with the 1960s or today.

In viewing impact evaluations, it is important to consider
not only who has evaluated the work, but also the purpose of the evaluation, to determine the limits and relevance of an assessment exercise. In this article, we draw on a broad range of examples, with a focus on methods of evaluation for research impact within Higher Education Institutions (HEIs). As part of this review, we aim to explore the following questions:
- What are the reasons behind trying to understand and evaluate research impact?
- What are the methodologies and frameworks that have been employed globally to assess research impact, and how do these compare?
- What are the challenges associated with understanding and evaluating research impact?
- What indicators, evidence, and impacts need to be captured within developing systems?
2 Why evaluate research impact?
What are the reasons behind trying to understand and evaluate research impact? Throughout history, the activities of a university have been to provide both education and research, but the fundamental purpose of a university was perhaps described in the writings of mathematician and philosopher Alfred North Whitehead (1929):

'The justification for a university is that it preserves the
connection between knowledge and the zest of life by
uniting the young and the old in the imaginative consideration of learning. The university imparts information, but
it imparts it imaginatively. At least, this is the function which it should perform for society. A university which fails in this respect has no reason for existence. This
atmosphere of excitement, arising from imaginative consideration, transforms knowledge.'
In undertaking excellent research, we anticipate that great things will come, and as such, one of the fundamental reasons for undertaking research is that we will generate and transform knowledge that will benefit society as a whole.
One might consider that, by funding excellent research, impacts (including those that are unforeseen) will follow, and traditionally assessment of university research focused on academic quality and productivity. Aspects of impact, such as the value of Intellectual Property, are currently recorded by universities in the UK through their Higher Education Business and Community Interaction Survey return to the Higher Education Statistics Agency; however, as with other public and charitable sector organizations, showcasing impact is an important part of attracting and retaining donors and support (Kelly and McNicoll 2011).
The reasoning behind the move towards assessing research impact is undoubtedly complex, involving both political and socio-economic factors, but nevertheless we can differentiate between four primary purposes:
(1) HEIs overview. To enable research organizations, including HEIs, to monitor and manage their performance and to understand and disseminate the contribution that they are making to local, national, and international communities.
(2) Accountability. To demonstrate to government, stakeholders, and the wider public the value of research. There has been a drive from the UK government, through the Higher Education Funding Council for England (HEFCE) and the Research Councils (HM Treasury 2004), to account for the spending of public money by demonstrating the value of research to tax payers, voters, and the public in terms of socio-economic benefits (European Science Foundation 2009), in effect justifying this expenditure (Davies, Nutley and Walter 2005; Hanney and Gonzalez-Block 2011).
(3) Inform funding. To understand the socio-economic value of research and subsequently inform funding decisions. By evaluating the contribution that research makes to society and the economy, future funding can be allocated where it is perceived to bring about the desired impact. As Donovan (2011) comments, 'Impact is a strong weapon for making an evidence based case to governments for enhanced research support'.
(4) Understand. To understand the methods and routes by which research leads to impacts, to maximize on the findings that come out of research, and to develop better ways of delivering impact.
The growing trend for accountability within the university system is not limited to research and is mirrored in assessments of teaching quality, which now feed into the evaluation of universities to ensure fee-paying students' satisfaction. In demonstrating research impact we can provide accountability upwards to funders and downwards to users on a project and strategic basis (Kelly and McNicoll 2011). Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes, and this will influence the way in which evaluation is approached.
It is important to emphasize that 'Not everyone within the higher education sector itself is convinced that evaluation of higher education activity is a worthwhile task' (Kelly and McNicoll 2011). Once plans for the new assessment of university research were released, the University and College Union (2011) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals. This petition was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society (University and College Union 2011). Impact assessments raise concerns over the steer of research towards disciplines and topics in which impact is more easily evidenced and that provide economic impacts, which could subsequently lead to a devaluation of 'blue skies' research.
Johnston (1995) notes that by developing relationships between researchers and industry, new research strategies can be developed. This raises the questions of whether UK business and industry should not invest in the research that will deliver them impacts, and who will fund basic research if not the government. Donovan (2011) asserts that there should be no disincentive for conducting basic research. By asking academics to consider the impact of the research they undertake, and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. Professor James Ladyman at the University of Bristol, a vocal adversary of awarding funding based on the assessment of research impact, has been quoted as saying that '... inclusion of impact in the REF will create "selection pressure", promoting academic research that has "more direct economic impact" or which is easier to explain to the public' (Corbyn 2009).

Despite the concerns raised, the broader socio-economic impacts of research will be included and count for 20% of the overall research assessment as part of the REF in 2014. From an international perspective, this represents a step change in the comprehensive nature to which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. Understanding what impact looks like across the various strands of research, and the variety of indicators and proxies used to evidence impact, will be important to developing a meaningful assessment.
3 Evaluating research impact
What are the methodologies and frameworks that have been employed globally to evaluate research impact, and how do these compare? The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review (Grant 2006). Evidence of academic impact may be derived through various bibliometric methods, one example of which is the h-index, which incorporates factors such as the number of publications and citations. These metrics may be used in the UK to understand the benefits of research within academia and are often incorporated into the broader perspective of impact seen internationally, for example within Excellence in Research for Australia and using STAR Metrics in the USA, in which quantitative measures are used to assess impact, for example publications, citations, and research income. These 'traditional' bibliometric techniques can be regarded as giving only a partial picture of full impact (Bornmann and Marx 2013), with no link to causality. Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment, are each considered by some (Vonortas and Link 2012) to have shortcomings when used to measure impacts.
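As a concrete illustration of the kind of bibliometric indicator referred to above, the h-index can be computed directly from a list of citation counts: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch (the function name and the citation figures are our own, for illustration only):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers with >= `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```

As the text notes, such a figure says nothing about causality or wider socio-economic benefit; it summarizes academic uptake only.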
Incorporating assessment of the wider socio-economic impact began using metrics-based indicators, such as Intellectual Property registered and commercial income generated (Australian Research Council 2008). In the UK, more sophisticated assessments of impact incorporating wider socio-economic benefits were first investigated within the fields of Biomedical and Health Sciences (Grant 2006), an area of research that wanted to be able to justify the significant investment it received. Frameworks for assessing impact have been designed and are employed at an organizational level, addressing the specific requirements of the organization and its stakeholders. As a result, numerous and widely varying models and frameworks for assessing impact exist. Here we outline a few of the most notable models that demonstrate the contrast in approaches available.
The Payback Framework is possibly the most widely used and adapted model for impact assessment (Wooding et al 2007; Nason et al 2008), developed during the mid-1990s by Buxton and Hanney, working at Brunel University. It incorporates both academic outputs and wider societal benefits (Donovan and Hanney 2011) to assess outcomes of health sciences research. The Payback Framework systematically links research with the associated benefits (Scoble et al 2010; Hanney and Gonzalez-Block 2011) and can be thought of in two parts: first, a model that allows the research and subsequent dissemination process to be broken into specific components, within which the benefits of research can be studied; and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed (Hanney and Gonzalez-Block 2011). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institute of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong (Bernstein et al 2006; Nason et al 2008; CAHS 2009; Spaapen et al nd). The Payback Framework enables health and medical research and impact to be linked, and the process by which impact occurs to be traced. For more extensive reviews of the Payback Framework, see Davies et al (2005), Wooding et al (2007), Nason et al (2008), and Hanney and Gonzalez-Block (2011).

A very different approach, known as Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context and has a central theme of capturing 'productive interactions' between researchers and stakeholders by analysing the networks that evolve during research programmes (Spaapen and Drooge 2011; Spaapen et al nd). SIAMPI is based on the widely held assumption that interactions between researchers and stakeholders are an important pre-requisite to achieving impact (Donovan 2011; Hughes and Martin 2012; Spaapen et al nd). This framework is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. SIAMPI has been used within the Netherlands Institute for Health Services Research (SIAMPI nd). 'Productive interactions', which can perhaps be viewed as instances of knowledge exchange, are widely valued and supported internationally as mechanisms for enabling impact, and are often supported financially, for example by Canada's Social Sciences and Humanities Research Council, which aims to support knowledge exchange (financially) with a view to enabling long-term impact. In the UK, the Department for Business, Innovation and Skills provided funding of £150 million for knowledge exchange in 2011–12 to 'help universities and colleges support the economic recovery and growth and contribute to wider society' (Department for Business, Innovation and Skills 2012). While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less-suitable method where showcasing the benefits of research is desirable or where justification of funding based on impact is required.
The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework a pilot assessment was undertaken by the Australian Technology Network. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel (Duryea et al 2007), who concluded that the researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from their research (Duryea et al 2007). To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF.
In developing the UK REF, HEFCE commissioned a report in 2009 from RAND to review international practice for assessing research impact and provide recommendations to inform the development of the REF. RAND selected four frameworks to represent the international arena (Grant et al 2009). One of these, the RQF, they identified as providing a 'promising basis for developing an impact approach for the REF' using the case study approach. HEFCE developed an initial methodology that was then tested through a pilot exercise. The case study approach recommended by the RQF was combined with 'significance' and 'reach' as criteria for assessment. The criteria for assessment were also supported by a model developed by Brunel for 'measurement' of impact that used similar measures, defined as depth and spread. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. Evaluation of impact in terms of reach and significance allows all disciplines of research and types of impact to be assessed side-by-side (Scoble et al 2010).
The range and diversity of frameworks developed reflect the variation in purpose of evaluation, including the stakeholders for whom the assessment takes place, along with the type of impact and evidence anticipated. The most appropriate type of evaluation will vary according to the stakeholder whom we are wishing to inform. Studies (Buxton, Hanney and Jones 2004) into the economic gains from biomedical and health sciences determined that different methodologies provide different ways of considering economic benefits. A discussion on the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006).
Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues. For example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact (Scoble et al, working paper).
4 Impact and the REF
Although based on the RQF, the REF did not adopt all of the suggestions held within, for example the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable (Donovan 2008). In 2009–10, the REF team conducted a pilot study for the REF involving 29 institutions submitting case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) (REF2014 2010). These case studies were reviewed by expert panels and, as with the RQF, they found that it was possible to assess impact and develop 'impact profiles' using the case study approach (REF2014 2010).
From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:

(1) Outputs
(2) Impact
(3) Environment
Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment; and second, using impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a). HEFCE indicated that impact should merit a 25% weighting within the REF (REF2014 2011b); however, this has been reduced for the 2014 REF to 20%, perhaps as a result of feedback and lobbying, for example from the Russell Group and Million+ group of universities, who called for impact to count for 15% (Russell Group 2009; Jump 2011), and following guidance from the expert panels undertaking the pilot exercise, who suggested that during the 2014 REF impact assessment would be in a developmental phase and that a lower weighting for impact would be appropriate, with the expectation that this would be increased in subsequent assessments (REF2014 2010).
The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although it can be envisaged that the range of impacts derived from research of different disciplines are likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example from business development to cultural changes or saving lives. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impact from specific disciplines.
Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that it is expected that impact and evidence will vary according to discipline (REF2014 2012). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature, by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group's impact handbook (LSE nd), a review of the social and economic impacts arising from the arts produced by Reeves (2002), and a review by Kuruvilla et al (2006) on the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline.
5 The challenges of impact evaluation
What are the challenges associated with understanding and evaluating research impact? In endeavouring to assess or evaluate impact, a number of difficulties emerge, and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.
5.1 Time lag
The time lag between research and impact varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In development of the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult. In the UK, the Russell Group universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact (Russell Group 2009). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this time window may be insufficient in some instances, with architecture being granted an additional 5-year period (REF2014 2012); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF, as to what the view of the panel will be and whether, if deemed inappropriate, this will render the case study 'unclassified'.
5.2 The developmental nature of impact
Impact is not static; it will develop and change over time, and this development may be an increase or decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, and then regulatory approval is granted before the drug is used to deliver potential health benefits. Clearly there is the possibility that the potential new drug will fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits; the time at which an impact assessment takes place will thus influence the degree of impact that has taken place. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again, the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporal and dissipated impact will be valued in comparison with longer-term impact.
5.3 Attribution
Impact is derived not only from targeted research but from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual, piece of research, funding, strategy, or organization to an impact is not straightforward. Husbands-Fealing suggests that to assist identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, which shows how later phases result from earlier ones. Such a framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work within spanning research communities (Duryea et al 2007). Concerns over how to attribute impacts have been raised many times (The Allen Consulting Group 2005; Duryea et al 2007; Grant et al 2009), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.
Figure 1, replicated from Hughes and Martin (2012), illustrates how the ease with which impact can be attributed decreases with time, whereas the impact or effect of complementary assets increases. This highlights the problem that it may take a considerable amount of time for the full impact of a piece of research to develop, but because of this time, and the increase in complexity of the networks involved in translating the research and interim impacts, it is more difficult to attribute and link back to a contributing piece of research.
This presents particular difficulties in research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking, that 'one is able to see further by standing on the shoulders of giants'.
5.4 Knowledge creep
It is acknowledged that one of the outcomes of developing new knowledge through research can be 'knowledge creep', where new data or information becomes accepted and gets absorbed over time. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al 2005; Wooding et al 2007). This is recognized as being particularly problematic within the social sciences, where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a 'distinctive contribution' (REF2014 2011a). The difficulty then is how to determine what the contribution has been in the absence of adequate evidence, and how we ensure that research that results in impacts that cannot be evidenced is valued and supported.
5.5 Gathering evidence
Gathering evidence of the links between research and impact is not only a challenge where that evidence is lacking. The introduction of impact assessments, with the requirement to collate evidence retrospectively, poses difficulties because evidence, measurements, and baselines have in many cases not been collected and may no longer be available. While, looking forward, we will be able to reduce this problem in the future, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.
6 Developing systems and taxonomies for capturing impact
Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Decker et al (2007) surveyed researchers in the top US research institutions during 2005; the survey of more than 6,000 researchers found that, on average, more than 40% of their time was spent doing administrative tasks. It is desirable that the assignation of administrative tasks to researchers is limited, and therefore, to assist the tracking and collating of impact data, systems are being developed involving numerous projects and developments internationally, including STAR Metrics in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil (Lane 2010; Mugabushaka and Papazoglou 2012).
Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System (Mugabushaka and Papazoglou 2012), are being developed as CERIF-compatible.
In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and the evidence for it, that can be used universally is seen to be very valuable. However, the Achilles' heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that reward.
Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).

Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF, by Research Councils UK, in unpublished draft case studies undertaken at King's College London, or outlined in relevant publications (MICE Project nd). A taxonomy of impact categories was then produced, onto which impact could be mapped. What emerged on testing the MICE taxonomy (Cooke and Nadim 2011), by mapping impacts from case studies, was that detailed categorization of impact was found to be too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was 'a unique collection'. Where quantitative data were available, for example audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmith report (Cooke and Nadim 2011) recommended making indicators 'value free', enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The Goldsmith report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation; re-use and influence; collaboration and boundary work; and innovation and invention.
While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can be placed. It is worth considering the degree to which indicators are defined, and providing broader definitions with greater flexibility.
It is possible to incorporate both metrics and narratives within systems, for example within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary (Davies et al 2005). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al nd). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research and the various contributions to impact made during the pathway.
Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances controls are not feasible, as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.
It is now possible to use data-mining tools to extract specific data from narratives or unstructured data (Mugabushaka and Papazoglou 2012). This is being done for collation of academic impact and outputs, for example by Research Portfolio Online Reporting Tools, which uses PubMed and text mining to cluster research projects, and STAR Metrics in the USA, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). These techniques have the potential to provide a transformation in data capture and impact assessment (Jones and Grant 2013). It is acknowledged in the article by Mugabushaka and Papazoglou (2012) that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture the interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable. If knowledge exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or a diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
The transition to routine capture of impact data not only requires the development of tools and systems to help with implementation, but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.
7 Indicators, evidence, and impact within systems
What indicators, evidence, and impacts need to be captured within developing systems? There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia for Advancing Standards in Research Administration Information, for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators, which can be incorporated into systems internationally, and seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.
7.1 Metrics
Metrics have commonly been used as a measure of impact: for example, in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
Perhaps SROI indicates the desire of some organizations to be able to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used for demonstration of worth. This is a metric that has been used within the charitable sector (Berg and Mansson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012). More details on SROI can be found in 'A Guide to Social Return on Investment', produced by The SROI Network (2012).
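At its core, an SROI ratio reduces to discounting the monetized value of outcomes and dividing by the investment. The sketch below is illustrative only, not the full method set out in 'A Guide to Social Return on Investment': it assumes the annual outcome values have already been estimated and monetized (the hardest step in practice), and uses a 3.5% discount rate as an assumed default, in line with common UK public-sector convention.

```python
def sroi(outcome_values_by_year, investment, discount_rate=0.035):
    """Basic SROI ratio: present value of monetized social outcomes
    per unit invested.

    outcome_values_by_year: monetized outcome values for years 1, 2, ...
    investment: total investment in the activity
    discount_rate: assumed annual discount rate (3.5% by default)
    """
    present_value = sum(
        value / (1 + discount_rate) ** year
        for year, value in enumerate(outcome_values_by_year, start=1)
    )
    return present_value / investment
```

So an activity costing 500 that yields outcomes valued at 1,000 one year later has an (undiscounted) SROI of 2.0, i.e. two units of social value per unit invested; the real methodology also requires adjustments for deadweight, attribution, and drop-off.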
Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately evidence the qualitative impacts that take place, and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:
(1) The full impact will not be realized, as we focus on easily quantifiable indicators.
(2) We will focus attention towards generating results that enable boxes to be ticked, rather than delivering real value for money and innovative research.
(3) They risk being monetized or converted into a lowest common denominator in an attempt to compare, say, the cost of a new theatre against that of a hospital.
7.2 Narratives
Narratives can be used to describe impact: they enable a story to be told, the impact to be placed in context, and good use to be made of qualitative information. They are often written with a reader from a particular stakeholder group in mind, and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a complete picture of impact can be developed, again from a particular perspective, but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
By allowing impact to be placed in context, narratives answer the 'so what?' question that can result from quantitative data analyses; but is there a risk that the full picture may not be presented, in order to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which changes of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed.
The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
7.4 Citations (outside of academia) and documentation
Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or reference to a piece of research being cited
Table 1. The advantages and disadvantages of the case study approach

Benefits | Considerations
Uses quantitative and qualitative data | Automated collation of evidence is difficult
Allows evidence to be contextualized and a story told | Incorporating perspective can make it difficult to assess critically
Enables assessment in the absence of quantitative data | Time-consuming to prepare and assess
Allows collation of unique datasets | Difficult to compare like with like
Preserves distinctive account or disciplinary perspective | Rewards those who can write well and/or afford to pay for external input
within the media. A collation of several indicators of impact may be enough to convince that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered alongside other evidence as contributing to, or an indicator of, impact.
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
8 Conclusions and recommendations
The understanding of the term 'impact' varies considerably, and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study does present evidence from a particular perspective, and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. Over the past year, a number of new posts have been created within universities, such as for writing impact case studies, and a number of companies now offer this as a contract service. A key concern here is that we could find that universities which can afford to employ either consultants or impact 'administrators' will generate the best case studies.
The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts, to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database, and to ensure that the time and capability required for the capture of information are considered.

Figure 2. Overview of the types of information that systems need to capture and link.

Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools enabling researchers to capture much of this would be valuable. However, it must be remembered that, in the case of the UK REF, impact is only considered where it is based on research that has taken place within the institution submitting the case study. It is therefore in an institution's interest to have a process by which all the necessary information is captured, to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link:
(1) Research findings, including outputs (e.g. presentations and publications)
(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)
(4) Research developments (based on stakeholder input and discussions)
(5) Outcomes (e.g. commercial and cultural citations)
(6) Impacts (changes, e.g. behavioural and economic)
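One minimal way to realize this kind of linking is to store each record with pointers to the upstream records it builds on, so that an impact can be traced back to the originating research outputs even after the researcher has left the institution. The schema and traversal below are a hypothetical sketch under those assumptions, not a description of any existing system such as Researchfish; the record kinds mirror the six categories listed above.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """A node in a research-to-impact evidence graph (hypothetical schema)."""
    id: str
    kind: str         # 'output', 'interaction', 'feedback',
                      # 'development', 'outcome', or 'impact'
    description: str
    links: list = field(default_factory=list)  # ids of upstream records

def trace_to_research(record_id, records):
    """Walk upstream links from an impact record and return the ids of the
    research outputs it ultimately rests on."""
    seen, outputs = set(), []
    stack = [record_id]
    while stack:
        rid = stack.pop()
        if rid in seen:
            continue
        seen.add(rid)
        rec = records[rid]
        if rec.kind == "output":
            outputs.append(rec.id)
        stack.extend(rec.links)
    return outputs
```

Because links may branch, a single impact can legitimately trace back to several outputs, reflecting the 'contributions to impact made during the pathway' rather than one-to-one attribution.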
Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove a valuable use of time and resources if we can take measures to ensure that assessment attempts do not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or so impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10]
References

Australian Research Council (2008) ERA Indicator Principles (Publ. online) <httpwwwarcgovaupdfERA_Indicator_Principlespdf> accessed 6 Oct 2012.
Berg, L. O. and Mansson, C. (2011) A White Paper on Charity Impact Measurement (Publ. online) <httpwwwcharitystarorgwp-contentuploads201105Return_on_donations_a_white_paper_on_charity_impact_measurementpdf> accessed 6 Aug 2012.
Bernstein, A. et al. (2006) A Framework to Measure the Impact of Investments in Health Research (Publ. online) <httpwwwoecdorgscienceinnovationinsciencetechnologyandindustry37450246pdf> accessed 26 Oct 2012.
Bornmann, L. and Marx, W. (2013) 'How Good is Research Really?', European Molecular Biology Organization (EMBO) Reports, 14: 226–30.
Buxton, M., Hanney, S. and Jones, T. (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review', Bulletin of the World Health Organization, 82: 733–9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research (Publ. online) <httpwwwcahs-acsscawp-contentuploads201109ROI_FullReportpdf> accessed 26 Oct 2012.
Cooke, J. and Nadim, T. (2011) Measuring Impact Under CERIF at Goldsmiths (Publ. online) <httpmicecerchkclacukwp-uploads201107MICE_report_Goldsmiths_finalpdf> accessed 26 Oct 2012.
Corbyn, Z. (2009) 'Anti-Impact Campaign's "Poster Boy" Sticks up for the Ivory Tower', Times Higher Education (Publ. online) <httpwwwtimeshighereducationcoukstoryaspstoryCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies, H., Nutley, S. and Walter, I. (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues (Publ. online) <httpwwwodiorgukrapidEventsESRCdocsbackground_paperpdf> accessed 26 Oct 2012.
Decker, R. et al. (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty (Publ. online) <httpwwwiscintelligencecomarchivos_subidosusfacultyburden_5pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012-2013 (Publ. online) <httpwwwbisgovukAbout> accessed 24 Oct 2012.
Donovan, C. (2008) 'The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental and cultural returns of publicly funded research'. In: Coryn, C. L. S. and Scriven, M. (eds) Reforming the Evaluation of Research, New Directions for Evaluation, 118, pp. 47–60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support, but a State-of-the-Art Approach to Measurement is Needed', LSE Blog (Publ. online) <httpblogslseacukimpactofsocialsciencestagclaire-donovan> accessed 26 Oct 2012.
Donovan, C. and Hanney, S. (2011) 'The "Payback Framework" Explained', Research Evaluation, 20: 181–3.
Duryea, M., Hochman, M. and Parfitt, A. (2007) 'Measuring the Impact of Research', Research Global, 27: 8–9 (Publ. online) <httpwwwatneduaudocsResearch20Global20-20Measuring20the20impact20of20researchpdf> accessed 26 Oct 2012.
Ebrahim, A. and Rangan, K. (2010) 'The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance', Working Paper, Harvard Business School (Publ. online) <httpwwwhbseduresearchpdf10-099pdf> accessed 26 Oct 2012.
European Science Foundation (2009) Evaluation in National Research Funding Agencies: Approaches, Experiences and Case Studies (Publ. online) <httpwwwesforgindexphpeID=tx_ccdamdl_file&p[file]=25668&p[dl]=1&p[pid]=6767&p[site]=European20Science20Foundation&p[t]=1351858982&hash=93e987c5832f10aeee3911bac23b4e0f&l=en> accessed 26 Oct 2012.
Jones, M. M. and Grant, J. (2013) 'Methodologies for Assessing and Evidencing Research Impact', RAND Europe. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Grant, J. (2006) Measuring the Benefits from Research, RAND Europe (Publ. online) <httpwwwrandorgpubsresearch_briefs2007RAND_RB9202pdf> accessed 26 Oct 2012.
Grant, J. et al. (2009) Capturing Research Impacts: A Review of International Practice (Publ. online) <httpwwwrandorgpubsdocumented_briefings2010RAND_DB578pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004–2014'. London: HM Treasury.
Hanney, S. and Gonzalez-Block, M. A. (2011) 'Yes, Research can Inform Health Policy; But can we Bridge the "Do-Knowing it's been Done" Gap?', Health Research Policy and Systems, 9: 23.
Hughes, A. and Martin, B. (2012) 'Council for Industry and Higher Education, UK Innovation Research Centre: Enhancing Impact: The Value of Public Sector R&D', Summary report (Publ. online) <httpukircacukobjectreport8025docCIHE_0612ImpactReport_summarypdf> accessed 26 Oct 2012.
Husbands-Fealing, K. (2013) 'Assessing impacts of higher education systems', University of Minnesota. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Johnston, R. (1995) 'Research Impact Quantification', Scientometrics, 34: 415–26.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF', Times Higher Education (Publ. online) <httpwwwtimeshighereducationcoukstoryaspstoryCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) 'National Co-ordinating Centre for Public Engagement', Through a Glass Darkly: Measuring the Social Value of Universities (Publ. online) <httpwwwpublicengagementacuksitesdefaultfiles8009620NCCPE20Social20Value20Reportpdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework', BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) 'Maximising the Impacts of Your Research: A Handbook for Social Scientists' (Publ. online) <httpwww2lseacukgovernmentresearchresgroupsLSEPublicPolicyDocsLSE_Impact_Handbook_April_2011pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific', Nature, 464: 488–9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog (Publ. online) <httpmicecerchkclacuk> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information systems of research funding agencies in the "era of the Big Data": The case study of the Research Information System of the European Research Council'. In: Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (June 6–9, 2012), pp. 103–12. Prague, Czech Republic.
Nason, E. et al. (2008) 'RAND Europe', Health Research: Making an Impact: The Economic and Social Benefits of HRB-funded Research. Dublin: Health Research Board.
Reeves, M. (2002) 'Arts Council England', Measuring the Economic and Social Impact of the Arts: A Review (Publ. online) <httpwwwartscouncilorgukmediauploadsdocumentspublications340pdf> accessed 26 Oct 2012.
REF 2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels (Publ. online) <httpwwwrefacukmediarefcontentpubresearchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanelsre01_10pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions (Publ. online) <httpwwwrefacukmediarefcontentpubassessmentframeworkandguidanceonsubmissions02_11pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact (Publ. online) <httpwwwrefacukmediarefcontentpubassessmentframeworkandguidanceonsubmissionsGOS20including20addendumpdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods (Publ. online) <httpwwwrefacukmediarefcontentpubpanelcriteriaandworkingmethods01_12pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation (Publ. online) <httpwwwrussellgroupacukuploadsREF-consultation-response-FINAL-Dec09pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot (Publ. online) <httpwwwlibsearchcomvisit1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research', Journal of Higher Education Policy and Management, 32: 499–511.
SIAMPI Website (2011) (Publ. online) <httpwwwsiampieuPagesSIA12625bGFuZz1FTkchtml> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report (Publ. online) <httpwwwsiampieuContentSIAMPISIAMPI_Final20reportpdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment (Publ. online) <httpwwwthesroinetworkorgpublicationsdoc_details241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals (Publ. online) <httpwwwucuorgukmediapdfnqucu_REFstatement_finalsignaturespdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their function', The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) 'RAND Europe', Policy and Practice Impacts of Research Funded by the Economic and Social Research Council (Publ. online) <httpwwwesrcacuk_imagesCase_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563pdf> accessed 26 Oct 2012.
evaluation of universities to ensure fee-paying students' satisfaction. In demonstrating research impact we can provide accountability upwards to funders, and downwards to users, on both a project and a strategic basis (Kelly and McNicoll 2011). Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes, and this will influence the way in which evaluation is approached.
It is important to emphasize that 'Not everyone within the higher education sector itself is convinced that evaluation of higher education activity is a worthwhile task' (Kelly and McNicoll 2011). The University and College Union (2011) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals once plans for the new assessment of university research were released. This petition was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society (University and College Union 2011). Impact assessments raise concerns over the steer of research towards disciplines and topics in which impact is more easily evidenced and that provide economic impacts, which could subsequently lead to a devaluation of 'blue skies' research.
Johnston (1995) notes that by developing relationships between researchers and industry, new research strategies can be developed. This raises the questions of whether UK business and industry should not invest in the research that will deliver them impacts, and of who will fund basic research, if not the government. Donovan (2011) asserts that there should be no disincentive for conducting basic research. By asking academics to consider the impact of the research they undertake, and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. Professor James Ladyman of the University of Bristol, a vocal adversary of awarding funding based on the assessment of research impact, has been quoted as saying that 'inclusion of impact in the REF will create "selection pressure", promoting academic research that has "more direct economic impact" or which is easier to explain to the public' (Corbyn 2009).

Despite the concerns raised, the broader socio-economic
impacts of research will be included and will count for 20% of the overall research assessment as part of the REF in 2014. From an international perspective this represents a step change in the comprehensiveness with which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. Understanding what impact looks like across the various strands of research, and the variety of indicators and proxies used to evidence it, will be important to developing a meaningful assessment.
3 Evaluating research impact
What are the methodologies and frameworks that have been employed globally to evaluate research impact, and how do these compare? The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review (Grant 2006). Evidence of academic impact may be derived through various bibliometric methods, one example of which is the h-index, which incorporates factors such as the number of publications and citations. These metrics may be used in the UK to understand the benefits of research within academia, and are often incorporated into the broader perspective of impact seen internationally, for example within Excellence in Research for Australia and STAR METRICS in the USA, in which quantitative measures are used to assess impact: for example publications, citations, and research income. These 'traditional' bibliometric techniques can be regarded as giving only a partial picture of full impact (Bornmann and Marx 2013), with no link to causality. Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment, are each considered by some (Vonortas and Link 2013) to have shortcomings when used to measure impacts.
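For concreteness, the h-index mentioned above can be computed directly from a list of per-paper citation counts: it is the largest h such that the author has h papers with at least h citations each (Hirsch's definition). The sketch below illustrates the metric; the citation counts in the usage note are invented.

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have
    at least h citations (Hirsch's definition)."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h
```

For example, an author with papers cited 10, 8, 5, 4, and 3 times has an h-index of 4: four papers have at least four citations, but not five papers with at least five. The example also shows the metric's partial-picture character noted in the text: it says nothing about who is citing the work, or why.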
Incorporating assessment of the wider socio-economic impact began with metrics-based indicators, such as Intellectual Property registered and commercial income generated (Australian Research Council 2008). In the UK, more sophisticated assessments of impact, incorporating wider socio-economic benefits, were first investigated within the fields of Biomedical and Health Sciences (Grant 2006), an area of research that needed to be able to justify the significant investment it received. Frameworks for assessing impact have been designed and are employed at an organizational level, addressing the specific requirements of the organization and its stakeholders. As a result, numerous and widely varying models and frameworks for assessing impact exist. Here we outline a few of the most notable models that demonstrate the contrast in approaches available.
The Payback Framework is possibly the most widely used and adapted model for impact assessment (Wooding et al. 2007; Nason et al. 2008), developed during the mid-1990s by Buxton and Hanney, working at Brunel University. It incorporates both academic outputs and wider societal benefits (Donovan and Hanney 2011) to assess the outcomes of health sciences research. The Payback Framework systematically links research with the associated benefits (Scoble et al. 2010; Hanney and Gonzalez-Block 2011) and can be thought of in two parts: first, a model that allows the research and subsequent dissemination process to be broken into specific components, within which the benefits of research can be studied; and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed (Hanney and Gonzalez-Block 2011). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institutes of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong (Bernstein et al. 2006; Nason et al. 2008; CAHS 2009; Spaapen et al. n.d.). The Payback Framework enables health and medical research and impact to be linked, and the process by which impact occurs to be traced. For more extensive reviews of the Payback Framework, see Davies et al. (2005), Wooding et al. (2007), Nason et al. (2008), and Hanney and Gonzalez-Block (2011).

A very different approach, known as Social Impact
Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context, and has as its central theme the capture of 'productive interactions' between researchers and stakeholders, by analysing the networks that evolve during research programmes (Spaapen and Drooge 2011; Spaapen et al. n.d.). SIAMPI is based on the widely held assumption that interactions between researchers and stakeholders are an important pre-requisite to achieving impact (Donovan 2011; Hughes and Martin 2012; Spaapen et al. n.d.). This framework is intended to be used as a learning tool, to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. SIAMPI has been used within the Netherlands Institute for Health Services Research (SIAMPI n.d.). 'Productive interactions', which can perhaps be viewed as instances of knowledge exchange, are widely valued and supported internationally as mechanisms for enabling impact, and are often supported financially: for example, by Canada's Social Sciences and Humanities Research Council, which aims to support knowledge exchange (financially) with a view to enabling long-term impact. In the UK, the Department for Business, Innovation and Skills provided funding of £150 million for knowledge exchange in 2011–12 to 'help universities and colleges support the economic recovery and growth and contribute to wider society' (Department for Business, Innovation and Skills 2012). While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less suitable method where showcasing the benefits of research is desirable, or where justification of funding based on impact is required.
The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework a pilot assessment was undertaken by the Australian Technology Network. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel (Duryea et al. 2007). The panel concluded that the researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from their research (Duryea et al. 2007). To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF.
In developing the UK REF, HEFCE commissioned a report in 2009 from RAND to review international practice for assessing research impact and to provide recommendations to inform the development of the REF. RAND selected four frameworks to represent the international arena (Grant et al. 2009). One of these, the RQF, they identified as providing a 'promising basis for developing an impact approach for the REF' using the case study approach. HEFCE developed an initial methodology that was then tested through a pilot exercise. The case study approach recommended by the RQF was combined with 'significance' and 'reach' as criteria for assessment. The criteria for assessment were also supported by a model developed by Brunel for 'measurement' of impact, which used similar measures, defined as depth and spread. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. Evaluation of impact in terms of reach and significance allows all disciplines of research, and all types of impact, to be assessed side by side (Scoble et al. 2010).
The range and diversity of frameworks developed reflect the variation in the purpose of evaluation, including the stakeholders for whom the assessment takes place, along with the type of impact and evidence anticipated. The most appropriate type of evaluation will vary according to the stakeholder whom we wish to inform. Studies (Buxton, Hanney and Jones 2004) into the economic gains from biomedical and health sciences research determined that different methodologies provide different ways of
considering economic benefits. A discussion of the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006).
Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues; for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact (Scoble et al., working paper).
4 Impact and the REF
Although based on the RQF, the REF did not adopt all of the suggestions held within it; for example, the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable (Donovan 2008). In 2009–10, the REF team conducted a pilot study for the REF involving 29 institutions submitting case studies to one of five units of assessment (in clinical medicine; physics; earth systems and environmental sciences; social work and social policy; and English language and literature) (REF2014 2010). These case studies were reviewed by expert panels and, as with the RQF, they found that it was possible to assess impact and develop 'impact profiles' using the case study approach (REF2014 2010).
From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:
(1) Outputs
(2) Impact
(3) Environment
Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment; and second, using impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a). HEFCE indicated that impact should merit a 25% weighting within the REF (REF2014 2011b); however, this has been reduced for the 2014 REF to 20%, perhaps as a result of feedback and lobbying, for example from the Russell Group and Million+ group of universities, who called for impact to count for 15% (Russell Group 2009; Jump 2011), and following guidance from the expert panels undertaking the pilot exercise, who suggested that during the 2014 REF impact assessment would be in a developmental phase and that a lower weighting for impact would be appropriate, with the expectation that this would be increased in subsequent assessments (REF2014 2010).
The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although it can be envisaged that the ranges of impacts derived from research of different disciplines are likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example from business development to cultural changes or saving lives. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared, rather than impact from specific disciplines.
Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that impact and evidence are expected to vary according to discipline (REF2014 2012). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group's impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (2002), and a review by Kuruvilla et al. (2006) on the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline.
5 The challenges of impact evaluation
What are the challenges associated with understanding and evaluating research impact? In endeavouring to assess or evaluate impact, a number of difficulties emerge, and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.
5.1 Time lag
The time lag between research and impact varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In development of the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult. In the UK, the Russell Group universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact (Russell Group 2009). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this time window may be insufficient in some instances, with architecture being granted an additional 5-year period (REF2014 2012); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate. This, however, poses difficult decisions when submitting a case study to the REF, as to what the view of the panel will be and whether, if deemed inappropriate, this will render the case study 'unclassified'.
5.2 The developmental nature of impact
Impact is not static: it will develop and change over time, and this development may be an increase or a decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, and then regulatory approval is granted before the drug is used to deliver potential health benefits. Clearly there is the possibility that the potential new drug will fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits; the time at which an impact assessment takes place will, however, influence the degree of impact that has taken place. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again, the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporal and dissipated impact will be valued in comparison with longer-term impact.
5.3 Attribution
Impact is derived not only from targeted research but from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual, piece of research, funding, strategy, or organization to an impact is not straightforward. Husbands-Fealing suggests that, to assist identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, which shows how later phases result from earlier ones. Such a framework should not be linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work spanning research communities (Duryea et al. 2007). Concerns over how to attribute impacts have been raised many times (The Allen Consulting Group 2005; Duryea et al. 2007; Grant et al. 2009), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.
Figure 1, replicated from Hughes and Martin (2012), illustrates how the ease with which impact can be attributed decreases with time, whereas the impact or effect of complementary assets increases. This highlights the problem that it may take a considerable amount of time for the full impact of a piece of research to develop but, because of this time and the increase in complexity of the networks involved in translating the research and interim impacts, it becomes more difficult to attribute and link impact back to a contributing piece of research.
This presents particular difficulties in research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking, that 'one is able to see further by standing on the shoulders of giants'.
5.4 Knowledge creep
It is acknowledged that one of the outcomes of developing new knowledge through research can be 'knowledge creep', where new data or information become accepted and get absorbed over time. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al. 2005; Wooding et al. 2007). This is recognized as being particularly problematic within the social sciences, where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a 'distinctive contribution' (REF2014 2011a). The difficulty then is how to determine what the contribution has been in the absence of adequate evidence, and how we ensure that research that results in impacts that cannot be evidenced is valued and supported.
5.5 Gathering evidence
Gathering evidence of the links between research and impact is not only a challenge where that evidence is lacking. The introduction of impact assessments, with the requirement to collate evidence retrospectively, poses difficulties because evidence, measurements, and baselines have in many cases not been collected and may no longer be available. While, looking forward, we will be able to reduce this problem in the future, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.
6 Developing systems and taxonomies for capturing impact
Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Decker et al. (2007) surveyed researchers in the US top research institutions during 2005; the survey of more than 6,000 researchers found that, on average, more than 40% of their time was spent doing administrative tasks. It is desirable that the assignation of administrative tasks to researchers is limited and, therefore, to assist the tracking and collating of impact data, systems are being developed through numerous projects and developments internationally, including Star Metrics in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil (Lane 2010; Mugabushaka and Papazoglou 2012).
Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System (Mugabushaka and Papazoglou 2012), are being developed as CERIF-compatible.
In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and the evidence for it, that can be used universally is seen to be very valuable. However, the Achilles heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that reward.
Attempts have been made to categorize impact evidence and data. For example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF by Research Councils UK, in unpublished draft case studies undertaken at King's College London, or outlined in relevant publications (MICE Project n.d.). A taxonomy of impact categories was then produced, onto which impact could be mapped. What emerged on testing the MICE taxonomy (Cooke and Nadim 2011) by mapping impacts from case studies was that detailed categorization of impact was too prescriptive. Every piece of research results in a unique tapestry of impact and, despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was 'a unique collection'. Where quantitative data were available, for example audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmiths report (Cooke and Nadim 2011) recommended making indicators 'value free', enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The Goldsmiths report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation; re-use and influence; collaboration and boundary work; and innovation and invention.

Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).
While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can be placed. It is worth considering the degree to which indicators are defined, and providing broader definitions with greater flexibility.
It is possible to incorporate both metrics and narratives within systems, for example within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary (Davies et al. 2005). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research and the various contributions to impact made during the pathway.
Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances controls are not feasible, as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.
It is now possible to use data-mining tools to extract specific data from narratives or unstructured data (Mugabushaka and Papazoglou 2012). This is being done for collation of academic impact and outputs, for example by the Research Portfolio Online Reporting Tools, which use PubMed and text mining to cluster research projects, and by STAR METRICS in the USA, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). These techniques have the potential to provide a transformation in data capture and impact assessment (Jones and Grant 2013). It is acknowledged in the article by Mugabushaka and Papazoglou (2012) that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable. If knowledge exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or a diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
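The kind of text mining mentioned above can be illustrated in miniature: project descriptions are turned into TF-IDF vectors and grouped by cosine similarity. The sketch below is a deliberately minimal, standard-library illustration of the principle only; the example texts, the similarity threshold, and the greedy grouping are our own assumptions and bear no relation to the actual pipelines used by STAR METRICS, the ERC, or the Research Portfolio Online Reporting Tools.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a TF-IDF weight per term for each (whitespace-tokenized) document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for doc in tokenized for term in set(doc))  # document frequency
    n = len(docs)
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def cluster(docs, threshold=0.05):
    """Greedy single-link grouping: join a document to the first cluster it resembles."""
    vecs = tfidf_vectors(docs)
    clusters = []  # each cluster is a list of document indices
    for i, v in enumerate(vecs):
        for c in clusters:
            if any(cosine(v, vecs[j]) >= threshold for j in c):
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Hypothetical project descriptions: the two heart-disease projects share
# vocabulary and group together; the archival project stands alone.
projects = [
    "gene therapy trial for inherited heart disease",
    "heart disease risk factors in gene expression data",
    "medieval manuscript digitisation and archiving",
]
print(cluster(projects))  # → [[0, 1], [2]]
```

Real systems operate over millions of records with far richer features (investigators, grants, citations), but the underlying idea, turning unstructured narrative into comparable vectors, is the same.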
The transition to routine capture of impact data requires not only the development of tools and systems to help with implementation, but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.
7 Indicators, evidence, and impact within systems
What indicators, evidence, and impacts need to be captured within developing systems? There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia for Advancing Standards in Research Administration Information, for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators, which can be incorporated into systems internationally, and seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.
7.1 Metrics
Metrics have commonly been used as a measure of impact, for example in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
The use of social return on investment (SROI) perhaps indicates the desire of some organizations to be able to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used for demonstration of worth. This is a metric that has been used within the charitable sector (Berg and Mansson 2011) and also features as evidence in the REF guidance for Panel D (REF2014 2012). More details on SROI can be found in 'A Guide to Social Return on Investment' produced by The SROI Network (2012).
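The arithmetic behind an SROI figure is straightforward: monetized benefits are projected, discounted to present value, and divided by the investment. The sketch below uses entirely hypothetical figures and a 3.5% discount rate (the standard rate in UK government appraisal guidance); real SROI analyses, as the SROI Network guide describes, also adjust benefits for deadweight, attribution, and displacement before this step.

```python
def sroi_ratio(annual_benefits, investment, discount_rate=0.035):
    """Return the SROI ratio: present value of monetized benefits / investment.

    annual_benefits lists the projected monetized benefit for each future
    year (year 1, year 2, ...). All figures used here are hypothetical.
    """
    present_value = sum(
        benefit / (1 + discount_rate) ** year
        for year, benefit in enumerate(annual_benefits, start=1)
    )
    return present_value / investment

# A hypothetical programme: 120,000 invested, 60,000 per year of monetized
# social benefit over three years.
ratio = sroi_ratio([60_000, 60_000, 60_000], 120_000)
print(f"SROI = {ratio:.2f} : 1")
```

With these invented numbers the ratio comes out at about 1.40, i.e. roughly £1.40 of social value per £1 invested; the point of the sketch is the structure of the calculation, not the figures.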
Although metrics can provide evidence of quantitative changes or impacts arising from our research, they cannot adequately provide evidence of the qualitative impacts that take place, and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:

(1) the full impact will not be realized, as we focus on easily quantifiable indicators;
(2) we will focus attention towards generating results that enable boxes to be ticked, rather than delivering real value for money and innovative research;
(3) they risk being monetized or converted into a lowest common denominator in an attempt to compare, say, the cost of a new theatre against that of a hospital.
7.2 Narratives
Narratives can be used to describe impact; the use of narratives enables a story to be told, allows the impact to be placed in context, and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a complete picture of impact can be developed, again from a particular perspective, but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
By allowing impact to be placed in context, we answer the 'so what?' question that can result from quantitative data analyses. But is there a risk that the full picture may not be presented in order to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which change of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming and, again, it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed.
The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
7.4 Citations (outside of academia) and documentation
Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or reference to a piece of research being cited within the media. A collation of several indicators of impact may be enough to convince that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered alongside other evidence as contributing to, or an indicator of, impact.

Table 1. The advantages and disadvantages of the case study approach

Benefits | Considerations
Uses quantitative and qualitative data | Automated collation of evidence is difficult
Allows evidence to be contextualized and a story told | Incorporating perspective can make it difficult to assess critically
Enables assessment in the absence of quantitative data | Time-consuming to prepare and assess
Allows collation of unique datasets | Difficult to compare like with like
Preserves distinctive account or disciplinary perspective | Rewards those who can write well and/or afford to pay for external input
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
8 Conclusions and recommendations
The understanding of the term impact varies considerably and, as such, the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. Over the past year, a number of new posts have been created within universities, for example for writing impact case studies, and a number of companies are now offering this as a contract service. A key concern here is that we could find that universities which can afford to employ either consultants or impact 'administrators' will generate the best case studies.
Figure 2. Overview of the types of information that systems need to capture and link.

The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required for capture of information is considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools to enable researchers to capture much of this would be valuable. However, it must be remembered that, in the case of the UK REF, the only impact considered is that based on research that has taken place within the institution submitting the case study. It is therefore in an institution's interest to have a process by which all the necessary information is captured, to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link:
(1) Research findings, including outputs (e.g. presentations and publications)
(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)
(4) Research developments (based on stakeholder input and discussions)
(5) Outcomes (e.g. commercial and cultural citations)
(6) Impacts (changes, e.g. behavioural and economic)
Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10]
References
Australian Research Council (2008) ERA Indicator Principles (Pubd online) <http://www.arc.gov.au/pdf/ERA_Indicator_Principles.pdf> accessed 6 Oct 2012.
Berg, L. O. and Mansson, C. (2011) A White Paper on Charity Impact Measurement (Pubd online) <http://www.charitystar.org/wp-content/uploads/2011/05/Return_on_donations_a_white_paper_on_charity_impact_measurement.pdf> accessed 6 Aug 2012.
Bernstein, A. et al. (2006) A Framework to Measure the Impact of Investments in Health Research (Pubd online) <http://www.oecd.org/science/innovationinsciencetechnologyandindustry/37450246.pdf> accessed 26 Oct 2012.
Bornmann, L. and Marx, W. (2013) 'How Good is Research Really?', European Molecular Biology Organization (EMBO) Reports, 14: 226–30.
Buxton, M., Hanney, S. and Jones, T. (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review', Bulletin of the World Health Organization, 82: 733–9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research (Pubd online) <http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf> accessed 26 Oct 2012.
Cooke, J. and Nadim, T. (2011) Measuring Impact Under CERIF at Goldsmiths (Pubd online) <http://mice.cerch.kcl.ac.uk/wp-uploads/2011/07/MICE_report_Goldsmiths_final.pdf> accessed 26 Oct 2012.
Corbyn, Z. (2009) Anti-Impact Campaign's 'Poster Boy' Sticks up for the Ivory Tower, Times Higher Education (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies, H., Nutley, S. and Walter, I. (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues (Pubd online) <http://www.odi.org.uk/rapid/Events/ESRC/docs/background_paper.pdf> accessed 26 Oct 2012.
Decker, R. et al. (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty (Pubd online) <http://www.iscintelligence.com/archivos_subidos/usfacultyburden_5.pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012–2013 (Pubd online) <http://www.bis.gov.uk/About> accessed 24 Oct 2012.
Donovan, C. (2008) 'The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental and cultural returns of publicly funded research'. In: Coryn, C. L. S. and Scriven, M. (eds) Reforming the Evaluation of Research. New Directions for Evaluation, 118, pp. 47–60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support but a State-of-the-Art Approach to Measurement is Needed', LSE Blog (Pubd online) <http://blogs.lse.ac.uk/impactofsocialsciences/tag/claire-donovan> accessed 26 Oct 2012.
Donovan, C. and Hanney, S. (2011) 'The "Payback Framework" Explained', Research Evaluation, 20: 181–3.
Duryea M Hochman M and Parfitt A (2007) Measuring theImpact of Research Research Global 27 8-9 (Pubd online)lthttpwwwatneduaudocsResearch20Global20-20Measuring20the20impact20of20researchpdfgtaccessed 26 Oct 2012
Ebrahim A and Rangan K (2010) lsquoThe Limits of NonprofitImpact A Contingency Framework for Measuring SocialPerformancersquo Working Paper Harvard Business School(Pubd online) lthttpwwwhbseduresearchpdf10-099pdfgt accessed 26 Oct 2012
European Science Foundation (2009) Evaluation in NationalResearch Funding Agencies Approaches Experiences andCase Studies (Pubd online) lthttpwwwesforgindexphpeID=tx_ccdamdl_fileampp[file]=25668ampp[dl]=1ampp[pid]=6767ampp[site]=European20Science20Foundationampp[t]=1351858982amphash=93e987c5832f10aeee3911bac23b4e0fampl=engt accessed 26 Oct 2012
Jones, M. M. and Grant, J. (2013) 'Methodologies for Assessing and Evidencing Research Impact', RAND Europe. In: Dean et al. (eds) 7 Essays on Impact. DESCRIBE Project Report for JISC, University of Exeter.
Grant, J. (2006) Measuring the Benefits from Research. RAND Europe (Pubd online) <http://www.rand.org/pubs/research_briefs/2007/RAND_RB9202.pdf> accessed 26 Oct 2012.
Grant, J. et al. (2009) Capturing Research Impacts: A Review of International Practice (Pubd online) <http://www.rand.org/pubs/documented_briefings/2010/RAND_DB578.pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004–2014'. London: HM Treasury.
Hanney, S. and Gonzalez-Block, M. A. (2011) 'Yes, Research can Inform Health Policy; But can we Bridge the "Do-Knowing it's been Done" Gap?', Health Research Policy and Systems, 9: 23.
Hughes, A. and Martin, B. (2012) Enhancing Impact: The Value of Public Sector R&D. Summary report, Council for Industry and Higher Education and UK Innovation Research Centre (Pubd online) <http://ukirc.ac.uk/object/report/8025/doc/CIHE_0612ImpactReport_summary.pdf> accessed 26 Oct 2012.
Husbands-Fealing, K. (2013) 'Assessing impacts of higher education systems', University of Minnesota. In: Dean et al. (eds) 7 Essays on Impact. DESCRIBE Project Report for JISC, University of Exeter.
Johnston, R. (1995) 'Research Impact Quantification', Scientometrics, 34: 415–26.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF', Times Higher Education (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) Through a Glass, Darkly: Measuring the Social Value of Universities. National Co-ordinating Centre for Public Engagement (Pubd online) <http://www.publicengagement.ac.uk/sites/default/files/80096%20NCCPE%20Social%20Value%20Report.pdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework', BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) Maximising the Impacts of Your Research: A Handbook for Social Scientists (Pubd online) <http://www2.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific', Nature, 464: 488–9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog (Pubd online) <http://mice.cerch.kcl.ac.uk/> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information systems of research funding agencies in the "era of the Big Data": The case study of the Research Information System of the European Research Council'. In: Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge. Proceedings of the 11th International Conference on Current Research Information Systems (6–9 June 2012), pp. 103–12. Prague, Czech Republic.
Nason, E. et al. (2008) Health Research: Making an Impact. The Economic and Social Benefits of HRB-funded Research. RAND Europe. Dublin: Health Research Board.
Reeves, M. (2002) Measuring the Economic and Social Impact of the Arts: A Review. Arts Council England (Pubd online) <http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf> accessed 26 Oct 2012.
REF2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/researchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanels/re01_10.pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation (Pubd online) <http://www.russellgroup.ac.uk/uploads/REF-consultation-response-FINAL-Dec09.pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot (Pubd online) <http://www.libsearch.com/visit/1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research', Journal of Higher Education Policy and Management, 32: 499–511.
SIAMPI Website (2011) (Pubd online) <http://www.siampi.eu/Pages/SIA/12/625.bGFuZz1FTkc.html> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report (Pubd online) <http://www.siampi.eu/Content/SIAMPI/SIAMPI_Final%20report.pdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment (Pubd online) <http://www.thesroinetwork.org/publications/doc_details/241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals (Pubd online) <http://www.ucu.org.uk/media/pdf/n/q/ucu_REFstatement_finalsignatures.pdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their function'. In: The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) Policy and Practice Impacts of Research Funded by the Economic and Social Research Council. RAND Europe (Pubd online) <http://www.esrc.ac.uk/_images/Case_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563.pdf> accessed 26 Oct 2012.
studied, and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed (Hanney and Gonzalez-Block 2011). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institutes of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong (Bernstein et al. 2006; Nason et al. 2008; CAHS 2009; Spaapen et al. n.d.). The Payback Framework enables health and medical research and impact to be linked, and the process by which impact occurs to be traced. For more extensive reviews of the Payback Framework, see Davies et al. (2005), Wooding et al. (2007), Nason et al. (2008), and Hanney and Gonzalez-Block (2011).

A very different approach, known as Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context. It has a central theme of capturing 'productive interactions' between researchers and stakeholders by analysing the networks that evolve during research programmes (Spaapen and Drooge 2011; Spaapen et al. n.d.). SIAMPI is based on the widely held assumption that interactions between researchers and stakeholders are an important prerequisite to achieving impact (Donovan 2011; Hughes and Martin 2012; Spaapen et al. n.d.). This framework is intended as a learning tool, to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. SIAMPI has been used within the Netherlands Institute for Health Services Research (SIAMPI n.d.). 'Productive interactions', which can perhaps be viewed as instances of knowledge exchange, are widely valued and supported internationally as mechanisms for enabling impact, and are often supported financially, for example by Canada's Social Sciences and Humanities Research Council, which aims to support knowledge exchange (financially) with a view to enabling long-term impact. In the UK, the Department for Business, Innovation and Skills provided funding of £150 million for knowledge exchange in 2011–12 to 'help universities and colleges support the economic recovery and growth and contribute to wider society' (Department for Business, Innovation and Skills 2012). While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further by enabling these exchange events to be captured and analysed. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less suitable method where showcasing the benefits of research is desirable, or where justification of funding based on impact is required.
The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework a pilot assessment was undertaken by the Australian Technology Network. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel (Duryea et al. 2007), who concluded that the researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from their research (Duryea et al. 2007). To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007 this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF.
In developing the UK REF, HEFCE commissioned a report in 2009 from RAND to review international practice for assessing research impact and to provide recommendations to inform the development of the REF. RAND selected four frameworks to represent the international arena (Grant et al. 2009). One of these, the RQF, they identified as providing a 'promising basis for developing an impact approach for the REF' using the case study approach. HEFCE developed an initial methodology that was then tested through a pilot exercise. The case study approach recommended by the RQF was combined with 'significance' and 'reach' as criteria for assessment. The criteria for assessment were also supported by a model developed by Brunel for 'measurement' of impact that used similar measures, defined as depth and spread. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. Evaluation of impact in terms of reach and significance allows all disciplines of research, and all types of impact, to be assessed side by side (Scoble et al. 2010).
The range and diversity of frameworks developed reflect the variation in the purpose of evaluation, including the stakeholders for whom the assessment takes place, along with the type of impact and evidence anticipated. The most appropriate type of evaluation will vary according to the stakeholder whom we wish to inform. Studies (Buxton, Hanney and Jones 2004) into the economic gains from biomedical and health sciences determined that different methodologies provide different ways of
considering economic benefits. A discussion of the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006).
Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues; for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact (Scoble et al., working paper).
4 Impact and the REF
Although based on the RQF, the REF did not adopt all of the suggestions held within it, for example the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable (Donovan 2008). In 2009–10 the REF team conducted a pilot study for the REF involving 29 institutions submitting case studies to one of five units of assessment (in clinical medicine; physics; earth systems and environmental sciences; social work and social policy; and English language and literature) (REF2014 2010). These case studies were reviewed by expert panels, and as with the RQF they found that it was possible to assess impact and develop 'impact profiles' using the case study approach (REF2014 2010).
From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:
(1) Outputs
(2) Impact
(3) Environment
Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, using impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a). HEFCE indicated that impact should merit a 25% weighting within the REF (REF2014 2011b); however, this has been reduced for the 2014 REF to 20%, perhaps as a result of feedback and lobbying, for example from the Russell Group and Million+ group of universities, who called for impact to count for 15% (Russell Group 2009; Jump 2011), and following guidance from the expert panels undertaking the pilot exercise, who suggested that during the 2014 REF impact assessment would be in a developmental phase and that a lower weighting for impact would be appropriate, with the expectation that this would be increased in subsequent assessments (REF2014 2010).
The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although it can be envisaged that the range of impacts derived from research in different disciplines is likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example from business development to cultural change or the saving of lives. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared, rather than impact from specific disciplines.
Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that impact and evidence are expected to vary according to discipline (REF2014 2012). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature, by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group's impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (2002), and a review by Kuruvilla et al. (2006) on the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline.
5 The challenges of impact evaluation
What are the challenges associated with understanding and evaluating research impact? In endeavouring to assess or evaluate impact, a number of difficulties emerge, and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.
5.1 Time lag
The time lag between research and impact varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In development of the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and
impact was difficult. In the UK, the Russell Group universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact (Russell Group 2009). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this time window may be insufficient in some instances, with architecture being granted an additional 5-year period (REF2014 2012); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF, as to what the view of the panel will be and whether, if deemed inappropriate, this will render the case study 'unclassified'.
5.2 The developmental nature of impact
Impact is not static: it will develop and change over time, and this development may be an increase or decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, before regulatory approval is granted and the drug is used to deliver potential health benefits. Clearly there is the possibility that the potential new drug will fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits; the time at which an impact assessment takes place will thus influence the degree of impact that has taken place. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again, the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporal and dissipated impact will be valued in comparison with longer-term impact.
5.3 Attribution
Impact is derived not only from targeted research but also from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual, piece of research,
funding strategy, or organization to an impact is not straightforward. Husbands-Fealing suggests that, to assist identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, which shows how later phases result from earlier ones. Such a framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work spanning research communities (Duryea et al. 2007). Concerns over how to attribute impacts have been raised many times (The Allen Consulting Group 2005; Duryea et al. 2007; Grant et al. 2009), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.
Figure 1, replicated from Hughes and Martin (2012), illustrates how the ease with which impact can be attributed decreases with time, whereas the impact or effect of complementary assets increases. This highlights the problem that it may take a considerable amount of time for the full impact of a piece of research to develop, but that because of this time, and the increasing complexity of the networks involved in translating the research and interim impacts, it becomes more difficult to attribute and link the impact back to a contributing piece of research.
This presents particular difficulties for disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able at least to determine the contribution of basic research. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking: that 'one is able to see further by standing on the shoulders of giants'.
5.4 Knowledge creep
It is acknowledged that one of the outcomes of developing new knowledge through research can be 'knowledge creep', where new data or information becomes accepted and gets absorbed over time. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al. 2005; Wooding et al. 2007). This is recognized as being particularly problematic within the social sciences, where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a 'distinctive contribution' (REF2014 2011a). The difficulty then is how to determine what the contribution has been in the absence
of adequate evidence and how we ensure that research that
results in impacts that cannot be evidenced is valued and supported.
5.5 Gathering evidence
Gathering evidence of the links between research and impact is a challenge not only where that evidence is lacking. The introduction of impact assessments, with the requirement to collate evidence retrospectively, poses difficulties because evidence, measurements, and baselines have in many cases not been collected and may no longer be available. Looking forward, we will be able to reduce this problem in the future; even so, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.
6 Developing systems and taxonomies for capturing impact
Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Decker et al. (2007) surveyed researchers in the top US research institutions during 2005; the survey of more than 6,000 researchers found that, on average, more than 40% of their time was spent on administrative tasks. It is desirable that the assignation of administrative tasks to researchers is limited, and therefore, to assist the tracking and collating of impact data, systems are being developed in numerous projects internationally, including STAR METRICS in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil (Lane 2010; Mugabushaka and Papazoglou 2012).
Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System (Mugabushaka and Papazoglou 2012), are being developed as CERIF-compatible.
In the UK there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and of the evidence for it, that can be used universally is seen to be very valuable. However, the Achilles heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that reward.
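The compatibility idea behind such CERIF-based systems can be sketched in miniature: entities are stored separately and joined by typed link records, so that two institutions agreeing on the same vocabulary can merge their holdings. The entity kinds, relation names, and `merge` helper below are illustrative assumptions, not the CERIF standard's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    id: str
    kind: str        # e.g. 'project', 'output', 'impact' (illustrative kinds)
    name: str

@dataclass(frozen=True)
class Link:
    source: str      # entity id
    target: str      # entity id
    relation: str    # e.g. 'produced', 'led_to' (illustrative vocabulary)

def merge(records_a, records_b):
    """Union two institutions' record sets; shared ids deduplicate."""
    entities = {e.id: e for e in records_a[0] + records_b[0]}
    links = set(records_a[1]) | set(records_b[1])
    return list(entities.values()), sorted(links, key=lambda l: (l.source, l.target))

# Institution A holds a project and its output; institution B holds the
# link from that same output onwards to a societal impact.
a = ([Entity('p1', 'project', 'DNA sequencing methods'),
      Entity('o1', 'output', 'Key paper')],
     [Link('p1', 'o1', 'produced')])
b = ([Entity('o1', 'output', 'Key paper'),
      Entity('i1', 'impact', 'Forensic fingerprinting in use')],
     [Link('o1', 'i1', 'led_to')])

entities, links = merge(a, b)
print(len(entities), len(links))  # → 3 2: the shared output deduplicates
```

Because records are keyed by shared identifiers, the merged graph traces a route from research to impact that neither institution could evidence alone, which is precisely the appeal of a common format.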
Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF, by Research Councils UK, in unpublished draft case studies undertaken at King's College London, or outlined in relevant publications (MICE Project n.d.). A taxonomy of impact categories was
[Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).]
then produced, onto which impact could be mapped. What emerged on testing the MICE taxonomy (Cooke and Nadim 2011), by mapping impacts from case studies, was that detailed categorization of impact was too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was 'a unique collection'. Where quantitative data were available, for example audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmiths report (Cooke and Nadim 2011) recommended making indicators 'value free', enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation; re-use and influence; collaboration and boundary work; and innovation and invention.
While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can be placed. It is worth considering the degree to which indicators are defined, and providing broader definitions with greater flexibility.
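As a rough illustration of this recommendation, the sketch below stores indicators under a handful of broad, value-free categories and lets anything non-standard fall into a catch-all rather than be rejected. The category names follow the Goldsmiths report's grouping; the record layout and `record_indicator` helper are our own hypothetical constructs.

```python
# Broad, 'value-free' categories in the spirit of the Goldsmiths report;
# the quality of the impact lives in the free-text descriptor, which an
# expert panel (not the system) would assess.
BROAD_CATEGORIES = {
    'dissemination_and_circulation',
    'reuse_and_influence',
    'collaboration_and_boundary_work',
    'innovation_and_invention',
    'other',                      # flexibility for non-standard routes
}

def record_indicator(category, descriptor, evidence=None):
    """Store an indicator; an unknown category falls into 'other'
    (with its original label preserved) instead of being rejected."""
    if category not in BROAD_CATEGORIES:
        descriptor = f'({category}) {descriptor}'
        category = 'other'
    return {'category': category, 'descriptor': descriptor,
            'evidence': evidence or []}

r1 = record_indicator('reuse_and_influence',
                      'Method adopted in national clinical guideline')
r2 = record_indicator('community_health',   # not a listed category
                      'Local screening uptake increased')
print(r1['category'], '|', r2['category'])  # → reuse_and_influence | other
```

The design choice mirrors the argument in the text: a scheme that accepts and preserves unusual impacts, rather than one that only rewards what it can codify.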
It is possible to incorporate both metrics and narratives within systems, for example within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary (Davies et al. 2005). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research and the various contributions to impact made along the pathway.
Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances controls are not feasible, as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.
It is now possible to use data-mining tools to extract specific data from narratives or unstructured data (Mugabushaka and Papazoglou 2012). This is being done for the collation of academic impact and outputs, for example by Research Portfolio Online Reporting Tools, which uses PubMed and text mining to cluster research projects, and by STAR METRICS in the US, which uses administrative records and research outputs; a similar approach is being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). These techniques have the potential to provide a transformation in data capture and impact assessment (Jones and Grant 2013). It is acknowledged in the article by Mugabushaka and Papazoglou (2012) that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture the interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this capture would be very valuable. If knowledge exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
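The clustering idea can be shown with a deliberately naive sketch: grouping project abstracts by shared vocabulary using Jaccard similarity over word sets. Systems such as STAR METRICS operate on far richer records and methods; the abstracts, stopword list, and 0.3 threshold here are arbitrary assumptions for illustration only.

```python
import re
from itertools import combinations

# Minimal stopword list; real text-mining pipelines use much larger ones.
STOPWORDS = {'the', 'of', 'and', 'in', 'for', 'a', 'to', 'on'}

def words(text):
    """Lowercased word set of a text, minus stopwords."""
    return set(re.findall(r'[a-z]+', text.lower())) - STOPWORDS

def jaccard(a, b):
    """Overlap of two word sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

abstracts = {
    'P1': 'Genomic screening for inherited heart disease',
    'P2': 'Heart disease risk and genomic markers in screening',
    'P3': 'Medieval manuscripts and the circulation of ideas',
}

# Pair any two projects whose similarity clears a (hypothetical) threshold.
pairs = [(i, j) for i, j in combinations(abstracts, 2)
         if jaccard(words(abstracts[i]), words(abstracts[j])) > 0.3]
print(pairs)  # → [('P1', 'P2')]: the two health projects cluster together
```

Even this toy version shows the principle the text describes: related projects surface automatically from unstructured narratives, without anyone hand-coding the link.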
The transition to routine capture of impact data requires not only the development of tools and systems to help with implementation, but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.
7. Indicators, evidence, and impact within systems
What indicators, evidence, and impacts need to be captured within developing systems? There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia for Advancing Standards in Research Administration Information, for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators, which can be incorporated into systems internationally, and seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.
28 T Penfield et al
7.1 Metrics
Metrics have commonly been used as a measure of impact, for example in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
Perhaps SROI indicates the desire of some organizations to be able to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used for demonstration of worth. This is a metric that has been used within the charitable sector (Berg and Mansson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012). More details on SROI can be found in 'A Guide to Social Return on Investment' produced by The SROI Network (2012).
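The core of the calculation is simply the ratio of the present value of the proxied social benefits to the investment made. The figures and the 3.5% discount rate below are invented for illustration, not drawn from the SROI Network guide:

```python
# Hedged sketch of the basic SROI ratio: discounted social value / investment.
# All numbers here are assumptions made up for the example.
def present_value(cashflows, rate):
    """Discount a list of yearly benefit valuations back to year zero."""
    return sum(v / (1 + rate) ** (t + 1) for t, v in enumerate(cashflows))

investment = 100_000                             # funding put in (GBP)
yearly_social_value = [40_000, 40_000, 40_000]   # proxied value of outcomes
pv = present_value(yearly_social_value, rate=0.035)
sroi_ratio = pv / investment
print(f"SROI ratio: {sroi_ratio:.2f}")           # roughly 1.12 : 1
```

The difficult part in practice is not this arithmetic but choosing defensible financial proxies for social and environmental outcomes, which is where the SROI Network guidance concentrates.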
Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately provide evidence of the qualitative impacts that take place, and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:
(1) The full impact will not be realized, as we focus on easily quantifiable indicators.
(2) We will focus attention towards generating results that enable boxes to be ticked rather than delivering real value for money and innovative research.
(3) They risk being monetized or converted into a lowestcommon denominator in an attempt to compare thecost of a new theatre against that of a hospital
7.2 Narratives
Narratives can be used to describe impact; the use of narratives enables a story to be told and the impact to be placed in context, and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
By allowing impact to be placed in context, we answer the 'so what?' question that can result from quantitative data analyses; but is there a risk that the full picture may not be presented, in order to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which change of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed.
The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
7.4 Citations (outside of academia) and documentation
Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or reference to a piece of research being cited
Table 1. The advantages and disadvantages of the case study approach

Benefits: uses quantitative and qualitative data; allows evidence to be contextualized and a story told; enables assessment in the absence of quantitative data; allows collation of unique datasets; preserves a distinctive account or disciplinary perspective.

Considerations: automated collation of evidence is difficult; incorporating perspective can make it difficult to assess critically; time-consuming to prepare and assess; difficult to compare like with like; rewards those who can write well and/or afford to pay for external input.
Assessment evaluations and definitions of research impact 29
within the media. A collation of several indicators of impact may be enough to convince that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered, alongside other evidence, as contributing to or an indicator of impact.
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
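At its simplest, this kind of tracing amounts to tallying where an output's identifier appears across captured online events. The following is a hedged sketch with invented event data, not a call to any real altmetrics service:

```python
# Illustrative only: counting mentions of research outputs (by DOI) per
# online source, from a stream of captured events. The DOIs, sources,
# and event records are all made up for the example.
from collections import Counter

events = [
    {"doi": "10.1000/xyz123", "source": "twitter"},
    {"doi": "10.1000/xyz123", "source": "news"},
    {"doi": "10.1000/xyz123", "source": "twitter"},
    {"doi": "10.1000/abc999", "source": "policy_document"},
]

# Tally (output, source) pairs to see where each finding is travelling
mentions = Counter((e["doi"], e["source"]) for e in events)
for (doi, source), n in mentions.most_common():
    print(f"{doi} mentioned {n}x via {source}")
```

A real pipeline would ingest such events from publishers, social platforms, and policy repositories, but the aggregation step is essentially this.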
8. Conclusions and recommendations
The understanding of the term impact varies considerably, and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use this for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. Over the past year there have been a number of new posts created within universities, such as for writing impact case studies, and a number of companies are now offering this as a contract service. A key concern here is that we could find that universities which can afford to employ either consultants or impact 'administrators' will generate the best case studies.
The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and ensure that
Figure 2. Overview of the types of information that systems need to capture and link.
the time and capability required for capture of information are considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools to enable researchers to capture much of this would be valuable. However, it must be remembered that, in the case of the UK REF, only impact based on research that has taken place within the institution submitting the case study is considered. It is therefore in an institution's interest to have a process by which all the necessary information is captured, to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link:
(1) Research findings, including outputs (e.g. presentations and publications)
(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)
(4) Research developments (based on stakeholder input and discussions)
(5) Outcomes (e.g. commercial and cultural citations)
(6) Impacts (changes, e.g. behavioural and economic)
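A minimal sketch of how such linked records might be represented follows; the Record class and the example chain are hypothetical, not a description of any existing research-information system:

```python
# Hedged sketch: records for the information types listed above, linked so
# that a path from a research output to a claimed impact can be traced even
# after the original researcher has left the institution.
from dataclasses import dataclass, field

@dataclass
class Record:
    kind: str                # e.g. "output", "interaction", "impact"
    description: str
    links: list = field(default_factory=list)  # upstream Record objects

# Invented example chain from output to impact
paper = Record("output", "2010 journal article on flood modelling")
workshop = Record("interaction", "2011 workshop with a planning agency", [paper])
testimonial = Record("feedback", "agency testimonial citing the model", [workshop])
impact = Record("impact", "revised flood-planning guidance", [testimonial])

def trace(record):
    """Walk upstream links to recover the chain from impact back to research."""
    chain = [record.kind]
    for parent in record.links:
        chain += trace(parent)
    return chain

print(trace(impact))  # ['impact', 'feedback', 'interaction', 'output']
```

The point of the network structure is that any node, an email, a workshop, an interim outcome, can later anchor the evidence trail between a REF case study and the underpinning research.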
Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are impractical to demonstrate; in these cases it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10]
References
Australian Research Council (2008) ERA Indicator Principles. (Pub'd online) <http://www.arc.gov.au/pdf/ERA_Indicator_Principles.pdf> accessed 6 Oct 2012.
Berg, L. O. and Mansson, C. (2011) A White Paper on Charity Impact Measurement. (Pub'd online) <http://www.charitystar.org/wp-content/uploads/2011/05/Return_on_donations_a_white_paper_on_charity_impact_measurement.pdf> accessed 6 Aug 2012.
Bernstein, A. et al. (2006) A Framework to Measure the Impact of Investments in Health Research. (Pub'd online) <http://www.oecd.org/science/innovationinsciencetechnologyandindustry/37450246.pdf> accessed 26 Oct 2012.
Bornmann, L. and Marx, W. (2013) 'How Good is Research Really?', European Molecular Biology Organization (EMBO) Reports, 14: 226-30.
Buxton, M., Hanney, S. and Jones, T. (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review', Bulletin of the World Health Organization, 82: 733-9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. (Pub'd online) <http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf> accessed 26 Oct 2012.
Cooke, J. and Nadim, T. (2011) Measuring Impact Under CERIF at Goldsmiths. (Pub'd online) <http://mice.cerch.kcl.ac.uk/wp-uploads/2011/07/MICE_report_Goldsmiths_final.pdf> accessed 26 Oct 2012.
Corbyn, Z. (2009) 'Anti-Impact Campaign's "Poster Boy" Sticks up for the Ivory Tower', Times Higher Education. (Pub'd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies, H., Nutley, S. and Walter, I. (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues. (Pub'd online) <http://www.odi.org.uk/rapid/Events/ESRC/docs/background_paper.pdf> accessed 26 Oct 2012.
Decker, R. et al. (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty. (Pub'd online) <http://www.iscintelligence.com/archivos_subidos/usfacultyburden_5.pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012-2013. (Pub'd online) <http://www.bis.gov.uk/About> accessed 24 Oct 2012.
Donovan, C. (2008) 'The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental and cultural returns of publicly funded research'. In: Coryn, C. L. S. and Scriven, M. (eds) Reforming the Evaluation of Research. New Directions for Evaluation, 118, pp. 47-60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support, but a State-of-the-Art Approach to Measurement is Needed', LSE Blog. (Pub'd online) <http://blogs.lse.ac.uk/impactofsocialsciences/tag/claire-donovan> accessed 26 Oct 2012.
Donovan, C. and Hanney, S. (2011) 'The "Payback Framework" explained', Research Evaluation, 20: 181-3.
Duryea, M., Hochman, M. and Parfitt, A. (2007) 'Measuring the Impact of Research', Research Global, 27: 8-9. (Pub'd online) <http://www.atn.edu.au/docs/Research%20Global%20-%20Measuring%20the%20impact%20of%20research.pdf> accessed 26 Oct 2012.
Ebrahim, A. and Rangan, K. (2010) 'The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance'. Working Paper, Harvard Business School. (Pub'd online) <http://www.hbs.edu/research/pdf/10-099.pdf> accessed 26 Oct 2012.
European Science Foundation (2009) Evaluation in National Research Funding Agencies: Approaches, Experiences and Case Studies. (Pub'd online) <http://www.esf.org/index.php?eID=tx_ccdamdl_file&p[file]=25668&p[dl]=1&p[pid]=6767&p[site]=European%20Science%20Foundation&p[t]=1351858982&hash=93e987c5832f10aeee3911bac23b4e0f&l=en> accessed 26 Oct 2012.
Jones, M. M. and Grant, J. (2013) 'Methodologies for Assessing and Evidencing Research Impact'. RAND Europe. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for Jisc. University of Exeter.
Grant, J. (2006) Measuring the Benefits from Research. RAND Europe. (Pub'd online) <http://www.rand.org/pubs/research_briefs/2007/RAND_RB9202.pdf> accessed 26 Oct 2012.
Grant, J. et al. (2009) Capturing Research Impacts: A Review of International Practice. (Pub'd online) <http://www.rand.org/pubs/documented_briefings/2010/RAND_DB578.pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004-2014'. London: HM Treasury.
Hanney, S. and Gonzalez-Block, M. A. (2011) 'Yes, Research can Inform Health Policy; But can we Bridge the "Do-Knowing It's Been Done" Gap?', Health Research Policy and Systems, 9: 23.
Hughes, A. and Martin, B. (2012) 'Enhancing Impact: The Value of Public Sector R&D'. Council for Industry and Higher Education, UK Innovation Research Centre, Summary report. (Pub'd online) <http://ukirc.ac.uk/object/report/8025/doc/CIHE_0612ImpactReport_summary.pdf> accessed 26 Oct 2012.
Husbands-Fealing, K. (2013) 'Assessing impacts of higher education systems'. University of Minnesota. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for Jisc. University of Exeter.
Johnston, R. (1995) 'Research Impact Quantification', Scientometrics, 34: 415-26.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF', Times Higher Education. (Pub'd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) Through a Glass, Darkly: Measuring the Social Value of Universities. National Co-ordinating Centre for Public Engagement. (Pub'd online) <http://www.publicengagement.ac.uk/sites/default/files/80096%20NCCPE%20Social%20Value%20Report.pdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework', BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) Maximising the Impacts of Your Research: A Handbook for Social Scientists. (Pub'd online) <http://www2.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific', Nature, 464: 488-9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog. (Pub'd online) <http://mice.cerch.kcl.ac.uk> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information systems of research funding agencies in the "era of the Big Data": The case study of the Research Information System of the European Research Council'. In: Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (June 6-9, 2012), pp. 103-12. Prague, Czech Republic.
Nason, E. et al. (2008) Health Research - Making an Impact: The Economic and Social Benefits of HRB-funded Research. RAND Europe. Dublin: Health Research Board.
Reeves, M. (2002) Measuring the Economic and Social Impact of the Arts: A Review. Arts Council England. (Pub'd online) <http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf> accessed 26 Oct 2012.
REF2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels. (Pub'd online) <http://www.ref.ac.uk/media/ref/content/pub/researchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanels/re01_10.pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions. (Pub'd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact. (Pub'd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods. (Pub'd online) <http://www.ref.ac.uk/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation. (Pub'd online) <http://www.russellgroup.ac.uk/uploads/REF-consultation-response-FINAL-Dec09.pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot. (Pub'd online) <http://www.libsearch.com/visit/1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research', Journal of Higher Education Policy and Management, 32: 499-511.
SIAMPI Website (2011). (Pub'd online) <http://www.siampi.eu/Pages/SIA/12625/bGFuZz1FTkc.html> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report. (Pub'd online) <http://www.siampi.eu/Content/SIAMPI/SIAMPI_Final%20report.pdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211-18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment. (Pub'd online) <http://www.thesroinetwork.org/publications/doc_details/241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals. (Pub'd online) <http://www.ucu.org.uk/media/pdf/n/q/ucu_REFstatement_finalsignatures.pdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their function'. In: The Aims of Education and other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) Policy and Practice Impacts of Research Funded by the Economic and Social Research Council. RAND Europe. (Pub'd online) <http://www.esrc.ac.uk/_images/Case_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563.pdf> accessed 26 Oct 2012.
considering economic benefits. A discussion on the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006).
Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues; for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact (Scoble et al., working paper).
4. Impact and the REF
Although based on the RQF, the REF did not adopt all of the suggestions held within, for example the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable (Donovan 2008). In 2009-10 the REF team conducted a pilot study for the REF involving 29 institutions submitting case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) (REF2014 2010). These case studies were reviewed by expert panels and, as with the RQF, they found that it was possible to assess impact and develop 'impact profiles' using the case study approach (REF2014 2010).
From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:
(1) Outputs
(2) Impact
(3) Environment
Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, using impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a). HEFCE indicated that impact should merit a 25% weighting within the REF (REF2014 2011b); however, this has been reduced for the 2014 REF to 20%, perhaps as a result of feedback and lobbying, for example from the Russell Group and million+ group of Universities, who called for impact to count for 15% (Russell Group 2009; Jump 2011), and following guidance from the expert panels undertaking the pilot exercise, who suggested that during the 2014 REF impact assessment would be in a developmental phase, and that a lower weighting for impact would be appropriate, with the expectation that this would be increased in subsequent assessments (REF2014 2010).
The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although it can be envisaged that the ranges of impacts derived from research of different disciplines are likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example from business development to cultural changes or saving lives. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared, rather than impact from specific disciplines.
Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that it is expected that impact and evidence will vary according to discipline (REF2014 2012). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature, by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group's impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (2002), and a review by Kuruvilla et al. (2006) on the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline.
5. The challenges of impact evaluation
What are the challenges associated with understanding and evaluating research impact? In endeavouring to assess or evaluate impact, a number of difficulties emerge, and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.
5.1 Time lag
The time lag between research and impact varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In development of the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult. In the UK, the Russell Group Universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact (Russell Group 2009). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this time window may be insufficient in some instances, with architecture being granted an additional 5-year period (REF2014 2012); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF, as to what the view of the panel will be and whether, if deemed inappropriate, this will render the case study 'unclassified'.
5.2 The developmental nature of impact
Impact is not static; it will develop and change over time, and this development may be an increase or decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, and then regulatory approval is granted before the drug is used to deliver potential health benefits. Clearly there is the possibility that the potential new drug will fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits; the time at which an impact assessment takes place will thus influence the degree of impact that has taken place. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again, the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporal and dissipated impact will be valued in comparison with longer-term impact.
5.3 Attribution
Impact is derived not only from targeted research but from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual, piece of research, funding, strategy, or organization to an impact is not straightforward. Husbands-Fealing suggests that, to assist identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, which shows how later phases result from earlier ones. Such a framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work spanning research communities (Duryea et al. 2007). Concerns over how to attribute impacts have been raised many times (The Allen Consulting Group 2005; Duryea et al. 2007; Grant et al. 2009), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.
Figure 1, replicated from Hughes and Martin (2012), illustrates how the ease with which impact can be attributed decreases with time, whereas the impact or effect of complementary assets increases, highlighting the problem that it may take a considerable amount of time for the full impact of a piece of research to develop, but because of this time, and the increase in complexity of the networks involved in translating the research and interim impacts, it is more difficult to attribute and link back to a contributing piece of research.
This presents particular difficulties in research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking: that 'one is able to see further by standing on the shoulders of giants'.
5.4 Knowledge creep
It is acknowledged that one of the outcomes of developing new knowledge through research can be 'knowledge creep', where new data or information becomes accepted and gets absorbed over time. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al. 2005; Wooding et al. 2007). This is recognized as being particularly problematic within the social sciences, where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a 'distinctive contribution' (REF2014 2011a). The difficulty then is how to determine what the contribution has been in the absence of adequate evidence, and how we ensure that research that results in impacts that cannot be evidenced is valued and supported.
5.5 Gathering evidence
Gathering evidence of the links between research and impact is not only a challenge where that evidence is lacking. The introduction of impact assessments, with the requirement to collate evidence retrospectively, poses difficulties because evidence, measurements, and baselines have in many cases not been collected and may no longer be available. While looking forward we will be able to reduce this problem in the future, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.
6. Developing systems and taxonomies for capturing impact
Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Decker et al. (2007) surveyed researchers in the US top research institutions during 2005; the survey of more than 6,000 researchers found that, on average, more than 40% of their time was spent on administrative tasks. It is desirable that the assignation of administrative tasks to researchers is limited, and therefore, to assist the tracking and collating of impact data, systems are being developed through numerous projects internationally, including STAR Metrics in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil (Lane 2010; Mugabushaka and Papazoglou 2012).
Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System (Mugabushaka and Papazoglou 2012), are being developed as CERIF-compatible.
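The linked-entity style that CERIF promotes can be illustrated with a minimal sketch. This is an illustrative simplification, not the actual CERIF schema; the entity kinds, role names, and identifiers below are assumptions. The core idea is that relationships live in typed, dated link records rather than in fields on the entities themselves, so two institutions that agree on the vocabulary can merge their data directly.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    id: str
    kind: str   # e.g. "Person", "Project", "Publication"
    name: str

@dataclass(frozen=True)
class Link:
    source_id: str
    target_id: str
    role: str   # e.g. "works_on", "produced"
    start: str  # ISO dates bounding the relationship
    end: str

def links_for(entity_id, links):
    """Return every link record touching the given entity."""
    return [l for l in links if entity_id in (l.source_id, l.target_id)]

entities = [
    Entity("per-1", "Person", "A. Researcher"),
    Entity("proj-1", "Project", "Impact Tracking Pilot"),
    Entity("pub-1", "Publication", "Pilot Findings"),
]
links = [
    Link("per-1", "proj-1", "works_on", "2011-01-01", "2012-12-31"),
    Link("proj-1", "pub-1", "produced", "2012-06-01", "2012-06-01"),
]

# Both link records touch the project, so its full context is recoverable
# without any project-specific fields.
print(len(links_for("proj-1", links)))  # → 2
```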
In the UK there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example, MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and the evidence for it, that can be used universally is seen to be very valuable. However, the Achilles heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that reward.
Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF by Research Councils UK, in unpublished draft case studies undertaken at King's College London, or outlined in relevant publications (MICE Project n.d.). A taxonomy of impact categories was then produced, onto which impact could be mapped. What emerged on testing the MICE taxonomy (Cooke and Nadim 2011), by mapping impacts from case studies, was that detailed categorization of impact was found to be too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was 'a unique collection'. Where quantitative data were available, for example, audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmiths report (Cooke and Nadim 2011) recommended making indicators 'value free', enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The Goldsmiths report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation, re-use and influence, collaboration and boundary work, and innovation and invention.

Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).
While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can be placed. It is worth considering the degree to which indicators are defined, and providing broader definitions with greater flexibility.
It is possible to incorporate both metrics and narratives within systems, for example, within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary (Davies et al. 2005). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research as well as the various contributions to impact made during the pathway.
Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances controls are not feasible, as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.
It is now possible to use data-mining tools to extract specific data from narratives or unstructured data (Mugabushaka and Papazoglou 2012). This is being done for collation of academic impact and outputs, for example, Research Portfolio Online Reporting Tools, which uses PubMed and text mining to cluster research projects, and STAR METRICS in the USA, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). These techniques have the potential to provide a transformation in data capture and impact assessment (Jones and Grant 2013). It is acknowledged in the article by Mugabushaka and Papazoglou (2012) that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable. If knowledge exchange events could be captured, for example, electronically as they occur, or automatically if flagged from an electronic calendar or a diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
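As a toy illustration of the text-mining idea (standard library only; real pipelines such as those behind STAR METRICS are far richer), project descriptions can be compared by TF-IDF weighted cosine similarity, the basic building block behind clustering similar research records. The example documents are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """One sparse TF-IDF vector (a dict of term -> weight) per document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: tf[t] * math.log((1 + n) / (1 + df[t]))
                        for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

docs = [
    "clinical trial of a new heart drug",
    "heart drug trial outcomes in patients",
    "public engagement with a theatre production",
]
v = tfidf_vectors(docs)

# The two drug-trial descriptions score as more similar to each other than
# either does to the theatre project, so a clustering step would group them.
print(cosine(v[0], v[1]) > cosine(v[0], v[2]))  # → True
```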
The transition to routine capture of impact data not only requires the development of tools and systems to help with implementation, but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.
7 Indicators, evidence, and impact within systems
What indicators, evidence, and impacts need to be captured within developing systems? There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia for Advancing Standards in Research Administration Information, for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators that can be incorporated into systems internationally, and it seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.
7.1 Metrics
Metrics have commonly been used as a measure of impact, for example, in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
Perhaps the use of SROI by some organizations indicates the desire to be able to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used for demonstration of worth. This is a metric that has been used within the charitable sector (Berg and Mansson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012). More details on SROI can be found in 'A Guide to Social Return on Investment', produced by The SROI Network (2012).
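The headline figure in the SROI Network's guide is a ratio of the present value of monetized benefits to the investment made. A minimal sketch of that calculation follows; the monetary figures and discount rate are invented for illustration, and a real SROI analysis involves much more (stakeholder mapping, deadweight, attribution).

```python
def sroi_ratio(investment, yearly_benefits, discount_rate=0.035):
    """Discount each year's monetized benefit to present value, then divide
    by the investment. A simplified form of the headline SROI calculation."""
    present_value = sum(
        benefit / (1 + discount_rate) ** (year + 1)
        for year, benefit in enumerate(yearly_benefits)
    )
    return present_value / investment

# e.g. 100,000 invested; 40,000 of monetized social value per year for 3 years
print(round(sroi_ratio(100_000, [40_000, 40_000, 40_000]), 2))  # → 1.12
```

A ratio above 1 indicates that the discounted monetized benefits exceed the investment, which is how SROI figures are usually read.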
Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately provide evidence of the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:

(1) The full impact will not be realized, as we focus on easily quantifiable indicators.

(2) We will focus attention towards generating results that enable boxes to be ticked, rather than delivering real value for money and innovative research.

(3) They risk being monetized or converted into a lowest common denominator in an attempt to compare the cost of a new theatre against that of a hospital.
7.2 Narratives
Narratives can be used to describe impact; the use of narratives enables a story to be told and the impact to be placed in context, and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
By allowing impact to be placed in context, we answer the 'so what?' question that can result from quantitative data analyses; but is there a risk that the full picture may not be presented, in order to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which changes of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed.
The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
7.4 Citations (outside of academia) and documentation
Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or reference to a piece of research being cited within the media. A collation of several indicators of impact may be enough to convince that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered alongside other evidence as contributing to, or as an indicator of, impact.

Table 1. The advantages and disadvantages of the case study approach

Benefits | Considerations
Uses quantitative and qualitative data | Automated collation of evidence is difficult
Allows evidence to be contextualized and a story told | Incorporating perspective can make it difficult to assess critically
Enables assessment in the absence of quantitative data | Time-consuming to prepare and assess
Allows collation of unique datasets | Difficult to compare like with like
Preserves distinctive account or disciplinary perspective | Rewards those who can write well and/or afford to pay for external input
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
8 Conclusions and recommendations
The understanding of the term 'impact' varies considerably and, as such, the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use this for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. Over the past year, a number of new posts have been created within universities, such as for writing impact case studies, and a number of companies are now offering this as a contract service. A key concern here is that we could find that universities that can afford to employ either consultants or impact 'administrators' will generate the best case studies.
The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required for capture of information are considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools enabling researchers to capture much of this would be valuable. However, it must be remembered that, in the case of the UK REF, only impact based on research that has taken place within the institution submitting the case study is considered. It is therefore in an institution's interest to have a process by which all the necessary information is captured, so that a story can be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link:

(1) Research findings, including outputs (e.g. presentations and publications)

(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)

(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)

(4) Research developments (based on stakeholder input and discussions)

(5) Outcomes (e.g. commercial and cultural citations)

(6) Impacts (changes, e.g. behavioural and economic)

Figure 2. Overview of the types of information that systems need to capture and link.
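The six types of information listed for Figure 2 can be sketched as linked records, each pointing back at the earlier records it derives from, so that an impact can be traced to the underlying research even after the researcher has left. The record kinds and field names below are assumptions for illustration, not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    id: str
    kind: str          # "finding", "interaction", "feedback",
                       # "development", "outcome", or "impact"
    description: str
    derived_from: list = field(default_factory=list)  # ids of earlier records

def trace_back(record_id, records):
    """Follow derivation links from an impact back towards the research.
    Assumes the links form an acyclic chain or tree."""
    by_id = {r.id: r for r in records}
    chain, queue = [], [record_id]
    while queue:
        current = by_id[queue.pop()]
        chain.append(current.id)
        queue.extend(current.derived_from)
    return chain

records = [
    Record("find-1", "finding", "Published study"),
    Record("int-1", "interaction", "Workshop with policymakers", ["find-1"]),
    Record("out-1", "outcome", "Study cited in policy draft", ["int-1"]),
    Record("imp-1", "impact", "Policy change", ["out-1"]),
]

# Reconstruct the route from the impact back to the research finding.
print(trace_back("imp-1", records))  # → ['imp-1', 'out-1', 'int-1', 'find-1']
```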
Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are impractical to demonstrate; in these cases it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10].
References
Australian Research Council (2008) ERA Indicator Principles. (Pubd online) <http://www.arc.gov.au/pdf/ERA_Indicator_Principles.pdf> accessed 6 Oct 2012.
Berg, L. O. and Mansson, C. (2011) A White Paper on Charity Impact Measurement. (Pubd online) <http://www.charitystar.org/wp-content/uploads/2011/05/Return_on_donations_a_white_paper_on_charity_impact_measurement.pdf> accessed 6 Aug 2012.
Bernstein, A. et al. (2006) A Framework to Measure the Impact of Investments in Health Research. (Pubd online) <http://www.oecd.org/science/innovationinsciencetechnologyandindustry/37450246.pdf> accessed 26 Oct 2012.
Bornmann, L. and Marx, W. (2013) 'How Good is Research Really?', European Molecular Biology Organization (EMBO) Reports, 14: 226–30.
Buxton, M., Hanney, S. and Jones, T. (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review', Bulletin of the World Health Organization, 82: 733–9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. (Pubd online) <http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf> accessed 26 Oct 2012.
Cooke, J. and Nadim, T. (2011) Measuring Impact Under CERIF at Goldsmiths. (Pubd online) <http://mice.cerch.kcl.ac.uk/wp-uploads/2011/07/MICE_report_Goldsmiths_final.pdf> accessed 26 Oct 2012.
Corbyn, Z. (2009) 'Anti-Impact Campaign's "Poster Boy" Sticks up for the Ivory Tower', Times Higher Education. (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies, H., Nutley, S. and Walter, I. (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues. (Pubd online) <http://www.odi.org.uk/rapid/Events/ESRC/docs/background_paper.pdf> accessed 26 Oct 2012.
Decker, R. et al. (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty. (Pubd online) <http://www.iscintelligence.com/archivos_subidos/usfacultyburden_5.pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012-2013. (Pubd online) <http://www.bis.gov.uk/About> accessed 24 Oct 2012.
Donovan, C. (2008) 'The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental and cultural returns of publicly funded research'. In: Coryn, C. L. S. and Scriven, M. (eds) Reforming the Evaluation of Research, New Directions for Evaluation, 118, pp. 47–60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support but a State-of-the-Art Approach to Measurement is Needed', LSE Blog. (Pubd online) <http://blogs.lse.ac.uk/impactofsocialsciences/tag/claire-donovan> accessed 26 Oct 2012.
Donovan, C. and Hanney, S. (2011) 'The "Payback Framework" Explained', Research Evaluation, 20: 181–3.
Duryea, M., Hochman, M. and Parfitt, A. (2007) 'Measuring the Impact of Research', Research Global, 27: 8–9. (Pubd online) <http://www.atn.edu.au/docs/Research%20Global%20-%20Measuring%20the%20impact%20of%20research.pdf> accessed 26 Oct 2012.
Ebrahim, A. and Rangan, K. (2010) 'The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance'. Working Paper, Harvard Business School. (Pubd online) <http://www.hbs.edu/research/pdf/10-099.pdf> accessed 26 Oct 2012.
European Science Foundation (2009) Evaluation in National Research Funding Agencies: Approaches, Experiences and Case Studies. (Pubd online) <http://www.esf.org/index.php?eID=tx_ccdamdl_file&p[file]=25668&p[dl]=1&p[pid]=6767&p[site]=European%20Science%20Foundation&p[t]=1351858982&hash=93e987c5832f10aeee3911bac23b4e0f&l=en> accessed 26 Oct 2012.
Jones, M. M. and Grant, J. (2013) 'Methodologies for Assessing and Evidencing Research Impact'. RAND Europe. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Grant, J. (2006) Measuring the Benefits from Research. RAND Europe. (Pubd online) <http://www.rand.org/pubs/research_briefs/2007/RAND_RB9202.pdf> accessed 26 Oct 2012.
Grant, J. et al. (2009) Capturing Research Impacts: A Review of International Practice. (Pubd online) <http://www.rand.org/pubs/documented_briefings/2010/RAND_DB578.pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004–2014'. London: HM Treasury.
Hanney, S. and Gonzalez-Block, M. A. (2011) 'Yes, Research can Inform Health Policy; But can we Bridge the "Do-Knowing it's been Done" Gap?', Health Research Policy and Systems, 9: 23.
Hughes, A. and Martin, B. (2012) 'Enhancing Impact: The Value of Public Sector R&D'. Council for Industry and Higher Education, UK Innovation Research Centre. Summary report. (Pubd online) <http://ukirc.ac.uk/object/report/8025/doc/CIHE_0612ImpactReport_summary.pdf> accessed 26 Oct 2012.
Husbands-Fealing, K. (2013) 'Assessing Impacts of Higher Education Systems'. University of Minnesota. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Johnston, R. (1995) 'Research Impact Quantification', Scientometrics, 34: 415–26.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF', Times Higher Education. (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) Through a Glass, Darkly: Measuring the Social Value of Universities. National Co-ordinating Centre for Public Engagement. (Pubd online) <http://www.publicengagement.ac.uk/sites/default/files/80096%20NCCPE%20Social%20Value%20Report.pdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework', BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) Maximising the Impacts of Your Research: A Handbook for Social Scientists. (Pubd online) <http://www2.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific', Nature, 464: 488–9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog. (Pubd online) <http://mice.cerch.kcl.ac.uk/> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information Systems of Research Funding Agencies in the "Era of the Big Data": The Case Study of the Research Information System of the European Research Council'. In: Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (June 6–9, 2012), pp. 103–12. Prague, Czech Republic.
Nason, E. et al. (2008) Health Research—Making an Impact: The Economic and Social Benefits of HRB-funded Research. RAND Europe. Dublin: Health Research Board.
Reeves, M. (2002) Measuring the Economic and Social Impact of the Arts: A Review. Arts Council England. (Pubd online) <http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf> accessed 26 Oct 2012.
REF2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/researchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanels/re01_10.pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation. (Pubd online) <http://www.russellgroup.ac.uk/uploads/REF-consultation-response-FINAL-Dec09.pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot. (Pubd online) <http://www.libsearch.com/visit/1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research', Journal of Higher Education Policy and Management, 32: 499–511.
SIAMPI Website (2011). (Pubd online) <http://www.siampi.eu/Pages/SIA/12625bGFuZz1FTkc.html> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report. (Pubd online) <http://www.siampi.eu/Content/SIAMPI/SIAMPI_Final%20report.pdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment. (Pubd online) <http://www.thesroinetwork.org/publications/doc_details/241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals. (Pubd online) <http://www.ucu.org.uk/media/pdf/n/q/ucu_REFstatement_finalsignatures.pdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their Function'. In: The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) Policy and Practice Impacts of Research Funded by the Economic and Social Research Council. RAND Europe. (Pubd online) <http://www.esrc.ac.uk/_images/Case_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563.pdf> accessed 26 Oct 2012.
impact was difficult In the UK the Russell GroupUniversities responded to the REF consultation by recom-mending that no time lag be put on the delivery of impactfrom a piece of research citing examples such as the devel-opment of cardiovascular disease treatments which takebetween 10 and 25 years from research to impact (RussellGroup 2009) To be considered for inclusion within theREF impact must be underpinned by research that tookplace between 1 January 1993 and 31 December 2013 withimpact occurring during an assessment window from 1January 2008 to 31 July 2013 However there has beenrecognition that this time window may be insufficient insome instances with architecture being granted an add-itional 5-year period (REF2014 2012) why only architec-ture has been granted this dispensation is not clear whensimilar cases could be made for medicine physics or evenEnglish literature Recommendations from the REF pilotwere that the panel should be able to extend the time framewhere appropriate this however poses difficult decisionswhen submitting a case study to the REF as to what theview of the panel will be and whether if deemed inappro-priate this will render the case study lsquounclassifiedrsquo
52 The developmental nature of impact
Impact is not static it will develop and change over timeand this development may be an increase or decrease in thecurrent degree of impact Impact can be temporary orlong-lasting The point at which assessment takes placewill therefore influence the degree and significance ofthat impact For example following the discovery of anew potential drug preclinical work is required followedby Phase 1 2 and 3 trials and then regulatory approval isgranted before the drug is used to deliver potential healthbenefits Clearly there is the possibility that the potentialnew drug will fail at any one of these phases but each phasecan be classed as an interim impact of the original discov-ery work on route to the delivery of health benefits but thetime at which an impact assessment takes place will influ-ence the degree of impact that has taken place If impact isshort-lived and has come and gone within an assessmentperiod how will it be viewed and considered Again theobjective and perspective of the individuals and organiza-tions assessing impact will be key to understanding howtemporal and dissipated impact will be valued in compari-son with longer-term impact
53 Attribution
Impact is derived not only from targeted research but fromserendipitous findings good fortune and complexnetworks interacting and translating knowledge andresearch The exploitation of research to provide impactoccurs through a complex variety of processes individualsand organizations and therefore attributing the contribu-tion made by a specific individual piece of research
funding strategy or organization to an impact is notstraight forward Husbands-Fealing suggests that toassist identification of causality for impact assessment itis useful to develop a theoretical framework to map theactors activities linkages outputs and impacts within thesystem under evaluation which shows how later phasesresult from earlier ones Such a framework should be notlinear but recursive including elements from contextualenvironments that influence andor interact with variousaspects of the system Impact is often the culmination ofwork within spanning research communities (Duryea et al2007) Concerns over how to attribute impacts have beenraised many times (The Allen Consulting Group 2005Duryea et al 2007 Grant et al 2009) and differentiatingbetween the various major and minor contributions thatlead to impact is a significant challenge
Figure 1 replicated from Hughes and Martin (2012)illustrates how the ease with which impact can beattributed decreases with time whereas the impact oreffect of complementary assets increases highlightingthe problem that it may take a considerable amount oftime for the full impact of a piece of research to developbut because of this time and the increase in complexity ofthe networks involved in translating the research andinterim impacts it is more difficult to attribute and linkback to a contributing piece of research
This presents particular difficulties in research discip-lines conducting basic research such as pure mathematicswhere the impact of research is unlikely to be foreseenResearch findings will be taken up in other branches ofresearch and developed further before socio-economicimpact occurs by which point attribution becomes ahuge challenge If this research is to be assessed alongsidemore applied research it is important that we are able to atleast determine the contribution of basic research It hasbeen acknowledged that outstanding leaps forward inknowledge and understanding come from immersing in abackground of intellectual thinking that lsquoone is able to seefurther by standing on the shoulders of giantsrsquo
5.4 Knowledge creep
It is acknowledged that one of the outcomes of developing new knowledge through research can be 'knowledge creep', where new data or information become accepted and absorbed over time. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al. 2005; Wooding et al. 2007). This is recognized as being particularly problematic within the social sciences, where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a 'distinctive contribution' (REF2014 2011a). The difficulty then is how to determine what the contribution has been in the absence of adequate evidence, and how we ensure that research resulting in impacts that cannot be evidenced is valued and supported.
5.5 Gathering evidence
Gathering evidence of the links between research and impact is not only a challenge where that evidence is lacking. The introduction of impact assessments, with the requirement to collate evidence retrospectively, poses difficulties because evidence, measurements, and baselines have in many cases not been collected and may no longer be available. While, looking forward, we will be able to reduce this problem in the future, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.
6. Developing systems and taxonomies for capturing impact
Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Decker et al. (2007) surveyed researchers in the top US research institutions during 2005; the survey of more than 6,000 researchers found that, on average, more than 40% of their time was spent on administrative tasks. It is desirable that the assignation of administrative tasks to researchers is limited, and, to assist the tracking and collating of impact data, systems are therefore being developed through numerous projects internationally, including STAR METRICS in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil (Lane 2010; Mugabushaka and Papazoglou 2012).
Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System (Mugabushaka and Papazoglou 2012), are being developed to be CERIF-compatible.
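The value of a shared format is that a record written by one system can be read unambiguously by another. The sketch below illustrates the principle only; the element names are invented for illustration and do not reproduce the actual CERIF schema.

```python
import xml.etree.ElementTree as ET

def make_record(title, year, org):
    """Serialize a minimal CERIF-style XML record (illustrative
    element names only -- not the real CERIF schema)."""
    root = ET.Element("ResearchOutput")
    ET.SubElement(root, "Title").text = title
    ET.SubElement(root, "Year").text = str(year)
    ET.SubElement(root, "Organisation").text = org
    return ET.tostring(root, encoding="unicode")

def read_record(xml_text):
    """Parse the record back: a second system recovers the same
    fields because both sides agree on the format."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

record = make_record("Impact of X on Y", 2013, "University A")
```

Because both functions share one agreed vocabulary, a record produced at one institution can be stored, compared, or transferred by another without translation, which is the compatibility argument made above.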
In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and the evidence for it, that can be used universally is seen to be very valuable. However, the Achilles heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that are rewarded.
Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF, by Research Councils UK, in unpublished draft case studies undertaken at King's College London, or outlined in relevant publications (MICE Project n.d.). A taxonomy of impact categories was
Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).
then produced, onto which impact could be mapped. What emerged on testing the MICE taxonomy (Cooke and Nadim 2011), by mapping impacts from case studies, was that detailed categorization of impact was too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was 'a unique collection'. Where quantitative data were available, for example audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmith report (Cooke and Nadim 2011) recommended making indicators 'value free', enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation; re-use and influence; collaboration and boundary work; and innovation and invention.
While defining the terminology used to describe impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can still be placed. It is worth considering the degree to which indicators are defined: broader definitions provide greater flexibility.
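This recommendation can be read as favouring loose tagging over a rigid hierarchy. A minimal sketch, with category labels loosely echoing the Goldsmith report's broad groupings (the exact labels here are hypothetical), might keep unrecognized routes rather than rejecting them:

```python
# Hypothetical 'value-free' broad categories, after the Goldsmith
# report's groupings; the labels are illustrative, not a standard.
BROAD_CATEGORIES = {
    "dissemination and circulation",
    "re-use and influence",
    "collaboration and boundary work",
    "innovation and invention",
}

def tag_impact(description, categories, notes=""):
    """Store an impact with any number of broad tags plus a bucket
    for non-standard routes, so nothing is forced into a box."""
    chosen = set(categories)
    return {
        "description": description,
        "categories": chosen & BROAD_CATEGORIES,
        "other_routes": chosen - BROAD_CATEGORIES,  # kept, not rejected
        "notes": notes,  # free text preserves context for assessors
    }

r = tag_impact("Findings cited in a select committee report",
               ["re-use and influence", "policy uptake"],
               notes="context: 2012 inquiry")
```

The design choice is simply that an unrecognized route ("policy uptake" above) survives as data instead of being lost, which is the flexibility the paragraph argues for.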
It is possible to incorporate both metrics and narratives within systems, for example within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary (Davies et al. 2005). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research and the various contributions to impact made along the pathway.
Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances, controls are not feasible, as we cannot observe what would have occurred had a piece of research not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.
It is now possible to use data-mining tools to extract specific data from narratives or unstructured data (Mugabushaka and Papazoglou 2012). This is being done for the collation of academic impact and outputs, for example by the Research Portfolio Online Reporting Tools, which use PubMed and text mining to cluster research projects, and by STAR METRICS in the US, which uses administrative records and research outputs; a similar approach is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). These techniques have the potential to transform data capture and impact assessment (Jones and Grant 2013), although Mugabushaka and Papazoglou (2012) acknowledge that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture the interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable. If knowledge-exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or diary, then far more of these events could be recorded with relative ease. Capturing knowledge-exchange events would greatly assist the linking of research with impact.
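The clustering step such text-mining tools perform can be illustrated in miniature: treat each project description as a bag of words and group descriptions whose similarity passes a threshold. This is a toy sketch of the general technique, not the method any of the systems named above actually uses.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cluster(texts, threshold=0.3):
    """Greedy single-pass clustering: attach each text to the first
    cluster whose seed is similar enough, else start a new cluster."""
    bags = [Counter(t.lower().split()) for t in texts]
    clusters = []  # list of (seed_bag, member_indices)
    for i, bag in enumerate(bags):
        for seed, members in clusters:
            if cosine(seed, bag) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((bag, [i]))
    return [members for _, members in clusters]

projects = [
    "stem cell therapy for heart repair",
    "cardiac stem cell regeneration therapy",
    "medieval manuscript digitisation project",
]
groups = cluster(projects)  # the two stem-cell projects group together
```

Production systems use far richer representations (stemming, weighting, large vocabularies), but the underlying idea of grouping unstructured descriptions by lexical similarity is the same.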
The transition to routine capture of impact data requires not only the development of tools and systems to help with implementation, but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.
7. Indicators, evidence, and impact within systems
What indicators, evidence, and impacts need to be captured within developing systems? There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia Advancing Standards in Research Administration Information (CASRAI), for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators, so that it can be incorporated into systems internationally, and seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.
7.1 Metrics
Metrics have commonly been used as a measure of impact, for example in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
The interest in social return on investment (SROI) perhaps indicates the desire of some organizations to be able to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used as a demonstration of worth. This is a metric that has been used within the charitable sector (Berg and Mansson 2011) and also features as evidence in the REF guidance for Panel D (REF2014 2012). More details on SROI can be found in 'A Guide to Social Return on Investment', produced by The SROI Network (2012).
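At its simplest, SROI is the ratio of the discounted (present) value of monetized benefits to the investment made; in practice the SROI Network guide prescribes a fuller, stakeholder-led valuation of outcomes. The figures below are invented for illustration, and the 3.5% default is simply the UK Treasury Green Book discount rate used as a plausible placeholder.

```python
def sroi_ratio(investment, annual_benefits, discount_rate=0.035):
    """Present value of a stream of monetized annual benefits,
    divided by the investment. 3.5% is the UK Treasury Green Book
    discount rate, used here only as a plausible default."""
    present_value = sum(
        benefit / (1 + discount_rate) ** year
        for year, benefit in enumerate(annual_benefits, start=1)
    )
    return present_value / investment

# Invented example: 100,000 invested; 40,000 of monetized social
# value per year for three years.
ratio = sroi_ratio(100_000, [40_000, 40_000, 40_000])  # about 1.12
```

A ratio above 1 indicates the valued benefits exceed the investment; the hard part, as the surrounding discussion makes clear, is not the arithmetic but defensibly monetizing the benefits in the first place.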
Although metrics can provide evidence of quantitative changes or impacts arising from our research, they cannot adequately evidence the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:

(1) The full impact will not be realized, as we focus on easily quantifiable indicators.
(2) We will focus attention towards generating results that enable boxes to be ticked, rather than delivering real value for money and innovative research.
(3) They risk being monetized or converted into a lowest common denominator in an attempt to compare, for example, the cost of a new theatre against that of a hospital.
7.2 Narratives
Narratives can be used to describe impact; the use of narratives enables a story to be told, the impact to be placed in context, and good use to be made of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a more complete picture of impact can be developed, again from a particular perspective, but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
By allowing impact to be placed in context, we answer the 'so what?' question that can result from quantitative data analyses; but is there a risk that the full picture will not be presented, in order to show impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which changes of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of a change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed. The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
7.4 Citations (outside of academia) and documentation
Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or reference to a piece of research being cited
Table 1. The advantages and disadvantages of the case study approach

Benefits | Considerations
Uses quantitative and qualitative data | Automated collation of evidence is difficult
Allows evidence to be contextualized and a story told | Incorporating perspective can make it difficult to assess critically
Enables assessment in the absence of quantitative data | Time-consuming to prepare and assess
Allows collation of unique datasets | Difficult to compare like with like
Preserves distinctive account or disciplinary perspective | Rewards those who can write well and/or afford to pay for external input
within the media. A collation of several indicators of impact may be enough to convince that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered, alongside other evidence, as contributing to, or an indicator of, impact.
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
8 Conclusions and recommendations
The understanding of the term 'impact' varies considerably, and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may itself influence the assessment of impact. Over the past year, a number of new posts have been created within universities, such as for writing impact case studies, and a number of companies are now offering this as a contract service. A key concern here is that we could find that universities that can afford to employ either consultants or impact 'administrators' will generate the best case studies.
The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts, to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that
Figure 2. Overview of the types of information that systems need to capture and link.
the time and capability required for the capture of information are considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools that enable researchers to capture much of this would be valuable. However, it must be remembered that, in the case of the UK REF, only impact based on research that has taken place within the institution submitting the case study is considered. It is therefore in an institution's interest to have a process by which all the necessary information is captured, to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 summarizes the information that systems will need to capture and link:
(1) Research findings, including outputs (e.g. presentations and publications)
(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)
(4) Research developments (based on stakeholder input and discussions)
(5) Outcomes (e.g. commercial and cultural citations)
(6) Impacts (changes, e.g. behavioural and economic)
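One way to picture how such records might be linked is as a small graph running from research output to impact; the type labels and field names below are hypothetical, not those of any existing system.

```python
# Hypothetical linked-record sketch covering the types of
# information listed above; names are illustrative only.
class Node:
    def __init__(self, kind, label):
        self.kind = kind     # e.g. 'output', 'interaction', 'impact'
        self.label = label
        self.links = []      # downstream nodes on the pathway

    def link(self, other):
        self.links.append(other)
        return other

def trace(node, path=None):
    """Walk from a research output to every impact reachable from
    it, returning the full pathways -- the 'route to impact'."""
    path = (path or []) + [node.label]
    if not node.links:
        return [path]
    routes = []
    for nxt in node.links:
        routes.extend(trace(nxt, path))
    return routes

paper = Node("output", "journal article")
workshop = paper.link(Node("interaction", "stakeholder workshop"))
feedback = workshop.link(Node("feedback", "testimonial"))
feedback.link(Node("impact", "change in clinical guidance"))
routes = trace(paper)
```

Because each record keeps explicit links to what followed it, the pathway survives even if the researcher who created the early records has since left the institution, which is the continuity concern raised above.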
Attempting to evaluate impact, whether to justify expenditure, showcase our work, or inform future funding decisions, will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts do not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are so impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10]
References
Australian Research Council (2008) ERA Indicator Principles (Pubd online) <http://www.arc.gov.au/pdf/ERA_Indicator_Principles.pdf> accessed 6 Oct 2012.
Berg, L. O. and Mansson, C. (2011) A White Paper on Charity Impact Measurement (Pubd online) <http://www.charitystar.org/wp-content/uploads/2011/05/Return_on_donations_a_white_paper_on_charity_impact_measurement.pdf> accessed 6 Aug 2012.
Bernstein, A. et al. (2006) A Framework to Measure the Impact of Investments in Health Research (Pubd online) <http://www.oecd.org/science/innovationinsciencetechnologyandindustry/37450246.pdf> accessed 26 Oct 2012.
Bornmann, L. and Marx, W. (2013) 'How Good is Research Really?', European Molecular Biology Organization (EMBO) Reports, 14: 226–30.
Buxton, M., Hanney, S. and Jones, T. (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review', Bulletin of the World Health Organization, 82: 733–9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research (Pubd online) <http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf> accessed 26 Oct 2012.
Cooke, J. and Nadim, T. (2011) Measuring Impact Under CERIF at Goldsmiths (Pubd online) <http://mice.cerch.kcl.ac.uk/wp-uploads/2011/07/MICE_report_Goldsmiths_final.pdf> accessed 26 Oct 2012.
Corbyn, Z. (2009) 'Anti-Impact Campaign's "Poster Boy" Sticks up for the Ivory Tower', Times Higher Education (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies, H., Nutley, S. and Walter, I. (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues (Pubd online) <http://www.odi.org.uk/rapid/Events/ESRC/docs/background_paper.pdf> accessed 26 Oct 2012.
Decker, R. et al. (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty (Pubd online) <http://www.iscintelligence.com/archivos_subidos/usfacultyburden_5.pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012–2013 (Pubd online) <http://www.bis.gov.uk/About> accessed 24 Oct 2012.
Donovan, C. (2008) 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental and Cultural Returns of Publicly Funded Research'. In: Coryn, C. L. S. and Scriven, M. (eds) Reforming the Evaluation of Research, New Directions for Evaluation, 118, pp. 47–60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support, but a State-of-the-Art Approach to Measurement is Needed', LSE Blog (Pubd online) <http://blogs.lse.ac.uk/impactofsocialsciences/tag/claire-donovan> accessed 26 Oct 2012.
Donovan, C. and Hanney, S. (2011) 'The "Payback Framework" Explained', Research Evaluation, 20: 181–3.
Duryea, M., Hochman, M. and Parfitt, A. (2007) 'Measuring the Impact of Research', Research Global, 27: 8–9 (Pubd online) <http://www.atn.edu.au/docs/Research%20Global%20-%20Measuring%20the%20impact%20of%20research.pdf> accessed 26 Oct 2012.
Ebrahim, A. and Rangan, K. (2010) 'The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance', Working Paper, Harvard Business School (Pubd online) <http://www.hbs.edu/research/pdf/10-099.pdf> accessed 26 Oct 2012.
European Science Foundation (2009) Evaluation in National Research Funding Agencies: Approaches, Experiences and Case Studies (Pubd online) <http://www.esf.org/index.php?eID=tx_ccdamdl_file&p[file]=25668&p[dl]=1&p[pid]=6767&p[site]=European%20Science%20Foundation&p[t]=1351858982&hash=93e987c5832f10aeee3911bac23b4e0f&l=en> accessed 26 Oct 2012.
Jones, M. M. and Grant, J. (2013) 'Methodologies for Assessing and Evidencing Research Impact', RAND Europe. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for Jisc. University of Exeter.
Grant, J. (2006) Measuring the Benefits from Research, RAND Europe (Pubd online) <http://www.rand.org/pubs/research_briefs/2007/RAND_RB9202.pdf> accessed 26 Oct 2012.
Grant, J. et al. (2009) Capturing Research Impacts: A Review of International Practice (Pubd online) <http://www.rand.org/pubs/documented_briefings/2010/RAND_DB578.pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004–2014'. London: HM Treasury.
Hanney, S. and Gonzalez-Block, M. A. (2011) 'Yes, Research can Inform Health Policy; But can we Bridge the "Do-Knowing it's been Done" Gap?', Health Research Policy and Systems, 9: 23.
Hughes, A. and Martin, B. (2012) 'Council for Industry and Higher Education, UK Innovation Research Centre. Enhancing Impact: The Value of Public Sector R&D', Summary report (Pubd online) <http://ukirc.ac.uk/object/report/8025/doc/CIHE_0612ImpactReport_summary.pdf> accessed 26 Oct 2012.
Husbands-Fealing, K. (2013) 'Assessing Impacts of Higher Education Systems', University of Minnesota. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for Jisc. University of Exeter.
Johnston, R. (1995) 'Research Impact Quantification', Scientometrics, 34: 415–26.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF', Times Higher Education (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) 'National Co-ordinating Centre for Public Engagement', Through a Glass Darkly: Measuring the Social Value of Universities (Pubd online) <http://www.publicengagement.ac.uk/sites/default/files/80096%20NCCPE%20Social%20Value%20Report.pdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework', BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) 'Maximising the Impacts of Your Research: A Handbook for Social Scientists' (Pubd online) <http://www2.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific', Nature, 464: 488–9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog (Pubd online) <http://mice.cerch.kcl.ac.uk/> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information Systems of Research Funding Agencies in the "Era of the Big Data": The Case Study of the Research Information System of the European Research Council'. In: Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (6–9 June 2012), pp. 103–12. Prague, Czech Republic.
Nason, E. et al. (2008) 'RAND Europe', Health Research—Making an Impact: The Economic and Social Benefits of HRB-funded Research. Dublin: Health Research Board.
Reeves, M. (2002) 'Arts Council England', Measuring the Economic and Social Impact of the Arts: A Review (Pubd online) <http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf> accessed 26 Oct 2012.
REF 2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/researchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanels/re01_10.pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation (Pubd online) <http://www.russellgroup.ac.uk/uploads/REF-consultation-response-FINAL-Dec09.pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot (Pubd online) <http://www.libsearch.com/visit/1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research', Journal of Higher Education Policy and Management, 32: 499–511.
SIAMPI website (2011) (Pubd online) <http://www.siampi.eu/Pages/SIA/12/625.bGFuZz1FTkc.html> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report (Pubd online) <http://www.siampi.eu/Content/SIAMPI/SIAMPI_Final%20report.pdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment (Pubd online) <http://www.thesroinetwork.org/publications/doc_details/241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals (Pubd online) <http://www.ucu.org.uk/media/pdf/n/q/ucu_REFstatement_finalsignatures.pdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their Function', The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) 'RAND Europe', Policy and Practice Impacts of Research Funded by the Economic and Social Research Council (Pubd online) <http://www.esrc.ac.uk/_images/Case_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563.pdf> accessed 26 Oct 2012.
of adequate evidence and how we ensure that research that
results in impacts that cannot be evidenced is valued andsupported
55 Gathering evidence
Gathering evidence of the links between research andimpact is not only a challenge where that evidence is
lacking The introduction of impact assessments with therequirement to collate evidence retrospectively posesdifficulties because evidence measurements and baselines
have in many cases not been collected and may no longerbe available While looking forward we will be able toreduce this problem in the future identifying capturing
and storing the evidence in such a way that it can be usedin the decades to come is a difficulty that we will need to
tackle
6 Developing systems and taxonomies forcapturing impact
Collating the evidence and indicators of impact is a signifi-cant task that is being undertaken within universities and
institutions globally Decker et al (2007) surveyed re-searchers in the US top research institutions during 2005the survey of more than 6000 researchers found that on
average more than 40 of their time was spent doingadministrative tasks It is desirable that the assignationof administrative tasks to researchers is limited and there-
fore to assist the tracking and collating of impact datasystems are being developed involving numerous projects
and developments internationally including Star Metricsin the USA the ERC (European Research Council)
Research Information System and Lattes in Brazil (Lane2010 Mugabushaka and Papazoglou 2012)
Ideally systems within universities internationallywould be able to share data allowing direct comparisonsaccurate storage of information developed in collab-orations and transfer of comparable data as researchersmove between institutions To achieve compatible systemsa shared language is required CERIF (CommonEuropean Research Information Format) was developedfor this purpose first released in 1991 a number ofprojects and systems across Europe such as the ERCResearch Information System (Mugabushaka andPapazoglou 2012) are being developed as CERIF-compatible
In the UK there have been several Jisc-funded projects inrecent years to develop systems capable of storing researchinformation for example MICE (Measuring Impacts UnderCERIF) UK Research Information Shared Service andIntegrated Research Input and Output System all basedon the CERIF standard To allow comparisons between in-stitutions identifying a comprehensive taxonomy of impactand the evidence for it that can be used universally is seen tobe very valuable However the Achilles heel of any suchattempt as critics suggest is the creation of a system thatrewards what it can measure and codify with the knock-oneffect of directing research projects to deliver within themeasures and categories that reward
Attempts have been made to categorize impact evidenceand data for example the aim of the MICE Project was todevelop a set of impact indicators to enable impact to befed into a based system Indicators were identified fromdocuments produced for the REF by Research CouncilsUK in unpublished draft case studies undertaken atKingrsquos College London or outlined in relevant publications(MICE Project nd) A taxonomy of impact categories was
Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).
Assessment evaluations and definitions of research impact 27
then produced, onto which impact could be mapped. What emerged on testing the MICE taxonomy (Cooke and Nadim 2011), by mapping impacts from case studies, was that detailed categorization of impact was too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was 'a unique collection'. Where quantitative data were available, for example audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmith report (Cooke and Nadim 2011) recommended making indicators 'value free', enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation; re-use and influence; collaboration and boundary work; and innovation and invention.
While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can be placed. It is worth considering the degree to which indicators are defined, and providing broader definitions with greater flexibility.
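To make this recommendation concrete, a flexible categorization might be sketched as below. This is a hypothetical illustration, not an implementation from any of the projects discussed; the category names loosely follow the Goldsmith report's general groupings, and the record fields and identifiers are invented.

```python
# Sketch of a flexible impact-categorization record (hypothetical).
from dataclasses import dataclass, field

# Broad, 'value-free' indicator categories, loosely after the Goldsmith
# report's general groupings; an "other" route keeps non-standard impacts placeable.
CATEGORIES = {
    "dissemination-and-circulation",
    "re-use-and-influence",
    "collaboration-and-boundary-work",
    "innovation-and-invention",
    "other",
}

@dataclass
class ImpactRecord:
    research_id: str          # link back to the originating research
    descriptor: str           # narrative where value/quality is established
    categories: set = field(default_factory=set)

    def tag(self, category: str) -> None:
        # Unknown labels fall into "other" rather than being rejected, so the
        # taxonomy never forces an impact into an ill-fitting box.
        self.categories.add(category if category in CATEGORIES else "other")

record = ImpactRecord("proj-42", "Findings cited in local planning guidance")
record.tag("re-use-and-influence")
record.tag("community-theatre-revival")   # a non-standard route lands in "other"
```

The design point is that value and quality live in the descriptor, assessed by expert panels, while the categories stay broad enough to compare across institutions.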
It is possible to incorporate both metrics and narratives within systems, for example within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities) for whom the purpose of analysis may vary (Davies et al. 2005). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al., n.d.). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research, as well as the various contributions to impact made during the pathway.
Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances controls are not feasible, as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.
It is now possible to use data-mining tools to extract specific data from narratives or unstructured data (Mugabushaka and Papazoglou 2012). This is being done for the collation of academic impact and outputs, for example by the Research Portfolio Online Reporting Tools, which use PubMed and text mining to cluster research projects, and by STAR Metrics in the USA, which uses administrative records and research outputs; the approach is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). These techniques have the potential to transform data capture and impact assessment (Jones and Grant 2013). Mugabushaka and Papazoglou (2012) acknowledge that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture the interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable. If knowledge exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or a diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
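As a hedged illustration of the kind of text mining described (not the actual STAR Metrics or ERC tooling), the sketch below clusters invented project descriptions by bag-of-words cosine similarity, the simplest form of the clustering such systems perform on abstracts:

```python
# Minimal text-mining sketch: group project descriptions by word overlap.
# The project texts and the similarity threshold are invented for illustration.
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term frequencies for a project description."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

projects = {
    "P1": "gene expression in tumour cells",
    "P2": "tumour cell gene sequencing",
    "P3": "medieval manuscript conservation",
}
vecs = {pid: vectorize(text) for pid, text in projects.items()}

# Greedy single-link grouping: a project joins the first cluster containing
# any member it resembles above the threshold.
clusters = []
for pid in projects:
    for cluster in clusters:
        if any(cosine(vecs[pid], vecs[member]) >= 0.3 for member in cluster):
            cluster.append(pid)
            break
    else:
        clusters.append([pid])
```

Production systems use far richer features (full abstracts, controlled vocabularies, administrative records), but the underlying step of turning unstructured text into comparable vectors is the same.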
The transition to routine capture of impact data not only requires the development of tools and systems to help with implementation, but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.
7. Indicators, evidence, and impact within systems
What indicators, evidence, and impacts need to be captured within developing systems? There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia for Advancing Standards in Research Administration Information, for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators, such that it can be incorporated into systems internationally, and seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.
7.1 Metrics
Metrics have commonly been used as a measure of impact, for example in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
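The point about baselines can be shown with a line of arithmetic: the same headline number reads very differently against different baselines. The visitor figures below are invented for illustration.

```python
# Why baseline data matter for impact metrics (all figures invented).
def relative_change(value, baseline):
    """Degree of change implied by a metric, given its baseline."""
    if baseline == 0:
        raise ValueError("a baseline is required to contextualize the metric")
    return (value - baseline) / baseline

visitors_after = 12000  # exhibition visitors following the research

# Against a baseline of 10,000 the research implies a modest 20% rise;
# against a baseline of 2,000 the same number is a six-fold audience.
assert relative_change(visitors_after, baseline=10000) == 0.2
assert relative_change(visitors_after, baseline=2000) == 5.0
```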
The use of Social Return on Investment (SROI) perhaps indicates the desire of some organizations to be able to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used for demonstration of worth. This is a metric that has been used within the charitable sector (Berg and Mansson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012). More details on SROI can be found in 'A Guide to Social Return on Investment', produced by The SROI Network (2012).
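In its simplest form, the SROI calculation divides the discounted, monetized value of outcomes by the investment made; the sketch below illustrates only that core ratio, with invented figures and an illustrative discount rate (the SROI Network guide describes the full method, including adjustments such as deadweight and attribution, which are omitted here).

```python
# Core of the SROI ratio: present value of monetized social outcomes per unit
# invested. Outcome values, investment, and discount rate are invented.
def sroi_ratio(outcome_values, investment, discount_rate=0.035):
    """Discounted value of projected yearly outcomes divided by investment.

    outcome_values: monetized outcome value per year, year 1 onwards.
    """
    present_value = sum(
        v / (1 + discount_rate) ** year
        for year, v in enumerate(outcome_values, start=1)
    )
    return present_value / investment

# GBP 50k invested, outcomes valued at GBP 30k/year over three years:
ratio = sroi_ratio([30000, 30000, 30000], investment=50000)
# ratio is about 1.68, i.e. roughly GBP 1.68 of social value per GBP 1 invested
```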
Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately provide evidence of the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:
(1) The full impact will not be realized as we focus oneasily quantifiable indicators
(2) We will focus attention towards generating resultsthat enable boxes to be ticked rather than deliveringreal value for money and innovative research
(3) They risk being monetized or converted into a lowestcommon denominator in an attempt to compare thecost of a new theatre against that of a hospital
7.2 Narratives
Narratives can be used to describe impact; the use of narratives enables a story to be told and the impact to be placed in context, and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
By allowing impact to be placed in context, we answer the 'so what?' question that can result from quantitative data analyses; but is there a risk that the full picture may not be presented, in order to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which change of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed.
The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
7.4 Citations (outside of academia) and documentation

Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or reference to a piece of research being cited
Table 1. The advantages and disadvantages of the case study approach

Benefits | Considerations
Uses quantitative and qualitative data | Automated collation of evidence is difficult
Allows evidence to be contextualized and a story told | Incorporating perspective can make it difficult to assess critically
Enables assessment in the absence of quantitative data | Time-consuming to prepare and assess
Allows collation of unique datasets | Difficult to compare like with like
Preserves distinctive account or disciplinary perspective | Rewards those who can write well and/or afford to pay for external input
within the media. A collation of several indicators of impact may be enough to convince that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered alongside other evidence as contributing to, or an indicator of, impact.
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
8. Conclusions and recommendations
The understanding of the term impact varies considerably and, as such, the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. Over the past year, a number of new posts have been created within universities for writing impact case studies, and a number of companies are now offering this as a contract service. A key concern here is that we could find that universities that can afford to employ either consultants or impact 'administrators' will generate the best case studies.
The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts, to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that
Figure 2. Overview of the types of information that systems need to capture and link.
the time and capability required for capture of information is considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools to enable researchers to capture much of this would be valuable. However, it must be remembered that, in the case of the UK REF, impact is only considered if it is based on research that has taken place within the institution submitting the case study. It is therefore in an institution's interest to have a process by which all the necessary information is captured, to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link:
(1) Research findings, including outputs (e.g. presentations and publications)
(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)
(4) Research developments (based on stakeholder input and discussions)
(5) Outcomes (e.g. commercial and cultural citations)
(6) Impacts (changes, e.g. behavioural and economic)
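The linking requirement above can be sketched as a minimal record store, hypothetical throughout, in which each captured item points back to what it arose from, so that the route from research to impact remains traceable even after a researcher has left the institution:

```python
# Hypothetical sketch of linked impact records: every capture (output,
# interaction, outcome, impact) records what it arose from, forming a
# traceable pathway back to the original research. All identifiers invented.
links = {}  # record id -> id of the record it arose from (None for research)

def capture(record_id, arose_from=None):
    """Log a record as it occurs, linked to its antecedent."""
    links[record_id] = arose_from

def route_to_research(record_id):
    """Trace an impact back through interim records to the original research."""
    path = [record_id]
    while links[path[-1]] is not None:
        path.append(links[path[-1]])
    return path

capture("paper-2010")                              # research finding/output
capture("workshop-with-council", "paper-2010")     # stakeholder interaction
capture("policy-change", "workshop-with-council")  # outcome
capture("behaviour-change", "policy-change")       # impact
```

A real system would of course store richer records and many-to-many links; the sketch shows only why capturing each link as it occurs makes the full pathway recoverable later.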
Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are impractical to demonstrate; in these cases it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10].
References
Australian Research Council (2008) ERA Indicator Principles. (Pubd online) <http://www.arc.gov.au/pdf/ERA_Indicator_Principles.pdf> accessed 6 Oct 2012.
Berg, L. O. and Mansson, C. (2011) A White Paper on Charity Impact Measurement. (Pubd online) <http://www.charitystar.org/wp-content/uploads/2011/05/Return_on_donations_a_white_paper_on_charity_impact_measurement.pdf> accessed 6 Aug 2012.
Bernstein, A. et al. (2006) A Framework to Measure the Impact of Investments in Health Research. (Pubd online) <http://www.oecd.org/science/innovationinsciencetechnologyandindustry/37450246.pdf> accessed 26 Oct 2012.
Bornmann, L. and Marx, W. (2013) 'How Good is Research Really?', European Molecular Biology Organization (EMBO) Reports, 14: 226-30.
Buxton, M., Hanney, S. and Jones, T. (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review', Bulletin of the World Health Organization, 82: 733-9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. (Pubd online) <http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf> accessed 26 Oct 2012.
Cooke, J. and Nadim, T. (2011) Measuring Impact Under CERIF at Goldsmiths. (Pubd online) <http://mice.cerch.kcl.ac.uk/wp-uploads/2011/07/MICE_report_Goldsmiths_final.pdf> accessed 26 Oct 2012.
Corbyn, Z. (2009) 'Anti-Impact Campaign's "Poster Boy" Sticks up for the Ivory Tower', Times Higher Education. (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies, H., Nutley, S. and Walter, I. (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues. (Pubd online) <http://www.odi.org.uk/rapid/Events/ESRC/docs/background_paper.pdf> accessed 26 Oct 2012.
Decker, R. et al. (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty. (Pubd online) <http://www.iscintelligence.com/archivos_subidos/usfacultyburden_5.pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012-2013. (Pubd online) <http://www.bis.gov.uk/About> accessed 24 Oct 2012.
Donovan, C. (2008) 'The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental and cultural returns of publicly funded research'. In: Coryn, C. L. S. and Scriven, M. (eds) Reforming the Evaluation of Research, New Directions for Evaluation, 118, pp. 47-60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support, but a State-of-the-Art Approach to Measurement is Needed', LSE Blog. (Pubd online) <http://blogs.lse.ac.uk/impactofsocialsciences/tag/claire-donovan> accessed 26 Oct 2012.
Donovan, C. and Hanney, S. (2011) 'The "Payback Framework" Explained', Research Evaluation, 20: 181-3.
Duryea, M., Hochman, M. and Parfitt, A. (2007) 'Measuring the Impact of Research', Research Global, 27: 8-9. (Pubd online) <http://www.atn.edu.au/docs/Research%20Global%20-%20Measuring%20the%20impact%20of%20research.pdf> accessed 26 Oct 2012.
Ebrahim, A. and Rangan, K. (2010) 'The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance', Working Paper, Harvard Business School. (Pubd online) <http://www.hbs.edu/research/pdf/10-099.pdf> accessed 26 Oct 2012.
European Science Foundation (2009) Evaluation in National Research Funding Agencies: Approaches, Experiences and Case Studies. (Pubd online) <http://www.esf.org/index.php?eID=tx_ccdamdl_file&p[file]=25668&p[dl]=1&p[pid]=6767&p[site]=European%20Science%20Foundation&p[t]=1351858982&hash=93e987c5832f10aeee3911bac23b4e0f&l=en> accessed 26 Oct 2012.
Jones, M. M. and Grant, J. (2013) 'Methodologies for Assessing and Evidencing Research Impact', RAND Europe. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for Jisc. University of Exeter.
Grant, J. (2006) Measuring the Benefits from Research, RAND Europe. (Pubd online) <http://www.rand.org/pubs/research_briefs/2007/RAND_RB9202.pdf> accessed 26 Oct 2012.
Grant, J. et al. (2009) Capturing Research Impacts: A Review of International Practice. (Pubd online) <http://www.rand.org/pubs/documented_briefings/2010/RAND_DB578.pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004-2014'. London: HM Treasury.
Hanney, S. and Gonzalez-Block, M. A. (2011) 'Yes, Research can Inform Health Policy; But can we Bridge the "Do-Knowing it's been Done" Gap?', Health Research Policy and Systems, 9: 23.
Hughes, A. and Martin, B. (2012) Enhancing Impact: The Value of Public Sector R&D, Summary report. Council for Industry and Higher Education / UK Innovation Research Centre. (Pubd online) <http://ukirc.ac.uk/object/report/8025/doc/CIHE_0612ImpactReport_summary.pdf> accessed 26 Oct 2012.
Husbands-Fealing, K. (2013) 'Assessing Impacts of Higher Education Systems', University of Minnesota. In: Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for Jisc. University of Exeter.
Johnston, R. (1995) 'Research Impact Quantification', Scientometrics, 34: 415-26.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF', Times Higher Education. (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) Through a Glass, Darkly: Measuring the Social Value of Universities, National Co-ordinating Centre for Public Engagement. (Pubd online) <http://www.publicengagement.ac.uk/sites/default/files/80096%20NCCPE%20Social%20Value%20Report.pdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework', BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) Maximising the Impacts of Your Research: A Handbook for Social Scientists. (Pubd online) <http://www2.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific', Nature, 464: 488-9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog. (Pubd online) <http://mice.cerch.kcl.ac.uk> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information Systems of Research Funding Agencies in the "Era of the Big Data": The Case Study of the Research Information System of the European Research Council'. In: Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (6-9 June 2012), pp. 103-12. Prague, Czech Republic.
Nason, E. et al. (2008) Health Research - Making an Impact: The Economic and Social Benefits of HRB-funded Research, RAND Europe. Dublin: Health Research Board.
Reeves, M. (2002) Measuring the Economic and Social Impact of the Arts: A Review, Arts Council England. (Pubd online) <http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf> accessed 26 Oct 2012.
REF 2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/researchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanels/re01_10.pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation. (Pubd online) <http://www.russellgroup.ac.uk/uploads/REF-consultation-response-FINAL-Dec09.pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot. (Pubd online) <http://www.libsearch.com/visit/1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research', Journal of Higher Education Policy and Management, 32: 499-511.
SIAMPI Website (2011). (Pubd online) <http://www.siampi.eu/Pages/SIA12625bGFuZz1FTkc.html> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report. (Pubd online) <http://www.siampi.eu/Content/SIAMPI/SIAMPI_Final%20report.pdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211-18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment. (Pubd online) <http://www.thesroinetwork.org/publications/doc_details/241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals. (Pubd online) <http://www.ucu.org.uk/media/pdf/n/q/ucu_REFstatement_finalsignatures.pdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their Function'. The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) Policy and Practice Impacts of Research Funded by the Economic and Social Research Council, RAND Europe. (Pubd online) <http://www.esrc.ac.uk/_images/Case_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563.pdf> accessed 26 Oct 2012.
then produced onto which impact could be mapped Whatemerged on testing the MICE taxonomy (Cooke andNadim 2011) by mapping impacts from case studies wasthat detailed categorization of impact was found to be tooprescriptive Every piece of research results in a uniquetapestry of impact and despite the MICE taxonomyhaving more than 100 indicators it was found that thesedid not suffice It is perhaps worth noting that the expertpanels who assessed the pilot exercise for the REF com-mented that the evidence provided by research institutes todemonstrate impact were lsquoa unique collectionrsquo Wherequantitative data were available for example audiencenumbers or book sales these numbers rarely reflected thedegree of impact as no context or baseline was availableCooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks ofimpacts that are generally found The Goldsmith report(Cooke and Nadim 2011) recommended making indicatorslsquovalue freersquo enabling the value or quality to be establishedin an impact descriptor that could be assessed by expertpanels The Goldsmith report concluded that generalcategories of evidence would be more useful such that in-dicators could encompass dissemination and circulationre-use and influence collaboration and boundary workand innovation and invention
While defining the terminology used to understand impactand indicators will enable comparable data to be stored andshared between organizations we would recommend thatany categorization of impacts be flexible such that impactsarising from non-standard routes can be placed It is worthconsidering the degree to which indicators are defined andprovide broader definitions with greater flexibility
It is possible to incorporate both metrics and narra-tives within systems for example within the ResearchOutcomes System and Researchfish currently used byseveral of the UK research councils to allow impacts tobe recorded although recording narratives has the advan-tage of allowing some context to be documented it maymake the evidence less flexible for use by different stake-holder groups (which include government funding bodiesresearch assessment agencies research providers and usercommunities) for whom the purpose of analysis may vary(Davies et al 2005) Any tool for impact evaluation needsto be flexible such that it enables access to impact data fora variety of purposes (Scoble et al nd) Systems need tobe able to capture links between and evidence of the fullpathway from research to impact including knowledgeexchange outputs outcomes and interim impacts toallow the route to impact to be traced This database ofevidence needs to establish both where impact can bedirectly attributed to a piece of research as well asvarious contributions to impact made during the pathway
Baselines and controls need to be captured alongsidechange to demonstrate the degree of impact In many in-stances controls are not feasible as we cannot look at whatimpact would have occurred if a piece of research had not
taken place however indications of the picture before andafter impact are valuable and worth collecting for impactthat can be predicted
It is now possible to use data-mining tools to extractspecific data from narratives or unstructured data(Mugabushaka and Papazoglou 2012) This is being donefor collation of academic impact and outputs for exampleResearch Portfolio Online Reporting Tools which usesPubMed and text mining to cluster research projects andSTAR Metrics in the US which uses administrative recordsand research outputs and is also being implemented by theERC using data in the public domain (Mugabushaka andPapazoglou 2012) These techniques have the potential toprovide a transformation in data capture and impact assess-ment (Jones and Grant 2013) It is acknowledged in thearticle by Mugabushaka and Papazoglou (2012) that itwill take years to fully incorporate the impacts of ERCfunding For systems to be able to capture a full range ofsystems definitions and categories of impact need to bedetermined that can be incorporated into system develop-ment To adequately capture interactions taking placebetween researchers institutions and stakeholders theintroduction of tools to enable this would be veryvaluable If knowledge exchange events could be capturedfor example electronically as they occur or automatically ifflagged from an electronic calendar or a diary then far moreof these events could be recorded with relative easeCapturing knowledge exchange events would greatly assistthe linking of research with impact
The transition to routine capture of impact data notonly requires the development of tools and systems tohelp with implementation but also a cultural change todevelop practices currently undertaken by a few to beincorporated as standard behaviour among researchersand universities
7 Indicators evidence and impactwithin systems
What indicators evidence and impacts need to becaptured within developing systems There is a great dealof interest in collating terms for impact and indicators ofimpact Consortia for Advancing Standards in ResearchAdministration Information for example has puttogether a data dictionary with the aim of setting thestandards for terminology used to describe impact and in-dicators that can be incorporated into systems internation-ally and seems to be building a certain momentum inthis area A variety of types of indicators can becaptured within systems however it is important thatthese are universally understood Here we address typesof evidence that need to be captured to enable anoverview of impact to be developed In the majority ofcases a number of types of evidence will be required toprovide an overview of impact
28 T Penfield et al
71 Metrics
Metrics have commonly been used as a measure of impactfor example in terms of profit made number of jobsprovided number of trained personnel recruited numberof visitors to an exhibition number of items purchasedand so on Metrics in themselves cannot convey the fullimpact however they are often viewed as powerful andunequivocal forms of evidence If metrics are available asimpact evidence they should where possible also captureany baseline or control data Any information on thecontext of the data will be valuable to understanding thedegree to which impact has taken place
Perhaps SROI indicates the desire to be able to demon-strate the monetary value of investment and impact by someorganizations SROI aims to provide a valuation of thebroader social environmental and economic impactsproviding a metric that can be used for demonstration ofworth This is ametric that has been usedwithin the charitablesector (Berg andMansson 2011) and also features as evidencein the REF guidance for panel D (REF2014 2012) Moredetails on SROI can be found in lsquoA guide to Social Returnon Investmentrsquo produced by The SROI Network (2012)
Although metrics can provide evidence of quantitativechanges or impacts from our research they are unable toadequately provide evidence of the qualitative impacts thattake place and hence are not suitable for all of the impactwe will encounter The main risks associated with the useof standardized metrics are that
(1) The full impact will not be realized, as we focus on easily quantifiable indicators.
(2) We will focus attention towards generating results that enable boxes to be ticked rather than delivering real value for money and innovative research.
(3) They risk being monetized or converted into a lowest common denominator in an attempt to compare the cost of a new theatre against that of a hospital.
7.2 Narratives
Narratives can be used to describe impact; they enable a story to be told, allow the impact to be placed in context, and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are appropriately linked. Where narratives are used in conjunction with metrics, a more complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
By allowing impact to be placed in context, narratives answer the 'so what?' question that can result from quantitative data analyses. But is there a risk that the full picture may not be presented in order to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which changes of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group has dispersed.
The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture this would be very valuable.
7.4 Citations (outside of academia) and documentation
Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents, or reference to a piece of research being cited
Table 1. The advantages and disadvantages of the case study approach

Benefits:
(1) Uses quantitative and qualitative data
(2) Allows evidence to be contextualized and a story told
(3) Enables assessment in the absence of quantitative data
(4) Allows collation of unique datasets
(5) Preserves a distinctive account or disciplinary perspective

Considerations:
(1) Automated collation of evidence is difficult
(2) Incorporating perspective can make it difficult to assess critically
(3) Time-consuming to prepare and assess
(4) Difficult to compare like with like
(5) Rewards those who can write well and/or can afford to pay for external input
within the media. A collation of several indicators of impact may be enough to demonstrate convincingly that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered, alongside other evidence, as contributing to or an indicator of impact.
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
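The kind of aggregation performed over such traces can be sketched as a simple tally across an event log of online mentions; the records, sources, and field names below are invented for illustration rather than drawn from any particular altmetrics provider.

```python
from collections import Counter

# Hypothetical log of online mentions of one research output (all records invented).
mentions = [
    {"output": "doi:10.1234/example", "source": "twitter", "country": "UK"},
    {"output": "doi:10.1234/example", "source": "news", "country": "US"},
    {"output": "doi:10.1234/example", "source": "policy_document", "country": "UK"},
    {"output": "doi:10.1234/example", "source": "twitter", "country": "DE"},
]

by_source = Counter(m["source"] for m in mentions)    # where findings are discussed
by_country = Counter(m["country"] for m in mentions)  # to whom they are travelling

print(by_source.most_common())   # [('twitter', 2), ('news', 1), ('policy_document', 1)]
print(by_country.most_common())  # [('UK', 2), ('US', 1), ('DE', 1)]
```

Even this trivial tally shows how electronic traces can indicate both the channels and the audiences through which findings move.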
8 Conclusions and recommendations
The understanding of the term 'impact' varies considerably, and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study presents evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. Over the past year, a number of new posts have been created within universities for tasks such as writing impact case studies, and a number of companies are now offering this as a contract service. A key concern here is that we could find that universities that can afford to employ either consultants or impact 'administrators' will generate the best case studies.
Figure 2. Overview of the types of information that systems need to capture and link

The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required for capturing information are taken into account. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools that enable researchers to capture much of this would be valuable. However, it must be remembered that in the case of the UK REF, impact is considered only where it is based on research that has taken place within the institution submitting the case study. It is therefore in an institution's interest to have a process by which all the necessary information is captured, to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 illustrates the information that systems will need to capture and link:
(1) Research findings, including outputs (e.g. presentations and publications)
(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)
(4) Research developments (based on stakeholder input and discussions)
(5) Outcomes (e.g. commercial and cultural citations)
(6) Impacts (changes, e.g. behavioural and economic)
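A minimal sketch of how a system might hold these six linked categories for a single case study; the class and field names are our own illustration, not any published schema.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactRecord:
    """One case's linked evidence, one list per category above (hypothetical schema)."""
    research_outputs: list = field(default_factory=list)  # (1) findings and outputs
    interactions: list = field(default_factory=list)      # (2) stakeholder/public contacts
    feedback: list = field(default_factory=list)          # (3) testimonials, altmetrics
    developments: list = field(default_factory=list)      # (4) stakeholder-led developments
    outcomes: list = field(default_factory=list)          # (5) commercial/cultural outcomes
    impacts: list = field(default_factory=list)           # (6) behavioural/economic changes

record = ImpactRecord()
record.research_outputs.append("2013 journal article (invented example)")
record.interactions.append("Workshop with a local authority, May 2013 (invented)")
record.impacts.append("Change to local planning guidance (invented)")
```

Linking each later entry back to the outputs and interactions that produced it is what turns such records into the network of data that impact-tracking systems require.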
Attempting to evaluate impact in order to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10]
References
Australian Research Council (2008) ERA Indicator Principles (Pubd online) <httpwwwarcgovaupdfERA_Indicator_Principlespdf> accessed 6 Oct 2012.
Berg L O and Mansson C (2011) A White Paper on Charity Impact Measurement (Pubd online) <httpwwwcharitystarorgwp-contentuploads201105Return_on_donations_a_white_paper_on_charity_impact_measurementpdf> accessed 6 Aug 2012.
Bernstein A et al (2006) A Framework to Measure the Impact of Investments in Health Research (Pubd online) <httpwwwoecdorgscienceinnovationinsciencetechnologyandindustry37450246pdf> accessed 26 Oct 2012.
Bornmann L and Marx W (2013) 'How Good is Research Really?', European Molecular Biology Organization (EMBO) Reports, 14: 226–30.
Buxton M, Hanney S and Jones T (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review', Bulletin of the World Health Organization, 82: 733–9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research (Pubd online) <httpwwwcahs-acsscawp-contentuploads201109ROI_FullReportpdf> accessed 26 Oct 2012.
Cooke J and Nadim T (2011) Measuring Impact Under CERIF at Goldsmiths (Pubd online) <httpmicecerchkclacukwp-uploads201107MICE_report_Goldsmiths_finalpdf> accessed 26 Oct 2012.
Corbyn Z (2009) 'Anti-Impact Campaign's "Poster Boy" Sticks up for the Ivory Tower', Times Higher Education (Pubd online) <httpwwwtimeshighereducationcoukstoryaspstoryCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies H, Nutley S and Walter I (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues (Pubd online) <httpwwwodiorgukrapidEventsESRCdocsbackground_paperpdf> accessed 26 Oct 2012.
Decker R et al (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty (Pubd online) <httpwwwiscintelligencecomarchivos_subidosusfacultyburden_5pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012–2013 (Pubd online) <httpwwwbisgovukAbout> accessed 24 Oct 2012.
Donovan C (2008) 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental and Cultural Returns of Publicly Funded Research'. In: Coryn C L S and Scriven M (eds) Reforming the Evaluation of Research, New Directions for Evaluation, 118, pp. 47–60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support, but a State-of-the-Art Approach to Measurement is Needed', LSE Blog (Pubd online) <httpblogslseacukimpactofsocialsciencestagclaire-donovan> accessed 26 Oct 2012.
Donovan C and Hanney S (2011) 'The "Payback Framework" Explained', Research Evaluation, 20: 181–3.
Duryea M, Hochman M and Parfitt A (2007) 'Measuring the Impact of Research', Research Global, 27: 8–9 (Pubd online) <httpwwwatneduaudocsResearch20Global20-20Measuring20the20impact20of20researchpdf> accessed 26 Oct 2012.
Ebrahim A and Rangan K (2010) 'The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance', Working Paper, Harvard Business School (Pubd online) <httpwwwhbseduresearchpdf10-099pdf> accessed 26 Oct 2012.
European Science Foundation (2009) Evaluation in National Research Funding Agencies: Approaches, Experiences and Case Studies (Pubd online) <httpwwwesforgindexphpeID=tx_ccdamdl_file&p[file]=25668&p[dl]=1&p[pid]=6767&p[site]=European20Science20Foundation&p[t]=1351858982&hash=93e987c5832f10aeee3911bac23b4e0f&l=en> accessed 26 Oct 2012.
Jones M M and Grant J (2013) 'Methodologies for Assessing and Evidencing Research Impact', RAND Europe. In: Dean et al (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Grant J (2006) Measuring the Benefits from Research, RAND Europe (Pubd online) <httpwwwrandorgpubsresearch_briefs2007RAND_RB9202pdf> accessed 26 Oct 2012.
Grant J et al (2009) Capturing Research Impacts: A Review of International Practice (Pubd online) <httpwwwrandorgpubsdocumented_briefings2010RAND_DB578pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004–2014'. London: HM Treasury.
Hanney S and Gonzalez-Block M A (2011) 'Yes, Research can Inform Health Policy, But can we Bridge the "Do-Knowing it's been Done" Gap?', Health Research Policy and Systems, 9: 23.
Hughes A and Martin B (2012) 'Enhancing Impact: The Value of Public Sector R&D', Summary report, Council for Industry and Higher Education, UK Innovation Research Centre (Pubd online) <httpukircacukobjectreport8025docCIHE_0612ImpactReport_summarypdf> accessed 26 Oct 2012.
Husbands-Fealing K (2013) 'Assessing Impacts of Higher Education Systems', University of Minnesota. In: Dean et al (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Johnston R (1995) 'Research Impact Quantification', Scientometrics, 34: 415–26.
Jump P (2011) 'HEFCE Reduces Points of Impact in REF', Times Higher Education (Pubd online) <httpwwwtimeshighereducationcoukstoryaspstoryCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly U and McNicoll I (2011) Through a Glass Darkly: Measuring the Social Value of Universities, National Co-ordinating Centre for Public Engagement (Pubd online) <httpwwwpublicengagementacuksitesdefaultfiles8009620NCCPE20Social20Value20Reportpdf> accessed 26 Oct 2012.
Kuruvilla S et al (2006) 'Describing the Impact of Health Research: A Research Impact Framework', BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) Maximising the Impacts of Your Research: A Handbook for Social Scientists (Pubd online) <httpwww2lseacukgovernmentresearchresgroupsLSEPublicPolicyDocsLSE_Impact_Handbook_April_2011pdf> accessed 26 Oct 2012.
Lane J (2010) 'Let's Make Science Metrics More Scientific', Nature, 464: 488–9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog (Pubd online) <httpmicecerchkclacuk> accessed 26 Oct 2012.
Mugabushaka A and Papazoglou T (2012) 'Information Systems of Research Funding Agencies in the "Era of the Big Data": The Case Study of the Research Information System of the European Research Council'. In: Jeffery K and Dvorak J (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (6–9 June 2012), pp. 103–12. Prague, Czech Republic.
Nason E et al (2008) Health Research – Making an Impact: The Economic and Social Benefits of HRB-funded Research, RAND Europe. Dublin: Health Research Board.
Reeves M (2002) Measuring the Economic and Social Impact of the Arts: A Review, Arts Council England (Pubd online) <httpwwwartscouncilorgukmediauploadsdocumentspublications340pdf> accessed 26 Oct 2012.
REF 2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels (Pubd online) <httpwwwrefacukmediarefcontentpubresearchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanelsre01_10pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions (Pubd online) <httpwwwrefacukmediarefcontentpubassessmentframeworkandguidanceonsubmissions02_11pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact (Pubd online) <httpwwwrefacukmediarefcontentpubassessmentframeworkandguidanceonsubmissionsGOS20including20addendumpdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods (Pubd online) <httpwwwrefacukmediarefcontentpubpanelcriteriaandworkingmethods01_12pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation (Pubd online) <httpwwwrussellgroupacukuploadsREF-consultation-response-FINAL-Dec09pdf> accessed 26 Oct 2012.
Scoble R et al (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot (Pubd online) <httpwwwlibsearchcomvisit1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research', Journal of Higher Education Policy and Management, 32: 499–511.
SIAMPI Website (2011) (Pubd online) <httpwwwsiampieuPagesSIA12625bGFuZz1FTkchtml> accessed 26 Oct 2012.
Spaapen J et al (2011) SIAMPI Final Report (Pubd online) <httpwwwsiampieuContentSIAMPISIAMPI_Final20reportpdf> accessed 26 Oct 2012.
Spaapen J and Drooge L (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment (Pubd online) <httpwwwthesroinetworkorgpublicationsdoc_details241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals (Pubd online) <httpwwwucuorgukmediapdfnqucu_REFstatement_finalsignaturespdf> accessed 26 Oct 2012.
Vonortas N and Link A (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead A (1929) 'Universities and their Function'. In: The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding S et al (2007) Policy and Practice Impacts of Research Funded by the Economic and Social Research Council, RAND Europe (Pubd online) <httpwwwesrcacuk_imagesCase_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563pdf> accessed 26 Oct 2012.
REF 2014 (2010) Research Excellence Framework Impact PilotExercise Findings of the Expert Panels (Pubd online)lthttpwwwrefacukmediarefcontentpubresearchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanelsre01_10pdfgt accessed 26 Oct 2012
mdashmdash (2011a) Assessment Framework and Guidance onSubmissions (Pubd online) lthttpwwwrefacukmediarefcontentpubassessmentframeworkandguidanceonsubmissions02_11pdfgt accessed 26 Oct 2012
mdashmdash (2011b) Decisions on Assessing Research Impact (Pubdonline) lthttpwwwrefacukmediarefcontentpubassessmentframeworkandguidanceonsubmissionsGOS20including20addendumpdfgt accessed 19 Dec 2012
mdashmdash (2012) Panel Criteria and Working Methods (Pubd online)lthttpwwwrefacukmediarefcontentpubpanelcriteriaandworkingmethods01_12pdfgt accessed 26 Oct 2012
Russell Group (2009) Response to REF Consultation (Pubdonline) lthttpwwwrussellgroupacukuploadsREF-consultation-response-FINAL-Dec09pdfgt accessed 26 Oct 2012
Scoble R et al (2009) Research Impact Evaluation a WiderContext Findings from a Research Impact Pilot (Pubdonline) lthttpwwwlibsearchcomvisit1953719gt accessed26 Oct 2012
mdashmdash (2010) lsquoInstitutional Strategies for Capturing Socio-Economic Impact of Researchrsquo Journal of Higher EducationPolicy and Management 32 499ndash511
SIAMPI Website (2011) (Pubd online) lthttpwwwsiampieuPagesSIA12625bGFuZz1FTkchtmlgt accessed 26 Oct2012
Spaapen J et al (2011) SIAMPI Final Report (Pubd online)lthttpwwwsiampieuContentSIAMPISIAMPI_Final20reportpdfgt accessed 26 Oct 2012
Spaapen J and Drooge L (2011) lsquoIntroducing lsquoProductiveInteractionsrsquo in Social Impact Assessmentrsquo ResearchEvaluation 20 211ndash18
The Allen Consulting Group (2005) Measuring the Impact ofPublicly Funded Research Canberra Department ofEducation Science and Training
The SROI Network (2012) A Guide to Social Return onInvestment (Pubd online) lthttpwwwthesroinetworkorgpublicationsdoc_details241-a-guide-to-social-return-on-investment-2012gt accessed 26 Oct 2012
University and College Union (2011) Statement on the ResearchExcellence Framework Proposals (Pubd online) lthttpwwwucuorgukmediapdfnqucu_REFstatement_finalsignaturespdfgt accessed 26 Oct 2012
Vonortas N and Link A (2013) Handbook on the Theory andPractice of Program Evaluation Cheltenham UK EdwardElgar Publishing Limited
Whitehead A (1929) lsquoUniversities and their functionrsquo TheAims of Education and other Essays New York MacmillanCompany
Wooding S et al (2007) lsquoRAND Europersquo Policy and PracticeImpacts of Research Funded by the Economic Social ResearchCouncil (Pubd online) lthttpwwwesrcacuk_imagesCase_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563pdfgt accessed 26 Oct 2012
32 T Penfield et al
within the media. A collation of several indicators of impact may be enough to convince us that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas, and may be considered alongside other evidence as contributing to, or an indicator of, impact.
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where, and to whom, research findings are going.
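To make this concrete, the kind of collation an altmetrics service performs can be sketched as a simple tally over event records. This is a minimal illustration, not any particular provider's API; the field names and event data are hypothetical.

```python
from collections import Counter
from datetime import date

# Hypothetical altmetric-style event records for one research output.
# A real service would harvest these from social media, news, and
# policy-document trackers; here they are invented for illustration.
events = [
    {"source": "twitter", "country": "UK", "date": date(2012, 10, 1)},
    {"source": "twitter", "country": "US", "date": date(2012, 10, 2)},
    {"source": "news", "country": "UK", "date": date(2012, 10, 3)},
    {"source": "policy_document", "country": "NL", "date": date(2012, 10, 9)},
]

def summarise(events):
    """Collate where, and to whom, research findings are travelling."""
    by_source = Counter(e["source"] for e in events)
    by_country = Counter(e["country"] for e in events)
    return {"by_source": dict(by_source), "by_country": dict(by_country)}

summary = summarise(events)
print(summary["by_source"])   # {'twitter': 2, 'news': 1, 'policy_document': 1}
print(summary["by_country"])  # {'UK': 2, 'US': 1, 'NL': 1}
```

Counts like these are indicators of attention rather than of impact itself, which is why the text treats them as one strand of evidence among several.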
8 Conclusions and recommendations
The understanding of the term impact varies considerably and, as such, the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. Over the past year, a number of new posts have been created within universities for tasks such as writing impact case studies, and a number of companies are now offering this as a contract service. A key concern here is that universities which can afford to employ either consultants or impact 'administrators' will generate the best case studies.
The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders, and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required for the capture of information is considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools enabling researchers to capture much of this would be valuable. However, it must be remembered that in the case of the UK REF, impact is considered only where it is based on research that has taken place within the institution submitting the case study. It is therefore in an institution's interest to have a process by which all the necessary information is captured, to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link:

Figure 2. Overview of the types of information that systems need to capture and link.
(1) Research findings, including outputs (e.g. presentations and publications)
(2) Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
(3) Feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics)
(4) Research developments (based on stakeholder input and discussions)
(5) Outcomes (e.g. commercial and cultural citations)
(6) Impacts (changes, e.g. behavioural and economic)
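The "network of data" the text calls for can be sketched as linked records of the six types listed above. This is a minimal, hypothetical data-model sketch (the record names and example story are illustrative, not drawn from the REF or any existing system), showing how an impact could be traced back to its originating research even after the researcher has left the institution.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """One node in the impact network: an output, interaction, feedback item,
    development, outcome, or impact."""
    id: str
    kind: str  # e.g. 'output', 'interaction', 'feedback', 'outcome', 'impact'
    summary: str
    links: list = field(default_factory=list)  # ids of related records

    def link(self, other: "Record") -> None:
        # Links are bidirectional so the story can be walked in either direction.
        self.links.append(other.id)
        other.links.append(self.id)

# A hypothetical chain from research output to impact.
output = Record("r1", "output", "Journal article on flood modelling")
workshop = Record("r2", "interaction", "Workshop with agency stakeholders")
testimonial = Record("r3", "feedback", "Testimonial citing changed guidance")
impact = Record("r4", "impact", "Revised national flood-risk guidance")

workshop.link(output)
testimonial.link(workshop)
impact.link(testimonial)

def trace(records, start_id):
    """Walk the link network from an impact back towards the research."""
    index = {r.id: r for r in records}
    seen, stack, path = set(), [start_id], []
    while stack:
        rid = stack.pop()
        if rid in seen:
            continue
        seen.add(rid)
        path.append(index[rid].kind)
        stack.extend(index[rid].links)
    return path

print(trace([output, workshop, testimonial, impact], "r4"))
# ['impact', 'feedback', 'interaction', 'output']
```

The point of the sketch is the linking, not the storage: a system that records each item in isolation cannot reassemble this chain, whereas one that captures the interactions as they occur can.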
Attempting to evaluate impact in order to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
Funding
This work was supported by Jisc [DIINN10].
References
Australian Research Council (2008) ERA Indicator Principles. (Pubd online) <http://www.arc.gov.au/pdf/ERA_Indicator_Principles.pdf> accessed 6 Oct 2012.
Berg, L. O. and Mansson, C. (2011) A White Paper on Charity Impact Measurement. (Pubd online) <http://www.charitystar.org/wp-content/uploads/2011/05/Return_on_donations_a_white_paper_on_charity_impact_measurement.pdf> accessed 6 Aug 2012.
Bernstein, A. et al. (2006) A Framework to Measure the Impact of Investments in Health Research. (Pubd online) <http://www.oecd.org/science/innovationinsciencetechnologyandindustry/37450246.pdf> accessed 26 Oct 2012.
Bornmann, L. and Marx, W. (2013) 'How Good is Research Really?'. European Molecular Biology Organization (EMBO) Reports, 14: 226–30.
Buxton, M., Hanney, S. and Jones, T. (2004) 'Estimating the Economic Value to Societies of the Impact of Health Research: A Critical Review'. Bulletin of the World Health Organization, 82: 733–9.
Canadian Academy of Health Sciences Panel on Return on Investment in Health Research (2009) Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. (Pubd online) <http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf> accessed 26 Oct 2012.
Cooke, J. and Nadim, T. (2011) Measuring Impact Under CERIF at Goldsmiths. (Pubd online) <http://mice.cerch.kcl.ac.uk/wp-uploads/2011/07/MICE_report_Goldsmiths_final.pdf> accessed 26 Oct 2012.
Corbyn, Z. (2009) 'Anti-Impact Campaign's "Poster Boy" Sticks up for the Ivory Tower'. Times Higher Education. (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=409614&sectioncode=26> accessed 26 Oct 2012.
Davies, H., Nutley, S. and Walter, I. (2005) Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues. (Pubd online) <http://www.odi.org.uk/rapid/Events/ESRC/docs/background_paper.pdf> accessed 26 Oct 2012.
Decker, R. et al. (2007) A Profile of Federal-Grant Administrative Burden Among Federal Demonstration Partnership Faculty. (Pubd online) <http://www.iscintelligence.com/archivos_subidos/usfacultyburden_5.pdf> accessed 26 Oct 2012.
Department for Business, Innovation and Skills (2012) Guide to BIS 2012–2013. (Pubd online) <http://www.bis.gov.uk/About> accessed 24 Oct 2012.
Donovan, C. (2008) 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental and Cultural Returns of Publicly Funded Research'. In Coryn, C. L. S. and Scriven, M. (eds) Reforming the Evaluation of Research, New Directions for Evaluation, 118, pp. 47–60.
—— (2011) 'Impact is a Strong Weapon for Making an Evidence-Based Case Study for Enhanced Research Support, but a State-of-the-Art Approach to Measurement is Needed'. LSE Blog. (Pubd online) <http://blogs.lse.ac.uk/impactofsocialsciences/tag/claire-donovan> accessed 26 Oct 2012.
Donovan, C. and Hanney, S. (2011) 'The "Payback Framework" Explained'. Research Evaluation, 20: 181–3.
Duryea, M., Hochman, M. and Parfitt, A. (2007) 'Measuring the Impact of Research'. Research Global, 27: 8–9. (Pubd online) <http://www.atn.edu.au/docs/Research%20Global%20-%20Measuring%20the%20impact%20of%20research.pdf> accessed 26 Oct 2012.
Ebrahim, A. and Rangan, K. (2010) 'The Limits of Nonprofit Impact: A Contingency Framework for Measuring Social Performance'. Working Paper, Harvard Business School. (Pubd online) <http://www.hbs.edu/research/pdf/10-099.pdf> accessed 26 Oct 2012.
European Science Foundation (2009) Evaluation in National Research Funding Agencies: Approaches, Experiences and Case Studies. (Pubd online) <http://www.esf.org/index.php?eID=tx_ccdamdl_file&p[file]=25668&p[dl]=1&p[pid]=6767&p[site]=European%20Science%20Foundation&p[t]=1351858982&hash=93e987c5832f10aeee3911bac23b4e0f&l=en> accessed 26 Oct 2012.
Grant, J. (2006) Measuring the Benefits from Research. RAND Europe. (Pubd online) <http://www.rand.org/pubs/research_briefs/2007/RAND_RB9202.pdf> accessed 26 Oct 2012.
Grant, J. et al. (2009) Capturing Research Impacts: A Review of International Practice. (Pubd online) <http://www.rand.org/pubs/documented_briefings/2010/RAND_DB578.pdf> accessed 26 Oct 2012.
HM Treasury, Department for Education and Skills, Department of Trade and Industry (2004) 'Science and Innovation Framework 2004–2014'. London: HM Treasury.
Hanney, S. and Gonzalez-Block, M. A. (2011) 'Yes, Research can Inform Health Policy; But can we Bridge the "Do-Knowing it's been Done" Gap?'. Health Research Policy and Systems, 9: 23.
Hughes, A. and Martin, B. (2012) 'Enhancing Impact: The Value of Public Sector R&D'. Council for Industry and Higher Education / UK Innovation Research Centre, summary report. (Pubd online) <http://ukirc.ac.uk/object/report/8025/doc/CIHE_0612ImpactReport_summary.pdf> accessed 26 Oct 2012.
Husbands-Fealing, K. (2013) 'Assessing Impacts of Higher Education Systems'. University of Minnesota. In Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Johnston, R. (1995) 'Research Impact Quantification'. Scientometrics, 34: 415–26.
Jones, M. M. and Grant, J. (2013) 'Methodologies for Assessing and Evidencing Research Impact'. RAND Europe. In Dean et al. (eds) 7 Essays on Impact, DESCRIBE Project Report for JISC, University of Exeter.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF'. Times Higher Education. (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) Through a Glass Darkly: Measuring the Social Value of Universities. National Co-ordinating Centre for Public Engagement. (Pubd online) <http://www.publicengagement.ac.uk/sites/default/files/80096%20NCCPE%20Social%20Value%20Report.pdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework'. BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) Maximising the Impacts of Your Research: A Handbook for Social Scientists. (Pubd online) <http://www2.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific'. Nature, 464: 488–9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog. (Pubd online) <http://mice.cerch.kcl.ac.uk> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information Systems of Research Funding Agencies in the "Era of the Big Data": The Case Study of the Research Information System of the European Research Council'. In Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (6–9 June 2012), pp. 103–12. Prague, Czech Republic.
Nason, E. et al. (2008) Health Research—Making an Impact: The Economic and Social Benefits of HRB-funded Research. RAND Europe. Dublin: Health Research Board.
Reeves, M. (2002) Measuring the Economic and Social Impact of the Arts: A Review. Arts Council England. (Pubd online) <http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf> accessed 26 Oct 2012.
REF 2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/researchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanels/re01_10.pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods. (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation. (Pubd online) <http://www.russellgroup.ac.uk/uploads/REF-consultation-response-FINAL-Dec09.pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot. (Pubd online) <http://www.libsearch.com/visit/1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research'. Journal of Higher Education Policy and Management, 32: 499–511.
SIAMPI Website (2011). (Pubd online) <http://www.siampi.eu/Pages/SIA/12625.bGFuZz1FTkc.html> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report. (Pubd online) <http://www.siampi.eu/Content/SIAMPI/SIAMPI_Final%20report.pdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment'. Research Evaluation, 20: 211–18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment. (Pubd online) <http://www.thesroinetwork.org/publications/doc_details/241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals. (Pubd online) <http://www.ucu.org.uk/media/pdf/n/q/ucu_REFstatement_finalsignatures.pdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their Function'. In The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) Policy and Practice Impacts of Research Funded by the Economic and Social Research Council. RAND Europe. (Pubd online) <http://www.esrc.ac.uk/_images/Case_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563.pdf> accessed 26 Oct 2012.
32 T Penfield et al
the time and capability required for capture of informationis considered Capturing data interactions and indicatorsas they emerge increases the chance of capturing allrelevant information and tools to enable researchers tocapture much of this would be valuable However itmust be remembered that in the case of the UK REFimpact is only considered that is based on research thathas taken place within the institution submitting the casestudy It is therefore in an institutionrsquos interest to have aprocess by which all the necessary information is capturedto enable a story to be developed in the absence of a re-searcher who may have left the employment of the institu-tion Figure 2 demonstrates the information that systemswill need to capture and link
(1) Research findings including outputs (eg presenta-tions and publications)
(2) Communications and interactions with stakeholdersand the wider public (emails visits workshopsmedia publicity etc)
(3) Feedback from stakeholders and communicationsummaries (eg testimonials and altmetrics)
(4) Research developments (based on stakeholder inputand discussions)
(5) Outcomes (eg commercial and cultural citations)(6) Impacts (changes eg behavioural and economic)
Attempting to evaluate impact to justify expenditureshowcase our work and inform future funding decisionswill only prove to be a valuable use of time and resources ifwe can take measures to ensure that assessment attemptswill not ultimately have a negative influence on the impactof our research There are areas of basic research where theimpacts are so far removed from the research or are im-practical to demonstrate in these cases it might beprudent to accept the limitations of impact assessmentand provide the potential for exclusion in appropriatecircumstances
Funding
This work was supported by Jisc [DIINN10]
References
Australian Research Council (2008) ERA Indicator Principles(Pubd online) lthttpwwwarcgovaupdfERA_Indicator_Principlespdfgt accessed 6 Oct 2012
Berg LO and Mansson C (2011) A White Paper on CharityImpact Measurement (Pubd online) lthttpwwwcharitystarorgwp-contentuploads201105Return_on_donations_a_white_paper_on_charity_impact_measurementpdfgt accessed6 Aug 2012
Bernstein A et al (2006) A Framework to Measure the Impactof Investments in Health Research (Pubd online) lthttpwwwoecdorgscienceinnovationinsciencetechnologyandindustry37450246pdfgt accessed 26 Oct 2012
Bornmann L and Marx W (2013) lsquoHow Good is ResearchReallyrsquo European Molecular Biology Organization (EMBO)Reports 14 226ndash30
Buxton M Hanney S and Jones T (2004) lsquoEstimating theEconomic Value to Societies of the Impact of HealthResearch A Critical Reviewrsquo Bulletin of the World HealthOrganization 82 733ndash9
Canadian Academy of Health Sciences Panel on Return onInvestment in Health Research (2009) Making an ImpactA Preferred Framework and Indicators to Measure Returnson Investment in Health Research (Pubd online) lthttpwwwcahs-acsscawp-contentuploads201109ROI_FullReportpdfgt accessed 26 Oct 2012
Cooke J and Nadim T (2011) Measuring Impact UnderCERIF at Goldsmiths (Pubd online) lthttpmicecerchkclacukwp-uploads201107MICE_report_Goldsmiths_finalpdfgt accessed 26 Oct 2012
Corbyn Z (2009) Anti-Impact Campaignrsquos lsquoPoster Boyrsquo Sticksup for the Ivory Tower Times Higher Education (Pubd online)lthttpwwwtimeshighereducationcoukstoryaspstoryCode=409614ampsectioncode=26gt accessed 26 Oct 2012
Davies H Nutley S and Walter I (2005) Assessing theImpact of Social Science Research ConceptualMethodological and Practical Issues (Pubd online) lthttpwwwodiorgukrapidEventsESRCdocsbackground_paperpdfgt accessed 26 Oct 2012
Decker R et al (2007) A Profile of Federal-GrantAdministrative Burden Among Federal DemonstrationPartnership Faculty (Pubd online) lthttpwwwiscintelligencecomarchivos_subidosusfacultyburden_5pdfgtaccessed 26 Oct 2012
Department for Business Innovation and Skills (2012) Guide toBIS 2012-2013 (Pubd online) lthttpwwwbisgovukAboutgt accessed 24 Oct 2012
Donovan C (2008) lsquoThe Australian Research QualityFramework A live experiment in capturing the socialeconomic environmental and cultural returns of publiclyfunded researchrsquo In Coryn C L S and Scriven M (eds)Reforming the Evaluation of Research New Directions forEvaluation 118 pp 47ndash60
mdashmdash (2011) lsquoImpact is a Strong Weapon for Making anEvidence-Based Case Study for Enhanced Research Supportbut a State-of-the-Art Approach to Measurement is NeededrsquoLSE Blog (Pubd online) lthttpblogslseacukimpactofso-cialsciencestagclaire-donovangt accessed 26 Oct 2012
Donovan C and Hanney S (2011) lsquoThe ldquoPaybackFrameworkrdquo explainedrsquo Research Evaluation 20 181ndash3
Duryea M Hochman M and Parfitt A (2007) Measuring theImpact of Research Research Global 27 8-9 (Pubd online)lthttpwwwatneduaudocsResearch20Global20-20Measuring20the20impact20of20researchpdfgtaccessed 26 Oct 2012
Ebrahim A and Rangan K (2010) lsquoThe Limits of NonprofitImpact A Contingency Framework for Measuring SocialPerformancersquo Working Paper Harvard Business School(Pubd online) lthttpwwwhbseduresearchpdf10-099pdfgt accessed 26 Oct 2012
European Science Foundation (2009) Evaluation in NationalResearch Funding Agencies Approaches Experiences andCase Studies (Pubd online) lthttpwwwesforgindexphpeID=tx_ccdamdl_fileampp[file]=25668ampp[dl]=1ampp[pid]=6767ampp[site]=European20Science20Foundationampp[t]=1351858982amphash=93e987c5832f10aeee3911bac23b4e0fampl=engt accessed 26 Oct 2012
Jones MM and Grant J (2013) lsquoMethodologies for Assessingand Evidencing Research Impact RAND Europersquo In Dean
Assessment evaluations and definitions of research impact 31
et al (eds) 7 Essays on Impact DESCRIBE Project Report forJISC University of Exeter
Grant J (2006) Measuring the Benefits from Research RANDEurope (Pubd online) lthttpwwwrandorgpubsresearch_briefs2007RAND_RB9202pdfgt accessed 26 Oct 2012
Grant J et al (2009) Capturing Research Impacts A Review ofInternational Practice (Pubd online) lthttpwwwrandorgpubsdocumented_briefings2010RAND_DB578pdfgtaccessed 26 Oct 2012
HM Treasury Department for Education and SkillsDepartment of Trade and Industry (2004) lsquoScience andInnovation Framework 2004ndash2014rsquo London HM Treasury
Hanney S and Gonzalez-Block M A (2011) lsquoYes Researchcan Inform Health Policy But can we Bridge the lsquoDo-Knowing itrsquos been Donersquo Gaprsquo Health Research Policy andSystems 9 23
Hughes A and Martin B (2012) lsquoCouncil for Industry andHigher Education UK Innovation Research CentreEnhancing Impact The Value of Public Sector RampDrsquoSummary report (Pubd online) lthttpukircacukobjectreport8025docCIHE_0612ImpactReport_summarypdfgtaccessed 26 Oct 2012
Husbands-Fealing K (2013) lsquoAssessing impacts of higher edu-cation systemsrsquo University of Minnesota in Dean et al (eds)7 Essays on Impact DESCRIBE project report for JISCUniversity of Exeter
Johnston R (1995) lsquoResearch Impact QuantificationrsquoScientometrics 34 415ndash26
Jump P (2011) lsquoHEFCE Reduces Points of Impact in REFrsquoTimes Higher Education (Pubd online) lthttpwwwtimeshighereducationcoukstoryaspstoryCode=415340ampsectioncode=26gt accessed 26 Oct 2012
Kelly U and McNicoll I (2011) lsquoNational Co-ordinatingCentre for Public Engagementrsquo Through a Glass DarklyMeasuring the Social Value of Universities (Pubd online)lthttpwwwpublicengagementacuksitesdefaultfiles8009620NCCPE20Social20Value20Reportpdfgt accessed26 Oct 2012
Kuruvilla S et al (2006) lsquoDescribing the Impact of HealthResearch A Research Impact Frameworkrsquo BMC HealthServices Research 6 134
LSEPublic PolicyGroup (2011) lsquoMaximising the Impacts ofYourResearch A Handbook for Social Scientistsrsquo (Pubd online)lthttpwww2lseacukgovernmentresearchresgroupsLSEPublicPolicyDocsLSE_Impact_Handbook_April_2011pdfgtaccessed 26 Oct 2012
Lane J (2010) lsquoLetrsquos Make Science Metrics More ScientificrsquoNature 464 488ndash9
MICE Project (2011) Measuring Impact Under CERIF (MICE)Project Blog (Pubd online) lthttpmicecerchkclacukgtaccessed 26 Oct 2012
Mugabushaka A and Papazoglou T (2012) lsquoInformationsystems of research funding agencies in the ldquoera of the BigDatardquo The case study of the Research Information System ofthe European Research Councilrsquo In Jeffery K andDvorak J (eds) E-Infrastructures for Research andInnovation Linking Information Systems to ImproveScientific Knowledge Proceedings of the 11th InternationalConference on Current Research Information Systems (June6ndash9 2012) pp 103ndash12 Prague Czech Republic
Nason E et al (2008) lsquoRAND Europersquo Health ResearchmdashMaking an Impact The Economic and Social Benefits ofHRB-funded Research Dublin Health Research Board
Reeves M (2002) lsquoArts Council Englandrsquo Measuring theEconomic and Social Impact of the Arts A Review (Pubdonline) lthttpwwwartscouncilorgukmediauploadsdocumentspublications340pdfgt accessed 26 Oct 2012
REF 2014 (2010) Research Excellence Framework Impact PilotExercise Findings of the Expert Panels (Pubd online)lthttpwwwrefacukmediarefcontentpubresearchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanelsre01_10pdfgt accessed 26 Oct 2012
mdashmdash (2011a) Assessment Framework and Guidance onSubmissions (Pubd online) lthttpwwwrefacukmediarefcontentpubassessmentframeworkandguidanceonsubmissions02_11pdfgt accessed 26 Oct 2012
mdashmdash (2011b) Decisions on Assessing Research Impact (Pubdonline) lthttpwwwrefacukmediarefcontentpubassessmentframeworkandguidanceonsubmissionsGOS20including20addendumpdfgt accessed 19 Dec 2012
mdashmdash (2012) Panel Criteria and Working Methods (Pubd online)lthttpwwwrefacukmediarefcontentpubpanelcriteriaandworkingmethods01_12pdfgt accessed 26 Oct 2012
Russell Group (2009) Response to REF Consultation (Pubdonline) lthttpwwwrussellgroupacukuploadsREF-consultation-response-FINAL-Dec09pdfgt accessed 26 Oct 2012
Scoble R et al (2009) Research Impact Evaluation a WiderContext Findings from a Research Impact Pilot (Pubdonline) lthttpwwwlibsearchcomvisit1953719gt accessed26 Oct 2012
mdashmdash (2010) lsquoInstitutional Strategies for Capturing Socio-Economic Impact of Researchrsquo Journal of Higher EducationPolicy and Management 32 499ndash511
SIAMPI Website (2011) (Pubd online) lthttpwwwsiampieuPagesSIA12625bGFuZz1FTkchtmlgt accessed 26 Oct2012
Spaapen J et al (2011) SIAMPI Final Report (Pubd online)lthttpwwwsiampieuContentSIAMPISIAMPI_Final20reportpdfgt accessed 26 Oct 2012
Spaapen J and Drooge L (2011) lsquoIntroducing lsquoProductiveInteractionsrsquo in Social Impact Assessmentrsquo ResearchEvaluation 20 211ndash18
The Allen Consulting Group (2005) Measuring the Impact ofPublicly Funded Research Canberra Department ofEducation Science and Training
The SROI Network (2012) A Guide to Social Return onInvestment (Pubd online) lthttpwwwthesroinetworkorgpublicationsdoc_details241-a-guide-to-social-return-on-investment-2012gt accessed 26 Oct 2012
University and College Union (2011) Statement on the ResearchExcellence Framework Proposals (Pubd online) lthttpwwwucuorgukmediapdfnqucu_REFstatement_finalsignaturespdfgt accessed 26 Oct 2012
Vonortas N and Link A (2013) Handbook on the Theory andPractice of Program Evaluation Cheltenham UK EdwardElgar Publishing Limited
Whitehead A (1929) lsquoUniversities and their functionrsquo TheAims of Education and other Essays New York MacmillanCompany
Wooding S et al (2007) lsquoRAND Europersquo Policy and PracticeImpacts of Research Funded by the Economic Social ResearchCouncil (Pubd online) lthttpwwwesrcacuk_imagesCase_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563pdfgt accessed 26 Oct 2012
32 T Penfield et al
et al (eds) 7 Essays on Impact DESCRIBE Project Report forJISC University of Exeter
Grant J (2006) Measuring the Benefits from Research RANDEurope (Pubd online) lthttpwwwrandorgpubsresearch_briefs2007RAND_RB9202pdfgt accessed 26 Oct 2012
Grant J et al (2009) Capturing Research Impacts A Review ofInternational Practice (Pubd online) lthttpwwwrandorgpubsdocumented_briefings2010RAND_DB578pdfgtaccessed 26 Oct 2012
HM Treasury Department for Education and SkillsDepartment of Trade and Industry (2004) lsquoScience andInnovation Framework 2004ndash2014rsquo London HM Treasury
Hanney S and Gonzalez-Block M A (2011) lsquoYes Researchcan Inform Health Policy But can we Bridge the lsquoDo-Knowing itrsquos been Donersquo Gaprsquo Health Research Policy andSystems 9 23
Hughes A and Martin B (2012) lsquoCouncil for Industry andHigher Education UK Innovation Research CentreEnhancing Impact The Value of Public Sector RampDrsquoSummary report (Pubd online) lthttpukircacukobjectreport8025docCIHE_0612ImpactReport_summarypdfgtaccessed 26 Oct 2012
Husbands-Fealing K (2013) lsquoAssessing impacts of higher edu-cation systemsrsquo University of Minnesota in Dean et al (eds)7 Essays on Impact DESCRIBE project report for JISCUniversity of Exeter
Johnston, R. (1995) 'Research Impact Quantification'. Scientometrics, 34: 415–26.
Jump, P. (2011) 'HEFCE Reduces Points of Impact in REF'. Times Higher Education (Pubd online) <http://www.timeshighereducation.co.uk/story.asp?storyCode=415340&sectioncode=26> accessed 26 Oct 2012.
Kelly, U. and McNicoll, I. (2011) National Co-ordinating Centre for Public Engagement, Through a Glass, Darkly: Measuring the Social Value of Universities (Pubd online) <http://www.publicengagement.ac.uk/sites/default/files/80096%20NCCPE%20Social%20Value%20Report.pdf> accessed 26 Oct 2012.
Kuruvilla, S. et al. (2006) 'Describing the Impact of Health Research: A Research Impact Framework'. BMC Health Services Research, 6: 134.
LSE Public Policy Group (2011) 'Maximising the Impacts of Your Research: A Handbook for Social Scientists' (Pubd online) <http://www2.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf> accessed 26 Oct 2012.
Lane, J. (2010) 'Let's Make Science Metrics More Scientific'. Nature, 464: 488–9.
MICE Project (2011) Measuring Impact Under CERIF (MICE) Project Blog (Pubd online) <http://mice.cerch.kcl.ac.uk> accessed 26 Oct 2012.
Mugabushaka, A. and Papazoglou, T. (2012) 'Information systems of research funding agencies in the "era of the Big Data": The case study of the Research Information System of the European Research Council'. In Jeffery, K. and Dvorak, J. (eds) E-Infrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge, Proceedings of the 11th International Conference on Current Research Information Systems (6–9 June 2012), pp. 103–12. Prague, Czech Republic.
Nason, E. et al. (2008) RAND Europe, Health Research - Making an Impact: The Economic and Social Benefits of HRB-funded Research. Dublin: Health Research Board.
Reeves, M. (2002) Arts Council England, Measuring the Economic and Social Impact of the Arts: A Review (Pubd online) <http://www.artscouncil.org.uk/media/uploads/documents/publications/340.pdf> accessed 26 Oct 2012.
REF 2014 (2010) Research Excellence Framework Impact Pilot Exercise: Findings of the Expert Panels (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/researchexcellenceframeworkimpactpilotexercisefindingsoftheexpertpanels/re01_10.pdf> accessed 26 Oct 2012.
—— (2011a) Assessment Framework and Guidance on Submissions (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/02_11.pdf> accessed 26 Oct 2012.
—— (2011b) Decisions on Assessing Research Impact (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf> accessed 19 Dec 2012.
—— (2012) Panel Criteria and Working Methods (Pubd online) <http://www.ref.ac.uk/media/ref/content/pub/panelcriteriaandworkingmethods/01_12.pdf> accessed 26 Oct 2012.
Russell Group (2009) Response to REF Consultation (Pubd online) <http://www.russellgroup.ac.uk/uploads/REF-consultation-response-FINAL-Dec09.pdf> accessed 26 Oct 2012.
Scoble, R. et al. (2009) Research Impact Evaluation, a Wider Context: Findings from a Research Impact Pilot (Pubd online) <http://www.libsearch.com/visit/1953719> accessed 26 Oct 2012.
—— (2010) 'Institutional Strategies for Capturing Socio-Economic Impact of Research'. Journal of Higher Education Policy and Management, 32: 499–511.
SIAMPI Website (2011) (Pubd online) <http://www.siampi.eu/Pages/SIA/12625bGFuZz1FTkc.html> accessed 26 Oct 2012.
Spaapen, J. et al. (2011) SIAMPI Final Report (Pubd online) <http://www.siampi.eu/Content/SIAMPI/SIAMPI_Final%20report.pdf> accessed 26 Oct 2012.
Spaapen, J. and Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment'. Research Evaluation, 20: 211–18.
The Allen Consulting Group (2005) Measuring the Impact of Publicly Funded Research. Canberra: Department of Education, Science and Training.
The SROI Network (2012) A Guide to Social Return on Investment (Pubd online) <http://www.thesroinetwork.org/publications/doc_details/241-a-guide-to-social-return-on-investment-2012> accessed 26 Oct 2012.
University and College Union (2011) Statement on the Research Excellence Framework Proposals (Pubd online) <http://www.ucu.org.uk/media/pdf/n/q/ucu_REFstatement_finalsignatures.pdf> accessed 26 Oct 2012.
Vonortas, N. and Link, A. (2013) Handbook on the Theory and Practice of Program Evaluation. Cheltenham, UK: Edward Elgar Publishing Limited.
Whitehead, A. (1929) 'Universities and their function'. The Aims of Education and Other Essays. New York: Macmillan Company.
Wooding, S. et al. (2007) RAND Europe, Policy and Practice Impacts of Research Funded by the Economic and Social Research Council (Pubd online) <http://www.esrc.ac.uk/_images/Case_Study_of_the_Future_of_Work_Programme_Volume_2_tcm8-4563.pdf> accessed 26 Oct 2012.
32 T Penfield et al