Research assessment: New metrics? More metrics? No metrics?
Metrics and impact factors of what? The example of BRIF, the Bioresource Research Impact Factor
Anne Cambon-Thomsen, CNRS & Université de Toulouse
Epidemiology and Public Health, Inserm, UMR 1027
What is science producing and needing?
• Publications
• Infrastructures
• Databases and datasets
• Collections of biological samples and the data attached to them
• Specialised software and methods
• Bioinformatics tools
• …
Among those, which ones are measured, evaluated, valued?
Human bioresources are key components of biomedical research. Yet their role is underestimated, and the work involved in setting up and maintaining a valid bioresource is not recognized. (Cambon-Thomsen et al., Nat Genet 2003, 34:25–26)
BIORESOURCES (BR)
• biological samples with associated data (medical/epidemiological, social)
• databases independent of physical samples
• other biomolecular and bioinformatics research tools
Why is sharing such resources important…
Much biomedical and epidemiological research is based on using bioresources: approximately 300 million tissue samples are stored in the USA and 20 million biological resources in Europe, for research and market use.
• Access to them for all relevant researchers is essential
• Promoting their sharing is crucial, but does not mean "just" putting files on the web!
• It requires work… which is poorly recognised
There are principles today, but few tools and almost no incentives to support this.
…and why is it poorly done?
BR are:
• not visible enough
• difficult to trace
• not acknowledged adequately
• difficult to assess reliably in their usage
due to:
• the lack of a unique BR identification system to trace them precisely
• the lack of standards for BR citation in the scientific literature
• the lack of indicators describing efficient usage and management of BR
The Bioresource Research Impact Factor (BRIF) initiative
Anne Cambon-Thomsen, leader
Laurence Mabile, project manager
The BRIF initiative
• HOW? By creating a set of adequate standardized tools:
  • standards for the citation and acknowledgement of bioresources in scientific articles, in order to trace their use on the web
  • a BRIF indicator: a tool to establish the frequency of BR use and to evaluate their impact, based on metrics and on the use of a unique digital resource identifier
Work in progress: currently developing a framework for recognising the specific contribution of bioresources to research (in the scientific literature).
• Final objective: to create tools that will:
  - promote a philosophy of sharing in the biomedical community
  - facilitate the practice of sharing policies for data and samples
www.gigasciencejournal.com/content/2/1/7
Working subgroups
• 'BRIF & Digital Identifiers', co-chaired by G. A. Thorisson, University of Leicester, UK, and P. A. Gourraud, University of California SF, USA ([email protected])
• 'BRIF Parameters', chaired by B. Parodi, National Inst. Cancer Res., Genoa, IT ([email protected])
• 'BRIF in Access & Sharing Policies', co-chaired by E. Rial-Sebbag, Inserm UMR1027, Toulouse, FR, and J. Harris, Norwegian Institute of Public Health, Oslo, Norway ([email protected], [email protected])
• 'BRIF and Journal Editors', co-chaired by A. Cambon-Thomsen, Inserm UMR1027, Toulouse, FR, and E. Bravo, Istituto Superiore di Sanità, Rome, IT ([email protected], [email protected])
• 'BRIF dissemination', chaired by L. Mabile, Inserm UMR1027, Toulouse, FR ([email protected])
CoBRA: Citation of Bioresources in Research Articles. A milestone developed by the BRIF Journal Editors' subgroup
• Sensitizing editors and their associations to BR issues (targeted surveys)
• Disseminating BRIF at international science editing and other conferences
• Organizing restricted workshops for journal editors and experts (Rome, June 21, 2013; Toulouse, Oct 9, 2016)
• Working out a guideline for the citation of bioresources (see the illustrative example below)
• Launching an open access journal for describing bioresources with re-use potential
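For illustration, a CoBRA-style reference-list entry might look like the line below. The bioresource name and identifier are hypothetical, and the exact fields and their order should be checked against the published guideline (Bravo et al., BMC Med 2015); the distinctive element is a tag marking the entry as a bioresource citation so that its use can be traced in the literature:

    Doe J (PI). Example Population Cohort Biobank, University of Somewhere. Persistent identifier: doi:10.xxxx/example (hypothetical). [BIORESOURCE]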
The Open Journal of Bioresources (OJB) features peer-reviewed short papers helping researchers to locate and cite bioresources with high reuse potential.
Making bioresources more openly discoverable has enormous benefits not only for the research community and the wider public, but for the producers of the bioresources as well.
Both the resources and the OJB papers are citable, and this will be tracked to provide authors with metrics on reuse and impact.
http://openbioresources.metajnl.com
Collaboration with Ubiquity Press: launch of an open access data journal dedicated to publishing descriptions of bioresources.
Aims:
• increase the visibility of bioresources by offering the possibility of an open access "marker paper", following an established description template
• provide each bioresource with a DOI
How it works
OJB bioresource papers are:
• peer reviewed
• short and concise
• Open Access only (CC BY)
• fully citable
• machine readable
Paper structure:
• Abstract
• Bioresource overview
• Methods
• Bioresource description
• Reuse potential
Next: implementation of CoBRA. What actions?
• A guideline that is not implemented is of no use!
• What mechanisms? Endorsement at various levels:
  • Institutional (universities, national institutes, infrastructures…)
  • Scientific (scientific consortia, scientific and professional societies…)
  • Administrative: inclusion of a reference to its use in MTAs
  • Educational: good practices taught to PhD students using bioresources
  • Editorial (instructions to authors and reviewers; incentives to use the EQUATOR reference guidelines…)
Conclusion
• By providing an operational guideline for citing, in a harmonised way, the bioresources used or referred to in an article, CoBRA fills a gap that is indispensable on the way towards BRIF and to measuring the impact of such resources.
• Its implementation requires joint efforts from several stakeholders, in particular editors.
• It opens the possibility of attributing impact:
  • to infrastructures and their use
  • to individuals contributing to such resources, through links to ORCID or other personal unique identifiers
• It empowers patients and research participants to follow for themselves what is done with their samples and data.
Subscribe to the BRIF Newsletter at:
http://listes.univ-tlse3.fr/wws/subscribe/brif.info
Contact us:
[email protected] [email protected]
Thank you!
References
• Napolitani F, et al. Treat the poison of invisibility with CoBRA, a systematic way of citing bioresources in journal articles. Biopreservation and Biobanking, 2016, in press.
• Bravo E, et al. Developing a guideline to standardize the citation of bioresources in journal articles (CoBRA). BMC Med. 2015;13(1):266.
• Bravo E, et al. Citation of bioresources in biomedical journals: moving towards standardization for an impact evaluation. European Science Editing 2013;39(2):36-38.
• De Castro P, et al. Open Data Sharing in the Context of Bioresources. Acta Inform Med. 2013;21(4):291-292.
• Mabile L, et al. Quantifying the use of bioresources for promoting their sharing in scientific research. GigaScience 2013, 2:7.
• Cambon-Thomsen A, et al. The role of a Bioresource Research Impact Factor as an incentive to share human bioresources. Nat Genet. 2011;43(6):503-4.
• Kauffmann F, Cambon-Thomsen A. Tracing biological collections: between books and clinical trials. JAMA 2008;299(19):2316-2318.
• Cambon-Thomsen A. Assessing the impact of biobanks. Nat Genet 2003;34(1):25-26.
Beyond notions of 'misuse' in debates on metrics
Sarah de Rijcke
Centre for Science and Technology Studies, Leiden University
• If more transparency is the solution, what precisely is the problem?
'Misuses' and 'unintended effects'?
Is the JIF 'misused'?
• Arguments against the JIF often cite its technical shortcomings
• "Single numbers conceal the skew of distributions and the variation in citations received by published papers"
• In line with principle 8 of the Leiden Manifesto: avoid false precision
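As a minimal illustration of that skew point, here is a sketch using simulated citation counts (the lognormal-like distribution and its parameters are illustrative assumptions, not data from the talk):

```python
# Sketch: a single average (JIF-style) conceals the skew of citation counts.
import random

random.seed(42)
# 200 papers in a hypothetical journal: most get few citations, a few get many.
citations = [int(random.lognormvariate(mu=1.0, sigma=1.2)) for _ in range(200)]

mean = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]
print(f"mean (JIF-like average): {mean:.2f}")
print(f"median (typical paper):  {median}")
# For skewed data the mean sits well above the median, so the average
# says little about the citations a typical paper actually receives.
```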
But also: a too optimistic model of 'implementation'
• Researchers appear as passive, relatively powerless actors in the science system
• This obscures a much more fundamental issue about 'constitutive effects'
Leiden Manifesto principles 9 and 10: recognize the systemic effects of assessment and indicators; scrutinize indicators regularly and update them.
How do researchers use indicators?
- selecting useful information from the overwhelming amounts of literature they could potentially read
- settling discussions over whom to collaborate with and when
- deciding how much (additional) time to spend in the laboratory
- deciding how much time and effort to put into article peer review
Does the JIF 'mislead'?
1. Not all embedded uses are grounded in naïve assumptions
2. Who misleads whom? Researchers are active users of metrics, not simply evaluated subjects
Different embedded uses of metrics 'trickle back up' into assessments.
"Auditors are not aliens. They are versions of ourselves." (Strathern 1997, 319)
• Calls for researchers and evaluators to 'drop' the JIF in assessments are actually calls for quite fundamental transformations in how researchers currently produce scientific knowledge
• This transformation of the epistemic process is the primary task
• When and how exactly do metrics reshape research? What is at stake in these situations? What kind of research do we want?
Key question
• Constitutive effects of assessment and indicators affect the kinds of knowledge researchers consider viable and interesting
• How can we make this work for, instead of against, us?
→ Additional 'socially robust' responsible-metrics initiatives, with more active involvement of researchers in finding solutions
• Diana Hicks, Paul Wouters, Ismael Rafols, Ludo Waltman, Sarah de Rijcke: The Leiden Manifesto for Research Metrics, www.leidenmanifesto.org
• www.cwts.nl/blog: discussion on the JIF and citation distributions
• Alex Rushforth: project on Impact Indicators in Biomedicine
Thank you!
Citation-based Research Assessment: The Tail is Wagging the Dog
Impact on journals, researchers, research
Alternatives/solutions
One number to rule them all: the 2-year Journal Impact Factor ('JIF')
'JIF is used to estimate the expected count of individual papers, … dubious considering the known skewness observed for most journals' (Garfield E., JAMA 2006, Vol 295)
Citations are a robust metric only where usage is high enough to cross a meaningful threshold.
Unfounded assumption: the JIF is predictive of the impact (importance/quality) of a given paper.
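For reference (the standard definition, not shown on the slide), the two-year JIF of a journal for year $Y$ is the mean

\[
\mathrm{JIF}_Y \;=\; \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where $C_Y(y)$ is the number of citations received in year $Y$ by items the journal published in year $y$, and $N_y$ is the number of citable items it published in year $y$. Because it is a mean over a heavily skewed distribution, it is a poor predictor for any single paper.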
Publishing
[Figure: annual number of PubMed-indexed publications, 1990-2014 (y-axis 0 to 800,000); sources: PubMed, wired.com]
Research assessment is an ecosystem: funders, institutions, researchers, journals, government.
The power of JIF
[Diagram: authors submit papers to journals; the journal's JIF returns credit, career advancement and funding to the author; journals collect fees and subscriptions]
Institutions, funders and government use journals as a proxy for quality/impact.
∴ Journal editors decide which research is funded.
Research assessment is an ecosystem: funders, institutions, researchers, journals.
DORA, the San Francisco Declaration on Research Assessment (www.ascb.org/dora), launched 2013: >12,000 individual and >550 organizational signatories; good practice examples.
Why Thomson Reuters will not change JIF
[Figures: number of tied journals as a function of the Journal Impact Factor (range 0-60), with the JIF reported to three decimals, two decimals, and one decimal; the fewer the decimals reported, the more journals tie on the same value (see the sketch below)]
Reporting statistically defensible numbers, or using the median, would collapse journal rankings.
S. Bertuzzi, ASCB blog, May 2015: "A False Sense of Precision"
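A minimal sketch of that precision effect, using simulated JIF values (the skewed distribution is an illustrative assumption, not Thomson Reuters data):

```python
# Sketch: how rounding precision creates ties among journal impact factors.
import random
from collections import Counter

random.seed(0)
# ~11,000 hypothetical journals with skewed JIF values, capped at 60.
jifs = [min(random.expovariate(1 / 2.5), 60.0) for _ in range(11_000)]

for decimals in (3, 2, 1):
    counts = Counter(round(j, decimals) for j in jifs)
    tied = sum(n for n in counts.values() if n > 1)
    print(f"{decimals} decimal(s): {tied} journals share their JIF with another")
```

With fewer decimals, more journals become indistinguishable, which is why the spurious third decimal is kept: it preserves a fine-grained (if meaningless) ranking.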
…even journals that ought to be immune peddle the JIF
Bibliometric transparency
Institutions set JIF expectations
https://www.gov.uk/government/publications/open-access-to-research-independent-advice
'universities should be encouraged to sign up to the San Francisco Declaration on Research Assessment. This requires that metrics are used appropriately in the evaluation of individual researchers in order to remove distortion in the market by privileging certain publication routes.'
Article-level metrics
• Balance journal metrics
• 'Real time'
• Risk of developing additional flawed metrics
'Metrics should support, not supplant, expert judgement.' (http://www.hefce.ac.uk/)
Reviewer credit
• Financial compensation? Discounts, publishing an annual list, certificates
• Community credit (editorial boards, open peer review)
• Research assessment
Referee credit: evaluating the evaluators
[Diagram: institutions, funders and journals incentivize referees to produce review reports, within the funders-institutions-researchers-journals ecosystem]
Transparent review (Nature 468, 29–31 (2010))
• Showcase quality
• Improve peer review
• Teaching tool
• Independent opinion
Publish reports, decision letters & rebuttals (not names); no confidential comments; named co-referees; detailed decision letters.
Authorship: 'credit where credit is due'
• Microattribution (figure-level authorship, e.g. "Fig 1C": author contribution, source data, methods, protocols)
• Digital identifiers
• Balanced co-first authorship
Thorough papers first to the finish line.
EMBO transparent process: scooping protection; one round of revision; preprint publication.
[Diagram: the path from research finding through quality control and validation to dissemination]
Preprint servers: accelerating discovery
[Diagram: reliability vs. time; a preprint provides the discovery time stamp early, journal peer review adds reliability later]
• Minimal delay in disseminating findings
• Global reach
• Community comments & collaborations
• Time stamp: documents priority
Preprints + DORA may transform research assessment: funders increasingly consider preprint posts for grants and fellowships.
Research Assessment: New metrics? More metrics? No metrics?
Ismael Rafols
Ingenio (CSIC-UPV), Univ. Politècnica de València
SPRU (Science Policy Research Unit), Univ. Sussex
Re-shaping the design and use of indicators
• Indicators may be harming research
  • Current indicators are only (partially) appropriate for some types of science.
  • Biases against, and potential suppression of, creative and valuable types of research (agro-, health, …). A threat to diversity.
• Not only more, but other types of indicators are needed
  • Making visible other contributions (e.g. interdisciplinary research) and other types of research (e.g. action research, co-creation)
  • Enhancing visualisation of metrics for "opening up" perspectives rather than facilitating "closing down"
• Towards different uses of indicators
  • New embedding in assessment or policy contexts
  • Indicators used to pluralise ("open up") perspectives, as tools for interpretation and deliberation, not a substitute for judgement
Uses of indicators: pressing demands of research management and evaluation. Can indicators help?
Yes, indicators can help make decisions…
• Reduce time and costs
• Increase transparency and a sense of objectivity
• Reduce complexity; accessible to managers
…but do they lead to the "right" decisions?
Evaluation gap (Wouters): "discrepancy between evaluation criteria [implicit in indicators] and the social and economic functions of science"
• Academia: "excellence" • Innovation: economic "growth"
Missions not well covered: agriculture, public health, defence, development, social inclusion, …
Often related to marginalised / "neglected" populations?
Problems, research, indicators and marginalisation
[Diagram series: three overlapping spaces: the space of problems, the space of research, and the space of STI indicators. Indicators illuminate only part of the research space; the rest forms "STI peripheries": research spaces not well captured by indicators]
The peripheries span multiple types of space:
• Cognitive: SSH, engineering
• Linguistic: non-English
• Sectoral: low-tech, agriculture, creative industries
• Social: gender, minorities
• Geographical: regional, the "South"
Streetlight effect in indicators: mistaking the light for the "problems"
[Diagram: the research well illuminated by indicators covers only part of the space of problems]
Hypothesis (no "hard evidence"): the societal needs addressed by research sitting under the streetlight will be better rewarded, so reduced indicator coverage may contract the research space: reduced diversity of research efforts… and reduced coverage of societal needs.
Demands for an expanding role of science in society… may require an expanded set of indicators: MORE.
[Diagram: the space of STI indicators expands to cover more of the space of research and the space of problems]
Broadening out vs. opening up (Stirling, 2008; Leach et al. 2010)
[Diagram: a 2x2 appraisal framework; one axis runs from narrow to broad in the range of appraisal inputs (issues, perspectives, scenarios, methods), the other from closing down to opening up in the effect of appraisal 'outputs' on decision-making]
Appraisal methods: broad vs. narrow & closed vs. open (Stirling et al., 2007)
[Diagram: appraisal methods placed on the same framework, including cost-benefit analysis, open hearings, consensus conferences, scenario workshops, citizens' juries, multi-criteria mapping, q-method, sensitivity analysis, narrative-based participant observation, decision analysis, risk assessment and structured interviews]
[Diagram: most conventional S&T indicators sit at the narrow, closing-down end of the framework]
Broadening out S&T indicators
[Diagram: moving conventional S&T indicators towards the broad end of the framework]
Broadening out:
• incorporating plural analytical dimensions: global & local networks, hybrid lexical-actor networks, etc.
• new analytical inputs: media, the blogosphere.
[Diagram: journal rankings, university rankings and the European Innovation Scoreboard placed on the framework]
These are unitary measures that are opaque, tend to favour the established perspectives… and are easily translated into prescription.
Opening up S&T indicators
[Diagram: moving conventional S&T indicators towards the opening-up end of the framework]
Making explicit the underlying conceptualisations and creating heuristic tools to facilitate exploration: NOT about the uniquely best method, the unitary best explanation, or the single best prediction.
A research and policy agenda
• Pluralising indicators (supply)
  • Broadening out: create more diverse indicators
    • indicators of open science, RRI, hidden and social innovation
    • improve the representation of SSH scholarship, languages other than English, the "South", …
  • Opening up: develop more pluralistic toolkits that present contrasting perspectives
    • multi-ranking tools
    • interactive visualisations
• New embedding of indicators in assessment (demand)
  • Develop new social processes for the use of indicators
  • Indicators to inform decisions, not a substitute for judgement
  • STI indicators as tools for interpretation and deliberation
From S&T indicators for justification and disciplining… towards S&T indicators as tools for deliberation.
Model 1: unique and prescriptive. Proposing "best choices"; rankings as a list of preferences.
Model 2: plural and conditional. Exploring diverse choices; facilitating options/choices in landscapes.
• 'Conventional' use of indicators ('pure scientist', Pielke)
  • purely analytical in character (i.e. supposedly free of normative assumptions)
  • instruments of objectification of dominant perspectives
  • aimed at legitimising/justifying decisions (e.g. "excellence")
  → unitary and prescriptive advice
• Opening up scientometrics ('honest broker', Pielke)
  • aimed at locating the actors in their context and dynamics: not predictive or explanatory, but exploratory
  • construction of indicators is based on a choice of perspectives: make explicit the possible choices about what matters
  • supporting debate; making science policy more 'socially robust'
  → plural and conditional advice
(Barré 2001, 2004, 2010; Stirling 2008)
Hicks, Wouters, De Rijcke, Waltman and Rafols (2015), Nature (23 April 2015)
Principles of the Leiden Manifesto
1. Quantitative evaluation should support qualitative, expert assessment.
2. Measure performance against the research missions of the institution, group or researcher.
3. Protect excellence in locally relevant research.
4. Keep data collection and analytical processes open, transparent and simple.
5. Allow those evaluated to verify data and analysis.
6. Account for variation by field in publication and citation practices.
7. Base assessment of individual researchers on a qualitative judgement of their portfolio.
8. Avoid misplaced concreteness and false precision.
9. Recognize the systemic effects of assessment and indicators.
10. Scrutinize indicators regularly and update them.
Hicks, Wouters, Waltman, de Rijcke and Rafols (Nature, 2015)
2. Examples of opening up
a. Broadening out AND opening up
b. Opening up WITH NARROW inputs
1. Preserving multiple dimensions in broad appraisals
[Diagram: conventional S&T indicators on the appraisal framework, moving towards both broadening out and opening up (Leach et al. 2010)]
Composite innovation indicators (25-30 indicators): the European Innovation Scoreboard.
Grupp and Schubert (2010) show, via sensitivity analysis, that the resulting order is highly dependent on the indicator weightings (see the sketch below).
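A minimal sketch of that sensitivity, with invented scores for three hypothetical units A, B and C (illustrative only, not the Scoreboard data):

```python
# Sketch: the ranking produced by a composite indicator depends on the weights.
units = {
    #       R&D intensity, publications, patents (already normalised to 0-1)
    "A": (0.9, 0.4, 0.2),
    "B": (0.5, 0.8, 0.3),
    "C": (0.3, 0.5, 0.9),
}

def rank(weights):
    composite = {name: sum(w * s for w, s in zip(weights, scores))
                 for name, scores in units.items()}
    return sorted(composite, key=composite.get, reverse=True)

print(rank((0.6, 0.2, 0.2)))  # ['A', 'B', 'C'] : weights favour R&D intensity
print(rank((0.2, 0.6, 0.2)))  # ['B', 'C', 'A'] : weights favour publications
print(rank((0.2, 0.2, 0.6)))  # ['C', 'B', 'A'] : weights favour patents
# Each weighting produces a different "winner": the order is an artefact of
# the weights, which is the case for showing the dimensions separately.
```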
Solution: representing multiple dimensions (following the critique by Grupp and Schubert, 2010).
• Spider diagrams allow comparing like with like (a sketch follows below)
• U-rank university performance comparison tools (Univ. Twente)
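A sketch of the spider-diagram alternative, reusing the invented scores from the previous example (requires matplotlib; the dimensions are hypothetical):

```python
# Sketch: a spider (radar) diagram keeps the dimensions separate instead of
# collapsing them into a single composite number.
import math
import matplotlib.pyplot as plt

dims = ["R&D intensity", "Publications", "Patents"]
units = {"A": (0.9, 0.4, 0.2), "B": (0.5, 0.8, 0.3), "C": (0.3, 0.5, 0.9)}

angles = [2 * math.pi * i / len(dims) for i in range(len(dims))]
angles += angles[:1]  # repeat the first angle to close each polygon

ax = plt.subplot(polar=True)
for name, scores in units.items():
    values = list(scores) + [scores[0]]
    ax.plot(angles, values, label=name)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims)
ax.legend(loc="lower right")
plt.show()
```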
2. Examples of opening up
b. Opening up WITH NARROW inputs
[Diagram: opening up S&T indicators on the appraisal framework (Leach et al. 2010)]
Making explicit the underlying conceptualisations and creating heuristic tools to facilitate exploration: NOT about the uniquely best method, the unitary best explanation, or the single best prediction.
1. Measures of "scientific excellence"
[Figure: three alternative measures of "scientific excellence" for six units (ISSTI, SPRU, MIoIR, Imperial, WBS, LBS): ABS journal rank, journal-field-normalised citations per publication, and Journal Impact Factor. Which one is more meaningful? Rafols et al. (2012, Research Policy)]
Measures of "scientific excellence"
[Figure: the same comparison with a fourth panel added, citing-paper-normalised citations per publication, for the same six units. Which one is more meaningful? Rafols et al. (2012, Research Policy)]
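As background on the normalised panels (standard bibliometric definitions, stated here as an assumption about what the figures compute, not taken from the slides): the field-normalised citation score of a unit with $N$ papers is typically

\[
\mathrm{MNCS} \;=\; \frac{1}{N}\sum_{i=1}^{N}\frac{c_i}{e_i},
\]

where $c_i$ is the number of citations received by paper $i$ and $e_i$ is the average citations of papers of the same field, publication year and document type. Citing-paper (source-side) normalisation instead weights each citation, for instance by the length of the citing paper's reference list. Different choices of $e_i$, or of citation weights, can reorder the units, which is why "which one is more meaningful?" is the right question.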
2. Measures of interdisciplinarity
Multiple concepts of interdisciplinarity: there is a conspicuous lack of consensus, but most indicators aim to capture the following concepts.
Integration (diversity & coherence):
• research that draws on diverse bodies of knowledge
• research that links different disciplines
Intermediation:
• research that lies between or outside the dominant disciplines
[Diagram: diversity vs. coherence: high diversity with low coherence is multidisciplinary, high diversity with high coherence is interdisciplinary, low diversity is monodisciplinary. A separate axis, intermediation, runs from monodisciplinary (low) to interdisciplinary (high)]
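The diversity axis is typically quantified in this line of work with the Rao-Stirling index (it also appears in the summary slide below):

\[
\Delta \;=\; \sum_{i \neq j} p_i \, p_j \, d_{ij},
\]

where $p_i$ is the proportion of a unit's references falling in discipline $i$ (e.g. a Web of Science category) and $d_{ij}$ is the cognitive distance between disciplines $i$ and $j$, often taken as $1 - s_{ij}$ for a similarity measure $s_{ij}$.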
Assessing interdisciplinarity: the case of ISSTI Edinburgh
[Figure: diversity: the Web of Science categories of the references of ISSTI Edinburgh publications]
[Figure: coherence: observed/expected cross-citations between the journals referenced by ISSTI Edinburgh, shown as a network spanning journals from Res Policy, Soc Stud Sci and Scientometrics to Nature, Lancet, Brit Med J, Econometrica and Energ Policy]
[Figure: intermediation: the position of ISSTI Edinburgh references in the global journal map]
3. Research trajectories
Explore different directions of research. Thinking in terms of research portfolios: the case of rice.
[Figure: map of rice research topics: rice varieties (classic genetics); transgenics (molecular biology); genomics; pests and weeds (plant protection); plant nutrition; production & socioeconomic issues; consumption (human nutrition, food technologies)]
[Figures: rice research portfolios, 2000-12, for the US, India, Thailand and Brazil. Ciarli and Rafols (2014, unpublished)]
3. Summary and conclusions
From S&T indicators for justification and disciplining… towards S&T indicators as tools for deliberation.
Instead of designing indicators for ranking (summative assessment), design indicators that foster reflection (formative assessment) and the pluralisation of perspectives.
This shift is facilitated by trends pushed by ICT and visualisation tools:
• more inputs (publications and patents, but also news, webs, etc.)
• multidimensional outputs (interactive maps)
• institutional repositories
• multiple solutions, highlighting variation and confidence intervals
• more inclusive and contrasting classifications (by-passing private data ownership? PubMed, arXiv)
• more possibilities for open scrutiny (new research groups)
S&T indicators as tools to open up the debate (see the 'pure scientist' vs. 'honest broker' contrast above).
Strategies for opening up, or how to "keep it complex" yet "manageable"
• Presenting contrasting perspectives: at least TWO, in order to give a taste of choice
• Simultaneous visualisation of multiple properties/dimensions: allowing users to take their own perspective
• Interactivity: allowing users to give their own weights to criteria/factors, and to manipulate the visuals
Is 'opening up' worth the effort? (1) Sustaining diversity in the S&T system
A potential unintended consequence of the evaluation machine: a decrease in diversity.
Why diversity matters
• A systemic ('ecological') understanding of S&T: outcomes depend on synergistic interactions between disparate elements.
• A dynamic understanding of excellence and relevance: new social needs, challenges and expectations from S&T.
• Managing diverse portfolios to hedge against uncertainty in research: Office of Portfolio Analysis (National Institutes of Health), http://dpcpsi.nih.gov/opa/
• Opening the possibility for S&T to work for the disenfranchised: topics outside dominant science (e.g. neglected diseases).
STI Indicators Conference, European and Latin American Networks, 14-16 September 2016, València
Measures of "scientific impact"
[Figure: citations per publication for the six units (ISSTI, SPRU, MIoIR, Imperial, WBS, LBS) under four normalisations: raw, field-normalised, citing-side-normalised, and journal-normalised]
Summary: IS (blue) units are more interdisciplinary than BMS (orange)
[Figure: three indicators per unit: Rao-Stirling diversity (more diverse), observed/expected cross-citation distance (more coherent), and average similarity (more interstitial)]