
Page 1: Research evaluation at CWTS Meaningful metrics, evaluation in context

Research evaluation at CWTS

Meaningful metrics, evaluation in context

Ed Noyons, Centre for Science and Technology Studies, Leiden University

RAS Moscow, 10 October 2013

Page 2: Research evaluation at CWTS Meaningful metrics, evaluation in context

Outline

• Centre for Science and Technology Studies (CWTS, Leiden University): history in short;

• CWTS research programme;

• Recent advances.

Page 3: Research evaluation at CWTS Meaningful metrics, evaluation in context

History in Short

25 years CWTS


Page 4: Research evaluation at CWTS Meaningful metrics, evaluation in context

25 years CWTS history in short (1985-2010)

• Started around 1985 by Anthony van Raan and Henk Moed; one and a half staff positions funded by the university;

• Context is science policy, research management;

• Mainly contract research and services (research evaluation);

• Staff stable around 15 people (10 researchers);

• Main focus on publication and citation data (in particular Web of Science).

Page 5: Research evaluation at CWTS Meaningful metrics, evaluation in context

25 years CWTS history in short (2010 - …)

• Block funding since 2008;

• Since 2010:

– moving from mainly services with some research to a research institute with services;

– New director: Paul Wouters;

• New recruitments: now ~35 people.


Page 6: Research evaluation at CWTS Meaningful metrics, evaluation in context

CWTS Research programme

Research and services


Page 7: Research evaluation at CWTS Meaningful metrics, evaluation in context

Bibliometrics (in the context of science policy) is ...

Page 8: Research evaluation at CWTS Meaningful metrics, evaluation in context

Opportunities

• Research accountability => evaluation;

• Need for standardization, objectivity;

• More data available.

Page 9: Research evaluation at CWTS Meaningful metrics, evaluation in context

Vision

• Quantitative analyses;

• Beyond the ‘lamppost’:

– Other data

– Other outputs

• Research 360º:

– Input

– Societal impact/quality

– Researchers themselves

Page 10: Research evaluation at CWTS Meaningful metrics, evaluation in context

Background of the CWTS research programme

• Already existing questions;

• New questions:

1. How do scientific and scholarly practices interact with the “social technology” of research evaluation and monitoring knowledge systems?

2. What are the characteristics, possibilities and limitations of advanced metrics and indicators of science, technology and innovation?

Page 11: Research evaluation at CWTS Meaningful metrics, evaluation in context

Current CWTS research organization

• Chairs:

– Scientometrics

– Science policy

– Science, Technology & Innovation

• Working groups:

– Advanced bibliometrics

– Evaluation Practices in Context (EPIC)

– Social sciences & humanities

– Society Using Research Evaluation (SURE)

– Career studies

Page 12: Research evaluation at CWTS Meaningful metrics, evaluation in context

Back to Bibliometrics

A look under the lamp post


Page 13: Research evaluation at CWTS Meaningful metrics, evaluation in context

Recent advances at CWTS

• Platform: Leiden Ranking;

• Indicators: new normalization to address:

1. Multidisciplinary journals

2. (Journal-based) classification

• Structuring and mapping:

– Advanced network analyses

– Publication-based classification

– Visualization: VOSviewer

Page 14: Research evaluation at CWTS Meaningful metrics, evaluation in context

The Leiden Ranking

http://www.leidenranking.com


Page 15: Research evaluation at CWTS Meaningful metrics, evaluation in context

Platform: Leiden Ranking http://www.leidenranking.com

• Based on Web of Science (2008-2011);

• Only universities (~500);

• Only dimension is scientific research;

• Indicators (state of the art):

– Production

– Impact (normalized and ‘absolute’)

– Collaboration.


Page 16: Research evaluation at CWTS Meaningful metrics, evaluation in context

Leiden Ranking – world top 3 (PPtop10%)

[Chart: world top 3 universities by PPtop10%. PPtop10% is the normalized impact indicator; stability intervals are shown to enhance certainty.]
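As a rough illustration of what PPtop10% measures, here is a minimal sketch (the helper names and data are hypothetical; the actual Leiden Ranking methodology also normalizes by field, year, and document type, and treats ties around the threshold more carefully):

```python
import numpy as np

def pp_top10(citations_univ, citations_world):
    """Share of a university's publications among the top 10% most
    cited publications worldwide (simplified to one field and year)."""
    threshold = np.percentile(citations_world, 90)  # 90th-percentile cut-off
    return float(np.mean(np.asarray(citations_univ) >= threshold))

# Example: four of the university's papers against a synthetic world
# citation distribution.
world = np.random.default_rng(0).poisson(5, size=10_000)
print(pp_top10([2, 40, 7, 55], world))  # fraction in the world top 10%
```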

Page 17: Research evaluation at CWTS Meaningful metrics, evaluation in context

Russian universities (impact)

Page 18: Research evaluation at CWTS Meaningful metrics, evaluation in context

Russian universities (collaboration)

Page 19: Research evaluation at CWTS Meaningful metrics, evaluation in context

Impact Normalization (MNCS)

Dealing with field differences


Page 20: Research evaluation at CWTS Meaningful metrics, evaluation in context


Background and approach

• Impact is measured by the number of citations received;

• Excluding self-citations;

• Fields differ regarding citing behavior;

• A citation in one field is worth more than one in another;

• Normalization:

– By journal category

– By citing context.
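The mean normalized citation score (MNCS) behind this is usually written as follows; this is the standard definition from the bibliometrics literature rather than a formula given on the slide:

$$\mathrm{MNCS} = \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i}$$

where $n$ is the number of publications, $c_i$ the number of citations received by publication $i$ (self-citations excluded), and $e_i$ the expected number of citations for publications of the same field, publication year, and document type. A score of 1 means impact at the world average of the field.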

Page 21: Research evaluation at CWTS Meaningful metrics, evaluation in context

Issues related to journal category-based approach

• Scope of category;

• Scope of journal.

Page 22: Research evaluation at CWTS Meaningful metrics, evaluation in context

Journal classification ‘challenge’ (scope of category) (e.g., cardio research)

Page 23: Research evaluation at CWTS Meaningful metrics, evaluation in context

Approach: source-normalized MNCS

• Source normalization (a.k.a. citing-side normalization):

– No field classification system;

– Citations are weighted differently depending on the number of references in the citing publication;

– Hence, each publication has its own environment to be normalized by.

Page 24: Research evaluation at CWTS Meaningful metrics, evaluation in context


Source-normalized MNCS (cont’d)

• Normalization based on citing context;

• Normalization at the level of individual papers (e.g., X);

• Average number of refs in papers citing X;

• Only active references are considered:

– Refs in period between publication and being cited;

– Refs covered by WoS.
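One common way to formalize citing-side normalization (a sketch assuming one of the published variants; the slide does not fix the exact formula) weights each citation by the active references of the citing paper:

$$c_X^{\mathrm{SN}} = \sum_{j \in C(X)} \frac{1}{r_j}$$

where $C(X)$ is the set of publications citing $X$ within the citation window and $r_j$ is the number of active references of citing publication $j$, i.e., references falling within the window and covered by WoS. A citation from a paper with many references thus counts for less than one from a paper with few.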

Page 25: Research evaluation at CWTS Meaningful metrics, evaluation in context

Networks and visualization

Collaboration, connectedness, similarity, ...


Page 26: Research evaluation at CWTS Meaningful metrics, evaluation in context

VOSviewer: collaboration Lomonosov Moscow State University (MSU)

• WoS (1993-2012);

• Top 50 most collaborative partners;

• Co-published papers.

Page 27: Research evaluation at CWTS Meaningful metrics, evaluation in context

Other networks

• Structure of science output (maps of science);

• Oeuvres of actors;

• Similarity of actors (benchmarks based on profile);

• …

Page 28: Research evaluation at CWTS Meaningful metrics, evaluation in context

Publication-based classification

Structure of science independent from journal classification

Page 29: Research evaluation at CWTS Meaningful metrics, evaluation in context

Publication-based classification (WoS 1993-2012)

• Publication-based clustering (each pub in one cluster);

• Independent from journals;

• Clusters based on citing relations between publications;

• Three levels:

– Top (21)

– Intermediate (~800)

– Bottom (~22,000)

• Challenges:

– Labeling

– Dynamics.
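CWTS uses its own clustering methodology for this; purely to illustrate the idea of grouping publications by citing relations, here is a hypothetical sketch using Louvain community detection from networkx:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy citation network: nodes are publications, edges are citing
# relations (direction ignored for the purpose of clustering).
G = nx.Graph()
G.add_edges_from([
    ("p1", "p2"), ("p2", "p3"), ("p1", "p3"),  # densely linked group
    ("p4", "p5"), ("p5", "p6"), ("p4", "p6"),  # another dense group
    ("p3", "p4"),                              # weak link between groups
])

# Each publication is assigned to exactly one cluster, with no use of
# journal categories.
clusters = louvain_communities(G, seed=42)
print(clusters)  # e.g., [{'p1', 'p2', 'p3'}, {'p4', 'p5', 'p6'}]
```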


Page 30: Research evaluation at CWTS Meaningful metrics, evaluation in context

Map of all sciences (784 fields, WoS 1993-2012)

[Map: each circle represents a cluster of publications; surface represents volume; distance represents relatedness (citation traffic); colors indicate clusters of fields/disciplines. Labeled regions: physical sciences; earth, environmental & agricultural sciences; biomedical sciences; cognitive sciences; social and health sciences; maths & computer sciences.]

Page 31: Research evaluation at CWTS Meaningful metrics, evaluation in context

Positioning of an actor in the map

• Activity overall (world and, e.g., Lomonosov Moscow State Univ, MSU):

o Proportion Lomonosov relative to world;

• Activity per ‘field’ (world and MSU):

o Proportion MSU in field;

• Relative activity MSU per ‘field’;

• Scores between 0 (Blue) and 2 (Red);

• ‘1’ if proportion same as overall (Green).
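Read as code, a plausible interpretation of these scores (a hypothetical sketch; the slide does not give the exact definition) is the actor's share of a field divided by the world's share of that field, capped for the colour scale:

```python
def relative_activity(actor_field, actor_total, world_field, world_total,
                      cap=2.0):
    """Actor's share of output in one field relative to the world's share.
    1.0 (green) = same proportion as overall; 0.0 is blue and values are
    capped at 2.0 (red) for display."""
    actor_share = actor_field / actor_total
    world_share = world_field / world_total
    return min(actor_share / world_share, cap)

# Example: 300 of MSU's 10,000 papers fall in a field holding 1% of
# world output -> raw score 3.0, shown capped at 2.0 (red).
print(relative_activity(300, 10_000, 200_000, 20_000_000))
```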


Page 32: Research evaluation at CWTS Meaningful metrics, evaluation in context

Positioning Lomonosov MSU


Page 33: Research evaluation at CWTS Meaningful metrics, evaluation in context

Positioning Lomonosov MSU


Page 34: Research evaluation at CWTS Meaningful metrics, evaluation in context

Positioning Russian Academy of Sciences (RAS)


Page 35: Research evaluation at CWTS Meaningful metrics, evaluation in context

Alternative view Lomonosov (density)


Page 36: Research evaluation at CWTS Meaningful metrics, evaluation in context

Using the map: benchmarks

• Benchmarking on the basis of research profile:

– Distribution of output over 784 fields;

• Profile of each university in Leiden Ranking:

– Distributions of output over 784 fields;

• Compare to MSU profile;

• Identify most similar.
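The slide does not name the similarity measure; as a minimal sketch, cosine similarity between two 784-dimensional output distributions is one natural choice:

```python
import numpy as np

def profile_similarity(profile_a, profile_b):
    """Cosine similarity between two distributions of output over
    fields; 1.0 means an identical research profile."""
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example with 3 fields instead of 784 for brevity.
print(profile_similarity([120, 30, 5], [100, 25, 10]))  # close to 1.0
```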


Page 37: Research evaluation at CWTS Meaningful metrics, evaluation in context

Universities most similar to MSU (Leiden Ranking)

• FR - University of Paris-Sud 11

• RU - Saint Petersburg State University

• JP - Nagoya University

• FR - Joseph Fourier University

• CN - Peking University

• JP - University of Tokyo

Page 38: Research evaluation at CWTS Meaningful metrics, evaluation in context

Density view MSU


Page 39: Research evaluation at CWTS Meaningful metrics, evaluation in context

Density view St. Petersburg State University


Page 40: Research evaluation at CWTS Meaningful metrics, evaluation in context

VOSviewer (Visualization of Similarities)

http://www.vosviewer.com

• Open source application;

• Software to create maps;

• Input: publication data;

• Output: similarities among publication elements:

– Co-authors

– Terms co-occurring

– Co-cited articles

– …
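To illustrate the input-to-similarity step involved (a hypothetical sketch with made-up data, not VOSviewer's own file format or API): counting how often two authors appear on the same publication yields the co-authorship links a map is built from.

```python
from collections import Counter
from itertools import combinations

# Each record is the author list of one publication (made-up data).
records = [
    ["Ivanov", "Smith", "Chen"],
    ["Ivanov", "Chen"],
    ["Smith", "Chen"],
]

links = Counter()
for authors in records:
    for pair in combinations(sorted(set(authors)), 2):
        links[pair] += 1  # co-occurrence strength for this author pair

print(links.most_common())  # strongest co-authorship links first
```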


Page 41: Research evaluation at CWTS Meaningful metrics, evaluation in context

More information on CWTS and methods

• www.cwts.nl

• www.journalindicators.com

• www.vosviewer.com

• [email protected]

Page 42: Research evaluation at CWTS Meaningful metrics, evaluation in context

THANK YOU


Page 43: Research evaluation at CWTS Meaningful metrics, evaluation in context

Basic model in which we operate (research evaluation)

• Research in context

Page 44: Research evaluation at CWTS Meaningful metrics, evaluation in context

Example (49 research communities of a Finnish university)

[Scatter plot: MNCS (new) versus MNCS (traditional), both axes from 0.00 to 5.00. High Int-cov and large P: ‘positive’ effect; low Int-cov and small P: ‘negative’ effect.]

Page 45: Research evaluation at CWTS Meaningful metrics, evaluation in context

RC with a ‘positive’ effect

[Bar chart of WoS categories, traditional MNCS -> new MNCS, bars split into mncs high / mncs avg / mncs low:]

• MULTIDISCIPLINARY SCIENCES (0.5 -> 0.6)

• PHYSICS, PARTICLES & FIELDS (3.2 -> 4.6)

• PHYSICS, MULTIDISCIPLINARY (7.1 -> 7.7)

• PHYSICS, NUCLEAR (1.9 -> 1.5)

• GEOCHEMISTRY & GEOPHYSICS (0.8 -> 1.3)

• GEOSCIENCES, MULTIDISCIPLINARY (0.8 -> 1.1)

• METEOROLOGY & ATMOSPHERIC SCIENCES (0.8 -> 1.2)

• ASTRONOMY & ASTROPHYSICS (0.8 -> 1.3)

• Most prominent field: impact increases.

Page 46: Research evaluation at CWTS Meaningful metrics, evaluation in context

RC with a ‘negative’ effect

[Bar chart of WoS categories, traditional MNCS -> new MNCS, bars split into mncs high / mncs avg / mncs low:]

• ALLERGY (3.4 -> 1.8)

• MEDICINE, GENERAL & INTERNAL (3.8 -> 3.5)

• PUBLIC, ENVIRONMENTAL & OCCUPATIONAL HEALTH (0.9 -> 0.9)

• RHEUMATOLOGY (1.0 -> 1.1)

• IMMUNOLOGY (2.0 -> 1.3)

• PEDIATRICS (1.3 -> 0.8)

• NUTRITION & DIETETICS (0.6 -> 0.5)

• ENDOCRINOLOGY & METABOLISM (1.0 -> 1.1)

• Most prominent field: impact stays the same;

• Less prominent field: impact decreases.

Page 47: Research evaluation at CWTS Meaningful metrics, evaluation in context

Wrap-up: normalization

• Normalization based on journal classification has its flaws;

• We have recently developed an alternative;

• Test sets in recent projects show small (but relevant) differences.