
Department of Education and Skills, 2013/01/31

Speakers’ Abstracts

Professor Jürgen Barkhoff (Trinity Long Room Hub Arts & Humanities Research Institute, Trinity College Dublin)

Break-Out Discussion: Evaluating the Core Activities of Higher Education—Evaluation of Research in the Arts & Humanities

University rankings are becoming ever more influential, yet their methodology, which is by necessity largely quantitative and metrics-based, captures the quantity and quality of research outputs across the disciplines to immensely varying degrees. While the bibliometric tools for the sciences and engineering are well developed, fairly comprehensive and precise, the diversity of disciplines, methodologies, languages and publication formats in the arts and humanities makes a predominantly bibliometric approach to research evaluation in these areas deeply problematic and flawed. As a result, the diversity of arts and humanities research, and much of its excellence, is currently not adequately captured by rankings and other international, cross-institutional evaluation exercises. Traditional peer-review methods, on the other hand, are too elaborate and costly and do not easily lend themselves to the standardisation necessary for international comparability.

This brief intervention will look at the inherent problems in taking a metrics-based approach to the evaluation of humanities research, such as: the poor coverage of the arts and humanities by citation indices such as Thomson Reuters Web of Science or Elsevier Scopus; the non-inclusion of monographs; the slower, longer-term and more diverse impact of arts and humanities research; the issue of ‘perverse incentives’; the difficulty of including research in languages other than English; the importance of disciplines with a regional or national orientation; and the complexities of capturing interdisciplinary research. It will also briefly explore recommendations and possible solutions arising out of open-access policies and the digital humanities agenda.
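The coverage problem can be made concrete with a small sketch. The publication list below is invented, and the h-index is used only as a stand-in for whatever citation metric a ranking might employ; the point is that a scholar whose monographs fall outside the index loses score through no fault of the work:

```python
# Minimal sketch: how index coverage changes a citation metric.
# The publication list and citation counts are invented for illustration.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A hypothetical humanities scholar: articles plus monographs/chapters.
publications = [
    {"type": "article",   "citations": 12},
    {"type": "monograph", "citations": 40},  # books are often not indexed
    {"type": "article",   "citations": 5},
    {"type": "chapter",   "citations": 9},
    {"type": "article",   "citations": 3},
]

all_cites     = [p["citations"] for p in publications]
indexed_cites = [p["citations"] for p in publications if p["type"] == "article"]

print(h_index(all_cites))      # full record
print(h_index(indexed_cites))  # articles only, as a citation index might see it
```

Here the best-cited output is a monograph, so the indexed score understates the full record, which is exactly the distortion described above.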

Professor David Charles (EPRC, University of Strathclyde)

Break-Out Discussion: Assessing Regional Engagement and Knowledge-Transfer—Ranking or Benchmarking?

It can be argued that the assessment of university involvement in regional engagement (RE) and knowledge transfer (KT) is qualitatively different from assessing teaching or research performance. Teaching and research are two core activities of universities on which there is some consensus regarding excellence, and general consensus that it is desirable for universities to aspire towards these levels of excellence. Success on these two measures is largely in the hands of the university, subject to appropriate financial resources. RE & KT are different in that they are not unproblematically seen as indicating institutional excellence; the level of activity does not necessarily correlate with teaching or research excellence; activity is strongly oriented to particular disciplines; and levels of activity depend at least in part on external demand and capacities. So while we can say that a good university should be good at teaching and research, it does not necessarily follow that it is good at RE & KT—that might depend on the disciplinary mix and on the regional context in which the university is based, as well as on the availability of financial support.

It is often suggested that there are trade-offs between excellence in research and relevance through RE & KT, although some universities manage to score well on both. However, even within RE & KT different universities can specialise in different forms, so there is not just one path to excellence—some may focus on research-based knowledge transfer such as patents, while others concentrate on transferring knowledge to SMEs via training activities. Widening the agenda to include cultural activities, engagement with disadvantaged communities and regeneration projects further complicates the assessment of what constitutes excellence. In some areas we can measure direct outputs, but in others the achievements are intangible and difficult to measure in any quantitative sense. Ranking of universities on RE & KT is therefore problematic because, as we struggle to compare diverse achievements, we have no sensible way of weighting the value of different forms of engagement, and we are not even sure whether achievement is due to the university or its environment.
A further problem is assessing the benefit to the university or the wider community. Some forms of engagement might benefit one firm but give a strong financial benefit to the university, whereas other forms may yield no financial reward to the university but benefit many people in the community. Whose perspective do we take in making the assessment—the university's or the community's, and which community? One way of dealing with this uncertainty is simply to collect many different indicators—qualitative and quantitative—to indicate the profile of activity of a university. Rankings may be compiled for some specific forms of engagement where numbers are available (such as revenue from knowledge transfer), but otherwise institutions can use this data to help them make decisions on priorities in engagement, and use benchmarking to ensure that they are performing well in the specific areas in which they want to engage. One approach, developed initially for the Higher Education Funding Council for England (HEFCE), was a tool to enable comparisons, with the expectation that universities could use it to compare themselves with their peers rather than compiling absolute rankings. This way, assessment can be focused on improvement and differentiation rather than imitation.
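The benchmarking-rather-than-ranking idea can be sketched as follows. The institutions, indicators and values below are hypothetical; the point is that each indicator is expressed relative to a peer group, with no weights collapsing the profile into a single rank:

```python
# Sketch of benchmarking against peers rather than compiling a composite rank.
# Institutions, indicator names and values are invented for illustration.

from statistics import mean, stdev

peers = {
    "Univ A": {"licence_income": 2.1, "sme_training_hours": 400,  "community_projects": 12},
    "Univ B": {"licence_income": 0.3, "sme_training_hours": 1900, "community_projects": 5},
    "Univ C": {"licence_income": 1.1, "sme_training_hours": 800,  "community_projects": 30},
}

def benchmark(institution, peer_group):
    """Express each indicator as a z-score against the peer group.

    No weights are applied: the output is a profile, not a single score,
    so an institution sees where it stands on each form of engagement.
    """
    profile = {}
    for ind in peer_group[institution]:
        values = [v[ind] for v in peer_group.values()]
        mu, sigma = mean(values), stdev(values)
        profile[ind] = (peer_group[institution][ind] - mu) / sigma
    return profile

for ind, z in benchmark("Univ B", peers).items():
    print(f"{ind}: {z:+.2f}")
```

In this invented example, "Univ B" sits well above its peers on SME training but below them on licensing income, a specialisation that a single weighted rank would obscure.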

Professor Ellen Hazelkorn (Dublin Institute of Technology)

Increasing the Visibility of the Quality of the European Higher Education Area (EHEA): Benchmarking the Total Student Experience

The quality of higher education has become a key determinant of reputation and status in a globally competitive market. While global rankings purport to measure higher education quality, it is widely acknowledged that they do not measure what is meaningful and that they ignore the multi-dimensional attributes of European higher education. Nonetheless, their importance has been to place consideration of higher education quality within a wider comparative and international framework. Accordingly, over the past decade, since Shanghai Jiao Tong University first published its Academic Ranking of World Universities (2003), many governments have devised policies that directly link the performance and productivity of their respective higher education and research systems with social and economic development, and now economic recovery.

The Sorbonne Declaration (1998) and the subsequent Bologna Process were voluntary arrangements between national governments, predicated on the free movement of students, faculty and workers across national boundaries, and anticipating the need for enhanced convergence across national systems in order to compete internationally. Focused on enhancing co-operation, the Leuven/Louvain-la-Neuve Communiqué (2009) emphasised the necessity to ‘fully recognise the value of various missions of higher education, ranging from teaching and research to community services and engagement in social cohesion and cultural development’. Today there are 46 Member States, and the Bologna Process has given way to the European Higher Education Area (EHEA), launched in Vienna in 2010 and now a key component of Europe 2020.

From today's vantage point, the Bologna Process can claim major achievements: the introduction of the three-cycle system (bachelor/master/doctorate), quality assurance, and recognition of qualifications and periods of study. Rather than measuring inputs (e.g. credit hours or classroom teaching), Bologna has formalised the concept of learning outcomes.
In many different national and institutional contexts, higher education institutions (HEIs) have adopted the European Standards and Guidelines (ESG) for Quality Assurance. Other countries have recognised the inherent significance of bringing coherence to otherwise disparate national systems and creating a system which makes European higher education unique and attractive internationally. In a world gone global, the EHEA provides the basis for a coherent educational roadmap for students and other stakeholders through what often appears to be a mystifying and fragmented landscape of higher education options. More importantly, it has the capability of strategically positioning Europe's higher education, capitalising on the benefits of a truly international experience. It presents an opportunity for a stronger European dimension in education in this era of globalisation, which can help improve the status and visibility of European higher education by synergising the educational capacities of EU member states.

The term ‘total student experience’ refers to all aspects of students' engagement with higher education. Because it shapes future citizens, it is important to understand not only how higher education builds human capital capacity and capability, but also how it enhances the ability of individuals to make choices, have control over their lives and contribute to society. The concept of ‘the whole student’ recognises the importance of a quality student experience both inside and beyond the classroom.

How can we build on these attributes? Rather than seeking to position Europe or individual institutions according to their place in global rankings, this presentation will discuss how the EHEA can be used to actively promote a genuinely international educational experience across diverse institutions, focused on learning outcomes and aided by structured mobility—all within a single framework. Erasmus/Erasmus Mundus and the Marie Curie actions provide a glimpse of what is possible. This is a way of projecting European soft power globally, rather than conceptualising the EHEA as simply a European initiative. In this way, the EHEA can become synonymous with a ‘quality mark’, overcoming concerns of consumer protection by extending quality assurance, qualification recognition and accreditation to transnational or borderless education.

André Kristiansen (Norwegian Ministry of Education & Research) Break-Out Discussion: User-Perspectives on Rankings—Policy Perspective

In 2010 the Norwegian Ministry of Education and Research launched a classification system for public higher education institutions. The project is based on what we now know as U-Multirank. By comparison with other higher education systems, Norway has a very solid statistical basis that is provided by the Database of Statistics on Higher Education (http://dbh.nsd.uib.no/). Consequently, the Norwegian Ministry of Education and Research has data that covers all dimensions and most indicators in the U-Map project. The aim of the Norwegian classification project is to create a framework that visualizes the profile of the different higher education institutions, not to rank them, and as such it demonstrates the diversity of the higher education sector in Norway. Diversity in a broad range of issues is taken as a starting point of the project and regarded as a positive characteristic.


The classification system is meant to be a tool for making comparisons between higher education institutions in Norway along given dimensions, as well as providing a better overview and understanding of the different profiles of universities and university colleges in Norway. The classification system is also intended to give a picture of the performance of individual institutions on different indicators, and as such to be an instrument in developing institutional strategy and profile. Furthermore, the system is meant to highlight key parameters for the Ministry for the further development of the higher education sector in Norway. The project also aims to create transparency within the sector.

Like the U-Map project, the Norwegian classification system is intended to provide a scheme for illustrating similarities and differences between HEIs. In the Norwegian classification system each institution is visualized by a figure (or “flower chart”) in which the different parts, or petals, each reflect a certain dimension (a group of indicators). After a short introduction to the Norwegian “flower chart”, I will open a plenary discussion around two main themes (as a starting point):

• How can U-Multirank become a sufficient tool for policy-makers regarding education, research, internationalisation and other areas?

• How can U-Multirank become a sufficient tool for navigating and managing HEIs in different countries?
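The "flower chart" aggregation described above can be sketched roughly as follows. The dimension names loosely follow the U-Map structure, but the indicator groupings and the 0–1 normalised values are invented for illustration:

```python
# Sketch of the "flower chart" idea: each petal is one dimension,
# computed here by averaging a group of already-normalised indicators.
# Dimension names loosely follow U-Map; values are invented.

from statistics import mean

indicator_groups = {
    "teaching profile":          {"degree_levels": 0.8, "subject_range": 0.6},
    "research involvement":      {"publications": 0.4, "doctorates": 0.3},
    "knowledge exchange":        {"licensing": 0.2, "start_ups": 0.5},
    "international orientation": {"foreign_students": 0.7, "staff_mobility": 0.6},
    "regional engagement":       {"regional_income": 0.9, "local_graduates": 0.8},
}

def petal_lengths(groups):
    """One score per dimension: the length of that petal in the chart."""
    return {dim: mean(vals.values()) for dim, vals in groups.items()}

for dim, length in petal_lengths(indicator_groups).items():
    print(f"{dim}: {length:.2f}")
```

Plotted on a polar chart, these five scores would form the petals; the shape shows an institutional profile at a glance without reducing it to a rank.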

Professor George D. Kuh (National Institute for Learning Outcomes Assessment, U.S.A.)

Measuring What Matters: Forging the Right Tools to Assess Learning Outcomes

Worldwide, rising enrolments, economic pressures, and other factors make it imperative that institutions of higher education ensure that their graduates acquire the broad range of skills and competences demanded in the twenty-first century. Courses, credits, certificates and degrees are traditional proxies for student accomplishment, but they are only proxies. For these reasons, the performance-based tasks instructors assign to students—in their courses and field-based learning—must be the centrepiece of quality assurance in terms of student learning. This means, among other things, that universities must become better at assessing student learning outcomes, using the resulting data to inform resource allocation and other decisions, and communicating to their constituents how well they are performing.

Student attainment is typically determined by examining and reporting the average performance of samples of students using such methods as standardised tests, portfolios, and demonstrations. While these institutional averages are used in contemporary ranking systems, the metrics are inherently flawed because of Thorngate's (1976) postulate of commensurate complexity, whereby an empirical observation can simultaneously achieve only two of the three virtues of being general, accurate, and simple. This means that efforts to assess student learning using, for example, a simple approach such as a single standardised measure to make general claims about the performance of multiple institutions will not likely be accurate.
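A toy example illustrates the trade-off: an institutional average is simple and general, but it can hide exactly the differences that matter, sacrificing accuracy. The scores below are invented:

```python
# Sketch of the simple-general-accurate tension: a single institutional
# average (simple, general) hides distributional differences (accuracy).
# Scores are invented for illustration.

from statistics import mean, stdev

scores = {
    "Univ X": [50, 50, 50, 50, 50, 50],  # uniform performance
    "Univ Y": [20, 20, 20, 80, 80, 80],  # strongly bimodal performance
}

for name, s in scores.items():
    print(f"{name}: mean={mean(s):.0f}, spread={stdev(s):.1f}")

# Both institutions report the same mean, so a ranking built on the
# average alone treats them as equivalent, although the actual pattern
# of student attainment differs sharply.
```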

To respond to the accountability and improvement challenges, universities must forge tools sensitive to individual institutional contexts and a wide array of desired learning outcomes, and report the results in ways that are understandable and meaningful to external audiences and helpful for informing internal improvement efforts. If a robust, widely adopted qualifications framework exists, it may serve as a ‘universal translator’ (Ewell, 2013) for examining learning outcomes across diverse institutions. First-order institutional actions include curricular mapping to identify intersections between course content and the competencies set forth in the qualifications framework. Assignments, products, and examination questions must be audited, along with assignment templates and rubrics, to determine that they adequately represent proficiency in the specified competencies. A comprehensive record-keeping system is needed to post, house, and manipulate data about what students have learned, and for ‘rolling up’ programme- or major-field-level learning outcomes to the institutional level for benchmarking and comparative analyses.

In such an approach, assessment results become high-stakes in the sense that they certify (rather than merely represent) specific, pre-determined, publicly established attainment standards, thus meeting accountability expectations. At the same time, assessment results amassed and examined over time can reveal observable patterns of institutional and sectoral strengths and weaknesses, in particular competencies or for different groups of students, that can inform improvements in curriculum or pedagogy.
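The ‘rolling up’ step might look something like the following sketch. The programmes, competencies and proficiency rates are hypothetical, and a real record-keeping system would track far more detail:

```python
# Sketch of 'rolling up' assessed learning outcomes from course level
# to programme level. All structure and data are hypothetical.

from statistics import mean

# Proportion of assessed students judged proficient, per competency, per course.
courses = {
    ("History BA", "HIST101"):  {"written_communication": 0.82, "critical_analysis": 0.75},
    ("History BA", "HIST202"):  {"written_communication": 0.88, "critical_analysis": 0.70},
    ("Physics BSc", "PHYS101"): {"written_communication": 0.64, "critical_analysis": 0.81},
}

def roll_up(course_results):
    """Aggregate course-level proficiency rates to programme level."""
    programmes = {}
    for (programme, _course), comps in course_results.items():
        bucket = programmes.setdefault(programme, {})
        for comp, rate in comps.items():
            bucket.setdefault(comp, []).append(rate)
    return {prog: {comp: mean(rates) for comp, rates in comps.items()}
            for prog, comps in programmes.items()}

institution = roll_up(courses)
print(institution["History BA"]["written_communication"])  # roughly 0.85
```

The same aggregation applied one level up, across programmes, would yield the institutional figures used for benchmarking and comparative analyses.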

Tia Loukkola (European University Association) Break-Out Discussion: User-Perspectives on Rankings—Institutional Perspective

‘Rankings are here to stay’ is a sentence repeated constantly in publications and events discussing rankings in recent years, and it appears to be true. The interest in rankings is clearly not fading away; in fact, we could say it is increasing. After a decade or so of experience with the major global rankings, we are also starting to see the influence that rankings are having on higher education systems and institutions. In this session we will focus on the institutional perspective on rankings, while the other two parallel sessions discuss the policy and student perspectives.

When the major global rankings first appeared, they provoked a lot of interest among universities from the very start. Perhaps the major part of the discussion focused on the methodological flaws of the rankings. Naturally, this discussion about the shortcomings of the indicators used, and how the data is collected, still continues and remains relevant. However, as time passes and rankings are referred to by policy-makers on a more frequent basis, rankings have started to shape our higher education systems as well as the universities themselves. The focus of this session is to discuss how rankings can be, and are being, used by universities, and how universities are responding to the rankings. Some potential ways of responding to, or viewing, rankings brought up in the literature will be explored during the session:

- Rankings are often expected to enhance transparency regarding universities' performance and to provide universities with benchmarking opportunities. Is this the case? Or are universities perhaps using them for other purposes, such as marketing?

- What is the role of rankings in shaping institutional strategies and priorities?

- Can a university afford to stay out of rankings in a globalised higher education landscape? What are the consequences, if any, of not being included?

- To what extent are the rankers turning into policy-makers? Who guides the decisions of a university?

- What is the workload related to rankings at institutional level?

Dr. Eucharia Meehan (Higher Education Authority) Break-Out Discussion: Evaluating the Core Activities of Higher Education—Research

Knowledge-generation and challenging received wisdom are central to the mission of higher education. Institutions' research and teaching missions are inextricably linked and mutually beneficial. The centrality of institutions' research mission to their role in the ‘knowledge society’ should be reflected in rankings, but for the arts, humanities and social sciences, just as in many areas of science, engineering and technology, evaluation is not straightforward, is often dependent on proxies, and is subject to considerable time-lags. Thus potential limitations exist with respect to the implementation of ranking systems, including but not limited to (a) the challenge of attaining coverage of all disciplines and of interdisciplinary research and scholarship, particularly in areas with disparate publication cultures such as the applied sciences and engineering, and the humanities and social sciences; and (b) the ability to recognise new and emerging disciplines. The growing trend worldwide to consider the humanities and social sciences as a single broad field is also arguably a cause for concern.

Recognising the potential limitations in the evaluation of higher education research activity and the associated construction of rankings, and being aware of the indicators typically selected for research activity, we will explore the extent to which indicators used in rankings such as U-Multirank are helpful in developing a framework for the evaluation of research quality which reflects the broad role of higher education institutions in research and the knowledge society. Furthermore, we will explore whether ranking systems and associated indicators support the strategic development of research quality within higher education institutions, and consider whether academic researchers, higher education institutions, funding agencies and government share a common understanding of the parameters and time-frames, so as to develop a ranking system that addresses all needs.

Professor Sarah Moore (University of Limerick) Break-Out Discussion: Evaluating the Core Activities of Higher Education—Teaching and Learning

Any reasonable evaluation of teaching and learning should commit to collecting and exploring a range of data from different perspectives, in order to make sensible assessments and to provide useful guidance for enhancement. It also makes sense to break teaching and learning down into phases and to pay attention to each (e.g. planning, provision, processes and outcomes). Hard measures such as student performance should be accompanied by more qualitative ways of exploring the complexities of the process of teaching and learning. Curriculum design plays a role, but so does curriculum delivery, and the relationship between the two is not always uncomplicated. Demographics make a difference, but so can teacher orientation and innovative ways of engaging learner groups. Furthermore, the dialogue between the different constituencies for which teaching and learning matters is often quite fragmented and may lead to different conclusions about what good teaching and learning represents. The process of agreeing institutional graduate attributes often stimulates useful debate and some convergence around questions about what represents effective, purposeful, high-quality teaching and learning. This short presentation will talk briefly about some of these ideas and stimulate debate about what constitutes effective teaching and learning.

Muiris O’Connor & Dr. Vivienne Patterson (Higher Education Authority) Balancing Autonomy and Accountability through Transparency: The Irish Approach

This presentation will outline an emerging performance-evaluation framework for Irish higher education that will support strategic planning at institutional and national level. It is being developed within the context of the implementation of the National Strategy for Higher Education to 2030, with its emphasis on fostering the coherence, and maximising the performance, of the higher education system—as a system. As a transparency tool, it is intended to facilitate greater alignment between the advancement of institutional strategies and national priorities. The rationale for this work overlaps substantially with that which underpins the development of U-Multirank, in terms of promoting a shared appreciation of the breadth of the higher education mission and sensitivity to the dangers of narrow ranking frameworks.

Institutional profile templates have been developed which encompass the increasing range of roles, responsibilities and expectations that higher education as a whole must address, and which provide a basis for evaluating institutional performance against key performance indicators that reflect the mission diversity of Irish higher education institutions. The development of these templates within a broader performance-evaluation framework represents a new approach within the HEA to the presentation and organisation of data intended to support strategic planning at institutional and system levels. The value of these profiles, which are currently being developed, will grow over time, facilitating the monitoring of trends in higher education provision in terms of student numbers, fields of study, participation metrics, and the financial and human resource-base underpinning the sector. At the heart of the profile templates are the three dimensions of the core mission of higher education: teaching and learning, research, and engagement.

In seeking to account for the richness and depth of higher education institutions' missions, the HEA is cognisant of the vital importance of institutional autonomy and of the dangers of unintended consequences associated with the implementation of accountability frameworks. Thus the approach adopted, which is being developed in partnership with higher education institutions, seeks to balance autonomy and accountability through transparency.
The partnership approach adopted is further exemplified by the ongoing work on the development of the Irish National Student Survey (INSS), which is being implemented to enrich our understanding of student perspectives on the quality of teaching and learning in higher education. Furthermore, the creation of a National Forum for the Enhancement of Teaching and Learning for Irish higher education will focus on the development of strong collegiate networks and on academically led enhancement of the student learning experience. The National Forum will explore the potential of an open-access digital platform to exhibit and share pedagogical resources and research outputs. This has enormous potential to make transparent the scale and significance of higher education institutions' contribution to the dissemination and creation of knowledge and understanding.


Professor Jean-Charles Pomerol (Université Pierre et Marie Curie) Break-Out Discussion: Evaluating the Core Activities of Higher Education—Teaching and Learning

With reference to U-Multirank, this presentation will consider how best to evaluate teaching and learning in higher education. While calling into question the validity of staff–student ratios as proxies for quality, the presentation will emphasise the importance of using a range of qualitative and quantitative indicators to evaluate students' experience and satisfaction, institutions' programme provision, and graduates' academic and employment outcomes. The importance of distinguishing between different types of higher education institutions will be stressed, along with the merits of establishing a European agency for surveying students' and employers' satisfaction.

Deborah Roseveare (Directorate for Education, OECD) Update on the Assessment of Higher Education Learning Outcomes (AHELO)

The OECD's ‘Feasibility Study on the Assessment of Higher Education Learning Outcomes’ (AHELO) is now reaching its conclusion and has provided a rich set of insights into the value and importance of meaningful international assessments of learning outcomes in higher education, as well as into the challenges and complexities involved in their development. Planning for the future development of AHELO is based on an agreement among OECD countries that its main purpose should be to provide higher education institutions with feedback on the learning outcomes of their students, to enable them to improve these. Yet the Feasibility Study process has revealed important conceptual complexities that will need to be fully addressed before advancing to the technical development stage, including: (1) developing instruments that are both internationally comparable and relevant to higher education institutions across a spectrum of diverse institutional missions, student mixes, and priorities; (2) defining which learning outcomes are important and establishing how these can best be measured; (3) clarifying what data institutions need from AHELO to foster improvement; (4) establishing how AHELO could be interpreted and applied to teaching and learning practices within the institution; and (5) establishing how to persuade sufficient numbers of higher education institutions to participate.

Experience gained during the Feasibility Study process also underlines the need to situate AHELO within a broader policy framework for higher education, one that includes the expanding range of measures and instruments designed to stimulate improvement in higher education learning outcomes at national and international levels, and that may be designed to meet a range of purposes including accountability, transparency and improvement. Developing this framework approach to underpin AHELO development would also enable a deeper understanding of the strengths and drawbacks of different instruments; provide insights into how different instruments would lead to improvement, and under what circumstances; and enable all stakeholders to better assess which combination of instruments and approaches would be most relevant and cost-effective for strengthening learning outcomes in each country context.

Dr. Irene Sheridan (Cork Institute of Technology) Break-Out Discussion: Evaluating the Core Activities of Higher Education—Regional Engagement and Knowledge-Transfer

In a brief exploration of what is meant by the term ‘engagement’ and what an ‘engaged university’ might look like, this opening statement will survey a range of partnership interactions currently underway and develop an outline map of engagement activities. These engagements include:

• Contribution by the external partner to curriculum development and learning processes—through advisory panels, guest lectureships, work placements, project sponsorship, and the building of entrepreneurial and employability skills into the curriculum;

• Co-creation of curriculum and valuing of the workplace as a centre for learning—through the recognition of prior learning (RPL); work-based learning (WBL); professional postgraduate pathways; the development of flexible, responsive, and focused courses; learning-needs analysis; and collaboration in training and development planning;

• Collaboration in promoting greater access to learning—promotion of STEM subjects to under-represented groups, co-development of focused learning opportunities for the unemployed and marginalised;

• Collaboration in research and enterprise development—through the exchange and co-creation of knowledge in short, sponsored, applied research initiatives, the commercialisation of collaborative research, and the incubation of emerging enterprises.

Early REAP (Roadmap for Employer–Academic Partnerships) project findings indicated that successful and sustainable partnerships need ‘resources, relationships and realistic objectives’ on both sides of the partnership. The investigation of the breadth of existing relationships between Irish higher education institutions (HEIs) and external entities found that the HEI tends to operate not as a single homogeneous entity but as a series of separate and distinct units. From the perspective of the external partner, academic and research units operate as separate, and sometimes competing, entities. Initial investigation found that there was often no single view of the relationship within the HEI. This leads to difficulties: without an understanding of the breadth and extent of the relationship, it can be neither replicated elsewhere nor built into the strategy of the institution. Interdisciplinary responses are more likely to meet complex enterprise or community needs than disparate ones arranged around departments and single disciplines. Considering how improved engagement could best be facilitated, an approach consisting of the following key elements is proposed:

• Clear points of contact;
• Informed view of commitment, capacity and capabilities on both sides of the divide;
• Professional approach to managing engagement interactions;
• Stimulus for engagement through exemplars;
• Referral process throughout the higher education sector.

The statement will challenge higher education management to move engagement out of the periphery of technology-transfer and CPD offices and into the core of the institution. It will also point out that effective engagement requires considerable effort from enterprise, and that this effort is often under-estimated.

Professor Marijk van der Wende (Amsterdam University College) Rankings and the Visibility of Quality Outcomes in the European Higher Education Area (EHEA)

I will argue that although rankings are an exponent of globalisation, they do not necessarily spur the right responses in higher education to the demands of globalisation. They may lead to greater public and global transparency and to accelerated investment in research, but they are still no more than a mixed blessing. Indeed, they seem to constrain essential results in terms of enhanced system diversity, and more particularly in terms of excellence in undergraduate education. As university rankings have established themselves at the global level (dominated by those published by Shanghai Jiao Tong University (SJTU/ARWU), Times Higher Education (THE), and QS), universities are subject to ranking whether they like it or not. And even though rankings are far from problem-free, they seem to be here to stay. In other words, they cannot be (and in fact are not) ignored; research shows that they have a great impact on policy-makers at all levels. ‘To rank or to be ranked’ is thus the question. This argument also has a political dimension: it cannot be left to others (only) to define the criteria against which performance is measured and the methods on which comparisons are based, and (thus) on which reputation is established. In Europe this notion has led to the U-Multirank initiative.


The wealth of critiques of existing rankings has urged a new and more sophisticated approach. These critiques focus in particular on the fact that rankings are biased towards research, and more particularly towards the natural and medical sciences and the use of the English language. Essentially all of the measures used to assess quality and construct rankings enhance the stature of the large comprehensive research universities in the major English-speaking centres of science and scholarship, especially the U.S. and the U.K. In this way global rankings suggest that the model global university is English-speaking and science-oriented, and that there is in fact only one model that can have global standing: the large comprehensive research university. Common limitations on the methodological side are that most ranking systems evaluate universities as a whole, denying the fact that they are internally differentiated; that the weightings used to construct composite indexes covering different aspects of quality or performance may be arbitrary; and that they provide little to no guidance on the quality of teaching. It should be noted that the higher regard for research institutions cannot be blamed on the rankings as such, but arises from the academy’s own stance towards the importance of research. Although it can be argued that a league of world-class universities needs to exist to provide role models, the evidence that strong institutions inspire better performance is so far found mainly in the area of research rather than that of education. Critics even claim that world-class research universities need not be doing a good job at (undergraduate) education at all.
In particular, the concerns about biases towards certain functions (research) and types of institutions (global research universities) mean that rankings are seen as problematic in relation to the provision of information to stakeholders (especially students), institutional development, and diversification at system level. Holistic institutional rankings ignore not only the fact that higher education institutions are internally differentiated, but also the fact that they have different goals and missions. Rankings tend to norm one kind of higher education institution, with one set of institutional qualities and purposes, and in doing so strengthen its authority at the expense of all other kinds of institutions and all other qualities and purposes. This type of one-sided competition jeopardises the status of activities that universities undertake in other areas, such as undergraduate teaching, innovation, and their contributions to regional development and lifelong learning, as well as jeopardising institutions with different missions and profiles. Consequently, variation in institutional development and the needed diversification at system level come under pressure, since academic and mission drift (isomorphism) can be expected to intensify. Vertical stratification rather than horizontal diversification may be the result: hierarchy rather than diversity. Specialisation and diversification are not generated unless the incentive structure favours them.


Diversity is of particular importance in response to the pressures of globalisation. It is generally understood that diversity is a favourable condition for higher education systems to respond effectively to external demands: from students with an increasingly wide range of educational backgrounds, from the labour market, and from the economy (e.g. contributions to regional development, innovation and economic growth). However, insufficient diversity at system level has been identified as a problem in a range of higher education systems, including European higher education, where the lack of differentiation is seen as one of the key problems. Consequently, experts advocate ‘world-class systems’ rather than ‘world-class universities’. Rankings spur the formation of the latter and induce a zero-sum game, in the sense that resources become concentrated in flagship institutions at the expense of quality and balance in the remainder of the national system. World-class systems are internally differentiated in a systematic and consistent way (as, for instance, under the California Master Plan). However, rankings discourage the required differentiation of mission and provision, as they fuel academic drift, convergence, and the dominance of research over teaching. On top of the ‘publish-or-perish’ paradigm, which frames research as the only task of the academy that really matters, global rankings have redoubled the striving for research excellence. Clearly, what is ranked has little to do with education. Prestige can hardly be built on achievements in teaching. Excellence initiatives in teaching, which are costly if they are to be made general across systems, are far less frequent than excellence initiatives in research. It cannot be denied that the growing dominance of research is to the detriment of undergraduate teaching, and there has been abundant recognition that global rankings enhance this effect.
In this sense, in the context of the research university, undergraduate education is more endangered than professional or graduate education. It is rarely adequately prioritised. In the eyes of some it has even become the main problem of the research university, perhaps a handicap to individual and institutional progress in research and thus to global prestige. This kind of thinking leads readily to claims that the only real model of the globally competitive research university entails large and growing proportions of graduate students. So here is the paradox: rankings seek to predict future performance, but in fact they narrow our ideas about it by leaving out essential dimensions. Convergence, rather than divergence or an enlarged range of ideas and options, is the result. Several ensuing questions can be proposed for debate: Are single-indicator rankings better than multiple-indicator rankings? Should teaching and learning be ranked separately? Should commercial rankings be avoided, given that they will always have to produce dynamic results in order to generate revenue? Is it a case of ‘the more rankings the better’? And what will be the next ‘big thing’ in higher education after rankings?


Professor Frank Ziegele (CHE) & Dr. Don F. Westerheijden (CHEPS) U-Multirank: The Implementation of a Multi-dimensional International Ranking

U-Multirank was initiated to provide a fairer way of ranking higher education institutions than the current one-dimensional league tables of ‘world-class universities’. It is meant to make transparent to different user-groups not only the vertical but also the horizontal diversity among higher education institutions. After the proof-of-concept phase was completed in 2011, U-Multirank has now entered the implementation phase. This presentation will outline the current state of U-Multirank and the initial steps in our European Commission-funded project to implement the new ranking, which will result in the first published world-wide multi-dimensional ranking of higher education institutions early in 2014. Starting from the assumption that multiple stakeholders have multiple information needs in their dealings with higher education, U-Multirank was conceived as an alternative to the global league tables that have been published for almost ten years. In an intense interactive process involving representatives of higher education institutions, disciplines, professions, students and policy-makers, the U-Multirank feasibility study designed a multi-dimensional tool that makes the multiple performances of higher education institutions transparent in five dimensions:

• Education;
• Research;
• Knowledge-transfer;
• International orientation;
• Regional engagement.

In our presentation, we briefly outline the methodological underpinnings of this tool, focusing in particular on:

• Robustness, reliability, and validity of indicators, and the feasibility of data collection;
• No composite indicators or pre-defined indicator weightings;
• Personalised, user-driven ranking;
• Two levels of comparison: focused institutional rankings and field (academic discipline)-based rankings.

The current phase of the project focuses on recruiting an initial group of 500 higher education institutions across Europe and beyond with different missions and profiles (research universities, but also universities of applied science, colleges, etc.), and on continued stakeholder consultations on the methodology as well as the web tool. We will show what selected user-groups may expect to get out of U-Multirank’s draft web tool. The fields to be included in the first round of multi-dimensional ranking will be mechanical and electrical engineering, business studies, and physics. Thereafter, U-Multirank will be rolled out further in terms of its coverage of institutions and fields.

Department of Education and Skills Marlborough Street, Dublin 1 www.education.ie www.eu2013.ie