

Falling between the cracks: A scholarly-practitioner approach to the evaluation of doctoral education

Author: Valerie Anderson

University of Portsmouth
Richmond Building
Portland Street
Portsmouth, UK
PO1 3DE

Email: [email protected]
Telephone: +44 (0) 2392844029

Submitted to: The 13th International Conference on HRD Research and Practice across Europe, Universidade Lusiada de Vila Nova de Famalicao, 23rd – 25th May, 2012

Stream: Scholarly Practitioner

Submission type: Working Paper


Falling between the cracks: A scholarly-practitioner approach to the evaluation of doctoral education

Abstract

This paper discusses a scholarly-practitioner project to apply HRD insights about evaluation to doctoral education. Research students are important both for the research culture of universities and for the innovative capacity of the global labour market. I discuss the conceptual development and implementation of the learning evaluation framework I sought to introduce and the resulting insights into theory and the scholarly-practitioner endeavour. Following a contextual explanation of the development of doctoral education in the UK, I offer a conceptual critique of the dominant evaluation approach, present an alternative framework, and discuss its implementation in practice.

I highlight three issues arising from this ‘work-in-progress’. First, the purposes of evaluation are differently understood by those involved in doctoral training and education; most published normative approaches underestimate the effect of the internal and environmental context in which it occurs. Second, current evaluation models focus most attention on data collection and overlook the imperative of developing a wider sense of ‘ownership’ of evaluation, an issue that social exchange theory might illuminate. Finally, I discuss the challenges to the social identity of those who undertake the ‘messy’ and iterative process of engaging with operational priorities and conceptual thinking that characterises the scholarly-practitioner endeavour.

Key words: Evaluation; Higher Education; Doctoral Education; Measurement; Learning

1. Introduction and objectives

This paper reports a project to apply HRD insights about evaluation to doctoral training and education in a UK Higher Education Institution (HEI). It offers a reflective and reflexive account from the perspective of a ’scholar as practitioner’.

Within Higher Education (HE) in the UK, most attention is focused on evaluating the student experience of undergraduates. However, postgraduate education accounts for 25% of learners (HESA, 2012) and postgraduate research degree students are acknowledged to form an important ‘bridge’ between education and knowledge generation and application. Estimates suggest that research activities account for one-quarter of HEI revenue in the UK (UUK, 2005) and the movement of trained researchers from Higher Education into the labour market is important for the development of innovative responses to global competitive challenges (RCUK, 2006). As a result, employer representatives, Research Councils and the UK Quality Assurance Agency for Higher Education (QAA) now require HEIs to provide effective support and development processes for research students (RCUK, 2006; QAA, 2011).


In recognition of the importance of developing research excellence, adaptability, flexibility, and personal and career development skills amongst its doctoral students, my University established a University-wide Graduate School in October 2011. A key function of the newly established Graduate School is the provision of a comprehensive and coherent programme of development opportunities for its population of around 650 research degree students.

I am an HRD academic and am seconded to the Graduate School to take responsibility for initiating, developing, co-ordinating and evaluating the development programme for doctoral students. The Graduate School Development Programme (GSDP) that I initiated was extensive, involving the provision of around 100 workshops each year facilitated by about 75 different expert tutors from across the University and supplemented by a range of e-learning opportunities. The workshops cover general research methods topics; personal effectiveness skills; research ‘impact and engagement’ issues and research governance. This represents a considerable investment of time and other resources by the University and effective evaluation is a priority.

This paper reflects my multiple ‘positions’, including those of course designer, tutor, scholar, evaluator, researcher and manager. It is a ‘work in progress’, providing a reflective account of the process of developing a framework of evaluation based on the insights of the HRD literature. It discusses the implementation issues that are being encountered and the insights into concepts, theories and practice that have resulted from engagement with the process.

The objectives of the paper are:

To discuss the evaluation framework that has been developed; its initial implementation and the influences of both scholarship and operational pragmatism on the process.

To identify areas of fresh insight into concepts and theories arising from the scholarly-practitioner process.

2. Doctoral education and evaluation policy in the UK

This section provides a contextual background to the project. I begin by defining evaluation and measurement and their different characteristics before highlighting the dominance of the Kirkpatrick & Kirkpatrick (2006) model in UK national evaluation policy for doctoral education.

The terms ‘standards’, ‘measures’ and ‘evaluation’ are closely connected. Following Scriven (1991), measurement is defined here as a process of obtaining information as a result of comparison with a given standard. Evaluation is defined as the process of making judgements and determining value based on the information provided by measurement. Measurement and evaluation processes may be undertaken differently according to three sets of criteria (Edelenbos and van Buuren, 2005). The distinction between formative and summative functions is well known within the education environment, as is the second distinction between continuous improvement and ‘quality assurance’; the former being an internal process of assessing and enhancing ‘the learner experience’ and the latter being led by external or regulatory evaluators. The third distinction, which became fundamental to the approach to evaluation developed in this project, is between a rational/objectivist purpose, involving judgement and assessment of achievement against linear and hierarchical targets, and a constructivist function of enhancing learning opportunities through valuing and fostering intangible and unexpected outcomes of development that arise through processes of interaction between multiple interested parties or stakeholders (Guba and Lincoln, 1989).

The HRD literature highlights how measurement and evaluation are undertaken differently as a result of the socially situated, complex and dynamic environmental context in which they occur (Russ-Eft and Preskill, 2005). Within UK HEIs this context is dominated by the values of ‘new managerialism’ (Deem and Brehony, 2005) leading to a focus on students’ achievement against check-lists of behaviours and standards driven by performance indicators, league tables, and student satisfaction targets; something which, critics argue, overlooks intangible and developmental features of student learning (Schuck et al, 2008; Arthur, 2009; Blackmore, 2009).

This feature applies as much to research degree programmes as it does to taught courses. The UK ‘standards’ for doctoral education are set out in the Researcher Development Framework (RDF), which articulates extensive lists of knowledge, behaviours and attributes associated with research that students are expected to develop during their doctoral programme. The framework is structured into four domains, each of which is divided into three further sub-domains; the sixty-three descriptors that result each contain further indicators of expectation spread over three to five ‘phases’, which represent distinct stages of development or levels of performance (Vitae, 2012).

The Researcher Development Framework was devised and introduced in 2010 by Vitae, the organisation supported by RCUK to champion the development of doctoral researchers in UK HEIs and research institutes. The same organisation launched an evaluation framework for doctoral education in 2008. Originally known as the ‘Rugby Framework’, this ‘Impact Framework’ (IF) is derived from a training evaluation approach set out in Kirkpatrick and Kirkpatrick (2006) and Kearns (2005). It categorises five hierarchical points of evaluation. The first (foundational) level relates to investment in the infrastructure for doctoral education. The second level (referred to as impact level one) is the reaction of participants to training activities. Impact level two concerns the extent to which knowledge, skills or attitudes have been developed as a result of attending doctoral education programmes. Impact level three refers to the extent to which behaviour change and applications have occurred, and impact level four assesses the final results indicated by research outcomes, research quality and research capability.

3. Critique of the IF approach

The launch of the GSDP in 2011 provided an opportunity to utilise a constructively critical scholarly approach to the development of an evaluation framework. The Kirkpatrick taxonomy of evaluation was originally devised in 1959 and has been an enduring feature of training and development practice ever since (Russ-Eft and Preskill, 2005). When commencing this project as a scholar-practitioner, I critically reviewed the literature about the Kirkpatrick approach and identified five areas of conceptual and practical difficulty.

First, although the systematic nature of the ‘levels’ approach to the IF makes it attractive to policy makers, the literature suggests that its comprehensive approach is unwieldy and time-consuming for evaluators. In particular, measurement at the ‘higher levels’ of the framework is problematic as the distinction between ‘outputs’ (quantities) and ‘outcomes’ (consequences) is unclear (Guerci and Vinante, 2011; Nickols, 2005), something which is acknowledged by the ‘Rugby Team’ responsible for the Impact Framework. Whilst a comprehensive approach provides a range of different general descriptions of training effects, this ‘one-size-fits-all’ approach may fail to highlight the factors underpinning the success (or failure) of aspects of doctoral education provision. Brinkerhoff (2005) argues instead for a focus on the experiences of the individual students or members of research teams who have been most (or least) successful, to enable a more telling identification of the actual nature of the outcomes and to understand and learn from how they have occurred.

A second conceptual difficulty is the direct attribution of cause and effect that the framework assumes. At an institutional level, research activities and outcomes result from the collective effect of a range of interrelated factors including the research environment, quality of supervision, ability, and motivational factors. Detailed attempts to attribute cause and effect relationships directly to a doctoral training event or programme are unrealistic; a focus on the aggregate contribution made by a more dispersed range of learning processes is more appropriate (Sugrue et al, 2005; Deem and Brehony, 2005).

A third area of difficulty is the ’backwards’ time orientation. The IF ’looks back’ at programmes previously undertaken and the emphasis is on ’proving’ outcomes and value (Russ-Eft and Preskill, 2005). This overlooks opportunities for forward-looking assessments to ‘improve’ learning and adapt provision to engender and support the development of professional, reflective, academic practice by doctoral students.

The fourth issue is the focus of the IF on formal taught programmes. In recent years UK Government funding of research studentships has declined and new models of funding (principally by employers, overseas governments or students themselves) have led to an increasingly diverse and ‘non-traditional’ research student population representing a wider range of ethnicities, ages, experience and modes of study (Hodsdon and Buckley, 2011; CFGE, 2010). This requires increasing recognition of the role of flexible, ‘informal’ and non-classroom-based learning and development processes for doctoral students, grounded in regular supervision processes and supplemented through activities such as mentoring, work placements and informal learning opportunities (Boud and Lee, 2005; Malfroy, 2011).

Finally, the ‘separation’ between ‘evaluator’ and the ‘subject’ of the evaluation implied in the IF approach fosters a ‘distance’ between those involved in measurement and evaluation and other stakeholders involved in doctoral learning (Jayanti, 2011; Edelenbos and van Buuren, 2005; Guerci and Vinante, 2011). The IF pays scant attention to the different perceptions and expectations of stakeholders (such as supervisors, principal investigators, deans and research directors) without whose engagement doctoral student development is diminished. Nickols (2005) argues for a ‘stakeholder-return-on-expectation’ approach to evaluation, incorporating both formative and summative functions to capture multiple perspectives about the value of development processes, taking into account both past achievement and expected contribution.

4. Developing a revised framework

Having identified a range of difficulties with the ’status quo’ of the IF approach to evaluation I sought the assistance of a ‘critical friend’ from within the HRD domain to challenge me to turn my critiques into positive proposals for a new approach. We set out to move away from the objectivist and linear approach underpinned by the Kirkpatrick framework and to develop an inclusive, multi-dimensional, pluralistic model grounded in different stakeholder perspectives (Donaldson and Preston, 1995; Freeman, 1984; Nickols, 2005). Our use of the term ’pluralistic’, often used to refer to languages (such as English) or phenomena that have more than one centre, was our attempt to recognise that learning and evaluation are both emergent and result from dynamic interaction between different interested parties or stakeholders. In addition to data about the experiences of learners and ‘deliverers’ we aimed to give equivalent attention to the expectations and concerns of stakeholders including supervisors, principal investigators, deans, research directors, senior managers, project sponsors and funders.

As we developed the proposed framework we articulated four principles we wished to adhere to. First, a recognition that student learning, at doctoral as at any other level, is an unpredictable process that may lead to unintended, often tacit, outcomes which may be at least as valuable as explicit expectations of behaviours, skills or knowledge. Second, to illuminate the effects of this more dispersed range of learning processes and outcomes, we proposed to focus on longitudinal, aggregated institutional data in place of the ‘event by event’ evaluation approach of the IF. Third, mindful that evaluation can be seen as unrealistically time consuming we wanted to use existing institutional data sources where possible. Fourth, whilst accepting the necessity of quantitative ‘metrics’ we sought to integrate qualitative information into the evaluation process.

The framework we developed is illustrated in Figure 1 and resulted from a number of iterative discussions with those closely involved in the Graduate School project, as well as from the comments of reviewers from conferences to which we submitted our initial thoughts. This process of debate and revision led to three changes in our thinking. First, although a key principle of the new approach was to adopt a constructivist stance, the institutional imperative of efficient and effective provision of doctoral education and measurement of the student experience and learning outcomes was too powerful to overlook, and so we incorporated it more coherently. Second, I came to realise that my initial focus on benchmarks and KPIs, which I had adopted from the strategy literature, failed to address the provisional nature of learning outcomes and the importance of engaging stakeholders in their development. Third, my rather grand plan of measuring achievement against University or Graduate School strategic aspirations was set back when we realised that no explicit research strategy had yet been formulated.

5. Process of implementation


Figure 1: Evaluation framework (comprising efficiency and effectiveness measures, stakeholder engagement measures and strategic contribution measures)

When this scholarly-practitioner project commenced I expected that we would undertake a linear process of: conceptual review of ‘received wisdom’; development of an enhanced framework; data collection to inform our evaluation process and, finally, communication of our findings to different stakeholders. This was a naive expectation: having critiqued a rational-linear model of evaluation I now recognise that I was making rational-linear assumptions about my own ’research into practice’ process. The experience has demonstrated the two-way nature of the research–practice nexus. It is ’messy’ and dynamic: operational exigencies and perspectives interact with theoretical and conceptual understanding throughout the process (Nutley et al, 2007). Four elements of my project highlight this dynamic interaction.

5.1 Engagement and motives

Nutley et al (2007) highlight how different stakeholder groups such as policy makers, organisational decision makers and practitioners have very different expectations of measures and metrics. This project illuminated these differences. First, my own position was, and remains, highly ambiguous. I am committed as a practitioner to developing and managing an excellent and robust development programme of doctoral education. This drives my intention to develop and implement a worthwhile evaluation process. However, in equal measure I am an HRD academic who is motivated by (and equally rewarded by) knowledge dissemination through scholarly publications. In a context where time is short, my preference to write about evaluation impacted negatively on my ability to enact what I was writing about.

Second, the motives of other key individuals within the University varied depending on their background and position. For example, the Director of the Graduate School was supportive and also encouraged the idea of publications associated with the project. He was also committed to developing and utilising KPIs for the Graduate School as a whole, and so this project was very timely. However, in our discussion the Director made clear his view that existing institutional measures were more than adequate as ‘discrete, unambiguous facts’ that could be transferred in a straightforward way to policy or practice. He was less impressed with my desire to question existing practice and measures. Perhaps reflecting the view of Power (1997) that decision makers often ritualise the use of numerical indicators as a way of making complex issues and processes look manageable, another key stakeholder, the Director of Research, took a conceptually similar but nonetheless alternative position to the Director of the Graduate School. She was developing a new research strategy for the University and her focus was on the identification and development of new, strategically relevant measures, although it quickly became apparent that very little ‘baseline data’ existed.

5.2 Data availability

The project also revealed the importance of a range of anticipated but not fully recognised issues of data availability and generation. Planning for implementation of our evaluation framework quickly highlighted for me the extent to which universities are repositories for extensive data sets relating to the student experience but hold very little data from any other stakeholder perspective.

When it was established, the Graduate School invested in a software system to manage workshop bookings and attendance, and the fortuitous identification of an administrator with an interest in metrics and a good knowledge of the institutional student records system meant that I was able to identify how to gather, collate and tabulate operational efficiency and effectiveness data from both data sets. However, this initially required individual ‘data requests’ at appropriate points in time. In addition, the University participates in the biennial UK Higher Education Academy Postgraduate Research Experience Survey (PRES). This provides an opportunity for HEIs to gather student data relating to areas such as research supervision, the doctoral education infrastructure, and perceptions of provision for the development of professional research skills and career development capabilities (Hodsdon and Buckley, 2011). This survey was undertaken in May 2011 and was reported early in 2012. However, I found that no corresponding data about these issues from the research supervisors’ perspective had ever been collected.
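To make the ‘efficiency and effectiveness’ strand of the framework more concrete, the sketch below illustrates how attendance data exported from a booking system might be collated with a student records data set to produce simple aggregate measures. It is a minimal illustration only: the file names and column names (student_id, workshop, attended, faculty) are hypothetical placeholders rather than the University’s actual systems, and the code is not part of the project’s own reporting apparatus.

```python
# Illustrative sketch only: collating workshop attendance data with student records
# to produce simple efficiency and effectiveness measures. File and column names
# ('student_id', 'workshop', 'attended', 'faculty') are hypothetical placeholders,
# not those of the University's actual booking or records systems.
import pandas as pd

bookings = pd.read_csv("gsdp_workshop_bookings.csv")    # one row per booking; 'attended' is 0/1
students = pd.read_csv("research_student_records.csv")  # one row per research degree student

# Efficiency: attendance rate per workshop (attended places as a share of booked places).
attendance = (bookings.groupby("workshop")["attended"]
                      .agg(booked="count", attended="sum"))
attendance["attendance_rate"] = attendance["attended"] / attendance["booked"]

# Effectiveness/coverage: proportion of the research student population, by Faculty,
# attending at least one workshop, obtained by matching attendees to student records.
attendees = bookings.loc[bookings["attended"] == 1, "student_id"].unique()
students["engaged"] = students["student_id"].isin(attendees)
coverage_by_faculty = students.groupby("faculty")["engaged"].mean()

print(attendance.sort_values("attendance_rate"))
print(coverage_by_faculty)
```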

As a key part of my evaluation framework I wanted to gather qualitative data from workshop tutors, Deans, Research Tutors, Principal Investigators and others. Quality assurance processes relevant to research students are undertaken by Faculties on an annual basis, and the written reports that are generated provide a basis for some qualitative assessment. However, interrogation of this data indicated its limited depth, and it proved necessary to identify other opportunities when such data could be generated and utilised within the evaluation framework.

5.3 Analysis

A third ‘operational reality’ to impose itself was the time-frames for data gathering and analysis. A critique of the Kirkpatrick approach is its ‘one event at a time’ sequence. My framework set out to be more integrative, making use of institutional aggregated data where appropriate. However, the timing issues associated with this are complex and demanding, as a ‘month-by-month’ schedule of activity demonstrates (see Table 1). Compiling this schedule highlights the practical challenges of moving away from a sporadic approach to evaluation; namely, the requirement for continuous activity and an acceptance of the perpetually provisional nature of analytic outcomes. This in turn raises questions about the processes of data dissemination and communication, which are considered next.

Month | Collate and analyse | Resource requirement
April 2012 | PRES data | 5 days
April 2012 | Workshop feedback forms | Half a day
May 2012 | Supervisor survey | 5-10 days
June 2012 | Faculties consultation | 5 x Faculty meetings + admin time to arrange
June 2012 | Tutor consultation | One meeting + arrangements = 2 days
July 2012 | Workshop feedback forms | Half a day
August 2012 | Review UoP Research Strategy KPIs and GSDP implications | 1 day (but subsequent measurement implications)
September 2012 | QMD data on throughput/days per student etc | QMD year 1: establish base data for year-on-year comparison and establish reporting system
September 2012 | Establish benchmarking club and measures; Vitae conference and follow-up | 2 days for conference + 5 days for follow-up
October 2012 | Major Review progression and reports | Half a day
November 2012 | ASQRs | Half a day
December 2012 | Tutor consultation | One meeting + arrangements = 2 days
January 2013 | Workshop feedback forms | Half a day
March 2013 | Major Review progression and reports | Half a day

Table 1: Data collection and analysis schedule

5.4 Dissemination and Communication

The initial process of implementing the framework also highlighted how easy it is to become absorbed in data gathering and to overlook the issues associated with disseminating information in forms, and at times, that are most meaningful to different stakeholders. This raises interesting questions about the ‘ownership’ of evaluation. Traditional approaches to evaluation focus on the ‘mental models’ of trainers, providing data that are more meaningful to this group than to other stakeholders (Anderson, 2007). My framework acknowledges a wider range of ‘owners’, from senior management to doctoral candidates and their supervisors, and this has implications for dissemination and communication which I was slow to recognise. Specifically, it blurs the distinction between development processes and evaluation, which presents challenges of dissemination (of both form and timing) that I have yet to fully address.

6. Theory-practice reflections

This section begins with two reflections about the scholarly practice process before considering broader issues about the extent to which fresh insights about concepts and theories may be identified.

One personal reflection concerns the nature of the scholarly-practitioner endeavour. Traditional policy-led approaches to the relationship between research and practice assume a ‘rational-linear’ process where knowledge is generated and disseminated by researchers before being adopted, adapted or applied by practitioners or policy-makers (Nutley et al, 2007; Glasziou & Haynes, 2005; Starkey & Madan, 2001). My experience challenges this and illuminates the interactivity of the scholarly-practice process.

Second, the process has encouraged me to reflect on the social identity consequences of the scholarly practitioner process. In pursuing this project I have been attempting to achieve the socially distinctive qualities (Turner and Haslam, 2001; Turner, 2005) of both a scholar-researcher and a manager-practitioner. This has not been a great success; most of the time I have ‘fallen between the cracks’ of the academic and practitioner communities. Practitioners view my sporadic attempts at operational implementation with some amusement and impatience: I spend more time writing about evaluation than enacting it. At the same time academics view my work (particularly my writing about the project) as representing unnecessarily instrumental and legitimative knowledge, falling well short of the ’gold standard’ expected within the academic community (Walsh et al, 2007).

However, the project has generated some interesting insights into the theory and practice of evaluation. First, it highlights the multiple and often ambiguous purposes of evaluation when understood from the perspectives of different stakeholders, who variously are concerned that it ‘proves’ that all is going well, demonstrates institutional ‘value’, ‘improves’ student satisfaction or ‘enhances’ learning. Second, the project has shown the importance of environmental context in shaping approaches to evaluation. In spite of an explicit attempt to devise a constructivist framework, my model had to be adapted rather than adopted in recognition of the imperative of operational efficiency, throughput and student satisfaction issues. In addition, the provisional nature of strategic measures in a dynamic and fast-changing local context led me to make important revisions to my thinking about measurement options. Third, although a critique of the IF approach is its time-consuming nature, the framework that I have developed requires the commitment and ‘ownership’ of a wider variety of groups to articulate expectations of doctoral education processes and to participate in generating evaluation data. In its attempt to engage multiple stakeholders, it is likely to be at least as (if not more) time-consuming.


Finally, the project offers fresh insight into the conceptual ‘positioning’ of evaluation. Evaluation is undertaken with normative intent within the broad assumptions of the systems theory of organisations (Russ-Eft and Preskill, 2005). The evaluation literature represents frameworks rather than explanatory theories. The model that I have tried to develop and introduce is both normative and constructivist. In its pluralistic intent it requires active involvement and engagement by many different stakeholders. Achieving such levels of participation is challenging, and I have wondered whether social exchange theory might offer insights in this area. The social exchange approach is predicated on utilitarian and behaviourist theories (Cook and Rice, 2006), assessing the extent to which relationships and networks are formed as a result of a subjective cost-benefit analysis by the individuals involved in social transactions. To be willing to participate in and ‘own’ evaluation processes, therefore, stakeholders must estimate that the ‘reward’ of their involvement, time and attention will be greater than the cost. Social exchange theory has been subject to a number of critiques (see, for example, Miller, 2005), not least as a result of its individualist approach and its reduction of human behaviour to a rational process associated with economic theory. However, given the common critique that there is scant evidence that traditional evaluation models are enacted in practice, social exchange theory may provide a lens through which to examine the factors that encourage and inhibit the involvement of different stakeholders in evaluation processes.
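Expressed as a worked inequality (my own shorthand rather than notation drawn from the cited sources), this cost-benefit condition for participation might be written as:

```latex
% Illustrative notation only, not taken from Cook and Rice (2006) or Miller (2005):
% stakeholder i is willing to participate in and 'own' the evaluation process only
% when the subjectively expected reward of involvement exceeds its perceived cost.
\[
  \text{participate}_i \iff \mathbb{E}_i[R_i] > C_i
\]
% R_i : the anticipated benefit to stakeholder i (e.g. better provision, useful data, recognition)
% C_i : the time and attention stakeholder i must invest in the evaluation process
```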

Conclusion

In offering a reflective and reflexive account from the perspective of a ‘scholar as practitioner’, this paper outlines the development and implementation of an evaluation process for doctoral education in a UK HEI. It highlights the influences of both scholarship and operational pragmatism on the process and offers some reflections about the insights into theory and practice that I have identified. This paper is a work-in-progress and so definitive conclusions are inappropriate. However, three points may be drawn. First, evaluation is ‘contextually loaded’; its purposes are differently understood by stakeholders in different ‘positions’ and affected differently by the internal and environmental context in which they operate. Second, current evaluation models focus most attention on data collection and overlook the imperative of developing a wider sense of ‘ownership’ of the evaluation of learning and development; an issue that social exchange theory might illuminate. Finally, scholarly practice involves participating in a ‘messy’ and iterative process whereby operational exigencies and priorities interact with theory and conceptual thinking. It presents challenges for which I was unprepared. In attempting not to ‘fall between the cracks’ of acting as both ‘practitioner’ and ‘researcher’, I have learned that the scholarly-practitioner process is not for the faint-hearted.

References

Anderson, V. (2007) The value of learning: from return on investment to return on expectation. London: Chartered Institute of Personnel and Development.

Arthur, L. (2009) From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4), 441-454.

Blackmore, J. (2009) Academic pedagogies, quality logics and performative universities: evaluating teaching and what students want, Studies in Higher Education, 34(8), 857-872.

Boud, D. & Lee, X. (2005) ‘Peer Learning’ as Pedagogic Discourse for Research Education, Studies in Higher Education, 30(5): 501-516.

Brinkerhoff, R.O. (2005) The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training, Advances in Developing Human Resources, 7(1): 86 – 10.

Commission on the Future of Graduate Education [CFGE] (2010), The Future of Graduate Education in the United States. [online] http://www/fgereport.org/rsc/pdr/ExecSum.pdf

Cook, K.S. & Rice, E. (2006) Social Exchange Theory, in J. Delamater (Ed.) Handbook of Social Psychology, Handbooks of Sociology and Social Research, pp. 53-76.

Deem, R. & Brehony, K.J. (2005) Management as Ideology: the Case of ‘New Managerialism’ in Higher Education, Oxford Review of Education, 31 (2): 217-235.

Donaldson, T. & Preston, L. (1995). The Stakeholder Theory of the Corporation: Concepts, Evidence, and Implications. Academy of Management Review, 20(1): 65-91.

Edelenbos, J. & van Buuren, R. (2005) The learning evaluation: a theoretical and empirical exploration, Evaluation Review, 29, 591-612.

Freeman, R. (1984). Strategic Management: A Stakeholder Approach. Boston: Ballinger.

Glasziou, P. & Haynes, B. (2005) The paths from research to improved health outcomes, ACP Journal Club, 142(2): A8-A10.

Guerci, M. & Vinante, M. (2011) Training evaluation: an analysis of the stakeholders’ evaluation needs, Journal of European Industrial Training, 35(4): 385-410.

Guba, E.G. & Lincoln, Y.S. (1989) Fourth generation evaluation. Newbury Park, CA: Sage.

Higher Education Statistics Agency [HESA] (2012) Headline Statistics, [on-line] http://www.hesa.ac.uk/

Hodsdon, A. & Buckley L. (2011), Postgraduate Research Experience Survey: 2011 Results. [online] http://www.heacademy.ac.uk/assets/documents/postgraduate/PRES_report_2011.pdf


Jayanti, E.B. (2011) Through a different lens: a survey of linear epistemological assumptions underlying HRD models, Human Resource Development Review, 10(1), 101-114.

Kearns, P. (2005) From return on investment to added value evaluation: the foundation for organizational learning, Advances in Developing Human Resources 7(1), 135-145.

Kirkpatrick, D.L. & Kirkpatrick, J.D. (2006) Evaluating training programmes. San Francisco: Berrett-Koehler.

Malfroy, J. (2011). The Impact of University–Industry Research on Doctoral Programs and Practices, Studies in Higher Education, 36(5): 571-584.

Miller, K. (2005) Communication Theories. Maidenhead: McGraw Hill.

Nickols, F.W. (2005) Why a Stakeholder Approach to Evaluating Training, Advances in Developing Human Resources, 7(1), 121-134.

Nutley, S.M., Walter, I. & Davies, H.T.O. (2007) Using Evidence: How research can inform public services, Bristol: Policy Press.

Power, M. (1997) The audit society: Rituals of verification. Oxford: Oxford University Press.

Quality Assurance Agency for Higher Education (QAA) (2011) Doctoral Degree Characteristics, [online] http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/Doctoral_Characteristics.pdf

Research Councils UK, (2006) Adding Value: How the Research Councils Benefit the Economy, [online] http://www.rcuk.ac.uk/documents/publications/addingvalue.pdf

Russ-Eft, D. & Preskill, H. (2005) In search of the holy grail: return on investment evaluation in human resource development, Advances in Developing Human Resources, 7(1), 71-85.

Scriven, M. (1991) Evaluation thesaurus, Thousand Oaks, CA: Sage.

Schuck, S., Gordon, S. & Buchanan, J. (2008) What are we Missing Here? Problematising Wisdoms on Teaching Quality and Professionalism in Higher Education, Teaching in Higher Education, 13 (5): 537-547.

Starkey, K., & Madan, P. (2001) Bridging the relevance gap: Aligning stakeholders in the future of management research. British Journal of Management. 12: 3–26.

Sugrue, B., O’Driscoll, T., & Vona, M.K. (2006) C level perceptions of the strategic value of learning, Research Report, Alexandria, VA: American Society for Training and Development and IBM.


Turner, J.C. (2005), Explaining the Nature of Power: A Three-Process Theory, European Journal of Social Psychology, 35, 1-22.

Turner, J.C. & Haslam, S.A. (2001) Social Identity, Organisations and Leadership, in M.E. Turner (Ed.) Groups at Work: Advances in Theory and Research, Hillsdale, NJ: Erlbaum, pp. 25-65.

Universities UK [UUK] (2005) The Economic Impact of UK Higher Education, University of Strathclyde, [online] http://www.universitiesuk.ac.uk/Publications/Documents/economicimpact3.pdf

Vitae (2012) Introducing the researcher development framework, [online] http://www.vitae.ac.uk/researchers/429351/Introducing-the-Researcher-Development-Framework.html

Walsh, J.P., Tushman, M.L., Kimberly, J.R., Starbuck, B. & Ashford, S. (2007) On the Relationship Between Research and Practice: Debate and Reflections, Journal of Management Inquiry, 16(2): 128-154.
