
Title

Self-evaluation of pedagogic practice in Higher Education – from connoisseurs to expert witnesses.

Name of author

Claire Sparrow

Author affiliation

Principal Lecturer, School of Law, University of Portsmouth, and PhD student on Lancaster University's PhD programme in Higher Education, Research, Evaluation and Enhancement (HEREE)

Email address

[email protected]


Abstract

Purpose

This paper examines how academics evaluate the development of their own professional pedagogic practice. The aims of the research are to examine the approach to evaluation strategy taken by academic staff when experimenting with and developing their pedagogic practice, and to establish the extent to which scholarly theory of evaluation informs such strategies. The research also critically examines whether self-evaluation might be developed to improve its effectiveness and wider credibility.

Design

The approach to self-evaluation and the use of scholarly theory were investigated through a survey of academic staff in a Business School in a UK Higher Education Institution (HEI). The responses were examined to identify common themes and were also analysed and discussed in the context of a critical review of the literature on self-evaluation.

Findings

Self-evaluation of practice occurs at a formal level (in the context of institutional quality processes) and also in ways that are less systematic. Scholarly theories of evaluation were rarely used explicitly. This less formal evaluation may be very powerful in influencing practice but is difficult to capture and validate at institutional level. It may therefore remain an unexamined and hidden part of what academics do in practice.

Implications

The research primarily aims to examine whether there is a problem and has only considered a small sample in one context. Suggestions are made, however, about how approaches to self-evaluation may be made more explicit and rigorous and hence a more credible part of institutional quality processes.

Keywords: Evaluation, Higher Education, Teaching Practice


Self-evaluation of pedagogic practice in Higher Education – from connoisseurs to expert witnesses.

Introduction

Bamber defines self-evaluation as:

… not just … what academics have been obliged to do as a result of quality cultures, but also … what they choose to do in order to improve their practice, and their understanding of that practice. (2011a, p. 166)

I have chosen to focus on self-evaluation because I observed that I was experimenting and developing within my own pedagogic practice but with little explicit use of scholarly theories of evaluation. While I suspected that other academic colleagues were in much the same position, this was only an assumption. Part of the aim of this paper is therefore to examine the approach taken by other academic staff and whether their approach takes into account such scholarly theories.

Self-evaluation is not a simple area. It takes place in the context of processes which are more or less constrained by the institution (Bamber, 2011a) or which are influenced by the different disciplines in which academics operate (Tight, 2012, p. 205). Tight also makes the point that much research in Higher Education (HE) does not engage explicitly with theory and so risks being dismissed as ‘low level’ or lacking in general significance (2012, p. 196). Borrego and Henderson also identify a tendency among ‘change agents’ to consider only a single or limited set of perspectives when they plan and undertake changes to teaching (2014, p. 245).


Self-evaluation is nonetheless significant in terms of its effects. Smith suggests that the greatest impact on a student’s learning experience is likely to come from individual teachers and how they choose to design and deliver their modules (2008, p. 517). In approaching this research, my feeling was that academics were carrying out evaluation of their pedagogic practice but that the less formal aspects of this were not really acknowledged as ‘proper’ evaluation, even by the academics concerned. McCluskey characterizes much of this less formal evaluative practice as ‘moments of evaluation’ (2011, p. 101). These are commonly occurring evaluative practices which an expert evaluator would recognize but which are not necessarily seen as ‘evaluation’ by those involved. McCluskey describes these moments as embedded practices which may be ‘just as professional and possibly more appropriate in the given context than those carried out by experts in evaluation’ (2011, p. 102). He notes that such evaluation may equally be poorly designed, relying too heavily on tacit and informal evidence. The issue that he identifies is how such moments of evaluation may be more thoughtfully designed without making them too formal. As self-evaluation is a potentially powerful force, this paper aims to consider how it may become more credible and effective.

This paper will discuss whether absence of explicit evaluation theory is in fact a problem in the area of self-evaluation and what benefits a more scholarly approach might bring. It will also consider how embedded evaluation practices might be captured and used. It therefore has the following research aims:

1. To examine the approach to evaluation strategy taken by academic staff when experimenting with and developing their pedagogic practice.

2. To establish the extent to which scholarly theory of evaluation informs such evaluation strategies.

3. To critically examine how self-evaluation might be developed to improve its effectiveness and wider credibility.

The paper will first consider the policy context relevant to this issue, in particular the de-legitimation of professional knowledge, as well as relevant literature on the role of self-evaluation in HE. It will then examine data on approaches to evaluation of pedagogic practice from a survey of staff in one faculty at a UK HEI. These data will be discussed and analysed to examine how any ‘evaluative moments’ might be captured and perhaps even strengthened by use of scholarly theories of evaluation. It will ultimately consider how the academic practitioner may be re-imagined as an expert witness – someone whose experience and knowledge qualifies them to offer specialist evidence to decision-makers – and how this re-imagining might shape organisational and individual approaches to evaluation.

Context

Within English HE there has been a de-legitimation of professionals as evaluators of their own practice, especially where the services they provide are publicly funded. Examining the policy context of evaluation in HE, Trowler describes the growing influence of ‘new managerialism’ in public services, which has led to a greater emphasis on measuring the effective and efficient delivery of a service to consumers (2011, p. 20). Where HE is publicly funded, the government will therefore wish to see the sort of evidence that allows it to compare performance between HEIs. Trowler further describes relationships between funders and funded becoming ‘low-trust’ (2011, p. 20), with growing evaluation through external agencies such as the Quality Assurance Agency (QAA) and the Higher Education Funding Council for England (HEFCE). Put simply, the government does not trust HEIs to regulate themselves.

Where there has also been massification of HE and increased government funding of student places, pressure has grown to demonstrate that government is getting good value for money and that students have the information needed to choose the best product in a competitive market. This may be through instruments such as the National Student Survey. HEIs can also demonstrate that teaching standards are being maintained by ensuring all of their academic staff undergo training in teaching and become at least Associate Members of the Higher Education Academy (HEA), the body that oversees the Professional Standards Framework for teaching in HE. It is notable that, for many, this will be the only formal training in teaching and learning theory and practice that they undertake in their careers. Its importance as a source of pedagogic knowledge will be returned to later in the paper.

In the context of evaluation in HE, Henkel identifies one school of evaluation as the ‘connoisseurial’ approach. She describes the connoisseur as reliant on the ‘deep knowledge and … intuitive judgements of those immersed in the field being evaluated’ (1998, p. 287). While these connoisseurs are experts in the context in which they operate, there may be mistrust from outsiders (such as government) where they are evaluating themselves and where their data do not appear to be systematic or objective. Such evaluation is also unlikely to satisfy external funders, as there is no way to compare such opinions between HEIs. Institutional evaluation therefore tends to focus on generating data that may be compared between courses, departments and other HEIs, rather than on individual self-evaluation by practitioners. As a result, the connoisseur is certainly not currently in fashion. This paper will consider whether these embedded professionals may be re-imagined not as connoisseurs delivering judgement but rather as expert witnesses offering testimony based on their experience and knowledge. In English courts, such expert opinion evidence is commonly admitted where its purpose is to assist the decision maker by providing experience and knowledge in a specialist field. The decision maker will assess the weight of such evidence and may accept or reject it. It may be that this picture of the self-evaluator – offering critical insights into their specific context – is a more acceptable image of the expert in evaluation than the connoisseur. I will return to this idea in the analysis section.

Turning to the more local context of this paper, the data considered come from staff in one faculty of UniA, a post-1992 HEI. The concerns and issues discussed may well not be common to all HEIs and this is a limitation of the present evaluation. However, it is suggested that the issues in the broader policy context are likely to be issues for other post-1992 HEIs at least. In 2011-12 there were 163 HEIs in the UK from which data were collected by the Higher Education Statistics Agency (HESA) (Universities UK, 2013, p. 3), and of these approximately 73 were established post-1992. They therefore form a substantial part of the HE landscape in the UK. These HEIs also tend to have high student numbers and a strong focus on teaching; it is therefore suggested that the changes in policy context noted above affect them strongly and merit further scrutiny.

The role of self-evaluation in HE

This paper focuses on what Saunders et al. (2011) identify as a separate domain of evaluation practice. Bamber characterizes self-evaluation as generally self-driven in terms of the focus and form of the evaluation, and notes that it tends to be located within the specific contextual needs of the evaluator (2011b).

In terms of understanding the context of self-evaluation, we might characterize modern HEIs as sets of interlocking activity systems (Engestrom, 2009). These systems might concern research activity or teaching. In this context, we might see an intersection between an individual’s teaching practice and their involvement in the quality systems and processes within their institution.


Engestrom identifies that there may be ‘contradictions’ between those systems, which he defines as ‘historically accumulating structural tensions within and between activity systems’ (2009, p. 57). Such contradictions might occur where conditions in one system change (such as the policy environment in HE) and aggravate some part of another activity system (such as self-evaluation in the context of individual pedagogic practice). Where some aspects of self-evaluation are informal or based on ‘evaluative moments’, it may be that quality systems capture only some of the evaluative data actually being used by academics to develop teaching practice. Engestrom suggests that contradictions may be fertile ground for ‘expansive learning’ – creative efforts to make the systems work and generate new solutions. Because expansive learning is prompted by a need to make systems work better together and is rooted in its context, it is characterised by experimental and incremental change. It will be interesting to see whether the data in this research reveal evidence of such contradictions and, if so, whether expansive learning appears to be taking place.

This potential for incremental growth has also been described by Patton as ‘tinkering’, or ‘bricolage’ (2011, p. 264). Patton uses these terms when describing how developmental evaluation may pick up and build on existing approaches (such as action research or reflective practice) already being used in the contexts evaluated. Bamber also characterizes self-evaluation as involving ‘tinkering’ with existing practice (2011b, p. 195), not necessarily with the aim of publishing findings more widely. There is a clear link here to McCluskey’s ‘evaluative moments’ (2011, p. 101), the unrecognised and embedded evaluations undertaken as part of activities such as process design, generation of data or integration of new knowledge into an existing process. As McCluskey suggests, this sort of embedded practice may well generate immediately useful and highly appropriate evaluation because it grows from what is needed in that context.


Bamber suggests that, because self-evaluation is grounded in the reality of its context, ‘emerging outcomes are worked with, not against’ (2011b, p. 198). She identifies a commitment to acknowledging unintended and unanticipated outcomes as well as the views of stakeholders. There are echoes here of Patton’s developmental evaluation, where genuinely accepting and working with complexity and the unanticipated are key aspects of the approach (Patton, 2011). Working with the emergent is also identified by Dick as part of action research (2007). He describes action research as a cyclic process in which a teacher plans, acts, reflects and plans again. For this to be more than just ‘action’ there needs to be research for understanding and the generation of emergent theory tailored to the context in which the experimentation takes place (2007, p. 159).

While action research approaches to HE are quite widely reported and discussed in the literature, evaluation explicitly using (for example) developmental evaluation does not feature at all. It may be that this is not a significant problem and that approaches such as action research (as described by Dick) or reflective practice (as described by Schon, 1983) already supply an adequate theoretical repertoire for academic practitioners. The suggestion, however, is that many practitioners are not explicitly using even these theoretical frameworks when planning and evaluating practice. Borrego and Henderson suggest that being explicit and mindful of scholarly theory in this context may lead to developments which have influence beyond the specific programme and even lead to an advancement of knowledge (2014, p. 245). An absence of theory may well also expose self-evaluation to the criticisms identified by Tight – of being low level and of limited significance (2012, p. 196).


While this paper is also not a systematic investigation of what is happening, it takes a first step by testing the waters of what academics actually have in mind when they evaluate developments in their pedagogic practice.

Methodology

I wanted to explore what approach other academics took to evaluating experimentation within their own pedagogic practice. Where the subject matter in question is self-evaluation, it is important to know what other individuals do, or perceive themselves as doing.

In order to gather these data, I carried out a short survey of colleagues in one faculty of UniA that asked three open questions:

1. How do you experiment within your own pedagogic practice? Experimenting might include development of your practice, new programmes or units and enhancement of existing programmes and units.

2. How do you plan that experimentation?

3. What approach do you take to evaluating your experimentation? For example, would you say that you have an evaluation strategy or that it is informed by a particular scholarly theory of evaluation? Please explain how and why you take this approach.

The questions were designed to elicit data that would address directly the first two research aims (examining the approach taken to evaluation strategy and the extent to which it was informed by scholarly theory). The data generated would then be a foundation for a discussion of the third research aim: how self-evaluation might be developed to improve its effectiveness and credibility.

The survey was created using a Google Form and the link was then shared via email. This was opened to the 160 academic colleagues within one faculty. Responses were automatically downloaded to a spreadsheet. The responses were first read by question to see whether any common attitudes, approaches and strategies could be discerned, and these provisional themes were noted. There was then an iterative process in which the data were re-read and compared to the themes to ensure that the themes accurately reflected what respondents had said. Where themes appeared to be sub-sets of a larger theme, these were merged. The data were then read by respondent to see whether certain attitudes and approaches tended to be linked within individual responses. As I am also a member of staff at UniA, I was mindful of my own assumptions and preconceptions when analysing the data and attempted to ensure, by careful re-reading, that the analysis reflected honestly what respondents had actually said. All responses were anonymous and no data have been used which may indirectly identify individual respondents. Where individual responses are reproduced, respondents are identified by the row number on the spreadsheet of responses.
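
The analysis just described was carried out manually, but for readers who want a concrete picture of the by-question and by-respondent passes, the short sketch below shows one way such provisional theme coding could be organised in Python. It is illustrative only, not the procedure used in this study: the file name responses.csv, the column names Q1 to Q3, and the theme codes and keywords are all hypothetical.

```python
import csv
from collections import defaultdict

# Hypothetical provisional theme codes and the keywords used to flag them.
# In the study itself the themes emerged from repeated reading; fixing them
# up front here is purely for illustration.
PROVISIONAL_THEMES = {
    "informal_feedback": ["informal", "conversation", "chat"],
    "formal_quality": ["unit feedback", "questionnaire", "survey"],
    "collaboration": ["colleague", "team", "peer"],
}

# Themes found to be sub-sets of a larger theme are merged, mirroring the
# iterative step described in the text.
MERGE_INTO = {"collaboration": "professional_conversations"}

def code_responses(path):
    """Read survey responses by question and note provisional themes.

    Returns a mapping of theme -> list of (spreadsheet row number, excerpt),
    so that excerpts stay identified by row number, as in the paper.
    """
    themes = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        # Row 1 of the spreadsheet is the header, so data rows start at 2.
        for row_number, row in enumerate(csv.DictReader(f), start=2):
            for question in ("Q1", "Q2", "Q3"):  # hypothetical column names
                excerpt = row.get(question) or ""
                text = excerpt.lower()
                for theme, keywords in PROVISIONAL_THEMES.items():
                    if any(keyword in text for keyword in keywords):
                        merged = MERGE_INTO.get(theme, theme)
                        themes[merged].append((row_number, excerpt))
    return themes

if __name__ == "__main__":
    for theme, excerpts in sorted(code_responses("responses.csv").items()):
        print(f"{theme}: {len(excerpts)} coded excerpts")
```

Keyword matching is, of course, a crude stand-in for the interpretive re-reading described above; the sketch is intended only to make the shape of the coding process explicit.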

Data

There were 18 responses to the survey from the 160 colleagues invited – a response rate of just over 11%. It is acknowledged that respondents are more likely to be those who are interested in discussing pedagogy, but it is argued that the response rate is adequate to explore what approaches academic staff take to self-evaluation within their own pedagogic practice. These are authentic and diverse voices that reveal some common themes and issues of interest to this paper.


Question One

How do you experiment within your own pedagogic practice?

This question was designed to be broad enough to include the sort of small-scale enhancement which occurs from class to class as well as larger innovations, such as new units and courses. The term ‘experimentation’ was chosen as meaningful and also able to capture the ‘tinkering’ that many academic staff undertake in their teaching.

Responses ranged from innovating new modules through to enhancement at micro level (12 stated that ‘every lecture and seminar’ may be an experiment). The majority of responses described a process of enhancement and continual development. 13 stated, ‘I continually experiment with new delivery and assessment methods….’ Others described minor changes in resources, ways of communicating, embedding ‘real-world’ skills and novel teaching methods.

Most responses to this question approached it from the viewpoint of the individual teacher (perhaps not surprisingly, given the phrasing of the question). Two responses (3 and 6) described working with a team – either in response to a departmental initiative or a colleague’s research interests. While these responses were not typical, they are a reminder that experimentation may be a group or individual activity.

Question Two

How do you plan that experimentation?


Responses suggested the importance of context when planning experimentation. Respondents discussed constraints, how evaluation tended to be responsive to what was happening at that time, and the impact of working with other academic colleagues.

Constraint

Constraint here was both institutional and in terms of personal time and effort. There was some reference to the inflexibility of quality processes from 7, who said:

The system is so inflexible, and the cost in time and effort of making changes so high, that major changes, that would merit systematic evaluation, are pretty rare.

Many other responses, while not as strong on this aspect of institutional control, referred directly or indirectly to the investment of time and effort required when planning and experimenting. In the context of question one, 13 described a number of new units they had created but then said:

…but I do find that they put me under a lot of pressure to train and support the students, and I am just getting too old for this...

Other respondents described planning, consultation and locating resources, all of which constrained what might be achieved. The language used (‘I am just getting too old for this’) also suggested the personal investment and challenge that may be involved in making changes to one’s practice in this context.


Planning

There was a clear thread in the responses which identified the approach to planning as quite informal – nearly half of responses made some mention of this. 12 described planning ‘on the hoof’ and 5 described their approach as ‘evolutionary’, with changes made over time to suit what was currently needed. Lack of formality did not appear to mean that there was no planning. The experimentation described clearly came about through a process of reflection – it did not simply happen. What the responses do seem to suggest is that, for some, change happened as a quite immediate response to feedback or circumstances.

A little over half of responses described processes of thought and reflection that aimed to harness past experience to avoid pitfalls. 3 and 6, for example (whose experimentation crossed over with research or departmental activity), described a more formal process of planning and testing. 6 in particular described fully a process of planning, experimentation and reflection ‘on the outcomes in class and the feedback’, with a plan ‘for an enhanced version next year’ and plans for redesigned assessment in subsequent years. Within this sample, this was an unusually fully realized description of the process – perhaps because it linked to other research interests. Even in responses that did not tease out this process, there tended to be an acknowledgement of past experience being used to plan future experimentation, suggesting that respondents repeated cycles of reflection, planning and testing.

Collaboration and consultation

Another significant thread was collaboration and consultation with others. This might be with fellow teachers as a way of building an evidence base of experience. 8, for example, emphasized particularly the importance of ‘professional conversations’ as a ‘key part of refining ideas and sparking creativity’ and helping to ‘avoid repeating mistakes or at least being aware of the dangers’. Discussion with peers was also mentioned by 9 as a way of ensuring that the planned changes were developed in a ‘coordinated’ way. While there are not enough data in these responses to suggest a deliberately participatory approach to evaluation, it seems that a significant proportion of respondents consulted with stakeholders (including external examiners and accrediting bodies) when planning. 9’s approach also suggested that involving team members in the planning process was in part to ensure ‘buy in’ from those who would be responsible for delivering the change.

Question Three

What approach do you take to evaluating your experimentation?

Nearly all responses said that they made use of informal student feedback – as distinct from more formal feedback in institutional satisfaction questionnaires. 4 expressed this particularly neatly:

Informal student enjoyment and feedback - very immediate, very simple, very effective. I may glean ideas from the literature but what tells me if it works is the actual practice.

Formal quality processes and data were also mentioned by many respondents, but always alongside informal data. 5, for example, said:

Evaluation comes formally via unit feedback. However, I also speak extensively to students on a 1-2-1 basis to get their thoughts so I can either discontinue with that experiment or modify it.


Again, evaluation seems to run in cycles (5 had already described their approach as ‘evolutionary’). Informal feedback from students seems to be strongly linked to discontinuing or modifying experiments in action.

Another source of feedback mentioned by several respondents was discussion with fellow teachers – particularly those involved in delivery of the change. As well as providing a fuller picture of what worked, this seemed to perform a therapeutic function – as 8 noted:

Discuss how it went with colleagues both to pat myself on the back or to work out why it didn't go as planned.

A number of responses mentioned positive feedback as an obvious source of satisfaction and reward. Where experimenting involves challenge, celebrating success is welcome and may be a key reason why individuals choose to face that challenge. 13, for example, identifies student success as a key reward:

I get a lot of informal messages from students to say that the highly relevant work experience has helped to get them onto shortlists and given them something to enthuse over in interviews. Some are kind enough to say that it got them the job. But I realise that this is quite biased info.

The process of evaluating – based on what 13 saw as incomplete data and a partial sample – was nonetheless valuable in motivating experimentation.


13 was not unusual in downplaying this sort of feedback. Other respondents also disparaged the rigour of their evaluation approach or the quality of their data. 14, for example, said:

It's very unscientific and based predominantly on what I feel to be working. As mentioned, this is ameliorated by student feedback, but this is likely to also be subjective.

While there tended to be a slightly embarrassed acknowledgement of the lack of objectivity present in the evaluation approach, some responses asserted the value of practitioner experience. 6, for example, said:

I am not sure that it is a strategy but I see myself as a reflective practitioner and an experienced teacher (since 1995). I can gauge interest and engagement fairly accurately, and solicit verbal feedback.

This might be seen as an example of connoisseurial judgement – based on just knowing when something is right. However, it is clear that respondents such as 6 do not rely solely on intuition; they use data from observation and feedback to inform their views, and this may in fact be an example of an ‘evaluative moment’. As McCluskey suggests, while the type and range of data are different to those used in institutional quality processes, such embedded practices may be more compelling and revealing than formal evaluation (2011, p. 101). The issue with such practices is capturing the data and using them appropriately so that the evaluation is not based on evidence that is too informal or simply tacit. As some respondents note, individual conversations in a class context are informative, but they may not be representative and there may even be a tendency to cherry-pick what we want to hear. Where such evaluation is being used to inform change, there is a clear argument that it should be open and explicit – to allow for it to be tested and shared by others.

Moving into a consideration of the place of scholarly theory in the practice of respondents, 12 made a particularly interesting statement. After acknowledging the role of formal and informal feedback, they said:

As with grammar usage, it is a more natural process rather than a close reliance on a theory (but such theories may well underlie that and imbue practice pervasively).

This is a neat analogy – that rules may be observed in our ways of doing things without necessarily articulating what those rules or assumptions are (or perhaps we might say that practices become ‘engrooved’ (Saunders et al., 2011, p. 204) or ‘embedded’ (McCluskey, 2011, p. 101)). The use of the ‘grammar’ analogy, however, also suggests that we are speaking a language governed by the same rules and assumptions. Looking at the other responses, most respondents were unable or reluctant to identify a theoretical framework that they used when evaluating their experimentation. This is not to suggest that the approaches were unprincipled, or even ineffective, but it does suggest that scholarly theory is not widely discussed and articulated at this stage of experimentation. Without that articulation, it may be an assumption too far that we all speak the same language. Those respondents who work collaboratively are in a better position to test that assumption, as they must articulate their conclusions and support them with evidence. Such a ‘professional conversation’ allows the group to test and develop a shared language of evaluation as well as a wider bank of examples on which to draw. This suggests that collaborative working has greater potential for effective evaluation and for developing an ‘evaluative culture’ (Saunders, 2006, p. 207). The bigger question is how to encourage such opportunities across an institution – a theme I will return to in the analysis that follows.

Returning to the responses, Kolb’s cycle of experiential learning (1984) featured most often. 10 noted that they were introduced to Kolb as part of their initial teacher training – and this may explain in part its popularity. Kolb’s cycle (concrete experience – reflective observation – abstract conceptualization – active experimentation) is also congruent with what respondents described when planning experimentation. As Dick observes (in the context of action research), the cycle of planning, experimentation and reflection is a natural process of problem solving (2007, p. 149), and so the fit with the lived experience may be deeper still than training.

Other than Kolb, no other scholarly theory of evaluation was shared by more than one respondent. Discipline knowledge was referred to by 9, who cited their professional background and use of Kirkpatrick’s levels of evaluation of training (2006). This supports what Tight has observed about how discipline knowledge may be borrowed as part of pedagogic practice (2012, p. 205).

Analysis

Relationship between self-evaluation and institutional evaluation

Returning to Bamber’s definition of self-evaluation (2011a, p. 166), we can see that the respondents in this survey distinguished between what they were obliged to do as part of quality processes and what they chose to do when evaluating their practice. There is undoubtedly a crossover between the institutional processes and self-evaluation, with most responses referring to the use of ‘formal’ as well as ‘informal’ data.


Remembering Engestrom’s intersecting activity systems (2009), we might describe this as individual academics being engaged in an activity system which flows into the institution (producing reports on teaching quality and student satisfaction) as well as in an intersecting system which centres more on personal reflection and on what works in their specific context. The systems intersect and each may be usefully informed by the other. However, it seems likely that much of the less formal data and reflection are screened out of the quality reports in the knowledge that they are not perceived as credible or authoritative in that context. There was awareness that this embedded evaluation would not bear scrutiny from others because it might come from an unrepresentative sample or was gathered in an informal way. There appears to be what Engestrom might call a contradiction between the data referred to in quality reports and the messier and more informal embedded practice that is also happening. If this is a contradiction, then Engestrom would suggest that there is potential for learning to grow from it, even for new cultural practices to develop (2009, p. 58). It may be that the answer for some individual academics has been to divide their evaluative practices: to address formal quality processes where required, but to maintain an ‘under the radar’, less formal, layer of practice which draws on ‘evaluative moments’ and is seldom reported more widely. If activity systems are multi-voiced, as Engestrom suggests (2009, p. 56), then some of the voices are presently not being heard, and so opportunities to learn and perhaps develop a more ‘evaluative culture’ (Saunders, 2006, p. 213) in HEIs are being missed.

If informal evaluation is not being captured by formal quality processes – and if formal quality processes dominate the discussion of issues such as teaching quality – then it follows that key evidence is not being heard or discussed by decision makers. Without a full evidence base on which to draw, institutional decision-makers have only part of the picture when they try to respond to changing policy conditions. As discussed earlier in this paper, UK HE is living through a time of near constant change in terms of policy, and so informed decision making is more challenging than ever. Saunders suggests that by allowing all stakeholder voices to be heard in evaluation (through gathering case studies, for example), it may be possible to create ‘provisional stabilities’ (2006, p. 213), spaces where it is possible to pause and reflect on the effects of change and then plan how to respond. By including more qualitative evidence from the point of view of participants, he suggests that evaluation may be a ‘bridging tool’ allowing effective and incremental development based on a sound understanding of what is happening in practice. If so, then it again seems that an opportunity may be missed to strengthen decision making at institutional level if the evaluative moments referred to by respondents are not sought out and captured in some way. This is not to suggest that such evidence should be accepted merely because it comes from practitioners, only that it is a vital part of a complete assessment of institutional life.

Use and Usability

Another key theme that emerges from the responses is the ‘use’ of self-evaluation. Saunders makes a distinction between ‘use’ and ‘usability’ (2012, p. 422). ‘Use’ he defines as the capacity of the evaluation outputs to effect change in that organizational context. ‘Usability’, on the other hand, refers to the factors in the design of an evaluation that contribute towards (or hinder) its potential use. When dealing with self-evaluation, one would expect any potential gap between use and usability to be small. If the evaluation is prompted by a pressing need to address a problem, then there is already fertile ground for that evaluation to be used. Saunders identifies a number of factors affecting the usability of an evaluation. These include relevance, degree of specificity to the context, how well reality is reflected in the evaluation’s concerns, and the involvement of stakeholders in the evaluation and decision-making (2012, p. 434). In the context of individuals or small groups evaluating their own practice, there is likely to be a high degree of usability, as that evaluation will be embedded in the realities of the context and immediately involves at least one stakeholder as evaluator. The sort of self-evaluation described by respondents, which goes beyond simply addressing formal quality processes, is therefore more likely to be of ‘use’ to the individual who needs it and ‘usable’ in terms of its design addressing the immediate concerns of those involved.

Patton contends that simply participating in an evaluation may lead to changes in the thinking and behaviour of the individuals involved, even before any final output is produced – a sort of ‘process use’ (1998). In self-evaluation, there is significant potential for the individual to become more engaged in how they think about their own practice and identity as a professional teacher. However, this is only a potential – and the evidence from the survey suggests that many respondents experiment to address problems but take less time to evaluate fully. All respondents were mindful of the need to evaluate and shared common ground about the reliability of data (even if they disagreed on what made it reliable). However, it seems that evaluation as a means of developing one’s own practice remains largely unacknowledged. Implicit in the responses is the perception that, while informal evaluation is happening, it is not explicitly a part of the institutional conversation on quality or policy. Given its potential impact, the question must be how it can be captured and brought into the light.

As discussed earlier, it is suggested that working with peers strengthens the potential for evaluation to be usable. A minority of respondents discussed collaboration or consultation with peers when planning experimentation in their own practice and its evaluation. Where these ‘professional conversations’ were discussed, the respondents were very positive about the effects. They mentioned the benefits of tapping into a wider bank of experience and knowledge, sparking creativity and ensuring buy in from stakeholders. For these respondents at least, support from peers was a source of strength and confidence, as well as enriching the experimentation and ensuring its successful delivery. By involving others who would deliver the teaching, there is a more participatory approach, and there must also be a more fully articulated theory of what the experiment is to achieve and how. If you need to explain the approach to others, then this in itself tests the idea before it is even launched – something Patton describes as an example of process use (1998, p. 227). The question is perhaps then how to stimulate more of this sort of conscious collaboration and consultation within our own pedagogic practice. I will return to this issue later in the analysis and conclusion.

Conscious use of scholarly theory relating to evaluation

Within the responses there was very little acknowledgement of scholarly theories of evaluation outside of Kolb’s experiential learning cycle. It seemed that many respondents had encountered this particular theory as part of training delivered early in their careers and that it had stuck as part of their pedagogic repertoire. Respondents had also borrowed from their discipline areas. This suggests that, in terms of an ‘evaluative toolkit’ for pedagogic practice, individuals tend to rely on their initial training and to borrow useful approaches from within their disciplines. They are focused very much on the practical – finding out what works. Their toolkit may therefore be small but perfect for the job at hand – and, as McCluskey suggests, evaluation that grows from knowledge of its specific context may be more effective than that which is led by an expert evaluator. However, where the toolkit is bounded by one’s own experience, it may leave the self-evaluator ill-equipped to deal with something new.

In the context of curriculum evaluation, Hubball and Pearson (2011) argue strongly for greater engagement with scholarly theory. A failure to do so, they suggest, leads to evaluation which may be sporadic rather than ongoing and which may fail to adopt appropriate methodologies for ‘fostering engagement and change’ (2011, p. 191). They describe some approaches to evaluation as ‘ad hoc’ and note that these evaluations are less likely to be used by an institution because data are perceived as unhelpful, there is poor visibility within and outside of academic units, engagement with stakeholders may be poor, timelines are ill-defined and there is a lack of clarity about who is responsible for reporting what (2011, p. 192). Greater planning of data gathering and a more conscious use of scholarly theory could certainly improve the use and usability of the evaluation beyond its immediate context. This is especially so when presenting evaluation to one’s institution or even more widely within the academic community. However, looking at the responses from the survey, respondents already appeared sensitive to what ‘good’ data and evaluation might look like – despite not gathering data systematically. This may be because much of the experimentation described was on a smaller scale than the course design discussed by Hubball and Pearson. It may also be because much of the data were hard to capture – perhaps they were conversations, reflections while driving home from work or a snatched conversation with students between classes. It could be argued that having a planned approach would allow these data to be captured, but it is not always realistic to expect this where practice is evolving in the moment and in complex and changing environments. Formalising such data gathering also runs the risk of losing immediacy and authenticity.

Hubball and Pearson acknowledge the complexity of the HE environment and that academic staff are dealing with competing pressures and a lack of time – as well as a lack of agreement on even how important such evaluation and development is. They suggest that a key factor in promoting more scholarly approaches is to have adequate institutional support for efforts to evaluate (2011, p. 192). I would also suggest an enrichment of the repertoire of theory offered during the initial training and induction of new HE teaching staff. It is not for this paper to say what an appropriate syllabus would include, but I would suggest that theories that work with what academics actually do, and which offer a practical approach, would be most likely to be adopted and used. Action research, reflective practice and tools such as the RUFDATA framework (Saunders, 2000) seem congruent with existing practice and may offer a bigger evaluative toolkit to suit different sizes of project. A more conscious use of theory might also situate evaluation planning and discussion more objectively and as part of a wider critical and scholarly conversation. This could be a great support for lone self-evaluators, especially when tackling outcomes that are unexpected or even unwelcome.

There is also an argument that it is too easy to dismiss data and evaluation generated ‘on the hoof’, especially where they appear to rely heavily on the expertise of the evaluator rather than on objective evidence. While one response may be to encourage a more mindful and scholarly approach to self-evaluation, there also needs to be some adjustment to how we perceive what actually happens in practice. The informality of self-evaluation may allow it to be responsive and context sensitive. A conversation with a group of students in class about an issue in a programme is authentic (even if we cannot be sure how representative their views are). Patton’s developmental evaluation (2011) argues strongly for an approach which embraces the complexity of its context and which can work with it to support the ongoing development of a project or programme. This may involve responding promptly to problems based on the best evidence available – without formal data or the generation of a report. This is not to say that rigour and rules are absent – the evaluator bases their opinions on all of the evidence and tries to respond in a timely and context-appropriate fashion. Greater familiarity with approaches such as developmental evaluation may offer new ways of thinking about the purpose of self-evaluation within institutions and what it may legitimately look like.

A greater openness from institutions and from practitioners to the role of such informal and embedded practices is important. While it is unrealistic to suggest that all such evaluative moments should be formally caught and recorded, there are ways in which they can be captured and used as part of a more complete discussion at institutional level. This is not to suggest that practitioners should assume the position of connoisseur or that their evidence should be accepted as decisive. Rather, it should be acknowledged that those who are designing modules and working directly with staff and students in teaching and assessment do have experience that is worth hearing. These data will not be tidy and will not present a complete picture of how things are, even across that institution. The evaluations of the practitioners involved will, however, offer evidence from which a bigger conversation may develop. As Saunders suggests, making a space for such stories (alongside more formal data) could enrich institutional discussions and allow a place of ‘provisional stability’ from which to make decisions in rapidly changing times. Equally, for self-evaluators, sharing experience allows assumptions to be tested and ideas to be exchanged. Where diverse voices are heard, this may encourage greater participation and the stimulation of a more ‘evaluative culture’.

Conclusion

This paper set out to examine the approaches to self-evaluation of pedagogic practice taken by a group of academic staff and the extent to which these were informed by scholarly theories of evaluation. The data suggest that while evaluation takes place both formally and informally, it seldom consciously involves scholarly theory and is often not part of any planned strategy. What theory respondents did report using tended to come from early training or from within their own discipline. It was noted that those respondents who described collaboration with peers, or whose research intersected with teaching, had a more fully formed evaluation strategy than those who described working alone. The analysis suggested that where collaboration was possible and practical, it could bring benefits in terms of better planning, support and the usability of the evaluation.


The paper also examined what benefits an increased use of scholarly theories of evaluation might bring to self-evaluation of pedagogic practice. The literature suggested that greater planning and use of theory might lead to evaluations which were more credible within HEIs and the academic field of pedagogy. Initial training of new academics could be more thoughtfully designed to introduce a wider theoretical repertoire and to tap into the existing skills in evaluation that academics have in their own disciplines. However, I have also suggested that simply saying that academics need to become more like professional evaluators is only a partial answer. Using the thinking behind approaches such as developmental evaluation and process use, evaluation may be defined more widely. It may be seen as desirable to develop an ‘evaluative culture’ and to ensure that what actually happens in practice has the space to be articulated. Without admitting the potential authenticity of this sort of expert evidence, contradictions and opportunities to learn remain hidden. An evaluative culture, as Saunders suggests, will struggle to develop where stakeholder voices are not heard (2006, p. 213). There may no longer be a place for the practitioner as connoisseur, but they may still serve as expert witnesses whose evidence may be weighed, accepted or rejected. For this seam of evidence to be hidden or disconnected from the measurement of academic quality and standards is to waste a resource. As Webster-Wright says:

…most professionals [are] enthusiastic learners who want to improve their practice. Let us listen to their experience and work to support, not hinder, their learning. Rather than deny, seek to control, or standardize the complexity and diversity of professional learning experiences, let us accept, celebrate and develop insights from these experiences to support professionals as they continue to learn. (2009, p. 728)

Word count: 7,216


Reference List

Bamber, V., 2011a. Self-evaluative practice: diversity and power, in: Saunders, M., Trowler, P., Bamber, V. (Eds.), Reconceptualising Evaluation in Higher Education: The Practice Turn. McGraw-Hill Education.

Bamber, V., 2011b. Evaluative practice and outcomes: issues at the self-evaluative level, in: Saunders, M., Trowler, P., Bamber, V. (Eds.), Reconceptualising Evaluation in Higher Education: The Practice Turn. McGraw-Hill Education.

Borrego, M., Henderson, C., 2014. Increasing the use of evidence-based teaching in STEM higher education: a comparison of eight change strategies. J. Eng. Educ. 103, 220–252.

Dick, B., 2007. Action research as an enhancement of natural problem solving. Int. J. Action Res. 3, 149–167.

Engestrom, Y., 2009. Expansive learning: toward an activity-theoretical reconceptualization, in: Illeris, K. (Ed.), Contemporary Theories of Learning: Learning Theorists ... in Their Own Words. Routledge, UK.

Henkel, M., 1998. Evaluation in higher education: conceptual and epistemological foundations. Eur. J. Educ. 33, 285–297.

Hubball, H., Pearson, M.L., 2011. Scholarly approaches to curriculum evaluation: critical contributions for undergraduate degree program reform in a Canadian context, in: Saunders, M., Trowler, P., Bamber, V. (Eds.), Reconceptualising Evaluation in Higher Education: The Practice Turn. McGraw-Hill Education.

Kirkpatrick, D.L., Kirkpatrick, J.D., 2006. Evaluating Training Programs: The Four Levels. Berrett-Koehler, San Francisco.

Kolb, D.A., 1984. Experiential Learning: Experience as the Source of Learning and Development. Prentice-Hall, Englewood Cliffs, NJ.

McCluskey, A., 2011. Evaluation as deep learning: a holistic perspective on evaluation in the PALETTE project, in: Saunders, M., Trowler, P., Bamber, V. (Eds.), Reconceptualising Evaluation in Higher Education: The Practice Turn. McGraw-Hill Education.

Patton, M.Q., 1998. Discovering process use. Evaluation 4, 225–233.

Patton, M.Q., 2011. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford, New York.

Saunders, M., 2000. Beginning an evaluation with RUFDATA: theorizing a practical approach to evaluation planning. Evaluation 6, 7–21. doi:10.1177/13563890022209082

Saunders, M., 2006. The “presence” of evaluation theory and practice in educational and social development: toward an inclusive approach. Lond. Rev. Educ. 4, 197–215.

Saunders, M., 2012. The use and usability of evaluation outputs: a social practice approach. Evaluation 18, 421–436. doi:10.1177/1356389012459113

Saunders, M., Trowler, P., Bamber, V. (Eds.), 2011. Reconceptualising Evaluation in Higher Education: The Practice Turn. McGraw-Hill Education.

Schon, D., 1983. The Reflective Practitioner. Maurice Temple Smith, UK.

Smith, C., 2008. Building effectiveness in teaching through targeted evaluation and response: connecting evaluation to teaching improvement in higher education. Assess. Eval. High. Educ. 33, 517–533.

Tight, M., 2012. Researching Higher Education. Society for Research into Higher Education & Open University Press, Maidenhead.

Trowler, P., 2011. The Higher Education policy context of evaluative practices, in: Saunders, M., Trowler, P., Bamber, V. (Eds.), Reconceptualising Evaluation in Higher Education: The Practice Turn. McGraw-Hill Education.

Universities UK, 2013. Patterns and Trends in UK Higher Education 2013. Universities UK.

Webster-Wright, A., 2009. Reframing professional development through understanding authentic professional learning. Rev. Educ. Res. 79, 702–739. doi:10.3102/0034654308330970