
Social Science Information Studies (1981), 1 (247-253) © 1981 Butterworths

THE TRAVELLING WORKSHOPS EXPERIMENT: AN ATTEMPT AT ‘ILLUMINATIVE EVALUATION’

COLIN HARRIS*

Director of the Centre for Research on User Studies, University of Sheffield, England

ABSTRACT

An account of the background against which a non-traditional strategy of evaluation was adopted in the Travelling Workshops Experiment at Newcastle upon Tyne Polytechnic. The author points out that research rarely proceeds along textbook lines and describes how the clear-cut distinctions originally envisaged for the internal and external components of the evaluation of the project soon became blurred in the process of evolving practicable procedures. At the same time, the research team had to resolve the conflict which arose between pressures towards a traditional, tightly controlled approach to evaluation and their own preference for a more exploratory strategy in which behavioural objectives would not be specified at the outset. An alternative model was found in ‘illuminative evaluation’, which eschews pre- and post-testing in favour of a more wide-ranging study of the contextual determinants of the success of an innovation. Use of this model nonetheless poses many problems, not the least of which is that illuminative evaluation is a broad strategy rather than a set of specific techniques.

In February 1977, I wrote a paper on ‘illuminative evaluation of user education programmes’. The paper was later published (in October 1977) in Aslib Proceedings. The abstract stated that the paper ‘describes the adoption of an evaluation strategy, “illuminative evaluation”, by a major user education research project, the Travelling Workshops Experiment’ (Harris, 1977). This paper is, in a sense, the story behind that paper.

It is now commonly accepted that research rarely proceeds in the manner suggested in textbook accounts of ‘scientific’ research. In order to understand how the research methodology of the Travelling Workshops Experiment evolved, it is necessary to understand some of its general background.

The Travelling Workshops Experiment was concerned with education in information use, or user education (Clark et al., 1981). It was located in the library of Newcastle upon Tyne Polytechnic, and funded by the British Library Research and Development Department for a period of four and a half years in all.¹

* Colin Harris is Director of the Centre for Research on User Studies at the University of Sheffield. He was previously Subject Specialist in Social Welfare with the Travelling Workshops Experiment at Newcastle upon Tyne Polytechnic, where he had already been working in research on academic library use. Before that, he was a college librarian in Toronto. His training is in social administration, sociology, and library and information science.


The Project Head and grant holder was the Polytechnic Librarian; the Project Leader for the first year of the project was a senior library school lecturer; there were also three subject specialists, in biology, mechanical engineering and social welfare, who were librarians with qualifications in chemistry, physics and social administration/sociology respectively. Only one of these five people had training in social research and experience of library research. None was trained or experienced in educational research.

ACTION OR RESEARCH?

Two aspects of the project are important. First, it was known from the very beginning, before the project proposal was prepared, that the project would be subject to an independent assessment, which would be required to ‘assess the impact’ of the project. Second, the project proposal itself was concerned with the ways in which the project would demonstrate to academics and academic librarians how user education could be incorporated into academic programmes, and with the details (who, what, when, where, how) of the project’s own teaching or instructional programme.

The project, then, was to consist of action. If it was ‘research’, then the nature of the research was not made clear. The task of evaluation or assessment, involving what was more recognizable as conventional research, was to be conducted separately (by Aslib, it was later learned).

This division of labour later became more complex. After the beginning of the project, as the design of courses of instruction proceeded, it became clear that some assessment or evaluation by the project team would be necessary. The arrangement agreed was that the project would be concerned with evaluation as it related to educational objectives (the adequacy or effectiveness of particular methods or media, the relevance of content, etc.) and that Aslib would be concerned with general project objectives and the extent to which they had been achieved. This arrangement worked quite well in the early part of the project, for the simple reason that it was possible to distinguish between the project’s objectives and the educational objectives formulated within the project. This distinction, in turn, was made possible by the assumption within the project and by the project’s advisory committee that the project’s educational programme would be formulated on the basis of behavioural objectives.

The distinction soon broke down, for several reasons:

1 Some members of the project team became increasingly uneasy about the reliance upon behavioural objectives. It had been argued by some individuals associated with the project that behavioural objectives must be spelled out or: (i) we could not later know whether or how far we had been successful (i.e. achieved our objectives) and (ii) other people, particularly the prospective users of whatever the project produced (guidelines, materials, services), could not know whether they were appropriate to their needs. The belief growing in the project was that it might be inappropriate to formulate a set of objectives almost arbitrarily and then set about achieving them. Although we dared not say so explicitly at the time, there seemed to be a strong case for doing what intuitively seemed more or less appropriate and then trying to discover the appropriateness to other people of what had been done.


This argument would not be advanced in all instructional situations, of course. We thought it to be right because the project was exploratory and experimental. A firm statement of objectives would certainly enable others to identify whether their objectives were similar, but that, as it turned out, would have been dysfunctional: many people probably eventually used the materials produced by the project to pursue quite different objectives.

2 Partly because of the inappropriateness of behavioural objectives, the distinction between the educational aims as the province of the project and the project aims as the province of the independent evaluation was also seen as inappropriate. It became clear that the project team should be concerned with a variety of factors or issues that would affect the relevance or effectiveness of educational activities. They would be every bit as concerned with the ‘impact’ of the project as a whole as the independent evaluator would. It had gradually occurred to the project team that although the project’s central activity was ‘user education’ in some shape or form, the perspective needed to evaluate its activities was not simply that of user education, but one of the development and dissemination of an educational innovation. Thus the educational aims, though obviously not unimportant, were but a part of the totality of the project’s concerns, which also included problems of gaining access to a wide range of institutions of higher education, of introducing the innovation from outside, gaining support from within and, at the same time, providing support from outside, trying to make an organization ‘work’ for the outsider, and trying to appreciate the politics of an institution as they involved students, faculty and librarians.

3 As the project developed in the early stages, it became clear that the feedback needed for both the project and the independent evaluation came from the same sources (students, faculty, librarians and others) and used the same data collection devices: questionnaires, unstructured interviews, observations and documents (the project also used tests). To avoid duplicating large parts of instruments, the project and the evaluation collaborated in questionnaire construction and so on.

While much of the data to be used by the two parties was similar, it was assumed that the purposes to which the information would be put would be different. The early assumption was that the project’s use of the data would be for formative evaluation purposes, i.e. constantly providing input into the revision of instructional programmes or materials. Use of the data by the independent evaluation would be for summative evaluation purposes, or in other words to provide the funding body with a final assessment of the impact of the project as a whole. Again this distinction soon became unworkable. For many reasons, the independent evaluator’s activities contributed to the project’s formative evaluation; indeed, the opinions, observations and findings of someone not involved in the project on a day-to-day basis were particularly important for formative evaluation purposes. Similarly, though less importantly, the project team would have to write some summative account of the project as a final report to the sponsors.

4 The final reason for the breakdown of the distinction between the project as activity and the independent evaluation as research was simply that some members of the project team were concerned about the status of the project as a research project and sought either to understand more clearly the nature of the activity in which they were engaged or, perhaps, to identify or create some basis for legitimizing their roles as researchers.


Not all of the project team were concerned. One accepted that ‘we aren’t really doing research’; another was adamant that he had been appointed to the project as a bibliographer and teacher, not as a researcher. Similarly, some others associated with the project expressed no concern about the nature of the research, the ‘methodology’, etc. However, concern was expressed by some scientists and ‘hard’ social scientists who wanted the behavioural objectives specified, the tests of learning emphasized, control groups used and statistical analysis emphasized.

Some of the team struggled hard to convince their advisers that behavioural objectives might be inappropriate, that tests of learning should not be treated as central, that the use of control groups would be infeasible or unethical or both and that this, together with the difficulty of statistical analysis, was a fact of life, not a weakness in design.

So, with action, not ‘research’, as the central activity of the project, and with a conviction that the classic social science research model was inappropriate to its needs, the team sought another model.

QUALITATIVE RESEARCH

It is probably not accurate to paint a picture that suggests that all of the foregoing had occurred, leaving the researchers in search of a research model. It is quite probably the case that the eventual discovery of a model that seemed appropriate to their task enabled the researchers to appreciate retrospectively what had been inadequate about their earlier conceptions or activities. For present purposes, the actual course of events is not important save in one fundamental respect.

Changes in the models or paradigms that underlie a particular project’s methodology can come about in a number of ways. Most probably a researcher discovers a new approach by talking to others or reading the literature, and identifies the positive value of a new or partially new approach in his own research. But what happens when researchers are committed, implicitly or explicitly, to a research model that they think inadequate, not because they know a particular alternative to be better, but because they ‘feel’ the approach being taken to be inappropriate? One solution, of course, is to create the appropriate model, but in the face of strong opposition that is difficult. It is also time-consuming. The main activity of the project continues to be action in some area of user education, not the search for a research model. In addition, to add legitimacy to any model that is eventually adopted, it is necessary to find something that is already established in some form, for which some other ‘experts’ have already advanced a case, and which has other, preferably eminent or accomplished, subscribers.

In our case, a new model was suggested in the paper on illuminative evaluation by Parlett and Hamilton. The doubts expressed about the appropriateness of ‘the classical or “agriculture-botany” paradigm, which utilizes a hypothetico-deductive methodology derived from the experimental and mental testing traditions in psychology’ (Parlett and Hamilton, 1976:85) were shared wholeheartedly by the project. In particular, it seemed inappropriate to the evaluation of educational innovations.

The shortcomings of the paradigm are summarized by Parlett and Hamilton:

(a) The data collection and statistical controls that would be necessary are rarely possible in educational evaluations.


(b) ‘“Before and after” research designs assume that curriculum projects undergo little or no change during the period of study. This built-in premise is rarely agreed in practice.’ (p. 87.)

(c) ‘The methods used in traditional evaluations impose artificial and arbitrary restrictions on the scope of the study. For instance, the concentration on seeking quantitative information by objective means can lead to neglect of other data, perhaps more salient to the innovation, which are disregarded as being “subjective”, “anecdotal”, or “impressionistic”.’ (p. 87.)

(d) ‘Research of this type, by employing large samples and seeking statistical generalizations, tends to be insensitive to local perturbations and unusual effects. Atypical results are seldom studied in detail.’ (p. 88.)

(e) Finally, this type of evaluation often fails to articulate with the various concerns and questions of participants, sponsors and other interested parties.

‘Illuminative evaluation’ (a term coined by Trow [1970]) tries to avoid or overcome these problems. The aims of

‘illuminative evaluation are to study the innovatory project: how it operates; how it is influenced by the various school situations in which it is applied; what those directly concerned regard as its advantages and disadvantages; and how students’ intellectual tasks and academic experiences are most affected. It aims to discover and document what it is like to be participating in the scheme, whether as teacher or pupil; and in addition, to discern and discuss the innovation’s most significant features, recurring concomitants, and critical processes. In short, it seeks to address and to illuminate a complex array of questions.’ (Parlett and Hamilton, 1976:89.)

On the methodology of illuminative evaluation, Parlett and Hamilton point out that:

‘Illuminative evaluation is not a standard methodological package, but a general research strategy. It aims to be both adaptable and eclectic. The choice of research tactics follows not from research doctrine, but from decisions in each case as to the best available techniques; the problem defines the methods used, not vice versa. Equally, no method (with its own built-in limitations) is used exclusively or in isolation; different techniques are combined to throw light on a common problem.’ (Parlett and Hamilton, 1976:93.)

The attractiveness to the project of illuminative evaluation as outlined by Parlett and Hamilton was enormous. It provided a convincing argument against the classic social science research model, and therefore against the emphasis upon pre- and post-testing of the achievement of behavioural objectives. It underlined the need to examine a wider range of contextual and environmental determinants of the success or otherwise of an educational innovation. Finally, it confirmed that a range of research techniques should be used in combination, the range to include observation, interviews, questionnaires and tests, documentary and background sources.

Convincing as the arguments were, however, the paper was by no means a manual of illuminative evaluation. It drew upon a relatively narrow range of past studies, mainly educational evaluations, each of a specific character. But, of course, it was proposed as a broad strategy, not a set of specific techniques.


In the project, the task was to work out which aspects of the courses, materials etc. would be studied by which methods. The following range was eventually used:

(a) Tests of learning. A fairly rough-and-ready multiple-choice test was used, where possible, as a pre- and post-test. In fact, the test proved to be of minimal value. The courses that were run involved self-paced individual study, and attendance was usually voluntary. Although it was usually possible to get pre-tests completed, it was often hard to get post-tests completed, so the number of comparable scores was rather small.
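For a present-day reader, the matching problem can be made concrete. The sketch below is entirely hypothetical: the student identifiers, scores and variable names are invented (the project itself used paper tests), but it shows how voluntary attendance shrinks the set of comparable pre-/post-test scores.

```python
# Hypothetical sketch of the pre-/post-test matching problem described above.
# All identifiers and scores are invented; the project used paper tests.

pre_scores = {"s01": 4, "s02": 7, "s03": 5, "s04": 6}  # most students sat the pre-test
post_scores = {"s02": 9, "s04": 8}                     # far fewer returned a post-test

# Only students who completed both tests yield a comparable score pair.
paired = {s: (pre_scores[s], post_scores[s])
          for s in pre_scores.keys() & post_scores.keys()}

gains = [post - pre for pre, post in paired.values()]
mean_gain = sum(gains) / len(gains) if gains else None

print(f"{len(paired)} comparable pairs out of {len(pre_scores)} pre-tests; "
      f"mean gain: {mean_gain}")
# -> 2 comparable pairs out of 4 pre-tests; mean gain: 2.0
```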

(b) Questionnaires. Both the independent evaluation and the project needed a great deal of feedback from students, faculty and librarians, of the kind that could be gathered fairly easily by questionnaires. The questionnaires were designed jointly, but were distributed and collected by the independent evaluator, with promises of confidentiality. The promises were honoured in the sense that, although the project received copies of most parts of all of the questionnaires, we never knew the respondents’ names.

(c) Observation/discussion/interview. With both tests and questionnaires, it is necessary to specify in advance the questions to which answers are needed. Where an educational innovation is being used in a wide variety of institutions, there are likely to be many questions that cannot be specified precisely in advance. There may be major common issues that have not been anticipated; there may be relatively minor local circumstances that are important but cannot be made known in advance to an outsider. Observation, discussion and interviews were carried out to amplify and supplement the information likely to be obtained by questionnaires. All of these activities were unstructured (with the exception of a concluding group discussion, usually led by the independent evaluator). Not only were they unstructured, but they were often barely distinguishable from teaching activity.

Since these unstructured activities were being carried out by the project teachers, as well as by the independent evaluator, some degree of uniformity was thought to be desirable, if only because a single report on the whole activity would eventually have to be written. We achieved a minimal level of uniformity by drawing up a checklist of activities or aspects of the course to be analysed or recorded, a simplified version of Stake’s matrix of antecedents, transactions and outcomes. The checklist eventually listed selected aspects of each course, and indicated the techniques to be used to collect information, and the sources of that information (students, faculty, librarian, documents, etc.).
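As an illustration only (the entries below are invented, not the project’s actual checklist), such a checklist can be thought of as a small data structure: Stake’s three categories each map selected course aspects to the techniques and sources used to record them.

```python
# Hypothetical sketch of a uniformity checklist modelled on a simplified
# Stake matrix (antecedents / transactions / outcomes). The aspects,
# techniques and sources listed here are invented examples.

checklist = {
    "antecedents": {
        "students' prior library experience": {
            "techniques": ["pre-test", "questionnaire"],
            "sources": ["students"],
        },
        "place of the course in the curriculum": {
            "techniques": ["documents", "interview"],
            "sources": ["faculty", "course outline"],
        },
    },
    "transactions": {
        "use of the self-paced materials": {
            "techniques": ["observation", "discussion"],
            "sources": ["students", "project teachers"],
        },
    },
    "outcomes": {
        "search skills demonstrated": {
            "techniques": ["post-test", "group discussion"],
            "sources": ["students", "independent evaluator"],
        },
    },
}

# A single report per course can then walk the matrix in a uniform order.
for category, aspects in checklist.items():
    for aspect, plan in aspects.items():
        print(f"{category}: {aspect} <- {', '.join(plan['techniques'])} "
              f"(from {', '.join(plan['sources'])})")
```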

(d) Documents and background information. Prospectuses, course outlines, lecture notes and handouts, etc. were used where available and appropriate.

The use of the variety of instruments or sources posed many problems. The numbers choosing to attend any one course might be quite small; tests, questionnaires and observation and discussion might or might not relate to the same students. Often all types of responses would come from the same small group of enthusiastic students, and attempts to contact those students choosing not to participate or to respond were relatively unsuccessful.


The perceptions and interpretations of aspects of a student’s tasks might be quite different among students, faculty or librarians. The discrepancies had to be unravelled, as did those between official documentary sources (the prospectus) and reality.

CONCLUSION

Research and evaluation are held by some to be quite different; others regard evaluation as a particular type of research. Research and evaluation may have different purposes, but they may nevertheless lead, in large measure, to the same type of conclusions.

One advantage of the type of evaluation model adopted in a project involving the development and implementation of some kind of innovation is that it does not assume that the researcher knows at the outset all the questions he will want to ask in the course of the evaluation. Parlett and Hamilton (1976) recommend a broadly based start and ‘progressive focusing’. This inevitably involves some compromise between breadth and depth, but while this is a difficult compromise, it is not impossible to make. Part of the current effort in qualitative research goes to improving the tools that make depth of treatment possible. Some observers of illuminative evaluation, such as Parsons (1976), have pointed out that there is already a wealth of experience and method available from sociology for purposes of qualitative research. But the message of illuminative evaluation is breadth and variety, and a perspective that includes ‘what it is like to be participating …’ before the application of the techniques begins.

In large-scale information science research, attention will focus upon the techniques, but in small-scale, domestic research (evaluations of modest but important innovations, for example) perhaps the most important perspective is: ‘what is it like to be a user?’

REFERENCES AND NOTES

CLARK, D., HARRIS, C. G. S., TAYLOR, P. J., DOUGLAS, A. and LACEY, S. (1981). The Travelling Workshops Experiment in library user education. London: British Library Research and Development Department. (BLR&D Report No. 5602.)

HARRIS, C. (1977). Illuminative evaluation of user education programmes. Aslib Proceedings, 29, 348-362.

PARLETT, M. and HAMILTON, D. (1976). Evaluation as illumination. In: Tawney, D. (ed.) Curriculum evaluation today: trends and implications. London: Macmillan. (Quotations and page references are from this edition, but the paper was originally published, in 1972, as Occasional Paper No. 9 of the Centre for Research in Educational Sciences, University of Edinburgh.)

PARSONS, C. (1976). The new evaluation: a cautionary note. Journal of Curriculum Studies, 8, 125-138.

TROW, M. (1970). Methodological problems in the evaluation of innovations. In: Wittrock, M. C. and Wiley, D. E. (eds). Problems in the evaluation of instruction. New York: Holt, Rinehart and Winston, 289-305.

1. The Travelling Workshops Experiment, referred to in this paper in the past tense, was the funded research project. In fact, at the end of the funded research, the Polytechnic took over the project as a commercial enterprise, so the Travelling Workshops Experiment still exists.