
Experts and foresight: review and experience

Denis Loveridge

PREST, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK
E-mail: [email protected]

Abstract: Experts and their opinions are widely sought, but there are serious questions to be asked about both experts and their opinions. Governments, through their advisory committees, companies and adversarial groups eagerly seek expert opinion with which to influence the formulation of policy and regulations of all kinds. In a different sphere expert opinion is also a powerful feature of many complex court cases, where many expert witnesses find themselves in circumstances beyond their normal experience and where their evidence may be manipulated in ways they never anticipated. Here, the nature of expert opinion and its elicitation are first surveyed, necessarily in brief; subsequently the methods and methodological issues of elicitation are discussed, all in relation to foresight activity. Finally, the implications for the practice of foresight, particularly in its manifestation in public foresight programmes, are discussed.

Keywords: experts; foresight; human judgement; practical experience; preference; science in court; uncertainty.

Reference to this paper should be made as follows: Loveridge, D. (2004) 'Experts and foresight: review and experience', Int. J. Foresight and Innovation Policy, Vol. 1, Nos. 1/2, pp.33–69.

Biographical notes: Denis Loveridge is a Visiting Professor at the University of Manchester, where he is a member of PREST. He has 20 years' experience in a major international company in foresight activity, the development of new high-technology businesses and long-term corporate development. He was deeply involved in the first UK Foresight programme from its beginnings in 1992 to its completion in 1995. Previously he had 14 years' experience first in basic research and later in applied process research as a Principal Scientist and Project Manager/Coordinator, and was responsible for world firsts in the application of fluidised bed combustion systems and in instrumentation. He has extensive experience of managing and coordinating complex international research programmes in both industry and the public sector.

1 Introduction

Currently there is a paradox: expert opinion is simultaneously avidly sought and lampooned with equal ferocity. Governments seek expert opinion eagerly, through advisory committees, whilst companies and adversarial groups do so with equal vigour, all with the intention of influencing the formulation of policy and regulations of all kinds. Self-interest is never far away. However, there are serious questions to be asked about experts, their opinions and how they are elicited.

Int. J. Foresight and Innovation Policy, Vol. 1, Nos. 1/2, 2004 33

Copyright © 2004 Inderscience Enterprises Ltd.


The world now and into the far future faces many situations of the kind described by Weinberg (1972) as trans-science, characterised by questions that are posed to science but that cannot be answered by science. Typically they are questions on disease control, climate change, sustainable development and other complex¹ matters that involve the intermingling of thought streams from the Social, Technological, Economic, Ecological, Political and Values worlds (STEEPV). In giving an opinion an expert manipulates and integrates knowledge² subjectively from all six elements of the STEEPV set. In the world in which advisers work and expert opinion is elicited, the rules of the frequentist tradition are transgressed, as large numbers of opinions usually cannot be sought.³ The time needed and costs involved preclude the observation of frequentist rules. More fundamentally, when dealing with the future there is no system of measurement comparable with physical measurement that frequentist rules usually require. By definition, opinions are subjective and can only be the expert's best judgement under the terms and rules imposed by the situation. Current notions of risk assessment, presented in a frequentist fashion, are then at odds with the subjective basis of the opinions expressed by experts.

The roots of subjective probability go back to the Ancients. In their modern idiom the rules were laid out from the 1800s onwards and have given rise to an extensive literature and many methods for assessing expert opinion; these will be traced briefly, but not definitively, in the survey that forms the first part of this paper. The later parts concern practical methods; these have proved elusive to formulate, as they must be readily appreciated by the interviewee, must not be offensive to him or to her, and must not be simply a 'black box' applied by and understood only by the interviewer. These matters grow in significance as the role of expert advisers and expert witnesses becomes ever more important in government and in the courts, where caveats and uncertainties count for little. In court a trial judge and jury are required to reach a judgement to resolve a dispute about a specific set of circumstances in spite of caveats and uncertainties. Similarly, expressions of uncertainty by experts (often scientists) are not welcomed by government, whose duty it is to frame regulations, preferably unambiguously.

In Experts in Uncertainty, Cooke (1991) has thoroughly reviewed the considerable literature in the field of subjective probability. In this paper, some topics will be added that Cooke does not cover, including the selection of people for expert committees, as this is much more hazardous than is generally appreciated. The role of subjective opinion in foresight programmes will be constantly borne in mind, tackling the thorny question of 'who is an expert?' in these programmes, where the phenomenon of relativism is rife, through questioning whether a non-expert's opinions are as valid as those of an expert of international standing. Throughout, there is an emphasis on the practical use of subjective opinion, as experience has taught me that there is more to using it than theory alone, however elegant that may be. The paper also spreads into other fields that either draw upon or influence subjective opinion. The first of these is court proceedings, where expert opinion is often involved. The second field is the nature of preference shifts that may cause volatility in expert opinion, and the third draws in matters relating to human judgement. For the narrow specialist the running together of these three fields may be either incomprehensible or anathema, but outside their world the interaction of these three fields is an everyday occurrence met with by decision makers in spheres ranging from judges to government ministers and their advisers. It may, therefore, be considered an omission not to have dwelt at length on the burgeoning political aspects of expert advice giving that has accompanied the growth of advisory committees and of Quangos. In that field the work of Barker and Peters (1993) has much to commend it, as has the work of others in the related fields of accident and other forms of public enquiries.⁴ However, it will be helpful to record, for later interpretation of comments on the selection of members of advisory committees, that Barker and Peters believe there to be six levels of cognitive difficulty that face public policy makers; these they describe in terms of the policy field's character as follows:

1 elaborate but not difficult detail

2 situations involving complicated (but not complex in Perrow's terms) matters

3 situations with technical difficulty but that are amenable to non-expert study

4 problems with technical difficulty requiring expert training for their comprehension

5 technical situations bordering on the scientifically unknown and involving competing and conflicting scientific opinions

6 situations where science has nothing to offer; the subject is unknown to science and there are no claims from experts.

The first three have characteristics that make them amenable to study by non-expert policy and decision makers. Thereafter the situation becomes increasingly incomprehensible to the polity in general, whilst the last is clearly in that field where what is later called 'conjecture' is the only mode of thought available even to the expert community. In a telling comment, Barker and Peters point to the myth of omnicompetence that policy makers endeavour to maintain throughout all of the above six fields. In this context, they conclude that policy makers indulge in a dangerous self-delusion typified by '... a vain hope of maintaining the myth that all problems and issues are safely under their hand, usually with the help of expert advisers'. The dangers of this mindset are enormous when faced with the complexity of trans-science and the characteristics of the last three policy fields listed by Barker and Peters. Added to this must be the ever-shifting nature of society as it evolves from the era of modernity into something different, characterised by an ever-decreasing level of trust in policy makers and their advisers.

2 Who is an expert?

The above question is the first that needs to be examined, though often it is not asked at all. Here it is nothing to do with identifying people with expertise, but is that lofty inquiry implying doubt about the whole idea of expert opinion and its relevance or place in the modern world. Attitudes to expert opinion have become like Janus's head, seen simultaneously as highly desirable and greatly mistrusted. The result in foresight studies is paradoxical, as the response to the lofty inquiry may well be that expert opinion 'of course' can be sought, but it is not to be given any particular value by comparison with the everyday experience of the public at large. In the remainder of this paper, the existence of valuable expert opinion that needs to be drawn upon by some process is taken for granted, but in the real world the Janus-like nature of how such opinions are regarded should not be forgotten.

3 Expert opinion – its diversity, evolution and uses

Expert opinion exhibits diversity that has evolved over centuries. The discussion willproceed as follows:

* subjective opinion

* the courts

* preferences

* human judgement.

For methodological issues the discussion will concern the:

* selection of 'experts'

* elicitation of expert opinion.

3.1 Subjective probability

Often the distinction between the frequentist ('objective') and subjective forms of probability is ignored, to its detriment. Discussions, particularly among the polity, then become muddled with confusing notions of what any kind of probability is about. As the history of probability shows, the subjective form is by far the older. The frequentist form grew out of its predecessor when the notion of frequent trials to establish distributions of likelihood became established. Nevertheless, in the search for advice on which to act, kings, rulers, company chairmen and boards, and other decision takers cannot do other than work with the subjective opinions of a small group of advisers, for reasons including security, cost or simply the inapplicability of frequentist notions.

Cooke opened his book with the sentence "Broadly speaking, this book is about the speculations, guesses and estimates of people who are considered experts, in so far as these serve as 'cognitive input' in some decision process." Cooke considered subjective probability and opinion under three headings, namely: experts and opinions; subjective probability and its underlying theoretical concepts; and the problem of combining expert opinions. Whilst these three broad headings illustrate the main features of expert opinion, they have to be teased apart, as Cooke does, to reveal the multiplicity of factors involved. Cooke demonstrates how these methods have been used in some situations, but experts, advisory committees and others who depend on opinion to make judgements or decisions may not think in the ways he describes. Communicating the nature of expert opinion remains one of the most difficult tasks facing anyone who works with advisory bodies, especially when they are expected to produce advice against tight time scales and in situations resembling those of the UK's BSE or similar crises. Cooke's apt observation is that "An expert who knows more about a particular subject than anyone else may be able to talk an opponent under the table. This does not mean that his opinions are more likely to be true". Experts are not omniscient, but have their own agendas, predilections and prejudices that become embedded in their opinions.

It would be hard for any facilitator to introduce the many themes underlying expert opinion to any advisory body. Rather, the facilitator's role must be to embody them, 'black box' fashion, so that probabilistic thinking, the nature of biases, heuristics and the combination of several sources of opinion are incorporated but not exposed directly. The 'black box' problem is the kind that Budescu and Rantilla (2000) approached in a study of how a decision maker aggregates the opinions of multiple advisers and their confidence in the outcome. The problem surrounding the acquisition of knowledge from multiple experts has been explored by Medsker et al. (1995). Their concern was not related primarily to opinion as subjective probability but with the many social aspects such as privacy, geographic dispersion and how computers may be used to automate parts of the process. The latter topic had been explored much earlier by Lipinski et al. (1973) with much the same objectives: to make expert participation independent of geographic separation as well as enabling the elicitation process. Multiple sources of opinion naturally provoke the question of how they can be aggregated. One approach was described by Lipinski and Loveridge (1982), whilst another, the analytic hierarchy process (AHP), has been described by Saaty (1980). Xu (2000), Zio (1996) and Basak (1998) have worked with AHP, and the latter claims to have developed a new approach to eliciting and synthesising expert opinion for group processes within the AHP framework.
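The core computation of the AHP mentioned above can be made concrete with a short illustrative sketch: priority weights are derived from a matrix of pairwise comparisons (on Saaty's 1–9 scale) as that matrix's principal eigenvector. The comparison values below are invented purely for illustration, and simple power iteration stands in for a full eigensolver.

```python
# Invented pairwise comparison matrix for three alternatives on Saaty's
# 1-9 scale: entry (i, j) states how strongly i is preferred to j,
# with reciprocal entries below the diagonal.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_priorities(A, iters=200):
    """Approximate the principal eigenvector of A by power iteration,
    renormalising at each step so the priority weights sum to one."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

weights = ahp_priorities(A)
# weights sum to one, with the strongly preferred first alternative dominating
```

In a full AHP exercise, Saaty's consistency ratio would normally accompany the weights to flag incoherent comparisons; it is omitted here for brevity.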

Hull (1976) reviewed the use of subjective probability for risk assessment in relation to major capital investment under seven methods:

1 fixed interval

2 variable interval

3 relative likelihood

4 psychometric ranking

5 equivalent prior sample

6 hypothetical future sample

7 other.

In the fixed interval method the assessor is asked to state the probability estimate for each interval in a range that covers all the possible values of an uncertain quantity. The order in which the intervals are presented to the assessor is thought to be influential, and Huber (1974) has proposed a procedure to overcome this concern. In contrast, the variable interval method requires the assessor to identify a number of intervals of equal probability that cover the entire range of the uncertain quantity. Morrison (1967) has outlined an appropriate procedure, sometimes called the method of successive bisection; whilst the method can become tedious as greater precision is sought, it is an approach favoured by Peterson et al. (1972) and Murphy and Winkler (1973).

The relative likelihood method expects the assessor to respond to questions to reveal the relative probabilities of different values or ranges of values. Again, procedures have been described by Winkler (1967) and by Barclay and Peterson (1973). The remaining methods do not fall within the context of the present paper.
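As a purely illustrative sketch of the variable interval approach, the method of successive bisection can be simulated by treating the assessor as a quantile 'oracle': the median is elicited first, then the medians of each half, yielding intervals of equal subjective probability. The uniform-belief assessor below is an assumption made only for illustration.

```python
def successive_bisection(ask_quantile, depth=2):
    """Elicit fractiles splitting the assessor's belief into 2**depth
    equal-probability intervals: the median first, then the medians of
    each half, and so on. ask_quantile(p) stands in for the question
    'what value are you 100p% sure the uncertain quantity falls below?'"""
    fractiles = [i / 2 ** depth for i in range(1, 2 ** depth)]
    return [ask_quantile(p) for p in fractiles]

# Hypothetical assessor whose beliefs are uniform on [0, 100]
fractile_values = successive_bisection(lambda p: 100 * p)
# fractile_values == [25.0, 50.0, 75.0]: the quartiles of the assessor's belief
```

Increasing `depth` corresponds to the tedium noted above: each extra level doubles the number of questions put to the expert.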


DeWispelare et al. (1995) describe the use of elicitation to generate cumulative distributions representing the uncertainty in predictions in the USA high-level nuclear waste regulation programme. In a very different field, Dransfield et al. (2000) used weighted expert opinions, in conjunction with a Delphi survey, to study the future of interactive television and retailing.

Lastly, there is the use of subjective probability in scenario development; there are many threads here that have surprising antecedents. Perhaps the most surprising arose in discussions between de Finetti, Keynes and Jeffreys (de Finetti, 1985). Among many claims in the dialogue, Jeffreys' claim that probability tells us what we ought to believe might make a connection with other theories of belief enunciated by Shafer in his development of a mathematical theory of evidence.⁵ It will also connect with notions from other fields with far longer claims to proposing what we ought to believe. How then do these thoughts relate to the use of subjective probability in scenario building?

Scenarios display sets of beliefs in logical or pseudo-logical order; they are to an extent an art form underlain by the theory of subjective opinion. For all that has been written about and in scenarios, very few of them pay overt attention to this lineage and almost none has been prepared using it explicitly. Some authors have been concerned with probability assessment procedures⁶,⁷ (Bunn, 1979a,b) but not necessarily in ways that relate to scenario building, whilst Bunn and Salo (1993) have been concerned with scenarios and forecasting. However, there are at least two scenario-based studies⁸ that are based explicitly on the elicitation of subjective opinion; these studies enabled the construction of scenarios for the earth's climate and the future of the UK. Both used a similar approach in which experts were asked to respond to a defined set of questions through an elicitation process.

For the climate study, ten questions dealt with specified climate variables, whilst for the UK study nine variables were specified for use in an economic model. The elicitation sought information about the specified variables in three ways: probabilistic forecasts on each variable, the reasons for the quantitative estimates, and self and peer expertise rating. The characteristics of the experts in each study were threefold (Lipinski et al., 1973):

1 substantive knowledge in their chosen spheres of interest

2 assessing ability to relate how their sphere may evolve in the future

3 imagination, as this lies behind how the adviser extends his or her substantive knowledge into the future and subsequently assesses it.

Calibration tests exist for characteristics 1 and 2; characteristic 3 cannot be assessed directly. However, it is also important that the potential adviser assesses his or her own level of expertise according to some simple but well-defined rules (see Lipinski and Loveridge, 1982) as a further part of the procedure. Lipinski and Loveridge used their self-evaluation of expertise criteria extensively, and a similar set, modified to take account of spheres outside science and technology, was used in the 1994–95 UK Technology Foresight Programme (Loveridge et al., 1995).

The climate study used a structured questionnaire. The data for each respondent were converted into a probability density function (PDF) relating to the change foreseen in the particular variable. The PDF was adjusted for the respondent's expertise weighting, and the weighted PDFs were added and normalised. The panel's responses to each climate variable were then combined into a set of scenarios that covered the range of uncertainty, or range of conditions, described by the respondents.
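The aggregation step just described amounts to a linear opinion pool. The sketch below shows the weighting-and-normalising procedure with discretised rather than continuous densities; the respondents' beliefs and expertise weights are invented numbers used only for illustration.

```python
def pool_opinions(pdfs, weights):
    """Linear opinion pool: scale each respondent's discretised PDF by an
    expertise weight, sum bin by bin, and renormalise to a proper PDF."""
    pooled = [sum(w * pdf[i] for pdf, w in zip(pdfs, weights))
              for i in range(len(pdfs[0]))]
    total = sum(pooled)
    return [x / total for x in pooled]

# Two hypothetical respondents' beliefs about a climate variable, over three bins
expert_a = [0.2, 0.5, 0.3]   # self-rated expertise weight 3
expert_b = [0.6, 0.3, 0.1]   # self-rated expertise weight 1
pooled = pool_opinions([expert_a, expert_b], [3, 1])
# pooled sums to one (approximately [0.3, 0.45, 0.25]) and is
# dominated by the higher-weighted respondent
```

The normalisation makes the result a proper distribution regardless of how the raw expertise weights are scaled, which is why only their relative sizes matter.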

In the UK study a similar but more developed version of the climate study methodology was used; this is partially described later. The refinements were mainly concerned with the calibration of the experts, so that their elicited opinions could be converted to approximate a notional 'perfect assessor'. Generation of the scenarios was highly computerised, enabling the interpolation of 'missing pathways' through the tree structure involved. Whilst the climate study was placed firmly in the public domain, the outcome of the UK study has remained confidential, only a partial revelation of the methods being permitted by its sponsors.

Making the process of elicitation acceptable to the expert who is to be interrogated has not been discussed here, but will be later, as its importance should not be overlooked. Public foresight programmes have so far shown limited understanding of the nature of the expert opinion on which they depend. Many are conducted by panels selected by unknown processes and whose members' claims to expertise often go unexamined. In a few studies attempts have been made to introduce some of the elementary notions of the elicitation of expert opinion, but these have not gone beyond the self-assessment of expertise. In only one case (Loveridge et al., 1995) did this introduce well-structured criteria, developed from those due to Lipinski and Loveridge (1982), for the respondents to assess themselves against. The outcome was used only to set a minimum level of expertise for inclusion of the respondent's opinions, a matter referred to obliquely by Dransfield et al. (2000) in their work, but weighting of opinions was not introduced. All foresight studies eventually become political instruments, where a practitioner's wishes for deeper understanding have to be subsumed into a world that sets different criteria and has less patience. The best the practitioner can do is to infect this different world with the ideas of the elicitation of expert opinion, so that they spread to influence the use of the insights that elicitation and subjective opinion can offer in decision making.

3.2 The courts

The use of expert witnesses has long been a feature of court proceedings. Foster and Huber (1999) have written trenchantly on the role of scientific knowledge and of scientists as expert witnesses, a timely reminder that the pretensions of both must now face the judgements of the world through judges in their courtrooms and the legal process. The authors analysed the Daubert vs. Merrell Dow Pharmaceuticals Inc. case⁹ in depth to illustrate the various aspects of the use of scientific knowledge in court and the role of scientists as expert witnesses.

Daubert illustrates how the nature of scientific evidence may prejudice, confuse or mislead a jury depending on how it is presented. It also exposes how scientific evidence, in relation to the judicial process, may or may not fit with:

* the social dimension

* notions of testability and falsification

* errors in science, reliability and scientific validity

* the role and efficacy of peer review in the scientific community.


The conduct of a trial is antithetical to the purpose of science and to the experience of scientists, who are used to uncertainties even in those areas where core knowledge is believed to be well established. Weinberg's (1972) notion of trans-science hinges on the phrase '... yet cannot be answered by science', linking time and the associated advance of understanding to the constantly shifting boundaries between the questions that can be asked of and answered by science. The formulation of policy is mostly based on the opinions of expert advisers, particularly if it proceeds in a quasi-legal fashion. As science and technology intrude ever more deeply into sensitive areas of life and society, judges and juries may frequently define what is expected of scientists and of science. The only certainty is that the frequency of these clashes will increase, exposing science, its processes, scientists and the workings of expert advisory committees to public examination. Elsewhere (Foster et al., 1993), Foster and Huber, with others, explored other cases in which their emphasis was on the intersecting themes of the problems of assessing subtle environmental or occupational risks and the havoc this creates in the courtroom.

The foregoing concerns the nature of evidence, and in this field Shafer has developed a mathematical theory of evidence (Shafer, 1976). Shafer presented his work as a theory of evidence and of probable reasoning. Bayesian theory is drawn on extensively, whilst the difference between chance and 'partial belief', which is represented by subjective probability, is emphasised. Shafer avoided the word 'probability' as far as possible because of the misunderstandings he claims it introduces. The notions of chance and degrees of belief are separated, allotting to chance the characteristics of frequentist probability and the subjective variant to degrees of belief. Whilst conditional probability is introduced as a minor effect in the first field, the act of conditioning is of clear importance in influencing degrees of belief, an important point for expert evidence, its presentation and its manipulation in court. However, there is a subtle relationship between chance and partial beliefs, since the latter can be influenced if the frequentist probability distribution of some component of the belief structure is known. Shafer does not pretend that there is an objective way in which given evidence and a given proposition can be represented numerically, nor does he believe that a degree of belief can similarly be represented. Rather, he proposes that an individual can pronounce, as a matter of judgement, a number that represents a degree of belief about a proposition that is vague, fuzzy and perceived in a confused way. The relationship here is to the inevitable courtroom question to attest a level of probability to a proposition that may subsequently influence the court's judgement of the case.

The notion of partial belief axiomatically admits ignorance, a point that Shafer deals with explicitly. It is also a point that can be skilfully exploited by an advocate without the expert witness realising the danger of the situation. Also important is the difference between lack of belief and disbelief in relation to ignorance. Throughout, Shafer is careful to distinguish between objective and subjective probability, a distinction which, when neglected, creates much misunderstanding in courts and advisory committees; the latter has relevance to foresight programmes, their formulation and the implementation of their outcomes.
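Shafer's separation of lack of belief from disbelief can be illustrated with a minimal sketch of belief and plausibility functions over an invented two-element frame. The masses below are assumptions chosen purely for illustration, with half the evidential weight left uncommitted on the whole frame to represent ignorance.

```python
def bel(A, m):
    """Belief in A: total mass of the evidence that entails A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A, m):
    """Plausibility of A: total mass of the evidence consistent with A."""
    return sum(v for B, v in m.items() if B & A)

# Invented frame of discernment for a courtroom-flavoured example
frame = frozenset({'liable', 'not_liable'})
liable, not_liable = frozenset({'liable'}), frozenset({'not_liable'})

# Half the evidential weight supports liability; the other half is
# uncommitted ignorance, assigned to the whole frame.
m = {liable: 0.5, frame: 0.5}

# bel(liable, m) == 0.5 and pl(liable, m) == 1.0, while
# bel(not_liable, m) == 0.0 yet pl(not_liable, m) == 0.5:
# lack of belief in 'not liable' is not disbelief in it.
```

A Bayesian representation would force the uncommitted 0.5 onto one proposition or the other; mass assigned to the whole frame is what lets ignorance be expressed directly.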

Williams (1978) reviewed Shafer's work and concluded that it '... has considerable appeal and naturalness' and that '... we owe Shafer a debt of gratitude for developing a systematic and comprehensive alternative to Bayes theory'. The influence of Shafer's work in the world of practical decision making remains uncertain, but Mayo and Hollander (1991) confronted the notion of 'acceptable evidence' directly.

From case studies, Mayo and Hollander concluded that despite the attentiongiven to the role of values in determining acceptable risks, little attention had beenpaid to the essential step of examining what type of risk evidence is acceptable. It maybe allowed that values influence the acceptability of risk, but that the associatedevidence is risk-free, because it is scientific, objective and therefore uncontroversial.In contrast, risk evidence may be seen as permeated by value judgements to theextent that there is no one type of data more objective, valuable or worthy thananother, a recourse almost to relativism. Mayo and Hollander were clear in theirbelief that values influence risk assessment and management, while maintaining thepossibility of developing tools for appraising critically the evidence on risk issues.Risk evidence is shot through with value judgements that are socio-cultural andpolitical, but these do not make it meaningless, a practical and theoretical pointalready encountered in Foster and Huber (1999) and in Shafer (1976). Uncertainty inrisk evidence and ways of dealing with it, lie at the heart of Mayo and Hollander'sdiscourse and of the case studies contributed by different authors. However, onlySlovic discusses communicating risk to the polity and goes into any detail of thepracticalities involved. Mayo and Hollander's focus is on the ethical aspect of riskevidence; there are connections to be made with Shafer's theoretical work, but ratherless so with the courts and advisory committees. For the latter there may be moreinsights to be found in Vickers' discussion (Vickers, 1963), now some 40 years ago, ofwhat he called appreciative behaviour. 
Perhaps even more relevant is the series of discussion meetings held over several years by the UK's Royal Society concerning the perception, analysis and management of risk (The Royal Society, 1981, 1992, 1997) and recent work by Stirling and Meyer in relation to genetically modified crops (Stirling and Meyer, 1999).

3.3 Preferences

We all exhibit preferences; expressing them is the task of all advisory committees; it is their reason for existence. Preferences are closely aligned to Shafer's degrees of belief and to his notions of how changes in opinion occur to accommodate new evidence. However, the underlying theory of preferences comes from a different stable. In its modern incarnation, the theory of preferences stems from Savage (1954) and is underpinned by symbolic logic. The interpretation put on preference theory here is related to the phenomenon of foresight and to the formalism of public foresight programmes.

Life is made up of events, each of which is itself a set of states; the event that is a single independent state is simply a myth. Similarly, the event that has no states as elements of its set is called vacuous since logically this cannot be accepted. Savage's example of a vacuous event is the notion of one that is simultaneously good and bad. Clearly, these notions are those of set theory, in which Venn diagrams figure strongly, expressing, as they do, the intersection and union of sets. Venn diagrams are powerful in understanding preferences, their limitations lying in the rapid increase in complication once, say, six or more intersecting sets are involved. Three-dimensional Venn diagrams have been developed, though it is not suggested they can be of use in foresight studies.

A preference is expressed in the course of making a decision which, on every occasion, implies a choice between two or more acts, each with its own consequence(s). Theoretically, the choice of an act requires that the entire set of possible states of the world be taken into account, and also the consequences that are implied by that choice for each of the possible states of the world. These conditions may be met in a small world of a few states, but in the real world in which foresight occurs the theoretical conditions cannot be adhered to, if only because of Simon's (1957) condition of `bounded rationality'. A further complication lies in the future uncertainties regarding the consequences of a choice among acts since these, by definition, cannot be specified entirely. Nevertheless, in the real world choices between acts are made and the consequences are lived with. Additionally, the distinction between preference and indifference is important both theoretically and practically. The difference implies a reason for shifts between the two conditions; this is related to the influence of new evidence, where there is also a concern for preference ordering. In the latter, any numerical values attached to preferences are not important; it is their ordering that matters, as illustrated by Casti (1992) in his discussion of the 1973 alert crisis that involved the USA, the USSR and the Middle East countries. Savage describes two ways of looking at shifts of this kind. The shift from indifference to preference may be due to the individual recognising that new evidence brings some form of bonus to one or more of the component states. But how does the individual recognise this bonus for what it is? Alternatively, an individual may know introspectively whether an act has been chosen haphazardly or from a definite feeling of preference. The latter is an issue that can arise in priority setting during foresight programmes, due to simple mental exhaustion accompanied by an inconsistent expression of preferences among acts, typified by circularity.

Savage examined two proverbs, `look before you leap' and `you can cross that bridge when you come to it', as two extremes of policy. The first proverb may be interpreted as requiring anticipation. The act of foresight must envelop the entire set of events and states, in detail, of the future world for that individual and for the selection of one policy for life from the entire set of possible policies for living in the anticipated world. Savage regards this extreme as `preposterous'. In contrast, the notion of `crossing bridges when you come to them' denies any form of foresight and is entirely reactive, limiting decisions to an artificially simple and small world where responses are made to events as they are encountered. Savage is ``. . . unable to formulate criteria for selecting these small worlds'', believing that ``. . . their selection may be a matter of judgement about which it is impossible to enunciate complete and sharply defined general principles''. Lastly, in this brief excursion into Savage's work is the idea of admissibility10, which states that if ``. . . several individuals agree among themselves on their preference among consequences, then they must also agree in their preferences among certain acts''. In the current context the relevance of this notion to foresight is clear.

The part played by psychology in preference formation is self-evident. However, preferences and their reversals are dwelt on by several authors in widely different situations. Among these are Lichtenstein and Slovic (1971), who first found evidence of preference reversals in laboratory experiments and later in casino experiments in Las Vegas (Lichtenstein and Slovic, 1973). Hogarth (1988) similarly discusses preference reversal. Essentially, preference reversals between a pair of choices occur when the probability of the one originally preferred falls below the probability of the alternative. Psychologically, the committee member or expert adviser will make similar assessments, either consciously or unconsciously, of both the value (which is not necessarily monetary) and the probability of the outcome of expressing his preference, and of how that may change under different circumstances. Whilst this conclusion may seem `common sense', because of the frequency with which preference reversals are encountered in everyday life, it becomes more problematic when it is allied with transgressions of the transitivity principle (intransitivity11) where illogical preferences are displayed. The transitivity principle requires the absence of circularity among a series of preferences; for example, a preference for A over B and B over C should imply a preference for A over C. Intransitivity, where this kind of sequence does not occur, is a common problem, but is important in expert or advisory committees where it requires deliberate recognition and management. Intransitivity was one of Savage's concerns and should be of concern to anyone involved in formal foresight programmes, particularly when the time comes for prioritisation.
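The circularity described above is easy to detect mechanically once pairwise preferences have been recorded. A minimal sketch in Python (the function name and data are illustrative, not drawn from the original study):

```python
from itertools import permutations

def find_intransitive_triples(preferences):
    """Given pairwise preferences as a set of (winner, loser) tuples,
    return triples (a, b, c) where a > b and b > c but also c > a,
    i.e. a circularity violating the transitivity principle."""
    items = {x for pair in preferences for x in pair}
    cycles = []
    for a, b, c in permutations(items, 3):
        if (a, b) in preferences and (b, c) in preferences and (c, a) in preferences:
            cycles.append((a, b, c))
    return cycles

# Illustrative elicited preferences: A over B, B over C, but also C over A
circular = {("A", "B"), ("B", "C"), ("C", "A")}
print(find_intransitive_triples(circular))  # non-empty: the cycle is flagged
```

In a panel setting, a check of this kind over elicited rankings flags the circular preferences that mental exhaustion tends to produce, so that they can be revisited with the expert before prioritisation.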

In the outcome of a workshop held at IIASA in the late 1970s, Bell et al. (1977) trace the steps to the development of ways to assess and model a decision maker's preference system. The emphasis is on the construction of decision rules usable in multicriterion problems using heuristic and axiomatic rules; one-step and multi-step rules; and rules leading to full or partial ordering of a set of feasible alternatives. The steps in building complete or partial models of preferences were set out. However, in the closing debate Rivett somewhat testily made the comment that, elegant though all the methods described were, he rarely met them in practical use, a feature that the expert adviser or committee member cannot avoid since actionable advice is expected of them.

Lastly, preference measurement is often a feature lying behind marketing. In this context Green et al. (1972), Watkins (1984) and Stanek and Mokhtarian (1998) are examples drawn from different eras and different fields. It may seem that much of preference theory boils down to common sense. If that were the case there would be little to be concerned about. However, events in the practical world reveal the presence of intransitivity and preference reversals as commonplace; this evidence alone indicates the need for watchfulness in the areas influenced by expert opinion, however it is gathered. For the conduct and implementation of foresight programmes this need is obvious.

3.4 Human judgement and bias

The purpose from the outset has been to relate subjective opinion and its probabilistic representation to the current rash of interest in what has become called foresight, so matters that are judged not to relate to that phenomenon are omitted. In foresight programmes, public and private, the time-scale is hardly less than ten years and is often 20 to 30 years, ruling out short-term judgements. In addition, foresight is concerned with possibilities that result from conjunctions and interrelatedness of events and trends, so notions that relate to single events are demoted.


The exercise of judgement is commonplace and is taken for granted, though in some instances its exercise is not immediately welcome or accepted. Vickers (1981) said that ``Since the Second World War the most rapidly expanding field of professional study has been the field of human governance''. Foresight has long been a part of that field, though often only by implication. Vickers later goes on to quote how Churchill's decision in 1925 for Britain to revert to the gold standard at its pre-war parity was taken against his judgement, because the opinions of all his advisers unanimously favoured the move. Strongly aired opinion, even when based on substantive knowledge, need not turn out to be the most valuable. If subjective probability is the public expression of beliefs, as Jeffreys, Shafer and others might maintain, the factors that lie behind these expressions spread widely into many domains of science and sociology. The connections between the underlying notions of the workings of the human brain, the sociological and ethical notions of values/norms that characterise human behaviour, and probability have proved elusive. The point was made cogently by Hammond and Adelman (1976), who saw judgement as the key element in integrating values and science. Now, in a world that 25 years later is even more riven by disagreements over what can be done and what ought to be done, that conclusion is even more important.

There is a long running assumption that human judgement is highly fallible when compared with the prescriptive models of the judgement process. So `Is human judgement so fallible that it should be discarded?' is a question examined by Beach et al. (1987); their conclusions are instructive:

* good judgements are rarely reported (this is the media `bad news is news' phenomenon)12

* experiments on judgement assessment are often contrived and unrepresentative of the real world

* real world assessors often frame tasks in very different ways from experimenters (revealing a difference in task perception)

* the problem framework is often unclear, making performance criteria equally unclear.

Beach et al. point to the ``. . . bewildering array of `Biases', `Heuristics', and `Fallacies' that serve to justify the negative view of judgement and reasoning''. However, they conclude that in recent years there has been ``. . . a change in the tone, if not the methodology, of work in this area . . . change emphasises that the models' inadequacy is the issue of interest rather than the inadequacy of the behaviour''. The change has come late in the day but reflects a major shift that occurred elsewhere in the acceptability of modelling complicated processes. Beach et al.'s final conclusions could hardly be plainer, that:

``. . . the question of the general quality [of human judgement] would become meaningless because it is meaningless. This in turn would allow us to . . . get down to work on the . . . substantive problem of how to properly integrate human judgement and reasoning into advanced technologies.'' . . .

``. . . the question of the general quality of judgement and reasoning not only is not settled, it probably never will be and it probably never need be . . .''.


Both statements are supportable13 from experience. The relevance of Beach et al. to foresight and to advisers is clear:

* the use of judgements is inevitable, so recognising the fallibility of both judgements and models of decision is important

* possibly the models are more fallible than human reasoning, and the use of complex computational procedures does not sanctify the outcome.

Human judgement is clearly related to expert opinion; it is what the expert uses when he gives his opinion. Hogarth (1987) catalogues concerns for the fallibility of human judgement, whilst acknowledging that despite these fallibilities human judgement has so far ensured survival of the race. Hogarth examines human fallibility by asking why it is that humans, who can and do regularly make considerable physical and conceptual achievements, exhibit systematic biases in their judgements. Judgement is concerned with adaptation to survive in the future. Human society has been through several adaptations in which physical survival had to be ensured before mental and conceptual skills could be developed; the similarity with the base of Maslow's hierarchy is itself evident. The relationship between human judgement and foresight is also self-evident; so what can be expected of human judgement?14

Hogarth says judgements involve biases that result from ``. . . trade-off between different types of error in the design of the human system''; thus the following contributions to bias can be identified:

* Failure to appreciate or identify randomness.

* Inconsistency in judgements made across time; in foresight studies, inconsistency over the span of a study of limited duration requires explanation. Volatility between studies is to be expected.

* Learning and the willingness to learn are important aspects of foresight.

* Memory inadequacies and imperfections; a predisposition to selective recall and perception leads to interpreting information in ways that support underlying expectations and hypotheses about the future.

* Computational capability of the human mind is limited.

* Order can be brought out of chaos but this may limit creativity.

* Cognitive myopia, the unwillingness to use imagination or to conduct thought experiments about the future.

Judgement enters into scenario writing and that is the next field for consideration. Reasoning about the future is more art than science; recognition of this point was the reason for Wright's (Harries, 1999) twofold attack on decision analysis. First, Wright declared that scenario development was creative and based on causal reasoning, whereas decision analysis did not encourage speculation about ``. . . all possibilities''. The last reference is unfortunate since it is a psychological and logical impossibility unless the boundaries to the circumstances15 are declared. Wright's second attack was that probability calculus made decision analysis ``. . . unwieldy and less useful than scenario planning''. Creating visions of the future is the purpose of scenario planning, to which foresight is an essential adjunct. The separation of functions is not pedantry, since the introduction of scenario building too early inhibits the search for trends, issues and events that are likely to be of importance in shaping the future, and weakens thinking about highly uncertain and contentious matters that should rightly be subject to causal reasoning.

Scenario preparation16 is not a scientific process, but is an art form with some underlying theoretical principles and association with the disciplines of causal reasoning. The popularity of scenarios lies in their perceived ability to capture the complexity of the real world, allowing the nested structure of scenarios within scenarios to be presented. `Nesting' visions or scenarios is common when the future of one entity arises from it being a subset of a wider one.

It is obvious that foresight depends crucially on expert opinion for its substance and vice versa. Some studies seek to understand the substantiveness of the expert opinions they obtain and exhibit in their output. Those studies that ignore this point are likely to be opinionated `hand waving' and the output should deservedly receive limited credibility. Human society being what it is, these strictures often go unheeded, so that the selection of experts often proceeds by patronage, with consequent effects in the advisory sphere. It seems, to use Holling's (1977) notions, that unlike engineering systems, where the `fail-safe' principle dominates design, human society remains `safe when it fails' through processes of adaptation that lie well outside the domain of subjective probability.

4 Methodological issues – opinion and foresight

There are two practical issues associated with the question `who is an expert?'; these are how to find people who are considered to be experts and, having done so, how to make the elicitation process acceptable to those involved. The first can be dealt with by the co-nomination process (Nedeva et al., 1996) used to find a large number of experts to take part in the Delphi study in the UK Technology Foresight Programme (TFP) of 1994–95. For the second, the practical process used to elicit opinion from a group of experts in a study of the future of the UK (Lipinski and Loveridge, 1982) will be described. Chronologically these two events were widely spaced; the co-nomination process was used almost 20 years after the study of the future of the UK. If co-nomination had been known earlier it might have made the UK study much easier, but that is mere speculation. Here the two methods will be described in their correct technical sequence.

4.1 Co-nomination – a procedure for finding a group of experts

Finding people with demonstrable expertise is difficult; finding a lot of them is many times more so. There are ways of finding special people in any society by `asking around' after having identified a starting point. The process has a certain inevitability about it, tending always to lead towards particular groups of people recognisable anywhere as `the great and the good' of the particular society or community. These recommendations will be vouched for, whilst it is only sensible to do some `due diligence' on each one. Alternatively, the starting point can be with the appropriate professional institutions and their membership lists, but after a while there is a gravitation toward the `asking around' principle, following recommendations from the institution itself. Outwardly, there may seem to be nothing wrong with this process, which is used, more often than not, in filling positions in advisory committees and similar bodies. At this point it is as well to look back to the characteristics of an expert set out in the earlier discussion of subjective opinion. With these and the accompanying self-assessment criteria in mind, the `asking around' process does not show up well, particularly if one adds that the search process should, as far as possible, find a slice through the demographic variables of age, gender and occupational position. The co-nomination procedure described here aims to respond to all of these criteria, but it can be thwarted by the difficult step of identifying an initial group.

There were three reasons why the co-nomination method was used in the UK TFP:

1 Political advice that the TFP should seek advice from people beyond those already advising government, especially as this was the first nationwide study of this kind to be held in the UK.

2 Many study panels were to be formed for which members would be needed.

3 The planned Delphi survey would require a correspondingly large number of potential respondents with known characteristics. The group could not be identified in any time-honoured way; this had been demonstrated by experience elsewhere, such as in Germany.

The origins of co-nomination lie in bibliometrics and in the mapping tools used either in classifying clusters of researchers or in identifying academic–industrial networks of researchers (Georghiou et al., 1988). Co-nomination is a survey process that starts with a selected group of people who are thought to be representative of the wider group to be explored. Each of these respondents is asked to identify up to a maximum specified number of people who meet defined criteria, so that if there are N originals, each of whom nominates up to a maximum of M others, then the sample grows to a maximum of N*M people. The further group of M is then asked to repeat the nomination process under the same rules, with further rounds as necessary (this leads to the notion of `snowball' sampling).
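The N*M growth compounds with each round of nomination. The following sketch (the numbers are purely illustrative) computes the upper bound on sample size when every respondent nominates the maximum and no names recur:

```python
def snowball_upper_bound(n_initial, max_nominations, rounds):
    """Upper bound on snowball-sample growth: each newly added respondent
    nominates up to `max_nominations` others, and (optimistically) no
    nominee is ever named twice."""
    new = n_initial
    total = n_initial
    per_round = [n_initial]
    for _ in range(rounds):
        new = new * max_nominations  # every new respondent nominates the maximum
        total += new
        per_round.append(new)
    return total, per_round

# e.g. 50 initial respondents, up to 8 nominations each, two iterations
total, per_round = snowball_upper_bound(50, 8, 2)
print(total, per_round)  # 3650 [50, 400, 3200]
```

In practice recurring nominations shrink the sample well below this bound, and it is precisely those recurrences that the co-nomination analysis exploits.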

The underlying principle of co-nomination lies in the generation of a network based on recurring pairs of names (revealed from the questionnaire). It is assumed that similarity between the nominated person's work and that of the co-nominees implies a cognitive link. So if:

* A nominates B, C and D

* the presumed links are B-C, B-D and C-D.

If nominations recur in two or more responses, then the likelihood of these links is increased. Nomination itself is an indication of the influence the nominee is having on the respondent (the nominator), which leads naturally to the hypothesis that frequent co-nomination identifies influential actors in a field and, through a network, illustrates their interrelationship. As the questionnaire makes no reference to the quality of the work of the individuals concerned, the notion of a `popularity contest' is avoided. Co-nomination rather than simple nomination is preferred since it avoids `prestige' effects, where respondents link themselves to `key' actors, whilst it also enables a scale that allows the strength of links to be examined at different cut-off levels. Those people who are highly co-nominated clearly occupy special positions in the community involved. In this way the identification of potential participants is placed firmly in the hands of the community involved.
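The counting of recurring nominee pairs can be sketched as follows (the respondent names and nominations are invented for illustration):

```python
from collections import Counter
from itertools import combinations

def conomination_links(responses):
    """Count how often each pair of nominees is co-nominated.
    `responses` maps respondent -> list of nominees; for each response the
    presumed links are all pairs among that respondent's nominees."""
    pair_counts = Counter()
    for nominees in responses.values():
        for pair in combinations(sorted(set(nominees)), 2):
            pair_counts[pair] += 1
    return pair_counts

# A nominates B, C and D -> presumed links B-C, B-D and C-D; a second
# respondent E also names B and C, strengthening the B-C link.
responses = {
    "A": ["B", "C", "D"],
    "E": ["B", "C", "F"],
}
links = conomination_links(responses)
print(links[("B", "C")])  # 2: co-nominated twice, a stronger presumed link

# Examining link strength at a cut-off level, as the text describes:
strong = {pair: n for pair, n in links.items() if n >= 2}
```

Raising or lowering the cut-off in the final line corresponds to examining the network at different link strengths, as described above.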

The process is outwardly straightforward, but as ever requires much attention to the detail of questionnaire design and survey management. These were increased many times over in the UK TFP by the multiplicity of panels and associated sectors around which it was organised. It is best to start with the design requirements of the questionnaire, ignoring the complications of the UK TFP. When completing their questionnaire the respondent:

1 Gives clear and exact demographic details including title, address, age, gender, affiliation, job role (chosen from a set of defined categories) and other similar information.

2 Declares their field of expertise and level of expertise in it, chosen from the self-assessment of expertise schedule (Loveridge et al., 1995).

3 Lists, up to a maximum specified number, people whose work has influenced the way the respondent does their work, and estimates the level of expertise of each nominated person; immediate colleagues in their organisation are excluded.

4 Similarly, lists a specified maximum number of colleagues who have influenced the way the respondent works (this is usually a much smaller number than in 3 above).

5 Suggests the name of one person whom they believe to be of outstanding influence, not status.

The questionnaire is intended to be completed by the individual and not under supervision nor in a group setting. The purposes of steps 1 and 2 are self-evident. Steps 3 and 4 are intended to prevent an individual from playing the `promotion game' by quoting colleagues only. Step 5 provides the opportunity to list someone who may lie well outside the field concerned but may still be of outstanding influence.

The survey requires a rigorously constructed and managed database. Two common problems are:

1 The necessity to ensure that each respondent nominated more than once receives only one questionnaire, to avoid bias and to avoid giving the impression that the survey is badly managed or out of control.

2 Misspellings of names, which could also introduce errors into the process, must be eradicated, along with alternative groupings of initials.
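Point 2 is essentially a record-linkage problem. A crude sketch of the kind of name normalisation involved (the heuristic and the names are illustrative only; a real survey database needs far more careful matching):

```python
import re

def normalise_name(raw):
    """Crude canonical form: lowercase surname plus first initial, with
    punctuation and spacing variants collapsed (an illustrative heuristic,
    not a production-quality matcher)."""
    cleaned = re.sub(r"[.\s]+", " ", raw.strip().lower()).strip()
    parts = cleaned.split()
    surname = parts[-1]
    initial = parts[0][0] if len(parts) > 1 else ""
    return f"{surname},{initial}"

# "J. Smith", "John Smith" and "J Smith" collapse to one database record
variants = ["J. Smith", "John Smith", "J Smith"]
print({normalise_name(v) for v in variants})  # a single canonical key
```

Keying the participant database on a canonical form of this kind is one way to ensure a multiply-nominated respondent receives only one questionnaire.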

The application of co-nomination in the UK TFP is a marker that separates that programme from any other to date. It is hardly surprising that some practical problems were met and some lessons were learned for the future use of co-nomination to identify a large body of experts; for example:


* For an exercise as complicated as the UK TFP, constructing a representative initial list of respondents was difficult and needs to be improved in future.

* The time-scale was not ideal. It allowed time for only two iterations of the questionnaire and only a short time for analysis of the outcome.

* During later parts of the programme, respondents to the co-nomination survey had a higher propensity to participate than people who were added to the database of participants by other means, perhaps indicating that they felt more in touch with the programme than later additions.

Since the use of the co-nomination process in the UK TFP, one of the UK research councils proposed using the method to identify peer-review colleges, but time pressure prevented this. More successful was the way co-nomination was used in the South African foresight programme to identify expert participants, where political and ethnic constituencies had to be seen to receive due recognition. Recently, in planning a second round, the Swedish programme organisers have been considering using co-nomination to avoid the problem of closed panels experienced in their first programme. The outcome of the co-nomination process also has clear benefits in determining appointments to panels and advisory committees, as it has the potential to reveal who is an expert in undeniable terms. However, membership of panels and advisory bodies brings with it the exercise of power, an attraction in itself, as well as, or perhaps even more than, expertise, whilst rational methods of assessing who is an expert may also deny patronage to those in whose hands the power to appoint lies. To quote Cooke again: `An expert who knows more about a particular subject than anyone else may be able to talk an opponent under the table. This does not mean that his opinions are more likely to be true.'

4.2 Elicitation of expert opinion – a practical approach

Historically, the methods described here grew out of a partial dissatisfaction with the Delphi and cross-impact methods used in technology forecasting. The first steps were made in the method used in the climate change study, conducted under a DARPA contract, referred to earlier.8 In that study the first steps in eliciting subjective opinion were introduced via a questionnaire. The methods described here were an improvement on those used in the DARPA contract and were used in a study of the future of the UK made in 1978. It was a particularly apt time for the study, as at that time many studies had portrayed the UK as the `sick man of Europe' with a bleak, if not hopeless, future. The study was delivered to its sponsoring clients at the end of the infamous `winter of discontent' and within weeks of the 1979 General Election. The study's 17 clients from the UK and the USA agreed to the partial publication of the methods used (this is what is drawn upon here) but nothing more.

All questionnaire-based elicitation lacks face-to-face contact. The opinions expressed by the respondents are necessarily `black boxes' and remain so unless some subsidiary interviewing takes place. However, the use of questionnaires is not necessarily in tune with the time scale of corporate decision making, nor is it secure enough if opinions are sought widely. For government the first condition applies, but the notion of secrecy is being challenged frequently in the industrialised countries. Since governments and companies now face issues that more often than not have the characteristics of trans-science (Weinberg, 1972), how can they proceed to gain benefit from expert opinion?

It would help if they followed the dicta of co-nomination to find people to consult and those of the elicitation of subjective opinion in those consultations, but needless to say they do not. The first requirement of any practical elicitation process has to be its acceptance by people who do not normally think in terms of uncertainty, let alone probability. The process outlined in the ensuing paragraphs was intended to meet this problem and was used in some 100 face-to-face elicitations with varying degrees of success (no interview failed its purpose, but some were distinctly `difficult'). It is generic and was used in each interview relating to each variable identified by the clients. The principles behind the procedure will be described before setting out the procedural steps.

4.3 Assessing ability

As described earlier, the `perfect' respondent possesses three attributes:17

1 substantive knowledge in a particular field

2 the ability to cope when faced with an uncertain extension of substantive knowledge into the future

3 imagination.

How a respondent thinks about the future depends on the way he or she extends his or her knowledge into a possible but uncertain future. Unprepared, the respondent may perform badly at this task, so an assessing ability test was used, firstly, to alert each respondent to the mode of thought expected during the later elicitation and, secondly, as an attempt to characterise how the respondents coped with their inner uncertainties.

The test tries to measure how respondents allocate probability to uncertain values of a variable. Knowledge in this field was slight in 1978 and probably still is, so no claims for the test are made beyond empirical experience – it seems to work. An important outcome of the test is the subsequent capability to:

* convert all opinions to a common basis of `perfect assessment'

* merge them according to the rules of objective probability.

The test is carried out in two parts as follows:

Step 1 The respondent replies probabilistically to ten questions from the Guinness Book of Records (Morris, 1979). The respondent is asked to quote a high and a low limit so that there is an 80% probability that the answer quoted in the Guinness Book of Records lies between these limits. Whilst the questions are likely to be outside the respondent's special knowledge, their general knowledge should enable them to set most limits correctly;18 the target for a good assessor is eight out of ten and this is a difficult task.


Step 2 The respondent replies in the same manner to a further ten questions, but this time they are drawn from a publication such as the UK's Social Trends or a corresponding publication elsewhere. However, in this step the respondent is asked to give two sets of responses corresponding to the 50% and 80% probabilities of embracing the correct result.

'Perfect assessors'19 would score eight and five out of ten correct results at the 80% and 50% probability levels respectively. Practical experience shows that experts most frequently capture three out of ten correct results at the 80% level on the first occasion and that with practice scores can be improved consistently; there is a lesson here about the conditions under which all expert advice is sought and given. These points are illustrated in Figures 1 and 2.
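The scoring of the assessing ability test can be sketched as counting how many quoted intervals capture the published answer. This is a minimal sketch: the intervals and answers below are invented for illustration, not drawn from the study.

```python
def interval_hits(intervals, truths):
    """Count how many published answers fall inside the quoted limits."""
    return sum(low <= t <= high for (low, high), t in zip(intervals, truths))

# Ten hypothetical 80%-probability intervals and the published answers.
intervals_80 = [(100, 300), (5, 15), (1000, 5000), (2, 8), (50, 90),
                (10, 40), (0, 1), (200, 800), (3, 12), (60, 120)]
answers = [250, 20, 3000, 5, 70, 35, 2, 500, 8, 100]

hits = interval_hits(intervals_80, answers)
# A 'perfect assessor' would capture about eight of ten at the 80% level;
# first-time respondents typically capture around three, their limits
# being too narrow, as the text reports.
print(f"{hits}/10")
```

The same counting at the 50% level, against a target of five out of ten, gives the second part of the calibration measure.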

Figure 1 Probability allocation in the assessing ability test. With acknowledgements to Lipinski and Loveridge (1982)

Figure 2 Distribution of assessing performance. With acknowledgements to Lipinski and Loveridge (1982)


4.4 The elicitation process

During the interview the expert is asked to respond interactively to questions that enumerate a nine point cumulative probability distribution. The distribution represents his or her substantive knowledge, simultaneously tempered by his or her imagination and how he or she copes with subjective uncertainty as imagination leads from the present (where substantive knowledge is valid) into the future (where that knowledge may not apply). It is no accident that this part of the entire interview is placed centrally. Because large numbers of opinions cannot be sought, as would be the case in the frequentist tradition, and because when dealing with the future there is no system of measurement comparable to physical measurement, opinions are subjective by definition. The elicited data are the respondent's best judgement – his or her opinion – about the variable under discussion. Elicitation raises a linguistic problem discussed below and shown in Figure 3.

Figure 3 Extraction of nine point cumulative probability distribution. With acknowledgements to Lipinski and Loveridge (1982)


Respondents often find it difficult to relate to probability levels without a reference framework, a problem recognised by Alpert and Raiffa (1969). The problem was explored by Lipinski et al. (1973) and has latterly caught the attention of other investigators including Beyth-Marom (1982), Delgado et al. (1998) and Chakraborty (2001). However, in the procedure described here Alpert and Raiffa's terms were the best available and probably remain so. Alpert and Raiffa's work on word associations is summarised below (Table 1):

Table 1 Word association probability levels (Alpert and Raiffa, 1969)

Probability level Word association

99% and 1% Astonished or astounded

90% and 10% Surprised

50% Even bet

These word equivalents were used often during the interview procedure and there was little doubt that respondents found them helpful in orientating their thinking. As there were no easy word associations with the 75, 66, 33 and 25% intervals, respondents found these more difficult to deal with. There are similarities to the variable interval method discussed earlier, but it is also a demanding step to reason out the difference in opinion between a tercile and a quartile. There is no doubt that these responses were sometimes guesswork born of frustration at the demands of the mental reasoning, which was detectable in the subsequent processing of the data.

Whilst the elicited nine-point distribution conveys succinctly the respondent's opinion about future uncertainties, how respondents reached their conclusions is derived from their 'thinking out loud' during this stage of the interview. Useful as it is, the cumulative distribution hides important information that can be derived from its probability density function (PDF). The PDF can be interpreted in the following ways:

* A normal distribution indicates that the respondent's opinion is based on a single dominant trend. If that peak is sharp and encloses a high percentage of the area under the curve, that implies a great deal of certainty in the respondent's mind about the future of the particular variable.

However, single peaked functions are rare; this leads to the second and more complicated, but at the same time more interesting and more valuable, interpretation as follows:

* Where the PDF exhibits more than one peak, this indicates that the respondent has a mental model of the future for the variable and a way of dealing with the uncertainties that surround it; that leads to the conclusion that more than one dominant trend is possible, which is important information for decision makers.

Several independent PDFs are valuable each in their own right, but it is usually more helpful to merge the distributions to create a single PDF that represents the joint opinion of the 'n' individuals. In this step, the second part of the assessing ability test is used to account for the differing levels of expertise and differing assessing abilities of individual respondents. If wished, the expected value of the joint opinion can be calculated either as a single value for the entire distribution or in, say, three equally likely segments.
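The mechanics of merging can be sketched as a weighted pooling of densities derived from each elicited cumulative distribution. This is only an illustrative sketch, not the study's actual computation: the nine-point values and the calibration weights below are invented, and a simple linear opinion pool stands in for the full correction to 'perfect assessment'.

```python
import numpy as np

# Probability levels used in the nine point elicitation.
levels = np.array([0.01, 0.10, 0.25, 0.33, 0.50, 0.66, 0.75, 0.90, 0.99])

# Hypothetical values of the variable quoted by two respondents at each level.
expert_values = {
    "A": np.array([10.0, 12, 14, 15, 17, 19, 20, 23, 27]),
    "B": np.array([8.0, 11, 13, 14, 16, 20, 22, 26, 30]),
}
# Hypothetical weights standing in for the assessing-ability correction.
weights = {"A": 0.7, "B": 0.3}

grid = np.linspace(8, 30, 400)
joint_pdf = np.zeros_like(grid)
for name, values in expert_values.items():
    # Interpolate each cumulative distribution onto the common grid...
    cdf = np.interp(grid, values, levels, left=0.0, right=1.0)
    # ...difference it into a density, and pool the weighted densities.
    joint_pdf += weights[name] * np.gradient(cdf, grid)

# The pooled density integrates to roughly one, and its peaks show
# whether the joint opinion is single- or multi-modal.
area = float((joint_pdf[:-1] * np.diff(grid)).sum())
```

The expected value of the joint opinion, or of each of three equally likely segments, follows directly by integrating value times density over the corresponding part of the grid.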

4.5 Elicitation process – the practical procedure

The respondent received a description of the interview process in advance, which was accompanied by general information about the study so that each respondent could understand his or her role in the study as a whole. On the day, the sequence of events proceeded as follows:

Step 1 Respondent was asked to use the self-assessment of expertise criteria to assess his or her level of expertise.

Step 2 Experience relating to substantive knowledge, assessing ability and imagination was explained. The respondent then undertook the first part of the assessing ability test, based on the Guinness Book of Records, to alert the respondent to their imperfect treatment of uncertainty, as they often allocated limits that were too narrow (see Figure 2).20

Step 3 The respondent was then taken through the elicitation of the nine point cumulative distribution. Initially the respondent was given information about other impinging variables (if any).21 The respondent was guided through the elicitation, being encouraged to 'think out loud' so that the variables and events being manipulated mentally in framing his or her response could be noted and recorded along with his or her estimate of the value of the variable at each probability level. The interviewers and interviewee were in almost constant dialogue throughout this step, for which there was no time limit.

Step 4 The second part of the assessing ability test was administered last and completed the interview. Here the questions were drawn from a publication such as the UK's Social Trends.

Generally, the interviews lasted one and a half to two hours.

The procedure described was only the initial part of a complicated process of creating scenarios for the future of the UK. However, the subsequent steps are not relevant to the present paper.

5 Implications for practice in foresight programmes

Foresight programmes, public or private, are mostly organised around panels with additional activities such as a Delphi survey; the approach tends to be mechanistic. Much of the original inspiration came from the desire to imitate the Japanese technology forecasting studies, based on Delphi surveys, which at first accentuated the mechanistic approach to what has become known illogically as 'foresight.' The term may be a form of shorthand for a wider intent now that recent programmes have tended to migrate from Delphi based surveys, but what that intent is has not been made clear. Consequently, there is a prospect of drifting towards an activity that gives the impression of being a basis for national planning using scenarios without the depth of understanding of the underlying processes that is necessary. Outwardly at least, a deep understanding of the role of expert opinion and its roots in subjective probability, human judgement and preferences is not overt amongst the current crop of foresight programmes. Where self-assessment of expertise is included the defining rules are not rigorous, whilst the interpretation of the outcome of Delphi surveys often sticks to rigid conventions that are entirely inappropriate, as will be discussed shortly. Indeed, the interpretation hides more important information than it reveals.

Some recent foresight programmes have abandoned the formality of methods such as Delphi in favour of panel discussions that amount to brainstorming and strengths, weaknesses, opportunities and threats analyses. These are conducted by panels formed by unknown processes that are possibly self-selecting amongst a small group of people well known to each other, with little attention being paid to the desirable influences of transparency and the strictures implied by the use of the co-nomination process. Worse still, continuity of corporate memory from one study to the next has not featured highly outside Japan and Germany.

5.1 Some personal experiences

Hull's review of subjective probability assessment contained some examples that approximate Loveridge's practical experience (Loveridge, 1981) in the early 1970s in forecasting sales of a new product, with no previous history, for several years ahead on the basis of eliciting subjective opinion. The 'experts' were marketing team members who obtained from potential customers firstly, their likelihood of buying the product at all and secondly, their likely purchases, not as single figures but as distributions, in each of the following three years. The team members were 'trained' in the elicitation process and were taken through a debriefing after each potential customer visit. The method was effectively that of fixed intervals but with a number of complications arising from the number of potential customers, different firm sizes and different purchasing intentions. These factors complicated merging the distributions to create a single set of three year distributions and presented a number of problems as follows:

* Technically the model assumed that there was no competition as the product was unique, so that the probability of buying was either zero or one. However, this situation did not persist, so that intermediate probabilities had to be permitted.

* The assessors had difficulty with the elicitation procedure and frequently the distributions did not sum to unity. The choice was then either to repeat their debriefing or to assume that the form of the distribution was 'correct' and to normalise it without further reference to the assessor. Provided the error in the probability sum was small, the second course was used.

* The greatest difficulty lay in how the elicited data was approved by senior managers before the combined forecast was computed. Here the pitfall was the simple and obvious one of senior managers altering elicited data arbitrarily and with only minimal consultation. The resulting distributions were chaotic, often with summations far removed from unity. The result was frequent, long and sometimes fractious meetings, causing the approval process to become protracted.
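The normalisation choice described in the second point can be sketched as a simple rescaling with an error check. The tolerance below is illustrative, an assumption rather than a figure from the project:

```python
def normalise(probs, tolerance=0.1):
    """Rescale an elicited distribution so it sums to one, keeping its shape.

    If the summation error exceeds the tolerance, the debriefing should be
    repeated instead (the tolerance is illustrative, not from the text).
    """
    total = sum(probs)
    if abs(total - 1.0) > tolerance:
        raise ValueError("probability sum too far from unity: repeat debriefing")
    return [p / total for p in probs]

elicited = [0.22, 0.31, 0.28, 0.24]   # sums to 1.05: small error, so rescale
normalised = normalise(elicited)       # now sums exactly to one
```

The design choice is the one the text describes: trust the shape of the distribution, correct only its scale, and fall back on the assessor when the error is too large to correct silently.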


The initial foray into elicitation lasted about two years and led to the formulation of a simpler procedure based on eliciting a three point triangular distribution. The revised procedure recognised the frustration that arose from the tedium, for the assessor, of the elicitation process. The decision to elicit less information was a trade-off to retain the cooperation of the assessors. The revised procedure was facilitated by:

* providing the assessors with pictures of normal and left and right skewed three point distributions

* asking the assessor to obtain just three numbers to define the distribution: the maximum and minimum sales and the 'best guess' or mode.
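The three elicited numbers define a triangular distribution directly. A minimal sketch, using invented sales figures:

```python
import random

# Hypothetical elicited figures for one customer's intended purchases.
minimum, best_guess, maximum = 100, 250, 600

# The mean of a triangular distribution is the average of its three
# defining points, so an expected sales figure follows immediately.
expected_sales = (minimum + best_guess + maximum) / 3

# The standard library can sample the same distribution, e.g. for a
# Monte Carlo merge of forecasts across many customers.
sample = random.triangular(minimum, maximum, best_guess)
```

This illustrates why the trade-off was attractive: three numbers are enough to recover a usable distribution, at far less cost to the assessor than a nine-point elicitation.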

Experience with the second model was limited as the project was soon moved on to another part of the organisation. However, it seemed to fulfil expectations, as simplification of the elicitation process made it much easier for the assessors to gather intended purchasing data from firms. The question of getting the data approved by senior managers remained unresolved and is likely to remain so. The experience in this project led to a far better understanding of the interpretation of the responses to a Delphi survey, which is the next subject.

Institutional foresight is usually consultative, except in the very rare circumstances of a small expert committee that consults no one but itself. The wider the consultation the stronger the process, but for those charged with reporting the outcome the price is aggregation, negotiation, re-aggregation and so on, until the process is stopped by fiat. When surveys such as Delphi are used, important detail can be hidden for reasons that are far from clear. In Delphi surveys in foresight programmes the time of occurrence of each topic in the questionnaire is the important response. In the 1994–95 UK TFP the sponsors did not allow the respondents to include a probability estimate for the time of occurrence; the probability of the event occurring in the time period declared by the respondent was assumed to be one.

The number of responses to each topic was sufficient to construct a frequency histogram representing the variation of opinion; this is common to most Delphi surveys. Often these histograms exhibit more than one peak, indicating that the respondents' opinions are divided between two or more beliefs about the likely occurrence of the topic, an eventuality in line with the earlier discussion of Lipinski and Loveridge's study of the future of the UK. However, instead of presenting this important information, there is a propensity to hide it by adopting the convention of representing the distribution of opinion not as a histogram, but as a calculated three-point distribution (the upper and lower quartiles and the median), which admits of no division of opinion. In interpreting Delphi survey data this is a strange process of denial since, if there are sufficient responses to calculate a distribution of opinion, it is inconceivable that there is not a division of opinion in some instances. The interpretation is more complicated than simply one of divisions of opinion, as the report on the Delphi survey (Loveridge et al., 1995) makes plain.

Wherever there was a clear modal value it indicated a strong consensus on those topics. Where responses to adjacent time periods differed by only a small percentage (3%) these were included in the modal value, thus tending to spread or flatten the distribution, indicating an increase in uncertainty among the population of respondents. Eventually, the outcome can be a flat distribution that falls off sharply at each end, indicating a very uncertain view amongst the population of respondents.
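The core of this reading of a Delphi histogram, looking for separated peaks rather than collapsing the responses to quartiles and a median, can be sketched as simple peak detection. The histogram counts below are invented, and the sketch omits the 3% adjacency rule described above:

```python
def peaks(counts):
    """Indices of local maxima in a histogram of Delphi responses."""
    n = len(counts)
    return [i for i in range(n)
            if (i == 0 or counts[i] > counts[i - 1])
            and (i == n - 1 or counts[i] >= counts[i + 1])]

# Hypothetical responses per time period for one Delphi topic.
histogram = [2, 15, 6, 4, 12, 3]

modal_periods = peaks(histogram)
# More than one separated peak means opinion is divided between two or
# more beliefs about the likely time of occurrence: information that
# the quartiles-and-median convention would hide.
divided_opinion = len(modal_periods) > 1
```

Applied to the histogram above, the two separated peaks flag exactly the kind of division of opinion the three-point convention suppresses.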


The interpretation of distributions with more than one peak has already been referred to above. With only five point distributions to work with, it may seem risky to introduce these more detailed interpretations. However, provided defined separating criteria are used, as indicated, then the insights gained are valuable and ought not to be obscured, since divisions of opinion can have an important influence on priority setting, on policy making and even on the fate of an entrepreneur's business plan.

As already referred to, the recent public foresight programmes in The Netherlands and the UK have abandoned formal methods in favour of panel discussions where, apart from naming each panel's area of work, direction from the sponsor has been minimal. The outcomes of The Netherlands programme (Anon., 1996) and that of the UK (Anon., 2001) are now in the public domain. The effect of the shift from structured elicitation of opinions to the vagaries of panel discussions is impossible to assess without any transparent assessment of the expertise of the people involved in the panels. In The Netherlands the Steering Group Report has effectively been 'laid on the table' for any interested party to pick up and choose ideas to exploit; there has been no overt policy intervention. What will happen in the UK remains to be seen, but anecdotally the outcome is considered 'insubstantial' and, following a review of the programme, has caused the next 'round' to be a revival of the maligned notion of 'picking winners' (Anon., 2002). Consequently, the new programme will take the form of a rolling programme of themes to be chosen by 'visionaries' whose method of selection has remained unclear.

Lastly, in these personal experiences, Lipinski and Loveridge were allowed to report, from the study of the future of the UK, expert opinion on the future price of oil. All the respondents in the elicitation rated themselves in the expert category and their joint opinion for 1985 (remember the elicitations were carried out in 1978) is illustrated in Figure 4.

Figure 4 Joint opinion on 1985 price of oil. With acknowledgements to Lipinski and Loveridge (1982)


Figure 4 illustrates the level of confidence the experts had in their opinions, which indicated a price for oil in 1985 in real terms of $12.50 per barrel. The same respondents forecast that the price of oil would remain virtually unchanged in real terms for the indefinite future (then up to 1995). Using a simple assumption of a 7% annual inflation rate, the contention made in 1978 has a remarkable correspondence with what has happened over the following 23 years. In 1978 the price of oil was commonly expected to be in the region of $60 per barrel by 1985 and certainly by the 1990s; the actual outcome then provides a strong lesson to avoid 'knee jerk' reactions and 'bandwagon' effects. There is much that can be learned about the oil market from data of this kind; similar comments could be applied in other fields. It is doubtful if panels in foresight programmes would produce such insights simply from panel discussions.

6 Epilogue

Public foresight studies have created their own heroes and myths in a remarkably short space of time. However, the act of foresight has a long history of achievement in the development of human society despite, as Hogarth commented, the human propensity not to follow 'the rules' implied by formal studies of human judgement, subjective probability, preferences and mathematical theories of evidence. As Rivett tersely remarked in the IIASA conference, he did not see many decision theorists making decisions 'in anger' in the real world of investment or anywhere else. Whilst this may have been a cruel quip, it is a sentiment that seems to be echoed by Beach, Christensen-Szalanski and Barnes in their obvious frustration with the directions being taken by research into human judgement. Maybe this could also be reflected in the contest between human judgement and computer modelling of the earth's climate as referred to earlier.

Some final reflections on the relevance of subjective opinion to foresight programmes are a fitting way to end. In doing so these will be blended with a paraphrase of Cooke's conclusions on the notion of 'experts in uncertainty'. These reflections are as follows, where Cooke's views are given in quotation marks with my own additions in italics:

* There is ". . . a need for an overarching methodology for using expert opinion in science". Whilst this almost goes without saying, how that advice is provided is crucial, given the political interventions that become inevitable.

* Subjective or expert opinion may be a ". . . useful new form of information, but rules for its collection and processing must be developed" and these rules need to assist in ". . . building a rational consensus and . . . take into account the peculiar feature of subjective data". For this to happen there needs to be much wider appreciation of the nature of subjective opinion and the impinging fields of evidence in relation to the courts, the propensity for preference shifts and the vagaries and disputes surrounding human judgement. Extensions of knowledge of how to select 'experts' and of processes for the elicitation of expert opinion will also be needed.


* Savage's classical theory of rational decision (making) can be used as a basis for representing uncertainty and subjective probability.

* The statistical tools available for analysing expert opinion are ". . . extremely limited". The available tools are not widely known even in the research community.

* When an expert quantifies his uncertainty through a subjective probability distribution, this ". . . effectively" transforms his opinion into units of probability that permit the use of a wider range of statistical methods, both classical and Bayesian, and this is the real reason for ". . . quantifying expert opinion as subjective probability". The need to think in terms of distributions of opinion is rarely accepted in advisory committees and is almost entirely absent in foresight programmes.

* There is a need for better and user-friendly techniques of eliciting subjective probability distributions. Again the methods available are not widely known, some are highly abstruse without necessarily improving the outcome and there is often contention about the viability of different methods.

* Closely related to elicitation is the need for better ways of communicating probabilistic information to policy and decision makers; this is not a simple inversion of the elicitation process.22

* In eliciting opinions in foresight studies it is necessary to have ways of 'seeding' the expert's mind since he or she will be expected to provide responses about events that will happen in the future, but where the responses will be dependent on events with known outcomes; the process of seeding is critical, difficult and in need of improvement (author's addition not due to Cooke (1996)).

* When combining expert opinions, three operational models exist; these are:

* The classical model that can be applied when seed variables are available; its results are 'modestly' robust and the model ". . . has never yielded strange or intuitively unacceptable results."

* The Bayesian model is most suited to use with a single expert, but the outcome is very sensitive to ". . . decisions of the analyst". The model has the strongest theoretical foundation, but practical implementation remains ". . . a challenge."

* Psychological scaling models are very different, are user friendly and consensus building, but theoretical problems remain and little is known (according to Cooke) about their validity in practical applications.

If elicitation has proved to be an art more than a science, then the possibility of training people to be skilled in the art of expressing subjective opinion, the creation of 'probability assessors', has remained a matter of speculation. The possibility of 'calibrating' probability assessors remains vague whilst the experiments to do so provide conflicting outcomes. Important as the question of expertise is in any foresight programme, it is unlikely to shed much light on the difficult topic that can be broadly headed 'How good are the expert assessors?' if only because their opinions are never sought in the form of subjective probability distributions. Suffice it to say that some experts are much better performers in extending their knowledge into the future than others. Expert opinion is fundamental to foresight but its underlying nature is not well recognised by programme promoters.

References

Alpert, M. and Raiffa, H. (1969) 'A progress report on the training of probability assessors', Unpublished Manuscript, Harvard University.

Anon. (1996) A Vital Knowledge System: Dutch Research With a View to the Future, Foresight Steering Committee, June.

Anon. (2001) 'Messages from the second round of foresight', The Steering Group Report, OST, August.

Anon. (2002) Foresight: Proposals for the Future Programme, OST, June.

Barclay, S. and Peterson, C.R. (1973) 'Two methods for assessing probability distributions', Decisions and Designs Inc., Technical Report, Vol. 73, 1 August.

Barker, A. and Peters, B.G. (1993) The Politics of Expert Advice: Creating, Using and Manipulating Scientific Knowledge for Public Policy, Edinburgh University Press.

Basak, I. (1998) 'Probabilistic judgements specified partly in the analytical hierarchy process', European Journal of Operational Research, Vol. 108, pp.153–164.

Beach, L.R., Christensen-Szalanski, J. and Barnes, V. (1987) 'Assessing human judgement: has it been done, can it be done, should it be done?' in G. Wright and P. Ayton (Eds.) Judgmental Forecasting, John Wiley & Sons Ltd.

Bell, D.E., Keeney, R.L. and Raiffa, H. (1977) 'Conflicting objectives in decisions', International Series on Applied Systems Analysis, John Wiley: IIASA.

Beyth-Marom, R. (1982) 'How probable is probable? A numerical translation of verbal probability expressions', Journal of Forecasting, Vol. 1, pp.257–269.

Budescu, D.V. and Rantilla, A.K. (2000) 'Confidence in aggregation of expert opinions', Acta Psychologica, Vol. 104, pp.371–398.

Bunn, D.W. (1979a) 'A perspective on subjective probability for prediction and decision', Tech. For. & Soc. Change, Vol. 14, pp.39–45.

Bunn, D.W. (1979b) 'Estimation of subjective probability distributions in forecasting and decision making', Tech. For. & Soc. Change, Vol. 14, pp.205–216.

Bunn, D.W. and Salo, A.A. (1993) 'Forecasting with scenarios', European Journal of Operational Research, Vol. 68, pp.291–303.

Casti, J.L. (1992) Searching for Certainty: What Science Can Know About the Future, Scribners.

Chakraborty, D. (2001) 'Structural quantization of vagueness in linguistic expert opinions in an evaluation programme', Fuzzy Sets and Systems, Vol. 119, pp.171–186.

Cooke, R.M. (1991) Experts in Uncertainty: Opinion and Subjective Probability in Science, OUP.

Delgado, M., Herrera, F., Herrera-Viedma, E. and Martinez, L. (1998) 'Combining numerical and linguistic information in group decision making', Journal of Information Sciences, Vol. 107, pp.177–194.

DeWispelare, A.R., Herren, L.T. and Clemen, R.T. (1995) 'The use of probability elicitation in the high level nuclear waste regulation program', International Journal of Forecasting, Vol. 11, pp.5–24.

Dransfield, H., Pemberton, J. and Jacobs, G. (2000) 'Quantifying weighted expert opinion: the future of interactive television and retailing', Tech. For. & Soc. Change, Vol. 63, pp.81–90.


Finetti, B. de (1985) 'Cambridge probability theorists', Manchester School of Economic and Social Studies, Vol. 4, December, pp.347–346.

Foster, K.R. and Huber, P.W. (1999) Judging Science: Scientific Knowledge and the Federal Courts, The MIT Press.

Foster, K.R., Bernstein, D.E. and Huber, P.W. (1993) Phantom Risk: Scientific Inference and the Law, The MIT Press.

Georghiou, L.G., Giusti, W.L., Cameron, H.M. and Gibbons, M. (1988) 'The use of co-nomination analysis in the evaluation of collaborative research', in A.F.J. van Raan (Ed.) Handbook of Quantitative Studies of Science and Technology, North-Holland.

Green, P.E., Wind, Y. and Jain, A.K. (1972) 'Preference measurement of item collections', Journal of Marketing Research, Vol. IX, November, pp.371–377.

Hammond, K.R. and Adelman, L. (1976) 'Science, value and human judgment', Science, Vol. 194, 22 October, pp.389–396.

Harries, C. (1999) 'Judgmental inputs to the forecasting process: research and practice', Tech. For. & Soc. Change, Vol. 61, pp.75–82.

Hogarth, R. (1987) Judgement and Choice: The Psychology of Decision, 2nd Edition, John Wiley.

Hogarth, R. (1988) Judgement and Choice, Wiley-Interscience, Second Edition.

Holling, C.S. (1977) 'The curious behaviour of complex systems: lessons from ecology', in H. Linstone and W.H.C. Simmonds (Eds.) Futures Research: New Directions, Addison-Wesley.

Huber, G.P. (1974) 'Methods for quantifying probabilities and multi-variate utilities', Decision Sciences, Vol. 5, pp.430–458.

Hull, J.C. (1976) 'Review of the literature on the assessment of subjective probability distributions', Cranfield Institute of Technology, School of Management, PhD Thesis (part of).

Lichtenstein, S. and Slovic, P. (1971) 'Reversals of preference between bids and choices in gambling decisions', Journal of Experimental Psychology, Vol. 89, pp.46–55.

Lichtenstein, S. and Slovic, P. (1973) 'Response induced reversals of preference in gambling', Journal of Experimental Psychology, Vol. 101, pp.16–20.

Lipinski, A.J. and Loveridge, D.J. (1982) 'How we forecast: Institute for the Future's study of the UK: 1978–1995', Futures, Vol. 14, No. 3, pp.205–239.

Lipinski, A.J., Lipinski, H.M. and Randolph, R.H. (1973) 'Computer-assisted expert interrogation: a report on current methods development', Tech. For. & Soc. Ch., Vol. 5, pp.3–18.

Loveridge, D. (1981) 'Systems and ecology: a study of the application of systems thinking and some simple ecological principles to new business development', M.Phil Thesis, University of Sussex.

Loveridge, D., Georghiou, L. and Nedeva, M. (1995) 'UK technology foresight programme Delphi survey', A Report to the Office of Science and Technology, September.

Mayo, D.G. and Hollander, R.D. (1991) Acceptable Evidence: Science and Values in Risk Management, Oxford University Press. Here I wish to acknowledge the work of my one-time colleague Alison Dale, whose review of this book I have drawn on here.

Medsker, L., Tan, M. and Turban, E. (1995) 'Knowledge acquisition from multiple experts: problems and issues', Expert Systems With Applications, Vol. 9, No. 1, pp.35–40.

Morris, P.A. (1979) 'Bayesian expert resolution', PhD Dissertation, Stanford University, May.

Morrison, D.G. (1967) 'Critique of ranking procedures and subjective probability distributions', Management Science, Vol. 14, B, 4, pp.253–254.

Murphy, A.H. and Winkler, R.L. (1973) 'Subjective forecasting of temperature: some experimental results', in Proceedings of the Third Conference on Probability and Statistics in Atmospheric Science, American Meteorological Society, Boulder, Colorado, June.

Nedeva, M., Georghiou, L., Loveridge, D. and Cameron, H. (1996) 'The use of co-nomination to identify expert participants for technology foresight', R&D Management, Vol. 26, No. 2, April.

Peterson, C.R., Snapper, K.J. and Murphy, A.H. (1972) 'Credible interval temperature forecasts', Bull. American Meteorological Society, Vol. 53, No. 10, pp.966–970.

Saaty, T.L. (1980) The Analytical Hierarchy Process, McGraw-Hill.

Savage, L.J. (1954) The Foundations of Statistics, Dover Publications; second revised edition, 1972.

Shafer, G. (1976) A Mathematical Theory of Evidence, Princeton University Press.

Simon, H. (1957) Models of Man, New York: Wiley.

Stanek, D.M. and Mokhtarian, P.L. (1998) 'Developing models of preference for home-based and center-based telecommuting: findings and forecasts', Technological Forecasting and Social Change, Vol. 57, pp.53–74.

Stirling, A. and Meyer, S. (1999) 'Rethinking risk: a pilot multi-criteria mapping of a genetically modified crop in agricultural systems in the UK', Report on research supported by the ESRC's Global Environmental Change Programme, August.

The Royal Society (1981) 'The assessment and perception of risk', Proc. Roy. Soc. London, Series A, Vol. 376, No. 1764.

The Royal Society (1992) Risk: Analysis, Perception, Management, London: The Royal Society.

The Royal Society (1997) Science, Policy and Risk, London: The Royal Society.

Vickers, Sir G. (1963) 'Appreciative behaviour', Acta Psychologica, Vol. 21, pp.274–293.

Vickers, Sir G. (1981) 'Systems analysis: a tool subject or judgment demystified?', Policy Sciences, Vol. 14, pp.23–29.

Watkins, P.R. (1984) 'Preference mapping of perceived information structure: implications for decision support systems design', Decision Sciences, Vol. 15, pp.92–106.

Weinberg, A.M. (1972) 'Science and trans-science', Minerva, Vol. 10, pp.209–222.

Williams, P.M. (1978) 'On a new theory of epistemic probability', Brit. Jnl. Phil. Sci., Vol. 29, pp.375–387.

Winkler, R.L. (1967) 'The assessment of prior distributions in Bayesian analysis', Journal of the American Statistical Association, Vol. 62, pp.1103–1120.

Xu, Z. (2000) 'On consistency of the weighted geometric mean complex judgment matrix in AHP', European Journal of Operational Research, Vol. 126, pp.683–687.

Zio, E. (1996) 'On the use of the analytical hierarchy process in the aggregation of expert judgements', Reliability Engineering and System Safety, Vol. 53, pp.127–138.

Bibliography

Abbott, A. (1996) 'Gene tests: who benefits from risk?', Nature, Vol. 379, 1 February, pp.389–392.

Ackerman, W. (1981) 'Cultural values and social choice of technology', Int. Soc. Sci. J., Vol. 33, No. 3, pp.445–465.

Adelman, L. and Mumpower, J. (1979) 'The analysis of expert judgement', Tech. For. & Soc. Ch., Vol. 15, pp.191–204.

Aggleton, P. et al. (1994) 'Risking everything? Risk behaviour, behaviour change, and AIDS', Science, Vol. 265, 1 July, pp.341–344.

Amara, R.C. and Lipinski, A.J. (1972) 'Some views on the use of expert judgement', Tech. For. & Soc. Ch., Vol. 3, pp.279–289.

Amara, R.C. and Lipinski, A.J. (1983) Business Planning for an Uncertain Future, Pergamon Press.

Anon. (1982) 'At risk – a survey of international insurance', The Economist.

Anon. (1981) 'Business International's country risk assessment service', BI/CAS, April.

Apostolakis, G. (1990) 'The concept of probability in safety assessments of technological systems', Science, Vol. 250, 7 December, pp.1359–1364.

Asch, S. (1958) 'Effects of group pressure upon the modification and distortion of judgements', Readings in Social Psychology, 3rd Ed., London: Holt, Rinehart & Winston.

Ashby, Lord (1981) 'The assessment and perception of risk: introductory remarks', Proc. Roy. Soc. Lond. A, Vol. 376, pp.3–4.

Ashworth, J. (1997) 'Science, policy and risk', Discussion Meeting held at the Royal Society, Tuesday 18 March.

Bartholomew, D.J. (1975) 'Probability and social science', Int. Soc. Sci. Jnl., Vol. 27, No. 3, pp.421–436.

Benforado, J. et al. (1987) 'Comparative ecological risk – a report of the ecological risk workgroup', US EPA, Feb., PERA-OTIS 87/6768.

Blackman Jr., A.W. (1971) 'The use of Bayesian techniques in Delphi forecasts', Tech. For. & Soc. Ch., Vol. 2, pp.261–268.

Blokker, E.F. et al. (1981) 'Risk analysis in the chemical industry', Angewandte Systemanalyse, Vol. 2, No. 4, pp.167–182.

Bowen, K.C. and Stratton, A. (1981) 'The information content of data in defined choice situations', Jnl. OR, Vol. 32, pp.79–98.

Bowman, E.H. (1980) 'A risk/return paradox for strategic management', M.I.T.-ILP, WP, pp.1107–1108.

Brainerd, C.J. (1981) 'Working memory and the developmental analysis of probability judgement', Psychological Review, Vol. 88, No. 6, Nov., pp.463–502.

Brehmer, B. (1976) 'Social judgement theory and the analysis of interpersonal conflict', Psychological Bull., Vol. 83, No. 6, pp.985–1003.

Brockhaus, W.L. (1975) 'A quantitative analytical methodology for judgemental and policy decisions', Tech. For. & Soc. Ch., Vol. 7, pp.127–137.

Brown, B. (1968) 'The Delphi process: a methodology used for elicitation of opinions of experts', The RAND Corp., P-3925.

Cane, V.R. (1977) 'Models of subjective probability', SSRC Res. Rept., HR3609.

Christie, B. (1975) 'Travel or telecommunicate? Some factors affecting the choice', Comm. Studies Group, London: Joint Unit for Planning Research, University College London & London School of Economics.

Christie, B. and Kingan, S. (1975) 'Type of experience as a determinant of judgements of telecommunication: an arousal theory interpretation', Communications Studies Group, Joint Unit for Planning Research, University College London & London School of Economics.

Collen, R. 'A study of Sherlock Holmes based on Kostick's perception and preference inventory'.

Dale, A.J. (1998) 'Acceptable evidence: science and values in risk management', book review of the book by D.G. Mayo and R.D. Hollander (1993), Futures.

Delgado, M. et al. 'Combining numerical and linguistic information in group decision making', Journal of Information Sciences, Vol. 107, pp.177–194.

Dickey, J.M. (1979) 'Expert uncertainty and the use of subjective probability models', Private Communication.

Dickey, J.M. (1979) 'Beliefs about beliefs: a theory for stochastic assessment of subjective probabilities', Private Communication.

Dommelen, A. van (1996) (Ed.) Coping with Deliberate Release: The Limits of Risk Assessment, Tilburg/Buenos Aires: International Centre for Human and Public Affairs, Series B: Social Studies of Science and Technology 1.

Dressler, F.R.S. (1972) 'Subjective methodology in forecasting', Tech. For. & Soc. Ch., Vol. 3, pp.427–439.

Eliashberg, J. and Winkler, R.L. (1981) 'Risk sharing and group decision making', Mangt. Sci., Vol. 27, No. 11, Nov., pp.1221–1235.

Farrell, C. (1986) 'The insurance crisis: now everybody is in a risky business', Business Week, March 10, pp.62–66.

Finetti, B. de (1964) 'Foresight: its logical laws, its subjective sources', in Kyburg Jnr. and Smokler (Eds.) Studies in Subjective Probability, N.Y.: John Wiley & Sons, pp.93–158.

Fishburn, P.C. (1983) 'Research in decision theory: a personal perspective', Math. Soc. Sci., Vol. 5, pp.129–148.

Fischhoff, B. et al. (1978) 'How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits', Policy Sci., in press.

Foerster, H. von (1972) 'Perception of the future and the future of perception', Inst. Sci., Vol. 1, No. 1, pp.31–43.

Funtowicz, S.O. and Ravetz, J.R. (1990) Uncertainty and Quality in Science for Policy, Theory and Decision Library, Series A: Philosophy and Methodology of the Social Sciences, Kluwer Academic Publishers.

George, J.J. (1988) 'Expanding the R&D pipeline with reduced risk', SRI-BIP File No. D88-1222.

Gershuny, J. (1976) 'The choice of scenarios', Futures, Vol. 8, No. 6, pp.496–508.

Gillies, D. (1991) 'Intersubjective probability and confirmation theory', Brit. J. Phil. Sci., Vol. 42, pp.513–533.

Giusti, W.I. and Georghiou, L. (1988) 'The use of co-nomination analysis in real-time evaluation of an R&D programme', Scientometrics, Vol. 14, Nos. 3–4, pp.265–281.

Goupy, V. de M. et al. (1978) 'A study of political risk: Latin America, Asia, and Africa', Hudson Res. Europe, prepared for World 1995, Paris, Nov.

Green, M. and Harrison, P.J. (1973) 'Fashion forecasting for a mail order company using a Bayesian approach', ORQ, Vol. 24, pp.193–205.

Green, P.E. and Frank, R.E. 'Bayesian statistics and marketing research', Applied Statistics, pp.172–190.

Gregory, R.L. (1974) Concepts and Mechanisms of Perception, Duckworth Press.

Grether, D.M. and Wilde, L.L. (1983) 'Consumer choice and information', Info. Econ. & Policy, Vol. 1, pp.115–144.

Grove, J.W. (1985) 'Rationality at risk: science against pseudoscience', Minerva, Vol. 23, No. 2, pp.216–240.

Hammond, J.S. (1967) 'Better decisions with preference theory', Harvard Bus. Rev., Dec., pp.123–141.

Hammond, K.R. (1974) 'Human judgement and social policy', Program of Research on Human Judgement and Social Interaction.

Harris, C.C. (1993) 'p53: at the crossroads of molecular carcinogenesis and risk assessment', Science, Vol. 262, 24 December, pp.1980–1982.

Heilig, K. (1978) 'Carnap and de Finetti on bets and the probability of singular events: the Dutch book argument reconsidered', Brit. J. Phil. Sci., Vol. 29, pp.325–346.

Hirota, T. (1995) 'Innovation risk dilemma', Private Communication.

Horwich, P. (1982) Probability and Evidence, Cambridge Univ. Press.

Howard, R.A. et al. (1977) Readings in Decision Analysis, Stanford Research Institute.

Howson, C. and Urbach, P. (1991) 'Bayesian reasoning in science', Nature, Vol. 350, 4 April, pp.371–374.

Hunsaker, P.L. (1975) 'Incongruity adaptation capability and risk preference in turbulent decision-making environments', Org. Behav. & Human Performance, Vol. 14, pp.173–185.

Jones, M. (1976) 'Time, our lost dimension: toward a new theory of perception, attention and memory', Psychological Rev., Vol. 83, No. 5, pp.323–355.

Kadane, J.B. et al. (1978) 'Interactive elicitation of opinion for a normal linear model', Carnegie-Mellon University, Tech. Report No. 150, Dept. Stats.

Kahn, P. (1996) 'Coming to grips with genes and risk', Science, Vol. 274, 25 October, pp.496–498.

Kahneman, D. et al. (1982) Judgment under Uncertainty: Heuristics and Biases, Cambridge: Cambridge University Press.

Kanizsa, G. 'Subjective contours', Unknown.

Kaplan, M.F. and Schwartz, S. (1975) Human Judgement and Decision Processes, NY, SF and London: Academic Press.

Kerr, R.A. (1996) 'A new way to ask the experts: rating radioactive waste risks', Science, Vol. 274, 8 November, pp.913–914.

Kmietowicz, Z.W. and Pearman, A.D. (1981) 'Decision theory, strict ranking of probabilities and statistical dominance', Sch. Econ. Studies Discussion Paper 108.

Knief et al. (1991) (Eds.) Risk Management, London: Taylor & Francis Ltd.

Kruger, L. et al. (1987) The Probabilistic Revolution – Volume 1 – Ideas in History, The MIT Press.

Kruger, L. et al. (1987) The Probabilistic Revolution – Volume 2 – Ideas in Science, The MIT Press.

Kyburg Jnr., H.E. and Smokler, H.E. (1964) (Eds.) Studies in Subjective Probability, N.Y.: John Wiley & Sons.

Lee, T.R. (1981) 'The public's perception of risk and the question of irrationality', Proc. Roy. Soc. Lond. A, Vol. 376, pp.5–16.

Levi, I. (1979) 'Support and surprise: L.J. Cohen's view of inductive probability', Brit. Jnl. Phil. Sci., Vol. 30, pp.279–292.

Levi, I. (1983) The Enterprise of Knowledge: An Essay on Knowledge, Credal Probability and Chance, The M.I.T. Press.

Lindley, D.V. (1982) 'The subjectivist view of decision-making', European Jnl. Op. Res., Vol. 9, pp.213–222.

Long, J. (1983) 'Risk and benefit', a report and reflections on a series of consultations at St. George's House, Windsor Castle, St. George's House Paper No. 5.

Lonsdale, A.J. (1978) 'Judgement research in policy analysis', Futures, Vol. 10, No. 3, pp.213–226.

Loveridge, D. (1984) 'Seeing things differently . . .', Lecture at Inst. of Welding.

Madigan, Sir R.T. (1985) 'The anatomy of risks', SRI-BIP D83-820.

Makridakis, S. (1988) 'Metaforecasting: ways of improving forecasting accuracy and usefulness', Int. Jnl. Forecasting, Vol. 4, pp.467–491.

Malakoff, D. (1999) 'Bayes offers a "new way" to make sense of numbers', Science, Vol. 286, 19 November, pp.1460–1464.

Malsch, I. (1997) 'Nanotechnology in Europe: expert's perceptions and scientific relations between sub-areas', IPTS Technical Report Series, EUR 17710 EN, October.

Margolis, H. (1988) Patterns, Thinking and Cognition: A Theory of Judgement, Univ. Chicago Press.

Matthews, R. (1993) 'Linguistics on trial', New Sci., 21 August, pp.12–13.

Milburn, M.A. (1978) 'Sources of bias in the prediction of future events', Org. Behav. & Human Perf., Vol. 21, pp.17–26.

Miller, H.I. and Gunary, D. (1993) 'Serious flaws in the horizontal approach to biotechnology risk', Science, Vol. 262, 3 December, pp.1500–1501.

Missan, D. (1980) 'Linguistic innovation', Private Communication.

Mitchell, R.B. (1980) 'Subjective conditional probabilities – a new approach', Tech. For. & Soc. Ch., Vol. 16, pp.343–349.

Moskowitz, H. and Sarin, R.K. (1980) 'Improving conditional probability assessments for forecasting and decision making', Inst. Res. in the Behavioral, Economic and Mangt. Sci., Krannert Grad. Sch. Mangt., Purdue Univ., Paper No. 734.

Mumford, E. and Sackman, H. (1975) (Eds.) Human Choice and Computers.

North, D.W. (1968) 'A tutorial introduction to decision analysis', IEEE Trans. Sys. Sci. & Cyb., SSC-4, Sept., pp.200–210.

Osgood, C.E. (1974) 'Probing subjective culture. Pt. 1: cross-linguistic tool-making', Jnl. Communication, Winter, pp.21–35.

Otway, H.J. and Pahner, P.D. (1976) 'Risk assessment', Futures, Vol. 8, No. 2, pp.122–134.

Otway, H.J. et al. (1975) 'Revealed preferences: comments on the Starr benefit-risk relationships', IIASA, RM-75-5, Laxenburg.

Pendse, S.G. (1978) 'Category perception, language and brain hemispheres: an information transmission approach', Behavioural Science, Vol. 23, pp.421–428.

Platz, W.E. and Oberlaender, F.A. (1995) 'On problems of expert opinion on Holocaust survivors submitted to the compensation authorities in Germany', International Journal of Law and Psychiatry, Vol. 18, No. 3, pp.305–321.

Plous, S. (1993) The Psychology of Judgment and Decision Making, McGraw-Hill.

Polya, G. et al. 'Mathematics, logic, probability and statistics', in I.J. Good (Ed.) The Scientist Speculates – An Anthology of Partly Baked Ideas.

Ponder, B. (1997) 'Genetic testing for cancer risk', Science, Vol. 278, 7 November, pp.1050–1054.

Raiffa, H. (1968) Look Up, Addison-Wesley.

Raiffa, H. (1970) Decision Analysis: Introductory Lectures on Choices Under Uncertainty, Reading, Mass: Addison-Wesley.

Roberts, L. (1991) 'Dioxin risks revisited', Science, Vol. 251, 8 February, pp.624–626.

Roberts, P. (1989) 'Study project – new challenges and choices: strategic vision and the management of the UK land resource', Strategic Planning Society, April.

Rosemund, C. et al. (1993) 'Nonuniform probability of glutamate release at a hippocampal synapse', Science, Vol. 262, 29 October, pp.754–759.

Rothschild, Lord (1978) 'Risk', The Listener, 30 November, The Dimbleby Lecture.

Rowley, C.K. (1979) 'Liberalism and collective choice', Nat. West. Quart. Rev., May.

Ruspini, E.H. (1990) 'Possibility as similarity: the semantics of fuzzy logic', SRI-BIP D90-1478, October.

Sackman, H. (1974) 'In defense of Delphi: a review of Delphi assessment, expert opinion, forecasting and group processes', RAND Report R-1283-PR; reviewed by Coates, J.F. (1975) Tech. For. & Soc. Ch., Vol. 7, pp.193–194.

Sackman, H. (1974) 'Scientific inquiry or political process? Remarks on Delphi assessment, expert opinion, forecasting and group processes', RAND Report R-1283-PR; reviewed by Goldschmidt, P.G. (1975) Tech. For. & Soc. Ch., Vol. 7, pp.195–213.

Sahal, D. and Yee, K. (1975) 'Delphi: an investigation from a Bayesian viewpoint', Tech. For. & Soc. Ch., Vol. 7, pp.165–178.

Salo, A.A. and Bunn, D.W. (1995) 'Decomposition in the assessment of judgmental probability forecasts', Tech. For. & Soc. Ch., Vol. 49, pp.13–25.

Sampson, G. (1975) 'Theory choice in a two-level science', Brit. Jnl. Phil. Sci., Vol. 26, pp.303–318.

Sarin, R.K. (1978) 'Elicitation of subjective probabilities in the context of decision making', Decision Sci., Vol. 9, No. 1, pp.37–48.

Schick, F. (1979) 'Self-knowledge, uncertainty and choice', Brit. Jnl. Phil. Sci., Vol. 30, pp.235–252.

Schmeidler, D. 'Subjective probability without additivity', The Foerder Inst. Econ. Res., Tel-Aviv University.

Schomberg, R. von (1995) 'Contested technology: ethics, risk and public debate', International Centre for Human and Public Affairs, Series B: Social Studies of Science and Technology 1, Tilburg/Buenos Aires.

Simon, J.D. (1985) 'Political risk forecasting', Futures, Vol. 17, No. 2, April, pp.132–148.

Slovic, P. et al. (1991) 'Perceived risk, trust, and the politics of nuclear waste', Science, Vol. 254, 13 December, pp.1603–1607.

Slovic, P., Fischhoff, B. and Lichtenstein, S. (1981) 'Perceived risk: psychological factors and social implications', Proc. Roy. Soc. Lond. A, Vol. 376, pp.17–33.

Spetzler, C.S. and Staël von Holstein, C.A.S. (1975) 'Probability encoding in decision analysis', Mangt. Sci., Vol. 22, No. 3.

Staelin, R. and Turner, R.E. (1973) 'Error in judgemental sales forecasts: theory and results', Jnl. Marketing Res., Vol. 10, pp.10–16.

Starr, C. (1969) 'Social benefits versus technological risk', Science, Vol. 165, pp.1232–1238.

Stevens, C.F. and Harrison, P.J. (1973) Bayesian Forecasting, Pt. 1, and The Dynamic Linear Model, Pt. 2, University of Warwick.

Stirling, A. and Mayer, S. (1999) 'Rethinking risk', SPRU Report.

Stirling, A. et al. (1999) 'On science and precaution in the management of technological risk', Report to EC Forward Studies Unit, May.

Swalm, R.O. (1966) 'Utility theory – insights into risk taking', Harvard Bus. Rev., Nov/Dec., pp.123–136.

Thomas, K. (1981) 'Comparative risk perception: how the public perceives the risks and benefits of energy systems', Proc. Roy. Soc. Lond. A, Vol. 376, pp.35–50.

Tversky, A. and Kahneman, D. (1974) 'Judgement under uncertainty: heuristics and biases', Science, Vol. 185, pp.1124–1131.

Tversky, A. and Kahneman, D. (1983) 'Extensional versus intuitive reasoning: the conjunction fallacy in probability judgement', Psychological Rev., Vol. 90, No. 4, pp.293–315.

Velimirovic, H. (1975) 'An anthropological view of risk phenomena', IIASA, RM-75-55.

Vickers, Sir G. (1965) The Art of Judgement, London: Tavistock Publications.

Vickers, Sir G. (1966) 'The multivalued choice', Proc. 2nd Int. Symp. on Communication Theory and Research, Excelsior Springs, Missouri, pp.259–278.

Vines, G. (2000) 'Rethinking risk', SPRU Report.

Warner, Sir F. (1992) 'Risk, analysis, perception and management', Report of the Royal Society Study Group.

Warner, Sir F. and Slater, D.H. (1981) 'The assessment and perception of risk', Royal Society's Study Group on Risk, Proc. Royal Society of London, Series A, No. 1764, 30 April.

Weisman, J. and Holzman, A.G. (1972) 'Engineering design optimization under risk', Mangt. Sci., Vol. 19, pp.235–249.

Weizenbaum, J. (1978) Computer Power and Human Reason: From Judgement to Calculation, W.H. Freeman & Company.

Wilde, L.L. (1980) 'A model of "satisfactory" choice: optimal and non-optimal satisficing', California Inst. Tech., Div. Humanities & Soc. Sci., Working Paper 363, Nov.

Winkler, R.L. (1982) 'Research directions in decision making under uncertainty', Decision Sci., Vol. 13, No. 4, pp.517–533.

Wright, G. and Ayton, P. (1987) Judgmental Forecasting, John Wiley.

Wright, G. et al. (1996) 'The role and validity of judgment in forecasting', International Journal of Forecasting, Vol. 12, pp.1–8.

Wright, M. (2000) 'Risk roulette', Green Futures, November/December.

Wriston, W.B. (1986) Risk and other Four-Letter Words, N.Y.: Harper & Row (book excerpt from Security Pacific National Bank).

Xu, Z. (2000) 'On consistency of the weighted geometric mean complex judgment matrix in AHP', European Journal of Operational Research, Vol. 126, pp.683–687.

Yager, R.R. (1979) 'An eigenvalue method of obtaining subjective probabilities', Behavioural Science, Vol. 24, pp.382–387.

Zeckhauser, R.J. and Viscusi, W.K. (1990) 'Risk within reason', Science, Vol. 248, 4 May, pp.559–564.

Notes

1 Complex here has the connotation of complexity and not simply complication as described by Charles Perrow (1984) in Normal Accidents, Basic Books.

2 It is necessary to treat this term carefully; here it implies information that has been operated on by an individual's value/norm set and involves the notion of selectivity in relation to the expert's perception of the scope of the inquiry and (i) the selection of what is perceived to be the relevant information and (ii) the bounds of the individual's information set.

3 This point will be returned to later in discussing the use of survey methods in foresight programmes.

4 See for example Perrow in Note 1 of this paper and the report into the BSE inquiry, http://www.bse.org.uk/

5 This is described later.

6 Cooke also devotes time to this concern.

7 There is a voluminous literature on probability encoding related to utility, but that is not necessarily related to scenario building.

8 'Climate Change to the Year 2000: a survey of expert opinion', conducted by the Research Directorate of the National Defense University jointly with the US Department of Agriculture, Defense Advanced Research Projects Agency, National Oceanic and Atmospheric Administration and the Institute for the Future, DARPA Contract MDA903-77-C-0914, February, 1978, and the Institute for the Future's 1978 Study of the Future of the UK (see Lipinski and Loveridge, 1982).

9 Bendectin was claimed to have caused congenital limb defects in a child as a result of her mother taking the drug during the early stages of pregnancy.

10 Due to Abraham Wald; see Savage (1954).

11 Note that this had been referred to as inconsistency by Raiffa in 1957 and also by Granberg in 1966.

12 My additions are in parentheses.

13 I pointed to the first, but in a different way, in an internal company paper in 1977, later published as Loveridge, D. (1983) 'Computers and you', Futures, December, pp.498–503.

14 In what follows I have made some personal additions, but for the sake of ease of reading these are not indicated.

15 See Popper on 'holism' and Simon on 'bounded rationality'.

16 I have described a process for scenario building in (1992) The Challenge of the External Environment, Supplementary Reading Book 1, Open Business School Course B885, The Open University.

17 These stem from empirical experience gained by the Institute for the Future, which classified experts under three categories: generalists, persons of thought and persons of present and future action. The first have a spread of interests, perception and a high level of awareness, the second have deep knowledge in a special field and the last are likely decision makers.

18 It would be very easy to quote absurd limits in order to score ten apparently 'correct' results, but that amounts to ill-considered 'hand waving' and is exactly the kind of response decision makers do not want from expert advisers.

19 Such people have been found in practical experience.

20 Whilst this part of the interview had the appearance of a party game, its intent was serious enough and the outcome often left a marked impression on the respondent.

21 In the study of the future of the UK this was important; for a single variable only the historical reference material, sent in advance, was provided.

22 Author's note.
