
European Journal of Operational Research 37 (1988) 73-82, North-Holland

Assessing the need for decision support systems

John HOLT

CAP Scientific, Scientific House, 40-44 Coombe Road, New Malden, Surrey, UK

Received December 1987

Abstract: This paper shows how a number of simple, well-known, behaviourally-based techniques were applied to a specific, complex decision-making task to define the most appropriate areas for providing decision support. The methods used were structured observation, the job inventory questionnaire and hierarchical task analysis. The task was the job of the air defence officer on a Royal Navy destroyer.

The paper addresses some of the problems of designing Decision Support Systems (DSS) in the real world. In this case, there was found to be uncertainty about the true nature of the decision maker's task and disagreement about which were the main problem areas.

The paper compares the use of an empirical, positivist approach to problem structuring based on observing and measuring the decision maker's tasks with a phenomenological approach, Checkland's Soft Systems Methodology. Phenomenological approaches see systems as existing within the mental constructs of observers.

Keywords: Systems, military, decision support, command and control, practice

Introduction

This paper describes part of a three-year research project by an operational research team at the University of Southampton to look at the provision of Decision Support Systems (DSS) in a tactical naval command and control system. When the project was initiated a number of problem areas in the system were outlined. We initially expected that we would study the system for a few months, and then build some prototype DSS which would be tested experimentally and modified as necessary.

However, the problems of identifying appropriate areas for applying decision support proved to be much more complex than we first imagined. Much of the time was spent coming to grips with the complexity of the system and understanding its problems. Our final perceptions of the areas which required decision support were very different from the initial impressions conveyed by the sponsors. This paper describes the empirical methods that were used to identify the problems.


The purpose of DSS is to provide support for decision makers in complex, relatively unstructured situations. A critical factor for the success of such systems is that they are relevant to the decision maker's needs. Sprague and Carlson argue that many systems have not been used because they do not meet the decision makers' requirements [1]. A criticism made of operational research in general, by systems theorists, is that insufficient attention is paid in OR to understanding the perceptions of decision makers within a particular situation [2]. Such issues are clearly of relevance to the provision of DSS.

The goal of this paper is to discuss the difficulties that can arise in evaluating the need for DSS in complex systems, based on our experience with the naval DSS, and to provide an empirically based framework to overcome these problems. This framework attempts to define those areas where decision makers are most in need of support, incorporating their perceptions into the analysis.

Outline of the paper

This paper is structured in the following way. To begin with, the decision-making task under study is described. The problems in defining where appropriate decision support is required are detailed.

In the project, a number of approaches to system design were evaluated. Checkland's methodology, a systems-based approach, is here discussed in detail. It is argued that the methodology developed in this study has certain advantages over these approaches in some situations, particularly where it is only possible to introduce change very gradually. It is suggested that in certain situations, Checkland's approach places too great an emphasis on decision makers' perceptions of their problems.

The methodology is described. It involves deriving an initial agreed representation of the decision maker's job, and then assessing priorities for decision support for each task in the job. This is done by rating particular attributes of decision-making tasks.

Two behaviourally-based empirical approaches to collecting data for the rating procedure are presented and discussed in this paper. The use of these methods is described and the results from applying them are outlined.

Overview of the problem area

Description of the decision maker's task

The aim of this project was to design decision aids for the air defence officer on a Royal Navy Type 42 Destroyer. This officer is responsible for defending the ship against attack from aircraft and missiles, and is usually a Lieutenant Commander. The Type 42 is the primary air defence ship in the Royal Navy. It has to provide defence for a group of ships. The main defence on the Type 42 is provided by the Seadart missile systems and by the use of fighter aircraft under the ship's control. In addition, there are various Electronic Warfare devices for drawing off the radar of incoming aircraft and missiles.

The air defence officer has to co-ordinate the activities of two groups. The first group, the air picture team, processes data from different sources and tries to provide a coherent picture of the air battle. The second group reporting to the air defence officer are the teams responsible for the various weapons systems at the ship's disposal, including the fighter aircraft under its control. From the data with which he is provided, the air defence officer has to formulate and implement a co-ordinated response to the air threat for the group of ships. He is continually passing on information verbally to his own team and to other ships. He also spends much time commenting to his team members on the performance of their tasks. The air defence environment is a dynamic and rapidly changing one, so the air defence officer's job can be very demanding.

Sources of difficulty in providing decision support

We encountered a number of problems in trying to identify areas in which we could usefully provide decision support for the air defence officer. It is likely that many of these difficulties will be present in other complex decision tasks. Measuring the effectiveness of particular decisions may be difficult because there may be many background variables that affect the result. In particular tasks it may not be very clear what constitutes success. Even in a task such as naval air defence where success is clear-cut (i.e. the ship avoids being hit by a missile), it is clear from observing the training that some strategies are riskier than others, and it takes an expert to detect the difference. Disagreements can arise even between experts as to what constitutes a good strategy.

The specific sources of difficulty in defining the air defence officer's job and providing appropriate DSS were as follows:

(1) Conflicting accounts of the decision maker's task. There were conflicting accounts of what the job entailed. Different air defence officers had different ways of performing the job and identified different areas as important and in need of support.

(2) Unclear description of the decision maker's task. The job itself was not easy to define. The job description appears to overlap with other tasks in the air defence organization of the Type 42. This is because if particular individuals are inexperienced at their jobs then the air defence officer has to provide support for them. The difficulties in defining this decision maker's task meant that it was difficult to assess which, if any, parts were in need of decision support.


(3) Difficulty in changing the system. It is difficult to introduce change into the system due to the age of the architecture. Moreover, it is already operating near to capacity. For these reasons changes that are made in the system tend to be very incremental.

It is unlikely that these types of problems are unique to this particular system. The problems of the air defence officer constitute what Ackoff calls a 'mess' [3]. They are difficult to describe and formulate in a neat way and they defy easy solution.

Alternative approaches to decision aid design

In the initial stages of the project, a literature review was conducted to examine existing strategies for designing and implementing decision support [4]. The emphasis of much of the DSS literature in the management science area was on the development of particular techniques such as decision analysis [5] or simulation [6] within a very specific context. Very little has been written about structuring a problem, collecting data or diagnosing where decision support is needed. These issues, however, were the main concerns of this study.

A distinctive approach to structuring messy problems in the systems literature is Checkland's 'soft systems' methodology [7]. In the initial phase of this approach the analyst draws an outline of the situation, which would be as rich as possible, drawing on the knowledge of all those involved. The next stages of the methodology are diagnostic, attempting to show where the system can be improved. Firstly, a 'root definition', a succinct summary of the purpose of the system, is devised. Following this is the 'conceptual modelling' stage, where the analyst attempts to devise a hypothetical system that would fully meet the requirements of the goals expressed by the root definition. The analyst then examines the differences between the ideal system from the conceptual modelling stage and the existing system described in the earlier phases of the analysis. Changes which would improve the current system and bring it closer to the ideal are suggested.

This is, of necessity, a very brief description of Checkland's methodology which hardly does it justice. The approach was not used in this study although it might well have been useful. However, there are a number of ways in which the methodology would not have been sufficient, on its own, for this project.

The first problem with using Checkland's approach in this context is the excessively phenomenological nature of the method, and, indeed, of other well-known methods of problem structuring in OR, e.g. the use of 'Cognitive Mapping' by Eden, Jones and Sims [8]. In this philosophy, systems are seen as the mental constructs of observers rather than as having objective existence in the outside world. It is difficult to deny that there is a great deal of truth in this proposition. However, there are dangers in relying excessively on this type of approach for certain problems.

Checkland is critical of the traditional 'positivist' approach to OR which assumes there is an external reality waiting to be discovered by the analyst. He argues that this traditional mode of thinking cannot take account of the complexity of a human activity system [23]. However, the experience of this study is that people's perceptions cannot be relied upon. There appeared to be inconsistencies within accounts and conflicts between accounts. Because of this a positivist empirical approach, observing what decision makers actually did, was adopted. This was used in conjunction with obtaining decision makers' perceptions of problem areas.

This experience of not being able to rely on participants' own accounts of their actions is illustrated by an example of insurance claims investigators given by Graham, who describes an anthropological approach to OR [24]. When he asked them to describe their work, they gave him an account identical to the firm's guidelines. However, when he spent time with the investigators, he found that they would instigate an unofficial type of investigation when certain types of claims arose. The unconscious motive behind this behaviour was to increase their chances of detecting fraudulent claims in order to swap anecdotes with colleagues at coffee breaks.

This informal behaviour had important implications for the investigators' performance, and explained why an initial numerical analysis had produced nonsensical results. Though this accounted for up to 30% of claims, investigators were not conscious that they were frequently acting in a way that was different to the official approach. It was only by informal observation that Graham was able to discover this behaviour and point it out to those being observed.

A further reason for not adopting Checkland's approach was the very constrained nature of what recommendations could be made, for the reasons described in the previous section. Only small-scale, easily implementable changes were acceptable to the sponsors. Thus, a method was needed that focussed closely on existing tasks rather than generating broad new perspectives.

The purpose of this discussion is not to dismiss Checkland's work but to suggest that in some cases a positivist, empirical approach is more useful than a phenomenologically-based one. Where the aim of a study is to take a strategic view about the future of a system, Checkland's approach is preferable. If a study is conducted at this level of abstraction it may not matter greatly if decision makers have slight misperceptions about their tasks.

A positivist, empirical approach is likely to be more appropriate where the study is not intended to generate imaginative options for the future but rather to improve efficiency at a tactical level. It is preferable when the options are very constrained, and when inaccuracies in people's descriptions of their jobs make a difference to the results. Many studies are conducted within such limitations, where it is important if reality conflicts with people's perceptions. This may explain why positivistic approaches continue to be popular in OR.

The methods described in this paper do make use of decision makers' perceptions. However, they do not rely on them as the only source of data.

Having compared our approach with Checkland's, the following section outlines the method in more detail.

Framework for assessing decision support requirements

The investigative framework devised for indicating areas that may require decision support can be broken down into the following stages.

Initial representation of the task

It was important to have an agreed representation of the decision maker's task. This was particularly critical given our initial uncertainty about the task. Information from a number of sources was used to prepare a representation. This information was integrated into a task decomposition framework from applied psychology known as 'hierarchical task analysis' [9].

[Figure 1. The air defence officer's task (simplified): a hierarchical decomposition covering preplanning action prior to the battle (initial threat evaluation with intelligence, denial of intelligence to the enemy, assessing the strength and direction of the threat, assessing environmental conditions), the air battle (initial detection of the threat, measures against enemy aircraft and missiles) and continuing coordination activities (managing the air defence team, managing own air resources), with sub-tasks such as 1.6.1 'Monitor performance of radar tracking' and 1.6.2 'Keep account of the missiles and chaff used'.]


In hierarchical task analysis, the task of an individual is decomposed into a number of main tasks which are further decomposed into their main sub-tasks. Further decomposition continues as necessary. This is illustrated in Figure 1, which shows the main tasks in the centre of the diagram with the lines from each box going to one or more levels of decomposition. The diagram was developed iteratively, with the analyst drawing out an initial impression of the task which was then modified by feedback from experienced officers. Comments from different officers were compared and also critically evaluated in the light of other information.
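To make the structure concrete, the following minimal sketch (in Python, purely illustrative) shows one way such a hierarchy might be represented and its lowest-level tasks extracted for later rating. The task names and numbering are abbreviated from Figure 1, not the full analysis used in the study.

```python
# Illustrative fragment of a hierarchical task analysis as nested dicts.
# Empty dicts mark lowest-level tasks, the units that are later rated.
task_hierarchy = {
    "1 Air defence officer's task": {
        "1.1 Preplanning prior to the battle": {
            "1.1.4 Assess strength and direction of threat": {},
            "1.1.5 Assess environmental conditions": {},
        },
        "1.6 Continuing coordination activities": {
            "1.6.1 Monitor performance of radar tracking": {},
            "1.6.2 Keep account of the missiles and chaff used": {},
        },
    }
}

def leaf_tasks(node):
    """Yield the lowest-level tasks of the decomposition."""
    for name, children in node.items():
        if children:
            yield from leaf_tasks(children)
        else:
            yield name

for task in leaf_tasks(task_hierarchy):
    print(task)
```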

The hierarchical task analysis, together with an observation study measuring the time that was spent on different tasks (described later in this paper), helped to reconcile the ambiguities about the nature of the decision maker's job. Moreover, the hierarchical task analysis increased our credibility with air defence officers. They could see that, although we were not naval personnel, we had a good understanding of how they did their job.

Assessment of priorities for decision support

There are a number of methods of varying complexity for identifying possible areas that may require decision support [4]. The method chosen was to assess each of the air defence officer's activities according to indicators that would show whether decision support was required. The three indicators selected for this were the difficulty of the task, the significance of the task to the overall goal of doing the job effectively, and the amount of time spent on the task. It was felt that any task rated high on at least one of these variables would be a strong candidate for decision support. These measures give an indication of areas where decision support could be helpful. It would then be necessary to see whether it is feasible to provide DSS for the tasks that have been highlighted.
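As an illustration of this screening rule, the sketch below applies it to a few tasks. The ratings, thresholds and task names are invented for illustration and are not data from the study.

```python
# Screening rule: a task rated high on significance, difficulty or
# time spent becomes a candidate for decision support.
ratings = {
    # task: (significance, difficulty, proportion of observed time)
    "Long range threat evaluation":      (6.5, 5.8, 0.05),
    "Allocation of 'soft kill' weapons": (6.1, 6.0, 0.03),
    "Routine voice reporting":           (3.2, 2.1, 0.30),
}

HIGH_RATING = 5.5   # illustrative cut-off on the 0-7 rating scales
HIGH_TIME = 0.20    # illustrative cut-off on proportion of time

candidates = [
    task for task, (sig, diff, time_prop) in ratings.items()
    if sig >= HIGH_RATING or diff >= HIGH_RATING or time_prop >= HIGH_TIME
]
print(candidates)  # here all three tasks qualify on at least one indicator
```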

There are indicators that could be used apart from significance, difficulty and time. For example, quality of performance for each task, error rate and mental work load could all have been measured. However, gathering data on each of these is much more problematic and complex than for the measures used. With the time and resources we had available, it would have been necessary to have focussed only on a small number of tasks. This would have resulted in a much less comprehensive study.

Measurement of the DSS variables

Both objective and subjective methods were used to obtain the information on significance, difficulty and time spent on different aspects of the air defence officer's job. Initial observation was conducted in a very realistic training simulation of the Type 42 operations room. Training is continually being conducted here and ready access was granted for purposes of study. Using observation methods, it was possible to get accurate measures of how the air defence officer spends his time. The observation also showed the nature of the data flows to and from the air defence officer.

Two of the decision support variables, significance and difficulty, were measured by asking the decision makers to rate them for each task. To do this in a systematic way, a job inventory questionnaire was constructed. This is a list of the tasks together with a rating scale for each of the indicators being measured.

The data from the observation and questionnaire studies were then integrated. From the measurements for each task in terms of significance, difficulty and time spent, it was possible to see which tasks were candidates for decision support.

Generation of decision support

Once a small number of tasks had been highlighted as possible areas for decision support, each of these could be studied in detail to determine what was needed. By looking at a small number of tasks intensively in this way, it was possible to suggest various types of decision support. These suggestions were kept as simple as possible because of the constraints in changing the system already described. In some cases it was clear that a radical improvement was required if the system was to function effectively. Even in these cases, it was possible to recommend an incremental improvement that would assist the officer pending a more fundamental change to the system.

By these methods, areas for DSS which had not previously been discussed were highlighted and a number of new ways of providing decision support were suggested.


Observation study

Introduction

The method of observation used in this study is termed 'structured observation' [10,11]. In this method, the observer notes all the behaviour of the subject being observed and devises a set of categories into which observed behaviour may be classified. Data collection may be assisted by video or tape recording. These categories are derived from what are perceived to be the significant activities within the job itself. For example, categories used to analyse the air defence officer's job included 'Looking at the radar plot', 'Speaking to the Captain', 'Ordering the missile to be fired'.

When the categories have been constructed they can be used for conducting further observation. The observer can use a grid with a cell for each type of behaviour and tick the relevant cell when that behaviour occurs.
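A minimal sketch of such a tally grid follows. The categories are taken from the examples above, while the session events and counts are invented for illustration.

```python
# Tally grid for structured observation: each occurrence of a
# behaviour ticks the corresponding cell, giving per-session frequencies.
from collections import Counter

categories = [
    "Looking at the radar plot",
    "Speaking to the Captain",
    "Ordering the missile to be fired",
]

# Events as they might be noted during one observation session.
session_events = [
    "Looking at the radar plot",
    "Speaking to the Captain",
    "Looking at the radar plot",
]

grid = Counter({c: 0 for c in categories})
for event in session_events:
    grid[event] += 1   # tick the relevant cell

for category, ticks in grid.items():
    print(f"{category}: {ticks}")
```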

An important aspect of this method is that the categories are not derived from the literature nor from the observer's previous experience but are developed from observation of the task itself.

A consideration in using observation is the possibility of the subject being affected by the presence of the observer and the process of observation (a Hawthorne effect). In the case of the air defence officer this effect is likely to have been minimal, as they are used to being observed by instructors and visitors in the training environments in which they were being observed.

In other situations, there is no simple answer to ensuring that a Hawthorne effect does not arise. Observers can only try to sample a number of subjects in a number of different situations and be aware of the possibility that they may be receiving a distorted picture.

A further criticism of observation methods for DSS design is that the tasks for which they are designed are primarily to do with thinking, or cognitive processes, and that these cannot be inferred from observation [11,12]. With our particular study of the air defence officer this was not a problem because the essence of his job is to communicate his thoughts to others around him. In other contexts, to overcome this problem, researchers have suggested using periodic interviews or having subjects think aloud while performing a task [11].

Method

In applying the structured observation method, behaviour was recorded as it occurred by use of written notes. A tape recorder was used to help in the capture of verbal data. Afterwards, the data was analysed using a set of pre-defined behaviour categories. These categories were set up from initial observations and from knowledge of the task from other sources. The analysis was conducted by ticking the appropriate cell when an air defence officer performed an action. The frequency of occurrence of each type of behaviour could then be calculated for each session. At a later stage in the study, data collection and analysis were computerised.

Approaches to data collection

Following our study in the training environment, the opportunity came to conduct the observation study in the more realistic environment of a NATO training exercise on the North Sea. To make the best use of this, it was decided to computerise the data gathering process. Reaction to the initial results from Naval Officers suggested that the method produced an accurate account of what was being observed. The purpose of computerisation was to streamline the approach for the more arduous task of data collection under operational conditions. It was also possible to measure the duration of time spent on different tasks much more effectively with the computer.

A microcomputer-based approach to the observation

The second part of the observation was conducted on a ten-day exercise on the North Sea, watching fully trained teams perform under fairly realistic conditions. Data was collected using an Epson HX 20 portable microcomputer. The software for this task had already been developed for other purposes [13]. Tasks were entered by pressing the appropriate key at the start of a task and again on its completion.
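The sketch below illustrates the general idea of such keystroke-driven logging. The key-to-task mapping and timings are invented, and the original HX 20 software [13] certainly differed in detail.

```python
# Keystroke-driven task logging: one key press marks the start of a
# task, a second press marks its end; durations fall out of timestamps.
import time

KEY_TO_TASK = {"r": "Looking at the radar plot", "c": "Speaking to the Captain"}

open_tasks = {}   # task -> start time
durations = {}    # task -> accumulated seconds

def key_pressed(key, now=None):
    now = now if now is not None else time.time()
    task = KEY_TO_TASK[key]
    if task in open_tasks:          # second press: task ends
        durations[task] = durations.get(task, 0.0) + (now - open_tasks.pop(task))
    else:                           # first press: task starts
        open_tasks[task] = now

# Simulated session: 90 s watching the radar, then a 30 s report.
key_pressed("r", now=0.0); key_pressed("r", now=90.0)
key_pressed("c", now=90.0); key_pressed("c", now=120.0)
print(durations)
```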

Using the microcomputer greatly simplified information gathering and is likely to have increased its accuracy. The time taken for the collation of data was vastly reduced.

There were seventeen periods of observation in total, each lasting about an hour and a half. This corresponded to the memory limitations of the microcomputer and the attentional limitations of the observer.

Results of the observation study

From the results of the study it was possible to tabulate the proportion of time the air defence officer spent performing each task. This provided a very good picture of how he spent his time.

Using this type of observation data it was possible to make some useful comparisons of behaviour in different situations that would not have been possible with a less quantitative approach. Attempts were made to validate the results in a number of ways.

Comparison of behaviour in different contexts

Comparison was made of behaviour between contexts using Spearman's rank correlation coefficient, a nonparametric measure of correlation [14]. It was possible to check for differences in behaviour in the trainer from what happened at sea. The exercise at sea was conducted in a somewhat different way in the first week from the second week. Spearman's rank correlation showed that behaviour was similar over the two weeks (p < 0.01). The observation data enabled comparisons to be made between the way in which different air defence officers performed the job. There was a significant correlation between the performance of the two experienced officers who were observed during the North Sea exercise (p < 0.01).
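The comparison can be illustrated as follows. The per-category frequencies are invented, and the use of SciPy is an assumption of the sketch, not of the original study.

```python
# Rank the frequency of each behaviour category in two contexts
# (e.g. week one versus week two) and correlate the rankings.
from scipy.stats import spearmanr

week_one = [42, 17, 8, 3, 25]   # occurrences per behaviour category
week_two = [39, 20, 6, 4, 22]

rho, p_value = spearmanr(week_one, week_two)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
# A high rho with small p indicates behaviour was similar in both weeks.
```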

These comparisons were important for the overall validity of the analysis. If there had been differences because of any of these factors it would have been necessary to have looked at the data in each case separately, instead of aggregating it, as we were able to.

Validation

Because analysis of data is so convenient with the microcomputer, it was possible to check the observations against the verbal data from tape recordings made during the original data collection. This was done for a sample of three out of the seventeen periods. Using a paired t-test, for two out of three of the sample periods there was found to be no significant difference from the initial data recording at the 5% level. The difficulty in matching the third was due to the level of background noise on the particular recording. This made it difficult to pick out what was being said when the validation was conducted.
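A minimal sketch of this check, with invented counts, might look as follows; again, SciPy is assumed for the statistics.

```python
# Compare category counts recorded live with counts re-coded later
# from the tape of the same period, using a paired t-test.
from scipy.stats import ttest_rel

live_counts = [12, 7, 30, 4, 15]   # per behaviour category, one period
tape_counts = [11, 8, 28, 5, 16]

t_stat, p_value = ttest_rel(live_counts, tape_counts)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 would indicate no significant difference between the records.
```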

A further measure of validity for the observations was their consistency from period to period. If there is no underlying consistency this means either that the observation method is unstable or that the behaviour itself is subject to large variations. Either of these factors would undermine the usefulness of the whole approach. There were quite large variations in the numbers recorded from period to period. However, when the data was ranked, significant agreement was obtained for each of the ways the behaviour was grouped, using Kendall's Coefficient of Concordance [14], with p < 0.001.
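Since Kendall's W is simple to compute from its definition (as given by Siegel [14]), a sketch with invented counts follows; rows play the role of observation periods and columns the behaviour categories.

```python
# Kendall's Coefficient of Concordance, W = 12*S / (m^2 * (n^3 - n)),
# computed directly from its definition (ignoring tie corrections).
import numpy as np
from scipy.stats import rankdata

counts = np.array([
    [42, 17, 8, 25],   # period 1
    [39, 20, 6, 22],   # period 2
    [45, 15, 9, 28],   # period 3
])

ranks = np.apply_along_axis(rankdata, 1, counts)  # rank within each period
m, n = ranks.shape                                # m periods, n categories
rank_sums = ranks.sum(axis=0)
s = ((rank_sums - rank_sums.mean()) ** 2).sum()
w = 12 * s / (m ** 2 * (n ** 3 - n))
print(f"W = {w:.2f}")  # W near 1 indicates strong agreement across periods
```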

Conclusions

Some recommendations were made as a direct result of the North Sea observation study. These included a data base for finding procedural information and an evaluation of the allocation of tasks between the air defence officer and one of his team.

Structured observation was found to be a flexible and informative method of gathering data on the air defence officer's task. The study showed that the microcomputer was a tool that greatly facilitated the observation process.

Using structured observation it is possible to investigate behaviour in different contexts and to examine how different individuals do the same job. Seeing how much time decision makers spend on different tasks was important in clarifying the initial uncertainty about what the job entailed.

Questionnaire study

Introduction


The observation study could give useful information about the amount of time the air defence officer spent on different activities. It could not, however, be used to measure the significance and difficulty of tasks. The most convenient way of finding these is to ask Officers with relevant experience. In order to measure this in a systematic way a job inventory questionnaire was used. Job inventories based on ratings of significance, time and difficulty were originally developed by the US Air Force and have been used in many different areas [15,16,17,18].

There are two important measures of questionnaires as tools. There is validity, or whether they measure what they are supposed to measure, and there is reliability, or whether the measurements obtained now are the same as those that would be obtained at some later time.

Various studies show that scales of this type can produce valid and reliable data [16,19,20,21]. It was not possible to investigate all these concepts of validity and reliability in a single study of the air defence officer's task. This type of data must be collected over time. What has been looked for in this study is consistency between respondents, and a good measure of this was achieved, as will be shown in the results.


Method

The questionnaire was constructed by compiling a list of tasks from the initial task analysis. Respondents were asked to rate each of the tasks on the list on a scale from 0 to 7 on significance. They were then asked to rate all the tasks on difficulty and then on time. The approach to the construction and use of the questionnaire was based on Gael's approach, except that he derives the list of tasks from group discussions with experienced members of the target organization [18]. An extension of the standard approach that was made in this study was that respondents were asked to rate tasks within the context of two tactically very different scenarios.
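The rating structure can be illustrated by the following sketch, which aggregates 0-7 significance ratings per task and scenario. The task names, scenario labels and ratings are invented for illustration.

```python
# Aggregate questionnaire responses: each respondent rates every task
# on a 0-7 scale within each scenario; mean ratings are taken per task.
from statistics import mean

responses = [
    # (respondent, scenario, task, significance rating 0-7)
    (1, "open ocean", "Long range threat evaluation", 6),
    (2, "open ocean", "Long range threat evaluation", 7),
    (1, "coastal",    "Long range threat evaluation", 5),
    (2, "coastal",    "Long range threat evaluation", 6),
]

by_key = {}
for _, scenario, task, rating in responses:
    by_key.setdefault((scenario, task), []).append(rating)

for (scenario, task), ratings in sorted(by_key.items()):
    print(f"{scenario:10s} {task}: mean significance = {mean(ratings):.1f}")
```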

The questionnaire was administered to 19 serving Naval Officers, 16 of whom were currently working, or who had worked, as air defence officers. The other 3 had had very closely related experience. They had done similar jobs and were involved in training or development of the air defence system.

Results

For the results of the questionnaire, the analysis mainly focused on the ratings of significance and difficulty, though all three scales were examined. There were a number of areas that came out clearly in need of decision support. One was the allocation of particular 'soft kill' weapons systems used for deceiving enemy radar, an area that, to our knowledge, had not been highlighted in previous studies. Threat evaluation and weapon allocation for the whole force, as opposed to the single ship, was another area that was singled out. Long range threat evaluation was a further area. Problems with monitoring the performance of the air defence team also emerged.

Consistency of responses was used as a method of validation of the questionnaire results. Although the variance of responses as a proportion of the mean was quite high for each of the variables and each of the scenarios, the ordering of the tasks for each of the measures was consistent. Kendall's Coefficient of Concordance produced highly significant agreement for the ranking of occurrences of behaviour within each category and scenario, with p < 0.001 in each case [14].

Conclusions

The purpose of this paper has been to describe a number of practical problems in the allocation of DSS, observed in our experience with one particular complex system, which seem to have relevance to many other decision-making tasks and yet which appear to have been little discussed in the DSS literature. As well as discussing these issues, the aim has been to outline a framework and some empirical methods to help overcome these problems. These methods could easily be adapted to many other contexts.

The methods provided an objective means of reconciling the initial conflicting accounts of the air defence officer's task and of providing some structure for the 'mess'.

The recommendations from the questionnaire and observation were briefly described in the results of each section. They covered a wide area, from changing the manpower in the air defence team to very specific decision aids for particular tasks. Thus the method is not restricted to providing recommendations for decision support.

A feature of DSS design that is not fully recognised in the literature is the difficulty of implementing change. Adopting a systems approach, it would have been easy to have attempted to redesign the system completely along more rational lines. Indeed, we did make recommendations for major changes to the system. The need for most of these had already been acknowledged before the study began. What was important was to suggest improvements that were implementable and to develop a clear set of priorities for change through quantification.

Future work

The methods developed in this study have clear application for other jobs within the Operations Room of a warship, and in other command and control systems. It is hoped that there will be further opportunities to extend the work in these areas. The methods described would also be appropriate in quite different areas, particularly where decisions have to be made quickly. Certain areas of financial dealing appear to be very similar to the task of the air defence officer in that much of the information is received verbally, either by telephone or across dealing rooms, and decisions have to be made rapidly.

Discussion

The approach of this study has been to provide methods both for eliciting decision makers' perceptions about problems and to enable the analyst to see and measure for himself. The philosophy underlying our approach is similar to Wise's theory of emergent decision making. This is that "...once a decision task is 'well-structured' it is painfully obvious to all involved what the appropriate course of action should be" [22]. Much of the work on DSS seems to emphasize the development of tools and techniques [5]. It appears that understanding the problems is as much a priority as developing decision support tools. Once it is clearly understood where help is needed, developing DSS is very straightforward. The methods that have been described in this paper can help to find the structure of decision tasks.

Acknowledgements

This work was sponsored by the Ministry of Defence, Admiralty Research Establishment, Portsdown, Cosham, Portsmouth, PO6 4AA, to whom we express our thanks. We would also like to thank Dr. David Felce and the Health Care Evaluation Research Team, Southampton University, for lending us the Epson microcomputer and the software that they developed for use in observation studies.

I would like to thank Dr. Dale Cooper, formerly of the University of Southampton, now with Spicer and Pegler Associates, who supervised the project. I am indebted to colleagues from the University of Southampton who commented on various drafts of this paper, particularly Dr. Jonathan Klein and also Philip Powell and Con Connell.

References

[1] Sprague, R.H., and Carlson, E.D., Building Effective Decision Support Systems, Prentice-Hall, Englewood Cliffs, NJ, 1982.

[2] Checkland, P., "OR and the systems movement", Journal of the Operational Research Society 34/8 (1983) 661-675.

[3] Ackoff, R., Redesigning the Future, Wiley, New York, 1974.

[4] Holt, J., "The application of decision support to complex decision making", unpublished Ph.D. Thesis, University of Southampton, 1986.

[5] Barclay, S., and Randall, L.S., "Interactive decision analysis for intelligence analysts", Decisions and Designs Inc., McLean, VA, Technical Report DT/TR 75-4, 1975.

[6] Patterson, J.F., and Randall, L.S., "Advisory decision aids: A prototype", Decisions and Designs Inc., McLean, VA, AD-A098-640, 1981.

[7] Checkland, P., Systems Thinking, Systems Practice, Wiley, Chichester, 1981.

[8] Eden, C., Jones, S., and Sims, D., Messing About in Problems, Pergamon, Oxford, 1983.

[9] Annett, J., and Duncan, K.D., "Task analysis and training design", Occupational Psychology 41 (1967) 211-221.

[10] Mintzberg, H., "Structured observation as a method to study managerial work", Journal of Management Studies, February 1970.

[11] Martinko, M.J., and Gardner, W.L., "Beyond structured observation: Methodological issues and new directions", Academy of Management Review 10/4 (1985) 676-693.

[12] Stabell, C.B., "A decision-oriented approach to building DSS", in: J.L. Bennett (ed.), Building Decision Support Systems, Addison-Wesley, Reading, MA, 1983.

[13] Felce, D., Thomas, M., de Kock, U., Saxby, H., and Repp, A., "An ecological comparison of small community based houses and traditional institutions II", Behav. Res. Ther. 23/3 (1985) 337-358.

[14] Siegel, S., Nonparametric Statistics for the Behavioural Sciences, McGraw Hill, New York, 1956.

[15] Driskill, W.E., "Occupational analysis in the United States Air Force", in: P.E. Schroeder (ed.), Proceedings of the Symposium on Task Analysis/Task Inventories, Columbus: Centre for Vocational Education, Ohio State University.

[16] Gael, S., "Development of job task inventories and their use in job analysis research", JSAS Catalog of Selected Documents in Psychology 7 (1977) 25.

[17] Terry, D.A., and Evans, R.N., "Methodological study for determining the task content of dental auxiliary education programs", No. HRP 000-4628 Bethesda, MD: Bureau of Health Manpower, Education, National Institutes of Health, 1973.

[18] Gael, S., Job Analysis, Jossey Bass, San Francisco, CA, 1983.

[19] Morsh, J.E., Madden, J.M., and Christal, R.C., "Job analysis in the United States Air Force", No. WADD-TR- 61-113. Lackland Air Force Base, Texas: Personnel Laboratory, Wright Air Development Division, 1961.

[20] McCormick, E.J., and Tombrink, K.B., "A comparison of three types of work activity statements in terms of the consistency of job information reported by incumbents", No. WDD-TR-60-80, Lackland Air Force Base, Texas: Personnel Laboratory, Wright Air Development Division, 1960.

[21] Moore, B.E., "Occupational analysis for human resource development: A review of utility of the task inventory", No. OCMM-RR-25, Washington, DC: Office of Civilian Manpower Management, Navy Department, 1976.

[22] Wise, J.A., "Cognitive basis for an expanded decision theory", IEEE Proceedings of the International Conference on Cybernetics and Society, October 8-10, 1979, 336-339.

[23] Checkland, P., "Achieving 'desirable and feasible' change: An application of soft systems methodology", Journal of the Operational Research Society 36/9 (1985) 821-831.

[24] Graham, R.J., "Anthropology and OR: The place of observation in the management science process", Journal of the Operational Research Society 35/6 (1984) 527-536.