


GROUND STATIONS FOR ANALYSIS OF ELECTRONIC SURVEILLANCE IMAGERY

I D C Andrew

Defence Evaluation & Research Agency (DERA), Malvern, UK

INTRODUCTION

Since World War I, people have been taking photographs of the ground beneath an aircraft in an effort to find out what is going on in places to which they may not have direct access (see Fig. 1). In modern terms, this is known as ISR (Intelligence, Surveillance and Reconnaissance). However, photographs, in themselves, can often be of little value and it is only after the images have been 'exploited' by highly trained and experienced imagery analysts that their full worth, in terms of intelligence information, is realised.

Fig 1. ISR in WW1 (Courtesy of II(AC) Squadron, RAF)

Fig 2. Example of imagery

Today, the tasks and techniques used in ISR remain very similar, the main differences being the quantity of data to be processed, the quality of the intelligence that it is possible to gain, and the advanced technologies used to collect, analyse and disseminate the information.

This paper outlines the general formal approach which is now being adopted by the research community at DERA in support of ISR systems projects for all three UK armed services.

THE TASKS

By using electro-optical, infra-red or radar sensors, it is now possible to collect electronic imagery of large tracts of land and sea very quickly from airborne platforms, manned and unmanned (see Figs 2 & 3).

Fig 3. A modern airborne sensor platform

The raw data having been collected, it is then passed, either by direct datalink or from a storage device retrieved from the aircraft, to a ground-based imagery analysis facility. The main building blocks of the process have always been:

- input of data to the exploitation function;
- exploitation by analysis, bringing in "collateral" data (information from previous missions) and "auxiliary" data (position, etc. of the aircraft);
- output of intelligence information by means of a standard report.
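These building blocks have remained constant even as the technology has changed. Purely as an illustrative sketch, and not as a description of any fielded system, the flow can be expressed in code; all of the type and field names below are hypothetical:

```python
# Illustrative sketch only: the three building blocks of imagery exploitation
# expressed as a minimal data flow. All type and field names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AuxiliaryData:
    """Data recorded alongside the imagery (aircraft position, heading, time)."""
    position: tuple
    heading_deg: float
    timestamp: str


@dataclass
class MissionImagery:
    """Raw imagery received by direct datalink or from a recovered storage device."""
    frames: List[object]
    auxiliary: AuxiliaryData


@dataclass
class Report:
    """Standard-format intelligence report produced by the exploitation step."""
    findings: List[str] = field(default_factory=list)


def exploit(imagery: MissionImagery, collateral: List[Report]) -> Report:
    """Exploitation: the analyst interprets the imagery, drawing on collateral
    data (results of previous missions) and auxiliary data (aircraft position)."""
    report = Report()
    for frame in imagery.frames:
        # Analysis of each frame would take place here, informed by `collateral`
        # and by imagery.auxiliary (for example, to geolocate detected features).
        pass
    return report
```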

Until recently, all such imagery has been collected on “wet film”, passed through a developing process, and




then analysed by magnification of the film as it is passed over a light table (see Fig 4). Very high resolution images can be produced using film techniques, and excellent intelligence results can be obtained, but because film is essentially a physical medium there is little scope for enhancement or manipulation of the imagery in order to extract the maximum information from it.

Fig 4. Image analysis using a light table

Now, with the possibilities that computer technology has opened up, the data can be handled, manipulated, annotated, and reported on more effectively. Images can now be roamed through very quickly; those from different types of sensors can be blended together or warped to fit standard maps; they can be quickly rotated, zoomed and enhanced by the application of mathematical algorithms. Once identified, features of interest can be measured, labelled and stored away. Facilities are incorporated which enable reports to be written on the results of the exploitation for onward transmission to battlefield commanders.
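As a concrete, if simplified, example of the kind of mathematical algorithm referred to above, contrast can be stretched by histogram equalisation. The sketch below is illustrative only and does not describe the algorithms used in any particular ground station:

```python
# Hedged example: contrast enhancement by histogram equalisation, one generic
# instance of the "mathematical algorithms" that can be applied to imagery.
import numpy as np


def equalise_histogram(image: np.ndarray) -> np.ndarray:
    """Spread the grey levels of an 8-bit image across the full 0-255 range."""
    hist, _ = np.histogram(image.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_m = np.ma.masked_equal(cdf, 0)                       # ignore empty bins
    cdf_m = (cdf_m - cdf_m.min()) * 255 / (cdf_m.max() - cdf_m.min())
    lookup = np.ma.filled(cdf_m, 0).astype('uint8')
    return lookup[image]                                     # remap every pixel
```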

However, the same technology has enabled data to be collected and stored from sensors at very high speeds and in vast quantities. This has become known as “data deluge”, and it is the staff who manage and operate the ground station who are being deluged.

Typically, only around 5% of the total imagery collected is known in advance to be potentially useful; the remainder is not known to be useful but still has to be analysed in case it contains further useful information. These two situations constitute two distinct tasks, so the system design for each task has to be approached separately, but with links across certain common elements such as input of data and output of the report.

Task analysis. It is important to ensure that the system design makes the best use of the human resources that may be available, and matches the capabilities and limitations of the target population of users. It is therefore vital that a formal analysis of the system functions be carried out to an appropriate level of detail, including the tasks required to fulfil those functions (see Fig. 5).

Such an analysis, as exemplified by Kirwan and Ainsworth (1), when carried out recently for a system soon to be in service with the RAF, considered nearly 800 separate tasks, including feedback loops in the task flow.
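A minimal sketch of how such a task breakdown might be recorded is given below; the structure and field names are hypothetical and are not taken from the analysis described above:

```python
# Hypothetical sketch of a hierarchical task analysis record, so that each task
# can later be allocated to a human or to equipment. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Task:
    name: str
    information_required: List[str] = field(default_factory=list)
    allocated_to: Optional[str] = None       # "human" or "equipment", decided later
    feeds_back_to: Optional[str] = None      # an earlier task, where loops exist
    subtasks: List["Task"] = field(default_factory=list)


exploit_mission = Task(
    name="Exploit mission imagery",
    subtasks=[
        Task("Ingest imagery and auxiliary data", ["datalink status", "aircraft position"]),
        Task("Cross-refer collateral data", ["reports from previous missions"]),
        Task("Compile standard report", ["identified features", "confidence levels"],
             feeds_back_to="Cross-refer collateral data"),
    ],
)
```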

Fig. 5. System analyses

Each one of those tasks was allocated to human or equipment as appropriate.

Allocation of function. In any system, the functions to be fulfilled can be carried out by an item of equipment, or "engineering component" (hardware and, usually, also software), or by a "human component". The choice rests with the system designer, working within the project constraints of cost, technology limitations, staffing policies, etc.

However, the choice is not always straightforward. In choosing between the relative capabilities of humans and machines, there are many aspects to consider, all of which are specific to the system being designed. The capabilities of both humans and machines have increased decade by decade, with machines progressing more rapidly (thanks to human design). Therefore, the basis for choosing between a human and a machine to carry out a particular function is constantly changing. Shneiderman (2) provides a relatively up-to-date set of criteria for the choice; some examples are:

Humans:
- develop new solutions;
- draw on experience and adapt decisions to the situation;
- recognise constant patterns in varying situations.

Machines:
- maintain operations under heavy information load;
- recall quantities of detailed information accurately;
- count or measure physical quantities.

Fig. 6 presupposes (as happens in real life) an initial “stab” at allocating functions between “Engineering” and “Human”, based usually on past experience (which is often erroneous). However, it is necessary to start somewhere, and that is preferably with those functions that naturally fall into one category or the other.
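The sketch below illustrates, in very crude terms, what such a first-pass allocation might look like; the criteria labels, decision rule and example are invented for illustration and are no substitute for the detailed investigation discussed next:

```python
# Illustrative first "stab" at allocation of function, using criteria of the kind
# listed above. The labels, rule and example are invented for illustration only.
CRITERIA = {
    "needs_novel_solutions": "human",
    "needs_adaptation_to_situation": "human",
    "needs_pattern_recognition_in_clutter": "human",
    "sustained_heavy_information_load": "equipment",
    "accurate_recall_of_bulk_detail": "equipment",
    "counting_or_measurement": "equipment",
}


def initial_allocation(properties: list) -> str:
    """Return 'human', 'equipment', or 'marginal' for a system function."""
    votes = [CRITERIA[p] for p in properties if p in CRITERIA]
    if votes and all(v == votes[0] for v in votes):
        return votes[0]          # clear-cut case: every criterion points one way
    return "marginal"            # conflicting criteria: needs detailed investigation


# A function that demands both pattern recognition and a sustained information
# load comes out as "marginal" and so requires further study.
print(initial_allocation(["needs_pattern_recognition_in_clutter",
                          "sustained_heavy_information_load"]))
```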



However, it is usually the marginal choices that are also the most important to make correctly, and they often need some detailed investigation into human capabilities and/or selection in order to establish a sound understanding of the alternatives, ie technological solutions or human function. Additionally, it has to be borne in mind that personnel are discrete units and, therefore, the allocation of a task to an individual human must be done only in the context of his/her whole job (ie the summation of tasks).


Fig. 6. Allocation of function

However, once the choice is made, the clear distinction between “engineering” and “human” tasks can be drawn and the task descriptions enable the specifications to be written for the “human components” of the system. On the human factors side, the analysis yields information requirements for each task, which must be built into HCI specifications, and the required characteristics of the population of candidate personnel, eg qualifications, mental and physical abilities, previous training and experience, and the number of each type. Considerations of team-working and individual workloads have to be made at this stage as well as the building of trust and confidence in the engineering system components.

THE DESIGN

Once it has been decided to what extent people will be 'in control', the interface between the people and the equipment that forms their tools must provide the means for exercising control and for providing information on the outcome of control actions.

Systems for imagery exploitation must be designed to perform the data analysis as rapidly and effectively as possible. The design must be centred around the mission/function/task analysis that identifies the information requirements of the image analysts at every stage of the process. From this, the Human Computer Interface (HCI) can be specified and potential workload issues addressed. During the design phase, consultations and liaison with expert users provide the designers with insights into the skills that are applied by image analysts, enabling the design to support the development of those skills.

By following this formal design process, it has been possible to provide workstations for image analysts which will cope with the large amount of information resulting from such complex operations. Fig. 7 shows an example of the results of the design process.

Fig 7. Modern ISR ground station

Although it is already possible, to a certain extent, for imagery collected by one system to be used by another, there is a growing need for the various systems being developed by coalition forces, particularly NATO countries, to be “interoperable” in terms of both the equipment interfaces and HCIs. To this end, international collaboration groups have been set up.

THE FUTURE

Research programmes at DERA Malvern are constantly seeking to provide the users with more naturally intuitive interface techniques in order to overcome the problems arising from data deluge. This includes determining the effectiveness of image enhancement techniques and increasing the amount of background processing of the raw data to sift for features of interest while rejecting the remainder as being of no interest.
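A deliberately simple sketch of the idea of such background sifting is given below; a real system would use far more capable detection, and the tile-based brightness test here is illustrative only:

```python
# Illustrative sketch of background processing that sifts raw imagery for
# possible regions of interest before an analyst sees it. Purely illustrative:
# it merely flags tiles whose brightness stands out from the image as a whole.
import numpy as np


def flag_regions_of_interest(image: np.ndarray, tile: int = 32, k: float = 3.0):
    """Return (row, col) corners of tiles whose mean brightness exceeds the
    whole-image mean by more than k standard deviations."""
    mean, std = image.mean(), image.std()
    flagged = []
    for r in range(0, image.shape[0] - tile + 1, tile):
        for c in range(0, image.shape[1] - tile + 1, tile):
            if image[r:r + tile, c:c + tile].mean() > mean + k * std:
                flagged.append((r, c))        # bring to the analyst's attention
    return flagged                            # everything else is set aside
```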

Data from multi-sensor platforms, coupled with other data about the environment surrounding a target, should make accurate identification easier. However, the problem of handling all the data in an ordered way, without slowing down the rate of production of reports, poses a serious challenge to the HCI designers.

In order to maximise the output of intelligence from ground stations, there is constant striving to enable the users to concentrate on the extraction of information, rather than having to devote a significant amount of their time to manipulating the interface, eg controlling a pointer by moving a mouse about on a mat and pressing keys on a keyboard to enter alpha-numeric characters. To free them of as much of that “overhead” activity as



possible, DERA is now exploring interface techniques which, when used separately or in combination, will be as natural and intuitive as possible. Such devices include pen gesture recognition and direct voice input.

This is coupled with the development of other sophisticated tools which can carry out the mundane aspects of analysis automatically, eg detection of unknown possible targets which may lie hidden in the mass of data collected, bringing them to the notice of the analysts for their expert attention.

In addition, research is beginning into providing an "adaptive" interface. This will sense user performance and skills, as well as the immediate difficulty of the task, and adjust the interface configuration accordingly without the need for the user to intervene manually.
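By way of a speculative sketch only, such adaptation might be expressed as a simple policy driven by sensed performance and task difficulty; the measures, thresholds and configuration names below are invented and are not DERA's design:

```python
# Speculative sketch of an "adaptive" interface policy. The measures, thresholds
# and configuration names are invented for illustration only.
def select_interface_configuration(error_rate: float,
                                   median_response_s: float,
                                   task_difficulty: str) -> str:
    """Choose an interface configuration without the user intervening manually."""
    struggling = error_rate > 0.10 or median_response_s > 8.0
    if task_difficulty == "high" and struggling:
        return "assisted"    # stronger prompting, larger targets, fewer options shown
    if task_difficulty == "low" and not struggling:
        return "expert"      # dense layout, shortcuts enabled, minimal prompting
    return "standard"
```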

The research facility, known as the AGS (Advanced Ground Station), is shown in Fig. 8.

Fig 8. The Advanced Ground Station facility at Malvern

REFERENCES

1. Kirwan, B. and Ainsworth, L. (Eds), 1993, "A Guide to Task Analysis", Taylor & Francis, London, UK.

2. Shneiderman, B., 1998, "Designing the User Interface", Addison-Wesley, Massachusetts, US.

© British Crown Copyright 1999. Published with the permission of the Controller of Her Britannic Majesty's Stationery Office.