
HYPERSPECTRAL TARGET DETECTION USING MULTIPLE PLATFORM CUING

John Kerekes", David Pogorzald', John Parkes", Arnab Shaw", Daniel Rahnb

"Rochester Institute of Technology, Rochester, NY, USA"Gitam Technologies, Inc. and Wright State University, Dayton, OH, USA

ABSTRACT

Hyperspectral imaging has been demonstrated to achieve unresolved object detection through use of the spectral information. However, in many cases, these demonstrations have been in near ideal situations where the use of laboratory spectra with pristine data has led to success. Complexities introduced in real-world situations such as a cluttered urban environment make successful detection challenging. One approach to improving performance is to use the synergistic effects of multiple sensors surveying a common area. These multiple sensors can be used to cue each other and enhance detection or tracking of objects. For maximum robustness, however, one would want to minimize the complexity of processing algorithms such as those used to compensate for atmospheric and illumination effects. This paper investigates the limits on using spectra observed under one set of conditions to detect an object under a different set of conditions. The results indicate good performance can be achieved across a reasonable range of illumination and viewing conditions.

Index Terms - target detection, cuing, multiple platform, simulated data.

1. INTRODUCTION

Hyperspectral imaging has been demonstrated to achieve unresolved object detection through use of the spectral information. However, in many cases, these demonstrations have been in near ideal situations where the use of laboratory spectra with pristine data has led to success. Complexities introduced in real-world situations such as a cluttered urban environment make successful detection challenging. One approach to improving performance is to use the synergistic effects of multiple sensors surveying a common area. These multiple sensors can be used to cue each other and enhance detection or tracking of objects.

An example would be the use of a high altitude UAV-based hyperspectral sensor collecting imagery in a surveillance mode looking for objects of potential interest. Upon detection of a particular object, the approximate location and the spectrum for that object could be extracted from the image and sent to a low altitude sensor with higher ground resolution operating in the area. This sensor could then use the object's spectrum to acquire it in its field of view and, through the enhanced spatial resolution, obtain more detail and information about it. Thus the high altitude sensor has cued the low altitude sensor by sending the location and spectrum.

This process would be more robust if it could be accomplished without the complexity of compensating for atmospheric and illumination conditions, using only the measured spectra. The question then becomes how different the spectra are under these conditions, and what the limits are within which a spectrum collected at one location and viewing geometry can be used in the detection of the same object in imagery collected at a different location and geometry. This paper investigates the phenomenology and predicted performance of this cuing concept in an urban vehicle detection scenario.

2. SIMULATED DATA

While it will ultimately be necessary to collect real hyperspectral imagery under these varying conditions to demonstrate and validate the concept, we discuss here the use of highly accurate physics-based synthetic image generation tools for the analysis. In particular, we use the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model developed by scientists at the Rochester Institute of Technology [1]. DIRSIG is a first-principles image simulation environment that produces physically accurate synthetic spectral imagery of predefined scenes. This synthetic spectral imagery can then be used in a host of applications including spectral sensor development, algorithm test and evaluation, and image analyst training. DIRSIG uses detailed computer aided design (CAD) drawings for man-made and natural objects, along with material maps and associated characteristics, as first level input parameters. The standard atmospheric propagation code MODTRAN [2] is used to predict the at-sensor radiance from the scene as would be seen by a broadband or spectral imaging sensor. Detailed models for the sensor are applied to the at-sensor radiance to render a radiometrically correct simulated digital image.
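As a rough illustration of the kind of per-band radiometric bookkeeping such a tool chain performs (this is not DIRSIG or MODTRAN code; the function, variable names, and numbers below are our own placeholders), a simplified at-sensor radiance for a Lambertian surface can be sketched from an assumed solar irradiance, atmospheric transmittances, and path radiance:

```python
import numpy as np

def at_sensor_radiance(reflectance, solar_irradiance, solar_zenith_deg,
                       tau_down, tau_up, path_radiance):
    """Simplified per-band at-sensor radiance for a Lambertian surface.

    L = (E_sun * cos(theta) * tau_down * rho / pi) * tau_up + L_path
    Inputs are per-band numpy arrays or scalars; radiance in W/(m^2 sr um).
    """
    cos_theta = np.cos(np.deg2rad(solar_zenith_deg))
    ground_radiance = solar_irradiance * cos_theta * tau_down * reflectance / np.pi
    return ground_radiance * tau_up + path_radiance

# Illustrative two-band example (placeholder values, not MODTRAN output)
rho = np.array([0.08, 0.35])          # surface reflectance
E_sun = np.array([1800.0, 1000.0])    # exoatmospheric solar irradiance, W/(m^2 um)
L = at_sensor_radiance(rho, E_sun, solar_zenith_deg=20.0,
                       tau_down=0.8, tau_up=0.85,
                       path_radiance=np.array([15.0, 5.0]))
print(L)
```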


Figure 1a. RGB composite of the 0.4 m resolution simulated image.

Figure 2a. Zoom in on cars in parking lot in the 0.4 m resolution simulated image.

For this work, we chose an existing predefined scene known as MegaScene [3]. This scene models a residential area near Rochester, New York, containing a school, houses, and roads. Starting with the base scene, we inserted a number of cars with a variety of paint spectra for use as targets.

Table 1 shows the details and combinations for the several images which were simulated. In particular, we generated low and high resolution images corresponding to a sensor with a fixed angular resolution flying at two different altitudes. Each of these two resolutions was also simulated with two different atmospheric visibilities and with two different solar zenith angles, to produce a total of eight hyperspectral images. Figure 1a shows an example image generated at high (0.4 m) ground resolution while Figure 1b shows the scene at low (4 m) ground resolution. Note the low resolution scene encompasses a much larger area on the ground. Figures 2a and 2b show corresponding zoomed images for an area in the school parking lot containing a few of the vehicles. Note these images are RGB composites using three bands of the 224 spectral bands included in the synthetic imagery.

Figure 2b. Zoom in on cars in parking lot in the 4 m resolution simulated image.


Table 1. DIRSIG Simulated Image Characteristics

Parameter                        Value(s)
Visibility                       5 and 23 km (urban aerosol model)
Solar zenith angle               20° and 60°
Atmospheric model                Summer mid-latitude
Sensor altitude                  0.4 and 4 km
Sensor view angle                Nadir (0°)
Sensor spectral characteristics  224 bands from 0.4 to 2.5 µm in 10 nm steps
Image size                       600 x 600 pixels for 0.4 m resolution; 300 x 300 pixels for 4 m resolution
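For reference, the eight simulated configurations implied by Table 1 (two sensor altitudes, two visibilities, two solar zenith angles) can be enumerated with the small bookkeeping sketch below; this is purely illustrative and not part of the DIRSIG tool chain.

```python
from itertools import product

altitudes_km = [0.4, 4.0]        # sensor altitude -> 0.4 m or 4 m ground resolution
visibilities_km = [5, 23]        # urban aerosol model visibilities
solar_zeniths_deg = [20, 60]

configs = [
    {"altitude_km": a, "visibility_km": v, "solar_zenith_deg": sza}
    for a, v, sza in product(altitudes_km, visibilities_km, solar_zeniths_deg)
]
print(len(configs))  # 8 simulated hyperspectral images
```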

For the results presented in this paper, red cars were selected as the target. Given the high resolution of the low altitude images and multiple instances of the cars, there were a total of 136 pixels that were labeled as target.

3. IMAGE ANALYSIS PROCEDURE

We investigated the concept of using a spectrum from the high altitude low resolution image to detect the object in the low altitude high resolution image through the use of a spectral matched filter [4]. First we identified a pixel in the low resolution image that corresponded to a vehicle of interest. We then formed the matched filter operator w as follows:

w = \Sigma^{-1} (t - x) / [(t - x)^T \Sigma^{-1} (t - x)]    (1)

where t is the target spectrum vector, x is the image mean spectrum vector, and \Sigma is the image-wide covariance matrix.
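A minimal numpy sketch of Eq. (1) is given below, assuming the image is held as a (rows, cols, bands) cube and the cued target spectrum comes from the low resolution image; the function name, the diagonal loading term, and the array layout are our assumptions, not the paper's.

```python
import numpy as np

def matched_filter_operator(cube, target_spectrum, reg=1e-6):
    """Build w = C^-1 (t - x) / ((t - x)^T C^-1 (t - x)) from an image cube.

    cube: (rows, cols, bands) at-sensor radiance image
    target_spectrum: (bands,) cued target spectrum t
    reg: small diagonal loading to keep the covariance invertible
    """
    pixels = cube.reshape(-1, cube.shape[-1])          # (N, bands)
    mean = pixels.mean(axis=0)                         # image mean spectrum x
    cov = np.cov(pixels, rowvar=False)                 # image-wide covariance
    cov += reg * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])
    diff = target_spectrum - mean
    c_inv_diff = np.linalg.solve(cov, diff)
    return c_inv_diff / (diff @ c_inv_diff)            # matched filter vector w
```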

The filter vector w is then applied to every pixel spectral vector as an inner product, with the result being a scalar image of the test statistic. We then form a Receiver Operating Characteristic (ROC) curve showing the tradeoff in detection probability versus false alarm probability by sweeping a threshold from the lowest to the highest test statistic values and counting the number of background and target pixels which exceed each threshold value. The probabilities are then estimated empirically by the ratio of the detected pixel count to the total possible for target and background pixels, respectively.
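The threshold sweep just described can be sketched as follows, assuming a boolean target mask is available from the scene truth; the function and variable names are illustrative, not the paper's.

```python
import numpy as np

def empirical_roc(scores, target_mask):
    """Empirical ROC from per-pixel test statistics and a boolean target mask."""
    scores = scores.ravel()
    target_mask = target_mask.ravel().astype(bool)
    thresholds = np.sort(np.unique(scores))
    pd, pfa = [], []
    for thr in thresholds:
        detected = scores >= thr
        pd.append(detected[target_mask].mean())    # fraction of target pixels detected
        pfa.append(detected[~target_mask].mean())  # fraction of background pixels flagged
    return np.array(pfa), np.array(pd)

# Example use: scores = cube.reshape(-1, bands) @ w, i.e. the inner product of the
# matched filter vector w with every pixel spectrum.
```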

To avoid singularities and low signal-to-noise regions, each spectral vector was first reduced in dimensionality by simply omitting "bad" bands at the short and long wavelength ends of the spectrum as well as those around the major water vapor absorption lines. However, experiments found that the best results were obtained using only 40 bands spanning the 0.5 to 0.9 µm spectral range.

Next, the same detection experiment was performed using the Adaptive Subspace Detector (ASD) statistic,

ASD(x) = [x^T P_b x - x^T P_{t,b} x] / [x^T P_{t,b} x]    (2)

where x represents the test pixel spectrum, P_b is the projection matrix for the subspace orthogonal to the image-wide background, and P_{t,b} is the projection matrix for the subspace orthogonal to the target and background. The ASD-based detection results are shown in Figure 4.
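A sketch of Eq. (2) is given below, assuming the background subspace is represented by a matrix of basis vectors (for example, leading eigenvectors of the image covariance) and the target subspace by the cued spectrum as a single column; these subspace choices and all names are our assumptions, since the paper does not specify how the subspaces were formed.

```python
import numpy as np

def orth_complement_projector(basis):
    """Projector onto the orthogonal complement of span(basis).

    Assumes `basis` is (bands, k) with full column rank.
    """
    u, _, _ = np.linalg.svd(basis, full_matrices=False)
    return np.eye(basis.shape[0]) - u @ u.T

def asd_statistic(x, target_basis, background_basis):
    """Adaptive Subspace Detector statistic for one pixel spectrum x.

    target_basis: (bands, 1) cued target spectrum as a column vector
    background_basis: (bands, k) background subspace basis vectors
    """
    p_b = orth_complement_projector(background_basis)
    p_tb = orth_complement_projector(np.hstack([target_basis, background_basis]))
    num = x @ p_b @ x - x @ p_tb @ x
    den = x @ p_tb @ x
    return num / den
```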

4. IMAGE ANALYSIS RESULTS

Figure 3 presents the ROC curve for two situations involving the use of a high altitude low resolution spectrum of a red car to detect the car in a low altitude high resolution hyperspectral image. First we took the spectrum of the car in the high altitude image generated using a 5 km visibility atmosphere and a 20° solar zenith angle and applied it through the matched filter to a low altitude high resolution image generated using the same atmospheric and illumination conditions. This would correspond to the high altitude sensor cuing the low altitude sensor at nearly the same time during mid-day. The result is the solid curve in Figure 3.

The second experiment was to use the spectrum of the car taken from a high altitude scene generated using a 23 km visibility atmosphere and a 60° solar zenith angle and apply it to the same high resolution image used in the first experiment. This would correspond to the case of using the spectrum from a morning high altitude image to find the same car in the mid-day low altitude image. The result is the dashed curve in Figure 3.

As can be seen, both cases result in a fairly high detection probability even down to false alarm probabilities of 10^-5 or 10^-6, although there is a modest loss in detection probability when using the spectrum collected under a different set of atmospheric and illumination conditions.

Figure 3. ROC curves for detection of red cars when using a low resolution target spectrum from a scene with the same atmospheric/illumination conditions (solid) and when using a spectrum collected under a different condition (dashed).

Figure 4. ROC curves for detecting red cars using the Adaptive Subspace Detector (solid: same conditions; dashed: different conditions).

5. SUMMARY AND CONCLUSIONS

This paper presents an example application of high fidelity spectral image simulation to study multiplatform cuing performance with hyperspectral imaging. The results demonstrate that, over a modest range of atmospheric and illumination conditions, spectra collected by one sensor can be used in a spectral detection scheme to detect the same object imaged by a similar, yet distinct, sensor under different conditions.

The ROC curve comparison of detector performance in Figures 3 and 4 illustrates the superior performance of the matched filter approach over the subspace based method in this case. Analysis of the results for other vehicles and objects in the simulated scene and comparison with other subspace and covariance-based detection methods [4] are in progress. We are also currently investigating the application of covariance equalization [5] to these data. In addition, we are verifying the accuracy of the simulated images.
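As a rough sketch of the covariance equalization idea referenced above [5] (our simplified reading of the standard whiten/re-color form, not the paper's or the reference's implementation), a spectrum measured under one set of conditions can be mapped toward the background statistics of the scene in which detection is performed:

```python
import numpy as np

def matrix_sqrt(c, inverse=False):
    """Symmetric square root (or inverse square root) of a covariance matrix."""
    vals, vecs = np.linalg.eigh(c)
    vals = np.clip(vals, 1e-12, None)                 # guard against tiny/negative eigenvalues
    d = 1.0 / np.sqrt(vals) if inverse else np.sqrt(vals)
    return (vecs * d) @ vecs.T

def equalize(x, mean1, cov1, mean2, cov2):
    """Map a spectrum from scene-1 background statistics into scene-2 statistics:
    x' = C2^(1/2) C1^(-1/2) (x - m1) + m2
    """
    return matrix_sqrt(cov2) @ (matrix_sqrt(cov1, inverse=True) @ (x - mean1)) + mean2
```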

6. ACKNOWLEDGMENTS

The authors would like to acknowledge the support of Dr. Devert Wicker and Richard Van Hook of the Air Force Research Laboratory's Sensors Directorate. This work was supported in part by the U.S. Air Force Research Laboratory under contracts FA8650-07-M-1159 and FA8650-08-C-1406.

7. REFERENCES

[1] J.R. Schott, S.D. Brown, R.V. Raqueño, H.N. Gross, and G. Robinson, "An advanced synthetic image generation model and its application to multi-hyperspectral algorithm development," Canadian Journal of Remote Sensing, 25(2), 99-111, 1999.

[2] A. Berk, L.S. Bernstein, G.P. Anderson, P.K. Acharya, D.C. Robertson, J.H. Chetwynd, and S.M. Adler-Golden, "MODTRAN cloud and multiple scattering upgrades with application to AVIRIS," Remote Sensing of Environment, 65(3), 367-375, 1998.


[3] E. Ientilucci and S. Brown, "Advances in wide-area hyperspectral image simulation," Proceedings of Targets and Backgrounds IX: Characterization and Representation, SPIE Vol. 5075, 110-121, 2003.

[4] D. Manolakis and G. Shaw, "Detection algorithms for hyperspectral imaging applications," IEEE Signal Processing Magazine, 19(1), 29-43, 2002.

[5] A. Schaum and A. Stocker, "Hyperspectral change detection and supervised matched filtering based on covariance equalization," Proceedings of Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery X, SPIE Vol. 5425, 77-90, 2004.
