High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system

Russell C. Hardie
University of Dayton
Department of Electrical and Computer Engineering
Dayton, Ohio 45469-0226
E-mail: [email protected]

Kenneth J. Barnard, MEMBER SPIE
Air Force Wright Laboratory
Electro-Optics Sensor Technology Branch
3109 P Street
Wright-Patterson Air Force Base
Dayton, Ohio 45433-7700

John G. Bognar
Ernest E. Armstrong
Technology/Scientific Services, Inc.
P.O. Box 3065
Dayton, Ohio 45437-3065

Edward A. Watson, MEMBER SPIE
Air Force Wright Laboratory
Electro-Optics Sensor Technology Branch
3109 P Street
Wright-Patterson Air Force Base
Dayton, Ohio 45433-7700

Abstract. Some imaging systems employ detector arrays that are not sufficiently dense to meet the Nyquist criterion during image acquisition. This is particularly true for many staring infrared imagers. Thus, the full resolution afforded by the optics is not being realized in such a system. This paper presents a technique for estimating a high-resolution image, with reduced aliasing, from a sequence of undersampled rotated and translationally shifted frames. Such an image sequence can be obtained if an imager is mounted on a moving platform, such as an aircraft. Several approaches to this type of problem have been proposed in the literature. Here we extend some of this previous work. In particular, we define an observation model that incorporates knowledge of the optical system and detector array. The high-resolution image estimate is formed by minimizing a regularized cost function based on the observation model. We show that with the proper choice of a tuning parameter, our algorithm exhibits robustness in the presence of noise. We consider both gradient descent and conjugate-gradient optimization procedures to minimize the cost function. Detailed experimental results are provided to illustrate the performance of the proposed algorithm using digital video from an infrared imager. © 1998 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(98)03401-1]

Subject terms: high resolution; image reconstruction; aliasing; infrared; Nyquist; minimization.

Paper 39047 received Apr. 30, 1997; revised manuscript received Aug. 5, 1997; accepted for publication Aug. 8, 1997.

1 Introduction

In imaging systems where the focal plane detector array is not sufficiently dense to meet the Nyquist criterion, the resulting images will be degraded by aliasing effects. This undersampling is a common problem among staring infrared imaging systems. This is due to fabrication complexities associated with manufacturing dense infrared focal plane arrays (FPAs). While it is possible to equip such systems with optics to properly bandlimit the input, this generally means employing optics with a very small instantaneous field of view (IFOV). This may be highly undesirable in some applications. Furthermore, charge-coupled device (CCD) cameras may also employ detector arrays that are not sufficiently dense for the desired optics. Thus, the goal of this work is to obtain high-resolution images, with reduced aliasing, from such systems. In addition, we wish to remove the blurring effects of the finite-size detectors and the optics.

One way to overcome the undersampling problem, without low-pass filtering the input or sacrificing IFOV, is to exploit multiple frames from an image sequence. This is possible if there is relative motion between the scene and the FPA during image sequence acquisition. In this case, a unique "look" at the scene (i.e., a unique set of samples) may be provided by each frame. The desired image sequence can be obtained if an imager is mounted on a moving platform, such as an aircraft. We refer to this as uncontrolled microscanning.1 It is also possible to introduce a controlled mirror in the optical path to translate the scene intensity image across the FPA. This is referred to as controlled microscanning.1,2 Here we focus on uncontrolled microscanning, and we consider both rotational and translational motion of the scene relative to the FPA. The key to the high-resolution image recovery algorithm is accurate knowledge of the subpixel translation and rotation of each frame. If these parameters are unknown, as in the uncontrolled case, they must be estimated from the observed images. Thus, we must consider both image registration and high-resolution image reconstruction.

Several approaches to the high-resolution image reconstruction problem have been proposed in the literature.3-11 A maximum likelihood (ML) technique using the expectation maximization (EM) algorithm has been developed and applied to forward-looking infrared (FLIR) data.12 This method seeks to jointly estimate translational shifts and a high-resolution image. In addition, a joint registration and high-resolution reconstruction technique using maximum a posteriori (MAP) estimation has been proposed.13 These ML and MAP algorithms do not explicitly treat the case of rotational motion, which is the focus of this paper.

The reconstruction algorithm presented here can be viewed as an extension of the basic approach presented by Irani and Peleg.11 That approach seeks to minimize a speci-

Opt. Eng. 37(1) 247-260 (January 1998)  0091-3286/98/$10.00  © 1998 Society of Photo-Optical Instrumentation Engineers




Hardie et al.: High-resolution image reconstruction . . .

Fig. 1 Continuous observation model resembling the physical process of image acquisition.

fied cost function using an iterative algorithm. This cost function is the total squared error between the observed low-resolution data and the predicted low-resolution data. The predicted data are the result of projecting the high-resolution image estimate through the observation model. More will be said about this shortly. Here we employ the same registration technique, and we seek to minimize a related cost function. However, our approach includes a number of differences. In particular, the observation model uses information about the optical system and FPA to form a theoretical point spread function (PSF). Also, the cost function defined here includes a regularization term. This extra term gives the cost function the desirable property of having a unique global minimum. We show that this term can add robustness, particularly when the fidelity of the data is low. We employ a gradient descent and a conjugate-gradient technique for minimizing the cost function to form our estimate. We show that the conjugate-gradient technique, in particular, provides rapid convergence. Finally, here we investigate the application of high-resolution image reconstruction to a real-time infrared imaging system.

The organization of the rest of the paper is as follows. In Sec. 2 the observation model is described. Both a continuous and a discrete model are developed. Image registration is addressed in Sec. 3. The high-resolution image reconstruction algorithm is developed in Sec. 4. In particular, a regularized cost function is defined, and two optimization procedures are described. Experimental results are provided in Sec. 5. These include results obtained using simulated data and using FLIR images acquired from a real-time system. Quantitative error analysis is provided and several images are shown for subjective evaluation. Finally, some conclusions are given in Sec. 6.

2 Observation Model

In this section, the observation model is presented. This model is the basis for the high-resolution reconstruction algorithm developed in Sec. 4. We begin with a continuous model which closely follows the physical image acquisition process. An equivalent discrete model is then presented. It is the one that is utilized in the reconstruction algorithm. We conclude this section with the characterization of the system point spread function, since this represents a key element in the observation model.

2.1 Continuous Model

A block diagram of the continuous observation model is shown in Fig. 1. The true scene intensity image is denoted by o(x,y). The motion of the imager that occurs between image acquisitions is modeled as a pure rotation and translation of the scene intensity image. For a moving imager and a stationary scene in the far field, this is a fairly accurate model, since occlusion effects and perspective changes are minimal. Thus, the k'th observed frame in a sequence can be expressed as

$$o_k(x,y) = o(x\cos\theta_k - y\sin\theta_k + h_k,\; y\cos\theta_k + x\sin\theta_k + v_k) \qquad (1)$$

for k = 1, 2, 3, ..., p. Here θ_k represents the rotation of the k'th frame about the origin (i.e., x = 0, y = 0). The parameters h_k and v_k represent the horizontal and vertical shift associated with the k'th frame.

The blurring effect of the optics and finite detector size is modeled by a convolution operation yielding

$$\bar{o}_k(x,y) = o_k(x,y) * h_c(x,y), \qquad (2)$$

where h_c(x,y) is the continuous system PSF. More will be said about the PSF in Sec. 2.3. Finally, the blurred, rotated, and translated image is sampled below the Nyquist rate and corrupted by noise. This yields the k'th low-resolution observed frame

$$y_k(n_1, n_2) = \bar{o}_k(n_1 T_1, n_2 T_2) + \eta_k(n_1, n_2), \qquad (3)$$

where T_1 and T_2 are the horizontal and vertical sample spacings and η_k(n_1, n_2) is an additive noise term. Ideally, the FPA performance is limited by the photon or shot noise. In this case, the signal follows Poisson statistics.12 For high light levels, however, we believe that signal-independent additive Gaussian noise is a sufficiently accurate and tractable model.

Let the dimensions of the low-resolution image y_k(n_1, n_2) be N_1 × N_2. These data in lexicographical notation will be expressed as y_k = [y_{k,1}, y_{k,2}, ..., y_{k,M}]^T, where M = N_1 N_2. Finally, let the full set of p observed low-resolution images be denoted

$$\mathbf{y} = [\mathbf{y}_1^T, \mathbf{y}_2^T, \ldots, \mathbf{y}_p^T]^T = [y_1, y_2, \ldots, y_{pM}]^T. \qquad (4)$$

Thus, all observed pixel values are contained within y.
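To make the pipeline of Eqs. (2)-(4) concrete, here is a minimal numpy sketch that blurs a fine-grid scene, undersamples it, adds Gaussian noise, and stacks the p frames into the vector y. The 4×4 box PSF (a crude detector-footprint stand-in), the circular convolution, and the noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

def observe(scene, psf, step, sigma, rng):
    """One observed frame: blur (Eq. 2), undersample, add noise (Eq. 3).
    The FFT implements a circular convolution, an approximation at borders."""
    otf = np.fft.fft2(psf, scene.shape)            # zero-padded PSF spectrum
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
    sampled = blurred[::step, ::step]              # sample below the Nyquist rate
    return sampled + sigma * rng.standard_normal(sampled.shape)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                       # stand-in for the scene o(x, y)
psf = np.ones((4, 4)) / 16.0                       # assumed box PSF h_c
frames = [observe(scene, psf, step=4, sigma=0.01, rng=rng) for _ in range(3)]
y = np.concatenate([f.ravel() for f in frames])    # lexicographic stacking, Eq. (4)
print(y.shape)                                     # (768,), i.e. p*M = 3*16*16
```

Each frame here contributes M = 256 pixels, so y has pM = 768 entries, matching the notation above.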

2.2 Discrete Model

While the continuous model provides insight into the physical process, we require a discrete observation model to develop the high-resolution reconstruction algorithm. That is,


Fig. 2 Equivalent discrete observation model illustrating the relationship between the ideally sampled image z and the observed frames y.

we need a model relating a discrete high-resolution image to the low-resolution observed frames y. Figure 2 illustrates such a discrete model, which is equivalent to that in Fig. 1. The difference here is that we first define z(n_1, n_2) to be an intensity image sampled at or above the Nyquist rate with no blur or noise degradation. It is this discrete image we wish to estimate from the observed frames. Let this high-resolution image be of size L_1 N_1 × L_2 N_2 = N, where L_1 and L_2 are positive integers. More will be said about these parameters shortly. In later analysis it will be convenient to express this image in lexicographical notation as the vector z = [z_1, z_2, ..., z_N]^T.

As before, this image is rotated by θ_k and shifted by h_k and v_k, producing z_k(n_1, n_2). Here we will define the shifts h_k and v_k in terms of low-resolution pixel spacings for convenience. Note that this step requires interpolation, since the sampling grid changes in the geometric transformation. Theoretically, one could use ideal interpolation, since z(n_1, n_2) is defined to be sampled above the Nyquist rate. However, in practice, simpler interpolation methods such as nearest neighbor and bilinear interpolation can be used.13 For large values of L_1 and L_2, the high-resolution grid is so dense that simple interpolation methods can be reasonably accurate.
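As a sketch of this geometric transformation step, here is a bilinear resampling of the high-resolution grid by inverse mapping. The coordinate convention, the replicated borders, and the function name are our assumptions, not the paper's implementation:

```python
import numpy as np

def rotate_shift(z, theta, h, v):
    """Resample z after rotation by theta about the origin and translation
    by (h, v) in pixel units, using bilinear interpolation with replicated
    borders; each output pixel is inverse-mapped into the input grid."""
    H, W = z.shape
    Y, X = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    xin = np.clip(X * np.cos(theta) - Y * np.sin(theta) + h, 0, W - 1)
    yin = np.clip(X * np.sin(theta) + Y * np.cos(theta) + v, 0, H - 1)
    x0 = np.clip(np.floor(xin).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(yin).astype(int), 0, H - 2)
    fx, fy = xin - x0, yin - y0
    top = z[y0, x0] * (1 - fx) + z[y0, x0 + 1] * fx        # blend along x
    bot = z[y0 + 1, x0] * (1 - fx) + z[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy                       # blend along y

z = np.arange(16.0).reshape(4, 4)
shifted = rotate_shift(z, theta=0.0, h=1.0, v=0.0)  # interior columns shift left by one
```

For a pure integer shift this reduces to relabeling pixels, which makes it easy to check against a direct index shift.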

Next, the system PSF is taken into account, yielding

$$\bar{z}_k(n_1, n_2) = z_k(n_1, n_2) * h_d(n_1, n_2), \qquad (5)$$

where h_d(n_1, n_2) is the equivalent discrete-system PSF. Note that the blurring is performed after the geometric transformation. If the PSF is circularly symmetric, then the blurring can be equivalently introduced prior to the rotation. This saves on computations, since the blurring is performed only once in this case. Finally, the transformed image is subsampled down to the resolution of the observed frames, yielding

$$y_k(n_1, n_2) = \bar{z}_k(n_1 L_1, n_2 L_2) + \eta_k(n_1, n_2). \qquad (6)$$

Aliasing is generally introduced in this final step.

This discrete model can be rewritten in a simple form where the low-resolution pixels are defined as a weighted sum of the appropriate high-resolution pixels with additive noise. This generalized form can allow for the PSF blurring, the geometric transformation, and the subsampling of the high-resolution image. Specifically, the observed low-resolution pixels from frame k are related to the high-resolution image as follows:

$$y_{k,m} = \sum_{r=1}^{N} w_{k,m,r}(\theta_k, h_k, v_k)\, z_r + \eta_{k,m} \qquad (7)$$

for m = 1, 2, ..., M and k = 1, 2, ..., p. The weight w_{k,m,r}(θ_k, h_k, v_k) represents the contribution of the r'th high-resolution pixel to the m'th low-resolution observed pixel of the k'th frame. The parameters θ_k, h_k, and v_k represent the rotation, horizontal, and vertical translational shifts, respectively, of the k'th frame with respect to some reference on the high-resolution grid. The term η_{k,m} in Eq. (7) represents an additive noise sample. To further condense the notation, the model in Eq. (7) can be expressed in terms of the entire set of low-resolution pixels as

$$y_m = \sum_{r=1}^{N} w_{m,r}\, z_r + \eta_m \qquad (8)$$

for m = 1, 2, ..., pM, where w_{m,r} is simply the contribution of the r'th high-resolution pixel in z to the m'th low-resolution pixel in y. It is assumed that the underlying scene, z, remains constant during the acquisition of the multiple low-resolution frames. Furthermore, we assume here that the only frame-to-frame differences in the weights result from rotation and translation of each low-resolution frame relative to the high-resolution grid.

A simple way to visualize the form of the observation model in (8) is to consider only the blur from the finite detector size. This scenario is illustrated in Fig. 3. Here, each low-resolution pixel is obtained by summing the "virtual" high-resolution pixels within the support of that low-resolution detector. One low-resolution detector in Fig. 3(a) is shaded to illustrate this point. This discrete detector model simulates the integration of light intensity that falls within the span of the low-resolution detector. As the low-resolution grid shifts relative to the fixed high-resolution grid, as in Fig. 3(b), a different set of high-resolution pixels contribute to each low-resolution pixel. This yields a new set of linearly independent equations from Eq. (8). Clearly, some type of interpolation is required for any noninteger shift on the high-resolution grid or any nontrivial rotation. This interpolation can be accomplished by modifying the weights in (8). This simple detector model can give good results. However, a more realistic PSF model is described in the following section.
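For an integer registration shift, the detector-only model of Fig. 3 reduces to averaging each L × L block of "virtual" high-resolution pixels. A minimal sketch of that special case of Eq. (7) (the function and argument names are ours):

```python
import numpy as np

def detector_downsample(z, L, shift=(0, 0)):
    """Integer-shift special case of Eq. (7): each low-resolution pixel is
    the mean of the L x L block of high-resolution pixels its detector
    covers (the shaded detector of Fig. 3); 'shift' registers the grids."""
    s1, s2 = shift
    H, W = z.shape
    z = z[s1:s1 + ((H - s1) // L) * L, s2:s2 + ((W - s2) // L) * L]
    return z.reshape(z.shape[0] // L, L, z.shape[1] // L, L).mean(axis=(1, 3))

z = np.arange(16.0).reshape(4, 4)
print(detector_downsample(z, 2))            # [[2.5, 4.5], [10.5, 12.5]]
print(detector_downsample(z, 2, (1, 1)))    # [[7.5]]: a shift changes which pixels contribute
```

Shifting the detector grid by one high-resolution pixel, as in the second call, changes the contributing set and hence yields a new equation of the form (8).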

2.3 System Point Spread Function

For most systems, there are two main contributors to the system PSF. The primary contributor is generally the finite detector size as illustrated in Fig. 3. This effect is spatially invariant for a uniform detector array. The second contributor is the optics. Here we assume an isoplanatic model for the optics.14 We derive and use a theoretical PSF because, for the type of systems considered in this paper, direct measurement of an unaliased system PSF is not possible.

Let us begin by considering a system with a uniform detector array. The effect of the integration of light intensity over the span of the detectors can be modeled as a linear convolution operation with a PSF determined by the geometry of a single detector. Let this PSF be denoted d(x,y). Applying the Fourier transform to d(x,y) yields the effective continuous frequency response resulting from the detectors,

$$D(u,v) = \mathcal{FT}\{d(x,y)\}, \qquad (9)$$

where FT{·} represents the continuous Fourier transform. Next, define the incoherent optical transfer function (OTF) of the optics to be H_o(u,v). The overall system OTF is given by the product of these, yielding

$$H(u,v) = D(u,v)\, H_o(u,v). \qquad (10)$$

The overall continuous system PSF is then given by

$$h_c(x,y) = \mathcal{FT}^{-1}\{H(u,v)\}, \qquad (11)$$

where FT^{-1}{·} represents the inverse Fourier transform. Finally, the impulse-invariant discrete system PSF15 on the high-resolution grid is obtained by sampling the continuous PSF such that

$$h_d(n_1, n_2) = \frac{T_1 T_2}{L_1 L_2}\, h_c\!\left(\frac{n_1 T_1}{L_1}, \frac{n_2 T_2}{L_2}\right). \qquad (12)$$

Fig. 3 Discrete detector model showing those "virtual" high-resolution pixels that contribute to a low-resolution pixel for two different registration positions.


This accurately represents the continuous blurring when the effective sampling frequency L_1/T_1 exceeds two times the horizontal cutoff frequency of H(u,v) and L_2/T_2 exceeds two times the vertical cutoff frequency.15

Let us now specifically consider a system with uniform rectangular detectors. An illustration of such a detector array with critical dimensions labeled is provided in Fig. 4. The shaded areas represent the active region of each detector. The detector model PSF in this case is given by

$$d(x,y) = \frac{1}{ab}\,\mathrm{rect}\!\left(\frac{x}{a}, \frac{y}{b}\right) = \begin{cases} 1/(ab) & \text{for } |x/a| < 1/2 \text{ and } |y/b| < 1/2, \\ 0 & \text{otherwise.} \end{cases} \qquad (13)$$

Let the active region dimensions, a and b, be measured in millimeters. Thus, the effective continuous frequency response resulting from the detectors is

$$D(u,v) = \mathrm{sinc}(au, bv) = \frac{\sin(\pi a u)\,\sin(\pi b v)}{\pi^{2}\, a u\, b v}, \qquad (14)$$

where u and v are the horizontal and vertical frequencies measured in cycles per millimeter.

The incoherent optical transfer function (OTF) of diffraction-limited optics with a circular exit pupil can be found14 as

$$H_o(u,v) = \begin{cases} \dfrac{2}{\pi}\left\{\cos^{-1}\!\left(\dfrac{\rho}{\rho_c}\right) - \dfrac{\rho}{\rho_c}\left[1 - \left(\dfrac{\rho}{\rho_c}\right)^{2}\right]^{1/2}\right\} & \text{for } \rho < \rho_c, \\[1ex] 0 & \text{otherwise,} \end{cases} \qquad (15)$$

where ρ = (u² + v²)^{1/2}. The parameter ρ_c is the radial system cutoff frequency given by

$$\rho_c = \frac{1}{\lambda\, f/\#}, \qquad (16)$$

Fig. 4 Uniform detector array, illustrating critical dimensions.


Fig. 5 (a) Effective MTF of the detectors in the FLIR imager, (b) diffraction-limited OTF of the optics, (c) overall system MTF, and (d) overall continuous system PSF.

where f/# is the f-number of the optics and λ is the wavelength of light considered. Since the cutoff of H_o(u,v) is ρ_c, so is the cutoff of the overall system H(u,v). Thus, the impulse-invariant discrete system defined in Eq. (12) will accurately model the continuous system when L_1 ≥ ⌈2ρ_c T_1⌉ and L_2 ≥ ⌈2ρ_c T_2⌉. This choice of L_1 and L_2 also defines a high-resolution sampling grid at or above the Nyquist rate for an arbitrary scene. That is, the effective high-resolution sampling rates of L_1/T_1 and L_2/T_2 will be more than twice the OTF cutoff frequency.

Figure 5 shows an example of D(u,v), H_o(u,v), H(u,v), and h_c(x,y) for a particular imaging system. The system considered is the FLIR imager used to collect data for the experimental results presented in Sec. 5. The FLIR camera uses a 128 × 128 Amber AE-4128 infrared FPA. The FPA is composed of indium antimonide (InSb) detectors with a response in the 3- to 5-μm wavelength band. This system has square detectors of size a = b = 0.040 mm. The imager is equipped with 100-mm f/3 optics. The center wavelength, λ = 0.004 mm, is used in the OTF calculation. Figure 5(a) shows the effective modulation transfer function (MTF) of the detectors, |D(u,v)|. The diffraction-limited OTF for the optics, H_o(u,v), is shown in Fig. 5(b). Note that the cutoff frequency is 83.3 cycles/mm. The overall system MTF, |H(u,v)|, is plotted in Fig. 5(c). Finally, the continuous system PSF, h_c(x,y), is plotted in Fig. 5(d).

The detector spacing on the Amber FPA is T_1 = T_2 = 0.050 mm, yielding a sampling frequency of 20 cycles/mm in both directions. Thus, the effective sampling rate must be increased by a factor of 8.33 to eliminate aliasing entirely for an arbitrary scene. This would require that we select L_1, L_2 ≥ 9. In practice, we find that good results can be obtained with smaller values of L_1 and L_2.
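These numbers follow directly from Eq. (16) and the condition L ≥ ⌈2ρ_c T⌉; a small arithmetic sketch (the variable names are ours):

```python
import math

# Amber AE-4128 FLIR example from Sec. 2.3
f_number = 3.0          # f/3 optics
wavelength = 0.004      # center wavelength, mm
T = 0.050               # detector spacing T1 = T2, mm

rho_c = 1.0 / (wavelength * f_number)   # Eq. (16): radial cutoff, ~83.3 cycles/mm
f_s = 1.0 / T                           # FPA sampling frequency, 20 cycles/mm
factor = 2.0 * rho_c / f_s              # rate increase needed, ~8.33
L_min = math.ceil(2.0 * rho_c * T)      # smallest alias-free L1 = L2, i.e. 9
print(rho_c, f_s, factor, L_min)
```

The ceiling of 8.33 gives the L_1, L_2 ≥ 9 requirement quoted above.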

3 Image Registration

In most applications, the registration parameters in the observation model, θ_k, h_k, and v_k, will not be known a priori. Thus, they must be estimated from the observed image sequence. Accurate subpixel registration is the key to the success of the high-resolution image reconstruction algorithm. A number of image registration techniques have been proposed in the literature. We have found that in some cases a practical and effective method of estimating the subpixel translation and rotation is an iterative gradient-based technique.11,16 For convenience, this algorithm is presented here using the current notation.

To begin, define the first observed frame to be the reference frame, and without loss of generality let θ_1 = h_1 = v_1 = 0. According to our model,

$$o_k(x,y) = o_1(x\cos\theta_k - y\sin\theta_k + h_k,\; y\cos\theta_k + x\sin\theta_k + v_k) \qquad (17)$$

for k = 2, 3, ..., p. Note that this assumes that the center of rotation is at the origin (i.e., x = 0, y = 0). This is not restrictive, however, since we allow any shift h_k and v_k. If the PSF blur is approximately circularly symmetric, then

$$\bar{o}_k(x,y) \approx \bar{o}_1(x\cos\theta_k - y\sin\theta_k + h_k,\; y\cos\theta_k + x\sin\theta_k + v_k). \qquad (18)$$

For very small values of θ_k, we can make the following approximations: sin θ_k ≈ θ_k and cos θ_k ≈ 1. Using these yields

$$\bar{o}_k(x,y) \approx \bar{o}_1(x - y\theta_k + h_k,\; y + x\theta_k + v_k). \qquad (19)$$

Now we use the first three terms of the Taylor series expansion as an approximation for the right side in Eq. (19). This yields

$$\bar{o}_k(x,y) \approx \bar{o}_1(x,y) + (h_k - y\theta_k)\, g_x(x,y) + (v_k + x\theta_k)\, g_y(x,y), \qquad (20)$$

where g_x(x,y) = ∂ō_1(x,y)/∂x and g_y(x,y) = ∂ō_1(x,y)/∂y.

In light of the relationship (20), we define the least-squares estimates for the registration parameters as follows:

$$(\hat{\theta}_k, \hat{h}_k, \hat{v}_k) = \arg\min_{\theta_k, h_k, v_k} E_k(\theta_k, h_k, v_k), \qquad (21)$$

where

$$E_k(\theta_k, h_k, v_k) = \sum_{(x,y)\in S} [\,\bar{o}_k(x,y) - \bar{o}_1(x,y) - (h_k - y\theta_k)\, g_x(x,y) - (v_k + x\theta_k)\, g_y(x,y)\,]^{2}. \qquad (22)$$

Here S represents the grid of points in the R² space, defined by x and y, at which we have discrete samples. Rewriting this error in terms of the observed images yields

$$E_k(\theta_k, h_k, v_k) = \sum_{\mathbf{n}\in N} [\,y_k(\mathbf{n}) - y_1(\mathbf{n}) - (h_k - n_2 T_2 \theta_k)\, \hat{g}_x(\mathbf{n}) - (v_k + n_1 T_1 \theta_k)\, \hat{g}_y(\mathbf{n})\,]^{2}, \qquad (23)$$

where n = [n_1, n_2] and N is the set of indices on the low-resolution discrete grid for which we have observations. Note that the center of rotation on the discrete grid is assumed to be at n_1 = n_2 = 0. The functions ĝ_x(n) and ĝ_y(n) are discrete estimates of g_x(x,y) and g_y(x,y), respectively, at location x = n_1 T_1 and y = n_2 T_2. These can be computed using scaled Prewitt operators,13 for example.

To solve the minimization problem in Eq. (21), we begin by differentiating E_k(θ_k, h_k, v_k) with respect to θ_k, h_k, and v_k and setting the derivatives equal to zero. This yields the following three equations:

$$\sum_{\mathbf{n}\in N} [\,h_k\, \hat{g}_x^{2}(\mathbf{n}) + v_k\, \hat{g}_x(\mathbf{n})\hat{g}_y(\mathbf{n}) + \theta_k\, \tilde{g}(\mathbf{n})\hat{g}_x(\mathbf{n})\,] = \sum_{\mathbf{n}\in N} \tilde{y}_k(\mathbf{n})\, \hat{g}_x(\mathbf{n}), \qquad (24)$$

$$\sum_{\mathbf{n}\in N} [\,h_k\, \hat{g}_x(\mathbf{n})\hat{g}_y(\mathbf{n}) + v_k\, \hat{g}_y^{2}(\mathbf{n}) + \theta_k\, \tilde{g}(\mathbf{n})\hat{g}_y(\mathbf{n})\,] = \sum_{\mathbf{n}\in N} \tilde{y}_k(\mathbf{n})\, \hat{g}_y(\mathbf{n}), \qquad (25)$$

and

$$\sum_{\mathbf{n}\in N} [\,h_k\, \tilde{g}(\mathbf{n})\hat{g}_x(\mathbf{n}) + v_k\, \tilde{g}(\mathbf{n})\hat{g}_y(\mathbf{n}) + \theta_k\, \tilde{g}^{2}(\mathbf{n})\,] = \sum_{\mathbf{n}\in N} \tilde{y}_k(\mathbf{n})\, \tilde{g}(\mathbf{n}), \qquad (26)$$

where

$$\tilde{g}(\mathbf{n}) = n_1 T_1\, \hat{g}_y(\mathbf{n}) - n_2 T_2\, \hat{g}_x(\mathbf{n}) \qquad (27)$$

and

$$\tilde{y}_k(\mathbf{n}) = y_k(\mathbf{n}) - y_1(\mathbf{n}). \qquad (28)$$

We then simultaneously solve these expressions. To do so, let

$$\mathbf{M}\mathbf{R}_k = \mathbf{V}_k, \qquad (29)$$

where

$$\mathbf{M} = \begin{bmatrix} \sum\limits_{\mathbf{n}\in N} \hat{g}_x^{2}(\mathbf{n}) & \sum\limits_{\mathbf{n}\in N} \hat{g}_x(\mathbf{n})\hat{g}_y(\mathbf{n}) & \sum\limits_{\mathbf{n}\in N} \tilde{g}(\mathbf{n})\hat{g}_x(\mathbf{n}) \\ \sum\limits_{\mathbf{n}\in N} \hat{g}_x(\mathbf{n})\hat{g}_y(\mathbf{n}) & \sum\limits_{\mathbf{n}\in N} \hat{g}_y^{2}(\mathbf{n}) & \sum\limits_{\mathbf{n}\in N} \tilde{g}(\mathbf{n})\hat{g}_y(\mathbf{n}) \\ \sum\limits_{\mathbf{n}\in N} \tilde{g}(\mathbf{n})\hat{g}_x(\mathbf{n}) & \sum\limits_{\mathbf{n}\in N} \tilde{g}(\mathbf{n})\hat{g}_y(\mathbf{n}) & \sum\limits_{\mathbf{n}\in N} \tilde{g}^{2}(\mathbf{n}) \end{bmatrix}, \qquad (30)$$


R_k = [h_k, v_k, θ_k]^T, and

$$\mathbf{V}_k = \begin{bmatrix} \sum\limits_{\mathbf{n}\in N} \tilde{y}_k(\mathbf{n})\, \hat{g}_x(\mathbf{n}) \\ \sum\limits_{\mathbf{n}\in N} \tilde{y}_k(\mathbf{n})\, \hat{g}_y(\mathbf{n}) \\ \sum\limits_{\mathbf{n}\in N} \tilde{y}_k(\mathbf{n})\, \tilde{g}(\mathbf{n}) \end{bmatrix}. \qquad (31)$$

Finally, the estimated registration vector, R̂_k = [ĥ_k, v̂_k, θ̂_k]^T, can be computed as

$$\hat{\mathbf{R}}_k = \mathbf{M}^{-1}\mathbf{V}_k. \qquad (32)$$

To obtain shifts in terms of low-resolution pixel spacings (rather than in millimeters), we set T_1 = T_2 = 1 in Eq. (27).

Because of the assumptions made, this technique is only accurate for small shifts and rotations. To treat the case where larger values are expected, we follow an iterative method.11,16 In this technique, the initial registration parameters are estimated according to Eq. (32). Next, y_k(n) is shifted and rotated according to the registration parameter estimates so as to more closely match y_1(n). This modified image is then registered to y_1(n). The process continues, whereby y_k(n) is continually modified until the registration estimates become sufficiently small. The final registration estimate is obtained by summing all of the partial estimates. The iterative registration procedure for y_k(n), where k = 2, 3, ..., p, is summarized in Table 1. Based on empirical observation, the algorithm appears to converge reliably when the shifts and rotations are not too large (e.g., less than 10-pixel shifts and 10-deg rotations).

Because the three parameters are well overdetermined by the data, this least-squares estimate is generally accurate. We find that the main source of error lies in the resampling of y_k(n) (step 6 in Table 1), since this requires interpolation on the low-resolution aliased grid. Some error is also introduced in the discrete gradient estimates. However, the algorithm appears to provide sufficiently accurate results for the data studied here. In cases where the aliasing is more severe, a joint registration and reconstruction algorithm may be advantageous.10,12,17
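The least-squares solve of Eqs. (24)-(32) can be sketched in a few lines of numpy for a single pass (without the iterative refinement of Table 1), assuming T_1 = T_2 = 1, a rotation center at the array origin, and np.gradient as the discrete gradient estimator in place of scaled Prewitt operators; the Gaussian test frames are synthetic:

```python
import numpy as np

def register(y1, yk):
    """Single-pass gradient-based registration (Eqs. 23-32) with T1 = T2 = 1
    and the rotation center at the array origin; returns (h, v, theta)."""
    gx = np.gradient(y1, axis=1)                  # horizontal gradient estimate
    gy = np.gradient(y1, axis=0)                  # vertical gradient estimate
    H, W = y1.shape
    X, Y = np.meshgrid(np.arange(W), np.arange(H))
    gt = X * gy - Y * gx                          # rotation regressor, Eq. (27)
    feats = (gx, gy, gt)
    M = np.array([[np.sum(a * b) for b in feats] for a in feats])   # Eq. (30)
    V = np.array([np.sum((yk - y1) * f) for f in feats])            # Eq. (31)
    return np.linalg.solve(M, V)                                    # Eq. (32)

# synthetic check: a smooth Gaussian frame shifted by a known subpixel amount
X, Y = np.meshgrid(np.arange(48.0), np.arange(48.0))
scene = lambda dx, dy: np.exp(-(((X + dx) - 24) ** 2 + ((Y + dy) - 22) ** 2) / 50.0)
y1, yk = scene(0, 0), scene(0.3, -0.2)            # Eq. (19) with theta = 0
h, v, theta = register(y1, yk)
print(h, v, theta)                                # close to 0.3, -0.2, 0.0
```

For such smooth data and small motion, the Taylor approximation of Eq. (20) holds well and the recovered parameters land within a few hundredths of a pixel of the truth.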

4 High-Resolution Image Reconstruction

With estimates of the registration parameters, the observation model can be completely specified. In light of the observation model in Eq. (8), we define the high-resolution image estimate to be

$$\hat{\mathbf{z}} = \arg\min_{\mathbf{z}}\, C(\mathbf{z}), \qquad (33)$$

where

$$C(\mathbf{z}) = \frac{1}{2}\sum_{m=1}^{pM}\left(y_m - \sum_{r=1}^{N} w_{m,r}\, z_r\right)^{2} + \frac{\lambda}{2}\sum_{i=1}^{N}\left(\sum_{j=1}^{N}\alpha_{i,j}\, z_j\right)^{2}, \qquad (34)$$

and y_m for m = 1, 2, ..., pM are the observed pixel values.

Thus, the estimate ẑ is the z that minimizes Eq. (34).

The cost function (34) balances two types of errors. The first term on the right-hand side is minimized when a candidate z, projected through the observation model, matches the observed data. To clarify this, note that Σ_{r=1}^{N} w_{m,r} z_r can be thought of as the "predicted" low-resolution pixel m, obtained by projecting a candidate z through the observation model, and y_m is the actual observed pixel value. However, direct minimization of this term alone can lead to a poor estimate due to the ill-posed nature of the inverse problem. That is, z is often underdetermined by the observed data, and many candidate z's will minimize this first term. Thus, the second term serves as a regularization operator. The parameters α_{i,j} are generally selected so that this regularization term is minimized when z is smooth. This is a commonly applied constraint on image restoration problems, since natural scenes often exhibit strong spatial correlation. Here we select

\alpha_{i,j} = \begin{cases} 1 & \text{for } i = j, \\ -1/4 & \text{for } j : z_j \text{ is a cardinal neighbor of } z_i. \end{cases}    (35)

The cost function is convex and is readily differentiable. Furthermore, the regularization term provides it with the desirable property of having a unique global minimum, which yields a unique optimal estimate image ẑ.

The "weight" of the two competing terms in the cost function is controlled by the tuning parameter λ. Larger values of λ will generally lead to a smoother solution. This is useful when only a small number of frames are available or the fidelity of the observed data is low. On the other hand, if λ is too small, the resulting estimate may appear noisy. It is also possible to make λ spatially adaptive.18,19

Finally, it can be shown that the estimate defined in Eqs. (33) and (34) is a MAP estimate in the case of Gaussian noise and where z is viewed as a realization of a particular Gibbs random field.17
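For concreteness, the cost of Eq. (34) with the weights of Eq. (35) can be written out directly. The NumPy sketch below is our own illustration: `laplacian_matrix` (our name) builds the α_{i,j} matrix for an h × w image stored in raster order, and the observation matrix W (collecting the weights w_{m,r}) is assumed given.

```python
import numpy as np

def laplacian_matrix(h, w):
    """Build the alpha_{i,j} matrix of Eq. (35): 1 on the diagonal,
    -1/4 for each cardinal (up/down/left/right) neighbor that exists."""
    N = h * w
    A = np.eye(N)
    for i in range(h):
        for j in range(w):
            k = i * w + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    A[k, ni * w + nj] = -0.25
    return A

def cost(z, W, y, A, lam):
    """Regularized cost C(z) of Eq. (34): data-fidelity term plus
    lam-weighted smoothness penalty."""
    r = y - W @ z
    return 0.5 * (r @ r) + 0.5 * lam * np.square(A @ z).sum()
```

Note that the row of A for an interior pixel sums to zero, so a locally constant candidate incurs no penalty there; this is what makes smooth images cheap under the second term.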

Next we consider two unconstrained optimization techniques for minimizing the cost function in (34). First we consider a gradient descent method, and then a conjugate-gradient method.

4.1 Gradient Descent Optimization

To derive the gradient descent update procedure20 for the image estimate, we begin by differentiating Eq. (34) with respect to some pixel z_k for k = 1, 2, ..., N. This partial derivative is given by

g_k(z) = \frac{\partial C(z)}{\partial z_k} = \sum_{m=1}^{pM} w_{m,k} \left( \sum_{r=1}^{N} w_{m,r} z_r - y_m \right) + \lambda \sum_{i=1}^{N} \alpha_{i,k} \left( \sum_{j=1}^{N} \alpha_{i,j} z_j \right).    (36)

The iterative procedure begins with an initial estimate of the high-resolution image ẑ^0. A relatively simple starting point can be obtained by interpolating the first low-resolution frame using standard bilinear or bicubic interpolation.13 The gradient descent update for each pixel estimate is

\hat{z}_k^{n+1} = \hat{z}_k^n - \varepsilon^n g_k(\hat{z}^n)    (37)

for n = 0, 1, 2, ... and k = 1, 2, ..., N. Alternatively, the update can be written as

\hat{z}^{n+1} = \hat{z}^n - \varepsilon^n g^n,    (38)

where

g^n = [ g_1(\hat{z}^n), g_2(\hat{z}^n), \ldots, g_N(\hat{z}^n) ]^T.    (39)

The parameter ε^n in Eqs. (37) and (38) represents the step size at the n'th iteration. This parameter must be selected to be small enough to prevent divergence and large enough to provide convergence in a reasonable number of iterations. The optimal step size can be calculated by minimizing

C(\hat{z}^{n+1}) = C(\hat{z}^n - \varepsilon^n g^n)    (40)

with respect to ε^n. To do so, we begin by writing C(ẑ^{n+1}) using Eqs. (34) and (37). Next we differentiate this with respect to ε^n and set the derivative equal to zero. Solving for ε^n yields, after some manipulation,

\varepsilon^n = \frac{\sum_{m=1}^{pM} \bar{g}_m \left( \sum_{r=1}^{N} w_{m,r} \hat{z}_r^n - y_m \right) + \lambda \sum_{i=1}^{N} \bar{g}_i \left( \sum_{j=1}^{N} \alpha_{i,j} \hat{z}_j^n \right)}{\sum_{m=1}^{pM} \bar{g}_m^2 + \lambda \sum_{i=1}^{N} \bar{g}_i^2},    (41)

where

\bar{g}_m = \sum_{r=1}^{N} w_{m,r} g_r(\hat{z}^n)    (42)

is the gradient projected through the low-resolution pixel formation model, and

Table 1 Iterative registration algorithm for y_k(n), where k = 2, 3, ..., p.

Step 1: Compute g_x(n), g_y(n), and g_θ(n) from y_1(n), and form the matrix M.
Step 2: Let n = 0, y_k^0(n) = y_k(n), and R_k^0 = [0, 0, 0]^T.
Step 3: Compute V_k using ỹ_k(n) = y_k^n(n) - y_1(n), and then let n ← n + 1.
Step 4: Compute R_k^n = M^{-1} V_k + R_k^{n-1}.
Step 5: If ||R_k^n - R_k^{n-1}|| / ||R_k^{n-1}|| < T, let R_k = R_k^n, and stop.
Step 6: Resample y_k(n) toward y_1(n) according to R_k^n to create y_k^n(n), and go to step 3.


\bar{g}_i = \sum_{j=1}^{N} \alpha_{i,j} g_j(\hat{z}^n)    (43)

is a weighted sum of neighboring gradient values. This iteration continues until the cost function stabilizes, i.e., until ||ẑ^{n+1} - ẑ^n|| / ||ẑ^n|| < T, where T is a specified threshold value. A summary of the gradient descent optimization procedure is provided in Table 2.
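In matrix form (stacking the weights w_{m,r} into a pM × N matrix W and the α_{i,j} into an N × N matrix A), the update (38) with the step size (41) reduces to a few lines of NumPy. The sketch below is our own illustration under those assumptions; it starts from W^T y rather than from the interpolated frame the paper uses.

```python
import numpy as np

def grad_descent(W, y, A, lam, n_iter=100, T=1e-6):
    """Minimize Eq. (34) by gradient descent with the exact line
    search of Eq. (41). W is pM x N, y has length pM, A is N x N."""
    z = W.T @ y                                  # crude starting estimate
    for _ in range(n_iter):
        r = W @ z - y                            # predicted minus observed pixels
        g = W.T @ r + lam * (A.T @ (A @ z))      # gradient of Eq. (36), stacked
        gbar, abar = W @ g, A @ g                # projected gradients, Eqs. (42)-(43)
        den = gbar @ gbar + lam * (abar @ abar)
        if den == 0.0:                           # zero gradient: already optimal
            break
        eps = (gbar @ r + lam * (abar @ (A @ z))) / den   # optimal step, Eq. (41)
        z_new = z - eps * g                      # update of Eq. (38)
        if np.linalg.norm(z_new - z) < T * np.linalg.norm(z):
            return z_new                         # stopping rule of Table 2
        z = z_new
    return z
```

Because (34) is a convex quadratic in z, this iteration converges to the same ẑ that solves the normal equations (W^T W + λ A^T A) z = W^T y.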

4.2 Conjugate-Gradient Optimization

In this section, we describe a conjugate-gradient optimization procedure for minimizing the cost function (34). In particular, we employ the Fletcher-Reeves method.20 We later show that with little additional computational complexity, faster convergence can be achieved using this method than using gradient descent.

The basic conjugate-gradient image update is given by

\hat{z}_k^{n+1} = \hat{z}_k^n + \varepsilon^n d_k(\hat{z}^n)    (44)

for n = 0, 1, 2, ... and k = 1, 2, ..., N. Here d_k(ẑ^n) is the conjugate-gradient term. Alternatively, the update can be written as

\hat{z}^{n+1} = \hat{z}^n + \varepsilon^n d^n,    (45)

where

d^n = [ d_1(\hat{z}^n), d_2(\hat{z}^n), \ldots, d_N(\hat{z}^n) ]^T.    (46)

As before, the parameter ε^n is the step size at the n'th iteration. The optimal step size can be calculated by minimizing C(ẑ^{n+1}) = C(ẑ^n + ε^n d^n) with respect to ε^n. This yields

\varepsilon^n = -\frac{\sum_{m=1}^{pM} \bar{d}_m \left( \sum_{r=1}^{N} w_{m,r} \hat{z}_r^n - y_m \right) + \lambda \sum_{i=1}^{N} \bar{d}_i \left( \sum_{j=1}^{N} \alpha_{i,j} \hat{z}_j^n \right)}{\sum_{m=1}^{pM} \bar{d}_m^2 + \lambda \sum_{i=1}^{N} \bar{d}_i^2},    (47)

where

Table 2 Proposed gradient descent iterative estimation algorithm.

Step 1: Begin at n = 0 with initial estimate ẑ^0 being the interpolated low-resolution frame 1.
Step 2: Compute the gradient g_k(ẑ^n) given in Eq. (36) for k = 1, 2, ..., N, yielding g^n.
Step 3: Compute the optimal step size ε^n using Eq. (41).
Step 4: Let ẑ^{n+1} = ẑ^n - ε^n g^n.
Step 5: If ||ẑ^{n+1} - ẑ^n|| / ||ẑ^n|| < T, let ẑ = ẑ^{n+1} and stop.
Step 6: Let n ← n + 1 and go to step 2.


\bar{d}_m = \sum_{r=1}^{N} w_{m,r} d_r(\hat{z}^n)    (48)

and

\bar{d}_i = \sum_{j=1}^{N} \alpha_{i,j} d_j(\hat{z}^n).    (49)

The conjugate-gradient vector and its update procedure are defined below. The initial conjugate gradient is set equal to

d^0 = -g^0.    (50)

This is updated according to

d^{n+1} = -g^{n+1} + \beta^n d^n,    (51)

where

\beta^n = \frac{(g^{n+1})^T g^{n+1}}{(g^n)^T g^n}.    (52)

Again, the iterations continue until the estimate converges. A summary of the conjugate-gradient optimization procedure is given in Table 3.

5 Experimental Results

In this section a number of experimental results are presented in order to demonstrate the performance of the proposed algorithm. The first set of experiments uses sequential frame data obtained from a FLIR imaging system. The second set of results uses simulated data for quantitative performance analysis.

5.1 Infrared Image Reconstruction

Here we consider the application of the multiframe algorithm to a FLIR imager.* The optical system parameters of the FLIR are described in Sec. 2.3. The theoretical discrete system

*The data were acquired at the FLIR facility at Wright Laboratory Sensors Technology Branch (WL/AAJT).

Table 3 Proposed conjugate-gradient iterative optimization algorithm.

Step 1: Begin at n = 0 with initial estimate ẑ^0 being the interpolated low-resolution frame 1.
Step 2: Compute g^0, and initialize the conjugate-gradient vector as d^0 = -g^0.
Step 3: Compute the optimal step size ε^n using Eq. (47).
Step 4: Let ẑ^{n+1} = ẑ^n + ε^n d^n.
Step 5: If ||ẑ^{n+1} - ẑ^n|| / ||ẑ^n|| < T, let ẑ = ẑ^{n+1} and stop.
Step 6: Compute g^{n+1} and let d^{n+1} = -g^{n+1} + β^n d^n, where β^n = ((g^{n+1})^T g^{n+1}) / ((g^n)^T g^n).
Step 7: Let n ← n + 1 and go to step 3.


PSF of the FLIR system on the high-resolution sampling grid is shown in Fig. 6. This is an incoherent PSF based on the assumption of diffraction-limited optics operating at a midband wavelength of 4 μm, and it includes the effects of the finite detector size.14,21 The weights in Eq. (8) are determined by this PSF positioned at the appropriate location on the high-resolution grid for each low-resolution pixel.

Twenty frames have been acquired at a 60-frame/s rate. Global rotation and translation are introduced by arbitrarily manipulating the imager during acquisition. One typical original-resolution frame is shown in Fig. 7(a). The scene contains a number of small power boats and trailers on a gravel parking lot with a fence in the foreground. The multiframe estimate is shown in Fig. 7(b) for L1 = L2 = 5. The image used to initiate the iterative estimation procedure has been obtained by bilinearly interpolating the first frame in the sequence. The image shown in Fig. 7(b) is the result of 10 iterations of the conjugate-gradient procedure with λ = 0.1. Generally, increasing λ will yield a smoother image estimate. For comparison, a bicubic interpolation of a single frame is shown in Fig. 7(c).

The multiframe reconstruction appears to show significantly improved image detail. In addition, note that the aliasing artifacts on the diagonal beam of the gate in the foreground of Figs. 7(a) and 7(c) are virtually eliminated in the multiframe estimate. The estimated registration parameters for the 20 observed frames are shown in Fig. 8. Rigid translational and rotational motion is assumed, yielding three parameters for each frame.

Finally, to illustrate the convergence behavior of the algorithms using the FLIR data, the cost function is plotted in Fig. 9 versus iteration number for both the gradient descent and the conjugate-gradient optimization methods. Note that the conjugate-gradient algorithm exhibits faster convergence. The algorithm run time is approximately 5 min in Matlab running on a Sun SPARCstation 20. Larger images tend to run proportionately longer.

Fig. 6 Theoretical discrete system PSF of the FLIR imager for L1 = L2 = 5.


Fig. 7 (a) FLIR low-resolution frame 1, showing small power boats and trailers on a gravel parking lot. (b) Multiframe estimate using 20 frames with L1 = L2 = 5 and λ = 0.1. (c) Bicubic interpolation of frame 1.


5.2 Flight Data

In this subsection, results are presented that use digital video obtained by hard-mounting the Amber imager to a Cessna 172 aircraft and collecting air-to-ground images.†

Line-of-sight jitter and optical flow provide the necessary

†The data have been collected courtesy of the Infrared Threat Warning Laboratory, Threat Detection Branch, at Wright Laboratory (WL/AAJP).


motion. Twenty frames have been arbitrarily extracted from the video sequence. These frames were acquired at a frame rate of 100 frames/s. The estimated global registration parameters for the 20 frames are shown in Fig. 10.

One standard-resolution frame is shown in Fig. 11(a). This image shows a variety of buildings and roads in the Dayton, Ohio area. The multiframe estimate is shown in Fig. 11(b) for λ = 0.05 and L1 = L2 = 5. The initial image


estimate is obtained by bilinearly interpolating the first frame, and five iterations of the conjugate-gradient procedure have been performed. For comparison, a bicubic interpolation of a single frame is shown in Fig. 11(c). The multiframe reconstruction appears to show significantly improved image detail. By processing multiple sets of data, a high-resolution video sequence can be generated.

5.3 Quantitative Analysis

In order to evaluate the algorithms quantitatively, a set of sixteen 8-bit low-resolution images are simulated by rotating, translating, blurring, and subsampling an "ideal" image. The original image is of size 250 × 250, and the downsampling factors are L1 = L2 = 5. The blurring function is a

Fig. 8 Estimated registration parameters (θ_k, h_k, and v_k) for the 20 frames acquired with the FLIR imager.

Fig. 9 Convergence behavior of the gradient descent and conjugate-gradient optimization techniques using the FLIR data. Twenty observed frames have been used with L1 = L2 = 5 and λ = 0.1.


5 × 5 moving-average filter, which simulates the low-resolution detector effects. Finally, additive Gaussian noise of variance σ_η^2 = 25 is introduced in each frame. Twenty iterations of the conjugate-gradient optimization have been performed with λ = 0.1. The mean absolute error (MAE) between the multiframe estimate and the "ideal" image is plotted in Fig. 12 as a function of the number of frames used. For comparison, the MAEs of the first frame with bilinear and bicubic interpolation are also shown. With only one frame, the performance of the proposed algorithm is only slightly better than that of the bicubic interpolator. However, with additional frames, the estimate significantly improves with respect to the single-frame interpolators. The estimated registration parameters are shown in Fig. 13. The MAE between the actual and estimated translational shifts is 0.0414 low-resolution pixel spacings, and the MAE for the rotation parameters is 0.0250 deg.
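The frame-simulation step (blur, subsample, add noise) is easy to reproduce. The sketch below is our own simplified version: it omits the per-frame rotation and translation, and it implements the 5 × 5 moving-average blur followed by aligned subsampling by 5 as a single block average; `simulate_frame` and its arguments are illustrative names.

```python
import numpy as np

def simulate_frame(ideal, L=5, noise_var=25.0, rng=None):
    """Produce one low-resolution frame from an 'ideal' image:
    L x L block average (moving-average detector blur plus aligned
    subsampling by L), then additive Gaussian noise of the given
    variance, as in the Sec. 5.3 simulation (motion omitted here)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = ideal.shape
    hL, wL = h // L, w // L
    # average each L x L block of high-resolution pixels
    low = ideal[:hL * L, :wL * L].reshape(hL, L, wL, L).mean(axis=(1, 3))
    # additive Gaussian noise, variance noise_var per low-resolution pixel
    low = low + rng.normal(0.0, np.sqrt(noise_var), size=low.shape)
    return low
```

Applied to a 250 × 250 ideal image with L = 5, this yields 50 × 50 observed frames, matching the downsampling factors used above.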

To study the sensitivity of the algorithm to noise, MAEs are computed and plotted in Fig. 14 for various noise levels and three choices of λ. Note that with λ = 0, the error grows rapidly with the noise level. By choosing a larger value for this parameter (e.g., λ = 0.1 or 0.5), more robustness is possible. It should be noted that the estimate with λ = 0 can be improved by halting the iterations earlier. With fewer iterations, the estimate will tend to retain more of the characteristics of the starting image (which is generally smooth). These results simply show the effect of changing λ with all other factors being equal.

6 Conclusions

Aliasing reduction and resolution enhancement can be achieved by exploiting multiple frames that are rotated and/or translated with respect to one another. This is possible because each frame offers a unique set of discrete samples. For an imager mounted on a moving platform, such as an aircraft, the desired image sequence may arise from natural line-of-sight jitter and rotation of the platform. With this in mind, it may then be possible to relax image

Fig. 10 Estimated registration parameters for the 20 frames of flight data with the infrared imager.


Fig. 11 (a) Low-resolution infrared frame 1 from flight data. (b) Multiframe estimate using 20 frames with L1 = L2 = 5 and λ = 0.05. (c) Bicubic interpolation of frame 1.


stabilization requirements in some applications and obtain improved-resolution images through the proposed algorithm.

The key to the success of the algorithm is having an accurate observation model. This includes the image registration parameters and the system PSF. The observation model proposed here includes information about the optical system and FPA. A regularized cost function defines the image estimate. Minimization of the cost function is performed using either a gradient descent or a conjugate-gradient technique.

The quantitative results obtained show that the multiframe image estimates have significantly lower error than estimates formed by single-frame interpolation. Furthermore, we believe that the FLIR results show that the multiframe estimate has significantly improved image detail. In particular, edges and fine structure emerge in the multiframe reconstruction that are not visible in the low-resolution data. Because these features offer important visual cues, we believe that the utility of the processed image is greatly enhanced.

Acknowledgments

This work has been supported by Wright Laboratory's Sensors Technology Branch (WL/AAJT) at Wright Patterson Air Force Base in Dayton, Ohio, under Air Force contract F33601-95-DJ010. In particular, the authors would like to thank Dr. Mohammad Karim, Dr. Paul McManamon, Don Tomlinson, and Brian Yasuda for supporting this project. Thanks to Eric Kaltenbacher for his assistance in data collection. Thanks are also due to Tony Absi and Art Serrano for their administrative support. Thanks to Steve Cain for suggesting and initially developing a statistical estimation

Fig. 12 Mean absolute error for the multiframe estimator using different numbers of frames. The noise variance is σ_η^2 = 10, L1 = L2 = 5, and λ = 0.1.

Fig. 13 Actual and estimated registration parameters for each of the 16 simulated low-resolution frames. The shifts h_k and v_k are measured in terms of low-resolution pixel spacings.

approach to the FLIR resolution enhancement problem. We would also like to gratefully acknowledge the work of those FLIR team members who support the hardware and software that made this work possible: James Glidewell, Bob Gualtieri, Rich Hill, Al Meyer, Joe Richardson, Jeff Sweet, Tim Tuinstra, Carl White, and Ron Wiensch. Finally, we would like to thank Dr. Richard B. Sanderson, Joel Montgomery, Mike Hall, David Coates, and Frank Baxely, with the Threat Detection Branch at Wright Laboratory (WL/AAJP), for collecting the infrared flight images used here.

References

1. J. C. Gillette, T. M. Stadtmiller, and R. C. Hardie, "Reduction of aliasing in staring infrared imagers utilizing subpixel techniques," Opt. Eng. 34, 3130-3137 (Nov. 1995).
2. E. A. Watson, R. A. Muse, and F. P. Blommel, "Aliasing and blurring in microscanned imagery," in Infrared Imaging Systems: Design, Analysis, Modeling and Testing III, Proc. SPIE 1689, 242-250 (Apr. 1992).
3. R. Tsai and T. Huang, "Multiframe image restoration and registration," in Advances in Computer Vision and Image Processing, Vol. 1, pp. 317-339, JAI Press (1984).
4. S. Kim, N. Bose, and H. Valenzuela, "Recursive reconstruction of high resolution image from noisy undersampled multiframes," IEEE Trans. Acoust. Speech Signal Process. 38, 1013-1027 (June 1990).
5. H. Stark and P. Oskoui, "High-resolution image recovery from image-plane arrays, using convex projections," J. Opt. Soc. Am. A 6(11), 1715-1726 (1989).
6. A. Patti, M. Sezan, and A. Tekalp, "High-resolution standards conversion of low resolution video," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Detroit, MI, pp. 2197-2200 (May 1995).
7. A. J. Patti, M. I. Sezan, and A. M. Tekalp, "Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time," IEEE Trans. Image Process. 6, 1064-1076 (Aug. 1997).
8. S. Mann and R. W. Picard, "Virtual bellows: constructing high quality stills from video," in Proc. IEEE Int. Conf. on Image Processing, Austin, TX (Nov. 1994).
9. R. R. Schultz and R. L. Stevenson, "Extraction of high-resolution frames from video sequences," IEEE Trans. Image Process. 5, 996-1011 (June 1996).
10. P. Cheeseman, B. Kanefsky, R. Kraft, J. Stutz, and R. Hanson, "Super-resolved surface reconstruction from multiple images," NASA Technical Report FIA-94-12, NASA Ames Research Center, Moffett Field, CA (Dec. 1994).
11. M. Irani and S. Peleg, "Improving resolution by image registration," CVGIP: Graph. Models and Image Process. 53, 231-239 (May 1991).

Fig. 14 MAE for simulated data with varying levels of noise.

12. S. Cain, R. C. Hardie, and E. E. Armstrong, "Restoration of aliased video sequences via a maximum-likelihood approach," in National Infrared Information Symposium (IRIS) on Passive Sensors, Monterey, CA (Mar. 1996).

13. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall (1989).
14. J. Goodman, Introduction to Fourier Optics, McGraw-Hill (1968).
15. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Prentice-Hall (1989).
16. B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in International Joint Conference on Artificial Intelligence, Vancouver (Aug. 1981).
17. R. C. Hardie, K. J. Barnard, and E. E. Armstrong, "Joint MAP registration and high resolution image estimation using a sequence of undersampled images," IEEE Trans. Image Process. (in press).
18. Y.-L. You and M. Kaveh, "A regularized approach to joint blur identification and image restoration," IEEE Trans. Image Process. 5, 416-428 (Mar. 1996).
19. R. L. Lagendijk, J. Biemond, and D. E. Boekee, "Regularized iterative image restoration with ringing reduction," IEEE Trans. Acoust. Speech Signal Process. 36, 1874-1888 (Dec. 1988).
20. D. G. Luenberger, Linear and Nonlinear Programming, Addison-Wesley (1984).
21. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics, Wiley (1978).

Russell C. Hardie graduated magna cum laude from Loyola College in Baltimore in 1988 with a BS degree. He obtained an MS degree in electrical engineering from the University of Delaware in 1990. In 1992, he obtained a PhD in electrical engineering from the University of Delaware. Dr. Hardie is currently an assistant professor in the Department of Electrical and Computer Engineering at the University of Dayton. Prior to teaching at the University of Dayton, Dr. Hardie was a senior scientist at Earth Satellite Corporation (EarthSat) in Rockville, Maryland. His research interests include signal and image processing, infrared and multispectral imaging systems, nonlinear filters, adaptive filter theory, pattern recognition, and remote sensing.

Kenneth J. Barnard is a research scientist at the University of Dayton and is currently performing research at Wright-Patterson Air Force Base through the Intergovernmental Personnel Act (IPA) and the AFOSR University Resident Research Program (URRP). He has a BS degree in electrical engineering from the University of Michigan, and MS and PhD degrees in electrical engineering from the University of Central Florida. Dr. Barnard is a member of SPIE, OSA, and IEEE. His current research includes work in active and passive electro-optic sensors.


John G. Bognar received a BS degree in engineering physics from Wright State University in Dayton, Ohio, in 1988, and an MS in electrical engineering from the University of Dayton in 1995. Mr. Bognar currently works for Technology/Scientific Services Inc. at the EO Sensors Technology Branch of Wright Laboratory at Wright Patterson AFB. His research interests include image processing, infrared imaging systems, and real-time digital video processing.

Ernest E. Armstrong is currently working as a senior computer scientist for Technology/Scientific Services Incorporated, a defense contractor at Wright Patterson AFB. Mr. Armstrong received a BS degree in computer science and an MS in x-ray crystallography from Wright State University in 1977 and 1980, respectively. He previously held positions with Exxon Production Research and British Petroleum Research, where his research was primarily concerned with the application of signal and image processing techniques to crystal structure analysis.

Edward A. Watson is an electronics engineer with the Electro-Optics Branch of Wright Laboratory at Wright-Patterson Air Force Base. He is currently involved in research in active and passive electro-optic sensors, including the application of lightweight diffractive optical components to the agile steering of monochromatic and polychromatic light beams, the modeling of aliasing and blurring in sampled imagery, and the application of electro-optic sensors to automatic target recognition. He has a BS degree in physics and an MS degree in optical sciences from the University of Arizona, and a PhD in optics from the University of Rochester.