
An Analytical Model for Optical Polarimetric Imaging Systems

Lingfei Meng, Member, IEEE, and John P. Kerekes, Senior Member, IEEE

Abstract—Optical polarization has shown promising applications in passive remote sensing. However, the combined effects of the scene characteristics, the sensor configurations, and the different processing algorithm implementations on the overall system performance have not been systematically studied. To better understand the effects of various system attributes and help optimize the design and use of polarimetric imaging systems, an analytical model has been developed to predict the system performance. The model propagates the first- and second-order statistics of radiance from a scene model to a sensor model and, finally, to a processing model. Validations with data collected from a division of time polarimeter are presented. Based on the analytical model, we then define a signal-to-noise ratio of the degree of linear polarization and receiver operating characteristic curves as two different system performance metrics to evaluate the polarimetric signatures of different objects, as well as the target detection performance. Several examples are presented to show the potential applications of the analytical model for system analysis.

Index Terms—Analytical model, imaging system, polarization, target detection.

I. INTRODUCTION

POLARIZATION of light provides valuable information about scenes that cannot be obtained directly from intensity or spectral images. Rather than treating the optical field as scalar, polarization images seek to obtain the vector nature of the optical field from the scene [1], [2]. Polarized light reflected from scenes has been found to be useful in several applications, such as material classification [3], [4], computer vision [5], and target detection [6]–[10]. Recently, a study on polarimetric imaging combined with spectral imaging has demonstrated the capability of remote sensing platforms to detect [11] and track vehicles [12], [13]. Polarimetric imaging is, however, a relatively new and not fully explored field for remote sensing, as pointed out in [2]. The relatively unstable polarimetric signatures, which are highly dependent on the source–target–sensor geometry, as well as the lack of available polarimetric imaging data, make the utility of optical polarimetric imaging still an open question for remote sensing applications.

The system approach to analyzing the remote sensing system has been addressed in [14] and [15].

Manuscript received April 29, 2013; revised September 19, 2013; accepted December 17, 2013.

L. Meng is with Ricoh Innovations Corporation, Menlo Park, CA 94025-7057 USA (e-mail: [email protected]).

J. P. Kerekes is with the Chester F. Carlson Center for Imaging Science,Rochester Institute of Technology, Rochester, NY 14623 USA.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TGRS.2014.2299272

The Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model presented in [15] is a mathematical model for spectral imaging system analysis. This model divides the remote sensing system into three subsystems: scene, sensor, and processing. This mathematical modeling approach generates statistical representations of the scene and analytically propagates the statistics through the whole system. The FASSP model has been found to be extremely helpful in the study of different applications, such as land cover classification [16] and target detection [15], [17]. Another approach for imaging system analysis is image simulation modeling, such as the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model [18], which produces synthetic images.

Many studies on optical polarimetric imaging systems have focused on different system components, such as scene simulation, polarimetric sensor design, and polarimetric data processing.

In studies of scene simulation, characterization of the optical properties of materials has always been challenging. The polarimetric BRDF (pBRDF) is often used to fully characterize the polarization behavior of an object surface. The pBRDF is typically acquired through empirical modeling or experimental measurement. A pBRDF model based on microfacet theory is given in [19]. Recently, this model has been extended by including a shadowing/masking function and a Lambertian component [20]. The pBRDF model has been implemented in the image simulation model DIRSIG [21]. A physics-based pBRDF model was also extended to cover long-wave IR wavelengths [22]. An approach for measuring the first column of the pBRDF matrix is given in [23]. A method to determine the pBRDF using in-scene calibration materials is shown in [24]. Some early work on measuring the polarization of natural backgrounds, such as soil, vegetation, and snow, can be found in [25]–[27]. Several models are available to characterize the pBRDF of natural background materials. In [28] and [29], pBRDF models were developed for general vegetation. A polarized reflectance representation for soil can be found in [30] and [31]. Recently, in [32], modeling and simulation of polarimetric hyperspectral images was presented, including skylight polarization and polarized reflectance models.

In terms of polarimetric sensor design, a number of studies have reported on designing an optimum polarimeter for Stokes parameter measurement by minimizing various condition numbers of the system measurement matrices [33]–[35]. Recently, several methods have been proposed for optimizing the polarization state of illumination in active Stokes images by maximizing the contrast function in the presence of additive Gaussian noise [36] or by maximizing the Euclidean distance between the target and background Stokes parameters [37]. A framework for finding the optimal combination settings of analyzer rotation angles to improve the target detection performance was presented in [38].

Regarding the processing of polarimetric data, some studies have focused on reconstructing the Stokes parameters from intensity images. A maximum-likelihood blind deconvolution algorithm was proposed in [39] for the estimation of the unpolarized and polarized components, as well as the angle of polarization and the point spread function. A Stokes parameter restoration algorithm was proposed in [40] based on penalized-likelihood image reconstruction techniques. Recently, a band-limited reconstruction algorithm has been proposed to measure temporally varying Stokes parameters without the constant-scene assumption [41], [42]. Other interesting studies on polarimetric data processing concern material classification and object detection. Techniques using polarization to improve material classification or segmentation can be found in [43]–[45]. Material classification in turbulence-degraded polarimetric imagery using the blind deconvolution algorithm [39] was introduced in [4] and [46]. Man-made target detection based on RX and topological anomaly detection approaches was compared using synthetic polarimetric images generated by DIRSIG [9] and laboratory measurements [47]. An image fusion approach for object detection can be found in [48]. A maximum-likelihood-based detection and tracking algorithm using long-wave IR microgrid polarimeter data was presented in [49].

The related studies presented above are very helpful for obtaining a better understanding of each subsystem of polarimetric imaging. However, the combined effects of the scene characteristics, the sensor configurations, and the different processing algorithm implementations on the overall system performance have not been systematically studied, and a convenient system approach to analyze the end-to-end performance of polarimetric imaging systems does not exist. The goal of gaining a better understanding of the potential performance and limitations of polarimetric imaging technology, and of heading toward more optimal designs of polarimetric imaging systems, motivates the system approach presented in this paper. The outline of this paper is as follows. In Section II, the analytical model for a polarimetric imaging system is introduced in detail. The proposed model is then validated with real data in Section III. In Section IV, based on the system model, we define different performance metrics. In Section V, examples of system performance evaluation based on the analytical model are presented. We conclude this paper in Section VI.

II. FORMULATION OF ANALYTICAL MODEL

The framework of the analytical model is shown in Fig. 1. The passive optical polarimetric imaging system is analyzed from the perspective of scene, sensor, and processing. In the scene model, we analyze how the light interacts with the object surface and how the polarized light is reflected to the sensor aperture. In the sensor model, we consider how the light is passed through the optical components and then digitized by the camera. In the image processing model, Stokes parameters are reconstructed based on the polarization images, and the degree of linear polarization (DoLP) is further calculated.

Fig. 1. Framework of the analytical model of the polarimetric imaging system.

Finally, system performance metrics are generated to help evaluate the end-to-end system performance and guide system optimization. The analytical model is introduced in detail below. Note that the model is limited to the reflective portion of the optical spectrum, as thermal emission is not considered.

A. Scene Model

1) Optical Properties of Object Surface: The optical properties of the object surfaces are often characterized by a bidirectional reflectance distribution function (BRDF) in spectral image analysis. When considering polarization, the BRDF should be extended to a polarimetric BRDF (pBRDF). In general, the pBRDF can be represented by a 3 × 3 Mueller matrix when only linear polarization is considered, which is the case in this paper. The pBRDF can be estimated based on an empirical approach or an analytical approach. For the empirical approach, the pBRDF matrix is found based on measurement. In [50], a framework for the outdoor measurement of the pBRDF is introduced, but only the first column of the pBRDF matrix is considered. The full measurement of the pBRDF can be achieved based on different permutations of the polarization states of the light generator and the analyzer [19], and at different scene geometries [2]. The empirical approach is therefore not very flexible and requires a large number of measurements. Alternatively, the pBRDF can be analytically estimated using physics-based models. In this paper, the pBRDF is predicted based on microfacet theory, which is commonly used in the modeling of the BRDF [51] and has recently been extended to the modeling of the pBRDF [19]–[22]. Generally, the pBRDF is modeled as a combination of a polarized specular component and an unpolarized diffuse component that is caused by scattering. Readers are referred to the literature for a detailed introduction to the pBRDF calculation; here we only present the simplified pBRDF Mueller matrix of the object class as a 3 × 3 matrix $\mathbf{F}$. The incident irradiance and reflected radiance are accordingly extended to Stokes parameters and are related by the pBRDF matrix as

$$\begin{bmatrix} L_0 \\ L_1 \\ L_2 \end{bmatrix} = \begin{bmatrix} f_{00} & f_{01} & f_{02} \\ f_{10} & f_{11} & f_{12} \\ f_{20} & f_{21} & f_{22} \end{bmatrix} \begin{bmatrix} E_0 \\ E_1 \\ E_2 \end{bmatrix} \qquad (1)$$


where $[L_0\ L_1\ L_2]^T$ ($T$ denotes transpose) is the Stokes vector of the reflected radiance with units of $\mathrm{W\,m^{-2}\,sr^{-1}}$ (sr is steradian), $[E_0\ E_1\ E_2]^T$ is the Stokes vector of the incident irradiance with units of $\mathrm{W\,m^{-2}}$, and $f_{ij}$ is one component of the $\mathbf{F}$ matrix with units of $\mathrm{sr^{-1}}$.
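For illustration, (1) is just a 3 × 3 matrix–vector product. The following minimal Python sketch applies a hypothetical pBRDF matrix (placeholder values, not measurements from this paper) to an assumed incident Stokes irradiance:

```python
import numpy as np

# Hypothetical pBRDF Mueller matrix F (units of sr^-1); placeholder values only.
F = np.array([[0.10, 0.01, 0.00],
              [0.02, 0.05, 0.00],
              [0.00, 0.00, 0.03]])

# Assumed incident Stokes irradiance [E0, E1, E2] in W m^-2.
E = np.array([800.0, 10.0, 5.0])

# Reflected Stokes radiance [L0, L1, L2] in W m^-2 sr^-1, per (1).
L = F @ E
print(L)
```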

2) Solar and Atmosphere Modeling: The radiance calculations are functions of the incident and reflected zenith angles $\theta_i$ and $\theta_r$ and the relative azimuth angle $\phi$ between the sun and the sensor. All radiance calculations are functions of wavelength $\lambda$, but the subscript has been dropped for clarity.

The direct solar irradiance is randomly polarized, and the solar reflected radiance can be calculated by considering only the first column of the pBRDF Mueller matrix $[f_{00}\ f_{10}\ f_{20}]^T$ and is expressed as

$$\mathbf{L}_r = \tau \cos\theta_i\, E_s\, [f_{00}\ f_{10}\ f_{20}]^T \qquad (2)$$

where $\mathbf{L}_r = [L_{r0}\ L_{r1}\ L_{r2}]^T$ is the Stokes vector representation of the direct solar reflected radiance, $\tau$ is the transmittance along the surface-to-sensor path, and $E_s$ is the solar irradiance incident on the surface from direct transmission through the atmosphere.

Due to Rayleigh scattering, the downwelling radiance distributed over the entire sky is polarized. The polarization state of the skylight is dependent on the incident zenith angle $\theta_d$ and azimuth angle $\phi_d$. The object surface also shows various polarization behaviors when the light comes from different directions since the pBRDF is considered. Therefore, it is necessary to integrate the skylight reflected off the surface over the whole hemisphere, and the surface-reflected downwelling radiance can be expressed as

$$\mathbf{L}_d = \tau \int \mathbf{F}(\theta_d, \phi_d)\, \mathbf{L}_{\mathrm{sky}}(\theta_d, \phi_d) \cos\theta_d\, d\Omega \qquad (3)$$

where $\mathbf{L}_d = [L_{d0}\ L_{d1}\ L_{d2}]^T$ is the Stokes vector representation of the reflected downwelling radiance, $\mathbf{L}_{\mathrm{sky}}(\theta_d, \phi_d)$ is the downwelling radiance originating from direction $(\theta_d, \phi_d)$, and $d\Omega = \sin\theta_d\, d\theta_d\, d\phi_d$.

The upwelling radiance is also polarized and is expressed as

$$\mathbf{L}_u = [L_{u0}\ L_{u1}\ L_{u2}]^T. \qquad (4)$$

The solar irradiance and the magnitude of the downwelling radiance and the upwelling radiance (the $L_0$ component) are often estimated using an atmospheric scattering code such as the MODerate resolution atmospheric TRANsmission (MODTRAN) code [52]. A polarized version of MODTRAN (MODTRAN-P) has been developed to include the polarization influence due to Rayleigh scattering and has been used to estimate the Stokes parameters of atmospheric radiance [53]. Alternatively, the atmospheric polarization effect can be estimated based on measurement. In [54] and [55], all-sky polarization images were acquired. In this paper, MODTRAN-P is used for solar and atmosphere modeling.

In Fig. 2, we present a simulation of the Stokes parameters of downwelling radiance after iteratively running MODTRAN-P 370 times. The DoLP of the downwelling radiance is also computed from the Stokes parameters as

$$\mathrm{DoLP}_d = \frac{\sqrt{L_{d1}^2 + L_{d2}^2}}{L_{d0}}. \qquad (5)$$

Fig. 2. MODTRAN-P-predicted Stokes vector and DoLP of downwelling radiance. (a) S0. (b) S1. (c) S2. (d) DoLP. The radial coordinate corresponds to the observation zenith angle, and the angular coordinate corresponds to the observation azimuth angle.

TABLE I. MODTRAN-P INPUT PARAMETER SETTINGS OF THE SIMULATION IN FIG. 2

The zenith angle is changed from 0° to 90° with a 10° step, and the azimuth angle is changed from 0° to 360°, also with a 10° step. The parameter settings of the MODTRAN-P input are listed in Table I.
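A compact way to reproduce this kind of sweep is to loop over the 10° zenith/azimuth grid (10 × 37 = 370 geometries) and apply (5) at each point. In the sketch below, `sky_stokes` is a hypothetical placeholder standing in for a MODTRAN-P run; its values are illustrative only:

```python
import numpy as np

def sky_stokes(zenith_deg, azimuth_deg):
    """Hypothetical downwelling Stokes vector [L_d0, L_d1, L_d2]."""
    z, a = np.radians(zenith_deg), np.radians(azimuth_deg)
    L0 = 1.0 + 0.3 * np.cos(z)
    L1 = 0.2 * np.sin(z) * np.cos(2 * a)
    L2 = 0.2 * np.sin(z) * np.sin(2 * a)
    return np.array([L0, L1, L2])

zeniths = np.arange(0, 100, 10)     # 0..90 deg, 10 values
azimuths = np.arange(0, 370, 10)    # 0..360 deg, 37 values
dolp = np.zeros((zeniths.size, azimuths.size))
for i, z in enumerate(zeniths):
    for j, a in enumerate(azimuths):
        L0, L1, L2 = sky_stokes(z, a)
        dolp[i, j] = np.sqrt(L1**2 + L2**2) / L0   # Eq. (5)
```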

3) Statistics of Radiance at Sensor Aperture: Based on the pBRDF model and the solar and atmosphere models, the statistics of the radiance at the sensor aperture can be generated. The radiance reaching the sensor aperture is found as the combination of the solar reflected radiance, the downwelling radiance, and the upwelling radiance, i.e.,

$$\mathbf{L}_S = \mathbf{L}_r + \mathbf{L}_d + \mathbf{L}_u. \qquad (6)$$

For convenience of computation, the sensor spectral response functions are convolved with the radiance in the scene model. In this paper, we are particularly interested in the polarization image captured in grayscale (panchromatic image) over visible wavelengths. The radiance received by the sensor is therefore an intensity that is no longer specified as a function of wavelength.

The radiance is represented by its first- and second-order statistics. The mean of the radiance is also represented as a Stokes vector, i.e.,

$$\bar{\mathbf{L}}_S = \left[\bar{L}_{S0}\ \bar{L}_{S1}\ \bar{L}_{S2}\right]^T. \qquad (7)$$


Fig. 3. Radiance variation due to sensor viewing geometry.

The variation associated with $\bar{\mathbf{L}}_S$ is defined as the covariance matrix $\Sigma_{L_S}$. The variation of the sensor-reaching radiance values may come from different sources, and it is modeled as a combination of viewing geometry variation and object texture variation.

The scene variation may come from the scene geometry effect, which is shown in Fig. 3. If the scene is viewed by a sensor with a large field of view, which is very common for airborne or spaceborne sensors, it is possible for the surface to exhibit various polarization properties, because both the pBRDF and the atmospheric polarization are dependent on the sun–surface–sensor geometry. To compute the variation of the sensor-reaching radiance resulting from the sensor viewing geometry, a discrete approach is employed, as shown in Fig. 3. The focal plane of the sensor, or the field of view for scanning systems, is first projected on the ground. In this paper, we only consider a sensor with a 2-D array. The whole focal plane, or a region of interest (ROI) that covers the object of interest, is then divided into different tiles. Each tile is associated with a different sensor viewing geometry. The scene geometry of each tile is fed into the scene model to calculate the Stokes vector radiance reaching the sensor's aperture for that tile. Assuming that the focal plane or the ROI is equally divided into K different tiles, the mean of the ith component of the sensor-reaching radiance Stokes vector $\mathbf{L}_S$ can be found as

$$\bar{L}_{Si} = \frac{1}{K} \sum_{g=1}^{K} L_{Si}(\theta_{i,g}, \theta_{r,g}, \phi_g) \qquad (8)$$

where $\theta_{i,g}$ is the solar zenith angle, $\theta_{r,g}$ is the sensor's zenith angle, and $\phi_g$ is the relative azimuth angle, all computed at the gth tile of the focal plane. The parameter K can be selected based on the number of pixels of the focal plane or the number of pixels in the ROI of a real detector array. However, choosing a larger K value will increase the computation time. In this paper, we choose a smaller K value to speed up the simulation. The pBRDF, solar illumination, and atmospheric effects are computed for each geometry $(\theta_{i,g}, \theta_{r,g}, \phi_g)$, and the sensor-reaching radiance $L_{Si}(\theta_{i,g}, \theta_{r,g}, \phi_g)$ is then generated accordingly using (6). A derivation for computing $\theta_{i,g}$, $\theta_{r,g}$, and $\phi_g$ was presented in [56].

A covariance matrix due to this viewing geometry can be found accordingly as

$$\Sigma_g = E\left\{(\mathbf{L}_S - \bar{\mathbf{L}}_S)(\mathbf{L}_S - \bar{\mathbf{L}}_S)^T\right\}. \qquad (9)$$

When the field of view of the sensor is narrow or the object is not widely distributed, the angle range shown in Fig. 3 can be approximated as $\Delta\theta_x = \Delta\theta_y = 0$. The mean $\bar{L}_{Si}$ can then be simply estimated using (6), and the covariance matrix $\Sigma_g$ is omitted.

Another source of radiance variation may be the nonuniformity of the surface reflectance, for example, texture variation. This kind of variation can also be characterized by introducing a covariance matrix $\Sigma_v$. In our analytical model, $\Sigma_v$ is considered a diagonal matrix and can be derived from measurements. Derivations for estimating $\Sigma_v$ were presented in [38] and [56].

Finally, the covariance matrix of the Stokes vector radiance at the sensor's aperture is found as

$$\Sigma_{L_S} = \Sigma_g + \Sigma_v. \qquad (10)$$
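Equations (8)–(10) amount to an empirical mean and covariance over the K tiles plus the texture term. A minimal sketch under assumed array shapes (not code from the paper):

```python
import numpy as np

def aperture_statistics(stokes_per_tile, sigma_v):
    """Mean and covariance of the sensor-reaching Stokes radiance, per (8)-(10).

    stokes_per_tile : (K, 3) array, the Stokes vector for each of the K tiles,
                      each computed from its own geometry via (6)
    sigma_v         : (3, 3) diagonal covariance modeling texture variation
    """
    L_mean = stokes_per_tile.mean(axis=0)                    # Eq. (8)
    dev = stokes_per_tile - L_mean
    sigma_g = dev.T @ dev / stokes_per_tile.shape[0]         # Eq. (9)
    return L_mean, sigma_g + sigma_v                         # Eq. (10)
```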

The radiance statistics will be further propagated into the sensor model to generate signal statistics.

B. Sensor Model

A polarization-sensitive optical system can be used to measure the polarization state of light. The framework of the sensor model is shown in Fig. 4. The sensor model takes the mean and covariance of the sensor-reaching radiance and produces the signal mean and covariance by applying sensor effects, such as the polarizing effects of the optical components, the sensor spectral response, radiometric noise sources, etc. The sensor model introduced here is specified for a division of time polarimeter (DoTP) system, and the scene considered is stationary. However, it should be noted that the sensor model can be easily extended to other types of polarimeters, such as a division of focal-plane polarimeter. For an ideal linear polarizing filter, also called an analyzer, rotated at an angle of $\psi_i$ with respect to horizontal, the transmitted radiance $L_i$ can be expressed as

$$L_i = \frac{1}{2}\left(L_{S0} + \cos 2\psi_i \cdot L_{S1} + \sin 2\psi_i \cdot L_{S2}\right) = \mathbf{m}_i^T \mathbf{L}_S \qquad (11)$$

where $\mathbf{m}_i = 0.5\,[1\ \cos 2\psi_i\ \sin 2\psi_i]^T$ and $\mathbf{L}_S$ is the incident Stokes vector radiance. With the linear polarizing filter rotated at N different angles, an N-channel polarization-sensitive system can be constructed as

$$\begin{bmatrix} L_1 \\ \vdots \\ L_N \end{bmatrix} = \begin{bmatrix} \mathbf{m}_1^T \\ \vdots \\ \mathbf{m}_N^T \end{bmatrix} \begin{bmatrix} L_{S0} \\ L_{S1} \\ L_{S2} \end{bmatrix} = \mathbf{M}\mathbf{L}_S. \qquad (12)$$

An N × 3 system matrix can be made up by including all the vectors associated with the rotations of the analyzers as

$$\mathbf{M} = [\mathbf{m}_1 \cdots \mathbf{m}_N]^T. \qquad (13)$$

Several operation strategies can be used to set the analyzer rotation angles, for example, Pickering's method of (0°, 45°, 90°), Fessenkov's method of (0°, 60°, 120°), and the modified Pickering method of (0°, 45°, 90°, 135°).

Fig. 4. Framework of the sensor model.
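Under the ideal-analyzer assumption, the analyzer vectors in (11)–(13) depend only on the rotation angles, so the system matrix M for any of the strategies above can be built directly; a brief sketch:

```python
import numpy as np

def system_matrix(angles_deg):
    """Ideal N x 3 linear-analyzer system matrix M, per (11)-(13)."""
    psi = np.radians(angles_deg)
    return 0.5 * np.column_stack([np.ones_like(psi),
                                  np.cos(2 * psi),
                                  np.sin(2 * psi)])

M_pickering = system_matrix([0, 45, 90])        # Pickering's method
M_fessenkov = system_matrix([0, 60, 120])       # Fessenkov's method
M_mod_pick  = system_matrix([0, 45, 90, 135])   # modified Pickering method
```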

The transmitted radiance shown in (11) is an ideal case. Some radiance of the orthogonal polarization state may also pass through the analyzer. This leakage effect can be characterized by including a copolarized transmission $\tau_\parallel$ and a cross-polarized transmission $\tau_\perp$. The ith component of the system matrix can be updated by considering the leakage effect as

$$\mathbf{m}_i = \frac{1}{2}\left[\tau_\parallel + \tau_\perp \quad (\tau_\parallel - \tau_\perp)\cos 2\psi_i \quad (\tau_\parallel - \tau_\perp)\sin 2\psi_i\right]^T. \qquad (14)$$

The statistics of the radiance passed through the analyzers can be found by applying the system matrix to the sensor-reaching radiance statistics. The mean of the radiance can be found as

$$\bar{\mathbf{L}} = E\{\mathbf{M}\mathbf{L}_S\} = \mathbf{M}\bar{\mathbf{L}}_S. \qquad (15)$$

The covariance matrix of the radiance can also be found accordingly as

$$\Sigma_L = E\left\{(\mathbf{L} - \bar{\mathbf{L}})(\mathbf{L} - \bar{\mathbf{L}})^T\right\} = \mathbf{M}\Sigma_{L_S}\mathbf{M}^T. \qquad (16)$$
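The leakage-corrected matrix in (14) and the propagation in (15)–(16) can be combined in a few lines. The sketch below uses assumed transmittances and scene-model statistics (placeholder numbers, not values from the paper); setting τ∥ = 1 and τ⊥ = 0 recovers the ideal matrix of the previous sketch:

```python
import numpy as np

def system_matrix_leaky(angles_deg, tau_par, tau_perp):
    """N x 3 system matrix including analyzer leakage, per (14)."""
    psi = np.radians(angles_deg)
    return 0.5 * np.column_stack([np.full_like(psi, tau_par + tau_perp),
                                  (tau_par - tau_perp) * np.cos(2 * psi),
                                  (tau_par - tau_perp) * np.sin(2 * psi)])

# Example scene-model output (placeholder numbers, not from the paper).
L_S_mean = np.array([10.0, 1.0, 0.5])      # mean Stokes radiance at the aperture
Sigma_LS = np.diag([0.04, 0.01, 0.01])     # covariance of the Stokes radiance

M = system_matrix_leaky([0.0, 45.0, 90.0, 135.0], tau_par=0.9, tau_perp=0.01)
L_mean = M @ L_S_mean                      # Eq. (15)
Sigma_L = M @ Sigma_LS @ M.T               # Eq. (16)
```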

The radiance passed through the analyzers is converted to electrons and further to sensor output signals using

$$I = \left[\int \frac{L \cdot \tau_o \cdot \eta \cdot t \cdot \lambda \cdot A_d}{(1 + 4f_\#^2) \cdot h \cdot c}\, R(\lambda)\, d\lambda + N_d\right] \cdot g_e \qquad (17)$$

where $I$ is the intensity per channel, $L$ is the input radiance, $\tau_o$ is the transmittance of the optical components, $\eta$ is the quantum efficiency of the detector, $t$ is the integration time, $\lambda$ is the central wavelength, $A_d$ is the effective area of the detector, $f_\#$ is the f-number, $h$ is Planck's constant, $c$ is the speed of light, $R(\lambda)$ is the normalized sensor response, $N_d$ is the dark current, and $g_e$ is the sensor gain. As aforementioned, the sensor spectral response functions are convolved with the radiance in the sensor-reaching radiance calculation. The radiance mean $\bar{\mathbf{L}}$ and covariance matrix $\Sigma_L$ are accordingly converted to the signal mean $\bar{\mathbf{I}}$ and covariance matrix $\Sigma_{I_s}$. The signal mean of an N-channel system is expressed as

$$\bar{\mathbf{I}} = [\bar{I}_1 \cdots \bar{I}_N]^T \qquad (18)$$

and the covariance matrix $\Sigma_{I_s}$ is an N × N matrix.

As shown in Fig. 4, various noise sources can be observed in a sensor system. These noise sources add more uncertainty to the output signal and can be characterized by introducing a covariance matrix $\Sigma_n$. In our model, $\Sigma_n$ is an N × N diagonal matrix, whose ith diagonal component is the variance of the sensor noise in the ith channel and is computed as

$$\sigma_{n,i}^2 = g_e \bar{I}_i + \sigma_C^2 \qquad (19)$$

where $\sigma_{n,i}^2$ is the variance of the total sensor noise in the ith channel, $\bar{I}_i$ is the signal mean in the ith channel (representing shot noise modeled as a Poisson process), and $\sigma_C^2$ is the variance of the signal-independent noise, such as read noise and quantization noise.

The covariance matrix of the digital output signal can be expressed as

$$\Sigma_I = \Sigma_{I_s} + \Sigma_n. \qquad (20)$$
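The radiometric conversion in (17) and the noise covariance in (19)–(20) can be sketched as follows; the spectral sampling and the trapezoidal integration are assumptions made for illustration, not the paper's exact implementation:

```python
import numpy as np

H = 6.626e-34   # Planck's constant, J s
C = 2.998e8     # speed of light, m/s

def radiance_to_counts(L_lambda, wavelengths, R, tau_o, eta, t, A_d, f_num,
                       N_d, g_e):
    """Digital count for one channel, per (17).

    L_lambda, R : spectral radiance and normalized sensor response sampled at
                  `wavelengths` (in meters); all placeholder inputs.
    """
    integrand = (L_lambda * tau_o * eta * t * wavelengths * A_d * R
                 / ((1 + 4 * f_num**2) * H * C))
    electrons = np.trapz(integrand, wavelengths) + N_d   # dark current added
    return g_e * electrons

def sensor_noise_cov(I_mean, g_e, sigma_c):
    """Diagonal N x N sensor-noise covariance, per (19)-(20): shot noise
    (scaled by the gain) plus signal-independent read/quantization noise."""
    return np.diag(g_e * I_mean + sigma_c**2)
```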

C. Processing Model

In the processing model, we consider the transformation of the digitized intensity to Stokes parameters, and the reconstructed Stokes parameters are used as features for further analysis.

The mean and the covariance matrix of the reconstructed Stokes parameters can be found, respectively, as

$$\bar{\mathbf{S}} = [\bar{S}_0\ \bar{S}_1\ \bar{S}_2]^T = \mathbf{W}\bar{\mathbf{I}} \qquad (21)$$

$$\Sigma_S = \mathbf{W}\Sigma_I\mathbf{W}^T \qquad (22)$$

where $\mathbf{W}$ is the inverse of the estimated or calibrated system matrix $\mathbf{M}$. If $\mathbf{M}$ is a nonsquare matrix, its pseudoinverse can be applied using

$$\mathbf{W} = (\mathbf{M}^T\mathbf{M})^{-1}\mathbf{M}^T. \qquad (23)$$
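A minimal sketch of the reconstruction step in (21)–(23), using NumPy's pseudoinverse for a nonsquare M:

```python
import numpy as np

def reconstruct_stokes(I_mean, Sigma_I, M):
    """Mean and covariance of the reconstructed Stokes parameters, per (21)-(23)."""
    W = np.linalg.pinv(M)            # (M^T M)^-1 M^T for a full-rank N x 3 matrix
    S_mean = W @ I_mean              # Eq. (21)
    Sigma_S = W @ Sigma_I @ W.T      # Eq. (22)
    return S_mean, Sigma_S
```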

A DoLP value can then be calculated based on the reconstructed Stokes parameters. In this paper, the mean of the DoLP is approximated based on the means of the Stokes parameters, i.e.,

$$\overline{\mathrm{DoLP}} = \frac{\sqrt{\bar{S}_1^2 + \bar{S}_2^2}}{\bar{S}_0} \qquad (24)$$

and the variance of the DoLP can be found as

$$\sigma_{\mathrm{DoLP}}^2 = a_1^2\sigma_{S_0}^2 + a_2^2\sigma_{S_1}^2 + a_3^2\sigma_{S_2}^2 + \sum \rho_{ij}\, a_i a_j\, \sigma_{S_i}\sigma_{S_j} \qquad (25)$$

where

$$a_1 = -\frac{\sqrt{\bar{S}_1^2 + \bar{S}_2^2}}{\bar{S}_0^2} \qquad (26)$$

$$a_2 = \frac{\bar{S}_1}{\bar{S}_0\sqrt{\bar{S}_1^2 + \bar{S}_2^2}} \qquad (27)$$

$$a_3 = \frac{\bar{S}_2}{\bar{S}_0\sqrt{\bar{S}_1^2 + \bar{S}_2^2}} \qquad (28)$$

$\sigma_{S_i}$ is the standard deviation of the ith component of the Stokes vector, $\rho_{ij}$ is the correlation coefficient between the variables $S_i$ and $S_j$, and the summation is taken over all the possible combinations of Stokes parameters. A derivation of the statistics of the DoLP was presented in [56].
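Written compactly, (25) with the coefficients (26)–(28) is the quadratic form a^T Σ_S a, where Σ_S carries the variances and covariances of the reconstructed Stokes parameters. A short sketch under that reading (an assumption about how the cross-term sum is counted, not a statement of the paper's exact bookkeeping):

```python
import numpy as np

def dolp_statistics(S_mean, Sigma_S):
    """First-order approximation of the DoLP mean and variance, per (24)-(28)."""
    S0, S1, S2 = S_mean
    lin = np.sqrt(S1**2 + S2**2)
    dolp_mean = lin / S0                                   # Eq. (24)

    # Sensitivity coefficients (partial derivatives), Eqs. (26)-(28).
    a = np.array([-lin / S0**2, S1 / (S0 * lin), S2 / (S0 * lin)])

    # Error propagation, Eq. (25), written as a quadratic form in Sigma_S.
    dolp_var = float(a @ Sigma_S @ a)
    return dolp_mean, dolp_var
```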


Fig. 5. Comparison of DoLP statistics based on the Monte Carlo experiment and the analytical model prediction. The incident light has normalized Stokes vector $\mathbf{S} = [1\ 0.5\ 0.5]^T$. (a) Comparison of mean. (b) Comparison of standard deviation.

It should be noted that the estimated DoLP statistics are affected by the intensity SNR as well as by the choice of estimator. As shown in [57], the mean and standard deviation of the estimated DoLP diverge from the true values at low SNR, and different estimators, such as the maximum-likelihood, empirical mean, and median estimators, behave differently. To show how the estimated DoLP statistics are affected by the intensity SNR, Monte Carlo experiments are performed. The experiments are based on a specified Stokes vector radiance received by the sensor and are run at different SNRs of $S_0$. A four-channel polarization-sensitive imaging system with the analyzers rotated at 0°, 45°, 90°, and 135° is considered. In the sensor model, only shot noise is considered. For each SNR of $S_0$, $10^4$ realizations of the intensity are generated, and the mean and standard deviation of the DoLP are calculated at each SNR level. The analytical model predictions of the DoLP statistics are compared with the Monte Carlo results in Fig. 5. In the simulation, the incident light has normalized Stokes vector $\mathbf{S} = [1\ 0.5\ 0.5]^T$. It is shown that the analytical-model-predicted statistics of DoLP diverge at low SNR, which is in accordance with the observation presented in [57].

Fig. 6. (Left) Panchromatic and (Right) DoLP images of a black-painted panel and a white-painted panel. The pixels within the red outlines are used for model validation.

In this paper, all the experiments are performed with the $S_0$ SNR larger than 20 dB to ensure that the model-predicted DoLP statistics agree well with the true values.
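The Monte Carlo check described above can be reproduced with a short simulation. The sketch below approximates the channel noise as additive Gaussian scaled to a requested S0 SNR (an assumption for this illustration; the experiment in the paper uses shot noise):

```python
import numpy as np

def monte_carlo_dolp(S_true, snr_db, angles_deg=(0, 45, 90, 135), n=10_000,
                     seed=0):
    """Empirical mean/std of DoLP estimated from noisy four-channel intensities."""
    rng = np.random.default_rng(seed)
    S = np.asarray(S_true, dtype=float)

    psi = np.radians(angles_deg)
    M = 0.5 * np.column_stack([np.ones_like(psi), np.cos(2 * psi), np.sin(2 * psi)])
    W = np.linalg.pinv(M)

    I_clean = M @ S
    sigma = S[0] / 10 ** (snr_db / 20)           # noise level set by the S0 SNR
    I_noisy = I_clean + rng.normal(0.0, sigma, size=(n, I_clean.size))

    S_hat = I_noisy @ W.T                        # reconstruct Stokes per realization
    dolp = np.sqrt(S_hat[:, 1]**2 + S_hat[:, 2]**2) / S_hat[:, 0]
    return dolp.mean(), dolp.std()

# Normalized Stokes vector used in Fig. 5.
print(monte_carlo_dolp([1.0, 0.5, 0.5], snr_db=30))
```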

III. MODEL VALIDATION

The proposed analytical model has been validated using real data. A DoTP system, composed of a 12-bit Prosilica GC780 camera containing a 782 × 582 Sony charge-coupled device (CCD), an Edmund Optics 35-mm focal length lens, a UV–IR cut filter, and a rotatable linear polarizer, was used to collect outdoor imagery. The noise characteristics of the CCD camera were calibrated and presented in our previous publication [38]. Polarization images were collected using the modified Pickering method from a rooftop on May 25, 2010. The scene was composed of a piece of black-painted panel, a piece of white-painted panel, and a piece of asphalt. Two different sun–object–sensor geometries were chosen to validate the model: one with a solar zenith angle $\theta_i$ of 22° and a relative azimuth angle $\phi$ between the sun and the sensor of 180°, and the other with $\theta_i$ of 43° and $\phi$ of 90°. Both were viewed by the sensor at a zenith angle $\theta_r$ of 53°. Due to the limited length of this paper, only validations for the black- and white-painted panels at one of the two scene geometries are presented. (See [56] and [58] for more complete validation results.) The panchromatic and DoLP images of the black- and white-painted panels collected at one of the scene geometries are shown in Fig. 6. The input parameters of the scene model are the refractive indexes, surface roughness, and directional hemispherical reflectance values of the black paint and white paint. MODTRAN-P was run to generate the solar illumination and the polarized atmospheric radiance. Because the test objects are not widely distributed, no scene geometry variation was considered in the validation, and the texture variation of the object surface was measured using the technique introduced in [56]. The sensor model parameter settings were selected based on documents provided by the manufacturer and on sensor calibration. The detailed parameter settings for the scene and sensor models were presented in [56].

A. Multichannel Intensity Statistics

The output of the sensor model is the intensity in digital counts. We compare the model-predicted mean and standard deviation of the intensity to the real measurements of the black paint and the white paint as an intermediate result.


Fig. 7. Comparison of the analytical-model-predicted intensity mean and standard deviation to the real measurements of the black paint and the white paint at scene geometry $(\theta_i, \theta_r, \phi) = (22^\circ, 53^\circ, 180^\circ)$. (a) Comparison of mean. (b) Comparison of standard deviation.

The results for the measurement taken at $(\theta_i, \theta_r, \phi) = (22^\circ, 53^\circ, 180^\circ)$ are shown in Fig. 7. The statistics of the intensity are compared at four different channels, which were measured using the modified Pickering method with 0°, 45°, 90°, and 135° analyzer rotations. It can be seen that the analytical model predicts the trend of the data very well, although some discrepancy between the model predictions and the real measurements can be seen in magnitude.

B. Stokes Vector and DoLP Statistics

The model has been further validated based on the Stokes vector and DoLP statistics. The histograms of the normalized $S_1$ and $S_2$ Stokes parameters and the DoLP values as observed from the black paint and white paint are shown in Fig. 8(a). The histograms of the real data were fitted by Gaussian curves, which are shown as solid lines in the figures. It is shown that the black paint and white paint exhibit different polarization properties, and the black paint shows a higher DoLP due to its lower reflectance. The white paint shows a smaller variance than the black paint in both the Stokes parameters and the DoLP since the signal, as measured from the white paint, has a higher SNR. The mean and variance of the Stokes parameters and DoLP were then predicted using the analytical model, from which Gaussian curves are plotted in Fig. 8(b). It is shown that the predicted statistics closely match the measured data. The discrepancy between the model predictions and the real measurements is possibly due to differences between the real atmosphere and the atmosphere predicted by MODTRAN-P, differences between the estimated and actual material properties, or calibration error of the sensor.

Fig. 8. Model validation with black paint and white paint at scene geometry $(\theta_i, \theta_r, \phi) = (22^\circ, 53^\circ, 180^\circ)$. (a) Real data. (b) Model prediction.

However, the results show that the analytical model is able to predict the general polarization behavior of different materials and the data trends at different scene geometries; more details can be found in [56].

IV. SYSTEM PERFORMANCE METRICS

Here, system performance metrics are defined from two different perspectives: the strength of the polarimetric signatures of objects and the target detection performance. These performance metrics can then be used to evaluate the potential performance of a polarimetric imaging system.

A. Strength of Polarimetric Signature

The polarization signatures of different objects are often defined using the Stokes vector and DoLP values. The DoLP is particularly helpful since it is a normalized value and can be used to quantitatively evaluate the strength of the polarimetric signature. However, the measurement of DoLP is always contaminated with sensor noise. The DoLP measurement can therefore vary, and the variance of the DoLP due to sensor noise is closely related to the SNR of the intensity measurement. Therefore, the strength of the polarimetric signature is characterized by introducing the DoLP SNR as

$$\mathrm{SNR}_{\mathrm{DoLP}} = \frac{\overline{\mathrm{DoLP}}}{\sigma_{\mathrm{DoLP}}} \qquad (29)$$

where $\overline{\mathrm{DoLP}}$ is the mean of the DoLP, which can be computed using (24), and $\sigma_{\mathrm{DoLP}}$ is the standard deviation of the DoLP, which can be computed using (25). The DoLP SNR represents a normalized performance metric, as compared with the noise-equivalent DoLP (NEDoLP). An expression of the NEDoLP based on the modified Pickering measurement is presented in [59].

B. Target Detection Performance

Target detection algorithms that have been developed for spectral imagery have potential applications for polarimetric imagery by replacing the spectral vector with the polarimetric intensity or Stokes vector. The RX anomaly detection algorithm [60], which has been widely used for spectral image analysis, is modified and used to evaluate the target detection performance.


The RX detector is expressed as

$$D_{RX}(\mathbf{z}) = \mathbf{z}^T\Sigma^{-1}\mathbf{z}\;
\begin{cases}
\geq \eta \Rightarrow H_1 \\
< \eta \Rightarrow H_0
\end{cases} \qquad (30)$$

where $\mathbf{z} = \mathbf{x} - \mathbf{m}$; $\mathbf{x}$ is the sample vector composed of the multichannel intensity or Stokes parameters; $\mathbf{m}$ and $\Sigma$ are the background mean and covariance matrix, respectively; $\eta$ is the user-specified threshold to control the false alarm rate; $H_1$ is the hypothesis that the target is present; and $H_0$ is the hypothesis that the target is absent. The RX anomaly detection algorithm can be recognized as the estimation of the squared Mahalanobis distance of the test pixel from the mean of the background. In this paper, the RX algorithm is applied only to the Stokes parameters. A closed-form expression of the statistical distribution of $D_{RX}$ cannot be easily derived. Monte Carlo experiments are therefore performed to estimate $D_{RX}$ for the test pixels. In each Monte Carlo experiment, $10^5$ realizations of each Stokes parameter are generated based on its statistics estimated using (21) and (22) in the analytical model. The Stokes parameters are assumed to follow a Gaussian distribution according to the observations shown in [7].

Receiver operating characteristic (ROC) curves are then generated based on the $D_{RX}$ values to help evaluate the target detection performance. The ROC curve is a graphical plot that characterizes the probability of detection (PD) at different probability of false alarm (PFA) levels obtained by varying the threshold $\eta$.
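As a rough illustration of how (30) and the ROC construction fit together, the sketch below scores Gaussian Stokes-vector realizations with the RX statistic and sweeps the threshold η; the target and background statistics are placeholders, not values produced by the model in this paper:

```python
import numpy as np

def rx_statistic(samples, bg_mean, bg_cov):
    """Squared Mahalanobis distance of each sample from the background, per (30)."""
    z = samples - bg_mean
    return np.einsum('ij,jk,ik->i', z, np.linalg.inv(bg_cov), z)

def roc_curve(d_target, d_background, n_thresh=200):
    """Empirical (PFA, PD) pairs obtained by sweeping the threshold eta."""
    thresholds = np.linspace(min(d_background.min(), d_target.min()),
                             max(d_background.max(), d_target.max()), n_thresh)
    pd = np.array([(d_target >= eta).mean() for eta in thresholds])
    pfa = np.array([(d_background >= eta).mean() for eta in thresholds])
    return pfa, pd

# Draw Gaussian Stokes-vector realizations for target and background
# (placeholder statistics), then score both populations with RX.
rng = np.random.default_rng(0)
bg_mean, bg_cov = np.array([10.0, 0.2, 0.1]), np.diag([0.5, 0.02, 0.02])
tg_mean, tg_cov = np.array([9.0, 0.8, 0.4]), np.diag([0.4, 0.03, 0.03])
bg = rng.multivariate_normal(bg_mean, bg_cov, 100_000)
tg = rng.multivariate_normal(tg_mean, tg_cov, 100_000)
pfa, pd = roc_curve(rx_statistic(tg, bg_mean, bg_cov),
                    rx_statistic(bg, bg_mean, bg_cov))
```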

V. EXAMPLES OF SYSTEM PERFORMANCE EVALUATION

Here, we present how the system model and the defined performance metrics may be used to study potential system performance based on different parameters. In these studies, two different materials were considered: black paint used as the target and asphalt used as the background. Their optical properties were presented in [56]. MODTRAN-P was run to generate the solar illumination and the polarized atmospheric radiance, based on the 1976 U.S. standard atmospheric model, rural extinction, a 23-km meteorological range, and day of year 145. The simulated sensor has an f-number of 2, a pixel pitch of 8.3 μm, a bit depth of 12 bits, a field of view of 15° × 15°, and an integration time of 30 μs. The background object asphalt is considered widely distributed within the field of view. Other configurations of the sensor, such as the sensor response, can be found in [56]. In the processing model, the modified Pickering method is used to reconstruct the Stokes parameters.

A. Polarization Contributions of Radiometric Sources

The radiance reaching the sensor's aperture has contributions from different radiometric sources, as modeled in Section II-A2. The impact of the different radiometric sources on the DoLP SNR was first studied. The contributions from the different radiometric sources are modeled as the reflected radiance, the upwelling radiance, and the total radiance. The reflected radiance is the combined solar and downwelling radiance reflected only from the object surface and can be represented as

$$\mathbf{L}_{\mathrm{reflect}} = \mathbf{L}_r + \mathbf{L}_d \qquad (31)$$

Fig. 9. Polarization contributions of radiometric sources at solar zenith angle $\theta_i = 25^\circ$ and two sensor zenith angles: (a) $\theta_r = 10^\circ$ and (b) $\theta_r = 50^\circ$.

where the radiance is also modeled as a Stokes vector. The reflected radiance can be thought of as the target's polarimetric signature. The upwelling radiance is the radiance propagated to the sensor's aperture without reaching the object surface, which can be thought of as contaminating the target's polarimetric signature. The total radiance can be found using (6). The DoLP SNR of a black-painted object with contributions from each radiometric source is computed at different scene geometries, and the results are compared in Fig. 9.

In Fig. 9, it is shown that the DoLP SNR calculated based on the total radiance differs from that based on the radiance truly reflected from the object surface, which is due to the contamination from the upwelling radiance. At certain scene geometries, such as $\theta_i = 25^\circ$ and $\theta_r = 10^\circ$, as shown in Fig. 9(a), the upwelling radiance results in an even higher DoLP SNR than the reflected radiance at $\Delta\phi = 0^\circ$ and $\Delta\phi = 90^\circ$. In the same figure, it is also shown that, although the DoLP SNR of the total radiance is much higher at $\Delta\phi = 0^\circ$, the polarization information obtained at this azimuth angle is dominated by the upwelling radiance and reflects very little information about the polarimetric signature of the object. The polarization sensed by the sensor is usually based on the total radiance; therefore, a higher DoLP SNR does not necessarily mean a stronger polarimetric signature. However, these figures show the general trend that the DoLP SNR of the reflected radiance increases when the azimuth angle changes from 0° to 180°, and therefore a stronger polarimetric signature is expected to be captured in the forward reflection direction. A higher DoLP SNR of the reflected radiance is also observed at an oblique viewing angle, such as $\theta_r = 50^\circ$ shown in Fig. 9(b). In the following performance studies, we concentrate on the azimuth angle of 180°, at which more accurate polarimetric signatures of the object are expected to be captured.


Fig. 10. Comparison of the DoLP SNR of black paint with (a) the reflected radiance and (b) the total radiance captured by the sensor at different altitudes.

B. Effect of Sensor Platform Altitude

The DoLP SNR was analyzed by considering the altitude of the sensor platform. The DoLP SNRs of the black paint are estimated at platform altitudes of 3, 5, 7, and 9 km, with the scene geometry at $\theta_i = 50^\circ$, $\theta_r = 50^\circ$, and $\Delta\phi = 180^\circ$. The simulation results are shown in Fig. 10. The reflected radiance and the total radiance were also considered separately in the calculation of the DoLP SNR. In Fig. 10(a), the DoLP SNR was calculated based on the reflected radiance, and in Fig. 10(b), the DoLP SNR was calculated based on the total radiance. It is shown that the DoLP SNR of the reflected radiance decreases at higher altitudes and therefore corresponds to a weaker polarimetric signature. However, this trend is not followed in Fig. 10(b), where the DoLP SNR is calculated based on the total radiance. This is because, at higher altitudes, the upwelling radiance increases significantly. These observations suggest that, at higher altitudes, a higher DoLP SNR does not necessarily mean a stronger polarimetric signature, since the polarimetric signature may be severely contaminated by the upwelling radiance.

The effect of the sensor platform altitude on the target detection performance is further analyzed by applying the RX anomaly detector and plotting ROC curves. The target detection performance is compared at different platform altitudes in Fig. 11. It is shown that the detection performance decreases with increasing altitude. This observation is in accordance with the results shown in Fig. 10. At a lower platform altitude, the DoLP SNR of the radiance reflected from the target is much higher; therefore, a stronger polarimetric signature is present.

C. Effect of Scene Geometry

In this section, we further study the effect of scene geometry on the polarization behavior of objects, as well as on the target detection performance.

Fig. 11. ROC curve comparison at different platform altitudes.

Fig. 12. DoLP SNR of black paint at different solar incident angles $\theta_i$ and sensor viewing angles $\theta_r$.

The DoLP SNRs of the black paint were estimated at different solar incident and sensor viewing angles, and the results are shown in Fig. 12. In the simulations shown in the following, two different sensor viewing angles were considered: $\theta_r = 20^\circ$ and $\theta_r = 40^\circ$. For each $\theta_r$ value, the solar incident angle $\theta_i$ was changed from 10° to 80° with a 10° increment. It is shown that, at the oblique viewing angle $\theta_r = 40^\circ$, the black paint shows a higher DoLP SNR, and stronger polarization can be found around 40°–60° incident angles.

The effect of scene geometry was further analyzed by evaluating the target detection performance. The ROC curves are plotted in Fig. 13 by performing RX detection at four different scene geometries. It is shown that, at an oblique solar incident angle $\theta_i = 50^\circ$, the detection performance is enhanced. When $\theta_i = 10^\circ$, the detection at an oblique sensor viewing angle $\theta_r = 40^\circ$ outperforms the detection at $\theta_r = 20^\circ$, and this result is in accordance with the DoLP SNR analysis shown in Fig. 12. When $\theta_i = 50^\circ$, the detection at $\theta_r = 20^\circ$ outperforms the detection at $\theta_r = 40^\circ$. This result, however, does not follow the observation in Fig. 12, where the DoLP SNR of the black paint at $\theta_r = 40^\circ$ is always higher than the DoLP SNR at $\theta_r = 20^\circ$. This is mainly because the DoLP SNR is calculated solely based on the target object, but the detection performance is determined by the combined factors of both the target and the background.


Fig. 13. ROC curve comparison at different scene geometry settings.

In this simulation, the background object asphalt is not randomly polarized, and its polarization properties also affect the target detection performance.

D. Sensitivity to Sensor Noise

The effect of sensor noise is studied by defining a sensor noise factor $K_n$, with which the total sensor noise can be scaled as

$$\sigma_N = K_n \sqrt{g_e \bar{I}_i + \sigma_C^2} \qquad (32)$$

where $\sigma_N$ is the standard deviation of the total sensor noise based on the noise factor $K_n$ and the parameters $g_e$, $\bar{I}_i$, and $\sigma_C$, as defined in Section II-B.

The DoLP SNR of the black paint was first studied by changing the sensor noise factor from 1 to 3.5 with a 0.5 step, and the results are shown in Fig. 14. The DoLP SNR was also analyzed at different solar incident zenith angles. It is shown that, with a larger sensor noise factor, the DoLP SNR of the black paint decreases. The black paint presents a higher DoLP SNR at incident zenith angles of around 40°–60°, showing good agreement with the results presented earlier.

The target detection performance was also studied with three different sensor noise factors, $K_n = 1$, 2, and 3, and the ROC curves are compared in Fig. 15. In this simulation, the incident zenith angle was fixed at 50°. The target detection performance is affected significantly by the level of sensor noise. The PD value at a constant false alarm rate decreases when the sensor noise factor is increased.

VI. CONCLUSION

An analytical model has been presented to predict the end-to-end system performance of a passive optical polarimetric imaging system. Three subsystems, i.e., scene, sensor, and processing, were modeled, respectively. The scene model produces the first- and second-order statistics of the radiance reaching the sensor's aperture.

Fig. 14. Effect of the sensor noise factor and the incident zenith angle on the DoLP SNR of black paint with $\theta_r = 40^\circ$ and $\Delta\phi = 180^\circ$.

Fig. 15. ROC curve comparison with different sensor noise factors.

The sensor model then converts the radiance statistics to the statistics of the digital output signals. The processing model extracts the features (Stokes parameters and DoLP) from the multichannel intensity signals and generates the statistics of the extracted features. The analytical model was validated using real data collected from a rooftop. The results based on the model prediction have shown good agreement with the experimental measurements. Based on the analysis, two different performance metrics were defined to evaluate the end-to-end system performance. The DoLP SNR was defined to estimate the strength of the polarimetric signature, and ROC curves generated by applying an RX anomaly detector were used to evaluate the target detection performance. Studies were performed to analyze the potential system performance, and several examples were presented by varying different system parameters. It has been shown that the analytical model can help us better understand the effects of various system attributes.


REFERENCES

[1] J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review ofpassive imaging polarimetry for remote sensing applications,” Appl. Opt.,vol. 45, no. 22, pp. 5453–5469, Aug. 2006.

[2] J. R. Schott, Fundamentals of Polarimetric Remote Sensing.Bellingham, WA, USA: SPIE Press, 2009.

[3] L. Wolff and T. Boult, “Constraining object features using a polarizationreflectance model,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 7,pp. 635–657, Jul. 1991.

[4] M. Hyde, S. Cain, J. Schmidt, and M. Havrilla, “Material classificationof an unknown object using turbulence-degraded polarimetric imagery,”IEEE Trans. Geosci. Remote Sens., vol. 49, no. 1, pp. 264–276, Jan. 2011.

[5] L. B. Wolff, “Polarization camera for computer vision with a beam split-ter,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 11, no. 11, pp. 2935–2945,Nov. 1994.

[6] J. S. Tyo, M. P. Rowe, J. E. N. Pugh, and N. Engheta, “Target detectionin optically scattering media by polarization-difference imaging,” Appl.Opt., vol. 35, no. 11, pp. 1855–1870, Apr. 1996.

[7] F. Goudail, P. Terrier, Y. Takakura, L. Bigué, F. Galland, andV. DeVlaminck, “Target detection with a liquid-crystal-based passivestokes polarimeter,” Appl. Opt., vol. 43, no. 2, pp. 274–282, Jan. 2004.

[8] K. M. Yemelyanov, S. S. Lin, E. N. Pugh, and N. Engheta, “Adaptivealgorithms for two-channel polarization sensing under various polariza-tion statistics with nonuniform distributions,” Appl. Opt., vol. 45, no. 22,pp. 5504–5520, Aug. 2006.

[9] M. G. Gartley and W. Basener, “Topological anomaly detection perfor-mance with multispectral polarimetric imagery,” in Proc. SPIE, 2009,vol. 7334, no. 1, pp. 73341O-1–73341O-12.

[10] G. D. Lewis, D. L. Jordan, and P. J. Roberts, “Backscattering targetdetection in a turbid medium by polarization discrimination,” Appl. Opt.,vol. 38, no. 18, pp. 3937–3944, Jun. 1999.

[11] B. M. Flusche, “An analysis of multimodal sensor fusion for target de-tection in an urban environment,” Ph.D. dissertation, Chester F. CarslonCenter Imaging Sci., Rochester Inst. Technol., Rochester, NY, USA,2011.

[12] M. Presnar, A. Raisanen, D. Pogorzala, J. Kerekes, and A. Rice, “Dynamicscene generation, multimodal sensor design, and target tracking demon-stration for hyperspectral/polarimetric performance-driven sensing,” inProc. SPIE, 2010, vol. 7672, pp. 76720T-1–76720T-12.

[13] M. Presnar, “Modeling and simulation of adaptive multimodal opticalsensors for target tracking in the visible to near infrared,” Ph.D. disser-tation, Chester F. Carslon Center Imaging Sci., Rochester Inst. Technol.,Rochester, NY, USA, 2010.

[14] J. Kerekes and D. Landgrebe, “An analytical model of earth-observationalremote sensing systems,” IEEE Trans. Syst., Man, Cybern., vol. 21, no. 1,pp. 125–133, Jan./Feb. 1991.

[15] J. P. Kerekes and J. E. Baum, “Spectral imaging system analytical modelfor subpixel object detection,” IEEE Trans. Geosci. Remote Sens., vol. 40,no. 5, pp. 1088–1101, May 2002.

[16] J. Kerekes and D. Landgrebe, “Parameter trade-offs for imaging spec-troscopy systems,” IEEE Trans. Geosci. Remote Sens., vol. 29, no. 1,pp. 57–65, Jan. 1991.

[17] J. Kerekes and J. Baum, “Hyperspectral imaging system modeling,”Lincoln Lab. J., vol. 14, no. 1, pp. 117–130, 2003.

[18] J. R. Schott, Remote Sensing: the Image Chain Approach, 2nd ed.New York, NY, USA: Oxford Univ. Press, 2007.

[19] R. G. Priest and S. R. Meier, “Polarimetric microfacet scattering theorywith applications to absorptive and reflective surfaces,” Opt. Eng., vol. 41,no. 5, pp. 988–993, May 2002.

[20] M. W. Hyde, J. D. Schmidt, and M. J. Havrilla, “A geometrical optics polarimetric bidirectional reflectance distribution function for dielectric and metallic surfaces,” Opt. Exp., vol. 17, no. 24, pp. 22 138–22 153, Nov. 2009.

[21] J. R. Shell, “Polarimetric remote sensing in the visible to near infrared,” Ph.D. dissertation, Chester F. Carlson Center Imaging Sci., Rochester Inst. Technol., Rochester, NY, USA, 2005.

[22] M. Gartley, “Polarimetric modeling of remotely sensed scenes in the thermal infrared,” Ph.D. dissertation, Chester F. Carlson Center Imaging Sci., Rochester Inst. Technol., Rochester, NY, USA, 2007.

[23] J. Shell, C. Salvaggio, and J. Schott, “A novel BRDF measurement technique with spatial resolution-dependent spectral variance,” in Proc. IEEE IGARSS, Sep. 2004, vol. 7, pp. 4754–4757.

[24] B. Bartlett, C. Devaraj, M. Gartley, C. Salvaggio, and J. R. Schott, “Spectro-polarimetric BRDF determination of objects using in-scene calibration materials for polarimetric imagers,” in Proc. SPIE, 2009, vol. 7461, pp. 74610T-1–74610T-11.

[25] V. Vanderbilt and L. Grant, “Plant canopy specular reflectance model,” IEEE Trans. Geosci. Remote Sens., vol. GE-23, no. 5, pp. 722–730, Sep. 1985.

[26] D. A. Talmage and P. J. Curran, “Remote sensing using partially polarized light,” Int. J. Remote Sens., vol. 7, no. 1, pp. 47–64, Jan. 1986.

[27] J. Peltoniemi, T. Hakala, J. Suomalainen, and E. Puttonen, “Polarised bidirectional reflectance factor measurements from soil, stones, and snow,” J. Quant. Spectrosc. Radiat. Transf., vol. 110, no. 17, pp. 1940–1953, Nov. 2009.

[28] G. Rondeaux and M. Herman, “Polarization of light reflected by crop canopies,” Remote Sens. Environ., vol. 38, no. 1, pp. 63–75, Oct. 1991.

[29] L. Bousquet, S. Lacherade, S. Jacquemoud, and I. Moya, “Leaf BRDF measurements and model for specular and diffuse components differentiation,” Remote Sens. Environ., vol. 98, no. 2/3, pp. 201–211, Oct. 2005.

[30] F.-M. Breon, D. Tanre, P. Lecomte, and M. Herman, “Polarized reflectance of bare soils and vegetation: Measurements and models,” IEEE Trans. Geosci. Remote Sens., vol. 33, no. 2, pp. 487–499, Mar. 1995.

[31] F. Nadal and F.-M. Breon, “Parameterization of surface polarized reflectance derived from POLDER spaceborne measurements,” IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1709–1718, May 1999.

[32] J. Zhang, J. Chen, B. Zou, and Y. Zhang, “Modeling and simulation of polarimetric hyperspectral imaging process,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 6, pp. 2238–2253, Jun. 2012.

[33] J. S. Tyo, “Optimum linear combination strategy for an n-channel polarization-sensitive imaging or vision system,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 15, no. 2, pp. 359–366, Feb. 1998.

[34] J. S. Tyo, “Design of optimal polarimeters: Maximization of signal-to-noise ratio and minimization of systematic error,” Appl. Opt., vol. 41, no. 4, pp. 619–630, Feb. 2002.

[35] D. Sabatke, M. Descour, E. Dereniak, W. Sweatt, S. Kemme, and G. Phipps, “Optimization of retardance for a complete Stokes polarimeter,” Opt. Lett., vol. 25, no. 11, pp. 802–804, Jun. 2000.

[36] F. Goudail, “Optimization of the contrast in active Stokes images,” Opt. Lett., vol. 34, no. 2, pp. 121–123, Jan. 2009.

[37] M. Richert, X. Orlik, and A. D. Martino, “Adapted polarization state contrast image,” Opt. Exp., vol. 17, no. 16, pp. 14 199–14 210, Aug. 2009.

[38] L. Meng and J. P. Kerekes, “Adaptive target detection with a polarization-sensitive optical system,” Appl. Opt., vol. 50, no. 13, pp. 1925–1932, May 2011.

[39] D. A. LeMaster and S. C. Cain, “Multichannel blind deconvolution of polarimetric imagery,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 25, no. 9, pp. 2170–2176, Sep. 2008.

[40] J. R. Valenzuela and J. A. Fessler, “Joint reconstruction of Stokes images from polarimetric measurements,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 26, no. 4, pp. 962–968, Apr. 2009.

[41] C. F. LaCasse, R. A. Chipman, and J. S. Tyo, “Band limited data reconstruction in modulated polarimeters,” Opt. Exp., vol. 19, no. 16, pp. 14 976–14 989, Aug. 2011.

[42] C. F. LaCasse, J. S. Tyo, and R. A. Chipman, “Role of the null space of the DRM in the performance of modulated polarimeters,” Opt. Lett., vol. 37, no. 6, pp. 1097–1099, Mar. 2012.

[43] F. Goudail and P. Réfrégier, “Target segmentation in active polarimetric images by use of statistical active contours,” Appl. Opt., vol. 41, no. 5, pp. 874–883, Feb. 2002.

[44] V. Thilak, J. Saini, D. G. Voelz, and C. D. Creusere, “Pattern recognition for passive polarimetric data using nonparametric classifiers,” in Proc. SPIE, 2005, vol. 5888, pp. 337–344.

[45] P. Terrier, V. Devlaminck, and J. M. Charbois, “Segmentation of rough surfaces using a polarization imaging system,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 25, no. 2, pp. 423–430, Feb. 2008.

[46] M. W. Hyde, IV, J. D. Schmidt, M. J. Havrilla, and S. C. Cain, “Enhanced material classification using turbulence-degraded polarimetric imagery,” Opt. Lett., vol. 35, no. 21, pp. 3601–3603, Nov. 2010.

[47] B. D. Bartlett, A. Schlamm, C. Salvaggio, and D. W. Messinger, “Anomaly detection of man-made objects using spectropolarimetric imagery,” in Proc. SPIE, 2011, vol. 8048, pp. 80480B-1–80480B-7.

[48] Y.-Q. Zhao, P. Gong, and Q. Pan, “Object detection by spectropolarimeteric imagery fusion,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 10, pp. 3337–3345, Oct. 2008.

[49] B. M. Ratliff, D. A. LeMaster, R. T. Mack, P. V. Villeneuve, J. J. Weinheimer, and J. R. Middendorf, “Detection and tracking of RC model aircraft in LWIR microgrid polarimeter data,” in Proc. SPIE, 2011, vol. 8160, pp. 816002-1–816002-13.

[50] J. R. Shell and J. R. Schott, “A polarized clutter measurement technique based on the governing equation for polarimetric remote sensing in the visible to near infrared,” in Proc. SPIE, 2005, vol. 5811, pp. 34–45.

[51] K. E. Torrance and E. M. Sparrow, “Theory for off-specular reflection from roughened surfaces,” J. Opt. Soc. Amer., vol. 57, no. 9, pp. 1105–1112, Sep. 1967.

[52] A. Berk, L. S. Bernstein, and D. C. Robertson, “MODTRAN: A moderate resolution model for LOWTRAN 7,” Spectr. Sci. Inc., Burlington, MA, USA, Tech. Rep. GL-TR-89-0122, 1989.

[53] W. G. Egan, Optical Remote Sensing: Science and Technology. New York, NY, USA: Marcel Dekker, 2004.

[54] N. J. Pust and J. A. Shaw, “Dual-field imaging polarimeter using liquid crystal variable retarders,” Appl. Opt., vol. 45, no. 22, pp. 5470–5478, Aug. 2006.

[55] N. J. Pust and J. A. Shaw, “Digital all-sky polarization imaging of partly cloudy skies,” Appl. Opt., vol. 47, no. 34, pp. H190–H198, Dec. 2008.

[56] L. Meng, “Analytical modeling, performance analysis, and optimization of polarimetric imaging system,” Ph.D. dissertation, Chester F. Carlson Center Imaging Sci., Rochester Inst. Technol., Rochester, NY, USA, 2012.

[57] A. Bénière, F. Goudail, M. Alouini, and D. Dolfi, “Degree of polarization estimation in the presence of nonuniform illumination and additive Gaussian noise,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 25, no. 4, pp. 919–929, Apr. 2008.

[58] L. Meng and J. Kerekes, “Analytical modeling of optical polarimetric imaging systems,” in Proc. IEEE IGARSS, 2011, pp. 3998–4001.

[59] M. W. Jones and C. M. Persons, “Performance predictions for micro-polarizer array imaging polarimeters,” in Proc. SPIE, 2007, vol. 6682, pp. 668208-1–668208-11.

[60] I. Reed and X. Yu, “Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution,” IEEE Trans. Acoust., Speech, Signal Process., vol. 38, no. 10, pp. 1760–1770, Oct. 1990.

Lingfei Meng (S’10–M’11) received the B.S. degree in electrical engineering from Tianjin University, Tianjin, China, in 2007 and the Ph.D. degree in imaging science from the Rochester Institute of Technology, Rochester, NY, USA, in 2012.

From 2008 to 2011, he was a Research Assistant with the Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology. He is currently a Research Scientist with Ricoh Innovations Corporation, Menlo Park, CA, USA, where he is creating technology for computational imaging systems. His research interests include computational imaging, imaging system modeling, digital image processing, and computer vision.

Dr. Meng serves on the Technical Program Committee of The Optical Society of America (OSA) Imaging Systems and Applications Topical Meeting and the IS&T/SPIE Electronic Imaging Conference. He has served as a Reviewer for several international journals, including the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, Applied Optics, and the Journal of the Optical Society of America A. He was one of the winners of the 2011 IEEE Geoscience and Remote Sensing Data Fusion Contest.

John P. Kerekes (S’81–M’89–SM’00) received the B.S., M.S., and Ph.D. degrees from Purdue University, West Lafayette, IN, USA, in 1983, 1986, and 1989, respectively, all in electrical engineering.

From 1983 to 1984, he was a Technical Staff Member with the Space and Communications Group, Hughes Aircraft Company, El Segundo, CA, USA, where he performed circuit design for communications satellites. From 1986 to 1989, he was a Graduate Research Assistant with both the School of Electrical Engineering and the Laboratory for Applications of Remote Sensing, Purdue University. From 1989 to 2004, he was a Technical Staff Member with the Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, MA, USA. Since 2004, he has been with the Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA, first as an Associate Professor and, currently, as a Professor. His research interests include the modeling and analysis of remote sensing system performance in pattern recognition and geophysical parameter retrieval applications.

Dr. Kerekes is a member of Tau Beta Pi, Eta Kappa Nu, the American Geophysical Union, and the American Society for Photogrammetry and Remote Sensing. He is a Senior Member of The Optical Society of America and SPIE. He served as the Chair of the Boston Section Chapter of the IEEE Geoscience and Remote Sensing Society (GRSS) from 1995 to 2004 and as the founding Chair of the Western New York Chapter of GRSS from 2007 to 2010. He was a General Cochair of the 2008 IEEE International Geoscience and Remote Sensing Symposium held in Boston, MA, USA. He is currently a member of the GRSS Administrative Committee and is serving as the Vice President of Technical Activities of the GRSS.