

ASPRS 2008 Annual Conference Portland, Oregon ♦ April 28 - May 2, 2008

UNBIASED HISTOGRAM MATCHING QUALITY MEASURE FOR OPTIMAL RADIOMETRIC NORMALIZATION

Zhengwei Yang, Rick Mueller

United States Department of Agriculture National Agricultural Statistics Service Research and Development Division 3251 Old Lee Highway, Room 305

Fairfax, VA 22030 [email protected]

[email protected]

ABSTRACT

Radiometric normalization is critical for multi-spectral image change detection. In this paper, a histogram matching method is proposed to perform relative radiometric normalization among heterogeneously sensed images. To quantify the histogram matching quality, which is reference image and band dependent, an image differencing based quantitative measure, such as the Euclidean or Manhattan distance, was previously proposed. However, when an image difference based measure is used to select the reference image and band giving the best histogram match, it is always biased toward the reference image whose histogram is compacted at the lower bits. To overcome this problem, image preprocessing such as histogram equalization, mean-standard deviation normalization, or image bit clipping can be used to spread the histograms over the full dynamic range and thus eliminate the bias effect. However, this significantly increases the computational burden. In this paper, a new unbiased symmetric image pixel ratio is proposed as a criterion for measuring histogram matching quality. This measure consistently picks, for every pixel pair of the reference image and the histogram matched subject image, the one of the two relative ratios that is less than or equal to 1; the average of these ratios over the image reflects the goodness of the match. The proposed new measure is experimentally compared with the Manhattan distance measure with and without image stretching. In addition, experimental results using image preprocessing are also presented. The results indicate that the new measure is unbiased and performs well for histogram matching optimization.

INTRODUCTION

Change detection in remote sensing is of great importance to the USDA National Agricultural Statistics Service (NASS) for inventory monitoring, production statistics, and policy making. It highlights and reflects changes in targets of interest such as land cover, land use, soil condition, wetness, or biomass. Many change detection methods have been developed (Singh, 1989; Coppin and Bauer, 1996; Yuan et al., 1998; Radke et al., 2005; Yang and Mueller, 2007b). Many of them are sensitive to radiometric distortions and differences. Radiometric response differences between different sensors have a particularly large impact on change detection results. The radiometric distortions caused by sensor differences and data acquisition condition differences may cause non-change pixels to be identified as change pixels or vice versa. For multi-temporal images acquired from the same sensor type, many changes can be detected without applying radiometric calibration or correction. But it can be more difficult to quantify and interpret changes in multi-temporal images from different sensors under different atmospheric conditions without radiometric calibration and correction. To overcome radiometric distortions or differences, absolute radiometric calibration or correction should be performed if enough information about the sensors and data acquisition conditions is available. However, radiometric calibration and correction are costly and troublesome, and in many cases this information is not available. Moreover, for images acquired with different types of sensors, the radiometric characteristics are almost always different even if the atmospheric and sensor conditions are the same. For those images, absolute radiometric calibration and correction will not solve the radiometric difference problem. Therefore, relative radiometric normalization should be used to solve the radiometric difference problem.

Relative image normalization uses one image as a reference and adjusts the radiometric properties of subject images to match those of the reference (Hall et al., 1991; Yuan and Elvidge, 1996). Relative radiometric normalization methods can basically be classified into two categories: linear normalization methods and non-linear normalization methods. Several linear normalization methods have been developed (Yuan et al., 1998), including simple regression (Jenson, 1983), the pseudo-invariant feature method (Schott et al., 1988; Du et al., 2002), the dark-bright method (Hall et al., 1991), the minimum-maximum method, the mean-standard deviation method, no-change set regression (Yuan and Elvidge, 1993), and multivariate alteration detection (Canty et al., 2004). The various linear normalization methods differ in how the linear model coefficients are estimated. For some of the linear model based methods, one of the biggest challenges is how to select the ideal target areas for estimating the model parameters (Schott et al., 1988; Eckhardt et al., 1990). The linear normalization methods work effectively for sensors of similar radiometric characteristics, but nonlinear methods are more suitable for heterogeneous sensors, which have intrinsic radiometric characteristic differences and are not linearly related.

The most widely used nonlinear method for radiometric normalization is histogram matching. Hame (Coppin and Bauer, 1996) suggested using histogram matching before differencing TM data to reduce the impact of radiometric differences. Yang and Lo (2000) empirically compared the performance of the histogram matching (HM) method with other linear methods on Landsat MSS data and found that it was superior to the pseudo-invariant feature set (PIF) method and the dark-bright set (DB) method but inferior to the simple image regression (SR) method. For heterogeneous sensors, Hong and Zhang (2005) conducted research on normalizing images acquired from the IKONOS and QuickBird sensors using various existing linear normalization models and histogram matching methods. They reported that the linear normalization category did not perform as well as the nonlinear histogram matching method for images acquired from comparable but different types of sensors. They did not, however, indicate how strongly the nonlinearity of the relationship between the radiometric responses of the IKONOS and QuickBird sensors affects the applicability of the relative normalization techniques. The histogram matching method eliminates the subjectivity problem of selecting ideal targets for linear model parameter estimation and reduces the dependence on accurate spatial registration between multi-date images.

Yang and Mueller (2007a) investigated the applicability of the histogram matching method to multispectral heterogeneous image radiometric normalization for change detection. They found that the histogram matching results were reference image dependent and spectral band dependent. Therefore, they further proposed to optimize histogram matching when performing radiometric normalization between images. The Manhattan distance was used as a similarity measure in optimizing the performance of the histogram matching normalization. However, it was found that the Manhattan distance was biased for normalization optimization. In this paper, a new unbiased similarity measure is proposed as a criterion for measuring histogram matching performance. The remainder of this paper is organized as follows. First, the background of the histogram matching radiometric normalization method and the concept of histogram matching optimization are presented. Then, the new similarity measure for histogram matching is described. The performance of the new similarity measure and the Manhattan distance measure are compared and evaluated with respect to histogram matching. Finally, the experimental results and a discussion of the results are presented.

HISTOGRAM MATCHING RADIOMETRIC NORMALIZATION

Histogram matching (HM) is one of the histogram transformation methods, which include histogram equalization, histogram modification, and histogram matching (or specification). It transforms the subject image histogram distribution into the specified histogram of a given reference image so that the radiometric appearance of the transformed image and the reference image become similar. The underlying assumptions for using the histogram matching method are that a) scene changes over time occur only in very small portions of the images; b) histograms should be roughly the same for images taken at different times of the same scene; and c) the relationship among the radiometric response characteristics of the heterogeneous sensors is nonlinear. Therefore, if two histograms are well matched against each other, their cumulative histograms should be the same. Let's look at the algorithm for histogram matching.

For any given image, its gray level u (u ≥ 0) is a discrete random variable with histogram p_u(u). Suppose it is to be transformed to the discrete variable v ≥ 0 with a specified histogram p_v(v). Let u be the gray level variable of the subject image and v be the corresponding reference image variable. They take the values x_i and y_i, for i = 0, …, L-1, with histograms p_u(x_i) and p_v(y_i) respectively. Then the cumulative distributions w_u(n) and w_v(k) associated with the two histograms p_u(u) and p_v(v) are as follows:

w_u(n) = Σ_{i=0}^{n} p_u(x_i),  n = 0, …, L-1  (1)


w_v(k) = Σ_{i=0}^{k} p_v(y_i),  k = 0, …, L-1  (2)

The histogram matching matches these two cumulative distributions with respect to the two indices n and k. Specifically, the task of matching is to find, for a given u = x_n, the minimal k such that w_u(n) ≤ w_v(k). Then v = y_k is the reference value matched to the subject value u = x_n. After exhausting all possible u values, a mapping relationship between the variables u and v is established and a histogram transformation from variable u (x_i) to variable v (y_i) is defined.
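The matching rule above can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation; the function name and the fixed 8-bit gray range are assumptions:

```python
import numpy as np

def match_histogram(subject, reference, levels=256):
    """Map each subject gray level u = x_n to the smallest reference
    level v = y_k whose cumulative histogram satisfies w_u(n) <= w_v(k)."""
    # Normalized histograms p_u, p_v over gray levels 0..levels-1.
    p_u, _ = np.histogram(subject, bins=levels, range=(0, levels), density=True)
    p_v, _ = np.histogram(reference, bins=levels, range=(0, levels), density=True)
    # Cumulative distributions w_u(n) and w_v(k) of equations (1) and (2).
    w_u = np.cumsum(p_u)
    w_v = np.cumsum(p_v)
    # For every n, the minimal k with w_v(k) >= w_u(n), stored as a lookup table.
    lut = np.searchsorted(w_v, w_u).clip(0, levels - 1).astype(np.uint8)
    return lut[subject]
```

For instance, matching a constant image of gray value 10 against a constant reference of gray value 200 maps every pixel to 200, since the subject's cumulative histogram first reaches 1 at level 10 and the reference's first reaches 1 at level 200.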

It should be noted that the mapping relationship from histogram matching is not symmetric. Some of the gray values may have a many-to-one relation. Therefore, the direct inverse transformation of a histogram matching transformation cannot be uniquely defined. Furthermore, the histogram matching algorithm only approximately matches one histogram to another; it cannot match the histograms exactly because of the discrete nature of the histogram. Therefore, the performance of the histogram matching operation for two given images will depend on the selection of the reference image. In the case of change detection, the reference image is relative: either of the two images compared could be selected as the reference image. Therefore, for any two given images taken on different dates/times, we have to answer: "Which image is better as the reference, and under what criterion?"

OPTIMAL HISTOGRAM MATCHING

As indicated above, there are two possible combinations for a given image pair. Different reference image and subject image selections will result in different normalization performance. In addition, different image bands have different radiometric responses even for the same scene, and thus their histograms are different. Those differences also result in different normalization results under histogram matching. In general, a minimum matching error means fewer uninteresting targets are detected as changes, since the radiometric differences of the uninteresting targets are supposed to be smaller than those of the real changes. Therefore, histogram matching will have relatively less impact on those real changes. Unless a specific band has been identified as optimal for a specific land cover type, it is natural to find the best reference image and the best band such that the image difference between the reference image band and the histogram matched image band is minimal. In general it is assumed that the change targets of interest in the images to be normalized by histogram matching cover only a small portion of the overall image. This assumption assures that the histogram matching is valid for the majority of the image and keeps the distortion minimal. Under this assumption, the best reference image and spectral band found yield an optimal histogram match. Assume that there are L different images acquired on different dates and all have K bands. Each of those images can be selected as the reference image. Let I_r be the reference image and I_rh be its histogram matched subject image. Then, finding the best performing histogram match becomes

min_{r,k} { s_rk(I_r(x, y, k), I_rh(x, y, k)) },  ∀ k∈{1, 2, ..., K}, r∈{1, 2, ..., L}  (3)

where I_r(x, y, k) is the pixel value of layer k of the rth reference image at location (x, y) and I_rh(x, y, k) is the pixel value of layer k of its histogram matched subject image at location (x, y). The function s_rk(I_r, I_rh) is the similarity measure, which measures the overall similarity of all pixels of a given image band. It can take the form of any similarity measure, such as the Euclidean distance, the Manhattan distance, etc.

The reference image and band that yield the minimum value in formula (3) are the optimal solution. The proposed formula (3) gives the best normalization solution using the histogram matching method. It significantly improves the normalization and change detection results compared with arbitrarily picking the reference image and spectral band, as shown in the experimental results (Yang and Mueller, 2007a).
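The exhaustive search of formula (3) can be sketched for an image pair as follows. This is an illustrative sketch, not the authors' code; the helper names (`match`, `score`) are assumptions, and for two images the subject is simply the image not chosen as reference:

```python
import itertools
import numpy as np

def manhattan(a, b):
    # Band similarity in the spirit of formula (5): sum of absolute pixel
    # differences (smaller means a better match).
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def best_reference_and_band(images, match, score=manhattan):
    """Formula (3) for a pair of K-band images: try every choice of
    reference image r and band k, histogram-match the other image's band k
    to the reference band with `match`, and keep the (r, k) pair with the
    smallest score."""
    best = None
    for r, k in itertools.product(range(2), range(images[0].shape[2])):
        ref_band = images[r][:, :, k]
        subj_band = images[1 - r][:, :, k]   # the other image is the subject
        s = score(ref_band, match(subj_band, ref_band))
        if best is None or s < best[0]:
            best = (s, r, k)
    return best  # (score, reference image index, band index)
```

With an identity `match` and two images whose second bands are identical, the search returns band index 1 with a score of 0, as expected.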

SIMILARITY MEASURE FOR HISTOGRAM MATCHED IMAGES

Measurement of Image Similarity (Difference)

To find the optimal histogram matching, we must have a criterion for judging the performance of the histogram matching normalization. It is straightforward to measure the normalization performance by directly measuring the difference between the histogram matched subject image and the reference image. Yuan et al. (1998) and Yang et al. (2000) used the Euclidean distance as defined by (4), i.e. the root-mean-square error (RMSE), to measure this difference; that is, they used the RMSE as a criterion for comparing the performance of the different normalization methods. However, the RMSE measure emphasizes the bigger pixel errors and suppresses the smaller ones. For change detection, the large errors caused by the change areas should not be overweighted. Therefore, Yang and Mueller (2007a) used the Manhattan distance (absolute difference) given by (5) to measure the performance of the histogram matching. This measure is band specific, as the histogram matching operation is band specific. It is less computationally expensive than the Euclidean distance.

s_rk = Σ_{i=1}^{M} Σ_{j=1}^{N} [I_r(i, j, k) − I_rh(i, j, k)]²,  ∀ k∈{1, 2, ..., K}, r∈{1, 2, ..., L}  (4)

s_rk = Σ_{i=1}^{M} Σ_{j=1}^{N} |I_r(i, j, k) − I_rh(i, j, k)|,  ∀ k∈{1, 2, ..., K}, r∈{1, 2, ..., L}  (5)
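A small numeric sketch (assumed helper names, not from the paper) illustrates how (4) overweights large errors relative to (5): one pixel off by 10 gray levels and ten pixels off by 1 have the same Manhattan distance but very different squared-error sums:

```python
import numpy as np

def euclidean_sq(a, b):
    # Formula (4): sum of squared pixel differences.
    d = a.astype(np.int64) - b.astype(np.int64)
    return int((d * d).sum())

def manhattan(a, b):
    # Formula (5): sum of absolute pixel differences.
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

ref = np.zeros((1, 10), dtype=np.uint8)
one_big = ref.copy(); one_big[0, 0] = 10        # a single error of 10 gray levels
many_small = np.ones((1, 10), dtype=np.uint8)   # ten errors of 1 gray level

# Same Manhattan distance, but the squared measure magnifies the single
# large error by an order of magnitude.
print(manhattan(ref, one_big), manhattan(ref, many_small))        # 10 10
print(euclidean_sq(ref, one_big), euclidean_sq(ref, many_small))  # 100 10
```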

However, Yang and Mueller (2007a) indicated that the Manhattan distance measure is biased toward the histogram matching reference image whose histogram concentrates at the lower bits, because most of its image pixels have lower gray values than those of images with histograms more evenly distributed across the whole dynamic range. Obviously, a given difference between two pixels at low bit values is much more significant than the same difference between two pixels at high bit values. The Euclidean distance measure has the same problem. This indicates that the distance based similarity measures are sensitive to the offset and gain (scale) differences between the two images or two different bands from different sensors. To overcome this problem, Yang and Mueller (2007a) indicated that both reference and subject images could be histogram equalized, dynamic range clipped, or even mean-standard deviation normalized before performing the histogram matching. This expands both images to the same dynamic range, or reshapes the histograms to similar distributions, and thus reduces the bias effect of the Manhattan distance measure. Although pre-processing such as histogram equalization or dynamic range clipping reduces the bias in the performance measurement, it significantly increases the computational burden. To overcome this problem without performing image pre-processing such as clipping and stretching, it is necessary to find a new unbiased similarity measure robust to offset and gain.

Image Ratioing

It is known that a ratio of any two variables in a data set eliminates a common gain factor. If we can find a similarity measure that takes the form of a ratio, it can potentially be a good candidate for an unbiased similarity measure. Following this lead, we found the image ratioing method, one of the change detection methods (Yuan et al., 1998; Howarth and Wickware, 1981), worth studying. Image ratioing takes the ratio of corresponding pixels from two co-registered corresponding image bands. The rationale behind this measure is that all pixels without spectral changes yield the same ratio value of 1, while pixels with spectral differences yield ratio values less than or greater than 1. This measure is simple, yet it is not sensitive to the offset and gain differences between the two images or two different bands from two sensors after histogram matching normalization, since the ratioing cancels the scale factor. However, the relative image ratio does not reflect the absolute gray level change accordingly. An increase and a decrease by the same number of gray levels yield significantly different ratio values. For example, for a given reference image pixel with a gray value of 100, an increase or decrease of 30 in the gray value of the corresponding image pixel yields image ratios of (100+30)/100 = 1.3 and (100-30)/100 = 0.7. If these two image ratios are averaged, the end result is 1, which falsely suggests that the two images do not differ at all. Obviously, this direct image pixel ratio cannot be summed up to measure the overall image difference. In addition, this image ratio is an asymmetric measure. Let I1 and I2 be two image pixels whose ratio is to be computed. Then the image ratio measure can be defined as

r(I1, I2) = I1/I2.  (6)

Obviously, r(I2, I1) = I2/I1 ≠ r(I1, I2) for any I1 ≠ I2. This asymmetry means that the ratio value differs greatly when the numerator and the denominator of the image ratio are switched, especially when the ratio value is far from 1. For example, if I1 = 50 and I2 = 100, then I1/I2 = 0.5 but I2/I1 = 2.0. Therefore, it is important for all bands from one image to be in the same position (either all numerators or all denominators) so that the image ratios are consistent for inter-band comparison. To solve this problem, a new symmetric image ratio is proposed in this paper.
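The two failure modes of the plain ratio (averaging cancellation and asymmetry) can be checked directly with the paper's own numbers:

```python
ref = 100.0
brighter, darker = 130.0, 70.0   # +30 and -30 gray levels from the reference

# Equal-magnitude changes give unequal ratios ...
assert brighter / ref == 1.3
assert darker / ref == 0.7
# ... and averaging them wrongly reports "no change" (a mean ratio of 1).
assert abs((brighter / ref + darker / ref) / 2 - 1.0) < 1e-12

# The plain ratio (6) is also asymmetric: swapping numerator and
# denominator changes the value drastically.
assert 50 / 100 == 0.5
assert 100 / 50 == 2.0
```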


Unbiased Symmetric Image Ratio (SIR)

To make the regular image ratio consistent in value, the following definition is proposed:

r_rk(i, j) =
    I_r(i, j, k) / I_rh(i, j, k),   if I_r(i, j, k) ≤ I_rh(i, j, k)
    I_rh(i, j, k) / I_r(i, j, k),   if I_r(i, j, k) > I_rh(i, j, k)
    0,                              if I_r(i, j, k) = 0 ∨ I_rh(i, j, k) = 0

∀ k∈{1, 2, ..., K}, r∈{1, 2, ..., L}  (7)

Though it is a ratio of a pair of pixels, this similarity (error) measure is always less than or equal to 1 for every pair of pixels. It inverts any pixel ratio larger than 1 so that the ratio is kept consistently at or below 1, making the measure consistent with the absolute difference with respect to the similarity value of 1. This definition provides an absolute percentile difference measure with respect to 1, not a relative percentile increase or decrease. It truly provides a measure of the difference between a pair of pixels from different images. If a pair of pixel values I_r(i, j, k) and I_rh(i, j, k) are the same, the consistent pixel ratio similarity measure reaches its maximum value of 1. If the pixel values I_r(i, j, k) and I_rh(i, j, k) are different, the consistent ratio is less than 1. Its value is defined consistently within [0, 1], with 0 representing no similarity and 1 representing maximum similarity. Since every individual pixel ratio is always within the range 0 to 1, the sum of all individual pixel ratios reflects the overall image similarity. The overall similarity s_rk of the compared image pair can thus be measured by

s_rk = Σ_{i=1}^{M} Σ_{j=1}^{N} r_rk(i, j),  ∀ k∈{1, 2, ..., K}, r∈{1, 2, ..., L}  (8)

This image similarity measure is symmetric. It is ratio based, not distance based; therefore it is scale invariant, and it is symmetric in contrast to the regular ratio. Moreover, it is a relative measure, which does not give the physical radiometric difference as the Manhattan distance does.
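Formulas (7) and (8) can be sketched as follows (an illustrative sketch; the function name is an assumption). Taking min/max of each pixel pair implements the two ratio branches of (7) in a single expression:

```python
import numpy as np

def symmetric_image_ratio(ref, sub):
    """Band similarity s_rk of formula (8): the sum over all pixels of the
    symmetric image ratio of formula (7), which is min/max of each pixel
    pair, so every per-pixel ratio lies in [0, 1] (0 if either pixel is 0)."""
    a = ref.astype(np.float64)
    b = sub.astype(np.float64)
    small, big = np.minimum(a, b), np.maximum(a, b)
    # Divide only where the larger value is nonzero; a zero pixel in either
    # image contributes 0, as in the third branch of formula (7).
    r = np.divide(small, big, out=np.zeros_like(big), where=big > 0)
    return float(r.sum())
```

Swapping the two arguments leaves the score unchanged, and identical nonzero images score 1 per pixel, i.e. the maximum possible similarity of M×N.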

EXPERIMENTAL DATA AND PREPROCESSING

Experimental Data

To compare the new unbiased symmetric image ratio similarity measure with the Manhattan distance measure with and without image stretching or clipping, and to evaluate the effectiveness of the new measure for assessing the performance of the histogram matching normalization, a pair of typical Florida citrus images was selected for the experiment. The Florida citrus image data was not systematically acquired. It came from various sources and was acquired by different agencies under different acquisition contracts with different specifications and requirements, using different sensors. The available images for this change detection study were acquired in 2004 and 1999 from different sensors, as shown in Fig. 1a) and c); one was taken with a digital camera while the other was taken with a film camera and scanned with a film scanner. From Fig. 1a) and 1c), it is observed that these two original images have dramatic radiometric differences, though the ground coverage changes are very limited. The images had different spatial resolutions: one is 1 meter, the other is 2 meters. The 2-meter 1999 images were re-sampled to 1-meter resolution so that they were consistent with the 2004 images. Their dynamic ranges were also different: one is 8-bit while the other is 16-bit. The spectral coverage of the images was different, though the number of bands was the same: one image had R/G/IR bands while the other had R/G/B bands. The images acquired in 1999 and in 2004 were both geo-rectified, pre-registered, and geo-referenced. But the map projection information of the image set acquired in 1999 was missing and was corrected with ESRI's ArcGIS software according to the 2004 image. In addition, the exact image acquisition times and weather conditions were unknown for both images; the atmospheric conditions and sun angles were not corrected or compensated for either. The targets of interest to be watched for changes are the citrus farms on the right side of the images.


Figure 1. Original images and histogram matching normalized images. (a) Unclipped 2004 8-bit image; (b) clipped 2004 8-bit image; (c) 1999 8-bit image; (d) histogram matched 1999 image with clipped 2004 image as reference; (e) histogram matched 2004 image with 1999 image as reference; (f) histogram matched 1999 image with unclipped 2004 image as reference; (g) histogram matched 1999 image with unclipped 2004 image as reference.

SIMILARITY MEASURE COMPARISON AND DISCUSSION

Different Measures for Histogram Matching Normalization

To correct the dramatic radiometric differences between the two original images shown in Figure 1a) and 1c), histogram matching was performed to normalize each subject image to its reference image. The histogram matched 1999 and 2004 images are shown in Fig. 1d) through 1g). Their radiometric appearances are very close to those of their corresponding reference images. To further examine the image differences, the histograms of all 1999 and 2004 images, both as reference images and as normalized subject images, were computed and are illustrated in Fig. 2(a) through Fig. 2(g). From these histograms, it is observed that the silhouettes of the histograms of the reference image bands are roughly the same as the silhouettes of the histograms of their corresponding normalized subject image bands, except for a few spikes, which were caused by the approximate nature of the histogram matching algorithm. The normalization process helps to reduce the bias effect associated with the new symmetric image ratio measure. As shown in Fig. 2a) and Fig. 2c), the histograms of the clipped 2004 image bands are similar to those of the unclipped 2004 image. The only difference is that the histograms of the unclipped 2004 image are all concentrated in the low-bit area, while the clipped 2004 image has more spread histograms that are closer to the histogram distributions of the 1999 image.

To quantitatively measure the effectiveness of the histogram matching normalization, the similarity between each reference image and its corresponding histogram matched subject image was computed according to a given similarity measure, either the proposed symmetric image ratio or the Manhattan distance, band by band and with respect to the different reference image selections. With different image treatments, the following two experimental scenarios were designed.

(a) In the first experimental scenario, the 16-bit image was uniformly rescaled to match the other 8-bit image without any stretching or clipping, as shown in Figure 1(a).


(b) In the second experimental scenario, the 16-bit image was first clipped to eliminate the high-bit pixels carrying very little signal, so that the resulting image histogram would be well spread across the dynamic range. The clipped image was then uniformly quantized to 8 bits. The resulting image is shown in Figure 1(b).

Both the symmetric image ratio and Manhattan distance similarity measures were applied to these two experimental data scenarios. The similarity measurement results for the different measures are listed in Table 1 through Table 4. In all these tables, rows 2 to 4 correspond to three experimental cases: a) no normalization performed; b) normalization by histogram matching with the 1999 image as the reference; and c) normalization by histogram matching with the 2004 image as the reference, respectively. A smaller Manhattan distance value means a higher similarity, while a larger symmetric image ratio value indicates a higher similarity. The bold figures highlight the band of the highest similarity among all three bands for a given experimental case, while the bold italic figures indicate the highest similarity among all experimental cases and all bands for a given data scenario and a given similarity measure.
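The two similarity measures can be sketched as follows. The per-pixel min/max form of the symmetric image ratio is inferred from the paper's description (each pixel ratio is consistently less than or equal to 1, the ratios are summable, and the sum represents the overall similarity); the handling of pixel pairs that are both zero is an assumption made here.

```python
import numpy as np

def manhattan_distance(ref, sub):
    """Sum of absolute pixel differences; smaller means more similar."""
    return int(np.abs(ref.astype(np.int64) - sub.astype(np.int64)).sum())

def symmetric_image_ratio(ref, sub):
    """Sum of per-pixel min/max ratios, each in [0, 1]; larger means more
    similar.  A pair of zero pixels is counted as a perfect match (ratio 1),
    which is an assumption not stated in the paper.
    """
    a = ref.astype(np.float64)
    b = sub.astype(np.float64)
    lo = np.minimum(a, b)
    hi = np.maximum(a, b)
    ratio = np.where(hi > 0, lo / np.maximum(hi, 1e-12), 1.0)
    return float(ratio.sum())
```

Note the opposite polarities: the Manhattan distance is minimized at a perfect match, while the symmetric image ratio is maximized (equal to the pixel count).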

From the last rows of both Table 1 and Table 2, the Manhattan distance values yielded when using the clipped 2004 image as the reference are all significantly (about three times) larger than those obtained using the unclipped 2004 image as the reference. This large difference clearly implies that the best-similarity search using the Manhattan distance measure is biased toward the reference image with smaller pixel values (the unclipped 2004 image). This is evidenced by the histograms shown in Figure 2(a) and (c): for the unclipped image, the pixels are concentrated in the low-bit area, as shown in Figure 2(c), i.e., the pixel values are lower. Moreover, for the unclipped scenario, comparing row 3 with row 4 of Table 1 and their corresponding histograms in Figure 2(b) and (c), the reference image with its histogram concentrated at the lower bits yields smaller distance values than a reference image with a more evenly distributed histogram does. However, this observation does not hold for the clipped scenario, as indicated in Table 2, because the lower-bit concentrated original 16-bit 2004 image was clipped first, so the histograms of its bands are much more spread across the dynamic range and are closer to the distributions of the 1999 image histograms. The Manhattan distance between these two images is thus determined more by their histogram distributions. As observed from Table 1 and Table 2, the highest similarity under the Manhattan distance measure is reference image dependent. However, it is also observed that the band and reference image of the best similarity for the same experimental case are the same for both the unclipped and clipped 2004 images. The global optimal similarity value measured by the Manhattan distance is 223,107,908, which corresponds to band 3 of the unclipped 2004 image used as the reference image, as highlighted in bold italic in Table 1.
Comparing Figure 2(b) with 2(c), it is found that when the unclipped 2004 image is used as the reference, matching the more spread 1999 image histogram to the compact original 2004 image histogram algorithmically yields fewer matching errors than matching in the opposite direction. This result suggests that the image with the compact histogram should be selected as the reference, and it also implies that image clipping should not be performed on the original 2004 image when the Manhattan distance is used to measure similarity. By comparing Table 3 with Table 4, it is concluded that the rank of the similarity value of each image band among all experimental cases is consistent for the new symmetric image ratio measure, whether the scenario is clipped or unclipped. Similarly, the highest similarity under the new symmetric image ratio measure is also reference image dependent. It is also observed that the band and reference image of the highest similarity under the new symmetric image ratio measure for the same experimental case are the same for both the unclipped and clipped scenarios. As highlighted in bold italic, the band and reference image of the global optimal similarity are band 3 of the clipped 2004 image. Though this similarity value (35,622,059) is slightly better than that of the unclipped image, the difference is not significant. Moreover, the corresponding similarity values in the last two rows of both Table 3 and Table 4 are very close. These observations indicate that the new symmetric image ratio similarity measure is unbiased and that image clipping is not needed to alleviate the bias effect. For the non-normalized case in both tables, though the ranks of the corresponding similarity values are consistent, the similarity values themselves are significantly different. This means that the histogram compactness difference still affects the similarity measurement when the newly proposed symmetric image ratio measure is used without image normalization.
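The bias contrast discussed above can be illustrated with a toy example: two pixel pairs with the same 20% relative discrepancy, one at low intensities and one at high intensities. The Manhattan distance scales with the absolute values, while the min/max ratio form of the symmetric measure does not. The pixel values below are illustrative, not taken from the paper's data.

```python
import numpy as np

# Same 20% relative discrepancy, at low and high intensity ranges (toy values)
low_ref,  low_sub  = np.array([10, 20]),   np.array([8, 16])
high_ref, high_sub = np.array([100, 200]), np.array([80, 160])

manhattan_low  = int(np.abs(low_ref - low_sub).sum())    # 2 + 4  = 6
manhattan_high = int(np.abs(high_ref - high_sub).sum())  # 20 + 40 = 60

sir_low  = float((np.minimum(low_ref, low_sub)
                  / np.maximum(low_ref, low_sub)).sum())
sir_high = float((np.minimum(high_ref, high_sub)
                  / np.maximum(high_ref, high_sub)).sum())
# Both ratio sums equal 0.8 + 0.8 = 1.6: the symmetric ratio is scale
# invariant, while the Manhattan distance favors the low-valued pair.
```

This is the mechanism behind the roughly threefold gap between the clipped and unclipped reference rows in Tables 1 and 2, and behind the near-identical last rows of Tables 3 and 4.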

As observed from all the tables, the similarity between a reference image and its histogram matched subject image is higher than the similarity between the reference image and its corresponding subject image without normalization, no matter which similarity measure is used. It is also observed that not only does the reference image selection affect the histogram matching performance, but different image bands also result in different performance, regardless of the similarity measure used to quantify the similarity.


a1) 2004 band-1 a2) 2004 band-2 a3) 2004 band-3

b1) 1999 band-1 b2) 1999 band-2 b3) 1999 band-3

c1) 2004 band-1 c2) 2004 band-2 c3) 2004 band-3

Figure 2. Histograms of the original and clipped images: a) clipped 2004 image; b) original 1999 image; c) original (unclipped) 2004 image.

a1) HM 1999 band-1 a2) HM 1999 band-2 a3) HM 1999 band-3

b1) HM clipped 2004 band-1 b2) HM clipped 2004 band-2 b3) HM clipped 2004 band-3

Figure 3. Histograms of histogram matched images. (a) 1999 image; (b) clipped 2004 image.


a1) HM 1999 band-1 a2) HM 1999 band-2 a3) HM 1999 band-3

b1) HM 2004 band-1 b2) HM 2004 band-2 b3) HM 2004 band-3

Figure 4. Histograms of histogram matched images. (a) 1999 image; (b) unclipped 2004 8-bit image.

Table 1. Manhattan distance measuring results for images without image clipping.

Unclipped 2004 image Band 1 Band 2 Band 3

No Normalization               5,165,526,637   3,334,340,163   4,489,143,486
HMN, 1999 Image as Reference   1,333,636,088   1,164,335,668   1,238,088,703
HMN, 2004 Image as Reference     440,286,597     318,965,703     223,107,908

Table 2. Manhattan distance measuring results for images with image clipping.

High-bit clipped 2004 image Band 1 Band 2 Band 3

No Normalization               2,460,670,698   1,242,350,692   2,344,830,977
HMN, 1999 Image as Reference   1,353,792,013   1,180,279,785   1,255,995,069
HMN, 2004 Image as Reference   1,584,393,561   1,159,968,546     825,605,177

Table 3. Symmetric image ratio measuring results for images without image clipping.

Unclipped 2004 image Band 1 Band 2 Band 3

No Normalization                7,804,863   11,139,156    7,493,628
HMN, 1999 Image as Reference   35,314,399   33,674,501   34,378,529
HMN, 2004 Image as Reference   30,706,733   33,069,553   35,408,049

Table 4. Symmetric image ratio measuring results for images with image clipping.

High-bit clipped 2004 image Band 1 Band 2 Band 3

No Normalization               27,544,433   33,053,400   25,968,637
HMN, 1999 Image as Reference   35,325,968   33,686,762   34,374,198
HMN, 2004 Image as Reference   31,100,131   33,389,944   35,622,059

By comparing the results yielded from the Manhattan distance measure with those yielded from the new symmetric image ratio, it is found that the best band and the best reference image for the two similarity measures are not the same. This is expected because of the bias difference between the two similarity measures.


CHANGE DETECTION RESULTS AND DISCUSSION

Change detection results, as shown in Figure 5, were generated by thresholding the image similarity maps yielded from either the Manhattan distance measure or the new symmetric image ratio measure. For the Manhattan distance based similarity maps, the change detection maps were generated by highlighting the changed pixels in either red or green, with a given threshold of 20% (increase or decrease). For the symmetric image ratio based similarity maps, however, the change detection maps highlight the changed pixels in red only, with a threshold determined by the similarity map. The black areas in the change maps are non-change areas. By comparing the original reference and subject images, it is found that the percentage of changed area in the scene is very small. This implies that the major area of the scene should be black in the detected change maps, so the best result should have most of the area colored black. Figure 5 demonstrates the highlighted change maps of all the best bands, in terms of either the Manhattan distance or the symmetric image ratio similarity measure, for all three experimental cases.

a1) Band2 (Direct difference) a2) Band2 (1999 reference) a3) Band3 (2004 reference)

b1) Band2 (Direct difference) b2) Band2 (1999 reference) b3) Band3 (2004 reference)

c1) Band2 (Direct SIR) c2) Band1 (1999 reference) c3) Band3 (2004 reference)

d1) Band2 (Direct SIR) d2) Band1 (1999 reference) d3) Band3 (2004 reference)

Figure 5. Highlighted change maps, which include all the best bands for all three experimental cases. (a) Changes detected with the unclipped 2004 image using the Manhattan distance; (b) changes detected with the clipped 2004 image using the Manhattan distance; (c) changes detected with the unclipped 2004 image using the new symmetric image ratio measure; (d) changes detected with the clipped 2004 image using the new symmetric image ratio measure.

Figures 5(a) and 5(b) show the change maps produced using the Manhattan distance. As expected, direct image differencing without normalization yields the two worst results, shown in Figures 5(a1) and 5(b1), with the unclipped result being the worse of the two. As shown in Figures 5(a2) and 5(b2), the change detection results are very close for the cases normalized with the 1999 image as the reference, and they are significantly improved compared with those without normalization. From Figures 5(a3) and 5(b3), it is observed that the detected change patterns for band 3 are basically the same whether the clipped or the unclipped 2004 image is used as the reference. However, it should be noted that the threshold for generating Figure 5(a3) is ±5%, not ±20% as for the others. This means that normalizing the 1999 image to the unclipped 2004 image by histogram matching yields a much smaller matching error than normalizing it to the clipped 2004 image, which is consistent with the results in Table 1 and Table 2. From Figures 5(a) and 5(b), band 3 of the normalized 1999 image, using the 2004 image as the reference, produces the best change detection results, which not only have the minimum Manhattan distance but also identify the target changes of interest (the citrus farm) and best suppress the targets of no interest. The change maps in Figures 5(c) and 5(d) were generated with the new symmetric image ratio similarity measure. As shown in Figure 5(c1), without normalization, the symmetric image ratio of the unclipped 2004 image and the 1999 image gives the worst change detection results, though band 2 gives the best result among all three bands. Similarly, for the clipped 2004 image scenario shown in Figure 5(d), the non-normalized case also gives the worst result, though it is much better than the result shown in Figure 5(c1).
By comparing the change maps in Figures 5(c2) and 5(d2) with Figures 5(c3) and 5(d3), respectively, it is found that the changes detected after histogram matching are almost the same whether the 2004 image is clipped or unclipped. This means that image clipping does not affect the similarity measurement with the symmetric image ratio, and it implies that the symmetric image ratio is unbiased for measuring the similarity of histogram matched images. However, the change results illustrated in Figures 5(c2) and 5(d2) are dramatically different from those shown in Figures 5(c3) and 5(d3). Though their similarity values are higher than those of Figures 5(c1) and 5(d1), most of the citrus farm changes of interest are missing. It should be noted that the best band for the cases with the 1999 image as the reference is band 1, not band 2 as in the Manhattan distance scenarios. As in the Manhattan distance scenarios, the best change detection results come from the normalization case using band 3 of the 2004 image as the reference. As shown in Figures 5(c3) and 5(d3), band 3 gives the highest similarity under the symmetric image ratio measure and identifies the citrus farm changes of interest. Moreover, as observed from Figures 5(c) and 5(d) and Tables 3 and 4, change detection with the new symmetric image ratio (SIR) similarity measure is more band dependent and less reference dependent.
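Producing a change map from a symmetric-ratio similarity map amounts to thresholding the per-pixel ratio. The sketch below uses a fixed 0.8 cutoff purely for illustration; as stated above, the paper derives its threshold from the similarity map itself, and the zero-pixel handling is an assumption.

```python
import numpy as np

def change_map_from_ratio(ref, sub, threshold=0.8):
    """Flag pixels whose symmetric ratio min/max falls below a threshold.

    threshold is a hypothetical cutoff; True marks a detected change.
    """
    a = ref.astype(np.float64)
    b = sub.astype(np.float64)
    hi = np.maximum(a, b)
    ratio = np.where(hi > 0, np.minimum(a, b) / np.maximum(hi, 1e-12), 1.0)
    return ratio < threshold
```

A well-normalized image pair should leave most of the map False (rendered black), matching the expectation that only a small fraction of the scene has changed.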

CONCLUSION

In the literature, the Euclidean distance was used for comparing different radiometric normalization methods, and the Manhattan distance was proposed in our previous work to quantify the histogram matching quality. However, it was found that the Manhattan distance measure, when used for optimizing the histogram matching normalization, is always biased toward the reference image whose histogram is concentrated at the lower bits. In this paper, a new unbiased symmetric image ratio is proposed as a similarity measure for histogram matching quality measurement. This measure is derived from image ratioing, a standard change detection method, by adaptively determining the numerator and denominator of the image ratio for each pair of pixels of the reference image and its corresponding histogram matched subject image. This new measure has several desirable properties: (a) it is ratio based, not distance based; (b) it is symmetric, in contrast to the regular image ratio; (c) its value is consistently defined within [0, 1], with 0 representing no similarity and 1 representing the maximum similarity; (d) the individual pixel ratios are summable, and the summation preserves the similarity, i.e., the sum of all individual pixel ratios represents the overall similarity; (e) it is scale invariant. In general, this new measure is not biased toward a low-bit concentrated image histogram as the Manhattan distance is. However, it is a relative measure, which does not give the physical radiometric difference as the Manhattan distance does.

As observed from the experimental results, the proposed symmetric image ratio based similarity measure performs effectively and consistently in measuring the similarity of a reference image and its corresponding histogram matching normalized subject image, whether or not image stretching (clipping) is applied. In contrast to the Manhattan distance, this consistency experimentally demonstrates that the new symmetric image ratio is unbiased with respect to the image histogram distribution patterns. However, the experiment also indicates that the newly proposed symmetric image ratio similarity measure is still biased when it is used without image normalization. The experimental results for the histogram matching optimization using the symmetric image ratio show that the optimal band and the optimal reference image are the same as those obtained with the Manhattan distance. In general, image clipping has a significant impact on the Manhattan distance measure. It is also found that the performance of the image histogram normalization is not only reference image dependent but also band dependent, no matter which similarity measure is used to quantify the similarity. Moreover, it is observed that change detection with the new symmetric image ratio (SIR) similarity measure is more band dependent and less reference dependent. Furthermore, the histogram matching normalization yields better change detection results for both similarity measures.

REFERENCES

Byrne, G. F., P.F. Crapper, and K.K. Mayo, 1980. Monitoring land cover change by principal component analysis of multitemporal Landsat data, Remote Sensing Environment, 10, pp. 175-184.

Canty, M.J., et al., 2004. Automatic radiometric normalization of multitemporal satellite imagery, Remote Sensing Environ., 91, (3-4), pp. 441-451.

Coppin, P.R. and M. Bauer, 1996. Digital change detection in forest ecosystems with remote sensing imagery, Remote Sensing Review, Vol. 13, pp. 207-234.

Du, Yong, Philippe M. Teillet and Josef Cihlar, 2002. Radiometric normalization of multitemporal high-resolution satellite images with quality control for land cover change detection, Remote Sensing of Environment, Vol. 82, Issue 1, September 2002, pp 123-134.

Eckhardt, D.W., et al., 1990. Automated update of an irrigated lands GIS using SPOT HRV imagery, Photogrammetric Engineering and Remote Sensing, 60(11), pp. 1515-1522.

Hall, F.G., D.E. Strebel, J.E. Nickeson and S.J. Goetz, 1991. Radiometric rectification: toward a common radiometric response among multi-data, multi-sensor images, Remote Sensing of Environment, 35, pp. 11–27.

Hong, Gang and Yun Zhang, 2005. Radiometric normalization of IKONOS image using Quickbird image for urban area change detection, Proceedings of ISPRS 3rd Int. Symposium on Remote Sensing and Data Fusion Over Urban Areas (Urban 2005), Tempe, Arizona, March 2005.

Howarth, J.P., and G.M. Wickware, 1981. Procedure for change detection using Landsat digital data, Int. J. Remote Sensing Environment, No. 13, pp. 149-160.

Jensen, J.R., 1983. Urban/suburban land use analysis, In R.N. Colwell (Ed.), Manual of Remote Sensing, 2nd Ed., American Society of Photogrammetry, Falls Church, VA, pp. 1571-1666.

Radke, Richard J., Srinivas Andra, Omar Al-Kofahi, and Badrinath Roysam, 2005. Image change detection algorithms: A systematic survey, IEEE Trans. Image Processing, Vol. 14, No. 3, pp. 294-307.

Schott, J.R., C. Salvaggio and W.J. Volchok, 1988. Radiometric scene normalization using pseudoinvariant features, Remote Sensing Environ., 26(1), pp. 1-16.

Singh, A., 1989. Digital change detection techniques using remotely sensed data, Int. J. Remote Sensing, Vol. 10, No. 6, pp. 989-1003.

Yang, Xiaojun, and C.P. Lo, 2000. Relative radiometric normalization performance for change detection from multi-date satellite images, Photogrammetric Engineering & Remote Sensing, Vol. 66, No. 8, pp. 967-980.

Yang, Z., and R. Mueller, 2007a. Heterogeneously sensed imagery radiometric response normalization for citrus grove change detection, Proceedings of SPIE Optics East, Volume 6761 - Optics for Natural Resources, Agriculture, and Foods II, Boston, MA, September 2007.

Yang, Z., and R. Mueller, 2007b. Spatial-spectral cross-correlation for change detection - A case study for citrus coverage change detection, Proceedings of American Society of Photogrammetry and Remote Sensing, Tampa, Florida, May 2007.

Yuan, D., Elvidge, C.D., 1993. Application of relative radiometric rectification procedure to Landsat data for use in change detection, In Proceedings of the Workshop on Atmospheric Correction of Landsat Imagery, The Defence Landsat Program Office, Torrance, California, June 29 - July 1, 1993, pp.162-166.

Yuan, Ding, C.D. Elvidge, 1996. Comparison of relative radiometric normalization techniques, ISPRS Journal of Photogrammetry and Remote Sensing, 51, pp. 117-126.

Yuan, Ding, C.D. Elvidge, and R.S. Lunetta, 1998. Survey of multispectral methods for land cover change analysis, Remote Sensing Change Detection, Lunetta & Elvidge (Ed.), Ann Arbor Press, pp. 21-39.