

USING ANISOTROPIC DIFFUSION FOR COHERENCE ESTIMATION

François Aspert1, Alessio Cantone2, Jean-Philippe Thiran1, and Paolo Pasquali2

1 Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, SWITZERLAND, Email: [email protected]
2 Sarmap S.A., Cascine di Barico, 6989 Purasca, SWITZERLAND, Email: [email protected]

ABSTRACT

In this paper we present a new coherence estimation technique for SAR interferometry products that adapts the estimation window size and shape during processing. This is of particular interest for sensors with medium spatial resolution, such as the ASAR WS mode, where the estimator must cope with the spatial variability of the targets in the imaged area. The method is designed to remove the low-coherence magnitude bias while keeping a good spatial resolution. Finally, this new approach is compared to an existing algorithm to quantify the resulting quality improvement.

1. INTRODUCTION

In synthetic aperture radar (SAR) interferometry (InSAR), the coherence measure is required in order to achieve a precise estimation of the corresponding interferogram quality. It is therefore a crucial value, as it enables the reliable and accurate extraction of displacement and elevation information [1][2] as well as land-use classification [3][4]. However, existing estimators are biased towards higher values, especially for low coherence magnitudes. Such a bias can be removed by increasing the number of samples taken into account during coherence computation, provided that the scene is stationary in mean. These conditions tend to prevent the use of large estimation windows and limit the overall resulting accuracy. Our approach here is to use the backscattering coefficients of the amplitude images [5] to identify the single objects that are ergodic.

The proposed technique uses anisotropic diffusion to approximate locally stationary processes while being less sensitive to unwanted amplitude variations due to speckle in the source interferogram images. Such a technique aims at providing an edge-sensitive filtering so as to preserve the natural discontinuities that are present in the image. In this manner, we are able to estimate coherence over these regularized regions while enabling accurate non-stationarity detection. The chosen algorithm has the property of being fully applicable to vector-valued images, with a significant improvement in the resulting filtering quality when several images of the same location are provided. In the case of interferometry, the source amplitude images can thus be exploited together, taking into account independent realizations of the speckle. Different measures will be shown in order to assess the accuracy of our methods on two different data sets (originating from ERS-1 and ENVISAT-WS).

First, in Section 2, we present the theory related to the coherence estimation problem. In Section 3, we introduce the basic concept of anisotropic filtering and its vector-valued counterpart. In Section 4, three different (but complementary) methods for coherence estimation are presented. Results and qualitative observations are outlined in Section 5. Finally, conclusions and future work are discussed in Section 6.

2. COHERENCE ESTIMATION

The complex coherence has been defined in [6] as the correlation between two complex signals. If $v_1$ and $v_2$ are two zero-mean complex random stationary processes, the coherence magnitude can be expressed as:

$$\gamma = \frac{E(v_1 \cdot v_2^*)}{\sqrt{E(|v_1|^2) \cdot E(|v_2|^2)}}, \qquad (1)$$

where $^*$ denotes the complex conjugate and $E(\cdot)$ is the ensemble average operator. The sample coherence estimator (maximum-likelihood estimator) is then [7][8]:

$$\hat{\gamma} = \frac{\left|\sum_{i=0}^{N} v_{1i} \cdot v_{2i}^*\right|}{\sqrt{\sum_{i=0}^{N} |v_{1i}|^2 \sum_{i=0}^{N} |v_{2i}|^2}}, \qquad (2)$$

computed over $N$ independent observations of the processes involved. This estimator is asymptotically unbiased. Nevertheless, although the ensemble average of Eq. 1 can be replaced by the time average of Eq. 2, this is difficult in practice, as it requires a large number of acquisitions of the same pixel under the same conditions to ensure process stationarity and mean ergodicity.

Therefore, in order to obtain an estimator tailored to coherence for InSAR products, we can substitute the ensemble average by a spatial average within stationary regions.
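As a concrete illustration, the sample estimator of Eq. 2 can be sketched in a few lines of NumPy; the function name and interface are ours, not the paper's:

```python
import numpy as np

def sample_coherence(v1, v2):
    """Sample coherence magnitude (Eq. 2) over N independent observations."""
    num = np.abs(np.sum(v1 * np.conj(v2)))
    den = np.sqrt(np.sum(np.abs(v1) ** 2) * np.sum(np.abs(v2) ** 2))
    return num / den
```

By the Cauchy-Schwarz inequality the result lies in [0, 1], reaching 1 when the two signals are proportional.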


Proc. ‘Envisat Symposium 2007’, Montreux, Switzerland 23–27 April 2007 (ESA SP-636, July 2007)


The sample estimator then becomes [9]:

$$\hat{\gamma}_{i,j} = \frac{\left|\sum_{m=0}^{M}\sum_{n=0}^{N} v_1(m,n) \cdot v_2^*(m,n)\right|}{\sqrt{\sum_{m=0}^{M}\sum_{n=0}^{N} |v_1(m,n)|^2 \sum_{m=0}^{M}\sum_{n=0}^{N} |v_2(m,n)|^2}}. \qquad (3)$$

Here, for each pixel of coordinates $(i,j)$, the coherence is computed by spatially averaging pixel values within a finite window of fixed size $M \times N$ centered at $(i,j)$. For InSAR, the two images differ by a deterministic phase $\phi$. If this phase is not constant inside a given window, it induces a bias in the estimator. Since it is very likely that this phase is not constant (especially in mountainous areas where steep slopes are imaged), it has to be compensated. Eq. 4 gives the expression for the terrain-slope-compensated coherence estimator.

$$\hat{\gamma} = \frac{\left|\sum_{m=0}^{M}\sum_{n=0}^{N} v_1(m,n) \cdot v_2^*(m,n)\, e^{-j\phi(m,n)}\right|}{\sqrt{\sum_{m=0}^{M}\sum_{n=0}^{N} |v_1(m,n)|^2 \sum_{m=0}^{M}\sum_{n=0}^{N} |v_2(m,n)|^2}}. \qquad (4)$$

However, while such a method is capable of removing the bias in low-coherence zones, it requires a large number of samples to be taken into account, thus lowering the overall spatial resolution. Moreover, within a given window, the scene is not ensured to be stationary, leading to a false estimation of the coherence. In [9], an attempt at estimating coherence using image intensities and variable window sizes is introduced. However, the determination of the optimal size remains problematic, as it is difficult to choose which pixels to include in the final estimation.
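A minimal sketch of the windowed estimators of Eqs. 3 and 4, assuming `phi` is the known deterministic interferometric phase (the argument of $E[v_1 v_2^*]$); the names and the edge-padding choice are ours:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def boxsum(a, M, N):
    """Sum of a over a centred M x N window (edge-padded, odd M and N)."""
    pad = np.pad(a, ((M // 2, M // 2), (N // 2, N // 2)), mode="edge")
    return sliding_window_view(pad, (M, N)).sum(axis=(-2, -1))

def boxcar_coherence(v1, v2, phi=None, M=3, N=3):
    """Windowed coherence magnitude (Eq. 3); with phi given, the
    slope-compensated estimator (Eq. 4)."""
    x = v1 * np.conj(v2)
    if phi is not None:
        x = x * np.exp(-1j * phi)  # remove deterministic phase arg E[v1 v2*]
    num = np.abs(boxsum(x, M, N))
    den = np.sqrt(boxsum(np.abs(v1) ** 2, M, N) * boxsum(np.abs(v2) ** 2, M, N))
    return num / np.maximum(den, 1e-30)
```

With a constant interferometric phase the estimate is 1 everywhere; with a phase ramp it drops unless the ramp is compensated via `phi`.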

3. ANISOTROPIC DIFFUSION

3.1. Single Image Diffusion Filtering

First introduced in [10] for single optical images, this particular type of filtering allows a high level of regularization in homogeneous areas while preserving the relevant features. For a continuous image $I$, diffusion may be enacted by the partial differential equation:

$$\frac{\partial I}{\partial t} = \mathrm{div}\left[c(\|\nabla I_\sigma\|) \cdot \nabla I_\sigma\right], \qquad (5)$$

where $\nabla$ is the gradient, $\mathrm{div}$ is the divergence operator, and $c$, the conduction coefficient, is a matrix of diffusion coefficients of the same size as $I$. $c$ is designed to be a non-linear function of the smoothed image gradient magnitude $\|\nabla I_\sigma\|$. The design of $c$ is extremely important. Black [11] made an in-depth study of the design of $c$ and linked the Perona diffusivity function to the weighting functions of robust statistical estimation. This led to another diffusivity function, derived from the Tukey biweight:

$$c(\|\nabla I_\sigma\|, \lambda) = \begin{cases} \frac{1}{2}\left[1 - \left(\frac{\|\nabla I_\sigma\|}{\lambda}\right)^2\right]^2 & \|\nabla I_\sigma\| \le \lambda \\ 0 & \text{otherwise,} \end{cases} \qquad (6)$$

where $\lambda$ is the sensitivity parameter, set automatically using fixed-size windows in order to have a spatially varying threshold and be more sensitive to potential image variations. The main drawback of nonlinear diffusion, however, is that it leaves the edge features unfiltered. To overcome this, Weickert [12][13] introduced edge-direction-sensitive diffusion. The amount of diffusion is controlled by a matrix $D$ (also called the diffusion tensor) whose values specify the importance of the diffusion in the feature directions. The anisotropic diffusion is thus described by Eq. 7.
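For illustration, one explicit Perona-Malik step with the Tukey biweight diffusivity of Eq. 6 might look as follows; the $\sigma$-smoothing of the gradient and the automated $\lambda$ selection are omitted for brevity, and all names are ours:

```python
import numpy as np

def tukey_c(g, lam):
    """Tukey-biweight diffusivity of Eq. 6."""
    c = 0.5 * (1.0 - (g / lam) ** 2) ** 2
    return np.where(g <= lam, c, 0.0)

def nonlinear_diffusion_step(I, lam, nu=0.2):
    """One explicit Perona-Malik step: I + nu * div(c(|grad I|) * grad I)."""
    gy, gx = np.gradient(I)          # gradients along rows and columns
    c = tukey_c(np.hypot(gx, gy), lam)
    return I + nu * (np.gradient(c * gy, axis=0) + np.gradient(c * gx, axis=1))
```

Where the gradient magnitude exceeds $\lambda$ the diffusivity is exactly zero, so edges stronger than the threshold are left untouched.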

$$\frac{\partial I}{\partial t} = \mathrm{div}\left[D(\nabla I_\sigma) \cdot \nabla I_\sigma\right], \qquad D = \begin{pmatrix} a & b \\ b & c \end{pmatrix}, \qquad (7)$$

where

$$a = \phi_1 \cos^2\alpha + \phi_2 \sin^2\alpha, \qquad (8)$$
$$b = (\phi_1 - \phi_2) \sin\alpha \cos\alpha, \qquad (9)$$
$$c = \phi_1 \sin^2\alpha + \phi_2 \cos^2\alpha, \qquad (10)$$

where $\alpha$ is the direction of the gradient (the angle of maximum variation). $\phi_1$ controls the diffusion along the gradient, whereas $\phi_2$ is in charge of the filtering process perpendicular to this gradient. Therefore, $\phi_1$ is chosen to behave in the same way as $c$ in nonlinear diffusion, while $\phi_2$ is fixed to a constant value, as we require edges to be smoothed uniformly.
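Eqs. 8-10 simply express $D = R(\alpha)\,\mathrm{diag}(\phi_1, \phi_2)\,R(\alpha)^T$, where $R(\alpha)$ rotates by the gradient angle; a sketch (our naming):

```python
import numpy as np

def diffusion_tensor(alpha, phi1, phi2):
    """Entries a, b, c of D (Eqs. 8-10): D = R(alpha) diag(phi1, phi2) R(alpha)^T."""
    s, co = np.sin(alpha), np.cos(alpha)
    a = phi1 * co ** 2 + phi2 * s ** 2
    b = (phi1 - phi2) * s * co
    c = phi1 * s ** 2 + phi2 * co ** 2
    return a, b, c
```

By construction the eigenvalues of $D$ are exactly $\phi_1$ and $\phi_2$, with eigenvectors along and across the gradient.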

3.2. Vector-Valued Diffusion Filtering

The difference with the work done in [14] lies in the computation of the diffusion amount, which no longer varies with a single-image gradient. The choice is made to measure the gradient using the whole set of images. The most natural choice is then to use the reliable formulation for gradient computation with vector data stated in [15] and used by Sapiro [16], which treats the image as a two-dimensional manifold embedded in $\mathbb{R}^m$. For multi-temporal image sequences we obtain the following First Fundamental Form (FFF):

$$df^2 = \begin{pmatrix} dx \\ dy \end{pmatrix}^T \begin{pmatrix} g_{11} & g_{12} \\ g_{12} & g_{22} \end{pmatrix} \begin{pmatrix} dx \\ dy \end{pmatrix}, \qquad (11)$$

where

$$g_{11} = \sum_{i=1}^{m} I_{\sigma,(i,x)}^2, \qquad g_{12} = \sum_{i=1}^{m} I_{\sigma,(i,x)} I_{\sigma,(i,y)}, \qquad g_{22} = \sum_{i=1}^{m} I_{\sigma,(i,y)}^2, \qquad (12)$$

where $I_{\sigma,(i,x)}$ and $I_{\sigma,(i,y)}$ stand respectively for the gradient estimation along columns and along lines. The direction and magnitude of the maximum and minimum rates of change corresponding to the computed gradient directions can then be extracted from the eigenvalues and eigenvectors of the FFF in Eq. 11. Finally, Eq. 13 gives the practical framework of the anisotropic diffusion process.

$$\frac{\partial I_i}{\partial t} = \mathrm{div}\left[D(\vec{I}) \cdot \nabla I_i\right], \qquad i = 1, \ldots, n, \qquad (13)$$

where $\vec{I}$ corresponds to the whole image sequence and $I_i$ is the $i$-th image in the sequence. Therefore, each image is filtered separately using the global sequence information, taking into account features from all images.
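The FFF entries of Eq. 12 and the corresponding eigen-structure can be sketched as follows for a stack of m co-registered images; gradient estimation here uses central differences, the $\sigma$-smoothing is omitted, and the names are ours:

```python
import numpy as np

def fff(stack):
    """FFF entries of Eq. 12 for a stack shaped (m, rows, cols)."""
    g11 = np.zeros(stack.shape[1:])
    g12 = np.zeros_like(g11)
    g22 = np.zeros_like(g11)
    for img in stack:
        gy, gx = np.gradient(img)    # gradients along lines and columns
        g11 += gx * gx
        g12 += gx * gy
        g22 += gy * gy
    return g11, g12, g22

def principal_changes(g11, g12, g22):
    """Max/min rate of change and direction of maximum change from the FFF."""
    half_tr = 0.5 * (g11 + g22)
    disc = np.sqrt(np.maximum(half_tr ** 2 - (g11 * g22 - g12 ** 2), 0.0))
    alpha = 0.5 * np.arctan2(2.0 * g12, g11 - g22)
    return half_tr + disc, half_tr - disc, alpha
```

The eigenvalues come from the closed form $\lambda_\pm = \mathrm{tr}/2 \pm \sqrt{(\mathrm{tr}/2)^2 - \det}$ of a symmetric 2x2 matrix.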

3.3. Discretization schemes

Since Eq. 5 holds for continuous images, one has to discretize it in space and in time in order to apply it to digital images while preserving the consistency and accuracy of the solution. Here, finite difference methods are used for both space and time. For the discretization in time, finite differences approximate $\frac{\partial I}{\partial t}$ by $\frac{I^{t+1} - I^t}{\nu}$, where $\nu$ stands for the discrete time step. Scharr [17] proved that for tensor diffusion an 8-element discretization stencil is consistent and accurate. For a pixel $(i,j)$ on a square grid, we can extract the space discretization stencil shown in Fig. 1.

[3 × 3 space-discretization stencil: each entry is a linear combination of the tensor coefficients $a$, $b$, $c$ evaluated at pixel $(i,j)$ and its eight neighbors; see [17] for the exact weights.]

Figure 1. 3x3 discretization stencil
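For illustration, an explicit tensor-diffusion update for Eq. 7 can also be written with plain central differences instead of the 8-point stencil of Fig. 1 (a simplification: the stencil of [17] has better rotation invariance); all names are ours:

```python
import numpy as np

def tensor_diffusion_step(I, a, b, c, nu=0.15):
    """One explicit step of Eq. 7: I + nu * div(D grad I), D = [[a, b], [b, c]]."""
    gy, gx = np.gradient(I)
    jx = a * gx + b * gy             # flux D * grad I, x component
    jy = b * gx + c * gy             # y component
    return I + nu * (np.gradient(jy, axis=0) + np.gradient(jx, axis=1))
```

With $a = c = 1$ and $b = 0$ this reduces to an explicit heat-equation step, which smooths noise isotropically.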

4. ANISOTROPIC COHERENCE

In this section, we present two different approaches to the problem of estimating the coherence magnitude using two power images and their resulting complex interferogram. The principal philosophy is to identify or approximate homogeneous (and thus stationary) regions within the available images. Indeed, if these regions are sufficiently large, they provide a reliable basis of pixels for using Eq. 4 and computing an effective estimation of $\gamma$.

4.1. Amplitude driven diffusion

The first method is designed to identify the homogeneous regions using the discontinuities of the source amplitude images. In such a case, the vector-valued definition of the gradient (Eq. 11) can be used to combine different source images of the same scene (even multi-temporal) to obtain an improved gradient map based on image redundancies. From Fig. 1, we know that during the diffusion process each pixel of the image is averaged by a linear combination of its surrounding pixels, with the coefficients computed using Eqs. 8-10. These diffusion coefficients are computed with respect to the image geometry. This means that each pixel is averaged with neighbors possessing similar backscattering characteristics rather than with pixels from different areas.

We propose here to filter each image involved in the coherence computation according to Eq. 13. We will further refer to this method as Amplitude Driven Diffusion (ADD). In the case of InSAR data, the diffusion equation is stated as follows:

$$\frac{\partial S_1}{\partial t} = \mathrm{div}\left[D(S_1, S_2) \cdot \nabla S_1\right], \quad \frac{\partial S_2}{\partial t} = \mathrm{div}\left[D(S_1, S_2) \cdot \nabla S_2\right], \quad \frac{\partial I}{\partial t} = \mathrm{div}\left[D(S_1, S_2) \cdot \nabla I\right], \qquad (14)$$

where $S_1$, $S_2$ are the source amplitude images and $I$ the scene-flattened interferogram. Since each image is diffused according to the same parameters with respect to the ground topography, the stationarity hypothesis required in Eq. 4 holds. The filtered amplitude images and filtered complex interferogram can then be directly used in Eq. 4 for coherence estimation. Fig. 2 summarizes a single step of ADD.

Figure 2. Amplitude Driven Diffusion

4.2. Boxcar Driven Diffusion

The goal here is still to identify homogeneous regions, but it is no longer done with the amplitude image intensities; instead, a basic $3 \times 3$ boxcar coherence estimation map is used. Indeed, it has been shown in [18] that a "rough" estimation using small window sizes can be a good starting point for coherence estimation. The use of such small windows prevents averaging pixels belonging to different regions. However, for low coherence values, it has the major drawback of not using enough samples and provides a biased estimation. Nevertheless, it contains useful information about the image geometry when inhomogeneous coherence zones appear within homogeneous backscatter areas. Therefore the boxcar map conveys different but complementary information from the amplitude images.

What we propose here is to follow the framework set up in Section 4.1. The major modification with respect to ADD is that the diffusion coefficients describing the image discontinuities (named here $a$, $b$ and $c$) are extracted using a simple centered finite-difference discretization scheme directly from the boxcar estimation map. We will further refer to this technique as Boxcar-Driven Diffusion (BDD). Finally, the diffusion equation described in Eq. 14 becomes:

$$\frac{\partial S_1}{\partial t} = \mathrm{div}\left[D(\hat{\gamma}_b) \cdot \nabla S_1\right], \quad \frac{\partial S_2}{\partial t} = \mathrm{div}\left[D(\hat{\gamma}_b) \cdot \nabla S_2\right], \quad \frac{\partial I}{\partial t} = \mathrm{div}\left[D(\hat{\gamma}_b) \cdot \nabla I\right], \qquad (15)$$

where $\hat{\gamma}_b$ denotes the $3 \times 3$ boxcar coherence estimation map.

Figure 3 summarizes a single step of the BDD scheme.

Figure 3. Amplitude-Boxcar Driven Diffusion

4.3. Boxcar-Amplitude Methods Combination

What we propose here is an original combination of the two methods introduced in Sections 4.1 and 4.2. Indeed, when trying to identify homogeneous areas, two main cases have to be distinguished:

• Type 1: areas with homogeneous (or very similar) backscattering coefficients can have inhomogeneous coherence.

• Type 2: areas with homogeneous coherence magnitude can have inhomogeneous backscattering coefficients.

With ADD we are able to handle cases of Type 2: even if it is not precise enough to provide a reliable estimation for small features, it performs well in low-gradient-magnitude areas such as water. Conversely, with BDD, we can discriminate zones that appear very similar in terms of backscattering coefficient. However, it is very likely to lead to the formation of high-coherence artifacts where the boxcar estimation is not reliable (water areas). Therefore, in order to fulfill the aforementioned requirements, we propose a new contribution which ultimately allows both of these criteria to be handled. The final coherence $\gamma$ is computed as:

$$\gamma = \begin{cases} \max(\gamma_{ADD}, \gamma_{BDD}) & \gamma_{ADD} > 0.5 \text{ and } \gamma_{BDD} > 0.5 \\ \min(\gamma_{ADD}, \gamma_{BDD}) & \text{otherwise,} \end{cases} \qquad (16)$$

where $\gamma_{ADD}$ and $\gamma_{BDD}$ denote the coherence estimates obtained with the ADD and BDD techniques respectively.
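The combination rule of Eq. 16 is straightforward to apply pixel-wise; a sketch with our naming, taking the two coherence maps as arrays:

```python
import numpy as np

def combine_add_bdd(g_add, g_bdd, thresh=0.5):
    """Eq. 16: max where both estimates exceed thresh, min otherwise."""
    both_high = (g_add > thresh) & (g_bdd > thresh)
    return np.where(both_high, np.maximum(g_add, g_bdd), np.minimum(g_add, g_bdd))
```

The rule is conservative: it only trusts the larger of the two estimates when both already agree that the coherence is high.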

5. EXPERIMENTS AND RESULTS

In this section we present the different results obtained using ADD, BDD and their combination. Our first dataset is composed of two ERS amplitude images and their corresponding phase-compensated interferogram from the Bienne region (Switzerland). The working cut is an area covering part of the Bienne and Neuchatel (small portion) lakes and their coastal zones. The second dataset is composed of two ENVISAT-WS power images and their corresponding flattened interferogram.

For all the tests involving diffusion, we used 120 iterations of the diffusion algorithm, a median-based threshold ($\lambda$) estimation computed over $100 \times 100$ non-overlapping windows, and an anisotropy parameter $\phi_2 = 0.2$.
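The median-based, spatially varying $\lambda$ described above can be sketched as follows; the tiling over non-overlapping windows and the per-tile median are our reading of the paper's brief description:

```python
import numpy as np

def local_lambda(grad_mag, win=100):
    """Spatially varying sensitivity threshold: the median gradient magnitude
    over non-overlapping win x win tiles (edge tiles may be smaller)."""
    lam = np.empty_like(grad_mag)
    rows, cols = grad_mag.shape
    for r in range(0, rows, win):
        for s in range(0, cols, win):
            lam[r:r + win, s:s + win] = np.median(grad_mag[r:r + win, s:s + win])
    return lam
```

Each pixel then receives the threshold of its tile, so the diffusivity adapts to local contrast.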

In Figs. 4(b), 4(c) and 4(d), the estimated coherence results using a $3 \times 3$ boxcar estimation, ADD and BDD are shown. Figs. 4(e) and 4(f) illustrate the ability of anisotropic diffusion to provide a selective smoothing of the processed image. In Figure 5, the histograms of coherence magnitude over the ERS dataset are presented for the three techniques. It is clear that for ADD and BDD the estimated coherence is much lower for low coherence values, as the histograms are shifted to the left compared to the one obtained with the boxcar. However, in the ADD-filtered image we observe that many small details are blurred together, leading to biased high coherence estimations. This is due to the difficulty of estimating the true gradient angle in these small areas. For BDD this problem does not occur, even though in water areas the gradient values and angles bring false information and lead to the formation of high-coherence artifacts.

This is confirmed by Figure 6, which plots the coherence magnitude along the red and green profiles (see Fig. 4(a)) for ADD, BDD and the boxcar estimation. From Fig. 6(a), we can observe that the ADD technique provides the lowest coherence variability in the lake area among the three techniques. But from Fig. 6(b) we can observe that, when facing small features like the river, it provides a much higher estimation than BDD and the boxcar. In this case, BDD gives an effective estimation while preserving the spatial resolution, with sharp transitions between high- and low-coherence areas (transitions from and to the river area).

In Fig. 8, the result of the BDD and ADD combination is shown. When observing the profiles, we can see that we successfully combined the advantages of both ADD and BDD, with low coherence variability in the lakes area (Fig. 8(c)) and preservation of small features such as the river (Fig. 8(d)).

Finally, Figure 7 shows the BDD results compared to a $3 \times 3$ boxcar estimation for the ENVISAT-WS dataset, where a good accuracy is achieved.

Figure 4. ERS dataset coherence estimation results: (a) original amplitude image, (b) 3x3 boxcar coherence estimation, (c) ADD coherence estimation, (d) BDD coherence estimation, (e) resulting ADD filtered image, (f) resulting BDD filtered image.

Figure 5. ERS dataset coherence histograms (pixel count vs. coherence magnitude): (a) 3x3 boxcar, (b) ADD, (c) BDD.

6. CONCLUSIONS

In this paper, two approaches for coherence estimation have been proposed and validated. Indeed, the use of anisotropic diffusion allows the determination of a reliable pixel basis for the spatial averaging required in the coherence estimation formula. It has been shown that the bias present in another method based on spatial averaging is greatly decreased. Moreover, an original combination of the two methods has been introduced. This additional information made it possible to take into account cases

Figure 6. Comparative coherence profiles (coherence magnitude vs. pixel index) for ADD, BDD and 3*3 boxcar estimation: (a) red profile, (b) green profile.

Figure 7. ENVISAT-WS dataset coherence estimation results: (a) original amplitude image, (b) resulting BDD filtered image, (c) 3x3 boxcar coherence estimation, (d) BDD coherence estimation, (e) 3x3 boxcar coherence histogram, (f) BDD coherence histogram.

where coherence and amplitude behave oppositely, improving the overall coherence map reliability by taking advantage of both methods. However, this is just a preliminary attempt to solve complex cases.

Figure 8. ERS dataset method combination results: (a) 3x3 boxcar coherence estimation, (b) BDD+ADD coherence estimation, (c) red profile (boxcar-amplitude combination vs. 3*3 boxcar estimation), (d) green profile (boxcar-amplitude combination vs. 3*3 boxcar estimation).

Further research is needed to find an optimal way to combine both techniques. Finally, a quality comparison has been made between ADD, BDD and the boxcar estimation, showing that BDD provides a high degree of smoothing while preserving a significant spatial resolution.

As a conclusion, we can say that this new technique already gives promising results, and some further work is needed toward the elaboration of a reliable scheme for ADD and BDD combination.

REFERENCES

1. J. Hagberg, L. Ulander, and J. Askne. Repeat-Pass SAR Interferometry over Forested Terrain. IEEE Transactions on Geoscience and Remote Sensing, 33:331-340, March 1995.

2. C. Livingstone, A. Gray, R. Hawkins, P. Vachon, T. Lukowski, and M. Lalonde. The CCRS Airborne SAR Systems: Radar for Remote Sensing Research. Canadian Journal of Remote Sensing, 21(4):468-491, 1995.

3. E.J.M. Rignot and J.J. van Zyl. Change Detection Techniques for ERS-1 SAR Data. IEEE Transactions on Geoscience and Remote Sensing, 31:896-906, July 1993.

4. U. Wegmuller and C. Werner. Retrieval of Vegetation Parameters with SAR Interferometry. IEEE Transactions on Geoscience and Remote Sensing, 35:18-24, January 1997.

5. J. Lee, S. Cloude, K. Papathanassiou, M. Grunes, and I. Woodhouse. Speckle Filtering and Coherence Estimation of Polarimetric SAR Interferometry Data for Forest Applications. IEEE Transactions on Geoscience and Remote Sensing, 41(10):2254-2263, 2003.

6. M. Born and E. Wolf. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. Pergamon Press, Elmsford, NJ, USA, 1985.

7. R. Touzi, A. Lopes, J. Bruniquel, and P. Vachon. Coherence Estimation for SAR Imagery. IEEE Transactions on Geoscience and Remote Sensing, 37(1):135-149, 1999.

8. M. Seymour and I. Cumming. Maximum Likelihood Estimation for SAR Interferometry. In International Geoscience and Remote Sensing Symposium IGARSS'94, Pasadena, volume 4, 1994.

9. A. Monti Guarnieri and C. Prati. SAR Interferometry: A "Quick and Dirty" Coherence Estimator for Data Browsing. IEEE Transactions on Geoscience and Remote Sensing, 35(3):660-669, May 1997.

10. P. Perona and J. Malik. Scale-Space and Edge Detection Using Anisotropic Diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629-639, 1990.

11. M.J. Black, G. Sapiro, D.H. Marimont, and D. Heeger. Robust Anisotropic Diffusion. IEEE Transactions on Image Processing, 7(3), March 1998.

12. J. Weickert. Anisotropic Diffusion in Image Processing. B.G. Teubner, 1998.

13. J. Weickert. A Review of Nonlinear Diffusion Filtering. In Scale-Space Theories in Computer Vision, pages 3-28. Springer, 1997.

14. S.T. Acton and J. Landis. Multi-Spectral Anisotropic Diffusion. International Journal of Remote Sensing, 18(13):2877-2886, January 1997.

15. S. Di Zenzo. A Note on the Gradient of a Multi-Image. Computer Vision, Graphics, and Image Processing, 33:116-125, January 1986.

16. G. Sapiro and D.L. Ringach. Anisotropic Diffusion of Multivalued Images with Applications to Color Filtering. IEEE Transactions on Image Processing, 5(11), November 1996.

17. H. Scharr and J. Weickert. An Anisotropic Diffusion Algorithm with Optimized Rotation Invariance, 2000.

18. A. Monti Guarnieri, P. Guccione, P. Pasquali, and Y. Desnos. Multi-Mode ENVISAT ASAR Interferometry: Techniques and Preliminary Results. In IEE Proceedings of Radar, Sonar and Navigation, volume 150, pages 193-200, June 2003.