
PS Tutorial

Version 0.9

November 2014


Index

PS Workflow
PS main processing steps and sub-steps
Training Data sets description
PS processing chain tools
   Before starting
   General procedure and some important considerations
   Connection Graph Generation
      Auxiliary file
   Area of Interest Definition
   Interferometric Process
   Inversion: First Step
   Inversion: Second Step
   Geocoding
      Geocoded products
      General products
   Time Series Analyzer
   Results
Stacking methods comparison: PS and SBAS


PS Workflow

The logical workflow of the PS module is shown in the following Figure.

[Figure: flowchart. Inputs: N SAR SLC Images (Interferometric Stack), Reference DEM (e.g. SRTM). Steps: Connection Graph / Master selection; AOI definition (optional); Differential Interferograms Generation; PS candidates selection; Estimation on candidates of average Displacement Rate and DEM correction factor; Atmospheric Phase Screen estimation and subtraction; Estimation of average Displacement Rate and DEM correction factor; Location Time Series estimation; Average SAR backscatter Image generation. Outputs: Average Displacement Rate, Location Time Series, Average SAR backscatter, 3D Location of the PSs.]

Figure 1. Logical workflow for the PS module.

The list of the corresponding tools is given in the next section, and each of them is described in detail in the remainder of the document.


PS main processing steps and sub-steps

1. Connection Graph Generation

2. Area of interest Definition (Optional)

3. Interferometric Workflow

3.1. Co-registration

3.2. Interferograms generation and Flattening

3.3. Mean power image and Amplitude dispersion index generation (MuSigma)

4. Inversion First Step

4.1. Coherence, velocity and residual topography estimation

5. Inversion Second Step

5.1. Atmosphere pattern removal

5.2. Coherence, velocity and residual topography estimation

5.3. Displacement component estimation

6. Geocoding

6.1. Velocity and precision results geocoding

6.2. Displacement geocoding


Training Data sets description

This tutorial is intended to give an exhaustive description of the SARscape PS processing chain. The main goal is to support the user in their own applications. For this reason, most of the figures shown in this tutorial are taken from two real training stacks. The entire sample data stack and the PS results are provided together with this tutorial. SARscape version 5.1 and ENVI version 5.2 are used. For some particular tools, and to speed up the visualization of some files (e.g. meta files), ENVI Classic is also used.

The area of interest is Urayasu city, located in the Chiba prefecture (Japan), close to Tokyo. Two datasets are available: ENVISAT-ASAR and ALOS-PALSAR. They have similar spatial and temporal coverage, which makes it easier to compare the retrieved results. Moreover, GPS measurements (with a daily position measurement rate) are provided; they will be used to validate the PS results.

The ASAR dataset contains 34 descending acquisitions in IS2 acquisition mode and VV polarization. The temporal coverage is from 2006-09-04 to 2010-08-09.

The PALSAR dataset contains 23 descending acquisitions in single or dual polarization (FBS-FBD), HH channel. The temporal coverage is from 2006-08-26 to 2010-10-22.

The use of two datasets with different wavelengths over the same zone is helpful: if two sensors with different incidence angles and different wavelengths give similar results, those results are more likely to be close to reality.

Throughout this tutorial, the two datasets (ASAR and PALSAR) are treated in parallel. However, it is strongly recommended to process one dataset until the end before starting with the second one.

Figure 2. The red shape defines the coverage of the ASAR sample stack and the blue shape defines the coverage of the PALSAR sample stack.


The following table lists the acquisitions of the ASAR and PALSAR datasets.

ASAR stack PALSAR stack

Asar_20060904_slc Palsar_FBD_20060826_slc

Asar_20090720_slc Palsar_FBD_20061011_slc

Asar_20061009_slc Palsar_FBD_20061126_slc

Asar_20090511_slc Palsar_FBD_20071014_slc

Asar_20061113_slc Palsar_FBD_20071129_slc

Asar_20090302_slc Palsar_FBD_20081201_slc

Asar_20061218_slc Palsar_FBS_20070111_slc

Asar_20070122_slc Palsar_FBS_20070226_slc

Asar_20070507_slc Palsar_FBS_20070714_slc

Asar_20070611_slc Palsar_FBS_20070829_slc

Asar_20070716_slc Palsar_FBS_20080114_slc

Asar_20070820_slc Palsar_FBS_20080531_slc

Asar_20070924_slc Palsar_FBS_20080716_slc

Asar_20071029_slc Palsar_FBS_20080831_slc

Asar_20071203_slc Palsar_FBS_20081016_slc

Asar_20080107_slc Palsar_FBS_20090116_slc

Asar_20080211_slc Palsar_FBS_20090603_slc

Asar_20080317_slc Palsar_FBS_20090719_slc

Asar_20080421_slc Palsar_FBS_20090903_slc

Asar_20080526_slc Palsar_FBS_20091019_slc

Asar_20080630_slc Palsar_FBS_20091204_slc

Asar_20080804_slc Palsar_FBS_20100421_slc

Asar_20080908_slc Palsar_FBS_20100606_slc

Asar_20081013_slc Palsar_FBS_20100722_slc

Asar_20081117_slc Palsar_FBS_20100906_slc

Asar_20081222_slc Palsar_FBS_20101022_slc

Asar_20090928_slc

Asar_20091207_slc

Asar_20100215_slc

Asar_20100322_slc

Asar_20100426_slc

Asar_20100531_slc

Asar_20100705_slc

Asar_20100809_slc

Table 1. ASAR and PALSAR stack list.

Each of the provided binary raster files is delivered with 3 auxiliary files:

.hdr (an ASCII header for ENVI visualization)

.sml (an ASCII header for SARscape processing with the main acquisition information)

.kml (an ASCII file for Google Earth).

In this training area, considerable terrain subsidence and some uplift patterns are expected. The ground displacements are caused by gas extraction and water injection.


PS processing chain tools

Before starting

Before starting with the PS processing chain, the ENVI preferences and the SARscape default values have to be set according to the type of data that will be used and the suggested values presented in the common document.

General procedure and some important considerations

First of all, it is important to choose an area of interest where scatterers remain stable both in radiometric and in interferometric phase terms (e.g. urban areas). Please note that the number of input images is crucial for the pixel coherence estimate, since it enables the identification of suitable PSs. The use of an insufficient number of acquisitions will produce a coherence overestimate throughout the entire scene, resulting in an overestimation of PSs and, consequently, in false displacements. The identification of PSs is generally considered reliable when 20 or more acquisitions are used, even more so if temporally regular acquisitions are available.

The interferograms, which are generated by pairing all the input slc data with the master slant range image (also called the Reference file), are flattened using the Digital Elevation Model projected onto the geometry of the master slant range image. In case of precise orbits and an accurately geocoded reference Digital Elevation Model, this procedure runs in a fully automatic way. However, in case of inaccuracies in the satellite orbits or in the DEM geolocation, a Ground Control Point - the "Geometry GCP file" - is required to correct the SAR data (i.e. the Reference acquisition of the series) with respect to the reference DEM. In this case, the shift calculated in the coregistration process is combined with the GCP shift in order to correct all input data files (slaves) according to the reference (master) image. It is important to note that, if the "Reference file" has already been corrected with either the manual or the automatic procedure, the GCP is not needed.

After the interferometric step, it is very important to check the first results, in particular whether the coregistration step has worked well and whether all slc data have been coregistered. At this point, the PS kernel is run by setting the input parameters (velocity and height range) according to the area of interest and to the type of subsidence and uplift that is expected.

The criterion for setting these parameters is that, between two acquisitions, the displacement sensitivity is λ/2; hence, the smallest time interval determines the maximum velocity search range that avoids velocity aliasing.
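As a quick sanity check, the velocity bound implied by this criterion can be computed directly. The following is a minimal sketch (not a SARscape function), assuming the λ/2 sensitivity stated above; the wavelength and repeat-cycle values are example figures for a C-band stack with a 35-day revisit.

# Minimal sketch: velocity search range implied by the lambda/2 sensitivity
# over the smallest time interval between two acquisitions.
def max_search_velocity_mm_per_year(wavelength_m, min_revisit_days):
    max_disp_m = wavelength_m / 2.0                        # one full phase cycle in LOS
    return max_disp_m / min_revisit_days * 365.0 * 1000.0  # convert to mm/year

# Example: C-band (~5.6 cm wavelength) with a 35-day repeat cycle -> ~292 mm/year
print(max_search_velocity_mm_per_year(0.056, 35))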

According to the size of the study zone, the PS algorithm follows one of the two following approaches:

If the area is larger than a given surface threshold, the whole image will be divided into several subsets (with an overlap) and each one will be processed separately with its own reference point; otherwise too many parameters would have to be estimated and the atmosphere estimation would become less accurate. In the final step, all single areas are mosaicked.

On the other hand, if the area is smaller than this surface threshold, one reference point is sufficient and the image will not be divided.

This surface threshold is called "Area for Single Reference Point (sqkm)". By default this value is 5; hence, for images with a larger surface the first approach is executed. Changing this parameter affects which approach is carried out. It is suggested to keep the default value of 5, because it performs better in the atmosphere estimation and removal.


In particular, the PS kernel carries out two steps:

1) Displacement First Inversion (Inversion: First Step):

All output files (velocity, height, coherence and displacements) will be generated from all differential interferogram files (_dint, after topography removal) without atmospheric compensation. The output name extension will be "_first".

The estimated linear model (velocity + residual height) will be subtracted from all dint files. From the residual, the atmosphere will be estimated, taking into account the noise and the non-linear target motion. Only the pixels displaying enough coherence are considered.

2) Displacement Second Inversion (Inversion: Second Step):

The atmosphere is estimated on the differential interferograms (dint);

The joint estimation of target elevation and LOS velocity is carried out again, but with the previously estimated atmosphere compensated;

This step allows more PSs to be identified; it is in any case important to have estimated the atmosphere well in order to improve the coherence;

This can be checked by comparing both coherence files (_cc_first and _cc): _cc should have coherence values higher than _cc_first;

In general, if an area with a lot of vegetation is processed (it is suggested to avoid this by selecting urban areas), the low pass filter size should be decreased (e.g. to 500 m; by default it is 1200 meters);

By editing the auxiliary file (please see the dedicated Auxiliary file section) it is possible to re-run only the atmosphere generation step and regenerate the displacements.

At this point, the last step of the PS process is the PS Geocoding, in order to produce the shape files and/or

raster files.


Connection Graph Generation

This functionality defines the SAR pair combination (Master and Slaves) and connection network, which is used

for the generation of the multiple differential interferograms. This step is mandatory and, unlike the SBAS tool,

this network cannot be edited.

This step defines the combination of pairs (interferograms) that will be processed by the PS. These pairs are shown as connections in a network that links each acquisition to the Master (reference) file. Given N acquisitions, the maximum number of theoretically available connections is N-1. The Connection Graph tool allows choosing the a-priori most reliable connections. These pairs will correspond to a stack of interferograms (after the Interferometric Workflow tool), used as input measures for the PS inversion kernel (Inversion: First Step and Inversion: Second Step tools).

The Master will be automatically chosen among the input acquisitions. The Master is the reference image of

the whole processing, and all the processed slant range pairs will be co-registered on this reference geometry.

The user can also choose the Master manually. However, this is not strictly necessary, because the final results are not affected by this selection; moreover, by manually choosing the reference (Super Master) there is a risk of ending up with a Master that has few connections.

To start with the Connection Graph Generation, please insert all the ASAR or PALSAR _slc files, leave the Input Master File empty (so that this file will be chosen automatically by the software) and choose an Output Root Name. The Output Root Name will be used to create a folder named "(Output Root Name)+_PS_processing".

Figure 3. Connection Graph tool, ASAR processing. In this example, the Input Master File box is empty because this file will be automatically chosen among the input files.


Figure 4. Connection Graph tool, PALSAR processing. In this example, the Input Master File box is empty because this file will be automatically chosen among the input files.

The connections created in this step are only those pairs whose spatial baseline is within the values specified in the relevant Preferences (refer to the Preferences>Persistent Scatterers>Baseline Threshold (%) setting). By default, the spatial baseline threshold is set to 500% of the critical baseline. The acquisitions that exceed this threshold are discarded from the further PS analysis.

At the end of the process, an IDL graph is shown, where all connections are represented as well as the selected Master. Each acquisition is represented by a diamond. The color of the diamond symbol is as follows: i) acquisitions discarded due to user-specified constraints in red; ii) valid acquisitions in green; iii) the Master acquisition in yellow.

The generated graphs are the following:

Time-Position plot, which shows the normal distance from the Master (y axis) versus the input acquisition dates (x axis) (on the left in Figure 5 and Figure 6);

Time-Baseline plot, which shows the normal baseline (y axis) versus the input acquisition dates (x axis) (on the right in Figure 5 and Figure 6). This graph allows better visualization of the interferogram coverage for each date.

The graphs can be reloaded at any processing step by means of the Plot Viewer found in the Stacking tool.


Figure 5. Plots generated by the Connection Graph Generation tool for ASAR processing: Time-Position (on the left), and Time-Normal Baseline (on the right).

Figure 6. Plots generated by the Connection Graph Generation tool for PALSAR processing: Time-Position (on the left), and Time-Normal Baseline (on the right).


The multi-looking factors of the output Master image (suffix "_pwr") are automatically calculated by taking into account the Cartographic Grid Size, which is set in the relevant SARscape Preferences panel.

The software can deal with spotlight and stripmap data that can be combined interferometrically. ALOS PALSAR data, both FBD and FBS acquisitions (dual and single polarization, with different slant range spatial resolution), can be used together during the processing, provided that the same polarization is used for all ingested data.

Figure 7. ASAR processing: multi-looked power image of the Master acquisition (_Asar_20080804_slc_pwr image in _PS_processing/connection_graph/ folder), shown by ENVI with SARscape image stretch

visualization (red circle).


Figure 8. PALSAR processing: multi-looked power image of the Master acquisition (Palsar_FBS_20080716_slc_pwr image in _PS_processing/connection_graph folder), shown by ENVI with

SARscape image stretch visualization.


Auxiliary file

The auxiliary file is used in all the following stacking interferometry steps. This file is the central thread of the stacking processing; it reports the necessary information, such as the internal folder paths, and contains the list of steps and sub-steps performed by SBAS or PS. Should a step of the stacking interferometry processing be stopped for any reason, it can be started again and the SBAS or PS module will continue from the last processed pair, without re-starting from scratch. The "Auxiliary file" (named auxiliary.sml) is saved in the root output directory and is updated during the execution of the different processing steps. It is the input used, from the Area of Interest Definition step onwards, throughout the whole processing chain; it is important to note that the first input to enter, in any processing panel, is the "Auxiliary file". To further simplify this procedure, the steps to be executed can be stored in a batch sequence, so that it is very easy to stop them and let them continue again by exploiting the Batch Browser tool. Moreover, it is possible to edit this file when the user decides to perform a particular operation again. For example, while looking at intermediate results, it may be discovered that the parameters should be tuned again. To redo a sub-step, the auxiliary file must be edited by setting the corresponding sub-step to "NotOK" and setting the relative counters to 0. Once the SBAS or PS is called again, it will start from there and go on, skipping the previous sub-steps and hence saving time. Attention must be paid to the fact that the sub-steps are ordered according to their execution sequence: all sub-steps following the "NotOK" entry will be performed again.

Example for SBAS: the re-flattening (and the next steps) must be re-done.

Portion of the Auxiliary file before re-starting:

<reflat>OK</reflat>
<reflat_master>20</reflat_master>
<reflat_slave>0</reflat_slave>

Same portion after proper editing, which will force the re-execution of the flattening when the tool is re-started:

<reflat>NotOK</reflat>
<reflat_master>0</reflat_master>
<reflat_slave>0</reflat_slave>

Note: The IDs can be found in the Report file that appears in a popup at the end of the execution of the Connection Graph Generation. This Report file contains the list of images to be processed and a sequence of useful information. The processed pair nomenclature is the following: _MasterDate_m_ID_SlaveDate_s_ID. In this way, it is easy to retrieve which pair is under analysis by looking at the acquisition dates and/or the acquisition IDs. The report file (CG_report.txt) can be found within the _SBAS_processing\connection_graph folder.

Example for PS: the coregistration must be re-done.

Portion of the Auxiliary file before re-starting:

<step_COREGISTRATION>OK</step_COREGISTRATION>
<step_coregistration_start_id_image>20</step_coregistration_start_id_image>

Same portion after proper editing, which will force the re-execution of the coregistration when the tool is re-started:

<step_COREGISTRATION>NotOK</step_COREGISTRATION>
<step_coregistration_start_id_image>0</step_coregistration_start_id_image>

Note: PS does not have master and slave entries in the auxiliary file because the master always remains the same. The IDs are found in the upper part of the Auxiliary file.
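For users who prefer to script this edit, the same change can be made programmatically. This is a minimal sketch, not an official SARscape utility: it assumes auxiliary.sml is well-formed XML, uses the tag names shown in the PS example above, and the file path is illustrative.

# Reset the PS coregistration sub-step in auxiliary.sml so that it is run again.
import xml.etree.ElementTree as ET

aux_path = "Urayasu_PS_processing/auxiliary.sml"   # example location

tree = ET.parse(aux_path)
root = tree.getroot()

# Mark the sub-step as not done and reset its counter, as in the manual edit above.
root.find(".//step_COREGISTRATION").text = "NotOK"
root.find(".//step_coregistration_start_id_image").text = "0"

tree.write(aux_path)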

It may also be decided, for any reason, to execute an entire stacking step again. In this case no editing of the auxiliary file is required: just call the desired stacking tool again with the Rebuild All flag active.

Another important file to take into account is the work_parameters.sml file, located in the work folder. This file documents the settings of all parameters used to run the SBAS or PS process. It may be useful to remind the user, whenever needed, of the settings used to process the current data stack (for example in case of incremental stacking processing).

The incremental mode, available only for SBAS processing, is intended to save processing time whenever a new acquisition is added to an existing stack: exploiting this file makes it unnecessary to perform the whole interferometry again on all the pairs, but only on the newer ones. In this case it is better to set, on the new stack, the same parameters used for the old one (e.g. the same multi-looking, the same thresholds, etc.).

When the incremental mode is used, the Connection Graph generation must not be performed again!

The input acquisitions' location as well as the stacking (SBAS or PS) output folder cannot be moved until the end of the Interferometric Workflow. After this step the user can move the stacking output folder anywhere and perform the rest of the stacking process there.


Area of Interest Definition

When the land displacement area is known (location and dimension) and it is smaller than the whole frame coverage, it is better to focus the attention only on this area; the PS processing time will then be significantly reduced, as will the hard disk usage.

A working area over the reference Master can hence be defined with this tool; the entire input data stack will be resized according to the pixel shift values of each acquisition with respect to the reference one. The acquisition samples are stored in a new folder inside the output root path.

This tool is optional. If it is not performed, the PS will process the entire dataset.

Figure 9. Area of Interest Definition tool.

An area of interest a bit larger than the real interest area (at least 200 pixels more) should be defined, since the processing filters can cause some border effects. Among these filters, the atmospheric filter (applied during the Inversion: Second Step tool) must be particularly taken into account. This filter has a window dimension corresponding to some km on the ground (typically 1.2 km by default): for this reason the AOI should be defined with a buffer of 2-3 km around the real area of interest. In any case, the entire AOI should not be smaller than 200-300 by 200-300 pixels.

If the user is working in incremental mode (SBAS only), and the Area of Interest Definition step was used in the previous process, it is necessary to start again from this step in order to process the added acquisitions and the new pair connections.

The dataset of this tutorial is already cropped to the correct Area of Interest, so this step is skipped.


Interferometric Process

This functionality executes, in an automatic way, the following processing sequence: Coregistration, Interferogram Generation, Flattening and Amplitude dispersion index. This step is mandatory.

The different steps implemented here are executed using the default processing approach (consult the

reference guide specific to each processing step for more details); in particular:

During the coregistration step, all sampled images are coregistered onto the resampled "Master file". This involves an oversampling by a factor of 4 in the range direction (refer to the Preferences>Persistent Scatterers>Range Looks setting), which is executed to avoid aliasing of fast fringes in case of large baseline values. Differently from the standard InSAR processing, since the PS approach looks at point targets, the spectral shift and the common Doppler bandwidth filters are not executed. The interferograms are then generated for each slave (i.e. "Input File List") always using the same master image (i.e. "Reference file").

The Interferogram Flattening is performed using an input reference Digital Elevation Model, or the ellipsoidal model if no DEM is provided; the "Geometry GCP file", if entered, is used to correct the master image (i.e. the Master acquisition of the interferometric stack) onto the Digital Elevation Model. The better the reference Digital Elevation Model accuracy/resolution, the better the result in terms of topography removal. It is important to note that, in case the Master image has already been corrected with the manual or the automatic orbital correction procedure (General Tools>Orbital Correction), the GCP is not needed.

The Amplitude Dispersion Index (D) is defined as

D = σ_A / μ_A

where σ_A is the temporal standard deviation of the amplitude and μ_A is the temporal average of the amplitude for a certain pixel. Thus, a pixel that constantly has a similar, relatively large, amplitude during all acquisitions is expected to have a small phase dispersion.

This relation enables the identification of potentially coherent points without the need to analyze the phase, because at this stage the phase still contains unknown signal contributions and is difficult to analyze.

For larger values of D a large phase standard deviation is expected (e.g. rural areas, vegetation), whereas points with a smaller amplitude dispersion index are expected to have a smaller phase standard deviation (e.g. urban areas).

Therefore, thresholding on the dispersion index is a very practical way of selecting points that are expected to have the smallest phase dispersion.

The images are automatically radiometrically calibrated in order to allow a precise estimation of σ_A and μ_A.
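To make the definition concrete, the sketch below computes D for a stack of calibrated amplitude images and selects candidate pixels below a threshold. This is not the SARscape implementation: the amplitude stack is assumed to be already loaded as a NumPy array and the threshold value is only an example.

import numpy as np

def amplitude_dispersion(amplitude_stack):
    # D = sigma_A / mu_A, computed pixel by pixel along the time axis
    mu = amplitude_stack.mean(axis=0)
    sigma = amplitude_stack.std(axis=0)
    return np.divide(sigma, mu, out=np.full_like(mu, np.inf), where=mu > 0)

# Placeholder stack with shape (n_acquisitions, rows, cols)
amplitudes = np.abs(np.random.randn(34, 100, 100)) + 5.0

D = amplitude_dispersion(amplitudes)
candidates = D < 0.25          # example threshold: small D -> likely PS candidate
print(candidates.sum(), "candidate pixels")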

As stated before, the PS interferometric process performs an oversampling in order to generate the interferometric results. However, these data cannot be easily interpreted visually because of their geometry. To have a quick look with an almost square pixel at the sensor resolution, it is possible to generate multi-looked Dint files for quick view. The multi-looking factors to be inserted refer to the Master image.

Tip: It is advisable to work with a multi-looking larger than 1:1, for two reasons:

To increase the SNR (Signal-to-Noise Ratio) of the interferograms and provide a more reliable coherence estimation;

To speed up the processing on large areas.


By increasing the multi-looking factor, the pixel spatial resolution decreases. To estimate the most appropriate multi-looking, open the Master single look complex SARscape header (.sml), located in the connection_graph folder.

ASAR processing:

Extract from Asar_20080804_slc_pwr.sml

<PixelSpacingRg>7.8039736747741699</PixelSpacingRg>

<PixelSpacingAz>4.0483512878418004</PixelSpacingAz>

<IncidenceAngle>22.843563079833999</IncidenceAngle>

GroundRangePixelSpacing = PixelSpacingRg / sin(IncidenceAngle) = 20.114

For an almost square pixel of 20 meters the correct multi-looking should be:

Multi-look range = 1

Multi-look azimuth = groundRangePixelSpacing / PixelSpacingAz = 4.048 ~ 4

Figure 10: Azimuth Looks for quick view value for ASAR processing.

PALSAR processing:

Extract from Palsar_FBS_20080716_slc_pwr.sml

<PixelSpacingRg>4.6842571562500002</PixelSpacingRg>

<PixelSpacingAz>3.0766829298668399</PixelSpacingAz>

<IncidenceAngle>38.762999999999998</IncidenceAngle>

groundRangePixelSpacing = PixelSpacingRg / sin(IncidenceAngle) = 7.422


For an almost square pixel of 15 meters the correct multi-looking should be:

Multi-look range = 1

Multi-look azimuth = groundRangePixelSpacing/PixelSpacingAz = 4.878 ~ 5

Figure 11: Azimuth Looks for quick view value for PALSAR processing.
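The same ground-range computation can be scripted from the .sml values. This is a minimal sketch for illustration only: the square-pixel sizes are those used in this tutorial, and the look factors actually entered in the processing panels should remain the recommended ones (Az 4, Rg 1 for ASAR; Az 5, Rg 1 for PALSAR).

import math

def ground_range_spacing(pixel_spacing_rg_m, incidence_deg):
    # GroundRangePixelSpacing = PixelSpacingRg / sin(IncidenceAngle)
    return pixel_spacing_rg_m / math.sin(math.radians(incidence_deg))

# ASAR values from Asar_20080804_slc_pwr.sml -> about 20.1 m
print(ground_range_spacing(7.8039736747741699, 22.843563079833999))

# PALSAR values from Palsar_FBS_20080716_slc_pwr.sml -> about 7.4 m
print(ground_range_spacing(4.6842571562500002, 38.762999999999998))

# Azimuth looks for an almost square ~15 m pixel in the PALSAR case:
# 15 / PixelSpacingAz = 15 / 3.0766829298668399 -> about 4.9, rounded to 5
print(round(15.0 / 3.0766829298668399))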

To start the PS Interferometric Process, insert the auxiliary file, the provided DEM (found in the common_input folder) and the correct looks values (in case the Generate Dint Multilooked for Quick View flag is selected): Az 4, Rg 1 for ASAR and Az 5, Rg 1 for PALSAR.

Due to the large number of output products, the following meta files are generated. They enable all relevant output products to be loaded at once:

slant_dint_meta, which refers to all flattened interferograms.

slant_pwr_meta, which refers to all slant range power images.

Note In order to avoid loading failures it is recommended not to move any file from its original

repository folder.


Inversion: First Step

This functionality performs the first model inversion to derive the residual height and the displacement velocity, which are used to flatten the complex interferograms. This step is mandatory.

After the interferogram generation, an offset phase is removed from all interferograms. One or more pixels (Reference Points), selected on the basis of the Amplitude Dispersion Index, are automatically chosen by the program for the calculation of the phase offset to remove. The number of Reference Points depends on the size of the Area of Interest. By default, just one Reference Point is selected for areas within 5 km² (refer to the Preferences>Persistent Scatterers>Area for Single Reference Point setting).

The approach is based on the identification of a certain number of "coherent radar signal reflectors" (Persistent Scatterers). The processing is then focused on the analysis of the phase history of these reliable single targets (each one represented by an image pixel), as opposed to conventional approaches that process the input scene as a whole. A Persistent Scatterer is subject to two main constraints: it has to be stable (fluctuations lower than a millimeter per year) and it has to be properly oriented in order to be detectable by the SAR antenna.

It is important to know that:

Constant displacements, which affect all Persistent Scatterers in the area of interest, are not detected.

The system is designed to estimate displacements characterized by a linear trend, which means that displacement rate variations over time are not properly represented.

Good PS candidates - like roofs, poles, bridges - are typically found in urban settlements or other human-made structures such as greenhouses, dams, metallic and concrete features (e.g. well fields surrounding structures, pipelines and wells). Besides these artificial features, natural targets such as well-exposed outcropping rock formations are also potential PSs.

The temporal distribution of the acquisitions shall also be adequate with respect to the expected dynamics of the displacements under analysis.

At this point, the algorithm follows one of two approaches:

Areas of analysis with a size within the value specified by the 'Area for Single Reference Point' parameter are processed using just one 'Reference Point' for the entire area.

A second approach is carried out when a larger area has to be analyzed. In this case the entire area is split into several sub-areas, each one with a size corresponding to the input parameter and overlapping according to the Overlap for SubAreas percentage. Every sub-area is processed independently. Finally, a mosaicking operation is carried out to merge all sub-areas and obtain the whole result.

Note The ‘Area for Single Reference Point' parameter defines a square-shaped area.

Unlike the SBAS tool, just one model is implemented:

Linear Model, to estimate residual height and displacement velocity.

The model can be summarized as follows:

Disp = V ∙ (t − t0)

where Disp is the displacement at time t, V is the displacement velocity and t0 is the reference time.
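To make the model concrete, the sketch below fits V by least squares to a synthetic displacement time series. It only illustrates the linear model: in the actual PS inversion the velocity is estimated jointly with the residual height from the interferometric phases, which is not reproduced here.

import numpy as np

t = np.array([0, 35, 70, 105, 140, 175], dtype=float)     # days since t0 (example)
disp_mm = np.array([0.0, -1.1, -1.9, -3.2, -3.8, -5.1])   # LOS displacement (example)

# Least-squares slope through the origin: V = sum(t * d) / sum(t * t)
velocity_mm_per_day = (t * disp_mm).sum() / (t * t).sum()
print(velocity_mm_per_day * 365.0, "mm/year")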


Before proceeding with the Inversion: Second Step, it is important to check the results of the Inversion: First Step in order to detect errors in the mosaicking of the different sub-areas (blocks). In fact, during the mosaicking of "Velocity_first" and "Heigh_first", an offset in the overlap zone is estimated and used to rephase each block. This offset is estimated using the Coherence Threshold for Merging value.

To check the results, "Velocity_first" and "Heigh_first" should be loaded in ENVI, together with the SubAreas.shp file (Figure 12 for ASAR processing and Figure 13 for PALSAR processing). Velocity and Height should not show (important) jumps in value. Sometimes, small value jumps in the first inversion results can be removed by the atmospheric filter.

Tip: Applying a color scale to the Inversion: First Step results can help the interpretation and the detection of jumps in the data.

Figure 12: Velocity_first (left) and Heigh_first (right) for ASAR processing.


Figure 13: Velocity_first (top) and Heigh_first (bottom) for PALSAR processing. A value jump can be seen in a zone in the centre of the Velocity_first. However, because the zone is limited, these results can be used for the Inversion: Second Step.

Note In the overlap zones between the different blocks of “cc_first”, simply the better coherence is

taken. This can lead to value jumps between blocks. This effect is normal for coherence and

should not be considered as an error (Figure 14 and Figure 15).


Figure 14: cc_first result of inversion first step for ASAR processing. Several vertical value jumps can be

noticed.

Figure 15: cc_first result of inversion first step for Palsar processing. Value jumps in the middle of the image can be noticed.


If jumps in velocity or height values between different blocks are found, the user should adjust the Area for Single Reference Point, the Overlap for SubAreas, and/or the Coherence Threshold for Merging parameters in order to obtain results without jumps.

Tip: To remove value jumps from the Velocity First or Height First results, it is suggested to first tune the Coherence Threshold for Merging value (found under "Other Parameters" in the dropdown list). It should be decreased if the area of interest is mainly urban, in order to increase the density of points used for the offset estimation. However, it should be increased if the area of interest is mainly covered by vegetation, in order to avoid taking too much noise into account.

If this change does not lead to better results, the Area for Single Reference Point and the Overlap for SubAreas have to be changed in order to obtain high coherence inside the overlap areas. Increasing the number of blocks (by decreasing their size and/or increasing the overlap) increases the processing time.

To start the First Inversion, insert the Auxiliary.sml file as input.

In the ASAR processing chain, the Area for Single Reference Point (sqkm) value can remain 5. The Coherence Threshold for Merging can also be kept at 0.66 (Figure 16).

For PALSAR processing, the default parameters are also good (5 sqkm for Area for Single Reference Point, found in the Principal Parameters tab, and 0.66 for Coherence Threshold for Merging, found in the Other Parameters tab, as can be seen in Figure 17).

Figure 16: PS Inversion: First Step panel for ASAR processing.


Figure 17: PS Inversion: First Step panel for Palsar processing.


Inversion: Second Step

This second and final inversion uses the first linear model products, coming from the previous step, to estimate the atmospheric phase components.

Once the linear model has been estimated, it is subtracted from all differential interferograms before the atmosphere estimation process takes place, in order to refine the velocity and residual height values.

During the atmosphere estimation, the PS technique takes advantage of the dense distribution of scatterers to remove most of the fluctuations of the signal propagation delay, which are mostly due to variations in the troposphere.

The atmospheric filter uses a temporal High Pass Filter and a spatial Low Pass Filter (a minimal sketch is given after this list):

The Atmospheric Low Pass Filter accounts for the spatial distribution of the atmospheric variations. It is implemented using a square window: large windows are more suitable to correct large-scale variations, while small windows are better to correct isolated artifacts due to localized variations. The smaller the window size, the stronger the filter effect.

The Atmospheric High Pass Filter accounts for the temporal distribution of the atmospheric variations. It is implemented using a temporal window: large windows are more suitable to correct effects with low temporal variability, while small windows are better to correct frequent atmospheric variations. The bigger the window size, the stronger the filter effect.
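A minimal sketch of this spatial low-pass / temporal high-pass idea is given below. It is not the SARscape filter: the window sizes are placeholders (expressed in pixels and in number of acquisitions) and the residual phase stack, with the linear model already removed, is assumed to be available as an array.

import numpy as np
from scipy.ndimage import uniform_filter, uniform_filter1d

residual = np.random.randn(34, 200, 200)       # (dates, rows, cols), placeholder

# Spatial low-pass: the atmosphere is spatially smooth
# (window in pixels, e.g. the equivalent of ~1.2 km on the ground)
spatial_lp = uniform_filter(residual, size=(1, 61, 61))

# Temporal high-pass: the atmosphere is uncorrelated in time,
# so remove the temporally smooth part (window in number of acquisitions)
temporal_lp = uniform_filter1d(spatial_lp, size=5, axis=0)
atmosphere = spatial_lp - temporal_lp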

The second model inversion is then performed to derive the date-by-date displacements, after removing the atmospheric phase components, and to eventually fit the final displacement velocity model. This step is mandatory.

To start the PS Inversion: Second Step, insert the Auxiliary.sml file as Input file, leave the parameters as shown in the next figures and then click Exec.


Figure 18: PS Inversion: Second Step panel for ASAR processing.

Figure 19: PS Inversion: Second Step panel for Palsar processing.


The result of the second inversion is a directory containing the following products:

Height, corresponding to the correction (in meters) with respect to the input Digital Elevation Model,

after atmospheric correction.

precision_height, corresponding to the estimate in meters of the residual height measurement

average precision (refer to the Phase to Height conversion for more details).

Velocity, corresponding to the mean displacement velocity (in mm/year, after atmospheric

correction).

precision_vel, corresponding to the estimate in millimeter/year of the velocity measurement average

precision (refer to the Phase to Displacement conversion for more details).

cc, corresponding to the multitemporal coherence. It shows how well the displacement trend fits the selected model.

In addition, the following meta files (_meta) are created, allowing the corresponding processing results to be loaded at once:

slant_atm_meta, which refers to date-by-date atmospheric related components in slant range

geometry. This meta file can be found in the working folder.

slant_dint_reflat_meta, which refers to the date-by-date flattened interferograms, measured in

slant range geometry, after the atmospheric correction.

slant_disp_meta, which refers to the date-by-date displacements, measured in slant range

geometry, after the atmospheric correction.


Geocoding

The PS products are geocoded and the displacements can be displayed in two kinds of format, Shape and/or Raster, according to the flags selected in the parameters.

In order to obtain more accurate displacement measurements, one or more Ground Control Points (e.g. coming from GPS or other ground measurements) - the "Displacement GCP file" - can be entered as input to the processing. This information is used to optimize the displacement trend assessment. In case only one GCP is selected, the correction consists of a constant mean velocity offset, without any spatial variation; if more GCPs are selected, the correction consists of the best fit calculated from all GCPs. The Ground Control Points must be provided in cartographic coordinates. This adjustment is not mandatory. The Product Coherence Threshold means that pixels with coherence values smaller than this threshold are not kept as Persistent Scatterers.
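The single-GCP case therefore reduces to a constant offset, as in the following minimal sketch (variable names and values are illustrative only; SARscape applies this adjustment internally).

import numpy as np

ps_velocity = np.random.randn(1000) * 3.0     # PS LOS velocities, mm/year (placeholder)
gcp_index = 42                                # index of the PS nearest to the GCP
gcp_velocity_mm_per_year = -2.5               # velocity measured at the GCP (e.g. GPS)

offset = gcp_velocity_mm_per_year - ps_velocity[gcp_index]
ps_velocity_corrected = ps_velocity + offset  # same constant offset for all PSs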

To start the geocoding process, insert the auxiliary file and the DEM (found in the common inputs folder) and leave all the default parameters as shown in the following screenshots. Set the Make Geocoded Shape flag to true: the slant range products are then geocoded onto the Digital Elevation Model cartographic reference system and the final PS products are generated in vector format. The results should be analyzed using the Vector Time Series Analyzer. GPS data are not inserted for this tutorial.

Figure 20: Principal parameters of the PS Geocoding panel for ASAR Processing.


Figure 21: Principal parameters of the PS Geocoding panel for Palsar Processing.

Geocoded products

The output results consist of geocoded products, which correspond to the outputs of both inversion step 1 and inversion step 2. These are:

Ref_GCP_geo, shape file corresponding to the Reference Points of the highest MuSigma values (i.e.

those used for the phase-offset removal) automatically selected during the step 1.

SubAreas_geo, shape file corresponding to the sub areas estimated according to the "Area For Single

Reference Point" parameter.

mean_geo, SAR Intensity average image and associated header files (.sml, .hdr).

Meta files (_meta), useful to load at once the displacement velocity, the residual height, the coherence images and the date-by-date displacements (the latter only when the "Geocoded Raster Products" flag is checked).

Meta files (_meta), with the displacement projected along the maximum slope direction (_SD) and

on the vertical plane (_VD).

"work" subfolder, where intermediate processing results are stored.

PS map and related information (PS_thrCohe_Id.shp and .kml), where thrCohe is the

coherence threshold and Id is the shape index.

Maximum slope direction values (_ADF), with the associated header files (.sml, .hdr).

Maximum slope inclination values (_IDF), with the associated header files (.sml, .hdr).

Azimuth Line of Sight (_ALOS) with the associated header files (.sml, .hdr). Positive angles are

measured clockwise from the North; negative angles are measured counterclockwise from the North.

Incidence angle of the Line of Sight (_ILOS) with the associated header files (.sml, .hdr). The

angle is measured between the Line Of Sight and the vertical on the ellipsoid (flat earth).


General products

The general products are:

_geo_vel+height_meta, which refers to the height and displacement velocity measurements in the

output cartographic projection.

_geo_otherinfo_meta, which refers to the power mean, the multitemporal coherence, the height

measurement precision and the corrected height measurements in the output cartographic projection.

_geo_disp_first_meta, which refers to the date-by-date displacements, measured in the output

cartographic projection, without atmospheric correction.

_geo_disp_meta, which refers to the date-by-date displacements, after the atmospheric correction,

in the output cartographic projection.

At the end of the Geocoding step, it is finally possible to move the entire "Root name"_PS_processing folder to another disk location without causing any problems in the further steps.

Note: The displacement results can be projected along the maximum slope direction (_SD) and on the vertical plane (_VD) only if the "make geocoding raster" flag has been selected.


Time Series Analyzer

A graph showing the extracted displacement information for a selected PS can be created using General Tools>Vector Analyzer (in case Make Geocoded Shape has been selected in the Geocoding step) or using General Tools>Raster Analyzer (in case Make Geocoded Raster has been selected in the Geocoding step). In this tutorial, only vector results are taken into account.

To study the displacement time series of a specific PS, load all the .shp results as well as the mean_geo image in ENVI. Once one .shp is selected in the Layer Manager, launch the General tools>Time Series Analyzer>Vector tool (Figure 22). Choose a data range, click on Color apply and then select a PS on the image. Once the desired PS is selected, click on "Plot Time Series" in order to open a plot showing the movement history in time.

Clicking with the secondary mouse button on a different .shp file in the Layer Manager and selecting "Set as active layer" changes the active layer (Figure 23). This can be done even leaving the TS Vector Analyzer window open.

Tip: Choose the same data range for all the different shapefiles.

Note It is not possible to use the Vector Time Series Analyzer if the mean_geo file is not loaded.
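As an alternative to the interactive tool, the geocoded shapefile can also be inspected with a short script. This is a hedged sketch: it assumes geopandas and matplotlib are available and that the date-by-date displacement attributes follow a "D_YYYYMMDD" naming pattern; check the actual attribute table of the PS shapefile before relying on it.

import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd

shp = gpd.read_file("PS_75_0.shp")             # one of the geocoded PS shapefiles

# Columns holding the displacement time series (assumed naming convention)
date_cols = [c for c in shp.columns if c.startswith("D_")]
dates = pd.to_datetime([c[2:] for c in date_cols], format="%Y%m%d")

point = shp.iloc[0]                            # pick one PS (by index, attribute or location)
plt.plot(dates, point[date_cols].astype(float), "o-")
plt.ylabel("LOS displacement [mm]")
plt.title("PS displacement time series")
plt.show()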

Figure 22: Custom data range for the PS_75_0.shp layer, in order to have all the velocity layers with the same range, namely from -15 to 15 mm/y.


Figure 23: A secondary mouse click on a non-active layer and "Set as Active Layer" changes the shapefile on which the TS Vector Analyzer has effect.


Results

As can be seen in Figure 24 and Figure 26, the Urayasu reclaimed land (Zone A) shows a pronounced subsidence, whereas in the inland area (Zone B) the surface shows a more stable behavior. It can also be seen that the ASAR time series are more stable in comparison to PALSAR; in fact, the ASAR stack contains more input acquisitions than the PALSAR one. In addition, the difference in wavelength between the two sensors has an influence on the ground coverage.

It can be noticed that, even if the coverage is not the same, the structure remains the same; in particular, the two subsidence zones are detected by both sensors. In order to obtain exhaustive results, it is suggested, when possible, to compare the results coming from different sensors.

Note: Only a maximum of 99,999 points can be stored in one single .shp file. If the number of PSs found is larger than 99,999, the PS points are split into different shapefiles. To avoid this, increasing the Product Coherence Threshold will produce fewer PSs, which are then more likely to be stored in one single .shp file. However, the spatial distribution of points will be less dense.

Note: Sometimes, depending on the video card, the use of a remote desktop connection does not allow showing different colors on the shapefile. To work around this limitation, please connect directly at the remote location and reboot the machine, or copy the results to a local computer.

[Figure legend: velocity color scale from -15 mm/y to +15 mm/y; Envisat PS results; zones A and B marked.]

Figure 24: ASAR PS results for velocity displayed on the Mean Geo intensity image.



Figure 25: Three time series on three different ASAR PS.

[Figure legend: velocity color scale from -15 mm/y to +15 mm/y; PALSAR PS results; zones A and B marked.]

Figure 26: Palsar PS results for velocity displayed on the Mean Geo intensity image.



Figure 27: Three time series on three different Palsar PS.


Stacking methods comparison: PS and SBAS

Stacking interferometry methods are ideal to exploit a series of N SAR images (interferometric stack) in order to identify areas (pixels) that show a coherent and consistent signal (displacement) over time. PS and SBAS have properties and characteristics that make them more or less suitable according to the type of area of interest and the expected results.

The Persistent Scatterers (PS) technique is intended for the analysis of point targets. The resulting product is relevant to the measurement of linear displacements and the derivation of precise heights of local scatterers, which are typically characterized by high coherence. The number of acquisitions is crucial for the coherence estimate, which, in turn, enables the identification of suitable PSs. The use of an insufficient number of acquisitions will produce a high coherence estimate throughout the entire scene, resulting in an overestimation of PSs and, consequently, in false displacements. The identification of PSs is generally considered reliable when 20 or more acquisitions are used and temporally regular acquisitions are available. PS should exclusively be used in urban areas or, in general, where scatterers remain stable in radiometric and interferometric phase terms.

Depending upon the scatterer stability (temporal coherence), the achievable displacement precision can reach the millimeter level, while the maximum velocity is limited by the minimum time distance between acquisitions and by the sensor wavelength. Concerning the elevation, the PS technique provides higher accuracy for the identified scatterers with respect to the Small Baseline Subset technique (refer to 2), particularly in layover areas (typically skyscrapers in urban areas): while PS enables the identification of single scatterers and a phase unwrapping step can be avoided, SBAS, due to its spatial phase unwrapping process, tends to smooth the elevations.

The Small Baseline Subset (SBAS) technique is intended for the analysis of distributed targets. The resulting product resembles the one generated by conventional DInSAR; the key difference is that SBAS enables the analysis of large time series, while DInSAR is limited to 2-, 3- and 4-pass configurations. With respect to PS, the SBAS technique is less sensitive to the number of acquisitions, because it exploits the spatially distributed coherence instead of exclusively considering single points, as PS does. However, it is worth mentioning that, in general, the more acquisitions, the better the resulting product quality, also because the atmospheric component can be better estimated and reduced. Concerning the displacement, SBAS is not limited to the linear model as PS is: beside the linear displacement, quadratic and cubic models are supported, and displacements that do not follow a model can also be derived. Concerning the maximum displacement, while there are no constraints in temporal terms, the displacement is limited with respect to its spatial variability, due to the phase unwrapping. In addition, the SBAS approach is more robust than the PS one, because it takes advantage of the higher redundancy of all available cross-interferograms.

Consistent, mm/year-precise displacement rate measurements can be obtained with SAR Interferometric Stacking approaches and the proper software tools in an operational way;

The SBAS approach is more suitable to provide dense coverage and higher precision for spatially smooth phenomena, while PS is more attractive for high spatial resolution phenomena;

Lower frequencies (e.g. L-band) allow increasing the spatial coverage; higher frequencies (e.g. C-band) provide better precision;

The availability of the different approaches and sensors allows covering the broadest variety of application cases.


Regardless of the selected technique, pre-requisites are:

all data must be acquired by the same sensor;

all data must be acquired with the same viewing geometry;

in case of multi-polarization acquisitions, the same polarization must be used.

In order to obtain reliable results, it is always a good idea to combine these two techniques and cross-check the results. In addition to using two stacking methods, it can also be helpful to have results coming from two or more sensors operating with different wavelengths.