Digital Image Processing and Interpretation


Digital image processing and interpretation for remote sensing studies.


Introduction to Digital Image Interpretation

What is a Digital Image?

Most remote sensing data can be represented in two interchangeable forms:

Photograph-like imagery

Arrays of digital brightness values

Colour Composite Displays

We typically create multispectral image displays, or colour composite images, by showing different image bands in varying display combinations.

True Colour Composites

Standard False Colour Composites

Colour Composite Images

General Appearance of Surface Features on Colour Composite Images

Feature               True Colour                False Colour
trees and bushes      olive green                red
crops                 medium to light green      pink to red
wetland vegetation    dark green to black        dark red
water                 shades of blue and green   blue to black
urban areas           white to light blue        blue to grey
bare soil             white to light grey        blue to grey

Source: U.S. Department of Defense, 1995. Multispectral Users Guide.

Digital Image Processing Steps

1. Preprocessing

2. Enhancement

3. Transformation

4. Classification

Image Preprocessing

Preprocessing operations aim to correct distorted or degraded image data to create a more faithful representation of the original scene; this is often called "rectification and restoration". Examples include spatial filtering, radiometric restoration (e.g. destriping), and geometric correction.

Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and are generally grouped as radiometric corrections and geometric corrections.

Radiometric corrections include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so they accurately represent the reflected or emitted radiation measured by the sensor.

Geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations, and conversion of the data to real world coordinates (e.g. latitude and longitude) on the Earth's surface.

Various methods of atmospheric correction can be applied ranging from detailed modeling of the atmospheric conditions during data acquisition, to simple calculations based solely on the image data.

An example of the latter method is to examine the observed brightness values (digital numbers) in an area of shadow or for a very dark object (such as a large clear lake) and determine the minimum value. The correction is applied by subtracting the minimum observed value, determined for each specific band, from all pixel values in each respective band.
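A minimal sketch of this dark-object subtraction, assuming the image is held as a NumPy array of shape (bands, rows, cols); the array layout and function name are illustrative, not part of the original material:

import numpy as np

def dark_object_subtraction(image):
    """Per-band dark-object (haze) correction.

    image: array of digital numbers, shape (bands, rows, cols).
    Subtracts each band's minimum DN from every pixel in that band,
    on the assumption that the darkest pixel should ideally be zero.
    """
    minima = image.min(axis=(1, 2), keepdims=True)  # one minimum per band
    return image - minima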

Noise in an image may be due to irregularities or errors that occur in the sensor response and/or data recording and transmission. Common forms of noise include systematic striping or banding and dropped lines. Both of these effects should be corrected before further enhancement or classification is performed.
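The slides do not give a recipe for repairing dropped lines; one simple, commonly taught fix (an illustrative sketch, not the authors' method) is to replace each dropped scan line with the average of its neighbours:

import numpy as np

def fix_dropped_lines(band, bad_rows):
    """Replace dropped scan lines with the mean of adjacent lines.

    band: 2-D array (rows, cols) for a single image band.
    bad_rows: row indices identified as dropped (e.g. all-zero) lines.
    """
    fixed = band.astype(float)
    for r in bad_rows:
        if 0 < r < band.shape[0] - 1:  # skip first and last rows
            fixed[r] = (fixed[r - 1] + fixed[r + 1]) / 2.0
    return fixed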

Image Registration (Geo-referencing)

Registration is the process of superimposing an image over a map, or over other already registered data. Image registration or "geo-referencing" can be divided into two types: image-to-image registration and image-to-map registration. Selected image data of the Khorat area was rectified with reference to the 1:50 000 scale topographic maps (image-to-map registration). Further imagery was geo-referenced to this already registered satellite image using image-to-image registration.

The geometric registration process involves identifying the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (or GCPs), in the distorted image, and matching them to their true positions in ground coordinates (e.g. latitude, longitude).

The true ground coordinates are typically measured from a map, either in paper or digital format. This is image-to-map registration.
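The slides do not show how the GCPs are turned into a correction; a common approach (sketched here as an assumption, with illustrative names) is to fit a first-order polynomial, i.e. an affine transform, from image to map coordinates by least squares:

import numpy as np

def fit_affine(image_xy, map_xy):
    """Fit map = A @ [col, row, 1] from matched GCP pairs.

    image_xy: (n, 2) array of GCP (column, row) image coordinates.
    map_xy:   (n, 2) array of the matching ground coordinates
              (e.g. easting, northing). Needs at least 3 GCPs.
    Returns a (2, 3) coefficient matrix A found by least squares.
    """
    n = image_xy.shape[0]
    design = np.hstack([image_xy, np.ones((n, 1))])  # [col, row, 1]
    coeffs, *_ = np.linalg.lstsq(design, map_xy, rcond=None)
    return coeffs.T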

Geometric registration may also be performed by registering one (or more) images to another image, instead of to geographic coordinates. This is called image-to-image registration and is often done prior to performing various image transformation procedures.

In order to actually geometrically correct the original distorted image, a procedure called resampling is used to determine the digital values to place in the new pixel locations of the corrected output image.

There are three common methods for resampling: nearest neighbour, bilinear interpolation, and cubic convolution.

Nearest neighbour resampling uses the digital value from the pixel in the original image which is nearest to the new pixel location in the corrected image.

This is the simplest method and does not alter the original values, but may result in some pixel values being duplicated while others are lost. This method also tends to produce a disjointed or blocky image appearance.

Bilinear interpolation resampling takes a weighted average of four pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel values and creates entirely new digital values in the output image.

This may be undesirable if further processing and analysis, such as classification based on spectral response, is to be done. If this is the case, resampling may best be done after the classification process.

Cubic convolution resampling goes even further, calculating a distance-weighted average of a block of sixteen pixels from the original image which surround the new output pixel location. As with bilinear interpolation, this method results in completely new pixel values.

However, these two methods both produce images which have a much sharper appearance and avoid the blocky appearance of the nearest neighbour method.
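A minimal sketch of the first two methods at a single fractional source location (x, y), assuming a single-band NumPy array and ignoring edge handling; the function names are illustrative:

import numpy as np

def nearest_neighbour(band, x, y):
    """DN of the original pixel closest to fractional location (x, y)."""
    return band[int(round(y)), int(round(x))]

def bilinear(band, x, y):
    """Distance-weighted average of the four pixels surrounding (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    top = (1 - dx) * band[y0, x0] + dx * band[y0, x0 + 1]
    bottom = (1 - dx) * band[y0 + 1, x0] + dx * band[y0 + 1, x0 + 1]
    return (1 - dy) * top + dy * bottom

Nearest neighbour returns an existing DN unchanged (preserving the original values), while bilinear creates new, averaged DNs, which is why classification is often run before this kind of resampling.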


Spatial filtering

• Spatial information
  – Things close together are more alike than things further apart (spatial auto-correlation)
  – Many features of interest have spatial structure, such as edges, shapes, patterns (roads, rivers, coastlines, irrigation patterns, etc.)

• Spatial filters divide into two broad categories
  – Feature detection, e.g. edges
  – Image enhancement, e.g. smoothing "speckly" data such as RADAR


Low/high frequency


Gradual change = low frequency

Rapid change = high frequency


How do we exploit this?

• Spatial filters highlight or suppress specific features based on spatial frequency
  – Spatial frequency is related to texture: rapid changes of DN value = "rough", slow changes (or none) = "smooth"

 43  49  48  49  51
 43  50  65  54  51
 12  14   9   9  10
 43  49  48  49  51
210 225 199 188 189

Most of this grid is smooth(ish), with only gradual DN changes; it is rough(ish) around row 3, a darker horizontal linear feature, and row 5, a bright horizontal linear feature.


Convolution (spatial) filtering

• Construct a "kernel" window (3x3, 5x5, 7x7, etc.) to enhance or remove these spatial features

• Compute a weighted average of the pixels in the moving window, and assign that average value to the centre pixel

• The choice of weights determines how the filter affects the image


Convolution (spatial) filtering

• The filter moves over all pixels in the input, calculating the value of the central pixel each time, e.g.

Input image:

 43  49  48  49  51
 43  50  65  54  51
 12  14   9   9  10
 43  49  48  49  51
210 225 199 188 189

Filter (3x3 mean kernel):

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Output image:

?? ?? ??


Convolution (spatial) filtering

• For the first pixel in the output image:
  – Output DN = 1/9*43 + 1/9*49 + 1/9*48 + 1/9*43 + 1/9*50 + 1/9*65 + 1/9*12 + 1/9*14 + 1/9*9 = 37
  – Then move the filter one place to the right and do the same again, so output DN = 1/9*(49+48+49+50+65+54+14+9+9) = 38.6
  – And again: DN = 1/9*(48+49+51+65+54+51+9+9+10) = 38.4

• This is a mean filter
• It acts to "smooth" or blur the image


Output image (the three values computed above):

37 38.6 38.4
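A short sketch reproducing this worked example (valid-region convolution only, with no edge padding, so the 5x5 input yields a 3x3 output):

import numpy as np

image = np.array([
    [ 43,  49,  48,  49,  51],
    [ 43,  50,  65,  54,  51],
    [ 12,  14,   9,   9,  10],
    [ 43,  49,  48,  49,  51],
    [210, 225, 199, 188, 189],
], dtype=float)

kernel = np.full((3, 3), 1.0 / 9.0)  # 3x3 mean (low-pass) filter

# Slide the window over every position where it fits fully inside
# the image, assigning the weighted average to the centre pixel.
rows, cols = image.shape
out = np.zeros((rows - 2, cols - 2))
for i in range(rows - 2):
    for j in range(cols - 2):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

print(out[0])  # [37.0, 38.55..., 38.44...] -> 37, 38.6, 38.4 rounded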

Convolution (spatial) filtering

• The mean filter is known as a low-pass filter, i.e. it lets low frequency information pass through but smooths out high frequency (rapidly changing) DN values
  – Used to remove high frequency "speckle" from data

• The opposite is a high-pass filter
  – Used to enhance high frequency information, such as lines and point features, while suppressing low frequency information
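The slide's high-pass kernel did not survive extraction; a commonly used 3x3 high-pass kernel, given here as an illustrative stand-in:

import numpy as np

# Typical 3x3 high-pass kernel: the weights sum to zero, so uniform
# (low-frequency) areas map to ~0 while rapid DN changes (edges and
# point features) are strongly enhanced.
high_pass = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
], dtype=float)

It can be applied with the same moving-window loop as the mean-filter sketch above.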


Convolution (spatial) filtering

• Can also have directional filters
  – Used to enhance edge information in a given direction
  – A special case of the high-pass filter

(The slide shows a vertical edge enhancement filter and a horizontal edge enhancement filter.)
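The directional kernels themselves are not recoverable from this transcript; the Prewitt operators are standard examples of this kind of filter and are used here as illustrative stand-ins:

import numpy as np

# Prewitt operators: each responds to DN gradients in one direction.
vertical_edge = np.array([   # left-right DN changes -> vertical edges
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

horizontal_edge = vertical_edge.T  # up-down changes -> horizontal edges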


Practical

• Try out various filters of various sizes
• See what effect each has, and construct your own filters
  – High-pass filters are used for edge detection
    • Often used in machine vision applications (e.g. robotics and/or industrial applications)
  – Directional high-pass filters are used to detect edges of a specific orientation
  – Low-pass filters are used to suppress high frequency information, e.g. to remove "speckle"


Example: low-pass filter

• ERS-1 RADAR image, Norfolk, 18/4/97
• Original (left) and low-pass "smoothed" (right)


Example: high-pass edge detection

• SPOT image, Norfolk, 18/4/97
• Original (left) and directional high-pass filtered for edge detection (right)
