
Spatial Resolution

Radiometric resolution

Multispectral vs. Hyperspectral Image

Concepts of Aerial Photography

• Scale of photograph: the ratio of a distance on the photograph to the corresponding distance on the ground; for a vertical photograph it equals the focal length divided by the flying height above the terrain (f/H).
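For example (an illustrative calculation, not from the original slides): a camera with a 152 mm focal length flown 3,040 m above the terrain gives a scale of f/H = 0.152 m / 3,040 m = 1:20,000.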

Fiducial Marks

• Small registration marks exposed on the edges of a photograph.

Flight plan

Overlap in flight plan

Aerial Photograph mosaic

Stereoscopy

• The ability to see and appreciate depth through the perception of parallax.

Photogrammetry using Stereo pairs

Modern Stereoscopy

Visual Image Interpretation

• The act of examining photographic images for the purpose of identifying objects and judging their significance.

• Elements of visual image interpretation:

• Location

• Size

• Shape

• Shadow

• Tone/color

• Texture

• Pattern

• Height/depth

• Site/situation/association

Size

• Size of objects in an image is a function of scale.

• It is important to assess the size of a target relative to other objects in a scene, as well as its absolute size, to aid in the interpretation of that target.

Shape

• Shape refers to the general form, structure, or outline of individual objects.

• Shape can be a very distinctive clue for interpretation.

Tone/Color

• Tone refers to the relative brightness or colour of objects in an image.

• Variations in tone also allow the elements of shape, texture, and pattern of objects to be distinguished.

Tone vs Color

Texture

• Texture refers to the arrangement and frequency of tonal variation in particular areas of an image.

• Rough textures would consist of a mottled tone where the grey levels change abruptly in a small area.

• Smooth textures would have very little tonal variation.

Pattern

• Pattern refers to the spatial arrangement of visibly discernible objects.

• Typically, an orderly repetition of similar tones and textures produces a distinctive and ultimately recognizable pattern.

Shadow

• Shadow is also helpful in interpretation, as it may give an idea of the profile and relative height of a target, which can make identification easier.

• Shadows can also hinder or prevent interpretation within their area of influence.

Site/Situation/Association

• Association takes into account the relationship between other recognizable objects or features in proximity to the target of interest.

Visual Image interpretation

Advantages

The interpreter's knowledge and experience are brought to bear

Excellent for extracting spatial information

Limitations

Time-consuming

Results vary between individual interpreters

Simultaneous analysis of multiple bands/images is difficult

Serious bias may be introduced by the analyst's own assumptions or expectations

Digital Image interpretation

• Digital image processing is the application of algorithms to digital images to perform processing, analysis, and information extraction.

• Data must be recorded and available in digital form.

• Data recorded on photographic film can also be converted into digital form, but only a few digital processing techniques can be applied to it.

Advantages of Digital Image Processing

• Short processing time

• Reproducibility

• Extraction of physical quantities

• Data commonly transmitted or converted to digital format

• Analysis of individual points (pixels)

• Analysis of multiple bands/images in a single platform

• Handling of large data volumes

• Accuracy assessments

Digital Image

Pixels

• The smallest non-divisible two-dimensional element of an image is called a pixel.

• Each pixel stores a digital number (DN) measured by the sensor.

• Each pixel represents an individual area scanned by the sensor.

• Smaller pixel size offers greater spatial accuracy.

Process of Digital Image Processing

Pre-processing

• In their raw form, remotely sensed data generally contain flaws or deficiencies. The correction of deficiencies and the removal of flaws present in the data are termed pre-processing.

• Pre-processing includes:

• Radiometric corrections

• Geometric corrections

• Miscellaneous pre-processing

Radiometric Correction

• The main purpose for applying radiometric corrections is to reduce the influence of errors or inconsistencies in image brightness values.

• Radiometric errors and inconsistencies are often referred to as “noise”.

• Noise means any undesirable variation in image brightness.

Destriping

• Striping occurs if a detector goes out of adjustment.

• Lines recorded by the affected detector appear lighter or darker than those of neighboring detectors.

• Correction is applied by selecting one detector as a standard and adjusting the brightness of the pixels recorded by the other detectors to match it.

• An independent reference brightness value can also be used.

Removal of Missing Scan Lines

• A missing scan line occurs when a detector either completely fails to function or becomes temporarily saturated during a scan.

• It is corrected by replacing the bad line with a line of estimated data file values based on the lines above and below it.
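A minimal sketch of this line replacement (illustrative code, not from the slides; `band` is assumed to be a 2-D array of DNs):

```python
import numpy as np

# Replace a missing scan line with the average of the lines directly
# above and below it (simple two-line interpolation).
def fix_missing_line(band: np.ndarray, bad_row: int) -> np.ndarray:
    fixed = band.astype(float).copy()
    fixed[bad_row] = (fixed[bad_row - 1] + fixed[bad_row + 1]) / 2.0
    return fixed.astype(band.dtype)
```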

Random Noise Removal

• Odd pixels with anomalous DN values occur frequently in images; if they are not systematic, they can be treated as random noise.

• Noisy pixels can be replaced by substituting an average value of neighboring DNs.
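A minimal sketch of that neighborhood averaging (illustrative; `noisy` is assumed to be a boolean mask flagging the bad pixels):

```python
import numpy as np

# Replace each flagged pixel with the mean of its non-noisy 3x3 neighbors.
def replace_noisy_pixels(band: np.ndarray, noisy: np.ndarray) -> np.ndarray:
    out = band.astype(float)
    for r, c in zip(*np.nonzero(noisy)):
        rs, cs = slice(max(r - 1, 0), r + 2), slice(max(c - 1, 0), c + 2)
        good = band[rs, cs][~noisy[rs, cs]]
        if good.size:
            out[r, c] = good.mean()
    return out.astype(band.dtype)
```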

Atmospheric Correction

• Atmospheric effects are not considered errors.

• They are part of the signal received by the sensing device.

• A number of algorithms have been developed to correct atmospheric effects.

Examples of atmospheric correction (before and after). Courtesy: CCRS

Geometric Correction

• Digital images often contain systematic and non-systematic geometric errors arising from Earth curvature, platform motion, relief displacement, non-linearities in scanning motion, Earth rotation, etc.

• Raw digital images are also not geographically referenced.

• Removing these errors is known as geometric correction.

Systematic Correction

• Systematic errors:

• Scan skew

• Known mirror-velocity variation

• Earth-rotation skew

• Platform velocity variation

• Systematic distortions are corrected by applying formulas derived by mathematically modelling the sources of distortion.

Non-Systematic Errors

• Corrected by establishing the relationship between two different coordinate systems.

• Two approaches:

• Image to ground geo-correction (georeferencing)

• Image to image correction (registration)

Image to Ground Geo-correction

Geo-correction is the process of giving an image a real-world coordinate system, e.g. 116° 41′ 52.20″ W, 33° 50′ 03.23″ N (longitude, latitude).

Image to Image Correction

Assigning the coordinate system of one image to a second image of the same area, i.e. fitting the coordinate system of one image to another.

Subsetting

Breaking out a portion of a large file/image (the area of interest) into one or more smaller files/images.

Mosaicking

Combining multiple scenes to cover a larger area.

Image Enhancement

• Image enhancement can be defined as the modification of an image to a better and more understandable level for feature extraction or image interpretation.

• Enhancement is generally applied to single-band images or separately to the individual bands of a multi-band image.

• The principal objective is to process an image so that the result is more suitable than the original image for a specific application.

Procedures

• Two types of procedures:

• Point operations: change the value of each individual pixel independently of all other pixels.

• Local operations: change the value of individual pixels in the context of the values of neighboring pixels.

Image Reduction

2x image reduction

Original image (8 × 10):

30 55 35 76 48 89 98 36 33 76
87 34 55 98 45 75 62 98 78 12
12 69 87 36 87 69 47 69 78 98
39 87 95 22 36 14 65 68 38 35
66 54 57 85 95 36 96 16 3 9
69 97 65 98 32 72 91 38 78 65
54 92 80 60 34 43 78 53 58 50
5 75 98 75 98 78 89 65 88 9

Reduced image (every second row and every second column retained, 4 × 5):

30 35 48 98 33
12 87 87 47 78
66 57 95 96 3
54 80 34 78 58
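In code this sampling is a single slicing step (a minimal numpy sketch, illustrative only); applied to the 8 × 10 grid above it yields the 4 × 5 grid shown:

```python
import numpy as np

# 2x reduction: keep every second row and every second column.
def reduce_2x(img: np.ndarray) -> np.ndarray:
    return img[::2, ::2]
```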

Image Magnification

2x image magnification

Original image (4 × 5):

30 35 48 98 33
12 87 87 47 78
66 57 95 96 3
54 80 34 78 58

Magnified image (each pixel duplicated into a 2 × 2 block, 8 × 10):

30 30 35 35 48 48 98 98 33 33
30 30 35 35 48 48 98 98 33 33
12 12 87 87 87 87 47 47 78 78
12 12 87 87 87 87 47 47 78 78
66 66 57 57 95 95 96 96 3 3
66 66 57 57 95 95 96 96 3 3
54 54 80 80 34 34 78 78 58 58
54 54 80 80 34 34 78 78 58 58
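The corresponding pixel-duplication step (a minimal sketch, illustrative only):

```python
import numpy as np

# 2x magnification: duplicate every pixel into a 2x2 block.
def magnify_2x(img: np.ndarray) -> np.ndarray:
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
```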

Color Compositing

• A color image can be generated by compositing three selected bands of a multi-band image using the three primary colors (RGB).

• A display contains three color guns (red, green, blue), hence only three bands can be viewed at a time.

• Color composites:

• Additive color composites, which use red, green and blue.

• Subtractive color composites, which use the three complementary pigments: cyan, magenta and yellow.

True Color Image vs False color image

• True color: a combination in which the image captured in the blue band is displayed through the blue gun, the red band through the red gun and the green band through the green gun.

• False color: when one of the primary colors is not captured, mathematical combinations are used instead, e.g. if no blue band is available:

RED = Red band
GREEN = 0.75 × Green band + 0.25 × NIR band
BLUE = 0.75 × Green band − 0.25 × NIR band

• Infrared color composite:

BLUE = Green band
GREEN = Red band
RED = Infrared (NIR) band
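A minimal sketch of both composites (illustrative; `green`, `red` and `nir` are assumed to be co-registered 2-D bands scaled 0–255):

```python
import numpy as np

# False-color composite using the synthetic-blue combination above.
def false_color(green, red, nir):
    r = red.astype(float)
    g = 0.75 * green.astype(float) + 0.25 * nir.astype(float)
    b = 0.75 * green.astype(float) - 0.25 * nir.astype(float)
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)

# Infrared color composite: NIR -> red gun, red -> green gun, green -> blue gun.
def infrared_composite(green, red, nir):
    return np.dstack([nir, red, green]).astype(np.uint8)
```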

True color vs False color

Infrared color composite

Transect Extraction

Pixels that lie on a transect can be measured and displayed to compare spectral or spatial differences across bands (e.g. Band-1 to Band-4).

Contrast Enhancement

• The range of brightness values present in an image is referred to as contrast.

• Contrast enhancement is a process that makes the image features stand out more clearly by making optimum use of the color intensities available on the display or output device.

• Look-up table (LUT): the computer stores the new brightness values in a LUT and uses these values to display the image.
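A minimal sketch of the LUT idea (illustrative): the stretch is computed once for all 256 possible DNs, and the image is displayed by indexing into the table.

```python
import numpy as np

# Build a 256-entry LUT implementing a min-max linear stretch.
def build_minmax_lut(img: np.ndarray) -> np.ndarray:
    lo, hi = int(img.min()), int(img.max())
    dn = np.arange(256, dtype=float)
    return np.clip((dn - lo) * 255.0 / max(hi - lo, 1), 0, 255).astype(np.uint8)

img = np.random.randint(40, 180, (100, 100), dtype=np.uint8)  # demo band
stretched = build_minmax_lut(img)[img]  # apply the LUT by indexing
```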

Histogram

• A histogram is a graph of data frequency or distribution.

• For an image, it is a statistical, graphical representation of the range of tones from dark to light and the number of pixels at each tone.
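For an 8-bit band the histogram is simply the pixel count at each DN (a minimal numpy sketch):

```python
import numpy as np

# Count the number of pixels at each DN value 0..255.
def dn_histogram(band: np.ndarray) -> np.ndarray:
    return np.bincount(band.ravel().astype(np.int64), minlength=256)
```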

Image contrast and Histogram

• Contrast: the range of brightness values present in an image.

Image Contrast

Contrast Enhancement

Contrast manipulation involves changing the range of brightness values in an image in order to increase the contrast.

Types of Contrast Enhancement

• Linear contrast enhancement:

• Minimum–maximum linear contrast stretch

• Percentage linear contrast stretch (see the sketch after this list)

• Average and standard deviation stretch

• Piecewise linear contrast stretch

• Nonlinear contrast enhancement:

• Histogram equalization

• Histogram normalization

• Reference stretch

• Density slicing

• Thresholding
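A minimal sketch of the percentage linear contrast stretch (illustrative; the default 5% tail is an assumption, chosen to match the "5% tail trim" panel below):

```python
import numpy as np

# Percentage linear stretch: clip the darkest/brightest tails, then
# stretch the remaining range linearly to 0..255.
def percent_stretch(band: np.ndarray, tail: float = 5.0) -> np.ndarray:
    lo, hi = np.percentile(band, [tail, 100.0 - tail])
    out = (band.astype(float) - lo) * 255.0 / max(hi - lo, 1e-9)
    return np.clip(out, 0, 255).astype(np.uint8)
```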

Minimum–Maximum Stretch

Saturation Stretch

Average and Standard Deviation Stretch

Piecewise Stretch

Different Contrast Enhancements

Panels: no stretch; min–max stretch; 5% tail trim; 20% tail trim; 1 × standard deviation; 2 × standard deviation.

Filtering

• Filtering is a process of changing the spatial frequency content of an image.

• Purposes:

• To improve interpretability of image data

• To highlight or suppress specific features of an image based on their spatial frequency

• Methods:

• Convolution filtering

• High Pass

• Low Pass

• Edge Detection

Spatial Frequency

Panels: zero spatial frequency; low spatial frequency; high spatial frequency.

Convolution Filtering

• Convolution means moving a window of set dimensions (3×3, 5×5, …) over each pixel in the image, applying a mathematical calculation to the pixel values under the window, and replacing the central pixel with the calculated value.

• This window is known as a “convolution kernel”.
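A minimal sketch of convolution filtering with the kernels discussed on the next slides (illustrative kernel values; the zero-sum kernel shown is one common choice for edge detection):

```python
import numpy as np
from scipy.ndimage import convolve

low_pass = np.full((3, 3), 1 / 9.0)                # mean kernel: smooths
high_pass = np.array([[-1., -1., -1.],
                      [-1.,  9., -1.],
                      [-1., -1., -1.]])            # sums to 1: sharpens
edge = np.array([[-1., -1., -1.],
                 [-1.,  8., -1.],
                 [-1., -1., -1.]])                 # zero-sum: edge detection

def apply_kernel(band: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # Slide the kernel over every pixel and replace the central value
    # with the weighted sum of its neighborhood.
    return convolve(band.astype(float), kernel, mode='nearest')
```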

Low Pass filtering

• Low-frequency kernels are applied to decrease the spatial frequency (producing a smoother image).

Low Pass Filtering

Image before and after low-pass filtering.

High Pass Filter

• It is the opposite of a low-pass filter (it sharpens the image).

• High-frequency kernels are applied to increase the spatial frequency (producing a sharper image).

High Pass Filtering

Image before and after high-pass filtering.

Edge Detection Filtering

• Edge-detection filters are used to highlight linear features such as roads and field boundaries.

• Zero-sum kernels are used, and division is not applied.

• This generally causes the output values to be zero in areas where all input values are equal, low in areas of low spatial frequency, and extreme in areas of high spatial frequency.

Edge Detection

Image before and after edge-detection filtering.

Image Transformation

• Image transformations generate ‘new’ images from two or more sources that highlight particular features or properties of interest better than the original input images.

• Common transformations:

• Image arithmetic operations

• Principal component transformation (PCT)

• Tasselled cap transformation (TCT)

• Colour space transformation (CST)

• Fourier transformation

• Image fusion

Arithmetic Operations

• Addition

• Subtraction

• Multiplication

• Division

• The images may be separate spectral bands from a single multispectral data set, or individual bands from image data sets collected on different dates.

Image Addition (Averaging)

• In image addition, the new DN value of each pixel in the output image is obtained by averaging the DN values of the corresponding pixels in the input images.

Image Subtraction (Change Detection)

• It is the process of subtracting the DN value of each pixel in one image from that of the corresponding pixel in another image.

Change Detection

Panels: 1987 image; 1997 image; change between 1987 and 1997.
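A minimal sketch of both operations (illustrative; the mid-grey offset for the difference image is a common display convention, not from the slides):

```python
import numpy as np

# Addition (averaging) of two co-registered images.
def average(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    return ((img1.astype(float) + img2.astype(float)) / 2).astype(np.uint8)

# Subtraction (change detection): offset so "no change" displays mid-grey.
def change(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.clip(diff / 2 + 127, 0, 255).astype(np.uint8)
```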

Image Multiplication

• In this process pixel-by-pixel multiplication of two images is performed.

Indices and Ratioing

• Used to create an output image by mathematically combining the DN values of different bands.

• E.g. ratio vegetation index: DN_NIR / DN_Red

• Normalized Difference Vegetation Index (NDVI): (DN_NIR − DN_Red) / (DN_NIR + DN_Red)
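Both indices in code (a minimal sketch; the small epsilon guards against division by zero and is an implementation detail, not part of the definitions):

```python
import numpy as np

def ratio_vi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return nir.astype(float) / np.maximum(red.astype(float), 1e-9)

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)
```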

Concept of Indices

Image Classification

• Image classification is the sorting of pixels into a finite number of individual classes, or categories of data, based on their DN values.

Panels: continuous image; thematic image.

Supervised vs Unsupervised Classification

• Supervised:

• Have a set of desired classes in mind, then create the appropriate signatures from the data.

• Use it when one wants to identify relatively few classes, when one has selected training sites that can be verified with ground-truth data, or when one can identify distinct, homogeneous regions that represent each class.

• Unsupervised:

• Classes are determined by spectral distinctions inherent in the data, and are identified and labelled afterwards.

• Use it when one wants to define many classes easily and then identify the classes.

Training for Classification

• The computer system must be trained to recognize patterns in image data.

• Training is the process of defining the criteria by which these patterns are recognized.

• Supervised training is controlled by the analyst:

• Select pixels that represent known patterns and instruct the computer system to identify pixels with similar characteristics.

• More accurate, but requires greater skill.

• Unsupervised training is computer-automated:

• Specify the number of classes, and the computer uncovers statistical classes in the data.

• Less accurate, and less skill required.

Supervised Classification

Raw data → Pre-processing → Signature collection → Signature evaluation → Classification

Signature

A set of pixels selected to represent each primary land cover, chosen using the analyst's knowledge:

• Field data

• Personal experience

• Photos

• Previous studies

Example classes: lake, river, agriculture, forest.

Minimum Distance Classifier
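The slide gives only the name; below is a minimal sketch of how a minimum distance classifier is commonly implemented (assumed details: Euclidean distance to each class's mean signature):

```python
import numpy as np

# image: (rows, cols, bands); signatures: class name -> (n_pixels, bands)
def minimum_distance(image, signatures):
    names = list(signatures)
    means = np.stack([signatures[n].mean(axis=0) for n in names])  # (k, bands)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = dist.argmin(axis=1).reshape(image.shape[:2])
    return labels, names  # labels[i, j] indexes into names
```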

Unsupervised Classification

• Clustering algorithms are used in unsupervised classification.

• Spectral values of pixels are grouped first and then matched by the analyst to specified classes.

• The analyst specifies how many classes are required and the threshold values of variation within and among the clusters.

• If a cluster needs to be split, or clusters need to be combined, the analyst can make that decision.
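A minimal sketch of one such clustering pass (plain k-means; the slides do not name a specific algorithm, and ISODATA is another common choice):

```python
import numpy as np

# pixels: (n, bands) array of spectral values; returns labels and cluster means.
def kmeans(pixels: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```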

Accuracy Assessment

Accuracy assessment of a remote sensing product is a feedback system for checking and evaluating the objectives and the results.

Contingency Matrix

A contingency matrix (error matrix) is used for accuracy assessment.
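A minimal sketch of building the matrix and an overall-accuracy figure from it (illustrative; `k` is the number of classes, and labels are assumed to be integers 0..k−1):

```python
import numpy as np

# Rows: reference (ground truth) class; columns: classified class.
def contingency_matrix(reference, classified, k):
    m = np.zeros((k, k), dtype=int)
    for r, c in zip(reference.ravel(), classified.ravel()):
        m[r, c] += 1
    overall_accuracy = np.trace(m) / m.sum()  # diagonal = agreement
    return m, overall_accuracy
```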