DIGITAL IMAGE ANALYSIS (ibis.geog.ubc.ca)

Posted on 18-Mar-2020

• Image classification
– Quantitative analysis used to automate the identification of features
– Spectral pattern recognition

• Unsupervised classification
• Supervised classification
• Object-based classification

DIGITAL IMAGE ANALYSIS STEPS

• Fundamental difference:
– Unsupervised classification: assigning meaningful names to software-identified, spectrally similar clusters.

– Supervised classification: assigning meaningful names to user-identified spectral clusters, and then assigning the remaining pixels to a specified class.

– Object-based image classification: creating objects (or segments), assigning meaningful names to some of those spectral and spatial objects, and then assigning the remaining unknown objects to a specified class.

THE YIN YANG OF CLASSIFICATION

• Object-based classification requires the analyst to either:
– Let the software identify spectrally and spatially similar groups of pixels (i.e., 'objects') based upon a set of rules the analyst sets out, and then assign some of the objects to classes so that class signatures can be developed. The software can then assign the remaining objects to a specific class. (Image segmentation; the approach taken by ESRI and IDRISI)

– Identify groups of pixels in the image that belong to a specific class of interest, and let the software automatically identify spectrally and spatially similar groups of pixels. (View the Erdas Imagine Objective video)

TWO DISTINCT APPROACHES

• Information classes are those categories of interest that the analyst is actually trying to identify in the imagery, such as different kinds of crops, different forest types or tree species, different geologic units or rock types, etc.

• Objects are groups of pixels that are uniform (or similar enough) with respect to, for example, the spectral reflectance values in the different bands of the data (but could also include, for example, elevation from a DEM), and that are spatially clustered.

• The aim is to match the objects identified in the data to the information classes of interest.

SUMMARY: THE DIFFERENT CLASSES

Image segmentation is the process of dividing an image into multiple parts. This is typically used to identify objects or other relevant information (groups of pixels with similar spectral reflectances) in digital images.
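As a toy illustration of this idea (not the IDRISI or ESRI algorithm), the sketch below groups 4-connected pixels whose values fall into the same intensity class, a crude stand-in for "similar spectral reflectance"; the function name and two-level quantization are illustrative choices, not part of any package.

```python
from collections import deque

def segment(image, levels=2):
    """Label 4-connected groups of pixels that fall in the same
    intensity class (a crude stand-in for 'similar spectral
    reflectance'). Returns a label image and the segment count."""
    rows, cols = len(image), len(image[0])
    # Quantize each pixel into one of `levels` intensity classes.
    lo = min(min(r) for r in image)
    hi = max(max(r) for r in image)
    span = (hi - lo) or 1
    cls = [[min(int((v - lo) * levels / span), levels - 1) for v in r]
           for r in image]
    labels = [[-1] * cols for _ in range(rows)]
    n = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] != -1:
                continue
            # Flood-fill one segment of same-class neighbours.
            q = deque([(r, c)])
            labels[r][c] = n
            while q:
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] == -1
                            and cls[ny][nx] == cls[y][x]):
                        labels[ny][nx] = n
                        q.append((ny, nx))
            n += 1
    return labels, n
```

Running this on a small image whose left half is dark and right half bright yields two segments, one per region.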

IMAGE SEGMENTATION

The SEGMENTATION module in IDRISI creates an image of segments that have spectral similarity. The image on the left is a false colour composite image. The image on the right is the result of segmentation on the original multispectral bands, producing crown-level segments.

• The segmentation process typically involves four main steps:
1. Identifying segments or objects in the data
– Using Segment Mean Shift (ESRI)
– Using a watershed approach (IDRISI)
2. Creating signatures (spectral response patterns associated with each group of objects of interest)
3. Classifying the image
4. Determining the classification accuracy

OBJECT CLASSIFICATION

• Image objects are groups of pixels that are similar to one another based on a measure of spectral properties (i.e., color), as well as (possibly) size, shape, and texture, derived from a neighborhood surrounding the pixels.

• The following is a list of examples of features commonly used in identifying distinctive objects (the variance factors):

– Color: mean or standard deviation of each band, mean brightness, band ratios
– Size: area, length-to-width ratio, relative border length
– Shape: roundness, asymmetry, rectangular fit
– Texture: smoothness, local homogeneity
– Class level: relation to neighbors, relation to sub-objects and super-objects
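A minimal sketch of computing a few of the listed features for one object, assuming a single band for simplicity; the function name and the `(row, col, value)` pixel representation are illustrative assumptions, not from any of the packages discussed.

```python
import math

def object_features(pixels):
    """Compute a few example object features. `pixels` is a list of
    (row, col, value) tuples for the pixels belonging to one object
    (single band, for simplicity)."""
    values = [v for _, _, v in pixels]
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    rows = [r for r, _, _ in pixels]
    cols = [c for _, c, _ in pixels]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    return {
        "mean": mean,                 # color
        "std": std,                   # color
        "area": n,                    # size
        # shape: bounding-box length-to-width ratio
        "len_width_ratio": max(height, width) / min(height, width),
    }
```

In a real workflow each segment would get such an attribute vector for every band (plus shape and texture measures), and those vectors feed the later classification step.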

IDENTIFYING OBJECTS

• Although this is not the exact filter used, it does illustrate how the outlines of the segments can be identified.

• A high pass filter accentuates the difference between a cell's value and its neighbours' values. It has the effect of highlighting boundaries between features (for example, where a water body meets the forest).
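A simple sketch of such a filter (again, not the exact filter used): each output cell is the centre value minus the mean of its eight neighbours, so flat areas go to roughly zero and boundaries between features stand out.

```python
def high_pass(image):
    """3x3 high-pass filter: centre value minus the mean of the
    8 surrounding neighbours. Flat areas produce ~0; boundaries
    between features produce large magnitudes. Edge cells are
    left as 0 for brevity."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neighbours = [image[r + dy][c + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if (dy, dx) != (0, 0)]
            out[r][c] = image[r][c] - sum(neighbours) / 8
    return out
```

On a uniform image the interior output is 0; where a dark region meets a bright one, the filter responds strongly along the boundary.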

IDENTIFYING OBJECTS

• The values in the variance image can be considered as elevations (the higher the value, the greater the variation between pixels).

• Using methods developed to delineate (actual) watersheds, the software identifies blocks of pixels that can be interpreted as watersheds, bounded by ridges (pixels with the highest variances).
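The variance image itself can be sketched as follows (a simplified stand-in, not IDRISI's implementation): per-pixel variance over a 3x3 window, whose high values form the "ridges" that the watershed step treats as elevations.

```python
def variance_image(image):
    """Per-pixel variance over a 3x3 window. High values mark the
    'ridges' (boundaries between spectrally different regions) that
    the watershed step treats as elevations. Edge cells are left
    as 0 for brevity."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            win = [image[r + dy][c + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            m = sum(win) / 9
            out[r][c] = sum((v - m) ** 2 for v in win) / 9
    return out
```

Inside a uniform region the variance is zero; a window that straddles the boundary between two regions has positive variance, producing the ridge.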

IDENTIFYING WATERSHEDS

• The next step is to merge watersheds (segments) that are adjacent to each other and share similar spectral characteristics (much like sub-basins can be grouped into larger watersheds).

• How similar the adjacent blocks of pixels must be before they are merged is up to the analyst to define.
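The merging step can be sketched as a union of adjacent segments whose means differ by no more than an analyst-chosen threshold; this union-find sketch is a simplified illustration with made-up names, not the actual IDRISI merge rule.

```python
def merge_similar(adjacency, means, threshold):
    """Union adjacent segments whose mean values differ by no more
    than `threshold` (the analyst-chosen similarity). `adjacency` is
    a list of (segment_a, segment_b) pairs; `means` maps segment id
    to its mean value. Returns a dict mapping each original segment
    id to its merged group id."""
    parent = {s: s for pair in adjacency for s in pair}

    def find(s):
        # Walk to the root, compressing the path as we go.
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    for a, b in adjacency:
        if abs(means[a] - means[b]) <= threshold:
            parent[find(a)] = find(b)
    return {s: find(s) for s in parent}
```

With means 10, 12, and 40 on a chain of three adjacent segments and a threshold of 5, the first two merge into one group while the third stays separate.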

MERGING WATERSHEDS

IDENTIFYING OBJECTS

OBJECTS

• Sub-objects could be individual pixels, while super-objects could be adjacent objects that share similar spectral and spatial characteristics.

OBJECT SIGNATURE DEVELOPMENT

OBJECT CLASSIFICATION

• These methods have many more user-specified parameters than do the unsupervised and the supervised classification methods.

• Since the objects (or segments) are identified based on user-defined parameters (such as the minimum difference in the variances between blocks of pixels that can be accepted before the blocks can be joined into a single object, and how much each layer [band] contributes to the overall object identification process), the knowledge of the analyst plays an important role in the overall classification process.

• As with all of the other classification methods, there are many choices to be made throughout the process, and what works well in one instance may not work in another.

OBJECT-BASED CLASSIFICATION

[Figure: true colour composite; default segmentation results; modified segmentation results]

[Figure: true colour composite; maximum likelihood classification; segmentation classification based on the maximum likelihood classification]

ESRI’S IMAGE SEGMENTATION PROCESS

• Segmentation and classification tools provide an approach to extracting features from imagery based on objects. These objects are created via an image segmentation process: pixels in close proximity that have similar spectral characteristics are grouped together into a 'segment'. Segments exhibiting certain shape, spectral, and spatial characteristics can be further grouped into objects. The objects can then be grouped into classes that represent real-world features on the ground.

• The object-oriented feature extraction process is a workflow supported by tools covering three main functional areas: image segmentation, deriving analytical information about the segments, and classification.

• Data output from one tool is the input to subsequent tools, where the goal is to produce a meaningful object-oriented feature class map.

• The object-oriented process is similar to a traditional image, pixel-based classification process, utilizing supervised and unsupervised classification techniques. Instead of classifying pixels, the process classifies segments, which can be thought of as super pixels. Each segment, or super pixel, is represented by a set of attributes that are used by the classifier tools to produce the classified image.
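The last step above can be sketched as a minimum-distance classifier over segment attributes; this is a generic illustration of classifying "super pixels" by their attribute vectors, not ESRI's classifier, and all names are made up for the example.

```python
import math

def classify_segments(segments, signatures):
    """Assign each segment ('super pixel') to the class whose
    signature is nearest in attribute space (a minimum-distance
    classifier). `segments` maps segment id -> attribute vector;
    `signatures` maps class name -> attribute vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return {seg: min(signatures,
                     key=lambda cls: dist(attrs, signatures[cls]))
            for seg, attrs in segments.items()}
```

For example, a segment whose attributes sit near the 'water' signature is labelled water, and one near the 'forest' signature is labelled forest.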

ESRI’S IMAGE SEGMENTATION PROCESS

• Set the level of importance given to the spectral differences of features in your imagery.

• Valid values range from 1.0 to 20.0. A higher value is appropriate when you have features you want to classify separately but have somewhat similar spectral characteristics. Smaller values create spectrally smoother outputs. For example, with higher spectral detail (> 15) in a forested scene, you will be able to have greater discrimination between the different tree species.

[Figure: spectral detail = 1, 10, 20]

• Set the level of importance given to the proximity between features in your imagery.

• Valid values range from 1.0 to 20. A higher value is appropriate for a scene where your features of interest are small and clustered together. Smaller values create spatially smoother outputs. For example, in an urban scene, you could classify an impervious surface using a smaller spatial detail, or you could classify buildings and roads as separate classes using a higher spatial detail.

[Figure: spatial detail = 1, 10, 20]

[Figure: both = 1; both = 10; both = 20]

• Merge segments smaller than this size with their best fitting neighbour segment.

• Units are in pixels.

[Figure: defaults; segments: 20; segments: 80]
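The minimum-segment-size rule above can be sketched as follows; this is a simplified illustration (made-up function and parameter names, "best fitting" reduced to closest mean value), not ESRI's actual merge procedure.

```python
def absorb_small_segments(sizes, means, neighbours, min_size):
    """Reassign every segment smaller than `min_size` (in pixels)
    to its best-fitting neighbour, here taken to be the adjacent
    segment whose mean value is closest. `sizes` and `means` map
    segment id -> pixel count / mean value; `neighbours` maps
    segment id -> list of adjacent segment ids. Returns a dict
    mapping each segment id to its surviving segment id."""
    merged = {s: s for s in sizes}
    for s in sorted(sizes, key=sizes.get):  # smallest first
        if sizes[s] < min_size and neighbours.get(s):
            best = min(neighbours[s],
                       key=lambda n: abs(means[n] - means[s]))
            merged[s] = best
    return merged
```

A 3-pixel segment below a 20-pixel minimum, adjacent to segments with means 10 and 30 while its own mean is 11, is absorbed into the spectrally closer neighbour.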

ISO Classification of a segmented image using all of the defaults

• Object-based Image Analysis (OBIA) is now considered the best approach for higher spatial resolution data (finer than 5 m).
• Accuracy is generally found to be better.
• The ability to include ancillary data (e.g., DEMs) is a great benefit.
• Reduces the need for filtering the results (no salt-and-pepper effect, but still boundary smoothing and merging of too-small objects).
• The results are much easier to manipulate: instead of 4000 pixels labeled 'water' you may have 5 lake polygons.
• Much easier to manually edit the results, since you are working with polygons (blocks of pixels) rather than individual pixels.
• Likely to become the dominant classification method in the future.

OBJECT-BASED CLASSIFICATION

[Figure: unsupervised, supervised, and object-based classification results]

• Object-based classification:
– Guide the software in identifying blocks of pixels (objects) in the image, using image layers as well as, possibly, DEMs and other ancillary layers.

– Assign class names to some of the objects, and develop spectral signatures for those objects.

– Using a supervised classification method, classify the image layers using the object-based spectral signatures.

– Can also perform unsupervised classification.
– Determine the accuracy of the classification.

SUMMARY

• Image classification
– Unsupervised or supervised
– Pixel- or object-based
– Assigning classes in a LULC system
– Outputs are used to create thematic maps

SUMMARY
