

THE USE OF DIGITAL IMAGE PROCESSING TO FACILITATE DIGITIZING LAND COVER ZONES FROM GRAY LEVEL AERIAL PHOTOS

A THESIS PRESENTED TO THE DEPARTMENT OF GEOLOGY AND GEOGRAPHY

IN CANDIDACY FOR THE DEGREE OF MASTER OF SCIENCE

By

JOAN M. BIEDIGER

NORTHWEST MISSOURI STATE UNIVERSITY MARYVILLE, MISSOURI

April 2012


DIGITAL IMAGE PROCESSING

The Use of Digital Image Processing to Facilitate Digitizing

Land Cover Zones from Gray Level Aerial Photos

Joan Biediger

Northwest Missouri State University

THESIS APPROVED

____________________________ Thesis Advisor, Dr. Ming-Chih Hung Date

____________________________ Dr. Yi-Hwa Wu Date

____________________________ Dr. Patricia Drews Date

____________________________ Dean of Graduate School, Dr. Gregory Haddock Date


The Use of Digital Image Processing to Facilitate Digitizing

Land Cover Zones from Gray Level Aerial Photos

Abstract

Aerial imagery from the 1930s to the early 1990s was predominantly acquired using black and white film. Its use in remote sensing applications and GIS analysis is constrained by its limited spectral information and high spatial resolution. As a historical record and for studying long-term land use/land cover change, this imagery is a valuable but often underutilized resource. Traditional classification of gray level aerial photos has relied primarily on visual interpretation and digitizing to obtain land cover classifications that can be used in a GIS. This is a time-consuming and labor-intensive process that often limits the scale of analysis.

This research focused on the use of digital image processing to facilitate visual interpretation and heads-up digitizing of gray level imagery. Existing remote sensing software packages have limited functionality for classifying black and white aerial photos, and traditional image classification alone provides limited results when determining land cover types from gray level imagery. This research therefore approached classification as a system, using digital image processing techniques such as filtering, texture analysis, and principal components analysis to improve supervised and unsupervised classification algorithms and provide a base for digitizing land cover types in a GIS. Post processing operations included smoothing the classification result and converting it to a vector layer that can be further refined in a GIS.


Software tools were developed using ArcObjects to aid the process of refining the vector classification. These tools improve the usability and accuracy of the digital image processing results and help facilitate the visual interpretation and digitizing process, yielding a usable land use/land cover classification from gray level imagery.


TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGMENTS

CHAPTER 1: INTRODUCTION
1.1 Research Objective

CHAPTER 2: LITERATURE REVIEW
2.1 Historical Aerial Imagery Uses and Importance
2.2 Classification Problems of High Resolution Panchromatic Imagery
2.3 Statistical Texture Indicators
2.4 Image Enhancements and Filtering
2.5 Image Segmentation and Object-based Image Analysis

CHAPTER 3: CONCEPTUAL FRAMEWORK AND METHODOLOGY
3.1 Description of Study Area
3.2 Description of Data
3.3 Methodology
3.3.1 Conceptual Overview
3.3.2 Software Utilized
3.3.3 Preliminary Image Processes
3.3.4 Unsupervised Classification
3.3.5 Supervised Classification
3.3.6 Image Enhancement and Texture Analysis
3.3.7 Object-based Image Analysis
3.3.8 Post Processing and Automation
3.3.9 Accuracy Assessment

CHAPTER 4: ANALYSIS RESULTS AND DISCUSSION
4.1 Manual Digitizing
4.2 Unsupervised Classification
4.3 Supervised Classification
4.4 Image Enhancements and Texture Analysis
4.5 Object-based Image Analysis
4.6 Post Processing and Automation
4.7 Classification Accuracy and Results

CHAPTER 5: CONCLUSION
5.1 Limitations of the Research
5.2 Potential Future Developments


APPENDIX 1: ERROR MATRIX TABLES
APPENDIX 2: VECTOR EDITING TOOLBAR .NET CODE
REFERENCES


LIST OF FIGURES

Figure 1 – Aerial photo of Ogden study area
Figure 2 – Overview of study areas in relationship to the state of Utah
Figure 3 – Aerial photo of Salt Lake City study area
Figure 4 – Ogden DOQQ study area
Figure 5 – Salt Lake City MDOQ study area
Figure 6 – Main workflow processes
Figure 7 – Ogden dendrogram of ISODATA clustering, 10 classes
Figure 8 – Ogden dendrogram of ISODATA clustering, 25 classes
Figure 9 – Ogden dendrogram of ISODATA clustering, 100 classes
Figure 10 – Distances between classes from Salt Lake City dendrogram
Figure 11 – Training sample distribution for the Ogden image
Figure 12 – Training sample distribution for the Salt Lake City image
Figure 13 – Unstretched images compared to contrast stretched images
Figure 14 – Post processing ArcGIS model
Figure 15 – Polygon raster to vector, smoothing, and smooth simplify
Figure 16 – Classification using visual interpretation of the Ogden image
Figure 17 – Classification using visual interpretation of the Salt Lake City image
Figure 18 – Ogden image ISODATA classifications
Figure 19 – Salt Lake City image ISODATA classifications
Figure 20 – Minimum distance and support vector machine classification of the Salt Lake City image
Figure 21 – Minimum distance classification of the Ogden image with high pass filter
Figure 22 – Minimum distance classification of the Ogden image with low pass filter
Figure 23 – ISODATA 10 spectral classes Halounova image
Figure 24 – SCRM object-based segmentation images
Figure 25 – Ogden object-based classification image and post processing system vectors
Figure 26 – Salt Lake City object-based classification image and post processing system vectors
Figure 27 – Ogden pixel based classification and post processing system vectors


LIST OF TABLES

Table 1 – First level classification Ogden land use/land cover classes
Table 2 – First level classification Salt Lake City land use/land cover classes
Table 3 – ISODATA overall accuracy results for Ogden and Salt Lake City study areas
Table 4 – Training sample statistics from original Ogden image
Table 5 – Training sample statistics from original Salt Lake City image
Table 6 – Ogden image overall accuracy and level 1 completion time
Table 7 – Salt Lake City image overall accuracy and level 1 completion time
Table 8 – User’s accuracies for individual land use/land cover types, Ogden study area
Table 9 – User’s accuracies for individual land use/land cover types, Salt Lake City study area
Table 10 – Overall accuracy ranges for classification groups


ACKNOWLEDGMENTS

I would like to thank Dr. Ming-Chih Hung for chairing my thesis committee and for all the support, encouragement, and guidance he has given me along the way. I would also like to thank Dr. Yi-Hwa Wu and Dr. Patricia Drews for serving on my thesis committee and for their contributions in developing this thesis. Last but certainly not least, I would like to thank my husband Barry for encouraging me through many long nights and weekends while I completed this work. Without your support and love I would never have been able to finish this thesis.


CHAPTER 1: INTRODUCTION

Aerial imagery from the 1930s to the present is a primary data source used to study many natural processes and land use patterns (Carmel and Kadmon 1998, Kadmon and Harari-Kremer 1999). Early aerial imagery from the 1930s to the early 1990s is predominantly black and white (panchromatic) film photography, meaning there is only one band of data. This type of imagery contains limited spectral information, unlike today’s satellite digital sensors, which offer more spectral information even in the panchromatic band.

The Aerial Photography Field Office (APFO) is a division of the Farm Service Agency (FSA) of the United States Department of Agriculture (USDA). The APFO, located in Salt Lake City, Utah, has one of the nation’s largest collections of historical aerial imagery, dating back to the 1950s; film from the 1930s through the 1940s was sent to the National Archives. APFO has over 50,000 rolls of film, of which over 60% is black and white (Mathews 2005). This historical aerial imagery is a valuable, largely untapped resource, but its film format makes it unavailable to GIS and image analysis programs unless it is scanned and processed to digital format. There is widespread interest from the public and other government agencies in making this imagery available and usable in digital form.

Recently, more historical imagery from the 1950s to 1990s has been scanned to digital format for use in change detection projects for the FSA. According to Brian Vanderbilt (personal communication, 01 Sep 2009), FSA is interested in studying agricultural loss patterns over long periods so that processes of change can be more fully understood. One of the challenges with these projects is that land use/land cover classification with this imagery usually involves visual interpretation and manual digitizing, because digital image processing techniques are difficult to apply to historical panchromatic imagery. Manual digitizing is a time-consuming process for multiple years of imagery, as each photo requires its own analysis, and there are not enough image analysts within FSA to manage the increasing workload for projects requiring historical imagery. Study areas are also limited in scale because of the time and resources needed to digitize land cover types on the imagery. There is interest and need to explore digital options for land cover classification so that the use of these historical imagery datasets can be expanded.

The ability to facilitate digitizing of land cover types on historical aerial imagery would make it a more usable resource for studying long-term land use/land cover changes. Classification of this type of imagery is very labor intensive, which often limits the size of study areas. If the imagery could be utilized at a broader scale, we could gain a greater historical perspective on changes such as agricultural loss over time. The increased accuracy and repeatability of results obtained by using digital image processing could also make the results of long-term change detection projects more valid, rather than relying on the varying interpretation skills of the several image analysts a large project may require.

Historical aerial imagery offers the analyst an extensive historical perspective on geographic processes such as land use/land cover change, urban expansion, and vegetation patterns (Kadmon and Harari-Kremer 1999, Awwad 2003, Alhaddad et al. 2009). Producing a thematic map through image classification is one of the most common image analysis tasks in remote sensing. Image classification techniques such as unsupervised and supervised classification, NDVI, spectral signatures, and spectral band combinations have limited usability with panchromatic aerial imagery because they rely heavily on spectral information, which is limited with this type of imagery.

Visual interpretation does not rely on spectral information alone to classify imagery; it makes use of scene qualities such as texture, shape, arrangement of objects, and the context of elements in an image. The human visual system is very efficient at pattern recognition and in many ways is superior to existing machine processing methods, but its inherent subjectivity and the inability of the eye to extract complex patterns can limit interpretation. Digital image processing techniques that incorporate texture, tone, shape, pattern recognition, and object-based image analysis can be used to enhance traditional supervised and unsupervised classification, especially with gray level aerial imagery (Caridade et al. 2008).

A great deal of research has been done on the most effective ways of classifying multispectral imagery and mapping the results (Jensen 2005). There is relatively little research on how digital image processing of historical panchromatic imagery can improve or reduce manual interpretation for image analysis and GIS analysis. In this thesis research, digital image processing techniques including texture analysis, convolution filters, and object-based image analysis were considered with respect to how they can improve the classification of panchromatic aerial imagery and how this improvement can facilitate digitizing, and in some cases possibly eliminate it. A post processing system involving image smoothing, raster to vector conversion, polygon smoothing and simplification, and custom polygon editing tools for ESRI’s ArcMap GIS software was used to improve an initial digital image classification. The post processing system can be used to improve most digital image classifications; the quality of the baseline land use/land cover classification was the main factor in how efficiently a usable thematic layer could be created.
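The smoothing stage of such a post processing system can be sketched generically. This is a SciPy stand-in for illustration only, not the ArcGIS model used in the thesis, and the 3 x 3 window size is an arbitrary choice:

```python
import numpy as np
from scipy import ndimage

def majority_smooth(classified, size=3):
    """Replace each pixel with the most common class in its size x size
    neighborhood, removing 'salt and pepper' speckle from a classified
    raster before raster-to-vector conversion."""
    def local_mode(window):
        vals, counts = np.unique(window, return_counts=True)
        return vals[np.argmax(counts)]
    return ndimage.generic_filter(classified, local_mode, size=size)

# A 5x5 classified raster with a single speckle pixel of class 9.
raster = np.ones((5, 5), dtype=int)
raster[2, 2] = 9
print(majority_smooth(raster))  # speckle replaced by surrounding class 1
```

After a pass like this, each contiguous run of identical class values forms a cleaner region, so the subsequent raster-to-polygon conversion produces fewer sliver polygons to edit by hand.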

Continued study in this area could yield new approaches to land cover classification of gray level imagery. If historical imagery can be used effectively in a digital environment, then more of it may be scanned and become more readily available, which would benefit the geospatial community.

1.1 Research Objective

The objective of this project is to establish a working model that utilizes digital image processing to facilitate or assist the user with digitizing land cover zones from gray level aerial photos. This study approaches the problem by first classifying the aerial photo and then establishing a post processing system employing vector layers for use in a GIS.

There is limited research available on using digital image processing to enhance the classification and digitizing of gray level aerial photos. Digital image processing may not be able to completely replace visual interpretation of this type of imagery, but it may be able to make the process more efficient.


CHAPTER 2: LITERATURE REVIEW

2.1 Historical Aerial Imagery Uses and Importance

Historical imagery, as used in this study, refers to imagery acquired by an aerial camera mounted in an airplane. The photography has been directly imaged onto film and is also referred to as analog photography, as opposed to modern digital imagery. This historical imagery is black and white and may be referred to as either panchromatic or gray level.

Black and white, gray level, and panchromatic are terms that refer to imagery composed of shades of gray. The imagery used in this study has a pixel depth of 8 bits, where the binary representation assumes that 0 is black and 255 is white. Between 0 and 255, raw pixel values are grayscale and the digital numbers correspond to different levels of gray; for example, a digital number of 127 corresponds to a medium gray in the photo. This panchromatic imagery has a single band whose digital numbers represent spectral reflectance from the visible light range. Historical panchromatic imagery contains brightness values but has limited spectral information available in the visible wavelengths (0.4–0.7 µm), unlike the panchromatic band of a satellite sensor such as Landsat 7, which is generally sensitive into the near infrared wavelengths (0.52–0.90 µm) (Hoffer 1984).
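The 8-bit digital-number convention, and the kind of linear contrast stretch applied to such imagery later in this study, can be sketched briefly. This is a generic NumPy illustration, not code from the thesis:

```python
import numpy as np

# 8-bit digital numbers: 0 = black, 255 = white, 127 = medium gray.
dn = np.array([0, 64, 127, 191, 255], dtype=np.uint8)

# Fraction of full brightness represented by each digital number.
gray_fraction = dn / 255.0

def linear_stretch(img):
    """Linear contrast stretch: remap the observed min-max range of
    digital numbers onto the full 0-255 range, improving the visual
    separability of a low-contrast (e.g. hazy) gray level photo."""
    lo, hi = img.min(), img.max()
    return ((img.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

hazy = np.array([[100, 110], [120, 130]], dtype=np.uint8)  # low-contrast patch
print(linear_stretch(hazy))  # values now span 0-255
```

Because a single band carries only these brightness values, any two land cover types with similar digital numbers remain indistinguishable to a purely spectral classifier, which motivates the texture methods reviewed below.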

Historical aerial photographs are a valuable and important data source for studying long term (20–80 year) change processes such as land use/land cover change and vegetation and environmental dynamics. These historic photos present a snapshot in time that may offer insight into the current state of land use/land cover change processes and the patterns that may have affected their growth and stability. Much of the imagery available for long-term analysis is black and white aerial photography (Carmel and Kadmon 1998, Hudak and Wessman 1998, Caridade et al. 2008). The historical record captured by aerial photography provides a long temporal history to work with and an extensive frame of reference in which to assess the magnitude of land use/land cover change. Advances in GIS, photogrammetry, image analysis, and digital image processing have increased the potential to use historical aerial photography for many types of change analysis, including land use/land cover change (Okeke and Karnieli 2006).

Land cover maps from gray level historical aerial photos are generally created through visual interpretation and manual digitizing (Carmel and Kadmon 1998, Kadmon and Harari-Kremer 1999). This is a very time-consuming and labor-intensive process, which tends to limit analysis to small areas. The digitizing itself is generally dependent on the ability of the interpreter and may lead to results that are not objective, due to skill level and human bias (Kadmon and Harari-Kremer 1999). The assumption is often made that manual interpretation is 100% accurate, but assessing the accuracy of this method is difficult according to Congalton and Green (1993) and Carmel and Kadmon (1998).

2.2 Classification Problems of High Resolution Panchromatic Imagery

The historical aerial imagery analyzed in this project is limited in spectral information and has high spatial detail. These two variables can present difficulties for common digital classification and image processing techniques. The first challenge is the spectral resolution: there is only one band, and it lacks detailed spectral information. Most panchromatic aerial films are sensitive to the visible spectrum but also require filtering to account for haze and atmospheric conditions. The film is generally filter-exposed to the green and red visible wavelengths and not the blue wavelengths, to cut down on atmospheric haze. The resulting image records in black and white the tonal variations of the landscape in the scene (U.S. Army Corps of Engineers 1995). Common classification methods are limited in accuracy and usability when there is only one band to work with (Short and Short 1987, Anderson and Cobb 2004, Caridade et al. 2008).

Research by Carmel and Harari-Kremer (1999) and Carmel and Kadmon (1998) has approached the limitations of having only one band of information in several ways. Carmel and Kadmon (1998) used a combination of illumination adjustment and a modified maximum likelihood classifier incorporating neighborhood statistics to achieve classification accuracies of over 80% in a study of long-term vegetation patterns using gray level aerial imagery. This research showed that the relationship between neighborhood pixels was an important factor in achieving improved classification accuracy. Carmel and Harari-Kremer (1999) concentrated on training data and ancillary data to produce vegetation maps from black and white aerial photos from 1962 and 1992; the accuracy of their maximum likelihood classifier was about 80%. Their study stresses the importance of carefully considered training data and the utility of digital image processing of historical aerial photography in vegetation change detection studies. Mast et al. (1997) researched long-term change detection of forest ecotones using gray level aerial imagery from 1937–1990. Density slicing was used, after determining the range of brightness values for tree cover across all imagery, to obtain a classification of tree cover versus non-tree cover. Results were satisfactory, although no accuracy assessment was mentioned, but again the significance of object brightness values for gray level imagery was established.

The second challenge when analyzing this imagery is that its higher spatial resolution does not generalize features to the degree that coarse or medium scale imagery does, so much more detail must be considered in an image. Individual trees, buildings, and sidewalks become visible in these 1-meter resolution images. This makes visual interpretation easier but can cause problems for automated classification, especially when spectral information is limited or non-existent. High spatial resolution can increase within-class variances, which can cause uncertainty between classes. Browning et al. (2009), in their study of historical aerial imagery as a data source, emphasized the importance of object scale when analyzing imagery. Some objects may be larger than a pixel, referred to as H-resolution, and some objects may be smaller than a pixel, referred to as L-resolution. Imagery containing objects at multiple scales can therefore be more difficult to classify consistently across a scene. Spatial autocorrelation is also an important factor when considering this concept, as all natural scenes in remote sensing will have some type of spatial autocorrelation, so that the image organization is something other than random noise (Strahler et al. 1986).

The challenges of limited spectral information and high spatial detail can lead to a number of features in an image having similar gray level signatures, and to a great deal of confusion between class types (Fauvel and Chanussot 2007). A per-pixel classifier such as maximum likelihood, for example, has difficulty distinguishing between a medium gray field and water in a panchromatic image. Panchromatic image classification can be improved by considering the relationship between neighborhood pixels, as in texture analysis and object-based image analysis (Alhaddad et al. 2009, Myint and Lam 2005, Caridade et al. 2008).
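This confusion can be made concrete with a toy minimum distance classifier (a classifier of this type is applied later in the study). The class means and pixel values below are invented purely for illustration:

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Assign each pixel (feature vector) to the class whose mean
    feature vector is nearest in Euclidean distance."""
    pixels = np.atleast_2d(pixels).astype(float)
    means = np.asarray(class_means, dtype=float)
    # Distance from every pixel to every class mean.
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

# With gray level alone, a calm lake and a smooth field can share
# nearly the same mean brightness, so their class means collapse:
gray_only_means = [[127.0], [127.5]]           # water vs. field

# Adding a local-variance texture band separates them, since water
# is far smoother (lower local variance) than a cropped field:
gray_texture_means = [[127.0, 2.0],            # class 0: water, low texture
                      [127.5, 60.0]]           # class 1: field, high texture
pixel = [[127.2, 55.0]]                        # medium gray, rough surface
print(min_distance_classify(pixel, gray_texture_means))  # -> [1] (field)
```

The same pixel judged on gray level alone falls almost exactly between the two class means; the texture dimension is what resolves the tie.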

2.3 Statistical Texture Indicators

Image texture is one of the most important visual indicators for distinguishing between homogeneous and heterogeneous regions in a scene. The human interpreter uses shape, texture, size, pattern, shadow, arrangement, and the context of elements in an aerial photo to distinguish between objects in the image (Campbell 2008). According to Tuceryan and Jain (1998), texture is easy to discern in an image but difficult to define, and there is no one generally accepted definition. One way to define texture is as the spatial variation of the intensity values in a region of an image (Tuceryan and Jain 1998). This regional variation in intensity values implies that the evaluation of texture is a neighborhood process and that a single pixel does not create texture on its own.

Texture is also a quality of an image scene that corresponds to a pattern in the structure of the image. In a natural scene, an area of farmland and a forested area comprise two separate visual patterns in separable regions. These regions may also contain secondary patterns with characteristics such as brightness, shape, and size: a field may have a planting pattern, and a forest may be composed of deciduous and coniferous trees, giving the area a distinctive sub-pattern with its own brightness, shape, and size (Srinivasan and Shobha 2008). Texture as a property of an object or regional feature in an image can be described as fine, smooth, coarse, etc. Tone is the range of shades of gray in an image. According to Haralick (1979), tone and texture are interdependent concepts, in that both are always present in an image to varying degrees. Haralick (1979) explains this interrelationship as patches in an image that have either little variation in tonal primitives (tone) or great variation in tonal primitives (texture).

The work of Haralick et al. (1973) was the foundation for most of the later research on image texture analysis. Their work provided a computational method to determine textural characteristics in an image scene and discussed several widely used statistics for image texture recognition: contrast, correlation, angular second moment, inverse difference moment, and entropy. Contrast measures the amount of local variation in an image. Correlation measures the linear dependency of gray levels in the image. Angular second moment measures local homogeneity. Inverse difference moment also measures local homogeneity but relates inversely to contrast. Entropy measures the randomness of values. Image analysis may be performed using these measures either alone or in combination.
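Three of these measures can be sketched in plain NumPy for a tiny 4-level image. This is a minimal illustration of the co-occurrence computation, not the software used in the thesis:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix: frequency of gray-level pairs
    (i, j) separated by the offset (dx, dy), normalized to probabilities."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_stats(p):
    """Contrast, angular second moment, and entropy of a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)             # local variation
    asm = np.sum(p ** 2)                            # angular second moment
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))  # randomness
    return contrast, asm, entropy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print(haralick_stats(glcm(img)))
```

A smooth region concentrates the co-occurrence mass near the matrix diagonal (low contrast, high angular second moment), while a rough region spreads it out (high contrast, high entropy), which is why these statistics discriminate textures that share the same mean gray level.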

There are three main approaches to texture analysis: statistical, spectral, and structural. Statistical methods are based on local statistical parameters such as the co-occurrence matrix and variability within moving windows. Spectral methods include analysis using the Fourier transform, and structural methods emphasize the shape of image primitives (Srinivasan and Shobha 2008). This study utilized statistical methods, including the co-occurrence matrix, occurrence measures, and moving windows. By evaluating the spatial distribution of gray values using statistical methods, a set of statistics can be derived from the distributions of neighboring features throughout the image. There are first order and second order texture statistics. First order statistics such as mean, standard deviation, and variance analyze pixel brightness values without analyzing the relationships between pixels. Second order statistics, on the other hand, analyze the relationships between two pixels; these measures include contrast, dissimilarity, homogeneity, entropy, and angular second moment (Srinivasan and Shobha 2008). First order and second order statistics are used in this study as a method to improve the classification accuracy of panchromatic aerial photos.
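A first order texture band of the kind described here can be computed with a moving window. This is a generic SciPy sketch, and the 3 x 3 window size is an illustrative choice:

```python
import numpy as np
from scipy import ndimage

def local_variance(img, size=3):
    """First order texture: variance of brightness values inside a
    size x size moving window. Only the distribution of values is used,
    not pixel-pair relationships (which would be second order)."""
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, size=size)
    mean_sq = ndimage.uniform_filter(img ** 2, size=size)
    return mean_sq - mean ** 2

flat = np.full((5, 5), 100.0)    # homogeneous patch -> variance near 0
noisy = flat.copy()
noisy[2, 2] = 200.0              # a bright speck raises local variance
print(local_variance(noisy)[2, 2])
```

The resulting variance image can be stacked with the original gray band as an additional input layer for a supervised classifier, which is how first order statistics enter the classification workflow described here.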

The analysis of texture is a technique that has been used to aid and increase

classification accuracy in both gray level image analysis and multispectral analysis.

Haralick et al. (1973) conducted the first major study of texture as an imagery analysis

tool. They demonstrated the utility of the Gray Level Co-occurrence Matrix (GLCM) as

an analysis tool for panchromatic aerial photographs and multispectral imagery even

though computer processing constraints of the time hindered their study. The

classification accuracy in their study was 82% for the panchromatic aerial imagery.

Caridade et al. (2008) used the GLCM and a variety of moving window sizes to achieve an overall classification accuracy of 83.4% on black and white aerial photos using four land cover classes. The GLCM uses statistics such as dissimilarity, angular second moment, homogeneity, contrast, and entropy to statistically characterize the frequency of pairs of gray levels in the image. Caridade et al. (2008) also discuss the variation of land cover type accuracies throughout an image. Their study shows that certain land

cover types such as water may achieve accuracy levels of 100% while others such as bare

ground are much lower at 76.5%. Cots-Folch et al. (2007) used the GLCM to train a

neural network classifier but the highest accuracy obtained was only 74%. Their study

stated that better training data and ancillary data sources could be used to improve the

results. Maillard (2003) compared the GLCM to semi-variogram and Fourier spectra

methods and found that the GLCM works better in areas where textures are easily

distinguished and the semi-variogram is better in areas where texture is more similar.

The Fourier method was less successful than either of the other two methods. Alhaddad

et al. (2009) found that the GLCM and mathematical morphology produced results which

were closer to visual interpretation than other texture analysis methods.

One of the main utilities of texture analysis as it applies to improving the classification

of panchromatic imagery in particular is that it increases the dimensionality of the

imagery from one band to multiple bands. A new band is created for each texture

function. This increased dimensionality can help alleviate some of the problems of class

separability that arise when trying to classify historical aerial photos (Halounova 2009).

Halounova used a combination of texture, filtering and object oriented classification to

achieve overall accuracy levels between 89% and 92%. Their methodology of increasing

the dimensionality of panchromatic imagery to try to achieve more separability between

land use/land cover classes was an important influence on this thesis research.
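The layer-stacking idea can be sketched in a few lines (a simplified illustration, not the ERDAS Imagine workflow used in this thesis): first order statistics computed in a moving window are stacked with the original band, turning a single panchromatic band into a multi-band image.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def texture_stack(band, size=3):
    """Stack a single panchromatic band with moving-window mean and
    variance, turning one band into a three-band image of the same size."""
    band = band.astype(float)
    pad = size // 2
    padded = np.pad(band, pad, mode="edge")        # preserve image size
    windows = sliding_window_view(padded, (size, size))
    mean = windows.mean(axis=(-2, -1))             # first-order mean band
    var = windows.var(axis=(-2, -1))               # first-order variance band
    return np.stack([band, mean, var])             # shape (3, rows, cols)
```

Each additional texture function contributes another band in the same way, which is what gives a classifier more dimensions along which tonally similar classes may separate.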

In areas of heterogeneous objects, the texture information in neighboring pixels is an important consideration. Common classification algorithms that rely on spectral information at the

pixel level do not consider spatial information. This spatial information can become very

important when trying to discern land cover types such as urban areas (Myint and Lam

2005). Two types of analysis can assist the classification process: region-based analysis and window-based analysis. Region-based analysis involves image segmentation, while window-based analysis can be used in pre- or post-classification to filter noise from

the results (Gong et al. 1992). The importance of the spatial aspect of texture analysis is

illustrated in many studies involving texture analysis (Haralick et al. 1973, Gong et al. 1992, Hudak and Wessman 1998, Myint and Lam 2005, Erener and Duzgun 2009, Pacifici et al. 2009). This study used region-based analysis during object-based image analysis and window-based analysis through the GLCM.

2.4 Image Enhancements and Filtering

Texture analysis in combination with image pre-processing such as principal

component analysis has been explored by Awwad (2003). His study, which utilized a

1941 gray level photo, used texture analysis windows of different sizes and then

combined the results to create an image with sixteen layers. Principal components

analysis (PCA) was used to reduce the dimensionality of the resulting image. He

combined several digital processing techniques but overall accuracy was only 58%.

Much of the literature on using digital image processing techniques for classifying gray

level aerial photos does not make use of multiple texture window sizes in combination to

return a result. Even though examples are rare in the literature and accuracy was low as

reported by Awwad (2003), the technique has promise. Halounova (2009, 2005) also

combined several texture window sizes but used filtering and object oriented

classification rather than PCA to achieve classification accuracies over 90%. Image

enhancements such as filtering and texture add multiple channels to the one-band

panchromatic image and allow the image to be processed in a similar fashion to a

multiple band image. There is room for more research using this type of methodology

with different parameters and different pre- or post-processing results such as

convolution filtering, edge detection and smoothing windows.
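A convolution filter of the kind mentioned above can be sketched in a few lines of NumPy; the mean (smoothing) and Laplacian (edge-enhancement) kernels below are standard textbook choices, not the specific filters evaluated in this thesis.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def convolve2d(img, kernel):
    """'Valid'-mode 2-D convolution of a gray-level image with a small kernel."""
    k = np.flipud(np.fliplr(kernel))       # flip kernel for true convolution
    windows = sliding_window_view(img.astype(float), k.shape)
    return (windows * k).sum(axis=(-2, -1))

smooth = np.full((3, 3), 1 / 9)                 # low-pass (mean) kernel
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)  # edge-enhancement kernel
```

Smoothing a region of constant tone leaves it unchanged, while the Laplacian responds only where gray values change, which is why such kernels are useful both as pre-processing and as additional texture-like bands.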

Edge detection is another important consideration when trying to separate a scene into

distinct objects. A natural scene such as an aerial photo does not necessarily have a clear

relationship between an object and a background. Anderson and Cobb (2004) provided a

new unsupervised hybrid classification algorithm based on edge detection and

thresholding for pixel classification. Nearest edge thresholding outperformed both the

maximum likelihood and ISODATA clustering classification schemes. Their study

illustrated the importance of edge detection between features in gray level aerial photos.

Li et al. (2008) also conducted research that concentrated on the importance of edge

detection and shape characteristics. The process used was automated using ArcGIS

Model Builder and results were compared to manual digitizing with the model correctly

identifying 70% of the manual classifications. Hu et al. (2008) used grayscale

thresholding for image segmentation and emphasized the importance of transition regions between objects in a scene for the ability to segment objects in an image. Transition regions between objects can be problematic when classifying complex scenes, as there can be multiple areas in the image with different gray scales between objects, causing classification errors and a salt-and-pepper effect.
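Otsu's method is one standard automatic way to choose such a gray-level threshold, shown here as an illustrative sketch; it is not necessarily the thresholding approach used by Hu et al. (2008). It selects the threshold that maximizes the between-class variance of the two resulting pixel groups.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: pick the gray-level threshold that maximizes the
    between-class variance of the foreground/background split."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    omega = np.cumsum(p)                       # class-0 probability up to t
    mu = np.cumsum(p * np.arange(levels))      # cumulative mean up to t
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)           # undefined at the extremes
    return int(np.argmax(sigma_b))             # threshold t; class 0 is <= t
```

On a cleanly bimodal image the chosen threshold falls between the two modes; on complex scenes with many transition regions, the single global threshold is exactly where the salt-and-pepper problems described above arise.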

Texture filters in combination with neural network classifiers are another methodology

that has shown some success in land use/land cover classification of gray level aerial

photos. Ashish (2002) used several artificial neural network (ANN) classifiers based on

histograms, texture and spatial parameters with some success on 1993 gray level aerial

photos. Textural parameters yielded the highest overall accuracy at 92%. His study

further showed the importance of texture parameters for classification of gray level aerial

photos. Another study conducted by Pacifici et al. (2009) used a neural network

classifier and a simplification procedure with some success on the panchromatic bands of

WorldView-1 satellite imagery. After the simplification procedure called “network

pruning” was used on the imagery, texture was optimized and input features were

reduced, producing classification accuracy above 90% as measured by the Kappa coefficient.

Their study provided another example of how texture parameters can improve the

classification accuracy of different types of classifiers using high resolution panchromatic

imagery.

2.5 Image Segmentation and Object-based Image Analysis

Considering the high spatial resolution of gray level aerial photos and the lack of

spectral information, object-based image analysis is another technique that has been

successful in classifying high spatial resolution imagery. Object-based image analysis

(OBIA) is a method of image analysis that uses objects in a scene rather than individual

pixels to derive information from the imagery. OBIA is a two-part process consisting of

image segmentation and then image classification. The image is first divided into

homogeneous, adjacent regions that take into account texture, region context, shape, and spectral information during the segmentation phase. Image segmentation reduces the complexity of the image and produces regions that can in turn be considered meaningful to the image interpreter.
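A toy region-growing routine illustrates the segmentation idea: 4-connected pixels are merged into one region as long as their gray values stay within a tolerance of the region's seed pixel. This is a deliberately minimal sketch, far simpler than the multi-criteria segmentation performed by packages such as ENVI EX.

```python
import numpy as np
from collections import deque

def grow_regions(img, tol=10):
    """Minimal region-growing segmentation: group 4-connected pixels whose
    gray value stays within `tol` of the region's seed pixel."""
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for seed in np.ndindex(img.shape):
        if labels[seed]:
            continue                     # pixel already belongs to a region
        current += 1
        seed_val = float(img[seed])
        labels[seed] = current
        queue = deque([seed])
        while queue:                     # breadth-first flood fill
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and labels[nr, nc] == 0
                        and abs(float(img[nr, nc]) - seed_val) <= tol):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels
```

Two tonally distinct halves of an image come back as two labeled regions; real segmentation algorithms add shape, scale, and texture criteria on top of this homogeneity test.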

OBIA was compared to pixel-based classification in a study by Pillai and Wesberg

(2005) using gray level aerial imagery from 1965 and 1995. Their study illustrated how

scale dependency can affect classification results depending on the objects studied. Scale

dependency of individual landscape elements can also affect the usefulness of texture

parameters as illustrated in Resler et al. (2004). Change at the scale of individual trees

was not statistically significant between pixel-based classification and object-based

classification. Object-based classification was more accurate when comparing patches of

trees in high spatial-resolution panchromatic imagery. Their study illustrates the

importance of determining land use categories and object scale when classifying imagery.

Elmqvist et al. (2008) performed OBIA on the panchromatic band of an Ikonos image

and found that spectral information provided the best segmentation results. Classification

accuracies were fairly low for their study but outperformed pixel-based classification.

Laliberte et al. (2004) used a combination of low-pass filtering and object-based image

analysis on gray level aerial photos successfully integrating gray level aerial photos and

satellite imagery in a change detection study. Middleton et al. (2008) successfully used

feature extraction and a support vector machine (SVM) supervised classifier to extract

features on a 1947 aerial image in a change detection study. One of the main conclusions

of their study was that classification accuracy of the panchromatic image was based on

image quality. Historic panchromatic imagery is not always of good quality due to age or

deterioration of the film. A methodology for classifying this type of imagery needs to be robust across various levels of image quality.

The literature regarding classification of gray level aerial photos concentrates for the

most part on replacing manual digitizing with digital image processing techniques. There

is a gap in the literature in regard to using digital image processing to help facilitate

digitizing. By combining digital image analysis techniques such as texture and object-

based image analysis with GIS vector capabilities, digitizing land cover classification

zones can be enhanced and in some cases possibly eliminated.

CHAPTER 3: CONCEPTUAL FRAMEWORK AND METHODOLOGY

3.1 Description of the Study Area

The study area for this project is near Ogden, Utah (Figure 1). The area is in north

central Utah (Figure 2) and consists of a variety of land cover types including agricultural

land, impervious surfaces, grassland, forest and water. The Ogden study area does not

provide an example of dense urban land cover so a secondary area of interest was chosen

in Salt Lake City, Utah (Figure 3). The Salt Lake City study area includes a park and a

variety of residential and commercial land cover. By using two study areas with a variety

of textures and objects in the scene, this research can show the usefulness of digital image

processing across two completely different areas and images.

The classification results concentrate on the Ogden imagery as this imagery has better

defined and larger areas of land class types. The Salt Lake City image is used mainly to

see how the same techniques can be used in an urban area. Urban areas have their own

unique classification challenges that are increased when trying to classify panchromatic

imagery. Another reason the Ogden image was the main focus of this research is that this

imagery was originally flown for FSA for agricultural purposes. It is also likely that

much of the historical imagery in the vault at APFO will be used to further study

historical agricultural change processes.

3.2 Description of Data

The image of Ogden, Utah from 1958 was obtained from the Aerial Photography Field

Office’s internal imagery storage network. The Ogden study area was clipped from a

digital orthophoto quarter quadrangle (DOQQ) 4111256ne from 1958 (Figure 4) and

covers approximately 0.5 square miles. The image was scanned from black and white

Figure 1 – Aerial photo of Ogden study area

Figure 2 - Overview of study areas in relationship to the state of Utah

Figure 3 – Aerial photo of Salt Lake City study area

film at APFO using a standard 25 microns, which produces about 1016 dots per inch (DPI). The imagery was originally flown at 40,000 feet, producing a pixel resolution of 1 meter, and the bit depth of the image is 8 bits. This imagery was also orthorectified at

APFO using the Socet Set 4x software suite and was rectified to the Universal Transverse

Mercator (UTM) coordinate system zone 12, North American Datum of 1983 (NAD 83).

The imagery is in GeoTIFF format, which can be used in a variety of imagery analysis

and GIS programs.

The image of Salt Lake City, Utah from 1977 (Figure 5) was obtained from the Utah

State Automated Geographic Reference Center interactive imagery website:

http://gis.utah.gov/images/sgidraster/SLCo_1977_DOQ.html. The Salt Lake City study

area was clipped from a Mosaicked Digital Orthophoto Quadrangle (MDOQ) q1219_1977

and was scanned and orthorectified at APFO using the same parameters and methods as

the 1958 Ogden imagery. Q1219_1977 is a mosaic that was created from original

DOQQs using Socet Set 4x and interactive seaming. The image resolution is 1 meter and

the bit depth is 8 bits.

Figure 4 - Ogden DOQQ Study Area

Figure 5 - Salt Lake City MDOQ Study Area

3.3 Methodology

3.3.1 Conceptual Overview

The research for this study involved a number of steps. The preliminary image

processing included creating a subset of the study area from both the 1958 imagery and

the 1977 imagery. A subset was used to cut down on processing and digitizing time.

Once the study areas were created, the classification scheme was determined and finally

heads-up digitizing was performed on both images in order to obtain the digitized

baseline information for comparison to automated classification and to use as ground

truth data to test the accuracy of the digital imaging techniques. After the preliminary

processing was completed, a number of digital image processing techniques were

performed on the imagery (see Figure 6). The original imagery was classified using

supervised and unsupervised classifiers to form the classification baseline information.

Then four main digital image processing techniques were used to try to improve the

classification. These four processes were: convolution filtering, texture analysis,

principal components analysis, and object-based image classification. Texture analysis

was used to create layer-stacked images which increased the dimensionality of the

original one-band image to improve classification results. Principal components analysis was used to decrease the dimensionality of the multi-layer texture images, and in one case the first principal component image derived from the multi-layer texture image was layer stacked with the original one-band image. The final digital image processing

component in the research was image post-processing to refine the most promising results

for GIS analysis. After image post-processing an accuracy assessment was completed to

compare the results of each classification with the digitized baseline information obtained

by visual interpretation (heads-up digitizing).

3.3.2 Software Utilized

There were three software programs used in this project as no single software suite

available to me provided all the tools needed for this research. The imagery analysis

programs used were ERDAS Imagine version 11.0, ENVI 4.8 and ENVI EX 4.8. The

GIS software used is ArcMap 10.0. ERDAS Imagine has a good set of texture analysis

and filtering tools. ENVI EX and ENVI have the benefit of integration with the GIS

software and ENVI EX provided a wizard based feature extraction toolset for object-

based image classification. The main interface used to provide the baseline land use/land

Figure 6 – Main Workflow Processes

cover zones to aid or facilitate the manual digitizing process is ArcMap 10.0, as this

software has good vector tools, and the ability to integrate ENVI image analysis tools

into ArcMap Model Builder.

3.3.3 Preliminary Processes

The study area was clipped from the original DOQQs using the ERDAS Imagine

subset tool. The area covers approximately 0.5 square miles in each project area to facilitate digitizing and image processing. Much of the image processing, including the use of convolution filters, texture analysis, and classification methods, required trial and error to

find the best settings and analysis methods for the imagery. The best results were

analyzed further using post processing, vector conversion and editing.

Heads-up digitizing was performed on the Ogden and Salt Lake City imagery. This

provided the digitized baseline information as ground truth to be used later in the

classification accuracy assessment. Heads-up digitizing was performed using ESRI’s

ArcMap 10.0 software. A geodatabase was created for both the Ogden imagery and the

Salt Lake City imagery.

One person performed the visual interpretation of the imagery for the sake of

consistency. The interpreter has eight years of work experience using photo

interpretation to create a variety of map types for the Defense Mapping Agency (now the

National Geospatial Intelligence Agency). The times were recorded so that a comparison

can be made between manual digitizing and digital image processing to determine the

efficiency of digital image processing.

The determination of land use classes was an important consideration as it had a great

deal of impact in the final results of image classification especially for panchromatic

imagery since so many land use/land cover types have similar digital number (DN)

values. Classification schemes in previous studies using black and white aerial imagery

have used relatively limited categories (Kadmon and Harari-Kremer 1999, Laliberte et al.

2004, Okeke and Karnieli 2006, and Pringle et al. 2009). This study includes three levels

of classification detail for the study areas. The approach looked at the classification of

the imagery in a bottom-up manner, going from a high level of detail in representing the

land cover types existing in the imagery to grouping these types into larger categories.

This strategy was used to determine how useful detailed digital analysis of the imagery

was compared to visual interpretation. The first level of classification of the Ogden

imagery was based on eight land use/land cover classes including water, forest, grassland,

dark fields, medium fields, light fields, bare earth and impervious surface (Table 1). At

this level it was too difficult to represent the cropland as one class, as there is too much

variation between fallow fields and fields that are growing or wet. There was also

confusion between the most representative digital number values between dark, medium,

and light fields as there are pattern variations in the respective fields.

Table 1 – First level classification Ogden land use/land cover classes

Class Name Description

Water Lakes, Reservoirs, Rivers

Forest Areas of trees with a canopy cover greater than 50%

Grassland Areas dominated by grasses and herbaceous plants with little or no tree or shrub cover

Dark Fields Agricultural cropland area characterized by dark gray tone DN ~ 0-122

Medium Fields Agricultural cropland area characterized by medium gray tone DN ~ 100-188

Light Fields Agricultural cropland area characterized by light gray tone DN ~ 151-200

Bare Earth Areas of earth, sand, and rock with little to no vegetation

Impervious Surface Buildings, roads, parking lots

The second level of classification took the eight classes and combined them into three

larger groups: cropland, vegetation, and other. Finally, the third level of classification

consisted of cropland and non-cropland. The results of these classifications and their

impact on classification accuracy were obtained by combining the results of the initial

classifications rather than running new supervised and unsupervised classifications to

reflect these combined groupings.

The classification system used on the Salt Lake City image also followed a bottom-up approach, starting with a more detailed classification and then moving to more general

groupings. The first level of classification consisted of five land types including

commercial, transportation, trees, grass, and residential (Table 2). The second level

classification was reduced to built up areas, vegetation, and transportation. The third level

of classification consisted of built up areas and non-built up areas. The Salt Lake City

image has entirely different characteristics from the Ogden image, as the Salt Lake City

image is comprised of a mixed type urban area without any agriculture, bare earth, forest,

or large bodies of water. The added classification difficulty in the Salt Lake City image

was that the commercial and residential areas are made up of a mixture of manmade and

natural materials. These areas consisted of thousands of small buildings and may be

surrounded by either grass or concrete, all of which provide a very complex pattern of

shapes and surfaces which were tonally very similar. There were many tonal similarities

existing in the Ogden imagery as well but the land cover types such as dark fields, light

fields, water, etc. are fairly homogenous blocks unlike the patchwork of the urban areas.

Table 2 – First level classification Salt Lake City land use/land cover classes

Class Name Description

Commercial Built up area consisting of industrial, commercial complexes

Transportation Transportation network including major streets and highways

Residential Mixed area that includes single family homes, apartments, trees, and grass

Grass Areas dominated by grasses and herbaceous plants (yards, fields)

Trees Woody vegetation < 20ft tall

3.3.4 Unsupervised Classification

Unsupervised classification was performed on the original subset of the Ogden and

Salt Lake City images to provide the unsupervised classification baseline information for

comparison to digital classifications with image enhancements. This initial classification

was completed using ENVI 4.8 tools for ArcGIS and the ISODATA clustering algorithm.

This clustering algorithm essentially divides the image into naturally occurring groups of

pixels. Similar pixels are grouped together. Three classification sets were used to

process the imagery: 10, 25, and 100 spectral classes. After the imagery was classified,

these groups were interactively assigned an information class by visually comparing the

classified image and/or reference data. Since many of the spectral classes have similar

tonal values and statistics, it was necessary to assign some of these mixed classes to

either the most numerous type or the type with the most concentrated areas of pixels.

There was room for interpretation, and there is a certain amount of subjectivity involved

in assigning these classes. The interpreter needs to be familiar with the study area, and when some classes were divided between seemingly equal areas, it was difficult to determine the best class to assign the pixels to. In some cases a spectral class

was divided between 3 or 4 information classes. At this stage there was not a method to

split these classes into their respective groups using the ENVI or ArcGIS software. It is

possible to use masking and a technique called cluster busting, but this methodology was

not used in this research, as it requires a significant amount of extra processing.

The unsupervised classification process did provide some useful general information

about the imagery. It was very difficult to assign classes to the detailed land

classification system used for both the Ogden and the Salt Lake City images. After

aggregating classes and assigning them a land use/land cover type from the classification

scheme, there were about five classes that could be distinguished in the Ogden image and

three in the Salt Lake City image. A useful tool to visualize how the clusters in an image

are derived is a dendrogram. Dendrograms were created using the ArcGIS software for

the same number of classes and iterations as the unsupervised classifications (Figures 7,

8, 9). A dendrogram is a graphic diagram in the form of a tree that is used to analyze

clusters in a signature file (ESRI 2011). The dendrograms are used to show the clustering

process from individual classes to one large cluster. The dendrogram tool takes an input

signature file created in ArcMap and builds the diagram based on hierarchical clustering. The classes are clusters of pixels and the graph illustrates the

distances between merged classes. The dendrogram helps to illustrate how the 10, 25,

and 100 classes are distributed using the ISODATA classifier. Many of the classes

overlap and are very close together numerically, which is why unsupervised classification

on panchromatic imagery often gives the user unsatisfactory results. The dendrograms

also illustrate the relatively small changes in class distances between having 10, 25, and

100 classes. Dendrograms of the Salt Lake City imagery were very similar except for

slight differences of distances between pairs of combined classes (Figure 10). The

ISODATA classifier only returned 67 classes instead of 100 for the Salt Lake City image

and 93 out of 100 for the Ogden image.
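The hierarchical merging a dendrogram depicts can be reproduced with SciPy's hierarchical clustering tools; the class mean gray values below are hypothetical stand-ins for the statistics a real signature file would supply.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Hypothetical mean gray values for six spectral classes; real values
# would come from the ISODATA signature file.
class_means = np.array([30.0, 35.0, 90.0, 95.0, 180.0, 185.0]).reshape(-1, 1)

# Each row of Z records one merge: the two cluster ids joined, the
# distance at which they merged, and the size of the new cluster.
Z = linkage(class_means, method="average")
```

Plotting Z with `scipy.cluster.hierarchy.dendrogram` would show the nearby class pairs merging first at small distances, mirroring how tonally similar spectral classes on panchromatic imagery sit close together and are hard to separate.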

Figure 7 - Ogden dendrogram of ISODATA clustering 10 classes

Figure 8 - Ogden dendrogram of ISODATA clustering 25 classes

Figure 9 - Ogden dendrogram of ISODATA clustering 100 classes

Figure 10 – Distances between classes from Salt Lake City dendrograms (10, 25, and 100 classes)

A K-Means unsupervised classifier was also used to classify an Ogden texture image

incorporating the mean, variance and homogeneity bands. This classifier provided a

more satisfactory result on the texture images than the ISODATA classifier did. The K-

Means classifier in the ENVI software uses a set number of classes provided by the

analyst, and classes are determined after the classifier iterates through the image and the

optimal separability is reached based on the distance to mean (ENVI 2011). The

ISODATA classifier had difficulties with the texture image and returned a completely

gray image unless the classes were increased to well over 25. Considering how time-consuming it was to assign classes to the result, the K-Means classifier was used. Ten classes and 25 classes were used on the texture image.
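The iterate-and-recompute logic behind K-Means can be sketched on raw gray values in a few lines of NumPy (a minimal one-dimensional version for illustration, not the ENVI implementation): each value is assigned to the nearest class mean, the means are recomputed, and the two steps repeat.

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Minimal K-Means on pixel gray values: assign each value to the
    nearest class mean, recompute the means, and repeat."""
    rng = np.random.default_rng(seed)
    means = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # distance of every value to every current class mean
        labels = np.argmin(np.abs(values[:, None] - means[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip emptied classes
                means[j] = values[labels == j].mean()
    return labels, means

values = np.array([10.0, 12.0, 11.0, 200.0, 202.0, 201.0])
labels, means = kmeans_1d(values, k=2)
```

With two well-separated tonal groups the classifier converges to one mean per group; on real panchromatic imagery, where class means sit close together, this same distance-to-mean rule is what makes tonally similar classes hard to separate.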

3.3.5 Supervised Classification

Supervised classification was performed on the original image subsets to create the

supervised classification baseline information. Later on, another supervised classification

was performed on images which had been digitally processed or enhanced (filtering or

texture analysis). Results of the latter supervised classification were compared to the

supervised classification baseline information to determine if these digital image process

enhancements improved classification. Supervised classification was performed using

ENVI and ArcGIS 10 software.

Supervised classification, unlike unsupervised classification, involves the user creating training samples from land use/land cover classes that are determined to be present in the imagery. The training sets, called regions of interest (ROIs), were created using ENVI

software. This training data was used throughout the supervised classifications performed

on the original imagery, texture images, PCA images, and the filtered images. The final

training sets for both study areas were determined by trial and error. A training set was

developed which had about twice as many samples, but this set did not significantly

improve classification results for either image. These larger sets did, however, increase processing time, so in the interest of efficiency smaller training sets were used throughout (Figures 11 and 12). Training sets are inherently subjective and require the analyst to be able to distinguish land use/land cover types.


Figure 11 – Training sample distribution for the Ogden image

Figure 12 – Training sample distribution for Salt Lake City Image


Several supervised classifiers were used to evaluate the imagery using ENVI software.

The minimum distance classifier, the maximum likelihood classifier, neural net, and

SVM classifiers were examined. Each classifier provides distinct advantages and

disadvantages. The minimum distance to means classifier determines the mean of each pre-defined class and then assigns each pixel to the class whose mean is closest in Euclidean distance. One of the advantages of this algorithm is that it

classifies all pixels and processes very quickly. The maximum likelihood classifier

assumes that each class is normally distributed and is based on the highest probability

that a pixel will be assigned to a particular class. When classes have a multimodal distribution, this classifier will not provide optimum results. An advantage of this method

is that the classifier considers the mean and covariance of the samples. The neural net

classifier provided by ENVI software uses back propagation to determine class

assignment of pixels. An advantage of the neural net classifier is that it does not make

assumptions about the distribution of the data. The Support Vector Machine (SVM)

classifier available in the ENVI software works with any number of bands and has good

accuracy when automatically separating pixels into classes. This classifier also

maximizes the boundary between classes, which may be useful for distinguishing land

use/land cover types with similar characteristics. Another advantage of this classifier is

that it works well on imagery that has a lot of noise (ENVI 2011, Jensen 2005).
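To make the distance-to-means idea concrete, the sketch below implements a minimum distance classifier for a single band, where Euclidean distance reduces to an absolute gray-level difference. The class means here are hypothetical values for illustration, not the training statistics from this study:

```python
import numpy as np

# Hypothetical single-band training means for three land cover classes
class_means = {"water": 25.0, "field": 110.0, "urban": 190.0}

def min_distance_classify(pixels, means):
    """Assign each pixel to the class whose training mean is closest;
    for one band, Euclidean distance reduces to absolute difference."""
    names = list(means)
    centers = np.array([means[n] for n in names], dtype=float)
    diffs = np.abs(np.asarray(pixels, dtype=float)[:, None] - centers)
    return [names[i] for i in np.argmin(diffs, axis=1)]

result = min_distance_classify([20, 115, 200], class_means)
```

Because every pixel has a nearest mean, the classifier leaves nothing unclassified, which is one reason it runs so quickly.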

3.3.6 Image Enhancement and Texture Analysis

Digital image processing techniques were explored to determine if classification

results could be improved. Texture analysis, convolution filtering, and contrast stretching

enhance some of the spatial characteristics of the imagery. For example, contrast


stretching brings out more differences between light and dark areas of the imagery, and

convolution filters can enhance edges. Low pass filters can smooth out areas of noise in

an image such as the variations found throughout the field areas in the Ogden imagery,

while high pass filters make the image appear more crisp or sharp (Jensen 2005).

Convolution filtering, contrast stretching and texture filtering were used in a variety of

combinations to enhance the study areas and try to improve classification.

A two standard deviation contrast stretch was applied to both study areas to enhance

the contrast and sharpness of the imagery. Both original images lacked definition in the

light and dark areas of the image (Figure 13). The Ogden study area had a DN range of

0-235 and the Salt Lake City study area had a DN range of 0-187. All subsequent

filtering and texture analysis was performed on the stretched images.


Figure 13 – Unstretched images compared to contrast stretched images
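The stretch itself is a simple linear mapping. A minimal sketch follows, assuming an 8-bit display range and using an illustrative DN distribution rather than the actual study imagery:

```python
import numpy as np

def stddev_stretch(img, n_std=2.0):
    """Linear contrast stretch: clip at mean +/- n_std standard
    deviations, then rescale the clipped range to 0-255 for display."""
    img = np.asarray(img, dtype=float)
    lo = img.mean() - n_std * img.std()
    hi = img.mean() + n_std * img.std()
    clipped = np.clip(img, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

# Toy image echoing the limited DN range of the Salt Lake City subset
raw = np.concatenate([np.full(98, 100.0), [0.0, 187.0]])
stretched = stddev_stretch(raw)
```

Pixels beyond two standard deviations are saturated to pure black or white, while the bulk of the histogram is spread across the full display range, which is what restores definition in the light and dark areas.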


Convolution filtering was performed on the study areas using ENVI software. High

pass filtering was used to help sharpen the imagery using a variety of kernel sizes: 3x3,

5x5, 7x7, and 11x11. Low pass filtering was applied to the imagery to smooth out noise

in the field areas. Again 3x3, 5x5, 7x7, and 11x11 kernels were examined. As the kernel

gets larger with low pass filtering, the detail becomes more generalized or blurred as this

type of filtering preserves the low frequency parts of the image. A median filter was also

examined using the previously mentioned kernel sizes. This filter has a smoothing effect on the image, but edges remain somewhat crisper than with the low pass filter. ENVI also

provides several edge enhancing filters that were used to process the original study

images. The filters used in this study were Laplacian, Roberts and Sobel. The Laplacian

filter has an editable window size whereas the Roberts and Sobel filters do not have

editable kernels or window sizes. Edge filtered images were created using the Laplacian

filter using window sizes of 3x3, 5x5, 7x7 and 11x11. The Laplacian filter was also used

in combination with the Gaussian low pass filter to try to reduce some of the noise that results when creating the Laplacian filtered images.
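As a concrete illustration of kernel smoothing, the sketch below applies a k x k mean (low pass) filter in plain NumPy. It is a generic convolution, not ENVI's implementation, and the "field" array is a toy example:

```python
import numpy as np

def lowpass_mean(img, k=3):
    """k x k mean (low pass) convolution using edge padding; larger
    kernels smooth away more detail. A simple high pass complement
    is the original image minus this low pass result."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):              # sum the k*k shifted neighborhoods
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A single noisy pixel in a flat field is spread out and attenuated
field = np.zeros((5, 5))
field[2, 2] = 9.0
smoothed = lowpass_mean(field, k=3)
```

The lone bright pixel is diluted across its 3x3 neighborhood, which is exactly the behavior that suppresses the speckled variation in the field areas of the Ogden imagery.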

Texture images were created using ENVI software and are based on the GLCM, which includes the following texture characteristics: mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation. Another set of texture images was created using the occurrence measures, which consist of data range, mean, variance, entropy, and skewness. Each set of texture images was created using a 3x3, 5x5, 7x7 and

11x11 processing window. The processing window measures the number of times each

gray level occurs in that particular part of the image (ENVI 2011). As the processing

window becomes larger, image detail is lost. The texture images created using the


GLCM are eight band images, and the texture occurrence images are five band images;

thus the dimensionality of the imagery is significantly increased by the use of texture.

These two texture images were also layer stacked with the original imagery to create

nine-band and six-band images. Additional nine-band and six-band images were also

created from these two texture images layer stacked with a filtered original image. The

resulting images were then classified using unsupervised and supervised classifiers. The

accuracy of these classifications was then compared to the classification baseline

information using an error matrix.
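The GLCM idea can be sketched compactly. The example below builds a co-occurrence matrix for a single offset (horizontally adjacent pixels) after quantizing to eight gray levels, and derives the homogeneity measure from it. ENVI's texture tool computes such measures per moving window; the windows here are toy data for illustration:

```python
import numpy as np

def glcm(window, levels=8):
    """Gray-level co-occurrence matrix for horizontally adjacent pixel
    pairs, after quantizing 0-255 gray values into a few levels."""
    q = (np.asarray(window, dtype=int) * levels) // 256
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def homogeneity(m):
    """Near 1 for smooth texture, near 0 for busy texture."""
    i, j = np.indices(m.shape)
    return float((m / (1.0 + (i - j) ** 2)).sum())

smooth = homogeneity(glcm(np.full((5, 5), 100)))               # uniform
rough = homogeneity(glcm(np.indices((5, 5)).sum(0) % 2 * 255))  # checkered
```

A uniform window piles all co-occurrences on the matrix diagonal and scores near 1, while a checkerboard puts them far off the diagonal and scores near 0, which is why texture bands can separate smooth fields from busy urban blocks even when their mean gray levels overlap.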

Principal components analysis was used to reduce the number of bands on several

composite images. In this way the dimensionality of the imagery is reduced but most of

the information in the imagery is maintained. PCA was performed on a multi-layer

image consisting of images created from variance, mean, and homogeneity texture

operators, plus the original unprocessed image. The result was a two-layer image which

incorporates information from the original image and the texture layers.
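The band-reduction step can be sketched as an eigen-decomposition of the band covariance matrix. The stack below is synthetic (four bands built from two underlying signals), standing in for the texture-plus-original layer stack described above:

```python
import numpy as np

def pca_reduce(stack, n_components=2):
    """Project a (bands, rows, cols) layer stack onto the leading
    eigenvectors of the band covariance matrix, keeping most of the
    variance in far fewer bands."""
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).astype(float)
    X = X - X.mean(axis=1, keepdims=True)      # center each band
    vals, vecs = np.linalg.eigh(np.cov(X))     # symmetric covariance
    order = np.argsort(vals)[::-1][:n_components]
    return (vecs[:, order].T @ X).reshape(n_components, rows, cols)

# Four correlated "bands" built from two underlying signals
rng = np.random.default_rng(1)
s1, s2 = rng.normal(size=(2, 16, 16))
stack = np.stack([s1, 2 * s1 + 0.1 * s2, s2, s1 - s2])
pcs = pca_reduce(stack, n_components=2)
```

Because the four synthetic bands only contain two independent signals, two principal components capture essentially all of the variance, mirroring how a texture stack collapses to a compact image for classification.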

ENVI software also provides tools to perform mathematical morphology filtering

which is a non-linear process based on shape. Morphology filtering was performed on

both the original imagery and 5x5 occurrence texture images. Supervised and

unsupervised classification was then performed to determine the accuracy as compared to

the classification baseline information.
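As a minimal illustration of the non-linear, shape-based nature of morphological filtering, the sketch below implements grayscale dilation (the maximum over a neighborhood). This is a generic textbook operator, not ENVI's morphology tool:

```python
import numpy as np

def dilate(img, k=3):
    """Grayscale morphological dilation: each output pixel is the
    maximum of its k x k neighborhood (erosion would take the
    minimum instead)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    rows, cols = img.shape
    return np.array([[p[y:y + k, x:x + k].max() for x in range(cols)]
                     for y in range(rows)])

spot = np.zeros((5, 5))
spot[2, 2] = 9.0
grown = dilate(spot, k=3)
```

Unlike a mean filter, dilation does not attenuate the bright spot; it grows it to the size of the structuring element, which is why morphology is described as a process based on shape rather than on averaged intensity.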

3.3.7. Object-based Image Analysis

Another digital image processing technique which was explored in this research was

object-based image analysis. Object-based image analysis is based on regions or groups

of pixels in an image rather than single pixels. Feature extraction was performed using


ENVI EX which provides object-based tools that utilize spatial, spectral, and textural

features. The object-based analysis provided by the ENVI software uses an edge-based

segmentation algorithm and requires only the scale level as an input parameter. The scale

levels range from 0 to 100; a high scale level reduces the number of segments defined, while a low scale level increases it. Determining the scale level is a balancing act: the goal is a scale that delineates the image object boundaries as well as possible. This level is likely to be

different depending on the characteristics of the imagery being analyzed. ENVI provides

an interactive preview window to help determine an appropriate scale level for an image.

The preview window allows the analyst to see the effect that changing the segmentation scale level has on the objects of interest in the image scene before the segmentation runs. This helps avoid creating numerous unsuccessful segmentation images. After

the initial segmentation has been performed, image segment merging can be done. ENVI

uses the Lambda-Schedule algorithm that iteratively merges segments by using a

combination of spectral and spatial information. This step is especially helpful when an image has been over-segmented, as it enables the aggregation of small segments that may

occur from image object variation (ENVI 2011). After segmentation the next step is to

find objects and classify the imagery. Objects were chosen interactively from the

segmented image and the image was then classified. ENVI EX offers either a K-Means classifier or an SVM classifier. Classification and post processing were performed using both available OBIA classifiers. The final step before classification in the ENVI EX

feature extraction workflow is the refine results window. In this window there are

options to export vectors and smooth the results, similar to using a majority filter on a


classified image. The feature extraction workflow in ENVI EX is designed to make OBIA user friendly.

ENVI 4.8 also offers an OBIA classification method called size-constrained region

merging (SCRM). This tool is an extension that can be added to ENVI. The tool

partitions an image into reasonably homogeneous polygons based on a minimum size

threshold. The output of the tool is a vector file and an image file. The vector file can be

used directly as an initial source to assist visual interpretation, and the image can be

further classified using either unsupervised or supervised classification. One of the

limitations of this extension is that there is a size limitation of 2MB for the image

(Castilla and Hay 2007). All of the layer stacked imagery exceeded the size limitation for

using this tool. SCRM was used on the original imagery and on the one-band dissimilarity, mean, homogeneity, and variance texture images. The second moment, entropy, and contrast bands were not used, as they appeared highly correlated with the bands that were selected. The correlation band did not contain enough usable information to segment into objects. The output image was then classified using the SVM classifier.

3.3.8. Post Processing and Automation

The classified images created from the previously mentioned digital processing

techniques and classifiers contained varying quantities of island pixels and salt-and-pepper noise. There are numerous methods to reduce these types of areas in a

classified image. Majority and minority filtering, clump, sieve, and combine classes are

some of the commonly available tools provided in GIS and image analysis software.

These processes reduce the complexity of the classification and allow a more cohesive


result for further analysis. Post-classification processing may also introduce error into the final imagery by smoothing or combining the wrong classes. It is also not

practical to remove noise pixel by pixel, as there may be thousands of areas to examine.

The next step in this research was to produce a vector polygon layer that can assist in

visual interpretation of the imagery. In order to simplify the procedure of processing the

classified rasters and converting them to a vector layer that facilitates visual

interpretation, a model was developed using ArcGIS Model Builder (Figure 14). This

model allows the user to input a classified image, apply a smoothing kernel, aggregate

island pixels to a specified tolerance, convert the raster to a vector layer, and smooth and

simplify the resulting polygons. For consistency, a majority filter using a 3x3 window and aggregation using a minimum threshold of 25 were applied to all the classified images

examined. The model parameters for smoothing and simplifying polygons were left open

so that adjustments can be made for different images.

Figure 14 – Post Processing ArcGIS Model
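The smoothing step of the model corresponds to a standard majority filter. Below is a minimal sketch of that operation (a generic implementation, not the ArcGIS tool itself), applied to a toy classified raster containing one island pixel:

```python
import numpy as np

def majority_filter(classified, k=3):
    """Replace each pixel with the most common class label in its
    k x k neighborhood, suppressing island pixels and salt-and-pepper
    noise in a classified image."""
    pad = k // 2
    p = np.pad(classified, pad, mode="edge")
    out = np.empty_like(classified)
    rows, cols = classified.shape
    for y in range(rows):
        for x in range(cols):
            vals, counts = np.unique(p[y:y + k, x:x + k],
                                     return_counts=True)
            out[y, x] = vals[np.argmax(counts)]
    return out

# A lone class-1 island pixel inside a class-0 region is removed
noisy = np.zeros((5, 5), dtype=int)
noisy[2, 2] = 1
cleaned = majority_filter(noisy)
```

Because every 3x3 window around the island contains at most one class-1 pixel, the majority vote eliminates the island entirely, which is the effect the model relies on before the raster-to-vector conversion.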


One of the challenges of using vector files that have been converted from raster files is

that polygons have a stepped appearance that follows pixel boundaries. This

characteristic appearance is much different from that of a vector file created through heads-up digitizing. A human digitizer classifies an image into recognizable objects using shape,

context, texture, shadows, etc., to help determine the boundaries of objects. It would be very difficult, if not impossible, for a human digitizer to create land use/land cover boundaries at the pixel level. This is one of the main differences between automated

classification and classification performed by visual interpretation.

The polygon smoothing and aggregation steps used in the model help to reduce some

of the stepped appearance created by the raster to vector conversion process (Figure 15).

After polygons underwent smoothing and simplification, the result appeared much closer

to results obtained through visual interpretation. This process is also an advantage if polygons need to be reshaped, since each polygon has fewer vertices after completing these operations.

Figure 15 – Polygon raster to vector, smoothing, and smooth and simplify

Once the vector layer had been processed through the model, it was edited using a custom toolbar in ArcGIS 10 software. The custom toolbar includes a combination of


out-of-the-box tools (Selection Tool and Cut Polygon Tool) and several custom tools created using C#.NET and ArcObjects. The purpose of the custom toolbar is to provide

functions to remove small island polygons by merging them with neighboring polygons. It was implemented as an “Add-in,” which was easily added to the ArcGIS 10 user interface.

The toolbar consists of four custom tools: select by area, merge with smallest neighbor,

merge with largest neighbor, and merge with selected polygon. These tools are very

similar to raster majority and minority filtering except that the user has more control over

them. The tools were then used to further refine the classification using visual

interpretation. The automated classifications in essence become the starting point for the

manual digitizing effort for the study areas.

3.3.9. Accuracy Assessment

One of the most serious limitations of historical imagery is the difficulty of ground-truthing. The imagery is between 33 and 52 years old, and it is likely that many of the objects in the

imagery have changed or no longer exist today. Ground-truthing was limited to visual

interpretation and image accuracy. The baseline information derived from heads-up digitizing was used as ground truth to evaluate the accuracy of the classification of both the original images and the images where digital image processing had been used (i.e., filtering, texture, PCA, and segmentation).

A confusion matrix (or error matrix) was used to compare the classification baseline information with the classifications produced after image processing enhancement, so that accuracy results could be compared. To save time and labor,

only the classifications deemed best were evaluated. The confusion matrix represents classification error in tabular form and helps validate the image classification results against the ground truth.

A stratified random sample of points was used as a sampling strategy for the accuracy

assessment. The samples for each image were created using Hawth’s sampling tools for

ArcMap. The values for the points were derived from the digitized baseline information.

Each class consisted of forty sample points except for extremely sparse areas such as

impervious surface on the Ogden image and grass on the Salt Lake City image. These

sparse areas were underrepresented by a simple random sampling strategy and as such

did not give an accurate assessment. Using the stratified sampling strategy allows each

land cover type to have a statistically significant number of points. The Ogden image

high level classification scheme used eight classes. Forty sample points were chosen for

the seven most predominant classes and thirty for the sparse class totaling 310 sample

points. The Salt Lake City high level classification scheme used five classes. Again forty

sample points were chosen for the four most predominant classes and thirty for the sparse

class totaling 190 sample points. The sample point numbers for each class in the study areas are statistically significant; for the sake of time and effort, a relatively small number of points was chosen for each area, based on the number of land use/land cover classes determined for each area.
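The stratified strategy can be sketched as drawing a fixed quota of random pixel locations per class, so sparse classes keep their quota rather than being swamped by the dominant classes. The function and raster below are illustrative, not the Hawth's tools implementation:

```python
import numpy as np

def stratified_sample(class_raster, points_per_class, seed=0):
    """Draw a fixed number of random pixel locations from each class,
    so sparse classes are not underrepresented as they would be under
    simple random sampling."""
    rng = np.random.default_rng(seed)
    samples = {}
    for cls, n in points_per_class.items():
        ys, xs = np.nonzero(class_raster == cls)
        pick = rng.choice(len(ys), size=min(n, len(ys)), replace=False)
        samples[cls] = list(zip(ys[pick].tolist(), xs[pick].tolist()))
    return samples

# Toy classified raster: class 2 is sparse, yet still gets its quota
raster = np.zeros((20, 20), dtype=int)
raster[0:2, 0:5] = 2
pts = stratified_sample(raster, {0: 10, 2: 5})
```

Under simple random sampling, class 2 (10 of 400 pixels) would receive very few points on average; the per-class quota guarantees it enough samples for a meaningful accuracy estimate.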

The Extract Values to Points tool was used to get values from the classified image and

the ground truth. These values were then combined in one column (e.g. 1-1, 1-3, etc.) to

obtain unique value pairs and then the summarize tool in ArcMap was used to obtain the

count. The values were then entered into an Excel spreadsheet which was set up to

calculate percentages of overall accuracy, producer’s accuracy, errors of omission, user’s


accuracy, errors of commission, single class accuracy, and the Kappa coefficient. Refer to Appendix 1 for a sample of the error matrices used in this research.
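The spreadsheet calculations can be summarized in one short function. Below is a sketch of the standard formulas (overall accuracy, Kappa, producer's and user's accuracy) applied to a hypothetical two-class matrix, not to the actual study results:

```python
import numpy as np

def accuracy_metrics(matrix):
    """Overall accuracy, Kappa coefficient, and per-class producer's
    and user's accuracy from a confusion matrix
    (rows = classified image, columns = ground truth)."""
    m = np.asarray(matrix, dtype=float)
    n = m.sum()
    overall = np.trace(m) / n
    # Chance agreement expected from the row and column marginals
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2
    kappa = (overall - expected) / (1.0 - expected)
    producers = np.diag(m) / m.sum(axis=0)   # 1 - error of omission
    users = np.diag(m) / m.sum(axis=1)       # 1 - error of commission
    return overall, kappa, producers, users

# Hypothetical two-class example: 35 of 40 points correct per class
overall, kappa, prod, user = accuracy_metrics([[35, 5], [5, 35]])
```

Kappa discounts the agreement expected by chance from the marginals, which is why it is reported alongside overall accuracy: a classifier can score a high overall percentage on an image dominated by one class while its Kappa remains low.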


CHAPTER 4: ANALYSIS RESULTS AND DISCUSSION

4.1. Manual Digitizing

Heads-up digitizing of land cover classes on any type of imagery, whether multispectral or panchromatic, allows the user more control over the results of the classification. The results of this method generally do not require further editing or post processing. On the other hand, the subjectivity of the digitizer has an

effect on the results of the classification. It is unlikely that a digitizer would be able to

classify an image exactly the same every time.

Digitizing took place in two sessions, with the Ogden imagery taking approximately five hours and the Salt Lake City image approximately three hours to complete. The Salt Lake City image has 331 polygons, compared to 172 in the Ogden image. The Ogden image took much longer to digitize even though it has approximately half as many polygons: its polygons and land cover configurations were more complicated, given the integration of grassland, forest, and water areas on the image. The features on the Salt Lake City image are laid out in a grid pattern separated by wide streets, so even though there were almost twice as many polygons to digitize, the process went more quickly. An important aspect of this

research was to show that digital image processing of historical panchromatic imagery

could enhance and facilitate visual interpretation of the imagery on a variety of terrains

and features.

The visual interpretation of the imagery required a zoom level between 1:1,500 and 1:3,000 on the Salt Lake City image and between 1:1,000 and 1:4,000 on the Ogden image. These zoom levels were determined by how well the digitizer could see the details in the imagery while still retaining some reference to the context of the objects being

examined. In the experience of the digitizer, a more consistent result is also achieved if

there is not a large variance in the viewing scale of the objects in the scene. If an area is

digitized at 1:3,000 and another area at 1:24,000 then the details being observed will not

be consistent throughout the study area. Digital image processing, on the other hand, classifies by pixel without involving scale issues. This is a major difference in the

methodology of classification. Digitizing at varying scales is both an advantage and

disadvantage compared to digital classification. When zoomed in to the pixel level, it was impossible to discern what the objects in the imagery were. A large variance

in scale can lead to inconsistency, but a small variance in digitizing scale can help the

digitizer to consider a feature’s relationship to surrounding objects when determining

what the object is, unlike most per pixel digital classifications. By using a small variance

in digitizing scale for land use/land cover classification of panchromatic imagery, both

detail and consistency can be maintained while the expert knowledge of relationships and

contexts of features can be utilized.

This project used relatively small areas of interest. On examining the land use/land cover classes from the beginning of the project to its conclusion, some areas of the initial digitizing could, on further analysis, have been refined or changed, especially in diverse areas containing many intricate changes in the landscape. There was a

tendency to generalize areas where the land use/land cover types are fragmented. This

tendency is most notable in the southern half of the Ogden image, where the

forested areas are broken up by water and grasslands. The initial digitizing was not

changed to reflect new perceptions of the land class areas on the imagery. Some of these


inconsistencies have an effect on the final accuracy of the digital classifications, as it was

apparent that at some points the digital classification was more correct than the visual

interpretation. This is a limitation of the research.

One of the major differences found in this research between the manual digitizing

classification and the digital image processing classification was the level of detail

achieved in the classifications. In the Ogden image the total number of polygons

digitized was 172 (Figure 16) and the total number of polygons digitized for the Salt

Lake City image was 331 (Figure 17). In comparison, the digital classifications yielded several thousand polygons before post processing. After post processing, most digital image classifications still exceeded the digitized baseline information, but results averaged about 500-1,000 polygons. It was a difficult task to digitize very detailed areas

on the imagery. This study has shown that by utilizing digital image processing

techniques to help facilitate visual interpretation of land use/land cover classes, the

analyst can take advantage of the detail and repeatability that digital processes provide

while improving the classification accuracy using a GIS in post processing the results.

Results using visual interpretation and heads up digitizing may provide more initial

accuracy, but digital image processing lends some added consistency to the process.

4.2. Unsupervised Classification

Supervised and unsupervised classification results varied depending on the image, the

classification method, pre-processing, and post-processing. Panchromatic imagery

presents many challenges as previously mentioned in this study. The heterogeneity of

the study area also has an effect on how successful classification is. This study has


Figure 16 – Classification using visual interpretation of the Ogden image

Figure 17 - Classification using visual interpretation of the Salt Lake City image


concentrated on supervised classification, as this methodology gave better classification results and was far less time-consuming once the training classes were obtained.

Overall classification accuracy for unsupervised classification was low, ranging from 25-40% on both study areas for the detailed classification. The more

generalized classification schemes improved the results by 8-50% (Table 3). The largest

improvement was from level 1 to level 3 using 10 spectral classes, the ISODATA classifier, and the Halounova image. Running the unsupervised classification with more classes did

not generally improve accuracy except in the Salt Lake City image with the level 2 land

use/land cover classification scheme. Unsupervised classification with ten spectral

classes provided the best overall accuracy on both the Salt Lake City and Ogden images.

One of the major problems in assigning information classes to spectral classes was that

there was so much overlap between classes such as forest and dark fields, and water and

medium fields. There was no easy way to separate these areas on the raster image.

These unsupervised classifications appear very similar to each other visually (Figure 18).

The 100-class ISODATA result was more difficult to assign classes to, as many of the areas were very small and at times appeared evenly divided between two or three competing classes such as water, grassland, and medium fields.

Table 3 – ISODATA overall accuracy results for Ogden and Salt Lake City study areas

Land use/land cover classification scheme   10 Classes   25 Classes   100 Classes
Ogden – level 1                                 39%          40%          39%
Ogden – level 2                                 48%          50%          47%
Ogden – level 3                                 55%          56%          49%
Salt Lake City – level 1                        39%          38%          38%
Salt Lake City – level 2                        51%          51%          53%
Salt Lake City – level 3                        67%          64%          65%


Although unsupervised classification showed low accuracy in both study areas, the

results showed some important trends in the data. In the Ogden image it was very difficult to extract more than five classes, which indicates that land cover types such as water, medium fields, and grassland are very similar. Panchromatic imagery

would require more pre- and post-processing to achieve a more accurate classification

using eight land cover types. As the classes were aggregated into larger parent classes, the classification accuracy increased accordingly. Unsupervised classification, even on a small study area such as this, was more time-consuming than supervised classification and provided somewhat unsatisfactory results.

The Salt Lake City image proved difficult in a different way in that the mixed urban

area consisted of commercial, residential and transportation areas which appear very

distinct using visual interpretation but present difficulties for digital classifiers. Urban

areas are uniquely difficult to classify on multispectral imagery, as there is such a mixture

of impervious surfaces. Black and white high spatial resolution imagery complicates this


Figure 18 – Ogden image ISODATA classifications


situation, as there was extreme overlap between classes: features such as buildings, mixed surfaces like parking lots, and vegetation exist in both residential and commercial areas, making these areas difficult to distinguish. None of the ISODATA

classifications of the Salt Lake City imagery were able to distinguish all five detail-level land cover types. Trees, transportation, and commercial land cover types

were the only three land cover types that could be classified from the 10, 25, and 100

spectral class ISODATA classifications (Figure 19). Many areas of overlap exist

between the commercial and transportation classes in all three unsupervised

classifications. The transportation network in this image is a very distinct linear feature

when classifying the imagery through visual interpretation, but there are many tonal variations in the pavement, which cause a great deal of confusion for most traditional

unsupervised classifiers. Grass and residential land cover types could not be distinguished from commercial, transportation, and tree areas, as there was considerable tonal overlap between them.

K-Means unsupervised classifications with 10 and 25 spectral classes were performed on the Ogden imagery using a layer-stacked image consisting of the original image and the following texture characteristics: mean, variance, and homogeneity. Surprisingly, the use of texture did not improve the unsupervised classification using the level 1 land use/land cover types. Overall accuracy was 25% for 10 classes and 34% for 25 classes. This is

most likely because there was little to no distinction between the field areas, as most of them exhibit a smooth surface. The field areas and the water areas were also confused. Aggregating the classification into the more generalized classes

increased accuracy significantly in the unsupervised classification. This was particularly


Figure 19 – Salt Lake City image ISODATA classifications (left to right: 10, 25, and 100 spectral classes)

apparent in the texture image. Accuracy increased to 54% for the level 2 classification (the 3 land use/land cover type scheme) and to 71% for the level 3 classification (the 2 land use/land cover type scheme). The Halounova image, which consisted of texture and filtered layers, did not improve the level 1 unsupervised classification of the Ogden image, but did slightly improve the Salt Lake City level 1 overall accuracy. Due to the poor accuracy obtained using texture with unsupervised classification, no further analysis of this kind was performed in either study area.

4.3 Supervised Classification

Supervised classification of panchromatic imagery again presents many challenges.

The SVM classifier was used to perform the supervised classification because it can process single band imagery and because it provided better results. The supervised classifiers

available in ENVI are limited when using single band data as many options such as

maximum likelihood, spectral angle divergence, and neural net all require more than one


band of data to classify the image. The classification baseline for the original Ogden

image had a poor overall accuracy of 39%.
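SVM itself requires a library implementation, but the minimum distance classifier (also used in this research) makes a compact stand-in for showing how a single-band supervised classifier operates: each pixel DN is assigned to the class whose training mean is nearest. The class names and DN samples below are hypothetical, loosely echoing the ranges in Table 4.

```python
import numpy as np

def train_means(samples):
    # samples: dict mapping class name -> 1-D array of training DNs
    return {c: v.mean() for c, v in samples.items()}

def min_distance_classify(dns, means):
    # Assign each DN to the class with the nearest training mean.
    names = list(means)
    centers = np.array([means[c] for c in names])
    idx = np.abs(dns[..., None] - centers).argmin(-1)
    return np.array(names, dtype=object)[idx]

# Hypothetical single-band training DNs (not the thesis training sets)
samples = {
    "forest": np.array([40, 60, 70, 90, 110]),
    "impervious": np.array([150, 165, 170, 180, 190]),
    "bare_earth": np.array([190, 198, 200, 205, 210]),
}
means = train_means(samples)
pred = min_distance_classify(np.array([65, 172, 201]), means)
```

Note how a DN near 190 would sit almost equidistant between the impervious and bare earth means; with only one band there is no further spectral evidence to break such ties, which is the core limitation discussed above.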

The training data statistics showed how challenging it is to derive a detailed classification from an unprocessed panchromatic image. The training areas displayed

either bimodal or multimodal histograms, which in itself is a challenge for classifiers such as maximum likelihood, which assume that the data follow a normal distribution (Jensen 2005, Campbell 2008). Another challenge in classifying the Ogden

image was that certain land cover types such as forest and impervious surface have a

large standard deviation. Visual interpretation of the imagery shows that forested areas display considerable texture and wide variation in tonal properties. Impervious surface has the same problem in that some of

the roads are very light and others are a medium gray. The min and max values across

the training set for the individual classes also overlap. Several different training sets were examined, but this problem occurred in all of them. The land cover types that

had the most overlap with other classes were forest with a min of 0 and a max of 162 and

impervious surface with a min of 74 and a max of 223 (Table 4). There are no other spectral characteristics available to help distinguish these subtleties in gray level imagery.
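Statistics like those in Table 4, and the range-overlap problem they reveal, can be computed with a few lines of NumPy. The sample DNs below are hypothetical; only the water and forest min/max pair fed to the overlap check is taken from Table 4.

```python
import numpy as np

def class_stats(samples):
    # samples: dict mapping class name -> 1-D array of training DNs.
    # Returns (min, max, mean, sample std, point count) per class.
    return {c: (int(v.min()), int(v.max()),
                round(float(v.mean()), 1),
                round(float(v.std(ddof=1)), 1), v.size)
            for c, v in samples.items()}

def ranges_overlap(a, b):
    # True when the two (min, max) DN ranges intersect.
    (amin, amax), (bmin, bmax) = a, b
    return amin <= bmax and bmin <= amax

# Hypothetical training DNs; real values would come from digitized polygons
samples = {
    "water": np.array([129, 140, 151, 160, 177]),
    "forest": np.array([0, 40, 70, 110, 162]),
}
stats = class_stats(samples)
# Water vs. forest min/max taken from Table 4
overlap = ranges_overlap((129, 177), (0, 162))
```

Running the overlap check over every class pair would flag exactly the confusions the text describes, such as forest against impervious surface.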

The Salt Lake City image presented even more challenges in part due to the

characteristics of the image and the detailed land cover classification scheme. The land

cover types were very detailed and textured. Overlap between classes is impossible to

avoid in this type of area using a gray level image. The training data statistics again help


Table 4 – Training sample statistics from original Ogden image

Land Cover Type       Min  Max   Mean  StDev  Points
Water                 129  177  151.6   13.1    1611
Forest                  0  162   70.3   33.7    3246
Grassland              81  197  129.9   13.8    1944
Dark Field             53  102   78.2   12.0    2706
Medium Field           94  150  124.1   13.5    5056
Light Field           148  194  179.0    6.5    1918
Impervious Surface     74  223  170.7   28.3     732
Bare Earth            180  227  197.8    7.5    1393

to illustrate the overlap which occurs between commercial, residential and transportation

classes throughout this image (Table 5). The histograms were either bimodal or

multimodal. Although the histogram for transportation approached a normal distribution,

there were still many peaks and valleys indicating variations in gray levels in the image

for this land cover type.

Supervised classification results showed that it was very difficult to extract more than

8 classes on the Ogden image and 5 classes on the Salt Lake City image. One of the

limitations of using panchromatic imagery for land use/land cover classification is that

the DN values which make up the signature for many land use/land cover types contain a

significant amount of confusion. Real world features may be difficult to identify without

taking into account their spatial context (Hung and Wu 2005). Land use/land cover types

may need to be generalized. For example, detail such as corn or wheat fields may not be characterized using panchromatic imagery, but broader classes such as dark fields and light fields, or simply cropland, may be possible. The increased accuracy achieved when aggregating land use/land cover


types into the level 2 and level 3 classification schemes supports this conclusion. Training

samples tested with a greater number of pixels increased the confusion between classes such as water and medium fields, and between forest and dark or medium fields. These larger

samples had a broader min and max range for all classes.

This research showed that although it is possible to use supervised classification on

gray level imagery, it does require a significant amount of post processing of both the

classification result and the vector file. The initial supervised classification is very noisy

(Figure 20) compared to some of the results obtained using texture and object-based

analysis. All classifiers had difficulty distinguishing between transportation areas and

grass and residential areas. The most confusion occurred between residential and

commercial areas, with almost no residential areas correctly classified.

Both Minimum Distance and SVM classifiers were able to distinguish trees throughout

the image better than other land cover types. One of the reasons was that the trees training sample had a mean much farther from those of the other land cover types.

Table 5 – Training sample statistics from original Salt Lake City image

Land Cover Type   Min  Max   Mean  StDev  Points
Trees               0  133   38.8   18.4    3246
Grass              41  156   92.9   26.3     907
Transportation     60  169  108.5   14.8    9286
Residential         6  169  112.0   31.6   10738
Commercial         35  179  130.4   21.6   14221


Figure 20 – Minimum distance (left) and support vector machine (right) classification of the Salt Lake City image

4.4 Image Enhancement and Texture Analysis

Many combinations of image filtering, texture operators, principal components analysis, and convolution filters were used to try to improve on the classification baseline. Convolution filtering alone was only moderately successful. Low pass

filtering was somewhat more successful than high pass filtering as the high pass filter

produced an excessive amount of noise and reduced the number of distinguishable land

use/land cover types in the resulting classifications. Window sizes used for filtering ranged from 3x3 to 11x11, in odd increments, for both the high pass and the low pass filters. Examination of the images produced by the high pass filtering shows that as the window size increases, the contrast along object edges becomes more prominent. The low pass filtering causes the image to become smoother as the window


size increases, although the blurring along edges reduced classification accuracy and

caused most of the cropland classes to become confused when using an ISODATA

classifier. Results were better using supervised classification. A minimum distance

classifier was used on the one band filtered images. The high pass filter results were too noisy to be useful, especially with the 3x3 filter, where individual features such as forest

and grassland are almost indistinguishable (Figure 21). The low pass filter caused confusion between classes such as medium fields, grassland, and water, and between dark fields and forest (Figure 22). A combination of a median filter and a Gaussian high pass filter was

used to create a two layer image using 3x3 and 11x11 window sizes, respectively. The

result was similar to the low pass filter but was noisier in the field and forest areas.

These combination images did not appear to improve classification results, so no further analysis was conducted on them.
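The low pass and high pass behavior discussed above can be sketched with SciPy. This is an illustrative construction rather than ENVI's filter kernels: low pass as a local mean, and high pass as the original minus its local mean, applied to a synthetic step edge.

```python
import numpy as np
from scipy import ndimage

def low_pass(img, size):
    # Local mean over a size x size window (smooths the image).
    return ndimage.uniform_filter(img.astype(float), size=size)

def high_pass(img, size):
    # One common high-pass construction: original minus its local mean,
    # which emphasizes edges and suppresses slowly varying tone.
    return img.astype(float) - low_pass(img, size)

# Synthetic step edge: dark field on the left, bright field on the right
img = np.zeros((9, 9))
img[:, 5:] = 100.0

lp3 = low_pass(img, 3)
hp3 = high_pass(img, 3)
```

Away from the edge the high pass response is zero, while pixels adjacent to the step get a strong response; widening the window spreads that response over more pixels, mirroring the increasing edge prominence (and noise) the text describes for larger high pass windows.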

Figure 21 – Minimum distance classification of the Ogden image with high pass filter (3x3 and 11x11 windows)


Figure 22 – Minimum distance classification of the Ogden image with low pass filter (3x3 and 11x11 windows)

Halounova (2009) combined filtering, texture, and object-oriented classification to

classify panchromatic aerial photos. Similar combinations were examined in this study

using both pixel based and object-based classification for the Ogden study area. A multiband image was created using a median filter, a Gaussian high pass filter, the mean texture measure with 11x11 and 21x21 filter sizes, the variance texture measure with 11x11 and 21x21 filter sizes, and the dissimilarity texture measure with 11x11 and 21x21 filter sizes.

This image is similar to the most successful combination used in the Halounova (2009) study, the only differences being that the variance texture measure was used instead of standard deviation and that the Gaussian filter in the ENVI software did not offer an option for 9 standard deviations. The 11x11 texture window was used to preserve the

smallest objects in the image. In the Ogden image most of the individual trees fit this window; although a couple of buildings were smaller, it was determined that the majority of small objects were better represented by the 11x11 window. The larger window sizes serve mainly to filter noise from the image. The

pixel-based supervised classifications using the level 1 classification scheme were poor for the minimum distance and neural net classifiers at 35%. Accuracy for the level 2 and level 3 classification schemes increased substantially, ranging from 50% to 55% for level 2 and 79%

for level 3. An ISODATA unsupervised classification was also run on this image using

10 spectral classes. Class names were assigned based on the majority level 1 land

use/land cover type contained in each spectral class. Due to significant confusion

between level 1 land use/land cover types, the level 1 classification scheme could not be represented by the 10 spectral class ISODATA classification. Class 1 was labeled as

medium field because this was the majority field type contained in this spectral class.

The three field types (medium, light, and dark) used in the level 1 classification were

clustered together in class 1 due to the effects of filtering and texture on this image. The

other spectral classes were assigned as follows: classes 2-3 were labeled as grassland,

and classes 4-10 were labeled as forest (Figure 23). This helps to explain why the level 1

overall accuracy was so low at 32% and why it increased substantially to 82% for the level 3

classification scheme. The level 3 classification scheme using ISODATA unsupervised

classification was also comparable to the supervised classification overall accuracy, which was 79% for both the neural net and minimum distance classifiers on this image.
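The majority-rule labeling used to name the ISODATA spectral classes can be expressed compactly: for each cluster, count the reference classes of its pixels and take the most frequent. The 3x3 toy maps below are hypothetical, but the mechanism is the one described above.

```python
import numpy as np

def majority_labels(clusters, reference):
    # Assign each spectral cluster the reference class it most overlaps.
    out = {}
    for c in np.unique(clusters):
        classes, counts = np.unique(reference[clusters == c],
                                    return_counts=True)
        out[int(c)] = classes[counts.argmax()]
    return out

# Toy cluster map and co-registered reference (ground truth) map
clusters = np.array([[1, 1, 2],
                     [1, 2, 2],
                     [3, 3, 3]])
reference = np.array([["field", "field", "grass"],
                      ["field", "grass", "forest"],
                      ["forest", "forest", "forest"]])
lut = majority_labels(clusters, reference)
```

When several reference classes split a cluster almost evenly, the majority label hides that confusion, which is exactly how three field types can collapse into a single "medium field" cluster.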

A Halounova based image was also created for the Salt Lake City study area. The

resulting classifications were slightly more successful for ISODATA using 10 spectral

classes at 42%. The Halounova image classified with the SVM classifier had the best

overall accuracy for supervised classification of the Salt Lake City study area at 43%.


Figure 23 – ISODATA classification of the Halounova image (10 spectral classes)

One of the shortcomings of combining the texture and filter images was that it created edge artifacts where the larger processing windows reduce the valid extent of the imagery.

The neural net and minimum distance classifiers handled these areas better than the

maximum likelihood and ISODATA classifiers. Object-based classification was slightly

less accurate than the pixel based classification. It was difficult to segment the image and

obtain a good representation of classes. Overall accuracy of object-based classification

for level 1 was 35%, level 2 was 52%, and level 3 was 77%. One of the biggest problems

with the object-based classification was that there was significant confusion between

water, medium fields, and dark fields. The results may be due to the study area characteristics or to software differences, as Halounova (2009) used eCognition software, which appears to have a more robust segmentation algorithm than ENVI.

The most promising image enhancements were texture operators and principal components analysis. The occurrence texture images using moving window sizes of 3x3,

5x5, 7x7 and 11x11 were the most successful texture images. On the Ogden image these improved supervised classification overall accuracy by 11% to 13% over the baseline. The 3x3 window had the highest accuracy at 52% for

the detailed classification. Overall accuracy improved considerably as the land cover

scheme was generalized. The 3x3 occurrence measure image increased in overall accuracy to 65% at level 2 and 77% at level 3.
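First-order (occurrence) texture measures are computed from the gray-level histogram inside a moving window. As a sketch of the idea rather than ENVI's implementation, the following derives two such measures, data range and entropy, over a 3x3 window of a toy image.

```python
import numpy as np

def occurrence_texture(img, w=3):
    # First-order (occurrence) texture: per-window data range and entropy.
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (w, w))
    flat = win.reshape(img.shape[0], img.shape[1], w * w)
    data_range = flat.max(-1) - flat.min(-1)
    # Shannon entropy of the gray-level histogram inside each window
    ent = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            _, counts = np.unique(flat[i, j], return_counts=True)
            pr = counts / counts.sum()
            ent[i, j] = -(pr * np.log2(pr)).sum()
    return data_range, ent

# Toy image: a uniform patch next to a contrasting column
img = np.array([[5, 5, 5, 9],
                [5, 5, 5, 9],
                [5, 5, 5, 9],
                [5, 5, 5, 9]])
rng_img, ent_img = occurrence_texture(img, 3)
```

Uniform windows yield zero range and zero entropy, while windows straddling the contrast boundary light up, which is how occurrence texture separates smooth fields from textured forest in the classifications above.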

The GLCM images with eight layers were somewhat difficult to work with in that

several of the resulting layers such as entropy, second moment, dissimilarity, and

correlation use floating point values that range from 0 to 1. The 3x3 window produced an

image with too much noise and presented problems for some classifiers, including neural net and maximum likelihood, where the result was a few speckled areas or a completely

gray image. The GLCM images were easier to work with after the individual layers were

split and saved as 8 bit unsigned integer TIFF images. In this case the DN ranges were

all within 0-255. As previously mentioned there is still a good deal of overlap between

texture characteristics such as homogeneity and second moment. Through trial and error, combinations that included the mean, variance, and contrast bands achieved some of the more successful classification results, ranging from 50% to 98% overall accuracy.
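A gray-level co-occurrence matrix is simple to construct by hand, which helps clarify why several GLCM-derived layers fall in the 0 to 1 range and need rescaling to 8-bit. The sketch below builds a symmetric, normalized GLCM for a tiny 2-level image, derives the contrast measure, and rescales a 0 to 1 float layer to uint8; it is illustrative, not the ENVI texture tool.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    # Gray-level co-occurrence matrix for offset (dy, dx),
    # symmetric (both pair orders counted) and normalized to sum to 1.
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            P[i, j] += 1
            P[j, i] += 1
    return P / P.sum()

def contrast(P):
    # GLCM contrast: sum over (i - j)^2 * P[i, j]
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

def to_uint8(layer):
    # Rescale a 0-1 float texture layer to 8-bit DNs (0-255)
    lo, hi = layer.min(), layer.max()
    return np.round(255 * (layer - lo) / (hi - lo)).astype(np.uint8)

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 1, 1]])
P = glcm(img, levels=2)
dn = to_uint8(np.array([0.0, 0.5, 1.0]))
```

Measures like contrast are weighted sums over this normalized matrix, so their natural scale depends on the number of gray levels; stretching each derived layer to 0-255 before stacking, as the text describes, puts all layers on a comparable footing.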


Other image enhancements tested with the detailed classification scheme were an edge enhancement 1-band image (39% overall accuracy), the 3x3 co-occurrence layers stacked with the original image as a 9-band image (50% overall accuracy), a Laplacian filter 1-band image (34% overall accuracy), a closed morphology filter 1-band image (43% overall accuracy), and a PCA 2-band image consisting of the original image plus the first principal component of the mean, contrast, and variance 3-band image (50% overall accuracy). Generally, processes that

smoothed the image rather than sharpened it were more successful. Texture

analysis was successful in separating highly textured areas such as forest from cropland,

but had difficulty separating medium and light fields from water areas, as both are

similar in tonal range and have a relatively smooth texture.
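The PCA band mentioned above (the first principal component of a mean/contrast/variance stack) reduces correlated texture layers to a single band. A minimal NumPy version, applied to a toy, perfectly correlated 3-band table of pixel values:

```python
import numpy as np

def first_principal_component(bands):
    # bands: (n_pixels, n_bands) array; returns the PC1 score per pixel.
    X = bands - bands.mean(axis=0)           # center each band
    cov = np.cov(X, rowvar=False)            # band covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]           # direction of maximum variance
    return X @ pc1

# Three hypothetical texture bands that are perfectly correlated,
# so PC1 should capture all of the variance
bands = np.array([[10.0, 20.0, 30.0],
                  [12.0, 24.0, 36.0],
                  [14.0, 28.0, 42.0]])
scores = first_principal_component(bands)
```

Because the toy bands are perfectly correlated, the PC1 scores carry all of the information in one band (the sign of the component is arbitrary), which is the compression the 2-band PCA image exploits.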

4.5 Object-based Image Analysis

Object-based image analysis offered some very promising results using the SCRM region-based approach and the ENVI EX object-based tools. Unfortunately, the SCRM-based classification could not be verified in the same manner as the other classifications.

The software produced an image which could not be projected in ArcMap; visually the results looked very good for the Salt Lake City and Ogden imagery (Figure 24), but the Extract Values to Points tool could not be used. The Ogden image produced a

shapefile output that could be lined up with the original imagery, but the shapefile option

would not work for the Salt Lake City image. In order to determine the accuracy of the

SCRM it would be necessary to assign a land use/land cover classification to each

polygon. It did not seem practical to do this since there were over 1000 polygons for the

Ogden study area. Converting the polygons to a raster and then classifying the regions

proved unsuccessful, as the SCRM output was based on the number of regions and did not


contain any spectral data. This tool produced visually very good results, but it has

limitations and inconsistent performance. If these limiting factors could be improved, this tool might be more successful than the ENVI EX object-based tools.

Object-based classification using ENVI EX feature extraction proved to be the most

successful classification of the original unprocessed Salt Lake City image with an overall

accuracy of 53%. This is still relatively low, but for the detailed classification this is a

14% improvement over the best pixel based classification results. As the land cover

classification scheme was generalized, the overall accuracy improved to 64% for level 2

and 74% for level 3. The object-based classification had the advantage of using spatial and textural relationships to break the imagery up into its respective classes.

One of the difficulties of OBIA is that there are generally no set rules for determining

the segmentation of the image. The segmenting scale and merging scale were determined by using the preview function to see whether the objects in the scene were well represented.

Figure 24 – SCRM object-based segmentation of the Ogden and Salt Lake City images


A low segmentation scale seemed more successful, as slightly over-segmenting the image preserved the field boundaries, although this caused some over-segmentation in

the forest areas. The merging step after the initial segmentation helped to eliminate some

of the extra segments caused by keeping the field boundaries. For the Ogden project area

a segmentation scale of 45 and a merging scale of 75 were used. Two types of classification are available in the ENVI software: rule based and example based. This study used example based classification, as it is the simplest and most straightforward approach.

The Ogden image produced a fairly successful object-based classification on a PCA

texture image with an overall accuracy of 50%. The generalized classifications yielded

an overall accuracy of 61% for level 2 and 77% for level 3. The object-based

classification on the Ogden image did a good job of distinguishing large areas in the

imagery, but there was difficulty distinguishing between the medium and light field

classes. This had the effect of lowering the overall accuracy of the level 1 classification

scheme. The object-based classification for both images had less noise than other

classification methods. Since the object-based classifications were the most successful

and produced results with less noise than the per pixel supervised and unsupervised

classifications, they were used to test the post-processing system explained in the next

section. A per pixel classification based on the methods used by Halounova (2009) was

used for comparison.

4.6 Post Processing and Automation

Post processing the image classification rasters was an important step in order to use

these results to facilitate visual interpretation. Supervised classification produced better


results in general than unsupervised classification, but there is also a practicality aspect to

the usefulness of digital image processing to aid an image analyst. It became apparent

after much trial and error using different classifiers and processing methods that there

was not one single method or process that was ideal for classifying panchromatic

imagery. A good base classification makes the job of visual interpretation easier, but it is

not entirely necessary if a good system is put into place to utilize the digital image

classification. The post processing model and the polygon editing toolbar provide a way

to give the user a base to start from even if it is less than ideal. This flexibility in the

system was an important consideration due to the variability that the user may encounter

between land use/land cover types for different projects and variability in general image

quality.

A post-processing system was designed to improve and clean up final land use/land

cover results obtained from the classified images. The system consists of 3 steps. Step 1

is to run the post-processing ArcGIS Model (Figure 15), which converts the image into a

polygon layer. Step 2 is to interactively edit the resulting polygon layer using the custom

vector editing toolbar. Finally step 3 is to perform an accuracy assessment to determine

the success of the results.
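Part of the cleanup performed in step 2 has a simple raster analog: reassign any single-pixel island to the majority class of its 4-neighbors, echoing the merge-with-largest-neighbor edit described below. A minimal sketch with a hypothetical class raster:

```python
import numpy as np

def remove_island_pixels(classes):
    # Reassign single-pixel 'islands' (no like-classed 4-neighbor) to the
    # majority class of their neighbors -- a raster analog of the
    # merge-with-largest-neighbor polygon edit.
    out = classes.copy()
    h, w = classes.shape
    for y in range(h):
        for x in range(w):
            nbrs = [classes[ny, nx]
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                    if 0 <= ny < h and 0 <= nx < w]
            if classes[y, x] not in nbrs:
                vals, counts = np.unique(nbrs, return_counts=True)
                out[y, x] = vals[counts.argmax()]
    return out

# Hypothetical classified raster with one noisy island pixel (class 2)
noisy = np.array([[1, 1, 1],
                  [1, 2, 1],
                  [1, 1, 1]])
clean = remove_island_pixels(noisy)
```

Doing this raster-side pass before converting to polygons would reduce the number of tiny polygons the interactive editing toolbar has to merge by hand.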

Heads-up digitizing is often performed using GIS software, with the output being a land cover classification vector layer. A vector layer provides a more flexible medium for visual interpretation, as each member of a class is represented as a discrete feature rather than a non-contiguous area. Vertices and attributes are easily edited in vector format. The

custom polygon editing tool was used to delete any remaining noise and to reduce small

island pixels throughout the classification. Refining the object-based classification


through the post processing system mentioned above took between 3 and 5 additional hours for each image, which is similar to the time required for digitizing. Depending on the project, an overall quick cleanup could be performed in about an hour and would likely add substantial accuracy to the classification. Most of the system involved using the merge

with largest neighbor tool and the merge with selected polygon tool. The merge with selected polygon tool was used most often, as the largest neighbor was occasionally difficult to discern.

The object-based classification of the study areas provided a good base to work with

as there were fewer areas of noise and island pixels. The post processing system was

very successful at improving the overall accuracy of the classification for both images

(Figures 25 and 26). For the Ogden image using the level 1 scheme, overall accuracy

was increased from 56% to 85% with a Kappa index of 83%, indicating substantial

agreement between the classification and the reference data. The level 2 scheme’s

overall accuracy was increased from 80% to 93% with a Kappa Index of 89%, and the

level 3 scheme produced an overall accuracy of 98% improved from 94% with a Kappa

index of 97%. The Salt Lake City image also yielded substantial accuracy improvements

from 53% to 72% (65% Kappa index), from 63% to 76% (65% Kappa index), and from 71% to 82% (63% Kappa index) for the level 1, level 2, and level 3 classification schemes respectively.

A per pixel classification based on Halounova (2009) methodology using texture and

filtering was also used to determine whether the post processing system could improve a

classification with an initially lower overall accuracy. Before post processing, the overall accuracy of this image was 35% (Figure 27). The post processing system


analysis of the Halounova (2009) based image took five hours compared to three hours

for the object-based analysis. The objective was not to perfect the classification but to improve the results effectively and efficiently in terms of time spent. After post processing, the overall accuracy for the level 1 classification

scheme was improved from 35% to 75% (72% Kappa index). Level 2 was improved

from 50% to 87% and level 3 from 79% to 98%. The improvements in level 2 and level

3 are similar to the gains seen for object-based classification. This is likely due to the

good separation between cropland and all other classes on this image. This study has

shown that the post processing system can improve a poor initial classification

substantially.

Figure 25 – Ogden object-based classification image and post processing system vectors


Figure 26 – Salt Lake City object-based classification image and post processing system vectors

Figure 27 – Ogden pixel based classification and post processing system vectors


4.7 Classification Accuracy and Results

This research showed that classification accuracy for panchromatic imagery could be

variable depending on the image and the classification scheme. One of the biggest

factors that affected classification accuracy was the number of land cover categories.

Some projects require a high level of classification accuracy. This research has also

shown that by using digital image processing techniques and a system of post processing

on the results, it is possible to achieve a high level of accuracy (over 80%) for a project

requiring detailed land use/land cover classification. The Kappa index, which indicates how much of the agreement may be due to chance, was quite variable throughout this study, ranging from 25% to upwards of 97%. The higher the number, the more agreement there

was between the reference data (digitized baseline information) and the classification

image. It is notable that the Kappa index was extremely high after the post-processing

system was applied. This high Kappa value shows that the agreement between the

ground truth classification and the post-processing results increased, and positive results

had less to do with chance alone. Overall accuracy does provide a benchmark to

determine in general how the classifications compared to each other. Please refer to

Appendix 1 for a sample of the error matrices used for this research.
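Overall accuracy, the Kappa index, and user's accuracy all follow directly from the error matrix. A small sketch with a hypothetical 2-class matrix (rows = reference class, columns = mapped class):

```python
import numpy as np

def accuracy_metrics(cm):
    # cm[i, j]: pixels of reference class i mapped to class j (error matrix)
    n = cm.sum()
    po = np.trace(cm) / n                        # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)                 # Kappa index
    users = np.diag(cm) / cm.sum(axis=0)         # user's accuracy per mapped class
    return po, kappa, users

# Hypothetical 2-class error matrix (not from the thesis results)
cm = np.array([[40, 10],
               [ 5, 45]])
po, kappa, users = accuracy_metrics(cm)
```

With this matrix the overall accuracy is 0.85 while Kappa is lower at 0.70, illustrating the point above: Kappa discounts the share of agreement expected by chance alone.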

Classification results between supervised and unsupervised classification were very

similar on the original panchromatic imagery for the level 1 classification scheme.

Overall accuracy for unsupervised classification ranged from 39%-40% on the Ogden

image while the supervised classification baseline information overall accuracy was 39%.

The overall accuracy of the Salt Lake City image unsupervised classification ranged from

38% to 39% and the supervised classification baseline was 38%. Both study areas had


very low overall accuracy. No single classifier, supervised or unsupervised, stood out as significantly better than another for the classification of the original panchromatic image in either study area. Please refer to Tables 6 and 7 for a comparison of classification results for the Ogden and Salt Lake City study areas.

From a user's perspective, supervised classification provides more usable and easier to work with results, given that identified classes are returned. This in turn offers a better

base that can be run through the post processing system proposed in this research.

Supervised classification takes into account the classes that the user has determined

beforehand and does not require further identification of classes. Identifying classes and

labeling them is much more time consuming in the unsupervised classifications (Tables 6

and 7). Unsupervised classification results were useful for seeing how classes were

grouped in the imagery and were a good illustration of why the more generalized

classifications had higher overall accuracy. After labels were assigned to the unsupervised classes, confusion and class combining left only 4-7 level 1 land use/land cover classes that could be categorized in the Ogden study area, depending on the number of spectral classes chosen and whether texture/filtering was used. The object-based classifications on the other hand had the most promising results

in this study for use in the post processing system, although the system works to improve

even an unsatisfactory classification base.

When specific land use/land cover types were examined, the unsupervised

classification baseline was very good at classifying the dark, medium, and light fields but

very poor at classifying water, forest, grassland, impervious surface and bare earth.

Supervised classification worked well on dark fields, water, and light fields. In both


unsupervised and supervised classifications the dark fields’ user’s accuracy was

consistently the highest, ranging from 60% to 92.5%. User's accuracy for impervious

surface and bare earth was consistently low for both methodologies, ranging from 0% to 35%. See Table 8 for a comparison of the user's accuracy for individual land use/land cover results for the Ogden study area.

Table 6  Ogden image overall accuracy and level 1 completion time

Classification method                                                Time*    Level 1  Level 2  Level 3
ISODATA 10 spectral classes – original imagery                        17 m      39%      48%      55%
ISODATA 25 spectral classes – original imagery                        22 m      40%      50%      56%
ISODATA 100 spectral classes – original imagery                       57 m      39%      47%      49%
ISODATA 10 spectral classes – Halounova image                         15 m      32%      51%      82%
Unsupervised K-Means 10 spectral classes – original,
    texture (mean, variance, homogeneity)                             12 m      25%      54%      71%
SVM – original imagery                                                 6 m      39%      53%      60%
Minimum Distance – Halounova 9 layer                                   1 m      35%      50%      79%
Neural Net – Halounova 9 layer                                        30 m      35%      55%      79%
SVM – 3x3 texture occurrence measures                                  8 m      52%      65%      77%
SVM – 5x5 texture occurrence measures                                  8 m      51%      65%      78%
SVM – 7x7 texture occurrence measures                                  8 m      50%      63%      77%
SVM – 11x11 texture occurrence measures                                8 m      46%      61%      80%
SVM – 3x3 co-occurrence measures and original                          8 m      50%      62%      76%
SVM – PCA, original, 3x3 texture (mean, homogeneity)                   7 m      35%      53%      62%
SVM – PCA, original, 3x3 texture (mean, contrast, variance)            7 m      50%      61%      75%
SVM – 5x5 edge enhance                                                 6 m      39%      50%      60%
SVM – Laplacian filter add back 80% original                           7 m      34%      46%      54%
SVM – closed morphology filter                                         7 m      43%      53%      62%
SVM Object-based – PCA, original, 3x3 texture
    (mean, contrast, variance)                                        15 m      56%      80%      94%
Neural Net – Halounova 9 layer post processing system                  5 h      75%      87%      98%
SVM – 11x11 texture occurrence measures post processing system     1 h 27 m    61%      72%      92%
SVM Object-based post processing system – PCA, original,
    texture (mean, contrast, variance)                                 3 h      85%      93%      98%

*Times were recorded only for level 1 classification; level 2 and level 3 results were obtained by aggregating the land use/land cover types. The Level 1, Level 2, and Level 3 columns give overall accuracy under each classification scheme.
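The aggregation mentioned in the footnote can be sketched as a simple class-to-group lookup. The grouping below (vegetation, cropland, other) follows the level 2 scheme shown in the Appendix 1 tables; the dictionary and helper are illustrative, not the thesis's actual tooling. It also shows why generalized schemes score higher: confusions inside a group stop counting as errors.

```python
# Hypothetical level 1 -> level 2 mapping, following the vegetation/cropland/other
# grouping used in the Appendix 1 level 2 tables.
LEVEL2 = {
    "Forest": "Vegetation", "Grassland": "Vegetation",
    "Dark Fields": "Cropland", "Medium Fields": "Cropland", "Light Fields": "Cropland",
    "Water": "Other", "Impervious Surface": "Other", "Bare Earth": "Other",
}

def overall_accuracy(classified, reference):
    # Fraction of samples whose label matches the reference label.
    hits = sum(c == r for c, r in zip(classified, reference))
    return hits / len(reference)

classified = ["Dark Fields", "Medium Fields", "Forest", "Water"]
reference  = ["Medium Fields", "Medium Fields", "Grassland", "Water"]

level1 = overall_accuracy(classified, reference)           # 0.5
level2 = overall_accuracy([LEVEL2[c] for c in classified],
                          [LEVEL2[r] for r in reference])  # 1.0
```

Here two level 1 errors (dark vs. medium fields, forest vs. grassland) disappear once the classes are aggregated, which mirrors the rise from level 1 to level 3 accuracy in Table 6.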


Adding texture layers offered fairly significant improvement on the overall accuracy

using the level 1 classification scheme for the Ogden study area. Only slight

improvement was seen in the Salt Lake City image. Overall accuracy for the Ogden

image using the occurrence texture measures ranged from 46%-52% and the Salt Lake

City image ranged from 38%-42%. The size of the texture window for the Ogden image

did not significantly impact the overall accuracy of the image until the window size

reached 11x11. At this point overall accuracy started to drop more quickly than between

the smaller windows (52%-46%). The overall range of accuracy is still fairly small at

6%. The texture window size for level 2 and level 3 classification schemes had even less

impact on overall accuracy with a range of only 3%-4% difference. The texture window

size had the opposite effect on the Salt Lake City image. Overall accuracy increased very

gradually as the texture window got larger (38%-42%). This helps to show the

differences in the two study areas. The Ogden study area is characterized by more homogeneous features such as forest and cropland, whereas the Salt Lake City study area has diverse land use/land cover classes such as commercial and residential, which are more heterogeneous in content, consisting of a variety of natural and manmade materials in widely varying shapes and sizes.

The use of texture also affected the various land use/land cover classes in both study areas. Texture increased the user's accuracy of features such as forest, grassland, dark fields, and impervious surface, while accuracy for water, bare earth, and medium fields decreased. Light fields had mixed results, decreasing or increasing depending on the texture window size. The Salt Lake City land use/land cover classes were also affected by texture. Texture measures increased the user's accuracy of the grass and residential classes


and decreased the user's accuracy of the trees and commercial classes (Table 7).

Table 7  Salt Lake City image overall accuracy and level 1 completion time

Classification method                                          Time*   Level 1  Level 2  Level 3
ISODATA 10 spectral classes – original imagery                  10 m     39%      51%      67%
ISODATA 25 spectral classes – original imagery                  22 m     38%      51%      64%
ISODATA 100 spectral classes – original imagery                 46 m     38%      53%      65%
ISODATA 10 spectral classes – Halounova image                    8 m     42%      60%      67%
Minimum Distance Classifier – original imagery                   1 m     38%      54%      63%
SVM – original imagery                                           5 m     38%      45%      58%
Maximum Likelihood – 5x5 texture occurrence,
    saturation stretch                                           1 m     42%      66%      63%
Minimum Distance Classifier – Halounova image                    1 m     42%      60%      65%
Neural Net – Halounova image                                    40 m     39%      48%      54%
Neural Net – 5x5 texture occurrence                             14 m     37%      46%      57%
SVM – 3x3 texture occurrence measures                           15 m     38%      45%      55%
SVM – 5x5 texture occurrence measures                           15 m     41%      48%      56%
SVM – 11x11 occurrence measures                                 16 m     42%      52%      58%
SVM – 5x5 texture occurrence measures, saturation stretch       16 m     41%      66%      75%
SVM – Halounova image                                           30 m     43%      52%      58%
SVM Object-based – original imagery                             10 m     53%      64%      71%
SVM Object-based post processing system – original imagery       3 h     72%      76%      82%

*Times were recorded only for level 1 classification; level 2 and level 3 results were obtained by aggregating the land use/land cover types.

The transportation

class user’s accuracy increased or decreased depending on the texture window size. The

individual land use/land cover classes in both study areas were affected by texture window size depending on how homogeneous or heterogeneous the features were. Classes that were more homogeneous, like water, medium fields, and bare earth, decreased in accuracy as window sizes increased. Classes that were more heterogeneous, like forest, grassland, and impervious surface, increased in accuracy as window sizes increased up to the 11x11 window size, where accuracy then started to decline again. This demonstrated that no single texture processing window was able to effectively characterize all the textures for either study area (Tables 8 and 9).
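The window-size effect can be made concrete with a first-order (occurrence) texture sketch: the per-pixel mean and variance of the gray levels inside a moving window. This is a simplified illustration of the occurrence measures used above, not ENVI's implementation, and the edge handling (reflection padding) is my own assumption.

```python
import numpy as np

def occurrence_texture(image, win):
    """Per-pixel mean and variance over a win x win neighborhood.
    Larger windows smooth heterogeneous classes but blur the boundaries
    of narrow homogeneous features."""
    pad = win // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    h, w = image.shape
    mean = np.empty((h, w))
    var = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            block = padded[i:i + win, j:j + win]
            mean[i, j] = block.mean()
            var[i, j] = block.var()
    return mean, var
```

A uniform field yields zero variance at any window size, while a boundary between fields yields high variance over a band roughly one window wide, which is why large windows help heterogeneous urban classes but hurt small homogeneous ones.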


Object-based classification had the highest overall accuracy for both study areas. For the level 1 classification scheme the Ogden study area had an overall accuracy of 56%, an improvement of 4%-31% over the pixel-based classifications. The level 2 and level 3 classification schemes showed even greater improvements of 15%-33% and 12%-45%, respectively. The level 3 classification scheme was the only classification that attained over 90% accuracy without post processing. This indicated that the object-based classification was good at distinguishing cropland from other land use/land cover classes, owing to the spectral homogeneity of dark and medium fields and the shape homogeneity of the fields. User's accuracy for individual land use/land cover classes in the level 1 classification scheme varied for object-based classification. Object-based classification showed significant improvement over pixel-based classification for water, dark fields, impervious surface, and bare earth, from 10%-72.5%. Forest, grassland, and medium fields varied depending on the classifier and texture properties. Light fields were the only feature with a decrease in accuracy, ranging from 27.5%-75%, depending on whether texture was used and whether supervised or unsupervised classification was used initially. This decrease in accuracy was most likely due to confusion between light fields and medium fields, which could be caused by spectral similarities and the close proximity of medium and light fields in the Ogden study area.

Object-based classification was also generally more successful than pixel-based classification for the Salt Lake City study area. The overall accuracy for the level 1 classification scheme was 53%, a 10%-16% improvement. The level 2 and level 3 classification schemes also showed improvements of 4%-17% over most of the pixel-based classifiers except the 5x5 texture processing window using the occurrence


Table 8  User's accuracies for individual land use/land cover types, Ogden study area
(Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Classification method                                      Wa     Fo     Gr     DF     MF     LF     IS     BE
ISODATA 10 spectral classes – original imagery             0%   27.5%  17.5%  92.5%  62.5%  67.5%    0%    35%
ISODATA 25 spectral classes – original imagery            15%    30%    7.5%  87.5%  62.5%   90%     0%   17.5%
ISODATA 100 spectral classes – original imagery            0%    30%     5%    85%   77.5%  92.5%    0%    15%
ISODATA 10 spectral classes – Halounova image              0%    90%    30%   72.5%  52.5%    0%     0%     0%
Unsupervised K-Means 10 spectral classes – original,
    texture (mean, variance, homogeneity)                  0%   27.5%  17.5%  92.5%  62.5%  67.5%    0%    35%
SVM – original imagery                                   62.5%  32.5%    0%   77.5%   45%    65%     0%   22.5%
Minimum distance – Halounova image                        13%    38%    38%    60%    25%    63%    47%     0%
Neural Net – Halounova image                              7.5%  62.5%  17.5%   60%   37.5%  42.5%  53.3%   7.5%
SVM – 3x3 texture occurrence measures                    52.5%   75%   47.5%  82.5%   35%   62.5%  36.7%  17.5%
SVM – 5x5 texture occurrence measures                     45%   77.5%   55%    80%   22.5%  72.5%  46.7%   7.5%
SVM – 7x7 texture occurrence measures                     35%   77.5%  52.5%  82.5%  17.5%  67.5%  56.7%   10%
SVM – 11x11 occurrence measures                          32.5%  67.5%   35%    80%    20%    65%   63.3%   7.5%
SVM – 3x3 co-occurrence measures and original            47.5%  72.5%  32.5%   80%   32.5%   70%    40%    20%
SVM – PCA, original, 3x3 texture (mean, homogeneity)      2.5%  67.5%  12.5%   65%   57.5%  52.5%    0%   17.5%
SVM – PCA, original, 3x3 texture (mean, contrast,
    variance)                                            62.5%  77.5%  32.5%  82.5%   30%   47.5%  43.3%   20%
SVM – 5x5 edge enhance                                    80%   37.5%    0%   67.5%  52.5%  47.5%    0%   17.5%
SVM – Laplacian filter add back 80% original             22.5%  22.5%    0%   67.5%   55%   67.5%   6.7%   25%
SVM – closed morphology filter                            70%   37.5%   20%   82.5%  47.5%  62.5%    0%    15%
SVM Object-based – PCA, original, 3x3 texture (mean,
    contrast, variance)                                   80%    70%   12.5%  92.5%  57.5%   15%   73.3%   50%
Neural Net – Halounova image post processing system      72.5%  72.5%   45%   97.5%  97.5%   85%    90%   47.5%
SVM – 11x11 texture occurrence measures post
    processing system                                     45%    70%   57.5%   85%   67.5%  92.5%  43.3%   20%
SVM Object-based post processing system – PCA,
    original, 3x3 texture (mean, contrast, variance)      90%    85%    55%    95%    95%    85%   96.7%   80%


Table 9  User's accuracies for individual land use/land cover types, Salt Lake City study area

Classification method                                     Trees  Grass  Transportation  Residential  Commercial
ISODATA 10 spectral classes – original imagery             55%     0%        60%            0%          70%
ISODATA 25 spectral classes – original imagery             55%     0%        50%            0%          75%
ISODATA 100 spectral classes – original imagery           57.5%    0%        45%            0%          80%
ISODATA 10 spectral classes – Halounova image              80%     0%        65%           10%          45%
Minimum Distance Classifier – original imagery            42.5%  46.7%       20%          17.5%        67.5%
SVM – original imagery                                    37.5%    0%        50%           20%         72.5%
Maximum Likelihood – 5x5 texture occurrence,
    saturation stretch                                     30%   53.3%      42.5%         37.5%         50%
Minimum Distance Classifier – Halounova image             22.5%   50%        60%           25%         52.5%
Neural Net – Halounova image                               20%     0%        55%           80%         32.5%
Neural Net – 5x5 texture occurrence                       27.5%    0%        50%          47.5%        52.5%
SVM – 3x3 texture occurrence measures                     27.5%    0%       47.5%          40%          65%
SVM – 5x5 texture occurrence measures                     27.5%   3.3%      42.5%         72.7%         60%
SVM – 11x11 occurrence measures                            20%    10%       57.5%          60%         52.5%
SVM – 5x5 texture occurrence measures, saturation
    stretch                                               22.5%  36.7%      57.5%         32.5%        52.5%
SVM – Halounova image                                      25%     0%       52.5%          70%         57.5%
SVM Object-based – 5x5 occurrence measures                37.5%   60%        50%           50%          70%
SVM Object-based post processing system – original
    imagery                                                40%   66.7%      82.5%         72.5%        97.5%

measures, which was more accurate by 2%-4%. The Salt Lake City study area showed slightly less improvement than the Ogden study area, likely due to the variation of objects on the ground in the commercial and residential land use/land cover classes. See Table 10 for a comparison of overall accuracy ranges for the level 1 classification scheme and the range of improvement gained for levels 2 and 3 for each classification group.

User's accuracies for individual land use/land cover types using object-based classification in the Salt Lake City study area were mixed and in general did not show as much improvement as in the Ogden study area. Grass was the only feature which showed


significant improvement, between 14%-60%, with object-based classification. All other land use/land cover classes had mixed results depending on which classification strategy they were compared to. For example, the commercial class showed an increase in accuracy when compared to the texture and filtering results but a decrease when compared to the unsupervised classification group. The transportation, residential, and commercial land use/land cover classes showed a similar trend. The commercial, residential, and transportation areas are very heterogeneous spectrally, and even though they are fairly homogeneous spatially, the spectral variation appeared to affect the accuracy of these classes. Object-based classification handled the more homogeneous classes, like trees and grass, better than the other land use/land cover types. This study has shown that object-based classification is a very promising technique for high resolution panchromatic aerial imagery. The methodology was more successful in the Ogden study area, which is characterized by more distinct regions of land use/land cover types, than in the urban Salt Lake City study area.
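One reason object-based results look cleaner on homogeneous classes can be sketched with a majority vote per segment, assuming a segment label map produced by some prior segmentation. This is only an illustration of the idea, not the SCRM or ENVI object-based implementation used in the study.

```python
import numpy as np

def majority_per_segment(segments, pixel_labels):
    """Object-based smoothing sketch: every pixel in a segment takes the
    segment's most frequent per-pixel class, suppressing the salt-and-pepper
    noise that pixel-based classifiers leave inside homogeneous objects."""
    out = np.empty_like(pixel_labels)
    for seg in np.unique(segments):
        members = pixel_labels[segments == seg]
        values, counts = np.unique(members, return_counts=True)
        out[segments == seg] = values[counts.argmax()]
    return out
```

Within a spectrally homogeneous object the vote is nearly unanimous, so accuracy improves; within a spectrally heterogeneous commercial or residential block the vote can flip whole segments, which matches the mixed urban results above.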

The post processing system provided the most accurate results for both study areas.

Overall accuracy for level 1 classification ranged from 61%-85% with improvement

Table 10  Overall accuracy ranges for classification groups

Classification group                       Level 1   Level 2   Level 3   Range of overall improvement
Unsupervised classifications               25-42%    47-60%    49-82%    5-57%
Baseline classifications                   38-39%    45-54%    58-63%    6-25%
Texture/Filtering/PCA classifications      34-52%    45-66%    54-80%    2-46%
Object-based classifications               53-56%    64-80%    71-94%    8-41%
Post processing system classifications     61-85%    72-93%    82-98%    8-37%


between 8%-37% for levels 2 and 3. The object-based classification for the Ogden study area had the highest overall accuracy for the level 1 classification scheme at 85%. All land use/land cover categories for both study areas showed significant improvement. Notable areas of improvement included water, bare earth, impervious surface, transportation, and grass.

Effectiveness and efficiency of the post processing system were two of the primary

goals of this research. This research has shown that the system effectively improves

classification accuracy. The post processing system is an efficient method based on user

interaction, but it did add significant time to the image classification. Image-processing

software can automatically classify, post-process, and produce a usable land use/land

cover layer in minutes. In comparison the post processing system in this study added

several hours onto this process in order to gain a more accurate land use/land cover layer

for use in a GIS. Since the system is based on user interaction, the time can be variable

depending on how much detail or what accuracy level may be required for a project. If a generalized classification using three or fewer classes and accuracy between 70% and 80% is appropriate for a project, then the post processing system can be completed in two to three hours. This is a modest time savings compared to manual digitizing,

although the resulting classifications contained far more detail than the manually

digitized results. The Ogden image took approximately five hours to digitize and the Salt

Lake City image about three hours. Performing automated classification using image

processing software was the most efficient method in this study for producing a land use/land cover classification layer. It took about 1 minute using a minimum distance

classifier and about 30 minutes using a neural net classifier, but the accuracy of the


resulting classification was often low. The post processing system added both consistency and accuracy to the classifications but also added several hours to the process. The object-based classifications provided a better starting base but still required about three to four hours to reach a fairly accurate final land use/land cover classification. In contrast, a poor starting base with an overall accuracy of 35% took about five hours to reach an improved overall accuracy of 75%.


CHAPTER 5: CONCLUSION

5.1 Limitations of the Research

The purpose of this research was to facilitate visual interpretation of panchromatic historical imagery by using digital image processing. Imagery is variable in both quality and scene characteristics, so what is successful on one dataset may not be successful on other datasets. Another limitation of the methods in this research was the sheer number of digital image processing techniques that could be applied to the imagery. Thousands of combinations and settings were available to process the imagery, but due to time limitations relatively few were feasible to study. Availability of algorithms and software capabilities is also a limitation, as commercial software provides relatively few choices compared to the open source programs available from a variety of sources. Another limitation was the knowledge of advanced algorithms and programming needed to develop new classification algorithms that might benefit single band image classification. It was determined that using readily available software and the capabilities it offers was more useful if the research is to be applied in a practical manner to the study of the Farm Service Agency's historical aerial photos. The overall research goals were achieved in that it was shown that by combining digital image processing with a system of post processing that allows user interaction, image classification accuracy can be improved by at least 20%.

5.2 Potential Future Developments

The current vector tools created for this project could be expanded to include more editing and reporting capabilities. It would be beneficial to include an undo button, a field editing option, and real time statistical reporting. Incorporating more of the advanced ENVI functionality into the ArcGIS interface through IDL and Python programming could also expand the options for supervised classification, including the SVM, SCRM object-based, neural net, and self-organizing map classifiers. The classification process is more streamlined for the user if only one software package is required to perform all the steps.

Through trial and error many filters and combinations of texture images were assessed

for their usefulness in classification of panchromatic imagery. There are almost endless

combinations using texture measures, filtering, PCA, and contrast stretching. A

relatively small number of these combinations were assessed for this research. This

aspect of the study lends itself to further research in the future as there may be other

combinations which could be more successful.

Further research is also possible in the area of classification algorithms designed specifically for panchromatic imagery. Most classifiers are optimally designed to take advantage of multi-spectral bands. As computer processing power continues to advance, it may be possible to develop new algorithms capable of better distinguishing distinct land cover classes from limited spectral information. The integration of texture measures with object-based segmentation and feature extraction also needs to be explored further. These digital techniques show clear advantages and improvements in classifying panchromatic imagery, but results could still be improved, especially in urban areas.

One of the findings of this research was that certain classification techniques are more successful than others on particular land use/land cover types. This suggests that it may be possible to achieve high classification accuracy using a hybrid approach in which each land use/land cover type is classified separately using a technique that yields a high accuracy for that category. For example, in the Ogden study area a 10 spectral class ISODATA unsupervised classification yielded 92.5% accuracy for dark fields but low accuracy for water (0%) and impervious surface (0%). Through masking and a process of elimination, water and impervious surface could be classified using a more successful technique such as object-based classification, where accuracy was higher (80% and 50%, respectively). Each land use/land cover type in the image could then be successively classified using a high accuracy technique. This would likely be a multi-process classification, but the possibility of less post processing and higher accuracy has a lot of potential for future study.
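A minimal sketch of that masking idea, assuming integer class rasters: the base classification is kept everywhere except where a second classifier predicts one of the classes it handles well, and those pixels take the second classifier's label. The class codes and function name here are illustrative only.

```python
import numpy as np

def hybrid_merge(base, donor, donor_classes):
    """Hybrid classification by masking: keep the base classification except
    where the donor classifier predicts one of the classes it handles well
    (e.g. water or impervious surface from an object-based run)."""
    out = base.copy()
    mask = np.isin(donor, donor_classes)
    out[mask] = donor[mask]
    return out
```

Repeating the merge with a different donor per land use/land cover type would implement the successive, per-class strategy described above.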

As historical imagery becomes more readily available to the public through

technology such as web-based image services, classification tools for use in these

services would have wide appeal to government and the remote sensing community. The

ability to make use of historical data to study long-term land use trends is one of the most

important aspects of this research. Black and white aerial photography is an

underutilized resource at present, but developing more tools to access the information

contained in the imagery will broaden its appeal to the remote sensing community.


APPENDIX 1: ERROR MATRIX TABLES

Ogden unsupervised classification of original unprocessed image

Unsupervised Ogden data, 10 spectral classes (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                  0    0    0    0    0    0    0    0          0         0%         100%
Forest                 3   11    1    0    0    0    1    1         17      64.7%        35.3%
Grassland              3    7    7    2    3    0   10    2         34      20.6%        79.4%
Dark Fields            0   12    8   37    9    0    2    6         74        50%          50%
Medium Fields         20    8   20    0   25    2    8    8         91      27.5%        72.5%
Light Fields          12    2    2    1    3   27    6    9         62      43.5%        56.5%
Impervious Surface     0    0    0    0    0    0    0    0          0         0%         100%
Bare Earth             2    0    2    0    0   11    3   14         32      43.8%        56.3%
Column total          40   40   40   40   40   40   30   40        121
User's accuracy       0%  27.5% 17.5% 92.5% 62.5% 67.5%  0%   35%
Commission           100% 72.5% 82.5%  7.5% 37.5% 32.5% 100%  65%
Overall accuracy: 39%   Kappa index: 30%
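The summary figures in the matrix above can be recomputed directly from the counts. A short numpy sketch (the `cm` array is transcribed from the table, with rows as classification data and columns as reference data; the thesis was produced with other software, so this is for verification only):

```python
import numpy as np

# Confusion matrix from the 10-spectral-class table above.
cm = np.array([
    [ 0,  0,  0,  0,  0,  0,  0,  0],   # Water
    [ 3, 11,  1,  0,  0,  0,  1,  1],   # Forest
    [ 3,  7,  7,  2,  3,  0, 10,  2],   # Grassland
    [ 0, 12,  8, 37,  9,  0,  2,  6],   # Dark Fields
    [20,  8, 20,  0, 25,  2,  8,  8],   # Medium Fields
    [12,  2,  2,  1,  3, 27,  6,  9],   # Light Fields
    [ 0,  0,  0,  0,  0,  0,  0,  0],   # Impervious Surface
    [ 2,  0,  2,  0,  0, 11,  3, 14],   # Bare Earth
])

n = cm.sum()                                   # 310 reference samples in total
overall = np.trace(cm) / n                     # 121/310, about 39%
# Chance agreement from the row and column marginals, then Cohen's kappa.
expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
kappa = (overall - expected) / (1 - expected)  # about 0.30
# The table's "User's Accuracy" row: correct counts over reference column totals.
users = np.diag(cm) / cm.sum(axis=0)           # e.g. dark fields 37/40 = 92.5%
```

The same computation applies to every matrix in this appendix; only the `cm` counts change.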

Unsupervised Ogden data, 25 spectral classes (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                  6    0    3    0    2    0    2    3         16      37.5%        62.5%
Forest                 3   12    1    0    0    0    1    1         18      66.6%        33.4%
Grassland              1    5    3    2    2    0    8    1         22      13.6%        86.4%
Dark Fields            0    9    8   35    6    0    0    5         63      55.5%        44.4%
Medium Fields         18   11   21    1   25    4   10    9         99      25.2%        74.7%
Light Fields          11    2    3    1    3   36    6   14         76      47.3%        52.6%
Impervious Surface     0    0    0    0    0    0    0    0          0         0%         100%
Bare Earth             1    1    1    1    2    0    3    7         16      43.7%        56.3%
Column total          40   40   40   40   40   40   30   40        124
User's accuracy      15%   30%  7.5% 87.5% 62.5%  90%    0%  17.5%
Commission           85%   70% 92.5% 12.5% 37.5%  10%  100%  82.5%
Overall accuracy: 40%   Kappa index: 31%


Unsupervised Ogden data, 100 spectral classes (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                  0    0    0    0    0    0    0    1          1         0%         100%
Forest                 3   12    1    1    0    0    1    1         19      63.2%        36.8%
Grassland              0    1    2    1    0    0    1    0          5        40%          60%
Dark Fields            0   10    8   34    6    0    0    5         63        54%          46%
Medium Fields         24   15   25    3   31    3   19   10        130      23.8%        76.2%
Light Fields          12    2    3    1    3   37    6   17         81      45.7%        54.3%
Impervious Surface     0    0    0    0    0    0    0    0          0         0%         100%
Bare Earth             1    0    1    0    0    0    3    6         11      54.5%        45.5%
Column total          40   40   40   40   40   40   30   40        122
User's accuracy       0%   30%   5%   85% 77.5% 92.5%   0%    15%
Commission          100%   70%  95%   15% 22.5%  7.5% 100%    85%
Overall accuracy: 39%   Kappa index: 30%

Unsupervised Ogden data, 10 spectral classes, Halounova image (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                  0    0    0    0    0    0    0    0          0         0%         100%
Forest                15   36   24    4   10    6   20   26        141      25.5%        74.5%
Grassland             12    4   12    7    9    3   10   13         70      17.1%        82.9%
Dark Fields            0    0    0   29    0    0    0    0         29       100%           0%
Medium Fields         13    0    4    0   21   31    0    1         70        30%          70%
Light Fields           0    0    0    0    0    0    0    0          0         0%         100%
Impervious Surface     0    0    0    0    0    0    0    0          0         0%         100%
Bare Earth             0    0    0    0    0    0    0    0          0         0%         100%
Column total          40   40   40   40   40   40   30   40         98
User's accuracy       0%   90%  30% 72.5% 52.5%   0%    0%     0%
Commission          100%   10%  70% 27.5% 47.5% 100%  100%   100%
Overall accuracy: 32%   Kappa index: 21%

Ogden level 1 classification scheme

Supervised SVM, Ogden original unprocessed image (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                 25    5   10    0   16   14   11   12         93      26.9%        73.1%
Forest                 3   13    3    1    1    0    1    1         23      56.5%        43.5%
Grassland              0    0    0    0    0    0    0    0          0         0%         100%
Dark Fields            0    8    4   31    2    0    0    5         50        62%          38%
Medium Fields          5   13   19    7   18    0   13    4         79      22.8%        77.2%
Light Fields           6    1    2    1    3   26    2    9         50        52%          48%
Impervious Surface     0    0    0    0    0    0    0    0          0         0%         100%
Bare Earth             1    0    2    0    0    0    3    9         15        60%          40%
Column total          40   40   40   40   40   40   30   40        122
User's accuracy     62.5% 32.5%  0% 77.5%  45%   65%    0%  22.5%
Commission          37.5% 67.5% 100% 22.5% 55%   35%  100%  77.5%
Overall accuracy: 39%   Kappa index: 30%


Supervised SVM, 3x3 occurrence texture (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                 21    0    1    0   11   11    1    0         45      46.7%        53.3%
Forest                 4   30    9    3    2    0    2    6         56      53.6%        46.4%
Grassland              3    5   19    1    8    2   10    3         51      37.3%        62.7%
Dark Fields            0    0    1   33    1    0    0    1         36      91.7%         8.3%
Medium Fields          3    1    4    2   14    0    4    0         28        50%          50%
Light Fields           6    0    2    0    3   25    0    8         44      56.8%        43.2%
Impervious Surface     3    4    4    1    1    2   11   15         41      26.8%        73.2%
Bare Earth             0    0    0    0    0    0    2    7          9      77.8%        22.2%
Column total          40   40   40   40   40   40   30   40        160
User's accuracy     52.5%  75% 47.5% 82.5% 35% 62.5% 36.7% 17.5%
Commission          47.5%  25% 52.5% 17.5% 65% 37.5% 63.3% 82.5%
Overall accuracy: 52%   Kappa index: 45%

Supervised SVM, 5x5 occurrence texture (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                 18    0    0    0   10    8    1    0         37      48.6%        51.4%
Forest                 4   31    6    3    2    0    2    4         52      59.6%        40.4%
Grassland              7    4   22    1   15    1    9    4         63      34.9%        65.1%
Dark Fields            1    0    2   32    0    0    0    1         36      88.9%        11.1%
Medium Fields          1    0    1    2    9    0    4    0         17      52.9%        47.1%
Light Fields           5    0    3    0    2   29    0    5         44      65.9%        34.1%
Impervious Surface     4    5    6    2    2    2   14   23         58      24.1%        75.9%
Bare Earth             0    0    0    0    0    0    0    3          3       100%           0%
Column total          40   40   40   40   40   40   30   40        158
User's accuracy      45% 77.5%  55%   80% 22.5% 72.5% 46.7%  7.5%
Commission           55% 22.5%  45%   20% 77.5% 27.5% 53.3% 92.5%
Overall accuracy: 51%   Kappa index: 44%

Supervised Minimum Distance Classifier, Halounova 9 layer (original, texture, filters) (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                  5    0    0    0   10    5    0    0         20        25%          75%
Forest                 7   15   12    2    5    4    6    9         60        25%          75%
Grassland             12    4   15   10    9    2   10   14         76      19.7%        80.3%
Dark Fields            0    0    1   24    0    0    0    0         25        96%           4%
Medium Fields          1    0    1    2   10    0    0    0         14      71.4%        28.6%
Light Fields           3    0    0    0    0   25    0    1         29      86.2%        13.8%
Impervious Surface     8   21   11    2    5    2   14   16         79      17.7%        82.3%
Bare Earth             4    0    0    0    1    2    0    0          7         0%         100%
Column total          40   40   40   40   40   40   30   40        108
User's accuracy      13%   38%  38%   60%  25%   63%  47%     0%
Commission           87%   62%  62%   40%  75%   37%  53%   100%
Overall accuracy: 35%   Kappa index: 26%


Supervised Neural Net Classifier, Halounova 9 layer (original, texture, filters) (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                  3    0    0    0    4   13    0    1         21      14.3%        85.7%
Forest                 3   25   12    4    3    0    5    5         57      43.9%        56.1%
Grassland             14    2    7    6    7    2    9    6         53      13.2%        86.8%
Dark Fields            0    0    1   24    0    0    0    0         25        96%           4%
Medium Fields          4    0    1    5   15    1    0    0         26      57.7%        42.3%
Light Fields           2    0    0    0    1   17    0    0         20        85%          15%
Impervious Surface    13   13   17    1    9    4   16   25         98      16.3%        83.7%
Bare Earth             1    0    2    0    1    3    0    3         10        30%          70%
Column total          40   40   40   40   40   40   30   40        110
User's accuracy     7.5% 62.5% 17.5%  60% 37.5% 42.5% 53.3%  7.5%
Commission         92.5% 37.5% 82.5%  40% 62.5% 57.5% 46.7% 92.5%
Overall accuracy: 35%   Kappa index: 27%

Supervised Neural Net Classifier, Halounova 9 layer, post processing system (pixel counts; rows = classification data, columns = reference data: Wa Water, Fo Forest, Gr Grassland, DF Dark Fields, MF Medium Fields, LF Light Fields, IS Impervious Surface, BE Bare Earth)

Class                 Wa   Fo   Gr   DF   MF   LF   IS   BE   Row total   Prod. acc.   Omission
Water                 29    0    0    0    0    0    0    0         29       100%           0%
Forest                 3   29    9    1    0    0    2    3         47      61.7%        38.3%
Grassland              1    2   18    0    0    0    1    2         24        75%          25%
Dark Fields            0    0    0   39    0    0    0    0         39       100%           0%
Medium Fields          0    0    0    0   39    2    0    0         41      95.1%         4.9%
Light Fields           0    0    0    0    0   34    0    0         34       100%           0%
Impervious Surface     7    9   12    0    1    4   27   16         76      35.5%        64.5%
Bare Earth             0    0    1    0    0    0    0   19         20        95%           5%
Column total          40   40   40   40   40   40   30   40        234
User's accuracy    72.5% 72.5%  45% 97.5% 97.5%  85%   90%  47.5%
Commission         27.5% 27.5%  55%  2.5%  2.5%  15%   10%  52.5%
Overall accuracy: 75%   Kappa index: 72%

Supervised SVM, Ogden PCA 3x3 texture (original, mean, contrast, variance), object-based classification; counts in pixels.
Columns (reference data): Water, Forest, Grassland, Dark Fields, Medium Fields, Light Fields, Impervious Surface, Bare Earth, then Row Total, Producer's Accuracy, Errors of Omission.

Water                  32     0     0     0     0     0     0     4     36   88.9%   11.1%
Forest                  3    28    15     2     2     0     1     6     57   49.1%   50.9%
Grassland               0     1     5     0     1     0     4     0     11   45.5%   54.5%
Dark Fields             0     0     0    37     1     0     1     0     39   94.9%    5.1%
Medium Fields           0     1     0     0    23    34     0     0     58   39.7%   60.3%
Light Fields            0     0     1     1     2     6     0     0     10   60%     40%
Impervious Surface      1     8    11     0     4     0    22    10     56   39.3%   60.7%
Bare Earth              4     2     8     0     7     0     2    20     43   46.5%   53.5%
Column Total           40    40    40    40    40    40    30    40    173
User's Accuracy        80%   70% 12.5% 92.5% 57.5%   15% 73.3%   50%
Errors of Commission   20%   30% 87.5%  7.5% 42.5%   85% 26.7%   50%
Overall Accuracy 56%; Kappa Index 50%


Supervised SVM, Ogden PCA 3x3 texture (original, mean, contrast, variance), object-based classification with Post Processing System; counts in pixels.
Columns (reference data): Water, Forest, Grassland, Dark Fields, Medium Fields, Light Fields, Impervious Surface, Bare Earth, then Row Total, Producer's Accuracy, Errors of Omission.

Water                  36     0     0     0     0     0     0     4     40   90%     10%
Forest                  3    34    11     1     1     0     0     4     54   63%     37%
Grassland               1     2    22     0     0     0     1     0     26   84.6%   15.4%
Dark Fields             0     0     0    38     1     0     0     0     39   97.4%    2.6%
Medium Fields           0     2     0     0    38     6     0     0     46   82.6%   17.4%
Light Fields            0     0     0     1     0    34     0     0     35   97.1%    2.9%
Impervious Surface      0     0     0     0     0     0    29     0     29  100%      0%
Bare Earth              0     2     7     0     0     0     0    32     41   78%     22%
Column Total           40    40    40    40    40    40    30    40    263
User's Accuracy        90%   85%   55%   95%   95%   85% 96.7%   80%
Errors of Commission   10%   15%   45%    5%    5%   15%  3.3%   20%
Overall Accuracy 85%; Kappa Index 83%

Ogden level 2 classification scheme

Unsupervised, Ogden Data, 10 Spectral Classes; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                26          5        20       51          51%                 49%
Cropland                  52        104        71      227          45.8%               54.2%
Other                      2         11        19       32          59.4%               40.6%
Column Total              80        120       110      149
User's Accuracy         32.5%      86.7%     17.3%
Errors of Commission    67.5%      13.3%     82.7%
Overall Accuracy 48%; Kappa Index 19%

Unsupervised, Ogden Data, 25 Spectral Classes; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                21          4        15       40          52.5%               47.5%
Cropland                  54        111        73      238          46.6%               53.4%
Other                      5          5        22       32          68.8%               31.3%
Column Total              80        120       110      154
User's Accuracy         26.3%      92.5%     20%
Errors of Commission    73.7%       7.5%     80%
Overall Accuracy 50%; Kappa Index 20%

Unsupervised, Ogden Data, 100 Spectral Classes; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                16          2         6       24          66.7%               33.3%
Cropland                  63        118        93      274          43.1%               56.9%
Other                      1          0        11       12          91.7%                8.3%
Column Total              80        120       110      145
User's Accuracy         20%        98.3%     10%
Errors of Commission    80%         1.7%     90%
Overall Accuracy 47%; Kappa Index 15%


Unsupervised, Ogden Data, 10 Spectral Classes, Halounova image; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                76         39        96      211          36%                 64%
Cropland                   4         81        14       99          81.8%               18.2%
Other                      0          0         0        0           0%                100%
Column Total              80        120       110      157
User's Accuracy         95%        67.5%      0%
Errors of Commission     5%        32.5%    100%
Overall Accuracy 51%; Kappa Index 30%

Supervised SVM, 3x3 Occurrence texture; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                63         16        28      107          58.9%               41.1%
Cropland                   8         78        22      108          72.2%               27.8%
Bare Earth                 9         26        60       95          63.2%               36.8%
Column Total              80        120       110      201
User's Accuracy         78.8%      65%       54.5%
Errors of Commission    21.2%      35%       45.5%
Overall Accuracy 65%; Kappa Index 47%

Supervised SVM, Ogden PCA 3x3 texture (original, mean, contrast, variance), object-based with post processing system; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                69          2         8       79          87.3%               12.7%
Cropland                   2        118         1      121          97.5%                2.5%
Other                      9          0       101      110          91.8%                8.2%
Column Total              80        120       110      288
User's Accuracy         86.3%      98.3%     91.8%
Errors of Commission    13.8%       1.7%      8.2%
Overall Accuracy 93%; Kappa Index 89%

Supervised SVM, Ogden PCA 3x3 texture (original, mean, contrast, variance), object-based classification; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                49          5        14       68          72.1%               27.9%
Cropland                   2        104         1      107          97.2%                2.8%
Other                     29         11        95      135          70.4%               29.6%
Column Total              80        120       110      248
User's Accuracy         61.3%      86.7%     86.4%
Errors of Commission    38.8%      13.3%     13.6%
Overall Accuracy 80%; Kappa Index 69%

Supervised SVM, Ogden original unprocessed; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                16          2         5       23          69.6%               30.4%
Cropland                  47         88        44      179          49.2%               50.8%
Other                     17         30        61      108          56.5%               43.5%
Column Total              80        120       110      165
User's Accuracy         20%        73.3%     55.5%
Errors of Commission    80%        26.7%     44.5%
Overall Accuracy 53%; Kappa Index 26%


Supervised, Halounova 9 layer (original, texture, filters); counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                46         32        58      136          33.8%               66.2%
Cropland                   2         61         5       68          89.7%               10.3%
Other                     32         27        47      106          44.3%               55.7%
Column Total              80        120       110      154
User's Accuracy         57.5%      50.8%     42.7%
Errors of Commission    42.5%      49.2%     57.3%
Overall Accuracy 50%; Kappa Index 26%

Supervised Neural Net Classifier, Halounova 9 layer (original, texture, filters); counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                46         22        42      110          41.8%               58.2%
Cropland                   2         63         6       71          88.7%               11.3%
Other                     32         35        62      129          48.1%               51.9%
Column Total              80        120       110      171
User's Accuracy         57.5%      52.5%     56.4%
Errors of Commission    42.5%      47.5%     43.6%
Overall Accuracy 55%; Kappa Index 33%

Unsupervised ISODATA Classifier, Halounova 9 layer (original, texture, filters); counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                60         20        61      141          42.6%               57.4%
Cropland                  11         93        28      132          70.5%               29.5%
Other                      9          7        21       37          56.8%               43.2%
Column Total              80        120       110      174
User's Accuracy         75%        77.5%     19.1%
Errors of Commission    25%        22.5%     80.9%
Overall Accuracy 56%; Kappa Index 35%

Supervised Neural Net Classifier, Halounova 9 layer (original, texture, filters), with Post Processing System; counts in pixels.
                      Vegetation  Cropland  Other   Row Total  Producer's Accuracy  Errors of Omission
Vegetation                58          1        12       71          81.7%               18.3%
Cropland                   0        114         0      114         100%                  0%
Other                     22          5        98      125          78.4%               21.6%
Column Total              80        120       110      270
User's Accuracy         72.5%      95%       89.1%
Errors of Commission    27.5%       5%       10.9%
Overall Accuracy 87%; Kappa Index 80%


Ogden level 3 classification scheme

Unsupervised, Ogden Data, 10 Spectral Classes; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                 104        123           227          45.8%               54.2%
Non-Cropland              16         67            83          80.7%               19.3%
Column Total             120        190           171
User's Accuracy         86.7%      35.3%
Errors of Commission    13.3%      64.7%
Overall Accuracy 55%; Kappa Index 19%

Unsupervised, Ogden Data, 25 Spectral Classes; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                 111        127           238          46.6%               53.4%
Non-Cropland               9         63            72          87.5%               12.5%
Column Total             120        190           174
User's Accuracy         92.5%      33.2%
Errors of Commission     7.5%      66.8%
Overall Accuracy 56%; Kappa Index 22%

Unsupervised, Ogden Data, 100 Spectral Classes; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                 118        156           274          43.1%               56.9%
Non-Cropland               2         34            36          94.4%                5.6%
Column Total             120        190           152
User's Accuracy         98.3%      17.9%
Errors of Commission     1.7%      82.1%
Overall Accuracy 49%; Kappa Index 13%

Unsupervised, Ogden Data, 10 Spectral Classes, Halounova image; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                  81         18            99          81.8%               18.2%
Non-Cropland              39        172           211          81.5%               18.5%
Column Total             120        190           253
User's Accuracy         67.5%      90.5%
Errors of Commission    32.5%       9.5%
Overall Accuracy 82%; Kappa Index 60%

Supervised SVM, Ogden PCA 3x3 texture (original, mean, contrast, variance), object-based with post processing system; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                 118          3           121          97.5%                2.5%
Non-Cropland               2        187           189          98.9%                1.1%
Column Total             120        190           305
User's Accuracy         98.3%      98.4%
Errors of Commission     1.7%       1.6%
Overall Accuracy 98%; Kappa Index 97%

Supervised SVM, Ogden PCA 3x3 texture (original, mean, contrast, variance), object-based classification; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                 104          3           107          97.2%                2.8%
Non-Cropland              16        187           203          92.1%                7.9%
Column Total             120        190           291
User's Accuracy         86.7%      98.4%
Errors of Commission    13.3%       1.6%
Overall Accuracy 94%; Kappa Index 87%


Supervised SVM, Ogden original unprocessed; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                  88         91           179          49.2%               50.8%
Non-Cropland              32         99           131          75.6%               24.4%
Column Total             120        190           187
User's Accuracy         73.3%      52.1%
Errors of Commission    26.7%      47.9%
Overall Accuracy 60%; Kappa Index 23%

Supervised SVM, 3x3 Occurrence texture; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                  78         30           108          72.2%               27.8%
Non-Cropland              42        160           202          79.2%               20.8%
Column Total             120        190           238
User's Accuracy         65%        84.2%
Errors of Commission    35%        15.8%
Overall Accuracy 77%; Kappa Index 50%

Supervised, Halounova 9 layer (original, texture, filters); counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                  61          7            68          89.7%               10.3%
Non-Cropland              59        183           242          75.6%               24.4%
Column Total             120        190           244
User's Accuracy         50.8%      96.3%
Errors of Commission    49.2%       3.7%
Overall Accuracy 79%; Kappa Index 51%

Supervised Neural Net Classifier, Halounova 9 layer (original, texture, filters); counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                  63          8            71          88.7%               11.3%
Non-Cropland              57        182           239          76.2%               23.8%
Column Total             120        190           245
User's Accuracy         52.5%      95.8%
Errors of Commission    47.5%       4.2%
Overall Accuracy 79%; Kappa Index 52%

Unsupervised ISODATA Classifier, Halounova 9 layer (original, texture, filters); counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                  93         39           132          70.5%               29.5%
Non-Cropland              27        151           178          84.8%               15.2%
Column Total             120        190           244
User's Accuracy         77.5%      79.5%
Errors of Commission    22.5%      20.5%
Overall Accuracy 79%; Kappa Index 56%

Supervised Neural Net Classifier, Halounova 9 layer (original, texture, filters), with Post Processing System; counts in pixels.
                      Cropland  Non-Cropland   Row Total  Producer's Accuracy  Errors of Omission
Cropland                 114          0           114         100%                  0%
Non-Cropland               6        190           196          96.9%                3.1%
Column Total             120        190           304
User's Accuracy         95%       100%
Errors of Commission     5%         0%
Overall Accuracy 98%; Kappa Index 96%


Salt Lake City unsupervised classification

Unsupervised, Salt Lake City original, ISODATA, 10 Spectral Classes; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  22    12     4     9     2     49   44.9%   55.1%
Grass                   0     0     0     0     0      0    0%    100%
Transportation         12    14    24    20    10     80   30%     70%
Residential             0     0     0     0     0      0    0%    100%
Commercial              6     4    12    11    28     61   45.9%   54.1%
Column Total           40    30    40    40    40     74
User's Accuracy        55%    0%   60%    0%   70%
Errors of Commission   45%  100%   40%  100%   30%
Overall Accuracy 39%; Kappa Index 23%

Unsupervised, Salt Lake City original, ISODATA, 25 Spectral Classes; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  22    12     4     9     2     49   44.9%   55.1%
Grass                   0     0     0     0     0      0    0%    100%
Transportation          9    11    20    18     8     66   30.3%   69.7%
Residential             0     0     0     0     0      0    0%    100%
Commercial              9     7    16    13    30     75   48.4%   51.6%
Column Total           40    30    40    40    40     72
User's Accuracy        55%    0%   50%    0%   75%
Errors of Commission   45%  100%   50%  100%   25%
Overall Accuracy 38%; Kappa Index 30%

Unsupervised, Salt Lake City original, ISODATA, 100 Spectral Classes; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  23    13     5    11     2     54   42.6%   57.4%
Grass                   0     0     0     0     0      0    0%    100%
Transportation          8     9    18    14     6     55   32.7%   67.3%
Residential             0     0     0     0     0      0    0%    100%
Commercial              9     8    17    15    32     81   39.5%   60.5%
Column Total           40    30    40    40    40     73
User's Accuracy      57.5%    0%   45%    0%   80%
Errors of Commission 42.5%  100%   55%  100%   20%
Overall Accuracy 38%; Kappa Index 22%


Salt Lake City level 1 classification scheme

Salt Lake City original, Supervised Minimum Distance Classifier; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  17     3     2     4     0     26   65.4%   34.6%
Grass                  12    14     9    12     6     53   26.4%   73.6%
Transportation          1     4     8     7     2     22   36.4%   63.6%
Residential             4     6    11     7     5     33   21.2%   78.8%
Commercial              6     3    10    10    27     56   48.2%   51.8%
Column Total           40    30    40    40    40     73
User's Accuracy      42.5% 46.7%   20% 17.5% 67.5%
Errors of Commission 57.5% 53.3%   80% 82.5% 32.5%
Overall Accuracy 38%; Kappa Index 23%

Salt Lake City original, Supervised Support Vector Machine Classifier; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  15     0     1     2     0     18   83.3%   16.7%
Grass                   0     0     0     0     0      0    0%    100%
Transportation         11    13    20    19     9     72   27.8%   72.2%
Residential             8    13     5     8     2     36   22.2%   77.8%
Commercial              6     4    14    11    29     64   45.3%   54.7%
Column Total           40    30    40    40    40     72
User's Accuracy      37.5%    0%   50%   20% 72.5%
Errors of Commission 62.5%  100%   50%   80% 27.5%
Overall Accuracy 38%; Kappa Index 21%

Salt Lake City 5x5 Texture Occurrence, Supervised Support Vector Machine Classifier; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  11     0     2     1     0     14   78.6%   21.4%
Grass                   0     1     0     0     0      1  100%      0%
Transportation          4     8    17     8     8     45   37.8%   62.2%
Residential            24    19     9    24     8     84   28.6%   71.4%
Commercial              1     2    12     0    24     39   61.5%   38.5%
Column Total           40    30    40    33    40     77
User's Accuracy      27.5%  3.3% 42.5% 72.7%   60%
Errors of Commission 72.5% 96.7% 57.5% 27.3%   40%
Overall Accuracy 41%; Kappa Index 27%


Salt Lake City 5x5 Texture Occurrence Saturation Stretch, Supervised Maximum Likelihood; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  12     0     2     1     0     15   80%     20%
Grass                   7    16     9     8     3     43   37.2%   62.8%
Transportation          4     7    17     9    13     50   34%     66%
Residential            16     6     1    15     4     42   35.7%   64.3%
Commercial              1     1    11     7    20     40   50%     50%
Column Total           40    30    40    40    40     80
User's Accuracy        30% 53.3% 42.5% 37.5%   50%
Errors of Commission   70% 46.7% 57.5% 62.5%   50%
Overall Accuracy 42%; Kappa Index 28%

Salt Lake City 11x11 Texture Occurrence, Support Vector Machine Classifier; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                   8     1     3     1     0     13   61.5%   38.5%
Grass                   2     3     1     1     0      7   42.9%   57.1%
Transportation          2     5    23    11     5     46   50%     50%
Residential            27    18     7    24    14     90   26.7%   73.3%
Commercial              1     3     6     3    21     34   61.8%   38.2%
Column Total           40    30    40    40    40     79
User's Accuracy        20%   10% 57.5%   60% 52.5%
Errors of Commission   80%   90% 42.5%   40% 47.5%
Overall Accuracy 42%; Kappa Index 26%

Salt Lake City, Neural Net Classifier, Halounova image; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                   8     2     3     6     1     20   40%     60%
Grass                   0     0     0     0     0      0    0%    100%
Transportation          1     6    22     2    11     42   52.4%   47.6%
Residential            31    22     8    32    15    108   29.6%   70.4%
Commercial              0     0     7     0    13     20   65%     35%
Column Total           40    30    40    40    40     75
User's Accuracy        20%    0%   55%   80% 32.5%
Errors of Commission   80%  100%   45%   20% 67.5%
Overall Accuracy 39%; Kappa Index 23%


Supervised, Salt Lake City, object-based Support Vector Machine Classifier; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  15     2     1     1     0     19   78.9%   21.1%
Grass                  11    18     6    15     2     52   34.6%   65.4%
Transportation          2     3    20     3     3     31   64.5%   35.5%
Residential            11     7     4    20     7     49   40.8%   59.2%
Commercial              1     0     9     1    28     39   71.8%   28.2%
Column Total           40    30    40    40    40    101
User's Accuracy      37.5%   60%   50%   50%   70%
Errors of Commission 62.5%   40%   50%   50%   30%
Overall Accuracy 53%; Kappa Index 42%

Supervised, Salt Lake City, object-based Support Vector Machine with Post Processing System; counts in pixels.
Columns (reference data): Trees, Grass, Transportation, Residential, Commercial, then Row Total, Producer's Accuracy, Errors of Omission.

Trees                  16     2     1     1     0     20   80%     20%
Grass                   5    20     3     6     0     34   58.8%   41.2%
Transportation          2     4    33     3     1     43   76.7%   23.3%
Residential             6     0     0    29     0     35   82.9%   17.1%
Commercial             11     4     3     1    39     58   67.2%   32.8%
Column Total           40    30    40    40    40    137
User's Accuracy        40% 66.7% 82.5% 72.5% 97.5%
Errors of Commission   60% 33.3% 17.5% 27.5%  2.5%
Overall Accuracy 72%; Kappa Index 65%

Salt Lake City level 2 classification scheme

Unsupervised, Salt Lake City original, ISODATA, 10 Spectral Classes; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              39            10            12             61          63.9%               36.1%
Vegetation                 11            34             4             49          69.4%               30.6%
Transportation             30            26            24             80          30%                 70%
Column Total               80            70            40             97
User's Accuracy           48.7%         48.6%         60%
Errors of Commission      51.3%         51.4%         40%
Overall Accuracy 51%; Kappa Index 20%

Unsupervised, Salt Lake City original, ISODATA, 25 Spectral Classes; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              43            16            16             75          57.3%               42.7%
Vegetation                 11            34             4             49          69.4%               30.6%
Transportation             26            20            20             66          30.3%               69.7%
Column Total               80            70            40             97
User's Accuracy           53.8%         48.6%         50%
Errors of Commission      46.2%         51.4%         50%
Overall Accuracy 51%; Kappa Index 20%


Salt Lake City original, Supervised Minimum Distance Classifier; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              49            19            21             89          55.1%               44.9%
Vegetation                 22            46            11             79          58.2%               41.8%
Transportation              9             5             8             22          36.4%               63.6%
Column Total               80            70            40            103
User's Accuracy           61.3%         65.7%         20%
Errors of Commission      38.7%         34.3%         80%
Overall Accuracy 54%; Kappa Index 25%

Salt Lake City original, Supervised Support Vector Machine Classifier; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              50            31            19            100          50%                 50%
Vegetation                  2            15             1             18          83.3%               16.7%
Transportation             28            24            20             72          27.8%               72.2%
Column Total               80            70            40             85
User's Accuracy           62.5%         21.4%         50%
Errors of Commission      37.5%         78.6%         50%
Overall Accuracy 45%; Kappa Index 8%

Supervised, Salt Lake City, object-based Support Vector Machine Classifier; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              56            19            13             88          63.6%               36.4%
Vegetation                 18            46             7             71          64.8%               35.2%
Transportation              6             5            20             31          64.5%               35.5%
Column Total               80            70            40            122
User's Accuracy           70%           65.7%         50%
Errors of Commission      30%           34.3%         50%
Overall Accuracy 64%; Kappa Index 41%

Supervised, Salt Lake City, object-based, 5x5 Occurrence Texture Saturation Stretch, Support Vector Machine Classifier; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              56            12            12             80          70%                 30%
Vegetation                 22            46             5             73          63%                 37%
Transportation              2            12            23             37          62.2%               37.8%
Column Total               80            70            40            125
User's Accuracy           70%           65.7%         57.5%
Errors of Commission      30%           34.3%         42.5%
Overall Accuracy 66%; Kappa Index 44%


Supervised, Salt Lake City, Neural Net Classifier; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              60            53            15            128          46.9%               53.1%
Vegetation                  7            10             3             20          50%                 50%
Transportation             13             7            22             42          52.4%               47.6%
Column Total               80            70            40             92
User's Accuracy           75%           14.3%         55%
Errors of Commission      25%           85.7%         45%
Overall Accuracy 39%; Kappa Index 23%

Supervised, Salt Lake City, object-based, 11x11 Occurrence Texture, Support Vector Machine Classifier; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              62            49            13            124          50%                 50%
Vegetation                  2            14             4             20          70%                 30%
Transportation             16             7            23             46          50%                 50%
Column Total               80            70            40             99
User's Accuracy           77.5%         20%           57.5%
Errors of Commission      22.5%         80%           42.5%
Overall Accuracy 52%; Kappa Index 20%

Supervised, Salt Lake City, object-based Support Vector Machine with Post Processing System; counts in pixels.
                      Built up Area  Vegetation  Transportation   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              69            21             3             93          74.2%               25.8%
Vegetation                  1            43             4             48          89.6%               10.4%
Transportation              4             6            33             43          76.7%               23.3%
Column Total               74            70            40            145
User's Accuracy           93.2%         61.4%         82.5%
Errors of Commission       6.8%         38.6%         17.5%
Overall Accuracy 76%; Kappa Index 65%

Salt Lake City level 3 classification scheme

Unsupervised, Salt Lake City original, ISODATA, 10 Spectral Classes; counts in pixels.
                      Built up Area  Non Built up Area   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              39               22               61          63.9%               36.1%
Non Built up Area          41               88              129          68.2%               31.8%
Column Total               80              110              127
User's Accuracy           48.8%            80%
Errors of Commission      51.2%            20%
Overall Accuracy 67%; Kappa Index 30%


Unsupervised, Salt Lake City original, ISODATA, 25 Spectral Classes; counts in pixels.
                      Built up Area  Non Built up Area   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              43               32               75          57.3%               42.7%
Non Built up Area          37               78              115          67.8%               32.2%
Column Total               80              110              121
User's Accuracy           53.8%            70.9%
Errors of Commission      46.2%            29.1%
Overall Accuracy 64%; Kappa Index 25%

Salt Lake City 5x5 Texture Occurrence, Supervised Support Vector Machine Classifier; counts in pixels.
                      Built up Area  Non Built up Area   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              63               67              130          48.5%               51.5%
Non Built up Area          17               43               60          71.7%               28.3%
Column Total               80              110              106
User's Accuracy           78.8%            39.1%
Errors of Commission      21.2%            60.9%
Overall Accuracy 56%; Kappa Index 16%

Salt Lake City 5x5 Texture Occurrence Saturation Stretch, Supervised Maximum Likelihood; counts in pixels.
                      Built up Area  Non Built up Area   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              46               36               82          56.1%               43.9%
Non Built up Area          34               74              108          68.5%               31.5%
Column Total               80              110              120
User's Accuracy           57.5%            67.3%
Errors of Commission      42.5%            32.7%
Overall Accuracy 63%; Kappa Index 25%

Salt Lake City 11x11 Texture Occurrence, Supervised Support Vector Machine Classifier; counts in pixels.
                      Built up Area  Non Built up Area   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              62               62              124          50%                 50%
Non Built up Area          18               48               66          72.7%               27.3%
Column Total               80              110              110
User's Accuracy           77.5%            43.6%
Errors of Commission      22.5%            56.4%
Overall Accuracy 58%; Kappa Index 20%

Salt Lake City, Neural Net Classifier; counts in pixels.
                      Built up Area  Non Built up Area   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              60               68              128          46.9%               53.1%
Non Built up Area          20               42               62          67.7%               32.3%
Column Total               80              110              102
User's Accuracy           75%              38.2%
Errors of Commission      25%              61.8%
Overall Accuracy 54%; Kappa Index 12%


Supervised, Salt Lake City, object-based Support Vector Machine Classifier; counts in pixels.
                      Built up Area  Non Built up Area   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              56               32               88          63.6%               36.4%
Non Built up Area          24               78              102          76.5%               23.5%
Column Total               80              110              134
User's Accuracy           70%              70.9%
Errors of Commission      30%              29.1%
Overall Accuracy 71%; Kappa Index 40%

Supervised, Salt Lake City, object-based Support Vector Machine with Post Processing System; counts in pixels.
                      Built up Area  Non Built up Area   Row Total  Producer's Accuracy  Errors of Omission
Built up Area              69               24               93          74.2%               25.8%
Non Built up Area          11               86               97          88.7%               11.3%
Column Total               80              110              155
User's Accuracy           86.3%            78.2%
Errors of Commission      13.7%            21.8%
Overall Accuracy 82%; Kappa Index 63%


APPENDIX 2: VECTOR EDITING TOOLBAR C#.NET CODE

Merge Polygon Button using ESRI.ArcGIS.ArcMapUI; using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Forms; using ESRI.ArcGIS.Carto; using ESRI.ArcGIS.Editor; using ESRI.ArcGIS.Geometry; using ESRI.ArcGIS.Geodatabase; namespace MergePolys { class MergePolysMethods { //neighborType is either "smallest" or "largest" depending on the button clicked. static internal void Merge(string neighborType) { try { IActiveView activeView = ArcMap.Document.ActiveView; IEditor3 theEditor; IEditLayers editLayers; IFeatureSelection featSel; string message; //Check that edit session is set up correctly for merging and set up featSel if (SetupOK(out theEditor, out editLayers, out featSel, out message) != true) { MessageBox.Show(message); return; } ICursor cursor; IFeatureCursor featCursor; IFeature currentFeat; IFeature featToBeMergedWith; ITopologicalOperator topoOp; featSel.SelectionSet.Search(null, false, out cursor); featCursor = cursor as IFeatureCursor; currentFeat = featCursor.NextFeature(); while (currentFeat != null) { featToBeMergedWith = GetAdjacentFeature(currentFeat, neighborType); if (featToBeMergedWith != null) { topoOp = featToBeMergedWith.ShapeCopy as ITopologicalOperator; featToBeMergedWith.Shape = topoOp.Union(currentFeat.ShapeCopy); featToBeMergedWith.Store(); currentFeat.Delete(); } else { MessageBox.Show("No polygons are adjacent to: " + currentFeat.OID); } currentFeat = featCursor.NextFeature(); } ArcMap.Document.ActiveView.Refresh(); } catch (Exception ex)

Page 111: A THESIS PRESENTED TO THE DEPARTMENT OF GEOLOGY AND ... › library › theses › 2012 › BiedigerJoan.pdf · eye to extract complex patterns can limit interpretation. Digital image

102

{ MessageBox.Show("Error: " + ex.Message + Environment.NewLine + "Merge aborted!"); } } static internal void Merge(IPoint clickPoint) { try { IActiveView activeView = ArcMap.Document.ActiveView; IEditor3 theEditor; IEditLayers editLayers; IFeatureSelection featSel; string message; //Check that edit session is set up correctly for merging and set up featSel if (SetupOK(out theEditor, out editLayers, out featSel, out message) != true) { MessageBox.Show(message); return; } if (featSel.SelectionSet.Count > Properties.Settings.Default.MaxSelBeforeWarn) { string dialogMessage = "There are " + Properties.Settings.Default.MaxSelBeforeWarn.ToString() + " features selected to be merged. Are you sure you want to merge all those that touch the clicked polygon?"; const string caption = "Warning"; var result = MessageBox.Show(dialogMessage, caption, MessageBoxButtons.YesNo, MessageBoxIcon.Exclamation); if (result == DialogResult.No) { return; } } ICursor cursor; IFeatureCursor featCursor; IFeature currentFeat; IFeature featToBeMergedWith; ITopologicalOperator topoOp; featSel.SelectionSet.Search(null, false, out cursor); featCursor = cursor as IFeatureCursor; currentFeat = featCursor.NextFeature(); //get feature selected by mouse click clickPoint.SpatialReference = currentFeat.Shape.SpatialReference; ISpatialFilter clickSpatFilter = new SpatialFilter(); clickSpatFilter.Geometry = clickPoint as IGeometry; clickSpatFilter.SpatialRel = ESRI.ArcGIS.Geodatabase.esriSpatialRelEnum.esriSpatialRelIntersects; IQueryFilter queryFilter = (IQueryFilter)clickSpatFilter; IFeatureCursor toBeMergedWithFeatCursor = editLayers.CurrentLayer.FeatureClass.Search(queryFilter, false); int numMerged = 0; featToBeMergedWith = toBeMergedWithFeatCursor.NextFeature(); while (currentFeat != null) { if (FeaturesTouch(featToBeMergedWith, currentFeat)) { topoOp = featToBeMergedWith.ShapeCopy as ITopologicalOperator; featToBeMergedWith.Shape = topoOp.Union(currentFeat.ShapeCopy); featToBeMergedWith.Store(); 
currentFeat.Delete(); numMerged++; } currentFeat = featCursor.NextFeature(); } if(numMerged==0)

Page 112: A THESIS PRESENTED TO THE DEPARTMENT OF GEOLOGY AND ... › library › theses › 2012 › BiedigerJoan.pdf · eye to extract complex patterns can limit interpretation. Digital image

103

{ MessageBox.Show("Merge polygon did not touch selected features"); } else{ ArcMap.Document.ActiveView.Refresh(); } } catch (Exception ex) { MessageBox.Show("Error: " + ex.Message + Environment.NewLine + "Merge aborted!"); } } static private IFeature GetAdjacentFeature(IFeature featToBeMerged, string neighborType) { ISpatialFilter spatFilter = new SpatialFilter(); IFeatureClass theFeatClass = featToBeMerged.Class as IFeatureClass; spatFilter.Geometry = featToBeMerged.Shape; spatFilter.SpatialRel = esriSpatialRelEnum.esriSpatialRelIntersects; IFeatureCursor featCursor = theFeatClass.Search(spatFilter, false); IFeature tempFeat; IFeature featToBeMergedWith = null; Double threshold = 0; Double tempFeatArea = 0; tempFeat = featCursor.NextFeature(); while (tempFeat != null) { if (tempFeat.OID != featToBeMerged.OID) { tempFeatArea = GetArea(tempFeat.Shape as IArea); if (featToBeMergedWith == null) { featToBeMergedWith = tempFeat; threshold = tempFeatArea; } else { switch (neighborType) { case "smallest": if (tempFeatArea < threshold) { featToBeMergedWith = tempFeat; threshold = tempFeatArea; } break; case "largest": if (tempFeatArea > threshold) { featToBeMergedWith = tempFeat; threshold = tempFeatArea; } break; } } } tempFeat = featCursor.NextFeature(); } return featToBeMergedWith; } static private bool FeaturesTouch(IFeature featA, IFeature featB) { IRelationalOperator relationalOperator = (IRelationalOperator)featA.Shape; return relationalOperator.Touches(featB.Shape); }


        static private Double GetArea(IArea thePolygon)
        {
            return thePolygon.Area;
        }

        static private bool SetupOK(out IEditor3 theEditor, out IEditLayers editLayers,
            out IFeatureSelection featSel, out string message)
        {
            try
            {
                theEditor = null;
                editLayers = null;
                featSel = null;
                theEditor = ArcMap.Application.FindExtensionByName("ESRI Object Editor") as IEditor3;
                if (theEditor.EditState != esriEditState.esriStateEditing)
                {
                    message = "Please start an editing session first!";
                    return false;
                }
                editLayers = theEditor as IEditLayers;
                if (editLayers.CurrentLayer.FeatureClass.ShapeType != esriGeometryType.esriGeometryPolygon)
                {
                    message = "Current edit layer must be a polygon layer.";
                    return false;
                }
                featSel = editLayers.CurrentLayer as IFeatureSelection;
                if (featSel.SelectionSet.Count == 0)
                {
                    message = "No features to be merged have been selected.";
                    return false;
                }
                message = "OK";
                return true;
            }
            catch
            {
                throw new Exception("Error checking edit session.");
            }
        }
    }
}
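The Merge tool above unions each selected feature that touches the clicked polygon into it, growing the seed shape as neighbors are absorbed. As a minimal sketch of that control flow only, here is a stdlib Python analogue in which sets of unit grid cells stand in for ArcObjects geometries (the cell representation, `touches`, and `merge_touching` are illustrative stand-ins, not part of the tool):

```python
def touches(poly_a, poly_b):
    """Two disjoint cell-polygons touch if any of their cells share an edge
    (analogue of IRelationalOperator.Touches)."""
    return any((x + dx, y + dy) in poly_b
               for (x, y) in poly_a
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def merge_touching(seed, selected):
    """Union every selected polygon that touches the (growing) seed into it;
    return the merged polygon and the count of absorbed features (numMerged)."""
    merged, num_merged = set(seed), 0
    for feat in selected:
        if touches(merged, feat):
            merged |= feat          # ITopologicalOperator.Union analogue
            num_merged += 1
    return merged, num_merged

# Example: the clicked polygon absorbs an edge-adjacent neighbor,
# while a detached polygon is left alone.
seed = {(0, 0), (1, 0)}
neighbor = {(2, 0)}
detached = {(5, 5)}
merged, n = merge_touching(seed, [neighbor, detached])
# merged == {(0, 0), (1, 0), (2, 0)}, n == 1
```

As in the original loop, the seed grows as features are absorbed, so a later selected feature can merge by touching an earlier absorbed one rather than the originally clicked shape.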

Select by Area Button

using ESRI.ArcGIS.esriSystem;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using ESRI.ArcGIS.ArcMapUI;
using ESRI.ArcGIS.Carto;
using ESRI.ArcGIS.Editor;
using ESRI.ArcGIS.Geometry;
using ESRI.ArcGIS.Geodatabase;

namespace MergePolys
{
    public partial class SelectByArea : Form
    {
        public SelectByArea()
        {
            InitializeComponent();
        }

        private void btn_Cancel_Click(object sender, EventArgs e)
        {
            this.Dispose();
        }

        private void btn_Select_Click(object sender, EventArgs e)
        {
            IActiveView activeView = ArcMap.Document.ActiveView;
            IEditor3 theEditor;
            IEditLayers editLayers;
            IFeatureClass featCLS;
            try
            {
                theEditor = ArcMap.Application.FindExtensionByName("ESRI Object Editor") as IEditor3;
                if (theEditor.EditState != esriEditState.esriStateEditing)
                {
                    MessageBox.Show("Please start an editing session first!");
                    this.Dispose();
                }
                else
                {
                    editLayers = theEditor as IEditLayers;
                    featCLS = editLayers.CurrentLayer.FeatureClass;
                    if (featCLS.ShapeType != esriGeometryType.esriGeometryPolygon)
                    {
                        MessageBox.Show("Current edit layer must be a polygon layer.");
                    }
                    else
                    {
                        IFeatureSelection featureSelection = editLayers.CurrentLayer as IFeatureSelection;

                        // Select all polygons whose area falls below the user-entered threshold
                        IQueryFilter qf = new QueryFilterClass();
                        qf.WhereClause = "Shape_Area < " + this.txt_Area.Text;
                        activeView.PartialRefresh(esriViewDrawPhase.esriViewGeoSelection, null, null);
                        featureSelection.SelectFeatures(qf, esriSelectionResultEnum.esriSelectionResultNew, false);
                        activeView.PartialRefresh(esriViewDrawPhase.esriViewGeoSelection, null, null);
                        this.Close();
                    }
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show("Error selecting by area - " + ex.Message);
                this.Dispose();
            }
        }
    }
}
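The Select by Area button builds the attribute query "Shape_Area < threshold" from the text box and makes it the new selection. The selection rule itself can be sketched in a few lines of stdlib Python, with hypothetical (oid, area) pairs standing in for rows of the layer's feature table:

```python
def select_by_area(features, threshold):
    """Return the OIDs of features whose area is below the threshold,
    mirroring the WhereClause "Shape_Area < " + txt_Area.Text."""
    return [oid for oid, area in features if area < threshold]

# Example: sliver polygons (tiny areas) are picked up, e.g. for a later merge.
layer = [(1, 0.5), (2, 120.0), (3, 2.25), (4, 980.0)]
small = select_by_area(layer, 10.0)   # -> [1, 3]
```

Note that the C# form concatenates the raw text-box string into the WHERE clause, so a non-numeric entry surfaces as the caught query exception rather than being validated up front.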

