TRANSCRIPT
This article was downloaded by: [University of Texas at Austin], [Kevan Moffett]
On: 10 October 2012, At: 08:21
Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
International Journal of Remote Sensing
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/tres20
Distinguishing wetland vegetation and channel features with object-based image segmentation
Kevan B. Moffett (a) & Steven M. Gorelick (b)
(a) Department of Geological Sciences, The University of Texas at Austin, Austin, TX, USA
(b) Department of Environmental Earth System Science, Stanford University, Stanford, CA, USA
Version of record first published: 10 Oct 2012.
To cite this article: Kevan B. Moffett & Steven M. Gorelick (2013): Distinguishing wetland vegetation and channel features with object-based image segmentation, International Journal of Remote Sensing, 34:4, 1332-1354
To link to this article: http://dx.doi.org/10.1080/01431161.2012.718463
International Journal of Remote Sensing
Vol. 34, No. 4, 20 February 2013, 1332–1354
Distinguishing wetland vegetation and channel features with object-based image segmentation
Kevan B. Moffett (a)* and Steven M. Gorelick (b)
(a) Department of Geological Sciences, The University of Texas at Austin, Austin, TX, USA; (b) Department of Environmental Earth System Science, Stanford University, Stanford, CA, USA
(Received 30 December 2011; accepted 27 May 2012)
Mapping landscape features within wetlands using remote-sensing imagery is a persistent challenge due to the fine scale of wetland pattern variation and the low spectral contrast among plant species. Object-based image analysis (OBIA) is a promising approach for distinguishing wetland features, but systematic guidance for this use of OBIA is not presently available. A sensitivity analysis tested the use of OBIA to distinguish vegetation zones, vegetation patches, and surface water channels in two intertidal salt marshes in southern San Francisco Bay. Optimal imagery sources and OBIA segmentation settings were determined from 348 sensitivity tests using the eCognition multiresolution segmentation algorithm. The optimal high-resolution (≤1 m) imagery choices were colour infrared (CIR) imagery to distinguish vegetation zones, CIR or red, green, blue (RGB) imagery to distinguish vegetation patches depending on species and season, and RGB imagery to distinguish surface water channels. High-resolution (1 m) lidar data did not help distinguish small surface water channels or other features. Optimal segmentation varied according to segmentation setting choices. Small vegetation patches and narrow channels were more recognizable using small scale parameter settings, and coarse vegetation zones using larger scale parameter settings. The scale parameter served as a de facto lower bound on median segmented object size. Object smoothness/compactness weight settings had little effect. Wetland features were more recognizable using high colour/low shape weight settings. However, an experiment on a synthetic non-wetland image demonstrated that, colour information notwithstanding, segmentation results are still strongly affected by the selected image resolution, OBIA settings, and shape of the analysis region. Future wetland OBIA studies may benefit from strategically making imagery and segmentation setting choices based on these results; such systemization of future wetland OBIA approaches may also enhance study comparability.
1. Introduction
Accurately mapping wetland landscape features is a goal shared by science, management, and restoration, since the spatial organization of key features within a wetland, such as vegetation zones, vegetation patches, and surface water channels, is of intrinsic ecological and hydrological interest. These patterns can also be indicative of wetland health and development. Historically, wetlands were mapped by laborious field surveys and manual cartography. Information on wetland features can now be collected much more efficiently
*Corresponding author. Email: [email protected]
ISSN 0143-1161 print/ISSN 1366-5901 online
© 2013 Taylor & Francis
http://dx.doi.org/10.1080/01431161.2012.718463
http://www.tandfonline.com
by modern remote-sensing technologies. However, the translation of remote-sensing information into detailed maps of wetlands remains a challenge, in part because what the image captures, the electromagnetic radiation from the surface, is not the information that is actually desired, the categorization and configuration of vegetation types or landscape features on the ground.
This study systematically tested means to translate remote-sensing data into maps of wetland features using a relatively new remote-sensing-based mapping method: object-based image analysis (OBIA). The goal of the study was to determine the imagery and OBIA analysis choices most useful for distinguishing three key types of wetland features: vegetation zones, vegetation patches, and surface water channels.
The most common technique for mapping landscape features using information from remote-sensing imagery is pixel-based spectral image analysis (PBSIA). This automated approach places each image pixel in a class (e.g. vegetation type) based on its spectral signature such that it is optimally separable from the other classes. PBSIA is useful for its broad spatial coverage, non-invasiveness, repeatability, analytical speed, and ease of automation (Rundquist, Narumalani, and Narayanan 2001; Ozesmi and Bauer 2002; Adam, Mutanga, and Rugege 2009). These advantages have led to much recent use of PBSIA to map wetlands.
Although PBSIA is generally successful at identifying whole wetlands from background imagery of water or other land-cover classes, mapping features within wetlands, such as vegetation patterns, is more difficult because of the generally small size of vegetation zones and low spectral contrast among plant species (Ozesmi and Bauer 2002; Silva et al. 2008; Adam, Mutanga, and Rugege 2009; Tuxen et al. 2010). Uncertainty in PBSIA vegetation maps is made greater by low spectral contrast and use of high-resolution imagery, which typically produces noisy maps with 'salt-and-pepper' pixel effects (Ouyang et al. 2011). Intertidal salt marshes exhibit particularly low spectral contrast that depends on the species present (Silvestri, Marani, and Marani 2003; Li, Ustin, and Lay 2005; Rosso, Ustin, and Hastings 2005; Artigas and Yang 2006; Sadro, Gastil-Buhl, and Melack 2007; Andrew and Ustin 2008), the time of year (Gao and Zhang 2006; Gilmore et al. 2008), and even vegetation health (Wilson, Ustin, and Rocke 2004; Tilley et al. 2007). Salt marshes thus serve as a particularly challenging test case for remote-sensing-based wetland mapping methods.
A wetland mapping approach that avoids some of the difficulties with PBSIA is OBIA (Baatz and Schäpe 2000; Burnett and Blaschke 2003; Hay et al. 2003; Benz et al. 2004). While PBSIA considers the remotely sensed spectrum of each image pixel independently, OBIA simultaneously considers the pixel spectrum, its relative spatial location, and the local spectral homogeneity and shape of adjacent clusters of similar pixels. These additional considerations can help to separate spectrally similar vegetation types and to produce less noisy maps with more intuitively recognizable, spatially coherent landscape features. For example, OBIA has successfully helped distinguish vegetation types in some brackish wetlands (Gilmore et al. 2008; Tian et al. 2008; Tuxen and Kelly 2008). However, brackish vegetation is more diverse and may be more spectrally separable than vegetation in some other types of wetlands, such as salt marshes (Tuxen et al. 2010). In the particularly challenging case of intertidal salt marshes, OBIA studies have so far been limited to distinguishing high-contrast wetland features such as vegetation, mud, and water (Hurd et al. 2006; Kim et al. 2011) or a few easily spectrally separable vegetation species (Ouyang et al. 2011).
Despite these potential advantages of using OBIA to map wetland features, a drawback of OBIA is that, unlike PBSIA, it cannot be completely automated using current software
and requires key user inputs. Mapping using OBIA is at least a two-step process, consisting of segmentation and classification. First, the user must direct the software to divide (segment) the image into clusters of adjacent pixels (objects) with relatively uniform properties. For the popular and highly successful (Meinel and Neubert 2004; Neubert, Herold, and Meinel 2008) multiresolution segmentation algorithm (MRSA) of eCognition Developer (Trimble Navigation Limited, Sunnyvale, CA, USA; Baatz and Schäpe 2000; Benz et al. 2004), the three segmentation settings the user must supply are (1) colour/shape weights, which control how much the segmentation is based on image spectral (colour) information versus object shape information; (2) smoothness/compactness weights, which control how much the object shape tends to be spatially compact versus spectrally homogeneous (smooth) but less compact; and (3) a scale parameter, which limits overall object colour and shape complexity. After an image has been segmented into objects, a map is produced by assigning the objects to semantically meaningful classes (e.g. 'water', 'bare soil', or specific vegetation species). Unlike the segmentation process, this classification process can be automated (e.g. using decision tree analysis; McIver and Friedl 2002; Laliberte, Fredrickson, and Rango 2007; Lu and Weng 2009) but is more commonly done via a set of rules according to expert judgement. In some complex applications, segmentation and classification processes may be alternated across multiple scales to better distinguish landscape features (e.g. Burnett and Blaschke 2003; Baatz, Hoffmann, and Willhauck 2008; Varela, Rego, and Iglesias 2008; Weinke, Lang, and Preiner 2008; Kim et al. 2011), but this approach still requires, at a minimum, some user input to direct the segmentation processes.
The need for user input is only the first of two barriers to broad adoption of OBIA methods to map wetland features. The second problem is that there is little consistency among the user inputs employed in past wetland OBIA studies, and so it is difficult to generalize useful approaches from the literature. In fact, wetland mapping studies have been quite inconsistent in their choices of imagery sources and their choices of OBIA segmentation parameters. Table 1 summarizes the choices made in a variety of wetland mapping studies that used the eCognition MRSA; in some cases, the choices made were not reported by the study. The most common type of imagery used in these studies has been optical imagery (aerial photography, Landsat, QuickBird, IKONOS), but the pixel resolutions of such imagery have ranged from 0.3 to 30 m. The MRSA settings chosen have also varied greatly. Smaller scale parameter settings have generally been used to identify finer image features and larger scale parameters to identify coarser image features. Still, the scale parameter settings do not appear to have been consistently based on the type or size of the feature of interest or on the size of the image pixels. There seems to be a general preference for weighting colour more than shape and for equally weighting object smoothness and compactness, but few studies report sensitivity analyses to support these choices.
Various studies have tested the effects of users' choices of remote-sensing imagery sources on OBIA wetland mapping. Some examples tested the use of optical imagery and lidar topography (Gilmore et al. 2008; Arroyo et al. 2010), optical imagery and RADARSAT land moisture index (Li and Chen 2005; Grenier et al. 2007), and different optical platforms (Harvey and Hill 2001; Ringrose, Vanderpost, and Matheson 2003). Other imagery combinations are highlighted in existing reviews (e.g. Ozesmi and Bauer 2002). Such studies have generally concluded that high-resolution imagery, such as multispectral or hyperspectral aerial photography, is preferable for accurately mapping wetland vegetation. To distinguish tidal channels from marsh vegetation, high-resolution topographic data are most commonly used, but are not always available and are expensive to obtain.
Table 1. Summary of OBIA studies of wetlands using the eCognition MRSA.

Colour/shape weights | Smoothness/compactness weights | Scale parameter | Pixel size (m) | Target image features | Wetland type | Data used | Reference
n/a | n/a | 5–100 | 0.3 | Veg., chan., mudflats | Salt marsh | Photo | Kim et al. (2011)
0.7/0.3 | 0.3/0.7 | n/a | 0.5 | Plant cover | Riparian | Lidar | Johansen et al. (2010)
0.7/0.3 | 0.5/0.5 | 35 | 0.6 | Two veg. spp., non-veg. | Brackish marsh | QuickBird-ps | Ouyang et al. (2011)
0.7/0.3 | 0.5/0.5 | 15 | 0.6 | Two veg. spp., non-veg. | Brackish marsh | QuickBird-ps | Ouyang et al. (2011)
0.7/0.3 | 0.3/0.7 | 50 | 0.6 | Veg., water | Fens | QuickBird panchromatic | Dissanska, Bernier, and Payette (2009)
0.9/0.1 | 0.3/0.7 | 10 | 0.6 | Veg., water | Fens | QuickBird panchromatic | Dissanska, Bernier, and Payette (2009)
0.2/0.8 | 0.8/0.2 | 100 | 0.2 and 1 | Veg., water | Brackish marsh | Photo and lidar | Tuxen and Kelly (2008)
0.9/0.1 | 0.5/0.5 | 50 | 0.2 and 1 | Veg. patches, fauna habitat | Brackish marsh | Photo and lidar | Tuxen and Kelly (2008)
0.9/0.1 | 0.5/0.5 | 25 | 0.2 and 1 | Veg. patches, fauna habitat | Brackish marsh | Photo and lidar | Tuxen and Kelly (2008)
n/a | n/a | 'Relatively high' | 0.6 and 1 | Veg. zones | Riparian | 16-band photo | Harken and Sugumaran (2005)
Colour > shape | n/a | 7 | 1 | Three veg. zones, rock | Freshwater marsh | IKONOS-ps | Rokitnicki-Wojcik, Wei, and Chow-Fraser (2011)
0.7/0.3 | 0.5/0.5 | 20 | 1 | Veg. zones | Coastal and upland | CIR photo | Yu et al. (2006)
n/a | n/a | 25 | 1 and 4 | Veg. zones | Mangrove | IKONOS | Wang, Sousa, and Gong (2004)
0.7/0.3 | n/a | 20 | 2.4 | Veg. zones | Brackish marsh | QuickBird and lidar | Gilmore et al. (2008)
n/a | n/a | 200 | 2.8 | Veg. zones | Riparian | QuickBird | Gergel et al. (2007)
0.9/0.1 | 0.5/0.5 | 25, 50, 100–200 | 15 and 12.5 | Five wetland types, water, uplands | Taiga and boreal | Landsat 7-ps and RADARSAT 1 | Grenier et al. (2007)
0.1/0.9 | 0.5/0.5 | 4, 7, 10 | 15 | Wetland | General | Landsat 7-ps | Frohn et al. (2009)
0.1/0.9 | 0.5/0.5 | 50, 100 | 15 | Upland | General | Landsat 7-ps | Frohn et al. (2009)
0.8/0.2 | 0.9/0.1 | 10 | 20 | Basic objects | Bog | CIR photo | Burnett et al. (2003)
0.7/0.3 | 0.5/0.5 | 30 | 20 | Veg. zones | Bog | CIR photo | Burnett et al. (2003)
0.7/0.3 | 0.5/0.5 | 150 | 20 | Cover classes | Bog | CIR photo | Burnett et al. (2003)
0.9/0.1 | 0.5/0.5 | 30 | 30 | Lakes, basins, other | Permafrost | Landsat 7 | Frohn (2006)
0.9/0.1 | 0.5/0.5 | 75 | 30 | Water, other | Brackish marsh | Landsat 7 | Hurd et al. (2006)
0.9/0.1 | 0.5/0.5 | 15 | 30 | Upland, high and low marsh | Brackish marsh | Landsat 7 | Hurd et al. (2006)
0.7/0.3 | 0.5/0.5 | 20 | 30 | Water, wetland | Flooded | Multi-polarized SAR | Shen, Guo, and Liao (2008)
0.85/0.15 | 0.8/0.2 | 10 | 30 | Veg. zones | Brackish marsh | Landsat 5 | Tian et al. (2008)

Notes: n/a, not available. The suffix '-ps' indicates pan-sharpened imagery. Some studies reported as being of salt marshes are listed here as brackish marshes due to substantial reported populations of fresh/brackish plant genera such as Phragmites and Juncus.
A few studies have tested the effects of segmentation parameter choices on OBIA wetland mapping, although using only one type of imagery in each case. When seeking to distinguish wetlands from surrounding land cover, more well-defined wetlands apparently result from higher colour versus shape weighting, while smoothness/compactness weighting has relatively little effect on the segmentation results, although the specific tests leading to these conclusions were not described in detail (Grenier et al. 2007). To distinguish finer features, such as vegetation patterns within wetlands, the effects of different colour/shape and smoothness/compactness weight values have not been reported. Tests of the scale parameter have led to the selection of values ranging from 10 (Tian et al. 2008) to 25 (Wang, Sousa, and Gong 2004) to 200 (Gergel et al. 2007) to identify vegetation zones within a wetland. Further tests of the scale parameter and systematic tests of different combinations of the three segmentation parameter choices have not been combined with tests of different imagery sources. The one study that reported testing multiple segmentation parameters with multiple imagery sources to identify wetland vegetation patches did not describe the specific tests or detail their results (Tuxen and Kelly 2008).
In summary, this literature (Table 1) fails to generalize how imagery and segmentation setting choices are connected to wetland OBIA map quality. This makes it difficult to judge the representativeness of the results yielded by one study's imagery and segmentation setting choices compared to other studies or to possible results from unexplored choices. In this study, we systematically examine the effects of different combinations of imagery and OBIA settings on wetland image segmentation. Our sensitivity analysis identifies imagery and segmentation setting combinations that produce segmented objects most easily recognized as prominent salt marsh features (vegetation zones, vegetation patches, and tidal channels). The analysis exposes trade-offs between choice of imagery and choice of segmentation settings and so provides a context for interpreting an individual mapping result (based on just one choice of imagery and segmentation settings) to deal with non-uniqueness. Improved OBIA-based wetland map quality with better imagery and segmentation setting choices may improve related science, management, restoration planning, and monitoring that employ OBIA.
2. Methods
2.1. Principles of OBIA segmentation
The image segmentation conducted by OBIA software combines adjacent image pixels into discrete spatial objects. This is an automated process determined by a segmentation algorithm and by user-supplied control variables. The goal of segmentation is to create recognizable objects, which is the prerequisite to later assigning the objects to semantically meaningful map classes, such as 'water', 'bare soil', or different vegetation types.
This study focused on one of the most widely used segmentation tools: the MRSA of the eCognition Developer software (Baatz and Schäpe 2000; Benz et al. 2004). The following summary of MRSA is based on the description by Benz et al. (2004). MRSA grows objects from individual pixels by automated merger decisions. Two developing objects are merged if the increase in heterogeneity (f) incurred by the merger is the smallest among all the candidate merges with adjacent objects (or pixels) and if it is less than a specified threshold (f < SP). The threshold (SP) is the scale parameter introduced in Section 1. Note that the 'scale parameter' is not prescriptive of the size (scale) of resulting objects, nor directly related to the dimension or resolution (scale) of the input data. It might be more intuitively called the 'object complexity limit', but the term 'scale parameter' is well established and so is retained. The change in the weighted heterogeneity (f) of a potential
object merger is the sum of the change in colour heterogeneity (Δh_colour) and the change in shape heterogeneity (Δh_shape), multiplied by the colour and shape weights (w_colour, w_shape), respectively (see Section 1):

f = (w_colour Δh_colour) + (w_shape Δh_shape).   (1)

The colour and shape heterogeneity changes of a potential object merger are calculated as the heterogeneity score of the potential new larger object minus the sum of the scores of the original two smaller objects. The colour heterogeneity of an object is its size in pixels (n) multiplied by the standard deviation of pixel values in each band (σ_b), summed over all bands. Bands are typically weighted equally (w_b = 1) but may be weighted individually:

Δh_colour = Σ_b w_b (n_new σ_new,b − (n_old1 σ_old1,b + n_old2 σ_old2,b)).   (2)

The shape heterogeneity of an object is the sum of the change in compactness heterogeneity (Δh_comp.) and the change in smoothness heterogeneity (Δh_smooth.), multiplied by the compactness and smoothness weights (w_comp., w_smooth.), respectively, introduced in Section 1:

Δh_shape = (w_comp. Δh_comp.) + (w_smooth. Δh_smooth.).   (3)

The smoothness heterogeneity of an object is its size in pixels (n), multiplied by its perimeter (p), divided by the perimeter of a bounding box oriented parallel to the input raster (p_box):

Δh_smooth. = n_new p_new / p_new,box − (n_old1 p_old1 / p_old1,box + n_old2 p_old2 / p_old2,box).   (4)

The compactness heterogeneity of an object is its perimeter (p), multiplied by the square root of its size in pixels (√n):

Δh_comp. = √n_new p_new − (√n_old1 p_old1 + √n_old2 p_old2).   (5)
In summary, the MRSA requires the user to supply the scale parameter, the colour and shape weights (which must sum to 1), and the smoothness and compactness weights (which must sum to 1).
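As a concrete illustration, the merge-cost calculation of Equations (1)–(5) can be sketched in a few lines of Python. The object representation below (dictionaries holding a pixel count, per-band standard deviations, a perimeter, and a bounding-box perimeter) is our own illustrative construction, not eCognition's internal API; it is a minimal sketch of the published formulas only.

```python
import math

def colour_heterogeneity_change(new, old1, old2, band_weights=None):
    """Eq. (2): band-weighted sum of n*sigma for the merged object,
    minus the same quantity for the two original objects."""
    n_bands = len(new["sigma"])
    w = band_weights or [1.0] * n_bands
    return sum(
        w[b] * (new["n"] * new["sigma"][b]
                - (old1["n"] * old1["sigma"][b] + old2["n"] * old2["sigma"][b]))
        for b in range(n_bands)
    )

def smoothness_change(new, old1, old2):
    """Eq. (4): n * p / p_box, differenced across the merge."""
    h = lambda o: o["n"] * o["p"] / o["p_box"]
    return h(new) - (h(old1) + h(old2))

def compactness_change(new, old1, old2):
    """Eq. (5): sqrt(n) * p, differenced across the merge."""
    h = lambda o: math.sqrt(o["n"]) * o["p"]
    return h(new) - (h(old1) + h(old2))

def merge_cost(new, old1, old2, w_colour=0.9, w_smooth=0.5):
    """Eqs. (1) and (3); each weight pair sums to 1, so the
    shape and compactness weights are the complements."""
    d_shape = ((1 - w_smooth) * compactness_change(new, old1, old2)
               + w_smooth * smoothness_change(new, old1, old2))
    return (w_colour * colour_heterogeneity_change(new, old1, old2)
            + (1 - w_colour) * d_shape)
```

In the MRSA, a candidate merge is accepted only if this cost is the smallest among all adjacent candidates and falls below the scale parameter (f < SP), which is why larger SP values permit larger, more heterogeneous objects.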
2.2. Study areas
To test how to effectively segment low-contrast remote-sensing imagery into objects recognizable as wetland vegetation zones, vegetation patches, and surface water channels, two salt marsh sites were chosen for analysis (Figure 1). The sites share a similar location, substrate, vegetation communities, and protected status, partially controlling for factors not relevant to testing the OBIA methods. The sites were chosen to differ markedly in the configuration of their vegetation patterns and channel networks, to allow for potential variations in OBIA performance between differently patterned sites (Table 2). The Dumbarton Point Marsh is characterized by a branching network of sinuous tidal channels and a complex mosaic of vegetation; the more rapid modern growth of the Calaveras Point Marsh
Figure 1. Wetland sites: (a) site locations shown by black outlines, in southern San Francisco Bay; (b, c) Dumbarton Point Marsh site, centred on 37.506393°, −122.094446°; (d, e) Calaveras Point Marsh site, centred on 37.467546°, −122.033717°; (b, d) IKONOS CIR imagery; (c, e) RGB aerial photography. Analysis limits are shown by white outlines in (b–e).
Table 2. Comparison of Dumbarton and Calaveras intertidal salt marsh site characteristics.
Site characteristic | Dumbarton Point Marsh | Calaveras Point Marsh
Center location | 37.506393°, −122.094446° | 37.467546°, −122.033717°
Area | 2,243,581 m² | 1,654,813 m²
Approximate major axis | 2,468 m | 2,908 m
Approximate minor axis | 1,135 m | 787 m
Marsh age | Mostly ~3,000 years | Mostly <160 years
Channel topology | Meandering | Straight
Channel widths | Variable | Narrow
General vegetation pattern type | Mosaic | Banded
Endangered species (both sites) | Clapper rail, salt marsh harvest mouse, steelhead trout, chinook salmon, groundfish
has created a set of parallel linear channels around which vegetation is arranged in a more channel- and shore-parallel banded fashion. Both marshes are protected by the Don Edwards San Francisco Bay National Wildlife Refuge in far southern San Francisco Bay, as part of the 20% or so of tidal wetland area that remains in the bay compared to the historical extent before 1850 (Atwater et al. 1979). Both marshes are thought to have naturally grown bayward in the past 150 years (Calaveras Point more than Dumbarton Point) but based on a core area of largely pristine marsh (Atwater et al. 1979).
The predominant plant species in far southern San Francisco Bay tidal marshes are spectrally very similar and intermixed: native Salicornia virginica and S. europaea, native Spartina foliosa with admixture of invasive S. alterniflora and hybrids, and native Frankenia salina, Distichlis spicata, Jaumea carnosa, Atriplex triangularis, and Grindelia stricta (SBSPRP 2005). The minor influence of the Coyote River on the Calaveras Point Marsh also causes the growth of more brackish species at that site: Scirpus maritimus and invasive Lepidium latifolium (Grossinger et al. 1998; Rosso, Ustin, and Hastings 2005). The Calaveras Point Marsh was previously studied by Rosso, Ustin, and Hastings (2005), who
tested approaches to spectrally unmix 20-m AVIRIS hyperspectral image pixels to map salt marsh vegetation species at subpixel scales. The reflectance spectra of dominant salt marsh plant species, bay mud, and water published by Rosso, Ustin, and Hastings (2005) and others (e.g. Silvestri, Marani, and Marani 2003; Li, Ustin, and Lay 2005; Artigas and Yang 2006; Sadro, Gastil-Buhl, and Melack 2007; Andrew and Ustin 2008) have demonstrated the easy spectral separability of mud, water, and vegetation, but sometimes difficult separation of individual salt marsh vegetation zones and patches, depending on the season and spectral bands of the imagery.
2.3. Data and analysis
The sensitivity analysis of imagery and OBIA segmentation settings was conducted in two phases. First, 120 tests that spanned the full range of segmentation settings were applied to colour infrared (CIR) IKONOS imagery. This large number of tests was necessary to cover the wide range of scale parameters (20, 50, 100, 300, 500, 1000), colour/shape weights (0.3/0.7, 0.5/0.5, 0.7/0.3, 0.9/0.1), and smoothness/compactness weights (0.1/0.9, 0.3/0.7, 0.5/0.5, 0.7/0.3, 0.9/0.1) that may reasonably be used for this type of analysis (see Table 1). Scale parameters smaller than 20 and larger than 1000 were not tested. Using a scale parameter of 20 already produced objects containing only a few pixels, so a smaller value would be illogical. Using a scale parameter of 1000 often contained the entire marsh in one object, so a larger value would not be useful. A colour/shape weight of 0.1/0.9 was not used because such little weighting of the image colour information is an illogical way to analyse the imagery. The 120 tests on IKONOS imagery covered the remaining, potentially useful parameter space. Particular care was taken to use the map feature of the eCognition software when programming the analyses, so objects from one test were not accidentally inherited by subsequent tests.
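The phase-one test grid described above (6 scale parameters × 4 colour/shape weight pairs × 5 smoothness/compactness weight pairs = 120 tests) can be reproduced programmatically. The sketch below is illustrative only; the dictionary keys are our own names, not an eCognition settings format.

```python
from itertools import product

# Settings ranges reported for the 120 phase-one tests on CIR IKONOS imagery.
scale_params = [20, 50, 100, 300, 500, 1000]
colour_shape = [(0.3, 0.7), (0.5, 0.5), (0.7, 0.3), (0.9, 0.1)]
smooth_compact = [(0.1, 0.9), (0.3, 0.7), (0.5, 0.5), (0.7, 0.3), (0.9, 0.1)]

# Full factorial combination: 6 x 4 x 5 = 120 tests.
tests = [
    {"scale": sp, "colour": c, "shape": s, "smoothness": sm, "compactness": cp}
    for sp, (c, s), (sm, cp) in product(scale_params, colour_shape, smooth_compact)
]
print(len(tests))  # 120
```

Enumerating the grid this way makes it easy to verify that each weight pair sums to 1, as the MRSA requires, before any segmentation runs are launched.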
In the second phase of testing, targeted subsets of segmentation settings were applied in 30 tests with visual-spectrum aerial photography and in 24 tests with the aerial photography plus a lidar topography layer. Some of the parameter combinations were omitted that were not found to be useful with the IKONOS imagery and that were also judged to be unlikely to be useful with the aerial photography or lidar. Applying all these tests to each of the two selected marsh sites yielded a total of 348 segmentation tests on which the sensitivity analysis was based.
The CIR imagery (provided by Timothy Hayes, Environmental Services Department, City of San Jose, CA, USA) covered the near-infrared, red, and green bands with a pixel resolution of 1.0 m (IKONOS 2004). The visual-spectrum aerial photography covered the red, green, and blue (RGB) bands with a pixel resolution of 0.3 m (NGA 2004 USGS High Resolution Orthoimage USNG 10SEG775460, San Francisco-Oakland, CA: provided to the National Geospatial-Intelligence Agency (NGA) and US Geological Survey (USGS) by EarthData International of Maryland, LLC, and to the project by the San Francisco Estuary Institute, Oakland, CA). The lidar topography was a bare earth lattice data product (provided to the project by the San Francisco Estuary Institute, Oakland, CA) with a pixel resolution of 1.0 m (TerraPoint USA 2005). Since the vertical accuracy of the lidar over soft vegetation was 10–20 cm (TerraPoint USA 2005), comparable to the total topographic relief of the marsh plain, use of the lidar data was intended to provide only a rough guide to major channel locations, not a detailed topographic map of the marshes. All three data sets were georeferenced by the data providers. The aerial photography and lidar georeferencing matched very closely; the IKONOS georeferencing was slightly offset from the other two
Table 3. System for assigning qualitative scores of wetland feature distinguishability.
Vegetation zones
5  Fully automatically extracted
4  Represented in pieces but require some interpretation to merge into zones
3  Represented in pieces but require much classification to merge into zones
2  Only vaguely represented or highly over-segmented
1  Not represented

Vegetation patches and tidal channels
5  Fully distinguishable as separate features of target type
4  Largely distinguishable
3  Somewhat distinguishable
2  Hardly distinguishable or highly over-segmented
1  Not distinguishable
data sets. Within each data set, image tiles sufficient to cover the two study sites were merged prior to analysis using the tools of ArcGIS 9.3 (ESRI, Redlands, CA, USA) so as to preserve and match the colour ranges of all tiles.
The segmentation test results were evaluated according to the criterion proposed by Castilla and Hay (2008), as cited by Kim et al. (2011): 'one measure of an optimal segmentation of individual landscape features is that the segmentation result has minimal over- or under-segmentation'. This criterion was used to develop the scoring system in Table 3, which was applied to each of the 348 test results by an expert in local salt marsh patterning. Average scores were assessed for different combinations of imagery, segmentation settings, and target marsh feature types. The three marsh feature types targeted were vegetation zones, which are spatially extensive regions covered by a relatively uniform mix of a few plant species (i.e. vegetation assemblage) or a relatively uniform array of vegetation patches; vegetation patches, which are spatially limited regions of nearly homogeneous plant cover (species or assemblage); and tidal surface water channels, which variously appeared filled with water, appeared as an exposed muddy channel bed, or were partially to completely covered over by vegetation, depending on their size, location, and the imagery type.
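The averaging of the qualitative 1–5 scores across combinations of imagery, settings, and feature type can be sketched as follows. The example records are hypothetical, included only to show the grouping logic; they are not the study's actual scores.

```python
from collections import defaultdict

# Hypothetical test records: (imagery, scale parameter, feature type, score 1-5).
results = [
    ("CIR", 50, "patches", 4), ("CIR", 50, "patches", 5),
    ("RGB", 50, "patches", 3), ("RGB", 300, "zones", 4),
    ("CIR", 300, "zones", 5), ("CIR", 300, "zones", 4),
]

def mean_scores(records):
    """Average the expert distinguishability score for each
    (imagery, scale, feature type) combination."""
    groups = defaultdict(list)
    for imagery, scale, feature, score in records:
        groups[(imagery, scale, feature)].append(score)
    return {key: sum(scores) / len(scores) for key, scores in groups.items()}

print(mean_scores(results)[("CIR", 50, "patches")])  # 4.5
```

Averaging within each combination, rather than over all 348 tests at once, is what allows the sensitivity analysis to compare imagery and setting choices for each target feature type separately.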
3. Results
3.1. General effects of choice of imagery
Image segmentations conducted using different imagery types but consistent segmentation settings produced distinctly different sets of objects (Figure 2). For example, applying one of the combinations of segmentation settings most common in the literature (colour/shape weighting 0.9/0.1 and smoothness/compactness weighting 0.5/0.5; see Table 1) to CIR IKONOS satellite imagery resulted in objects with highly tortuous shapes and crenulated boundaries. In comparison, objects created from RGB aerial photography using the same settings were blockier and less complex. The addition of a lidar layer to the RGB photography appeared to dilute the tendency towards object complexity, resulting in objects that were even less recognizable as the target marsh features than when using the RGB data alone. Overall, no one type of imagery was clearly superior for identifying all three types of target wetland features: different imagery appeared suitable for different purposes, as described presently.
Downloaded by [University of Texas at Austin], [Kevan Moffett] at 08:21 10 October 2012
1342 K.B. Moffett and S.M. Gorelick
[Figure 2 image: segmentation panels for Dumbarton Marsh and Calaveras Marsh using CIR (IKONOS) imagery, RGB imagery, and RGB imagery + lidar, at scale parameters SP 300, SP 100, SP 50, and SP 20; scale bar 100 m.]
Figure 2. Example segmentations using different imagery and scale parameters (SPs), with colour/shape weighting (0.9/0.1) and smoothness/compactness weighting (0.5/0.5).
3.2. General effects of choice of segmentation settings
Image segmentations conducted using different segmentation settings on the same type of imagery produced distinctly different sets of objects (Figure 3). The following observations applied consistently across tests of all three imagery types. Weighting colour information over shape information provided better results than the reverse in most tests, with a preference for the highest two colour weights (0.7, 0.9)/lowest two shape weights (0.3, 0.1). The highest colour weight (0.9)/lowest shape weight (0.1) resulted in very complicated objects with highly crenulated boundaries that tended to connect distant areas via narrow corridors, but in many cases this seemed to accurately reflect the shapes of the target wetland features. The reverse choice, of low colour weights (<0.5)/high shape weights (≥0.5), resulted in blocky objects that coincided poorly with target wetland features. However, these blocky shapes were useful in a few cases for roughly breaking up the marsh into broad zones, particularly when used with higher scale parameter values (300, 500).
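For readers planning a similar sensitivity analysis, the tested setting combinations can be enumerated as a simple grid. The values below are illustrative, drawn from the ranges discussed here; the study's actual test matrix was larger and not fully factorial.

```python
from itertools import product

# Illustrative MRSA setting grid (values echo those discussed in the text;
# this is not the exact test matrix of the study).
scale_params = [20, 50, 100, 300, 500, 1000]
colour_weights = [0.1, 0.3, 0.5, 0.7, 0.9]   # shape weight = 1 - colour weight
smooth_weights = [0.1, 0.3, 0.5, 0.7, 0.9]   # compactness = 1 - smoothness

combos = [
    {"scale": sp, "colour": cw, "shape": round(1 - cw, 1),
     "smooth": sw, "compact": round(1 - sw, 1)}
    for sp, cw, sw in product(scale_params, colour_weights, smooth_weights)
]
```

Each combination would then be run once per imagery type and site, and the resulting objects scored against Table 3.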
Blocky object shapes were more frequently obtained and were more pronounced if the segmentation was also conducted with low smoothness (0.3, 0.1)/high compactness (0.7, 0.9) weights. In general, the smoothness/compactness weights appeared to have little effect on the resulting segmentations' scores, when the scores were averaged across marsh sites, scale parameters, and colour/shape weights (Figure 4). Smaller scale parameter values were generally preferred, although using the small scale parameter value of 20 usually divided relatively homogeneous target features into many more objects than necessary and so incurred a scoring penalty for over-segmentation. Pronounced over-segmentation poses a potential problem for subsequent classification as portions of a target feature with slightly different properties may accidentally become assigned to different
[Figure 3 image: segmentation panels for Dumbarton Marsh and Calaveras Marsh using CIR (IKONOS) imagery, RGB imagery, and RGB imagery + lidar, for colour/shape weightings of 0.9/0.1 and 0.5/0.5 crossed with smoothness/compactness weightings of 0.5/0.5 and 0.3/0.7; scale bar 100 m.]
Figure 3. Example segmentations using different colour/shape weighting and smoothness/compactness (Sm./com.) weighting, with scale parameter (100).
Note: Panels in the first row are the same as panels in the second row of Figure 2.
classes (Kim et al. 2011). At the other extreme, the large scale parameter value of 1000 typically resulted in very few nonsensical objects and sometimes merged the entire marsh into a single object.
3.3. Segmentation of wetland vegetation zones
Salt marsh vegetation zones were more easily identifiable from 1 m CIR satellite imagery than from 0.3 m RGB aerial photography in our tests, whether the latter was analysed with or without an additional lidar data layer (Figure 4). Using the CIR imagery, the optimal settings for identifying vegetation zones in our tests were a scale parameter of 300, colour/shape weights of 0.9/0.1, and smoothness/compactness weights of 0.3/0.7. However, different settings were more appropriate if the analysis was based on the 0.3 m RGB aerial photography, with or without the 1 m lidar layer (Table 4). The scale parameter most appropriate for identifying objects corresponding to vegetation zones was especially sensitive to the choice of imagery, varying from values of 50 to 300.
Another key observation was that the segmentation of recognizable vegetation zones appeared substantially affected by the differences in the tidal channel configurations at the two sites. The effect on object boundaries was more visually striking among the straight, parallel channels at the Calaveras site than among the more sinuous channels at the Dumbarton site (see Figures 2 and 3). In both cases, the tidal channels served as object boundaries, if not necessarily discrete objects, and apparently partially controlled the directions of object growth. This constraint of objects between tidal channels consistently hindered the ability of the MRSA to grow spatially extensive vegetation zones (e.g. see the SP 300 example of Figure 2).
[Figure 4 image: nine charts (a)–(i) of average distinguishability score versus scale parameter (a, d, g), colour weight (b, e, h), and smoothness weight (c, f, i), one column per imagery type: CIR imagery (1 m resolution IKONOS satellite), RGB imagery (0.3 m resolution aerial photography), and RGB + lidar (0.3 m photographs + 1 m lidar data); series for vegetation zones, vegetation patches, and channels.]
Figure 4. Summary of segmentation sensitivity test results, which illustrates a framework for connecting OBIA user imagery and segmentation setting choices to expected wetland feature map quality. Charts show average scores of marsh feature distinguishability: higher scores reflect better feature extraction (see Table 3). (a, d, g) Average scores using different scale parameter settings for each imagery type, averaged across all colour/shape and smoothness/compactness weights. (b, e, h) Average scores using different colour weight settings for each imagery type, averaged across all scale parameters and smoothness/compactness weights. (c, f, i) Average scores using different smoothness weight settings for each imagery type, averaged across all scale parameters and colour/shape weights. Only a subset of parameter combinations was tested with the aerial photography data.
Note: Legend in (i) applies to all panels.
3.4. Segmentation of wetland vegetation patches
Unlike vegetation zones, vegetation patches were identifiable with roughly equal success from either CIR satellite imagery or RGB aerial photography, although the specific results were still dependent on the imagery and segmentation setting choices (Figure 4). Using either source of imagery, objects were most recognizable as distinct vegetation patches when created using a colour/shape weight of 0.7/0.3 and smoothness/compactness weight of 0.5/0.5 (Table 4). This optimal colour/shape weight was particularly notable since it was lower than those preferred in the remainder of the analyses of marsh features (vegetation zones and channels). For distinguishing vegetation patches, small scale parameter values were necessary, despite the concurrent problems with over-segmentation. The ability to recognize objects as distinct vegetation patches was substantially degraded above
Table 4. Settings distinguishing salt marsh features with highest average scores.

IKONOS CIR satellite imagery (1 m resolution)
- Vegetation zones: scale parameter 300, colour weight 0.9, shape weight 0.1, smoothness weight 0.3, compactness weight 0.7
- Vegetation patches: scale parameter 50, colour weight 0.7, shape weight 0.3, smoothness weight 0.5, compactness weight 0.5
- Tidal channels: scale parameter 20, colour weight 0.9, shape weight 0.1, smoothness weight 0.5, compactness weight 0.5

RGB aerial photography (0.3 m resolution)
- Vegetation zones: scale parameter 100, colour weight 0.9, shape weight 0.1, smoothness weight 0.3, compactness weight 0.7
- Vegetation patches: scale parameter 20, colour weight 0.7, shape weight 0.3, smoothness weight 0.5, compactness weight 0.5
- Tidal channels: scale parameter 20, colour weight 0.9, shape weight 0.1, smoothness weight 0.5, compactness weight 0.5

RGB aerial photography (0.3 m resolution) and lidar (1 m, resampled to 0.3 m)
- Vegetation zones: scale parameter 50, colour weight 0.9, shape weight 0.1, smoothness weight 0.3–0.5, compactness weight 0.7–0.5
- Vegetation patches: scale parameter 20, colour weight 0.9, shape weight 0.1, smoothness weight 0.3, compactness weight 0.7
- Tidal channels: scale parameter 20, colour weight 0.9, shape weight 0.1, smoothness weight 0.5, compactness weight 0.5
a scale parameter setting of 50 in tests of both CIR satellite imagery and RGB aerial photography (Figure 4).
3.5. Segmentation of wetland surface water channels
The surface water channels of the study sites were most easily identifiable from the RGB aerial photography (Figure 4). Although the RGB imagery yielded objects somewhat more easily recognizable as tidal channels than the CIR imagery, the optimal segmentation settings were the same for both sets of imagery (Table 4). The optimal settings for identifying tidal channels in our tests were a scale parameter of 20, colour/shape weight of 0.9/0.1, and smoothness/compactness weight of 0.5/0.5. The greater success at identifying channels from the RGB imagery compared to the CIR imagery seemed to be due to two factors: the difference in the colour spectrum of the RGB versus CIR imagery and the finer RGB image resolution. The positive effect of the colour spectrum of the RGB imagery compared to the CIR imagery was evidenced by a strong trend of improved performance with increasing colour weighting using the RGB imagery (Figure 4(e)) but not using the CIR imagery (Figure 4(b)). The finer RGB image resolution resolved the very small size of the tidal channels more successfully than the CIR imagery, although the smallest of the scale parameter settings tested (20) was still required for optimum performance in every case. Despite the tendency towards problematic over-segmentation with such a small scale parameter value, this value was the most successful at creating objects that were fully contained within the channels and not partially merged with vegetation outside the channel banks.
It was expected that the addition of a lidar data layer to the RGB imagery would assist with segmenting objects more easily recognizable as tidal channels. Unfortunately, the addition of the lidar data instead appeared to promote further over-segmentation in the vicinity of the tidal channels, therefore not improving the ability to extract the channels as reasonably large, contiguous objects and degrading overall performance (Figure 4).
4. Discussion
4.1. Choosing imagery and segmentation settings for wetland OBIA and other considerations
The results of this study demonstrated the variations in OBIA segmentation result quality that may be produced by different users' choices of OBIA source imagery and segmentation settings. Taken as a whole, the sensitivity analysis connected OBIA user imagery and segmentation setting choices to expected wetland feature map quality (Figure 4). Specific imagery and segmentation setting recommendations were extracted from the analysis that may be expected to produce easily recognizable wetland vegetation zones, vegetation patches, and surface water channels (Table 4), at least for low-spectral-contrast, low-topographic-relief wetlands such as the intertidal salt marshes of San Francisco Bay. This section expands on these recommendations and examines additional considerations that should inform one's segmentation setting choices: imagery availability, the type of wetland features of interest, and prior conceptual models of wetland organization.
In practice, constraints external to the mapping project often limit the choice of imagery. Whether the source of imagery for a wetland mapping project is chosen or prescribed, to use OBIA methods one must select segmentation settings that complement the available imagery and support the mapping goals. If one has a choice of imagery sources, our results lead us to recommend high-resolution CIR imagery to distinguish vegetation zones, high-resolution CIR or RGB imagery to distinguish vegetation patches (the choice likely depending on species and season), and high-resolution RGB imagery to distinguish surface water channels. Of course, there are many other imagery types that might be included in an OBIA approach, and for specific applications their inclusion may be warranted (e.g. different optical platforms (Harvey and Hill 2001; Ringrose, Vanderpost, and Matheson 2003), RADARSAT land moisture index (Li and Chen 2005; Grenier et al. 2007), texture layers (Berberoglu et al. 2010; Laba et al. 2010; Kim et al. 2011), principal components of imagery bands (Tuxen and Kelly 2008), or other options (see review by Ozesmi and Bauer 2002)). However, simple optical RGB and CIR imagery remain the most frequently employed sources of information for wetland mapping (Table 1) and so were the focus of this study.
Once imagery is chosen, the OBIA segmentation settings should be tailored to the type of wetland features of interest. Unfortunately, this limits the utility of existing literature to guide OBIA applications to novel types of wetland features. However, in general, our results suggest that the segmentation of a low-contrast wetland into recognizable objects will be most successful if it is heavily based on the spectral information in the imagery while de-emphasizing object shape. Although the literature suggested a general preference for high colour/low shape weights (Table 1), our sensitivity analysis provides justification for such a preference. In general, high colour/low shape weights produced complex, highly crenulated objects, which seemed to accurately reflect the configuration of broad vegetation zones and surface water channels. To identify vegetation zones and surface water channels, the highest colour weight (0.9) and lowest shape weight (0.1) settings were preferred in this study. This result was consistent with inferences from the other two sensitivity analyses of colour/shape weights in wetland mapping (Grenier et al. 2007; Tuxen and Kelly 2008). However, when seeking to identify discrete vegetation patches from among low-contrast CIR IKONOS or RGB aerial imagery, a slight decrease in colour weighting (to 0.7) and an increase in shape weighting (from 0.1 to 0.3) proved helpful. This slightly greater attention to object shape may be beneficial because it better captures the naturally compact nature of vegetation patches.
Whether set at a moderately high or high value (0.7 or 0.9), the importance of high colour weight, and consequent low shape weight, to produce recognizable wetland objects from low-contrast imagery was emphasized by our tests of the addition of a lidar layer to the RGB aerial photography. The lidar data, which were of relatively uniform elevation 'colour' across the marsh plane, appeared to further dilute the segmentation algorithm's ability to separate low-contrast vegetation types into distinct objects. This result might have been partially remedied by reducing the weight of the lidar data relative to the RGB bands (see variable wb in Equation (2)). However, our recommendation based on our results is to omit the lidar data from wetland segmentation processes altogether, at least for low-contrast wetlands that also have low topographic relief. Prominent exceptions to this suggestion are cases for which a target vegetation type is of a markedly different height than the surrounding landscape, and high-resolution canopy-return lidar data can be obtained during that particular season or circumstance (e.g. Rosso, Ustin, and Hastings 2006; Gilmore et al. 2008). Again, the choices of imagery and segmentation settings must be tailored to the target wetland features.
In addition to considering the imagery used and the type of wetland features of interest, one's segmentation setting choices may also be affected by assumed conceptual models of wetland organization. For example, salt marsh vegetation zones are generally thought of as spatially extensive, blocky, largely uniform vegetation assemblages, often arranged as bands roughly parallel to the shore (Adam 1990). This conventional conceptual model may have manifested in this study as a preference for slightly higher object compactness, relative to object smoothness, when trying to identify objects representing salt marsh vegetation zones (Table 4). However, despite the guidance of this conceptual model, the optimal scale parameter value for producing objects recognizable as vegetation zones changed greatly depending on the type of imagery used. This result warns against trying to force a segmentation to match a prior conceptual model if that model is not supported by evidence in the imagery. Instead, better understanding of the interspersion of different vegetation zones might be gained, as in this study, from acceptance of more complicated zone boundaries. For example, one may imagine that classification of the example segmentation shown in Figure 2 for IKONOS imagery and a scale parameter of 300 would result in well-defined but highly complex vegetation zone boundaries. Hence, we suggest that assumed conceptual models may be useful to help guide OBIA wetland mapping, but caution is necessary given the constraints of and evidence from the available data and analysis methods.
4.2. Additional notes on the ‘scale parameter’
Tailoring OBIA approaches to the imagery, type of wetland features, and assumed conceptual model is also important when choosing the scale parameter setting in eCognition's MRSA. Of all the MRSA settings, the scale parameter was the most variable across imagery and target wetland feature types (Table 4) and so was particularly important to optimize. The goal is to narrow in on appropriate scale parameter values without extensive sensitivity analysis. We already discussed in Section 2.1 that, despite its name, the scale parameter is not directly related to the dimension or resolution (scale) of the input data and is not prescriptive of the size (scale) of resulting objects. Its definition, and our results, emphasize that an appropriate value of the scale parameter cannot be reliably derived from logical consideration of the input data or target wetland features.
In theory, the scale parameter functions as an 'object complexity limit', as suggested in Section 2.1. However, a meta-analysis of our segmentation tests suggests that, in practice, the scale parameter may actually serve as a lower bound for the median size of
objects produced from high-resolution CIR or RGB imagery (Figure 5). Variations in median object size were clearest in our analysis of IKONOS imagery, for which we tested a wide range of settings (Figure 5(a)). The median object size was reduced and approached the value of the scale parameter for very low shape/high colour weights. Median object size increased dramatically with increased shape/decreased colour weights for every scale
[Figure 5 image: panels of median object size (pixels) versus shape weight and versus compactness weight, for scale parameters SP 20, 50, 100, 300, 500, and 1000, with separate series for the Calaveras and Dumbarton marshes.]
Figure 5. Median object size in each wetland (Calaveras Point or Dumbarton Point marshes) resulting from segmentation using various segmentation settings (scale parameter (SP), shape/colour weight, and compactness/smoothness weight) and imagery: (a) results from IKONOS data; (b) results from RGB aerial photography; (c) results from RGB aerial photography with lidar data.
Note: In each panel, object size equivalent to the SP setting value is indicated by a horizontal dashed line.
parameter and compactness/smoothness weight combination that we tested (Figure 5(a), top row). This effect can be explained logically: as colour information is discounted in favour of shape information, the weighted colour heterogeneity score of a potentially merged object will decrease (see Equation (1)). This then allows the object size to become larger before shape irregularities cause the weighted shape heterogeneity score to push the total change in weighted heterogeneity to exceed the specified scale parameter.
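This merge logic can be sketched as follows, paraphrasing the weighted-heterogeneity criterion of Equation (1) (after Baatz and Schäpe 2000). The function names and the exact form of the threshold test are our simplification, not eCognition's implementation.

```python
def merge_cost(dh_colour, dh_shape, w_colour):
    """Weighted heterogeneity increase for a candidate object merge.

    dh_colour, dh_shape: heterogeneity changes the merge would cause;
    w_colour: colour weight (shape weight is its complement).
    """
    return w_colour * dh_colour + (1.0 - w_colour) * dh_shape

def should_merge(dh_colour, dh_shape, w_colour, scale_parameter):
    # A merge is accepted only while the weighted heterogeneity increase
    # stays below the scale-parameter threshold.
    return merge_cost(dh_colour, dh_shape, w_colour) < scale_parameter

# With a large colour heterogeneity change, discounting colour (low
# w_colour) lets the merge proceed, while weighting colour heavily
# blocks it -- the mechanism behind the object-size trend in Figure 5.
assert should_merge(dh_colour=30, dh_shape=5, w_colour=0.1, scale_parameter=20)
assert not should_merge(dh_colour=30, dh_shape=5, w_colour=0.9, scale_parameter=20)
```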
The median object sizes from our tests were more variable at high scale parameter settings, but the above trends were still well represented. For any scale parameter setting and any colour/shape weight that we tested, the median object size was relatively insensitive to changes in the compactness/smoothness weights. The same trends were loosely borne out among the less extensive tests of aerial photography, with or without the lidar data (Figures 5(b) and (c)). In every test, the median object size was greater than the value of the scale parameter setting. It is not clear from these results whether there may be an upper bound to the size of objects produced using a given scale parameter setting. However, this realization that the scale parameter is a de facto lower bound to median object size may be useful for guiding the selection of scale parameter values appropriate to capture wetland features of interest that exhibit characteristic scales.
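The lower-bound regularity described above is simple to check mechanically on one's own segmentation output. The sketch below is illustrative only: the object-size lists are invented stand-ins, not measured values from this study.

```python
from statistics import median

def median_object_size(object_sizes_px):
    """Median size (in pixels) of the objects from one segmentation run."""
    return median(object_sizes_px)

# Hypothetical object-size lists from two runs with scale parameter 20:
# one with high colour weight (median near the SP) and one with high
# shape weight (much larger objects), echoing the trend in Figure 5(a).
run_high_colour = [22, 25, 31, 40, 28]
run_high_shape = [120, 340, 95, 210, 500]

scale_parameter = 20
for sizes in (run_high_colour, run_high_shape):
    # The observed regularity: median object size stays above the SP value.
    assert median_object_size(sizes) >= scale_parameter
```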
4.3. Interacting effects of imagery and segmentation settings on results
While the meta-analysis provides some insight into the average effect of changing the scale parameter setting, individual objects may not follow these trends. We designed a synthetic experiment to test the effects of the MRSA scale parameter on the size and shape of individual objects when image colour heterogeneity is minimized. The experiment was to segment a perfectly homogeneous black image (in Equation (1), Δh_colour = 0). The black image was converted to rasters of two different resolutions (0.3 m and 1.0 m) to mimic different imagery sources. Two regular shapes (a square and a circle) were introduced to bound the segmentations, as if they were the outlines of field sites. The scale parameter values tested were 20, 50, and 100. Although we hypothesized that colour/shape weighting would not affect the results given a homogeneously black input image, we tested weights of 0.1/0.9 and 0.9/0.1. We used equal smoothness/compactness weights (0.5/0.5) in each test, based on the insensitivity of the results of any of our previous tests to this weighting.
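As a rough illustration, the synthetic inputs (the rasters and bounding shapes, not the MRSA itself, which is proprietary to eCognition) could be constructed as below; `black_raster` and `circular_mask` are our own hypothetical helpers, and the 60 m extent is assumed for illustration.

```python
import numpy as np

def black_raster(extent_m=60.0, pixel_m=1.0):
    """Homogeneous black image: zero colour heterogeneity everywhere."""
    n = int(round(extent_m / pixel_m))
    return np.zeros((n, n), dtype=np.uint8)

def circular_mask(shape):
    """Boolean circle inscribed in the raster, as a mock field-site outline."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = min(h, w) / 2.0
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

coarse = black_raster(pixel_m=1.0)   # mimics 1 m imagery: 60 x 60 pixels
fine = black_raster(pixel_m=0.3)     # mimics 0.3 m imagery: 200 x 200 pixels
site = circular_mask(fine.shape)     # circular bounding region
```

The square bounding case is simply the full raster; segmentation would then be run inside each mask at scale parameters 20, 50, and 100.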
Figure 6 illustrates the results of this synthetic experiment of MRSA behaviour in the absence of image heterogeneity. Even with zero colour heterogeneity in the input data, the size, shape, and regularity of the resulting objects varied dramatically depending on (1) the scale parameter setting, (2) the colour/shape weighting, (3) the image pixel resolution, and (4) the shape of the bounding region (circle or square). Furthermore, these contributing factors appeared to interact, e.g. such that the result of using a scale parameter of 20 on 1 m resolution imagery was more similar to the result of using a scale parameter of 50 on 0.3 m resolution imagery than to any other result, for a given set of colour/shape weights.
5. Conclusions
The low spectral contrast of wetlands continues to pose a challenge for producing accurate, high-resolution maps at the sub-wetland scale. OBIA maps depend on the image segmentation, which depends on the source imagery and the segmentation settings. This study showed that a user's choices regarding imagery and segmentation settings are linked. The optimal set of choices depends on the imagery, the landscape features of interest, and
[Figure 6 image: segmentation results for square and circular regions at 1 m and 0.3 m pixel resolutions, scale parameters SP 20, 50, and 100, and colour/shape (C/S) weights of 0.9/0.1 and 0.1/0.9.]
Figure 6. Results of synthetic experiment testing segmentation of square and circular areas (as if field site boundaries) within a homogeneous black rectangle. The black image was pixelated at either 1 m or 0.3 m resolution and segmented within the square and circle boundaries using different scale parameters (SPs), as indicated. Panel (a): popular settings of colour/shape (C/S) weights of 0.9/0.1. Panel (b): reverse settings of colour/shape weights of 0.1/0.9. All these experiments used smoothness/compactness weights of 0.5/0.5.
possibly one's prior conceptual models of wetland organization. For low-contrast wetlands with pattern characteristics similar to the western US coastal salt marshes examined here, we have been able to recommend some choices of imagery and segmentation settings that are likely to produce objects easily recognizable as distinct vegetation zones, vegetation patches, and tidal channels (Table 4).
A synthetic experiment testing the effects of OBIA settings in the absence of image heterogeneity also showed that the shape of the regions being segmented may affect the segmentation results. This result has substantial implications for multiresolution OBIA procedures, which expressly seek to nest fine-scale objects within the boundaries of coarser-scale objects over a range of scales (Burnett and Blaschke 2003).
Because OBIA mapping methods require user input, even 'optimal' OBIA results are non-unique. For example, Figures 2 and 3 clearly illustrated that different choices of imagery or segmentation settings greatly affected the objects produced and their ability to be recognized as wetland features of interest. A poor appreciation of this inherent non-uniqueness can have real costs, e.g. causing inefficient allocation of funds for wetland restoration (Gergel et al. 2007). Future wetland OBIA studies may benefit from making strategic imagery and segmentation setting choices based on the results of this study. By connecting OBIA user imagery and segmentation setting choices to expected wetland feature map quality, we hope these results will promote efficiency, quality, and comparability among future wetland OBIA mapping projects and related science, management, and restoration.
Acknowledgements
We thank the San Francisco Estuary Institute and the City of San Jose for providing us with the aerial and satellite imagery. This work was supported by National Science Foundation grant EAR-1013843 to Stanford University. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References
Adam, P. 1990. Saltmarsh Ecology. New York: Cambridge University Press.
Adam, E., O. Mutanga, and D. Rugege. 2009. "Multispectral and Hyperspectral Remote Sensing for Identification and Mapping of Wetland Vegetation: A Review." Wetlands Ecology and Management 18: 281–96.
Andrew, M. E., and S. L. Ustin. 2008. "The Role of Environmental Context in Mapping Invasive Plants with Hyperspectral Image Data." Remote Sensing of Environment 112: 4301–17.
Arroyo, L. A., K. Johansen, J. Armston, and S. Phinn. 2010. "Integration of LiDAR and QuickBird Imagery for Mapping Riparian Biophysical Parameters and Land Cover Types in Australian Tropical Savannas." Forest Ecology and Management 259: 598–606.
Artigas, F. J., and J. Yang. 2006. "Spectral Discrimination of Marsh Vegetation Types in the New Jersey Meadowlands, USA." Wetlands 26: 271–7.
Atwater, B. F., S. G. Conard, J. N. Dowden, C. W. Hedel, R. L. MacDonald, and W. Savage. 1979. "History, Landforms, and Vegetation of the Estuary's Tidal Marshes." In San Francisco Bay: The Urbanized Estuary, edited by T. J. Conomos, 347–444. San Francisco, CA: California Academy of Sciences.
Baatz, M., C. Hoffmann, and G. Willhauck. 2008. "Progressing from Object-Based to Object-Oriented Image Analysis." In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications, edited by T. Blaschke, S. Lang, and G. J. Hay, 29–42. Berlin: Springer-Verlag.
Baatz, M., and A. Schäpe. 2000. Multiresolution Segmentation: An Optimization Approach for High Quality Multi-Scale Image Segmentation (eCognition). Accessed December 30, 2011. http://www.ecognition.cc/download/baatz_schaepe.pdf.
Benz, U. C., P. Hofmann, G. Willhauck, I. Lingenfelder, and M. Heynen. 2004. "Multi-Resolution, Object-Oriented Fuzzy Analysis of Remote Sensing Data for GIS-Ready Information." ISPRS Journal of Photogrammetry & Remote Sensing 58: 239–58.
Berberoglu, S., A. Akin, P. M. Atkinson, and P. J. Curran. 2010. "Utilizing Image Texture to Detect Land-Cover Change in Mediterranean Coastal Wetlands." International Journal of Remote Sensing 31: 2793–815.
Burnett, C., K. Aaviksoo, S. Lang, T. Langanke, and T. Blaschke. 2003. "An Object-Based Methodology for Mapping Mires Using High Resolution Imagery." In International Conference on Ecohydrological Processes in Northern Wetlands, Tallinn, Estonia, June 30–July 4, 239–44. Accessed December 30, 2011. http://www.ecognition.com/sites/default/files/332_102_full.pdf.
Burnett, C., and T. Blaschke. 2003. "A Multi-Scale Segmentation/Object Relationship Modelling Methodology for Landscape Analysis." Ecological Modelling 168: 233–49.
Dissanska, M., M. Bernier, and S. Payette. 2009. "Object-Based Classification of Very High Resolution Panchromatic Images for Evaluating Recent Change in the Structure of Patterned Peatlands." Canadian Journal of Remote Sensing 35: 189–215.
Frohn, R. C. 2006. "The Use of Landscape Pattern Metrics in Remote Sensing Image Classification." International Journal of Remote Sensing 27: 2025–32.
Frohn, R. C., M. Reif, C. Lane, and B. Autrey. 2009. "Satellite Remote Sensing of Isolated Wetlands Using Object-Oriented Classification of Landsat-7 Data." Wetlands 29: 931–41.
Gao, Z. G., and L. Q. Zhang. 2006. "Multi-Seasonal Spectral Characteristics Analysis of Coastal Salt Marsh Vegetation in Shanghai, China." Estuarine, Coastal and Shelf Science 69: 217–24.
Gergel, S. E., Y. Stange, N. C. Coopes, K. Johansen, and K. R. Kirby. 2007. "What Is the Value of a Good Map? An Example Using High Spatial Resolution Imagery to Aid Riparian Restoration." Ecosystems 10: 688–702.
Gilmore, M. S., E. H. Wilson, N. Barrett, D. L. Civco, S. Prisloe, J. D. Hurd, and C. Chadwick. 2008. "Integrating Multi-Temporal Spectral and Structural Information to Map Wetland Vegetation in a Lower Connecticut River Tidal Marsh." Remote Sensing of Environment 112: 4048–60.
Grenier, M., A.-M. Demers, S. Labrecque, M. Benoit, R. A. Fournier, and B. Drolet. 2007. “An Object-Based Method to Map Wetland Using RADARSAT-1 and Landsat ETM Images: Test Case on Two Sites in Quebec, Canada.” Canadian Journal of Remote Sensing 33: S28–45.
Grossinger, R., J. Alexander, A. N. Cohen, and J. N. Collins. 1998. Introduced Tidal Marsh Plants in the San Francisco Estuary: Regional Distribution and Priorities for Control. Richmond, CA: San Francisco Estuary Institute. Accessed December 30, 2011. http://legacy.sfei.org/ecoatlas/Plants/docs/images/intrtmar.pdf.
Harken, J., and R. Sugumaran. 2005. “Classification of Iowa Wetlands Using an Airborne Hyperspectral Image: A Comparison of the Spectral Angle Mapper Classifier and an Object-Oriented Approach.” Canadian Journal of Remote Sensing 31: 167–74.
Harvey, K. R., and G. J. E. Hill. 2001. “Vegetation Mapping of a Tropical Freshwater Swamp in the Northern Territory, Australia: A Comparison of Aerial Photography, Landsat TM and SPOT Satellite Imagery.” International Journal of Remote Sensing 22: 2911–25.
Hay, G. J., T. Blaschke, D. J. Marceau, and A. Bouchard. 2003. “A Comparison of Three Image-Object Methods for the Multiscale Analysis of Landscape Structure.” ISPRS Journal of Photogrammetry & Remote Sensing 57: 327–45.
Hurd, J. D., D. L. Civco, M. S. Gilmore, S. Prisloe, and E. H. Wilson. 2006. “Tidal Wetland Classification from Landsat Imagery Using an Integrated Pixel-Based and Object-Based Classification Approach.” In ASPRS 2006 Annual Conference, Reno, Nevada, USA, May 1–5, 11 p. Accessed December 30, 2011. http://www.ecognition.com/sites/default/files/171_asprs2006_0063.pdf or http://clear.uconn.edu/publications/research/tech_papers/Hurd_et_al_ASPRS2006.pdf.
IKONOS. 2004. Color Infrared Satellite Imagery. San Jose, CA: IKONOS.
Johansen, K., L. A. Arroyo, J. Armston, S. Phinn, and C. Witte. 2010. “Mapping Riparian Condition Indicators in a Sub-Tropical Savanna Environment from Discrete Return LiDAR Data Using Object-Based Image Analysis.” Ecological Indicators 10: 796–807.
Kim, M., T. A. Warner, M. Madden, and D. S. Atkinson. 2011. “Multi-Scale GEOBIA with Very High Spatial Resolution Digital Aerial Imagery: Scale, Texture and Image Objects.” International Journal of Remote Sensing 32: 2825–50.
Laba, M., B. Blair, R. Downs, B. Monger, W. Philpot, S. Smith, P. Sullivan, and P. C. Baveye. 2010. “Use of Textural Measurements to Map Invasive Wetland Plants in the Hudson River National Estuarine Research Reserve with IKONOS Satellite Imagery.” Remote Sensing of Environment 114: 867–86.
Laliberte, A. S., E. L. Fredrickson, and A. Rango. 2007. “Combining Decision Trees with Hierarchical Object-Oriented Image Analysis for Mapping Arid Rangelands.” Photogrammetric Engineering & Remote Sensing 73: 197–207.
Li, J., and W. Chen. 2005. “A Rule-Based Method for Mapping Canada’s Wetlands Using Optical, Radar and DEM Data.” International Journal of Remote Sensing 26: 5051–69.
Li, L., S. L. Ustin, and M. Lay. 2005. “Application of Multiple Endmember Spectral Mixture Analysis (MESMA) to AVIRIS Imagery for Coastal Salt Marsh Mapping: A Case Study in China Camp, CA, USA.” International Journal of Remote Sensing 26: 5193–207.
Lu, D., and Q. Weng. 2009. “Extraction of Urban Impervious Surfaces from an IKONOS Image.” International Journal of Remote Sensing 30: 1297–311.
McIver, D. K., and M. A. Friedl. 2002. “Using Prior Probabilities in Decision-Tree Classification of Remotely Sensed Data.” Remote Sensing of Environment 81: 253–61.
Meinel, G., and M. Neubert. 2004. “A Comparison of Segmentation Programs for High Resolution Remote Sensing Data.” In Proceedings IAPRS, Commission IV Papers, Vol. XXXV, Part B4, XXth ISPRS Congress, Istanbul, July 12–23, p. 6. Accessed December 30, 2011. http://www.ecognition.com/sites/default/files/306_isprs2004_506.pdf or http://www.isprs.org/proceedings/XXXV/congress/comm4/papers/506.pdf.
Neubert, M., H. Herold, and G. Meinel. 2008. “Assessing Image Segmentation Quality – Concepts, Methods and Application.” In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications, edited by T. Blaschke, S. Lang, and G. J. Hay, 769–84. Berlin: Springer-Verlag.
NGA (National Geospatial-Intelligence Agency). 2004. USGS High Resolution Orthoimage USNG10SEG775460. San Francisco-Oakland, CA: NGA.
Ouyang, Z.-T., M.-Q. Zhang, X. Xie, Q. Shen, H.-Q. Guo, and B. Zhao. 2011. “A Comparison of Pixel-Based and Object-Oriented Approaches to VHR Imagery for Mapping Saltmarsh Plants.” Ecological Informatics 6: 136–46.
Ozesmi, S. L., and M. E. Bauer. 2002. “Satellite Remote Sensing of Wetlands.” Wetlands Ecology and Management 10: 381–402.
Ringrose, S., C. Vanderpost, and W. Matheson. 2003. “Mapping Ecological Conditions in the Okavango Delta, Botswana Using Fine and Coarse Resolution Systems Including Simulated SPOT Vegetation Imagery.” International Journal of Remote Sensing 24: 1029–52.
Rokitnicki-Wojcik, D., A. Wei, and P. Chow-Fraser. 2011. “Transferability of Object-Based Rule Sets for Mapping Coastal High Marsh Habitat among Different Regions in Georgian Bay, Canada.” Wetlands Ecology and Management 19: 223–36.
Rosso, P. H., S. L. Ustin, and A. Hastings. 2005. “Mapping Marshland Vegetation of San Francisco Bay, California, Using Hyperspectral Data.” International Journal of Remote Sensing 26: 5169–91.
Rosso, P. H., S. L. Ustin, and A. Hastings. 2006. “Use of Lidar to Study Changes Associated with Spartina Invasion in San Francisco Bay Marshes.” Remote Sensing of Environment 100: 295–306.
Rundquist, D. C., S. Narumalani, and R. M. Narayanan. 2001. “A Review of Wetlands Remote Sensing and Defining New Considerations.” Remote Sensing Reviews 20: 207–26.
Sadro, S., M. Gastil-Buhl, and J. Melack. 2007. “Characterizing Patterns of Plant Distribution in a Southern California Salt Marsh Using Remotely Sensed Topographic and Hyperspectral Data and Local Tidal Fluctuations.” Remote Sensing of Environment 110: 226–39.
SBSPRP (South Bay Salt Pond Restoration Project). 2005. Biology and Habitats Existing Conditions Report (prepared by H. T. Harvey and Associates, Philip Williams and Associates, EDAW, and Brown and Caldwell). Los Gatos, CA: South Bay Salt Pond Restoration Project. Accessed December 30, 2011. http://www.southbayrestoration.org/pdf_files/Biology_Habitats_Existing_Conditions.3.25.05.pdf.
Shen, G., H. Guo, and J. Liao. 2008. “Object Oriented Method for Detection of Inundation Extent Using Multipolarized Synthetic Aperture Radar Image.” Journal of Applied Remote Sensing 2: 023512.
Silva, T. S. F., M. P. F. Costa, J. M. Melack, and E. M. L. M. Novo. 2008. “Remote Sensing of Aquatic Vegetation: Theory and Applications.” Environmental Monitoring and Assessment 140: 131–45.
Silvestri, S., M. Marani, and A. Marani. 2003. “Hyperspectral Remote Sensing of Salt Marsh Vegetation, Morphology and Soil Topography.” Physics and Chemistry of the Earth 28: 15–25.
TerraPointUSA. 2005. Project Report, Contract # 2206-H (South Bay Restoration, Final Report, LIDAR data), v. 1.2, February 2, 2005. The Woodlands, TX: TerraPointUSA.
Tian, B., Y. Zhou, L. Zhang, and L. Yuan. 2008. “Analyzing the Habitat Suitability for Migratory Birds at the Chongming Dongtan Nature Reserve in Shanghai, China.” Estuarine, Coastal and Shelf Science 80: 296–302.
Tilley, D. R., M. Ahmed, J. H. Son, and H. Badrinarayanan. 2007. “Hyperspectral Reflectance Response of Freshwater Macrophytes to Salinity in a Brackish Subtropical Marsh.” Journal of Environmental Quality 36: 780–9.
Tuxen, K., and M. Kelly. 2008. “Multi-Scale Functional Mapping of Tidal Marsh Vegetation Using Object-Based Image Analysis.” In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications, edited by T. Blaschke, S. Lang, and G. J. Hay, 415–42. Berlin: Springer-Verlag.
Tuxen, K., L. Schile, D. Stralberg, S. Siegel, T. Parker, M. Vasey, J. Callaway, and M. Kelly. 2010. “Mapping Changes in Tidal Wetland Vegetation Composition and Pattern across a Salinity Gradient Using High Spatial Resolution Imagery.” Wetlands Ecology and Management 19: 141–57.
Varela, R. A. D., P. R. Rego, and M. S. C. Iglesias. 2008. “Classification of Linear Environmental Impacts and Habitat Fragmentation by Object-Oriented Analysis of Aerial Photographs in Corrubedo National Park (NW Iberian Peninsula).” In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications, edited by T. Blaschke, S. Lang, and G. J. Hay, 399–414. Berlin: Springer-Verlag.
Wang, L., W. P. Sousa, and P. Gong. 2004. “Integration of Object-Based and Pixel-Based Classification for Mapping Mangroves with IKONOS Imagery.” International Journal of Remote Sensing 25: 5655–68.
Weinke, E., S. Lang, and M. Preiner. 2008. “Strategies for Semi-Automated Habitat Delineation and Spatial Change Assessment in an Alpine Environment.” In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications, edited by T. Blaschke, S. Lang, and G. J. Hay, 711–32. Berlin: Springer-Verlag.
Wilson, M. D., S. L. Ustin, and D. M. Rocke. 2004. “Classification of Contamination in Salt Marsh Plants Using Hyperspectral Reflectance.” IEEE Transactions on Geoscience and Remote Sensing 42: 1088–95.
Yu, Q., P. Gong, N. Clinton, G. Biging, M. Kelly, and D. Schirokauer. 2006. “Object-Based Detailed Vegetation Classification with Airborne High Spatial Resolution Remote Sensing Imagery.” Photogrammetric Engineering & Remote Sensing 72: 799–811.