

An Object-Based Approach for Urban Land Cover Classification: Integrating LiDAR Height and Intensity Data

Weiqi Zhou

Abstract: Digital surface models (DSMs) derived from light detection and ranging (LiDAR) data have been increasingly integrated with high-resolution multispectral satellite/aerial imagery for urban land cover classification. Fewer studies, however, have investigated the usefulness of LiDAR intensity in aid of urban land cover classification, particularly in highly developed urban settings. In this letter, we use an object-based classification approach to investigate whether a combination of LiDAR height and intensity data can accurately map urban land cover. We further compare the approach to a method that uses multispectral imagery as the primary data source, but LiDAR DSM as ancillary data to aid in classification. The study site is a suburban area in Baltimore County, MD. The LiDAR data were acquired in March 2005, from which a DSM and two intensity layers (first and last returns) were generated, each with 1-m spatial resolution. Four classes were included: 1) buildings; 2) pavement; 3) trees and shrubs; and 4) grass. Our results indicated that the object-based approach provided a flexible and effective means to integrate LiDAR height and intensity data for urban land cover classification. A combination of the LiDAR height and intensity data proved to be effective for urban land cover classification. The overall accuracy of the classification was 90.7%, and the overall Kappa statistic equaled 0.872, with the user's and producer's accuracies ranging from 86.8% to 93.6%. The accuracy of the results was far better than that using multispectral imagery alone, and comparable to using DSM data in combination with high-resolution multispectral satellite/aerial imagery.

Index Terms: Baltimore, high-resolution imagery, intensity, light detection and ranging (LiDAR), normalized digital surface model (nDSM), object-based image analysis, urban land cover classification.

Manuscript received November 19, 2012; revised January 8, 2013; accepted February 27, 2013. This work was supported in part by the Chinese Academy of Sciences One Hundred Talented Program, the State Key Laboratory of Urban and Regional Ecology, the Ministry of Environmental Protection of China under Grant STSN-12-01, and the National Science Foundation LTER Program (DEB 042376).

The author is with the State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/LGRS.2013.2251453

    I. Introduction

ACCURATE and timely information about urban land cover is essential for urban land management, planning, and landscape pattern analysis. Remote sensing provides the primary source of data for urban land cover mapping. As an urban environment is extremely complex and heterogeneous, very high spatial resolution remotely sensed data are needed to adequately characterize the fine-scale spatial heterogeneity of the urban landscape [1]. Consequently, high spatial resolution satellite and aerial imagery has been frequently used for detailed urban land cover mapping [2]-[4].

The recent availability of airborne light detection and ranging (LiDAR) data provides new opportunities for detailed urban land cover mapping at very fine scales. LiDAR is an active remote sensing technology, operating in the visible or near-infrared region of the electromagnetic spectrum. With the recent advances of airborne LiDAR technology, there is increasing interest in applying LiDAR data to urban land cover classification. LiDAR point clouds can be directly used for urban feature extraction [5]. More frequently, however, LiDAR points are first interpolated into raster layer(s) and then combined with high-resolution satellite/aerial imagery for detailed urban land cover mapping. Researchers have commonly used surface height information, or a digital surface model (DSM), derived from LiDAR data as ancillary data to aid in classification [1], or as the primary data for classification [4], [6]. Studies have shown that the accuracy of urban land cover classification can be greatly improved by integrating multispectral imagery with LiDAR data [1], [4], [6].

In addition to height data, LiDAR also provides intensity data that reflect the material characteristics of land cover features, which can potentially be used for urban land cover classification [6], [7]. While LiDAR intensity data have been increasingly used in forest-type classification [8], only a few very recent studies have used LiDAR intensity as ancillary data to aid in urban land cover mapping [4], [6]. Few studies have investigated the usefulness of LiDAR data alone, i.e., a combination of LiDAR height and intensity information, in urban land cover classification [7], [9], particularly in highly developed urban settings, where classification is more challenging due to the fine-scale complexity of urban land cover features. This letter aims to fill this gap.

Paralleled with the increasing availability of LiDAR data are the advances in object-based image analysis (OBIA), an image classification approach that has gained wide acceptance in fine-scale urban land cover mapping [10]. Rather than classifying individual pixels, object-based classification segments the imagery into objects. Consequently, in addition to spectral response, object characteristics, such as shape and spatial relations, can be used for classification [1], [10]. Many studies have shown that OBIA techniques are superior to pixel-based approaches for land cover classification from high-resolution imagery [10].

Fig. 1. Study site: a suburban area in Baltimore County, MD, USA.

In this letter, we used an object-based classification approach to investigate whether a combination of the LiDAR height and intensity data can accurately characterize and map urban land cover. We further compared this approach to a method that used imagery as the primary data source, but LiDAR height data as ancillary data for classification.

    II. Data and Methodology

    A. Study Site

The study site was a suburban area in Baltimore County, MD, USA (Fig. 1). Land use of this area was dominated by medium- to high-density residential development, mixed with small proportions of commercial and other institutional land uses. Land cover features are typical of those in urban and suburban environments, including detached and multifamily houses, commercial buildings, paved surfaces, and vegetation cover. Therefore, the variety of land use/land cover in the study site makes it well suited for the goal of this letter.

    B. Data Preprocessing

1) LiDAR Data: The LiDAR data were acquired in March 2005. Both the first and last vertical returns were recorded for each laser pulse, with an average point spacing of approximately 1 m. The returns from bare ground and nonground features (e.g., tree canopy, building roofs) were separated. The LiDAR point clouds were processed to generate three separate raster datasets: a normalized digital surface model (nDSM) and two intensity image layers.

Fig. 2. Subset of the aerial imagery, LiDAR data layers, and classification results. (a) Multispectral EMERGE imagery. (b) nDSM. (c) First-return intensity. (d) Last-return intensity. (e) Classification results from Method 1 (shades of gray from light to dark: grass, trees and shrubs, pavement, and buildings). (f) Classification results from Method 2.

Normalized digital surface model: The returns from bare ground were interpolated into a 1-m spatial resolution digital elevation model (DEM), and all returns (i.e., returns from both bare ground and nonground features) into a 1-m resolution DSM, using the natural neighbor interpolation method available in ArcGIS 3D Analyst. The surface cover height model (referred to as the nDSM) was then generated by subtracting the DEM from the DSM (Fig. 2).
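The nDSM step itself reduces to a raster difference. The interpolation in this letter was performed with the natural neighbor method in ArcGIS 3D Analyst; purely as an illustrative sketch, and assuming the 1-m DEM and DSM are already available as co-registered GeoTIFFs (the file names below are hypothetical), the subtraction could be scripted as follows.

```python
# Sketch: derive an nDSM by subtracting a bare-ground DEM from a DSM.
# Assumes dem_1m.tif and dsm_1m.tif (hypothetical names) are co-registered
# 1-m rasters produced by an interpolation step elsewhere.
import numpy as np
import rasterio

with rasterio.open("dem_1m.tif") as dem_src, rasterio.open("dsm_1m.tif") as dsm_src:
    dem = dem_src.read(1).astype("float32")
    dsm = dsm_src.read(1).astype("float32")
    profile = dsm_src.profile          # reuse the DSM georeferencing

ndsm = dsm - dem                       # surface-cover height above ground
ndsm = np.clip(ndsm, 0, None)          # negative heights are interpolation noise

profile.update(dtype="float32", count=1)
with rasterio.open("ndsm_1m.tif", "w", **profile) as dst:
    dst.write(ndsm, 1)
```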

Intensity layers: Two intensity layers were generated from the first and last return measurements, respectively (Fig. 2). The natural neighbor interpolation method in ArcGIS 3D Analyst was used to generate the two 1-m spatial resolution intensity layers. The mean and standard deviation of intensity from the first returns were 7.83 and 5.99, respectively, with a range of 0.10 to 472.82; those from the last returns were 9.17 and 5.31, respectively, with a range of 0.14 to 502.32.

2) High-Resolution Imagery Data: Color-infrared digital aerial image data with a pixel size of 0.6 m, acquired in 2004, were used in this letter for comparison purposes (Fig. 2). The imagery was three-band color-infrared, with green (510-600 nm), red (600-700 nm), and near-infrared (800-900 nm) bands. The imagery data have an 8-bit radiometric depth and were orthorectified [2].
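For readers who wish to reproduce the gridding of the intensity layers described above outside ArcGIS, the following is a minimal sketch. Natural neighbor interpolation is not available in SciPy, so linear interpolation is used here as a stand-in, and the point-file name and column layout are assumptions.

```python
# Sketch: grid LiDAR first-return intensity points to a 1-m raster and report
# summary statistics. first_return_points.txt (hypothetical) holds x, y,
# intensity columns; linear interpolation stands in for natural neighbor.
import numpy as np
from scipy.interpolate import griddata

pts = np.loadtxt("first_return_points.txt")      # columns: x, y, intensity
xy, intensity = pts[:, :2], pts[:, 2]

# 1-m target grid spanning the point extent
xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), 1.0)
ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), 1.0)
grid_x, grid_y = np.meshgrid(xs, ys)

intensity_grid = griddata(xy, intensity, (grid_x, grid_y), method="linear")

valid = intensity_grid[~np.isnan(intensity_grid)]
print(f"mean={valid.mean():.2f}, std={valid.std():.2f}, "
      f"min={valid.min():.2f}, max={valid.max():.2f}")
```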

    C. Land Cover Classification

In this letter, four land cover classes were identified: 1) buildings; 2) pavement; 3) trees and shrubs; and 4) grass, which are the most typical land cover types in urban and suburban landscapes. Two methods were applied to perform the land cover classification. Both methods used an OBIA approach for classification, which was implemented using the eCognition software. The classification procedures, detailed below, differed slightly, however, because the primary data sources for classification within the two methods were different.

    1) Method 1: Classification Using LiDAR Data Alone:

Method 1 used LiDAR data alone for classification. A rule set, i.e., a sequence of processing commands/algorithms, was developed to generate image segments and to classify them into the desired land cover classes [1], [2]. Specifically, a contrast-split segmentation algorithm was first used to separate tall objects from short objects based on the nDSM [6]. The minimum and maximum thresholds were set as 6 feet and 9 feet, respectively. We then further separated tall objects into buildings and trees, and classified short objects into pavement and grass, as detailed in the following.
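As a simplified illustration of the height-based split described above (not eCognition's contrast-split algorithm, which searches for an optimal threshold between the stated minimum and maximum), a per-pixel threshold within the 6-9 ft range could be applied to the nDSM as follows; the function name and default threshold are assumptions.

```python
# Sketch: approximate the tall/short split on the nDSM with a fixed height
# threshold chosen inside the 6-9 ft search range used by the contrast-split
# segmentation in the letter.
import numpy as np

FT_TO_M = 0.3048
MIN_T, MAX_T = 6 * FT_TO_M, 9 * FT_TO_M      # 1.83 m and 2.74 m

def split_tall_short(ndsm: np.ndarray, threshold_m: float = 7.5 * FT_TO_M) -> np.ndarray:
    """Return a boolean mask: True for 'tall' pixels (candidate buildings/trees)."""
    assert MIN_T <= threshold_m <= MAX_T, "threshold should lie in the search range"
    return ndsm >= threshold_m

# Example usage: tall_mask = split_tall_short(ndsm)   # ndsm from preprocessing
```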

Classification of tall objects into buildings and trees: A multiresolution segmentation was run for the tall objects, using both the nDSM and the intensity layer from the first-return measurements. The multiresolution segmentation algorithm is initialized with each pixel previously classified as a tall object treated as a separate segment, which is then merged with neighboring segments based on their level of similarity in subsequent steps. The process stops when there are no more possible merges given a defined scale parameter. The scale parameter specifies the maximum heterogeneity that is allowed within each object, which indirectly controls the size of objects: the greater the scale parameter, the larger the average size of the objects. The user can also specify color and shape parameters to change the relative weighting of reflectance and shape in defining segments. In this letter, the scale parameter was set as 10 to conduct the segmentation at a very fine scale. The color criterion was given a weight of 0.9, while shape was assigned the remaining weight of 0.1, with equal weights given to compactness (i.e., 0.05) and smoothness. The scale parameter of 10 and the values for the color and shape parameters were determined by visual inspection of the image segmentation results, such that objects were considered to be internally homogeneous, i.e., all pixels within an image object belonged to one cover class [1]. Following the segmentation, tall objects were classified as buildings if the difference in intensity from the first and last returns was less than 1 and the standard deviation of the nDSM was less than 6 feet. In addition, tall objects that share boundaries with these previously classified buildings were further identified as buildings if: 1) the standard deviation of the nDSM
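To make the building rule above concrete, the sketch below applies the two stated criteria (first-minus-last return intensity difference below 1 and nDSM standard deviation below 6 ft) to a single segmented tall object; the per-object arrays, the use of per-object means for the intensity difference, and the function name are illustrative assumptions rather than the letter's eCognition rule set.

```python
# Sketch of the building rule for one tall object, assuming per-object pixel
# values have already been extracted from the segments.
import numpy as np

FT_TO_M = 0.3048

def is_building(ndsm_vals: np.ndarray,
                first_int: np.ndarray,
                last_int: np.ndarray) -> bool:
    """Apply the two Method 1 thresholds to one segmented tall object."""
    intensity_diff = float(np.mean(first_int) - np.mean(last_int))
    ndsm_std_ft = float(np.std(ndsm_vals)) / FT_TO_M
    # Buildings: first- and last-return intensities nearly identical (diff < 1)
    # and a flat roof surface (nDSM standard deviation < 6 ft).
    return abs(intensity_diff) < 1.0 and ndsm_std_ft < 6.0
```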


TABLE I
Error Matrix of the Four Classes, With Calculated Producer, User, and Overall Accuracy for Method 1

Classified data      Reference data                           Row Total   User Acc. (%)
                     Tree    Pavement   Grass   Building
Tree                   88        0         4        4             96          91.67
Pavement                1       52         4        0             57          91.23
Grass                   1        7        86        3             97          88.66
Building                4        0         0       46             50          92.00
Producer Acc. (%)    93.62    88.14     91.49    86.79

Overall accuracy: 90.67%; overall Kappa statistic: 0.872.

TABLE II
Error Matrix of the Four Classes, With Calculated Producer, User, and Overall Accuracy for Method 2

Classified data      Reference data                           Row Total   User Acc. (%)
                     Tree    Pavement   Grass   Building
Tree                   93        0         4        1             98          94.90
Pavement                0       56         2        3             61          91.80
Grass                   5        6        79        1             91          86.81
Building                2        0         0       48             50          96.00
Producer Acc. (%)    93.00    90.32     88.76    92.31

Overall accuracy: 92.00%; overall Kappa statistic: 0.883.
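For reference, the overall accuracy and Kappa statistic reported above follow directly from the error matrices; the sketch below reproduces the Method 1 values from the Table I matrix (rows are classified data, columns are reference data).

```python
# Sketch: overall accuracy, Kappa, and user's/producer's accuracies from an
# error matrix, using the Method 1 matrix of Table I as input.
import numpy as np

# Class order: tree, pavement, grass, building
confusion = np.array([[88,  0,  4,  4],
                      [ 1, 52,  4,  0],
                      [ 1,  7, 86,  3],
                      [ 4,  0,  0, 46]], dtype=float)

n = confusion.sum()
observed = np.trace(confusion) / n                          # overall accuracy
expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n**2
kappa = (observed - expected) / (1 - expected)

user_acc = np.diag(confusion) / confusion.sum(axis=1)       # per classified class
producer_acc = np.diag(confusion) / confusion.sum(axis=0)   # per reference class

print(f"overall accuracy = {observed:.4f}, kappa = {kappa:.3f}")
# -> overall accuracy = 0.9067, kappa = 0.872
```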

These accuracies were also far better than those of urban land cover classification using multispectral imagery alone [11]. One of the advantages of the LiDAR system over passive multispectral remote sensing is that LiDAR is an active remote sensing technology. Therefore, the LiDAR nDSM and intensity are not affected by the shadows that can affect a significant proportion of high spatial resolution imagery in urban areas [2].

The overall accuracy of the classification based on Method 1 was slightly lower than that from Method 2 (Table II). The accuracies based on Method 1 were also comparable to those of urban land cover classifications in which LiDAR data were integrated with multispectral imagery and existing GIS layers [4], [6]. These results suggested that, with an object-based classification approach, LiDAR data alone could potentially be a very useful and effective data source for accurate urban land cover mapping.

LiDAR data also have some limitations in urban land cover classification. Using LiDAR data in urban land cover classification generally requires processing the LiDAR point clouds into raster layers, which is relatively computationally intensive. More importantly, the process of interpolating LiDAR points into raster layers may introduce some uncertainty, which may affect the subsequent urban land cover classification. For example, the accuracy assessment and evaluation indicated that small pieces of pavement surrounded by grass (e.g., paved sidewalks) were commonly misclassified as grass. This is likely because the raster layers of intensity were generated by interpolating the LiDAR sampling points using the natural neighbor method, which tended to create a smooth surface. Similarly, tree canopies right next to buildings were sometimes misclassified as buildings, and the edges of buildings may be misclassified as trees. In addition, laser pulses generally cannot penetrate very dense tree canopies. Consequently, the use of the intensity-difference feature led to some misclassifications between buildings and trees.

    IV. Summary and Conclusion

This research investigated whether LiDAR data alone can effectively map detailed urban land cover, using an object-based classification approach. Our results indicated that, using an object-based classification approach, a combination of the LiDAR height and intensity data could accurately characterize and map urban land cover. The accuracy of the results was far better than that using multispectral imagery alone, and comparable to that obtained by integrating LiDAR data with multispectral imagery and existing GIS layers. The object-based approach provided a flexible and effective means of integrating LiDAR height and intensity information for urban land cover classification, and was superior to a pixel-based approach. Because the LiDAR nDSM is relatively consistent and stable across a heterogeneous urban landscape, and thus allows for automatic feature extraction over a large region using an object-based approach, the integration of LiDAR nDSM and intensity provides great potential for accurate large-scale mapping of detailed urban land cover.

    Acknowledgment

The author would like to thank the reviewers for their helpful comments. Many thanks are given to W. Yu and Dr. L. Han for their help with accuracy assessment and format editing.

    References

[1] W. Zhou and A. Troy, "An object-oriented approach for analysing and characterizing urban landscape at the parcel level," Int. J. Remote Sens., vol. 29, pp. 3119-3135, May 2008.

[2] W. Zhou, G. Huang, A. Troy, and M. L. Cadenasso, "Object-based land cover classification of shaded areas in high spatial resolution imagery of urban areas: A comparison study," Remote Sens. Environ., vol. 113, pp. 1769-1777, Aug. 2009.

[3] D. Lu, S. Hetrick, and E. Morgan, "Land cover classification in a complex urban-rural landscape with Quickbird imagery," Photogramm. Eng. Remote Sens., vol. 76, pp. 1159-1168, Oct. 2010.

[4] S. W. MacFaden, J. P. M. O'Neil-Dunne, A. R. Royar, J. W. T. Lu, and A. G. Rundle, "High-resolution tree canopy mapping for New York City using LIDAR and object-based image analysis," J. Appl. Remote Sens., vol. 6, pp. 1-23, Sep. 2012.

[5] K. Zhang, J. Yan, and S. C. Chen, "Automatic construction of building footprints from airborne LIDAR data," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 9, pp. 2523-2533, Sep. 2006.

[6] J. P. M. O'Neil-Dunne, S. W. MacFaden, A. R. Royar, and K. C. Pelletier, "An object-based system for LiDAR data fusion and feature extraction," Geocarto Int., pp. 1-16, Jun. 2012.

[7] J. Im, J. R. Jensen, and M. E. Hodgson, "Object-based land cover classification using high-posting-density LiDAR data," GISci. Remote Sens., vol. 45, pp. 209-228, Apr. 2008.

[8] A. S. Antonarakis, K. S. Richards, and J. Brasington, "Object-based land cover classification using airborne LiDAR," Remote Sens. Environ., vol. 112, pp. 2988-2998, Jun. 2008.

[9] A. Shaker and N. El-Ashmawy, "Land cover information extraction using LiDAR data," in Proc. XXII ISPRS Congr., Int. Archives Photogrammetry, Remote Sens. Spatial Inform. Sci., vol. XXXIX-B7, Melbourne, Australia, Aug.-Sep. 2012, p. 25.

[10] T. Blaschke, "Object based image analysis for remote sensing," ISPRS J. Photogrammetry Remote Sens., vol. 65, pp. 2-16, Jan. 2010.

[11] N. Thomas, C. Hendrix, and R. G. Congalton, "A comparison of urban mapping methods using high resolution digital imagery," Photogramm. Eng. Remote Sens., vol. 69, pp. 963-972, Sep. 2003.