Use of UAV oblique imaging for the detection of individual trees in residential environments

Yi Lin a,∗, Miao Jiang b, Yunjun Yao c, Lifu Zhang d, Jiayuan Lin e

a Institute of Remote Sensing and Geographic Information System, Beijing Key Lab of Spatial Information Integration and Its Applications, Peking University, 100871 Beijing, China
b Institute of Mineral Resources Research, China Metallurgical Geology Bureau, 100025 Beijing, China
c College of Global Change and Earth System Science, Beijing Normal University, 100875 Beijing, China
d Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, 101101 Beijing, China
e Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, 610041 Chengdu, China

∗ Corresponding author. Tel.: +86 1062751191. E-mail address: [email protected] (Y. Lin).

Urban Forestry & Urban Greening 14 (2015) 404–412
http://dx.doi.org/10.1016/j.ufug.2015.03.003
1618-8667/© 2015 Elsevier GmbH. All rights reserved.

Keywords: Aerial oblique imaging; Individual tree detection; Residential environment; Ultra high spatial resolution; Unmanned aerial vehicle

Abstract

Oblique imaging and unmanned aerial vehicles (UAV) are two state-of-the-art remote sensing (RS) techniques undergoing explosive development. While their synthesis opens more possibilities for applications such as urban forestry and urban greening, the related methods for data processing and information extraction, e.g. individual tree detection, are still in short supply. To help fill this technical gap, this study focused on developing a new method for the detection of individual trees in UAV oblique images. The proposed algorithm is composed of three steps: (1) classification based on k-means clustering and RGB-based vegetation index derivation to acquire vegetation cover maps, (2) suggestion of new feature parameters synthesizing texture and color parameters to identify vegetation distribution, and (3) individual tree detection based on marker-controlled watershed segmentation and shape analysis. Evaluations based on images of residential environments indicated that the commission and omission errors are less than 32% and 26%, respectively. The results have basically validated the proposed method.

© 2015 Elsevier GmbH. All rights reserved.

Introduction

Unmanned aerial vehicle (UAV) based remote sensing (RS), as one of the cutting-edge technologies in its field, is now in explosive development. With a number of advantages (e.g. operational flexibility) compared to established RS methods, various UAV-based RS methods aimed at different application scenarios have been proposed and validated (e.g. Zarco-Tejada et al., 2012; Vasuki et al., 2014; Lin et al., 2011; Getzin et al., 2012). UAV-based photography is one such promising technique. In general, even with ordinary cameras integrated, this technique can provide images with extremely high spatial resolutions, which can help in understanding earth surfaces and acquiring previously undetectable object detail information (Getzin et al., 2012). In addition, because UAV flying altitudes tend to be low, UAV-based photography is rarely affected by cloud cover, and flight campaigns can be more flexibly planned and manipulated (Rango et al., 2009). Hence, a large number of UAV-based imaging applications have been attempted in multiple fields, such as studying geological structures (Vasuki et al., 2014), forests (Getzin et al., 2012), rangelands (Rango et al., 2009), and agriculture (Torres-Sánchez et al., 2014).

At the same time, aerospace/aerial oblique imaging is another promising technology now highlighted by the RS community. This concept was first put forward in the 1930s, when the film-based Fairchild T-3A system was established for acquiring aerial oblique images (Talley, 1938). With Pictometry® re-launching the mission of oblique airborne photogrammetry in 2000 as the "landmark", this "traditionally sideline business to vertical photography" (Petrie, 2009) has become more and more popular, and its abilities for real uses have been increasingly stressed. Particularly during the past five years, it has attracted more attention. At the same time, aerial digital multi-camera systems have been released by many companies such as IGI, Leica, Blomoblique, Pictometry and Vexcel/Microsoft. An overview of the technical configurations of these diverse oblique-imaging systems was made by Petrie (2009). Their kernel techniques such as image acquisition and processing and their potential usages for object measurement,



3D modeling, and web service were summarized by Karbo and Schroth (2009).

The highlights on aerospace/aerial oblique imaging are rooted in its technical advantages. Oblique imaging can supplement the traditional vertical photography of a locality with a unique perspective view. Specifically, this technique can see each side of a building, structure or feature, and can expose blind spots, exits and entrances. It can also improve the identification of hard-to-see assets and facilities (e.g. lighting posts, telegraph poles, trees at avenues, etc.). It can even measure the height, length, and area of features directly from photography (if the data is correctly georeferenced) and map facades. Previously, it was hard to locate these features based on traditional vertical photography. These strengths have been verified in applications for generating virtual images of city areas (Tommaselli et al., 2013), mapping textures of 3D city models (Frueh et al., 2013), verifying buildings (Nyaruhuma et al., 2010), extracting buildings (Xiao et al., 2012) and even building footprints (Nyaruhuma et al., 2012; Nex et al., 2013). Compared to the almost 15-year re-stressing of oblique imaging and system development, the works on its information extraction are still relatively few. In fact, more applications with the potential of supplying various information are recommended by Fritsch and Rothermel (2013).

Combination of UAV and oblique imaging seems to be a sound way of improving the applicability of both to RS tasks. In addition to the above-mentioned merits, UAVs can loiter above the targets to acquire their oblique images from almost any arbitrary viewing angle. In light of these strengths, the number of cases applying their combination has increased in recent years. Attempts to improve their performance, e.g. via both relative and absolute calibrations of a multi-head camera system with oblique and nadir looking cameras for a UAV (Niemeyer et al., 2013), have gradually emerged. Besides the typical UAV-based application cases listed above, UAV multiple oblique images can also generate 3D point clouds, each point with accurate coordinates, which help characterize the structures of objects (Rosnell and Honkavaara, 2012) and simultaneously provide their texture information (Roca et al., 2013). Overall, these merits give UAV oblique photography advantages for object modeling.

However, in the applications of airborne oblique imagery, urban forestry and urban greening have been much less involved, and the reconstruction of vegetation such as trees is still a challenging task, because the appearance of trees may differ heavily between viewing angles (Fritsch and Rothermel, 2013). Although some works dedicated to developing solutions by generating point clouds from oblique images, e.g. by dense matching (Gerke, 2009), have appeared, practice shows that the introduction of UAVs has helped little so far. For instance, based on UAV oblique images, Wang et al. (2011) generated a good point cloud representation and reconstruction of the targeted 3D forest scene. However, it was found that at that stage of their work the density of the point cloud was still insufficient for extracting accurate information about tree structural parameters. In fact, technical progress on acquiring tree-level structural parameters from UAV oblique images has kept being reported. Although some of the feature parameters can be retrieved by statistical correlations, the results in terms of tree height turned out to be largely divergent from the real ones (Fritz et al., 2013; Zarco-Tejada et al., 2014).

Alternatively, extracting individual trees in UAV oblique imagery in advance may supply some a priori knowledge for their 3D structure reconstruction. Compared to the crown upper surfaces in traditional nadir photography, the isolated trees represented in UAV oblique imagery are conducive to supplying more attributes, e.g. the structural features of stems. Moreover, in the case of administrative management for, e.g., urban greening, it is unnecessary to acquire images under strictly controlled conditions. Thus, UAV attitude parameters are often unavailable or inaccurate in such situations. In fact, there are many application scenarios that just need tree detection, rather than complicated tree structure reconstruction involving 3D point generation (Gerke, 2009). Hence, developing methods for the detection of individual trees directly in UAV oblique imagery is of significance and also in demand. A variety of algorithms have been developed for the task of tree detection in various aerospace/aerial images (Hirschmugl et al., 2007; Wolf and Heipke, 2007; Yang et al., 2009; Ke and Quackenbush, 2011). With the progress of aerospace/aerial photography in terms of spatial resolution, algorithms aimed at so-called very high spatial resolution (VHR) images (with ∼0.5 m ground sampling distance, GSD) have been proposed (Hung et al., 2011; Jiang and Lin, 2013). In addition, this study can benefit urban forest management by fully making use of recordings of oblique images, which may have been previously collected in different scenarios for different objectives. These form a good foundation for pushing forward the study of extracting single trees in UAV oblique images.

However, the algorithms developed in most of the previous studies cannot be directly applied to UAV oblique images, since they were initially developed for nadir photos. In such photos trees behave with a mountain-like distribution of reflectance, but in UAV oblique images trees show varying scales and low similarity due to the wide baseline configuration and resultant changes in perspective (Fritsch and Rothermel, 2013). In other words, appropriate tree-detection algorithms are required to withstand the distortion of imaging and the inconsistency of imaging geometries for different pixels, namely, different tree sizes and different structural morphologies for different trees. Some applications involving the use of aerial oblique images have occurred, but they assumed the images either just for visual classification accuracy assessment (Moskal et al., 2011) or via photogrammetry-based 3D reconstruction for tree characterization (Sheng et al., 2001; Sperlich et al., 2014). Overall, almost no studies on the detection of individual trees directly in a UAV oblique image have been reported.

To address this gap, this study attempted to introduce UAV oblique imaging for tree investigation into urban forestry and urban greening, and was dedicated to developing new methods for the detection of individual trees in residential environments from UAV oblique RGB images. The next section presents the methodologies, including the UAV imaging system used, the test data, and the proposed algorithm and its evaluation. Then, the results are presented, and open issues and next-step works are discussed. Finally, conclusions are given.

Methods

Materials

UAV imaging system

The data collection was conducted with a UAV imaging system developed at the Finnish Geodetic Institute (Rosnell and Honkavaara, 2012). The system comprises a Microdrone md4-200 (Fig. 1), an electric battery-powered quadrocopter UAV (Microdrones GmbH, Germany, www.microdrones.com/index.php) able to carry a 300 g payload. This kind of UAV can take off from and land vertically on a small open area and has an onboard flight controller with a compass and inertial, gyroscopic, barometric and GPS sensors. The flight time with the current batteries is 10–20 min. The system is sensitive to wind, and a wind speed lower than 4 m/s is required to obtain a controlled image block. Wind speeds over 4 m/s tilt the UAV drastically and lead to large pitch and roll angles.

The md4-200 was equipped with a Ricoh GR Digital III (Ricoh GR3) digital compact camera (Ricoh Company, Ltd.).

Fig. 1. The assumed UAV imaging system composed of a Microdrone md4-200 and a Ricoh GR3 compact camera.

The camera has a lens with a 6 mm fixed focal length and f-stop values of 1.9–9 (Rosnell and Honkavaara, 2012). The camera holds a 7.6 mm × 5.7 mm (3648 × 2736 pixels) CCD sensor with a pixel size of 2 μm. The weight of the camera is 180 g, excluding the battery and the memory card. The raw images are stored and developed into RGB images using the freeware DCRAW software (Sperlich et al., 2014). A radiometric processing line for the md4-200 was developed by Chitade and Katiyar (2010).

Data collection

The UAV oblique photographs for the test were collected in the Sundsberg district, Southern Finland (60°25′ N, 24°91′ E), a typical residential area. The flight campaign was deployed on 12 August 2011, without strictly planned flight routes. The acquisition height is around 40 m above the ground, and the angles of the oblique images were arbitrary. The original images, used directly for the test without pre-processing such as distortion correction, are illustrated in thumbnail form in Fig. 2 (primarily <0.2 m GSD). There are roads, lighting poles, trees, cars, lawns, shrubs, and buildings, all components common in residential environments. The test scene is mainly composed of red roofs, green vegetation and gray roads, and their ratios vary between plots and even between viewing angles. Generally, vegetation is un-ignorable, which is a feature of livable environments such as the test site. The trees, mainly boreal birch and pine species, are of mid-range height (4–5 m). Their viewing angles span from ∼90° to ∼35° in the collected oblique images. Even within a single image the viewing angles for each individual tree differ, which makes it feasible to validate the proposed algorithm based on only a limited number of the collected images.

Algorithm development

Imaging characteristics analysis

In traditional works on tree detection, the three-dimensional views of high spatial resolution images over high- or moderate-density forested areas were often described as mountainous spatial structures. For trees with such conical structures, bright peaks in the images correspond to treetops, generally with higher levels of solar illumination. Accordingly, the problem of detecting tree tops can be deemed a problem of finding bright peaks in images, i.e. finding the pixels with the maximum brightness values among their surrounding pixels (Ke and Quackenbush, 2011). For VHR images, most of the algorithms aimed at tree detection are still rooted in this principle (Jiang and Lin, 2013).

However, trees in UAV oblique images of residential environments present distinctive features. First, in such images oblique views make tree tops deviate from their centers, and tree tops may not be the brightest parts. Second, the ultra high spatial resolutions (UHR, GSD <0.5 m) resulting from UAV low-altitude flights make tree crowns show large reflectance variations, and it is inappropriate to treat them as the traditionally assumed mountainous spatial structures. As regards the shadow feature defined by Hung et al. (2011), shadows in UAV oblique images also behave inconsistently in terms of colors and shapes. Third, in UAV oblique images the backgrounds of individual trees tend to vary to a large degree: individual trees may appear against backgrounds of roads, bare earth, lawns, lighting poles, walls, or mixes thereof. In other words, the development of new algorithms appropriate for tree detection in UAV oblique imagery needs to take all of the characteristics mentioned above into account.

Algorithm procedures

The schematic workflow of the newly proposed algorithm, with UAV oblique imagery as its input, is briefly demonstrated in Fig. 3, in terms of its procedural goals (right column) and specific implementations (left column).

Step 1: Land Cover Classification

(1) Color-based image segmentation using k-means clustering: Residential environments often mean complicated color distributions in UAV oblique images, which is challenging for the detection of individual trees. To overcome this problem, land cover classification is first carried out, which helps restrict the range for tree detection efficiently. Given that an RGB digital camera is assumed for data collection, this work adopted the algorithm of color-based image segmentation using k-means clustering (Chitade and Katiyar, 2010). First, the target image is converted from the RGB color space to the L*a*b* color space, a color-opponent space with dimension L for lightness and a and b for the color-opponent dimensions (Chitade and Katiyar, 2010). Next, the colors in the L*a*b* space are classified using k-means clustering. k-Means is a clustering algorithm used to determine the natural spectral groupings present in a data set. The number of clusters to be located in the data is given in advance; here, it was set according to the dominant object categories. In fact, since the later operation of vegetation identification can to some extent judge which of the resulting clusters belong to trees, it is not necessary to set the number accurately beforehand. The algorithm then arbitrarily seeds that number of cluster centers in the multidimensional measurement space. Each pixel in the image is assigned to the cluster whose mean vector is closest. The procedure continues until there is no significant change in the location of class mean vectors between successive iterations (Seber, 1984). Finally, every pixel in the image is labeled using the results of the k-means clustering, and in this way the image is segmented.
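As a rough sketch of this clustering step (not the authors' code), the iterative assign-and-update loop might look like the following. For brevity the sketch clusters RGB values directly rather than converting to L*a*b* first, and all names (`kmeans_pixels`, the synthetic `img`) are illustrative:

```python
import numpy as np

def kmeans_pixels(pixels, k, iters=20, seed=0):
    """Minimal k-means over an (N, 3) array of pixel colors."""
    rng = np.random.default_rng(seed)
    # Seed cluster centers on k distinct pixels.
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster emptied.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

# Tiny synthetic "image": green-ish vegetation vs. red-ish roof pixels.
img = np.array([[30, 120, 40], [35, 110, 45],
                [180, 40, 35], [175, 50, 30]], dtype=float)
labels, centers = kmeans_pixels(img, k=2)
```

In the real pipeline each pixel would then be labeled with its cluster index, producing the segmented image described above.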

(2) Image classification using new vegetation indices derived from RGB: The segmentation results alone do not reveal which type of object each segment corresponds to. The vegetation segment set comprising trees needs to be identified. For this issue, the traditional concept of vegetation indices gives some inspiration on how to conceive possible solutions. As is well known, the normalized difference vegetation index (NDVI) is often assumed for vegetation recognition (Liang, 2004). Although the RGB-related color channels of the Ricoh GR3 camera do not quite coincide with the spectral bands commonly used in NDVI, an NDVI-like vegetation index based on the R and B bands of the camera (NrbVI) can be composed for vegetation recognition, following a similar schematic plan as earlier attempted by Gitelson et al. (2002). As indicated in Formula (1), the pixels with NrbVI values larger
Fig. 2. Illustrations of the collected UAV oblique images from different viewing angles.

than 0.1 can be marked as vegetation types, which follows the common rules in the NDVI applications (Liang, 2004).

$$\mathrm{NrbVI} = \frac{P_R - P_B}{P_R + P_B} > 0.1 \qquad (1)$$

Note that the extracted pixels need to be further screened to reduce the false ones introduced by color-channel inconsistency. Given that inconsistent inputs tend to produce extreme outputs, but only in small numbers, the screening is based on statistics. Specifically, the R, G and B values are individually analyzed in terms of their histograms, and their probabilistic distributions are fitted using Gaussian functions, yielding for each channel a mean u and standard deviation σ. The pixels lying outside the range of u + 3σ are then excluded. In theory, this action can increase the ratio of color-channel consistency, so that vegetation cover can be identified with higher accuracy.
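The NrbVI thresholding of Formula (1) and the u + 3σ screening can be sketched as below. This is an illustrative reading of the text, not the authors' implementation; function names and the synthetic test image are assumptions:

```python
import numpy as np

def nrbvi_mask(rgb, thresh=0.1):
    """Formula (1): NDVI-like index from the R and B channels.
    rgb: float array of shape (H, W, 3)."""
    r, b = rgb[..., 0], rgb[..., 2]
    nrbvi = (r - b) / np.maximum(r + b, 1e-6)  # guard against division by zero
    return nrbvi > thresh

def screen_outliers(rgb, mask, n_sigma=3.0):
    """Drop candidate pixels whose value in any channel exceeds u + 3*sigma,
    with u and sigma estimated per channel from the whole image."""
    keep = mask.copy()
    for c in range(3):
        ch = rgb[..., c]
        u, s = ch.mean(), ch.std()
        keep &= ch <= u + n_sigma * s
    return keep

# Synthetic patch: red channel dominates blue, so NrbVI ≈ 0.33 everywhere.
rgb = np.zeros((4, 4, 3))
rgb[..., 0] = 100.0  # red channel
rgb[..., 2] = 50.0   # blue channel
veg = screen_outliers(rgb, nrbvi_mask(rgb))
```

A pixel with blue exceeding red yields a negative index and is rejected, which matches the intent of the R–B contrast.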

(3) Vegetation cover map generation by synthesizing the segmentation and classification results: It is hard to guarantee that a set of segments of the same type exists that fully overlaps the set of pixels classified as vegetation. The solution is to quantify their overlapping degrees using logical AND operations. After overlapping-pixel statistics and comparison, the set of segments showing the largest degree of overlap is deemed the vegetation cover map.
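The logical-AND synthesis of step (3) amounts to counting overlapping pixels per cluster and keeping the winner. A minimal sketch, with illustrative names and a toy label image:

```python
import numpy as np

def pick_vegetation_segment(seg_labels, veg_mask):
    """Choose the k-means segment set that overlaps most with the
    NrbVI vegetation pixels (logical AND + pixel counting)."""
    best_label, best_overlap = None, -1
    for lab in np.unique(seg_labels):
        overlap = np.logical_and(seg_labels == lab, veg_mask).sum()
        if overlap > best_overlap:
            best_label, best_overlap = lab, overlap
    return seg_labels == best_label  # the vegetation cover map

# Toy case: three segments; NrbVI fires on the right half of the image.
seg = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 1, 1],
                [2, 2, 1, 1]])
veg = np.zeros((4, 4), dtype=bool)
veg[:, 2:] = True
cover = pick_vegetation_segment(seg, veg)
```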

Step 2: Vegetation Characterization Transformation

A vegetation cover map can effectively narrow the search area for individual tree detection, but still cannot directly give the isolated trees. In addition, the similarity of trees, shrubs and lawns in

Fig. 3. The schematic workflow of the proposed algorithm, with the procedural tasks and the kernel implementation methods: UAV oblique image → color-based image segmentation using k-means clustering and NrbVI → vegetation cover map → vegetation characterization transform → vegetation characterization map → watershed-based image segmentation and shape analysis → individual tree.

colors makes it hard to distinguish them. As illustrated in Fig. 4, in terms of brightness trees may be darker than, similar to, or brighter than lawns. So, relying purely on RGB values is not enough for tree, shrub and lawn differentiation. To distinguish different vegetation categories, new parameters capable of consistently characterizing their features need to be devised.

In fact, it can be learned from Fig. 2 that, in terms of texture, tree crowns are obviously different from lawns. The intricately varying layouts of branches and leaves make tree crowns demonstrate higher color variations than lawns, which tend to present relatively high reflectance on their relatively smooth surfaces, particularly in an oblique view. Of course, brightness is still assumed as an important indicator of vegetation categories; after all, in most cases trees present different colors compared to lawns. In addition, in case trees and lawns show the same textures, color can perhaps help.

Thus, a new parameter, listed in Eq. (2), was proposed by synthesizing the feature parameters of texture and RGB brightness for vegetation characterization transformation. For each pixel within the vegetation cover map, its value is replaced by the result of Eq. (2). Specifically, a round window with radius R is deployed on the vegetation cover map, with its center set on each pixel. For each such pixel P_ij, the standard deviation (std) and the mean of the R, G and B values of the pixels {P_mn} within the window ‖P_mn − P_ij‖ ≤ R are calculated. Then, the value of the pixel is replaced by the sum of their individual ratios, P^new_ij. As the round window is used just for the examination of the texture feature, R can be determined by comparing experience-derived tree sizes, or the average sizes of tree samples, to the average GSD.

Fig. 4. Illustrations of the brightness of trees and lawns (for Fig. 2a): (a) tree darker than lawn, (b) tree similar with lawn, and (c) tree brighter than lawn.

The new vegetation characterization map featured by this new variable will be used for individual tree detection.

$$P^{\mathrm{new}}_{ij} = \sum_{R,G,B} \frac{\mathrm{std}\left(\{P_{mn}\} \mid \|P_{mn} - P_{ij}\| \le R\right)}{\mathrm{mean}\left(\{P_{mn}\} \mid \|P_{mn} - P_{ij}\| \le R\right)} \qquad (2)$$
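Eq. (2) can be computed efficiently with box filters, since the local std follows from the local means of x and x². The sketch below substitutes a square (2R+1)-pixel window for the paper's round window and uses illustrative names; it is an approximation of the described transform, not the authors' code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def characterization_map(rgb, R=15):
    """Eq. (2): per-pixel sum over R, G, B of local std / local mean.
    A square (2R+1)-pixel window stands in for the paper's round window."""
    out = np.zeros(rgb.shape[:2])
    size = 2 * R + 1
    for c in range(3):
        ch = rgb[..., c].astype(float)
        m = uniform_filter(ch, size)        # local mean
        m2 = uniform_filter(ch * ch, size)  # local mean of squares
        std = np.sqrt(np.maximum(m2 - m * m, 0.0))
        out += std / np.maximum(m, 1e-6)
    return out

# Smooth "lawn" on the left half, noisy "crown" texture on the right half.
rng = np.random.default_rng(1)
flat = np.full((20, 40, 3), 100.0)
flat[:, 20:, :] += rng.normal(0.0, 20.0, size=(20, 20, 3))
cmap = characterization_map(flat, R=3)
```

Textured regions (tree crowns) get high values while smooth bright regions (lawns) stay near zero, which is exactly the contrast the transform is meant to create.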

Step 3: Individual Tree Detection

(1) Extraction of individual trees using marker-controlled watershed segmentation: Intuitively, the vegetation characterization maps are conducive to extracting individual trees, since in them trees and shrubs are relatively brighter than lawns. This differs from the scenario shown in the vegetation cover maps (Fig. 4). Based on this phenomenon, the tree-detection scheme applied segment-center-marked image segmentation. The specific method is marker-controlled watershed segmentation (WS), which has been emphasized in recent efforts on tree delineation in high-resolution imagery (Lamar et al., 2005). The principle of watershed segmentation is, briefly, to consider a grayscale image as a topographic surface whose altitudes are inversely toned by grayness, and to gradually flood it. As the water level rises, the enclosed watershed lines form the boundaries of the segments (termed WS-segments hereafter), i.e. the individual trees here. The WS method can solve the problem of crown merging to some extent. The segments then need to be identified as individual trees, and those containing no sought tree locations need to be excluded.
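The flooding idea can be sketched as a priority flood: markers grow outward, always expanding from the lowest "altitude" first, so region boundaries settle on ridges. This is a minimal stand-in for marker-controlled watershed (illustrative names; a production system would use a library implementation):

```python
import heapq
import numpy as np

def marker_watershed(altitude, markers):
    """Minimal marker-controlled watershed by priority flooding.
    altitude: 2-D float array (low = basin bottom); markers: int array,
    0 = unlabeled, >0 = seed labels. Returns a label image."""
    labels = markers.copy()
    h, w = altitude.shape
    heap = []
    for i, j in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (altitude[i, j], i, j))
    while heap:
        _, i, j = heapq.heappop(heap)  # lowest altitude floods first
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] == 0:
                labels[ni, nj] = labels[i, j]  # flood into the neighbor
                heapq.heappush(heap, (altitude[ni, nj], ni, nj))
    return labels

# Two basins separated by a ridge along the middle column; one seed each.
altitude = np.zeros((5, 5))
altitude[:, 2] = 10.0
markers = np.zeros((5, 5), dtype=int)
markers[2, 0], markers[2, 4] = 1, 2
labels = marker_watershed(altitude, markers)
```

For the brightness maps of step 2, the altitude would be the inverted characterization map, with markers seeded at its bright local maxima (the candidate tree centers).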

(2) Individual tree identification based on WS-segment shape analysis: The resulting WS-segment sets are not equivalent to the real individual trees. The sets may comprise disturbing shrubs, since these behave similarly in the procedures of land cover classification and characterization transformation. However, shrubs still present some special features, e.g. integral shapes different from individual trees, due to being planted in lines for landscaping in residential environments. The method assumed in this study is to analyze the shapes of the WS-segment sets. Principal component analysis (PCA) (Krzanowski, 1988) is first applied to calculate the long axis and its perpendicular axis, and their ratios are compared with thresholds to exclude shrubs and lawns so as to detect individual trees. Here, the thresholds can be valued by extracting a limited number of tree samples and compiling statistics on their long and cross axes. The boundaries of the extracted trees are also investigated by checking whether there are linear parts; if there are, the extracted trees are excluded. This judgment is also based on the sample-and-statistics plan. With these criteria satisfied, trees in UAV oblique images can finally be identified.
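The PCA axis-ratio test can be sketched from the eigenvalues of the covariance of a segment's pixel coordinates. The threshold value below is a placeholder for the sample-derived one described in the text, and all names are illustrative:

```python
import numpy as np

def axis_ratio(segment_mask):
    """Long-axis / cross-axis ratio of a segment via PCA of its pixel
    coordinates; elongated (line-planted) shrub rows give large ratios."""
    ys, xs = np.nonzero(segment_mask)
    coords = np.stack([ys, xs], axis=1).astype(float)
    cov = np.cov(coords, rowvar=False)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending variances
    return np.sqrt(evals[0] / max(evals[1], 1e-9))

def looks_like_tree(segment_mask, max_ratio=2.0):
    """Keep roughly round segments; the threshold stands in for the
    sample-and-statistics value described in the text."""
    return axis_ratio(segment_mask) <= max_ratio

# A compact crown-like blob vs. an elongated hedge-like strip.
blob = np.zeros((10, 10), dtype=bool)
blob[2:8, 2:8] = True
strip = np.zeros((10, 30), dtype=bool)
strip[4:6, 2:22] = True
```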

Performance evaluation

Performance evaluation is implemented by comparing the detected results with the ground truths. The locations and


outlines of the existing individual trees are acquired by visual interpretation of the for-test UAV oblique images and the synchronous mobile laser scanning point clouds (Lin et al., 2013) as the reference data. Note that the accuracies of the segmented tree crowns overlapping the real ones are not involved in this study. If a resulting tree location lies within the outline of a reference tree, that tree is deemed correctly detected. The performance of the proposed tree detection algorithm aimed at UAV oblique imagery is assessed in terms of commission and omission errors. The commission error is the ratio of segments wrongly identified as trees to all of the extracted segments, and the omission error is the ratio of trees missed in detection to the ground-truth trees. These two variables reflect the performance of the proposed algorithm.
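The two error measures reduce to simple counting once detections are matched to reference outlines. A sketch of that bookkeeping, with inside-tests standing in for the digitized outlines (all names are illustrative):

```python
def detection_errors(detected_locs, reference_outlines):
    """Commission error: wrongly detected segments / all detected;
    omission error: missed reference trees / all reference trees.
    reference_outlines: list of callables (x, y) -> bool, standing in
    for point-in-outline tests against the digitized tree outlines."""
    matched_refs = set()
    false_pos = 0
    for x, y in detected_locs:
        hits = [k for k, inside in enumerate(reference_outlines) if inside(x, y)]
        if hits:
            matched_refs.update(hits)
        else:
            false_pos += 1  # detection falls outside every reference outline
    commission = false_pos / len(detected_locs) if detected_locs else 0.0
    omission = 1 - len(matched_refs) / len(reference_outlines)
    return commission, omission

# Two reference trees (unit circles); two true hits and one false alarm.
refs = [lambda x, y: x * x + y * y <= 1.0,
        lambda x, y: (x - 5) ** 2 + (y - 5) ** 2 <= 1.0]
detected = [(0.0, 0.0), (5.0, 5.0), (10.0, 10.0)]
commission, omission = detection_errors(detected, refs)
```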

Results and discussions

Results

Vegetation cover map

With the number of clusters prescribed as 3, the results of (1) color-based image segmentation using k-means clustering are illustrated in Fig. 5a. The segmentation yielded three types of land cover, i.e. green marking vegetation, red marking building roofs, and blue marking others. Next, the results of (2) image classification using the new vegetation indices (NrbVI) derived from RGB are illustrated in Fig. 5b. For this case, the means μ and standard deviations σ derived from the original images for valuing R, G, and B in Eq. (2) are 63 and 21, 77 and 26, and 7 and 21, respectively. Although not all of the vegetation has been extracted, the extracted pixels mostly belong to vegetation. This ensures the effectiveness of the next operation. Then, after synthesizing the segmentation and classification results, the positive results of (3) vegetation cover map generation, as illustrated in Fig. 5c, prove this point. The following processing was deployed on these resulting vegetation cover maps.
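Step (1) can be illustrated with a bare-bones Lloyd's k-means over RGB pixel vectors. This is a sketch, not the authors' implementation; a simple deterministic initialization (evenly spaced samples) is used here for reproducibility:

```python
import numpy as np

def kmeans_rgb(pixels, k=3, iters=20):
    """Lloyd's k-means on an (N, 3) array of RGB pixels.

    Returns (labels, centers); k=3 mirrors the paper's
    vegetation / building-roof / other split.
    """
    pixels = np.asarray(pixels, dtype=float)
    # deterministic initialization: evenly spaced sample pixels
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].copy()
    for _ in range(iters):
        # distance of every pixel to every center, then nearest-center label
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # leave empty clusters at their old center
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

In practice each cluster is then mapped to a land-cover class (vegetation, roofs, others) by inspecting its center color.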

Vegetation characterization map

The result of step 2, with the window radius R set to 15 pixels, is illustrated in Fig. 6a. It can be seen that in the resulting vegetation characterization map trees are relatively brighter than lawns, compared to the scenario in the corresponding grayscale image (shown in Fig. 6b) directly converted from the RGB color image. In fact, some trees that are hard to recognize by visual interpretation in the RGB images show up clearly as shining spots after the NrbVI transformation. Some small trees that are similarly obscure in the RGB images show up with clear boundaries as well. This is conducive to distinguishing individual trees.
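The exact synthesis of texture and color parameters is defined earlier in the paper. As a hedged stand-in, the following sketch shows how a sliding-window texture measure (here simply the local standard deviation of an index image, with `radius` playing the role of the window radius R) brightens high-variation tree crowns relative to smooth lawns:

```python
import numpy as np

def characterization_map(index_img, radius=15):
    """Per-pixel local standard deviation of a vegetation-index image.

    A plausible stand-in for the paper's texture/color synthesis: tree
    crowns, whose index values vary strongly within a window, score
    higher than the smoother lawns. Brute-force windowing for clarity.
    """
    h, w = index_img.shape
    out = np.zeros((h, w), dtype=float)
    for r in range(h):
        for c in range(w):
            win = index_img[max(0, r - radius):r + radius + 1,
                            max(0, c - radius):c + radius + 1]
            out[r, c] = win.std()
    return out
```

On such a map, thresholding or local-maximum detection can separate textured crowns from flat lawn areas.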

Individual tree detection

The results of (1) extraction of individual trees using marker-controlled watershed segmentation are illustrated in Fig. 7. After shape analysis (via thresholding the ratios between the PCA-determined long axes and their perpendicular axes and excluding the trees with linear parts of boundaries), the shrubs and local lawns behaving similarly to trees can be excluded to some extent.

Table 1
Performance assessment of the proposed algorithm.

          Ground-truth trees   Detected trees   Correctly detected trees   Commission error (%)   Omission error (%)
Fig. 2a   54                   46               37                         31.48                  19.57
Fig. 2b   33                   31               23                         30.31                  25.81

Fig. 5. Illustration of (a) the result of k-means clustering based image segmentation (green: vegetation, red: roofs, blue: others), (b) the result of NrbVI-based vegetation pixels extraction, and (c) the result of vegetation cover map generation (for Fig. 2a). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Then, 54 and 33 trees of relatively complete and distinguishable shapes were determined as the ground-truths in Fig. 2a and b, respectively, and the performance of the proposed algorithm was assessed in terms of the commission and omission errors. The results listed in Table 1 are positive and can basically verify the efficacy of the new algorithm.
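Marker-controlled watershed segmentation is available in standard image-processing libraries. The following dependency-light sketch approximates only the marker-assignment idea, labeling each vegetation pixel with its nearest marker; this is a simplification for illustration, not the watershed transform the paper uses:

```python
import numpy as np
from scipy import ndimage as ndi

def segment_by_markers(markers, vegetation_mask):
    """Assign every vegetation pixel to its nearest labeled marker.

    markers: int array, 0 = unlabeled, 1..n = seed regions (e.g. local
    brightness maxima of the characterization map).
    vegetation_mask: bool array from the vegetation cover map.
    Nearest-seed assignment is a crude stand-in for flooding from
    markers in marker-controlled watershed segmentation.
    """
    # indices of the nearest marker pixel for every image position
    _, (ri, ci) = ndi.distance_transform_edt(markers == 0,
                                             return_indices=True)
    labels = markers[ri, ci]
    labels[~vegetation_mask] = 0  # keep only vegetation pixels
    return labels
```

Each resulting labeled region corresponds to one candidate tree segment, which then passes to the PCA-based shape analysis.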

Application extension

The proposed method was also deployed on another test site in order to validate its adaptability. The oblique images illustrated in Fig. 8a were collected using another Microdrone md4-200 UAV platform in front of the Welfenschloss building, Hannover, Germany, i.e. a residential scenario. The specifications of the UAV oblique imaging system and the flight campaign are given in Reich et al. (2012). The image shows building roofs with colors different from those of the buildings shown in Fig. 2, and even contains van roofs whose green colors are similar to those of trees. Some trees are located in the shadows of the buildings. All of these factors make it more difficult to extract the individual trees. The results of applying the procedures of the proposed approach are displayed in Fig. 8b–f. The target trees are primarily detected; meanwhile, the extracted mailbox can explain the high errors in Table 1.

In the results there are three issues, which also suggest the difficulty of single tree detection in UAV oblique images of residential environments. The first is that disturbing objects such as the climbing plants on the walls were readily mistaken for trees. The second is that a tree with a small cover in the image, due to its far distance from the UAV camera in oblique imaging or partial shading by a building, was missed (as shown by the upper-left corner of Fig. 6a). The third is that a single tree with a large cover in the image due to a too-low flight altitude, namely, with large reflectance variations, was extracted as multiple smaller trees (as illustrated by the lower-right corner of Fig. 8f). For these issues, the proposed method incorporates a manual checking operation to exclude the resulting errors, and this needs to be improved in the next-step work. On the other hand, the results involving the second issue also suggest the feasibility of the proposed method for the detection of single trees in natural environments, in which the canopy surface reflectance features of the standing trees are similar to those of the single tree of large cover shown in the lower-right corner of Fig. 8a.

Fig. 6. Illustration of (a) the resulting vegetation characterization map compared to (b) the grayscale image directly converted from the corresponding RGB color image (for Fig. 2a).

Fig. 7. Illustration of the results of WS-based individual tree segmentation: (a) for Fig. 2a, and (b) for Fig. 2b.

Discussions

From the perspective of algorithm development, this study can be recognized as a pioneering work for the scenario of tree detection in UAV oblique imagery, when compared with the progress of algorithm development for tree detection in aerospace/aerial vertical photographs (Hirschmugl et al., 2007; Wolf and Heipke, 2007; Yang et al., 2009; Hung et al., 2011; Ke and Quackenbush, 2011; Jiang and Lin, 2013). In addition, trees in vertical photos, particularly orthophotos, tend to display consistent appearance features, and this renders the same algorithms applicable over large covers. In contrast, different trees in residential environments may be quite distinctive in appearance, and it is sometimes hard to extract consistent features. This can explain why the correctness of the detection has not reached a perfect level.

For the follow-up improvements of the proposed algorithm, how to accurately determine the parameters such as the cluster number and window radius perhaps needs to be explored. In fact, it is hard to ensure that pre-specified parameters work well within each oblique image; after all, objects of the same size may display quite differently even within a single oblique image. For this issue, a plot-based schematic plan, for instance, can be applied. That is, the targeted oblique image is segmented into plots, within which the parameters can be determined relatively consistently. In addition, if UAV attitudes and locations are available, the parameters can also be quantified in a range-adapting way.

Detection of individual trees in UAV oblique images has many applications. Besides helping users get an intuitive grasp of appearances, isolated trees in the UHR and oblique presentations, with trunks being visible, can also yield accurate structure and geometry parameters, such as the diameter at breast height (DBH). The architectural models of crowns and the layout modes of branches can also be learned in some cases. The isolated trees can help estimate the penetrability of light in tree-level canopies, which is helpful for understanding the radiative transfer characteristics of trees. In fact, the test images were synchronously collected with mobile laser scanning (MLS) data in the same residential environments (Lin et al., 2013). Further, the algorithm proposed in this study can help synthesize the UHR oblique image and the MLS point cloud of each single tree to implement its 4D reconstruction (i.e. 3D structure plus color appearance).

The algorithm also needs to be continuously improved to adapt to different scenarios. When regarding the individual trees in scenarios such as natural forests, various spatial distribution patterns may be met. Crowns may be partially imaged, and stems may be fully shaded. This differs from the scenario of this study, deployed in residential environments, where the distributions of individual trees are relatively isolated and tree stems are often visible in UAV oblique images. Moreover, the distinction of forest images into trees and background objects is also a challenging task due to high variations in illumination, foreshortening, color shades, shadows, and non-homogeneous bark texture. Thereby, the algorithm proposed in this study needs to be adapted when applied to dense forests.

In fact, the technique of oblique imaging itself presents many challenges. As noted by Gruber and Walcher (2013), new designs keep being needed in order to ensure that aerial/aerospace oblique imaging systems achieve high geometric performance and image quality. As indicated by Rupnik et al. (2014), techniques for oblique image post-processing are also needed and remain an open research issue. The possible works include dense image matching and point cloud generation, and the open issues include pursuing minimum image overlap, optimizing the number of referencing images, and investigating illumination and scale changes between overlapping images. Along with the solution of these issues and challenges in UAV and oblique remote sensing, the methods for the automatic detection of individual trees will be improved as well.


Fig. 8. Another application case of the proposed method: (a) the UAV oblique image for illustration, (b) the result of k-means clustering based image segmentation, (c) the result of NrbVI-based vegetation pixels extraction, (d) the result of vegetation cover map generation, (e) the resulting vegetation characterization map, and (f) the result of WS-based individual tree segmentation. Note that all of the color-marking rules are the same as the above. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Conclusion

The positive results of the test basically validated the algorithm developed for the detection of individual trees directly in UAV-collected individual oblique images of residential environments. The proposed new feature parameter, which synthesizes the common feature parameters of texture and RGB brightness, displayed the capability of exposing trees. Overall, this work helps fill the technical gap between the vigorous progress of UAV oblique imaging and its limited applications on object information extraction, such as for urban forestry and urban greening.

References

Chitade, A., Katiyar, S., 2010. Color based image segmentation using k-means clustering. Int. J. Eng. Sci. Technol. 2, 5319–5325.

Fritsch, D., Rothermel, M., 2013. Oblique image data processing – potential, experiences and recommendation. In: IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013, pp. 586–593.

Fritz, A., Kattenborn, T., Koch, B., 2013. UAV-based photogrammetric point clouds – tree stem mapping in open stands in comparison to terrestrial laser scanner point clouds. Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci. XL-1/W2, 141–146.

Frueh, C., Sammon, R., Zakhor, A., 2013. Automated texture mapping of 3D city models with oblique aerial imagery. ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci. II-3/W3, 61–66.

Gerke, M., 2009. Dense matching in high resolution oblique airborne images. Int. Arch. Photogram. Remote Sens. 38-3/W4, 77–82.

Getzin, S., Wiegand, K., Schöning, I., 2012. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 3, 397–404.

Gitelson, A.A., Kaufman, Y.J., Stark, R., Rundquista, D., 2002. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 80, 76–87.

Gruber, M., Walcher, W., 2013. Oblique image collection – challenges and solutions. In: Photogrammetric Week '13, Stuttgart, Germany, 7–11 September 2013, pp. 111–115.

Hirschmugl, M., Ofner, M., Raggam, J., Schardt, M., 2007. Single tree detection in very high resolution remote sensing data. Remote Sens. Environ. 110, 533–544.

Hung, C., Bryson, M., Sukkarieh, S., 2011. Vision-based shadow-aided tree crown detection and classification algorithm using imagery from an unmanned airborne vehicle. In: 34th International Symposium for Remote Sensing of the Environment, Sydney, Australia, 10–15 April 2011.

Jiang, M., Lin, Y., 2013. Individual deciduous tree recognition in leaf-off aerial ultrahigh spatial resolution remotely sensed imagery. IEEE Geosci. Remote Sens. Lett. 10, 38–42.

Karbo, N., Schroth, R., 2009. Oblique aerial photography: a status review. In: Photogrammetric Week '09, Stuttgart, Germany, 7–11 September 2009, pp. 119–125.

Ke, Y., Quackenbush, L.J., 2011. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing. Int. J. Remote Sens. 32, 4725–4747.

Krzanowski, W.J., 1988. Principles of Multivariate Analysis. Oxford University Press, London.

Lamar, R.W., McGraw, J., Warner, T.A., 2005. Multitemporal censusing of a population of eastern hemlock from remotely sensed imagery using an automated segmentation and reconciliation procedure. Remote Sens. Environ. 94, 133–143.

Liang, S.L., 2004. Quantitative Remote Sensing of Land Surfaces. John Wiley & Sons, Inc., Hoboken, NJ.

Lin, Y., Hyyppä, J., Jaakkola, A., 2011. Mini-UAV-borne LiDAR for fine-scale mapping. IEEE Geosci. Remote Sens. Lett. 8, 426–430.

Lin, Y., Hyyppa, J., Rosnell, T., Jaakkola, A., Honkavaara, E., 2013. Development of a UAV-MMS-collaborative aerial-to-ground remote sensing system – a preparatory field validation. IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 6 (4), 1893–1898.

Moskal, L.M., Styers, D.M., Halabisky, M., 2011. Monitoring urban tree cover using object-based image analysis and public domain remotely sensed data. Remote Sens. 3, 2243–2262.

Nex, F., Rupnik, E., Remondino, F., 2013. Building footprints extraction from oblique imagery. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. II-3/W3, 61–66.

Niemeyer, F., Schima, R., Grenzdörffer, G., 2013. Relative and absolute calibration of a multihead camera system with oblique and nadir looking cameras for a UAS. Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci. XL-1/W2, 287–291.

Nyaruhuma, A.P., Gerke, M., Vosselman, G., 2010. Evidence of walls in oblique images for automatic verification of buildings. Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci. 38-3A, 263–268.

Nyaruhuma, A.P., Gerke, M., Vosselman, G., Mtalo, E.G., 2012. Building footprints extraction from oblique imagery. ISPRS J. Photogram. Remote Sens. 71, 62–75.

Petrie, G., 2009. Systematic oblique aerial photography using multiple digital cameras. Photogram. Eng. Remote Sens. (2), 586–593.

Rango, A., Laliberte, A., Herrick, J.E., Winters, C., Havstad, K., Steele, C., Browning, D., 2009. Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 3, Article No. 033542.

Reich, M., Wiggenhagen, M., Muhle, D., 2012. Filling the holes – potential of UAV-based photogrammetric façade modelling. In: Tagungsband des 15. 3D-NordOst Workshops der GFaI, Berlin, Germany, 6–7 December 2012, pp. 147–156.

Roca, D., Lagüela, S., Diaz-Vilarino, L., Armesto, J., Arias, P., 2013. Low-cost aerial unit for outdoor inspection of building facades. Autom. Construct. 36, 128–135.

Rosnell, T., Honkavaara, E., 2012. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera. Sensors 12, 453–480.

Rupnik, E., Nex, F., Remondino, F., 2014. Oblique multi-camera systems – orientation and dense matching issues. Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci. XL-3/W1, 107–114.

Seber, G.A.F., 1984. Multivariate Observations. John Wiley & Sons, Inc., Hoboken, NJ.

Sheng, Y., Gong, P., Biging, G.S., 2001. Model-based conifer-crown surface reconstruction from high-resolution aerial images. Photogram. Eng. Remote Sens. 67, 957–965.

Sperlich, M., Kattenborn, T., Koch, B., Kattenborn, G., 2014. Potential of unmanned aerial vehicle based photogrammetric point clouds for automatic single tree detection. In: Gemeinsame Tagung 2014 der DGfK, der DGPF, der GfGI und des GiN (DGPF Tagungsband 23/2014), Hamburg, Germany, 26–28 March 2014.

Talley, B.B., 1938. Multiple lens aerial cameras. In: Aerial and Terrestrial Photogrammetry. Pitman Publishing Corporation, New York/Chicago, USA, pp. 91–116.

Tommaselli, A.M.G., Calo, M., de Moraes, M.V.A., Marcato, J., Caldeira, C.R.T., Lopes, R.F., 2013. Generating virtual images from oblique frames. Remote Sens. 5, 1875–1893.

Torres-Sánchez, J., Pena, J.M., de Castro, A.I., López-Granados, F., 2014. Multi-temporal mapping of the vegetation fraction in early-season wheat fields using images from UAV. Comp. Electron. Agric. 103, 104–113.

Vasuki, Y., Holden, E.J., Kovesi, P., Micklethwaite, S., 2014. Semi-automatic mapping of geological structures using UAV-based photogrammetric data: an image analysis approach. Comp. Geosci. 69, 22–32.

Wang, T., Yan, L., Mooney, P., 2011. Dense point cloud extraction from UAV captured images in forest area. In: 2011 IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services, Fuzhou, China, 29 June–1 July 2011, pp. 389–392.

Wolf, B.M., Heipke, C., 2007. Automatic extraction and delineation of single trees from remote sensing data. Mach. Vis. Appl. 18, 317–330.

Xiao, J., Gerke, M., Vosselman, G., 2012. Building extraction from oblique airborne imagery based on robust façade detection. ISPRS J. Photogram. Remote Sens. 68, 56–68.

Yang, L., Wu, X., Praun, E., Ma, X., 2009. Tree detection from aerial imagery. In: ACM GIS'09, Seattle, WA, 4–6 November 2009.

Zarco-Tejada, P.J., Diaz-Varela, R., Angileri, V., Loudjani, P., 2014. Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods. Eur. J. Agron. 55, 89–99.

Zarco-Tejada, P.J., González-Dugo, V., Berni, J., 2012. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 117, 322–337.