See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/320475996

Template for high-resolution river landscape mapping using UAV technology
Article in Measurement · October 2017
DOI: 10.1016/j.measurement.2017.10.023
Contents lists available at ScienceDirect
Measurement
journal homepage: www.elsevier.com/locate/measurement
Template for high-resolution river landscape mapping using UAV technology
Miloš Rusnák a,⁎, Ján Sládek a,b, Anna Kidová a, Milan Lehotský a
a Institute of Geography, Slovak Academy of Sciences, Štefániková 49, 814 73 Bratislava, Slovakia
b GEOTECH Bratislava, s.r.o., Černyševského 26, 851 01 Bratislava, Slovakia
A R T I C L E I N F O
Keywords: UAV; Fluvial landscape mapping; Workflow; UAV photogrammetry; Data extraction
A B S T R A C T
This paper presents the template for high-resolution mapping of a river landscape by Unmanned Aerial Vehicle (UAV) technology with the following five steps: (i) reconnaissance of the mapped site; (ii) pre-flight field work; (iii) flight mission; (iv) quality check and processing of aerial data; and (v) operations above the processed layers and landforms (objects) mapping (extraction). The small multirotor UAV (HiSystem Hexakopter XL) equipped with a Sony NEX 6 camera with standard 16–50 mm lens provided image capture and workflow design applications. Images were processed by Agisoft PhotoScan software and georeferencing was ensured with 20 Ground Control Points (GCP) and 18 check points certifying accuracy assessment. Three imaging methods for 3D model creation of the study area were used: (i) nadir, (ii) oblique and (iii) horizontal. This minimized geometric error and captured topography under treetop cover and overhanging banks.
1. Introduction
Geomorphological mapping requires the collection of primary landform data and is dependent on scale and changes in the studied object's attributes. Landform identification is normally based on the object classification framework using field work and remotely sensed data with temporal and spatial accessibility, flexibility and accuracy [1]. This methodology provides a very powerful tool for landform mapping [2–4]. River morphology is one of the most dynamic landscape entities; therefore it is essential in its mapping to generate the precise dataset and topography required for linking processes, patterns and spatio-temporal volumetric changes [5–7]. Thus, remote sensing techniques combined with field survey provide the basic information source in modern fluvial geomorphology [8].
The last decade has seen rapid development in the use and availability of unmanned aerial vehicles (UAVs) for obtaining earth surface spatial information [9,10], and various UAV platforms are used in landscape research. While fixed-wing systems are suitable for mapping of large areas, multirotors are superior for small, linear and narrow localities, and UAVs equipped with digital cameras and lidar, or these combined systems [11], are optimal for data acquisition in landscape research at resolutions from 0.5 to 2 cm [6,14–17]. A digital camera payload is commonly used in UAV technology, and the development of easily applied commercial or open source software and inexpensive platforms enables high temporal resolution topography reconstruction and monitoring [18].
Turner et al. [12] highlighted that UAV photogrammetry is the most suitable technology for real-time or near-real-time landscape monitoring, although the photogrammetrically-derived digital elevation model (DEM) is limited by its inability to capture surface topography beneath vegetation [19].
Technological advances in UAVs enable production of a high quality elevation model and orthophotomosaic with spatial resolution up to 10 cm and vertical error to 50 cm (Table 1). The UAV is therefore an ideal tool to assess riparian forest dynamics [20,32] and analyze post-flood river channel morphology [22,28], planform changes, meander neck cut-offs [27] and lateral channel shifts [21]. UAV photogrammetry generates the high resolution topographic data essential to detect morphological changes [18,26,28,35], bankfull stages [33], riparian vegetation dynamics and canopy height models [20,22,32]. While it provides excellent data quality, including data for grain size identification [34,36] and topographic reconstruction of submerged channels [6,23,29,30], evaluation of topographic accuracy and error range still requires validation.
Research has processed data acquisition from UAV images [24–26,30–32], and now an appropriate scheme encompassing all the aspects of flight mission logistics, data acquisition and processing and the nested, multi-scale and component aspects is required. Workflow approaches to UAV imaging and data processing consist of the following 6 main steps: (i) UAV preparation, (ii) its calibration, (iii)
http://dx.doi.org/10.1016/j.measurement.2017.10.023
Received 26 June 2017; Received in revised form 10 October 2017; Accepted 11 October 2017
⁎ Corresponding author.
E-mail addresses: geogmilo@savba.sk (M. Rusnák), geogslad@savba.sk (J. Sládek), geogkido@savba.sk (A. Kidová), geogleho@savba.sk (M. Lehotský).
Measurement 115 (2018) 139–151
Available online 18 October 2017
0263-2241/© 2017 Elsevier Ltd. All rights reserved.
Table 1
Rivers with different planform settings, which were studied by UAV technology, with various platform types and dataset accuracy.

Study | River | Planform setting | UAV platform | Camera | Research | UAV accuracy
Lejot et al. [6] | Drôme River; Ain River (France) | Shifting gravel bed rivers | Pixy drone | Canon Powershot G5 / Canon EOS 500N Reflex | Bathymetric model of river bed | Spatial resolution 5–7 cm; vertical residuals ±50 cm
Dunford et al. [20] | Drôme River (France) | Shifting gravel bed rivers | Pixy drone | Canon Powershot G5 / Canon EOS 5D | Riparian forest automatic classification | Spatial resolution 6.8–21.8 cm
Vericat et al. [21] | Feshie River (Scotland, UK) | Braided gravel bed | SkyHook Helikite balloon | Ricoh Caplio R5 | Generation of photo-mosaics for river monitoring | Spatial resolution 5–10 cm; errors 25–5 cm
Hervouet et al. [22] | Drôme River (France) | Braided | Pixy drone / ULAV paraglider | Canon Powershot G5 / Canon EOS 5D / Sony DSLR-A350 | Vegetation dynamics | Spatial resolution 3–16 cm; RMSE 0.10 m
Flener et al. [23] | Pulmanki River (Finland) | River bend with point bar | Minicopter Maxi-Joker 3DD / Align T-Rex 700E | Nikon D500 / Nikon D5100 | Creation of seamless topography model by combination of mobile lidar, UAV photography and optical bathymetric modeling | DEM: vertical 0.03/0.05 m (2010/2011), RMSE 0.0152/0.088 m; bathymetry: vertical 0.12/−0.5 m, RMSE 0.221/0.163 m
Flynn and Chapra [24] | The Clark Fork stream (US) | Meandering | DJI | GoPro Hero3 | Mapping aquatic vegetation – Cladophora | Spatial error 0.3 m
Javernick et al. [25] | Ahuriri River (New Zealand) | Braided | Robinson R22 helicopter | Canon + 28 mm lens | Complex DEM of river channel (ground and bathymetry) | Ground RMSE 0.20 m; bathymetry RMSE 0.31 m
Miřijovský and Langhammer [26] | Javoří Brook (Czech Republic) | Meandering | Mikrokopter HexaXL | Canon EOS 500D | Morphology changes of channel (volumetric changes) | Spatial resolution 2 cm; RMSE 3.7 cm
Miřijovský et al. [27] | Morava River (Czech Republic) | Meandering | Mikrokopter HexaXL | Canon EOS 500D | Monitoring cut-off of meander | Vertical RMSE 0.093 m
Tamminga et al. [28] | Elbow River (Canada) | Braided/wandering | Aeryon Scout quadcopter / eBee fixed-wing | Photo3S camera / Canon IXUS 127 HS | Effect of floods to channel morphology (volumetric changes) | Spatial resolution 5/4 cm; RMSE (2012) 0.088 m (exposed)/0.098 m (submerged); RMSE (2013) 0.047 m/0.0952 m
Tamminga et al. [29] | Elbow River (Canada) | Braided/wandering | Aeryon Scout quadcopter | Photo3S camera | Assessment of reach-scale habitat and morphology (DEM – exposed and submerged areas) | Vertical RMSE 8.8 cm (exposed areas), 11.9 cm (submerged)
Woodget et al. [30] | Arrow River; Coledale Beck River (UK) | Small stream | Draganflyer X6 | Panasonic Lumix DMC-LX3 | Quantitative assessment of submerged channel areas and topographic mapping | Mean error from −0.029 to 0.111 m
Casado et al. [31] | Dee River (Wales, UK) | Meandering | QuestUAV Q-200 | Sony NEX 7 / Panasonic Lumix DMC-LX7 | Hydromorphological automatic classification of river channel | Spatial resolution 2.5/5/10 cm; RMSE 0.0451/0.1574/3.0574 m
Michez et al. [32] | Salm River; Houille River (Belgium) | – | Gatewing X100 | Ricoh GR3 | Riparian forest automatic classification | Reprojection error 0.72/0.86 pixel
Niedzielski et al. [33] | Ścinawka River (Poland) | Alluvial meandering with cutbanks | Swinglet CAM fixed-wing | Integrated RGB camera | Observing river stages | Resolution orthophoto 3 cm
Cook [18] | Daan River (Taiwan) | Bedrock gorge | DJI Phantom 2 | Canon IXUS 135 / Canon Powershot 4000 IS | Topography identification and topography changes | Mean difference −0.9 cm/−0.85 cm; RMS difference 39.4 cm/45.9 cm
Langhammer et al. [34] | Javoří brook (Czech Republic) | Meandering | Mikrokopter HexaXL | Canon EOS 500D | Optical digital granulometry | Vertical RMSE 0.015/0.021 m
Marteau et al. [35] | Eben Gill stream (England, UK) | – | DJI Phantom I | GoPro Hero3+ | Geomorphologic change detection for river restoration monitoring (volumetric analyses) | Orthophoto spatial resolution 0.025 m; error 0.025/0.030/0.017 m
Woodget and Austrums [36] | Coledale Beck River (UK) | Gravel bed river | Draganflyer X6 | Panasonic Lumix DMC-LX3 | Grain size relation to the image texture and topographic roughness | Spatial resolution 1 cm (orthophoto), 2 cm (DEM)
ground control point (GCP) targeting, (iv) flight mission, (v) image processing and (vi) point cloud and orthophoto processing and analysis [24,25]. Other authors have also emphasized photogrammetric processing [13,37,38] and have designed workflows to the data post-processing stage for image classification, validation and calculation of submerged topography [6,25,31,32]. In addition, Miřijovský and Langhammer [26] have described a legislative process in a workflow diagram, and this step has become an integral part of UAV planning and drone operation [39].
The aim of this paper is to compile a template for the application of UAV technology in riverine landscape mapping. Our illustrated example of a reach of a braided-wandering river knickzone incorporates the foregoing concepts. This reach was selected because its landform and land cover diversity enable the combination of different methods in discriminating riverine landscape objects for scalar mapping and assessment.
2. Workflow design
Data acquisition for riverine landscape mapping using a low-cost UAV outlay is divided into 5 main steps (Fig. 1): (i) reconnaissance of the mapped site – identification of potential problems and dangers in flight mission, takeoff and landing points; (ii) pre-flight field work – the placement and targeting of ground control points (GCPs) for precise georeferencing; (iii) flight mission – aerial imaging of the study area; (iv) quality check and processing of aerial data – data accuracy assessment and software data processing; and (v) operations above processed layers and landform (object) mapping (extractions) – comprising visualization, landform identification and morphometric analysis. The essential field research elements of legislative processes and UAV permission and regulations were certified prior to the flight; these are designated step zero (0).
Fig. 1. Scheme for mapping riverine landscape. Workflow consists of the following steps: (i) reconnaissance of the mapped site, (ii) pre-flight field work, (iii) flight mission, (iv) quality check and processing of aerial data, and (v) operations above processed layers and landform (object) mapping (extractions). Prior to field mapping, it is necessary to check UAV regulations and permission for field mapping and flight missions (step zero).
M. Rusnák et al. Measurement 115 (2018) 139–151
141
2.1. Step 0: Legislation and regulation
UAV mapping requires acceptance of legislation and the application of general field survey rules. These include survey permission in protected areas and private properties and specific requirements for areas such as designated military zones. Provision is also made for the various regulations affecting drone operation, including pilot training courses and licensing. Stöcker et al. [39] emphasize that UAV legal requirements are an integral part of the preparation phase for UAV flights and summarize different legislation applications in individual countries and national and international legislation in the following 6 main regulation spheres: (1,2) applicability and technical prerequisites that cover regulated UAV weight, range, control and safety systems; (3) operational limitations that center on flight height, definition of 'non-flight zones', the line of sight and urban zone restrictions; (4) administration procedures that involve flight permission, notification, approval from aviation authorities, registration number, insurance, image control by military authorities, company permission and the appropriate license for aerial work; (5) human resource requirements, including operator training and license; and (6) ethical constraints for privacy and data protection. Because different countries have varying regulations, it is essential to follow pre-flight instructions from aviation authorities and ensure that the flight mission follows the regulations set, or to order the flight mission from relevant licensed companies and state institutions.
Fig. 2. Three methods of data acquisition for geometry building and creation of the 3D river landscape model: (a) nadir, (b) oblique and (c) horizontal.
Fig. 3. Localization of the Belá River: (a) in Slovakia, (b) catchment hillshade relief with position of the study reach for UAV monitoring and (c) orthophoto of the forestry floodplain and in-channel structure (water surface – blue, gravel bars – grey and island – green). Visualization of riparian landscape (d) from a 30 m high UAV flight and (e) presentation of the multirotor platform HiSystem Hexakopter XL with Sony NEX 6 camera. (orthophoto: EUROSENSE Slovakia, GIS data: Geodesy, Cartography and Cadastre Authority of Slovak Republic (122-24-99-2012)). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 4. Localization of (a) 6 take-off points and 6 sectors with image acquisition, (b) distribution of ground control points (GCPs) in the study area, (c) detailed GCP on the gravel bar and (d) GPS point targeting.
Fig. 5. Vectorisation and identification of objects in the 3-level database with its 3 main components: braidplain, floodplain and terrace (level 1). Objects specification at level 2 (braidplain area, the most specific part of the fluvial landscape). The database was completed by land cover information at level 3.
2.2. Step 1: Reconnaissance of the mapped site
Study area field reconnaissance and visual assessment identify the flight course, obstacles and takeoff and landing sites. The area to be mapped and UAV technical limitations in operation time and line-of-sight determine the number of flight sectors and the number of take-off/landing points. Electrical wiring and vegetation create fixed obstacles and shield the GPS signal, so the visual survey identifies potential urban and economic risk areas and helps the operator select the type of flight mode: autonomous/manual, if this is not nationally predefined.
2.3. Step 2: Pre-flight field work
Ground control point (GCP) distribution and accuracy are key factors in precise UAV mapping [13,40–42], because this technique is 5–10 times more accurate than direct georeferencing by the internal GNSS of the UAV [37]. Three or more GCPs may be required for georeferencing [43–45], as errors increase with limited GCP number and distance from the GCP [17,46]. PhotoScan software requires at least 10 GCPs for model referencing [47], and these should not be distributed in lines or create regular patterns such as equilateral triangles [48]. A further relevant condition for GCP placement is their arrangement at different vertical levels, including river banks, in-channel structures, floodplain benches, terraces and bluffs. The next paramount feature of pre-flight work is defining the settings used for UAV and camera parameters based on the flight plan design. Here, Miřijovský and Langhammer [26] recorded the calculation equations for flight height, focal length, scale and ground sample distance. Camera parameters are particularly important, and the shutter speed must be set for prevailing light conditions, at approximately 1/1000 s, to produce sharp images. The camera can be pre-calibrated prior to the flight or self-calibrated in software processing. Difficulties, however, are invited by a low number of images, strip-imaging and the sole use of nadir photography, because these can lead to model deformation and the 'doming' effect due to radial distortion [40,49,50]. All these parameters must be considered in flight planning and control to ensure the accuracy of the final topography.
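For a nadir image over flat terrain, the flight-parameter equations cited above reduce to simple pinhole-camera ratios between flight height, focal length and sensor geometry. A minimal sketch (the function names are ours, and the sensor and pixel values are illustrative approximations of a Sony NEX-6 with a 16 mm lens, not figures from the paper):

```python
def ground_sample_distance(pixel_pitch_mm, flight_height_m, focal_length_mm):
    """GSD (m/pixel): size of one pixel projected on flat ground at nadir."""
    return pixel_pitch_mm * flight_height_m / focal_length_mm

def footprint(sensor_size_mm, flight_height_m, focal_length_mm):
    """Ground extent (m) covered by one image side at nadir over flat terrain."""
    return sensor_size_mm * flight_height_m / focal_length_mm

# Illustrative values: ~23.5 mm sensor width, ~4912 pixels across,
# 16 mm focal length, 80 m above ground level.
pixel_pitch_mm = 23.5 / 4912
gsd = ground_sample_distance(pixel_pitch_mm, 80, 16)
width = footprint(23.5, 80, 16)
print(f"GSD ~ {gsd * 100:.1f} cm/pixel, footprint ~ {width:.0f} m across")
```

The same ratios invert to give the flight height needed for a target GSD, which is how such equations are typically used in flight planning.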
2.4. Step 3: Flight mission
Combining nadir, oblique and horizontal imagery in manual or autonomous flights is recommended to achieve precision and model texture quality (Fig. 2). This combination improves self-calibration and model precision, thus minimizing model deformation [40,50]. The oblique photographs provide conjunction between nadir and horizontal imaging from the ground or low UAV flight paths and ensure the capture of overhanging river banks and bank topography obscured by treetop cover. Sunlight illumination and water surface reflectance also influence successful flight mission planning and timing.
Fig. 6. The five classes identified by (a) automatic supervised maximum likelihood classification (MLC), (b) result of post-classification processing: noise and mis-classification removal by application of the focal statistics tool, boundary clean toolset and removing small isolated regions of pixels, (c) reference data source for quality assessment (POMC) and (d) spatial distribution of automatic classification errors on the orthophotomosaic.
2.5. Step 4: Quality check and processing of aerial data
Image quality control prevents unexpected and inaccurate results from data processing, and 3D processing failure caused by inadequate flight lines and image overlap. It is firstly necessary to inspect the general coverage, image exposure and sharpness quality after each flight mission; the second check level then ensures software detection of criss-cross image overlap by recording the number of identical points visible in different images. The final check level identifies optical and motion blurring, as does the image blur analysis in Agisoft PhotoScan [47]. Sieberth et al. [51] have previously presented automatic detection methods for identifying blurred images in sets. Low quality images must therefore be excluded from geometric model analysis. The accepted set of photographs then undergoes photogrammetric software processing.
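The blur screening described above can be approximated with a standard sharpness metric: the variance of the image Laplacian, where low values flag optically or motion-blurred frames. A minimal NumPy sketch (function names and the threshold are ours, not taken from the paper, from Sieberth et al. or from PhotoScan):

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian of a grayscale image (2-D array).
    Sharp images have strong local contrast, so high variance; blurred
    images suppress high frequencies, so low variance."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def screen_images(images, threshold):
    """Split (name, gray_array) pairs into (accepted, rejected) by sharpness."""
    accepted, rejected = [], []
    for name, gray in images:
        (accepted if laplacian_variance(gray) >= threshold else rejected).append(name)
    return accepted, rejected
```

In practice the threshold is scene-dependent and is chosen empirically, e.g. from the distribution of scores across one flight's image set.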
Advances in technology and image analysis enable user-friendly black-box operation and model generation [49,50]. Specialized research has described the process of image analysis, SfM photogrammetry, multi-view stereopsis techniques and model geometry generation with image alignment, dense point cloud generation, geometry and texture building and georeferencing [9,13,25,50]. In addition, SfM software from open source, commercial or on-line solutions is now available for data processing, including Bundler, PMVS, Pix4D Mapper, Agisoft PhotoScan, Photosynth or ARC3D.
It is crucial to assess precision and accuracy following the final 3D model geometry computation. This is essential for topographic models to undergo successful modeling and volumetric change computation. The reference sources used for spatial DEM accuracy are terrestrial laser scanner (TLS) data [18,42,52,53]; lidar data [9,14,17]; and total station, DGPS and RTK GPS check points [12,13,16,21,37,54].
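Accuracy against such reference check points is typically summarized as per-axis and 3-D root mean square error. A minimal sketch (function names are ours):

```python
import numpy as np

def rmse_per_axis(measured, reference):
    """RMSE between model coordinates and reference check points.
    Both arrays are (n, 3) with columns X, Y, Z; returns (rmse_x, rmse_y, rmse_z)."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return tuple(np.sqrt((d ** 2).mean(axis=0)))

def rmse_3d(measured, reference):
    """Single 3-D RMSE over all check points (root mean squared point distance)."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))
```

Only points withheld from georeferencing (check points, not GCPs) give an independent accuracy estimate.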
2.6. Step 5: Operations above processed layers and landform (object) mapping (extractions)
The final step requires processing data generated in orthophotomosaic, raster, TIN DEM and point cloud formats. In fluvial geomorphology, it is also necessary to identify topographical surfaces under vegetation. Combined 2D and 3D datasets enable the use of classification techniques and point cloud filtration for precise object identification. This especially includes banks under tree canopies, overhanging banks and vertical structures. The final operations here are strongly influenced by the study aims and the nature of the study objects. In fluvial studies, UAV-generated topography models detect morphology changes [18,26,28,35], identify submerged area topography [6,23,25,30] and vegetation dynamics [22,32], classify objects from the orthophotomosaic [20,24,31] and analyze optical granulometry [34,36].
3. Workflow application: case study area and technical equipment
Imaging data of the 1.6 km long knickzone reach of the braided-wandering Belá River in the northern part of Slovakia (Fig. 3) were chosen to illustrate the template's individual steps. This reach has a system of bars with a well-developed main channel, bedrock outcrop, vertically diverse forested banks, forested floodplain with side arms, a terrace, a 30 m high undercut bluff and variable land cover.
The small multirotor UAV (HiSystem Hexakopter XL, Fig. 3e) equipped with a Sony NEX 6 camera and standard 16–50 mm lens was used for image capture. All images were taken at 16 mm focal length, providing an 82° field-of-view. The UAV was equipped with First Person View (FPV) to image targets and visually check the flight path. This provides live-view for transmitting video from either the traditional camera or a small CCD camera on the UAV board to the ground station with an external monitor and Google Maps for position localization. Agisoft PhotoScan software running on a standard 4-core Intel i7 workstation
Fig. 7. Differences in (a) orthoimages from 2002 with pixel resolution 1 m, (b) from 2006 – 40 cm/pixel, (c) from 2015 – 20 cm/pixel and (d) from UAV photogrammetry (2015) with 5 cm pixels. (orthophoto: EUROSENSE Slovakia).
equipped with 32 GB RAM and a 2 GB graphics card then processed the images.
4. Methods
4.1. Image acquisition and processing
Field reconnaissance ensured successful location of six take-off and landing sites on the valley floor at the centers of gravel bar tops. These provided the best conditions for UAV visual flight control, with
Fig. 8. (a) Photorealistic colorized point cloud landscape model in the study area, (b) classified point cloud (water – blue, ground – grey, vegetation – green, LWD – orange), and (c) point cloud of surface model containing only ground and water classes. Raster generated (d) envelope surface model (DSM) with vegetation, or (e) terrain digital elevation model (DTM) and (f) DEM of differences between these models. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Table 2
Confusion matrix and accuracy measures (classification accuracy, overall classification accuracy and KAPPA index) between classified and reference data for supervised Maximum Likelihood Classification.

Classified data \ Reference data | Water area | Bar surface | Vegetation | Woody debris | Bare surface | Classification accuracy
Water area | 0.78 | 0.02 | 0.15 | 0.01 | 0.03 | 78.44%
Bar surface | 0.04 | 0.89 | 0.03 | 0.01 | 0.04 | 88.52%
Vegetation | 0.02 | 0.02 | 0.94 | 0.01 | 0.01 | 94.05%
Woody debris | 0.26 | 0.08 | 0.43 | 0.18 | 0.05 | 17.70%
Bare surface | 0.37 | 0.19 | 0.23 | 0.03 | 0.17 | 17.36%

Overall classification accuracy: 0.81 (81.39%). Kappa index of agreement: 0.62.
maximal direct visual contact for flights over floodplain forest and terraces, and the accompanying six imaging sectors are highlighted in Fig. 4a. A total of 38 GCPs were measured by RTK GPS with a GG03 antenna and Leica Zeno 5 with 1–2 cm accuracy (Fig. 4b–d). Images were obtained by the Sony NEX6 camera with automatically computed aperture, and the exposure time was set at shutter speeds of 1/1200 in good sunlight and 1/800 in cloudy conditions. The autonomous flight mode of the Hexakopter for take-off, landing and waypoint flight stabilized by GPS could not be used for imaging because GPS signals were shielded by the tree canopy.
Nadir, oblique and horizontal imaging were used for 3D model creation over the 75 min flights. Nadir imaging was realized in all sectors with an average flight height (AGL – above ground level) of 80 m, and 50 m above the terrace surface in sector 4. Oblique imaging at 30–45° was applied in sectors 2, 3 and 4, where treetops covered the banks and bank line. An AGL of 20 m was used for oblique imaging of sectors 2 and 3, while sector 4 (bluff imaging) was performed at 80 m, 60 m and 50 m AGL. The orthogonal and oblique imaging was complemented by horizontal imaging, which provided ground level photos of river banks under tree canopies and bluffs by UAV from 30 m and 20 m AGL. Aerial image data were processed by Agisoft PhotoScan with 2413 aerial images and S-JTSK (the unified trigonometric cadastral network) coordinates of 20 GCPs. The remaining 18 points served as check points for accuracy assessment. Thus, data exported from PhotoScan provided the textured point cloud, an RGB orthophotomosaic in red, green and blue bands with pixel resolution 5 cm and a digital elevation model of channel and floodplain with 6.46 cm resolution.
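The division of the 38 surveyed points into 20 georeferencing GCPs and 18 independent check points can be sketched as below. The random split shown is a simplification of ours; in practice GCPs are selected for even horizontal and vertical coverage, as described in Step 2:

```python
import random

def split_control_points(point_ids, n_gcp, seed=0):
    """Split surveyed point ids into (gcps, check_points).
    A seeded shuffle keeps the split reproducible; real surveys would
    instead pick GCPs for spatial coverage rather than at random."""
    ids = list(point_ids)
    random.Random(seed).shuffle(ids)
    return ids[:n_gcp], ids[n_gcp:]

# 38 surveyed points -> 20 GCPs for georeferencing + 18 check points,
# mirroring the counts used in this study.
gcps, checks = split_control_points(range(1, 39), 20)
print(len(gcps), len(checks))
```

The key property is that the two sets are disjoint, so the check points remain independent of the georeferencing solution.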
4.2. Data analysis and processing
The orthophotomosaic provided manual classification of land cover categories and landforms in ArcGIS software (Fig. 5). On the first level, landforms were classified into the three main geomorphic components: braidplain, floodplain and terrace. Second level classification then divided the braidplain into water area, bar area and island categories, and land cover was classified into: water area, bedrock, gravel bar, low vegetation, shrub vegetation, tree vegetation, and large woody debris categories. A process-oriented manually classified layer (POMC) with 2 landform levels and 7 land cover categories was the result.
The photogrammetrically derived point cloud created a surface envelope, and the vegetation cover used in creating the surface topography required identification and filtering. Unfortunately, since the classification process with high density photogrammetrically derived points was based on digital camera imaging with only one "return" from the screened object, it was impossible to use the filtering algorithms based on the first and last returns supplied by airborne lidar. The photogrammetrically derived point cloud classification is therefore based on the inspection of radius angles and the height between classified points over several iterations. The quasi-vertical bank, bluff and landslide surface classification results prove unreliable without operator input.
The Terrasolid TerraScan extension in Microstation software for semi-automatic point cloud classification filtered the following six vegetation cover classes: (1) high vegetation (> 5 m); (2) middle vegetation (1.5–5 m); (3) low vegetation (0.2–1.5 m); (4) ground; (5) water and (6) LWD. This classification of vegetation cover was harmonized with American Society for Photogrammetry and Remote Sensing (ASPRS) classification standards. Woody debris was included in the low vegetation class, and therefore its extraction was based on the previously manually-vectorised woody debris land cover class in the POMC layer. The water area points were also difficult to classify by automatic methods because the flat water surface was
Fig. 9. (a) Identification of the Belá River main channel and several abandoned side arms and (b) longitudinal profiles calculated from UAV photogrammetry 3D models. Identification of (c) bar topography and (d) several vertical bar levels (e) on the cross-section plotted on the point bar.
deformed by rippling water. This led to classification of the water surface as ground, small vegetation or noise, and the water areas were therefore classified similarly to the woody debris class, based on polygons created from the POMC layer. Class corrections were verified by the moving profile method in the whole study area.
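The height thresholds used for the semi-automatic vegetation classes above can be illustrated with a much-simplified per-point rule. This is a stand-in of ours for the iterative TerraScan routine, which additionally inspects angles and distances between points:

```python
def classify_by_height(height_above_ground_m):
    """Assign a class from a point's height above the classified ground,
    using the thresholds listed above: 0.2-1.5 m low vegetation,
    1.5-5 m middle vegetation, > 5 m high vegetation."""
    h = height_above_ground_m
    if h > 5.0:
        return "high vegetation"
    if h >= 1.5:
        return "middle vegetation"
    if h >= 0.2:
        return "low vegetation"
    return "ground"

print([classify_by_height(h) for h in (0.05, 0.8, 3.0, 12.0)])
```

Water and LWD cannot be separated by height alone, which is why the study falls back on POMC polygons for those classes.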
The RGB orthophotomosaic was classified in ArcGIS software by supervised Maximum Likelihood Classification (MLC). Here, training site signatures identified the five land covers: water area, bar surface, vegetation, LWD and bare surface. ISO cluster analysis and dendrogram tools tested, separated and validated the potential number of classification categories, and training samples were manually digitized via spectral RGB diversity into the five author-predefined classes. Final post-classification processing to reduce noise and mis-classification comprised focal statistics filtering, smoothing by the boundary clean toolset and removing small isolated regions of pixels (Fig. 6).
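Maximum Likelihood Classification assigns each pixel to the class whose fitted Gaussian model of the RGB training signature gives the highest likelihood. A minimal per-pixel NumPy sketch, assuming equal class priors; the function names and data layout are ours, not the ArcGIS implementation:

```python
import numpy as np

def fit_classes(training):
    """Fit a Gaussian (mean, covariance) per class.
    `training` maps class name -> (n, 3) array of RGB training samples;
    a small ridge keeps the covariance invertible."""
    stats = {}
    for name, samples in training.items():
        x = np.asarray(samples, float)
        stats[name] = (x.mean(axis=0), np.cov(x, rowvar=False) + 1e-6 * np.eye(3))
    return stats

def classify_pixel(rgb, stats):
    """Return the class with the highest Gaussian log-likelihood for one pixel."""
    best, best_ll = None, -np.inf
    for name, (mu, cov) in stats.items():
        d = np.asarray(rgb, float) - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = name, ll
    return best
```

With only three RGB bands, classes with overlapping signatures (here woody debris and bare surface) are inherently hard to separate, which is consistent with the low per-class accuracies reported in Table 2.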
5. Results
5.1. UAV data: accuracy and precision
Fig. 7 highlights that the high resolution of the UAV orthophotomosaic combined with nadir, oblique and horizontal imaging provided landform and land cover vectorization with one dimension greater than 10 cm. This included vertically oriented bluffs and banks with overhanging tree canopy. Aligned GCP precision computed by Agisoft PhotoScan was 0.336 pixel (0.08 m) and the average root mean square error (RMSE) of all GCPs after alignment was 0.01915 m (z = 0.10093 m; x = 0.01474 m; y = 0.02272 m). The mean difference in check points was 0.02642 m, and the check point RMSE was 0.02836 m for the vertical coordinate, 0.02459 m for the X coordinate and 0.02812 m for Y.
Fig. 10. Woody debris accumulation identification (a) on the orthophotomosaic and digital elevation model with (b) 3 cross-sections plotted on woody debris. Statistical distribution (c) of values calculated from 516 woody debris spots and (d) spatial distribution of woody debris accumulation in the study area.
The point cloud was exported with RGB point information, which provided photorealistic 3D visualization (Fig. 8a), and with the class information for terrain, vegetation and separately defined classes (Fig. 8b and c). Filtering gave the DSM combining all class points (Fig. 8d) and also the DTM with only ground-point classes (Fig. 8e). Vertical differences between DTM and DSM for vegetation cover height varied from −4.38 to +34.09 m, dependent on floodplain tree cover (Fig. 8f). With the exception of islands, the braidplain area has small vegetation from 50 cm to 1 m, providing a less precise topography model because of the low number of points remaining after filtering vegetation points.
5.2. Automatic supervised classification
The accuracy of the MLC supervised classification of RGB characteristics was assessed by the combination toolset and cross-tabulation with the manually-classified POMC reference layer dataset. POMC was adjusted here by merging the data into the same 5 classes: (1) water areas; (2) bar surface; (3) vegetation; (4) woody debris; (5) bare surface. Accuracy assessment results are presented in the Table 2 confusion matrix, which provided an overall accuracy of 81.39% and a 0.62 KAPPA index of agreement. While proficient automated classification was achieved for the "vegetation" and "gravel bar" classes when the spatial error distribution in Fig. 6d was considered, close observation of the water areas class revealed lower accuracy due to vegetation cover overhanging the bank line, visible bedrock under the water level classified in the bare-surface class and the gravelled channel bed in the gravel-bar class. The bare surface and woody debris classes also exhibited low accuracy. The spatial distribution of errors at the study area margins was characterized by lower image quality, especially in areas with visible bottom channel bed structure and in the vegetation-channel and water area-bar surface contact zones (Fig. 6d).
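Overall accuracy and the kappa index are computed from a confusion matrix of raw pixel counts. Note that Table 2 reports row-normalized proportions, so the paper's exact figures cannot be recomputed from it; the matrix below is a hypothetical two-class example:

```python
import numpy as np

def accuracy_and_kappa(confusion_counts):
    """Overall accuracy and Cohen's kappa from a confusion matrix of raw
    counts (rows = classified, columns = reference). Kappa corrects the
    observed agreement for the agreement expected by chance."""
    c = np.asarray(confusion_counts, float)
    n = c.sum()
    po = np.trace(c) / n                                   # observed agreement
    pe = (c.sum(axis=1) * c.sum(axis=0)).sum() / n ** 2    # chance agreement
    return po, (po - pe) / (1 - pe)

counts = [[80, 5], [15, 100]]   # hypothetical counts, not the paper's data
acc, kappa = accuracy_and_kappa(counts)
print(f"overall accuracy {acc:.3f}, kappa {kappa:.3f}")
```

Kappa is always at or below overall accuracy, which is why the paper reports 0.62 alongside an 81.39% overall accuracy.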
5.3. Channel landforms mapping and woody debris volume calculation
Classified elevation models identified study area landforms and their vertical diversity. DTM longitudinal profiles determined vertical differentiation in the abandoned arms, which lie approximately 60–100 cm above the main channel water level (Fig. 9), and the DTM also revealed the secondary channel incision of approximately 150 cm relative to the main channel in the southern part of the knickzone, accompanied by an outcrop of claystones. In addition, the combined DTM and high-precision orthophotomosaic identified bar vertical diversity (Fig. 9c). The point bar's three levels (Fig. 9e) define three accretion stages which differ in the grain size of the deposited material (Fig. 9d). The oldest level, No. 3, has an average altitude of 706.256 m a.s.l. and is stabilized by low vegetation cover and sandy deposits, bar level 2 has an average altitude of 705.671 m a.s.l., and the youngest bar level 1 lies at 705.001 m a.s.l. The orthophotomosaic also highlights that the surface of the youngest bar level is formed by finer material than that of level 2.
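Per-level average altitudes of this kind amount to zonal means of the DTM over the delimited bar-level areas. A minimal sketch with hypothetical DTM cells and level labels (the real computation runs over rasters in GIS):

```python
import numpy as np

# Hypothetical DTM cell altitudes (m a.s.l.) and their bar-level labels
# (1 = youngest ... 3 = oldest); values are illustrative only.
dtm = np.array([705.0, 705.1, 705.6, 705.7, 706.2, 706.3])
level = np.array([1, 1, 2, 2, 3, 3])

# Zonal mean: average altitude of all cells in each level.
means = {lv: dtm[level == lv].mean() for lv in np.unique(level)}
for lv, m in means.items():
    print(f"bar level {lv}: {m:.3f} m a.s.l.")
```

The same aggregation over the real polygons yields the 705.001 / 705.671 / 706.256 m a.s.l. averages quoted above.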
In addition to vertical landform differentiation, the point cloud enables woody debris volume calculation (Fig. 10); this calculation example was chosen because woody debris plays a very important role in braided channel development. The high-resolution orthophotomosaic and point cloud allow precise debris identification, including calculation of its length, its location within the study area and its orientation to the water flow (Fig. 10a). The elevation model determines vertical profiles of roots and trunks (Fig. 10b) which create flow obstructions; the point cloud captures woody debris 30 cm high (profile 1), a 10 cm high trunk (profile 2), and a 1 m high root and woody debris bulk (profile 3), corresponding to the actual terrain situation. The total volume of air and wood mass in the woody debris was computed with the 'cut method' volume analysis tool in Topolyst software. A total of 516 woody debris accumulations with a volume of 931 m³ were registered in the study area (Fig. 10d), with a maximum accumulation of 134.9 m³, an average of 1.8 m³ and a median of 0.0195 m³. This highlights that 75% of all accumulations were smaller than 0.319 m³ (Fig. 10c).
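The wide gap between the mean (1.8 m³) and the median (0.0195 m³) reflects a strongly right-skewed size distribution. The order statistics involved can be sketched with synthetic lognormal volumes standing in for the real dataset (the counts and totals below are not the paper's):

```python
import numpy as np

# Synthetic right-skewed accumulation volumes (m^3); the real dataset
# has 516 accumulations totalling 931 m^3, not reproduced here.
rng = np.random.default_rng(42)
volumes = rng.lognormal(mean=-3.0, sigma=2.0, size=516)

stats = {
    "count": volumes.size,
    "total": volumes.sum(),
    "max": volumes.max(),
    "mean": volumes.mean(),
    "median": np.median(volumes),
    # 75% of accumulations lie below this value (cf. 0.319 m^3 in the paper).
    "q75": np.percentile(volumes, 75),
}
for name, value in stats.items():
    print(f"{name}: {value:.4f}")
```

For such skewed data the median and the 75th percentile describe the typical accumulation far better than the mean, which a few very large jams dominate.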
It is essential in woody debris estimation to quantify wood mass volume by identifying the wood/air volume ratio in the different types of accumulated material: isolated wood pieces, jams of logs, branches, root boles, twigs, shrubs and whole trees. Appropriate approaches for estimating the mass of woody debris accumulations, calculating a density index for woody debris and estimating the air proportion in log jams and shrubs from aerial photography were developed by Thévenet et al. [55] and presented by Piégay et al. [56] and Wyžga and Zawiejska [57].
6. Discussion
6.1. Workflow design
The template for UAV technology application to riverine landscape mapping consists of five steps, encompassing compliance with UAV regulations, reconnaissance of the mapped site, data processing and landform mapping/object classification. Universally applied UAV technology describes the process of image acquisition and processing, including description of the UAV platform, technical parameters, number of images, camera settings, software processing and the specialized work involved in using the UAV as the best tool for case study data acquisition. Research also emphasizes problems in the methods used for final data post-processing, especially in image classification [31], vegetation segmentation and classification [32], refraction correction [30] and submerged topography model generation [23,25]. Other reports highlight photogrammetric processing, the calibration process, GCP distribution and parameter settings for accuracy assessment in UAV-derived models [13,37,38,40,41]. Further articles describe the legislation and regulations governing the image data acquisition phase. Here, Tonkin et al. [16] recorded aviation authority line-of-sight requirements, Flynn et al. [24] conducted flights in accordance with the Model Aircraft Operating Standards Advisory Circular, and Casado et al. [31] described data collection under UK Civil Aviation Authority regulation. Rango and Laliberte [58] detailed the UAV flight planning required by US Federal Aviation Administration (FAA) law in the National Airspace System, and Stöcker et al. [39] provided a wide overview of UAV regulations in 19 countries with the common goal of minimizing risks in UAV operations. Developing a workflow template as an appropriate tool for mapping objects in the riverine landscape requires familiarization with all the complexities of UAV application to this process, and it ensures check systems for identifying and controlling all steps in model creation. Workflow design must cover the multi-component aspects of field data acquisition and all technical requirements in order to achieve precise data generation.
Advances in UAV image processing and progress in multi-view stereopsis techniques, combined with the universal availability and low cost of UAVs as an easy data collection source, have led to black-box UAV photogrammetry data generation. The combination of the following essential steps is fundamental in UAV mapping: (i) data collection under legislative and regulatory frameworks to minimize risks; (ii) photogrammetric processing with high-accuracy GCPs, combined nadir/oblique imaging and geometric model accuracy assessment; and (iii) a final data processing methodology.
6.2. Advantages and limitations of UAV riparian landscape mapping
New developments in UAV technology have enabled the creation of high-precision models of the actual landscape with appropriate information density and position referencing, especially in the vertical dimension [59]. Miřijovský and Langhammer [26] emphasize the great reliability of UAVs in fluvial geomorphological research and illustrate the differences between the standard schematization of a study area during
classic field research, with errors from subjective observation, and UAV mapping. This new mapping platform ensures less time-consuming data acquisition, and its greatest advantages include real-time capture of the landscape state and of subsequent changes elucidated by time series data. This determines the actual processes involved in these changes and is especially applicable in capturing and monitoring flood events and landslide movements. The precise results record small objects at a resolution of a few centimeters, thus improving data identification, extraction and classification, with the additional benefit of both vertical data information and improved spatial data processing analysis.
A further advantage of this technology is the coupling of several data models generated from a single flight. Three hours of UAV mapping on the Belá River (six flights of 12 min duration plus approximately 1 h of GCP targeting) thus generated DEMs, orthophotos, a point cloud and nadir or tilted aerial photos for complex landscape modeling. While UAV mapping is less expensive, more operational and provides spatial information density equal to that of airborne lidar [15], its optical data acquisition cannot penetrate the vegetation cover to receive signals from the terrain topography [12]. The technology is also limited by enveloping vegetation cover, with significant errors recorded especially in landscapes covered by small trees and shrubs [15], vegetated areas [17] and areas continuously covered by low vegetation which cannot be classified or removed from the surface [19]. In contrast, a highly precise DTM can capture the ground terrain in sparsely vegetated areas by optical sensors after point cloud filtering.
The ground level for data classification in our study area involves non-densely vegetated, sparsely vegetated or bare surfaces, so the classified point cloud greatly improves the accuracy of the topographic surface model. Its precise compilation over a sparsely covered floodplain plays a crucial role in landform mapping. However, interpolation during DTM generation is impossible when the floodplain is covered by dense vegetation, so geodynamic analysis with UAV photogrammetry must consider the effect of vegetation on vertical errors. It is possible to use UAV lidar [11] for point cloud acquisition in areas where the ground must be reached beneath the vegetation canopy, but the disadvantages of small UAV lidars are their relatively high price and their weight of more than 2 kg, which limits their use on small UAV platforms.
6.3. UAV technology in the riparian landscape of the Belá River
Multiple imaging combining classic nadir images with oblique and horizontal ground imaging is crucial in mapping bank height, the bank line, bluffs and valley walls. The point cloud, combined with the calculation of woody debris volume, provides precise data at high spatial resolution for land cover classification. Here, debris below the planar surface and buried in gravel is excluded, so woody debris volume is calculated from the differences between the points classified as woody debris and the planar surface.
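The volume computation described here (woody-debris-classified points differenced against a planar reference surface) can be sketched as a grid sum of height above the plane times cell area. This is not the Topolyst 'cut method' implementation; the cell size and heights below are illustrative:

```python
import numpy as np

# Heights (m) of woody-debris points above a fitted planar reference
# surface, rasterized to a grid; 0 where no debris points remain.
# Debris below the plane would already have been excluded.
heights = np.array([[0.0, 0.30, 0.30],
                    [0.0, 0.10, 1.00],
                    [0.0, 0.00, 0.50]])

cell_area = 0.05 * 0.05   # hypothetical 5 cm grid cells, in m^2

# Accumulation volume = sum of (height above plane) x cell area;
# this is wood plus enclosed air, as in the paper's 931 m^3 total.
volume = heights.sum() * cell_area
print(f"accumulation volume = {volume:.5f} m^3")
```

Converting such air-and-wood volumes to wood mass then requires the wood/air ratio approaches of Thévenet et al. [55] discussed above.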
The Belá River is subdivided into two reaches: (1) the northern reach, dominated by gravel bar and island areas with woody debris accumulated in abandoned arms created by material accumulation up to 1 m above the present river flow; and (2) the lower reach, affected by the anthropogenic construction of a small hydropower plant and an artificial canal with evident secondary channel incision [60]. This riverine landscape has floodplain tree cover, so it is essential here to use multiple imaging combining classic orthogonal, oblique and ground horizontal imaging. This provides workflow advantages in identifying bank lines and the surface under tree canopies overhanging the river banks. The accuracy of classic approaches to bank delimitation from aerial photos, in both models and orthophotos, is severely affected when the banks are inclined at an angle rather than forming a vertical cliff. It is therefore important to create a methodology which applies landform delimitation in 3-dimensional space, employing 3D assessment of landscape objects rather than classic planar geoscience analysis in GIS.
7. Conclusion
The presented template for the application of UAV technology in riverine landscape mapping encompasses all technical aspects of flight mission logistics, data acquisition and processing. It also includes the nested, multiscale and multi-component aspects of riverine landscape object discrimination and their scalar mapping and assessment, which together provide a successful tool for researching this landscape type. Our proposed five-step workflow comprises: (0) legislation and regulation control; (i) reconnaissance of the mapped site; (ii) pre-flight field work; (iii) flight mission; (iv) quality check and processing of aerial data; and (v) operations above the processed layers and landform (object) mapping (extraction). The combination of all these essential steps is fundamental to UAV-mapped identification of bank lines and morphological changes.
Finally, it is of the utmost importance to recognize that the accuracy of classic approaches to bank delimitation from aerial photos and UAV orthophotos is compromised where banks are inclined at an angle rather than forming a vertical cliff. This, however, can be successfully counteracted by a methodology which applies landform delimitation in 3-dimensional space, thus employing 3D assessment of landscape objects.
Acknowledgments
This research was supported by the Science Grant Agency (VEGA) of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences; 02/0020/15. The flight mission was accomplished in collaboration with GEOTECH Bratislava s.r.o. and the aerial orthoimage was provided by EUROSENSE Slovakia, s.r.o. The authors are grateful to R. Marshall for the manuscript language review.
References
[1] J.M. Wheaton, K.A. Fryirs, G. Brierley, S.G. Bangen, N. Bouwes, G. O'Brien, Geomorphic mapping and taxonomy of fluvial landforms, Geomorphology 248 (2015) 273–295, http://dx.doi.org/10.1016/j.geomorph.2015.07.010.
[2] E.J. Milton, D.J. Gilvear, I.D. Hooper, Investigating change in fluvial systems using remotely sensed data, in: A.M. Gurnell, G.E. Petts (Eds.), Changing River Channels, John Wiley and Sons, Chichester, 1995, pp. 276–301.
[3] R.G. Bryant, D.J. Gilvear, Quantifying geomorphic and riparian land cover changes either side of a large flood event using airborne remote sensing: River Tay, Scotland, Geomorphology 29 (1999) 307–321, http://dx.doi.org/10.1016/S0169-555X(99)00023-9.
[4] D. Gilvear, C. Davids, A.N. Tyler, The use of remotely sensed data to detect channel hydromorphology; River Tummel, Scotland, River Res. Appl. 20 (2004) 795–811, http://dx.doi.org/10.1002/rra.792.
[5] J.M. Hooke, C.E. Redmond, Use of cartographic sources for analyzing river channel change with examples from Britain, in: G.E. Petts (Ed.), Historical Change of Large Alluvial Rivers: Western Europe, John Wiley and Sons, Chichester, 1989, pp. 79–93.
[6] J. Lejot, C. Delacourt, H. Piégay, T. Fournier, M.L. Trémélo, P. Allemand, Very high spatial resolution imagery for channel bathymetry and topography from an unmanned mapping controlled platform, Earth Surf. Process. Landf. 32 (2007) 1705–1725, http://dx.doi.org/10.1002/esp.1595.
[7] D. Gilvear, R. Bryant, Analysis of aerial photography and other remotely sensed data, in: M. Kondolf, H. Piégay (Eds.), Tools in Geomorphology, Wiley, Chichester, 2003, pp. 135–170, http://dx.doi.org/10.1002/0470868333.ch6.
[8] A.M. Gurnell, J.L. Peiry, G.E. Petts, Using historical data in fluvial geomorphology, in: M. Kondolf, H. Piégay (Eds.), Tools in Geomorphology, Wiley, Chichester, 2003, pp. 77–103.
[9] M.A. Fonstad, J.T. Dietrich, B.C. Courville, J.L. Jensen, P.E. Carbonneau, Topographic structure from motion: a new development in photogrammetric measurement, Earth Surf. Process. Landf. 38 (2013) 421–430, http://dx.doi.org/10.1002/esp.3366.
[10] J. Sládek, M. Rusnák, Nízkonákladové mikro-UAV technológie v geografii (nová metóda zberu priestorových dát), Geografický Časopis 65 (3) (2013) 269–285.
[11] T. Sankey, J. Donager, J. McVay, J.B. Sankey, UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA, Remote Sens. Environ. 195 (2017) 30–43, http://dx.doi.org/10.1016/j.rse.2017.04.007.
[12] D. Turner, A. Lucieer, M. de Jong, Time series analysis of landslide dynamics using an unmanned aerial vehicle (UAV), Remote Sens. 7 (2015) 1736–1757, http://dx.doi.org/10.3390/rs70201736.
[13] S. Harwin, A. Lucieer, Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery, Remote Sens. 4 (2012) 1573–1599, http://dx.doi.org/10.3390/rs4061573.
[14] Ch.H. Hugenholtz, K. Whitehead, O.W. Brown, T.E. Barchyn, B.J. Moorman, A. LeClair, K. Riddell, T. Hamilton, Geomorphological mapping with a small unmanned aircraft system (sUAS): feature detection and accuracy assessment of a photogrammetrically-derived digital terrain model, Geomorphology 194 (2013) 16–24, http://dx.doi.org/10.1016/j.geomorph.2013.03.023.
[15] U. Niethammer, M.R. James, S. Rothmund, J. Travelletti, M. Joswig, UAV-based remote sensing of the Super-Sauze landslide: evaluation and results, Eng. Geol. 128 (2012) 2–11, http://dx.doi.org/10.1016/j.enggeo.2011.03.012.
[16] T.N. Tonkin, N.G. Midgley, D.J. Graham, J.C. Labadz, The potential of small unmanned aircraft systems and Structure-from-Motion for topographic surveys: a test of emerging integrated approaches at Cwm Idwal, North Wales, Geomorphology 226 (2014) 35–43, http://dx.doi.org/10.1016/j.geomorph.2014.07.021.
[17] F. Clapuyt, V. Vanacker, K. Van Oost, Reproducibility of UAV-based earth topography reconstructions based on structure-from-motion algorithms, Geomorphology 260 (2016) 4–15, http://dx.doi.org/10.1016/j.geomorph.2015.05.011.
[18] K.J. Cook, An evaluation of the effectiveness of low-cost UAVs and structure from motion for geomorphic change detection, Geomorphology 278 (2017) 195–208, http://dx.doi.org/10.1016/j.geomorph.2016.11.009.
[19] M. Rusnák, J. Sládek, J. Buša, V. Greif, Suitability of digital elevation models generated by UAV photogrammetry for slope stability assessment (case study of landslide in Svätý Anton, Slovakia), Acta Sci. Pol. Form. Cir. 15 (4) (2016) 439–449, http://dx.doi.org/10.15576/ASP.FC/2016.15.4.439.
[20] R. Dunford, K. Michel, M. Gagnage, H. Piégay, M.L. Trémélo, Potential and constraints of unmanned aerial vehicle technology for the characterization of Mediterranean riparian forest, Int. J. Remote Sens. 30 (2009) 4915–4935, http://dx.doi.org/10.1080/01431160903023025.
[21] D. Vericat, J. Brasington, J. Wheaton, M. Cowie, Accuracy assessment of aerial photographs acquired using lighter-than-air blimps: low-cost tools for mapping river corridors, River Res. Appl. 25 (2009) 985–1000, http://dx.doi.org/10.1002/rra.1198.
[22] A. Hervouet, R. Dunford, H. Piégay, B. Belletti, M.L. Trémélo, Analysis of post-flood recruitment patterns in braided-channel rivers at multiple scales based on an image series collected by unmanned aerial vehicles, ultralight aerial vehicles, and satellites, GISci. Remote Sens. 48 (2011) 50–73, http://dx.doi.org/10.2747/1548-1603.48.1.50.
[23] C. Flener, M. Vaaja, A. Jaakkola, A. Krooks, H. Kaartinen, A. Kukko, E. Kasvi, H. Hyyppä, P. Alho, Seamless mapping of river channels at high resolution using mobile LiDAR and UAV-photography, Remote Sens. 5 (2013) 6382–6407, http://dx.doi.org/10.3390/rs5126382.
[24] K.F. Flynn, S.C. Chapra, Remote sensing of submerged aquatic vegetation in a shallow non-turbid river using an unmanned aerial vehicle, Remote Sens. 6 (2014) 12815–12836, http://dx.doi.org/10.3390/rs61212815.
[25] L. Javernick, J. Brasington, B. Caruso, Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry, Geomorphology 213 (2014) 166–182, http://dx.doi.org/10.1016/j.geomorph.2014.01.006.
[26] J. Miřijovský, J. Langhammer, Multitemporal monitoring of the morphodynamics of a mid-mountain stream using UAS photogrammetry, Remote Sens. 7 (2015) 8586–8609, http://dx.doi.org/10.3390/rs70708586.
[27] J. Miřijovský, M.Š. Michalková, O. Petyniak, Z. Máčka, M. Trizna, Spatiotemporal evolution of a unique preserved meandering system in Central Europe – the Morava River near Litovel, Catena 127 (2015) 300–311, http://dx.doi.org/10.1016/j.catena.2014.12.006.
[28] A.D. Tamminga, B.C. Eaton, Ch.H. Hugenholtz, UAS-based remote sensing of fluvial change following an extreme flood event, Earth Surf. Process. Landf. 40 (2015) 1464–1476, http://dx.doi.org/10.1002/esp.3728.
[29] A. Tamminga, C. Hugenholtz, B. Eaton, M. Lapointe, Hyperspatial remote sensing of channel reach morphology and hydraulic fish habitat using an unmanned aerial vehicle (UAV): a first assessment in the context of river research and management, River Res. Appl. 31 (2015) 379–391, http://dx.doi.org/10.1002/rra.2743.
[30] A.S. Woodget, P.E. Carbonneau, F. Visser, I.P. Maddock, Quantifying submerged fluvial topography using hyperspatial resolution UAS imagery and structure from motion photogrammetry, Earth Surf. Process. Landf. 40 (2015) 47–64, http://dx.doi.org/10.1002/esp.3613.
[31] M.R. Casado, R.B. Gonzales, R. Wright, P. Bellamy, Quantifying the effect of aerial imagery resolution in automated hydromorphological river characterisation, Remote Sens. 8 (2016) 650, http://dx.doi.org/10.3390/rs8080650.
[32] A. Michez, H. Piégay, J. Lisein, H. Claessens, P. Lejeune, Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system, Environ. Monit. Assess. 188 (2016) 146, http://dx.doi.org/10.1007/s10661-015-4996-2.
[33] T. Niedzielski, M. Witek, W. Spallek, Observing river stages using unmanned aerial vehicles, Hydrol. Earth Syst. Sci. 20 (2016) 3193–3205, http://dx.doi.org/10.5194/hess-20-3193-2016.
[34] J. Langhammer, T. Lendzioch, J. Miřijovský, F. Hartvich, UAV-based optical granulometry as tool for detecting changes in structure of flood depositions, Remote Sens. 9 (2017) 240–261, http://dx.doi.org/10.3390/rs9030240.
[35] B. Marteau, D. Vericat, Ch. Gibbins, R.J. Batalla, D.R. Green, Application of Structure-from-Motion photogrammetry to river restoration, Earth Surf. Process. Landf. 42 (3) (2016) 503–515, http://dx.doi.org/10.1002/esp.4086.
[36] A.S. Woodget, R. Austrums, Subaerial gravel size measurement using topographic data derived from a UAV-SfM approach, Earth Surf. Process. Landf. 42 (9) (2017) 1434–1443, http://dx.doi.org/10.1002/esp.4139.
[37] D. Turner, A. Lucieer, Ch. Watson, An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds, Remote Sens. 4 (2012) 1392–1410, http://dx.doi.org/10.3390/rs4051392.
[38] A.S. Laliberte, M.A. Goforth, C.M. Steele, A. Rango, Multispectral remote sensing from unmanned aircraft: image processing workflows and applications for rangeland environments, Remote Sens. 3 (2011) 2529–2551, http://dx.doi.org/10.3390/rs3112529.
[39] C. Stöcker, R. Bennett, F. Nex, M. Gerke, J. Zevenbergen, Review of the current state of UAV regulations, Remote Sens. 9 (2017) 459, http://dx.doi.org/10.3390/rs9050459.
[40] S. Harwin, A. Lucieer, J. Osborn, The impact of the calibration method on the accuracy of point clouds derived using unmanned aerial vehicle multi-view stereopsis, Remote Sens. 7 (2015) 11933–11953, http://dx.doi.org/10.3390/rs70911933.
[41] M.R. James, S. Robson, S. d'Oleire-Oltmanns, U. Niethammer, Optimising UAV topographic surveys processed with structure-from-motion: ground control quality, quantity and bundle adjustment, Geomorphology 280 (2017) 51–66, http://dx.doi.org/10.1016/j.geomorph.2016.11.021.
[42] M.R. James, S. Robson, Straightforward reconstruction of 3D surfaces and topography with a camera: accuracy and geoscience application, J. Geophys. Res. 117 (2012) F03017, http://dx.doi.org/10.1029/2011JF002289.
[43] T.N. Tonkin, N.G. Midgley, Ground-control networks for image based surface reconstruction: an investigation of optimum survey designs using UAV derived imagery and structure-from-motion photogrammetry, Remote Sens. 8 (2016) 786, http://dx.doi.org/10.3390/rs8090786.
[44] K. Kraus, Photogrammetry. Geometry from Images and Laser Scans, second ed., De Gruyter, Berlin, 1997.
[45] C. McGlone, E. Mikhail, J. Bethel, Manual of Photogrammetry, fifth ed., ASPRS, Bethesda, 2004.
[46] F. Agüera-Vega, F. Carvajal-Ramírez, P. Martínez-Carricondo, Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle, Measurement 98 (2017) 221–227, http://dx.doi.org/10.1016/j.measurement.2016.12.002.
[47] Agisoft PhotoScan User Manual, http://www.agisoft.com/pdf/photoscan-pro_1_3_en.pdf (accessed 10.03.2017).
[48] J. Miřijovský, Bezpilotní systémy – sběr dat a využití ve fotogrammetrii, first ed., UP Publishing, Olomouc, 2013.
[49] C.S. Fraser, Automatic camera calibration in close range photogrammetry, Photogramm. Eng. Remote Sens. 79 (2013) 381–388.
[50] M.R. James, S. Robson, Mitigating systematic error in topographic models derived from UAV and ground-based image networks, Earth Surf. Process. Landf. 39 (2014) 1413–1420, http://dx.doi.org/10.1002/esp.3609.
[51] T. Sieberth, R. Wackrow, J.H. Chandler, Automatic detection of blurred images in UAV image sets, ISPRS J. Photogramm. Remote Sens. 122 (2016) 1–16, http://dx.doi.org/10.1016/j.isprsjprs.2016.09.010.
[52] M.J. Westoby, J. Brasington, N.F. Glasser, M.J. Hambrey, J.M. Reynolds, "Structure from Motion" photogrammetry: a low-cost, effective tool for geoscience applications, Geomorphology 179 (2012) 300–314, http://dx.doi.org/10.1016/j.geomorph.2012.08.021.
[53] H. Obanawa, Y. Hayakawa, H. Saito, C. Gomez, Comparison of DSMs derived from UAV-SfM method and terrestrial laser scanning, J. Jpn. Soc. Photogramm. Remote Sens. 53 (2014) 67–74, http://dx.doi.org/10.4287/jsprs.53.67.
[54] M.M. Ouédraogo, A. Degré, C. Debouche, J. Lisein, The evaluation of unmanned aerial system-based photogrammetry and terrestrial laser scanning to generate DEMs of agricultural watersheds, Geomorphology 214 (2014) 339–355, http://dx.doi.org/10.1016/j.geomorph.2014.02.016.
[55] A. Thévenet, A. Citterio, H. Piégay, A new methodology for the assessment of large woody debris accumulations on highly modified rivers (example of two French piedmont rivers), Regul. River. 14 (1998) 467–483, http://dx.doi.org/10.1002/(SICI)1099-1646(1998110)14:6<467::AID-RRR514>3.0.CO;2-X.
[56] H. Piégay, A. Thévenet, A. Citterio, Input, storage and distribution of large woody debris along a mountain river continuum, the Drôme River, France, Catena 35 (1999) 19–39, http://dx.doi.org/10.1016/S0341-8162(98)00120-9.
[57] B. Wyžga, J. Zawiejska, Wood storage in a wide mountain river: case study of the Czarny Dunajec, Polish Carpathians, Earth Surf. Process. Landf. 30 (2005) 1475–1494, http://dx.doi.org/10.1002/esp.1204.
[58] A. Rango, A.S. Laliberte, Impact of flight regulations on effective use of unmanned aircraft systems for natural resources applications, J. Appl. Remote Sens. 4 (1) (2010), http://dx.doi.org/10.1117/1.3474649.
[59] B. Kršák, P. Blišťan, A. Pauliková, P. Puškárová, Ľ. Kovanič, J. Palková, V. Zelizňáková, Use of low-cost UAV photogrammetry to analyze the accuracy of a digital elevation model in a case study, Measurement 91 (2016) 276–287, http://dx.doi.org/10.1016/j.measurement.2016.05.028.
[60] A. Kidová, M. Lehotský, M. Rusnák, Geomorphic diversity in the braided-wandering Belá River, Slovak Carpathians, as a response to flood variability and environmental changes, Geomorphology 272 (2016) 137–149, http://dx.doi.org/10.1016/j.geomorph.2016.01.002.