
3D Printed Maps from Global DEMs, LiDAR, and UAV Sourced Photogrammetry

Thomas J. Pingel and Earle W. Isibue, Northern Illinois University

2017 Annual Meeting of the Illinois Geographical Society, University of Illinois Urbana-Champaign

April 29, 2017

Research Objectives

• Develop open source methodology to collect, process, render, and print 3D maps of the environment.

• Use UAVs, terrestrial and airborne lidar, DEMs.

• Create both indoor and outdoor maps.

• Applications:

– Printed maps for people with visual impairments

– Enhance and support expert decision making

– Improve education on terrain and its influence on a variety of geographic processes


UAV-based data capture


• We fly a 3DR Iris+ equipped with a GoPro Black (4K, 30 fps) camera and Tarot 2-axis stabilizing gimbal.

• Flight time up to 20 minutes.

• Range:

– Z: 400 feet (FAA)

– X/Y: half mile

• Generally capture 1 photo per second to mesh well with image processing software.

• The Iris+ is now 3D printable!

3D Printed UAVs

UAV platforms are evolving rapidly.

Our lab is developing 3D printable drones for fieldwork and training, to reduce costs, improve the potential for field repairs, and enhance learning opportunities.


We use Tower (for Android) and Mission Planner (for Windows) to plan and fly autonomous missions.

Mission Planner is used to fuse the drone’s GPS data with the GoPro images to georeference them.

GPS log and images are synchronized via a time offset calculated from a post-hoc timestamp comparison of a captured image.
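For illustration only, here is a minimal sketch of that time-offset idea in Python (this is not the Mission Planner implementation): one captured image whose true GPS time is known yields the camera-to-GPS clock offset, which is then applied to every image timestamp before looking up the nearest fix. The function names, the data layout, and the nearest-fix matching strategy are assumptions.

# Sketch only: match images to GPS fixes using a measured clock offset.
from bisect import bisect_left
from datetime import datetime, timedelta

def clock_offset(image_time: datetime, gps_time: datetime) -> timedelta:
    # Offset between the camera clock and the GPS clock, measured once from a
    # single captured image whose true GPS time is known.
    return gps_time - image_time

def geotag(image_times, gps_log, offset):
    # image_times: datetime stamps from the camera clock, one per photo.
    # gps_log: (datetime, lat, lon, alt) tuples sorted by GPS time.
    # Returns one (image_time, lat, lon, alt) tuple per image, using the fix
    # closest to the offset-corrected timestamp.
    fix_times = [t for t, *_ in gps_log]
    tagged = []
    for t in image_times:
        corrected = t + offset
        i = min(bisect_left(fix_times, corrected), len(gps_log) - 1)
        if i > 0 and abs(fix_times[i - 1] - corrected) <= abs(fix_times[i] - corrected):
            i -= 1
        tagged.append((t, *gps_log[i][1:]))
    return tagged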



We use Pix4D to create 3D point clouds and orthomosaics from the imagery using a technique called Structure from Motion (SfM).

Pix4D is the engine behind ESRI’s Drone2Map.

It is highly resource intensive, and it takes several hours to process ~500 images.

Some researchers are using hosted machines on Amazon to accelerate the process.


Microsoft’s free Image Composite Editor (ICE) is a passable option for some aerial work.

This is a stitched image of a farmstead in Iowa taken with a GoPro at 200 feet.

303 images stitched to 22,000 x 23,000 pixels.

Acquisition time was 12 minutes.

Effective resolution is about 1 cm.

Processing time: 5 minutes.


Microsoft ICE also has some fun projections for image mosaics.


We use the open source software CloudCompare to further process the point cloud, and to render it as a mesh.
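CloudCompare handles this step interactively in our workflow; purely as an illustration of the same point-cloud-to-mesh step in scriptable form, the sketch below uses the open source Open3D library instead (an alternative tool, not the one named above). File names and parameter values are assumptions.

# Sketch only: Open3D stands in for the interactive CloudCompare meshing step.
import open3d as o3d

# Load the point cloud exported from the SfM or lidar workflow (assumed file name).
pcd = o3d.io.read_point_cloud("site_pointcloud.ply")

# Thin the cloud and estimate normals, which surface reconstruction requires.
pcd = pcd.voxel_down_sample(voxel_size=0.05)  # 5 cm grid, assuming units of meters
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.25, max_nn=30))

# Poisson surface reconstruction turns the points into a printable mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh.compute_triangle_normals()  # needed before STL export
o3d.io.write_triangle_mesh("site_mesh.stl", mesh)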


We also use terrestrial lidar for point cloud capture using the Velodyne HDL-32E.

Compared to airborne lidar collection, it boasts higher spatial and temporal resolution.

Most applicable for pedestrian use and street capture, although UAV mounting is coming soon for our lab.

The instrument captures scanlines which must be assembled (by the end user!) into a full point cloud.

Point clouds from these units can be assembled using Simultaneous Localization and Mapping (SLAM).

Lightweight, consumer-grade GPSes have (until recently) not been of sufficient quality to co-register frames, and they do not work well indoors.

SLAM algorithms register each frame to the last to build a map and to track position.

However, like dead reckoning, guesses can drift over time.
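As a rough illustration of that register-each-frame-to-the-last idea (not the Kaarta or ROS pipelines shown below), here is a sketch using point-to-plane ICP from the open source Open3D library. Parameter values are assumptions, and loop closure is omitted, so the accumulated pose drifts exactly as described.

# Sketch only: chained frame-to-frame ICP as a stand-in for a SLAM front end.
import copy
import numpy as np
import open3d as o3d

def register_frames(frames, voxel=0.1, max_dist=0.5):
    # frames: list of o3d.geometry.PointCloud scans in capture order.
    pose = np.eye(4)                      # current frame's pose in map coordinates
    combined = o3d.geometry.PointCloud()  # the growing map
    prev = None
    for frame in frames:
        cur = frame.voxel_down_sample(voxel)
        cur.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=3 * voxel, max_nn=30))
        if prev is not None:
            # Align the new frame to the previous one; small per-frame errors
            # accumulate over a long trajectory, like dead reckoning.
            reg = o3d.pipelines.registration.registration_icp(
                cur, prev, max_dist, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            pose = pose @ reg.transformation
        combined += copy.deepcopy(cur).transform(pose)
        prev = cur
    return combined, pose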

Video from Kaarta

SLAM using Kinect sensor

SLAM using the Robot Operating System (ROS)


The small, open source Reach RTK GPS will enable both precise positioning and orientation.

This will enable the assembly of point clouds while either eliminating or reducing the need for SLAM, and will improve results outdoors over large footprint areas.


SfM vs. Lidar

• Structure from Motion

– Inexpensive and relatively fast
– Easy to pair with UAV
– Native color (RGB) capture
– Works best outdoors

• Terrestrial lidar

– Moderately expensive
– Requires SLAM or high-precision GPS
– Works indoors and outdoors
– Limited range (~100 meters)

• Airborne lidar

– Either very expensive (custom) or free (if already flown)
– Very precise, few artifacts in image
– Penetrates canopy
– Poor coverage of building faces


Interior scan using Velodyne terrestrial lidar and assembled using Kaarta software.


The scans are integrated in CloudCompare using automated and manual techniques.

The meshes are sent to Cura and printed on our LulzBot TAZ 6 printer.


We are using similar techniques to develop 3D printed models of large-extent terrain.

The rasters are first processed using a Python script to parse the scene into layers (0.25 mm thick) to improve print quality. They are saved as PNG image files, with an integer value indicating the layer.

Cura handles the conversion of PNG to mesh.

They are printed in sizes up to 280 mm.
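The sketch below illustrates that layering step in Python; it is not our exact script, and the file names, total model height, and choice of libraries (rasterio, NumPy, Pillow) are assumptions.

# Sketch only: quantize a DEM into 0.25 mm print layers and save as a PNG.
import numpy as np
import rasterio
from PIL import Image

LAYER_MM = 0.25         # print layer thickness
MODEL_HEIGHT_MM = 30.0  # assumed total relief of the model (120 layers, fits in 8 bits)

with rasterio.open("terrain_dem.tif") as src:
    dem = src.read(1).astype(float)
    if src.nodata is not None:
        dem[dem == src.nodata] = np.nan

# Stretch elevation to model millimeters, then quantize to whole print layers.
zmin, zmax = np.nanmin(dem), np.nanmax(dem)
height_mm = (dem - zmin) / (zmax - zmin) * MODEL_HEIGHT_MM
height_mm = np.nan_to_num(height_mm, nan=0.0)              # drop nodata to the base
layers = np.round(height_mm / LAYER_MM).astype(np.uint8)   # integer layer index

# Grayscale PNG with pixel value = layer number; Cura maps brightness back to height.
Image.fromarray(layers, mode="L").save("terrain_layers.png")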

In cooperation with NIU’s Vision Program, we are developing 3D maps to help people with visual impairments better learn their environments, and expect to print models of neighborhoods and indoor spaces.

Many issues pertaining to data cleanup, model construction, appropriate scale, and cartographic generalization need to be solved.


In a project inspired by the Augmented Reality Sandbox, we are also investigating projecting visualizations onto 3D printed maps.

In this way, any geographic variable that interacts with terrain can be highlighted.

Migration, weather, climate, and pollution are among the topics we are currently investigating.


Conclusions

• Lidar and photogrammetry can be combined for merged interior-exterior modeling, but there is still a significant amount of manual alignment and cleaning necessary for a quality finished product.

• Lightweight, consumer-grade RTK can help constrain the SLAM process or eliminate its need altogether, leading to better mobile lidar-derived point clouds.

• Automated efforts for 3D printed maps, such as Iowa State’s TouchTerrain, are a great start, but as with other cartography, a human can help refine the product.

• Future work will test generalization and scale effects on map readability for sighted and blind individuals.

• Special thanks to the Illinois Geographical Society and Northern Illinois University for grant support on this project.
