
3D Mapping Drone

Edwin Lounsbery, Brian Vermillion, and Matthew McHenry

Dept. of Electrical Engineering and Computer Science, University of Central Florida, Orlando, Florida, 32816-2450

Abstract — The objective of this project is to create an unmanned aerial vehicle that has the capability to scan a location and collect location points via a Kinect camera. The collected data will be used to make a 3D map of the scanned area. The design of the drone will support the ability to become autonomous, but initially it will only be manually controlled. In addition to the Kinect, this drone will use a LIDAR in order to detect objects and avoid them with preloaded algorithms in the flight controller. The main computing unit, an embedded Linux device, will have two-way communication with the flight controller to assist in automation as well as location detection in an environment. For easier charging of the battery, the base of the drone will have an induction charging system.

Index Terms — 3D, semi-autonomous, drone, Kinect, LIDAR, mapping, induction charging.

I. INTRODUCTION

Our inspiration for this project started with an interest in three-dimensional imaging and parallel computing. The choice of a drone as the vehicle came from the desire for versatility when moving around an environment. This allows larger areas to be mapped, including hard-to-reach areas like the ceiling. Once those objectives were set, we decided that the project might benefit from some special features such as induction charging, extra collision sensors, and autonomy. Induction charging allows the drone to be charged easily without removing the batteries. One of the charging inductors sits on the base of the drone while the other is part of a base station built specifically for the drone. To help the user avoid collisions with objects, we added a LIDAR. These features can be used to implement autonomous navigation and flight. A drone with these features could also be used to map an unsafe area before people enter. One such use case is mapping a building damaged by an earthquake before rescue crews enter, giving them a clear picture of what they are entering and what obstacles they might face inside. There are many potential uses for this drone system.

II. KINECT CAMERA

The device we built this project around is the Kinect camera, designed and manufactured by Microsoft. The Kinect is an RGB camera that also contains a CMOS camera and an infrared laser. These additional sensors are used to calculate the position of objects within the camera's field of view. This depth camera is similar in concept to a LIDAR, but priced at an affordable $40. We are using this camera together with a drone to create a 3D map of an indoor area.

The depth camera has a maximum range of about 4.5 m, which is well suited to smaller indoor areas. One limiting factor of the depth camera is that it does not work well outdoors: the IR laser in the Kinect is fairly weak, so the projected points get washed out by IR from sunlight. This can be partially solved with polarized camera filters, which allow operation in partial sunlight, but any direct sunlight still washes out the image.

One thing that sets the camera apart from traditional LIDARs is its two-dimensional field of view (FOV). As seen in the specification list below, the Kinect camera has a FOV of 43° in the vertical direction and 57° in the horizontal direction. This allows a very wide area to be viewed in a single frame. In addition, the Kinect is capable of capturing up to 30 frames per second, which provides a large amount of detail for a single area.
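As a rough illustration of that coverage (our arithmetic from the stated FOV and range), a single frame at the 4.5 m maximum range spans about 2 × 4.5 m × tan(57°/2) ≈ 4.9 m horizontally and 2 × 4.5 m × tan(43°/2) ≈ 3.5 m vertically.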

For the cost, the Kinect is a very powerful tool. It has been used in studies at many different universities, NASA has used it to control a robotic arm, and various hacker groups have built projects on it, such as the OpenKinect project. These are the full specifications of the Kinect camera:

FOV vertical: 43°
FOV horizontal: 57°
Resolution: 640x480
Max FPS: 30
Max depth: ~4.5 m
Min depth: 0.4 m
Microphones for voice recognition
Tilt motor
RGB camera
CMOS camera for infrared
Infrared laser
Main power: 12 V
Camera power: 5 V


In our project we are using a stripped-down version of the Kinect. This means that we removed the protective casing, unused audio components, the tilt motor, and everything else not necessary for operating the cameras on the drone.

Data gathered by the Kinect is received through a USB connection, which also provides power. Additional power is needed for a Peltier device on the laser, which is responsible for keeping the laser at a stable temperature. This extra power will be provided either by the main battery or by a small 12 V support battery; we are still testing which option is best for our project.

The Kinect operates at 30 fps and generates about 20 Mbps of data to be processed. Data from the Kinect is transferred to an embedded Linux device called the Parallella, which buffers and processes it.

III. PARALLELLA BOARD

The Parallella board is an embedded Linux device similar to the Raspberry Pi. It is responsible for running the code that controls the Kinect camera and calculating the depth points returned from the camera. It runs a dual-core Zynq-Z7010 ARM processor at 600 MHz.

Under normal circumstances this would not be nearly enough computational power to process the points from the Kinect, but the board includes a 16-core coprocessor that is capable of it. A full list of the board's specifications follows:

Zynq-Z7010 or Z7020 dual-core ARM Cortex-A9 CPU @ 600 MHz
16-core Epiphany coprocessor @ 600 MHz
1 GB RAM
48 GPIO signals
Gigabit Ethernet
Debian Linux
UART pins
I2C/JTAG pins
USB mini connection
5 V power @ 2 A

The Zynq-Z7010 ARM processor is notable because it has an FPGA built into it. This FPGA is preprogrammed with a bitstream that connects the coprocessor to the ARM host processor, and the board ships with libraries for programming the coprocessor.

The coprocessor runs on a different RISC architecture than the host processor. This architecture is designed specifically to support a mesh network between the cores of the processor. The coprocessor was designed with parallel computing in mind, and this project takes advantage of that extra computation power. The coprocessor also shares a portion of the RAM with the host, making it easy to transfer data between the coprocessor and the host.

The host processor has a small set of light responsibilities in our project. First, it manages the Kinect and all of the data coming from it. This data is buffered into a ring buffer that supports asynchronous access, then pulled from the buffer asynchronously and offloaded onto the coprocessor cores for computation. When a computation completes, another thread copies the results from the coprocessor and temporarily saves them to RAM. After a set interval of ten seconds, the computed data is written to disk as a 3D snapshot of the world. The heaviest work the host performs amounts to a long series of memcpy operations. The sketch below illustrates this pipeline.
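As a rough illustration, the following C sketch shows the shape of that pipeline. It assumes POSIX threads; push_frame, offload_worker, and copy_to_coprocessor are hypothetical names (the real transfer to the Epiphany cores goes through the shared-memory interface), and ring-buffer overflow handling is omitted for brevity.

#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define RING_SLOTS   8
#define FRAME_PIXELS (640 * 480)
#define PTS_PER_JOB  1000

typedef struct { uint16_t depth[FRAME_PIXELS]; } frame_t;

static frame_t ring[RING_SLOTS];
static int head, tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

void copy_to_coprocessor(const uint16_t *pts, int n); /* hypothetical wrapper */

/* Kinect callback: copy the raw disparity frame into the ring buffer. */
void push_frame(const uint16_t *depth)
{
    pthread_mutex_lock(&lock);
    memcpy(ring[head].depth, depth, sizeof(ring[head].depth));
    head = (head + 1) % RING_SLOTS;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* Worker thread: pull frames and hand out 1000-point jobs to the cores. */
void *offload_worker(void *unused)
{
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (tail == head)
            pthread_cond_wait(&nonempty, &lock);
        frame_t *f = &ring[tail];
        tail = (tail + 1) % RING_SLOTS;
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < FRAME_PIXELS; i += PTS_PER_JOB) {
            int n = FRAME_PIXELS - i;
            if (n > PTS_PER_JOB)
                n = PTS_PER_JOB;
            copy_to_coprocessor(&f->depth[i], n);
        }
    }
    return NULL;
}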

In addition to communicating with the coprocessor, the host has to communicate with the flight controller, the Pixhawk. This is done over a serial connection: the board has serial pins built in, which connect directly to the Pixhawk's serial ports. Since the Pixhawk is responsible for flight control, it has all of the information on orientation and position. This data is sent to the Parallella board, which uses it to calculate the actual position of the points in the physical world. A sketch of this link appears below.
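The following C sketch opens such a serial link using standard POSIX termios. The device path, baud rate, and pose_packet_t layout are illustrative assumptions; the real Pixhawk stream uses its own message format.

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

typedef struct {
    float roll, pitch, yaw;   /* orientation from the Pixhawk */
    float x, y, z;            /* position estimate */
} pose_packet_t;

int open_pixhawk(const char *path)           /* e.g. "/dev/ttyPS1" (assumed) */
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                          /* raw byte stream, 8N1 */
    cfsetispeed(&tio, B57600);                /* illustrative baud rate */
    cfsetospeed(&tio, B57600);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

/* Blocking read of one pose packet from the flight controller. */
int read_pose(int fd, pose_packet_t *p)
{
    return read(fd, p, sizeof(*p)) == (ssize_t)sizeof(*p) ? 0 : -1;
}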

IV. SOFTWARE

Three programs were written specifically for this project. The first is the main program, which runs on the Parallella and calculates the location points from the Kinect. Programs two and three run on the Pixhawk: one controls the LIDAR and handles collision detection, while the other sends position and orientation information to the Parallella.

In addition to these we use various other open source libraries and programs, most notably MeshLab, Blender, and OpenKinect. MeshLab is a tool for loading and editing the point cloud recorded by the drone; we use it to create a point cloud that is more visually pleasing than the raw capture. Blender is an additional tool we can use to convert the MeshLab output into other formats, such as a 3D model that can be printed. It is a goal of ours to scan a room and 3D print it by the end of the semester.

The Kinect camera is a Microsoft product, and due to Microsoft's policies on proprietary software, the SDK for the Kinect is Windows-only. Our embedded device runs Linux, so we turned to the open source project OpenKinect. This library was created by a group of hackers who reverse-engineered the Kinect, and it works on all platforms.

Writing the software that takes the Kinect data and calculates location points was one of the hardest parts of this project. The data received from the camera is a 640x480 image in which each pixel holds a disparity value. The depth of a point can be calculated using this formula:

Z = f * B / d (1)

where
Z = distance along the camera Z axis
f = focal length of the lens (in pixels)
B = baseline (in meters)
d = disparity (in pixels)
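For instance, with commonly cited Kinect calibration values of roughly f ≈ 580 pixels and B ≈ 0.075 m (illustrative numbers, not our measured calibration), a disparity of 20 pixels gives Z = 580 × 0.075 / 20 ≈ 2.2 m.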

Both the focal length and the baseline are known values for the Kinect, which makes calculating depth values straightforward. The tricky part is getting the X and Y coordinates from the camera. Those values are based on the Z values and the pixel location, as seen below.

x = (i - w / 2) * (z + minDistance) * scaleFactor (2)

y = (j - h / 2) * (z + minDistance) * scaleFactor (3)

Here i and j are the pixel indices, w and h are the image width and height (640 and 480), and z is the depth from Eq. (1), which is used directly as the third coordinate.

All of these values are expressed in millimeters. While the Kinect claims millimeter accuracy, the returned data is fairly noisy, so we decided to round to the centimeter level.
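To make the math concrete, here is a small C sketch of Eqs. (1)-(3) together with the centimeter rounding just described. The function names are ours; f, B, minDistance, and scaleFactor are the calibration inputs named in the equations.

#include <math.h>

#define W 640
#define H 480

typedef struct { float x, y, z; } point3_t;

/* Eq. (1): depth from disparity. f in pixels, B in meters, d in pixels. */
static float depth_from_disparity(float f, float B, float d)
{
    return f * B / d;
}

/* Eqs. (2)-(3): project pixel (i, j) with depth z into camera space. */
static point3_t pixel_to_point(int i, int j, float z,
                               float minDistance, float scaleFactor)
{
    point3_t p;
    p.x = (i - W / 2.0f) * (z + minDistance) * scaleFactor;
    p.y = (j - H / 2.0f) * (z + minDistance) * scaleFactor;
    p.z = z;                    /* depth is used directly */
    return p;
}

/* Round a millimeter value to the nearest centimeter, as in the text. */
static float round_to_cm(float mm)
{
    return roundf(mm / 10.0f) * 10.0f;
}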

These calculations are all performed on the coprocessor of the Parallella board. After the raw data is retrieved from the Kinect, it is stored in a thread-safe ring buffer. Another thread takes 1000 data points from the buffer and puts them in a shared memory area that the coprocessor can access. Each of the 16 coprocessor cores works on its own set of 1000 data points independently, which greatly speeds up computation. After the points are calculated, they are buffered again and then offloaded to a text file on disk.

The data calculated by these formulas represents points relative to the camera, but if the drone is going to be moving around, the points need to be expressed relative to an origin in the world. To accomplish this we get telemetry from the drone about its location and orientation through serial communication. The Pixhawk automatically tracks position and orientation in the world, so we wrote a program to run on the FMU that takes this information and transmits it to the Parallella over serial. This additional information lets us compute the absolute positions of the points instead of positions relative to the camera.

Since the position estimate is based mainly on an accelerometer and gyroscope, it drifts over long periods of time. To compensate, we take 10-second "snapshots" of the world: every 10 seconds the origin is reset to the current position, and future calculations are based on that starting point. Past snapshots are saved to disk so we can view them later. This keeps inaccuracies in the position calculations from accumulating. After the flight, we can merge these snapshots into one model using MeshLab.
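Assuming the pose arrives as a rotation matrix R and a position t (a simplification of the real telemetry format), the world transform and the 10-second origin reset look roughly like this sketch:

typedef struct { float x, y, z; } vec3_t;

static vec3_t snapshot_origin;   /* reset every 10 seconds */

/* Transform a camera-relative point p into the current snapshot frame:
 * w = R * p + (t - snapshot_origin). */
static vec3_t camera_to_world(const float R[3][3], vec3_t t, vec3_t p)
{
    vec3_t w;
    w.x = R[0][0]*p.x + R[0][1]*p.y + R[0][2]*p.z + (t.x - snapshot_origin.x);
    w.y = R[1][0]*p.x + R[1][1]*p.y + R[1][2]*p.z + (t.y - snapshot_origin.y);
    w.z = R[2][0]*p.x + R[2][1]*p.y + R[2][2]*p.z + (t.z - snapshot_origin.z);
    return w;
}

/* Called on each 10-second boundary: future points are measured from the
 * drone's position at that instant, so drift cannot accumulate. */
static void reset_snapshot_origin(vec3_t current_position)
{
    snapshot_origin = current_position;
}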

Another program running on the Pixhawk is the LIDAR controller. It receives packets over a serial connection that report where the LIDAR is pointing and how far away the detected point is. Using this data we implement safeguards to prevent the drone from flying into objects: the program monitors the distances received from the LIDAR and checks whether any are too close. If something is detected, the drone is put into stationary mode, which halts all movement.
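In outline, the safeguard can be sketched in C as follows; the packet layout, read_lidar_packet, and hold_position are hypothetical stand-ins for the real serial protocol and flight-mode command.

#include <stdbool.h>
#include <stdint.h>

#define MIN_CLEARANCE_MM 500   /* illustrative threshold */

typedef struct {
    uint16_t angle_deg;        /* where the LIDAR is pointing */
    uint16_t distance_mm;      /* range to the detected point */
} lidar_packet_t;

extern bool read_lidar_packet(lidar_packet_t *pkt); /* hypothetical serial read */
extern void hold_position(void);                    /* hypothetical mode switch */

void collision_watchdog(void)
{
    lidar_packet_t pkt;
    for (;;) {
        if (!read_lidar_packet(&pkt))
            continue;                       /* skip dropped packets */
        if (pkt.distance_mm > 0 && pkt.distance_mm < MIN_CLEARANCE_MM)
            hold_position();                /* stationary mode: halt movement */
    }
}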

V. LIDAR

To have semi-autonomous capability we need to be able to detect objects around the drone. Our first choice was to attach several ultrasonic sensors to the drone and measure distance along all three axes. Early in the project we noticed a timeout issue with the sensors, which eventually led us to scrap them in favor of a LIDAR.

Our major concerns for detection are up, down, left, right, and straight in front of the drone. The Kinect takes care of detection in front, which leaves the LIDAR to cover the rest. In our testing we were able to collect data at a rate of 360 points per revolution at 200-300 revolutions per minute. We noticed that with a 3 V motor input we had a data loss of about 30%; as we increased the voltage the loss increased, with 100% loss occurring at 5 V. This is because the LIDAR spins too fast at higher voltages.


This error is acceptable to us for two reasons: (1) we only need four points per revolution, and (2) the drone will not be travelling fast, so accurate data arrives more rapidly than needed. The data is transmitted to the Pixhawk as well as the Parallella board for location and object detection.
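As a sanity check using the numbers above: even at 200 rpm with 30% loss, the LIDAR returns about 360 × 200 × 0.7 ≈ 50,000 valid points per minute, vastly more than the four points per revolution we actually need.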

VI. DRONE AND SUPPORTING HARDWARE

A drone design has several subcomponents. First, the body of the drone can be built from many materials, whether wood, polycarbonate, or carbon fiber. Every material has its own advantages and disadvantages, and for our purposes carbon fiber was the right choice: its strength-to-weight ratio allows the drone to carry more.

We knew that our drone would have several components not found on store-bought drones and kits, so we examined all of the requirements of the project as well as restrictions that might limit our choices. We chose a 650 mm frame-size drone kit, to which we added two more layers and a custom battery mount to house all of the electrical components.

The second subsection is the motors. To carry the payload of the hardware, we calculated the total weight of the drone to determine how much thrust is needed for it to fly. Our estimated weight was 2.55 kilograms, which we doubled and then increased by 20% for motor inefficiency, giving a requirement of 6.12 kilograms of thrust. The EMAX MT4114 motors produce a total of 8.08 kilograms of thrust at full throttle at 24 volts.
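Written out, that thrust requirement is:

T = 2 × 1.2 × 2.55 kg = 6.12 kg (4)

so the four MT4114 motors' combined 8.08 kg of thrust leaves a comfortable margin.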

We also installed four electronic speed controllers to control the speed and orientation of the drone via pulse width modulation. They are rated at 40 amperes max per motor, which is far more current than our motors will ever need. When selecting the electronic speed controllers we applied a 20% margin over the maximum current draw of the motors, giving a requirement of 18.5 amperes. We could not find controllers rated just above that requirement at the correct voltage, so we chose ones rated well beyond any current the motors will ever draw.
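Working backward from the stated requirement (our arithmetic, not a datasheet figure), the implied maximum draw per motor is:

I = 18.5 A / 1.2 ≈ 15.4 A (5)

so the 40 A controllers are rated at well over twice what the motors can draw.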

Our next objective was to choose a flight controller that gives us stable flight. We selected the Pixhawk from 3D Robotics at a cost of $200. While expensive for a flight controller, it gives us the flexibility we need as well as the computing power for stable flight. With an extensive online support community and industry-leading hardware specifications, the Pixhawk was the obvious choice. It features a 32-bit ARM Cortex processor at 168 MHz with 256 KB of RAM and 2 MB of flash, plus a failsafe coprocessor. The Pixhawk also has plenty of I/O pins for peripherals, which is extremely helpful since we attach the Parallella board and LIDAR in addition to a compass and global positioning system.

The Pixhawk sends and reads signals as pulse width modulation, so for the transmitter to issue commands to the Pixhawk, a pulse position modulation (PPM) encoder is needed to convert the signal. A compass and global positioning system (GPS) module were added to enable autonomous flight in outdoor conditions via waypoints. To view what the drone is "seeing" we installed a first-person-view (FPV) camera. This camera sends video to the base station via a 5.8 GHz 200 mW transmitter, which is rated for a 500-meter range.

Next, a custom power distribution board was fabricated to power the 5 volt and 3.3 volt components added to the drone. Communication between different components posed a challenge; we used a USB hub to connect the Microsoft Kinect, the Parallella board, and a Wi-Fi dongle. Our main link to the drone is a 2.4 GHz transmitter and receiver with eight channels of communication. With these channels we can set up the main controls such as yaw, throttle, pitch, and roll, as well as special controls like altitude hold or stabilize. A base station was also built to display the FPV camera feed and to house the induction charging system.

VII. INDUCTION CHARGING AND BATTERY

Charging is essential to any battery-powered device. Most mobile devices have batteries, and although the concept is hundreds of years old, batteries are still being reinvented. Lithium-ion batteries are among the newer options; lithium polymer batteries in particular are great for mobile devices because of their light weight. A Turnigy 5800 mAh three-cell lithium polymer battery supplies the drone's many devices. To charge the Turnigy battery, power is routed from a wall outlet.

Tapping the outlet begins an extraction and conversion process that supplies an induction interface to transfer power to the drone. A 1FD91 power cord rated at 15 A carries 120 VAC through a fuse to a branch of three 100 kΩ resistors that precede a transformer; the fuse acts as a circuit breaker, and the resistors isolate current from the transformer. A center-tapped transformer feeds 24 VAC through two switches to a conversion board. AC/DC conversion followed by DC/AC conversion is required to transmit enough voltage across a set of mutual inductors. A full-wave rectifier designed for center-tapped transformers converts and splits the transformer output into +22 VDC and -22 VDC rails with a slight ripple. Two linear regulator circuits built around the LM7815 and LM7915 ICs complete the AC/DC conversion to +15 V and -15 V; the several volts above the regulators' outputs leave adequate headroom for their dropout.

Inductors transmit continuously alternating voltage to mutual inductors at efficiency rates that depend on the frequency of the waveform. At 200 kHz the efficiency rate for the 535-12508-ND inductor is approximately 36%, and this frequency is obtainable with a Wien-bridge oscillator. Measured output voltages for a 20 V peak-to-peak sinusoid input are shown below:

Input waveform: 20 V peak-to-peak sinusoid

Frequency    AC Max Voltage
200 kHz      3.60 V
1 MHz        3.64 V
5 MHz        3.68 V
10 MHz       3.60 V
15 MHz       4.00 V
20 MHz       4.60 V
25 MHz       4.80 V

The DC/AC conversion centers on a Wien bridge built around an operational amplifier that is biased by the linear regulator circuits. A popular and suitable choice for the operational amplifier is the TL082 manufactured by Texas Instruments. Biased at ±15 volts, the rail-to-rail voltage is 15 volts; therefore, for one Wien-bridge oscillator the maximum voltage transmitted across a pair of 535-12508-ND mutual inductors is 5.4 VAC. Ideally, then, the induction interface will consist of three pairs of 535-12508-ND inductors. A 200 kHz output is obtained by setting the resistor-capacitor combinations connected to the TL082's non-inverting terminal: resistor values between 3.0 kΩ and 5.1 kΩ combined with 100 pF of capacitance achieve this frequency.
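As a check on those component values, a standard Wien-bridge oscillator runs at f = 1 / (2πRC). Reading "combined" as the two resistances in series (our assumption):

f = 1 / (2π × 8.1 kΩ × 100 pF) ≈ 197 kHz (6)

which is close to the 200 kHz target.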

Across the set of mutual inductors is a charge controller mounted on the drone. The circuit requires 13 to 16 VDC to function properly, which is achieved through the AC/DC and DC/AC conversion stages fed by the transformer and the transmission across the mutual inductors. A printed circuit board receives the alternating signal at 30 volts peak-to-peak; a full-wave bridge rectifier at its input converts the signal to a direct current voltage meeting the 13-16 V specification of the charge controller circuit. The circuit is designed around an LM3420 charge controller manufactured by Texas Instruments, serving as a functioning battery charging circuit.

There are five main loops, two metal oxide semiconductor field effect transistors, one bipolar junction transistor, and assorted resistors, capacitors, and diodes. The circuit works by reacting to the battery's present voltage. When the battery is below its maximum voltage, the LM3420 charge controller is off and maximum current is supplied to the battery; when the battery reaches its maximum voltage (constant voltage), the LM3420 activates. Once activated, the LM3420 supplies a determined amount of current that raises the voltage across resistor five, which subtracts voltage from the collector-emitter terminals of the bipolar junction transistor. Because the bipolar junction transistor's current is independent of its voltage, the circuit stays reactive to the battery's voltage, and as the battery approaches full charge the charge controller emits more current. Since the charging current flows from the p-type MOSFET, whose gate voltage equals the collector-emitter voltage of the bipolar junction transistor, the gate begins to close as a function of the controller's current, lowering the charging current, ideally at an exponential rate.

At the output terminals of the printed circuit board, current from the MOSFET transfers to the Turnigy lithium polymer battery. That battery is the power supply for the drone's flight controller, electronic speed controllers, telemetry, PPM encoder, LIDAR module, microprocessor, and the Kinect depth camera.

The circuit could also be useful in a workplace environment. Interchangeability of the transformer, conversion board, and induction interface allows for multiple applications. Most applicable to this project, an interchangeable conversion board could supply a multitude of battery charge controller circuits for lithium-ion batteries of one to four cells; these circuits would obtain their specified voltage requirement across the set of mutual inductors.

VIII. CONCLUSION

Overall, our project consists of the drone, the LIDAR, the software and Parallella board, and the induction charging system. Brought together, they form a system capable of 3D mapping an area.

IX. BIOGRAPHIES

Edwin Lounsbery is an Electrical Engineering student at the University of Central Florida. He will be pursuing employment within the profession and continuing his studies. His areas of interest are control systems, sensor technology, and simulation.

Matthew McHenry is an Electrical Engineering student from Pinellas County, Florida. After graduating from the Department of Electrical and Computer Engineering at the University of Central Florida, he will pursue a career in a related line of work, ideally in the Orlando metropolitan area. His interests include studying semiconductor devices and building and designing wireless systems.

Brian Vermillion is a Computer Engineering student at the University of Central Florida. He will pursue a job in software engineering after graduation and eventually work his way into a graduate program, preferably at the University of Central Florida.

X. ACKNOWLEDGMENT

Funding provided by PolyGlass USA. Thank you!
