
EE492 Senior Project Final Report

Multi Robot Exploration Using Minik Robots

Mahmut Demir

Project Advisor: H. Işıl Bozma

Evaluation Committee Members:

Ayşın Ertüzün

Yağmur Denizhan

Department of Electrical and Electronic Engineering

Boğaziçi University

Bebek, Istanbul 34342

08.04.2012


Contents

1 Introduction

2 Objectives

3 Related Literature
   3.1 Map Building
   3.2 Multi-Robot Cooperation
   3.3 Map Types and Performance

4 Approach and Methodology

5 Work done in EE491 Senior Design Project
   5.1 Work on Minik II System
       5.1.1 Comprehensive Testing & Control
       5.1.2 Re-wiring of Minik II Robots
       5.1.3 Design of Electronic Cards
       5.1.4 Adding Range Sensor & Analog/Digital Converter Module
       5.1.5 Minik Development and Control Software (MinikDCS)
       5.1.6 Mid Level Motion Control of Minik Robots
       5.1.7 Integrating Teleoperation into our MinikDCS Software
       5.1.8 Integrating CMU Cam3
   5.2 Robotic Mapping
       5.2.1 Feature Extraction
       5.2.2 Object matching
       5.2.3 Depth Calculation
       5.2.4 Map construction
   5.3 Experimental Results
       5.3.1 Connected Components
       5.3.2 Depth calculation

6 Work done in EE492 Senior Design Project
   6.1 Work on Minik II System
       6.1.1 Integrating Surveyor Cam
       6.1.2 Designing All-in-One Control panel
   6.2 Robotic Mapping
       6.2.1 Connected Components in HSV Space
       6.2.2 Line detection for depth calculation
       6.2.3 SURF features and depth calculation from SURF feature points
   6.3 Experimental Results
       6.3.1 Connected Components

7 Economic, Environmental and Social Impacts

8 Cost Analysis

A Minik II – Wiring Diagram


List of Figures

1  Minik II robots
2  An example of connected components
3  Ratio comparison for matching objects
4  Ratio of intersected areas in two images
5  Area of the objects gets bigger proportional to distance travelled into scene
6  Parameters used in calculation of distance from two consecutive images
7  Images that are used in connected component analysis
8  Images transformed into binary with a certain threshold
9  Contours of the objects
10 Connected components detected in images
11 Eliminated objects in the thresholding process
12 Connected objects that are above the threshold and touching each other
13 Distance calculation with travelled distance: 30cm
14 Distance calculation with travelled distance: 30cm
15 Distance calculation with travelled distance: 40cm
16 Distance calculation with travelled distance: 30cm
17 Distance calculation with travelled distance: 30cm
18 Distance calculation with travelled distance: 30cm
19 Comparison of calculated distances and groundtruth values
20 PCB design of control panel
21 Hue, Saturation and Value components of an image
22 Connected components extracted for different hue values (from left: 10, 20, 60, 110)
23 Set of test images to compare
24 Comparison of grayscale and HSV space extraction (HSV space is below)
25 Hough transform applied to two consecutive images
26 SURF features found in given image
27 SURF features are matched in two consecutive images
28 Diagram showing calculation of depth from two consecutive images
29 Depth calculation for case 1
30 Depth calculation for case 2
31 Depth calculation for case 2
32 Depth calculation for case 3
33 Depth calculation for case 3
34 Depth calculation for case 3
35 Process of finding connected components

List of Tables

1 List of robots and hardware embodied in robots
2 Class structure of MinikDCS software
3 Midlevel Motion Commands
4 The components of Minik II robot, physical properties and cost of each one


Acknowledgements

I would like to thank my advisor Prof. Dr. Işıl Bozma for her encouragement, understanding and guidance throughout my project. I also thank my evaluation committee members Prof. Yağmur Denizhan and Prof. Dr. Ayşın Ertüzün. I am also grateful to Özgür Erkent, Haluk Bayram, Hakan Karaoğuz and Ahmet Unutulmaz, who always helped us with their experience and wisdom.


1 Introduction

Autonomous exploration of unknown areas is an important robotics task that is still being studied extensively. Many different approaches have been offered and numerous algorithms have been written in order to explore an environment effectively. In recent years, the use of multi-robot systems has been advocated for this task since, with a team of robots, the overall performance can be much faster and more robust [1]. Each robot explores only some part of the environment. A comprehensive map of the area can then be created by the robots communicating with each other and exchanging the maps they have created. The goal of this project is to start investigating the use of multiple robots in map building using the Minik II robots shown in Fig. 1.

Figure 1: Minik II robots.

2 Objectives

This project addresses the problem of exploration and mapping of an unknown area by multiple robots. As our robots are very small, we have some limitations in terms of the number of sensors and positioning devices. Therefore, we will use the image sequences taken by the onboard camera. Motor encoder information will be used to calculate the distance travelled and to position the robots.

In the first phase of our project, the EE491 Senior Design Project, we worked on building a map of the environment with a single robot. Each robot is intended to build a hybrid map of its environment. In this second term, we are going to improve our algorithm for map building by a single robot. Then we will move on to the multi-robot cooperation task for efficient exploration. Cooperation between robots will be through wireless communication. Each robot will explore some part of the unknown environment and then share it with the other robots to create a bigger perspective of the environment.

3 Related Literature

3.1 Map Building

There is extensive work on mobile robot exploration and mapping. There are a number of ways of efficiently mapping and exploring the environment depending on what kind of sensors are used. Such sensors may be one-dimensional (single beam) or 2D (sweeping) laser rangefinders, 3D flash LIDAR, 2D or 3D sonar sensors and one or more 2D cameras. Recently, there has been intense research into visual mapping and localization using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices [2].

Using visual clues to map the environment is a challenging task, but it has many advantages. First of all, it provides a real image of the environment. Secondly, the use of cameras has been increasing due to decreasing prices and increasing availability. At the same time, cameras are difficult to use in mapping because they do not provide depth information, they are easily affected by changing lighting conditions, and image processing algorithms may be computationally costly. The first step in the map making process is recognizing or extracting the features of the objects in the environment. Candidate objects should include features that enable them to be matched in subsequent frames. If they have such properties, then they can be matched and a more comprehensive image of the environment can be constructed. Various methods are used for feature extraction, including edge detection, corner detection, and SIFT or SURF features [3].

SIFT features are local, based on the appearance of the object at particular interest points, and invariant to image scale and rotation. They are also robust to changes in illumination, noise, and minor changes in viewpoint. In addition to these properties, they are highly distinctive, relatively easy to extract, allow for correct object identification with low probability of mismatch and are easy to match against a (large) database of local features. Object description by a set of SIFT features is also robust to partial occlusion; as few as 3 SIFT features from an object are enough to compute its location and pose. The disadvantage of the SIFT algorithm is that it is computationally costly. SURF (Speeded Up Robust Features) is another robust image detector and descriptor. It is partly inspired by the SIFT descriptor. The standard version of SURF is several times faster than SIFT and is claimed by its authors to be more robust against different image transformations than SIFT [4]. It can be used in computer vision tasks just as SIFT is.

Corner detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image. Corner detection is frequently used in motion detection, image registration, video tracking, image mosaicing, panorama stitching, 3D modelling and object recognition.

Corner detection overlaps with the topic of interest point detection. Edge detection is also a fundamental tool in image processing and computer vision, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities.

Candidate objects have features that distinguish them from the other objects. In two subsequent images, these objects need to be matched. A reliable matching algorithm requires incorporating more than one feature match per object, because features tend to change as the image changes. In this respect, SIFT or SURF features are very reliable and invariant to image transforms.

A real-time algorithm that can recover the 3D trajectory of a monocular camera and map the environment simultaneously is presented in [3]. This work is important because it is the first successful application of the SLAM methodology from mobile robotics to the pure vision domain of a single uncontrolled camera. The key concept of the approach is a probabilistic feature-based map, representing at any instant a snapshot of the current estimates of the state of the camera and all features of interest and, crucially, also the uncertainty in these estimates. They use SIFT features to match the images.

In [5], a simple method to track objects is presented. Here, the objects are the connected components or 'blobs' in the image data. Blobs are matched against hypothesised objects in a five-dimensional space, parametrizing position and shape.

3.2 Multi-Robot Cooperation

After completing mapping with a single robot, we are going to employ multiple robots in this task. It is argued that using multiple robots instead of a single one would increase accuracy and offer many advantages, such as in 'Robust Exploration' [6], where inaccuracies that accumulate over time from dead reckoning errors are reduced.

3.3 Map Types and Performance

Another taxonomy of map making is based on the type of map constructed, namely whether it is a metric map, a topological map or a hybrid. The approaches used in constructing each type of map also vary depending on the reasoning used. One traditional approach using metric maps is based on probabilistic methods such as Extended Kalman filtering, as in [7]. Topological maps, on the other hand, use graph-based approaches. In order to estimate overall performance, several scoring techniques exist; scoring the map quality in terms of metric accuracy, or scoring the skeleton accuracy rather than metric accuracy, are frequently used.

4 Approach and Methodology

The project consists of two parts:

• Map building with a single Minik II robot

• Multi-robot exploration and map building

In the EE491 Senior Design Project we focused on the first issue, whereas in EE492 we will focus simultaneously on the second issue and on improving the algorithms for single-robot map building. Work done in the EE491 Senior Design Project is explained in Section 5. Work done so far in the EE492 project is explained in Section 6.

5 Work done in EE491 Senior Design Project

5.1 Work on Minik II System

Map building with a single robot required the following:

• Redoing all the wiring in a Minik II robot for increased robustness

• Comprehensive testing and debugging of the robot and its navigation capabilities


5.1.1 Comprehensive Testing & Control

As stated earlier, my project is a continuation of previous work done on the Minik II robots. Therefore, examining and understanding the previous work was the first step. Testing of the robots was done both on hardware and software. The current states of all the robots are shown in Table 1.

Sensor                    Robot1             Robot2             Robot3             Robot4             Robot5
Camera                    SRV                CMU Cam            CMU Cam            SRV                Stereo SRV
Motor Board               3                  3                  3                  3                  5
Computer                  EPIA P700/Pico PC  EPIA P700/Pico PC  EPIA P700/Pico PC  EPIA P700/Pico PC  EPIA P700/Pico PC
Distance sensor           Sharp IR Ranger    Sharp IR Ranger    Sharp IR Ranger    Sharp IR Ranger    Sharp IR Ranger
Voltage regulator         3                  3                  3                  3
Wireless Adapter
Battery/Voltage Display

Table 1: List of robots and hardware embodied in robots

5.1.2 Re-wiring of Minik II Robots

The complex wiring configuration inside the robots makes working on them difficult and causes the operation of the robot to be unstable. Moreover, in order to add new modules and sensors we needed cleaner wiring inside the robot. First, the wiring diagram of the robot was generated, as given in Appendix A. The wiring of the electrical system was then completely redone in order to allow easier dismantling of the electronic components when required. We are now redesigning the voltage regulator circuit and NOR-gate package to further increase space inside the robot.

We have also provided connection drawings for the new design of the robots. We have slightly altered the placement of the parts of the robot as we need more space to place new sensors.

5.1.3 Design of Electronic Cards

I am also supervising an undergraduate student in the design of new cards for Minik II. The new card is required for routing the RS232 serial port to internal/external sockets and also for reading encoder data from the motors. Moreover, we are going to redesign the voltage regulator circuit to decrease its dimensions.

5.1.4 Adding Range Sensor & Analog/Digital Converter Module

Robots interact with the environment by using sensors embodied on their body, and they usually have more than one sensor as the type of environment property varies (e.g. color, distance). As our project will be mainly focused on extraction of a map of the environment, we will basically need both range information and visual clues from the environment. The purpose of the previous work was distance measurement using only a camera. Although we are going to utilize the previous algorithm to some extent, we also plan to use range information, which will be obtained by an infrared range sensor embodied on the robots. Reading measurements from the range sensors by using the analog-to-digital converter module of the motor cards is completed.

5.1.5 Minik Development and Control Software (MinikDCS)

We created software that includes the modules related to the Minik robots. These modules include the following:

• Motor control

• Teleoperation of robots

• Camera and image processing module

Using this software we can control these three modules at the same time. Each runs in a separate thread and does not block the other modules. The code is written in C++ and a class structure is extensively used. Using classes makes our program more modular and makes the methods easier to reuse in the future. Below, we give brief information about the classes and methods in our program.

5.1.6 Mid Level Motion Control of Minik Robots

It is possible to control the motion of the Minik robots by using mid-level motion commands. These commands make it easier to control the robot and eliminate the need to deal with many parameters each time. A brief overview of the motion commands is provided below:

5.1.7 Integrating Teleoperation into our MinikDCS Software

We currently have the ability to control the robots remotely. This is done by connecting to the Minik II operating system remotely and controlling it from another computer. This process, however, is costly and burdens the network between the robots and the remote computer. Moreover, it causes lag between the computers and can even lead to loss of the connection. In order to make control of the robots more efficient and faster, I have integrated the teleoperation module which I wrote earlier into the Minik II robots. Currently, we can control the robots remotely.

5.1.8 Integrating CMU Cam3

Different from the previous version, we no longer need to run the CMUCam grabber to grab images from the CMU Cam. We wrote code which sets up a serial connection to the CMU Cam and grabs a frame whenever we need one. In the CameraVision class, the grabFrame() method is used for grabbing an image.


Class Name      Method                      Definition
CameraVision    setupConn()                 Sets up the connection with the camera
                grabFrame()                 Grabs a single frame from the camera using a serial connection at a baud rate of 115200
                cComp()                     Calculates the connected components of the given image
                calcDist()                  Calculates distances to the objects existing in the image
                showMatchesAndDistance()    Draws the matched objects in two subsequent images and shows distances to them
MotionControl   setupConn()                 Sets up the serial connection with the motor card
                midLevelMotion()            Includes the mid-level motion commands (further explanation is given below)
                basicMotion()               Includes the basic motion commands (get counter, set counter, set speed, etc.)
TeleControl     setConnection()             Sets up the TCP/IP connection and waits for clients to connect
                acceptClient()              After permission is given to the client to connect, accepts the client and waits for commands
                motionCommands              These commands include all the mid-level motion commands implemented in the MotionControl class

Table 2: Class structure of MinikDCS software

Method                               Parameters                           Definition
SetDefaultRobot()                    none                                 Sets up default parameters for the robot
setRobotParameters()                 rightRadius, leftRadius, axisLength  Sets up the given parameters
setMotorSpeed(int speed)             speed                                Sets the motor speed for future commands
isMoving()                           none                                 Returns whether the robot is moving or not
stop()                               none                                 Stops the robot
resetRobot()                         none                                 Resets the robot parameters
forward()                            none                                 Go forward
backward()                           none                                 Go backward
travel(int distance)                 distance                             Travel the given distance
goTo(int x, int y, int direction)    x, y, direction                      Go to the given point and direction
rotateTo(float angle)                angle                                Rotate to the given angle
rotateLeft()                         none                                 Rotate left
rotateRight()                        none                                 Rotate right
goArc(int angle, int radius)         angle, radius                        Go on an arc with the given parameters

Table 3: Midlevel Motion Commands
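For illustration, the following is a hypothetical usage sketch of these commands, assuming they are exposed as methods of the MotionControl class from Table 2; the names are taken from the tables, but the exact signatures in the project code may differ.

```cpp
// Hypothetical usage sketch of the mid-level motion commands in Table 3, assuming
// they are methods of the MotionControl class from Table 2; signatures are assumed.
#include "MotionControl.h"   // assumed project header

int main() {
    MotionControl motion;
    motion.setupConn();              // open the serial link to the motor card
    motion.SetDefaultRobot();        // default wheel radii and axis length
    motion.setMotorSpeed(20);        // speed used by the subsequent commands

    motion.travel(100);              // drive 100 units straight ahead
    motion.rotateTo(90.0f);          // turn to face 90 degrees
    motion.goTo(50, 50, 0);          // move to point (50, 50) with direction 0
    motion.goArc(45, 30);            // follow a 45-degree arc of radius 30

    if (motion.isMoving())
        motion.stop();               // halt before shutting down
    return 0;
}
```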


When a new image is grabbed, it is written to the updatedImage variable, and the former image is written to the previousImage variable. This enables us to compare two images in depth calculation.
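As a rough illustration of how this frame pair feeds the rest of the pipeline, the sketch below uses the CameraVision interface listed in Table 2; the calls around the frame grabs are assumptions, not the project's actual code.

```cpp
// Hypothetical sketch: grab two frames around a known motion and compare them,
// assuming the CameraVision interface listed in Table 2.
#include "CameraVision.h"   // assumed project header

int main() {
    CameraVision cam;
    cam.setupConn();               // open the serial link to the camera
    cam.grabFrame();               // first frame -> updatedImage
    // ... drive the robot forward a known distance here ...
    cam.grabFrame();               // old frame -> previousImage, new frame -> updatedImage
    cam.cComp();                   // label connected components in the frames
    cam.calcDist();                // estimate object distances from the two frames
    cam.showMatchesAndDistance();  // visualize the matches and distances
    return 0;
}
```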

5.2 Robotic Mapping

Robotic mapping aims to build a map of the local environment of the robot. A wide variety of sensors with different characteristics can be used for mapping. Because we have only a monocular camera as a sensor, recognition of the scene will be performed by processing the images obtained from the camera mounted on top of the robots. In the first phase of our project, we deal with map building with a single robot.

5.2.1 Feature Extraction

Extracting and labeling the various disjoint and connected components in an image is central to many automated image analysis applications.

Assuming that objects can, most of the time, be distinguished by colors different from the rest of the scene (or background), we can detect such objects by thresholding the given image. Basic thresholding is performed by first converting the color image to grayscale and then thresholding the image with a pre-assigned threshold value.

Connected components labeling scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar pixel intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled with a graylevel or a color (color labeling) according to the component it was assigned to.

Connected component labeling works by scanning an image pixel-by-pixel (from top to bottom and left to right) in order to identify connected pixel regions, i.e. regions of adjacent pixels which share the same set of intensity values I. In a gray level image, the intensity values take on a range of values and hence the method needs to be adapted accordingly using different measures of connectivity.

We assume binary input images and 8-connectivity. The connected components labeling operator scans the image by moving along a row until it comes to a pixel x (where x denotes the pixel to be labeled at any stage in the scanning process) for which I(x) = 1. When this is true, it examines the four neighbors of x which have already been encountered in the scan (i.e. the neighbors (i) to the left of x, (ii) above it, and (iii and iv) the two upper diagonal terms). Based on this information, the labeling of x occurs as follows:

• If all four neighbors have intensity values equal to 0, assign a new label to x, else

• If only one neighbor has intensity equal to 1, assign its label to that of x, else

• If more than one of the neighbors have values equal to 1, assign one of the labels to x and make a note of the equivalences.

After completing the scan, the equivalent label pairs are sorted into equivalence classes and a unique label is assigned to each class. As a final step, a second scan is made through the image, during which each label is replaced by the label assigned to its equivalence class. For display, the labels might be different graylevels or colors. As a result of connected components processing, a set of blobs B = {B1, . . . , Bn} is obtained. An example of connected component analysis is shown in Fig. 2, where the left portion shows the end of the first scan and the right portion shows the end of the second scan.

Figure 2: An example of connected components
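A minimal sketch of this two-pass labeling is given below; it assumes a binary image stored as a row-major array of 0/1 values and uses a union-find structure for the equivalence classes, so it illustrates the procedure rather than reproducing the project's implementation.

```cpp
// Two-pass connected components labeling with 8-connectivity (sketch).
#include <vector>
#include <algorithm>

// Union-find root lookup used to record label equivalences from the first pass.
static int findRoot(std::vector<int>& parent, int x) {
    while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
}

std::vector<int> labelComponents(const std::vector<int>& img, int rows, int cols) {
    std::vector<int> labels(rows * cols, 0);
    std::vector<int> parent(1, 0);               // index 0 is the background
    int next = 1;
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            if (img[r * cols + c] == 0) continue;
            // The four already-scanned neighbours: left, upper-left, upper, upper-right.
            int nb[4][2] = {{r, c - 1}, {r - 1, c - 1}, {r - 1, c}, {r - 1, c + 1}};
            std::vector<int> seen;
            for (auto& n : nb)
                if (n[0] >= 0 && n[1] >= 0 && n[1] < cols && labels[n[0] * cols + n[1]] > 0)
                    seen.push_back(labels[n[0] * cols + n[1]]);
            if (seen.empty()) {                  // no labelled neighbour: new label
                labels[r * cols + c] = next;
                parent.push_back(next);
                ++next;
            } else {                             // take one label, note the equivalences
                int m = *std::min_element(seen.begin(), seen.end());
                labels[r * cols + c] = m;
                for (int l : seen) parent[findRoot(parent, l)] = findRoot(parent, m);
            }
        }
    }
    // Second pass: replace each label by the representative of its equivalence class.
    for (int& l : labels)
        if (l > 0) l = findRoot(parent, l);
    return labels;
}
```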

5.2.2 Object matching

Similarity measures are used to decide whether two blobs are similar or dissimilar. In order to create a similarity measure for each object, we have chosen several features to be compared, each requiring a different index. We use the following measures:

1. The first measure is spatial proximity of their centers

2. The second is the width-to-height ratio of each object. The closer the ratios, the more similar the objects, as seen in Fig. 3.

3. Area of overlap

4. Color similarity

The relative position measure uses the merits of the connected components algorithm, in which each component is connected to its surrounding components. As the relative positions of the objects will not change across the images, it provides a reliable matching. The center µi ∈ R2 of a blob Bi is simply computed as the mean position, namely

$$\mu_i = \frac{1}{|B_i|} \sum_{x \in B_i} x \qquad (1)$$

The relative position of two blobs Bi and Bj is simply measured as the difference between their centers:

$$\|\mu_i - \mu_j\| \qquad (2)$$

The cross ratio is determined by simply taking the ratio of the length li to the width wi of a blob Bi. Two cross ratios are compared by taking their difference as

$$\left| \frac{l_i}{w_i} - \frac{l_j}{w_j} \right| \qquad (3)$$


Figure 3: Ratio comparison for matching objects

The association measure compares the intersection of the areas of the objects across subsequent images. Similar objects are expected to have common areas (an intersection) because the positions of two matching objects in two subsequent images will not change much. If the areas of two objects intersect, then it is highly probable that the objects will be matched, as seen in Fig. 4. The intersection of two blobs Bi and Bj can simply be computed as:

$$\nu(i, j) = \sum_{x \in B_i \cap B_j} 1 \qquad (4)$$

Figure 4: Ratio of intersected areas in two images


The color measure is the last similarity index we used in our algorithm. It compares the colors of the objects across the images and matches objects with similar colors. The color of a blob is computed as the average color, where c(x) denotes the color associated with pixel x:

$$c_i = \frac{1}{|B_i|} \sum_{x \in B_i} c(x) \qquad (5)$$

The similarity of colors is again measured by taking their difference as

$$\|c_i - c_j\| \qquad (6)$$

Of course, none of these similarity measures alone can guarantee a reliable matching. We form an integrated measure of similarity s : B × B → R≥0 by incorporating all of these measures at the same time. The measure s provides a match with higher probability. Each measure is weighted with a coefficient as shown in Eq. 7, so that unreliable indices do not affect the result much.

$$s(B_i, B_j) = \eta_1 \|\mu_i - \mu_j\| + \eta_2 \left| \frac{l_i}{w_i} - \frac{l_j}{w_j} \right| + \eta_3\, \nu(i, j) + \eta_4 \|c_i - c_j\| \qquad (7)$$
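As a minimal illustration of Eq. 7, the sketch below scores two blob summaries; the BlobInfo struct, the weights η1..η4 and the sign convention for the overlap term are assumptions made for the example, not the project's actual data structures.

```cpp
// Sketch of the combined similarity score in Eq. (7) for two blob summaries.
#include <cmath>

struct BlobInfo {
    double cx, cy;          // blob centre, Eq. (1)
    double length, width;   // bounding-box dimensions used for the cross ratio, Eq. (3)
    double r, g, b;         // mean colour, Eq. (5)
};

double similarity(const BlobInfo& a, const BlobInfo& b, double overlap,
                  double eta1, double eta2, double eta3, double eta4) {
    double centreDist = std::hypot(a.cx - b.cx, a.cy - b.cy);                // Eq. (2)
    double ratioDiff  = std::fabs(a.length / a.width - b.length / b.width);  // Eq. (3)
    double colourDist = std::sqrt((a.r - b.r) * (a.r - b.r) +
                                  (a.g - b.g) * (a.g - b.g) +
                                  (a.b - b.b) * (a.b - b.b));                // Eq. (6)
    // Eq. (7) written literally; since a larger overlap nu(i,j) indicates a better
    // match while the other terms are dissimilarities, eta3 would be chosen negative
    // (or the overlap turned into a dissimilarity) so that a smaller s means a match.
    return eta1 * centreDist + eta2 * ratioDiff + eta3 * overlap + eta4 * colourDist;
}
```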

5.2.3 Depth Calculation

The second step is placing the detected objects at their proper places in the constructed map. The main problem is that images taken with a single camera do not contain depth information about the environment. In other words, we cannot infer the depth of an object from the camera even though we can distinguish it in the scene.

There are various ways of extracting depth information from the environment. The most reliable and straightforward way is using sonar sensors or laser range finders. However, their high prices and relatively big sizes do not allow us to use them. Another proposed solution is using stereo cameras. Stereo cameras look into the scene from two different viewpoints at the same time so that they can infer the depth of the objects in an instant. Processing and interpolating such image pairs is a time-consuming task and requires expensive hardware, since at least two cameras need to be used on the robot. Still, one of our Minik robots has a Surveyor Stereo Vision system as its camera sensor, which can be used for future applications.

Using a single, simple pinhole camera for the mapping task is a relatively new notion in robotic mapping. Recently, the ubiquity of cameras in mobile platforms has made them a strong competitor against sonar sensors or laser range finders. Extracting depth information using a simple camera can be a challenging task. In our project this term, we mainly concentrated on depth calculation using a single camera.

Two images that look at the same scene but are taken from different viewpoints contain depth information that can be extracted using simple geometrical equations. When a robot approaches the objects, the area of the objects gets bigger in proportion to the distance the robot has travelled into the scene, as shown in Fig. 5.

The depth is computed from two consecutive images as given in Eq. 8:

$$\frac{h}{d_1} = \frac{p_1}{f}, \qquad \frac{h}{d_2} = \frac{p_2}{f} \qquad (8)$$


Figure 5: Area of the objects gets bigger proportional to distance travelled into scene

where

• f - The distance between the camera lens and the CCD image sensor (focal length)

• h - The height of the object in 3D world

• p1 - The height of the object in the image when the object is closer (in pixels)

• p2 - The height of the object in the image when the object is far (in pixels)

• d1 - The distance of the object to the lens when it is closer

• d2 - The distance of the object to the lens when it is far

Figure 6: Parameters used in calculation of distance from two consecutive images

In Fig. 6 the object gets closer to the camera, but this is equivalent to the camera getting closer to the object. As we can see from Fig. 6, as the camera (robot) gets closer to an object, the size of the object's image on the CCD sensor increases. From this increase and the known distance travelled obtained from the motor encoders, we can obtain the distances to the objects.


Eliminating h (the object height) and f (the focal length) in Eq. (8), we obtain the ratio α as

$$\alpha = p_2 / p_1 = d_1 / d_2 \qquad (9)$$

If we denote the travelled distance by x,

$$x = d_2 - d_1 \qquad (10)$$

we obtain the distance to the object in terms of α and x as in Eq. (11):

$$d_2 = \frac{x}{1 - \alpha} \qquad (11)$$
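A small numeric sketch of Eqs. (9)-(11) is given below; the function name and the sample values are made up for illustration.

```cpp
// Depth from two frames: pixel heights of a matched object plus the encoder-measured
// travel give the object's distance at the farther position (Eq. 11). Sketch only.
#include <stdexcept>

double depthFromTwoFrames(double p1, double p2, double travelled) {
    // p1: pixel height in the closer frame, p2: pixel height in the farther frame.
    double alpha = p2 / p1;                 // Eq. (9): alpha = d1/d2 < 1
    if (alpha >= 1.0)
        throw std::invalid_argument("object did not appear to grow between frames");
    return travelled / (1.0 - alpha);       // Eq. (11): d2 = x / (1 - alpha)
}

// Example: an object grows from 40 px to 50 px while the robot advances 30 cm,
// so alpha = 40/50 = 0.8 and d2 = 30 / (1 - 0.8) = 150 cm at the farther position.
```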

5.2.4 Map construction

In the map construction process, the general approach we will follow is as follows:

1. First, distinguish the objects in the image and label them.

2. Next, match the objects in two subsequent images using the similarity indices we identified before.

3. Then, extract the depth information of the matched objects.

4. Finally, stitch the images and objects together according to their relative positions, as sketched below.
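The following self-contained outline illustrates how these four steps fit together; the Object struct, the nearest-centre matching used in place of the full similarity measure of Eq. 7, and the sample numbers are all simplifications assumed for the sketch.

```cpp
// Rough outline of the four map-construction steps (sketch, not the project code).
#include <cmath>
#include <cstdio>
#include <vector>

struct Object {
    double cx, cy;        // image position of the blob centre (step 1 output)
    double pixelHeight;   // apparent height in pixels
    double depth;         // filled in by step 3
};

// Step 2 stand-in: match an object to the closest centre in the next frame.
int matchObject(const Object& prev, const std::vector<Object>& curr) {
    int best = -1;
    double bestDist = 1e9;
    for (size_t i = 0; i < curr.size(); ++i) {
        double d = std::hypot(prev.cx - curr[i].cx, prev.cy - curr[i].cy);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}

int main() {
    // Step 1 would label connected components in two consecutive frames;
    // two tiny hand-made frames stand in for that output here.
    std::vector<Object> prevFrame = {{100, 80, 40, 0}, {220, 90, 25, 0}};
    std::vector<Object> currFrame = {{ 95, 78, 50, 0}, {225, 92, 30, 0}};
    double travelled = 30.0;   // cm, from the motor encoders

    for (const auto& p : prevFrame) {
        int j = matchObject(p, currFrame);                           // step 2
        if (j < 0) continue;
        double alpha = p.pixelHeight / currFrame[j].pixelHeight;     // Eq. (9)
        if (alpha < 1.0)                                             // step 3
            currFrame[j].depth = travelled * alpha / (1.0 - alpha);  // d1 = d2 - x
    }
    // Step 4 would place the objects on the map using these depths and the robot
    // pose from odometry; here the estimates are simply printed.
    for (const auto& o : currFrame)
        std::printf("object at (%.0f, %.0f): depth %.1f cm\n", o.cx, o.cy, o.depth);
    return 0;
}
```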

5.3 Experimental Results

5.3.1 Connected Components

We used the Open Source Computer Vision Library (OpenCV) for connected component extraction from the given images. The OpenCV library provides a number of handy functions for connected component extraction.

Before beginning the connected component analysis, we first checked the brightness level of the image. Images brighter than a certain threshold are inverted to get better performance in the connected component analysis.

First, we used the findThreshold() function with a certain threshold value to convert the image to black and white. In the next step, we found the contours of the objects in the image by using the findContours() function. In the final step, we colorized the found connected components. We applied our algorithm to several images; the outputs are shown in the figures below.
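For reference, a minimal OpenCV sketch of these steps (fixed-value thresholding, contour finding, and coloring the components) might look like the following; it uses the standard cv::threshold and cv::findContours calls with an arbitrary threshold value rather than the project's exact settings.

```cpp
// Grayscale thresholding, contour extraction and component colouring with OpenCV (sketch).
#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <vector>

int main() {
    cv::Mat img = cv::imread("scene.png");                 // hypothetical input image
    cv::Mat gray, bw;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bw, 128, 255, cv::THRESH_BINARY);  // illustrative threshold value

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat labeled = cv::Mat::zeros(img.size(), CV_8UC3);
    for (size_t i = 0; i < contours.size(); ++i) {
        cv::Scalar color(std::rand() % 256, std::rand() % 256, std::rand() % 256);
        cv::drawContours(labeled, contours, static_cast<int>(i), color, cv::FILLED);
    }
    cv::imwrite("components.png", labeled);
    return 0;
}
```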

The first image sequence, shown in Fig. 7, contains the candidate images we are going to use to find connected components.

Figure 7: Images that are used in connected component analysis


The images are then transformed into binary images with a certain threshold applied, as shown in Fig. 8.

Figure 8: Images transformed into binary with a certain threshold

The next sequence, in Fig. 9, shows the contours of the objects in the images.

Figure 9: Contours of the objects

In the last step, Fig. 10 shows the colorized connected components.

Figure 10: Connected components detected in images

As we can see from the figures, some of the objects could not be recognized and categorized as connected components. Most of the error in our algorithm arises in the thresholding process. Because we use a threshold to separate objects from the background, some of the objects that have brightness values under the threshold also get eliminated in the thresholding process. This can be seen in Fig. 11: two of the three objects in the scene are recognized whereas the last one is eliminated in the thresholding process. This error can be fixed by using more complex thresholding algorithms. One simple solution would be comparing the hue values of the image along with the brightness values in thresholding. By performing a logical-AND operation on the outputs of the hue and brightness thresholding functions, we can get better recognition performance.


Figure 11: Eliminated objects in the thresholding process

Another possible source of error is that our connected component algorithm relates and merges objects which exceed the threshold value and touch each other. This can be seen in Fig. 12. In some cases, cables or unwanted particles cause objects to be connected to each other and appear as a single object. We tried to reduce this error by performing erosion and dilation on the thresholded images, but we still encounter such errors in some cases. Applying better thresholding algorithms can reduce this type of error.

Figure 12: Connected objects that are above the threshold and touching each other

Finding connected components and matching these components across subsequent images is an important process. Wrong matching can cause errors in the distance calculation. Therefore we used three indices in order to have a reliable matching, namely area intersection, ratio match and color match.

5.3.2 Depth calculation

After matching the connected components, we implemented our distance calculation algorithm as described in the methodology section. Distances to each detected object in two consecutive images are calculated separately. Results of the distance calculation algorithm with accompanying scene images are shown in Figs. 13-18. Calculated distances are written next to each obstacle in the figures.

In Figs. 13-18, the blue circle represents the robot, which moves forward by a specified amount. The red circles represent the objects in the scene. After the distance calculation algorithm is performed, the calculated distance for each object is written inside its circle.


Figure 13: Distance calculation with travelled distance: 30cm

Figure 14: Distance calculation with travelled distance: 30cm

Figure 15: Distance calculation with travelled distance: 40cm


Figure 16: Distance calculation with travelled distance: 30cm

Figure 17: Distance calculation with travelled distance: 30cm

Figure 18: Distance calculation with travelled distance: 30cm


Using the distance calculation algorithm, we obtained satisfactory results with a success rate of 68.5%. In some of the scenarios above, some of the objects could not be recognized (e.g. in Fig. 14 and Fig. 17) because of the threshold value we specified. Using adaptive thresholding can yield better results in recognizing objects.

In some of the scenarios, we obtained very high errors in the distance calculation for some of the objects. For example in Fig. 17, the distance is calculated as 115 cm for the black object, but its real distance is 240 cm. This error is mainly due to a big change in the ratio of the object. The reason for the big change in ratio is that the boundaries of that object could not be determined exactly in this scenario. If the boundaries of the object in two consecutive images cannot be found exactly, then the distance calculation algorithm will not yield accurate results. The same error applies to Fig. 13: the distance of the leftmost object is calculated as 132 cm although its actual distance is 200 cm. Because this object appears at the edge of the image, it is clipped out of the image as the robot moves. Clipping out the object, in turn, causes the ratio of the object to change more than expected. In order to eliminate similar cases, we do not use objects that are very close to the edges of the image.

Distance calculations for various scenarios and objects were performed and the results are summarized in Fig. 19. In the figure, the vertical axis shows the calculated distances and the horizontal axis shows the groundtruth data. In the experiments, we obtained a total success rate of 68.5%.

Figure 19: Comparison of calculated distances and groundtruth values

6 Work done in EE492 Senior Design Project

6.1 Work on Minik II System

6.1.1 Integrating Surveyor Cam

Three of the five Minik robots use Surveyor cameras as their vision hardware. In the last term, we were only using the CMU Cam to grab images because we only needed a single robot. However, working with multiple robots will also require the Surveyor cameras to operate. We integrated the Surveyor cameras into our Minik software, and they can now be used by changing just one parameter when the CameraVision class is initialized. Usage of the class is the same as for the CMU Cam. Since the Surveyor cameras provide better resolution images than the CMU Cam, we will be using these cameras in the second half of the project.

6.1.2 Designing All-in-One Control panel

With the purpose of clearing out the mess of cables inside the robot, we designed a new control panel on which all of the switches and input/output ports are placed. We added switches for changing between the external and internal RS232 ports and for changing between internal and external power. The PCB of the control panel is shown in Fig. 20.

Figure 20: PCB design of control panel

6.2 Robotic Mapping

In the EE491 part, we employed the connected components approach for extracting features from the environment. Although we obtained results with a 70% success rate in experiments, we wanted a further improvement in our feature extraction and depth calculation. Our first attempt was using images represented in HSV space instead of grayscale. Then, we looked for other clues that can be used in depth calculation, such as straight lines or SURF features.

6.2.1 Connected Components in HSV Space

Since different colors can take the same grayscale value, the previous approach of grayscale thresholding caused some inaccuracies in the experiments we performed in the EE491 project. As a solution, an improved approach using HSV space instead of grayscale is used in the second half of the project.

HSV stands for hue, saturation, and value, and is also often called HSB. HSV (or the closely related HSI) is a common cylindrical-coordinate representation of points in an RGB color model, which rearranges the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the Cartesian (cube) representation. The three components of HSV space are shown in Fig. 21. Because the colors in the image are one of the most important clues that define an object, we wanted to make use of color information in connected component extraction. The hue value represents the original color and is not affected by changing illumination or brightness. The other two parameters are also important because, as the brightness of an object decreases, its perceived color gets darker and affects our extraction algorithm. A small saturation value implies that the perceived color of the object does not have enough color pigment, so to speak, to represent its original color. Therefore we cannot make calculations on colors which have very low (or high) intensity or saturation values.

Figure 21: Hue, Saturation and Value components of an image

Taking into account the saturation and intensity values, we applied a thresholding on the image according to a set of hue values. The set contains the most common colors that we encountered throughout our experiments; it usually contains 6-7 different colors, but the number can be increased for more accurate results. For each color in the set, we scanned the image once for connected components, as shown in Fig. 22. If any component is found, we keep it in memory, and we merge (logical-AND) all connected components in memory after the scanning is fully completed.

Figure 22: Connected components extracted for different hue values (from left: 10, 20, 60, 110)

In order to improve the performance of the scanning algorithm, the K-means algorithm can be used to decide which hue values to scan the image for, instead of assigning the values manually.
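A rough OpenCV sketch of this per-hue scan is shown below; the hue set, tolerance and saturation/value limits are illustrative, and the per-hue masks are simply accumulated with a bitwise OR before component extraction, which is one possible way to combine them.

```cpp
// Per-hue thresholding in HSV space with low-saturation/low-value pixels dismissed (sketch).
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

cv::Mat hueComponentsMask(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Hand-picked hue set (OpenCV hue range is 0-179), e.g. the values of Fig. 22.
    const std::vector<int> hues = {10, 20, 60, 110};
    const int hueTol = 8;                               // tolerance around each hue (assumed)

    cv::Mat merged = cv::Mat::zeros(hsv.size(), CV_8UC1);
    for (int h : hues) {
        cv::Mat mask;
        // Dismiss pixels with very low saturation or value, as argued above.
        cv::inRange(hsv,
                    cv::Scalar(std::max(h - hueTol, 0), 60, 40),
                    cv::Scalar(std::min(h + hueTol, 179), 255, 255),
                    mask);
        cv::bitwise_or(merged, mask, merged);           // accumulate this hue's pixels
    }
    return merged;  // connected components are then extracted from this combined mask
}
```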

Figure 23: Set of test images to compare

Figure 24: Comparison of grayscale and HSV space extraction (HSV space is below)


A comparison of the algorithms using grayscale and HSV space can be seen in Fig. 24. Although in some cases the previous algorithm yields better results, because we now dismiss regions which have low saturation and intensity values, the latter approach performs better overall.

6.2.2 Line detection for depth calculation

The connected components approach performs well and yields good results in an indoor environment in which distinctly colored objects are placed. However, when we perform the same experiment in corridors, the performance of our algorithm decreases, because the clues that can be extracted from this environment are highly limited. In order to make our algorithm perform well in any environment, we tried other approaches too. In a human environment, vertical and horizontal lines exist in various places. Making use of the lines existing in the environment, we can calculate depth from the difference in length of the lines between two consecutive images.

The Hough transform is a feature extraction technique used in image analysis and is mostly concerned with the identification of lines in the image [8]. By applying the Hough transform to images we easily obtain the straight lines existing in the image. In Fig. 25, the result of the Hough transform applied to two consecutive images is shown.

Figure 25: Hough transform applied to two consecutive images

However, there is a problem with using the Hough transform: it does not guarantee detection of all lines in the image. It can even produce different results for consecutive, similar images. Such an unstable detection cannot be used in depth calculation because we need the exact lengths of the lines.
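For illustration, a small OpenCV sketch of this line-detection step using the probabilistic Hough transform is given below; the Canny and Hough parameter values are arbitrary choices, not the ones used for Fig. 25.

```cpp
// Straight-line detection with Canny edges and the probabilistic Hough transform (sketch).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec4i> detectLines(const cv::Mat& gray) {
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);                        // edge map feeds the Hough step
    std::vector<cv::Vec4i> lines;                           // each entry: x1, y1, x2, y2
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 60, 30, 5);
    return lines;
}
// Comparing the lengths of corresponding lines in two consecutive frames would give the
// ratio needed for Eq. (9), but as noted above the detections are not stable enough
// between frames for such an exact length comparison.
```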

6.2.3 SURF features and depth calculation from SURF feature points

As we stated in the related literature section, the SURF and SIFT algorithms produce very robust feature points that are not affected by changing illumination or scale. Employing these features in our depth calculation can yield very accurate results. Furthermore, these features can be used together with connected components in the detection of objects. We performed several experiments using SURF features, as shown in Fig. 26 and Fig. 27.

Figure 26: SURF features found in a given image


Figure 27: SURF features are matched in two consecutive images

Calculation of depth from SURF features can easily be performed as shown in Eq. (12). We omit the derivation of the formula since it is similar to the one in Eq. (11).

$$x = d \cdot \alpha / \beta \qquad (12)$$

Figure 28: Diagram showing the calculation of depth from two consecutive images


After we find the depth of the SURF points across consecutive images, we can merge this calculation with the one obtained from the connected components approach. By doing so, we obtain not only a depth map of the space but also the depth of the objects in the scene.
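A rough sketch of the SURF extraction and matching step for two consecutive frames (as in Figs. 26-27) is shown below; it assumes OpenCV built with the xfeatures2d contrib module, and the Hessian threshold is an arbitrary choice.

```cpp
// SURF keypoint extraction and brute-force matching between two consecutive frames (sketch).
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

void matchSurf(const cv::Mat& prev, const cv::Mat& curr) {
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    surf->detectAndCompute(prev, cv::noArray(), kp1, desc1);
    surf->detectAndCompute(curr, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // Each matched pair gives corresponding positions of the same point in the two
    // frames; together with the encoder-measured travel, these feed Eq. (12).
    cv::Mat vis;
    cv::drawMatches(prev, kp1, curr, kp2, matches, vis);
    cv::imwrite("surf_matches.png", vis);
}
```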

6.3 Experimental Results

6.3.1 Connected Components

We used the Open Source Computer Vision Library (OpenCV) for connected component extraction from the given images. In the depth calculation examples, we used HSV values for thresholding.

Before beginning the connected component analysis, we first checked the histogram of the image and applied histogram equalization. We performed experiments in three different cases:

Calculation in an in-room (inside a room) environment where distinctly colored block objects exist (Fig. 29). (Note that the numbers next to the objects stand for the calculated depth values for each object.)

Figure 29: Depth calculation for case 1


Calculation in an in-room (inside a room) environment where natural objects exist (Figs. 30, 31).

Figure 30: Depth calculation for case 2

Figure 31: Depth calculation for case 2


Calculation in a corridor-like environment (Figs. 32, 33, 34).

Figure 32: Depth calculation for case 3

Figure 33: Depth calculation for case 3


Figure 34: Depth calculation for case 3

In all of these environments our algorithm performed the depth calculation. For cases two and three, the number of connected components depends on the number of interesting objects that exist in the environment. Therefore, our algorithm was highly dependent on the type of the environment. In order to find a solution to this environment dependence, we tried different approaches including line extraction and SURF feature extraction. Merging such algorithms is likely to produce better and more accurate results.

7 Economic, Environmental and Social Impacts

Working with multiple robots is still a relatively new area of investigation. Although there are many hard problems yet to be solved, multi-agent approaches have already demonstrated a number of important impacts, both environmentally and socially:

Multiple agents can improve efficiency in many tasks as they specialize, while some tasks simply cannot be done by a single robot at all. Moreover, real-time response can be achieved by spreading the computational burden of control and information processing across a population.

Multi-agent strategies not only increase utility but also allow us to develop an important aspect of intelligence: social behavior. Some scientists, such as sociologists and psychologists, use simulated groups of robots to model social behavior in humans. Many aspects of human interaction can be studied in this way, including the spreading of diseases or how traffic jams form.


8 Cost Analysis

The Minik II robots have already been designed and manufactured in a previous ISL project. The cost analysis, as taken from the associated EE491-EE492 project report, is shown in Table 4.

No  Component                                   Weight (g)  Length (mm)  Width (mm)  Height (mm)  Cost (TL)
1   Plexiglass                                  400         70           40          3            12
2   Cables                                      75          -            -           -            6
3   Li-po battery                               150         104          34          33           100
4   GHM-01 30:1 Gear Motors x 2                 66          48           37          37           35
5   Motor Control Card Materials                29          90           70          10           150
6   Encoders x 2                                30          30           20          7            40
7   Solarbotics Wheels                          24          67           67          6            1
8   CF to IDE 44 Pin Adapter                    29          52           44          11           35
9   Compact Flash                               20          36           42          3.3          100
10  Plastic Ball Caster                         4           12           12          11           5
11  VIA Epia P700-10L                           400         100          72          18           350
12  Surveyor Stereo Vision System (optional)    140         60           150         60           825
13  CMU Cam3 (optional)                         108         74           115         53           225
14  Sharp Distance Sensors (optional) x 2       75          13.5         44.5        13.5         40
15  CNY70 Line Following Sensors (optional)     75          20           40          3            13
    Total (standard)                            1208                                              834
    Total (all optionals included)              1606                                              1567

Table 4: The components of the Minik II robot, physical properties and cost of each one

A Minik II – Wiring Diagram


Figure 35: Process of finding connected components


References

[1] B. Yim, Y. Lee, J. Song, and W. Chung. Mobile robot localization using fusion of object recognition and range information. In Proceedings of the IEEE Int. Conf. on Robotics and Automation, pages 3533-3536, 2007.

[2] N. Karlsson, E. Di Bernardo, J. Ostrowski, L. Goncalves, P. Pirjanian, and M. Munich. The vSLAM algorithm for robust localization and mapping. In Int. Conf. on Robotics and Automation (ICRA), 2005.

[3] A. J. Davison and I. D. Reid. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6), June 2007.

[4] H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded Up Robust Features. In Computer Vision - ECCV 2006, LNCS vol. 3951, pages 404-417, 2006.

[5] J. Orwell, P. Remagnino, and G. Jones. From connected components to object sequences. In 1st IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), 2000.

[6] I. M. Rekleitis. Ph.D. thesis, School of Computer Science, McGill University, Montreal, Quebec, Canada, 2003.

[7] W. Burgard, M. Moors, D. Fox, R. Simmons, and S. Thrun. Collaborative multi-robot exploration. In Proc. IEEE International Conference on Robotics and Automation (ICRA '00), vol. 1, pages 476-481, 2000.

[8] O. Chutatape and L. Guo. A modified Hough transform for line detection and its performance. Pattern Recognition, 32(2):181-192, February 1999.
