
Tracking of Mobile Robot System

2011 International Conference on Consumer Electronics, Communications and Networks (CECNet), Xianning, China, April 16-18, 2011

Cheng Jun
College of Physics and Electronic Engineering, Taizhou University, Taizhou, China
[email protected]

Wang Tao*
College of Physics and Electronic Engineering, Taizhou University, Taizhou, China
[email protected]

Abstract—This paper presents a tracking system built on Intel's open source computer vision library (OpenCV). The system can track a red object in the scene and is relatively effective at avoiding interference from the lighting in the scene. The position of the contour's center of gravity in the image controls the robot's rotation, while the change of the contour area controls the robot's forward, backward, and stop motions.

Keywords—OpenCV; computer visual tracking; robot

I. INTRODUCTION

Visual tracking has gradually become a research hotspot as computer technology has developed. Before the 1980s, owing to the limitations of computer technology, image processing and analysis were mainly based on static images, and tracking a moving target in a dynamic image sequence relied heavily on static image analysis. After optical flow was proposed in the early 1980s, the analysis of dynamic image sequences reached a climax, and the boom in optical flow methods lasted until the mid-1990s; reviews of optical flow methods can be found in the literature. However, the computation that optical flow requires is too heavy for real-time use, and because of the limitations of its underlying assumptions, optical flow methods are particularly sensitive to noise and easily produce erroneous results. These shortcomings mean that optical flow is still a great step away from practical application.

II. MOBILE ROBOT VISUAL TRACKING SYSTEM AND WORKFLOW

The mobile robot visual tracking system mainly comprises an object recognition system and a robot control system. The software platform of the object recognition system is Intel's open source computer vision library (OpenCV); through this platform the object is separated from the scene and its data are collected, which accomplishes target recognition. The hardware platform of the object recognition system is a moving vehicle platform driven by DC gear motors. The software platform of the robot control system handles the incoming data and decides how the robot should move.

First, the system is initialized and searches for the target object (a red ball). If the target is found, the system acquires the current frame and builds a color model. After filtering and a series of morphological operations, the system obtains the object's contour and the related object information. The PC then transmits this information to the microcontroller via the serial port (a sketch of this serial link is given after the figure captions below). The microcontroller processes the data further to form instructions such as forward or backward. Finally, the system generates new instructions based on feedback from the robot's execution. The structural framework is shown in Figure 1 and the flow chart in Figure 2.

Figure 1. Structural framework of the tracking system.

Figure 2. Flow chart of the tracking system.
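The paper does not specify the PC-to-microcontroller protocol, so the following is only a minimal sketch of the serial link, assuming the pyserial package and a hypothetical one-byte command encoding (F/B/S/L/R for forward, backward, stop, left, right); the port name and baud rate are likewise assumptions.

    import serial

    # Hypothetical port and baud rate; adjust to the actual microcontroller link.
    ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

    def send_command(cmd: bytes) -> None:
        """Send a single-byte motion command to the microcontroller."""
        assert cmd in (b"F", b"B", b"S", b"L", b"R")  # forward, backward, stop, left, right
        ser.write(cmd)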



III. VISUAL TRACKING ALGORITHM FOR THE MOBILE ROBOT

A. Lab Space Partitioning

The target recognition system uses the a-channel image of the Lab color space to segment the red object from the scene.

The Lab color space is a color model. It is a device-independent color system (it describes human visual perception digitally) and is also a color system based on physical characteristics: it accommodates all the colors of light that the eye can perceive or that can be computed. Lab space is converted from the RGB primaries, and it serves as a bridge when the RGB model is converted to the HSV and CMYK models.

The Lab color space represents color with a luminance component L and two color components a and b, where L ranges from 0 to 100, the a component represents the change of the spectrum from green to red, the b component represents the change of the spectrum from blue to yellow, and both a and b range from -120 to 120. The red object in the scene is segmented by splitting off a single Lab channel. The method first transforms RGB to Lab. The transformation formula is:

$$
\begin{aligned}
L &= 116\,(0.299R + 0.587G + 0.114B)^{1/3} - 16 \\
a &= 500\left[\big(1.006\,(0.607R + 0.174G + 0.201B)\big)^{1/3} - (0.299R + 0.587G + 0.114B)^{1/3}\right] \\
b &= 200\left[(0.299R + 0.587G + 0.114B)^{1/3} - \big(0.846\,(0.066G + 1.117B)\big)^{1/3}\right]
\end{aligned}
$$

After the collected image is transformed into the Lab color space, its three channels can be separated into independent grayscale images to facilitate further processing.
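As an illustration of the segmentation just described, here is a minimal sketch using OpenCV's Python bindings (cv2). The median-filter kernel size and the binarization threshold of 150 are assumptions, not values from the paper; note that 8-bit OpenCV Lab images store the a channel offset by 128, so a red object pushes a well above that midpoint.

    import cv2

    def segment_red(frame_bgr):
        """Segment a red object via the a-channel of the Lab color space."""
        lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
        _, a, _ = cv2.split(lab)                  # keep only the green-red (a) channel
        a = cv2.medianBlur(a, 5)                  # the filtering step (kernel size assumed)
        _, mask = cv2.threshold(a, 150, 255, cv2.THRESH_BINARY)  # threshold assumed
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # opening removes speckle
        return mask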

B. Contour Information Extraction

Contours are extracted from the binary image, together with the center-of-gravity coordinates in the image and the contour area.

Area: the area is the sum of the contour's pixels in the processed a-channel binary image.

Center of gravity coordinates: image moments are used to obtain the image coordinates of the contour's center of gravity.

1) The definition of moments

Given a two-dimensional continuous function f(x, y), we define the (p, q) moment of a contour as:

$$
M_{pq} = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} x^{p}\,y^{q}\,f(x, y)\,dx\,dy, \qquad p, q = 0, 1, 2, \ldots
$$

The reason a moment can be used to characterize a two-dimensional image rests on the Papoulis theorem. Here p is the x-order and q is the y-order, where order means the power to which the corresponding component is taken in the sum just displayed. In the discrete case the summation runs over all of the pixels of the contour boundary (denoted by n in the equation below). It then follows immediately that if p and q are both equal to 0, the $M_{00}$ moment is actually just the length in pixels of the contour.

2) Features defined by moments

Central moment: a central moment is basically the same as the moment just described, except that the values of x and y used in the formula are displaced by the mean values:

$$
u_{pq} = \sum_{i=0}^{n} I(x, y)\,(x - x_{avg})^{p}\,(y - y_{avg})^{q}
$$

Center of gravity coordinates: $(x_{avg}, y_{avg})$, where $x_{avg} = m_{10}/m_{00}$ and $y_{avg} = m_{01}/m_{00}$.
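A minimal sketch of the contour and moment extraction above, assuming OpenCV 4.x (where cv2.findContours returns two values); picking the largest contour as the target is an assumption about how the system selects among candidates.

    import cv2

    def contour_info(mask):
        """Return (area, (x_avg, y_avg)) of the largest contour, or None."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        c = max(contours, key=cv2.contourArea)  # assume the biggest red blob is the target
        m = cv2.moments(c)
        if m["m00"] == 0:
            return None
        x_avg = m["m10"] / m["m00"]             # center of gravity: m10/m00
        y_avg = m["m01"] / m["m00"]             # and m01/m00
        return cv2.contourArea(c), (x_avg, y_avg)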

IV. VISUAL TRACKING MOBILE ROBOT CONTROL INSTRUCTIONS

The robot tracks the target mainly through the following instructions: forward, stop, backward, turn left, and turn right. The system processes the contour area and the center coordinates of the contour to decide which command to send.

At system initialization, a range around the middle of the image is set for the center of gravity, and an initial value is set for the contour area. As shown in Figure 3, the robot controls its forward, stop, and backward motion by detecting the contour area. When the target is far from the robot, the contour area is smaller than the initial value and the mobile robot moves forward; when the target stays within the set range, the robot stops; and when the target is close to the robot, the contour area exceeds the initial value and the robot backs up. When the center-of-gravity coordinates leave the center line of the image, the robot begins to rotate until the center of gravity returns to the set initial range (a sketch of this decision logic follows Figure 3).

Figure 3. Schematic diagram of the control strategy.
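The control rules above can be sketched as follows; AREA_REF (the initial area value), AREA_BAND, and CENTER_TOL are illustrative thresholds, not values from the paper, and the returned bytes match the hypothetical command encoding of the serial sketch in Section II.

    AREA_REF = 5000.0   # initial contour area set at initialization (assumed value)
    AREA_BAND = 0.2     # +/-20% dead band around the reference area (assumed)
    CENTER_TOL = 40     # allowed pixel deviation from the image center line (assumed)

    def decide(area, x_avg, img_width):
        """Map contour area and center of gravity to a one-byte robot command."""
        if x_avg < img_width / 2 - CENTER_TOL:
            return b"L"                    # target left of the center band: turn left
        if x_avg > img_width / 2 + CENTER_TOL:
            return b"R"                    # target right of the center band: turn right
        if area < AREA_REF * (1 - AREA_BAND):
            return b"F"                    # target far away: move forward
        if area > AREA_REF * (1 + AREA_BAND):
            return b"B"                    # target too close: back up
        return b"S"                        # target within range: stop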



V. VISUAL TRACKING MOBILE ROBOT SYSTEM PERFORMANCE TEST

Figure 4. Performance test of the tracking system.

The experiments show that the robot tracking system can accurately identify the red object in a complex environment in which the light is not strong. The system has the advantages of a small instruction-computing load and real-time tracking, which effectively improves the tracking system's adaptability to the environment.

VI. CONCLUSION

The robot visual tracking system that tracks a red object in the scene is a tracking system based on object features. If a red object is present in the scene, the system will track it, because the system is based on a-channel segmentation of the Lab space image.

Because this is a feature-based tracking system, it places certain requirements on the scene. First, the scene cannot contain objects too similar to the red target; otherwise tracking may fail. Second, although a-channel segmentation in Lab space can effectively avoid interference from light, the light in the scene cannot be too strong, or tracking will likewise fail.
