Automated Industrial Robot Path Planning for Pick and Place Applications


Abstract—Industrial robots are used in many applications nowadays, and their capabilities are continually improving and have yet to reach their limit. However, in many applications the industrial robot must be programmed in advance, as most industrial robots do not have learning and adaptation capabilities. To alleviate this problem, a graphical user interface application in C# for using an industrial robot arm to perform a “pick & place” task is developed. In particular, shape and color recognition are performed using a Kinect for Xbox connected to the main computer. The detection results are transferred to the robot controller, and the robot is then controlled to move to the target. Computer simulations using RobotStudio, in addition to real experiments, are carried out to verify the effectiveness of the proposed approach.

I. INTRODUCTION

Industrial robots have experienced rapid growth recently due to the lack of skilled manpower in many sectors of industry. In industrial manufacturing, robots are required to perform repetitive and non-repetitive tasks, such as handwriting [1]. 3D cameras like the Kinect [2] can be integrated with robotic systems to perform such tasks. The Kinect, using its depth camera [3, 4], can be calibrated to estimate the position of objects in front of it with reliable precision. Scene analysis is also crucial: the estimate of the target position depends upon the precision of the image processing applied to the scene [5].
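To illustrate how the Kinect's depth camera yields such a position estimate, the following C# fragment is a minimal sketch (not the paper's code) that reads one depth frame with the Kinect for Windows SDK v1 and maps a single pixel to metric camera-frame coordinates. The pixel coordinates targetX/targetY are placeholders standing in for the output of the scene analysis described later.

    using System;
    using System.Linq;
    using Microsoft.Kinect;

    class DepthToCameraSpace
    {
        static void Main()
        {
            // Find the first connected Kinect sensor and enable its depth stream.
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return;

            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.Start();

            // Block (up to 1 s) for the next depth frame.
            using (DepthImageFrame frame = sensor.DepthStream.OpenNextFrame(1000))
            {
                if (frame == null) return;

                DepthImagePixel[] pixels = new DepthImagePixel[frame.PixelDataLength];
                frame.CopyDepthImagePixelDataTo(pixels);

                // Hypothetical pixel of interest (would come from scene analysis).
                int targetX = 320, targetY = 240;
                DepthImagePoint dp = new DepthImagePoint
                {
                    X = targetX,
                    Y = targetY,
                    Depth = pixels[targetY * frame.Width + targetX].Depth
                };

                // The SDK's coordinate mapper applies the factory depth-camera
                // intrinsics, yielding metric coordinates in the camera frame.
                SkeletonPoint p = sensor.CoordinateMapper.MapDepthPointToSkeletonPoint(
                    DepthImageFormat.Resolution640x480Fps30, dp);
                Console.WriteLine("Camera-frame position: ({0}, {1}, {2}) m",
                    p.X, p.Y, p.Z);
            }
            sensor.Stop();
        }
    }

The SDK's mapper uses factory calibration; a per-setup calibration, as referenced in [3], can refine the estimate.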

II. METHODS AND RESULTS

The experimental system (Figure 1) consists of an ABB robot and its controller (connected to the main computer), plus a Kinect camera. The interaction between the camera and the robot arm relies on a scheme developed on top of RobotStudio and Visual Studio. Integrating the two platforms allows the robot to locate the object at the correct position in the world frame.

Figure 1. Schematic diagram of the experimental system
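The paper does not detail how the PC-side application hands targets to the controller. One plausible realization, sketched below under stated assumptions, uses ABB's PC SDK to write a detected position into a RAPID variable on the controller; the task, module, and variable names (T_ROB1, MainModule, visionTarget) and the coordinate values are illustrative assumptions, not the authors' actual code.

    using ABB.Robotics.Controllers;
    using ABB.Robotics.Controllers.Discovery;
    using ABB.Robotics.Controllers.RapidDomain;

    class TargetSender
    {
        static void Main()
        {
            // Discover controllers on the network and attach to the first one found.
            NetworkScanner scanner = new NetworkScanner();
            scanner.Scan();
            Controller controller = ControllerFactory.CreateFrom(scanner.Controllers[0]);
            controller.Logon(UserInfo.DefaultUser);

            // Look up a RAPID 'pos' variable (assumed names, declared as
            // "PERS pos visionTarget" in module MainModule of task T_ROB1).
            RapidData rd = controller.Rapid.GetRapidData("T_ROB1", "MainModule", "visionTarget");

            // Coordinates (mm, robot world frame) produced by the vision pipeline.
            Pos target = new Pos { X = 350.0f, Y = -120.0f, Z = 55.0f };

            // Writing controller data requires mastership of the RAPID domain.
            using (Mastership.Request(controller.Rapid))
            {
                rd.Value = target;
            }

            controller.Logoff();
        }
    }

On the controller side, a RAPID routine would poll this variable and execute the move; RobotStudio's virtual controller allows the same code to be exercised in simulation first.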

Figure 2 shows the procedure of scene analysis. The major goal of scene analysis is to recognize the position and shape of each object. Image processing is performed using the AForge.NET library. The results are shown in Figure 3.

Figure 2. Procedure of scene analysis

Figure 3. Final result of image processing
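The paper does not list the filter settings used. As a rough sketch of this step, the fragment below combines AForge.NET's color filtering, blob extraction, and shape checking to find, for example, a red circular object and report its pixel centroid; the color thresholds and minimum blob size are assumed values.

    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using AForge;
    using AForge.Imaging;
    using AForge.Imaging.Filters;
    using AForge.Math.Geometry;

    class SceneAnalysis
    {
        // Returns the pixel centre of the first red circle found, or null.
        public static AForge.Point? FindRedCircle(Bitmap frame)
        {
            // Keep only strongly red pixels (thresholds are assumptions).
            ColorFiltering redFilter = new ColorFiltering
            {
                Red = new IntRange(150, 255),
                Green = new IntRange(0, 90),
                Blue = new IntRange(0, 90)
            };
            Bitmap filtered = redFilter.Apply(frame);

            // Extract connected components, discarding small noise blobs.
            BlobCounter blobCounter = new BlobCounter
            {
                FilterBlobs = true,
                MinWidth = 20,
                MinHeight = 20
            };
            blobCounter.ProcessImage(filtered);

            SimpleShapeChecker shapeChecker = new SimpleShapeChecker();
            foreach (Blob blob in blobCounter.GetObjectsInformation())
            {
                List<IntPoint> edge = blobCounter.GetBlobsEdgePoints(blob);
                AForge.Point center;
                float radius;
                if (shapeChecker.IsCircle(edge, out center, out radius))
                    return center;   // pixel coordinates of the detected circle
            }
            return null;
        }
    }

The same SimpleShapeChecker exposes tests for other shapes (e.g., IsQuadrilateral), so distinguishing the object shapes shown in Figure 3 follows the same pattern with different predicates.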

By calculating the intrinsic and extrinsic camera parameters, the real-world position of the target can be obtained and sent to the robot controller. Experimental results are shown in Figure 4.

Figure 4. Experimental results
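The paper does not spell this computation out. The sketch below shows its usual form: back-projecting a pixel through assumed pinhole intrinsics (fx, fy, cx, cy), then applying a rigid extrinsic transform (rotation R, translation t) from the camera frame to the robot base frame. All numeric values are placeholders, not calibration results from the paper.

    using System;

    class CameraToRobot
    {
        // Assumed depth-camera intrinsics (placeholder values).
        const double Fx = 580.0, Fy = 580.0, Cx = 320.0, Cy = 240.0;

        // Assumed extrinsics: rotation (row-major) and translation (metres)
        // taking camera coordinates into the robot base frame.
        static readonly double[,] R =
        {
            { 1.0,  0.0,  0.0 },
            { 0.0, -1.0,  0.0 },
            { 0.0,  0.0, -1.0 }
        };
        static readonly double[] T = { 0.40, 0.00, 0.80 };

        // Back-project pixel (u, v) with depth z (metres) into the camera frame,
        // then map into the robot base frame: p_robot = R * p_cam + t.
        public static double[] PixelToRobot(double u, double v, double z)
        {
            double xc = (u - Cx) * z / Fx;
            double yc = (v - Cy) * z / Fy;
            double[] pCam = { xc, yc, z };

            double[] pRobot = new double[3];
            for (int i = 0; i < 3; i++)
                pRobot[i] = R[i, 0] * pCam[0] + R[i, 1] * pCam[1]
                          + R[i, 2] * pCam[2] + T[i];
            return pRobot;
        }

        static void Main()
        {
            double[] p = PixelToRobot(400, 260, 0.95);
            Console.WriteLine("Robot-frame target: ({0:F3}, {1:F3}, {2:F3}) m",
                p[0], p[1], p[2]);
        }
    }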

III. CONCLUSION

Using the proposed approach, the Kinect-integrated ABB robot is able to perform pick-and-place tasks with high accuracy.

ACKNOWLEDGMENT

We would like to express our gratitude to Catcher Technology Co., Ltd. for providing us with the ABB robot.

REFERENCES

[1] A. Izabo, T. Faisal, M. Iwan, H. M. A. A. Al-Assadi, and H. Ramli, “Programming ABB industrial robot for an accurate handwriting,” in Proc. 11th WSEAS Int. Conf. on System Science and Simulation in Engineering (ICOSSSE '12), 2012, pp. 80–85.

[2] H. Belhadj, S. Ben Hassen, K. Kaaniche, and H. Mekki, “KUKA robot based Kinect image analysis,” in Proc. IEEE Int. Conf. on Individual and Collective Behaviors in Robotics, 2013, pp. 21–26.

[3] J. Smisek, M. Jancosek, and T. Pajdla, “3D with Kinect,” in ICCV Workshop on Consumer Depth Cameras for Computer Vision, 2011.

[4] F. Rydén, H. J. Chizeck, S. N. Kosari, H. King, and B. Hannaford, “Using Kinect™ and a haptic interface for implementation of real-time virtual fixtures,” in RSS Workshop on RGB-D Cameras, 2011.

[5] S. Jafari and R. Jarvis, “Robotic eye-to-hand coordination: implementing visual perception to object manipulation,” Int. J. Hybrid Intell. Syst., vol. 2, no. 4, pp. 269–293, 2005.