
DRIVERLESS CAR USING OPEN CV PYTHON AND IMAGE PROCESSING

Shrutika BHONDE, Sakshi BHONDE, Aboli PADOLE, Kishori KSHIRSAGAR,

Pragita BAGDE

Guide: Prof. Manoj TITRE

Abstract. The evolution of Artificial Intelligence has served as a catalyst in the field of technology. Things that were once only imagined can now be developed. One such creation is the self-driving car. In the near future, people will be able to work or even sleep in the car and, without touching the accelerator or steering wheel, reach their target destination safely. The working model presented in this paper is capable of driving from one location to another. A camera mounted on top of the car, together with a Raspberry Pi, sends real-world images to a Convolutional Neural Network, which predicts one of four directions: left, right, stop, or forward. An Arduino then sends the corresponding signal to the car's controller so that, without any human intervention, the car moves in the desired direction.

Keywords: Artificial Intelligence, Image Processing, Raspberry Pi, Convolutional Neural Network.

1. Introduction

While driving a car on the road, an important task is to detect traffic signs and recognize them with robotic eyes, i.e. cameras. The signs placed at the side of the road to impart information to road users are known as road signs or traffic signs. Road sign detection is an interesting problem both because of its applications and because of its difficulty. In terms of application, road sign detection is very significant for the road sign recognition problem, since detection is the most important step in a road sign recognition system.

Such a system is important for autonomous vehicles and also helps drivers avoid accidents.

[email protected], Computer Science and Engineering, J. D. College of Engineering and Management, Nagpur, India.


Traffic signboards on the roadside can be hard for drivers to spot, and a driver may occasionally miss a signboard on the road. So, we design a system that detects the traffic signs and the directions they indicate. Without such useful signs we would most likely face more accidents, and identifying traffic signs is essential for an autonomous vehicle. By applying image processing techniques and the OpenCV library, the existing traffic signboards can be detected. This plays an important role in transportation by reducing the rising accident rate. In the future, depending on the signs detected, intelligent vehicles will be capable of taking decisions about their speed, trajectory, and so on. The goal is to develop a reliable road sign detection system.
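As a rough illustration of this detection step (a hedged sketch, not the exact pipeline of this work), the snippet below finds candidate regions for red-bordered signs with OpenCV; the HSV thresholds and the minimum contour area are assumed values.

```python
import cv2
import numpy as np

def find_sign_candidates(frame):
    """Return bounding boxes (x, y, w, h) of red regions that may contain signs."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are combined.
    lower = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    upper = cv2.inRange(hsv, (160, 100, 100), (179, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to plausibly be a signboard (assumed threshold).
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```

Each candidate box could then be cropped and passed to a classifier for recognition.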

Road sign detection also has potential applications in human-computer interaction and surveillance systems. Finally, the difficulty of the task increases because of the unpredictable imaging conditions found in an unconstrained environment.

One of the most important capabilities of a self-driving car is detecting traffic signs, which provides safety and security for the people inside the car as well as those outside it. Real-time detection and recognition of traffic information such as traffic signs and obstacles using image processing technology is therefore an important building block of intelligent vehicle systems.
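To make the pipeline described in the abstract concrete, here is a minimal sketch of the camera-to-CNN-to-Arduino loop, assuming a trained Keras model saved as direction_cnn.h5, a camera readable through OpenCV, and an Arduino on /dev/ttyUSB0; the file name, serial port, 64x64 input size and label order are illustrative assumptions, not details of the actual implementation.

```python
import cv2
import numpy as np
import serial                                   # pyserial
from tensorflow.keras.models import load_model

LABELS = ["left", "right", "stop", "forward"]   # assumed output order

model = load_model("direction_cnn.h5")          # hypothetical trained CNN
arduino = serial.Serial("/dev/ttyUSB0", 9600)   # hypothetical link to the Arduino
cap = cv2.VideoCapture(0)                       # Pi camera exposed as /dev/video0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalise the frame to the CNN's assumed input format.
    x = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis, ...])[0]
    command = LABELS[int(np.argmax(probs))]
    arduino.write(command[0].encode())          # send 'l', 'r', 's' or 'f'
```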

2. Literature Review

In their paper on "Vehicle to Vehicle Distance Measurement for Self-Driving Systems" [1], Abdelmoghit Zaarane, Ibtissam Slimani and Abdellatif Hamdoun introduce a distance measurement method for self-driving systems. The method is based on a stereoscopic camera in which two cameras are placed at the same vertical position and displaced horizontally by a fixed distance. In the first stage, detection hypotheses are generated using cross-correlation. In the second stage, these hypotheses are verified by extracting third-level 2D discrete wavelet transform (2D-DWT) features and classifying them with an AdaBoost classifier. The vehicle is first detected in one camera and then located in the other camera by matching the same vehicle. After finding the same vehicle in both cameras, the distance is computed from the baseline between the two cameras, the positions of the vehicle in the two images, and the resulting geometric angle.
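The distance computation above is presumably the standard pinhole-stereo relation Z = f * B / d; the sketch below states that relation explicitly, with the focal length, baseline and disparity chosen purely for illustration.

```python
def stereo_distance(focal_px, baseline_m, x_left_px, x_right_px):
    """Distance (metres) to a point seen at x_left_px / x_right_px in the
    rectified images of a horizontally displaced camera pair."""
    disparity = x_left_px - x_right_px          # pixels; positive for a valid match
    if disparity <= 0:
        raise ValueError("the point must appear further right in the left image")
    return focal_px * baseline_m / disparity

# Example: focal length 700 px, 0.2 m baseline, 35 px disparity -> 4.0 m.
print(stereo_distance(700, 0.2, 400, 365))
```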


In paper [2], Mihai Negru and Radu Dănescu of the Computer Science Department, Technical University of Cluj-Napoca, introduce a novel design of a miniature self-driving vehicle. Two types of traffic signs were handled: stop and parking. Whenever the stop sign was found, the car stopped for a while; if the car encountered a parking sign, it performed a parking maneuver. The authors note that participating in this competition gave their team a lot of experience in the domain of autonomous vehicles, which will be useful in future endeavors.

In paper [3], Aditya Kumar Jain of the Department of Electronics and Communication introduces a method for modelling a self-driving car. The different hardware components, the software, and the neural network configuration are clearly described. With the help of image processing and machine learning, a working model was created that performed as expected; the model was successfully created, implemented, and tested.

In paper [4], Ruturaj Kulkarni of the Department of Electronics and Telecommunications addresses the area of self-driving, or autonomous, cars. The proposed work is a navigation-system module. Using the Faster R-CNN Inception-V2 model through transfer learning improves accuracy, which makes the system reliable for real-time application, and the resulting bounding boxes provide cues for real-time vehicle control. The paper "Towards Self-Driving Cars Using Convolutional Neural Networks and Road Lane Detectors" presents a detailed, step-by-step approach to road lane guidance. The proposed method integrates a pretrained YOLOv1 network, a road lane detector, and a controller into a smart system. Based on the reported observations, the method can detect road lanes, locate objects (e.g. cars), and give steering instructions that help automate the driving process. The approach is useful for highway conditions, whereas road lanes are frequently missed on urban roads. Given these limitations, the authors intend to build a better and more robust system that uses CNNs to handle varying lighting, speeding and collision warning, and different road environments, and to deploy it in a real self-driving setting.
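For context, the following is a hedged sketch of running a pretrained TensorFlow detector (for example a Faster R-CNN Inception-V2 export) through OpenCV's DNN module; the file names, input size and confidence threshold are placeholders rather than details from the cited work.

```python
import cv2

# Hypothetical frozen graph and config generated for OpenCV's DNN importer.
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb", "graph_config.pbtxt")

def detect_objects(frame, conf_threshold=0.6):
    """Return (x1, y1, x2, y2, score) boxes above the confidence threshold."""
    h, w = frame.shape[:2]
    net.setInput(cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True))
    detections = net.forward()                  # shape (1, 1, N, 7) for this importer
    boxes = []
    for det in detections[0, 0]:
        score = float(det[2])
        if score >= conf_threshold:
            x1, y1, x2, y2 = det[3] * w, det[4] * h, det[5] * w, det[6] * h
            boxes.append((int(x1), int(y1), int(x2), int(y2), score))
    return boxes
```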

R. M. Swarna Priya, C. Gunavathi, and S. L. Aarthy, in their paper on distance measurement of humans and objects using 3D image reconstruction, propose a method based on 3D image reconstruction for estimating object distance. The method developed in this work uses only two cameras to capture the roadside view. The 3D image reconstruction technique is then used to compute the centroid, the point cloud, and the disparity maps, from which the final 3D point cloud is assembled. The person is detected, and the distance from the camera is then estimated and displayed. The procedure can be extended to identify any objects or obstacles on the road, and the error can be further reduced by refining the way the image is reconstructed.
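The disparity-map step mentioned above can be illustrated with OpenCV's block-matching stereo matcher; this is a generic sketch only, and the image file names and matcher parameters are assumed.

```python
import cv2

# Hypothetical rectified stereo pair captured by the two roadside-view cameras.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0   # pixels of disparity

# Scale to 0..255 for visual inspection; the raw values feed the 3D reconstruction.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```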


Rohit Tiwari and Dushyant Kumar Singh developed their own self-driving robot car. The working of the different hardware components is described, a way to detect stop signboards and red signals is defined, and a way to detect obstacles is also defined. All the methods and algorithms given in their paper were successfully implemented on a two-wheeled chassis robot car.

Unghui Lee, Jiwon Jung, Seokwoo Jung, and David Hyunchul Shim propose the development of a self-driving car that can handle adverse weather conditions by combining path, lidar, and vision information to find lanes in real time. Their paper describes the system and presents experimental results obtained with their autonomous vehicle. In addition to vehicle information, the car drove itself using GPS, cameras, and lidar. Since GPS alone is not accurate enough to keep a vehicle within a lane, lidar and camera information were combined to find the lane, and the lane estimates were then refined using path information. Information from the three sources allowed the vehicle to drive on a rainy day and to find the lane more robustly. A more robust vision-processing algorithm should still be developed to improve lane detection accuracy under severely different lighting conditions and degraded road conditions. Furthermore, if available, prior route information can be used for further improvements. To build a safe autonomous vehicle, it is necessary not only to use existing sensor fusion but also to develop new kinds of sensors that operate in critical conditions. Also, to compensate for the limitations of the sensors, it is necessary to use accurate maps or V2X (vehicle-to-everything) communication.

The paper entitled "Real-Time Traffic Light Signal Recognition System for Self-Driving Cars" [23], written by Nakul Agarwal, Abhishek Sharma, and Jieh Ren Chang, addresses the recognition of traffic lights. Challenges such as the brightness of LEDs and other objects on the road that resemble traffic signals have to be dealt with. To solve the LED problem, the shutter speed of the camera can be changed. Another way to recognize traffic light signals is to use an object detection algorithm based on Haar cascades. Distance estimation from the signal can also be applied, as done in earlier work. It is also beneficial to use deep learning algorithms such as convolutional networks; the convolutional neural network is a type of feed-forward neural network proposed by Alex Krizhevsky. The work on this RC car can be further extended to make it more autonomous. Traffic sign recognition can be applied using a convolutional neural network, as discussed above; this has been done by training two-tiered neural networks for the purpose.
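A Haar-cascade detector of the kind suggested above could look like the sketch below; traffic_light_cascade.xml stands for a cascade trained for this purpose (it is not shipped with OpenCV), and the detection parameters are assumptions.

```python
import cv2

cascade = cv2.CascadeClassifier("traffic_light_cascade.xml")   # hypothetical cascade

def detect_traffic_lights(frame):
    """Return (x, y, w, h) boxes of traffic-light candidates in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors trade detection rate against false positives.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                    minSize=(20, 40))
```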


The paper entitled "Self-Driving with Remote Control: Challenges and Guidelines" proposes using remote control when the self-driving system fails to understand the environment or when the road information does not match the predefined traffic rules. This raises open questions for designing remote control systems and infrastructure. The authors present case studies and advocate further research on these challenging topics to enhance self-driving by remote control.

Innovative Features: After studying different research papers, the following features can be implemented in our project.

• Traffic sign recognition: This is part of the set of features collectively called the Advanced Driver Assistance System (ADAS). The technology is being developed by a variety of automotive suppliers. Detection methods can be broadly divided into color-based, shape-based, and learning-based methods. When the camera detects a sign, it appears on the display and a reminder is passed to the car system as the vehicle passes the sign. In our model, we use the Pi camera for sign recognition, and the model handles traffic signs placed along the road, e.g. "speed limit", "children", and "turn ahead".

• Emergency braking pedestrian detection: Pedestrian detection systems with automatic braking functionality can prevent or reduce the severity of collisions that result in property damage, personal injury, or death. The purpose of this feature is to describe the performance and limitations of currently available pedestrian detection systems; only systems with automatic braking functionality were considered. These systems are intended to add a layer of driver assistance and collision mitigation; they are not intended to serve as a substitute for an engaged driver. A minimal detection sketch is given after this list.

• Lane departure warning: Lane departure warning is a system that gives an audio or visual alert if the car touches or crosses the lane markers. A lane departure warning system operates at speeds of about 30-40 mph. When the vehicle unintentionally drifts out of its lane, this technology intervenes. The system tracks the vehicle's position within a lane, usually with a camera mounted on or near the rear-view mirror, and helps steer the vehicle back into the lane through light steering or braking inputs. A lane-detection sketch is also shown after this list.


• Surround view: A surround view monitor, or around view monitor system, composes a bird's-eye view of the vehicle from overhead and shows a moving picture of the adjacent vehicles and surroundings. Surround view uses vehicle cameras to assist the system when parking. A surround view setup typically consists of four fisheye cameras, each with a 180-degree horizontal field of view. It assists the system while parking the vehicle by displaying the surround view on an LCD placed on the dashboard.

• Park assistance: Park assistance helps to automatically find a space in a parking area and automatically steer the vehicle into the parking spot. Multiple ultrasonic sensors are used in this technology. When the vehicle approaches the parking area, its speed decreases automatically; if it gets close to the car on the right, the wheel is turned to the left to obtain a wider angle.

• Distance object adaptive cruise control: Adaptive cruise control is like ordinary cruise control in that it maintains the vehicle's pre-set speed. If the lead vehicle slows down, or if another object is detected, the system sends a signal to the engine or braking system to decelerate. If the system detects another car in front of it that is traveling at a slower speed, the vehicle reduces its speed to match that of the detected car and then maintains a selected interval behind it. The technology monitors the distance between the two vehicles, and changes in that distance indicate the vehicles' relative speeds.

• Anti-lock braking system: This is braking automation implemented as a closed-loop control device. ABS works by preventing the wheels from locking up while slowing down, thereby keeping them in contact with the road surface. It modulates the brakes far faster than a driver could. This keeps the vehicle steerable and also reduces the braking distance.
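For the emergency braking feature above, a minimal sketch of pedestrian detection using OpenCV's built-in HOG person detector is given here; the box-height heuristic standing in for the braking decision is an assumption.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def pedestrian_too_close(frame, min_box_height=120):
    """True if a detected pedestrian appears tall (i.e. close) in the frame."""
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    # A tall bounding box means the pedestrian is near the camera; a real system
    # would trigger the braking controller here instead of just returning a flag.
    return any(h >= min_box_height for (_, _, _, h) in rects)
```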
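For the lane departure warning feature, the sketch below shows the usual lane-marking detection with Canny edges and a probabilistic Hough transform, assuming a forward-facing camera; the region of interest and thresholds are illustrative.

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Return lane-line segments (x1, y1, x2, y2) found in the lower image half."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    roi = np.zeros_like(edges)
    roi[int(0.6 * h):, :] = 255                 # keep only the road region
    edges = cv2.bitwise_and(edges, roi)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# A warning would be raised when the detected lines drift towards the image
# centre, i.e. when the vehicle approaches a lane marker.
```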

3. Conclusion

In this paper, we surveyed different research papers on self-driving cars and identified innovative features that can be implemented in our project. A method to build a model of a self-driving car was presented. With the help of image processing and machine learning, a successful model can be developed that works as expected. Finally, such a model can be successfully designed, implemented, and tested.


REFERENCES

[1] Abdelmoghit Zaarane, Ibtissam Slimani, Abdellatif Hamdoun, and Issam Atouf, LTI Laboratory, Physics Department, Faculty of Sciences Ben M'sik, Hassan II University, Casablanca, Morocco. IEEE (2019).

[2] Sofiane Lagraa, Maxime Cailac, Sean Rivera, Frédéric Beck, and Radu State, SnT, University of Luxembourg; Inria Nancy-Grand Est, 615 rue du Jardin Botanique, 54600 Villers-lès-Nancy, France (2019).

[3] Anselme Ndikumana and Choong Seon Hong, Department of Computer Science and Engineering, Kyung Hee University, Rep. of Korea. IEEE (2019).

[4] Bokyeong Kim, Dept. of Electronics, Information, and Communication Engineering, Daejeon University, Daejeon, Rep. of Korea. IEEE (2019).

[5] Bianca-Cerasela-Zelia Blaga, Mihai Adrian Deac, Rami Watheq Yaseen Aldoori, Mihai Negru, and Radu Dănescu, Computer Science Department, Technical University of Cluj-Napoca, Cluj-Napoca, Romania. IEEE (2018).

[6] Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Larry Jackel, Urs Muller, Phil Yeres, and Karol Zieba. IEEE (2018).

[7] Minh-Thien Duong, Truong-Dong Do, and My-Ha Le. IEEE (2018).

[8] Aditya Kumar Jain, Electronics and Communication Department, Dharmsinh Desai University, Gujarat, India (2018).

[9] Ruturaj Kulkarni, Dept. of Electronics and Telecommunications, Pune Vidyarthi Griha's College of Engineering and Technology, Savitribai Phule Pune University, Pune, India. IEEE (2018).

[10] Syed Owais Ali Chishti and Sana Riaz, Dept. of Computer Science, FAST National University, Peshawar, Pakistan (2018).

[11] Mochamad Vicky Ghani Aziz, Control and Computer System Laboratory, School of Electrical Engineering and Informatics, ITB, Bandung, Indonesia. IEEE (2017).

[12] Brilian Tafjira Nugraha and Shun-Feng Su, Department of Electrical Engineering, National Taiwan University of Science and Technology (NTUST), Taipei, Taiwan, ROC (2017).

[13] T. Banerjee, S. Bose, A. Chakraborty, T. Samadder, and Bhaskar Kumar (Third Year Students) and T. K. Rana (Professor), ECE Dept., Institute of Engineering and Management, Salt Lake, Kolkata. IEEE (2017).

[14] Renjith R, B.Tech Student, Dept. of Electronics and Communication, Sree Narayana Gurukulam College, Ernakulam, India. IEEE (2017).

[15] Shahroz Tariq, Hyunsoo Choi, C. M. Wasiq, and Heemin Park, Dept. of Computer Science and Engineering, Sangmyung University, Cheonan 31066, Korea. IEEE (2016).

[16] R. M. Swarna Priya, C. Gunavathi, and S. L. Aarthy, School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India. SPRINGER (2019).


[17] Andreea-Iulia Patachi, Florin Leon, and Doina Logofătu, Department of Computer Science and Engineering, "Gheorghe Asachi" Technical University of Iaşi, Romania; Faculty of Computer Science and Engineering, Frankfurt University of Applied Sciences, Frankfurt, Germany. SPRINGER (2019).

[18] R. Tiwari and D. K. Singh, Lovely Professional University, Phagwara, Punjab, India. SPRINGER (2018).

[19] Jingyan Qin, Zeyu Hao, and Shujing Zhang, School of Mechanical Engineering, University of Science and Technology Beijing, Beijing, China. SPRINGER (2018).

[20] W. T. Prasetyo, P. Santoso, and R. Lim, Electrical Engineering Department, Petra Christian University, Surabaya, Indonesia. SPRINGER (2016).

[21] Unghui Lee, Jiwon Jung, Seokwoo Jung, and David Hyunchul Shim, School of Mechanical and Aerospace Engineering, KAIST, Daejeon 34141, Korea. SPRINGER (2018).

[22] Rasmus Buch, Samaneh Beheshti-Kashi, Thomas Alexander Sick Nielsen, and Aseem Kinra, Department of Operations Management, Copenhagen Business School, Copenhagen, Denmark. SPRINGER (2018).

[23] Nakul Agarwal, Abhishek Sharma, and Jieh Ren Chang.

[24] Undergraduate, Computer Science and Engineering, The LNM Institute of Information Technology, Jaipur, India (2018).

[25] Joshué Pérez, Jorge Villagrá, Enrique Onieva, Vicente Milanés, Teresa de Pedro, and Ljubo Vlacic, Robotics Department, Center for Automation and Robotics (CAR), La Poveda, Arganda del Rey, 28500 Madrid, Spain; Intelligent Control Systems Laboratory, Griffith University, Brisbane, Australia. SPRINGER (2012).