

[IEEE 2009 Second International Conference on Advances in Circuits, Electronics and Micro-electronics (CENICS) - Sliema, Malta, 2009.10.11-2009.10.16]

Traffic image processing systems Tadas Surgailis, Algimantas Valinevicius, Mindaugas Zilys

Department of Electronics Engineering, Kaunas University of Technology, Studentų str. 50, 51368 Kaunas, Lithuania

e-mail: [email protected], [email protected], [email protected]

Abstract–This paper discusses image processing systems and their use in traffic surveillance and analysis. The growing number of cars causes larger traffic jams and more city pollution. With intelligent vision systems we can control traffic and reduce this problem; such systems have great potential. This paper presents the fundamental principles of a traffic image processing system for moving object detection and extraction.

Keywords – blob detection, image segmentation, background subtraction, traffic surveillance, traffic tracking.

I. INTRODUCTION

Image processing systems (IPS) have various practical uses. Today they are widely developed and applied in automation, medicine, astronomy, and security. Because they are simple in structure, these systems can handle many tasks.

Special attention is given to growing traffic. Due to the excessive number of cars there are more traffic jams, so citizens lose valuable time, use more fuel, and pollute the environment with exhaust. To reduce traffic jams in streets and parking places, traffic allocation must be used, and here an IPS can help.

Image processing systems can gather and process the information required for traffic control and surveillance: traffic flow, speed, accidents, jams and other parameters. With this information, we can regulate traffic-light timing or otherwise react to increased traffic intensity more efficiently.

The main goal of this article is to propose a fast algorithm for traffic surveillance that can count moving cars. This paper was written in parallel with the first author's Master's thesis, "Embedded vision processing systems".

The paper starts with an explanation of the hardware structure of an IPS. Section III presents object extraction techniques and algorithms. Section IV explains how to extract moving objects and describes our fast moving-object extraction algorithm, which can be used successfully in embedded systems. The last section shows the results of our IPS under different conditions.

II. IMAGE PROCESSING SYSTEM STRUCTURE

The main parts of an image processing system are a video camera and an image processing unit, i.e. a computer with software (Fig. 1). In some cases, additional lighting can be used.

When the system is designed, it is important to choose an appropriate camera and an appropriate interface to the computer. Two main types of cameras are on the market: one uses CCD image sensors, the other CMOS. Both sensor types do the same job, but their specifications and prices differ. CMOS sensors are usually cheaper and use less power, while CCD sensors have better dynamic range and are used to obtain high quality. A resolution of 320x240 or 640x480 pixels is sufficient for real-time traffic observation systems, because higher quality requires longer processing time.

Figure 1. Image processing system structure

Choosing the data interface between the camera and the CPU is also important. There are five main interfaces, compared in Table I according to these criteria: capacity, signal transmission distance, power supply, and connection topology.

TABLE I. DATA INTERFACES COMPARISON

Interface    | Capacity, MB/s | Max. distance, m         | Power supply    | Connection
-------------|----------------|--------------------------|-----------------|------------------------
Analog       | 11             | 100                      | none            | device-device
Camera Link  | 255, 382, 680  | 10                       | none            | point-to-point
GigE Vision  | 125            | 100                      | none            | network, device-device
FireWire     | 10, 20, 40     | 4.5, 72 (with repeater)  | 0-1.5 A, 8-30 V | network, device-device
USB          | 40             | 5, 30 (with repeater)    | 500 mA, 4.7 V   | network, master-slave

Software for traffic observation and statistics gathering uses various algorithms for movement and object detection.

2009 Second International Conference on Advances in Circuits, Electronics and Micro-electronics, 978-0-7695-3832-7/09 $26.00 © 2009 IEEE, DOI 10.1109/CENICS.2009.9



Figure 2. Advanced region growing method algorithm

III. OBJECT EXTRACTION ALGORITHM IN IPS

The main goal of an image processing system is to detect an object and extract it. In traffic surveillance the main subjects are moving cars and pedestrians. After extraction, an object can be described by characteristics such as size, occupied area, direction, and perimeter. From this gathered information we can determine what kind of object it is. There are three main vehicle detection techniques:

A. 3D model-based technique. Model-based vision allows prior knowledge of the shape and appearance of specific objects to be used in the machine interpretation of a visual scene [1]. The most serious weakness of this approach is its reliance on detailed geometric object models: it is unrealistic to expect detailed models for all vehicles that could be found on the roadway.

B. Feature-based technique. This method tracks sub-features such as distinguishable points or lines on the object [2][7]. It gives good accuracy for free-flowing as well as congested traffic, but compared with the active region technique it needs more computation time.

C. Active region-based technique. This method is very computationally efficient. It represents the object by a bounding box and keeps updating it dynamically. It is fast and the best choice for systems with limited computation.

We propose to use an advanced region growing method (Fig. 3).

This method checks all pixels of a blob. To improve performance, it joins not single pixels but groups of them. These grouped pixels are called 1D, or linear, blobs. In the first stage, the received frame is scanned and a list of linear blobs is made. In the second stage, the 1D list is scanned again: if a linear blob overlaps another one, they are joined. Joined blobs are called 2D objects. Compared with a recursive implementation, this method reduces the possibility of overfilling stack memory. The advanced region growing algorithm is illustrated in Figure 2.
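The two-stage linear-blob idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the run representation `(row, start_col, end_col)`, and the union-find helper are our own, and the pairwise overlap scan is kept simple (O(n²)) for clarity.

```python
def extract_runs(binary):
    """Stage 1: scan each row and list horizontal runs ('1D blobs')."""
    runs = []
    for y, row in enumerate(binary):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                runs.append((y, start, x - 1))  # inclusive end column
            else:
                x += 1
    return runs

def merge_runs(runs):
    """Stage 2: union runs whose column spans overlap on adjacent rows."""
    parent = list(range(len(runs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (y1, s1, e1) in enumerate(runs):
        for j, (y2, s2, e2) in enumerate(runs):
            if j <= i:
                continue
            # adjacent rows and overlapping column spans -> same 2D object
            if abs(y1 - y2) == 1 and s1 <= e2 and s2 <= e1:
                parent[find(i)] = find(j)

    blobs = {}
    for i, run in enumerate(runs):
        blobs.setdefault(find(i), []).append(run)
    return list(blobs.values())

img = [
    [0, 1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
]
blobs = merge_runs(extract_runs(img))
print(len(blobs))  # -> 3 separate 2D objects
```

Because each run covers many pixels, the merge stage touches far fewer items than a per-pixel flood fill, which is the performance point made above.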

Figure 3. Advanced growing region method principle illustration

The main parameter of object registration is processing time; it indicates the system's capability. Figure 4 shows how fast this method works at different image resolutions when the same 27.8% of the frame area is covered by blobs. In one case the area is distributed into 101 blobs, in the other into 4.

Figure 4. Joined growing region method processing time at different frame resolutions (x-axis: resolution, 320x240 to 1920x1440; y-axis: calculation time, ms; series: 101 blobs vs. 4 blobs).



The experiment was performed on a computer with a 1.6 GHz AMD Turion X2 and 1 GB RAM. As expected, processing time increases with image resolution. Processing time also increases with the number of blobs, even though the total blob pixel count is the same in both cases; this is caused by the operations used for blob listing. However, at 640x480 (VGA) the algorithm works fast enough, taking only 16-25 ms.

IV. MOVING OBJECT EXTRACTION IN IPS

When moving objects are analyzed, the calculation moves to a higher dimension: processing happens not only within frames but also between them. A moving object can be determined by subtracting the background from the image. Fast background subtraction methods use the difference between the image and the background; the result is a motion image. In the simplest case the background is static:

F[i] = |I[i] - B[i]|,   motion where F[i] ≥ TH,   (1)

where F is the foreground (motion) image, I the current image, B the background, TH the motion threshold, and i the pixel index.
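Equation (1) amounts to a per-pixel thresholded difference. A minimal sketch on a flattened grey-level buffer (the function name and the threshold value 30 are illustrative assumptions, not from the paper):

```python
TH = 30  # motion threshold on 0..255 grey values (illustrative)

def motion_mask(image, background, th=TH):
    """Eq. (1): a pixel is foreground when |I[i] - B[i]| reaches the threshold."""
    return [1 if abs(p - b) >= th else 0
            for p, b in zip(image, background)]

background = [100, 100, 100, 100]
image      = [102, 180,  60, 101]   # two pixels changed strongly

print(motion_mask(image, background))  # -> [0, 1, 1, 0]
```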

This method is very sensitive to motion: any movement in the environment makes it very noisy, so using it in an IPS would be problematic and ineffective. One way to reduce the noise is to use a non-static background, updating it with every new image:

B[i] = (1 - α)·B[i-1] + α·I[i],   (2)

where B[i] is the new background, B[i-1] the old background, I[i] the new image, α the background update rate, and i the frame index.

To make motion detection more efficient, we can update only the pixels whose change is below the threshold:

B[i] = (1 - α)·B[i-1] + α·I[i]   if |I[i] - B[i-1]| < TH,
B[i] = B[i-1]                    otherwise.   (3)
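The selective running-average update of equations (2)-(3) can be sketched per pixel as below. The function name and the sample values α = 0.25, TH = 30 are our illustrative assumptions (the experiments later vary α between 0.2 and 0.45):

```python
ALPHA = 0.25  # background update rate α (illustrative)
TH = 30       # motion threshold (illustrative)

def update_background(background, image, alpha=ALPHA, th=TH):
    """Eqs. (2)-(3): blend quiet pixels into the background; pixels that
    changed strongly keep the old background value, so moving vehicles
    are not absorbed ('burned in') into the background model."""
    out = []
    for b, p in zip(background, image):
        if abs(p - b) < th:
            out.append((1 - alpha) * b + alpha * p)  # eq. (2)
        else:
            out.append(b)                            # eq. (3), else branch
    return out

bg  = [100.0, 100.0]
img = [110.0, 200.0]   # small illumination change vs. a moving object
print(update_background(bg, img))  # -> [102.5, 100.0]
```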

This method is very fast but suffers from considerable environment noise, caused by small motions such as tree branches and bushes, camera vibration, shadows, etc. To eliminate this problem we could use more complex background subtraction methods, for example a non-parametric model [3] or a probabilistic background model [6], but they use much more computational resources. Another solution is a noise filter. A neighbor review filter is used in this work: a point is considered a motion point only if it has more than N motion neighbors. It is fast and works well for low noise. Its biggest disadvantage is that the filter can also remove real motion pixels, so a real moving object may be split into several smaller objects (Fig. 5). To fix this, we can find close blobs and join them; after that operation we obtain the real moving object region (Fig. 6).
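A neighbor review filter of the kind described above can be sketched as follows. The function name and the choice of an 8-neighborhood with a minimum of 3 motion neighbors are our assumptions for illustration; the paper does not fix N or the neighborhood shape:

```python
def neighbour_filter(mask, n_min=3):
    """Keep a motion pixel only if at least n_min of its 8 neighbours
    are also motion pixels; isolated noise pixels are dropped."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbours = sum(
                mask[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w)
            out[y][x] = 1 if neighbours >= n_min else 0
    return out

noisy = [
    [1, 0, 0, 0],   # lone noise pixel, removed
    [0, 1, 1, 0],
    [0, 1, 1, 0],   # compact 2x2 motion region, kept
    [0, 0, 0, 1],   # lone noise pixel, removed
]
clean = neighbour_filter(noisy)
```

Note how a pixel on the edge of a real object can fall just below the neighbor count and be deleted, which is exactly the object-splitting side effect the text warns about.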

If we combine motion extraction, object detection and the noise filters, we obtain a fast moving-object detection algorithm which can be used in traffic surveillance (Fig. 7). This algorithm can be divided into five operations:

Motion extraction; False motion filter; Object detection; Small object filter; Close zone connector.
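The five operations above can be wired together as a simple pipeline. This 1D toy sketch is our own illustration: each stage body is a deliberately simplified stand-in, and only the staging order follows the text.

```python
def motion_extraction(image, background, th=30):
    # stage 1: threshold the image/background difference (eq. (1))
    return [1 if abs(p - b) >= th else 0 for p, b in zip(image, background)]

def false_motion_filter(mask):
    # stage 2 stand-in: keep a motion pixel only if a direct neighbour moved too
    out = []
    for i, m in enumerate(mask):
        nb = (i > 0 and mask[i - 1]) or (i + 1 < len(mask) and mask[i + 1])
        out.append(1 if m and nb else 0)
    return out

def object_detection(mask):
    # stage 3: group consecutive motion pixels into (start, end) zones
    zones, i = [], 0
    while i < len(mask):
        if mask[i]:
            start = i
            while i < len(mask) and mask[i]:
                i += 1
            zones.append((start, i - 1))
        else:
            i += 1
    return zones

def small_object_filter(zones, min_len=2):
    # stage 4: drop zones too small to be a vehicle
    return [z for z in zones if z[1] - z[0] + 1 >= min_len]

def close_zone_connector(zones, max_gap=2):
    # stage 5: join zones separated by a small gap (split vehicle parts)
    joined = []
    for z in zones:
        if joined and z[0] - joined[-1][1] - 1 <= max_gap:
            joined[-1] = (joined[-1][0], z[1])
        else:
            joined.append(z)
    return joined

background = [100] * 12
frame = [100, 180, 180, 180, 100, 100, 180, 180, 100, 100, 100, 180]
mask = false_motion_filter(motion_extraction(frame, background))
zones = close_zone_connector(small_object_filter(object_detection(mask)))
print(zones)  # -> [(1, 7)]: two close zones joined into one object
```

The isolated motion pixel at the end of the frame is discarded by the false motion filter, and the two remaining zones are close enough to be connected into a single object region.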

Figure 5. Object identified with split zones

Figure 6. Object identified with joined zones

V. IPS EXPERIMENTAL ANALYSIS

A traffic surveillance system with the proposed algorithm can detect moving objects, but it is necessary to analyze how it reacts to environment changes, i.e. day or night, dry or wet road, etc. It is also important to know how other parameters, such as the motion threshold and the background update coefficient, affect the counting deviation. For system analysis, a vehicle traffic counter was made which counts vehicles crossing a set line. One street of the city and an exact place were chosen for the experiment. First, the vehicle flow was recorded and later analyzed. Video was recorded in QVGA format (320x240). Several clips were recorded under different conditions: a sunny day, evening, night, a wet road. Two main parameters with direct impact were varied: the motion threshold TH and the background update coefficient α.

Experiment results are shown in Figures 8 to 13, expressed as the deviation level.



If the system counted more or fewer vehicles than really passed, the difference is expressed as a percentage of the real count (4):

A = (Nr - N) / Nr · 100%,   (4)

where A is the deviation from the real value, Nr the real number of vehicles, and N the system result.
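The deviation metric (4) is a one-liner; note the sign convention, which matches the negative bars in the later figures (negative means the system over-counted). The numbers in the example are purely illustrative, not the experiment's actual vehicle counts:

```python
def deviation(real_count, system_count):
    """A = (Nr - N) / Nr * 100%: signed deviation of the counter from truth;
    negative values mean the system counted more vehicles than passed."""
    return (real_count - system_count) / real_count * 100.0

# e.g. missing 2 vehicles out of 175 gives about the 1.14 % seen in the daytime run
print(round(deviation(175, 173), 2))  # -> 1.14
print(deviation(100, 110))            # -> -10.0 (over-counting)
```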

Figure 7. Moving object extracting algorithm

The first experiment was made with video recorded in a friendly environment, i.e. in the daytime. The results of this experiment are shown in Figures 8 and 9. They show that the choice of the motion threshold is decisive: the smaller it is, the more changed pixels are captured, so the moving object can be restored in better detail. With a motion threshold of 4, the vehicle counter deviation was only 1.14%. A very low threshold value, however, increases the noise level, so a stronger filter is needed to eliminate false motion pixels.

Figure 8. IPS vehicle counting deviation vs. motion threshold in the daytime:
  Th:   4     6     8      12     16     18
  A,%:  1.14  8.00  17.14  37.14  27.43  15.43

The results show that if the α value is increased, the system counts vehicles with lower deviation, because moving objects stand out more from the background. However, if the value is too large, the object shrinks and often connects to a nearby moving object, so the system detects fewer vehicles. Another very important factor is shadows. This algorithm cannot eliminate shadows; they can connect separate vehicles, so the system counts them as one. That is why a filter separating shadows from detected moving objects is needed [4, 5].

Figure 9. IPS vehicle counting deviation vs. background update coefficient α in the daytime:
  α:    0.2    0.25   0.3   0.35  0.4   0.45
  A,%:  17.14  10.86  2.86  1.71  0.57  -1.71

The second experiment was made in the evening. At this time the light intensity is reduced, so a side effect, vehicle headlights, has a bigger influence on the results. If the road is wet, this effect is even stronger because of light reflected from it. With lower light intensity, the motion threshold should theoretically be decreased, and indeed the results show that lowering the threshold reduces the deviation. But the deviation also drops sharply when the threshold is strongly increased; this is because moving objects then become harder to detect and usually only conspicuous vehicles (e.g. white ones) are registered. We can increase the update coefficient α for higher detection sensitivity of moving objects. However, the problem of joined objects shows up again: if traffic is intensive, nearby vehicles following one another are often joined and counted as one.

Figure 10. IPS vehicle counting deviation vs. motion threshold in the evening:
  Th:   4      6      8      12     16     20
  A,%:  10.09  20.18  28.25  26.01  19.73  -1.12

Figure 11. IPS vehicle counting deviation vs. background update coefficient α in the evening:
  α:    0.2    0.25   0.3    0.35  0.4   0.45
  A,%:  28.25  19.96  14.13  8.30  5.83  4.04

Parameter switching is needed if the system is to be used at different times of day. One method is to compute the average pixel value of the image (5) and determine the lightness from it: in the daytime use one parameter set, in rain another, and so on. In our experiments the average pixel value was about 130 in the daytime and about 30 in the dark.

p_vid = (1 / (w·h)) · Σ_{i=0}^{w·h-1} (0.11·p[3i] + 0.59·p[3i+1] + 0.3·p[3i+2]),   (5)

where p_vid is the average of all pixel values (p_vid → 0 at night, p_vid → 255 in bright day), p[3i] is the blue, p[3i+1] the green and p[3i+2] the red component of pixel i, w the image width, and h the image height.
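The brightness average (5) and the day/night parameter switch can be sketched as below. The function names, the interleaved B,G,R buffer layout, and the switching threshold of 80 are our assumptions (chosen between the measured ~130 daytime and ~30 nighttime averages); the paper only gives the weighted sum and the two measured levels:

```python
def average_brightness(pixels):
    """Eq. (5): mean weighted luma of an interleaved B,G,R byte buffer."""
    n = len(pixels) // 3          # number of pixels
    total = 0.0
    for i in range(0, n * 3, 3):  # B, G, R weights as in eq. (5)
        total += 0.11 * pixels[i] + 0.59 * pixels[i + 1] + 0.3 * pixels[i + 2]
    return total / n

def pick_profile(p_vid):
    # illustrative switch point between the ~130 (day) and ~30 (night) levels
    return "day" if p_vid > 80 else "night"

day_pixels   = [120, 135, 140] * 4   # four bright B,G,R triples
night_pixels = [20, 30, 35] * 4      # four dark B,G,R triples
print(pick_profile(average_brightness(day_pixels)),
      pick_profile(average_brightness(night_pixels)))  # -> day night
```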

Analysis of vehicle traffic observed in the dark shows a very complicated situation (Fig. 12, 13). Changing the parameters does not reach the required low deviation. The complication is caused by vehicle lights reflected from the road: the reflections have the largest motion values, so they are often detected as separate objects. Vehicles illuminating each other is also a big problem, as both vehicles are then detected as one. Possible remedies are eliminating the light reflections or using infrared night cameras.

Figure 12. IPS vehicle counting deviation vs. motion threshold in the dark:
  Th:   4      6      8      12     16    20
  A,%:  18.83  16.59  16.37  16.59  7.40  -6.50

Figure 13. IPS vehicle counting deviation vs. background update coefficient α in the dark:
  α:    0.2    0.25   0.3    0.35   0.4    0.45
  A,%:  16.37  14.35  17.71  14.35  12.11  12.33

As Figures 14 and 15 show, our vehicle surveillance system counts best in the daytime. With an optimal parameter set, its deviation was only 0.57%.

Figure 14. Comparison of all gathered results vs. motion threshold (series: Day, Evening, Night).

Figure 15. Comparison of all gathered results vs. background update coefficient α (series: Day, Evening, Night).

Another experiment was made using better video quality, VGA (640x480). It showed that higher resolution increases system accuracy (Fig. 16, 17, 18), because a moving object consists of more points that can be joined together more easily. However, if the threshold is increased, the moving object is often separated into several regions with large gaps between them, so the system does not join them and detects them as different objects.



Figure 16. IPS vehicle counting deviation vs. motion threshold in the dark using VGA resolution:
  Th:   4     6      8      12     16     18
  A,%:  7.17  16.14  34.30  45.52  38.12  43.05

Figure 17. IPS vehicle counting deviation vs. background update coefficient α in the dark using VGA resolution:
  α:    0.2    0.25   0.3   0.35  0.4   0.45
  A,%:  34.30  10.76  9.19  6.05  5.38  5.38

Figure 18. Comparison of QVGA and VGA results vs. motion threshold (series: QVGA, VGA).

Two main motion extraction parameters determine the quality of moving object detection: the background update coefficient and the motion threshold. But there is another way to make moving object detection more sensitive: the camera record speed (frame rate). If it is low, the motion between consecutive frames is much more significant. Figure 19 shows the IPS vehicle counting deviation versus record speed in the daytime. A lower speed yields a higher quality of the detected object. Besides, a lower record speed uses less computational resources, which is important when the computer has limited capability.
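Reducing the effective record speed amounts to processing only every k-th captured frame. A trivial sketch of that subsampling (the helper name is ours; it only illustrates the rate arithmetic, not the full detector):

```python
def subsample(frames, keep_every):
    """Process only every k-th frame: a lower effective record speed makes
    inter-frame motion larger and cuts computation roughly by factor k."""
    return frames[::keep_every]

frames = list(range(30))           # one second of 30 fps frame indices
print(len(subsample(frames, 2)))   # 15 fps  -> 15 frames per second
print(len(subsample(frames, 4)))   # ~7.5 fps -> 8 frames per second
```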

Figure 19. IPS vehicle counting deviation vs. record speed in the daytime:
  record speed, frames/s:  30     15     10     7.5
  Deviation, %:            56.57  14.29  -7.43  2.29

VI. CONCLUSION

Image processing systems are universal in their analysis abilities and can be used in complex traffic observation systems. The suggested image processing algorithm is fast and works well in good conditions, e.g. on a sunny day; there, with optimally set parameters, the system deviation was only 0.57%. This traffic image processing system cannot yet work at night, because its deviation is too large for counting cars. Increasing the video camera resolution and using additional, more complex image processing methods can improve system accuracy, but processing time increases too. The biggest obstacles to good extraction of moving cars are shadows and headlights reflected from the road. The suggested image processing algorithm contains no complicated additional processes or methods, so it can be easily integrated into DSP- or ARM-based embedded systems.

VII. REFERENCES

[1] A. D. Worrall, R. F. Marslin, G. D. Sullivan, K. D. Baker, "Model-based Tracking", Department of Computer Science, University of Reading, RG6 2AX, UK.

[2] Young-Kee Jung, Yo-Sung Ho, "A Feature-Based Vehicle Tracking System in Congested Traffic Video Sequences", Lecture Notes in Computer Science, vol. 2195, Springer-Verlag, London, UK, 2001, pp. 190-197.

[3] Hyenkyun Woo, Min Ok Lee, Jin Keun Seo, "Real-time Motion Detection in Video Surveillance Using a Level Set-Based Energy Functional", Department of Mathematics, Yonsei University, Seoul, 120-749, Korea, unpublished.

[4] Ahmed Elgammal, David Harwood, Larry Davis, "Non-parametric Model for Background Subtraction", Lecture Notes in Computer Science, vol. 1843, Springer-Verlag, London, UK, 2000, pp. 751-767.

[5] Qin Bo, Zhang Chuangde, Fang Zhenghua, Li Wei, "A Quick Self-Adaptive Background Updating Algorithm Based on Moving Region", Ocean University of China, Qingdao 266071, China, unpublished.

[6] J. Rittscher, J. Kato, S. Joga, A. Blake, "A Probabilistic Background Model for Tracking", Lecture Notes in Computer Science, vol. 1843, Springer-Verlag, London, UK, 2000, pp. 336-350.

[7] Benjamin Coifman, David Beymer, "A Real-Time Computer Vision System for Vehicle Tracking and Traffic Surveillance", Transportation Research Part C: Emerging Technologies, vol. 6, no. 4, August 1998, pp. 271-288.
