
Moving Object Segmentation Using Improved Running Gaussian Average Background Model

Shu-Te Su and Yung-Yaw Chen
Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan, R.O.C.
{f96921047, yychen}@ntu.edu.tw

Abstract

Moving object segmentation using an Improved Running Gaussian Average Background Model (IRGABM) is proposed in this paper. Background subtraction against a relatively static background is a popular method for moving object segmentation in image sequences. However, the background subtraction method suffers from several problems, such as the varying luminance effect, the background updating problem, and the noise effect. IRGABM has the advantages of fast computational speed and low memory requirement, and our study also shows its improvements on the above-mentioned problems. For the purpose of moving object segmentation, background updating time, auto-thresholding, and shadow detection are also discussed in this paper.

Keywords: background subtraction, moving object segmentation, background update, background updating time, auto-thresholding, shadow detection.

1 Introduction

Background subtraction against a relatively static scene is a popular method for moving object segmentation. The method first constructs a background model and subtracts the background from the current frame to form the absolute difference frame. The foreground frame is then constructed by thresholding the absolute difference frame, and the moving object is extracted. Background subtraction can be applied in video surveillance, such as human motion analysis, highway surveillance, etc.

There are many existing background models, such as the Running Gaussian Average Background Model (RGABM) [1][3][9], Kernel Density Estimation (KDE) [7], Mixture of Gaussians (MoG) [6], and Eigenbackgrounds [8]. RGABM is a recursive-form background model, which requires less computational time and memory. KDE, MoG, and Eigenbackgrounds, on the other hand, are non-recursive background models, which require more computational time and memory but usually achieve better accuracy; Moeslund et al. [10] even indicated that MoG has become the standard background model. These methods involve a tradeoff between computational speed, memory requirement, and accuracy. A comparison of the above four background models is shown in Table 1 [3].

Background Model        Speed   Memory   Accuracy
RGABM [1]               Fast    Low      Low/Med.
Eigen-Backgrounds [8]   Slow    Med.     Med.
MoG [6]                 Slow    Med.     High
KDE [7]                 Slow    High     High

Table 1. Comparison between four different background models

In order to achieve a real-time system, background models with low computational cost are considered in this paper. In [3], Piccardi pointed out that RGABM is both fast in computational speed and low in memory requirement, although its accuracy is only acceptable. Most background model methods require long computation times or large memory. A simple recursive-form approach such as RGABM, on the other hand, has high computational speed and a low memory requirement, and therefore seems to have great potential for further improvement.

Among low-computational-cost background models, the simplest is the Time-Invariant Background Model (TIBM): the first captured frame is assigned as the background, which is then kept invariant. The mathematical description is given in (1).

$$B^k_{x,y} = I^0_{x,y} \qquad (1)$$

where $I^k_{x,y}$ is the pixel $(x,y)$ of the $k$th captured frame and $B^k_{x,y}$ is the pixel $(x,y)$ of the $k$th background.
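For illustration, the following is a minimal NumPy sketch of TIBM-style background subtraction; it is not code from the paper, and the function name and the fixed threshold are illustrative assumptions.

```python
import numpy as np

def tibm_subtract(frames, threshold=25):
    """Time-Invariant Background Model subtraction: the first frame is kept
    as the background forever; every later frame is thresholded against it.
    `frames` is an iterable of grayscale images as 2-D uint8 arrays."""
    frames = iter(frames)
    background = next(frames).astype(np.float32)  # B = I^0, never updated
    masks = []
    for frame in frames:
        ad = np.abs(frame.astype(np.float32) - background)  # absolute difference
        masks.append(ad > threshold)                         # foreground mask
    return masks

# Tiny synthetic example: a bright 10x10 block "moves" across a flat scene.
seq = [np.full((120, 160), 50, np.uint8) for _ in range(5)]
for k, img in enumerate(seq[1:], start=1):
    img[40:50, 10 * k:10 * k + 10] = 200
print([int(m.sum()) for m in tibm_subtract(seq)])  # 100 foreground pixels per frame
```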

However, TIBM fails when noise or varying luminance is present in the image sequence.


The Thresholding Background Model (TBM) improves on TIBM. If the value of a pixel in the absolute difference frame is large, the corresponding pixel of the captured frame is defined as foreground; conversely, if the value is small, the pixel is defined as background. The background part should be updated, and the foreground part should not. According to this idea, the mathematical description of TBM can be written as (2).

$$B^k_{x,y} = \begin{cases} I^k_{x,y}, & AD^k_{x,y} < Th \\ B^{k-1}_{x,y}, & AD^k_{x,y} > Th \end{cases} \qquad (2)$$

where $AD^k_{x,y} = \left|I^k_{x,y} - B^{k-1}_{x,y}\right|$ is the pixel $(x,y)$ of the $k$th absolute difference frame between the $k$th captured frame and the $(k-1)$th background.
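A minimal sketch of the TBM update rule (2), assuming grayscale float images; the helper name and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def tbm_update(background, frame, th=5.0):
    """Thresholding Background Model update, following (2): pixels whose
    absolute difference is below Th are copied from the captured frame into
    the background; the remaining pixels keep the old background value."""
    ad = np.abs(frame - background)            # AD^k
    return np.where(ad < th, frame, background)

# Usage: start from the first frame and update with each new frame.
background = np.full((4, 4), 50.0)
frame = background.copy()
frame[1, 1] = 52.0    # small change (noise / luminance) -> absorbed
frame[2, 2] = 200.0   # large change (likely object)     -> kept out
background = tbm_update(background, frame)
print(background[1, 1], background[2, 2])      # 52.0 50.0
```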

TBM can be used to reduce the noise effect and the varying luminance effect. However, TBM has a border (deckle) effect: the border of the foreground is usually updated into the background. Wrong updating also occurs when the foreground and the background have similar colors. Thus, although TBM reduces the noise effect and the varying luminance effect, it also produces noise. In [5], the Long-Term Average Background Model (LTABM) is given as (3), or in recursive form as (4).

$$B^k_{x,y} = \frac{1}{k} \sum_{r=1}^{k} I^r_{x,y} \qquad (3)$$

$$B^k_{x,y} = \left(1 - \frac{1}{k}\right) B^{k-1}_{x,y} + \frac{1}{k} I^k_{x,y} \qquad (4)$$
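The recursive form (4) needs only the previous background and the frame counter. A minimal sketch with illustrative names, assuming frames are indexed from 1 as in (3):

```python
import numpy as np

def ltabm_update(background, frame, k):
    """Long-Term Average Background Model, recursive form (4):
    B^k = (1 - 1/k) * B^{k-1} + (1/k) * I^k, for frame index k >= 1."""
    return (1.0 - 1.0 / k) * background + (1.0 / k) * frame

# Running the recursion reproduces the plain average of all frames seen so far.
frames = [np.full((2, 2), v, dtype=np.float32) for v in (10.0, 20.0, 30.0)]
background = frames[0]                        # B^1 = I^1
for k, frame in enumerate(frames[1:], start=2):
    background = ltabm_update(background, frame, k)
print(background[0, 0])                       # 20.0 == mean(10, 20, 30)
```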

LTABM defines the background as the average of all past frames up to the current frame, which reduces the border effect. For LTABM, if the frame number k is very small, the weighting of each captured frame is very large, so noise in the captured frame is updated into the background. Conversely, if k is very large, the weighting of each captured frame is very small, so noise appears after background subtraction when the luminance varies. The Moving Average Background Model (MABM) alleviates this disadvantage of LTABM and is represented as (5).

$$B^k_{x,y} = \frac{1}{W} \sum_{r=k-W+1}^{k} I^r_{x,y}, \qquad I^i_{x,y} = I^0_{x,y} \;\;\text{for}\; -W < i < 0 \qquad (5)$$

where W is the moving-window length. The background is the average of the last W captured frames, which all share the same weighting; however, (5) cannot be written in recursive form, so a large memory is required. The RGABM [1][3][9] not only reduces the varying luminance effect and noise, but also updates the background. RGABM can be written in recursive form as (6).

$$B^k_{x,y} = \begin{cases} (1-\alpha)\, B^{k-1}_{x,y} + \alpha\, I^k_{x,y}, & k > 0 \\ I^0_{x,y}, & k = 0 \end{cases}, \qquad \alpha \in [0,1] \qquad (6)$$

where $\alpha$ is the background updating rate, typically 0.05. In equation (6), $B^{k-1}_{x,y}$ serves as the reference for $B^k_{x,y}$, tuned slightly by the background updating rate and by the difference between the captured frame and the background. RGABM has a strong ability to reduce the varying luminance effect and the noise effect.
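In code, (6) is a single per-pixel blend. The following minimal NumPy sketch is our own illustration, not the authors' implementation:

```python
import numpy as np

def rgabm_update(background, frame, alpha=0.05):
    """Running Gaussian Average Background Model, equation (6):
    B^k = (1 - alpha) * B^{k-1} + alpha * I^k for k > 0."""
    return (1.0 - alpha) * background + alpha * frame

# Step response at one pixel: an intensity change is absorbed exponentially.
background = np.zeros((1, 1), np.float32)
frame = np.ones((1, 1), np.float32)        # intensity steps from 0 to 1
for _ in range(30):
    background = rgabm_update(background, frame)
print(float(background[0, 0]))             # 1 - 0.95**30, roughly 0.785
```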

In this paper, the Improved Running Gaussian Average Background Model (IRGABM) is proposed. It not only improves accuracy by enhancing the noise reduction capability and reducing the varying luminance effect, but also updates the background effectively.

The performance of the background models with respect to noise reduction, background updating, and moving object segmentation false positives (mistaking still objects for moving ones) and false negatives (mistaking moving objects for still ones) will also be discussed in the following sections.

2 Improved Running Gaussian Average Background Model

2.1 New Approach

In general, a recursive-form background model has the characteristics of a low memory requirement and high computational speed; however, its level of accuracy is only acceptable. The noise effect and the varying luminance effect are the major causes of low accuracy: the noise effect produces rapid but small changes in the luminance of a frame, whereas the varying luminance effect produces slow but large changes. In order to reduce both effects, an appropriate updating speed is required.

Assume that there are no moving objects in the first captured frame, and that moving objects move fast enough to prevent them from being updated into the background. Based on these two assumptions, IRGABM can be described in recursive form as (7).

$$B^k_{x,y} = \begin{cases} (1-\alpha_1)\, B^{k-1}_{x,y} + \alpha_1\, I^k_{x,y}, & k > 0,\; AD^k_{x,y} < Th \\ (1-\alpha_2)\, B^{k-1}_{x,y} + \alpha_2\, I^k_{x,y}, & k > 0,\; AD^k_{x,y} > Th \\ I^0_{x,y}, & k = 0 \end{cases} \qquad (7)$$

where $B^k_{x,y}$ is the $k$th background frame at pixel $(x,y)$, $I^k_{x,y}$ is the $k$th captured frame at pixel $(x,y)$, $AD^k_{x,y}$ is the $k$th absolute difference frame at pixel $(x,y)$, i.e. $AD^k_{x,y} = \left|I^k_{x,y} - B^{k-1}_{x,y}\right|$, $\alpha_1$ and $\alpha_2$ are background updating rates with $\alpha_2 < \alpha_1$, and $Th$ is the threshold determined by the auto-thresholding algorithm (see Subsection 3.2). Suitable background updating rates $\alpha_1$ and $\alpha_2$ can be determined using the concept of background updating time (see Subsection 2.2). The larger the background updating rate, the faster the background updating speed. If $AD^k_{x,y}$ is smaller than the threshold $Th$, the pixel of the captured frame may correspond to noise or changing luminance, so a large background updating rate is required.


For the same reason, if $AD^k_{x,y}$ is larger than the threshold $Th$, the pixel of the captured frame may belong to a moving object, so a small background updating rate is required. With a fast background updating speed, the varying luminance effect and the noise effect are reduced; hence, the background updating rate $\alpha_2$ should be smaller than $\alpha_1$. As the luminance varies, the intensity of the captured frame becomes lighter or darker, and IRGABM absorbs this variation quickly if a proper background updating rate $\alpha_1$ is chosen. IRGABM thus provides noise effect reduction, varying luminance effect reduction, and background updating.
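As a sketch of the update rule (7), under our reading and with illustrative parameter values, the two rates are selected per pixel by comparing the absolute difference frame with the threshold:

```python
import numpy as np

def irgabm_update(background, frame, th, alpha1=0.1, alpha2=0.05):
    """IRGABM update, equation (7): pixels with a small absolute difference
    (noise / luminance change) use the faster rate alpha1, pixels with a
    large difference (likely moving object) use the slower rate alpha2."""
    ad = np.abs(frame - background)               # AD^k
    alpha = np.where(ad < th, alpha1, alpha2)     # per-pixel updating rate
    return (1.0 - alpha) * background + alpha * frame

# One update step: a small fluctuation is absorbed faster than a large change.
background = np.full((2, 2), 100.0)
frame = np.array([[103.0, 100.0],
                  [100.0, 220.0]])                # +3 noise, +120 object
background = irgabm_update(background, frame, th=20.0)
print(background)   # ~100.3 where the change was small, ~106 where it was large
```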

2.2 Background Updating Time

A background model with background updating usually also updates new fixed objects that appear in the scene. If the size and the lowest speed of the moving objects are known, the concept of background updating time can be used to set proper background updating rates; for IRGABM, these rates are $\alpha_1$ and $\alpha_2$. Assume a pixel P in the background has intensity 0, and the captured frame at pixel P changes its intensity from 0 to 1. The intensity of the background at pixel P then gradually updates from 0 to 1, and the time it takes to go from 0.1 to 0.9 is called the background updating time, as illustrated in Figure 1. If the moving objects are large or slow, a large background updating time is required; conversely, if the moving objects are small or fast, a small background updating time performs well.

Figure 1. Background Updating Time

Table 2 shows the background updating time of the background models. TIBM never updates the background and is therefore always affected by the varying luminance effect, noise, and new fixed objects. TBM updates luminance variations, but it also updates noise into the background and cannot update new fixed objects either. LTABM has a time-varying background updating time: it is small at the beginning, so the background updates quickly, but it becomes very large as time passes, so the background then updates very slowly. MABM has a fixed background updating time, and a proper moving length W can be chosen by experiment so that moving objects are detected with good performance. RGABM performs better than the models above, and IRGABM is more flexible than RGABM: it has two background updating rates, one for the reduction of the noise effect and the varying luminance effect, and one for background updating. By choosing suitable background updating rates in (7), the background absorbs new fixed objects quickly and moving objects are found with good accuracy.

Background Model    Background Updating Time
TIBM                $\infty$
TBM                 $0$ if $AD^k_{x,y} < Th$;   $\infty$ if $AD^k_{x,y} > Th$
LTABM               $\frac{80}{9}(k-1)$
MABM                $0.8\,W$
RGABM               $-\log_{(1-\alpha)} 9$
IRGABM              $-\log_{(1-\alpha_1)} 9$ if $AD^k_{x,y} < Th$;   $-\log_{(1-\alpha_2)} 9$ if $AD^k_{x,y} > Th$

Table 2. Background updating time for background models
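The closed-form entries for RGABM and IRGABM in Table 2 can also be used in reverse: given a desired background updating time, solve for the rate. A small worked sketch of this idea (the helper names and values are illustrative, not a tool from the paper):

```python
import math

def updating_time(alpha):
    """Frames needed for the running average to go from 0.1 to 0.9 of a unit
    step: -log_{(1-alpha)} 9, as listed in Table 2 for RGABM/IRGABM."""
    return -math.log(9.0) / math.log(1.0 - alpha)

def rate_for_updating_time(n_frames):
    """Invert the formula: the alpha whose updating time equals n_frames."""
    return 1.0 - 9.0 ** (-1.0 / n_frames)

print(round(updating_time(0.05), 1))           # ~42.8 frames for alpha = 0.05
print(round(updating_time(0.10), 1))           # ~20.9 frames for alpha = 0.10
print(round(rate_for_updating_time(42.8), 3))  # ~0.05, recovering the rate
```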

3 Moving Object Segmentation

3.1 Moving Object Segmentation Process

For moving object segmentation, the background model is constructed first. The background is subtracted from the captured frame to obtain the absolute difference frame, and the foreground frame is then constructed by thresholding the absolute difference frame. For the foreground frame, Cucchiara et al. [2] divide the foreground into an object part and a shadow part: the object part includes the moving object and ghosts, while the shadow part includes the moving object shadow and ghost shadows. In this paper, the shadow part is detected by shadow detection, and ghosts are gradually absorbed by the IRGABM background update. Therefore, the moving object can be found by eliminating the shadow parts and the ghosts. The flow chart of moving object segmentation is shown in Figure 2.

Figure 2. The Flow Chart of Moving Object Segmentation


3.2 Auto-Thresholding Algorithm

In the background subtraction case, moving objects usually do not occupy a large area of the frame, so it is reasonable to assume that the histogram of the absolute difference frame looks like Figure 3. In order to obtain a proper threshold for finding a suitable foreground, an auto-thresholding algorithm, the triangle algorithm [4], is used for background subtraction. First, construct a histogram of intensity versus number of pixels, as shown in Figure 3. Draw a line between the maximum value of the histogram, $h_{max}$, and the minimum value, $h_{min}$, and calculate the distance D between the line and the histogram for each intensity h from $h_{min}$ to $h_{max}$. The threshold $Th$ is the h for which the distance D is maximized. Substituting this threshold into (7) and (8), intensities below the threshold are regarded as background and intensities above it as foreground.

$$F^k_{x,y} = \begin{cases} 1, & AD^k_{x,y} > Th \\ 0, & AD^k_{x,y} < Th \end{cases} \qquad (8)$$

where $F^k_{x,y}$ is the foreground of the $k$th frame.
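A compact sketch of the triangle rule described above together with the foreground test (8), assuming an 8-bit absolute difference image; the function names are ours:

```python
import numpy as np

def triangle_threshold(ad_frame, bins=256):
    """Triangle auto-threshold [4]: draw a line from the histogram peak to the
    last non-empty bin and pick the bin whose histogram point is farthest
    (perpendicularly) from that line."""
    hist, _ = np.histogram(ad_frame, bins=bins, range=(0, bins))
    peak = int(np.argmax(hist))                     # tallest bin
    last = int(np.nonzero(hist)[0][-1])             # last non-empty bin
    if last <= peak:
        return peak
    # Line from (peak, hist[peak]) to (last, hist[last]); distance of each bin.
    x = np.arange(peak, last + 1, dtype=np.float64)
    y = hist[peak:last + 1].astype(np.float64)
    dx, dy = last - peak, float(hist[last] - hist[peak])
    dist = np.abs(dy * (x - peak) - dx * (y - hist[peak])) / np.hypot(dx, dy)
    return peak + int(np.argmax(dist))              # Th: bin of maximal distance

def foreground_mask(ad_frame, th):
    """Equation (8): foreground where the absolute difference exceeds Th."""
    return ad_frame > th

# Usage: mostly small differences (background) plus a patch of large ones.
rng = np.random.default_rng(0)
ad = rng.integers(0, 8, size=(120, 160)).astype(np.uint8)
ad[30:60, 40:80] = 180                              # a moving-object region
th = triangle_threshold(ad)
print(th, int(foreground_mask(ad, th).sum()))       # threshold, ~30*40 pixels
```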

Figure 3. Triangle Algorithm

3.3 Shadow Detection

In a general scene, if a shadow is cast on the background, the shadow region becomes darker in luminosity but varies little in chromaticity [11][12]. Based on this characteristic, the pixels of the image are analyzed in the HSV (Hue, Saturation, Value) color model, because the HSV color model separates chromaticity from luminosity: hue and saturation carry the chromaticity information, and value carries the luminosity information. A shadow pixel mostly has a lower value and only small variations in hue and saturation. Hence, shadow detection can be written as (9).

$$Shadow^k_{x,y} = \begin{cases} 1, & \text{if } D_H \le \tau_H,\;\; \left|I^k_{x,y}.S - B^k_{x,y}.S\right| \le \tau_S,\;\; \alpha_V \le \dfrac{I^k_{x,y}.V}{B^k_{x,y}.V} \le \beta_V \\ 0, & \text{otherwise} \end{cases} \qquad (9)$$

where $D_H = \min\!\left(\left|I^k_{x,y}.H - B^k_{x,y}.H\right|,\; 360 - \left|I^k_{x,y}.H - B^k_{x,y}.H\right|\right)$ and $\alpha_V, \beta_V \in [0,1]$. With shadow detection, most of the shadow can be removed, and the moving object can be found.

$$MO^k_{x,y} = \begin{cases} F^k_{x,y}, & Shadow^k_{x,y} = 0 \\ 0, & Shadow^k_{x,y} = 1 \end{cases} \qquad (12)$$

where $MO^k_{x,y}$ is the moving object.
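A sketch of the shadow test (9) combined with the mask rule (12); the threshold values for $\tau_H$, $\tau_S$, $\alpha_V$, and $\beta_V$ below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def shadow_mask(frame_hsv, bg_hsv, alpha_v=0.4, beta_v=0.95, tau_s=0.15, tau_h=30.0):
    """Equation (9): a pixel is shadow if its value ratio lies in [alpha_v, beta_v]
    and its hue/saturation stay close to the background's.
    frame_hsv/bg_hsv: float arrays (..., 3) with H in degrees, S and V in [0, 1]."""
    dh = np.abs(frame_hsv[..., 0] - bg_hsv[..., 0])
    dh = np.minimum(dh, 360.0 - dh)                       # circular hue distance D_H
    ds = np.abs(frame_hsv[..., 1] - bg_hsv[..., 1])
    ratio = frame_hsv[..., 2] / np.maximum(bg_hsv[..., 2], 1e-6)
    return (dh <= tau_h) & (ds <= tau_s) & (alpha_v <= ratio) & (ratio <= beta_v)

def moving_object(foreground, shadow):
    """Equation (12): keep foreground pixels that are not shadow."""
    return foreground & ~shadow

# One shadowed pixel (darker, same chroma) and one true object pixel.
bg = np.array([[[30.0, 0.5, 0.8], [30.0, 0.5, 0.8]]])
fr = np.array([[[32.0, 0.5, 0.5], [200.0, 0.9, 0.7]]])
fg = np.array([[True, True]])
sh = shadow_mask(fr, bg)
print(moving_object(fg, sh))   # [[False  True]]: the shadow pixel is dropped
```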

The moving object is thus the foreground after eliminating the shadow and the ghost. However, small parts of the moving object may also be removed if its chroma is close to that of the background.

4 Experimental Results

We separate the experiments into four parts. The first part is the analysis of the varying luminance effect and noise for the background models: we consider environment noise and camera noise in image sequences captured from a CCD camera, as well as the varying luminance effect, and compare the background models using statistical analysis. In the second part, false positives and false negatives of the background models are discussed. In the third part, we discuss moving object segmentation with shadow detection; IRGABM turns out to be the best among the recursive-form background models. The fourth part discusses the computational speed of recursive-form background models and other fast background models.

4.1 Analysis of Varying Luminance Effect and Noise for Background Models

In this subsection, the varying luminance effect and the noise effect in the image sequence are analyzed. Image sequences with moving objects ought to be considered; however, moving objects and their shadows are neither noise nor varying luminance, so they should not be included in this analysis. The rest of the image frame is a static scene, and hence only the static scene is considered in this subsection.

In order to distinguish the performance of background models, first define the average (AV) of the absolute difference error as (10).

$$AV = \frac{1}{L} \sum_{k=0}^{L} \left( \frac{1}{MN} \sum_{x=0}^{M} \sum_{y=0}^{N} AD^k_{x,y} \right) \qquad (10)$$

where L is the total number of captured frames. The standard deviation (SD) of the absolute difference error is defined as (11).

$$SD = \frac{1}{L} \sum_{k=0}^{L} \left[ \frac{1}{MN} \sum_{x=0}^{M} \sum_{y=0}^{N} \left( AD^k_{x,y} - AV \right)^2 \right] \qquad (11)$$
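A direct transcription of (10) and (11) over a stack of absolute difference frames (our own helper, assuming the AD frames have already been computed; note that SD is used exactly as defined in (11), without a square root):

```python
import numpy as np

def noise_statistics(ad_frames):
    """Equations (10) and (11): AV is the mean of AD over all pixels and frames,
    and SD is the mean squared deviation of AD from AV over the same set.
    ad_frames: array of shape (L, M, N) of absolute difference frames."""
    av = float(np.mean(ad_frames))                 # (10)
    sd = float(np.mean((ad_frames - av) ** 2))     # (11), as defined above
    return av, sd

# Example: low-amplitude noise in most frames, one abruptly noisier frame.
rng = np.random.default_rng(1)
ads = rng.uniform(0.0, 4.0, size=(50, 120, 160))
ads[25] += 10.0                                    # an abrupt disturbance
print(noise_statistics(ads))                       # AV ~ 2.2; the abrupt frame inflates SD
```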

The average of the absolute difference error and the standard deviation of the absolute difference error represent the mean of the noise and the standard deviation of the noise, respectively; the smaller these two statistical values, the better.


Experiment   Background Scene   Luminance Varying
(A)          Simple             No
(B)          Complex            No
(C)          Simple             Yes
(D)          Complex            Yes

Table 3. Experiment conditions

Four image sequences (each with 300 captured frames) are then used to compare the background models. The four experiments and their conditions are shown in Table 3.

Experiments (A) and (B) have no luminance variation; only noise from the environment or the camera is present. Experiments (C) and (D) have luminance variation: in addition to environment and camera noise, the luminance is varied with a fluorescent lamp. The gray scale is set so that white is 255 and black is 0. The parameters are chosen as Th = 5 for TBM, W = 15 frames for MABM, a background updating rate $\alpha$ = 0.05 for RGABM, and $\alpha_1$ = 0.1, $\alpha_2$ = 0.05 with Th determined by the triangle algorithm for IRGABM. The experimental results for the average error per frame are shown in Figure 4. IRGABM reduces the noise effect and the varying luminance effect better than the other recursive-form background models.

In Figure 4, the convergence speed of noise reduction for IRGABM is faster than for the other models when abrupt noise occurs. The average and the standard deviation of the absolute difference error for the recursive-form background models are shown in Table 4 and Table 5, respectively. As the results show, IRGABM improves the noise effect reduction and the varying luminance effect reduction: for the average of the absolute difference error, IRGABM improves on RGABM by about 5.93% to 24.71%, and for the standard deviation, by about 3.38% to 16.62%. The performance of IRGABM is better than that of the other five background models.

Experiment   (A)      (B)      (C)       (D)
TIBM         3.4743   3.5907   12.2718   9.8507
TBM          2.1365   2.2565   12.3072   8.9622
LTABM        2.0262   2.0801   7.9454    7.4763
MABM         2.1095   2.1759   7.7963    7.8448
RGABM        1.9250   1.9985   5.1597    5.1904
IRGABM       1.8006   1.8799   4.0268    3.9078

Table 4. The average of the absolute difference error for the background models

Experiment   (A)      (B)      (C)      (D)
TIBM         0.2680   0.2220   5.4517   5.6034
TBM          0.1937   0.1324   5.9436   5.2175
LTABM        0.2121   0.1369   5.9834   2.8686
MABM         0.2656   0.2453   6.3659   2.6779
RGABM        0.2033   0.1631   5.2042   3.2423
IRGABM       0.1739   0.1360   4.6753   3.1328

Table 5. The standard deviation of the absolute difference error for the background models

Figure 4. The average error per frame vs. frame number for experiments (A)-(D)

4.2 False Positive and False Negative for Background Models

Subsection 4.1 shows that MABM, RGABM, and IRGABM are better than the other three background models; therefore, only MABM, RGABM, and IRGABM are discussed in the following.


In this experiment, the background scene is designed to be complex in intensity, with intensities uniformly distributed from 0 to 255, as shown in Figure 5. The image sequence contains 600 captured frames. Luminance changes are produced with a fluorescent lamp, a new pink fixed object is introduced from frame #320 to frame #600, and a green moving object moves sometimes fast and sometimes slowly. Frames #160 and #487 are shown in Figure 6. The background models are judged by false positives and false negatives, shown in Figure 7 and Figure 8, respectively, and the averages over all frames are shown in Table 7. Compared with RGABM, IRGABM improves the false positives and the false negatives by 25.5% and 54.7%, respectively.
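For reference, a pixel-level way to score a detected mask against a ground-truth mask, matching the false positive / false negative definitions used here; the mask shapes and names are illustrative:

```python
import numpy as np

def fp_fn(detected, ground_truth):
    """False positive: detected as moving but actually still.
    False negative: actually moving but not detected.
    Both are returned in pixels per frame, as in Table 7."""
    detected = detected.astype(bool)
    ground_truth = ground_truth.astype(bool)
    false_positive = int(np.count_nonzero(detected & ~ground_truth))
    false_negative = int(np.count_nonzero(~detected & ground_truth))
    return false_positive, false_negative

# 320x240 frame: the detection overlaps the true object but spills over a little.
gt = np.zeros((240, 320), bool)
gt[100:150, 100:160] = True        # true moving-object region
det = np.zeros((240, 320), bool)
det[105:155, 95:160] = True        # detected mask
print(fp_fn(det, gt))              # (FP, FN) pixel counts for this frame
```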

Figure 5. Uniformly distributed background scene

Background       MABM    RGABM   IRGABM
False positive   477.9   408.6   304.3
False negative   183.4   286.4   129.7

(320 x 240 pixels per frame; the unit is pixels per frame. The average area of the moving object is 2532 pixels.)

Table 7. Average of false positive and false negative

4.3 Moving Object Segmentation with Shadow Removing

Shadow detection can be used to remove the moving object shadow and the ghost shadow. For the real indoor scene in Figure 9 and Figure 10, the captured frames are on the left, the absolute difference frames are in the middle, and the moving object segmentation after removing the shadows by shadow detection is shown on the right. Shadow detection clearly removes the ghost shadow and the moving object shadow; however, small parts of the moving object may also be removed if the chroma of the foreground is close to that of the background. Finally, the moving object can be extracted from the foreground frame.

4.4 Computational Speed

The computational speed is discussed in this subsection. The same image sequence is used to test the computational speed of the background models: it has 300 frames with a resolution of 320x240 pixels, and a computer with a quad-core 2.40 GHz CPU and 2 GB of RAM is used for the experiment. The result is shown in Table 8. IRGABM is slower than RGABM, but its accuracy is better; there is a tradeoff between accuracy and computational speed. Nevertheless, IRGABM is still a fast background model.

Figure 6. Frames #160 and #487: the captured frames and the backgrounds produced by TIBM, TBM, LTABM, MABM, RGABM, and IRGABM

Background Model      TIBM     TBM     LTABM    MABM    RGABM    IRGABM
Computational speed   454.55   84.03   106.76   47.10   157.07   71.09

Table 8. Comparison of computational speed (fps) between recursive-form background models and other fast background models


Figure 7. False Positive

Figure 8. False Negative

Figure 9. Shadow Detection in Frame # 86

Figure 10. Shadow Detection in Frame # 89

5 Conclusion

This paper presents IRGABM, a background model with fast computational speed and a low memory requirement for moving object segmentation. The first objective is to find the foreground. IRGABM is a better background model for noise reduction and varying luminance effect reduction, and it also provides background updating. The foreground is found by background subtraction, with the threshold determined by an auto-thresholding algorithm; the auto-thresholding algorithm we suggest is the triangle algorithm, which is well suited to finding the foreground in moving object segmentation. The background updating time can be used to tune suitable background updating rates so that the background is updated properly and a suitable foreground is found: noise and the varying luminance effect call for a larger background updating rate, while moving objects call for a smaller one.

The second objective is moving object segmentation, for which we introduce a foreground classification. The foreground includes the moving object, ghosts, the moving object shadow, and ghost shadows. Shadows and ghost shadows are eliminated by shadow detection, and ghosts are gradually absorbed by the IRGABM background update. Hence, moving objects are found by eliminating shadows, ghost shadows, and ghosts.

In the experimental results, IRGABM, with its two tunable background updating rates, reduces the noise effect and the varying luminance effect and achieves both a low false positive and a low false negative.

In this paper we discuss recursive-form background models with fast computational speed and low memory requirements. IRGABM reduces the noise effect and the varying luminance effect; in other words, IRGABM is more accurate than the other recursive-form background models. IRGABM also updates the background through tunable background updating rates, and choosing these rates using the concept of background updating time yields proper background updating. The triangle algorithm provides IRGABM with an auto-threshold to distinguish foreground from background, and shadow detection then removes the moving object shadow and the ghost shadow so that the moving objects can be extracted. Following this procedure, the moving object is found, and IRGABM achieves good accuracy.

References

[1] C.R. Wren, A. Azarbayejani, T. Darrell, A.P. Pentland, "Pfinder: Real-Time Tracking of the Human Body," IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(7), pp. 780-785, 1997.

[2] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, "Detecting Moving Objects, Ghosts, and Shadows in Video Streams," IEEE Trans. on Pattern Analysis and Machine Intelligence, 25(10), pp.1337-1342, 2003.

[3] M. Piccardi, "Background Subtraction Techniques: A Review," IEEE International Conference on Systems, Man and Cybernetics, 4, pp. 3099-3104, 2004.


[4] E. Hodneland, "Segmentation of digital images," Cand. Scient. thesis, Department of Mathematics, University of Bergen, 2003. Available online at http://www.mi.uib.no/%7Etai

[5] N. Friedman, S. Russell, "Image Segmentation in Video Sequences: A Probabilistic Approach," Proc. of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI), 1997.

[6] C. Stauffer, W.E.L. Grimson, "Adaptive Background Mixture Models for Real-Time Tracking," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2(23-25), pp. 246-252, 1999.

[7] A.M. Elgammal, D. Harwood, and L.S. Davis, "Non-parametric Model for Background Subtraction," Proceedings of ECCV 2000, pp. 751-767, 2000.

[8] N.M. Oliver, B. Rosario, A.P. Pentland, "A Bayesian Computer Vision System for Modeling Human Interactions," IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(8), pp. 831-843, 2000.

[9] D. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell, "Towards Robust Automatic Traffic Scene Analysis in Real-Time," Proceedings of the 12th IAPR International Conference on Computer Vision & Image Processing, 1, pp. 126-131, 1994.

[10] T.B. Moeslund, A. Hilton, and V. Krüger, "A Survey of Advances in Vision-Based Human Motion Capture and Analysis," Computer Vision and Image Understanding, 2006.

[11] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, S. Sirotti, "Improving Shadow Suppression in Moving Object Detection with HSV Color Information," Proc. IEEE Int'l Conference on Intelligent Transportation Systems, pp. 334-339, 2001.

[12] A. Prati, R. Cucchiara, I. Mikic, M.M. Trivedi, "Analysis and Detection of Shadows in Video Streams: A Comparative Evaluation," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2(2), pp. II-571- II-576, 2001.

31