Segmentation of Vehicles in Traffic Video Tun-Yu Chiang Wilson Lau


TRANSCRIPT

Page 1: Segmentation of Vehicles in Traffic Video

Tun-Yu Chiang
Wilson Lau

Page 2: Introduction

Motivation:
• In CA alone there are more than 400 road-monitoring cameras, with plans to install more
• Reduce video bit-rate by object-based coding
• Collect traffic-flow data and/or support surveillance, e.g. count vehicles passing on a highway, draw attention to abnormal driver behavior

Two different approaches to segmentation:
• Motion Segmentation
• Gaussian Mixture Model

Page 3: Motion Segmentation

Segment regions with coherent motion. Coherent motion: similar parameters in the motion model.

Steps:
i. Estimate a dense optic flow field
ii. Iterate between motion parameter estimation and region segmentation
iii. Segment by k-means clustering of the motion parameters

Translational model: use the motion vectors directly as parameters

Page 4: Optic Flow Estimation

Optic flow (or spatio-temporal constraint) equation:

Ix · vx + Iy · vy + It = 0, where Ix, Iy, It are the spatial and temporal derivatives

Problems:
i. Under-constrained: add a 'smoothness constraint' – assume the flow field is constant over a 5x5 neighbourhood window, giving a weighted LS solution (see the sketch below)
ii. The 'small flow' assumption is often not valid: e.g. at 1 pixel/frame, an object takes 10 s (300 frames at 30 fps) to move across a width of 300 pixels, motivating a multi-scale approach
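A minimal sketch of the per-window LS solution, assuming grayscale NumPy frames (the window weighting mentioned above is omitted, so this is plain rather than weighted LS, and the function name is ours):

```python
import numpy as np

def lucas_kanade_flow(prev, curr, win=5):
    """Solve Ix*vx + Iy*vy + It = 0 in the least-squares sense over a
    win x win neighbourhood around each pixel (Lucas-Kanade style)."""
    Ix = np.gradient(prev.astype(float), axis=1)   # spatial derivatives
    Iy = np.gradient(prev.astype(float), axis=0)
    It = curr.astype(float) - prev.astype(float)   # temporal derivative
    half = win // 2
    h, w = prev.shape
    vx, vy = np.zeros((h, w)), np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            ix = Ix[y - half:y + half + 1, x - half:x + half + 1].ravel()
            iy = Iy[y - half:y + half + 1, x - half:x + half + 1].ravel()
            it = It[y - half:y + half + 1, x - half:x + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)          # 25 x 2 system A v = -it
            AtA = A.T @ A
            if np.linalg.det(AtA) > 1e-6:           # skip ill-conditioned windows
                vx[y, x], vy[y, x] = np.linalg.solve(AtA, -A.T @ it)
    return vx, vy
```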

Page 5: Multi-scale Optic Flow Estimation

[Figure: three-level image pyramid – Level 0 (original resolution, 4 pixels/frame), Level 1 (2 pixels/frame), Level 2 (1 pixel/frame)]

• Iteratively Gaussian-filter and sub-sample by 2 to get a 'pyramid' of lower-resolution images
• Project and interpolate the LS solution from the higher level, which then serves as the initial estimate for the current level
• Use the estimates to 'pre-warp' one frame to satisfy the small-motion assumption
• The LS solution at each level refines the previous estimates; see the sketch below
• Problem: error propagation – temporal smoothing is essential at the higher levels
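A minimal coarse-to-fine sketch of this scheme, assuming grayscale NumPy frames and reusing the lucas_kanade_flow sketch from the previous page (SciPy's gaussian_filter, zoom and map_coordinates stand in for the filtering, projection and pre-warping steps):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates, zoom

def build_pyramid(img, levels=3):
    """Gaussian pyramid: low-pass filter, then sub-sample by 2 at each level."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], sigma=1.0)[::2, ::2])
    return pyr                      # pyr[0] = full resolution, pyr[-1] = coarsest

def coarse_to_fine_flow(prev, curr, levels=3):
    """Estimate flow at the coarsest level, then project, scale and refine
    the estimate at each finer level after pre-warping the second frame."""
    p_prev, p_curr = build_pyramid(prev, levels), build_pyramid(curr, levels)
    vx = vy = None
    for lvl in range(levels - 1, -1, -1):
        a, b = p_prev[lvl], p_curr[lvl]
        if vx is None:
            vx, vy = np.zeros_like(a), np.zeros_like(a)
        else:
            # project the coarser estimate: upsample and double the magnitude
            vx = 2 * zoom(vx, 2)[:a.shape[0], :a.shape[1]]
            vy = 2 * zoom(vy, 2)[:a.shape[0], :a.shape[1]]
        # pre-warp the second frame with the current estimate (small-motion assumption)
        ys, xs = np.indices(a.shape)
        warped = map_coordinates(b, [ys + vy, xs + vx], order=1, mode='nearest')
        dvx, dvy = lucas_kanade_flow(a, warped)     # residual flow at this level
        vx, vy = vx + dvx, vy + dvy
    return vx, vy
```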

Page 6: Results: Optic flow field estimation

Page 7: Results: Optical flow field estimation

• Smoothing of motion vectors across motion (object) boundaries, due to the smoothness constraint (5x5 window) added to solve the optic flow equation; further exacerbated by the multi-scale approach
• Occlusions and other assumption violations (e.g. constant intensity) give 'noisy' motion estimates

Page 8: Segmentation

• Extract regions of interest by thresholding the magnitude of the motion vectors
• For each connected region, perform k-means clustering using the feature vector [vx, vy, x, y, R, G, B] (motion vectors, pixel coordinates, color intensities); see the sketch below
• Color intensities give information on object boundaries, to counter the smoothing of motion vectors across edges in the optic flow estimate
• Remove small, isolated regions
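A minimal sketch of this step, assuming the flow fields from the earlier sketches, an RGB frame as a NumPy array, and scikit-learn's KMeans (the feature normalization and the size threshold of 50 pixels are our own assumptions, not stated on the slide):

```python
import numpy as np
from scipy.ndimage import label
from sklearn.cluster import KMeans

def segment_moving_regions(vx, vy, rgb, motion_thresh=1.0, k=2):
    """Threshold the flow magnitude, then cluster each connected region by the
    feature vector [vx, vy, x, y, R, G, B]."""
    moving = np.hypot(vx, vy) > motion_thresh        # regions of interest
    regions, n_regions = label(moving)               # connected components
    out = np.zeros(moving.shape, dtype=int)
    next_label = 1
    for r in range(1, n_regions + 1):
        ys, xs = np.nonzero(regions == r)
        if ys.size < 50:                             # drop small, isolated regions
            continue
        feats = np.column_stack([
            vx[ys, xs], vy[ys, xs],                  # motion vectors
            xs, ys,                                  # pixel coordinates
            rgb[ys, xs, 0], rgb[ys, xs, 1], rgb[ys, xs, 2],  # color intensities
        ]).astype(float)
        feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
        out[ys, xs] = labels + next_label            # distinct labels per cluster
        next_label += k
    return out
```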

Page 9: Segmentation Results

• Simple translational motion model is adequate
• Camera motion: unable to segment the car in the background
• The 2-pixel border at level 2 of the image pyramid (from the 5x5 neighbourhood window) translates to an 8-pixel border region at full resolution

Page 10: Segmentation Results

• Unsatisfactory segmentation when the optic flow estimate is noisy
• Further work:
  – Add a temporal continuity constraint for objects
  – Improve the optic flow estimation, e.g. Total Least Squares
  – Assess the reliability of each motion vector estimate and incorporate it into the segmentation

Page 11: Gaussian Background Mixture Model

• Per-pixel model: each pixel is modeled as a sum of K weighted Gaussians, K = 3~5
• The weights reflect how frequently each Gaussian is identified as part of the background
• The model is updated adaptively with a learning rate and the new observation

P(X_t) = Σ_{k=1}^{K} w_{k,t} · η(X_t, μ_{k,t}, Σ_{k,t})

where X_t = (X_{r,t}, X_{g,t}, X_{b,t})^T is the pixel value at time t, η is the Gaussian density, μ_{k,t} = (μ_{r,k,t}, μ_{g,k,t}, μ_{b,k,t})^T, and Σ_{k,t} = σ_{k,t}^2 · I

Page 12: Segmentation Algorithm

Matching criterion:
• If no match is found, the pixel is foreground
• If a match is found, the background is the average of the high-ranking Gaussians and the foreground is the average of the low-ranking Gaussians

Update formulas:
• Update the weights
• Update the matched Gaussian when a match is found
• If no match is found, replace the least probable Gaussian with the new observation

[Diagram: matching and model updating – a new observation is classified as background or foreground]

Matching criterion: |X_t − μ_{k,t}| < 2.5 · σ_{k,t}

Gaussians are ranked by w/σ.

Weight update: w_{k,t} = (1 − α) · w_{k,t−1} + α · M_{k,t}, where M_{k,t} = 1 for the matched Gaussian and 0 otherwise

Update of the matched Gaussian m:
μ_{m,t} = (1 − ρ) · μ_{m,t−1} + ρ · X_t
σ_{m,t}^2 = (1 − ρ) · σ_{m,t−1}^2 + ρ · (X_t − μ_{m,t})^T (X_t − μ_{m,t})
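A minimal per-pixel sketch of these matching and update rules, assuming scalar (grayscale) intensities, ρ = α, and a background-weight threshold of 0.7; these specific simplifications are ours, not values from the slides:

```python
import numpy as np

class PixelGMM:
    """Mixture of K Gaussians for one pixel, following the rules above."""
    def __init__(self, K=3, alpha=0.01, init_var=30.0):
        self.w = np.full(K, 1.0 / K)        # mixture weights
        self.mu = np.zeros(K)               # means
        self.var = np.full(K, init_var)     # variances
        self.alpha = alpha                  # learning rate

    def update(self, x):
        """Update the model with observation x; return True if x is background."""
        sd = np.sqrt(self.var)
        match = np.abs(x - self.mu) < 2.5 * sd                    # matching criterion
        self.w = (1 - self.alpha) * self.w + self.alpha * match   # weight update
        self.w /= self.w.sum()
        if not match.any():
            # no match found: replace the least probable Gaussian with x
            m = int(np.argmin(self.w / sd))
            self.mu[m], self.var[m], self.w[m] = x, 30.0, 0.05
            self.w /= self.w.sum()
            return False                                          # pixel is foreground
        # update the best-matching Gaussian (rho = alpha kept for simplicity)
        m = int(np.argmax(np.where(match, self.w / sd, -np.inf)))
        rho = self.alpha
        self.mu[m] = (1 - rho) * self.mu[m] + rho * x
        self.var[m] = (1 - rho) * self.var[m] + rho * (x - self.mu[m]) ** 2
        # background = highest-ranking Gaussians (by w/sigma) up to cumulative weight 0.7
        order = np.argsort(-(self.w / np.sqrt(self.var)))
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), 0.7)) + 1
        return m in order[:n_bg]
```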

Page 13: Segmentation Result 1

• Background: "disappearing" electrical pole, blurring in the trees
• Lane marks appear in both the foreground and the background

Page 14: Segmentation Result 2

Cleaner background: the beginning of the original sequence is purely background, so the background model was built faster.

Page 15: Segmentation Result 3

Smaller global motion in the original sequence: cleaner foreground and background.

Page 16: Parameters matter

• The learning rate α affects how fast the background model incorporates new observations
• K affects how sharp the detailed regions appear

(See the usage sketch below for where these two parameters plug in.)
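For context, OpenCV's MOG2 background subtractor implements this family of per-pixel mixture models; a hypothetical usage sketch (the video file name is a placeholder, and the specific parameter values are illustrative only):

```python
import cv2

# MOG2 is an adaptive Gaussian-mixture background model
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
mog2.setNMixtures(5)                    # K: number of Gaussians per pixel

cap = cv2.VideoCapture("traffic.mp4")   # placeholder sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # learningRate plays the role of alpha: larger values make the model
    # absorb new observations faster
    fg_mask = mog2.apply(frame, learningRate=0.01)
```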

Page 17: Artifacts: Global Motion

• Constant small motion caused by the hand-held camera
• Blurring of the background
• Lane marks (vertical motion) and the electrical pole (horizontal motion)

Page 18: Global Motion Compensation

• We used Phase Correlation Motion Estimation
• Block-based method
• Computationally inexpensive compared to block matching

Page 19: Segmentation After Compensation

Page 20: Segmentation After Compensation

• Corrects the artifacts mentioned before
• Still has problems: residue disappears more slowly, even with the same learning rate

Page 21: Q & A

Page 22: Mixture model fails when …

• Constant repetitive motion (jittering)
• High contrast between neighboring values (edge regions)

In these cases the object appears in both the foreground and the background.

Page 23: Phase Correlation Motion Estimation

Use block-based Phase Correlation Function (PCF) to estimate translation vectors.

If the two blocks are related by a pure translation d, i.e. f_2(x) = f_1(x − d), then F_2(f) = F_1(f) · e^{−j2π f^T d}, and

PCF(x) = F^{−1}[ F_1*(f) F_2(f) / |F_1*(f) F_2(f)| ](x) = F^{−1}[ e^{−j2π f^T d} ](x) = δ(x − d)

so the location of the PCF peak gives the translation vector d.
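A minimal sketch of the block-level computation, assuming square NumPy blocks and using NumPy's FFT (the conjugation order follows the convention in the reconstruction above; the result is exact only for circular shifts):

```python
import numpy as np

def phase_correlation(block1, block2):
    """Estimate the integer translation of block2 relative to block1 from
    the peak of the Phase Correlation Function."""
    F1, F2 = np.fft.fft2(block1), np.fft.fft2(block2)
    cross = np.conj(F1) * F2
    pcf = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))   # approx. a delta at the shift
    dy, dx = np.unravel_index(np.argmax(np.abs(pcf)), pcf.shape)
    # shifts beyond half the block size wrap around to negative displacements
    h, w = block1.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dx), int(dy)

# example: block2 is block1 circularly shifted by (dy=3, dx=-2)
rng = np.random.default_rng(0)
block1 = rng.random((32, 32))
block2 = np.roll(block1, shift=(3, -2), axis=(0, 1))
print(phase_correlation(block1, block2))   # -> (-2, 3)
```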

Page 24: Introduction

Page 25: Our Experiment

Obtaining test data:
• We shot our own test sequences at the intersection of Page Mill Rd. and I-280
• Only translational motions are included in the sequences

Segmentation:
• Tun-Yu experimented with the Gaussian mixture model
• Wilson experimented with motion segmentation