Vehicle Detection in Aerial Surveillance Using Dynamic Bayesian Networks

Hsu-Yung Cheng, Member, IEEE, Chih-Chia Weng, and Yi-Ying Chen


Page 1: Hsu-Yung Cheng, Member, IEEE, Chih-Chia Weng, and Yi-Ying Chen

Hsu-Yung Cheng, Member, IEEE, Chih-Chia Weng, and Yi-Ying Chen

Vehicle Detection in Aerial Surveillance Using Dynamic

Bayesian Networks

Page 2:

Introduction

Proposed vehicle detection framework

Experimental Results

Conclusion

Outline

Page 3:

These technologies have a variety of applications, such as military, police, and traffic management.

Cheng and Butler performed color segmentation via mean-shift algorithm and motion analysis via change detection.

Introduction

Page 4:

Choi and Yang proposed a vehicle detection algorithm using the symmetric property of car shapes.

In this paper, we design a new vehicle detection framework that preserves the advantages of the existing works and avoids their drawbacks.

Introduction

Page 5:

Introduction

Page 6:

A. Background Color Removal

B. Feature Extraction
  1) Local Feature Analysis
  2) Color Transform and Color Classification

C. Dynamic Bayesian Network (DBN)

Proposed vehicle detection framework

Page 7:

Since nonvehicle regions cover most of the scene in aerial images, we quantize the color histogram into 16 × 16 × 16 bins.

Colors corresponding to the eight highest bins are regarded as background colors and removed from the scene.

Background Color Removal
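The background-removal step above can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: the function name, the toy image, and the exact masking policy are our assumptions.

```python
import numpy as np

def remove_background_colors(img, bins=16, n_bg=8):
    """Quantize RGB into a bins^3 color histogram and mask out the n_bg
    most frequent colors, which are assumed to be background."""
    h, w, _ = img.shape
    step = 256 // bins                                 # width of each color bin
    q = (img // step).reshape(-1, 3).astype(np.int64)  # quantized color per pixel
    codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    counts = np.bincount(codes, minlength=bins ** 3)
    bg_codes = np.argsort(counts)[-n_bg:]              # eight highest bins
    mask = ~np.isin(codes, bg_codes)                   # True where a pixel is kept
    return mask.reshape(h, w)

# Toy scene: eight frequent "background" grays plus one rare red pixel.
img = np.zeros((8, 10, 3), dtype=np.uint8)
for r in range(8):
    img[r, :, :] = r * 32
img[0, 0] = [255, 0, 0]
mask = remove_background_colors(img)
print(mask.sum())   # only the single red pixel survives
```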

Page 8:

p_j = n_j / n

For the Canny edge detector, the hysteresis thresholds are T_max = T (obtained by moment-preserving threshold selection) and T_min = 0.1 × (G_max − G_min). The Harris detector is used for the corners.

Feature Extraction- Local Feature Analysis
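The threshold computation on this slide can be sketched as below. The moment-preserving selection of T_max is not reproduced; it is passed in as a precomputed value, which is an assumption of this sketch.

```python
import numpy as np

def hysteresis_thresholds(gray, t_max):
    """Compute the Canny hysteresis thresholds described on the slide.
    t_max is assumed to come from moment-preserving threshold selection;
    t_min = 0.1 * (G_max - G_min)."""
    # Normalized gray-level histogram: p_j = n_j / n
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    g_max, g_min = int(gray.max()), int(gray.min())
    t_min = 0.1 * (g_max - g_min)
    return t_min, t_max, p

gray = np.array([[0, 100], [200, 255]], dtype=np.uint8)
t_min, t_max, p = hysteresis_thresholds(gray, t_max=128)
print(t_min)   # 0.1 * (255 - 0) = 25.5
```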

Page 9:

A new color model transforms the (R, G, B) color components into a two-dimensional color domain (Eqs. (3) and (4) in the paper).

Feature Extraction-Color Transform and Color Classification

Page 10:

Feature Extraction - Color Transform and Color Classification

Use an n × m pixel block to train an SVM model to classify color.
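The block-based input to the color classifier can be sketched as below. Only the feature construction is shown; the trained SVM itself is not reproduced, and the block size and mean-color features are assumptions of this sketch.

```python
import numpy as np

def block_features(img, n=4, m=4):
    """Split the image into n x m pixel blocks and return one mean-color
    feature vector per block; these vectors would be the inputs to the
    SVM color classifier (classifier not shown)."""
    h, w, c = img.shape
    feats = []
    for y in range(0, h - n + 1, n):
        for x in range(0, w - m + 1, m):
            block = img[y:y + n, x:x + m].reshape(-1, c)
            feats.append(block.mean(axis=0))   # mean RGB over the block
    return np.array(feats)

img = np.zeros((8, 8, 3), dtype=np.uint8)
feats = block_features(img)
print(feats.shape)   # (4, 3): four 4x4 blocks, one RGB mean each
```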

Page 11:

A = Length / Width; Z is the pixel count of "vehicle color region 1" (Eqs. (5)-(7) in the paper).

Feature Extraction
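The two region features above can be sketched from a binary region mask. Which bounding-box side counts as "Length" is an assumption here (the longer side is used).

```python
import numpy as np

def region_features(mask):
    """Aspect ratio A = Length / Width of the region's bounding box and
    Z = pixel count of the region (standing in for the pixel count of
    "vehicle color region 1")."""
    ys, xs = np.nonzero(mask)
    side_a = int(ys.max() - ys.min() + 1)
    side_b = int(xs.max() - xs.min() + 1)
    a = max(side_a, side_b) / min(side_a, side_b)  # assumption: longer / shorter
    z = int(mask.sum())
    return a, z

mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 3:6] = True          # a 6 x 3 rectangular region
a, z = region_features(mask)
print(a, z)                    # 2.0 18
```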

Page 12:

Several videos with human-marked ground truth are used to train the probabilities.

A state variable indicates whether a pixel belongs to a vehicle at time slice t (Eq. (8) in the paper).

Dynamic Bayesian Network (DBN)
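The per-pixel DBN inference can be sketched as a two-slice Bayesian filter: predict the vehicle state with a transition model, then reweight by the observation likelihood. The probability values below are illustrative numbers, not the trained values from the paper.

```python
import numpy as np

P_TRANS = np.array([[0.9, 0.1],    # P(V_t | V_{t-1}=0)
                    [0.2, 0.8]])   # P(V_t | V_{t-1}=1)
P_OBS = np.array([[0.7, 0.3],      # P(obs | V_t=0)
                  [0.1, 0.9]])     # P(obs | V_t=1)

def dbn_step(belief, obs):
    """One filtering step: predict with the transition model, weight by
    the observation likelihood, and renormalize."""
    predicted = belief @ P_TRANS          # P(V_t | obs_1..t-1)
    posterior = predicted * P_OBS[:, obs] # weight by P(obs_t | V_t)
    return posterior / posterior.sum()

# Start undecided; three vehicle-like observations push belief toward V=1.
belief = np.array([0.5, 0.5])
for _ in range(3):
    belief = dbn_step(belief, obs=1)
print(belief[1])   # well above 0.5
```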

Page 13:

We use morphological operations to enhance the detection mask and perform connected component labeling to get the vehicle objects.

Post Processing
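The connected-component step can be sketched as below; this is a plain 4-connected labeling, and the morphological cleanup is omitted for brevity.

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary detection mask; each
    resulting component is a candidate vehicle object."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue                      # already assigned to a component
        current += 1
        queue = deque([(y, x)])
        labels[y, x] = current
        while queue:                      # breadth-first flood fill
            cy, cx = queue.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

mask = np.zeros((6, 6), dtype=bool)
mask[1:3, 1:3] = True      # first "vehicle"
mask[4, 4] = True          # second "vehicle"
labels, n = label_components(mask)
print(n)                   # 2 connected components
```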

Page 14:

Experimental results

Page 15:

Experimental results

Page 16:

Experimental results

Page 17:

Experimental results

Results of color classification by SVM after background color removal and local feature analysis.

Page 18:

Experimental results

Fig. 11(a) shows the results obtained using the traditional Canny edge detector with nonadaptive thresholds. Fig. 11(b) shows the detection results obtained using the enhanced Canny edge detector with moment-preserving threshold selection.

Page 19:

Experimental results

Page 20:

Experimental results

Page 21:

Experimental results

Page 22:

Conclusion

The number of frames required to train the DBN is very small.

Overall, the entire framework does not require a large number of training samples.

For future work, performing vehicle tracking on the detected vehicles can further stabilize the detection results.