
Object Category Detection: Sliding Windows

Computer Vision

CS 543 / ECE 549

University of Illinois

Derek Hoiem

03/18/10

Goal: Detect all instances of an object category

Influential Works in Detection

• Sung-Poggio (1994, 1998): ~1450 citations
  – Basic idea of statistical template detection (I think), bootstrapping to get "face-like" negative examples, multiple whole-face prototypes (in 1994)
• Rowley-Baluja-Kanade (1996-1998): ~2900
  – "Parts" at fixed positions, non-maxima suppression, simple cascade, rotation, pretty good accuracy, fast
• Schneiderman-Kanade (1998-2000, 2004): ~1250
  – Careful feature engineering, excellent results, cascade
• Viola-Jones (2001, 2004): ~6500
  – Haar-like features, Adaboost as feature selection, hyper-cascade, very fast, easy to implement
• Dalal-Triggs (2005): ~1025
  – Careful feature engineering, excellent results, HOG feature, online code
• Felzenszwalb-McAllester-Ramanan (2008): ~105 citations
  – Excellent template/parts-based blend

Sliding window detection

What the Detector Sees

Statistical Template

• Object model = log linear model of parts at fixed positions

Example: sum the per-part scores for each window and compare to a decision threshold of 7.5:

  +3 + 2 − 2 − 1 − 2.5 = −0.5   →   −0.5 > 7.5?  No   →  Non-object
  +4 + 1 + 0.5 + 3 + 0.5 = 9    →   9 > 7.5?     Yes  →  Object
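A minimal sketch of this scoring rule, assuming per-part scoring functions at fixed offsets inside the window (the names, the 8x8 patch size, and the reuse of the toy threshold 7.5 are illustrative, not the actual model):

    def score_window(window, parts):
        # Statistical template: the window score is the sum of part scores
        # at fixed positions. Each part is (row, col, score_fn): an offset
        # in the window plus a function scoring the local appearance there
        # (e.g., a learned log likelihood ratio). window is a 2-D array.
        return sum(score_fn(window[r:r+8, c:c+8])  # 8x8 patch size is hypothetical
                   for r, c, score_fn in parts)

    def classify(window, parts, threshold=7.5):
        # Declare the window an object if the total score clears the threshold.
        return score_window(window, parts) > threshold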

Design challenges

• Part design
  – How to model appearance
  – Which "parts" to include
  – How to set part likelihoods
• How to make it fast
• How to deal with different viewpoints
• Implementation details
  – Window size
  – Aspect ratio
  – Translation/scale step size
  – Non-maxima suppression

Schneiderman and Kanade

Schneiderman and Kanade. A Statistical Method for 3D Object Detection. (2000)

Schneiderman and Kanade

Decision function: sum the log likelihood ratios of the parts and compare to a threshold λ:

  Σ_parts log [ P(part | object) / P(part | non-object) ] > λ

Parts model

• Part = group of wavelet coefficients that are statistically dependent
• Fixed parts within/across subbands
• 17 types of "parts" that can appear at each position
• Discretize each wavelet coefficient to 3 values
• E.g., a part with 8 coefficients has 3^8 = 6561 possible values
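A sketch of the part-value computation under these assumptions: quantize each wavelet coefficient to 3 levels, then read the group of quantized coefficients as a base-3 integer (the quantization threshold here is hypothetical, not from the paper):

    import numpy as np

    def part_value(coeffs, t=0.1):
        # Quantize each coefficient to 0, 1, or 2
        # (strongly negative / near zero / strongly positive).
        q = np.digitize(coeffs, [-t, t])
        # Encode the quantized group as a single base-3 integer:
        # 8 coefficients -> 3**8 = 6561 possible part values.
        value = 0
        for digit in q:
            value = 3 * value + int(digit)
        return value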

Part Likelihood

• Class-conditional likelihood ratio
• Estimate P(part | object) and P(part | non-object) by counting over examples:

  P(part | object) = count(part & object) / count(object)

• Adaboost tunes the weights discriminatively

Training

1) Create training data
   a) Get positive and negative patches
   b) Pre-process (optional), compute wavelet coefficients, discretize
   c) Compute part values
2) Learn statistics
   a) Compute ratios of histograms by counting over positive and negative examples (see the sketch below)
   b) Reweight examples using Adaboost, recount, etc.
3) Get more negative examples (bootstrapping)
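A sketch of step 2a under these assumptions: histogram the part values over positive and negative patches and take log ratios (the Laplace smoothing is my addition, to avoid division by zero):

    import numpy as np

    def log_likelihood_ratios(pos_values, neg_values, n_values=3**8):
        # Count how often each part value occurs in object vs. non-object patches.
        pos = np.bincount(pos_values, minlength=n_values) + 1.0  # +1 smoothing
        neg = np.bincount(neg_values, minlength=n_values) + 1.0
        p_obj = pos / pos.sum()        # estimate of P(part value | object)
        p_non = neg / neg.sum()        # estimate of P(part value | non-object)
        return np.log(p_obj / p_non)   # table looked up and summed at test time

Step 2b then reweights the training examples with Adaboost and recomputes these counts under the new weights.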

Training multiple viewpoints

Train new detector for each viewpoint.

Testing

1) Processing
   a) Lighting correction (optional)
   b) Compute wavelet coefficients, quantize
2) Slide window over each position/scale (2-pixel translation steps, 2^(1/4) scale steps; see the sketch below)
   a) Compute part values
   b) Look up likelihood ratios
   c) Sum over parts
   d) Threshold
3) Use a faster classifier to prune patches (cascade... more on this later)
4) Non-maximum suppression
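A sketch of the scanning loop in step 2: a fixed-size detector is run over a pyramid of rescaled images with the 2-pixel translation step and 2^(1/4) scale factor given above (the 24x24 window, the resize helper, and the feature extractor part_values_at are hypothetical stand-ins):

    def detect(image, part_values_at, ratio_tables, resize,
               window=(24, 24), step=2, scale_factor=2 ** 0.25, threshold=0.0):
        detections, scale = [], 1.0
        img = image
        while img.shape[0] >= window[0] and img.shape[1] >= window[1]:
            for y in range(0, img.shape[0] - window[0] + 1, step):
                for x in range(0, img.shape[1] - window[1] + 1, step):
                    # Sum log likelihood ratios over all parts in the window.
                    score = sum(ratio_tables[k][v]
                                for k, v in part_values_at(img, y, x))
                    if score > threshold:
                        detections.append((y * scale, x * scale, scale, score))
            scale *= scale_factor                # next pyramid level
            img = resize(image, 1.0 / scale)     # resize: hypothetical helper
        return detections                        # then non-maximum suppression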

Results: faces

208 images with 441 faces, 347 in profile

Results: cars

Results: faces today

http://demo.pittpatt.com/

Viola and Jones

Fast detection through two mechanisms: integral images for fast feature evaluation, and a cascade for fast rejection of non-object windows

Viola and Jones. Rapid Object Detection using a Boosted Cascade of Simple Features (2001).

Integral Images

• "Haar-like features"
  – Differences of sums of intensity (rectangle regions weighted +1 and −1)
  – Thousands, computed at various positions and scales within the detection window
• Two-rectangle features, three-rectangle features, etc.

Integral Images

• ii = cumsum(cumsum(Im, 1), 2)
• ii(x, y) = sum of the image values above and to the left of (x, y)
• How to compute A + D − B − C?
• How to compute B − A?
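In numpy, the construction above and the constant-time rectangle sum it enables look roughly like this (a sketch; rect_sum returns the sum of im[y0:y1, x0:x1]):

    import numpy as np

    def integral_image(im):
        # Same as ii = cumsum(cumsum(Im, 1), 2):
        # ii[y, x] = sum of im over all pixels above and to the left of (y, x).
        return im.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, y0, x0, y1, x1):
        # Sum of im[y0:y1, x0:x1] from at most four corner lookups.
        total = ii[y1 - 1, x1 - 1]
        if y0 > 0:
            total -= ii[y0 - 1, x1 - 1]
        if x0 > 0:
            total -= ii[y1 - 1, x0 - 1]
        if y0 > 0 and x0 > 0:
            total += ii[y0 - 1, x0 - 1]
        return total

A two-rectangle Haar-like feature is then just one rect_sum minus a neighboring rect_sum, so every feature costs a handful of lookups regardless of its size or position.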

Adaboost as feature selection

• Create a large pool of parts (180K)

• “Weak learner” = feature + threshold + parity

• Choose weak learner that minimizes error on the weighted training set

• Reweight
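A sketch of one round of this procedure using the standard discrete Adaboost update (the stump representation and variable names are mine; F holds precomputed feature values):

    import numpy as np

    def boosting_round(F, y, w, candidate_thresholds):
        # F: (n_examples, n_features) feature values; y in {-1, +1};
        # w: example weights summing to 1. Assumes 0 < best error < 1.
        best = None
        for j in range(F.shape[1]):                    # each feature
            for t in candidate_thresholds[j]:          # each threshold
                for parity in (+1, -1):                # each parity
                    pred = np.where(parity * F[:, j] < parity * t, 1, -1)
                    err = w[pred != y].sum()           # weighted error
                    if best is None or err < best[0]:
                        best = (err, j, t, parity, pred)
        err, j, t, parity, pred = best
        alpha = 0.5 * np.log((1 - err) / err)          # weak learner weight
        w = w * np.exp(-alpha * y * pred)              # reweight examples
        return (j, t, parity, alpha), w / w.sum()

Repeating this for T rounds yields the strong classifier H(x) = sign(Σ_t α_t h_t(x)).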

Adaboost

• Standard discrete Adaboost: pick the weak learner h_t with the lowest weighted error ε_t
• Set α_t = ½ ln((1 − ε_t) / ε_t)
• Reweight: w_i ← w_i · exp(−α_t · y_i · h_t(x_i)), then renormalize
• Final classifier: H(x) = sign(Σ_t α_t h_t(x))

"RealBoost"

Figure from Friedman et al. 1999

• Important special case: h_t partitions the input space, and α_t is chosen separately for each partition cell

Adaboost: Immune to Overfitting?

[Plot of train and test error vs. number of boosting rounds: test error keeps decreasing even after train error reaches zero.]

Interpretations of Adaboost

• Additive logistic regression (Friedman et al. 2000)
  – LogitBoost from Collins et al. 2002 does this more explicitly
• Margin maximization (Schapire et al. 1998)
  – Rätsch and Warmuth 2002 do this more explicitly

Adaboost: Margin Maximizer

[Plots of the margin distribution and of train/test error: margins keep growing, and test error keeps falling, after train error reaches zero.]

Cascade for Fast Detection

Examples → Stage 1: H1(x) > t1?
  No → Reject
  Yes → Stage 2: H2(x) > t2?
    No → Reject
    Yes → … → Stage N: HN(x) > tN?
      No → Reject
      Yes → Pass

• Choose each stage threshold for a low false negative rate
• Fast classifiers early in the cascade
• Slow classifiers later, but most examples don't get there
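A sketch of the cascade at test time: each stage is a boosted classifier H_i with its own threshold t_i, and a window is rejected by the first stage it fails (stage functions and thresholds are placeholders):

    def cascade_classify(window, stages):
        # stages: list of (H, t) pairs, cheapest classifiers first.
        for H, t in stages:
            if H(window) <= t:
                return False   # rejected early; most windows stop here
        return True            # survived every stage -> detection

Because almost all windows are non-objects and most are rejected by the first one or two cheap stages, the average cost per window stays tiny even though the late stages are expensive.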

Viola-Jones details

• 38 stages with 1, 10, 25, 50, … features
  – 6061 features used in total, out of 180K candidates
  – 10 features evaluated per window on average
• Examples
  – 4916 positive examples
  – 10000 negative examples, collected after each stage
• Scanning
  – Scale the detector rather than the image
  – Scale steps = 1.25; translation steps = 1.0·s to 1.5·s (s = scale)
• Non-max suppression: average coordinates of overlapping boxes (see the sketch below)
• Train 3 classifiers and take a vote
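A sketch of the averaging-style non-max suppression mentioned above: overlapping detections are grouped greedily, and each group is replaced by the mean of its members' coordinates (the 0.5 IoU criterion is my assumption, not from the paper):

    import numpy as np

    def iou(a, b):
        # Intersection-over-union of two boxes (y0, x0, y1, x1).
        iw = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        ih = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        inter = iw * ih
        union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / union if union > 0 else 0.0

    def average_nms(boxes, thresh=0.5):
        groups = []
        for box in boxes:
            for g in groups:
                if iou(box, g[0]) > thresh:  # overlaps the group's first box
                    g.append(box)
                    break
            else:
                groups.append([box])
        # One output box per group: the average of its members' coordinates.
        return [tuple(np.mean(g, axis=0)) for g in groups]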

Viola-Jones Results

MIT + CMU face dataset

Schneiderman later results

[ROC comparison: Viola-Jones 2001, Roth et al. 1999, Schneiderman-Kanade 2000, Schneiderman 2004]

Speed: frontal face detector

• Schneiderman-Kanade (2000): 5 seconds per image

• Viola-Jones (2001): 15 fps

Occlusions?

• A problem

• Objects occluded by more than 50% are considered "don't care"

• PASCAL VOC changed this

Strengths and Weaknesses of Statistical Template Approach

Strengths

• Works very well for non-deformable objects: faces, cars, upright pedestrians

• Fast detection

Weaknesses

• Does not work well for highly deformable objects

• Not robust to occlusion

• Requires lots of training data

SK vs. VJ

Schneiderman-Kanade
• Wavelet features
• Log linear model via boosted histogram ratios
• Bootstrap training
• Two-stage cascade
• NMS: remove weaker overlapping boxes
• Slow but very accurate

Viola-Jones
• Haar-like features (similar to Haar wavelets)
• Log linear model via boosted stumps
• Bootstrap training
• Multi-stage cascade, integrated into training
• NMS: average coordinates of overlapping boxes
• Less accurate but very fast

Things to remember

• Excellent results require careful feature engineering
• Sliding window for search
• Features based on differences of intensity (gradient, wavelet, etc.)
• Boosting for feature selection (also L1-logistic regression)
• Integral images and cascades for speed
• Bootstrapping to deal with many, many negative examples
