Using Image Priors in Maximum Margin Classifiers
Tali Brayer, Margarita Osadchy, Daniel Keren

Posted on 15-Jan-2016


Object Detection

Problem: locate instances of an object category in a given image. This is an asymmetric classification problem:

Background: very large; complex (thousands of categories); large prior to appear in an image; easy to collect (but not easy to learn from examples).
Object (category): relatively small; simple (single category); small prior; hard to collect.

[Figure: the object class and the background inside the space of all images.]

Intuition

Denote by H the acceptance region of a classifier. We propose to minimize Pr(all images) (approximately Pr(background)) inside H, except for the object samples.

[Figure: nested sets, the object class inside the background, inside the space of all images.]

We have a prior on the distribution of all natural images.

Other Work

Combine a small labeled training set with a large unlabeled set (semi-supervised learning): EM with generative mixture models, Fisher kernel, self-training, co-training, transductive SVM, and graph-based methods.

All are good for the symmetric case, but we have more information: the marginal background distribution.

Distribution of Natural Images – "Boltzmann-like"

Image smoothness measure: the less smooth an image, the lower its probability.

\Pr(I) \propto \exp\left(-\iint \left(I_x^2 + I_y^2\right) dx\, dy\right)

In the frequency domain:

\Pr(x) \propto \exp\left(-\sum_{k,l} \left(k^2 + l^2\right) x_{k,l}^2\right)
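The Boltzmann-like prior weights an image by its smoothness energy. A minimal sketch of the discrete version, assuming forward finite differences for the gradient terms (function names are illustrative, not from the slides):

```python
import math

def smoothness_energy(img):
    """Discrete version of the integral of I_x^2 + I_y^2
    over the image, using forward finite differences."""
    h, w = len(img), len(img[0])
    e = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                e += (img[y][x + 1] - img[y][x]) ** 2  # I_x^2
            if y + 1 < h:
                e += (img[y + 1][x] - img[y][x]) ** 2  # I_y^2
    return e

def unnormalized_prior(img):
    """Boltzmann-like weight exp(-energy); smoother images score higher."""
    return math.exp(-smoothness_energy(img))

flat  = [[0.5, 0.5], [0.5, 0.5]]   # perfectly smooth image
noisy = [[0.0, 1.0], [1.0, 0.0]]   # checkerboard, maximally non-smooth
```

The flat image has zero energy and hence maximal (unnormalized) prior weight, while the checkerboard is penalized exponentially in its gradient energy.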

Linear SVM

With enough training data, the maximal-margin hyperplane separates Class 1 from Class 2 well.

Linear SVM

With not enough training data, the maximal-margin hyperplane is a poor estimate of the true boundary between Class 1 and Class 2. The decision boundary is the hyperplane

w^T x + b = 0
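The decision rule induced by the hyperplane w^T x + b = 0 can be sketched as follows (a minimal illustration; `linear_decision` is an assumed name):

```python
def linear_decision(w, x, b):
    """Linear classifier: the positive side of the hyperplane
    w.x + b = 0 is the acceptance region H."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1
```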

MM Classifier with Prior

Decision boundary: w^T x + b = 0, with acceptance region H = {x : w^T x + b > 0}. The classifier should:

1) minimize Pr(natural images in H);
2) keep the positive samples in H;
3) keep a wide margin.

Minimize the probability of natural images over H.

Since the prior is Gaussian in each coordinate, w \cdot x is itself Gaussian, and the probability mass of a Gaussian in a half-space is an erfc. After some manipulations the objective reduces to

\min_{w,b} \int_H \exp\left(-\left(d_1 x_1^2 + \dots + d_n x_n^2\right)\right) dx_1 \cdots dx_n
\;=\; \min_{w,b} \frac{Q}{2}\, \mathrm{erfc}\!\left(\frac{-b}{\sqrt{\sum_{i=1}^{n} w_i^2 / d_i}}\right)

where Q is a normalization constant.
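The reduction from the integral over H to an erfc expression can be checked numerically. A minimal sketch, assuming the prior exp(-sum_i d_i x_i^2) so that coordinate i is Gaussian with variance 1/(2 d_i); `halfspace_prob` and `monte_carlo_prob` are illustrative names:

```python
import math
import random

def halfspace_prob(w, b, d):
    """Closed form: Pr(w.x + b > 0) when x_i ~ N(0, 1/(2*d_i)),
    i.e. under the prior exp(-sum_i d_i x_i^2)."""
    s = math.sqrt(sum(wi * wi / di for wi, di in zip(w, d)))
    return 0.5 * math.erfc(-b / s)

def monte_carlo_prob(w, b, d, trials=200_000, seed=0):
    """Estimate the same probability by sampling from the prior."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = [rng.gauss(0.0, math.sqrt(1.0 / (2.0 * di))) for di in d]
        if sum(wi * xi for wi, xi in zip(w, x)) + b > 0:
            hits += 1
    return hits / trials

w = [1.0, -0.5, 2.0]
d = [1.0, 4.0, 9.0]   # e.g. d_i = k^2 + l^2 frequency weights
b = -0.3
```

With b = 0 the hyperplane passes through the mean and the probability is exactly 1/2; for b < 0 the acceptance region shrinks and the probability drops below 1/2, which the Monte Carlo estimate confirms.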

Random w with unit norm and random b drawn from [-0.5, 0.5].

[Figure: % of images with w·x + b > 0, illustrating the relation between the number of natural random images in the positive half-space and the integral.]

Training Algorithm

\min_{w,b,\xi}\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{M} \xi_i

s.t.

w \cdot x_i + b \ge 1 - \xi_i, \quad i = 1, \dots, M
\xi_i \ge 0, \quad i = 1, \dots, M

Probability constraint:

\frac{1}{2}\, \mathrm{erfc}\!\left(\frac{-b}{\sqrt{\sum_{i=1}^{n} w_i^2 / d_i}}\right) \le \delta \quad (\delta \to 0)

Convex Constraint

Since erfc is monotonically decreasing, the probability constraint is equivalent to

-b \ge D \sqrt{\sum_{i=1}^{n} w_i^2 / d_i}, \qquad D = \mathrm{erfc}^{-1}(2\delta) \ge 0 \ \text{ for } \delta \le 1/2

The right-hand side is D times a weighted Euclidean norm of w, so the constraint is convex.
Results

Tested categories: cars (side view) and faces.
Training: 5/10/20/60/(all available) object images, together with all available background images.
Test sets: faces: 472 faces and 23,573 background images (CBCL set); cars: 299 cars and 10,111 background images (UIUC set).
Ran 50 trials for each set with different random choices of training data. A weighted SVM was used to deal with the asymmetry in class sizes.

Average recognition rate (%): Faces

                        5      10     60     all
Weighted Linear SVM     70     72.5   75.2   77
Weighted Kernel SVM     69.7   72.6   79.6   83
MM_prior Linear         72.7   75     78     80.3
MM_prior Kernel         71.7   75.2   79.1   -

Average recognition rate (%): Cars

                        5      10     60     all
Weighted Linear SVM     89.24  91.8   92.8   93.7
Weighted Kernel SVM     90     92.9   95.4   96
MM_prior Linear         91.3   93     94.3   95.3
MM_prior Kernel         89.4   93.2   95.8   -

Future Work

Video. Explore additional and more robust features. Refine the priors (using background examples). Kernelization.
