
Page 1: Goal of Learning Algorithms

The early learning algorithms were designed to find as accurate a fit to the data as possible.

A classifier is said to be consistent if it performs correct classification of the training data.

The ability of a classifier to correctly classify data not in the training set is known as its generalization ability.

Bible code? 1994 Taipei Mayor election?

The goal is to predict the real future, NOT to fit the data in your hand or to predict the desired results.
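A consistent classifier can still generalize poorly. As a toy illustration (my own sketch, not from the slides): a 1-nearest-neighbor classifier trained on points whose labels are pure noise is consistent, so its training error is zero, yet its error on fresh data from the same distribution is near chance.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: the features carry no information about the labels,
# so no classifier can generalize better than chance here.
X_train = rng.normal(size=(100, 2))
y_train = rng.choice([-1, 1], size=100)
X_test = rng.normal(size=(1000, 2))
y_test = rng.choice([-1, 1], size=1000)

def nn_predict(X_tr, y_tr, X):
    # 1-nearest-neighbor prediction: copy the label of the closest training point.
    dists = np.linalg.norm(X[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[np.argmin(dists, axis=1)]

train_err = np.mean(nn_predict(X_train, y_train, X_train) != y_train)
test_err = np.mean(nn_predict(X_train, y_train, X_test) != y_test)
print(f"training error: {train_err:.2f}")   # 0.00 -- consistent on the training set
print(f"test error:     {test_err:.2f}")    # close to 0.50 -- no generalization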

Page 2: Probably Approximately Correct (PAC) Learning Model

Key assumption:

Training and testing data are generated i.i.d. according to a fixed but unknown distribution D.

When we evaluate the "quality" of a hypothesis (classification function) h ∈ H, we should take the unknown distribution D into account (i.e., the "average error" or "expected error" made by h ∈ H).

We call such a measure the risk functional and denote it as

err_D(h) = D( { (x, y) ∈ X × {−1, +1} : h(x) ≠ y } )
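Since D is unknown in practice, err_D(h) cannot be computed; but for a toy D that we invent ourselves it can be estimated by sampling. A minimal sketch (the Gaussian class-conditional distribution and the threshold hypothesis below are my own assumptions, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)

def sample_D(n):
    # Toy distribution D: y is +/-1 with equal probability,
    # and x ~ N(y, 1) given y, so the two classes overlap.
    y = rng.choice([-1, 1], size=n)
    x = rng.normal(loc=y, scale=1.0)
    return x, y

def h(x):
    # Illustrative hypothesis: sign of x (with h(0) taken as +1).
    return np.where(x >= 0, 1, -1)

# err_D(h) = D{(x, y) : h(x) != y}, estimated as a frequency over a large sample.
x, y = sample_D(1_000_000)
print(np.mean(h(x) != y))   # about 0.159 (= Phi(-1)) for this toy D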

Page 3: Generalization Error of the PAC Model

Let S = {(x_1, y_1), …, (x_l, y_l)} be a set of l training examples chosen i.i.d. according to D.

Treat the generalization error err_D(h_S) as a random variable depending on the random selection of S.

Find a bound on the tail of the distribution of the r.v. err_D(h_S) in the form ε = ε(l, H, δ).

ε = ε(l, H, δ) is a function of l, H and δ, where 1 − δ is the confidence level of the error bound, which is given by the learner.

Page 4: Probably Approximately Correct

We assert:

Pr( { err_D(h_S) > ε = ε(l, H, δ) } ) < δ

or

Pr( { err_D(h_S) ≤ ε = ε(l, H, δ) } ) ≥ 1 − δ

The error made by the hypothesis h_S will then be less than the error bound ε(l, H, δ), which does not depend on the unknown distribution D.
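The slides leave ε(l, H, δ) abstract. As one concrete illustration (my own, not from the slides): for a finite hypothesis class, Hoeffding's inequality plus a union bound gives the uniform deviation bound ε(l, H, δ) = √[ (ln|H| + ln(2/δ)) / (2l) ]. The sketch below just evaluates that bound for made-up values of l, |H| and δ:

import math

def pac_bound(l, H_size, delta):
    # Uniform deviation bound for a finite hypothesis class H:
    # with probability >= 1 - delta over the i.i.d. sample of size l,
    # |R[h] - R_emp[h]| <= eps for every h in H (Hoeffding + union bound).
    return math.sqrt((math.log(H_size) + math.log(2.0 / delta)) / (2.0 * l))

for l in (100, 1_000, 10_000, 100_000):
    print(l, round(pac_bound(l, H_size=1_000, delta=0.05), 4))
# The bound eps(l, H, delta) shrinks like 1/sqrt(l) as the sample grows.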

Page 5: Find the Hypothesis with Minimum Expected Risk?

Let S = {(x_1, y_1), …, (x_l, y_l)} ⊆ X × {−1, 1} be the training examples chosen i.i.d. according to D with probability density p(x, y).

The expected misclassification error made by h ∈ H is

R[h] = ∫_{X × {−1, 1}} ½ |h(x) − y| dp(x, y)

The ideal hypothesis h*_opt should have the smallest expected risk:

R[h*_opt] ≤ R[h], ∀ h ∈ H

Unrealistic!!! (p(x, y) is unknown, so R[h] cannot be computed.)
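To see what the integral means, here is a minimal sketch assuming a toy X with finite support, so that p(x, y) is just a small table and R[h] reduces to a weighted sum of the 0-1 loss ½|h(x) − y| (the table and the hypothesis are my own invented example):

# Toy joint distribution p(x, y) on X = {0, 1, 2} and y in {-1, +1},
# chosen arbitrarily for illustration (the true p is unknown in practice).
p = {
    (0, -1): 0.10, (0, +1): 0.30,
    (1, -1): 0.25, (1, +1): 0.05,
    (2, -1): 0.10, (2, +1): 0.20,
}

def h(x):
    # Example hypothesis: predict +1 unless x == 1.
    return -1 if x == 1 else +1

# R[h] = sum over (x, y) of 0.5 * |h(x) - y| * p(x, y);
# the integrand 0.5 * |h(x) - y| is 1 when h(x) != y and 0 otherwise.
R = sum(0.5 * abs(h(x) - y) * prob for (x, y), prob in p.items())
print(R)  # 0.10 + 0.05 + 0.10 = 0.25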

Page 6: Empirical Risk Minimization (ERM)

Replace the expected risk over p(x, y) by an average over the training examples.

The empirical risk:

R_emp[h] = (1/l) Σ_{i=1}^{l} ½ |h(x_i) − y_i|

Find the hypothesis h*_emp with the smallest empirical risk:

R_emp[h*_emp] ≤ R_emp[h], ∀ h ∈ H

(D and p(x, y) are not needed.)

Only focusing on empirical risk will cause overfitting.
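A direct sketch of the empirical risk computation (the sample and the fixed threshold hypothesis are my own illustrative assumptions):

import numpy as np

# Illustrative sample S = {(x_i, y_i)}; in practice it comes from the unknown D.
X = np.array([-2.0, -1.0, -0.5, 0.3, 1.1, 2.4])
y = np.array([-1, -1, -1, 1, -1, 1])

def h(x):
    # Fixed threshold hypothesis: predict +1 for x > 0, else -1.
    return np.where(x > 0, 1, -1)

# R_emp[h] = (1/l) * sum_i 0.5 * |h(x_i) - y_i|  (fraction of training mistakes)
R_emp = np.mean(0.5 * np.abs(h(X) - y))
print(R_emp)  # 1/6: only x = 1.1 (labeled -1) is misclassified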

Page 7: VC Confidence (The Bound between R_emp[h] and R[h])

The following inequality holds with probability 1 − δ:

R[h] ≤ R_emp[h] + √[ ( v (log(2l/v) + 1) − log(δ/4) ) / l ]

where v is the VC dimension of the hypothesis space H and l is the number of training examples.

C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery 2(2) (1998), pp. 121-167.
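A small sketch that simply evaluates the VC confidence term for illustrative values of l, v and δ (the numbers are arbitrary assumptions, not from the slides):

import math

def vc_confidence(l, v, delta):
    # VC confidence term sqrt((v*(log(2l/v) + 1) - log(delta/4)) / l)
    # from the bound R[h] <= R_emp[h] + vc_confidence(l, v, delta).
    return math.sqrt((v * (math.log(2 * l / v) + 1) - math.log(delta / 4)) / l)

# Larger capacity v loosens the bound; more data l tightens it.
for l in (1_000, 10_000, 100_000):
    print(l, round(vc_confidence(l, v=10, delta=0.05), 4))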

Page 8: Why We Maximize the Margin? (Based on Statistical Learning Theory)

The Structural Risk Minimization (SRM):

The expected risk will be less than or equal to

empirical risk (training error) + VC (error) bound

‖w‖² ∝ VC bound

min VC bound ⇔ min ½ ‖w‖² ⇔ max Margin
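For a canonical separating hyperplane (one scaled so that min_i y_i (w·x_i + b) = 1), the geometric margin is 1/‖w‖, so minimizing ½‖w‖² is the same as maximizing the margin. A quick numeric check with a made-up w, b and two points sitting on the margin boundaries (all my own assumptions):

import numpy as np

# Hypothetical canonical separating hyperplane w.x + b = 0 in R^2.
w = np.array([2.0, 0.0])
b = -1.0

# Two example points on the margin boundaries w.x + b = +1 and -1.
X = np.array([[1.0, 0.5],   # w.x + b = +1, label +1
              [0.0, 0.5]])  # w.x + b = -1, label -1
y = np.array([1, -1])

print(y * (X @ w + b))           # [1. 1.]  -> canonical form holds
print(1.0 / np.linalg.norm(w))   # geometric margin = 1/||w|| = 0.5

# The closest points sit at distance |w.x + b| / ||w|| = 1/||w|| from the plane,
# so a smaller ||w|| (i.e., smaller 0.5*||w||^2) means a larger margin.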

Page 9: Capacity (Complexity) of Hypothesis Space H: VC-dimension

A given training set S is shattered by H if and only if for every labeling of S there exists h ∈ H consistent with this labeling.

Three (linearly independent) points can be shattered by hyperplanes in R².
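A brute-force check of this definition (my own sketch; it tests linear separability of each of the 2³ labelings with a small feasibility linear program via scipy, which is one of several ways to do it):

import itertools
import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    # Feasibility LP over (w, b) with zero objective:
    # is there a hyperplane with y_i (w.x_i + b) >= 1 for all i?
    A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
    res = linprog(c=np.zeros(X.shape[1] + 1), A_ub=A_ub, b_ub=-np.ones(len(X)),
                  bounds=[(None, None)] * (X.shape[1] + 1))
    return res.success

# Three non-collinear points in R^2: every one of the 2^3 labelings is realizable.
X3 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(all(separable(X3, np.array(lab))
          for lab in itertools.product([-1, 1], repeat=3)))   # True -> shattered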

Page 10: Shattering Points with Hyperplanes in R^n

Theorem: Consider some set of m points in R^n. Choose a point as origin. Then the m points can be shattered by oriented hyperplanes if and only if the position vectors of the remaining points are linearly independent.

Can you always shatter three points with a line in R²? (No: not if the three points are collinear.)

Page 11: Definition of VC-dimension (A Capacity Measure of Hypothesis Space H)

The Vapnik-Chervonenkis dimension, VC(H), of hypothesis space H defined over the input space X is the size of the largest finite subset of X shattered by H.

If arbitrarily large finite subsets of X can be shattered by H, then VC(H) ≡ ∞.

Let H = {all hyperplanes in R^n}; then VC(H) = n + 1.
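To make VC(H) = n + 1 concrete for n = 2 (lines in the plane): three non-collinear points can be shattered, as sketched on the previous page, but no set of four points can. For example, the XOR labeling of the four corners of a square is not linearly realizable. A quick check using the same feasibility-LP idea (again my own sketch, with invented points):

import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    # Feasibility LP: is there (w, b) with y_i (w.x_i + b) >= 1 for all i?
    A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
    res = linprog(c=np.zeros(X.shape[1] + 1), A_ub=A_ub, b_ub=-np.ones(len(X)),
                  bounds=[(None, None)] * (X.shape[1] + 1))
    return res.success

# Four corners of the unit square with the XOR labeling: no line in R^2 separates
# them, so these 4 points are not shattered and VC(lines in R^2) = 3 = n + 1.
X4 = np.array([[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
y_xor = np.array([1, 1, -1, -1])
print(separable(X4, y_xor))   # False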