
Kernel Methods

A B M Shawkat Ali


Data Mining

¤ DM or KDD (Knowledge Discovery in Databases)

Extracting previously unknown, valid, and actionable information for making crucial decisions

¤ Approach: Train Data → Model → Test Data → crucial decisions

History of SVM

• The original optimal hyperplane algorithm, proposed by Vladimir Vapnik in 1963, was a linear classifier.

• However, in 1992, Bernhard Boser, Isabelle Guyon and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick (originally proposed by Aizerman et al.) to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a non-linear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be non-linear and the transformed space high dimensional; thus though the classifier is a hyperplane in the high-dimensional feature space, it may be non-linear in the original input space.


Property of the SVM

¤ Relatively new approach
¤ Much recent interest: many successes, e.g., text classification
¤ Important concepts:
– Transformation into a high-dimensional space
– Finding a "maximal margin" separation
– Structural risk minimization rather than empirical risk minimization


Support Vector Machine (SVM)

¤ Classification: grouping of similar data
¤ Regression: prediction from historical knowledge
¤ Novelty detection: detecting abnormal instances in a dataset
¤ Clustering, feature selection


SVM Block Diagram

Training Data Domain → Non-linear Mapping by Kernel → Linear Feature Space of SVM → Choose Optimal Hyperplane


SVM Block Diagram

Test Data Domain → Kernel Mapping → Constructed Model (through feature knowledge) → Class I / Class II


SVM Formulation

$$y_i(\mathbf{w}\cdot\mathbf{X}_i + b) \ge 1, \quad \forall i \in S$$

$$\mathbf{w} = \sum_{i\in S}\alpha_i y_i \mathbf{X}_i$$

$$y = \operatorname{sign}(\mathbf{w}\cdot\mathbf{X} + b) = \operatorname{sign}\Big(\sum_{i\in S}\alpha_i y_i(\mathbf{X}_i\cdot\mathbf{X}) + b\Big)$$

$$\min_{\mathbf{w}}\ \tfrac{1}{2}\,\mathbf{w}\cdot\mathbf{w} \quad \text{subject to} \quad y_i(\mathbf{w}\cdot\mathbf{X}_i + b) \ge 1$$


SVM Formulation

The same decision function applies with a soft margin:

$$y = \operatorname{sign}\Big(\sum_{i\in S}\alpha_i y_i(\mathbf{X}_i\cdot\mathbf{X}) + b\Big)$$

but margin violations are now penalized through $C$ in the objective:

$$\min_{\mathbf{w},b}\ \tfrac{1}{2}\,\|\mathbf{w}\|^2 + C\sum_{i\in S}\big(1 - y_i(\mathbf{w}\cdot\mathbf{X}_i + b)\big)_+$$
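In practice this soft-margin problem is handed to an off-the-shelf solver. A minimal sketch, assuming scikit-learn's SVC (the slides do not prescribe a library, and the toy data is illustrative); C is the penalty weight in the objective above:

```python
# Minimal soft-margin SVM sketch with scikit-learn's SVC.
import numpy as np
from sklearn.svm import SVC

# Illustrative, linearly separable toy data (not from the slides).
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5],
              [3.0, 3.0], [4.0, 2.5], [5.0, 4.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0)  # C trades margin width against violations
clf.fit(X, y)
print(clf.support_vectors_)        # the X_i with alpha_i > 0
```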


SVM Formulation

Map the input into a feature space:

$$\mathbf{x} \rightarrow \mathbf{X} = \phi(\mathbf{x}), \qquad \mathbf{X}_i\cdot\mathbf{X} = \phi(\mathbf{x}_i)\cdot\phi(\mathbf{x})$$

By Mercer's condition, a kernel computes this inner product directly:

$$K(\mathbf{x}_i,\mathbf{x}) = \phi(\mathbf{x}_i)\cdot\phi(\mathbf{x})$$

so the decision function

$$y = \operatorname{sign}\Big(\sum_{i\in S}\alpha_i y_i\,\phi(\mathbf{x}_i)\cdot\phi(\mathbf{x}) + b\Big)$$

becomes

$$y = \operatorname{sign}\Big(\sum_{i\in S}\alpha_i y_i\,K(\mathbf{x}_i,\mathbf{x}) + b\Big)$$
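This kernelized decision function transcribes directly to code. A minimal sketch, where the function and argument names are illustrative, not from the slides:

```python
# Kernelized decision function: sign( sum_i alpha_i * y_i * K(x_i, x) + b ).
import numpy as np

def decision(x, support_x, support_y, alpha, b, kernel):
    s = sum(a * yi * kernel(xi, x)
            for a, yi, xi in zip(alpha, support_y, support_x))
    return np.sign(s + b)
```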


Types of Kernels

Common kernels for SVM:
¤ Linear
¤ Polynomial
¤ Radial Basis Function

New kernels (not used in SVM):
¤ Laplace
¤ Multiquadratic


SVM kernel

Linear:

$$K(\mathbf{x}_i,\mathbf{x}) = \mathbf{x}_i\cdot\mathbf{x}$$

Polynomial:

$$K(\mathbf{x}_i,\mathbf{x}) = (\mathbf{x}_i\cdot\mathbf{x} + k)^d$$

Gaussian (Radial Basis Function):

$$K(\mathbf{x}_i,\mathbf{x}) = \exp\!\left(-\frac{\|\mathbf{x}_i-\mathbf{x}\|^2}{2\sigma^2}\right), \qquad \|\mathbf{x}_i-\mathbf{x}\|^2 = k(\mathbf{x}_i,\mathbf{x}_i) - 2k(\mathbf{x}_i,\mathbf{x}) + k(\mathbf{x},\mathbf{x})$$
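These three kernels are one-liners in NumPy. A sketch, with parameter names (k, d, sigma) following the slide's notation:

```python
import numpy as np

def linear_kernel(xi, x):
    return np.dot(xi, x)

def polynomial_kernel(xi, x, k=1.0, d=2):
    return (np.dot(xi, x) + k) ** d

def rbf_kernel(xi, x, sigma=1.0):
    # squared distance via the linear-kernel expansion shown above
    sq = np.dot(xi, xi) - 2 * np.dot(xi, x) + np.dot(x, x)
    return np.exp(-sq / (2 * sigma**2))
```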


Laplace kernel

Introduced by Pavel Paclik et al. in Pattern Recognition Letters 21 (2000)

Laplace Kernel based on Laplace Probability Density

$$\hat{f}(x \mid c) = \frac{1}{N}\sum_{i=1}^{N}\prod_{j=1}^{D}\frac{1}{2h_c}\exp\!\left(-\frac{|x_j - x_{ij}|}{h_c}\right)$$

where $h_c$ is the smoothing parameter ($S_p$).
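The density estimate transcribes directly to NumPy. A minimal sketch, where data holds the N training points of class c and h is the smoothing parameter h_c:

```python
import numpy as np

def laplace_density(x, data, h):
    """f_hat(x|c) = (1/N) sum_i prod_j (1/(2h)) * exp(-|x_j - x_ij| / h)."""
    # data: (N, D) array; x: (D,) query point
    per_point = np.prod(np.exp(-np.abs(x - data) / h) / (2.0 * h), axis=1)
    return per_point.mean()
```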


Linear Kernel

The reality of data separation


RBF kernel

XOR solved by SVM

Input data x    Output class y
(-1, -1)        -1
(-1, +1)        +1
(+1, -1)        +1
(+1, +1)        -1

Table 5.3. Boolean XOR Problem

• First, we transform the dataset by the polynomial kernel as:

$$K(\mathbf{x}_i,\mathbf{x}_j) = (1 + \mathbf{x}_i^T\mathbf{x}_j)^2$$

Therefore the kernel matrix is:

$$K(\mathbf{x}_i,\mathbf{x}_j) = \begin{bmatrix} 9 & 1 & 1 & 1\\ 1 & 9 & 1 & 1\\ 1 & 1 & 9 & 1\\ 1 & 1 & 1 & 9 \end{bmatrix}$$

We can write the maximization term, following the SVM implementation given in Figure 5.20, as:

$$W(\alpha) = \sum_{i=1}^{4}\alpha_i - \frac{1}{2}\sum_{i=1}^{4}\sum_{j=1}^{4}\alpha_i\alpha_j y_i y_j K(\mathbf{x}_i,\mathbf{x}_j)$$

$$= \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4 - \frac{1}{2}\big(9\alpha_1^2 - 2\alpha_1\alpha_2 - 2\alpha_1\alpha_3 + 2\alpha_1\alpha_4 + 9\alpha_2^2 + 2\alpha_2\alpha_3 - 2\alpha_2\alpha_4 + 9\alpha_3^2 - 2\alpha_3\alpha_4 + 9\alpha_4^2\big)$$

subject to:

$$\sum_{i=1}^{4}\alpha_i y_i = 0, \qquad \alpha_1 \ge 0,\ \alpha_2 \ge 0,\ \alpha_3 \ge 0,\ \alpha_4 \ge 0$$

Setting the derivative of $W(\alpha)$ with respect to each $\alpha_i$ to zero gives the linear system:

$$\begin{aligned}
9\alpha_1 - \alpha_2 - \alpha_3 + \alpha_4 &= 1\\
-\alpha_1 + 9\alpha_2 + \alpha_3 - \alpha_4 &= 1\\
-\alpha_1 + \alpha_2 + 9\alpha_3 - \alpha_4 &= 1\\
\alpha_1 - \alpha_2 - \alpha_3 + 9\alpha_4 &= 1
\end{aligned}$$

By solving the above equations we can write the solution to this optimisation problem as:

$$\alpha_1 = \alpha_2 = \alpha_3 = \alpha_4 = \frac{1}{8}$$
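The solution is easy to confirm numerically. A sketch that solves the 4×4 system above with NumPy and checks the equality constraint:

```python
import numpy as np

y = np.array([-1.0, 1.0, 1.0, -1.0])
K = np.array([[9, 1, 1, 1],
              [1, 9, 1, 1],
              [1, 1, 9, 1],
              [1, 1, 1, 9]], dtype=float)

H = np.outer(y, y) * K            # H_ij = y_i * y_j * K(x_i, x_j)
alpha = np.linalg.solve(H, np.ones(4))
print(alpha)                       # -> [0.125 0.125 0.125 0.125]
print(alpha @ y)                   # constraint sum_i alpha_i y_i = 0
```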

Therefore, the decision function in the inner product representation is:

$$\hat{f}(\mathbf{x}) = \sum_{i=1}^{n}\alpha_i y_i\,K(\mathbf{x}_i,\mathbf{x}) = 0.125\sum_{i=1}^{4} y_i\,(\mathbf{x}_i^T\mathbf{x} + 1)^2$$
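Evaluating this dual-form function on the four XOR inputs confirms that its sign reproduces the labels. A short NumPy check:

```python
import numpy as np

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])
alpha = np.full(4, 0.125)

def f_hat(x):
    # 0.125 * sum_i y_i * (x_i . x + 1)^2
    return sum(a * yi * (xi @ x + 1) ** 2 for a, yi, xi in zip(alpha, y, X))

print([f_hat(x) for x in X])  # -> [-1.0, 1.0, 1.0, -1.0], matching the labels
```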

The 2nd degree polynomial kernel function:

$$K(\mathbf{x}_i,\mathbf{x}_j) = (\mathbf{x}_i^T\mathbf{x}_j + 1)^2 = (x_{i1}x_{j1} + x_{i2}x_{j2} + 1)^2$$

$$= 1 + 2x_{i1}x_{j1} + 2x_{i2}x_{j2} + 2x_{i1}x_{i2}x_{j1}x_{j2} + x_{i1}^2x_{j1}^2 + x_{i2}^2x_{j2}^2$$

Now we can write the 2nd degree polynomial transformation function as:

$$\phi(\mathbf{x}_i) = \big[\,1,\ \sqrt{2}\,x_{i1},\ x_{i1}^2,\ \sqrt{2}\,x_{i1}x_{i2},\ x_{i2}^2,\ \sqrt{2}\,x_{i2}\,\big]^T$$
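A quick NumPy check that this explicit map really reproduces the kernel, i.e. φ(x_i)·φ(x_j) = (x_i^T x_j + 1)² on the XOR points:

```python
import numpy as np
from itertools import product

def phi(x):
    x1, x2 = x
    return np.array([1, np.sqrt(2)*x1, x1**2,
                     np.sqrt(2)*x1*x2, x2**2, np.sqrt(2)*x2])

X = [np.array(p, dtype=float) for p in product([-1, 1], repeat=2)]
for xi, xj in product(X, X):
    assert np.isclose(phi(xi) @ phi(xj), (xi @ xj + 1) ** 2)
print("phi matches the polynomial kernel on all pairs")
```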

$$\mathbf{w}_o = \sum_{i=1}^{4}\alpha_i y_i\,\phi(\mathbf{x}_i) = \frac{1}{8}\big[-\phi(\mathbf{x}_1) + \phi(\mathbf{x}_2) + \phi(\mathbf{x}_3) - \phi(\mathbf{x}_4)\big]$$

$$= \frac{1}{8}\left(
-\begin{bmatrix}1\\-\sqrt{2}\\1\\\sqrt{2}\\1\\-\sqrt{2}\end{bmatrix}
+\begin{bmatrix}1\\-\sqrt{2}\\1\\-\sqrt{2}\\1\\\sqrt{2}\end{bmatrix}
+\begin{bmatrix}1\\\sqrt{2}\\1\\-\sqrt{2}\\1\\-\sqrt{2}\end{bmatrix}
-\begin{bmatrix}1\\\sqrt{2}\\1\\\sqrt{2}\\1\\\sqrt{2}\end{bmatrix}
\right)
= \begin{bmatrix}0\\0\\0\\-\tfrac{1}{\sqrt{2}}\\0\\0\end{bmatrix}$$

Therefore the optimal hyperplane function for this XOR problem is:

$$\hat{f}(\mathbf{x}) = \mathbf{w}_o^T\phi(\mathbf{x}) = -x_1x_2$$
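As a final check, computing w_o from α_i = 1/8 in the explicit feature space recovers f̂(x) = -x₁x₂ on all four points. A short NumPy sketch:

```python
import numpy as np

def phi(x):
    x1, x2 = x
    return np.array([1, np.sqrt(2)*x1, x1**2,
                     np.sqrt(2)*x1*x2, x2**2, np.sqrt(2)*x2])

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])
alpha = np.full(4, 0.125)

w_o = sum(a * yi * phi(xi) for a, yi, xi in zip(alpha, y, X))
print(w_o)                         # -> [0 0 0 -0.7071 0 0]
for x in X:
    assert np.isclose(w_o @ phi(x), -x[0] * x[1])
print("f_hat(x) = -x1*x2 reproduces all four XOR labels")
```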

Conclusions

• Research Issues

– How to select a kernel automatically
– How to select optimal parameter values for a kernel