K nearest neighbor and Rocchio algorithm LING 572 Fei Xia 1/11/2007


Page 1: K nearest neighbor and Rocchio algorithm LING 572 Fei Xia 1/11/2007

K nearest neighbor and Rocchio algorithm

LING 572

Fei Xia

1/11/2007

Page 2:

Announcement

• Hw2 is online now. It is due on Jan 20.

• Hw1 is due at 11pm on Jan 13 (Sat).

• Lab session after class.

• Read the DT tutorial before next Tuesday's class.

Page 3:

K-Nearest Neighbor (kNN)

Page 4:

Instance-based (IB) learning

• No training: store all training instances. “Lazy learning”

• Examples:
  – kNN
  – Locally weighted regression
  – Radial basis functions
  – Case-based reasoning
  – …

• The most well-known IB method: kNN

Page 5:

kNN

Page 6:

kNN

• For a new document d:
  – find the k training documents that are closest to d.
  – perform majority voting or weighted voting.

• Properties:
  – A "lazy" classifier: no training.
  – Feature selection and the distance measure are crucial.

Page 7:

The algorithm

• Determine the parameter K.

• Calculate the distance between the query instance and all the training instances.

• Sort the distances and determine the K nearest neighbors.

• Gather the labels of the K nearest neighbors.

• Use simple majority voting or weighted voting.
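The steps above can be sketched in Python (a minimal illustration; `knn_classify` is a hypothetical helper name, not code from the course):

```python
import math
from collections import Counter

def knn_classify(query, train, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; Euclidean distance
    is used here, but any metric could be substituted.
    """
    # Steps 1-2: compute the distance from the query to every training instance.
    dists = [(math.dist(query, x), label) for x, label in train]
    # Step 3: sort by distance and keep the K nearest neighbors.
    neighbors = sorted(dists, key=lambda d: d[0])[:k]
    # Steps 4-5: gather their labels and take a simple majority vote.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

Weighted voting (covered later) only changes the final vote-counting step.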

Page 8:

Picking K

• Use N-fold cross-validation: pick the K that minimizes the cross-validation error.
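One way to implement this selection (a self-contained sketch; `pick_k` and `knn_predict` are illustrative names, and a plain Euclidean kNN is assumed):

```python
import math
from collections import Counter

def knn_predict(query, train, k):
    """Majority vote over the k nearest (Euclidean) training neighbors."""
    nearest = sorted(train, key=lambda p: math.dist(query, p[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def pick_k(data, candidates, n_folds=5):
    """Return the candidate K with the lowest N-fold cross-validation error."""
    # Split the data into n_folds disjoint folds (round-robin here).
    folds = [data[i::n_folds] for i in range(n_folds)]

    def cv_error(k):
        errors = 0
        for i, held_out in enumerate(folds):
            # Train on every fold except the held-out one.
            train = [p for j, f in enumerate(folds) if j != i for p in f]
            errors += sum(knn_predict(x, train, k) != y for x, y in held_out)
        return errors

    return min(candidates, key=cv_error)
```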

Page 9:

Normalizing attribute values

• The distance can be dominated by attributes with large numeric ranges:
  – Ex: features: age, income
  – Original data: x1 = (35, 76K), x2 = (36, 80K), x3 = (70, 79K)
  – Assume: age ∈ [0, 100], income ∈ [0, 200K]
  – After normalization: x1 = (0.35, 0.38), x2 = (0.36, 0.40), x3 = (0.70, 0.395)
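The rescaling above is ordinary min-max normalization; a minimal sketch (the helper name is mine):

```python
def min_max_normalize(points, ranges):
    """Scale each attribute into [0, 1] given its assumed (lo, hi) range.

    `ranges` lists one (lo, hi) pair per attribute, e.g. age in [0, 100]
    and income in [0, 200000] as in the slide's example.
    """
    return [tuple((v - lo) / (hi - lo) for v, (lo, hi) in zip(p, ranges))
            for p in points]
```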

Page 10:

The Choice of Features

• Imagine there are 100 features, and only 2 of them are relevant to the target label.

• kNN is easily misled in high-dimensional space.

⇒ Feature weighting or feature selection

Page 11:

Feature weighting

• Stretch the j-th axis by weight wj.

• Use cross-validation to automatically choose the weights w1, …, wn.

• Setting wj to zero eliminates that dimension altogether.

Page 12:

Similarity measure

• Euclidean distance:
  d(x, y) = sqrt(Σ_j (x_j − y_j)²)

• Weighted Euclidean distance:
  d_w(x, y) = sqrt(Σ_j w_j (x_j − y_j)²)

• Similarity measure: cosine
  cos(x, y) = (x · y) / (||x|| ||y||)
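The three measures can be written out directly (a sketch; these are standard definitions, not course-provided code):

```python
import math

def euclidean(x, y):
    """Plain Euclidean distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def weighted_euclidean(x, y, w):
    """Euclidean distance with a per-dimension weight w_j on each axis."""
    return math.sqrt(sum(wj * (a - b) ** 2 for wj, a, b in zip(w, x, y)))

def cosine(x, y):
    """Cosine similarity: larger means more similar (unlike a distance)."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)
```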

Page 13:

Voting

• Majority voting:

  c* = arg max_c Σ_i δ(c, f(x_i))

  where f(x_i) is the label of neighbor x_i and δ is the Kronecker delta.

• Weighted voting: the weighting is on each neighbor:

  c* = arg max_c Σ_i w_i δ(c, f(x_i)),  w_i = 1 / dist(x, x_i)

  With weighted voting we can use all the training examples.
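A sketch of weighted voting over all training examples, with the slide's 1/distance weights (giving exact matches infinite weight is one possible convention, my choice here):

```python
import math
from collections import defaultdict

def weighted_vote(query, train, dist=math.dist):
    """Weighted voting over ALL training examples: each example's vote for
    its own label counts 1/distance, so faraway points contribute little."""
    scores = defaultdict(float)
    for x, label in train:
        d = dist(query, x)
        # An exact match gets infinite weight and dominates the vote.
        scores[label] += 1.0 / d if d > 0 else float("inf")
    return max(scores, key=scores.get)
```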

Page 14:

Summary of kNN

• Strengths:
  – Simplicity (conceptual)
  – Efficiency at training time: no training
  – Handles multi-class problems
  – Stability and robustness: averaging over k neighbors
  – Prediction accuracy: good when the training data is large

• Weaknesses:
  – Efficiency at testing time: need to calculate all distances
  – Theoretical validity
  – It is not clear which types of distance measure and features to use.

Page 15:

Rocchio Algorithm

Page 16:

Relevance Feedback for IR

• The issue: a query for "plane" may miss relevant documents that say "aircraft".

• Take advantage of user feedback on the relevance of docs to improve IR results:
  – The user issues a short, simple query.
  – The user marks returned documents as relevant or non-relevant.
  – The system computes a better representation of the information need based on the feedback.
  – Relevance feedback can go through one or more iterations.

• Idea: it may be difficult to formulate a good query when you don't know the collection well, so iterate.

Page 17:

Rocchio Algorithm

• The Rocchio algorithm incorporates relevance feedback information into the vector space model.

• We want to maximize sim(Q, Cr) − sim(Q, Cnr).

• The optimal query vector for separating relevant and non-relevant documents (with cosine similarity):

  Qopt = (1/|Cr|) Σ_{dj ∈ Cr} dj − (1/(N − |Cr|)) Σ_{dj ∉ Cr} dj

• Qopt = optimal query; Cr = set of relevant doc vectors; N = collection size

Page 18:

Rocchio 1971 Algorithm (SMART)

qm = α·q0 + β·(1/|Dr|) Σ_{dj ∈ Dr} dj − γ·(1/|Dnr|) Σ_{dj ∈ Dnr} dj

qm = modified query vector; q0 = original query vector; α, β, γ: weights
Dr = set of known relevant doc vectors; Dnr = set of known non-relevant doc vectors
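A sketch of the update (the α, β, γ defaults below are common textbook settings, not values from the slide; `rocchio_update` is an illustrative name):

```python
def rocchio_update(q0, rel, nonrel, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query toward relevant docs and away from non-relevant ones.

    q0 and every document in rel/nonrel are equal-length term-weight vectors.
    """
    def centroid(docs):
        # Mean of the document vectors; zero vector if the set is empty.
        n = len(docs)
        if n == 0:
            return [0.0] * len(q0)
        return [sum(d[j] for d in docs) / n for j in range(len(q0))]

    cr, cn = centroid(rel), centroid(nonrel)
    return [alpha * q + beta * r - gamma * s for q, r, s in zip(q0, cr, cn)]
```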

Page 19:

Relevance feedback assumptions

Relevance prototypes are "well-behaved":

• The term distribution in relevant documents will be similar.

• The term distribution in non-relevant documents will be different from that in relevant documents:
  – Either: all relevant documents are tightly clustered around a single prototype.
  – Or: there are different prototypes, but they have significant vocabulary overlap.
  – Similarities between relevant and non-relevant documents are small.

Page 20:

Rocchio Algorithm for text classification

• Training time: construct a set of prototype vectors, one vector per class.

• Testing time: for a new document, find the most similar prototype vector.

Page 21:

Training time

cj = α·(1/|Cj|) Σ_{d ∈ Cj} d/||d|| − β·(1/|D − Cj|) Σ_{d ∈ D − Cj} d/||d||

Cj: the set of positive examples for class j
D: the set of positive and negative examples
α, β: weights
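One way to build a prototype vector per class (a sketch; the α/β defaults are illustrative, and `prototype` is my name for the helper):

```python
import math

def prototype(pos, neg, alpha=16, beta=4):
    """Prototype vector for one class: centroid of the length-normalized
    positive examples minus a weighted centroid of the negatives."""
    dim = len(pos[0])

    def centroid(docs):
        # Length-normalize each document, then average.
        unit = [[v / math.sqrt(sum(x * x for x in d)) for v in d] for d in docs]
        return [sum(u[j] for u in unit) / len(docs) for j in range(dim)]

    cp = centroid(pos)
    cn = centroid(neg) if neg else [0.0] * dim
    return [alpha * p - beta * n for p, n in zip(cp, cn)]
```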

Page 22:

Why this formula?

• Rocchio showed that when α = β = 1, each prototype vector maximizes the average similarity to the positive examples minus the average similarity to the negative examples.

• How does maximizing this quantity connect to classification accuracy?

Page 23:

Testing time

• Given a new document d, compute the similarity between d and each prototype vector cj, and assign d to the class with the most similar prototype.
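The test step can then be sketched as follows (cosine similarity is assumed, matching the earlier slides; `classify` is a hypothetical helper):

```python
import math

def classify(d, prototypes):
    """Assign document d to the class whose prototype has the highest
    cosine similarity. `prototypes` maps class label -> prototype vector."""
    def cos(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        return dot / (math.sqrt(sum(a * a for a in x)) *
                      math.sqrt(sum(b * b for b in y)))

    return max(prototypes, key=lambda c: cos(d, prototypes[c]))
```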

Page 24:

kNN vs. Rocchio

• kNN:
  – Lazy learning: no training.
  – Uses all the training instances at testing time.

• Rocchio algorithm:
  – At training time, calculate prototype vectors.
  – At testing time, use only the prototype vectors instead of all the training instances.
  – Linear classifier: not as expressive as kNN.

Page 25:

Summary of Rocchio

• Strengths:
  – Simplicity (conceptual)
  – Efficiency at training time
  – Efficiency at testing time
  – Handles multi-class problems

• Weaknesses:
  – Theoretical validity
  – Stability and robustness
  – Prediction accuracy: it does not work well when the categories are not linearly separable.

Page 26:

Additional slides

Page 27:

Three major design choices

• The weight of feature fk in document di: e.g., tf-idf

• The document length normalization

• The similarity measure: e.g., cosine
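The tf-idf weighting mentioned above can be sketched minimally (one common variant, raw term frequency times log(N/df); the slide does not fix a variant, and `tf_idf` is my name for the helper):

```python
import math

def tf_idf(docs):
    """Return one {term: weight} dict per document.

    docs is a list of token lists; tf = raw count within the document,
    idf = log(N / df) where df counts documents containing the term.
    """
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weighted = []
    for doc in docs:
        counts = {}
        for term in doc:
            counts[term] = counts.get(term, 0) + 1
        weighted.append({t: c * math.log(n / df[t]) for t, c in counts.items()})
    return weighted
```

A term that appears in every document gets idf = log(1) = 0, so it carries no weight, which is the intended behavior for uninformative terms.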

Page 28:

Extending Rocchio?

• Generalized instance set (GIS) algorithm (Lam and Ho, 1998).

• …