
Information Retrieval and Web Search

IR models: Vectorial Model

Instructor: Rada Mihalcea
Class web page: http://www.cs.unt.edu/~rada/CSCE5300

[Note: Some slides in this set were adapted from an IR course taught by Ray Mooney at UT Austin, who in turn adapted them from Joydeep Ghosh, who in turn adapted them …]


Slide 2

Topics

• Vectorial model
  – TF/IDF weighting
  – Similarity measures
    • Inner product
    • Euclidean distance
    • Cosine
  – Naïve implementation
  – Practical implementation
  – Weighting methods

• Need someone to present next time


Slide 3

IR Models

[Figure: a taxonomy of IR models, organized by user task]

User task:
• Retrieval: ad hoc, filtering
  – Classic models: Boolean, vector, probabilistic
  – Set-theoretic: fuzzy, extended Boolean
  – Algebraic: generalized vector, latent semantic indexing, neural networks
  – Probabilistic: inference network, belief network
  – Structured models: non-overlapping lists, proximal nodes
• Browsing: flat, structure guided, hypertext


Slide 4

Vector-Space Model

• t distinct terms remain after preprocessing
  – Unique terms that form the VOCABULARY
• These “orthogonal” terms form a vector space. Dimension = t = |vocabulary|
  – 2 terms → bi-dimensional; …; n terms → n-dimensional
• Each term i in a document or query j is given a real-valued weight, wij.
• Both documents and queries are expressed as t-dimensional vectors: dj = (w1j, w2j, …, wtj)
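To make this concrete, here is a minimal sketch (the vocabulary, documents, and weights are invented for illustration) of documents and a query as t-dimensional weight vectors:

    # A toy vocabulary of t = 4 terms; each document or query becomes a
    # length-t vector of real-valued weights (here, raw term counts).
    vocabulary = ["information", "retrieval", "web", "search"]

    def to_vector(text):
        """Map a text to a t-dimensional vector of term frequencies."""
        tokens = text.lower().split()
        return [tokens.count(term) for term in vocabulary]

    d1 = to_vector("information retrieval retrieval web")
    q = to_vector("web search")
    print(d1)  # [1, 2, 1, 0]
    print(q)   # [0, 0, 1, 1]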


Slide 5

Vector-Space Model

Query as vector:

• We regard the query as a short document.
• We return the documents ranked by the closeness of their vectors to the query, which is also represented as a vector.
• The vectorial model was developed in the SMART system (Salton, c. 1970) and is standardly used by TREC participants and web IR systems.


Slide 6

Graphic Representation

Example:

D1 = 2T1 + 3T2 + 5T3
D2 = 3T1 + 7T2 + T3
Q = 0T1 + 0T2 + 2T3

[Figure: D1, D2, and Q plotted in the three-dimensional term space T1–T2–T3]

• Is D1 or D2 more similar to Q?
• How to measure the degree of similarity? Distance? Angle? Projection?


Slide 7

Document Collection Representation

• A collection of n documents can be represented in the vector space model by a term-document matrix.
• An entry in the matrix corresponds to the “weight” of a term in the document; zero means the term has no significance in the document, or it simply doesn’t exist in the document.

          T1    T2    …    Tt
    D1   w11   w21   …   wt1
    D2   w12   w22   …   wt2
    :      :     :          :
    Dn   w1n   w2n   …   wtn


Slide 8

Term Weights: Term Frequency

• More frequent terms in a document are more important, i.e., more indicative of the topic.
  fij = frequency of term i in document j
• May want to normalize term frequency (tf) by the most frequent term in the document:
  tfij = fij / maxi{fij}


Slide 9

Term Weights: Inverse Document Frequency

• Terms that appear in many different documents are less indicative of the overall topic.

  dfi = document frequency of term i
       = number of documents containing term i
  idfi = inverse document frequency of term i
       = log2(N / dfi)
  (N: total number of documents)

• An indication of a term’s discrimination power.
• The log is used to dampen the effect relative to tf.
• Note the difference: document frequency vs. corpus frequency (total number of occurrences).


Slide 10

TF-IDF Weighting

• A typical weighting is tf-idf weighting:

wij = tfij · idfi = tfij · log2(N / dfi)

• A term occurring frequently in the document but rarely in the rest of the collection is given high weight.

• Experimentally, tf-idf has been found to work well.

• It has also been justified theoretically (Papineni, NAACL 2001)

• [more weighting schemes next time]


Slide 11

Computing TF-IDF: An Example

Given a document containing terms with given frequencies:

A(3), B(2), C(1)

Assume the collection contains 10,000 documents, and the document frequencies of these terms are:

A(50), B(1300), C(250)

Then:

A: tf = 3/3 = 1.0; idf = ln(10000/50) = 5.3; tf-idf = 5.3
B: tf = 2/3; idf = ln(10000/1300) = 2.0; tf-idf = 1.3
C: tf = 1/3; idf = ln(10000/250) = 3.7; tf-idf = 1.2

(Note: this example uses the natural logarithm; the base only scales all idf values by a constant factor.)
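A quick sketch that reproduces these numbers (the frequencies and document counts are exactly those given above):

    import math

    N = 10_000                              # total documents in the collection
    freqs = {"A": 3, "B": 2, "C": 1}        # term frequencies in the document
    dfs = {"A": 50, "B": 1300, "C": 250}    # document frequencies

    max_f = max(freqs.values())
    for term in freqs:
        tf = freqs[term] / max_f            # normalized term frequency
        idf = math.log(N / dfs[term])       # natural log, as in the example
        print(f"{term}: tf={tf:.2f} idf={idf:.1f} tf-idf={tf * idf:.1f}")
    # A: tf=1.00 idf=5.3 tf-idf=5.3
    # B: tf=0.67 idf=2.0 tf-idf=1.4  (the 1.3 above comes from rounding idf first)
    # C: tf=0.33 idf=3.7 tf-idf=1.2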


Slide 12

Query Vector

• The query vector is typically treated as a document and is also tf-idf weighted.
• An alternative is for the user to supply weights for the given query terms.


Slide 13

Similarity Measure

• We now have vectors for all documents in the collection and a vector for the query; how do we compute similarity?
• A similarity measure is a function that computes the degree of similarity between two vectors.
• Using a similarity measure between the query and each document:
  – It is possible to rank the retrieved documents in the order of presumed relevance.
  – It is possible to enforce a certain threshold so that the size of the retrieved set can be controlled.


Slide 14

Desiderata for proximity

• If d1 is near d2, then d2 is near d1.
• If d1 is near d2, and d2 is near d3, then d1 is not far from d3.
• No document is closer to d than d itself.
  – Sometimes it is a good idea to determine the maximum possible similarity as the “distance” between a document d and itself.


Slide 15

First cut: Euclidean distance

• Distance between vectors d1 and d2 is the length of the difference vector |d1 – d2|.
  – Euclidean distance
• Exercise: determine the Euclidean distance between the vectors (0, 3, 2, 1, 10) and (2, 7, 1, 0, 0).
• Why is this not a great idea?
• We still haven’t dealt with the issue of length normalization:
  – Long documents would be more similar to each other by virtue of length, not topic.
• However, we can implicitly normalize by looking at angles instead.
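A one-line computation for the exercise (a sketch; the vectors are the ones from the exercise):

    import math

    x = (0, 3, 2, 1, 10)
    y = (2, 7, 1, 0, 0)

    # Euclidean distance: square root of the sum of squared differences
    dist = math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
    print(dist)  # sqrt(4 + 16 + 1 + 1 + 100) = sqrt(122) ≈ 11.05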


Slide 16

Second cut: Manhattan Distance

• Or “city block” measure
  – Based on the idea that, generally, in American cities you cannot follow a direct line between two points.
• Uses the formula:

  ManhDist(X, Y) = Σi=1..n |xi – yi|

• Exercise: determine the Manhattan distance between the vectors (0, 3, 2, 1, 10) and (2, 7, 1, 0, 0).
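And the corresponding computation for the Manhattan version (same vectors as the sketch above):

    x = (0, 3, 2, 1, 10)
    y = (2, 7, 1, 0, 0)

    # Manhattan distance: sum of absolute coordinate differences
    print(sum(abs(xi - yi) for xi, yi in zip(x, y)))  # 2 + 4 + 1 + 1 + 10 = 18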


Slide 17

Third cut: Inner Product

• Similarity between the vector for document dj and query q can be computed as the vector inner product:

  sim(dj, q) = dj • q = Σi=1..t wij · wiq

  where wij is the weight of term i in document j and wiq is the weight of term i in the query.

• For binary vectors, the inner product is the number of matched query terms in the document (the size of the intersection).
• For weighted term vectors, it is the sum of the products of the weights of the matched terms.
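A minimal sketch of the inner product over aligned weight vectors:

    def inner_product(d, q):
        """Dot product: sum of products of matched term weights."""
        return sum(wd * wq for wd, wq in zip(d, q))

    # With binary vectors this counts the matched query terms:
    print(inner_product([1, 0, 1], [1, 1, 1]))  # 2
    # With weighted vectors it sums the products of matched weights:
    print(inner_product([0.5, 0.0, 2.0], [1.0, 1.0, 3.0]))  # 6.5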


Slide 18

Properties of Inner Product

• Favors long documents with a large number of unique terms.
  – Again, the issue of normalization.
• Measures how many terms matched, but not how many terms are not matched.


Slide 19

Inner Product: Example 1

         k1   k2   k3   q • dj
    d1    1    0    1     2
    d2    1    0    0     1
    d3    0    1    1     2
    d4    1    0    0     1
    d5    1    1    1     3
    d6    1    1    0     2
    d7    0    1    0     1

    q     1    1    1

[Figure: the seven documents d1–d7 plotted in the k1–k2–k3 term space]


Slide 20

Inner Product: Exercise

         k1   k2   k3   q • dj
    d1    1    0    1     ?
    d2    1    0    0     ?
    d3    0    1    1     ?
    d4    1    0    0     ?
    d5    1    1    1     ?
    d6    1    1    0     ?
    d7    0    1    0     ?

    q     1    2    3

[Figure: the seven documents d1–d7 plotted in the k1–k2–k3 term space]


Slide 21

Cosine similarity

• Distance between vectors d1 and d2 is captured by the cosine of the angle θ between them.
• Note: this is similarity, not distance.

[Figure: vectors d1 and d2 in the t1–t2–t3 term space, separated by angle θ]


Slide 22

Cosine similarity

• Cosine of the angle between two vectors
• The denominator involves the lengths of the vectors
• So the cosine measure is also known as the normalized inner product

  sim(dj, dk) = (dj • dk) / (|dj| · |dk|) = Σi=1..n wi,j · wi,k / (√(Σi=1..n wi,j²) · √(Σi=1..n wi,k²))

  Length: |dj| = √(Σi=1..n wi,j²)
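A direct transcription of the formula (a sketch over plain Python lists):

    import math

    def cos_sim(d, q):
        """Normalized inner product of two aligned weight vectors."""
        dot = sum(wd * wq for wd, wq in zip(d, q))
        length_d = math.sqrt(sum(w * w for w in d))
        length_q = math.sqrt(sum(w * w for w in q))
        return dot / (length_d * length_q)

    # D1 and Q from the earlier graphic-representation example:
    print(cos_sim([2, 3, 5], [0, 0, 2]))  # ≈ 0.81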


Slide 23

Cosine similarity exercise

• Exercise: rank the following by decreasing cosine similarity:
  – Two documents that have only frequent words (the, a, an, of) in common.
  – Two documents that have no words in common.
  – Two documents that have many rare words in common (wingspan, tailfin).


Slide 24

Example

• Documents: Austen's Sense and Sensibility (SaS) and Pride and Prejudice (PaP); Brontë's Wuthering Heights (WH)

Term counts:
                 SaS   PaP   WH
    affection    115    58   20
    jealous       10     7   11
    gossip         2     0    6

Normalized weights:
                 SaS     PaP     WH
    affection    0.996   0.993   0.847
    jealous      0.087   0.120   0.466
    gossip       0.017   0.000   0.254

• cos(SaS, PaP) = 0.996 × 0.993 + 0.087 × 0.120 + 0.017 × 0.0 ≈ 0.999
• cos(SaS, WH) = 0.996 × 0.847 + 0.087 × 0.466 + 0.017 × 0.254 ≈ 0.888
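Since the normalized weight columns are (approximately) unit-length vectors, the cosine reduces to a plain dot product, which a few lines verify:

    # Normalized weight vectors over (affection, jealous, gossip)
    sas = [0.996, 0.087, 0.017]
    pap = [0.993, 0.120, 0.000]
    wh = [0.847, 0.466, 0.254]

    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    print(dot(sas, pap))  # ≈ 0.999
    print(dot(sas, wh))   # ≈ 0.888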


Slide 25

Cosine Similarity vs. Inner Product

• Cosine similarity measures the cosine of the angle between two vectors.
• It is the inner product normalized by the vector lengths:

  CosSim(dj, q) = (dj • q) / (|dj| · |q|) = Σi=1..t wij · wiq / (√(Σi=1..t wij²) · √(Σi=1..t wiq²))

  InnerProduct(dj, q) = dj • q

D1 = 2T1 + 3T2 + 5T3    CosSim(D1, Q) = 10 / √((4+9+25) · (0+0+4)) = 0.81
D2 = 3T1 + 7T2 + 1T3    CosSim(D2, Q) = 2 / √((9+49+1) · (0+0+4)) = 0.13
Q = 0T1 + 0T2 + 2T3

[Figure: D1, D2, and Q in the t1–t2–t3 term space]

D1 is 6 times better than D2 using cosine similarity, but only 5 times better using the inner product.


Slide 26

Comments on Vector Space Models

• Simple, mathematically based approach.
• Considers both local (tf) and global (idf) word occurrence frequencies.
• Provides partial matching and ranked results.
• Tends to work quite well in practice despite obvious weaknesses.
• Allows efficient implementation for large document collections.


Slide 27

Problems with Vector Space Model

• Missing semantic information (e.g., word sense).
• Missing syntactic information (e.g., phrase structure, word order, proximity information).
• Assumption of term independence (e.g., ignores synonymy).
• Lacks the control of a Boolean model (e.g., requiring a term to appear in a document).
  – Given a two-term query “A B”, it may prefer a document containing A frequently but not B over a document that contains both A and B, but both less frequently.


Slide 28

Naïve Implementation

Convert all documents in collection D to tf-idf weighted vectors, dj, over the keyword vocabulary V.
Convert the query to a tf-idf weighted vector, q.
For each dj in D do
    Compute score sj = cosSim(dj, q)
Sort documents by decreasing score.
Present the top-ranked documents to the user.

Time complexity: O(|V| · |D|). Bad for large V and D!
|V| = 10,000; |D| = 100,000; |V| · |D| = 1,000,000,000
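A runnable sketch of this naïve loop (the toy collection and query are invented; the weight vectors reuse the earlier D1/D2/Q example):

    import math

    docs = {
        "D1": [2, 3, 5],   # toy weight vectors over a 3-term vocabulary
        "D2": [3, 7, 1],
    }
    query = [0, 0, 2]

    def cos_sim(d, q):
        dot = sum(a * b for a, b in zip(d, q))
        return dot / (math.sqrt(sum(a * a for a in d)) *
                      math.sqrt(sum(b * b for b in q)))

    # Score every document against the query, then sort by decreasing score
    scores = {name: cos_sim(vec, query) for name, vec in docs.items()}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(name, round(score, 2))  # D1 0.81, then D2 0.13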


Slide 29

Practical Implementation

• Based on the observation that documents containing none of the query keywords do not affect the final ranking.
• Try to identify only those documents that contain at least one query keyword.
• Achieved through the use of an inverted index.


Slide 30

Step 1: Preprocessing

• Implement the preprocessing functions:
  – For tokenization
  – For stop word removal
  – For stemming
• Input: documents that are read one by one from the collection
• Output: tokens to be added to the index
  – No punctuation, no stop words, stemmed
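A minimal preprocessing sketch (the stop-word list and the suffix-stripping rule are toy stand-ins for a real stop list and a real stemmer such as Porter's):

    import re

    STOP_WORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "are"}  # toy list

    def stem(token):
        # Toy stand-in for a real stemmer: strip a few common suffixes
        for suffix in ("ing", "ed", "s"):
            if token.endswith(suffix) and len(token) > len(suffix) + 2:
                return token[: -len(suffix)]
        return token

    def preprocess(text):
        tokens = re.findall(r"[a-z]+", text.lower())  # tokenize, drop punctuation
        return [stem(t) for t in tokens if t not in STOP_WORDS]

    print(preprocess("The systems are indexing the documents."))
    # ['system', 'index', 'document']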


Slide 31

Step 2: Indexing

• Build an inverted index, with an entry for each word in the vocabulary
• Input: tokens obtained from the preprocessing module
• Output: an inverted index for fast access


Slide 32

Step 2 (cont’d)

• Many data structures are appropriate for fast access:
  – B-trees, skip lists, hashtables
• We need:
  – One entry for each word in the vocabulary
  – For each such entry:
    • Keep a list of all the documents where it appears, together with the corresponding frequency (TF)
    • Keep the number of documents in which it appears (DF), from which IDF is computed
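A sketch of that structure in plain Python (all names are illustrative):

    from collections import defaultdict

    postings = defaultdict(list)  # term -> list of (doc_id, term_frequency)

    def index_document(doc_id, tokens):
        counts = {}
        for token in tokens:
            counts[token] = counts.get(token, 0) + 1
        for term, tf in counts.items():
            postings[term].append((doc_id, tf))

    index_document("D1", ["database", "system", "database"])
    index_document("D2", ["science", "database"])

    # Document frequency falls out as the length of each postings list
    df = {term: len(plist) for term, plist in postings.items()}
    print(postings["database"])  # [('D1', 2), ('D2', 1)]
    print(df["database"])        # 2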


Slide 33

Step 2 (cont’d)

[Figure: an example inverted index. Each index term (system, computer, database, science) stores its document frequency df and points to an index file list of (Dj, tfj) postings, e.g. (D2, 4), (D5, 2), (D1, 3), (D7, 4).]


Slide 34

Step 2 (cont’d)

• TF and IDF for each token can be computed in one pass
• Cosine similarity also requires document lengths
• Need a second pass to compute document vector lengths:
  – Remember that the length of a document vector is the square root of the sum of the squares of the weights of its tokens.
  – Remember that the weight of a token is TF · IDF.
  – Therefore, we must wait until the IDFs are known (and therefore until all documents are indexed) before document lengths can be determined.
• Do a second pass over all documents: keep a list or hashtable with all document ids, and for each document determine its length.
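A sketch of the second pass over the postings (the toy index and all names are illustrative):

    import math
    from collections import defaultdict

    N = 3  # total number of documents in the toy collection below
    postings = {
        "database": [("D1", 2), ("D2", 1)],
        "system":   [("D1", 1)],
        "science":  [("D2", 3), ("D3", 1)],
    }

    # IDFs are known only once all documents are indexed
    idf = {term: math.log2(N / len(plist)) for term, plist in postings.items()}

    # Second pass: accumulate each document's squared tf-idf weights,
    # then take the square root to get its vector length.
    sq_length = defaultdict(float)
    for term, plist in postings.items():
        for doc_id, tf in plist:
            sq_length[doc_id] += (tf * idf[term]) ** 2

    lengths = {doc: math.sqrt(sq) for doc, sq in sq_length.items()}
    print({doc: round(l, 2) for doc, l in lengths.items()})
    # {'D1': 1.97, 'D2': 1.85, 'D3': 0.58}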


Slide 35

Time Complexity of Indexing

• Complexity of creating vector and indexing a document of n tokens is O(n).

• So indexing m such documents is O(m n).

• Computing token IDFs can be done during the same first pass

• Computing vector lengths is also O(m n).

• Complete process is O(m n), which is also the complexity of just reading in the corpus.


Slide 36

Step 3: Retrieval

• Use the inverted index (from Step 2) to find the limited set of documents that contain at least one of the query words.
• Incrementally compute the cosine similarity of each indexed document as query words are processed one by one.
• To accumulate a total score for each retrieved document, store retrieved documents in a hashtable, where the document id is the key and the partial accumulated score is the value.
• Input: query and inverted index (from Step 2)
• Output: similarity values between query and documents
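A sketch of the accumulation step (toy index; tf-idf weighting and length normalization omitted for brevity, so the scores are unnormalized):

    # term -> postings list of (doc_id, weight)
    index = {
        "database": [("D1", 2), ("D2", 1)],
        "science":  [("D2", 3), ("D3", 1)],
    }

    query = ["database", "science"]

    # Process query words one by one, accumulating a partial score per
    # document: the doc id is the key, the running score is the value.
    retrieved = {}
    for term in query:
        for doc_id, weight in index.get(term, []):
            retrieved[doc_id] = retrieved.get(doc_id, 0) + weight

    print(retrieved)  # {'D1': 2, 'D2': 4, 'D3': 1}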


Slide 37

Step 4: Ranking

• Sort the hashtable of retrieved documents based on the value of cosine similarity:
  – sort {$retrieved{$b} <=> $retrieved{$a}} keys %retrieved
• Return the documents in descending order of their relevance
• Input: similarity values between query and documents
• Output: ranked list of documents in decreasing order of their relevance


Slide 38

What weighting methods?

• Weights applied to both document terms and query terms
• Direct impact on the final ranking
  → Direct impact on the results
  → Direct impact on the quality of the IR system


Slide 39

Standard Evaluation Measures

Starts with a CONTINGENCY table:

                       relevant   not relevant
    retrieved             w            x          n1 = w + x
    not retrieved         y            z
                      n2 = w + y                  N


Slide 40

Precision and Recall

Recall = w / (w + y)
From all the documents that are relevant out there, how many did the IR system retrieve?

Precision = w / (w + x)
From all the documents that are retrieved by the IR system, how many are relevant?
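A closing sketch computing both measures from the contingency counts (the values are invented for illustration):

    # w = retrieved & relevant, x = retrieved & not relevant,
    # y = not retrieved & relevant (toy counts)
    w, x, y = 30, 10, 20

    recall = w / (w + y)      # how many of the relevant documents were retrieved
    precision = w / (w + x)   # how many of the retrieved documents are relevant
    print(recall, precision)  # 0.6 0.75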