Singular Value Decomposition and Data Management

Post on 21-Dec-2015


Page 1: Singular Value Decomposition and Data Management

Singular Value Decomposition and Data Management

Page 2: Singular Value Decomposition and Data Management

SVD - Detailed outline
- Motivation
- Definition - properties
- Interpretation
- Complexity
- Case studies
- Additional properties

Page 3: Singular Value Decomposition and Data Management

SVD - Motivation
- problem #1: text - LSI: find 'concepts'
- problem #2: compression / dim. reduction

Page 4: Singular Value Decomposition and Data Management

SVD - Motivation
problem #1: text - LSI: find 'concepts'

Page 5: Singular Value Decomposition and Data Management

SVD - Motivation
problem #2: compress / reduce dimensionality

Page 6: Singular Value Decomposition and Data Management

Problem - specs:
- ~10^6 rows; ~10^3 columns; no updates
- random access to any cell(s)
- small error: OK

Page 7: Singular Value Decomposition and Data Management

SVD - Motivation

Page 8: Singular Value Decomposition and Data Management

SVD - Motivation

Page 9: Singular Value Decomposition and Data Management

SVD - Definition

A[n x m] = U[n x r] Σ[r x r] (V[m x r])^T

- A: n x m matrix (e.g., n documents, m terms)
- U: n x r matrix (n documents, r concepts)
- Σ: r x r diagonal matrix (strength of each 'concept') (r: rank of the matrix)
- V: m x r matrix (m terms, r concepts)

Page 10: Singular Value Decomposition and Data Management

SVD - Properties

THEOREM [Press+92]: it is always possible to decompose a matrix A into A = U Σ V^T, where:
- U, V: unique (*)
- U, V: column-orthonormal (i.e., columns are unit vectors, orthogonal to each other): U^T U = I; V^T V = I (I: identity matrix)
- Σ: singular values, non-negative and sorted in decreasing order
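These properties can be checked numerically; the following is a minimal sketch using NumPy (not part of the slides), on an arbitrary small example matrix:

```python
import numpy as np

A = np.array([[1., 1., 0.],
              [0., 1., 1.]])          # any n x m matrix

# "economy" SVD: U is n x r, Vt is r x m
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# U, V are column-orthonormal: U^T U = I and V^T V = I
assert np.allclose(U.T @ U, np.eye(len(s)))
assert np.allclose(Vt @ Vt.T, np.eye(len(s)))

# singular values are non-negative and sorted in decreasing order
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)

# the product U Σ V^T reconstructs A
assert np.allclose(U @ np.diag(s) @ Vt, A)
```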

Page 11: Singular Value Decomposition and Data Management

SVD - Example: A = U Σ V^T

Terms (columns): data, inf., retrieval, brain, lung. Rows: four CS documents, then three MD documents.

A =
1 1 1 0 0
2 2 2 0 0
1 1 1 0 0
5 5 5 0 0
0 0 0 2 2
0 0 0 3 3
0 0 0 1 1

U =
0.18 0
0.36 0
0.18 0
0.90 0
0    0.53
0    0.80
0    0.27

Σ =
9.64 0
0    5.29

V^T =
0.58 0.58 0.58 0    0
0    0    0    0.71 0.71
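The decomposition above can be reproduced numerically; a sketch with NumPy (the matrix is the one from the slide):

```python
import numpy as np

A = np.array([[1, 1, 1, 0, 0],
              [2, 2, 2, 0, 0],
              [1, 1, 1, 0, 0],
              [5, 5, 5, 0, 0],
              [0, 0, 0, 2, 2],
              [0, 0, 0, 3, 3],
              [0, 0, 0, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# only two non-zero singular values -> rank 2
print(np.round(s, 2))   # approximately [9.64, 5.29, 0, 0, 0]
```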

Page 12: Singular Value Decomposition and Data Management

A = U Σ V^T - same example; the first column of U corresponds to the CS-concept, the second to the MD-concept.

Page 13: Singular Value Decomposition and Data Management

A = U Σ V^T - same example; U is the doc-to-concept similarity matrix.

Page 14: Singular Value Decomposition and Data Management

A = U Σ V^T - same example; the diagonal of Σ gives the 'strength' of each concept (9.64 for the CS-concept).

Page 15: Singular Value Decomposition and Data Management

A = U Σ V^T - same example; V^T is the term-to-concept similarity matrix (first row: the CS-concept).


Page 17: Singular Value Decomposition and Data Management

SVD - Detailed outline
- Motivation
- Definition - properties
- Interpretation
- Complexity
- Case studies
- Additional properties

Page 18: Singular Value Decomposition and Data Management

SVD - Interpretation #1

'documents', 'terms' and 'concepts':
- U: document-to-concept similarity matrix
- V: term-to-concept similarity matrix
- Σ: its diagonal elements give the 'strength' of each concept

Page 19: Singular Value Decomposition and Data Management

SVD - Interpretation #2
best axis to project on ('best' = minimizes the sum of squares of the projection errors)

Page 20: Singular Value Decomposition and Data Management

SVD - Motivation

Page 21: Singular Value Decomposition and Data Management

SVD - Interpretation #2
minimum RMS error: SVD gives the best axis (v1) to project on.

Page 22: Singular Value Decomposition and Data Management

SVD - Interpretation #2

Page 23: Singular Value Decomposition and Data Management

SVD - Interpretation #2

A = U Σ V^T - same example as before; v1, the first right singular vector, is the best projection axis.

Page 24: Singular Value Decomposition and Data Management

SVD - Interpretation #2
A = U Σ V^T - same example; σ1 measures the variance ('spread') along the v1 axis.

Page 25: Singular Value Decomposition and Data Management

SVD - Interpretation #2
A = U Σ V^T - same example; U gives the coordinates of the points on the projection axes.

Page 26: Singular Value Decomposition and Data Management

SVD - Interpretation #2 - more details
Q: how exactly is the dim. reduction done? (same example decomposition as before)

Page 27: Singular Value Decomposition and Data Management

SVD - Interpretation #2 - more details
Q: how exactly is the dim. reduction done?
A: set the smallest singular values to zero (same example decomposition as before).
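A sketch of this truncation step with NumPy, on the slide's matrix (keeping only the strongest concept):

```python
import numpy as np

A = np.array([[1, 1, 1, 0, 0], [2, 2, 2, 0, 0], [1, 1, 1, 0, 0],
              [5, 5, 5, 0, 0], [0, 0, 0, 2, 2], [0, 0, 0, 3, 3],
              [0, 0, 0, 1, 1]], dtype=float)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

s_trunc = s.copy()
s_trunc[1:] = 0.0                   # zero out all but the strongest singular value
A1 = U @ np.diag(s_trunc) @ Vt      # rank-1 approximation of A

print(np.round(A1, 2))              # the CS block survives; the MD rows go to zero
```

Because the two blocks of this particular A are orthogonal, the rank-1 approximation reproduces the CS block exactly and zeroes the MD rows.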

Page 28: Singular Value Decomposition and Data Management

SVD - Interpretation #2

Setting the smallest singular value to zero:

A ~ U x diag(9.64, 0) x V^T   (same U and V^T as before)


Page 30: Singular Value Decomposition and Data Management

SVD - Interpretation #2

Dropping the zeroed concept altogether:

A ~ u1 x σ1 x v1^T =

0.18
0.36
0.18
0.90   x  9.64  x  [0.58 0.58 0.58 0 0]
0
0
0

Page 31: Singular Value Decomposition and Data Management

SVD - Interpretation #2

A =
1 1 1 0 0
2 2 2 0 0
1 1 1 0 0
5 5 5 0 0
0 0 0 2 2
0 0 0 3 3
0 0 0 1 1

~

1 1 1 0 0
2 2 2 0 0
1 1 1 0 0
5 5 5 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0

Page 32: Singular Value Decomposition and Data Management

SVD - Interpretation #2

Equivalent: 'spectral decomposition' of the matrix (same example decomposition as before).

Page 33: Singular Value Decomposition and Data Management

SVD - Interpretation #2

Equivalent: 'spectral decomposition' of the matrix:

A = [u1 u2] x diag(σ1, σ2) x [v1 v2]^T

Page 34: Singular Value Decomposition and Data Management

SVD - Interpretation #2
Equivalent: 'spectral decomposition' of the (n x m) matrix:

A = σ1 u1 v1^T + σ2 u2 v2^T + ... = Σ_{i=1}^{r} σi ui vi^T

Page 35: Singular Value Decomposition and Data Management

SVD - Interpretation #2

'spectral decomposition' of the matrix:

A = σ1 u1 v1^T + σ2 u2 v2^T + ...   (r terms; each ui is n x 1, each vi^T is 1 x m)

Page 36: Singular Value Decomposition and Data Management

SVD - Interpretation #2

approximation / dim. reduction: keep the first few terms (Q: how many?)

A ~ σ1 u1 v1^T + σ2 u2 v2^T + ...   (assume σ1 >= σ2 >= ...)

To do the mapping you use V^T: X' = V^T X
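The mapping X' = V^T X can be sketched as follows with NumPy; the query vector x is an assumed example, not from the slides:

```python
import numpy as np

A = np.array([[1, 1, 1, 0, 0], [2, 2, 2, 0, 0], [1, 1, 1, 0, 0],
              [5, 5, 5, 0, 0], [0, 0, 0, 2, 2], [0, 0, 0, 3, 3],
              [0, 0, 0, 1, 1]], dtype=float)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
Vk = Vt[:k].T                        # m x k term-to-concept matrix

x = np.array([1., 1., 1., 0., 0.])   # an assumed new 'document' vector (m terms)
x_new = Vk.T @ x                     # its k-dimensional concept coordinates
print(np.round(x_new, 2))            # strong on the CS-concept, zero on the MD-concept
```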

Page 37: Singular Value Decomposition and Data Management

SVD - Interpretation #2

A (heuristic - [Fukunaga]): keep 80-90% of the 'energy' (= the sum of squares of the σi's), assuming σ1 >= σ2 >= ...
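The energy heuristic can be sketched as a small helper; `choose_k` is a hypothetical name, and the thresholds follow the 80-90% rule above:

```python
import numpy as np

def choose_k(singular_values, energy=0.9):
    """Smallest k whose squared singular values cover `energy` of the total."""
    s2 = np.asarray(singular_values) ** 2
    cum = np.cumsum(s2) / s2.sum()
    return int(np.searchsorted(cum, energy) + 1)

s = np.array([9.64, 5.29])           # values from the slides' example
print(choose_k(s, 0.7))              # -> 1 (the first concept covers ~77% of the energy)
print(choose_k(s, 0.9))              # -> 2
```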

Page 38: Singular Value Decomposition and Data Management

SVD - Interpretation #3

finds non-zero 'blobs' in a data matrix (same example decomposition as before)


Page 40: Singular Value Decomposition and Data Management

SVD - Interpretation #3 Drill: find the SVD, ‘by inspection’! Q: rank = ??

1 1 1 0 0

1 1 1 0 0

1 1 1 0 0

0 0 0 1 1

0 0 0 1 1

= ?? x ?? x ??

Page 41: Singular Value Decomposition and Data Management

SVD - Interpretation #3
A: rank = 2 (2 linearly independent rows/cols)

1 1 1 0 0
1 1 1 0 0
1 1 1 0 0
0 0 0 1 1
0 0 0 1 1

= ?? x diag(??, ??) x ??

Page 42: Singular Value Decomposition and Data Management

SVD - Interpretation #3
A: rank = 2 (2 linearly independent rows/cols)

A =
1 1 1 0 0
1 1 1 0 0
1 1 1 0 0
0 0 0 1 1
0 0 0 1 1

U (unscaled) =
1 0
1 0
1 0
0 1
0 1

Σ =
?? 0
0  ??

V^T (unscaled) =
1 1 1 0 0
0 0 0 1 1

orthogonal??

Page 43: Singular Value Decomposition and Data Management

SVD - Interpretation #3
column vectors: are orthogonal - but not unit vectors. Normalizing (by 1/√3 for the first block, 1/√2 for the second):

A =
1 1 1 0 0
1 1 1 0 0
1 1 1 0 0
0 0 0 1 1
0 0 0 1 1

U =
1/√3 0
1/√3 0
1/√3 0
0    1/√2
0    1/√2

Σ =
?? 0
0  ??

V^T =
1/√3 1/√3 1/√3 0    0
0    0    0    1/√2 1/√2

Page 44: Singular Value Decomposition and Data Management

SVD - Interpretation #3
and the singular values are:

A =
1 1 1 0 0
1 1 1 0 0
1 1 1 0 0
0 0 0 1 1
0 0 0 1 1

U =
1/√3 0
1/√3 0
1/√3 0
0    1/√2
0    1/√2

Σ =
3 0
0 2

V^T =
1/√3 1/√3 1/√3 0    0
0    0    0    1/√2 1/√2
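The drill's answer can be verified numerically (a NumPy sketch, not part of the slides):

```python
import numpy as np

A = np.array([[1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A)
print(np.round(s, 2))   # singular values: 3 and 2 (the rest ~0), so rank = 2
```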

Page 45: Singular Value Decomposition and Data Management

SVD - Interpretation #3
A: SVD properties:
- the matrix product U Σ V^T should give back the matrix A
- matrix U should be column-orthonormal, i.e., its columns should be unit vectors, orthogonal to each other
- ditto for matrix V
- matrix Σ should be diagonal, with non-negative values

Page 46: Singular Value Decomposition and Data Management

SVD - Complexity
- O(n * m * m) or O(n * n * m) (whichever is less)
- less work if we just want the singular values, or the first k left singular vectors, or if the matrix is sparse [Berry]
- implemented in any linear algebra package (LINPACK, matlab, Splus, mathematica, ...)

Page 47: Singular Value Decomposition and Data Management

Optimality of SVD

Def: the Frobenius norm of an n x m matrix M is
||M||_F = sqrt( Σ_{i,j} M[i,j]^2 )

(reminder) The rank of a matrix M is the number of linearly independent rows (or columns) of M.

Let A = U Σ V^T and A_k = U_k Σ_k V_k^T (the rank-k SVD approximation of A); A_k is an n x m matrix, U_k is n x k, Σ_k is k x k, and V_k is m x k.

Theorem [Eckart and Young]: among all n x m matrices C of rank at most k, we have:
||A - A_k||_F <= ||A - C||_F
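A numerical illustration of the theorem (a NumPy sketch; the random matrices are assumptions for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]      # best rank-k approximation
best_err = np.linalg.norm(A - Ak, 'fro')

# the optimal error equals sqrt(sum of squares of the discarded singular values)
assert np.isclose(best_err, np.sqrt((s[k:] ** 2).sum()))

# any other rank-k matrix C does at least as badly
C = rng.standard_normal((6, k)) @ rng.standard_normal((k, 4))
assert np.linalg.norm(A - C, 'fro') >= best_err
```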

Page 48: Singular Value Decomposition and Data Management

Kleinberg's Algorithm
Main idea: in many cases, when you search the web using some terms, the most relevant pages may not contain these terms, or may contain them only a few times. Example: Harvard (www.harvard.edu); search engines (yahoo, google, altavista).

Authorities and hubs

Page 49: Singular Value Decomposition and Data Management

Kleinberg's algorithm
Problem dfn: given the web and a query, find the most 'authoritative' web pages for this query.

Step 0: find all pages containing the query terms (root set)
Step 1: expand by one move forward and backward (base set)

Page 50: Singular Value Decomposition and Data Management

Kleinberg's algorithm
Step 1: expand by one move forward and backward

Page 51: Singular Value Decomposition and Data Management

Kleinberg's algorithm
On the resulting graph:
- give a high score (= 'authority' score) to nodes that many important nodes point to
- give a high importance score ('hub' score) to nodes that point to good 'authorities'

(figure: hubs and authorities)

Page 52: Singular Value Decomposition and Data Management

Kleinberg’s algorithm

Observations: this is a recursive definition! Each node (say, the i-th node) has both an authoritativeness score a_i and a hubness score h_i.

Page 53: Singular Value Decomposition and Data Management

Kleinberg’s algorithm

Let E be the set of edges and A the adjacency matrix: entry (i,j) is 1 if the edge from i to j exists. Let h and a be [n x 1] vectors with the 'hubness' and 'authoritativeness' scores.

Then:

Page 54: Singular Value Decomposition and Data Management

Kleinberg’s algorithm

Then: a_i = h_k + h_l + h_m (where k, l, m are the nodes pointing to node i)

that is: a_i = Sum(h_j) over all j such that the edge (j,i) exists

or: a = A^T h

Page 55: Singular Value Decomposition and Data Management

Kleinberg’s algorithm

Symmetrically, for the 'hubness':

h_i = a_n + a_p + a_q (where n, p, q are the nodes that node i points to)

that is: h_i = Sum(a_j) over all j such that the edge (i,j) exists

or: h = A a

Page 56: Singular Value Decomposition and Data Management

Kleinberg’s algorithm

In conclusion, we want vectors h and a such that:
h = A a
a = A^T h

Recall the SVD properties:
C(2): A [n x m] v1 [m x 1] = σ1 u1 [n x 1]
C(3): u1^T A = σ1 v1^T

Page 57: Singular Value Decomposition and Data Management

Kleinberg's algorithm
In short, the solutions to
h = A a
a = A^T h
are the left and right singular vectors of the adjacency matrix A (equivalently, eigenvectors of A A^T and A^T A).

Starting from a random a and iterating, we'll eventually converge.

(Q: to which of all the eigenvectors? why?)
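The iteration described above (alternating a = A^T h and h = A a, with normalization) can be sketched as follows; the 3-node adjacency matrix is an assumed toy example, not from the slides:

```python
import numpy as np

# assumed toy graph; A[i, j] = 1 if edge i -> j exists
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

n = A.shape[0]
h = np.ones(n)                 # hub scores
a = np.ones(n)                 # authority scores

for _ in range(100):
    a = A.T @ h                # a = A^T h
    h = A @ a                  # h = A a
    a /= np.linalg.norm(a)     # normalize to keep the scores bounded
    h /= np.linalg.norm(h)

# a and h converge to the strongest right/left singular vectors of A
print(np.round(a, 3), np.round(h, 3))
```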

Page 58: Singular Value Decomposition and Data Management

Kleinberg’s algorithm

(Q: to which of all the eigenvectors? why?)

A: to the ones of the strongest eigenvalue, because of property B(5):
B(5): (A^T A)^k v' ~ (constant) v1

Page 59: Singular Value Decomposition and Data Management

Kleinberg’s algorithm - results

E.g., for the query 'java':
0.328 www.gamelan.com
0.251 java.sun.com
0.190 www.digitalfocus.com ("the java developer")

Page 60: Singular Value Decomposition and Data Management

Kleinberg’s algorithm - discussion

The 'authority' score can be used to find pages 'similar' to a page p; closely related to 'citation analysis' and to social networks / 'small world' phenomena.

Page 61: Singular Value Decomposition and Data Management

google/page-rank algorithm

Closely related: the Web is a directed graph of connected nodes. Imagine a particle randomly moving along the edges (*); compute its steady-state probabilities. That gives the PageRank of each page (the importance of the page).

(*) with occasional random jumps

Page 62: Singular Value Decomposition and Data Management

PageRank Definition
Assume a page A and pages T1, T2, ..., Tm that point to A. Let d be a damping factor, PR(A) the PageRank of A, and C(A) the out-degree of A. Then:

PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + PR(T2)/C(T2) + ... + PR(Tm)/C(Tm) )
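The definition above can be iterated to a fixed point; a sketch in plain Python on an assumed 3-page toy graph (not from the slides), with d = 0.85:

```python
d = 0.85
# assumed out-links: page -> pages it points to
links = {'A': ['B', 'C'], 'B': ['C'], 'C': ['A']}

pr = {p: 1.0 for p in links}                     # initial PageRank values
for _ in range(100):
    new = {}
    for p in links:
        # pages T that point to p, each contributing PR(T)/C(T)
        incoming = sum(pr[t] / len(links[t]) for t in links if p in links[t])
        new[p] = (1 - d) + d * incoming
    pr = new

print({p: round(v, 3) for p, v in pr.items()})
```

With no dangling pages, the scores converge and their sum equals the number of pages.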

Page 63: Singular Value Decomposition and Data Management

google/page-rank algorithm
Computing the PR of each page is a near-identical problem: given a Markov Chain, compute the steady-state probabilities p1 ... p5.

(figure: a 5-node Markov chain, nodes 1-5)

Page 64: Singular Value Decomposition and Data Management

Computing PageRank
Iterative procedure. Equivalently: navigate the web by randomly following links, or, with probability p, jump to a random page. Let A be the adjacency matrix (n x n) and d_i the out-degree of page i. Then:

Prob(Ai -> Aj) = p * n^(-1) + (1 - p) * d_i^(-1) * A[i,j]

A'[i,j] = Prob(Ai -> Aj)
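The matrix form above can be sketched with NumPy; the adjacency matrix is an assumed toy example, and the steady state is found by power iteration:

```python
import numpy as np

p = 0.15                                        # random-jump probability
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)          # assumed toy adjacency matrix
n = A.shape[0]
d_out = A.sum(axis=1)                           # out-degrees d_i (no dangling pages here)

# A'[i, j] = p * (1/n) + (1 - p) * (1/d_i) * A[i, j]; each row sums to 1
Aprime = p / n + (1 - p) * A / d_out[:, None]

pi = np.ones(n) / n                             # start from the uniform distribution
for _ in range(100):
    pi = Aprime.T @ pi                          # iterate toward the steady state
print(np.round(pi, 3))
```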

Page 65: Singular Value Decomposition and Data Management

google/page-rank algorithm
Let A' be the transition matrix (= adjacency matrix, row-normalized: the sum of each row = 1)

(figure: the 5-node example graph; its transition matrix, with entries 1 and 1/2, multiplies the steady-state vector [p1 p2 p3 p4 p5]^T to give it back)

Page 66: Singular Value Decomposition and Data Management

google/page-rank algorithm
A p = p

(figure: the same 5-node graph and transition matrix as on the previous slide)

Page 67: Singular Value Decomposition and Data Management

google/page-rank algorithm

A p = p: thus, p is the eigenvector that corresponds to the highest eigenvalue (= 1, since the matrix is row-normalized)

Page 68: Singular Value Decomposition and Data Management

Kleinberg/google - conclusions

SVD helps in graph analysis:
- hub/authority scores: the strongest left and right singular vectors of the adjacency matrix
- random walk on a graph: the steady-state probabilities are given by the strongest eigenvector of the transition matrix

Page 69: Singular Value Decomposition and Data Management

Conclusions - so far
SVD: a valuable tool. Given a document-term matrix, it finds 'concepts' (LSI) ... and can reduce dimensionality (KL: Karhunen-Loeve).

Page 70: Singular Value Decomposition and Data Management

Conclusions cont'd
... and can find fixed points / steady-state probabilities (google / Kleinberg / Markov Chains)

... and can optimally solve over- and under-constrained linear systems (least squares)