Linear Subspace Transforms
PCA, Karhunen-Loeve, Hotelling
C306, 2000
New idea: Image dependent basis
Previous transforms (Fourier and Cosine) use a fixed basis.
Better compression can be achieved by tailoring the basis to the image (pixel) statistics.
Sample data
Let x = [x1, x2, …, xw] be a sequence of random variables (vectors).
Example 1: x = column flattening of an image, so x is sampled over several images.
Example 2: x = column flattening of an image block, so x could be sampled from only one image.
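As a sketch of Example 2 (not from the slides; the image and block size are made-up assumptions), flattening the blocks of a single image into sample vectors with NumPy might look like:

```python
import numpy as np

# Hypothetical 8x8 grayscale image split into 4x4 blocks;
# each block is column-flattened into a 16-dimensional sample vector.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8)).astype(float)

block = 4
samples = []
for r in range(0, image.shape[0], block):
    for c in range(0, image.shape[1], block):
        # Column flattening: stack the block's columns (Fortran order)
        samples.append(image[r:r + block, c:c + block].flatten(order="F"))
samples = np.array(samples)  # one row per sample vector x_j
```

Each row of `samples` is then one realization of the random vector x.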
Covariance Matrix
C = (1/W) * sum_{j=1}^{W} (x_j - m)(x_j - m)'   (eqn 1)

m = (1/W) * sum_{i=1}^{W} x_i                   (eqn 2)

Note 1: Can more compactly write: C = X'*X/W (with X the matrix whose rows are the centered samples) and m = sum(x)/W
Note 2: The covariance matrix is both symmetric and real, therefore the matrix can be diagonalized and its eigenvalues will be real
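A minimal NumPy sketch of eqns (1) and (2); the synthetic data matrix X (rows = sample vectors) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
W = 100                       # number of sample vectors
X = rng.normal(size=(W, 5))   # rows are the sample vectors x_j

m = X.sum(axis=0) / W         # eqn 2: mean vector
Xc = X - m                    # centered samples
C = Xc.T @ Xc / W             # eqn 1: C = (1/W) sum (x_j - m)(x_j - m)'
```

Note the compact form C = X'*X/W falls out directly once the rows of X are centered.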
Coordinate Transform
Diagonalize C, i.e. find B s.t.
B'CB = D = diag(λ1, …, λw)
Note:
1. C symmetric => B orthogonal
2. B columns = C eigenvectors
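Since C is real and symmetric, `numpy.linalg.eigh` can be used to find B; a small sketch on made-up data (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
m = X.mean(axis=0)
Xc = X - m
C = Xc.T @ Xc / len(X)

# eigh is for symmetric matrices: real eigenvalues, orthogonal eigenvectors
lams, B = np.linalg.eigh(C)   # columns of B are eigenvectors of C

D = B.T @ C @ B               # B'CB = D = diag(lambda_j)
```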
Definition: KL, PCA, Hotelling transform
Forward coordinate transform: y = B'*(x - mx)
Inverse transform: x = B*y + mx
Note: my = E[y] = 0 (zero mean)
Cy = E[yy'] = D (diagonal covariance matrix)
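The forward/inverse pair and the two noted properties of y can be checked numerically; a sketch with assumed synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
mx = X.mean(axis=0)
C = (X - mx).T @ (X - mx) / len(X)
lams, B = np.linalg.eigh(C)

Y = (X - mx) @ B          # forward: y = B'(x - mx), applied to every sample
X_rec = Y @ B.T + mx      # inverse: x = B*y + mx

Cy = Y.T @ Y / len(Y)     # empirical covariance of the transformed samples
```

The transformed samples have zero mean, a diagonal covariance matrix, and the inverse transform reconstructs X exactly.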
Dimensionality reduction
Can write: x = y1B1 + y2B2 + … + ywBw
But can drop some terms and use only a subset, i.e. x ≈ y1B1 + y2B2
Result: Fewer y values encode almost all the data in x
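A sketch of the truncation, on made-up data deliberately constructed so that two directions carry almost all the variance (the data model is an assumption, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
# 6-D data that really lives near a 2-D subspace, plus small noise
Z = rng.normal(size=(300, 2))
A = rng.normal(size=(2, 6))
X = Z @ A + 0.01 * rng.normal(size=(300, 6))

mx = X.mean(axis=0)
C = (X - mx).T @ (X - mx) / len(X)
lams, B = np.linalg.eigh(C)      # eigenvalues in ascending order

K = 2
Bk = B[:, -K:]                   # keep the K largest-eigenvalue columns
Y = (X - mx) @ Bk                # only K coefficients y_i per sample
X_approx = Y @ Bk.T + mx         # x ≈ y1*B1 + y2*B2 + mx

err = np.mean(np.sum((X - X_approx) ** 2, axis=1))
```

Here 2 of the 6 coefficients suffice: the approximation error is a tiny fraction of the total variance.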
Remaining error
Mean square error of the approximation is
MSE = sum_{j=K+1}^{N} λ_j
Proof: (on blackboard)
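Although the proof is left to the blackboard, the result can be verified numerically: with K components kept, the mean square error equals the sum of the discarded eigenvalues. A sketch on assumed synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 5))
mx = X.mean(axis=0)
Xc = X - mx
C = Xc.T @ Xc / len(X)
lams, B = np.linalg.eigh(C)          # eigenvalues in ascending order

K = 2                                # keep the 2 largest components
Bk = B[:, -K:]
X_approx = (Xc @ Bk) @ Bk.T + mx

# Mean square error of the K-term approximation
mse = np.mean(np.sum((X - X_approx) ** 2, axis=1))
```

Because the eigenvalues come from the empirical covariance of the same samples, the identity holds exactly (up to floating point).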
Summary: Hotelling, PCA, KLT
Advantages:
1. Optimal coding in the least square sense
2. Theoretically well founded
Disadvantages:
1. Computationally intensive
2. Image dependent basis