Matrix Multiplication
Chapter II: Matrix Analysis
By Gokturk Poyrazoglu
The State University of New York at Buffalo – BEST Group – Winter Lecture Series
Outline
1. Basic Linear Algebra
2. Vector Norms
3. Matrix Norms
4. The Singular Value Decomposition
5. SVD Properties
Basic Ideas from Linear Algebra
Independence:
An indexed family of vectors is linearly independent if none of them
can be written as a linear combination of finitely many other vectors
in the family.
A subset S of a vector space A is called linearly dependent if there
exist a finite number of distinct vectors a1, a2, ..., an in S and scalars
α1, α2, ..., αn, not all zero, such that
α1 a1 + α2 a2 + ... + αn an = 0.
Subspace (span):
Given a collection of vectors a1, a2, ..., an, the set of all their linear
combinations is a subspace, called their span:
span{a1, ..., an} = { β1 a1 + ... + βn an : βj ∈ R }.
Basic Ideas from Linear Algebra
Maximal Linearly Independent Subset:
The subset {ai1, ai2, ..., aik} is a maximal linearly independent subset
of {a1, a2, ..., an} if it is linearly independent and is not properly
contained in any larger linearly independent subset of {a1, a2, ..., an}.
Basis:
If the subset {ai1, ai2, ..., aik} is maximal and its span equals the
span of {a1, a2, ..., an}, then the maximal subset {ai1, ai2, ..., aik}
is a BASIS for that subspace.
Dimension :
All bases for a subspace S have the same number of elements.
This number is the dimension of S.
Range – Null Space – Rank
Range: ran(A) = { y ∈ R^m : y = Ax for some x ∈ R^n }
Null Space: null(A) = { x ∈ R^n : Ax = 0 }
Rank: rank(A) = dim(ran(A))
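As a quick illustration (a NumPy sketch of my own, not from the slides; the example matrix is hypothetical), rank and a null-space basis can be computed numerically via the SVD:

```python
import numpy as np

# Hypothetical example: rank 2, since the third row is row1 + row2.
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [5., 7., 9.]])

print(np.linalg.matrix_rank(A))           # rank(A) = dim(ran(A)) -> 2

# Null-space basis from the SVD: right singular vectors whose
# singular values are (numerically) zero span null(A).
U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
null_basis = Vt[s < tol]                  # each row z satisfies A z ≈ 0
print(np.allclose(A @ null_basis.T, 0))   # True
```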
Matrix Inverse
Consider A and X as n-by-n square matrices.
X is the inverse of A if AX = XA = I.
If such an X exists, then A is said to be nonsingular.
Otherwise, A is a singular matrix.
Properties:
(AB)^-1 = B^-1 A^-1 and (A^-1)^T = (A^T)^-1 = A^-T.
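A small NumPy check of the definition and the product rule (an illustrative sketch, not part of the original slides; the matrices are arbitrary nonsingular examples):

```python
import numpy as np

A = np.array([[4., 7.], [2., 6.]])
B = np.array([[1., 2.], [3., 5.]])

X = np.linalg.inv(A)                  # raises LinAlgError if A is singular
print(np.allclose(A @ X, np.eye(2)))  # AX = I -> True
print(np.allclose(X @ A, np.eye(2)))  # XA = I -> True

# (AB)^-1 = B^-1 A^-1
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))  # True
```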
Vector Orthogonality
Orthogonal:
A set of vectors {x1, x2, ..., xn} is orthogonal if xi^T xj = 0 whenever i ≠ j.
Orthonormal:
A set of vectors {x1, x2, ..., xn} is orthonormal if xi^T xj = δij,
where δij is the Kronecker delta.
Matrix Orthogonality
Orthogonal Matrix:
A square matrix Q is said to be orthogonal if Q^T Q = Q Q^T = I.
Properties:
Columns of Q are orthonormal vectors.
If V1 is a rectangular matrix with orthonormal columns, we can find a
matrix V2 such that V = [V1 V2] is an orthogonal matrix.
Determinant of Q is either +1 or -1.
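A quick NumPy sketch (my illustration, not from the slides) that builds an orthogonal Q via the QR factorization of a random matrix and checks the properties above:

```python
import numpy as np

rng = np.random.default_rng(0)

# QR factorization of a random matrix yields an orthogonal Q.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

print(np.allclose(Q.T @ Q, np.eye(4)))          # Q^T Q = I -> True
print(np.allclose(Q @ Q.T, np.eye(4)))          # Q Q^T = I -> True
print(np.isclose(abs(np.linalg.det(Q)), 1.0))   # det(Q) = ±1 -> True
```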
Determinant
For a square matrix A ∈ R^(n×n), the cofactor expansion along the first row is
det(A) = Σ_{j=1}^{n} (−1)^(j+1) a1j det(A1j), with det(a) = a for a scalar a,
where A1j is the matrix obtained by deleting the 1st row and jth column of A.
Properties (for square matrices):
1. det(AB) = det(A) det(B)
2. det(A^T) = det(A)
3. det(cA) = c^n det(A)
4. det(A) ≠ 0 if and only if A is nonsingular
5. det(A^-1) = 1 / det(A)
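A direct Python transcription of the cofactor expansion above (my sketch, not from the slides), checked against NumPy; the recursion is O(n!) and is for study only:

```python
import numpy as np

def det_cofactor(A):
    """Determinant via cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # A1j: delete row 0 and column j of A
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
print(det_cofactor(A), np.linalg.det(A))   # both ≈ 8.0
```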
Eigenvalues
The set of matrix A's eigenvalues is λ(A) = { z ∈ C : det(zI − A) = 0 }.
If the eigenvalues of A are real, index them from largest to smallest:
λ1(A) ≥ λ2(A) ≥ ... ≥ λn(A).
Properties:
1. Matrices A and B are similar if B = X^-1 A X for some nonsingular X.
Similar matrices have exactly the same eigenvalues.
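A short NumPy check (my sketch, not from the lecture) that a similarity transform preserves the eigenvalue set:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))     # almost surely nonsingular

B = np.linalg.inv(X) @ A @ X        # B = X^-1 A X, similar to A
eigA = np.sort_complex(np.linalg.eigvals(A))
eigB = np.sort_complex(np.linalg.eigvals(B))
print(np.allclose(eigA, eigB))      # True: identical eigenvalues
```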
Eigenvectors
There is a nonzero vector x such that the following holds:
Ax = λx.
Such a vector x is said to be an eigenvector of A associated with the
eigenvalue λ.
Note:
If an n-by-n square matrix A has n linearly independent eigenvectors,
then A is diagonalizable: X^-1 A X = diag(λ1, ..., λn), where the
columns of X are the eigenvectors.
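A minimal NumPy sketch of both facts (not from the slides; the symmetric example matrix is mine and is diagonalizable by construction):

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])   # symmetric, hence diagonalizable

lam, X = np.linalg.eig(A)            # columns of X are eigenvectors
print(np.allclose(A @ X, X * lam))   # A x_i = λ_i x_i for each column

# Diagonalization: X^-1 A X = diag(λ1, ..., λn)
print(np.allclose(np.linalg.inv(X) @ A @ X, np.diag(lam)))  # True
```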
Vector Norms
A vector norm is a function f: R^n → R that satisfies the following:
1. f(x) ≥ 0, with f(x) = 0 if and only if x = 0
2. f(x + y) ≤ f(x) + f(y)   (triangle inequality)
3. f(αx) = |α| f(x) for all scalars α
Notation: f(x) = ||x||
p-norms: ||x||_p = (|x1|^p + |x2|^p + ... + |xn|^p)^(1/p),  p ≥ 1
Important norms (1, 2, ∞):
||x||_1 = |x1| + ... + |xn|
||x||_2 = (x1^2 + ... + xn^2)^(1/2)
||x||_∞ = max_i |xi|
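For concreteness, a NumPy sketch (my example, not from the slides) evaluating these norms on one vector:

```python
import numpy as np

x = np.array([3., -4., 0.])

print(np.linalg.norm(x, 1))        # ||x||_1   = 7.0
print(np.linalg.norm(x, 2))        # ||x||_2   = 5.0
print(np.linalg.norm(x, np.inf))   # ||x||_inf = 4.0
print(sum(abs(x)**3) ** (1/3))     # general p-norm, here p = 3
```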
Vector Norm Properties
Hölder inequality: |x^T y| ≤ ||x||_p ||y||_q, where 1/p + 1/q = 1.
The special case p = q = 2 is called the Cauchy-Schwarz inequality:
|x^T y| ≤ ||x||_2 ||y||_2.
The 2-norm is preserved under orthogonal transformation:
if Q is an orthogonal matrix, ||Qx||_2 = ||x||_2.
Absolute Error – Relative Error
Absolute error in x̂: ε_abs = ||x̂ − x||
Relative error in x̂: ε_rel = ||x̂ − x|| / ||x||
Special case for relative error, the ∞-norm:
if ||x̂ − x||_∞ / ||x||_∞ ≈ 10^(-p), then the largest component of x̂
has approximately p correct significant digits.
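A small numerical illustration (my hypothetical vectors, not from the slides) of the significant-digits rule:

```python
import numpy as np

x     = np.array([1.000, 9.999])   # hypothetical exact vector
x_hat = np.array([1.001, 9.991])   # hypothetical computed vector

rel = np.linalg.norm(x_hat - x, np.inf) / np.linalg.norm(x, np.inf)
p = -np.log10(rel)
print(rel, p)   # rel ≈ 8.0e-4, so p ≈ 3: about 3 correct digits in 9.991
```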
Matrix Norms
Frobenius norm: ||A||_F = ( Σ_i Σ_j |aij|^2 )^(1/2)
p-norm: ||A||_p = max_{x ≠ 0} ||Ax||_p / ||x||_p
Meaning: ||A||_p is the p-norm of the largest vector obtained by
applying A to a unit p-norm vector.
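A NumPy sketch (my example matrix, not from the slides) evaluating the common matrix norms:

```python
import numpy as np

A = np.array([[1., -2.], [3., 4.]])

print(np.linalg.norm(A, 'fro'))    # Frobenius norm = sqrt(30) ≈ 5.477
print(np.linalg.norm(A, 1))        # induced 1-norm (max column sum) = 6
print(np.linalg.norm(A, 2))        # induced 2-norm (largest singular value)
print(np.linalg.norm(A, np.inf))   # induced inf-norm (max row sum) = 7
```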
Matrix Norm Properties
1. ||A||_1 = max_j Σ_i |aij|   (maximum absolute column sum)
2. ||A||_∞ = max_i Σ_j |aij|   (maximum absolute row sum)
3. ||AB||_p ≤ ||A||_p ||B||_p   (submultiplicativity)
Sequence Convergence:
Consider {Ak} as a sequence of matrices; then {Ak} converges to A if
lim_{k→∞} ||Ak − A|| = 0.
Orthogonal Invariance
Multiplying a matrix A by orthogonal matrices Q and Z changes neither
the 2-norm nor the Frobenius norm of A:
||QAZ||_2 = ||A||_2 and ||QAZ||_F = ||A||_F.
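As a numerical example (my sketch, not from the slides), the invariance can be verified with random orthogonal factors built via QR:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # orthogonal Q
Z, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # orthogonal Z

B = Q @ A @ Z
print(np.isclose(np.linalg.norm(B, 2), np.linalg.norm(A, 2)))          # True
print(np.isclose(np.linalg.norm(B, 'fro'), np.linalg.norm(A, 'fro')))  # True
```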
The Singular Value Decomposition
Every m×n matrix M can be factored as M = U Σ V*, where:
1. U is an m×m unitary matrix (an orthogonal matrix if M is real),
2. Σ is an m×n diagonal matrix with nonnegative real numbers on the diagonal,
3. V* denotes the conjugate transpose of the n×n unitary matrix V,
4. the diagonal entries of Σ are known as the singular values of M,
5. the diagonal matrix Σ is uniquely determined by M (though the
matrices U and V are not).
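A minimal NumPy sketch (my example, not from the slides) computing the SVD of a real rectangular matrix and reconstructing it:

```python
import numpy as np

M = np.array([[3., 1., 1.], [-1., 3., 1.]])   # 2x3 real matrix

U, s, Vt = np.linalg.svd(M)                   # M = U diag(s) V^T
Sigma = np.zeros(M.shape)
Sigma[:len(s), :len(s)] = np.diag(s)          # embed s in a 2x3 Σ
print(np.allclose(M, U @ Sigma @ Vt))         # True
print(s)                                      # singular values, descending
```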
SVD Properties
*Detailed proof is given in slide 28
SVD Properties
The corollary states that:
1. The columns of V (right-singular vectors) are eigenvectors of A^T A.
2. The columns of U (left-singular vectors) are eigenvectors of A A^T.
3. The nonzero elements of Σ (nonzero singular values) are the square
roots of the nonzero eigenvalues of A A^T or A^T A.
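A quick numerical check of point 3 (my sketch, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

U, s, Vt = np.linalg.svd(A)
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # eigenvalues of A^T A

# Singular values are the square roots of the eigenvalues of A^T A.
print(np.allclose(s, np.sqrt(np.clip(eig_AtA, 0, None))))   # True
```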
SVD Properties
The Eckart-Young theorem states that the closest rank-k matrix to A
(in the 2-norm) is Ak = Σ_{i=1}^{k} σi ui vi^T (rank minimization).
It also states that the smallest singular value of A is the 2-norm
distance of A to the set of all rank-deficient matrices:
min_{rank(B) < n} ||A − B||_2 = σn.
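A truncated-SVD sketch in NumPy (my example, not from the slides) illustrating the rank-k statement:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A)

k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]    # best rank-k approximation

# By Eckart-Young, the 2-norm error equals the (k+1)-th singular value.
print(np.isclose(np.linalg.norm(A - Ak, 2), s[k]))   # True
```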
Complex SVD
Unitary Matrix:
A complex square matrix U is a unitary matrix if U* U = U U* = I,
where U* is the conjugate transpose of U.
Complex SVD:
Consider a complex square matrix A; then there exist unitary matrices
U and V such that U* A V = Σ = diag(σ1, ..., σn).
Sensitivity of Square Systems
We want to solve Ax = b.
How do perturbations in A and b affect the solution x?
SVD Analysis:
Writing A = U Σ V^T, the solution is x = Σ_{i=1}^{n} (ui^T b / σi) vi,
so small singular values σi greatly amplify perturbations; the
sensitivity is governed by the condition number κ2(A) = σ1/σn.
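To make the amplification concrete, a small NumPy sketch (my deliberately ill-conditioned example, not from the slides):

```python
import numpy as np

# Hypothetical ill-conditioned system: κ2(A) = σ1/σ2 = 1e6.
A = np.diag([1.0, 1e-6])
b = np.array([1.0, 0.0])
x = np.linalg.solve(A, b)                   # x = [1, 0]

db = np.array([0.0, 1e-8])                  # tiny perturbation of b
x_pert = np.linalg.solve(A, b + db)         # x changes by [0, 1e-2]

rel_in  = np.linalg.norm(db) / np.linalg.norm(b)          # 1e-8
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)  # 1e-2
print(np.linalg.cond(A, 2), rel_out / rel_in)             # both 1e6
```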
Extra Proof Slides
Proof of SVD Theorem (Slide 18)
Proof of SVD Theorem (Slide 19)
Proof of the Eckart-Young Theorem (Slide 23)
Nearness to Singularity
The question is:
If det(A) = 0 is equivalent to singularity, is det(A) ≈ 0 equivalent
to near singularity?
The answer is NO: for example, A = 0.1·I (n×n) has det(A) = 10^(-n),
which is tiny for large n, yet A is perfectly conditioned and nowhere
near singular.
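The scaled-identity example above can be checked directly (my NumPy sketch, not from the slides):

```python
import numpy as np

for n in (2, 10, 50):
    A = 0.1 * np.eye(n)
    # det shrinks like 10^-n, yet cond(A) stays 1: far from singular.
    print(n, np.linalg.det(A), np.linalg.cond(A, 2))
```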
Matrix 2-norm
Consider a unit 2-norm vector z; then
||A||_2 = max_{||z||_2 = 1} ||Az||_2 = σ1(A),
the largest singular value of A.