TRANSCRIPT
Advanced Topics in Digital Communications (Spezielle Methoden der digitalen Datenübertragung)
Dr.-Ing. Carsten Bockelmann
Institute for Telecommunications and High-Frequency Techniques
Department of Communications Engineering
Room: SPT C3160, Phone: 0421/218-62386
www.ant.uni-bremen.de/courses/atdc/
Lecture: Thursday, 10:00 – 12:00 in N3130
Exercise: Wednesday, 14:00 – 16:00 in N1250
Dates for exercises will be announced during lectures.
Tutor: Tobias Monsees
Room: SPT C3220, Phone: 218-62407
Who are we?
Lecturers: Dirk gives the lecture from October to the end of November; Carsten gives the lecture from December (end of parental leave) to the end of the semester.
Tutor: Tobias provides guidance in the exercises and is available for questions.
Carsten Bockelmann, Dirk Wübben, Tobias Monsees
Aim of Course and Requirements
Bridging the gap between courses and theses: the course focuses on state-of-the-art topics that are the subject of current research.
Interactive exercises:
- Executed in small groups
- Solve small problems with Matlab autonomously
- Presentation and discussion during the exercises
Requirements for course attendance (recommended):
- Wireless Communications
- Channel Coding I
- Digital Signal Processing
Outline
Part 1: Linear Algebra
- Eigenvalues and eigenvectors, pseudo inverse
- Decompositions (QR, unitary matrices, singular value, Cholesky)
Part 2: Basics and Preliminaries
- Motivating systems with Multiple Inputs and Multiple Outputs (multiple access techniques)
- General classification and description of MIMO systems (SIMO, MISO, MIMO)
- Mobile radio channel
Part 3: Information Theory for MIMO Systems
- Repetition of IT basics, channel capacity for the SISO AWGN channel
- Extension to SISO fading channels
- Generalization to the MIMO case
Part 4: Multiple Antenna Systems
- SIMO: diversity gain, beamforming at the receiver
- MISO: space-time coding, beamforming at the transmitter
- MIMO: BLAST with detection strategies
- Influence of the channel (correlation)
Part 5: Relaying Systems
- Basic relaying structures
- Relaying protocols and exemplary configurations
Part 6: In-Network Processing
Part 7: Compressive Sensing
- Motivation: sampling below Nyquist
- Reconstruction principles and algorithms
- Applications
Linear Algebra
Notations and definitions:
- Vectors and matrices, special matrices
- Elementary operations: matrix multiplication, transpose, Hermitian transpose
- Determinants, vector and matrix norms
- Linear combinations (range, null space)
Linear equation systems:
- Cramer's rule, Gaussian elimination, iterative methods
- Inverse matrix, matrix inversion lemma, inverse of a block matrix
Matrix factorizations:
- LU, Cholesky, QR (Gram-Schmidt, Householder, Givens)
- Eigenvalues and eigenvectors
- Singular value decomposition SVD (pseudo-inverse, condition number)
Least squares
Part 1: Linear Algebra
Notations and Definitions (1)
Vectors:
- Column vectors (preferred): boldface lower case
- Row vectors: underlined boldface lower case
Matrices: boldface capital letters (m × n matrix).
Column vectors are just m × 1 matrices; row vectors are just 1 × n matrices.
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},
\qquad
\underline{\mathbf{x}} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}

\mathbf{A} = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{bmatrix}
= \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{bmatrix}
= \begin{bmatrix} \underline{\mathbf{a}}_1 \\ \underline{\mathbf{a}}_2 \\ \vdots \\ \underline{\mathbf{a}}_m \end{bmatrix}
Notations and Definitions (2)
Some special matrices: identity matrix and zero matrix; diagonal, lower and upper triangular matrices. Explicit dimensions: \mathbf{I}_m is the m × m identity matrix, \mathbf{0}_{m,n} the m × n zero matrix.

\mathbf{I} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix},
\qquad
\mathbf{0} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}

\mathbf{D} = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix},
\quad
\mathbf{L} = \begin{bmatrix} l_{1,1} & 0 & \cdots & 0 \\ l_{2,1} & l_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n,1} & l_{n,2} & \cdots & l_{n,n} \end{bmatrix},
\quad
\mathbf{U} = \begin{bmatrix} u_{1,1} & u_{1,2} & \cdots & u_{1,n} \\ 0 & u_{2,2} & \cdots & u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{n,n} \end{bmatrix}
Basic Operations and Properties
Let A, B, C be m × n matrices and α, β be scalars. Addition and scalar multiplication are defined element-wise:

\mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{1,1}+b_{1,1} & \cdots & a_{1,n}+b_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1}+b_{m,1} & \cdots & a_{m,n}+b_{m,n} \end{bmatrix},
\qquad
\alpha\mathbf{A} = \begin{bmatrix} \alpha a_{1,1} & \cdots & \alpha a_{1,n} \\ \vdots & \ddots & \vdots \\ \alpha a_{m,1} & \cdots & \alpha a_{m,n} \end{bmatrix}

Properties:
- Addition is commutative: \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A}
- Addition is associative: (\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C})
- Neutral element of addition: \mathbf{A} + \mathbf{0} = \mathbf{A}
- Inverse element of addition: \mathbf{A} + (-\mathbf{A}) = \mathbf{0}
- Scalar multiplication is associative: (\alpha\beta)\mathbf{A} = \alpha(\beta\mathbf{A})
- Neutral element of scalar multiplication: 1\mathbf{A} = \mathbf{A}
- Scalar multiplication is distributive: (\alpha + \beta)\mathbf{A} = \alpha\mathbf{A} + \beta\mathbf{A}
- Scalar multiplication is distributive: \alpha(\mathbf{A} + \mathbf{B}) = \alpha\mathbf{A} + \alpha\mathbf{B}
Matrix Multiplication (1)
Let A be an m × n matrix and B an n × p matrix. The product C = AB is an m × p matrix with elements "row times column". Note: the number of columns of A has to equal the number of rows of B. With A partitioned into columns \mathbf{a}_k or rows \underline{\mathbf{a}}_i, and B into columns \mathbf{b}_j or rows \underline{\mathbf{b}}_k, the elements are

c_{i,j} = \underline{\mathbf{a}}_i \cdot \mathbf{b}_j = \sum_{k=1}^{n} a_{i,k}\, b_{k,j}

Equivalent formulations of the matrix multiplication:

\mathbf{C} = \begin{bmatrix} \sum_{k=1}^{n} a_{1,k}b_{k,1} & \cdots & \sum_{k=1}^{n} a_{1,k}b_{k,p} \\ \vdots & \ddots & \vdots \\ \sum_{k=1}^{n} a_{m,k}b_{k,1} & \cdots & \sum_{k=1}^{n} a_{m,k}b_{k,p} \end{bmatrix}
= \begin{bmatrix} \underline{\mathbf{a}}_1\mathbf{b}_1 & \cdots & \underline{\mathbf{a}}_1\mathbf{b}_p \\ \vdots & \ddots & \vdots \\ \underline{\mathbf{a}}_m\mathbf{b}_1 & \cdots & \underline{\mathbf{a}}_m\mathbf{b}_p \end{bmatrix}
= \begin{bmatrix} \mathbf{A}\mathbf{b}_1 & \cdots & \mathbf{A}\mathbf{b}_p \end{bmatrix}
= \begin{bmatrix} \underline{\mathbf{a}}_1\mathbf{B} \\ \vdots \\ \underline{\mathbf{a}}_m\mathbf{B} \end{bmatrix}
= \sum_{k=1}^{n} \mathbf{a}_k\,\underline{\mathbf{b}}_k
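The exercises use Matlab; as a quick numerical sketch (our Python/NumPy illustration, not part of the slides), the equivalent formulations above can be verified on a random example:

```python
import numpy as np

# Four equivalent ways to form C = A*B for an example m x n times n x p product.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # m x n
B = rng.standard_normal((3, 5))   # n x p

C = A @ B                                                             # element-wise definition
C_cols = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])    # columns A*b_j
C_rows = np.vstack([A[i, :] @ B for i in range(A.shape[0])])          # rows a_i*B
C_outer = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))  # sum of outer products

assert np.allclose(C, C_cols) and np.allclose(C, C_rows) and np.allclose(C, C_outer)
```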
Matrix Multiplication (2)
Special cases of C = AB (A is m × n, B is n × p):
- m = 1, n > 1, p = 1 (row vector times column vector): inner or scalar product, a scalar,
  c = \underline{\mathbf{a}}\,\mathbf{b} = \sum_{k=1}^{n} a_k b_k
- m = 1, n > 1, p > 1 (row vector times matrix): a row vector,
  \underline{\mathbf{c}} = \underline{\mathbf{a}}\,\mathbf{B} = \sum_{k=1}^{n} a_k \underline{\mathbf{b}}_k
- m > 1, n > 1, p = 1 (matrix times column vector): a column vector,
  \mathbf{c} = \mathbf{A}\mathbf{b} = \sum_{k=1}^{n} \mathbf{a}_k b_k
- m > 1, n = 1, p > 1 (column vector times row vector): outer or dyadic product, an m × p matrix,
  \mathbf{C} = \mathbf{a}\,\underline{\mathbf{b}} = \begin{bmatrix} a_{1,1}b_{1,1} & \cdots & a_{1,1}b_{1,p} \\ \vdots & \ddots & \vdots \\ a_{m,1}b_{1,1} & \cdots & a_{m,1}b_{1,p} \end{bmatrix}
Matrix Multiplication (3)
Properties:
- Matrix multiplication is distributive: (\mathbf{A} + \mathbf{B})\mathbf{C} = \mathbf{A}\mathbf{C} + \mathbf{B}\mathbf{C}
- Matrix multiplication is distributive: \mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}
- Mixed scalar / matrix multiplication is associative: \alpha(\mathbf{A}\mathbf{B}) = (\alpha\mathbf{A})\mathbf{B} = \mathbf{A}(\alpha\mathbf{B})
- Matrix multiplication is associative: (\mathbf{A}\mathbf{B})\mathbf{C} = \mathbf{A}(\mathbf{B}\mathbf{C}) = \mathbf{A}\mathbf{B}\mathbf{C}

Note: matrix multiplication is not commutative in general. Example:

\mathbf{A} = \begin{bmatrix} 2 & 6 \\ 1 & 7 \end{bmatrix}, \quad
\mathbf{B} = \begin{bmatrix} -3 & -1 \\ 2 & 1 \end{bmatrix}, \quad
\mathbf{C} = \begin{bmatrix} 15 & 6 \\ 1 & 20 \end{bmatrix}

\mathbf{A}\mathbf{B} = \begin{bmatrix} 6 & 4 \\ 11 & 6 \end{bmatrix} \neq
\mathbf{B}\mathbf{A} = \begin{bmatrix} -7 & -25 \\ 5 & 19 \end{bmatrix},
\qquad
\mathbf{A}\mathbf{C} = \begin{bmatrix} 36 & 132 \\ 22 & 146 \end{bmatrix} =
\mathbf{C}\mathbf{A}
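The 2 × 2 example above is easy to check numerically; here is a NumPy sketch (our illustration, not from the slides):

```python
import numpy as np

# The slide's example: AB != BA in general, but these particular A and C commute.
A = np.array([[2, 6], [1, 7]])
B = np.array([[-3, -1], [2, 1]])
C = np.array([[15, 6], [1, 20]])

assert np.array_equal(A @ B, np.array([[6, 4], [11, 6]]))
assert np.array_equal(B @ A, np.array([[-7, -25], [5, 19]]))
assert not np.array_equal(A @ B, B @ A)   # not commutative in general
assert np.array_equal(A @ C, C @ A)       # but AC = CA for this pair
```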
Transpose and Hermitian Transpose
Transpose of a matrix: row vectors become column vectors and vice versa,

\mathbf{A} = \begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix}
\;\Rightarrow\;
\mathbf{A}^T = \begin{bmatrix} a_{1,1} & \cdots & a_{m,1} \\ \vdots & \ddots & \vdots \\ a_{1,n} & \cdots & a_{m,n} \end{bmatrix}

Hermitian transpose of a complex matrix: transpose of the complex conjugate matrix,

\mathbf{A}^H = (\mathbf{A}^*)^T = \begin{bmatrix} a^*_{1,1} & \cdots & a^*_{m,1} \\ \vdots & \ddots & \vdots \\ a^*_{1,n} & \cdots & a^*_{m,n} \end{bmatrix}

Properties:
- (\mathbf{A}^T)^T = \mathbf{A} and (\mathbf{A}^H)^H = \mathbf{A}
- (\mathbf{A} + \mathbf{B})^T = \mathbf{A}^T + \mathbf{B}^T and (\mathbf{A} + \mathbf{B})^H = \mathbf{A}^H + \mathbf{B}^H
- (\mathbf{A}\mathbf{B})^T = \mathbf{B}^T\mathbf{A}^T and (\mathbf{A}\mathbf{B})^H = \mathbf{B}^H\mathbf{A}^H
Determinants (1)
Determinant of a 2 × 2 matrix:

\det \mathbf{A} = |\mathbf{A}| = \begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix} = a_{1,1}a_{2,2} - a_{2,1}a_{1,2}

Determinant of a 3 × 3 matrix (Sarrus' rule):

\det \mathbf{A} = a_{1,1}a_{2,2}a_{3,3} + a_{1,2}a_{2,3}a_{3,1} + a_{1,3}a_{2,1}a_{3,2} - a_{3,1}a_{2,2}a_{1,3} - a_{3,2}a_{2,3}a_{1,1} - a_{3,3}a_{2,1}a_{1,2}

Determinant of an n × n matrix: let the (n-1) × (n-1) minor matrix \mathbf{A}_{i,j} equal \mathbf{A} without the i-th row and j-th column (\det \mathbf{A}_{i,j} is the corresponding minor). Recursive definition of the determinant by cofactor expansion:

\det \mathbf{A} = \sum_{i=1}^{n} (-1)^{i+j}\, a_{i,j} \det \mathbf{A}_{i,j} \quad (column expansion, fixed j)

\det \mathbf{A} = \sum_{j=1}^{n} (-1)^{i+j}\, a_{i,j} \det \mathbf{A}_{i,j} \quad (row expansion, fixed i)
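The cofactor expansion can be coded directly; this recursive sketch (our Python illustration, exponential cost, so for small matrices only) expands along the first row:

```python
import numpy as np

# Laplace (cofactor) expansion along the first row; use np.linalg.det in practice.
def det_cofactor(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor matrix A_{1,j}: A without row 1 and column j+1 (0-based indices)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 6.0], [1.0, 7.0]])
assert np.isclose(det_cofactor(A), 8.0)        # 2*7 - 1*6
assert np.isclose(det_cofactor(np.eye(4)), 1.0)
```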
Determinants (2)
Fundamental properties:
- Linearity in columns (rows): |\alpha\mathbf{a}_1 + \alpha'\mathbf{a}'_1 \;\; \mathbf{a}_2| = \alpha|\mathbf{a}_1\;\mathbf{a}_2| + \alpha'|\mathbf{a}'_1\;\mathbf{a}_2|
- Exchanging two columns (rows): |\mathbf{a}_2\;\mathbf{a}_1| = -|\mathbf{a}_1\;\mathbf{a}_2|
- Determinant of the identity matrix: \det \mathbf{I} = 1
Some additional properties:
- Symmetry in columns and rows: \det \mathbf{A} = \det \mathbf{A}^T
- Zero column (row): |\mathbf{0}\;\mathbf{a}_2| = 0
- Two equal columns (rows): |\mathbf{a}_1\;\mathbf{a}_1| = 0
- Multiple of one column (row): |\alpha\mathbf{a}_1\;\mathbf{a}_2| = \alpha|\mathbf{a}_1\;\mathbf{a}_2|
- Scalar multiplication: \det(\alpha\mathbf{A}) = \alpha^n \det \mathbf{A}
- Adding a multiple of one column (row) to another: |\mathbf{a}_1 + \alpha\mathbf{a}_2\;\;\mathbf{a}_2| = \det \mathbf{A}
- Determinant of a matrix product: \det(\mathbf{A}\mathbf{B}) = \det \mathbf{A} \cdot \det \mathbf{B}
All properties are valid for arbitrary n × n matrices.
Determinants (3)
Determinant of a diagonal or triangular matrix: the product of the diagonal elements,

\det \mathbf{D} = \prod_{i=1}^{n} d_{i,i}, \qquad \det \mathbf{L} = \prod_{i=1}^{n} l_{i,i}, \qquad \det \mathbf{U} = \prod_{i=1}^{n} u_{i,i}

The determinant vanishes as soon as at least one factor (diagonal element) is zero.
Efficient calculation of the determinant:
- The determinant is unaffected by adding multiples of rows (columns) to other rows (columns)
- Transform A into a triangular matrix by elementary row (column) operations and multiply the diagonal elements
Practical meaning of the determinant:
- If det A = 0, the matrix A is singular
- |det A| equals the volume of the parallelepiped with edges given by the rows (columns) of A
- Gives formulas for the pivots used for solving linear equation systems ...
Vector and Matrix Norm
Trace and diag operation:

\mathrm{tr}\{\mathbf{A}\} = \sum_{i=1}^{n} a_{i,i}, \quad
\mathrm{diag}\{\mathbf{A}\} = \begin{bmatrix} a_{1,1} \\ \vdots \\ a_{n,n} \end{bmatrix}, \quad
\mathrm{diag}\{\mathbf{x}\} = \begin{bmatrix} x_1 & 0 & \cdots & 0 \\ 0 & x_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & x_n \end{bmatrix}

Vector norm (\ell_2-norm, Euclidean length):

\|\mathbf{x}\| = \|\mathbf{x}\|_2 = \sqrt{\mathbf{x}^H\mathbf{x}} = \sqrt{\mathrm{tr}\{\mathbf{x}\mathbf{x}^H\}} = \sqrt{\sum_{i=1}^{n} x_i^* x_i} = \sqrt{\sum_{i=1}^{n} |x_i|^2}

Matrix norm (\ell_2-norm, spectral norm), where \sigma_i are the singular values of A and \sigma_{\max}(A), \sigma_{\min}(A) the largest and smallest singular value:

\|\mathbf{A}\|_2 = \sup_{\mathbf{x}\neq\mathbf{0}} \frac{\|\mathbf{A}\mathbf{x}\|}{\|\mathbf{x}\|} = \sup_{\|\mathbf{x}\|=1} \|\mathbf{A}\mathbf{x}\| = \sigma_{\max}(\mathbf{A}),
\qquad
\|\mathbf{A}^{-1}\|_2 = \frac{1}{\sigma_{\min}(\mathbf{A})},
\qquad
\|\mathbf{A}\mathbf{x}\| \leq \|\mathbf{A}\|_2 \cdot \|\mathbf{x}\|

Frobenius norm:

\|\mathbf{A}\|_F = \sqrt{\mathrm{tr}\{\mathbf{A}\mathbf{A}^H\}} = \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} |a_{i,j}|^2} = \sqrt{\sum_{i} \sigma_i^2}
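These norm identities can be spot-checked numerically; a NumPy sketch on random data (our illustration, not from the slides):

```python
import numpy as np

# Check: spectral norm = sigma_max, Frobenius norm = sqrt(sum sigma_i^2), and
# the bound ||Ax|| <= ||A||_2 ||x||.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending

assert np.isclose(np.linalg.norm(A, 2), sigma[0])
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(sigma**2)))
assert np.isclose(np.linalg.norm(x), np.sqrt(x @ x))
assert np.linalg.norm(A @ x) <= np.linalg.norm(A, 2) * np.linalg.norm(x) + 1e-12
```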
Linear Equation Systems (1)
System of m linear equations in n unknowns:

a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,n}x_n = b_1
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,n}x_n = b_2
\vdots
a_{m,1}x_1 + a_{m,2}x_2 + \cdots + a_{m,n}x_n = b_m

Matrix-vector notation: \mathbf{A}\mathbf{x} = \mathbf{b}, with the extended coefficient matrix [\mathbf{A}\,|\,\mathbf{b}].
Geometric interpretations:
- x is the intersection of m hyperplanes: \underline{\mathbf{a}}_i\mathbf{x} = b_i for i = 1, \ldots, m
- b is a linear combination of the column vectors: \sum_{i=1}^{n} x_i\mathbf{a}_i = \mathbf{b}
Linear Combination
A matrix A ∈ R^{m×n} describes a linear mapping of a vector x ∈ R^n onto a vector y ∈ R^m:

\mathbf{A}: \mathbb{R}^n \to \mathbb{R}^m, \quad \mathbf{x} \mapsto \mathbf{A}\cdot\mathbf{x}, \quad \text{with linearity } \mathbf{A}\cdot(\gamma\mathbf{x} + \mathbf{x}') = \gamma(\mathbf{A}\mathbf{x}) + \mathbf{A}\mathbf{x}'

The vector y is given by a linear combination of the column vectors a_i:

\mathbf{y} = \mathbf{A}\cdot\mathbf{x} = \mathbf{a}_1 x_1 + \mathbf{a}_2 x_2 + \cdots + \mathbf{a}_n x_n = \sum_{i=1}^{n} \mathbf{a}_i x_i

Important subspaces:
- Range (span, image): the subspace consisting of all linear combinations of a_1, ..., a_n,
  \mathcal{R}\{\mathbf{A}\} = \mathrm{span}\{\mathbf{A}\} = \{\mathbf{y} \,|\, \mathbf{y} = \mathbf{A}\cdot\mathbf{x},\; \mathbf{x} \in \mathbb{R}^n\},
  is called the subspace spanned by A. If the columns of A are linearly independent, they form a basis of the spanned space.
- Null space (kernel): all vectors x such that Ax = 0,
  \mathcal{N}\{\mathbf{A}\} = \mathrm{kern}\{\mathbf{A}\} = \{\mathbf{x} \,|\, \mathbf{A}\cdot\mathbf{x} = \mathbf{0},\; \mathbf{x} \in \mathbb{R}^n\}
(Figure: y as the linear combination x_1 a_1 + x_2 a_2 of the column vectors a_1, a_2.)
Linear Combination
Example \mathbb{R}^1 \to \mathbb{R}^3 (a line in \mathbb{R}^3):

\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} a_{1,1} \\ a_{2,1} \\ a_{3,1} \end{bmatrix} x_1 = \mathbf{a}_1 x_1

Example \mathbb{R}^2 \to \mathbb{R}^3 (a plane in \mathbb{R}^3):

\mathbf{y} = \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ a_{3,1} & a_{3,2} \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \mathbf{a}_1 x_1 + \mathbf{a}_2 x_2

Example \mathbb{R}^3 \to \mathbb{R}^3:

\mathbf{y} = \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \mathbf{a}_1 x_1 + \mathbf{a}_2 x_2 + \mathbf{a}_3 x_3

(Figures: y on the line spanned by a_1; y in the plane spanned by a_1, a_2; y in the space spanned by a_1, a_2, a_3.)
Linear Equation Systems (2)
Illustration for a 2 × 2 system (the hyperplanes \underline{\mathbf{a}}_1\mathbf{x} = b_1 and \underline{\mathbf{a}}_2\mathbf{x} = b_2 are straight lines):
- Intersecting straight lines, a_1 and a_2 linearly independent: unique solution
- Parallel straight lines, a_1 and a_2 parallel: no solution
- Identical straight lines, a_1, a_2 and b parallel: infinite number of solutions
(Figures: row picture in the (x_1, x_2)-plane and column picture with b as the combination x_1 a_1 + x_2 a_2, for each of the three cases.)
Linear Equation Systems (3)
Elementary operations that result in equivalent linear equation systems:
- Interchange two equations
- Multiply an equation by a nonzero scalar
- Add a constant multiple of one equation to another
As equations correspond to rows of the extended coefficient matrix [A | b], the elementary operations are performed on the rows of this matrix.
Applications of the elementary operations:
- Apply operations to the rows of the extended coefficient matrix [A | b] to simplify the calculation of the solution
- Calculation of the inverse by the Gauss-Jordan method
- Cholesky and QR decomposition of matrices
Linear Equation Systems (4)
Square linear equation system Ax = b with n equations in n unknowns.
Cramer's rule: let A_j equal A with the j-th column replaced by b,

\mathbf{A}_j = \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{j-1} & \mathbf{b} & \mathbf{a}_{j+1} & \cdots & \mathbf{a}_n \end{bmatrix}

Then the j-th element of x is

x_j = \frac{\det \mathbf{A}_j}{\det \mathbf{A}}

Proof: substitute \mathbf{b} = \sum_{i=1}^{n} x_i\mathbf{a}_i into A_j and use linearity in columns. Example for n = 5 and j = 3:

\det(\mathbf{A}_3) = |\mathbf{a}_1\;\mathbf{a}_2\;\mathbf{b}\;\mathbf{a}_4\;\mathbf{a}_5|
= |\mathbf{a}_1\;\mathbf{a}_2\;\textstyle\sum_{i=1}^{5} x_i\mathbf{a}_i\;\mathbf{a}_4\;\mathbf{a}_5|
= |\mathbf{a}_1\;\mathbf{a}_2\;x_3\mathbf{a}_3\;\mathbf{a}_4\;\mathbf{a}_5|
= x_3 \cdot |\mathbf{a}_1\;\mathbf{a}_2\;\mathbf{a}_3\;\mathbf{a}_4\;\mathbf{a}_5| = x_3 \cdot \det(\mathbf{A})

(all other terms of the sum vanish because they contain two equal columns). Three possibilities:
- det A ≠ 0: unique solution
- det A = 0 and det A_j ≠ 0 for some j: no solution
- det A = 0 and det A_j = 0 for all j: infinite number of solutions
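Cramer's rule translates directly into code; a short NumPy sketch (our illustration, suited only to small systems because of the determinant cost):

```python
import numpy as np

# Cramer's rule: x_j = det(A_j) / det(A), A_j = A with column j replaced by b.
def cramer_solve(A, b):
    A = np.asarray(A, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(A.shape[0])
    for j in range(A.shape[0]):
        Aj = A.copy()
        Aj[:, j] = b                      # replace j-th column by b
        x[j] = np.linalg.det(Aj) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```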
Gaussian Elimination (1)
Example: 3 × 3 system with extended matrix [A | b].

(1) Elimination: subtract multiples of rows to create zeros and transform the system into upper triangular form. The pivot elements are a_{1,1}, a^{(1)}_{2,2}, a^{(2)}_{3,3}, and the multipliers are

l_{2,1} = a_{2,1}/a_{1,1}, \qquad l_{3,1} = a_{3,1}/a_{1,1}, \qquad l_{3,2} = a^{(1)}_{3,2}/a^{(1)}_{2,2}

\left[\begin{array}{ccc|c} a_{1,1} & a_{1,2} & a_{1,3} & b_1 \\ a_{2,1} & a_{2,2} & a_{2,3} & b_2 \\ a_{3,1} & a_{3,2} & a_{3,3} & b_3 \end{array}\right]
\to
\left[\begin{array}{ccc|c} a_{1,1} & a_{1,2} & a_{1,3} & b_1 \\ 0 & a^{(1)}_{2,2} & a^{(1)}_{2,3} & b^{(1)}_2 \\ 0 & a^{(1)}_{3,2} & a^{(1)}_{3,3} & b^{(1)}_3 \end{array}\right]
\to
\left[\begin{array}{ccc|c} a_{1,1} & a_{1,2} & a_{1,3} & b_1 \\ 0 & a^{(1)}_{2,2} & a^{(1)}_{2,3} & b^{(1)}_2 \\ 0 & 0 & a^{(2)}_{3,3} & b^{(2)}_3 \end{array}\right]

(2) Back-substitution: solve for the unknowns in reverse order,

x_3 = b^{(2)}_3 / a^{(2)}_{3,3}
x_2 = (b^{(1)}_2 - a^{(1)}_{2,3}x_3) / a^{(1)}_{2,2}
x_1 = (b_1 - a_{1,2}x_2 - a_{1,3}x_3) / a_{1,1}

Extension to (1): if a^{(j-1)}_{j,j} = 0 and a^{(j-1)}_{k,j} ≠ 0 for some k > j, exchange rows; if a^{(j-1)}_{k,j} = 0 for all k > j, move to the next column (reduced systems).
Gaussian Elimination (2)
Special cases after elimination (• marks a nonzero pivot, ∗ an arbitrary entry; the last column is the right-hand side):
- All diagonal elements nonzero, e.g. [• ∗ ∗ | ∗; 0 • ∗ | ∗; 0 0 • | ∗]: unique solution (back-substitute x_3, then x_2, then x_1)
- Zero row in the coefficient matrix with nonzero right-hand side, e.g. [∗ ∗ ∗ | ∗; 0 ∗ ∗ | ∗; 0 0 0 | •]: no solution
- Zero rows in the coefficient matrix with zero right-hand sides, e.g. [• ∗ ∗ | ∗; 0 • ∗ | ∗; 0 0 0 | 0] or [• ∗ ∗ | ∗; 0 0 0 | 0; 0 0 0 | 0]: infinite number of solutions with free parameters
Gaussian Elimination (3)
General formulation of the algorithm.
(1) Initialization and elimination:

A^{(0)} := A, b^{(0)} := b
for j := 1 to m − 1 do
    pivot search: n_j := index of the first column with a nonzero entry in rows j, ..., m;
        if no such n_j exists then r := j − 1, break;
        exchange rows so that a^{(j−1)}_{j,n_j} ≠ 0
    for i := j + 1 to m do
        l_{i,j} = a^{(j−1)}_{i,n_j} / a^{(j−1)}_{j,n_j}
        for k := n_j + 1 to n do
            a^{(j)}_{i,k} = a^{(j−1)}_{i,k} − l_{i,j} · a^{(j−1)}_{j,k}
        end
        b^{(j)}_i = b^{(j−1)}_i − l_{i,j} · b^{(j−1)}_j
    end
end

(2) Back-substitution:

choose values for the free parameters
for j := r down to 1 do
    x_{n_j} = (b^{(j−1)}_j − \sum_{k=n_j+1}^{n} a^{(j−1)}_{j,k} · x_k) · \frac{1}{a^{(j−1)}_{j,n_j}}
end
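For the common case of a square nonsingular system, the algorithm above reduces to elimination with row exchanges plus back-substitution. A NumPy sketch (our illustration; the rank-deficient bookkeeping of the general algorithm is omitted):

```python
import numpy as np

# Gaussian elimination with partial pivoting, then back-substitution in reverse order.
def gauss_solve(A, b):
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = A.shape[0]
    for j in range(n - 1):
        p = j + np.argmax(np.abs(A[j:, j]))          # pivot search
        A[[j, p]], b[[j, p]] = A[[p, j]], b[[p, j]]  # row exchange
        for i in range(j + 1, n):                    # eliminate below the pivot
            l = A[i, j] / A[j, j]
            A[i, j:] -= l * A[j, j:]
            b[i] -= l * b[j]
    x = np.empty(n)
    for j in range(n - 1, -1, -1):                   # back-substitution
        x[j] = (b[j] - A[j, j + 1:] @ x[j + 1:]) / A[j, j]
    return x

A = np.array([[0.0, 2.0, 1.0], [1.0, 1.0, 1.0], [2.0, 1.0, 3.0]])  # needs a row exchange
b = np.array([4.0, 3.0, 7.0])
assert np.allclose(gauss_solve(A, b), np.linalg.solve(A, b))
```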
Gaussian Elimination (4)
Result after the elimination step: the extended matrix is in echelon form,

\left[\begin{array}{ccccc|c}
a^{(0)}_{1,n_1} & \cdots & * & \cdots & * & b^{(0)}_1 \\
0 & \cdots & a^{(1)}_{2,n_2} & \cdots & * & b^{(1)}_2 \\
\vdots & & & \ddots & \vdots & \vdots \\
0 & \cdots & 0 & a^{(r-1)}_{r,n_r} & * & b^{(r-1)}_r \\
0 & \cdots & 0 & \cdots & 0 & b^{(r)}_{r+1} \\
\vdots & & \vdots & & \vdots & \vdots \\
0 & \cdots & 0 & \cdots & 0 & b^{(r)}_m
\end{array}\right]

with r nonzero rows and m − r zero rows on the left-hand side.
- Number of nonzero rows on the left-hand side: rank{A} = r (number of linearly independent equations)
- A solution exists only if r = m, or if r < m and b^{(r)}_{r+1} = \cdots = b^{(r)}_m = 0
- Unique solution if r = n (no free parameters)
- Infinite number of solutions if r < n (n − r free parameters)
Iterative Solution of Linear Equation Systems
Linear equation system:

\mathbf{A}\mathbf{x} = \mathbf{b} \;\Leftrightarrow\; \sum_{j=1}^{n} a_{i,j}x_j = b_i \quad \text{for } 1 \leq i \leq n

Basic idea of iterative algorithms:
- Start with an initial estimate x^{(0)} of the solution vector
- Find an improved approximation x^{(k+1)} from the previous approximation x^{(k)}
- Stop after convergence
Jacobi: solve row i for the unknown x_i, using only values from the previous iteration,

x^{(k+1)}_i = \Big(b_i - \sum_{j=1}^{i-1} a_{i,j}x^{(k)}_j - \sum_{j=i+1}^{n} a_{i,j}x^{(k)}_j\Big) \cdot \frac{1}{a_{i,i}}

Parallel implementation is possible.
Gauss-Seidel: use the already updated values,

x^{(k+1)}_i = \Big(b_i - \sum_{j=1}^{i-1} a_{i,j}x^{(k+1)}_j - \sum_{j=i+1}^{n} a_{i,j}x^{(k)}_j\Big) \cdot \frac{1}{a_{i,i}}

Better convergence behavior than Jacobi, but no parallel implementation possible.
Conjugate Gradient: more complicated implementation, but usually fast convergence.
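The two sweeps differ only in which values enter the sums; a Python/NumPy sketch (our illustration; the strictly diagonally dominant test matrix is our choice, and it guarantees convergence for both methods):

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep: only old values x^{(k)} on the right-hand side."""
    n = len(b)
    x_new = np.empty(n)
    for i in range(n):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: x[:i] holds already updated values x^{(k+1)}."""
    x = x.copy()
    for i in range(len(b)):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # strictly diagonally dominant
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(50):
    x = gauss_seidel_step(A, b, x)
assert np.allclose(x, np.linalg.solve(A, b))
```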
Inverse Matrix (1)
Inverse A^{-1} of a square n × n matrix A:

\mathbf{A}^{-1}\mathbf{A} = \mathbf{A}\mathbf{A}^{-1} = \mathbf{I}

Relation of the inverse to linear equation systems: \mathbf{A}\mathbf{x} = \mathbf{b} \Leftrightarrow \mathbf{x} = \mathbf{A}^{-1}\mathbf{b}
Calculation of the inverse by the Gauss-Jordan method: n simultaneous linear equation systems,

[\mathbf{A}\mathbf{x}_1 \cdots \mathbf{A}\mathbf{x}_n] = \mathbf{A}\mathbf{X} = \mathbf{I} \;\Leftrightarrow\; [\mathbf{A}\,|\,\mathbf{I}]

- Forward elimination: [\mathbf{A}\,|\,\mathbf{I}] \Rightarrow [\mathbf{U}\,|\,\mathbf{L}^{-1}]
- Backward elimination: [\mathbf{U}\,|\,\mathbf{L}^{-1}] \Rightarrow [\mathbf{I}\,|\,\mathbf{A}^{-1}]
The inverse exists only if AX = I has a unique solution (A nonsingular). Condition: rank{A} = n ⟺ det A ≠ 0.
Properties:
- (\mathbf{A}^{-1})^{-1} = \mathbf{A}
- (\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}
- (\mathbf{A}^H)^{-1} = (\mathbf{A}^{-1})^H
Inverse Matrix (2)
Matrix inversion lemma (A ∈ R^{m×m}, B ∈ R^{m×n}, C ∈ R^{n×n}, D ∈ R^{n×m}):

(\mathbf{A} + \mathbf{B}\mathbf{C}\mathbf{D})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}(\mathbf{C}^{-1} + \mathbf{D}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{D}\mathbf{A}^{-1}
= \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}(\mathbf{I} + \mathbf{C}\mathbf{D}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{C}\mathbf{D}\mathbf{A}^{-1}

Inverse of a block matrix E with A ∈ R^{m×m}, B ∈ R^{m×n}, C ∈ R^{n×m}, D ∈ R^{n×n}:

\mathbf{E} = \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}

With F = A − BD^{-1}C (Schur complement of D w.r.t. E) and G = D − CA^{-1}B (Schur complement of A w.r.t. E):

\mathbf{E}^{-1} = \begin{bmatrix} \mathbf{F}^{-1} & -\mathbf{F}^{-1}\mathbf{B}\mathbf{D}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C}\mathbf{F}^{-1} & \mathbf{D}^{-1} + \mathbf{D}^{-1}\mathbf{C}\mathbf{F}^{-1}\mathbf{B}\mathbf{D}^{-1} \end{bmatrix}
= \begin{bmatrix} \mathbf{A}^{-1} + \mathbf{A}^{-1}\mathbf{B}\mathbf{G}^{-1}\mathbf{C}\mathbf{A}^{-1} & -\mathbf{A}^{-1}\mathbf{B}\mathbf{G}^{-1} \\ -\mathbf{G}^{-1}\mathbf{C}\mathbf{A}^{-1} & \mathbf{G}^{-1} \end{bmatrix}
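The lemma is easy to spot-check numerically; a NumPy sketch on random, deliberately well-conditioned matrices (our illustration, dimensions chosen arbitrarily):

```python
import numpy as np

# Verify (A + BCD)^{-1} = A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}.
rng = np.random.default_rng(2)
m, n = 5, 3
A = rng.standard_normal((m, m)) + m * np.eye(m)   # diagonal boost keeps A invertible
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, n)) + n * np.eye(n)
D = rng.standard_normal((n, m))

inv = np.linalg.inv
lhs = inv(A + B @ C @ D)
rhs = inv(A) - inv(A) @ B @ inv(inv(C) + D @ inv(A) @ B) @ D @ inv(A)
assert np.allclose(lhs, rhs)
```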
LU Decomposition
Every invertible matrix A can (possibly after row exchanges) be written as the product of a lower triangular matrix L and an upper triangular matrix U:

\mathbf{A} = \mathbf{L}\mathbf{U}

Application: solution of the linear equation system Ax = LUx = b with a constant coefficient matrix A for different right-hand sides; inversion of triangular systems is easy: first solve Ly = b, then Ux = y.
Calculation of the LU decomposition by Gaussian elimination: forward elimination [A = LU | I] ⇒ [U | L^{-1}], where L contains the factors from the elimination steps, l_{i,j} = a^{(j-1)}_{i,j} / a^{(j-1)}_{j,j}.
Direct calculation of the LU decomposition (example: 3 × 3 matrix):

\begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ l_{2,1} & 1 & 0 \\ l_{3,1} & l_{3,2} & 1 \end{bmatrix}
\cdot \begin{bmatrix} r_{1,1} & r_{1,2} & r_{1,3} \\ 0 & r_{2,2} & r_{2,3} \\ 0 & 0 & r_{3,3} \end{bmatrix}
= \begin{bmatrix} r_{1,1} & r_{1,2} & r_{1,3} \\ l_{2,1}r_{1,1} & l_{2,1}r_{1,2} + r_{2,2} & l_{2,1}r_{1,3} + r_{2,3} \\ l_{3,1}r_{1,1} & l_{3,1}r_{1,2} + l_{3,2}r_{2,2} & l_{3,1}r_{1,3} + l_{3,2}r_{2,3} + r_{3,3} \end{bmatrix}

Calculation order: r_{1,1} → r_{1,2} → r_{1,3} → l_{2,1} → l_{3,1} → r_{2,2} → r_{2,3} → l_{3,2} → r_{3,3}
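Following the calculation order above (a Doolittle-style sketch in NumPy, our illustration; it assumes no row exchanges are needed):

```python
import numpy as np

# Direct LU computation: unit diagonal in L, rows of U and columns of L alternate.
def lu_decompose(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                       # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for k in range(i + 1, n):                   # column i of L
            L[k, i] = (A[k, i] - L[k, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4.0, 3.0, 1.0], [6.0, 3.0, 2.0], [2.0, 5.0, 7.0]])
L, U = lu_decompose(A)
assert np.allclose(L @ U, A)
assert np.allclose(np.tril(L), L) and np.allclose(np.triu(U), U)
```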
Cholesky Decomposition
Let A be Hermitian (A^H = A) and positive definite (x^H A x > 0 for all x ≠ 0); then A is fully characterized by a lower triangular matrix L.
Cholesky decomposition:

\mathbf{A} = \mathbf{L}\mathbf{L}^H

Similar to the LU decomposition, but the computational complexity is reduced by a factor of 2.
Example: 3 × 3 matrix,

\begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix}
= \begin{bmatrix} |l_{1,1}|^2 & l_{1,1}l^*_{2,1} & l_{1,1}l^*_{3,1} \\ l_{2,1}l^*_{1,1} & |l_{2,1}|^2 + |l_{2,2}|^2 & l_{2,1}l^*_{3,1} + l_{2,2}l^*_{3,2} \\ l_{3,1}l^*_{1,1} & l_{3,1}l^*_{2,1} + l_{3,2}l^*_{2,2} & |l_{3,1}|^2 + |l_{3,2}|^2 + |l_{3,3}|^2 \end{bmatrix}

Calculation order: l_{1,1} → l_{2,1} → l_{3,1} → l_{2,2} → l_{3,2} → l_{3,3}
Algorithm:

A^{(0)} := A
for k := 1 to n do
    l_{k,k} = \sqrt{a^{(k-1)}_{k,k}}
    for i := k + 1 to n do
        l_{i,k} = a^{(k-1)}_{i,k} / l^*_{k,k}
        for j := k + 1 to i do
            a^{(k)}_{i,j} = a^{(k-1)}_{i,j} - l_{i,k} \cdot l^*_{j,k}
        end
    end
end
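A real-valued NumPy sketch of the factorization A = LL^T (our illustration, following the calculation order above; the test matrix is a positive definite example of our choosing):

```python
import numpy as np

# Real Cholesky: compute L column by column; for complex A use conjugates as above.
def cholesky_lower(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = L[i, :j] @ L[j, :j]
            if i == j:
                L[i, j] = np.sqrt(A[i, i] - s)    # diagonal element
            else:
                L[i, j] = (A[i, j] - s) / L[j, j]
    return L

M = np.array([[4.0, 2.0, 1.0], [2.0, 5.0, 3.0], [1.0, 3.0, 6.0]])  # positive definite
L = cholesky_lower(M)
assert np.allclose(L @ L.T, M)
assert np.allclose(L, np.linalg.cholesky(M))
```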
QR Decomposition (1)
Every m × n matrix A (m ≥ n) can be written as A = QR, where
- Q is an m × n matrix with orthonormal columns:
  \mathbf{Q}^H\mathbf{Q} = \mathbf{I} \;\Leftrightarrow\; \mathbf{q}_i^H\mathbf{q}_j = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}
- R is an upper triangular n × n matrix
The columns of A are represented in the orthonormal basis defined by Q:

\mathbf{a}_k = \sum_{i=1}^{k} r_{i,k}\,\mathbf{q}_i

Illustration for the m × 2 case:

\begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 \end{bmatrix}
= \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 \end{bmatrix} \cdot \begin{bmatrix} r_{1,1} & r_{1,2} \\ 0 & r_{2,2} \end{bmatrix}
= \begin{bmatrix} r_{1,1}\mathbf{q}_1 & r_{1,2}\mathbf{q}_1 + r_{2,2}\mathbf{q}_2 \end{bmatrix}

so a_1 = r_{1,1}q_1 and a_2 = r_{1,2}q_1 + r_{2,2}q_2.
QR Decomposition (2)
Calculation of the QR decomposition by the modified Gram-Schmidt algorithm:
- Calculate the length (Euclidean norm) of a_1: r_{1,1} = ‖a_1‖
- Normalize a_1 to unit length: q_1 = a_1 / r_{1,1}
- Project a_2, ..., a_n onto q_1: r_{1,j} = q_1^H a_j
- Subtract the components of a_2, ..., a_n parallel to q_1: a_j ← a_j − r_{1,j}q_1
- Continue with the next column
Q is computed column by column from left to right; R is computed row by row from top to bottom.

for k := 1 to n do
    r_{k,k} = ‖a_k‖
    q_k = a_k / r_{k,k}
    for i := k + 1 to n do
        r_{k,i} = q_k^H a_i
        a_i = a_i − r_{k,i}q_k
    end
end

Illustration for the m × 2 case: a_1 = r_{1,1}q_1 and a_2 = r_{1,2}q_1 + r_{2,2}q_2, as before.
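The loop above translates almost line by line into NumPy (a real-valued sketch, our illustration; for complex A replace the transpose by the conjugate transpose):

```python
import numpy as np

# Modified Gram-Schmidt QR: Q gets orthonormal columns, R is upper triangular.
def qr_mgs(A):
    A = np.asarray(A, dtype=float).copy()
    m, n = A.shape
    Q, R = np.zeros((m, n)), np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for i in range(k + 1, n):
            R[k, i] = Q[:, k] @ A[:, i]     # project remaining columns onto q_k
            A[:, i] -= R[k, i] * Q[:, k]    # subtract the parallel component
    return Q, R

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
Q, R = qr_mgs(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(3))      # orthonormal columns
```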
QR Decomposition (3)
Householder reflection for real-valued signals: the reflection of a vector x across the hyperplane whose unit normal vector is u (‖u‖ = \sqrt{\mathbf{u}^T\mathbf{u}} = 1) is achieved by the orthonormal matrix

\mathbf{\Theta} = \mathbf{I} - 2\cdot\mathbf{u}\mathbf{u}^T

The reflected vector, with ‖y‖ = ‖x‖, is

\mathbf{y} = \mathbf{\Theta}\mathbf{x} = \mathbf{x} - 2\cdot\mathbf{u}\underbrace{\mathbf{u}^T\mathbf{x}}_{\alpha} = \mathbf{x} - 2\alpha\cdot\mathbf{u}

The Householder matrix is symmetric (Θ = Θ^T) and orthogonal (Θ^{-1} = Θ^T).
Reflection into a specific direction: the reflected vector should contain only one non-vanishing element, i.e. the reflection creates n − 1 zeros:

\mathbf{y} = \begin{bmatrix} \|\mathbf{x}\| \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad \mathbf{u} = \frac{\mathbf{x} - \mathbf{y}}{\|\mathbf{x} - \mathbf{y}\|}

Application with respect to a matrix A:

\mathbf{\Theta}\mathbf{A} = \mathbf{\Theta}\cdot\begin{bmatrix} * & * & * \\ * & * & * \\ * & * & * \end{bmatrix} = \begin{bmatrix} \|\mathbf{a}_1\| & * & * \\ 0 & * & * \\ 0 & * & * \end{bmatrix}
QR Decomposition (4)
Householder reflections for complex-valued signals:

\mathbf{\Theta} = \mathbf{I} - (1 + w)\cdot\mathbf{u}\mathbf{u}^H \quad \text{with} \quad \mathbf{u} = \frac{\mathbf{x} - \mathbf{y}}{\|\mathbf{x} - \mathbf{y}\|} \quad \text{and} \quad w = \frac{\mathbf{x}^H\mathbf{u}}{\mathbf{u}^H\mathbf{x}}

Special case, create zeros in a vector: \mathbf{y} = [\|\mathbf{x}\|\; 0 \cdots 0]^T
Application to the QR decomposition of an m × n matrix A:

R := A, Q := I_m   (initialization)
for k := 1 to n do   (loop through all columns)
    x = R(k:m, k), y = [‖x‖ 0 ⋯ 0]^T
    calculate u, w, Θ
    R(k:m, k:n) = Θ · R(k:m, k:n)   (create zeros below the main diagonal in the k-th column of R)
    Q(:, k:m) = Q(:, k:m) · Θ^H     (update the unitary matrix Q)
end
QR Decomposition (5)
QR decomposition of a 3 × 3 matrix A.
Step 0, initialization of Q and R:

\mathbf{A} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} a^{(0)}_{1,1} & a^{(0)}_{1,2} & a^{(0)}_{1,3} \\ a^{(0)}_{2,1} & a^{(0)}_{2,2} & a^{(0)}_{2,3} \\ a^{(0)}_{3,1} & a^{(0)}_{3,2} & a^{(0)}_{3,3} \end{bmatrix} = \mathbf{Q}_0 \cdot \mathbf{R}_0

Step 1, create zeros in the first column of R:

\mathbf{Q}_0\mathbf{\Theta}_1^H \cdot \mathbf{\Theta}_1\mathbf{R}_0 = \begin{bmatrix} * & * & * \\ * & * & * \\ * & * & * \end{bmatrix} \cdot \begin{bmatrix} a^{(1)}_{1,1} & a^{(1)}_{1,2} & a^{(1)}_{1,3} \\ 0 & a^{(1)}_{2,2} & a^{(1)}_{2,3} \\ 0 & a^{(1)}_{3,2} & a^{(1)}_{3,3} \end{bmatrix} = \mathbf{Q}_1 \cdot \mathbf{R}_1

Step 2, create zeros in the second column of R:

\mathbf{Q}_1 \cdot \begin{bmatrix} 1 & \mathbf{0} \\ \mathbf{0} & \mathbf{\Theta}_2^H \end{bmatrix} \cdot \begin{bmatrix} 1 & \mathbf{0} \\ \mathbf{0} & \mathbf{\Theta}_2 \end{bmatrix} \cdot \mathbf{R}_1 = \begin{bmatrix} * & * & * \\ * & * & * \\ * & * & * \end{bmatrix} \cdot \begin{bmatrix} a^{(1)}_{1,1} & a^{(2)}_{1,2} & a^{(2)}_{1,3} \\ 0 & a^{(2)}_{2,2} & a^{(2)}_{2,3} \\ 0 & 0 & a^{(2)}_{3,3} \end{bmatrix} = \mathbf{Q}_2 \cdot \mathbf{R}_2

Step 3, create a real-valued lower right element in R:

\mathbf{Q}_2 \cdot \begin{bmatrix} \mathbf{I}_2 & \mathbf{0} \\ \mathbf{0} & \mathbf{\Theta}_3^H \end{bmatrix} \cdot \begin{bmatrix} \mathbf{I}_2 & \mathbf{0} \\ \mathbf{0} & \mathbf{\Theta}_3 \end{bmatrix} \cdot \mathbf{R}_2 = \begin{bmatrix} * & * & * \\ * & * & * \\ * & * & * \end{bmatrix} \cdot \begin{bmatrix} a^{(1)}_{1,1} & a^{(2)}_{1,2} & a^{(3)}_{1,3} \\ 0 & a^{(2)}_{2,2} & a^{(3)}_{2,3} \\ 0 & 0 & a^{(3)}_{3,3} \end{bmatrix} = \mathbf{Q}_3 \cdot \mathbf{R}_3
QR Decomposition (6)
Givens rotations: let G(i, k, θ) equal an identity matrix except for the four elements

g^*_{i,i} = g_{k,k} = \cos\theta = c, \qquad -g^*_{i,k} = g_{k,i} = \sin\theta = s

G(i, k, θ) is unitary and describes a rotation.
Special choices for c and s:

c = \frac{x_i}{\sqrt{|x_i|^2 + |x_k|^2}} \quad \text{and} \quad s = \frac{-x_k}{\sqrt{|x_i|^2 + |x_k|^2}}

Linear transformation: a Givens rotation can create a zero while changing only one other element,

\mathbf{y} = \mathbf{G}(i, k, \theta)\cdot\mathbf{x} \;\Rightarrow\; y_i = \sqrt{|x_i|^2 + |x_k|^2}, \quad y_k = 0, \quad y_j = x_j \;\forall j \neq i, k

Example: successive rotations bring A to upper triangular form,

\begin{bmatrix} * & * & * \\ * & * & * \\ * & * & * \end{bmatrix}
\xrightarrow{\mathbf{G}(2,3,\theta_1)} \begin{bmatrix} * & * & * \\ * & * & * \\ 0 & * & * \end{bmatrix}
\xrightarrow{\mathbf{G}(1,2,\theta_2)} \begin{bmatrix} * & * & * \\ 0 & * & * \\ 0 & * & * \end{bmatrix}
\xrightarrow{\mathbf{G}(2,3,\theta_3)} \begin{bmatrix} * & * & * \\ 0 & * & * \\ 0 & 0 & * \end{bmatrix} = \mathbf{R}

\Rightarrow \mathbf{R} = \mathbf{G}(2,3,\theta_3)\mathbf{G}(1,2,\theta_2)\mathbf{G}(2,3,\theta_1)\mathbf{A}, \qquad \mathbf{Q} = \mathbf{G}(2,3,\theta_1)^H\mathbf{G}(1,2,\theta_2)^H\mathbf{G}(2,3,\theta_3)^H
Example for Givens Rotation
Application of the rotation matrix to a vector x ∈ R^4, here zeroing x_4 against x_2:

\mathbf{y} = \mathbf{G}(2, 4, \theta)\cdot\mathbf{x}
= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & c^* & 0 & -s^* \\ 0 & 0 & 1 & 0 \\ 0 & s & 0 & c \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
= \begin{bmatrix} x_1 \\ c^*x_2 - s^*x_4 \\ x_3 \\ sx_2 + cx_4 \end{bmatrix}
= \begin{bmatrix} x_1 \\ \sqrt{|x_2|^2 + |x_4|^2} \\ x_3 \\ 0 \end{bmatrix}

with c = x_2/\sqrt{|x_2|^2 + |x_4|^2} and s = -x_4/\sqrt{|x_2|^2 + |x_4|^2}, since

c^*x_2 - s^*x_4 = \frac{x_2^*x_2}{\sqrt{|x_2|^2 + |x_4|^2}} - \frac{-x_4^*x_4}{\sqrt{|x_2|^2 + |x_4|^2}} = \frac{|x_2|^2 + |x_4|^2}{\sqrt{|x_2|^2 + |x_4|^2}} = \sqrt{|x_2|^2 + |x_4|^2}

sx_2 + cx_4 = \frac{-x_4x_2}{\sqrt{|x_2|^2 + |x_4|^2}} + \frac{x_2x_4}{\sqrt{|x_2|^2 + |x_4|^2}} = 0
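A real-valued NumPy sketch of the same operation (our illustration): applying G(i, k, θ) zeroes x_k, changes only x_i, and preserves the length.

```python
import numpy as np

# Apply a real Givens rotation to zero x[k] against x[i].
def givens_apply(x, i, k):
    x = np.asarray(x, dtype=float).copy()
    r = np.hypot(x[i], x[k])           # sqrt(x_i^2 + x_k^2)
    c, s = x[i] / r, -x[k] / r
    x[i], x[k] = c * x[i] - s * x[k], s * x[i] + c * x[k]
    return x

x = np.array([1.0, 3.0, 2.0, 4.0])
y = givens_apply(x, 1, 3)
assert np.isclose(y[1], 5.0) and np.isclose(y[3], 0.0)    # hypot(3, 4) = 5
assert np.isclose(y[0], 1.0) and np.isclose(y[2], 2.0)    # other elements untouched
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))   # rotation preserves length
```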
Eigenvalues and Eigenvectors (1)
Special eigenvalue problem for arbitrary n × n matrices:

\mathbf{A}\mathbf{x} = \lambda\mathbf{x} \;\Leftrightarrow\; (\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}

Condition for the existence of nontrivial solutions x ≠ 0: the characteristic polynomial of degree n has to be zero,

p_A(\lambda) = \det(\mathbf{A} - \lambda\mathbf{I}) = (\lambda - \lambda_1)^{k_1} \cdot \ldots \cdot (\lambda - \lambda_l)^{k_l} = 0

The zeros λ_i of the polynomial are the eigenvalues of A with algebraic multiplicity k_i.
Eigenvectors: solve the linear equation systems (A − λ_iI)x_i = 0 for all eigenvalues λ_i. The dimension of the solution space is called the geometric multiplicity g_i (1 ≤ g_i ≤ k_i). Eigenvectors belonging to different eigenvalues are linearly independent.
Diagonalization of a matrix A: define the matrix X = [x_1 ⋯ x_n] and the diagonal matrix Λ = diag(λ_1, ..., λ_n); then

\mathbf{A}\mathbf{X} = \mathbf{X}\mathbf{\Lambda} \;\Rightarrow\; \mathbf{X}^{-1}\mathbf{A}\mathbf{X} = \mathbf{\Lambda}

This is only possible for n linearly independent eigenvectors.
Eigenvalues and Eigenvectors (2)
Some useful general properties (λ_i, x_i eigenvalue and eigenvector of A):
- A^T → λ_i; A^H → λ_i^*
- αA → αλ_i, x_i; A^m → λ_i^m, x_i; A + βI → λ_i + β, x_i; X^{-1}AX → λ_i, X^{-1}x_i
- \det \mathbf{A} = \prod_{i=1}^{n} \lambda_i; \mathrm{trace}\,\mathbf{A} = \sum_{i=1}^{n} \lambda_i
- A invertible ⟺ all λ_i ≠ 0; A positive definite ⟺ all λ_i > 0
Properties for Hermitian matrices, i.e. A^H = A:
- All eigenvalues are real
- Eigenvectors belonging to different eigenvalues are orthogonal
- Algebraic and geometric multiplicities are identical
- Consequence: all eigenvectors can be chosen to be mutually orthogonal
- A Hermitian matrix A can be diagonalized by a unitary matrix V (eigenvalue decomposition):

\mathbf{V}^H\mathbf{A}\mathbf{V} = \mathbf{\Lambda} \;\Leftrightarrow\; \mathbf{A} = \mathbf{V}\mathbf{\Lambda}\mathbf{V}^H
Singular Value Decomposition (SVD) (1)
Every m × n matrix A of rank r can be written as

\mathbf{A} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^H = \mathbf{U} \begin{bmatrix} \mathbf{\Sigma}_0 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{V}^H = \sum_{i=1}^{r} \sigma_i\,\mathbf{u}_i\mathbf{v}_i^H
\quad \text{with the matrix of singular values } \mathbf{\Sigma}_0 = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)

- Singular values σ_i of A = square roots of the nonzero eigenvalues of A^H A or AA^H
- The unitary m × m matrix U contains the left singular vectors of A = eigenvectors of AA^H
- The unitary n × n matrix V contains the right singular vectors of A = eigenvectors of A^H A
Verification with the eigenvalue decomposition:

\mathbf{A}^H\mathbf{A} = \mathbf{V}\mathbf{\Sigma}^H\mathbf{U}^H\mathbf{U}\mathbf{\Sigma}\mathbf{V}^H = \mathbf{V} \begin{bmatrix} \mathbf{\Sigma}_0^2 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{V}^H,
\qquad
\mathbf{A}\mathbf{A}^H = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^H\mathbf{V}\mathbf{\Sigma}^H\mathbf{U}^H = \mathbf{U} \begin{bmatrix} \mathbf{\Sigma}_0^2 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{U}^H

Four fundamental subspaces:
- u_1, ..., u_r span the column space of A
- u_{r+1}, ..., u_m span the left nullspace of A
- v_1, ..., v_r span the row space of A
- v_{r+1}, ..., v_n span the (right) nullspace of A
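The dyadic-sum form and the eigenvalue connection can be checked numerically; a NumPy sketch on a rank-deficient matrix of our own construction:

```python
import numpy as np

# Build a 5 x 4 matrix of rank 2 and verify A = sum_i sigma_i u_i v_i^H
# and sigma_i^2 = eigenvalues of A^H A.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank 2 by construction

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
assert r == 2

A_sum = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(r))
assert np.allclose(A, A_sum)

eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]               # descending
assert np.allclose(s[:r]**2, eigs[:r])
```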
Singular Value Decomposition (SVD) (2)
Illustration of the fundamental subspaces: consider the linear mapping x → Ax with the orthogonal decomposition x = x_r + x_n, where x_r lies in the row space and x_n in the nullspace of A. Then Ax_n = 0 and Ax = Ax_r.
(Figure: the nullspace component x_n is mapped to zero, so x and x_r are mapped to the same point Ax_r.)
Pseudo Inverse and Least Squares Solution (1)
The inverse A^{-1} exists only for square matrices with full rank. Generalization: the (Moore-Penrose) pseudo inverse A^+,

\mathbf{A} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^H = \mathbf{U} \begin{bmatrix} \mathbf{\Sigma}_0 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{V}^H
\;\Rightarrow\;
\mathbf{A}^+ = \mathbf{V}\mathbf{\Sigma}^+\mathbf{U}^H = \mathbf{V} \begin{bmatrix} \mathbf{\Sigma}_0^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{U}^H

For a square nonsingular matrix, A^+ = A^{-1} and A^+A = AA^+ = I. Special cases for full-rank matrices:

\mathbf{A}^+ = \begin{cases} \mathbf{A}^H(\mathbf{A}\mathbf{A}^H)^{-1} & \text{for } \mathrm{rank}\{\mathbf{A}\} = m \\ (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H & \text{for } \mathrm{rank}\{\mathbf{A}\} = n \end{cases}

Application: least squares solution of a linear equation system.
- Problem: find the vector x that minimizes the Euclidean distance between Ax and b, \min_x \|\mathbf{A}\mathbf{x} - \mathbf{b}\|
- Solution: project b onto the column space span{A} (projection b_c, error e = b − b_c) and solve Ax = b_c, giving

\mathbf{x} = \mathbf{A}^+\mathbf{b}

- If no unique solution exists, take the solution vector with the shortest length.
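For an overdetermined full-column-rank system, the pseudo inverse and the least squares solution coincide with NumPy's built-ins; a sketch on random data (our illustration):

```python
import numpy as np

# x = A^+ b with A^+ = (A^H A)^{-1} A^H for rank{A} = n (real case shown).
rng = np.random.default_rng(5)
A = rng.standard_normal((6, 3))        # m > n, full column rank almost surely
b = rng.standard_normal(6)

A_pinv = np.linalg.inv(A.T @ A) @ A.T  # full-column-rank formula
x = A_pinv @ b

assert np.allclose(A_pinv, np.linalg.pinv(A))
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
assert np.allclose(A.T @ (b - A @ x), 0)   # error e = b - Ax is orthogonal to span{A}
```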
Pseudo Inverse and Least Squares Solution (2)
Illustration of the least squares solution of a linear equation system: x = A^+b yields b_c = Ax, the projection of b onto span{A}; the component b_n of b orthogonal to span{A} is mapped to zero, A^+b_n = 0.
Condition Number
The condition number is an indicator of the "orthogonality" of a matrix A:

\mathrm{cond}(\mathbf{A}) = \|\mathbf{A}\|_2 \cdot \|\mathbf{A}^{-1}\|_2 = \frac{\sigma_{\max}(\mathbf{A})}{\sigma_{\min}(\mathbf{A})}, \qquad \mathrm{cond}(\mathbf{A}) \geq 1, \quad \mathrm{cond}(\mathbf{A}) = 1 \text{ for a unitary matrix}

The solution of the linear equation system b = Ax is given by x = A^{-1}b. The condition number cond(A) describes the impact of an error δb in the observation data b (e.g. measurement errors, noise, ...) on the solution:

\mathbf{x} + \delta\mathbf{x} = \mathbf{A}^{-1}(\mathbf{b} + \delta\mathbf{b}) \;\Rightarrow\; \delta\mathbf{x} = \mathbf{A}^{-1}\cdot\delta\mathbf{b}

Using the two estimates

\|\delta\mathbf{x}\| \leq \|\mathbf{A}^{-1}\|_2 \cdot \|\delta\mathbf{b}\| = \frac{\|\delta\mathbf{b}\|}{\sigma_{\min}(\mathbf{A})}
\quad \text{and} \quad
\|\mathbf{b}\| \leq \|\mathbf{A}\|_2 \cdot \|\mathbf{x}\| = \sigma_{\max}(\mathbf{A})\,\|\mathbf{x}\|

the following relation for the relative error is obtained:

\frac{\|\delta\mathbf{x}\|}{\|\mathbf{x}\|} \leq \frac{\sigma_{\max}(\mathbf{A})}{\sigma_{\min}(\mathbf{A})} \cdot \frac{\|\delta\mathbf{b}\|}{\|\mathbf{b}\|} = \mathrm{cond}(\mathbf{A}) \cdot \frac{\|\delta\mathbf{b}\|}{\|\mathbf{b}\|}

Example: cond(A) = 100 and ‖δb‖/‖b‖ = 0.1% give ‖δx‖/‖x‖ ≤ 100 · 0.001 = 10%.
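The error-amplification bound can be observed numerically; a NumPy sketch using the classical ill-conditioned Hilbert matrix (our choice of example, not from the slides):

```python
import numpy as np

# cond(A) = sigma_max / sigma_min, and the relative-error bound for b = Ax.
n = 4
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)   # 4 x 4 Hilbert matrix
s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]
assert np.isclose(cond, np.linalg.cond(A))

x = np.ones(n)
b = A @ x
db = 1e-8 * np.linalg.norm(b) * np.eye(n)[0]     # small perturbation of b
dx = np.linalg.solve(A, b + db) - x
rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
assert rel_out <= cond * rel_in * (1 + 1e-6)     # bound from the slide holds
```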
Selected Literature
Online:
- Gutknecht: Lineare Algebra
- Hefferon: Elementary Linear Algebra
- Matthews: Elementary Linear Algebra
- Wedderburn: Lectures on Matrices
- The Matrix Cookbook
Printed:
- B. Bradie: A Friendly Introduction to Numerical Analysis, Pearson, 2006
- G. Strang: Linear Algebra and its Applications, Harcourt, 1988
- Johnson, Riess, Arnold: Introduction to Linear Algebra, Addison Wesley, 2002
- K. Hardy: Linear Algebra for Engineers and Scientists Using Matlab, Pearson, 2005