

MATH 110: LINEAR ALGEBRA PRACTICE FINAL SOLUTIONS

Question 1.

(1) Write $f(x) = a_m x^m + a_{m-1}x^{m-1} + \cdots + a_0$. Then $F = f(T) = a_m T^m + a_{m-1}T^{m-1} + \cdots + a_0 I$. Since $T$ is upper triangular, $(T^k)_{ii} = T_{ii}^k$ for all positive $k$ and $i \in \{1, \dots, n\}$. Hence $F_{ii} = a_m T_{ii}^m + a_{m-1}T_{ii}^{m-1} + \cdots + a_0 = f(T_{ii})$.

(2) $TF = Tf(T) = T(a_m T^m + a_{m-1}T^{m-1} + \cdots + a_0 I) = a_m T^{m+1} + a_{m-1}T^m + \cdots + a_0 T = (a_m T^m + a_{m-1}T^{m-1} + \cdots + a_0 I)T = f(T)T = FT$.

(3) Since $T$ is upper triangular, so is each power of $T$, and hence so is $F = f(T)$. Therefore $T_{ij} = 0$ and $F_{ij} = 0$ whenever $i > j$. Hence
$$(FT)_{i,i+1} = \sum_{k=1}^{n} F_{ik}T_{k,i+1} = \sum_{k=i}^{i+1} F_{ik}T_{k,i+1} = F_{ii}T_{i,i+1} + F_{i,i+1}T_{i+1,i+1}$$
and
$$(TF)_{i,i+1} = \sum_{k=1}^{n} T_{ik}F_{k,i+1} = \sum_{k=i}^{i+1} T_{ik}F_{k,i+1} = T_{ii}F_{i,i+1} + T_{i,i+1}F_{i+1,i+1}.$$
Then $(FT)_{i,i+1} = (TF)_{i,i+1}$ implies that
$$F_{i,i+1}(T_{ii} - T_{i+1,i+1}) = T_{i,i+1}F_{i+1,i+1} - F_{ii}T_{i,i+1}.$$
But, by hypothesis, $T_{ii} - T_{i+1,i+1} \neq 0$, so
$$F_{i,i+1} = (T_{i,i+1}F_{i+1,i+1} - F_{ii}T_{i,i+1}) / (T_{ii} - T_{i+1,i+1}).$$

(4) As before, $T_{ij} = 0$ and $F_{ij} = 0$ whenever $i > j$. So we have
$$(FT)_{i,i+k} = \sum_{j=1}^{n} F_{ij}T_{j,i+k} = \sum_{j=i}^{i+k} F_{ij}T_{j,i+k} = F_{i,i+k}T_{i+k,i+k} + \sum_{j=i}^{i+k-1} F_{ij}T_{j,i+k}$$
and
$$(TF)_{i,i+k} = \sum_{j=1}^{n} T_{ij}F_{j,i+k} = \sum_{j=i}^{i+k} T_{ij}F_{j,i+k} = T_{ii}F_{i,i+k} + \sum_{j=i+1}^{i+k} T_{ij}F_{j,i+k}.$$
Equating the two expressions, we obtain
$$F_{i,i+k}(T_{ii} - T_{i+k,i+k}) = \sum_{j=i}^{i+k-1} F_{ij}T_{j,i+k} - \sum_{j=i+1}^{i+k} T_{ij}F_{j,i+k}.$$
Again, by hypothesis, $T_{ii} - T_{i+k,i+k} \neq 0$, so
$$F_{i,i+k} = \left( \sum_{j=i}^{i+k-1} F_{ij}T_{j,i+k} - \sum_{j=i+1}^{i+k} T_{ij}F_{j,i+k} \right) / (T_{ii} - T_{i+k,i+k}).$$


(5) The above calculations work equally well for power series that converge at all eigenvalues of $T$, in place of polynomials. So, noting that $\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!}$, we can apply the above considerations. Let us set
$$T = \begin{pmatrix} \pi/4 & 7 \\ 0 & -\pi/4 \end{pmatrix}$$
and $F = \cos(T)$. By the above work, $F_{11} = \cos(\pi/4) = \sqrt{2}/2$, $F_{22} = \cos(-\pi/4) = \sqrt{2}/2$, $F_{21} = 0$, and
$$F_{12} = (T_{12}F_{22} - F_{11}T_{12}) / (T_{11} - T_{22}) = \frac{7 \cdot \frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2} \cdot 7}{\frac{\pi}{4} + \frac{\pi}{4}} = 0.$$
So
$$F = \begin{pmatrix} \sqrt{2}/2 & 0 \\ 0 & \sqrt{2}/2 \end{pmatrix}.$$

Question 2. The first two parts of this problem only ask you to show that $AB$ and $BA$ have (almost) the same eigenvalues, but don't actually demand you show their characteristic polynomials (almost) agree. Showing the latter condition is stronger than showing the former, and the hints given actually lead to this stronger result. However, there is a simpler method to show the weaker condition, and that is included after the full solution.

(1) Without loss of generality, we may assume that $A$ is nonsingular and hence invertible. So $A^{-1}(AB)A = BA$. Since $AB$ and $BA$ are similar, they have the same eigenvalues.

(2) Let us compute the characteristic polynomials of $AB$ and $BA$. We'll be working in the field $F(x)$, consisting of fractions of polynomials in $x$ with coefficients in $F$.
$$\det(xI_n - AB) = \det\begin{pmatrix} I_n & B \\ 0 & xI_n - AB \end{pmatrix} = \det\left(\begin{pmatrix} I_n & 0 \\ -A & I_n \end{pmatrix}\begin{pmatrix} I_n & B \\ A & xI_n \end{pmatrix}\right)$$
$$= \det\begin{pmatrix} I_n & 0 \\ -A & I_n \end{pmatrix}\det\begin{pmatrix} I_n & B \\ A & xI_n \end{pmatrix} = 1 \cdot \det\begin{pmatrix} I_n & B \\ A & xI_n \end{pmatrix}$$
$$= x^n \det\begin{pmatrix} I_n & B \\ A & xI_n \end{pmatrix} x^{-n} = \det\begin{pmatrix} xI_n & 0 \\ 0 & I_n \end{pmatrix}\det\begin{pmatrix} I_n & B \\ A & xI_n \end{pmatrix}\det\begin{pmatrix} I_n & 0 \\ 0 & x^{-1}I_n \end{pmatrix}$$
$$= \det\left(\begin{pmatrix} xI_n & 0 \\ 0 & I_n \end{pmatrix}\begin{pmatrix} I_n & B \\ A & xI_n \end{pmatrix}\begin{pmatrix} I_n & 0 \\ 0 & x^{-1}I_n \end{pmatrix}\right) = \det\begin{pmatrix} xI_n & B \\ A & I_n \end{pmatrix}$$
$$= \det\begin{pmatrix} xI_n & B \\ A & I_n \end{pmatrix} \cdot 1 = \det\begin{pmatrix} xI_n & B \\ A & I_n \end{pmatrix}\det\begin{pmatrix} I_n & 0 \\ -A & I_n \end{pmatrix} = \det\left(\begin{pmatrix} xI_n & B \\ A & I_n \end{pmatrix}\begin{pmatrix} I_n & 0 \\ -A & I_n \end{pmatrix}\right)$$
$$= \det\begin{pmatrix} xI_n - BA & B \\ 0 & I_n \end{pmatrix} = \det(xI_n - BA).$$
Hence $AB$ and $BA$ have the same characteristic polynomial.

(3) Again, we'll be working over $F(x)$.

$$\det(xI_m - AB) = \det\begin{pmatrix} I_n & B \\ 0 & xI_m - AB \end{pmatrix} = \det\left(\begin{pmatrix} I_n & 0 \\ -A & I_m \end{pmatrix}\begin{pmatrix} I_n & B \\ A & xI_m \end{pmatrix}\right)$$
$$= \det\begin{pmatrix} I_n & 0 \\ -A & I_m \end{pmatrix}\det\begin{pmatrix} I_n & B \\ A & xI_m \end{pmatrix} = 1 \cdot \det\begin{pmatrix} I_n & B \\ A & xI_m \end{pmatrix}$$
$$= x^{m-n} \, x^n \det\begin{pmatrix} I_n & B \\ A & xI_m \end{pmatrix} x^{-m} = x^{m-n}\det\begin{pmatrix} xI_n & 0 \\ 0 & I_m \end{pmatrix}\det\begin{pmatrix} I_n & B \\ A & xI_m \end{pmatrix}\det\begin{pmatrix} I_n & 0 \\ 0 & x^{-1}I_m \end{pmatrix}$$
$$= x^{m-n}\det\left(\begin{pmatrix} xI_n & 0 \\ 0 & I_m \end{pmatrix}\begin{pmatrix} I_n & B \\ A & xI_m \end{pmatrix}\begin{pmatrix} I_n & 0 \\ 0 & x^{-1}I_m \end{pmatrix}\right) = x^{m-n}\det\begin{pmatrix} xI_n & B \\ A & I_m \end{pmatrix}$$
$$= x^{m-n}\det\begin{pmatrix} xI_n & B \\ A & I_m \end{pmatrix}\det\begin{pmatrix} I_n & 0 \\ -A & I_m \end{pmatrix} = x^{m-n}\det\left(\begin{pmatrix} xI_n & B \\ A & I_m \end{pmatrix}\begin{pmatrix} I_n & 0 \\ -A & I_m \end{pmatrix}\right)$$
$$= x^{m-n}\det\begin{pmatrix} xI_n - BA & B \\ 0 & I_m \end{pmatrix} = x^{m-n}\det(xI_n - BA).$$
Hence $\det(xI_m - AB) = x^{m-n}\det(xI_n - BA)$.

Here is a simpler method which just demonstrates that $AB$ and $BA$ have the same eigenvalues, except possibly for $0$ in the $m \neq n$ case.

Suppose $(\lambda, v)$ is an eigenpair for $AB$. Then $BA(Bv) = B(ABv) = B(\lambda v) = \lambda Bv$. As long as $Bv \neq 0$, we have that $(\lambda, Bv)$ is an eigenpair for $BA$. But if $\lambda \neq 0$, then $ABv = \lambda v \neq 0$, so $Bv \neq 0$. This shows that, without any restrictions on $m$ and $n$, $AB$ and $BA$ have the same non-zero eigenvalues (the eigenvalues go back from $BA$ to $AB$ by symmetry).

Now suppose $m = n$ and $0$ is an eigenvalue of $AB$. Then either $A$ or $B$ is singular, so $0$ is also an eigenvalue of $BA$.

Now suppose $m > n$. Then, as $B$ is $n \times m$, $B$ has non-trivial nullspace, so $0$ must be an eigenvalue of $AB$. But $0$ may or may not be an eigenvalue of $BA$. You may want to think of an example where it is not.
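The part (3) identity can also be watched in action on a tiny concrete example. The matrices below are hypothetical, chosen only for illustration: $A$ is $1 \times 2$ and $B$ is $2 \times 1$, so $m = 1$, $n = 2$, and the claim reads $\det(xI_2 - BA) = x \det(xI_1 - AB)$.

```python
# Hypothetical small matrices for illustrating det(xI_m - AB) = x^(m-n) det(xI_n - BA).
A = [[1, 2]]          # 1 x 2
B = [[3], [4]]        # 2 x 1

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

AB = matmul(A, B)     # 1 x 1: [[11]]
BA = matmul(B, A)     # 2 x 2: [[3, 6], [4, 8]]

# Characteristic polynomial of a 1x1 matrix [[a]]: x - a, coefficients [1, -a].
p_AB = [1, -AB[0][0]]
# Characteristic polynomial of a 2x2 matrix: x^2 - tr(M) x + det(M).
tr = BA[0][0] + BA[1][1]
det = BA[0][0] * BA[1][1] - BA[0][1] * BA[1][0]
p_BA = [1, -tr, det]

# Multiplying p_AB by x just appends a zero coefficient.
print(p_AB + [0] == p_BA)  # True
```

Here $BA$ is singular (rank 1), so $0$ is an eigenvalue of $BA$ but not of $AB = (11)$, matching the "$x^{m-n}$" discrepancy.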

Question 3.

(1) Let $a_i$ denote the $i$th column of $A$ and $q_i$ the $i$th column of $Q$. Using the algorithm for computing QR decompositions given in class, we obtain $R_{11} = \|a_1\| = 2$ and $q_1 = a_1/R_{11} = \frac{1}{2}(1, 1, 1, 1)^t$. Also, $R_{12} = \langle a_2, q_1 \rangle = 4$, $R_{13} = \langle a_3, q_1 \rangle = 6$, $R_{22} = \|a_2 - q_1 R_{12}\| = \|(3, -3, 3, -3)^t\| = 6$, and $q_2 = (3, -3, 3, -3)^t / R_{22} = \frac{1}{2}(1, -1, 1, -1)^t$. Continuing in this fashion, one obtains
$$Q = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \\ 1 & -1 & -1 \end{pmatrix} \quad \text{and} \quad R = \begin{pmatrix} 2 & 4 & 6 \\ 0 & 6 & 4 \\ 0 & 0 & 6 \end{pmatrix}.$$

(2) By the computation done in class, $x = R^{-1}Q^t y = (-26, -23, 21)^t$.
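The matrix $A$ itself is not reproduced in this excerpt, so the sketch below first rebuilds it as $A = QR$ from the factors above, then re-derives $Q$ and $R$ by Gram-Schmidt. The implementation is our own minimal one and assumes the in-class algorithm is classical Gram-Schmidt.

```python
import math

# Columns of A reconstructed as A = QR from the factors in the solution
# (a1 = 2*q1, a2 = 4*q1 + 6*q2, a3 = 6*q1 + 4*q2 + 6*q3).
A_cols = [
    [1.0, 1.0, 1.0, 1.0],
    [5.0, -1.0, 5.0, -1.0],
    [8.0, 4.0, 2.0, -2.0],
]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def qr_gram_schmidt(cols):
    """Classical Gram-Schmidt: returns the columns of Q and the matrix R."""
    n = len(cols)
    q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        v = list(a)
        for i, q in enumerate(q_cols):
            R[i][j] = dot(a, q)                  # projection coefficient <a_j, q_i>
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = math.sqrt(dot(v, v))           # norm of the residual
        q_cols.append([vk / R[j][j] for vk in v])
    return q_cols, R

q_cols, R = qr_gram_schmidt(A_cols)
print([[round(x, 6) for x in row] for row in R])
# [[2.0, 4.0, 6.0], [0.0, 6.0, 4.0], [0.0, 0.0, 6.0]]
```

The recovered $q_1, q_2, q_3$ are the columns of the $Q$ above, and $R$ matches entry by entry.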

Question 4.

(1) Multiplying $X$ by $P$ on the left interchanges rows $i$ and $n+1-i$ of $X$ (for each $i \in \{1, 2, \dots, n\}$). So $PX = (X_{n+1-i,j})$. Multiplying $PX$ by $P$ on the right interchanges columns $j$ and $n+1-j$ of $PX$ (for each $j \in \{1, 2, \dots, n\}$). So $PXP = (X_{n+1-i,n+1-j})$.

(2) Let $J$ be a Jordan block, with $\lambda$ as its eigenvalue. For each $i \in \{1, 2, \dots, n\}$, $J_{ii} = \lambda$, and hence $\lambda = J_{n+1-i,n+1-i} = (PJP)_{ii}$, by the above formula. Similarly, each $J_{i,i+1} = 1$, so $1 = J_{n+1-i,n+2-i} = (PJP)_{i,i-1}$. It is also easy to see that, since $J_{ij} = 0$ whenever $j \notin \{i, i+1\}$, $(PJP)_{ij} = 0$ whenever $j \notin \{i, i-1\}$. Hence $PJP^{-1} = PJP = J^t$.
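A quick numerical illustration of part (2), not part of the original solution: for the reversal permutation $P$ and a hypothetical $4 \times 4$ Jordan block with eigenvalue $5$, the product $PJP$ equals $J^t$.

```python
# Check that P J P = J^t for a 4x4 Jordan block (hypothetical eigenvalue 5).
n, lam = 4, 5

J = [[lam if i == j else 1 if j == i + 1 else 0 for j in range(n)]
     for i in range(n)]
# Reversal permutation: row i of P is the standard basis vector e_{n+1-i}.
P = [[1 if j == n - 1 - i else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

PJP = matmul(matmul(P, J), P)
Jt = [[J[j][i] for j in range(n)] for i in range(n)]
print(PJP == Jt)  # True
```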


Here I have used that the $u_i$'s form an orthonormal set.

(7.1) Using $A = QDQ^*$,
$$A^* u_i = QD^*Q^* u_i = QD^* e_i = Q\lambda_i^* e_i = \lambda_i^* Q e_i = \lambda_i^* u_i.$$
So $A^* u_i = c^* u_i$ for $i \leq r$ and $A^* u_i = d^* u_i$ for $i > r$.

(7.2) As $\beta = \{u_1, \dots, u_n\}$ forms a basis,
$$CS(A) = \mathrm{span}(L_A(\beta)) = \mathrm{span}(Au_1, \dots, Au_n) = \mathrm{span}(cu_1, \dots, cu_r, du_{r+1}, \dots, du_n).$$
Likewise,
$$CS(A^*) = \mathrm{span}(L_{A^*}(\beta)) = \mathrm{span}(c^*u_1, \dots, c^*u_r, d^*u_{r+1}, \dots, d^*u_n).$$
If $c \neq 0$ and $d \neq 0$, it is clear that both spans coincide with $\mathrm{span}(u_1, \dots, u_n) = \mathbb{C}^n$. If $d = 0 \neq c$, then both spans coincide with $\mathrm{span}(u_1, \dots, u_r)$; generalizing the result of problem 6, this is $E_c$. If $c = 0 \neq d$, we get $E_d$ instead. We can't have $c = d$, as we assumed they were distinct. So we are done.

Note that having only 2 distinct eigenvalues wasn't important here; the same arguments generalize to give similar results (such as $CS(A) = CS(A^*)$ for $A$ normal) for $A$ having more eigenvalues.

Question 8.

(8.1) True. $A$ is skew-symmetric ($A = -A^t$) and real, so $A^*A = A^tA = -A^2 = AA^t = AA^*$, so $A$ is normal. This is equivalent to $A$ being diagonalizable by a unitary matrix. (Note that we are talking about complex matrices here, as the question was about unitary matrices. There actually isn't a factorization $A = QDQ^t$ with $Q$ real orthogonal and $D$ real diagonal.)
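A numeric illustration of this computation, with a hypothetical $3 \times 3$ real skew-symmetric matrix: $A^tA = AA^t$, i.e. $A$ is normal.

```python
# Hypothetical real skew-symmetric matrix (A = -A^t).
A = [
    [0, 2, -1],
    [-2, 0, 3],
    [1, -3, 0],
]
n = 3

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

At = [[A[j][i] for j in range(n)] for i in range(n)]
# Both products equal -A^2, so A commutes with its (conjugate) transpose.
print(matmul(At, A) == matmul(A, At))  # True
```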

(8.2) False. $A$ being unitarily diagonalizable is also equivalent to the existence of an orthonormal basis of eigenvectors for $A$. But $A$ is upper triangular with eigenvalues 1, 2, 3. Each of the eigenspaces must have dimension 1, and computing them, one easily sees that if $v_1 \in E_1$ and $v_2 \in E_2$, both non-zero, then $v_1$ and $v_2$ are not orthogonal.

(8.3) False. The matrices
$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$$
do not commute. Just try it.
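"Just try it" is a one-liner in code; the check below multiplies the two matrices from the counterexample in both orders.

```python
# The (8.3) counterexample: these two matrices do not commute.
X = [[1, 1], [0, 1]]
Y = [[1, 0], [0, 2]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul(X, Y))  # [[1, 2], [0, 2]]
print(matmul(Y, X))  # [[1, 1], [0, 2]]
print(matmul(X, Y) == matmul(Y, X))  # False
```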

(8.4) True. Such matrices may be simultaneously diagonalized, so they commute, by a homework problem. To see this, let $J = QDQ^{-1}$. $Q$'s columns are eigenvectors for $J$, so they must also be eigenvectors for $K$ by assumption. Therefore $K = QCQ^{-1}$ for some diagonal matrix $C$ (with the corresponding eigenvalues on its diagonal). Therefore
$$JK = QDQ^{-1}QCQ^{-1} = QDCQ^{-1} = QCDQ^{-1} = KJ.$$

(8.5) False. This requires the first column of $Q^{-1}$ to be an eigenvector for $A$. For in the $Q^{-1}$-basis $\beta$ (given by $Q^{-1}$'s columns), $[L_A]_\beta = T$, and the first standard basis vector is an eigenvector for an upper triangular matrix. Alternatively, letting $Q^{-1}$'s first column be $q$, we have $q \neq 0$ and $AQ^{-1} = Q^{-1}T$, so
$$Aq = A(Q^{-1}e_1) = Q^{-1}Te_1 = Q^{-1}T_{11}e_1 = T_{11}Q^{-1}e_1 = T_{11}q.$$


So $A$ must have a real eigenvector. But this is false for the $90^\circ$ rotation matrix
$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$

(8.6) False. Let $u = (1, 1, 1, 1, 1)^t$. Note that $u^t M = u^t$ because $M$ is a probability matrix. Now suppose $Mx = y$. Then
$$8 = u^t y = u^t M x = u^t x = 7,$$
a contradiction.
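Here is the argument made concrete with a hypothetical $3 \times 3$ matrix $M$ (this sketch assumes "probability matrix" means each column sums to 1, which is exactly what $u^t M = u^t$ uses): multiplication by $M$ preserves the entry sum of a vector, so a vector with entry sum 7 can never map to one with entry sum 8.

```python
# Hypothetical column-stochastic (probability) matrix: columns sum to 1.
M = [
    [0.5, 0.2, 0.0],
    [0.3, 0.5, 0.9],
    [0.2, 0.3, 0.1],
]
# Sanity check: u^t M = u^t, i.e. every column of M sums to 1.
assert all(abs(sum(M[i][j] for i in range(3)) - 1.0) < 1e-12 for j in range(3))

x = [1.0, 2.0, 4.0]  # entries sum to 7
y = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]
print(sum(y))  # 7.0 up to rounding, never 8
```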

Question 9. Suppose all of $f$'s roots are distinct, and are $\lambda_1, \dots, \lambda_n$. Let $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. I will show that any complex matrix whose characteristic polynomial is $f$ is similar to $\Lambda$. This is sufficient, because if $M$ and $N$ are both similar to $\Lambda$, then they are similar to one another. So let $M$ be a complex matrix whose characteristic polynomial is $f$. Then $M$ is diagonalizable, as it has $n$ distinct eigenvalues. Thus $Q^{-1}MQ = \Theta$, where $\Theta = \mathrm{diag}(\theta_1, \dots, \theta_n)$. But the $\theta_i$'s are the roots of $f$, as are the $\lambda_i$'s. So $\Theta$ and $\Lambda$ have the same diagonal entries, but in possibly different orders. Thus they are similar through a permutation matrix $P$, and $M$ was similar to $\Theta$, so $M$ is similar to $\Lambda$ also. If you're interested, I'll now explicitly construct $P$.

I'll define a permutation matrix $P$ such that $P\Theta P^t = \Lambda$. For each $i$, there is a unique $j$ such that $\lambda_i = \theta_j$, as $M$'s eigenvalues are $f$'s roots. Let $\pi: \{1, \dots, n\} \to \{1, \dots, n\}$ be the permutation (bijective function) that sends $i$ to $j$ (so $\lambda_i = \theta_{\pi(i)}$). We want the $i$th row of $P$ to extract $\lambda_i$ from $\Theta$, which is in the $\pi(i)$th position in $\Theta$, so set $P_{i,j} = \delta_{\pi(i),j}$. Because $\pi$ is a permutation, it is easy to check that $P$ is a permutation matrix. Let $P_i$ be the $i$th row of $P$. As $PP^t = I$, $P_i P_j^t = \delta_{ij}$. We have
$$(P\Theta P^t)_{ij} = P_i \Theta P_j^t = \theta_{\pi(i)} P_i P_j^t = \theta_{\pi(i)} \delta_{ij} = \lambda_i \delta_{ij} = \Lambda_{ij}.$$
Therefore $P\Theta P^t = \Lambda$, as required.

Now we do the other direction. Suppose $f$'s roots, counted according to multiplicity, are $\lambda_1, \dots, \lambda_n$. By reordering if needed, suppose that $r > 1$ and $\lambda_1 = \lambda_i$ iff $i \leq r$ (so the multiplicity of $\lambda = \lambda_1$ is $r > 1$). Let $D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, let $J = J(\lambda_1, r)$ be the $r \times r$ Jordan block with eigenvalue $\lambda_1$, and let $C = \mathrm{diag}(J, \lambda_{r+1}, \dots, \lambda_n)$. Then both $D$ and $C$ have characteristic polynomial $f$, but they are not similar. This follows from the uniqueness of the Jordan canonical form: any two Jordan matrices are similar iff, for each $i$ and $\lambda$, they have the same number of $i \times i$ Jordan blocks with eigenvalue $\lambda$ (though they may appear in different orders). $D$ and $C$ do not satisfy this, so they are not similar. We may also see it directly as follows.

Writing $C$ as a $2 \times 2$ block matrix with first block $r \times r$,
$$v \in E_\lambda^C \iff (C - \lambda I)v = 0 \iff \begin{pmatrix} J(0, r) & 0 \\ 0 & C_{22} - \lambda I \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 0.$$
This is equivalent to requiring $J(0, r)v_1 = 0$ and $(C_{22} - \lambda I)v_2 = 0$. But $C_{22}$ has diagonal entries distinct from $\lambda$ (by choice of $r$), so $C_{22} - \lambda I$ is upper triangular and invertible. So we must have $v_2 = 0$. Clearly the rank of $J(0, r)$ is $r - 1$, and its nullity is 1, which means $N(J(0, r))$ and $E_\lambda^C$ have dimension 1. But we assumed $1 < r$, so $\dim(E_\lambda^C) < \mathrm{mult}(\lambda)$, so that $C$ is not diagonalizable. As $D$ is in fact diagonal, they cannot be similar.
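The permutation construction can be checked mechanically. The diagonal values below are hypothetical, not from the problem; the point is that $P$ built from $\pi$ via $P_{i,j} = \delta_{\pi(i),j}$ satisfies $P\Theta P^t = \Lambda$.

```python
# Hypothetical distinct eigenvalues in two different orders.
theta = [30, 10, 20]                 # diagonal of Theta
lam = [10, 20, 30]                   # same multiset, reordered: diagonal of Lambda
pi = [theta.index(v) for v in lam]   # pi(i) = position of lam[i] in theta

n = len(theta)
P = [[1 if j == pi[i] else 0 for j in range(n)] for i in range(n)]
Theta = [[theta[i] if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pt = [[P[j][i] for j in range(n)] for i in range(n)]
result = matmul(matmul(P, Theta), Pt)
print([result[i][i] for i in range(n)])  # [10, 20, 30]
```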


Question 10. Note that if $C$ is an $n \times n$ matrix with columns $c_i$, then $e_i^t C e_j = e_i^t c_j = C_{ij}$, and $e_i^t = e_i^*$ ($e_i$ is the standard basis vector, which is real). So
$$\langle Be_j, e_i \rangle = e_i^* B e_j = e_i^t B e_j = B_{ij},$$
and
$$\langle e_j, Ae_i \rangle = (Ae_i)^* e_j = e_i^* A^* e_j = e_i^t A^* e_j = A^*_{ij}.$$
Applying the assumption, $B_{ij} = A^*_{ij}$ for each $i, j$, so $B = A^*$.

Question 11. Let $W = \{v \in V \mid \langle x, v \rangle = 0\}$. We show that $W$ satisfies the three conditions required of a subspace.
By one of the first theorems on inner products, $0 \in W$.
If $u, v \in W$, then $\langle x, u + v \rangle = \langle x, u \rangle + \langle x, v \rangle = 0 + 0 = 0$, so $u + v \in W$.
If $u \in W$ and $c \in F$, then $\langle x, cu \rangle = c\langle x, u \rangle = c \cdot 0 = 0$, so $cu \in W$.