Download - Chapter 3 diff eq sm
201
CHAPTER 3 LinearAlgebra
3.1 Matrices: Sums and Products
Do They Compute?
1. 2 0 6
2 4 2 42 0 2
−⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥−⎣ ⎦
A 2. 1 6 3
2 2 3 21 0 3
⎡ ⎤⎢ ⎥+ = ⎢ ⎥⎢ ⎥−⎣ ⎦
A B
3. 2 −C D , Matrices are not compatible
4. 1 3 32 7 21 3 1
− −⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥− −⎣ ⎦
AB 5. 5 3 92 1 21 0 1
⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥−⎣ ⎦
BA 6. 3 1 08 1 29 2 6
−⎡ ⎤⎢ ⎥= −⎢ ⎥⎢ ⎥⎣ ⎦
CD
7. 1 16 7
−⎡ ⎤= ⎢ ⎥⎣ ⎦
DC 8. ( )T 1 61 7
⎡ ⎤= ⎢ ⎥−⎣ ⎦
DC
9. TC D , Matrices are not compatible 10. TD C , Matrices are not compatible
11. 2
2 0 02 1 100 0 2
−⎡ ⎤⎢ ⎥= −⎢ ⎥⎢ ⎥−⎣ ⎦
A 12. AD, Matrices are not compatible
13. 3
2 0 32 0 21 0 0
−⎡ ⎤⎢ ⎥− = ⎢ ⎥⎢ ⎥−⎣ ⎦
A I 14. 3
1 12 04 3 0 1 0
0 0 1
⎡ ⎤⎢ ⎥− = ⎢ ⎥⎢ ⎥⎣ ⎦
B I
15. 3−C I , Matrices are not compatible 16. 2 96 70 3
⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥⎣ ⎦
AC
202 CHAPTER 3 Linear Algebra
More Multiplication Practice
17. [ ] 1 0 2 21 0 2
1 0 2 2
a ba c e a e
c db d f b f
e f
⎡ ⎤⋅ + ⋅ − ⋅ −⎡ ⎤ ⎡ ⎤⎢ ⎥− = =⎢ ⎥ ⎢ ⎥⎢ ⎥ ⋅ + ⋅ − ⋅ −⎣ ⎦ ⎣ ⎦⎢ ⎥⎣ ⎦
18. 0
0a b d b ad bc ab ba ad bcc d c a cd dc db da ad bc
− − − + −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤= =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥− − − + −⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
19.
1 10 2 0 02 0 1 02 21 1 1 1 1 0 11 1
2 2 2
⎡ ⎤ ⎡ ⎤⋅ +⎢ ⎥ ⎢ ⎥⎡ ⎤ ⎡ ⎤= =⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
20. [ ] [ ]0 1 0a b cd e f d e fg h k
⎡ ⎤⎢ ⎥ =⎢ ⎥⎢ ⎥⎣ ⎦
21. [ ] [ ]0 1 1 1 0a b cd e f⎡ ⎤⎢ ⎥⎣ ⎦
= [ ][ ]1 1 0d e f not possible
22. [ ] [ ] [ ]1 11 1 0
1 1
a ba c b d a c b dc d
e f
⎡ ⎤⎡ ⎤ ⎡ ⎤⎢ ⎥ = + + = + + +⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎣ ⎦ ⎣ ⎦⎢ ⎥⎣ ⎦
Rows and Columns in Products
23. (a) 5 columns (b) 4 rows (c) 6 4×
Which Rules Work for Matrix Multiplication? 24. Counterexample:
A = 1 11 0⎡ ⎤⎢ ⎥⎣ ⎦
B = 2 10 1
−⎡ ⎤⎢ ⎥⎣ ⎦
(A + B)(A − B) = 3 0 1 21 1 1 1
−⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦
= 3 6
0 1−⎡ ⎤⎢ ⎥⎣ ⎦
A2 − B2 = 2 1 4 3 2 41 1 0 1 1 0
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤− =⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦
25. Counterexample: Also due to the fact that AB ≠ BA for most matrices
A = 1 11 0⎡ ⎤⎢ ⎥⎣ ⎦
B = 2 10 1
−⎡ ⎤⎢ ⎥⎣ ⎦
(A + B)2 = 9 04 1⎡ ⎤⎢ ⎥⎣ ⎦
AB = 2 02 1⎡ ⎤⎢ ⎥−⎣ ⎦
A2 + 2AB + B2 = 2 1 2 0 4 3
21 1 2 1 0 1
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ +⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦
= 10 25 0
−⎡ ⎤⎢ ⎥⎣ ⎦
SECTION 3.1 Matrices: Sums and Products 203
26. Proof (I + A)2 = (I + A)(I + A) = I(I + A) + A(I + A) distributive property = I2 + IA + AI + A2 = I + A + A + A2 identity property = I + 2A + A2 27. Proof (A + B)2 = (A + B)(A + B) = A(A + B) + B(A + B) distributive property = A2 + AB + BA + B2 distributive property
Find the Matrix
28. Set 1 2 3 2 4 0 03 4 3 2 4 0 0
a b a b a bc d c d c d
+ +⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤= =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ +⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
a + 3b = 0 a + 3b = 0 c + 3d = 0 c + 3d = 0 2a + 4b = 0 a + 2b = 0 2c − 4d = 0 c + 2d = 0 b = 0 d = 0 ∴ a = 0 ∴ c = 0 Therefore no nonzero matrix A will work.
29. B must be 3 × 2 Set 1 2 3 1 0
0 10 1 0
a bc de f
⎡ ⎤⎡ ⎤ ⎡ ⎤⎢ ⎥ =⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎣ ⎦⎣ ⎦ ⎢ ⎥⎣ ⎦
a + 2c + 3e = 1 c = 0 b + 2d + 3f = 0 d = 1
B is any matrix of the form 1 3 2 3
0 1e f
e f
− − −⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
for any real numbers e and f.
30. Set 1 2 2 04 1 1 4
a bc d
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦
a + 2c = 2 b + 2d = 0 4a + c = 1 4b + d = 4
Thus c = 1, a = 0 b = 87
, d = 47
− and
82 7417
a bc d
⎡ ⎤⎢ ⎥⎡ ⎤
= ⎢ ⎥⎢ ⎥⎣ ⎦ ⎢ ⎥−⎢ ⎥⎣ ⎦
.
Commuters
31. 0 1 0
0 0 1a
aa
⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ so the matrix commutes with every 2 × 2 matrix.
a = 1 − 3e b = −2 − 3f ⇒
204 CHAPTER 3 Linear Algebra
32. 1
1k a b a kc b kd
k c d ka c kb d+ +⎡ ⎤ ⎡ ⎤ ⎡ ⎤
=⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ +⎣ ⎦ ⎣ ⎦ ⎣ ⎦
1
1a b k a bk ak bc d k c kd ck d
+ +⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ +⎣ ⎦ ⎣ ⎦ ⎣ ⎦
∴ Any matrix of the form
a bb a⎡ ⎤⎢ ⎥⎣ ⎦
a, b ∈ will commute with 1
1k
k⎡ ⎤⎢ ⎥⎣ ⎦
.
a + kc = a + bk k(c − b) = 0 ∴ c = b since k ≠ 0 b + kd = ak + b k(d − a) = 0 ∴ d = a since k ≠ 0 Same results from ka + c = c + kd and kb + d = a + d
To check:
1 1
1 1k a b a kb b ka a b k
k b a ka b kb a b a k+ +⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤
= =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ +⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
33. 0 11 0
a b c dc d a b
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦
0 11 0
a b b ac d d c⎡ ⎤ ⎡ ⎤ ⎡ ⎤
=⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
b ca d
∴ ==
Any matrix of the form ,a b
a bb a⎡ ⎤
∈⎢ ⎥⎣ ⎦
will commute with 0 11 0⎡ ⎤⎢ ⎥⎣ ⎦
.
Products with Transposes
34. (a) [ ]T 11 4 3
1⎡ ⎤
= = −⎢ ⎥−⎣ ⎦A B (b) [ ]T 1 1 1
1 14 4 4
−⎡ ⎤ ⎡ ⎤= − =⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦
AB
(c) [ ]T 11 1 3
4⎡ ⎤
= − = −⎢ ⎥⎣ ⎦
B A (d) [ ]T 1 1 41 4
1 1 4⎡ ⎤ ⎡ ⎤
= =⎢ ⎥ ⎢ ⎥− − −⎣ ⎦ ⎣ ⎦BA
Reckoning
35. Let aij, bij, be the ijth elements of matrices A and B, respectively, for 1 ≤ i ≤ m, 1 ≤ j ≤ n. A − B = ( 1)ij ij ij ija b a b⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤− = + −⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ = A + (−1)B
36. Let aij, bij be the ijth elements of matrices A and B respectively, for 1 ≤ i ≤ m, 1 ≤ j ≤ n. A + B = ij ij ij ija b a b⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ = +⎣ ⎦ ⎣ ⎦ ⎣ ⎦ from the commutative property of real numbers
= ij ijb a⎡ ⎤+⎣ ⎦
= ij ijb a⎡ ⎤ ⎡ ⎤+⎣ ⎦ ⎣ ⎦ = B + A
SECTION 3.1 Matrices: Sums and Products 205
37. Let aij be the ijth element of matrix A and c and d be any real numbers. (c + d)A = ( ) ij ij ijc d a ca da⎡ ⎤ ⎡ ⎤+ = +⎣ ⎦ ⎣ ⎦ from the distributive property of real numbers
= ij ij ij ijca da c a d a⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ = +⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
= cA + dA 38. Let aij, bij be the ijth elements of A and B respectively, for 1 ≤ i ≤ m, 1 ≤ j ≤ n. Let c be any real
number. The result again follows from the distributive property of real numbers. c(A + B) = ( ) ( )ij ij ij ijc a b c a b⎡ ⎤⎡ ⎤ ⎡ ⎤+ − +⎣ ⎦ ⎣ ⎦ ⎣ ⎦
= ij ij ij ijca cb ca cb⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ = +⎣ ⎦ ⎣ ⎦ ⎣ ⎦
= ij ijc a c b⎡ ⎤ ⎡ ⎤+⎣ ⎦ ⎣ ⎦ = cA + cB
Properties of the Transpose
Rather than grinding out the proofs of Problems 39–42, we make the following observations: 39. ( )TT =A A . Interchanging rows and columns of a matrix two times reproduce the original matrix.
40. ( )T T T+ = +A B A B . Add two matrices and then interchange the rows and columns of the
resulting matrix. You get the same as first interchanging the rows and columns of the matrices and then adding.
41. ( )T Tk k=A A . Demonstrate that it makes no difference whether you multiply each element of
matrix A before or after rearranging them to form the transpose. 42. ( )T T T=AB B A . This identity is not so obvious. Due to lack of space we verify the proof for
2 2× matrices. The verification for 3 3× and higher-order matrices follows along exactly the same lines.
( )
11 12
21 22
11 12
21 22
11 12 11 12 11 11 12 21 11 12 12 22
21 22 21 22 21 11 22 21 21 12 22 22
T 11 11 12 21 21 11 22 21
11 12 12 22 21 12 22 22
a aa a
b bb b
a a b b a b a b a b a ba a b b a b a b a b a b
a b a b a b a ba b a b a b a b
⎡ ⎤= ⎢ ⎥⎣ ⎦⎡ ⎤
= ⎢ ⎥⎣ ⎦
+ +⎡ ⎤ ⎡ ⎤ ⎡ ⎤= =⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ +⎣ ⎦ ⎣ ⎦ ⎣ ⎦
+ +⎡ ⎤= ⎢ ⎥+ +⎣ ⎦
A
B
AB
AB
11 21 11 21 11 11 12 21 21 11 22 21T T
12 22 12 22 11 12 12 22 21 12 22 22
b b a a a b a b a b a bb b a a a b a b a b a b
+ +⎡ ⎤ ⎡ ⎤ ⎡ ⎤= =⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ +⎣ ⎦ ⎣ ⎦ ⎣ ⎦
B A
Hence, ( )T T T=AB B A for 2 2× matrices.
206 CHAPTER 3 Linear Algebra
Transposes and Symmetry 43. If the matrix ija⎡ ⎤= ⎣ ⎦A is symmetric, then ij jia a= . Hence T
jia⎡ ⎤= ⎣ ⎦A is symmetric since
ji ija a= .
Symmetry and Products 44. We pick at random the two symmetric matrices
0 22 1⎡ ⎤
= ⎢ ⎥⎣ ⎦
A , 3 11 1⎡ ⎤
= ⎢ ⎥⎣ ⎦
B ,
which gives
0 2 3 1 2 22 1 1 1 7 3⎡ ⎤ ⎡ ⎤ ⎡ ⎤
= =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
AB .
This is not symmetric. In fact, if A, B are symmetric matrices, we have ( )T T T= =AB B A BA ,
which says the only time the product of symmetric matrices A and B is symmetric is when the matrices commute (i.e. =AB BA ).
Constructing Symmetry
45. We verify the statement that T+A A is symmetric for any 2 2× matrix. The general proof follows along the same lines.
11 12 11 21 11 12 21T
21 22 12 22 21 12 22
22
a a a a a a aa a a a a a a
+⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ = + =⎢ ⎥ ⎢ ⎥ ⎢ ⎥+⎣ ⎦ ⎣ ⎦ ⎣ ⎦
A A ,
which is clearly symmetric.
More Symmetry 46. Let
11 12
21 22
31 32
a aa aa a
⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥⎣ ⎦
A .
Hence, we have
11 12
11 21 31 11 12T21 22
12 22 32 21 2231 32
a aa a a A A
a aa a a A A
a a
⎡ ⎤⎡ ⎤ ⎡ ⎤⎢ ⎥= =⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎣ ⎦⎣ ⎦ ⎢ ⎥⎣ ⎦
A A ,
2 2 211 11 21 31
12 11 12 21 22 31 32
21 11 12 21 22 31 32
2 2 222 12 22 32 .
A a a aA a a a a a aA a a a a a a
A a a a
= + += + +
= + +
= + +
Note 12 21A A= , which means TAA is symmetric. We could verify the same result for 3 3× matrices.
SECTION 3.1 Matrices: Sums and Products 207
Trace of a Matrix 47. ( ) ( ) ( )Tr Tr Tr+ = +A B A B ( ) ( ) ( ) ( ) ( )11 11 11 11Tr Tr( )+Tr( )nn nn nn nna b a b a a b b+ = + + + + = + + + + + =A B A B . 48. ( ) ( ) ( )11 11Tr Trnn nnc ca ca c a a c= + + = + + =A A 49. ( ) ( )TTr Tr=A A . Taking the transpose of a (square) matrix does not alter the diagonal element,
so ( ) ( )TTr Tr=A A .
50. ( ) [ ] [ ] [ ] [ ]
( ) ( )( ) ( )[ ] [ ] [ ] [ ] ( )
11 1 11 1 1 1
11 11 1 1 1 1
11 11 1 1 1 1
11 1 11 1 1 1
Tr
Tr
n n n nn n nn
n n n n nn nn
n n n n nn nn
n n n nn n nn
a a b b a a b b
a b a b a b a b
b a b a b a b a
b b a a b b a a
= + + ⋅ + + + + + + ⋅ + +
= + + + + + +
= + + + + + +
= + + ⋅ + + + + + + ⋅ + + =
AB
BA
Matrices Can Be Complex
51. 3 0
22 4 4
ii i
+⎡ ⎤+ = ⎢ ⎥+ −⎣ ⎦
A B 52. 3 1
8 4 5 3i ii i
− + − +⎡ ⎤= ⎢ ⎥+ −⎣ ⎦
AB
53. 1 34 1
ii i− −⎡ ⎤
= ⎢ ⎥−⎣ ⎦BA 54. 2 6 4 6
6 4 5 8i i
i i+⎡ ⎤
= ⎢ ⎥− − −⎣ ⎦A
55. 1 22 3 2
ii
i i− + −⎡ ⎤
= ⎢ ⎥+⎣ ⎦A 56.
1 2 22
6 4 5i i
ii
− − +⎡ ⎤− = ⎢ ⎥−⎣ ⎦
A B
57. T 1 21
ii i
⎡ ⎤= ⎢ ⎥− +⎣ ⎦
B 58. ( )Tr 2 i= +B
Real and Imaginary Components
59. 1 2 1 0 1 2
2 2 3 2 2 0 3i i
ii
+⎡ ⎤ ⎡ ⎤ ⎡ ⎤= = +⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −⎣ ⎦ ⎣ ⎦ ⎣ ⎦
A , 1 1 0 0 12 1 0 1 2 1
ii
i i− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤
= = +⎢ ⎥ ⎢ ⎥ ⎢ ⎥+⎣ ⎦ ⎣ ⎦ ⎣ ⎦B
Square Roots of Zero
60. If we assume
a bc d⎡ ⎤
= ⎢ ⎥⎣ ⎦
A
is the square root of
0 00 0⎡ ⎤⎢ ⎥⎣ ⎦
,
then we must have
2
22
0 00 0
a b a b a bc ab bdc d c d ac cd bc d
⎡ ⎤+ +⎡ ⎤ ⎡ ⎤ ⎡ ⎤= = =⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ +⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎣ ⎦
A ,
208 CHAPTER 3 Linear Algebra
which implies the four equations
2
2
0000.
a bcab bdac cdbc d
+ =+ =+ =
+ =
From the first and last equations, we have 2 2a d= . We now consider two cases: first we assume
a d= . From the middle two preceding equations we arrive at 0b = , 0c = , and hence 0a = , 0d = . The other condition, a d= − , gives no condition on b and c, so we seek a matrix of the
form (we pick 1a = , 1d = − for simplicity)
1 1 1 0
1 1 0 1b b bc
c c bc+⎡ ⎤ ⎡ ⎤ ⎡ ⎤
=⎢ ⎥ ⎢ ⎥ ⎢ ⎥− − +⎣ ⎦ ⎣ ⎦ ⎣ ⎦.
Hence, in order for the matrix to be the zero matrix, we must have 1bc
= − , and hence
11
1c
c
⎡ ⎤−⎢ ⎥⎢ ⎥
−⎣ ⎦
,
which gives
1 1 0 01 1
0 01 1c c
c c
⎡ ⎤ ⎡ ⎤− − ⎡ ⎤⎢ ⎥ ⎢ ⎥ = ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎣ ⎦− −⎣ ⎦ ⎣ ⎦
.
Zero Divisors
61. No, =AB 0 does not imply that =A 0 or =B 0 . For example, the product
1 0 0 00 0 0 1⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
is the zero matrix, but neither factor is itself the zero matrix.
Does Cancellation Work?
62. No. A counterexample is: 0 0 1 2 0 0 0 00 1 0 4 0 1 0 4⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
=⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠
since 1 2 0 00 4 0 4⎛ ⎞ ⎛ ⎞
≠⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠
.
SECTION 3.1 Matrices: Sums and Products 209
Taking Matrices Apart
63. (a) 1 2 3
1 5 21 0 32 4 7
⎡ ⎤⎢ ⎥⎡ ⎤= = −⎣ ⎦ ⎢ ⎥⎢ ⎥⎣ ⎦
A A A A , 243
⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥⎣ ⎦
x
where 1A , 2A , and 3A are the three columns of the matrix A and 1 2x = , 2 4x = , 3 3x = are the elements of x . We can write
1 1 2 2 3 3
1 5 2 2 1 2 5 4 2 3 1 5 21 0 3 4 1 2 0 4 3 3 2 1 4 0 3 32 4 7 3 2 2 4 4 7 3 2 4 7
.x x x
× + × + ×⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥= − = − × + × + × = − + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥× + × + ×⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
= + +
Ax
A A A
(b) We verify the fact for a 3 3× matrix. The general n n× case follows along the same lines.
11 12 13 1 11 1 12 2 13 3 11 1 12 2 13 3
21 22 23 2 21 1 22 2 23 3 21 1 22 2 23 3
31 32 33 3 31 1 32 2 33 3 31 1 32 2 33 3
1
a a a x a x a x a x a x a x a xa a a x a x a x a x a x a x a xa a a x a x a x a x a x a x a x
x
+ +⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥= = + + = + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ +⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
=
Ax
11 12 13
21 2 22 3 23 1 1 2 2 3 3
31 32 33
a a aa x a x a x x xa a a
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ + = + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
A A A
Diagonal Matrices
64.
11
22
0 00 000 0 nn
aa
a
⎡ ⎤⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎣ ⎦
A ,
11
22
0 00 000 0 nn
bb
b
⎡ ⎤⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎣ ⎦
B .
By multiplication we get
11 11
22 22
0 00 000 0 nn nn
a ba b
a b
⎡ ⎤⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎣ ⎦
AB ,
which is a diagonal matrix.
210 CHAPTER 3 Linear Algebra
65. By multiplication of the general matrices, and commutativity of resulting individual elements, we have
11 11 11 11
22 22 22 22
0 0 0 00 0 0 00 0
0 0 0 0nn nn nn nn
a b b aa b b a
a b a b
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥= = =⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
AB BA .
However, it is not true that a diagonal matrix commutes with an arbitrary matrix.
Upper Triangular Matrices 66. (a) Examples are
1 20 3⎡ ⎤⎢ ⎥⎣ ⎦
, 1 3 00 0 50 0 2
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
,
2 7 9 00 3 8 10 0 4 20 0 0 6
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
.
(b) By direct computation, it is easy to see that all the entries in the matrix product
11 12 13 11 12 13
22 23 22 23
33 33
0 00 0 0 0
a a a b b ba a b b
a b
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥= ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
AB
below the diagonal are zero. (c) In the general case, if we multiply two upper-triangular matrices, it yields
11 12 13 1 11 12 13 1 11 12 13 1
22 23 2 22 23 2 22 23 2
33 3 33 3 33 3
0 0 00 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
n n n
n n n
n n n
nn nn nn
a a a a b b b b c c c ca a a b b b c c c
a a b b c c
a b c
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥= × =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
AB .
We won’t bother to write the general expression for the elements ijc ; the important point is that the entries in the product matrix that lie below the main diagonal are clearly zero.
Hard Puzzle
67. If
a bc d⎡ ⎤
= ⎢ ⎥⎣ ⎦
M
is a square root of
0 10 0⎡ ⎤
= ⎢ ⎥⎣ ⎦
A ,
then
2 0 10 0⎡ ⎤
= ⎢ ⎥⎣ ⎦
M ,
SECTION 3.1 Matrices: Sums and Products 211
which leads to the condition 2 2a d= . Each of the possible cases leads to a contradiction. How-ever for matrix B because
1 0 1 0 1 0
1 1 0 1α α⎡ ⎤ ⎡ ⎤ ⎡ ⎤
=⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −⎣ ⎦ ⎣ ⎦ ⎣ ⎦
for any α, we conclude that
1 0
1α⎡ ⎤
= ⎢ ⎥−⎣ ⎦B
is a square root of the identity matrix for any number α.
Orthogonality
68. 1 1
20 3k⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 1 ⋅ 1 + 2 ⋅ k + 3 ⋅ 0 = 0
2k = −1
k = 12
−
69. 1
2 04
k
k
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= k ⋅ 1 + 2 ⋅ 0 + k ⋅ 4 = 0
5k = 0 k = 0
70. 2
10 2
3
k
k
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= k ⋅ 1 + 0 ⋅ 2 + k2 ⋅ 3 = 0
3k2 + k = 0 k(5k + 1) = 0
k = 0, 13
−
71. 2
1 12 1
1k
−⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦
= 1 ⋅ (−1) + 2 ⋅ 1 + k2(−1) = 0
1 − k2 = 0 k = ±1
Orthogonality Subsets
72. Set 101
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 0
a ⋅ 1 + b ⋅ 0 + c ⋅ 1 = 0 a + c = 0 c = −a
Orthogonal set = : ,ab a ba
⎧ ⎫⎡ ⎤⎪ ⎪⎢ ⎥ ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥−⎣ ⎦⎩ ⎭
73. Set 101
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 0 to get c = −a
Set 210
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 0
2a + b ⋅ 1 + c ⋅ 0 = 0 2a + b = 0 b = −2a
Orthogonal set = 2 :aa aa
⎧ ⎫⎡ ⎤⎪ ⎪⎢ ⎥− ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥−⎣ ⎦⎩ ⎭
212 CHAPTER 3 Linear Algebra
74. Set 101
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 0 to get c = −a
Set 210
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 0 to get b = −2a
Set 345
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 0 to get a ⋅ 3 + b ⋅ 4 + c ⋅ 5 = 0
3a − 8a − 5a = 0 ∴a = 0
000
⎧ ⎫⎡ ⎤⎪ ⎪⎢ ⎥⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎣ ⎦⎩ ⎭
is the orthogonal set
75. Set 101
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 0 to get c = −a
Set 210
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= 0 to get b = −2a
Set 012
abc
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⋅ −⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
= a ⋅ 0 + b(−1) + c(2)
= −b + 2c = 0 = 2a − 2a = 0
2 :aa aa
⎧ ⎫⎡ ⎤⎪ ⎪⎢ ⎥− ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥−⎣ ⎦⎩ ⎭
is the orthogonal set
Dot Products
76. [ ] [ ]2, 1 1, 2 0• − = , orthogonal 77. [ ] [ ]3, 0 2, 1 6− • = − , not orthogonal. Because the dot product is negative, this means the angle
between the vectors is greater than 90°. 78. [ ] [ ]2, 1, 2 3, 1, 0 5• − = . Because the dot product is positive, this means the angle between the
vectors is less than 90°. 79. [ ] [ ]1, 0, 1 1, 1, 1 0− • = , orthogonal 80. [ ] [ ]5, 7, 5, 1 2, 4, 3, 3 0• − − − = , orthogonal 81. [ ] [ ]7, 5, 1, 5 4, 3, 2, 3 30• − = , not orthogonal
Lengths 82. Introducing the two vectors [ ], a b=u , [ ], c d=v , we have the distance d between the heads of
the vectors
( ) ( )2 2d a c b d= − + − . But we also have
( ) ( ) ( ) ( )2 2 2a c b d− = − • − = − + −u v u v u v , so d = −u v . This proof can be extended easily to u and v in nR .
SECTION 3.1 Matrices: Sums and Products 213
Geometric Vector Operations 83. +A C lies on the horizontal axis, from 0 to –2.
[ ] [ ] [ ]1,2 3, 2 2,0+ = + − − = −A C
1 3–1–1
–2
–3
1
3
2
–2
–3 2
A = 1 2,
C = − −3 2,
A C+ = −2, 0
84. [ ] [ ] [ ]1 1 1, 2 3, 1 2.5, 22 2
+ = + − = −A B
1 3–1–1
–2
–3
1
3
2
–2
–3 2
A = 1 2,B A+ = −
12
25 2. ,
B = −3 1,
85. 2−A B lies on the horizontal axis, from 0 to 7.
4 82–1
–2
–3
1
3
2
–2
–4 6
A = 1 2,
B = −3 1, A B− =2 7, 0
Triangles 86. If [ ]3, 2 and [ ]2, 3 are two sides of a triangle,
their difference [ ]1, 1− or [ ]1, 1− is the third side. If we compute the dot products of thesesides, we see
[ ] [ ]3, 2 2, 3 12• = ,
[ ] [ ]3, 2 1, 1 1• − = ,
[ ] [ ]2, 3 1, 1 1• − = − .
1 3–1–1
–2
–3
1
3
2
–2
–3 2
2, 3
3 2,
None of these angles are right angles, so the triangle is not a right triangle (see figure).
214 CHAPTER 3 Linear Algebra
87. [ ] [ ]2, 1, 2 1, 0, 1 0− • − = so in 3-space these vectors form a right angle, since dot product is zero.
Properties of Scalar Products We let [ ]1 na a=a … , [ ]1 nb b=b … , and [ ]1 nc c=c … for simplicity.
88. True. [ ] [ ] [ ] [ ]1 1 1 1 1 1 n n n n n na a b b a b a b b a b a• = • = = = •a b b a . 89. False. Neither ( )• •a b c nor ( )• •a b c . Invalid operation, since problem asks for the scalar
product of a vector and a scalar, which is not defined. 90. True.
( ) [ ] [ ]
[ ] [ ] ( )1 1 1 1 1 1
1 1
n n n n n n
n n
k ka ka b b ka b ka b a kb a kb
a a kb kb k
• = • = + + = + +
= • = •
a b
a b
91. True.
( ) [ ] [ ] ( ) ( )
( ) ( )1 1 1 1 1 1
1 1 1 1
n n n n n n
n n n n
a a b c b c a b c a b c
a b a b a c a c
• + = • + + = + + + +
= + + + + + = • + •
a b c
a b a c
Directed Graphs
92. (a)
0 1 1 0 10 0 1 0 00 0 0 0 10 0 0 0 00 0 1 1 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
A
(b) 2
0 0 2 1 10 0 0 0 10 0 1 1 00 0 0 0 00 0 0 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
A
The ijth entry in 2A gives the number of paths of length 2 from node i to node j.
Tournament Play 93. The tournament graph had adjacency matrix
0 1 1 0 10 0 0 1 10 1 0 0 11 0 1 0 10 0 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
T .
Ranking players by the number of games won means summing the elements of each row of T, which in this case gives two ties: 1 and 4, 2 and 3, 5. Players 1 and 4 have each won 3 games. Players 2 and 3 have each won 2 games. Player 5 has won none.
SECTION 3.1 Matrices: Sums and Products 215
Second-order dominance can be determined from
2
0 1 0 1 21 0 1 0 10 0 0 1 10 2 1 0 20 0 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
T
For example, 2T tells us that Player 1 can dominate Player 5 in two second-order ways (by beating either Player 2 or Player 4, both of whom beat Player 5). The sum
2
0 2 1 1 31 0 1 1 20 1 0 1 21 2 2 0 30 0 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥+ =⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
T T ,
gives the number of ways one player has beaten another both directly and indirectly. Reranking players by sums of row elements of 2+T T can sometimes break a tie: In this case it does so and ranks the players in order 4, 1, 2, 3, 5.
Suggested Journal Entry
94. Student Project
216 CHAPTER 3 Linear Algebra
3.2 Systems of Linear Equations
Matrix-Vector Form
1. 1 2 12 1 03 2 1
xy
⎡ ⎤ ⎡ ⎤⎡ ⎤⎢ ⎥ ⎢ ⎥− =⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
2.
1
2
3
4
1 2 1 3 21 3 3 0 1
iiii
⎡ ⎤⎢ ⎥⎡ ⎤ ⎡ ⎤⎢ ⎥ =⎢ ⎥ ⎢ ⎥⎢ ⎥−⎣ ⎦ ⎣ ⎦⎢ ⎥⎣ ⎦
Augmented matrix 1 2 12 1 03 2 1
⎡ ⎤⎢ ⎥= −⎢ ⎥⎢ ⎥⎣ ⎦
Augmented matrix 1 2 1 3 21 3 3 0 1⎡ ⎤
= ⎢ ⎥−⎣ ⎦
3. 1 2 1 11 3 3 10 4 5 3
rst
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥− =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦
4. [ ]1
2
3
1 2 3 0xxx
⎡ ⎤⎢ ⎥− =⎢ ⎥⎢ ⎥⎣ ⎦
Augmented matrix 1 2 1 11 3 3 10 4 5 3
⎡ ⎤⎢ ⎥= −⎢ ⎥⎢ ⎥−⎣ ⎦
Augmented matrix [ ]1 2 3 | 0= −
Solutions in 2R
5. (A) 6. (B) 7. (C) 8. (B) 9. (A)
A Special Solution Set in 3R 10. The three equations
1
2 2 2 23 3 3 3
x y zx y zx y z
+ + =+ + =+ + =
are equivalent to the single plane 1x y z+ + = , which can be written in parametric form by letting y s= , z t= . We then have the parametric form ( ){ }1 , , : , any real numberss t s t s t− − .
Reduced Row Echelon Form
11. RREF 12. Not RREF (not all zeros above leading ones) 13. Not RREF (leading nonzero element in row 2 is not 1; not all zeros above the leading ones) 14. Not RREF (row 3 does not have a leading one, nor does it move to the right; plus pivot columns
have nonzero entries other than the leading ones) 15. RREF 16. Not RREF (not all zeros above leading ones) 17. Not RREF (not all zeros above leading ones) 18. RREF 19. RREF
SECTION 3.2 Systems of Linear Equations 217
Gauss-Jordan Elimination
20. Starting with 1 3 8 00 1 2 10 1 2 4
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
( )3 3 21R R R∗ = + − 1 3 8 00 1 2 10 0 0 3
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
3 313
R R∗ = 1 3 8 00 1 2 10 0 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
.
This matrix is in row echelon form. To further reduce it to RREF we carry out the following
elementary row operations ( )1 1 23R R R∗ = + − , ( )2 2 31R R R∗ = + −
1 0 2 00 1 2 0 RREF0 0 0 1
⎡ ⎤⎢ ⎥ ←⎢ ⎥⎢ ⎥⎣ ⎦
.
Hence, we see the leading ones in this RREF form are in columns 1, 2, and 4, so the pivot columns of the original matrix are columns 1, 2, and 4 shown in bold and underlined as follows:
822
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
1 3 00 1 10 1 4
.
21. 0 0 2 2 22 2 6 14 4
−⎡ ⎤⎢ ⎥⎣ ⎦
1 2R R↔ 2 2 6 14 40 0 2 2 2⎡ ⎤⎢ ⎥−⎣ ⎦
1 112
R R∗ = 1 1 3 7 20 0 2 2 2⎡ ⎤⎢ ⎥−⎣ ⎦
2 212
R R∗ = 1 1 3 7 20 0 1 1 1⎡ ⎤⎢ ⎥−⎣ ⎦
.
The matrix is in row echelon form. To further reduce it to RREF we carry out the following
elementary row operation.
( )1 1 23R R R∗ = + − 1 1 0 4 5
RREF0 0 1 1 1⎡ ⎤
←⎢ ⎥−⎣ ⎦
The pivot columns of the original matrix are first and third.
0 2 22 14 4
−⎡ ⎤⎢ ⎥⎣ ⎦
0 22 6
.
218 CHAPTER 3 Linear Algebra
22.
1 0 02 4 65 8 120 8 12
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
( )
( )
2 2 1
3 3 1
2
5
R R R
R R R
∗
∗
= + −
= + −
1 0 00 4 60 8 120 8 12
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
2 214
R R∗ =
1 0 030 12
0 8 120 8 12
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
( )
( )
3 3 2
4 4 2
8
8
R R R
R R R
∗
∗
= + −
= + −
1 0 030 1
RREF20 0 00 0 0
⎡ ⎤⎢ ⎥⎢ ⎥
←⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
.
This matrix is in both row echelon form and RREF form.
06
1212
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
1 02 45 80 8
The pivot columns of the original matrix are the first and second columns.
23. 1 2 3 13 7 10 42 4 6 2
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
( )
( )
2 2 1
3 3 1
3
2
R R R
R R R
∗
∗
= + −
= + −
, 1 2 3 10 1 1 1 row echelon form0 0 0 0
⎡ ⎤⎢ ⎥ ←⎢ ⎥⎢ ⎥⎣ ⎦
.
The matrix is in row echelon form. To further reduce it to RREF, we carry out the following elementary row operation.
( )1 1 22R R R∗ = + − 1 0 1 10 1 1 1 RREF0 0 0 0
−⎡ ⎤⎢ ⎥ ←⎢ ⎥⎢ ⎥⎣ ⎦
The pivot columns of the original matrix are first and second. 3 1
10 46 2
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
1 23 72 4
.
Solving Systems
24. 1 1 41 1 0⎡ ⎤⎢ ⎥−⎣ ⎦
( )2 2 11R R R∗ = + − 1 1 40 2 4⎡ ⎤⎢ ⎥− −⎣ ⎦
*2 2
12
R R= − 1 1 40 1 2⎡ ⎤⎢ ⎥⎣ ⎦
( )1 1 21R R R∗ = + − 1 0 20 1 2⎡ ⎤⎢ ⎥⎣ ⎦
unique solution; 2x = , 2y =
SECTION 3.2 Systems of Linear Equations 219
25. 2 1 01 1 3
−⎡ ⎤⎢ ⎥− −⎣ ⎦
1 2R R↔ 1 1 32 1 0
− −⎡ ⎤⎢ ⎥−⎣ ⎦
( )2 2 12R R R∗ = + − 1 1 30 1 6
− −⎡ ⎤⎢ ⎥⎣ ⎦
( )1 1 21R R R∗ = + RREF 1 0 30 1 6⎡ ⎤⎢ ⎥⎣ ⎦
unique solution; 3x = , 6y =
26. 1 1 1 00 1 1 1⎡ ⎤⎢ ⎥⎣ ⎦
( )1 1 21R R R∗ = + − RREF 1 0 0 10 1 1 1
−⎡ ⎤⎢ ⎥⎣ ⎦
arbitrary (infinitely many solutions); 1x = − , 1y z= − , z arbitrary
27. 2 4 2 05 3 0 0
−⎡ ⎤⎢ ⎥⎣ ⎦
1 112
R R∗ = 1 2 1 05 3 0 0
−⎡ ⎤⎢ ⎥⎣ ⎦
( )2 2 15R R R∗ = + − 1 2 1 00 7 5 0
−⎡ ⎤⎢ ⎥−⎣ ⎦
2 217
R R∗ = − 1 2 1 0
50 1 07
−⎡ ⎤⎢ ⎥⎢ ⎥−⎣ ⎦
( )1 1 22R R R∗ = + − RREF
31 0 0750 1 07
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
nonunique solutions; 37
x z= − , 57
y z= , z is arbitrary
28. 1 1 2 12 3 1 25 4 2 4
− −⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
( )
( )
2 2 1
3 3 1
2
5
R R R
R R R
∗
∗
= + −
= + −
1 1 2 10 5 5 00 9 12 1
− −⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
2 215
R R∗ = 1 1 2 10 1 1 00 9 12 1
− −⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
( )
1 1 2
3 3 29
R R R
R R R
∗
∗
= +
= + −
1 0 1 10 1 1 00 0 3 1
−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
3 313
R R∗ = 1 0 1 10 1 1 0
10 0 13
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥⎢ ⎥
−⎢ ⎥⎣ ⎦
( )
1 1 3
2 2 31
R R R
R R R
∗
∗
= +
= + −
RREF
21 0 0310 1 0310 0 13
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
unique solution; 23
x = , 13
y = , 13
z = −
220 CHAPTER 3 Linear Algebra
29. 1 4 5 02 1 8 9
−⎡ ⎤⎢ ⎥−⎣ ⎦
( )2 2 12R R R∗ = + − 1 4 5 00 9 18 9
−⎡ ⎤⎢ ⎥−⎣ ⎦
2 219
R R∗ = − 1 4 5 00 1 2 1
−⎡ ⎤⎢ ⎥− −⎣ ⎦
( )1 1 24R R R∗ = + − RREF 1 0 3 40 1 2 1⎡ ⎤⎢ ⎥− −⎣ ⎦
nonunique solutions; 1 34 3x x= − , 2 31 2x x= − + , 3x is arbitrary
30. 1 0 1 22 3 5 43 2 1 4
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
( )
( )
2 2 1
3 3 1
2
3
R R R
R R R
∗
∗
= + −
= + −
1 0 1 20 3 3 00 2 4 2
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥− −⎣ ⎦
2 213
R R∗ = − 1 0 1 20 1 1 00 2 4 2
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥− −⎣ ⎦
( )3 3 22R R R∗ = + − 1 0 1 20 1 1 00 0 2 2
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥− −⎣ ⎦
3 312
R R∗ = − 1 0 1 20 1 1 00 0 1 1
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥⎣ ⎦
( )1 1 3
2 2 3
1R R R
R R R
∗
∗
= + −
= +
RREF 1 0 0 10 1 0 10 0 1 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
unique solution; 1x y z= = =
31. 1 1 1 01 1 0 01 2 1 0
−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
( )
( )
2 2 1
3 3 1
1
1
R R R
R R R
∗
∗
= + −
= + −
1 1 1 00 2 1 00 3 2 0
−⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
2 212
R R∗ =
1 1 1 010 1 02
0 3 2 0
−⎡ ⎤⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
( )
1 1 2
3 3 23
R R R
R R R
∗
∗
= +
= + −
11 0 0210 1 0210 0 02
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
( )3 32R R∗ = −
11 0 0210 1 02
0 0 1 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
1 1 3
2 2 3
12
12
R R R
R R R
∗
∗
⎛ ⎞= + −⎜ ⎟⎝ ⎠
= +
RREF 1 0 0 00 1 0 00 0 1 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
unique solution; 0x y z= = =
SECTION 3.2 Systems of Linear Equations 221
32. 1 1 2 02 1 1 04 1 5 0
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥⎣ ⎦
( )
( )
2 2 1
3 3 1
2
4
R R R
R R R
∗
∗
= + −
= + −
1 1 2 00 3 3 00 3 3 0
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥− −⎣ ⎦
2 213
R R∗ = − 1 1 2 00 1 1 00 3 3 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥− −⎣ ⎦
( )
( )
1 1 2
3 3 2
1
3
R R R
R R R
∗
∗
= + −
= +
RREF 1 0 1 00 1 1 00 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
nonunique solutions; , x z y z= − = − , z is arbitrary
33. 1 1 2 12 1 1 24 1 5 4
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥⎣ ⎦
( )
( )
2 2 1
3 3 1
2
4
R R R
R R R
∗
∗
= + −
= + −
1 1 2 10 3 3 00 3 3 0
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥− −⎣ ⎦
2 213
R R∗ = − 1 1 2 10 1 1 00 3 3 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥− −⎣ ⎦
( )
( )
1 1 2
3 3 2
1
3
R R R
R R R
∗
∗
= + −
= +
RREF 1 0 1 10 1 1 00 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
nonunique solutions; 1x z= − , y z= − , z is arbitrary
34.
2 22 4 3 0
6 4 24
x y zx y zx y z
x y
+ + =− − =
− + − =− =
1 2 1 22 4 3 01 6 4 21 1 0 4
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥− −⎢ ⎥−⎣ ⎦
*2 1 2
*4 2 4
2R R R
R R R
= − +
= − +
1 2 1 22 4 3 20 8 5 20 3 1 2
⎡ ⎤⎢ ⎥− − −⎢ ⎥⎢ ⎥− − −⎢ ⎥− −⎣ ⎦
*3 2 3
*4 2 4
38
R R R
R R R
= +
= − +
1 2 1 20 8 5 20 0 8 2
7 50 08 4
⎡ ⎤⎢ ⎥− − −⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
*2 2
*3 3
*4 4
1818
87
R R
R R
R R
= −
= −
=
1 2 1 25 10 18 4
10 0 14
100 0 17
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥
−⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
Clearly inconsistent at this point so the RREF =
1 0 0 00 1 0 00 0 1 00 0 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
222 CHAPTER 3 Linear Algebra
35.
2 24
2 2 03 2
x x zx y
x y zy z
+ + =− =
− + =+ = −
1 2 1 21 1 0 42 1 2 00 3 1 2
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎢ ⎥−⎣ ⎦
*2 1 2
*3 1 32
R R R
R R R
= − +
= − +
1 2 1 21 3 1 20 5 0 40 3 1 2
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥− −⎢ ⎥−⎣ ⎦
*3 3
*4 2 4
15
R R
R R R
= −
= +
1 2 1 21 3 1 2
40 1 05
0 0 0 0
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
*4 3R R↔
1 2 1 240 1 05
0 3 1 20 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥− −⎢ ⎥⎢ ⎥⎣ ⎦
*1 2 1*3 2 3
2
3
R R R
R R R
= − +
= +
21 0 1540 1 05
220 0 15
0 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥⎣ ⎦
*1 1 3
*3 3
R R R
R R
= +
= −
241 0 0540 1 05
220 0 15
0 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥⎣ ⎦
There is a unique solution: x = 24 4,5 5
y = and z = 225
− .
36. 3 4
2 3 4
2 4 1 3 2x x x
x x x+ − =+ − =
1 0 2 4 10 1 1 3 2
−⎡ ⎤⎢ ⎥−⎣ ⎦
is in RREF
∴ infinitely many solutions x1 = −2r + 4s + 1 x2 = −r + 3s + 2, r, s ∈ x3 = r, x4 = s
Using the Nonhomogenous Principle
37. In Problem 24, 22⎡ ⎤⎢ ⎥⎣ ⎦
is a unique solution so W = { }0 and
x = 22⎡ ⎤⎢ ⎥⎣ ⎦
+ 0
SECTION 3.2 Systems of Linear Equations 223
38. In Problem 25, 36⎡ ⎤⎢ ⎥⎣ ⎦
is a unique solution so W = { }0 and
x = 36⎡ ⎤⎢ ⎥⎣ ⎦
+ 0
39. In Problem 26, infinitely many solutions, 1
1 zz
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥⎣ ⎦
, and W = 01 :
1r
⎧ ⎫⎡ ⎤⎪ ⎪⎢ ⎥− ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎣ ⎦⎩ ⎭
hence x = 1 01 10 1
r−⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥+ −⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
for any r ∈
40. In Problem 27, (already a homogeneous system), infinitely many solutions,
W = 3
5 :7
r r⎧ ⎫−⎡ ⎤⎪ ⎪⎢ ⎥ ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎣ ⎦⎩ ⎭
and x = 000
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
+ 357
r−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
for any r ∈
41. In Problem 28,
231313
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
is a unique solution so that W = { }0 and =x
231313
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
+ 0
42. In Problem 29, infinitely many solutions: x1 = 4 − 3x3 x2 = −1 + 2x3, x3 arbitrary
W = 32 :1
r r⎧ ⎫−⎡ ⎤⎪ ⎪⎢ ⎥ ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎣ ⎦⎩ ⎭
410
⎡ ⎤⎢ ⎥= −⎢ ⎥⎢ ⎥⎣ ⎦
x + 321
r−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
for any r ∈
43. In Problem 30, unique solution 111
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
so W = { }0 and
111
⎡ ⎤⎢ ⎥= +⎢ ⎥⎢ ⎥⎣ ⎦
x 0
224 CHAPTER 3 Linear Algebra
44. In Problem 31, unique solution 000
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
so W = { }0 and
= + =x 0 0 0 . 45. In Problem 32, infinitely many solutions x = −z, y = −z, z arbitrary, so
W = 11 :1
r r⎧ ⎫−⎡ ⎤⎪ ⎪⎢ ⎥− ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎣ ⎦⎩ ⎭
1
= 11
r−⎡ ⎤⎢ ⎥+ −⎢ ⎥⎢ ⎥⎣ ⎦
x 0 for any r ∈
46. In Problem 33, nonunique solutions: x = 1 − z, y = −z, z arbitrary,
W = 11 :1
r r⎧ ⎫−⎡ ⎤⎪ ⎪⎢ ⎥− ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎣ ⎦⎩ ⎭
1 10 10 1
r−⎡ ⎤ ⎡ ⎤
⎢ ⎥ ⎢ ⎥= + −⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
x for any r ∈
47. In Problem 34, W =
00
:01
r r
⎧ ⎫⎡ ⎤⎪ ⎪⎢ ⎥⎪ ⎪⎢ ⎥ ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎪ ⎪⎣ ⎦⎩ ⎭
. However, the system is inconsistent so that there is no xp
and no general solution.
48. In Problem 35, there is a unique solution: x = 245
, y = 45
, z = 225
− so
W = { }0 and
24545
225
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥= +⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
x 0 .
SECTION 3.2 Systems of Linear Equations 225
49. In Problem 36, there are infinitely many solutions: x1 = 1 − 2x3 + 4x4, x2 = 2 − x3 + 3x4,
x3 is arbitrary, x4 is arbitrary.
W =
2 41 3
: ,1 00 1
r s r s
⎧ ⎫−⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥−⎪ ⎪⎢ ⎥ ⎢ ⎥+ ∈⎨ ⎬⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦⎩ ⎭
,
1 2 42 1 30 1 00 0 1
r s
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎢ ⎥ ⎢ ⎥ ⎢ ⎥= + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
x for ,r s∈
The RREF Example
50. Starting with the augmented matrix, we carry out the following steps
1 0 2 0 1 4 80 2 0 2 4 6 60 0 1 0 0 2 23 0 0 1 5 3 120 2 0 0 0 0 6
⎡ ⎤⎢ ⎥− − −⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥− −⎣ ⎦
( )
2 2
4 4 1
12
3
R R
R R R
∗ ∗
∗
=
= + −
1 0 2 0 1 4 80 1 0 1 2 3 30 0 1 0 0 2 20 0 6 1 2 9 120 2 0 0 0 0 6
⎡ ⎤⎢ ⎥− − −⎢ ⎥⎢ ⎥⎢ ⎥− − −⎢ ⎥⎢ ⎥− −⎣ ⎦
(We leave the next steps for the reader)
RREF =
1 0 0 0 1 0 40 1 0 0 0 0 30 0 1 0 0 2 20 0 0 1 2 3 00 0 0 0 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
More Equations Than Variables
51. Converting the augmented matrix to RREF yields
3 5 0 1 1 0 0 23 7 3 8 0 1 0 10 5 0 5 0 0 1 30 2 3 7 0 0 0 01 4 1 1 0 0 0 0
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥−⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥→−⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
consistent system; unique solution 2, 1x y= = − , 3z = .
Consistency 52. A homogeneous system =Ax 0 always has at least one solution, namely the zero vector =x 0 .
226 CHAPTER 3 Linear Algebra
Homogeneous Systems 53. The equations are
2 5 0
2 0w x z
y z− + =
+ =
If we let and x r z s= = , we can solve 2y s= − , 2 5w r s= − . The solution is a plane in 4R given by
2 5 2 51 0
2 0 20 1
w r sx r
r sy sz s
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥= = +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
,
for r, s any real numbers. 54. The equations are
2 0
0x z
y+ =
=
If we let ,z s= we have 2x s= − and hence the solution is a line in 3R given by
2 20 0
1
x sy sz s
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥= =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
55. The equation is
1 2 3 44 3 0 0x x x x− + + = . If we let 2x r= , 3x s= , 4x t= , we can solve
1 2 34 3 4 3x x x r s= − = − . Hence
1
2
3
4
4 3 4 3 01 0 00 1 00 0 1
x r sx r
r s tx sx t
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥= = + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎣ ⎦
where r, s, t are any real numbers.
Making Systems Inconsistent
56. 1 0 30 2 41 0 5
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
*2 2
*3 1 3
1 2
R R
R R R
=
= − +
1 0 30 1 20 0 2
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
*3 3
12
R R= 1 0 30 1 20 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
Rank = 3 because every column is a pivot column.
SECTION 3.2 Systems of Linear Equations 227
57. 4 51 63 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
Rank = 2
4 51 63 1
abc
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
2 1R R↔ 1 64 53 1
bac
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
*2 1 2*3 1 3
4 +R
3
R R
R R R
= −
= − +
1 00 5 50 1 3
bb ab c
⎡ ⎤⎢ ⎥− +⎢ ⎥⎢ ⎥− − +⎣ ⎦
*3 3
2 3
R RR R
= −↔
1 60 1 30 5 5
bb cb a
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥− +⎣ ⎦
*3 2 35R R R= − +
1 60 1 30 0 20 5
bb c
a b c
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥− +⎣ ⎦
Thus the system is inconsistent for all vectors abc
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
for which a − 20b + 5c ≠ 0.
58. Find the RREF: 1 2 11 0 30 1 2
−⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
1 2*3 3
R R
R R
↔
= −
1 0 31 2 10 1 2
−⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
*2 1 2R R R= − +
1 0 30 2 20 1 2
−⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
*2 2
12
R R= 1 0 30 1 10 1 2
−⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
*3 2 3R R R= − +
1 0 30 1 10 0 1
−⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
*1 3 1
*2 3 2*3 3
3R R R
R R R
R R
= − +
= − +
= −
1 0 00 1 00 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
∴ rank A = 3
59. Find the RREF: 1 1 22 1 14 1 5
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥⎣ ⎦
*2 1 2*3 1 3
2
4
R R R
R R R
= − +
= − + 1 1 20 3 30 3 3
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥− −⎣ ⎦
*3 2 3
*2 3
13
R R R
R R
= − +
= −
1 1 20 1 10 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
∴ rank A = 2
For arbitrary a, b, and c:
1 1 22 1 14 1 5
abc
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥⎣ ⎦
*2 1 2*3 1 3
2
4
R R R
R R R
= − +
= − + 1 1 20 3 3 20 3 3 4
aa ba c
⎡ ⎤⎢ ⎥− − − +⎢ ⎥⎢ ⎥− − − +⎣ ⎦
228 CHAPTER 3 Linear Algebra
*3 2 3R R R= − +
1 1 20 3 3 20 0 0 2
aa b
a b c
⎡ ⎤⎢ ⎥− − − +⎢ ⎥⎢ ⎥− − +⎣ ⎦
Any vector abc
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
for which −2a − b + c ≠ 0
60. 1 1 11 1 01 2 1
−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
*2 1 2*3 1 3
R R R
R R R
= − +
= − + 1 1 10 2 10 3 2
−⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
*2 2
12
R R=
1 1 110 12
0 3 2
−⎡ ⎤⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
*1 2 1*3 2 33
R R R
R R R
= +
= − +
11 0210 1220 03
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
*3 2
32
R R=
11 0210 18
0 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
*1 3 1
*2 3 2
1218
R R R
R R R
= − +
= − +
1 0 00 1 00 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
∴ rank A = 3
Seeking Consistency
61. 4k ≠ 62. Any k will produce a consistent system 63. 1k ≠ ± 64. The system is inconsistent for all k because the last two equations are parallel and distinct.
65. 1 0 0 1 20 2 4 0 61 1 2 1 12 2 4 2 k
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥− − −⎢ ⎥⎣ ⎦
*2 2
*3 1 3*4 1 4
12
2
R R
R R R
R R R
=
= − +
= − +
1 0 0 1 20 1 2 0 30 1 2 0 30 2 4 0 4 k
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥− − −⎢ ⎥− +⎣ ⎦
*3 2 3*4 1 42
R R R
R R R
= +
= − +
1 0 0 1 20 1 2 0 30 0 0 0 00 0 0 0 10 k
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥− +⎣ ⎦
Consistent if k = 10.
SECTION 3.2 Systems of Linear Equations 229
Not Enough Equations 66. a. 2 1 0 0 3
1 1 1 1 32 3 4 4 9
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
1 2R R↔
1 1 1 1 32 1 0 0 32 3 4 4 9
−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
*2 1 2
*3 1 3
2
2
R R R
R R R
= − +
= − +
1 1 1 1 30 3 2 2 30 1 2 2 3
−⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥−⎣ ⎦
2 3R R↔
1 1 1 1 30 1 2 2 30 3 2 2 3
−⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥−⎣ ⎦
*3 2 33R R R= − +
1 0 1 1 60 1 2 2 30 0 8 8 12
− −⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥−⎣ ⎦
*3 3
18
R R=
1 0 1 1 60 1 2 2 3
30 0 1 12
⎡ ⎤⎢ ⎥− −⎢ ⎥
− −⎢ ⎥⎢ ⎥
−⎢ ⎥⎣ ⎦
This matrix is in row-echelon form and has 3 pivot colums Rank = 3 Consequently, there are infinitely many solutions because it represents a consistent
system. b. 2 1 0 0 3
1 1 1 1 31 2 1 1 6
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥− −⎣ ⎦
1 2R R↔
1 1 1 1 32 1 0 0 31 2 1 1 6
−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥− − −⎣ ⎦
*2 1 2
*3 1 2
2R R R
R R R
= − +
= − +
1 1 1 1 30 3 2 2 30 3 2 2 9
−⎡ ⎤⎢ ⎥− − −⎢ ⎥⎢ ⎥− − −⎣ ⎦
1 1 1 1 30 3 2 2 30 0 0 0 6
−⎡ ⎤⎢ ⎥− − −⎢ ⎥⎢ ⎥−⎣ ⎦
Clearly inconsistent, no solutions.
Not Enough Variables 67. Matrices with the following RREF’s
1 00 10 0 00 0 0
ab
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
,
1 0 10 0 00 0 00 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
and
1 00 00 0 00 0 0
ab
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
, where a and b are nonzero real numbers,
will have, respectively, a unique solution, infinitely many solutions, and no solutions.
True/False Questions
68. a) False. 1 23 0⎡ ⎤⎢ ⎥⎣ ⎦
and 1 00 2⎡ ⎤⎢ ⎥⎣ ⎦
have the same RREF.
b) False. A = 10⎡ ⎤⎢ ⎥⎣ ⎦
has rank 1 [ ]1 20 1
a⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⇒
1 20 1aa==
contradiction
∴ no solutions
230 CHAPTER 3 Linear Algebra
c) False. Consider the matrix1 11 1⎡ ⎤
= ⎢ ⎥⎣ ⎦
A and 12⎡ ⎤
= ⎢ ⎥⎣ ⎦
b
Then 1 1 11 1 2⎡ ⎤⎢ ⎥⎣ ⎦
has RREF 1 1 10 0 1⎡ ⎤⎢ ⎥⎣ ⎦
so the system is inconsistent.
However, the system =Ax c where 22⎡ ⎤
= ⎢ ⎥⎣ ⎦
c is consistent.
Equivalence of Systems
69. Inverse of i jR R↔ : The operation that puts the system back the way it was is j iR R↔ . In other
words, the operation 3 1R R↔ will undo the operation 1 3R R↔ .
Inverse of i iR cR= : The operation that puts the system back the way it was is 1i iR R
c= . In other
words, the operation 1 113
R R= will undo the operation 1 13R R= .
Inverse of i i jR R cR= + : The operation that puts the system back is i i jR R cR= − . In other words
i i jR R cR= − will undo the operation i i jR R cR= + .This is clear because if we add jcR to row i and then subtract jcR from row i, then row i will be unchanged. For example,
1
2
1 2 32 1 1
RR
⎡ ⎤⎢ ⎥⎣ ⎦
, 1 1 22R R R∗ = + , 5 4 52 1 1⎡ ⎤⎢ ⎥⎣ ⎦
, ( )1 1 22R R R∗ = + − , 1 2 32 1 1⎡ ⎤⎢ ⎥⎣ ⎦
.
Homogeneous versus Nonhomogeneous
70. For the homogeneous equation of Problem 32, we can write the solution as
11 ,1
c c−⎡ ⎤⎢ ⎥= − ∈⎢ ⎥⎢ ⎥⎣ ⎦
hx
where c is an arbitrary constant. For the nonhomogeneous equation of Problem 33, we can write the solution as
1 11 0 ,1 0
xy c cz
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥= = − + ∈⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
x .
In other words, the general solution of the nonhomogeneous algebraic system, Problem 33, is the
sum of the solutions of the associated homogeneous equation plus a particular solution.
Solutions in Tandem 71. There is nothing surprising here. By placing the two right-hand sides in the last two columns of
the augmented matrix, the student is simply organizing the material effectively. Neither of the last two columns affects the other column, so the last two columns will contain the respective solutions.
SECTION 3.2 Systems of Linear Equations 231
Tandem with a Twist 72. (a) We place the right-hand sides of the two systems in the last two columns of the
augmented matrix
1 1 0 3 50 2 1 2 4⎡ ⎤⎢ ⎥⎣ ⎦
.
Reducing this matrix to RREF, yields
11 0 2 3210 1 1 22
⎡ ⎤−⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
.
Hence, the first system has solutions 122
x z= + , 112
y z= − , z arbitrary, and the second
system has solutions 132
x z= + , 122
y z= − , z arbitrary.
(b) If you look carefully, you will see that the matrix equation
11 12
21 22
31 32
1 1 0 3 50 2 1 2 4
x xx xx x
⎡ ⎤⎡ ⎤ ⎡ ⎤⎢ ⎥ =⎢ ⎥ ⎢ ⎥⎢ ⎥⎣ ⎦ ⎣ ⎦⎢ ⎥⎣ ⎦
is equivalent to the two systems of equations
11
21
31
12
22
32
1 1 0 30 2 1 2
1 1 0 5.
0 2 1 4
xxx
xxx
⎡ ⎤⎡ ⎤ ⎡ ⎤⎢ ⎥ =⎢ ⎥ ⎢ ⎥⎢ ⎥⎣ ⎦ ⎣ ⎦⎢ ⎥⎣ ⎦
⎡ ⎤⎡ ⎤ ⎡ ⎤⎢ ⎥ =⎢ ⎥ ⎢ ⎥⎢ ⎥⎣ ⎦ ⎣ ⎦⎢ ⎥⎣ ⎦
We saw in part (a) that the solution of the system on the left was
11 31122
x x= + , 21 31112
x x= − , 31x arbitrary,
and the solution of the system on the right was
12 32132
x x= + , 22 32122
x x= − , 32x arbitrary.
Putting these solutions in the columns of our unknown matrix X and calling 31x α= ,
32x β= , we have
11 12
21 22
31 32
1 12 32 21 11 22 2
x xx xx x
α β
α β
α β
⎡ ⎤+ +⎢ ⎥⎡ ⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥= = − −⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦
⎢ ⎥⎢ ⎥⎣ ⎦
X .
232 CHAPTER 3 Linear Algebra
Two Thousand Year Old Problem 73. Letting 1A and 2A be the areas of the two fields in square yards, we are given the two equations
1 2
1 2
1800 square yards2 1 1100 bushels 3 2
A A
A A
+ =
+ =
The areas of the two fields are 1200 and 600 square yards.
Computerizing 74. 2 2× Case. To solve the 2 2× system
11 1 12 2 1
21 1 22 2 2
a x a x ba x a x b
+ =+ =
we start by forming the augmented matrix
11 12 1
21 22 2
a a ba a b⎡ ⎤
⎡ ⎤ = ⎢ ⎥⎣ ⎦⎣ ⎦
A b .
Step 1: If 11 1a ≠ , factor it out of row 1. If 11 0a = , interchange the rows and then factor the new element in the 11 position out of the first row. (This gives a 1 in the first position of the first row.)
Step 2: Subtract from the second row the first row times the element in the 21 position of the new matrix. (This gives a zero in the first position of the second row).
Step 3: Factor the element in the 22 position from the second row of the new matrix. If this element is zero and the element in the 23 position is nonzero, there are no solutions. If both this element is zero and the element in the 23 position is zero, then there are an infinite number of solutions. To find them write out the equation corresponding to the first row of the final matrix. (This gives a 1 in the first nonzero position of the second row).
Step 4: Subtract from the first row the second row times the element in the 12 position of the new matrix. This operation will yield a matrix of the form matrix
1
2
1 00 1
rr
⎡ ⎤⎢ ⎥⎣ ⎦
where 1 1x r= , 2 2x r= . (This gives a zero in the second position of the first row.) 75. The basic idea is to formalize a strategy like that used in Example 3. The augmented matrix for
=Ax b is
11 12 13 1
21 22 23 2
31 32 33 3
a a a ba a a ba a a b
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
.
A pseudocode might begin:
1. To get a one in first place in row 1, multiply every element of row 1 by 11
1a
.
2. To get a zero in first place in row 2, replace row 2 by
( )21row 2 row 1 .a−
SECTION 3.2 Systems of Linear Equations 233
Electrical Circuits 76. (a) There are four junctions in this multicircuit, and Kirchhoff’s current law states that the
sum of the currents flowing in and out of any junction is zero. The given equations simply state this fact for the four junctions 1J , 2J , 3J , and 4J , respectively. Keep in mind that if a current is negative in sign, then the actual current flows in the direction opposite the indicated arrow.
(b) The augmented system is
1 1 1 0 0 0 00 1 0 1 1 0 00 0 1 1 0 1 01 0 0 0 1 1 0
− −⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥− −⎢ ⎥−⎣ ⎦
.
Carrying out the three elementary row operations, we can transform this system to RREF
1 0 0 0 1 1 00 1 0 1 1 0 00 0 1 1 0 1 00 0 0 0 0 0 0
− −⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥− −⎢ ⎥⎣ ⎦
.
Solving for the lead variables 1I , 2I , 3I in terms of the free variables 4I , 5I , 6I , we have 1 5 6I I I= + , 2 4 5I I I= − + , 3 4 6I I I= + . In matrix form, this becomes
1
2
34 5 6
4
5
6
0 1 11 1 01 0 11 0 00 1 00 0 1
III
I I IIII
⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
= + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎣ ⎦
where 1I , 2I , and 3I are arbitrary. In other words, we need three of the six currents to uniquely specify the remaining ones.
234 CHAPTER 3 Linear Algebra
More Circuit Analysis 77. 1 2 3
1 2 3
00
I I II I I
− − =− + + =
78. 1 2 3 4
1 2 3 4
00
I I I II I I I
− − − =− + + + =
79. 1 2 3 4
1 2 5
3 4 5
000
I I I II I I
I I I
− − − =− + + =
+ − =
80. 1 2 3
2 4 5
3 4 6
1 5 6
0000
I I II I I
I I II I I
− − =− − =+ − =
− + + =
Suggested Journal Entry I
81. Student Project
Suggested Journal Entry II 82. Student Project
SECTION 3.3 The Inverse of a Matrix 235
3.3 The Inverse of a Matrix
Checking Inverses
1. ( )( ) ( )( ) ( )( ) ( )( )( )( ) ( )( ) ( )( ) ( )( )5 1 3 2 5 3 3 55 3 1 3 1 02 1 1 2 2 3 1 52 1 2 5 0 1
⎡ − + + − ⎤−⎡ ⎤ ⎡ ⎤ ⎡ ⎤= =⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥− + + −−⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎣ ⎦
2. ( )( ) ( ) ( ) ( )
( )( ) ( ) ( ) ( )
1 1 11 2 0 4 2 402 4 1 04 2 422 0 1 1 0 11 1 12 0 0 2 0
4 4 4 2 4
⎡ ⎤⎛ ⎞⎡ ⎤ + − − + −⎜ ⎟⎢ ⎥⎢ ⎥−⎡ ⎤ ⎡ ⎤⎝ ⎠⎢ ⎥= =⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥⎛ ⎞⎣ ⎦ ⎣ ⎦⎢ ⎥− + − +⎢ ⎥⎜ ⎟⎢ ⎥⎣ ⎦ ⎝ ⎠⎣ ⎦
3. Direct multiplication as in Problems 1–2. 4. Direct multiplication as in Problems 1–2.
Matrix Inverses 5. We reduce ⎡ ⎤⎣ ⎦A I to RREF.
2 0 1 01 1 0 1⎡ ⎤⎢ ⎥⎣ ⎦
1 112
R R∗ = 11 0 02
1 1 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎣ ⎦
( )2 2 11R R R∗ = + −
11 0 0210 1 12
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
.
Hence, 1
1 021 12
−
⎡ ⎤⎢ ⎥
= ⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
A .
6. We reduce ⎡ ⎤⎣ ⎦A I to RREF.
1 3 1 02 5 0 1⎡ ⎤⎢ ⎥⎣ ⎦
( )2 2 12R R R∗ = + − 1 3 1 00 1 2 1⎡ ⎤⎢ ⎥− −⎣ ⎦
( )2 21R R∗ = − 1 3 1 00 1 2 1⎡ ⎤⎢ ⎥−⎣ ⎦
( )1 1 23R R R∗ = + − 1 0 5 30 1 2 1
−⎡ ⎤⎢ ⎥−⎣ ⎦
.
Hence, 1 5 32 1
− −⎡ ⎤= ⎢ ⎥−⎣ ⎦
A .
236 CHAPTER 3 Linear Algebra
7. Starting with
0 1 1 1 0 05 1 1 0 1 03 3 3 0 0 1
⎡ ⎤⎢ ⎥⎡ ⎤ = −⎣ ⎦ ⎢ ⎥⎢ ⎥− −⎣ ⎦
A I
1 2R R↔ 5 1 1 0 1 00 1 1 1 0 03 3 3 0 0 1
−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥− −⎣ ⎦
1 115
R R∗ =
1 1 11 0 05 5 5
0 1 1 1 0 03 3 3 0 0 1
⎡ ⎤−⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥− −⎢ ⎥⎣ ⎦
( )3 3 13R R R∗ = + −
1 1 11 0 05 5 5
0 1 1 1 0 018 12 30 0 15 5 5
⎡ ⎤−⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥
− − −⎢ ⎥⎣ ⎦
1 1 2
3 3 2
15
185
R R R
R R R
∗
∗
⎛ ⎞= + −⎜ ⎟⎝ ⎠
= +
2 1 11 0 05 5 5
0 1 1 1 0 06 18 30 0 15 5 5
⎡ ⎤− −⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥
−⎢ ⎥⎣ ⎦
3 356
R R∗ =
2 1 11 0 05 5 5
0 1 1 1 0 01 50 0 1 32 6
⎡ ⎤− −⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥
−⎢ ⎥⎣ ⎦
( )
1 1 3
2 2 3
25
1
R R R
R R R
∗
∗
= +
= + −
11 0 0 1 03
1 50 1 0 22 61 50 0 1 32 6
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥− −⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
.
Hence, 1
11 03
1 522 61 532 6
−
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥= − −⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
A .
SECTION 3.3 The Inverse of a Matrix 237
8. Interchanging the first and third rows, we get
1 1
0 0 1 1 0 0 1 0 0 0 0 1 0 0 10 1 0 0 1 0 0 1 0 0 1 0 so 0 1 01 0 0 0 0 1 0 0 1 1 0 0 1 0 0
− −
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎡ ⎤⎡ ⎤ = → = =⎣ ⎦ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
A I I A A
9. Dividing the first row by k gives
1
11 0 0 0 00 0 1 0 00 1 0 0 1 0 0 1 0 0 1 00 0 1 0 0 1 0 0 1 0 0 1
k k−
⎡ ⎤⎢ ⎥⎡ ⎤⎢ ⎥⎢ ⎥ ⎡ ⎤⎡ ⎤ = → = ⎢ ⎥⎣ ⎦ ⎢ ⎥ ⎣ ⎦⎢ ⎥⎢ ⎥⎣ ⎦ ⎢ ⎥⎣ ⎦
A I I A
Hence 1
1 0 0
0 1 00 0 1
k−
⎡ ⎤⎢ ⎥⎢ ⎥
= ⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
A .
10.
1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 01 1 0 0 1 0 0 1 1 1 1 0 0 1 1 1 1 00 2 1 0 0 1 0 2 1 0 0 1 0 2 1 0 0 1
1 0 1 1 0 0 1 0 1 1 0 0 1 0 0 1 2 10 1 1 1 1 0 0 1 0 1 1 1 0 1 0 1 1 10 0 1 2 2 1 0 0 1 2 2 1 0 0 1 2 2 1
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎡ ⎤ = − → − − − → −⎣ ⎦ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥→ − → − → −⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥− − − − − −⎣ ⎦ ⎣ ⎦ ⎣ ⎦
A I
Hence 1
1 2 11 1 12 2 1
−
−⎡ ⎤⎢ ⎥= −⎢ ⎥⎢ ⎥− −⎣ ⎦
A .
11.
1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 00 1 0 0 1 0 0 0 1 0 0 0 1 00 0 1 0 0 0 1 0 0 0 1 0 0 0 1 00 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1
k k⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥−⎢ ⎥ ⎢ ⎥→⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
Hence 1
1 0 0 00 1 00 0 1 00 0 0 1
k−
⎡ ⎤⎢ ⎥−⎢ ⎥=⎢ ⎥⎢ ⎥⎣ ⎦
A .
238 CHAPTER 3 Linear Algebra
12.
1 0 1 1 1 0 0 0 1 0 1 1 1 0 0 0 1 0 1 1 1 0 0 00 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 1 0 1 1 0 1 0
1 1 1 0 0 0 1 0 0 1 0 1 1 0 1 0 0 0 1 0 0 1 0 01 0 0 2 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1 1 0 0 1
1 0 0 1 1 1 0 0 1 0 0 0 20 1 0 1 1 0 1 00 0 1 0 0 1 0 00 0 0 1 1 1 0 1
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢ ⎥ ⎢ ⎥→ →⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢ ⎥ ⎢ ⎥− − − −⎣ ⎦ ⎣ ⎦ ⎣ ⎦
− −⎡ ⎤⎢ ⎥− −⎢ ⎥→ →⎢ ⎥⎢ ⎥−⎣ ⎦
1
2 0 1 2 2 0 10 1 0 0 2 1 1 1 2 1 1 1
so 0 0 1 0 0 1 0 0 0 1 0 00 0 0 1 1 1 0 1 1 1 0 1
−
− − −⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢ ⎥=⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥− −⎣ ⎦ ⎣ ⎦
A
13. Starting with the augmented matrix
1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 00 1 0 0 0 1 0 0 0 1 0 0 0 1 0 00 1 2 0 0 0 1 0 0 1 2 0 0 0 1 01 1 3 3 0 0 0 1 0 1 3 3 1 0 0 1
1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 00 1 0 0 0 1 0 0 0 1 0 0 0 1 0 00 1 2 0 0 0 1 0 0 0 2 0 0 1 1 00 1 3 3 1 0 0 1 0 0 3 3 1 1 0 1
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢ ⎥⎡ ⎤ = →⎣ ⎦ ⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢ ⎥− − −⎣ ⎦ ⎣ ⎦⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢ ⎥→ →⎢ ⎥ ⎢ ⎥− −⎢ ⎥ ⎢− − − −⎣ ⎦ ⎣ ⎦
A I
1 0 0 0 1 0 0 01 0 0 0 1 0 0 00 1 0 0 0 1 0 00 1 0 0 0 1 0 0
1 11 1 0 0 1 0 0 00 0 1 0 0 0 2 22 21 30 0 3 3 1 1 0 1 0 0 0 3 1 12 2
1 0 0 0 1 0 0 00 1 0 0 0 1 0 0
1 1 .0 0 1 0 0 02 2
1 1 1 10 0 0 13 6 2 3
⎥
⎡ ⎤⎡ ⎤ ⎢ ⎥−⎢ ⎥ ⎢ ⎥−⎢ ⎥ ⎢ ⎥→ →⎢ ⎥ − −⎢ ⎥− −⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥− − −⎢ ⎥⎣ ⎦ ⎢ ⎥⎣ ⎦⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥→ − −⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
Hence 1
1 0 0 00 1 0 0
1 10 02 2
1 1 1 13 6 2 3
−
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥= − −⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
A .
SECTION 3.3 The Inverse of a Matrix 239
14.
0 1 2 1 1 0 0 04 0 1 2 0 1 0 00 1 0 0 0 0 1 00 2 0 1 0 0 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
1 2R R↔
4 0 1 2 0 1 0 00 1 2 1 1 0 0 00 1 0 0 0 0 1 00 2 0 1 0 0 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
3 2R R↔
4 0 1 2 0 1 0 00 1 0 1 0 0 1 00 1 2 0 1 0 0 00 2 0 1 0 0 0 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
*1 1
*3 2 3*4 2 4
14
2
R R
R R R
R R R
=
= − +
= − +
1 1 11 0 0 0 04 2 4
0 1 0 0 0 0 1 00 0 2 1 1 0 1 00 0 0 1 0 0 2 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥
−⎢ ⎥⎣ ⎦
*3 3
12
R R=
1 1 11 0 0 0 04 2 4
0 1 0 0 0 0 1 01 1 10 0 1 0 02 2 2
0 0 0 1 0 0 2 1
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥
−⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
*1 4 1
*3 4 3
12
12
R R R
R R R
− +
= − +
1 1 11 0 0 0 14 4 2
0 1 0 0 0 0 1 01 1 10 0 1 0 02 2 2
0 0 0 1 0 0 2 1
⎡ ⎤−⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥
−⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
*1 3 1
14
R R R= +
1 1 7 31 0 0 08 4 8 8
0 1 0 0 0 0 1 01 1 10 0 1 0 02 2 2
0 0 0 1 0 0 2 1
⎡ ⎤− −⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥
−⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
Hence A−1 =
1 1 7 38 4 8 8
0 0 1 01 1 102 2 20 0 2 1
⎡ ⎤− −⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥
−⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
240 CHAPTER 3 Linear Algebra
Inverse of the ×2 2 Matrix
15. Verify 1 1− −= =A A I AA . We have
1
1
01 10
01 10
d b a b ad bcc a c d ad bcad bc ad bc
a b d b ad bcc d c a ad bcad bc ad bc
−
−
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤= = =⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −− −⎣ ⎦ ⎣ ⎦ ⎣ ⎦
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤= = =⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −− −⎣ ⎦ ⎣ ⎦ ⎣ ⎦
A A I
AA I
Note that we must have 0ad bc= − ≠A .
Brute Force 16. To find the inverse of
1 31 2⎡ ⎤⎢ ⎥⎣ ⎦
,
we seek the matrix
a bc d⎡ ⎤⎢ ⎥⎣ ⎦
that satisfies
1 3 1 01 2 0 1
a bc d⎡ ⎤ ⎡ ⎤ ⎡ ⎤
=⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
Multiplying this out we get the equations
13 2 0
03 2 1.
a ba bc d
c d
+ =+ =+ =+ =
The top two equations involve a and b, and the bottom two involve c and d, so we write the two systems
1 1 13 2 0
1 1 0.
3 2 1
ab
cd
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎡ ⎤ ⎡ ⎤ ⎡ ⎤
=⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
Solving each system, we get
1
1
1 1 1 2 1 1 213 2 0 3 1 0 31
1 1 0 2 1 0 11 .3 2 1 3 1 1 11
ab
cd
−
−
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤= = =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥−−⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤= = =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −−⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
Because a and b are the elements in the first row of 1−A , and c and d are the elements in the second row, we have
1
1 1 3 2 31 2 1 1
−− −⎡ ⎤ ⎡ ⎤= =⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦
A .
SECTION 3.3 The Inverse of a Matrix 241
Finding Counterexamples
17. No. Consider 1 0 1 0 0 00 1 0 1 0 0
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ =⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦
which is not invertible.
18. No. Consider 0 0 0 0 0 00 1 0 1 0 1⎡ ⎤ ⎡ ⎤ ⎡ ⎤
=⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
Unique Inverse
19. We show that if B and C are both inverse of A, then =B C . Because B is an inverse of A, we can write =BA I . If we now multiply both sides on the right by C, we get
( ) = =BA C IC C . But then we have
( ) ( )= = =BA C B AC BI B , so =B C .
Invertible Matrix Method 20. Using the inverse found in Problem 6, yields
1 1
2
5 3 4 502 1 10 18
xx
− − −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤= = =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎣ ⎦
A b .
Solution by Invertible Matrix
21. Using the inverse found in Problem 7, yields
1
11 03 5 5
1 52 2 92 6
0 141 532 6
xyz
−
⎡ ⎤⎢ ⎥
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥= = − − = −⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
⎢ ⎥−⎢ ⎥⎣ ⎦
A b .
More Solutions by Invertible Matrices
22. A = 1 1 11 1 01 2 1
−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
Use row reduction to obtain 1
1 1 11 2 11 3 2
−
−⎡ ⎤⎢ ⎥= − −⎢ ⎥⎢ ⎥− −⎣ ⎦
A
1
4 31 20 1
−
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥= ⋅ = −⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦
x A
242 CHAPTER 3 Linear Algebra
23. 4 3 25 6 03 5 2
−⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥−⎣ ⎦
A Use row reduction to obtain
1
3 4 35 7 52 2 2
7 11 94 4 4
−
⎡ ⎤⎢ ⎥−⎢ ⎥⎢ ⎥= − −⎢ ⎥⎢ ⎥⎢ ⎥−⎣ ⎦
A 1
0 40 6 3410 35 5 302 110 18 23
4 4
−
⎡ ⎤⎢ ⎥− + −⎡ ⎤ ⎡ ⎤⎢ ⎥⎢ ⎥ ⎢ ⎥= ⋅ = − =⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦− +⎢ ⎥⎣ ⎦
x A
Noninvertible ×2 2 Matrices
24. If we reduce a bc d⎡ ⎤
= ⎢ ⎥⎣ ⎦
A to RREF we get
1
0
ba
ad bca
⎡ ⎤⎢ ⎥⎢ ⎥
−⎢ ⎥⎢ ⎥⎣ ⎦
,
which says that the matrix is invertible when 0ad bca−
≠ , or equivalently when ad bc≠ .
Matrix Algebra with Inverses
25. ( ) ( )1 11 1 1 1− −− − − −= =AB B A BA
26. ( ) ( ) ( )1 1 12 2 2 2− − −=B A B B B A
( ) ( )( )( ) ( ) ( )
1 1 1 1 1 1
2 2 21 1 1 2 2 1
(
where means
− − − − − −
− − − − − −
= =
= =
B BB AA) B B B A A
B B A B A A A
27. Suppose A(BA)−1 =x b
( )
( ) ( )
11
1 1
(−−
− −
⎡ ⎤= ⋅⎣ ⎦
= =
=
x A BA b
BA A b B AA b
Bb
28. ( )( ) ( ) ( )( )1 11 1 11 1 1 1 1 1− −− − −− − − − − −=A BA BA BA A BA
( )
( )( )
1 1
1 1
− −
− −
=
= =
AB BA A
A B B A A A
Question of Invertibility 29. To solve (A + B) =x b requires that A + B be invertible Then (A + B)−1(A + B) x = (A + B)−1 b so that x = (A + B)−1 b
SECTION 3.3 The Inverse of a Matrix 243
Cancellation Works 30. Given that =AB AC and A are invertible, we premultiply by 1−A , getting
1 1
.
− −===
A AB A ACIB ICB C
An Inverse
31. If A is an invertible matrix and =AB I , then we can premultiply each side of the equation by 1−A getting
( )( )
1 1
1 1
1 .
− −
− −
−
=
=
=
A AB A I
AA B A
B A
Making Invertible Matrices
32. 1 00 1 00 0 1
k = 1 so k may be any number.
33. 1 00 1 0
0 1
k
k = 1 − k2 k ≠ ±1
Products and Noninvertibility 34. a) Let A and B be n × n matrices such that BA = In. First we will show that A−1 exists by showing A x = 0 has a unique solution =x 0 . Suppose =Ax 0 Then = =BAx B0 0 so I =n x 0
=x 0 so that A−1 exists BA = In BAA−1 = InA−1 so B = A−1 ∴ AB = In b) Let A, B, be n × n matrices such that AB is invertible. We will show that A must be
invertible
AB invertible means that AB(AB)−1 = In so that ( )1( )−A B AB = In
By problem 34a, ( )1( )−B AB A = In so that A is invertible.
244 CHAPTER 3 Linear Algebra
Invertiblity of Diagonal Matrices 35. Proof for (⇒): (Contrapositive) Suppose D is a diagonal matrix with one diagonal element = 0, say aii = 0. Then D has a row of
zeros and consequently RREF (D) has at least one row of zeros. Therefore, D is not invertible.
Proof for (⇐): Let D be a diagonal matrix such that every aii ≠ 0. Then the diagonal matrix
B = [bii] such that bii = 1
iia is D−1.
That is:
1 00 1 0
0 1 0 10
iiii
nn
ii
aaa
a
⎡ ⎤⎢ ⎥⎡ ⎤ ⎡ ⎤⎢ ⎥ =⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎣ ⎦⎣ ⎦⎢ ⎥⎣ ⎦
= In
Invertiblity of Triangular Matrices
36. Proof for (⇒): (Contrapositive) Let T be an upper triangular matrix with at least one diagonal element = 0, say ajj. Then there is one column without a pivot. Therefore RREF (T) has a zero row.
Consequently T is not invertible.
Proof for (⇐): Let T be an upper n × n triangular matrix with no nonzero diagonal elements. Then every column is a pivot column so RREF(T) = In.
Therefore T is invertible.
Inconsistency 37. If =Ax b is inconsistent for some vector b then 1−A does not exist—because if 1−A did exist,
then 1−=x A b would exists for all b , which would be a contradiction.
Inverse of an Inverse 38. To prove: If A is invertible, so is A−1 Proof: Let A be an invertible n × n matrix, then there exists A−1 so that: AA−1 = In A−1A = In
so A = (A−1)−1 by definition of inverse and the fact that inverses are unique. (3.3 Problem 19)
Inverse of a Transpose 39. To prove: If A is invertible, so is AT and (AT)−1 = (A−1)T. Proof: Let A be an invertible n × n matrix. Then (AT)(A−1)T = (A−1A)T = T
nI = In because (AT)T = A and (AB)T = BTAT (A−1)T AT = (AA−1)T = T
nI = In Therefore (AT)−1 = (A−1)T
Elementary Matrices
40. (a) 0 1 01 0 00 0 1
⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥⎣ ⎦
intE (b) 1 0 00 1 0
0 1k
⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥⎣ ⎦
replE (c) 1 0 00 00 0 1
k⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥⎣ ⎦
scaleE
SECTION 3.3 The Inverse of a Matrix 245
Invertibility of Elementary Matrices 41. Because the inverse of any elementary row operation is also an elementary row operation, and
because elementary matrices are constructed from elementary row operations starting with the identity matrix, we can convert any elementary row operation to the identity matrix by elementary row operations.
For example, the inverse of intE can be found by performing the operation 1 2R R↔ on the augmented matrix
0 1 0 1 0 0 1 0 0 0 1 01 0 0 0 1 0 0 1 0 1 0 00 0 1 0 0 1 0 0 1 0 0 1
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎡ ⎤ = →⎣ ⎦ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
intE I .
Hence, 1
0 1 01 0 00 0 1
−
⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥⎣ ⎦
intE . In other words 1−=int intE E . We leave finding 1−replE and 1−
scaleE for the
reader.
Similar Matrices
42. Pick P as the identity matrix. 43. If ~B A , then there exists a nonsingular matrix P such that 1−=B P AP . Premultiplying by P and
postmultiplying by 1−P gives
( ) 11 1 1−− − −= =A PBP P BP ,
which shows that A is similar to B. 44. Suppose ~C A and ~C B . Then there exist invertible matrices AP and BP
1 1− −= =A A B BC P AP P BP so
( ) ( ) ( )1 1 1 1− − − −= =A B B A A B B AA P P BP P P P B P P .
Let 1−= B AQ P P . Therefore 1−=A Q BQ , so ~A B .
45. Informal Discussion Bn = (P−1AP)(P−1AP) ⋅⋅⋅ (P−1AP) n factors By generous application of the associative property of matrix multiplication we obtain. Bn = P−1A(PP−1) A(PP−1) ⋅⋅⋅ (PP−1)AP = P−1AnP by the facts that PP−1 = I and AI = A Induction Proof To Prove: Bn = P−1AnP for all positive integers n Pf: 1) B1 = P−1AP by definition of B 2) Assume for some k: Bk = P−1AkP
246 CHAPTER 3 Linear Algebra
Now for k + 1: Bk+1 = BBk = (P−1AP)(P−1AkP) = (P−1A)(PP−1)(AkP) = (P−1A)I(AkP) = P−1AAkP = P−1Ak+1P So the case for k ⇒ the case for k + 1 By Mathematical Induction, Bn = P−1AnP for all n. 46. True/False Questions a) True If all diagonal elements are nonzero, then every column has a pivot and the
matrix is invertible. If a diagonal element is zero, then the corresponding column is not a pivot column, so the matrix is not invertible.
b) True Same argument as a) c) False Consider this example:
A = 1 00 2⎡ ⎤⎢ ⎥⎣ ⎦
B = 0 10 0⎡ ⎤⎢ ⎥⎣ ⎦
A−1 = 1 0
102
⎡ ⎤⎢ ⎥⎢ ⎥⎣ ⎦
1 01 0 0 1
10 2 0 0 02
⎡ ⎤⎡ ⎤ ⎡ ⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
= 102
0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎣ ⎦
≠ B
Leontief Model
47. 0.5 0
0 0.5⎡ ⎤
= ⎢ ⎥⎣ ⎦
T , 1010⎡ ⎤
= ⎢ ⎥⎣ ⎦
d
The basic equation is Total Output = External Demand + Internal Demand,
so we have
1 1
2 2
10 0.5 010 0 0.5
x xx x⎡ ⎤ ⎡ ⎤⎡ ⎤ ⎡ ⎤
= +⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦⎣ ⎦ ⎣ ⎦
.
Solving these equations yields 1 2 20x x= = . This should be obvious because for every 20 units of product each industry produces, 10 goes back into the industry to produce the other 10.
48. 0 0.1
0.2 0⎡ ⎤
= ⎢ ⎥⎣ ⎦
T , 1010⎡ ⎤
= ⎢ ⎥⎣ ⎦
d
The basic equation is Total Output = External Demand + Internal Demand,
so we have
1 1
2 2
10 0 0.110 0.2 0
x xx x⎡ ⎤ ⎡ ⎤⎡ ⎤ ⎡ ⎤
= +⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦⎣ ⎦ ⎣ ⎦
.
Solving these equations yields 1 11.2x = , 2 12.2x = .
SECTION 3.3 The Inverse of a Matrix 247
49. 0.2 0.50.5 0.2⎡ ⎤
= ⎢ ⎥⎣ ⎦
T , 1010⎡ ⎤
= ⎢ ⎥⎣ ⎦
d
The basic equation is Total Output = External Demand + Internal Demand,
so we have
1 1
2 2
10 0.2 0.510 0.5 0.2
x xx x⎡ ⎤ ⎡ ⎤⎡ ⎤ ⎡ ⎤
= +⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦⎣ ⎦ ⎣ ⎦
.
Solving these equations yields 11333
x = , 21333
x = .
50. 0.5 0.20.1 0.3⎡ ⎤
= ⎢ ⎥⎣ ⎦
T , 5050⎡ ⎤
= ⎢ ⎥⎣ ⎦
d
The basic equation is Total Output = External Demand + Internal Demand,
so we have
1 1
2 2
50 0.5 0.250 0.1 0.3
x xx x⎡ ⎤ ⎡ ⎤⎡ ⎤ ⎡ ⎤
= +⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦⎣ ⎦ ⎣ ⎦
.
Solving these equations yields 1 136.4x = , 2 90.9x = .
How Much Is Left Over? 51. The basic demand equation is
Total Output = External Demand + Internal Demand, so we have
1
2
150 0.3 0.4 150250 0.5 0.3 250
dd⎡ ⎤⎡ ⎤ ⎡ ⎤ ⎡ ⎤
= +⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎣ ⎦
.
Solving for 1d , 2d yields 1 5d = , 2 100d = .
Israeli Economy
52. (a) 0.70 0.00 0.000.10 0.80 0.200.05 0.01 0.98
⎡ ⎤⎢ ⎥− = − −⎢ ⎥⎢ ⎥− −⎣ ⎦
I T (b) ( ) 11.43 0.00 0.000.20 1.25 0.260.07 0.01 1.02
−⎡ ⎤⎢ ⎥− = ⎢ ⎥⎢ ⎥⎣ ⎦
I T
(c) ( ) 11.43 0.00 0.00 140,000 $200,2000.20 1.25 0.26 20,000 $53,5200.07 0.01 1.02 2,000 $12,040
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥= − = =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
x I T d
Suggested Journal Entry
53. Student Project
248 CHAPTER 3 Linear Algebra
3.4 Determinants and Cramer’s Rule
Calculating Determinants 1. Expanding by cofactors down the first column we get
0 7 9
1 1 7 9 7 92 1 1 0 2 5 0
6 2 6 2 1 15 6 2
−− = − + =
−.
2. Expanding by cofactors across the middle row we get
1 2 3
2 3 1 3 1 20 1 0 0 1 0 6
0 3 1 3 1 01 0 3
= + − = −− −
−.
3. Expanding by cofactors down the third column we get
1 3 0 21 3 2 1 3 2
0 1 1 51 1 2 7 0 1 5 6 6 12
1 2 1 71 1 6 1 1 6
1 1 0 6
−− −
−= − − + = + =
− −− −
−
.
4. Expanding by cofactors across the third row we get
( ) ( )
1 4 2 24 2 2 1 4 2
4 7 3 53 7 3 5 8 4 7 5 3 14 8 250 2042
3 0 8 01 6 9 5 1 9
5 1 6 9
− −− − − −
−= − + = + =
− − −− −
.
5. By row reduction, we can write
1 1 1 1 1 1
22 2 2 2 2 23
3 3 3 2 2 2= = 0
6. 0 0 1
0 20 2 1 1
3 13 1 1
= = −6
SECTION 3.4 Determinants and Cramer’s Rule 249
7. Using row reduction
1 2 2 42 2 2 22 1 1 21 4 4 2
− −− −
− −
1 2 2 40 3 3 02 1 1 20 2 6 6
−=
− −−
*2 3 2*4 1 4
by row operations:R R R
R R R
= +
= +
1 2 2 40 3 3 00 3 5 100 2 6 6
−=
− − −−
*3 1 3
by row operation:2R R R= − +
3 3 0
1 3 5 102 6 6
−= − − −
−
5 10 3 10
3 ( 3) 6 6 2 6− − − −
= − −−
= −24
Find the Properties
8. Subtract the first row from the second row in the matrix in the first determinant to get the matrix in the second determinant.
9. Factor out 3 from the second row of the matrix in the first determinant to get the matrix in the
second determinant. 10. Interchange the two rows of the matrix.
Basketweave for ×3 3 11. Direct computation as in Problems 1–4.
12. 0 7 92 1 1 0 35 108 45 0 28 05 6 2
− = − + − − − =
13. 1 2 30 1 0 3 0 0 3 0 0 61 0 3
= − + + − − − = −−
250 CHAPTER 3 Linear Algebra
14. By an extended basketweave hypothesis,
0 1 1 01 1 0 1
0 0 0 0 0 0 1 0 10 0 0 10 1 1 0
= + + + − − − − = − .
However, the determinant is clearly 0 (because rows 1 equals row 4), so the basketweave method does not generalize to dimensions higher than 3.
Triangular Determinants
15. We verify this for 4 4× matrices. Higher-order matrices follow along the same lines. Given the upper-triangular matrix
11 12 13 14
22 23 24
33 34
44
00 00 0 0
a a a aa a a
a aa
=A ,
we expand down the first column, getting
11 12 13 1422 23 24
22 23 24 33 3411 33 34 11 22 11 22 33 44
33 34 4444
44
00
0 0 00 0
0 0 0
a a a aa a a
a a a a aa a a a a a a a a
a a aa
a
= = = .
Think Diagonal
16. The matrix is upper triangular, hence the determinant is the product of the diagonal elements
( )( )( )3 4 00 7 6 3 7 5 1050 0 5
−= − = − .
17. The matrix is a diagonal matrix, hence the determinant is the product of the diagonal elements.
( )( )4 0 0
10 3 0 4 3 62
10 02
− = − = − .
18. The matrix is lower triangular, hence the determinant is the product of the diagonal elements.
( )( )( )( )
1 0 0 03 4 0 0
1 4 1 2 80 5 1 0
11 0 2 2
−= − = −
−−
.
SECTION 3.4 Determinants and Cramer’s Rule 251
19. The matrix is upper triangular, hence the determinant is the product of the diagonal elements.
( )( )( )( )
6 22 0 30 1 0 4
6 1 13 4 3120 0 13 00 0 0 4
−−
= − = − .
Invertibility
20. Not invertible if
3
1 00 1 4 0
0 4
kk k k
k= − = if k(4 − k2) = 0, so that k = 0 or k = ±2
Invertible if k ≠ 0 and k ≠ ±2
21. Not invertible if 1
0k
k k=
−
−k + k2 = 0 k(k − 1) = 0 Invertible if k ≠ 0 and k ≠ 1 22. Not invertible if
1 00 1 0 1 0
0 1
mkm
k= − = i.e. km = 1, k = 1
m
Invertible if km ≠ 1
Invertibility Test 23. The matrix does not have an inverse because its determinant is zero. 24. The matrix has an inverse because its determinant is nonzero. 25. The matrix has an inverse because its determinant is nonzero. 26. The matrix has an inverse because its determinant is nonzero.
252 CHAPTER 3 Linear Algebra
Product Verification 27.
1 23 4
1 01 1
3 27 4
1 22
3 4
1 01
1 1
3 22
7 4
⎡ ⎤= ⎢ ⎥⎣ ⎦⎡ ⎤
= ⎢ ⎥⎣ ⎦⎡ ⎤
= ⎢ ⎥⎣ ⎦⎡ ⎤
= = −⎢ ⎥⎣ ⎦⎡ ⎤
= =⎢ ⎥⎣ ⎦⎡ ⎤
= = −⎢ ⎥⎣ ⎦
A
B
AB
A
B
AB
Hence =AB A B .
28.
0 1 01 0 0 21 2 2
1 2 31 2 0 70 1 1
1 2 01 2 3 141 8 1
⎡ ⎤⎢ ⎥= ⇒ = −⎢ ⎥⎢ ⎥⎣ ⎦⎡ ⎤⎢ ⎥= − ⇒ = −⎢ ⎥⎢ ⎥−⎣ ⎦−⎡ ⎤⎢ ⎥= ⇒ =⎢ ⎥⎢ ⎥−⎣ ⎦
A A
B B
AB AB
Hence =AB A B .
Determinant of an Inverse
29. We have 1 11 − −= = =I AA A A
and hence 1 1− =AA
.
Do Determinants Commute?
30. = = =AB A B B A BA , because A B is a product of real or complex numbers.
Determinant of Similar Matrices 31. The key to the proof lies in the determinant of a product of matrices. If
1−=A P BP , we use the general properties
1 1− =AA
, =AB A B
shown in Problems 23 and 24, and write
1 1 1 1− −= = = = =A P BP P B P B P B P BP P
.
SECTION 3.4 Determinants and Cramer’s Rule 253
Determinant of nA 32. (a) If 0n =A for some integer n, we have
0nn = =A A .
Because nA is the product of real or complex numbers, 0=A .
Hence, A is noninvertible. (b) If 0n ≠A for some integer n, then
0nn = ≠A A
for some integer n. This implies 0≠A , so A is invertible. In other words, for every
matrix A either 0n =A for all positive integers n or it is never zero.
Determinants of Sums
33. An example is
1 00 1⎡ ⎤
= ⎢ ⎥⎣ ⎦
A , 1 00 1−⎡ ⎤
= ⎢ ⎥−⎣ ⎦B ,
so
0 00 0⎡ ⎤
+ = ⎢ ⎥⎣ ⎦
A B ,
which has the determinant 0+ =A B ,
whereas 1= =A B ,
so 2+ =A B . Hence, + ≠ +A B A B .
Determinants of Sums Again
34. Letting
1 10 0⎡ ⎤
= ⎢ ⎥⎣ ⎦
A , 1 10 0− −⎡ ⎤
= ⎢ ⎥⎣ ⎦
B ,
we get
0 00 0⎡ ⎤
+ = ⎢ ⎥⎣ ⎦
A B .
Thus 0+ =A B . Also, we have 0=A , 0=B , so 0+ =A B .
Hence, + = +A B A B .
254 CHAPTER 3 Linear Algebra
Scalar Multiplication 35. For a 2 2× matrix, we see
11 12 11 122 2 211 22 21 12
21 22 21 22
ka ka a ak a a k a a k
ka ka a a= − = .
For an n n× matrix, A, we can factor a k out of each row getting nk k=A A .
Inversion by Determinants 36. Given the matrix
1 0 22 2 31 1 1
⎡ ⎤⎢ ⎥= ⎢ ⎥⎢ ⎥⎣ ⎦
A
the matrix of minors can easily be computed and is
1 1 02 1 14 1 2
− −⎡ ⎤⎢ ⎥= − −⎢ ⎥⎢ ⎥− −⎣ ⎦
M .
The matrix of cofactors A , which we get by multiplying the minors by ( )1 i j+− , is given by
( )1 1 0
1 2 1 14 1 2
i j+
−⎡ ⎤⎢ ⎥= − = − −⎢ ⎥⎢ ⎥−⎣ ⎦
A M .
Taking the transpose of this matrix gives
T
1 2 41 1 10 1 2
− −⎡ ⎤⎢ ⎥= −⎢ ⎥⎢ ⎥−⎣ ⎦
A .
Computing the determinant of A, we get 1= −A . Hence, we have the inverse
1 T
1 2 41 1 1 1
0 1 2
−
−⎡ ⎤⎢ ⎥= = − −⎢ ⎥⎢ ⎥−⎣ ⎦
A AA
.
Determinants of Elementary Matrices
37. (a) If we interchange the rows of the 2 2× identity matrix, we change the sign of the determinant because
1 0
10 1
= , 0 1
11 0
= − .
For a 3 3× matrix if we interchange the first and second rows, we get
0 1 01 0 0 10 0 1
= − .
You can verify yourself that if any two rows of the 3 3× identity matrix are interchanged, the determinant is –1.
SECTION 3.4 Determinants and Cramer’s Rule 255
For a 4 4× matrix suppose the ith and jth rows are interchanged and that we compute the determinant by expanding by minors across one of the rows that was not interchanged. (We can always do this.) The determinant is then
11 11 12 12 13 13 14 14a a a a= − + −A M M M M . But the minors 11M , 12M , 13M , 14M are 3 3× matrices, and we know each of these
determinants is –1 because each of these matrices is a 3 3× elementary matrix with two rows changed from the identity matrix. Hence, we know 4 4× matrices with two rows interchanged from the identity matrix have determinant –1. The idea is to proceed inductively from 4 4× matrices to 5 5× matrices and so on.
(b) The matrix
1 0 0
1 00 0 1k⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
shows what happens to the 3 3× identity matrix if we add k times the 1st row to the 2nd row. If we expand this matrix by minors across any row we see that the determinant is the product of the diagonal elements and hence 1. For the general n n× matrix adding k times the ith row to the jth row places a k in the jith position of the matrix with all other entries looking like the identity matrix. This matrix is an upper-triangular matrix, and its determinant is the product of elements on the diagonal or 1.
(c) Multiplying a row, say the first row, by k of a 3 3× matrix
0 0
0 1 00 0 1
k⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
and expanding by minors across any row will give a determinant of k. Higher-order matrices give the same result.
Determinant of a Product
38. (a) If A is not invertible then 0=A . If A is not invertible then neither is AB, so 0=AB . Hence, it yields =AB A B because both sides of the equation are zero.
(b) We first show that =EA E A for elementary matrices E. An elementary matrix is one
that results in changing the identity matrix using one of the three elementary operations. There are three kinds of elementary matrices. In the case when E results in multiplying a row of the identity matrix I by a constant k, we have:
11 12 1 11 12 1
21 22 2 21 22 2
1 2 1 2
1 0 00 0
0 0 1
n n
n n
n n nn n n nn
a a a a a aa a a ka ka kak
a a a a a a
k
⎡ ⎤ ⎡ ⎤⎡ ⎤⎢ ⎥ ⎢ ⎥⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥= ⋅ =⎢ ⎥ ⎢ ⎥⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦= =
EA
A E A
In those cases when E is a result of interchanging two rows of the identity or by adding a multiple of one row to another row, the verification follows along the same lines.
256 CHAPTER 3 Linear Algebra
Now if A is invertible it can be written as the product of elementary matrices 1 1 p p−=A E E E… .
If we postmultiply this equation by B, we get 1 1 p p−=AB E E E B… ,
so 1 1 1 1 p p p p− −= = =AB E E E B E E E B A B… … .
Cramer’s Rule
39. 2 22 5 0
x yx y
+ =+ =
To solve this system we write it in matrix form as
1 2 22 5 0
xy
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦.
Using Cramer’s rule, we compute the determinants
1 2
1 2 2 2 1 21, 10, 4.
2 5 0 5 2 0= = = = = = −A A A
Hence, the solution is
1 210 410, 4.1 1
x y= = = = = − = −A AA A
40.
2 1x yx y
λ+ =+ =
To solve this system we write it in matrix form as
1 11 2 1
xy
λ⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦.
Using Cramer’s rule, we compute the determinants
1 2
1 1 1 11, 2 1, 1 .
1 2 1 2 1 1λ λ
λ λ= = = = − = = −A A A
Hence, the solution is
1 22 1 12 1, 1 .1 1
x yλ λλ λ− −= = = − = = = −
A AA A
SECTION 3.4 Determinants and Cramer’s Rule 257
41. 3 52 5 7
2 3
x y zy z
x z
+ + =+ =+ =
To solve this system, we write it in matrix form as
1 1 3 50 2 5 71 0 2 3
xyz
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥=⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
Using Cramer’s rule, we compute the determinants
1 2 3
1 1 3 5 1 3 1 5 3 1 1 50 2 5 3, 7 2 5 3, 0 7 5 3, 0 2 7 3.1 0 2 3 0 2 1 3 2 1 0 3
= = = = = = = =A A A A
All determinants are 3, so
1 2 33 3 31, 1, 1.3 3 3
x y z= = = = = = = = =A A AA A A
42. 1 2 3
1 2 3
1 2 3
2 63 8 9 102 2 2
x x xx x xx x x
+ − =+ + =− + = −
To solve this system, we write it in matrix form as
1
2
3
1 2 1 63 8 9 102 1 2 2
xxx
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥=⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
Using Cramer’s rule, we compute the determinants
1 2 3
1 2 1 6 2 1 1 6 1 1 2 63 8 9 68, 10 8 9 68, 3 10 9 136, 3 8 10 68.2 1 2 2 1 2 2 2 2 2 1 2
− − −= = = = = = = = −
− − − − − −A A A A
Hence, the solution is
1 2 31 2 3
68 136 681, 2, 1.68 68 68
x x x= = = = = = = = − = −A A AA A A
The Wheatstone Bridge
43. (a) Each equation represents the fact that the sum of the currents into the respective nodes A, B, C, and D is zero. For example
1 2 1 2
1 1
3 3
2 3 3 2
0000 .
g x g x
x x
g g
I I I I I II I I I I II I I I I I
I I I I I I
− − = ⇒ = +− − = ⇒ = +
− + + = ⇒ = ++ − = ⇒ = +
node :node :node :node :
ABCD
258 CHAPTER 3 Linear Algebra
(b) If a current I flows through a resistance R, then the voltage drop across the resistance is RI. Applying Kirchhoff’s voltage law, the sum of the voltage drops around each of the three circuits is set to zero giving the desired three equations:
voltage drop around the large circuit 0 1 1 0x xE R I R I− − = , voltage drop around the upper-left circuit 1 1 2 2 0g gR I R I R I+ − = ,
voltage drop around the upper-right circuit 3 3 0x x g gR I R I R I− − = . (c) Using the results from part (a) and writing the three currents 3I , xI , and I in terms of 1I ,
2I , gI . gives
3 2
1
1 2 .
g
x g
I I I
I I I
I I I
= +
= −
= +
We substitute these into the three given equations to obtain the 3 3× linear system for the currents 1I , 2I , gI :
3 3 1
1 2 2
1 0
00
0
x x g
g
x x g
R R R R R IR R R I
R R R I E
⎡ ⎤− − − −⎡ ⎤ ⎡ ⎤⎢ ⎥⎢ ⎥ ⎢ ⎥− =⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥⎢ ⎥ ⎢ ⎥+ −⎣ ⎦ ⎣ ⎦⎣ ⎦
.
Solving for gI (we only need to solve for one of the three unknowns) using Cramer’s rule, we find
1gI =
AA
where
( )3
1 1 2 0 2 1 3
1 0
00
0
x
x
x
R RR R E R R R R
R R E
−⎡ ⎤⎢ ⎥= − = − +⎢ ⎥⎢ ⎥+⎣ ⎦
A .
Hence, 0gI = if 2 1 3xR R R R= . Note: The proof of this result is much easier if we assume the resistance gR is negligible, and we take it as zero.
Least Squares Derivation
44. Starting with
( ) ( ) 2
1,
n
i ii
F m k y k mx=
= ⎡ − + ⎤⎣ ⎦∑ ,
we compute the equations
0Fk
∂=
∂, 0F
m∂
=∂
yielding
( ) ( ) ( )
( ) ( ) ( )
2
1 1
2
1 1
2 1 0
2 0.
n n
i i i ii i
n n
i i i i ii i
F y k mx y k mxk kF y k mx y k mx xm m
= =
= =
∂ ∂= ⎡ − + ⎤ = ⎡ − + ⎤ − =⎣ ⎦ ⎣ ⎦∂ ∂
∂ ∂= ⎡ − + ⎤ = ⎡ − + ⎤ − =⎣ ⎦ ⎣ ⎦∂ ∂
∑ ∑
∑ ∑
SECTION 3.4 Determinants and Cramer’s Rule 259
Carrying out a little algebra, we get
1 1
2
1 1 1
n n
i ii i
n n n
i i i ii i i
kn m x y
k x m x x y
= =
= = =
+ =
+ =
∑ ∑
∑ ∑ ∑
or in matrix form
1 1
2
1 1 1
n n
i ii i
n n n
i i i ii i i
n x ykm
x x x y
= =
= = =
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎡ ⎤⎢ ⎥ ⎢ ⎥=⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
∑ ∑
∑ ∑ ∑.
Alternative Derivation of Least Squares Equations
45. (a) Equation (9) in the text
1.7 1.12.3 3.13.1 2.34.0 3.8
k mk mk mk m
+ =+ =+ =+ =
can be written in matrix form
1 1.7 1.11 2.3 3.11 3.1 2.31 4.0 3.8
km
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎡ ⎤⎢ ⎥ ⎢ ⎥=⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
which is the form of =Ax b . (b) Given the matrix equation =Ax b , where
1
2
3
4
1111
xxxx
⎡ ⎤⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎣ ⎦
A , km⎡ ⎤
= ⎢ ⎥⎣ ⎦
x ,
1
2
3
4
yyyy
⎡ ⎤⎢ ⎥⎢ ⎥=⎢ ⎥⎢ ⎥⎣ ⎦
b
if we premultiply each side of the equation by TA , we get T T=A Ax A b , or
1 1
2 2
1 2 3 4 1 2 3 43 3
4 4
11 1 1 1 1 1 1 11
11
x yx yk
x x x x x x x xx ymx y
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎡ ⎤ ⎡ ⎤⎡ ⎤⎢ ⎥ ⎢ ⎥=⎢ ⎥ ⎢ ⎥⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦⎣ ⎦ ⎣ ⎦⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
or
4 4
1 14 4 4
2
1 1 1
4 i ii i
i i i ii i i
x ykm
x x x y
= =
= = =
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎡ ⎤⎢ ⎥ ⎢ ⎥=⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
∑ ∑
∑ ∑ ∑.
260 CHAPTER 3 Linear Algebra
Least Squares Calculation 46. Here we are given the data points
x y 0 1 1 1 2 3 3 3
so
4
14
2
14
14
1
6
14
8
16.
ii
ii
ii
i ii
x
x
y
x y
=
=
=
=
=
=
=
=
∑
∑
∑
∑
The constants m, k in the least squares line y mx k= +
satisfy the equations 4 6 86 14 16
km
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦,
which yields 0.80k m= = . The least squares lineis 0.8 0.8y x= + .
42
2
1
3
4
1
3
y x= +08 08. .
x
y
Computer or Calculator 47. To find the least-squares approximation of the form y k mx= + , we solve to a set of data points
( ){ }, : 1, 2, , i ix y i n= … , to get the system
1 1
2
1 1 1
n n
i ii i
n n n
i i i ii i i
n x ykm
x x x y
= =
= = =
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎡ ⎤⎢ ⎥ ⎢ ⎥=⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
∑ ∑
∑ ∑ ∑.
Using a spreadsheet to compute the element of the coefficient matrix and the right-hand-side vector, we get
Spreadsheet to compute least squares x y x^2 xy 1.6 1.7 2.56 2.72 3.2 5.3 10.24 16.96 6.9 5.1 47.61 35.19 8.4 6.5 70.56 54.60 9.1 8.0 82.81 72.80 sum x sum y sum x^2 sum xy 29.2 26.6 213.78 182.27
SECTION 3.4 Determinants and Cramer’s Rule 261
We must solve the system 5.0 29.20 26.60
29.2 213.78 182.27km
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦
getting 1.68k = , 0.62m = . Hence, we have the least squares line
0.62 1.68y x= + , whose graph is shown next.
84
4
2
6
8
2
6x
y
10
10
y x= +062 168. .
48. To find the least-square approximation of the form y k mx= + , we solve to a set of data points
( ){ }, : 1, 2, , i ix y i n= … to get the system
1 1
2
1 1 1
n n
i ii i
n n n
i i i ii i i
n x ykm
x x x y
= =
= = =
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎡ ⎤⎢ ⎥ ⎢ ⎥=⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
∑ ∑
∑ ∑ ∑.
Using a spreadsheet to compute the elements of the coefficient matrix and the right-hand-side vector, we get
Spreadsheet to compute least squares x y x^2 xy 0.91 1.35 0.8281 1.2285 1.07 1.96 1.1449 2.0972 2.56 3.13 6.5536 8.0128 4.11 5.72 16.8921 23.5092 5.34 7.08 28.5156 37.8072 6.25 8.14 39.0625 50.8750 sum x sum y sum x^2 sum xy 20.24 27.38 92.9968 123.5299
We must solve the system 6.00 20.2400 27.3800
20.24 92.9968 123.5299km
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦
getting 0.309k = , 1.26m = . Hence, the least-squares line is 0.309 1.26y x= + .
5
5
x
y
10
10
y x= +126 0309. .
262 CHAPTER 3 Linear Algebra
Least Squares in Another Dimension 49. We seek the constants α, 1β , and 2β that minimize
( ) ( ) 21 2 1 2
1, ,
n
i i ii
F y T Pα β β α β β=
= ⎡ − + + ⎤⎣ ⎦∑ .
We write the equations
( ) ( )
( ) ( )
( ) ( )
1 21
1 211
1 212
2 1 0
2 0
2 0.
n
i i iin
i i i ii
n
i i i ii
F y T P
F y T P T
F y T P P
α β βα
α β ββ
α β ββ
=
=
=
∂= ⎡ − + + ⎤ − =⎣ ⎦∂
∂= ⎡ − + + ⎤ − =⎣ ⎦∂
∂= ⎡ − + + ⎤ − =⎣ ⎦∂
∑
∑
∑
Simplifying, we get
1 1 1
21
1 1 1 12
2
1 1 1 1
n n n
i i ii i i
n n n n
i i i i i ii i i in n n n
i i i i i ii i i i
n T P y
T T T P T y
P T P P P y
αββ
= = =
= = = =
= = = =
⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎡ ⎤⎢ ⎥ ⎢ ⎥⎢ ⎥ =⎢ ⎥ ⎢ ⎥⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥⎣ ⎦⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦
∑ ∑ ∑
∑ ∑ ∑ ∑
∑ ∑ ∑ ∑
Solving for α, 1β , and 2β , we get the least-squares plane 1 2y T Pα β β= + + .
Least Squares System Solution
50. Premultiplying each side of the system =Ax b by TA gives T T=A Ax A b , or
1 1 1
1 0 1 1 0 10 1 2
1 1 1 1 1 11 1 1
xy
⎡ ⎤ ⎡ ⎤− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥=⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦
or simply 2 0 00 3 4
xy
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦.
Solving this 2 2× system, gives
0x = , 43
y = ,
which is the least squares approximation to theoriginal system.
2–1
–3
1
3
2
–2
–2
least squaressolution
y = 2
0 4 3,a f− + =x y 1
x y+ = 1
y
x
Suggested Journal Entry 51. Student Project
SECTION 3.5 Vector Spaces and Subspaces 263
3.5 Vector Spaces and Subspaces
They Don’t All Look Like Vectors 1. A typical vector is [ ],x y , with negative [ ], x y− − ; the zero vector is [ ]0, 0 . 2. A typical vector is [ ], , x y z , with negative [ ], , x y z− − − ; the zero vector is [ ]0, 0, 0 . 3. A typical vector is [ ], , ,a b c d , with negative [ ], , , a b c d− − − − ; the zero vector is [ ]0, 0, 0, 0 . 4. A typical vector is [ ], ,a b c , with negative [ ], , a b c− − − ; the zero vector is [ ]0, 0, 0 .
5. A typical vector is a b cd e f⎡ ⎤⎢ ⎥⎣ ⎦
, with negative a b cd e f− − −⎡ ⎤⎢ ⎥− − −⎣ ⎦
; the zero vector is 0 0 00 0 0⎡ ⎤⎢ ⎥⎣ ⎦
.
6. A typical vector is a b cd e fg h i
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
, with negative a b cd e fg h i
− − −⎡ ⎤⎢ ⎥− − −⎢ ⎥⎢ ⎥− − −⎣ ⎦
; the zero vector is0 0 00 0 00 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
.
7. A typical vector is a linear function ( )p t at b= + , the zero vector is 0p ≡ and the negative of
( )p t is ( )p t− . 8. A typical vector is a quadratic function
( ) 2p t at bt c= + + , the zero vector is 0p ≡ and the negative of ( )p tis ( )p t− .
4–4
–2
2
y
t
4
–4
–2 2
p t1( )
p t3( )
p t2( )
p t4( )
Segments of typical vectors in 2P
9. A typical vector is a continuous and differenti-able function, such as ( ) sinf t t= , ( ) 2g t t= . The zero vector is ( )0 0f t ≡ and the negative of
( )f t is ( )f t− .
4–4
–2
2
y
x
4
–4
–2 2
g t( )
f t( )
264 CHAPTER 3 Linear Algebra
10. [ ]2 0, 1e : Typical vectors are continuous andtwice differentiable functions such as
( ) sinf t t= , ( ) 2 2g t t t= + − , and so on. The zero vector is the zero function
( )0 0f t ≡ , and the negative of a typical vector,say ( ) sinth t e t= , is ( ) sinth t e t− = − .
4
y
t
8
–4
–8
g t( )
f t( )
–4 4
h t( )
Are They Vector Spaces? 11. Not a vector space; there is no additive inverse. 12. First octant of space: No, the vectors have no negatives. For example, [ ]1, 3, 3 belongs to the set
but [ ]1, 3, 3− − − does not. 13. Not a vector space; e.g., the negative of [ ]2,1 does not lie in the set. 14. Not a vector space; e.g., 2x x+ and ( ) 21 x− each belongs, but their sum ( )2 21x x x x+ + − = does
not. 15. Not a vector space since it is not closed under vector addition. See the example for Problem 14. 16. Yes, the vector space of all diagonal 2 × 2 matrices. 17. Not a vector space; the set of 2 × 2 matrices with zero deteriminant is not closed under vector
addition as indicated by
1 0 0 1 1 11 0 0 3 1 3⎡ ⎤ ⎡ ⎤ ⎡ ⎤
+ =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
18. Not a vector space; the set of all 2 × 2 invertible matrices is not closed under vector addition. For
instance,
1 0 1 0 0 00 1 0 1 0 0
−⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ =⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
19. Yes, the vector space of all 3 × 3 upper-triangular matrices. 20. Not a vector space, does not contain the zero function. 21. Not a vector space; not closed under scalar multiplication; no additive inverse. 22. Yes, the vector space of all differentiable functions on ( ), −∞ ∞ . 23. Yes, the vector space of all integrable functions on [ ]0,1 .
SECTION 3.5 Vector Spaces and Subspaces 265
A Familiar Vector Space 24. Yes a vector space. Straightforward verification of the 10 commandments of a vector space; that
is, the sum of two vectors (real numbers in this case) is a vector (another real number), the product of a real number by a scalar (another real number) is a real number. The zero vector is the number 0. Every number has a negative. The distributivity and associatively properties are simply properties of the real numbers, and so on.
Not a Vector Space
25. Not a vector space; not closed under scalar multiplication.
DE Solution Space 26. Properties A3, A4, S1, S2, S3, and S4 are basic properties that hold for all functions; in particular,
solutions of a differential equation.
Another Solution Space 27. Yes, the solution space of the linear homogeneous DE
( ) ( )( ) ( )( ) ( ) ( ) 0y p t y q t y y p t y q t y′′ ′ ′′ ′− + − + − = − ⎡ + + ⎤ =⎣ ⎦ is indeed a vector space; the linearity properties are sufficient to prove all the vector space
properties.
The Space ( ), −∞ ∞C 28. This result follows from basic properties of continuous functions; the sum of continuous func-
tions is continuous, scalar multiples of continuous functions are continuous, the zero function is continuous, the negative of a continuous function is continuous, the distributive properties hold for all functions, and so on.
Vector Space Properties
29. Unique Zero: We prove that if a vector z satisfies + =v z v for any vector v , then =z 0 . We can write
( )( ) ( ) ( ) ( ) ( ) ( )= + = + + − = + + − = + + − = + − =z z 0 z v v z v v v z v v v 0 . 30. Unique Negative: We show that if v is an arbitrary vector in some vector space, then there is
only one vector n (which we call −v ) in that space that satisfies + =v n 0 . Suppose another vector n * also satisfies + =v n* 0 . Then
( )( ) ( ) ( ) ( ) ( ) ( ) ( )
.
= + = + + − = + + − = + − = + + − = + − +
= + =
n n 0 n v v n v v 0 v v n * v v v n *
0 n* n *
31. Zero as Multiplier: We can write
( )0 1 0 1 0 1+ = + = + = =v v v v v v v . Hence, by the result of Problem 30, we can conclude that 0 =v 0 .
266 CHAPTER 3 Linear Algebra
32. Negatives as Multiples: From Problem 30, we know that −v is the only vector that satisfies ( )+ − =v v 0 . Hence, if we write
( ) ( ) ( )( )1 1 1 1 1 0+ − = + − = + − = =v v v v v v 0 .
Hence, we conclude that ( )1− = −v v .
A Vector Space Equation 33. Let v be an arbitrary vector and c an arbitrary scalar. Set c =v 0 . Then either 0c = or =v 0 . For
0c ≠ ,
( ) ( )1 11 cc c
= = = =v v v 0 0 ,
which proves the result.
Nonstandard Definitions 34. ( ) ( ) ( )1 1 2 2 1 2, , , 0x y x y x x+ ≡ + and ( ) ( ), , c x y cx y≡ All vector space properties clearly hold for these operations. The set 2R with indicated vector
addition and scalar multiplication is a vector space. 35. ( ) ( ) ( )1 1 2 2 2, , 0, x y x y x+ ≡ and ( ) ( ), , c x y cx cy≡ Not a vector space because, for example, the new vector addition is not commutative:
( ) ( ) ( )2, 3 4, 5 0, 4+ = , ( ) ( ) ( )4, 5 2, 3 0, 2+ = . 36. ( ) ( ) ( )1 1 2 2 1 2 1 2, , , x y x y x x y y+ ≡ + + and ( ) ( ), , c x y cx c y≡
Not a vector space, for example, ( )c d c d+ ≠ +x x x .
For 4c = , 9d = and vector ( )1 2, x x=x , we have
( ) ( ) ( )
( ) ( ) ( ) ( ) ( )1 2 1 2
1 2 1 2 1 2 1 2 1 2
13 , 13 , 13
4 , 9 , 2 , 2 3 , 3 5 , 5 .
c d x x x x
c d x x x x x x x x x x
+ = =
+ = + = + =
x
x x
Sifting Subsets for Subspaces
37. ( ){ }, 0x y y= =W is a subspace of 2R .
38. ( ){ }2 2, 1x y x y= + =W is not a subspace of 2R because it does not contain the zero vector
(0, 0). It is also not closed under vector addition and scalar multiplication. 39. ( ){ }1 2 3 3, , 0x x x x= =W is a subspace of 3R . 40. ( ) ( ){ }degree 2p t p t= =W is not a subspace of 2P because it does not contain the zero vector
( ) 0p t ≡ . 41. ( ) ( ){ }0 0p t p= =W is a subspace of 3P .
SECTION 3.5 Vector Spaces and Subspaces 267
42. ( ) ( ){ }0 0f t f= =W is a subspace of [ ]0, 1C .
43. ( ) ( ) ( ){ }0 1 0f t f f= = =W is a subspace of [ ]0, 1C .
44. ( ) ( ){ }0b
af t f t dt= =∫W is a subspace of [ ],a bC .
45. ( ){ }0f t f f′′= + =W is a subspace of [ ]2 0, 1C .
46. { }( ) 1f t f f′′= + =W is not a subspace of [ ]2 0, 1C . It does not contain the zero vector
( ) 0y t ≡ . It is also not closed under vector addition and scalar multiplication because the sum of two solutions is not necessarily a solution. For example, 1 1 siny t= + and 2 1 cosy t= + are both solutions, but the sum
1 2 2 sin cosy y t t+ = + + is not a solution. Likewise 12 2 2siny t= + is not a solution. 47. Not a subspace because = ∉x 0 W 48. W is a subspace: Nonempty: Note that =A0 0 so ∈0 W Closure: Suppose , ∈x y W =Ax 0 and =Ay 0 , so )+A(ax by = ) ( )+A(ax A by = + = + = + =aAx bAy a0 b0 0 0 0
Hyperplanes as Subspaces 49. We select two arbitrary vectors
[ ]1 1 1 1, , ,x y z w=u , [ ]2 2 2 2, , ,x y z w=v
from the subset W. Hence, we have
1 1 1 1
2 2 2 2
00.
ax by cz dwax by cz dw
+ + + =+ + + =
Adding, we get
( ) ( ) ( ) ( ) ( ) ( )1 2 1 2 1 2 1 2 1 1 1 1 2 2 2 2
0a x x b y y c z z d w w ax by cz dw ax by cz dw+ + + + + + + = + + + + + + +
=
which says that + ∈u v W . To show k ∈u W , we must show that the scalar multiple [ ]1 1 1 1, , ,k kx ky kz kw=u
satisfies 1 1 1 1 0akx bky ckz dkw+ + + = .
But this follows from ( )1 1 1 1 1 1 1 1 0akx bky ckz dkw k ax by cz dw+ + + = + + + = .
268 CHAPTER 3 Linear Algebra
Are They Subspaces of R?
50. W = {[ , , , ] : , }a b a b a b a b− + ∈ is a subspace.
Nonempty: Let a = b = 0 Then (0, 0, 0, 0) ∈ W
Closure: Suppose [ ] [ ]2 2 2 2 2 2, , , and , , ,a b a b a b a b a b a b= − + = − + ∈x y W
[ ] [ ][ ][ ]
1 1 1 1 1 1 2 2 2 2 2 2
1 2 1 2 1 1 2 2 1 1 2 2
1 2 1 2 1 2 1 2 1 2 1 2
Then , , ( ), ( ) , , ( ), ( )
, , ( ) ( ), ( ) ( )
, , ( ) ( ),( ) ( ) for any , ,
k ka kb k a b k a b a b a b a b
ka a kb b k a b a b k a b a b
ka a kb b ka a kb b ka a kb bk
+ = − + + − +
= + + − + + + + +
= + + + − + + + +
∈ ∈
x y
W
51. No [0, 0, 0, 0, 0] ∉ {[a, 0, b, 1, c]: a, b, c ∈ } because the 4th coordinate ≠ 0 for all a, b, c ∈ 52. No For [a, b, a2, b2], the last two coordinates are not linear functions of a and b. Consider [1, 3, 1, 9] Note that 2[1, 3, 1, 9] is not in the subset. i.e., 2[1, 3, 1, 9] = [2, 6, 2, 18] ≠ [2 ⋅ 1, 2 ⋅ 3, (2 ⋅ 1)2, (2 ⋅ 3)2] = [2, 3, 4, 36]
Differentiable Subspaces 53. ( ){ }0f t f ′ = . It is a subspace.
54. ( ){ }1f t f ′ = . It is not a subspace, because it does not contain the zero vector and is not closed
under vector addition. Hence, ( )f t t= , ( ) 2g t t= + belongs to the subset but ( )( )f g t+ does not belong. It is also not closed under scalar multiplication. For example ( )f t t= belongs to the subset, but ( )2 2f t t= does not.
55. ( ){ }f t f f′ = . It is a subspace.
56. ( ){ }2f t f f′ = . It is not a subspace; e.g., not closed under scalar multiplication. ( f may satisfy
equation 2f f′ = , but 2f will not, since 22 4f f′ ≠ .
Property Failures 57. The first quadrant (including the coordinate axes) is closed under vector addition, but not scalar
multiplications. 58. An example of a set in 2R that is closed under scalar multiplication but not under vector addition
is that of two different lines passing through the origin. 59. The unit circle is not closed under either vector addition or scalar multiplication.
SECTION 3.5 Vector Spaces and Subspaces 269
Solution Spaces of Homogenous Linear Algebraic Systems 60. x1 − x2 + 4x4 + 2x5 − x6 = 0 2x1 − 2x2 + x3 + 2x4 + 4x5 − x6 = 0
The matrix of coefficients A = 1 1 0 4 2 12 2 1 2 4 1
− −− −
has RREF = 1 1 0 4 2 10 0 1 6 0 1
− −−
x1 − x2 + 4x4 + 2x5 − x6 = 0 x3 − 6x4 + x6 = 0 Let x2 = r, x4 = s, x5 = t, x6 = u x1 = r − 4s − 2t + u x3 = 6s − u
S =
1 4 2 11 0 0 00 6 0 1
: , , ,0 1 0 00 0 1 00 0 0 1
r s t u r s t u
⎧ ⎫− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎪ ⎪+ + + ∈⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎨ ⎬
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
61. 2x1 − 2x2 + 4x3 − 2x4 = 0 2x1 + x2 + 7x3 + 4x4 = 0 x1 − 4x2 − x3 + 7x4 = 0 4x1 − 12x2 − 20x4 = 0
The matrix of coefficients A =
2 2 4 22 1 7 41 4 1 74 12 0 20
− −⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥− −⎢ ⎥− −⎣ ⎦
RREF(A) =
1 0 3 00 1 1 00 0 0 10 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
x1 + 3x3 = 0 x2 + x3 = 0 x4 = 0
S =
31
:10
r r
⎧ ⎫−⎡ ⎤⎪ ⎪⎢ ⎥−⎪ ⎪⎢ ⎥ ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎪ ⎪⎣ ⎦⎩ ⎭
Let x3 = r x1 = −3r x2 = −r x3 = r x4 = 0
270 CHAPTER 3 Linear Algebra
62. 3x1 + 6x3 + 3x4 + 9x5 = 0 x1 + 3x2 − 4x3 − 8x4 + 3x5 = 0 x1 − 6x2 + 14x3 + 19x4 + 3x5 = 0
The matrix of cooefficients A = 3 0 6 3 91 3 4 8 31 6 14 19 3
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥−⎣ ⎦
has RREF = 1 0 2 1 30 1 2 3 00 0 0 0 0
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥⎣ ⎦
x1 + 2x3 + x4 + 3x5 = 0 x2 − 2x3 − 3x4 = 0 x1 = −2x3 − x4 − 3x5 x2 = 2x3 + 3x4
Let x3 = r, x4 = s, x5 = t x1 = −2r − s − 3t x2 = 2r + 3s
S =
2 1 32 3 0
: , ,1 0 00 1 00 0 1
r s t r s t
⎧ ⎫− − −⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ + ∈⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
Nonlinear Differential Equations
63. 2y y′ = . Writing the equation in differential form, we have 2y dy dt− = . We get the general
solution 1yc t
=−
. Hence, from 0c = and 1, we have two solutions
( )11y tt
= − , ( )21
1y t
t=
−.
But, if we compute
( ) ( )1 21 1
1y t y t
t t+ = − +
−
it would not be a solution of the DE. So the solution set of this nonlinear DE is not a vector space. 64. sin 0y y′′ + = . Assume that y is a solution of the equation. Hence, we have the equation
sin 0y y′′ + = . But cy does not satisfy the equation because we have
( ) ( ) ( )sin sin 0cy cy c y y′′ ′′+ ≠ + = .
65. 1 0yy
′′ + = . From the DE we can see that the zero vector is not a solution, so the solution space of
this nonlinear DE is not a vector space.
DE Solution Spaces 66. 2 ty y e′ + = . Not a vector space, it doesn’t contain the zero vector.
SECTION 3.5 Vector Spaces and Subspaces 271
67. 2 0y y′ + = . The solutions are 1yt c
=−
, and the sum of two solutions is not a solution, so the set
of all solutions of this nonlinear DE do not form a vector space. 68. 0y ty′′ + = . If 1y , 2y satisfy the equation, then
1 1
2 2
00.
y tyy ty′′+ =′′ + =
By adding, we obtain ( ) ( )1 1 2 2 0y ty y ty′′ ′′+ + + = ,
which from properties of the derivative is equivalent to
( ) ( )1 1 2 2 1 1 2 2 0c y c y t c y c y′′+ + + = . This shows the set of solutions is a vector space. 69. ( )1 sin 0y t y′′ + + = . If 1y , 2y satisfy the equation then
( )( )
1 1
2 2
1 sin 0
1 sin 0.
y t y
y t y
′′+ + =
′′ + + =
By adding, we have ( ) ( )1 1 1 11 sin 1 sin 0y t y y t y′′ ′′⎡ + + ⎤ + ⎡ + + ⎤ =⎣ ⎦ ⎣ ⎦ ,
which from properties of the derivative is equivalent to
( ) ( )( )1 1 2 2 1 1 2 21 sin 0c y c y t c y c y′′+ + + + = , which shows the set of solutions is a vector space. This is true for the solution set of any linear
homogeneous DE.
Line of Solutions 70. (a) [ ] [ ] [ ]0, 1 2, 3 2 , 1 3t t t t= + = + = +x p h
Hence, calling 1x , 2x the coordinates of the vector [ ]1 2,x x=x , we have 1 2x t= ,
2 1 3x t= + . (b) [ ] [ ]2, 1, 3 2, 3, 0t= + −x (c) Showing that solutions of 0y y′ + = are closed under vector addition is a result of the
fact that the sum of two solutions is a solution. The fact that solutions are closed under scalar multiplication is a result of the fact that scalar multiples of solutions are also solutions. The zero vector (zero function) is also a solution because the negative of a solution is a solution. Computing the solution of the equation gives ( ) ty t ce−= , which is scalar multiple of te− . We will later discuss that this collection of solutions is a one-dimensional vector space.
(d) The solutions of y y t′ + = are given by ( ) ( )1 ty t t ce−= − + . The abstract point of view is
a line through the vector 1t − (remember functions are vectors here) in the direction of the vector te− .
272 CHAPTER 3 Linear Algebra
(e) The solution of any linear equation Ly f= can be interpreted as a line passing through any particular solution py in the direction of any homogeneous solution hy ; that is,
p hy y cy= + .
Orthogonal Complements 71. To prove: V⊥ = { 0n∈ ⋅ =u u v for every ∈v V} is a subspace of n
Nonempty: 0⋅ =0 v for every ∈v V
Closure: Let a and b ∈ and , ,∈u w V⊥ Let ∈v V
( ) ( ) ( )
( ) ( )a b a b
a b
a b
+ ⋅ = ⋅ + ⋅= ⋅ + ⋅
= +
u w v u v w vu v w v
0 0
72. To prove: V ∩ V ⊥ = { }0 and ⊥∈ ∈0 0V V since V is a subspace and ⋅0 v = 0 for every ∈v V , so ⊥∈0 V ∴{ }0 ⊂ V ∩ V ⊥ Now suppose ⊥∈ ∩w V V where w = [w1, w2, …, wn] Then ⋅ =w v 0 for all ∈v V However ∈w V so ⋅w w = 0 2 2 2
1 2 ... 0nw w w+ + + = ∴ w1 = w2 = … = wn = 0 ∴ =w 0
Suggested Journal Entry 73. Student Project
SECTION 3.6 Basis and Dimension 273
3.6 Basis and Dimension
The Spin on Spans
1. 2=V R . Let
[ ] [ ] [ ]
[ ][ ]
1 2
2 2
2
, 0, 0 1, 1
,
1, 1
x y c c
c c
c
= +
=
=
.
The given vectors do not span 2R , although they span the one-dimensional subspace [ ]{ }1, 1k k∈R .
2. 3=V R . Letting
[ ] [ ] [ ] [ ]1 2 3, , 1, 0, 0 0, 1, 0 2, 3, 1a b c c c c= + + yields the system of equations
1 3
2 3
3
23
c c ac c b
c c
+ =+ =
=
or
3
2 3
1 3
3 32 2 .
c ac b c b cc a c a c
=
= − = −= − = −
Hence, W spans 3R .
3. 3=V R . Letting
[ ] [ ] [ ] [ ] [ ]1 2 3 4, , 1, 0, 1 2, 0, 4 5, 0, 2 0, 0, 1a b c c c c c= − + + − + yields
1 2 3
1 2 3 4
2 50
4 2 .
a c c cbc c c c c
= + −
== − + + +
These vectors do not span 3R because they cannot give any vector with 0b ≠ . 4. 2=V P . Let
( ) ( ) ( )2 21 2 31 1 2 3at bt c c c t c t t+ + = + + + − + .
Setting the coefficients of 2t , t, and 1 equal to each other gives
23
2 3
1 2 3
:: 2
1: 3
t c at c c b
c c c c
=− =
+ + =
which has the solution 1 5c c b a= − − , 2 2c b a= + , 3c a= . Any vector in V can be written as a
linear combination of vectors in W. Hence, the vectors in W span V.
274 CHAPTER 3 Linear Algebra
5. 2=V P . Let ( ) ( ) ( )2 2 2
1 2 31 1at bt c c t c t c t t+ + = + + + + − .
Setting the coefficients of 2t , t, and 1 equal to each other gives
22 3
1 3
1 2
::
1: .
t c c at c c b
c c c
+ =− =
+ =
If we add the first and second equations, we get
1 2
1 2
c c a bc c c
+ = ++ =
This means we have a solution only if c a b= + . In other words, the given vectors do not span 2P ; they only span a one-dimensional vector space of 3R .
6. 22=V M . Letting
1 2 3 4
1 1 0 0 1 0 0 10 0 1 1 1 0 0 1
a bc c c c
c d⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤
= + + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
we have the equations
1 3
1 4
2 3
2 4 .
c c ac c bc c cc c d
+ =
+ =+ =
+ =
If we add the first and last equation, and then the second and third equations, we obtain the equations
1 2 3 4
1 2 3 4 .c c c c a dc c c c b c+ + + = ++ + + = +
Hence, we have a solution if and only if a d b c+ = + . This means we can solve for 1c , 2c , 3c , 4c
for only a subset of vectors in V. Hence, W does not span 4R .
Independence Day
7. 2=V R . Setting [ ] [ ] [ ]1 21, 1 1, 1 0, 0c c− + − =
we get
1 2
1 2
00
c cc c
− =− + =
which does not imply 1 2 0c c= = . For instance, if we choose 1 1c = , then 2 1c = also. Hence, the
vectors in W are linearly dependent.
SECTION 3.6 Basis and Dimension 275
8. 2=V R . Setting [ ] [ ] [ ]1 21, 1 1, 1 0, 0c c+ − =
we get
1 2
1 2
00
c cc c
+ =− =
which implies 1 2 0c c= = . Hence, the vectors in W are linearly independent.
9. 3=V R . Setting
[ ] [ ] [ ] [ ]1 2 31, 0, 0 1, 1, 0 1, 1, 1 0, 0, 0c c c+ + = we get
1 2 3
2 3
3
000
c c cc c
c
+ + =+ =
=
which implies 1 2 3 0c c c= = = . Hence vectors in W are linearly independent.
10. 3=V R . Setting
[ ] [ ] [ ]1 22, 1, 4 4, 2, 8 0, 0, 0c c− + − = we get
1 2
1 2
1 2
2 4 02 0
4 8 0
c cc cc c
+ =− − =
+ =
which (the equations are all the same) has a nonzero solution 1 2c = − , 2 1c = . Hence, the vectors
in W are linearly dependent.
11. 3=V R . Setting
[ ] [ ] [ ] [ ]1 2 31, 1, 8 3, 4, 2 7, 1, 3 0, 0, 0c c c+ − + − = we get
1 2 3
1 2 3
1 2 3
3 7 04 0
8 2 3 0
c c cc c cc c c
− + =+ − =+ + =
which has only the solution 1 2 3 0c c c= = = . Hence, the vectors in W are linearly independent.
12. 1=V P . Setting
1 2 0c c t+ = ,
we get 1 0c = , 2 0c = . Hence, the vectors in W are linearly independent.
276 CHAPTER 3 Linear Algebra
13. 1=V P . Setting ( ) ( )1 21 1 0c t c t+ + − =
we get
1 2
1 2
00
c cc c
+ =− =
which has a unique solution 1 2 0c c= = . Hence, the vectors in W are linearly independent.
14. 2=V P . Setting
( )1 2 1 0c t c t+ − = , we get
1 2
2
00
c cc
− ==
which implies 1 2 0c c= = . Hence, the vectors in W are linearly independent.
15. 2=V P . Setting
( ) ( ) 21 2 31 1 0c t c t c t+ + − + =
we get
1 2
1 2
3
000
c cc c
c
+ =− =
=
which implies 1 2 3 0c c c= = = . Hence, the vectors in W are linearly independent.
16. 2=V P . Setting
( ) ( ) ( )2 21 2 33 1 2 5 0c t c t c t t+ + − + − − =
we get
1 2 3
1 3
2 3
3 5 00
2 0
c c cc c
c c
− − =− =+ =
which has a nonzero solution 1 1c = − , 2 2c = , 3 1c = − . Hence, the vectors in W are linearly
dependent. 17. 22=V D . Setting
1 2
0 1 0 0 00 0 0 0 1a
c cb
⎡ ⎤ ⎡ ⎤ ⎡ ⎤= +⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦
we get 1c a= , 2c b= . Hence, these vectors are linearly independent and span 22D .
SECTION 3.6 Basis and Dimension 277
18. 22=V D . Setting
1 2
0 1 0 1 00 0 1 0 1a
c cb
⎡ ⎤ ⎡ ⎤ ⎡ ⎤= +⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦
we get 1 2c c a+ = , 1 2c c b− = . We can solve these equations for 1c , 2c , and hence these vectors are linearly independent and span 22D .
Function Space Dependence
19. { },t tS e e−= . We set
1 2 0t tc e c e−+ = . Because we assume this holds for all t, it holds in particular for 0t = , 1, so
1 21
1 2
00
c cec e c−
+ =+ =
which has only the zero solution 1 2 0c c= = . Hence, the functions are linearly independent. 20. { }2, ,t t tS e te t e= . We assume
21 2 3 0t t tc e c te c t e+ + =
for all t. We let 0t = , 1, 2, so
1
1 2 32 2 2
1 2 3
00
2 4 0
cec ec ec
e c e c e c
=+ + =+ + =
which has only the zero solution 1 2 3 0c c c= = = . Hence, these vectors are linearly independent. 21. { }sin , sin 2 , sin3S t t t= . Letting
1 2 3sin sin 2 sin3 0c t c t c t+ + =
for all t. In particular if we choose three values of t, say 6π ,
4π ,
2π , we obtain three equations to
solve for 1c , 2c , 3c , namely,
1 2 3
1 2 3
1 3
1 3 02 2
2 2 02 2
0.
c c c
c c c
c c
⎛ ⎞⎛ ⎞ + + =⎜ ⎟⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠⎛ ⎞ ⎛ ⎞
+ + =⎜ ⎟ ⎜ ⎟⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠
− =
We used Maple to compute the determinant of this coefficient matrix and found it to be
3 1 62 2
− + . Hence, the system has a unique solution 1 2 3 0c c c= = = . Thus, sin t , sin 2t , and
sin3t are linearly independent.
278 CHAPTER 3 Linear Algebra
22. { }2 21, sin , cosS t t= . Because
2 21 sin cos 0t t− − = the vectors are linearly dependent. 23. ( ){ }21, 1, 1S t t= − − . Setting
( ) ( )21 2 31 1 0c c t c t+ − + − =
we get for the coefficients of 1, t, 2t the system of equations
1 2 3
2 3
3
02 0
0
c c cc c
c
− + =− =
=
which has the only zero solution 1 2 3 0c c c= = = . Hence, these vectors are linearly independent. 24. { }, , cosht tS e e t−= . Because
( )1cosh2
t tt e e−= +
we have that 2cosh 0t tt e e−− − = is a nontrivial linear combination that is identically zero for all t. Hence, the vectors are linearly dependent.
25. { }2sin , 4, cos2S t t= . Recall the trigonometric identity
( )2 1sin 1 cos 22
t t= − ,
which can be rewritten as
( )2 12sin 4 cos 2 04
t t− + = .
Hence, we have found a nontrivial linear combination of the three vectors that is identically zero. Hence, the three vectors are linearly dependent.
Independence Testing
26. We will show the only values for which
2
1 2
020
t t
t t
e ec c
e e⎡ ⎤ ⎡ ⎤ ⎡ ⎤
+ =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦⎣ ⎦ ⎣ ⎦
for all t are 1 2 0c c= = and, hence, conclude that the vectors are linearly independent. If it is true
for all t, then it must be true for 0t = (which is the easiest place to test), which yields the two linear equations
1 2
1 2
2 00
c cc c
+ =+ =
whose only solution is 1 2 0c c= = . Hence, the vectors are linearly independent. (This test works only for linear independence.)
Another approach is to say the vectors are linearly independent because clearly there is no constant k such that one vector is k times the other vector for all t.
SECTION 3.6 Basis and Dimension 279
27. We will show that
1 2
sin cos 0cos sin 0
t tc c
t t⎡ ⎤ ⎡ ⎤ ⎡ ⎤
+ =⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦
for all t implies 1 2 0c c= = , and hence, the vectors are linearly independent. If it is true for all t, then it must be true for 0t = , which gives the two equations 2 0c = , 1 0c = . This proves the vectors are linearly independent.
Another approach is to say that the vectors are linearly independent because clearly there
is no constant k such that one vector is k times the other vector for all t. 28. We write
2
21 2 3
2
02 2 3 0
0
t t t
t t t
t t t
e e ec e c e c e
e e e
−
−
⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ + =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎣ ⎦
for all t and see if there are nonzero solutions for 1c , 2c , and 3c .
2
2 2 4
2
2 2 3 0
t t t
t t t t t
t t t
e e ee e e e ee e e
−
− = − ≠ for all 0t ≠ .
We see by Cramer’s Rule that there is a unique solution 1 2 3 0c c c= = = . Therefore the vectors are linearly independent.
29. We write
8
81 2 3
8
2 04 0 0
2 0
t t t
t t
t t t
e e ec e c c e
e e e
− −
−
− −
⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥− + + =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥− ⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎣ ⎦
for all t and see if there are nonzero solutions for 1c , 2c , 3c . Because the above equation is assumed true for all t, it must be true for 0t = (the easy case), or
1 2 3
1 3
1 2 3
2 04 0
2 0.
c c cc cc c c
+ + =− + =
− + =
Writing this in matrix form gives
1
2
3
1 1 2 04 0 1 01 1 2 0
ccc
⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥− =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
The determinant of the coefficient matrix is 18, so the only solution of this linear system is
1 2 3 0c c c= = = , and thus the vectors are linearly independent.
280 CHAPTER 3 Linear Algebra
Twins? 30. We have { } ( ) ( ){ }1 2span cos sin , cos sin cos sin cos sint t t t c t t c t t+ − = + + −
( ) ( ){ }{ }
{ }
1 2 1 2
1 2
cos sin
cos sin
span sin , cos .
c c t c c t
C t C t
t t
= + + −
= +
=
A Questionable Basis
31. The set 1 0 21 , 1 , 10 1 1
⎧ ⎫⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
is not a basis since1 0 2
1 1 0 21 1 1 1 1 2 2 0
1 1 1 10 1 1
= − = − + =− −
−
One of the many possible answers to the second part is: 1 0 11 , 1 00 1 0
⎧ ⎫⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
Wronskian 32. We assume that the Wronskian function
[ ]( ) ( ) ( ) ( ) ( ), 0W f g t f t g t f t g t′ ′= − ≠ for every [ ]0, 1t∈ . To show f and g are linearly independent on [0, 1], we assume that
( ) ( )1 2 0c f t c g t+ = for all t in the interval [0, 1]. Differentiating, we have ( ) ( )1 2 0c f t c g t′ ′+ =
on [0, 1]. Hence, we have the two equations
( ) ( )( ) ( )
1
2
00
f t g t cf t g t c
⎡ ⎤ ⎡ ⎤ ⎡ ⎤=⎢ ⎥ ⎢ ⎥ ⎢ ⎥′ ′ ⎣ ⎦⎣ ⎦⎣ ⎦
.
The determinant of the coefficient matrix is the Wronskian of f and g, which is assumed to be nonzero on [0, 1]. Since 1 2 0c c= = , the vectors are linearly independent.
Zero Wronskian Does Not Imply Linear Dependence
33. a) f(t) = t2 g(t) = 2
2
00
t tt t
⎧ ≥⎪⎨− <⎪⎩
f ′(t) = 2t g ′(t) = 2 02 0t tt t
≥⎧⎨− <⎩
For t ≥ 0 W = 2 2
2 2t tt t
= 0
For t < 0 W = 2 2
2 2t tt t
−−
= −2t3 + 2t3 = 0
∴W = 0 on (−∞, ∞)
b) f and g are linearly independent because f(t) ≠ kg(t) on (−∞, ∞) for every k ∈ R.
SECTION 3.6 Basis and Dimension 281
Linearly Independent Exponentials 34. We compute the Wronskian of f and g:
[ ]( ) ( ) ( )( ) ( )
( ) ( ) ( ) ( ), 0at bt
a b t a b t a b tat bt
f t g t e eW f g t be ae e b a
f t g t ae be+ + += = = − = − ≠
′ ′
for any t provided that b a≠ . Hence, f and g are linearly independent if b a≠ and linearly dependent if b a= .
Looking Ahead
35. The Wronksian is
( ) ( )2 2 21 0
1
t tt t t
t t
e teW e t te e
e e t= = + − = ≠
+.
Hence, the vectors are linearly independent.
Revisiting Linear Independence 36. The Wronskian is
( ) ( ) ( )
33 3 3
33 3 3
3
3 3
55 3 5 5
5 35 9 5 9 5 3
5 9
45 15 45 5 15 5 80 0
t t tt t t t t t
t t t t t tt t t t t t
t t t
t t
e e ee e e e e e
W e e e e e ee e e e e e
e e e
e e
−− − −
−− − −
−
−= − = − +
−
= ⎡ − − − − + + ⎤ = − ≠⎣ ⎦
Hence, the vectors are linearly independent.
Independence Checking
37. W = 2 2
5 cos sinsin cos
0 sin cos 5 5(sin cos )cos sin
0 cos sin
t tt t
t t t tt t
t t
−− = = +
− −− −
= 5 ≠ 0 ∴ The set {5, cos t, sin t} is linearly independent on
38. W = 10 1 1(1 1) 2 00
t tt t
t tt t
t t
e ee e
e ee e
e e
−−
−−
−
−− = = + = ≠
The set {et, e−t, 1} is linearly independent on
39. W =
22 22 2
1 1 2 21 1
0 0 2
t t tt t
t+ −
+ −− =
− = 2(−2 − t − 2 + t)
= −8 ≠ 0 ∴{2 + t, 2 − t, t2} is linearly independent on
40. W =
2 22 23 4 2 1
2 1 3 4 26 2 2 6 2
2 2 6 26 0 2
t t tt t t t
t tt t
− −− −
= +
= ( ) ( )2 2 2 26 4 2( 1) 2 6 8 12t t t t− − + − − = −4 ≠ 0
{3t2 − 4, 2t, t2 − 1} is linearly independent on
282 CHAPTER 3 Linear Algebra
41. W = 2 2cosh sinhcosh sinh
sin cosht t
t tt t
= −
2 2
2 2 2 2
2 2
2 2 2 2 1 04 4 4 4
t t t t
t t t t
e e e e
e e e e
− −
− −
⎛ ⎞ ⎛ ⎞+ −= −⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠
⎛ ⎞+ + − += − = + = ≠⎜ ⎟
⎝ ⎠
{cosh t, sinh t} is linearly independent on
42. W = cos sin( sin ) cos cos sin
t t
t t t t
e t e te t e t e t e t− + +
= 2 2 2 2 2cos cos sin sin sin cost t t te t e t t e t e t t+ + − = 2 2 2 2(cos sin ) 0t te t t e+ = ≠ for all t { cos , sin }t te t e t is linearly independent on
Getting on Base in 2 43. Not a basis because [ ]{ }1, 1 does not span 2 . 44. A basis because [ ] [ ]{ }1, 2 , 2, 1 are linearly independent and span 2 . 45. ( ) ( ){ }1, 1 , 1, 1− − is not a basis because [ ] [ ]1, 1 1, 1− − = − , hence they are linearly dependent. 46. [ ] [ ]{ }1, 0 , 1, 1 is a basis because the vectors are linearly independent and span 2 . 47. [ ] [ ] [ ]{ }1, 0 , 0, 1 , 1, 1 is not a basis because the vectors are linearly dependent. 48. [ ] [ ] [ ] [ ]{ }0, 0 , 1, 1 , 2, 2 , 1, 1− − is not a basis because the vectors are linearly dependent.
The Base for the Space 49. 3=V : S is not a basis because the two vectors are not enough to span 3 . 50. 3=V : Yes, S is a basis because the vectors are linearly independent and span 3 . 51. 3=V : S is not a basis because four vectors are linearly dependent in 3 . 52. 2=V P : Clearly the two vectors 2 3 1t t+ + and 2 2 4t t− + are linearly independent because they
are not constant multiples of one another. They do not span the space because 2dim 3=P . 53. 3=V P : The 3dim 4=P ; i.e., { }3 2 1, , , 1t t t is a basis for 3P .
SECTION 3.6 Basis and Dimension 283
54. 4=V P : We assume that ( ) ( ) ( ) ( )4 3 2
1 2 3 4 53 4 1 5 1 0c t c t c t c t c t t+ + + + + − + − + =
and compare coefficients. We find a homogeneous system of equations that has only the zero solution 1 2 3 4 5 0c c c c c= = = = = . Hence, the vectors are linearly independent. To show the vectors span 4P , we set the above linear combination equal to an arbitrary vector
4 3 2at bt ct dt e+ + + + , and compare coefficients to arrive at a system of equations, which can besolved for 1c , 2c , 3c , 4c , and 5c in terms of a, b, c, d, e. Hence, the vectors span 4P so that they are a basis for 4P .
55. 22=V M : We assume that
1 2 3 4
1 0 0 1 0 0 1 1 0 00 0 0 0 1 0 1 1 0 0
c c c c⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ + + =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
yields the equations
1 4
2 4
3 4
4
0000
c cc c
c cc
+ =+ =+ =
=
which has the zero solution 1 2 3 4 0c c c c= = = = . Hence, the vectors are linearly independent. If
we replace the zero vector on the right of the preceding equation by an arbitrary vector
a bc d⎡ ⎤⎢ ⎥⎣ ⎦
,
we get the four equations
1 4
2 4
3 4
4
c c ac c b
c c cc d
+ =+ =+ =
=
This yields the solution 4 3 2 1, , ,c d c c d c b d c a d= = − = − = − Hence, the four given vectors span 22M . Because they are linearly independent and span 22M ,
they are a basis. 56. 23=V M : If we set a linear combination of these vectors to an arbitrary vector, like
1 2 3 4 5
1 0 1 1 1 0 0 0 0 0 0 0 0 0 00 0 0 0 0 0 1 0 1 1 1 0 1 1 1
a b cc c c c c
d e f⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤
+ + + + =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
we arrive at the algebraic equations
284 CHAPTER 3 Linear Algebra
1 2
2
1
3 4 5
4 5
3 5 .
c c ac b
c cc c c d
c c ec c f
+ ===
+ + =+ =+ =
Looking at the first three equations gives 1c a b= − , 1c c= . If we pick an arbitrary matrix such that a b c− ≠ , we have no solution. Hence, the vectors do not span 22M and do not form a basis. (They are linearly independent however.)
Sizing Them Up
57. { }1, 2 3 1 2 3, 0x x x x x x⎡ ⎤= + + =⎣ ⎦W
Letting 2x α= , 3x β= , we can write 1x α β= − − . Any vector in W can be written as
1
2
3
1 11 00 1
xxx
α βα α ββ
− − − −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥= = +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
where α and β are arbitrary real numbers. Hence, The dimension of W is 2; a basis is
{ [ ]1, 1, 0− , [ ]1, 0, 1− }. 58. { }1, 2 3 4 1 3 2 4, , 0,x x x x x x x x⎡ ⎤= + = =⎣ ⎦W
Letting 3x α= , 4x β= , we have
1
2
3
4 .
xxxx
αβαβ
= −==
=
Any vector in W can be written as
1
2
3
4
1 00 11 00 1
xxxx
αβ
α βαβ
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥= = +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎣ ⎦
where α and β are arbitrary real numbers. Hence, the two vectors [ ]1, 0, 1, 0− and [ ]0, 1, 0, 1
form a basis of W, which is only two-dimensional.
SECTION 3.6 Basis and Dimension 285
Polynomial Dimensions 59. { }, 1t t − . We write
( )1 2 1at b c t c t+ = + − yielding the equations
1 2
2
:1: .t c c a
c b+ =− =
We can represent any vector at b+ as some linear combination of t and 1t − . Hence, we have that { }, 1t t − spans a two-dimensional vector space.
60. { }2, 1, 1t t t− + . We write
( ) ( )2 21 2 31 1at bt c c t c t c t+ + = + − + +
yielding the equations
23
1 2
2 3
::
1: .
t c at c c b
c c c
=+ =− + =
Because we can solve this system for 1c , 2c , 3c in terms of a, b, c getting
1
2
3
c a c bc a cc a
= − + += −=
the subspace spans the entire three-dimensional vector space 2P . 61. { }2 2, 1, 1t t t t− − + . We can see that
( ) ( )2 2 1 1t t t t= − − + + ,
so that the dim of the subspace is 2 and is spanned by any two of the vectors in the set.
Solution Basis 62. Letting z α= we solve for x and y, obtaining 4x α= − , 5y α= . An arbitrary solution of the
system can be expressed as
4 45 5
1
xyz
αα αα
− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥= =⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
Hence, the vector [–4, 5, 1] is a basis for the solutions.
Solution Spaces for Linear Algebraic Systems 63. The matrix of coefficients for the system in Problem 61, Section 3.5
has RREF
1 0 3 00 1 1 00 0 0 10 0 0 0
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
, so 1 3
2 3
4
3 000
x xx x
x
+ =
+ ==
.
Let r = x3; then
286 CHAPTER 3 Linear Algebra
W =
31
: 10
r
⎧ ⎫⎡ ⎤⎪ ⎪⎢ ⎥−⎪ ⎪⎢ ⎥ ∈⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎪ ⎪⎣ ⎦⎩ ⎭
so a basis is
3110
⎧ ⎫⎡ ⎤⎪ ⎪⎢ ⎥−⎪ ⎪⎢ ⎥⎨ ⎬⎢ ⎥⎪ ⎪⎢ ⎥⎪ ⎪⎣ ⎦⎩ ⎭
.
Dim W = 1.
64. The matrix of coefficients for the system, by Problem 62, Section 3.5,
has RREF = 1 0 2 1 30 1 2 3 00 0 0 0 0
⎡ ⎤⎢ ⎥− −⎢ ⎥⎢ ⎥⎣ ⎦
, so 1 3 4 5
2 3 4
2 3 0 2 3 0
x x x xx x x
+ + + =− − =
or 1 3 4 5
2 3 4
2 32 3
x x x xx x x= − − −= +
Therefore a basis for W is
2 1 30 2 3
, ,1 0 00 1 00 0 1
⎧ ⎫− − −⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
.
Dim W = 3.
DE Solution Spaces
65. n
n
dydt
= 0 a) By successive integration we obtain
y = cn−1tn−1 + cn−2tn−2 + ⋅⋅⋅ + c1t + c0 for cn−1 ⋅⋅⋅, c0 ∈ which is a general description of all elements in Pn−1 = the solution space ( )n⊆C
b) A basis for Pn−1 is {1, t, …, tn−1}
Dim Pn−1 = n
66. y′ − 2y = 0 This is a first order linear DE with solution (by either method of Section 2.2) y = Ce2t a) The solution space S = {Ce2t: C ∈ R} ( )n⊆C
b) A basis B = {e2t}, dim S = 1.
67. y′ − 2ty = 0 By the methods of Section 2.2
2ty Ce=
a) S = 2
{ : } ( )t nCe C∈ ⊆C
B = 2
{ },te dim S = 1.
SECTION 3.6 Basis and Dimension 287
68. y′ + (tan t)y = 0 a) By the methods of Section 2.2
y = C cos t , t ,2 2π π⎛ ⎞∈ −⎜ ⎟
⎝ ⎠
S = 1cos : , , ,2 2 2 2
C t C t π π π π⎧ ⎫⎛ ⎞ ⎛ ⎞∈ ∈ − ⊆ −⎨ ⎬⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠⎩ ⎭
C
b) A basis B = cos : ,2 2
t t π π⎧ ⎫⎛ ⎞∈ −⎨ ⎬⎜ ⎟⎝ ⎠⎩ ⎭
;
dim S = 1.
69. y′ + y2 = 0 y2 is not a linear function so y′ + y2 = 0 is not a linear differential equation. By separation of variables
y′ = − y2
2
dy dty
= −∫ ∫
1
11
1
y t c
t cy
yt c
−
= − +−
= −
=−
But these solutions do not form a vector space Let k ∈ R, k ≠ 1; then
kt c−
is not a solution of the ODE.
Hence 1 : ct c
⎧ ⎫∈⎨ ⎬−⎩ ⎭
is not a vector space.
70. y′ + (cos t)y = 0 By the method of Section 2.2 y = Ce−sin t a) S = sin 2{ : } ( )tCe C− ∈ ⊆C
b) B = { }sin te− is a basis for S;
dim S = 1.
288 CHAPTER 3 Linear Algebra
Basis for Subspaces of Rn
71. W = {(a, 0, b, a− b + c): a, b, c ∈ R}
=
1 0 00 0 0
: , ,0 1 01 1 1
a b c a b c
⎧ ⎫⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ + ∈⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪−⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
so
1 0 00 0 0
, ,0 1 01 1 1
⎧ ⎫⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪−⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
is a basis for W.
Dim W = 3.
72. W = {(a, a− b, 2a + 3b): a, b ∈ R}
= 1 01 1 : ,2 3
a b a b⎧ ⎫⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥+ − ∈⎨ ⎬⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦⎩ ⎭
so 1 01 , 12 3
⎧ ⎫⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥−⎨ ⎬⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦⎩ ⎭
is a basis for W.
Dim W = 2.
73. W = {(x + y + z, x + y, 4z, 0): x, y, z ∈ R}
=
1 1 11 1 0
: , , ,0 0 40 0 0
x y z x y z
⎧ ⎫⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥+ + ∈⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
(Note that x + y can be a single element of R.)
so
1 11 0
,0 40 0
⎧ ⎫⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥⎨ ⎬⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦⎩ ⎭
is a basis for W.
Dim W = 2.
SECTION 3.6 Basis and Dimension 289
Two-by-Two Basis 74. Setting
1 2 3
1 0 0 1 0 0 0 00 0 1 0 1 1 0 0
c c c⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤+ + =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
gives 1 0c = , 2 0c = , and 3 0c = . Hence, the given vectors are linearly independent. If we add the vector
0 01 0⎡ ⎤⎢ ⎥⎣ ⎦
,
then the new vectors are still linearly independent (similar proof), and an arbitrary 2 2× matrix can be written as
1 2 3 4
1 0 0 1 0 0 0 00 1 1 0 1 1 1 0
a bc c c c
c d⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤
= + + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
because it reduces to
1
2
3
2 3 4 .
c ac b
c dc c c c
===
+ + =
This yields
1
2
3
4
c ac b
c dc b d
==== − −
in terms of a, b, c, and d and form a basis for 22M , which is four-dimensional.
Basis for Zero Trace Matrices 75. Letting
1 2 3
1 0 0 1 0 00 1 0 0 1 0
a bc c c
c d⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤
+ + =⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥−⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
we find 1a c= , 2b c= , 3c c= , 1d c= − . Given 0a b c d= = = = implies 1 2 3 4 0c c c c= = = = , which shows the vectors (matrices) are linearly independent. It also shows they span the set of 2 2× matrices with trace zero because if 0a d+ = , we can solve for 1c a d= = − , 2c b= , 3c c= . In other words we can write any zero trace 2 2× matrix as follows as a linear combination of the three given vectors (matrices):
1 0 0 1 0 00 1 0 0 1 0
a ba b c
c a⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤
= + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥− −⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦.
Hence, the vectors (matrices) form a basis for the 2 2× zero trace matrices.
290 CHAPTER 3 Linear Algebra
Hyperplane Basis 76. Solving the equation
3 2 6 0x y z w+ − + =
for x we get 3 2 6x y z w= − + − .
Letting y α= , z β= and w γ= , we can write
3 2 6x α β γ= − + − .
Hence, an arbitrary vector ( ), , ,x y z w in the hyperplane can be written
3 2 6 3 2 61 0 00 1 00 0 1
xyzw
α β γα
α β γβγ
− + − − −⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥= = + +⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
.
The set of four-dimensional vectors
3 2 61 0 0
, ,0 1 00 0 1
⎧ ⎫− −⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
is a basis for the hyperplane. 77. Symmetric Matrices
W = : , ,a b
a b cb c
⎧ ⎫⎡ ⎤⎪ ⎪∈⎨ ⎬⎢ ⎥⎪ ⎪⎣ ⎦⎩ ⎭
is the subspace of all symmetric 2 × 2 matrices
A basis for W is 1 0 0 1 0 0
, ,0 0 1 0 0 1
⎧ ⎫⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎪ ⎪⎨ ⎬⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦ ⎣ ⎦⎩ ⎭
.
Dim W = 3.
Making New Basis From Old:
78. B1 = { , , }i j k (Many correct answers)
A typical answer is B2 = { , , }+ +i i j i k
To show linear independence:
Set 1c i + c2 ( )+i j + c3 ( )+i k = 0
c1 + c2 + c3 = 0 c1 + c2 = 0 c1 + c3 = 0
1 1 11 1 0 1 01 0 1
= − ≠
∴ B2 is a basis since dim R3 = 3
SECTION 3.6 Basis and Dimension 291
79. B1 = 1 0 0 0
,0 0 0 1
⎧ ⎫⎡ ⎤ ⎡ ⎤⎪ ⎪⎨ ⎬⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦⎩ ⎭
is a basis for D
A typical answer:
B2 = 1 0 1 0
,0 1 0 1
⎧ ⎫⎡ ⎤ ⎡ ⎤⎪ ⎪⎨ ⎬⎢ ⎥ ⎢ ⎥−⎪ ⎪⎣ ⎦ ⎣ ⎦⎩ ⎭
Both elements are diagonal and B2 is linearly independent
dim D = 2
80. B1 = {sin t, cos t} is a basis for the solution space S
so dim S = 2
Typical answer B2 = {sin t + cos t, sin t − cos t}
Both elements are in S and B2 is linearly independent
Basis for 2P
81. We first show the vectors span 2P by selecting an arbitrary vector from 2P and show it can be written as a linear combination of the three given vectors. We set
( ) ( )2 21 2 31 1at bt c c t t c t c+ + = + + + + +
and try to solve for 1c , 2c , 3c in terms of a, b, c. Setting the coefficients of 2t , t, and 1 equal to each other yields
21
1 2
1 2 3
::
1: ,
t c at c c b
c c c c
=+ =+ + =
giving the solution 1 2 3, , .c a c a b c b c= = − + = − + Hence, the set spans 2P . We also know that the vectors
{ }2 1, 1, 1t t t+ + +
are independent because setting ( ) ( )2
1 2 31 1 0c t t c t c+ + + + + =
we get
1
1 2
1 2 3
000
cc cc c c
=+ =+ + =
which has only the solution 1 2 3 0c c c= = = . Hence, the vectors are a basis for 2P , for example, ( ) ( ) ( )2 23 2 1 3 1 1 1 1 1t t t t t+ + = + + − + − .
292 CHAPTER 3 Linear Algebra
82. True/False Questions a) True b) False, dim W = 2
c) False, The given set is made up of vectors in R2, not R4. The basis for W is made up of
vectors in R4.
83. Essay Question Points to be covered in the essay.
1. Elements of W are linear combinations of
1100
−⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎣ ⎦
and
2011
⎡ ⎤⎢ ⎥⎢ ⎥⎢ ⎥−⎢ ⎥⎣ ⎦
which span W, a subspace of the vector space R4
The set
1 21 0
,0 10 1
⎧ ⎫−⎡ ⎤ ⎡ ⎤⎪ ⎪⎢ ⎥ ⎢ ⎥⎪ ⎪⎢ ⎥ ⎢ ⎥⎨ ⎬⎢ ⎥ ⎢ ⎥−⎪ ⎪⎢ ⎥ ⎢ ⎥⎪ ⎪⎣ ⎦ ⎣ ⎦⎩ ⎭
is linearly independent and in consequence, it is a basis for W.
Convergent Sequence Space
84. V is a vector space since the addition and scalar multiplication operations follow the rules for R
and the operations { } { } { } and { } { }n n n n n na b a b c a ca+ = + = are the precise requirements for closure under vector addition and scalar multiplication. Zero element is {0} where an = 0 for all n Additive Inverse for {an} is {−an} Let W = { }{2 }:{ }n na a ∈V Clearly {0} = W and {2an} + {2bn} = {2an + 2bn} = 2{an + bn}
Also k{2an} = {2kan} for every k ∈ R
∴ W is a subspace
dim W = ∞
A basis is { }{1,0,0,0,...}{0,1,0,0} and so forth .
Cosets in 3R 85. [ ]{ }1 2 3 1 2 3, , 0x x x x x x= + + =W , [ ]0, 0, 1=v
We want to write W in parametric form, so we solve the equation
1 2 3 0x x x+ + = by letting 2x β= , 3x γ= and solving for 1x β γ= − − . These solutions can be written as
[ ] [ ]{ }1, 1, 0 1, 0, 1 : , β γ β γ− + − ∈R ,
SECTION 3.6 Basis and Dimension 293
so the coset of [0, 0, 1] in W is the collection of vectors
[ ] [ ] [ ]{ }0, 0, 1 1, 1, 0 1, 0, 1 ,β γ β γ+ − + − ∈R .
Geometrically, this describes a plane passing through (0, 0, 1) and parallel to 1 2 3 0x x x+ + = . 86. [ ]{ }1 2 3 3, , 0x x x x= =W , [ ]1, 1, 1=v Here a coset through the point (1, 1, 1) is given by the points
[ ] [ ] [ ]{ }1, 1, 1 1, 0, 0 0, 1, 0β γ+ + where β and γ are arbitrary real numbers. This describes the plane through (1, 1, 1) parallel to
the 1x 2x plane (i.e., the subspace W).
More Cosets
87. The coset through the point (1, –2, 1) is given by the points ( ) ( ){ }1, 2, 1 1, 3, 2t− + ;
t is an arbitrary number. This describes a line through (1, –2, 1) parallel to the line ( )1, 3, 2t .
Line in Function Space 88. The general solution of 22 ty y e−′ + = is
( ) 2 2t ty t ce te− −= + . We could say the solution is a “line” in the vector space of solutions, passing through 2tte− in the
direction of 2te− .
Mutual Orthogonality Proof by Contradiction 89. Let 1{ ,..., }nv v be a set of mutually orthogonal nonzero vectors and suppose they are not linearly independent. Then for some j, jv can be written as a linear combination of the others 1 1 ...j n nc c= + +v v v (excluding j jc v )
1 1 ...
0j j j n j nc c⋅ = ⋅ + + ⋅
=
v v v v v v
* jv cannot be zero
Suggested Journal Entry I 90. Student Project
Suggested Journal Entry II 91. Student Project
Suggested Journal Entry III 92. Student Project