Alongi Math 291-1 Notes, Part 2

CHAPTER 3. SYSTEMS OF LINEAR EQUATIONS

Subtracting 3/5 of the left- and right-hand sides of equation 2 from those of equation 3,

\begin{cases} x - 2y - 7z = -16 \\ 5y + 10z = 20 \\ 2z = 6 \end{cases}

Multiplying each side of equation 3 by 1/2,

\begin{cases} x - 2y - 7z = -16 \\ 5y + 10z = 20 \\ z = 3 \end{cases}

Subtracting 10 times each side of equation 3 from the respective side of equation 2, and adding 7 times each side of equation 3 to the respective side of equation 1,

\begin{cases} x - 2y = 5 \\ 5y = -10 \\ z = 3 \end{cases}

Multiplying each side of equation 2 by 1/5,

\begin{cases} x - 2y = 5 \\ y = -2 \\ z = 3 \end{cases}

Adding twice each side of equation 2 to the respective side of equation 1,

\begin{cases} x = 1 \\ y = -2 \\ z = 3 \end{cases}

Thus, the solution set is {(1, -2, 3)}. Geometrically, each of the three equations in this linear system represents a plane in space. The solution set of the system corresponds to the intersection of these three planes. See Figure 3.1.

Example 3.1.10. Solve the system

\begin{cases} x + 2z = 1 \\ 2x + y + 5z = 5 \\ x + y + 3z = 4 \end{cases}

Solution. Subtracting twice each side of the first equation from the corresponding side of the second equation, and subtracting each side of the first equation from the corresponding side of the third equation,

\begin{cases} x + 2z = 1 \\ y + z = 3 \\ y + z = 3 \end{cases}


Notes for the second part of Northwestern University's Math 291 sequence.



Figure 3.2: Example 3.1.10

Subtracting the sides of the second equation from the corresponding sides of the third equation,

\begin{cases} x + 2z = 1 \\ y + z = 3 \\ 0 = 0 \end{cases}

Figure 3.3: Example 3.1.11

Solution. Subtracting twice each side of the first equation from the corresponding sides of the second equation, and subtracting each side of the first equation from the corresponding sides of the third equation,


Terms

coefficients of a linear system
solution of a linear system
solution set of a linear system
equivalent linear systems
consistent linear system
inconsistent linear system

    Concept

    Given a linear system, there are two fundamental questions: Is the system consistent? If the system is consistent, then does the system have a unique solution?

    Skill

Solve systems of linear equations by applying two operations to the equations in the system: multiplying both sides of an equation by a nonzero scalar, and/or adding or subtracting the left- and right-hand sides of one equation to/from the left- and right-hand sides, respectively, of another equation.

    Exercises

1. Determine whether each system of linear equations is consistent or inconsistent, and solve the linear system.

    (a)

\begin{cases} x + 2y = 1 \\ 2x + 3y = 1 \end{cases}

    (b)

\begin{cases} 2x + 4y = 3 \\ 3x + 6y = 2 \end{cases}

    (c)



(a) For which value(s) of k is this system consistent?

(b) For each value of k you found in part (a), how many solutions does the system have?

(c) Find the solution set of the system for each value of k.

3. Linear systems are easier to solve when they are in triangular form, that is, when all coefficients above or below the diagonal are zero. Solve the upper-triangular system

\begin{cases} x_1 + 2x_2 - x_3 + 4x_4 = 3 \\ x_2 + 3x_3 + 7x_4 = 5 \\ x_3 + 2x_4 = 2 \\ x_4 = 0 \end{cases}

    using back substitution.
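The back-substitution step can be sketched in code. The sketch below is illustrative, not part of the original notes, and it reads the garbled signs of the Exercise 3 system as x1 + 2x2 - x3 + 4x4 = 3, x2 + 3x3 + 7x4 = 5, x3 + 2x4 = 2, x4 = 0.

```python
from fractions import Fraction

def back_substitute(U, b):
    """Solve U x = b for an upper-triangular U with nonzero diagonal,
    working from the last equation up to the first."""
    n = len(b)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        # terms involving the already-solved variables x[i+1:]
        known = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (Fraction(b[i]) - known) / U[i][i]
    return x

# Exercise 3, reading the signs as x1 + 2x2 - x3 + 4x4 = 3, etc.
U = [[1, 2, -1, 4],
     [0, 1, 3, 7],
     [0, 0, 1, 2],
     [0, 0, 0, 1]]
b = [3, 5, 2, 0]
print(back_substitute(U, b))  # x1 = 7, x2 = -1, x3 = 2, x4 = 0
```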

    4. Find a polynomial of degree 2 whose graph contains the points (1, 1), (2, 3) and (3, 13).
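Exercise 4 reduces to the methods of this section: writing the unknown polynomial as p(x) = ax^2 + bx + c (a sketch of the setup only, with a, b, c as the unknowns) and imposing p(1) = 1, p(2) = 3, p(3) = 13 gives the linear system

\begin{cases} a + b + c = 1 \\ 4a + 2b + c = 3 \\ 9a + 3b + c = 13 \end{cases}

which the elimination operations above can solve.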

5. Find a system of linear equations with three unknowns whose solution set is the line containing the points (1, 1, 1) and (3, 5, 0).

6. Provide examples to show that the conclusion of Theorem 3.1.8(i) may be true or false if c = 0.

    7. Prove Theorem 3.1.8(ii).

    3.2 Matrices, Row Reduction and Echelon Forms

    Objectives

Relate matrices to systems of linear equations.

Develop elementary row operations and row reduction to echelon and reduced echelon forms.

    3.2.1 Matrices

Definition 3.2.1. Let m and n be positive integers. An m × n matrix A with entries in F is an ordered n-tuple of vectors a_1, . . . , a_n in F^m, denoted

A = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix}

The columns of A are the vectors a_1, . . . , a_n. The entries of A are the components of a_1, . . . , a_n. The size of A is m × n.

Example 3.2.2.

\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}

is a 2 × 3 matrix.

  • 3.2. MATRICES, ROW REDUCTION AND ECHELON FORMS 85

Example 3.2.3. A coordinate vector in F^m is an m × 1 matrix.

Denote the set of m × n matrices with entries in F by F^{m×n}.

Definition 3.2.4. Matrices A and B are equal, denoted A = B, if A and B have the same size, and the corresponding columns of A and B are equal.

Consider

A \overset{\mathrm{def}}{=} \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix}

Denote the i-th component of the j-th column a_j by a_{i,j}. Then

a_j = \begin{bmatrix} a_{1,j} \\ \vdots \\ a_{i,j} \\ \vdots \\ a_{m,j} \end{bmatrix}

In this notation we have

A = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} = \begin{bmatrix} a_{1,1} & \cdots & a_{1,j} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{i,1} & \cdots & a_{i,j} & \cdots & a_{i,n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,j} & \cdots & a_{m,n} \end{bmatrix}

More concisely we may write

A = [a_{i,j}]

when the size of A is unambiguous.

Definition 3.2.5. A row vector is a 1 × n matrix.

Definition 3.2.6. The rows of an m × n matrix

A \overset{\mathrm{def}}{=} \begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix}

are the row vectors

\begin{bmatrix} a_{i,1} & \cdots & a_{i,n} \end{bmatrix}

for i = 1, . . . , m.

Definition 3.2.7. The coefficient matrix of the linear system

\begin{cases} a_{1,1}x_1 + \cdots + a_{1,n}x_n = b_1 \\ \vdots \\ a_{i,1}x_1 + \cdots + a_{i,n}x_n = b_i \\ \vdots \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n = b_m \end{cases}


is the m × n matrix

A = \begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix}

The rows of the coefficient matrix correspond to the equations in the linear system. The columns of the coefficient matrix correspond to the variables of the linear system.

Definition 3.2.8. The augmented matrix of the linear system

\begin{cases} a_{1,1}x_1 + \cdots + a_{1,n}x_n = b_1 \\ \vdots \\ a_{i,1}x_1 + \cdots + a_{i,n}x_n = b_i \\ \vdots \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n = b_m \end{cases}

is the m × (n + 1) matrix

A = \begin{bmatrix} a_{1,1} & \cdots & a_{1,n} & b_1 \\ \vdots & \ddots & \vdots & \vdots \\ a_{m,1} & \cdots & a_{m,n} & b_m \end{bmatrix}

Like the rows of the coefficient matrix, the rows of the augmented matrix correspond to the equations in the linear system. However, unlike the columns of the coefficient matrix, the columns of the augmented matrix do not all correspond to the variables of the linear system. The rightmost column of the augmented matrix does not correspond to a variable.

Example 3.2.9. Consider the linear system


multiplying both sides of an equation by a nonzero scalar, and

adding/subtracting the left- and right-hand sides of one equation to/from the left- and right-hand sides, respectively, of another equation.

    These operations have analogues in the context of matrices called elementary row operations.

    Definition 3.2.10. The elementary row operations on matrices are:

    (i) interchange two rows,

    (ii) multiply a row by a nonzero scalar, and

    (iii) add one row to, or subtract one row from, another.

    Each elementary row operation is reversible.

    (i) To reverse a row interchange, interchange the rows again.

    (ii) To reverse multiplication of a row by a nonzero scalar c, multiply the row by 1/c.

(iii) To reverse addition of the i-th row to the k-th row, subtract the i-th row from the k-th row. To reverse subtraction of the i-th row from the k-th row, add the i-th row to the k-th row.

Adding (or subtracting) a multiple of one row to (or from) another is the same as applying three elementary row operations in sequence. Let c be a nonzero scalar. To add c times row i to row k:

(i) multiply row i by c,

(ii) add row i to row k, and

(iii) multiply row i by 1/c.

Definition 3.2.11. Two matrices A and B are row equivalent, denoted A ∼ B, if there is a sequence of elementary row operations that transforms A into B.

Because of the correspondence between elementary row operations and the operations we used to solve systems of linear equations, the following theorem is self-evident.

Theorem 3.2.12. The augmented matrices of two linear systems are row equivalent if and only if the linear systems are equivalent.

    3.2.3 Echelon Form

    Definition 3.2.13. A leading entry in a row of a matrix is the first nonzero entry in the row.

    Definition 3.2.14. A matrix is in echelon form (or row echelon form) if

    1. All nonzero rows are above any rows of all zeros.

2. Each leading entry of a row is in a column to the right of the leading entry of the row above it.

    3. All entries in a column below a leading entry are zeros.


Definition 3.2.15. A pivot position in a matrix A is a location in A that corresponds to a leading 1 in any echelon form of A. A pivot column of A is a column of A that contains a pivot position.

A matrix has at most one pivot position in each row and at most one pivot position in each column. Thus, the number of pivot positions in an m × n matrix is at most the minimum of m and n.

Considering the vast number of different elementary row operations that one can perform on a matrix, it is somewhat surprising that elementary row operations do not change the pivot positions of a matrix.

    Theorem 3.2.16. Row equivalent matrices have the same pivot positions.

Proof. Let n be a positive integer. We must prove that for all positive integers m, if A is an m × n matrix with entries in F, and a matrix B is in echelon form and is row equivalent to A, then A and B have the same pivot positions. We proceed by means of induction on m.

(i) The Base Case. Let m = 1. If A is the zero matrix, then B is the zero matrix because A and B are row equivalent. So, neither A nor B has any pivot positions. In particular, the pivot positions of A and B are the same.

If A is not the zero matrix, then A has at least one pivot column. Let the first pivot column of A be the i-th column. Then the (1, i)-th entry of A is nonzero, and all previous columns are zero. Because B has exactly one row, B has at most one pivot column. Since A and B are row equivalent, there exists a sequence of elementary row operations transforming A into B. Under any elementary row operation on A the first i − 1 columns of A remain zero and the i-th column remains nonzero. Thus, the first i − 1 columns of B are zero, and the (1, i)-th entry of B is nonzero. Thus, the (1, i)-th entry of B is a pivot position of B. Therefore, the pivot positions of A and B are the same.

    (ii) The Induction Step. Let k be a positive integer.

Induction Hypothesis: Assume that if A is a k × n matrix with entries in F, and the matrix B is in echelon form and is row equivalent to A, then the pivot positions of A and B are the same.

Let A be a (k + 1) × n matrix with entries in F. Assume that the matrix B is in echelon form and is row equivalent to A. If A is the zero matrix, then B is the zero matrix because A and B are row equivalent. In particular, A and B have the same pivot positions.

If A is not the zero matrix, then A has a pivot column. Let the first pivot column of A be the i-th column. Then the i-th column of A has a nonzero entry, and all previous columns are zero. Because A and B are row equivalent, there exists a sequence of elementary row operations transforming A to B. Under any elementary row operation on A the first i − 1 columns of A remain zero and the i-th column remains nonzero. Thus, the first i − 1 columns of B are zero, and the i-th column of B is nonzero. Thus, the i-th column of B is the first pivot column of B.

By multiplying the row containing the first pivot position of B by an appropriate scalar, we can make the pivot entries in the first rows of A and B equal. Let A′ and B′ be the matrices obtained from A and B, respectively, by deleting the first row from each. Then


there exists a sequence of elementary row operations transforming A′ to B′. By the induction hypothesis A′ and B′ have the same pivot positions. Therefore, A and B have the same pivot positions.

By the Principle of Mathematical Induction, for all positive integers m, if A is an m × n matrix with entries in F, and the matrix B is in echelon form and is row equivalent to A, then the pivot positions of A and B are the same.

    3.2.4 Reduced Echelon Form and Row Reduction

    Definition 3.2.17. A matrix is in reduced echelon form (or reduced row echelon form) if

    1. The matrix is in echelon form.

    2. The leading entry in each nonzero row is 1.

    3. Each leading 1 is the only nonzero entry in its column.

The following algorithm applies elementary row operations to obtain a reduced echelon form for a matrix.

An Algorithm for Elimination to Reduced Echelon Form. Place a cursor at the first entry of the first nonzero column of the matrix.

1. If the cursor entry is zero, then interchange the cursor row with the first row below it which has a nonzero entry in the cursor column.

2. Eliminate all other entries in the cursor column below the cursor entry by adding multiples of the cursor row to the other rows.

3. Move the cursor down one row and right one column. If the cursor entry and all entries below it are zero, then move the cursor right one column. Repeat the last step if necessary. If there are no more rows or columns, then go to step 4. Otherwise, return to step 1.

4. Beginning with the rightmost pivot and working upward and to the left, use elementary row operations to create zeros above each pivot. If a pivot is not 1, then multiply the row by a constant to obtain a 1 in the pivot.
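The cursor algorithm above can be sketched in code. The function below is an illustration, not part of the original notes: steps 1 through 3 form the forward phase, step 4 the backward phase, and exact arithmetic is kept with Python's Fraction type.

```python
from fractions import Fraction

def rref(matrix):
    """Reduce a matrix to reduced echelon form, following the cursor
    algorithm: steps 1-3 are the forward elimination phase, step 4
    is the backward phase."""
    A = [[Fraction(v) for v in row] for row in matrix]
    m, n = len(A), len(A[0])
    pivots = []                       # pivot positions found, as (row, col)
    r = c = 0                         # the cursor
    while r < m and c < n:
        # step 1: if the cursor entry is zero, swap in the first row
        # below it with a nonzero entry in the cursor column (if any)
        nz = [i for i in range(r, m) if A[i][c] != 0]
        if not nz:
            c += 1                    # no pivot in this column; move right
            continue
        A[r], A[nz[0]] = A[nz[0]], A[r]
        # step 2: eliminate the entries below the cursor entry
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * p for a, p in zip(A[i], A[r])]
        pivots.append((r, c))
        r, c = r + 1, c + 1           # step 3: move down and right
    # step 4: from the rightmost pivot, working up and to the left,
    # scale each pivot to 1 and create zeros above it
    for r, c in reversed(pivots):
        A[r] = [a / A[r][c] for a in A[r]]
        for i in range(r):
            f = A[i][c]
            A[i] = [a - f * p for a, p in zip(A[i], A[r])]
    return A

# the matrix of Example 3.3.7 reduces as stated there
print(rref([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))
```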

Regardless of the elementary row operations one applies to transform a matrix into reduced echelon form, the reduced echelon form is always the same.

    Theorem 3.2.18. The reduced echelon form of a matrix is unique.

Proof. Let n be a positive integer. We must prove that for all positive integers m, if A is an m × n matrix with entries in F, and matrices B and C are in reduced echelon form and are row equivalent to A, then B = C. We proceed by means of induction on m.

(i) The Base Case. Let m = 1. Because B is row equivalent to A, and A is row equivalent to C, B and C are row equivalent. Thus, there exists a sequence of elementary row operations transforming B into C. Any elementary row operation other than multiplying the one and only row of B by 1 changes the leading entry of the row into a scalar other than 1, and hence transforms B into a matrix that is not in reduced echelon form. Therefore, B = C.


    (ii) The Induction Step. Let k be a positive integer.

Induction Hypothesis: Assume that if A is a k × n matrix with entries in F, and matrices B and C are in reduced echelon form and are row equivalent to A, then B = C.

Let A be a (k + 1) × n matrix with entries in F. Assume that matrices B and C are in reduced echelon form and are row equivalent to A.

If the last row of B contains only zeros, then the last row of C contains only zeros by Theorem 3.2.16. By deleting the last rows of B and C, we obtain two k × n row equivalent matrices in reduced echelon form. By the induction hypothesis these matrices are equal, and since the deleted rows are both zero, B = C.

If the last row of B is nonzero, then it must contain the last pivot position of B. Let the last pivot position of B be in the j-th column of B. By Theorem 3.2.16, the last pivot position of C is in the j-th column of C. Because B and C are in reduced echelon form, the j-th columns of B and C are e_m. Let B′ and C′ be the matrices obtained from B and C, respectively, by deleting the last row from each. Then there exists a sequence of elementary row operations transforming B′ to C′. Because B′ and C′ are in reduced echelon form, B′ = C′ by the inductive hypothesis. Thus, the first m − 1 rows of B and C are equal.

To prove that the last rows of B and C are equal, notice that any elementary row operation that changes the last row must change at least one of the first j entries of the last row. Because the last rows of B and C agree in the first j entries, their last rows must be equal. Therefore, B = C.

By the Principle of Mathematical Induction, for all positive integers m, if A is an m × n matrix with entries in F, and matrices B and C are in reduced echelon form and are row equivalent to A, then B = C.

Denote the reduced echelon form of a matrix A by ref A.

    Terms

matrix
columns of a matrix
size of a matrix
equal matrices
row vector
rows of a matrix
coefficient matrix of a linear system
augmented matrix of a linear system
elementary row operations
row equivalent matrices


    echelon form

    reduced echelon form

    pivot position

    pivot column

    Concepts

    Elementary row operations are reversible.

    Row equivalent matrices have the same pivot positions.

    Every matrix has a unique reduced echelon form.

    Skill

    Find the reduced echelon form of a matrix by applying elementary row operations.

    Exercises

1. Find the reduced echelon form of each of the following matrices. Indicate the pivot positions in the original matrix and in its reduced echelon form. List the pivot columns of the original matrix.

(a)

\begin{bmatrix} 1 & 2 & 3 & 4 \\ 4 & 5 & 6 & 7 \\ 6 & 7 & 8 & 9 \end{bmatrix}

(b)

\begin{bmatrix} 1 & 3 & 5 & 7 \\ 3 & 5 & 7 & 9 \\ 5 & 7 & 9 & 1 \end{bmatrix}

2. Describe all possible echelon forms of a nonzero 2 × 2 matrix.

3. Describe all possible echelon forms of a nonzero 3 × 2 matrix.

4. Describe all possible echelon forms of a nonzero 2 × 3 matrix.

5. Let A, B and C be matrices of the same size.

(a) Prove that A ∼ A.

(b) Prove that if A ∼ B, then B ∼ A.

(c) Prove that if A ∼ B and B ∼ C, then A ∼ C.


    3.3 Solving Systems of Linear Equations

    Objectives

Use row reduction to solve systems of linear equations.

Use a linear system to determine whether a vector b is a linear combination of vectors a_1, . . . , a_n.

Use linear systems to determine whether lines and planes in F^n are parallel, intersecting or skew.

    3.3.1 Solving Systems of Linear Equations by Row Reduction

Definition 3.3.1. A bound variable of a linear system is a variable corresponding to a pivot column of the system's coefficient matrix.

Definition 3.3.2. A free variable of a linear system is a variable that is not a bound variable of the system.

Example 3.3.3. Solve the system


Second, we apply elementary row operations to find the reduced echelon form of this matrix. Adding 1/3 times row 3 to row 2 and subtracting row 3 from row 1 provides

\begin{bmatrix} 3 & 2 & 0 & 7 \\ 0 & 2/3 & 0 & 4/3 \\ 0 & 0 & 1 & 4 \end{bmatrix}

Multiplying row 2 by 3/2 we have

\begin{bmatrix} 3 & 2 & 0 & 7 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 4 \end{bmatrix}

Adding −2 times row 2 to row 1 yields

\begin{bmatrix} 3 & 0 & 0 & 3 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 4 \end{bmatrix}

Multiplying row 1 by 1/3 we obtain

\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 4 \end{bmatrix}

which is in reduced echelon form.

Finally, the linear system corresponding to this augmented matrix is

Example 3.3.4. Solve the system

\begin{cases} x_4 + 2x_5 = 2 \\ 2x_1 + 4x_2 + 2x_5 = 0 \\ 3x_1 + 6x_2 + 6x_3 - 3x_5 = 6 \\ 2x_1 + 4x_2 + x_4 + 4x_5 = 2 \end{cases}

Solution. This system of equations is linear, and its augmented matrix is

\begin{bmatrix} 0 & 0 & 0 & 1 & 2 & 2 \\ 2 & 4 & 0 & 0 & 2 & 0 \\ 3 & 6 & 6 & 0 & -3 & 6 \\ 2 & 4 & 0 & 1 & 4 & 2 \end{bmatrix}

We apply elementary row operations to find an echelon form of this matrix. Interchanging row 1 and row 2 we obtain

\begin{bmatrix} 2 & 4 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 & 2 & 2 \\ 3 & 6 & 6 & 0 & -3 & 6 \\ 2 & 4 & 0 & 1 & 4 & 2 \end{bmatrix}


Multiplying row 1 by 1/2 yields

\begin{bmatrix} 1 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 2 & 2 \\ 3 & 6 & 6 & 0 & -3 & 6 \\ 2 & 4 & 0 & 1 & 4 & 2 \end{bmatrix}

Subtracting 3 times row 1 from row 3, and subtracting 2 times row 1 from row 4 we have

\begin{bmatrix} 1 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 2 & 2 \\ 0 & 0 & 6 & 0 & -6 & 6 \\ 0 & 0 & 0 & 1 & 2 & 2 \end{bmatrix}

Interchanging rows 2 and 3 yields

\begin{bmatrix} 1 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 6 & 0 & -6 & 6 \\ 0 & 0 & 0 & 1 & 2 & 2 \\ 0 & 0 & 0 & 1 & 2 & 2 \end{bmatrix}

Multiplying row 2 by 1/6 we obtain

\begin{bmatrix} 1 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 1 \\ 0 & 0 & 0 & 1 & 2 & 2 \\ 0 & 0 & 0 & 1 & 2 & 2 \end{bmatrix}

Finally, subtracting row 3 from row 4 we have

\begin{bmatrix} 1 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 1 \\ 0 & 0 & 0 & 1 & 2 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

which is in reduced echelon form.

The linear system corresponding to this augmented matrix is

\begin{cases} x_1 + 2x_2 + x_5 = 0 \\ x_3 - x_5 = 1 \\ x_4 + 2x_5 = 2 \\ 0 = 0 \end{cases}

Every (x_1, x_2, x_3, x_4, x_5) satisfies the last equation, 0 = 0. Consequently, the first three equations completely determine the solution set.


The variables x_1, x_3 and x_4 are bound variables, while x_2 and x_5 are free variables. Solving for the bound variables in terms of the free variables,

\begin{cases} x_1 = -2s - t \\ x_2 = s \\ x_3 = t + 1 \\ x_4 = -2t + 2 \\ x_5 = t \end{cases}

The general solution of this linear system is

(x_1, x_2, x_3, x_4, x_5) = (-2s - t, s, t + 1, -2t + 2, t)

for all scalars s and t. The solution set is

{(-2s - t, s, t + 1, -2t + 2, t) : s, t ∈ F}

Because this linear system is equivalent to the original linear system, the solution set of our original linear system is

{(-2s - t, s, t + 1, -2t + 2, t) : s, t ∈ F}

The linear system has one solution for each choice of the parameters s and t. Hence, the linear system has infinitely many solutions.
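As a quick sanity check, the general solution can be substituted back into the original equations for many parameter values. The snippet below is illustrative, not part of the notes, and assumes the signs of the system are as read here: x4 + 2x5 = 2, 2x1 + 4x2 + 2x5 = 0, 3x1 + 6x2 + 6x3 - 3x5 = 6, 2x1 + 4x2 + x4 + 4x5 = 2.

```python
def solution(s, t):
    # the general solution, one 5-tuple per choice of parameters s and t
    return (-2 * s - t, s, t + 1, -2 * t + 2, t)

def satisfies(x):
    # check all four equations of the original system
    x1, x2, x3, x4, x5 = x
    return (x4 + 2 * x5 == 2
            and 2 * x1 + 4 * x2 + 2 * x5 == 0
            and 3 * x1 + 6 * x2 + 6 * x3 - 3 * x5 == 6
            and 2 * x1 + 4 * x2 + x4 + 4 * x5 == 2)

print(all(satisfies(solution(s, t)) for s in range(-3, 4) for t in range(-3, 4)))  # True
```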

Notice that a vector x is a solution of this linear system if and only if

x − (0, 0, 1, 2, 0) = s \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + t \begin{bmatrix} -1 \\ 0 \\ 1 \\ -2 \\ 1 \end{bmatrix}

The solutions of the system are exactly the vectors x that differ from (0, 0, 1, 2, 0) by a linear combination of the vectors

\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} and \begin{bmatrix} -1 \\ 0 \\ 1 \\ -2 \\ 1 \end{bmatrix}

Example 3.3.5. Solve the system


Solution. This system of equations is linear, and its augmented matrix is

\begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 2 & 0 \\ 1 & 1 & 3 & 1 \end{bmatrix}

We apply elementary row operations to find an echelon form of this matrix. Subtracting row 1 from row 3 we obtain

\begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 1 & 2 & 1 \end{bmatrix}

Subtracting row 2 from row 3 yields

\begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

which is in reduced echelon form. The linear system corresponding to this augmented matrix is


    3.3.2 Linear Systems and Linear Combinations

One application of linear systems is to determine whether a vector b is a linear combination of vectors a_1, . . . , a_n.

Theorem 3.3.6. A vector b ∈ F^m is a linear combination of a_1, . . . , a_n ∈ F^m if and only if the linear system with augmented matrix

\begin{bmatrix} a_1 & \cdots & a_n & b \end{bmatrix}

is consistent.

Proof. Let a_1, . . . , a_n, b ∈ F^m. The vector b is a linear combination of a_1, . . . , a_n if and only if there are x_1, . . . , x_n ∈ F so that

x_1 a_1 + \cdots + x_n a_n = b

x_1 \begin{bmatrix} a_{1,1} \\ \vdots \\ a_{m,1} \end{bmatrix} + \cdots + x_n \begin{bmatrix} a_{1,n} \\ \vdots \\ a_{m,n} \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}

\begin{bmatrix} a_{1,1}x_1 + \cdots + a_{1,n}x_n \\ \vdots \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}

That is, b is a linear combination of a_1, . . . , a_n if and only if the linear system

\begin{cases} a_{1,1}x_1 + \cdots + a_{1,n}x_n = b_1 \\ \vdots \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n = b_m \end{cases}

is consistent. The augmented matrix of this linear system is

\begin{bmatrix} a_1 & \cdots & a_n & b \end{bmatrix}

Therefore, a vector b ∈ F^m is a linear combination of a_1, . . . , a_n ∈ F^m if and only if the linear system with augmented matrix

\begin{bmatrix} a_1 & \cdots & a_n & b \end{bmatrix}

is consistent.

Example 3.3.7. Determine whether \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} is a linear combination of \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} and \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}.

Solution. The reduced echelon form of

\begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}


is

\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}

Since the rightmost column of the reduced echelon form is not a pivot column, the corresponding linear system is consistent. Therefore, \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} is a linear combination of \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} and \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}. In fact, from the reduced echelon form we can see that

\begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} = (-1)\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + 2\begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}
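The coefficients read off from the reduced echelon form in Example 3.3.7 can be verified directly. The snippet below is an illustrative check (reading the coefficient on the first vector as −1) that they reproduce b componentwise.

```python
# Check that b = (-1) a1 + 2 a2, componentwise.
a1, a2, b = (1, 2, 3), (4, 5, 6), (7, 8, 9)
combo = tuple(-1 * u + 2 * v for u, v in zip(a1, a2))
print(combo == b)  # True
```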

Example 3.3.8. Determine whether \begin{bmatrix} 7 \\ 8 \\ 10 \end{bmatrix} is a linear combination of \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} and \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}.

Solution. The reduced echelon form of

\begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 10 \end{bmatrix}

is

\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

Since the rightmost column of the reduced echelon form is a pivot column, the corresponding linear system is inconsistent. Therefore, \begin{bmatrix} 7 \\ 8 \\ 10 \end{bmatrix} is not a linear combination of \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} and \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}.

3.3.3 Linear Systems and Intersections of Lines and Planes

We can also use linear systems to determine whether lines and planes in F^n are skew, parallel, or intersecting.

Definition 3.3.9. Lines l_1 and l_2 in F^n are parallel, denoted l_1 ∥ l_2, if there are nonzero vectors v_1, v_2 in F^n parallel to l_1 and l_2, respectively, so that v_1 is parallel to v_2.

Definition 3.3.10. Lines in F^n are skew if they are not parallel and do not intersect. See Figure 3.4.

Figure 3.4: Skew Lines in Space

Example 3.3.11. Determine whether the lines in R^3 with parametric equations


Setting x, y and z from each set of parametric equations equal we obtain


Neither of these vectors is a scalar multiple of the other. Thus, the lines are not parallel.

As in the previous example, the variable t does not represent the same parameter in both sets of parametric equations. So, call the parameter s in the first set of parametric equations.


Solution. Once again, the variable t does not represent the same parameter in both sets of parametric equations. Each line and each plane must have its own set of parameters. So, call the parameter u in the set of parametric equations for the line.


Terms

bound variable
free variable
parallel lines
skew lines

    Concepts

A matrix has at most one pivot position in each row and at most one pivot position in each column.

    Existence and Uniqueness Theorem for Linear Systems

    Skills

Solve a linear system by finding the reduced echelon form of the system's coefficient matrix.

Determine whether a vector b is a linear combination of vectors a_1, . . . , a_n.

Determine whether lines and planes in F^n are parallel, intersecting, or skew.

    Exercises

1. Determine whether each of the following statements is true or false. If the statement is true, then prove it. If the statement is false, then provide a counterexample.

(a) There is a system of three linear equations in three variables that has exactly three solutions.

    (b) Every system of linear equations that has a free variable has infinitely many solutions.

    2. Solve the linear systems.

    (a)



(d) What are the possibilities for the number of solutions of a consistent overdetermined system?

4. Determine whether b is a linear combination of a_1, a_2 and a_3.

(a) a_1 = \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, a_2 = \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}, a_3 = \begin{bmatrix} 5 \\ 6 \\ 8 \end{bmatrix}, b = \begin{bmatrix} 2 \\ 1 \\ 6 \end{bmatrix}

(b) a_1 = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}, a_2 = \begin{bmatrix} 0 \\ 5 \\ 5 \end{bmatrix}, a_3 = \begin{bmatrix} 2 \\ 0 \\ 8 \end{bmatrix}, b = \begin{bmatrix} 5 \\ 11 \\ 7 \end{bmatrix}

5. Find parametric equations for the line of intersection of the planes in R^3 with standard equations x + 2y − 3z = 5 and 5x + 5y − z = 1.

6. Determine whether the lines in R^2 with standard equations x − 4y = 1, 2x − y = 3, and x − 3y = 0 have a common point of intersection.

7. Determine whether the lines in R^3 with parametric equations


12. Find parametric equations of the plane in R^3 containing the point (1, 2, 1) and containing the line of intersection of the planes with standard equations x + y − z = 2 and 2x − y + 3z = 1.

13. Let p_1, p_2 ∈ F^n. Let v_1, v_2 ∈ F^n be nonzero. Let l_1 be the line through p_1 parallel to v_1. Let l_2 be the line through p_2 parallel to v_2.

(a) Prove that l_1 and l_2 intersect if and only if the rightmost column of the matrix

\begin{bmatrix} v_1 & v_2 & p_2 - p_1 \end{bmatrix}

is not a pivot column.

(b) Prove that l_1 and l_2 are parallel if and only if one of the first two columns of the matrix

\begin{bmatrix} v_1 & v_2 & p_2 - p_1 \end{bmatrix}

is not a pivot column.

(c) Prove that l_1 and l_2 are skew if and only if every column of the matrix

\begin{bmatrix} v_1 & v_2 & p_2 - p_1 \end{bmatrix}

is a pivot column.

(d) Prove that no two lines in F^2 are skew.

    3.4 Span

    Objective

    Introduce the span of a set of vectors.

    The span of a set of vectors is the set of all linear combinations of those vectors.

Definition 3.4.1. The span of a nonempty set S ⊆ F^n is the set of all linear combinations of elements of S, denoted span S.

    Example 3.4.2. The span of the set whose only element is the zero vector is span{0} = {0}.

Figure 3.5: The Span of a Nonzero Vector in R^3

Example 3.4.3. Let v ∈ R^n be nonzero. Then span{v} is the line through the origin parallel to v. See Figure 3.5.


Figure 3.6: The Span of Two Nonparallel Vectors in R^3

Example 3.4.4. Let v_1, v_2 ∈ R^n be nonparallel. Then span{v_1, v_2} is the plane through the origin containing v_1 and v_2. See Figure 3.6.

Example 3.4.5. Determine whether \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} is an element of span

  • 3.4. SPAN 107

Example 3.4.6. Find the set of all vectors in R^3 which are not elements of span


    (b) The span of any two vectors is a plane containing 0.

(c) The linear system with augmented matrix \begin{bmatrix} a_1 & a_2 & a_3 & b \end{bmatrix} is consistent if and only if b ∈ span{a_1, a_2, a_3}.

2. Determine whether b is an element of span{a_1, a_2, a_3}.

(a) a_1 = \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, a_2 = \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}, a_3 = \begin{bmatrix} 5 \\ 6 \\ 8 \end{bmatrix}, b = \begin{bmatrix} 2 \\ 1 \\ 6 \end{bmatrix}

(b) a_1 = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}, a_2 = \begin{bmatrix} 0 \\ 5 \\ 5 \end{bmatrix}, a_3 = \begin{bmatrix} 2 \\ 0 \\ 8 \end{bmatrix}, b = \begin{bmatrix} 5 \\ 11 \\ 7 \end{bmatrix}

    3. List five vectors in span



    3.5 The Matrix-Vector Product

    Objective

    Develop an algebraic operation for forming linear combinations.

Definition 3.5.1. The matrix-vector product of an m × n matrix A = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} with columns in F^m and a vector x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} ∈ F^n is the linear combination

Ax \overset{\mathrm{def}}{=} \sum_{j=1}^{n} x_j a_j

of the columns of A with the corresponding components of x as coefficients.

We define the matrix-vector product Ax only if the number of columns of A equals the number of components in x.

Example 3.5.2.

\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} = 7\begin{bmatrix} 1 \\ 4 \end{bmatrix} + 8\begin{bmatrix} 2 \\ 5 \end{bmatrix} + 9\begin{bmatrix} 3 \\ 6 \end{bmatrix} = \begin{bmatrix} 50 \\ 122 \end{bmatrix}

Example 3.5.3.

\begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix} \begin{bmatrix} 7 \\ 8 \end{bmatrix} = 7\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + 8\begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = \begin{bmatrix} 39 \\ 54 \\ 69 \end{bmatrix}

Example 3.5.4.

\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 7 \\ 8 \end{bmatrix}

The matrix has three columns, but the vector only has two components. So, the number of columns of the matrix does not equal the number of components of the vector. Therefore, the matrix-vector product is not defined.
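Definition 3.5.1 translates directly into code. The sketch below is illustrative, not from the notes: it forms the linear combination of the columns of A with the components of x as coefficients, and rejects mismatched sizes as in Example 3.5.4.

```python
def matvec(A, x):
    """Matrix-vector product per Definition 3.5.1: the linear
    combination x_1 a_1 + ... + x_n a_n of the columns of A."""
    m, n = len(A), len(A[0])
    if n != len(x):
        # as in Example 3.5.4, the product is not defined here
        raise ValueError("number of columns of A must equal the number of components of x")
    result = [0] * m
    for j in range(n):        # for each column a_j of A ...
        for i in range(m):    # ... accumulate x_j * a_j into the result
            result[i] += x[j] * A[i][j]
    return result

print(matvec([[1, 2, 3], [4, 5, 6]], [7, 8, 9]))   # [50, 122], as in Example 3.5.2
print(matvec([[1, 4], [2, 5], [3, 6]], [7, 8]))    # [39, 54, 69], as in Example 3.5.3
```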

    3.5.1 Properties of the Matrix-Vector Product

    The matrix-vector product distributes over the vector sum and commutes with scalar multiplication.

Theorem 3.5.5. If A is an m × n matrix with entries in F, then

(i) A(x + y) = Ax + Ay, and

    (ii) A(cx) = c(Ax).


for all x, y ∈ F^n and all c ∈ F.

Proof. Let A \overset{\mathrm{def}}{=} \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} be an m × n matrix with entries in F.

    (i) Let x def= (x1, . . . , xn) and ydef= (y1, . . . , yn) be vectors in Fn.

    A(x+ y) = A

    0B@264 x1...

    xn

    375+264 y1...

    yn

    3751CA Definitions of x and y

    = A

    264 x1 + y1...xn + yn

    375 Definition of the Vector Sum=

    nXj=1

    (xj + yj)aj Definition of the Matrix-Vector Product

    =nX

    j=1

    (xjaj + yjaj) Distributivity of Scalar Multiplication over Addition of Scalars

    =nX

    j=1

    xjaj +nX

    j=1

    yjaj Commutativity and Associativity of the Vector Sum

    = Ax+Ay Definition of the Matrix-Vector Product

    (ii) Exercise.

Definition 3.5.6. For each positive integer $n$, the $n \times n$ identity matrix is the matrix
$$I_n \stackrel{\text{def}}{=} [\,\mathbf{e}_1\ \cdots\ \mathbf{e}_n\,]$$

Example 3.5.7. The $4 \times 4$ identity matrix is
$$I_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Theorem 3.5.8. For all $\mathbf{x} \in \mathbb{F}^n$,
$$I_n\mathbf{x} = \mathbf{x}$$

Proof. Exercise.

Theorem 3.5.9. If $A = [\,\mathbf{a}_1\ \cdots\ \mathbf{a}_n\,]$ is an $m \times n$ matrix with entries in $\mathbb{F}$, then

(i) $A\mathbf{0} = \mathbf{0}$, and

(ii) $A\mathbf{e}_j = \mathbf{a}_j$ for all $j = 1, \ldots, n$.

Proof. Exercise.

The definition of the matrix-vector product expresses the product as a linear combination of the columns of the matrix. This is how we want to think about the matrix-vector product in a theoretical sense. However, when computing the product of a matrix and a vector by hand, it is often more efficient to compute by rows as shown below and in Example 3.5.10.

Let
$$A \stackrel{\text{def}}{=} \begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{i,1} & \cdots & a_{i,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix} \quad \text{and} \quad \mathbf{x} \stackrel{\text{def}}{=} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$
Then
$$A\mathbf{x} = x_1\begin{bmatrix} a_{1,1} \\ \vdots \\ a_{i,1} \\ \vdots \\ a_{m,1} \end{bmatrix} + \cdots + x_n\begin{bmatrix} a_{1,n} \\ \vdots \\ a_{i,n} \\ \vdots \\ a_{m,n} \end{bmatrix} = \begin{bmatrix} a_{1,1}x_1 + \cdots + a_{1,n}x_n \\ \vdots \\ a_{i,1}x_1 + \cdots + a_{i,n}x_n \\ \vdots \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n \end{bmatrix}$$

Example 3.5.10.
$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} = \begin{bmatrix} 1(7) + 2(8) + 3(9) \\ 4(7) + 5(8) + 6(9) \end{bmatrix} = \begin{bmatrix} 50 \\ 122 \end{bmatrix}$$
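The row-by-row computation can be sketched the same way (NumPy assumed): each entry of $A\mathbf{x}$ is the dot product of one row of $A$ with $\mathbf{x}$.

```python
import numpy as np

# Example 3.5.10: computing Ax row by row.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
x = np.array([7, 8, 9])

# Each output entry is the dot product of a row of A with x.
by_rows = np.array([A[i, :] @ x for i in range(A.shape[0])])
print(by_rows)  # [ 50 122]
```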

3.5.2 Linear Systems, Vector Equations and Matrix Equations

We can express a linear system
$$\begin{cases} a_{1,1}x_1 + \cdots + a_{1,n}x_n = b_1 \\ \qquad\vdots \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n = b_m \end{cases}$$
as a vector equation
$$\begin{bmatrix} a_{1,1}x_1 + \cdots + a_{1,n}x_n \\ \vdots \\ a_{m,1}x_1 + \cdots + a_{m,n}x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$$
and as a matrix equation
$$\begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$$
More concisely, this matrix equation is
$$A\mathbf{x} = \mathbf{b}$$
where
$$A = \begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \quad \text{and} \quad \mathbf{b} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$$
From now on we will refer to $A\mathbf{x} = \mathbf{b}$ as a linear system.
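As a quick illustration of the $A\mathbf{x} = \mathbf{b}$ form, here is the triangular system reached in Example 3.1.9 of these notes solved numerically (NumPy assumed):

```python
import numpy as np

# The triangular system from Example 3.1.9, written as Ax = b:
#   x - 2y - 7z = -16,   5y + 10z = 20,   2z = 6
A = np.array([[1.0, -2.0, -7.0],
              [0.0,  5.0, 10.0],
              [0.0,  0.0,  2.0]])
b = np.array([-16.0, 20.0, 6.0])

x = np.linalg.solve(A, b)
print(x)  # [ 1. -2.  3.] -- the solution set {(1, -2, 3)}
```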

3.5.3 Column Space

Definition 3.5.11. The column space of an $m \times n$ matrix $A$ with entries in $\mathbb{F}$ is the set
$$\operatorname{Col}(A) \stackrel{\text{def}}{=} \{A\mathbf{x} : \mathbf{x} \in \mathbb{F}^n\}$$
The column space of a matrix $A$ is the span of the set of columns of $A$. That is, if $A = [\,\mathbf{a}_1\ \cdots\ \mathbf{a}_n\,]$, then
$$\operatorname{Col}(A) = \operatorname{span}\{\mathbf{a}_1, \ldots, \mathbf{a}_n\}$$

Terms

- matrix-vector product
- identity matrix
- column space

Concept

- If $A$ is an $m \times n$ matrix with entries in $\mathbb{F}$, then (i) $A(\mathbf{x}+\mathbf{y}) = A\mathbf{x} + A\mathbf{y}$, and (ii) $A(c\mathbf{x}) = c(A\mathbf{x})$ for all $\mathbf{x},\mathbf{y} \in \mathbb{F}^n$ and all $c \in \mathbb{F}$.

Skill

- Find the product of an $m \times n$ matrix with an $n$-vector.

    Exercises

1. Determine whether each of the following statements is true or false. If the statement is true, then prove it. If the statement is false, then provide a counterexample.

(a) A vector $\mathbf{b}$ is a linear combination of the columns of a matrix $A$ if and only if $A\mathbf{x} = \mathbf{b}$ is consistent.

(b) The equation $A\mathbf{x} = \mathbf{b}$ is consistent if the augmented matrix $[\,A\ \mathbf{b}\,]$ has a pivot position in every row.

(c) If the span of the set of columns of an $m \times n$ matrix $A$ is $\mathbb{F}^m$, then $A\mathbf{x} = \mathbf{b}$ is consistent for all $\mathbf{b} \in \mathbb{F}^m$.

(d) If $A$ is an $m \times n$ matrix and $A\mathbf{x} = \mathbf{b}$ is inconsistent for some $\mathbf{b} \in \mathbb{F}^m$, then $A$ does not have a pivot position in every row.

(e) Every linear combination of vectors can be written as $A\mathbf{x}$ for an appropriate matrix $A$ and vector $\mathbf{x}$.

2. Find the matrix-vector product using the definition of the matrix-vector product and using the row-column method. If the product is undefined, then explain why.

(a) $\begin{bmatrix} -4 & 2 \\ 1 & 6 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 3 \\ -2 \\ 7 \end{bmatrix}$

(b) $\begin{bmatrix} 2 \\ 6 \\ 1 \end{bmatrix} \begin{bmatrix} 5 \\ 1 \end{bmatrix}$

(c) $\begin{bmatrix} 6 & 5 \\ -4 & -3 \\ 7 & 6 \end{bmatrix} \begin{bmatrix} 2 \\ -3 \end{bmatrix}$

(d) $\begin{bmatrix} 8 & 3 & -4 \\ 5 & 1 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$

(e) $\begin{bmatrix} 4 & 2 & 1 \end{bmatrix} \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix}$

3. Find a matrix $A$ and a vector $\mathbf{b}$ so that the matrix equation $A\mathbf{x} = \mathbf{b}$ expresses the linear system
$$\begin{cases} 3x_1 + x_2 + 5x_3 = 9 \\ x_2 + 4x_3 = 0 \end{cases}$$

4. Let $A \stackrel{\text{def}}{=} \begin{bmatrix} 1 & 3 & 4 \\ 3 & 2 & 6 \\ 5 & 1 & 8 \end{bmatrix}$. Describe the set of all $\mathbf{b} \in \mathbb{R}^3$ so that $A\mathbf{x} = \mathbf{b}$ is inconsistent.

5. Prove that if $A$ is an $m \times n$ matrix with entries in $\mathbb{F}$ and $m > n$, then there is some $\mathbf{b} \in \mathbb{F}^m$ so that $A\mathbf{x} = \mathbf{b}$ is inconsistent.

6. Prove that if $A$ is an $n \times n$ matrix with entries in $\mathbb{F}$ and there exists $\mathbf{b} \in \mathbb{F}^n$ so that $A\mathbf{x} = \mathbf{b}$ has a unique solution, then the span of the set of columns of $A$ is $\mathbb{F}^n$.

7. Prove Theorem 3.5.5 (ii).

8. Prove Theorem 3.5.9.

9. Let $A$ be an $m \times n$ matrix with entries in $\mathbb{F}$.

(a) Prove that if $\mathbf{x},\mathbf{y} \in \operatorname{Col}(A)$, then $\mathbf{x}+\mathbf{y} \in \operatorname{Col}(A)$.

(b) Prove that if $c$ is a scalar and $\mathbf{x} \in \operatorname{Col}(A)$, then $c\mathbf{x} \in \operatorname{Col}(A)$.

10. A matrix $A \stackrel{\text{def}}{=} [a_{i,j}]$ is upper triangular if $a_{i,j} = 0$ whenever $i > j$. Prove that an $n \times n$ matrix $A \stackrel{\text{def}}{=} [\,\mathbf{a}_1\ \cdots\ \mathbf{a}_n\,]$ is upper triangular if and only if
$$\operatorname{span}\{A\mathbf{e}_1, \ldots, A\mathbf{e}_k\} \subseteq \operatorname{span}\{\mathbf{e}_1, \ldots, \mathbf{e}_k\}$$
for each $k = 1, \ldots, n$.

3.6 Solution Sets of Linear Systems

Objective

Relate the solution set of a nonhomogeneous linear system $A\mathbf{x} = \mathbf{b}$ to the solution set of the corresponding homogeneous linear system $A\mathbf{x} = \mathbf{0}$.

Definition 3.6.1. A linear system
$$A\mathbf{x} = \mathbf{b}$$
is homogeneous if $\mathbf{b} = \mathbf{0}$. Otherwise, the linear system is nonhomogeneous.

3.6.1 Homogeneous Linear Systems

Every homogeneous linear system $A\mathbf{x} = \mathbf{0}$ is consistent because
$$A\mathbf{0} = \mathbf{0}$$

Definition 3.6.2. The trivial solution of a linear system $A\mathbf{x} = \mathbf{0}$ is $\mathbf{0}$.

Definition 3.6.3. The null space of an $m \times n$ matrix $A$ with entries in $\mathbb{F}$ is the set
$$\operatorname{Nul}(A) \stackrel{\text{def}}{=} \{\mathbf{x} \in \mathbb{F}^n : A\mathbf{x} = \mathbf{0}\}$$
The null space of a matrix $A$ is the solution set of the homogeneous linear system $A\mathbf{x} = \mathbf{0}$. Since every homogeneous linear system has the trivial solution, $\mathbf{0} \in \operatorname{Nul}(A)$.

Example 3.6.4. Express the null space of $A = \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix}$ as the span of a set of vectors.

[Figure 3.7: Example 3.6.4. The line Nul(A) through the origin in the xy-plane.]

Solution. The reduced echelon form of $A$ is
$$\begin{bmatrix} 1 & 3 \\ 0 & 0 \end{bmatrix}$$
The corresponding system of equations is
$$x + 3y = 0$$
Thus,
$$x = -3y$$
Let $t = y$. The null space of $A$ consists of the vectors
$$\begin{bmatrix} x \\ y \end{bmatrix} = t\begin{bmatrix} -3 \\ 1 \end{bmatrix}$$
Therefore,
$$\operatorname{Nul}(A) = \operatorname{span}\left\{\begin{bmatrix} -3 \\ 1 \end{bmatrix}\right\}$$
Geometrically, $\operatorname{Nul}(A)$ is the line in $\mathbb{R}^2$ containing the origin and parallel to $\begin{bmatrix} -3 \\ 1 \end{bmatrix}$. See Figure 3.7.
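A numerical check of Example 3.6.4 (NumPy assumed): every scalar multiple of $(-3, 1)$ is sent to the zero vector, so the whole line lies in $\operatorname{Nul}(A)$.

```python
import numpy as np

# Example 3.6.4: Nul(A) = span{(-3, 1)} for A = [[1, 3], [2, 6]].
A = np.array([[1, 3],
              [2, 6]])
v = np.array([-3, 1])

# Every multiple of v is mapped to the zero vector.
for t in (0.0, 1.0, -2.5):
    assert np.allclose(A @ (t * v), 0)
print("all multiples of (-3, 1) lie in Nul(A)")
```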

3.6.2 Nonhomogeneous Linear Systems

The solution set of a nonhomogeneous linear system $A\mathbf{x} = \mathbf{b}$ differs from the solution set of the corresponding homogeneous linear system $A\mathbf{x} = \mathbf{0}$ by a constant vector.

Theorem 3.6.5. Let $A$ be an $m \times n$ matrix with entries in $\mathbb{F}$, and let $\mathbf{b} \in \mathbb{F}^m$. If $\mathbf{p} \in \mathbb{F}^n$ is a solution of $A\mathbf{x} = \mathbf{b}$, then the solution set of $A\mathbf{x} = \mathbf{b}$ is
$$\{\mathbf{x} \in \mathbb{F}^n : \mathbf{x} - \mathbf{p} \in \operatorname{Nul}(A)\}$$

Proof. If $\mathbf{x} \in \mathbb{F}^n$ is a solution of $A\mathbf{x} = \mathbf{b}$, then
$$A(\mathbf{x} - \mathbf{p}) = A\mathbf{x} - A\mathbf{p} = \mathbf{b} - \mathbf{b} = \mathbf{0}$$
Thus, $\mathbf{x} - \mathbf{p} \in \operatorname{Nul}(A)$.

Conversely, if $\mathbf{x} - \mathbf{p} \in \operatorname{Nul}(A)$, then
$$A(\mathbf{x} - \mathbf{p}) = \mathbf{0}, \qquad A\mathbf{x} - A\mathbf{p} = \mathbf{0}, \qquad A\mathbf{x} - \mathbf{b} = \mathbf{0}, \qquad A\mathbf{x} = \mathbf{b}$$
Thus, $\mathbf{x}$ is a solution of $A\mathbf{x} = \mathbf{b}$.

Therefore, the solution set of $A\mathbf{x} = \mathbf{b}$ is the set $\{\mathbf{x} \in \mathbb{F}^n : \mathbf{x} - \mathbf{p} \in \operatorname{Nul}(A)\}$.

[Figure 3.8: Example 3.6.6. The solution set of Ax = b: the line through (3, 2) parallel to Nul(A).]

Example 3.6.6. Let $A = \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 9 \\ 18 \end{bmatrix}$. Express the solution set of $A\mathbf{x} = \mathbf{b}$ in terms of $\operatorname{Nul}(A)$.

Solution. By inspection
$$\mathbf{p} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}$$
is a solution of $A\mathbf{x} = \mathbf{b}$. In Example 3.6.4 we showed that
$$\operatorname{Nul}(A) = \operatorname{span}\left\{\begin{bmatrix} -3 \\ 1 \end{bmatrix}\right\}$$
By Theorem 3.6.5, the solution set of $A\mathbf{x} = \mathbf{b}$ consists of all vectors $\mathbf{x}$ so that
$$\mathbf{x} - \mathbf{p} = t\begin{bmatrix} -3 \\ 1 \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} 3 \\ 2 \end{bmatrix} + t\begin{bmatrix} -3 \\ 1 \end{bmatrix}$$
for some $t \in \mathbb{R}$. This set is a line in $\mathbb{R}^2$ containing the point $(3, 2)$ parallel to $\begin{bmatrix} -3 \\ 1 \end{bmatrix}$. See Figure 3.8.
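A sketch verifying Theorem 3.6.5 on Example 3.6.6 (NumPy assumed): translating the particular solution $\mathbf{p}$ by any element of $\operatorname{Nul}(A)$ again solves $A\mathbf{x} = \mathbf{b}$.

```python
import numpy as np

# Example 3.6.6: every point p + t*v solves Ax = b, where p = (3, 2) is a
# particular solution and v = (-3, 1) spans Nul(A).
A = np.array([[1, 3],
              [2, 6]])
b = np.array([9, 18])
p = np.array([3, 2])
v = np.array([-3, 1])

for t in (-1.0, 0.0, 2.0):
    x = p + t * v
    assert np.allclose(A @ x, b)
print("p + t*v solves Ax = b for every t tried")
```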

Terms

- homogeneous linear system
- nonhomogeneous linear system
- null space

Concepts

- The null space of a matrix $A$ is the solution set of the homogeneous linear system $A\mathbf{x} = \mathbf{0}$.
- The solution set of a nonhomogeneous linear system $A\mathbf{x} = \mathbf{b}$ differs from the solution set of the corresponding homogeneous linear system $A\mathbf{x} = \mathbf{0}$ by a constant vector.

Skills

- Express the solution set of a homogeneous linear system as the span of a set of vectors.
- Express the solution set of a nonhomogeneous linear system as a translate of the solution set of the corresponding homogeneous linear system by a particular solution of the nonhomogeneous linear system.

Exercises

1. Determine whether each of the following statements is true or false. If the statement is true, then prove it. If the statement is false, then provide a counterexample.

(a) The solution set of the linear system with augmented matrix $[\,\mathbf{a}_1\ \mathbf{a}_2\ \mathbf{a}_3\ \mathbf{b}\,]$ equals the solution set of $A\mathbf{x} = \mathbf{b}$ where $A \stackrel{\text{def}}{=} [\,\mathbf{a}_1\ \mathbf{a}_2\ \mathbf{a}_3\,]$.

(b) If $A\mathbf{x} = \mathbf{b}$ is inconsistent, then $\mathbf{b}$ is not an element of the span of the set of columns of $A$.

(c) If the augmented matrix $[\,A\ \mathbf{b}\,]$ has a pivot position in every row, then $A\mathbf{x} = \mathbf{b}$ is inconsistent.

(d) If $A$ is an $m \times n$ matrix with entries in $\mathbb{F}$ and the span of the set of columns of $A$ is not $\mathbb{F}^m$, then $A\mathbf{x} = \mathbf{b}$ is inconsistent for some $\mathbf{b} \in \mathbb{F}^m$.

(e) A homogeneous linear system is always consistent.

(f) If $\mathbf{x}$ is a nonzero solution of $A\mathbf{x} = \mathbf{0}$, then every component of $\mathbf{x}$ is nonzero.

(g) The equation $A\mathbf{x} = \mathbf{b}$ is homogeneous if $\mathbf{0}$ is a solution.

(h) The solution set of $A\mathbf{x} = \mathbf{b}$ is a translate of the solution set of $A\mathbf{x} = \mathbf{0}$.

2. Express $\operatorname{Nul}(A)$ as the span of a set of vectors.

(a) $A \stackrel{\text{def}}{=} \begin{bmatrix} 3 & 9 & 6 \\ 1 & 3 & 2 \end{bmatrix}$

(b) $A \stackrel{\text{def}}{=} \begin{bmatrix} 1 & 3 & 3 & 7 \\ 0 & 1 & 4 & 5 \end{bmatrix}$

(c) $A \stackrel{\text{def}}{=} \begin{bmatrix} 1 & 4 & 2 & 0 & 3 & 5 \\ 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$

3. Provide an example of a $3 \times 3$ nonzero matrix $A$ so that $\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} \in \operatorname{Nul}(A)$.

4. Let $A$ be an $m \times n$ matrix with entries in $\mathbb{F}$.

(a) Prove that if $\mathbf{x},\mathbf{y} \in \operatorname{Nul}(A)$, then $\mathbf{x}+\mathbf{y} \in \operatorname{Nul}(A)$.

(b) Prove that if $c$ is a scalar and $\mathbf{x} \in \operatorname{Nul}(A)$, then $c\mathbf{x} \in \operatorname{Nul}(A)$.

3.7 Linear Independence

Objective

Develop the concepts of linear independence and linear dependence of families of vectors.

Recall from Proposition 1.5.8 that vectors $\mathbf{a},\mathbf{b} \in \mathbb{F}^n$ are parallel if and only if there are scalars $k_1, k_2$, not both zero, such that
$$k_1\mathbf{a} + k_2\mathbf{b} = \mathbf{0}$$
We can reformulate this characterization of parallel vectors by saying that $\mathbf{a}$ and $\mathbf{b}$ are parallel if and only if there is an equation consisting of the zero vector on one side, and a linear combination of $\mathbf{a}$ and $\mathbf{b}$ on the other side in which at least one of the coefficients is nonzero. We will use this observation to generalize parallel pairs of vectors to linearly dependent families of vectors.

3.7.1 Linear Relations

Definition 3.7.1. A linear relation among $\mathbf{a}_1, \ldots, \mathbf{a}_p \in \mathbb{F}^n$ is an equation
$$\sum_{j=1}^{p} k_j\mathbf{a}_j = \mathbf{0}$$
where $k_1, \ldots, k_p$ are scalars.

Example 3.7.2. The equations
$$6\begin{bmatrix} 1 \\ 2 \end{bmatrix} + (-2)\begin{bmatrix} 3 \\ 6 \end{bmatrix} = \mathbf{0} \quad \text{and} \quad 0\begin{bmatrix} 1 \\ 2 \end{bmatrix} + 0\begin{bmatrix} 3 \\ 6 \end{bmatrix} = \mathbf{0}$$
are linear relations between $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 3 \\ 6 \end{bmatrix}$.

Definition 3.7.3. The trivial relation among $\mathbf{a}_1, \ldots, \mathbf{a}_p \in \mathbb{F}^n$ is the linear relation
$$\sum_{j=1}^{p} 0\mathbf{a}_j = \mathbf{0}$$

Definition 3.7.4. A nontrivial relation among $\mathbf{a}_1, \ldots, \mathbf{a}_p \in \mathbb{F}^n$ is a linear relation among $\mathbf{a}_1, \ldots, \mathbf{a}_p$ which is not the trivial relation.

We can now characterize parallel vectors using linear relations: Vectors $\mathbf{a},\mathbf{b} \in \mathbb{F}^n$ are parallel if and only if there is a nontrivial linear relation between $\mathbf{a}$ and $\mathbf{b}$.

3.7.2 Families

When we specify a set by listing its elements, any repetition in the listing does not contribute to the elements of the set. For example,
$$\{1, 2, 2, 2, 5, 5\} = \{1, 2, 5\}$$
It is now useful for us to consider collections with repeated elements. We will call such a collection a family. For example, the set of columns of the matrix
$$A = \begin{bmatrix} 1 & 3 & 1 \\ 2 & 5 & 2 \end{bmatrix}$$
is
$$\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 3 \\ 5 \end{bmatrix}\right\}$$
However, the family of columns of $A$ is
$$\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 3 \\ 5 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$$
We often distinguish among repeated elements in a family by assigning each element a subscript index. For example, if we consider $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\} \subseteq \mathbb{F}^n$ as a family, then we allow for the possibility that $\mathbf{a}_i = \mathbf{a}_j$ for $i \neq j$. A useful way to distinguish between a set and a family is to remember that families may have twins.

3.7.3 Linear Independence

Definition 3.7.5. A family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\} \subseteq \mathbb{F}^n$ is linearly independent if the only linear relation among $\mathbf{a}_1, \ldots, \mathbf{a}_p$ is the trivial relation.

Definition 3.7.6. A family of vectors in $\mathbb{F}^n$ is linearly dependent if it is not linearly independent.

Example 3.7.7. Let $\mathbf{a} \neq \mathbf{0}$. The family $\{\mathbf{a}\}$ is linearly independent because if
$$k\mathbf{a} = \mathbf{0}$$
then $k = 0$.

Example 3.7.8. The family $\{\mathbf{0}\}$ is linearly dependent because $1\mathbf{0} = \mathbf{0}$ is a nontrivial linear relation.

Example 3.7.9. The family $\{\mathbf{a}_1, \mathbf{a}_2\}$ is linearly dependent if and only if $\mathbf{a}_1$ and $\mathbf{a}_2$ are parallel.

Example 3.7.10. Any family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ containing the zero vector is linearly dependent since
$$0\mathbf{a}_1 + \cdots + 1\mathbf{0} + \cdots + 0\mathbf{a}_p = \mathbf{0}$$
is a nontrivial linear relation.

Example 3.7.11. Any family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ containing the same vector more than once, say $\mathbf{a}_i = \mathbf{a}_j$, is linearly dependent since
$$0\mathbf{a}_1 + \cdots + 1\mathbf{a}_i + \cdots + (-1)\mathbf{a}_j + \cdots + 0\mathbf{a}_p = \mathbf{0}$$
is a nontrivial linear relation.

Example 3.7.12. Determine whether the family
$$\left\{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}, \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix}\right\}$$
is linearly independent. If the family is linearly dependent, then find a nontrivial linear relation among its elements.

Solution. Consider a linear relation
$$k_1\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + k_2\begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} + k_3\begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} = \mathbf{0}, \qquad \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \\ k_3 \end{bmatrix} = \mathbf{0}$$
Let $A \stackrel{\text{def}}{=} \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}$. The reduced echelon form of $A$ is
$$\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$$
Since there is a column of $A$ which is not a pivot column, the linear system $A\mathbf{x} = \mathbf{0}$ has a nontrivial solution. Thus, the family
$$\left\{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}, \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix}\right\}$$
is linearly dependent.

To obtain a nontrivial linear relation, solve for $k_1$, $k_2$ and $k_3$:
$$\begin{bmatrix} k_1 \\ k_2 \\ k_3 \end{bmatrix} = t\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$$
For $t = 1$ we obtain the nontrivial linear relation
$$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} - 2\begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} + \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} = \mathbf{0}$$
Notice that we can solve for one of these vectors in terms of the other two. For example,
$$\begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix} = (-1)\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + 2\begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}$$
So, $\begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix}$ is a linear combination of $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ and $\begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}$.

Example 3.7.13. Determine whether the family
$$\left\{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}, \begin{bmatrix} 7 \\ 8 \\ 10 \end{bmatrix}\right\}$$
is linearly independent.

Solution. Consider a linear relation
$$k_1\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + k_2\begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} + k_3\begin{bmatrix} 7 \\ 8 \\ 10 \end{bmatrix} = \mathbf{0}, \qquad \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 10 \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \\ k_3 \end{bmatrix} = \mathbf{0}$$
Let $A \stackrel{\text{def}}{=} \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 10 \end{bmatrix}$. The reduced echelon form of $A$ is the $3 \times 3$ identity matrix
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Since every column of $A$ is a pivot column, the only solution of the linear system $A\mathbf{x} = \mathbf{0}$ is the trivial solution. Thus, the family
$$\left\{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}, \begin{bmatrix} 7 \\ 8 \\ 10 \end{bmatrix}\right\}$$
is linearly independent.
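Examples 3.7.12 and 3.7.13 can also be checked with a rank computation (NumPy assumed): a family of columns is linearly independent exactly when every column is a pivot column, i.e. when the rank equals the number of columns.

```python
import numpy as np

# A family of columns is linearly independent exactly when the matrix has a
# pivot in every column, i.e. full column rank.
A = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]])    # Example 3.7.12
B = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 10]])   # Example 3.7.13

print(np.linalg.matrix_rank(A))  # 2 -> a non-pivot column -> dependent
print(np.linalg.matrix_rank(B))  # 3 -> every column a pivot -> independent

# The nontrivial relation found in Example 3.7.12: v1 - 2*v2 + v3 = 0.
assert np.allclose(A @ np.array([1, -2, 1]), 0)
```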

Theorem 3.7.14. The family of columns of a matrix $A$ is linearly independent if and only if $\operatorname{Nul}(A) = \{\mathbf{0}\}$.

Proof. Let $A \stackrel{\text{def}}{=} [\,\mathbf{a}_1\ \cdots\ \mathbf{a}_p\,]$. The family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ is linearly independent if and only if the only linear relation among $\mathbf{a}_1, \ldots, \mathbf{a}_p$ is the trivial relation
$$\sum_{j=1}^{p} 0\mathbf{a}_j = \mathbf{0}$$
That is, the only solution of $A\mathbf{x} = \mathbf{0}$ is $\mathbf{x} = \mathbf{0}$. Equivalently, $\operatorname{Nul}(A) = \{\mathbf{0}\}$.

Theorem 3.7.15. Let $p$ and $m$ be positive integers. If $p > m$, then the family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\} \subseteq \mathbb{F}^m$ is linearly dependent.

Proof. Define a matrix
$$A \stackrel{\text{def}}{=} [\,\mathbf{a}_1\ \cdots\ \mathbf{a}_p\,]$$
The size of $A$ is $m \times p$, and $A$ has at most one pivot position in each row. Thus, $A$ has at most $m$ pivot positions. Since $p > m$, there is at least one column of $A$ which is not a pivot column. Consequently, $\operatorname{Nul}(A) \neq \{\mathbf{0}\}$. Therefore, the family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ is linearly dependent.

Example 3.7.16. The family $\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 3 \\ 4 \end{bmatrix}, \begin{bmatrix} 5 \\ 6 \end{bmatrix}\right\}$ is linearly dependent since it consists of 3 vectors in $\mathbb{R}^2$ and $3 > 2$.

Linear Dependence Theorem. Let $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ be a family of $p \geq 2$ vectors in $\mathbb{F}^m$. The family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ is linearly dependent if and only if $\mathbf{a}_1 = \mathbf{0}$ or there exists a positive integer $q > 1$ such that $\mathbf{a}_q$ is a linear combination of $\mathbf{a}_1, \ldots, \mathbf{a}_{q-1}$.

Proof. ($\Rightarrow$) Assume that $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ is linearly dependent. Then there exist scalars $k_1, \ldots, k_p$ not all zero so that
$$\sum_{j=1}^{p} k_j\mathbf{a}_j = \mathbf{0}$$
Let $q$ be the greatest integer so that $k_q \neq 0$. If $q = 1$, then
$$k_1\mathbf{a}_1 = \mathbf{0}$$
Since $k_1 \neq 0$, we conclude that $\mathbf{a}_1 = \mathbf{0}$.

If $q > 1$, then
$$\sum_{j=1}^{q-1} k_j\mathbf{a}_j + k_q\mathbf{a}_q + \sum_{j=q+1}^{p} 0\mathbf{a}_j = \mathbf{0}$$
$$k_q\mathbf{a}_q = -\sum_{j=1}^{q-1} k_j\mathbf{a}_j$$
$$\mathbf{a}_q = \sum_{j=1}^{q-1} \left(-\frac{k_j}{k_q}\right)\mathbf{a}_j$$
Therefore, there exists a positive integer $q > 1$ such that $\mathbf{a}_q$ is a linear combination of $\mathbf{a}_1, \ldots, \mathbf{a}_{q-1}$.

($\Leftarrow$) Assume there exists a positive integer $q > 1$ such that $\mathbf{a}_q$ is a linear combination of $\mathbf{a}_1, \ldots, \mathbf{a}_{q-1}$. Then there exist scalars $k_1, \ldots, k_{q-1}$ such that
$$\mathbf{a}_q = \sum_{j=1}^{q-1} k_j\mathbf{a}_j$$
Then
$$\sum_{j=1}^{q-1} (-k_j)\mathbf{a}_j + \mathbf{a}_q + \sum_{j=q+1}^{p} 0\mathbf{a}_j = \mathbf{0}$$
is a nontrivial linear relation among $\mathbf{a}_1, \ldots, \mathbf{a}_p$. Therefore, the family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ is linearly dependent.

3.7.4 Hyperplanes

Recall that the plane $\mathcal{P}$ in $\mathbb{F}^n$ containing $\mathbf{p} \in \mathbb{F}^n$ and parallel to the non-parallel vectors $\mathbf{a}$ and $\mathbf{b}$ in $\mathbb{F}^n$ is
$$\mathcal{P} \stackrel{\text{def}}{=} \{\mathbf{x} \in \mathbb{F}^n : \mathbf{x} - \mathbf{p} = s\mathbf{a} + t\mathbf{b} \text{ for some scalars } s \text{ and } t\}$$
Now that we have generalized the notion of non-parallel to linear independence, we can consider higher-dimensional analogues of planes.

Definition 3.7.17. The hyperplane $\mathcal{H}$ in $\mathbb{F}^n$ containing $\mathbf{p} \in \mathbb{F}^n$ and parallel to the linearly independent vectors $\mathbf{v}_1, \ldots, \mathbf{v}_{n-1}$ in $\mathbb{F}^n$ is
$$\mathcal{H} \stackrel{\text{def}}{=} \left\{\mathbf{x} \in \mathbb{F}^n : \mathbf{x} - \mathbf{p} = \sum_{j=1}^{n-1} t_j\mathbf{v}_j \text{ for some scalars } t_1, \ldots, t_{n-1}\right\}$$

Example 3.7.18.

1. A hyperplane in $\mathbb{R}^2$ is a line in $\mathbb{R}^2$.

2. A hyperplane in $\mathbb{R}^3$ is a plane in $\mathbb{R}^3$.

Terms

- linear relation
- nontrivial linear relation
- trivial linear relation
- linearly independent family of vectors
- linearly dependent family of vectors
- hyperplane

Concepts

- Vectors $\mathbf{a},\mathbf{b} \in \mathbb{F}^n$ are parallel if and only if there is a nontrivial linear relation between $\mathbf{a}$ and $\mathbf{b}$.
- The family of columns of a matrix $A$ is linearly independent if and only if $\operatorname{Nul}(A) = \{\mathbf{0}\}$.
- Let $p$ and $m$ be positive integers. If $p > m$, then the family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\} \subseteq \mathbb{F}^m$ is linearly dependent.
- Linear Dependence Theorem

Skill

- Determine whether a family of vectors is linearly independent or linearly dependent. If the family is linearly dependent, then find a nontrivial linear relation among the vectors in the family, and express one of the vectors in the family as a linear combination of the other vectors in the family.

Exercises

1. Determine whether each statement is true or false. If the statement is true, then prove it. If the statement is false, then provide a specific counterexample to show that the statement is not always true.

(a) The family of columns of a matrix $A$ is linearly independent if the equation $A\mathbf{x} = \mathbf{0}$ has the trivial solution.

(b) If a family $S$ is linearly dependent, then each element of $S$ is a linear combination of the remaining elements of $S$.

(c) The family of columns of every $3 \times 4$ matrix is linearly dependent.

(d) If $\{\mathbf{a},\mathbf{b}\}$ is linearly independent and $\{\mathbf{a},\mathbf{b},\mathbf{c}\}$ is linearly dependent, then $\mathbf{c} \in \operatorname{span}\{\mathbf{a},\mathbf{b}\}$.

(e) Two vectors are linearly dependent if and only if they lie on the same line through the origin.

(f) If a family in $\mathbb{F}^n$ contains fewer than $n$ vectors, then the family is linearly independent.

(g) If $\{\mathbf{a},\mathbf{b}\}$ is linearly independent and $\mathbf{c} \in \operatorname{span}\{\mathbf{a},\mathbf{b}\}$, then $\{\mathbf{a},\mathbf{b},\mathbf{c}\}$ is linearly dependent.

(h) If a family of vectors in $\mathbb{F}^n$ is linearly dependent, then the family contains more than $n$ vectors.

(i) If $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\} \subseteq \mathbb{F}^4$ and $\mathbf{a}_3 = 2\mathbf{a}_1 + \mathbf{a}_2$, then $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\}$ is linearly dependent.

(j) If $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\} \subseteq \mathbb{F}^4$ and $\mathbf{a}_3 = \mathbf{0}$, then $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\}$ is linearly dependent.

(k) If $\mathbf{a}_1,\mathbf{a}_2 \in \mathbb{F}^4$ are not parallel, then $\{\mathbf{a}_1,\mathbf{a}_2\}$ is linearly independent.

(l) If $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\} \subseteq \mathbb{F}^4$ and $\mathbf{a}_3$ is not a linear combination of $\{\mathbf{a}_1,\mathbf{a}_2,\mathbf{a}_4\}$, then $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\}$ is linearly dependent.

(m) If $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\} \subseteq \mathbb{F}^4$ and $\{\mathbf{a}_1,\mathbf{a}_2,\mathbf{a}_3\}$ is linearly dependent, then $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\}$ is linearly dependent.

(n) If $\{\mathbf{a}_1, \ldots, \mathbf{a}_4\} \subseteq \mathbb{F}^4$ is linearly independent, then $\{\mathbf{a}_1,\mathbf{a}_2,\mathbf{a}_3\}$ is linearly independent.

2. Determine whether the family of vectors is linearly independent. If not, then find a nontrivial linear relation among the vectors.

(a) …

…

(b) Prove that if $A\mathbf{x} = \mathbf{b}$ has at most one solution for each $\mathbf{b} \in \mathbb{F}^m$, then the family of columns of $A$ is linearly independent.

5. Prove that if $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\} \subseteq \mathbb{F}^n$ is linearly independent, then $\{\mathbf{a}_1-\mathbf{a}_2, \mathbf{a}_2-\mathbf{a}_3, \ldots, \mathbf{a}_{p-1}-\mathbf{a}_p, \mathbf{a}_p\}$ is linearly independent.

6. Prove that if $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\} \subseteq \mathbb{F}^n$ is linearly independent, $\mathbf{b} \in \mathbb{F}^n$ and $\{\mathbf{a}_1 + \mathbf{b}, \ldots, \mathbf{a}_p + \mathbf{b}\}$ is linearly dependent, then $\mathbf{b} \in \operatorname{span}\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$.

7. Let $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\} \subseteq \mathbb{F}^n$. Prove that if $\mathbf{a}_1 \neq \mathbf{0}$, and for each $j = 2, \ldots, p$, the vector $\mathbf{a}_j$ has a nonzero entry in a component where each of the preceding vectors $\mathbf{a}_1, \ldots, \mathbf{a}_{j-1}$ has a zero, then the family $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\}$ is linearly independent.

8. Prove that for all scalars $a_1, \ldots, a_n, b$ such that $a_1, \ldots, a_n$ are not all zero, the set of vectors $(x_1, \ldots, x_n)$ in $\mathbb{R}^n$ whose components satisfy the linear equation
$$a_1x_1 + \cdots + a_nx_n = b$$
is a hyperplane in $\mathbb{R}^n$.

Chapter 4

Linear Transformations

4.1 Linear Transformations: Definition and Properties

Objectives

- Define linear transformation and matrix transformation.
- Relate linear transformations to matrix transformations.
- Provide examples of linear transformations.
- Describe images of sets with respect to linear transformations.

Let $A$ be an $m \times n$ matrix with entries in $\mathbb{F}$. Define a function $T : \mathbb{F}^n \to \mathbb{F}^m$ by
$$T(\mathbf{x}) \stackrel{\text{def}}{=} A\mathbf{x}$$

Definition 4.1.1. A function $T : \mathbb{F}^n \to \mathbb{F}^m$ is a matrix transformation if there exists a matrix $A$ such that
$$T(\mathbf{x}) = A\mathbf{x}$$
for all $\mathbf{x} \in \mathbb{F}^n$.

Example 4.1.2. Define $T : \mathbb{R}^3 \to \mathbb{R}^2$ by
$$T(\mathbf{x}) \stackrel{\text{def}}{=} \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix} \mathbf{x}$$
Then
$$T\left(\begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix}\right) = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix} = \begin{bmatrix} -9 \\ -10 \end{bmatrix}$$

Recall two important properties of the matrix-vector product. If $A$ is an $m \times n$ matrix with entries in $\mathbb{F}$, then

(i) $A(\mathbf{x}+\mathbf{y}) = A\mathbf{x} + A\mathbf{y}$, and

(ii) $A(k\mathbf{x}) = k(A\mathbf{x})$

for all $\mathbf{x},\mathbf{y} \in \mathbb{F}^n$ and all scalars $k$. A function possessing two analogous properties is called a linear transformation.

[Figure 4.1: A Linear Transformation. A vector x in Fn is mapped by T to T(x) in Fm.]

Definition 4.1.3. A function $T : \mathbb{F}^n \to \mathbb{F}^m$ is a linear transformation if

(i) (Additivity) $T(\mathbf{x}+\mathbf{y}) = T(\mathbf{x}) + T(\mathbf{y})$ for all $\mathbf{x},\mathbf{y} \in \mathbb{F}^n$, and

(ii) (Homogeneity) $T(k\mathbf{x}) = kT(\mathbf{x})$ for all $\mathbf{x} \in \mathbb{F}^n$ and all scalars $k$.

Example 4.1.4. Let $r$ be a nonnegative real number. Define $T : \mathbb{R}^n \to \mathbb{R}^n$ by
$$T(\mathbf{x}) \stackrel{\text{def}}{=} r\mathbf{x}$$
Then
$$T(\mathbf{x}+\mathbf{y}) = r(\mathbf{x}+\mathbf{y}) = r\mathbf{x} + r\mathbf{y} = T(\mathbf{x}) + T(\mathbf{y})$$
and
$$T(k\mathbf{x}) = r(k\mathbf{x}) = k(r\mathbf{x}) = kT(\mathbf{x})$$
for all $\mathbf{x},\mathbf{y} \in \mathbb{R}^n$ and all scalars $k$. Therefore, $T$ is a linear transformation called a scaling transformation. In particular, if $r = 1$, then we call $T$ the identity transformation. If $r > 1$, then we call $T$ a dilation. If $0 \leq r < 1$, then we call $T$ a contraction. If $r = 0$, then we call $T$ the zero transformation. Notice that the zero transformation is also a contraction.

Example 4.1.5. Let $\mathbf{a} \in \mathbb{F}^n$. Define $T : \mathbb{F}^n \to \mathbb{F}^n$ by
$$T(\mathbf{x}) \stackrel{\text{def}}{=} \operatorname{proj}_{\mathbf{a}}\mathbf{x}$$
If $\mathbf{a} = \mathbf{0}$, then $T(\mathbf{x}) = \mathbf{0}$ for all $\mathbf{x} \in \mathbb{F}^n$. So, $T$ is the zero transformation, which is a linear transformation by Example 4.1.4.

[Figure 4.2: A Projection Transformation in R2. A vector x and its projection T(x) onto the line through a.]

If $\mathbf{a} \neq \mathbf{0}$, then for each $\mathbf{x} \in \mathbb{F}^n$,
$$T(\mathbf{x}) = \operatorname{proj}_{\mathbf{a}}\mathbf{x} = \langle\mathbf{x},\mathbf{u}\rangle\mathbf{u}$$
where $\mathbf{u}$ is a unit vector in the direction of $\mathbf{a}$. Thus,
$$T(\mathbf{x}+\mathbf{y}) = \langle\mathbf{x}+\mathbf{y},\mathbf{u}\rangle\mathbf{u} = (\langle\mathbf{x},\mathbf{u}\rangle + \langle\mathbf{y},\mathbf{u}\rangle)\mathbf{u} = \langle\mathbf{x},\mathbf{u}\rangle\mathbf{u} + \langle\mathbf{y},\mathbf{u}\rangle\mathbf{u} = T(\mathbf{x}) + T(\mathbf{y})$$
and
$$T(k\mathbf{x}) = \langle k\mathbf{x},\mathbf{u}\rangle\mathbf{u} = (k\langle\mathbf{x},\mathbf{u}\rangle)\mathbf{u} = k(\langle\mathbf{x},\mathbf{u}\rangle\mathbf{u}) = kT(\mathbf{x})$$
for all $\mathbf{x},\mathbf{y} \in \mathbb{F}^n$ and all scalars $k$. Therefore, $T$ is a linear transformation called the projection transformation onto $\mathbf{a}$. See Figure 4.2.

Theorem 4.1.6. Every matrix transformation $T : \mathbb{F}^n \to \mathbb{F}^m$ is a linear transformation.

Proof. Let $T : \mathbb{F}^n \to \mathbb{F}^m$ be a matrix transformation. There is an $m \times n$ matrix $A$ so that
$$T(\mathbf{x}) = A\mathbf{x}$$
for all $\mathbf{x} \in \mathbb{F}^n$. For all $\mathbf{x},\mathbf{y} \in \mathbb{F}^n$ and all scalars $k$ we obtain
$$T(\mathbf{x}+\mathbf{y}) = A(\mathbf{x}+\mathbf{y}) = A\mathbf{x} + A\mathbf{y} = T(\mathbf{x}) + T(\mathbf{y})$$
and
$$T(k\mathbf{x}) = A(k\mathbf{x}) = k(A\mathbf{x}) = kT(\mathbf{x})$$
Therefore, $T$ is a linear transformation.

[Figure 4.3: Rotation by θ in R2. A vector x is rotated through the angle θ to T(x).]

Example 4.1.7. Let $\theta$ be a real number. Consider the function $T : \mathbb{R}^2 \to \mathbb{R}^2$ which rotates each vector in $\mathbb{R}^2$ by $\theta$ radians. See Figure 4.3.

Let $\mathbf{x} = \begin{bmatrix} x \\ y \end{bmatrix} \in \mathbb{R}^2$. Express $\mathbf{x}$ in polar coordinates $r$ and $\varphi$:
$$\mathbf{x} = \begin{bmatrix} r\cos\varphi \\ r\sin\varphi \end{bmatrix}$$
Recall the addition formulas for cosine and sine:
$$\cos(\varphi + \theta) = \cos\varphi\cos\theta - \sin\varphi\sin\theta$$
$$\sin(\varphi + \theta) = \sin\varphi\cos\theta + \cos\varphi\sin\theta$$
Applying the addition formulas we obtain
$$T(\mathbf{x}) = \begin{bmatrix} r\cos(\varphi + \theta) \\ r\sin(\varphi + \theta) \end{bmatrix} = \begin{bmatrix} r\cos\varphi\cos\theta - r\sin\varphi\sin\theta \\ r\sin\varphi\cos\theta + r\cos\varphi\sin\theta \end{bmatrix} = \begin{bmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \mathbf{x}$$
Therefore, $T$ is a matrix transformation. Since every matrix transformation is a linear transformation, $T$ is a linear transformation, called a rotation transformation.
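A small numerical sketch of the rotation matrix derived above (NumPy assumed):

```python
import numpy as np

# Example 4.1.7: rotation by theta as the matrix [[cos, -sin], [sin, cos]].
def rotation_matrix(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# Rotating e1 by 90 degrees gives e2.
R = rotation_matrix(np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0]), 6))  # [0. 1.]
```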

[Figure 4.4: A Horizontal Shear in R2]

Example 4.1.8. Let $k \in \mathbb{R}$. Define $T : \mathbb{R}^2 \to \mathbb{R}^2$ by
$$T\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) \stackrel{\text{def}}{=} \begin{bmatrix} x + ky \\ y \end{bmatrix}$$
Since
$$T\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} x + ky \\ y \end{bmatrix} = \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$
the function $T$ is a matrix transformation. Thus, $T$ is a linear transformation. The transformation $T$ is called a horizontal shear. See Figure 4.4.

[Figure 4.5: A Vertical Shear in R2]

Example 4.1.9. Let $k \in \mathbb{R}$. Define $T : \mathbb{R}^2 \to \mathbb{R}^2$ by
$$T\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) \stackrel{\text{def}}{=} \begin{bmatrix} x \\ kx + y \end{bmatrix}$$
Since
$$T\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} x \\ kx + y \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$
the function $T$ is a matrix transformation. Thus, $T$ is a linear transformation. The transformation $T$ is called a vertical shear. See Figure 4.5.
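The shear matrices of Examples 4.1.8 and 4.1.9 in a short sketch (NumPy assumed):

```python
import numpy as np

# A horizontal shear slides points horizontally in proportion to their height;
# a vertical shear slides points vertically in proportion to their x-coordinate.
k = 2.0
H = np.array([[1.0, k],
              [0.0, 1.0]])   # horizontal shear (Example 4.1.8)
V = np.array([[1.0, 0.0],
              [k,   1.0]])   # vertical shear (Example 4.1.9)

x = np.array([1.0, 1.0])
print(H @ x)  # [3. 1.]  (x-coordinate shifted by k*y)
print(V @ x)  # [1. 3.]  (y-coordinate shifted by k*x)
```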

Theorem 4.1.10. If $T : \mathbb{F}^n \to \mathbb{F}^m$ is a linear transformation, then
$$T(\mathbf{0}) = \mathbf{0}$$

Proof. By homogeneity,
$$T(\mathbf{0}) = T(0\mathbf{0}) = 0\,T(\mathbf{0}) = \mathbf{0}$$

Example 4.1.11. The function $f : \mathbb{R} \to \mathbb{R}$ defined by
$$f(x) \stackrel{\text{def}}{=} 2x - 3$$
is not a linear transformation because $f(0) = -3 \neq 0$.

Example 4.1.12. The function $f : \mathbb{R} \to \mathbb{R}$ defined by
$$f(x) \stackrel{\text{def}}{=} \sqrt{|x|}$$
is not a linear transformation because
$$f(9 + 16) = f(25) = \sqrt{25} = 5 \neq 7 = 3 + 4 = \sqrt{9} + \sqrt{16} = f(9) + f(16)$$
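Theorem 4.1.10 gives a quick non-linearity test matching Example 4.1.11; a minimal sketch:

```python
# A linear transformation must send 0 to 0 (Theorem 4.1.10), so checking the
# value at 0 is a fast way to rule out linearity.
def f(x):
    return 2 * x - 3  # the function of Example 4.1.11

print(f(0))  # -3, not 0, so f is not a linear transformation
```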

Linear transformations are exactly the functions that preserve linear combinations.

Theorem 4.1.13. A function $T : \mathbb{F}^n \to \mathbb{F}^m$ is a linear transformation if and only if for every integer $p \geq 2$,
$$T\left(\sum_{j=1}^{p} k_j\mathbf{v}_j\right) = \sum_{j=1}^{p} k_jT(\mathbf{v}_j)$$
for all $\mathbf{v}_1, \ldots, \mathbf{v}_p \in \mathbb{F}^n$ and all scalars $k_1, \ldots, k_p$.

Proof. Exercise.

Definition 4.1.14. Let $f : X \to Y$ be a function. The image of $S \subseteq X$ with respect to $f$ is the set
$$f(S) \stackrel{\text{def}}{=} \{f(x) : x \in S\}$$

Definition 4.1.15. The image of a function $f : X \to Y$ is the set
$$\operatorname{im}(f) \stackrel{\text{def}}{=} f(X)$$

[Figure 4.6: Linear Transformations Map Lines to Lines or Points]

Example 4.1.16. Describe the image of a line with respect to a linear transformation.

Solution. Let $l$ be a line in $\mathbb{F}^n$ passing through $\mathbf{p}$ and parallel to $\mathbf{v} \neq \mathbf{0}$. That is,
$$l = \{\mathbf{p} + t\mathbf{v} : t \text{ is a scalar}\}$$
Let $T : \mathbb{F}^n \to \mathbb{F}^m$ be a linear transformation. Then
$$T(\mathbf{p} + t\mathbf{v}) = T(\mathbf{p}) + tT(\mathbf{v})$$
for all scalars $t$. If $T(\mathbf{v}) \neq \mathbf{0}$, then the image of the line $l$ with respect to $T$ is
$$T(l) = \{T(\mathbf{p}) + tT(\mathbf{v}) : t \text{ is a scalar}\}$$
which is the line in $\mathbb{F}^m$ containing $T(\mathbf{p})$ and parallel to $T(\mathbf{v})$. If $T(\mathbf{v}) = \mathbf{0}$, then $T(l) = \{T(\mathbf{p})\}$, which is a single point.

In general, linear transformations map lines to lines or points. See Figure 4.6.

Theorem 4.1.17. If $k$ is a positive integer and $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation, then for every $k$-parallelotope $P$ in $\mathbb{R}^n$, the set $T(P)$ is a $k$-parallelotope.

Proof. Assume that $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation, $k$ is a positive integer, and $P$ is a $k$-parallelotope in $\mathbb{R}^n$. Then there exist $\mathbf{p}, \mathbf{v}_1, \ldots, \mathbf{v}_k \in \mathbb{R}^n$ such that
$$P = \left\{\mathbf{p} + \sum_{j=1}^{k} t_j\mathbf{v}_j : 0 \leq t_j \leq 1 \text{ for } j = 1, \ldots, k\right\}$$
…

Terms

- linear transformation
- scaling transformation
- identity transformation
- dilation
- contraction
- zero transformation
- rotation transformation
- projection transformation
- horizontal shear
- vertical shear
- image of a set with respect to a function
- image of a function

Concepts

- Every matrix transformation is a linear transformation.
- Linear transformations are exactly those functions that preserve linear combinations.
- Linear transformations map the zero vector in the domain to the zero vector in the codomain.

Skills

- Determine whether a given function is a linear transformation.
- Describe the image of a set with respect to a linear transformation.

Exercises

1. Let $T : \mathbb{F}^n \to \mathbb{F}^m$ be a matrix transformation with matrix $A$. What is the size of $A$?

2. True or False: The function $T : \mathbb{R}^2 \to \mathbb{R}^2$ which rotates vectors by a fixed angle is a linear transformation.

3. For each of the following linear transformations $T : \mathbb{R}^2 \to \mathbb{R}^2$, plot $\mathbf{a} = \begin{bmatrix} 5 \\ 2 \end{bmatrix}$, $\mathbf{b} = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$, $T(\mathbf{a})$ and $T(\mathbf{b})$. Describe $T$ geometrically.

(a) $T(\mathbf{x}) \stackrel{\text{def}}{=} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \mathbf{x}$

(b) $T(\mathbf{x}) \stackrel{\text{def}}{=} \begin{bmatrix} 1/2 & 0 \\ 0 & 1/2 \end{bmatrix} \mathbf{x}$

(c) $T(\mathbf{x}) \stackrel{\text{def}}{=} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \mathbf{x}$

(d) $T(\mathbf{x}) \stackrel{\text{def}}{=} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \mathbf{x}$

4. Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be a linear transformation so that
$$T(\mathbf{e}_1) = \begin{bmatrix} 2 \\ 5 \end{bmatrix} \quad \text{and} \quad T(\mathbf{e}_2) = \begin{bmatrix} 1 \\ 6 \end{bmatrix}$$
Find $T\left(\begin{bmatrix} 5 \\ 3 \end{bmatrix}\right)$ and $T\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)$.

5. Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be a linear transformation so that
$$T\left(\begin{bmatrix} 5 \\ 2 \end{bmatrix}\right) = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \quad \text{and} \quad T\left(\begin{bmatrix} 1 \\ 3 \end{bmatrix}\right) = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$$
Find $T\left(\begin{bmatrix} 15 \\ 6 \end{bmatrix}\right)$ and $T\left(\begin{bmatrix} 17 \\ 12 \end{bmatrix}\right)$.

6. Define $f : \mathbb{R}^2 \to \mathbb{R}^3$ by
$$f(x_1, x_2) \stackrel{\text{def}}{=} \begin{bmatrix} 2x_1 - 3x_2 \\ x_1 + 4 \\ 5x_2 \end{bmatrix}$$
Prove that $f$ is not a linear transformation.

7. A function $f : \mathbb{F}^n \to \mathbb{F}^n$ is a translation if there exists $\mathbf{b} \in \mathbb{F}^n$ such that
$$f(\mathbf{x}) = \mathbf{x} + \mathbf{b}$$
for all $\mathbf{x} \in \mathbb{F}^n$. Prove that a translation is a linear transformation if and only if $\mathbf{b} = \mathbf{0}$.

8. Prove that if $\{\mathbf{a}_1, \ldots, \mathbf{a}_p\} \subseteq \mathbb{F}^n$ is linearly dependent and $T : \mathbb{F}^n \to \mathbb{F}^m$ is a linear transformation, then $\{T(\mathbf{a}_1), \ldots, T(\mathbf{a}_p)\}$ is linearly dependent.

9. Prove that if $T : \mathbb{F} \to \mathbb{F}$ is a linear transformation, then there is a unique scalar $a \in \mathbb{F}$ so that $T(x) = ax$ for all $x \in \mathbb{F}$.

10. Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation.

(a) Prove that for every line $l$ in $\mathbb{R}^n$, the set $T(l)$ is a line or a point.

(b) Prove that for every plane $\mathcal{P}$ in $\mathbb{R}^n$, the set $T(\mathcal{P})$ is either a plane, a line or a point.

11. Prove that if $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation, then for every segment $\sigma$ in $\mathbb{R}^n$, the set $T(\sigma)$ is a segment or a point.

12. The $k$-simplex $\Delta$ in $\mathbb{R}^n$ with vertex $\mathbf{p} \in \mathbb{R}^n$ determined by $\mathbf{v}_1, \ldots, \mathbf{v}_k \in \mathbb{R}^n$ is
$$\Delta \stackrel{\text{def}}{=} \ldots$$

  • 138 CHAPTER 4. LINEAR TRANSFORMATIONS

    Let x 2 Fn. Since span{e1, . . . en} = Fn there exist scalars x1, . . . , xn such that

    x =nX

    j=1

    xjej

    Hence,

    T (x) = T

    0@ nXj=1

    xjej

    1A=

    nXj=1

    xjT (ej)

    =T (e1) T (en)

    264 x1...xn

    375= Ax

    Therefore, T is a matrix transformation.
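The construction in this proof can be carried out numerically. In the NumPy sketch below, the particular T is an illustrative choice (not from the text); the point is that building A column by column from the images of the standard basis vectors recovers T(x) = Ax.

```python
import numpy as np

# Illustrative check of the proof's construction: the matrix A whose
# columns are T(e1), ..., T(en) satisfies T(x) = Ax.
def T(x):
    # An assumed sample linear map R^3 -> R^2.
    x1, x2, x3 = x
    return np.array([2 * x1 - x3, x2 + 4 * x3], dtype=float)

n = 3
# A = [T(e1) ... T(en)], built from the columns of the identity matrix.
A = np.column_stack([T(np.eye(n)[:, j]) for j in range(n)])

x = np.array([1.0, -2.0, 5.0])
assert np.allclose(T(x), A @ x)   # T(x) = Ax
```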

Definition 4.2.2. The standard matrix of a linear transformation T : F^n → F^m is the matrix

    [ T(e1) ··· T(en) ]

where {e1, . . . , en} is the set of standard basis vectors in F^n.

Example 4.2.3. Let r be a nonnegative real number. Find the standard matrix of the scaling transformation T : R^n → R^n defined by

    T (x) def= rx

    Solution. Since T (ei) = rei for i = 1, . . . , n, the standard matrix for T is

    [ re1 ··· ren ] = [ r 0 ··· 0 ]
                      [ 0 r ··· 0 ]
                      [ ⋮ ⋮  ⋱  ⋮ ]
                      [ 0 0 ··· r ]

N
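A quick NumPy sketch of this example (r and n below are arbitrary sample values): since T(e_i) = r e_i, assembling the columns gives r times the identity matrix.

```python
import numpy as np

# Standard matrix of the scaling T(x) = r*x on R^n: each column is
# T(e_i) = r e_i, so the matrix is r * I_n.
r, n = 2.5, 4
A = np.column_stack([r * np.eye(n)[:, i] for i in range(n)])
assert np.allclose(A, r * np.eye(n))
```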

Example 4.2.4. Let a ∈ F^n be nonzero. Find the standard matrix of the projection transformation T : F^n → F^n defined by

    T(x) def= proj_a x

Solution. Let

    u = a / ‖a‖,


    and let u1, . . . , un be the components of u. Then

    T(ei) = proj_a ei = ⟨ei, u⟩ u = ui u

for i = 1, . . . , n. Thus, the standard matrix for T is

    [ u1u ··· unu ]

N
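In NumPy the matrix [u1u ··· unu] is exactly the outer product u uᵀ. The sketch below uses an arbitrary sample vector a and checks the columns against the outer product and against the projection formula ⟨x, u⟩u.

```python
import numpy as np

# Standard matrix of projection onto a nonzero a: column i is u_i * u,
# so the whole matrix is the outer product u u^T, with u = a / ||a||.
a = np.array([3.0, 4.0])            # sample choice of a
u = a / np.linalg.norm(a)
P = np.column_stack([u[i] * u for i in range(len(u))])
assert np.allclose(P, np.outer(u, u))

# P x agrees with the projection <x, u> u computed directly.
x = np.array([2.0, 7.0])
assert np.allclose(P @ x, (x @ u) * u)
```

A projection matrix is idempotent (projecting twice changes nothing), which gives an independent sanity check: P @ P equals P.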

Example 4.2.5. Let θ be a real number. Find the standard matrix of the rotation transformation T : R^2 → R^2 by θ radians.

Solution. Since

    T(e1) = [ cos θ ]    and    T(e2) = [ −sin θ ]
            [ sin θ ]                   [  cos θ ]

the standard matrix for T is

    [ cos θ  −sin θ ]
    [ sin θ   cos θ ]

    N
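A NumPy sketch of this example (θ = π/6 is an arbitrary sample angle): the columns of R are the rotated basis vectors T(e1) and T(e2), and composing two rotations by θ agrees with a single rotation by 2θ.

```python
import numpy as np

# Standard matrix of rotation of R^2 by theta radians.
theta = np.pi / 6
R = np.column_stack([
    np.array([np.cos(theta), np.sin(theta)]),    # T(e1)
    np.array([-np.sin(theta), np.cos(theta)]),   # T(e2)
])

# Rotating by theta twice agrees with rotating by 2*theta.
R2 = np.array([[np.cos(2 * theta), -np.sin(2 * theta)],
               [np.sin(2 * theta),  np.cos(2 * theta)]])
assert np.allclose(R @ R, R2)
```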

    Term

    standard matrix of a linear transformation

    Concept

A function from F^n to F^m is a linear transformation if and only if it is a matrix transformation.

    Skill

Find the standard matrix of a linear transformation from F^n to F^m.

    Exercises

    1. Determine whether each statement is true or false.

    (a) A linear transformation T : R^n → R^m is completely determined by its effect on the columns of I_n.

    (b) There is a linear transformation T : F^n → F^m which is not a matrix transformation.


    2. Find the standard matrix of each linear transformation.

    (a) T : R^2 → R^4, T(e1) = (3, 1, 3, 1) and T(e2) = (5, 2, 0, 0).

    (b) T : R^3 → R^2, T(e1) = (1, 3), T(e2) = (4, 7) and T(e3) = (5, 4).

    (c) T : R^2 → R^2 contracts vectors by a factor of 1/3.

    (d) T : R^2 → R^2 projects vectors onto (1, 2).

    (e) T : R^2 → R^2 rotates vectors counterclockwise about the origin by 3π/2 radians.

    (f) T : R^2 → R^2 rotates vectors clockwise about the origin by π/4 radians.

3. Define T : R^4 → R^4 by

       T((x1, x2, x3, x4)) def= ( 0, x1 + x2, x2 + x3, x3 + x4 )

   Prove that T is a linear transformation by showing that T is a matrix transformation.

    4.3 The Algebra of Linear Transformations and Matrices

    Objective

Relate sums, scalar multiples and compositions of linear transformations to sums, scalar multiples and products of matrices.

    4.3.1 Sums

Definition 4.3.1. Let X be a set. Let f : X → F^m and g : X → F^m be functions. The sum of f and g is the function f + g : X → F^m defined by

    (f + g)(x) def= f(x) + g(x)

Theorem 4.3.2. If S : F^n → F^m and T : F^n → F^m are linear transformations, then S + T is a linear transformation.

Proof. Let x, y ∈ F^n. Let k be a scalar. Then

    (S + T)(x + y) = S(x + y) + T(x + y)
                   = S(x) + S(y) + T(x) + T(y)
                   = S(x) + T(x) + S(y) + T(y)
                   = (S + T)(x) + (S + T)(y)


    and

    (S + T)(kx) = S(kx) + T(kx)
                = kS(x) + kT(x)
                = k[S(x) + T(x)]
                = k(S + T)(x)

    Therefore, S + T is a linear transformation.

Definition 4.3.3. The sum of m × n matrices A = [ a1 ··· an ] and B = [ b1 ··· bn ] is the m × n matrix

    A + B def= [ a1 + b1 ··· an + bn ]

    The operation of forming the sum of two matrices is called matrix addition.

Example 4.3.4.

    [ 1 3 5 ]   [ 7  9 11 ]   [  8 12 16 ]
    [ 2 4 6 ] + [ 8 10 12 ] = [ 10 14 18 ]
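Definition 4.3.3 adds matrices column by column; the short NumPy check below (not part of the text) confirms that this agrees with entrywise addition, using the matrices of Example 4.3.4.

```python
import numpy as np

# Column-by-column sum per Definition 4.3.3, compared with NumPy's
# entrywise matrix addition.
A = np.array([[1, 3, 5], [2, 4, 6]])
B = np.array([[7, 9, 11], [8, 10, 12]])
S = np.column_stack([A[:, j] + B[:, j] for j in range(A.shape[1])])
assert np.array_equal(S, A + B)
assert np.array_equal(S, np.array([[8, 12, 16], [10, 14, 18]]))
```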

The next theorem expresses the most important properties of the matrix sum.

Theorem 4.3.5.

(i) (Commutativity) For all A, B ∈ F^{m×n},

    A + B = B + A

(ii) (Associativity) For all A, B, C ∈ F^{m×n},

    A + (B + C) = (A + B) + C

(iii) (Existence of an Additive Identity) There is a unique matrix 0_{m×n} ∈ F^{m×n} so that

    A + 0_{m×n} = A

for all A ∈ F^{m×n}.

(iv) (Existence of Additive Inverses) For each A ∈ F^{m×n} there is a unique matrix −A ∈ F^{m×n} so that

    A + (−A) = 0_{m×n}

Proof.

(i) Let A, B ∈ F^{m×n}.

    A + B = [ a1 + b1 ··· an + bn ]
          = [ b1 + a1 ··· bn + an ]
          = B + A

    (ii) Exercise.


(iii) Define 0_{m×n} by

    0_{m×n} def= [ 0 ··· 0 ]

Let A ∈ F^{m×n} with columns a1, . . . , an.

    A + 0_{m×n} = [ a1 ··· an ] + [ 0 ··· 0 ]
                = [ a1 + 0 ··· an + 0 ]
                = [ a1 ··· an ]
                = A

To prove that 0_{m×n} is unique, assume that Z ∈ F^{m×n} and A + Z = A for all A ∈ F^{m×n}. In particular,

    0_{m×n} + Z = 0_{m×n}

By part (i),

    0_{m×n} + Z = Z + 0_{m×n} = Z

Thus, Z = 0_{m×n}. Therefore, 0_{m×n} is the unique element of F^{m×n} so that A + 0_{m×n} = A for all A ∈ F^{m×n}.

    (iv) Exercise.

Definition 4.3.6. The m × n zero matrix is

    0_{m×n} def= [ 0 ··· 0 ]

We denote the m × n zero matrix by 0 when its size is clear from context.

Definition 4.3.7. The difference of matrices A, B ∈ F^{m×n} is the m × n matrix

    A − B def= A + (−B)

Theorem 4.3.8. If S : F^n → F^m and T : F^n → F^m are linear transformations with standard matrices A and B, respectively, then the standard matrix of S + T is A + B.

Proof. Let A def= [ a1 ··· an ] and B def= [ b1 ··· bn ]. Since A and B are the standard matrices for S and T, respectively, we obtain

    (S + T)(ei) = S(ei) + T(ei) = ai + bi

for each i = 1, . . . , n. Therefore, the standard matrix for S + T is

    [ a1 + b1 ··· an + bn ] = A + B
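Theorem 4.3.8 can be illustrated numerically. In the sketch below, A and B are arbitrary sample matrices: building the standard matrix of S + T column by column from (S + T)(e_i) yields exactly A + B.

```python
import numpy as np

# Sample matrix transformations S(x) = Ax and T(x) = Bx.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[3.0, -1.0], [4.0, 0.0]])
S = lambda x: A @ x
T = lambda x: B @ x

n = 2
# Columns of the standard matrix of S + T are (S + T)(e_i).
C = np.column_stack([S(np.eye(n)[:, i]) + T(np.eye(n)[:, i]) for i in range(n)])
assert np.allclose(C, A + B)
```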


    4.3.2 Scalar Multiples

Definition 4.3.9. Let X be a set. The product of a scalar k and a function f : X → F^m is the function kf : X → F^m defined by

    (kf)(x) def= k f(x)

Theorem 4.3.10. If T : F^n → F^m is a linear transformation and k is a scalar, then kT is a linear transformation.

Proof. Let x, y ∈ F^n. Let c be a scalar. Then

    (kT)(x + y) = kT(x + y)
                = k[T(x) + T(y)]
                = kT(x) + kT(y)
                = (kT)(x) + (kT)(y)

and

    (kT)(cx) = kT(cx)
             = kcT(x)
             = c[kT(x)]
             = c(kT)(x)

    Therefore, kT is a linear transformation.

Definition 4.3.11. The scalar-matrix product of a scalar k and an m × n matrix A def= [ a1 ··· an ] is the m × n matrix

    kA def= [ ka1 ··· kan ]

The operation of forming the product of a scalar with a matrix is scalar multiplication.

Example 4.3.12.

    (−7) [ 1 3 5 ]   [  −7 −21 −35 ]
         [ 2 4 6 ] = [ −14 −28 −42 ]
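Definition 4.3.11 scales each column by k; the short check below (not part of the text) confirms that this agrees with the entrywise scalar product, using the data of Example 4.3.12.

```python
import numpy as np

# Column-by-column scalar product per Definition 4.3.11, compared with
# NumPy's entrywise scalar multiplication.
k = -7
A = np.array([[1, 3, 5], [2, 4, 6]])
kA = np.column_stack([k * A[:, j] for j in range(A.shape[1])])
assert np.array_equal(kA, k * A)
assert np.array_equal(kA, np.array([[-7, -21, -35], [-14, -28, -42]]))
```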

    Theorem 4.3.13.

(i) (Associativity) For all A ∈ F^{m×n} and all scalars k and l,

    k(lA) = (kl)A = l(kA)

(ii) (Existence of a Multiplicative Identity) For all A ∈ F^{m×n},

    1A = A

(iii) (Distributivity of the Scalar-Matrix Product over the Scalar Sum) For all A ∈ F^{m×n} and all scalars k and l,

    (k + l)A = kA + lA


(iv) (Distributivity of the Scalar-Matrix Product over the Matrix Sum) For all A, B ∈ F^{m×n} and all scalars k,

    k(A + B) = kA + kB

    Proof.

    (i) Exercise.

    (ii) Exercise.

    (iii) Exercise.

(iv) Let A def= [ a1 ··· an ] and B def= [ b1 ··· bn ] be elements of F^{m×n}. Let k be a scalar.

    k(A + B) = k [ a1 + b1 ··· an + bn ]
             = [ k(a1 + b1) ··· k(an + bn) ]
             = [ ka1 + kb1 ··· kan + kbn ]
             = [ ka1 ··· kan ] + [ kb1 ··· kbn ]
             = k [ a1 ··· an ] + k [ b1 ··· bn ]
             = kA + kB

    Theorem 4.3.14.

(i) 0A = 0_{m×n}

(ii) (−1)A = −A

for all A ∈ F^{m×n}.

Proof. Exercise.
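Both parts of Theorem 4.3.14 are easy to check on a sample matrix (the matrix below is an arbitrary choice, not from the text):

```python
import numpy as np

# Theorem 4.3.14 on a sample matrix: 0*A is the zero matrix, and
# (-1)*A is the additive inverse of A.
A = np.array([[1.0, -2.0], [3.0, 4.0]])
assert np.array_equal(0 * A, np.zeros((2, 2)))
assert np.allclose(A + (-1) * A, np.zeros((2, 2)))
```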

Theorem 4.3.15. If T : F^n → F^m is a linear transformation with standard matrix A and k is a scalar, then the standard matrix of kT is kA.

Proof. Let A def= [ a1 ··· an ]. Since A is the standard matrix for T we obtain

    (kT)(ei) = kT(ei) = kai

for each i = 1, . . . , n. Therefore, the standard matrix for kT is

    [ ka1 ··· kan ] = kA
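As with Theorem 4.3.8, this can be illustrated numerically (A and k below are arbitrary sample choices): assembling the standard matrix of kT column by column from (kT)(e_i) = k T(e_i) yields kA.

```python
import numpy as np

# Sample matrix transformation T(x) = Ax and scalar k.
A = np.array([[2.0, -1.0], [0.0, 3.0]])
k = 4.0
T = lambda x: A @ x

n = 2
# Columns of the standard matrix of kT are (kT)(e_i) = k * T(e_i).
kA_built = np.column_stack([k * T(np.eye(n)[:, i]) for i in range(n)])
assert np.allclose(kA_built, k * A)
```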