W214 Linear Algebra

Stellenbosch University · 2019. 5. 31.

Bruce Bartlett

0.1 Note for the student

You are about to meet linear algebra for the second time. In the first year, we focused on systems of linear equations, matrices, and their determinants. That was all well and good, but the time has come for you to return to these topics from a more abstract, mathematical viewpoint.

You should not be scared by abstraction. It simply means getting rid of extraneous detail and limiting oneself exclusively to the most important features of a problem. This allows you to understand the problem better. There are fewer things to worry about! Moreover, if you encounter another problem which is superficially different but shares the same important features as the original problem, then you can understand it in the same way. This is the power of abstraction.

When we study abstract mathematics, we use the language of definitions, theorems and proofs. Learning to think along these lines (developing abstract mathematical thinking) can be a daunting task at first. But keep trying. One day it will ‘click’ into place and you will realize it is all much simpler than you had first imagined.

You cannot read mathematics the way you read a novel. You need to have a pencil and notepad with you, and you need to actively engage with the material. For instance, if you encounter a definition, start by writing down the definition on your notepad. Just the act of writing it out can be therapeutic!

If you encounter a worked example, try to write out the example yourself. Perhaps the example is trying to show that A equals B. Start by asking yourself: Do I understand what ‘A’ actually means? Do I understand what ‘B’ actually means? Only then are you ready to consider the question of whether A is equal to B!

Good luck in this new phase of your mathematical training. Enjoy the ride!


Contents

0.1 Note for the student

1 Abstract vector spaces
   1.1 Introduction
   1.2 Definition of an abstract vector space
   1.3 First example of a vector space
   1.4 More examples and non-examples
   1.5 Some results about abstract vector spaces
   1.6 Subspaces

2 Finite-dimensional vector spaces
   2.1 Linear combinations and span
   2.2 Linear independence
   2.3 Basis and dimension
   2.4 Coordinate vectors
   2.5 Change of basis

3 Linear maps
   3.1 Definitions and Examples
   3.2 Composition of linear maps
   3.3 Isomorphisms of vector spaces
   3.4 Linear maps and matrices
   3.5 Kernel and Image of a Linear Map
   3.6 Injective and surjective linear maps

4 Tutorials
   4.1 Tutorial 1
   4.2 Tutorial 2
   4.3 Tutorial 3
   4.4 Tutorial 4
   4.5 Tutorial 5
   4.6 Tutorial 6
   4.7 Tutorial 7

A Videos of Lectures

B Reminder about matrices

Bibliography


Chapter 1

Abstract vector spaces

1.1 Introduction

1.1.1 Three different sets

We start by playing a game. Recall that in mathematics, a set X is just a collection of distinct objects. We call these objects the elements of X.

I am going to show you three different sets, and you need to tell me the properties that they all have in common.

The first set, A, is defined to be the set of all ordered pairs (x, y) where x and y are real numbers.

Let us pause for a second and translate this definition from English into mathematical symbols. The translation is:

A := {(a1, a2) : a1, a2 ∈ R}. (1.1.1)

The := stands for ‘is defined to be’. The { and } symbols stand for ‘the set of all’. The lone colon : stands for ‘where’ or ‘such that’. The comma in between a1 and a2 stands for ‘and’. The ∈ stands for ‘an element of’. And R stands for the set of all real numbers.

Well done — you are learning the language of mathematics!

An element of A is an arbitrary pair of real numbers a = (a1, a2). For instance, (1, 2) and (3.891, eπ) are elements of A. Note also that I am using a boldface a to refer to an element of A. This is so that we can distinguish a from its components a1 and a2, which are just ordinary numbers (not elements of A).

We can visualize an element a of A as a point in the Cartesian plane whose x-coordinate is a1 and whose y-coordinate is a2:

[Figure: a visualized as the point in the Cartesian plane with x-coordinate a1 and y-coordinate a2.]

The second set, B, is defined to be the set of all ordered real triples (b1, b2, b3) satisfying b1 − b2 + b3 = 0. Translated into mathematical symbols,

B := {(b1, b2, b3) : b1, b2, b3 ∈ R and b1 − b2 + b3 = 0}. (1.1.2)

For instance, (2, 3, 1) ∈ B but (1, 1, 1) ∉ B. We can visualize an element b of B as a point in the plane in 3-dimensional space carved out by the equation x − y + z = 0:

[Figure: b visualized as a point on the plane x − y + z = 0 in 3-dimensional space.]
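The defining equation of B is easy to test mechanically. As a minimal sketch (not from the text, and with a hypothetical helper name), representing triples as Python tuples:

```python
# Hypothetical helper: test membership in B = {(b1, b2, b3) : b1 - b2 + b3 = 0}.
def in_B(b):
    b1, b2, b3 = b
    return b1 - b2 + b3 == 0

print(in_B((2, 3, 1)))  # True, matching the text: (2, 3, 1) is in B
print(in_B((1, 1, 1)))  # False: 1 - 1 + 1 = 1, not 0
```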

The third set, C, is the set of all polynomials of degree at most 4. Translated into mathematical symbols,

C := {polynomials of degree ≤ 4}. (1.1.3)

Recall that the degree of a polynomial is the highest power of x which occurs. For instance, c = x^4 − 3x^3 + 2x^2 is a polynomial of degree 4, and p = 2x^3 + πx is a polynomial of degree 3, so c and p are both elements of C. But r = 8x^5 − 7 and s = sin(x) are not elements of C. We can visualize an element c ∈ C (i.e. a polynomial of degree at most 4) via its graph. For instance, the polynomial c = x^4 − 3x^3 + 2x^2 ∈ C can be visualized as:

[Figure: graph of the polynomial c = x^4 − 3x^3 + 2x^2.]
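Membership in C can likewise be decided by computing degrees. A sketch, assuming (purely for illustration) that a polynomial is stored as a coefficient list [c0, c1, c2, ...] with the constant term first:

```python
import math

def degree(coeffs):
    # Highest power with a nonzero coefficient; we return 0 for the
    # zero polynomial just to keep this sketch total.
    for power in range(len(coeffs) - 1, -1, -1):
        if coeffs[power] != 0:
            return power
    return 0

c = [0, 0, 2, -3, 1]     # x^4 - 3x^3 + 2x^2: degree 4, so c is in C
p = [0, math.pi, 0, 2]   # 2x^3 + pi*x: degree 3, so p is in C
r = [-7, 0, 0, 0, 0, 8]  # 8x^5 - 7: degree 5, so r is NOT in C
print(degree(c), degree(p), degree(r))  # 4 3 5
```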

There you have it. I have defined three different sets: A, B and C, and I have explained how to visualize the elements of each of these sets. On the face of it, the sets are quite different. Elements of A are arbitrary points in R^2. Elements of B are points in R^3 satisfying a certain equation. Elements of C are polynomials.

What features do these sets have in common?

1.1.2 Features the sets have in common

I want to focus on two features that the sets A, B and C have in common.

1.1.2.1 Addition

Firstly, in each of these sets, there is a natural addition operation. We can add two elements of the set to get a third element.

In set A, we can add two elements a = (a1, a2) and a′ = (a′1, a′2) together by adding their components together, to form a new element a + a′ ∈ A:

(a1, a2) + (a′1, a′2) := (a1 + a′1, a2 + a′2) (1.1.4)

For instance, (1, 3) + (2, −1.6) = (3, 1.4). We can visualize this addition operation as follows:

[Figure: the points a = (a1, a2) and a′ = (a′1, a′2) in the plane add to give the point a + a′ = (a1 + a′1, a2 + a′2).]
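Componentwise addition in A is one line of code. A sketch (the helper name is hypothetical, not from the text):

```python
def add_A(a, ap):
    # (a1, a2) + (a'1, a'2) := (a1 + a'1, a2 + a'2), as in (1.1.4)
    return (a[0] + ap[0], a[1] + ap[1])

s = add_A((1, 3), (2, -1.6))   # the text's example: should give (3, 1.4)
print(s)
```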


We can do a similar thing in set B. Suppose we have two elements of B, b = (b1, b2, b3) and b′ = (b′1, b′2, b′3). Note that, since b ∈ B, its components satisfy b1 − b2 + b3 = 0. Similarly, the components of b′ satisfy b′1 − b′2 + b′3 = 0. We can add b and b′ together to get a new element b + b′ of B, by adding their components together as before:

(b1, b2, b3) + (b′1, b′2, b′3) := (b1 + b′1, b2 + b′2, b3 + b′3) (1.1.5)

We should be careful here. How do we know that the expression on the right hand side is really an element of B? We need to check that it satisfies the equation ‘the first component minus the second component plus the third component equals zero’. Let us check that formally:

(b + b′)1 − (b + b′)2 + (b + b′)3 = (b1 + b′1) − (b2 + b′2) + (b3 + b′3)
= (b1 − b2 + b3) + (b′1 − b′2 + b′3)
= 0 + 0
= 0.
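The same closure computation can be replayed numerically. A sketch with hypothetical helper names:

```python
def in_B(b):
    # the defining equation of B
    return b[0] - b[1] + b[2] == 0

def add_B(b, bp):
    # componentwise addition, as in (1.1.5)
    return (b[0] + bp[0], b[1] + bp[1], b[2] + bp[2])

b, bp = (2, 3, 1), (1, 2, 1)   # both satisfy b1 - b2 + b3 = 0
s = add_B(b, bp)
print(s, in_B(s))  # (3, 5, 2) True: the sum lands back inside B
```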

We can visualize this addition operation in B in the same way as we did for A.

There is also an addition operation in set C. We can add two polynomials together algebraically by adding their corresponding coefficients:

[c4x^4 + c3x^3 + c2x^2 + c1x + c0] + [d4x^4 + d3x^3 + d2x^2 + d1x + d0]
:= (c4 + d4)x^4 + (c3 + d3)x^3 + (c2 + d2)x^2 + (c1 + d1)x + (c0 + d0) (1.1.6)

For instance,

[2x^4 + x^2 − 3x + 2] + [2x^3 − 7x^2 + x] = 2x^4 + 2x^3 − 6x^2 − 2x + 2.
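The coefficient-wise rule (1.1.6) is easy to sketch with coefficient lists (constant term first; this representation and the helper name are chosen here for illustration):

```python
def add_poly(c, d):
    # (c + d)_i := c_i + d_i for each power i, as in (1.1.6)
    return [ci + di for ci, di in zip(c, d)]

c = [2, -3, 1, 0, 2]   # 2x^4 + x^2 - 3x + 2
d = [0, 1, -7, 2, 0]   # 2x^3 - 7x^2 + x
print(add_poly(c, d))  # [2, -2, -6, 2, 2], i.e. 2x^4 + 2x^3 - 6x^2 - 2x + 2
```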

There is another way to think about the addition of polynomials. Each polynomial c can be thought of as a function, in the sense that we can substitute an arbitrary value of x into the polynomial c, and it will output a number c(x). For instance, if c(x) = 3x^2 − 1, then c(2) = 11. If we think of polynomials as functions in this way, then the addition c + d of two polynomials can be thought of as the new function which, when you substitute some number x into it, outputs c(x) + d(x). Written mathematically,

(c + d)(x) := c(x) + d(x) (1.1.7)

Thinking in this way, we can visualize the graph of c + d as the graph of c added to the graph of d:

[Figure: the graph of c and the graph of d add pointwise to give the graph of c + d, whose value at each x is c(x) + d(x).]

1.1.2.2 Zero element

In all three sets A, B and C, there is a specific element (the zero element) 0 which, when you add it to another element, leaves that element unchanged.

In A, the zero element 0 is defined by

0 := (0, 0) ∈ A. (1.1.8)


When you add this point to another point (a1, a2) ∈ A, nothing happens!

(0, 0) + (a1, a2) = (a1, a2).

Do not confuse the zero element 0 ∈ A with the real number zero, 0 ∈ R. This is another reason why I am using boldface! (You should use underline to distinguish them.)

In B, the zero element 0 is the point (0, 0, 0) ∈ B. When you add this point to another point (u1, u2, u3) ∈ B, nothing happens!

(0, 0, 0) + (u1, u2, u3) = (u1, u2, u3).

In C, the zero element 0 is the zero polynomial. If we think algebraically, this is the polynomial all of whose coefficients are zero:

0 = 0x^4 + 0x^3 + 0x^2 + 0x + 0 (1.1.9)

If we think of the polynomial as a function, then the zero polynomial 0 is the function which returns zero for all values of x, that is, 0(x) = 0 for all x. Whichever way we think of it, when we add the zero polynomial to another polynomial, nothing happens!

[0x^4 + 0x^3 + 0x^2 + 0x + 0] + [c4x^4 + c3x^3 + c2x^2 + c1x + c0]
= [c4x^4 + c3x^3 + c2x^2 + c1x + c0]

1.1.2.3 Multiplication by scalars

The last feature all the sets A, B and C have in common is that in each set, you can multiply elements of the set by real numbers.

For instance, if a = (a1, a2) is an element of A, then we can multiply it by some arbitrary real number, say 9, to get a new element 9.a of A. We do this multiplication component-wise:

9.(a1, a2) := (9a1, 9a2). (1.1.10)

In general, if k ∈ R is an arbitrary real number, then we can multiply elements a ∈ A by k to get a new element k.a ∈ A by multiplying each component of a by k:

k.(a1, a2) := (ka1, ka2)

On the left hand side, k.(a1, a2) is the multiplication of a vector by a scalar; on the right hand side, ka1 and ka2 are ordinary products of two real numbers. Just be careful to distinguish scalar multiplication k.a (written with a .) from ordinary multiplication of real numbers ka1 (written with no symbol, just using juxtaposition). Later on, because we are lazy, we will stop writing the . explicitly, so you have been warned!

Visually, this multiplication operation scales a by a factor of k. That is why we call it scalar multiplication.

There is a similar scalar multiplication in B:

k.(u1, u2, u3) := (ku1, ku2, ku3) (1.1.11)
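As with addition, one can check numerically that scalar multiplication does not lead out of B. A sketch with hypothetical helper names:

```python
def in_B(b):
    # the defining equation of B
    return b[0] - b[1] + b[2] == 0

def smul_B(k, u):
    # k.(u1, u2, u3) := (ku1, ku2, ku3), as in (1.1.11)
    return (k * u[0], k * u[1], k * u[2])

u = (2, 3, 1)          # in B, since 2 - 3 + 1 = 0
w = smul_B(9, u)
print(w, in_B(w))      # (18, 27, 9) True
```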


There is also a scalar multiplication operation in C. We simply multiply each coefficient of a polynomial c ∈ C by k:

k.[c4x^4 + c3x^3 + c2x^2 + c1x + c0] = kc4x^4 + kc3x^3 + kc2x^2 + kc1x + kc0 (1.1.12)

If we think of the polynomial c as a function, then this corresponds to scaling the graph of the function vertically by a factor of k.

1.1.3 Features that the sets do not have

Let us mention a few features that the sets do not have, or at least do not have in common.

• Set A = R^2 has a multiplication operation, because we can think of R^2 as the complex plane C, and we know how to multiply complex numbers. There is no clear choice of a multiplication operation on B. The same goes for C: if you try to multiply two degree 4 polynomials in C together, you will get out a polynomial of degree 8, which does not live in C!

• There is a ‘take the derivative’ operation on C,

c ↦ dc/dx

which we will meet again later. Note that taking the derivative decreases the degree of a polynomial by 1, so the result remains in C, and so this is a well-defined map from C to C. There is no analogue of this operation in A and B.

Note that there is no integration map from C to C, because integrating a polynomial increases the degree by 1, so the result might be a polynomial of degree 5, which does not live in C!
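These degree observations can be spot-checked with coefficient lists (constant term first; the representation and helper names are hypothetical, chosen for illustration):

```python
def degree(coeffs):
    # highest power with a nonzero coefficient (0 for the zero polynomial)
    for p in range(len(coeffs) - 1, -1, -1):
        if coeffs[p] != 0:
            return p
    return 0

def mul_poly(c, d):
    # polynomial product: degrees add
    out = [0] * (len(c) + len(d) - 1)
    for i, ci in enumerate(c):
        for j, dj in enumerate(d):
            out[i + j] += ci * dj
    return out

def deriv(coeffs):
    # d/dx: the coefficient of x^(i-1) in the derivative is i * c_i
    return [i * coeffs[i] for i in range(1, len(coeffs))] or [0]

c = [1, 0, 0, 0, 1]            # x^4 + 1, an element of C
print(degree(mul_poly(c, c)))  # 8: the product (x^4 + 1)^2 leaves C
print(degree(deriv(c)))        # 3: the derivative 4x^3 stays in C
```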

1.1.4 Rules

We have found that each of our three sets A, B and C has an addition operation +, a zero vector 0, and a scalar multiplication operation ·. Do these operations satisfy any rules, common to all three sets?

For instance, we can think of the addition operation in A as a function which assigns to each pair of elements a and a′ in A a new element a + a′ in A. Does this operation satisfy any rules?

Let us see. Let a = (a1, a2) and a′ = (a′1, a′2) be elements of A. We can add them in two different orders,

a + a′ = (a1 + a′1, a2 + a′2)

and

a′ + a = (a′1 + a1, a′2 + a2).

Are these the same? In other words, does the rule

a + a′ = a′ + a (1.1.13)

hold? The answer is yes, but why? To check whether two elements of A are equal, we have to check whether each of their components is equal. The first component of a + a′ is a1 + a′1. The first component of a′ + a is a′1 + a1. Is a1 + a′1 = a′1 + a1? Yes, because these are just ordinary real numbers (not elements of A anymore), and we know that for ordinary real numbers, you can add them together in either order and get the same result. So the first component of a + a′ is equal to the first component of a′ + a. Similarly, we can check that the second component of a + a′ is equal to the second component of a′ + a. So all the components of a + a′ are equal to all the components of a′ + a. So, finally, we conclude that a + a′ = a′ + a.

Does this rule (1.1.13) also hold for the addition operations in B and C? Yes. For instance, let us check that it holds in C. Suppose that c and d are polynomials in C. Does the rule

c + d = d + c (1.1.14)

hold?

The left and right hand sides of (1.1.14) are elements of C. And elements of C are polynomials. To check if two polynomials are equal, we need to check if they are equal as functions; in other words, whether you get identical outputs from both functions no matter what input value of x you substitute in.

At an arbitrary input value x, the left hand side computes as (c + d)(x) = c(x) + d(x). On the other hand, the right hand side computes as (d + c)(x) = d(x) + c(x). Now, remember that c(x) and d(x) are just ordinary numbers (not polynomials). So c(x) + d(x) = d(x) + c(x), because this is true for ordinary numbers. So for each input value x, (c + d)(x) = (d + c)(x). Therefore the polynomials c + d and d + c are equal, because they output the same values for all numbers x.

There are other rules that also hold in all three sets. For instance, in all three sets, the rule

(x + y) + z = x + (y + z) (1.1.15)

holds for any three elements x, y and z. Can you find the other common rules?

1.2 Definition of an abstract vector space

Mathematics is about identifying patterns. We have found three different sets, A, B and C, which look very different on the surface but have much in common. In each set, there is an addition operation, a zero vector, and a scalar multiplication operation. Moreover, in each set, these operations satisfy the same rules. Let us now record this pattern by giving it a name and writing down the rules explicitly.

Definition 1.2.1 A vector space is a set V equipped with the following data:

D1. An addition operation. (That is, for every pair of elements u, v ∈ V , a new element u + v ∈ V is defined.)

D2. A zero vector. (That is, a special vector 0 ∈ V is marked out.)

D3. A scalar multiplication operation. (That is, for each real number k and each element v ∈ V , a new element k.v ∈ V is defined.)

This data should satisfy the following rules for all u, v, w in V and for all real numbers k and l:

R1. v + w = w + v

R2. (u + v) + w = u + (v + w)

R3a. 0 + v = v


R3b. v + 0 = v

R4. k.(v + w) = k.v + k.w

R5. (k + l).v = k.v + l.v

R6. k.(l.v) = (kl).v

R7. 1.v = v

R8. 0.v = 0

We will call the elements of a vector space vectors, and we will write them in boldface, e.g. v ∈ V . We do this to distinguish vectors from real numbers, which we will call scalars, and which are not written in boldface. It is difficult to use boldface in handwriting, so you should write them with an arrow on top, like so: ~v.

Also, in this chapter we will write scalar multiplication with a ·, for instance k.v, but in later chapters we will simply write it as kv for brevity, so be careful!

To prove that a certain set can be given the structure of a vector space, one therefore needs to do the following:

1. Define a set V .

2. Define the data of an addition operation (D1), a zero vector (D2), and a scalar multiplication operation (D3) on V .

3. Check that this data satisfies the rules (R1) - (R8).
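The three-step recipe suggests a mechanical sanity check. As a sketch (an entirely hypothetical helper, and only a numerical spot-check: random trials can refute the rules but never prove them), one can sample random vectors and test R1 to R8 up to floating-point tolerance:

```python
import random

def check_rules(add, zero, smul, sample, trials=100, tol=1e-9):
    """Spot-check rules R1-R8 on randomly sampled vectors."""
    eq = lambda x, y: all(abs(a - b) < tol for a, b in zip(x, y))
    for _ in range(trials):
        u, v, w = sample(), sample(), sample()
        k, l = random.uniform(-5, 5), random.uniform(-5, 5)
        assert eq(add(v, w), add(w, v))                             # R1
        assert eq(add(add(u, v), w), add(u, add(v, w)))             # R2
        assert eq(add(zero, v), v) and eq(add(v, zero), v)          # R3a, R3b
        assert eq(smul(k, add(v, w)), add(smul(k, v), smul(k, w)))  # R4
        assert eq(smul(k + l, v), add(smul(k, v), smul(l, v)))      # R5
        assert eq(smul(k, smul(l, v)), smul(k * l, v))              # R6
        assert eq(smul(1, v), v)                                    # R7
        assert eq(smul(0, v), zero)                                 # R8
    return True

# Applying the check to B: sample random points of the plane b1 - b2 + b3 = 0.
add = lambda u, v: tuple(x + y for x, y in zip(u, v))
smul = lambda k, u: tuple(k * x for x in u)

def sample_B():
    b1, b3 = random.uniform(-5, 5), random.uniform(-5, 5)
    return (b1, b1 + b3, b3)   # then b1 - (b1 + b3) + b3 = 0

print(check_rules(add, (0.0, 0.0, 0.0), smul, sample_B))  # True
```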

1.3 First example of a vector space

We were led to the definition (Definition 1.2.1) of an abstract vector space by considering the properties of sets A, B and C in Section 1.1. Let us check, for instance, that B indeed satisfies Definition 1.2.1. The others will be left as exercises.

Example 1.3.1 The set B is a vector space.

1. Define a set B. We define

B := {(u1, u2, u3) : u1, u2, u3 ∈ R and u1 − u2 + u3 = 0}. (1.3.1)

2. Define addition, the zero vector, and scalar multiplication

D1. Addition. We define addition as follows. Suppose u = (u1, u2, u3) and v = (v1, v2, v3) are elements of B. Note that in particular this means u1 − u2 + u3 = 0 and v1 − v2 + v3 = 0. We define u + v by:

u + v := (u1 + v1, u2 + v2, u3 + v3). (1.3.2)

We need to check that this makes sense. We are supposed to have that u + v is also an element of B. We can’t just write down any definition! To check if u + v is an element of B, we need to check if it satisfies equation (1.3.1). Let us check:

(u1 + v1) − (u2 + v2) + (u3 + v3)
= (u1 − u2 + u3) + (v1 − v2 + v3) (this algebra step is true for ordinary numbers)
= 0 + 0 (since u and v are in B)
= 0.

Therefore, u + v is indeed an element of B, so we have written down a well-defined addition operation on B, which takes two arbitrary elements of B and outputs another element of B.

D2. Zero vector. We define the zero vector 0 ∈ B as

0 := (0, 0, 0). (1.3.3)

We need to check that this makes sense. Does (0, 0, 0) really belong to B? In other words, does it satisfy equation (1.3.1)? Yes, since 0 − 0 + 0 = 0. So we have a well-defined zero vector.

D3. Scalar multiplication. We define scalar multiplication on B as follows. Let k be a real number and u = (u1, u2, u3) be an element of B. We define

k.u := (ku1, ku2, ku3). (1.3.4)

We need to check that this makes sense. When I multiply a vector u in B by a scalar k, the result k.u is supposed to be an element of B. Does (ku1, ku2, ku3) really belong to B? Let us check that it satisfies the defining equation (1.3.1):

ku1 − ku2 + ku3
= k(u1 − u2 + u3) (this algebra step is true for ordinary numbers)
= k · 0 (since u is in B)
= 0.

Therefore, k.u is indeed an element of B, so we have written down a well-defined scalar multiplication operation on B.

3. Check that the data satisfies the rules. We must check that our data D1, D2 and D3 satisfies the rules R1 to R8.

So, suppose u = (u1, u2, u3), v = (v1, v2, v3) and w = (w1, w2, w3) are in B, and suppose that k and l are real numbers.

R1. We check:

v + w = (v1 + w1, v2 + w2, v3 + w3) (by defn of addition in B)
= (w1 + v1, w2 + v2, w3 + v3) (because x + y = y + x is true for real numbers)
= w + v. (by defn of addition in B)

R2. We check:

(u + v) + w
= (u1 + v1, u2 + v2, u3 + v3) + w (by defn of addition in B)
= ((u1 + v1) + w1, (u2 + v2) + w2, (u3 + v3) + w3) (by defn of addition in B)
= (u1 + (v1 + w1), u2 + (v2 + w2), u3 + (v3 + w3)) (since (x + y) + z = x + (y + z) is true for real numbers)
= u + (v1 + w1, v2 + w2, v3 + w3) (by defn of addition in B)
= u + (v + w). (by defn of addition in B)

R3. We check:

0 + v
= (0, 0, 0) + (v1, v2, v3) (by defn of the zero vector in B)
= (0 + v1, 0 + v2, 0 + v3) (by defn of addition in B)
= (v1, v2, v3) (because 0 + x = x is true for real numbers)
= v.

By the same reasoning, we can check that v + 0 = v.

R4. We check:

k.(v + w)
= k.(v1 + w1, v2 + w2, v3 + w3) (by defn of addition in B)
= (k(v1 + w1), k(v2 + w2), k(v3 + w3)) (by defn of scalar multiplication in B)
= (kv1 + kw1, kv2 + kw2, kv3 + kw3) (since k(x + y) = kx + ky for real numbers x, y)
= (kv1, kv2, kv3) + (kw1, kw2, kw3) (by defn of addition in B)
= k.v + k.w (by defn of scalar multiplication in B)

R5. We check:

(k + l).v
= ((k + l)v1, (k + l)v2, (k + l)v3) (by defn of scalar multiplication in B)
= (kv1 + lv1, kv2 + lv2, kv3 + lv3) (since (k + l)x = kx + lx for real numbers)
= (kv1, kv2, kv3) + (lv1, lv2, lv3) (by defn of addition in B)
= k.v + l.v (by defn of scalar multiplication in B)

R6. We check:

k.(l.v)
= k.(lv1, lv2, lv3) (by defn of scalar multiplication in B)
= (k(lv1), k(lv2), k(lv3)) (by defn of scalar multiplication in B)
= ((kl)v1, (kl)v2, (kl)v3) (since k(lx) = (kl)x for real numbers)
= (kl).v (by defn of scalar multiplication in B).

R7. We check:

1.v = (1v1, 1v2, 1v3) (by defn of scalar multiplication in B)
= (v1, v2, v3) (since 1x = x for real numbers x)
= v.

R8. We check:

0.v = (0v1, 0v2, 0v3) (by defn of scalar multiplication in B)
= (0, 0, 0) (since 0x = 0 for real numbers)
= 0. (by defn of the zero vector in B)

Exercises

1. Prove that set A from Section 1.1 together with the addition operation (1.1.4), the zero vector (1.1.8) and the scalar multiplication operation (1.1.10) forms a vector space.

2. Prove that set C from Section 1.1 together with the addition operation (1.1.6), the zero vector (1.1.9) and the scalar multiplication operation (1.1.12) forms a vector space.

3. Define the set C′ consisting of all polynomials of degree exactly 4, as well as the zero polynomial. Show that if C′ is given the addition operation (1.1.6), the zero vector (1.1.9) and the scalar multiplication operation (1.1.12), then C′ does not form a vector space.
Hint. Give a counterexample!

4. Consider the set

X := {(a1, a2) ∈ R2 : a1 ≥ 0, a2 ≥ 0}

equipped with the same addition operation (1.1.4), zero vector (1.1.8) and scalar multiplication operation (1.1.10) as in A. Does X form a vector space? If not, why not?

Solutions

1.3.3. Solution. Consider the following two polynomials in C′:

p(x) = x^4 + x^3,

q(x) = −x^4.

Their sum is not in C′, since

p(x) + q(x) = (1 − 1)x^4 + x^3 = x^3,

which has degree 3. Hence C′ is not closed under addition and so cannot be a vector space, as the addition operation is not well-defined on C′.

1.3.4. Solution. X is not a vector space since the additive inverse of an element in X may fail to be in X. For example, consider (1, 0). The additive inverse of (1, 0) would have to be (−1, 0). However, (−1, 0) is certainly not in X. Hence X is not a vector space.
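The counterexample from Solution 1.3.3 can be replayed with coefficient lists (constant term first; the representation and helper name are hypothetical, chosen for illustration):

```python
def degree(coeffs):
    # highest power with a nonzero coefficient (0 for the zero polynomial)
    for power in range(len(coeffs) - 1, -1, -1):
        if coeffs[power] != 0:
            return power
    return 0

p = [0, 0, 0, 1, 1]    # x^4 + x^3: degree 4, so p is in C'
q = [0, 0, 0, 0, -1]   # -x^4: degree 4, so q is in C'
s = [pi_ + qi for pi_, qi in zip(p, q)]
print(s, degree(s))    # [0, 0, 0, 1, 0] 3: the sum has escaped C'
```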

1.4 More examples and non-examples

Example 1.4.1 A non-example. Define the set V by

V := {a, b} (1.4.1)

Define the addition operation by

a + a := a    a + b := a
b + a := b    b + b := c

For this to be a well-defined addition operation, we need to check that whenever you add two elements of V together, you get out a well-defined element of V . But b + b = c, so adding b ∈ V to itself outputs something, namely c, which is not an element of V . So V does not form a vector space since it does not even have a well-defined addition operation. □

Example 1.4.2 Another non-example. Define the set V by

V := {a,b}. (1.4.2)

Define the addition operation by

a + a := a    a + b := b
b + a := b    b + b := a

This is a well-defined addition operation, since whenever you add two elements of V together, you get out a well-defined element of V .

Define the zero vector by

0 := a. (1.4.3)

This is well-defined, since a is indeed an element of V . Define scalar multiplication by a real number k ∈ R by

k.a := a and k.b := b. (1.4.4)

This is a well-defined scalar multiplication, since it allows us to multiply any element v ∈ V by a scalar k and it outputs a well-defined element k.v ∈ V .

Checkpoint 1.4.3 Show that these operations satisfy rules R1, R2, R3, R4, R6 and R7, but not R5 and R8.
Solution. R1: We must check whether v + w = w + v for all v, w ∈ {a, b}. Clearly a + a = a + a, and likewise for b. And finally b = a + b = b + a.

R2: We must check whether (u + v) + w = u + (v + w) for all u, v, w ∈ {a, b}. This requires we check 8 equations in total. For brevity, we shall only present the solution for one of them; the rest are virtually identical. We check whether

(a + b) + b = a + (b + b).

To that end, consider:

LHS = (a + b) + b = b + b = a.

By a similar method:

RHS = a + (b + b) = a + a = a.

R3, R4, R6, and R7 all follow from routine checks.

We shall demonstrate why R5 is not satisfied. For that, we need to find a counterexample. Take k = l = 2 and v = b. Then

LHS = (2 + 2).b = 4.b = b,

whereas

RHS = 2.b + 2.b = b + b = a.

Since LHS ≠ RHS, R5 cannot be true. Similarly for R8: by (1.4.4) we have 0.b = b, but the zero vector is 0 = a, so 0.b ≠ 0.
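The finite addition table of Example 1.4.2 fits in a small dictionary, and the R5 counterexample can then be replayed mechanically (a sketch, not from the text):

```python
# Addition table from (1.4.2): a plays the role of the zero vector.
add = {('a', 'a'): 'a', ('a', 'b'): 'b',
       ('b', 'a'): 'b', ('b', 'b'): 'a'}

def smul(k, v):
    # k.a := a and k.b := b for every real number k, as in (1.4.4)
    return v

k = l = 2
v = 'b'
lhs = smul(k + l, v)                 # (2 + 2).b = 4.b = b
rhs = add[(smul(k, v), smul(l, v))]  # 2.b + 2.b = b + b = a
print(lhs, rhs, lhs == rhs)          # b a False: R5 fails
```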

Example 1.4.4 The zero vector space. Define the set Z by

Z := {z}. (1.4.5)

Note that it contains just a single element, z. Define the addition operation as

z + z := z (1.4.6)

Define the zero element as

0 := z. (1.4.7)

Finally define scalar multiplication by a scalar k ∈ R as:

k.z := z. (1.4.8)

Checkpoint 1.4.5 Show that this data satisfies the rules R1 to R8.

Example 1.4.6 Rn. Define the set Rn by

Rn := {(x1, x2, . . . , xn) : xi ∈ R for all i = 1 . . . n}. (1.4.9)

Define the addition operation as

(x1, x2, . . . , xn) + (y1, y2, . . . , yn) := (x1 + y1, x2 + y2, . . . , xn + yn). (1.4.10)

Define the zero element as

0 := (0, 0, . . . , 0). (1.4.11)

Define scalar multiplication by

k.(x1, x2, . . . , xn) := (kx1, kx2, . . . , kxn). (1.4.12)

Checkpoint 1.4.7 Show that this data satisfies the rules R1 to R8.

Example 1.4.8 R∞. Define the set R∞ by

R∞ := {(x1, x2, x3, . . .) : xi ∈ R for all i = 1, 2, 3, . . .} (1.4.13)

So an element x ∈ R∞ is an infinite sequence of real numbers. Define the addition operation componentwise:

(x1, x2, x3, . . .) + (y1, y2, y3, . . .) := (x1 + y1, x2 + y2, x3 + y3, . . .). (1.4.14)

Define the zero element as

0 := (0, 0, 0, . . .), (1.4.15)


the infinite sequence whose components are all zero. Finally, define scalar multiplication componentwise:

k.(x1, x2, x3, . . .) := (kx1, kx2, kx3, . . .) (1.4.16)

Thinking about infinity is an important part of mathematics. Have you watched the movie The Man Who Knew Infinity?

Checkpoint 1.4.9 Show that this data satisfies the rules R1 to R8.
Solution. We shall only check R4 below; the rest are similar.

R4: Let

v = (v1, v2, v3, . . .)
w = (w1, w2, w3, . . .)

We must check whether k.(v + w) = k.v + k.w.

LHS = k.(v + w)
    = k.[(v1, v2, v3, . . .) + (w1, w2, w3, . . .)]
    = k.(v1 + w1, v2 + w2, v3 + w3, . . .)
    = (k(v1 + w1), k(v2 + w2), k(v3 + w3), . . .)
    = (kv1 + kw1, kv2 + kw2, kv3 + kw3, . . .)
    = (kv1, kv2, kv3, . . .) + (kw1, kw2, kw3, . . .)
    = k.(v1, v2, v3, . . .) + k.(w1, w2, w3, . . .)
    = k.v + k.w
    = RHS

Example 1.4.10 Functions on a set. Let X be any set. Define the set Fun(X) of real-valued functions on X by

Fun(X) := {f : X → R}. (1.4.17)

Note that the functions can be arbitrary; there is no requirement for them to be continuous or differentiable. Such a requirement would not make sense, since X could be an arbitrary set. For instance, X could be the set {a, b, c}; without any further information, it does not make sense to say that a function f : X → R is continuous.

Define the addition operation by

(f + g)(x) := f(x) + g(x), x ∈ X (1.4.18)

Make sure you understand what this formula is saying! We start with two functions f and g, and we are defining their sum f + g. This is supposed to be another function on X. To define a function on X, I am supposed to write down what value it assigns to each x ∈ X. And that is what the formula says: the value that the function f + g assigns to an element x ∈ X is defined to be the number f(x) plus the number g(x). Remember: f is a function, while f(x) is a number!

Define the zero vector, which we will call z in this example, to be the function which outputs the number 0 for every input value of x ∈ X:

z(x) := 0 for all x ∈ X. (1.4.19)


Define scalar multiplication by

(k.f)(x) := kf(x). (1.4.20)
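The pointwise definitions (1.4.18)–(1.4.20) can be mimicked directly in plain Python, where functions are ordinary values. This is only an illustrative sketch; f_add, f_scale and the sample functions are invented, not part of the text:

```python
def f_add(f, g):
    # (f + g)(x) := f(x) + g(x), as in (1.4.18)
    return lambda x: f(x) + g(x)

def f_scale(k, f):
    # (k.f)(x) := k f(x), as in (1.4.20)
    return lambda x: k * f(x)

z = lambda x: 0  # the zero function, as in (1.4.19)

# two sample functions on X = {'a', 'b', 'c'}
def f(x):
    return {'a': 2, 'b': -1, 'c': 0}[x]

def g(x):
    return {'a': 1, 'b': 1, 'c': 1}[x]

s = f_add(f, g)
print(s('a'), s('b'), s('c'))  # 3 0 1
```

Note that s = f + g is itself a function, while s('a') is a number: exactly the distinction the notation below turns on.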

Checkpoint 1.4.11 Notation quiz! Say whether the following combination of symbols represents a real number or a function.

1. f

2. f(x)

3. k.f

4. (k.f)(x)

Solution.

1. Function

2. Real Number

3. Function

4. Real Number

Checkpoint 1.4.12 Let X = {a, b, c}.

1. Write down three different functions f, g, h in Fun(X).

2. For each of the functions you wrote down in Item 1.4.12.1, calculate (i) f + g and (ii) 3.h.

Solution.

1.

f(a) = 4, f(b) = 0, f(c) = 2

g(a) = 1, g(b) = 1, g(c) = 1

h(a) = 0, h(b) = 3, h(c) = 0

2.

(f + g)(a) = 5, (f + g)(b) = 1, (f + g)(c) = 3

(3.h)(a) = 0, (3.h)(b) = 9, (3.h)(c) = 0

Checkpoint 1.4.13 Show that the data (1.4.18), (1.4.19), (1.4.20) satisfies the rules R1 to R8, so that Fun(X) is a vector space.

Example 1.4.14 Matrices. The set Matn,m of all n×m matrices is a vector space. See Appendix B for a reminder about matrices. □

Checkpoint 1.4.15 Show that when equipped with the addition operation, zero vector, and scalar multiplication operation as defined in Appendix B, the set Matn,m of all n×m matrices is a vector space.

Example 1.4.16 We will write Coln for the vector space Matn,1 of n-dimensional column vectors,

Coln = { (x1, x2, . . . , xn)^T : x1, . . . , xn ∈ R }

(a column vector, written here as a transposed row to save space). So, Coln 'is' just Rn, but we make explicit the fact that the components of the vectors are arranged in a column. □

Exercises

1. Define an addition operation on the set X := {0, a, b} by the following table:

   + | 0 a b
   --+------
   0 | 0 a b
   a | a 0 a
   b | b a 0

   This table works as follows. To calculate, for example, b + a, find the intersection of the row labelled by b with the column labelled by a. We see that b + a := a.

   Prove that this addition operation satisfies R1.

2. Prove that the addition operation from Exercise 1.4.1 does not satisfy R2.

3. Define a strange new addition operation + on R by

   x + y := x − y, x, y ∈ R.

   Does + satisfy R2? If it does, prove it. If it does not, give a counterexample.

4. Construct an operation ∗ on R satisfying R1 but not R2.
   Hint. Try adjusting the formula from Exercise 1.4.3.

5. Let R+ be the set of positive real numbers. Define an addition operation ⊕, a zero vector z and a scalar multiplication . on R+ by

   x ⊕ y := xy
   z := 1
   k.x := x^k

   where x, y ∈ R+, and k is a scalar (i.e. an arbitrary real number).

   (a) Check that these operations are well-defined. For instance, is x ⊕ y ∈ R+, as it should be?

   (b) Check that these operations satisfy R1 to R8.

   We conclude that R+, equipped with these operations, forms a vector space.
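Before proving the rules of Exercise 5, they can be spot-checked numerically. A plain-Python sketch (the rule numbering follows the text; math.isclose guards against floating-point error, and a numeric check is of course not a proof):

```python
import math

# R+ with 'addition' x (+) y = xy, 'zero' z = 1, and k.x = x**k
def oplus(x, y):
    return x * y

def smul(k, x):
    return x ** k

z = 1.0
x, y, k = 2.5, 0.7, 3.0

assert math.isclose(oplus(x, z), x)                 # R3: x (+) z = x
assert math.isclose(smul(k, oplus(x, y)),           # R4: k.(x (+) y)
                    oplus(smul(k, x), smul(k, y)))  #   = (k.x) (+) (k.y)
assert math.isclose(smul(1, x), x)                  # R7: 1.x = x
print("spot-checks passed")
```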

6. Consider the operation ⊕ on R2 defined by:

(a1, a2)⊕ (b1, b2) := (a1 + b2, a2 + b1).

(a) Does this operation satisfy R1 ?

(b) Does this operation satisfy R2 ?

Solutions

· Exercises

1.4.3. Solution. No, for example:

(1 + 2) + 3 = (1 − 2) − 3 = −4.

But

1 + (2 + 3) = 1 − (2 − 3) = 2.

1.4.4. Solution. Define x ∗ y = |x − y|. R1 is satisfied since x ∗ y = |x − y| = |y − x| = y ∗ x. However, R2 is not satisfied since (1 ∗ 2) ∗ 3 = ||1 − 2| − 3| = 2 but 1 ∗ (2 ∗ 3) = |1 − |2 − 3|| = 0.

1.4.5. Solution.
(a) Let x, y be two positive reals. Then

x ⊕ y := xy

is certainly also a positive real number. Similarly, for any k ∈ R and any x ∈ R+,

k.x := x^k

is also positive. To see this, for any fixed k, notice that the graph of the function f(x) = x^k restricted to x ≥ 0 lies above the x-axis.

(b)

1.4.6. Solution.
(a) No. Counterexample:

(1, 2) ⊕ (2, 1) = (1 + 1, 2 + 2) = (2, 4)

but

(2, 1) ⊕ (1, 2) = (2 + 2, 1 + 1) = (4, 2).

(b) No. Counterexample:

((1, 2) ⊕ (2, 1)) ⊕ (1, 3) = (2, 4) ⊕ (1, 3) = (5, 5)

but

(1, 2) ⊕ ((2, 1) ⊕ (1, 3)) = (1, 2) ⊕ (5, 2) = (3, 7).


1.5 Some results about abstract vector spaces

It is time to use the rules of a vector space to prove some general results. We are about to do our first formal proof in the course!

Our first lemma shows that the zero vector 0 is the unique vector in V which 'behaves like a zero vector'. More precisely:

Lemma 1.5.1 Suppose V is a vector space with zero vector 0. If 0′ is a vector in V satisfying

0′ + v = v for all v ∈ V (1.5.1)

then 0′ = 0.
Proof.

0 = 0′ + 0    (using (1.5.1) with v = 0)
  = 0′        (R3b)

Definition 1.5.2 If V is a vector space, we define the additive inverse of a vector v ∈ V as

−v := (−1).v

Lemma 1.5.3 If V is a vector space, then for all v ∈ V,

−v + v = 0 and v + (−v) = 0. (1.5.2)

Proof.

−v + v = (−1).v + v      (using defn of −v)
       = (−1).v + 1.v    (R7)
       = (−1 + 1).v      (R5)
       = 0.v
       = 0               (R8)

In addition,

v + (−v) = −v + v    (R1)
         = 0         (by previous proof)

Lemma 1.5.4 Suppose that two vectors w and v in a vector space satisfy w + v = 0. Then w = −v.
Proof.

w = w + 0               (R3b)
  = w + (v + (−v))      (by Lemma 1.5.3)
  = (w + v) + (−v)      (R2)
  = 0 + (−v)            (by assumption)
  = −v                  (R3a)


Let us prove two more lemmas, for practice.

Lemma 1.5.5 Let V be a vector space and k any scalar. Then

k.0 = 0

Proof.

k.0 = k.(0.0)       (R8 for v = 0)
    = ((k)(0)).0    (R6)
    = 0.0           ((k)(0) = 0 for any real number k)
    = 0             (R8 for v = 0)

Lemma 1.5.6 Suppose that v is a vector in a vector space V and that k is a scalar. Then

k.v = 0 ⇔ k = 0 or v = 0.

Proof. (Proof of ⇐). Suppose k = 0. Then k.v = 0.v = 0 by R8 of a vector space. On the other hand, suppose v = 0. Then k.v = k.0 = 0 by Exercise 1.5.2.

(Proof of ⇒). Suppose k.v = 0. There are two possibilities: either k = 0, or k ≠ 0. If k = 0, then we are done. If k ≠ 0, then 1/k exists and we can multiply both sides by it:

k.v = 0
∴ (1/k).(k.v) = (1/k).0    (multiplied both sides by 1/k)
∴ ((1/k)k).v = 0           (on the LHS, we used R6; on the RHS, we used Exercise 1.5.2)
∴ 1.v = 0                  (using (1/k)k = 1)
∴ v = 0                    (R7)

Hence in the case k ≠ 0 we must have v = 0, which is what we wanted to show. □

Example 1.5.7 Let us practice using the rules of a vector space to perform everyday calculations. For instance, suppose that we are trying to solve for the vector x appearing in the following equation:

v + 7.x = w (1.5.3)

We do this using the rules as follows:

v + 7.x = w
∴ −v + (v + 7.x) = −v + w         (added −v on the left of both sides)
∴ (−v + v) + 7.x = −v + w         (used R2 on LHS)
∴ 0 + 7.x = −v + w                (used Lemma 1.5.3 on LHS)
∴ 7.x = −v + w                    (used R3a on LHS)
∴ (1/7).(7.x) = (1/7).(−v + w)    (scalar multiplied both sides by 1/7)
∴ ((1/7)7).x = (1/7).(−v + w)     (used R6 on LHS)
∴ 1.x = (1/7).(−v + w)            (multiplied 1/7 with 7)
∴ x = (1/7).(−v + w)              (R7)

As the course goes on we will leave out all these steps. But it is important for you to be able to reproduce them all, if asked to do so! □
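The end result of Example 1.5.7, x = (1/7).(−v + w), can be sanity-checked numerically in R^3. A plain-Python sketch with invented sample vectors:

```python
import math

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def smul(k, u):
    return tuple(k * a for a in u)

v = (7, 0, -14)
w = (0, 7, 7)

# x = (1/7).(-v + w), as derived in Example 1.5.7
x = smul(1/7, add(smul(-1, v), w))
lhs = add(v, smul(7, x))  # should equal w, up to rounding

assert all(math.isclose(a, b, abs_tol=1e-12) for a, b in zip(lhs, w))
print(x)  # approximately (-1, 1, 3)
```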

Exercises

1. Prove that for all vectors v in a vector space, −(−v) = v.

2. Let V be a vector space with zero vector 0. Prove that for all scalars k, k.0 = 0.

3. Let V be a vector space. Suppose that a vector v ∈ V satisfies

   5.v = 2.v. (1.5.4)

   Prove that v = 0.

4. Suppose that two vectors x and w in a vector space satisfy 2x + 6w = 0. Solve for x, showing explicitly how you use the rules of a vector space, as in Example 1.5.7.

5. Suppose V is a vector space which is not the zero vector space. Show that V contains infinitely many elements.
   Hint 1. Since V is not the zero vector space, there must exist a vector v ∈ V such that v ≠ 0.
   Hint 2. Use the idea of the proof from Exercise 1.5.3.

True or False. For each of the following statements, write down whether the statement is true or false, and prove your assertion. (In other words, if you say that it is true, prove that it is true, and if you say that it is false, prove that it is false, by giving an explicit counterexample.)

6. If k.v = 0 in a vector space, then it necessarily follows that k = 0.

7. If k.v = 0 in a vector space, then it necessarily follows that v = 0.

8. The empty set can be equipped with data D1, D2, D3 satisfying the rules of a vector space.

9. Rule R3b of a vector space follows automatically from the other rules.

10. Rule R7 of a vector space follows automatically from the other rules.

Solutions

· Exercises

1.5.1. Solution. We apply the definition of −v twice:

−(−v) = (−1).(−v) = (−1).((−1).v).

Using R6 we get

(−1).((−1).v) = ((−1)(−1)).v = 1.v.

Finally, a single application of R7 allows us to conclude that

1.v = v


1.5.2. Solution. We apply R3b to k.0:

k.0 = k.(0 + 0).

By R4 we get

k.(0 + 0) = k.0 + k.0.

We now know

k.0 = k.0 + k.0.

Adding the inverse of k.0 to both sides we get

0 = k.0 + 0 = k.0.

And we are done.

1.5.3. Solution.

5.v = 2.v
⟹ 5.v + (−2).v = 2.v + (−2).v
⟹ (5 − 2).v = (2 − 2).v
⟹ 3.v = 0.v
⟹ (1/3).(3.v) = (1/3).(0.v)
⟹ ((1/3)3).v = ((1/3)0).v
⟹ 1.v = 0.v
⟹ v = 0

1.5.4. Solution.

2.x + 6.w = 0
⟹ (2.x + 6.w) + (−(6.w)) = 0 + (−(6.w))    (add −(6.w) to both sides)
⟹ 2.x + (6.w + (−(6.w))) = −(6.w)          (R2 on LHS, R3a on RHS)
⟹ 2.x + 0 = −(6.w)                         (Lemma 1.5.3)
⟹ 2.x = −(6.w)                             (R3b)
⟹ (1/2).(2.x) = (1/2).(−(6.w))
⟹ ((1/2)2).x = (1/2).((−1).(6.w))          (R6 on LHS, defn of inverse on RHS)
⟹ 1.x = (1/2).(((−1)(6)).w)                (R6 on RHS)
⟹ x = (1/2).((−6).w)                       (R7 on LHS)
⟹ x = ((1/2)(−6)).w                        (R6 on RHS)
⟹ x = (−3).w

True or False. 1.5.6. Solution. False. Take R2 as an example. If v = (0, 0) then 2.(0, 0) = (0, 0) but, of course, 2 ≠ 0.

1.5.8. Solution. False. In order for the empty set to be a vector space, it must have a zero vector. That is, we must be able to find some v in the empty set satisfying the axioms for the zero vector. However, since the empty set has no elements in it, by definition, we cannot ever hope to find such a v. Hence the empty set can never be a vector space.


1.5.9. Solution. True. Combining R1 and R3a gives R3b.

1.5.10. Solution. False. Let V be a non-zero vector space (such as R2). Redefine scalar multiplication as follows:

k.v := 0 for all scalars k and all vectors v.

Then V will satisfy all the rules of a vector space except R7. Thus it is not the case that R7 follows from the other rules.

1.6 Subspaces

In this section we will introduce the notion of a subspace of a vector space. This notion will allow us to quickly establish many more examples of vector spaces.

1.6.1 Definition of a subspace

Definition 1.6.1 A subset U ⊆ V of a vector space V is called a subspace of V if:

• For all u, u′ ∈ U, u + u′ ∈ U

• 0 ∈ U

• For all scalars k and all vectors u ∈ U, k.u ∈ U

Lemma 1.6.2 If U is a subspace of a vector space V, then U is also a vector space, when we equip it with the same addition operation, zero vector and scalar multiplication as in V.
Proof. Since U is a subspace, we know that it actually makes sense to "equip it with the same addition operation, zero vector and scalar multiplication as in V". (If U were not a subspace, then we might have for instance u, u′ ∈ U but u + u′ ∉ U, so the addition operation would not make sense.)

So we simply need to check the rules R1 to R8. Since these rules hold for all vectors u, v, w in V, they certainly hold for all vectors u, v, w in U. So R1 to R8 are satisfied. □

1.6.2 Examples of subspaces

Example 1.6.3 Line in R2. A line L through the origin in R2 is a subspace of R2:


Figure 1.6.4: A line through the origin in R2.

Indeed, recall that L can be specified by a homogeneous linear equation of the form:

L = {(x, y) ∈ R2 : ax + by = 0} (1.6.1)

for some constants a and b. So, if v = (x, y) and v′ = (x′, y′) lie on L, then their sum v + v′ = (x + x′, y + y′) also lies on L, because its components satisfy the defining equation (1.6.1):

a(x + x′) + b(y + y′) = (ax + by) + (ax′ + by′)
                      = 0 + 0    (since ax + by = 0 and ax′ + by′ = 0)
                      = 0.

This also makes sense geometrically: if you look at the picture, then you will see that adding two vectors v, v′ on L by the head-to-tail method will produce another vector on L. □

Checkpoint 1.6.5 Complete the proof that L is a subspace of R2 by checking that the zero vector is in L, and that multiplying a vector in L by a scalar outputs a vector still in L.

Example 1.6.6 Lines and planes in R3. Similarly, a line L or a plane P through the origin in R3 is a subspace of R3:


Figure 1.6.7: A line through the origin in R3.

Figure 1.6.8: A plane through the origin in R3.

Example 1.6.9 Zero vector space. If V is a vector space, then the set {0} ⊆ V containing just the zero vector 0 is a subspace of V. □

Checkpoint 1.6.10 Check this.

Example 1.6.11 Non-example: Line not through origin. Be careful though: not every line L ⊂ R2 is a subspace of R2. If L does not go through the origin, then 0 ∉ L, so L is not a subspace.

Another reason that L is not a subspace is that it is not closed under addition: when we add two nonzero vectors v and v′ on L, we end up with a vector v + v′ which does not lie on L:

Figure 1.6.12: A line which does not pass through the origin is not closed under addition.


Example 1.6.13 Hyperplanes orthogonal to a fixed vector. This example generalizes Example 1.6.6 to higher dimensions. Let v ∈ Rn be a fixed nonzero vector. The hyperplane orthogonal to v is the set W of all vectors orthogonal to v, that is,

W := {w ∈ Rn : v · w = 0}. (1.6.2)

You will prove in Checkpoint 1.6.14 that W is a subspace of Rn.

For example, consider the vector v = (1, 2, 3) ∈ R3. Then the hyperplane orthogonal to v is

W = {w ∈ R3 : v · w = 0}. (1.6.3)

If we write w = (w1, w2, w3) then v · w = 0 translates into the equation

w1 + 2w2 + 3w3 = 0. (1.6.4)

So, W can be regarded as the set of vectors in R3 whose components satisfy (1.6.4). □
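Closure of W under addition and scalar multiplication can be seen numerically for the example v = (1, 2, 3). A small plain-Python check (the sample vectors w1, w2 are invented; each satisfies (1.6.4)):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = (1, 2, 3)
w1 = (1, 1, -1)   # 1 + 2(1) + 3(-1) = 0, so w1 is in W
w2 = (3, 0, -1)   # 3 + 2(0) + 3(-1) = 0, so w2 is in W
assert dot(v, w1) == 0 and dot(v, w2) == 0

s = tuple(a + b for a, b in zip(w1, w2))  # w1 + w2 = (4, 1, -2)
print(dot(v, s))  # 0: the sum is still in W
```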

Checkpoint 1.6.14 Let v ∈ Rn be a fixed nonzero vector. Show that

W := {w ∈ Rn : v · w = 0}

is a subspace of Rn.
Solution.

Closed under addition. Suppose w, w′ ∈ W. That is, v · w = 0 and v · w′ = 0. We must show that w + w′ ∈ W. That is, we must show that v · (w + w′) = 0. Indeed,

v · (w + w′) = v · w + v · w′
             = 0 + 0
             = 0.

Contains the zero vector. Since v · 0 = 0, we conclude that 0 ∈ W.

Closed under scalar multiplication. Suppose w ∈ W and k is a scalar. That is, v · w = 0. We must show that k.w ∈ W. That is, we must show that v · (k.w) = 0. Indeed,

v · (k.w) = k.(v · w)
          = (k)(0)
          = 0.

Example 1.6.15 Continuous functions as a subspace. The set

Cont(I) := {f : I → R, f continuous}

of all continuous functions on an interval I is a subspace of the set Fun(I) of all functions on I. Let us check that it satisfies the definition. You know from earlier courses that:

• If f and g are continuous functions on I, then f + g is also a continuous function.

• The zero function 0, defined by 0(x) = 0 for all x ∈ I, is a continuous function.

• If f is a continuous function, and k is a scalar, then k.f is also continuous.

Hence, by Lemma 1.6.2, Cont(I) is a vector space in its own right. □

Example 1.6.16 Differentiable functions as a subspace. Similarly, the set

Diff(I) := {f : I → R, f differentiable}

of differentiable functions on an open interval I is a subspace of Fun(I). □

Checkpoint 1.6.17 Check this. Also, is Diff(I) a subspace of Cont(I)?

Example 1.6.18 Vector spaces of polynomials. A polynomial is a function p : R → R of the form

p(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0 (1.6.5)

for some fixed real coefficients a_0, . . . , a_n. Two polynomials p and q are equal if they are equal as functions, that is, if p(x) = q(x) for all x ∈ R. The degree of a polynomial is the highest power of x which occurs in its formula.

For example, 2x^3 − x + 7 is a polynomial of degree 3, while x^5 − 2 is a polynomial of degree 5. We write the set of all polynomials as Poly and the set of all polynomials having degree less than or equal to n as Polyn. □
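One concrete way to compute in Poly_n is to store a polynomial as its list of coefficients [a0, a1, ..., an]; addition and scalar multiplication then act coefficientwise. A plain-Python sketch (this representation is our choice for illustration, not the text's):

```python
def poly_add(p, q):
    # pad to a common length, then add coefficientwise
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(k, p):
    return [k * a for a in p]

p = [7, -1, 0, 2]  # 2x^3 - x + 7, a polynomial of degree 3
q = [1, 0, 1]      # x^2 + 1

print(poly_add(p, q))    # [8, -1, 1, 2], i.e. 2x^3 + x^2 - x + 8
print(poly_scale(3, q))  # [3, 0, 3]
```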

Checkpoint 1.6.19 Check that Poly and Polyn are subspaces of Cont(R).

Example 1.6.20 Polynomials in many variables. A monomial in two variables x and y is an expression of the form x^m y^n for some nonnegative integers m, n. The degree of the monomial is m + n. For example, x^3 y^2 has degree 5, and y^7 has degree 7.

A polynomial in two variables x and y is a linear combination of monomials. The degree of the polynomial is the highest degree of the monomials occurring in the linear combination. For instance,

p = 5x^3 y^2 − 3x y^7 (1.6.6)

is a polynomial in x and y of degree 8.

We write Poly[x, y] for the set of all polynomials in two variables x and y, and Polyn[x, y] for the set of all polynomials in x and y with degree less than or equal to n.

We can regard a polynomial p in two variables as a function

p : R2 → R, (x, y) ↦ p(x, y).

In this way, we can regard Polyn[x, y] as a subset of the vector space Fun(R2) of all real-valued functions on R2 (see Example 1.4.10 to remind yourself of the vector space of real-valued functions on a set X). Indeed, Polyn[x, y] is a subspace of Fun(R2), and hence it is a vector space.

Two polynomials p and q in variables x and y are defined to be equal if and only if all their corresponding coefficients are equal. This is equivalent to the statement that p(x, y) = q(x, y) for all (x, y) ∈ R2.

In the same way, we can talk about 3-variable polynomials, and so on, e.g.

r = 5x^3 y^2 z + 3xy − 4x z^3 ∈ Poly6[x, y, z].

Example 1.6.21 Polynomial vector fields. Recall that a vector field on R2 is just a vector whose components are functions of x and y. For example,

V = (x^2 y, x cos(y)).

We write Vectn(R2) for the set of all vector fields

V = (P(x, y), Q(x, y))

on R2 whose component functions P and Q are polynomials in x, y of degree less than or equal to n. For example,

V = (xy, x^2 y^3 − x) ∈ Vect5(R2).

We define addition and scalar multiplication for vector fields just as for ordinary vectors. That is, if V = (V1, V2) and W = (W1, W2) are vector fields on R2, then we define

V + W := (V1 + W1, V2 + W2).

Similarly, if k ∈ R, we define

kV := (kV1, kV2).

The zero vector field is defined as the vector field whose components are the constant zero function:

Z = (0, 0).

In this way, one can check that Vectn(R2) satisfies the rules of a vector space. □
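The componentwise operations on vector fields can be sketched in plain Python by representing a vector field as a pair of two-argument functions (the helper names and sample fields are our own, for illustration):

```python
def vf_add(V, W):
    # (V + W) := (V1 + W1, V2 + W2), componentwise
    (P1, Q1), (P2, Q2) = V, W
    return (lambda x, y: P1(x, y) + P2(x, y),
            lambda x, y: Q1(x, y) + Q2(x, y))

def vf_scale(k, V):
    P, Q = V
    return (lambda x, y: k * P(x, y),
            lambda x, y: k * Q(x, y))

V = (lambda x, y: x * y, lambda x, y: x**2 * y**3 - x)  # in Vect5(R^2)
W = (lambda x, y: x, lambda x, y: y)                    # in Vect1(R^2)

S = vf_add(V, W)
print(S[0](2, 3), S[1](2, 3))  # 8 109
```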

Example 1.6.22 Trigonometric polynomials. A trigonometric polynomial is a function T : R → R of the form

T(x) = a_0 + Σ_{k=1}^{n} a_k cos(kx) + Σ_{k=1}^{n} b_k sin(kx). (1.6.7)

The degree of a trigonometric polynomial is the highest multiple of x which occurs inside one of the sines or cosines in its formula. For instance,

3 − cos(x) + 6 sin(3x)

is a trigonometric polynomial of degree 3. We write the set of all trigonometric polynomials as Trig and the set of all trigonometric polynomials having degree less than or equal to n as Trign. □

Checkpoint 1.6.23 Show that Trig and Trign are subspaces of Cont(R).

Checkpoint 1.6.24 Consider the function f(x) = sin^3(x). Show that f ∈ Trig3 by writing it in the form (1.6.7). Hint: use the identities

sin(A) sin(B) = (1/2)(cos(A − B) − cos(A + B))
sin(A) cos(B) = (1/2)(sin(A − B) + sin(A + B))
cos(A) cos(B) = (1/2)(cos(A − B) + cos(A + B))

which follow easily from the addition formulae

sin(A ± B) = sin A cos B ± cos A sin B
cos(A ± B) = cos A cos B ∓ sin A sin B.
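Before using the product-to-sum identities above, it is reassuring to spot-check them numerically. A plain-Python check at arbitrary sample angles (a numeric check, not a proof):

```python
import math

A, B = 0.7, 1.9  # arbitrary sample angles

lhs = math.sin(A) * math.sin(B)
rhs = 0.5 * (math.cos(A - B) - math.cos(A + B))
assert math.isclose(lhs, rhs)

lhs = math.sin(A) * math.cos(B)
rhs = 0.5 * (math.sin(A - B) + math.sin(A + B))
assert math.isclose(lhs, rhs)

lhs = math.cos(A) * math.cos(B)
rhs = 0.5 * (math.cos(A - B) + math.cos(A + B))
assert math.isclose(lhs, rhs)

print("all three identities check out numerically")
```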

1.6.3 Solutions to homogeneous linear differential equations

A homogeneous nth order linear ordinary differential equation on an interval I is a differential equation of the form

a_n(x) y^(n)(x) + a_{n−1}(x) y^(n−1)(x) + · · · + a_1(x) y′(x) + a_0(x) y(x) = 0, x ∈ I (1.6.8)

where y^(k) means the kth derivative of y. A solution to the differential equation is just some function y(x) defined on the interval I which satisfies (1.6.8).

Example 1.6.25 An example of a 2nd order homogeneous linear differential equation. For instance,

x^2 y′′ − 3x y′ + 5y = 0, x ∈ (0, ∞) (1.6.9)

is a homogeneous 2nd order linear differential equation on the interval (0, ∞), and

y1(x) = x^2 sin(log x) (1.6.10)

is a solution to (1.6.9). Similarly,

y2(x) = x^2 cos(log x) (1.6.11)

is also a solution to (1.6.9).

We can use SageMath to check that these are indeed solutions to (1.6.9). Click the Evaluate (Sage) button; it should output 'True', indicating that y1 is indeed a solution to the differential equation.

def solves_de(y):
    return bool(x^2*diff(y,x,2) - 3*x*diff(y,x) + 5*y == 0)

y1 = x^2*sin(log(x))
solves_de(y1)

Edit the code above to check whether y2 is a solution of the differential equation (1.6.9).

We can also plot the graphs of y1 and y2. Again, click on Evaluate (Sage).

y1 = x^2*sin(log(x))
y2 = x^2*cos(log(x))

plot([y1, y2], (x, 0, 1), legend_label=['y1', 'y2'])

Page 34: W214 Linear Algebra - Stellenbosch University · 2019. 5. 31. · v 0.1Noteforthestudent Youareabouttomeetlinearalgebraforthesecondtime. Inthefirstyear,we focusedonsystemsoflinearequations,matrices,andtheirdeterminants

CHAPTER 1. ABSTRACT VECTOR SPACES 28

Play with the code above, and plot some different functions.

Checkpoint 1.6.26 Check by hand that (1.6.10) and (1.6.11) are indeed solutions of the differential equation (1.6.9).
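Independently of SageMath, (1.6.10) can also be spot-checked numerically with finite differences in plain Python. This is an approximate check, assuming the step h = 1e-4 is small enough; it is no substitute for the by-hand verification:

```python
import math

def y1(x):
    return x**2 * math.sin(math.log(x))  # the candidate solution (1.6.10)

def residual(x, h=1e-4):
    # central-difference approximations of y' and y''
    yp = (y1(x + h) - y1(x - h)) / (2 * h)
    ypp = (y1(x + h) - 2 * y1(x) + y1(x - h)) / h**2
    # plug into x^2 y'' - 3x y' + 5y; should be close to 0
    return x**2 * ypp - 3 * x * yp + 5 * y1(x)

print(residual(2.0))  # close to 0
assert abs(residual(2.0)) < 1e-4
```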

Suppose we are given an nth order homogeneous linear differential equation of the form (1.6.8) on some interval I ⊆ R. Write V for the set of all solutions to the differential equation. That is,

V := {y : a_n(x) y^(n)(x) + · · · + a_1(x) y′(x) + a_0(x) y(x) = 0} (1.6.12)

We can regard V as a subset of the set of all functions on the interval I:

V ⊆ Fun(I)

Checkpoint 1.6.27 Show that V is a subspace of Fun(I).

So, by Lemma 1.6.2, we conclude that the set of solutions to a homogeneous linear differential equation is a vector space.

Example 1.6.28 Continuation of Example 1.6.25. Consider the differential equation from Example 1.6.25. We saw that

y1 = x^2 sin(log x), y2 = x^2 cos(log x)

are solutions. So, any linear combination of y1 and y2 is also a solution. For instance,

y = 2y1 + 5y2

is also a solution. Let us check this in SageMath.

def solves_de(y):
    return bool(x^2*diff(y,x,2) - 3*x*diff(y,x) + 5*y == 0)

y1 = x^2*sin(log(x))
y2 = x^2*cos(log(x))

solves_de(2*y1 + 5*y2)

Example 1.6.29 A non-example: Solutions to a nonlinear ODE. We saw in the previous example that linear ordinary differential equations (ODEs) are well-behaved: a linear combination of solutions is still a solution. This need not occur in the nonlinear case. For example, consider the nonlinear ODE

y′ = y^2. (1.6.13)

The general solution is given by

y_c = 1/(c − x)

where c is a constant. For instance,

y1 = 1/(1 − x), y2 = 1/(2 − x)

are solutions.

Use the SageMath script below to check whether the linear combination y1 + y2 is also a solution.


y = function('y')(x)

def solves_de(f):
    return bool(diff(f,x) - f^2 == 0)

y1 = 1/(1-x)
y2 = 1/(2-x)

solves_de(y1+y2)

The answer is False! So linear combinations of solutions to the nonlinear differential equation (1.6.13) are no longer solutions, in general. □
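The same failure can be seen in plain Python by comparing exact derivatives at a sample point (x = 0, where both solutions are defined):

```python
def y(x):
    # the linear combination y1 + y2 = 1/(1-x) + 1/(2-x)
    return 1 / (1 - x) + 1 / (2 - x)

def yprime(x):
    # its exact derivative
    return 1 / (1 - x)**2 + 1 / (2 - x)**2

x0 = 0.0
print(yprime(x0), y(x0)**2)  # 1.25 versus 2.25: not equal
assert yprime(x0) != y(x0)**2
```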

Example 1.6.30 Finding the general solution to a differential equation in SageMath. Let us use SageMath to find the general solution of the following ordinary differential equation:

y′′ + 2y′ + 5y = 0. (1.6.14)

We can do this as follows. Note that we need to be a bit more careful now, first defining our variable x and then declaring that y is a function of x.

var('x')
y = function('y')(x)

diff_eqn = diff(y,x,2) + 2*diff(y,x,1) + 5*y == 0
desolve(diff_eqn, y)

SageMath reports that the general solution is given in terms of two unspecified constants _K1 and _K2 as (_K2*cos(2*x) + _K1*sin(2*x))*e^(-x).

If we set _K1 equal to 1 and _K2 equal to 0 in the general solution, we will get a particular solution y1 of the differential equation.

var('x, _K1, _K2')
y = function('y')(x)

diff_eqn = diff(y,x,2) + 2*diff(y,x,1) + 5*y == 0

my_soln = desolve(diff_eqn, y)

y1 = my_soln.substitute(_K1==1, _K2==0)

y1

SageMath is telling us that y1 = e^(−x) sin(2x) is a particular solution.

Edit the code to set _K1 equal to 0 and _K2 equal to 1 in the general solution to get a different particular solution y2. What is y2?


1.6.4 Exercises

1. Show that the set

   V := {(a, −a, b, −b) : a, b ∈ R}

   is a subspace of R4.

2. Show that the set

   V := {polynomials of the form p(x) = ax^3 + bx^2 − cx + a, a, b, c ∈ R}

   is a subspace of Poly3.

3. Let b ∈ R. Prove that

   V := {(x1, x2, x3) ∈ R3 : 2x1 − 3x2 + 5x3 = b}

   is a subspace of R3 if and only if b = 0.

4. Consider the set

   V := {f ∈ Diff((−1, 1)) : f′(0) = 2}

   Is V a subspace of Diff((−1, 1))? If you think it is, prove that it is. If you think it is not, prove that it is not!

5. Consider the set

   V := {(x1, x2, x3, . . .) ∈ R∞ : lim_{n→∞} xn = 0}

   Is V a subspace of R∞? If you think it is, prove that it is. If you think it is not, prove that it is not!

6. Is R+ := {x ∈ R : x ≥ 0} a subspace of R? If you think it is, prove that it is. If you think it is not, prove that it is not!

7. Give an example of a nonempty subset U of R2 which is closed under addition and under taking additive inverses (i.e. if u is in U then −u is in U), but U is not a subspace of R2.

8. Give an example of a nonempty subset V of R2 which is closed under scalar multiplication, but V is not a subspace of R2.

The next 4 exercises will help acquaint the reader with the concept of the sum of two subspaces. First, we'll need a definition.

Definition 1.6.31 Let V be a vector space. Suppose U and W are two subspaces of V. The sum U + W of U and W is defined by

U + W := {u + w ∈ V : u ∈ U, w ∈ W}

In the exercises below, V, U, W will be as above.

9. Show that U + W is a subspace of V.

10. Show that U + W is, in fact, the smallest subspace of V containing both U and W.

11. If W ⊂ U, what is U + W?

12. Can you think of two subspaces of R2 whose sum is R2? Similarly, can you think of two subspaces of R2 whose sum is not all of R2?


1.6.5 Solutions


Chapter 2

Finite-dimensional vector spaces

In this course we concentrate on finite-dimensional vector spaces, which we will define in this chapter.

Warning: From now on, I will use shorthand and write scalar multiplication k.v simply as kv!

2.1 Linear combinations and span

We start with some basic definitions.

Definition 2.1.1 A linear combination of a finite collection v1, . . . , vn of vectors in a vector space V is a vector of the form

a1v1 + a2v2 + · · · + anvn (2.1.1)

where a1, a2, . . . , an are scalars. If all the scalars ai are 0, we say that it is the trivial linear combination. ♦

Example 2.1.2 First example of a linear combination. In R3, (6, 2, −14) is a linear combination of (−3, 1, 2) and (−2, 0, 3) because

(6, 2, −14) = 2(−3, 1, 2) − 6(−2, 0, 3).
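The arithmetic in Example 2.1.2 is easy to verify in plain Python (the helper comb is invented for illustration):

```python
def comb(coeffs, vecs):
    # a1*v1 + ... + an*vn, computed componentwise
    return tuple(sum(a * v[i] for a, v in zip(coeffs, vecs))
                 for i in range(len(vecs[0])))

result = comb([2, -6], [(-3, 1, 2), (-2, 0, 3)])
print(result)  # (6, 2, -14)
assert result == (6, 2, -14)
```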

Example 2.1.3 Checking if a vector is a linear combination of othervectors. In R4, is v = (2,−1, 3, 0) a linear combination of

v1 = (1, 3, 2, 0),v2 = (5, 1, 2, 4), and v3 = (−1, 0, 2, 1)?

To check this, we need to check if the equation

v = a1v1 + a2v2 + a3v3, (2.1.2)

which is an equation in the unknowns a1, a2, a3, has any solutions. Let us write out (2.1.2) explicitly:

(2, −1, 3, 0) = a1(1, 3, 2, 0) + a2(5, 1, 2, 4) + a3(−1, 0, 2, 1) (2.1.3)
∴ (2, −1, 3, 0) = (a1 + 5a2 − a3, 3a1 + a2, 2a1 + 2a2 + 2a3, 4a2 + a3) (2.1.4)


(2.1.4) is an equation between two vectors in R4. Two vectors in R4 are equal if and only if their corresponding coefficients are equal. So, (2.1.2) is equivalent to the following system of simultaneous linear equations:

a1 + 5a2 − a3 = 2 (2.1.5)
3a1 + a2 = −1 (2.1.6)
2a1 + 2a2 + 2a3 = 3 (2.1.7)
4a2 + a3 = 0 (2.1.8)

In other words, our question becomes: do equations (2.1.5)–(2.1.8) have a solution?

This is the kind of problem you already know how to solve by hand, from first year. We can also use SageMath to do it for us. We simply tell it what our unknown variables are, and then ask it to solve the equations. Press Evaluate (Sage) to see the result.

var('a1, a2, a3')
solve([a1 + 5*a2 - a3 == 2,
       3*a1 + a2 == -1,
       2*a1 + 2*a2 + 2*a3 == 3,
       4*a2 + a3 == 0],
      [a1, a2, a3])

SageMath returns an empty list []. In other words, there are no solutions to equations (2.1.5)–(2.1.8). Therefore v cannot be expressed as a linear combination of v1, v2, v3. �
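An alternative to solving the system directly (a standard fact from first-year matrix theory): v is a linear combination of v1, v2, v3 exactly when appending v as an extra column does not increase the rank of the matrix whose columns are v1, v2, v3. A plain-Python check with numpy:

```python
import numpy as np

# Columns are v1 = (1,3,2,0), v2 = (5,1,2,4), v3 = (-1,0,2,1).
A = np.array([[1, 5, -1],
              [3, 1,  0],
              [2, 2,  2],
              [0, 4,  1]], dtype=float)
v = np.array([2, -1, 3, 0], dtype=float)

rank_A  = np.linalg.matrix_rank(A)
rank_Av = np.linalg.matrix_rank(np.column_stack([A, v]))
print(rank_A, rank_Av)  # the ranks differ, so v is not in the span
```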

Example 2.1.4 Checking if a polynomial is a linear combination of other polynomials. In Poly2, can p = x2 − 1 be expressed as a linear combination of

p1 = 1 + x2, p2 = x − 3, p3 = x2 + x + 1, p4 = x2 + x − 1?

To check this, we need to check if the equation

p = a1p1 + a2p2 + a3p3 + a4p4, (2.1.9)

which is an equation in the unknowns a1, a2, a3, a4, has any solutions. Let us write out (2.1.9) explicitly, grouping together powers of x:

p = a1p1 + a2p2 + a3p3 + a4p4
∴ x2 − 1 = a1(1 + x2) + a2(x − 3) + a3(x2 + x + 1) + a4(x2 + x − 1)
∴ −1 + x2 = (a1 − 3a2 + a3 − a4) + (a2 + a3 + a4)x + (a1 + a3 + a4)x2

Now, two polynomials are equal if and only if all their coefficients are equal. So, (2.1.9) is equivalent to the following system of simultaneous linear equations:

a1 − 3a2 + a3 − a4 = −1 (2.1.10)
a2 + a3 + a4 = 0 (2.1.11)
a1 + a3 + a4 = 1 (2.1.12)

In other words, our question becomes: do equations (2.1.10)–(2.1.12) have a solution? We ask SageMath.

var('a1, a2, a3, a4')


solve([a1 - 3*a2 + a3 - a4 == -1,
       a2 + a3 + a4 == 0,
       a1 + a3 + a4 == 1],
      [a1, a2, a3, a4])

Sage returns a solution involving a single free parameter r1. I'm going to call it t instead, because that's what we usually call our free parameters! Writing t for r1, the general solution is

a1 = 3 + 2t, a2 = 2 + 2t, a3 = t, a4 = −2 − 3t.

So, equations (2.1.10)–(2.1.12) have infinitely many solutions, parameterized by one free parameter t. In particular, there exists at least one solution. For instance, if we take t = 0 (a totally arbitrary choice!), we get the following solution:

a1 = 3, a2 = 2, a3 = 0, a4 = −2 (2.1.13)

i.e. p = 3p1 + 2p2 − 2p4 (2.1.14)

You should expand out the right hand side of (2.1.14) by hand and check thatit indeed is equal to p.

We conclude that p can indeed be expressed as a linear combination of p1, p2, p3 and p4. �
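If you prefer plain Python to Sage, the same check can be automated with sympy, which equates coefficients for us (a sketch, using only the polynomials defined in this example):

```python
import sympy as sp

x = sp.symbols('x')
a1, a2, a3, a4 = sp.symbols('a1 a2 a3 a4')

p  = x**2 - 1
p1 = 1 + x**2
p2 = x - 3
p3 = x**2 + x + 1
p4 = x**2 + x - 1

# p = a1*p1 + ... + a4*p4 as polynomials: equate coefficients of 1, x, x^2.
combo = a1*p1 + a2*p2 + a3*p3 + a4*p4
eqs = sp.Poly(sp.expand(combo - p), x).all_coeffs()
sols = sp.solve(eqs, [a1, a2, a3, a4], dict=True)
print(sols)  # a parametric family: one free parameter remains in the solution
```

Substituting the returned solution back into the combination reproduces p identically, whatever value the free parameter takes.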

Example 2.1.5 Define the functions f , f1, f2 ∈ Diff by

f(x) = cos3 x, f1(x) = cos(x), f2(x) = cos(3x).

Then f is a linear combination of f1 and f2 because of the identity cos3 x = (1/4)(3 cos x + cos(3x)). See Example 1.6.22. In other words,

f = (3/4) f1 + (1/4) f2.

This example shows that f is also a trigonometric polynomial, even though its original formula f(x) = cos3 x was not in the form (1.6.7). �
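A quick numerical sanity check, in plain Python, of the identity cos3 x = (1/4)(3 cos x + cos 3x) behind this example:

```python
import math

# Spot-check cos^3(x) = (3*cos(x) + cos(3*x))/4 at a few sample points.
for x in [0.0, 0.7, 1.9, 3.1]:
    lhs = math.cos(x)**3
    rhs = (3*math.cos(x) + math.cos(3*x))/4
    assert abs(lhs - rhs) < 1e-12
print("identity holds at the sample points")
```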

Definition 2.1.6 We say that a list of vectors B = {v1, v2, . . . , vn} in a vector space V spans V if every vector v ∈ V is a linear combination of the vectors from B. ♦

Example 2.1.7 R2 is spanned by

e1 := (1, 0), e2 := (0, 1)

because every vector v = (a1, a2) can be written as the linear combination

v = a1e1 + a2e2.

Example 2.1.8 Checking if a list of vectors spans the vector space. Is R2 spanned by the following list of vectors?

f1 := (−1, 2), f2 := (1, 1), f3 := (2, −1)


Figure 2.1.9: A list of vectors which spans R2.

Solution. To check this, we need to check whether every vector v ∈ V can be written as a linear combination of f1, f2 and f3.

So, let v = (v1, v2) be a fixed, but arbitrary, vector in R2. We need to check if the following equation has a solution for a1, a2, a3:

v = a1f1 + a2f2 + a3f3 (2.1.15)

Let us write this equation out explicitly:

v = a1f1 + a2f2 + a3f3 (2.1.16)
∴ (v1, v2) = a1(−1, 2) + a2(1, 1) + a3(2, −1) (2.1.17)
∴ (v1, v2) = (−a1 + a2 + 2a3, 2a1 + a2 − a3) (2.1.18)

(2.1.18) is an equation between two vectors in R2. Two vectors in R2 are equal if and only if their corresponding coefficients are equal. So, (2.1.18) is equivalent to the following system of simultaneous linear equations:

−a1 + a2 + 2a3 = v1 (2.1.19)
2a1 + a2 − a3 = v2 (2.1.20)

In other words, the original question

Is R2 spanned by f1, f2, f3 ?

is equivalent to the question

Can we always solve (2.1.19)–(2.1.20) for a1, a2, a3, no matter what the fixed constants v1, v2 are?

You already know how to solve simultaneous linear equations such as (2.1.19)–(2.1.20) by hand:

−a1 + a2 + 2a3 = v1 (2.1.21)
2a1 + a2 − a3 = v2 (2.1.22)

∴ −a1 + a2 + 2a3 = v1 (2.1.24)
3a2 + 3a3 = 2v1 + v2   R2 → R2 + 2R1 (2.1.25)

Let a3 = t (2.1.27)
∴ a2 = (1/3)(2v1 + v2) − t (2.1.28)
∴ a1 = (1/3)(−v1 + v2) + t (2.1.29)

In other words, no matter what v1, v2 are, there are always infinitely many solutions (parameterized by a free parameter t) to (2.1.19)–(2.1.20), and hence to our original equation (2.1.15). That is, we can express any v ∈ R2 as a linear combination of the vectors f1, f2, f3... and in fact there are infinitely many ways to do it, parameterized by a free parameter t!

For instance, suppose we try to write v = (2, 3) as a linear combination of f1, f2, f3. If we take our general solution (2.1.27)–(2.1.29), and set t = 0, then we get

a1 = 1/3, a2 = 7/3, a3 = 0, i.e. v = (1/3)f1 + (7/3)f2.

Or we could take, say, t = 1. Then our solution will be

a1 = 4/3, a2 = 4/3, a3 = 1, i.e. v = (4/3)f1 + (4/3)f2 + f3.

There are infinitely many solutions. But the important point is that there is always a solution to (2.1.15), no matter what v is. Therefore, the vectors f1, f2, f3 indeed span R2.

Finally, let us solve this problem using SageMath. Working by hand, we arrive at the simultaneous linear equations (2.1.19)–(2.1.20), and then put them into a Sage cell:

var('a1, a2, a3, v1, v2')
solve([-a1 + a2 + 2*a3 == v1,
       2*a1 + a2 - a3 == v2],
      [a1, a2, a3])

Note that I needed to tell Sage that v1 and v2 are variables, and that I am asking it to solve for a1, a2 and a3. On my computer, Sage outputs:

[[a1 == r1 - 1/3*v1 + 1/3*v2, a2 == -r1 + 2/3*v1 + 1/3*v2, a3 == r1]]

Here, r1 is to be interpreted as our free parameter t. So Sage is giving us the same solution as we found by hand, (2.1.27)–(2.1.29). �
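Equivalently (a standard fact from first-year matrix theory, not yet re-proved at this point in the text): f1, f2, f3 span R2 precisely when the matrix with these vectors as columns has rank 2. A plain-Python check with numpy:

```python
import numpy as np

# Columns are f1 = (-1, 2), f2 = (1, 1), f3 = (2, -1).
F = np.array([[-1, 1,  2],
              [ 2, 1, -1]], dtype=float)
print(np.linalg.matrix_rank(F))  # 2, so the columns span R^2
```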

Example 2.1.10 Rn is spanned by

e1 := (1, 0, . . . , 0), e2 := (0, 1, . . . , 0), . . . , en := (0, 0, . . . , 0, 1) (2.1.30)

because every vector v = (a1, a2, . . . , an) can be written as the linear combination

v = a1e1 + a2e2 + · · ·+ anen. (2.1.31)


Checkpoint 2.1.11 Check equation (2.1.31).
Solution.

v = a1e1 + a2e2 + · · · + anen = a1(1, 0, . . . , 0) + · · · + an(0, . . . , 0, 1) = (a1, 0, . . . , 0) + · · · + (0, . . . , 0, an) = (a1, a2, . . . , an).

The next Lemma provides a very convenient method for checking whether a given list of vectors C spans a vector space V if one already knows that some other list B spans V .

Lemma 2.1.12 Suppose that B = {b1, . . . , bm} spans a vector space V . Furthermore, suppose that each vector in B is a linear combination of the vectors from some other list C = {c1, . . . , cn}. Then C also spans V .
Proof. Let v be an arbitrary vector in V . Since B spans V , we can write v as a linear combination of the B vectors:

v = a1b1 + a2b2 + · · · + ambm (2.1.32)

But each B vector can be written as a linear combination of the C vectors:

bi = λi,1c1 + λi,2c2 + · · · + λi,ncn

Substituting into Equation (2.1.32) gives

v = a1(λ1,1c1 + · · · + λ1,ncn) + · · · + am(λm,1c1 + · · · + λm,ncn)
= (a1λ1,1 + · · · + amλm,1)c1 + · · · + (a1λ1,n + · · · + amλm,n)cn

So we have expressed v as a linear combination of the C vectors. Hence C spans V . �

Exercises
1. Recall from 1st year that a function f : R → R is even if f(−x) = f(x) and odd if f(−x) = −f(x). Show that every vector in the vector space Fun(R) is a linear combination of an even function and an odd function.

2. Suppose v1, v2, v3, v4 spans V . Prove that v1 − v2, v2 − v3, v3 − v4, v4 also spans V .

3. Consider the following polynomials in Poly2:

r1(x) := 3x2 − 2, r2(x) := x2 + x, r3(x) := x+ 1, r4(x) := x− 1

(a) Can the polynomial p with p(x) = x2 + 1 be written as a linear combination of r1, r2, r3, r4?

(b) If so, in how many ways can this be done?

4. Suppose that the vectors e1, e2, e3 and e4 span a vector space V . Show that the vectors f1 := e2 − e1, f2 := e3 − e2, f3 := e4 − e3, f4 := e4 also span V .


5. Show that the polynomials

q0(x) := 1, q1(x) := x, q2(x) := 2x2 − 1, q3(x) := 4x3 − 3x

span Poly3.

6. Let S = {v1, . . . , vn} be a list of vectors in a vector space V . Suppose that S spans V . Suppose that w is another vector in V . Prove that the list of vectors S′ = {w, v1, . . . , vn} also spans V .

7. Let S = {v1, . . . , vn} be a list of vectors in a vector space V . Suppose that S spans V . Suppose that one of the vectors in the list, say vr, can be expressed as a linear combination of the preceding vectors:

vr = a1v1 + · · · + ar−1vr−1 (2.1.33)

Suppose that we remove vr from S, to arrive at a new list

T = {v1, . . . , vr−1, vr+1, . . . , vn} (vr omitted)

Prove that T also spans V .

Solutions

· Exercises
2.1.1. Solution. The solution is relatively straightforward. Define the following two functions:

feven(x) = (1/2)(f(x) + f(−x)), fodd(x) = (1/2)(f(x) − f(−x))

It is easy to see that, as the names suggest, feven is an even function and fodd is an odd function. We simply sum feven and fodd:

feven(x) + fodd(x) = (1/2)(f(x) + f(−x)) + (1/2)(f(x) − f(−x)) = f(x).

2.1.2. Solution. If we are given that v1, v2, v3, v4 spans V , then to show that any other collection of vectors in V spans V it suffices to show that each of v1, v2, v3, v4 can be written as a linear combination of the new collection, by Lemma 2.1.12. With this observation in hand, the exercise has an easy solution.

v1 = (v1 − v2) + (v2 − v3) + (v3 − v4) + v4

v2 = (v2 − v3) + (v3 − v4) + v4

v3 = (v3 − v4) + v4

v4 = v4

2.1.3. Solution.
(a) We must set up the appropriate system of linear equations:

ar1(x) + br2(x) + cr3(x) + dr4(x) = p(x)
=⇒ a(3x2 − 2) + b(x2 + x) + c(x + 1) + d(x − 1) = x2 + 1

After grouping like powers of x we obtain

x2(3a + b) + x(b + c + d) + (−2a + c − d) = x2 + 1.


We equate coefficients on both sides of the equation to obtain the following system of linear equations:

3a + b = 1,
b + c + d = 0,
−2a + c − d = 1.

Using your preferred method for solving a system of linear equations (such as Gauss reduction), we obtain a solution set of the form:

d is free, a = 2 + 2d, b = −5 − 6d, c = 5 + 5d.

And so p(x) is indeed a linear combination of r1(x), r2(x), r3(x), r4(x).

(b) Since d is free in the above solution set, we can write p(x) as a linear combination of r1(x), r2(x), r3(x), r4(x) in an uncountably infinite number of ways (one for each real number!).

2.1.4. Solution. You could choose to show this directly, or we could use a clever approach based on Exercise 2 and Lemma 2.1.12. From Exercise 2, we know that e1 − e2, e2 − e3, e3 − e4, e4 must span V . But if these vectors span V , then non-zero multiples of these vectors also span V . Thus f1, f2, f3, f4 must span V .

2.1.5. Solution. Once again we base our strategy on Lemma 2.1.12. Pick a spanning set for Poly3. We'll use 1, x, x2, x3, since it's the simplest. Certainly 1 and x are spanned by q0, q1, q2, q3, since 1 = q0(x) and x = q1(x). It can easily be seen that

x2 = (1/2)q2(x) + (1/2)q0(x)
x3 = (1/4)q3(x) + (3/4)q1(x),

completing the proof.

2.1.7. Solution. We must show that every vector v ∈ V can be written as a linear combination of the vectors from T . So let v ∈ V . Since S spans V , we know we can write v as a linear combination of the vectors from S:

v = b1v1 + · · · + brvr + · · · + bnvn (2.1.34)

Substituting (2.1.33) into (2.1.34) gives

v = b1v1 + · · · + br(a1v1 + · · · + ar−1vr−1) + br+1vr+1 + · · · + bnvn (2.1.35)
= (b1 + bra1)v1 + · · · + (br−1 + brar−1)vr−1 + br+1vr+1 + · · · + bnvn (2.1.36)

Equation (2.1.36) shows that we can express v as a linear combination of the vectors from T . Hence T spans V .


2.2 Linear independence

Definition 2.2.1 A list of vectors B = {v1, v2, . . . , vn} in a vector space V is called linearly independent if the equation

k1v1 + k2v2 + · · · + knvn = 0 (2.2.1)

has only the trivial solution k1 = k2 = · · · = kn = 0. Otherwise (if (2.2.1) has a solution with at least one scalar ki ≠ 0) the list B is called linearly dependent. ♦

Remark 2.2.2 Zero vector implies linear dependence. Suppose one of the vectors vi in the list B = {v1, . . . , vn} is the zero vector 0. Then B is linearly dependent, since the equation (2.2.1) has the nontrivial solution

0v1 + · · · + 0vi−1 + 1vi + 0vi+1 + · · · + 0vn = 0,

in other words,

k1 = 0, . . . , ki−1 = 0, ki = 1, ki+1 = 0, . . . , kn = 0.

So: a linearly independent list of vectors never contains the zero vector!

Example 2.2.3 The list of vectors f1 = (−1, 2), f2 = (1, 1) from Example 2.1.8 is linearly independent, because the equation

k1(−1, 2) + k2(1, 1) = (0, 0)

is equivalent to the system of equations

− k1 + k2 = 0, 2k1 + k2 = 0 (2.2.2)

which has only the trivial solution k1 = 0 and k2 = 0. �

Checkpoint 2.2.4 Check that (2.2.2) has only the trivial solution.
Solution. Subtracting the first equation from the second gives

(2k1 + k2) − (−k1 + k2) = 3k1 = 0 =⇒ k1 = 0 =⇒ k2 = 0 too.

Example 2.2.5 The list of vectors f1 = (−1, 2), f2 = (1, 1), f3 = (2, −1) from Example 2.1.8 is linearly dependent, because the equation

k1(−1, 2) + k2(1, 1) + k3(2, −1) = (0, 0) (2.2.3)

is equivalent to the system of equations

− k1 + k2 + 2k3 = 0, 2k1 + k2 − k3 = 0 (2.2.4)

which has a one-dimensional vector space of solutions parameterized by t,

k1 = t, k2 = −t, k3 = t, t ∈ R. (2.2.5)

For instance, for t = 2, we have

2(−1, 2)− 2(1, 1) + 2(2, −1) = (0, 0)

so that (2.2.3) has nontrivial solutions. �
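The dependence can also be read off from the nullspace of the matrix whose columns are f1, f2, f3: its nonzero elements are exactly the nontrivial solutions of (2.2.3). A plain-Python sketch with sympy:

```python
import sympy as sp

# Columns are f1 = (-1, 2), f2 = (1, 1), f3 = (2, -1).
F = sp.Matrix([[-1, 1,  2],
               [ 2, 1, -1]])

ns = F.nullspace()
print(ns)  # one basis vector, (1, -1, 1): i.e. k1 = t, k2 = -t, k3 = t
```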


Checkpoint 2.2.6 Check that (2.2.4) has the solution set (2.2.5).
Solution. We have a consistent system of homogeneous linear equations, so we know there exists at least one solution, namely the trivial solution. Since we have 3 unknowns but only 2 equations, the solution is not unique. Let k1 be free, i.e. k1 = t, t ∈ R. Subtracting the second equation from the first gives

(−k1 + k2 + 2k3) − (2k1 + k2 − k3) = −3k1 + 3k3 = 0 =⇒ k3 = k1 = t,

and then −t + k2 + 2t = 0 =⇒ k2 = −t.

Example 2.2.7 The list of polynomials

q0(x) := 1, q1(x) := x, q2(x) := 2x2 − 1, q3(x) := 4x3 − 3x

from Exercise 2.1.5 is linearly independent in Poly3. This is because the equation

k0q0 + k1q1 + k2q2 + k3q3 = 0

becomes the following equation between polynomials:

4k3x3 + 2k2x2 + (k1 − 3k3)x + (k0 − k2) = 0

This is equivalent to the following system of equations,

4k3 = 0, 2k2 = 0, k1 − 3k3 = 0, k0 − k2 = 0

which has only the trivial solution k0 = k1 = k2 = k3 = 0. �

Here are two more ways to think about linearly dependent lists of vectors.

Proposition 2.2.8 Equivalent Formulations of Linear Dependence. Let B = {v1, . . . , vn} be a list of vectors in a vector space V . The following statements are equivalent:

1. The list of vectors B is linearly dependent.

2. (Linear Combination of Other Vectors) One of the vectors in the list B is a linear combination of the other vectors in B.

3. (Linear Combination of Preceding Vectors) Either v1 = 0, or for some r ∈ {2, 3, . . . , n}, vr is a linear combination of v1, v2, . . ., vr−1.

Proof. We will show that (1) ⇔ (2), (1) ⇒ (3) and (3) ⇒ (2), and conclude that each statement implies the others.

(1) ⇒ (2). Suppose that B is linearly dependent. This means that there are scalars k1, k2, . . ., kn, not all zero, such that

k1v1 + k2v2 + · · · + knvn = 0. (2.2.6)

Let ks be one of the nonzero coefficients. Then, by taking the other vectors to the other side of the equation, and multiplying by 1/ks, we can solve for vs in terms of the other vectors:

vs = −(k1/ks)v1 − · · · − (kn/ks)vn (no vs term on the RHS)

Therefore, (2) is true.

(2) ⇒ (1). Suppose that one of the vectors in the list, say vs, is a linear combination of the other vectors. That is,

vs = k1v1 + · · · + knvn (no vs term on the RHS)

Rearranging this equation gives:

k1v1 + . . .+ (−1)vs + . . .+ knvn = 0. (2.2.7)


Not all the coefficients on the LHS of (2.2.7) are zero, since the coefficient of vs is equal to −1. Therefore, B is linearly dependent.

(1) ⇒ (3). Suppose that the list B = {v1, . . . , vn} is linearly dependent. This means that there are scalars k1, k2, . . ., kn, not all zero, such that

k1v1 + k2v2 + · · · + knvn = 0. (2.2.8)

Let r ∈ {1, 2, . . . , n} be the largest index such that kr ≠ 0. (We are told that not all the ki are zero, so this makes sense.) If r = 1, then (2.2.8) is simply the equation

k1v1 = 0, where k1 ≠ 0.

Therefore v1 = 0 by Lemma 1.5.6, and we are done. On the other hand, suppose r ≠ 1. Then (2.2.8) becomes the equation

k1v1 + k2v2 + · · · + krvr = 0, where kr ≠ 0.

By dividing by kr, we can now solve for vr in terms of the preceding vectors v1, v2, . . ., vr−1:

∴ vr = −(k1/kr)v1 − (k2/kr)v2 − · · · − (kr−1/kr)vr−1

Therefore, (3) is true.

(3) ⇒ (2). Suppose that (3) is true. In other words, either:

• v1 = 0. Therefore, B is linearly dependent, by Remark 2.2.2. In other words, (1) is true. Therefore, since we have already proved that (1) ⇒ (2), we conclude that (2) is true.

• For some r ∈ {2, . . . , n}, vr is a linear combination of v1, . . . , vr−1. In this case, clearly vr is a linear combination of the other vectors in B, so (2) is true.

In both cases, (2) is true. So, (3) ⇒ (2). �

Example 2.2.9 We saw in Example 2.2.5, using the definition of linear dependence, that the list of vectors f1 = (−1, 2), f2 = (1, 1), f3 = (2, −1) in R2 is linearly dependent. Give two alternative proofs of this, using Proposition 2.2.8.
Solution 1. We check Item 2 of Proposition 2.2.8. That is, we check if one of the vectors in the list is a linear combination of the other vectors. Indeed, we observe by inspection that

f2 = f1 + f3. (2.2.9)

Hence, B is linearly dependent.

Solution 2. We check Item 3 of Proposition 2.2.8. That is, we check:

• Is f1 = 0? No.

• Is f2 a scalar multiple of f1? No.

• Is f3 a linear combination of f1 and f2? Yes, since

f3 = −f1 + f2.

Hence, B is linearly dependent.


Proposition 2.2.10 Bumping Off Proposition. Suppose L = {l1, l2, . . . , lm} is a linearly independent list of vectors in a vector space V , and that S = {s1, s2, . . . , sn} spans V . Then m ≤ n.
Proof. Start with the original spanning list of vectors

S = {s1, s2, . . . , sn} (2.2.10)

and consider the 'bloated' list

S′ = {l1, s1, s2, . . . , sn} (2.2.11)

Now, since S spans V , we know that l1 can be written as a linear combination of the vectors s1, . . . , sn. Therefore, by Item 2 of Proposition 2.2.8, we know that S′ is linearly dependent. Thus, by Item 3 of Proposition 2.2.8, either:

• l1 = 0. This cannot be true, since then L would be linearly dependent by Remark 2.2.2, contradicting our initial assumption.

• or one of the s-vectors, say sr, can be expressed as a linear combination of the preceding vectors. We can then remove sr from the list S′ ('bump it off'), and the resulting list

S1 := {l1, s1, s2, . . . , sr−1, sr+1, . . . , sn} (sr omitted) (2.2.12)

will still span V , by Exercise 2.1.7.

We can go on in this way, each time transferring another one of the l-vectors into the list, and removing another one of the s-vectors, and still have a list which spans V :

L = {l1, . . . , lm}    S = {s1, . . . , sn}
L1 = {l2, . . . , lm}   S1 = {l1, together with n − 1 of the s-vectors}
L2 = {l3, . . . , lm}   S2 = {l2, l1, together with n − 2 of the s-vectors}
. . .

Now, suppose that m > n. When we reach the nth stage of this process, we will have Sn = {ln, . . . , l1}, and it will span V . Therefore, in particular, ln+1 (which we know exists, since m > n) will be a linear combination of l1, . . . , ln. But then, by Item 2 of Proposition 2.2.8, we conclude that L is linearly dependent. But we were told in the beginning that L is linearly independent, so we have a contradiction. Hence, our assumption that m > n must be false. Therefore, we must have m ≤ n. �

Exercises
1. Show that the list of vectors (2, 3, 1), (1, −1, 2), (7, 3, c) is linearly dependent in R3 if and only if c = 8.

2. The list of vectors in Mat2,2 given by

v1 = [1 2; 1 1], v2 = [1 0; −2 1], v3 = [1 0; 2 3], v4 = [0 3; 1 −1], v5 = [1 0; 0 1]

(each matrix written row by row, with rows separated by semicolons)


is linearly dependent (you will prove this in Exercise 2.3.6.4, but for the sake of this question you may assume it to be true). Go through the same steps as in Example 2.2.9 to find the first vector in the list which is either the zero vector or a linear combination of the preceding vectors.

3. Let S = {v1, . . . , vn} be a list of vectors in a vector space V . Suppose that S spans V . Suppose that w is another vector in V . Prove that the list of vectors S′ = {w, v1, . . . , vn} also spans V .

4. Let B = {v1, . . . , vn} be a linearly independent list of vectors in a vector space V . Suppose that v is a vector in V which cannot be written as a linear combination of the vectors in B. Show that the list B′ = {v1, . . . , vn, v} is still linearly independent. (Hint: Use the Linear Combination of Preceding Vectors Proposition.)

5. Consider the vector space of functions on the closed unit interval, Fun([0, 1]). Show that for any n ∈ N, we can find n linearly independent vectors in Fun([0, 1]).

6. (Bonus) Try to adapt the argument in the question above to show that for any n ∈ N, we can find n linearly independent vectors in Cont([0, 1]), the vector space of all continuous real-valued functions on [0, 1].

Solutions

· Exercises
2.2.1. Solution. We set up a linear equation and find the necessary conditions on c. Suppose some linear combination of the vectors equals 0:

k1(2, 3, 1) + k2(1, −1, 2) + k3(7, 3, c) = (0, 0, 0)

This vector equation gives rise to a system of 3 linear equations:

2k1 + k2 + 7k3 = 0, 3k1 − k2 + 3k3 = 0, k1 + 2k2 + ck3 = 0.

The corresponding matrix equation is

[2 1 7; 3 −1 3; 1 2 c] [k1; k2; k3] = [0; 0; 0]

(rows separated by semicolons). This matrix is non-invertible if and only if its determinant is 0. Furthermore, the matrix being non-invertible means we can find a non-trivial solution to the initial equation. We compute the determinant:

det [2 1 7; 3 −1 3; 1 2 c] = −5c + 40

which is 0 if and only if c = 8.

2.2.2. Solution. Firstly note that v1 is non-zero, so we consider v2. v2 cannot be a scalar multiple of v1 (consider the matrix entry in position (1,2)). We now consider v3. Suppose

a [1 2; 1 1] + b [1 0; −2 1] = [1 0; 2 3]

This gives rise to a system of four linear equations. In particular, we have the


equation for the matrix entry in position (1,2):

2a + 0b = 0

And hence a = 0. But clearly

b [1 0; −2 1] ≠ [1 0; 2 3]

for any choice of b. Hence v3 is not a linear combination of v1 and v2. We consider v4 next. Suppose

a [1 2; 1 1] + b [1 0; −2 1] + c [1 0; 2 3] = [0 3; 1 −1]

The equation for the entry in position (1,2) is simply

2a = 3

and so a = 3/2. The corresponding equation for the entry in position (1,1) is thus

3/2 + b + c = 0.

Using this result, we consider the equation for the entry in position (2,2) and compute:

3/2 + b + 3c = −1 =⇒ (3/2 + b + c) + 2c = −1 =⇒ 2c = −1 =⇒ c = −1/2

and so b = −1. But now the equation for the entry in position (2,1), namely a − 2b + 2c = 1, fails: 3/2 + 2 − 1 = 5/2 ≠ 1. So the system is inconsistent, and v4 is not a linear combination of v1, v2, v3. Finally, consider v5. Since the list is linearly dependent while v1, v2, v3, v4 are (by the above) linearly independent, v5 must be a linear combination of the preceding vectors; indeed, solving the corresponding system gives

v5 = 2v1 − (1/3)v2 − (2/3)v3 − (4/3)v4.

So v5 is the first vector in the list which is a linear combination of the preceding vectors.
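As a numerical cross-check of this solution in plain Python (numpy rather than Sage): flatten each 2×2 matrix row by row into a vector of R4, an illustrative encoding which does not affect linear combinations. The first four vectors come out linearly independent, and v5 is then a combination of them:

```python
import numpy as np

# The five matrices of the exercise, flattened row-major into vectors of R^4.
v1 = [1, 2, 1, 1]
v2 = [1, 0, -2, 1]
v3 = [1, 0, 2, 3]
v4 = [0, 3, 1, -1]
v5 = [1, 0, 0, 1]

A = np.array([v1, v2, v3, v4], dtype=float).T  # columns v1, ..., v4

# v1, ..., v4 are linearly independent: the 4x4 matrix A has full rank.
assert np.linalg.matrix_rank(A) == 4

# Hence v5 must be a combination of them; solve A x = v5 for the coefficients.
coeffs = np.linalg.solve(A, np.array(v5, dtype=float))
print(coeffs)  # coefficients 2, -1/3, -2/3, -4/3
```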

2.2.3. Solution. To say that S spans V is to say that for every vector v ∈ V there exist scalars a1, . . . , an (depending on v) such that

a1v1 + · · · + anvn = v.

But of course, it is also true that

0w + a1v1 + · · · + anvn = v

because 0w = 0 and so has no effect on the sum. Hence any vector in V is a linear combination of w, v1, . . . , vn, which is to say that the list S′ = {w, v1, . . . , vn} spans V .

2.3 Basis and dimension

In this section we introduce the notions of:

• a basis of a vector space, and

• the dimension of a vector space.

Then we compute the dimensions of the vector spaces we have introduced up to now. We end by explaining the sifting algorithm, which allows us to prove some useful results concerning bases and dimension.


Definition 2.3.1 A list of vectors B = {e1, e2, . . . , en} in a vector space V is called a basis for V if it is linearly independent and spans V . ♦

Theorem 2.3.2 Invariance of dimension. If B = {b1, b2, . . . , bm} and C = {c1, c2, . . . , cn} are bases of a vector space V , then m = n.
Proof. This is a consequence of Proposition 2.2.10 (the Bumping Off Proposition). Since the b-vectors are linearly independent and the c-vectors span V , we have m ≤ n. On the other hand, since the c-vectors are linearly independent and the b-vectors span V , we have n ≤ m. Hence m = n. �

Definition 2.3.3 A vector space V is finite-dimensional if there exists a basis B for V . In that case, the dimension of V is the number of vectors in the basis B. A vector space is infinite-dimensional if it is not finite-dimensional.

Note that the concept of 'dimension of a vector space' is only well-defined because of Theorem 2.3.2.

The case of the zero vector space Z = {0} is not explicitly handled in Definition 2.3.3, and we treat it as a special case. Namely, we define the dimension of the zero vector space Z to be 0. So, by definition, Z is finite-dimensional, and its dimension equals 0.

2.3.1 Dimensions of some familiar vector spaces

Example 2.3.4 Standard basis for Rn. The list of vectors

e1 := (1, 0, . . . , 0), e2 := (0, 1, . . . , 0), . . . , en := (0, 0, . . . , 0, 1)

is a basis for Rn. We already saw in Example 2.1.10 that this list spans Rn. We need to check that it is linearly independent. So, suppose that

a1e1 + a2e2 + · · ·+ anen = 0.

Expanding out the left hand side in components using the definition of the standard basis vectors ei, this becomes the equation

(a1, 0, 0, . . . , 0) + (0, a2, 0, . . . , 0) + · · ·+ (0, 0, 0, . . . , an) = (0, 0, 0, . . . , 0).

In other words, we have

(a1, a2, a3, . . . , an) = (0, 0, 0, . . . , 0)

which says precisely that a1 = a2 = a3 = · · · = an = 0, which is what we needed to prove. Thus the list of vectors e1, e2, . . . , en is linearly independent, and is hence a basis for Rn. So Rn has dimension n. �

Example 2.3.5 A basis for R4. Check whether the following list of vectors

v1 = (1, 0, 2, −3), v2 = (1, 3, −1, 2), v3 = (0, 1, 2, −1), v4 = (1, 2, 3, 4) (2.3.1)

is a basis for R4.
Solution. First we check if the list of vectors is linearly independent. Consider the equation

a1v1 + a2v2 + a3v3 + a4v4 = 0 (2.3.2)
∴ a1(1, 0, 2, −3) + a2(1, 3, −1, 2) + a3(0, 1, 2, −1) + a4(1, 2, 3, 4) = (0, 0, 0, 0) (2.3.3)
∴ (a1 + a2 + a4, 3a2 + a3 + 2a4, 2a1 − a2 + 2a3 + 3a4, −3a1 + 2a2 − a3 + 4a4) = (0, 0, 0, 0) (2.3.4)

So the list of vectors is linearly independent if and only if the following equations have only the trivial solution a1 = 0, a2 = 0, a3 = 0, a4 = 0:

a1 + a2 + a4 = 0 (2.3.5)
3a2 + a3 + 2a4 = 0 (2.3.6)
2a1 − a2 + 2a3 + 3a4 = 0 (2.3.7)
−3a1 + 2a2 − a3 + 4a4 = 0 (2.3.8)

We can compute the solutions to equations (2.3.5)–(2.3.8) by hand, or using SageMath.

var('a1, a2, a3, a4')
solve([a1 + a2 + a4 == 0,
       3*a2 + a3 + 2*a4 == 0,
       2*a1 - a2 + 2*a3 + 3*a4 == 0,
       -3*a1 + 2*a2 - a3 + 4*a4 == 0],
      [a1, a2, a3, a4])

SageMath outputs:

[[a1 == 0, a2 == 0, a3 == 0, a4 == 0]]

So indeed, equations (2.3.5)–(2.3.8) have only the trivial solution. Therefore the list of vectors is linearly independent.

Next, we need to check that the list of vectors spans R4. (There is a shorter way of doing this, using Corollary 2.3.32 below, but for now we prove it from first principles.) So, let w = (w1, w2, w3, w4) be an arbitrary vector in R4. We need to show that there exists at least one way to express w as a linear combination of the vectors v1, v2, v3, v4. In other words, we need to check if there exists at least one solution to the following equation:

a1v1 + a2v2 + a3v3 + a4v4 = w(2.3.9)

∴ a1(1, 0, 2,−3) + a2(1, 3,−1, 2) + a3(0, 1, 2,−1) + a4(1, 2, 3, 4) = (w1, w2, w3, w4)(2.3.10)

∴ (a1 − a2 + a4, 3a2 + a3 + 2a4, 2a1 − a2 + 2a3 + 3a4,−3a1 + 2a2 − a3 + 4a4) = (w1, w2, w3, w4)(2.3.11)

So the list of vectors spans R4 if and only if the following equations fora1, a2, a3, a4 always have a solution, no matter what the values of w1, w2, w3, w4are:

a1 − a2 + a4 = w1 (2.3.12)3a1 + a3 + 2a4 = w2 (2.3.13)

2a1 − a2 + 2a3 + 3a4 = w3 (2.3.14)−3a1 + 2a2 − a3 + 4a4 = w4 (2.3.15)


We can compute the solutions to equations (2.3.12)–(2.3.15) by hand, or using SageMath:

var('a1, a2, a3, a4, w1, w2, w3, w4')

solve([a1 - a2 + a4 == w1,
       3*a1 + a3 + 2*a4 == w2,
       2*a1 - a2 + 2*a3 + 3*a4 == w3,
       -3*a1 + 2*a2 - a3 + 4*a4 == w4],
      [a1, a2, a3, a4])

Note that we ask SageMath to solve for a1, a2, a3, a4, since w1, w2, w3, w4 are regarded as constants in the equation: we are not trying to solve for them; they are fixed, but arbitrary! SageMath outputs:

[[a1 == 1/9*w1 + 7/18*w2 - 2/9*w3 - 1/18*w4, a2 == -2/3*w1 + 5/12*w2 - 1/6*w3 + 1/12*w4, a3 == -7/9*w1 - 2/9*w2 + 5/9*w3 - 1/9*w4, a4 == 2/9*w1 + 1/36*w2 + 1/18*w3 + 5/36*w4]]

In other words, there does indeed exist a solution, no matter what (w1, w2, w3, w4) is. For instance, if (w1, w2, w3, w4) = (3, 1, 2, 4), then the solution is

a1 = 1/18, a2 = −19/12, a3 = −17/9, a4 = 49/36.

In other words,

(3, 1, 2, 4) = (1/18)v1 − (19/12)v2 − (17/9)v3 + (49/36)v4.

Since there exists a solution to equation (2.3.9) for each vector w ∈ R4, we conclude that {v1, v2, v3, v4} spans R4.

Hence {v1, v2, v3, v4} is a basis for R4, since it is linearly independent and spans R4. □

Example 2.3.6 Dimension of Polyn. The list of polynomials

p0(x) := 1, p1(x) := x, p2(x) := x^2, . . . , pn(x) := x^n

is a basis for Polyn, so Dim Polyn = n + 1. Indeed, this list spans Polyn by definition, so we just need to check that it is linearly independent. Suppose that

a0p0 + a1p1 + a2p2 + · · · + anpn = 0.

This is an equation between functions, so it holds for all x ∈ R! In other words, for all x ∈ R, the following equation holds:

a0 + a1x + a2x^2 + · · · + anx^n = 0 (2.3.16)

Think about this carefully. Equation (2.3.16) represents an infinite family of equations for the unknowns a0, a1, . . . , an. There is one equation for each value of x ∈ R. For example:

(x = 1) a0 + a1 + a2 + · · · + an = 0 (2.3.17)
(x = −1) a0 − a1 + a2 + · · · + (−1)^n an = 0 (2.3.18)
(x = 2) a0 + 2a1 + 4a2 + · · · + 2^n an = 0 (2.3.19)
(x = 3) a0 + 3a1 + 9a2 + · · · + 3^n an = 0 (2.3.20)
... (2.3.21)


Suppose we find values for a0, a1, a2, . . . , an which solve all these infinitely many equations (2.3.17)–(2.3.21). We can now change our point of view. Namely, substitute these fixed values for a0, a1, . . . , an into Equation (2.3.16), and regard Equation (2.3.16) as an equation for the unknown x (the coefficients a0, a1, . . . , an are now fixed). We conclude that every x ∈ R is a root of this polynomial equation!

But we know from algebra that a polynomial equation of the form (2.3.16) whose coefficients are not all zero has at most n roots x1, x2, . . . , xn. So, in order for (2.3.16) to hold for all real numbers x, the coefficients must all be zero, i.e. a0 = a1 = a2 = · · · = an = 0, which is what we needed to show. □
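The root-counting argument has a concrete matrix counterpart: if a polynomial of degree at most n vanished at n + 1 distinct points, its coefficient vector would be a null vector of the Vandermonde matrix of those points, which is invertible. A small NumPy sketch (our own illustration, not from the text):

```python
import numpy as np

n = 4
# If a0 + a1*x + ... + an*x^n vanished at the n + 1 distinct points
# x = 0, 1, ..., n, the coefficient vector a would satisfy V @ a = 0,
# where V is the Vandermonde matrix of those points.
points = np.arange(n + 1)
V = np.vander(points, increasing=True)

# V is invertible (the points are distinct), so V @ a = 0 forces a = 0:
# the monomials 1, x, ..., x^n are linearly independent.
print(np.linalg.matrix_rank(V) == n + 1)  # True
```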

Example 2.3.7 Dimension of Polyn[x, y]. Recall from Example 1.6.20 the vector space Polyn[x, y] of polynomials in variables x and y of degree less than or equal to n.

A basis for Poly0[x, y] is given by the constant polynomial

1

so that Dim Poly0[x, y] = 1. Similarly, a basis for Poly1[x, y] is given by the polynomials

1, x, y

so that Dim Poly1[x, y] = 3. Similarly, a basis for Poly2[x, y] is given by the polynomials

1, x, y, x^2, xy, y^2

so that Dim Poly2[x, y] = 6. Reasoning in this way, we see that

Dim Polyn[x, y] = 1 + 2 + 3 + · · · + (n + 1)
               = (n + 1)(n + 2)/2.

Example 2.3.8 Dimension of Vectn(R2). Recall from Example 1.6.21 the vector space Vectn(R2) of polynomial vector fields on R2 whose component functions have degree less than or equal to n.

A basis for Vect0(R2) is given by the polynomial vector fields

(1, 0), (0, 1)

so that Dim Vect0(R2) = 1 + 1 = 2. Similarly, a basis for Vect1(R2) is given by the polynomial vector fields

(1, 0), (x, 0), (y, 0), (0, 1), (0, x), (0, y)

so that Dim Vect1(R2) = 3 + 3 = 6. Similarly, a basis for Vect2(R2) is given by the polynomial vector fields

(1, 0), (x, 0), (y, 0), (x^2, 0), (xy, 0), (y^2, 0),
(0, 1), (0, x), (0, y), (0, x^2), (0, xy), (0, y^2)

so that Dim Vect2(R2) = 6 + 6 = 12. Reasoning in this way, we see that

Dim Vectn(R2) = Dim Polyn[x, y] + Dim Polyn[x, y] = (n + 1)(n + 2).


Example 2.3.9 Suppose X is a finite set. Then Fun(X) is finite-dimensional, with dimension |X|, with basis given by the functions fa, a ∈ X, defined by:

fa(x) := 1 if x = a, and 0 otherwise. (2.3.22)

We will prove this in a series of exercises.

The formula on the right hand side of (2.3.22) occurs so often in mathematics that we give it a symbol of its own, δab (the 'Kronecker delta'). This symbol stands for the rule: "If a = b, return a 1. If a ≠ b, return a 0." In this language, we can rewrite the definition of the functions fa as

fa(x) := δax. (2.3.23)
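In programming terms, the basis of Example 2.3.9 is a family of 'indicator functions', and the expansion of an arbitrary g ∈ Fun(X) in this basis is g = Σ over a ∈ X of g(a)·fa. A small Python sketch (the names f and g are our own):

```python
# The 'delta' basis of Fun(X) for a finite set X: f_a(x) = 1 if x == a else 0.
X = ['a', 'b', 'c']

def f(a):
    """Return the basis function f_a of Example 2.3.9."""
    return lambda x: 1 if x == a else 0

# An arbitrary function g: X -> R, stored as a dictionary of values.
g = {'a': 2.0, 'b': -1.0, 'c': 7.0}

# g is recovered as the linear combination sum_a g(a) * f_a.
for x in X:
    expansion = sum(g[a] * f(a)(x) for a in X)
    print(x, expansion == g[x])  # True for every x
```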

Checkpoint 2.3.10 Suppose X = {a, b, c}.

1. Evaluate the function fb at each x ∈ X.

2. Show that {fa, fb, fc} is a basis for Fun(X).

Checkpoint 2.3.11 Now let X be an arbitrary finite set. Consider the collection of functions

B = {fa : a ∈ X}

Show that B is a basis for Fun(X).

Example 2.3.12 Trign is (2n+ 1)-dimensional, with basis

T0(x) := 1, T1(x) := cos x, T2(x) := sin x, T3(x) := cos 2x,
T4(x) := sin 2x, . . . , T2n−1(x) := cos nx, T2n(x) := sin nx.

You know that these functions span Trign, by definition. They are also linearly independent, though we will not prove this. □
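Although the text does not prove linear independence, a quick numerical spot check is easy: sample the functions at 2n + 1 equally spaced points on [0, 2π). For n = 2 the sampled columns are (up to scaling) the real discrete Fourier vectors on 5 points, which are mutually orthogonal, so the 5 × 5 sample matrix has full rank. A NumPy sketch of this check (our own illustration):

```python
import numpy as np

# Sample 1, cos x, sin x, cos 2x, sin 2x at x_k = 2*pi*k/5, k = 0..4.
# These sampled columns are the real DFT vectors on 5 points, which are
# mutually orthogonal, so the matrix below is invertible.
xs = 2 * np.pi * np.arange(5) / 5
M = np.column_stack([np.ones_like(xs), np.cos(xs), np.sin(xs),
                     np.cos(2 * xs), np.sin(2 * xs)])
print(np.linalg.matrix_rank(M))  # 5: no nontrivial relation among the samples
```

Full rank of the sample matrix rules out any linear relation among the functions, since a relation would have to hold at the sample points too.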

Example 2.3.13 The dimension of Matn,m is nm, with basis given by the matrices

Eij , i = 1 . . . n, j = 1 . . . m

which have a 1 in the ith row and jth column and zeroes everywhere else.

Usually A is a matrix, and Aij is the element of the matrix at position (i, j). But now Eij is a matrix in its own right! Its element at position (k, l) will be written as (Eij)kl. I hope you don't find this too confusing. In fact, we can write down an elegant formula for the elements of Eij using the Kronecker delta symbol:

(Eij)kl = δik δjl (2.3.24)

Checkpoint 2.3.14 Check that (2.3.24) is indeed the correct formula for the matrix elements of Eij.
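A numerical sanity check of formula (2.3.24) is easy to set up (this is not a substitute for doing the checkpoint by hand, and it uses 0-based indices, as Python does):

```python
import numpy as np

def delta(i, j):
    """Kronecker delta."""
    return 1 if i == j else 0

n, m = 2, 3
for i in range(n):
    for j in range(m):
        # E_ij has a 1 in row i, column j, and zeros elsewhere ...
        E = np.zeros((n, m))
        E[i, j] = 1
        # ... which is exactly formula (2.3.24): (E_ij)_kl = delta_ik * delta_jl.
        F = np.array([[delta(i, k) * delta(j, l) for l in range(m)]
                      for k in range(n)])
        assert (E == F).all()
print("formula (2.3.24) verified for all i, j")
```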


Example 2.3.15 The standard basis of Mat2,2 is

E11 = [ 1 0 ]   E12 = [ 0 1 ]   E21 = [ 0 0 ]   E22 = [ 0 0 ]
      [ 0 0 ]         [ 0 0 ]         [ 1 0 ]         [ 0 1 ]

Example 2.3.16 The standard basis of Coln is

e1 := (1, 0, . . . , 0)^T, e2 := (0, 1, . . . , 0)^T, . . . , en := (0, 0, . . . , 1)^T.

Example 2.3.17 Dimension of a hyperplane. Let v ∈ Rn be a fixed vector, and consider the hyperplane W ⊂ Rn orthogonal to v as in Example 1.6.13. Then you will prove in Checkpoint 2.3.19 that Dim(W) = n − 1.

For instance, consider the specific example from Example 1.6.13, namely the plane W ⊂ R3 of vectors orthogonal to v = (1, 2, 3). In other words,

W = {(w1, w2, w3) ∈ R3 : w1 + 2w2 + 3w3 = 0}. (2.3.25)

There is no 'standard' basis for W. But here is one basis (as good as any other):

a = (1, 0, −1/3), b = (0, 1, −2/3). (2.3.26)

You will show this is indeed a basis for W in Checkpoint 2.3.18 below. I computed these vectors as follows. To obtain a, I simply set w1 = 1, w2 = 0 and then solved for w3 using Equation (2.3.25). Similarly, for b, I simply set w1 = 0, w2 = 1 and then solved for w3 using (2.3.25).

There is nothing special about my method above for computing a basis for W. Here is a different basis for W, which I arrived at by choosing random values of w1 and w2 and then calculating what w3 must be in order to satisfy Equation (2.3.25):

u = (1, 2, −5/3), v = (−4, 2, 0). (2.3.27)

In any event, we see that Dim(W ) = 2. �
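A quick numerical check of the claimed basis (2.3.26) — Checkpoint 2.3.18 below asks for an actual proof — is that both vectors are orthogonal to v = (1, 2, 3) and linearly independent. A NumPy sketch:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
a = np.array([1.0, 0.0, -1.0 / 3.0])
b = np.array([0.0, 1.0, -2.0 / 3.0])

# Both vectors satisfy w1 + 2*w2 + 3*w3 = 0, i.e. they lie in W ...
print(np.isclose(v @ a, 0.0), np.isclose(v @ b, 0.0))  # True True

# ... and they are linearly independent, consistent with Dim(W) = 2.
print(np.linalg.matrix_rank(np.vstack([a, b])))        # 2
```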

Checkpoint 2.3.18 Show that the list of vectors {a, b} from (2.3.26) in Example 2.3.17 is a basis for W.

Checkpoint 2.3.19 Let v ∈ Rn be a fixed vector, and set

W := {w ∈ Rn : v · w = 0}

Prove that Dim(W) = n − 1.
Hint. Find a basis for the solution space of the equation determining W.

2.3.2 Dimension of the space of solutions to a homogeneous linear differential equation

We will now compute the dimension of the vector space of solutions to a homogeneous linear ordinary differential equation. We will need the following theorem from the theory of differential equations, which we won't prove.


Theorem 2.3.20 Existence and uniqueness of solutions to linear ODEs. Let

y^(n) + an−1(x)y^(n−1) + · · · + a1(x)y^(1) + a0(x)y = 0 (2.3.28)

be a linear homogeneous ordinary differential equation on an interval I, where a0(x), . . . , an−1(x) are continuous on I. Suppose we are given initial conditions

y(x0) = c0 (2.3.29)
y^(1)(x0) = c1 (2.3.30)
... (2.3.31)
y^(n−1)(x0) = cn−1 (2.3.32)

where x0 ∈ I and c0, . . . , cn−1 are arbitrary constants. Then there exists a unique function y(x) on I satisfying the differential equation (2.3.28) and the initial conditions (2.3.29)–(2.3.32).

We will not cover the proof of Theorem 2.3.20, which you can find in a course on Differential Equations. We are more interested in its consequences for Linear Algebra. But first, an example to illustrate the Theorem.

Example 2.3.21 Visualizing Theorem 2.3.20. The following app gives a visual demonstration of Theorem 2.3.20. Drag the red and green sliders to change the coefficients of the differential equation (in this example, the coefficients are just numbers, but in general they are functions of x). Drag the blue points to change the initial conditions.

[Interactive GeoGebra app, available online at www.geogebra.org/material/iframe/id/xfs4bpqk]

Figure 2.3.22: GeoGebra App: 2nd order ODE with constant coefficients.

Instead, we are interested in the following Linear Algebra consequence of Theorem 2.3.20.

Corollary 2.3.23 Dimension of the space of solutions to a homogeneous linear ODE of order n. Let V be the vector space of all solutions to an nth order homogeneous linear ordinary differential equation on an interval I,

y^(n) + an−1(x)y^(n−1) + · · · + a1(x)y^(1) + a0(x)y = 0, (2.3.33)

where a0(x), . . . , an−1(x) are continuous on I. Then

Dim(V ) = n.

Proof. Choose a fixed a ∈ I. By the existence part of Theorem 2.3.20, we know that there exist

y0, . . . , yn−1 ∈ V (2.3.34)


satisfying

yi^(j)(a) = δij for i, j = 0, 1, . . . , n − 1. (2.3.35)

Here is (2.3.35) written out in full:

y0(a) = 1        y1(a) = 0        · · ·        yn−1(a) = 0 (2.3.36)
y0^(1)(a) = 0    y1^(1)(a) = 1    · · ·        yn−1^(1)(a) = 0 (2.3.37)
...              ...                           ... (2.3.38)
y0^(n−1)(a) = 0  y1^(n−1)(a) = 0  · · ·        yn−1^(n−1)(a) = 1 (2.3.39)

I claim that {y0, . . . , yn−1} is a basis for V.

Step 1. {y0, . . . , yn−1} is linearly independent. Suppose

k0y0 + · · ·+ kn−1yn−1 = 0. (2.3.40)

Differentiating Equation (2.3.40) successively gives us n equations:

k0y0 + · · · + kn−1yn−1 = 0 (2.3.41)
k0y0′ + · · · + kn−1y′n−1 = 0 (2.3.42)
... (2.3.43)
k0y0^(n−1) + · · · + kn−1yn−1^(n−1) = 0 (2.3.44)

Evaluating (2.3.41) at x = a, and using y0(a) = 1, y1(a) = 0, . . . , yn−1(a) = 0 from (2.3.36), gives

k0 · 1 + k1 · 0 + · · · + kn−1 · 0 = 0
∴ k0 = 0.

Similarly, evaluating (2.3.42) at x = a, and using y0′(a) = 0, y1′(a) = 1, . . . , y′n−1(a) = 0 from (2.3.37), gives

k0 · 0 + k1 · 1 + · · · + kn−1 · 0 = 0
∴ k1 = 0.

Similarly, evaluating the remaining higher derivatives at x = a gives k2 = 0, . . . , kn−1 = 0. Therefore, {y0, . . . , yn−1} is linearly independent.

Step 2. {y0, . . . , yn−1} spans V. Let y be an arbitrary solution to the differential equation (2.3.33). We need to show that y can be expressed as a linear combination of y0, . . . , yn−1.

Define scalars c0, . . . , cn−1 by evaluating the derivatives of y at x = a:

c0 := y(a)
c1 := y′(a)
...
cn−1 := y^(n−1)(a)

I claim that

y = c0y0 + c1y1 + · · · + cn−1yn−1. (2.3.45)

To prove this, let f be the function on the right hand side of (2.3.45):

f := c0y0 + c1y1 + · · ·+ cn−1yn−1


Clearly f ∈ V; in other words, it is a solution of the differential equation (2.3.33).

Moreover, consider successively differentiating f and evaluating at x = a. Using the initial conditions satisfied by the yi, Equation (2.3.35), we compute:

f(a) = c0
f′(a) = c1
...
f^(n−1)(a) = cn−1

These are precisely the initial conditions satisfied by y, by the definition of the scalars c0, . . . , cn−1! Therefore, by the uniqueness part of Theorem 2.3.20, we conclude that f = y. Hence (2.3.45) is indeed true. □

Let us do another example.

Example 2.3.24 Application of the existence and uniqueness theorem for solutions to ODEs. Consider the ODE

x^2 y′′ − 3x y′ + 5y = 0 on (0, ∞) (2.3.46)

from Example 1.6.25. To apply Theorem 2.3.20, we first rewrite it in the form

y′′ − (3/x) y′ + (5/x^2) y = 0 on (0, ∞). (2.3.47)

The coefficient functions −3/x and 5/x^2 are continuous on (0, ∞), so we can apply Theorem 2.3.20. Choose, say, x0 = 1 and arbitrary numbers c0, c1. Then Theorem 2.3.20 says that there exists a unique solution y(x) to the differential equation (2.3.47) satisfying the initial conditions:

y(1) = c0 (2.3.48)
y′(1) = c1 (2.3.49)

Let us verify this in SageMath. First, we ask SageMath to find the most general solution to the differential equation (2.3.47):

x = var('x')
y = function('y')(x)
ode = diff(y,x,2) - 3/x * diff(y,x,1) + 5/x^2 * y == 0
show(desolve(ode, y))

SageMath tells us that the most general solution to the differential equation (2.3.47) is

y = K1 x^2 sin(log(x)) + K2 x^2 cos(log(x)). (2.3.50)

Let us now apply the initial conditions (2.3.48)–(2.3.49). We can compute y(1) and y′(1) using the formula for y from (2.3.50). So (2.3.48)–(2.3.49) become (check this):

K2 = c0 (2.3.51)
K1 + 2K2 = c1 (2.3.52)

Equations (2.3.51)–(2.3.52) have a unique solution, namely K1 = c1 − 2c0, K2 = c0. So indeed, for any initial conditions (2.3.48)–(2.3.49), the differential equation (2.3.47) has a unique solution, namely:

y = (c1 − 2c0) x^2 sin(log(x)) + c0 x^2 cos(log(x))


For instance, if our initial conditions were

y(1) = 1 (2.3.53)
y′(1) = 0 (2.3.54)

then the unique solution is

y = −2x^2 sin(log(x)) + x^2 cos(log(x)). (2.3.55)
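As an independent check of (2.3.55), one can confirm numerically that the claimed solution satisfies both the ODE (2.3.46) and the initial conditions (2.3.53)–(2.3.54), approximating the derivatives by central differences. A plain-Python sketch (our own, not from the text):

```python
import math

def y(x):
    # The claimed solution (2.3.55).
    return -2 * x**2 * math.sin(math.log(x)) + x**2 * math.cos(math.log(x))

def residual(x, h=1e-4):
    # Left-hand side of x^2 y'' - 3x y' + 5y = 0, with y' and y''
    # approximated by central differences of step h.
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**2 * d2 - 3 * x * d1 + 5 * y(x)

print(abs(y(1) - 1) < 1e-12)        # True: y(1) = 1, condition (2.3.53)
d1_at_1 = (y(1 + 1e-4) - y(1 - 1e-4)) / (2e-4)
print(abs(d1_at_1) < 1e-3)          # True: y'(1) = 0, condition (2.3.54)
for x in [0.5, 1.0, 2.0, 5.0]:
    print(abs(residual(x)) < 1e-3)  # True: ODE holds up to discretization error
```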

You can also check this explicitly in SageMath, using the ics=[1,1,0] option of desolve (the first number is the value of x0, the second number is the value of y(x0), and the third number is the value of y′(x0), etc.):

x = var('x')
y = function('y')(x)
ode = diff(y,x,2) - 3/x * diff(y,x,1) + 5/x^2 * y == 0
show(desolve(ode, y, ics=[1,1,0]))

SageMath outputs the same solution as in (2.3.55). Similarly, if our initial conditions were

y(1) = 0 (2.3.56)
y′(1) = 1 (2.3.57)

then the unique solution is

y = x^2 sin(log(x)).

2.3.3 Dimensions of subspaces

We now consider dimensions of subspaces of vector spaces.

Proposition 2.3.25 Let W be a subspace of a finite-dimensional vector space V. Then W is finite-dimensional, and Dim(W) ≤ Dim(V). Moreover, if Dim(W) = Dim(V) then W = V.

Proof. Proving that Dim(W) ≤ Dim(V). Let

C = {v1, . . . , vn}

be a basis for V, so that Dim(V) = n. We just need to show that W is finite-dimensional, i.e. that there exists a basis

B = {w1, . . . , wk}

for W. For then B will be a list of k linearly independent vectors which live in W (and hence also in V), and hence we must have k ≤ n by Proposition 2.2.10, as C spans V.

We show that W is finite-dimensional as follows.

If W is the zero vector space {0}, then W is finite-dimensional by definition.

If W is not the zero vector space, then there exists a nonzero vector w1 ∈ W. Consider the list B1 = {w1}. Note that B1 is linearly independent, by Item 3 of Proposition 2.2.8. So, if B1 spans W, then it is a basis for W, and so W is


finite-dimensional and we are done.

If B1 does not span W, then there exists a vector w2 ∈ W which is not a scalar multiple of w1. Now consider the list B2 = {w1, w2}. Once again, B2 is linearly independent, by Item 3 of Proposition 2.2.8. So, if B2 spans W, then it is a basis for W, and we are done.

If B2 does not span W, then there exists a vector w3 ∈ W which is not a linear combination of w1 and w2. Now consider the list B3 = {w1, w2, w3}. Again, B3 is linearly independent, by Item 3 of Proposition 2.2.8. If it does not span W, then there exists a vector w4 ∈ W which is not a linear combination of w1, w2, w3. So consider the list B4 = {w1, w2, w3, w4}.

This process must terminate for some k ≤ n. If not, then it would produce a list Bn+1 = {w1, . . . , wn+1}. This would be a linearly independent list of n + 1 vectors from V. But Dim V = n, so this is impossible, by Proposition 2.2.10. Hence for some k ≤ n we must have that Bk is a basis for W, and we are done.

Proving that Dim(V) = Dim(W) ⇒ W = V. Suppose that Dim(W) = Dim(V) but that W ≠ V. Since W ≠ V, there exists a vector v which is an element of V but not an element of W. Let B = {w1, . . . , wn} be a basis for W. We can add v to B to get the following list of vectors in V:

B′ = {w1, . . . , wn, v}.

Since v cannot be written as a linear combination of w1, . . . , wn, we conclude that B′ is linearly independent, by Item 3 of Proposition 2.2.8. But then B′ is a list of n + 1 linearly independent vectors in an n-dimensional vector space V. This is impossible, by Proposition 2.2.10. Therefore our assumption was false, and we must have W = V. □

2.3.4 Infinite-dimensional vector spaces

It is good to have an example of an infinite-dimensional vector space.

Proposition 2.3.26 Poly is infinite-dimensional.

Proof. Suppose Poly is finite-dimensional. This means there exists a finite collection of polynomials p1, p2, . . . , pn which spans Poly. But let d be the highest degree of all the polynomials in the list p1, p2, . . . , pn. Then p := x^{d+1} is a polynomial which is not in the span of p1, p2, . . . , pn, since adding polynomials together and multiplying them by scalars can never increase the degree. We have arrived at a contradiction. So our initial assumption cannot be correct, i.e. Poly cannot be finite-dimensional. □

Example 2.3.27 We will not prove this here, but the following vector spaces are also infinite-dimensional:

• R∞,

• Fun(X) where X is an infinite set,

• Cont(I) for any nonempty interval I,

• Diff(I) for any open interval I, and

• Polyk.


2.3.5 The sifting algorithm and its uses

If we consider the proof of Proposition 2.2.10 (the 'Bumping off' Proposition) carefully, we find that it makes use of a sifting algorithm. This algorithm can actually be applied to any list of vectors v1, v2, . . . , vn in a vector space. Consider each vector vi in the list consecutively. If vi is the zero vector, or if it is a linear combination of the preceding vectors v1, v2, . . . , vi−1, remove it from the list.

Example 2.3.28 Sift the following list of vectors in R3:

v1 = (1, 2, −1), v2 = (0, 0, 0), v3 = (3, 6, −3),
v4 = (1, 0, 5), v5 = (5, 4, 13), v6 = (1, 1, 0).

We start with v1. Since it is not the zero vector, and is not a linear combination of any preceding vectors, it remains. We move on to v2, which is zero, so we remove it. We move on to v3, which by inspection is equal to 3v1, so we remove it. We move on to v4. It is not zero, and cannot be expressed as a multiple of v1 (check this), so it remains. We move on to v5. We check if it can be written as a linear combination

v5 = av1 + bv4

and find the solution a = 2, b = 3 (check this), so we remove it. Finally we move on to v6. We check if it can be written as a linear combination

v6 = av1 + bv4

and find no solutions (check this), so it remains. Our final sifted list is

v1,v4,v6.
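The sifting algorithm is easy to mechanize: a vector is kept exactly when appending it to the vectors kept so far enlarges the span, i.e. increases the rank. A NumPy sketch (the function name sift is our own), run on the list from Example 2.3.28:

```python
import numpy as np

def sift(vectors):
    # Keep v only if it enlarges the span of the vectors kept so far:
    # appending a new linearly independent vector raises the rank by 1.
    kept = []
    for v in vectors:
        if np.linalg.matrix_rank(np.array(kept + [v])) == len(kept) + 1:
            kept.append(v)
    return kept

vs = [(1, 2, -1), (0, 0, 0), (3, 6, -3),
      (1, 0, 5), (5, 4, 13), (1, 1, 0)]
print(sift(vs))  # [(1, 2, -1), (1, 0, 5), (1, 1, 0)], i.e. v1, v4, v6
```

Note that the zero vector is handled automatically: appending it never increases the rank, so it is always removed.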

Checkpoint 2.3.29 Do the three 'check this' calculations above.

Solution.

1. Suppose

(a, 2a, −a) = (1, 0, 5).

Then the first entry requires that a = 1, but the second entry requires that a = 0. Hence there can be no a satisfying the equation.

2. We compute:

2v1 + 3v4 = 2(1, 2, −1) + 3(1, 0, 5) = (2 + 3, 4 + 0, −2 + 15) = (5, 4, 13) = v5.

3. Suppose

av1 + bv4 = v6, i.e. (a, 2a, −a) + (b, 0, 5b) = (1, 1, 0).

Consideration of the second entry gives a = 1/2 which, by looking at the first entry, forces b = 1/2; but then, if we look at the third entry, −1/2 + 5/2 ≠ 0. Hence there are no solutions to the equation.

Sifting is a very useful way to construct a basis of a vector space!


Lemma 2.3.30 If a list of vectors v1, v2, . . . , vn spans a vector space V, then sifting the list will result in a basis for V.

Proof. At each step, the vector that is removed from the list is either the zero vector, or a linear combination of the vectors before it. So if we remove this vector, the resulting list will still span V. Thus by the end of the process, the final sifted list of vectors still spans V.

To see that the final sifted list is linearly independent, we can apply Proposition 2.2.8. By construction, no vector in the final sifted list is a linear combination of the preceding vectors (if it were, it would have been removed!). Hence the final sifted list is not linearly dependent, so it must be linearly independent! □

Corollary 2.3.31 Any linearly independent list of vectors v1, v2, . . . , vk in a finite-dimensional vector space V can be extended to a basis of V.

Proof. Since V is finite-dimensional, it has a basis e1, . . . , en. Now consider the list

L : v1, v2, . . . , vk, e1, e2, . . . , en

which clearly spans V. By sifting this list, we will arrive at a basis for V, by Lemma 2.3.30. Some of the e-vectors may have been removed. But none of the v-vectors will have been removed, since that would mean some vi is a linear combination of the preceding vectors v1, . . . , vi−1, which is impossible, as v1, . . . , vk is a linearly independent list. Hence after sifting the list L we indeed extend our original list v1, . . . , vk to a basis of V. □

Corollary 2.3.32 If v1, v2, . . . , vn is a linearly independent list of n vectors in an n-dimensional vector space V, then it is a basis.

Proof. By Corollary 2.3.31, we can extend v1, v2, . . . , vn to a basis for V. But V has dimension n, so the basis must contain only n vectors by Theorem 2.3.2 (Invariance of Dimension). So we have not added any vectors at all! Hence our original list was already a basis. □

Example 2.3.33 In Example 2.2.7 we showed that the list of polynomials

q0(x) := 1, q1(x) := x, q2(x) := 2x^2 − 1, q3(x) := 4x^3 − 3x

is linearly independent in Poly3. Since Dim Poly3 = 4, we see that it is a basis of Poly3.

In Exercise 2.1.5, you showed that q0, . . . , q3 is a basis for Poly3 by 'brute force'. This new method is different!
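The new method can also be checked mechanically: write each qi as a coefficient vector with respect to the standard basis 1, x, x^2, x^3 and verify that the resulting 4 × 4 matrix has full rank; by Corollary 2.3.32, four independent vectors in the 4-dimensional space Poly3 form a basis. A NumPy sketch:

```python
import numpy as np

# Coefficient vectors of q0 = 1, q1 = x, q2 = 2x^2 - 1, q3 = 4x^3 - 3x
# with respect to the standard basis 1, x, x^2, x^3 of Poly3.
Q = np.array([
    [1,  0, 0, 0],   # q0
    [0,  1, 0, 0],   # q1
    [-1, 0, 2, 0],   # q2
    [0, -3, 0, 4],   # q3
])

# Full rank => linearly independent => a basis of the 4-dimensional Poly3.
print(np.linalg.matrix_rank(Q) == 4)  # True
```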

2.3.6 Exercises

1. Sift the list of vectors

v1 = (0, 0, 0), v2 = (1, 0, −1), v3 = (1, 2, 3),
v4 = (3, 4, 5), v5 = (4, 8, 12), v6 = (1, 1, 0).

2. Let V be a vector space of dimension n. State whether each of the following statements is true or false. If it is true, prove it. If it is false, give a counterexample.

(a) Any linearly independent list of vectors in V contains at most n vectors.

(b) Any list of vectors which spans V contains at least n vectors.

3. Complete the proof of the following lemma.

Lemma. Suppose V is a vector space of dimension n. Then any linearly independent set of n vectors in V is a basis for V.

Proof. Let B = {v1, . . . , vn} be a linearly independent set of vectors in V.

Suppose that B is not a basis for V.
Therefore, B does not span V, since ... (a)
Therefore, there exists v ∈ V such that ... (b)
Now, add v to the list B to obtain a new list B′ := ... (c)
The new list B′ is linearly independent because ... (d)
This is a contradiction because ... (e)
Hence, B must be a basis for V.

4. Use Exercise 2.3.6.2(a) to show that the list of matrices in Mat2,2 in Exercise 2.2.2 is linearly dependent.

5. In each case, use the results in Exercise 2.3.6.2 and Exercise 2.3.6.3 to determine if B is a basis for V.

(a) V = Poly2, B = {2 + x^2, 1 − x, 1 + x − 3x^2, x − x^2}

(b) V = Mat2,2, B = { [1 2; −1 3], [0 1; 3 −1], [1 2; 3 4] }, where [a b; c d] denotes the matrix with rows (a, b) and (c, d)

(c) V = Trig2, B = {sin^2 x, cos^2 x, 1 − sin 2x, cos 2x + 3 sin 2x}

6. Let {u, v, w} be a linearly independent list of vectors in a vector space V. State whether each of the following statements is true or false. If it is true, prove it. If it is false, give a counterexample. (Hint: Use the definition of linear independence.)

(a) The list {u + v, v + w, u + w} is linearly independent.

(b) The list {u − v, v − w, u − w} is linearly independent.

7. For each of the following, show that V is a subspace of Poly2, find a basis for V, and compute Dim V.

(a) V = {p ∈ Poly2 : p(2) = 0}

(b) V = {p ∈ Poly2 : xp′(x) = p(x)}

8. Prove or disprove: there exists a basis {p0, p1, p2, p3} of Poly3 such that none of the polynomials p0, p1, p2, p3 has degree 2.

9. Prove or disprove: if U and W are distinct subspaces of V with U ≠ V and W ≠ V, then dim(U + W) = dim(U) + dim(W). (Recall the definition of the sum of two subspaces from Exercise 1.6.4.9.)

10. Let V be the vector space of solutions to the differential equation

x^3 y′′′ + x^2 y′′ − 2x y′ + 2y = 0 (2.3.58)

(a) Without performing any explicit calculations, determine the dimension of V.

(b) Using Sage, find a basis for V. (If you're unsure of the syntax, look at the examples in Section 2.3.) Show explicitly (by hand!) that each basis element does indeed satisfy (2.3.58).

(c) Find the unique function y1 that is a solution to (2.3.58) subject to the initial conditions

y1(1) = 1,  y1′(1) = 0,  y1′′(1) = 0.

(d) Find the unique function y2 that is a solution to (2.3.58) subject to the initial conditions

y2(1) = 0,  y2′(1) = 1,  y2′′(1) = 0.

(e) Does the function y3 = y1 + y2 also solve (2.3.58)? By Theorem 2.3.20, y3 is the unique solution to an initial value problem involving (2.3.58): what are those initial conditions?

(f) Read through 2D Plotting in Sage. Use SageMath to plot a graph of y1, y2, y3 on the interval [0, 10].

2.3.7 Solutions

2.3.6.2. Solution.
(a) True.

Proof. Suppose we had a list of n + 1 linearly independent vectors. By Corollary 2.3.31, we could extend the list to a basis for V. Hence we would obtain a basis for V with at least n + 1 vectors. We conclude that dim V ≥ n + 1, contradicting the fact that dim V = n. □

(b) True.

Proof. Try it yourself! Your proof will be very similar to the one given for (a). □

2.3.6.3. Solution.
(a) Any linearly independent spanning set is by definition a basis, contradicting our assumption.

(b) v is not a linear combination of the vectors in B.

(c) {v1, . . . , vn, v}

(d) No vector in B′ is a linear combination of the previous vectors, and so the list is linearly independent by Proposition 2.2.8.

(e) The vector subspace W of V spanned by the vectors in B′ has dimension n + 1, and so dim(W) > dim(V), which contradicts Proposition 2.3.25.

2.3.6.4. Solution. Mat2,2 has dimension 4 since it is spanned by e1,1, e1,2, e2,1, e2,2, where ei,j is the matrix with a 1 in the ij-th entry and 0's everywhere else. By Exercise 2.3.6.2(a), any linearly independent list of vectors in Mat2,2 has length at most 4. Since the list of matrices in Mat2,2 in Exercise 2.2.2 has length 5, it cannot be linearly independent.


2.3.6.5. Solution.
(a) Poly2 has a basis B′ = {1, x, x^2} and so has dimension 3. Since B has length 4, it cannot be linearly independent by Exercise 2.3.6.2(a). Hence B cannot be a basis for Poly2.

(b) Mat2,2 has dimension 4. Since B has length 3, it cannot span Mat2,2. Hence B cannot be a basis for Mat2,2.

(c) Trig2 has dimension 5. Since B has length 4, it cannot span Trig2. Hence B cannot be a basis for Trig2.

2.3.6.6. Solution.
(a) True. Suppose there is a linear relation on {u + v, v + w, u + w}:

a(u + v) + b(v + w) + c(u + w) = 0.

This induces a linear relation on {u, v, w}:

(a + c)u + (a + b)v + (b + c)w = 0.

Since {u, v, w} is linearly independent, we must have that

a + c = 0 (1)
a + b = 0 (2)
b + c = 0 (3)

Adding (1) and (2) and subtracting (3) gives 2a = 0, so a = 0, and then b = c = 0 too. We conclude that {u + v, v + w, u + w} is linearly independent.

(b) False. Let V = R3. Let

u = (1, 0, 0), v = (0, 1, 0), w = (0, 0, 1)

and so

u − v = (1, −1, 0)
v − w = (0, 1, −1)
u − w = (1, 0, −1).

By inspection, we see that

(1, −1, 0) + (0, 1, −1) = (1, 0, −1)

and so {u − v, v − w, u − w} is linearly dependent.

Alternatively, we could have noticed that for any vectors u, v, w:

(u − v) + (v − w) = u − w.

Thus {u − v, v − w, u − w} satisfies Proposition 2.2.8 and so is linearly dependent.

2.3.6.7. Solution.
(a) We omit the check that V is a subspace of Poly2 since it is routine. Let us now construct a basis for V. Begin with a non-zero vector v1 in V. We make the most obvious choice: let v1 = x − 2. Next pick any vector v2 in V not in the span of {v1}. An obvious choice is v2 = x(x − 2). Since V is not all of Poly2, we know that dim V < 3. Thus B′ = {-v1, v2-} is a basis for V and so dim V = 2.

(b) Once again, we omit the check that V is a subspace of Poly2. Pick any non-zero vector v1 in V. Let's choose v1 = x. It is not as obvious as before whether there are indeed any vectors in V not in the span of {v1}, and so we must do some computations. If p(x) = ax² + bx + c is in V,


p(x) must satisfy

x(2ax + b) = ax² + bx + c ⟹ 2ax² + bx = ax² + bx + c.

Hence a = c = 0. Thus all vectors in V are scalar multiples of v1 = x. Hence {-v1-} is a basis for V and so dim V = 1.

2.3.6.8. Solution. The basis B = {-x³, x³ + x², x, 1-} works. The check is routine.

2.3.6.9. Solution. We can disprove the statement with the following counterexample. Let V = R³. Let U be the vector subspace with basis {-(1, 0, 0), (0, 1, 0)-} and let W be the vector subspace with basis {-(0, 0, 1), (0, 1, 0)-}. Clearly dim U = dim W = 2 and thus dim(U) + dim(W) = 4. But since U + W = R³, dim(U + W) = dim R³ = 3.

2.4 Coordinate vectors

There is a more direct way to think about a basis of a vector space.

Figure 2.4.1: Video recording for Proposition 2.4.2 and proof.

Proposition 2.4.2 Bases give coordinates. Let B = {e1, e2, . . . , en} be a list of vectors in a vector space V. Then the following statements are equivalent:

1. B is a basis for V .

2. Every vector v ∈ V can be written as a linear combination

v = a1e1 + a2e2 + · · ·+ anen (2.4.1)

in precisely one way. (That is, for each v ∈ V there exist scalars a1, a2, . . ., an satisfying (2.4.1), and moreover these scalars are unique.)

It is important to understand the mathematical phrase ‘There exists aunique X satisfying Y ’. It means two things. Firstly, that there doesexist an X which satisfies Y . And secondly, that there is no more thanone X which satisfies Y .

Proof. 1 ⇒ 2. Suppose that the list of vectors B = {e1, e2, . . . , en} is a basis for V. Suppose v ∈ V. Since the list of vectors B spans V, we know that we can write v as a linear combination of the vectors in the list in at least one way,

v = a1e1 + a2e2 + · · ·+ anen. (2.4.2)

We need to show that this is the only way to express v as a linear combination of the vectors ei. Indeed, suppose that we also have

v = b1e1 + b2e2 + · · ·+ bnen. (2.4.3)

Subtracting these two equations gives

0 = (a1 − b1)e1 + (a2 − b2)e2 + · · ·+ (an − bn)en.

Since the list of vectors e1, e2, . . ., en is linearly independent, we conclude that

a1 − b1 = 0, a2 − b2 = 0, · · · , an − bn = 0.

That is, a1 = b1, a2 = b2, and so on up to an = bn, and hence the expansion (2.4.2) is unique.
2 ⇒ 1. Conversely, suppose that every vector v can be written as a unique linear combination

v = a1e1 + a2e2 + · · ·+ anen.

The fact that each v can be written as a linear combination of the vectors e1, e2, . . ., en means that B spans V. We still need to show that this list B is linearly independent. So, suppose that there exist scalars b1, b2, . . ., bn such that

b1e1 + b2e2 + · · ·+ bnen = 0. (2.4.4)

We need to show that all the bi must equal zero. We already know one possible solution of (2.4.4): simply set each bi = 0. But we are told that each vector (in particular, the vector 0) can be expressed as a linear combination of the ei in exactly one way. Hence this must be the only solution, i.e. we must have b1 = b2 = · · · = bn = 0, and so the list B is linearly independent. �

Definition 2.4.3 Let B = {b1, b2, . . . , bn} be a basis for a vector space V, and let v ∈ V. Write

v = a1b1 + a2b2 + · · ·+ anbn .

The scalars ai appearing in the above expansion are called the coordinates of the vector v with respect to the basis B. The column vector

[v]B := \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} ∈ Coln

is called the coordinate vector of v with respect to the basis B. ♦

I indicate that a collection of things is a list (where the order matters) and not merely a set (where the order does not matter) using my own home-made symbols {- -}. A basis B = {-b1, b2, . . . , bn-} is a list of vectors. The order of the vectors matters because it affects the coordinate vector [v]B.

Example 2.4.4 Coordinate vectors in R². Find the coordinate vectors of v and w in Figure 2.4.5 with respect to the basis B = {b1, b2}.


Figure 2.4.5: The basis B = {b1, b2} for R², together with the vectors v and w.

Solution. By inspection, we see that v = 2b1 − b2, so that

[v]B = \begin{bmatrix} 2 \\ -1 \end{bmatrix}.

Also by inspection, we see that w = −3b1 + 2b2, so that

[w]B = \begin{bmatrix} -3 \\ 2 \end{bmatrix}. �

Example 2.4.6 GeoGebra App: Coordinate vectors in R². The following GeoGebra interactive applet displays a vector v ∈ R² (in black), a basis B = {-b1, b2-} (in red), and a background of integral linear combinations of the basis vectors. Drag the tip of v and see how the coordinate vector [v]B of v with respect to B changes. You can also change the basis B by dragging the tips of the basis vectors b1, b2. Notice the effect on the coordinate vector [v]B.

[Interactive applet: https://www.geogebra.org/material/iframe/id/tpdmz8kj]

Figure 2.4.7: Interactive GeoGebra applet displaying the coordinate vectorof v with respect to the basis B.


Example 2.4.8

Figure 2.4.9: Video recording for Example 2.4.8.

Find the coordinate vector of p = 2x² − 2x + 3 with respect to the basis

B = {1 + x, x² + x − 1, x² + x + 1}

of Poly2.

Solution. We need to write p as a linear combination of polynomials from the basis B:

2x² − 2x + 3 = a1(1 + x) + a2(x² + x − 1) + a3(x² + x + 1)

Collecting powers of x², x and 1 on the right hand side gives:

2x² − 2x + 3 = (a2 + a3)x² + (a1 + a2 + a3)x + (a1 − a2 + a3)1

This translates into the equations:

a2 + a3 = 2
a1 + a2 + a3 = −2
a1 − a2 + a3 = 3

We can solve these equations by hand, or we can use SageMath:

var('a1 a2 a3')
show(solve((a2 + a3 == 2, a1 + a2 + a3 == -2, a1 - a2 + a3 == 3), (a1, a2, a3)))

We compute the coordinates of p as a1 = −4, a2 = −5/2, a3 = 9/2. In other words,

2x² − 2x + 3 = −4(1 + x) − (5/2)(x² + x − 1) + (9/2)(x² + x + 1).

Therefore,

[p]B := \begin{bmatrix} -4 \\ -5/2 \\ 9/2 \end{bmatrix}.
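We can double-check this expansion symbolically. Here is a sketch in Python with SymPy, playing the role of the SageMath solve above:

```python
import sympy as sp

x = sp.symbols('x')
p = 2*x**2 - 2*x + 3
basis = [1 + x, x**2 + x - 1, x**2 + x + 1]
coords = [-4, sp.Rational(-5, 2), sp.Rational(9, 2)]  # a1, a2, a3 from above

# The linear combination with these coordinates should reproduce p exactly.
combo = sum(a*b for a, b in zip(coords, basis))
assert sp.expand(combo - p) == 0
```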

Example 2.4.10


Figure 2.4.11: Video recording for Example 2.4.10.

Find the coordinate vector of the function f given by

f(x) = sin² x − cos³ x

with respect to the standard basis

S = {-1, cos x, sin x, cos 2x, sin 2x, cos 3x, sin 3x-}

of Trig3.
Solution. Using the addition formulae for sin and cos as in Exercise 1.6.24, we compute:

sin² x − cos³ x = 1/2 − (3/4) cos x − (1/2) cos 2x − (1/4) cos 3x. (2.4.5)

We could also do this in SageMath as follows:

x = var('x')
f = sin(x)^2 - cos(x)^3
show(f.reduce_trig())

Hence

[f]S = \begin{bmatrix} 1/2 \\ -3/4 \\ 0 \\ -1/2 \\ 0 \\ -1/4 \\ 0 \end{bmatrix}.
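Before checking (2.4.5) by hand, here is a quick symbolic confirmation in Python with SymPy: rewriting the multiple angles reduces the difference of the two sides to sin² x + cos² x − 1, which simplifies to zero.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)**2 - sp.cos(x)**3
rhs = (sp.Rational(1, 2) - sp.Rational(3, 4)*sp.cos(x)
       - sp.Rational(1, 2)*sp.cos(2*x) - sp.Rational(1, 4)*sp.cos(3*x))

# expand_trig rewrites cos(2x) and cos(3x) in terms of cos(x) and sin(x);
# the difference then simplifies to zero, confirming (2.4.5).
assert sp.simplify(sp.expand_trig(f - rhs)) == 0
```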

Checkpoint 2.4.12 Check the expansion (2.4.5) by hand.
Solution.

cos 2x = 1 − 2 sin² x ⟹ sin² x = 1/2 − (1/2) cos 2x

cos³ x = cos x cos² x = cos x (1 − sin² x) = cos x ((1/2) + (1/2) cos 2x) = (1/2) cos x + (1/2) cos x cos 2x

cos x cos 2x = cos 3x + sin x sin 2x = cos 3x + 2 sin² x cos x = cos 3x + (1 − cos 2x) cos x = cos 3x + cos x − cos 2x cos x ⟹ cos x cos 2x = (1/2) cos 3x + (1/2) cos x

Thus

sin² x − cos³ x = 1/2 − (1/2) cos 2x − ((1/2) cos x + (1/2) cos x cos 2x)
= 1/2 − (1/2) cos 2x − (1/2) cos x − (1/2)((1/2) cos 3x + (1/2) cos x)
= 1/2 − (3/4) cos x − (1/2) cos 2x − (1/4) cos 3x.


Lemma 2.4.13 Let B = {e1, e2, . . . , en} be a basis for a vector space V. Then for all vectors v, w ∈ V and all scalars k we have

1. [v + w]B = [v]B + [w]B

2. [kv]B = k[v]B

Proof. (a) Suppose that

v = a1e1 + a2e2 + · · ·+ anen

and

w = b1e1 + b2e2 + · · ·+ bnen .

Then, using the rules of a vector space, we compute

v + w = (a1 + b1)e1 + (a2 + b2)e2 + · · ·+ (an + bn)en .

From this we read off that

[v + w]B = \begin{bmatrix} a_1 + b_1 \\ a_2 + b_2 \\ \vdots \\ a_n + b_n \end{bmatrix} = [v]B + [w]B.

The proof of (b) is similar. �

Exercises

1. Prove Lemma 2.4.13(b) in the case where V is two-dimensional, so that B = {e1, e2}. Justify each step using the rules of a vector space.

2. Let B = {-B1, B2, B3, B4-} be the basis of Mat2,2 given by

B1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, B2 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, B3 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, B4 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.

Determine [A]B, where

A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.

3. (a) Find a basis B for the vector space

V := {p ∈ Poly2 : p(2) = 0}.

(b) Consider p(x) = x² + x − 6. Show that p ∈ V.

(c) Determine the coordinate vector of p with respect to your basis B, i.e. determine [p]B.

4. Find the coordinate representation of p(x) = 3x³ − 7x + 1 with respect to your basis in Exercise 2.3.6.8.

5. Consider the vector space W from Example 2.3.17,

W = {(w1, w2, w3) ∈ R3 : w1 + 2w2 + 3w3 = 0},


and the following bases for W :

B = {-a, b-}, C = {-u, v-}

where

a = (1, 0, −1/3), b = (0, 1, −2/3),
u = (1, 2, −5/3), v = (−4, 2, 0).

Consider the vector w = (−2, 4, −2) ∈ R³.
(a) Show that w ∈ W.

(b) Determine [w]B.

(c) Determine [w]C.

6. Let V be the vector space of solutions to the differential equation

y″ + y = 0. (2.4.6)

(a) Show that B = {-cos x, sin x-} is a basis for V.

(b) Let y ∈ V be defined as the unique solution to the differential equation in (2.4.6) satisfying

y(π/6) = 1, y′(π/6) = 0.

(Note that we can indeed define y uniquely in this way due to Theorem 2.3.20.) Compute [y]B.

(c) Let z(x) = cos(x − π/3).

i. Show that z ∈ V by checking that it solves the differential equation (2.4.6).

ii. Determine [z]B.

7. Let V be the vector space of solutions to the differential equation

(1 − x²)y″ − xy′ + 4y = 0, x ∈ (−1, 1). (2.4.7)

(a) Show that y1 and y2 are elements of V, where

y1(x) = 2x² − 1, y2(x) = x√(1 − x²).

(b) Show that B = {-y1, y2-} is a basis for V.

(c) Let y ∈ V be defined as the unique solution to the differential equation in (2.4.7) satisfying

y(1/2) = 1, y′(1/2) = 0.

(Note that we can indeed define y uniquely in this way due to Theorem 2.3.20.) Compute [y]B.


2.5 Change of basis

Figure 2.5.1: Video recording for Section 2.5.

2.5.1 Coordinate vectors are different in different bases

Suppose that B = {b1, b2} and C = {c1, c2} are two different bases for R², shown below:

Figure 2.5.2: Two different bases for R²: (a) the basis B = {b1, b2}; (b) the basis C = {c1, c2}.

Suppose we are given a vector w ∈ R². We would like to compute the coordinate vector of the same vector w with respect to the two different bases B and C.

For this particular w, from Figure 2.5.3, we see that in the basis B, we have

w = −3b1 + 2b2 ∴ [w]B = \begin{bmatrix} -3 \\ 2 \end{bmatrix}. (2.5.1)

On the other hand, in the basis C, we see from Figure 2.5.4 that

w = c1 − 3c2 ∴ [w]C = \begin{bmatrix} 1 \\ -3 \end{bmatrix}. (2.5.2)


Figure 2.5.3: w = −3b1 + 2b2. Figure 2.5.4: w = c1 − 3c2.

So, the same vector w has different coordinate vectors [w]B and [w]C with respect to the bases B and C!

2.5.2 Changing from one basis to another

Now, suppose we know the bases B and C, and we know the coordinate vector [w]B of w in the basis B,

[w]B = \begin{bmatrix} -3 \\ 2 \end{bmatrix},

that is, w = −3b1 + 2b2. How can we compute [w]C, the coordinate vector of w in the basis C?

The best way is to express each vector in the basis B as a linear combination of the basis vectors in C. In the next figure, the vectors b1 and b2 are displayed against the background of integral linear combinations of the basis C:

[Figure: b1 and b2 against the grid of integral linear combinations of c1 and c2.]

We read off that:

b1 = c1 + 3c2 (2.5.3)
b2 = 2c1 + 3c2 (2.5.4)

Therefore, we compute:

w = −3b1 + 2b2
= −3(c1 + 3c2) + 2(2c1 + 3c2)


= c1 − 3c2

From this we read off that

[w]C = \begin{bmatrix} 1 \\ -3 \end{bmatrix} (2.5.5)

which is the right answer, as we know from (2.5.2).
In fact, this calculation can be phrased in terms of matrices.

Definition 2.5.5 Let B = {b1, . . . , bn} and C = {c1, . . . , cn} be bases for a vector space V. The change-of-basis matrix from B to C is the n × n matrix PC←B whose columns are the coordinate vectors [b1]C, . . . , [bn]C:

PC←B := \begin{bmatrix} [b_1]_C & [b_2]_C & \cdots & [b_n]_C \end{bmatrix}.

Example 2.5.6 In our running example, we see from (2.5.3) and (2.5.4) that

[b1]C = \begin{bmatrix} 1 \\ 3 \end{bmatrix}, [b2]C = \begin{bmatrix} 2 \\ 3 \end{bmatrix}.

Hence the change-of-basis matrix from B to C is

PC←B = \begin{bmatrix} 1 & 2 \\ 3 & 3 \end{bmatrix}.

Before we move on, we need to recall something about matrix multiplication. Suppose you collect together m column vectors to form a matrix:

\begin{bmatrix} C_1 & C_2 & \cdots & C_m \end{bmatrix}

(For instance, our change-of-basis matrix PC←B was formed in this way.)

Then the product of this matrix with a column vector can be computed as follows:

\begin{bmatrix} C_1 & C_2 & \cdots & C_m \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix} = a_1 C_1 + a_2 C_2 + · · · + a_m C_m. (2.5.6)

Checkpoint 2.5.7 Prove the above formula!
Solution. We check the ith entry of the LHS of (2.5.6) using just the definition of matrix multiplication:

(LHS)_i = (C_1)_i a_1 + · · · + (C_m)_i a_m = (RHS)_i

and we're done!
We can now prove the following theorem.
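Formula (2.5.6) is easy to test numerically. A small sketch in Python with NumPy (the matrix and vector are arbitrary illustrative choices):

```python
import numpy as np

# A matrix whose columns are C1, C2, C3.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 3.0, 1.0]])
a = np.array([2.0, -1.0, 4.0])

lhs = A @ a                                        # matrix-vector product
rhs = a[0]*A[:, 0] + a[1]*A[:, 1] + a[2]*A[:, 2]   # a1*C1 + a2*C2 + a3*C3

assert np.allclose(lhs, rhs)   # formula (2.5.6)
```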

Theorem 2.5.8 Change of basis. Suppose that B = {b1, . . . , bn} and C = {c1, . . . , cn} are bases for a vector space V, and let PC←B be the change-of-basis matrix from B to C. Then for all vectors v in V,

[v]C = PC←B[v]B. (2.5.7)

Proof. Let v ∈ V. Expand it in the basis B:

v = a1b1 + a2b2 + · · · + anbn, i.e. [v]B = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}.

Then,

[v]C = [a1b1 + · · · + anbn]C
= a1[b1]C + · · · + an[bn]C (Lemma 2.4.13)
= \begin{bmatrix} [b_1]_C & [b_2]_C & \cdots & [b_n]_C \end{bmatrix} \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} (2.5.6)
= PC←B[v]B. �

Example 2.5.9 In our running example, the theorem says that for any vector v ∈ R²,

[v]C = \begin{bmatrix} 1 & 2 \\ 3 & 3 \end{bmatrix} [v]B.

In particular, this must hold for our vector w, whose coordinate vector in the basis B was:

[w]B = \begin{bmatrix} -3 \\ 2 \end{bmatrix}.

So in this case, the theorem is saying that

[w]C = \begin{bmatrix} 1 & 2 \\ 3 & 3 \end{bmatrix} \begin{bmatrix} -3 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ -3 \end{bmatrix}

which agrees with our previous calculation (2.5.5)! �
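The whole computation of this example fits in a few lines of Python with NumPy:

```python
import numpy as np

# Change-of-basis matrix from the running example:
# its columns are [b1]_C = (1, 3) and [b2]_C = (2, 3).
P = np.array([[1.0, 2.0],
              [3.0, 3.0]])
w_B = np.array([-3.0, 2.0])   # [w]_B

w_C = P @ w_B                 # Theorem 2.5.8: [w]_C = P_{C<-B} [w]_B
assert np.allclose(w_C, [1.0, -3.0])   # agrees with (2.5.2) and (2.5.5)
```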

2.5.3 Exercises

1. This is a continuation of Exercise 2.4.2. Consider the following two bases for Mat2,2:

B = { B1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, B2 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, B3 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, B4 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} }

C = { C1 = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}, C2 = \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}, C3 = \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}, C4 = \begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix} }

(a) Determine the change-of-basis matrices PC←B and PB←C.

(b) Determine [A]B and [A]C where

A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.

(c) Check that [A]C = PC←B[A]B and that [A]B = PB←C [A]C .


2. Compute the change-of-basis matrix PB←S from the standard basis

S = {1, cos x, sin x, cos 2x, sin 2x}

of Trig2 to the basis

B = {1, cos x, sin x, cos² x, sin² x}.

3. Figure 2.5.10 displays a basis B = {-b1, b2-} for R², a background of integral linear combinations of b1 and b2, and a certain vector w ∈ R². Similarly, Figure 2.5.11 displays another basis C = {-c1, c2-} for R², a background of integral linear combinations of c1 and c2, and the same vector w ∈ R².

Figure 2.5.10: The vector w against a background of integral linear combinations of the basis vectors from B.

Figure 2.5.11: The vector w against a background of integral linear combinations of the basis vectors from C.

(a) Determine [w]B, directly from Figure 2.5.10.

(b) Determine [w]C , directly from Figure 2.5.11.

(c) The following figure displays the B basis against a background of integral linear combinations of the C basis:


[Figure: b1 and b2 against the grid of integral linear combinations of c1 and c2.]

Determine the change-of-basis matrix PC←B. (You may assume that all coefficients are either integers or half-integers.)

(d) Multiply the matrix you computed in (c) with the column vector you computed in (a). That is, compute the product PC←B[w]B. Is your answer the same as what you obtained in (b)?

4. Consider the following three bases for R³:

A = {-(1, 0, 0), (0, 1, 0), (0, 0, 1)-}
B = {-(2, 1, 1), (1, 1, 1), (0, 2, 1)-}
C = {-(1, 2, 3), (0, 1, 0), (1, 0, 1)-}

Compute PC←B, PB←A, PC←A and verify the equation

PC←B PB←A = PC←A.


Chapter 3

Linear maps

3.1 Definitions and Examples

3.1.1 Definition of a linear map

Figure 3.1.1: Video recording for the intro to and definition of a linear map.

Recall that a function (or a map) f : X → Y from a set X to a set Y is simply a rule which assigns to each element of X an element f(x) of Y. We write x ↦ f(x) to indicate that an element x ∈ X maps to f(x) ∈ Y under the function f. See Figure 3.1.2. Two functions f, g : X → Y are equal if f(x) = g(x) for all x in X.

Figure 3.1.2: A function f : X → Y.

Definition 3.1.3 Let V and W be vector spaces. A linear map from V to W is a function T : V → W satisfying:

• T (v + v′) = T (v) + T (v′) for all vectors v,v′ ∈ V .

• T (kv) = kT (v) for all vectors v ∈ V and scalars k ∈ R.


Another name for a linear map is a linear transformation.

3.1.2 Examples of linear maps

Example 3.1.4 Matrices give rise to linear maps.

Figure 3.1.5: Video recording for Example 3.1.4.

Every n × m matrix A induces a linear map

TA : Colm → Coln
v ↦ Av.

That is, TA(v) := Av is the matrix product of A with the column vector v. The fact that TA is indeed a linear map follows from the linearity of matrix multiplication (Proposition B.0.4, parts 2 and 3).

Note that an n×m matrix gives a linear map from Colm to Coln!

For instance, consider

A = \begin{bmatrix} 1 & 2 \\ 1 & -2 \end{bmatrix}.

This gives rise to a linear map

T : Col2 → Col2
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} ↦ \begin{bmatrix} 1 & 2 \\ 1 & -2 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}

The following interactive GeoGebra app demonstrates this. Drag the red vector v around to see the effect on Av. You can also change the coefficients of the matrix A by dragging the sliders.

[Interactive applet: https://www.geogebra.org/material/iframe/id/vexfrzez]

Figure 3.1.6: Interactive GeoGebra applet displaying the linear map v 7→ Av.
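The two defining properties of a linear map are easy to check numerically for TA. A sketch in Python with NumPy, using the matrix A of this example (the test vectors and scalar are arbitrary choices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, -2.0]])

def T_A(v):
    # The linear map Col2 -> Col2 induced by the matrix A.
    return A @ v

v = np.array([1.0, 3.0])
w = np.array([-2.0, 5.0])
k = 7.0

assert np.allclose(T_A(v + w), T_A(v) + T_A(w))   # additivity
assert np.allclose(T_A(k * v), k * T_A(v))        # homogeneity
```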


Example 3.1.7 A non-linear map. The following interactive JSXGraph applet illustrates the ‘polar coordinates’ map

T : R² → R²
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} ↦ \begin{bmatrix} v_1 \cos(\pi v_2) \\ v_1 \sin(\pi v_2) \end{bmatrix}

Note that T is a nonlinear map, since, for most points v, v′ ∈ R², we have T(v + v′) ≠ T(v) + T(v′).

Figure 3.1.8: Polar coordinates as a nonlinear map.

Another way to see that T is nonlinear is that T does not send straight lines to straight lines. �
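A single counterexample suffices to show that T is not linear. A sketch in Python (the two test points are arbitrary choices):

```python
import math

def T(v):
    # The 'polar coordinates' map from Example 3.1.7.
    v1, v2 = v
    return (v1 * math.cos(math.pi * v2), v1 * math.sin(math.pi * v2))

v = (1.0, 0.25)
w = (1.0, 0.5)
lhs = T((v[0] + w[0], v[1] + w[1]))           # T(v + w)
rhs = (T(v)[0] + T(w)[0], T(v)[1] + T(w)[1])  # T(v) + T(w)

# Additivity fails at these points, so T is not linear.
assert abs(lhs[0] - rhs[0]) > 1e-6 or abs(lhs[1] - rhs[1]) > 1e-6
```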

Example 3.1.9 Identity map. Let V be a vector space. The function

idV : V → V
v ↦ v

is called the identity map on V. It is clearly a linear map, since

idV(v + w) = v + w = idV(v) + idV(w)

and

idV(kv) = kv = k idV(v).

Example 3.1.10 Projection as a linear map. The function

T : R² → R
(x, y) ↦ x

which projects vectors onto the x-axis is a linear map.
Let us check additivity algebraically:

T((x1, y1) + (x2, y2)) ?= T((x1, y1)) + T((x2, y2))

LHS = T((x1 + x2, y1 + y2)) = x1 + x2, RHS = x1 + x2.


Here is a graphical version of this proof:

[Figure: v, w and v + w, with their projections T(v), T(w) and T(v + w) = T(v) + T(w) on the x-axis.]

Checkpoint 3.1.11 Prove algebraically that we also have T(kv) = kT(v), so that T is a linear map.
Solution.

T(kv) = T(k(x, y))
= T((kx, ky))
= kx
= kT((x, y))
= kT(v)

Example 3.1.12 Rotation as a linear map. Fix an angle θ. The function

R : R² → R²
v ↦ rotation of v counterclockwise through angle θ

is a linear map, by a similar graphical argument as in Example 3.1.10.

[Figure: v and R(v), with the angle θ between them.]

Example 3.1.13 Cross product with a fixed vector as a linear map.

Figure 3.1.14: Video recording for Example 3.1.13.


Fix a vector w ∈ R³. The function

C : R³ → R³
v ↦ w × v

is a linear map because of the properties of the cross-product,

w × (v1 + v2) = w × v1 + w × v2
w × (kv) = k(w × v).
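These two properties can be confirmed numerically; a sketch in Python with NumPy (the fixed vector and test vectors are arbitrary choices):

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])   # the fixed vector

def C(v):
    # C(v) = w x v: cross product with the fixed vector w.
    return np.cross(w, v)

v1 = np.array([0.0, 1.0, -1.0])
v2 = np.array([4.0, 0.0, 2.0])
k = 3.0

assert np.allclose(C(v1 + v2), C(v1) + C(v2))   # additivity
assert np.allclose(C(k * v1), k * C(v1))        # homogeneity
```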

Example 3.1.15 Dot product with a fixed vector as a linear map. Fix a vector u ∈ R³. The function

D : R³ → R
v ↦ u · v

(here · is the dot product of vectors, not scalar multiplication!) is a linear map, because of the properties

u · (v1 + v2) = u · v1 + u · v2
u · (kv) = k(u · v).

We will soon see that all linear maps R³ → R (indeed all linear maps Rⁿ → R) are of this form.

Example 3.1.16 Differentiation as a linear map.

Figure 3.1.17: Video recording for Example 3.1.16.

The operation ‘take the derivative’ can be interpreted as a linear map

D : Polyn → Polyn−1
p ↦ p′.

For example, D(2x³ − 6x + 2) = 6x² − 6. �

Checkpoint 3.1.18 (a) Why is D a map from Polyn to Polyn−1? (b) Check that D is linear.
Solution.

1. As you know from calculus, taking the derivative of a polynomial decreases every power of x by 1. So if p is in Polyn, then p has degree at most n, and therefore its image under D has degree at most n − 1. Thus p′ is in Polyn−1.

2. Let p, q be in Polyn. Then:

D(p + q) = (p + q)′
= p′ + q′ (rule of differentiation)
= D(p) + D(q)

Similarly,

D(kp) = (kp)′
= kp′ (rule of differentiation)
= kD(p)
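The same checks can be run symbolically; a sketch in Python with SymPy, using the polynomial from the example (the second polynomial is an arbitrary choice):

```python
import sympy as sp

x = sp.symbols('x')

def D(p):
    # The 'take the derivative' map on polynomials.
    return sp.diff(p, x)

p = 2*x**3 - 6*x + 2
q = x**2 + 5*x          # an arbitrary second polynomial

assert sp.expand(D(p)) == 6*x**2 - 6              # the example from the text
assert sp.expand(D(p + q) - (D(p) + D(q))) == 0   # additivity
assert sp.expand(D(7*p) - 7*D(p)) == 0            # homogeneity
```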

Example 3.1.19 Antiderivative as a linear map.

Figure 3.1.20: Video recording for Example 3.1.19.

The operation ‘find the unique antiderivative with zero constant term’ can be interpreted as a linear map

A : Polyn → Polyn+1
p ↦ ∫₀ˣ p(t) dt

For example, A(2x³ − 6x + 2) = x⁴/2 − 3x² + 2x. �

Checkpoint 3.1.21 (a) Why is A a map from Polyn to Polyn+1? (b) Check that A is linear.
Solution.

1. You know from calculus that the antiderivative of a polynomial p always has degree one greater than p. Hence A maps Polyn to Polyn+1.

2. Let p, q be in Polyn. Using the usual properties of the integral, we compute

A(p + q) = ∫₀ˣ (p(t) + q(t)) dt = ∫₀ˣ p(t) dt + ∫₀ˣ q(t) dt = A(p) + A(q).

Similarly,

A(kp) = ∫₀ˣ kp(t) dt = k ∫₀ˣ p(t) dt = kA(p).


Example 3.1.22 Shift map. Define the ‘shift forward by 1’ map

S : Polyn → Polyn
p ↦ S(p)

by S(p)(x) = p(x − 1).
Consider the case n = 3. In terms of the standard basis

p0(x) = 1, p1(x) = x, p2(x) = x², p3(x) = x³

of Poly3, we have:

S(p0) = p0

S(p1) = p1 − p0

S(p2) = p2 − 2p1 + p0

S(p3) = p3 − 3p2 + 3p1 − p0

Checkpoint 3.1.23 Check that S is a linear map.
Solution. Let

p = \sum_{j=0}^{n} a_j x^j, q = \sum_{j=0}^{n} b_j x^j.

Then

S(kp) = S\left(\sum_{j=0}^{n} k a_j x^j\right) = \sum_{j=0}^{n} k a_j (x − 1)^j = k \sum_{j=0}^{n} a_j (x − 1)^j = kS(p)

S(p + q) = S\left(\sum_{j=0}^{n} (a_j + b_j) x^j\right) = \sum_{j=0}^{n} (a_j + b_j)(x − 1)^j = \sum_{j=0}^{n} a_j (x − 1)^j + \sum_{j=0}^{n} b_j (x − 1)^j = S(p) + S(q).

Checkpoint 3.1.24 Check this.
Solution. S(p0) = p0 is trivial. For the others:

S(p1) = x − 1 = p1 − p0
S(p2) = (x − 1)² = x² − 2x + 1 = p2 − 2p1 + p0
S(p3) = (x − 1)³ = x³ − 3x² + 3x − 1 = p3 − 3p2 + 3p1 − p0.
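These expansions can be verified symbolically; a sketch in Python with SymPy:

```python
import sympy as sp

x = sp.symbols('x')

def S(p):
    # Shift forward by 1: S(p)(x) = p(x - 1).
    return sp.expand(p.subs(x, x - 1))

assert S(sp.Integer(1)) == 1                  # S(p0) = p0
assert S(x) == x - 1                          # S(p1) = p1 - p0
assert S(x**2) == x**2 - 2*x + 1              # S(p2) = p2 - 2p1 + p0
assert S(x**3) == x**3 - 3*x**2 + 3*x - 1     # S(p3) = p3 - 3p2 + 3p1 - p0
```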

Example 3.1.25 Gradient as a linear map. Recall the vector space Polyn of polynomials in x and y (Example 1.6.20, dimension computed in Example 2.3.7) and the vector space Vectn(R²) of polynomial vector fields on R² (Example 1.6.21, dimension computed in Example 2.3.8).

The operation ‘take the gradient’ can be thought of as a linear map

∇ : Polyn[x, y] → Vectn−1(R²)
f ↦ ∇f

This is a linear map because ∇(f + g) = ∇f + ∇g and ∇(kf) = k∇f. For example,

∇(x + xy) = (1 + y, x).
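A quick symbolic check of the example and of linearity, in Python with SymPy (the second polynomial and scalar are arbitrary choices):

```python
import sympy as sp

x, y = sp.symbols('x y')

def grad(f):
    # The gradient of a polynomial in x and y, as the pair of partial derivatives.
    return (sp.diff(f, x), sp.diff(f, y))

f = x + x*y
assert grad(f) == (1 + y, x)     # the example from the text

g = x**2 - 3*y                   # an arbitrary second polynomial
k = 5
assert grad(f + g) == (grad(f)[0] + grad(g)[0], grad(f)[1] + grad(g)[1])
assert grad(k*f) == (k*grad(f)[0], k*grad(f)[1])
```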

Example 3.1.26 Double integral as a linear map. Let D ⊆ R² be a region in the plane. Integrating polynomial functions over D can be thought of as a linear map

I : Polyn[x, y] → R
f ↦ ∫∫_D f dA

This is a linear map because

∫∫_D (f + g) dA = ∫∫_D f dA + ∫∫_D g dA
∫∫_D kf dA = k ∫∫_D f dA

For example, let D = {(x, y) : x² + y² ≤ 1}, and let f = x². Then

I(f) = ∫∫_D x² dA = π/4

as the reader will verify. �

Example 3.1.27 Initial conditions for differential equations as a linear map. Let V be the vector space of solutions to a 2nd order linear homogeneous ordinary differential equation on an interval I:

y″ + a1(x)y′ + a0(x)y = 0, x ∈ I.

Fix a point x0 ∈ I. Then we get a linear map

Tx0 : Col2 → V
\begin{bmatrix} a \\ b \end{bmatrix} ↦ unique y ∈ V such that y(x0) = a, y′(x0) = b

Now is a good time to review Theorem 2.3.20 and Example 2.3.21.

This is illustrated in Figure 3.1.28 for the case of the differential equation

y″ + xy = 0.

Drag the blue vector v to see the effect on its image Tx0(v) = y in V. You can also change the value of x0.

[Interactive applet: https://www.geogebra.org/material/iframe/id/kxcmtv5t]

Figure 3.1.28: GeoGebra App: Understanding the linear map T .


3.1.3 Some results about linear maps

Lemma 3.1.29 Suppose T : V → W is a linear map. Then

1. T(0V) = 0W.

2. T(−v) = −T(v) for all vectors v ∈ V.

Proof.
(i) T(0V) = T(0 · 0V) (R8 applied to v = 0V ∈ V)
= 0 · T(0V) (T is linear)
= 0W (R8 applied to v = T(0V) ∈ W)

(ii) T(−v) = T((−1)v) (defn of −v in V)
= (−1)T(v) (T is linear)
= −T(v) (defn of −T(v) in W)

The next result is very important. It tells us that if we know how a linear map T acts on a basis, then we know how it acts on the whole vector space (this is the ‘uniqueness’ part). Moreover, we are free to write down any willy-nilly formula for what T does on the basis vectors, and we are guaranteed that this will always extend to a linear map defined on the whole vector space (this is the ‘existence’ part).

Proposition 3.1.30 Sufficient to Define a Linear Map on a Basis. Suppose B = {-e1, . . . , em-} is a basis for V and w1, . . . , wm are vectors in W. Then there exists a unique linear map T : V → W such that

T(ei) = wi, i = 1 . . . m.

Proof. Proof of existence. To define a linear map T, we must define T(v) for each vector v. We can write v in terms of its coordinate vector [v]B with respect to the basis B as

v = [v]B,1e1 + · · ·+ [v]B,mem (3.1.1)

where [v]B,i is the entry at row i of the coordinate vector [v]B. We define

T(v) := [v]B,1 w1 + [v]B,2 w2 + · · · + [v]B,m wm. (3.1.2)

We clearly have T(ei) = wi. To complete the proof of existence, we must show that T is linear:

T(v + v′) = [v + v′]B,1 w1 + · · · + [v + v′]B,m wm
= ([v]B,1 + [v′]B,1) w1 + · · · + ([v]B,m + [v′]B,m) wm ([v + v′]B = [v]B + [v′]B)
= [v]B,1 w1 + · · · + [v]B,m wm + [v′]B,1 w1 + · · · + [v′]B,m wm
= T(v) + T(v′).

Similarly, one can check that T(kv) = kT(v), which completes the proof of existence.

Proof of uniqueness. Suppose that S, T : V →W are linear maps with

S(ei) = wi, and T (ei) = wi, i = 1 . . .m. (3.1.3)


Then,

S(v) = S([v]B,1 e1 + · · · + [v]B,m em)
= [v]B,1 S(e1) + · · · + [v]B,m S(em) (S is linear)
= [v]B,1 w1 + · · · + [v]B,m wm (since S(ei) = wi)
= [v]B,1 T(e1) + · · · + [v]B,m T(em) (since T(ei) = wi)
= T([v]B,1 e1 + · · · + [v]B,m em) (T is linear)
= T(v).

Hence S = T; in other words, the linear map satisfying (3.1.3) is unique. �

Example 3.1.31 As an application of Proposition 3.1.30, we can define a linear map

T : Col2 → Fun(R)

simply by defining its action on the standard basis of Col2. For instance, we may set

e1 ↦ f1
e2 ↦ f2

The point is that we are free to send e1 and e2 to any functions f1 and f2 we like, and we are assured that this will give a well-defined linear map T : Col2 → Fun(R). For instance, we might set f1(x) = sin x and f2(x) = |x|. Then the general formula for T is

T( [ a ] )(x) = a sin x + b|x|
   [ b ]
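The freedom described in Example 3.1.31 is easy to experiment with. Here is a minimal Python sketch (an illustration, not part of the text; the names f1, f2 and T are ours), representing vectors in Col2 as pairs (a, b) and elements of Fun(R) as Python callables:

```python
import math

# Images of the standard basis vectors e1, e2 under T, as in Example 3.1.31:
f1 = math.sin   # T(e1) = sin x
f2 = abs        # T(e2) = |x|

def T(a, b):
    """The unique linear extension: (T((a, b)))(x) = a sin x + b|x|."""
    return lambda x: a * f1(x) + b * f2(x)

# Spot-check additivity at a sample point: T(v + v') = T(v) + T(v').
x = 1.7
assert abs(T(2 + 5, -3 + 1)(x) - (T(2, -3)(x) + T(5, 1)(x))) < 1e-12
```

Any other choice of f1 and f2 would work equally well; linearity is automatic from formula (3.1.2).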

Example 3.1.32 Rotation map on standard basis. Let us compute the action of the ‘counterclockwise rotation by θ’ map R from Example 3.1.12 with respect to the standard basis e1, e2 of R2.

[Figure: e1 and e2 together with their images R(e1) and R(e2) under rotation by θ.]

From the figure, we have:

R(e1) = (cos θ, sin θ),  R(e2) = (− sin θ, cos θ)

so that

R(e1) = cos θ e1 + sin θ e2,  R(e2) = − sin θ e1 + cos θ e2.

Now that we know the action of R on the standard basis vectors, we can


compute its action on an arbitrary vector (x, y) ∈ R2:

R((x, y)) = R(x e1 + y e2)
          = x R(e1) + y R(e2)
          = x(cos θ, sin θ) + y(− sin θ, cos θ)
          = (x cos θ − y sin θ, x sin θ + y cos θ).
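The same basis-first computation can be carried out numerically. A short Python sketch (illustrative, not part of the text), building R((x, y)) purely from the basis images R(e1) and R(e2):

```python
import math

def R(theta, v):
    """Rotate v = (x, y) counterclockwise by theta, using only the basis
    images R(e1) = (cos t, sin t) and R(e2) = (-sin t, cos t)."""
    x, y = v
    Re1 = (math.cos(theta), math.sin(theta))
    Re2 = (-math.sin(theta), math.cos(theta))
    # R((x, y)) = x R(e1) + y R(e2)
    return (x * Re1[0] + y * Re2[0], x * Re1[1] + y * Re2[1])

# Rotating (1, 0) by 90 degrees should give (0, 1):
rx, ry = R(math.pi / 2, (1, 0))
assert abs(rx - 0) < 1e-12 and abs(ry - 1) < 1e-12
```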

3.1.4 Exercises

1. Let V be a vector space, and let a ≠ 0 be a fixed vector. Define the map T as follows:

T : V → V
v ↦ a + v

(a) Is T a linear map? (Yes or no)

(b) Prove your assertion from (a).

2. Consider the map T : R3 → R3 given by T((x, y, z)) = (z, x, y).

(a) Is T a linear map? (Yes or no)

(b) Prove your assertion from (a).

3. Define the ‘multiplication by x2’ map

M : Polyn → Polyn+2
p ↦ M(p)

where M(p)(x) = x2 p(x).

(a) Why does M map from Polyn to Polyn+2?

(b) Prove that M is linear.

(c) Compute the action of M in the standard basis for Poly3, as in Example 3.1.22.

4. Define the map

C : Polyn → Trign

by the formula

C(p)(x) := p(cos x).

(a) Compute C(p), where p(x) = 3x2 − x + 1. Express your answer in terms of the standard basis for Trig2,

T0(x) = 1, T1(x) = cos x, T2(x) = sin x, T3(x) = cos(2x), T4(x) = sin(2x).

(b) Is C a linear map?

(c) Justify that for every polynomial p ∈ Polyn, the function C(p) lies in Trign. (This is implicitly assumed above.)

(d) Compute C(p0), C(p1), C(p2), C(p3) where

p0(x) = 1, p1(x) = x, p2(x) = x2, p3(x) = x3


are the standard basis vectors for Poly3. Express your answers in terms of the standard basis of Trign.

5. Determine the action of the gradient linear map

∇ : Poly2[x, y] → Vect1(R2)

from Example 3.1.25 in terms of the standard bases

{q1, q2, q3, q4, q5, q6},  q1 = 1, q2 = x, q3 = y, q4 = x2, q5 = xy, q6 = y2

of Poly2[x, y] and

{V1, V2, V3, V4, V5, V6},  V1 = (1, 0), V2 = (x, 0), V3 = (y, 0), V4 = (0, 1), V5 = (0, x), V6 = (0, y)

of Vect1(R2), respectively.

6. Consider the following function:

T : Poly2 → R2

p ↦ [ p(0) ]
    [ p(1) ]

(a) Is T a linear map? (Yes / No).

(b) Prove your assertion from (a).

7. Let V be the vector space of solutions to the differential equation

y(n) + an−1(x)y(n−1) + · · · + a1(x)y′ + a0(x)y = 0.

Consider the ‘evaluate at x = 1’ map

T : V → R
y ↦ y(1)

Is T a linear map? Prove your assertion.

8. Define the ‘integrate over the interval [−1, 1]’ map

I : Polyn → R
p ↦ ∫_{−1}^{1} p(x) dx

(a) Prove that I is linear.

(b) Compute the action of I with respect to the standard basis p0, . . . , p3 for Poly3.

(c) Compute the action of I with respect to the basis q0, . . . , q3 for Poly3 from Example 2.2.7.

9. Compute the action of the differentiation map D : Poly4 → Poly3 from Example 3.1.16 with respect to the standard bases of these two vector spaces.


10. Consider the cross-product linear map C : R3 → R3 from Example 3.1.13 in the case w = (1, 2, 3). Compute the action of C with respect to the standard basis of R3.

11. Prove that if V is a finite-dimensional vector space then the set of all linear maps from V to R, L(V, R), is itself a vector space. Since we already know that the set of all real-valued functions from V to R forms a vector space, you will only have to show that L(V, R) contains the function 0, and is closed under both addition and scalar multiplication.

12. If V is a finite dimensional vector space, find a basis for L(V,R).

3.2 Composition of linear maps

Definition 3.2.1 If S : U → V and T : V → W are linear maps, then the composition of T with S is the map T ◦ S : U → W defined by

(T ◦ S)(u) := T(S(u))

where u is in U. ♦

See Figure 3.2.2.

[Figure: u ∈ U is sent to S(u) ∈ V, which is sent to T(S(u)) ∈ W; the composite arrow from U to W is T ◦ S.]

Figure 3.2.2: Composition of linear maps.

The standard convention in mathematics is to write the evaluation of a function from right to left, e.g. f(x). That is, you start with the right-hand symbol x, then you apply f. So the most natural way to draw these pictures is from right to left!

Example 3.2.3 Let S : R3 → Poly2 and T : Poly2 → Poly4 be the linear maps defined by

S((a, b, c)) := ax2 + (a − b)x + c,  T(p)(x) = x2 p(x)

Then T ◦ S can be computed as follows:

(T ◦ S)((a, b, c)) = T(S((a, b, c)))
                   = T(ax2 + (a − b)x + c)
                   = x2(ax2 + (a − b)x + c)
                   = ax4 + (a − b)x3 + cx2.
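The composite in Example 3.2.3 can be checked with a few lines of Python (an illustration only), representing a polynomial c0 + c1 x + · · · by its coefficient list [c0, c1, . . .]:

```python
def S(a, b, c):
    # S((a, b, c)) = ax^2 + (a - b)x + c, as the coefficient list [c, a - b, a]
    return [c, a - b, a]

def T(p):
    # T(p)(x) = x^2 p(x): shift every coefficient up by two degrees
    return [0, 0] + p

# (T o S)((a, b, c)) should be ax^4 + (a - b)x^3 + cx^2:
a, b, c = 2, 5, -1
assert T(S(a, b, c)) == [0, 0, c, a - b, a]
```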


Proposition 3.2.4 If S : U → V and T : V → W are linear maps, then T ◦ S : U → W is also a linear map.

Proof. Let u1, u2 ∈ U. Then:

(T ◦ S)(u1 + u2) = T(S(u1 + u2))    (defn of T ◦ S)
                 = T(S(u1) + S(u2))    (S is linear)
                 = T(S(u1)) + T(S(u2))    (T is linear)
                 = (T ◦ S)(u1) + (T ◦ S)(u2)    (defn of T ◦ S)

Similarly,

(T ◦ S)(ku) = T(S(ku))    (defn of T ◦ S)
            = T(kS(u))    (S is linear)
            = kT(S(u))    (T is linear)
            = k(T ◦ S)(u)    (defn of T ◦ S)

Example 3.2.5 Consider the antiderivative (A) and derivative (D) linear maps

A : Polyn → Polyn+1
D : Polyn+1 → Polyn.

Is D ◦ A = idPolyn?

Solution. We compute the action of D ◦ A on the basis x^k, k = 0 . . . n of Polyn:

A : x^k ↦ x^(k+1)/(k + 1),  then  D : x^(k+1)/(k + 1) ↦ ((k + 1)/(k + 1)) x^k = x^k.

Hence for k = 0 . . . n,

(D ◦ A)(x^k) = x^k = idPolyn(x^k).

Since D ◦ A and idPolyn agree on a basis for Polyn, they agree on all vectors p ∈ Polyn by Proposition 3.1.30. Hence D ◦ A = idPolyn.

In fact, the statement that D ◦ A = idPolyn is precisely Part I of the Fundamental Theorem of Calculus, applied to the special case of polynomials!

Checkpoint 3.2.6 Is A ◦ D = idPolyn+1? If it is, prove it. If it is not, give an explicit counterexample.

Solution. The statement is not true! For example, let p(x) = x + 1. Then

(A ◦ D)(x + 1) = A(1) = x ≠ x + 1.
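Both the identity D ◦ A = idPolyn and the failure of A ◦ D can be checked on coefficient lists. A Python sketch (an illustration; exact arithmetic via fractions, with A chosen to have zero constant of integration, as in Example 3.2.5):

```python
from fractions import Fraction

def A(p):
    """Antiderivative (constant term 0) on coefficient lists [c0, c1, ...]."""
    return [Fraction(0)] + [Fraction(ck) / (k + 1) for k, ck in enumerate(p)]

def D(p):
    """Derivative on coefficient lists."""
    return [k * ck for k, ck in enumerate(p)][1:] or [Fraction(0)]

p = [Fraction(3), Fraction(-1), Fraction(4)]      # 3 - x + 4x^2
assert D(A(p)) == p                               # D o A = id on Poly_n

q = [Fraction(1), Fraction(1)]                    # x + 1
assert A(D(q)) == [0, 1]                          # A o D gives x, not x + 1
```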


Exercises

1. Let Rθ be the ‘rotation by θ’ map from Example 3.1.32,

Rθ : Col2 → Col2

[ v1 ] ↦ [ cos θ  −sin θ ] [ v1 ]
[ v2 ]   [ sin θ   cos θ ] [ v2 ]

Check algebraically that Rφ ◦ Rθ = Rφ+θ by computing the action of the linear maps on both sides of this equation on an arbitrary vector v ∈ Col2.

2. Let M : Poly3 → Poly4 be the ‘multiplication by x’ map, M(p)(x) = xp(x). Let S : Poly4 → Poly4 be the map S(p)(x) = p(x − 1). Similarly let T : Poly3 → Poly3 be the map T(p)(x) = p(x − 1). Compute S ◦ M and M ◦ T. Are they equal?

3.3 Isomorphisms of vector spaces

Suppose you have two sets,

A = {bird, eye, cross}  and  B = { ,  ,  }.

The elements of A and B are not the same, so A is not equal to B. But this is unsatisfactory — clearly the elements of A are just English versions of the Chinese symbols in B. How can we make this mathematically precise?

We could define two maps, say

S : A → B
bird ↦
eye ↦
cross ↦

and

T : B → A
↦ bird
↦ eye
↦ cross.

Then we observe that

T ◦ S = idA  and  S ◦ T = idB.    (3.3.1)

A pair of maps S : A → B and T : B → A satisfying (3.3.1) is called an isomorphism of sets between A and B. If you like, you can rename T as S−1, since S−1 ◦ S = idA and S ◦ S−1 = idB. (Calling T by the name S−1 from the beginning would have been presumptive of me. I needed to first define it, and then check that it satisfied (3.3.1). Only then did I have the right to call it S−1!)

Perhaps you are somewhat of a penny-pincher. You see the need for the English-to-Chinese map S, but not the need for a Chinese-to-English map T. After all, you say, since no two different English symbols in A get mapped to the same Chinese symbol in B (‘S is one-to-one’) and every Chinese symbol


y ∈ B is equal to S(x) for some x ∈ A (‘S is onto’), we have no need for T. It is an extravagance!

To this I respond: you are right, but is it not useful to have the explicit Chinese-to-English map T? In bookshops, cross-language dictionaries like this most often come bundled as a pair, in a single volume. After all, if one needs to look up the English word for a given Chinese symbol, it is a nuisance to have to traverse through the entire English-to-Chinese dictionary, trying to find the English word which translates to it!

This motivates the following definition.

Definition 3.3.1 We say that a linear map T : V → W is an isomorphism if there exists a linear map T−1 : W → V such that

T−1 ◦ T = idV  and  T ◦ T−1 = idW.    (3.3.2)

Lemma 3.3.2 Uniqueness of Inverses. If T : V → W is a linear map, and S, S′ : W → V satisfy

S ◦ T = idV,  T ◦ S = idW
S′ ◦ T = idV,  T ◦ S′ = idW

then S = S′.

This lemma justifies us talking about "the inverse" (instead of "an inverse") of a linear map. So it makes sense for us to use the notation T−1, which reads as "the" inverse of T.

Proof. To show that S = S′, we must show that for all w ∈ W, S(w) = S′(w). Indeed:

S(w) = S(idW(w))    (Defn of idW)
     = S((T ◦ S′)(w))    (T ◦ S′ = idW)
     = S(T(S′(w)))    (Defn of T ◦ S′)
     = (S ◦ T)(S′(w))    (Defn of S ◦ T)
     = idV(S′(w))    (S ◦ T = idV)
     = S′(w)    (Defn of idV).

Definition 3.3.3 We say that two vector spaces V and W are isomorphic if there exists an isomorphism between them. ♦

Example 3.3.4 Show that Rn is isomorphic to Polyn−1.

Solution. We define the following linear maps:

T : Rn → Polyn−1
(a1, a2, . . . , an) ↦ a1 + a2x + · · · + anxn−1

T−1 : Polyn−1 → Rn
a1 + a2x + · · · + anxn−1 ↦ (a1, a2, . . . , an)

We clearly have T−1 ◦ T = idRn and T ◦ T−1 = idPolyn−1. □


Checkpoint 3.3.5 Check that these maps are linear.

We will now show that up to isomorphism, there is only one vector space of each dimension!

Theorem 3.3.6 Two finite-dimensional vector spaces V and W are isomorphic if and only if they have the same dimension.

Proof. ⇒. Suppose V and W are isomorphic, via a pair of linear maps S : V ⇄ W : T. Let B = {e1, . . . , em} be a basis for V. Then I claim that C = {S(e1), . . . , S(em)} is a basis for W. Indeed, the list of vectors C is linearly independent, since if

a1S(e1) + a2S(e2) + · · · + amS(em) = 0W,

then applying T to both sides we obtain

T(a1S(e1) + a2S(e2) + · · · + amS(em)) = T(0W)
∴ a1T(S(e1)) + a2T(S(e2)) + · · · + amT(S(em)) = 0V    (T is linear)
∴ a1e1 + a2e2 + · · · + amem = 0V    (T ◦ S = idV)

which implies that a1 = a2 = · · · = am = 0, since B is linearly independent. Moreover, the list of vectors C spans W, for if w ∈ W, then applying T, we can write

T(w) = a1e1 + a2e2 + · · · + amem

for some scalars ai since B spans V. But then

w = S(T(w))    (since S ◦ T = idW)
  = S(a1e1 + a2e2 + · · · + amem)
  = a1S(e1) + a2S(e2) + · · · + amS(em)    (S is linear)

so that C spans W. Hence C is a basis for W, so Dim V = number of vectors in B = m, while Dim W = number of vectors in C = m.

⇐. Suppose Dim V = Dim W. Let e1, . . . , em be a basis for V, and let f1, . . . , fm be a basis for W. (We know that the number of basis vectors is the same since Dim V = Dim W.)

To define linear maps

S : V ⇄ W : T

it is sufficient, by Proposition 3.1.30 (Sufficient to Define a Linear Map on a Basis), to define the action of S and T on the basis vectors. We set:

S : ei ↦ fi
T : fi ↦ ei

Clearly we have T ◦ S = idV and S ◦ T = idW. □

Example 3.3.7 Show that Matn,m is isomorphic to Rmn.

Solution. We simply observe that by Example 2.3.13, Dim Matn,m = mn, while from Example 2.3.4, Dim Rmn is also equal to mn. The two spaces are therefore isomorphic by Theorem 3.3.6. □
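An explicit isomorphism here is just ‘read the entries row by row’. A Python sketch (the names flatten and unflatten are ours, for illustration):

```python
def flatten(M):
    """Mat_{n,m} -> R^{mn}: list the entries row by row."""
    return tuple(x for row in M for x in row)

def unflatten(v, n, m):
    """R^{mn} -> Mat_{n,m}: the inverse map."""
    return [list(v[i * m:(i + 1) * m]) for i in range(n)]

M = [[1, 2, 3],
     [4, 5, 6]]
assert unflatten(flatten(M), 2, 3) == M             # unflatten o flatten = id
assert flatten(unflatten((7, 8, 9, 10), 2, 2)) == (7, 8, 9, 10)
```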

There is one very important isomorphism which we will use over and over. Let V be a vector space with basis B = {b1, . . . , bm}. Consider the map

[·]B : V → Colm
v ↦ [v]B


which sends a vector v ∈ V to its corresponding coordinate vector [v]B ∈ Colm. Lemma 2.4.13 says precisely that [·]B is a linear map. We will now describe its inverse.

Definition 3.3.8 Let V be an m-dimensional vector space with basis B = {e1, . . . , em}. Let c ∈ Colm be an m-dimensional column vector. Then the vector in V corresponding to c with respect to the basis B is

vecV,B(c) := c1e1 + c2e2 + · · · + cmem.

Example 3.3.9 The polynomials B = {p1, p2, p3} where

p1 := 1 + x,  p2 := 1 + x + x2,  p3 := 1 − x2

are a basis of Poly2 (check this). Then, for instance,

vecPoly2,B((2, −3, 3)) = 2(1 + x) − 3(1 + x + x2) + 3(1 − x2)
                       = 2 − x − 6x2 ∈ Poly2.
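The computation in Example 3.3.9 is just a linear combination of coefficient lists, so it is easy to script. A Python sketch (illustrative; a polynomial in Poly2 is stored as [c0, c1, c2]):

```python
# The basis B of Example 3.3.9, as coefficient lists [c0, c1, c2]:
p1 = [1, 1, 0]     # 1 + x
p2 = [1, 1, 1]     # 1 + x + x^2
p3 = [1, 0, -1]    # 1 - x^2

def vec(c, basis):
    """vec_{V,B}(c) = c1 e1 + ... + cm em, on coefficient lists."""
    deg = len(basis[0])
    return [sum(ci * ei[k] for ci, ei in zip(c, basis)) for k in range(deg)]

# vec_{Poly2,B}((2, -3, 3)) should be 2 - x - 6x^2:
assert vec([2, -3, 3], [p1, p2, p3]) == [2, -1, -6]
```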

Checkpoint 3.3.10 Show that:

1. vecV,B(c + c′) = vecV,B(c) + vecV,B(c′)

2. vecV,B(kc) = k vecV,B(c).

This means that vecV,B : Colm → V is a linear map.

Solution.

1. Writing c = (c1, . . . , cm) and c′ = (c′1, . . . , c′m),

vecV,B(c + c′) = vecV,B((c1 + c′1, . . . , cm + c′m))
               = (c1 + c′1)e1 + · · · + (cm + c′m)em
               = (c1e1 + · · · + cmem) + (c′1e1 + · · · + c′mem)
               = vecV,B(c) + vecV,B(c′)

2. Similarly,

vecV,B(kc) = vecV,B((kc1, . . . , kcm))
           = kc1e1 + · · · + kcmem


           = k(c1e1 + · · · + cmem)
           = k vecV,B(c)

Theorem 3.3.11 Let V be a vector space with basis B = {e1, . . . , em}. The map

[·]B : V → Colm
v ↦ [v]B

is an isomorphism of vector spaces, with inverse

Colm → V
c ↦ vecV,B(c).

Proof. Given v ∈ V, expand it in the basis B:

v = a1e1 + · · · + amem.

Then

(vecV,B ◦ [·]B)(v) = vecV,B([v]B)
                   = vecV,B((a1, . . . , am))
                   = a1e1 + · · · + amem
                   = v

so that vecV,B ◦ [·]B = idV. Conversely, given c = (c1, . . . , cm) ∈ Colm, we have

([·]B ◦ vecV,B)(c) = [vecV,B(c)]B
                   = [c1e1 + · · · + cmem]B
                   = (c1, . . . , cm)
                   = c

where the second-last step uses the definition of the coordinate vector of v = c1e1 + · · · + cmem. Hence [·]B ◦ vecV,B = idColm. □

The result above is very important in linear algebra. It says that, once we have chosen a basis for an abstract finite-dimensional vector space V, we can treat the elements of V as if they were column vectors!
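Computing [v]B in practice means solving a linear system: the matrix whose columns are the basis vectors, applied to [v]B, must give back v. A Python sketch (illustrative; exact arithmetic, using the basis of Example 3.3.9):

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A square and invertible)."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# Columns: the basis {1 + x, 1 + x + x^2, 1 - x^2} of Poly2 (Example 3.3.9).
Bmat = [[1, 1, 1],
        [1, 1, 0],
        [0, 1, -1]]
# Coordinates of v = 2 - x - 6x^2: solve Bmat * [v]_B = (2, -1, -6).
assert solve(Bmat, [2, -1, -6]) == [2, -3, 3]
```

This recovers exactly the coordinate vector (2, −3, 3) used in Example 3.3.9.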


Exercises

1. Are the following vector spaces isomorphic?

V = { v ∈ Col4 : Av = 0 },  where A = [  1  2  0  −1 ]
                                      [ −1  1  1   0 ]

W = { p ∈ Poly2 : ∫_{0}^{2} p(x) dx = 0 }.

If they are, construct an explicit isomorphism between them. If not, prove that they are not isomorphic.

2. Are the following vector spaces isomorphic?

V = { v ∈ R3 : v × (1, 2, 3) = 0 }
W = { M ∈ Mat2,2 : MT = −M }.

If they are, construct an explicit isomorphism between them. If not, prove that they are not isomorphic.

3. Are the following vector spaces

V = { v ∈ R4 : (1, −1, 2, 1) · v = 0 }  and  Poly1[x, y]

isomorphic?

If they are, construct an explicit isomorphism between them. If not, prove that they are not isomorphic.

4. Are the following vector spaces isomorphic?

V = { p ∈ Poly3[x, y] : ∫∫_D p dA = 0 },  where D = { (x, y) ∈ R2 : x2 + y2 ≤ 1 },

and Vect2(R2).

3.4 Linear maps and matrices

Definition 3.4.1 Let T : V → W be a linear map from a vector space V to a vector space W. Let B = {b1, . . . , bm} be a basis for V and C = {c1, . . . , cn} be a basis for W. The matrix of T with respect to the bases B and C is defined as the n × m matrix whose columns are the coordinate vectors of T(bi) with respect to the basis C:

[T]C←B := [ [T(b1)]C  [T(b2)]C  . . .  [T(bm)]C ]

Do you understand why [T]C←B is an n × m matrix?


Example 3.4.2 Matrix of a Linear Map. Let

T : Poly2 → Poly3

be defined by

T(p)(x) := xp(x).

Let

B = {b1 = 1 + x, b2 = 1 − x, b3 = 1 + x + x2}

and

C = {c1 = 1, c2 = 1 + x, c3 = 1 + x + x2, c4 = x3}

be bases for Poly2 and Poly3 respectively. Determine [T]C←B.

Solution. We compute:

T(b1) = x(1 + x) = x + x2 = −c1 + c3,  so  [T(b1)]C = (−1, 0, 1, 0)

T(b2) = x(1 − x) = x − x2 = −c1 + 2c2 − c3,  so  [T(b2)]C = (−1, 2, −1, 0)

T(b3) = x(1 + x + x2) = x + x2 + x3 = −c1 + c3 + c4,  so  [T(b3)]C = (−1, 0, 1, 1)

Assembling all these coordinate vectors together gives

[T]C←B = [ −1 −1 −1 ]
         [  0  2  0 ]
         [  1 −1  1 ]
         [  0  0  1 ]
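The columns computed in Example 3.4.2 can be verified mechanically: column j of [T]C←B, used as coefficients of the basis C, must reproduce T(bj). A Python sketch (illustrative; everything is stored as a coefficient list in 1, x, x2, x3):

```python
# Bases of Example 3.4.2, padded to degree-3 coefficient lists:
b = [[1, 1, 0, 0], [1, -1, 0, 0], [1, 1, 1, 0]]                # B in Poly2
c = [[1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 0, 1]]   # C in Poly3

def T(p):
    """T(p)(x) = x p(x) on coefficient lists (p of degree at most 2)."""
    return [0] + p[:3]

M = [[-1, -1, -1],
     [0, 2, 0],
     [1, -1, 1],
     [0, 0, 1]]   # the claimed [T]_{C<-B}

# Column j of M holds the C-coordinates of T(b_j):
for j in range(3):
    combo = [sum(M[i][j] * c[i][k] for i in range(4)) for k in range(4)]
    assert combo == T(b[j])
```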

Theorem 3.4.3 Linear Maps and Matrix Multiplication of Coordinate Vectors. Let T : V → W be a linear map from a vector space V to a vector space W. Let B = {b1, . . . , bm} be a basis for V and C be a basis for W. Then for all vectors v in V,

[T(v)]C = [T]C←B [v]B    (3.4.1)

where the right hand side is the product of the matrix [T]C←B with the coordinate


vector [v]B.

Proof. Similar to the proof of the Change-of-Basis Theorem (Theorem 2.5.8). Let v ∈ V. Expand it in the basis B:

v = a1b1 + a2b2 + · · · + ambm,  i.e.  [v]B = (a1, . . . , am).

Then,

[T(v)]C = [T(a1b1 + · · · + ambm)]C
        = [a1T(b1) + · · · + amT(bm)]C    (T is linear)
        = a1[T(b1)]C + · · · + am[T(bm)]C    (Lemma 2.4.13)
        = [ [T(b1)]C  [T(b2)]C  . . .  [T(bm)]C ] (a1, . . . , am)
        = [T]C←B [v]B.

Example 3.4.4 Verifying Theorem 3.4.3 in an example. Let us verify that Theorem 3.4.3 really works, in the context of Example 3.4.2. Let us take the vector v ∈ Poly2 to be, for instance, x.

Expanding x relative to the basis B, we get

x = (1/2)(b1 − b2).

So,

[x]B = (1/2, −1/2, 0)

Also, T(x) = x2 = −c2 + c3, so

[T(x)]C = (0, −1, 1, 0)

We can now compute the left and right hand sides of Equation (3.4.1) and see that they are indeed the same.

LHS of (3.4.1) = [T(x)]C = (0, −1, 1, 0).

RHS of (3.4.1) = [T]C←B [x]B

               = [ −1 −1 −1 ]
                 [  0  2  0 ] (1/2, −1/2, 0)
                 [  1 −1  1 ]
                 [  0  0  1 ]


               = (0, −1, 1, 0).

So the Theorem indeed works, at least in this case! □

We can interpret Theorem 3.4.3 in a more abstract way as follows. We have the following diagram of linear maps of vector spaces:

[Diagram: a square with T : V → W along the top, the coordinate map [·]B : V → Colm down the left (with its inverse vecV,B : Colm → V), the coordinate map [·]C : W → Coln down the right, and the dotted composite [·]C ◦ T ◦ vecV,B : Colm → Coln along the bottom.]

The map at the top is the linear map T : V → W. The map on the left from V to Colm is the coordinate vector map [·]B associated to the basis B. Its inverse map vecV,B : Colm → V is also drawn. The map on the right is the coordinate vector map [·]C from W to Coln associated to the basis C. The dotted arrow on the bottom is the composite map, and can be computed explicitly as follows.

Lemma 3.4.5 The composite map

[·]C ◦ T ◦ vecV,B : Colm → Coln

is multiplication by the matrix [T]C←B. That is, for all column vectors u in Colm,

([·]C ◦ T ◦ vecV,B)(u) = [T]C←B u.

Proof. Let u be a column vector in Colm. Define v := vecV,B(u). Then v is the vector in V whose coordinate vector with respect to the basis B is u. That is, u = [v]B. So,

([·]C ◦ T ◦ vecV,B)(u) = [·]C(T(vecV,B(u)))    (Defn of composite map)
                       = [·]C(T(v))    (Defn of v)
                       = [T(v)]C    (Defn of [·]C)
                       = [T]C←B [v]B    (Theorem 3.4.3)
                       = [T]C←B u    (since [v]B = u).

Before we move on, we need to recall another thing about matrices. Suppose


A is a matrix with n rows. Let

e1 = (1, 0, . . . , 0),  e2 = (0, 1, . . . , 0),  . . . ,  en = (0, 0, . . . , 1)

be the standard basis for Coln. Then the ith column of A can be obtained by multiplying A with ei:

ith column of A = Aei.    (3.4.2)

Checkpoint 3.4.6 Check this!

Now we can prove the following important Theorem.

Theorem 3.4.7 Functoriality of the Matrix of a Linear Map. Let S : U → V and T : V → W be linear maps between finite-dimensional vector spaces. Let B, C and D be bases for U, V and W respectively. Then

[T ◦ S]D←B = [T]D←C [S]C←B

where the right hand side is the product of the matrices [T]D←C and [S]C←B.

Proof. We have:

ith column of [T ◦ S]D←B = [(T ◦ S)(bi)]D    (Defn of [T ◦ S]D←B)
                         = [T(S(bi))]D    (Defn of T ◦ S)
                         = [T]D←C [S(bi)]C    (Theorem 3.4.3)
                         = [T]D←C [S]C←B [bi]B    (Theorem 3.4.3)
                         = [T]D←C [S]C←B ei    (since [bi]B = ei)
                         = ith column of [T]D←C [S]C←B    (3.4.2).
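Here is a tiny numerical instance of Theorem 3.4.7 (an illustration with maps not taken from the text): D = d/dx : Poly2 → Poly1 and M = ‘multiply by x’ : Poly1 → Poly2, both in standard bases.

```python
def matmul(A, B):
    """Multiply matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Standard-basis matrices (columns = images of the basis vectors):
D = [[0, 1, 0],
     [0, 0, 2]]     # d/dx : Poly2 -> Poly1
M = [[0, 0],
     [1, 0],
     [0, 1]]        # multiply by x : Poly1 -> Poly2

# Direct computation: (D o M)(1) = 1 and (D o M)(x) = 2x, so the matrix
# of D o M in standard bases is [[1, 0], [0, 2]]; functoriality agrees:
assert matmul(D, M) == [[1, 0], [0, 2]]
```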

Corollary 3.4.8 Let T : V → W be a linear map, and suppose B is a basis for V, and C is a basis for W. Then

T is an isomorphism ⟺ [T]C←B is invertible.

Proof. ⇒. Suppose the linear map T is an isomorphism. This means there exists a linear map S : W → V such that

S ◦ T = idV  and  T ◦ S = idW.

Therefore,

[S ◦ T]B←B = [idV]B←B  and  [T ◦ S]C←C = [idW]C←C.

Therefore, by the Functoriality of the Matrix of a Linear Map (Theorem 3.4.7),

[S]B←C [T]C←B = I  and  [T]C←B [S]B←C = I.

Therefore the matrix [T]C←B is invertible, with inverse given by

([T]C←B)−1 = [S]B←C.

⇐. Suppose the matrix [T ] ≡ [T ]C←B is invertible. Define the linear map

S : W → V


by firstly defining it on the basis vectors in C by

S(ci) := Σ_{p=1}^{dim V} ([T]−1)pi bp

and then extending to all of W by linearity. Then we have

(T ◦ S)(ci) = T(S(ci))
            = T( Σ_{p=1}^{dim V} ([T]−1)pi bp )
            = Σ_{p=1}^{dim V} ([T]−1)pi T(bp)    (T is linear)
            = Σ_{p=1}^{dim V} Σ_{q=1}^{dim W} ([T]−1)pi [T]qp cq    (expanding T(bp) in the basis C)
            = Σ_{q=1}^{dim W} ( Σ_{p=1}^{dim V} [T]qp ([T]−1)pi ) cq
            = Σ_{q=1}^{dim W} ([T][T]−1)qi cq
            = Σ_{q=1}^{dim W} Iqi cq
            = Σ_{q=1}^{dim W} δqi cq
            = ci.

Therefore, T ◦ S = idW. In a similar way, we can prove that S ◦ T = idV. Therefore the linear map T is an isomorphism, with inverse map T−1 = S. □

We can refine this a bit further. Explicitly, ‘the inverse of the matrix of a linear map equals the matrix of the inverse of the linear map’.

Corollary 3.4.9 Suppose B and C are bases for vector spaces V and W respectively. Suppose a linear map T : V → W has inverse T−1 : W → V. Then

([T]C←B)−1 = [T−1]B←C.

Proof. We have

[T]C←B [T−1]B←C = [T ◦ T−1]C←C    (Theorem 3.4.7)
                = [idW]C←C    (T ◦ T−1 = idW)
                = I

and

[T−1]B←C [T]C←B = [T−1 ◦ T]B←B    (Theorem 3.4.7)
                = [idV]B←B    (T−1 ◦ T = idV)
                = I.

The next Lemma says that the ‘change-of-basis matrix’ from Section 2.5 is just the matrix of the identity linear map with respect to the relevant bases.


Lemma 3.4.10 Let B and C be bases for an m-dimensional vector space V. Then

PC←B = [id]C←B.

Proof.

PC←B = [ [b1]C · · · [bm]C ]    (Defn of PC←B)
     = [ [id(b1)]C · · · [id(bm)]C ]
     = [id]C←B.    (Defn of [id]C←B)

□

The next Theorem tells us how the matrix of a linear operator changes when we change the bases used in computing it.

Theorem 3.4.11 Let B and C be bases for a vector space V, and let T : V → V be a linear operator on V. Then

[T]C←C = P−1 [T]B←B P

where P ≡ PB←C.

Proof.

RHS = P−1 [T]B←B P
    = ([id]B←C)−1 [T]B←B [id]B←C    (Lemma 3.4.10)
    = [id]C←B [T]B←B [id]B←C    (Corollary 3.4.9)
    = [id ◦ T ◦ id]C←C    (Theorem 3.4.7)
    = [T]C←C
    = LHS.
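Theorem 3.4.11 in a concrete case (an illustration, not from the text): let T swap the two coordinates of Col2, and let C = {(1, 1), (1, −1)}. Since T fixes (1, 1) and negates (1, −1), the matrix [T]C←C should come out diagonal.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T_B = [[0, 1],
       [1, 0]]                       # [T]_{B<-B}, B the standard basis
P = [[1, 1],
     [1, -1]]                        # P = P_{B<-C}: columns are the C vectors
P_inv = [[Fraction(1, 2), Fraction(1, 2)],
         [Fraction(1, 2), Fraction(-1, 2)]]

T_C = matmul(P_inv, matmul(T_B, P))  # [T]_{C<-C} = P^{-1} [T]_{B<-B} P
assert T_C == [[1, 0], [0, -1]]
```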

Exercises

1. Let

T : Trig1 → Trig2

be the ‘multiply with sin x’ linear map, T(f)(x) = sin(x) f(x). Compute [T]C←B with respect to the standard basis B of Trig1 and C of Trig2.

2. Let

S : Trig2 → Trig2

be the ‘shift by π/6’ map, S(f)(x) = f(x − π/6).

(a) Compute [S]C←C with respect to the standard basis C of Trig2.

(b) Compute [S]B←C, where B is the following basis for Trig2:

B = {1, cos x, sin x, cos2 x, sin2 x}

3. Verify Theorem 3.4.3 for the linear map S : Mat2,2 → Mat2,2 given by S(M) = MT, for the vector A ∈ Mat2,2 given by

A = [ 0 1 ]
    [ 0 0 ]

and using the following bases of Mat2,2:

B = C = { M1 = [ 1 1 ],  M2 = [ 1 0 ],  M3 = [ 1 1 ],  M4 = [ 0 1 ] }.
               [ 2 3 ]        [ 1 1 ]        [ 1 1 ]        [ 1 1 ]


4. Verify Theorem 3.4.3 for the linear map

T : Poly3 → Trig3

defined by

T(p)(x) := p(cos x).

Use the standard bases B for Poly3 and C for Trig3.

5. Check that the linear maps T and S from Exercise 3.4.1 and Exercise 3.4.2 satisfy [S ◦ T]C←B = [S]C←C [T]C←B.

6. Verify Theorem 3.4.7 for the ‘gradient’ and ‘divergence’ linear maps

G : Poly3[x, y] → Vect2(R2),  G(p) := ∇p

Div : Vect2(R2) → Poly1[x, y],  Div((P, Q)) := ∂P/∂x + ∂Q/∂y

Use the standard bases

B = {1, x, y, x2, xy, y2, x3, x2y, xy2, y3}

C = {(1, 0), (x, 0), (y, 0), (x2, 0), (xy, 0), (y2, 0), (0, 1), (0, x), (0, y), (0, x2), (0, xy), (0, y2)}

D = {1, x, y}

for Poly3[x, y], Vect2(R2), Poly1[x, y] respectively. That is, compute

[Div]D←C [G]C←B  and  [Div ◦ G]D←B

and check that they are equal.

7. Verify Theorem 3.4.7 in the case of the linear maps

S : Mat2,3 → Col3

[ A11 A12 A13 ] ↦ (A11 + A21, A12 + A22, A13 + A23)
[ A21 A22 A23 ]

T : Col3 → Poly2[x, y]

(a, b, c) ↦ a + b(x − y − 1)2 + c(x + y + 1)2

Use the standard basis B for Mat2,3 (see Example 2.3.13), the basis

C = { (1, 0, 1), (0, 1, 0), (0, 1, 1) }

for Col3, and the standard basis

D = {1, x, y, x2, xy, y2}


for Poly2[x, y]. That is, compute

[T ◦ S]D←B  and  [T]D←C [S]C←B

and check that they are equal.

3.5 Kernel and Image of a Linear Map

Definition 3.5.1 Let T : V → W be a linear map between vector spaces V and W. The kernel of T, written Ker(T), is the set of all vectors v ∈ V that are mapped to 0W by T. That is,

Ker(T) := {v ∈ V : T(v) = 0W}.

The image of T, written Im(T), is the set of all vectors w ∈ W such that w = T(v) for some v ∈ V. That is,

Im(T) := {w ∈ W : w = T(v) for some v ∈ V}.

See Figure 3.5.2 and Figure 3.5.3 for a schematic representation.

Sometimes, to be absolutely clear, I will put a subscript on the zero vector to indicate which vector space it belongs to, e.g. 0W refers to the zero vector in W, while 0V refers to the zero vector in V.

Another name for the kernel of T is the nullspace of T, and another name for the image of T is the range of T.

[Figure: schematic of Ker(T) inside V, mapping to 0W in W.]

Figure 3.5.2: Ker(T)

[Figure: schematic of Im(T) inside W.]

Figure 3.5.3: Im(T)


Lemma 3.5.4 Let T : V → W be a linear map. Then:

1. Ker(T) is a subspace of V

2. Im(T) is a subspace of W

Proof. (i) We must check the three requirements for being a subspace.

1. Ker(T) is closed under addition. Suppose v and v′ are in Ker(T). In other words, T(v) = 0 and T(v′) = 0. We need to show that v + v′ is in Ker(T), in other words, that T(v + v′) = 0. Indeed,

T(v + v′) = T(v) + T(v′) = 0 + 0 = 0.

2. 0V ∈ Ker(T). To show that 0V is in Ker(T), we need to show that T(0V) = 0W. Indeed, this is true since T is a linear map, by Lemma 3.1.29.

3. Ker(T) is closed under scalar multiplication. Suppose v ∈ Ker(T) and k ∈ R is a scalar. We need to show that kv ∈ Ker(T), that is, we need to show that T(kv) = 0. Indeed,

T(kv) = kT(v) = k0 = 0.

(ii) Again, we must check the three requirements for being a subspace.

1. Im(T) is closed under addition. Suppose w and w′ are in Im(T). In other words, there exist vectors v and v′ in V such that T(v) = w and T(v′) = w′. We need to show that w + w′ is also in Im(T), in other words, that there exists a vector u in V such that T(u) = w + w′. Indeed, set u := v + v′. Then,

T(u) = T(v + v′) = T(v) + T(v′) = w + w′.

2. 0W ∈ Im(T). To show that 0W ∈ Im(T), we need to show that there exists v ∈ V such that T(v) = 0W. Indeed, choose v = 0V. Then T(v) = T(0V) = 0W by Lemma 3.1.29.

3. Im(T) is closed under scalar multiplication. Suppose w ∈ Im(T) and k is a scalar. We need to show that kw ∈ Im(T). The fact that w is in Im(T) means that there exists a v in V such that T(v) = w. We need to show that there exists a u ∈ V such that T(u) = kw. Indeed, set u := kv. Then

T(u) = T(kv) = kT(v) = kw.

Now that we know that the kernel and image of a linear map are subspaces, and hence vector spaces in their own right, we can make the following definition.

Definition 3.5.5 Let T : V → W be a linear map from a finite-dimensional vector space V to a vector space W. The nullity of T is the dimension of Ker(T), and the rank of T is the dimension of Im(T):

Nullity(T) := Dim(Ker(T))
Rank(T) := Dim(Im(T))


The ‘dimension of Ker(T)’ makes sense because Ker(T) is a subspace of a finite-dimensional vector space V, and hence is finite-dimensional by Proposition 2.3.25. We do not yet know that Im(T) is finite-dimensional, but this will follow from the Rank-Nullity Theorem (Theorem 3.5.10).

Example 3.5.6 Let a ∈ R3 be a fixed nonzero vector. Consider the ‘cross product with a’ linear map from Example 3.1.13,

C : R3 → R3
v ↦ a × v

Determine the kernel, image, nullity and rank of C.

Solution. The kernel of C is the subspace of R3 consisting of all vectors v ∈ R3 such that a × v = 0. From the geometric formula for the cross-product,

|a × v| = |a||v| sin θ,

where θ is the angle from a to v, we see that

a × v = 0 ⟺ v = 0 or θ = 0 or θ = π.

In other words, v must be a scalar multiple of a. So,

Ker(C) = {ka : k ∈ R}.

I claim that the image of C is the subspace of all vectors perpendicular toa, i.e.

Im(C) := {u ∈ R3 : u · a = 0}. (3.5.1)

If you believe me, then the picture is as follows:

[Figure: Ker(C) is the line through a, and Im(C) is the plane perpendicular to a.]

Let me prove equation (3.5.1). By definition, the image of C is the subspace of R³ consisting of all vectors w of the form w = a × v for some v ∈ R³. This implies that w is perpendicular to a. This was the ‘easy’ part. The ‘harder’ part is to show the converse. That is, we need to show that if u is perpendicular to a, then u is in the image of C, i.e. there exists a vector v such that C(v) = u.

Indeed, we can choose v to be the vector obtained by rotating u by 90 degrees clockwise in the plane perpendicular to a, and scaling it appropriately:


[Figure: the vector u perpendicular to a, and the vector v obtained by rotating u through 90° and scaling.]

In terms of a formula, we have

v = (1/|a|²) u × a.

Note that this is not the only vector v such that C(v) = u. Indeed, if we add to v any vector that lies on the line through a, the resulting vector

v′ = v + ka

also satisfies C(v′) = u, since

C(v′) = C(v + ka) = C(v) + C(ka) = u + 0 = u.

Example 3.5.7 Determine the kernel, image, nullity and rank of the linear map

I : Trig2 → R
f ↦ ∫₀^π f(x) dx.

Solution. The kernel of I consists of all degree 2 trigonometric polynomials

f(x) = a0 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x

such that

∫₀^π (a0 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x) dx = 0.

Performing the integrals, this becomes the equation

πa0 + 2b1 = 0

with no constraints on the other constants a1, a2, b2. In other words,

Ker(I) = {all trigonometric polynomials of the form a0(1 − (π/2) sin x) + a1 cos x + a2 cos 2x + b2 sin 2x, where a0, a1, a2, b2 ∈ R}.

Hence Nullity(I) = Dim(Ker(I)) = 4.

The image of I consists of all real numbers p ∈ R such that there exists an f ∈ Trig2 such that I(f) = p. I claim that

Im(I) = R.

Indeed, given p ∈ R, we may choose f(x) = (p/2) sin x, since

I(f) = (p/2) ∫₀^π sin x dx = (p/2) · 2 = p.


Hence Im(I) = R, and Rank(I) = 1.

Note that the choice of f(x) = (p/2) sin x satisfying I(f) = p is not unique. We could set f̃ = f + g, where g ∈ Ker(I), and we would still have I(f̃) = p:

I(f̃) = I(f + g) = I(f) + I(g) = p + 0 = p.

Example 3.5.8 Consider the function

T : Poly2 → R²
p ↦ (p(1), p′(1)).

Show that T is a linear map, and determine its kernel, image, rank and nullity.

Solution. We first show that T is a linear map. Let p, q ∈ Poly2. Then

T(p + q) = ((p + q)(1), (p + q)′(1))       (Defn of T)
         = (p(1) + q(1), (p + q)′(1))      (Defn of the function p + q)
         = (p(1) + q(1), (p′ + q′)(1))     ((p + q)′ = p′ + q′)
         = (p(1) + q(1), p′(1) + q′(1))    (Defn of the function p′ + q′)
         = (p(1), p′(1)) + (q(1), q′(1))   (Defn of addition in R²)
         = T(p) + T(q).

The proof of T(kp) = kT(p) is similar.

The kernel of T is the set of all polynomials

p(x) = a0 + a1x + a2x²

such that T(p) = (0, 0). This translates into the equation

(a0 + a1 + a2, a1 + 2a2) = (0, 0).

This in turn translates into two equations:

a0 + a1 + a2 = 0
a1 + 2a2 = 0

whose solution is a2 = t, a1 = −2t, a0 = t, where t ∈ R. Hence

Ker(T) = {all polynomials of the form t − 2tx + tx², where t ∈ R}.

Hence Nullity(T) = 1.

The image of T is the set of all (v, w) ∈ R² such that there exists a polynomial p = a0 + a1x + a2x² in Poly2 such that T(p) = (v, w). So, (v, w) is in the image of T if and only if we can find a polynomial p = a0 + a1x + a2x² such that

(a0 + a1 + a2, a1 + 2a2) = (v, w).

In other words, (v, w) is in the image of T if and only if the equations

a0 + a1 + a2 = v
a1 + 2a2 = w


have a solution for some a0, a1, a2. But these equations always have a solution, for all (v, w) ∈ R². For instance, one solution is

a2 = 0, a1 = w, a0 = v − w,

which corresponds to the polynomial

p(x) = v − w + wx. (3.5.2)

Note that indeed T(p) = (v, w). Hence,

Im(T) = {all (v, w) ∈ R²} = R².

Hence Rank(T) = Dim(Im(T)) = 2.

Note that the choice of the polynomial p(x) = v − w + wx from (3.5.2) which satisfies T(p) = (v, w) is not the only possible choice. Indeed, any polynomial of the form p̃ = p + q where q ∈ Ker(T) will also satisfy T(p̃) = (v, w), since

T(p̃) = T(p + q) = T(p) + T(q) = (v, w) + (0, 0) = (v, w).

Example 3.5.9 This example is a modification of Example 3.5.8. Determine the kernel, image, rank and nullity of the following linear map:

T : Poly2 → Col3

p ↦ [ p(1)
      p′(1)
      p(2) − (1/2)p″(3) ]

Solution. We start by computing Ker(T). We have

p ∈ Ker(T)  ⇔  T(p) = [ 0
                        0
                        0 ]    (3.5.3)

Write

p = a + bx + cx².

Then equation (3.5.3) becomes:

[ a + b + c
  b + 2c
  a + 2b + 3c ] = [ 0
                    0
                    0 ]

This equation between column vectors is satisfied if and only if the following linear equations are simultaneously satisfied:

a + b + c = 0
b + 2c = 0
a + 2b + 3c = 0

We observe that the third equation is equal to the sum of the first equation and the second equation. So when we transform these equations into row-echelon form, we get:

a + b + c = 0
b + 2c = 0

The general solution is:

c = t, b = −2t, a = t,  t ∈ R.

Therefore,

Ker(T) = {t − 2tx + tx² : t ∈ R}.

A basis for Ker(T) is obtained by setting the parameter t = 1:

Basis for Ker(T) = {1 − 2x + x²}.

So

Nullity(T) = Dim(Ker(T)) = 1.

Now let us compute Im(T). We have

w ≡ [ w1
      w2
      w3 ] ∈ Im(T)  ⇔  there exists p ∈ Poly2 such that T(p) = w.

Write p = a + bx + cx². Then T(p) = w becomes

[ a + b + c
  b + 2c
  a + 2b + 3c ] = [ w1
                    w2
                    w3 ]

In other words, w ∈ Im(T) if and only if there exists some solution to the following equations for a, b, c:

a + b + c = w1
b + 2c = w2
a + 2b + 3c = w3

Let us solve these equations using row reduction:

[ 1 1 1 | w1 ]           [ 1 1 1 | w1      ]           [ 1 1 1 | w1           ]
[ 0 1 2 | w2 ]  R3−R1→   [ 0 1 2 | w2      ]  R3−R2→   [ 0 1 2 | w2           ]
[ 1 2 3 | w3 ]           [ 0 1 2 | w3 − w1 ]           [ 0 0 0 | w3 − w1 − w2 ]

So: there is no solution for a, b, c unless w3 − w1 − w2 = 0, because otherwise we will have an equation of the form ‘zero equals nonzero value’. Moreover, if w3 − w1 − w2 = 0, then the above equations can be solved for a, b, c, because then they are in augmented row-echelon form with no invalid rows. Therefore,

Im(T) = { [ w1
            w2
            w3 ] : −w1 − w2 + w3 = 0 }.

So, a basis for Im(T) is given by:

{ [ 1        [ 0
    0    ,     1
    1 ]        1 ] }

and hence

Rank(T) = 2.
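The row reduction above says that (w1, w2, w3) lies in Im(T) exactly when w3 = w1 + w2. The following sketch checks both directions in coordinates; the sample coefficients are arbitrary choices of ours, not from the text.

```python
# T in the coordinates (a, b, c) of p = a + b x + c x^2
def T(a, b, c):
    # (p(1), p'(1), p(2) - p''(3)/2) simplifies to the three rows below
    return (a + b + c, b + 2 * c, a + 2 * b + 3 * c)

# Forward direction: every value of T satisfies w3 = w1 + w2
for coeffs in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, -5, 7)]:
    w1, w2, w3 = T(*coeffs)
    assert w3 == w1 + w2

# Converse: any (w1, w2, w1 + w2) is hit.  Take the free variable c = 0,
# then b = w2 and a = w1 - b - c by back-substitution.
w1, w2 = 4, -1                        # arbitrary choice
assert T(w1 - w2, w2, 0) == (w1, w2, w1 + w2)
```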

Theorem 3.5.10 Rank-Nullity Theorem. Let T : V → W be a linear map from a finite-dimensional vector space V to a vector space W. Then

Nullity(T) + Rank(T) = Dim(V).

Proof. Let B = {e1, . . . , ek} be a basis for Ker(T). Since B is a list of linearly independent vectors in V, we can extend it to a basis C = {e1, . . . , ek, f1, . . . , fp} for V, by Corollary 2.3.31. I claim that

D := {T(f1), . . . , T(fp)}

is a basis for Im(T). If I can prove that, then we are done, since then we have

Nullity(T) + Rank(T) = k + p = Dim(V).

Let us prove that D is a basis for Im(T).

D is linearly independent. Suppose

b1T(f1) + · · · + bpT(fp) = 0W.

We recognize the left hand side as T (b1f1 + · · ·+ bpfp). Hence

b1f1 + · · ·+ bpfp ∈ Ker(T )

which means we can write it as a linear combination of the vectors in B,

b1f1 + · · ·+ bpfp = a1e1 + · · ·+ akek.

Bringing all terms onto one side, this becomes the equation

−a1e1 − · · · − akek + b1f1 + · · ·+ bpfp = 0V .

We recognize the left-hand side as a linear combination of the C basis vectors. Since they are linearly independent, all the scalars must be zero. In particular, b1 = · · · = bp = 0, which is what we wanted to prove.

D spans Im(T). Suppose w ∈ Im(T). We need to show that w is a linear combination of the vectors from D. Since w is in the image of T, there exists v ∈ V such that T(v) = w. Since C is a basis for V, we can write

v = a1e1 + · · · + akek + b1f1 + · · · + bpfp

Page 116: W214 Linear Algebra - Stellenbosch University · 2019. 5. 31. · v 0.1Noteforthestudent Youareabouttomeetlinearalgebraforthesecondtime. Inthefirstyear,we focusedonsystemsoflinearequations,matrices,andtheirdeterminants

CHAPTER 3. LINEAR MAPS 110

for some scalars a1, . . . , ak, b1, . . . , bp. Then

w = T(v)
  = T(a1e1 + · · · + akek + b1f1 + · · · + bpfp)
  = a1T(e1) + · · · + akT(ek) + b1T(f1) + · · · + bpT(fp)
  = b1T(f1) + · · · + bpT(fp)     (ei ∈ Ker(T))

so that w is indeed a linear combination of the vectors from D. □
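For a map given by a matrix, the theorem can be spot-checked computationally: after row reduction, the rank is the number of pivot columns and the nullity is the number of columns without a pivot. A minimal exact-arithmetic sketch, using the matrix of T from Example 3.5.9 (the rank helper and the extra test matrices are our own, not from the text):

```python
from fractions import Fraction

def rank(rows):
    # Number of pivots after Gauss-Jordan elimination (exact arithmetic)
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue                  # no pivot in this column: free variable
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Matrix of T from Example 3.5.9 in the basis {1, x, x^2}; Dim(Poly2) = 3
A = [[1, 1, 1], [0, 1, 2], [1, 2, 3]]
rk = rank(A)
nullity = 3 - rk                      # columns without a pivot
assert (rk, nullity) == (2, 1)
assert rk + nullity == 3              # = Dim(Poly2)

# Two extreme operators on R^3: the identity and the zero map
assert rank([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 3
assert rank([[0, 0, 0], [0, 0, 0], [0, 0, 0]]) == 0
```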

Example 3.5.11 The identity map on a vector space. Consider the identity map on a finite-dimensional vector space V:

idV : V → V.

We have

Ker(idV) = {0}

because the only vector sent to zero by the identity map is the zero vector itself. Therefore,

Nullity(idV) = 0.

Similarly,

Im(idV) = V

because every vector v ∈ V is in the image of idV. Indeed, we have idV(v) = v, proving that v ∈ Im(idV). Hence,

Rank(idV) = Dim(V).

So indeed the Rank-Nullity Theorem (Theorem 3.5.10) is true in this case, as

Rank(idV) + Nullity(idV) = Dim(V) + 0 = Dim(V).

Example 3.5.12 The zero map. Consider the zero map Z on a finite-dimensional vector space V:

Z : V → V
v ↦ 0.

We have

Ker(Z) = V

because every vector v ∈ V is sent to the zero vector by Z. So,

Nullity(Z) = Dim(V).

Similarly, we have

Im(Z) = {0}

because the only vector in the image of Z is the zero vector 0. That is because, for all vectors v ∈ V, Z(v) = 0. So,

Rank(Z) = 0.

Hence the Rank-Nullity Theorem (Theorem 3.5.10) is true in this case, as

Rank(Z) + Nullity(Z) = 0 + Dim(V) = Dim(V).


Example 3.5.13 Verifying that Example 3.5.9 satisfies the Rank-Nullity Theorem. Let us verify that the linear map T from Example 3.5.9 satisfies Theorem 3.5.10. In Example 3.5.9, we computed that

Nullity(T) = 1,  Rank(T) = 2.

Hence,

Rank(T) + Nullity(T) = 2 + 1 = 3 = Dim(Poly2),

so it indeed satisfies Theorem 3.5.10. □

Exercises


1. Verify the Rank-Nullity theorem for the following linear maps. That is, for each map T, (a) determine Ker(T) and Im(T) explicitly, (b) determine the dimension of Ker(T) and Im(T), (c) check that these numbers satisfy the Rank-Nullity theorem.

(a) The identity map idV : V → V on a finite-dimensional vector space V.

(b) The zero map

Z : V → V

v ↦ 0

on a finite-dimensional vector space V .

(c) The map

T : Poly3 → Col3

p ↦ [ p(1)
      p(2)
      p(3) ]

(d) The map

S : Trig2 → Col2

f ↦ [ ∫₀^π f(x) cos x dx
      ∫₀^π f(x) sin x dx ]

(e) The ‘curl’ map

C : Vect2(R²) → Poly1[x, y]
(P, Q) ↦ ∂Q/∂x − ∂P/∂y

Bonus question: What kind of vector fields live in the kernel of C?

(f) (Poole 6.5.12) The map

T : Mat2,2 → Mat2,2
A ↦ AB − BA

where

B = [ 1  −1
      −1  1 ]

2. Give an example of a linear map T : Col4 → Col4 such that Rank(T) = Nullity(T).

3. For each of the following assertions, state whether it is true or false. If it is true, prove it. If it is false, prove that it is false.

(a) There exists a linear map T : R⁵ → R² such that

Ker(T) = {(x1, x2, x3, x4, x5) ∈ R⁵ : x1 = 3x2 and x3 = x4 = x5}.

(b) There exists a linear map F : Trig3 → Trig3 such that Rank(F) = Nullity(F).

1https://en.wikipedia.org/wiki/Lagrange_polynomial


4. Let f(x, y, z) be a function on R³ and fix a point p = (x0, y0, z0) ∈ R³. For each vector u ∈ R³, we can regard the derivative of f in the direction of u at p as a map

Dp : R³ → R
u ↦ (∇f)(p) · u.

(a) Show that Dp as defined above is a linear map.

(b) Consider the example of f(x, y, z) = x2 + y2 + z2. DetermineKer(Dp) for all points p ∈ R3.

5. Using the Rank-Nullity theorem, give a different proof of the fact that theimage of the map C in Example 3.5.6 is {u ∈ R3 : u · a = 0}.

6. Determine the kernel and image of the linear map

T : Poly2[x, y, z]→ Poly2

defined by T(p)(x) := p(x, x, x).

7. Let V be a finite-dimensional vector space. Let U be a subspace of V. Show that there exists a linear map T : V → V such that Ker T = U.

8. Consider the function F : R2 → R2 defined by

F(x, y) = (x² + y², xy).

Consider now DFp, the Jacobian matrix of F at the point p = (1, 2). Find Ker(DFp) and Im(DFp), and subsequently find Nullity(DFp) and Rank(DFp).

3.6 Injective and surjective linear maps

Definition 3.6.1 A function f : X → Y from a set X to a set Y is called one-to-one (or injective) if whenever f(x) = f(x′) for some x, x′ ∈ X it necessarily holds that x = x′. The function f is called onto (or surjective) if for all y ∈ Y there exists an x ∈ X such that f(x) = y. ♦

If f is a linear map between vector spaces (and not just an arbitrary function between sets), there is a simple way to check if f is injective.

Lemma 3.6.2 Let T : V → W be a linear map between vector spaces. Then:

T is injective  ⇔  Ker(T) = {0V}.

Proof. ⇒. Suppose T : V → W is one-to-one. We already know one element in Ker(T), namely 0V, since T(0V) = 0W as T is linear. Since T is one-to-one, this must be the only element in Ker(T).

⇐. Suppose Ker(T) = {0V}. Now, suppose that

T(v) = T(v′)

for some vectors v, v′ ∈ V. Then we have T(v) − T(v′) = 0W, and since T is linear, this means T(v − v′) = 0W. Hence v − v′ ∈ Ker(T), and so v − v′ = 0V; in other words, v = v′, which is what we wanted to show. □
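For a matrix map T(v) = Av the lemma is easy to see concretely: a nonzero kernel vector immediately produces two different inputs with the same output. A small Python sketch with an arbitrary singular matrix of our own choosing (not from the text):

```python
# A deliberately singular matrix: its second column is twice the first,
# so the kernel is nontrivial
A = [[1, 2],
     [2, 4]]

def T(v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

k = (2, -1)                 # a nonzero kernel vector
assert T(k) == (0, 0)

# Failure of injectivity: inputs differing by k collide
v1 = (3, 5)
v2 = (v1[0] + k[0], v1[1] + k[1])
assert v1 != v2 and T(v1) == T(v2)
```

Conversely, if the only kernel vector were 0, the two inputs above could not collide, which is exactly the argument in the proof.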


Another simplification occurs if T is a linear map from a vector space V to itself (i.e. T is a linear operator on V), and V is finite-dimensional.

Lemma 3.6.3 Let T : V → V be a linear operator on a finite-dimensional vector space V. Then:

T is injective  ⇔  T is surjective.

Proof. ⇒. Suppose T is injective.

∴ Ker(T) = {0V}        (Lemma 3.6.2)
∴ Nullity(T) = 0
∴ Rank(T) = Dim(V)     (Theorem 3.5.10)
∴ Im(T) = V            (Proposition 2.3.25)

Hence T is surjective.

⇐. Suppose T is surjective.

∴ Im(T) = V
∴ Rank(T) = Dim(V)
∴ Nullity(T) = 0       (Theorem 3.5.10)
∴ Ker(T) = {0V}
∴ T is injective.      (Lemma 3.6.2)

□

Proposition 3.6.4 A linear map T : V → W is an isomorphism if and only if T is injective and surjective.

Proof. ⇒. Suppose T : V → W is an isomorphism. That is, there exists a linear map S : W → V such that T ◦ S = idW and S ◦ T = idV. We will show that T is injective and surjective.

Suppose that T(v1) = T(v2).

∴ S(T(v1)) = S(T(v2))
∴ idV(v1) = idV(v2)    (using S ◦ T = idV)
∴ v1 = v2,

which shows that T is injective. To show that T is surjective, let w ∈ W. We must show that there exists v ∈ V such that T(v) = w. Indeed, set v := S(w). Then

T(v) = T(S(w)) = idW(w) = w    (using T ◦ S = idW).

⇐. Suppose that there exists a linear map T : V → W which is injective and surjective. We will show that there exists a linear map S : W → V such that S ◦ T = idV and T ◦ S = idW, which will prove that V and W are isomorphic.

We define the inverse map S as follows:

S : W → V
w ↦ the unique v ∈ V such that T(v) = w.

This map is well-defined. Indeed, given w ∈ W, the fact that T is surjective means there does exist some v ∈ V such that T(v) = w. The fact that T is injective implies that this v is unique. For if there exists another v′ ∈ V with T(v′) = w, then we have v′ = v since T is injective.

At this point we have a well-defined function S : W → V which satisfies T ◦ S = idW and S ◦ T = idV. We only need to check that S is linear.

Let w1,w2 ∈W . Then

S(aw1 + bw2) = S(aT(S(w1)) + bT(S(w2)))    (using T ◦ S = idW)
             = S(aT(v1) + bT(v2))           (setting v1 := S(w1), v2 := S(w2))
             = S(T(av1 + bv2))              (T is linear)
             = av1 + bv2                    (S ◦ T = idV)
             = aS(w1) + bS(w2).

Hence S is linear, which completes the proof. □

Proposition 3.6.5 Let T : V → V be a linear operator on a finite-dimensional vector space V. The following statements are equivalent:

1. T is injective.
2. T is surjective.
3. T is an isomorphism.

Proof. (1) is equivalent to (2) by Lemma 3.6.3. On the other hand, (1) and (2) together are equivalent to (3) by Proposition 3.6.4. □


Chapter 4

Tutorials

4.1 W214 Linear Algebra 2019, Tutorial 1

Tutorial 1 covers Section 1.1 up until the end of Section 1.5. The following exercises have been selected.

Exercises

1. Prove that the set C from Section 1.1, together with the addition operation (1.1.6), the zero vector (1.1.9) and the scalar multiplication operation (1.1.12), forms a vector space.

2. Define the set C′ consisting of all polynomials of degree exactly 4. Show that if C′ is given the addition operation (1.1.6), the zero vector (1.1.9) and the scalar multiplication operation (1.1.12), then C′ does not form a vector space.

Hint. Give a counterexample!

3. Consider the set

X := {(a1, a2) ∈ R2 : a1 ≥ 0, a2 ≥ 0}

equipped with the same addition operation (1.1.4), zero vector (1.1.8) and scalar multiplication operation (1.1.10) as in A. Does X form a vector space? If not, why not?

4. Notation quiz! Say whether the following combination of symbols represents a real number or a function.

(a) f
(b) f(x)
(c) k.f
(d) (k.f)(x)

5. Let X = {a, b, c}.

(a) Write down three different functions f, g, h in Fun(X).

(b) For each of the functions you wrote down in Item 4.1.5.a, calculate (i) f + g and (ii) 3.h.



6. Define a strange new addition operation ⊕ on R by

x ⊕ y := x − y,  x, y ∈ R.

Does ⊕ satisfy R2? If it does, prove it. If it does not, give a counterexample.

7. Construct an operation ⊞ on R satisfying R1 but not R2.

Hint. Try adjusting the formula from Exercise 4.1.6.

8. Prove that for all vectors v in a vector space, −(−v) = v.

9. Let V be a vector space. Suppose that a vector v ∈ V satisfies

5.v = 2.v. (4.1.1)

Prove that v = 0.

10. Prove or disprove: if k.v = 0 in a vector space, then it necessarily follows that k = 0.

11. Prove or disprove: the empty set can be equipped with data D1, D2, D3 satisfying the rules of a vector space.

12. Prove or disprove: Rule R7 of a vector space follows automatically from the other rules.

4.2 W214 2019, Linear Algebra Tutorial 2

Note: You are welcome to use SageMath to help you solve some of the problems below. You can either just type into one of the provided Sage cells, or you can use the SageMath cell server.

Exercises

1.6 Subspaces.

1. Read through the webpage version of Subsection 1.6.3 (Solutions to homogeneous linear differential equations), which is new and contains a lot of SageMath examples.

2. Show that the set

V := {(a,−a, b,−b) : a, b ∈ R}

is a subspace of R⁴.

3. Consider the set

V := {f ∈ Diff((−1, 1)) : f ′(0) = 2}

Is V a subspace of Diff((−1, 1))? If you think it is, prove that it is. If you think it is not, prove that it is not!

4. Is R+ := {x ∈ R : x ≥ 0} a subspace of R? If you think it is, prove that it is. If you think it is not, prove that it is not!

5. Give an example of a nonempty subset V of R² which is closed under scalar multiplication, but which is not a subspace of R².


2.1 Linear Combinations and Span.

6. Can the polynomial p = x³ − x + 2 ∈ Poly3 be expressed as a linear combination of

p1 = 1 + x,  p2 = x³ + x² + x − 1,  p3 = x³ − x² + 1?

Set up the appropriate system of simultaneous linear equations. Then solve these by hand, or using SageMath, as in Example 2.1.4.

7. Carrying on from the previous question, can the same polynomial p = x³ − x + 2 ∈ Poly3 be expressed as a linear combination of

p1 = 1 + x,  p2 = x³ + x² + x − 1,  p3 = x³ − x² + 1,  p4 = 1 − x?

Set up the appropriate system of simultaneous linear equations. Then solve these by hand, or using SageMath, as in Example 2.1.4.

8. Show that the polynomials

p1 = 1 + x,  p2 = x³ + x² + x − 1,  p3 = x³ − x² + 1,  p4 = 1 − x

from the previous question span Poly3. Set up the appropriate system of simultaneous linear equations. Then solve these by hand, or using SageMath, as in Example 2.1.8.

2.2 Linear Independence.

9. Read through the webpage version of Section 2.1. I have added some new material, and have given examples of how to use SageMath to solve systems of linear equations.

10. Let S = {v1, . . . , vn} be a list of vectors in a vector space V. Suppose that S spans V. Suppose that w is another vector in V. Prove that the list of vectors S′ = {w, v1, . . . , vn} also spans V.

Consider the following list of matrices, thought of as vectors in Mat2,2 (here [a b; c d] denotes the 2×2 matrix with rows (a, b) and (c, d)):

v1 = [1 2; 1 1],  v2 = [1 0; −2 1],  v3 = [1 0; 2 3],  v4 = [0 3; 1 −1],  v5 = [1 0; 0 1]

11. Show that the list is linearly dependent. You are welcome to use SageMath (you will first need to set up the appropriate system of linear equations).

12. Go through the same steps as in Example 2.2.9 to find the first vector in the list which is either the zero vector or a linear combination of the preceding vectors. You are welcome to use SageMath at the points in your calculation when you need to solve a system of simultaneous linear equations.

4.3 W214 2019, Linear Algebra Tutorial 3

Exercises

1.6 Subspaces.

1. Prove or disprove: The set

V := {p ∈ Poly2 : p(3) = 1}

is a subspace of Poly2.

2.2 Linear independence.

2. Consider the vector space V = Fun([0, 1]) of functions on the closed unit interval. Write down a linearly independent list containing 4 vectors in V.

2.3 Basis and Dimension.

3. Prove or disprove: there exists a basis {p0, p1, p2, p3} of Poly3 such that none of the polynomials p0, p1, p2, p3 has degree 2.

4. Let W ⊂ R³ be the plane orthogonal to the vector v = (1, 2, 3), as in Example 1.6.13 and Example 2.3.17. Show that {a, b} is a basis for W, where

a = (1, 0, −1/3),  b = (0, 1, −2/3).

5. For each of the following, show that V is a subspace of Poly2, find a basis for V, and compute Dim V.

(a) V = {p ∈ Poly2 : p(0) = 0, p(2) = 0}

(b) V = {p ∈ Poly2 : ∫₀¹ p(t) dt = 0}

6. Sift the list of vectors

v1 = (0, 0, 0), v2 = (1, 0, −1), v3 = (1, 2, 3), v4 = (3, 4, 5), v5 = (4, 8, 12), v6 = (1, 1, 0).

7. Complete the following ‘alternative’ proof of Corollary 2.3.32.

Lemma. Suppose V is a vector space of dimension n. Then any linearly independent set of n vectors in V is a basis for V.

Proof. Let B = {v1, . . . , vn} be a linearly independent set of vectors in V.
Suppose that B is not a basis for V.
Therefore, B does not span V, since ... (a)
Therefore, there exists v ∈ V such that ... (b)
Now, add v to the list B to obtain a new list B′ := ... (c)
The new list B′ is linearly independent because ... (d)
This is a contradiction because ... (e)
Hence, B must be a basis for V. □

8. Use the Bumping Off Proposition or the Invariance of Dimension Theorem to determine if B is a basis for V. (Here [a b; c d] denotes the 2×2 matrix with rows (a, b) and (c, d).)

(a) V = Poly2, B = {2 + x², 1 − x, 1 + x − 3x², x − x²}

(b) V = Mat2,2, B = { [1 2; −1 3], [0 1; 3 −1], [1 2; 3 4] }

(c) V = Trig2, B = {sin² x, cos² x, 1 − sin 2x, cos 2x + 3 sin 2x}

(d) V = Mat2,2, B = { [1 2; 1 1], [1 0; −2 1], [1 0; 2 3], [0 3; 1 −1], [1 0; 0 1] }

4.4 W214 2019, Linear Algebra Tutorial 4

Note: You are welcome to use the SageMath cell server to help you solve some of the problems below, or at least to check your calculations.

Exercises

2.4 Coordinate vectors.

1. Let B = {B1, B2, B3, B4} be the basis of Mat2,2 given by

B1 = [1 0; 0 1],  B2 = [1 0; 0 −1],  B3 = [1 1; 1 1],  B4 = [0 1; −1 0],

where [a b; c d] denotes the 2×2 matrix with rows (a, b) and (c, d). Determine [A]B, where

A = [1 2; 3 4].

2. (a) Find a basis B for the vector space

V := {p ∈ Poly2 : p(2) = 0}.

(b) Consider p(x) = x² + x − 6. Show that p ∈ V.

(c) Determine the coordinate vector of p with respect to your basis B, i.e. determine [p]B.


3. Consider the vector space W from Example 2.3.17,

W = {(w1, w2, w3) ∈ R3 : w1 + 2w2 + 3w3 = 0},

and the following bases for W:

B = {a, b},  C = {u, v},

where

a = (1, 0, −1/3),  b = (0, 1, −2/3),
u = (1, 2, −5/3),  v = (−4, 2, 0).

Consider the vector w = (−2, 4, −2) ∈ R³.

(a) Show that w ∈ W.

(b) Determine [w]B.

(c) Determine [w]C.

4. Let V be the vector space of solutions to the differential equation

y″ + y = 0. (4.4.1)

(a) Show that B = {cos x, sin x} is a basis for V.

(b) Let y ∈ V be defined as the unique solution to the differential equation in (4.4.1) satisfying

y(π/6) = 1,  y′(π/6) = 0.

(Note that we can indeed define y uniquely in this way due to Theorem 2.3.20.) Compute [y]B.

(c) Let z(x) = cos(x − π/3).

i. Show that z ∈ V by checking that it solves the differential equation (4.4.1).
ii. Determine [z]B.

2.5 Change of basis.

5. This is a continuation of Exercise 4.4.1. Consider the following two bases for Mat2,2:

B = { B1 = [1 0; 0 1],  B2 = [1 0; 0 −1],  B3 = [1 1; 1 1],  B4 = [0 1; −1 0] }
C = { C1 = [1 1; 0 0],  C2 = [1 −1; 0 0],  C3 = [0 0; 1 1],  C4 = [0 0; 1 −1] }

(a) Determine the change-of-basis matrices PC←B and PB←C.

(b) Determine [A]B and [A]C where

A = [1 2; 3 4].


(c) Check that [A]C = PC←B[A]B and that [A]B = PB←C[A]C.

6. Figure 4.4.1 displays a basis B = {b1, b2} for R², a background of integral linear combinations of b1 and b2, and a certain vector w ∈ R². Similarly, Figure 4.4.2 displays another basis C = {c1, c2} for R², a background of integral linear combinations of c1 and c2, and the same vector w ∈ R².

[Figure 4.4.1: The vector w against a background of integral linear combinations of the basis vectors from B.]

[Figure 4.4.2: The vector w against a background of integral linear combinations of the basis vectors from C.]

(a) Determine [w]B, directly from Figure 4.4.1.

(b) Determine [w]C, directly from Figure 4.4.2.

(c) The following figure displays the B basis against a background of integral linear combinations of the C basis:

[Figure: the basis vectors b1 and b2 against the grid of integral linear combinations of c1 and c2.]


Determine the change-of-basis matrix PC←B. (You may assume that all coefficients are either integers or half-integers.)

(d) Multiply the matrix you computed in (c) with the column vector you computed in (a). That is, compute the product PC←B[w]B. Is your answer the same as what you obtained in (b)?

7. Consider the following three bases for R³:

A = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
B = {(2, 1, 1), (1, 1, 1), (0, 2, 1)}
C = {(1, 2, 3), (0, 1, 0), (1, 0, 1)}

Compute PC←B, PB←A, PC←A and verify the equation

PC←B PB←A = PC←A.

4.5 W214 2019, Linear Algebra Tutorial 5

It is a long tutorial, to give you plenty of practice for the test. The solutions will be made available on Tuesday 23 April.

I have included some exercises from the textbook Poole, Linear Algebra: A Modern Introduction. This textbook is not required for the course, but this at least shows you where to look if you want to find even more exercises.

As always, you are welcome to use the SageMath cell server to help you solve some of the problems below, or at least to check your calculations.

Exercises

3.1 Linear Maps - Definition and Examples.

1. (Poole Exercise 6.4.14) Let T : Col2 → Col3 be a linear map satisfying

T([1; 0]) = [1; 2; −1]  and  T([0; 1]) = [3; 0; 4],

where [a; b] denotes a column vector. Find

T([5; 2])  and  T([a; b]).

2. (Poole Exercise 6.4.18) Let T : Mat2,2 → R be a linear map satisfying

T([1 0; 0 0]) = 1,  T([1 1; 0 0]) = 2,
T([1 1; 1 0]) = 3,  T([1 1; 1 1]) = 4.

Find

T([1 3; 4 2])  and  T([a b; c d]).

3. Let V be a vector space, and let a ≠ 0 be a fixed vector. Define the map T as follows:

T : V → V
v ↦ a + v

(a) Is T a linear map? (Yes or no)

(b) Prove your assertion from (a).

4. (Poole Exercise 6.4.20) Show that there is no linear map T : Col3 → Poly2

such that

T([2; 1; 0]) = 1 + x,  T([3; 0; 2]) = 2 − x + x²,  T([0; 6; −8]) = −2 + 2x².

5. Determine the action of the gradient linear map

∇ : Poly2[x, y] → Vect1(R²)

from Example 3.1.25 on the standard basis vectors

{q1, q2, q3, q4, q5, q6},   q1 = 1, q2 = x, q3 = y, q4 = x², q5 = xy, q6 = y²,

of Poly2[x, y]. Express your answers as linear combinations of the standard basis vectors

{V1, V2, V3, V4, V5, V6},   V1 = (1, 0), V2 = (x, 0), V3 = (y, 0), V4 = (0, 1), V5 = (0, x), V6 = (0, y),

of Vect1(R²).

of Vect1(R2).6. Let V be the vector space of solutions to the differential equation

y(n) + an−1(x)y(n−1) + · · ·+ a1(x)y′ + a0(x) = 0.

Consider the ’evaluate at x = 1’ map

T : V → Ry 7→ y(1)

Is T a linear map? Prove your assertion.

3.2 Composition of linear maps.

7. Let Rθ be the ‘rotation by θ’ map from Example 3.1.32,

Rθ : Col2 → Col2
[v1; v2] ↦ [cos θ  sin θ; −sin θ  cos θ] [v1; v2].

(a) Check algebraically that Rφ ◦ Rθ = Rφ+θ by computing the action of the linear maps on both sides of this equation on an arbitrary vector v ∈ Col2.


(b) Explain what this result says, in words.

8. Let M : Poly3 → Poly4 be the ‘multiplication by x’ map, M(p)(x) = xp(x). Let S : Poly4 → Poly4 be the map S(p)(x) = p(x − 1). Similarly, let T : Poly3 → Poly3 be the map T(p)(x) = p(x − 1). Compute S ◦ M and M ◦ T. Are they equal?
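For the exercise above, a symbolic check of the two composites (a sympy sketch, with p left as an unspecified function):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')

# S∘M: first multiply by x, then substitute x -> x - 1.
SM = (x * p(x)).subs(x, x - 1)   # (x - 1)*p(x - 1)
# M∘T: first substitute x -> x - 1, then multiply by x.
MT = x * p(x).subs(x, x - 1)     # x*p(x - 1)

print(SM)
print(MT)
print(sp.expand(SM - MT))        # nonzero, so the composites differ
```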

9. Consider the vector space V of solutions to the differential equation

y′′ + y = 0.

In Example 3.1.27 we defined a linear map (here we have chosen x0 = 0)

S : Col2 → V

[a, b]^T ↦ the unique y ∈ V such that y(0) = a, y′(0) = b

Similarly, as in Exercise 4.5.6, there is an ‘evaluation at x = π/6’ linear map

T : V → R
y ↦ y(π/6)

Compute T ◦ S.
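To check your computation of T ◦ S: sympy's dsolve can produce the unique solution with the given initial data, after which evaluation at π/6 is a substitution. (This sketch assumes sympy's `ics` interface; on the SageMath cell server, desolve plays the same role.)

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
y = sp.Function('y')

# S sends (a, b) to the solution of y'' + y = 0 with y(0) = a, y'(0) = b.
sol = sp.dsolve(y(x).diff(x, 2) + y(x), y(x),
                ics={y(0): a, y(x).diff(x).subs(x, 0): b})
print(sol.rhs)   # a*cos(x) + b*sin(x)

# T then evaluates at x = pi/6.
print(sp.simplify(sol.rhs.subs(x, sp.pi/6)))
```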

3.3 Isomorphisms of vector spaces.

10. Are the following vector spaces

V = {v ∈ R^4 : (1, −1, 2, 1) · v = 0}

and

Poly1[x, y]

isomorphic? If they are, construct an explicit isomorphism between them. If not, prove that they are not isomorphic.

11. (Poole Exercise 6.5 22) Determine whether V and W are isomorphic. If they are, give an explicit isomorphism T : V → W.

Sym3 = {A ∈ Mat3,3 : A^T = A},   U3 = {B ∈ Mat3,3 : B is upper triangular}

3.4 Linear Maps and Matrices (up to Theorem 3.4.3).

12. (Poole Exercise 6.6 2)

(a) Find the matrix [T]C←B of the linear map T : Poly1 → Poly1 defined by T(a + bx) = b − ax, with respect to the bases B = {1 + x, 1 − x} and C = {1, x} for Poly1.

(b) Verify Theorem 3.4.3 for the vector v = 4 + 2x by computing [T(v)]C and [T]C←B[v]B independently, and checking that they are equal.
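A machine check of Exercise 12 (a sympy sketch; the helper `coords` and all names are ours): build the matrix column by column, then verify Theorem 3.4.3 on v = 4 + 2x.

```python
import sympy as sp

x = sp.symbols('x')

def T(p):
    # T(a + bx) = b - ax.
    a0, a1 = p.coeff(x, 0), p.coeff(x, 1)
    return a1 - a0*x

B = [1 + x, 1 - x]        # basis B of Poly1
C = [sp.Integer(1), x]    # basis C of Poly1

def coords(p, basis):
    # Coordinates of p (degree <= 1) with respect to a 2-element basis.
    c0, c1 = sp.symbols('c0 c1')
    expr = sp.expand(p - c0*basis[0] - c1*basis[1])
    sol = sp.solve([expr.coeff(x, 0), expr.coeff(x, 1)], [c0, c1])
    return sp.Matrix([sol[c0], sol[c1]])

# Columns of [T]_{C<-B}: the C-coordinates of T applied to each B-basis vector.
M = sp.Matrix.hstack(*[coords(T(q), C) for q in B])
print(M)

# Part (b): Theorem 3.4.3 for v = 4 + 2x.
v = 4 + 2*x
print(coords(T(v), C).T, (M * coords(v, B)).T)
```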


13. Let S : Trig2 → Trig2 be the ‘shift by π/6’ map, S(f)(x) = f(x − π/6). Compute [S]B←C, where B and C are the following two bases for Trig2:

B = {1, cos x, sin x, cos 2x, sin 2x}
C = {1, cos x, sin x, cos^2 x, sin^2 x}

14. Verify Theorem 3.4.3 for the linear map S : Mat2,2 → Mat2,2 given by S(M) = M^T, for the vector A ∈ Mat2,2 given by

A = [0 1; 0 0]

and using the following bases of Mat2,2:

B = C = { M1 = [1 1; 2 3], M2 = [1 0; 1 1], M3 = [1 1; 1 1], M4 = [0 1; 1 1] }.

4.6 W214 2019, Linear Algebra Tutorial 6

As always, you are welcome to use the SageMath cell server to help you solve some of the problems below, or at least to check your calculations.

Exercises

Notation quiz.

1. Write out the English terminology for each of the following symbolic expressions. The first two have been done for you.

(a) [T]C←B. Answer: The matrix of T with respect to the bases B and C.

(b) T : V → W. Answer: A linear map T from a vector space V to a vector space W.

(c) [v]B.

(d) [T(v)]C.

(e) [·]B.

(f) S ◦ T.

(g) vecV,B(c).

2. Write out the symbolic expressions for each of the following English phrases.

The first one has been done for you.

(a) The coordinate vector of v with respect to the basis B. Answer: [v]B.

(b) The change-of-basis matrix from the basis B to the basis C.

(c) The composite of the linear map T after the linear map S.


(d) The vector in V whose coordinate vector with respect to the basis B is c.

(e) The map from V to the space of n-dimensional column vectors which computes coordinate vectors with respect to a basis B.

3.4 Linear Maps and Matrices - From Lemma 3.4.5 onwards.

3. Verify Theorem 3.4.7 in the case of the linear maps

S : Mat2,3 → Col3

[A11 A12 A13]
[A21 A22 A23]  ↦  [A11 + A21, A12 + A22, A13 + A23]^T

T : Col3 → Poly2[x, y]

[a, b, c]^T ↦ a + b(x − y − 1)^2 + c(x + y + 1)^2

Use the standard basis B for Mat2,3 (see Example 2.3.13), the basis

C = { [1, 0, 1]^T, [0, 1, 0]^T, [0, 1, 1]^T }

for Col3, and the standard basis

D = {1, x, y, x^2, xy, y^2}

for Poly2[x, y]. That is, compute

[T ◦ S]D←B   and   [T]D←C [S]C←B

and check that they are equal.

3.5 Kernel and Image of a Linear Map.

4. Determine the kernel, image, nullity (the dimension of the kernel) and rank (the dimension of the image) of the following linear maps.

(a) The identity map idV : V → V on a finite-dimensional vector space V.

(b) The zero map

Z : V → V

v ↦ 0

on a finite-dimensional vector space V.


(c) The map

T : Poly3 → Col3

p ↦ [p(1), p(2), p(3)]^T

(d) The map

S : Trig2 → Col2

f ↦ [ ∫0^π f(x) cos x dx,  ∫0^π f(x) sin x dx ]^T

(e) The ‘curl’ map

C : Vect2(R^2) → Poly1[x, y]

(P, Q) ↦ ∂Q/∂x − ∂P/∂y

Bonus question: What kind of vector fields are elements of Ker(C)?

(f) (Poole 6.5.12) The map

T : Mat2,2 → Mat2,2

A ↦ AB − BA

where B = [1 −1; −1 1].
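Returning to part (c) above: you can pass to the matrix of T with respect to the standard bases and let sympy compute the rank and nullspace (a sketch; the basis ordering is our assumption).

```python
import sympy as sp

x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2, x**3]   # standard basis of Poly3

# Row i of M evaluates each basis polynomial at i, for i = 1, 2, 3.
M = sp.Matrix([[q.subs(x, i) for q in basis] for i in (1, 2, 3)])

print(M.rank())          # 3: the image is all of Col3
null = M.nullspace()
print(len(null))         # nullity 1
print(null[0].T)         # [-6, 11, -6, 1]: kernel spanned by (x-1)(x-2)(x-3)
```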

4.7 W214 2019, Linear Algebra Tutorial 7

Since this is the last week of the semester, W214 Tutorial 7 will be a combination of Linear Algebra problems (this worksheet) and Advanced Calculus problems, which you can find under the Advanced Calculus section on SunLearn.

Exercises

3.5 Kernel and Image of a Linear Map.

1. Find all values of t such that the map

T : Poly2 → Poly1

(a + bx + cx^2) ↦ (2a + 3b + 4c) + (−a + tb)x

has rank 1.

2. For each of the following assertions, state whether it is true or false. If it is true, prove it. If it is false, prove that it is false.

(a) There exists a linear map T : R^5 → R^2 such that

Ker(T) = {(x1, x2, x3, x4, x5) ∈ R^5 : x1 = 3x2 and x3 = x4 = x5}.

(b) There exists a linear map F : Trig3 → Trig3 such that Rank(F) = Nullity(F).

3.6 Injective and Surjective Maps.

3. For each of the following assertions, state whether it is true or false. If it is true, prove it. If it is false, prove that it is false.

(a) (Poole Ch 6 Review Questions 1.h) If T : Mat3,3 → Poly4 is a linear map and Nullity(T) = 4, then T is surjective.

(b) Every surjective linear map T : Poly2[x, y, z] → Mat3,3 is also injective.

(c) If T : V → W and S : W → V are linear maps between vector spaces and S ◦ T = idV , then T is injective.

(d) If T : V → W and S : W → V are linear maps between vector spaces and S ◦ T = idV , then T is surjective.

(e) If T : V → W and S : W → V are linear maps between vector spaces and S ◦ T = idV , then S is injective.

(f) If T : V → W and S : W → V are linear maps between vector spaces and S ◦ T = idV , then S is surjective.

(g) (2018 Exam 1) Let S : V → V and T : V → V be linear maps from a finite-dimensional vector space V to itself. Then Nullity(T ◦ S) ≥ Nullity(S).
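A concrete example is a good test of your intuition for parts (c)-(f): take T to be the inclusion of Col1 into Col2 and S the projection back. This is a hypothetical test case of ours, not part of the exercise.

```python
import sympy as sp

T = sp.Matrix([[1], [0]])   # T : Col1 -> Col2, v |-> (v, 0)
S = sp.Matrix([[1, 0]])     # S : Col2 -> Col1, (a, b) |-> a

assert S * T == sp.eye(1)   # S∘T = id, so the hypotheses of (c)-(f) hold

# T is injective but not surjective; S is surjective but not injective.
print(T.nullspace())        # []: trivial kernel
print(S.nullspace())        # one basis vector: nontrivial kernel
print(T * S)                # not the identity on Col2
```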


Appendix A

Videos of Lectures

Figure A.0.1: Lecture 17

Figure A.0.2: Lecture 18



Appendix B

Reminder about matrices

Let us recall a few things about matrices, and set up our notation.

An n × m matrix A is just a rectangular array of numbers, with n rows and

m columns:

A = [ A11  A12  · · ·  A1m ]
    [ A21  A22  · · ·  A2m ]
    [  ⋮    ⋮    ⋱     ⋮   ]
    [ An1  An2  · · ·  Anm ]

I will always write matrices in ‘sans serif’ font, e.g. A. It is difficult to ‘change fonts’ in handwritten text, but I encourage you to at least reserve the letters A, B, C, etc. for matrices, and S, T, etc. for linear maps!

Two n × m matrices A and B can be added, to get a new n × m matrix A + B:

(A + B)ij := Aij + Bij

There is the zero n × m matrix:

0 = [ 0  0  · · ·  0 ]
    [ 0  0  · · ·  0 ]
    [ ⋮  ⋮   ⋱    ⋮ ]
    [ 0  0  · · ·  0 ]

You can also multiply an n × m matrix A by a scalar k, to get a new n × m matrix kA:

(kA)ij := kAij

Lemma B.0.1

1. Equipped with these operations, the set Matn,m of all n × m matrices is a vector space.

2. The dimension of Matn,m is nm, with basis given by the matrices

Eij ,  i = 1, . . . , n,  j = 1, . . . , m,

which have a 1 in the ith row and jth column and zeroes everywhere else.

Proof. Left as an exercise. □



Example B.0.2  Mat2,2 has basis

E11 = [1 0; 0 0],  E12 = [0 1; 0 0],  E21 = [0 0; 1 0],  E22 = [0 0; 0 1].

Usually A is a matrix, and Aij is the element of the matrix at position (i, j). But now Eij is a matrix in its own right! Its element at position (k, l) will be written as (Eij)kl. I hope you don’t find this too confusing. In fact, we can write down an elegant formula for the elements of Eij using the Kronecker delta symbol:

(Eij)kl = δikδjl (B.0.1)

Check that (B.0.1) is indeed the correct formula for the matrix elements of Eij.
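Formula (B.0.1) can also be checked exhaustively for a small case; here is a Python sketch for 2 × 3 matrices (1-indexed, to match the text).

```python
import numpy as np

n, m = 2, 3

def E(i, j):
    # Standard basis matrix with a 1 in row i, column j (1-indexed).
    M = np.zeros((n, m))
    M[i - 1, j - 1] = 1
    return M

def delta(a, b):
    return 1 if a == b else 0

# Check (B.0.1): (Eij)_kl = delta_ik * delta_jl, over all index choices.
ok = all(E(i, j)[k - 1, l - 1] == delta(i, k) * delta(j, l)
         for i in range(1, n + 1) for j in range(1, m + 1)
         for k in range(1, n + 1) for l in range(1, m + 1))
print(ok)   # True
```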

Example B.0.3  We will write Coln for the vector space Matn,1 of n-dimensional column vectors, and we will write the standard basis vectors Ei1 of Coln more simply as ei:

e1 := [1, 0, . . . , 0]^T,  e2 := [0, 1, . . . , 0]^T,  . . . ,  en := [0, 0, . . . , 1]^T.

Vectors in Coln will be written in bold sans-serif font, e.g. v ∈ Coln. □

Equipped with these operations, the set Matn,m of all n × m matrices is a vector space with dimension nm. We write Coln for the vector space Matn,1 of n-dimensional column vectors.

The most important operation is matrix multiplication. An n × k matrix A can be multiplied from the right with a k × m matrix B to get an n × m matrix AB, by defining the entries of AB to be

(AB)ij := Ai1 B1j + Ai2 B2j + · · · + Aik Bkj.
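The entry formula translates directly into code. A Python sketch (for illustration only; in practice one would just use a library routine):

```python
import numpy as np

def matmul(A, B):
    # (AB)_ij = sum over q of A_iq * B_qj, exactly as in the definition above.
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must agree"
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = sum(A[i, q] * B[q, j] for q in range(k))
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(matmul(A, B))          # entries 19, 22, 43, 50
assert np.allclose(matmul(A, B), A @ B)
```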

Proposition B.0.4  The above operations on matrices satisfy the following rules whenever the sums and products involved are defined:

1. (A + B)C = AC + BC

2. A(B + C) = AB + AC

3. (kA)B = A(kB) = k(AB)

4. (AB)C = A(BC)

Proof. The proofs of (1)-(3) are all routine checks which you have hopefully done before. Let us prove (4), to practice using Σ-notation! Suppose A, B and C have sizes n × k, k × r and r × m respectively, so that the matrix products


make sense. Then:

((AB)C)ij = Σ_{p=1}^{r} (AB)ip Cpj
          = Σ_{p=1}^{r} ( Σ_{q=1}^{k} Aiq Bqp ) Cpj
          = Σ_{p,q} Aiq Bqp Cpj
          = Σ_{q=1}^{k} Aiq ( Σ_{p=1}^{r} Bqp Cpj )
          = Σ_{q=1}^{k} Aiq (BC)qj
          = (A(BC))ij.

I hope the Σ-notation does not confuse you in the above proof! Let me write out the exact same proof without Σ-notation, in the simple case where A, B and C are all 2 × 2 matrices, and we are trying to calculate, say, the entry at position 11.

((AB)C)11 = (AB)11 C11 + (AB)12 C21
          = (A11 B11 + A12 B21) C11 + (A11 B12 + A12 B22) C21
          = A11 B11 C11 + A12 B21 C11 + A11 B12 C21 + A12 B22 C21
          = A11 (B11 C11 + B12 C21) + A12 (B21 C11 + B22 C21)
          = A11 (BC)11 + A12 (BC)21
          = (A(BC))11.

Do you understand the Σ-notation proof now? The crucial step (going from the second to the fourth lines) is called exchanging the order of summation.
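A proof is a proof, but a quick numerical spot-check of associativity is still reassuring (a Python/numpy sketch with random matrices of compatible sizes):

```python
import numpy as np

# Randomized spot-check of (AB)C = A(BC).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

assert np.allclose((A @ B) @ C, A @ (B @ C))
print("associativity holds on this example")
```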

The transpose of an n × m matrix A is the m × n matrix A^T whose entries are given by

(A^T)ij := Aji.

Prove that if A ∈ Mat2,2 satisfies

AB = BA

for all B ∈ Mat2,2, then A is of the form

A = [a 0; 0 a].
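This exercise, too, can be checked by computer: impose AB = BA for B running over the basis matrices Eij and solve. A sympy sketch (the names are ours):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

# It suffices to impose AB = BA for B ranging over the basis matrices E_ij.
E = [sp.Matrix(2, 2, lambda k, l: int(k == i and l == j))
     for i in range(2) for j in range(2)]

eqs = []
for Bm in E:
    eqs.extend(list(A*Bm - Bm*A))   # every entry of AB - BA must vanish

sol = sp.solve(eqs, [b, c, d], dict=True)[0]
print(sol)   # b = 0, c = 0, d = a: so A is a scalar multiple of the identity
```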


Bibliography
