MATH 532: Linear Algebra
Chapter 4: Vector Spaces
Greg Fasshauer
Department of Applied Mathematics
Illinois Institute of Technology
Spring 2015
[email protected] MATH 532 1
Outline
1 Spaces and Subspaces
2 Four Fundamental Subspaces
3 Linear Independence
4 Bases and Dimension
5 More About Rank
6 Classical Least Squares
7 Kriging as best linear unbiased predictor
Spaces and Subspaces
While the discussion of vector spaces can be rather dry and abstract, they are an essential tool for describing the world we work in and for understanding many practically relevant consequences.
After all, linear algebra is pretty much the workhorse of modern appliedmathematics.
Moreover, many concepts we discuss now for traditional “vectors” apply also to vector spaces of functions, which form the foundation of functional analysis.
Vector Space
Definition
A set V of elements (vectors) is called a vector space (or linear space) over the scalar field F if
(A1) x + y ∈ V for any x, y ∈ V (closed under addition),
(A2) (x + y) + z = x + (y + z) for all x, y, z ∈ V,
(A3) x + y = y + x for all x, y ∈ V,
(A4) there exists a zero vector 0 ∈ V such that x + 0 = x for every x ∈ V,
(A5) for every x ∈ V there is a negative (−x) ∈ V such that x + (−x) = 0,
(M1) αx ∈ V for every α ∈ F and x ∈ V (closed under scalar multiplication),
(M2) (αβ)x = α(βx) for all α, β ∈ F, x ∈ V,
(M3) α(x + y) = αx + αy for all α ∈ F, x, y ∈ V,
(M4) (α + β)x = αx + βx for all α, β ∈ F, x ∈ V,
(M5) 1x = x for all x ∈ V.
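These axioms are abstract, but for V = R^m with F = R they can at least be exercised numerically. A minimal sketch (a sanity check on random sample vectors, not a proof; numpy and the sample sizes are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 4))  # three vectors in R^4
alpha, beta = 2.5, -1.5
zero = np.zeros(4)

# (A2) associativity and (A3) commutativity of addition
assert np.allclose((x + y) + z, x + (y + z))
assert np.allclose(x + y, y + x)

# (A4) zero vector and (A5) additive inverse
assert np.allclose(x + zero, x)
assert np.allclose(x + (-x), zero)

# (M2)-(M5) scalar-multiplication axioms
assert np.allclose((alpha * beta) * x, alpha * (beta * x))
assert np.allclose(alpha * (x + y), alpha * x + alpha * y)
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)
assert np.allclose(1 * x, x)
print("all arithmetic axioms hold on this sample")
```

The closure axioms (A1) and (M1) hold trivially here because numpy operations on arrays of the same shape return arrays of that shape.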
Examples of vector spaces
V = R^m and F = R (traditional real vectors)
V = C^m and F = C (traditional complex vectors)
V = R^{m×n} and F = R (real matrices)
V = C^{m×n} and F = C (complex matrices)
But also:
V = polynomials of a certain degree with real coefficients, F = R
V = continuous functions on an interval [a, b], F = R
Subspaces
Definition
Let S be a nonempty subset of V. If S is a vector space, then S is called a subspace of V.
Q: What is the difference between a subset and a subspace?
A: The structure provided by the axioms (A1)–(A5), (M1)–(M5).
Theorem
The subset S ⊆ V is a subspace of V if and only if

αx + βy ∈ S for all x, y ∈ S, α, β ∈ F.  (1)
Remark
Z = {0} is called the trivial subspace.
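Criterion (1) also gives a cheap numerical test for a candidate subspace: sample x, y from the set, form αx + βy, and check membership. A sketch, assuming numpy; the plane S and the helper in_S are illustrative choices, not from the notes:

```python
import numpy as np

# Candidate subspace of R^3: the plane S = {v : v[0] + v[1] + v[2] = 0}.
def in_S(v, tol=1e-9):
    return abs(v.sum()) < tol

rng = np.random.default_rng(1)
for _ in range(100):
    # Build x, y in S by subtracting the mean (forces the components to sum to 0).
    x, y = rng.standard_normal((2, 3))
    x -= x.mean()
    y -= y.mean()
    alpha, beta = rng.standard_normal(2)
    # Criterion (1): every linear combination stays in S.
    assert in_S(alpha * x + beta * y)
print("S passes the subspace criterion on all samples")
```

A set that fails this test on even one sample, such as a plane not through the origin, cannot be a subspace.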
Proof.
“=⇒”: Clear, since we actually have
(1) ⇐⇒ (A1) and (M1)
“⇐=”: Only (A1), (A4), (A5) and (M1) need to be checked (why? The remaining axioms are identities that S inherits from V).
In fact, we see that (A1) and (M1) imply (A4) and (A5):
If x ∈ S, then — using (M1) — −1x = −x ∈ S, i.e., (A5) holds.
Using (A1), x + (−x) = 0 ∈ S, so that (A4) holds.
Definition
Let S = {v1, . . . , vr} ⊆ V. The span of S is

span(S) = { ∑_{i=1}^{r} αi vi : αi ∈ F }.
Remark
span(S) contains all possible linear combinations of vectors in S.
One can easily show that span(S) is a subspace of V.
Example (Geometric interpretation)
1 If S = {v1} ⊆ R^3, then span(S) is the line through the origin with direction v1.
2 If S = {v1, v2 : v1 ≠ αv2, α ≠ 0} ⊆ R^3, then span(S) is the plane through the origin “spanned by” v1 and v2.
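Membership in span(S) can be tested numerically by solving a least-squares problem: b ∈ span{v1, . . . , vr} exactly when the residual of the best approximation vanishes. A sketch, assuming numpy; in_span and the tolerance are illustrative:

```python
import numpy as np

def in_span(vectors, b, tol=1e-10):
    """Check whether b is a linear combination of the given vectors.

    Stack the vi as columns of V; then b is in their span iff the
    least-squares residual of V x ≈ b is (numerically) zero.
    """
    V = np.column_stack(vectors)
    x, *_ = np.linalg.lstsq(V, b, rcond=None)
    return np.linalg.norm(V @ x - b) < tol

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
print(in_span([v1, v2], 3 * v1 - 2 * v2))            # in the plane -> True
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))  # off the plane -> False
```

Geometrically, the two vectors above span the z = 0 plane through the origin in R^3, matching case 2 of the example.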
Definition
Let S = {v1, . . . , vr} ⊆ V. If span(S) = V, then S is called a spanning set for V.
Remark
A spanning set is sometimes referred to as a (finite) frame.
A spanning set is not the same as a basis since the spanning set may include redundancies.
Example
{ (1, 0, 0)ᵀ, (0, 1, 0)ᵀ, (0, 0, 1)ᵀ } is a spanning set for R^3.
{ (1, 0, 0)ᵀ, (0, 1, 0)ᵀ, (0, 0, 1)ᵀ, (2, 0, 0)ᵀ, (0, 2, 0)ᵀ, (0, 0, 2)ᵀ } is also a spanning set for R^3.
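Whether a finite set spans R^m reduces to a rank computation: stack the vectors as columns and check that the rank equals m. A sketch, assuming numpy; spans_R_m is an illustrative helper name:

```python
import numpy as np

def spans_R_m(vectors, m):
    """A set spans R^m iff the matrix with those vectors as columns has rank m."""
    return np.linalg.matrix_rank(np.column_stack(vectors)) == m

e1, e2, e3 = np.eye(3)
print(spans_R_m([e1, e2, e3], 3))                          # True
print(spans_R_m([e1, e2, e3, 2*e1, 2*e2, 2*e3], 3))        # True (redundancies allowed)
print(spans_R_m([e1, e2], 3))                              # False: only a plane
```

Note that the second call succeeds even though the set is redundant, illustrating why a spanning set need not be a basis.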
Connection to linear systems
Theorem
Let S = {a1, a2, . . . , an} be the set of columns of an m × n matrix A. Then span(S) = R^m if and only if for every b ∈ R^m there exists an x ∈ R^n such that Ax = b (i.e., if and only if Ax = b is consistent for every b ∈ R^m).

Proof.
By definition, S is a spanning set for R^m if and only if for every b ∈ R^m there exist α1, . . . , αn ∈ R such that

b = α1 a1 + . . . + αn an = Ax,

where A = [a1 a2 · · · an] is m × n and x = (α1, . . . , αn)ᵀ.
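In matrix terms, the theorem says that consistency of Ax = b is exactly the statement b ∈ R(A), which can be checked by comparing the ranks of A and the augmented matrix [A | b]. A sketch, assuming numpy; the helper name consistent is my own:

```python
import numpy as np

def consistent(A, b):
    """Ax = b has a solution iff rank([A | b]) == rank(A),
    i.e. iff b lies in the column space of A."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])   # rank 2, so its columns do NOT span R^3

print(consistent(A, A @ np.ones(3)))          # b built from the columns -> True
print(consistent(A, np.array([1., 0., 0.])))  # b outside the column space -> False
```

Since this A has rank 2 < 3, the theorem predicts (correctly) that some right-hand sides make the system inconsistent.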
Remark
The sum

X + Y = {x + y : x ∈ X, y ∈ Y}

is a subspace of V provided X and Y are subspaces.
If SX and SY are spanning sets for X and Y, respectively, then SX ∪ SY is a spanning set for X + Y.
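The last statement can be illustrated numerically: stack the union of the two spanning sets as columns, and the rank of that matrix is the dimension of X + Y. A small sketch with numpy; the particular X and Y are my own example:

```python
import numpy as np

# X = span{(1, 1, 0)} and Y = span{(0, 1, 1)}, two lines in R^3.
S_X = [np.array([1.0, 1.0, 0.0])]
S_Y = [np.array([0.0, 1.0, 1.0])]

# The union S_X ∪ S_Y spans X + Y, here a plane through the origin.
M = np.column_stack(S_X + S_Y)
dim_sum = np.linalg.matrix_rank(M)
print(dim_sum)  # 2: X + Y is 2-dimensional
```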
Four Fundamental Subspaces
Recall that a linear function f : R^n → R^m satisfies

f(αx + βy) = αf(x) + βf(y) for all α, β ∈ R, x, y ∈ R^n.

Example
Let A be a real m × n matrix and

f(x) = Ax, x ∈ R^n.

The function f is linear since A(αx + βy) = αAx + βAy. Moreover, the range of f,

R(f) = {Ax : x ∈ R^n} ⊆ R^m,

is a subspace of R^m since for all α, β ∈ R and x, y ∈ R^n

α(Ax) + β(Ay) = A(αx + βy) ∈ R(f),

where Ax, Ay ∈ R(f).
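The linearity identity f(αx + βy) = αf(x) + βf(y) for f(x) = Ax can be spot-checked on random data; numpy and the matrix sizes below are my own choices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))   # a real 3 x 4 matrix
f = lambda x: A @ x               # the linear map f(x) = Ax

x, y = rng.standard_normal((2, 4))
alpha, beta = 0.7, -2.0

# The defining property of linearity, up to floating-point rounding:
assert np.allclose(f(alpha * x + beta * y), alpha * f(x) + beta * f(y))
print("f(x) = Ax is linear on this sample")
```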
Remark
For the situation in this example we can also use the terminology range of A (or image of A), i.e.,

R(A) = {Ax : x ∈ R^n} ⊆ R^m.

Similarly,

R(Aᵀ) = {Aᵀy : y ∈ R^m} ⊆ R^n

is called the range of Aᵀ.
Column space and row space
Since

Ax = α1 a1 + . . . + αn an,

we have R(A) = span{a1, . . . , an}, i.e.,

R(A) is the column space of A.

Similarly,

R(Aᵀ) is the row space of A.
Example
Consider

A = [ 1 2 3 ]
    [ 4 5 6 ]
    [ 7 8 9 ]

By definition,
the columns of A span R(A), i.e., they form a spanning set of R(A),
the rows of A span R(Aᵀ), i.e., they form a spanning set of R(Aᵀ).
However, since

(A)∗3 = 2(A)∗2 − (A)∗1 and (A)3∗ = 2(A)2∗ − (A)1∗,

we also have

R(A) = span{(A)∗1, (A)∗2},
R(Aᵀ) = span{(A)1∗, (A)2∗}.
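The column and row relations in this example can be verified directly; a sketch with numpy, where the rank computation confirms that two columns (or two rows) already span:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# Third column/row are linear combinations of the first two:
assert np.allclose(A[:, 2], 2 * A[:, 1] - A[:, 0])   # (A)*3 = 2(A)*2 - (A)*1
assert np.allclose(A[2, :], 2 * A[1, :] - A[0, :])   # (A)3* = 2(A)2* - (A)1*

# Hence two columns (rows) suffice to span the column (row) space:
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(A[:, :2]) == 2
print("R(A) and R(A^T) are each spanned by two vectors")
```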
[email protected] MATH 532 17
![Page 46: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/46.jpg)
Four Fundamental Subspaces
ExampleConsider
A =
1 2 34 5 67 8 9
By definition
the columns of A span R(A), i.e., they form a spanning set ofR(A),the rows of A span R(AT ), i.e.,
they form a spanning set of R(AT ),However, since
(A)∗3 = 2(A)∗2 − (A)∗1 and (A)3∗ = 2(A)2∗ − (A)1∗
we also haveR(A) = span{(A)∗1, (A)∗2}R(AT ) = span{(A)1∗, (A)2∗}
[email protected] MATH 532 17
![Page 47: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/47.jpg)
Four Fundamental Subspaces
ExampleConsider
A =
1 2 34 5 67 8 9
By definition
the columns of A span R(A), i.e., they form a spanning set ofR(A),the rows of A span R(AT ), i.e., they form a spanning set of R(AT ),
However, since
(A)∗3 = 2(A)∗2 − (A)∗1 and (A)3∗ = 2(A)2∗ − (A)1∗
we also haveR(A) = span{(A)∗1, (A)∗2}R(AT ) = span{(A)1∗, (A)2∗}
[email protected] MATH 532 17
![Page 48: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/48.jpg)
Four Fundamental Subspaces
ExampleConsider
A =
1 2 34 5 67 8 9
By definition
the columns of A span R(A), i.e., they form a spanning set ofR(A),the rows of A span R(AT ), i.e., they form a spanning set of R(AT ),
However, since
(A)∗3 = 2(A)∗2 − (A)∗1 and (A)3∗ = 2(A)2∗ − (A)1∗
we also haveR(A) =
span{(A)∗1, (A)∗2}R(AT ) = span{(A)1∗, (A)2∗}
[email protected] MATH 532 17
![Page 49: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/49.jpg)
Four Fundamental Subspaces
ExampleConsider
A =
1 2 34 5 67 8 9
By definition
the columns of A span R(A), i.e., they form a spanning set ofR(A),the rows of A span R(AT ), i.e., they form a spanning set of R(AT ),
However, since
(A)∗3 = 2(A)∗2 − (A)∗1 and (A)3∗ = 2(A)2∗ − (A)1∗
we also haveR(A) = span{(A)∗1, (A)∗2}R(AT ) =
span{(A)1∗, (A)2∗}
[email protected] MATH 532 17
![Page 50: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/50.jpg)
Four Fundamental Subspaces
ExampleConsider
A =
1 2 34 5 67 8 9
By definition
the columns of A span R(A), i.e., they form a spanning set ofR(A),the rows of A span R(AT ), i.e., they form a spanning set of R(AT ),
However, since
(A)∗3 = 2(A)∗2 − (A)∗1 and (A)3∗ = 2(A)2∗ − (A)1∗
we also haveR(A) = span{(A)∗1, (A)∗2}R(AT ) = span{(A)1∗, (A)2∗}
[email protected] MATH 532 17
![Page 51: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/51.jpg)
In general, how do we find such minimal spanning sets as in the previous example?

An important tool is the following

Lemma
Let A, B be m × n matrices. Then
(1) R(A^T) = R(B^T) ⇐⇒ A is row equivalent to B (⇐⇒ E_A = E_B).
(2) R(A) = R(B) ⇐⇒ A is column equivalent to B (⇐⇒ E_{A^T} = E_{B^T}).
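As an illustrative aside (mine, not from the notes), part (1) can be tested computationally: row-equivalent matrices share a reduced row echelon form, which sympy's rref computes.

```python
import sympy as sp

# B arises from A by a row operation (row 2 replaced by row2 - 4*row1),
# so A and B are row equivalent, share a row space, and E_A = E_B.
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])
B = sp.Matrix([[1, 2, 3],
               [0, -3, -6],
               [7, 8, 9]])

E_A, _ = A.rref()
E_B, _ = B.rref()
assert E_A == E_B   # hence R(A^T) = R(B^T) by part (1) of the lemma
```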
[email protected] MATH 532 18
![Page 52: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/52.jpg)
Four Fundamental Subspaces
Proof
1 “⇐=”: Assume A row∼ B, i.e., there exists a nonsingular matrix Psuch that
PA = B ⇐⇒ AT PT = BT .
Now a ∈ R(AT )⇐⇒ a = AT y for some y .We rewrite this as
[email protected] MATH 532 19
![Page 53: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/53.jpg)
Four Fundamental Subspaces
Proof
1 “⇐=”: Assume A row∼ B, i.e., there exists a nonsingular matrix Psuch that
PA = B ⇐⇒ AT PT = BT .
Now a ∈ R(AT )⇐⇒ a = AT y for some y .
We rewrite this as
[email protected] MATH 532 19
![Page 54: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/54.jpg)
Four Fundamental Subspaces
Proof
1 “⇐=”: Assume A row∼ B, i.e., there exists a nonsingular matrix Psuch that
PA = B ⇐⇒ AT PT = BT .
Now a ∈ R(AT )⇐⇒ a = AT y for some y .We rewrite this as
a = AT PT P−T y
[email protected] MATH 532 19
![Page 55: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/55.jpg)
Four Fundamental Subspaces
Proof
1 “⇐=”: Assume A row∼ B, i.e., there exists a nonsingular matrix Psuch that
PA = B ⇐⇒ AT PT = BT .
Now a ∈ R(AT )⇐⇒ a = AT y for some y .We rewrite this as
a = AT PT︸ ︷︷ ︸=BT
P−T y
[email protected] MATH 532 19
![Page 56: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/56.jpg)
Four Fundamental Subspaces
Proof
1 “⇐=”: Assume A row∼ B, i.e., there exists a nonsingular matrix Psuch that
PA = B ⇐⇒ AT PT = BT .
Now a ∈ R(AT )⇐⇒ a = AT y for some y .We rewrite this as
a = AT PT︸ ︷︷ ︸=BT
P−T y
⇐⇒ a = BT x for x = P−T y
[email protected] MATH 532 19
![Page 57: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/57.jpg)
Proof
(1) “⇐=”: Assume A is row equivalent to B, i.e., there exists a nonsingular matrix P such that

PA = B ⇐⇒ A^T P^T = B^T.

Now a ∈ R(A^T) ⇐⇒ a = A^T y for some y. We rewrite this as

a = A^T P^T P^{−T} y = B^T (P^{−T} y),

i.e., a = B^T x for x = P^{−T} y ⇐⇒ a ∈ R(B^T).
[email protected] MATH 532 19
![Page 58: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/58.jpg)
(cont.)
“=⇒”: Assume R(A^T) = R(B^T), i.e.,

span{(A)_{1*}, ..., (A)_{m*}} = span{(B)_{1*}, ..., (B)_{m*}},

i.e., the rows of A are linear combinations of the rows of B and vice versa. Now apply row operations to A (all collected in a nonsingular P) to obtain

PA = B, i.e., A is row equivalent to B.

(2) Apply (1) with A^T and B^T in place of A and B. □
[email protected] MATH 532 20
![Page 59: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/59.jpg)
Four Fundamental Subspaces
(cont.)
“=⇒”: Assume R(AT ) = R(BT ), i.e.,
span{(A)1∗, . . . , (A)m∗} = span{(B)1∗, . . . , (B)m∗},
i.e., the rows of A are linear combinations of rows of B and viceversa.
Now apply row operations to A (all collected in P) to obtain
PA = B, i.e., A row∼ B.
2 Let A = AT and B = BT in (1). �
[email protected] MATH 532 20
![Page 60: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/60.jpg)
Four Fundamental Subspaces
(cont.)
“=⇒”: Assume R(AT ) = R(BT ), i.e.,
span{(A)1∗, . . . , (A)m∗} = span{(B)1∗, . . . , (B)m∗},
i.e., the rows of A are linear combinations of rows of B and viceversa.Now apply row operations to A (all collected in P) to obtain
PA = B, i.e., A row∼ B.
2 Let A = AT and B = BT in (1). �
[email protected] MATH 532 20
![Page 61: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/61.jpg)
Four Fundamental Subspaces
(cont.)
“=⇒”: Assume R(AT ) = R(BT ), i.e.,
span{(A)1∗, . . . , (A)m∗} = span{(B)1∗, . . . , (B)m∗},
i.e., the rows of A are linear combinations of rows of B and viceversa.Now apply row operations to A (all collected in P) to obtain
PA = B, i.e., A row∼ B.
2 Let A = AT and B = BT in (1). �
[email protected] MATH 532 20
![Page 62: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/62.jpg)
Theorem
Let A be an m × n matrix and U any row echelon form obtained from A. Then
(1) R(A^T) = span of the nonzero rows of U.
(2) R(A) = span of the basic columns of A.

Remark
Later we will see that any minimal spanning set of columns of A forms a basis for R(A).
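The theorem can be illustrated with sympy (my sketch, using the running example): the pivot indices reported by rref identify the basic columns, and those columns of A itself span R(A).

```python
import sympy as sp

# rref() returns the reduced row echelon form together with the pivot
# column indices; the corresponding columns of A (the basic columns)
# span R(A), while the nonzero rows of the echelon form span R(A^T).
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])

E, pivot_cols = A.rref()
assert pivot_cols == (0, 1)                 # columns 1 and 2 of A are basic
basic = [A[:, j] for j in pivot_cols]
assert A[:, 2] == 2 * basic[1] - basic[0]   # the nonbasic column lies in their span
```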
[email protected] MATH 532 21
![Page 63: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/63.jpg)
Four Fundamental Subspaces
TheoremLet A be an m× n matrix and U any row echelon form obtained from A.Then
1 R(AT ) = span of nonzero rows of U.2 R(A) = span of basic columns of A.
RemarkLater we will see that any minimal span of the columns of A forms abasis for R(A).
[email protected] MATH 532 21
![Page 64: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/64.jpg)
Proof
(1) This follows from (1) in the Lemma since A is row equivalent to U.
(2) Assume the columns of A are permuted (with a permutation matrix Q_1) such that

A Q_1 = (B  N),

where B contains the basic columns and N the nonbasic columns. By definition, the nonbasic columns are linear combinations of the basic columns, i.e., there exists a nonsingular Q_2 such that

(B  N) Q_2 = (B  O),

where O is a zero matrix.
[email protected] MATH 532 22
![Page 65: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/65.jpg)
Four Fundamental Subspaces
Proof
1 This follows from (1) in the Lemma since A row∼ U.
2 Assume the columns of A are permuted (with a matrix Q1) suchthat
AQ1 =(B N
),
where B contains the basic columns, and N the nonbasic columns.
By definition, the nonbasic columns are linear combinations of thebasic columns, i.e., there exists a nonsingular Q2 such that(
B N)
Q2 =(B O
),
where O is a zero matrix.
[email protected] MATH 532 22
![Page 66: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/66.jpg)
Four Fundamental Subspaces
Proof
1 This follows from (1) in the Lemma since A row∼ U.
2 Assume the columns of A are permuted (with a matrix Q1) suchthat
AQ1 =(B N
),
where B contains the basic columns, and N the nonbasic columns.
By definition, the nonbasic columns are linear combinations of thebasic columns, i.e., there exists a nonsingular Q2 such that(
B N)
Q2 =(B O
),
where O is a zero matrix.
[email protected] MATH 532 22
![Page 67: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/67.jpg)
(cont.)
Putting this together, we have

A Q_1 Q_2 = (B  O)   with Q = Q_1 Q_2 nonsingular,

so that A is column equivalent to (B  O). Then (2) in the Lemma says that

R(A) = span{B_{*1}, ..., B_{*r}},

where r = rank(A). □
[email protected] MATH 532 23
![Page 68: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/68.jpg)
Four Fundamental Subspaces
(cont.)Putting this together, we have
A Q1Q2︸ ︷︷ ︸=Q
=(B O
),
so that A col∼(B O
).
(2) in the Lemma says that
R(A) = span{B∗1, . . . ,B∗r},
where r = rank(A). �
[email protected] MATH 532 23
![Page 69: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/69.jpg)
Four Fundamental Subspaces
(cont.)Putting this together, we have
A Q1Q2︸ ︷︷ ︸=Q
=(B O
),
so that A col∼(B O
).
(2) in the Lemma says that
R(A) = span{B∗1, . . . ,B∗r},
where r = rank(A). �
[email protected] MATH 532 23
![Page 70: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/70.jpg)
So far, we have two of the four fundamental subspaces: R(A) and R(A^T).

Third fundamental subspace: N(A) = {x : Ax = 0} ⊆ R^n,
the nullspace of A (also called the kernel of A).

Fourth fundamental subspace: N(A^T) = {y : A^T y = 0} ⊆ R^m,
the left nullspace of A.

Remark
N(A) is a linear space, i.e., a subspace of R^n. To see this, assume x, y ∈ N(A), i.e., Ax = Ay = 0. Then

A(αx + βy) = αAx + βAy = 0,

so that αx + βy ∈ N(A).
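All four subspaces of the running example can be computed at once; a sympy sketch (mine, not from the notes):

```python
import sympy as sp

# Bases for the four fundamental subspaces of the running example.
# columnspace/rowspace/nullspace each return a list of basis vectors.
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])

col = A.columnspace()     # spans R(A),   a subspace of R^3
row = A.rowspace()        # spans R(A^T)
null = A.nullspace()      # spans N(A),   the nullspace (kernel)
left = A.T.nullspace()    # spans N(A^T), the left nullspace

assert [len(col), len(row), len(null), len(left)] == [2, 2, 1, 1]
assert A * null[0] == sp.zeros(3, 1)      # Ax = 0
assert A.T * left[0] == sp.zeros(3, 1)    # A^T y = 0
```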
[email protected] MATH 532 24
![Page 71: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/71.jpg)
Four Fundamental Subspaces
So far, we have two of the four fundamental subspaces:
R(A) and R(AT ).
Third fundamental subspace: N(A) = {x : Ax = 0} ⊆ Rn,
N(A) is the nullspace of A
(also called the kernel of A)
Fourth fundamental subspace: N(AT ) = {y : AT y = 0} ⊆ Rm,
N(AT ) is the left nullspace of A
RemarkN(A) is a linear space, i.e., a subspace of Rn.
To see this, assume x ,y ∈ N(A), i.e., Ax = Ay = 0.Then
A(αx + βy) = αAx + βAy = 0,
so that αx + βy ∈ N(A).
[email protected] MATH 532 24
![Page 72: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/72.jpg)
Four Fundamental Subspaces
So far, we have two of the four fundamental subspaces:
R(A) and R(AT ).
Third fundamental subspace: N(A) = {x : Ax = 0} ⊆ Rn,
N(A) is the nullspace of A
(also called the kernel of A)
Fourth fundamental subspace: N(AT ) = {y : AT y = 0} ⊆ Rm,
N(AT ) is the left nullspace of A
RemarkN(A) is a linear space, i.e., a subspace of Rn.
To see this, assume x ,y ∈ N(A), i.e., Ax = Ay = 0.Then
A(αx + βy) = αAx + βAy = 0,
so that αx + βy ∈ N(A).
[email protected] MATH 532 24
![Page 73: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/73.jpg)
Four Fundamental Subspaces
So far, we have two of the four fundamental subspaces:
R(A) and R(AT ).
Third fundamental subspace: N(A) = {x : Ax = 0} ⊆ Rn,
N(A) is the nullspace of A
(also called the kernel of A)
Fourth fundamental subspace: N(AT ) = {y : AT y = 0} ⊆ Rm,
N(AT ) is the left nullspace of A
RemarkN(A) is a linear space, i.e., a subspace of Rn.
To see this, assume x ,y ∈ N(A), i.e., Ax = Ay = 0.Then
A(αx + βy) = αAx + βAy = 0,
so that αx + βy ∈ N(A).
[email protected] MATH 532 24
![Page 74: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/74.jpg)
Four Fundamental Subspaces
So far, we have two of the four fundamental subspaces:
R(A) and R(AT ).
Third fundamental subspace: N(A) = {x : Ax = 0} ⊆ Rn,
N(A) is the nullspace of A
(also called the kernel of A)
Fourth fundamental subspace: N(AT ) = {y : AT y = 0} ⊆ Rm,
N(AT ) is the left nullspace of A
RemarkN(A) is a linear space, i.e., a subspace of Rn.
To see this, assume x ,y ∈ N(A), i.e., Ax = Ay = 0.
ThenA(αx + βy) = αAx + βAy = 0,
so that αx + βy ∈ N(A).
[email protected] MATH 532 24
![Page 75: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/75.jpg)
Four Fundamental Subspaces
So far, we have two of the four fundamental subspaces:
R(A) and R(AT ).
Third fundamental subspace: N(A) = {x : Ax = 0} ⊆ Rn,
N(A) is the nullspace of A
(also called the kernel of A)
Fourth fundamental subspace: N(AT ) = {y : AT y = 0} ⊆ Rm,
N(AT ) is the left nullspace of A
RemarkN(A) is a linear space, i.e., a subspace of Rn.
To see this, assume x ,y ∈ N(A), i.e., Ax = Ay = 0.Then
A(αx + βy) = αAx + βAy = 0,
so that αx + βy ∈ N(A).
[email protected] MATH 532 24
![Page 76: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/76.jpg)
How to find a (minimal) spanning set for N(A)

Find a row echelon form U of A and solve Ux = 0.

Example
We can compute

A = [ 1 2 3        −→   U = [ 1  2  3
      4 5 6                   0 −3 −6
      7 8 9 ]                 0  0  0 ].

So Ux = 0 implies

x_2 = −2x_3,
x_1 = −2x_2 − 3x_3 = x_3,

or

(x_1, x_2, x_3)^T = (x_3, −2x_3, x_3)^T = x_3 (1, −2, 1)^T.

Therefore

N(A) = span{(1, −2, 1)^T}.
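A check (my own, not part of the notes) that machine computation recovers the same spanning vector found by back substitution:

```python
import sympy as sp

# Solving Ux = 0 by back substitution gave x = x3*(1, -2, 1)^T;
# sympy's nullspace() returns a one-element basis proportional to it.
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])

basis = A.nullspace()
assert len(basis) == 1
v = basis[0]
assert A * v == sp.zeros(3, 1)                     # v really is in N(A)
# v is a scalar multiple of (1, -2, 1)^T:
assert v[1] == -2 * v[0] and v[2] == v[0] and v[0] != 0
```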
[email protected] MATH 532 25
![Page 77: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/77.jpg)
Four Fundamental Subspaces
How to find a (minimal) spanning set for N(A)
Find a row echelon form U of A and solve Ux = 0.
Example
We can compute A =
1 2 34 5 67 8 9
−→ U =
1 2 30 −3 −60 0 0
.
So that Ux = 0 =⇒
{x2 = −2x3
x1 = −2x2 − 3x3 = x3,
or
x1x2x3
=
x3−2x3
x3
= x3
1−21
.
Therefore
N(A) = span
1−21
.
[email protected] MATH 532 25
![Page 78: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/78.jpg)
Four Fundamental Subspaces
How to find a (minimal) spanning set for N(A)
Find a row echelon form U of A and solve Ux = 0.
Example
We can compute A =
1 2 34 5 67 8 9
−→ U =
1 2 30 −3 −60 0 0
.
So that Ux = 0 =⇒
{x2 = −2x3
x1 = −2x2 − 3x3 = x3, or
x1x2x3
=
x3−2x3
x3
= x3
1−21
.
Therefore
N(A) = span
1−21
.
[email protected] MATH 532 25
![Page 79: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/79.jpg)
Four Fundamental Subspaces
How to find a (minimal) spanning set for N(A)
Find a row echelon form U of A and solve Ux = 0.
Example
We can compute A =
1 2 34 5 67 8 9
−→ U =
1 2 30 −3 −60 0 0
.
So that Ux = 0 =⇒
{x2 = −2x3
x1 = −2x2 − 3x3 = x3, or
x1x2x3
=
x3−2x3
x3
= x3
1−21
.
Therefore
N(A) = span
1−21
.
[email protected] MATH 532 25
![Page 80: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/80.jpg)
Remark
We will see later that, as in the example, if rank(A) = r, then N(A) is spanned by n − r vectors.

Theorem
Let A be an m × n matrix. Then
(1) N(A) = {0} ⇐⇒ rank(A) = n.
(2) N(A^T) = {0} ⇐⇒ rank(A) = m.

Proof.
(1) rank(A) = n ⇐⇒ the n columns of A are linearly independent ⇐⇒ Ax = 0 has only the trivial solution x = 0, i.e., N(A) = {0}.
(2) Repeat (1) with A^T in place of A and use rank(A^T) = rank(A). □
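A numerical illustration (my own example matrices, not from the notes):

```python
import numpy as np

# A tall matrix of full column rank (rank = n) has only the trivial
# nullspace; the rank-deficient running example (rank 2 < n = 3) does not.
A_full = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
A_sing = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0],
                   [7.0, 8.0, 9.0]])

assert np.linalg.matrix_rank(A_full) == A_full.shape[1]   # N(A_full) = {0}
assert np.linalg.matrix_rank(A_sing) < A_sing.shape[1]    # N(A_sing) != {0}
x = np.array([1.0, -2.0, 1.0])                            # a nonzero nullvector
assert np.allclose(A_sing @ x, 0.0)
```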
[email protected] MATH 532 26
![Page 81: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/81.jpg)
Four Fundamental Subspaces
RemarkWe will see later that — as in the example — if rank(A) = r , then N(A)is spanned by n − r vectors.
TheoremLet A be an m × n matrix. Then
1 N(A) = {0} ⇐⇒ rank(A) = n.2 N(AT ) = {0} ⇐⇒ rank(A) = m.
Proof.1 We know rank(A) = n ⇐⇒ Ax = 0, but that implies x = 0.2 Repeat (1) with A = AT and use rank(AT ) = rank(A).
[email protected] MATH 532 26
![Page 82: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/82.jpg)
Four Fundamental Subspaces
RemarkWe will see later that — as in the example — if rank(A) = r , then N(A)is spanned by n − r vectors.
TheoremLet A be an m × n matrix. Then
1 N(A) = {0} ⇐⇒ rank(A) = n.2 N(AT ) = {0} ⇐⇒ rank(A) = m.
Proof.1 We know rank(A) = n ⇐⇒ Ax = 0,
but that implies x = 0.2 Repeat (1) with A = AT and use rank(AT ) = rank(A).
[email protected] MATH 532 26
![Page 83: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/83.jpg)
Four Fundamental Subspaces
RemarkWe will see later that — as in the example — if rank(A) = r , then N(A)is spanned by n − r vectors.
TheoremLet A be an m × n matrix. Then
1 N(A) = {0} ⇐⇒ rank(A) = n.2 N(AT ) = {0} ⇐⇒ rank(A) = m.
Proof.1 We know rank(A) = n ⇐⇒ Ax = 0, but that implies x = 0.
2 Repeat (1) with A = AT and use rank(AT ) = rank(A).
[email protected] MATH 532 26
![Page 84: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/84.jpg)
Four Fundamental Subspaces
RemarkWe will see later that — as in the example — if rank(A) = r , then N(A)is spanned by n − r vectors.
TheoremLet A be an m × n matrix. Then
1 N(A) = {0} ⇐⇒ rank(A) = n.2 N(AT ) = {0} ⇐⇒ rank(A) = m.
Proof.1 We know rank(A) = n ⇐⇒ Ax = 0, but that implies x = 0.2 Repeat (1) with A = AT and use rank(AT ) = rank(A).
[email protected] MATH 532 26
![Page 85: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/85.jpg)
How to find a spanning set of N(A^T)

Theorem
Let A be an m × n matrix with rank(A) = r, and let P be a nonsingular matrix such that PA = U (row echelon form). Then the last m − r rows of P span N(A^T).

Remark
We will later see that this spanning set is also a basis for N(A^T).
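The construction in the theorem can be sketched with sympy (my sketch, not from the notes): row reducing the augmented matrix [A | I] records the row operations in its right-hand block, which yields a suitable P.

```python
import sympy as sp

# Row reduce [A | I]: the right-hand block accumulates the row
# operations, giving a nonsingular P with P*A = U (an echelon form).
# The last m - r rows of P then span the left nullspace N(A^T).
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])
m, n = A.rows, A.cols

M, pivots = sp.Matrix.hstack(A, sp.eye(m)).rref()
U, P = M[:, :n], M[:, n:]
assert P * A == U                            # P collects the row operations

r = sum(1 for p in pivots if p < n)          # rank(A) = 2
y = P[r:, :].T                               # last m - r rows of P
assert A.T * y == sp.zeros(n, m - r)         # they lie in N(A^T)
```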
[email protected] MATH 532 27
![Page 86: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/86.jpg)
Four Fundamental Subspaces
How to find a spanning set of N(AT )
TheoremLet A be an m × n matrix with rank(A) = r , and let P be a nonsingularmatrix so that PA = U (row echelon form). Then the last m − r rows ofP span N(AT ).
Remark
We will later see that this spanning set is also a basis for N(AT ).
[email protected] MATH 532 27
![Page 87: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/87.jpg)
Proof
Partition P as P = [ P_1 ; P_2 ] (P_1 the first r rows, P_2 the last m − r rows), where P_1 is r × m and P_2 is (m − r) × m. The claim of the theorem is that R(P_2^T) = N(A^T). We show this in two parts:
(1) R(P_2^T) ⊆ N(A^T).
(2) N(A^T) ⊆ R(P_2^T).
[email protected] MATH 532 28
![Page 88: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/88.jpg)
Four Fundamental Subspaces
Proof
Partition P as P =
(P1P2
), where P1 is r ×m and P2 is m − r ×m.
The claim of the theorem implies that we should show thatR(PT
2 ) = N(AT ).
We do this in two parts:1 Show that R(PT
2 ) ⊆ N(AT ).2 Show that N(AT ) ⊆ R(PT
2 ).
[email protected] MATH 532 28
![Page 89: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/89.jpg)
Four Fundamental Subspaces
Proof
Partition P as P =
(P1P2
), where P1 is r ×m and P2 is m − r ×m.
The claim of the theorem implies that we should show thatR(PT
2 ) = N(AT ).
We do this in two parts:1 Show that R(PT
2 ) ⊆ N(AT ).2 Show that N(AT ) ⊆ R(PT
2 ).
[email protected] MATH 532 28
![Page 90: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/90.jpg)
(cont.)
(1) Partition U = [ C ; O ], where C is r × n and O is the (m − r) × n zero matrix. Then

PA = U ⇐⇒ [ P_1 ; P_2 ] A = [ C ; O ] =⇒ P_2 A = O.

This also means that

A^T P_2^T = O^T,

i.e., every column of P_2^T is in N(A^T), so that R(P_2^T) ⊆ N(A^T).
[email protected] MATH 532 29
![Page 91: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/91.jpg)
Four Fundamental Subspaces
(cont.)
1 Partition Um×n =
(CO
)with C ∈ Rr×n and O ∈ Rm−r×n (a zero
matrix).Then
PA = U ⇐⇒(
P1P2
)A =
(CO
)=⇒ P2A = O.
This also means thatAT PT
2 = OT ,
i.e., every column of PT2 is in N(AT ) so that R(PT
2 ) ⊆ N(AT ).
[email protected] MATH 532 29
![Page 92: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/92.jpg)
Four Fundamental Subspaces
(cont.)
1 Partition Um×n =
(CO
)with C ∈ Rr×n and O ∈ Rm−r×n (a zero
matrix).Then
PA = U ⇐⇒(
P1P2
)A =
(CO
)=⇒ P2A = O.
This also means thatAT PT
2 = OT ,
i.e., every column of PT2 is in N(AT ) so that R(PT
2 ) ⊆ N(AT ).
[email protected] MATH 532 29
![Page 93: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/93.jpg)
Four Fundamental Subspaces
(cont.)
1 Partition Um×n =
(CO
)with C ∈ Rr×n and O ∈ Rm−r×n (a zero
matrix).Then
PA = U ⇐⇒(
P1P2
)A =
(CO
)=⇒ P2A = O.
This also means thatAT PT
2 = OT ,
i.e., every column of PT2 is in N(AT ) so that R(PT
2 ) ⊆ N(AT ).
[email protected] MATH 532 29
![Page 94: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/94.jpg)
(cont.)
(2) Now we show N(A^T) ⊆ R(P_2^T). Assume y ∈ N(A^T); we show that then y ∈ R(P_2^T). By definition,

y ∈ N(A^T) =⇒ A^T y = 0 ⇐⇒ y^T A = 0^T.

Since PA = U implies A = P^{−1} U, we get

0^T = y^T P^{−1} U = y^T P^{−1} [ C ; O ],

or

0^T = y^T Q_1 C,   where P^{−1} = (Q_1  Q_2) with Q_1 of size m × r and Q_2 of size m × (m − r).
[email protected] MATH 532 30
Four Fundamental Subspaces

(cont.)
2 Now, show N(A^T) ⊆ R(P_2^T). We assume y ∈ N(A^T) and show that then y ∈ R(P_2^T).

By definition,

    y ∈ N(A^T)  =⇒  A^T y = 0  ⇐⇒  y^T A = 0^T.

Since PA = U =⇒ A = P^{-1} U, and so

    0^T = y^T P^{-1} U = y^T P^{-1} [ C
                                      O ]

or

    0^T = y^T Q_1 C,   where   P^{-1} = ( Q_1  Q_2 ),

with Q_1 of size m × r and Q_2 of size m × (m − r), and where the block [C; O] stacks C (the r × n nonzero rows of U) above the zero matrix O.

[email protected] MATH 532 30
Four Fundamental Subspaces

(cont.)
However, since rank(C) = r and C is r × n, we get (using m = r in our earlier theorem)

    N(C^T) = {0},

and therefore y^T Q_1 = 0^T.

Obviously, this implies that we also have

    y^T Q_1 P_1 = 0^T.   (2)

[email protected] MATH 532 31
Four Fundamental Subspaces

(cont.)
Now partition P into its first r rows P_1 and its last m − r rows P_2, i.e.,

    P = [ P_1
          P_2 ]   and   P^{-1} = ( Q_1  Q_2 ),

so that

    I = P^{-1} P = Q_1 P_1 + Q_2 P_2

or

    Q_1 P_1 = I − Q_2 P_2.   (3)

Now we insert (3) into (2) and get

    y^T (I − Q_2 P_2) = 0^T  ⇐⇒  y^T = (y^T Q_2) P_2 = z^T P_2,   where z^T = y^T Q_2.

Therefore y ∈ R(P_2^T). □

[email protected] MATH 532 32
Four Fundamental Subspaces

Finally,

Theorem
Let A, B be m × n matrices.
1 N(A) = N(B) ⇐⇒ A ~row B.
2 N(A^T) = N(B^T) ⇐⇒ A ~col B.

Proof.
See [Mey00, Section 4.2].

[email protected] MATH 532 33
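A quick numerical sanity check of part 1 is possible; the following is a sketch using NumPy (which the notes themselves do not use), with a small example matrix chosen for illustration. Multiplying A on the left by any nonsingular E produces a row-equivalent B = EA, and both matrices then annihilate the same null-space vectors.

```python
import numpy as np

def null_space(A, tol=1e-10):
    # Orthonormal basis of N(A): right singular vectors whose
    # singular values are numerically zero.
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])

# B = E A with E nonsingular, so A ~row B.
E = np.array([[2., 1.],
              [1., 1.]])
B = E @ A

NA, NB = null_space(A), null_space(B)
# Row-equivalent matrices share the same null space.
assert np.allclose(B @ NA, 0) and np.allclose(A @ NB, 0)
```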
Linear Independence
Outline
1 Spaces and Subspaces
2 Four Fundamental Subspaces
3 Linear Independence
4 Bases and Dimension
5 More About Rank
6 Classical Least Squares
7 Kriging as best linear unbiased predictor
[email protected] MATH 532 34
Linear Independence

Linear Independence

Definition
A set of vectors S = {v_1, . . . , v_n} is called linearly independent if

    α_1 v_1 + α_2 v_2 + · · · + α_n v_n = 0  =⇒  α_1 = α_2 = · · · = α_n = 0.

Otherwise S is linearly dependent.

Remark
Linear independence is a property of a set, not of individual vectors.

[email protected] MATH 532 35
Linear Independence

Example
Is

    S = { (1, 4, 7)^T, (2, 5, 8)^T, (3, 6, 9)^T }

linearly independent?

Consider

    α_1 (1, 4, 7)^T + α_2 (2, 5, 8)^T + α_3 (3, 6, 9)^T = (0, 0, 0)^T

    ⇐⇒  A x = 0,   where   A = [ 1 2 3
                                  4 5 6
                                  7 8 9 ],   x = (α_1, α_2, α_3)^T.

[email protected] MATH 532 36
Linear Independence

Example (cont.)
Since

    A ~row E_A = [ 1 2 3
                   0 1 2
                   0 0 0 ]

we know that N(A) is nontrivial, i.e., the system Ax = 0 has a nonzero solution, and therefore S is linearly dependent.

[email protected] MATH 532 37
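The rank computation behind this example can be checked numerically; this is a small sketch with NumPy (not part of the original notes):

```python
import numpy as np

# The three vectors from the example, as columns of A.
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# rank(A) = 2 < 3, so N(A) is nontrivial and the columns are
# linearly dependent.
assert np.linalg.matrix_rank(A) == 2

# One concrete dependence relation: col1 - 2*col2 + col3 = 0.
assert np.allclose(A @ np.array([1., -2., 1.]), 0)
```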
Linear Independence

More generally,

Theorem
Let A be an m × n matrix.
1 The columns of A are linearly independent if and only if N(A) = {0} ⇐⇒ rank(A) = n.
2 The rows of A are linearly independent if and only if N(A^T) = {0} ⇐⇒ rank(A) = m.

Proof.
See [Mey00, Section 4.3].

[email protected] MATH 532 38
Linear Independence

Definition
A square matrix A is called diagonally dominant if

    |a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|,   i = 1, . . . , n.

Remark
Aside from being nonsingular (see next slide), diagonally dominant matrices are important since they ensure that Gaussian elimination will succeed without pivoting. Also, diagonal dominance ensures convergence of certain iterative solvers (more later).

[email protected] MATH 532 39
Linear Independence

Theorem
Let A be an n × n matrix. If A is diagonally dominant, then A is nonsingular.

Proof
We will show that N(A) = {0}, since then we know that rank(A) = n and A is nonsingular.

We will do this with a proof by contradiction: we assume that there exists an x (≠ 0) ∈ N(A) and conclude that A cannot be diagonally dominant.

[email protected] MATH 532 40
Linear Independence

(cont.)
If x ∈ N(A), then Ax = 0.

Now we take k so that x_k is the maximum (in absolute value) component of x and consider the k-th row of this system,

    A_{k*} x = 0.

We can rewrite this as

    Σ_{j=1}^{n} a_kj x_j = 0  ⇐⇒  a_kk x_k = − Σ_{j=1, j≠k}^{n} a_kj x_j.

[email protected] MATH 532 41
Linear Independence

(cont.)
Now we take absolute values:

    |a_kk x_k| = | Σ_{j=1, j≠k}^{n} a_kj x_j |
               ≤ Σ_{j=1, j≠k}^{n} |a_kj| |x_j|
               ≤ |x_k| Σ_{j=1, j≠k}^{n} |a_kj|,

where the last step uses that x_k is the maximum component, i.e., |x_j| ≤ |x_k| for all j.

Finally, dividing both sides by |x_k| (nonzero, since x ≠ 0) yields

    |a_kk| ≤ Σ_{j=1, j≠k}^{n} |a_kj|,

which contradicts the assumption that A is diagonally dominant. □

[email protected] MATH 532 42
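The definition and the theorem can be exercised together in a few lines; the following is a sketch in NumPy (the matrix and right-hand side are made-up illustrative data):

```python
import numpy as np

def is_diagonally_dominant(A):
    # Strict row diagonal dominance: |a_ii| > sum_{j != i} |a_ij|.
    A = np.abs(np.asarray(A, dtype=float))
    off_diag = A.sum(axis=1) - np.diag(A)
    return bool(np.all(np.diag(A) > off_diag))

A = np.array([[4., 1., 1.],
              [1., 5., 2.],
              [0., 2., 3.]])
b = np.array([1., 2., 3.])

assert is_diagonally_dominant(A)
# The theorem guarantees A is nonsingular, so Ax = b is solvable.
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
```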
Linear Independence

Example
Consider m real numbers x_1, . . . , x_m such that x_i ≠ x_j for i ≠ j. Show that the columns of the Vandermonde matrix

    V = [ 1  x_1  x_1^2  · · ·  x_1^{n−1}
          1  x_2  x_2^2  · · ·  x_2^{n−1}
          ⋮
          1  x_m  x_m^2  · · ·  x_m^{n−1} ]

form a linearly independent set provided n ≤ m.

From above, the columns of V are linearly independent if and only if N(V) = {0}, i.e.,

    Vz = 0  =⇒  z = 0,   z = (α_0, . . . , α_{n−1})^T.

[email protected] MATH 532 43
Linear Independence

Example (cont.)
Now Vz = 0 if and only if

    α_0 + α_1 x_i + α_2 x_i^2 + · · · + α_{n−1} x_i^{n−1} = 0,   i = 1, . . . , m.

In other words, x_1, x_2, . . . , x_m are all (distinct) roots of

    p(x) = α_0 + α_1 x + α_2 x^2 + · · · + α_{n−1} x^{n−1},

a polynomial of degree at most n − 1.

A nonzero polynomial of degree at most n − 1 has at most n − 1 distinct roots, so p can vanish at the m ≥ n distinct points x_1, . . . , x_m only if p is the zero polynomial, i.e., α_0 = α_1 = · · · = α_{n−1} = 0. Hence the columns of V are linearly independent.

[email protected] MATH 532 44
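This conclusion is easy to confirm numerically; a minimal sketch with NumPy's `vander` constructor, using four made-up distinct nodes:

```python
import numpy as np

# Distinct nodes: m = 4 samples, n = 3 columns (1, x, x^2), so n <= m.
x = np.array([0., 1., 2., 3.])
V = np.vander(x, N=3, increasing=True)   # columns 1, x, x^2

# Distinct nodes with n <= m give linearly independent columns,
# i.e., full column rank.
assert np.linalg.matrix_rank(V) == 3
```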
Linear Independence

The example implies that in the special case m = n there is a unique polynomial of degree (at most) m − 1 that interpolates the data {(x_1, y_1), (x_2, y_2), . . . , (x_m, y_m)} ⊂ R^2.

We see this by writing the polynomial in the form

    ℓ(t) = α_0 + α_1 t + α_2 t^2 + · · · + α_{m−1} t^{m−1}.

Then, interpolation of the data implies

    ℓ(x_i) = y_i,   i = 1, . . . , m,

or

    [ 1  x_1  x_1^2  · · ·  x_1^{m−1}     [ α_0          [ y_1
      1  x_2  x_2^2  · · ·  x_2^{m−1}       α_1            y_2
      ⋮                                      ⋮       =      ⋮
      1  x_m  x_m^2  · · ·  x_m^{m−1} ]     α_{m−1} ]      y_m ].

Since the columns of V are linearly independent, V is nonsingular, and the coefficients α_0, . . . , α_{m−1} are uniquely determined.

[email protected] MATH 532 45
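The steps above translate directly into a linear solve; this is a sketch with NumPy on three made-up data points:

```python
import numpy as np

# Data to interpolate: m = 3 points with distinct abscissas.
x = np.array([0., 1., 2.])
y = np.array([1., 3., 11.])

# Square Vandermonde system V a = y; since V is nonsingular,
# the coefficients are uniquely determined.
V = np.vander(x, increasing=True)        # columns 1, t, t^2
a = np.linalg.solve(V, y)

# The polynomial l(t) = a0 + a1 t + a2 t^2 interpolates the data.
# (np.polyval expects highest-degree coefficient first.)
assert np.allclose(np.polyval(a[::-1], x), y)
```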
Linear Independence

In fact,

    ℓ(t) = Σ_{i=1}^{m} y_i L_i(t)   (Lagrange interpolation polynomial)   (4)

with

    L_i(t) = Π_{k=1, k≠i}^{m} (t − x_k) / Π_{k=1, k≠i}^{m} (x_i − x_k)   (Lagrange functions).

To verify (4) we note that the degree of ℓ is m − 1 (since each L_i is of degree m − 1) and

    L_i(x_j) = δ_ij,   i, j = 1, . . . , m,

so that

    ℓ(x_j) = Σ_{i=1}^{m} y_i L_i(x_j) = y_j,   j = 1, . . . , m.

[email protected] MATH 532 46
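The Lagrange form is a few lines of plain Python; a sketch (with the same made-up data as before) that also checks the cardinality property L_i(x_j) = δ_ij:

```python
def lagrange_basis(x_nodes, i, t):
    # L_i(t) = prod_{k != i} (t - x_k) / (x_i - x_k)
    num, den = 1.0, 1.0
    for k, xk in enumerate(x_nodes):
        if k != i:
            num *= t - xk
            den *= x_nodes[i] - xk
    return num / den

def lagrange_interp(x_nodes, y_vals, t):
    # l(t) = sum_i y_i L_i(t)
    return sum(y_vals[i] * lagrange_basis(x_nodes, i, t)
               for i in range(len(x_nodes)))

x = [0.0, 1.0, 2.0]
y = [1.0, 3.0, 11.0]

# Cardinality L_i(x_j) = delta_ij means l reproduces the data exactly.
assert all(abs(lagrange_interp(x, y, xj) - yj) < 1e-12
           for xj, yj in zip(x, y))
```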
Linear Independence
Theorem
Let S = {u_1, u_2, . . . , u_n} ⊆ V be nonempty. Then
1 If S contains a linearly dependent subset, then S is linearly dependent.
2 If S is linearly independent, then every subset of S is also linearly independent.
3 If S is linearly independent and if v ∈ V, then S_ext = S ∪ {v} is linearly independent if and only if v ∉ span(S).
4 If S ⊆ R^m and n > m, then S must be linearly dependent.
[email protected] MATH 532 47
Linear Independence

Proof
1 If S contains a linearly dependent subset, {u_1, . . . , u_k} say, then there exist nontrivial coefficients α_1, . . . , α_k such that

      α_1 u_1 + · · · + α_k u_k = 0.

  Clearly, then

      α_1 u_1 + · · · + α_k u_k + 0 u_{k+1} + · · · + 0 u_n = 0

  and S is also linearly dependent.

2 Follows from (1) by contraposition.

[email protected] MATH 532 48
Linear Independence

(cont.)
3 "=⇒": Assume S_ext is linearly independent. Then v can't be a linear combination of u_1, . . . , u_n, i.e., v ∉ span(S).

  "⇐=": Assume v ∉ span(S) and consider

      α_1 u_1 + α_2 u_2 + · · · + α_n u_n + α_{n+1} v = 0.

  First, α_{n+1} = 0, since otherwise v ∈ span(S). That leaves

      α_1 u_1 + α_2 u_2 + · · · + α_n u_n = 0.

  However, the linear independence of S implies α_i = 0, i = 1, . . . , n, and therefore S_ext is linearly independent.

[email protected] MATH 532 49
Linear Independence
(cont.)
4 We know that the columns of an m × n matrix A are linearly independent if and only if rank(A) = n.

  Here A = ( u_1  u_2  · · ·  u_n ) with u_i ∈ R^m.

  If n > m, then rank(A) ≤ m < n, and S must be linearly dependent. □
[email protected] MATH 532 50
Bases and Dimension
Outline
1 Spaces and Subspaces
2 Four Fundamental Subspaces
3 Linear Independence
4 Bases and Dimension
5 More About Rank
6 Classical Least Squares
7 Kriging as best linear unbiased predictor
Bases and Dimension

Earlier we introduced the concept of a spanning set of a vector space V, i.e.,

V = span{v1, . . . , vn}.

Now

Definition
Consider a vector space V with spanning set S. If S is also linearly independent, then we call it a basis of V.

Example
1 {e1, . . . , en} is the standard basis for Rn.
2 The columns/rows of an n × n matrix A with rank(A) = n form a basis for Rn.
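The second example can be sketched numerically: for a hypothetical nonsingular 3 × 3 matrix, the columns form a basis, so every vector of R3 has unique coordinates with respect to them, found by solving a linear system.

```python
import numpy as np

# Hypothetical nonsingular matrix; its columns are a basis of R^3.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
assert np.linalg.matrix_rank(A) == 3  # full rank => columns form a basis

# Unique coordinates alpha of v in this basis satisfy A @ alpha = v.
v = np.array([3.0, 2.0, 2.0])
alpha = np.linalg.solve(A, v)
print(np.allclose(A @ alpha, v))  # True
```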
Remark
Linear algebra deals with finite-dimensional linear spaces.

Functional analysis can be considered as infinite-dimensional linear algebra, where the linear spaces are usually function spaces, such as

infinitely differentiable functions with Taylor (polynomial) basis

{1, x, x2, x3, . . .}

square integrable functions with Fourier basis

{1, sin(x), cos(x), sin(2x), cos(2x), . . .}
Earlier we mentioned the idea of minimal spanning sets.
Theorem
Let V be a subspace of Rm and let

B = {b1, b2, . . . , bn} ⊆ V.

The following are equivalent:
1 B is a basis for V.
2 B is a minimal spanning set for V.
3 B is a maximal linearly independent subset of V.

Remark
We say “a basis” here since V can have many different bases.
Proof
Since it is difficult to directly relate (2) and (3), our strategy will be to:

Show (1) =⇒ (2) and (2) =⇒ (1), so that (1) ⇐⇒ (2).

Show (1) =⇒ (3) and (3) =⇒ (1), so that (1) ⇐⇒ (3).

Then, by transitivity, we will also have (2) ⇐⇒ (3).
Proof (cont.)
(1) =⇒ (2): Assume B is a basis (i.e., a linearly independent spanning set) of V and show that it is minimal.

Assume B is not minimal, i.e., we can find a smaller spanning set {x1, . . . , xk} for V with k < n elements.

But then

bj = α1j x1 + α2j x2 + . . . + αkj xk, j = 1, . . . , n,

or

B = XA,

where

B = (b1 b2 · · · bn) ∈ Rm×n,
X = (x1 x2 · · · xk) ∈ Rm×k,
[A]ij = αij, A ∈ Rk×n.
Proof (cont.)

Now, rank(A) ≤ k < n, which implies N(A) is nontrivial, i.e., there exists a z ≠ 0 such that

Az = 0.

But then

Bz = XAz = 0,

and therefore N(B) is nontrivial.

However, since B is a basis, the columns of B are linearly independent (i.e., N(B) = {0}), which is a contradiction.

Therefore, B has to be minimal.
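The key step above, that a wide matrix A ∈ Rk×n with k < n necessarily has a nontrivial null space, can be sketched numerically (hypothetical random matrix; the null vector is read off from the SVD):

```python
import numpy as np

# Hypothetical k x n matrix with k < n, so rank(A) <= k < n and N(A) != {0}.
rng = np.random.default_rng(0)
k, n = 2, 4
A = rng.standard_normal((k, n))

# Rows of Vt beyond the rank span N(A); with k < n the last row always works.
_, _, Vt = np.linalg.svd(A)
z = Vt[-1]                    # a unit vector with A z = 0
print(np.allclose(A @ z, 0))  # True, although z != 0
```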
Proof (cont.)
(2) =⇒ (1): Assume B is a minimal spanning set and show that it must also be linearly independent.

This is clear: if B were linearly dependent, then we would be able to remove at least one vector from B and still have a spanning set, but then B would not have been minimal.
Proof (cont.)
(3) =⇒ (1): Assume B is a maximal linearly independent subset of V and show that B is a basis of V.

Assume that B is not a basis, i.e., there exists a v ∈ V such that v /∈ span{b1, . . . , bn}.

Then, by an earlier theorem, the extension set B ∪ {v} is linearly independent.

But this contradicts the maximality of B, so B has to be a basis.
Proof (cont.)
(1) =⇒ (3): Assume B is a basis, but not a maximal linearly independent subset of V, and show that this leads to a contradiction.

Let

Y = {y1, . . . , yk} ⊆ V, with k > n,

be a maximal linearly independent subset of V (note that such a set always exists).

But then Y must be a basis for V by our “(3) =⇒ (1)” argument. On the other hand, Y has more vectors than B, and a basis has to be a minimal spanning set. Therefore B already has to be a maximal linearly independent subset of V. □
Remark
Above we remarked that B is not unique, i.e., a vector space V can have many different bases. However, the number of elements in all of these bases is unique.

Definition
The dimension of the vector space V is given by

dim V = the number of elements in any basis of V.

Special case: by convention,

dim{0} = 0.
Example
Consider

P = span{(1, 0, 0)T, (0, 1, 0)T} ⊂ R3.

Geometrically, P corresponds to the plane z = 0, i.e., the xy-plane.

Note that dim P = 2.

Moreover, any subspace of R3 has dimension at most 3.
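A quick numerical sketch of this example: dim P is the rank of the matrix whose columns are the spanning vectors, and appending any vector already in the plane does not increase that rank.

```python
import numpy as np

# The spanning vectors of P as columns; dim P = rank of this matrix.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
print(np.linalg.matrix_rank(B))  # 2

# Any vector with third component 0 lies in P, e.g. (3, -1, 0):
v = np.array([3.0, -1.0, 0.0])
print(np.linalg.matrix_rank(np.column_stack([B, v])))  # still 2
```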
In general,
Theorem
Let M and N be vector spaces such that M ⊆ N. Then
1 dim M ≤ dim N,
2 dim M = dim N =⇒ M = N.

Proof.
See [Mey00].
Back to the 4 fundamental subspaces
Consider an m × n matrix A with rank(A) = r.

R(A): We know that

R(A) = span{columns of A}.

If rank(A) = r, then only r columns of A are linearly independent, i.e.,

dim R(A) = r.

A basis of R(A) is given by the basic columns of A (determined via a row echelon form U).
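The basic columns can be identified numerically without explicit row reduction. A sketch (hypothetical matrix, greedy rank test in place of echelon-form bookkeeping): scan the columns left to right and keep each one that raises the rank; the kept indices are exactly the pivot positions.

```python
import numpy as np

# Hypothetical matrix; column 2 = 2 * column 1, so rank(A) = 2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 7.0],
              [1.0, 2.0, 4.0]])

# Greedy scan: keep a column iff it increases the rank of the
# collection kept so far; these are the basic (pivot) columns.
basic = []
for j in range(A.shape[1]):
    if np.linalg.matrix_rank(A[:, basic + [j]]) == len(basic) + 1:
        basic.append(j)
print(basic)  # [0, 2]: columns 1 and 3 are a basis of R(A)
```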
R(AT): We know that

R(AT) = span{rows of A}.

Again, rank(A) = r implies that only r rows of A are linearly independent, i.e.,

dim R(AT) = r.

A basis of R(AT) is given by the nonzero rows of U (from the LU factorization of A).
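As a numerical sketch we substitute a QR factorization for LU (an assumption for illustration): any factorization A = QR with Q invertible leaves the row space unchanged, so the first r rows of the triangular factor play the role of the nonzero rows of U.

```python
import numpy as np

# Hypothetical rank-2 example, row 3 = row 1 + row 2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 7.0],
              [3.0, 7.0, 10.0]])

# A = Q R with Q invertible, so the row space of A equals the row
# space of R. (Assumes no pivoting is needed, as here, so that R's
# first r rows are the nonzero ones.)
Q, R = np.linalg.qr(A)
r = np.linalg.matrix_rank(A)   # r = 2
row_basis = R[:r]

# Stacking the candidate basis under A leaves the rank at r,
# so these rows lie in (and hence span) R(A^T).
print(np.linalg.matrix_rank(np.vstack([A, row_basis])) == r)  # True
```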
N(AT): One of our earlier theorems states that the last m − r rows of P span N(AT) (where P is nonsingular such that PA = U is in row echelon form).

Since P is nonsingular, these rows are linearly independent, and so

dim N(AT) = m − r.

A basis of N(AT) is given by the last m − r rows of P.
N(A): Replace A by A^T above, so that

dim N((A^T)^T) = n − rank(A^T) = n − r,

i.e.,

dim N(A) = n − r.

A basis of N(A) is given by the n − r linearly independent solutions of Ax = 0.
Theorem
For any m × n matrix A we have

dim R(A) + dim N(A) = n.

This follows directly from the above discussion of R(A) and N(A).

The theorem shows that there is always a balance between the rank of A and the dimension of its nullspace.
Example
Find the dimension and a basis for

S = span{ (1, 2, 3, 1)^T, (2, 4, 6, 2)^T, (2, 4, 6, 4)^T, (3, 6, 9, 5)^T, (1, 2, 6, 3)^T }.

Before we even do any calculations we know that S ⊆ R^4, so that dim S ≤ 4.

We will now answer this question in two different ways using

A = [ 1 2 2 3 1
      2 4 4 6 2
      3 6 6 9 6
      1 2 4 5 3 ].
Example (cont.)
Via R(A), i.e., by finding the basic columns of A. Gauss–Jordan elimination gives

A = [ 1 2 2 3 1
      2 4 4 6 2
      3 6 6 9 6
      1 2 4 5 3 ]

-->

EA = [ 1 2 0 1 0
       0 0 1 1 0
       0 0 0 0 1
       0 0 0 0 0 ].

Therefore, dim S = 3 and

S = span{ (1, 2, 3, 1)^T, (2, 4, 6, 4)^T, (1, 2, 6, 3)^T }

since the basic columns of EA are the first, third and fifth columns.
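The Gauss–Jordan computation above can be sketched in a few lines. Assuming SymPy is available (this is illustrative only, not part of the notes), `rref` returns both E_A and the indices of the basic columns:

```python
from sympy import Matrix

A = Matrix([[1, 2, 2, 3, 1],
            [2, 4, 4, 6, 2],
            [3, 6, 6, 9, 6],
            [1, 2, 4, 5, 3]])

EA, pivots = A.rref()               # reduced row echelon form, pivot indices
basis = [A.col(j) for j in pivots]  # the basic columns of A form a basis of S
print(pivots)                       # (0, 2, 4): first, third and fifth columns
```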
Example (cont.)
Via R(A^T), i.e., R(A) = span{rows of A^T}: we need the nonzero rows of U (from the LU factorization of A^T).

A^T = [ 1 2 3 1
        2 4 6 2
        2 4 6 4
        3 6 9 5
        1 2 6 3 ]

Zeroing out the first column gives

[ 1 2 3 1
  0 0 0 0
  0 0 0 2
  0 0 0 2
  0 0 3 2 ],

and permuting rows and subtracting the duplicate row yields

U = [ 1 2 3 1
      0 0 3 2
      0 0 0 2
      0 0 0 0
      0 0 0 0 ].

Therefore, dim S = 3 and

S = span{ (1, 2, 3, 1)^T, (0, 0, 3, 2)^T, (0, 0, 0, 2)^T }

since the nonzero rows of U are the first, second and third rows.
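The second approach can be sketched the same way: row-reduce A^T and keep its nonzero rows. A sketch assuming SymPy is available (not the notes' own computation; the basis it produces may differ from the one above by the choice of elimination steps):

```python
from sympy import Matrix

AT = Matrix([[1, 2, 3, 1],
             [2, 4, 6, 2],
             [2, 4, 6, 4],
             [3, 6, 9, 5],
             [1, 2, 6, 3]])

U = AT.echelon_form()  # a row echelon form of A^T
nonzero_rows = [U.row(i) for i in range(U.rows) if any(U.row(i))]
print(len(nonzero_rows))  # 3, so dim S = 3
```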
Example
Extend

S = span{ (1, 2, 3, 1)^T, (1, 2, 6, 3)^T }

to a basis for R^4.

The procedure is to augment the spanning vectors of S by an identity matrix, i.e., to form

A = [ 1 1 1 0 0 0
      2 2 0 1 0 0
      3 6 0 0 1 0
      1 3 0 0 0 1 ]

and then to get a basis from the basic columns of A (identified via a row echelon form U).
Example (cont.)

A = [ 1 1 1 0 0 0
      2 2 0 1 0 0
      3 6 0 0 1 0
      1 3 0 0 0 1 ]

--> [ 1 1  1 0 0 0
      0 0 -2 1 0 0
      0 3 -3 0 1 0
      0 2 -1 0 0 1 ]

--> [ 1 1    1 0 0    0
      0 2   -1 0 0    1
      0 0 -3/2 0 1 -3/2
      0 0   -2 1 0    0 ]

--> [ 1 1    1 0    0    0
      0 2   -1 0    0    1
      0 0 -3/2 0    1 -3/2
      0 0    0 1 -4/3    2 ]

so that the basic columns are [A]∗1, [A]∗2, [A]∗3, [A]∗4 and

R^4 = span{ (1, 2, 3, 1)^T, (1, 2, 6, 3)^T, (1, 0, 0, 0)^T, (0, 1, 0, 0)^T }.
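The whole extension procedure fits in a few lines of code. A sketch assuming SymPy is available (illustrative, not from the notes): augment the two spanning vectors with the 4 × 4 identity and keep the basic columns.

```python
from sympy import Matrix, eye

v1 = Matrix([1, 2, 3, 1])
v2 = Matrix([1, 2, 6, 3])
A = Matrix.hstack(v1, v2, eye(4))  # [v1 v2 | I]

_, pivots = A.rref()
basis = [A.col(j) for j in pivots]  # v1, v2, e1, e2
print(pivots)                       # (0, 1, 2, 3)
```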
Earlier we defined the sum of subspaces X and Y as

X + Y = {x + y : x ∈ X, y ∈ Y}.

Theorem
If X, Y are subspaces of V, then

dim(X + Y) = dim X + dim Y − dim(X ∩ Y).

Proof.
See [Mey00], but the basic idea is pretty clear: we want to avoid double counting.
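A concrete illustration of the formula (NumPy assumed available; a sketch, not part of the notes): take X and Y to be the xy- and xz-planes in R^3, which meet in the x-axis.

```python
import numpy as np

X = np.array([[1, 0], [0, 1], [0, 0]])  # columns span the xy-plane
Y = np.array([[1, 0], [0, 0], [0, 1]])  # columns span the xz-plane

dim_X = np.linalg.matrix_rank(X)                    # 2
dim_Y = np.linalg.matrix_rank(Y)                    # 2
dim_sum = np.linalg.matrix_rank(np.hstack([X, Y]))  # dim(X + Y) = 3
dim_cap = dim_X + dim_Y - dim_sum                   # dim(X ∩ Y) = 1, the x-axis
print(dim_X, dim_Y, dim_sum, dim_cap)
```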
Corollary
Let A and B be m × n matrices. Then

rank(A + B) ≤ rank(A) + rank(B).

Proof
First we note that

R(A + B) ⊆ R(A) + R(B)    (4)

since for any b ∈ R(A + B) we have

b = (A + B)x = Ax + Bx ∈ R(A) + R(B).
(cont.)
Now,

rank(A + B) = dim R(A + B)
            ≤ dim(R(A) + R(B))                           by (4)
            = dim R(A) + dim R(B) − dim(R(A) ∩ R(B))     by the theorem
            ≤ dim R(A) + dim R(B)
            = rank(A) + rank(B).                         □
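A quick numerical check of the corollary (NumPy assumed available; illustrative only): two rank-one matrices whose sum has rank two, so the bound holds with equality because their ranges intersect only in {0}.

```python
import numpy as np

u = np.array([[1.0], [2.0], [3.0]])
v = np.array([[0.0], [1.0], [1.0]])
A = u @ u.T  # rank 1
B = v @ v.T  # rank 1

lhs = np.linalg.matrix_rank(A + B)
rhs = np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B)
print(lhs, rhs)  # 2 2
assert lhs <= rhs
```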
More About Rank
Outline
1 Spaces and Subspaces
2 Four Fundamental Subspaces
3 Linear Independence
4 Bases and Dimension
5 More About Rank
6 Classical Least Squares
7 Kriging as best linear unbiased predictor
More About Rank

We know that A ∼ B if and only if rank(A) = rank(B). Thus (for invertible P, Q), PAQ = B implies rank(A) = rank(PAQ).

As we now show, it is a general fact that multiplication by a nonsingular matrix does not change the rank of a given matrix. Moreover, multiplication by an arbitrary matrix can only lower the rank.

Theorem
Let A be an m × n matrix, and let B be n × p. Then

rank(AB) = rank(B) − dim(N(A) ∩ R(B)).

Remark
Note that if A is nonsingular, then N(A) = {0}, so that dim(N(A) ∩ R(B)) = 0 and rank(AB) = rank(B).
Proof
Let S = {x1, x2, ..., xs} be a basis for N(A) ∩ R(B).

Since N(A) ∩ R(B) ⊆ R(B), we know that

dim(R(B)) = s + t, for some t ≥ 0.

We can construct an extension set such that

B = {x1, x2, ..., xs, z1, z2, ..., zt}

is a basis for R(B).
(cont.)
If we can show that dim(R(AB)) = t, then

rank(B) = dim(R(B)) = s + t = dim(N(A) ∩ R(B)) + dim(R(AB)),

and we are done.

Therefore, we now show that dim(R(AB)) = t. In particular, we show that

T = {Az1, Az2, ..., Azt}

is a basis for R(AB).

We do this by showing that
1 T is a spanning set for R(AB),
2 T is linearly independent.
(cont.)
Spanning set: Consider an arbitrary b ∈ R(AB). It can be written as

b = ABy for some y.

But then By ∈ R(B), so that

By = Σ_{i=1}^s ξ_i x_i + Σ_{j=1}^t η_j z_j

and

b = ABy = Σ_{i=1}^s ξ_i Ax_i + Σ_{j=1}^t η_j Az_j = Σ_{j=1}^t η_j Az_j

since x_i ∈ N(A).
(cont.)
Linear independence: Let's use the definition of linear independence and look at

Σ_{i=1}^t α_i Az_i = 0  ⟺  A Σ_{i=1}^t α_i z_i = 0.

The identity on the right implies that Σ_{i=1}^t α_i z_i ∈ N(A). But we also have z_i ∈ B, i.e., Σ_{i=1}^t α_i z_i ∈ R(B). And so together

Σ_{i=1}^t α_i z_i ∈ N(A) ∩ R(B).
(cont.)
Now, since S = {x1, ..., xs} is a basis for N(A) ∩ R(B), we have

Σ_{i=1}^t α_i z_i = Σ_{j=1}^s β_j x_j  ⟺  Σ_{i=1}^t α_i z_i − Σ_{j=1}^s β_j x_j = 0.

But B = {x1, ..., xs, z1, ..., zt} is linearly independent, so that α_1 = · · · = α_t = β_1 = · · · = β_s = 0, and therefore T is also linearly independent. □
It turns out that dim(N(A) ∩ R(B)) is relatively difficult to determine. Therefore, the following upper and lower bounds for rank(AB) are useful.

Theorem
Let A be an m × n matrix, and let B be n × p. Then
1. rank(AB) ≤ min{rank(A), rank(B)},
2. rank(AB) ≥ rank(A) + rank(B) − n.
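Both bounds are easy to check numerically. A minimal NumPy sketch; the matrices here are hypothetical, chosen only so that A and B are rank-deficient:

```python
import numpy as np

# Hypothetical example: A is 3x4 with rank 2 (row 3 = row 1 + row 2),
# B is 4x2 with rank 2; the inner dimension is n = 4.
A = np.array([[1., 0., 1., 0.],
              [0., 1., 1., 0.],
              [1., 1., 2., 0.]])
B = np.array([[1., 0.],
              [0., 1.],
              [0., 0.],
              [0., 0.]])

rA = np.linalg.matrix_rank(A)      # 2
rB = np.linalg.matrix_rank(B)      # 2
rAB = np.linalg.matrix_rank(A @ B)
n = A.shape[1]                     # inner dimension n = 4

assert rAB <= min(rA, rB)          # upper bound (1)
assert rAB >= rA + rB - n          # lower bound (2)
```

Note that the lower bound 2 + 2 − 4 = 0 is far from tight here; it becomes informative when A and B have high rank relative to n.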
Proof of (1)
We show that rank(AB) ≤ rank(A) and rank(AB) ≤ rank(B).

The previous theorem states

rank(AB) = rank(B) − dim(N(A) ∩ R(B)) ≤ rank(B),

since dim(N(A) ∩ R(B)) ≥ 0.

Similarly,

rank(AB) = rank((AB)^T) = rank(B^T A^T) ≤ rank(A^T) = rank(A),

where the inequality holds as above. To make the bound as tight as possible we take the smaller of the two upper bounds.
Proof of (2)
We begin by noting that N(A) ∩ R(B) ⊆ N(A). Therefore,

dim(N(A) ∩ R(B)) ≤ dim(N(A)) = n − rank(A).

But then (using the previous theorem)

rank(AB) = rank(B) − dim(N(A) ∩ R(B)) ≥ rank(B) − n + rank(A). □
To prepare for our study of least squares solutions, where the matrices A^T A and AA^T are important, we prove

Lemma
Let A be a real m × n matrix. Then
1. rank(A^T A) = rank(AA^T) = rank(A),
2. R(A^T A) = R(A^T) and R(AA^T) = R(A),
3. N(A^T A) = N(A) and N(AA^T) = N(A^T).
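The rank and null space statements of the lemma can be sanity-checked with NumPy; the matrix below is a hypothetical rank-deficient example (its third column is the sum of the first two):

```python
import numpy as np

# Hypothetical 4x3 matrix with rank 2: column 3 = column 1 + column 2.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])

# (1): A^T A and A A^T share the rank of A.
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(A.T @ A) == 2
assert np.linalg.matrix_rank(A @ A.T) == 2

# (3): a null vector of A, x = (1, 1, -1)^T, is also a null vector of A^T A.
x = np.array([1., 1., -1.])
assert np.allclose(A @ x, 0)
assert np.allclose(A.T @ A @ x, 0)
```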
Proof
From our earlier theorem we know

rank(A^T A) = rank(A) − dim(N(A^T) ∩ R(A)).

For (1) to be true we need to show dim(N(A^T) ∩ R(A)) = 0, i.e., N(A^T) ∩ R(A) = {0}.

This is true since

x ∈ N(A^T) ∩ R(A) =⇒ A^T x = 0 and x = Ay for some y.

Therefore (using x^T = y^T A^T)

x^T x = y^T A^T x = 0.

But

x^T x = 0 ⇐⇒ ∑_{i=1}^m x_i² = 0 =⇒ x = 0.
The identity rank(AA^T) = rank(A^T) is obtained by switching the roles of A and A^T, and then we use rank(A^T) = rank(A).

The first part of (2) follows from R(A^T A) ⊆ R(A^T) (see HW) and

dim(R(A^T A)) = rank(A^T A) = rank(A^T) = dim(R(A^T)),

where the middle equality uses (1), since for M ⊆ N with dim M = dim N one has M = N (from an earlier theorem).

The other part of (2) follows by switching A and A^T.
The first part of (3) follows from N(A) ⊆ N(A^T A) (see HW) and

dim(N(A)) = n − rank(A) = n − rank(A^T A) = dim(N(A^T A)),

using the same reasoning as above.

The other part of (3) follows by switching A and A^T. □
Connection to least squares and normal equations

Consider a (possibly inconsistent) linear system

Ax = b

with m × n matrix A (and b ∉ R(A) if inconsistent).

To find a "solution" we multiply both sides by A^T to get the normal equations

A^T A x = A^T b,

where A^T A is an n × n matrix.
Theorem
Let A be an m × n matrix, b an m-vector, and consider the normal equations

A^T A x = A^T b

associated with Ax = b.
1. The normal equations are always consistent, i.e., for every A and b there exists at least one x such that A^T A x = A^T b.
2. If Ax = b is consistent, then A^T A x = A^T b has the same solution set (the least squares solution of Ax = b).
3. A^T A x = A^T b has a unique solution if and only if rank(A) = n. Then

   x = (A^T A)^{-1} A^T b,

   regardless of whether Ax = b is consistent or not.
4. If Ax = b is consistent and has a unique solution, then the same holds for A^T A x = A^T b, and x = (A^T A)^{-1} A^T b.
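A small worked example in NumPy (hypothetical data: fitting a line through three non-collinear points, so Ax = b is inconsistent but rank(A) = n = 2 and the least squares solution is unique):

```python
import numpy as np

# Fit y = c0 + c1 * t through (0,0), (1,1), (2,1); the points are not
# collinear, so b is not in R(A) and the system is inconsistent.
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([0., 1., 1.])

# Unique least squares solution from the normal equations (rank(A) = n).
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Agrees with NumPy's dedicated least squares routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_normal, x_lstsq)
```

Here x_normal works out to (1/6, 1/2): the intercept and slope of the least squares line.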
Proof
(1) follows from our previous lemma, i.e.,

A^T b ∈ R(A^T) = R(A^T A).

To show (2) we assume that p is some particular solution of Ax = b, i.e., Ap = b. If we multiply by A^T, then

A^T A p = A^T b,

so that p is also a solution of the normal equations.
Now, the general solution of Ax = b is from the set (see Problem 2 on HW#4)

S = p + N(A).

Moreover, the general solution of A^T A x = A^T b is of the form

p + N(A^T A) = p + N(A) = S,

where the middle equality uses the lemma.
For (3) we want to show that A^T A x = A^T b has a unique solution if and only if rank(A) = n.

What we know immediately is that A^T A x = A^T b has a unique solution if and only if rank(A^T A) = n. Since we showed earlier that rank(A^T A) = rank(A), this part is done.

Now, if rank(A^T A) = n, we know that A^T A is invertible (even though A^T and A may not be) and therefore

A^T A x = A^T b ⇐⇒ x = (A^T A)^{-1} A^T b.

To show (4) we note that Ax = b has a unique solution if and only if rank(A) = n. But rank(A^T A) = rank(A), and the rest follows from (3). □
![Page 297: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/297.jpg)
More About Rank
Remark
The normal equations are not recommended for serious computations since they are often rather ill-conditioned; one can show that

cond(A^T A) = cond(A)^2.

There's an example in [Mey00] that illustrates this fact.
[email protected] MATH 532 96
Historical definition of rank
Let A be an m × n matrix. Then A has rank r if there exists at least one nonsingular r × r submatrix of A (and none larger).

Example
The matrix

A = ( 1 2 2 3 1
      2 4 4 6 2
      3 6 6 9 6
      1 2 4 5 3 )

cannot have rank 4 since rows one and two are linearly dependent.

But rank(A) ≥ 2 since

( 9 6
  5 3 )

is nonsingular.
[email protected] MATH 532 97
Example (cont.)

In fact, rank(A) = 3 since

( 4 6 2
  6 9 6
  4 5 3 )

is nonsingular.

Note that other 3 × 3 submatrices may well be singular, such as

( 1 2 2
  2 4 4
  3 6 6 ).
[email protected] MATH 532 98
Earlier we showed that

rank(AB) ≤ rank(A),

i.e., multiplication by another matrix does not increase the rank of a given matrix, so we can't "fix" a singular system by multiplication. Addition of a small matrix, on the other hand, can only preserve or increase the rank:

Theorem
Let A and E be m × n matrices. Then

rank(A + E) ≥ rank(A),

provided the entries of E are "sufficiently small".
[email protected] MATH 532 99
This theorem has at least two fundamental consequences of practical importance:

Beware! A theoretically singular system may become nonsingular, i.e., have a "solution", just due to round-off error.

We may want to intentionally "fix" a singular system so that it has a "solution". One such strategy is known as Tikhonov regularization, i.e.,

Ax = b −→ (A + µI)x = b,

where µ is a (small) regularization parameter.
[email protected] MATH 532 100
Proof
We assume that rank(A) = r and that we have nonsingular P and Q such that we can convert A to rank normal form, i.e.,

PAQ = ( I_r  O
        O    O ).

Then, formally,

PEQ = ( E11  E12
        E21  E22 )

with appropriate blocks E_ij.

This allows us to write

P(A + E)Q = ( I_r + E11  E12
              E21        E22 ).
[email protected] MATH 532 101
(cont.)
Now, we note that

(I − B)(I + B + B^2 + · · · + B^{k−1}) = I − B^k → I,

provided the entries of B are "sufficiently small" (i.e., so that B^k → O for k → ∞). Therefore (I − B)^{-1} exists.

This technique is known as the Neumann series expansion of the inverse of I − B.
[email protected] MATH 532 102
(cont.)

Now, letting B = −E11, we know that (I_r + E11)^{-1} exists and we can write

( I_r                   O       ) ( I_r + E11  E12 ) ( I_r  −(I_r + E11)^{-1} E12 )   ( I_r + E11  O )
( −E21(I_r + E11)^{-1}  I_{m−r} ) ( E21        E22 ) ( O    I_{n−r}               ) = ( O          S ),

where S = E22 − E21(I_r + E11)^{-1} E12 is the Schur complement of I_r + E11 in P(A + E)Q.
[email protected] MATH 532 103
(cont.)
The Schur complement calculation shows that

A + E ∼ ( I_r + E11  O
          O          S ).

But then this rank normal form with invertible diagonal block I_r + E11 tells us

rank(A + E) = rank(I_r + E11) + rank(S)
            = rank(A) + rank(S)
            ≥ rank(A).    □
[email protected] MATH 532 104
Classical Least Squares
Outline
1 Spaces and Subspaces
2 Four Fundamental Subspaces
3 Linear Independence
4 Bases and Dimension
5 More About Rank
6 Classical Least Squares
7 Kriging as best linear unbiased predictor
[email protected] MATH 532 105
[email protected] MATH 532 106
Linear least squares (linear regression)
Given: data {(t1,b1), (t2,b2), . . . , (tm,bm)}
Find: “best fit” by a line
t 1 2 3 4 5b 1.3 3.5 4.2 5.0 7.0
Idea for best fitMinimize the sum of the squares of the vertical distances of line fromthe data points.
More precisely, let

f(t) = α + βt

with α, β such that

∑_{i=1}^m ε_i^2 = ∑_{i=1}^m (f(t_i) − b_i)^2 = ∑_{i=1}^m (α + βt_i − b_i)^2 = G(α, β) −→ min.

From calculus, a necessary (and sufficient) condition for the minimum is

∂G(α, β)/∂α = 0,    ∂G(α, β)/∂β = 0,

where

∂G(α, β)/∂α = 2 ∑_{i=1}^m (α + βt_i − b_i),    ∂G(α, β)/∂β = 2 ∑_{i=1}^m (α + βt_i − b_i) t_i.
[email protected] MATH 532 107
Equivalently,

( ∑_{i=1}^m 1 ) α + ( ∑_{i=1}^m t_i ) β = ∑_{i=1}^m b_i
( ∑_{i=1}^m t_i ) α + ( ∑_{i=1}^m t_i^2 ) β = ∑_{i=1}^m b_i t_i,

which can be written as

Qx = y

with

Q = ( ∑_{i=1}^m 1    ∑_{i=1}^m t_i
      ∑_{i=1}^m t_i  ∑_{i=1}^m t_i^2 ),

x = ( α
      β ),

y = ( ∑_{i=1}^m b_i
      ∑_{i=1}^m b_i t_i ).
[email protected] MATH 532 108
We can write each of these sums as inner products:

∑_{i=1}^m 1 = 1^T 1,    ∑_{i=1}^m t_i = 1^T t = t^T 1,    ∑_{i=1}^m t_i^2 = t^T t,
∑_{i=1}^m b_i = 1^T b = b^T 1,    ∑_{i=1}^m b_i t_i = b^T t = t^T b,

where

1^T = (1 · · · 1),    t^T = (t1 · · · tm),    b^T = (b1 · · · bm).

With this notation we have

Qx = y  ⇐⇒  ( 1^T 1  1^T t ) x = ( 1^T b )
             ( t^T 1  t^T t )     ( t^T b )

        ⇐⇒  A^T Ax = A^T b,    where A^T = ( 1^T
                                             t^T ),    A = ( 1  t ).
[email protected] MATH 532 109
Therefore we can find the parameters of the line, x = ( α
                                                        β ), by solving the square linear system

A^T Ax = A^T b.

Also note that since ε_i = α + βt_i − b_i we have

ε = (ε_1, . . . , ε_m)^T = 1α + tβ − b = Ax − b.

This implies that

G(α, β) = ∑_{i=1}^m ε_i^2 = ε^T ε = (Ax − b)^T (Ax − b).
[email protected] MATH 532 110
Example. Data:

t:  -1   0   1   2   3   4   5   6
b:  10   9   7   5   4   3   0  -1

$$A^T A x = A^T b \iff \begin{pmatrix} \sum_{i=1}^{8} 1 & \sum_{i=1}^{8} t_i \\ \sum_{i=1}^{8} t_i & \sum_{i=1}^{8} t_i^2 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{8} b_i \\ \sum_{i=1}^{8} b_i t_i \end{pmatrix}$$

$$\iff \begin{pmatrix} 8 & 20 \\ 20 & 92 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} 37 \\ 25 \end{pmatrix} \implies \alpha \approx 8.643, \quad \beta \approx -1.607.$$

The best-fit line to the given data is therefore

$$f(t) \approx 8.643 - 1.607\,t.$$
[email protected] MATH 532 111
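The arithmetic in this example is easy to check numerically. The following is a minimal sketch using NumPy (NumPy is not part of the notes, just a convenient tool): it builds the design matrix with columns $\mathbf{1}$ and $t$ and solves the $2 \times 2$ normal equations for the example data.

```python
import numpy as np

# Line fit f(t) = alpha + beta*t for the data in the example above.
t = np.array([-1, 0, 1, 2, 3, 4, 5, 6], dtype=float)
b = np.array([10, 9, 7, 5, 4, 3, 0, -1], dtype=float)

A = np.column_stack([np.ones_like(t), t])   # columns: 1 and t

# Normal equations A^T A x = A^T b (here a 2x2 system)
x = np.linalg.solve(A.T @ A, A.T @ b)
alpha, beta = x
print(round(alpha, 3), round(beta, 3))      # 8.643 -1.607
```

For larger or ill-conditioned problems one would typically call `np.linalg.lstsq(A, b)` instead of forming $A^T A$ explicitly, since squaring the matrix squares its condition number.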
![Page 353: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/353.jpg)
Classical Least Squares
General Least Squares

The general least squares problem behaves analogously to the linear example.

Theorem. Let $A$ be a real $m \times n$ matrix and $b$ an $m$-vector. Any vector $x$ that minimizes the squared norm of the residual $Ax - b$, i.e., that minimizes

$$G(x) = (Ax - b)^T (Ax - b),$$

is called a least squares solution of $Ax = b$.

The set of all least squares solutions is obtained by solving the normal equations

$$A^T A x = A^T b.$$

Moreover, a unique solution exists if and only if $\operatorname{rank}(A) = n$, in which case

$$x = (A^T A)^{-1} A^T b.$$
[email protected] MATH 532 112
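As a sanity check of the theorem, the following NumPy sketch (with randomly generated data, chosen only for illustration) compares the normal-equations solution against NumPy's built-in least squares routine for a full-column-rank $A$, and confirms that the residual is orthogonal to the columns of $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10, 3
A = rng.standard_normal((m, n))   # a random A has rank n with probability 1
b = rng.standard_normal(m)

# Unique least squares solution from the normal equations (rank(A) = n)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Reference solution from NumPy's dedicated least squares routine
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# The two agree, and the residual Ax - b is orthogonal to range(A)
print(np.allclose(x_ne, x_ls), np.allclose(A.T @ (A @ x_ne - b), 0.0))
```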
![Page 355: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/355.jpg)
Classical Least Squares
Proof. The statement about uniqueness follows directly from our earlier theorem on p. 92 on the normal equations.

To characterize the least squares solutions we first show that if $x$ minimizes $G(x)$, then $x$ satisfies $A^T A x = A^T b$.

As in our earlier example, a necessary condition for the minimum is

$$\frac{\partial G(x)}{\partial x_i} = 0, \quad i = 1, \ldots, n.$$

Let's first work out what $G(x)$ looks like:

$$G(x) = (Ax - b)^T (Ax - b) = x^T A^T A x - x^T A^T b - b^T A x + b^T b = x^T A^T A x - 2 x^T A^T b + b^T b,$$

since $b^T A x = (b^T A x)^T = x^T A^T b$ is a scalar.
[email protected] MATH 532 113
![Page 362: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/362.jpg)
Classical Least Squares
(cont.) Therefore

$$\frac{\partial G(x)}{\partial x_i} = \frac{\partial x^T}{\partial x_i} A^T A x + x^T A^T A \frac{\partial x}{\partial x_i} - 2 \frac{\partial x^T}{\partial x_i} A^T b = e_i^T A^T A x + x^T A^T A e_i - 2 e_i^T A^T b = 2 e_i^T A^T A x - 2 e_i^T A^T b,$$

since $x^T A^T A e_i = (x^T A^T A e_i)^T = e_i^T A^T A x$ is a scalar.

This means that

$$\frac{\partial G(x)}{\partial x_i} = 0 \iff (A^T)_{i*} A x = (A^T)_{i*} b.$$

If we collect all such conditions (for $i = 1, \ldots, n$) in one linear system, we get

$$A^T A x = A^T b.$$
[email protected] MATH 532 114
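The gradient formula just derived, $\nabla G(x) = 2A^T A x - 2A^T b$, can be verified against finite differences. A small NumPy sketch with made-up random data (since $G$ is quadratic, central differences agree with the exact gradient up to roundoff):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 3
A = rng.standard_normal((m, n))   # arbitrary test data
b = rng.standard_normal(m)
x = rng.standard_normal(n)

def G(v):
    r = A @ v - b
    return r @ r                  # (Av - b)^T (Av - b)

# Gradient from the derivation: dG/dx_i = 2 e_i^T A^T A x - 2 e_i^T A^T b
grad = 2.0 * (A.T @ A @ x - A.T @ b)

# Central finite differences, one coordinate direction e_i at a time
h = 1e-6
fd = np.array([(G(x + h * e) - G(x - h * e)) / (2.0 * h) for e in np.eye(n)])
```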
![Page 369: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/369.jpg)
Classical Least Squares
(cont.) To verify that we indeed have a minimum, we show that if $z$ is a solution of the normal equations, then $G(z)$ is minimal.

$$G(z) = (Az - b)^T (Az - b) = z^T A^T A z - 2 z^T A^T b + b^T b = z^T (\underbrace{A^T A z - A^T b}_{=\,0}) - z^T A^T b + b^T b = -z^T A^T b + b^T b.$$

Now, for any other $y = z + u$ we have

$$G(y) = (z + u)^T A^T A (z + u) - 2 (z + u)^T A^T b + b^T b = G(z) + u^T A^T A u + \underbrace{z^T A^T A u}_{=\,u^T A^T A z} + u^T A^T A z - 2 u^T \underbrace{A^T b}_{=\,A^T A z} = G(z) + u^T A^T A u \ge G(z),$$

since $u^T A^T A u = \sum_{i=1}^{m} (Au)_i^2 \ge 0$. $\square$
[email protected] MATH 532 115
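The identity $G(z + u) = G(z) + u^T A^T A u$ at the heart of this argument is easy to confirm numerically. A NumPy sketch with random data (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 12, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def G(v):
    r = A @ v - b
    return r @ r

# z solves the normal equations, so G(z) should be the minimum
z = np.linalg.solve(A.T @ A, A.T @ b)

# Any perturbation u increases G by exactly u^T A^T A u = ||Au||^2 >= 0
u = rng.standard_normal(n)
increase = G(z + u) - G(z)
print(increase >= 0.0, np.isclose(increase, (A @ u) @ (A @ u)))  # both True
```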
![Page 379: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/379.jpg)
Classical Least Squares
Remark. Using this framework we can compute least squares fits from any function space that is linear in the unknown parameters.

Example.
1. Let $f(t) = \alpha_0 + \alpha_1 t + \alpha_2 t^2$, i.e., we can use quadratic polynomials (or any other degree).
2. Let $f(t) = \alpha_0 + \alpha_1 \sin t + \alpha_2 \cos t$, i.e., we can use trigonometric polynomials.
3. Let $f(t) = \alpha e^t + \beta \sqrt{t}$, i.e., we can use just about anything we want.
[email protected] MATH 532 116
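To illustrate the remark, the following NumPy sketch fits the trigonometric model from item 2 to synthetic, noiseless data (the sample points and coefficients are invented for the example): each basis function simply contributes one column of the design matrix.

```python
import numpy as np

# Trigonometric model f(t) = a0 + a1*sin(t) + a2*cos(t), fit by least squares.
t = np.linspace(0.0, 2.0 * np.pi, 50)
y = 1.0 + 2.0 * np.sin(t) - 0.5 * np.cos(t)   # synthetic noiseless data

# One design-matrix column per basis function: 1, sin t, cos t
A = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t)])
coef = np.linalg.solve(A.T @ A, A.T @ y)
# coef recovers (1.0, 2.0, -0.5) up to roundoff
```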
![Page 383: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/383.jpg)
Classical Least Squares
Regression in Statistics (BLUE)

One assumes that there is a random process that generates data as a random variable $Y$ of the form

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \ldots + \beta_n X_n,$$

where $X_1, \ldots, X_n$ are (input) random variables and $\beta_0, \beta_1, \ldots, \beta_n$ are unknown parameters.

Now the actually observed data may be affected by noise, i.e.,

$$y = Y + \varepsilon = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \ldots + \beta_n X_n + \varepsilon,$$

where $\varepsilon \sim \mathcal{N}(0, \sigma^2)$ (normally distributed with mean zero and variance $\sigma^2$) is another random variable denoting the noise.

To determine the model parameters $\beta_0, \ldots, \beta_n$ we now look at measurements, i.e.,

$$y_i = \beta_0 + \beta_1 x_{i,1} + \beta_2 x_{i,2} + \ldots + \beta_n x_{i,n} + \varepsilon_i, \quad i = 1, \ldots, m.$$
[email protected] MATH 532 117
![Page 386: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/386.jpg)
Classical Least Squares
In matrix-vector form this gives us

$$y = X\beta + \varepsilon.$$

Now, the least squares solution of $X\beta = y$, i.e., $\hat{\beta} = (X^T X)^{-1} X^T y$, is in fact the best linear unbiased estimator (BLUE) for $\beta$.

To show this one needs the assumption that the error is unbiased, i.e., $E[\varepsilon] = 0$. Then

$$E[y] = E[X\beta + \varepsilon] = E[X\beta] + E[\varepsilon] = X\beta,$$

and therefore

$$E[\hat{\beta}] = E[(X^T X)^{-1} X^T y] = (X^T X)^{-1} X^T E[y] = (X^T X)^{-1} X^T X \beta = \beta,$$

so that the estimator is indeed unbiased.
[email protected] MATH 532 118
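Unbiasedness can be seen in a small Monte Carlo experiment. In this NumPy sketch (the design matrix, true parameters, noise level, and replicate count are all invented for illustration), averaging the least squares estimate over many noisy replicates recovers the true parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 40
X = np.column_stack([np.ones(m), np.linspace(0.0, 1.0, m)])  # simple design
beta = np.array([1.5, -2.0])                                  # "true" parameters
sigma = 0.3                                                   # noise level

# Average the estimator over many noisy replicates: since E[beta_hat] = beta,
# the sample mean of the estimates should approach beta.
reps = 2000
est = np.zeros(2)
for _ in range(reps):
    y = X @ beta + sigma * rng.standard_normal(m)
    est += np.linalg.solve(X.T @ X, X.T @ y)
est /= reps
# est is close to [1.5, -2.0]
```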
![Page 387: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/387.jpg)
Classical Least Squares
In matrix-vector form this gives us
y = Xβ + ε
Now, the least squares solution of Xβ = y , i.e., β = (XT X)−1XT y is infact the best linear unbiased estimator (BLUE) for β.To show this one needs an assumption that the error is unbiased, i.e.,E[ε] = 0.
ThenE[y ] = E[Xβ + ε] = E[Xβ] + E[ε] = Xβ
and therefore
E[β] = E[(XT X)−1XT y ] = (XT X)−1XTE[y ]= (XT X)−1XT Xβ = β,
so that the estimator is indeed unbiased.
[email protected] MATH 532 118
![Page 388: MATH 532: Linear Algebra › ~fass › Notes532_Ch4.pdf · 2015-02-18 · MATH 532: Linear Algebra Chapter 4: Vector Spaces Greg Fasshauer Department of Applied Mathematics Illinois](https://reader034.vdocument.in/reader034/viewer/2022042402/5f1100968e87ed25c537c13c/html5/thumbnails/388.jpg)
Classical Least Squares
In matrix-vector form this gives us
y = Xβ + ε
Now, the least squares solution of Xβ = y , i.e., β = (XT X)−1XT y is infact the best linear unbiased estimator (BLUE) for β.To show this one needs an assumption that the error is unbiased, i.e.,E[ε] = 0.Then
E[y ] = E[Xβ + ε]
= E[Xβ] + E[ε] = Xβ
and therefore
E[β] = E[(XT X)−1XT y ] = (XT X)−1XTE[y ]= (XT X)−1XT Xβ = β,
so that the estimator is indeed unbiased.
[email protected] MATH 532 118
Remark

One can also show (maybe later) that $\hat{\beta}$ has minimal variance among all unbiased linear estimators, so it is the best linear unbiased estimator of the model parameters. The theorem ensuring this is the so-called Gauss-Markov theorem.
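The unbiasedness computation above is easy to check numerically. The following Python sketch averages the least squares estimate over many independent noise draws; the design matrix, the "true" $\beta$, and the noise model are all made-up assumptions, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: a small design matrix X and a "true" beta (both made up)
X = rng.standard_normal((50, 3))
beta_true = np.array([2.0, -1.0, 0.5])

# Average beta_hat = (X^T X)^{-1} X^T y over many draws of zero-mean noise
trials = 5000
beta_hats = np.empty((trials, 3))
for t in range(trials):
    eps = rng.standard_normal(50)        # E[eps] = 0, so E[y] = X beta
    y = X @ beta_true + eps
    beta_hats[t] = np.linalg.lstsq(X, y, rcond=None)[0]

print(beta_hats.mean(axis=0))            # componentwise average of the estimates
```

As the number of trials grows, the average of the estimates approaches $\beta$, in agreement with $\mathbb{E}[\hat{\beta}] = \beta$.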
Kriging as best linear unbiased predictor
Outline
1 Spaces and Subspaces
2 Four Fundamental Subspaces
3 Linear Independence
4 Bases and Dimension
5 More About Rank
6 Classical Least Squares
7 Kriging as best linear unbiased predictor
Kriging: a regression approach

Assume: the approximate value of a realization of a zero-mean (Gaussian) random field is given by a linear predictor of the form

$$\hat{Y}_x = \sum_{j=1}^{N} Y_{x_j} w_j(x) = w(x)^T Y,$$

where $\hat{Y}_x$ and the $Y_{x_j}$ are random variables, $Y = (Y_{x_1} \cdots Y_{x_N})^T$, and $w(x) = (w_1(x) \cdots w_N(x))^T$ is a vector of weight functions at $x$.

Since all of the $Y_{x_j}$ have zero mean, the predictor $\hat{Y}_x$ is automatically unbiased.

Goal: to compute "optimal" weights $w_j^\star(\cdot)$, $j = 1, \ldots, N$. To this end, consider the mean-squared error (MSE) of the predictor, i.e.,

$$\mathrm{MSE}(\hat{Y}_x) = \mathbb{E}\left[\left(Y_x - w(x)^T Y\right)^2\right].$$

We now present some details (see [FM15]).
Covariance Kernel

We need the covariance kernel $K$ of a random field $Y$ with mean $\mu(x)$. It is defined via

$$\begin{aligned}
\sigma^2 K(x, z) = \mathrm{Cov}(Y_x, Y_z) &= \mathbb{E}\left[(Y_x - \mu(x))(Y_z - \mu(z))\right] \\
&= \mathbb{E}\left[(Y_x - \mathbb{E}[Y_x])(Y_z - \mathbb{E}[Y_z])\right] \\
&= \mathbb{E}\left[Y_x Y_z - Y_x \mathbb{E}[Y_z] - \mathbb{E}[Y_x] Y_z + \mathbb{E}[Y_x]\mathbb{E}[Y_z]\right] \\
&= \mathbb{E}[Y_x Y_z] - \mathbb{E}[Y_x]\mathbb{E}[Y_z] - \mathbb{E}[Y_x]\mathbb{E}[Y_z] + \mathbb{E}[Y_x]\mathbb{E}[Y_z] \\
&= \mathbb{E}[Y_x Y_z] - \mathbb{E}[Y_x]\mathbb{E}[Y_z] = \mathbb{E}[Y_x Y_z] - \mu(x)\mu(z).
\end{aligned}$$

Therefore, the variance of the random field,

$$\mathrm{Var}(Y_x) = \mathbb{E}[Y_x^2] - \mathbb{E}[Y_x]^2 = \mathbb{E}[Y_x^2] - \mu^2(x),$$

corresponds to the "diagonal" of the covariance, i.e., $\mathrm{Var}(Y_x) = \sigma^2 K(x, x)$.
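The shortcut formula $\mathrm{Cov}(Y_x, Y_z) = \mathbb{E}[Y_x Y_z] - \mu(x)\mu(z)$ derived above can be sanity-checked by Monte Carlo. In this Python sketch the means and the true covariance matrix are arbitrary made-up choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two jointly Gaussian variables standing in for Y_x and Y_z (values made up)
mu = np.array([1.0, -0.5])           # means mu(x), mu(z)
C = np.array([[2.0, 0.8],
              [0.8, 1.5]])           # true covariance matrix, Cov(Yx, Yz) = 0.8
samples = rng.multivariate_normal(mu, C, size=200_000)
Yx, Yz = samples[:, 0], samples[:, 1]

# Cov(Yx, Yz) = E[Yx Yz] - E[Yx] E[Yz]
cov_identity = np.mean(Yx * Yz) - np.mean(Yx) * np.mean(Yz)
print(cov_identity)                  # approaches 0.8 as the sample grows
```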
Let's now work out the MSE:

$$\begin{aligned}
\mathrm{MSE}(\hat{Y}_x) &= \mathbb{E}\left[\left(Y_x - w(x)^T Y\right)^2\right] \\
&= \mathbb{E}[Y_x Y_x] - 2\,\mathbb{E}\left[Y_x w(x)^T Y\right] + \mathbb{E}\left[w(x)^T Y Y^T w(x)\right].
\end{aligned}$$

Now use $\mathbb{E}[Y_x Y_z] = \sigma^2 K(x, z)$ (the covariance, since $Y$ is centered):

$$\mathrm{MSE}(\hat{Y}_x) = \sigma^2 K(x, x) - 2 w(x)^T \left(\sigma^2 k(x)\right) + w(x)^T \left(\sigma^2 \mathsf{K}\right) w(x),$$

where

- $\sigma^2 k(x) = \sigma^2 \left(k_1(x) \cdots k_N(x)\right)^T$ with $\sigma^2 k_j(x) = \sigma^2 K(x, x_j) = \mathbb{E}[Y_x Y_{x_j}]$,
- $\mathsf{K}$ is the covariance matrix with entries $K(x_i, x_j)$, so that $\sigma^2 K(x_i, x_j) = \mathbb{E}[Y_{x_i} Y_{x_j}]$.

Finding the minimum MSE is straightforward. Differentiating with respect to $w(x)$, equating to zero, and canceling the common factor $\sigma^2$ yields

$$-2 k(x) + 2 \mathsf{K} w(x) = 0,$$

and so the optimal weight vector is $w^\star(x) = \mathsf{K}^{-1} k(x)$.
We have shown that the (simple) kriging predictor

$$\hat{Y}_x = k(x)^T \mathsf{K}^{-1} Y$$

is the best (in the MSE sense) linear unbiased predictor (BLUP).

Since we are given the observations $y$ as realizations of $Y$, we can compute the prediction

$$\hat{y}_x = k(x)^T \mathsf{K}^{-1} y.$$
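A minimal numerical sketch of the prediction $\hat{y}_x = k(x)^T \mathsf{K}^{-1} y$. The Gaussian covariance kernel, its length scale, the observation sites, and the data below are all made-up assumptions, not taken from the notes:

```python
import numpy as np

def kernel(x, z, ell=0.5):
    # Hypothetical Gaussian covariance kernel K(x, z); ell is an assumed length scale
    return np.exp(-(x - z) ** 2 / (2 * ell ** 2))

X = np.array([0.0, 0.3, 0.7, 1.0])       # observation sites x_1, ..., x_N
y = np.sin(2 * np.pi * X)                # observed realizations of Y (made up)
K = kernel(X[:, None], X[None, :])       # covariance matrix, entries K(x_i, x_j)

def predict(x):
    k_x = kernel(x, X)                   # vector k(x) with entries K(x, x_j)
    w = np.linalg.solve(K, k_x)          # optimal weights w*(x) = K^{-1} k(x)
    return w @ y                         # prediction y_x = k(x)^T K^{-1} y

print(predict(0.3))                      # reproduces the datum at a site
print(predict(0.5))                      # kriging prediction between sites
```

At an observation site $x_i$ the weight vector $\mathsf{K}^{-1} k(x_i)$ reduces to the $i$-th standard basis vector, so simple kriging reproduces the observed data exactly.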
The MSE of the kriging predictor with optimal weights $w^\star(\cdot)$,

$$\mathbb{E}\left[\left(\hat{Y}_x - Y_x\right)^2\right] = \sigma^2\left(K(x, x) - k(x)^T \mathsf{K}^{-1} k(x)\right),$$

is known as the kriging variance.

It allows us to give confidence intervals for our prediction. It also gives rise to a criterion for choosing an optimal parametrization of the family of covariance kernels used for prediction.

Remark

For Gaussian random fields the BLUP is also the best nonlinear unbiased predictor (see, e.g., [BTA04, Chapter 2]).
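The kriging variance can be computed alongside the prediction to produce pointwise confidence intervals. In this Python sketch the Gaussian kernel, its length scale, the sites, the data, and the process variance $\sigma^2$ are all made-up assumptions:

```python
import numpy as np

def kernel(x, z, ell=0.5):
    # Hypothetical Gaussian covariance kernel; ell is an assumed length scale
    return np.exp(-(x - z) ** 2 / (2 * ell ** 2))

X = np.array([0.0, 0.3, 0.7, 1.0])        # observation sites (made up)
y = np.sin(2 * np.pi * X)                 # observed data (made up)
sigma2 = 1.0                              # assumed process variance sigma^2
K = kernel(X[:, None], X[None, :])        # covariance matrix

def predict_with_variance(x):
    k_x = kernel(x, X)
    w = np.linalg.solve(K, k_x)           # w*(x) = K^{-1} k(x)
    mean = w @ y                          # kriging prediction
    var = sigma2 * (kernel(x, x) - k_x @ w)   # kriging variance
    return mean, var

mean, var = predict_with_variance(0.5)
lo, hi = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)  # 95% interval
print(mean, var, (lo, hi))
```

The kriging variance vanishes at the observation sites (the prediction is exact there) and grows toward $\sigma^2 K(x, x)$ far from the data.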
Remark

1. The simple kriging approach just described is precisely how Krige [Kri51] introduced the method: the unknown value to be predicted is given by a weighted average of the observed values, where the weights depend on the prediction location. Usually one assigns a smaller weight to observations farther away from $x$. The latter statement implies that one should use kernels whose associated weights decay away from $x$. Positive definite translation-invariant kernels have this property.

2. More advanced kriging variants are discussed in papers such as [SWMW89, SSS13], or in books such as [Cre93, Ste99, BTA04].
Appendix: References

[BTA04] A. Berlinet and C. Thomas-Agnan, Reproducing Kernel Hilbert Spaces in Probability and Statistics, Kluwer, Dordrecht, 2004.

[Cre93] N. Cressie, Statistics for Spatial Data, revised ed., Wiley-Interscience, New York, 1993.

[FM15] G. E. Fasshauer and M. J. McCourt, Kernel-based Approximation Methods using MATLAB, Interdisciplinary Mathematical Sciences, vol. 19, World Scientific Publishing, Singapore, 2015.

[Kri51] D. G. Krige, A statistical approach to some basic mine valuation problems on the Witwatersrand, J. Chem. Met. & Mining Soc. S. Africa 52 (1951), no. 6, 119–139.

[Mey00] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, PA, 2000.

[SSS13] M. Scheuerer, R. Schaback, and M. Schlather, Interpolation of spatial data — a stochastic or a deterministic problem?, Eur. J. Appl. Math. 24 (2013), no. 4, 601–629.

[Ste99] M. L. Stein, Interpolation of Spatial Data: Some Theory for Kriging, Springer, New York, 1999.

[SWMW89] J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn, Design and analysis of computer experiments, Statist. Sci. 4 (1989), no. 4, 409–423.