nm-slides-b-and-w
TRANSCRIPT
7/28/2019
Numerical Methods
Aaron Naiman, Jerusalem College of Technology
[email protected], http://jct.ac.il/naiman
based on: Numerical Mathematics and Computing
by Cheney & Kincaid, ©1994 Brooks/Cole Publishing Company
ISBN 0-534-20112-1
Copyright ©2011 by A. E. Naiman
Taylor Series
Definitions and Theorems
Examples
Proximity of x to c
Additional Notes
Copyright ©2011 by A. E. Naiman, NM Slides: Taylor Series, p. 1
Motivation
Sought: cos (0.1)
Missing: calculator or lookup table
Known: cos for another (nearby) value, i.e., at 0
Also known: lots of (all) derivatives at 0
Can we use them to approximate cos (0.1)?
What will be the worst error of our approximation?
These techniques are used by computers, calculators, tables.
Taylor Series
Series definition: if f⁽ᵏ⁾(c) exists, k = 0, 1, 2, ..., then:

    f(x) ≈ f(c) + f'(c)(x − c) + f''(c)/2! · (x − c)² + ···
         = Σ_{k=0}^{∞} f⁽ᵏ⁾(c)/k! · (x − c)ᵏ

c is a constant, about which much is known (the f⁽ᵏ⁾(c))
x is a variable near c, and f(x) is sought
With c = 0: Maclaurin series
What is the maximum error if we stop after n terms?
Real life: crowd estimation: 100K ± 10K vs. 100K ± 1K
Key NM questions: What is the estimate? What is its max error?
Taylor Series: cos x

[Figure: cos x on [−4, 4] with the Maclaurin approximations 1, 1 − x²/2!, 1 − x²/2! + x⁴/4!, and 1 − x²/2! + x⁴/4! − x⁶/6!]

Better and better approximation, near c, and away.
Taylor's Theorem

Theorem: If f ∈ Cⁿ⁺¹[a, b] then

    f(x) = Σ_{k=0}^{n} f⁽ᵏ⁾(c)/k! · (x − c)ᵏ + f⁽ⁿ⁺¹⁾(ξ(x))/(n + 1)! · (x − c)ⁿ⁺¹

where x, c ∈ [a, b], ξ(x) ∈ open interval between x and c

Notes:
  f ∈ C(X) means f is continuous on X
  f ∈ Cᵏ(X) means f, f', f'', f⁽³⁾, ..., f⁽ᵏ⁾ are continuous on X
  ξ = ξ(x), i.e., a point whose position is a function of x
  Error term is just like the other terms, with k := n + 1
  ξ-term is the truncation error, due to series termination
Taylor Series: Procedure

Writing it out, step-by-step:
  write formula for f⁽ᵏ⁾(x)
  choose c (if not already specified)
  write out summation and error term (note: sometimes easier to write out a few terms)

Things to (possibly) prove by analyzing worst case ξ:
  letting n → ∞, LHS remains f(x)
  summation becomes infinite Taylor series
  if error term → 0, infinite Taylor series represents f(x)
  for given n, we can estimate max of error term
Taylor Series
Definitions and Theorems
Examples
Proximity of x to c
Additional Notes
Taylor Series: eˣ

f(x) = eˣ, |x| < ∞;  f⁽ᵏ⁾(x) = eˣ, ∀k
Choose c := 0
We have

    eˣ = Σ_{k=0}^{n} xᵏ/k! + e^{ξ(x)}/(n + 1)! · xⁿ⁺¹

As n → ∞, take worst case ξ (just less than x): error term → 0 (why?)

    eˣ = Σ_{k=0}^{∞} xᵏ/k! = 1 + x + x²/2! + x³/3! + ···
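The vanishing of the error term is easy to check numerically. A minimal sketch in Python (the function name is ours), comparing Maclaurin partial sums against math.exp:

```python
import math

def exp_maclaurin(x, n):
    """Partial sum sum_{k=0}^{n} x^k / k! of the Maclaurin series for e^x."""
    total, term = 0.0, 1.0
    for k in range(n + 1):
        total += term
        term *= x / (k + 1)  # turn x^k/k! into x^{k+1}/(k+1)!
    return total

x = 1.5
for n in (2, 5, 10, 15):
    approx = exp_maclaurin(x, n)
    print(n, approx, abs(approx - math.exp(x)))  # error shrinks rapidly with n
```

Accumulating the term multiplicatively avoids computing factorials and powers from scratch at every k.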
Taylor Series: sin x

f(x) = sin x, |x| < ∞;  f⁽ᵏ⁾(x) = sin(x + kπ/2), ∀k;  c := 0
We have

    sin x = Σ_{k=0}^{n} sin(kπ/2)/k! · xᵏ + sin(ξ(x) + (n + 1)π/2)/(n + 1)! · xⁿ⁺¹

Error term → 0 as n → ∞
Even k terms are zero ⟹ ℓ = 0, 1, 2, ..., and k → 2ℓ + 1

    sin x = Σ_{ℓ=0}^{∞} sin((2ℓ + 1)π/2)/(2ℓ + 1)! · x²ℓ⁺¹ = Σ_{k=0}^{∞} (−1)ᵏ x²ᵏ⁺¹/(2k + 1)! = x − x³/3! + x⁵/5! − ···
Taylor Series: cos x

f(x) = cos x, |x| < ∞;  f⁽ᵏ⁾(x) = cos(x + kπ/2), ∀k;  c := 0
We have

    cos x = Σ_{k=0}^{n} cos(kπ/2)/k! · xᵏ + cos(ξ(x) + (n + 1)π/2)/(n + 1)! · xⁿ⁺¹

Error term → 0 as n → ∞
Odd k terms are zero ⟹ ℓ = 0, 1, 2, ..., and k → 2ℓ

    cos x = Σ_{ℓ=0}^{∞} cos(ℓπ)/(2ℓ)! · x²ℓ = Σ_{k=0}^{∞} (−1)ᵏ x²ᵏ/(2k)! = 1 − x²/2! + x⁴/4! − ···
Numerical Example: cos(0.1)

We have 1) f(x) = cos x and 2) c = 0; obtain series: cos x = 1 − x²/2! + x⁴/4! − ···
Actual value: cos(0.1) = 0.99500416527803...
With 3) x = 0.1 and 4) specific n's, from Taylor approximations:

    n     approximation       |error| ≤
    0, 1  1                   0.01/2!
    2, 3  0.995               0.0001/4!
    4, 5  0.99500416          0.000001/6!
    6, 7  0.99500416527778    0.00000001/8!
    ...   ...                 ...

(each row pairs two n's, since the error bound includes the zero odd-k terms)
Obtain accurate approximation easily and quickly.
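The table can be reproduced in a few lines of Python (a sketch; cos_taylor is our name):

```python
import math

def cos_taylor(x, n):
    """Maclaurin partial sum of cos x through the x^n term."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k)
               for k in range(n // 2 + 1))

x = 0.1
for n in (0, 2, 4, 6):
    approx = cos_taylor(x, n)
    bound = x**(n + 2) / math.factorial(n + 2)  # magnitude of the next nonzero term
    print(n, approx, abs(approx - math.cos(x)), bound)
```

The printed actual errors sit comfortably under the bounds in the table.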
Taylor Series: (1 − x)⁻¹

f(x) = 1/(1 − x), |x| < 1;  f⁽ᵏ⁾(x) = k!/(1 − x)ᵏ⁺¹, ∀k;  choose c := 0
We have

    1/(1 − x) = Σ_{k=0}^{n} xᵏ + (n + 1)!/(1 − ξ(x))ⁿ⁺² · xⁿ⁺¹/(n + 1)!
              = Σ_{k=0}^{n} xᵏ + (x/(1 − ξ(x)))ⁿ⁺¹ · 1/(1 − ξ(x))

Why bother, with LHS so simple? Ideas?
Sufficient: (x/(1 − ξ(x)))ⁿ⁺¹ → 0 as n → ∞
For what range of x is this satisfied? Need to determine radius of convergence.
(1 − x)⁻¹: Range of Convergence

Sufficient: |x/(1 − ξ(x))| < 1
Approach:
  get variable x in middle of sufficiency inequality
  transform range of ξ inequality to LHS and RHS of sufficiency inequality
  require restriction on x, but check if already satisfied
|ξ| < 1 ⟹ 1 − ξ > 0;  sufficient: −(1 − ξ) < x < 1 − ξ
(1 − x)⁻¹: Range of Convergence (cont.)

case x < ξ < 0:
  LHS: −(1 − x) < −(1 − ξ) < −1;  require: −1 ≤ x
  RHS: 1 < 1 − ξ < 1 − x;  require: x ≤ 1
case 0 < ξ < x:
  LHS: −1 < −(1 − ξ) < −(1 − x);  require: −(1 − x) ≤ x, or: −1 ≤ 0 ✓
  RHS: 1 − x < 1 − ξ < 1;  require: x ≤ 1 − x, or: x ≤ 1/2

Therefore, for −1 < x ≤ 1/2:

    1/(1 − x) = Σ_{k=0}^{∞} xᵏ = 1 + x + x² + x³ + ···

Zeno: x = 1/2, ...
Need more analysis for the whole range |x| < 1.
Taylor Series: ln x

f(x) = ln x, 0 < x ≤ 2;  f⁽ᵏ⁾(x) = (−1)ᵏ⁻¹(k − 1)!/xᵏ, k ≥ 1
Choose c := 1
We have

    ln x = Σ_{k=1}^{n} (−1)ᵏ⁻¹ (x − 1)ᵏ/k + (−1)ⁿ/(n + 1) · (x − 1)ⁿ⁺¹/ξⁿ⁺¹(x)

Sufficient: ((x − 1)/ξ(x))ⁿ⁺¹ → 0 as n → ∞
Again, for what range of x is this satisfied?
ln x: Range of Convergence

Sufficient: |(x − 1)/ξ(x)| < 1 ... 1 − ξ < x < 1 + ξ
case 1 < ξ < x:
  LHS: 1 − x < 1 − ξ < 0;  require: 0 ≤ x ✓
  RHS: 2 < 1 + ξ < 1 + x;  require: x ≤ 2
case x < ξ < 1:
  LHS: 0 < 1 − ξ < 1 − x;  require: 1 − x ≤ x, or: 1/2 ≤ x
  RHS: 1 + x < 1 + ξ < 2;  require: x ≤ 1 + x ✓

Therefore, for 1/2 ≤ x ≤ 2:

    ln x = Σ_{k=1}^{∞} (−1)ᵏ⁻¹ (x − 1)ᵏ/k = (x − 1) − (x − 1)²/2 + (x − 1)³/3 − ···

Again, need more analysis for entire range of x.
Ratio Test and ln x Revisited

Theorem: |a_{n+1}/a_n| → r < 1 ⟹ partial sums converge
ln x: ratio of adjacent summand terms (not the error term):

    |a_{n+1}/a_n| = |x − 1| · n/(n + 1)

Obtain convergence of partial sums for 0 < x < 2
Note: not looking at ξ and the error term
x = 2: 1 − 1/2 + 1/3 − ···, which is convergent (why?)
x = 0: same series, all same sign: divergent harmonic series
⟹ we have 0 < x ≤ 2
(1 − x)⁻¹ Revisited

Letting x → (1 − x):

    ln(1 − x) = −(x + x²/2 + x³/3 + ···), −1 ≤ x < 1

d/dx: LHS = −1/(1 − x) and RHS = −(1 + x + x² + x³ + ···)
Careful: no "=" for x = −1, as RHS oscillates (note: correct avg value)
∀|x| < 1 we have (also with ratio test)

    1/(1 − x) = 1 + x + x² + x³ + ···
Taylor Series
Definitions and Theorems
Examples
Proximity of x to c
Additional Notes
Proximity of x to c

Problem: Approximate ln 2
Solution 1: Taylor ln(1 + x) around 0, with x = 1:

    ln 2 = 1 − 1/2 + 1/3 − 1/4 + 1/5 − 1/6 + 1/7 − 1/8 + ···

Solution 2: Taylor ln((1 + x)/(1 − x)) around 0, with x = 1/3:

    ln 2 = 2(3⁻¹ + 3⁻³/3 + 3⁻⁵/5 + 3⁻⁷/7 + ···)
Proximity of x to c (cont.)
Approximated values, rounded:
  Solution 1, first 8 terms: 0.63452
  Solution 2, first 4 terms: 0.69313
Actual value, rounded: 0.69315
⟹ importance of proximity of evaluation and expansion points
This error is in addition to the truncation error.
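Both partial sums are easy to reproduce; a Python sketch (function names are ours):

```python
import math

def ln2_slow(terms):
    """Partial sum of ln 2 = 1 - 1/2 + 1/3 - ...  (x = 1, far from c = 0)."""
    return sum((-1)**(k + 1) / k for k in range(1, terms + 1))

def ln2_fast(terms):
    """Partial sum of ln 2 = 2(3^-1 + 3^-3/3 + 3^-5/5 + ...)  (x = 1/3)."""
    return 2 * sum(3.0**(-(2*k + 1)) / (2*k + 1) for k in range(terms))

print(round(ln2_slow(8), 5))   # 0.63452
print(round(ln2_fast(4), 5))   # 0.69313
print(round(math.log(2), 5))   # 0.69315
```

Four terms of the nearby expansion beat eight terms of the distant one by three digits.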
Taylor Series
Definitions and Theorems
Examples
Proximity of x to c
Additional Notes
Polynomials and a Second Form

Polynomials ∈ C^∞(−∞, ∞), have finite number of non-zero derivatives; Taylor series ∀c ⟹ original polynomial, i.e., error ≡ 0

    f(x) = 3x² − 1, ...:  f(x) = Σ_{k=0}^{2} f⁽ᵏ⁾(0)/k! · xᵏ = −1 + 0 + 3x²

Taylor Theorem can be used for fewer terms, e.g.: approximate a P₁₇ near c by a P₃
Taylor's Theorem, second form (x = constant expansion point, h = distance, x + h = variable evaluation point): If f ∈ Cⁿ⁺¹[a, b] then

    f(x + h) = Σ_{k=0}^{n} f⁽ᵏ⁾(x)/k! · hᵏ + f⁽ⁿ⁺¹⁾(ξ(h))/(n + 1)! · hⁿ⁺¹

x, x + h ∈ [a, b], ξ(h) ∈ open interval between x and x + h
Second Form: ln(e + h)

Evaluation of interest: ln(e + h), for −e < h ≤ e
Define: f(z) ≡ ln z
x = e is the constant expansion point
ln z: need z > 0
Derivatives:
  f(z) = ln z,  f(e) = 1
  f'(z) = z⁻¹,  f'(e) = e⁻¹
  f''(z) = −z⁻²,  f''(e) = −e⁻²
  f'''(z) = 2z⁻³,  f'''(e) = 2e⁻³
  f⁽ⁿ⁾(z) = (−1)ⁿ⁻¹(n − 1)! z⁻ⁿ,  f⁽ⁿ⁾(e) = (−1)ⁿ⁻¹(n − 1)! e⁻ⁿ
ln(e + h): Expansion and Convergence

Expansion (recall: x = e):

    ln(e + h) = f(x + h) = 1 + Σ_{k=1}^{n} (−1)ᵏ⁻¹(k − 1)! e⁻ᵏ hᵏ/k! + (−1)ⁿ n! ξ(h)⁻⁽ⁿ⁺¹⁾ hⁿ⁺¹/(n + 1)!

or

    ln(e + h) = 1 + Σ_{k=1}^{n} (−1)ᵏ⁻¹/k · (h/e)ᵏ + (−1)ⁿ/(n + 1) · (h/ξ(h))ⁿ⁺¹

Range of convergence, sufficient (for variable h): −ξ < h < ξ
case e + h < ξ < e: ... −e/2 ≤ h
case e < ξ < e + h: ... h ≤ e
O(·) Notation and MVT

As h → 0, we write the speed of f(h) → 0 as

    f(h) = O(hᵏ) ⟺ |f(h)| ≤ C|h|ᵏ

e.g., f(h): h, (1/1000)h, h²;  let h → 1/10, 1/100, 1/1000, ...
Taylor truncation error = O(hⁿ⁺¹); if for a given n the max exists, then

    C := max_{ξ(h)} |f⁽ⁿ⁺¹⁾(ξ(h))/(n + 1)!|

Mean value theorem (Taylor, n = 0): If f ∈ C¹[a, b] then

    f(b) = f(a) + (b − a) f'(ξ),  ξ ∈ (a, b)

or:

    f'(ξ) = (f(b) − f(a))/(b − a)
Alternating Series Theorem

Alternating series theorem: If aₖ > 0, aₖ ≥ aₖ₊₁, ∀k ≥ 0, and aₖ → 0, then

    Σ_{k=0}^{n} (−1)ᵏ aₖ → S  and  |S − Sₙ| ≤ aₙ₊₁

Intuitively understood
Note: direction of error is also known for specific n
We had this with sin and cos
Another useful method for max truncation error estimation:
max truncation error estimation without ξ-analysis
ln(e + h): Max Truncation Error Estimate

What is the max error after n + 1 terms?
Max error estimate also depends on proximity, i.e., size of h:
from Taylor: obtain O(hⁿ⁺¹),

    |error| ≤ 1/(n + 1) · max_ξ |h/ξ|ⁿ⁺¹

from AST (check the conditions!): also obtain O(hⁿ⁺¹), with a different constant:

    |error| ≤ 1/(n + 1) · |h/e|ⁿ⁺¹

E.g.: h = e/2:

    ln(3e/2) = 1 + 1/2 − (1/2)(1/2²) + (1/3)(1/2³) − (1/4)(1/2⁴) + ···

Taylor max error (ξ occurs as → e⁺): 1/(n + 1) · 1/2ⁿ⁺¹
AST max error: 1/(n + 1) · 1/2ⁿ⁺¹
note same max error estimate here; but in general they can be very different
Base Representations
Definitions
Conversions
Computer Representation
Loss of Significant Digits
Number Representation

Simple representation in one base ⇏ simple representation in another base, e.g.:

    (0.1)₁₀ = (0.0 0011 0011 0011 ...)₂

Base 10:

    37294 = 4 + 90 + 200 + 7000 + 30000
          = 4·10⁰ + 9·10¹ + 2·10² + 7·10³ + 3·10⁴

in general: aₙ ... a₀ = Σ_{k=0}^{n} aₖ 10ᵏ
Fractions and Irrationals

Base 10 fraction:

    0.7217 = 7·10⁻¹ + 2·10⁻² + 1·10⁻³ + 7·10⁻⁴

In general, for real numbers:

    aₙ ... a₀.b₁b₂ ... = Σ_{k=0}^{n} aₖ 10ᵏ + Σ_{k=1}^{∞} bₖ 10⁻ᵏ

Note: ∃ numbers, i.e., irrationals, such that an infinite number of digits are required, in any rational base, e.g., e, π, √2
Need infinite number of digits in a base ⇏ irrational: (0.333...)₁₀ = 1/3 is not irrational
Other Bases

Base 8: no 8 or 9, using octal digits:

    (21467)₈ = ··· = (9015)₁₀
    (0.36207)₈ = 3·8⁻¹ + 6·8⁻² + 2·8⁻³ + 0·8⁻⁴ + 7·8⁻⁵ = 15495/32768 = (0.47286...)₁₀

Base 16: 0, 1, ..., 9, A (10), B (11), C (12), D (13), E (14), F (15)
Base β:

    (aₙ ... a₀.b₁ ...)_β = Σ_{k=0}^{n} aₖ βᵏ + Σ_{k=1}^{∞} bₖ β⁻ᵏ

Base 2: just 0 and 1, or for computers: off and on; bit = binary digit
Base Representations
Definitions
Conversions
Computer Representation
Loss of Significant Digits
Conversion: Base 10 → Base 2

Basic idea:

    3781 = 1 + 10·(8 + 10·(7 + 10·(3)))    [with 10 = (1010)₂, 8 = (1000)₂, ...]
         = ··· = (111 011 000 101)₂

Easy for a computer, but by hand: (3781.372)₁₀
Integer part: repeated division, keeping remainders:

    3781 ÷ 2 = 1890, remainder 1 = a₀
    1890 ÷ 2 = 945,  remainder 0 = a₁
    ...

Fraction part: repeated multiplication, keeping integer parts:

    0.372 × 2 = 0.744  ⟹  b₁ = 0
    0.744 × 2 = 1.488  ⟹  b₂ = 1  (drop the 1)
    ...

only useful for converting to a lower base (one-digit ×/÷)
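Both hand procedures translate directly into code; a Python sketch (our names, integer and fraction parts handled separately):

```python
def int_to_base2(n):
    """Repeated division by 2; remainders give bits a0, a1, ... (collected MSB-first)."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits
        n //= 2
    return bits or "0"

def frac_to_base2(f, digits):
    """Repeated multiplication by 2; the integer parts give bits b1, b2, ..."""
    bits = ""
    for _ in range(digits):
        f *= 2
        bits += str(int(f))
        f -= int(f)  # drop the integer part and continue
    return bits

print(int_to_base2(3781))        # 111011000101
print(frac_to_base2(0.372, 4))   # 0101
```

Note the fraction routine inherits the binary rounding of the float 0.372, which is exactly the "simple in one base, not in another" point above.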
Base 8 Shortcut

Base 2 ↔ base 8, trivial:

    (551.624)₈ = (101 101 001.110 010 100)₂

3 bits for every 1 octal digit
One digit produced for every step in (hand) conversion
⟹ base 10 → base 8 → base 2
Base Representations
Definitions
Conversions
Computer Representation
Loss of Significant Digits
Computer Representation

Scientific notation: 32.213 → 0.32213 × 10²
In general: x = ±0.d₁d₂... × 10ⁿ, d₁ ≠ 0; or: x = ±r × 10ⁿ, 1/10 ≤ r < 1
we have sign, mantissa r and exponent n
On the computer, base 2 is represented:

    x = ±0.b₁b₂... × 2ⁿ, b₁ ≠ 0; or: x = ±r × 2ⁿ, 1/2 ≤ r < 1

Finite number of mantissa digits, therefore roundoff or truncation error
Base Representations
Definitions
Conversions
Computer Representation
Loss of Significant Digits
LSD: Addition

(a + b) + c = a + (b + c) on the computer?
Six decimal digits for mantissa:

    1,000,000. + 1. + ··· + 1.  (a million times)  = 1,000,000.

because

    0.100000 × 10⁷ + 0.100000 × 10¹ = 0.100000 × 10⁷

but

    1. + ··· + 1.  (a million times)  + 1,000,000. = 2,000,000.

⟹ Add numbers in size order.
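The effect can be simulated by rounding every intermediate sum to six significant digits; a Python sketch (the round-to-six-digits helper is our stand-in for a six-digit machine, and a thousand 1's stand in for the slide's million):

```python
def round6(x):
    """Round x to six significant decimal digits (a toy six-digit machine)."""
    return float(f"{x:.5e}")

def machine_sum(values):
    """Sum left to right, rounding to six digits after every addition."""
    total = 0.0
    for v in values:
        total = round6(total + v)
    return total

big_first = machine_sum([1_000_000.0] + [1.0] * 1000)
small_first = machine_sum([1.0] * 1000 + [1_000_000.0])
print(big_first)    # 1000000.0  (each +1 is rounded away)
print(small_first)  # 1001000.0
```

Summing the small values first lets them accumulate before meeting the large one.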
LSD: Subtraction

E.g.: x − sin x for x's close to zero
x = 1/15 (radians):

    x         = 0.66666 66667 × 10⁻¹
    sin x     = 0.66617 29492 × 10⁻¹
    x − sin x = 0.00049 37175 × 10⁻¹ = 0.49371 75000 × 10⁻⁴

Note: still have 10⁻¹⁰ precision (because no more info), but can we rework the calculation for 10⁻¹³ precision?
⟹ Avoid subtraction of close numbers.
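One rework, previewing the next slide: the Maclaurin series gives x − sin x = x³/3! − x⁵/5! + x⁷/7! − ···, with no subtraction of close numbers. A Python sketch (our function name):

```python
import math

def x_minus_sin(x, terms=3):
    """x - sin x via its Maclaurin series: x^3/3! - x^5/5! + x^7/7! - ..."""
    return sum((-1)**k * x**(2*k + 3) / math.factorial(2*k + 3)
               for k in range(terms))

x = 1 / 15
naive = x - math.sin(x)          # fine in double precision, but loses digits on a 10-digit machine
series = x_minus_sin(x, terms=3)
print(naive, series)             # both ~4.9371743e-05
```

On the slide's 10-digit machine the naive form keeps only ~7 correct digits, while the series keeps them all.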
LSD Avoidance for Subtraction

x − sin x for x ≈ 0: use Taylor series ⟹ no subtraction of close numbers; e.g., 3 terms: 0.49371 74328 × 10⁻⁴ (actual: 0.49371 74327... × 10⁻⁴)
eˣ − e⁻²ˣ for x ≈ 0: use Taylor series twice and add common powers
√(x² + 1) − 1 for x ≈ 0: x²/(√(x² + 1) + 1)
cos²x − sin²x for x ≈ π/4: cos 2x
ln x − 1 for x ≈ e: ln(x/e)
Nonlinear Equations
Motivation
Bisection Method
Newton's Method
Secant Method
Summary
Motivation

For a given function f(x), find its root(s), i.e.: find x (or r = root) such that f(x) = 0
BVP: dipping of suspended power cable. What is λ?

    λ cosh(50/λ) − λ − 10 = 0

(Some) simple equations solve analytically:

    6x² − 7x + 2 = 0  ⟹  (3x − 2)(2x − 1) = 0  ⟹  x = 2/3, 1/2

    cos 3x − cos 7x = 0  ⟹  2 sin 5x sin 2x = 0  ⟹  x = nπ/5, nπ/2, n ∈ ℤ
Motivation (cont.)

In general, we cannot exploit the function, e.g.:

    2x² − 10x + 1 = 0

and

    cosh(√(x² + 1)) − eˣ + log|sin x| = 0

Note: at times multiple roots, e.g., previous parabola and cosine:
  we want at least one
  we may only get one (for each search)
Need a general, function-independent algorithm.
Nonlinear Equations
Motivation
Bisection Method
Newton's Method
Secant Method
Summary
Bisection Method: Example

[Figure: function with a sign change on [a, b]; midpoints x₀, x₁, x₂, x₃ closing in on the root]

Intuitive, like guessing a number ∈ [0, 100].
Restrictions and Max Error Estimate

Restrictions:
  function slices x-axis at root
  start with two points a and b ∋ f(a)f(b) < 0
  graphing tool (e.g., Matlab) can help to find a and b
  require C⁰[a, b] (why? note: not a big deal)
Max error estimate:
  after n steps, guess midpoint of current range
  error ≤ (b − a)/2ⁿ⁺¹  (think of n = 0, 1, 2)
  note: error is in x; can also look at error in f(x) or combination
  enters entire world of stopping criteria
Question: Given tolerance ε (in x), what is n? ...
Convergence Rate

Given tolerance ε (e.g., 10⁻⁶), how many steps are needed?
Tolerance restriction (ε from before):

    (b − a)/2ⁿ⁺¹ < ε

1) × 2, 2) log (any base):

    log(b − a) − n log 2 < log(2ε)

or

    n > (log(b − a) − log(2ε))/log 2

Rate is independent of function.
Convergence Rate (cont.)

Base 2 (i.e., bits of accuracy):

    n > log₂(b − a) − 1 − log₂ ε

i.e., number of steps is a constant plus one step per bit
Linear convergence rate: ∃C ∈ [0, 1) ∋

    |xₙ₊₁ − r| ≤ C|xₙ − r|, ∀n ≥ 0

i.e., monotonic decreasing error at every step, and |xₙ₊₁ − r| ≤ Cⁿ⁺¹|x₀ − r|
Bisection convergence:
  not linear (examples?), but compared to initial max error:
  similar form: |xₙ₊₁ − r| ≤ Cⁿ⁺¹(b − a), with C = 1/2
Okay, but restrictive and slow.
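A minimal bisection sketch in Python (names and the stopping rule are ours):

```python
def bisect(f, a, b, eps=1e-6):
    """Bisection: needs f(a)*f(b) < 0; error after n steps <= (b-a)/2^(n+1)."""
    if f(a) * f(b) >= 0:
        raise ValueError("need a sign change on [a, b]")
    while (b - a) / 2 > eps:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:   # root is in the left half
            b = mid
        else:                    # root is in the right half
            a = mid
    return (a + b) / 2

# root of x^3 - 2x^2 + x - 3 (the same example appears with Newton's method)
r = bisect(lambda x: x**3 - 2*x**2 + x - 3, 2, 3)
print(r)  # ~2.174559
```

Every iteration costs one new function evaluation and halves the bracket, matching the one-bit-per-step rate above.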
Nonlinear Equations
Motivation
Bisection Method
Newton's Method
Secant Method
Summary
Newton's Method: Definition

Approximate f(x) near x₀ by the tangent ℓ(x):

    f(x) ≈ f(x₀) + f'(x₀)(x − x₀) ≡ ℓ(x)

Want ℓ(r) = 0 ⟹

    r = x₀ − f(x₀)/f'(x₀)

x₁ := r, likewise:

    xₙ₊₁ = xₙ − f(xₙ)/f'(xₙ)

Alternatively (Taylor's): have x₀, for what h is f(x₀ + h [= x₁]) = 0?

    f(x₀ + h) ≈ f(x₀) + h f'(x₀),  or  h = −f(x₀)/f'(x₀)
Newton's Method: Example

[Figure: tangent-line iterates x₀, x₁, x₂, x₃ converging to the root]
Convergence Rate

English: With enough continuity and proximity ⟹ quadratic convergence!
Theorem: With the following three conditions: 1) f(r) = 0, 2) f'(r) ≠ 0, 3) f ∈ C²(B(r, δ)); then ∃δ > 0 ∋ ∀x₀ ∈ B(r, δ) and ∀n we have

    |xₙ₊₁ − r| ≤ C(δ)|xₙ − r|²

for a given δ, C is a constant (not necessarily < 1)
Note: again, use graphing tool to seed x₀
Newton's method can be very fast.
Convergence Rate Example

f(x) = x³ − 2x² + x − 3, x₀ = 4:

    n   xₙ                 f(xₙ)
    0   4                  33
    1   3                  9
    2   2.4375             2.036865234375
    3   2.21303271631511   0.256363385061418
    4   2.17555493872149   0.00646336148881306
    5   2.17456010066645   4.47906804996122e−06
    6   2.17455941029331   2.15717547991101e−12

Stopping criteria:
  theorem: uses x; above: uses f(x), often all we have
  possibilities: absolute/relative, size/change, x or f(x) (combos, ...)
But proximity issue can bite, ...
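The table can be reproduced directly (a sketch; fixed step count instead of a stopping criterion):

```python
def newton(f, fprime, x0, steps):
    """Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n), printing each row."""
    x = x0
    for n in range(steps):
        print(n, x, f(x))
        x = x - f(x) / fprime(x)
    return x

f = lambda x: x**3 - 2*x**2 + x - 3
fp = lambda x: 3*x**2 - 4*x + 1
r = newton(f, fp, 4.0, 7)
# rows match the table: x1 = 3, x2 = 2.4375, ..., x6 = 2.17455941029331
```

Note the doubling of correct digits per step once the iterates are close: quadratic convergence in action.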
Sample Newton Failure #1
[Figure: tangent steps send the iterates xₙ away from the root]

Runaway process
Sample Newton Failure #2
[Figure: iterate xₙ lands where the tangent is horizontal]

Division by zero derivative; recall the algorithm
Sample Newton Failure #3
[Figure: iterates xₙ and xₙ₊₁ cycling between two points]

Loop-d-loop (can happen over m points)
Nonlinear Equations
Motivation
Bisection Method
Newton's Method
Secant Method
Summary
Secant Method: Definition
Motivation: avoid derivatives
Taylor (or derivative): f'(xₙ) ≈ (f(xₙ) − f(xₙ₋₁))/(xₙ − xₙ₋₁)

    xₙ₊₁ = xₙ − f(xₙ) · (xₙ − xₙ₋₁)/(f(xₙ) − f(xₙ₋₁))

Bisection requirements comparison:
  2 previous points
  no need for f(a)f(b) < 0
Additional advantage vs. Newton:
  only one function evaluation per iteration
Superlinear convergence:

    |xₙ₊₁ − r| ≤ C|xₙ − r|^1.618...

(recognize the exponent?)
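A secant sketch in Python (our names), caching the previous function value so each step costs only one new evaluation:

```python
def secant(f, x0, x1, steps):
    """Secant iteration: f'(x_n) replaced by a difference quotient.
    Only one new function evaluation per step (f0, f1 are carried over)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(steps):
        if f1 == f0:  # flat secant, or fully converged: cannot take a step
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

# same function as the Newton example, seeded with a bisection-style pair
r = secant(lambda x: x**3 - 2*x**2 + x - 3, 2.0, 3.0, 8)
print(r)  # ~2.17455941
```

Unlike bisection, the two starting points need not bracket the root.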
Nonlinear Equations
Motivation
Bisection Method
Newton's Method
Secant Method
Summary
Root Finding: Summary
Performance and requirements:

    method      f ∈ C² nbhd(r)   init. pts.   speedy
    bisection   no               2 †          no
    Newton      yes              1            yes
    secant      yes              2            yes

    † plus the requirement that f(a)f(b) < 0
    (function evaluations per iteration: bisection 1, Newton 2, secant 1)

Often methods are combined (how?), with restarts for divergence or cycles
Recall: use graphing tool to seed x₀ (and x₁)
Interpolation and Approximation
Motivation
Polynomial Interpolation
Numerical Differentiation
Additional Notes
Motivation
Three sample problems:
  {(xᵢ, yᵢ) | i = 0, ..., n} (xᵢ distinct), want simple (e.g., polynomial) p(x) ∋ yᵢ = p(xᵢ), i = 0, ..., n: interpolation
  assume data includes errors, relax equality but still "close", ...: least squares
  replace complicated f(x) with simple p(x) ≈ f(x)
Interpolation similar to English term (contrast: extrapolation)
for now: polynomial; later: splines
Use p(x) for p(x_new), ∫ p(x) dx, ...
Interpolation and Approximation
Motivation
Polynomial Interpolation
Numerical Differentiation
Additional Notes
Constant and Linear Interpolation
[Figure: points (x₀, y₀), (x₁, y₁) with the constant interpolant p₀(x) and the linear interpolant p₁(x)]

n = 0: p(x) = y₀
n = 1: p(x) = y₀ + g(x)(y₁ − y₀), g(x) ∈ P₁, and

    g(x) = 0 at x = x₀; 1 at x = x₁  ⟹  g(x) = (x − x₀)/(x₁ − x₀)

n = 2: more complicated, ...
Lagrange Polynomials
Given: xᵢ, i = 0, ..., n;  Kronecker delta: δᵢⱼ = 0 if i ≠ j, 1 if i = j
Lagrange polynomials: ℓᵢ(x) ∈ Pₙ ∋ ℓᵢ(xⱼ) = δᵢⱼ, i = 0, ..., n
independent of any yᵢ values
E.g., n = 2:

[Figure: the Lagrange polynomials ℓ₀(x), ℓ₁(x), ℓ₂(x) on the nodes x₀, x₁, x₂]
Lagrange Interpolation
We have

    ℓ₀(x) = (x − x₁)/(x₀ − x₁) · (x − x₂)/(x₀ − x₂)
    ℓ₁(x) = (x − x₀)/(x₁ − x₀) · (x − x₂)/(x₁ − x₂)
    ℓ₂(x) = (x − x₀)/(x₂ − x₀) · (x − x₁)/(x₂ − x₁)

    y₀ℓ₀(xⱼ) = y₀δ₀ⱼ = { 0, j ≠ 0;  y₀, j = 0 }
    y₁ℓ₁(xⱼ) = y₁δ₁ⱼ = { 0, j ≠ 1;  y₁, j = 1 }
    y₂ℓ₂(xⱼ) = y₂δ₂ⱼ = { 0, j ≠ 2;  y₂, j = 2 }

⟹ ∃! p(x) ∈ P₂, with p(xⱼ) = yⱼ, j = 0, 1, 2:

    p(x) = Σ_{i=0}^{2} yᵢ ℓᵢ(x)

In general:

    ℓᵢ(x) = Π_{j=0, j≠i}^{n} (x − xⱼ)/(xᵢ − xⱼ), i = 0, ..., n

Great! What could be wrong? Easy functions (polynomials), interpolation (⟹ error = 0 at xᵢ) ... but what about p(x_new)?
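The general formula translates directly; a Python sketch (our names; O(n²) work per evaluated point, and the sample data is a made-up quadratic):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate p(x) = sum_i y_i * l_i(x), with l_i the Lagrange polynomials."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)  # product formula for l_i(x)
        total += yi * li
    return total

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 11.0]   # data from p(x) = 3x^2 - x + 1
print(lagrange_eval(xs, ys, 1.0))  # 3.0, reproduces the data point
print(lagrange_eval(xs, ys, 3.0))  # 25.0, matches p(3)
```

No linear system is solved: the δᵢⱼ property makes the coefficients the data values themselves.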
Interpolation Error & the Runge Function
{(xᵢ, f(xᵢ)) | i = 0, ..., n},  |f(x) − p(x)| ≤ ?
Runge function: f_R(x) = (1 + x²)⁻¹, x ∈ [−5, 5], and uniform mesh ⟹ p(x)'s wrong shape and high oscillations

    lim_{n→∞} max_{−5≤x≤5} |f_R(x) − pₙ(x)| = ∞

[Figure: f_R and its degree-8 interpolant on the uniform mesh x₀ = −5, ..., x₄ = 0, ..., x₈ = 5]
Error Theorem
Theorem: ..., f ∈ Cⁿ⁺¹[a, b], x ∈ [a, b], ∃ξ ∈ (a, b) ∋

    f(x) − p(x) = 1/(n + 1)! · f⁽ⁿ⁺¹⁾(ξ) · Π_{i=0}^{n} (x − xᵢ)

Max error:
  with xᵢ and x, still need max_{(a,b)} |f⁽ⁿ⁺¹⁾(ξ)|
  with xᵢ only, also need max of the product without x:

    max_{(a,b)} |Π_{i=0}^{n} (x − xᵢ)| ≤ (b − a)ⁿ⁺¹
Chebyshev Points
[Figure: Chebyshev points x₀ = 1, ..., x₄ = 0, ..., x₈ = −1, concentrated toward the interval ends]

Chebyshev points on [−1, 1]: xᵢ = cos(iπ/n), i = 0, ..., n
In general on [a, b]: xᵢ = ½(a + b) + ½(b − a) cos(iπ/n), i = 0, ..., n
Points concentrated at edges
Runge Function with Chebyshev Points
[Figure: Runge function interpolated at the Chebyshev points x₀, ..., x₄, ..., x₈ on [−5, 5]; the oscillations are tamed]

Is this good interpolation?
Chebyshev Interpolation
Same interpolation method; different interpolation points
Minimizes max |Π_{i=0}^{n} (x − xᵢ)|
Periodic behavior: interpolate with sins/cosines instead of Pₙ ⟹ uniform mesh minimizes max error
Note: uniform partition with spacing = cheb₁ − cheb₀ ⟹ more points ⟹ higher polynomial degree ⟹ oscillations
Note: shape is still wrong ... see splines later
Interpolation and Approximation
Motivation
Polynomial Interpolation
Numerical Differentiation
Additional Notes
Numerical Differentiation
Note: until now, approximating f(x); now f'(x)

    f'(x) ≈ (f(x + h) − f(x))/h;   Error = ?

Taylor: f(x + h) = f(x) + h f'(x) + h² f''(ξ)/2 ⟹

    f'(x) = (f(x + h) − f(x))/h − (1/2) h f''(ξ)

I.e., truncation error: O(h)
Can we do better?
Numerical Differentiation: Take Two
Taylor for +h and −h:

    f(x ± h) = f(x) ± h f'(x) + h² f''(x)/2! ± h³ f'''(x)/3! + h⁴ f⁽⁴⁾(x)/4! ± h⁵ f⁽⁵⁾(x)/5! + ···

Subtracting:

    f(x + h) − f(x − h) = 2h f'(x) + 2h³ f'''(x)/3! + 2h⁵ f⁽⁵⁾(x)/5! + ···

⟹

    f'(x) = (f(x + h) − f(x − h))/(2h) − (1/6) h² f'''(ξ)

We gained: O(h) to O(h²). However, ...
Richardson Extrapolation: Take Three
We have

    f'(x) = (f(x + h) − f(x − h))/(2h)  [≡ φ(h)]  + a₂h² + a₄h⁴ + a₆h⁶ + ···

Halving the stepsize:

    φ(h)   = f'(x) − a₂h² − a₄h⁴ − a₆h⁶ − ···
    φ(h/2) = f'(x) − a₂h²/4 − a₄h⁴/16 − a₆h⁶/64 − ···

    φ(h) − 4φ(h/2) = −3f'(x) − (3/4) a₄h⁴ − (15/16) a₆h⁶ − ···

Q: So what? A: The h² term disappeared!
Richardson: Take Three (cont.)
Rearrange, divide by 3 and write f′(x):

f′(x) = (4/3)φ(h/2) − (1/3)φ(h) − (1/4)a₄h⁴ − (5/16)a₆h⁶ − ⋯

      = φ(h/2) + (1/3)[φ(h/2) − φ(h)]   (★)   + O(h⁴)

(★) only uses old and current information

We gained O(h²) → O(h⁴)!!
Copyright c2011 by A. E. Naiman NM Slides Interpolation and Approximation, p. 16
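The extrapolation step above can be sketched directly (test function sin and h = 0.1 are illustrative):

```python
import math

def phi(f, x, h):
    # Central difference approximation phi(h) = [f(x+h) - f(x-h)] / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    # phi(h/2) + (1/3)[phi(h/2) - phi(h)] cancels the h^2 term -> O(h^4)
    p1, p2 = phi(f, x, h), phi(f, x, h / 2)
    return p2 + (p2 - p1) / 3

x, h = 1.0, 0.1
exact = math.cos(x)
print("central    error:", abs(phi(math.sin, x, h) - exact))
print("Richardson error:", abs(richardson(math.sin, x, h) - exact))
```

The extrapolated value is several orders of magnitude more accurate at no extra cost beyond one halved-step evaluation.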
Interpolation and Approximation
Motivation Polynomial Interpolation Numerical Differentiation
Additional Notes
Copyright c2011 by A. E. Naiman NM Slides Interpolation and Approximation, p. 17
Additional Notes — the three f′(x) formulae used additional points
vs. Taylor: more derivatives at the same point

Similar for f″(x):

f(x − h) = f(x) − h f′(x) + h² f″(x)/2! − h³ f‴(x)/3! + h⁴ f⁽⁴⁾(x)/4! − h⁵ f⁽⁵⁾(x)/5! + ⋯

Adding:

f(x + h) + f(x − h) = 2f(x) + h² f″(x) + (1/12) h⁴ f⁽⁴⁾(x) + ⋯

or:

f″(x) = [f(x + h) − 2f(x) + f(x − h)] / h² − (1/12) h² f⁽⁴⁾(ξ) + ⋯

error = O(h²)
Copyright c2011 by A. E. Naiman NM Slides Interpolation and Approximation, p. 18
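A sketch of this second-derivative formula (sin and h = 0.01 are illustrative choices):

```python
import math

def second_diff(f, x, h):
    # f''(x) ~ [f(x+h) - 2 f(x) + f(x-h)] / h^2, truncation error O(h^2)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Example: f = sin, f'' = -sin
x, h = 1.0, 0.01
print(second_diff(math.sin, x, h), -math.sin(x))
```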
Numerical Quadrature
Introduction Riemann Integration
Composite Trapezoid Rule
Composite Simpsons Rule Gaussian Quadrature
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 1
Numerical QuadratureInterpretation
f(x) ≥ 0 on [a, b] bounded ⇒ ∫_a^b f(x) dx is the area under f(x)

[figure: area under f(x) between a and b]
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 2
Numerical QuadratureMotivation
Analytical solutions — rare:

∫_0^{π/2} sin x dx = −cos x |_0^{π/2} = −(0 − 1) = 1

In general, e.g.:

∫_0^{π/2} (1 − a² sin² θ)^{1/3} dθ

⇒ Need a general numerical technique.
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 3
Definitions

Mesh: P ≡ {a = x_0 < x_1 < ⋯ < x_n = b}, n subintervals (n + 1 points)
Infima and suprema (or minima and maxima):

m_i ≡ inf{f(x) : x_i ≤ x ≤ x_{i+1}},  M_i ≡ sup{f(x) : x_i ≤ x ≤ x_{i+1}}

Two methods (i.e., integral estimates): lower and upper sums

L(f; P) ≡ Σ_{i=0}^{n−1} m_i (x_{i+1} − x_i),  U(f; P) ≡ Σ_{i=0}^{n−1} M_i (x_{i+1} − x_i)

For example, . . .

Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 4
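A sketch of lower/upper sums; approximating each inf/sup by dense sampling is an assumption for illustration (for monotone pieces the subinterval endpoints would suffice, and the x² mesh below is the worked example from a later slide):

```python
def lower_upper_sums(f, mesh, samples=200):
    # Approximate m_i and M_i on each subinterval by sampling f
    L = U = 0.0
    for a, b in zip(mesh, mesh[1:]):
        ys = [f(a + (b - a) * k / samples) for k in range(samples + 1)]
        L += min(ys) * (b - a)
        U += max(ys) * (b - a)
    return L, U

# f(x) = x^2 on [0, 1] with P = {0, 1/4, 1/2, 3/4, 1}
L, U = lower_upper_sums(lambda x: x * x, [0, 0.25, 0.5, 0.75, 1.0])
print(L, U, (L + U) / 2)   # 7/32, 15/32, 11/32
```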
Lower SumInterpretation
[figure: lower-sum rectangles on the mesh x_0 (= a), x_1, x_2, x_3, x_4 (= b)]
Clearly a lower bound of integral estimate, and . . .
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 5
Upper SumInterpretation
[figure: upper-sum rectangles on the mesh x_0 (= a), x_1, x_2, x_3, x_4 (= b)]
. . . an upper bound. What is the max error?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 6
Lower and Upper SumsExample
Third method, use lower and upper sums: (L + U)/2

f(x) = x², [a, b] = [0, 1] and P = {0, 1/4, 1/2, 3/4, 1} → L = 7/32, U = 15/32

Split the difference: estimate 11/32 (actual 1/3)

Bottom line: naive approach, low n → still error of 1/96 (!)

Max error: (U − L)/2 = 1/8
Is this good enough?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 7
Numerical QuadratureRethinking
Perhaps lower and upper sums are enough? Error seems small

Work seems small as well

But: estimate of max error was not small (1/8)

Do they converge to the integral as n → ∞?

Will the extrema always be easy to calculate? Accurately? (Probably not!)

Proceed in theoretical and practical directions.
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 8
Numerical Quadrature
Introduction Riemann Integration
Composite Trapezoid Rule
Composite Simpsons Rule Gaussian Quadrature
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 9
Riemann Integrability
f ∈ C⁰[a, b], [a, b] bdd ⇒ f is Riemann integrable

When integrable, and max subinterval in P → 0 (‖P‖ → 0):

lim_{‖P‖→0} L(f; P) = ∫_a^b f(x) dx = lim_{‖P‖→0} U(f; P)

Counterexample: Dirichlet function d(x) ≡ {0, x rational; 1, x irrational}

L = 0, U = b − a
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 10
Challenge: Estimate n for Third Method
Current restrictions for n estimate:

Monotone functions

Uniform partition

Challenge: estimate ∫_0^π e^{cos x} dx, error tolerance = (1/2) × 10⁻³, using L and U → n = ?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 11
Estimate n — Solution
f(x) = e^{cos x} on [0, π] is decreasing ⇒ m_i = f(x_{i+1}) and M_i = f(x_i)

L(f; P) = h Σ_{i=0}^{n−1} f(x_{i+1}) and U(f; P) = h Σ_{i=0}^{n−1} f(x_i), h = π/n

Want (1/2)(U − L) < (1/2) × 10⁻³, i.e., (π/n)(e¹ − e⁻¹) < 10⁻³

. . . n ≥ 7385 (!!) (note for later: max error estimate = O(h))

Number of f(x) evaluations: 2 for the (U − L) max error calculation, > 7000 for either L or U

We need something better.
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 12
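A sketch of the bound above for the decreasing f(x) = e^{cos x} on [0, π]; the helper name `bracket` is mine, not from the slides:

```python
import math

def bracket(n):
    # Lower/upper sums for the decreasing f(x) = e^cos(x) on [0, pi]
    h = math.pi / n
    fs = [math.exp(math.cos(i * h)) for i in range(n + 1)]
    L = h * sum(fs[1:])      # f decreasing: m_i = f(x_{i+1})
    U = h * sum(fs[:-1])     # M_i = f(x_i)
    return L, U

L, U = bracket(7385)
print((U - L) / 2)           # just under 0.5e-3, as the n >= 7385 estimate predicts
```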
Numerical Quadrature
Introduction Riemann Integration
Composite Trapezoid Rule
Composite Simpsons Rule Gaussian Quadrature
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 13
Composite Trapezoid Rule (CTR)

Each area: (1/2)(x_{i+1} − x_i)[f(x_i) + f(x_{i+1})]
Rule: T(f; P) ≡ (1/2) Σ_{i=0}^{n−1} (x_{i+1} − x_i) [f(x_i) + f(x_{i+1})]

Note: for monotone functions and any given mesh (why?): T = (L + U)/2

Pro: no need for extrema calculations

Con: adding new points to existing ones (for a non-monotonic function):
T can land on a bad point → no monotonic improvement (necessarily);
L, U and (L + U)/2 look for extrema on [x_i, x_{i+1}] → monotonic improvement

Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 14
CTRInterpretation
[figure: trapezoids on the mesh x_0 (= a), x_1, x_2, x_3, x_4 (= b)]
Almost always better than L or U. (When not?)
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 15
Uniform Mesh and Associated Error

Constant stepsize h = (b − a)/n
T(f; P) ≡ h [ Σ_{i=1}^{n−1} f(x_i) + (1/2)(f(x_0) + f(x_n)) ]

Theorem: f ∈ C²[a, b] ⇒ ∃ ξ ∈ (a, b) ∋

∫_a^b f(x) dx − T(f; P) = −(1/12)(b − a) h² f″(ξ) = O(h²)

Note: leads to the popular Romberg algorithm (built on Richardson extrapolation)
How many steps does T(f; P) require?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 16
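A sketch of the uniform-mesh rule above; the test integral is the e^{−x²} challenge from a later slide (exact value via erf), where n = 58 is shown to meet the (1/2) × 10⁻⁴ tolerance:

```python
import math

def ctr(f, a, b, n):
    # Composite trapezoid rule on a uniform mesh with n subintervals
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

approx = ctr(lambda x: math.exp(-x * x), 0.0, 1.0, 58)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)
print(approx, abs(approx - exact))
```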
ecos x RevisitedUsing CTR
Challenge: ∫_0^π e^{cos x} dx, error tolerance = (1/2) × 10⁻³, n = ?

f(x) = e^{cos x} → f′(x) = −e^{cos x} sin x → . . . → |f″(x)| ≤ e on (0, π)

|error| ≤ (1/12) π (π/n)² e ≤ (1/2) × 10⁻³ → . . . n ≥ 119

Recall the perennial two questions/calculations of NM

monotonic ⇒ T produces the same estimate as (L + U)/2, but the previous max error estimate was less sharp (O(h))

Better estimate of max error → better estimate of n
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 17
Another CTR Example
Challenge: ∫_0^1 e^{−x²} dx, error tolerance = (1/2) × 10⁻⁴, n = ?

f(x) = e^{−x²}, f′(x) = −2x e^{−x²} and f″(x) = (4x² − 2) e^{−x²}

|f″(x)| ≤ 2 on (0, 1) ⇒ |error| ≤ (1/12) h² · 2 = (1/6) h² ≤ (1/2) × 10⁻⁴

We have: n² ≥ (1/3) × 10⁴, or n ≥ 58 subintervals

How can we do better?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 18
Numerical Quadrature
Introduction Riemann Integration
Composite Trapezoid Rule
Composite Simpsons Rule Gaussian Quadrature
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 19
Trapezoid Rule as Linear Interpolant

Linear interpolant, one subinterval: p₁(x) = [(x − b)/(a − b)] f(a) + [(x − a)/(b − a)] f(b), intuitively:

∫_a^b p₁(x) dx = [f(a)/(a − b)] ∫_a^b (x − b) dx + [f(b)/(b − a)] ∫_a^b (x − a) dx

= [f(a)/(a − b)] [(b² − a²)/2 − b(b − a)] + [f(b)/(b − a)] [(b² − a²)/2 − a(b − a)]

= f(a) [b − (a + b)/2] + f(b) [(a + b)/2 − a]

= f(a) (b − a)/2 + f(b) (b − a)/2

= [(b − a)/2] [f(a) + f(b)]

CTR is the integral of the composite linear interpolant.
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 20
CTR for Two Equal Subintervals

n = 2 (i.e., 3 points):

T(f) = [(b − a)/2] { f((a + b)/2) + (1/2)[f(a) + f(b)] }

     = [(b − a)/4] [ f(a) + 2f((a + b)/2) + f(b) ],  with error = O(((b − a)/2)³)

(Previously: CTR error = O(h²) = TR error × n subintervals = O(h³) · O(1/h))

Deficiency: each subinterval ignores the other

How can we take the entire picture into account?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 21
Simpsons Rule
Motivation: use p₂(x) over the two equal subintervals

Similar analysis actually loses O(h), but . . . ∃ ξ ∈ (a, b):

∫_a^b f(x) dx = [(b − a)/6] [ f(a) + 4f((a + b)/2) + f(b) ] − (1/90) ((b − a)/2)⁵ f⁽⁴⁾(ξ)

Similar to CTR, but weights the midpoint more

Note: for each method, denominator = Σ coefficients

Each method multiplies width by a weighted average of height.
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 22
Composite Simpsons Rule (CSR)

For an even number of subintervals n, h = (b − a)/n, ∃ ξ ∈ (a, b):

∫_a^b f(x) dx = (h/3) { [f(a) + f(b)] + 4 Σ_{i=1}^{n/2} f[a + (2i − 1)h] (odd nodes) + 2 Σ_{i=1}^{(n−2)/2} f(a + 2ih) (even nodes) } − [(b − a)/180] h⁴ f⁽⁴⁾(ξ)

Note: denominator = Σ coefficients = 3n, but only n + 1 function evaluations

Can we do better than O(h⁴)?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 23
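A sketch of the composite rule above (the test integral ∫_0^π sin x dx = 2 is an illustrative choice); even a coarse mesh is very accurate for smooth f thanks to the O(h⁴) error:

```python
import math

def csr(f, a, b, n):
    # Composite Simpson's rule; n must be even
    assert n % 2 == 0
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))  # odd nodes
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, (n - 2) // 2 + 1))  # even nodes
    return h / 3 * s

approx = csr(math.sin, 0.0, math.pi, 10)
print(approx)   # integral of sin on [0, pi] is 2
```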
Evaluating the Error

Another important accuracy angle:

until now: error = O(h^α)

from now on, look at the degree ν with error = 0 ∀ f ∈ P_ν (CTR: ν = 1)

With higher ν, p(x) can approximate any f(x) better

Define ε(x) ≡ f(x) − p(x)

∫ f = ∫ (p + ε) = ∫ p + ∫ ε = method(p) + ∫ ε = method(f) − method(ε) + ∫ ε

As ν → ∞: ε(x) → 0 and method(ε) → 0 ⇒ method(f) → ∫ f

Can we do better than Simpsons P₃?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 24
Integration Introspection
Simpson beat CTR because of its heavier weighted midpoint

But CSR similarly suffers at subinterval-pair boundaries (weight = 2 vs. 4 for no reason)

All composite rules ignore other areas → patch together local calculations → will suffer from this

What about using all nodes and higher degree interpolation?

Also note: we can choose the weights and the location of the calculation nodes
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 25
Numerical Quadrature
Introduction Riemann Integration
Composite Trapezoid Rule
Composite Simpsons Rule Gaussian Quadrature
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 26
Interpolatory Quadrature

∀ x_i: ℓ_i(x) ≡ ∏_{j=0, j≠i}^{n} (x − x_j)/(x_i − x_j), i = 0, . . . , n;  p(x) ≡ Σ_{i=0}^{n} f(x_i) ℓ_i(x)

If f(x) ≈ p(x), hopefully

∫_a^b f(x) dx ≈ ∫_a^b p(x) dx

∫_a^b p(x) dx = ∫_a^b Σ_{i=0}^{n} f(x_i) ℓ_i(x) dx = Σ_{i=0}^{n} f(x_i) ∫_a^b ℓ_i(x) dx  (≡ A_i)

A_i = A_i(a, b; {x_j}_{j=0}^{n}), but A_i ≠ A_i(f)!

(Endpoints, nodes) → A_i → ∫_a^b f(x) dx ≈ Σ_{i=0}^{n} A_i f(x_i).
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 27
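A sketch of computing the weights A_i = ∫_a^b ℓ_i(x) dx; integrating each Lagrange basis function numerically (with a fine trapezoid mesh) is an illustrative shortcut, not the slides' method. With the three nodes {a, (a+b)/2, b} the weights reproduce Simpson's (b−a)/6, 4(b−a)/6, (b−a)/6:

```python
def lagrange_basis(nodes, i, x):
    # l_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)
    r = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            r *= (x - xj) / (nodes[i] - xj)
    return r

def weights(nodes, a, b, m=2000):
    # A_i = integral of l_i over [a, b], here via a fine trapezoid mesh
    h = (b - a) / m
    A = []
    for i in range(len(nodes)):
        s = 0.5 * (lagrange_basis(nodes, i, a) + lagrange_basis(nodes, i, b))
        s += sum(lagrange_basis(nodes, i, a + k * h) for k in range(1, m))
        A.append(h * s)
    return A

A = weights([0.0, 0.5, 1.0], 0.0, 1.0)
print(A)   # close to [1/6, 4/6, 1/6] -- Simpson's weights on [0, 1]
```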
Interp. Quad. — Error Analysis

f ∈ P_n ⇒ f(x) = p(x), and so ∀ f ∈ P_n:

∫_a^b f(x) dx = Σ_{i=0}^{n} A_i f(x_i), i.e., error = 0

n + 1 weights determined by the nodes x_i (and a and b)

True for any choice of n + 1 (distinct) nodes x_i

What if we choose n + 1 specific nodes (with weights, total: 2(n + 1) choices)?

Can we get error = 0 ∀ f ∈ P_{2n+1}?
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 28
Gaussian Quadrature (GQ) — Theorem

Let

q(x) ∈ P_{n+1} ∋ ∫_a^b x^k q(x) dx = 0, k = 0, . . . , n

i.e., q(x) ⊥ all polynomials of lower degree

note: n + 2 coefficients, n + 1 conditions → unique up to a constant multiplier

x_i, i = 0, . . . , n ∋ q(x_i) = 0, i.e., the x_i are the zeros of q(x)

Then ∀ f ∈ P_{2n+1}, even though f(x) ≠ p(x) (f ∈ P_m, m > n):

∫_a^b f(x) dx = Σ_{i=0}^{n} A_i f(x_i)

We jumped from P_n to P_{2n+1}!
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 29
Gaussian Quadrature — Proof

Let f ∈ P_{2n+1}, and divide by q: f = sq + r, with s, r ∈ P_n

We have (note: until the last step, the x_i can be arbitrary)

∫_a^b f(x) dx = ∫_a^b s(x)q(x) dx + ∫_a^b r(x) dx   (division above)

= ∫_a^b r(x) dx   (orthogonality of q(x))

= Σ_{i=0}^{n} A_i r(x_i)   (r ∈ P_n)

= Σ_{i=0}^{n} A_i [f(x_i) − s(x_i)q(x_i)]   (division above)

= Σ_{i=0}^{n} A_i f(x_i)   (x_i are zeros of q(x))
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 30
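A sketch of the n = 1 case on [−1, 1]: the nodes are the zeros ±1/√3 of the Legendre polynomial q₂(x) = (3x² − 1)/2, the weights are both 1, and the rule is exact for every cubic (2n + 1 = 3). The particular cubic below is an arbitrary illustration:

```python
import math

# 2-point Gauss-Legendre on [-1, 1]: zeros of q_2(x) = (3x^2 - 1)/2, weights 1, 1
nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]
weights = [1.0, 1.0]

def gauss2(f):
    return sum(A * f(x) for A, x in zip(weights, nodes))

f = lambda x: 7 * x**3 - 3 * x**2 + 2 * x + 1
# Exact: odd powers integrate to 0 on [-1, 1]; -3 x^2 gives -2, +1 gives +2 -> 0
print(gauss2(f))   # exact for cubics with only 2 evaluations
```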
GQ — Additional Notes

Example q_n(x): Legendre polynomials, for [a, b] = [−1, 1] and q_n(1) = 1 (∃ a 3-term recurrence formula):

q₀(x) = 1, q₁(x) = x, q₂(x) = (3/2)x² − 1/2, q₃(x) = (5/2)x³ − (3/2)x, . . .

Use q_{n+1}(x) (why?); it depends only on a, b and n

Gaussian nodes ∈ (a, b) → good if f(a) = ±∞ and/or f(b) = ±∞ (e.g., ∫_0^1 (1/√x) dx)

More general: with a weight function w(x) in the original integral → q(x) orthogonality → weights A_i
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 31
Numerical QuadratureSummary
n + 1 function evaluations:

method    composite?   node placement      error = 0 ∀ f ∈ P_k, k =
CTR       yes          uniform (usually)   1
CSR       yes          uniform (usually)   3
interp.   no           any (distinct)      n
GQ        no           zeros of q(x)       2n + 1

P.S. There are also powerful adaptive quadrature methods
Copyright c2011 by A. E. Naiman NM Slides Numerical Quadrature, p. 32
Linear Systems
Introduction Naive Gaussian Elimination
Limitations
Operation Counts Additional Notes
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 1
What Are Linear Systems (LS)?

a_{11} x₁ + a_{12} x₂ + ⋯ + a_{1n} xₙ = b₁
a_{21} x₁ + a_{22} x₂ + ⋯ + a_{2n} xₙ = b₂
   ⋮           ⋮       ⋱       ⋮         ⋮
a_{m1} x₁ + a_{m2} x₂ + ⋯ + a_{mn} xₙ = b_m

Dependence on unknowns: powers of degree ≤ 1

Summation form: Σ_{j=1}^{n} a_{ij} x_j = b_i, 1 ≤ i ≤ m, i.e., m equations

Presently: m = n, i.e., square systems (later: m ≠ n)

Q: How to solve for [x₁ x₂ . . . xₙ]ᵀ? A: . . .
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 2
Linear Systems
Introduction Naive Gaussian Elimination
Limitations
Operation Counts Additional Notes
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 3
Overall Algorithm and Definitions
Currently: direct methods only (later: iterative methods)
General idea: Generate upper triangular system
(forward elimination)
Easily calculate unknowns in reverse order(backward substitution)
Pivot row = current one being processedpivot = diagonal element of pivot row
Steps applied to RHS as well.
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 4
Forward Elimination

Generate zero columns below the diagonal

Process rows downward:

for each row i := 1, n − 1 {          // the pivot row
    for each row k := i + 1, n {      // rows below pivot
        multiply pivot row so that a_ii matches a_ki
        subtract pivot row from row k // now a_ki = 0
    }                                 // now column below a_ii is zero
}                                     // now a_ij = 0, i > j

Obtain triangular system

Lets work an example, . . .
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 5
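The pseudocode above can be sketched in Python (no pivoting, so this is the "naive" variant; the matrix is the worked example from the next slides):

```python
def forward_eliminate(A, b):
    # Naive Gaussian elimination (no pivoting), in place
    n = len(A)
    for i in range(n - 1):              # pivot row
        for k in range(i + 1, n):       # rows below pivot
            m = A[k][i] / A[i][i]       # multiplier
            for j in range(i, n):
                A[k][j] -= m * A[i][j]  # now A[k][i] == 0
            b[k] -= m * b[i]
    return A, b

A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
forward_eliminate(A, b)
print(A[3], b[3])   # last row reduces to -3 x4 = -3
```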
Compact Form of LS
 6 x₁ −  2 x₂ + 2 x₃ +  4 x₄ =  16
12 x₁ −  8 x₂ + 6 x₃ + 10 x₄ =  26
 3 x₁ − 13 x₂ + 9 x₃ +  3 x₄ = −19
−6 x₁ +  4 x₂ + 1 x₃ − 18 x₄ = −34

[  6   −2   2    4 |  16 ]
[ 12   −8   6   10 |  26 ]
[  3  −13   9    3 | −19 ]
[ −6    4   1  −18 | −34 ]
Proceeding with the forward elimination, . . .
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 6
Forward EliminationExample
[  6   −2   2    4 |  16 ]      [ 6   −2   2    4 |  16 ]
[ 12   −8   6   10 |  26 ]  →   [ 0   −4   2    2 |  −6 ]
[  3  −13   9    3 | −19 ]      [ 0  −12   8    1 | −27 ]
[ −6    4   1  −18 | −34 ]      [ 0    2   3  −14 | −18 ]

    [ 6  −2   2    4 |  16 ]        [ 6  −2   2    4 |  16 ]
→   [ 0  −4   2    2 |  −6 ]    →   [ 0  −4   2    2 |  −6 ]
    [ 0   0   2   −5 |  −9 ]        [ 0   0   2   −5 |  −9 ]
    [ 0   0   4  −13 | −21 ]        [ 0   0   0   −3 |  −3 ]

Matrix is upper triangular.
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 7
Backward Substitution

[ 6  −2   2    4 |  16 ]
[ 0  −4   2    2 |  −6 ]
[ 0   0   2   −5 |  −9 ]
[ 0   0   0   −3 |  −3 ]

Last equation: −3 x₄ = −3 ⇒ x₄ = 1

Second to last equation: 2 x₃ − 5 x₄ = −9, x₄ = 1 ⇒ 2 x₃ − 5 = −9 ⇒ x₃ = −2

. . . second equation . . . x₂ = . . .

. . . [x₁ x₂ x₃ x₄]ᵀ = [3 1 −2 1]ᵀ

For small problems, check the solution in the original system.
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 8
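A sketch of back substitution, run on the triangular system just derived:

```python
def back_substitute(U, y):
    # Solve an upper triangular system U x = y in reverse order
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

U = [[6, -2, 2, 4], [0, -4, 2, 2], [0, 0, 2, -5], [0, 0, 0, -3]]
y = [16, -6, -9, -3]
print(back_substitute(U, y))   # [3.0, 1.0, -2.0, 1.0]
```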
Linear Systems
Introduction Naive Gaussian Elimination
Limitations
Operation Counts Additional Notes
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 9
Zero Pivots
Clearly, zero pivots prevent forward elimination

But zero pivots can appear along the way

Later: when are we guaranteed no zero pivots?

All pivots ≠ 0 ⇒ we are safe

Experiment with a system with a known solution.
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 10
Vandermonde Matrix

[ 1    2        4          8        ⋯   2^{n−1}     ]
[ 1    3        9          27       ⋯   3^{n−1}     ]
[ 1    4        16         64       ⋯   4^{n−1}     ]
[ ⋮    ⋮        ⋮          ⋮        ⋱   ⋮           ]
[ 1  n + 1  (n + 1)²  (n + 1)³   ⋯   (n + 1)^{n−1} ]

Want row sums on the RHS ⇒ x_i = 1, i = 1, . . . , n

Geometric series: 1 + t + t² + ⋯ + t^{n−1} = (tⁿ − 1)/(t − 1)

We obtain b_i, for row i = 1, . . . , n:

Σ_{j=1}^{n} (1 + i)^{j−1} · 1 = [(1 + i)ⁿ − 1] / [(1 + i) − 1] = (1/i)[(1 + i)ⁿ − 1] ≡ b_i

System is ready to be tested.
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 11
Vandermonde Test
Platform with 7 significant (decimal) digits

n = 1, . . . , 8: expected results

n = 9: error > 16,000% !!

Questions: What happened? Why so sudden? Can anything be done?

Answer: the matrix is ill-conditioned

Sensitivity to roundoff errors → leads to error propagation and magnification

First, how to assess vector errors.
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 12
Errors
Given system: Ax = b and solution estimate x̃

Residual (error): r ≡ Ax̃ − b

Absolute error (if x is known): e ≡ x̃ − x

Norm taken of r or e: vector → scalar quantity (more on norms later)

Relative errors: ‖r‖/‖b‖ and ‖e‖/‖x‖

Back to ill-conditioning, . . .
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 13
Ill-conditioning

0·x₁ + x₂ = 1
  x₁ + x₂ = 2     ⇒ 0 pivot

General rule: if 0 is problematic → numbers near 0 are problematic

ε x₁ + x₂ = 1
  x₁ + x₂ = 2

. . . x₂ = (2 − 1/ε)/(1 − 1/ε) and x₁ = (1 − x₂)/ε

small ε (e.g., ε = 10⁻⁹ with 8 significant digits) ⇒ x₂ = 1 and x₁ = 0 — wrong!

What can be done?
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 14
Pivoting

Switch the order of equations, moving the offending element off the diagonal:

  x₁ + x₂ = 2
ε x₁ + x₂ = 1

. . . x₂ = (1 − 2ε)/(1 − ε) and x₁ = 2 − x₂ = 1/(1 − ε)

This is correct, even for small ε (or even ε = 0)

Compare the size of the diagonal (pivot) elements above, to ε

Ratio of the first row of the Vandermonde matrix = 1 : 2^{n−1}

Issue is relative size, not absolute size.
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 15
Scaled Partial Pivoting

Also called row pivoting (vs. column pivoting)

Instability source: subtracting large values: a_kj −= a_ij (a_ki / a_ii)

W.l.o.g.: n rows, choosing which row goes first

Find i ∋ ∀ rows k ≠ i, ∀ columns j > 1: minimize |a_ij a_k1 / a_i1| → O(n³) calculations!

To simplify (remove k), imagine a_k1 = 1 → find i ∋ ∀ columns j > 1: min_i |a_ij / a_i1|

Still: 1) O(n²) calculations, 2) how to minimize over each row?

Find i: min_i max_j |a_ij| / |a_i1|, or equivalently: max_i |a_i1| / max_j |a_ij| (e.g., the first matrix)
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 16
Linear Systems
Introduction Naive Gaussian Elimination
Limitations
Operation Counts Additional Notes
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 17
How Much Work on A?

Real life: crowd estimation costs? (will depend on accuracy)

Counting × and ÷ (i.e., long operations) only

Pivoting: row decision amongst k rows = k ratios

First row:
  n ratios (for choice of pivot row)
  n − 1 multipliers
  (n − 1)² multiplications
  total: n² operations

Forward elimination operations (for large n):

Σ_{k=2}^{n} k² = [n(n + 1)(2n + 1)/6] − 1 ≈ n³/3

How about the work on b?
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 18
Rest of the Work

Forward elimination work on the RHS: Σ_{k=2}^{n} (k − 1) = n(n − 1)/2

Backward substitution: Σ_{k=1}^{n} k = n(n + 1)/2

Total: n² operations

O(n) fewer operations than forward elimination on A

Important for multiple RHSs known from the start:
  do not repeat the O(n³) work for each
  rather, line them up and process simultaneously

Can we do better at times?
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 19
Sparse Systems

[ ×  ×  0  ⋯  0 ]
[ ×  ×  ×  ⋱  ⋮ ]
[ 0  ⋱  ⋱  ⋱  0 ]
[ ⋮  ⋱  ×  ×  × ]
[ 0  ⋯  0  ×  × ]

Above, e.g., a tridiagonal system (half bandwidth = 1); note: a_ij = 0 for |i − j| > 1

Opportunities for savings: storage and computations — both are O(n)

Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 20
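The O(n) savings for a tridiagonal system can be sketched with the standard elimination specialized to three diagonals (often called the Thomas algorithm; the small diagonally dominant test system is an illustration):

```python
def thomas(lo, dg, up, b):
    # O(n) solve of a tridiagonal system: lo = subdiagonal (len n-1),
    # dg = diagonal (len n), up = superdiagonal (len n-1)
    n = len(dg)
    dg, b = dg[:], b[:]                 # keep inputs intact
    for i in range(1, n):               # forward elimination
        m = lo[i - 1] / dg[i - 1]
        dg[i] -= m * up[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n
    x[n - 1] = b[n - 1] / dg[n - 1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = (b[i] - up[i] * x[i + 1]) / dg[i]
    return x

# Diagonally dominant test system with known solution [1, 1, 1]
x = thomas([-1, -1], [2, 2, 2], [-1, -1], [1, 0, 1])
print(x)   # [1.0, 1.0, 1.0]
```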
Linear Systems
Introduction Naive Gaussian Elimination
Limitations
Operation Counts Additional Notes
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 21
Pivot-Free Guarantee

When are we guaranteed non-zero pivots?

Diagonal dominance (just like it sounds):

|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|, i = 1, . . . , n

(Or > in one row, and ≥ in the remaining)

Many finite difference and finite element problems → diagonally dominant systems

Occurs often enough to justify individual study.
Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 22
LU Decomposition

E.g.: same A, many bs of a time-dependent problem; not all bs are known from the start

Want A = LU for decreased work later

Then define y: L(Ux) = Ly = b
  solve Ly = b for y
  solve Ux = y for x

U is upper triangular, the result of Gaussian elimination

L is unit lower triangular: 1s on the diagonal and the Gaussian multipliers below

For small systems, verify (even by hand): A = LU

Each new RHS is n² work, instead of O(n³)

Copyright c2011 by A. E. Naiman NM Slides Linear Systems, p. 23
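A sketch of factor-once, solve-many (Doolittle-style, no pivoting; the matrix is the worked example from the elimination slides, and the second RHS is an illustrative extra):

```python
def lu_decompose(A):
    # A = LU with unit lower triangular L (stores the Gaussian multipliers)
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for i in range(n - 1):
        for k in range(i + 1, n):
            m = U[k][i] / U[i][i]
            L[k][i] = m
            for j in range(i, n):
                U[k][j] -= m * U[i][j]
    return L, U

def lu_solve(L, U, b):
    # n^2 work per right-hand side: Ly = b, then Ux = y
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
L, U = lu_decompose(A)
print(lu_solve(L, U, [16, 26, -19, -34]))   # [3.0, 1.0, -2.0, 1.0]
print(lu_solve(L, U, [10, 20, 2, -19]))     # row sums -> [1.0, 1.0, 1.0, 1.0]
```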
Approximation by Splines
Motivation Linear Splines
Quadratic Splines
Cubic Splines Summary
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 1
Motivation

[figure: many data points / a very involved function]

Given: a set of many points, or perhaps a very involved function

Want: a simple representative function for analysis or manufacturing

Any suggestions?
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 2
Lets Try Interpolation

[figure: high-degree polynomial interpolant of the data]

Disadvantages:

Values outside the x-range diverge quickly (interp(10) = 1592)

Numerical instabilities of high-degree polynomials
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 3
Runge Function — Two Interpolations

[figure: Runge function interpolated at uniform points x₀, . . . , x₈ (±5) and at Chebyshev points c₀, . . . , c₄]

More disadvantages:

Within the x-range, often high oscillations

Even Chebyshev points → often uncharacteristic oscillations
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 4
Splines

Given domain [a, b], a spline S(x):

Is defined on the entire domain

Provides a certain amount of smoothness

∃ partition of knots (= where the spline can change form) {a = t₀, t₁, t₂, . . . , tₙ = b} such that

S(x) = { S₀(x),     x ∈ [t₀, t₁],
         S₁(x),     x ∈ [t₁, t₂],
         ⋮
         S_{n−1}(x), x ∈ [t_{n−1}, tₙ] }

i.e., S is piecewise polynomial
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 5
Interpolatory Splines

Note: splines split up the range [a, b] → opposite of the CTR/CSR → GQ development

Spline implies no interpolation, not even any y-values

If given points {(t₀, y₀), (t₁, y₁), (t₂, y₂), . . . , (tₙ, yₙ)} → an interpolatory spline traverses these as well

Splines = nice, analytical functions
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 6
Approximation by Splines
Motivation Linear Splines
Quadratic Splines
Cubic Splines Summary
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 7
Linear Splines

Given domain [a, b], a linear spline S(x):

Is defined on the entire domain

Provides continuity, i.e., is C⁰[a, b]

∃ partition of knots {a = t₀, t₁, t₂, . . . , tₙ = b} such that

S_i(x) = a_i x + b_i ∈ P₁ on [t_i, t_{i+1}], i = 0, . . . , n − 1

Recall: no y-values or interpolation yet
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 8
Linear Spline — Examples

[figure: a linear spline on [a, b], vs. curves with an undefined part, a discontinuous part, and a nonlinear part]

Definition outside of [a, b] is arbitrary
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 9
Interpolatory Linear Splines

Given points {(t₀, y₀), (t₁, y₁), (t₂, y₂), . . . , (tₙ, yₙ)} → spline must interpolate as well

Are the S_i(x) (with no additional knots) unique?

Coefficients: a_i x + b_i, i = 0, . . . , n − 1 → total = 2n

Conditions: 2 prescribed interpolation points for each S_i(x), i = 0, . . . , n − 1 (includes the continuity condition) → total = 2n

Obtain

S_i(x) = a_i x + (y_i − a_i t_i), a_i = (y_{i+1} − y_i)/(t_{i+1} − t_i), i = 0, . . . , n − 1
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 10
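A sketch of evaluating this interpolatory linear spline (the knot/value data below is illustrative):

```python
import bisect

def linear_spline(t, y, x):
    # S_i(x) = a_i x + (y_i - a_i t_i), a_i = (y_{i+1} - y_i)/(t_{i+1} - t_i)
    i = min(max(bisect.bisect_right(t, x) - 1, 0), len(t) - 2)
    a = (y[i + 1] - y[i]) / (t[i + 1] - t[i])
    return a * x + (y[i] - a * t[i])

t = [0.0, 1.0, 2.0, 4.0]
y = [1.0, 3.0, 2.0, 0.0]
print(linear_spline(t, y, 0.5), linear_spline(t, y, 3.0))   # 2.0 1.0
```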
Interpolatory Linear SplinesExample
[figure: interpolatory linear spline through the sample points]
Discontinuous derivatives at knots are unpleasing, . . .
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 11
Approximation by Splines
Motivation Linear Splines Quadratic Splines Cubic Splines Summary
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 12
Quadratic Splines

Given domain [a, b], a quadratic spline S(x):

Is defined on the entire domain

Provides continuity of zeroth and first derivatives, i.e., is C¹[a, b]

∃ partition of knots {a = t₀, t₁, t₂, . . . , tₙ = b} such that

S_i(x) = a_i x² + b_i x + c_i ∈ P₂ on [t_i, t_{i+1}], i = 0, . . . , n − 1

Again no y-values or interpolation yet
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 13
Quadratic Spline — Example

f(x) = { −x²,     x ≤ 0,
          x²,     0 ≤ x ≤ 1,
         2x − 1,  x ≥ 1 }

f(x) =? a quadratic spline

Defined on domain (−∞, ∞)

Continuity (clearly okay away from x = 0 and 1):

Zeroth derivative: f(0⁻) = f(0⁺) = 0, f(1⁻) = f(1⁺) = 1

First derivative: f′(0⁻) = f′(0⁺) = 0, f′(1⁻) = f′(1⁺) = 2

Each part of f(x) is ∈ P₂ ⇒ yes, a quadratic spline
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 14
Interpolatory Quadratic Splines

Given points {(t₀, y₀), (t₁, y₁), (t₂, y₂), . . . , (tₙ, yₙ)} → spline must interpolate as well

Are the S_i(x) unique (same knots)?

Coefficients: a_i x² + b_i x + c_i, i = 0, . . . , n − 1 → total = 3n

Conditions:
  2 prescribed interpolation points for each S_i(x), i = 0, . . . , n − 1 (includes the continuity-of-function condition) → 2n
  (n − 1) C¹ continuities
  total = 3n − 1
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 15
Interpolatory Quadratic Splines (cont.)

Underdetermined system → need to add one condition

Define (as yet to be determined) z_i ≡ S′(t_i), i = 0, . . . , n

Write

S_i(x) = [(z_{i+1} − z_i) / (2(t_{i+1} − t_i))] (x − t_i)² + z_i (x − t_i) + y_i

therefore

S_i′(x) = [(z_{i+1} − z_i) / (t_{i+1} − t_i)] (x − t_i) + z_i

Need to verify the continuity and interpolatory conditions, and determine the z_i
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 16
Checking Interpolatory Quadratic Splines

Check four continuity (and interpolatory) conditions:

(i) S_i(t_i) = y_i  (ii) S_i(t_{i+1}) = y_{i+1} (below)  (iii) S_i′(t_i) = z_i  (iv) S_i′(t_{i+1}) = z_{i+1}

(ii): S_i(t_{i+1}) = [(z_{i+1} − z_i)/2] (t_{i+1} − t_i) + z_i (t_{i+1} − t_i) + y_i

                  = [(z_{i+1} + z_i)/2] (t_{i+1} − t_i) + y_i  set= y_{i+1}

therefore (n equations, n + 1 unknowns):

z_{i+1} = 2 (y_{i+1} − y_i)/(t_{i+1} − t_i) − z_i, i = 0, . . . , n − 1

Choose any 1 z_i and the remaining n are determined.
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 17
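A sketch of the recurrence and the S_i formula above (the knot data is illustrative; z₀ := 0 matches the example on the next slide):

```python
def quadratic_spline_slopes(t, y, z0=0.0):
    # z_{i+1} = 2 (y_{i+1} - y_i)/(t_{i+1} - t_i) - z_i, with z_0 chosen freely
    z = [z0]
    for i in range(len(t) - 1):
        z.append(2 * (y[i + 1] - y[i]) / (t[i + 1] - t[i]) - z[i])
    return z

def quadratic_spline_eval(t, y, z, x):
    # S_i(x) = (z_{i+1}-z_i)/(2(t_{i+1}-t_i)) (x-t_i)^2 + z_i (x-t_i) + y_i
    i = 0
    while i < len(t) - 2 and x > t[i + 1]:
        i += 1
    dt = t[i + 1] - t[i]
    return (z[i + 1] - z[i]) / (2 * dt) * (x - t[i]) ** 2 + z[i] * (x - t[i]) + y[i]

t = [0.0, 1.0, 2.0]
y = [0.0, 1.0, 0.0]
z = quadratic_spline_slopes(t, y)
print(z, [quadratic_spline_eval(t, y, z, v) for v in t])   # reproduces y at the knots
```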
Interpolatory Quadratic Splines — Example

[figure: interpolatory quadratic spline through the sample points, with z₀ := 0]
Okay, but discontinuous curvature at knots, . . .
Copyright c2011 by A. E. Naiman NM Slides Approximation by Splines, p. 18
Approximation by Splines
Motivation
Linear Splines
Quadratic Splines
Cubic Splines
Summary
Cubic Splines
Given domain [a, b], a cubic spline S(x)
  is defined on the entire domain
  provides continuity of zeroth, first and second derivatives, i.e., is C^2[a, b]
  has a partition of knots {a = t_0, t_1, t_2, . . . , t_n = b} such that for i = 0, . . . , n − 1
    S_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i ∈ P_3 on [t_i, t_{i+1}]
In general: spline of degree k . . . C^{k−1} . . . P_k . . .
Why Stop at k = 3?
Continuous curvature is visually pleasing
Usually little numerical advantage to k > 3
Technically, odd k's are better for interpolating splines
Natural (defined later) cubic splines best in an analytical sense (stated later)
Interpolatory Cubic Splines
Given points
{(t0, y0), (t1, y1), (t2, y2), . . . , (tn, yn)}; the spline must interpolate as well
Are the Si(x) unique (same knots)?
Coefficients: a_i x^3 + b_i x^2 + c_i x + d_i, i = 0, . . . , n − 1 ⟹ total = 4n
Conditions:
  2 prescribed interpolation points for each Si(x), i = 0, . . . , n − 1 (includes continuity-of-function condition)
  (n − 1) C^1 + (n − 1) C^2 continuities
⟹ total = 4n − 2
Interpolatory Cubic Splines (cont.)
Underdetermined system ⟹ need to add two conditions
Natural cubic spline: add S''(a) = S''(b) = 0
  Assumes straight lines (i.e., no more constraints) outside of [a, b]
  Imagine bent beam of ship hull
Defined for non-interpolatory case as well
Required matrix calculation for S_i definitions:
  Linear: independent a_i = (y_{i+1} − y_i)/(t_{i+1} − t_i) ⟹ diagonal
  Quadratic: two-term z_i definition ⟹ bidiagonal
  Cubic: . . . ⟹ tridiagonal
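The tridiagonal system for the natural cubic spline can be sketched as follows. This uses the standard textbook formulation with z_i = S''(t_i): natural conditions z_0 = z_n = 0, and for i = 1, . . . , n − 1, h_{i−1} z_{i−1} + 2(h_{i−1} + h_i) z_i + h_i z_{i+1} = 6(b_i − b_{i−1}), where h_i = t_{i+1} − t_i and b_i = (y_{i+1} − y_i)/h_i. The function name and sample data are mine; the system is solved by forward elimination and back substitution, exploiting the tridiagonal structure noted above.

```python
def natural_cubic_z(t, y):
    """Second derivatives z_i = S''(t_i) of the natural cubic spline:
    z_0 = z_n = 0, interior z_i from a tridiagonal solve."""
    n = len(t) - 1
    h = [t[i + 1] - t[i] for i in range(n)]
    b = [(y[i + 1] - y[i]) / h[i] for i in range(n)]
    # forward elimination on the tridiagonal system for z_1 .. z_{n-1}
    u = [0.0] * n          # modified diagonal entries
    v = [0.0] * n          # modified right-hand sides
    u[1] = 2.0 * (h[0] + h[1])
    v[1] = 6.0 * (b[1] - b[0])
    for i in range(2, n):
        u[i] = 2.0 * (h[i - 1] + h[i]) - h[i - 1] ** 2 / u[i - 1]
        v[i] = 6.0 * (b[i] - b[i - 1]) - h[i - 1] * v[i - 1] / u[i - 1]
    # back substitution; natural conditions pin the endpoints
    z = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        z[i] = (v[i] - h[i] * z[i + 1]) / u[i]
    return z

# usage: equispaced knots, so h_i = 1 and the system is easy to check by hand
t = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 1.0, 0.0, 1.0]
z = natural_cubic_z(t, y)   # z == [0.0, -4.0, 4.0, 0.0]
```

Once the z_i are known, each cubic piece S_i is determined on [t_i, t_{i+1}], just as the z_i determined the quadratic pieces earlier.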
Interp. Natural Cubic Splines: Example
[Plot: interpolatory natural cubic spline, function value vs. x on [−4, 4]]
Now the curvature is continuous as well.
Optimality of Natural Cubic Spline
Theorem: If f ∈ C^2[a, b], knots: {a = t_0, t_1, t_2, . . . , t_n = b},
interpolation points: (t_i, y_i) : y_i = f(t_i), i = 0, . . . , n, and
S(x) is the natural cubic spline which interpolates f(x), then
  ∫_a^b [S''(x)]^2 dx ≤ ∫_a^b [f''(x)]^2 dx
Bottom line: average curvature of S ≤ that of f; compare with interpolating polynomial.
Approximation by Splines
Motivation
Linear Splines
Quadratic Splines
Cubic Splines
Summary
Interpolation vs. Splines: Serpentine Curve
[Plot: serpentine curve, oscillatory polynomial interpolator vs. linear spline, function value vs. x on [−2, 2]]
Vs. the oscillatory interpolator: even the linear spline is better.
Three Splines
[Plot: linear, quadratic and natural cubic splines, function value vs. x on [−4, 4]]
Increased smoothness with increase of degree.
Ordinary Differential Equations
Introduction
Euler Method
Higher Order Taylor Methods
Runge-Kutta Methods
Summary
Ordinary Differential Equation: Definition
ODE = an equation
involving one or more derivatives of x(t)
x(t) is unknown and the desired target; somewhat the opposite of numerical differentiation
E.g.: x^(37)(t) + 37 t e^{x^2(t)} sin 4x(t) log(1/t) = 42
Which x(t)'s fulfill this behavior?
Ordinary (vs. partial) = one independent variable t
Order = highest (composition of) derivative(s) involved
Linear = derivatives, including zeroth, appear in linear form
Homogeneous = all terms involve some derivative (including zeroth)
Analytical Approach
Good luck with previous equation, but others . . .
Shorthand: x = x(t), x' = dx/dt, x'' = d^2x/dt^2, . . .
Analytically solvable:
  x' − x = e^t ⟹ x(t) = t e^t + c e^t
  x'' + 9x = 0 ⟹ x(t) = c_1 sin 3t + c_2 cos 3t
  x' + 12x = 0 ⟹ x(t) = c e^{−12t}
c, c1 and c2 are arbitrary constants
Need more conditions/information to pin down constants:
  Initial value problems (IVP)
  Boundary value problems (BVP)
Here: IVP for first-order ODE.
First-Order IVP
General form:
x' = f(t, x), x(a) given
Note: non-linear, non-homogeneous; but x' not on RHS
Examples:
  x' = x + 1, x(0) = 0 ⟹ x(t) = e^t − 1
  x' = 6t − 1, x(1) = 6 ⟹ x(t) = 3t^2 − t + 4
  x' = t/(x + 1), x(0) = 0 ⟹ x(t) = sqrt(t^2 + 1) − 1
Physically: e.g., t is time, x is distance and f = x' is speed/velocity
Another optimistic scenario . . .
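A quick numeric sanity check of the three closed-form solutions (a throwaway sketch, not part of the slides): a centered difference of each claimed x(t) should match the corresponding right-hand side f(t, x).

```python
import math

def deriv(x, t, h=1e-6):
    """Centered-difference approximation of x'(t)."""
    return (x(t + h) - x(t - h)) / (2.0 * h)

# the three claimed solutions
x1 = lambda t: math.exp(t) - 1.0             # for x' = x + 1,      x(0) = 0
x2 = lambda t: 3.0 * t**2 - t + 4.0          # for x' = 6t - 1,     x(1) = 6
x3 = lambda t: math.sqrt(t**2 + 1.0) - 1.0   # for x' = t/(x + 1),  x(0) = 0

# each numerical derivative matches the right-hand side at sample points
for s in (0.5, 1.0, 2.0):
    assert abs(deriv(x1, s) - (x1(s) + 1.0)) < 1e-4
    assert abs(deriv(x2, s) - (6.0 * s - 1.0)) < 1e-4
    assert abs(deriv(x3, s) - s / (x3(s) + 1.0)) < 1e-4
```

The same check confirms the initial conditions: x1(0) = 0, x2(1) = 6, x3(0) = 0.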
RHS Independence of x
f = f(t) but f ≠ f(x)
E.g. x' = 3t^2 − 4t^{−1} + (1 + t^2)^{−1}, x(5) = 17
Perform indefinite integral:
  x(t) = ∫ (dx(t)/dt) dt = ∫ f(t) dt
Obtain x(t) = t^3 − 4 ln t + arctan t + C, with C = 17 − 5^3 + 4 ln 5 − arctan 5
And now for the bad news . . .
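Both the antiderivative and the constant C can be checked in a few lines (a throwaway sketch, not from the slides): the initial condition x(5) = 17 must hold, and a centered difference of x(t) must match f(t).

```python
import math

def f(t):
    # right-hand side: f(t) = 3t^2 - 4/t + 1/(1 + t^2)
    return 3.0 * t**2 - 4.0 / t + 1.0 / (1.0 + t**2)

# constant fixed by x(5) = 17
C = 17.0 - 5.0**3 + 4.0 * math.log(5.0) - math.atan(5.0)

def x(t):
    # antiderivative x(t) = t^3 - 4 ln t + arctan t + C
    return t**3 - 4.0 * math.log(t) + math.atan(t) + C

assert abs(x(5.0) - 17.0) < 1e-9           # initial condition
h = 1e-6
assert abs((x(2.0 + h) - x(2.0 - h)) / (2.0 * h) - f(2.0)) < 1e-4   # x'(t) = f(t)
```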
Numerical Techniques
Source of need:
  Usually analytical solution is not known
  Even if known, perhaps very complicated, expensive to compute
Numerical techniques:
  Generate a table of values for x(t)
  Usually equispaced in t, stepsize = h
  ! With small h, and far from the initial value, roundoff error can accumulate and kill
Ordinary Differential Equations
Introduction
Euler Method
Higher Order Taylor Methods
Runge-Kutta Methods
Summary
Euler Method
First-order IVP: given x' = f(t, x) and x(a), want x(b)
Use first 2 terms of Taylor series (i.e., n = 1) to get from x(a) to x(a + h):
  x(a + h) = x(a) + h x'(a) + truncation error O(h^2)
  (for x'(a), use f(a, x(a)))
Repeat to get from x(a + h) to x(a + 2h), . . .
Total n = (b − a)/h steps until x(b)
Note: units of time/distance/speed are consistent
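The method can be sketched in a few lines of Python (the function name `euler` and the test problem are mine): n steps of size h = (b − a)/n, each one advancing along the tangent.

```python
import math

def euler(f, a, b, xa, n):
    """Advance x' = f(t, x) from x(a) = xa to t = b in n Euler steps."""
    h = (b - a) / n
    t, x = a, xa
    for _ in range(n):
        x += h * f(t, x)   # x(t + h) = x(t) + h x'(t), truncation O(h^2) per step
        t += h
    return x

# IVP x' = x + 1, x(0) = 0, whose exact solution is x(t) = e^t - 1
approx = euler(lambda t, x: x + 1.0, 0.0, 1.0, 0.0, 10000)
exact = math.e - 1.0
```

On this problem, halving h roughly halves the error at t = 1: the O(h^2) per-step truncation error accumulates over (b − a)/h steps into an O(h) global error.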
Euler Method: Example
[Plot: one Euler step of size h from x(a) along the tangent, vs. the true solution x(t)]