
Notes for Numerical Analysis

Math 5466

by

S. Adjerid

Virginia Polytechnic Institute

and State University

(A Rough Draft)


Contents

1 Numerical Methods for ODEs
   1.1 Introduction
   1.2 One-step Methods
       1.2.1 Taylor Methods
       1.2.2 Runge-Kutta Methods
       1.2.3 Error Estimation and Control
       1.2.4 Systems and higher-order differential equations
   1.3 Multi-step Methods
       1.3.1 Adams-Bashforth Methods
       1.3.2 Adams-Moulton Methods
       1.3.3 Predictor-Corrector Methods
       1.3.4 Methods Based on Backward Difference Formulas (BDF)
   1.4 Consistency, Stability and Convergence
       1.4.1 Basic notions and definitions
       1.4.2 Stability
       1.4.3 Absolute stability
   1.5 Two-point Boundary Value Problems
       1.5.1 Introduction
       1.5.2 The Shooting Method
       1.5.3 The Finite Difference Method


Chapter 1

Numerical Methods for ODEs

1.1 Introduction

Differential equations are used to model many physical situations, such as vibrations, electric circuits, chemical reactions, and biological processes; they also arise in the numerical solution of partial differential equations.

Example 1: Vibration of a mass attached to an elastic spring

l_0 is the length of the spring at rest and l(t) is its length at time t, so the displacement is

x(t) = l(t) - l_0.

If we stretch the spring to x(0) = l(0) - l_0 and let it go, the dynamics is modeled by Newton's equation ma = -kx(t), where the acceleration is a(t) = x''(t). This gives the second-order differential equation

m x''(t) + k x(t) = 0, t > 0, (1.1.1a)

subject to the initial conditions

x(0) = x_0, x'(0) = v_0. (1.1.1b)

Example 2: Swinging pendulum


If θ(t) is the angle between the vertical rest position (θ = 0) and the position at time t, the motion of the pendulum is described by the equation

θ''(t) + (g/L) sin(θ(t)) = 0, t > 0, (1.1.2)

with initial conditions θ(0) = θ_0 and θ'(0) = θ_1, where L is the length of the pendulum and g is the gravitational acceleration.

Here, we solve problems which consist of finding y(t) such that

y'(t) = f(t, y), t > t_0, (1.1.3a)

subject to the initial condition

y(t_0) = y_0. (1.1.3b)

The next theorem states the existence and uniqueness of a solution to (1.1.3).

Theorem 1.1.1. Let f(t, y) be a continuous function for all t_0 < t < T and all y. Suppose further that f(t, y) satisfies the Lipschitz condition

|f(t, w) - f(t, z)| ≤ L |w - z|, for all w, z, t_0 < t < T, (1.1.4)

where L > 0. Then the problem (1.1.3) has a unique differentiable solution.

Example: Consider

y'(t) = 3y(t) + e^t,

with f(t, y) = 3y + e^t, which is continuous, and

f(t, w) - f(t, z) = 3(w - z),

which leads to

|f(t, w) - f(t, z)| ≤ 3 |w - z|.

Thus, f(t, y) is Lipschitz continuous with L = 3.


1.2 One-step Methods

In this section we will study Taylor and Runge-Kutta methods.

1.2.1 Taylor Methods

We note that, since the initial value y(t_0) = y_0 is given, one may compute

y'(t_0) = f(t_0, y_0),

y''(t_0) = f_t(t_0, y_0) + f_y(t_0, y_0) f(t_0, y_0),

y'''(t_0) = d^2 f(t, y(t))/dt^2 = f_tt + 2 f_ty f + f_yy f^2 + f_y (f_t + f_y f), evaluated at (t_0, y_0).

Therefore, all derivatives y^(n)(t_0) can also be computed.

This leads us to use Taylor series to write

y(t_0 + h) = y(t_0) + y'(t_0) h + ... + (y^(p)(t_0)/p!) h^p + (y^(p+1)(c)/(p+1)!) h^(p+1). (1.1.5)

If we set

T_p(t_0, y_0, h) = y(t_0) + y'(t_0) h + ... + (y^(p)(t_0)/p!) h^p, (1.1.6)

we define the O(h^p) Taylor method as

Step 0: set h > 0, t_i = t_0 + ih, i = 1, 2, 3, ...
Step 1: input y(t_0) = y_0
Step 2: for i = 0 : N - 1
            y_{i+1} = T_p(t_i, y_i, h)
        end
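As an illustration, here is a minimal Python sketch of the p = 2 Taylor method for a scalar problem; the user-supplied functions f, ft and fy (my names, not from the notes) evaluate f, f_t and f_y.

import numpy as np

def taylor2(f, ft, fy, t0, T, y0, N):
    # Order-2 Taylor method: y_{i+1} = y_i + h y'_i + (h^2/2) y''_i,
    # with y' = f and y'' = f_t + f_y f evaluated at (t_i, y_i).
    h = (T - t0) / N
    t = t0 + h * np.arange(N + 1)
    y = np.zeros(N + 1)
    y[0] = y0
    for i in range(N):
        fi = f(t[i], y[i])
        ypp = ft(t[i], y[i]) + fy(t[i], y[i]) * fi   # y'' via the chain rule
        y[i + 1] = y[i] + h * fi + 0.5 * h**2 * ypp
    return t, y

# Example: y' = y, y(0) = 1 (so f_t = 0, f_y = 1); exact solution e^t
t, y = taylor2(lambda t, y: y, lambda t, y: 0.0, lambda t, y: 1.0, 0.0, 1.0, 1.0, 10)
print(abs(y[-1] - np.e))   # global error, O(h^2)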


In the remainder of this chapter we adopt the notation y_i ≈ y(t_i).

The Taylor method built from T_p has local error O(h^(p+1)) and degree of precision p.

The local discretization error is

τ(t_k) = (y^(p+1)(c)/(p+1)!) h^(p+1). (1.1.7)

The truncation error is

τ_p = τ/h = (y^(p+1)(c)/(p+1)!) h^p. (1.1.8)

The total (global) discretization error is

e(t_i) = y(t_i) - y_i. (1.1.9)

Now, let us study the special case p = 1, which leads to a well-known method.

Euler's method:

Let t_0 < t < T and subdivide [t_0, T] into N subintervals to get t_i = t_0 + i·h, where h = (T - t_0)/N is the stepsize.

y_0 known,

y_{i+1} = y_i + h f(t_i, y_i), i = 0, 1, 2, ..., N - 1. (1.1.10)

Example:

y' = y, 0 < t < 1, y(0) = 1,

where the exact solution is y(t) = e^t.

Let h = 1/10 and t_i = i · 0.1, i = 0, 1, ..., N.


y_0 = 1
y_1 = y_0 + h f(t_0, y_0) = 1 + h
y_2 = y_1 + h f(t_1, y_1) = (1 + h)^2
...
y_k = y_{k-1} + h f(t_{k-1}, y_{k-1}) = (1 + h)^k, k = 1, 2, ..., 10. (1.1.11)

Show a plot of the exact and numerical solutions.
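A short script along these lines (a sketch, not part of the notes) reproduces the iterates y_k = (1 + h)^k and compares them with e^{t_k}:

import numpy as np

h, N = 0.1, 10
t = h * np.arange(N + 1)
y = np.zeros(N + 1)
y[0] = 1.0
for k in range(N):
    y[k + 1] = y[k] + h * y[k]          # Euler step for f(t, y) = y

print(np.allclose(y, (1 + h) ** np.arange(N + 1)))   # iterates are (1+h)^k
print(np.abs(np.exp(t) - y).max())                   # global error, O(h)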

Error analysis for Euler's method

The local error is defined by

τ(t_k) = y(t_k) - y_k,

where y_k is the approximation obtained from one Euler step starting from the exact value:

y_k = y(t_{k-1}) + h f(t_{k-1}, y(t_{k-1})).

Subtract

y_1 = y_0 + h f(t_0, y_0)

from (1.1.5) with p = 1 to obtain

τ(t_1) = y(t_1) - y_1 = (h^2/2) y''(c),

which can be bounded as

|τ(t_1)| ≤ (h^2/2) M_2,

where M_2 = ||y''||_{∞,[t_0,T]}.

The global discretization error is defined as

e_{k+1} = y(t_{k+1}) - y_{k+1},

where y_{k+1} is the approximation of y(t_{k+1}) obtained from Euler's method.


Next, we state a theorem on the global discretization error for Euler's method

Theorem 1.2.1. Let y(t) be the solution of the initial value problem (1.1.3). If f(t, y) is Lipschitz continuous with respect to y and if ||y''||_{∞,[t_0,T]} ≤ M_2, then the global discretization error can be bounded as

|e_{k+1}| ≤ (h M_2 / (2L)) (e^{L(t_{k+1} - t_0)} - 1). (1.1.12)

Proof. Subtracting (1.1.5) from

y_{k+1} = y_k + h f(t_k, y_k)

leads to

e_{k+1} = e_k + h [f(t_k, y(t_k)) - f(t_k, y_k)] + h^2 y''(c)/2.

Using the hypotheses of the theorem we have

|e_{k+1}| ≤ |e_k| + hL|e_k| + h^2 M_2/2 = (1 + hL)|e_k| + h^2 M_2/2.

Using this recursive inequality we show that

|e_{k+1}| ≤ (1 + hL)^2 |e_{k-1}| + (1 + hL) h^2 M_2/2 + h^2 M_2/2.

Hence the global error satisfies

|e_{k+1}| ≤ (1 + hL)^{k+1} |e_0| + (h^2 M_2/2) [1 + (1 + hL) + (1 + hL)^2 + ... + (1 + hL)^k].

We can assume e_0 = 0 and use the geometric series

a + ar + ar^2 + ... + ar^n = a (r^{n+1} - 1)/(r - 1)

to obtain


|e_{k+1}| ≤ (h M_2/(2L)) [(1 + hL)^{k+1} - 1].

From calculus we know that 1 + x < e^x for x > 0; using x = hL in the previous inequality we obtain

|e_{k+1}| ≤ (h M_2/(2L)) (e^{Lh(k+1)} - 1).

Noting that h(k + 1) = t_{k+1} - t_0, we complete the proof of the theorem.

Remarks:

1. The local error is O(h^2).

2. The global error is O(h).

Taylor method of order 2:

Consider the linear problem:

y' = a y, y(0) = y_0, y(t) = y_0 e^{at},

y'' = a^2 y.

Taylor method of order 2 is given as

y_0 is given, (1.1.13)

y_{k+1} = y_k + h a y_k + (a^2 h^2/2) y_k = (1 + ah + a^2 h^2/2) y_k, (1.1.14)

y_{k+1} = (1 + ah + a^2 h^2/2)^{k+1} y_0. (1.1.15)

Show numerical results for Taylor methods of order p = 1, 2, 3, 4 and compare convergence rates.
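For the linear problem y' = ay the order-p Taylor update is y_{k+1} = [Σ_{j=0}^{p} (ah)^j / j!] y_k, so the convergence rates can be checked with a few lines of Python (a sketch; the data a = 1, y_0 = 1 on [0, 1] are my choices for illustration):

import numpy as np
from math import factorial

a, y0, T = 1.0, 1.0, 1.0
for p in (1, 2, 3, 4):
    errors = []
    for N in (10, 20, 40, 80):
        h = T / N
        growth = sum((a * h) ** j / factorial(j) for j in range(p + 1))
        yN = y0 * growth ** N                 # N steps of the order-p Taylor method
        errors.append(abs(y0 * np.exp(a * T) - yN))
    rates = np.log2(np.array(errors[:-1]) / np.array(errors[1:]))
    print(f"p = {p}: observed rates {np.round(rates, 2)}")   # should approach p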


1.2.2 Runge-Kutta Methods

The main advantage of Taylor methods is that they achieve high, O(h^p), accuracy. On the other hand, they require high-order derivatives of f(t, y) and a large number of function evaluations.

In this section we introduce a class of methods that use only values of f(t, y), with a smaller number of function evaluations, while keeping the high-order precision of Taylor methods.

Second-order Runge-Kutta methods

We would like to construct a method of the form

y^RK_{k+1} = y_k + c_1 f(t_k + α, y_k + β)

that has the same order as the Taylor method

y^T_{k+1} = y_k + h f(t_k, y_k) + (h^2/2)[f_t(t_k, y_k) + f_y(t_k, y_k) f(t_k, y_k)].

Our aim is to find c_1, α and β such that

y^RK_{k+1} - y^T_{k+1} = O(h^3).

Use a Taylor expansion in two variables to write

f(t_k + α, y_k + β) = f(t_k, y_k) + α f_t(t_k, y_k) + β f_y(t_k, y_k) + O(α^2) + O(β^2).

This leads to

y^RK_{k+1} = y_k + c_1 f(t_k, y_k) + c_1 α f_t(t_k, y_k) + c_1 β f_y(t_k, y_k) + ...

Matching the coefficients with those of the second-order Taylor method we obtain

c_1 = h, (1.1.16)
c_1 α = h^2/2, (1.1.17)
c_1 β = (h^2/2) f(t_k, y_k). (1.1.18)


This in turn leads to

c_1 = h, (1.1.19)
α = h/2, (1.1.20)
β = (h/2) f(t_k, y_k). (1.1.21)

This yields the Midpoint method:

y_0 given, (1.1.22)

y_{k+1} = y_k + h f(t_k + h/2, y_k + (h/2) f(t_k, y_k)). (1.1.23)

The global error of the midpoint method is O(h^2).

We also note that the difference y^T - y^RK = O(c_1(α^2 + β^2)) = O(h^3).

Other second-order Runge-Kutta methods have the form

y_{k+1} = y_k + c_1 f(t_k, y_k) + c_2 f(t_k + α, y_k + δ f(t_k, y_k)).

Applying Taylor series in two dimensions and matching the coefficients with the second-order Taylor method leads to a family of methods. Here we give Heun's method.

Heun's Method: with c_1 = h/4, c_2 = 3h/4, α = δ = 2h/3:

y_0 known,

y_{k+1} = y_k + (h/4) [f(t_k, y_k) + 3 f(t_k + 2h/3, y_k + (2h/3) f(t_k, y_k))]. (1.1.24)

We note that Heun's method uses two function evaluations while the corresponding Taylor method uses three function evaluations.
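For reference, a Python sketch of one step of each second-order method (function names are mine):

def midpoint_step(f, t, y, h):
    # y_{k+1} = y_k + h f(t_k + h/2, y_k + (h/2) f(t_k, y_k))  -- (1.1.23)
    return y + h * f(t + h / 2, y + (h / 2) * f(t, y))

def heun_step(f, t, y, h):
    # y_{k+1} = y_k + (h/4) [f_k + 3 f(t_k + 2h/3, y_k + (2h/3) f_k)]  -- (1.1.24)
    fk = f(t, y)
    return y + (h / 4) * (fk + 3 * f(t + 2 * h / 3, y + (2 * h / 3) * fk))

f = lambda t, y: y
print(midpoint_step(f, 0.0, 1.0, 0.1), heun_step(f, 0.0, 1.0, 0.1))   # both ≈ e^0.1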


Example:

y' = t y + e^t (1 + t - t^2) = f(t, y), 1 < t < 3, (1.1.25)

y(1) = e = y_0, (1.1.26)

where the exact solution is y(t) = t e^t.

Let us use h = 0.2 and t_0 = 1, t_k = 1 + k · 0.2, k > 0.

y_1 = y_0 + (h/4) [f(t_0, y_0) + 3 f(t_0 + 2h/3, y_0 + (2h/3) f(t_0, y_0))]

y_0 = 2.71828, y_1 = 3.97094, y_2 = 5.64005, y_3 = 7.84479.

General Form of Runge-Kutta Methods:

The general explicit s-stage Runge-Kutta method has the form:

K_1 = f(t_k, y_k),

K_i = f(t_k + α_i h, y_k + h Σ_{j=1}^{i-1} β_{ij} K_j), i = 2, ..., s,

y_{k+1} = y_k + h Σ_{i=1}^{s} c_i K_i. (1.1.27)

It is also convenient to write the coefficients in a table:

0
α_2   β_21
...
α_s   β_s1   ...   β_{s,s-1}

      c_1    ...   c_{s-1}   c_s


We note that Σ_{i=1}^{s} c_i = 1.

Examples

Second-Order methods:

Midpoint Method

0
1/2      1/2

         0        1

Heun's Method

0
2/3      2/3

         1/4      3/4

A third-order method:

3rd order RK

0
1/2      1/2
1        0        1
1        0        0        1

         1/6      2/3      0        1/6

The classical Runge-Kutta method with O(h^4) global error:

This method is derived by applying Taylor series in two dimensions and matching coefficients with those of the fourth-order Taylor method. The method obtained is the four-stage method


y_0 known,

k_1 = f(t_k, y_k),
k_2 = f(t_k + h/2, y_k + (h/2) k_1),
k_3 = f(t_k + h/2, y_k + (h/2) k_2),
k_4 = f(t_k + h, y_k + h k_3),

y_{k+1} = y_k + (h/6)(k_1 + 2k_2 + 2k_3 + k_4), k = 0, 1, 2, ... (1.1.28)

Classical Fourth-order RK

0
1/2      1/2
1/2      0        1/2
1        0        0        1

         1/6      2/6      2/6      1/6

Example:

y' = y + 2t, 1 < t < 2, y(1) = 1, f(t, y) = y + 2t.

Let us use h = 0.1, t_k = 1 + k · 0.1, k = 0, 1, 2, ...

y_0 = 1,

k_1 = f(1, 1) = 3,
k_2 = f(1 + 0.1/2, 1 + (0.1/2) k_1) = 3.25,
k_3 = f(1 + 0.1/2, 1 + (0.1/2) k_2) = 3.2625,
k_4 = f(1 + 0.1, 1 + 0.1 k_3) = 3.52625,

y_1 = 1 + (0.1/6)(k_1 + 2k_2 + 2k_3 + k_4) = 1.325854. (1.1.29)
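The computation above can be checked directly with a small Python sketch (assuming the right-hand side f(t, y) = y + 2t):

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: y + 2 * t
print(rk4_step(f, 1.0, 1.0, 0.1))   # 1.3258541666..., matching (1.1.29)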


1.2.3 Error Estimation and Control

I: Embedded Runge-Kutta Methods

Embedded RK methods produce a numerical solution y_1 together with a higher-order, more accurate approximation ŷ_1. These two approximations serve for error estimation and stepsize control in an adaptive algorithm. Several embedded RK methods are described in the book by Hairer, Nørsett and Wanner.

(i) Runge-Kutta-Fehlberg 2(3) Method

0
1        1
1/2      1/4      1/4

y_1      1/2      1/2      0
ŷ_1      1/6      1/6      4/6

(ii) Runge-Kutta-Fehlberg 4(5) Method

0
1/4      1/4
3/8      3/32         9/32
12/13    1932/2197    -7200/2197    7296/2197
1        439/216      -8            3680/513      -845/4104
1/2      -8/27        2             -3544/2565    1859/4104     -11/40

y_1      25/216       0             1408/2565     2197/4104     -1/5       0
ŷ_1      16/135       0             6656/12825    28561/56430   -9/50      2/55


This method yields an O(h^4) Runge-Kutta solution y_1 and an O(h^5) solution ŷ_1 that are combined for error estimation and stepsize control in adaptive codes.

We note that in all embedded Runge-Kutta methods we use y_k to compute

y_{k+1} = y_k + h Σ_{i=1}^{s} c_i K_i(t_k, h, y_k) (1.1.30)

and

ŷ_{k+1} = y_k + h Σ_{i=1}^{s} ĉ_i K_i(t_k, h, y_k). (1.1.31)

In the literature there are other embedded RK methods, such as Merson 4(5) and RKF 2(3). An embedded RK method that minimizes the error is given by Dormand-Prince 5(4), on which MATLAB's adaptive ode45 function is based; for more details consult the book by Hairer, Nørsett and Wanner.

II: Richardson Extrapolation

We may use Richardson extrapolation to obtain higher-order methods. Assume we have an O(h^p) method and solve the problem using one step of size h to obtain

y(t_{k+1}) - y^h_{k+1} = c h^p + O(h^{p+1}), (1.1.32)

and with two steps of size h/2 to obtain a new approximation of y(t_{k+1}) as

y(t_{k+1}) - y^{h/2}_{k+1} = c (h/2)^p + O(h^{p+1}). (1.1.33)

Multiplying (1.1.33) by 2^p and subtracting (1.1.32) leads to

2^p y(t_{k+1}) - y(t_{k+1}) - 2^p y^{h/2}_{k+1} + y^h_{k+1} = O(h^{p+1}),

which can be written as

y(t_{k+1}) = (2^p y^{h/2}_{k+1} - y^h_{k+1}) / (2^p - 1) + O(h^{p+1}).

Therefore,

y_{k+1} = (2^p y^{h/2}_{k+1} - y^h_{k+1}) / (2^p - 1)

is a higher-order, i.e., O(h^{p+1}), approximation to y(t_{k+1}).
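A minimal sketch of this extrapolation for one Euler step (p = 1); the problem y' = y, y(0) = 1 is my choice for illustration:

import math

def euler(f, t, y, h, steps):
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: y                        # exact solution e^t
h, p = 0.1, 1                             # Euler's method, p = 1
y_h  = euler(f, 0.0, 1.0, h, 1)           # one step of size h
y_h2 = euler(f, 0.0, 1.0, h / 2, 2)       # two steps of size h/2
y_ext = (2**p * y_h2 - y_h) / (2**p - 1)  # Richardson extrapolation
print(abs(math.exp(h) - y_h), abs(math.exp(h) - y_ext))   # extrapolated value is one order more accurate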

III: Error Control and Stepsize Selection

Adaptive methods can be made very efficient and reliable by using automatic stepsize selection: the stepsize depends on the solution and is adjusted automatically. An adaptive algorithm should use smaller stepsizes when the error becomes large and larger stepsizes in regions of smaller errors.

Embedded Runge-Kutta methods and Richardson extrapolation provide an estimate of the error as described below. Apply an embedded Runge-Kutta method, or Richardson extrapolation with double stepping, to obtain two approximations y_k and ŷ_k to y(t_k) such that

y(t_k) = y_k + c h^{p+1} + ...,

and

y(t_k) = ŷ_k + d h^{p+2} + ...

Subtracting the two equations we obtain

errest = ŷ_k - y_k = c h^{p+1} - d h^{p+2} + ... ≈ c h^{p+1}, (1.1.34)

which can be used to select the next stepsize, H, such that the discretization error is less than a prescribed tolerance tol, with a safety factor.

Assume c to be the same for all stepsizes; then the targeted error can be written as

targeterror = c H^{p+1}.

Using (1.1.34) we write

c = errest / h^{p+1}.


Thus, the targeted error is

targeterror = (errest / h^{p+1}) H^{p+1}.

Setting targeterror < tol, we obtain

c H^{p+1} = (errest / h^{p+1}) H^{p+1} < tol. (1.1.35)

Now, we solve for the new stepsize H to obtain

H = safetyfactor · h · (tol/errest)^{1/(p+1)}. (1.1.36)

The safety factor is selected to be 0.25^{1/(p+1)}.

An adaptive algorithm for an embedded RKp(p+1)

Step 0: read t_0, T, h, y_0, p, safetyfactor, tol
Step 1: if t_0 + h > T, stop
Step 2: compute y_1 and ŷ_1
Step 3: errest = |ŷ_1 - y_1|
Step 4: if errest < tol, accept the step: t_0 = t_0 + h, y_0 = ŷ_1
Step 5: if tol/200 < errest < tol, go to Step 1 (keep the current h)
Step 6: compute a new h according to h = safetyfactor · h · (tol/errest)^{1/(p+1)}
Step 7: go to Step 1
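A minimal Python sketch of this algorithm, using the simple embedded Euler/Heun pair of orders 1(2) instead of a particular RKF tableau (the pair and all names are my choices):

def adaptive_solve(f, t0, T, y0, h, tol):
    p = 1                                    # order of the low-order solution (Euler)
    safety = 0.25 ** (1.0 / (p + 1))         # safety factor suggested in the notes
    ts, ys = [t0], [y0]
    while t0 + h <= T:
        k1 = f(t0, y0)
        k2 = f(t0 + h, y0 + h * k1)
        y1 = y0 + h * k1                     # Euler solution, O(h)
        yhat1 = y0 + h / 2 * (k1 + k2)       # Heun (trapezoidal RK) solution, O(h^2)
        errest = abs(yhat1 - y1)
        if errest < tol:                     # accept the step
            t0, y0 = t0 + h, yhat1
            ts.append(t0)
            ys.append(y0)
        # new stepsize (1.1.36), used after both accepted and rejected steps
        h = safety * h * (tol / max(errest, 1e-14)) ** (1.0 / (p + 1))
    return ts, ys

ts, ys = adaptive_solve(lambda t, y: -y, 0.0, 5.0, 1.0, 0.1, 1e-4)
print(len(ts), ys[-1])                       # ys[-1] should be close to exp(-5)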

1.2.4 Systems and higher-order differential equations

Example:

x'(t) = 2y + x + t, (1.1.37)

y'(t) = sin(x) + exp(t), (1.1.38)


subject to the initial conditions x(0) = x0 and y(0) = y0.

We use the following vector notation:

Y = (x(t), y(t))^t, F(t, Y) = (f_1(t, Y), f_2(t, Y))^t = (2y_2 + y_1 + t, sin(y_1) + exp(t))^t,

and write our system (1.1.37) as

Y'(t) = F(t, Y), t > 0, Y(0) = Y_0 = (x_0, y_0)^t.

Runge-Kutta methods can be used with vector notation

Euler's method:

Y_0 known,

Y_{k+1} = Y_k + h F(t_k, Y_k), k = 0, 1, 2, ... (1.1.39)

Heun's method:

Y_0 known,

Y_{k+1} = Y_k + (h/4) [F(t_k, Y_k) + 3 F(t_k + 2h/3, Y_k + (2h/3) F(t_k, Y_k))], k = 0, 1, ... (1.1.40)

Classical Runge-Kutta Method:


Y_0 given,

K_1 = F(t_k, Y_k),
K_2 = F(t_k + h/2, Y_k + (h/2) K_1),
K_3 = F(t_k + h/2, Y_k + (h/2) K_2),
K_4 = F(t_k + h, Y_k + h K_3),

Y_{k+1} = Y_k + (h/6)(K_1 + 2K_2 + 2K_3 + K_4), k = 0, 1, 2, ... (1.1.41)

Higher-order differential equations:

x^(m) + a_{m-1} x^(m-1) + ... + a_2 x''(t) + a_1 x'(t) + a_0 x(t) = g(t), (1.1.42)

with initial conditions

x^(k)(0) = x_k, k = 0, 1, 2, ..., m - 1.

We transform equation (1.1.42) into a system of first-order ordinary differential equations using the mapping

Y_{k+1}(t) = x^(k)(t), k = 0, 1, 2, ..., m - 1,

and noting that

Y'_k(t) = x^(k)(t) = Y_{k+1}(t) = F_k(t, Y), k = 1, 2, ..., m - 1,

and

Y'_m(t) = x^(m)(t) = g(t) - Σ_{i=0}^{m-1} a_i Y_{i+1} = F_m(t, Y).

Now using vector notation we write the system as

Y'(t) = F(t, Y),

subject to the initial conditions

Y(0) = Y_0 = (x_0, x_1, ..., x_{m-1})^t.
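A Python sketch of this transformation for the constant-coefficient equation (1.1.42); it returns a right-hand side F(t, Y) that any of the vector methods above can consume (function names are mine):

import numpy as np

def make_first_order_system(a, g):
    # a = [a_0, ..., a_{m-1}], so x^(m) + a_{m-1} x^(m-1) + ... + a_0 x = g(t).
    # Returns F(t, Y) with Y = (x, x', ..., x^(m-1)).
    m = len(a)
    def F(t, Y):
        dY = np.empty(m)
        dY[:-1] = Y[1:]                  # Y_k' = Y_{k+1}
        dY[-1] = g(t) - np.dot(a, Y)     # x^(m) = g - sum_i a_i x^(i)
        return dY
    return F

# Example: x'' + x = 0 (a_0 = 1, a_1 = 0), exact x = cos t; Euler for brevity
F = make_first_order_system([1.0, 0.0], lambda t: 0.0)
Y, h = np.array([1.0, 0.0]), 0.001
for k in range(1000):
    Y = Y + h * F(k * h, Y)
print(Y[0], np.cos(1.0))   # crude agreement at t = 1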

Example 1: The pendulum motion is described by the nonlinear second-order differential equation

θ'' + (g/L) sin(θ) = 0, t ≥ 0, θ(0) = x_0, θ'(0) = x_1, (1.1.43)

where g denotes the gravitational acceleration and L the length of the pendulum.

Example 2: The vibration of an elastic spring is modeled by the linear second-order equation

m x'' + k x = 0, t ≥ 0, x(0) = x_0, x'(0) = x_1. (1.1.44)

Example 3: The vibration of a two-mass-three-spring system,

|| --Spring(k_1)-- (m_1) --Spring(k)-- (m_2) --Spring(k_2)-- ||,

is modeled by the system of second-order differential equations

m_1 x_1'' + (k + k_1) x_1 - k x_2 = 0,
m_2 x_2'' + (k + k_2) x_2 - k x_1 = 0.

Example 4: The Van der Pol problem is given by the nonlinear second-order differential equation

x'' + ε(x^2 - 1) x' + x = 0, t ≥ 0, x(0) = x_0, x'(0) = x_1. (1.1.45)

Example 5: Method of lines for the one-dimensional heat equation, where the temperature T(t, x) satisfies the partial differential equation

T_t(t, x) = T_xx(t, x), 0 < x < 1, t > 0, (1.1.46a)

subject to the initial and boundary conditions

T(0, x) = f(x), (1.1.46b)


and

T(t, 0) = T(t, 1) = 0, t ≥ 0. (1.1.46c)

Next, we subdivide the interval [0, 1] into N subintervals and define x_i = i·h, h = 1/N. The heat equation at the point (t, x_i), i = 1, 2, ..., N - 1,

T_t(t, x_i) = T_xx(t, x_i), i = 1, 2, ..., N - 1,

can be written as

T_t(t, x_i) = [T(t, x_{i-1}) - 2 T(t, x_i) + T(t, x_{i+1})] / h^2 + O(h^2).

Now, we neglect the truncation error and let

T_i(t) ≈ T(t, x_i)

to obtain the following system of ordinary differential equations:

dT_i(t)/dt = [T_{i-1}(t) - 2 T_i(t) + T_{i+1}(t)] / h^2, i = 1, 2, ..., N - 1.

Since T_0(t) = T_N(t) = 0, t ≥ 0, we write the system as

T_1'(t) = [-2 T_1(t) + T_2(t)] / h^2,

T_i'(t) = [T_{i-1}(t) - 2 T_i(t) + T_{i+1}(t)] / h^2, i = 2, ..., N - 2,

T_{N-1}'(t) = [T_{N-2}(t) - 2 T_{N-1}(t)] / h^2. (1.1.47a)

The initial conditions for the system of ordinary differential equations are obtained from f(x) as

T_i(0) = T(0, x_i) = f(x_i), i = 1, 2, ..., N - 1. (1.1.47b)
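A Python sketch of the method-of-lines system (1.1.47), integrated here with the classical Runge-Kutta method; the initial profile f(x) = sin(πx), whose exact solution is e^{-π²t} sin(πx), is my choice for illustration:

import numpy as np

N = 20
h = 1.0 / N
x = h * np.arange(1, N)                 # interior nodes x_1, ..., x_{N-1}
T = np.sin(np.pi * x)                   # T_i(0) = f(x_i)

def F(t, T):
    # (T_{i-1} - 2 T_i + T_{i+1}) / h^2 with boundary values T_0 = T_N = 0
    Tp = np.concatenate(([0.0], T, [0.0]))
    return (Tp[:-2] - 2 * Tp[1:-1] + Tp[2:]) / h**2

dt, t = 1e-4, 0.0                       # dt must respect the stiffness, ~ h^2
for _ in range(1000):                   # integrate to t = 0.1 with RK4
    k1 = F(t, T); k2 = F(t + dt/2, T + dt/2 * k1)
    k3 = F(t + dt/2, T + dt/2 * k2); k4 = F(t + dt, T + dt * k3)
    T = T + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += dt

print(np.abs(T - np.exp(-np.pi**2 * t) * np.sin(np.pi * x)).max())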


1.3 Multi-step Methods

A Runge-Kutta method of order m requires at least m function evaluations per step. In this section, we study explicit and implicit methods that require only one new function evaluation per step. However, these methods require the solution at several previous time steps.

1.3.1 Adams-Bashforth Methods

We integrate the differential equation (1.1.3) on [t_k, t_{k+1}] to obtain

∫_{t_k}^{t_{k+1}} y'(t) dt = ∫_{t_k}^{t_{k+1}} f(t, y(t)) dt. (1.1.48)

We interpolate f(t, y(t)) at t_{k-n}, t_{k-n+1}, ..., t_k to write

f(t, y(t)) = Σ_{i=k-n}^{k} L_i(t) f(t_i, y(t_i)) + (y^(n+2)(ξ)/(n+1)!) Π_{i=k-n}^{k} (t - t_i),

where the L_i are the Lagrange interpolation polynomials.

Next, we approximate equation (1.1.48) as

y(t_{k+1}) = y(t_k) + Σ_{i=0}^{n} c_{k-i} f(t_{k-i}, y(t_{k-i})) + C y^(n+2)(ξ), (1.1.49)

where

c_{k-i} = ∫_{t_k}^{t_{k+1}} L_{k-i}(t) dt, C = [∫_{t_k}^{t_{k+1}} Π_{i=k-n}^{k} (t - t_i) dt] / (n+1)!.

We used the fact that Π_{i=k-n}^{k} (t - t_i) does not change sign on [t_k, t_{k+1}].

If we assume t_k = t_0 + kh, k = 0, 1, 2, ..., we obtain the following methods.

A few Adams-Bashforth methods:

26 CHAPTER 1. NUMERICAL METHODS FOR ODES

n = 0, O(h), Euler's method (local error = (h^2/2) y''(ξ)):

y_{k+1} = y_k + h f_k, k = 0, 1, 2, ... (1.1.50)

n = 1, O(h^2) (local error = (5/12) h^3 y'''(ξ)):

y_k, y_{k-1} known,

y_{k+1} = y_k + h [(3/2) f_k - (1/2) f_{k-1}], k = 1, 2, ... (1.1.51)

n = 2, O(h^3) (local error = (3/8) h^4 y^(4)(ξ)):

y_k, y_{k-1}, y_{k-2} known,

y_{k+1} = y_k + h [(23/12) f_k - (16/12) f_{k-1} + (5/12) f_{k-2}], k = 2, 3, ... (1.1.52)

n = 3, O(h^4):

y_{k-3}, y_{k-2}, y_{k-1}, y_k known,

y_{k+1} = y_k + h [(55/24) f_k - (59/24) f_{k-1} + (37/24) f_{k-2} - (9/24) f_{k-3}], k = 3, 4, ... (1.1.53)

where

f_l = f(t_l, y_l), l = k - 3, k - 2, k - 1, k.

The local discretization error for n = 3 is

τ = (251/720) h^5 y^(5)(ξ).
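A Python sketch of the second-order method (1.1.51); since it needs two starting values, y_1 is generated with one second-order one-step (Heun-type) step:

import numpy as np

def ab2(f, t0, T, y0, N):
    h = (T - t0) / N
    t = t0 + h * np.arange(N + 1)
    y = np.zeros(N + 1)
    y[0] = y0
    # start-up: one second-order (trapezoidal RK) step for y_1
    k1 = f(t[0], y[0])
    y[1] = y[0] + h / 2 * (k1 + f(t[0] + h, y[0] + h * k1))
    for k in range(1, N):
        y[k + 1] = y[k] + h * (1.5 * f(t[k], y[k]) - 0.5 * f(t[k - 1], y[k - 1]))
    return t, y

t, y = ab2(lambda t, y: y, 0.0, 1.0, 1.0, 20)
print(abs(y[-1] - np.e))   # O(h^2) global error, one new f-evaluation per step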


1.3.2 Adams-Moulton Methods

Adams-Moulton methods are obtained from (1.1.48) by interpolating f(t, y(t)) at the n + 1 points t_{k-n+1}, ..., t_{k-1}, t_k, t_{k+1} and writing

f(t, y(t)) = Σ_{i=k-n+1}^{k+1} L_i(t) f(t_i, y(t_i)) + Error.

Substituting this into (1.1.48) we obtain the general form of Adams-Moulton methods:

y_{k+1} = y_k + Σ_{i=k-n+1}^{k+1} c_i f(t_i, y_i). (1.1.54)

If y_i, i = k - n + 1, ..., k, are known, we solve (1.1.54) for y_{k+1}.

These methods are implicit and more stable than the (explicit) Adams-Bashforth methods of the previous section.

A few Adams-Moulton methods:

n = 0, O(h) (implicit Euler method) (local error = -(h^2/2) y''(ξ)):

y_{k+1} = y_k + h f_{k+1}, k = 0, 1, 2, ... (1.1.55)

n = 1, O(h^2) (trapezoidal rule) (local error = -(h^3/12) y'''(ξ)):

y_{k+1} = y_k + h [(1/2) f_{k+1} + (1/2) f_k], k = 0, 1, 2, ... (1.1.56)

n = 2, O(h^3) (local error = -(h^4/24) y^(4)(ξ)):

y_{k+1} = y_k + h [(5/12) f_{k+1} + (8/12) f_k - (1/12) f_{k-1}], k = 1, 2, ... (1.1.57)

28 CHAPTER 1. NUMERICAL METHODS FOR ODES

n = 3, O(h^4):

y_{k+1} = y_k + h [(9/24) f_{k+1} + (19/24) f_k - (5/24) f_{k-1} + (1/24) f_{k-2}], k = 2, 3, ... (1.1.58)

The local discretization error for n = 3 is

τ(t_{k+1}) = y(t_{k+1}) - y_{k+1} = -(19/720) h^5 y^(5)(ξ).

1.3.3 Predictor-Corrector Methods

Explicit and implicit Adams methods may be used as predictor-corrector pairs, as illustrated in the following examples.

Example 1:

We consider a two-step predictor-corrector method where

Predictor, using second-order Adams-Bashforth:

ỹ_{k+1} = y_k + (h/2)(3 f_k - f_{k-1}); (1.1.59)

Corrector, using second-order Adams-Moulton:

y_{k+1} = y_k + (h/2)(f̃_{k+1} + f_k), (1.1.60)

where f̃_{k+1} = f(t_{k+1}, ỹ_{k+1}).

Example 2

Predictor, using fourth-order Adams-Bashforth:

ỹ_{k+1} = y_k + h [(55/24) f_k - (59/24) f_{k-1} + (37/24) f_{k-2} - (9/24) f_{k-3}]; (1.1.61)


Corrector, using fourth-order Adams-Moulton:

y_{k+1} = y_k + h [(9/24) f̃_{k+1} + (19/24) f_k - (5/24) f_{k-1} + (1/24) f_{k-2}], (1.1.62)

where f̃_{k+1} = f(t_{k+1}, ỹ_{k+1}).
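A Python sketch of the second-order pair (1.1.59)-(1.1.60); as before, the starting value y_1 is generated by a one-step method:

import math

def pc2(f, t0, T, y0, N):
    h = (T - t0) / N
    t = [t0 + k * h for k in range(N + 1)]
    y = [y0] * (N + 1)
    k1 = f(t[0], y[0])                        # Heun start-up step for y_1
    y[1] = y[0] + h / 2 * (k1 + f(t[1], y[0] + h * k1))
    for k in range(1, N):
        ytilde = y[k] + h / 2 * (3 * f(t[k], y[k]) - f(t[k - 1], y[k - 1]))   # predictor (1.1.59)
        y[k + 1] = y[k] + h / 2 * (f(t[k + 1], ytilde) + f(t[k], y[k]))       # corrector (1.1.60)
    return t, y

t, y = pc2(lambda t, y: -y, 0.0, 1.0, 1.0, 20)
print(abs(y[-1] - math.exp(-1.0)))            # O(h^2) global error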

Remarks:

1. In general, Adams-Moulton methods yield more accurate solutions than Adams-Bashforth methods. A price has to be paid for this: a nonlinear algebraic problem must be solved at each step, usually by the Newton-Raphson method.

2. Adams-Bashforth methods require one function evaluation per step and may be superior to RK methods when function evaluations are expensive.

3. A one-step method of the same order is needed to generate the starting values.

4. Adams methods extend to systems of ordinary differential equations using vector notation.

5. To avoid solving an algebraic problem, Adams-Bashforth and Adams-Moulton methods may be used in pairs as predictor-corrector.

1.3.4 Methods Based on Backward Difference Formulas (BDF)

Backward Difference Formula (BDF) methods are obtained by interpolating y(t) instead of f(t, y(t)). BDF methods up to order six are implicit and have excellent stability properties (the first- and second-order BDF methods are A-stable), which makes them suitable for stiff problems, where the stability restriction for some Adams methods requires a much smaller time step than what is required by accuracy.

Now let us consider the model problem


y'(t) = f(t, y(t)), y(0) = y_0.

Assume y_0, y_1, ..., y_k are known (they may be computed using a one-step method) and write

y'(t_{k+1}) = f(t_{k+1}, y(t_{k+1})).

Using the backward difference formula

y'(t_{k+1}) = [y(t_{k+1}) - y(t_k)] / h + (h/2) y''(ξ),

we obtain

(y_{k+1} - y_k) / h = f(t_{k+1}, y_{k+1}).

We note that y_{k+1} is defined implicitly. Again, this requires the solution of a nonlinear algebraic equation at each step; usually, Newton's method is used to solve it.
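A sketch of one backward Euler step in Python, solving the nonlinear equation y_{k+1} - y_k - h f(t_{k+1}, y_{k+1}) = 0 by Newton's method; the partial derivative fy is assumed to be supplied by the user:

def backward_euler_step(f, fy, t_new, y_old, h, tol=1e-12, maxit=20):
    y = y_old + h * f(t_new, y_old)          # predictor: explicit Euler guess
    for _ in range(maxit):
        g = y - y_old - h * f(t_new, y)      # residual g(y) = 0 defines y_{k+1}
        dg = 1.0 - h * fy(t_new, y)          # g'(y)
        step = g / dg
        y -= step                            # Newton update
        if abs(step) < tol:
            break
    return y

# Stiff test: y' = -200 y; backward Euler is stable for any h > 0
y, h = 1.0, 0.1
for k in range(10):
    y = backward_euler_step(lambda t, y: -200 * y, lambda t, y: -200.0, (k + 1) * h, y, h)
print(y)   # decays monotonically, as the exact solution does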

Higher-order BDF methods are derived by interpolating y(t) at t_{k+1}, t_k, ..., t_{k+1-n}, n = 1, 2, ..., to obtain the following methods with uniform time step h.

n = 1, O(h):

y_{k+1} - y_k = h f_{k+1}, k = 0, 1, ... (1.1.63)

n = 2, O(h^2):

(3/2) y_{k+1} - 2 y_k + (1/2) y_{k-1} = h f_{k+1}, k = 1, 2, ... (1.1.64)

n = 3, O(h^3):

(11/6) y_{k+1} - 3 y_k + (3/2) y_{k-1} - (1/3) y_{k-2} = h f_{k+1}. (1.1.65)

n = 4, O(h^4):

(25/12) y_{k+1} - 4 y_k + 3 y_{k-1} - (4/3) y_{k-2} + (1/4) y_{k-3} = h f_{k+1}. (1.1.66)

n = 5, O(h^5):

(137/60) y_{k+1} - 5 y_k + 5 y_{k-1} - (10/3) y_{k-2} + (5/4) y_{k-3} - (1/5) y_{k-4} = h f_{k+1}. (1.1.67)

n = 6, O(h^6):

(147/60) y_{k+1} - 6 y_k + (15/2) y_{k-1} - (20/3) y_{k-2} + (15/4) y_{k-3} - (6/5) y_{k-4} + (1/6) y_{k-5} = h f_{k+1}. (1.1.68)

Remarks:

We note that BDF methods

1. exhibit good stability properties for stiff problems;

2. are very efficient for stiff problems;

3. require the solution of an algebraic problem at each time step.

1.4 Consistency, Stability and Convergence

1.4.1 Basic notions and de�nitions

Consistency:


Definition 1. A one-step numerical method defined by

y_{k+1} = y_k + h Φ(t_k, y_k, h), k = 0, 1, 2, ..., (1.1.69)

with truncation error

τ_{k+1}(h) = [y(t_{k+1}) - y(t_k)] / h - Φ(t_k, y(t_k), h), k = 0, 1, 2, ..., N, (1.1.70)

is consistent if and only if

lim_{h→0} max_{0≤k≤N} |τ_k(h)| = 0. (1.1.71)

Definition 2. The method (1.1.69) is O(h^p) consistent if and only if

max_{0≤k≤N} |τ_k(h)| ≤ C h^p, as h → 0. (1.1.72)

Definition 3. The method (1.1.69) is stable if and only if there exists C > 0, independent of h, such that

max_{0≤k≤N} |w_k - y_k| ≤ C (|w_0 - y_0| + max_{0≤k≤N} |Φ(t_k, y_k, h) - Φ(t_k, w_k, h)|), for h < h_0, (1.1.73)

where y_k, k = 0, 1, 2, ..., N, and w_k, k = 0, 1, 2, ..., N, are approximations given by (1.1.69).

Definition 4. The numerical solution given by (1.1.69) is convergent if and only if

lim_{h→0} max_{0≤k≤N} |y_k - y(t_k)| = 0. (1.1.74)

Using the strong stability condition (1.1.73) we can prove the following theorem.

Theorem 1.4.1. The numerical method (1.1.69) converges if and only if it is stable and consistent.

In the next theorem we show that Lipschitz continuity of Φ with respect to y is sufficient for stability.

1.4. CONSISTENCY, STABILITY AND CONVERGENCE 33

Theorem 1.4.2. If the numerical method (1.1.69) is O(h^p) consistent and Φ(t, y, h) is Lipschitz continuous, i.e., there exists L > 0 such that

|Φ(t, w, h) - Φ(t, z, h)| ≤ L |w - z|,

then

|y_{k+1} - y(t_{k+1})| ≤ (C h^p / L)(e^{L(t_{k+1} - t_0)} - 1). (1.1.75)

Proof. Assume the method (1.1.69) is O(h^p) consistent and write

y(t_{k+1}) = y(t_k) + h Φ(t_k, y(t_k), h) + O(h^{p+1}). (1.1.76)

Subtracting (1.1.69) we obtain

y(t_{k+1}) - y_{k+1} = y(t_k) - y_k + h [Φ(t_k, y(t_k), h) - Φ(t_k, y_k, h)] + O(h^{p+1}). (1.1.77)

Applying the triangle inequality and the Lipschitz property leads to

|e_{k+1}| ≤ (1 + hL) |e_k| + C h^{p+1}. (1.1.78)

Using the recursive formula we obtain

|e_{k+1}| ≤ (1 + hL)^{k+1} |e_0| + C h^{p+1} (1 + A + A^2 + ... + A^k), (1.1.79)

where A = 1 + hL. Since e_0 = 0 we have

|e_{k+1}| ≤ [(A^{k+1} - 1)/(A - 1)] C h^{p+1} = (C h^p / L)((1 + hL)^{k+1} - 1). (1.1.80)

Using 1 + x ≤ e^x for x ≥ 0, we write

|e_{k+1}| ≤ (C h^p / L)(e^{Lh(k+1)} - 1). (1.1.81)

Using t_{k+1} - t_0 = h(k + 1) we complete the proof. Thus,

max_{0≤k≤N} |y_k - y(t_k)| ≤ (C h^p / L)(e^{LT} - 1), (1.1.82)

which shows convergence.


[Figure 1.1 shows the midpoint solutions y_{k+1} = y_{k-1} + 2 h f(t_k, y_k) of y' = -y, y_0 = 1, y_1 = e^{-h}, for h = 0.25 and h = 0.125, together with the exact solution y(t) = e^{-t}; oscillations appear for long times.]

Figure 1.1: Weak instability for the midpoint method

1.4.2 Stability

An example of a weakly stable method:

We study the midpoint method, given y_0 and y_1:

y_{k+1} = y_{k-1} + 2 h f(t_k, y_k), k = 1, 2, ..., (1.1.83)

for the problem

y' = -y, y(0) = 1,

where the exact solution is y(t) = e^{-t}.

To eliminate the effect of initial errors we use y_0 = 1 and y_1 = e^{-h}, and present the numerical solutions for h = 0.25, 0.125, 0.0625 in Figure 1.1. This weak instability is not due to round-off errors but to the method itself. To see this, let us consider the problem

y' = λy, y(0) = 1.

1.4. CONSISTENCY, STABILITY AND CONVERGENCE 35

The midpoint method becomes

y_{k+1} = y_{k-1} + 2 z y_k, y_0 = 1, y_1 = e^z, z = hλ. (1.1.84)

We look for solutions of the form y_k = r^k, where r is a root of

r^2 - 2 z r - 1 = 0,

namely r_1 = z + √(z^2 + 1) and r_2 = z - √(z^2 + 1). Thus, the general solution y_k can be expressed as

y_k = C_1 r_1^k + C_2 r_2^k,

where C_1 and C_2 are determined by

y_0 = 1 = C_1 + C_2, y_1 = e^z = C_1 r_1 + C_2 r_2,

which yields

C_1 = [(-z + √(z^2 + 1)) + e^z] / (2√(z^2 + 1)),

C_2 = [(z + √(z^2 + 1)) - e^z] / (2√(z^2 + 1)).

Hence, the exact solution of the difference equation (1.1.84) is

y_k = C_1 r_1^k + C_2 r_2^k, (1.1.85)

where C_1 = 1 + O(z^3) and C_2 = O(z^3).

For the numerical example, λ = -1, we have 0 < r_1 < 1 and r_2 < -1. Thus |r_1|^k → 0 while |r_2|^k → ∞ as k → ∞. This explains the weak instability: since |r_2| → 1 as h → 0, smaller stepsizes h only delay the spurious oscillations and do not eliminate them. The main problem is that the numerical solution (1.1.85) contains a parasitic term that does not correspond to the true solution: the term C_1 r_1^k converges to the true solution, while C_2 r_2^k causes the spurious oscillations.
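The parasitic root can be observed numerically with a few lines of Python (a sketch of the experiment of Figure 1.1):

import numpy as np

h, lam, T = 0.25, -1.0, 8.0
N = int(T / h)
y = np.zeros(N + 1)
y[0], y[1] = 1.0, np.exp(lam * h)              # exact starting values
for k in range(1, N):
    y[k + 1] = y[k - 1] + 2 * h * lam * y[k]   # midpoint rule (1.1.83)

t = h * np.arange(N + 1)
print(np.abs(y - np.exp(lam * t))[-5:])        # error grows and oscillates for large t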

This analysis can be applied to a general n-step method of the form

y_{k+1} = Σ_{i=0}^{n} a_{n-i} y_{k-i} + h Σ_{i=-1}^{n} b_{n-i} f(t_{k-i}, y_{k-i}). (1.1.86)

36 CHAPTER 1. NUMERICAL METHODS FOR ODES

The stability polynomial is obtained by setting f(t, y) = 0 and looking for a solution of the form y_k = r^k, which yields

p(r) = r^{n+1} - Σ_{i=0}^{n} a_{n-i} r^{n-i}. (1.1.87)

Thus, the exact solution can be written as a linear combination

y_k = Σ_{i=0}^{n} C_i r_i^k.

Definition 5. (Root condition) The stability polynomial satisfies the root condition if and only if all roots of p(r) = 0 are such that

|r_i| ≤ 1, 0 ≤ i ≤ n, (1.1.88)

and each root r_i with |r_i| = 1 is simple.

The strong root condition is satisfied if and only if

r_0 = 1, |r_i| < 1, i = 1, 2, ..., n. (1.1.89)

1. If the stability polynomial satisfies the root condition with more than one simple root on the unit circle, then the method is weakly stable, i.e., for small h it will give an accurate solution over a fixed interval.

2. If the strong root condition is satisfied, then all parasitic terms go to zero as k → ∞. The method is strongly stable, i.e., for h small enough the solution is stable.

3. All methods which do not satisfy the root condition are unstable.

Examples:

1. Midpoint method: The stability polynomial is

p(r) = r^2 - 1, r_0 = 1, r_1 = -1.

Thus, the midpoint method is weakly stable.

1.4. CONSISTENCY, STABILITY AND CONVERGENCE 37

2. For all Adams-Bashforth methods: The stability polynomial is

p(r) = r^{n+1} - r^n, r_0 = 1, r_i = 0, i = 1, 2, ..., n.

Thus, Adams-Bashforth methods are strongly stable.

3. One-step Runge-Kutta methods are strongly stable

4. The method

y_{k+1} = 4 y_k - 3 y_{k-1} - 2 h f(t_{k-1}, y_{k-1}),

derived using the difference formula

y'(t_{k-1}) ≈ [-y_{k+1} + 4 y_k - 3 y_{k-1}] / (2h),

has stability polynomial

p(r) = r^2 - 4r + 3 = 0, r_0 = 1, r_1 = 3.

Thus the method is unstable.

Consistency of multistep methods:

To every multistep method (1.1.86) we can associate two polynomials: the stability polynomial p(r) and

s(r) = Σ_{i=0}^{n+1} b_{n+1-i} r^{n+1-i}.

One can prove that the multistep method (1.1.86) is consistent if it is exact for the two problems

y' = 0, y(0) = 1,

which is equivalent to p(1) = 0, and

y'(t) = 1, y(0) = 0,


which yields, by setting y_k = h·k and f(t, y) = 1,

(k + 1) h - Σ_{i=0}^{n} a_{n-i} (k - i) h = h Σ_{i=0}^{n+1} b_{n+1-i}.

Writing k - i = k - n + n - i we obtain

(k - n) p(1) + (n + 1) - Σ_{i=0}^{n} a_{n-i} (n - i) = s(1).

Since p(1) = 0 and (n + 1) - Σ_{i=0}^{n} a_{n-i} (n - i) = p'(1), we have p'(1) = s(1).

Now we are ready to state the convergence theorem.

Theorem 1.4.3. The multistep method (1.1.86) converges if and only if it satisfies the root condition of Definition 5 with p(1) = 0 and p'(1) = s(1).

Proof. Consult Cheney and Kincaid.

1.4.3 Absolute stability

Consider the linear problem

y' = λy, t ≥ 0, y(0) = y_0 > 0,

where λ is a complex number with negative real part. The exact solution is

y(t) = y_0 e^{λt}.

As t → +∞ the exact solution decays to 0, i.e.,

lim_{t→∞} |y(t)| = 0.

A method is absolutely stable (for a given z = λh) if the numerical solution mimics this behavior, i.e.,

lim_{k→∞} |y_k| = 0.


Absolute stability for one-step Methods:

First, let us examine the behavior of Euler's method:

y_k = (1 + λh)^k y_0.

We would like the numerical solution to mimic the true solution, i.e., to decay to 0 as t approaches +∞.

Let us write the norm of y_k as

|y_k| = |1 + λh|^k |y_0|.

In order to obtain lim_{k→∞} |y_k| = 0, we must have

|1 + z| < 1, z = λh.

This is equivalent to z lying in the interior of the unit disk centered at (-1, 0). For instance, if λ is real and negative then we have

-2 < λh < 0,

which leads to the absolute stability condition

h < -2/λ.

We note that this is not related to accuracy.

If h = -2/λ, the numerical solution has constant magnitude; in this case Euler's method sits on the boundary of the stability region.

If h > -2/λ, the numerical solution diverges while oscillating as k → ∞.

Stability of Heun's method:

Repeating the same process as for Euler's method we obtain

y_k = (1 + z + z^2/2)^k y_0.

40 CHAPTER 1. NUMERICAL METHODS FOR ODES

Again the method is absolutely stable if

|1 + z + z^2/2| < 1.

If λ < 0 is real we have

-1 < 1 + λh + (λh)^2/2 < 1,

which is equivalent to h < -2/λ.

Applying (1.1.69) to the linear problem y' = λy we obtain

y_k = Φ(z)^k y_0, z = λh.

Definition 6. The method (1.1.69) is A-stable if and only if |Φ(z)| < 1 for all z such that Re(z) < 0.

Definition 7. The stability region of a method is the set of complex numbers z for which the method is absolutely stable.

Remark: In order to obtain the stability curves we partition [0, 2π] into θ_i = i · 2π/N, solve

Φ(z) = e^{iθ_i}, i = 0, 1, ..., N - 1,

and plot the roots z.

Absolute stability regions for multistep methods:

Adams-Bashforth methods: Let us study, for instance, the second-order Adams-Bashforth method for y' = λy, which gives

y_{k+1} = (1 + 3z/2) y_k - (z/2) y_{k-1}.

If we look for an exact solution of the form y_k = r^k, then r must satisfy the polynomial equation

q(r) = r^2 - (1 + 3z/2) r + z/2 = 0.


Definition 8. The stability region is the set of complex numbers z with Re(z) < 0 such that all roots of q(r) satisfy |r_i(z)| < 1 for all i, i.e., such that the method is absolutely stable.

BDF Methods:

1. Backward Euler method:

y_{k+1} = y_k + z y_{k+1},

where

q(r) = (1 - z) r - 1, r_0 = 1/(1 - z).

The absolute stability region is

{z : |1 - z| > 1}.

Thus, backward Euler is A-stable.

In general, the absolute stability region of a multistep method is the set of z such that the roots of

p(r) = z s(r)

satisfy |r_i| < 1, i = 0, 1, ..., n. Thus, the boundary of the stability region can be obtained by plotting

z = p(r)/s(r), for r = e^{iθ}, θ = 0, 2π/N, ..., 2π, N > 0.
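A Python sketch of this boundary-locus computation for the second-order Adams-Bashforth method, for which p(r) = r^2 - r and s(r) = (3/2) r - 1/2 (matplotlib is assumed to be available):

import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 400)
r = np.exp(1j * theta)                    # r = e^{i theta} on the unit circle
p = r**2 - r                              # stability polynomial of AB2
s = 1.5 * r - 0.5                         # s(r) for AB2
z = p / s                                 # boundary locus z = p(r)/s(r)

plt.plot(z.real, z.imag)
plt.axhline(0, color="k", lw=0.5)
plt.axvline(0, color="k", lw=0.5)
plt.title("Boundary of the absolute stability region: AB2")
plt.xlabel("Re(z)")
plt.ylabel("Im(z)")
plt.show()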

2. The trapezoidal method is A-stable and has the smallest error constant, C = 1/12, among all second-order A-stable methods.

3. There are no O(h^p), p > 2, A-stable multistep methods.

4. No explicit linear multistep method is A-stable.

5. There exist O(h^p) implicit Runge-Kutta A-stable methods (see Hairer et al.).


n     |x(t_n) - x_n|    |y(t_n) - y_n|    ||e(1)||_∞
10    E24               E24               2.49804E-02
20    E20               E24               1.27421E-02
30    E22               E22               8.55254E-03
40    E12               E12               6.43634E-03
50    9E-6              2E-7              5.15967E-03
60    E-12              E-12              4.30564E-03

Table 1.1: Errors for RKF45 and backward Euler applied to the stiff problem (1.1.90).

Example of a stiff problem:

x' = 198 x + 199 y,
y' = -398 x - 399 y, (1.1.90)

with initial conditions

x(0) = 1, y(0) = -1.

The exact solution is

x(t) = e^{-t}, y(t) = -e^{-t}.

The eigenvalues of the matrix

A = [ 198   199 ]
    [ -398  -399 ]

are λ_1 = -1 and λ_2 = -200. We use RKF45 on [0, 1] and the backward Euler method with h = 1/n and show the results in Table 1.1. The second and third columns show the final error for RKF45; the last column contains the infinity norm of the final error for the backward Euler method.
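A Python sketch that integrates the stiff system (1.1.90) with the backward Euler method; since the problem is linear, a linear solve replaces Newton's method:

import numpy as np

A = np.array([[198.0, 199.0], [-398.0, -399.0]])
Y = np.array([1.0, -1.0])                 # (x(0), y(0))
n = 50
h = 1.0 / n
I = np.eye(2)
for k in range(n):                        # backward Euler: (I - hA) Y_{k+1} = Y_k
    Y = np.linalg.solve(I - h * A, Y)

exact = np.array([np.exp(-1.0), -np.exp(-1.0)])
print(np.abs(Y - exact).max())            # stable at this h despite lambda_2 = -200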

1.5 Two-point Boundary Value Problems

1.5.1 Introduction

1.5.2 The Shooting Method

1.5.3 The Finite Difference Method