
1

Roots of Equations: Open Methods (Part 1)

Fixed Point Iteration & Newton-Raphson Methods

2

The following root finding methods will be introduced:

A. Bracketing Methods
A.1. Bisection Method
A.2. Regula Falsi

B. Open Methods
B.1. Fixed Point Iteration
B.2. Newton-Raphson Method
B.3. Secant Method

3

B. Open Methods

Figure: (a) bisection method; (b) open method, diverging; (c) open method, converging.

To find the root of f(x) = 0, we construct a "magic formula"

x_{i+1} = g(x_i)

to predict the root iteratively, until x_i converges to a root. However, the iteration may also diverge!

4

What you should know about Open Methods

How do we construct the magic formula g(x)?

How can we ensure convergence?

What makes a method converge quickly or diverge?

How fast does a method converge?

5

B.1. Fixed Point Iteration

• Also known as one-point iteration or successive substitution.

• To find the root for f(x) = 0, we reformulate f(x) = 0 so that there is an x on one side of the equation.

f(x) = 0  ⟺  x = g(x)

• If we can solve g(x) = x, we solve f(x) = 0.
  – x is known as the fixed point of g(x).

• We solve g(x) = x by computing

  x_{i+1} = g(x_i),  with x_0 given,

  until x_{i+1} converges to x.

6

Fixed Point Iteration – Example

f(x) = x^2 - 2x - 3 = 0

Rearranging so that x appears alone on one side:

x^2 - 2x - 3 = 0  ⟹  x^2 = 2x + 3  ⟹  x = √(2x + 3)

so we iterate with

x_{i+1} = g(x_i) = √(2 x_i + 3)

Reason: if the iteration converges, i.e. x_{i+1} ≈ x_i ≈ x, then

x = √(2x + 3)  ⟹  x^2 = 2x + 3  ⟹  x^2 - 2x - 3 = 0

7

Example

Find the root of f(x) = e^{-x} - x = 0.
(Answer: α = 0.56714329)

We put x_{i+1} = e^{-x_i}.

i xi εa (%) εt (%)

0 0 100.0

1 1.000000 100.0 76.3

2 0.367879 171.8 35.1

3 0.692201 46.9 22.1

4 0.500473 38.3 11.8

5 0.606244 17.4 6.89

6 0.545396 11.2 3.83

7 0.579612 5.90 2.20

8 0.560115 3.48 1.24

9 0.571143 1.93 0.705

10 0.564879 1.11 0.399
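The table above is easy to regenerate in a few lines of C; the sketch below is only an illustration (the 10-iteration count and the output format are choices made here, not part of the original slides):

#include <stdio.h>
#include <math.h>

/* Fixed point iteration x_{i+1} = e^{-x_i}, starting from x0 = 0 */
int main(void) {
    double alpha = 0.56714329;   /* known root, used only for the true-error column */
    double x = 0.0;              /* initial guess x0 */
    for (int i = 1; i <= 10; i++) {
        double x_new = exp(-x);
        double ea = fabs((x_new - x) / x_new) * 100.0;     /* approximate relative error (%) */
        double et = fabs((alpha - x_new) / alpha) * 100.0; /* true relative error (%) */
        printf("%2d  %.6f  %6.1f  %6.3f\n", i, x_new, ea, et);
        x = x_new;
    }
    return 0;
}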

8

Two Curve Graphical Method

Demo

The point, x, where the two curves,

f1(x) = x and

f2(x) = g(x),

intersect is the solution to f(x) = 0.

9

Fixed Point Iteration

f(x) = x^2 - 2x - 3 = 0

• There are infinitely many ways to construct g(x) from f(x). For example:

Case a:  x^2 = 2x + 3   ⟹  x = √(2x + 3)       ⟹  g(x) = √(2x + 3)
Case b:  x(x - 2) = 3   ⟹  x = 3 / (x - 2)     ⟹  g(x) = 3 / (x - 2)
Case c:  2x = x^2 - 3   ⟹  x = (x^2 - 3) / 2   ⟹  g(x) = (x^2 - 3) / 2

So which one is better?  (ans: the roots are x = 3 or -1)

10

Case a:  x_{i+1} = √(2 x_i + 3)

1. x0 = 4
2. x1 = 3.31662
3. x2 = 3.10375
4. x3 = 3.03439
5. x4 = 3.01144
6. x5 = 3.00381

Converge!

Case b:  x_{i+1} = 3 / (x_i - 2)

1. x0 = 4
2. x1 = 1.5
3. x2 = -6
4. x3 = -0.375
5. x4 = -1.263158
6. x5 = -0.919355
7. x6 = -1.02762
8. x7 = -0.990876
9. x8 = -1.00305

Converge, but slower

Case c:  x_{i+1} = (x_i^2 - 3) / 2

1. x0 = 4
2. x1 = 6.5
3. x2 = 19.625
4. x3 = 191.070

Diverge!
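The three sequences above can be regenerated with a short C program. This is a minimal sketch (the helper names g_a/g_b/g_c, the 8-iteration cap, and the output format are assumptions made here, not from the slides):

#include <stdio.h>
#include <math.h>

/* The three candidate fixed point forms for f(x) = x^2 - 2x - 3 = 0 */
static double g_a(double x) { return sqrt(2.0 * x + 3.0); }   /* Case a */
static double g_b(double x) { return 3.0 / (x - 2.0); }        /* Case b */
static double g_c(double x) { return (x * x - 3.0) / 2.0; }    /* Case c */

int main(void) {
    double (*g[3])(double) = { g_a, g_b, g_c };
    const char *name[3] = { "Case a", "Case b", "Case c" };
    for (int k = 0; k < 3; k++) {
        double x = 4.0;                  /* same starting guess x0 = 4 for each case */
        printf("%s:", name[k]);
        for (int i = 0; i < 8; i++) {
            x = g[k](x);
            printf("  %.5f", x);
        }
        printf("\n");
    }
    return 0;
}

Running it shows Case a settling towards 3, Case b wandering towards -1, and Case c blowing up, matching the lists above.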

11

How to choose g(x)?

• Can we know which g(x) will converge to the solution before we do the computation?

12

Convergence of Fixed Point Iteration

By definition,

ε_i = α - x_i        (1)
ε_{i+1} = α - x_{i+1}    (2)

Fixed point iteration gives

x_{i+1} = g(x_i)    (3)

and, since α is a fixed point of g,

α = g(α)    (4)

Subtracting (3) from (4):  α - x_{i+1} = g(α) - g(x_i)    (5)
Sub (2) in (5):            ε_{i+1} = g(α) - g(x_i)    (6)

13

Convergence of Fixed Point Iteration

According to the derivative mean-value theorem, if g(x) and g'(x) are continuous over the interval between x_i and α, there exists a value x = c within that interval such that

g'(c) = ( g(α) - g(x_i) ) / ( α - x_i )    (7)

From (1) and (6) we have ε_i = α - x_i and ε_{i+1} = g(α) - g(x_i). Thus (7) gives

ε_{i+1} = g'(c) (α - x_i) = g'(c) ε_i

• Therefore, if |g'(c)| < 1, the error decreases with each iteration. If |g'(c)| > 1, the error increases.

• If the derivative is positive, the iterative solution will be monotonic.

• If the derivative is negative, the errors will oscillate.

14

Demo

(a) |g'(x)| < 1, g'(x) is +ve: converge, monotonic

(b) |g'(x)| < 1, g'(x) is -ve: converge, oscillate

(c) |g'(x)| > 1, g'(x) is +ve: diverge, monotonic

(d) |g'(x)| > 1, g'(x) is -ve: diverge, oscillate
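As a worked check (added here; it is not on the original slide), the criterion can be applied to the three g(x) choices constructed earlier for f(x) = x^2 - 2x - 3:

g_a(x) = √(2x + 3):    g_a'(x) = 1/√(2x + 3),   |g_a'(3)| = 1/3 < 1   → converges near x = 3, monotonically (g_a' > 0)
g_b(x) = 3/(x - 2):    g_b'(x) = -3/(x - 2)^2,  |g_b'(3)| = 3 > 1 but |g_b'(-1)| = 1/3 < 1   → repelled from x = 3, converges to x = -1 with oscillation (g_b' < 0)
g_c(x) = (x^2 - 3)/2:  g_c'(x) = x,             |g_c'(3)| = 3 > 1   → diverges near x = 3

This agrees with the behaviour observed in the Case a/b/c iterations on the earlier slide.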

15

Fixed Point Iteration Implementation (as a C function)

#include <math.h>       // for fabs()

double g(double x);     // the iteration function g(x) has to be supplied

// x0:       Initial guess of the root
// es:       Acceptable relative percentage error
// iter_max: Maximum number of iterations allowed
double FixedPt(double x0, double es, int iter_max) {
    double xr = x0;       // Estimated root
    double xr_old;        // Keep xr from previous iteration
    double ea = 100.0;    // Approximate relative error (%), start large
    int iter = 0;         // Keep track of # of iterations

    do {
        xr_old = xr;
        xr = g(xr_old);
        if (xr != 0)
            ea = fabs((xr - xr_old) / xr) * 100;
        iter++;
    } while (ea > es && iter < iter_max);

    return xr;
}
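A minimal way to exercise this routine, using Case a of the earlier example as g(x); the tolerance and iteration limit below are illustrative values chosen here, not prescribed by the slides:

#include <stdio.h>
#include <math.h>

double g(double x) { return sqrt(2.0 * x + 3.0); }   /* Case a: g(x) = sqrt(2x + 3) */

double FixedPt(double x0, double es, int iter_max);   /* the routine above */

int main(void) {
    double root = FixedPt(4.0, 0.001, 50);   /* x0 = 4, es = 0.001 %, at most 50 iterations */
    printf("Estimated root: %.6f\n", root);  /* should approach 3 */
    return 0;
}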

16

The following root finding methods will be introduced:

A. Bracketing Methods
A.1. Bisection Method
A.2. Regula Falsi

B. Open Methods
B.1. Fixed Point Iteration
B.2. Newton-Raphson Method
B.3. Secant Method

17

B.2. Newton-Raphson Method

Use the slope of f(x) to predict the location of the root.

x_{i+1} is the point where the tangent at x_i intersects the x-axis:

f'(x_i) = ( f(x_i) - 0 ) / ( x_i - x_{i+1} )

which rearranges to

x_{i+1} = x_i - f(x_i) / f'(x_i)
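For symmetry with the FixedPt routine on the earlier slide, the update above can be coded the same way. This is only a sketch: the names NewtonRaphson, f and df, and the stopping rule are assumptions made here, not part of the original slides.

#include <math.h>      /* for fabs() */

double f(double x);    /* the function whose root we seek; to be supplied */
double df(double x);   /* its derivative f'(x); to be supplied */

/* x0: initial guess, es: acceptable relative percentage error,
   iter_max: maximum number of iterations allowed */
double NewtonRaphson(double x0, double es, int iter_max) {
    double xr = x0;        /* estimated root */
    double ea = 100.0;     /* approximate relative error (%) */
    int iter = 0;

    do {
        double xr_old = xr;
        xr = xr_old - f(xr_old) / df(xr_old);   /* x_{i+1} = x_i - f(x_i)/f'(x_i) */
        if (xr != 0)
            ea = fabs((xr - xr_old) / xr) * 100;
        iter++;
    } while (ea > es && iter < iter_max);

    return xr;
}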

18

Newton-Raphson Method

What would happen when f '(α) = 0?

For example, f(x) = (x - 1)^2 = 0.

x_{i+1} = x_i - f(x_i) / f'(x_i)
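A quick calculation (added here as a check; it is not spelled out on the slide) shows what happens for this f:

x_{i+1} = x_i - (x_i - 1)^2 / [ 2(x_i - 1) ] = x_i - (x_i - 1)/2 = (x_i + 1)/2

so the error 1 - x_{i+1} = (1 - x_i)/2 is only halved at each step: the method still converges to the double root x = 1, but linearly rather than quadratically, and as x_i → 1 the division by f'(x_i) → 0 becomes increasingly ill-conditioned.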

19

Error Analysis of Newton-Raphson Method

By definition,

δ_i = α - x_i        (1)
δ_{i+1} = α - x_{i+1}    (2)

The Newton-Raphson method gives

x_{i+1} = x_i - f(x_i) / f'(x_i)

so

f(x_i) = f'(x_i) (x_i - x_{i+1})
       = f'(x_i) [ (x_i - α) + (α - x_{i+1}) ]
       = -f'(x_i) (α - x_i) + f'(x_i) (α - x_{i+1})    (3)

20

Error Analysis of Newton-Raphson Method

Suppose α is the true value (i.e., f(α) = 0). Using Taylor's series with remainder, where c lies between x_i and α:

f(α) = f(x_i) + f'(x_i)(α - x_i) + (f''(c)/2)(α - x_i)^2 = 0

Substituting (3):

0 = f'(x_i)(α - x_{i+1}) + (f''(c)/2)(α - x_i)^2

From (1) and (2):

0 = f'(x_i) δ_{i+1} + (f''(c)/2) δ_i^2

δ_{i+1} = -f''(c) / (2 f'(x_i)) · δ_i^2

When x_i and α are very close to each other, c (which lies between x_i and α) is also close to α, so δ_{i+1} ≈ -f''(α) / (2 f'(α)) · δ_i^2.

The iterative process is said to be of second order.

21

The Order of Iterative Process (Definition)

Using an iterative process we get x_{k+1} from x_k and other information.

We have x_0, x_1, x_2, …, x_{k+1} as estimates of the root α.

Let δ_k = α - x_k. Then we may observe

δ_{k+1} = O(δ_k^p)

The process in such a case is said to be of p-th order.
• It is called superlinear if p > 1 (quadratic if p = 2).
• It is called linear if p = 1.
• It is called sublinear if p < 1.

22

Error of the Newton-Raphson Method

Each error is approximately proportional to the square of the previous error. This means that the number of correct decimal places roughly doubles with each approximation.

Example: Find the root of f(x) = e^{-x} - x = 0.

(Ans: α = 0.56714329)

x_{i+1} = x_i - ( e^{-x_i} - x_i ) / ( -e^{-x_i} - 1 )

Error Analysis

f'(α) = -e^{-0.56714329} - 1 = -1.56714329
f''(α) = e^{-0.56714329} = 0.56714329

23

Error Analysis

δ_{i+1} ≈ -f''(α) / (2 f'(α)) · δ_i^2 = -0.56714329 / (2 × (-1.56714329)) · δ_i^2 = 0.18095 δ_i^2

i   x_i            εt (%)       |δ_i|          estimated |δ_{i+1}|
0   0              100          0.56714329     0.0582
1   0.500000000    11.8         0.06714329     0.0008158
2   0.566311003    0.147        0.0008323      0.000000125
3   0.567143165    0.0000220    0.000000125    2.83x10^-15
4   0.567143290    < 10^-8

24

Newton-Raphson vs. Fixed Point Iteration

Find the root of f(x) = e^{-x} - x = 0.
(Answer: α = 0.56714329)

Fixed Point Iteration with x_{i+1} = e^{-x_i}:

i   x_i        εa (%)   εt (%)
0   0                   100.0
1   1.000000   100.0    76.3
2   0.367879   171.8    35.1
3   0.692201   46.9     22.1
4   0.500473   38.3     11.8
5   0.606244   17.4     6.89
6   0.545396   11.2     3.83
7   0.579612   5.90     2.20
8   0.560115   3.48     1.24
9   0.571143   1.93     0.705
10  0.564879   1.11     0.399

Newton-Raphson:

i   x_i            εt (%)       |δ_i|
0   0              100          0.56714329
1   0.500000000    11.8         0.06714329
2   0.566311003    0.147        0.0008323
3   0.567143165    0.0000220    0.000000125
4   0.567143290    < 10^-8

25

Pitfalls of the Newton-Raphson Method

• Sometimes slow. For example, for f(x) = x^{10} - 1 with x_0 = 0.5:

iteration   x_i
0           0.5
1           51.65
2           46.485
3           41.8365
4           37.65285
5           33.8877565
…           …
40          1.002316024
41          1.000023934
42          1.000000003
43          1.000000000
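A short calculation (added here; it is not on the original slide) shows where the slow phase comes from. For f(x) = x^{10} - 1 the Newton update is

x_{i+1} = x_i - (x_i^{10} - 1) / (10 x_i^9) = 0.9 x_i + 1 / (10 x_i^9)

so while x_i is large the step is essentially x_{i+1} ≈ 0.9 x_i, shrinking the iterate by only 10 % per iteration; quadratic convergence only takes over once x_i is close to the root x = 1. The first step also shows the overshoot problem: at x_0 = 0.5 the slope 10 x_0^9 ≈ 0.0195 is tiny, so the tangent throws the iterate out to x_1 ≈ 51.65.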

26

Pitfalls of the Newton-Raphson Method

Figure (a)

An inflection point (f''(x) = 0) in the vicinity of a root can cause divergence.

Figure (b)

A local maximum or minimum causes oscillations.

27

Pitfalls of the Newton-Raphson Method

Figure (c)

It may jump from one location close to one root to a location that is several roots away.

Figure (d)

A zero slope causes division by zero.

28

Overcoming the Pitfalls?

• There are no general convergence criteria for the Newton-Raphson method.

• Convergence depends on the nature of the function and on the accuracy of the initial guess.
  – A guess that is close to the true root is always a better choice.
  – Good knowledge of the function, or graphical analysis, can help you make good guesses.

• Good software should recognize slow convergence or divergence.
  – At the end of the computation, the final root estimate should always be substituted into the original function to verify the solution.
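The last point is cheap to automate. A minimal sketch (the function name verify_root and the 1e-6 tolerance are choices made here, not from the slides):

#include <stdio.h>
#include <math.h>

/* Substitute the final estimate back into f(x) and warn if the residual is large */
int verify_root(double (*f)(double), double root) {
    double residual = fabs(f(root));
    if (residual > 1e-6) {
        printf("Warning: |f(root)| = %g; the estimate may not be a root\n", residual);
        return 0;   /* not verified */
    }
    return 1;       /* verified */
}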

29

Other Facts

• The Newton-Raphson method converges quadratically (when it converges),
  – except when the root is a multiple root.

• When the initial guess is close to the root, the Newton-Raphson method usually converges.
  – To improve the chance of convergence, we could use a bracketing method to locate the initial value for the Newton-Raphson method.

30

Summary

• Differences between bracketing methods and open methods for locating roots
  – Guarantee of convergence?
  – Performance?

• Convergence criterion for the fixed point iteration method

• Rate of convergence
  – Linear, quadratic, superlinear, sublinear

• Understand what conditions make the Newton-Raphson method converge quickly or diverge