
Page 1: Polynomial Interpolation and Approximation

Polynomial Interpolation and Approximation

Errors using inadequate data are much less than those using no data at all.

Charles Babbage


Fall 2010

Page 2: Polynomial Interpolation and Approximation

Topics to Be Discussed

This unit only requires the knowledge of simple polynomial algebra and algebraic manipulations. The following topics will be presented:

A naïve interpolation method
The Lagrange form
The Newton form
Divided differences
Least-square polynomial approximation

Page 3: Polynomial Interpolation and Approximation

Polynomial Interpolation: 1/5

In many applications, we know a function but don't know its exact form. For example:

K(k) = \int_0^{\pi/2} \frac{dx}{\left(1 - k^2 \sin^2 x\right)^{1/2}}

The above shows a function of k. But what is this function K(k)? How do we plot K(k)? Do we have to plug a value of k into the formula and integrate with respect to x to find K(k)?

Page 4: Polynomial Interpolation and Approximation

Polynomial Interpolation: 2/5

If K(k) is simple, integration may be a viable approach; however, most integrals are difficult to evaluate. This is a problem.
One way to overcome this problem is to evaluate K(k) at some values of k and fit the results with a simpler function (e.g., a polynomial).
One may use k0, k1, ..., kn to evaluate K(k0), K(k1), ..., K(kn), and fit (k0, K(k0)), (k1, K(k1)), ..., (kn, K(kn)) with a polynomial y = P(x).
Then, when K(t) is needed, one uses P(t) instead!

Page 5: Polynomial Interpolation and Approximation

Polynomial Interpolation: 3/5

Since the polynomial y = P(x) passes through all data points (ki, K(ki)) for i = 0, 1, ..., n, P(x) is said to interpolate the function K(k) at k0, k1, ..., kn.
Since a degree n polynomial, P(x) = a0 + a1x + a2x^2 + ... + anx^n, has n+1 coefficients, n+1 distinct data points (ki, K(ki)) are needed to have a unique solution for a0, a1, a2, ..., an.
Polynomials may have other forms, each of which requires a different computation. We will discuss the Lagrange and Newton forms.
Interpolating polynomials are unique; they just have different forms.

Page 6: Polynomial Interpolation and Approximation

Polynomial Interpolation: 4/5

Polynomial interpolation: given a set of data points (x0, y0), (x1, y1), ..., (xn, yn), where the xi's are all distinct, find a polynomial y = P(x) that interpolates the data points (i.e., yi = P(xi) for i = 0, 1, ..., n).
The degree of the interpolating polynomial P(x) is n.
Then, we may use P(x) as if it were the unknown function that generated the data points (x0, y0), (x1, y1), ..., (xn, yn).
All transcendental functions (e.g., sin(), exp(), log()) are computed this way.

Page 7: Polynomial Interpolation and Approximation

Polynomial Interpolation: 5/5

A number of fundamental issues have to be addressed properly:

How easily can we determine the interpolating polynomial from the input data points?
How easily can we compute P(x) for a new x after the polynomial is determined?
How easily can we add new data points after P(x) is determined?
How efficiently can we solve these three issues?

Page 8: Polynomial Interpolation and Approximation

A Naïve Method: 1/3

Let us try a naïve method before getting into good ones.
Suppose the interpolating polynomial for n+1 data points is P(x) = a0 + a1x + a2x^2 + ... + anx^n. In other words, P(x) is in the power form.
Let the data points be (x0, y0), (x1, y1), ..., (xn, yn).
For each xi, we have the following:

y_i = P(x_i) = a_0 + a_1 x_i + a_2 x_i^2 + \cdots + a_n x_i^n
    = \begin{bmatrix} 1 & x_i & x_i^2 & \cdots & x_i^n \end{bmatrix}
      \cdot
      \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}

Page 9: Polynomial Interpolation and Approximation

A Naïve Method: 2/3

Collecting all the row vectors [1, xi, xi^2, ..., xi^n] into an (n+1)×(n+1) matrix yields:

\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
=
\begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
1 & x_2 & x_2^2 & \cdots & x_2^n \\
\vdots & \vdots & \vdots &        & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^n
\end{bmatrix}
\cdot
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}

Since the column matrix [y_i] of size (n+1)×1 and the matrix [x_i^j] of size (n+1)×(n+1) are known (i.e., input data), Gaussian elimination may be used to solve for a0, a1, a2, ..., an.
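
The slides give no code for this step; below is a minimal Python sketch of the naïve method (my own, not from the slides), assuming NumPy is available and letting numpy.linalg.solve stand in for the Gaussian elimination step.

import numpy as np

def naive_interpolation_coeffs(x, y):
    # Build the (n+1) x (n+1) Vandermonde matrix V with V[i, j] = x_i**j
    # and solve V a = y for the power-form coefficients a0, ..., an.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    V = np.vander(x, increasing=True)
    return np.linalg.solve(V, y)

# Example: the points (-1, 1), (0, 2), (1, 5) used later in the slides.
print(naive_interpolation_coeffs([-1.0, 0.0, 1.0], [1.0, 2.0, 5.0]))   # should print about [2. 2. 1.]

Because the Vandermonde matrix becomes ill-conditioned quickly as n grows, this sketch is illustrative rather than a recommended implementation.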

Page 10: Polynomial Interpolation and Approximation

A Naïve Method: 3/3

Issue #1: Determining the polynomial P(x) is not difficult with Gaussian elimination; however, it is an O(n^3) method (i.e., slow)!
Issue #2: Evaluating P(x) for a given x is efficient if one uses the nested form, which requires only n multiplications (i.e., O(n)):

a_0 + x\left(a_1 + x\left(a_2 + \cdots + x\left(a_{n-2} + x\left(a_{n-1} + a_n x\right)\right)\cdots\right)\right)

Issue #3: Adding a new data point requires solving a new system of linear equations. Moreover, pivoting may be needed.
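
The nested form above is Horner's rule; here is a small Python sketch of it (mine, not from the slides, which use Fortran-style pseudocode).

def horner_eval(a, x):
    # Evaluate a[0] + a[1]*x + ... + a[n]*x**n using the nested form,
    # which needs only n multiplications.
    result = a[-1]
    for coeff in reversed(a[:-1]):
        result = coeff + result * x
    return result

# Example: P(x) = 2 + 2x + x**2 interpolates (-1,1), (0,2), (1,5); P(3) = 17.
print(horner_eval([2.0, 2.0, 1.0], 3.0))   # should print 17.0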

Page 11: Polynomial Interpolation and Approximation

Lagrange Polynomials: 1/14

Lagrange used a special form of P(x). Each term of P(x) is a degree n polynomial, and the i-th term does not include the factor (x - x_i). Here, P_k(x) denotes a polynomial of degree k:

P_1(x) = a_0 (x - x_1) + a_1 (x - x_0)
P_2(x) = a_0 (x - x_1)(x - x_2) + a_1 (x - x_0)(x - x_2) + a_2 (x - x_0)(x - x_1)

P_n(x) = a_0 (x - x_1)(x - x_2) \cdots (x - x_n)
       + a_1 (x - x_0)(x - x_2) \cdots (x - x_n)
       + a_2 (x - x_0)(x - x_1)(x - x_3) \cdots (x - x_n)
       + \cdots
       + a_n (x - x_0)(x - x_1) \cdots (x - x_{n-1})

General form:  P_n(x) = \sum_{i=0}^{n} a_i \left[ \prod_{j=0,\, j \ne i}^{n} (x - x_j) \right]

Page 12: Polynomial Interpolation and Approximation

Lagrange Polynomials: 2/14

Why is the Lagrange form useful? It is easy to compute the coefficients a0, a1, ..., an.
The i-th term of P_n(x) does not contain the factor (x - x_i):

a_i (x - x_0)(x - x_1) \cdots (x - x_{i-1})(x - x_{i+1}) \cdots (x - x_n)

Therefore, plugging any of x = x_0, x = x_1, ..., x = x_{i-1}, x = x_{i+1}, ..., x = x_n into this term yields zero. However, plugging x = x_i into this term yields:

a_i (x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)

Hence, x_i makes the i-th term non-zero and all other terms zero.

Page 13: Polynomial Interpolation and Approximation

Lagrange Polynomials: 3/14

Recall that the Lagrange form is

P_n(x) = \sum_{i=0}^{n} a_i \left[ \prod_{j=0,\, j \ne i}^{n} (x - x_j) \right]

If we plug x_i into P_n(x_i), all but the i-th term become zero, and the i-th term is

a_i (x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)

Therefore, we have

y_i = P_n(x_i) = a_i (x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)

and a_i is

a_i = \frac{y_i}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)}

Page 14: Polynomial Interpolation and Approximation

Lagrange Polynomials: 4/14

Plugging the computed a_i's back into P_n(x) yields the Lagrange interpolating polynomial:

P_n(x) = \sum_{i=0}^{n} y_i \, \frac{(x - x_0)(x - x_1) \cdots (x - x_{i-1})(x - x_{i+1}) \cdots (x - x_n)}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)}

Given (xi, yi), i = 0, 1, ..., n, the following computes all coefficients a_i in array a(0:n):

DO i = 0, n                ! compute a(i)
  a(i) = 1.0
  DO j = 0, n              ! for each x(j), j /= i
    IF (i /= j) a(i) = a(i)*(x(i) - x(j))
  END DO
  a(i) = y(i)/a(i)
END DO
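
For comparison, a runnable Python version of the same coefficient computation, i.e., a_i = y_i / prod_{j != i}(x_i - x_j) (my sketch, not part of the slides):

def lagrange_coeffs(x, y):
    # a[i] = y[i] / prod over j != i of (x[i] - x[j])
    n = len(x)
    a = []
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= (x[i] - x[j])   # accumulate (x_i - x_j)
        a.append(y[i] / denom)
    return a

# Example from the slides: points (-1, 1), (0, 2), (1, 5).
print(lagrange_coeffs([-1.0, 0.0, 1.0], [1.0, 2.0, 5.0]))   # should print [0.5, -2.0, 2.5]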

Page 15: Polynomial Interpolation and Approximation

Lagrange Polynomials: 5/14

How many multiplications are there?
For each i, n multiplications are needed in the inner loop because it iterates n+1 times and no multiplication is done when i = j.
Since the outer DO loop runs n+1 times, the total number of multiplications is n(n+1).
This is high, but it is easy to do!

DO i = 0, n                ! compute a(i)
  a(i) = 1.0
  DO j = 0, n              ! for each x(j), j /= i
    IF (i /= j) a(i) = a(i)*(x(i) - x(j))
  END DO
  a(i) = y(i)/a(i)
END DO

Page 16: Polynomial Interpolation and Approximation

Lagrange Polynomials: 6/14

Consider three data points (-1, 1), (0, 2) and (1, 5). Thus, x0 = -1, y0 = 1, x1 = 0, y1 = 2 and x2 = 1, y2 = 5. Here are the a_i's and the interpolating polynomial of degree 2:

a_0 = \frac{y_0}{(x_0 - x_1)(x_0 - x_2)} = \frac{1}{((-1) - 0)\,((-1) - 1)} = \frac{1}{2}

a_1 = \frac{y_1}{(x_1 - x_0)(x_1 - x_2)} = \frac{2}{(0 - (-1))\,(0 - 1)} = -2

a_2 = \frac{y_2}{(x_2 - x_0)(x_2 - x_1)} = \frac{5}{(1 - (-1))\,(1 - 0)} = \frac{5}{2}

P_2(x) = \frac{1}{2}\, x (x - 1) - 2 (x + 1)(x - 1) + \frac{5}{2} (x + 1) x

Page 17: Polynomial Interpolation and Approximation

Lagrange Polynomials: 7/14

How do we find P_n(x) for an arbitrary x? The polynomial P_n(x) is:

P_n(x) = \sum_{i=0}^{n} a_i (x - x_0)(x - x_1) \cdots (x - x_{i-1})(x - x_{i+1}) \cdots (x - x_n)

The number of multiplications is still n(n+1)!

Px = 0.0                   ! x is the input
DO i = 0, n
  s = a(i)
  DO j = 0, n
    IF (i /= j) s = s * (x - x(j))
  END DO
  Px = Px + s
END DO

With some programming trick, one may reduce the number of multiplications to O(n)! How? (Hint: use division.) However, this approach is risky. Why???
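
One reading of the hint, sketched below in Python (an assumption on my part, since the slide leaves it as an exercise): compute the full product w = (x - x0)(x - x1)...(x - xn) once, then obtain each term as a(i)*w/(x - x(i)) with a single division. The risk is that x may coincide with (or be very close to) a node x(i), causing division by zero or severe rounding error, so this sketch falls back to the direct sum in that case.

def lagrange_eval_division_trick(a, xs, x, eps=1e-12):
    # Evaluate sum_i a[i] * prod_{j != i}(x - xs[j]) in O(n) multiplications
    # by dividing the full product by (x - xs[i]).
    if any(abs(x - xi) < eps for xi in xs):
        # Risky case: (x - x_i) ~ 0, so fall back to the direct O(n^2) sum.
        total = 0.0
        for i, ai in enumerate(a):
            term = ai
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj)
            total += term
        return total
    w = 1.0
    for xj in xs:
        w *= (x - xj)                    # full product, computed once
    return sum(ai * w / (x - xi) for ai, xi in zip(a, xs))

# Example with P2 from the slides: a = [0.5, -2.0, 2.5], nodes -1, 0, 1.
print(lagrange_eval_division_trick([0.5, -2.0, 2.5], [-1.0, 0.0, 1.0], 3.0))   # about 17.0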

Page 18: Polynomial Interpolation and Approximation

Lagrange Polynomials: 8/14

We computed the interpolating polynomial P_2(x) for (-1, 1), (0, 2) and (1, 5) as follows:

P_2(x) = \frac{1}{2}\, x (x - 1) - 2 (x + 1)(x - 1) + \frac{5}{2} (x + 1) x

Then, we may use P_2(x) to find other values:

P_2(3) = \frac{1}{2} \cdot 3 (3 - 1) - 2 (3 + 1)(3 - 1) + \frac{5}{2} (3 + 1) \cdot 3 = 3 - 16 + 30 = 17

P_2(-2) = \frac{1}{2} (-2)((-2) - 1) - 2 ((-2) + 1)((-2) - 1) + \frac{5}{2} ((-2) + 1)(-2) = 3 - 6 + 5 = 2

P_2(0.5) = \frac{1}{2} \cdot 0.5 (0.5 - 1) - 2 (0.5 + 1)(0.5 - 1) + \frac{5}{2} (0.5 + 1) \cdot 0.5 = -0.125 + 1.5 + 1.875 = 3.25

Page 19: Polynomial Interpolation and Approximation

Lagrange Polynomials: 9/14

Use a polynomial to evaluate the sin() function. It is sufficient to restrict the range to about [0, 1.5]. A table of (xi, yi)'s is generated. This gives a degree 5 interpolating polynomial with coefficients a_i. Plot both functions to see how close they are!

xi     yi
0.0    0.0
0.3    0.29552021
0.6    0.5646425
0.9    0.7833269
1.2    0.9320391
1.5    0.997495

a0    a1        a2          a3         a4          a5
0.0   5.067218  -19.363594  26.863062  -15.981465  3.4207656

Page 20: Polynomial Interpolation and Approximation

Lagrange Polynomials: 10/14

With the coefficients below, we may compute or "approximate" sin() with a polynomial.

a0    a1        a2          a3         a4          a5
0.0   5.067218  -19.363594  26.863062  -15.981465  3.4207656

x     P5(x)       sin(x)      error
0.2   0.19866316  0.19866933  6.1690807E-6
0.4   0.38942143  0.38941833  -3.0994415E-6
0.6   0.5646424   0.5646425   5.9604644E-8
1.0   0.84147375  0.84147095  -2.8014183E-6
1.4   0.98543715  0.98544973  1.257658E-5
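
The experiment on this and the previous slide can be reproduced with a short Python sketch (mine, not the original code): the printed coefficients should be close to the a_i row above, and the final loop mirrors the error table.

import math

xs = [0.0, 0.3, 0.6, 0.9, 1.2, 1.5]
ys = [math.sin(v) for v in xs]

# Lagrange coefficients a_i = y_i / prod_{j != i}(x_i - x_j).
a = []
for i in range(len(xs)):
    denom = 1.0
    for j in range(len(xs)):
        if j != i:
            denom *= (xs[i] - xs[j])
    a.append(ys[i] / denom)
print(a)   # should be close to the tabulated a_i's

def p5(x):
    # Evaluate the Lagrange-form polynomial at x.
    total = 0.0
    for i, ai in enumerate(a):
        term = ai
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj)
        total += term
    return total

for x in (0.2, 0.4, 0.6, 1.0, 1.4):
    print(x, p5(x), math.sin(x), math.sin(x) - p5(x))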

Page 21: Polynomial Interpolation and Approximation

Lagrange Polynomials: 11/14

Now use 0, π/8, π/6, π/4, π/3 and π/2. It is easy to calculate sin() at these six values. We have a degree 5 interpolating polynomial with coefficients a_i as shown below.

xi     yi
0.0    0.0
π/8    0.38268345
π/6    0.5
π/4    0.70710676
π/3    0.8660254
π/2    1.0

a0    a1         a2         a3         a4          a5
0.0   24.586205  -50.82025  42.590042  -17.604652  1.2548212

Page 22: Polynomial Interpolation and Approximation

Lagrange Polynomials: 12/14

With the coefficients below, we may compute or "approximate" sin() with a polynomial.

a0    a1         a2         a3         a4          a5
0.0   24.586205  -50.82025  42.590042  -17.604652  1.2548212

x     P5(x)      sin(x)      error
0.2   0.19866273  0.19866933  6.6012144E-6
0.4   0.3894185   0.38941833  -1.7881393E-7
0.6   0.5646418   0.5646425   7.1525573E-7
1.0   0.8414726   0.8414711   -1.5497208E-6
1.4   0.9854038   0.9854498   4.6014785E-5

Page 23: Polynomial Interpolation and Approximation

Lagrange Polynomials: 13/14

What if we have a new data point (x_{n+1}, y_{n+1})? We need a new term whose coefficient a_{n+1} does not involve (x - x_{n+1}):

a_{n+1} = \frac{y_{n+1}}{(x_{n+1} - x_0)(x_{n+1} - x_1) \cdots (x_{n+1} - x_n)}

We also need to update each old coefficient a_i:

a_i^{\text{new}} = \frac{y_i}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)(x_i - x_{n+1})}

that is, the new a_i is the original a_i divided by (x_i - x_{n+1}).

We need n+1 multiplications to compute a_{n+1}, and one multiplication (or division) for each a_i (0 ≤ i ≤ n). The total is (n+1) + (n+1) = 2(n+1)!

Page 24: Polynomial Interpolation and Approximation

Lagrange Polynomials: 14/14

We obtained the interpolating polynomial P_2(x) for (-1, 1), (0, 2) and (1, 5) as follows:

P_2(x) = \frac{1}{2}\, x (x - 1) - 2 (x + 1)(x - 1) + \frac{5}{2} (x + 1) x

Suppose a new data point (2, 10) is introduced. Then,

a_3 = \frac{10}{(2 - (-1))(2 - 0)(2 - 1)} = \frac{5}{3}

a_2^{\text{new}} = \frac{a_2}{x_2 - x_3} = \frac{5/2}{1 - 2} = -\frac{5}{2}

a_1^{\text{new}} = \frac{a_1}{x_1 - x_3} = \frac{-2}{0 - 2} = 1

a_0^{\text{new}} = \frac{a_0}{x_0 - x_3} = \frac{1/2}{(-1) - 2} = -\frac{1}{6}

P_3(x) = -\frac{1}{6}\, x (x - 1)(x - 2) + (x + 1)(x - 1)(x - 2) - \frac{5}{2} (x + 1)\, x (x - 2) + \frac{5}{3} (x + 1)\, x (x - 1)

Page 25: Polynomial Interpolation and Approximation

Newton Divided Difference: 1/19

Newton chose to use a different form, as follows:

P_1(x) = a_0 + a_1 (x - x_0)
P_2(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1)
P_3(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + a_3 (x - x_0)(x - x_1)(x - x_2)

P_n(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \cdots + a_n (x - x_0)(x - x_1) \cdots (x - x_{n-1})

Here are some observations:

P_k(x) has the factors x - x_0, x - x_1, ..., x - x_{k-1}, and does not involve x - x_{k+1}, x - x_{k+2}, ..., x - x_n.
Hence, P_k(x) only depends on x_0, x_1, ..., x_k.
P_k(x) is the sum of P_{k-1}(x) and a_k (x - x_0) \cdots (x - x_{k-1}).

Page 26: Polynomial Interpolation and Approximation

Newton Divided Difference: 2/19

For all k, since P_k(x_0) = a_0, we have a_0 = y_0.
From P_1(x), we have y_1 = P_1(x_1) = a_0 + a_1 (x_1 - x_0) and a_1 = (y_1 - a_0)/(x_1 - x_0) = (y_1 - y_0)/(x_1 - x_0).
From P_2(x_2) and the computed a_0 and a_1, a_2 is

a_2 = \frac{y_2 - \left[ a_0 + a_1 (x_2 - x_0) \right]}{(x_2 - x_0)(x_2 - x_1)}

P_1(x) = a_0 + a_1 (x - x_0)
P_2(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1)
P_3(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + a_3 (x - x_0)(x - x_1)(x - x_2)
P_n(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \cdots + a_n (x - x_0)(x - x_1) \cdots (x - x_{n-1})

Page 27: Polynomial Interpolation and Approximation

Newton Divided Difference: 3/19

From P_3(x), we have P_3(x_3) = y_3 and a_3 is

a_3 = \frac{y_3 - \left[ a_0 + a_1 (x_3 - x_0) + a_2 (x_3 - x_0)(x_3 - x_1) \right]}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)}

Similarly, we can compute all a_i's (0 ≤ i ≤ n).

P_1(x) = a_0 + a_1 (x - x_0)
P_2(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1)
P_3(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + a_3 (x - x_0)(x - x_1)(x - x_2)
P_n(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \cdots + a_n (x - x_0)(x - x_1) \cdots (x - x_{n-1})

Page 28: Polynomial Interpolation and Approximation

Newton Divided Difference: 4/19

Suppose we have a_0, a_1, ..., a_{i-1} and x_i, and wish to compute a_i in P_i(x).
From P_i(x), we have

P_i(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \cdots + a_i (x - x_0)(x - x_1) \cdots (x - x_{i-1})

Plug the a_k's (0 ≤ k ≤ i-1) and x_i into P_i(x) and solve for a_i as shown below:

a_i = \frac{y_i - \left[ a_0 + a_1 (x_i - x_0) + a_2 (x_i - x_0)(x_i - x_1) + \cdots + a_{i-1} (x_i - x_0) \cdots (x_i - x_{i-2}) \right]}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})}

Page 29: Polynomial Interpolation and Approximation

Newton Divided Difference: 5/19

The following subroutine takes the a_k's (0 ≤ k ≤ i-1) and the (x_k, y_k)'s (0 ≤ k ≤ i) and computes a_i:

a_i = \frac{y_i - \left[ a_0 + a_1 (x_i - x_0) + a_2 (x_i - x_0)(x_i - x_1) + \cdots + a_{i-1} (x_i - x_0) \cdots (x_i - x_{i-2}) \right]}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})}

SUBROUTINE Term-i(...)
  sum  = a(0)                        ! initialize sum
  prod = 1.0                         ! holds (x(i)-x(0))...(x(i)-x(j-1))
  DO j = 1, i-1                      ! add the terms
    prod = prod * (x(i) - x(j-1))    ! update the product
    sum  = sum + a(j)*prod           ! add the j-th term
  END DO
  a(i) = (y(i) - sum)/(prod*(x(i) - x(i-1)))   ! final touch
END SUBROUTINE

2(i-1) + 1 = 2i-1 multiplications and 1 division are needed.

Page 30: Polynomial Interpolation and Approximation

Newton Divided Difference: 6/19

With subroutine Term-i(...), the following computes a_0, a_1, a_2, ..., a_n one at a time.
Since Term-i(...) uses 2i-1 multiplications for a given i, the total number of multiplications is

\sum_{i=1}^{n} (2i - 1) = 2 \sum_{i=1}^{n} i - \sum_{i=1}^{n} 1 = 2 \cdot \frac{n(n+1)}{2} - n = n^2

Not bad: slightly better than the Lagrange method, which uses n(n+1) multiplications.

a(0) = y(0)
DO i = 1, n
  CALL Term-i(......)   ! compute a(i) from a(0), ..., a(i-1)
END DO
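
As a runnable counterpart (my own Python sketch, not the slides' Fortran pseudocode), the loop below computes each Newton coefficient a_i from the previously computed ones, in the same incremental style as Term-i.

def newton_coeffs_incremental(x, y):
    # a_i = (y_i - [a_0 + a_1(x_i-x_0) + ... ]) / ((x_i-x_0)...(x_i-x_{i-1}))
    n = len(x) - 1
    a = [y[0]]                        # a_0 = y_0
    for i in range(1, n + 1):
        total = a[0]
        prod = 1.0
        for j in range(1, i):
            prod *= (x[i] - x[j - 1])
            total += a[j] * prod
        prod *= (x[i] - x[i - 1])
        a.append((y[i] - total) / prod)
    return a

# Example from the slides: (-1, 1), (0, 2), (2, 10) should give a = [1, 1, 1].
print(newton_coeffs_incremental([-1.0, 0.0, 2.0], [1.0, 2.0, 10.0]))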

Page 31: Polynomial Interpolation and Approximation

Newton Divided Difference: 7/19

Consider three data points (-1, 1), (0, 2), (2, 10). The Newton interpolating polynomial P_2(x) is

P_2(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) = a_0 + a_1 (x + 1) + a_2 (x + 1)\, x

We have x_0 = -1, y_0 = 1, x_1 = 0, y_1 = 2, x_2 = 2, y_2 = 10, so P_2(-1) = 1, P_2(0) = 2, P_2(2) = 10.

a_0 = y_0 = 1

a_1 = \frac{y_1 - a_0}{x_1 - x_0} = \frac{2 - 1}{0 - (-1)} = 1

a_2 = \frac{y_2 - \left[ a_0 + a_1 (x_2 - x_0) \right]}{(x_2 - x_0)(x_2 - x_1)} = \frac{10 - (1 + 1 \times 3)}{3 \times 2} = 1

Page 32: Polynomial Interpolation and Approximation

Newton Divided Difference: 8/19

The nested form should be used to evaluate a Newton interpolating polynomial efficiently. Here are two simple examples:

P_3(x) = a_0 + \{ a_1 + [ a_2 + a_3 (x - x_2) ](x - x_1) \}(x - x_0)
P_4(x) = a_0 + [ a_1 + \{ a_2 + [ a_3 + a_4 (x - x_3) ](x - x_2) \}(x - x_1) ](x - x_0)

The following computes P_n(x) for a given x:

Px = a(n)
DO i = n-1, 0, -1
  Px = a(i) + Px*(x - x(i))
END DO

Number of multiplications: n. The Newton method is faster than the Lagrange method, which requires n(n+1).
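
A Python rendering of this nested Newton evaluation (my sketch):

def newton_eval(a, xs, x):
    # Evaluate a[0] + a[1](x-xs[0]) + ... + a[n](x-xs[0])...(x-xs[n-1])
    # with the nested form, using only n multiplications.
    n = len(a) - 1
    px = a[n]
    for i in range(n - 1, -1, -1):
        px = a[i] + px * (x - xs[i])
    return px

# Example from the slides: a = [1, 1, 1], nodes -1, 0, 2 give P2(2) = 10.
print(newton_eval([1.0, 1.0, 1.0], [-1.0, 0.0, 2.0], 2.0))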

Page 33: Polynomial Interpolation and Approximation

Newton Divided Difference: 9/19

What about adding one more data point (x_{n+1}, y_{n+1})?
P_{n+1}(x) = P_n(x) + a_{n+1} (x - x_0)(x - x_1) \cdots (x - x_n).
Since the a_i's and (x_i, y_i)'s (0 ≤ i ≤ n) are known, using subroutine Term-i() to compute a_{n+1} requires 2(n+1) - 1 = 2n+1 multiplications, similar to the Lagrange method.
Suppose we wish to add (4, 16) to the P_2(x) computed earlier.
Since P_2(4) = 26, we have a_3 in P_3(x) as follows:

a_3 = \frac{y_3 - P_2(x_3)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} = \frac{16 - 26}{(4 - (-1))(4 - 0)(4 - 2)} = -\frac{1}{4}

P_3(x) = 1 + (x + 1) + (x + 1)\, x - \frac{1}{4} (x + 1)\, x (x - 2)

Page 34: Polynomial Interpolation and Approximation

Newton Divided Difference: 10/19

In summary, we have the following:
Issue #1: n^2 multiplications are needed to generate the interpolating polynomial. Not bad!
Issue #2: n multiplications are required to compute P_n(x). Excellent!
Issue #3: 2n+1 multiplications are needed to add a new data point (x_{n+1}, y_{n+1}). Excellent!
Can we make Issues #1 and #3 a little better or easier? Yes, this is the technique of divided differences.

Page 35: Polynomial Interpolation and Approximation

Newton Divided Difference: 11/19

The leading coefficient of a polynomial is the coefficient of its highest degree term.
The leading coefficient of P_n(x) = a_0 + a_1 (x - x_0) + \cdots + a_n (x - x_0)(x - x_1) \cdots (x - x_{n-1}) is a_n.
Since a_n only depends on x_0, x_1, ..., x_n, we use a new symbol for a_n: a_n = f[x_0, x_1, ..., x_n].
Therefore, the P_i(x)'s can be rewritten as:

P_0(x) = f[x_0]
P_1(x) = f[x_0] + f[x_0, x_1](x - x_0)
P_2(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1)

P_n(x) = f[x_0] + \sum_{i=1}^{n} f[x_0, x_1, \ldots, x_i] \left[ \prod_{j=0}^{i-1} (x - x_j) \right]

Page 36: Polynomial Interpolation and Approximation

Newton Divided Difference: 12/19

How can the new symbol help?
Let P_{k-1}(x) be a degree k-1 polynomial interpolating at x_0, ..., x_{k-1} and Q_{k-1}(x) be a degree k-1 polynomial interpolating at x_1, ..., x_k. Consider the following polynomial of degree k:

P_k(x) = \frac{x - x_0}{x_k - x_0}\, Q_{k-1}(x) + \frac{x_k - x}{x_k - x_0}\, P_{k-1}(x)

P_k(x) interpolates at x_0, ..., x_k (i.e., P_k(x_i) = y_i) because P_k(x_0) = P_{k-1}(x_0) = y_0, P_k(x_k) = Q_{k-1}(x_k) = y_k, and for all 1 ≤ i ≤ k-1,

P_k(x_i) = \frac{x_i - x_0}{x_k - x_0}\, Q_{k-1}(x_i) + \frac{x_k - x_i}{x_k - x_0}\, P_{k-1}(x_i) = \frac{x_i - x_0}{x_k - x_0}\, y_i + \frac{x_k - x_i}{x_k - x_0}\, y_i = y_i

Page 37: Polynomial Interpolation and Approximation

Newton Divided Difference: 13/19

Since P_k(x) interpolates at x_0, x_1, ..., x_k, its leading coefficient f[x_0, x_1, ..., x_k] is the sum of the leading coefficients of the two degree-k terms x\,Q_{k-1}(x)/(x_k - x_0) and -x\,P_{k-1}(x)/(x_k - x_0) in

P_k(x) = \frac{x - x_0}{x_k - x_0}\, Q_{k-1}(x) + \frac{x_k - x}{x_k - x_0}\, P_{k-1}(x)

Since the first term's leading coefficient is f[x_1, ..., x_k]/(x_k - x_0) and the second term's is -f[x_0, ..., x_{k-1}]/(x_k - x_0), the leading coefficient of P_k(x) is

f[x_0, x_1, \ldots, x_k] = \frac{1}{x_k - x_0} f[x_1, \ldots, x_k] - \frac{1}{x_k - x_0} f[x_0, \ldots, x_{k-1}] = \frac{f[x_1, \ldots, x_k] - f[x_0, \ldots, x_{k-1}]}{x_k - x_0}

Page 38: Polynomial Interpolation and Approximation

Newton Divided Difference: 14/19

What does the following relation mean?

f[x_0, x_1, \ldots, x_k] = \frac{f[x_1, \ldots, x_k] - f[x_0, \ldots, x_{k-1}]}{x_k - x_0}

It means we have a very simple way to compute all the coefficients (i.e., the a_i's).
The relation shows how two adjacent values, f[x_1, ..., x_k] and f[x_0, ..., x_{k-1}], can be used to compute a new, expanded one, f[x_0, ..., x_k], with two subtractions and one division.

Page 39: Polynomial Interpolation and Approximation

Newton Divided Difference: 15/19

The relation among f[x_1, ..., x_k], f[x_0, ..., x_{k-1}] and f[x_0, ..., x_k] is triangular: f[x_0, x_1, ..., x_{k-1}] and f[x_1, x_2, ..., x_k] together give f[x_0, x_1, ..., x_k].

We may arrange all x_i's and f[x_i]'s in columns 0 and 1, respectively, and take two adjacent f[x_i]'s to compute the f[x_i, x_{i+1}]'s in column 2.
Similarly, column k-1 has all f[ ]'s with k-1 x_i's, and two adjacent ones are used to compute the f[ ]'s with k x_i's. The results go in column k.
The table looks like the following (next slide).

Page 40: Polynomial Interpolation and Approximation

Newton Divided Difference: 16/19

! initially the d(i)'s hold the y(i)'s
! at the end, d(0), d(1), ..., d(n) are the coefficients a(i)
DO j = 1, n              ! column j
  DO i = n, j, -1        ! working upward
    d(i) = (d(i) - d(i-1))/(x(i) - x(i-j))
  END DO                 ! new d(i)'s overwrite old d(i)'s
END DO

For example,

f[x_1, x_2, x_3] = \frac{f[x_2, x_3] - f[x_1, x_2]}{x_3 - x_1}

The divided-difference table (columns 1 through 5; the last entry of row i, f[x_0, ..., x_i], is the coefficient a_i):

x_0   f[x_0]
x_1   f[x_1]   f[x_0,x_1]
x_2   f[x_2]   f[x_1,x_2]   f[x_0,x_1,x_2]
x_3   f[x_3]   f[x_2,x_3]   f[x_1,x_2,x_3]   f[x_0,x_1,x_2,x_3]
x_4   f[x_4]   f[x_3,x_4]   f[x_2,x_3,x_4]   f[x_1,x_2,x_3,x_4]   f[x_0,x_1,x_2,x_3,x_4]
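
A Python sketch of the same in-place divided-difference update (mine, not the slides' code), which overwrites d with the Newton coefficients a_0, ..., a_n:

def divided_difference_coeffs(x, y):
    # After processing column j, d[i] holds f[x_{i-j}, ..., x_i];
    # at the end, d[i] = f[x_0, ..., x_i] = a_i.
    n = len(x) - 1
    d = list(y)                        # column 1: d[i] = f[x_i] = y_i
    for j in range(1, n + 1):          # next column
        for i in range(n, j - 1, -1):  # work upward, bottom row first
            d[i] = (d[i] - d[i - 1]) / (x[i] - x[i - j])
    return d

# Example from the slides: (-1, 1), (0, 2), (2, 10), (4, 16)
# should give a = [1, 1, 1, -1/4].
print(divided_difference_coeffs([-1.0, 0.0, 2.0, 4.0], [1.0, 2.0, 10.0, 16.0]))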

Page 41: Polynomial Interpolation and Approximation

Newton Divided Difference: 17/19

Interpolate (-1, 1), (0, 2), (2, 10) and (4, 16) using the Newton method.

P_3(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + f[x_0, x_1, x_2, x_3](x - x_0)(x - x_1)(x - x_2)

x_i   y_i = f[x_i]   f[x_{i-1}, x_i]   f[x_{i-2}, x_{i-1}, x_i]   f[x_0, x_1, x_2, x_3]
-1    1
 0    2              1
 2    10             4                 1
 4    16             3                 -1/4                       -1/4

The last entry of each row is the coefficient, so f[x_0] = 1, f[x_0, x_1] = 1, f[x_0, x_1, x_2] = 1 and f[x_0, x_1, x_2, x_3] = -1/4.

Page 42: Polynomial Interpolation and Approximation

Newton Divided Difference: 18/19

How many divisions are there?
Starting with the f[x_i]'s, there are n+1 columns to compute, ending with f[x_0, x_1, ..., x_n].
Each entry (horizontal arrow in the table) requires one division.
The second column needs n divisions, the third needs n-1, ..., and the last needs 1. The total is 1 + 2 + ... + n = n(n+1)/2 divisions. This is better than n^2.

x_0   f[x_0]
x_1   f[x_1]   f[x_0,x_1]
x_2   f[x_2]   f[x_1,x_2]   f[x_0,x_1,x_2]
x_3   f[x_3]   f[x_2,x_3]   f[x_1,x_2,x_3]   f[x_0,x_1,x_2,x_3]
x_4   f[x_4]   f[x_3,x_4]   f[x_2,x_3,x_4]   f[x_1,x_2,x_3,x_4]   f[x_0,x_1,x_2,x_3,x_4]

Page 43: Polynomial Interpolation and Approximation

Newton Divided Difference: 19/19

What about adding new data points?
One may use the technique discussed earlier to compute a_{n+1} = f[x_0, x_1, ..., x_{n+1}].
The last row of the table must be saved to use divided differences. Do it yourself!
The number of divisions is n, faster than 2n+1!

In the table below, the row for x_3 is the saved last row; the new data point (x_4, y_4) adds one more row, and f[x_0, x_1, x_2, x_3, x_4] is what we wish to compute:

x_0   f[x_0]
x_1   f[x_1]   f[x_0,x_1]
x_2   f[x_2]   f[x_1,x_2]   f[x_0,x_1,x_2]
x_3   f[x_3]   f[x_2,x_3]   f[x_1,x_2,x_3]   f[x_0,x_1,x_2,x_3]                              <- last row (saved)
x_4   f[x_4]   f[x_3,x_4]   f[x_2,x_3,x_4]   f[x_1,x_2,x_3,x_4]   f[x_0,x_1,x_2,x_3,x_4]     <- new data

Page 44: Polynomial Interpolation and Approximation

Efficiency of Methods: A Summary

The following table summarizes the efficiency of each discussed method, based on the number of multiplications.

                        Naive    Lagrange   Newton   Divided Difference
Computing the ai's      O(n^3)   n(n+1)     n^2      n(n+1)/2
Computing P(x)          n        n(n+1)     n        n
Adding a New Point      O(n^3)   2(n+1)     2n+1     n

Page 45: Polynomial Interpolation and Approximation

Example 1: 1/2

Some functions are difficult to interpolate well with polynomials. The following is for √x at 0, 0.2, 0.4, 0.6, 0.8 and 1, computed with divided differences.

a0    a1          a2          a3         a4          a5
0.0   2.23606798  -3.2746457  4.5598096  -5.1583509  4.8266509

x     P5(x)       sqrt(x)     |error|
0.1   0.28283819  0.31622777  0.03338958
0.3   0.55208708  0.54772256  0.00436452
0.5   0.70514379  0.70710678  0.00196299
0.7   0.83867091  0.83666003  0.00201088
0.9   0.94392207  0.94868330  0.00476123

Page 46: Polynomial Interpolation and Approximation

Example 1: 2/2

Interpolating polynomials can wiggle around f(x).

[Figure: the interpolating polynomial P(x) versus f(x) = √x on [0, 1]; P(x) wiggles about the true curve, most visibly near x = 0.8 to 1.]

Page 47: Polynomial Interpolation and Approximation

Example 2: 1/3

This "wiggling" can be worse. Consider Runge's example, 1/(1+x^2), interpolated at -5, -2.5, 0, 2.5 and 5 with divided differences.

a0            a1            a2            a3             a4
0.038461538   0.039787798   0.061007958   -0.026525199   5.3050398E-3

x     P4(x)         1/(1+x^2)     |error|
±4    -0.37931034   0.058823529   0.43813387
±3    -0.11007958   0.1           0.21007958
±2    0.40053050    0.2           0.20053050
±1    0.83421751    0.5           0.33421751

Note that this function is symmetric about the origin.

Page 48: Polynomial Interpolation and Approximation

Example 2: 2/3

[Figure: f(x) = 1/(1+x^2) and its degree 4 interpolant at the nodes -5, -2.5, 0, 2.5, 5; the interpolant oscillates away from f(x) between the nodes.]

Page 49: Polynomial Interpolation and Approximation

Example 2: 3/3

[Figure: f(x) = 1/(1+x^2) interpolated at x = -5, -3.75, -2.5, -1.25, 0, 1.25, 2.5, 3.75, 5; the oscillations near the ends are even larger.]

The situation can become worse with more points!

Page 50: Polynomial Interpolation and Approximation

Interpolation Error

Let P_n(x) be the polynomial interpolating an unknown function f(x) at x_0, x_1, ..., x_n. Then we have

f(x) - P_n(x) = \frac{f^{(n+1)}(\mu)}{(n+1)!}\, (x - x_0)(x - x_1) \cdots (x - x_n)

where µ lies in the smallest interval containing x_0, x_1, ..., x_n and x (i.e., in the interval [min(x, x_0, ..., x_n), max(x, x_0, ..., x_n)]).
Therefore, if f^{(n+1)}(x) is bounded on the indicated interval, the interpolation error comes mostly from (x - x_0)(x - x_1) ... (x - x_n).

Page 51: Polynomial Interpolation and Approximation

Chebyshev Points (Nodes)

Rather than using equally spaced points, Chebyshev suggested the following:

x_i = \cos\left( \frac{2(n - i) + 1}{2n + 2}\, \pi \right), \quad i = 0, 1, \ldots, n

Since Chebyshev points are in [-1, 1], one must scale them before use.
For example, to use Chebyshev points on an interval [a, b], the scaled x_i is a + ((b - a)/2)(x_i + 1), where x_i is a Chebyshev point.
Use the Chebyshev points x_i to find the y_i's, and interpolate the (x_i, y_i)'s with a polynomial.
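
A small Python sketch (mine, following the formula and scaling above) that generates Chebyshev points on an interval [a, b]:

import math

def chebyshev_points(n, a=-1.0, b=1.0):
    # n+1 Chebyshev points cos((2(n-i)+1) * pi / (2n+2)), i = 0..n,
    # scaled from [-1, 1] to [a, b].
    pts = []
    for i in range(n + 1):
        xi = math.cos((2 * (n - i) + 1) * math.pi / (2 * n + 2))
        pts.append(a + (b - a) / 2.0 * (xi + 1.0))
    return pts

# Example: 5 Chebyshev points on [-5, 5] for Runge's function 1/(1+x^2).
nodes = chebyshev_points(4, -5.0, 5.0)
values = [1.0 / (1.0 + x * x) for x in nodes]
print(nodes)
print(values)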

Page 52: Polynomial Interpolation and Approximation

Example: 1/3

Consider Runge's example, 1/(1+x^2), with 5 Chebyshev points (divided differences).

a0            a1            a2            a3             a4
0.042350069   0.033811409   0.057019170   -0.020896600   4.3943971E-3

x_i   P4(x)         1/(1+x^2)     |error|
±5    0.20351552    0.038461538   0.16505398
±4    -0.14254326   0.058823529   0.20136679
±3    0.080472398   0.1           0.019527602
±2    0.50343312    0.2           0.30343312
±1    0.86267509    0.5           0.36267509

Note that this function is symmetric about the origin.

Page 53: Polynomial Interpolation and Approximation

Example: 2/3

[Figure: degree 4 interpolation of 1/(1+x^2) using equally spaced points versus Chebyshev points.]

Page 54: Polynomial Interpolation and Approximation

Example: 3/3

[Figure: degree 8 interpolation of 1/(1+x^2) using equally spaced points versus Chebyshev points.]

Page 55: Polynomial Interpolation and Approximation

A Short Summary

The Newton divided-difference method is the best of the three discussed here.
Consider the use of Chebyshev points if the degree of an interpolating polynomial is high.
There are more advanced methods: rational functions and spline functions.
A rational function is the quotient of two polynomials.
A spline function is a collection of lower-degree polynomials strung together, acting as a single function.

Page 56: Polynomial Interpolation and Approximation

Polynomial Approximation: 1/8

We may have many data points (x_i, y_i), each of which may be noisy.
In this case, interpolation is not the best approach because (1) the degree can be too high, and (2) interpolating noisy data is useless.
Approximation means the curve follows the trend of the data points closely rather than passing through all of them.
A polynomial P(x) of degree n requires n+1 data points for interpolation; however, n+1 data points can be approximated with a polynomial of degree m < n. Now, m is a parameter!
A simple way is the least-square method.

Page 57: Polynomial Interpolation and Approximation

Polynomial Approximation: 2/8

Suppose we have n+1 data points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n) and wish to find an approximation polynomial P(x) of degree m, where m < n.
Consider the "error" e_i at the point (x_i, y_i):

e_i = y_i - P(x_i)

[Figure: the curve y = P(x) with two sample data points (x_i, y_i) and (x_j, y_j); the vertical gaps e_i = y_i - P(x_i) and e_j = y_j - P(x_j) are the errors at x_i and x_j.]

Page 58: Polynomial Interpolation and Approximation

Polynomial Approximation: 3/8

The least-square method minimizes the sum of squared errors:

\min \sum_{i=0}^{n} e_i^2 = \sum_{i=0}^{n} \left( y_i - P(x_i) \right)^2

How do we find the minimum? You perhaps learned it in calculus and/or linear algebra.
Since the polynomial P(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_m x^m has unknowns a_0, a_1, ..., a_m, minimization means: find a_0, a_1, ..., a_m so that the above sum of squared errors reaches a minimum!

Page 59: Polynomial Interpolation and Approximation

Polynomial Approximation: 4/8

Recall the following fact:

P(x_i) = a_0 + a_1 x_i + a_2 x_i^2 + \cdots + a_m x_i^m
       = \begin{bmatrix} 1 & x_i & x_i^2 & \cdots & x_i^m \end{bmatrix}
         \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_m \end{bmatrix}

We can collect all the x_i's into matrix form, with the left-hand side (n+1)×1, the matrix (n+1)×(m+1), and the coefficient vector (m+1)×1:

\begin{bmatrix} P(x_0) \\ P(x_1) \\ P(x_2) \\ \vdots \\ P(x_n) \end{bmatrix}
=
\begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^m \\
1 & x_1 & x_1^2 & \cdots & x_1^m \\
1 & x_2 & x_2^2 & \cdots & x_2^m \\
\vdots & \vdots & \vdots &        & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^m
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_m \end{bmatrix}

Page 60: Polynomial Interpolation and Approximation

Polynomial Approximation: 5/8

The error terms, also in matrix form, are:

\begin{bmatrix} e_0 \\ e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}
=
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
-
\begin{bmatrix} P(x_0) \\ P(x_1) \\ P(x_2) \\ \vdots \\ P(x_n) \end{bmatrix}
=
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
-
\begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^m \\
1 & x_1 & x_1^2 & \cdots & x_1^m \\
1 & x_2 & x_2^2 & \cdots & x_2^m \\
\vdots & \vdots & \vdots &        & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^m
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_m \end{bmatrix}

If we use E, Y, X and A to denote the error vector, the y vector, the matrix [x_i^j] and the unknown coefficient vector [a_i], the above becomes:

E = Y - X \cdot A

Page 61: Polynomial Interpolation and Approximation

Polynomial Approximation: 6/8

Note that E^T \cdot E is the sum of squared errors:

\sum_{i=0}^{n} e_i^2 = \begin{bmatrix} e_0 & e_1 & e_2 & \cdots & e_n \end{bmatrix}
\begin{bmatrix} e_0 \\ e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}
= E^T E

Plugging the equation E = Y - X \cdot A into the above yields:

\sum_{i=0}^{n} e_i^2 = (Y - X A)^T (Y - X A) = Y^T Y - 2\, (X^T Y)^T A + A^T (X^T X)\, A

Page 62: Polynomial Interpolation and Approximation

Polynomial Approximation: 7/8

Note that E^T \cdot E is the following:

\sum_{i=0}^{n} e_i^2 = Y^T Y - 2\, (X^T Y)^T A + A^T (X^T X)\, A

Differentiating with respect to A yields:

\frac{\partial \left( \sum_{i=0}^{n} e_i^2 \right)}{\partial A} = -2\, (X^T Y) + 2\, (X^T X)\, A

Setting the above to zero and solving for A gives the minimum:

(X^T X)_{(m+1) \times (m+1)} \; A_{(m+1) \times 1} = (X^T Y)_{(m+1) \times 1}

Page 63: Polynomial Interpolation and Approximation

Polynomial Approximation: 8/8

Since (X^T X) A = X^T Y, we have a way of finding the A that minimizes the sum of squared errors:

Read in the (x_i, y_i)'s (0 ≤ i ≤ n) and the degree m
Form matrices X of size (n+1)×(m+1) and Y of size (n+1)×1
Compute (X^T X) of size (m+1)×(m+1) and (X^T Y) of size (m+1)×1
Solve for A from (X^T X) A = X^T Y
The matrix A of size (m+1)×1 gives the coefficients of the polynomial P(x) of degree m

The multiplications used for computing X^T X, X^T Y and A are O(n^3), O(n^2) and O(m^3), respectively. Overall it is O(n^3), because in approximation n > m.
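
The recipe above translates directly into a few lines of Python; this sketch (mine, assuming NumPy, not code from the slides) forms X, builds the normal equations (X^T X)A = X^T Y, and solves for A. The data in the example call are hypothetical, just to show usage.

import numpy as np

def least_squares_poly(x, y, m):
    # Fit a degree-m polynomial to (x_i, y_i) by solving the normal
    # equations (X^T X) A = X^T Y; returns [a_0, a_1, ..., a_m].
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.vander(x, m + 1, increasing=True)   # (n+1) x (m+1), rows [1, x_i, ..., x_i^m]
    return np.linalg.solve(X.T @ X, X.T @ y)   # solve the (m+1) x (m+1) system

# Usage example with made-up data: fit a degree-2 polynomial to 5 points.
a = least_squares_poly([0.0, 1.0, 2.0, 3.0, 4.0], [1.1, 0.9, 2.2, 4.8, 9.1], 2)
print(a)

In practice, numpy.linalg.lstsq or numpy.polynomial.polynomial.polyfit is usually preferred, since those avoid forming X^T X explicitly and are numerically more stable.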

Page 64: Polynomial Interpolation and Approximation

Example: 1/3

Suppose we have 11 data points as shown in the table. Therefore, n = 10.
But degree 10 may be too high for practical purposes.
Least squares with a degree 3 polynomial yields the following result:

f(x) = 3.41285 + 4.85546\, x - 5.43021\, x^2 + 1.08126\, x^3

x_i   y_i
0.0   3.12308121
0.4   5.15545177
0.8   3.98456621
1.2   3.51668167
1.6   1.83907247
2.0   -0.35763431
2.4   -1.48452199
2.8   -1.51532280
3.2   -1.24526966
3.6   1.20831132
4.0   4.96375036

Page 65: Polynomial Interpolation and Approximation

Example: 2/3

This diagram shows the approximation polynomial of degree 3 and the data points. The approximation polynomial follows the shape of the data points closely, but it does not pass through all of them.

Page 66: Polynomial Interpolation and Approximation

Example: 3/3

This diagram shows both the degree 2 and degree 3 polynomials. Degree 2 is obviously inadequate because the curve is too far away from the data points. Degree 4 is a bit better, but the improvement is not significant in this case.

Page 67: Polynomial Interpolation and Approximation

The End