Approximating Sinusoidal Functions with Polynomials


8/18/2019 Approximating Sinusoidal Functions With Polynomials (1)

    Approximating Sinusoidal Functions with Interpolating Polynomials

When most mathematicians think about approximating the values of transcendental functions, particularly the sine and cosine functions, what typically comes to mind are the Taylor polynomial approximations. For instance,

sin x ≈ x = T_1(x)

sin x ≈ x − x^3/3! = T_3(x)

sin x ≈ x − x^3/3! + x^5/5! = T_5(x)

and so forth. Note that x must be in radians. These polynomials have the properties:

1. They agree perfectly with the function at x = 0;

2. The closer that x is to the origin, the better the approximation; the further that x is from the origin, the poorer the approximation;

3. The higher the degree of the approximating polynomial, the better the approximation, meaning that one gets more accuracy over a larger interval centered at the origin.

We show the graphs of the sine function along with the first three Taylor polynomial approximations on the interval [0, π/4] in Figure 1a. The linear approximation is in red; the sine curve and the cubic and fifth degree polynomials are essentially indistinguishable. We zoom in on the right-hand portion of the interval in Figure 1b. From these graphs, it is fairly obvious that these three properties do hold. The authors have provided a dynamic Excel spreadsheet [4] to allow readers and their students to investigate the use of Taylor polynomials to approximate the sine and cosine, as well as the exponential and logarithmic functions. Gordon [2] demonstrates how these Taylor approximation formulas can be found based on simple data analysis without any reference to calculus.

However, it turns out that using Taylor polynomials to approximate the sine and cosine is not necessarily the most effective approach. Instead, we look at a different approach, the idea of polynomial interpolation. There are two primary forms of polynomial interpolation. One of them was developed by Isaac Newton; the other, which is attributed to Lagrange, was actually discovered by Edward Waring in 1779 and


separately by Euler a few years later. Interpolation is based on the problem of finding a polynomial that passes through a set of n + 1 data points; in general, n + 1 points determine a polynomial of degree n (or possibly lower if the points happen to fall onto a lower degree curve). In this article, we consider the Lagrange interpolating formula, which we introduce later. (In contrast, regression analysis seeks to find a polynomial or other function that captures a trend in a set of data, but may not pass through any of the points.)

Before discussing Lagrange interpolation, however, we first consider several important issues.

Using Sinusoidal Behavior First, although we can theoretically obtain any desired degree of accuracy on any finite interval with Taylor polynomials simply by increasing the degree sufficiently, in reality that is not quite so simple. If we want to approximate a function at a point very far from the center at x = 0, we need a very high degree polynomial, and computations with such an approximating polynomial may not be all that trivial. Moreover, there is a major issue in trying to decide what degree polynomial would be needed to achieve a given level of accuracy, say four decimal places or ten decimal places, at all points within a given interval. If we proceed unthinkingly, we likely would start by essentially picking a degree n at random, checking how accurate or inaccurate the results are, and then likely having to increase the degree continually until we reach the desired level of accuracy. It is certainly preferable to decide on the desired level of accuracy first and then determine the degree of the polynomial that gives that accuracy.

Figure 1a: Sine function and its first three Taylor approximations

Figure 1b: Sine function and its first three Taylor approximations on the right-hand portion of the interval


We can circumvent much of this problem by using the periodicity and symmetry properties of the sinusoidal functions. Since both the sine and cosine are periodic with period 2π, all we really need is a polynomial that gives the desired level of accuracy on an interval of length 2π, or better on the interval from −π to π that is centered at x = 0, and the value of either sin x or cos x for any x outside this interval can be found. Moreover, since the sine curve to the left of the origin is the upside-down mirror image of the portion on the right, all we actually need do is find a sufficiently accurate approximating polynomial on [0, π]. Furthermore, because the portion of the sine curve on this interval is symmetric about x = π/2, we really only need something that is sufficiently accurate on [0, π/2]. Finally, because the values of the sine function between π/4 and π/2 are the same as the values of cos x from x = π/4 to x = 0, we really only need an approximation that is sufficiently accurate on the fairly small interval [0, π/4]. The comparable reasoning applies to the cosine function; all that is needed is a sufficiently accurate approximation on [0, π/4].
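In code, the range-reduction argument above looks like the following sketch. This is our own illustration, not code from the article; Python's math.sin and math.cos stand in for whatever kernel approximation one eventually builds on [0, π/4].

```python
import math

def reduce_for_sin(x):
    """Reduce x so that sin(x) = sign * kernel(t) with t in [0, pi/4].

    Returns (sign, kernel, t) where kernel is 'sin' or 'cos'.
    """
    t = math.fmod(x, 2 * math.pi)           # periodicity: sin(x) = sin(x mod 2*pi)
    if t < 0:
        t += 2 * math.pi                    # now 0 <= t < 2*pi
    sign = 1.0
    if t > math.pi:                         # odd symmetry: sin(t) = -sin(t - pi)
        sign, t = -1.0, t - math.pi         # now 0 <= t <= pi
    if t > math.pi / 2:                     # mirror symmetry: sin(t) = sin(pi - t)
        t = math.pi - t                     # now 0 <= t <= pi/2
    if t > math.pi / 4:                     # cofunction: sin(t) = cos(pi/2 - t)
        return sign, 'cos', math.pi / 2 - t
    return sign, 'sin', t                   # already in [0, pi/4]

def sin_reduced(x):
    sign, kernel, t = reduce_for_sin(x)
    f = math.sin if kernel == 'sin' else math.cos
    return sign * f(t)
```

Each branch applies one of the identities cited above, so any real x is handled by a kernel evaluated only on [0, π/4].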

The Error in an Approximation Second, we need to be able to assess just how good an approximation is. We begin by defining the error in an approximation as the difference between the function and its approximating polynomial. For example, with the cubic Taylor approximation to the sine, the error is sin x − (x − x^3/3!). The graph of the error function sin x − (x − x^3/3!) on [0, π/4] is shown in Figure 2a; we observe that it is actually quite small across the interval. In fact, visually, the maximum error is about 0.002 and it occurs at the right endpoint. Similarly, Figure 2b shows the error function associated with the fifth degree approximation, and we observe that its maximum, in absolute value, is about 0.000036. To measure how closely an approximation matches a function on an interval such as [0, π/4], we use the maximum absolute value of the error. This is equivalent to finding the maximum deviation between the function and the approximation on the entire interval. Effectively, the maximum absolute value of the error provides information on the worst-case accuracy of the approximation over the interval.

At a simplistic level, we can use technology to create a table of values of the error function; for instance, if we use 500, say, uniformly spaced x-values and then identify



the largest value of the error in absolute value at these points, we find that the maximum absolute value of the cubic's error is roughly 0.002454. It means that the cubic approximation is equal to the sine function to at least two decimal places on [0, π/4]. (Obviously, it is conceivable that there might be some intermediate point(s) where the error becomes significantly larger, but this is quite unlikely.) One can also apply optimization methods from calculus to find the actual maximum absolute value of the error, if desired, but that would not be appropriate at an algebra or precalculus level. If we wanted greater accuracy, we would use a higher degree polynomial. Thus, the maximum error with the fifth degree Taylor polynomial at the same 500 points is roughly 3.6 × 10^-5, so the fifth degree Taylor polynomial and the sine function are equal to at least four decimal places on [0, π/4].
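The 500-point measurement described above is easy to reproduce. The following sketch is our own; the function names are not from the article.

```python
import math

def taylor_sin(x, degree):
    """Taylor polynomial of sin about 0, keeping terms up to the given odd degree."""
    total, term = 0.0, x
    for k in range(1, degree + 1, 2):        # adds x, then -x^3/3!, then x^5/5!, ...
        total += term
        term *= -x * x / ((k + 1) * (k + 2))
    return total

def max_abs_error(f, g, a, b, n=500):
    """Largest |f(x) - g(x)| over n uniformly spaced points on [a, b]."""
    xs = [a + (b - a) * i / (n - 1) for i in range(n)]
    return max(abs(f(x) - g(x)) for x in xs)

e3 = max_abs_error(math.sin, lambda x: taylor_sin(x, 3), 0.0, math.pi / 4)
e5 = max_abs_error(math.sin, lambda x: taylor_sin(x, 5), 0.0, math.pi / 4)
```

Both maxima land at the right endpoint x = π/4, consistent with property 2 of the Taylor approximations.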

We leave it to the interested reader to conduct a comparable investigation to see how accurate the successive Taylor polynomial approximations

cos x ≈ 1 − x^2/2!

cos x ≈ 1 − x^2/2! + x^4/4!

cos x ≈ 1 − x^2/2! + x^4/4! − x^6/6!

are to the cosine function on the interval [0, π/4].


     

Figure 2a: Error between the sine function and its cubic Taylor approximation

Figure 2b: Error between the sine function and its fifth degree Taylor approximation

     


The Lagrange Interpolating Polynomial As discussed in [3], the Lagrange interpolating polynomial of degree 1 that passes through the two points (x_0, y_0) and (x_1, y_1) is

L_1(x) = y_0 (x − x_1)/(x_0 − x_1) + y_1 (x − x_0)/(x_1 − x_0).

For instance, if the points are (1, 5) and (2, 3), then

L_1(x) = 5 (x − 2)/(1 − 2) + 3 (x − 1)/(2 − 1) = −5(x − 2) + 3(x − 1).

You can easily show that this is equivalent to the point-slope form by multiplying it out.
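As a quick check of the degree-1 formula, a hypothetical helper (our own code) reproduces the line through (1, 5) and (2, 3):

```python
def lagrange_linear(x, p0, p1):
    """Degree-1 Lagrange polynomial through p0 = (x0, y0) and p1 = (x1, y1)."""
    (x0, y0), (x1, y1) = p0, p1
    return y0 * (x - x1) / (x0 - x1) + y1 * (x - x0) / (x1 - x0)
```

Multiplying the two terms out gives −2x + 7, the same line one gets from the point-slope form.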

Similarly, the Lagrange interpolating polynomial of degree 2 that passes through the three points (x_0, y_0), (x_1, y_1), and (x_2, y_2) is

L_2(x) = y_0 (x − x_1)(x − x_2)/[(x_0 − x_1)(x_0 − x_2)] + y_1 (x − x_0)(x − x_2)/[(x_1 − x_0)(x_1 − x_2)] + y_2 (x − x_0)(x − x_1)/[(x_2 − x_0)(x_2 − x_1)].

Notice that this expression is composed of the sum of three distinct quadratic functions. Each component function contains two of the three possible linear factors (x − x_0), (x − x_1), and (x − x_2), so that each component contributes zero to the sum at two of the three interpolating points x = x_0, x = x_1, and x = x_2. At the third point, each component contributes, respectively,


Figure 3: Lagrange interpolating polynomial through the three points (1, 2), (3, 8), and (6, 4), and its three component quadratic functions


y = y_0, y = y_1, and y = y_2. That is, at x = x_0, the second and third terms are zero, so that the only contribution is from the first term, which contributes y_0 to the sum, so L_2(x_0) = y_0, and so on for the other two points. See Figure 3 for the graph of the Lagrange interpolating polynomial through the points (1, 2), (3, 8), and (6, 4), as well as the three component quadratic functions. The authors have created an interactive spreadsheet [5] to allow interested readers and their students to investigate the Lagrange interpolating polynomial, as well as its components, in a dynamic way for any choice of interpolating points.

In general, the n + 1 points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n) determine a unique polynomial of degree at most n. The Lagrange formula for this polynomial consists of a sum of n + 1 polynomial terms of degree n, each involving n of the possible n + 1 linear factors (x − x_0), (x − x_1), ..., (x − x_n).
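The general formula translates directly into a short evaluator. This is our own sketch, not code from the article:

```python
def lagrange(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        basis = 1.0                  # the i-th component, built from n of the n+1 factors
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += ys[i] * basis       # basis is 1 at xs[i] and 0 at every other node
    return total
```

With the points (1, 2), (3, 8), and (6, 4) of Figure 3, the evaluator returns exactly the given y-value at each node, as the component argument above predicts.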

We now apply these ideas to approximate the values of the sine function on the interval [0, π/4] using a quadratic Lagrange polynomial. To do so, consider the three points (0, 0), (π/8, sin(π/8)), and (π/4, sin(π/4)). (Using π/8 has the added advantage that sin(π/8) can be calculated exactly using the half-angle formula.) We construct the associated quadratic interpolating polynomial; since y_0 = 0, the first component drops out, leaving

L_2(x) = sin(π/8) (x − 0)(x − π/4)/[(π/8 − 0)(π/8 − π/4)] + sin(π/4) (x − 0)(x − π/8)/[(π/4 − 0)(π/4 − π/8)]

       ≈ 0.382683 x(x − 0.785398)/[(0.392699)(0.392699 − 0.785398)] + 0.707107 x(x − 0.392699)/[(0.785398)(0.785398 − 0.392699)]

       ≈ −2.481532 x(x − 0.785398) + 2.292637 x(x − 0.392699),


rounded to six decimal places. We show the graph of this quadratic function (in red) along with the sine curve (in blue) on [0, π/4] in Figure 4 and observe that the two are essentially indistinguishable.

A more informative view is the associated error function shown in Figure 5. Notice that the error function appears to oscillate in a somewhat sinusoidal pattern that ranges from about −0.0035 to about 0.0035. In fact, using the same 500 points as before, the largest negative error is −0.003638 and the largest positive error is 0.003467, rounded to six decimal places. Therefore, the maximum deviation is 0.003638. Note that this is only slightly larger (i.e., a slightly worse approximation) than what we achieved with the cubic Taylor approximation. On the other hand, this is accomplished with a quadratic approximation, so the accuracy is quite impressive. In fact, we have the comparable two decimal place accuracy as we had with the cubic Taylor polynomial. We could certainly improve on this level of accuracy by moving to a cubic interpolating polynomial, but will not do so here. Instead, we will attempt to improve on the accuracy by a more insightful approach rather than increasing the level of computation by using a higher degree polynomial.
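Readers can reproduce the error figures quoted above. The following is an illustrative sketch (our own code), using the nodes 0, π/8, and π/4:

```python
import math

def lagrange(x, xs, ys):
    """Evaluate the Lagrange polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += ys[i] * basis
    return total

nodes = [0.0, math.pi / 8, math.pi / 4]
values = [math.sin(t) for t in nodes]

# The same 500 uniformly spaced sample points on [0, pi/4] used in the article.
grid = [math.pi / 4 * i / 499 for i in range(500)]
errors = [math.sin(x) - lagrange(x, nodes, values) for x in grid]
worst_neg, worst_pos = min(errors), max(errors)
```

The error vanishes at the three interpolating points, and the two extremes fall between the nodes, matching the oscillating pattern in Figure 5.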

Notice, also from Figure 5, that the error is zero at each of the three interpolating points, as should be expected, and that the error is very small on either side of these points. This suggests that we are essentially wasting the excellent accuracy just to the left of the origin and just to the right of x = π/4. In turn, this suggests that it might be helpful if we choose slightly different interpolating points for a quadratic interpolating

Figure 4: Sine function and its quadratic interpolating polynomial

Figure 5: Error of the quadratic approximating polynomial to the sine function


     polynomial that give us the advantage of the very small errors on either side of the

    endpoints.

For example, suppose we choose the points x = 0.05 and x = 0.75 along with x = π/8 at the center. (Different results will occur with other choices of the two points.) The resulting quadratic interpolating polynomial is

L_2(x) = sin(0.05) (x − π/8)(x − 0.75)/[(0.05 − π/8)(0.05 − 0.75)] + sin(π/8) (x − 0.05)(x − 0.75)/[(π/8 − 0.05)(π/8 − 0.75)] + sin(0.75) (x − 0.05)(x − π/8)/[(0.75 − 0.05)(0.75 − π/8)]

       ≈ 0.049979 (x − 0.392699)(x − 0.75)/[(0.05 − 0.392699)(0.05 − 0.75)] + 0.382683 (x − 0.05)(x − 0.75)/[(0.392699 − 0.05)(0.392699 − 0.75)] + 0.681639 (x − 0.05)(x − 0.392699)/[(0.75 − 0.05)(0.75 − 0.392699)].

As before, the graphs of this function and the sine curve are indistinguishable between 0 and π/4. However, as seen in Figure 6, the error function with this approximation is considerably smaller than with the previous approximation. In particular, the maximum absolute error at the same 500 points is 0.002551. Presumably, with a little experimentation with the interpolating points, one could almost certainly improve on this further.
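The same computation with the shifted points confirms the improvement (again our own sketch, not the article's code):

```python
import math

def lagrange(x, xs, ys):
    """Evaluate the Lagrange polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += ys[i] * basis
    return total

# Endpoints pulled inward to 0.05 and 0.75, with pi/8 kept at the center.
nodes = [0.05, math.pi / 8, 0.75]
values = [math.sin(t) for t in nodes]

grid = [math.pi / 4 * i / 499 for i in range(500)]
max_err = max(abs(math.sin(x) - lagrange(x, nodes, values)) for x in grid)
```

The maximum now sits below the 0.003638 achieved with the endpoints at 0 and π/4, because the small-error regions around the outer nodes cover the ends of the interval.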

    From a pedagogical standpoint, another advantage to using interpolating

     polynomials to approximate the sinusoidal functions instead of Taylor polynomials is that

Figure 6: Error of the quadratic approximating polynomial to the sine function


the latter require the use of radians while interpolating polynomials can be used with either radians or degrees. In some ways, degrees might make classroom investigations a little simpler for many students. For instance, since we restrict our attention to the interval [0°, 45°], the center is x = 22.5°, and so it is slightly easier to examine what happens to the level of accuracy when the endpoints are chosen symmetrically, say x = 3° and x = 42°. The resulting Lagrange polynomial is then

L_2(x) = sin(3°) (x − 22.5)(x − 42)/[(3 − 22.5)(3 − 42)] + sin(22.5°) (x − 3)(x − 42)/[(22.5 − 3)(22.5 − 42)] + sin(42°) (x − 3)(x − 22.5)/[(42 − 3)(42 − 22.5)]

       ≈ 0.052336 (x − 22.5)(x − 42)/[(3 − 22.5)(3 − 42)] + 0.382683 (x − 3)(x − 42)/[(22.5 − 3)(22.5 − 42)] + 0.669131 (x − 3)(x − 22.5)/[(42 − 3)(42 − 22.5)].

Again, the graphs of this approximating polynomial and the sine curve are indistinguishable. The corresponding error function is shown in Figure 7. The maximum absolute error in the approximation over the same 500 points is now 0.00267, which is not quite as good as our previous attempt. Obviously, if we change the interpolating points, we will get other approximating polynomials and it is likely that some of them will give better results. The authors have also created an interactive spreadsheet, available from the NCTM website, that allows interested readers and their students to investigate dynamically the way that the Lagrange interpolating polynomial approximates the sine function in either radians or degrees for any choice of the interpolating points and see the effects, both graphically and numerically.
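A degree-based version of the computation is equally short. This sketch is ours; the exact maximum error obtained depends on the precise endpoint choice and on rounding, so we only check that the polynomial matches the sine at the nodes and that the error stays small:

```python
import math

def lagrange(x, xs, ys):
    """Evaluate the Lagrange polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += ys[i] * basis
    return total

# Nodes in degrees; the sine values are computed by converting to radians.
deg_nodes = [3.0, 22.5, 42.0]
values = [math.sin(math.radians(t)) for t in deg_nodes]

grid = [45.0 * i / 499 for i in range(500)]
max_err = max(abs(math.sin(math.radians(x)) - lagrange(x, deg_nodes, values))
              for x in grid)
```

Because the interpolation variable is just a label for the abscissa, nothing in the Lagrange construction cares whether the nodes are measured in radians or degrees.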

Figure 7: Error of the quadratic approximating polynomial to the sine function in degrees


We suggest that interested readers can use these kinds of investigations as very valuable classroom projects for their students. Students can be tasked with selecting other possible interpolating points to see how small they can make the maximum absolute error. Many students tend to respond to such investigations as a challenge to get the best possible result. Also, a series of comparable investigations can be conducted to approximate the values of the cosine function, but we will not go into that here. Instead, we leave that for the interested readers and their students. It makes a wonderful classroom activity or individual or group exploration. The authors have created an interactive spreadsheet so that readers can investigate dynamically the way that the Lagrange interpolating polynomial approximates the cosine function in radians or degrees for any choice of the interpolating points.

The Behavior of the Error Function We next consider the behavior patterns of the error function. Look at Figure 5, which shows the error function associated with the first quadratic approximation to the sine function. The shape of the curve is reminiscent of a cubic polynomial. Realize that the quadratic is based on three interpolating points, (x_0, y_0), (x_1, y_1), and (x_2, y_2). At each of these points, there is exact agreement between the function and the quadratic interpolating polynomial, so that the error must be zero. Consequently, the error function will have three real zeros, so that the appearance of a cubic pattern is not coincidental. Similarly, Figure 8 shows the error function associated with a cubic approximation to the sine interpolating the four points x = 0.03, x = 0.24, x = 0.54, and x = 0.76, and its shape is suggestive of a quartic polynomial, which can be investigated using the interactive spreadsheet that approximates the sine function with Lagrange interpolating polynomials.

Figure 8: Error of the cubic approximating polynomial to the sine function


In general, given n + 1 interpolating points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n), the error must be zero at each x_i. Thus, the error function must contain n + 1 linear factors (x − x_i), and hence must contain the polynomial (x − x_0)(x − x_1)⋯(x − x_n). In fact, there is a formula for the error associated with the interpolating polynomial L_n(x) based on the n + 1 interpolating points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n):

E_n(x) = sin x − L_n(x) = [sin^(n+1)(ξ)/(n + 1)!] (x − x_0)(x − x_1)⋯(x − x_n),

where ξ is some real number between 0 and π/4, in this case, and it depends on x_0, ..., x_n and x; also, sin^(n+1) indicates the (n + 1)st derivative of the sine function. While we will not go into the details here, interested readers are referred to [1].

However, with the interactive spreadsheets, we can search for the best possible quadratic (or cubic) interpolating polynomials based on three (or four) points. In this case, we want the maximum absolute error of an approximation to be as small as possible. In the process of finding the quadratic interpolating polynomials with the smallest error, we find that the maximum absolute error tends to be small if the error is oscillatory and somewhat evenly distributed throughout the interval of approximation. This is true for both the sine and cosine functions, as well as for any number of interpolating points. This again is not coincidental. The polynomial component (x − x_0)(x − x_1)⋯(x − x_n) in the error formula E_n(x) not only dictates the shape of the error function, but also gives us a way to minimize the error in interpolation. Since there is no explicit way to represent the dependence of ξ on x_0, ..., x_n and x, we only seek to


minimize the maximum value of (x − x_0)(x − x_1)⋯(x − x_n). Notice that this may not give the best approximation, since we do not know the value of ξ and hence the value of sin^(n+1)(ξ), but it does give a very good approximation.

The issue of finding the interpolating points that minimize the maximum of the product (the so-called MiniMax problem) has been studied extensively. The best points to use are known as the Chebyshev nodes (see, for instance, [2]), for which

max over 0 ≤ x ≤ π/4 of |(x − x_0)(x − x_1)⋯(x − x_n)| = 2(π/16)^(n+1)

is minimal. For the quadratic interpolating polynomial on the interval [0, π/4], the Chebyshev nodes are

x_0 = (π/8)[1 + cos(π/6)] ≈ 0.73,  x_1 = (π/8)[1 + cos(3π/6)] ≈ 0.39,  and  x_2 = (π/8)[1 + cos(5π/6)] ≈ 0.05,

rounded to two decimal places. The associated error function is shown in Figure 9. The maximum absolute error at the same 500 points is 0.002393.

By far, this is the best result we have obtained with a quadratic. Likewise, we use the four Chebyshev nodes

x_0 = (π/8)[1 + cos(7π/8)] ≈ 0.03,  x_1 = (π/8)[1 + cos(5π/8)] ≈ 0.24,  x_2 = (π/8)[1 + cos(3π/8)] ≈ 0.54,  and  x_3 = (π/8)[1 + cos(π/8)] ≈ 0.76

to obtain the cubic interpolating polynomial for a very good approximation on [0, π/4]. The corresponding maximum absolute error at the same 500 points is about 0.00006, so it gives at least four decimal place accuracy. Figure 10 shows its error function.
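Both Chebyshev constructions can be checked numerically. This is our own sketch, using the standard Chebyshev node formula for an interval [a, b]:

```python
import math

def lagrange(x, xs, ys):
    """Evaluate the Lagrange polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += ys[i] * basis
    return total

def chebyshev_nodes(n, a=0.0, b=math.pi / 4):
    """The n Chebyshev nodes on [a, b]."""
    mid, half = (a + b) / 2, (b - a) / 2
    return [mid + half * math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def max_abs_err(nodes):
    values = [math.sin(t) for t in nodes]
    grid = [math.pi / 4 * i / 499 for i in range(500)]
    return max(abs(math.sin(x) - lagrange(x, nodes, values)) for x in grid)

quad_err = max_abs_err(chebyshev_nodes(3))   # quadratic: three nodes
cubic_err = max_abs_err(chebyshev_nodes(4))  # cubic: four nodes
```

For n = 3 the node formula reduces to (π/8)[1 + cos((2k + 1)π/6)], exactly the three values listed above, and the cubic's error drops well below the quadratic's.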

Figure 9: Error of the quadratic approximating polynomial at the Chebyshev nodes to the sine function


We summarize the error results in Table 1. Clearly, the quadratic interpolating polynomial with Chebyshev nodes is a better fit to the sine function than the cubic Taylor polynomial. Similarly, the maximum error with the cubic Taylor polynomial is much larger than that for the cubic interpolating polynomial with Chebyshev nodes. Thus, the interpolation approach lets us use a lower degree polynomial to obtain a better approximation compared to Taylor polynomials. Moreover, if we use the same degree for both the interpolating and Taylor polynomials, the interpolating polynomial produces a much better approximation, although admittedly hand or calculator calculations with Lagrange interpolating polynomials are more complicated than those with Taylor polynomials that are centered at the origin.

Also, the error formula for the interpolating polynomial

E_n(x) = sin x − L_n(x) = [sin^(n+1)(ξ)/(n + 1)!] (x − x_0)(x − x_1)⋯(x − x_n)

can be used to estimate the maximum absolute error. For example, for the quadratic Chebyshev polynomial, since |sin'''(ξ)| = |cos(ξ)| ≤ 1 for 0 ≤ ξ ≤ π/4 and the Chebyshev nodes make the maximum of the product equal to 2(π/16)^3, the corresponding absolute error is capped by

|E_2(x)| ≤ [max |sin'''(ξ)|/3!] · 2(π/16)^3 ≤ 2(π/16)^3/3! ≈ 0.002523.

Similarly, since |sin''''(ξ)| = |sin(ξ)| ≤ sin(π/4) for 0 ≤ ξ ≤ π/4, the maximum absolute error for the cubic Chebyshev polynomial is capped by

|E_3(x)| ≤ [max |sin''''(ξ)|/4!] · 2(π/16)^4 ≤ [sin(π/4)/4!] · 2(π/16)^4 ≈ 0.000088.

This estimate lets us determine the lowest degree polynomial that gives the desired accuracy.
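These caps are straightforward to evaluate. A sketch (our code; the factor 2((b − a)/4)^(n+1) is the minimized maximum of the product on the interval):

```python
import math

def chebyshev_error_cap(n, deriv_bound, a=0.0, b=math.pi / 4):
    """Cap on |E_n| from the error formula with Chebyshev nodes on [a, b]:
    deriv_bound / (n + 1)! times the minimized product maximum 2*((b - a)/4)**(n + 1)."""
    return deriv_bound * 2 * ((b - a) / 4) ** (n + 1) / math.factorial(n + 1)

quad_cap = chebyshev_error_cap(2, 1.0)                     # |sin'''| = |cos| <= 1 on [0, pi/4]
cubic_cap = chebyshev_error_cap(3, math.sin(math.pi / 4))  # |sin''''| = |sin| <= sin(pi/4)
```

Both caps sit just above the measured 500-point maxima, as the error formula guarantees.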

Therefore, when we think about approximating the values of transcendental functions, we should consider the interpolation approach.

Table 1. Comparison of Errors

Method                                    Maximum Absolute Error
T_3 (3rd degree Taylor at x = 0)          0.002454


T_5 (5th degree Taylor at x = 0)          0.000036
L_2 (with Chebyshev nodes)                0.002393
L_3 (with Chebyshev nodes)                0.00006

References

[1] Burden, R., and J. Faires, Numerical Analysis, 9th Edition, Brooks/Cole, 2010.

[2] Gordon, S. "Approximating Sinusoidal Functions with Polynomials." The Mathematics Teacher, 104 (May 2011): 676-682.

[3] Author.