LAGRANGE INTERPOLATION - DIVERGENCE
G.B. Baker and T.M. Mills
(Received April 1986, revised November 1987)
1. The interpolation problem
Interpolation deals with the problem of fitting curves or surfaces
through data points.
It is important to distinguish the problem from the statistical regression
problem. This statistical problem is illustrated by the classical
problem of finding the line of best fit to data points. Clearly the experimental
scientist does not expect the line to pass through the points but
merely that the line is close to the points. The scientist admits that there
is a certain amount of error associated with the data points and hence it is
not proper to expect the curve to pass through the points.
Therefore, the interpolation problem must deal with the case where there
is no error.
Q. Where does one find measurements with no error?
A. In the Mathematics Department.
There is a great deal of truth in this facetious answer. Interpolation
arises naturally in the use of mathematical tables. Lagrange first encountered
the problem in trying to create interpolation methods for astronomical
tables. Actuarial tables provided a number of interpolation problems for
actuaries: it is no coincidence that J. Steffensen, the author of a
classical text [28] on the subject, was an actuary. In statistical
quality control, there are still many tables which are very cumbersome to
use on the shop floor and which should be replaced by formulae which could
be found by interpolation methods.
Numerical integration is another area which requires the use of interpolation
methods. To evaluate

    I(f) = \int_a^b f(x)\,dx ,

the values of f(x) for various x are given exactly, and used to define a
function p(x) which interpolates f so that I(p) is used to approximate I(f) .
Math. Chronicle 17(1988), 1 - 18 .
Newton used interpolation methods for numerical integration in an
interesting manner. To test his interpolation formula, he let

    I(t) = \int_0^1 (1 - x^2)^t\,dx .

He knew the values of I(0) , I(1) , I(2) , ... from his study of integration.
Using these values he estimated I(1/2) by interpolation and compared the
estimate with the exact value, which can be calculated by elementary mensuration.
Interpolation is still used extensively in numerical integration methods.
The classic text by P.J. Davis and P. Rabinowitz [7] gives many relevant
details and references.
The finite element method for solving differential equations uses interpolation
techniques extensively. The following problem arose in the State
Electricity Commission of Victoria. To study the effect of certain vibrations
on an impeller blade which was fixed to a spinning wheel, a blade was detached
and set up on a concrete slab. To construct a mathematical model of the blade,
certain measurements were made and the three dimensional co-ordinates of a
large number of points on the surface of the blade were fed into the computer.
An interpolation method was then used to construct a surface which represented
the blade.
The three problems (use of mathematical tables, numerical integration,
and the finite element method) were chosen to illustrate the uses of interpolation
in applied mathematics. Perhaps one could superficially assess the
importance of the study of interpolation by listing mathematicians who have
devoted their energies to this area: J. Wallis, I. Newton, J.L. Lagrange,
P.S. Laplace, L. Euler, A.M. Legendre, C.F. Gauss, F.W. Bessel, A. Cauchy,
Ch. Hermite, P.L. Chebyshev, S.N. Bernstein, G.D. Birkhoff, L. Fejér,
G. Pólya, G. Szegő, A. Zygmund, P. Turán, P. Erdős, C. de Boor, P.J. Davis
are on the list.
Having discussed the place of interpolation in mathematics we now turn
to giving a modern mathematical setting for this old problem.
2. A modern setting for an old problem
Let I be the interval [-1,1] and let M be the triangular matrix of points

    M = ( x_{k,n} : k = 1, 2, ..., n ; n = 1, 2, 3, ... )

where

    1 \ge x_{1,n} > x_{2,n} > \dots > x_{n,n} \ge -1 .

If f(x) is a real-valued function defined on I then there is a unique
polynomial L_{n-1}(f;x) such that

    (i) the degree of L_{n-1}(f;x) does not exceed n - 1 ,

and (ii) L_{n-1}(f;x_{i,n}) = f(x_{i,n}) , i = 1, 2, ..., n .

The sequence {L_0(f;x) , L_1(f;x) , L_2(f;x) , ...} is the sequence of Lagrange
interpolation polynomials corresponding to the function f(x) and based on
the system of nodes M . Since L_{n-1}(f;x) and f(x) agree for n
distinct values of x , the approximation theorist is interested in the
question, "Does L_{n-1}(f;x) converge to f(x) as n tends to infinity?"
Before we answer this question let us become familiar with the formula
for these interpolation polynomials. We may represent L_{n-1}(f;x) in various
ways, but for our purposes, Lagrange's formula will be the most useful.
We write

    L_{n-1}(f;x) = \sum_{k=1}^{n} f(x_k) l_k(x)

where l_k(x) is the polynomial of degree n - 1 such that l_k(x_j) = \delta_{kj} ,
j = 1 , 2 , ... , n . This representation shows that L_{n-1} may be
regarded as a linear operator acting on the function f .
(Here, and elsewhere when there is no confusion, we write x_j = x_{j,n} .)
From the uniqueness of the interpolation polynomial we can deduce that,
if p(x) is a polynomial of degree n - 1 or less, then L_{n-1}(p;x) = p(x) .
In particular we see that

    \sum_{k=1}^{n} l_k(x) = 1

by using the polynomial p(x) = 1 . This is a very useful result because
one of its consequences is

    L_{n-1}(f;x) - f(x) = \sum_{k=1}^{n} f(x_k) l_k(x) - f(x) \sum_{k=1}^{n} l_k(x)
                        = \sum_{k=1}^{n} ( f(x_k) - f(x) ) l_k(x)

and we have a nice representation of the error.
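The fundamental polynomials l_k and the formula L_{n-1}(f;x) = \sum_k f(x_k) l_k(x) are straightforward to compute directly. The following Python sketch (ours, for illustration only; the nodes and test point are arbitrary choices) evaluates the interpolant and checks the identity \sum_k l_k(x) = 1 numerically.

```python
from math import prod

def l_k(nodes, k, x):
    """Fundamental polynomial: l_k(x_j) = 1 if j == k, else 0."""
    xk = nodes[k]
    return prod((x - xj) / (xk - xj) for j, xj in enumerate(nodes) if j != k)

def lagrange(nodes, values, x):
    """L_{n-1}(f; x) = sum_k f(x_k) * l_k(x)."""
    return sum(v * l_k(nodes, k, x) for k, v in enumerate(values))

nodes = [-1.0, -0.5, 0.5, 1.0]          # n = 4 nodes in [-1, 1]
values = [x**2 for x in nodes]          # f(x) = x^2, degree 2 <= n - 1

# L_3 reproduces any polynomial of degree <= 3 exactly, and sum_k l_k = 1.
print(lagrange(nodes, values, 0.3))                  # 0.09, up to rounding
print(sum(l_k(nodes, k, 0.3) for k in range(4)))     # 1.0, up to rounding
```

Since f here is itself a polynomial of degree at most n - 1, the uniqueness argument above guarantees the interpolant reproduces it everywhere, not just at the nodes.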
An important quantity is the Lebesgue function

    \lambda_n(x) := \sum_{k=1}^{n} |l_k(x)|

and related to this is the Lebesgue constant

    \lambda_n := max{ \lambda_n(x) : -1 \le x \le 1 }

which is the norm of the operator

    L_{n-1} : C(I) \to C(I) .

We shall have more to say about \lambda_n(x) and \lambda_n later.
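The Lebesgue function and constant can be estimated numerically by dense sampling. The sketch below (an illustration of the definitions above, not part of the paper; the sampling grid is an arbitrary choice) shows that \lambda_n(x_k) = 1 at every node, while between nodes \lambda_n grows rapidly with n for equidistant nodes.

```python
from math import prod

def lebesgue_function(nodes, x):
    """lambda_n(x) = sum_k |l_k(x)|."""
    return sum(abs(prod((x - xj) / (xk - xj)
                        for j, xj in enumerate(nodes) if j != k))
               for k, xk in enumerate(nodes))

def lebesgue_constant(nodes, samples=2000):
    """Estimate lambda_n = max of lambda_n(x) on [-1, 1] by sampling."""
    return max(lebesgue_function(nodes, -1 + 2 * i / samples)
               for i in range(samples + 1))

for n in (5, 10, 15):
    nodes = [-1 + 2 * k / (n - 1) for k in range(n)]
    # At any node, lambda_n(x_k) = 1; between nodes it can be much larger.
    print(n, lebesgue_function(nodes, nodes[0]),
          round(lebesgue_constant(nodes), 1))
```

The growth of the sampled constant for equidistant nodes is a first numerical hint of the divergence results discussed in the following sections.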
3. Some early examples of divergence
In 1896 Ch. Meray [18] published an interesting paper which studies
the error

    max{ |L_{n-1}(f;x) - f(x)| : -1 \le x \le 1 } .

His results were independently published later by C. Runge [26] .
Their fundamental result is as follows.
For convenience, consider I to be the interval [-5,5] .
We choose the nodes in M to be equidistantly spaced in I , and let
f(x) = 1/(1+x^2) . Then L_{n-1}(f;x) does not converge to f(x) uniformly
in I . In fact the sequence { \|L_{n-1}(f)\| : n = 1 , 2 , 3 , ... } is
unbounded. We can see from Figures 1 and 2 that there is a point \xi
such that |L_{n-1}(f;x) - f(x)| is large for |x| > \xi and small for |x| < \xi .
Here we have a situation where the nodes are equidistantly spaced, and
the function being interpolated is infinitely differentiable in I , but
the approximation provided by the interpolation polynomials is very poor.
A depressing find for the working calculator, a fascinating twist for the
pure mathematician. Depending on your fancy, there are more depressing/
fascinating matters to come.
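The Runge-Meray example is easy to reproduce numerically. The sketch below (ours, illustrative only; the 401-point error grid and the values of n are arbitrary choices) interpolates f(x) = 1/(1+x^2) at equidistant nodes on [-5,5] and records the maximum error over the grid, which grows as n grows.

```python
def lagrange(nodes, values, x):
    """Evaluate L_{n-1}(f; x) from Lagrange's formula."""
    total = 0.0
    for k, xk in enumerate(nodes):
        lk = 1.0
        for j, xj in enumerate(nodes):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        total += values[k] * lk
    return total

def f(x):
    return 1.0 / (1.0 + x * x)

errors = {}
for n in (5, 10, 15):
    nodes = [-5 + 10 * k / (n - 1) for k in range(n)]
    values = [f(x) for x in nodes]
    grid = [-5 + 10 * i / 400 for i in range(401)]
    # Maximum error over the grid; the worst errors sit near the endpoints.
    errors[n] = max(abs(lagrange(nodes, values, x) - f(x)) for x in grid)
    print(n, errors[n])
```

Adding more equidistant nodes makes the approximation worse near the ends of the interval, exactly as the Runge-Meray result predicts.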
It seems that Meray may have suspected this in 1896 when he wrote -

    "Mais je viens d'apercevoir une infinité d'autres exemples du même accident."

("But I have just noticed an infinity of other examples of the same accident.")
4. Later results concerning divergence
In 1914, G. Faber [11] considered the Lagrange interpolation polynomials
for an arbitrary matrix of nodes M . In short his result proves
that for each M one can find a function f such that L_{n-1}(f;x) does not
converge to f(x) uniformly on I = [-1,1] as n increases without bound.
His proof utilizes \lambda_n , the norm of the operator L_{n-1} .
Recall that

    \lambda_n(x) = \sum_{k=1}^{n} |l_{k,n}(x)| = \sum_{k=1}^{n} |l_k(x)| ,

    \lambda_n = max{ \lambda_n(x) : -1 \le x \le 1 } .
Faber proceeds as follows.

Lemma. Given arbitrary distinct points x_1 , x_2 , ... , x_n in I ,
there exists a polynomial P with degree n - 1 or less such that

    |P(x_i)| \le 8\sqrt{\pi} , i = 1 , 2 , ... , n

and, for some a \in I , P(a) > ln n .

Theorem. (G. Faber) \lambda_n > (ln n)/(8\sqrt{\pi}) .
Proof. Choose P as in the Lemma. Then

    P(x) = \sum_{k=1}^{n} P(x_k) l_k(x)

and hence

    |P(x)| \le \sum_{k=1}^{n} |P(x_k)| \cdot |l_k(x)| \le 8\sqrt{\pi}\, \lambda_n(x) .

Thus

    \lambda_n(x) \ge |P(x)| / (8\sqrt{\pi})

and so

    \lambda_n \ge \lambda_n(a) \ge P(a)/(8\sqrt{\pi}) > (ln n)/(8\sqrt{\pi}) .
Corollary. There exists a function f \in C([-1,1]) such that
\|L_{n-1}(f) - f\| does not tend to 0 as n \to \infty .

Proof. This follows immediately from the Theorem and the Uniform Boundedness
Principle.
The Runge-Meray example showed how bad these interpolation polynomials
may be for a special matrix of nodes but the Faber theorem shows that the
divergence phenomenon occurs with all M . However, in the Runge-Meray
example at least the bad approximation is restricted to the extremities of
the interval: near the middle the approximation is quite good. In 1918
S.N. Bernstein [2] shattered any comfort which this last observation may
bring.
Theorem. (S.N. Bernstein) Let f(x) = |x| , x \in I , and suppose that the
nodes of interpolation are equally spaced in I . Then for 0 < |x| < 1 ,
L_{n-1}(f;x) does not converge to f(x) as n \to \infty .

The situation described here is worse than previous results suggest.
Bernstein's proof of this result is short but rather complicated. It would
be interesting to develop a proof of this result which is more straightforward.
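Bernstein's phenomenon can also be observed numerically. The sketch below is ours, not Bernstein's proof: the grid and the values of n are arbitrary choices, and it merely interpolates f(x) = |x| at equally spaced nodes and watches the maximum error refuse to die out as n increases.

```python
def lagrange(nodes, values, x):
    """Evaluate L_{n-1}(f; x) from Lagrange's formula."""
    total = 0.0
    for k, xk in enumerate(nodes):
        lk = 1.0
        for j, xj in enumerate(nodes):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        total += values[k] * lk
    return total

errors = {}
for n in (5, 10, 20):
    nodes = [-1 + 2 * k / (n - 1) for k in range(n)]
    values = [abs(x) for x in nodes]
    grid = [-1 + 2 * i / 500 for i in range(501)]
    # The interpolant matches |x| at the nodes but oscillates between them.
    errors[n] = max(abs(lagrange(nodes, values, x) - abs(x)) for x in grid)
    print(n, errors[n])
```

Even though |x| is continuous (indeed Lipschitz), the error for equally spaced nodes does not shrink; by contrast, the interpolant agrees with |x| exactly at every node, as it must.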
5. Chebyshev nodes
If we define the matrix of nodes by

    x_{k,n} = cos( (2k-1)\pi / (2n) ) , k = 1(1)n , n \ge 1 ,

then we call these nodes "Chebyshev nodes" because they are the zeros of the
Chebyshev polynomial

    T_n(x) = cos(n arccos x) .

In this case we denote the matrix M by T .

These nodes do not strike the outsider as natural in any way. Let us
explain the importance of these nodes by describing two properties of this
node system.

First, we know that, for any matrix M ,

    \lambda_n > (ln n)/(8\sqrt{\pi}) .

Now, for the matrix T , we have (according to S.N. Bernstein [3]) ,

    \lambda_n < 8 + (4/\pi) ln n .

So T is nearly optimal: that is, if Lagrange interpolation is to be used
then the Chebyshev nodes are close to best. For this reason, T has
received a considerable amount of attention in the mathematical literature.

Second, if f \in C(I) then we associate with f its Fourier-Chebyshev
expansion:

    f(x) ~ (1/2) a_0(f) + \sum_{j=1}^{\infty} a_j(f) T_j(x) .
Let S_n(f;x) = (1/2) a_0(f) + \sum_{j=1}^{n-1} a_j(f) T_j(x) . If one replaces the
integrals a_j(f) by certain Riemann sums based on the points \theta_k = (2k-1)\pi/(2n) ,
k = 1 , 2 , ... , n , then S_n(f;x) becomes L_{n-1}(f;x) . So L_{n-1}(f;x)
is approximately equal to S_n(f;x) . Herein lies an important link
between the study of interpolation polynomials and Fourier series. This
link is fully exploited in the text by A. Zygmund [31] .
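Bernstein's bound \lambda_n < 8 + (4/\pi) ln n for the Chebyshev matrix T can be checked numerically. The sketch below (ours, illustrative only; the sampling grid is an arbitrary choice, so the computed value slightly underestimates the true \lambda_n) compares the sampled Lebesgue constant of the Chebyshev nodes with the bound.

```python
from math import cos, pi, log

def lebesgue_constant(nodes, samples=2000):
    """Sampled estimate of lambda_n = max over [-1,1] of sum_k |l_k(x)|."""
    best = 0.0
    for i in range(samples + 1):
        x = -1 + 2 * i / samples
        s = 0.0
        for k, xk in enumerate(nodes):
            lk = 1.0
            for j, xj in enumerate(nodes):
                if j != k:
                    lk *= (x - xj) / (xk - xj)
            s += abs(lk)
        best = max(best, s)
    return best

for n in (5, 10, 20, 40):
    # Chebyshev nodes: x_{k,n} = cos((2k-1)*pi/(2n)), k = 1, ..., n.
    cheb = [cos((2 * k - 1) * pi / (2 * n)) for k in range(1, n + 1)]
    est = lebesgue_constant(cheb)
    print(n, round(est, 3), "<", round(8 + (4 / pi) * log(n), 3))
```

The sampled constants grow only logarithmically with n, in sharp contrast to the rapid growth seen for equidistant nodes.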
Having established the reason for studying Chebyshev nodes we now try
to find if they behave better than equidistant nodes. A general folk-theorem
may be stated as follows:
If you can prove a general divergence theorem for the matrix T then
the theorem is probably true for all matrices M .
In 1935, G. Grünwald [13] gave us the first "bad" news concerning T .

Theorem. (G. Grünwald) Let M = T . There exists a function f \in C(I)
such that, for almost all x \in I ,

    lim sup_{n\to\infty} |L_{n-1}(f;x)| = \infty .

In the next year J. Marcinkiewicz [17] and G. Grünwald [14] improved
this result with the following.

Theorem. (G. Grünwald, J. Marcinkiewicz) Let M = T . There exists a
function f \in C(I) such that, for all x \in I ,

    lim sup_{n\to\infty} |L_{n-1}(f;x)| = \infty .
In 1937 P. Erdős and P. Turán [9] wrote the first of numerous papers
dealing with interpolation. In this paper they state an interesting divergence
result dealing with averages of Lagrange interpolation polynomials.
Considering the case M = T , Erdős and Turán pursued the analogy between
Fourier series and interpolating polynomials. Just as L. Fejér considered

    \sigma_n(f;x) = ( S_0(f;x) + ... + S_{n-1}(f;x) )/n ,

Erdős and Turán considered

    \sigma_n(f;x) = ( L_0(f;x) + ... + L_{n-1}(f;x) )/n .

Concerning these means, they proved the following result.

Theorem. (P. Erdős, P. Turán) There is a function f \in C(I) such that
the sequence { \sigma_n(f;0) , n = 1 , 2 , ... } is unbounded.
The 1930's were heady days for those interested in divergence of interpolation
methods. In subsequent decades Erdős, Turán and Grünwald generated
many more deep results. A considerable amount of effort has been devoted to
the study of \lambda_n and \lambda_n(x) in the case when M = T . Asymptotic expansions,
estimates of best constants, and studies of monotonicity of various
related functions have been studied by both pure mathematicians and numerical
analysts.
6. Lagrange interpolation and projections
We have referred to the fact that L_{n-1} may be regarded as a linear
operator. Specifically, let us write

    L_{n-1} : C(I) \to \Pi_{n-1} ,

where \Pi_{n-1} is the space of polynomials of degree n - 1 or less.
From what we have said earlier, we know that if p \in \Pi_{n-1} , then
L_{n-1}(p) = p : that is, L_{n-1} is an example of a projection of C(I) onto
\Pi_{n-1} .
There are many other similar projections which arise in numerical
analysis. Suppose we associate with f its Fourier-Chebyshev expansion

    f ~ (1/2) a_0(f) + \sum_{k=1}^{\infty} a_k(f) T_k(x)

where

    T_k(x) := cos(k arccos x) ,

    a_k(f) := (2/\pi) \int_{-1}^{1} f(t) T_k(t) (1-t^2)^{-1/2} dt
            = (2/\pi) \int_0^{\pi} f(cos \theta) cos k\theta \, d\theta .

If we let

    S_n(f;x) = (1/2) a_0(f) + \sum_{k=1}^{n-1} a_k(f) T_k(x)

be the nth partial sum then it is true that

    S_n : C([-1,1]) \to \Pi_{n-1}

is a projection.
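The projection property of S_n can be checked numerically: applied to a polynomial p of degree less than n, it returns p itself. The sketch below (ours, illustrative only; the midpoint-rule quadrature with 400 points and the test polynomial are arbitrary choices) computes the coefficients a_k(f) = (2/\pi)\int_0^\pi f(cos \theta) cos k\theta d\theta by a Riemann sum, in the spirit of the link with interpolation described in section 5.

```python
from math import acos, cos, pi

def a_k(f, k, quad=400):
    """a_k(f) = (2/pi) * integral_0^pi f(cos t) cos(k t) dt (midpoint rule)."""
    h = pi / quad
    return (2 / pi) * h * sum(f(cos((i + 0.5) * h)) * cos(k * (i + 0.5) * h)
                              for i in range(quad))

def S(f, n, x):
    """S_n(f; x) = a_0(f)/2 + sum_{k=1}^{n-1} a_k(f) T_k(x),
    with T_k(x) = cos(k arccos x) for x in [-1, 1]."""
    return a_k(f, 0) / 2 + sum(a_k(f, k) * cos(k * acos(x))
                               for k in range(1, n))

p = lambda x: 3 * x**2 - x + 1     # degree 2, so S_3 should reproduce it
print(S(p, 3, 0.4))                # approximately p(0.4) = 1.08
```

For this p the exact coefficients are a_0 = 5, a_1 = -1, a_2 = 3/2, and indeed (5/2) - T_1(x) + (3/2) T_2(x) = 3x^2 - x + 1, so the partial sum is a projection onto \Pi_2.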
Other examples of projection operators may be obtained by various
orthogonal expansions of f (e.g. in terms of Legendre polynomials).
A natural question is

    "Which projection P : C(I) \to \Pi_{n-1} has minimum norm?"

This question has been discussed nicely in an interesting paper by
M. Golomb (1965) [12] and, as far as we know, the problem is still open.
However, Golomb shows that there are positive constants K and C such that for each
natural number n , and any projection

    P : C(I) \to \Pi_{n-1} ,

we have

    \|P\| \ge K ln(n-1) - C .

A corollary of this is the Corollary of Faber's theorem.
The main point to be made here is that the study of Lagrange interpolation
has led us into the general study of projections on function spaces.
It would, by the way, be interesting to get more information about the
constant C above.
7. Some recent results on divergence
Let us now survey some of the most important recent results.
A.A. Privalov (USSR) has been generating a number of very technical
papers concerned with Lagrange interpolation polynomials based on the matrix
of Jacobi nodes. The Jacobi polynomials with parameters \alpha , \beta are a
sequence of polynomials { P_n^{(\alpha,\beta)}(x) : n = 0 , 1 , 2 , 3 , ... } such that

    \int_{-1}^{1} P_n^{(\alpha,\beta)}(x) P_m^{(\alpha,\beta)}(x) (1-x)^{\alpha} (1+x)^{\beta} dx = \delta_{nm}

where \alpha > -1 , \beta > -1 . The nth row of the matrix M(\alpha,\beta) of Jacobi
nodes consists of the n distinct zeros of P_n^{(\alpha,\beta)}(x) . For more details
consult the text by G. Szegő [29] .
In 1976, Privalov [22] proved the following result.
Theorem. (A.A. Privalov) Given \alpha > -1 , \beta > -1 , there is a function
f \in C(I) such that

    lim sup_{n\to\infty} |L_{n-1}(f;x)| = \infty

a.e. in I .
In an interesting survey paper Privalov [23] takes this result even
further.
Theorem. (A.A. Privalov) Let \alpha > -1 , \beta > -1 and let M(\alpha,\beta) be the
Jacobi matrix. Then there is a function f \in C(I) such that

(i) the sequence { L_n(f;x) : n = 1 , 2 , ... } diverges at all points
inside (-1,1) ,

(ii) for any number \epsilon , 0 < \epsilon < 1 , the Fourier-Jacobi series of f
converges to f uniformly in [-1+\epsilon, 1-\epsilon] .
A most startling type of result! Not only are the classical divergence
results extended but an equiconvergence problem is solved too.
In 1980, P. Erdős and P. Vértesi [10] proved the following theorem.

Theorem. (P. Erdős, P. Vértesi) For any matrix M one can find a function
f \in C(I) such that

    lim sup_{n\to\infty} |L_{n-1}(f;x)| = \infty

for almost all x \in I .

If we consider a special matrix M in which some fixed point of I appears
as a node in every row, then the interpolation polynomials trivially converge
at that point, so it is clear that we cannot drop the word "almost" from the
statement of this theorem.
To see that the "lim sup" cannot be replaced by "lim" or "lim inf" we
must refer to a convergence theorem of P. Erdős. However, we are endeavouring
to avoid mention of convergence theorems in this paper.

This theorem is the climax of a long history dating back to Meray's
example last century. The proof is very complicated indeed.
These divergence results have been extended in another way by I. Muntean
in two recent papers (I. Muntean [19] and S. Cobzas and I. Muntean [5]) .
A typical result is the following.
Theorem. (I. Muntean) Given an arbitrary matrix M let

    U_M = { f \in C(I) : lim sup_{n\to\infty} \|L_{n-1}(f)\| = \infty } .

Then U_M is an uncountable, G_\delta , dense subset of C(I) .

The examples of Meray, Runge, Grünwald and Marcinkiewicz were not isolated
examples of "bad" functions. "Bad" functions are everywhere!
Finally, in reviewing recent advances we mention that a long standing
conjecture of S.N. Bernstein was solved recently. Bernstein's conjecture
deals with the problem of finding n distinct points of [-1,1] which
minimize

    \lambda_n = max{ \lambda_n(x) : -1 \le x \le 1 } .

Bernstein conjectured that \lambda_n would be minimised if the local maxima
of the graph of \lambda_n(x) were all equal. This has been settled by T. Kilgore
[16] . For an interesting discussion of this very deep problem see the
paper by Carl de Boor [4] , or the recent report of Myron S. Henry (American
Mathematical Monthly 91 (1984), 497-499) .
8. Comments on the literature
We close by giving a brief description of some of the most readable
papers and books on the subject.
D. Elliott [8] recently published a paper whose content is similar to
the present paper. The paper is written in a very easy style and would be an
interesting starting point for someone seeking general information about the
field.
I. Natanson [20] has written a three volume treatise dealing with
approximation theory and the third volume is devoted exclusively to interpolation.
The books provide an excellent introduction to the classical analysis
of approximation theory: they would be ideal for final year undergraduate students.
T.J. Rivlin [25] has produced an excellent text book dealing exclusively
with the Chebyshev polynomials. Interpolation polynomials have an
important place in this text which contains a large number of problems.
Rivlin's book would be another useful text for final year undergraduates.
Before he died, P. Turán [30] published a long paper which deals with
unsolved problems in approximation theory. Since then a number of them have
been solved and the solutions published in Acta Math. Acad. Sci. Hungar. Still
it provides a fair description of the state of the art of interpolation.
P.J. Davis [6] has written a classic text which deals with interpolation
and approximation.
P. Kergin [15] has done some recent work on polynomial interpolation
of functions of several variables - a problem which has received little
attention.
R. Askey [1] and P. Nevai [21] have written some interesting papers
dealing with mean convergence of Lagrange interpolation polynomials.
P.M. Prenter [24] has an interesting paper dealing with interpolation in
more abstract spaces. Here she has attempted to generalize Lagrange's
interpolation formula, rather than merely generalizing the concept of
interpolation.
Smirnov and Lebedev [27] have written a classic text on interpolation
of functions of a complex variable.
REFERENCES
1. R. Askey, Mean convergence of orthogonal series and Lagrange interpolation,
Acta Math. Acad. Sci. Hungar. 23 (1972), 71-85 .
2. S.N. Bernstein, Quelques remarques sur l'interpolation, Math. Ann. 79
(1918), 1-12 .
3. S.N. Bernstein, Sur la limitation des valeurs d'interpolation, Bull.
Acad. Sci. de l'URSS 8 (1931), 1025-1050 .
4. C. de Boor, Polynomial interpolation, Proc. Int. Cong. Math. Helsinki
(1981), 917-922 .
5. S. Cobzas and I. Muntean, Condensation of singularities and divergence
results in approximation theory, J. Approx. Theory 31 (1981) 138-153 .
6. P.J. Davis, Interpolation and Approximation, Dover Publications, N.Y.
(1975) .
7. P.J. Davis and P. Rabinowitz, Numerical Integration, Academic Press,
N.Y. (1975) .
8. D. Elliott, Lagrange interpolation - decline and fall? Int. J. Math.
Educ. Sci. Tech. 10 (1979), 1-12 .
9. P. Erdős and P. Turán, On interpolation I, Ann. Math. 38 (1937),
142-155 .
10. P. Erdős and P. Vértesi, On the almost everywhere divergence of Lagrange
interpolatory polynomials for arbitrary system of nodes, Acta Math.
Acad. Sci. Hungar. 36 (1980), 71-89 . Corrections in Acta Math. Acad.
Sci. Hungar. 38 (1981), 263 .
11. G. Faber, Über die interpolatorische Darstellung stetiger Funktionen,
Jahresber. der Deutschen Math.-Ver. 23 (1914), 190-210 .
12. M. Golomb, Optimal and nearly optimal linear approximations,
"Approximation of Functions: Proceedings" ed. H.L. Garabedian, Elsevier
(1965) 83-100 .
13. G. Grünwald, Über die Divergenzerscheinungen der Lagrangeschen Interpolationspolynome,
Acta Sci. Math. Szeged 7 (1935), 207-221 .
14. G. Grünwald, Über die Divergenzerscheinungen der Lagrangeschen Interpolationspolynome
stetiger Funktionen, Annals of Math. 37 (1936), 908-918 .
15. P. Kergin, A natural interpolation of C^K functions, J. Approx. Th.
29 (1980), 278-293 .
16. T.A. Kilgore, A characterization of the Lagrange interpolating projection
with minimal Tchebycheff norm, J. Approx. Th. 24 (1978), 273-288 .
17. J. Marcinkiewicz, Sur la divergence des polynomes d'interpolation,
Acta Sci. Math. Szeged 8 (1937), 131-135 .
18. Ch. Meray, Nouveaux exemples d'interpolations illusoires, Bull. Sci.
Math. 20 (1896), 266-270 .
19. I. Muntean, The Lagrange interpolation operators are densely divergent,
Studia Univ. Babes - Bolyai Mat. 21 (1976), 28-30 .
20. I.P. Natanson, Constructive Function Theory, Vols. I-III , Frederick
Ungar Pub. Co., N.Y. (1965) .
21. P.G. Nevai, Mean convergence of Lagrange interpolation I , II ,
J. Approx. Theory, 18 (1976) 363-377 , 30 (1980), 263-376 .
22. A.A. Privalov, On the divergence of Lagrange interpolation processes
constructed over roots of the Jacobi polynomials on sets of positive
measure of Lebesgue, Sibirsk. Math. 17 (1976), 837-859 . (Russian).
23. A.A. Privalov, Approximation of functions by interpolation polynomials,
"Fourier Analysis and Approximation Theory (Budapest)", ed. G. Alexits,
P. Turan (1976), 659-670 .
24. P.M. Prenter, Lagrange and Hermite interpolation in Banach spaces,
J. Approx. Th. 4 (1971), 419-432 .
25. T.J. Rivlin, The Chebyshev Polynomials, Wiley, N.Y. (1974) .
26. C. Runge, Über die Darstellung willkürlicher Funktionen und die
Interpolation zwischen äquidistanten Ordinaten, Z. Math. Phys.
46 (1901), 224-243 .
27. V.I. Smirnov and N.A. Lebedev, Functions of a Complex Variable:
Constructive Theory, M.I.T. Press, Cambridge (1968) .
28. J. Steffensen, Interpolation, Chelsea Pub. Co., N.Y. (1950) .
29. G. Szego, Orthogonal Polynomials, AMS Colloq. Publ. Vol. 23 ,
Providence RI (1939) .
30. P. Turan, On some open problems of approximation theory, J. Approx.
Theory, 29 (1980), 23-85 , 86-89 .
31. A. Zygmund, Trigonometric Series, Vols. I, II, Cambridge U.P., London
(1968) .
Bendigo College of Advanced Education, P.O. Box 199, Bendigo, Vic., AUSTRALIA 3550 .