
MT5802 - Integral equations

Introduction

Integral equations occur in a variety of applications, often being obtained from a differential equation. The reason for doing this is that it may make solution of the problem easier or, sometimes, enable us to prove fundamental results on the existence and uniqueness of the solution. Denoting the unknown function by $\phi$, we consider linear integral equations which involve an integral of the form

$$\int_a^x K(x,s)\,\phi(s)\,ds \quad\text{or}\quad \int_a^b K(x,s)\,\phi(s)\,ds .$$

The type with integration over a fixed interval is called a Fredholm equation, while if the upper limit is $x$, a variable, it is a Volterra equation.

The other fundamental division of these equations is into first and second kinds. For Fredholm equations these are

$$f(x) = \int_a^b K(x,s)\,\phi(s)\,ds$$

$$\phi(x) = f(x) + \lambda \int_a^b K(x,s)\,\phi(s)\,ds .$$

The corresponding Volterra equations have the upper limit $b$ replaced with $x$. The numerical parameter $\lambda$ is introduced in front of the integral for reasons that will become apparent in due course. We shall mainly deal with equations of the second kind.

Series solutions

One fairly obvious thing to try for the equations of the second kind is to make an expansion in $\lambda$ and hope that, at least for small enough values, this might converge. To illustrate the method let us begin with a simple Volterra equation,

$$\phi(x) = x + \lambda \int_0^x \phi(s)\,ds .$$

For small $\lambda$, $\phi_0(x) = x$ is a first approximation. Insert this in the integral term to get a better approximation

$$\phi_1(x) = x + \lambda \int_0^x s\,ds = x + \tfrac{1}{2}\lambda x^2 .$$

Again put this into the integral to get

$$\phi_2(x) = x + \lambda \int_0^x \left(s + \tfrac{1}{2}\lambda s^2\right) ds = x + \tfrac{1}{2}\lambda x^2 + \tfrac{1}{6}\lambda^2 x^3 .$$

Continuing this process, we get


$$\phi_n(x) = x + \tfrac{1}{2}\lambda x^2 + \cdots + \tfrac{1}{n!}\lambda^{n-1} x^n$$

and, as we let $n \to \infty$, the series converges to $\frac{1}{\lambda}\left(e^{\lambda x} - 1\right)$. Substituting into the equation verifies that this is the correct solution. So, for this Volterra equation the technique of expansion in series gives a result which is convergent for all $\lambda$.
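The iteration just described is easy to automate symbolically. Below is a minimal sketch using sympy (the function and variable names are illustrative, not part of the notes); it reproduces the partial sums for the example above and can be compared term by term with $(e^{\lambda x}-1)/\lambda$.

```python
import sympy as sp

x, s, lam = sp.symbols('x s lambda', positive=True)

def neumann_volterra(f, K, n):
    """Iterate phi_{k+1}(x) = f(x) + lambda * Integral(K(x,s)*phi_k(s), (s, 0, x))."""
    phi = f(x)
    for _ in range(n):
        phi = f(x) + lam * sp.integrate(K(x, s) * phi.subs(x, s), (s, 0, x))
    return sp.expand(phi)

# The example above: phi(x) = x + lambda*Integral(phi(s), (s, 0, x)), i.e. K = 1
phi4 = neumann_volterra(lambda t: t, lambda t, u: sp.Integer(1), 4)
print(phi4)
# x + lambda*x**2/2 + lambda**2*x**3/6 + lambda**3*x**4/24 + lambda**4*x**5/120
print(sp.series((sp.exp(lam * x) - 1) / lam, lam, 0, 4))  # first terms agree
```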

We now show that this works for all Volterra equations, subject to some fairly general conditions. Suppose that we look for a solution with $x$ in some finite interval $[a,b]$ and that on this interval $f(x)$ is bounded with $|f(x)| < m$. We also suppose that, on the interval $[a,b]\times[a,b]$, $|K(x,s)| < M$. Then

$$|\phi_0(x)| = |f(x)| < m$$

$$|\phi_1(x)| = \left|f(x) + \lambda \int_a^x K(x,s)\,f(s)\,ds\right| < m + |\lambda| m M (x-a)$$

$$|\phi_2(x)| = \left|f(x) + \lambda \int_a^x K(x,s)\,\phi_1(s)\,ds\right| < m + |\lambda| M \int_a^x \left(m + |\lambda| m M (s-a)\right) ds = m\left(1 + |\lambda| M(x-a) + \tfrac{1}{2}|\lambda|^2 M^2 (x-a)^2\right) .$$

Carrying on like this we get

$$|\phi_n(x)| < m\left(1 + |\lambda| M(x-a) + \tfrac{1}{2}|\lambda|^2 M^2(x-a)^2 + \cdots + \tfrac{1}{n!}|\lambda|^n M^n (x-a)^n\right) .$$

Since the series here is of exponential type we get convergence for all values of $\lambda$.

Now see whether the same thing works for Fredholm equations. Again we look at an example,

$$\phi(x) = x + \lambda \int_0^1 \phi(s)\,ds ,$$

similar to the previous one, except for the fixed range of integration. Now we get

$$\phi_0(x) = x$$

$$\phi_1(x) = x + \lambda \int_0^1 s\,ds = x + \tfrac{1}{2}\lambda$$

$$\phi_2(x) = x + \lambda \int_0^1 \left(s + \tfrac{1}{2}\lambda\right) ds = x + \tfrac{1}{2}\lambda + \tfrac{1}{2}\lambda^2$$

and, continuing in this way,

$$\phi_n(x) = x + \tfrac{1}{2}\left(\lambda + \lambda^2 + \cdots + \lambda^n\right) .$$


Now we get a geometric series which only converges if $|\lambda| < 1$. In this simple problem the series can be summed and, letting $n \to \infty$, we get the solution

$$\phi(x) = x + \frac{\lambda}{2(1-\lambda)} .$$

Since we have managed to sum the series, we get a solution valid for all $\lambda \neq 1$. In more general problems the series will not be easily summable and will only be valid for a restricted range of values of $\lambda$. The series obtained in this way, for either type of equation, is known as the Neumann series.

Example: Find the first two terms of the Neumann series for the equation

$$\phi(x) = \sin x + \lambda \int_0^{\pi/2} \cos(x^2 s)\,\phi(s)\,ds .$$

We get

$$\phi_0(x) = \sin x$$

$$
\begin{aligned}
\phi_1(x) &= \sin x + \lambda \int_0^{\pi/2} \cos(x^2 s)\sin s\,ds \\
&= \sin x + \frac{\lambda}{2}\int_0^{\pi/2} \left[\sin\!\left((1+x^2)s\right) + \sin\!\left((1-x^2)s\right)\right] ds \\
&= \sin x + \frac{\lambda}{2}\left[\frac{1 - \cos\!\left((1+x^2)\frac{\pi}{2}\right)}{1+x^2} + \frac{1 - \cos\!\left((1-x^2)\frac{\pi}{2}\right)}{1-x^2}\right] .
\end{aligned}
$$

Separable kernels

Suppose the kernel of a Fredholm equation is a product of a function of $x$ and a function of $s$, so that the equation is of the form

$$\phi(x) = f(x) + \lambda \int_a^b u(x)\,v(s)\,\phi(s)\,ds .$$

Clearly the $u(x)$ can come outside the integral, and if we write

$$c = \int_a^b v(s)\,\phi(s)\,ds$$

then we have

$$\phi(x) = f(x) + \lambda c\, u(x) .$$

All we need to find is the constant $c$, and this can be done by multiplying the last equation by $v(x)$ and integrating from $a$ to $b$. This gives


$$c = \int_a^b f(x)\,v(x)\,dx + \lambda c \int_a^b u(x)\,v(x)\,dx$$

so

$$c = \frac{\int_a^b v(x)\,f(x)\,dx}{1 - \lambda \int_a^b u(x)\,v(x)\,dx} .$$

This leads to the solution, so long as the value of $\lambda$ is not such as to make the denominator vanish.
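This recipe is mechanical enough to code directly. Below is a minimal symbolic sketch with sympy (the helper name solve_separable and its arguments are illustrative, not from the notes); it computes $c$ from the formula above and returns $\phi$, applied to the worked example that follows.

```python
import sympy as sp

x, lam = sp.symbols('x lambda')

def solve_separable(f, u, v, a, b):
    """Solve phi(x) = f(x) + lambda*u(x)*Integral(v(s)*phi(s), (s, a, b))
    using  c = Integral(v*f) / (1 - lambda*Integral(u*v))."""
    num = sp.integrate(v(x) * f(x), (x, a, b))
    den = 1 - lam * sp.integrate(u(x) * v(x), (x, a, b))
    return sp.simplify(f(x) + lam * (num / den) * u(x))

# Worked example below: f(x) = x**2, u(x) = x**3, v(s) = s**2 on [0, 1]
print(solve_separable(lambda t: t**2, lambda t: t**3, lambda t: t**2, 0, 1))
# equivalent to x**2 + 6*lambda*x**3/(5*(6 - lambda))
```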

Example: Solve

$$\phi(x) = x^2 + \lambda \int_0^1 x^3 s^2\,\phi(s)\,ds .$$

The solution is $\phi(x) = x^2 + \lambda c\, x^3$ with

$$c = \int_0^1 s^2\,\phi(s)\,ds .$$

From the last two equations we get

$$c = \int_0^1 x^4\,dx + \lambda c \int_0^1 x^5\,dx = \frac{1}{5} + \frac{1}{6}\lambda c .$$

So, unless $\lambda = 6$, we have

$$\phi(x) = x^2 + \frac{1}{5}\,\frac{6\lambda x^3}{6-\lambda} .$$

The value of λ for which the solution breaks down is an eigenvalue of the problem.

Now, go back to the general form of the equation and look at what happens at this value. Define the homogeneous equation corresponding to our original equation by

$$\phi(x) = \lambda \int_a^b u(x)\,v(s)\,\phi(s)\,ds$$

(i.e. put $f = 0$). Multiplying this by $v(x)$ and integrating gives

$$\int_a^b v(x)\,\phi(x)\,dx = \lambda \int_a^b u(x)\,v(x)\,dx \int_a^b v(s)\,\phi(s)\,ds$$

which implies that

$$\lambda \int_a^b v(x)\,u(x)\,dx = 1 .$$


There can be no solution of the homogeneous equation unless this condition is satisfied. If it is satisfied then it is easily verified that $\phi(x) = k\,u(x)$ is a solution for arbitrary $k$. Going back to the inhomogeneous equation, it can be seen that if $\lambda$ is equal to the eigenvalue, and a solution exists, then the condition

$$\int_a^b v(x)\,f(x)\,dx = 0$$

must be satisfied. Thus the solution can only exist for certain functions $f$. If such a solution exists then any multiple of the solution of the homogeneous equation can be added to it. To summarise, we have:
(a) $\lambda \neq$ eigenvalue - unique solution exists, or
(b) $\lambda =$ eigenvalue - solution may or may not exist and if it does it is not unique. The homogeneous equation has a solution, any multiple of which can be added.

There are clear parallels between this and the matrix equation

$$X = F + \lambda A X$$

where $X$ and $F$ are column vectors and $A$ a matrix. This is a discrete analogue of the Fredholm equation. It has the unique solution

$$X = (I - \lambda A)^{-1} F$$

if the inverse exists, which it does except for certain values of $\lambda$ (the reciprocals of the eigenvalues of the matrix $A$). At these values of $\lambda$ the homogeneous equation ($F = 0$) has a non-trivial solution while the inhomogeneous equation may or may not have a solution, and if it does, arbitrary multiples of the solution of the homogeneous equation can be added to it.
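The discrete analogue can be made concrete in a few lines. The sketch below is illustrative (the matrix and vector are made up for the demonstration, using numpy); it solves $X = F + \lambda A X$ and lists the problematic values of $\lambda$.

```python
import numpy as np

A = np.array([[0.2, 0.1],
              [0.3, 0.4]])    # illustrative matrix
F = np.array([1.0, 2.0])      # illustrative right-hand side
lam = 0.5

# Unique solution X = (I - lam*A)^{-1} F when the inverse exists
X = np.linalg.solve(np.eye(2) - lam * A, F)
print(X)

# The problematic values of lam are the reciprocals of the eigenvalues of A
print(1.0 / np.linalg.eigvals(A))
```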

Degenerate Kernels

The method outlined above can be extended to the situation where

$$K(x,y) = \sum_i u_i(x)\,v_i(y)$$

over some finite range of $i$. To see this we look at an example,

$$\phi(x) = x + \lambda \int_0^1 (xy + x^2 y^2)\,\phi(y)\,dy .$$

If we let

$$c_1 = \int_0^1 y\,\phi(y)\,dy \qquad c_2 = \int_0^1 y^2\,\phi(y)\,dy$$

then the solution is

$$\phi(x) = x + \lambda (c_1 x + c_2 x^2) .$$

If we multiply this by $x$ and then by $x^2$ and integrate from 0 to 1 in each case, we obtain


$$c_1 = \frac{1}{3} + \lambda\left(\frac{1}{3}c_1 + \frac{1}{4}c_2\right)$$

$$c_2 = \frac{1}{4} + \lambda\left(\frac{1}{4}c_1 + \frac{1}{5}c_2\right)$$

or

$$\begin{pmatrix} 1 - \frac{1}{3}\lambda & -\frac{1}{4}\lambda \\ -\frac{1}{4}\lambda & 1 - \frac{1}{5}\lambda \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} \frac{1}{3} \\ \frac{1}{4} \end{pmatrix} .$$

This gives

$$\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \frac{1}{240 - 128\lambda + \lambda^2} \begin{pmatrix} -48(\lambda - 5) & 60\lambda \\ 60\lambda & -80(\lambda - 3) \end{pmatrix} \begin{pmatrix} \frac{1}{3} \\ \frac{1}{4} \end{pmatrix}$$

and hence a uniquely defined solution unless $\lambda^2 - 128\lambda + 240 = 0$, i.e. $\lambda = 64 \pm 4\sqrt{241}$. These two values are the eigenvalues for this problem.
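As a check, the reduction to a 2×2 linear system is easy to reproduce symbolically. A minimal sketch with sympy (names are illustrative) forms the matrix above, solves for $c_1, c_2$, and recovers the eigenvalues from the vanishing of the determinant.

```python
import sympy as sp

lam = sp.symbols('lambda')

# 2x2 system for (c1, c2) arising from the degenerate kernel x*y + x**2*y**2
M = sp.Matrix([[1 - lam/3, -lam/4],
               [-lam/4, 1 - lam/5]])
rhs = sp.Matrix([sp.Rational(1, 3), sp.Rational(1, 4)])

c = M.solve(rhs)
print(sp.simplify(c[0]), sp.simplify(c[1]))     # c1(lambda), c2(lambda)

# Eigenvalues: the values of lambda where the determinant vanishes
print(sp.solve(sp.expand(240 * M.det()), lam))  # 64 +/- 4*sqrt(241)
```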

If $\lambda$ takes one of these values then the equation

$$\begin{pmatrix} 1 - \frac{1}{3}\lambda & -\frac{1}{4}\lambda \\ -\frac{1}{4}\lambda & 1 - \frac{1}{5}\lambda \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

has a non-trivial solution. This gives a non-trivial solution to the homogeneous integral equation (which may be multiplied by an arbitrary constant).

If we consider a kernel of the general form

$$K(x,y) = \sum_{i=1}^n u_i(x)\,v_i(y)$$

then we can follow the same procedure and will end up with an $n \times n$ matrix to invert. The determinant will be an $n$th degree polynomial in $\lambda$ with $n$ solutions (possibly complex and possibly repeated), which are the eigenvalues. If $\lambda$ is not an eigenvalue then the matrix can be inverted and a unique solution found. If it is an eigenvalue then the homogeneous equation has a non-trivial solution. In this case, the original equation may or may not have a solution. To obtain the condition under which a solution exists we introduce the transposed equation

$$\phi(x) = f(x) + \lambda \int_a^b K(y,x)\,\phi(y)\,dy$$

where the arguments in $K$ are swapped over. This produces a matrix which is the transpose of the original and has the same eigenvalues. If $\psi(x)$ is a solution of the homogeneous transposed equation for the eigenvalue $\lambda$, then if we multiply the original equation by $\psi(x)$ and integrate from $a$ to $b$ we get

$$\int_a^b \phi(x)\,\psi(x)\,dx = \int_a^b \psi(x)\,f(x)\,dx + \lambda \int_a^b \int_a^b K(x,y)\,\psi(x)\,\phi(y)\,dx\,dy .$$

From the definition of $\psi(x)$ it satisfies

$$\psi(x) = \lambda \int_a^b K(t,x)\,\psi(t)\,dt$$

from which it can be seen that the $x$ integral in the last term above just gives $\psi(y)$, and the whole term is

$$\int_a^b \phi(y)\,\psi(y)\,dy$$

which, of course, just cancels the left-hand side. So we conclude that if a solution exists we must have

$$\int_a^b \psi(x)\,f(x)\,dx = 0 .$$

Example: Solve

$$\phi(x) = \sin x + \lambda \int_0^{\pi/2} \cos(x-y)\,\phi(y)\,dy .$$

The first thing to note is that using the formula for $\cos(x-y)$ brings this into the form we want, namely

$$\phi(x) = \sin x + \lambda \int_0^{\pi/2} (\cos x \cos y + \sin x \sin y)\,\phi(y)\,dy .$$

With

$$c_1 = \int_0^{\pi/2} \cos(x)\,\phi(x)\,dx \qquad c_2 = \int_0^{\pi/2} \sin(x)\,\phi(x)\,dx$$

we get

$$c_1 = \frac{1}{2} + \lambda\left(\frac{\pi}{4}c_1 + \frac{1}{2}c_2\right)$$

$$c_2 = \frac{\pi}{4} + \lambda\left(\frac{1}{2}c_1 + \frac{\pi}{4}c_2\right)$$

or


$$\begin{pmatrix} 1 - \frac{\lambda\pi}{4} & -\frac{1}{2}\lambda \\ -\frac{1}{2}\lambda & 1 - \frac{\lambda\pi}{4} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} \\ \frac{\pi}{4} \end{pmatrix} .$$

The matrix here can be inverted unless $\lambda = \dfrac{1}{\frac{\pi}{4} \pm \frac{1}{2}}$, the result being

$$c_1 = \frac{8}{16 - 8\lambda\pi + \lambda^2\pi^2 - 4\lambda^2} \qquad c_2 = \frac{4\lambda + 4\pi - \lambda\pi^2}{16 - 8\lambda\pi + \lambda^2\pi^2 - 4\lambda^2} .$$

Looking at what happens at the eigenvalues, with the + sign we get $c_1 = c_2$ if we put the right-hand side equal to zero, giving the solution $\cos x + \sin x$ (or any multiple of this) for the homogeneous equation. This is also the solution of the homogeneous transposed equation (which is the same as the original). For our equation to have a solution for this eigenvalue we need

$$\int_0^{\pi/2} \sin x\,(\sin x + \cos x)\,dx = 0$$

which is not true, so no solution exists. This can also be seen from the above matrix equation, the two equations being incompatible if $\lambda$ has this value. The condition for the existence of a solution with the inhomogeneous term replaced with a general function $f(x)$ is

$$\int_0^{\pi/2} f(x)\,(\cos x + \sin x)\,dx = 0 .$$

It can be seen that what this does is ensure that the vector on the right-hand side of the matrix equation is such as to be compatible with the left-hand side, the two equations then being identical. A similar analysis holds for the other eigenvalue.
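A quick numerical check of the solvability condition at the '+' eigenvalue is easy; the snippet below (illustrative, using scipy) evaluates the integral and confirms it equals $\pi/4 + 1/2$, not zero, so no solution exists there.

```python
import numpy as np
from scipy.integrate import quad

# Solvability condition: integral of f(x)*(cos x + sin x) over [0, pi/2],
# with f(x) = sin x, the inhomogeneous term of the example.
value, _ = quad(lambda x: np.sin(x) * (np.sin(x) + np.cos(x)), 0.0, np.pi / 2)
print(value, np.pi / 4 + 0.5)   # both about 1.2854, so the condition fails
```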

Resolvent kernel

Consider the Fredholm equation in the usual form and suppose for the moment that the kernel is degenerate,

$$K(x,y) = \sum_{i=1}^n u_i(x)\,v_i(y) .$$

Then, as we have seen,

$$\phi(x) = f(x) + \lambda \sum_i c_i\,u_i(x) \qquad c_i = \int_a^b v_i(x)\,\phi(x)\,dx .$$


Multiplying the integral equation by each $v_i(x)$ in turn, we get the system of linear equations

$$c_i = f_i + \lambda \sum_j a_{ij}\,c_j \qquad f_i = \int_a^b v_i(x)\,f(x)\,dx \qquad a_{ij} = \int_a^b v_i(x)\,u_j(x)\,dx ,$$

the solution of which is

$$c_i = \sum_j b_{ij}\,f_j$$

with the matrix $B$ given by $B = (I - \lambda A)^{-1}$. This produces a solution of the form

$$\phi(x) = f(x) + \lambda \int_a^b R(x,y;\lambda)\,f(y)\,dy$$

with

$$R(x,y;\lambda) = \sum_{i,j} b_{ij}\,u_i(x)\,v_j(y) .$$

$R$ is called the resolvent kernel and is uniquely defined except when $\lambda$ is an eigenvalue, i.e. $I - \lambda A$ is singular. If $\lambda$ is an eigenvalue then the homogeneous equation has a non-trivial solution and the full equation may or may not have a solution, as discussed previously. If a solution exists it is not unique, since any multiple of the solution of the homogeneous equation can be added to it.
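For a degenerate kernel the resolvent can be built directly from $B = (I - \lambda A)^{-1}$. Here is a minimal symbolic sketch (names illustrative, using sympy) for the two-term kernel $xy + x^2y^2$ used earlier.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')

u = [x, x**2]        # u_i(x)
v = [y, y**2]        # v_i(y)

# a_ij = Integral(v_i(x) * u_j(x), (x, 0, 1))
A = sp.Matrix(2, 2, lambda i, j: sp.integrate(v[i].subs(y, x) * u[j], (x, 0, 1)))
B = (sp.eye(2) - lam * A).inv()

# Resolvent kernel R(x, y; lambda) = sum_ij b_ij * u_i(x) * v_j(y)
R = sp.simplify(sum(B[i, j] * u[i] * v[j] for i in range(2) for j in range(2)))
print(R)
```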

Fredholm theory

Fredholm obtained a general expression for the resolvent kernel, valid even if the kernel is not degenerate. We shall not go into the details, but the theory says that

$$R(x,y;\lambda) = \frac{D(x,y;\lambda)}{d(\lambda)}$$

where the numerator and denominator can both be expressed as infinite series

$$D(x,y;\lambda) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\,D_n(x,y)\,\lambda^n \qquad d(\lambda) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\,d_n\,\lambda^n .$$

We have already seen how to construct a Neumann series for the solution. It, however, is only convergent for sufficiently small $\lambda$, while the series here are convergent for all values. The solution only breaks down when $d(\lambda) = 0$, a condition which determines the eigenvalues. The functions $D_n(x,y)$ and the constants $d_n$ are found from the recurrence relations


$$D_0(x,y) = K(x,y) \qquad d_0 = 1$$

$$d_n = \int_a^b D_{n-1}(x,x)\,dx$$

$$D_n(x,y) = K(x,y)\,d_n - n \int_a^b K(x,z)\,D_{n-1}(z,y)\,dz .$$
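The recurrence is straightforward to implement symbolically. The sketch below (illustrative names, using sympy) computes the first few $D_n$ and $d_n$ for the kernel $K(x,y) = x^3y^2$ of the worked example below and assembles the resolvent; the series terminates after $D_1 = 0$.

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda')

def fredholm_series(K, a, b, nmax):
    """Return lists [D_0,...,D_nmax] and [d_0,...,d_nmax] from Fredholm's recurrence."""
    D = [K]
    d = [sp.Integer(1)]
    for n in range(1, nmax + 1):
        d.append(sp.integrate(D[n-1].subs(y, x), (x, a, b)))
        D.append(sp.simplify(K * d[n] - n * sp.integrate(K.subs(y, z) * D[n-1].subs(x, z), (z, a, b))))
    return D, d

D, d = fredholm_series(x**3 * y**2, 0, 1, 2)
print(D)   # [x**3*y**2, 0, 0]
print(d)   # [1, 1/6, 0]

num = sum((-1)**n / sp.factorial(n) * D[n] * lam**n for n in range(3))
den = sum((-1)**n / sp.factorial(n) * d[n] * lam**n for n in range(3))
print(sp.simplify(num / den))   # equivalent to 6*x**3*y**2/(6 - lambda)
```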

To illustrate how this works we look at the equation

$$\phi(x) = x^2 + \lambda \int_0^1 x^3 y^2\,\phi(y)\,dy$$

which we have already solved as an example of an equation with a separable kernel. Here we get

$$D_0(x,y) = x^3 y^2$$

$$d_1 = \int_0^1 x^5\,dx = \frac{1}{6}$$

$$D_1(x,y) = \frac{1}{6}x^3 y^2 - \int_0^1 (x^3 z^2)(z^3 y^2)\,dz = \frac{1}{6}x^3 y^2 - \frac{1}{6}x^3 y^2 = 0$$

and all subsequent terms vanish. The resolvent kernel is thus

$$R(x,y;\lambda) = \frac{x^3 y^2}{1 - \frac{1}{6}\lambda} = \frac{6 x^3 y^2}{6 - \lambda} .$$

The solution is

$$\phi(x) = x^2 + \lambda \int_0^1 \frac{6 x^3 y^2}{6 - \lambda}\,y^2\,dy = x^2 + \frac{1}{5}\,\frac{6\lambda x^3}{6 - \lambda} ,$$

the same as found before by a different method.

Solution of a Volterra equation by differentiation.

A Volterra equation with a simple separable kernel can be solved by reducing it to a differential equation. To illustrate this consider the example

$$\phi(x) = x^5 + x \int_0^x s^2\,\phi(s)\,ds .$$

We can divide this through by $x$, so that the integral term does not depend on $x$, getting

$$\frac{\phi(x)}{x} = x^4 + \int_0^x s^2\,\phi(s)\,ds .$$


Differentiating with respect to $x$ gives

$$\frac{d}{dx}\left(\frac{\phi(x)}{x}\right) = 4x^3 + x^2\phi(x) = 4x^3 + x^3\,\frac{\phi(x)}{x}$$

which is a simple linear differential equation. We get

$$\frac{d}{dx}\left(\frac{\phi(x)}{x}\,e^{-\frac{1}{4}x^4}\right) = 4x^3\,e^{-\frac{1}{4}x^4}$$

and so

$$\phi(x) = -4x + C x\,e^{\frac{1}{4}x^4} .$$

This involves an arbitrary constant, whereas a Volterra integral equation has a unique solution. We can evaluate the constant by going back to the integral equation. The condition that $\phi(x) = 0$ when $x = 0$ tells us nothing, but if we use the fact that $\phi(x)/x \to 0$ as $x \to 0$ we see that the solution is

$$\phi(x) = 4x\,e^{\frac{1}{4}x^4} - 4x .$$
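The reduction to an ODE, and the final answer, can be checked symbolically. A minimal sketch with sympy (illustrative, not part of the notes) solves the ODE for $\phi/x$ and verifies that the result satisfies the original integral equation.

```python
import sympy as sp

x, s = sp.symbols('x s')
y = sp.Function('y')   # y = phi(x)/x

# y' = 4*x**3 + x**3*y, from differentiating phi/x = x**4 + Integral(s**2*phi(s), (s, 0, x))
sol = sp.dsolve(sp.Eq(y(x).diff(x), 4*x**3 + x**3*y(x)), y(x))
print(sol)   # y(x) = C1*exp(x**4/4) - 4

# Impose y -> 0 as x -> 0 (so C1 = 4) and check the integral equation
phi = x * (4*sp.exp(x**4/4) - 4)
residual = sp.simplify(phi - (x**5 + x*sp.integrate(s**2 * phi.subs(x, s), (s, 0, x))))
print(residual)   # 0
```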

Integral transform methods

Recall that the Laplace transform of a function $f(x)$ is defined by

$$\tilde{f}(p) = \int_0^\infty f(x)\,e^{-px}\,dx .$$

Let us now consider an integral of the form

$$F(x) = \int_0^x g(x-y)\,f(y)\,dy$$

and look at its LT,

$$\tilde{F}(p) = \int_0^\infty e^{-px} \int_0^x g(x-y)\,f(y)\,dy\,dx .$$

The integral is over the region $0 \le y \le x$ (the shaded area in the original diagram),


and if we change the order of integration we get

$$\tilde{F}(p) = \int_0^\infty \int_y^\infty g(x-y)\,f(y)\,e^{-px}\,dx\,dy .$$

Now let $u = x - y$ and get

$$\tilde{F}(p) = \int_0^\infty \int_0^\infty g(u)\,f(y)\,e^{-pu-py}\,dy\,du = \int_0^\infty g(u)\,e^{-pu}\,du \int_0^\infty f(y)\,e^{-py}\,dy = \tilde{g}(p)\,\tilde{f}(p) .$$

Our original integral is called the convolution of the functions $f$ and $g$, so we have arrived at the conclusion that the LT of the convolution of two functions is the product of the LTs of the individual functions.

This can sometimes be used to solve an integral equation if the integral term takes the form of a convolution.

Example: Solve

$$\phi(x) = x + \int_0^x (x-y)\,\phi(y)\,dy .$$

Here $g(x) = x$ and

$$\tilde{g}(p) = \int_0^\infty x\,e^{-px}\,dx = \frac{1}{p^2} .$$

Taking the LT of the integral equation we get

$$\tilde{\phi}(p) = \frac{1}{p^2} + \frac{1}{p^2}\,\tilde{\phi}(p)$$

$$\tilde{\phi}(p) = \frac{1}{p^2 - 1} = \frac{1}{2}\left(\frac{1}{p-1} - \frac{1}{p+1}\right)$$

leading to

$$\phi(x) = \tfrac{1}{2}\left(e^x - e^{-x}\right) = \sinh x .$$
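The same calculation goes through mechanically with sympy's Laplace transform routines. A minimal sketch (illustrative) for the convolution equation above:

```python
import sympy as sp

x, p = sp.symbols('x p', positive=True)

# LT of the kernel g(x) = x and of the inhomogeneous term f(x) = x
g_hat = sp.laplace_transform(x, x, p, noconds=True)   # 1/p**2
f_hat = g_hat

# phi_hat = f_hat + g_hat*phi_hat  =>  phi_hat = f_hat/(1 - g_hat)
phi_hat = sp.simplify(f_hat / (1 - g_hat))             # 1/(p**2 - 1)
phi = sp.inverse_laplace_transform(phi_hat, p, x)
print(sp.simplify(phi))                                 # sinh(x), times a Heaviside step
```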

Abel’s Integral Equation:

This is the following Volterra equation of the first kind:

$$f(x) = \int_0^x (x-s)^{-\alpha}\,\phi(s)\,ds \qquad (0 < \alpha < 1) .$$

Note that the integrand is singular at the upper limit but that the integral exists. Now $g(x) = x^{-\alpha}$ and

$$\tilde{g}(p) = \int_0^\infty e^{-px}\,x^{-\alpha}\,dx = p^{\alpha-1}\int_0^\infty e^{-u}\,u^{-\alpha}\,du = p^{\alpha-1}\,\Gamma(1-\alpha) .$$

The gamma function which is used here is defined by


$$\Gamma(x) = \int_0^\infty e^{-u}\,u^{x-1}\,du .$$

So, from the convolution theorem we get

$$\tilde{\phi}(p) = \frac{\tilde{f}(p)\,p^{1-\alpha}}{\Gamma(1-\alpha)} .$$

It is now convenient to introduce a function $\psi(x)$ such that $\phi(x) = \psi'(x)$ and $\psi(0) = 0$. From the properties of Laplace transforms we then have the result $\tilde{\phi}(p) = p\,\tilde{\psi}(p)$, so that

$$\tilde{\psi}(p) = \frac{\tilde{f}(p)\,p^{-\alpha}}{\Gamma(1-\alpha)} .$$

But, since the LT of $x^{-\alpha}$ is $\Gamma(1-\alpha)\,p^{\alpha-1}$, $p^{-\alpha}$ is the LT of $x^{\alpha-1}/\Gamma(\alpha)$, and so it follows from the convolution theorem that

$$\psi(x) = \left\{\Gamma(\alpha)\,\Gamma(1-\alpha)\right\}^{-1} \int_0^x (x-s)^{\alpha-1}\,f(s)\,ds .$$

If we use the result

$$\Gamma(\alpha)\,\Gamma(1-\alpha) = \frac{\pi}{\sin \pi\alpha}$$

then we get the solution in the form

$$\phi(x) = \frac{\sin \pi\alpha}{\pi}\,\frac{d}{dx}\int_0^x (x-s)^{\alpha-1}\,f(s)\,ds .$$
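A quick sanity check of Abel's inversion formula on a concrete case: with $\alpha = 1/2$ and $\phi(s) = 1$ we have $f(x) = \int_0^x (x-s)^{-1/2}\,ds = 2\sqrt{x}$, so the formula should return $\phi = 1$. The snippet below is an illustrative check (using scipy; the helper name is made up), with the singular weight handled by the quadrature routine and the outer derivative taken by a central difference.

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5
f = lambda t: 2.0 * np.sqrt(t)   # f produced by phi = 1 in Abel's equation with alpha = 1/2

def abel_invert(f, x, h=1e-5):
    """phi(x) = sin(pi*alpha)/pi * d/dx Integral((x - s)**(alpha - 1) * f(s), (s, 0, x))."""
    # weight='alg' integrates f(s)*(s - 0)**0 * (t - s)**(alpha - 1), i.e. the singular factor
    inner = lambda t: quad(f, 0.0, t, weight='alg', wvar=(0.0, alpha - 1.0))[0]
    return np.sin(np.pi * alpha) / np.pi * (inner(x + h) - inner(x - h)) / (2.0 * h)

print(abel_invert(f, 1.0))   # approximately 1.0
```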

Numerical methods

Just as is the case for differential equations, there are many integral equations for which a simple analytic solution is not possible. In this case numerical techniques can be used. It is quite easy to see how a numerical method can be implemented. If we let $\phi_i$ be the value of the solution at a series of points $x_i$ spaced out along the interval of interest then, supposing we are dealing with a Fredholm equation of the second kind, we have

$$\phi_i = \phi(x_i) = f(x_i) + \int_a^b K(x_i,s)\,\phi(s)\,ds .$$

If we now use some numerical approximation for the integral, using the values at our finite set of points,

$$\phi_i = f_i + \sum_j c_{ij}\,\phi_j .$$

This is now a matrix equation for the set of function values and we can use a numerical matrix inversion package to get the required result.

To illustrate this, and see how well it works, we consider the equation


$$\phi(x) = \sin(x) + \int_0^{\pi/2} \cos(x-y)\,\phi(y)\,dy .$$

This, of course, is a separable equation and we have already found its solution with an arbitrary multiplier $\lambda$ in front of the integral. We take it so that we can compare our eventual approximation with the exact answer.

If we divide the interval $\left[0, \frac{\pi}{2}\right]$ into 10 sub-intervals and let

$$\phi_i = \phi\!\left(\frac{i\pi}{20}\right) \qquad f_i = \sin\!\left(\frac{i\pi}{20}\right) \qquad K_{ij} = \cos\!\left(\frac{i\pi}{20} - \frac{j\pi}{20}\right)$$

for $i = 0, 1, \ldots, 10$, then with Simpson's rule used to approximate the integral we get

$$\phi_i = f_i + \frac{\pi}{60}\left(K_{i0}\phi_0 + 4K_{i1}\phi_1 + 2K_{i2}\phi_2 + 4K_{i3}\phi_3 + \cdots + K_{i,10}\phi_{10}\right) .$$

To find the unknown function values we have to invert an $11\times 11$ matrix, which is readily done using a numerical matrix inversion routine.

[Graph: the numerical values $R_i$ at the points $i\pi/20$ and the exact solution $\phi(x)$, plotted against $x$ on $[0, \pi/2]$.]


The above graph shows the solution obtained in this way. On the scale of this graph, the difference between this numerical solution and the true solution is almost indistinguishable, even though we have only taken 10 intervals. The maximum error is about $2.4\times 10^{-4}$.
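The whole procedure fits comfortably in a few lines of numpy. The sketch below is illustrative (array names are not from the notes); it builds the Simpson-weighted matrix, solves the 11×11 system, and compares with the exact solution of the separable problem at $\lambda = 1$.

```python
import numpy as np

n = 10                                   # number of sub-intervals (even, for Simpson's rule)
xs = np.linspace(0.0, np.pi / 2, n + 1)
h = xs[1] - xs[0]

# Simpson weights h/3 * (1, 4, 2, 4, ..., 2, 4, 1)
w = np.ones(n + 1)
w[1:-1:2], w[2:-1:2] = 4.0, 2.0
w *= h / 3.0

K = np.cos(xs[:, None] - xs[None, :])    # K(x_i, y_j) = cos(x_i - y_j)
f = np.sin(xs)

# Solve phi_i = f_i + sum_j w_j K_ij phi_j,  i.e.  (I - K*diag(w)) phi = f
phi = np.linalg.solve(np.eye(n + 1) - K * w[None, :], f)

# Exact solution of the separable problem with lambda = 1:
# phi(x) = sin x + c1*cos x + c2*sin x, with (c1, c2) from the 2x2 system derived earlier
M = np.array([[1 - np.pi / 4, -0.5], [-0.5, 1 - np.pi / 4]])
c1, c2 = np.linalg.solve(M, [0.5, np.pi / 4])
exact = np.sin(xs) + c1 * np.cos(xs) + c2 * np.sin(xs)

print(np.max(np.abs(phi - exact)))       # small, around 2e-4
```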

In the case of a Volterra equation which we want to solve over some range we can use the same procedure. Suppose, for example, we take the equation

$$\phi(x) = x + \int_0^x (x-y)\,\phi(y)\,dy$$

(whose solution we have already seen to be $\sinh(x)$) and we want to solve it over the range $[0,2]$. Then we write it as

$$\phi(x) = x + \int_0^2 K(x,y)\,\phi(y)\,dy \qquad K(x,y) = \begin{cases} x-y & x > y \\ 0 & \text{otherwise} \end{cases}$$

and treat it in the same way as the Fredholm equation. With $[0,2]$ divided into 10 intervals we get the following graph of the solution.

[Graph: the numerical values $R_i$ at the points $i/5$ and the exact solution $\phi(x) = \sinh x$, plotted against $x$ on $[0,2]$.]


Again, with only a small number of points we get a good approximation, the maximum error over this range being around 0.02.
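The only change from the Fredholm sketch above is the triangular kernel. A brief illustrative snippet (assumed names, numpy):

```python
import numpy as np

n = 10
xs = np.linspace(0.0, 2.0, n + 1)
h = xs[1] - xs[0]

w = np.ones(n + 1)
w[1:-1:2], w[2:-1:2] = 4.0, 2.0
w *= h / 3.0

# Triangular kernel: K(x, y) = x - y for x > y, 0 otherwise
X, Y = np.meshgrid(xs, xs, indexing='ij')
K = np.where(X > Y, X - Y, 0.0)

phi = np.linalg.solve(np.eye(n + 1) - K * w[None, :], xs)
print(np.max(np.abs(phi - np.sinh(xs))))   # around 0.02, as quoted in the text
```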

Further reading
Integral Equations - A Short Course - Ll. G. Chambers
Integral Equations - B. L. Moiseiwitsch