
Physics 129a
Calculus of Variations

071113 Frank Porter
Revision 081120

    1 Introduction

Many problems in physics have to do with extrema. When the problem involves finding a function that satisfies some extremum criterion, we may attack it with various methods under the rubric of "calculus of variations". The basic approach is analogous with that of finding the extremum of a function in ordinary calculus.

    2 The Brachistochrone Problem

Historically and pedagogically, the prototype problem introducing the calculus of variations is the "brachistochrone", from the Greek for "shortest time". We suppose that a particle of mass m moves along some curve under the influence of gravity. We'll assume motion in two dimensions here, and that the particle moves, starting at rest, from fixed point a to fixed point b. We could imagine that the particle is a bead that moves along a rigid wire without friction [Fig. 1(a)]. The question is: what is the shape of the wire for which the time to get from a to b is minimized?

First, it seems that such a path must exist: the two outer paths in Fig. 1(b) presumably bracket the correct path, or at least can be made to bracket the path. For example, the upper path can be adjusted to take an arbitrarily long time by making the first part more and more horizontal. The lower path can also be adjusted to take an arbitrarily long time by making the dip deeper and deeper. The straight-line path from a to b must take a shorter time than both of these alternatives, though it may not be the shortest.

It is also readily observed that the optimal path must be single-valued in x; see Fig. 1(c). A path that wiggles back and forth in x can be shortened in time simply by dropping a vertical path through the wiggles. Thus, we can describe path C as a function y(x).


Figure 1: The Brachistochrone Problem: (a) Illustration of the problem; (b) Schematic to argue that a shortest-time path must exist; (c) Schematic to argue that we needn't worry about paths folding back on themselves.

We'll choose a coordinate system with the origin at point a and the y axis directed downward (Fig. 1). We choose the zero of potential energy so that it is given by:

    V(y) = -mgy.

The kinetic energy is

    T(y) = -V(y) = \frac{1}{2}mv^2,

for zero total energy. Thus, the speed of the particle is

    v(y) = \sqrt{2gy}.

An element of distance traversed is:

    ds = \sqrt{(dx)^2 + (dy)^2} = \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx.

Thus, the element of time to traverse ds is:

    dt = \frac{ds}{v} = \sqrt{\frac{1 + (dy/dx)^2}{2gy}}\, dx,

and the total time of descent is:

    T = \int_0^{x_b} \sqrt{\frac{1 + (dy/dx)^2}{2gy}}\, dx.
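This integral is easy to sanity-check numerically for a specific path (an illustration of mine, not part of the original notes). For the straight line y = x from (0, 0) to (1, 1), the integrand is \sqrt{2/(2gx)} = 1/\sqrt{gx}, so T = 2/\sqrt{g} exactly; a midpoint-rule sketch, assuming g = 9.8:

```python
import math

def descent_time(y, yprime, xb, g=9.8, n=100_000):
    """Midpoint-rule estimate of T = integral of sqrt((1 + y'^2)/(2 g y)) dx."""
    h = xb / n
    t = 0.0
    for i in range(n):
        x = (i + 0.5) * h  # midpoints avoid the y = 0 singularity at the start
        t += math.sqrt((1.0 + yprime(x) ** 2) / (2.0 * g * y(x))) * h
    return t

# Straight line from (0, 0) to (1, 1): y = x, so T = ∫ dx / sqrt(g x) = 2 / sqrt(g)
T_line = descent_time(lambda x: x, lambda x: 1.0, 1.0)
print(T_line)   # ≈ 2 / sqrt(9.8) ≈ 0.639
```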


Different functions y(x) will typically yield different values for T; we call T a "functional" of y. Our problem is to find the minimum of this functional with respect to possible functions y. Note that y must be continuous; it would require an infinite speed to generate a discontinuity. Also, the acceleration must exist, and hence the second derivative d^2y/dx^2. We'll proceed to formulate this problem as an example of a more general class of problems in variational calculus.

Consider all functions, y(x), with fixed values at two endpoints: y(x_0) = y_0 and y(x_1) = y_1. We wish to find that y(x) which gives an extremum for the integral:

    I(y) = \int_{x_0}^{x_1} F(y, y', x)\, dx,

where F(y, y', x) is some given function of its arguments. We'll assume "good behavior" as needed.

In ordinary calculus, when we want to find the extrema of a function f(x, y, ...), we proceed as follows: Start with some candidate point (x_0, y_0, ...), and compute the total differential, df, with respect to arbitrary infinitesimal changes in the variables, (dx, dy, ...):

    df = \left(\frac{\partial f}{\partial x}\right)_{x_0, y_0, \ldots} dx + \left(\frac{\partial f}{\partial y}\right)_{x_0, y_0, \ldots} dy + \ldots

Now, df must vanish at an extremum, independent of which direction we choose with our infinitesimal (dx, dy, ...). If (x_0, y_0, ...) are the coordinates of an extremal point, then

    \left(\frac{\partial f}{\partial x}\right)_{x_0, y_0, \ldots} = \left(\frac{\partial f}{\partial y}\right)_{x_0, y_0, \ldots} = \ldots = 0.

Solving these equations thus gives the coordinates of an extremum point.

Finding the extremum of a functional in variational calculus follows the same basic approach. Instead of a point (x_0, y_0, ...), we consider a candidate function y(x) = Y(x). This candidate must satisfy our specified behavior at the endpoints:

    Y(x_0) = y_0, \quad Y(x_1) = y_1. (1)

We consider a small change in this function by adding some multiple of another function, h(x):

    Y(x) \to Y(x) + \epsilon h(x).


Figure 2: Variation on function Y by function \epsilon h.

To maintain the endpoint condition, we must have h(x_0) = h(x_1) = 0. The notation \delta Y is often used for \epsilon h(x).

A change in functional form of Y(x) yields a change in the integral I. The integrand changes at each point x according to the changes in y and y':

    y(x) = Y(x) + \epsilon h(x),
    y'(x) = Y'(x) + \epsilon h'(x). (2)

To first order in \epsilon, the new value of F is:

    F(Y + \epsilon h, Y' + \epsilon h', x) \approx F(Y, Y', x) + \epsilon \left(\frac{\partial F}{\partial y}\right)_{y=Y,\, y'=Y'} h(x) + \epsilon \left(\frac{\partial F}{\partial y'}\right)_{y=Y,\, y'=Y'} h'(x). (3)

We'll use \delta I to denote the change in I due to this change in functional form:

    \delta I = \int_{x_0}^{x_1} F(Y + \epsilon h, Y' + \epsilon h', x)\, dx - \int_{x_0}^{x_1} F(Y, Y', x)\, dx
    \approx \epsilon \int_{x_0}^{x_1} \left[\left(\frac{\partial F}{\partial y}\right)_{y=Y,\, y'=Y'} h + \left(\frac{\partial F}{\partial y'}\right)_{y=Y,\, y'=Y'} h'\right] dx. (4)

We may apply integration by parts to the second term:

    \int_{x_0}^{x_1} \frac{\partial F}{\partial y'}\, h'\, dx = -\int_{x_0}^{x_1} h\, \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) dx, (5)


where we have used h(x_0) = h(x_1) = 0. Thus,

    \delta I = \epsilon \int_{x_0}^{x_1} \left[\frac{\partial F}{\partial y} - \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right)\right]_{y=Y,\, y'=Y'} h\, dx. (6)

When I is at a minimum, \delta I must vanish, since, if \delta I > 0 for some \epsilon, then changing the sign of \epsilon gives \delta I < 0, corresponding to a smaller value of I. A similar argument applies for \delta I < 0; hence \delta I = 0 at a minimum. This must be true for arbitrary h and \epsilon small but finite. It seems that a necessary condition for I to be extremal is:

    \left[\frac{\partial F}{\partial y} - \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right)\right]_{y=Y,\, y'=Y'} = 0. (7)

This follows from the fundamental theorem:

Theorem: If f(x) is continuous in [x_0, x_1] and

    \int_{x_0}^{x_1} f(x) h(x)\, dx = 0 (8)

for every continuously differentiable h(x) in [x_0, x_1], where h(x_0) = h(x_1) = 0, then f(x) = 0 for x \in [x_0, x_1].

Proof: Imagine that f(\xi) > 0 for some x_0 < \xi < x_1. Since f is continuous, there exists \Delta > 0 such that f(x) > 0 for all x \in (\xi - \Delta, \xi + \Delta). Let

    h(x) = \begin{cases} (x - \xi + \Delta)^2 (x - \xi - \Delta)^2, & \xi - \Delta \le x \le \xi + \Delta, \\ 0, & \text{otherwise.} \end{cases} (9)

Note that h(x) is continuously differentiable in [x_0, x_1] and vanishes at x_0 and x_1. We have that

    \int_{x_0}^{x_1} f(x) h(x)\, dx = \int_{\xi - \Delta}^{\xi + \Delta} f(x)\,(x - \xi + \Delta)^2 (x - \xi - \Delta)^2\, dx (10)
    > 0, (11)

since f(x) is larger than zero everywhere in this interval. Thus, f(x) cannot be larger than zero anywhere in the interval. The parallel argument follows for f(x) < 0.

This theorem then permits the assertion that

    \left[\frac{\partial F}{\partial y} - \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right)\right]_{y=Y,\, y'=Y'} = 0 (12)


whenever y = Y is such that I is an extremum, at least if the bracketed expression is continuous. We call this expression the "Lagrangian derivative" of F(y, y', x) with respect to y(x), and denote it by \delta F/\delta y.

The extremum condition, relabeling Y \to y, is then:

    \frac{\delta F}{\delta y} \equiv \frac{\partial F}{\partial y} - \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) = 0. (13)

This is called the Euler-Lagrange equation.

Note that \delta I = 0 is a necessary condition for I to be an extremum, but not sufficient. By definition, the Euler-Lagrange equation determines points for which I is "stationary". Further consideration is required to establish whether I is an extremum or not.
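The variational idea behind the Euler-Lagrange equation can also be seen directly by discretizing a functional and minimizing it numerically. A sketch of mine (not from the notes): for F = \sqrt{1 + y'^2}, whose extremal between two fixed points is a straight line, gradient descent on the discretized arc length flattens a wiggly path into that line.

```python
import math

h = 0.1
xs = [i * h for i in range(11)]                          # endpoints (0,0) and (1,1)
ys = [x + 0.2 * math.sin(3 * math.pi * x) for x in xs]   # wiggly initial path

# Gradient descent on the polyline length sum_i sqrt(h^2 + (y_{i+1} - y_i)^2),
# holding the endpoint values ys[0] and ys[10] fixed.
for _ in range(5000):
    for i in range(1, 10):
        left = (ys[i] - ys[i - 1]) / math.hypot(h, ys[i] - ys[i - 1])
        right = (ys[i + 1] - ys[i]) / math.hypot(h, ys[i + 1] - ys[i])
        ys[i] -= 0.02 * (left - right)                   # dL/dy_i

deviation = max(abs(ys[i] - xs[i]) for i in range(11))
print(deviation)   # close to 0: the minimizer is the straight line y = x
```

The stationarity condition of the discrete sum (equal slopes on both sides of each node) is exactly the discrete analogue of the Euler-Lagrange equation for this F.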

We may write the Euler-Lagrange equation in another form. Let

    F_a(y, y', x) \equiv \frac{\partial F}{\partial y'}. (14)

Then

    \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) = \frac{dF_a}{dx} = \frac{\partial F_a}{\partial x} + \frac{\partial F_a}{\partial y}\, y' + \frac{\partial F_a}{\partial y'}\, y'' (15)
    = \frac{\partial^2 F}{\partial x\, \partial y'} + \frac{\partial^2 F}{\partial y\, \partial y'}\, y' + \frac{\partial^2 F}{\partial y'^2}\, y''. (16)

Hence the Euler-Lagrange equation may be written:

    \frac{\partial^2 F}{\partial y'^2}\, y'' + \frac{\partial^2 F}{\partial y\, \partial y'}\, y' + \frac{\partial^2 F}{\partial x\, \partial y'} - \frac{\partial F}{\partial y} = 0. (17)

Let us now apply this to the brachistochrone problem, finding the extremum of:

    \sqrt{2g}\, T = \int_0^{x_b} \sqrt{\frac{1 + y'^2}{y}}\, dx. (18)

That is:

    F(y, y', x) = \sqrt{\frac{1 + y'^2}{y}}. (19)

Notice that, in this case, F has no explicit dependence on x, and we can take a short-cut. Starting with the Euler-Lagrange equation, if F has no explicit x-dependence we find:

    0 = \left[\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right] y' (20)


    = \frac{\partial F}{\partial y}\, y' - y' \frac{d}{dx}\frac{\partial F}{\partial y'} (21)
    = \frac{dF}{dx} - \frac{\partial F}{\partial y'}\, y'' - y' \frac{d}{dx}\frac{\partial F}{\partial y'} (22)
    = \frac{d}{dx}\left(F - y' \frac{\partial F}{\partial y'}\right). (23)

Hence,

    F - y' \frac{\partial F}{\partial y'} = \text{constant} = C. (24)

In this case,

    y' \frac{\partial F}{\partial y'} = (y')^2 \Big/ \sqrt{y\left(1 + y'^2\right)}. (25)

Thus,

    \sqrt{\frac{1 + y'^2}{y}} - (y')^2 \Big/ \sqrt{y\left(1 + y'^2\right)} = C, (26)

or

    y\left(1 + y'^2\right) = \frac{1}{C^2} \equiv A. (27)

Solving for x, we find

    x = \int \sqrt{\frac{y}{A - y}}\, dy. (28)

We may perform this integration with the trigonometric substitution: y = \frac{A}{2}(1 - \cos\theta) = A \sin^2\frac{\theta}{2}. Then,

    x = \int \sqrt{\frac{\sin^2(\theta/2)}{1 - \sin^2(\theta/2)}}\; A \sin\frac{\theta}{2}\cos\frac{\theta}{2}\, d\theta (29)
    = A \int \sin^2\frac{\theta}{2}\, d\theta (30)
    = \frac{A}{2}(\theta - \sin\theta) + B. (31)

We determine integration constant B by letting \theta = 0 at y = 0. We chose our coordinates so that x_a = y_a = 0, and thus B = 0. Constant A is determined by requiring that the curve pass through (x_b, y_b):

    x_b = \frac{A}{2}\left(\theta_b - \sin\theta_b\right), (32)

    y_b = \frac{A}{2}\left(1 - \cos\theta_b\right). (33)


This pair of equations determines A and \theta_b. The brachistochrone is given parametrically by:

    x = \frac{A}{2}(\theta - \sin\theta), (34)

    y = \frac{A}{2}(1 - \cos\theta). (35)

In classical mechanics, Hamilton's principle for conservative systems, that the action is stationary, gives the familiar Euler-Lagrange equations of classical mechanics. For a system with generalized coordinates q_1, q_2, \ldots, q_n, the action is

    S = \int_{t_0}^{t_1} L\left(\{q_i\}, \{\dot q_i\}, t\right) dt, (36)

where L is the Lagrangian. Requiring S to be stationary yields:

    \frac{d}{dt}\left(\frac{\partial L}{\partial \dot q_i}\right) - \frac{\partial L}{\partial q_i} = 0, \quad i = 1, 2, \ldots, n. (37)

    3 Relation to the Sturm-Liouville Problem

Suppose we have the Sturm-Liouville operator:

    L = \frac{d}{dx}\, p(x)\, \frac{d}{dx} - q(x), (38)

with p(x) \ge 0, q(x) \ge 0, and x \in (0, U). We are interested in solving the inhomogeneous equation Lf = g, where g is a given function.

Consider the functional

    J = \int_0^U \left(p f'^2 + q f^2 + 2gf\right) dx. (39)

The Euler-Lagrange equation for J to be an extremum is:

    \frac{\partial F}{\partial f} - \frac{d}{dx}\left(\frac{\partial F}{\partial f'}\right) = 0, (40)

where F = p f'^2 + q f^2 + 2gf. We have

    \frac{\partial F}{\partial f} = 2qf + 2g, (41)

    \frac{d}{dx}\left(\frac{\partial F}{\partial f'}\right) = 2p'f' + 2pf''. (42)


Substituting into the Euler-Lagrange equation gives

    \frac{d}{dx}\left[p(x)\, \frac{d}{dx} f(x)\right] - q(x) f(x) = g(x). (43)

This is the Sturm-Liouville equation! That is, the Sturm-Liouville differential equation is just the Euler-Lagrange equation for the functional J.

    We have the following theorem:

Theorem: The solution to

    \frac{d}{dx}\left[p(x)\, \frac{d}{dx} f(x)\right] - q(x) f(x) = g(x), (44)

where p(x) > 0, q(x) \ge 0, with boundary conditions f(0) = a and f(U) = b, exists and is unique.

Proof: First, suppose there exist two solutions, f_1 and f_2. Then d = f_1 - f_2 must satisfy the homogeneous equation:

    \frac{d}{dx}\left[p(x)\, \frac{d}{dx} d(x)\right] - q(x) d(x) = 0, (45)

with homogeneous boundary conditions d(0) = d(U) = 0. Now multiply Equation 45 by d(x) and integrate:

    \int_0^U d(x)\, \frac{d}{dx}\left(p(x)\, \frac{d}{dx} d(x)\right) dx - \int_0^U q(x)\, d(x)^2\, dx = 0,

where, integrating by parts,

    \int_0^U d(x)\, \frac{d}{dx}\left(p(x)\, \frac{d}{dx} d(x)\right) dx = \left[d(x)\, p(x)\, \frac{d\, d(x)}{dx}\right]_0^U - \int_0^U \left(\frac{d\, d(x)}{dx}\right)^2 p(x)\, dx = -\int_0^U p\, d'^2\, dx. (46)

Thus,

    \int_0^U \left(p\, d'^2 + q\, d^2\right) dx = 0. (47)

Since p d'^2 \ge 0 and q d^2 \ge 0, we must thus have p d'^2 = 0 and q d^2 = 0 in order for the integral to vanish. Since p > 0 and p d'^2 = 0, it must be true that d' = 0, that is, d is a constant. But d(0) = 0, therefore d(x) = 0. The solution, if it exists, is unique.

The issue for existence is the boundary conditions. We presume that a solution to the differential equation exists for some boundary conditions, and must show that a solution exists for the given boundary condition. From elementary calculus we know that two linearly independent solutions to the homogeneous differential equation exist. Let h_1(x) be a non-trivial solution to the homogeneous differential equation with h_1(0) = 0. This must be possible because we can take a suitable linear combination of our two solutions. Because the solution to the inhomogeneous equation is unique, it must be true that h_1(U) \ne 0. Likewise, let h_2(x) be a solution to the homogeneous equation with h_2(U) = 0 (and therefore h_2(0) \ne 0). Suppose f_0(x) is a solution to the inhomogeneous equation satisfying some boundary condition. Form the function:

    f(x) = f_0(x) + k_1 h_1(x) + k_2 h_2(x). (48)

We adjust constants k_1 and k_2 in order to satisfy the desired boundary condition:

    a = f_0(0) + k_2 h_2(0), (49)

    b = f_0(U) + k_1 h_1(U). (50)

That is,

    k_1 = \frac{b - f_0(U)}{h_1(U)}, (51)

    k_2 = \frac{a - f_0(0)}{h_2(0)}. (52)

We have demonstrated existence of a solution.

This discussion leads us to the variational calculus theorem:

Theorem: For continuously differentiable functions in (0, U) satisfying f(0) = a and f(U) = b, the functional

    J = \int_0^U \left(p f'^2 + q f^2 + 2gf\right) dx, (53)

with p(x) > 0 and q(x) \ge 0, attains its minimum if and only if f(x) is the solution of the corresponding Sturm-Liouville equation.

Proof: Let s(x) be the unique solution to the Sturm-Liouville equation satisfying the given boundary conditions. Let f(x) be any other continuously differentiable function satisfying the boundary conditions. Then d(x) \equiv f(x) - s(x) is continuously differentiable and d(0) = d(U) = 0.


  • Solving for f , squaring, and doing the same for the dervative equation,yields

    f 2 = d2 + s2 + 2sd, (54)

    f 2 = d2 + s2 + 2sd. (55)

    Let

    J J(f) J(s) (56)=

    U0

    (pf 2 + qf 2 + 2gf ps2 qs2 2gs

    )dx (57)

    = U0

    [p(d2 + 2sd

    )+ q

    (d2 + 2ds

    )+ 2gf

    ]dx (58)

    = 2 U0

    (pds + qds + gd) dx + U0

    (pd2 + qd2

    )dx. (59)

    But

    U0

    (pds + qds+ gd) dx = dpsU0+ U0

    [d(x) d

    dx(ps) + qds + gd

    ]dx

    = U0

    d(x)

    [ d

    dx(ps) + qs + g

    ]dx, since d(0) = d(U) = 0

    = 0; integrand is zero by the dierential equation. (60)

    Thus, we have that

    J = U0

    (pd + qd2

    )dx 0. (61)

    In other words, f does no better than s, hence s corresponds to aminimum. Furthermore, if J = 0, then d = 0, since p > 0 implies d

    must be zero, and therefore d is constant, but we know d(0) = 0, henced = 0. Thus, f = s at the minimum.

    4 The Rayleigh-Ritz Method

Consider the Sturm-Liouville problem:

    \frac{d}{dx}\left[p(x)\, \frac{d}{dx} f(x)\right] - q(x) f(x) = g(x), (62)

with p > 0, q \ge 0, and specified boundary conditions. For simplicity here, let's assume f(0) = f(U) = 0. Imagine expanding the solution in some set of complete functions, \{\phi_n(x)\} (not necessarily eigenfunctions):

    f(x) = \sum_{n=1}^{\infty} A_n \phi_n(x).

We have just shown that our problem is equivalent to minimizing

    J = \int_0^U \left(p f'^2 + q f^2 + 2gf\right) dx. (63)

Substitute in our expansion, noting that

    p f'^2 = \sum_m \sum_n A_m A_n\, p(x)\, \phi'_m(x)\, \phi'_n(x). (64)

Let

    C_{mn} \equiv \int_0^U p\, \phi'_m \phi'_n\, dx, (65)

    B_{mn} \equiv \int_0^U q\, \phi_m \phi_n\, dx, (66)

    G_n \equiv \int_0^U g\, \phi_n\, dx. (67)

Assume that we can interchange the sum and integral, obtaining, for example,

    \int_0^U p f'^2\, dx = \sum_m \sum_n C_{mn} A_m A_n. (68)

Then

    J = \sum_m \sum_n \left(C_{mn} + B_{mn}\right) A_m A_n + 2 \sum_n G_n A_n. (69)

Let D_{mn} \equiv C_{mn} + B_{mn} = D_{nm}. The D_{mn} and G_n are known, at least in principle. We wish to solve for the expansion coefficients \{A_n\}. To accomplish this, use the condition that J is a minimum, that is,

    \frac{\partial J}{\partial A_n} = 0, \quad \forall n. (70)

Thus,

    0 = \frac{\partial J}{\partial A_n} = 2 \sum_{m=1}^{\infty} D_{nm} A_m + 2 G_n, \quad n = 1, 2, \ldots (71)

This is an infinite system of coupled inhomogeneous equations. If D_{nm} is diagonal, the solution is simple:

    A_n = -G_n / D_{nn}. (72)


The reader is encouraged to demonstrate that this occurs if the \phi_n are the eigenfunctions of the Sturm-Liouville operator.

It may be too difficult to solve the eigenvalue problem. In this case, we can look for an approximate solution via the "Rayleigh-Ritz" approach: Choose some finite number of linearly independent functions \{\phi_1(x), \phi_2(x), \ldots, \phi_N(x)\}. In order to find a function

    \bar f(x) = \sum_{n=1}^{N} A_n \phi_n(x) (73)

that approximates closely f(x), we find the values for A_n that minimize

    J(\bar f) = \sum_{n,m=1}^{N} D_{nm} A_m A_n + 2 \sum_{n=1}^{N} G_n A_n, (74)

where now

    D_{nm} \equiv \int_0^U \left(p\, \phi'_n \phi'_m + q\, \phi_n \phi_m\right) dx, (75)

    G_n \equiv \int_0^U g\, \phi_n\, dx. (76)

The minimum of J(\bar f) is at:

    \sum_{m=1}^{N} D_{nm} A_m + G_n = 0, \quad n = 1, 2, \ldots, N. (77)

In this method, it is important to make a good guess for the set of functions \{\phi_n\}.

It may be remarked that the Rayleigh-Ritz method is similar in spirit, but different from, the variational method we typically introduce in quantum mechanics, for example when attempting to compute the ground state energy of the helium atom. In that case, we adjust parameters in a non-linear function, while in the Rayleigh-Ritz method we adjust the linear coefficients in an expansion.
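To make the procedure concrete, here is a small sketch of mine (the problem data are assumed, not from the notes): solve (p f')' - q f = g with p = 1, q = 0, g = -1 on (0, 1), f(0) = f(1) = 0, using the two trial functions \phi_1 = x(1-x) and \phi_2 = x^2(1-x). The exact solution x(1-x)/2 lies in the span of \phi_1, so solving the linear system (77) should recover A_1 = 1/2, A_2 = 0.

```python
def integrate(f, a=0.0, b=1.0, n=20_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Trial functions vanishing at x = 0 and x = 1, and their derivatives.
phi  = [lambda x: x * (1 - x), lambda x: x**2 * (1 - x)]
dphi = [lambda x: 1 - 2 * x,   lambda x: 2 * x - 3 * x**2]
g = lambda x: -1.0                       # assumed p = 1, q = 0, g = -1

# D_nm = ∫ φ'_n φ'_m dx and G_n = ∫ g φ_n dx; then solve D A = -G (2x2 Cramer's rule).
D = [[integrate(lambda x: dphi[n](x) * dphi[m](x)) for m in range(2)] for n in range(2)]
G = [integrate(lambda x: g(x) * phi[n](x)) for n in range(2)]
det = D[0][0] * D[1][1] - D[0][1] * D[1][0]
A1 = (-G[0] * D[1][1] + G[1] * D[0][1]) / det
A2 = (-G[1] * D[0][0] + G[0] * D[1][0]) / det
print(A1, A2)   # ≈ 0.5, ≈ 0.0 — recovers the exact solution x(1-x)/2
```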

    5 Adding Constraints

As in ordinary extremum problems, constraints introduce correlations, now in the possible variations of the function at different points. As with the ordinary problem, we may employ the method of Lagrange multipliers to impose the constraints.


We consider the case of the "isoperimetric problem": to find the stationary points of the functional

    J = \int_a^b F(f, f', x)\, dx, (78)

in variations \delta f vanishing at x = a, b, with the constraint that

    C \equiv \int_a^b G(f, f', x)\, dx (79)

is constant under variations.

We have the following theorem:

Theorem: (Euler) The function f that solves this problem also makes the functional I = J + \lambda C stationary for some \lambda, as long as \delta C / \delta f \ne 0 (i.e., f does not satisfy the Euler-Lagrange equation for C).

Proof: (partial) We make stationary the integral:

    I = J + \lambda C = \int_a^b (F + \lambda G)\, dx. (80)

That is, f must satisfy

    \frac{\partial F}{\partial f} - \frac{d}{dx}\frac{\partial F}{\partial f'} + \lambda\left(\frac{\partial G}{\partial f} - \frac{d}{dx}\frac{\partial G}{\partial f'}\right) = 0. (81)

Multiply by the variation \delta f(x) and integrate:

    \int_a^b \left(\frac{\partial F}{\partial f} - \frac{d}{dx}\frac{\partial F}{\partial f'}\right) \delta f(x)\, dx + \lambda \int_a^b \left(\frac{\partial G}{\partial f} - \frac{d}{dx}\frac{\partial G}{\partial f'}\right) \delta f(x)\, dx = 0. (82)

Here, \delta f(x) is arbitrary. However, only those variations that keep C invariant are allowed (e.g., take the partial derivative with respect to \epsilon and require it to be zero):

    \delta C = \int_a^b \left(\frac{\partial G}{\partial f} - \frac{d}{dx}\frac{\partial G}{\partial f'}\right) \delta f(x)\, dx = 0. (83)

    5.1 Example: Catenary

A heavy chain is suspended from endpoints at (x_1, y_1) and (x_2, y_2). What curve describes its equilibrium position, under a uniform gravitational field?


The solution must minimize the potential energy:

    V = g \int_1^2 y\, dm (84)
    = g\rho \int_1^2 y\, ds (85)
    = g\rho \int_{x_1}^{x_2} y \sqrt{1 + y'^2}\, dx, (86)

where \rho is the linear density of the chain, and the distance element along the chain is ds = dx\sqrt{1 + y'^2}.

We wish to minimize V, under the constraint that the length of the chain is L, a constant. We have,

    L = \int_1^2 ds = \int_{x_1}^{x_2} \sqrt{1 + y'^2}\, dx. (87)

To solve, let (we multiply L by \lambda \rho g and divide \rho g out of the problem)

    F(y, y', x) = y\sqrt{1 + y'^2} + \lambda\sqrt{1 + y'^2}, (88)

and solve the Euler-Lagrange equation for F.

Notice that F does not depend explicitly on x, so we again use our short cut that

    F - y'\frac{\partial F}{\partial y'} = \text{constant} = C. (89)

Thus,

    C = F - y'\frac{\partial F}{\partial y'} (90)
    = (y + \lambda)\left(\sqrt{1 + y'^2} - \frac{y'^2}{\sqrt{1 + y'^2}}\right) (91)
    = \frac{y + \lambda}{\sqrt{1 + y'^2}}. (92)

Some manipulation yields

    \frac{dy}{\sqrt{(y + \lambda)^2 - C^2}} = \frac{dx}{C}. (93)

With the substitution y + \lambda = C\cosh\theta, we obtain \theta = \frac{x + k}{C}, where k is an integration constant, and thus

    y + \lambda = C\cosh\left(\frac{x + k}{C}\right). (94)


There are three unknown constants to determine in this expression: C, k, and \lambda. We have three equations to use for this:

    y_1 + \lambda = C\cosh\left(\frac{x_1 + k}{C}\right), (95)

    y_2 + \lambda = C\cosh\left(\frac{x_2 + k}{C}\right), \text{ and} (96)

    L = \int_{x_1}^{x_2} \sqrt{1 + y'^2}\, dx. (97)
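For the symmetric special case y_1 = y_2 (a simplification of mine, not worked in the notes), symmetry gives k = -(x_1 + x_2)/2, and the length condition reduces to L = 2C \sinh(a/C) with a = (x_2 - x_1)/2. The right-hand side decreases monotonically from infinity toward 2a as C grows, so C can be found by bisection, after which \lambda follows from either endpoint equation. A sketch:

```python
import math

def catenary_C(a, L, tol=1e-12):
    """Solve L = 2 C sinh(a / C) for C; requires L > 2a (chain longer than the gap)."""
    lo, hi = 1e-6, 1.0
    while 2 * hi * math.sinh(a / hi) > L:   # grow hi until the root is bracketed
        hi *= 2
    while hi - lo > tol:                    # RHS is monotonically decreasing in C
        mid = 0.5 * (lo + hi)
        if 2 * mid * math.sinh(a / mid) > L:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Symmetric endpoints at x = +-1, y = 0, chain length L = 2 sinh(1):
# the exact solution has C = 1 (so that y + lambda = cosh x).
C = catenary_C(1.0, 2 * math.sinh(1.0))
print(C)   # ≈ 1.0
```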

    6 Eigenvalue Problems

We may treat the eigenvalue problem as a variational problem. As an example, consider again the Sturm-Liouville eigenvalue equation:

    \frac{d}{dx}\left[p(x)\, \frac{df(x)}{dx}\right] - q(x) f(x) = -\lambda w(x) f(x), (98)

with boundary conditions f(0) = f(U) = 0. This is of the form

    Lf = -\lambda w f. (99)

Earlier, we found the desired functional to make stationary was, for Lf = 0,

    I = \int_0^U \left(p f'^2 + q f^2\right) dx. (100)

We modify this to the eigenvalue problem with q \to q - \lambda w, obtaining

    I = \int_0^U \left(p f'^2 + q f^2 - \lambda w f^2\right) dx, (101)

which possesses the Euler-Lagrange equation giving the desired Sturm-Liouville equation. Note that \lambda is an unknown parameter; we want to determine it.

It is natural to regard the eigenvalue problem as a variational problem with constraints. Thus, we wish to vary f(x) so that

    J = \int_0^U \left(p f'^2 + q f^2\right) dx (102)

is stationary, with the constraint

    C = \int_0^U w f^2\, dx = \text{constant}. (103)


Notice here that we may take C = 1, corresponding to eigenfunctions f normalized with respect to weight w.

Let's attempt to find approximate solutions using the Rayleigh-Ritz method. Expand

    f(x) = \sum_{n=1}^{\infty} A_n u_n(x), (104)

where u_n(0) = u_n(U) = 0. The u_n are some set of expansion functions, not the eigenfunctions; if they were the eigenfunctions, then the problem would already be solved! Substitute this into I, giving

    I = \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \left(C_{mn} - \lambda D_{mn}\right) A_m A_n, (105)

where

    C_{mn} \equiv \int_0^U \left(p\, u'_m u'_n + q\, u_m u_n\right) dx, (106)

    D_{mn} \equiv \int_0^U w\, u_m u_n\, dx. (107)

Requiring I to be stationary,

    \frac{\partial I}{\partial A_m} = 0, \quad m = 1, 2, \ldots, (108)

yields the infinite set of coupled homogeneous equations:

    \sum_{n=1}^{\infty} \left(C_{mn} - \lambda D_{mn}\right) A_n = 0, \quad m = 1, 2, \ldots (109)

This is perhaps no simpler to solve than the original differential equation. However, we may make approximate solutions for f(x) by selecting a finite set of linearly independent functions \phi_1, \ldots, \phi_N and letting

    \bar f(x) = \sum_{n=1}^{N} A_n \phi_n(x). (110)

Solve for the best approximation of this form by finding those \{A_n\} that satisfy

    \sum_{n=1}^{N} \left(C_{mn} - \bar\lambda D_{mn}\right) A_n = 0, \quad m = 1, 2, \ldots, N, (111)

where

    C_{mn} \equiv \int_0^U \left(p\, \phi'_m \phi'_n + q\, \phi_m \phi_n\right) dx, (112)

    D_{mn} \equiv \int_0^U w\, \phi_m \phi_n\, dx. (113)


This looks like N equations in the N + 1 unknowns \bar\lambda, \{A_n\}, but the overall normalization of the A_n's is arbitrary. Hence there are enough equations in principle, and we obtain

    \bar\lambda = \frac{\sum_{m,n=1}^{N} C_{mn} A_m A_n}{\sum_{m,n=1}^{N} D_{mn} A_m A_n}. (114)

Notice the similarity of Eqn. 114 with

    \lambda = \frac{\int_0^U \left(p f'^2 + q f^2\right) dx}{\int_0^U w f^2\, dx} = \frac{J(f)}{C(f)}. (115)

This follows since I = 0 for f a solution to the Sturm-Liouville equation:

    I = \int_0^U \left(p f'^2 + q f^2 - \lambda w f^2\right) dx
    = \left[p f f'\right]_0^U + \int_0^U \left[-f\, \frac{d}{dx}(p f') + q f^2 - \lambda w f^2\right] dx
    = 0 + \int_0^U \left(-q f^2 + \lambda w f^2 + q f^2 - \lambda w f^2\right) dx
    = 0, (116)

where we have used both the boundary condition f(0) = f(U) = 0 and the Sturm-Liouville equation \frac{d}{dx}(p f') = q f - \lambda w f to obtain the third line. Also,

    \bar\lambda = \frac{J(\bar f)}{C(\bar f)}, (117)

since, for example,

    J(\bar f) = \int_0^U \left(p \bar f'^2 + q \bar f^2\right) dx
    = \int_0^U \left(p \sum_{m,n} A_n A_m \phi'_m \phi'_n + q \sum_{m,n} A_n A_m \phi_m \phi_n\right) dx
    = \sum_{m,n} C_{mn} A_n A_m. (118)

That is, if \bar f is close to an eigenfunction f, then \bar\lambda should be close to an eigenvalue \lambda.

Let's try an example: Find the lowest eigenvalue of f'' = -\lambda f, with boundary conditions f(\pm 1) = 0. We of course readily see that the first eigenfunction is \cos(\pi x/2), with \lambda_1 = \pi^2/4, but let's try our method to see how we do. For simplicity, we'll try a Rayleigh-Ritz approximation with only one term in the sum.

As we noted earlier, it is a good idea to pick the functions with some care. In this case, we know that the lowest eigenfunction won't wiggle much, and a good guess is that it will be symmetric with no zeros in the interval (-1, 1). Such a function, which satisfies the boundary conditions, is:

    \bar f(x) = A\left(1 - x^2\right), (119)

and we'll try it. In the Sturm-Liouville form, we have p(x) = 1, q(x) = 0, w(x) = 1. With N = 1, we have \phi_1 \equiv \phi = 1 - x^2, and

    C \equiv C_{11} = \int_{-1}^{1} \left(p \phi'^2 + q \phi^2\right) dx (120)
    = \int_{-1}^{1} 4x^2\, dx = \frac{8}{3}. (121)

Also,

    D \equiv D_{11} = \int_{-1}^{1} w \phi^2\, dx = \int_{-1}^{1} \left(1 - x^2\right)^2 dx = \frac{16}{15}. (122)

The equation

    \sum_{n=1}^{N} \left(C_{mn} - \bar\lambda D_{mn}\right) A_n = 0, \quad m = 1, 2, \ldots, N, (123)

becomes

    \left(C - \bar\lambda D\right) A = 0. (124)

If A \ne 0, then

    \bar\lambda = \frac{C}{D} = \frac{5}{2}. (125)

We are within 2% of the actual lowest eigenvalue \lambda_1 = \pi^2/4 = 2.467. Of course this rather good result is partly due to our good fortune at picking a close approximation to the actual eigenfunction, as may be seen in Fig. 3.
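The numbers above are easy to reproduce; a quick sketch of mine repeating the computation by numerical quadrature:

```python
import math

def integrate(f, a=-1.0, b=1.0, n=20_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

phi  = lambda x: 1 - x * x       # trial function, vanishes at x = +-1
dphi = lambda x: -2 * x

C = integrate(lambda x: dphi(x) ** 2)    # p = 1, q = 0  ->  C = 8/3
D = integrate(lambda x: phi(x) ** 2)     # w = 1         ->  D = 16/15
lam = C / D

print(C, D, lam)   # ≈ 2.667, 1.067, 2.5 — versus the exact lambda_1 = pi^2/4 ≈ 2.467
```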


Figure 3: Rayleigh-Ritz eigenvalue estimation example, comparing the exact solution with the guessed approximation.

    7 Extending to Multiple Dimensions

It is possible to generalize our variational problem to multiple independent variables, e.g.,

    I(u) = \iint_D F\left(u, \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, x, y\right) dx\, dy, (126)

where u = u(x, y), and bounded region D has u(x, y) specified on its boundary S. We wish to find u such that I is stationary with respect to variation of u.

We proceed along the same lines as before, letting

    u(x, y) \to u(x, y) + \epsilon h(x, y), (127)

where h(x, y)|_S = 0. Look for stationary I: \left.\frac{dI}{d\epsilon}\right|_{\epsilon=0} = 0. Let

    u_x \equiv \frac{\partial u}{\partial x}, \quad u_y \equiv \frac{\partial u}{\partial y}, \quad h_x \equiv \frac{\partial h}{\partial x}, \quad \text{etc.} (128)

Then

    \frac{dI}{d\epsilon} = \iint_D \left(\frac{\partial F}{\partial u}\, h + \frac{\partial F}{\partial u_x}\, h_x + \frac{\partial F}{\partial u_y}\, h_y\right) dx\, dy. (129)


We want to integrate by parts the last two terms, in analogy with the single-variable case. Recall Green's theorem:

    \oint_S \left(P\, dx + Q\, dy\right) = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dx\, dy, (130)

and let

    P = -h\frac{\partial F}{\partial u_y}, \quad Q = h\frac{\partial F}{\partial u_x}. (131)

With some algebra, we find that

    \frac{dI}{d\epsilon} = \oint_S h\left(\frac{\partial F}{\partial u_x}\, dy - \frac{\partial F}{\partial u_y}\, dx\right) + \iint_D h\left[\frac{\partial F}{\partial u} - \frac{D}{Dx}\left(\frac{\partial F}{\partial u_x}\right) - \frac{D}{Dy}\left(\frac{\partial F}{\partial u_y}\right)\right] dx\, dy, (132)

where

    \frac{Df}{Dx} \equiv \frac{\partial f}{\partial x} + \frac{\partial f}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial f}{\partial u_x}\frac{\partial^2 u}{\partial x^2} + \frac{\partial f}{\partial u_y}\frac{\partial^2 u}{\partial x\, \partial y} (133)

is the total partial derivative with respect to x.

The boundary integral over S is zero, since h(x \in S) = 0. The remaining double integral over D must be zero for arbitrary functions h, and hence,

    \frac{\partial F}{\partial u} - \frac{D}{Dx}\left(\frac{\partial F}{\partial u_x}\right) - \frac{D}{Dy}\left(\frac{\partial F}{\partial u_y}\right) = 0. (134)

This result is once again called the Euler-Lagrange equation.
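As an illustration of mine (not in the notes): for the special choice F = u_x^2 + u_y^2 (the Dirichlet energy), equation (134) reduces to Laplace's equation u_{xx} + u_{yy} = 0, which can be solved on a grid by relaxation. With boundary data u = xy on the unit square, the harmonic interior is exactly u = xy, so the iteration should reproduce it:

```python
# Jacobi relaxation for Laplace's equation on the unit square.
n = 11                                   # grid points per side, spacing 1/(n-1)
xs = [i / (n - 1) for i in range(n)]
u = [[xs[i] * xs[j] for j in range(n)] for i in range(n)]   # boundary data u = x*y
for i in range(1, n - 1):                # wipe the interior so the iteration has work to do
    for j in range(1, n - 1):
        u[i][j] = 0.0

for _ in range(2000):                    # u_ij <- average of the 4 neighbours
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    u = new

err = max(abs(u[i][j] - xs[i] * xs[j]) for i in range(n) for j in range(n))
print(err)   # ≈ 0: the minimizer of the Dirichlet energy is harmonic
```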

    8 Exercises

1. Suppose you have a string of length L. Pin one end at (x, y) = (0, 0) and the other end at (x, y) = (b, 0). Form the string into a curve such that the area between the string and the x axis is maximal. Assume that b and L are fixed, with L > b. What is the curve formed by the string?

2. We considered the application of the Rayleigh-Ritz method to finding approximate eigenvalues satisfying

    y'' = -\lambda y, (135)

with boundary conditions y(-1) = y(1) = 0. Repeat the method, now with two functions:

    \phi_1(x) = 1 - x^2, (136)

    \phi_2(x) = x^2\left(1 - x^2\right). (137)

You should get estimates for two eigenvalues. Compare with the exact eigenvalues, including a discussion of which eigenvalues you have managed to approximate and why. If the eigenvalues you obtain are not the two lowest, suggest another function you might have used to get the lowest two.

3. The Bessel differential equation is

    \frac{d^2 y}{dx^2} + \frac{1}{x}\frac{dy}{dx} + \left(k^2 - \frac{m^2}{x^2}\right) y = 0. (138)

A solution is y(x) = J_m(kx), the mth order Bessel function. Assume a boundary condition y(1) = 0. That is, k is a root of J_m(x). Use the Rayleigh-Ritz method to estimate the first non-zero root of J_3(x). I suggest you try to do this with one test function, rather than a sum of multiple functions. But you must choose the function with some care. In particular, note that J_3 has a third-order root at x = 0. You should compare your result with the actual value of 6.379. If you get within, say, 15% of this, declare victory.
