0 ::; n - princeton universitymoll/crandall-primer.pdf · on open sets n c irn . ... here we aim at...

43
VISCOSITY SOLUTIONS: A PRIMER by Michael G. Crandall'U Department of Mathematics University of California, Santa Barbara Santa Barbara, CA 93106 o. Introduction These lectures present the most basic theory of "viscosity solutions" of fully nonlinear scalar partial differential equations of first and second order. Other contributions to this volume develop some of the amazing range of applications in which viscosity solutions play an essential role and various refinements of this basic material. In this introductory section we describe the class of equations which are treated within the theory and then our plan of presentation. The theory applies to scalar second order partial differential equations (PDE) F(x, u; Du, D 2u) = 0 on open sets n c IR N . The unknown function u : n ----+ IR is real-valued, Du corresponds to the gradient (U XI " '" U x N) of u and D 2 u corresponds to the Hessian matrix (ux;.xJ of second derivatives of u. Consistently, F is a mapping F : n x IR x IR N x S(N) ----+ IR where S(N) is the set of real symmetric N x N matrices. We say that Du (D 2 u) "corresponds" to the gradient (respectively, the Hessian) because, as we shall see, solutions u may not be differentiable, let alone twice differentiable, and still "solve" (PDE). We write F(x,r,p,X) to indicate the value of F at (x, r,p, X) E n x IR x IR N x S(N). (PDE) is said to be fully nonlinear to emphasize that F(x, T, p, X) need not be linear in any argument, including the X in the second derivative slot. F is called degenerate elliptic if it is nonincreasing in its matrix argument: F(x,r,p,X)::; F(x,r,p,Y) for Y::; x. The usual ordering is used on S(N); that is Y ::; X means 0 ::; for E IR N where \-,.) is the Euclidean inner product. If F is degenerate elliptic, we say that it is proper if it is also nondecreasing in r. That is, F is proper if F(x, s, p, X) < F(x, T, p, Y) for Y::; X, »< r. (t) Supported in part by NSF Grant DMS93-02995 and in part by the author's appointment as a Miller Professor at the University of California, Berkeley for Fall 1996.

Upload: leminh

Post on 15-May-2018

217 views

Category:

Documents


3 download

TRANSCRIPT

VISCOSITY SOLUTIONS: A PRIMER

by

Michael G. Crandall'UDepartment of Mathematics

University of California, Santa BarbaraSanta Barbara, CA 93106

o. Introduction

These lectures present the most basic theory of "viscosity solutions" of fullynonlinear scalar partial differential equations of first and second order. Othercontributions to this volume develop some of the amazing range of applicationsin which viscosity solutions play an essential role and various refinements of thisbasic material.

In this introductory section we describe the class of equations which aretreated within the theory and then our plan of presentation.

The theory applies to scalar second order partial differential equations

(PDE) F(x, u; Du, D2u) = 0

on open sets n c IRN . The unknown function u : n ----+ IR is real-valued, Ducorresponds to the gradient (U X I " ' " Ux N) of u and D2u corresponds to theHessian matrix (ux;.xJ of second derivatives of u. Consistently, F is a mapping

F : n x IR x IRN x S(N) ----+ IR

where S(N) is the set of real symmetric N x N matrices. We say that Du(D 2u) "corresponds" to the gradient (respectively, the Hessian) because, as weshall see, solutions u may not be differentiable, let alone twice differentiable,and still "solve" (PDE). We write F(x,r,p,X) to indicate the value of F at(x, r,p, X) E n x IR x IRN x S(N). (PDE) is said to be fully nonlinear toemphasize that F(x, T, p, X) need not be linear in any argument, including theX in the second derivative slot.

F is called degenerate elliptic if it is nonincreasing in its matrix argument:

F(x,r,p,X)::; F(x,r,p,Y) for Y::; x.The usual ordering is used on S(N); that is Y ::; X means

0 ::; for E IRN

where \-,.) is the Euclidean inner product. If F is degenerate elliptic, we saythat it is proper if it is also nondecreasing in r. That is, F is proper if

F(x, s, p, X) < F(x, T, p,Y) for Y::; X, »< r.

(t) Supported in part by NSF Grant DMS93-02995 and in part by the author'sappointment as a Miller Professor at the University of California, Berkeley forFall 1996.

2

As a first example, F might be of first order

F(x, r,p, X) = H(x, r,p);

every first order F is obviously (very) degenerate elliptic, and then proper if itis nondecreasing in r. For an explicit example, the equation Ut + (ux )2 = 0 with(t,x) Ern? is a proper equation (we are thinking of (t,x) as (X1,X2) above). Onthe other hand, the Burger's equation Ut + UUx = 0 is not proper, for it is notmonotone in u: We refer to proper first order equations H(x, u, Du) = 0 andUt+H(x, u; Du) = 0 as "Hamilton-Jacobi" equations.

Famous second order examples are given by F(x, r, p, X) = -Trace (X) andF(x,r,p,X) = -Trace (X) - f(x) where f is given; the pdes are then Laplace'sequation and Poisson's equation:

N

F(D2u) = - LUXiXi = = 0 and - 6u = f(x)i=l

The equations are degenerate elliptic since X ---> Trace (X) is monotone increas-ing on S(N) . We do not rule out the linear case! Incorporating t as an additionalvariable as above, the heat equation Ut - 6u = 0 provides another famous ex-ample. The convention used here, that Du; D2u stand for the spatial gradientand spatial Hessian, will be in force whenever we write "ii; +F(x, u, Du, D2u)".

Note the preference implied by these examples; we prefer -6 to 6. Areason is that (in various settings), -6 has an order preserving inverse. Thisconvention is not uniform; for example, Souganidis [35] does not follow it andreverses the inequality in the definition of degenerate ellipticity.

More generally, the linear equation

N N

- L ai,j(x)UXi,Xj +L bi(x)UXi + c(x)u - f(x) = 0i,j=l i=l

may be written in the form F = 0 by setting

(0.1) F(x,r,p,X) = -Trace (A(x)X) + {b(x),p) +c(x)r - f(x)

where A(x) is a symmetric matrix with the elements ai,J(x) andb(x) = (b1(x), ... ,bN(x». This F is degenerate elliptic if 0 :::: A(x) and properif also 0 :::: c(x).

In the text we will pose some exercises which are intended to help readersorient themselves (and to replace boring text with pleasant activities). We vi-olate all conventions by doing so even in this introduction. Some exercises are"starred" which means "please do it now" if the fact is not familiar.

Exercise 0.1.* Verify that F given in (0.1) is degenerate elliptic if and only ifA(x) is nonnegative.

The second order examples given above are associated with the "maximumprinciple". Indeed, the calculus of the maximum principle is a fundamental ideain the entire theory.

3

Exercise 0.2.* Show that F is proper if and only if whenever ip, if! E C2

and sp - if! has a nonnegative maximum (equivalently, if! - ip has a nonpositiveminimum) at X, then

F(x, if!(x) ,Dif!(x) ,D2if!(x)) :s: F(x, cp(x), Dcp(x), D2cp(x)).

So far, we have presented nonlinear first order examples and linear secondorder examples. However, the class of proper equations is very rich. Indeed, ifF, G are both proper, then so is AF + JlG for 0 :s: A,u. More interesting is thefollowing simple fact: if Fa ,{3 is proper for ex E A, /3 E B (some index sets), thenso is

F = sup inf Fa {3aEA{3EB '

provided only it is finite. This generality is essential to applications of thetheory in differential games (see Bardi [2]), while applications in control theorycorrespond to the case "Fa" in which there is only one index (see Bardi [2] andSoner [34]).

For example, maxju, + IDuI 2 - g(x), -6.u - f(x)) = 0 is a proper equation.The other lecture series will present many examples of scientific significance. Wehave only attempted here to indicate that that class of proper equations is broadand interesting.

Here we aim at a clear and congenial presentation of the most basic ele-ments of the theory of viscosity solutions of proper equations F = O. These arethe notion of a viscosity solution, maximum principle type comparison resultsfor viscosity solutions, and existence results for viscosity solutions via Perron'smethod. We do not aim at completeness or technical generality, which oftendistract from ideas.

The text is organized in sections, many of which are quite brief. The de-scriptions below contain remarks about the logic of the presentation. By thenumbers, the topics are:

Section 1: An illustration of the need to be able to consider nondifferentiablefunctions as solutions of proper fully nonlinear equations is given using first orderexamples.

Section 2: The notions of viscosity subsolutions, supersolutions and solutionsare presented. The convention that the modifier "viscosity" will be droppedthereafter in the text is introduced. It is essential to deal with semicontinuousfunctions in the theory, and this generality appears here.

Section 3: Striking general existence and uniqueness theorems are presentedwithout proof to indicate the success of viscosity solutions in this arena. Thecontrast with the examples in Section 1 is dramatic.

Sections 4, 5, 6: A primary test of a notion of generalized solutions is whetheror not appropriate uniqueness results can be obtained (when suitable side con-ditions - boundary conditions, growth conditions, initial conditions, etc. - aresatisfied). Actually, one wants a bit more here, that is the sort of comparisontheorems which follow from the maximum principle. Basic arguments needed

4

in proofs of comparison results for viscosity solutions of first order stationaryproblems (those without "t") are presented here and typical results are deduced.Section 4 concerns the Dirichlet problem, Section 5 concerns bounded solutionsof a problem in rnN , and Section 6 provides an example of treating unboundedsolutions. The second order case is more complex and is not taken up until Sec­tion 10. However, nothing is wasted, and all the arguments presented in thesesections are invoked in the second order setting.

Section 7: The notions of Section 2 are recast in a form convenient for use inthe next section and in the comparison theory in Sections 8 and 9.

Section 8: Two related results, each an important tool, are established. Onestates roughly that the supremum of a family of subsolutions is again a sub­solution, and the other that the limit of a sequence of viscosity subsolutions(supersolutions, solutions) of a converging sequence of equations (meaning theFn's converge) is a subsolution (respectively, a supersolution, solution) of thelimiting equation. We call this last theme "stability" of the notion; it is oneof the great tools of the theory in applications. The mathematics involved iselementary with a "point­set" flavor.

Section 9: Existence is proved via Perron's Method using a result of the previoussection. The existence theory presupposes "comparison". At this stage, com­parison has only been treated in the first order case, and is simply assumed forthe second order case. This does not affect either clarity or the basic argument.At this juncture, the most basic ideas have been presented with the exceptionof comparison for second order equations.

Section 10: The primary difference between the first and second order cases isexplained. Then the rather deep result which is used here to bridge the gap,called here "the Theorem on Sums" (an analytical result about semicontinuousfunctions), is stated without proof. An example is given to show how this tooltheorem renders the second order case as easy to treat as the first order case.

Section 11: The Theorem on Sums is proved.

Section 12: In the preceding sections comparison was only demonstrated forvarious equations of the form F(x, u, Du, D2u) = O. Here the main additionalpoints needed to treat Ut + F(x, u, Du, D2u) = 0 are sketched.

Regarding notation, we use standard expressions like "C 2 (n )" (the twicecontinuously differentiable functions on n) and "Ipl" (the Euclidean length of p)without further comment when it seems reasonable. With some exceptions, weminimize distracting notation.

Regarding the literature, it is too vast to try to summarize in a work likethis, which aims at presenting basic ideas and not at technical generality or greatprecision. We will basically rely on the big brother to this work, the more intense(and reportedly less friendly) [12] for its extensive references, together with thosein the other contributions to this volume. (We recommend the current work aspreparation for reading [12], especially the topics therein not taken up here.)We do give some references corresponding to the original works initiating thethemes treated here. A few more recent papers are cited as appropriate. All

5

references appear at the ends of sections. In addition, we mention the books byCabre and Caffarelli [7] and Dong [17] for recent expositions of regularity theoryof solutions, which is not treated here, as well as the classic text of Gilbarg andTrudinger [23]. Regularity theory is also one of the themes of Evans [20]. Therecent book of Barles [3] presents a complete theory of the first order case (whichitself fills a book that contains 154 references!). The book of Fleming and Soner[22, Chapters II and V] also nicely covers the basic theory. There are alternativetheories for first order equations; see, e.g., [9] and [36]. Of course, MathSciNetnow allows one to become nearly current regarding the state of the literaturerelatively easily, and one can profitably search on any of the leads given above.

A significant limitation of our presentation is that only the Dirichlet bound­ary condition is discussed at any length, and this in its usual form rather thanthe generalized version. Other boundary conditions appear in the contributionsof Bardi [2] and Soner [34] in an essential way. In addition to the references theygive, the reader may refer for example to [12, Section 7] for a discussion in thespirit of this work. Another limitation is that singular equations are not treatedat all. Equations with singularities appear in contributions of Evans [20] andSouganidis [35]. See also [12, Section 9]. Finally, only continuous solutions arediscussed here, while within applications one meets the discontinuous solutions.The contribution of Bardi [2, Section V] treats this issue, and discontinuousfunctions appear quickly in the exposition of Souganidis [35].

1. On the Need for Nonsmooth Solutions

The fact is that it is difficult to give examples of solutions (in any sense) ofequations F = 0 which are not classical solutions unless the equation is pretty"degenerate" (roughly, the monotonicity of X ­+ F(x, r,p, X) is not strongenough) or "singular" (that is, F may have discontinuities or other types of sin­gularities). (A "classical" solution of an equation F(x, u(x), Du(x), D2u(x)) = 0is a twice continuously differentiable function which satisfies the equation point­wise; if the equation is first order classical solutions arc once continuously differ­entiable; if the equation has the form V,t + F(x, u, Du, D2u) = 0, then a classicalsolution will possess the derivatives Ut, Du, D2u in the classical sense. Simi­lar remarks apply to subsolutions and supersolutions.) The reason is that theregularity theory of sufficiently nondegenerate and nonsingular equations is stillunsettled. In particular, it may be that nondegenerate nonsingular equationsF = 0 with smooth F admit only classical solutions, although some suspect thatthis is not so.

However, if the equation is first order (so very degenerate), then examplesare easy. The next exercise gives a simple problem without classical solutionsand for which there are solutions slightly less regular than "classical"; howeverallowing less regular solutions generates "nonuniqueness".

Exercise 1.1.* Put N = 1, n = (­1,1) and F(x,r,p,X) = Ipl2 - 1. Verifythat there is no classical (here this means C 1( -1,1) n C([-I, 1])) solution u ofF( u') = (u')2 ­1 = 0 on (0,1) satisfying the Dirichlet conditions u( ­1) = u(l) =O. Verify that u(x) = 1 -Ixl and v(x) = Ixl- 1 are both "strong" solutions: in

(1.1 )

6

this case, they are Lipschitz continuous and the equation is satisfied pointwiseexcept at x = 0 (so almost everywhere).

Of course, the problem in Exercise 1.1 has a unique solution within ourtheory, as we will see later (it is u(x) = 1 - Ixl).

To further establish the desirability of allowing nondifferentiable solutions,we recall the classical method of characteristics as it applies to the Cauchyproblem for a Hamilton-Jacobi equation Ut + H(Du) = 0:

{

Ut + H (Du) = 0 for x E IRN , t > 0

u(O,x) = 1jJ(x), for x E IRN.

Suppose that H is smooth and that u is a smooth solution of Ut+H (Du) = 0on t ::::: 0, x E IRN

. Define Z(t) E IRN to be the solution of the initial valueproblem

Z/(t) = :t Z(t) = DH(Du(t, Z(t))), Z(O) = x

over the largest interval for which this solution exists. A computation yields

d audt Du(t, Z(t)) = D at (t, Z(t)) + D2u(t, Z(t))Z/(t)

au= D at (t, Z(t)) + D2u(t, Z(t))DH(Du(t, Z(t))

=0

where the last equation arises from differentiating Ut +H(Du) = 0 with respectto x.

Remark 1.1. In calculations such as the above, one has to decide whetherthe the gradient Dv of a scalar function v is to be a column vector or a rowvector. There is no ambiguity about D 2v , for it is to be square and symmetricin any case. In the introduction we wrote the gradient as a row vector, but aboveinterpret it as a column vector. This is consistent with interpreting points of IRN

as column vectors while writing row vectors, and with these sloppy conventionsthe above is correct.

We conclude that Du is constant on the curve t ----+ (t, Z(t)). It then wouldfollow that Z(t) = x + tDH(D1jJ(x)). However, the resulting equation Du(t, x +tDH(D1jJ(x)) == D1jJ(x) yields contradictions as soon as we have characteristicscrossing, that is y F- z but t > 0 such that y+tDH(D1jJ(y)) = z+tDH(D1jJ(z)).In this case, one says that "shocks form" and there are no smooth solutions udefined for all t ::::: 0 in general.

Exercise 1.2. (i) Continue the analysis above to find

u(t, Z(t)) = 1jJ(x) + t( (D1jJ(x), DH(D1jJ(x))) - H(D1jJ(x)))

where (".) is the Euclidean inner product.(ii) If N = 1, then shocks will form unless x ----+ H'(1jJ/(x)) is monotone.

7

Under reasonable assumptions, as is shown in elementary courses, analysisby characteristics provides a smooth solution of (1.1) until shocks form. Whenclassical solutions break down, in this area and others, one is led to think ofthe problem of finding a way to continue past the breakdown with a less regularsolution. However, one can also immediately think of the problem of findingsolutions in cases where the data does not allow the classical analysis. E.g.,what does one do if Hand/or 1/) above is not smooth? The "breakdown" ideais not central in this view.

Just as in the case of Exercise 1.1, relaxing the regularity requirement fora solution just a tiny bit leads to nonuniqueness for (1.1). One does not expectuniqueness in general for stationary problems, but one does expect uniquenessfor initial-value problems.

Exercise 1.3. Consider the equation Ut + (u x ) 2 = 0 for t > 0, x EO lR coupledwith the initial condition u(O, x) == O. Verify that the function

v(t,x) == 0 for 0 < t:::; [z],v(t, x) = -t + Ixl for Ixl:::; t,

satisfies the initial condition, is continuous and has all the regularity one desiresoff the lines x = 0, t = [z], and satisfies the equation off these lines. Thusu == 0 and v are distinct nearly classical - even piecewise linear - solutions of theCauchy problem.

We have not given second order examples. However, here is a model equa-tion which will be covered under the theory to be described and for which theissue of how smooth solutions are is unsettled. Let Ai EO S(N), i = 1,2,3 satisfyI:::; Ai:::; 2I for i = 1,2,3,4 and

This is a uniformly elliptic equation - here this means that there are constantso< A < A such that

F(X + P) :::; F(X) - ATrace (P) and IF(X) - F(Y)I :::; AIIX - YIIfor X, Y, P E S(N), P O. Here IIXII can be any reasonable matrix norm ofX; a good one is the sum of the absolute values of the eigenvalues of X, as itcoincides with the trace on nonnegative matrices.

Exercise 1.4. Determine A, A which work above.

It is known that solutions of uniformly elliptic equations typically haveHolder continuous first derivatives, but it is not known if these solutions arenecessarily C2 . If the equation is uniformly elliptic and convex in X, regularityis known. See Evans [20], Cabre and Caffarelli [7], Dong [17], the referencestherein, as well as Trudinger [39] and Swiech [37] for a recent result concerningSobolev rather than Holder regularity.

(2.1)

8

2. The Notion of Viscosity Solutions

As we will see, the theory will require us to deal with semicontinuous func­tions, there is no escape. Therefore, let us recall the notions of the upper semi­continuous envelope u* and the lower semicontinuous envelope u* of a functionu: 0 ­­­t JR:

{

u*(x) = lim sup {u(y) : YEO, Iy - xl < r}7'10

u*(x) = liminf {u(y) : YEO, Iy - z] :s; r}.7'10

Recall that u is upper semicontinuous if u = u* and lower semicontinuousif u = u*; equivalently, u is upper semicontinuous if Xk ­­­t x implies u(x)limsuPk­>oo U(Xk), etc. Of course, u* is upper semicontinuous and u, is lowersemicontinuous.

Exercise 2.1.* In the above definition 0 could be replaced by an arbitrarymetric space 0 if Iy - xl is replaced distance between x, yEO. Show in thisgenerality that u is upper semicontinuous if and only if v = ­u is lower semi­continuous if and only if {x EO: u(x) :s; r} is closed for each r E JR. Showthat a function which is both upper semicontinuous and lower semicontinous iscontinuous. Show that if 0 is compact and u is upper semicontinuous on 0,then u has a maximum point x such that u(x) :s; u(x) for x E O.

Motivation for the following definition is found in Exercise 0.2; see alsoExercise 2.4 below. The semicontinuity requirements in the definition are partlyexplained by the last part of Exercise 2.1 and the fact that we will want toproduce the maxima associated with subsolutions, etc., in proofs.

Definition 2.1. Let F be proper, 0 be open and u : 0 ­­­t JR. Then uis a viscosity subsolution of F = 0 in 0 if it is upper semicontinuous andfor every ip E C2(O) and local maximum point x E 0 of u ­ rp, we haveF(x,u(x),Drp(x),D2rp(x)) :s; O. Similarly, u: 0 ­­­t JR is a viscosity superso­lution of F = 0 in 0 if it is lower semicontinuous and for every sp E C2(O) andlocal minimum point x E 0 of u ­ rp, we have Fi i; u(x), Drp(x), D2rp(x)) O.Finally, u is a viscosity solution of F = 0 in 0 if is both a viscosity subsolutionand a viscosity supersolution (hence continuous) of F = O.

Remark 2.2. Hereafter we use the following conventions: "supersolution","subsolution" and "solution" mean "viscosity supersolution", "viscosity subso­lution" and "viscosity solution" ­ other notions will carry the modifiers (e.g.,classical solutions, etc.). Moreover, the phrases "subsolution of F = 0" and"solution of F :s; 0" mean the same (and similarly for supersolutions).

Remark 2.3. Explicit subsolutions and supersolutions which are semicontin­uous and not continuous will not appear in these lectures. They intervene ab­stractly in proofs, however, in an essential way.

9

Exercise 2.2. * Reconcile Definition 2.1 with Exercise 0.2 in the following sense:Show that if F is proper, u E C2(D) and

F(x, u(x), Du(x), D2u(x)) < 0

(F(x, u(x), Du(x), D2u(x)) 2: 0) for xED, then u is a solution of F :::; 0(respectively F 2: 0) in the above sense.

Exercise 2.3.* With F as in Exercise 1.1, verify that u(x) = 1 - Ixl is asolution of F = 0 on (-1,1), but that u(x) = Ixl-1 is not. Attempt to show thatu(x) = 1-lxl is the only solution of F = 0 in (-1,1) which vanishes at x = -1,1.Verify that u(x) = Ixl-1 is a solution of _(u')2 + 1 = o. In general, verify thatif F is proper then u is a solution of F :::; 0 if and only if v = -u is a solutionof G 2: 0 where G(x, r,p, X) = -F(x, -r, -p, -X) and that G is proper. Thusany result about subsolutions provides a dual result about supersolutions.

Exercise 2.4. In general, if D is bounded and open in ffiN, verify that u(x) =distance(x,oD) is a solution of IDul = 1 in D.

We mention that the idea of putting derivatives on test functions in thismaximum principle context was first used to good effect in Evans [18, 19]. Thefull definitions above in all their semicontinuous glory, evolved after the unique-ness theory was initiated in [14], [15]. The definition in these works was equiva-lent to that above, but was formulated differently and all functions were assumedcontinuous. The paper [16] comments on equivalences and writes proofs moresimilar to those given today. Ishii's introduction of the Perron method in [24]was a key point in establishing the essential role of semicontinuous functions inthe theory. Ishii in fact defines a "solution" to be a function u such that u* is asubsolution and u; is a supersolution. See Bardi's lectures [2] in this regard.

3. Statements of Model Existence - Uniqueness Theorems

Recalling the discussion of classical solutions of the Cauchy problem (1.1)and Exercise 1.3, the following results are a striking affirmation that the solutionsintroduced in Definition 2.1 are appropriate.

For Hamilton-Jacobi equations we have:

Theorem 3.1. Let H : IRN--t IR be continuous and'IjJ : IRN

--t IR be uniformlycontinuous. Then there is a unique continuous function u ; [0, 00) x IRN --t IRwith the following properties: u is uniformly continuous in x uniformly in t, uis a solution oj ii; +H(Du) = 0 in (0,00) x IRN and u satisfies u(O,x) = 'IjJ(x)for x E IRN

.

Even more striking is the following even more unequivocal generalization toinclude second order equations:

Theorem 3.2. Let F : IRN x S(N) --t IR be continuous and degenerate elliptic.Then the statement of Theorem 3.1 remains true with the equation ut+H(Du) =o replaced by the equation Ut+ F(Du, D2u) = O.

10

The analogue of 3.2 for the stationary problem (i.e., without "t") is

Theorem 3.3. Let F : IRN x S(N) --+ IR be continuous and degenerate ellipticand f : IRN

--+ IR be uniformly continuous. Then there is a unique uniformlycontinuous u : IRN

--+ IR which is a solution of u + F(Du, D2u) - f(x) = inIRN

.

Moreover, the solutions whose unique existence is asserted above are theones which are demanded by the theories developed in the other lectures in thisvolume. In Bardi [2] and Soner [34] formulas are given for potential solutions ofvarious problems, in control theoretic and differential games settings, and it isa triumph of the theory that the functions given by the formulas can be shownto be the unique solutions given by the theory.

All of the heavy lifting needed to prove these results is done below. However,some of the details are left for the reader's pleasure. The proof of Theorem 3.1is indicated at the end of Section 9, the proof of Theorem 3.3 is completed inExercise 10.3 and the proof of Theorem 3.2 is completed in Exercise 12.1.

4. Comparison for Hamilton-Jacobi Equations: the Dirichlet Problem

The technology of the proof of comparison in the second order case is morecomplex than in the first order case, so at this first stage we offer some samplefirst order comparison proofs. As a pedagogical device, we present a sequence ofproofs illustrating various technical concerns. We begin with simplest case, thatis the Dirichlet problem. The next two sections concern variants. Argumentsare the main point, so we do not package the material as "theorems", etc. Allof the arguments given are invoked later in the second order case so no time iswasted by passing through the first order case along the way.

Let n be a bounded open set in IRN. The Dirichlet problem is:

(DP) H(x,u,Du)=O in n, u=g on an.Here H is continuous and proper on D x IR X IRN and 9 E C(an). We say thatu :D --+ IR is a subsolution (supersolution) of (DP) if u is upper semicontinuous(respectively, lower semicontinuous), solves H ::::; 0 (respectively, H 2: 0) in nand satisfies 9 ::::; u on an (respectively, u 2: 9 on an).Exercise 4.1. One does not expect (DP) to have solutions in general. Showthat if N = 1, n = (0,1), the Dirichlet problem u + u' = 1, u(O) = u(l) = 0does not have solutions (in the sense of Definition 2.1!).

We seek to show that if u is a subsolution of (DP) and v is a supersolutionof (DP), then u ::::; v. We will not succeed without further conditions on H.Indeed, choose n to be the unit ball and let w(x) E C1(D) be any functionwhich vanishes on an but does not vanish identically. Then wand -7J) aredistinct classical solutions (and hence viscosity solutions, via Exercise 2.4) of(DP) with e(x, u, p) = Ipl2 -IDwI 2 , 9 = O. We will discover sufficient conditionsto guarantee the comparison theorem along the way.

11

The idea of comparison proofs for viscosity solutions is this: we would like toconsider an interior maximum x of u(x) - v(x) and use H(x, u(x), Du(x)) :::; 0,Hii; v(x), Dv(x)) 2 0 to conclude that u(x) :::; v(x) or u :::; v. A primarydifficulty is that u and v need not be differentiable at such a maximum X. Thusinstead one chooses smooth "test functions" cp(x, y) for which u(x)-v(y)-cp(x, y)has a maximum (x, Y). Assuming that x, YEn, x is a maximum of x ---+

u(x) - cp(.T, f)) and so, by the definition of subsolution, H(x, u(x), Dxcp(x,y)) :::; O.Similarly, H(y,v(y), -Dycp(x,y)) 2 0 and then

Ht i; u(x), Dxcp(x, y)) - H(y, v(y), -Dycp(x, y)) :::; o.

It remains to conclude that u :::; v by playing with the choice of sp and perhapsmaking auxiliary estimates.

Pick r:: > 0 and small and let us maximize

(4.1)1

1>(x,y) = u(x) - v(y) - -Ix _ yl22r::

over n x n. Since 1> is upper semicontinuous a maximum (xE,YE) exists. Thetest function cp(x,y) = Ix-yI2j(2r::) is chosen to "penalize" large values of jz c-y]when E is sent to zero. It further has the desirable property that Dxcp = -Dycp,the utility of which is seen below.

We prepare a useful lemma about penalized maximums of semicontinuousfunctions for use now and later.

Lemma 4.1. Suppose 0 c rnN. Let w, 1lJ : 0 ---+ lR, 0 :::; 1lJ, and w, -1lJ be

upper semicontinous. Let

(4.2)

and

N = {z EO: 1lJ(z) = O} i- 0,

sup(w(z) -1lJ(z)) < 00.zEO

(4.3)

Let ME = SUPzEO(w(z) -1lJ(z)jr::) for e :::; 1. If ZE E 0 is such that

ME - (W(ZE) - ---+ 0

then

(4.4)

Moreover, and if Z E 0 is a cluster point of ZE as E 1 0, then ZEN andw(z) :::; w(z) for ZEN.

Proof. ME is clearly a decreasing function of 0 < r:: :::; 1. Since sUPNw :::; Iv!E :::;M1 < 00 where N is the nonempty set of (4.2), Mo = limdO ME exists and is

12

finite. Letting g(c) be the left-hand side of (4.3), for 0 < J1"c we have

MI-' - - .!.) W(ZE) 2W(ZE) - .!.W(ZE) - - .!.) W(ZE) =C J1, J1, e J1,

1w(zc') - -w(zo:) 2 ME - g(c).

E

Taking J1, = 2E we conclude from the extreme inequalities that

1-W(ZE) ::; 2(M2E- M; + g(E))E

and the right-hand side tends to zero as e 1o.Assume now that Zo: ----> Z E 0 along a sequence of E'S tending to zero. Then

0= limsuPdo w(zo:) 2 W(z) by lower semicontinuity, and zEN. Moreover, bythe upper semicontinuity of w,

w(z) 2 lim (w(zo:) - = Mo 2 sup w(z).0:10 E zEJV

o

Since nx S1 is compact, the maximum point (xo:, Yo:) of 1> of (4.1) has a limitpoint E 1O. It follows from Lemma 4.1 that

(4.5) 110 ,/2 0- Xc - YE ---->E

and any limit point has the form (x, x). If i: E an, then u(x) ::; g(x) ::; v(x)shows that

lim sup 1>(xE, Ye) ::; O.dO

If no such limit x E an , then xC,YE must lie in n for small E. In this case, asexplained before, we have

(4.6) H (0 (0) Xo: - Yo: ) H (0 (0) Xc - Yo:) < 0Xc, u Xc, C - Ye'V Yo:, e -'

When does this information imply u ::; v? For a simple example, let usassume that G has the form H(x, r,p) = r +G(p) - f(x) where f is continuouson S1. Then (4.6) rewrites to

in view of (4.5) and the uniform continuity of f the right-hand side tends to zeroas e 10 and we conclude again that

lim sup 1>(xo:, YE) ::; limsup(u(xo:) - v(f)o:)) ::; O.0:10 E10

13

Since u(x) - v(x) = <I> (x, x) :'S: <I>(xc,yc)' we conclude u :'S: v in the limit E 1O.The case in which H has the form H(x, r,p) = G(r,p) - f(x) and H is strictlyincreasing in r uniformly in p E rnN is essentially the same.

In the above examples, the x dependence is "separated". When it is not,the situation is more subtle and it convenient to use the full force of (4.5).

Exercise 4.2. Establish comparison for (DP) when H(x, r,p) satisfies

IH(x, r,p) - H(y, r,p)1 :'S: w(lx - yl(l + Ipl))for some function satisfying w(O+) = 0 and H is sufficiently increasing in r .Show that the "sufficiently increasing" (however you formulated it) assumptioncan be dropped when there is a c > 0 such that either u solves H :'S: -c or vsolves H 2 c.

It was remarked at the beginning that solutions of (DP) for equations ofthe form IDul 2 = f(x) are not necessarily unique. However, they are unique iff (x) > 0 in n as shown by the next two exercises.

Exercise 4.3. * Let F(x, r, p) be proper, u be a solution of F :'S: 0 (respectively,F 2 0) and consider a change of unknown function according to u = K (w). HereK is continuously differentiable, K' (r) > 0 for r in the domain of K and the rangeof K includes the range of u. Show that w is then a subsolution (respectively, su-persolution) of the resulting equation, G(x,w,Dw) = F(x,K(w),K'(w)Dw) =O. Note, however, that G may not be proper. Discuss the second order case.

Exercise 4.4.* Let f E C(O), f(x) > 0 for x E n. Find a change of unknownin the equation IDul 2 - f(x) = 0 which - with a little massaging - produces aproper equation H(x, w, Dw) = 0 for which comparison in the Dirichlet problemholds.

Except for the semicontinuous generality, which plays a small role, com-parison results of these forms have been known since [15]. However, the proofsabove are certainly clearer than the original ones.

5. Comparison for Hamilton-Jacobi Equations in IRN

The point of this section is to indicate how to handle unbounded domains.The reader may skip ahead now to Section 7 if desired.

We consider the model stationary Hamilton-Jacobi equation on IRN:

(SHJE) u + H(Du) = f(x) for x E IRN.

Means to treat more general equations used in the previous section will alsowork here, and we focus only the modifications of arguments required by theunbounded domain IRN

. Everywhere below, u,v : IRN-+ IR, u is a subsolution

and v is a supersolution of (SHJE). Moreover, H, f : IRN-+ IR, H is continuous

and f is uniformly continuous. The goal is again to prove u :'S: v.

14

We suppose that u, v are bounded on IRN ; this is relaxed in the next section.For 0 < e, b define the function

1 b4>(x, y) = u(x) - v(y) - -Ix - yl2 - - (lxl 2+ Iyn

2c 2

on mN x mN. The term b(lxl 2 + lyI2)/2 is present to guarantee that 4> has a

maximum on its unbounded domain and will be removed by sending b 1 o. 4> isupper semicontinuous and, since u(x) - v(y) is bounded above by assumption,4> tends to -00 as [z], Iyl --+ 00. Thus 4> has a maximum point (x, f)) (it dependson e, b; however we no longer indicate this dependence). Proceeding as abovewe have

(5.1) u(x) - v(f)) < H (x f) - bf)) - H (x f) + bX) + f(x) - f(f)).

From the assumed boundedness of u, v, it follows that

(5.2)

(5.3)

1 2sup(u(x) - v(y) - -Ix - yl ) < 00.x,y 2

A slight modification of Lemma 4.1 then yields

where

(5.4) Iim lim supCj, = O.010 010

In addition, for e 1, u(O) - v(O) 4>(x, f)) and (5.2) imply

b 12 (lxl 2 + 1f)1 2) v(O) - u(O)+ u(x) - v(f)) - 21x - f)1 2 C1

It follows from (5.3) and the above that:

(i) Ix - f)1/c = (Ix - f)1 2/c)1/2 /,JE (CE,0/c)1/2,

(5.5) (ii) Ix - 711 (C."oc)1/2,

(iii) blxl + blYl = V6((bJxJ2)1/2 + (bIf)1 2)1/2) < 2V6Ci/2.

Let Pf be the modulus of continuity of i, that is the least nondecreasingfunction such that

If(x) - f(y)1 Pf(lx - yl);

Pf is continuous. Likewise, the merely continuous function H is uniformly con-tinuous on compact sets, so there is a least function PH such that

IH(p + q) - H(p)1 < PH(R, r) for Ipl <R, Iql < r.

(5.u)

15

We have as well Pj(O+) = PH(R, 0+) = 0 for R > O. Returning to (5.1), we mayuse (i)-(iii) above to conclude (with unseemly precision) that

u(x) - v(fj) S PH((Ce,6/C)1/2, 2/8C;/2) + Pj((cCe,6)1/2).

Thuslimsup(u(x) - v(fj)) S pj((climsupCe,6)1/2).

610 610

Therefore

u(x) - v(y)_lx - yl2 = lim <p(x, y) Slim <p(x, fj) S2c 610 610

lim sup(uf.i ) - v(fj)) S pj((climsupCe,6)1/2).610 610

Putting x = y and letting c ----+ 0, we conclude that u(x) S v(x) for all x.Another use of estimates like (5.6) is this: if u itself is a bounded solution

of (SHJE), we may take u = v in (5.6) to conclude that

u(x) - u(y) S inf (IX - yl2 + pj((dimsup Ce,6)1/2)) ,O<e::; 1 2c 610

which provides a modulus of continuity for u determined by f. This methodgeneralizes to allow H (x, p) to depend on x as well; in the current case there isa simpler way to obtain a modulus for a solution u.

Exercise 5.1. Generalize the above to show that if f,g : rnN----+ IR are

uniformly continuous and u, -v are upper semicontinuous and bounded, u solvesu+H(Du) Sf and v solves v+H(Dv) 2 g, then u(x) -v(x) S SUPzEIRN (j(z)-g(z)). If u is a solution of u +H(Du) = f and z E IRN

, then v(x) = u(x + z) isa solution of v +H(Dv) = 9 with g(x) =, f(x + z). Consequently, u(x) - u(y) Spj(lx - yl)·

6. Hamilton-Jacobi Equations in mN: Unbounded Solutions

We treat some technical difficulties "at 00" caused by allowing unboundedU, v in the problem treated in the Section 5. The devices used adapt to thesecond order case. The reader may skip ahead to Section 7 at this time withoutdisrupting the flow.

This time, we allow a linear growth at infinity, that is

(LG) U(x) - v(y) S L(lxl + Iyl + 1) for x, Y E IRN

for some constant L (note that this amounts to bounding u, -v separately fromabove). We show that then uS v. A review of the proof of Section 5 shows thatbounds on u, v were not in fact needed except in so far as they guaranteed (5.2)(which itself guarantees the existence of the maxima used). If we verify (5.2) (oran close substitute), we will be finished.

16

Remark 6.1. The linear growth is "critical" in the class of powers of [z]. Theequation u - IDul'Y = 0 with, > 1 has the two distinct solutions u == 0 andu. = (b - 1)!r)'Y/h-1)lxl'Y/h-l). Choosing -y Iarge, the growth is as close tolinear as we please.

The next exercise is used immediately.

Exercise 6.1.* Let f : rnN -+ ffi be uniformly continuous and

pj(r) = sup {If(x) - f(y)l: x,y E ffiN, Ix - yl:::; r}

be its modulus of continuity. Show that Pi(r + s) :::; Pi(r) + Pj(s) (Pj is subad-ditive) and pj(r) :::; pj(8) + (pj(8)f8)r for 0 :::; r, s, 0 < 8.

Since f is uniformly continuous on rnN, by the above exercise it admits the

estimate

(6.1)

where K = pj(l).We claim that

If(x) - f(y)1 - Klx - yl :::; K

sup (u(x) - v(y) - Klx - yl) < 00IRNxIRN

(and then u(x) - v(y) - Klx - yl2 is bounded as well).In view of (LG), the upper semicontinuous function

8<I> (x, y) = u(x) - v(y) - K(l + Ix - YI2)1/2 - 2(lx12 + lyl2)

attains its maximum at some point (x, iJ). (LG) and u(O) - v(O) :::; <I> (x, iJ) implythat

82(lx1

2+ liJl 2) :::; v(O) - u(O) + u(x) - v(iJ) :::; C+ L(lxl + liJl)which implies

(6.2) 8(lxl + liJl) < C

(6.3)

where the constants C are various. Using the equations

u(x) - v(Y) < H ( K (1 + I:=fI2)1/2 - 8iJ) -

H ( K (1 + I:=tI2)l/2 + 8X) + f(x) - f(iJ)·

The key thing here is that

(7.1)

17

while we also have (6.2). Thus the arguments of H above are bounded indepen­dently of 6. Invoking (6.1) as well, we conclude that

u(x) - v(Y) -::: C + Kjx - 711.

But then

u(x) - v(Y) - K(1 + Ix - 711 2)1/2 ­::: u(x) - v(Y) - Klx - 711 -::: C

and finally

<1>(x, y) -::: <1> (x, Y) ­::: u(x) - v(Y) - K(1 + Ix - 711 2)1/ 2 ­::: C.

Passing to the limit as 6 10, we conclude that u(x) - v(y) - K(1 + [z ­ yI2)1/2is bounded.

Exercise 6.2. Show that linearly bounded solutions of (SHJE) are uniformlycontinuous.

See the references given in [12, Section 5D].

7. Definitions Revisited - Semijets

In this section the notion of Definition 2.1 will be recast for later conve­nience. The definition itself involves extrema of differences u ­ cp and thenevaluation of the equation at data from the second order Taylor expansion ofsp at these extrema. It is only the information from the expansion of cp whichmatters, and we now emphasize this.

H cp E C1(D) and u ­ sp has a local maximum relative to D at xED, then

u(x) -:::u(x) + cp(x) - cp(x) =u(x) + (p, x - x) + o(lx xl) as D:3 x ­+ x

and if cp E C2(D), then

u(x) -::: u(x) + cp(x) - cp(x)

(7.2) = u(x) + (p,x - x) +1"2 (X(x - x), x - x) +o(lx - x1 2

) as D:3 x ­+ x

where

(7.3) p = Dcp(x) and X = D2cp(x).

Conversely, if p E rnN and (7.1) holds, then there exists cp E C1(D) such thatu - cp has a strict maximum at x and Dcp(x) = p. Here a maximum x of u - ip

is strict if there is a nondecreasing function h : (0,00) ­+ (0,00) and TO > Osuchthat

(7.4) u(x) - cp(x) -::: u(x) cp(x) - h(T) for 1" -::: Ix - xl -::: TO·

18

Let us call h the "strictness" in this situation. The proof goes like this: assume(7.1) and set

g(,) = sup {(u(x) - u(x) - (p,x - x))+: x E n, Ix - xl <:::: r}.

By (7.1), g(,) = 0(') as r 1 °and is nondecreasing. Choose a continuousnondecreasing majorant 9with the same properties; that is, we want g(,) <:::: g(,),g(,) = 0(') and 9 is nondecreasing. Now put

1 j2rG(,) = - g(s) ds

, r

and check thatcp(x) = G(lx - xl) + tp, x - x) + Ix - xl 4

has the desired properties (with h(,) =,4 as the strictness).

Exercise 7.1. * Formulate and prove the corresponding statement for (7.2).

We focus on the second case. The quadratic appearing on the right handside of (7.2) is defined by the "jet" (u(x),p,X) and we write

(u(x),p,X) E j2,+u(x)

when (7.2) holds. The quantity u(x) on the left appears to be redundant, butis incorporated for technical reasons. For any function u : n -> IR, j2,+u mapsn into the set of subsets of n x IRN x S(N) (and the empty set may well be avalue).

Exercise 7.2.* Let u(x) = [z] for x E IR. Compute j2,+U(0) and j2,-U(0).

'Whenever j2,+u(x) is not empty, it is infinite, since whenever (7.2) holds forp, X it also holds for p, Y whenever X <:::: Y. Likewise, we define (u(i),p, X) Ej2,-U(X) to mean that (7.2) holds with the inequality reversed.

Exercise 7.3. Define the first order analogues ]1,+, j1,- of the second ordersemijets. Observe that (u(x),p, X) E j2,+u(x) implies (u(x),p) E j1,+U(x), butshow by example that the converse fails: j2,+u(x) may be empty while j1,+U(X)is nonempty.

Exercise 7.4. Let n be open and u : n -> IR be upper semicontinuous. Showthat {x En: j2,+u(x) -.:/cO} is dense in n. Show that if j1,+u(x) n ]1,-u(x) is

nonernpty, then there is apE IRN such that

u(x) = u(x) + (p,x - x) + o(lx - xl) for x E n;

in this case we say that u is differentiable at x and p = Du(i). Conclude thatthere are continuous functions u : (0,1) -> IR such that

j2,+U(x) n j2,-U(x) = 0

for every x E (0,1).

Exercise 7.5. Let u: IRN-> IR and lu(x)-u(y)[ <:::: Llx-YI for x, Y E IRN (i.e.,

u is Lipschitz continuous with constant L). Show that if (u(x),p) E j1,+u(X),

19

then Ipl L. Conversely, if (u(x),p, X) E J2,+u(x) implies that Ipl L, showthat u is Lipschitz with constant L.

According to Exercise 7.1, an upper semicontinuous function u : n --+ IRis a subsolution of a proper equation G = 0 if and only if G(x,u(x),p,X) 0for every x E nand (u(x),p,X) E P'+u(x). If G is continuous (or even lowersemicontinuous), the relation

G(x,u(x),p,X) < 0

persists under taking limits, and this leads us to define the closure ]2,+u of

J2,+u. This goes as follows: (r, p,X) E ]2,+u(x) if there exists

n 3 xn --+ x and (u(xn),Pn, Xn) E J2,+u(xn)

such that(u(xn),Pn, Xn) --+ (r,p, X).

We have then that an upper semicontinuous function u : n --+ IR is asubsolution of a proper equation G = 0 if and only if G(x, r, p, X) 0 for every

x E nand (r,p,X) E ]2,+u(x). Note that upper semicontinuity of u impliesthat r u(x), so perhaps G(x,u(x),p,X) > O.

One defines ]2,-u similarly and then u : n --+ IR is a subsolution of aproper equation G = 0 if and only if G(x, r, p,X) ::::: 0 for every x E nand

-2-(r,p, X) E J' u(x).

Remark 7.1. The notation in use here differs from that in [12] in that the valuesof J 2,+ are taken here to include "u(x)" and this was not so in [12]. Nobodymuch likes this "jet" business and perhaps we should refer to "second ordersuperdifferentials" or some such. There seems to be a law of conservation ofpedantic excess in attempts to resolve this issue. It is a bookkeeping question,and when we get to the Theorem on Sums, one needs to do the bookkeepingsomehow.

The construction of the test functions used to show that the "jet" formula-tions are equivalent to the "maximum of u - cp" formulations appears in Evans[18].

8. Stability of the Notions

We begin by considering two related issues. First, if :F is a collection ofsolutions of F 0, then (SUPuE.:F u)* (recall (2.1)) is another subsolution. Next,if Un is a solution of Fn 0 for n = 1,2, ... , Fn = 0, and Un --+ U, Fn --+ F in asuitable sense, then u is a subsolution of F = O. These facts are linked in thatthey both rely on the following result.

The reader will notice in the statement below that there is a set 0 C IRN

which is "locally compact"; for example, both open sets and closed subsets ofIRN contain a compact (relative) neighborhood of each point and so are locally

20

compact, as are various other sets. In fact, the above considerations easilygeneralize to allow locally compact sets 0 in the definition of the jets, etc. Ifo is locally compact, we take 'P E C2(0) to mean it is the restriction to 0 of atwice continuously differentiable function defined on a neighborhood (in rnN

) ofO. The relation (7.2) needs no modification, for we have already appended "aso 3 x -> ii" to emphasize that 0 may not be open, etc. The jet "operators"P'+, p,- are written and their closures and when 0 isnot necessarily open. At the moment, we pose this technical generality "becausewe can" and it doesn't affect the presentation; however, it is essential in variousways in other parts of the theory.

Proposition 8.1. Let 0 c rnN be locally compact, U : 0 -> IR be uppersemicontinuous, z E 0 and (U(z),p,X) E Suppose also that Un is asequence of upper- semicontinuous functions on 0 such that

(i) there exists X n E 0 such that (xn,un(xn)) --> (z,U(z)) and

(8.1) (ii) if Zn E 0 and Zn -> x E 0, then limsupun(zn) U(x).n--->oo

Then

(8.2)there exists xn EO, (un(Xn),Pn, X n) E

such that (xn, un(xn),Pn, X n) -> (z, U(z),p, X).

Before proving this result, we use it.

Proposition 8.2. Let F be a nonempty collection of solutions of F 0 on 0where F is proper and continuous. If U(x) = SUPuEJ u(x) and U* is finite on0, then U* is a solution of F < 0 on O.

Proof. Suppose z EO and (U*(z),p, X) E j2,+U*(z). It is clear that there existsa sequence Un E :F and Xn E 0 such that (xn,un(xn)) -> (z,U*(z)) and thatthen the assumption of Proposition 8.1 are satisfied with U replaced by U*. Let(xn,un(xn),Pn,Xn) --> (z,U*(z),p,X) be as in Proposition 8.1; by assumptionF(xn, un(xn),Pn, X n) 0 and we conclude F(z, U*(z),p, X) 0 in the limit.o

We use Proposition 8.1 again.For an arbitrary sequence of functions u-, on 0 we can form the smallest

function U such that if 0 3 xn --> x E 0, then limsuPn--->oo un(xn) U(x). Uis given by

21

we write U = lim Un' In the opposite sense, we define

liminf Un = -limsup*( -Un)'n---+(:X) * n---+oo

Note that for any x E 0 there exists sequences n] -+ 00 and 0 :3 xnj -+ x suchthat un] (.'Tn ; ) -+ U(x).

Exercise 8.1. Show lim u-, is upper semicontinuous and that if Un == Ufor all n, then lim Un = u*, the upper semicontinuous envelope of u.

The following statement is remarkable, in that it produces a subsolutionof a limit problem from an arbitrary sequence of subsolutions of approximateproblems. No control of derivatives of any kind is assumed.

Theorem 8.3. For n = 1,2, ... , let Un be a subsolution of a proper equationFn = 0 on O. Let U = Un and F be proper and satisfy

F < lim inf e;n---+oo *

If U is finite, then it is a solution of F ::; O. In particular, if Un -+ U, Fn -+ Flocally uniformly, then U is a solution of F ::; O.

Proof. According to the above discussion, if (U(z),p,X) E j2,+U(z), then thereis a subsequence of the Un (which we again call un) such that the hypotheses ofProposition 8.1 are satisfied. If (xn, u(xn),Pn, X n) is as in the proposition, ourassumptions imply

o

Remark 8.4. Recall Exercise 2.3, whereby results concerning subsolutionsautomatically imply the corresponding result for supersolutions.

Exercise 8.2. The equation

N

-t>.pu = - :l)IDu[P-2ux.)Xi = 0;=1

is called the "p-Laplace" equation; p is a number and we focus on large p.Carrying out the differentiations, show that this is a degenerate elliptic equation(and then proper as it does not have a "u" dependence). Suppose up is a solutionof this equation for large p and that up -+ u uniformly on 0 as p -+ 00. Showthat u is then a solution of the "infinity Laplace" equation,

N

-t>.(X)u = - L UXiUXjUXi,Xj = O.;,]=1

22

Exercise 8.3. Suppose that {Un} are uniformly bounded and u-, solves

1Un + H(Dun) - -fJ.un = f(x)

n

on rnN where f is uniformly continuous. Prove via comparison for u+H(Du) =f that then

lim sup*Un -<: lim inf Un'n..---..-40oo n-+CX) *

Show this implies that Un converges locally uniformly to a limit u, and U is theunique bounded solution of U+ H(Du) = f.

Proof of Proposition 8.1.Without loss of generality we put z = O. By the as-sumptions and Exercise 7.1, there is an r > 0 such that N; = {x EO: [z] ::; r}is compact and a twice continuously differentiable function cp defined in a neigh-borhood of 0 such that

(8.3) U(X) - cp(x) ::;U(O) - cp(O) for x E Nn

the maximum 0 is both strict and the only maximum of U - cp in Ni., p = Dcp(O),and X = D2 cp (0). By the assumption (8.1) (i), there exists X n E 0 such that(xn, un(xn)) --> (0, U(O)). Let xn E N; be a maximum point of un(x) - cp(x)over N; so that

(8.4)

Suppose that (passing to a subsequence if necessary) in --> y as n --> 00. Puttingx = X n in (8.4) and taking the limit inferior as n --> 00, we find

on the other hand, by (8.1) (ii) lim inf Un (xn) ::;U(y). Thus U(O) -cp(O) ::;U(y)-cp(y) and we conclude that y = 0 (because 0 is the only maximum) and thatthe inequality liminfun(xn)::; U(O) cannot be strict - that is, (xn,un(xn) -->(0, U(O)). Since this holds no matter the subsequence, it holds without passingto a subsequence. Finally

and we are done. o

Remark 8.5. Proposition 8.1 could be restated in the form it is proved; a strictmaximum of U - cp perturbs to maxima of Un - sp which converge, etc. Whichform you prefer is a matter of taste; this author "thinks" in terms of the data ata point while eliminating "jets" in the statement does make it less unattractive.

Proposition 8.2 is due to Ishii [24]. Theorem 8.3 is of great utility in ap-plications and is due to Barles and Perthame [4], [5]. See the comments of [12,Section 6] and Barles [3], for further orientation. In the case of uniform conver-gence, Evans [18], [19] already employed the essential idea. Sophisticated limit

23

questions are discussed in Souganidis [35] and many references are given. Barlesand Souganidis [6] is typical significant contribution reflecting the utility of theability to take limits easily. Exercise 8.3 reflects the origin of the term "viscos­ity"; if one can solve regularized equations by adding "artificial viscosity" , thenpassage to the limit is an easy matter. Note that with the technology of thissection, one does not even need to estimate the modulus of continuity of the Un'

Exercise 8.2 is light­hearted, and is from Jensen [29], which is quite fascinating.At the moment, a pure viscosity solution proof of the uniqueness theorem in [29](comparison for the Dirichlet problem for -6.oo u = 0) is not known.

9. Existence Via Perron's Method

We establish existence results for Dirichlet problems via Perron's method.Below (DP) means:

(DP) F(x, u, Du, D2u) = 0 in n, U = 9 on 8n.

Subsolutions, etc., are defined exactly as in Section 4.F is to be proper and continuous, while 9 is continuous. We call the following

implementation of "Perron's Method" Ishii's Theorem, as it is a good exampleof the method he introduced into this subject. At this stage, we have not verifiedthe hypotheses of the theorem for second order equations, but will take this uplater. The assumptions have been verified in first order cases in Section 4.

Theorem (Ishii). Let comparison hold for (DP); i.e., if w is a subsolution of(DP) and v is a supersolution of (DP), then w :::; v. Suppose also that there is asubsolution 1f and a supersolution u of (DP) which satisfy the boundary condition1f*(x) = u*(x) = g(x) for x E 8n. Then

(9.1) W(x) = sup{w(x) : 1f :::; w :::; u and w is a subsolution of (DP)}

is a solution of (DP).

The first step in the proof of Ishii's Theorem was given in Proposition 8.2.The second is a simple construction which we now describe. Roughly speaking,it says that if a subsolution of (DP) is not a solution, then it is not a maximalsubsolution. Of course, if comparison holds for (DP) and it has a solution, thenthe solution is the largest subsolution. We have to take care of semicontinuityconsiderations. Suppose that n is open, u is a solution of F :::; 0 and that u* isnot a solution of F 0; in particular, assume 0 E n and we have

(9.2)

for some sp E C 2 such that for some ro > 0

(9.3)

24

where h > 0 is the strictness of the minimum. Adjusting sp by a constant, weassume u*(O) = rp(O). Then, by continuity,

F(x, rp(x)+ 15, Drp(x), D2rp(x)) < 0

for 0 -::: 15 < 150 and [z] -::: ro provided 150 and ro are sufficiently small; that is,U6 = rp(x) + 15 is a classical solution of F < 0 in Ixl -::: ro. Since

u(x) ::::: u*(x) ::::: rp(x)+ h(r) for r -::: Ixl -::: ro,if we choose 15 < (1/2)h(ro/2), then u(x) >U6(X) for ro/2 -::: [z] -::: ro and then,by Proposition 8.2, the function

U(x) = {max{u(x), U6(X)} if Ixl < ro.u(x) otherwise

is a solution of F -::: 0 in n. The last observation is that in every neighborhoodof 0 there are points such that U(x) > u(x); indeed, by definition, there is asequence (x n , u(xn ) ) convergent to (0, u*(O)) and then

lim (U(xn ) - u(xn ) ) = U6(0) - u*(O) = u*(O) + 15 - u*(O) > O.

We summarize what this "bump" construction provides in the following lemma,the proof of which consists only of choosing ro sufficiently small.

Lemma 9.1. Let n be open, and u be solution of F -::: 0 in n. If u* fails to bea supersolution at some point X, i.e., there exists (u*(x),p,X) E forwhich F(x, u*(x),p, X) < 0, then for any small K > 0 there is a subsolution UK,of F -::: 0 in n satisfying

{

UK,(x) ::::: u(x) and sup(UK, - u) > 0,

UK,(x) = u(x) for x n and Ix - xl ::::: K.

Proof of Ishii's Theorem.With the notation of the theorem observe that ll* -:::W* -::: W -::: W* -::: 'IT* and, in particular, W* = W = W* = 0 on an. ByProposition 8.2 W* is a subsolution of (DP) and hence, by comparison, W* -::: u.It then follows from the definition of W that W = W* (so W is a subsolution).If W* fails to be a supersolution at some point x E n, let WK, be provided byLemma 9.1. Clearly II -::: WK, and WK, = 0 on an for sufficiently small K. Bycomparison, WK, -::: 'IT and since W is the maximal subsolution between II andU, we arrive at the contradiction WK, -::: w. Hence W* is a supersolution of(DP) and then, by comparison for (DP), W* = W -::: W*, showing that W iscontinuous and is a solution. 0

Ishii's Theorem leaves open the question of when a subsolution II and asupersolution 'ii of (DP) which agree with g on an can be found. Some general

25

discussion can be found in [12, Section 4]. Here we only discuss simple illustrativecases to show the power.

First, according to Exercise 4.4, comparison holds for the equation IDul2 ­

f(x) = 0 if f > 0 on n. Put g = o. Assuming that f > 0, H(x,O) - f(x) S; 0,and we have a subsolution, namely u == O. To find a supersolution, we rely onExercise 2.4. Taking u = Mdistance(x,an), MIDul = M and if M sUPo.f, uis a supersolution. We conclude that a unique solution exists. No requirementshad to be laid on an.

Next we take up the construction of a supersolution for a general class ofuniformly elliptic operators. The construction is standard (see e.g. Gilbarg andTrudinger [23]); but the presentation is made consistent with the theme here.The implications, via Ishii's Theorem and comparison results to follow, is a quitegeneral existence and uniqueness theorem. At this juncture, we remark that theapplications of the theory to degenerate equations is more significant, but weleave it to the reader to visit [12] or other works in this regard. Nonetheless,even in the uniformly elliptic case, we are outside the realm of classical solutions,regularity is not known for general uniformly elliptic equations.

Define the "trace" norm on S(N):

(9.4) IIXII = L 1/11jLEeig(X)

(9.5)

(9.6)

where eig(X) is the set of eigenvalues of X counted according to their multiplic-ity.
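As a quick illustration (the particular matrix is ours, chosen only as an example), take N = 2 and X = ( 0  1 ; 1  0 ), whose eigenvalues are ±1. Then ‖X‖ = |1| + |−1| = 2, while Trace(X) = 0. In general ‖X‖ = Trace(X) exactly when X ≥ 0, a fact used implicitly below when the Lipschitz bound in (9.5) is applied to nonnegative matrices.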

Exercise 9.1.* Verify that the trace norm is indeed a norm on S(N).

Recall that F is called uniformly elliptic if there exist constants 0 < λ ≤ Λ such that

(9.5)    F(x,r,p,X+Z) ≤ F(x,r,p,X) − λ Trace(Z)   and   |F(x,r,p,X) − F(x,r,p,Y)| ≤ Λ ‖X − Y‖

for X, Y, Z ∈ S(N), Z ≥ 0. For example, F = −Trace(X) satisfies (9.5) with λ = Λ = 1. The two inequalities in (9.5) say first that F(x,r,p,X) is decreasing with respect to X at an at least linear rate, and then that it is Lipschitz continuous in X as well. A bit less intuitive is the condition

(9.6)    F(x,r,p,X) − Λ Trace(Z) ≤ F(x,r,p,X+Z) ≤ F(x,r,p,X) − λ Trace(Z)   for Z ≥ 0,

but it is easy to see the equivalence of (9.5) and (9.6); one direction is sketched just below.
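Here is the computation showing that (9.5) implies (9.6); it is only a sketch, and the converse is left to the reader. Let Z ≥ 0. The first inequality of (9.5) is exactly the right-hand inequality of (9.6). For the left-hand one, the Lipschitz estimate in (9.5), together with ‖Z‖ = Trace(Z) for Z ≥ 0, gives

F(x,r,p,X) − F(x,r,p,X+Z) ≤ Λ ‖X − (X+Z)‖ = Λ Trace(Z),

which rearranges to F(x,r,p,X) − Λ Trace(Z) ≤ F(x,r,p,X+Z). (For the converse, one splits X − Y into its positive and negative parts and applies (9.6) twice.)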

We are assuming that F is proper. Rewriting F = 0 as

F(x, u, Du, D²u) − F(x, 0, 0, 0) = −F(x, 0, 0, 0),

we may as well consider the equation F = f and assume that

(9.7)    F(x, 0, 0, 0) ≡ 0.


Finally, we add a condition of Lipschitz continuity with respect to the gradient:

(9.8)    |F(x, u, p, X) − F(x, u, q, X)| ≤ γ |p − q|

for some constant γ.

Exercise 9.2.* Show that for b ∈ ℝ^N, A ∈ S(N), c ∈ ℝ,

F(x,r,p,X) = −Trace(AX) + ⟨b,p⟩ + cr

is proper and satisfies (9.6) and (9.8) if and only if eig(A) ⊂ [λ, Λ], |b| ≤ γ and c ≥ 0.

The goal is to construct supersolutions of (DP) with g ≡ 0 for the equation F = f, where f ∈ C(Ω̄). Concerning the region Ω, it is assumed that there is an r₀ > 0 such that every point x_b ∈ ∂Ω is on the boundary of a ball of radius r₀ which does not otherwise meet Ω̄. For each x_b, let z_b be such that |z_b − x_b| = r₀ < |x − z_b| for x ∈ Ω̄, x ≠ x_b. We seek a supersolution of F = f in the form U(x) = G(r), where r = |x − z_b|, which will satisfy U(x_b) = 0 and U ≥ 0 on Ω̄.

A computation shows that, with r = |x − z_b|,

DG(|x − z_b|) = G'(r) (x − z_b)/r,    D²G(|x − z_b|) = (G'(r)/r) I + (G''(r) − G'(r)/r) (x − z_b) ⊗ (x − z_b) / r².

Thus any vector orthogonal to x − z_b is an eigenvector of D²G with G'(r)/r as the eigenvalue, while x − z_b is itself an eigenvector with eigenvalue G''(r). Letting P be the orthogonal projection along x − z_b, we have

(9.9)    D²G(|x − z_b|) = G''(r) P + (G'(r)/r)(I − P).

Taking

(9.10)    G(r) = 1/r₀^σ − 1/r^σ

for σ > 0, G is nonnegative, concave and increasing on r₀ ≤ r. The decomposition (9.9) above then represents D²G as an orthogonal sum of a positive matrix and a negative matrix. Using this in conjunction with F being proper, (9.6), (9.7), and (9.8), we find

F(x, G, DG, D²G) ≥ F(x, 0, DG, D²G)

    ≥ F(x, 0, 0, D²G) − γ |DG|

    ≥ F(x, 0, 0, G''(r)P) − Λ (G'(r)/r) Trace(I − P) − γ G'(r)

    ≥ F(x, 0, 0, 0) − λ G''(r) − Λ (N − 1) G'(r)/r − γ G'(r)

    = −λ G''(r) − Λ (N − 1) G'(r)/r − γ G'(r).


Using (9.10), we find

(9.11)    F(x, G, DG, D²G) ≥ (σ / r^{σ+2}) ((σ + 1)λ − Λ(N − 1) − γ r).
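To spell out the computation behind (9.11) (nothing here is new; it only records the derivatives of the specific G in (9.10)): G'(r) = σ / r^{σ+1} and G''(r) = −σ(σ+1) / r^{σ+2}, so

−λ G''(r) = λ σ(σ+1)/r^{σ+2},    Λ(N−1) G'(r)/r = Λ(N−1) σ / r^{σ+2},    γ G'(r) = γ σ r / r^{σ+2},

and combining these with the signs in the last display above gives exactly the right-hand side of (9.11).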

Taking σ sufficiently large, we can guarantee that F(x, G, DG, D²G) ≥ K > 0 on Ω̄ (for bounded Ω, r remains in a bounded range), and then if M = sup_Ω f⁺ / K, we have

F(x, MG, MDG, MD²G) ≥ f

on Ω̄. We do not yet have our supersolution, since MG does not satisfy the boundary condition MG = 0 on ∂Ω. However,

ū(x) = inf_{x_b ∈ ∂Ω} M G(|x − z_b|)

does. Moreover, it is continuous since the family {MG(|x − z_b|) : x_b ∈ ∂Ω} is equicontinuous; it is then also a supersolution by Proposition 8.2.

Exercise 9.3. Construct a supersolution for a general continuous boundary function g on ∂Ω. (Hint: consider g(x_b) + ε + M_ε G(|x − z_b|).)

Exercise 9.4. Verify that if each of the F_{A,B} satisfies (9.6) with fixed constants λ, Λ, then so does sup_A inf_B F_{A,B}. Observe that the supersolution just constructed depends only on structure conditions, and note the implications of this.

As a last example, we take up the stationary Hamilton-Jacobi equation of Section 6, for which we established comparison (if you skipped that section, you can either return or skip this example). First of all, it is clear from the proof given for the Dirichlet problem that Perron's method applies to u + H(Du) = f in ℝ^N where f is uniformly continuous. Coupled with the comparison proved in Section 6, to prove the unique existence of a uniformly continuous solution we need only produce linearly bounded sub- and supersolutions. We seek a supersolution in the form ū(x) = A(1 + |x|²)^{1/2} + B for some constants A and B. Since f is uniformly continuous, f(x) ≤ K|x| + K for some K, and it suffices to have

A(1 + |x|²)^{1/2} + B + H(A x (1 + |x|²)^{−1/2}) ≥ K|x| + K   for x ∈ ℝ^N.

We may take A = K and then B large enough to guarantee this inequality (and subsolutions are obtained in a similar way).
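To indicate why A = K and large B suffice (a sketch, assuming, as appears to be the setting of Section 6, that H is continuous): the gradient Dū = K x (1 + |x|²)^{−1/2} satisfies |Dū| ≤ K, so H(Dū) ≥ −C_K with C_K = max_{|p| ≤ K} |H(p)| < ∞. Since also ū ≥ K|x| + B, we get

ū + H(Dū) ≥ K|x| + B − C_K ≥ K|x| + K

as soon as B ≥ K + C_K. A subsolution is obtained analogously from u̲(x) = −A(1 + |x|²)^{1/2} − B.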

Ishii [24] introduced Perron's method into this arena. The construction carried out for uniformly elliptic equations can be modified (incorporating an angular variable) to the situation in which there are exterior cones rather than balls at each point of ∂Ω; see Miller [33]. The existence theory in the uniformly elliptic case is completed in Exercise 10.2. It is interesting that the Dirichlet problem for the equation F(u, Du, D²u) = f(x), where f is merely in L^N(Ω), can be shown to have unique "viscosity solutions" as well. Of course, this requires adapting the notions and other machinery. See Caffarelli, Crandall, Kocan, and Swiech [8].


10. The Uniqueness Machinery for Second Order Equations

Let us begin by indicating why uniqueness for second order equations cannot be treated by simple extensions of the arguments for the first order case.

We consider a Dirichlet problem

(DP)    u + F(Du, D²u) = f(x)   in Ω,
        u(x) = ψ(x)   on ∂Ω,

and seek to prove a comparison result for subsolutions and supersolutions. For a change, we package a comparison result as a formal theorem.

Theorem 10.1. Let Ω be a bounded open subset of ℝ^N, f ∈ C(Ω̄), and F(p,X) be continuous and degenerate elliptic. Let u, v : Ω̄ → ℝ be upper semicontinuous and lower semicontinuous respectively, u be a solution of u + F(Du, D²u) ≤ f, v be a solution of v + F(Dv, D²v) ≥ f in Ω, and u ≤ v on ∂Ω. Then u ≤ v in Ω̄.

The strategy of the first order proof already given suggests that we consider a maximum of u(x) − v(y) − |x − y|²/(2ε) over Ω̄ × Ω̄ and let ε ↓ 0. Following the corresponding proof in Section 3, we may assume (x̂, ŷ) lies in Ω × Ω if ε is small, and apply the definitions to find that

u(x̂) + F((x̂ − ŷ)/ε, (1/ε) I) ≤ f(x̂),    v(ŷ) + F((x̂ − ŷ)/ε, −(1/ε) I) ≥ f(ŷ),

which turns out to be useless since (1/ε)I ≥ −(1/ε)I. We need refined information about this maximum (x̂, ŷ), which turns out to be a substantial and interesting issue. The information we need corresponds to the fact that if Φ(x,y) = u(x) − v(y) − |x − y|²/(2ε) has a maximum at (x̂, ŷ) and u, v are twice differentiable, then the full Hessian of Φ in (x,y) is nonpositive, or

( D²u(x̂)  0 ; 0  −D²v(ŷ) ) ≤ (1/ε) ( I  −I ; −I  I );

the failed attempt above did not reflect the full second order test for a maximum in the doubled variables (x,y). (The notation "I" is used here and later to denote the identity in any dimension.) Since the matrix on the right annihilates vectors of the form (ξ, ξ) for ξ ∈ ℝ^N, the above inequality implies that D²u(x̂) ≤ D²v(ŷ), which is the sort of thing we need; the one-line computation is recorded below. To get there, we require some preparations.
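For completeness, here is that computation (it only unwinds the definitions already given). If M denotes the matrix (1/ε)( I  −I ; −I  I ), then for any ξ ∈ ℝ^N,

M (ξ, ξ) = (1/ε) (ξ − ξ, −ξ + ξ) = 0,

so testing the block inequality above with the vector (ξ, ξ) gives

⟨D²u(x̂) ξ, ξ⟩ − ⟨D²v(ŷ) ξ, ξ⟩ ≤ ⟨M (ξ, ξ), (ξ, ξ)⟩ = 0,

that is, D²u(x̂) ≤ D²v(ŷ). The same device is used again after (10.2) below.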

The basic fact concerns semijets of differences (equivalently, sums), as hintedabove. We call it the "Theorem on Sums".

Theorem on Sums. Let 𝒪 be a locally compact subset of ℝ^N. Let u, −v : 𝒪 → ℝ be upper semicontinuous and φ be twice continuously differentiable in a neighborhood of 𝒪 × 𝒪. Set

w(x, y) = u(x) − v(y)   for x, y ∈ 𝒪

and suppose (x̂, ŷ) ∈ 𝒪 × 𝒪 is a local maximum of w(x,y) − φ(x,y) relative to 𝒪 × 𝒪. Then for each κ > 0 with κ D²φ(x̂, ŷ) < I there exist X, Y ∈ S(N) such that

(u(x̂), D_xφ(x̂, ŷ), X) ∈ J̄^{2,+}_𝒪 u(x̂),    (v(ŷ), −D_yφ(x̂, ŷ), Y) ∈ J̄^{2,−}_𝒪 v(ŷ),

and the block diagonal matrix with entries X, −Y satisfies

(10.1)    −(1/κ) I ≤ ( X  0 ; 0  −Y ) ≤ (I − κ D²φ(x̂, ŷ))^{−1} D²φ(x̂, ŷ).

We use the Theorem on Sums to prove Theorem 10.1.

Proof of Theorem 10.1. Set

Φ(x,y) = u(x) − v(y) − (1/(2ε)) |x − y|²

and consider a maximum (x̂, ŷ) of Φ over Ω̄ × Ω̄. Let

A = D²_{(x,y)} ( |x − y|²/(2ε) ) = (1/ε) ( I  −I ; −I  I )

and note that A ≤ (2/ε) I; moreover, for 2κ/ε < 1,

(I − κA)^{−1} A = (1/(ε − 2κ)) ( I  −I ; −I  I ),

as one checks by the computation sketched below.
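Here is one way to verify the last identity (a remark of ours; the algebra is elementary). Write J = ( I  −I ; −I  I ), so that A = J/ε and J² = 2J. Then

(I − κA) A = A − κ A² = A − (2κ/ε) A = (1 − 2κ/ε) A,

and since 2κ/ε < 1 makes I − κA invertible, applying (I − κA)^{−1} and dividing by 1 − 2κ/ε gives

(I − κA)^{−1} A = A / (1 − 2κ/ε) = (1/(ε − 2κ)) J.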

By the Theorem on Sums, there exist X, Y ∈ S(N) such that

(u(x̂), (x̂ − ŷ)/ε, X) ∈ J̄^{2,+} u(x̂),    (v(ŷ), (x̂ − ŷ)/ε, Y) ∈ J̄^{2,−} v(ŷ),

and

(10.2)    −(1/κ) I ≤ ( X  0 ; 0  −Y ) ≤ (1/(ε − 2κ)) ( I  −I ; −I  I ).

Since the right-hand side annihilates vectors of the form (ξ, ξ) for ξ ∈ ℝ^N, we conclude that X ≤ Y, and then F((x̂ − ŷ)/ε, Y) ≤ F((x̂ − ŷ)/ε, X) since F is degenerate elliptic. Moreover,

u(x̂) + F((x̂ − ŷ)/ε, X) ≤ f(x̂)   and   v(ŷ) + F((x̂ − ŷ)/ε, Y) ≥ f(ŷ),

so

u(x̂) − v(ŷ) ≤ f(x̂) − f(ŷ) + F((x̂ − ŷ)/ε, Y) − F((x̂ − ŷ)/ε, X) ≤ f(x̂) − f(ŷ),

and we conclude (using Lemma 4.1) that u(x̂) − v(ŷ) → 0 as ε ↓ 0. Since

u(x) − v(x) = Φ(x, x) ≤ Φ(x̂, ŷ) ≤ u(x̂) − v(ŷ),

we conclude. □


Note that the choice κ = ε/3 in (10.2) yields

(10.3)    −(3/ε) I ≤ ( X  0 ; 0  −Y ) ≤ (3/ε) ( I  −I ; −I  I ).

Exercise 10.1. Extend the above comparison proof to the equation

F(x, u, Du, D²u) = 0

in place of u + F(Du, D²u) = f(x) under the conditions that there exists γ > 0 such that

(10.4)    γ (r − s) ≤ F(x, r, p, X) − F(x, s, p, X)

for r ≥ s, (x, p, X) ∈ Ω × ℝ^N × S(N), and there is a function ω : [0, ∞] → [0, ∞] which satisfies ω(0+) = 0 such that

(10.5)    F(y, r, (x−y)/ε, Y) − F(x, r, (x−y)/ε, X) ≤ ω(|x − y|(1 + |x − y|/ε))

whenever x, y ∈ Ω, r ∈ ℝ, X, Y ∈ S(N) and (10.3) holds.

See [12, Section 3] regarding verifying the assumption (10.5).

Exercise 10.2.* Show, as in Exercise 4.2, that the "strictly increasing in r" assumption of the previous exercise can be dropped under the assumption that there is a c > 0 such that either u solves F ≤ −c or v solves F ≥ c. Show that if F satisfies (9.6) and v solves F ≥ 0, then there are arbitrarily small radial perturbations ψ such that v + ψ solves F ≥ c > 0.

Remark 10.2. At this point, via Perron's Method, the supersolutions constructed in Section 9, and the preceding exercise, we have uniquely solved (for example) the Dirichlet problem for F(u, Du, D²u) = f(x).

Exercise 10.3. Adapt the above comparison proof to handle linear growth of the sub- and supersolutions in the problem u + F(Du, D²u) = f(x) on ℝ^N. Use Perron's method to complete the proof of Theorem 3.3. (Hint: try supersolutions of the form employed for the first order case at the end of Section 9.)

The second order uniqueness theory has a long history, with important contributions by R. Jensen ([27] contains the first proof for second order equations without convexity conditions and introduced new ideas), H. Ishii, P. L. Lions and P. E. Souganidis (e.g., [31], [30], [25], [26]). See the comments ending [12, Section 3]. The result called here the "Theorem on Sums", which makes life so easy, is a mild refinement of the result called the "maximum principle for semicontinuous functions" in Crandall and Ishii [11] and was preceded by Crandall [10]. Note also "Ishii's Lemma" in [22, Chapter V]. The proof below is slicker too.



11. Proof of the Theorem on Sums

In this section we sketch the proof of the Theorem on Sums. By now, you know that it can be used to good effect. It was stated before in the form used in the theory, but for notational simplicity we change the statement a trifle, so it is, at last, about sums.

Theorem on Sums. Let 𝒪 be a locally compact subset of ℝ^N. Let u, v : 𝒪 → ℝ be upper semicontinuous and φ be twice continuously differentiable in a neighborhood of 𝒪 × 𝒪. Set

w(x, y) = u(x) + v(y)   for x, y ∈ 𝒪

and suppose (x̂, ŷ) ∈ 𝒪 × 𝒪 is a local maximum of w(x,y) − φ(x,y) relative to 𝒪 × 𝒪. Then for each κ > 0 with κ D²φ(x̂, ŷ) < I, there exist X, Y ∈ S(N) such that

(u(x̂), D_xφ(x̂, ŷ), X) ∈ J̄^{2,+}_𝒪 u(x̂),    (v(ŷ), D_yφ(x̂, ŷ), Y) ∈ J̄^{2,+}_𝒪 v(ŷ)

and the block diagonal matrix with entries X, Y satisfies

(11.1)    −(1/κ) I ≤ ( X  0 ; 0  Y ) ≤ (I − κ D²φ(x̂, ŷ))^{−1} D²φ(x̂, ŷ).

Exercise. Consider the function u(x) = x sin(1/x) for x ≠ 0, u(0) = 0, on ℝ. Show that J^{2,+}u(0) is empty while J̄^{2,+}u(0) is quite large.

In order to prove this result we will need to "regularize" merely semicon-tinuous functions. The method we will use is called "sup convolution". Thisoperation is introduced next, the relevant properties are established, and theseresults are then used in the proof.

Let 𝒪 be a closed subset of ℝ^M, κ > 0, ψ : 𝒪 → ℝ and ζ ∈ ℝ^M, and put

(11.2)    ψ̂(ζ) = sup_{z ∈ 𝒪} ( ψ(z) − (1/(2κ)) |z − ζ|² ).

Obviously the supremum is attained and finite if ψ ≢ −∞ and

(11.3)    limsup_{z ∈ 𝒪, |z| → ∞} ψ(z)/|z|² < 1/(2κ),

which we always assume, often without comment. If we extend ψ to all of ℝ^M by

ψ(z) = −∞   for z ∉ 𝒪,

the formula becomes

(11.4)    ψ̂(ζ) = max_{z ∈ ℝ^M} ( ψ(z) − (1/(2κ)) |z − ζ|² ).

We make this extension without further comment, and treat upper semicontinuous ψ : ℝ^M → [−∞, ∞), where we now allow −∞ as a value but insist that ψ ≢ −∞.


Obviously ψ̂ depends on κ, which is not indicated in our notation; κ will be obvious from the context. We note some obvious properties of sup convolution:

(11.5)    (i)   If ψ₁ ≤ ψ₂, then ψ̂₁ ≤ ψ̂₂.
          (ii)  ψ ≤ ψ̂.
          (iii) ψ̂(ζ) + (1/(2κ)) |ζ|² is convex.

Property (i) needs no comment. Property (ii) is seen by putting z = ζ in the defining supremum. Property (iii) holds because ψ(z) − (1/(2κ))|z − ζ|² + (1/(2κ))|ζ|² is affine in ζ, and the supremum of a family of convex functions is convex.

Property (iii) is called "semiconvexity"; ψ̂ is semiconvex (indeed, ψ̂(ζ) + (1/(2κ))|ζ|² is convex). Less obvious properties include that sup convolution is an approximation of the identity, that is,

lim_{κ ↓ 0} ψ̂(ζ) = ψ(ζ)

for ζ ∈ ℝ^M when the left-hand side of (11.3) is finite.

Exercise 11.1. Prove this claim.

Hence sup convolution provides an approximation of ψ which is pretty regular, that is, semiconvex. The function ψ̂ enjoys all the regularity of convex functions.

We later use the following special case:

Exercise 11.2.* If B ∈ S(M) and 2κB < I, prove that if ψ(z) = ⟨Bz, z⟩, then

ψ̂(ζ) = ⟨B (I − 2κB)^{−1} ζ, ζ⟩.
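As a plausibility check (this is only the scalar case of the exercise, worked out under the stated assumption 2κb < 1): for ψ(z) = b z² on ℝ, the supremum in (11.2) is attained where 2bz − (z − ζ)/κ = 0, that is, at z = ζ/(1 − 2κb), and substituting back gives

ψ̂(ζ) = b z² − (1/(2κ))(z − ζ)² = b ζ² / (1 − 2κb),

which is ⟨b(1 − 2κb)^{−1} ζ, ζ⟩, in agreement with the displayed formula.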

Finally, we note the "magic" properties of sup convolution - it respects J 2,+in the following sense:

Theorem On Magic Properties. Let 1/1 :mN ---+ IR be upper semicouiinuousand satisfy (11.3). If ( E IRM and

(,(/;((),p, X) E

then for

z= (+ "'p

and for every real M x M matrix T,

, 1(1/1(( + ",p),p, -(1 - T*)(I - T) + T* XT) E J 2'+1/1 (2)

r:

where T* is the adjoint of T. Moreover, 2 is the unique point for which

(11.6)

33

In particular, choosing T = I,

(ψ(ẑ), p, X) ∈ J^{2,+} ψ(ẑ).

Proof. We may assume that there exists φ ∈ C²(ℝ^M) such that

ψ̂(ζ′) − φ(ζ′) ≤ ψ̂(ζ) − φ(ζ)

for ζ′ ∈ ℝ^M and Dφ(ζ) = p, D²φ(ζ) = X. From the definition of ψ̂ this implies there exists ẑ such that

ψ(z) − (1/(2κ)) |z − ζ′|² − φ(ζ′) ≤ ψ(ẑ) − (1/(2κ)) |ẑ − ζ|² − φ(ζ)

for all z, ζ′ ∈ ℝ^M. Putting first z = ẑ, we find that (1/(2κ)) |ẑ − ζ′|² + φ(ζ′) has a minimum at ζ′ = ζ, and thus, by the first and second order tests for a minimum,

(11.7)    (ẑ − ζ)/κ = Dφ(ζ) = p   and   (1/κ) I + X ≥ 0.

Next, leave z free but put ζ′ = T(z − ẑ) + ζ. This leads to

ψ(z) − Φ(z) ≤ ψ(ẑ) − Φ(ẑ),

where

Φ(z) = (1/(2κ)) |(I − T) z + T ẑ − ζ|² + φ(T z − T ẑ + ζ).

We conclude that

(ψ(ẑ), DΦ(ẑ), D²Φ(ẑ)) ∈ J^{2,+} ψ(ẑ).

Finally, by (11.7) and direct computation,

DΦ(ẑ) = (ẑ − ζ)/κ = p,

D²Φ(ẑ) = (1/κ)(I − T*)(I − T) + T* D²φ(ζ) T = (1/κ)(I − T*)(I − T) + T* X T.    □

Remark 11.1. It is natural to ask what is the smallest matrix Y which can be written in the form

Y = (1/κ)(I − T*)(I − T) + T* X T,

where we recall that X ≥ −(1/κ) I. There is no optimal choice if −1/κ is an eigenvalue of X; this can be seen from the scalar case: putting X = −1/κ and T = t leads to Y = (1 − 2t)/κ, which is not bounded below. If X > −(1/κ) I, then the answer is (exercise)

T = (I + κX)^{−1},

which yields, with a little algebra,

(11.8)    Y = X (I + κX)^{−1}.

Exercise 11.3. Compute ψ̂ if ψ(0) = 1 and ψ(x) = 0 for x ≠ 0.

Exercise 11.4.* Show that if u is a subsolution of a proper equation

F(u, Du, D²u) = 0

on ℝ^N, then û is as well. Using (11.6), show that

|ẑ − ζ|² ≤ 2κ (u(ẑ) − u(ζ)).

Conclude that if u is a bounded subsolution of an equation F(u, Du, D²u) = f(x), where f is uniformly continuous, then û is a solution of F(û, Dû, D²û) ≤ f(x) + δ_κ for some constant δ_κ → 0 as κ ↓ 0. Discuss the general case, F(x, u, Du, D²u) ≤ 0.

It is always an option to use this method (direct use of the approximations in the equations), as indicated by Exercise 11.5*, in place of the Theorem on Sums while working on problems in this area.

We will also employ two nontrivial facts about semiconvex functions. The first assertion is a classical result of Aleksandrov:

Theorem (Aleksandrov). Let g : ℝ^M → ℝ be semiconvex. Then g is twice differentiable almost everywhere on ℝ^M.

Here "g is twice differentiable at ẑ" means that

g(z) = g(ẑ) + ⟨p, z − ẑ⟩ + (1/2) ⟨X(z − ẑ), z − ẑ⟩ + o(|z − ẑ|²)

for some p ∈ ℝ^M, X ∈ S(M), and then we say Dg(ẑ) = p, D²g(ẑ) = X. We will not prove this classical result.

The next result we will need concerning semiconvex functions follows. In the statement, B_r(z) is the ball of radius r about z. We will prove this result after completing the proof of the Theorem on Sums. It is a variant of an argument of Aleksandrov.

Lemma 11.2. Let φ : ℝ^M → ℝ be semiconvex and x̂ be a strict local maximum point of φ. For p ∈ ℝ^M, set φ_p(x) = φ(x) + ⟨p, x⟩. Then for r > 0 sufficiently small and all δ > 0, the set

{ y ∈ B_r(x̂) : there exists p ∈ B_δ(0) such that φ_p(x) ≤ φ_p(y) for x ∈ B_r(x̂) }

has positive measure.

Proof of the Theorem on Sums. In the proof, we may as well assume that 𝒪 = ℝ^N. Indeed, if not, we first restrict u, v to compact neighborhoods N₁, N₂ of x̂, ŷ in 𝒪 and then extend the restrictions to ℝ^N by u(x) = v(y) = −∞ for x ∉ N₁ and y ∉ N₂. One then checks that J̄^{2,+}_𝒪 u(x̂) = J̄^{2,+} u(x̂) (provided −∞ < u(x̂)), etc. We may assume that φ is C² in a neighborhood of N₁ × N₂, and then on ℝ^N × ℝ^N by modification off N₁ × N₂. It is clear that (x̂, ŷ) is still a local maximum of w − φ relative to ℝ^N × ℝ^N.

Further, we may as well assume that

(11.9)    x̂ = ŷ = 0,   u(0) = v(0) = 0,

and that

φ(x, y) = (1/2) ⟨ A (x, y), (x, y) ⟩

for some A ∈ S(2N) is a pure quadratic, and that 0 is a global maximum of w − φ. Indeed, a translation puts x̂, ŷ at the origin, and then by replacing φ(x,y), u(x), v(y) by φ(x,y) − (φ(0) + ⟨D_xφ(0), x⟩ + ⟨D_yφ(0), y⟩), u(x) − (u(0) + ⟨D_xφ(0), x⟩), v(y) − (v(0) + ⟨D_yφ(0), y⟩), we reduce to the situation x̂ = ŷ = D_xφ(0) = D_yφ(0) = 0 and φ(0) = u(0) = v(0) = 0. Then, since

φ(x, y) = (1/2) ⟨ A (x, y), (x, y) ⟩ + o(|x|² + |y|²),

where A = D²φ(0), and w(x,y) − φ(x,y) ≤ w(0) − φ(0) = 0 for small x, y, if η > 0 we will have

w(x, y) − (1/2) ⟨ (A + ηI)(x, y), (x, y) ⟩ < 0

for small (x, y) ≠ 0. Globality of the (strict) maximum at 0 may be achieved by localizing further as above. If the result holds in this case, we then pass to the limit as η ↓ 0 to obtain the general result.

From

w(x, y) = u(x) + v(y) ≤ (1/2) ⟨ A (x, y), (x, y) ⟩

and Exercise 11.2 we have

(11.10)    û(ξ) + v̂(η) ≤ (1/2) ⟨ A (I − κA)^{−1} ζ, ζ ⟩,

where we are writing ζ = (ξ, η) and û, v̂ for the sup convolutions of u and v. Since u ≤ û, etc., 0 = u(0) ≤ û(0); similarly 0 ≤ v̂(0). On the other hand, by (11.10) we have û(0) + v̂(0) ≤ 0, and then û(0) = v̂(0) = 0 and

û(ξ) + v̂(η) − (1/2) ⟨ A (I − κA)^{−1} ζ, ζ ⟩

has a maximum at the origin, which is in fact strict (or increase A a little if you don't want to check this). By Jensen's Lemma and Aleksandrov's Theorem, for


each δ > 0 we have p_δ, q_δ ∈ B_δ(0) such that

û(ξ) + v̂(η) − (1/2) ⟨ A (I − κA)^{−1} ζ, ζ ⟩ + ⟨p_δ, ξ⟩ + ⟨q_δ, η⟩

has a maximum (ξ_δ, η_δ), lying in B_r(0) × B_r(0) for some small fixed r, at which û and v̂ are twice differentiable. We now apply the magic properties and let δ ↓ 0 to reach the conclusion. The second order test at the maximum and semiconvexity imply

−(1/κ) I ≤ ( D²û(ξ_δ)  0 ; 0  D²v̂(η_δ) ) ≤ A (I − κA)^{−1};

therefore we can assume that

D²û(ξ_δ) → X,   D²v̂(η_δ) → Y

along a sequence δ ↓ 0 and

−(1/κ) I ≤ ( X  0 ; 0  Y ) ≤ (I − κA)^{−1} A.

Let P denote the projection of ℝ^N × ℝ^N onto its first N coordinates, so that, at the maximum, Dû(ξ_δ) = P A (I − κA)^{−1} ζ_δ − p_δ, where ζ_δ = (ξ_δ, η_δ). By the magic properties,

(u(ξ_δ + κ Dû(ξ_δ)), Dû(ξ_δ), D²û(ξ_δ)) ∈ J^{2,+} u(ξ_δ + κ Dû(ξ_δ)),

and, by the definitions and the magic properties (in particular (11.6)),

û(ξ_δ) = u(ξ_δ + κ Dû(ξ_δ)) − (κ/2) |Dû(ξ_δ)|².

Since 0 is a strict maximum of the unperturbed function, (ξ_δ, η_δ) → 0 as δ ↓ 0; since û is continuous, we conclude that

ξ_δ + κ Dû(ξ_δ) → 0,   Dû(ξ_δ) → 0,   u(ξ_δ + κ Dû(ξ_δ)) → û(0) = u(0) = 0,

and then

(u(0), 0, X) ∈ J̄^{2,+} u(0)

follows upon letting δ ↓ 0. The analogous comments hold for v. □

Proof of Jensen's Lemma. We assume that r is so small that φ has x̂ as its unique maximum point in B̄_r(x̂), and assume for the moment that φ is C². It follows from this that if δ is sufficiently small and p ∈ B_δ(0), then every maximum of φ_p with respect to B̄_r(x̂) lies in the interior of B_r(x̂). Since Dφ + p = 0 holds at maximum points of φ_p, we have Dφ(K) ⊃ B_δ(0), where K denotes the set in the statement of the lemma. Let λ ≥ 0 be such that φ(x) + (λ/2)|x|² is convex; we then have −λI ≤ D²φ; moreover, on K, D²φ ≤ 0, and then

−λ I ≤ D²φ(x) ≤ 0   for x ∈ K.

In particular, |det D²φ(x)| ≤ λ^M for x ∈ K. Thus

meas(B_δ(0)) ≤ meas(Dφ(K)) ≤ ∫_K |det D²φ(x)| dx ≤ meas(K) λ^M,

and we have a lower bound on the measure of K depending only on λ and δ.


In the general case, in which φ need not be smooth, we approximate it via mollification with smooth functions φ_n which have the same semiconvexity constant λ and which converge uniformly to φ on B̄_r(x̂). The corresponding sets K_n obey the above estimates for large n, and K ⊃ limsup_{n→∞} K_n is evident. The result now follows. □

Exercise 11.6. Extend the Theorem on Sums to an arbitrary number of summands.

Concerning the "magic properties" , perhaps they are not so magic if viewedthrough the lens explained by Evans [20] under the subheading "Jensen's regu­larizations"; in this regard see also Lasry and Lions [32] and Jensen, Lions andSouganidis [30]. The precise formulation and proof of the magic properties givenabove is from Crandall, Kocan, Soravia, and Swiech [13]. It is interesting thatthe Theorem on Sums does not refer to regularizations; the information is storedas a general fact about semicontinuous functions. Of course, as noted before,using regularizations themselves in pdes is an important tool. Aleksandrov'stheorem is from [1]; see also [21]. Jensen's lemma is proved in [27].

12. Briefly Parabolic

We briefly indicate how to extend the results of the preceding sections to problems involving the "parabolic" equation

(PE)    u_t + F(t, x, u, Du, D²u) = 0,

where now u is to be a function of (t, x) and Du, D²u mean D_x u(t,x) and D²_x u(t,x). We do this by discussing comparison for the Cauchy-Dirichlet problem on a bounded domain; it will then be clear how to modify other proofs as well. Let 𝒪 be a locally compact subset of ℝ^N, T > 0, and 𝒪_T = (0, T) × 𝒪. We denote by P^{2,+}_𝒪, P^{2,−}_𝒪 the "parabolic" variants of the semijets J^{2,±}_𝒪. For example, if u : 𝒪_T → ℝ, P^{2,+}_𝒪 u is defined by: (u(s,z), a, p, X) ∈ ℝ × ℝ × ℝ^N × S(N) lies in P^{2,+}_𝒪 u(s,z) if (s,z) ∈ 𝒪_T and

(12.1)    u(t,x) ≤ u(s,z) + a(t − s) + ⟨p, x − z⟩ + (1/2) ⟨X(x − z), x − z⟩ + o(|t − s| + |x − z|²)   as 𝒪_T ∋ (t,x) → (s,z);

similarly, P^{2,−}_𝒪 u = −P^{2,+}_𝒪 (−u). The corresponding definitions of P̄^{2,±}_𝒪 are then clear. This definition reflects that (PE) is first order in t.

The notions of a subsolution, etc., of (PE) on an open set are contained in the previous discussion. As in Section 7, they may be reformulated, and with a little work one sees that we have: a subsolution of (PE) on 𝒪_T is an upper semicontinuous function u : 𝒪_T → ℝ such that

(12.2)    a + F(t, x, r, p, X) ≤ 0   for (t,x) ∈ 𝒪_T

whenever (r, a, p, X) ∈ P^{2,+}_𝒪 u(t,x); likewise, a supersolution is a lower semicontinuous function v such that

(12.3)    a + F(t, x, r, p, X) ≥ 0   for (t,x) ∈ 𝒪_T

whenever (r, a, p, X) ∈ P^{2,−}_𝒪 v(t,x).

We show how to treat the Cauchy-Dirichlet problem for (PE), exhibiting the considerations which do not occur in the stationary case. Consider the problem

(12.4)    (E)   u_t + F(t, x, u, Du, D²u) = 0   in (0, T) × 𝒪,
          (BC)  u(t, x) = 0   for 0 ≤ t < T and x ∈ ∂𝒪,
          (IC)  u(0, x) = ψ(x)   for x ∈ 𝒪̄,

where 𝒪 ⊂ ℝ^N is open and T > 0 and ψ ∈ C(𝒪̄) are given. By a subsolution of (12.4) on [0,T) × 𝒪̄ we mean an upper semicontinuous function u : [0,T) × 𝒪̄ → ℝ such that u is a subsolution of (PE) in (0,T) × 𝒪, u(t,x) ≤ 0 for 0 ≤ t < T and x ∈ ∂𝒪, and u(0,x) ≤ ψ(x) for x ∈ 𝒪̄; and so on for supersolutions and solutions.

Theorem 12.1. Let 𝒪 ⊂ ℝ^N be open and bounded. Let F ∈ C([0,T] × 𝒪̄ × ℝ × ℝ^N × S(N)) be continuous, proper and satisfy (10.5) for each fixed t ∈ [0,T), with the same function ω. If u is a subsolution of (12.4) and v is a supersolution of (12.4), then u ≤ v on [0,T) × 𝒪̄.

To continue, we require the parabolic analogue of the Theorem on Sums. It takes the form:

Theorem 12.2. Let 𝒪 be a locally compact subset of ℝ^N and u, v : (0,T) × 𝒪 → ℝ be upper semicontinuous. Let φ be defined on an open neighborhood of (0,T) × 𝒪 × 𝒪 and be such that (t,x,y) → φ(t,x,y) is once continuously differentiable in t and twice continuously differentiable in (x,y). Let (t̂, x̂, ŷ) ∈ (0,T) × 𝒪 × 𝒪 and

w(t, x, y) ≡ u(t,x) + v(t,y) − φ(t,x,y) ≤ w(t̂, x̂, ŷ)

for 0 < t < T and x, y ∈ 𝒪. Assume, moreover, that there is an r > 0 such that for every M > 0 there is a C such that

(12.5)    b ≤ C whenever (u(t,x), b, q, X) ∈ P^{2,+}_𝒪 u(t,x) and |x − x̂| + |t − t̂| ≤ r and |u(t,x)| + |q| + ‖X‖ ≤ M.

Assume the same condition on v (with x̂ replaced by ŷ just above). Then for each κ > 0 with κ D²φ(t̂, x̂, ŷ) < I there are X, Y ∈ S(N) and b₁, b₂ ∈ ℝ such that

(u(t̂,x̂), b₁, D_xφ(t̂,x̂,ŷ), X) ∈ P̄^{2,+}_𝒪 u(t̂,x̂),    (v(t̂,ŷ), b₂, D_yφ(t̂,x̂,ŷ), Y) ∈ P̄^{2,+}_𝒪 v(t̂,ŷ),

and

b₁ + b₂ = φ_t(t̂, x̂, ŷ),

and

−(1/κ) I ≤ ( X  0 ; 0  Y ) ≤ (I − κ D²φ(t̂,x̂,ŷ))^{−1} D²φ(t̂,x̂,ŷ),

where D²φ denotes the Hessian with respect to the (x,y) variables.

Observe that the condition (12.5) is guaranteed if u is a subsolution of a parabolic equation (and likewise for v).

Proof of Theorem 12.1. First observe that (decreasing T if necessary) we can assume that u is bounded above and v is bounded below. Next, let c > 0 and notice that ũ = u − ct − c/(T − t) is also a subsolution of (12.4) and in fact satisfies (PE) with a strict inequality:

ũ_t + F(t, x, ũ, Dũ, D²ũ) ≤ −c − c/(T − t)².
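To see where this comes from, here is the formal computation (it runs equally well through the definition of P^{2,+}): ũ and u have the same spatial derivatives, ũ ≤ u, and

ũ_t = u_t − c − c/(T − t)²,

so, using that F is proper (nondecreasing in its u-argument) and that u is a subsolution,

ũ_t + F(t, x, ũ, Dũ, D²ũ) ≤ u_t + F(t, x, u, Du, D²u) − c − c/(T − t)² ≤ −c − c/(T − t)².

Note also that ũ(t,x) → −∞ as t ↑ T, uniformly in x, because of the term −c/(T − t); this is what gives condition (ii) below.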

Since u ≤ v follows from ũ ≤ v in the limit c ↓ 0, it will suffice to prove the comparison under the additional assumptions

(12.6)    (i)   u_t + F(t, x, u, Du, D²u) ≤ −c   and
          (ii)  lim_{t↑T} u(t,x) = −∞ uniformly on 𝒪̄.

Let (t̂, x̂, ŷ) be a maximum point of u(t,x) − v(t,y) − |x − y|²/(2ε) over [0,T) × 𝒪̄ × 𝒪̄, where ε > 0; such a maximum exists in view of the assumed bound above on u, −v, the compactness of 𝒪̄, and (12.6)(ii). By Lemma 4.1,

(12.7)    |x̂ − ŷ|²/ε → 0   as ε ↓ 0.

Let (t̄, x̄, x̄) be a limit point of (t̂, x̂, ŷ) as ε ↓ 0. If

(t̄, x̄) ∈ ({0} × 𝒪̄) ∪ ([0,T) × ∂𝒪),

upper semicontinuity and the side conditions imply that

liminf_{ε↓0} ( u(t̂,x̂) − v(t̂,ŷ) − |x̂ − ŷ|²/(2ε) ) ≤ u(t̄,x̄) − v(t̄,x̄) ≤ 0.

Hence, since u(t,x) − v(t,x) ≤ u(t̂,x̂) − v(t̂,ŷ) − |x̂ − ŷ|²/(2ε) for every (t,x) ∈ [0,T) × 𝒪̄ and every ε > 0,

(12.8)    u(t,x) − v(t,x) ≤ liminf_{ε↓0} ( u(t̂,x̂) − v(t̂,ŷ) − |x̂ − ŷ|²/(2ε) ) ≤ 0

and we are done. If t̄ > 0 and x̄ ∉ ∂𝒪, then we may assume that t̂ > 0 and x̂, ŷ ∈ 𝒪, and use Theorem 12.2 at (t̂, x̂, ŷ) to learn that there are numbers a, b and matrices X, Y ∈ S(N) such that

(u(t̂,x̂), a, (x̂ − ŷ)/ε, X) ∈ P̄^{2,+} u(t̂,x̂)

and

(v(t̂,ŷ), b, (x̂ − ŷ)/ε, Y) ∈ P̄^{2,−} v(t̂,ŷ),

and

(12.9)    a − b = 0   and   −(3/ε) I ≤ ( X  0 ; 0  −Y ) ≤ (3/ε) ( I  −I ; −I  I ).

We may assume that v(t̂,ŷ) < u(t̂,x̂), for otherwise we have (12.8). Then the relations

a + F(t̂, x̂, u(t̂,x̂), (x̂ − ŷ)/ε, X) ≤ −c,    b + F(t̂, ŷ, v(t̂,ŷ), (x̂ − ŷ)/ε, Y) ≥ 0,

together with (12.9), the properness of F and (10.5), imply

c ≤ F(t̂, ŷ, v(t̂,ŷ), (x̂ − ŷ)/ε, Y) − F(t̂, x̂, u(t̂,x̂), (x̂ − ŷ)/ε, X)
  ≤ F(t̂, ŷ, u(t̂,x̂), (x̂ − ŷ)/ε, Y) − F(t̂, x̂, u(t̂,x̂), (x̂ − ŷ)/ε, X)
  ≤ ω(|x̂ − ŷ|(1 + |x̂ − ŷ|/ε)),

which leads to a contradiction via (12.7).

Exercise 12.1.* Work out the full proof of Theorem 3.2. The primary steps are to adapt the above proof to the pure initial-value problem via the devices used in Sections 5 and 6, to note that Perron's Method applies, and to produce subsolutions and supersolutions. All is routine, except perhaps the last step. Here is a brute force way to go about this. First, via comparison and stability, it suffices to discuss Lipschitz continuous ψ (as uniformly continuous functions on ℝ^N are precisely those which are uniform limits of Lipschitz continuous functions). If ψ has Lipschitz constant L, then for every ε > 0 and z ∈ ℝ^N, ψ ≤ ψ_{z,ε} where

ψ_{z,ε}(x) = ψ(z) + L(|x − z|² + ε)^{1/2}.

Show that there is an A_ε such that u_{ε,z} = A_ε t + ψ_{z,ε} is a supersolution for the initial value problem, that inf_{ε,z} u_{ε,z} is continuous on [0,∞) × ℝ^N, and agrees with ψ at t = 0.

Regarding the parabolic theorem on sums, see [11] and note again "Ishii'sLemma" in [22, Chapter V].

REFERENCES

[1] A. D. Aleksandrov, Almost everywhere existence of the second differential of a convex function and some properties of convex functions, Leningrad University Annals (Mathematical Series) 37 (1939), 3-35 (in Russian).

[2] M. Bardi, Some applications of viscosity solutions to optimal control and differential games, this volume.

[3] G. Barles, Solutions de viscosité des équations de Hamilton-Jacobi, Springer-Verlag, New York, 1994.

[4] G. Barles and B. Perthame, Discontinuous solutions of deterministic optimal stopping-time problems, Model. Math. et Anal. Num. 21 (1987), 557-579.

[5] ———, Exit time problems in optimal control and the vanishing viscosity method, SIAM J. Control Optim. 26 (1988), 1133-1148.

[6] G. Barles and P. E. Souganidis, Convergence of approximation schemes for fully nonlinear second order equations, Asymp. Anal. 4 (1989), 271-283.

[7] X. Cabré and L. Caffarelli, Fully Nonlinear Elliptic Equations, Amer. Math. Society, Providence, 1995.

[8] L. Caffarelli, M. Crandall, M. Kocan and A. Swiech, On viscosity solutions of fully nonlinear equations with measurable ingredients, Comm. Pure Appl. Math. 49 (1996), 365-397.

[9] F. H. Clarke, Y. S. Ledyaev, R. J. Stern, and P. R. Wolenski, Qualitative properties of trajectories of control systems - a survey, Journal of Dynamical and Control Systems 1 (1995), 1-48.

[10] M. G. Crandall, Quadratic forms, semidifferentials and viscosity solutions of fully nonlinear elliptic equations, Ann. I.H.P. Anal. Non Lin. 6 (1989), 419-435.

[11] M. G. Crandall and H. Ishii, The maximum principle for semicontinuous functions, Diff. and Int. Equations 3 (1990), 1001-1014.

[12] M. G. Crandall, H. Ishii and P. L. Lions, User's Guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.) 27 (1992), 1-67.

[13] M. G. Crandall, M. Kocan, P. Soravia and A. Swiech, On the equivalence of various weak notions of solutions of elliptic pdes with measurable ingredients, in Progress in Elliptic and Parabolic Partial Differential Equations, Alvino et al., eds., Pitman Research Notes 350, Addison Wesley Longman, 1996, pp. 136-162.

[14] M. G. Crandall and P. L. Lions, Condition d'unicité pour les solutions généralisées des équations de Hamilton-Jacobi du premier ordre, C. R. Acad. Sci. Paris 292 (1981), 183-186.

[15] ———, Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc. 277 (1983), 1-42.

[16] M. G. Crandall, P. L. Lions and L. C. Evans, Some properties of viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc. 282 (1984), 487-502.

[17] G. Dong, Nonlinear Partial Differential Equations of Second Order, Translations of Mathematical Monographs 95, American Mathematical Society, Providence, 1994.

[18] L. C. Evans, A convergence theorem for solutions of nonlinear second order elliptic equations, Indiana Univ. Math. J. 27 (1978), 875-887.

[19] ———, On solving certain nonlinear differential equations by accretive operator methods, Israel J. Math. 36 (1980).

[20] ———, Regularity for fully nonlinear elliptic equations and motion by mean curvature, this volume.

[21] L. C. Evans and R. Gariepy, Measure Theory and Fine Properties of Functions, Studies in Advanced Mathematics, CRC Press, Boca Raton, 1992.

[22] W. H. Fleming and H. Mete Soner, Controlled Markov Processes and Viscosity Solutions, Applications of Mathematics 25, Springer-Verlag, New York, 1993.

[23] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd Edition, Springer-Verlag, New York, 1983.

[24] H. Ishii, Perron's method for Hamilton-Jacobi equations, Duke Math. J. 55 (1987).

[25] ———, On uniqueness and existence of viscosity solutions of fully nonlinear second order elliptic PDE's, Comm. Pure Appl. Math. 42 (1989), 14-45.

[26] H. Ishii and P. L. Lions, Viscosity solutions of fully nonlinear second order elliptic partial differential equations, J. Diff. Equa. 83 (1990), 26-78.

[27] R. Jensen, The maximum principle for viscosity solutions of fully nonlinear second order partial differential equations, Arch. Rat. Mech. Anal. 101 (1988), 1-27.

[28] ———, Uniqueness criteria for viscosity solutions of fully nonlinear elliptic partial differential equations, Indiana U. Math. J. 38 (1989), 667.

[29] ———, Uniqueness of Lipschitz extensions - Minimizing the sup norm of the gradient, Arch. Rat. Mech. Anal. 123 (1993).

[30] R. Jensen, P. L. Lions and P. E. Souganidis, A uniqueness result for viscosity solutions of second order fully nonlinear partial differential equations, Proc. AMS 102 (1988), 975-978.

[31] P. L. Lions, Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. Part 1: The dynamic programming principle and applications and Part 2: Viscosity solutions and uniqueness, Comm. P. D. E. 8 (1983), 1101-1174 and 1229-1276.

[32] J. M. Lasry and P. L. Lions, A remark on regularization in Hilbert spaces, Israel J. Math. 55 (1986), 257-266.

[33] K. Miller, Barriers on cones for uniformly elliptic equations, Ann. di Mat. Pura Appl. LXXVI (1967), 93-106.

[34] M. Soner, Controlled Markov processes, viscosity solutions and applications to mathematical finance, this volume.

[35] P. E. Souganidis, Front Propagation: Theory and applications, this volume.

[36] A. Subbotin, Solutions of First-Order PDEs. The Dynamical Optimization Perspective, Birkhäuser, Boston, 1995.

[37] A. Swiech, W^{1,p}-interior estimates for solutions of fully nonlinear, uniformly elliptic equations, preprint.

[38] N. S. Trudinger, Comparison principles and pointwise estimates for viscosity solutions of second order elliptic equations, Rev. Mat. Iberoamericana 4 (1988), 453-468.

[39] ———, Hölder gradient estimates for fully nonlinear elliptic equations, Proc. Roy. Soc. Edinburgh, Sect. A 108 (1988).