
COOPERATIVE MOTION IN ONE DIMENSION

LOUIGI ADDARIO-BERRY, ERIN BECKMAN, AND JESSICA LIN

Abstract. We prove distributional convergence for a family of random processes on Z, which we call cooperative motions. The model generalizes the “totally asymmetric hipster random walk” introduced in [1]. We present a novel approach based on connecting a temporal recurrence relation satisfied by the cumulative distribution functions of the process to the theory of finite difference schemes for Hamilton-Jacobi equations [8]. We also point out some surprising lattice effects that can persist in the distributional limit, and propose several generalizations and directions for future research.

1. Introduction

1.1. Description of the model and the main result. Let (D_n, n ≥ 0) be a collection of independent, identically distributed integer random variables with common law ν. Fix a probability distribution µ on Z ∪ {−∞,∞}, and define a sequence (X_n, n ≥ 0) of extended real random variables as follows. Let X_0 be chosen according to µ. For n ≥ 0, let (X̃^i_n, 1 ≤ i ≤ m) be independent copies of X_n, and set

\[
X_{n+1} =
\begin{cases}
X_n + D_n & \text{if } X_n = \tilde X^i_n \text{ for all } i = 1,\dots,m,\\
X_n & \text{if } X_n \neq \tilde X^i_n \text{ for some } i,
\end{cases}\tag{1.1}
\]

where we use the convention that ∞ + r = ∞ and −∞ + r = −∞ for r ∈ Z. We refer to the resulting sequence of random variables (X_n, n ≥ 0) as a cooperative motion with initial distribution µ and step size distribution ν.

We will principally consider the case when the steps (D_n, n ≥ 0) are Bernoulli(q)-distributed; in this case, we write CM(m,q,µ) for the law of the process (X_n, n ≥ 0) when started from initial distribution µ. Our main result is to show that CM(m,q,µ) processes are asymptotically Beta-distributed whenever the initial distribution µ is supported by Z. This generalizes a result from [1], which is the case m = 1 of the next theorem.

Theorem 1.1. Fix an integer m ≥ 1, q ∈ (0,1) and any probability distribution µ on Z, and let (X_n, n ≥ 0) be CM(m,q,µ)-distributed. Then

\[
\frac{1}{m+1}\Big(\frac{m^m}{q}\Big)^{\frac{1}{m+1}}\cdot\frac{X_n}{n^{1/(m+1)}} \xrightarrow{\ d\ } B, \tag{1.2}
\]

where B is Beta((m+1)/m, 1)-distributed.

The CM(m,q,µ) processes are a type of random walk with delay. However, the amount of the delay is tied to the law of the process itself, since if X_n finds itself in an unlikely location, then the odds that X̃^1_n, ..., X̃^m_n are all equal to X_n are low. As such, the position and the rate of motion are highly dependent upon each other, which is the primary challenge in analyzing the process.
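Because the copies X̃^i_n are drawn from the law of X_n rather than from a single trajectory, the process cannot be simulated path-by-path; a standard workaround is a mean-field particle approximation in which the copies are sampled from a large empirical population. The following sketch (population size, horizon, seed, and tolerance are our own choices, not from the paper) illustrates the Beta limit of Theorem 1.1 for m = 1, q = 1/2:

```python
import random

def simulate_cm_population(m=1, q=0.5, n_steps=400, pop=4000, seed=7):
    """Mean-field particle approximation of CM(m, q, delta_0): each particle
    plays the role of X_n, and the copies X~^i_n are sampled from the
    empirical law of the population *before* the current step's updates."""
    rng = random.Random(seed)
    xs = [0] * pop
    for _ in range(n_steps):
        snapshot = xs[:]  # frozen empirical approximation of the law of X_n
        for j in range(pop):
            # move only if all m sampled copies tie with this particle ...
            if all(rng.choice(snapshot) == xs[j] for _ in range(m)):
                # ... and the Bernoulli(q) step D_n equals 1
                if rng.random() < q:
                    xs[j] += 1
    return xs

if __name__ == "__main__":
    m, q, n = 1, 0.5, 400
    xs = simulate_cm_population(m=m, q=q, n_steps=n)
    # Theorem 1.1: (1/(m+1)) (m^m/q)^{1/(m+1)} X_n / n^{1/(m+1)} ~ Beta((m+1)/m, 1),
    # whose mean is 2/3 when m = 1.
    scale = (1.0 / (m + 1)) * (m ** m / q) ** (1.0 / (m + 1)) / n ** (1.0 / (m + 1))
    print(round(scale * sum(xs) / len(xs), 3))
```

The empirical mean of the rescaled value should settle near E[Beta(2,1)] = 2/3; the population sampling only approximates the exact law, so agreement at moderate n is rough.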

2010 Mathematics Subject Classification. Primary: 60F05, 60K35; Secondary: 65M12, 35F21, 35F25.

Key words and phrases. recursive distributional equations, monotone finite difference schemes, monotone couplings.


1.2. Proof technique. Let B be Beta((m+1)/m, 1)-distributed. Our approach to establishing (1.2) is to work directly with the cumulative distribution function (CDF) of the rescaled random variable. In particular, we show that as n → ∞, the CDF of n^{−1/(m+1)}X_n converges to the CDF of (m+1)(q/m^m)^{1/(m+1)} B, which is

\[
F(x) =
\begin{cases}
0 & \text{if } x \le 0,\\[2pt]
\dfrac{m}{q^{1/m}(m+1)^{\frac{m+1}{m}}}\, x^{\frac{m+1}{m}} & \text{if } 0 \le x \le (m+1)\big(\tfrac{q}{m^m}\big)^{1/(m+1)},\\[2pt]
1 & \text{otherwise.}
\end{cases}\tag{1.3}
\]

Our approach is based on the observation that the CDF of X_n in fact satisfies a finite-difference equation which approximates a first-order Hamilton-Jacobi equation. Note that if F^n_k := P(X_n < k), then since the steps D_n are {0,1}-valued, we have

\[
\begin{aligned}
F^{n+1}_k &= P(X_n < k-1) + P(X_n = k-1,\ X_{n+1} = k-1)\\
&= F^n_{k-1} + P(X_n = k-1) - P(X_n = k-1,\ X_{n+1} \neq k-1)\\
&= F^n_{k-1} + (F^n_k - F^n_{k-1}) - P(X_n = k-1)\,P(D_n = 1)\prod_{i=1}^{m} P(\tilde X^i_n = k-1)\\
&= F^n_k - q\,(F^n_k - F^n_{k-1})^{m+1},
\end{aligned}\tag{1.4}
\]

and we may rewrite the final identity as

\[
F^{n+1}_k - F^n_k = -q\,(F^n_k - F^n_{k-1})^{m+1} = -q\,\big|F^n_k - F^n_{k-1}\big|^{m+1}. \tag{1.5}
\]

The introduction of |·| in (1.5) is allowed since F^n_k ≥ F^n_{k−1}. We write the recursion in the form of (1.5) because this makes (1.5) a discrete analogue (or finite difference scheme) of the first-order partial differential equation (PDE)

\[
u_t + q|u_x|^{m+1} = 0 \quad \text{in } \mathbb R \times (0,\infty). \tag{1.6}
\]

In a nutshell, our approach to proving Theorem 1.1 is to exploit this connection, showing that solutions of (1.5) closely approximate solutions of (1.6) after an appropriate rescaling, when n is large. The remainder of the introduction is principally dedicated to elaborating on the details of this approach and the challenges to carrying it out.
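The recursion (1.5) evolves the exact law of X_n, so it can be iterated directly; a minimal sketch (lattice size, horizon, and tolerance are our own choices) started from X_0 = 0 and compared against the limiting CDF F of (1.3):

```python
def evolve_cdf(m, q, n_steps, kmax):
    """Iterate F^{n+1}_k = F^n_k - q (F^n_k - F^n_{k-1})^{m+1}, equation (1.5),
    from X_0 = 0, i.e. F^0_k = P(X_0 < k) = 1 for k >= 1 and 0 otherwise.
    Returns [F^n_0, ..., F^n_kmax]."""
    F = [0.0] + [1.0] * kmax  # F[k] = P(X_n < k)
    for _ in range(n_steps):
        # F^n_0 = 0 stays fixed, since the process never goes below its start
        F = [F[0]] + [F[k] - q * (F[k] - F[k - 1]) ** (m + 1)
                      for k in range(1, kmax + 1)]
    return F

def limit_cdf(x, m, q):
    """The limit (1.3): CDF of (m+1) (q/m^m)^{1/(m+1)} * Beta((m+1)/m, 1)."""
    right = (m + 1) * (q / m ** m) ** (1.0 / (m + 1))
    if x <= 0:
        return 0.0
    if x >= right:
        return 1.0
    return m / (q ** (1.0 / m) * (m + 1) ** ((m + 1.0) / m)) * x ** ((m + 1.0) / m)

if __name__ == "__main__":
    m, q, n = 1, 0.5, 2000
    F = evolve_cdf(m, q, n, kmax=120)
    err = max(abs(F[k] - limit_cdf(k / n ** (1.0 / (m + 1)), m, q))
              for k in range(121))
    print(err)  # shrinks as n grows
```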

Equation (1.6) is an example of a nonlinear Hamilton-Jacobi equation of the form

\[
u_t + H(u_x) = 0 \quad \text{in } \mathbb R \times (0,\infty),
\]

with the Hamiltonian H : R → R defined by H(p) = q|p|^{m+1}. For general initial data, (1.6) fails to have classical, smooth solutions for all time. The theory of viscosity solutions introduced by Crandall and Lions [6, 7], which are continuous but need not be differentiable, provides a notion of weak solution for such equations. We will hereafter refer to Crandall-Lions viscosity solutions simply as continuous viscosity solutions. We provide an overview of relevant properties of viscosity solutions for Hamilton-Jacobi equations in Appendix A.

While continuous viscosity solutions are perhaps the most well-studied notion of weak solution for PDEs such as (1.6), our goal is to find a function u(x,t) solving (1.6), which is meant to be an n → ∞ analogue of the distribution function

\[
P\Big(\frac{X_{\lfloor tn\rfloor}}{n^{1/(m+1)}} < x\Big).
\]


We note that for any initial distribution µ with µ(Z) = 1, we have

\[
P\Big(\frac{X_0}{n^{1/(m+1)}} < x\Big) \to
\begin{cases}
1 & x > 0,\\
0 & x < 0,
\end{cases}\tag{1.7}
\]

as n → ∞, with the behaviour at x = 0 depending on the distribution µ. This implies that the continuous analogue u(x,0) we seek will necessarily have a discontinuity at x = 0. Such a discontinuity in the initial condition puts us outside of the framework of continuous viscosity solutions.

There have been several attempts to define an appropriate notion of discontinuous viscosity solutions (see [5] for some references). One notion, introduced by Barron and Jensen [4], is defined for convex Hamilton-Jacobi equations. This is our situation; the Hamiltonian H(p) = q|p|^{m+1} in (1.6) is a convex function. (It is for this reason that we introduced absolute values in (1.5).) The Barron-Jensen theory applies exclusively to lower semicontinuous functions, which is why we choose to define F^n_k = P(X_n < k), instead of the more traditional definition of a CDF given by P(X_n ≤ k). Of course, this makes practically no difference to the probabilistic analysis. We will refer to Barron-Jensen viscosity solutions as lsc (lower semicontinuous) viscosity solutions (see the Appendix for more details about the properties of these solutions which we make use of). Throughout this paper, every continuous (resp. lsc) viscosity solution we consider is in fact the unique continuous (resp. lsc) solution satisfying the PDE in question (see Theorem A.2 and Theorem A.7). Moreover, the two notions coincide for continuous functions. In particular, any lsc viscosity solution which is a continuous function is also a continuous viscosity solution (see Theorem A.6).

It turns out that the function F introduced in (1.3) is nothing more than F(x) = u(x,1), where u(x,t) is the lsc viscosity solution of the initial value problem

\[
\begin{cases}
u_t + q|u_x|^{m+1} = 0 & \text{in } \mathbb R \times (0,\infty),\\
u(x,0) = \mathbf 1_{\{x>0\}} & \text{in } \mathbb R.
\end{cases}\tag{1.8}
\]

The lsc viscosity solution of (1.8) can be explicitly computed. In fact, for future use, we will compute the lsc viscosity solution of the more general PDE

\[
\begin{cases}
u^{a,b}_t + q\,|u^{a,b}_x|^{m+1} = 0 & \text{in } \mathbb R \times (0,\infty),\\
u^{a,b}(x,0) = a\,\mathbf 1_{\{x\le 0\}} + b\,\mathbf 1_{\{x>0\}} & \text{in } \mathbb R,
\end{cases}\tag{1.9}
\]

for 0 ≤ a < b ≤ 1. Since (1.9) is a convex Hamilton-Jacobi equation, Theorem A.7 in the Appendix guarantees that the corresponding lsc viscosity solution is given by the Hopf-Lax formula from control theory,

\[
u^{a,b}(x,t) = \inf_{y\in\mathbb R}\Big\{u^{a,b}(y,0) + t\,H^*\Big(\frac{x-y}{t}\Big)\Big\}, \tag{1.10}
\]

where H^* is the Legendre transform of H, defined by

\[
H^*(p) = \sup_{\alpha\in\mathbb R}\big(\alpha p - H(\alpha)\big).
\]

For the Hamiltonian H(p) = q|p|^{m+1}, as H is superlinear (lim_{|p|→∞} H(p)/|p| = +∞) and u^{a,b}(x,0) is lower semicontinuous, the infimum in (1.10) is achieved. We may thus


compute explicitly that for this Hamiltonian,

\[
H^*(p) = \sup_{\alpha\in\mathbb R}\big(\alpha p - q|\alpha|^{m+1}\big)
= \frac{|p|^{\frac{m+1}{m}}}{(q(m+1))^{\frac1m}} - q^{-\frac1m}\Big|\frac{p}{m+1}\Big|^{\frac{m+1}{m}}
= q^{-\frac1m}\,|p|^{\frac{m+1}{m}}\,\frac{m}{(m+1)^{\frac{m+1}{m}}}. \tag{1.11}
\]
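The closed form (1.11) can be sanity-checked against a brute-force maximization of α ↦ αp − q|α|^{m+1} on a grid (the grid bounds and tolerance are our own choices):

```python
def hstar_closed(p, q, m):
    """Closed-form Legendre transform from (1.11)."""
    return q ** (-1.0 / m) * abs(p) ** ((m + 1.0) / m) * m / (m + 1) ** ((m + 1.0) / m)

def hstar_grid(p, q, m, lo=-20.0, hi=20.0, steps=200001):
    """Brute-force sup over alpha of alpha*p - q*|alpha|^{m+1} on a grid."""
    h = (hi - lo) / (steps - 1)
    return max((lo + i * h) * p - q * abs(lo + i * h) ** (m + 1)
               for i in range(steps))
```

For H(p) = q|p|^{m+1} the maximizer α^* = (|p|/(q(m+1)))^{1/m} lies well inside the grid for moderate p, so the two agree to within the grid resolution.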

It follows that the lsc viscosity solution u^{a,b} of (1.9) is given by

\[
\begin{aligned}
u^{a,b}(x,t) &= \inf_{y\in\mathbb R}\Big\{a\mathbf 1_{\{y\le 0\}} + b\mathbf 1_{\{y>0\}} + t\,H^*\Big(\frac{x-y}{t}\Big)\Big\}\\
&= \inf_{y\in\mathbb R}\Big\{a\mathbf 1_{\{y\le 0\}} + b\mathbf 1_{\{y>0\}} + t\,q^{-\frac1m}\Big|\frac{x-y}{t}\Big|^{\frac{m+1}{m}}\frac{m}{(m+1)^{\frac{m+1}{m}}}\Big\}\\
&= \inf_{y\in\mathbb R}\Big\{a\mathbf 1_{\{y\le 0\}} + b\mathbf 1_{\{y>0\}} + \frac{1}{t^{\frac1m}}\,q^{-\frac1m}\,\frac{m}{(m+1)^{\frac{m+1}{m}}}\,|x-y|^{\frac{m+1}{m}}\Big\}.
\end{aligned}
\]

A straightforward analysis yields that the preceding infimum is achieved at

\[
y =
\begin{cases}
0 & \text{if } 0 \le \big(\tfrac{x^{m+1}}{t}\big)^{\frac1m} \le (b-a)\,q^{\frac1m}\,\dfrac{(m+1)^{\frac{m+1}{m}}}{m},\\[4pt]
x & \text{otherwise.}
\end{cases}
\]

This implies that

\[
u^{a,b}(x,t) =
\begin{cases}
a & \text{if } x \le 0,\\[2pt]
a + \dfrac{m}{q^{\frac1m}(m+1)^{\frac{m+1}{m}}}\Big(\dfrac{x^{m+1}}{t}\Big)^{\frac1m} & \text{if } 0 \le \big(\tfrac{x^{m+1}}{t}\big)^{\frac1m} \le (b-a)\,q^{\frac1m}\,\dfrac{(m+1)^{\frac{m+1}{m}}}{m},\\[4pt]
b & \text{otherwise.}
\end{cases}\tag{1.12}
\]

In the case when a = 0 and b = 1 (so for u solving (1.8)), we may rewrite this as

\[
u(x,t) =
\begin{cases}
0 & \text{if } x \le 0,\\[2pt]
\dfrac{m}{q^{\frac1m}(m+1)^{\frac{m+1}{m}}}\Big(\dfrac{x^{m+1}}{t}\Big)^{\frac1m} & \text{if } 0 \le x \le (m+1)\big(\tfrac{qt}{m^m}\big)^{1/(m+1)},\\[2pt]
1 & \text{otherwise,}
\end{cases}\tag{1.13}
\]
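As a check on the algebra, the Hopf–Lax infimum (1.10) with datum 1_{y>0} can be minimized numerically over y and compared with the closed form (1.13); the grid ranges and tolerance below are our own choices.

```python
def u_closed(x, t, q, m):
    """Closed-form lsc viscosity solution (1.13) of (1.8)."""
    right = (m + 1) * (q * t / m ** m) ** (1.0 / (m + 1))
    if x <= 0:
        return 0.0
    if x >= right:
        return 1.0
    return m / (q ** (1.0 / m) * (m + 1) ** ((m + 1.0) / m)) * (x ** (m + 1) / t) ** (1.0 / m)

def u_hopf_lax(x, t, q, m, ylo=-5.0, yhi=5.0, steps=20001):
    """Brute-force u(x,t) = inf_y { 1_{y>0} + t H*((x - y)/t) }, using the
    closed-form Legendre transform (1.11)."""
    cstar = q ** (-1.0 / m) * m / (m + 1) ** ((m + 1.0) / m)
    h = (yhi - ylo) / (steps - 1)
    best = float("inf")
    for i in range(steps):
        y = ylo + i * h
        val = (1.0 if y > 0 else 0.0) + t * cstar * abs((x - y) / t) ** ((m + 1.0) / m)
        best = min(best, val)
    return best
```

The two agree up to grid resolution; in particular, both vanish for x ≤ 0 and equal 1 beyond the right edge of the support in (1.13).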

which agrees with the rescaled Beta CDF given in (1.3) when t = 1.

With regards to demonstrating the convergence of the finite difference scheme (1.5) to solutions of (1.6), we begin by recalling a robust result of Crandall and Lions [8]. In [8], the authors identify sufficient conditions for functions defined by finite difference schemes on a space-time mesh ∆x Z × ∆t N to converge to the continuous viscosity solution of the corresponding Hamilton-Jacobi equation (such as (1.6)). Their general result is stated as Theorem 2.3, below. Upon an appropriate scaling, we may convert (1.5) to a finite difference relation on ∆x Z × ∆t N. Theorem 2.3 implies that, if (1.5) satisfies a monotonicity condition (see Definition 2.2) and F^0_k := u_0(k∆x) is the discretization of a Lipschitz continuous function u_0 on the mesh ∆x Z, then for all sufficiently small ∆x, the values F^N_k defined by the finite difference scheme are uniformly close to solutions u(k∆x, N∆t) of the PDE with u(x,0) = u_0(x), for N∆t lying in any compact time interval [0,T]. The Crandall–Lions theory, however, relies upon the initial data being Lipschitz continuous, as well as using the theory of continuous viscosity solutions. Since we aim to show that the CDFs of the rescaled CM(m,q,µ) random variables (X_n n^{−1/(m+1)}, n ≥ 0) converge to the lsc viscosity solution of (1.8), this precludes a direct application of the results of [8] to prove Theorem 1.1. Furthermore, to the best of our knowledge, no numerical approximation results analogous to those of [8] have been proved for lsc viscosity solutions.

Probabilistically, the Lipschitz continuity required by the Crandall-Lions theory is also an issue: it means that the CDF of X_0/n^{1/(m+1)} should be a discretization of a Lipschitz function, with Lipschitz constant independent of n; but for a fixed initial distribution, this is impossible (recall (1.7)). To obtain such Lipschitz continuity, the Crandall-Lions theory thus requires the initial condition for the discrete process to depend on the mesh size, which probabilistically translates to requiring the initial distribution of the cooperative motion to depend on the target time n at which we wish to observe the process.

In order to make use of the results of [8] in our setting, we use further properties of the probabilistic model in order to demonstrate convergence to the lsc viscosity solution (which corresponds to the Beta-distributed limit in Theorem 1.1). In particular, we prove a discrete stochastic monotonicity result, Lemma 3.2, which allows us to couple the process started from different initial distributions. This coupling is surprisingly delicate; it is not the case that the cooperative motion evolution preserves stochastic ordering for arbitrary initial distributions. However, we prove that it preserves stochastic ordering whenever the initial distribution is not too singular (i.e. when all atoms satisfy a quantitative upper bound, depending on q and m); see Proposition 3.1. Having established this allows us to use the results of Crandall and Lions [8] to prove convergence to an lsc viscosity solution. We can then stochastically sandwich the evolution started from any initial conditions using Lipschitz-continuous (n-dependent) initial conditions, up to an error term which can be made arbitrarily small (after rescaling by n^{1/(m+1)}). This allows us to demonstrate the convergence in (1.2) for sufficiently non-singular initial distributions. We then conclude by showing that any initial distribution “relaxes” to a sufficiently non-singular distribution in a bounded number of steps.

We mention that a recurrence similar to (1.5) can be written for the probability mass function p^n_k = P(X_n = k):

\[
p^{n+1}_k - p^n_k = -q\big((p^n_k)^{m+1} - (p^n_{k-1})^{m+1}\big). \tag{1.14}
\]

This recurrence can be interpreted as a discretization of the scalar conservation law

\[
v_t = -q\,(v^{m+1})_x. \tag{1.15}
\]

Indeed, this connection was observed in [1] in the special case when m = 1, and the proof in [1] of the m = 1 case of Theorem 1.1 relied upon similar numerics/PDE L^1 convergence results for finite difference schemes of scalar conservation laws. In particular, rescaled solutions of (1.14) converge in L^1 to the unique entropy solutions of (1.15). From the theory of PDEs, it is well-known that in the one-dimensional setting, entropy solutions of (1.15) correspond precisely to derivatives of viscosity solutions of (1.6). This motivated our approach of working directly with the CDFs in this paper, and using viscosity solutions methods in this setting. The advantages of working with viscosity solutions include (a) the fact that the solution theory, at least as it relates to such probabilistic models, is better developed for viscosity solutions than for the corresponding entropy solutions, and (b) the fact that working in the “integrated” setting gives the solutions greater regularity, which makes the resulting proofs more direct.
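Since p^n_k = F^n_{k+1} − F^n_k, the recursions (1.14) and (1.5) must generate consistent fields; this is easy to confirm numerically (the parameters below are our own choices):

```python
def step_cdf(F, q, m):
    # one step of (1.5): F^{n+1}_k = F^n_k - q (F^n_k - F^n_{k-1})^{m+1},
    # with the leftmost value frozen (no mass below the starting point)
    return [F[0]] + [F[k] - q * (F[k] - F[k - 1]) ** (m + 1)
                     for k in range(1, len(F))]

def step_pmf(p, q, m):
    # one step of (1.14): p^{n+1}_k = p^n_k - q ((p^n_k)^{m+1} - (p^n_{k-1})^{m+1}),
    # with p^n_{-1} = 0 at the left boundary
    return [p[0] - q * p[0] ** (m + 1)] + \
           [p[k] - q * (p[k] ** (m + 1) - p[k - 1] ** (m + 1))
            for k in range(1, len(p))]

if __name__ == "__main__":
    # start from X_0 = 0: F_k = 1{k >= 1}, p_k = 1{k = 0}
    F = [0.0] + [1.0] * 60
    p = [1.0] + [0.0] * 59
    for _ in range(50):
        F, p = step_cdf(F, 0.4, 2), step_pmf(p, 0.4, 2)
    print(max(abs(p[k] - (F[k + 1] - F[k])) for k in range(60)))
```

The two evolutions agree to floating-point accuracy, reflecting the exact telescoping between (1.5) and (1.14).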

The rest of the paper proceeds as follows. In Section 2, we review the results of Crandall and Lions [8] and use them to demonstrate convergence of CDFs of a rescaled CM(m,q,µ) process with a “diffuse” initial condition, which approximates a Lipschitz continuous function. In Section 3, we show convergence of CDFs of a rescaled CM(m,q,µ) process with initial distribution µ which has no overly large atoms in its support. In Section 4, we remove this hypothesis on the size of the atoms of µ, and complete the proof of Theorem 1.1. Section 5 concerns the limitations of the approach taken in this paper, and includes Theorem 5.4, which presents a provable obstacle to applying our methodology to establish convergence of cooperative motion-type processes with non-Bernoulli step sizes. This section also presents Theorem 5.1, which shows that when the step size is an integer multiple of a Bernoulli, the resulting lattice effects lead to limits which are mixtures of Beta distributions. Finally, Appendix A provides an overview of continuous and lsc viscosity solutions, and describes several important properties of such solutions that we use throughout the paper.

1.3. Notation. Before proceeding, we introduce some terminological conventions. Given a random variable X, we define the CDF F_X : R → [0,1] of X by F_X(x) = P(X < x). As mentioned in the introduction, we use this definition rather than the standard F_X(x) = P(X ≤ x) to make it easier to appeal to the relevant PDE theory, which has been developed for lower semicontinuous functions.

We say a function F : R → [0,1] is a CDF if it is the CDF of an R-valued random variable, and that F is an extended CDF if it is the CDF of an extended random variable (i.e. a random variable taking values in R ∪ {±∞}).

For random variables X, Y taking values in R ∪ {±∞}, we write X ⪯ Y and say that Y stochastically dominates X if P(X < x) ≥ P(Y < x) for all x ∈ R.

Let (X_n, n ≥ 0) and (X̄_n, n ≥ 0) be CM(m,q,µ) and CM(m,q,µ̄)-distributed, respectively. Suppose that X_0 ⪯ X̄_0. Then we say that the CM-evolution is stochastically monotone for µ and µ̄ if X_n ⪯ X̄_n for all n ≥ 0. In other words, the CM-evolution is stochastically monotone for µ and µ̄ if it preserves their stochastic ordering in time.

2. Finite Difference Schemes for Diffuse Initial Conditions

As mentioned in Section 1, our approach is to interpret CDFs of the discrete random variables (X_n, n ≥ 0) as solutions of a finite difference scheme. As before, fix m ∈ N with m ≥ 1, q ∈ (0,1), and a probability distribution µ supported on Z ∪ {−∞,∞}. Let (X_n, n ≥ 0) be CM(m,q,µ)-distributed, and for k ∈ Z write F^n_k = F^n_k(µ) = P(X_n < k); in particular, F^0_k = µ[−∞, k). (We suppress the dependence on m and q as they are fixed throughout, and also suppress the dependence on µ whenever possible.) Then (F^n_k)_{k∈Z,n∈N} is defined by

\[
\begin{cases}
F^{n+1}_k - F^n_k = -q\,(F^n_k - F^n_{k-1})^{m+1} & n \ge 0,\ k \in \mathbb Z,\\
F^0_k = \mu[-\infty, k) & k \in \mathbb Z.
\end{cases}\tag{2.1}
\]

Since F^n_k is nondecreasing in k for all n ∈ N, (2.1) can be rewritten as

\[
\begin{cases}
F^{n+1}_k - F^n_k = -q\,\big|F^n_k - F^n_{k-1}\big|^{m+1} & n \ge 0,\ k \in \mathbb Z,\\
F^0_k = \mu[-\infty, k) & k \in \mathbb Z,
\end{cases}\tag{2.2}
\]

and the function defined by (2.2) is identical to the function defined by (2.1). We will use (2.1) and (2.2) interchangeably, and will also use the fact that F^n_k is nondecreasing in k, for all n ∈ N, frequently in what follows.

The main result of this section is the following proposition, which states thatsolutions of the recurrence relation from (2.2), with nondecreasing, Lipschitz initialdata converge to solutions of the appropriate Hamilton-Jacobi equation.


Proposition 2.1. Let u_0 be a Lipschitz-continuous extended CDF with Lipschitz constant K. Fix N ∈ N and define a probability distribution µ_N on Z ∪ {−∞,∞} by

\[
\mu_N[-\infty, k) := u_0(kN^{-1/(m+1)})
\]

for k ∈ Z. Let (X_n, n ≥ 0) be CM(m,q,µ_N)-distributed, and let F^n_k = F^n_k(µ_N) = P(X_n < k). Finally, fix T > 0. Then there exist N_0 = N_0(q,m,K) and c = c(K,m,q,T) such that if N ≥ N_0,

\[
\sup_{0\le t\le T}\ \sup_{k\in\mathbb Z}\Big|F^{\lfloor Nt\rfloor}_k - u\Big(\frac{k}{N^{1/(m+1)}},\, t\Big)\Big| \le \frac{c}{N^{1/2}}, \tag{2.3}
\]

where u is the continuous viscosity solution of

\[
\begin{cases}
u_t + q|u_x|^{m+1} = 0 & \text{in } \mathbb R \times (0,\infty),\\
u(x,0) = u_0(x) & \text{in } \mathbb R.
\end{cases}\tag{2.4}
\]

It follows that u is an extended CDF and that

\[
\sup_{x\in\mathbb R}\Big|P\Big(\frac{X_N}{N^{1/(m+1)}} < x\Big) - u(x,1)\Big| \le \frac{c}{N^{1/2}}. \tag{2.5}
\]

In order to prove this proposition, we require the framework of monotone finite difference schemes for Hamilton-Jacobi equations. We next introduce this framework, and relate it to the evolution of the CDF of cooperative motion.

We may imagine numerically approximating the solution of a Hamilton-Jacobi equation of the form

\[
\begin{cases}
u_t + H(u_x) = 0 & \text{in } \mathbb R \times (0,\infty),\\
u(x,0) = u_0(x) & \text{in } \mathbb R
\end{cases}\tag{2.6}
\]

as follows. Fix temporal and spatial mesh sizes (∆t and ∆x, respectively). Set U^0_k = u_0(k∆x) for k ∈ Z, and for n ≥ 0 define U^{n+1}_k by

\[
U^{n+1}_k = G(U^n_k, U^n_{k-1}) = U^n_k - \Delta t\, H\Big(\frac{U^n_k - U^n_{k-1}}{\Delta x}\Big), \tag{2.7}
\]

so G(y,z) = G_∆(y,z) = y − ∆t H((y−z)/∆x), where ∆ = (∆x, ∆t). We may use (2.7) to define a rescaled field of values

\[
u^\Delta : \Delta x\,\mathbb Z \times \Delta t\,\mathbb N \to \mathbb R
\]

by setting u^∆(k∆x, n∆t) := U^n_k. With this definition, (2.7) is equivalent to the statement that

\[
\frac{u^\Delta(k\Delta x, n\Delta t + \Delta t) - u^\Delta(k\Delta x, n\Delta t)}{\Delta t} + H\Big(\frac{u^\Delta(k\Delta x, n\Delta t) - u^\Delta(k\Delta x - \Delta x, n\Delta t)}{\Delta x}\Big) = 0.
\]

This indeed looks, formally, like a discretization of (2.6) on the space-time mesh ∆x Z × ∆t N. We refer to (2.7) as a finite difference scheme for the initial value problem (2.6). It turns out that, under suitable regularity assumptions on the initial condition u_0 and the Hamiltonian H, the sufficient conditions on (2.7) for u^∆, or equivalently (U^n_k)_{k∈Z,n∈N}, to well-approximate u as ∆t and ∆x → 0 are consistency and monotonicity. The consistency condition states that when G is written in differenced form, i.e.

\[
G(U^n_k, U^n_{k-1}) = U^n_k - \Delta t\, g\Big(\frac{U^n_k - U^n_{k-1}}{\Delta x}\Big)
\]

for some function g : R → R, then g(p) = H(p) for all p ∈ R. This is trivially satisfied in our setting by the scheme defined in (2.7).

The more subtle condition is monotonicity, which we next define.


Definition 2.2. A scheme of the form (2.7) is monotone on [λ,Λ] ⊆ R if G(U^n_k, U^n_{k−1}) is a nondecreasing function of each argument so long as

\[
\lambda \le (\Delta x)^{-1}\,(U^n_k - U^n_{k-1}) \le \Lambda. \tag{2.8}
\]

We now state the main result from [8], specialized to the one-dimensional setting of the current paper, on the quality of approximation provided by monotone finite difference schemes for Hamilton-Jacobi equations.

Theorem 2.3 ([8, Theorem 1]). Let u : R × (0,∞) → R be the continuous viscosity solution of

\[
\begin{cases}
u_t + H(u_x) = 0 & \text{in } \mathbb R \times (0,\infty),\\
u(x,0) = u_0(x) & \text{in } \mathbb R,
\end{cases}\tag{2.9}
\]

where H : R → R is continuous and u_0 is bounded and Lipschitz continuous with Lipschitz constant K. Fix ∆x > 0 and ∆t > 0, let U^0_k := u_0(k∆x), and define U^n_k by a general scheme of the form (2.7). If (2.7) is consistent and monotone on [−(K+1), K+1], then for any T > 0, there exists c, depending on sup|u_0|, K, H, and T, so that

\[
\sup_{n\in\mathbb N,\ n\Delta t\in[0,T]}\ \sup_{k\in\mathbb Z}\big|U^n_k - u(k\Delta x, n\Delta t)\big| \le c\sqrt{\Delta t}. \tag{2.10}
\]

Before connecting Theorem 2.3 to cooperative motion, it is instructive to further discuss the meaning and value of monotonicity in this setting. (The following discussion is inspired by the proof of [8, Proposition 3.1].)

Fix K > 0 and two sets of initial conditions (U^0_k)_{k∈Z} and (Ū^0_k)_{k∈Z} with U^0_k ≤ Ū^0_k, and set U^{n+1}_k = G(U^n_k, U^n_{k−1}) and Ū^{n+1}_k = G(Ū^n_k, Ū^n_{k−1}) for n ≥ 0 as in (2.7). Suppose that G is monotone on [−K,K], and that

\[
\frac{|U^0_k - U^0_{k-1}|}{\Delta x} \le K \quad\text{and}\quad \frac{|\bar U^0_k - \bar U^0_{k-1}|}{\Delta x} \le K \tag{2.11}
\]

for all k. Then monotonicity implies that

\[
U^1_k = G(U^0_k, U^0_{k-1}) \le G(\bar U^0_k, \bar U^0_{k-1}) = \bar U^1_k. \tag{2.12}
\]

Next, let (W^0_k)_{k∈Z} be any initial condition with sup_k ∆x^{−1}|W^0_k − W^0_{k−1}| ≤ K, and set W^{n+1}_k = G(W^n_k, W^n_{k−1}) for n ≥ 0 and k ∈ Z. Write λ = sup_k |W^0_k − U^0_k|. Let V^0_k = U^0_k + λ, and set V^1_k = G(V^0_k, V^0_{k−1}) = U^1_k + λ. By the choice of λ, we have W^0_k ≤ V^0_k. Then monotonicity gives that

\[
W^1_k \le V^1_k = U^1_k + \lambda,
\]

and a symmetric argument gives that W^1_k ≥ U^1_k − λ, so

\[
\sup_k\big|W^1_k - U^1_k\big| \le \lambda.
\]

We apply this with the specific choice of initial condition W^0_k = U^0_{k−1}. Since W^1_k = U^1_{k−1}, the preceding bound gives

\[
\sup_k\big|U^1_k - U^1_{k-1}\big| = \sup_k\big|U^1_k - W^1_k\big| \le \lambda = \sup_k\big|U^0_k - W^0_k\big| = \sup_k\big|U^0_k - U^0_{k-1}\big| \le K\Delta x.
\]

A similar analysis allows us to conclude that

\[
\sup_k\big|\bar U^1_k - \bar U^1_{k-1}\big| \le K\Delta x.
\]

By the two preceding bounds and (2.12), it follows by induction that U^n_k ≤ Ū^n_k for all n ∈ N and k ∈ Z, and that sup_k |U^n_k − U^n_{k−1}| ≤ K∆x for all n. In short, equation (2.8), which can be viewed as a type of discrete Lipschitz bound on U^n_k, allows one to show that an order relation between two initial conditions persists for all positive times.

Remark 2.4. Whenever the initial condition (U^0_k)_{k∈Z} is nondecreasing in k, the above argument shows that if G is monotone on [0,K] and sup_k(U^0_k − U^0_{k−1}) ≤ K∆x, then (U^n_k)_{k∈Z} is nondecreasing in k and sup_k(U^n_k − U^n_{k−1}) ≤ K∆x for all n ∈ N. It follows from this that if u_0 is nondecreasing, then in order to verify the condition of Theorem 2.3 one need only check that (2.7) is monotone on [0, K+1].

We now specialize the above discussion to the specific setting of our paper, so again let F^n_k = P(X_n < k) where (X_n, n ≥ 0) is CM(m,q,µ)-distributed. Given spatial and temporal mesh sizes (∆x and ∆t, respectively), we may use the field of values (F^n_k)_{k∈Z,n∈N} to define a rescaled field f = f_{∆t,∆x} : ∆x Z × ∆t N → R by setting f(k∆x, n∆t) := F^n_k.

In order to identify an appropriate scaling relationship between ∆x and ∆t, we seek a continuous space-time scaling which preserves the PDE. In particular, if u solves (1.8), then for any ρ ∈ R, u_ρ(x,t) := u(ρx, ρ^{m+1}t) also solves (1.8). This suggests that the temporal and spatial mesh sizes should satisfy the relation

\[
(\Delta x)^{m+1} = \Delta t. \tag{2.13}
\]

With this scaling relation, we may rewrite (2.1) as

\[
F^{n+1}_k = F^n_k - q\,\Delta t\,\Big(\frac{F^n_k - F^n_{k-1}}{\Delta x}\Big)^{m+1}, \tag{2.14}
\]

which, since F^n_k is non-decreasing in k, we may re-express as

\[
F^{n+1}_k = F^n_k - q\,\Delta t\,\Big|\frac{F^n_k - F^n_{k-1}}{\Delta x}\Big|^{m+1}. \tag{2.15}
\]

This equation has precisely the form of (2.7) with G(y,z) = y − q∆t|(y−z)/∆x|^{m+1} = y − q|y−z|^{m+1}, the second equality holding due to (2.13). The fact that in this setting G does not in fact depend on ∆x and ∆t means that to verify monotonicity, one may assume that ∆x takes any fixed positive value – say ∆x = 1.
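With ∆x = 1, the scheme map is G(y,z) = y − q|y−z|^{m+1}, and monotonicity in the sense of Definition 2.2 can be probed directly: ∂G/∂y = 1 − q(m+1)(y−z)^m changes sign at y − z = (q(m+1))^{−1/m}, the threshold that reappears as p^* in Section 3. A small numerical probe (the parameter values are our own choices):

```python
def G(y, z, q, m):
    # the mesh-free scheme map G(y, z) = y - q |y - z|^{m+1}
    return y - q * abs(y - z) ** (m + 1)

q, m = 0.9, 2
thresh = (1.0 / (q * (m + 1))) ** (1.0 / m)  # where dG/dy vanishes, about 0.609
eps = 1e-6

# inside the region 0 <= y - z <= thresh, G is nondecreasing in y ...
assert G(0.3 + eps, 0.0, q, m) >= G(0.3, 0.0, q, m)
# ... and nondecreasing in z as well
assert G(0.3, 0.0 + eps, q, m) >= G(0.3, 0.0, q, m)
# beyond the threshold, monotonicity in y fails
assert G(0.8 + eps, 0.0, q, m) < G(0.8, 0.0, q, m)
```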

Now fix probability distributions µ, µ̄ on Z ∪ {±∞} with µ ⪯ µ̄, let (X_n, n ≥ 0) and (X̄_n, n ≥ 0) be CM(m,q,µ) and CM(m,q,µ̄)-distributed, respectively, and set F^n_k = P(X_n < k) and F̄^n_k = P(X̄_n < k), so (F^n_k)_{k∈Z,n∈N} and (F̄^n_k)_{k∈Z,n∈N} both satisfy (2.15) but with different initial conditions. The fact that µ ⪯ µ̄ means that F̄^0_k ≤ F^0_k. For a given Λ > 0, if G is monotone on [0,Λ] and sup_{k∈Z}(F^0_k − F^0_{k−1}) ≤ Λ and sup_{k∈Z}(F̄^0_k − F̄^0_{k−1}) ≤ Λ, then (2.12) gives that F̄^1_k ≤ F^1_k, and inductively that F̄^n_k ≤ F^n_k for all n. In other words, we can think of monotonicity as a sufficient condition which guarantees that the cooperative motion will preserve stochastic ordering in time. In Section 3, we will use a variation of this approach to identify the value of Λ, and thereby a sufficient condition, which guarantees stochastic monotonicity.

Proof of Proposition 2.1. Let N_0 := ([q(m+1)]^{1/m}(K+1))^{m+1} and fix N ≥ N_0. We choose ∆x = N^{−1/(m+1)} and ∆t = N^{−1}, so that ∆x and ∆t satisfy (2.13). This implies that F^{(·)}_k(µ_N) is defined by (2.15). The proof relies upon verifying the hypotheses of Theorem 2.3 for U^n_k = F^n_k. As u_0 is nondecreasing, by Remark 2.4, we only need to verify that (2.15) or, equivalently, (2.14) is monotone on [0, K+1]. To verify monotonicity of (2.14), we differentiate

\[
G(F^n_k, F^n_{k-1}) = F^n_k - q\,\Delta t\,\Big(\frac{F^n_k - F^n_{k-1}}{\Delta x}\Big)^{m+1}
\]

in each argument, in the region 0 ≤ (∆x)^{−1}(F^n_k − F^n_{k−1}) ≤ K + 1. Differentiating with respect to F^n_k, we have

\[
1 - q(m+1)\frac{\Delta t}{\Delta x}\Big(\frac{F^n_k - F^n_{k-1}}{\Delta x}\Big)^m \ge 1 - q(m+1)\frac{\Delta t}{\Delta x}(K+1)^m = 1 - q(m+1)(\Delta x)^m(K+1)^m.
\]

As N ≥ N_0, we have that

\[
(\Delta x)^m = N^{-m/(m+1)} \le [q(m+1)]^{-1}(K+1)^{-m},
\]

which implies that G(·, F^n_{k−1}) is nondecreasing. It similarly follows that G(F^n_k, ·) is nondecreasing. This implies that (2.14) is monotone on [0, K+1], so by Theorem 2.3, we then have that for N ≥ N_0, for any T > 0, there is c = c(K,m,q,T) such that

\[
\sup_{0\le j/N\le T}\ \sup_{k\in\mathbb Z}\Big|F^j_k - u\Big(\frac{k}{N^{1/(m+1)}}, \frac{j}{N}\Big)\Big| \le cN^{-\frac12};
\]

recall that ∆t = N^{−1}, so j/N = j∆t. We may rewrite this bound as

\[
\sup_{0\le t\le T}\ \sup_{k\in\mathbb Z}\Big|F^{\lfloor Nt\rfloor}_k - u\Big(\frac{k}{N^{1/(m+1)}}, \frac{\lfloor Nt\rfloor}{N}\Big)\Big| \le cN^{-\frac12}.
\]

By Proposition A.4, the continuous viscosity solution u solving (2.4) is globally Lipschitz continuous in space and time. Therefore,

\[
\begin{aligned}
\sup_{0\le t\le T}\ \sup_{k\in\mathbb Z}\Big|F^{\lfloor Nt\rfloor}_k - u\Big(\frac{k}{N^{1/(m+1)}}, t\Big)\Big|
&\le \sup_{0\le t\le T}\ \sup_{k\in\mathbb Z}\Big|F^{\lfloor Nt\rfloor}_k - u\Big(\frac{k}{N^{1/(m+1)}}, \frac{\lfloor Nt\rfloor}{N}\Big)\Big|\\
&\quad + \sup_{0\le t\le T}\ \sup_{k\in\mathbb Z}\Big|u\Big(\frac{k}{N^{1/(m+1)}}, \frac{\lfloor Nt\rfloor}{N}\Big) - u\Big(\frac{k}{N^{1/(m+1)}}, t\Big)\Big|\\
&\le cN^{-\frac12} + C\sup_{0\le t\le T}\Big|\frac{\lfloor Nt\rfloor}{N} - t\Big| \le cN^{-\frac12},
\end{aligned}
\]

and this yields (2.3); equation (2.5) follows as it is simply a restating of (2.3) in the special case when t = 1. Finally, (2.5) gives that u(x,1) is the pointwise limit of a CDF, so is itself an extended CDF. □
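The convergence in Proposition 2.1 can be sketched numerically with the Lipschitz extended CDF u_0(x) = min(max(x,0),1) (our own choice, with K = 1): run the scheme (2.15) on the mesh ∆x = N^{−1/(m+1)}, ∆t = N^{−1}, and compare step N with the Hopf–Lax solution at t = 1, computed here by brute-force minimization. The mesh, domain, and tolerance below are our own choices.

```python
def scheme_at_time_one(u0, N, q, m, klo, khi):
    """Run (2.15) for N steps with dx = N^{-1/(m+1)}, dt = 1/N, so that
    G(y, z) = y - q |y - z|^{m+1} and step N corresponds to time t = 1."""
    dx = N ** (-1.0 / (m + 1))
    F = [u0(k * dx) for k in range(klo, khi + 1)]
    for _ in range(N):
        # left boundary frozen: u0 is flat far to the left of the origin
        F = [F[0]] + [F[i] - q * abs(F[i] - F[i - 1]) ** (m + 1)
                      for i in range(1, len(F))]
    return dx, F

def hopf_lax(u0, x, t, q, m, ylo=-6.0, yhi=6.0, steps=8001):
    """Hopf-Lax solution of (2.4) via brute-force minimization over y,
    using the closed-form Legendre transform (1.11)."""
    cstar = q ** (-1.0 / m) * m / (m + 1) ** ((m + 1.0) / m)
    h = (yhi - ylo) / (steps - 1)
    return min(u0(ylo + i * h)
               + t * cstar * abs((x - (ylo + i * h)) / t) ** ((m + 1.0) / m)
               for i in range(steps))

if __name__ == "__main__":
    u0 = lambda x: min(max(x, 0.0), 1.0)  # Lipschitz extended CDF, K = 1
    q, m, N, klo, khi = 0.5, 1, 400, -60, 100
    dx, F = scheme_at_time_one(u0, N, q, m, klo, khi)
    err = max(abs(F[i] - hopf_lax(u0, (klo + i) * dx, 1.0, q, m))
              for i in range(20, len(F) - 20))
    print(err)  # of order N^{-1/2}, consistent with (2.3)
```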

3. “Good” singular initial conditions

The convergence results of the previous section require that the finite difference scheme (F^n_k)_{k∈Z,n∈N} begins with an initial condition µ_N which is a discretization of a Lipschitz function at scale ∆x (depending on N). In this section, we build on those convergence results to prove distributional limit theorems for certain fixed (rather than varying in N) initial conditions. Let

\[
p^* = \Big(\frac{1}{q(m+1)}\Big)^{1/m}, \tag{3.1}
\]

and note that p^* > 1/2 since (m+1)^{−1/m} ≥ 1/2 for all m ≥ 1 and q ∈ (0,1). We say that an extended probability distribution µ is p^*-bounded if

\[
\sup_{x\in\mathbb Z\cup\{-\infty,\infty\}} \mu(\{x\}) < p^*.
\]

The goal of this section is to prove the following proposition, which essentially states that Theorem 1.1 holds for p^*-bounded initial conditions.


Proposition 3.1. Let (X_n, n ≥ 0) be CM(m,q,µ)-distributed with µ a probability distribution on Z. If µ is p^*-bounded, then

\[
\lim_{n\to\infty} P\Big(\frac{X_n}{n^{1/(m+1)}} < x\Big) = u(x,1)
\]

uniformly in x, where u(x,t) is given by (1.13).

The proof of Proposition 3.1 relies on comparison between the CM-evolution with p^*-bounded initial conditions to CM-evolutions with Lipschitz continuous initial conditions. To establish the possibility of such comparisons, we prove that the CM-evolution is stochastically monotone on a much broader class of initial conditions than what is covered by Proposition 2.1. (It may be useful to revisit the discussion preceding the proof of Proposition 2.1 at this point.) We first show that for the class of p^*-bounded distributions, stochastic ordering is preserved by one time-step of the CM evolution. We then show that the CDFs at future time steps remain in the family of p^*-bounded distributions. This is exactly the content of the next two lemmas:

Lemma 3.2. Let µ_X and µ_Y be p^*-bounded probability distributions on Z ∪ {−∞,∞}, and let (X_n, n ≥ 0) be CM(m,q,µ_X)-distributed and (Y_n, n ≥ 0) be CM(m,q,µ_Y)-distributed. If Y_0 ⪯ X_0 then Y_1 ⪯ X_1.

Lemma 3.3. Let (X_n, n ≥ 0) be CM(m,q,µ)-distributed and define P(X_n = k) = p^n_k. Then for all j ∈ Z,

\[
p^{n+1}_j \le \max\big(p^n_{j-1},\, p^n_j\big), \tag{3.2}
\]

where there is equality if and only if p^n_{j−1} = p^n_j.

The requirement that µ_X and µ_Y are p^*-bounded in Lemma 3.2 is necessary. To see this, fix non-negative integer random variables X_0 ⪯ Y_0 with P(Y_0 = 0) = p̄ < p = P(X_0 = 0). Then by (1.14),

\[
P(X_1 < 1) = P(X_1 = 0) = p - qp^{m+1}, \qquad
P(Y_1 < 1) = P(Y_1 = 0) = \bar p - q\,\bar p^{\,m+1}.
\]

In order to have X_1 ⪯ Y_1, we thus require that p̄ − qp̄^{m+1} ≤ p − qp^{m+1}, or in other words that p − qp^{m+1} is non-decreasing in p. Differentiating, we see that this is equivalent to requiring that

\[
1 - (m+1)qp^m \ge 0,
\]

which is true precisely when p ≤ p^*. (For later use, write f(p) := p − qp^{m+1}; we will use that f is increasing on [0, p^*] and decreasing on [p^*, 1].)

Proof of Lemma 3.2. Let G(y, z) = y − q|y − z|^{m+1}. Note that for 0 ≤ z < y, G(y, z) is increasing in z, and for 0 ≤ z ≤ y, G is increasing in y provided that

    1 − (m + 1)q(y − z)^m ≥ 0,

or in other words provided that y − z ≤ p∗. Therefore, G is monotone on [0, p∗]. Now write F^0_k = P(X0 < k) = µX[−∞, k) and F̄^0_k = P(Y0 < k) = µY[−∞, k). Since Y0 ⪯ X0, we have that for all k ∈ Z, F^0_k ≤ F̄^0_k, and moreover, F^0_k − F^0_{k−1} ≤ p∗ and F̄^0_k − F̄^0_{k−1} ≤ p∗. Since G is monotone on [0, p∗] it follows that

    P(X1 < k) = F^1_k = G(F^0_k, F^0_{k−1}) ≤ G(F̄^0_k, F̄^0_{k−1}) = F̄^1_k = P(Y1 < k),

so Y1 ⪯ X1, as required. □
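As a quick numerical companion to Lemma 3.2 (our own sketch; the parameters m = 2, q = 0.5, and the discretized Lipschitz CDFs below are illustrative choices), one can check that one step of the map G(y, z) = y − q(y − z)^{m+1} preserves pointwise ordering of p∗-bounded CDFs, while f(p) = p − qp^{m+1} reverses order beyond p∗, as in the counterexample above.

```python
# One step of the CDF map preserves ordering for p*-bounded CDFs (Lemma 3.2);
# beyond p*, f(p) = p - q p^{m+1} is decreasing, so ordering can reverse.
m, q = 2, 0.5
p_star = ((m + 1) * q) ** (-1.0 / m)   # 1.5 ** -0.5, roughly 0.816

def step(F):
    # advance the list of CDF values F_k = P(X < k) by one time step
    return [F[0]] + [F[k] - q * (F[k] - F[k - 1]) ** (m + 1)
                     for k in range(1, len(F))]

h = 0.2                                 # slope h <= p*: both CDFs p*-bounded
F    = [min(1.0, max(0.0, k * h)) for k in range(-2, 10)]
Fbar = [min(1.0, max(0.0, (k + 1) * h)) for k in range(-2, 10)]   # F shifted
F1, Fbar1 = step(F), step(Fbar)
assert all(a <= b for a, b in zip(F1, Fbar1))   # pointwise ordering preserved

f = lambda p: p - q * p ** (m + 1)
assert f(0.9) > f(0.95)                 # 0.9 < 0.95, but f reverses them
```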

We next introduce an additional technical lemma needed in the proof of Lemma 3.3.

12

Lemma 3.4. Let g(x, y) = f(x) − f(y) with f(x) = x − qx^{m+1}. Then g(a, b) > 0 whenever a > b ≥ 0 and a + b ≤ 1. Under the additional constraint a > p∗, we have

    g(a, b) > min(g(p∗, 1 − p∗), 1 − q) > 0.   (3.3)

Proof. First, we note that

    ∂g/∂x (x, y) = 1 − q(m + 1)x^m,   ∂g/∂y (x, y) = −1 + q(m + 1)y^m.

We are concerned with the behavior of the function g in the regions A and B shown in Figure 1. Formally, if C = {(x, y) : 0 ≤ y < x, y ≤ 1 − x}, then A = {(x, y) ∈ C : x ≤ p∗} and B = {(x, y) ∈ C : x > p∗}.

Figure 1. Lemma 3.4 states that g(a, b) is positive for (a, b) ∈ A and is greater than min(g(p∗, 1 − p∗), 1 − q) for (a, b) ∈ B.

If (x, y) ∈ A then since f is increasing on [0, p∗] and in this region 0 ≤ y < x ≤ p∗, it follows that g(x, y) = f(x) − f(y) > 0.

To determine the behavior in region B, notice that in this region, ∂g/∂x < 0 and ∂g/∂y < 0. Therefore,

    inf_{(x,y)∈B} g(x, y) = inf_{x∈[p∗,1]} g(x, 1 − x).

Moreover,

    ∂²/∂x² g(x, 1 − x) = q(m + 1)m((1 − x)^{m−1} − x^{m−1});

for x > 1/2, the difference (1 − x)^{m−1} − x^{m−1} is strictly negative, so g(x, 1 − x) is strictly concave for x ∈ (1/2, 1). Since p∗ > 1/2, we thus have

    inf_{(x,y)∈B} g(x, y) = inf_{x∈{p∗,1}} g(x, 1 − x) > inf_{x∈{1/2,1}} g(x, 1 − x).   (3.4)

Since g(1/2, 1/2) = 0 and g(1, 0) = 1 − q > 0, it follows that g(x, 1 − x) > 0 for all x ∈ (1/2, 1]. Finally, since g is strictly concave on B, the first equality in (3.4) implies that g(x, y) > inf_{x∈{p∗,1}} g(x, 1 − x) for (x, y) ∈ B, which is (3.3). □
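The two claims of Lemma 3.4 can be probed by a brute-force grid search (our own illustration; the choice m = 2, q = 0.5 is arbitrary and this check is not a substitute for the proof).

```python
# Grid check of Lemma 3.4: g(a,b) = f(a) - f(b) is positive on
# C = {0 <= b < a, a + b <= 1}, and on the part of C with a > p* it exceeds
# the lower bound min(g(p*, 1 - p*), 1 - q).
m, q = 2, 0.5
p_star = ((m + 1) * q) ** (-1.0 / m)

def f(x):
    return x - q * x ** (m + 1)

def g(a, b):
    return f(a) - f(b)

bound = min(g(p_star, 1 - p_star), 1 - q)
assert bound > 0

N = 400
for i in range(N + 1):
    a = i / N
    for j in range(i):                  # b = j/N < a
        b = j / N
        if a + b > 1:                   # stay inside the region C
            continue
        assert g(a, b) > 0              # first claim of the lemma
        if a > p_star:
            assert g(a, b) > bound      # second claim, on region B
```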

Equipped with this technical lemma, we can now prove Lemma 3.3.

13

Proof of Lemma 3.3. We prove the lemma in cases. There are three cases to consider: p^n_{j−1} < p^n_j, p^n_{j−1} = p^n_j, and p^n_{j−1} > p^n_j. In each case, we will use the definition of p^{n+1}_j from (1.14):

    p^{n+1}_j = p^n_j − q[(p^n_j)^{m+1} − (p^n_{j−1})^{m+1}].   (3.5)

When p^n_{j−1} < p^n_j, (3.2) reduces to showing that p^{n+1}_j < p^n_j. This is clear from (3.5), since (p^n_j)^{m+1} − (p^n_{j−1})^{m+1} > 0.

Similarly, when p^n_{j−1} = p^n_j, (3.2) reduces to showing that p^{n+1}_j = p^n_j. Again, from (3.5), we see that (p^n_j)^{m+1} − (p^n_{j−1})^{m+1} = 0, which implies the result.

For the final case, when p^n_{j−1} > p^n_j, establishing (3.2) reduces to showing that p^{n+1}_j < p^n_{j−1}. By (3.5), this is equivalent to showing that

    0 < p^n_{j−1} − q(p^n_{j−1})^{m+1} − (p^n_j − q(p^n_j)^{m+1}).

Using again the definition of g(x, y) = f(x) − f(y) as in Lemma 3.4, we see that this is equivalent to showing that

    g(p^n_{j−1}, p^n_j) > 0.

Because p^n_{j−1} + p^n_j ≤ 1 and p^n_{j−1} > p^n_j ≥ 0, Lemma 3.4 exactly yields the result. □
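Lemma 3.3 is equally easy to test numerically. The following sketch (our own; the parameters m = 3, q = 0.7 are arbitrary) applies the recursion (3.5) to random probability mass functions and checks the claimed bound, along with conservation of total mass.

```python
import random

# One step of (3.5): p'_j = p_j - q * (p_j^{m+1} - p_{j-1}^{m+1}).
# Lemma 3.3 says the new mass at j never exceeds max(p_{j-1}, p_j).
m, q = 3, 0.7

def step_pmf(p):
    # p[j] = P(X_n = j); an extra zero is appended so mass can move right
    p = list(p) + [0.0]
    return [p[j] - q * (p[j] ** (m + 1) - (p[j - 1] if j > 0 else 0.0) ** (m + 1))
            for j in range(len(p))]

random.seed(1)
for _ in range(200):
    w = [random.random() for _ in range(10)]
    s = sum(w)
    p = [x / s for x in w]                      # a random pmf on 10 sites
    p1 = step_pmf(p)
    padded = p + [0.0]
    for j in range(len(p1)):
        left = padded[j - 1] if j > 0 else 0.0
        assert p1[j] <= max(left, padded[j]) + 1e-12
    assert abs(sum(p1) - 1.0) < 1e-12           # total mass is conserved
```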

Remark 3.5. Combining Lemma 3.2 and Lemma 3.3, we are able to identify the precise value of Λ which guarantees stochastic monotonicity, as discussed just above the proof of Proposition 2.1. In particular, if F^0_k and F̄^0_k are two CDFs such that F^0_k ≤ F̄^0_k and

    0 ≤ F^0_k − F^0_{k−1} ≤ p∗  and  0 ≤ F̄^0_k − F̄^0_{k−1} ≤ p∗,   (3.6)

then Lemma 3.2 guarantees that F^1_k ≤ F̄^1_k, and Lemma 3.3 guarantees that F^1_k and F̄^1_k both satisfy (3.6). We may then conclude (by induction) that F^n_k ≤ F̄^n_k for all n ∈ N and k ∈ Z, so the CM-evolution is stochastically monotone for the corresponding initial distributions µ and µ̄.

Now we are ready to prove Proposition 3.1. We will do so by relating the process starting from a p∗-bounded initial condition to a sequence of processes which begin from a discretization of a Lipschitz function. We will then be able to use Proposition 2.1 for Lipschitz continuous initial data to yield convergence in this extended setting.

Proof of Proposition 3.1. Fix ε > 0. Let (Xn, n ≥ 0) be CM(m,q,µ)-distributed. Then the collection of values F^n_k = P(Xn < k) satisfies the recursive relationship (2.1) with initial condition F^0_k = P(X0 < k) = µ[−∞, k). We note that the values F^n_k also satisfy (2.15) when ∆x and ∆t are chosen so that (∆x)^{m+1} = ∆t. We enforce this relation between ∆x and ∆t throughout the proof.

We will sandwich F^n_k between two solutions of (2.15) with smoother initial conditions. To this end, define u^{ε,1} as the lsc viscosity solution of

    u^{ε,1}_t + q|u^{ε,1}_x|^{m+1} = 0 in R × (0, ∞),   u^{ε,1}(x, 0) = ε·1_{x≤0} + 1_{x>0} in R,

and let u^{0,1−ε} denote the lsc viscosity solution of

    u^{0,1−ε}_t + q|u^{0,1−ε}_x|^{m+1} = 0 in R × (0, ∞),   u^{0,1−ε}(x, 0) = (1 − ε)·1_{x>0} in R.

Setting

    S := S(ε) = (1 − ε)^{m/(m+1)} (m + 1) q^{1/(m+1)} m^{−m/(m+1)} ε^{1/(m+1)},   (3.7)


by (1.12), these solutions have the explicit forms

    u^{ε,1}(x, t) =
        ε                                                     if x ≤ 0,
        ε + (m/(q^{1/m}(m + 1)^{(m+1)/m})) (x^{m+1}/t)^{1/m}  if 0 ≤ x ≤ S(ε)(t/ε)^{1/(m+1)},
        1                                                     otherwise,   (3.8)

and

    u^{0,1−ε}(x, t) =
        0                                                     if x ≤ 0,
        (m/(q^{1/m}(m + 1)^{(m+1)/m})) (x^{m+1}/t)^{1/m}      if 0 ≤ x ≤ S(ε)(t/ε)^{1/(m+1)},
        1 − ε                                                 otherwise.   (3.9)

In particular, u^{ε,1}(x, t) = u^{0,1−ε}(x, t) + ε. We also see from (3.8) and (3.9) that both u^{ε,1}(·, ε) and u^{0,1−ε}(·, ε) are Lipschitz continuous with the same Lipschitz constant K = K(ε), and therefore there exists an η = η(ε) sufficiently small such that if ∆x ≤ η, then

    0 ≤ u^{ε,1}(x + ∆x, ε) − u^{ε,1}(x, ε) ≤ p∗ for all x ∈ R,
    0 ≤ u^{0,1−ε}(x + ∆x, ε) − u^{0,1−ε}(x, ε) ≤ p∗ for all x ∈ R.   (3.10)

Also, by our explicit representation of u^{ε,1} in (3.8), we have

    u^{ε,1}(x, ε) ≥ ε for all x,   u^{ε,1}(x, ε) = 1 for all x ≥ S(ε).   (3.11)

Now, define

    L := L(ε) = max{k ≤ 0 : F^0_k ≤ ε},   R := R(ε) = min{k ≥ 0 : F^0_k ≥ 1 − ε}.

These values are both finite because µ is a probability distribution on Z and hence lim_{k→−∞} F^0_k = 0 and lim_{k→∞} F^0_k = 1. Then, for n ∈ N, let F^{+,n}_k(n) and F^{−,n}_k(n) be the schemes defined by (2.15), with (∆x)^{m+1} = ∆t as always, and with initial conditions

    F^{+,0}_k(n) = u^{ε,1}(k∆x − Ln^{−1/(m+1)} + S, ε)   (3.12)

and

    F^{−,0}_k(n) = u^{0,1−ε}(k∆x − Rn^{−1/(m+1)}, ε),   (3.13)

respectively. We use the parameter n to spatially shift the initial conditions in order to obtain ordered initial conditions.

By the definition of L, for k < L we have

    F^0_k ≤ ε ≤ F^{+,0}_k(n),

the second inequality since F^{+,0}_k(n) ≥ ε for all k by (3.11). For k ≥ L, if ∆x ≤ n^{−1/(m+1)} then since L ≤ 0 we have k∆x − Ln^{−1/(m+1)} + S ≥ S, so also by (3.11),

    F^{+,0}_k(n) = u^{ε,1}(k∆x − Ln^{−1/(m+1)} + S, ε) = 1 ≥ F^0_k.

Therefore, if ∆x ≤ n^{−1/(m+1)} then F^{+,0}_k(n) ≥ F^0_k for all k ∈ Z.

Similarly, by the definition of R and (3.9), for k > R, we have

    F^{−,0}_k(n) ≤ 1 − ε ≤ F^0_k,

and if ∆x ≤ n^{−1/(m+1)}, then for k ≤ R we have k∆x − Rn^{−1/(m+1)} ≤ 0, so by (3.9),

    F^{−,0}_k(n) = 0 ≤ F^0_k.

Thus if ∆x ≤ n^{−1/(m+1)} then F^{−,0}_k(n) ≤ F^0_k for all k ∈ Z.

15

Combining the two preceding paragraphs, we obtain that if ∆x ≤ n^{−1/(m+1)} then for all k,

    F^{−,0}_k(n) ≤ F^0_k ≤ F^{+,0}_k(n).

If also ∆x ≤ η, then by (3.10), each scheme satisfies the condition that

    |F^{±,0}_k(n) − F^{±,0}_{k−1}(n)| ≤ p∗

for all k ∈ Z. By Remark 3.5, the prior two displays yield that whenever ∆x ≤ min(η, n^{−1/(m+1)}), we have by induction that for all N ∈ N,

    F^{−,N}_k(n) ≤ F^N_k ≤ F^{+,N}_k(n).   (3.14)

We now combine these bounds with Proposition 2.1. We first aim to apply the proposition with µN defined by

    µN[−∞, k) = F^{+,0}_k(n).

The proposition requires that µN have the form µN[−∞, k) = u_0(k/N^{1/(m+1)}), so the definition of F^{+,0}_k(n) forces us to take ∆x = N^{−1/(m+1)} and u_0(x) = u^{ε,1}(x − L/n^{1/(m+1)} + S, ε). Since u^{ε,1} is Lipschitz, fixing T > 1 and applying Proposition 2.1 (specifically (2.3)) at time t = 1 ∈ [0, T], it follows that there exist N_0 = N_0(q, m, K) and c = c(K, m, T) such that if N ≥ N_0, for all k ∈ Z,

    F^{+,N}_k(n) ≤ u^{ε,1}(kN^{−1/(m+1)} − Ln^{−1/(m+1)} + S, 1 + ε) + cN^{−1/2}.

We emphasize that N_0 and c depend only on the initial condition u_0(x) = u^{ε,1}(x − L/n^{1/(m+1)} + S, ε) through its Lipschitz constant K; in particular, N_0 and c do not depend on n, since varying n translates the initial condition horizontally but does not change its Lipschitz constant.

Similarly, taking ∆x = N^{−1/(m+1)}, u_0(x) = u^{0,1−ε}(x − R/n^{1/(m+1)}, ε) and t = 1, applying Proposition 2.1 (specifically (2.3)) with µN defined by µN[−∞, k) = F^{−,0}_k(n) yields that for all N ≥ N_0 and all k ∈ Z,

    F^{−,N}_k(n) ≥ u^{0,1−ε}(kN^{−1/(m+1)} − Rn^{−1/(m+1)}, 1 + ε) − cN^{−1/2}.

For N ≥ N_0 large enough that also ∆x = N^{−1/(m+1)} ≤ min(η, n^{−1/(m+1)}), we may combine these bounds with (3.14) to deduce that for all k ∈ Z,

    P(X_N < k) = F^N_k ≤ F^{+,N}_k(n) ≤ u^{ε,1}(kN^{−1/(m+1)} − Ln^{−1/(m+1)} + S, 1 + ε) + cN^{−1/2}

and

    P(X_N < k) = F^N_k ≥ F^{−,N}_k(n) ≥ u^{0,1−ε}(kN^{−1/(m+1)} − Rn^{−1/(m+1)}, 1 + ε) − cN^{−1/2}.

Taking k = xN^{1/(m+1)}, these bounds become

    P(X_N < xN^{1/(m+1)}) ≤ u^{ε,1}(x − Ln^{−1/(m+1)} + S, 1 + ε) + cN^{−1/2},

and

    P(X_N < xN^{1/(m+1)}) ≥ u^{0,1−ε}(x − Rn^{−1/(m+1)}, 1 + ε) − cN^{−1/2}.

We should in fact take k = ⌊xN^{1/(m+1)}⌋ above, but we ignore this minor rounding issue to preserve readability, as the errors it creates are asymptotically negligible for N large due to the spatial continuity of u^{ε,1} and of u^{0,1−ε} at time 1 + ε.

For n ≥ max(N_0, η^{−(m+1)}), if N ≥ n then the other constraints on N are automatically satisfied. Recalling that L ≤ 0, since cN^{−1/2} → 0 as N → ∞, the first of the two preceding bounds then implies that

    limsup_{N→∞} P(X_N/N^{1/(m+1)} < x) ≤ inf_{n ≥ max(N_0, η^{−(m+1)})} u^{ε,1}(x − Ln^{−1/(m+1)} + S, 1 + ε) = u^{ε,1}(x + S, 1 + ε).

Likewise, the second of the bounds yields that

    liminf_{N→∞} P(X_N/N^{1/(m+1)} < x) ≥ sup_{n ≥ max(N_0, η^{−(m+1)})} u^{0,1−ε}(x − Rn^{−1/(m+1)}, 1 + ε) = u^{0,1−ε}(x, 1 + ε).

Finally, from the explicit representations of u^{ε,1} and u^{0,1−ε} in (3.8) and (3.9), and with S = S(ε) defined by (3.7), we have

    lim_{ε→0} S(ε) = 0,   lim_{ε→0} u^{ε,1}(x + S(ε), 1 + ε) = u(x, 1),   lim_{ε→0} u^{0,1−ε}(x, 1 + ε) = u(x, 1),

uniformly in x, where u(x, t) is given by (1.13). Taking the limit as ε → 0, we get that

    u(x, 1) ≤ liminf_{n→∞} P(Xn/n^{1/(m+1)} < x) ≤ limsup_{n→∞} P(Xn/n^{1/(m+1)} < x) ≤ u(x, 1),

and therefore

    lim_{n→∞} P(Xn/n^{1/(m+1)} < x) = u(x, 1),

as desired. □
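For intuition, the convergence in Proposition 3.1 can be observed by iterating the scheme directly. The sketch below is our own construction (m = 1, q = 0.8 are arbitrary); the limit profile u(x, 1) = min(1, cx^{(m+1)/m}) for x > 0, with c = mq^{−1/m}(m + 1)^{−(m+1)/m}, is the CDF F^{0,1} defined in Section 5.1. The tolerance is loose, since the guaranteed convergence rate involves unspecified constants.

```python
# Iterate the CDF recursion F_k <- F_k - q (F_k - F_{k-1})^{m+1} from the
# p*-bounded initial condition P(X_0 = 0) = P(X_0 = 1) = 1/2 and compare
# with the limit u(x,1) = min(1, c x^{(m+1)/m}) for x > 0 (and 0 for x <= 0).
m, q = 1, 0.8
p_star = ((m + 1) * q) ** (-1.0 / m)      # 0.625: initial masses 0.5 <= p*
c = m * q ** (-1.0 / m) * (m + 1) ** (-(m + 1.0) / m)

n, K = 4000, 200                          # time steps and spatial window
F = [0.0, 0.5] + [1.0] * (K - 1)          # F_k = P(X_0 < k)
for _ in range(n):
    for k in range(K, 0, -1):             # right-to-left: F[k-1] is still old
        F[k] -= q * (F[k] - F[k - 1]) ** (m + 1)

scale = n ** (1.0 / (m + 1))

def u_limit(x):
    return 0.0 if x <= 0 else min(1.0, c * x ** ((m + 1.0) / m))

err = max(abs(F[k] - u_limit(k / scale)) for k in range(K + 1))
assert err < 0.25                                         # coarse agreement
assert all(F[k] <= F[k + 1] + 1e-12 for k in range(K))    # F is still a CDF
```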

4. General singular initial conditions

We saw that Lemma 3.2 requires a bound on the maximum single-site probability. Our next result shows that in fact, there exists a constant N_1 such that, regardless of the initial distribution, the distribution of the CM after N_1 steps will satisfy such a bound.

Lemma 4.1. Let µ be a probability distribution supported by Z, and let (Xn, n ≥ 0) be CM(m,q,µ)-distributed. Then there exists a constant N_1 = N_1(q, m) such that for all n ≥ N_1,

    max_{k∈Z} P(Xn = k) ≤ p∗.   (4.1)

Proof of Lemma 4.1. First, by Lemma 3.3, if M := max_{k∈Z} p^0_k = max_{k∈Z} µ({k}) ≤ p∗ then max_{k∈Z} p^1_k ≤ p∗. Therefore, it suffices to show that there exists C = C(m, q) > 0 such that if M > p∗ then max_{k∈Z} p^1_k ≤ M − C, since then by induction, for all n ∈ N,

    max_{k∈Z} p^n_k ≤ max(p∗, M − nC),

and in particular max_{k∈Z} p^n_k ≤ p∗ for all n ≥ (1 − p∗)/C.

So suppose M > p∗. Since p∗ > 1/2, there is a unique integer ℓ ∈ Z with p^0_ℓ > p∗. We bound p^1_k for all k ∈ Z by splitting into three cases, according to whether k = ℓ, k = ℓ + 1 or k ∉ {ℓ, ℓ + 1}.

● If k ∉ {ℓ, ℓ + 1}, then by Lemma 3.3 we have

    p^1_k ≤ max(p^0_k, p^0_{k−1}) ≤ max_{k∈Z, k≠ℓ} p^0_k ≤ 1/2 < M − (p∗ − 1/2).


● If k = ℓ, then by (3.5) and since p^0_k = M > p∗ and p^0_{k−1} < 1/2, we have

    p^1_k = p^0_k − q[(p^0_k)^{m+1} − (p^0_{k−1})^{m+1}] ≤ p^0_k − q[(p∗)^{m+1} − (1/2)^{m+1}] = M − q[(p∗)^{m+1} − (1/2)^{m+1}].

● If k = ℓ + 1, then we use Lemma 3.4, which provides lower bounds on the function g(x, y) = (x − qx^{m+1}) − (y − qy^{m+1}). Note that

    g(p^0_{k−1}, p^0_k) = p^0_{k−1} − q(p^0_{k−1})^{m+1} − p^0_k + q(p^0_k)^{m+1} = p^0_{k−1} − p^1_k.

Since p^0_{k−1} = p^0_ℓ = M > p∗ and p^0_{k−1} + p^0_k ≤ 1, Lemma 3.4 and the preceding displayed identity together imply that

    p^1_k = p^0_{k−1} − g(p^0_{k−1}, p^0_k) ≤ M − min(g(p∗, 1 − p∗), 1 − q).

Taking C_1 = p∗ − 1/2, C_2 = q[(p∗)^{m+1} − (1/2)^{m+1}], C_3 = min(g(p∗, 1 − p∗), 1 − q), and C = min(C_1, C_2, C_3), the above bounds then give that max_{k∈Z} p^1_k ≤ M − C, as required. □
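Lemma 4.1 can be illustrated with a short computation (our own sketch; m = 2, q = 0.6 are arbitrary): starting from a point mass, the recursion (3.5) pushes the maximum single-site probability below p∗ within a bounded number of steps.

```python
# Starting from a point mass (M = 1 > p*), iterate (3.5) until the largest
# single-site probability drops below p*; Lemma 4.1 guarantees this happens
# after a bounded number of steps depending only on q and m.
m, q = 2, 0.6
p_star = ((m + 1) * q) ** (-1.0 / m)       # 1.8 ** -0.5, roughly 0.745

def step_pmf(p):
    p = list(p) + [0.0]                    # room for mass to move right
    return [p[j] - q * (p[j] ** (m + 1) - (p[j - 1] if j > 0 else 0.0) ** (m + 1))
            for j in range(len(p))]

p = [1.0]                                  # point mass at 0
steps = 0
while max(p) > p_star:
    p = step_pmf(p)
    steps += 1
    assert steps <= 100                    # must terminate in bounded time
print(steps, max(p))
```

With these parameters a single step already suffices: the point mass splits into masses 0.4 and 0.6, both below p∗.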

We can combine this result with Proposition 3.1 to complete the proof of the main theorem.

Proof of Theorem 1.1. Fix N_1 = N_1(q, m) as in Lemma 4.1, let µ̃ be the distribution of X_{N_1} and let X̃_n = X_{N_1+n} for n ≥ 0. Then (X̃_n, n ≥ 0) is CM(m, q, µ̃)-distributed. Because P(X̃_0 = k) ≤ p∗ for all k, we can apply Proposition 3.1 to conclude that

    lim_{n→∞} P(X̃_n/n^{1/(m+1)} < x) = u(x, 1).

Since N_1 is fixed and u(x, 1) is continuous, this implies that for all x ∈ R,

    lim_{n→∞} P(Xn/n^{1/(m+1)} < x) = lim_{n→∞} P( (X̃_{n−N_1}/(n − N_1)^{1/(m+1)}) · ((n − N_1)/n)^{1/(m+1)} < x ) = u(x, 1).   (4.2)

By comparing the expression for u(x, 1) provided by (1.13) to the CDF given in (1.3) for (m + 1)(q/m^m)^{1/(m+1)} B, where B is Beta((m+1)/m, 1)-distributed, we see that

    (1/(m + 1)) (m^m/q)^{1/(m+1)} · Xn/n^{1/(m+1)} →_d B,

as required. □
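Theorem 1.1 can also be seen in simulation. The sketch below is a mean-field approximation of our own design: the i.i.d. copies X̃^i_n are replaced by uniform draws from a large particle ensemble, which reproduces the law of the process only in the large-ensemble limit. For m = 1 and q = 1/2 the theorem gives X_n/√n →_d √2·B with B Beta(2, 1)-distributed, so the limiting median of X_n/√n is √2·√(1/2) = 1.

```python
import random

# Mean-field Monte Carlo for CM(m, q): each particle moves by +1 with
# probability q if m uniform draws from the current ensemble match its value.
random.seed(7)
m, q, N, nsteps = 1, 0.5, 4000, 400
X = [0] * N
for _ in range(nsteps):
    Xold = X                      # copies are drawn from the time-n ensemble
    X = [x + 1 if (all(Xold[random.randrange(N)] == x for _ in range(m))
                   and random.random() < q) else x
         for x in Xold]

scaled = sorted(x / nsteps ** 0.5 for x in X)
median = scaled[N // 2]
assert 0.7 < median < 1.3         # limit prediction: median = 1
print(median)
```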

5. Generalizations, Limitations, and Open Questions

In this section, we discuss several possible extensions of the above results, as well as some obstacles and challenges we have observed.

1. Higher Dimensions. One may try to extend our results and techniques to higher space dimensions. However, there are several challenges. In particular, we use the monotonicity of CDFs, namely that F^n_k is nondecreasing in k, ubiquitously throughout the paper. The monotonicity of CDFs in higher dimensions is weaker, as it requires ordering in all coordinates. Relatedly, it is unclear how to extend the stochastic monotonicity and sandwiching arguments from this paper to higher dimensions. These points make generalization of our approach to dimensions d > 1 delicate (although we hope not impossible).

2. Cooperative Motion with fewer than m friends. A related model which we have not considered, but which may be amenable to the techniques of this paper, is when the cooperative motion only requires ℓ individuals to move, for ℓ < m. More precisely, we

18

may modify the model as follows. Let X_0 and (D_n, n ≥ 0) be as in the introduction. Then, for n ≥ 0, let (X̃^i_n, 1 ≤ i ≤ m) be independent copies of X_n, and set

    X_{n+1} = X_n + D_n   if X_n = X̃^i_n for at least ℓ distinct values i ∈ {1, ..., m},
    X_{n+1} = X_n         otherwise.

It seems likely that for such a process, X_n should typically take values of order n^{1/(ℓ+1)}. A heuristic argument for this is as follows. Suppose that X_n/n^α behaves roughly like a continuous random variable with compact support, for n large; say that P(X_n = k) ≍ n^{−α} for Θ(n^α) distinct values of k, and for other values of k this probability is substantially smaller.

On one hand, this suggests that P(X_{n+1} > X_n) = Θ(n^{α−1}), since we expect that X_{2n} − X_n = Θ(n^α). On the other hand, P(X_{n+1} > X_n) is the probability that at least ℓ of the m copies X̃^i_n take the same value as X_n; if the distribution of X_n is spread out over roughly n^α sites, then this probability should be around (n^{−α})^ℓ. For these two predictions to agree we must have 1 − α = αℓ, so α = 1/(ℓ + 1).
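This heuristic can be probed numerically: given X_n = j with p_j = P(X_n = j), a move occurs with probability q·P(Binomial(m, p_j) ≥ ℓ), so the law of the variant evolves autonomously. The sketch below (our construction; m = 3, ℓ = 1, q = 1/2 are arbitrary) tracks the median of the law and checks that quadrupling n roughly doubles it, consistent with α = 1/(ℓ + 1) = 1/2.

```python
from math import comb

m, ell, q = 3, 1, 0.5

def move_prob(p):
    # q * P(Binomial(m, p) >= ell): at least ell of the m copies match
    return q * sum(comb(m, i) * p ** i * (1 - p) ** (m - i)
                   for i in range(ell, m + 1))

def median_after(nsteps, cap=400):
    p = [1.0]                                  # point mass at 0
    for _ in range(nsteps):
        out = [pj * move_prob(pj) for pj in p]
        pnew = [pj - oj for pj, oj in zip(p, out)] + [0.0]
        for j, oj in enumerate(out):
            pnew[j + 1] += oj
        p = pnew[:cap]                         # drop negligible far-right mass
    acc = 0.0
    for j, pj in enumerate(p):                 # smallest j with CDF >= 1/2
        acc += pj
        if acc >= 0.5:
            return j

med1, med2 = median_after(800), median_after(3200)
assert 1.6 < med2 / med1 < 2.4                 # consistent with alpha = 1/2
print(med1, med2)
```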

While we have confidence in this prediction of the asymptotic size of X_n, it is not clear to us whether or not the scaling limit should in fact be the same as for a CM(ℓ, q, µ) process.

3. Cooperative Motion with a non-integer number of friends. Another possible extension is to the case when m is non-integer. Although integer m has a natural interpretation in terms of cooperative motion, which in turn leads to the recursion relation (1.4), we may alternatively take (1.4) as a definition for the CDF F^n_k of a random variable X_n. In this case, the same analysis shows that for any m ∈ R with m ≥ 1, Theorem 1.1 still holds when X_0 is µ-distributed and P(X_n < k) = F^n_k, where F^n_k is defined according to (1.4). The requirement that m ≥ 1 is crucial for Lemma 3.4, so the techniques of this paper do not yield insight into what happens for m ∈ (0, 1). However, it would be quite interesting to understand this, as well as the limiting behaviour as m → 0.

4. General step size distributions. Remaining in one spatial dimension, another natural generalization of this process would be to consider cooperative motion which allowed for more general step sizes D_n. As we will discuss in Section 5.2, it turns out that if P(|D_n| > 1) > 0, we confront an immediate, provable obstacle to directly applying the proof techniques of this paper (failure of monotonicity). However, before describing this obstacle, we first present a generalization of our main result. If the steps (D_n, n ≥ 0) are an integer multiple of Bernoulli(q) random variables, then we are able to prove a distributional convergence result; this is presented in the next subsection. Surprisingly, the limiting distribution in this case, although always a mixture of Beta random variables, need not be Beta-distributed, due to lattice effects which persist at large times.

5.1. Persistent lattice effects. Let (D_n, n ≥ 0) be iid, non-negative, bounded integer random variables. Define a cooperative motion process, with X_0 chosen according to an initial probability distribution µ on Z, as follows. For n ≥ 0, let (X̃^i_n, 1 ≤ i ≤ m) be independent copies of X_n, and set

    X_{n+1} = X_n + D_n   if X_n = X̃^i_n for all i = 1, ..., m,
    X_{n+1} = X_n         if X_n ≠ X̃^i_n for some i.   (5.1)

In this section we consider the case where the steps (D_n, n ≥ 0) are iid with P(D_n = g) = q = 1 − P(D_n = 0) for some q ∈ (0, 1) and g ∈ N. If the initial distribution µ is supported by a translate of gZ then the resulting cooperative motion may simply be seen as a rescaling of the Bernoulli cooperative motion process considered in the body of the paper. However, if µ is not supported by a translate of gZ then the asymptotic behaviour is in fact different; there are lattice effects which persist at large times.

Theorem 5.1. Consider the generalized cooperative motion with P(D_n = g) = q = 1 − P(D_n = 0) for some q ∈ (0, 1) and g ∈ N. Write π_r = P(X_0 ≡ r mod g) for r ∈ {1, 2, ..., g}. Then

    (1/g) (1/(m + 1)) (m^m/(qn))^{1/(m+1)} X_n →_d B · Σ_{r=1}^g (π_r)^{m/(m+1)} 1_{A=r},

where A is a random variable taking values in {1, 2, ..., g} with P(A = r) = π_r, and B is Beta((m+1)/m, 1)-distributed and independent of A.

As an input to the proof of Theorem 5.1, we use the following straightforward extension of Theorem 1.1 to Bernoulli cooperative motions which may take values ±∞. Let c = c(q, m) = mq^{−1/m}(m + 1)^{−(m+1)/m}, and for 0 ≤ a < b ≤ 1 define an extended CDF F^{a,b} by

    F^{a,b}(x) =
        a                     if x ≤ 0,
        a + cx^{(m+1)/m}      if 0 ≤ cx^{(m+1)/m} ≤ b − a,
        b                     if b − a ≤ cx^{(m+1)/m}.

Let B^{a,b} be an extended random variable with distribution F^{a,b}. Then P(|B^{a,b}| < ∞) = b − a, and for x ∈ R,

    P(B^{a,b} ≤ x | |B^{a,b}| < ∞) =
        0                            if x ≤ 0,
        (c/(b − a))x^{(m+1)/m}       if 0 ≤ (c/(b − a))x^{(m+1)/m} ≤ 1,
        1                            if 1 ≤ (c/(b − a))x^{(m+1)/m}.

In other words, given that |B^{a,b}| is finite, it is distributed as ((b − a)/c)^{m/(m+1)} B, where B is Beta((m+1)/m, 1)-distributed.

Proposition 5.2. If µ is a probability distribution on Z ∪ {±∞} with µ({−∞}) = a and µ({+∞}) = 1 − b, and (X_n, n ≥ 0) is CM(m, q, µ)-distributed, then

    X_n/n^{1/(m+1)} →_d B^{a,b}.

The proof of Proposition 5.2 proceeds exactly as does the proof of Theorem 1.1, with minor notational changes, so we omit the details.

Corollary 5.3. Suppose that µ is a probability distribution on Z ∪ {±∞} with µ({−∞}) = a and µ({+∞}) = 1 − b, and that P(D_n = g) = q = 1 − P(D_n = 0) for some q ∈ (0, 1) and g ∈ N, g > 0. If there is r ∈ {1, 2, ..., g} such that P(X_0 ≡ r mod g | |X_0| < ∞) = 1, then

    (1/g) · X_n/n^{1/(m+1)} →_d B^{a,b}.

Proof. Apply Proposition 5.2 to the process ((X_n − r)/g, n ≥ 0). □

Proof of Theorem 5.1. Define auxiliary processes (X^{(r)}_n, n ≥ 0) for 1 ≤ r ≤ g by

    X^{(r)}_n = X_n   if X_n ≡ r mod g,
    X^{(r)}_n = −∞    otherwise.


Then by Corollary 5.3, for each 1 ≤ r ≤ g,

    (1/g) · X^{(r)}_n/n^{1/(m+1)} →_d B^{1−π_r,1}.

Moreover, since exactly one of X^{(1)}_n, ..., X^{(g)}_n is finite, and P(|X^{(r)}_n| < ∞) = π_r for all n ≥ 0 and 1 ≤ r ≤ g, it follows that

    ((1/g) · X^{(r)}_n/n^{1/(m+1)}, 1 ≤ r ≤ g) →_d (B^{1−π_r,1}, 1 ≤ r ≤ g),

where the joint distribution of the variables on the right-hand side is fully determined by the stipulation that exactly one of them is finite and all others take the value −∞.

Finally, with the convention that (−∞) · 0 = 0, we have

    X_n = Σ_{r=1}^g X^{(r)}_n 1_{|X^{(r)}_n| < ∞},

together with which the preceding joint convergence implies that

    (1/g) · X_n/n^{1/(m+1)} →_d Σ_{r=1}^g B^{1−π_r,1} 1_{|B^{1−π_r,1}| < ∞}.

Since P(|B^{1−π_r,1}| < ∞) = 1 − (1 − π_r) = π_r for each r ∈ {1, 2, ..., g}, and the conditional distribution of B^{1−π_r,1} given that |B^{1−π_r,1}| is finite is that of (π_r/c)^{m/(m+1)} B where B is Beta((m+1)/m, 1)-distributed, the result follows. □

5.2. Step Sizes |D_n| > 1. Building on our main theorem, and in view of the persistent lattice effects explained in the preceding subsection, we make the following conjecture. Consider the generalized cooperative motion defined by (5.1) and write ν for the common distribution of (D_n, n ≥ 0). If gcd{k > 0 : P(D_n = k) > 0} = 1, then there exists c = c(ν) > 0 such that cn^{−1/(m+1)} X_n →_d B, where B is Beta((m+1)/m, 1)-distributed.

The preceding conjecture states that all totally asymmetric cooperative motion processes with non-negative, bounded integer step sizes whose support is not contained in a proper sublattice of Z should have similar asymptotic behaviour. However, there is a provable difficulty in establishing this conjecture beyond the Bernoulli setting using the proof techniques shown above. Specifically, we next show that monotonicity of the evolution fails to hold whenever P(|D_n| > 1) > 0. This implies that, in some sense, the main proof technique used in this paper can only handle cooperative motion-type processes with |D_n| ≤ 1.

Consider a cooperative motion-type process as in (5.1), with bounded but not necessarily positive step sizes, so P(−ℓ ≤ D_n ≤ s) = 1 for some non-negative integers s and ℓ. Writing F^n_k = P(X_n < k), the values F^n_k satisfy the following recurrence:

    F^{n+1}_k = G(F^n_{k+ℓ}, ..., F^n_k, ..., F^n_{k−s})
        := F^n_k − Σ_{j=k−s}^{k−1} P(X_n = j)^{m+1} P(D_n ≥ k − j) + Σ_{j=k}^{k+ℓ−1} P(X_n = j)^{m+1} P(D_n < k − j)
        = F^n_k − Σ_{j=k−s}^{k−1} (F^n_{j+1} − F^n_j)^{m+1} P(D_n ≥ k − j) + Σ_{j=k}^{k+ℓ−1} (F^n_{j+1} − F^n_j)^{m+1} P(D_n < k − j).


The function G is defined by the equality of the first and third lines above: so

    G(f_{k+ℓ}, ..., f_{k−s}) = f_k − Σ_{j=k−s}^{k−1} (f_{j+1} − f_j)^{m+1} P(D_n ≥ k − j) + Σ_{j=k}^{k+ℓ−1} (f_{j+1} − f_j)^{m+1} P(D_n < k − j).

Theorem 5.4. If P(|D_n| > 1) > 0, then there is no Λ > 0 such that G is non-decreasing in each argument whenever

    0 ≤ f_{j+1} − f_j ≤ Λ

for all j ∈ [k − s, k + ℓ − 1].

Proof. First,

    ∂G/∂f_{k−1} = (m + 1)(f_k − f_{k−1})^m P(D_n ≥ k − (k − 1)) − (m + 1)(f_{k−1} − f_{k−2})^m P(D_n ≥ k − (k − 2))
                = (m + 1)(f_k − f_{k−1})^m P(D_n ≥ 1) − (m + 1)(f_{k−1} − f_{k−2})^m P(D_n ≥ 2),

so if P(D_n ≥ 2) > 0 and if f_k = f_{k−1} > f_{k−2}, then ∂G/∂f_{k−1} < 0.

Similarly,

    ∂G/∂f_{k+1} = (m + 1)(f_{k+1} − f_k)^m P(D_n < 0) − (m + 1)(f_{k+2} − f_{k+1})^m P(D_n < −1),

so if P(D_n < −1) > 0, then whenever f_k = f_{k+1} < f_{k+2} we have ∂G/∂f_{k+1} < 0. □
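A concrete numerical instance of Theorem 5.4 (our own example: D_n ∈ {0, 2} with P(D_n = 2) = q, so s = 2 and ℓ = 0 in the notation above): the recurrence specializes to G(f_k, f_{k−1}, f_{k−2}) = f_k − q[(f_k − f_{k−1})^{m+1} + (f_{k−1} − f_{k−2})^{m+1}], and no matter how small the increments, G strictly decreases in f_{k−1} when f_k = f_{k−1} > f_{k−2}.

```python
# With D_n in {0, 2}, monotonicity of G in f_{k-1} fails at every scale Lambda:
# lowering f_{k-1} strictly raises G, i.e. dG/df_{k-1} < 0.
m, q = 1, 0.5

def G(fk, fk1, fk2):
    # one-step CDF update at site k when the only positive step size is 2
    return fk - q * ((fk - fk1) ** (m + 1) + (fk1 - fk2) ** (m + 1))

for lam in [0.5, 0.1, 0.01, 0.001]:
    fk2, fk1, fk = 0.2, 0.2 + lam, 0.2 + lam   # f_k = f_{k-1} > f_{k-2}
    h = lam / 100
    assert G(fk, fk1 - h, fk2) > G(fk, fk1, fk2)
```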

Note that for any initial distribution with bounded support, if the step size is bounded then for all n the support of X_n is bounded: letting k = max{ℓ' : F^n_{ℓ'} < 1} + 2 and k' = min{ℓ' : F^n_{ℓ'} > 0} − 2, both k and k' are finite. Moreover, F^n_{k−2} < F^n_{k−1} = F^n_k = 1 and 0 = F^n_{k'} = F^n_{k'+1} < F^n_{k'+2}, and thus if P(|D_n| > 1) > 0 then by the above theorem, at no point in the evolution will the process reach a time at which monotonicity can be invoked. Without monotonicity, we cannot apply the Crandall-Lions methodology, so the proof technique used in this paper fails.

Acknowledgements. LAB was partially supported by NSERC Discovery Grant 643473 and Discovery Accelerator Supplement 643474. EB was partially supported by NSERC Discovery Grants 247764 and 643473. JL was partially supported by NSERC Discovery Grant 247764, FRQNT Grant 250479, and the Canada Research Chairs program. We thank Gavin Barill and Maeve Wildes for useful discussions pertaining to the convergence of finite difference schemes for Hamilton-Jacobi equations.

Appendix A. An introduction to viscosity solutions

In this section, we provide a self-contained description of Crandall-Lions (continuous) and Barron-Jensen (lsc) viscosity solutions. The results of this section are classical and can be found in various references such as [6, 10, 2, 3].

We will work throughout this section with the model equation

    u_t + H(u_x) = 0,   (A.1)

where H : R → R. We also define the Cauchy problem, given by

    u_t + H(u_x) = 0 in R × (0, ∞),   u(x, 0) = u_0(x) in R.   (A.2)

We begin with the theory of continuous viscosity solutions.

Definition A.1. Let u : R × (0, ∞) → R. We say that u is a viscosity subsolution of (A.1) at (x_0, t_0) if u is upper semicontinuous at (x_0, t_0), and for any function φ ∈ C^1(R × (0, ∞)) such that u − φ has a local maximum at (x_0, t_0), we have

    φ_t(x_0, t_0) + H(φ_x(x_0, t_0)) ≤ 0.

We say that u is a viscosity supersolution of (A.1) at (x_0, t_0) if u is lower semicontinuous at (x_0, t_0), and for any function φ ∈ C^1(R × (0, ∞)) such that u − φ has a local minimum at (x_0, t_0), we have

    φ_t(x_0, t_0) + H(φ_x(x_0, t_0)) ≥ 0.

Finally, we say that u is a viscosity solution of (A.2) if and only if u is both a viscosity subsolution and supersolution of (A.1) for all (x_0, t_0) ∈ R × (0, ∞) and, additionally, for all x ∈ R, u(y, t) → u_0(x) as (y, t) → (x, 0). As u is then both upper and lower semicontinuous on R × (0, ∞), u is necessarily continuous.

One can also interpret the definition of viscosity solutions from a geometric perspective. The condition that u − φ has a local max/min at (x_0, t_0) can always be replaced by the condition that φ touches u at the point (x_0, t_0) from above/below. Indeed, when u − φ has a local max at (x_0, t_0), we may adjust φ (adding appropriate constants and strictly convex/concave functions) to obtain φ̃ such that

    u < φ̃ in R × (0, ∞), except at (x_0, t_0), where u(x_0, t_0) = φ̃(x_0, t_0).

If u is differentiable at (x_0, t_0) and satisfies (A.1) at (x_0, t_0), then u automatically satisfies (A.1) in the viscosity sense at (x_0, t_0). The notion of viscosity solution entails that if u is not differentiable at (x_0, t_0), one uses a smooth test function which "touches" u at the point (x_0, t_0) on either side to evaluate the PDE at (x_0, t_0). Compared to other notions of weak solutions of PDEs (for example, distributional solutions, which are based on integration by parts), viscosity solutions are particularly amenable to nonlinear PDEs. We now recall the basic existence and uniqueness result for continuous viscosity solutions which we will use throughout the paper:

Theorem A.2. [7, Theorem VI.2] Consider (A.2) with H continuous and u_0 bounded and uniformly continuous. There exists a unique continuous viscosity solution u of (A.2). Moreover,

    |u(x, t) − u(y, t)| ≤ sup_{ζ∈R} |u_0(ζ) − u_0(ζ + y − x)| for x, y ∈ R, t ≥ 0.

It is well known (see for example [9, Section 10.3, Theorem 3]) that when H(p) is convex and lim_{|p|→∞} H(p)/|p| = +∞, the unique continuous viscosity solution is given by the Hopf-Lax formula

    u(x, t) = inf_{y∈R} { u_0(y) + tH∗((x − y)/t) }.
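For the Hamiltonian used in the body of the paper, H(p) = q|p|^{m+1}, one computes H∗(v) = c|v|^{(m+1)/m} with c = mq^{−1/m}(m + 1)^{−(m+1)/m}, and a direct discretization of the Hopf-Lax infimum recovers the explicit rarefaction profile used in Section 3. The following sketch (our own discretization; parameters arbitrary) checks this numerically.

```python
# Discretized Hopf-Lax formula for H(p) = q|p|^{m+1}, with Heaviside data
# u0(y) = 1_{y > 0}; the exact solution is u(x,t) = min(1, c x^{(m+1)/m}/t^{1/m})
# for x > 0, and 0 for x <= 0.
m, q, t = 2, 0.5, 1.0
c = m * q ** (-1.0 / m) * (m + 1) ** (-(m + 1.0) / m)

def Hstar(v):
    # Legendre transform of H(p) = q|p|^{m+1}
    return c * abs(v) ** ((m + 1.0) / m)

def u0(y):
    return 1.0 if y > 0 else 0.0

def hopf_lax(x, t, ygrid):
    return min(u0(y) + t * Hstar((x - y) / t) for y in ygrid)

ygrid = [-5 + i * 0.001 for i in range(10001)]
for x in [-1.0, -0.3, 0.0, 0.3, 0.9, 1.5, 3.0]:
    exact = 0.0 if x <= 0 else min(1.0, c * x ** ((m + 1.0) / m) / t ** (1.0 / m))
    assert abs(hopf_lax(x, t, ygrid) - exact) < 1e-2
```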

The crown jewel of continuous viscosity solutions theory is the celebrated comparison principle, which is an extremely useful tool for analysis:


Theorem A.3. [6, Theorem 8.2] Consider (A.2) with H continuous. If u is a subsolution of (A.1) and v is a supersolution of (A.1), and u(x, 0) = u_0(x) ≤ v_0(x) = v(x, 0) with u_0, v_0 bounded and uniformly continuous, then u(x, t) ≤ v(x, t) for all t > 0.

Using the Comparison Principle (Theorem A.3), we can show that u solving (A.2) satisfies additional regularity estimates:

Proposition A.4. Let u denote the unique continuous viscosity solution of (A.2) with u_0 bounded and Lipschitz continuous with Lipschitz constant K > 0. Then there exists C > 0 such that for all (x, t) ∈ R × (0, ∞),

    |u_t(x, t)| ≤ C,   |u_x(x, t)| ≤ K.

Proof. The fact that |u_x| ≤ K in all of R × (0, ∞) is automatic by Theorem A.2. We now show that u_t is uniformly bounded. In order to do so, we note that for C := sup_{|p|≤K} H(p),

    v(x, t) := u_0(x) + Ct and w(x, t) := u_0(x) − Ct

are respectively a supersolution and a subsolution of (A.2). Therefore, the Comparison Principle (Theorem A.3) yields

    u_0(x) − Ct ≤ u(x, t) ≤ u_0(x) + Ct,

which implies that

    sup_{t>0} |u(x, t) − u_0(x)|/t ≤ C   (A.3)

for all x ∈ R. Now, for any s > 0, considering the function u^s(x, t) := u(x, t + s), we have that

    u^s(x, 0) − ||u(x, 0) − u^s(x, 0)||_{L∞} ≤ u(x, 0) ≤ u^s(x, 0) + ||u(x, 0) − u^s(x, 0)||_{L∞}.

Another application of the Comparison Principle (Theorem A.3) implies that

    u^s(x, t) − ||u(x, 0) − u^s(x, 0)||_{L∞} ≤ u(x, t) ≤ u^s(x, t) + ||u(x, 0) − u^s(x, 0)||_{L∞},

so that by (A.3),

    |u(x, t + s) − u(x, t)| ≤ ||u(x, 0) − u^s(x, 0)||_{L∞} ≤ Cs.

This implies that |u_t| ≤ C for all (x, t) ∈ R × (0, ∞). □

We now introduce the notion of Barron-Jensen or lower semicontinuous viscosity solutions, which is only defined when H is a convex function.

Definition A.5. A lower semicontinuous function u : R × (0, ∞) → R is an lsc viscosity solution of (A.1) at (x_0, t_0) if for every φ ∈ C^1(R × (0, ∞)) such that u − φ has a local minimum at (x_0, t_0), we have that

    φ_t(x_0, t_0) + H(φ_x(x_0, t_0)) = 0.

We say that u is an lsc solution of (A.2) if u is an lsc viscosity solution for all (x_0, t_0) ∈ R × (0, ∞) and

    inf { liminf_{n→∞} u(x_n, t_n) : t_n → 0, x_n → x } = u_0(x).

In the case when u is continuous, we have an equivalence between the two definitions:


Theorem A.6. [3, Theorem 16] Assume H is convex. A continuous function is a viscosity solution of (A.1) if and only if it is an lsc viscosity solution of (A.1).

Finally, we recall that in the case when H is convex, a natural candidate for a solution (from the point of view of optimal control) is the solution given by the Hopf-Lax formula. In the case when u_0 is lower semicontinuous and bounded below, the Hopf-Lax formula gives rise to the unique lsc viscosity solution.

Theorem A.7. [2, Theorem 5.2] Let u_0 : R → R be lsc with

    u_0(x) ≥ −C(|x| + 1).

Let H : R → R be convex and Lipschitz. Then

    u(x, t) = inf_{y∈R} { u_0(y) + tH∗((x − y)/t) }

is the unique lsc viscosity solution of (A.2) bounded from below by a function of linear growth.

References

[1] Addario-Berry, L., Cairns, H., Devroye, L., Kerriou, C., and Mitchell, R. Hipster random walks. arXiv:1909.07367v1 (2019).
[2] Alvarez, O., Barron, E. N., and Ishii, H. Hopf-Lax formulas for semicontinuous data. Indiana Univ. Math. J. 48, 3 (1999), 993–1035.
[3] Barron, E. N. Viscosity solutions and analysis in L∞. In Nonlinear analysis, differential equations and control (Montreal, QC, 1998), vol. 528 of NATO Sci. Ser. C Math. Phys. Sci. Kluwer Acad. Publ., Dordrecht, 1999, pp. 1–60.
[4] Barron, E. N., and Jensen, R. Semicontinuous viscosity solutions for Hamilton-Jacobi equations with convex Hamiltonians. Comm. Partial Differential Equations 15, 12 (1990), 1713–1742.
[5] Chen, G.-Q., and Su, B. Discontinuous solutions for Hamilton-Jacobi equations: uniqueness and regularity. Discrete Contin. Dyn. Syst. 9, 1 (2003), 167–192.
[6] Crandall, M. G., Ishii, H., and Lions, P.-L. User's guide to viscosity solutions of second order partial differential equations. Bull. Amer. Math. Soc. (N.S.) 27, 1 (1992), 1–67.
[7] Crandall, M. G., and Lions, P.-L. Viscosity solutions of Hamilton-Jacobi equations. Trans. Amer. Math. Soc. 277, 1 (1983), 1–42.
[8] Crandall, M. G., and Lions, P.-L. Two approximations of solutions of Hamilton-Jacobi equations. Mathematics of Computation 43, 167 (1984), 1–19.
[9] Evans, L. C. Partial differential equations, second ed., vol. 19 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2010.
[10] Lions, P.-L. Generalized solutions of Hamilton-Jacobi equations, vol. 69 of Research Notes in Mathematics. Pitman (Advanced Publishing Program), Boston, Mass.-London, 1982.

Department of Mathematics and Statistics, McGill University
Email address: [email protected]

Department of Mathematics and Statistics, Concordia University
Email address: [email protected]

Department of Mathematics and Statistics, McGill University
Email address: [email protected]