Lectures on the Potential Theory of Subordinate
Brownian Motions
Renming Song
Department of Mathematics
University of Illinois, Urbana, IL 61801
Email: [email protected]
1 Introduction
Let X = (Xt : t ≥ 0) be a d-dimensional Brownian motion. Subordination of Brownian
motion consists of time-changing the paths of X by an independent subordinator. To be
more precise, let S = (St : t ≥ 0) be a subordinator (i.e., a nonnegative, increasing Levy
process) independent of X. The process Y = (Yt : t ≥ 0) defined by Yt = X(St) is called
a subordinate Brownian motion. The process Y is an example of a rotationally invariant
d-dimensional Levy process. A general Levy process in Rd is completely characterized by
its characteristic triple (b, A, π), where b ∈ Rd, A is a nonnegative definite d × d matrix,
and π is a measure on R^d \ {0} satisfying ∫_{R^d} (1 ∧ |x|²) π(dx) < ∞, called the Levy measure of the process. Its characteristic exponent Φ, defined by E[exp(i⟨x, Y_t⟩)] = exp(−tΦ(x)), x ∈ R^d, is given by the Levy–Khintchine formula involving the characteristic triple (b, A, π).
The main difficulty in studying general Levy processes stems from the fact that the Levy
measure π can be quite complicated.
The situation simplifies immensely in the case of subordinate Brownian motions. If we
take the Brownian motion X as given, then Y is completely determined by the
subordinator S. Hence, one can deduce properties of Y from properties of the subordinator
S. On the analytic level this translates to the following: Let φ denote the Laplace exponent of
the subordinator S. That is, E[exp(−λS_t)] = exp(−tφ(λ)), λ > 0. Then the characteristic
exponent Φ of the subordinate Brownian motion Y takes on the very simple form Φ(x) =
φ(|x|2) (our Brownian motion X runs at twice the usual speed). Hence, properties of Y
should follow from properties of the Laplace exponent φ. This will be the main theme of
these lecture notes: we will study potential-theoretic properties of Y by using information
given by φ. Two main instances of this approach are explicit formulae for the Green function
of Y and the Levy measure of Y . Let p(t, x, y), x, y ∈ Rd, t > 0, denote the transition
densities of the Brownian motion X, and let µ, respectively U , denote the Levy measure,
respectively the potential measure, of the subordinator S. Then the Levy measure π of Y is
given by π(dx) = J(x) dx where
J(x) = ∫_0^∞ p(t, 0, x) µ(dt),
while, when Y is transient, the Green function G(x, y), x, y ∈ Rd, of Y is given by
G(x, y) = ∫_0^∞ p(t, x, y) U(dt).
Let us consider the second formula (same reasoning also applies to the first one). This
formula suggests that the asymptotic behavior of G(x, y) when |x − y| → 0 (respectively,
when |x−y| → ∞) should follow from the asymptotic behavior of the potential measure U at
∞ (respectively at 0). The latter can be studied in the case when the potential measure has
a monotone density u with respect to the Lebesgue measure. Indeed, the Laplace transform
of U is given by LU(λ) = 1/φ(λ), hence one can invoke the Tauberian and monotone density
theorems in order to obtain the asymptotic behavior of u from the asymptotic behavior of φ.
We will be mainly interested in the behavior of the Green function G(x, y) and the jumping
function J(x) near zero, hence the reasonable assumption on φ will be that it is regularly
varying at infinity with index α ∈ [0, 2]. This includes subordinators having a drift, as well
as subordinators with slowly varying Laplace exponent at infinity, for example, a gamma
subordinator.
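As a quick illustration of the time change Y_t = X(S_t), here is a minimal simulation sketch (not part of the original notes). It uses a gamma subordinator, whose Laplace exponent log(1 + λ) appears later in these notes, and follows the convention above that the Brownian motion runs at twice the usual speed, so each Brownian increment has variance 2ΔS per coordinate:

```python
import numpy as np

rng = np.random.default_rng(0)

def subordinate_bm(T=1.0, n=1000, d=2):
    """Sample a path of Y_t = X(S_t) on a grid, where S is a gamma
    subordinator (Laplace exponent log(1 + lambda)) and X is a
    d-dimensional Brownian motion with exponent |xi|^2, i.e. each
    coordinate of X_t has variance 2t."""
    dt = T / n
    # Gamma subordinator: independent Gamma(shape=dt, scale=1) increments.
    dS = rng.gamma(shape=dt, scale=1.0, size=n)
    S = np.concatenate(([0.0], np.cumsum(dS)))
    # Brownian increments evaluated at the random times: variance 2*dS.
    dY = rng.normal(scale=np.sqrt(2.0 * dS)[:, None], size=(n, d))
    Y = np.concatenate((np.zeros((1, d)), np.cumsum(dY, axis=0)))
    return S, Y

S, Y = subordinate_bm()
assert np.all(np.diff(S) >= 0)  # subordinator paths are increasing
print(S.shape, Y.shape)
```

Plotting Y against time shows the characteristic behavior: the path of Y jumps exactly where the subordinator S jumps.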
The material covered in these lecture notes is based on several recent papers, primarily [56], [61], [69], [67], [70] and [41]. Here is an outline of the lecture notes.
In Section 2 we give a brief introduction to Levy processes, subordinators, subordinate Brownian motions and regular variation.
In Section 3 we recall some basic facts about subordinators and give a list of exam-
ples that will be useful later on. This list contains stable subordinators, relativistic stable
subordinators, subordinators which are a sum of stable subordinators and a drift, gamma
subordinators, geometric stable subordinators, iterated geometric stable subordinators and
Bessel subordinators. All of these subordinators belong to the class of special subordina-
tors (even complete Bernstein subordinators). Special subordinators are important to our
approach because they are precisely the ones whose potential measure restricted to (0,∞)
has a decreasing density u. In fact, for all of the listed subordinators the potential measure
has a decreasing density u. In the last part of the section we study asymptotic behaviors of
the potential density u and the Levy density of subordinators by use of Karamata’s and de
Haan’s Tauberian and monotone density theorems.
In Section 4 we derive asymptotic properties of the Green function and the jumping
function of subordinate Brownian motion. These results follow from the technical Lemma
4.3 upon checking its conditions for particular subordinators. Of special interest is the order
of singularities of the Green function near zero, starting from the Newtonian kernel at the
one end, and singularities on the brink of integrability on the other end obtained for iterated
geometric stable subordinators. The results for asymptotic behavior of the jumping function
are less complete, but are complemented by results on decay at zero and at infinity. Finally, we
discuss transition densities for symmetric geometric stable processes which exhibit unusual
behavior on the diagonal for small (as well as large) times.
The original motivation for deriving the results in Sections 3 and 4 was an attempt to
obtain the Harnack inequality for subordinate Brownian motions with subordinators whose
Laplace exponent φ(λ) has the asymptotic behavior at infinity of one of the following two
forms: (i) φ(λ) ∼ λ, or (ii) logarithmic behavior at ∞. A typical example of the first case is
the process Y which is a sum of Brownian motion and an independent rotationally invariant
α-stable process. This situation was studied in [56]. A typical example of the second case is a
geometric stable process, that is, Brownian motion subordinated by a geometric stable subordinator.
In this case, φ(λ) ∼ log λ, λ →∞. This was studied in [61]. Section 5 contains an exposition
of these results and some generalizations, and is partially based on the general approach to
Harnack inequality from [66]. After obtaining some potential-theoretic results for a class of
radial Levy processes, we derive Krylov-Safonov-type estimates for the hitting probabilities
involving capacities. Similar estimates involving Lebesgue measure were obtained in [66]
based on the work of Bass and Levin [4]. These estimates are crucial in proving two types
of Harnack inequalities for small balls - scale invariant ones, and the weak ones in which
the constant depends on the radius of the ball. In fact, we give a full proof of the Harnack
inequality only for iterated geometric stable processes, and refer the reader to the original
papers for the other cases.
In Section 6 we establish a boundary Harnack inequality for a large class of subordinate
Brownian motions. The results of this section are taken from [41].
Finally, in Section 7 we replace the underlying Brownian motion by the Brownian motion
killed upon exiting a Lipschitz domain D. The resulting process is denoted by X^D. We are interested in the potential theory of the process Y^D_t = X^D(S_t), where S is a special subordinator with infinite Levy measure or positive drift. Such questions were first studied for stable subordinators in [36], and the final solution in this case was given in [35]. The general case of special subordinators appeared in [69]. Surprisingly, it turns out that the potential theory of Y^D is in a one-to-one and onto correspondence with the potential theory of X^D. More precisely, there is a bijection (realized by the potential operator of the subordinate process Z^D_t = X^D(T_t), where T is the subordinator conjugate to S) from the cone S(Y^D) of excessive functions of Y^D onto the cone S(X^D) of excessive functions for X^D which preserves nonnegative harmonic functions. This bijection makes it possible to essentially transfer the potential theory of X^D to the potential theory of Y^D. In this way we obtain the Martin kernel and the Martin representation for Y^D, which immediately leads to a proof of the boundary Harnack principle for nonnegative harmonic functions of Y^D. In the case of a C^{1,1} domain we obtain sharp bounds for the transition densities of the subordinate process Y^D.
The material covered in these lecture notes by no means includes all that can be said about the potential theory of subordinate Brownian motions. One of the omissions is the Green
function estimates for killed subordinate Brownian motions and the boundary Harnack in-
equality for the positive harmonic functions of subordinate Brownian motions. By using
ideas from [21] or [57] one can easily extend the Green function estimates of [19] and [43]
for killed symmetric stable processes to more general killed subordinate Brownian motions
under certain conditions, and then use these estimates to extend arguments in [14] and [71]
to establish the boundary Harnack inequality for general subordinate Brownian motions un-
der certain conditions. We will not pursue this. Another notable omission is the spectral
theory for such processes together with implications to spectral theory of killed subordinate
Brownian motion. We refer the reader to [22], [23] and [24]. Related to this is the general
discussion on the exact difference between the subordinate killed Brownian motion and the
killed subordinate Brownian motion and consequences thereof. This was discussed in [65]
and [64]. See also [38] and the forthcoming [70].
We end this introduction with a few words on notation. For functions f and g we write f ∼ g if the quotient f/g converges to 1, and f ≍ g if the quotient f/g stays bounded between two positive constants.
2 Introduction to Levy Processes and Regular Variation
2.1 Infinite Divisibility and Levy processes
In this subsection we will give a brief introduction to Levy processes. For more details, one
can refer to the books [7], [46] and [58].
Definition 2.1 An Rd-valued process X = (Xt; t ≥ 0) defined on a probability space (Ω,F ,P)
is called a Levy process if it satisfies the following properties
(i) the sample paths of X are right continuous with left limits;
(ii) P(X0 = 0) = 1;
(iii) for 0 ≤ s ≤ t, Xt −Xs has the same distribution as Xt−s;
(iv) for 0 ≤ s ≤ t, Xt −Xs is independent of (Xu; 0 ≤ u ≤ s).
It is often convenient to use the canonical notation. That is, we can take Ω to be the family of paths ω : [0,∞) → R^d ∪ {∂} with lifetime ζ(ω) = inf{t ≥ 0 : ω(t) = ∂} which are right continuous on [0,∞) and with left limits on (0,∞), and stay at the cemetery point
∂ after the lifetime. We can endow Ω with the Skorohod topology and let F be the Borel
σ-field of Ω.
We then introduce the coordinate process X = (Xt; t ≥ 0), where
Xt = Xt(ω) = ω(t).
Note that by definition P(X0 = 0) = 1. For any x ∈ Rd, we will later write Px for the
probability law of the process (x + Xt; t ≥ 0) under P. Of course, P0 is the same as P.
The most well known examples of Levy processes are Brownian motions and Poisson
processes.
Levy processes are closely related to infinitely divisible random variables.
Definition 2.2 We say that an R^d-valued random variable X is infinitely divisible if for any n ≥ 1, there exist iid random variables Xn,1, . . . , Xn,n such that X has the same distribution as Xn,1 + · · ·+ Xn,n.
From the decomposition
Xt = Xt/n + (X2t/n −Xt/n) + · · ·+ (Xnt/n −X(n−1)t/n),
we can easily see that, for each t, Xt is an infinitely divisible random variable. From the
well-known Levy-Khintchine formula for the characteristic functions of infinitely divisible
random variables and the definition of Levy processes, we can easily get the following result.
Theorem 2.1 If X = (X_t; t ≥ 0) is an R^d-valued Levy process on (Ω, F, P), then
E(exp(iξ ·X_t)) = e^{−tΦ(ξ)}, t ≥ 0, ξ ∈ R^d,
with
Φ(ξ) = ia·ξ + (1/2) ξ′Qξ + ∫_{R^d} (1 − e^{iξ·x} + iξ·x 1_{|x|<1}) Π(dx),
where a is a point in R^d, Q is a nonnegative definite d × d matrix and Π is a measure on R^d \ {0} such that ∫_{R^d} (1 ∧ |x|²) Π(dx) < ∞.
The function Φ is called the Levy exponent or characteristic exponent of X. Q is called
the diffusion coefficient of X. Π is called the Levy measure of X, and it determines the
jumping mechanism of X. a is called the linear coefficient of X. The triplet (a,Q, Π) is
sometimes called the generating triplet of X.
The converse of the theorem above is also true.
Theorem 2.2 Suppose that a ∈ R^d, Q is a nonnegative definite d × d matrix and that Π is a measure on R^d \ {0} such that ∫_{R^d} (1 ∧ |x|²) Π(dx) < ∞. Define a function on R^d by
Φ(ξ) = ia·ξ + (1/2) ξ′Qξ + ∫_{R^d} (1 − e^{iξ·x} + iξ·x 1_{|x|<1}) Π(dx), ξ ∈ R^d.
Then there is an Rd-valued Levy process X = (Xt; t ≥ 0) with Levy exponent Φ.
Here is an outline of the proof of this theorem. This is closely related to the Levy-Ito
decomposition of Levy processes.
Rewrite the Levy exponent Φ as follows
Φ(ξ) = Φ(1)(ξ) + Φ(2)(ξ) + Φ(3)(ξ),
where
Φ^(1)(ξ) = ia·ξ + (1/2) ξ′Qξ,
Φ^(2)(ξ) = ∫_{B(0,1)^c} (1 − e^{iξ·x}) Π(dx) = Π(B(0,1)^c) ∫_{B(0,1)^c} (1 − e^{iξ·x}) Π(dx)/Π(B(0,1)^c),
and
Φ^(3)(ξ) = ∫_{B(0,1)} (1 − e^{iξ·x} + iξ·x) Π(dx).
Then we can construct three independent Levy processes X^(1) = (X^(1)_t; t ≥ 0), X^(2) = (X^(2)_t; t ≥ 0) and X^(3) = (X^(3)_t; t ≥ 0) with Levy exponents Φ^(1), Φ^(2) and Φ^(3) respectively. The process X^(1) is a Brownian motion with drift:
X^(1)_t = √Q B_t − at,
where B = (B_t; t ≥ 0) is a standard d-dimensional Brownian motion. X^(2) is a compound Poisson process defined as follows:
X^(2)_t = Σ_{j=1}^{N_t} η_j,
where (η_j; j ≥ 1) are iid random variables with distribution Π(dx) 1_{|x|≥1}/Π(B(0,1)^c), and N = (N_t; t ≥ 0) is a Poisson process (independent of (η_j; j ≥ 1)) with rate Π(B(0,1)^c). The process X^(3) is more difficult to construct. It is the L²-limit of the processes X^(3,n) defined by
X^(3,n)_t = Σ_{j=1}^{N^(n)_t} η^(n)_j − t ∫_{2^{−n}≤|x|<1} x Π(dx),
where (η^(n)_j; j ≥ 1) are iid random variables with distribution Π(dx) 1_{2^{−n}≤|x|<1}/Π(2^{−n} ≤ |x| < 1), and N^(n) = (N^(n)_t; t ≥ 0) is a Poisson process (independent of (η^(n)_j; j ≥ 1)) with rate Π(2^{−n} ≤ |x| < 1).

Definition 2.3 A Levy process X = (X_t; t ≥ 0) on R^d is said to be recurrent if
lim inf_{t↑∞} |X_t| = 0 a.s.,
and transient if
lim_{t↑∞} |X_t| = ∞ a.s.
An important quantity related to the recurrence and transience of a Levy process is its
potential measure U, defined by
U(B) = ∫_0^∞ P(X_t ∈ B) dt = E ∫_0^∞ 1_B(X_t) dt, B ∈ B(R^d).
The following result is true.
Theorem 2.3 Let X = (Xt; t ≥ 0) be a Levy process on Rd with Levy exponent Φ. Then
(i) it is either recurrent or transient;
(ii) it is recurrent if and only if U(B(0, a)) = ∞ for every a > 0;
(iii) it is recurrent if and only if ∫_0^∞ 1_{B(0,a)}(X_t) dt = ∞ almost surely for every a > 0;
(iv) it is transient if and only if U(B(0, a)) < ∞ for every a > 0;
(v) it is transient if and only if ∫_0^∞ 1_{B(0,a)}(X_t) dt < ∞ almost surely for every a > 0;
(vi) it is recurrent if and only if
lim sup_{q↓0} ∫_{B(0,1)} Re(1/(q + Φ(ξ))) dξ = ∞.
By using the Levy-Ito decomposition and some basic results in Poisson point processes
one can prove the following result.
Proposition 2.4 A real-valued Levy process X with generating triplet (a, Q, Π) is of bounded variation if and only if Q = 0 and ∫ (1 ∧ |x|) Π(dx) < ∞.
The finiteness of the integral ∫ (1 ∧ |x|) Π(dx) allows us to rewrite the Levy exponent of any real-valued Levy process of bounded variation as follows:
Φ(ξ) = −i dξ + ∫_{−∞}^∞ (1 − e^{iξx}) Π(dx),
where
d = −(a + ∫_{−1}^1 x Π(dx)).
d is called the drift of the bounded variation Levy process.
2.2 Subordinators and Bernstein functions
In this subsection we will consider a class of Levy processes called subordinators.
Definition 2.4 A subordinator is a real-valued Levy process which only takes nonnegative
values.
Using this definition and the definition of Levy processes, one can easily see that subordinators must be increasing processes. Using the Levy–Ito decomposition one can show that for a subordinator the diffusion coefficient must be zero, the drift d must be nonnegative, and the Levy measure Π cannot charge (−∞, 0).
We can write the Levy exponent of a subordinator in the following form:
Φ(ξ) = −i dξ + ∫_0^∞ (1 − e^{iξx}) Π(dx).
It is clear that the integral on the right-hand side of the display above converges in the upper half of the complex ξ-plane. So we can define the Laplace exponent of a subordinator X by
φ(λ) = − log E(e^{−λX_1}) = Φ(iλ) = dλ + ∫_0^∞ (1 − e^{−λx}) Π(dx), λ > 0.
If we define the tail of the Levy measure by Π̄(x) = Π((x,∞)), then by using Fubini's theorem one can easily show that
φ(λ)/λ = d + ∫_0^∞ e^{−λx} Π̄(x) dx, λ > 0.
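The tail formula above is easy to verify numerically for a concrete example. The sketch below (an illustration, not from the notes) takes the α-stable subordinator, for which φ(λ) = λ^α, d = 0 and the Levy tail has the standard form Π̄(x) = x^{−α}/Γ(1 − α), and checks φ(λ)/λ = ∫_0^∞ e^{−λx} Π̄(x) dx by trapezoidal integration on a geometric grid:

```python
import math
import numpy as np

alpha, lam = 0.5, 2.0          # stable index and Laplace variable
tail = lambda x: x**(-alpha) / math.gamma(1.0 - alpha)  # Pi-bar(x)

# integrate e^{-lam*x} * tail(x); a geometric grid handles the
# integrable singularity at 0 and the exponential decay at infinity
x = np.geomspace(1e-14, 60.0, 400_000)
y = np.exp(-lam * x) * tail(x)
integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

lhs = lam**alpha / lam         # phi(lam)/lam with phi(lam) = lam^alpha
assert abs(lhs - integral) / lhs < 1e-3   # the two sides agree
```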
The Laplace exponent of a subordinator is closely related to the concept of Bernstein
functions.
Definition 2.5 A C∞ function f : (0,∞) → [0,∞) is called a Bernstein function if
(−1)nDnf ≤ 0 for all n ≥ 1.
It is easy to see that the Laplace exponent φ of any subordinator is a Bernstein function with lim_{t↓0} φ(t) = 0. It can be shown that the converse is also valid. That is, a function φ : (0,∞) → [0,∞) is the Laplace exponent of a subordinator if and only if φ is a Bernstein function with lim_{t↓0} φ(t) = 0.
Sometimes we need to deal with subordinators with a possibly finite lifetime. In order for the Markov property to hold, the lifetime has to be exponentially distributed, say with parameter k. It is easy to see that if X is such a subordinator, then it can be obtained from a subordinator X̃ with infinite lifetime by killing it at an independent exponential time, and the corresponding Laplace exponents are related by
φ(λ) = k + φ̃(λ), λ > 0.
A subordinator with possibly finite lifetime is also called a killed subordinator. Therefore a
function φ : (0,∞) → [0,∞) is the Laplace exponent of a killed subordinator if and only if
φ is a Bernstein function.
A subordinator is a transient Levy process, and so its potential measure
U(B) = ∫_0^∞ P(X_t ∈ B) dt = E ∫_0^∞ 1_B(X_t) dt, B ∈ B(R),
is a Radon measure. It is easy to see that the Laplace transform of the potential measure is given by
LU(λ) = E ∫_0^∞ e^{−λX_t} dt = 1/φ(λ), λ > 0.
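For the α-stable subordinator (φ(λ) = λ^α) the potential measure is known to have density u(t) = t^{α−1}/Γ(α), and the identity LU(λ) = 1/φ(λ) = λ^{−α} can be checked numerically. A small sketch (an illustration, not from the notes), assuming this standard closed form for u:

```python
import math
import numpy as np

alpha, lam = 0.7, 3.0
u = lambda t: t**(alpha - 1.0) / math.gamma(alpha)  # potential density

# trapezoidal rule on a geometric grid; the singularity of u at 0
# is integrable and e^{-lam*t} kills the tail
t = np.geomspace(1e-14, 60.0, 400_000)
y = np.exp(-lam * t) * u(t)
LU = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

assert abs(LU - lam**(-alpha)) / lam**(-alpha) < 1e-3  # LU = 1/phi
```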
2.3 Subordinate Brownian motions
Let S = (S_t; t ≥ 0) be a subordinator with Laplace exponent given by
φ(λ) = d + ∫_0^∞ (1 − e^{−λt}) µ(dt), λ > 0,
where d ≥ 0 is the drift of the subordinator and µ is the Levy measure of the subordinator.
We will use ηt(·) to denote the measure P(St ∈ ·). Let X = (Xt; t ≥ 0) be a Levy process on
Rd with generating triplet (a,Q, Π) and Levy exponent Φ. We will use ρt(dx) to denote the
measure P(Xt ∈ dx). Suppose that X and S are independent. We can define a new process
Y = (Yt; t ≥ 0) on Rd by
Yt = XSt , t ≥ 0.
Then we have the following result.
Theorem 2.5 The process Y = (Y_t; t ≥ 0) is a Levy process on R^d with Levy exponent given by φ(Φ(ξ)). Its generating triplet (a^♯, Q^♯, Π^♯) is given by
a^♯ = ad + ∫_0^∞ µ(ds) ∫_{B(0,1)} x ρ_s(dx),
Q^♯ = dQ,
and
Π^♯(B) = d Π(B) + ∫_0^∞ µ(ds) ρ_s(B), B ∈ B(R^d).
Furthermore, for any t > 0, we have
P(Y_t ∈ B) = ∫_{[0,∞)} ρ_s(B) η_t(ds), B ∈ B(R^d).
Similarly we have the following
Theorem 2.6 Suppose that S = (S_t; t ≥ 0) and T = (T_t; t ≥ 0) are independent subordinators with Laplace exponents φ and ψ respectively. Then the process X = (X_t; t ≥ 0) defined by
X_t = S_{T_t}, t ≥ 0,
is a subordinator with Laplace exponent ψ(φ(λ)).
In these lectures we will be mostly interested in subordinate Brownian motions. That is, S = (S_t; t ≥ 0) is a subordinator with Laplace exponent given by
φ(λ) = d + ∫_0^∞ (1 − e^{−λt}) µ(dt), λ > 0,
and B = (B_t; t ≥ 0) is a d-dimensional Brownian motion with Levy exponent |ξ|², independent of S. The process X = (X_t; t ≥ 0) defined by
X_t = B_{S_t}, t ≥ 0,
is called a subordinate Brownian motion. The Levy exponent of X is given by φ(|ξ|²). The generating triplet of X is given by (0, dI, J(x)dx), where I is the d × d identity matrix and
J(x) = ∫_0^∞ (4πs)^{−d/2} exp(−|x|²/(4s)) µ(ds), x ∈ R^d.
For any t > 0, the distribution of X_t has a density function given by
∫_{[0,∞)} (4πs)^{−d/2} exp(−|x|²/(4s)) η_t(ds), x ∈ R^d,
where η_t is the distribution of S_t.
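This mixture representation has a clean closed form in at least one case: for the gamma subordinator at t = 1 (so η₁ is the exponential distribution) and d = 1, the mixture of Gaussians above is the two-sided exponential (Laplace) density ½ e^{−|x|}, a standard fact for the variance-gamma family. A Monte Carlo sanity check (an illustration, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# d = 1, t = 1, gamma subordinator: S_1 ~ Gamma(1,1) = Exp(1).
S1 = rng.exponential(1.0, size=n)
# B has Levy exponent |xi|^2, i.e. B_t = sqrt(2t) * N(0,1).
X1 = np.sqrt(2.0 * S1) * rng.normal(size=n)

# The mixture over eta_1(ds) = e^{-s} ds is the Laplace density
# (1/2) e^{-|x|}: check E|X_1| = 1 and P(X_1 > 1) = e^{-1}/2.
assert abs(np.mean(np.abs(X1)) - 1.0) < 0.02
assert abs(np.mean(X1 > 1.0) - 0.5 * np.exp(-1)) < 0.01
```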
2.4 Regular variation
In these lectures we are going to use some basic results on regular variation. In this subsection we will give a brief introduction to the theory of regular variation. A standard reference is the book [10].
Definition 2.6 A function ℓ : (0,∞) → (0,∞) is said to be slowly varying at 0+ (respectively, at ∞) if, for every λ > 0, ℓ(λx)/ℓ(x) → 1 as x tends to 0+ (respectively, to ∞).
Of course, positive constant functions are slowly varying. The simplest non-trivial ex-
amples of slowly varying functions at ∞ are positive functions which are equal to log x,
log log x, etc, near ∞. Non-trivial slowly varying functions at 0+ are positive functions
which are equal to log(1/x), log log(1/x), etc, near 0+. Of course, there are plenty of ex-
amples of slowly varying functions which are not logarithmic functions. For instance, the
following two functions are slowly varying at ∞:
`(x) = exp((log x)α), α ∈ (0, 1),
and
`(x) = exp((log x)/ log log x).
We have the following local uniformity for slowly varying functions.
Theorem 2.7 If ` is slowly varying at 0+ (respectively, at ∞), then lim(`(λx)/`(x)) = 1
as x tends to 0+ (respectively, to ∞) uniformly on each compact λ-set in (0,∞).
Definition 2.7 A function f : (0,∞) → (0,∞) is said to be regularly varying at 0+ (re-
spectively, at ∞) if for every λ > 0 the ratio f(λx)/f(x) converges to a number in (0,∞) as
x tends to 0+ (respectively, to ∞).
Theorem 2.8 If f is regularly varying at 0+ (respectively, at ∞), then there exists a real number ρ, called the index, such that
lim f(λx)/f(x) = λ^ρ, as x → 0+ (resp. x → ∞),
for every λ > 0. Moreover, ℓ(x) = f(x) x^{−ρ} is slowly varying at 0+ (respectively, at ∞).
The following theorems will be very important in these lectures.
Theorem 2.9 (Karamata's Tauberian theorem) Let U : (0,∞) → (0,∞) be an increasing function, let ℓ be slowly varying at ∞ (resp. at 0+), and let ρ ≥ 0. Then the following are equivalent:
(i) As t → ∞ (resp. t → 0+), U(t) ∼ t^ρ ℓ(t)/Γ(1 + ρ).
(ii) As λ → 0 (resp. λ → ∞), LU(λ) ∼ λ^{−ρ} ℓ(1/λ).
Theorem 2.10 (Karamata's monotone density theorem) Let U : (0,∞) → (0,∞) be an increasing function with U(dx) = u(x) dx, where u is monotone and nonnegative. If ℓ is slowly varying at ∞ (resp. at 0+) and ρ > 0, then (i) and (ii) in the theorem above are equivalent to:
(iii) As t → ∞ (resp. t → 0+), u(t) ∼ ρ t^{ρ−1} ℓ(t)/Γ(1 + ρ).
Note that Karamata’s monotone density theorem only applies when the index ρ is strictly
positive. Sometimes we need the following more refined versions of both Tauberian and
monotone density theorems. The results are also taken from [10].
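The Tauberian theorem can be watched converging numerically. The sketch below (an illustration, not from the notes) takes ρ = 1/2 and the slowly varying function ℓ(t) = log(e + t), builds U through the density u(t) = ρ t^{ρ−1} ℓ(t)/Γ(1 + ρ) as in the monotone density theorem, and checks that LU(λ)/(λ^{−ρ} ℓ(1/λ)) approaches 1 as λ → 0 (slowly, since the correction term is of order 1/log(1/λ)):

```python
import math
import numpy as np

rho = 0.5
ell = lambda t: np.log(np.e + t)            # slowly varying at infinity
u = lambda t: rho * t**(rho - 1.0) * ell(t) / math.gamma(1.0 + rho)

def LU(lam):
    # Laplace transform of u by trapezoidal rule on a geometric grid
    t = np.geomspace(1e-12, 60.0 / lam, 500_000)
    y = np.exp(-lam * t) * u(t)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

ratios = [LU(lam) / (lam**(-rho) * ell(1.0 / lam)) for lam in (1e-2, 1e-5)]
# the ratio tends to 1 as lambda -> 0
assert abs(ratios[1] - 1.0) < abs(ratios[0] - 1.0)
```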
Theorem 2.11 (de Haan's Tauberian theorem) Let U : (0,∞) → (0,∞) be an increasing function, let ℓ be slowly varying at ∞ (resp. at 0+), and let c ≥ 0. Then the following are equivalent:
(i) As t → ∞ (resp. t → 0+), (U(λt) − U(t))/ℓ(t) → c log λ for all λ > 0.
(ii) As t → ∞ (resp. t → 0+), (LU(1/(λt)) − LU(1/t))/ℓ(t) → c log λ for all λ > 0.
Theorem 2.12 (de Haan's monotone density theorem) Let U : (0,∞) → (0,∞) be an increasing function with U(dx) = u(x) dx, where u is monotone and nonnegative, and let ℓ be slowly varying at ∞ (resp. at 0+). If c > 0, then (i) and (ii) in the theorem above are equivalent to:
(iii) As t → ∞ (resp. t → 0+), u(t) ∼ c t^{−1} ℓ(t).
The following result will be useful in Section 6.
Theorem 2.13 (Potter's theorem) If ℓ is slowly varying at ∞, then for any A > 1 and δ > 0 there exists C > 0 such that
ℓ(y)/ℓ(x) ≤ A max((y/x)^δ, (x/y)^δ), x, y ≥ C.
There is also a version of this result for slowly varying functions at 0+.
The following result is about the asymptotic behavior of integrals of regularly varying
functions and it will be very useful in Section 6.
Theorem 2.14 Let ℓ be slowly varying at ∞ and locally bounded on [C,∞) for some positive constant C. Then
(i) for any σ ≥ −1,
x^{σ+1} ℓ(x) / ∫_C^x t^σ ℓ(t) dt → σ + 1 as x → ∞;
(ii) for any σ < −1,
x^{σ+1} ℓ(x) / ∫_x^∞ t^σ ℓ(t) dt → −(σ + 1) as x → ∞.
Of course, there is also a version of this result for regularly varying functions at 0+.
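Theorem 2.14(i) is also easy to observe numerically. A sketch (an illustration, not from the notes) with σ = 1 and ℓ = log, where the ratio should tend to σ + 1 = 2:

```python
import numpy as np

sigma = 1.0
ell = np.log          # slowly varying at infinity
C = np.e              # lower limit of integration

def ratio(x):
    # numerically integrate t^sigma * ell(t) from C to x
    t = np.geomspace(C, x, 400_000)
    y = t**sigma * ell(t)
    integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
    return x**(sigma + 1.0) * ell(x) / integral

r1, r2 = ratio(1e6), ratio(1e12)
# by Theorem 2.14(i) the ratio tends to sigma + 1 = 2 as x grows
assert abs(r2 - 2.0) < abs(r1 - 2.0)
assert abs(r2 - 2.0) < 0.1
```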
3 Subordinators
3.1 Special subordinators and complete Bernstein functions
Let S = (St : t ≥ 0) be a subordinator, that is, an increasing Levy process taking values
in [0,∞] with S0 = 0. We remark that our subordinators are what some authors call killed
subordinators. The Laplace transform of the law of St is given by the formula
E[exp(−λSt)] = exp(−tφ(λ)) , λ > 0. (3.1)
The function φ : (0,∞) → R is called the Laplace exponent of S, and it can be written in the form
φ(λ) = a + bλ + ∫_0^∞ (1 − e^{−λt}) µ(dt). (3.2)
Here a, b ≥ 0, and µ is a σ-finite measure on (0,∞) satisfying
∫_0^∞ (t ∧ 1) µ(dt) < ∞. (3.3)
The constant a is called the killing rate, b the drift, and µ the Levy measure of the subordinator S. By using condition (3.3) above one can easily check that
lim_{t→0} t µ(t,∞) = 0, (3.4)
∫_0^1 µ(t,∞) dt < ∞. (3.5)
For t ≥ 0, let ηt be the distribution of St. To be more precise, for a Borel set A ⊂ [0,∞),
ηt(A) = P(St ∈ A). The family of measures (ηt : t ≥ 0) form a convolution semigroup of
measures on [0,∞). Clearly, the formula (3.1) reads exp(−tφ(λ)) = Lηt(λ), the Laplace
transform of the measure ηt. We refer the reader to [7] for much more detailed exposition
on subordinators.
Recall that a C∞ function φ : (0,∞) → [0,∞) is called a Bernstein function if (−1)^n D^n φ ≤ 0 for every n ∈ N. It is well known (see, e.g., [6]) that a function φ : (0,∞) → R is a Bernstein function if and only if it has the representation given by (3.2).
We now introduce the concepts of special Bernstein functions and special subordinators.
Definition 3.1 A Bernstein function φ is called a special Bernstein function if ψ(λ) :=
λ/φ(λ) is also a Bernstein function. A subordinator S is called a special subordinator if its
Laplace exponent is a special Bernstein function.
We will call ψ the Bernstein function conjugate to φ.
Special subordinators occur naturally in various situations. For instance, they appear as
the ladder time process for a Levy process which is not a compound Poisson process, see
page 166 of [7]. Yet another situation in which they appear naturally is in connection with
the exponential functional of subordinators (see [9]).
The most common examples of special Bernstein functions are complete Bernstein functions, also called operator monotone functions in some of the literature. A function φ : (0,∞) → R is called a complete Bernstein function if there exists a Bernstein function η such that
φ(λ) = λ² Lη(λ), λ > 0,
where L stands for the Laplace transform of the measure η: Lη(λ) = ∫_0^∞ e^{−λt} η(dt). It is known (see, for instance, Remark 3.9.28 and Theorem 3.9.29 of [39]) that every complete Bernstein function is a Bernstein function and that the following three conditions are equivalent:
(i) φ is a complete Bernstein function;
(ii) ψ(λ) := λ/φ(λ) is a complete Bernstein function;
(iii) φ is a Bernstein function whose Levy measure µ is given by
µ(dt) = (∫_0^∞ e^{−st} γ(ds)) dt,
where γ is a measure on (0,∞) satisfying
∫_0^1 (1/s) γ(ds) + ∫_1^∞ (1/s²) γ(ds) < ∞.
The equivalence of (i) and (ii) says that every complete Bernstein function is a special
Bernstein function. Note also that it follows from the condition (iii) above that being a
complete Bernstein function only depends on the Levy measure and that the Levy measure
µ(dt) of any complete Bernstein function has a completely monotone density. We also note
that the tail t → µ(t,∞) of the Levy measure µ is a completely monotone function. Indeed,
by Fubini's theorem,
µ(x,∞) = ∫_x^∞ ∫_0^∞ e^{−st} γ(ds) dt = ∫_0^∞ e^{−xs} γ(ds)/s.
A similar argument shows that the converse is also true, namely, if the tail of the Levy
measure µ is a completely monotone function, then µ has a completely monotone density.
The density of the Levy measure with respect to the Lebesgue measure (when it exists) will
be called the Levy density.
The family of all complete Bernstein functions is a closed convex cone containing the positive constants. The following properties of complete Bernstein functions are well known; see, for instance, [51]: (i) if φ is a nonzero complete Bernstein function, then so are φ(λ^{−1})^{−1} and λφ(λ^{−1}); (ii) if φ₁ and φ₂ are nonzero complete Bernstein functions and β ∈ (0, 1), then φ₁^β(λ) φ₂^{1−β}(λ) is also a complete Bernstein function; (iii) if φ₁ and φ₂ are nonzero complete Bernstein functions and β ∈ (−1, 0) ∪ (0, 1), then (φ₁^β(λ) + φ₂^β(λ))^{1/β} is also a complete Bernstein function.
Most of the familiar Bernstein functions are complete Bernstein functions. The following are some examples of complete Bernstein functions ([39]): (i) λ^α, α ∈ (0, 1]; (ii) (λ + 1)^α − 1, α ∈ (0, 1); (iii) log(1 + λ); (iv) λ/(λ + 1). The first family corresponds to α-stable subordinators (0 < α < 1) and a pure drift (α = 1), the second family corresponds to relativistic α-stable subordinators, the third Bernstein function corresponds to the gamma subordinator, and the fourth corresponds to the compound Poisson process with rate 1 and exponential jumps. An example of a Bernstein function which is not a complete Bernstein function is 1 − e^{−λ}. One can check that 1 − e^{−λ} is not even a special Bernstein function.
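The claim that 1 − e^{−λ} is not special can be seen numerically: every Bernstein function is concave (its second derivative is ≤ 0), so if ψ(λ) = λ/(1 − e^{−λ}) were a Bernstein function, all its centered second differences would be ≤ 0. A small check (an illustration, not from the notes) shows they are strictly positive:

```python
import math

# psi(lam) = lam / phi(lam) for phi(lam) = 1 - exp(-lam)
psi = lambda lam: lam / (1.0 - math.exp(-lam))

# A Bernstein function is concave, so every centered second
# difference must be <= 0; for psi it is strictly positive,
# hence 1 - e^{-lam} is not a special Bernstein function.
h = 1.0
second_diff = psi(2.0 + h) - 2.0 * psi(2.0) + psi(2.0 - h)
assert second_diff > 0.0
```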
The potential measure of the subordinator S is defined by
U(A) = E ∫_0^∞ 1_{(S_t ∈ A)} dt = ∫_0^∞ η_t(A) dt, A ⊂ [0,∞). (3.6)
Note that U(A) is the expected time the subordinator S spends in the set A. The Laplace
transform of the measure U is given by
LU(λ) = ∫_0^∞ e^{−λt} dU(t) = E ∫_0^∞ exp(−λS_t) dt = 1/φ(λ). (3.7)
We are going to derive a characterization of special subordinators in terms of their po-
tential measures. Roughly, a subordinator S is special if and only if its potential measure
U restricted to (0,∞) has a decreasing density. To be more precise, let S be a special
subordinator with the Laplace exponent φ given by
φ(λ) = a + bλ + ∫_0^∞ (1 − e^{−λt}) µ(dt).
Then
lim_{λ→0} λ/φ(λ) = 0 if a > 0, and lim_{λ→0} λ/φ(λ) = 1/(b + ∫_0^∞ t µ(dt)) if a = 0;
lim_{λ→∞} 1/φ(λ) = 0 if b > 0 or µ(0,∞) = ∞, and lim_{λ→∞} 1/φ(λ) = 1/(a + µ(0,∞)) if b = 0 and µ(0,∞) < ∞.
Since λ/φ(λ) is a Bernstein function, we must have
λ/φ(λ) = ã + b̃λ + ∫_0^∞ (1 − e^{−λt}) ν(dt) (3.8)
for some Levy measure ν, with
ã = 0 if a > 0, and ã = 1/(b + ∫_0^∞ t µ(dt)) if a = 0; (3.9)
b̃ = 0 if b > 0 or µ(0,∞) = ∞, and b̃ = 1/(a + µ(0,∞)) if b = 0 and µ(0,∞) < ∞. (3.10)
Equivalently,
1/φ(λ) = b̃ + ∫_0^∞ e^{−λt} Π̄(t) dt (3.11)
with
Π̄(t) = ã + ν(t,∞), t > 0.
Let τ(dt) := b̃ ε₀(dt) + Π̄(t) dt. Then the right-hand side in (3.11) is the Laplace transform of the measure τ. Since 1/φ(λ) = LU(λ), the Laplace transform of the potential measure U of S, we have LU(λ) = Lτ(λ). Therefore,
U(dt) = b̃ ε₀(dt) + u(t) dt,
with a decreasing function u(t) = Π̄(t).
Conversely, suppose that S is a subordinator with potential measure given by
U(dt) = cε0(dt) + u(t) dt ,
for some c ≥ 0 and some decreasing function u : (0,∞) → (0,∞) satisfying ∫_0^1 u(t) dt < ∞. Then
1/φ(λ) = LU(λ) = c + ∫_0^∞ e^{−λt} u(t) dt.
It follows that
λ/φ(λ) = cλ + ∫_0^∞ u(t) d(1 − e^{−λt})
= cλ + u(t)(1 − e^{−λt})|_0^∞ − ∫_0^∞ (1 − e^{−λt}) u(dt)
= cλ + u(∞) + ∫_0^∞ (1 − e^{−λt}) γ(dt), (3.12)
with γ(dt) = −u(dt). In the last equality we used that lim_{t→0} u(t)(1 − e^{−λt}) = 0, which is a consequence of the assumption ∫_0^1 u(t) dt < ∞. It is easy to check, by using the same integrability condition on u, that ∫_0^∞ (1 ∧ t) γ(dt) < ∞, so that γ is a Levy measure. Therefore, λ/φ(λ) is a Bernstein function, implying that S is a special subordinator.
In this way we have proved the following
Theorem 3.1 Let S be a subordinator with the potential measure U . Then S is special if
and only if
U(dt) = c ε₀(dt) + u(t) dt
for some c ≥ 0 and some decreasing function u : (0,∞) → (0,∞) satisfying ∫_0^1 u(t) dt < ∞.
Remark 3.2 The above result appeared in [8] as Corollaries 1 and 2 and was possibly known
even before. The above presentation is taken from [69]. In case c = 0, we will call u the
potential density of the subordinator S (or the Laplace exponent φ).
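Theorem 3.1 can be sanity-checked on the simplest special subordinator, pure drift with killing: φ(λ) = a + bλ. Here λ/φ(λ) is clearly a Bernstein function, and one computes directly that U has density u(t) = (1/b) e^{−(a/b)t}, which is decreasing and integrable near 0. The sketch below (an illustration, not from the notes) confirms LU = 1/φ numerically:

```python
import numpy as np

a, b = 0.5, 2.0                      # killing rate and drift
phi = lambda lam: a + b * lam        # a special Bernstein function
u = lambda t: np.exp(-(a / b) * t) / b   # candidate potential density

t = np.geomspace(1e-10, 200.0, 300_000)
for lam in (0.5, 1.0, 4.0):
    # trapezoidal rule for the Laplace transform of u
    y = np.exp(-lam * t) * u(t)
    LU = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
    assert abs(LU - 1.0 / phi(lam)) < 1e-4   # LU = 1/phi as claimed
```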
Corollary 3.3 Let S be a subordinator with Laplace exponent φ and potential measure U. Then φ is a complete Bernstein function if and only if U restricted to (0,∞) has a completely monotone density u.
Proof. Note that from the proof of Theorem 3.1 we have the explicit form of the density u: u(t) = Π(t), where Π(t) = ã + ν(t,∞). Here ν is the Levy measure of λ/φ(λ). If φ is complete Bernstein, then λ/φ(λ) is complete Bernstein, and hence it follows from property (iii) of complete Bernstein functions that u(t) = ã + ν(t,∞) is a completely monotone function. Conversely, if u is completely monotone, then clearly the tail t ↦ ν(t,∞) is completely monotone, which implies that λ/φ(λ) is complete Bernstein. Therefore, φ is also a complete Bernstein function. □
Note that by comparing expressions (3.8) and (3.12) for λ/φ(λ), and by using formulae (3.9) and (3.10), it immediately follows that
\[
c = \tilde b = \begin{cases} 0, & b>0 \text{ or } \mu(0,\infty)=\infty,\\[4pt] \dfrac{1}{a+\mu(0,\infty)}, & b=0 \text{ and } \mu(0,\infty)<\infty, \end{cases}
\]
\[
u(\infty) = \tilde a = \begin{cases} 0, & a>0,\\[4pt] \dfrac{1}{b+\int_0^\infty t\,\mu(dt)}, & a=0, \end{cases}
\]
\[
u(t) = \tilde a + \nu(t,\infty).
\]
In particular, it cannot happen that both a and ã are positive, and similarly, that both b and b̃ are positive. Moreover, it is clear from the definition of b̃ that b̃ > 0 if and only if b = 0 and μ(0,∞) < ∞.
We record now some consequences of Theorem 3.1 and the formulae above.
Corollary 3.4 Suppose that S = (S_t : t ≥ 0) is a subordinator whose Laplace exponent
\[
\phi(\lambda) = a + b\lambda + \int_0^\infty \big(1-e^{-\lambda t}\big)\,\mu(dt)
\]
is a special Bernstein function with b > 0 or μ(0,∞) = ∞. Then the potential measure U of S has a decreasing density u satisfying
\[
\lim_{t\to 0}\, t\,u(t) = 0, \tag{3.13}
\]
\[
\lim_{t\to 0}\,\int_0^t s\,du(s) = 0. \tag{3.14}
\]
Proof. The formulae follow immediately from u(t) = ã + ν(t,∞) and (3.4), (3.5) applied to ν. □
Corollary 3.5 Suppose that S = (S_t : t ≥ 0) is a special subordinator with Laplace exponent given by
\[
\phi(\lambda) = a + \int_0^\infty \big(1-e^{-\lambda t}\big)\,\mu(dt),
\]
where μ satisfies μ(0,∞) = ∞. Then
\[
\psi(\lambda) := \frac{\lambda}{\phi(\lambda)} = \tilde a + \int_0^\infty \big(1-e^{-\lambda t}\big)\,\nu(dt), \tag{3.15}
\]
where the Levy measure ν satisfies ν(0,∞) = ∞.
Let T be the subordinator with Laplace exponent ψ. If u and v denote the potential densities of S and T, respectively, then
\[
v(t) = a + \mu(t,\infty). \tag{3.16}
\]
In particular, a = v(∞) and ã = u(∞). Moreover, a and ã cannot both be positive.
Assume that φ is a special Bernstein function with the representation (3.2), where b > 0 or μ(0,∞) = ∞. Let S be a subordinator with Laplace exponent φ, and let U denote its potential measure. By Corollary 3.4, U has a decreasing density u : (0,∞) → (0,∞). Let T be a subordinator with Laplace exponent ψ(λ) = λ/φ(λ) and let V denote its potential measure. Then V(dt) = b ε₀(dt) + v(t) dt, where v : (0,∞) → (0,∞) is a decreasing function. If b > 0, the potential measure V has an atom at zero, and hence the subordinator T is a compound Poisson process (this can also be seen as follows: since b > 0, we have u(0+) < ∞, and hence ν(0,∞) = u(0+) − ã < ∞). Note that in the case b > 0, the Levy measure μ can be finite. If b = 0, we require that μ(0,∞) = ∞, and then, by Corollary 3.5, ψ(λ) = λ/φ(λ) has the same form as φ, namely b̃ = 0 and ν(0,∞) = ∞. In this case, the subordinators S and T play symmetric roles.
The following result will be crucial for the developments in Section 7 of this paper.
Theorem 3.6 Let φ be a special Bernstein function with representation (3.2) satisfying b > 0 or μ(0,∞) = ∞. Then
\[
b\,u(t) + \int_0^t u(s)\,v(t-s)\,ds = b\,u(t) + \int_0^t v(s)\,u(t-s)\,ds = 1, \qquad t>0. \tag{3.17}
\]
Proof. Since for all λ > 0 we have
\[
\frac{1}{\phi(\lambda)} = \mathcal L u(\lambda), \qquad \frac{\phi(\lambda)}{\lambda} = b + \mathcal L v(\lambda),
\]
after multiplying we get
\[
\frac{1}{\lambda} = b\,\mathcal L u(\lambda) + \mathcal L u(\lambda)\,\mathcal L v(\lambda) = b\,\mathcal L u(\lambda) + \mathcal L(u*v)(\lambda).
\]
Inverting this equality gives
\[
1 = b\,u(t) + \int_0^t u(s)\,v(t-s)\,ds, \qquad t>0. \qquad\Box
\]
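For the stable subordinator of Example 3.8 below (φ(λ) = λ^{α/2}, so b = 0, u(t) = t^{α/2−1}/Γ(α/2) and v(t) = t^{−α/2}/Γ(1−α/2)), identity (3.17) can be verified numerically. A minimal sketch, assuming scipy is available; the values α = 1.2 and t = 3 are arbitrary, and `weight='alg'` lets `quad` absorb the algebraic endpoint singularities:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, t = 1.2, 3.0   # arbitrary test values
a = alpha / 2
# (3.17) with b = 0:  int_0^t u(s) v(t-s) ds = 1, where
# u(s) = s^{a-1}/Gamma(a) and v(s) = s^{-a}/Gamma(1-a).
# weight='alg' integrates f(s) * s^{a-1} * (t-s)^{-a} over (0, t).
val, _ = quad(lambda s: 1.0, 0, t, weight='alg', wvar=(a - 1, -a))
val /= gamma(a) * gamma(1 - a)
assert abs(val - 1.0) < 1e-7
```

The integral reduces to the Beta-function identity B(α/2, 1 − α/2) = Γ(α/2)Γ(1 − α/2), which is exactly why the left-hand side of (3.17) is constant in t here.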
Theorem 3.6 has an amusing consequence related to the first passage of the subordinator S. Let σ_t = inf{s > 0 : S_s > t} be the first passage time across the level t > 0. By the first passage formula (see, e.g., [7], p. 76), we have
\[
\mathbb P\big(S_{\sigma_t-}\in ds,\ S_{\sigma_t}\in dx\big) = u(s)\,\mu(x-s)\,ds\,dx,
\]
for 0 ≤ s ≤ t and x > t. Since μ(x,∞) = v(x), by use of Fubini's theorem this implies
\[
\mathbb P(S_{\sigma_t}>t) = \int_t^\infty\!\!\int_0^t u(s)\,\mu(x-s)\,ds\,dx = \int_0^t u(s)\int_t^\infty \mu(x-s)\,dx\,ds = \int_0^t u(s)\,\mu(t-s,\infty)\,ds = \int_0^t u(s)\,v(t-s)\,ds.
\]
Since P(S_{σ_t} ≥ t) = 1, by comparing with (3.17) we see that P(S_{σ_t} = t) = b u(t). This provides, in the case of special subordinators, a simple proof of the well-known fact valid for general subordinators (see [7], pp. 77-79).
In the sequel we will also need the following result about potential densities, valid for subordinators that are not necessarily special.
Proposition 3.7 Let S = (St : t ≥ 0) be a subordinator with positive drift b > 0. Then
its potential measure U has a density u continuous on (0,∞) satisfying u(0+) = 1/b and
u(t) ≤ u(0+) for every t > 0.
Proof. For the proof of the existence of a continuous u and the fact that u(0+) = 1/b see, e.g., [7], p. 79. That u(t) ≤ u(0+) for every t > 0 follows from the subadditivity of the function t ↦ U([0,t]) (see, e.g., [56]). □
3.2 Examples of subordinators
In this subsection we give a list of subordinators that will be relevant in the sequel and
describe some of their properties.
Example 3.8 (Stable subordinators) Our first example covers the family of well-known stable subordinators. For 0 < α < 2, let φ(λ) = λ^{α/2}. By integration,
\[
\lambda^{\alpha/2} = \frac{\alpha/2}{\Gamma(1-\alpha/2)}\int_0^\infty \big(1-e^{-\lambda t}\big)\,t^{-1-\alpha/2}\,dt,
\]
i.e., the Levy measure μ(dt) of φ has a density given by \(\frac{\alpha/2}{\Gamma(1-\alpha/2)}\,t^{-1-\alpha/2}\). Since
\[
t^{-1-\alpha/2} = \int_0^\infty e^{-ts}\,\frac{s^{\alpha/2}}{\Gamma(1+\alpha/2)}\,ds,
\]
it follows that φ is a complete Bernstein function. The tail of the Levy measure μ is equal to
\[
\mu(t,\infty) = \frac{t^{-\alpha/2}}{\Gamma(1-\alpha/2)}.
\]
The conjugate Bernstein function is ψ(λ) = λ^{1−α/2}; hence its tail is ν(t,∞) = t^{α/2−1}/Γ(α/2). This shows that the potential density of φ(λ) = λ^{α/2} is equal to
\[
u(t) = \frac{t^{\alpha/2-1}}{\Gamma(\alpha/2)}.
\]
The subordinator S corresponding to φ is called an α/2-stable subordinator.
It is known that the distribution η₁(ds) of the α/2-stable subordinator at time 1 has a density η₁(s) with respect to the Lebesgue measure. Moreover, by [62],
\[
\eta_1(s) \sim \frac{1}{\pi}\,\Gamma\Big(1+\frac{\alpha}{2}\Big)\sin\Big(\frac{\alpha\pi}{2}\Big)\,s^{-1-\alpha/2}, \qquad s\to\infty, \tag{3.18}
\]
and
\[
\eta_1(s) \le c\,\big(1\wedge s^{-1-\alpha/2}\big), \qquad s>0, \tag{3.19}
\]
for some constant c > 0.
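For α = 1 the density η₁ is explicit: η₁(s) = s^{−3/2}e^{−1/(4s)}/(2√π), the Lévy distribution normalized so that its Laplace transform equals e^{−√λ} = e^{−φ(λ)}. A quick numerical sanity check, with λ = 2 chosen arbitrarily:

```python
import numpy as np
from scipy.integrate import quad

# eta_1 for the 1/2-stable subordinator (alpha = 1); Laplace transform e^{-sqrt(lam)}
eta1 = lambda s: s**-1.5 * np.exp(-1.0/(4.0*s)) / (2.0*np.sqrt(np.pi))
lam = 2.0
val, _ = quad(lambda s: np.exp(-lam*s) * eta1(s), 0, np.inf)
assert abs(val - np.exp(-np.sqrt(lam))) < 1e-6
```

The same density also illustrates the tail bound (3.19): the exponential factor kills the singularity at 0, leaving only the s^{−3/2} decay at infinity.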
Example 3.9 (Relativistic stable subordinators) For 0 < α < 2 and m > 0, let φ(λ) = (λ + m^{2/α})^{α/2} − m. By integration,
\[
\big(\lambda+m^{2/\alpha}\big)^{\alpha/2}-m = \frac{\alpha/2}{\Gamma(1-\alpha/2)}\int_0^\infty \big(1-e^{-\lambda t}\big)\,e^{-m^{2/\alpha}t}\,t^{-1-\alpha/2}\,dt,
\]
i.e., the Levy measure μ(dt) of φ has a density given by \(\frac{\alpha/2}{\Gamma(1-\alpha/2)}\,e^{-m^{2/\alpha}t}\,t^{-1-\alpha/2}\). This Bernstein function appeared in [47] in the study of the stability of relativistic matter, and so we call the corresponding subordinator S a relativistic α/2-stable subordinator. Further, by checking tables of Laplace transforms ([30]) we see that
\[
\frac{1}{m+\phi(\lambda)} = \int_0^\infty e^{-\lambda t}\,\frac{1}{\Gamma(\alpha/2)}\,e^{-m^{2/\alpha}t}\,t^{-1+\alpha/2}\,dt,
\]
implying that the potential measure of the subordinator with Laplace exponent m + φ (the subordinator S killed at rate m) has a density given by
\[
u(t) = \frac{1}{\Gamma(\alpha/2)}\,e^{-m^{2/\alpha}t}\,t^{-1+\alpha/2}.
\]
Note that m + φ(λ) = (λ + m^{2/α})^{α/2} is a composition of complete Bernstein functions, hence a complete Bernstein function itself. Therefore, φ is also a complete Bernstein function.
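The Laplace-transform identity behind the displayed density, namely that t ↦ e^{−m^{2/α}t}t^{α/2−1}/Γ(α/2) has Laplace transform (λ + m^{2/α})^{−α/2}, is easy to confirm numerically; a sketch with arbitrary test values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, m, lam = 1.2, 0.7, 1.9       # arbitrary test values
a, c = alpha/2, m**(2/alpha)
# Laplace transform of e^{-c t} t^{a-1}/Gamma(a) at lam equals (lam + c)^{-a}.
# Split at 1; weight='alg' absorbs the t^{a-1} singularity at 0.
head, _ = quad(lambda t: np.exp(-(lam + c)*t), 0, 1, weight='alg', wvar=(a - 1, 0))
tail, _ = quad(lambda t: np.exp(-(lam + c)*t) * t**(a - 1), 1, np.inf)
assert abs((head + tail)/gamma(a) - (lam + c)**(-a)) < 1e-7
```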
Example 3.10 (Gamma subordinator) Let φ(λ) = log(1 + λ). By use of Frullani's integral it follows that
\[
\log(1+\lambda) = \int_0^\infty \big(1-e^{-\lambda t}\big)\,\frac{e^{-t}}{t}\,dt,
\]
i.e., the Levy measure of φ has a density given by e^{−t}/t. Note that
\[
\frac{e^{-t}}{t} = \int_0^\infty e^{-st}\,\mathbf 1_{(1,\infty)}(s)\,ds,
\]
implying that the density of the Levy measure μ is completely monotone. Therefore, φ is a complete Bernstein function. The corresponding subordinator S is called a gamma subordinator. The explicit form of the potential density u is not known. In the next section we will derive the asymptotic behavior of u at 0 and at +∞. On the other hand, the distribution η_t(ds), t > 0, is well known and given by
\[
\eta_t(ds) = \frac{1}{\Gamma(t)}\,s^{t-1}e^{-s}\,ds, \qquad s>0. \tag{3.20}
\]
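Frullani's integral above is cheap to verify numerically; a small sketch with an arbitrary λ (using `expm1` for accuracy near 0, where the integrand tends to λ):

```python
import numpy as np
from scipy.integrate import quad

lam = 3.7   # arbitrary
# Frullani: log(1 + lam) = int_0^inf (1 - e^{-lam t}) e^{-t}/t dt
integrand = lambda t: -np.expm1(-lam*t) * np.exp(-t) / t
val, _ = quad(integrand, 0, np.inf)
assert abs(val - np.log1p(lam)) < 1e-7
```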
Before proceeding to the next two examples, let us briefly discuss the composition of subordinators. Suppose that S¹ = (S¹_t : t ≥ 0) and S² = (S²_t : t ≥ 0) are two independent subordinators with Laplace exponents φ₁ and φ₂ and convolution semigroups (η¹_t : t ≥ 0) and (η²_t : t ≥ 0), respectively. Define the new process S = (S_t : t ≥ 0) by S_t = S¹(S²_t), the subordination of S¹ by S². Subordinating a Levy process by an independent subordinator always yields a Levy process (e.g. [58], p. 197). Hence, S is another subordinator. The distribution η_t of S_t is given by
\[
\eta_t(ds) = \int_0^\infty \eta^2_t(du)\,\eta^1_u(ds). \tag{3.21}
\]
Therefore, for any λ > 0,
\[
\int_0^\infty e^{-\lambda s}\,\eta_t(ds) = \int_0^\infty e^{-\lambda s}\int_0^\infty \eta^2_t(du)\,\eta^1_u(ds) = \int_0^\infty \eta^2_t(du)\int_0^\infty e^{-\lambda s}\,\eta^1_u(ds) = \int_0^\infty \eta^2_t(du)\,e^{-u\phi_1(\lambda)} = e^{-t\,\phi_2(\phi_1(\lambda))},
\]
showing that the Laplace exponent φ of S is given by φ(λ) = φ₂(φ₁(λ)).
Example 3.11 (Geometric stable subordinators) For 0 < α < 2, let φ(λ) = log(1 + λ^{α/2}). Since φ is a composition of the complete Bernstein functions from Examples 3.8 and 3.10, it is itself a complete Bernstein function. The corresponding subordinator S is called a geometric α/2-stable subordinator. Note that this subordinator may be obtained by subordinating an α/2-stable subordinator by a gamma subordinator. The concept of geometric stable distributions was first introduced in [42]. We will now compute the Levy measure μ of S. Define
\[
E_{\alpha/2}(t) := \sum_{n=0}^\infty (-1)^n\,\frac{t^{n\alpha/2}}{\Gamma(1+n\alpha/2)}, \qquad t>0.
\]
By checking tables of Laplace transforms (or by computing term by term), we see that
\[
\int_0^\infty e^{-\lambda t}\,E_{\alpha/2}(t)\,dt = \frac{1}{\lambda\,(1+\lambda^{-\alpha/2})} = \frac{\lambda^{\alpha/2-1}}{1+\lambda^{\alpha/2}}. \tag{3.22}
\]
Further, since φ(0+) = 0 and lim_{λ→∞} φ(λ)/λ = 0, we have that φ(λ) = ∫₀^∞ (1 − e^{−λt}) μ(dt). By differentiating this expression for φ and the explicit form of φ we obtain that
\[
\phi'(\lambda) = \int_0^\infty t\,e^{-\lambda t}\,\mu(dt) = \frac{\alpha}{2}\,\frac{\lambda^{\alpha/2-1}}{1+\lambda^{\alpha/2}}. \tag{3.23}
\]
By comparing (3.22) and (3.23) we see that the Levy measure μ(dt) has a density given by
\[
\mu(t) = \frac{\alpha}{2}\,\frac{E_{\alpha/2}(t)}{t}. \tag{3.24}
\]
The explicit form of the potential density u is not known. In the next section we will derive the asymptotic behavior of u at 0+ and at ∞.
We will now show that the distribution function of S₁ is given by
\[
F(s) = 1 - E_{\alpha/2}(s) = \sum_{n=1}^\infty (-1)^{n-1}\,\frac{s^{n\alpha/2}}{\Gamma(1+n\alpha/2)}, \qquad s>0. \tag{3.25}
\]
Indeed, for λ > 0,
\[
\mathcal L F(\lambda) = \int_0^\infty e^{-\lambda t}\,F(dt) = \lambda\int_0^\infty e^{-\lambda t}\big(1-E_{\alpha/2}(t)\big)\,dt = \lambda\Big(\frac1\lambda - \frac{\lambda^{\alpha/2-1}}{1+\lambda^{\alpha/2}}\Big) = \exp\big\{-\log\big(1+\lambda^{\alpha/2}\big)\big\}.
\]
Since the function λ ↦ 1 + λ^{α/2} is a complete Bernstein function, its reciprocal λ ↦ 1/(1 + λ^{α/2}) is a Stieltjes function (see [39] for more details about Stieltjes functions). Moreover, since lim_{λ→∞} 1/(1 + λ^{α/2}) = 0, it follows that there exists a measure σ on (0,∞) such that
\[
\frac{1}{1+\lambda^{\alpha/2}} = \mathcal L(\mathcal L\sigma)(\lambda).
\]
But this means that the function F has a completely monotone density f given by f(t) = \mathcal Lσ(t). It is shown in [52] that the distribution function of S_t, t > 0, is equal to
\[
\sum_{n=1}^\infty (-1)^{n-1}\,\frac{\Gamma(t+n-1)\,s^{(t+n-1)\alpha/2}}{\Gamma(t)\,(n-1)!\,\Gamma\big(1+(t+n-1)\alpha/2\big)}.
\]
Note that the case of the gamma subordinator may be subsumed under the case of the geometric α/2-stable subordinator by taking α = 2 in the definition.
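For α = 1 the claim F(s) = 1 − E_{1/2}(s) can be probed by simulation: E_{1/2}(s) = e^s erfc(√s), and the geometric 1/2-stable variable S₁ can be sampled as the 1/2-stable subordinator run for an independent Exp(1) time, using the stable scaling S_T = T² S₁ in distribution. A hedged Monte Carlo sketch (sample size, seed and the threshold s = 1.5 are arbitrary choices):

```python
import numpy as np
from scipy.stats import levy
from scipy.special import erfc

rng = np.random.default_rng(0)
n = 200_000
T = rng.exponential(size=n)               # gamma subordinator at time 1 is Exp(1)
X = levy.rvs(size=n, random_state=rng)/2  # 1/2-stable subordinator at time 1 (LT e^{-sqrt(lam)})
S = T**2 * X                              # geometric 1/2-stable sample via scaling
s = 1.5
emp = (S <= s).mean()
exact = 1.0 - np.exp(s)*erfc(np.sqrt(s))  # F(s) = 1 - E_{1/2}(s)
assert abs(emp - exact) < 0.01
```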
Example 3.12 (Iterated geometric stable subordinators) Let 0 < α ≤ 2. Define
\[
\phi^{(1)}(\lambda) = \phi(\lambda) = \log\big(1+\lambda^{\alpha/2}\big), \qquad \phi^{(n)}(\lambda) = \phi\big(\phi^{(n-1)}(\lambda)\big), \quad n\ge 2.
\]
Since φ^{(n)} is a complete Bernstein function, we have that φ^{(n)}(λ) = ∫₀^∞ (1 − e^{−λt}) μ^{(n)}(t) dt for a completely monotone Levy density μ^{(n)}(t). The exact form of this density is not known.
Let S^{(n)} = (S^{(n)}_t : t ≥ 0) be the corresponding (iterated) subordinator, and let U^{(n)} denote the potential measure of S^{(n)}. Since φ^{(n)} is a complete Bernstein function, U^{(n)} admits a completely monotone density u^{(n)}. The explicit form of the potential density u^{(n)} is not known. In the next section we will derive the asymptotic behavior of u^{(n)} at 0 and at +∞.
Example 3.13 (Stable subordinators with drift) For 0 < α < 2 and b > 0, let φ(λ) = bλ + λ^{α/2}. Since λ ↦ λ^{α/2} is complete Bernstein, it follows that φ is also a complete Bernstein function. The corresponding subordinator S = (S_t : t ≥ 0) is the sum of the pure drift subordinator t ↦ bt and an independent α/2-stable subordinator. Its Levy measure is the same as the Levy measure of the α/2-stable subordinator. In order to compute the potential density u of the subordinator S, we first note that, similarly as in (3.22),
\[
\int_0^\infty e^{-\lambda t}\,\frac{1}{b}\,E_{1-\alpha/2}\big(b^{-2/(2-\alpha)}t\big)\,dt = \frac{1}{b\lambda+\lambda^{\alpha/2}} = \frac{1}{\phi(\lambda)},
\]
where E_{1−α/2} is defined by the series of Example 3.11 with α/2 replaced by 1 − α/2. Therefore, u(t) = (1/b) E_{1−α/2}(b^{−2/(2−α)}t) for t > 0; in particular, u(0+) = 1/b, in accordance with Proposition 3.7.
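For α = 1 and b = 1 everything is explicit: the potential density reduces to E_{1/2}(t) = e^t erfc(√t), available in overflow-safe form as `scipy.special.erfcx(sqrt(t))`, and its Laplace transform should equal 1/(λ + √λ). A small numerical check with λ = 3 (an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

# alpha = 1, b = 1: u(t) = E_{1/2}(t) = e^t erfc(sqrt(t)) = erfcx(sqrt(t))
u = lambda t: erfcx(np.sqrt(t))
lam = 3.0
val, _ = quad(lambda t: np.exp(-lam*t) * u(t), 0, np.inf)
assert abs(val - 1.0/(lam + np.sqrt(lam))) < 1e-7
```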
Example 3.14 (Bessel subordinators) The two subordinators in this example are taken from [50]. The Bessel subordinator S_I = (S_I(t) : t ≥ 0) is a subordinator with no drift, no killing and Levy density
\[
\mu_I(t) = \frac{1}{t}\,I_0(t)\,e^{-t},
\]
where, for any real number ν, I_ν denotes the modified Bessel function. Since μ_I is the Laplace transform of the function γ(t) = ∫₀^t g(s) ds with
\[
g(s) = \begin{cases} \pi^{-1}\,(2s-s^2)^{-1/2}, & s\in(0,2),\\ 0, & s\ge 2, \end{cases}
\]
the Laplace exponent of S_I is a complete Bernstein function. The Laplace exponent of S_I is given by
\[
\phi_I(\lambda) = \log\Big((1+\lambda)+\sqrt{(1+\lambda)^2-1}\Big).
\]
For any t > 0, the density of S_I(t) is given by
\[
f_t(x) = \frac{t}{x}\,I_t(x)\,e^{-x}.
\]
The Bessel subordinator S_K = (S_K(t) : t ≥ 0) is a subordinator with no drift, no killing and Levy density
\[
\mu_K(t) = \frac{1}{t}\,K_0(t)\,e^{-t},
\]
where, for any real number ν, K_ν denotes the modified Bessel function. Since μ_K is the Laplace transform of the function
\[
\gamma(t) = \begin{cases} 0, & t\in(0,2],\\[2pt] \log\big(t-1+\sqrt{(t-1)^2+1}\big), & t>2, \end{cases}
\]
the Laplace exponent of S_K is a complete Bernstein function. The Laplace exponent of S_K is given by
\[
\phi_K(\lambda) = \frac12\Big(\log\Big((1+\lambda)+\sqrt{(1+\lambda)^2-1}\Big)\Big)^2.
\]
For any t > 0, the density of S_K(t) is given by
\[
f_t(x) = \frac{\sqrt{2\pi}}{t}\,\vartheta_x\Big(\frac1t\Big)\,\frac{e^{-x}}{x},
\]
where
\[
\vartheta_v(t) = \frac{v}{\sqrt{2\pi^3 t}}\int_0^\infty \exp\Big(\frac{\pi^2-\xi^2}{2t}\Big)\,\exp\big(-v\cosh\xi\big)\,\sinh(\xi)\,\sin\Big(\frac{\pi\xi}{t}\Big)\,d\xi.
\]
Example 3.15 For any α ∈ (0,2) and β ∈ (0,2−α), it follows from the properties of complete Bernstein functions that
\[
\phi(\lambda) = \lambda^{\alpha/2}\,\big(\log(1+\lambda)\big)^{\beta/2}
\]
is a complete Bernstein function.

Example 3.16 For any α ∈ (0,2) and β ∈ (0,α), it follows from the properties of complete Bernstein functions that
\[
\phi(\lambda) = \lambda^{\alpha/2}\,\big(\log(1+\lambda)\big)^{-\beta/2}
\]
is a complete Bernstein function.
3.3 Asymptotic behavior of the potential, Levy and transition densities
Recall the formula (3.7) relating the Laplace exponent φ of the subordinator S to the Laplace transform of its potential measure U. In the case when U has a density u, this formula reads
\[
\mathcal L u(\lambda) = \int_0^\infty e^{-\lambda t}\,u(t)\,dt = \frac{1}{\phi(\lambda)}.
\]
The asymptotic behavior of φ at ∞ (resp. at 0) determines, by use of Tauberian and monotone density theorems, the asymptotic behavior of the potential density u at 0 (resp. at ∞).
We are going to use Theorems 2.9 and 2.10 for Laplace exponents that are regularly varying at ∞ (resp. at 0). To be more specific, we will assume that
\[
\phi(\lambda) \sim \lambda^{\alpha/2}\,\ell(\lambda), \qquad \lambda\to\infty \ (\text{resp. } \lambda\to 0), \tag{3.26}
\]
where 0 < α < 2 and ℓ is slowly varying at ∞ (resp. at 0). If φ is a special Bernstein function, then the corresponding subordinator S has a decreasing potential density u whose asymptotic behavior is given by
\[
u(t) \sim \frac{1}{\Gamma(\alpha/2)}\,\frac{t^{\alpha/2-1}}{\ell(1/t)}, \qquad t\to 0+ \ (\text{resp. } t\to\infty). \tag{3.27}
\]
As consequences of the above, we immediately get the following: (1) for α ∈ (0,2), β ∈ (0,2−α), the potential density of the subordinator corresponding to Example 3.15 satisfies
\[
u(t) \sim \frac{1}{\Gamma(\alpha/2)}\,\frac{1}{t^{1-\alpha/2}\,|\log t|^{\beta/2}}, \qquad t\to 0+, \tag{3.28}
\]
\[
u(t) \sim \frac{1}{\Gamma((\alpha+\beta)/2)}\,\frac{1}{t^{1-(\alpha+\beta)/2}}, \qquad t\to\infty\,; \tag{3.29}
\]
(2) for α ∈ (0,2), β ∈ (0,α), the potential density of the subordinator corresponding to Example 3.16 satisfies
\[
u(t) \sim \frac{\alpha}{2\Gamma(1+\alpha/2)}\,\frac{|\log t|^{\beta/2}}{t^{1-\alpha/2}}, \qquad t\to 0+, \tag{3.30}
\]
\[
u(t) \sim \frac{\alpha-\beta}{2\Gamma(1+(\alpha-\beta)/2)}\,\frac{1}{t^{1-(\alpha-\beta)/2}}, \qquad t\to\infty. \tag{3.31}
\]
In the case when the subordinator has a positive drift b > 0, the potential density u always exists, is continuous, and satisfies u(0+) = 1/b. For example, this will be the case when φ(λ) = bλ + λ^{α/2}. Recall (see Example 3.13) that the potential density is then given by the rather explicit formula u(t) = (1/b) E_{1−α/2}(b^{−2/(2−α)}t). The asymptotic behavior of u(t) as t → ∞ is not easily derived from this formula. On the other hand, since φ(λ) ∼ λ^{α/2} as λ → 0, it follows from (3.27) that u(t) ∼ t^{α/2−1}/Γ(α/2) as t → ∞.
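Sticking with the explicit case α = 1, b = 1 (so that u(t) = e^t erfc(√t) = erfcx(√t)), the large-t asymptotics u(t) ∼ t^{−1/2}/Γ(1/2) predicted by (3.27) can be eyeballed numerically:

```python
import numpy as np
from scipy.special import erfcx, gamma

t = 1e6                              # "large" t, arbitrary
u = erfcx(np.sqrt(t))                # u(t) for alpha = 1, b = 1
asym = t**-0.5 / gamma(0.5)          # t^{alpha/2 - 1} / Gamma(alpha/2)
assert abs(u/asym - 1.0) < 1e-3
```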
Note that the gamma subordinator, geometric α/2-stable subordinators, iterated geo-
metric stable subordinators and Bessel subordinators have Laplace exponents that are not
regularly varying with strictly positive exponent at ∞, but are rather slowly varying at ∞.
In this case, Karamata’s monotone density theorem cannot be used, and we need to use
Theorems 2.11 and 2.12.
We are going to apply these results to establish the asymptotic behaviors of the potential
density of geometric stable subordinators, iterated geometric stable subordinators and Bessel
subordinators at zero.
Proposition 3.17 For any α ∈ (0,2], let φ(λ) = log(1 + λ^{α/2}), and let u be the potential density of the corresponding subordinator. Then
\[
u(t) \sim \frac{2}{\alpha\,t\,(\log t)^2}, \quad t\to 0+, \qquad\qquad u(t) \sim \frac{t^{\alpha/2-1}}{\Gamma(\alpha/2)}, \quad t\to\infty.
\]
Proof. Recall that
\[
\mathcal L U(\lambda) = \frac{1}{\phi(\lambda)} = \frac{1}{\log(1+\lambda^{\alpha/2})}.
\]
Since
\[
\frac{\mathcal L U\big(\frac{1}{x\lambda}\big)-\mathcal L U\big(\frac{1}{\lambda}\big)}{(\log\lambda)^{-2}} \to \frac{2}{\alpha}\,\log x, \qquad \forall x>0,
\]
as λ → 0+, we have by (the 0+ version of) Theorem 2.11 that
\[
\frac{U(xt)-U(t)}{(\log t)^{-2}} \to \frac{2}{\alpha}\,\log x, \qquad x>0,
\]
as t → 0+. Now we can apply (the 0+ version of) Theorem 2.12 to get that
\[
u(t) \sim \frac{2}{\alpha\,t\,(\log t)^2}
\]
as t → 0+. The asymptotic behavior of u(t) as t → ∞ follows from Theorem 2.9. □
In order to deal with the iterated geometric stable subordinators, let e₀ = 0 and, inductively, e_n = e^{e_{n−1}}, n ≥ 1. For n ≥ 1 define l_n : (e_n,∞) → (0,∞) by
\[
l_n(y) = \underbrace{\log\log\cdots\log}_{n\ \text{times}}\,y. \tag{3.32}
\]
Further, let L₀(y) = 1, and for n ∈ ℕ define L_n : (e_n,∞) → (0,∞) by
\[
L_n(y) = l_1(y)\,l_2(y)\cdots l_n(y). \tag{3.33}
\]
Note that l'_n(y) = 1/(y L_{n−1}(y)) for every n ≥ 1. Let α ∈ (0,2] and recall from Example 3.12 that φ^{(1)}(y) := log(1 + y^{α/2}) and, for n ≥ 2, φ^{(n)}(y) := φ(φ^{(n−1)}(y)). Let k_n(y) := 1/φ^{(n)}(y).
Lemma 3.18 Let t > 0. For every n ∈ ℕ,
\[
\lim_{y\to\infty}\big(k_n(ty)-k_n(y)\big)\,L_{n-1}(y)\,l_n(y)^2 = -\frac{2}{\alpha}\,\log t.
\]
Proof. The proof for n = 1 is straightforward and is implicit in the proof of Proposition 3.17. We only give the proof for n = 2; the proof for general n is similar. Using the fact that
\[
\log(1+y) \sim y, \qquad y\to 0+, \tag{3.34}
\]
we can easily get that
\[
\lim_{y\to\infty}\Big(\log\frac{\log y}{\log(yt)}\Big)\log y = -\lim_{y\to\infty}\Big(\log\frac{\log y+\log t}{\log y}\Big)\log y = -\log t. \tag{3.35}
\]
Using (3.34) and the elementary fact that log(1 + y) ∼ log y as y → ∞, we get that
\[
\lim_{y\to\infty}\big(k_2(ty)-k_2(y)\big)\,L_1(y)\,l_2(y)^2
= \frac{\alpha}{2}\,\lim_{y\to\infty}\Big(\log\frac{\log(1+y^{\alpha/2})}{\log(1+(ty)^{\alpha/2})}\Big)\,\frac{\log y\,(\log\log y)^2}{(\alpha/2)^2\,\log\big(\log(1+y^{\alpha/2})\big)\,\log\big(\log(1+(ty)^{\alpha/2})\big)}
= \frac{2}{\alpha}\,\lim_{y\to\infty}\Big(\log\frac{\log y}{\log(yt)}\Big)\log y = -\frac{2}{\alpha}\,\log t. \qquad\Box
\]
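The n = 1 case of Lemma 3.18 is easy to probe numerically (here α = 1, so k₁(y) = 1/log(1 + √y); the values t = 2 and y = 10¹² are arbitrary, and since convergence in y is only logarithmic the tolerance is deliberately loose):

```python
import numpy as np

alpha = 1.0
k1 = lambda y: 1.0/np.log1p(y**(alpha/2))
t, y = 2.0, 1e12
# L_0(y) = 1 and l_1(y) = log y, so the lemma predicts -(2/alpha) log t
lhs = (k1(t*y) - k1(y)) * np.log(y)**2
assert abs(lhs - (-(2/alpha)*np.log(t))) < 0.05
```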
Recall that U^{(n)} denotes the potential measure and u^{(n)}(t) the potential density of the iterated geometric stable subordinator S^{(n)} with the Laplace exponent φ^{(n)}.

Proposition 3.19 For any α ∈ (0,2], we have
\[
u^{(n)}(t) \sim \frac{2}{\alpha\,t\,L_{n-1}(\frac1t)\,l_n(\frac1t)^2}, \qquad t\to 0+, \tag{3.36}
\]
\[
u^{(n)}(t) \sim \frac{t^{(\alpha/2)^n-1}}{\Gamma\big((\alpha/2)^n\big)}, \qquad t\to\infty. \tag{3.37}
\]
Proof. Using Lemma 3.18 we can easily see that
\[
\frac{\mathcal L U^{(n)}\big(\frac{1}{x\lambda}\big)-\mathcal L U^{(n)}\big(\frac{1}{\lambda}\big)}{\big(L_{n-1}(\frac1\lambda)\,l_n(\frac1\lambda)^2\big)^{-1}} \to \frac{2}{\alpha}\,\log x, \qquad \forall x>0,
\]
as λ → 0+. Therefore, by (the 0+ version of) Theorem 2.11 we have that
\[
\frac{U^{(n)}(xt)-U^{(n)}(t)}{\big(L_{n-1}(\frac1t)\,l_n(\frac1t)^2\big)^{-1}} \to \frac{2}{\alpha}\,\log x, \qquad x>0,
\]
as t → 0+. Now we can apply (the 0+ version of) Theorem 2.12 to get that
\[
u^{(n)}(t) \sim \frac{2}{\alpha\,t\,L_{n-1}(\frac1t)\,l_n(\frac1t)^2}
\]
as t → 0+. The asymptotic behavior of u^{(n)}(t) at ∞ follows easily from Theorem 2.10. □
Let u_I and u_K be the potential densities of the Bessel subordinators S_I and S_K, respectively. Then we have the following result.

Proposition 3.20 The potential densities of the Bessel subordinators satisfy the following asymptotics:
\[
u_I(t) \sim \frac{1}{t\,(\log t)^2}, \quad t\to 0+, \qquad\qquad u_K(t) \sim \frac{1}{t\,|\log t|^3}, \quad t\to 0+,
\]
\[
u_I(t) \sim \frac{1}{\sqrt{2\pi}}\,t^{-1/2}, \quad t\to\infty, \qquad\qquad u_K(t) \sim 1, \quad t\to\infty.
\]
Proof. The proofs of the first two relations are direct applications of de Haan's Tauberian and monotone density theorems, and the proofs of the last two are direct applications of Karamata's Tauberian and monotone density theorems. We omit the details. □
We now discuss the asymptotic behavior of the Levy density of a subordinator.
Proposition 3.21 Assume that the Laplace exponent φ of the subordinator S is a complete Bernstein function and let μ(t) denote the density of its Levy measure.
(i) Let 0 < α < 2. If φ(λ) ∼ λ^{α/2}ℓ(λ) as λ → ∞, where ℓ is a slowly varying function at ∞, then
\[
\mu(t) \sim \frac{\alpha/2}{\Gamma(1-\alpha/2)}\,t^{-1-\alpha/2}\,\ell(1/t), \qquad t\to 0+. \tag{3.38}
\]
(ii) Let 0 < α < 2. If φ(λ) ∼ λ^{α/2}ℓ(λ) as λ → 0, where ℓ is a slowly varying function at 0, then
\[
\mu(t) \sim \frac{\alpha/2}{\Gamma(1-\alpha/2)}\,t^{-1-\alpha/2}\,\ell(1/t), \qquad t\to\infty. \tag{3.39}
\]
Proof. (i) The assumption implies that there is no drift, b = 0, and hence by integration by parts,
\[
\phi(\lambda) = \lambda\int_0^\infty e^{-\lambda t}\,\mu(t,\infty)\,dt.
\]
Thus ∫₀^∞ e^{−λt} μ(t,∞) dt ∼ λ^{α/2−1}ℓ(λ) as λ → ∞, and (3.38) follows by first using Karamata's Tauberian theorem and then Karamata's monotone density theorem.
(ii) In this case it is possible that the drift b is strictly positive, and thus
\[
\phi(\lambda) = \lambda\Big(b+\int_0^\infty e^{-\lambda t}\,\mu(t,\infty)\,dt\Big).
\]
This implies that ∫₀^∞ e^{−λt} μ(t,∞) dt ∼ λ^{α/2−1}ℓ(λ) as λ → 0, and (3.39) holds by Theorems 2.9 and 2.10. □
Note that if φ(λ) ∼ bλ as λ → ∞ with b > 0, nothing can be inferred about the behavior of the density μ(t) near zero. Next we record the asymptotic behavior of the Levy density of the geometric stable subordinator. The first claim follows from (3.24), and the second from the previous proposition.

Proposition 3.22 Let μ(dt) = μ(t) dt be the Levy measure of a geometric α/2-stable subordinator. Then:
(i) For 0 < α ≤ 2, μ(t) ∼ \(\frac{\alpha}{2t}\) as t → 0+.
(ii) For 0 < α < 2, μ(t) ∼ \(\frac{\alpha/2}{\Gamma(1-\alpha/2)}\,t^{-\alpha/2-1}\) as t → ∞. For α = 2, μ(t) = \(\frac{e^{-t}}{t}\).
In the case of iterated geometric stable subordinators, we have only a partial result for the asymptotic behavior of the density μ^{(n)}; it follows from Proposition 3.21 (ii).

Proposition 3.23 For any α ∈ (0,2),
\[
\mu^{(n)}(t) \sim \frac{(\alpha/2)^n}{\Gamma\big(1-(\alpha/2)^n\big)}\,t^{-1-(\alpha/2)^n}, \qquad t\to\infty.
\]
Remark 3.24 Note that we do not give the asymptotic behavior of μ^{(n)}(t) as t → ∞ for α = 2 (the iterated gamma subordinator), nor the asymptotic behavior of μ^{(n)}(t) as t → 0+ for α ∈ (0,2]. It is an open problem to determine the correct asymptotic behavior in these cases.
The following results are immediate consequences of Proposition 3.21.
Proposition 3.25 Suppose that α ∈ (0,2) and β ∈ (0,2−α). Let μ(t) be the Levy density of the subordinator corresponding to Example 3.15. Then
\[
\mu(t) \sim \frac{\alpha}{2\Gamma(1-\alpha/2)}\,t^{-1-\alpha/2}\,\big(\log(1/t)\big)^{\beta/2}, \qquad t\to 0+,
\]
\[
\mu(t) \sim \frac{\alpha+\beta}{2\Gamma\big(1-(\alpha+\beta)/2\big)}\,t^{-1-(\alpha+\beta)/2}, \qquad t\to\infty.
\]
Proposition 3.26 Suppose that α ∈ (0,2) and β ∈ (0,α). Let μ(t) be the Levy density of the subordinator corresponding to Example 3.16. Then
\[
\mu(t) \sim \frac{\alpha}{2\Gamma(1-\alpha/2)}\,\frac{1}{t^{1+\alpha/2}\,(\log(1/t))^{\beta/2}}, \qquad t\to 0+,
\]
\[
\mu(t) \sim \frac{\alpha-\beta}{2\Gamma\big(1-(\alpha-\beta)/2\big)}\,\frac{1}{t^{1+(\alpha-\beta)/2}}, \qquad t\to\infty.
\]
We conclude this section with a discussion of the asymptotic behavior of the transition densities of geometric stable subordinators. Let S = (S_t : t ≥ 0) be a geometric α/2-stable subordinator, and let (η_s : s ≥ 0) be the corresponding convolution semigroup. Further, let (ρ_s : s ≥ 0) be the convolution semigroup corresponding to an α/2-stable subordinator and, by abuse of notation, let ρ_s also denote the corresponding density. Then by (3.21) and the explicit formula (3.20), we see that η_s has a density
\[
f_s(t) = \int_0^\infty \rho_u(t)\,\frac{1}{\Gamma(s)}\,u^{s-1}e^{-u}\,du.
\]
For s = 1, this formula reads
\[
f_1(t) = \int_0^\infty \rho_u(t)\,e^{-u}\,du.
\]
Moreover, we have shown in Example 3.11 that f₁(t) is completely monotone. To be more precise, f₁(t) is the density of the distribution function F(t) = 1 − E_{α/2}(t) of the probability measure η₁ (see (3.25)).
Proposition 3.27 For any α ∈ (0,2),
\[
f_1(t) \sim \frac{1}{\Gamma(\alpha/2)}\,t^{\alpha/2-1}, \qquad t\to 0+, \tag{3.40}
\]
\[
f_1(t) \sim \frac{1}{\pi}\,\Gamma\Big(1+\frac{\alpha}{2}\Big)\sin\Big(\frac{\alpha\pi}{2}\Big)\,t^{-1-\alpha/2}, \qquad t\to\infty. \tag{3.41}
\]
Proof. The first relation follows from the explicit form of the distribution function F(t) = 1 − E_{α/2}(t) and Karamata's monotone density theorem. For the second relation, use the scaling property of the stable distribution, ρ_u(t) = u^{−2/α}ρ₁(u^{−2/α}t), to get
\[
f_1(t) = \int_0^\infty e^{-u}\,u^{-2/\alpha}\,\rho_1\big(u^{-2/\alpha}t\big)\,du.
\]
Now use (3.18), (3.19) and the dominated convergence theorem to obtain the required asymptotic behavior. □
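For α = 1 everything is explicit: F(t) = 1 − e^t erfc(√t), so f₁(t) = 1/√(πt) − erfcx(√t), and (3.40) with Γ(1/2) = √π can be checked numerically at small t (the value t = 10⁻⁸ is arbitrary):

```python
import numpy as np
from scipy.special import erfcx

t = 1e-8
f1 = 1.0/np.sqrt(np.pi*t) - erfcx(np.sqrt(t))  # density of F(t) = 1 - e^t erfc(sqrt(t))
asym = t**-0.5 / np.sqrt(np.pi)                 # t^{alpha/2 - 1}/Gamma(alpha/2), alpha = 1
assert abs(f1/asym - 1.0) < 1e-3
```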
4 Subordinate Brownian motion
4.1 Definitions and technical lemma
Let X = (X_t, P_x) be a d-dimensional Brownian motion. The transition densities p(t,x,y) = p(t, y−x), x, y ∈ ℝ^d, t > 0, of X are given by
\[
p(t,x) = (4\pi t)^{-d/2}\exp\Big(-\frac{|x|^2}{4t}\Big).
\]
The semigroup (P_t : t ≥ 0) of X is defined by P_t f(x) = E_x[f(X_t)] = ∫_{ℝ^d} p(t,x,y) f(y) dy, where f is a nonnegative Borel function on ℝ^d. Recall that if d ≥ 3, the Green function G^{(2)}(x,y) = G^{(2)}(x−y), x, y ∈ ℝ^d, of X is well defined and is equal to
\[
G^{(2)}(x) = \int_0^\infty p(t,x)\,dt = \frac{\Gamma(d/2-1)}{4\pi^{d/2}}\,|x|^{-d+2}.
\]
Let S = (S_t : t ≥ 0) be a subordinator independent of X, with Laplace exponent φ(λ), Levy measure μ, drift b ≥ 0, no killing, potential measure U, and convolution semigroup (η_t : t ≥ 0). We define a new process Y = (Y_t : t ≥ 0) by Y_t := X(S_t). Then Y is a Levy process with characteristic exponent Φ(x) = φ(|x|²) (see e.g. [58], pp. 197-198), called a subordinate Brownian motion. The semigroup (Q_t : t ≥ 0) of the process Y is given by
\[
Q_t f(x) = \mathbb E_x[f(Y_t)] = \mathbb E_x[f(X(S_t))] = \int_0^\infty P_s f(x)\,\eta_t(ds).
\]
If the subordinator S is not a compound Poisson process, then Q_t has a density q(t,x,y) = q(t, x−y) given by q(t,x) = ∫₀^∞ p(s,x) η_t(ds).
From now on we assume that the subordinate process Y is transient. According to the criterion due to Port and Stone ([54]), Y is transient if and only if, for some small r > 0,
\[
\int_{|x|<r}\mathrm{Re}\Big(\frac{1}{\Phi(x)}\Big)\,dx < \infty.
\]
Since Φ(x) = φ(|x|²) is real, it follows that Y is transient if and only if
\[
\int_{0+}\frac{\lambda^{d/2-1}}{\phi(\lambda)}\,d\lambda < \infty. \tag{4.1}
\]
This is always true if d ≥ 3 and, depending on the subordinator, may also be true for d = 1 or d = 2. For x ∈ ℝ^d and a Borel subset A of ℝ^d, the occupation measure is given by
\[
G(x,A) = \mathbb E_x\int_0^\infty \mathbf 1_{(Y_t\in A)}\,dt = \int_0^\infty Q_t\mathbf 1_A(x)\,dt = \int_0^\infty\!\!\int_0^\infty P_s\mathbf 1_A(x)\,\eta_t(ds)\,dt = \int_0^\infty P_s\mathbf 1_A(x)\,U(ds) = \int_A\int_0^\infty p(s,x,y)\,U(ds)\,dy,
\]
where the third equality sign is justified by (3.6). If A is bounded, then by the transience of Y, G(x,A) < ∞ for every x ∈ ℝ^d. Let G(x,y) denote the density of the occupation measure G(x,·). Clearly, G(x,y) = G(y−x), where
\[
G(x) = \int_0^\infty p(t,x)\,U(dt) = \int_0^\infty p(t,x)\,u(t)\,dt, \tag{4.2}
\]
the last equality being valid in the case when U has a potential density u.
The Levy measure π of Y is given by (see e.g. [58], pp. 197-198)
\[
\pi(A) = \int_A\int_0^\infty p(t,x)\,\mu(dt)\,dx = \int_A J(x)\,dx, \qquad A\subset\mathbb R^d,
\]
where
\[
J(x) := \int_0^\infty p(t,x)\,\mu(dt) = \int_0^\infty p(t,x)\,\mu(t)\,dt \tag{4.3}
\]
is called the jumping function of Y. The last equality is valid in the case when μ(dt) has a density μ(t). Define the function j : (0,∞) → (0,∞) by
\[
j(r) := \int_0^\infty (4\pi)^{-d/2}\,t^{-d/2}\exp\Big(-\frac{r^2}{4t}\Big)\,\mu(dt), \qquad r>0, \tag{4.4}
\]
and note that by (4.3), J(x) = j(|x|), x ∈ ℝ^d \ {0}. We state the following well-known conditions describing when a Levy process is a subordinate Brownian motion (for a proof, see e.g. [39], pp. 190-192).

Proposition 4.1 Let Y = (Y_t : t ≥ 0) be a d-dimensional Levy process with characteristic triple (b, A, π). Then Y is a subordinate Brownian motion if and only if π has a rotationally invariant density x ↦ j(|x|) such that r ↦ j(√r) is a completely monotone function on (0,∞), A = cI_d with c ≥ 0, and b = 0.
Example 4.2 (i) Let φ(λ) = λ^{α/2}, 0 < α < 2, and let S be the corresponding α/2-stable subordinator. The characteristic exponent of the subordinate process Y is equal to Φ(x) = φ(|x|²) = |x|^α. Hence, Y is a rotationally invariant α-stable process. From now on we will (imprecisely) refer to this process as a symmetric α-stable process. Y is transient if and only if d > α. When d > α, the Green function of Y is given by the Riesz kernel
\[
G(x) = \frac{1}{\pi^{d/2}\,2^{\alpha}}\,\frac{\Gamma\big(\frac{d-\alpha}{2}\big)}{\Gamma\big(\frac{\alpha}{2}\big)}\,|x|^{\alpha-d}, \qquad x\in\mathbb R^d,
\]
and the jumping function by
\[
J(x) = \frac{\alpha\,2^{\alpha-1}\,\Gamma\big(\frac{d+\alpha}{2}\big)}{\pi^{d/2}\,\Gamma\big(1-\frac{\alpha}{2}\big)}\,|x|^{-\alpha-d}, \qquad x\in\mathbb R^d.
\]
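Formula (4.2) combined with the stable potential density u(t) = t^{α/2−1}/Γ(α/2) reproduces the Riesz kernel above; a numerical spot check (the choices d = 3, α = 1, |x| = 2 are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

d, alpha, r = 3, 1.0, 2.0
u = lambda t: t**(alpha/2 - 1.0) / gamma(alpha/2)              # stable potential density
heat = lambda t: (4*np.pi*t)**(-d/2) * np.exp(-r*r/(4*t))      # Gaussian kernel p(t, x)
G, _ = quad(lambda t: heat(t) * u(t), 0, np.inf)               # (4.2)
riesz = gamma((d - alpha)/2) / (np.pi**(d/2) * 2**alpha * gamma(alpha/2)) * r**(alpha - d)
assert abs(G/riesz - 1.0) < 1e-6
```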
(ii) For 0 < α < 2 and m > 0, let φ(λ) = (λ + m^{2/α})^{α/2} − m, and let S be the corresponding relativistic α/2-stable subordinator. The characteristic exponent of the subordinate process Y is equal to Φ(x) = φ(|x|²) = (|x|² + m^{2/α})^{α/2} − m. The process Y is called the symmetric relativistic α-stable process. Y is transient if and only if d > 2.
(iii) Let φ(λ) = log(1 + λ), and let S be the corresponding gamma subordinator. The characteristic exponent of the subordinate process Y is given by Φ(x) = log(1 + |x|²). The process Y is known in some of the finance literature (see [48] and [34]) as a variance gamma process (at least for d = 1). Y is transient if and only if d > 2.
(iv) For 0 < α < 2, let φ(λ) = log(1 + λ^{α/2}), and let S be the corresponding subordinator. The characteristic exponent of the subordinate process Y is given by Φ(x) = log(1 + |x|^α). The process Y is known as a rotationally invariant geometric α-stable process. From now on we will (imprecisely) refer to this process as a symmetric geometric α-stable process. Y is transient if and only if d > α.
(v) For 0 < α < 2, let φ^{(1)}(λ) = log(1 + λ^{α/2}) and, for n > 1, φ^{(n)}(λ) = φ^{(1)}(φ^{(n−1)}(λ)). Let S^{(n)} be the corresponding iterated geometric stable subordinator and denote Y^{(n)}_t = X(S^{(n)}_t). Y^{(n)} is transient if and only if d > 2(α/2)^n.
(vi) For 0 < α < 2 and b > 0, let φ(λ) = bλ + λ^{α/2}, and let S be the corresponding subordinator. The characteristic exponent of the subordinate process Y is Φ(x) = b|x|² + |x|^α. Hence, Y is the sum of a (multiple of) Brownian motion and an independent symmetric α-stable process. Similarly, we can realize the sum of a symmetric α-stable and an independent symmetric β-stable process by subordinating the Brownian motion X with a subordinator having the Laplace exponent φ(λ) = λ^{α/2} + λ^{β/2}.
(vii) The characteristic exponent of the subordinate Brownian motion with the Bessel subordinator S_I is
\[
\log\Big(\big(1+|x|^2\big)+\sqrt{\big(1+|x|^2\big)^2-1}\Big),
\]
and so this process is transient if and only if d > 1. The characteristic exponent of the subordinate Brownian motion with the Bessel subordinator S_K is
\[
\frac12\Big(\log\Big(\big(1+|x|^2\big)+\sqrt{\big(1+|x|^2\big)^2-1}\Big)\Big)^2,
\]
and so this process is transient if and only if d > 2.
(viii) For α ∈ (0,2) and β ∈ (0,2−α), let S be the subordinator with Laplace exponent φ(λ) = λ^{α/2}(log(1+λ))^{β/2}. The characteristic exponent of the subordinate process Y is Φ(x) = |x|^α(log(1+|x|²))^{β/2}. Y is transient if and only if d > α + β.
(ix) For α ∈ (0,2) and β ∈ (0,α), let S be the subordinator with Laplace exponent φ(λ) = λ^{α/2}(log(1+λ))^{−β/2}. The characteristic exponent of the subordinate process Y is Φ(x) = |x|^α(log(1+|x|²))^{−β/2}. Y is transient if and only if d > α − β.
In order to establish the asymptotic behaviors of the Green function G and the jumping function J of the subordinate Brownian motion Y, we start by defining an auxiliary function. For any function ℓ slowly varying at infinity and any ξ > 0, let
\[
f_{\ell,\xi}(y,t) := \begin{cases} \dfrac{\ell(1/y)}{\ell(4t/y)}, & y < t/\xi,\\[6pt] 0, & y \ge t/\xi. \end{cases}
\]
Now we state and prove the key technical lemma.

Lemma 4.3 Suppose that w : (0,∞) → (0,∞) is a decreasing function satisfying the following two assumptions:
(i) There exist constants c₀ > 0 and β ∈ [0,2], and a continuous function ℓ : (0,∞) → (0,∞) slowly varying at ∞, such that
\[
w(t) \sim \frac{c_0}{t^{\beta}\,\ell(1/t)}, \qquad t\to 0+. \tag{4.5}
\]
(ii) If d = 1 or d = 2, then there exist a constant c_∞ > 0 and a constant γ < d/2 such that
\[
w(t) \sim c_\infty\,t^{\gamma-1}, \qquad t\to+\infty. \tag{4.6}
\]
Let g : (0,∞) → (0,∞) be a function such that
\[
\int_0^\infty t^{d/2-2+\beta}\,e^{-t}\,g(t)\,dt < \infty.
\]
If there is ξ > 0 such that f_{ℓ,ξ}(y,t) ≤ g(t) for all y, t > 0, then
\[
I(x) := \int_0^\infty (4\pi t)^{-d/2}\,e^{-\frac{|x|^2}{4t}}\,w(t)\,dt \sim \frac{c_0\,\Gamma(d/2+\beta-1)}{4^{1-\beta}\,\pi^{d/2}}\,\frac{1}{|x|^{d+2\beta-2}\,\ell\big(\frac{1}{|x|^2}\big)}, \qquad |x|\to 0.
\]
Proof. Let us first note that the assumptions of the lemma guarantee that I(x) < ∞ for every x ≠ 0. By a change of variables we get
\[
\int_0^\infty (4\pi t)^{-d/2}\,e^{-\frac{|x|^2}{4t}}\,w(t)\,dt = \frac{|x|^{-d+2}}{4\pi^{d/2}}\int_0^\infty t^{d/2-2}\,e^{-t}\,w\Big(\frac{|x|^2}{4t}\Big)\,dt = \frac{1}{4\pi^{d/2}}\Big(|x|^{-d+2}\int_0^{\xi|x|^2} + |x|^{-d+2}\int_{\xi|x|^2}^{\infty}\Big) = \frac{1}{4\pi^{d/2}}\big(|x|^{-d+2}I_1 + |x|^{-d+2}I_2\big).
\]
We first consider I₁ in the case d = 1 or d = 2. It follows from the assumptions that there exists a positive constant c₁ such that w(s) ≤ c₁ s^{γ−1} for all s ≥ 1/(4ξ). Thus
\[
I_1 \le \int_0^{\xi|x|^2} t^{d/2-2}\,e^{-t}\,c_1\Big(\frac{|x|^2}{4t}\Big)^{\gamma-1}\,dt \le c_2\,|x|^{2\gamma-2}\int_0^{\xi|x|^2} t^{d/2-\gamma-1}\,dt = c_3\,|x|^{d-2}.
\]
It follows that
\[
\lim_{|x|\to 0}\frac{|x|^{-d+2}\,I_1}{\dfrac{1}{|x|^{d+2\beta-2}\,\ell\big(\frac{1}{|x|^2}\big)}} = 0. \tag{4.7}
\]
In the case d ≥ 3, we proceed similarly, using the bound w(s) ≤ w(1/(4ξ)) for s ≥ 1/(4ξ).
Now we consider I₂:
\[
|x|^{-d+2}\,I_2 = \frac{1}{|x|^{d-2}}\int_{\xi|x|^2}^{\infty} t^{d/2-2}\,e^{-t}\,w\Big(\frac{|x|^2}{4t}\Big)\,dt
= \frac{4^{\beta}}{|x|^{d+2\beta-2}\,\ell\big(\frac{1}{|x|^2}\big)}\int_{\xi|x|^2}^{\infty} t^{d/2-2+\beta}\,e^{-t}\,\frac{w\big(\frac{|x|^2}{4t}\big)}{\dfrac{1}{\big(\frac{|x|^2}{4t}\big)^{\beta}\,\ell\big(\frac{4t}{|x|^2}\big)}}\,\frac{\ell\big(\frac{1}{|x|^2}\big)}{\ell\big(\frac{4t}{|x|^2}\big)}\,dt.
\]
Using the assumption (4.5), we can see that there is a constant c > 0 such that
\[
\frac{w\big(\frac{|x|^2}{4t}\big)}{\dfrac{1}{\big(\frac{|x|^2}{4t}\big)^{\beta}\,\ell\big(\frac{4t}{|x|^2}\big)}} < c
\]
for all t and x satisfying |x|²/(4t) ≤ 1/(4ξ). Since ℓ is slowly varying at infinity,
\[
\lim_{|x|\to 0}\frac{\ell\big(\frac{1}{|x|^2}\big)}{\ell\big(\frac{4t}{|x|^2}\big)} = 1
\]
for all t > 0. Note that
\[
\frac{\ell\big(\frac{1}{|x|^2}\big)}{\ell\big(\frac{4t}{|x|^2}\big)} = f_{\ell,\xi}\big(|x|^2,t\big).
\]
It follows from the assumption that
\[
t^{d/2-2+\beta}\,e^{-t}\,\frac{w\big(\frac{|x|^2}{4t}\big)}{\dfrac{1}{\big(\frac{|x|^2}{4t}\big)^{\beta}\,\ell\big(\frac{4t}{|x|^2}\big)}}\,\frac{\ell\big(\frac{1}{|x|^2}\big)}{\ell\big(\frac{4t}{|x|^2}\big)} \le c\,t^{d/2-2+\beta}\,e^{-t}\,g(t).
\]
Therefore, by the dominated convergence theorem we have
\[
\lim_{|x|\to 0}\int_{\xi|x|^2}^{\infty} t^{d/2-2+\beta}\,e^{-t}\,\frac{w\big(\frac{|x|^2}{4t}\big)}{\dfrac{1}{\big(\frac{|x|^2}{4t}\big)^{\beta}\,\ell\big(\frac{4t}{|x|^2}\big)}}\,\frac{\ell\big(\frac{1}{|x|^2}\big)}{\ell\big(\frac{4t}{|x|^2}\big)}\,dt = \int_0^\infty c_0\,t^{d/2-2+\beta}\,e^{-t}\,dt = c_0\,\Gamma(d/2+\beta-1).
\]
Hence,
\[
\lim_{|x|\to 0}\frac{|x|^{-d+2}\,I_2}{\dfrac{4^{\beta}}{|x|^{d+2\beta-2}\,\ell\big(\frac{1}{|x|^2}\big)}} = c_0\,\Gamma(d/2+\beta-1). \tag{4.8}
\]
Finally, combining (4.7) and (4.8) we get
\[
\lim_{|x|\to 0}\frac{I(x)}{\dfrac{1}{|x|^{d+2\beta-2}\,\ell\big(\frac{1}{|x|^2}\big)}} = \frac{c_0\,\Gamma(d/2+\beta-1)}{4^{1-\beta}\,\pi^{d/2}}. \qquad\Box
\]
Remark 4.4 Note that if in (4.5) we have ℓ = 1, then f_{ℓ,ξ} = 1; hence f_{ℓ,ξ}(y,t) ≤ g(t) and ∫₀^∞ t^{d/2−2+β}e^{−t}g(t) dt < ∞ hold with g = 1.
4.2 Asymptotic behavior of the Green function
The goal of this subsection is to establish the asymptotic behavior of the Green function G(x) of the subordinate process Y under certain assumptions on the Laplace exponent of the subordinator S. We start with the asymptotic behavior as |x| \to 0 in the following cases: (1) \phi(\lambda) has a power-law behavior at \infty; (2) S is a geometric \alpha/2-stable subordinator, 0 < \alpha \le 2; (3) S is an iterated geometric stable subordinator; (4) S is a Bessel subordinator; and (5) S is the subordinator corresponding to Example 3.15 or Example 3.16.
Theorem 4.5 Suppose that S = (S_t : t \ge 0) is a subordinator whose Laplace exponent \phi(\lambda) = b\lambda + \int_0^\infty (1 - e^{-\lambda t})\, \mu(dt) satisfies one of the following two assumptions:
(i) b > 0;
(ii) S is a special subordinator and \phi(\lambda) \sim \gamma^{-1}\lambda^{\alpha/2} as \lambda \to \infty, for 0 < \alpha < 2.
Then

G(x) \sim \frac{\gamma}{\pi^{d/2} 2^{\alpha}}\, \frac{\Gamma\big(\frac{d-\alpha}{2}\big)}{\Gamma\big(\frac{\alpha}{2}\big)}\, |x|^{\alpha-d}\,, \qquad |x| \to 0, \qquad (4.9)

(where in case (i), \gamma^{-1} = b and \alpha = 2).
Proof. (i) In this case, \phi(\lambda) \sim b\lambda as \lambda \to \infty. By Proposition 3.7, the potential measure U has a continuous density u satisfying u(0+) = 1/b = \gamma and u(t) \le u(0+) for all t > 0. Note first that by a change of variables,

\int_0^\infty (4\pi t)^{-d/2} \exp\Big(-\frac{|x|^2}{4t}\Big) u(t)\, dt = \frac{|x|^{-d+2}}{4\pi^{d/2}} \int_0^\infty s^{d/2-2} e^{-s}\, u\Big(\frac{|x|^2}{4s}\Big)\, ds\,. \qquad (4.10)

By Proposition 3.7, \lim_{x\to 0} u(|x|^2/(4s)) = u(0+) = \gamma for all s > 0 and the convergence is bounded by u(0+). Hence, by the bounded convergence theorem,

\lim_{x\to 0} \frac{1}{|x|^{-d+2}} \int_0^\infty (4\pi t)^{-d/2} \exp\Big(-\frac{|x|^2}{4t}\Big) u(t)\, dt = \frac{\gamma\,\Gamma(d/2-1)}{4\pi^{d/2}}\,. \qquad (4.11)
(ii) In this case the potential measure U has a decreasing density u which by (3.27) satisfies

u(t) \sim \frac{\gamma}{\Gamma(\alpha/2)}\, \frac{1}{t^{1-\alpha/2}}\,, \qquad t \to 0+\,.

By recalling Remark 4.4, we can now apply Lemma 4.3 with \beta = 1 - \alpha/2 to obtain the required asymptotic behavior. \Box
Theorem 4.6 For any \alpha \in (0,2], let \phi(\lambda) = \log(1 + \lambda^{\alpha/2}) and let S be the corresponding geometric \alpha/2-stable subordinator. Then the Green function of the subordinate process Y satisfies

G(x) \sim \frac{\Gamma(d/2)}{2\alpha\pi^{d/2}}\, \frac{1}{|x|^d \log^2(1/|x|)}\,, \qquad |x| \to 0. \qquad (4.12)
Proof. We apply Lemma 4.3 with w(t) = u(t), the potential density of S. By Proposition 3.17, u(t) \sim \frac{2}{\alpha\, t \log^2 t} as t \to 0+, so we take c_0 = 2/\alpha, \beta = 1 and \ell(t) = \log^2 t. Moreover, by the second part of Proposition 3.17, u(t) \sim t^{\alpha/2-1}/\Gamma(\alpha/2) as t \to +\infty, so we can take \gamma = \alpha/2 < d/2. Choose \xi = 1/2. Let

f(y,t) := f_{\ell,1/2}(y,t) = \begin{cases} \dfrac{\log^2(1/y)}{\log^2(4t/y)}\,, & y < 2t\,,\\[2mm] 0\,, & y \ge 2t\,. \end{cases}

Define

g(t) := \begin{cases} \dfrac{\log^2(2t)}{\log^2 2}\,, & t < \tfrac14\,,\\[2mm] 1\,, & t \ge \tfrac14\,. \end{cases}

In order to show that f(y,t) \le g(t), first let t < 1/4. Then y \mapsto f(y,t) is an increasing function for 0 < y < 2t. Hence,

\sup_{0<y<2t} f(y,t) = f(2t,t) = \frac{\log^2(2t)}{\log^2 2}\,.

Clearly, f(y, 1/4) = 1. For t > 1/4, y \mapsto f(y,t) is a decreasing function for 0 < y < 1. Hence

\sup_{0<y<(2t)\wedge 1} f(y,t) = f(0,t) := \lim_{y\to 0} f(y,t) = 1\,.

For t > 1/2, elementary considerations give that

\sup_{1<y<2t} f(y,t) \le \frac{\log^2(2t)}{\log^2 2}\,.

Clearly,

\int_0^\infty t^{d/2-1} e^{-t} g(t)\, dt < \infty\,,

and the required asymptotic behavior follows from Lemma 4.3. \Box
For n \ge 1, let S^{(n)} be the iterated geometric stable subordinator with Laplace exponent \phi^{(n)}. Recall that \phi^{(1)}(\lambda) = \log(1 + \lambda^{\alpha/2}), 0 < \alpha \le 2, and \phi^{(n)} = \phi^{(1)} \circ \phi^{(n-1)}. Let Y^{(n)}_t = X(S^{(n)}_t) be the subordinate process, and denote its Green function by G^{(n)}. We want to study the asymptotic behavior of G^{(n)} using Lemma 4.3. In order to check the conditions of that lemma, we need some preparations.
For n \in \mathbb{N}, define f_n : (0, 1/e_n) \times (0,\infty) \to [0,\infty) by

f_n(y,t) := \begin{cases} \dfrac{L_{n-1}(1/y)\, l_n(1/y)^2}{L_{n-1}(4t/y)\, l_n(4t/y)^2}\,, & y < \dfrac{2t}{e_n}\,,\\[2mm] 0\,, & y \ge \dfrac{2t}{e_n}\,. \end{cases}

Note that f_n is equal to the function f_{\ell,\xi}, defined before Lemma 4.3, with \ell(y) = L_{n-1}(y)\, l_n(y)^2 and \xi = e_n/2. Also, for n \in \mathbb{N}, let

g_n(t) := \begin{cases} f_n(2t/e_n,\, t)\,, & t < 1/4\,,\\ 1\,, & t \ge 1/4\,. \end{cases}

Moreover, for n \in \mathbb{N}, define h_n : (0, 1/e_n) \times (0,\infty) \to (0,\infty) by

h_n(y,t) := \frac{l_n(1/y)}{l_n(4t/y)}\,.

Clearly, for 0 < y < \frac{2t}{e_n} \wedge \frac{1}{e_n} we have that

f_n(y,t) = h_1(y,t) \cdots h_{n-1}(y,t)\, h_n(y,t)^2\,. \qquad (4.13)
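The factorization (4.13) can be checked numerically for n = 2, where L_1 = l_1 and e_2 = e^e; a small sketch with an arbitrarily chosen test point:

```python
import math

# iterated logarithms: l1 = log, l2 = log log; here L_1 = l_1
l1 = math.log
l2 = lambda x: math.log(math.log(x))
e2 = math.e ** math.e            # e_2 = e^e

def f2(y, t):                    # f_n for n = 2
    return (l1(1/y) * l2(1/y)**2) / (l1(4*t/y) * l2(4*t/y)**2)

def h(ln, y, t):                 # h_n(y,t) = l_n(1/y) / l_n(4t/y)
    return ln(1/y) / ln(4*t/y)

y, t = 1e-4, 2.0                 # arbitrary point with 0 < y < 2t/e_2 and y < 1/e_2
lhs = f2(y, t)
rhs = h(l1, y, t) * h(l2, y, t)**2   # identity (4.13): f_2 = h_1 * h_2^2
print(abs(lhs - rhs) < 1e-12)
```

The identity is purely algebraic (the L-factors telescope), so the equality holds to machine precision.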
Lemma 4.7 For all y \in (0, 1/e_n) and all t > 0 we have f_n(y,t) \le g_n(t). Moreover, \int_0^\infty t^{d/2-1} e^{-t} g_n(t)\, dt < \infty.

Proof. A direct calculation of the partial derivative gives

\frac{\partial h_n}{\partial y}(y,t) = \frac{L_n(1/y) - L_n(4t/y)}{y\, L_{n-1}(1/y)\, L_{n-1}(4t/y)\, l_n(4t/y)^2}\,.

The denominator is always positive. Clearly, the numerator is positive if and only if t < 1/4. Therefore, for t < 1/4, y \mapsto h_n(y,t) is increasing on (0, 2t/e_n), while for t > 1/4 it is decreasing on (0, 2t/e_n).

Let t < 1/4. It follows from (4.13) and the fact that y \mapsto h_n(y,t) is increasing on (0, 2t/e_n) that y \mapsto f_n(y,t) is increasing for 0 < y < 2t/e_n. Therefore,

\sup_{0<y<2t/e_n} f_n(y,t) \le f_n(2t/e_n, t) = g_n(t)\,.

Clearly, f_n(y, 1/4) = 1. For t > 1/4, it follows from (4.13) and the fact that y \mapsto h_n(y,t) is decreasing on (0, 2t/e_n) that y \mapsto f_n(y,t) is decreasing for 0 < y < 1/e_n. Hence

\sup_{0<y<\frac{2t}{e_n}\wedge\frac{1}{e_n}} f_n(y,t) = f_n(0,t) := \lim_{y\to 0} f_n(y,t) = 1\,.

For t > 1/2, elementary considerations give that

\sup_{\frac{1}{e_n}<y<\frac{2t}{e_n}\wedge\frac{1}{e_n}} f_n(y,t) \le g_n(t)\,.

The integrability statement of the lemma is obvious. \Box
Theorem 4.8 We have

G^{(n)}(x) \sim \frac{\Gamma(d/2)}{2\alpha\pi^{d/2}}\, \frac{1}{|x|^d\, L_{n-1}(1/|x|^2)\, l_n(1/|x|^2)^2}\,, \qquad |x| \to 0.

Proof. We apply Lemma 4.3 with w(t) = u^{(n)}(t), the potential density of S^{(n)}. By Proposition 3.19,

u^{(n)}(t) \sim \frac{2}{\alpha\, t\, L_{n-1}(1/t)\, l_n(1/t)^2}\,, \qquad t \to 0+\,,

so we take c_0 = 2/\alpha, \beta = 1 and \ell(t) = L_{n-1}(t)\, l_n(t)^2. By the second part of Proposition 3.19, u^{(n)}(t) is of order t^{(\alpha/2)^n - 1} as t \to \infty, so we may take \gamma = (\alpha/2)^n < d/2. Choose \xi = e_n/2. The result follows from Lemma 4.3 and Lemma 4.7. \Box
Using arguments similar to those used in the proof of Theorem 4.6, we can easily get the following two results.

Theorem 4.9 (i) Suppose d > 1. Let G_I be the Green function of the subordinate Brownian motion via the Bessel subordinator S_I. Then

G_I(x) \sim \frac{\Gamma(d/2)}{4\pi^{d/2}}\, \frac{1}{|x|^d \log^2(1/|x|)}\,, \qquad |x| \to 0.

(ii) Suppose d > 2. Let G_K be the Green function of the subordinate Brownian motion via the Bessel subordinator S_K. Then

G_K(x) \sim \frac{\Gamma(d/2)}{4\pi^{d/2}}\, \frac{1}{|x|^d \log^3(1/|x|)}\,, \qquad |x| \to 0.
Theorem 4.10 Suppose \alpha \in (0,2), \beta \in (0, 2-\alpha) and that S is the subordinator in Example 3.15. If d > \alpha + \beta, the Green function of the subordinate Brownian motion via S satisfies

G(x) \sim \frac{\alpha\,\Gamma((d-\alpha)/2)}{2^{\alpha+1}\pi^{d/2}\,\Gamma(1+\alpha/2)}\, \frac{1}{|x|^{d-\alpha}\,(\log(1/|x|^2))^{\beta/2}}\,, \qquad |x| \to 0.
Theorem 4.11 Suppose \alpha \in (0,2), \beta \in (0,\alpha) and that S is the subordinator in Example 3.16. If d > \alpha - \beta, the Green function of the subordinate Brownian motion via S satisfies

G(x) \sim \frac{\alpha\,\Gamma((d-\alpha)/2)}{2^{\alpha+1}\pi^{d/2}\,\Gamma(1+\alpha/2)}\, \frac{(\log(1/|x|^2))^{\beta/2}}{|x|^{d-\alpha}}\,, \qquad |x| \to 0.
Proof. The proof of this theorem is similar to that of Theorem 4.6; the only difference is that in this case, when applying Lemma 4.3, we take the slowly varying function \ell to be

\ell(t) = \begin{cases} (\log^2 t)^{-\beta/4}\,, & t \ge 2\,,\\ (\log^2 2)^{-\beta/4}\,, & t \le 2\,. \end{cases}

Then, using arguments similar to those in the proof of Theorem 4.6, we can show that with the functions defined by

f(y,t) = \begin{cases} \dfrac{\ell(1/y)}{\ell(4t/y)}\,, & y < 2t\,,\\ 0\,, & y \ge 2t \end{cases}
= \begin{cases} \Big(\dfrac{\log^2(4t/y)}{\log^2(1/y)}\Big)^{\beta/4}\,, & t < 1/4,\ y < 2t\,,\\[2mm] \Big(\dfrac{\log^2(4t/y)}{\log^2(1/y)}\Big)^{\beta/4}\,, & t \ge 1/4,\ y < 1/2\,,\\[2mm] \Big(\dfrac{\log^2(4t/y)}{\log^2 2}\Big)^{\beta/4}\,, & t \ge 1/4,\ 1/2 < y < 2t\,,\\[2mm] 0\,, & y \ge 2t\,, \end{cases}

and

g(t) = \begin{cases} \Big(\dfrac{\log^2(8t)}{\log^2 2}\Big)^{\beta/4}\,, & t > 1/4\,,\\[2mm] 1\,, & t \le 1/4\,, \end{cases}

we have f(y,t) \le g(t) for all y > 0 and t > 0. The rest of the proof is exactly the same as that of Theorem 4.6. \Box
By using the results and methods developed so far, we can obtain the following table of the asymptotic behavior of the Green function of the subordinate Brownian motion, depending on the Laplace exponent of the subordinator. The left column contains Laplace exponents, while the right column describes the asymptotic behavior of G(x) as |x| \to 0, up to a constant.

Laplace exponent \phi(\lambda)                                       Green function G(x) \sim c \cdot
\lambda                                                              |x|^{-d}\, |x|^2
\int_0^1 \lambda^{1-\beta}\, \beta^{\eta}\, d\beta \ (\eta > -1)      |x|^{-d}\, |x|^2\, \log(1/|x|^2)^{\eta+1}
\lambda^{\alpha/2}(\log(1+\lambda))^{\beta/2},\ 0<\alpha<2,\ 0<\beta<2-\alpha    |x|^{-d}\, |x|^{\alpha}\, (\log(1/|x|^2))^{-\beta/2}
\lambda^{\alpha/2},\ 0<\alpha<2                                      |x|^{-d}\, |x|^{\alpha}
\lambda^{\alpha/2}(\log(1+\lambda))^{-\beta/2},\ 0<\alpha<2,\ 0<\beta<\alpha     |x|^{-d}\, |x|^{\alpha}\, (\log(1/|x|^2))^{\beta/2}
\log(1+\lambda^{\alpha/2}),\ 0<\alpha\le 2                           |x|^{-d}\, \big(\log^2(1/|x|^2)\big)^{-1}
\phi^{(n)}(\lambda)                                                  |x|^{-d}\, \big(L_{n-1}(1/|x|^2)\, l_n(1/|x|^2)^2\big)^{-1}
Notice that the singularity of the Green function increases from top to bottom. This is, of course, a consequence of the fact that the corresponding subordinators become slower and slower, hence the subordinate process Y also moves more slowly for small times.

We now look at the asymptotic behavior of the Green function G(x) as |x| \to \infty.
Theorem 4.12 Suppose that S = (S_t : t \ge 0) is a subordinator whose Laplace exponent

\phi(\lambda) = b\lambda + \int_0^\infty (1 - e^{-\lambda t})\, \mu(dt)

is a special Bernstein function. If \phi(\lambda) \sim \gamma^{-1}\lambda^{\alpha/2} as \lambda \to 0+ for \alpha \in (0,2] and a positive constant \gamma, then

G(x) \sim \frac{\gamma}{\pi^{d/2} 2^{\alpha}}\, \frac{\Gamma\big(\frac{d-\alpha}{2}\big)}{\Gamma\big(\frac{\alpha}{2}\big)}\, |x|^{\alpha-d}

as |x| \to \infty.
Proof. Note that the assumption \phi(\lambda) \sim \gamma^{-1}\lambda^{\alpha/2} as \lambda \to 0+ implies that either b > 0 or \mu(0,\infty) = \infty. Thus by Theorem 3.1 the potential measure of the subordinator has a decreasing density. By use of Theorems 2.9 and 2.10, the assumption \phi(\lambda) \sim \gamma^{-1}\lambda^{\alpha/2} as \lambda \to 0+ implies that

u(t) \sim \frac{\gamma}{\Gamma(\alpha/2)}\, t^{\alpha/2-1}\,, \qquad t \to \infty\,.

Since u is decreasing and integrable near 0, it is easy to show that there exists t_0 > 0 such that u(t) \le t^{-1} for all t \in (0, t_0). Hence, we can find a positive constant C such that

u(t) \le C\,(t^{-1} \vee t^{\alpha/2-1})\,. \qquad (4.14)
By a change of variables we have

\int_0^\infty (4\pi t)^{-d/2} \exp\Big(-\frac{|x|^2}{4t}\Big) u(t)\, dt
= \frac{|x|^{-d+2}}{4\pi^{d/2}} \int_0^\infty s^{d/2-2} e^{-s}\, u\Big(\frac{|x|^2}{4s}\Big)\, ds
= \frac{\gamma\, |x|^{-d+\alpha}}{2^{\alpha}\pi^{d/2}\,\Gamma(\alpha/2)} \int_0^\infty s^{d/2-\alpha/2-1} e^{-s}\, \frac{u\big(\frac{|x|^2}{4s}\big)}{\frac{\gamma}{\Gamma(\alpha/2)}\big(\frac{|x|^2}{4s}\big)^{\alpha/2-1}}\, ds\,.
Let |x| \ge 2. Then by (4.14),

\frac{u\big(\frac{|x|^2}{4s}\big)}{\big(\frac{|x|^2}{4s}\big)^{\alpha/2-1}} \le C\Big(\Big(\frac{|x|^2}{4s}\Big)^{-\alpha/2} \vee 1\Big) \le C\,(s^{\alpha/2} \vee 1)\,.

It follows that the integrand in the last formula above is bounded by an integrable function, so we may use the bounded convergence theorem to obtain

\lim_{|x|\to\infty} \frac{1}{|x|^{-d+\alpha}} \int_0^\infty (4\pi t)^{-d/2} \exp\Big(-\frac{|x|^2}{4t}\Big) u(t)\, dt = \frac{\gamma}{2^{\alpha}\pi^{d/2}}\, \frac{\Gamma\big(\frac{d-\alpha}{2}\big)}{\Gamma\big(\frac{\alpha}{2}\big)}\,,

which proves the result. \Box
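When S is itself an \alpha/2-stable subordinator, u(t) = t^{\alpha/2-1}/\Gamma(\alpha/2) exactly (\gamma = 1), and the limit in Theorem 4.12 is then an identity for every x: the integral is just the Riesz kernel. This gives a simple numerical check; the values d = 3, \alpha = 1, |x| = 2 are arbitrary:

```python
import math
from scipy.integrate import quad

d, alpha, r = 3.0, 1.0, 2.0   # arbitrary test values with 0 < alpha < d

# potential density of the alpha/2-stable subordinator (gamma = 1 in the theorem)
u = lambda t: t**(alpha/2 - 1) / math.gamma(alpha/2)

G, _ = quad(lambda t: (4*math.pi*t)**(-d/2) * math.exp(-r*r/(4*t)) * u(t),
            0, math.inf)

# gamma/(pi^{d/2} 2^alpha) * Gamma((d-alpha)/2)/Gamma(alpha/2) * |x|^{alpha-d}
limit = math.gamma((d-alpha)/2) / (math.pi**(d/2) * 2**alpha
                                   * math.gamma(alpha/2)) * r**(alpha - d)

print(round(G / limit, 4))    # exact power-law case: 1.0
```

The equality for all x (not only as |x| \to \infty) reflects the scaling invariance of the stable case.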
Examples of subordinators that satisfy the assumptions of the last theorem are relativistic \beta/2-stable subordinators (with \alpha in the theorem equal to 2), the gamma subordinator (\alpha = 2), geometric \beta/2-stable subordinators (\alpha = \beta), iterated geometric stable subordinators, the Bessel subordinators S_I (\alpha = 1) and S_K (\alpha = 2), and also the subordinators corresponding to Examples 3.15 and 3.16.
Remark 4.13 Suppose that S_t = bt + \tilde S_t, where b is positive and \tilde S_t is a pure jump subordinator with finite expectation and a Laplace exponent that is a special Bernstein function. Then \phi(\lambda) \sim b\lambda as \lambda \to \infty, and \phi(\lambda) \sim \phi'(0+)\lambda as \lambda \to 0. This implies that the Green function of the subordinate process Y satisfies G(x) \asymp G^{(2)}(x) for all x \in \mathbb{R}^d.
4.3 Asymptotic behavior of the jumping function
The goal of this subsection is to establish results on the asymptotic behavior of the jumping function near zero, and results about the rate of decay of the jumping function near zero and near infinity. We start by stating two theorems on the asymptotic behavior of the jumping function at zero for subordinate Brownian motions via the subordinators corresponding to Examples 3.15 and 3.16. We omit the proofs, which rely on Lemma 4.3 and are similar to the proofs of Theorems 4.10 and 4.11.
Theorem 4.14 Suppose \alpha \in (0,2), \beta \in (0, 2-\alpha) and that S is the subordinator corresponding to Example 3.15. Then the jumping function of the subordinate Brownian motion Y via S satisfies

J(x) \sim \frac{\alpha\,\Gamma((d+\alpha)/2)}{2^{1-\alpha}\pi^{d/2}\,\Gamma(1-\alpha/2)}\, \frac{(\log(1/|x|^2))^{\beta/2}}{|x|^{d+\alpha}}\,, \qquad |x| \to 0.

Theorem 4.15 Suppose \alpha \in (0,2), \beta \in (0,\alpha) and that S is the subordinator corresponding to Example 3.16. Then the jumping function of the subordinate Brownian motion Y via S satisfies

J(x) \sim \frac{\alpha\,\Gamma((d+\alpha)/2)}{2^{1-\alpha}\pi^{d/2}\,\Gamma(1-\alpha/2)}\, \frac{1}{|x|^{d+\alpha}\,(\log(1/|x|^2))^{\beta/2}}\,, \qquad |x| \to 0.
We continue by establishing the asymptotic behavior of the jumping function for geometric stable processes. More precisely, for 0 < \alpha \le 2, let \phi(\lambda) = \log(1 + \lambda^{\alpha/2}), let S be the corresponding geometric \alpha/2-stable subordinator, Y_t = X(S_t) the subordinate process, and J the jumping function of Y.

Theorem 4.16 For every \alpha \in (0, 2], it holds that

J(x) \sim \frac{\alpha\,\Gamma(d/2)}{2\pi^{d/2}\,|x|^d}\,, \qquad |x| \to 0.

Proof. We again apply Lemma 4.3, this time with w(t) = \mu(t), the density of the Levy measure of S. By Proposition 3.22 (i), \mu(t) \sim \frac{\alpha}{2t} as t \to 0+, so we take c_0 = \alpha/2, \beta = 1 and \ell(t) = 1. By Proposition 3.22 (ii), \mu(t) is of order t^{-\alpha/2-1} as t \to +\infty, so we may take \gamma = -\alpha/2. Choose \xi = 1/2 and let g = 1. \Box
Theorem 4.17 For every \alpha \in (0,2) we have

J(x) \sim \frac{\alpha}{2^{1-\alpha}\pi^{d/2}}\, \frac{\Gamma\big(\frac{d+\alpha}{2}\big)}{\Gamma\big(1-\frac{\alpha}{2}\big)}\, |x|^{-d-\alpha}\,, \qquad |x| \to \infty.

Proof. By Proposition 3.22 (ii),

\mu(t) \sim \frac{\alpha}{2\Gamma(1-\alpha/2)}\, t^{-\alpha/2-1}\,, \qquad t \to \infty\,.

Now combine this with Proposition 3.22 (i) to get that

\mu(t) \le C\,(t^{-1} \wedge t^{-\alpha/2-1})\,, \qquad t > 0\,. \qquad (4.15)
By a change of variables we have

\int_0^\infty (4\pi t)^{-d/2} \exp\Big(-\frac{|x|^2}{4t}\Big) \mu(t)\, dt
= \frac{|x|^{-d+2}}{4\pi^{d/2}} \int_0^\infty s^{d/2-2} e^{-s}\, \mu\Big(\frac{|x|^2}{4s}\Big)\, ds
= \frac{\alpha\, |x|^{-d-\alpha}}{2^{1-\alpha}\pi^{d/2}\,\Gamma(1-\alpha/2)} \int_0^\infty s^{d/2+\alpha/2-1} e^{-s}\, \frac{\mu\big(\frac{|x|^2}{4s}\big)}{\frac{\alpha}{2\Gamma(1-\alpha/2)}\big(\frac{|x|^2}{4s}\big)^{-\alpha/2-1}}\, ds\,.
Let |x| \ge 2. Then by (4.15),

\frac{\mu\big(\frac{|x|^2}{4s}\big)}{\big(\frac{|x|^2}{4s}\big)^{-\alpha/2-1}} \le C\Big(\Big(\frac{|x|^2}{4s}\Big)^{\alpha/2} \wedge 1\Big) \le C\,.

It follows that the integrand in the last display above is bounded by an integrable function (a constant multiple of s^{d/2+\alpha/2-1}e^{-s}), so we may use the bounded convergence theorem to obtain

\lim_{|x|\to\infty} \frac{1}{|x|^{-d-\alpha}} \int_0^\infty (4\pi t)^{-d/2} \exp\Big(-\frac{|x|^2}{4t}\Big) \mu(t)\, dt = \frac{\alpha}{2^{1-\alpha}\pi^{d/2}}\, \frac{\Gamma\big(\frac{d+\alpha}{2}\big)}{\Gamma\big(1-\frac{\alpha}{2}\big)}\,, \qquad (4.16)

which proves the result. \Box
In the case \alpha = 2, the behavior of J at \infty is different and is given in the following result.

Theorem 4.18 When \alpha = 2, we have

J(x) \sim 2^{-(d+1)/2}\,\pi^{-(d-1)/2}\, \frac{e^{-|x|}}{|x|^{(d+1)/2}}\,, \qquad |x| \to \infty.
Proof. By a change of variables we get that

J(x) = \frac{1}{2}\int_0^\infty t^{-1} e^{-t} (4\pi t)^{-d/2} \exp\Big(-\frac{|x|^2}{4t}\Big)\, dt
= 2^{-d-1}\pi^{-d/2}\,|x|^{-d} \int_0^\infty s^{\frac{d}{2}-1}\, e^{-\frac{s}{4}-\frac{|x|^2}{s}}\, ds
= 2^{-d-1}\pi^{-d/2}\,|x|^{-d}\, I(|x|)\,,

where

I(r) = \int_0^\infty s^{\frac{d}{2}-1}\, e^{-\frac{s}{4}-\frac{r^2}{s}}\, ds\,.
Using the change of variables u = \frac{\sqrt{s}}{2} - \frac{r}{\sqrt{s}} we get

I(r) = e^{-r} \int_0^\infty s^{\frac{d}{2}-1}\, e^{-\big(\frac{\sqrt{s}}{2} - \frac{r}{\sqrt{s}}\big)^2}\, ds
= e^{-r} \int_{-\infty}^{\infty} \frac{2\big(u + \sqrt{u^2+2r}\big)^d}{\sqrt{u^2+2r}}\, e^{-u^2}\, du
= 2\, e^{-r}\, r^{\frac{d-1}{2}} \int_{-\infty}^{\infty} \frac{u + \sqrt{u^2+2r}}{\sqrt{u^2+2r}}\, \Big(\frac{u}{\sqrt{r}} + \sqrt{\frac{u^2}{r} + 2}\Big)^{d-1} e^{-u^2}\, du\,.

Therefore by the dominated convergence theorem we obtain

I(r) \sim 2^{\frac{d+1}{2}}\sqrt{\pi}\; e^{-r}\, r^{\frac{d-1}{2}}\,, \qquad r \to \infty.

Now the assertion of the theorem follows immediately. \Box
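The integral I(r) in the proof above is a standard Bessel-type integral: under the classical identity \int_0^\infty s^{\nu-1} e^{-as - b/s}\, ds = 2(b/a)^{\nu/2} K_\nu(2\sqrt{ab}) (an assumption of this sketch, not stated in the text) one has I(r) = 2(2r)^{d/2} K_{d/2}(r), which can be used to check the asymptotics numerically:

```python
import math
from scipy.integrate import quad
from scipy.special import kv

d = 3.0
I = lambda r: quad(lambda s: s**(d/2 - 1) * math.exp(-s/4 - r*r/s),
                   0, math.inf)[0]

# assumed closed form: I(r) = 2 (2r)^{d/2} K_{d/2}(r)
bessel = lambda r: 2 * (2*r)**(d/2) * kv(d/2, r)
print(abs(I(3.0) - bessel(3.0)) < 1e-6 * bessel(3.0))

# asymptotic from the proof: I(r) ~ 2^{(d+1)/2} sqrt(pi) e^{-r} r^{(d-1)/2}
asym = lambda r: 2**((d+1)/2) * math.sqrt(math.pi) * math.exp(-r) * r**((d-1)/2)
print(round(bessel(100.0) / asym(100.0), 2))   # -> 1.01 (correction of order 1/r)
```

For d = 3 the correction term is exactly 1 + 1/r, since K_{3/2} has a closed form.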
Let Y^{(n)}_t = X(S^{(n)}_t) be Brownian motion subordinated by the iterated geometric stable subordinator S^{(n)}, and let J^{(n)} be the corresponding jumping function. Because of Remark 3.24, we were unable to determine the asymptotic behavior of J^{(n)}.

Assume now that \phi(\lambda) is a complete Bernstein function which behaves asymptotically as \lambda^{\alpha/2} as \lambda \to 0+ (resp. as \lambda \to \infty). Arguments similar to those in Theorems 4.16 and 4.17 would yield that the jumping function J of the corresponding subordinate Brownian motion behaves (up to a constant) as |x|^{-\alpha-d} as |x| \to \infty (resp. as |x|^{-\alpha-d} as |x| \to 0). We are not going to pursue this here: firstly, such behavior of the jumping kernel is known from the case of \alpha-stable processes, and secondly, in the sequel we will not be interested in the precise asymptotics of J, but rather in its rate of decay near zero and near infinity.
Recall that \mu(t) denotes the decreasing density of the Levy measure of the subordinator S (which exists since \phi is assumed to be complete Bernstein), and recall that the function j : (0,\infty) \to (0,\infty) was defined by

j(r) := \int_0^\infty (4\pi)^{-d/2}\, t^{-d/2} \exp\Big(-\frac{r^2}{4t}\Big)\, \mu(t)\, dt\,, \qquad r > 0\,, \qquad (4.17)

and that J(x) = j(|x|), x \in \mathbb{R}^d \setminus \{0\}.
Proposition 4.19 Suppose that there exists a positive constant c_1 > 0 such that

\mu(t) \le c_1\, \mu(2t) \quad \text{for all } t \in (0,8)\,, \qquad (4.18)
\mu(t) \le c_1\, \mu(t+1) \quad \text{for all } t > 1\,. \qquad (4.19)

Then there exists a positive constant c_2 such that

j(r) \le c_2\, j(2r) \quad \text{for all } r \in (0,2)\,, \qquad (4.20)
j(r) \le c_2\, j(r+1) \quad \text{for all } r > 1\,. \qquad (4.21)

Also, r \mapsto j(r) is decreasing on (0,\infty).
Proof. For simplicity we redefine, in this proof only, the function j by dropping the factor (4\pi)^{-d/2} from its definition. This does not affect (4.20) and (4.21).

Let 0 < r < 2. We have

j(2r) = \int_0^\infty t^{-d/2} \exp(-r^2/t)\, \mu(t)\, dt
= \frac{1}{2}\Big( \int_0^{1/2} t^{-d/2} e^{-r^2/t} \mu(t)\, dt + \int_{1/2}^{\infty} t^{-d/2} e^{-r^2/t} \mu(t)\, dt + \int_0^{2} t^{-d/2} e^{-r^2/t} \mu(t)\, dt + \int_{2}^{\infty} t^{-d/2} e^{-r^2/t} \mu(t)\, dt \Big)
\ge \frac{1}{2}\Big( \int_{1/2}^{\infty} t^{-d/2} e^{-r^2/t} \mu(t)\, dt + \int_0^{2} t^{-d/2} e^{-r^2/t} \mu(t)\, dt \Big)
= \frac{1}{2}(I_1 + I_2)\,.
Now,

I_1 = \int_{1/2}^{\infty} t^{-d/2} \exp\Big(-\frac{r^2}{t}\Big) \mu(t)\, dt = \int_{1/2}^{\infty} t^{-d/2} \exp\Big(-\frac{r^2}{4t}\Big) \exp\Big(-\frac{3r^2}{4t}\Big) \mu(t)\, dt
\ge \int_{1/2}^{\infty} t^{-d/2} \exp\Big(-\frac{r^2}{4t}\Big) \exp\Big(-\frac{3r^2}{2}\Big) \mu(t)\, dt \ge e^{-6} \int_{1/2}^{\infty} t^{-d/2} \exp\Big(-\frac{r^2}{4t}\Big) \mu(t)\, dt\,,

I_2 = \int_0^{2} t^{-d/2} \exp\Big(-\frac{r^2}{t}\Big) \mu(t)\, dt = 4^{-d/2+1} \int_0^{1/2} s^{-d/2} \exp\Big(-\frac{r^2}{4s}\Big) \mu(4s)\, ds
\ge c_1^{-2}\, 4^{-d/2+1} \int_0^{1/2} s^{-d/2} \exp\Big(-\frac{r^2}{4s}\Big) \mu(s)\, ds\,.

Combining the two estimates above we get that j(2r) \ge c_3\, j(r) for all r \in (0,2).
To prove (4.21) we first note that for all t \ge 2 and all r \ge 1 it holds that

\frac{(r+1)^2}{t} - \frac{r^2}{t-1} \le 1\,.

This implies that

\exp\Big(-\frac{(r+1)^2}{4t}\Big) \ge e^{-1/4} \exp\Big(-\frac{r^2}{4(t-1)}\Big)\,, \quad \text{for all } r > 1,\ t > 2\,. \qquad (4.22)

Now we have

j(r+1) = \int_0^\infty t^{-d/2} \exp\Big(-\frac{(r+1)^2}{4t}\Big) \mu(t)\, dt
\ge \frac{1}{2}\Big( \int_0^{8} t^{-d/2} \exp\Big(-\frac{(r+1)^2}{4t}\Big) \mu(t)\, dt + \int_{3}^{\infty} t^{-d/2} \exp\Big(-\frac{(r+1)^2}{4t}\Big) \mu(t)\, dt \Big)
= \frac{1}{2}(I_3 + I_4)\,.
For I_3 note that (r+1)^2 \le 4r^2 for all r > 1. Thus

I_3 = \int_0^{8} t^{-d/2} \exp\Big(-\frac{(r+1)^2}{4t}\Big) \mu(t)\, dt \ge \int_0^{8} t^{-d/2} \exp(-r^2/t)\, \mu(t)\, dt
= 4^{-d/2+1} \int_0^{2} s^{-d/2} \exp\Big(-\frac{r^2}{4s}\Big) \mu(4s)\, ds \ge c_1^{-2}\, 4^{-d/2+1} \int_0^{2} s^{-d/2} \exp\Big(-\frac{r^2}{4s}\Big) \mu(s)\, ds\,,

I_4 = \int_{3}^{\infty} t^{-d/2} \exp\Big(-\frac{(r+1)^2}{4t}\Big) \mu(t)\, dt \ge e^{-1/4} \int_{3}^{\infty} t^{-d/2} \exp\Big(-\frac{r^2}{4(t-1)}\Big) \mu(t)\, dt
= e^{-1/4} \int_{2}^{\infty} (s+1)^{-d/2} \exp\Big(-\frac{r^2}{4s}\Big) \mu(s+1)\, ds \ge 2^{-d/2}\, c_1^{-1}\, e^{-1/4} \int_{2}^{\infty} s^{-d/2} \exp\Big(-\frac{r^2}{4s}\Big) \mu(s)\, ds\,,

where in the last step we used (4.19) and the bound (s+1)^{-d/2} \ge (2s)^{-d/2} for s \ge 1. Combining the two estimates above we get that j(r+1) \ge c_4\, j(r) for all r > 1. \Box
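Conditions (4.18) and (4.19) are easy to check numerically for a concrete Levy density; e.g. for the gamma subordinator, \mu(t) = t^{-1}e^{-t}, one has \mu(t)/\mu(2t) = 2e^{t} and \mu(t)/\mu(t+1) = (1+1/t)\,e, so both ratios are bounded on the required ranges. A small sketch:

```python
import math

# Levy density of the gamma subordinator (phi(lambda) = log(1+lambda))
mu = lambda t: math.exp(-t) / t

# (4.18): mu(t) <= c1 mu(2t) on (0,8); here mu(t)/mu(2t) = 2 e^t <= 2 e^8
c1_small = max(mu(t) / mu(2*t) for t in [k/100 for k in range(1, 800)])

# (4.19): mu(t) <= c1 mu(t+1) for t > 1; here mu(t)/mu(t+1) = (1+1/t) e <= 2 e
c1_large = max(mu(t) / mu(t + 1) for t in [1 + k/100 for k in range(1, 1000)])

print(c1_small <= 2*math.exp(8) + 1e-9, c1_large <= 2*math.e + 1e-9)
```

Both ratios stay below the closed-form bounds, so Proposition 4.19 applies to the gamma subordinator.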
Suppose that S = (S_t : t \ge 0) is an \alpha/2-stable subordinator, a relativistic \alpha/2-stable subordinator, or a gamma subordinator. By the explicit forms of the Levy densities given in Examples 3.8, 3.9 and 3.10 it is straightforward to verify that in all three cases \mu(t) satisfies (4.18) and (4.19). For the Bessel subordinators, by use of the asymptotic behavior of the modified Bessel functions I_0 and K_0, one obtains that \mu_I(t) \sim e^{-t}/t, t \to 0+, \mu_I(t) \sim (1/\sqrt{2\pi})\, t^{-3/2}, t \to \infty, \mu_K(t) \sim \log(1/t)/t, t \to 0+, and \mu_K(t) \sim \sqrt{\pi/2}\; e^{-2t}\, t^{-3/2}, t \to \infty. From Propositions 3.25 and 3.26, it is easy to see that the corresponding Levy densities satisfy (4.18) and (4.19). In the case when S is a geometric \alpha/2-stable subordinator or the subordinator corresponding to Example 3.15, respectively Example 3.16, these two properties follow from Proposition 3.22, and from Proposition 3.25, respectively Proposition 3.26. In the case of an iterated geometric stable subordinator with 0 < \alpha < 2, (4.19) is a consequence of Proposition 3.23, but we do not know whether (4.18) holds true. By using a different approach, we will show that if j^{(n)} : (0,\infty) \to (0,\infty) is such that J^{(n)}(x) = j^{(n)}(|x|), then (4.20) and (4.21) are still true.
We first observe that the symmetric geometric \alpha-stable process Y can be obtained by subordinating a symmetric \alpha-stable process X^{\alpha} via a gamma subordinator S. Indeed, the characteristic exponent of X^{\alpha} is |x|^{\alpha} and the Laplace exponent of S is \log(1+\lambda); the composition of these two gives the characteristic exponent \log(1+|x|^{\alpha}) of a symmetric geometric \alpha-stable process. Let p_{\alpha}(t,x,y) = p_{\alpha}(t, x-y) denote the transition densities of the symmetric \alpha-stable process, and let q_{\alpha}(t,x,y) = q_{\alpha}(t, x-y) denote the transition densities of the symmetric geometric \alpha-stable process, x, y \in \mathbb{R}^d, t \ge 0. Then

q_{\alpha}(t,x) = \int_0^\infty p_{\alpha}(s,x)\, \frac{1}{\Gamma(t)}\, s^{t-1} e^{-s}\, ds\,. \qquad (4.23)

Also, similarly as in (4.3), the jumping function of Y can be written as

J(x) = \int_0^\infty p_{\alpha}(t,x)\, t^{-1} e^{-t}\, dt\,, \qquad x \in \mathbb{R}^d \setminus \{0\}\,. \qquad (4.24)
Define functions j^{(n)} : (0,\infty) \to (0,\infty) by

j^{(n)}(r) := \int_0^\infty t^{-d/2} \exp\Big(-\frac{r^2}{4t}\Big)\, \mu^{(n)}(t)\, dt\,, \qquad r > 0\,, \qquad (4.25)

where \mu^{(n)} denotes the Levy density of the iterated geometric stable subordinator, and note that, by (4.3), J^{(n)}(x) = (4\pi)^{-d/2}\, j^{(n)}(|x|), x \in \mathbb{R}^d \setminus \{0\}.
Proposition 4.20 For any \alpha \in (0,2) and n \ge 1, there exists a positive constant c such that

j^{(n)}(r) \le c\, j^{(n)}(2r)\,, \quad \text{for all } r > 0\,, \qquad (4.26)

and

j^{(n)}(r) \le c\, j^{(n)}(r+1)\,, \quad \text{for all } r > 1\,. \qquad (4.27)

Proof. The inequality (4.27) follows from Proposition 4.19. Now we prove (4.26). It is known (see Theorem 2.1 of [12]) that there exist positive constants C_1 and C_2 such that for all t > 0 and all x \in \mathbb{R}^d,

C_1 \min(t^{-d/\alpha},\, t\,|x|^{-d-\alpha}) \le p_{\alpha}(t,x) \le C_2 \min(t^{-d/\alpha},\, t\,|x|^{-d-\alpha})\,. \qquad (4.28)

Using these estimates one can easily see that there exists C_3 > 0 such that

p_{\alpha}(t,x) \le C_3\, p_{\alpha}(t,2x)\,, \quad \text{for all } t > 0 \text{ and } x \in \mathbb{R}^d. \qquad (4.29)

Let J^{(1)}(x) = J(x) and q^{(1)}_{\alpha}(t,x) = q_{\alpha}(t,x). By use of (4.29), it follows from (4.23) and (4.24) that J^{(1)}(x) \le C_3\, J^{(1)}(2x) for all x \in \mathbb{R}^d \setminus \{0\}, and q^{(1)}_{\alpha}(t,x) \le C_3\, q^{(1)}_{\alpha}(t,2x) for all t > 0 and x \in \mathbb{R}^d. Further, Y^{(2)} is obtained by subordinating Y^{(1)} by a geometric \alpha/2-stable subordinator S. Therefore,

J^{(2)}(x) = \frac{1}{2}\int_0^\infty q^{(1)}_{\alpha}(s,x)\, \mu_{\alpha/2}(s)\, ds\,, \qquad q^{(2)}_{\alpha}(t,x) = \int_0^\infty q^{(1)}_{\alpha}(s,x)\, f_{\alpha/2}(t,s)\, ds\,, \qquad (4.30)

where \mu_{\alpha/2}(s) is the Levy density of S and f_{\alpha/2}(t,s) is the density of \mathbb{P}(S_t \in ds). By use of q^{(1)}_{\alpha}(s,x) \le C_3\, q^{(1)}_{\alpha}(s,2x), it follows that J^{(2)}(x) \le C_3\, J^{(2)}(2x) and q^{(2)}_{\alpha}(t,x) \le C_3\, q^{(2)}_{\alpha}(t,2x) for all t > 0 and x \in \mathbb{R}^d. The proof is completed by induction. \Box
We conclude this section with a result that is essential in proving the Harnack inequality for jump processes, and was the motivation behind Propositions 4.19 and 4.20.

Proposition 4.21 Let Y be a subordinate Brownian motion such that the function j defined in (4.17) satisfies conditions (4.20) and (4.21). There exist positive constants C_4 and C_5 such that if r \in (0,1), x \in B(0,r), and H is a nonnegative function with support in B(0,2r)^c, then

E_x H(Y(\tau_{B(0,r)})) \le C_4\, (E_x \tau_{B(0,r)}) \int H(z) J(z)\, dz

and

E_x H(Y(\tau_{B(0,r)})) \ge C_5\, (E_x \tau_{B(0,r)}) \int H(z) J(z)\, dz\,.
Proof. Let y \in B(0,r) and z \in B(0,2r)^c. If z \in B(0,2) we use the estimates

2^{-1}|z| \le |z-y| \le 2|z|\,, \qquad (4.31)

while if z \notin B(0,2) we use

|z| - 1 \le |z-y| \le |z| + 1\,. \qquad (4.32)

Let B \subset B(0,2r)^c. Then by using the Levy system we get

E_x 1_B(Y(\tau_{B(0,r)})) = E_x \int_0^{\tau_{B(0,r)}} \int_B J(z - Y_s)\, dz\, ds = E_x \int_0^{\tau_{B(0,r)}} \int_B j(|z - Y_s|)\, dz\, ds\,.

By use of (4.20), (4.21), (4.31), and (4.32), the inner integral is estimated as follows:

\int_B j(|z - Y_s|)\, dz = \int_{B \cap B(0,2)} j(|z - Y_s|)\, dz + \int_{B \cap B(0,2)^c} j(|z - Y_s|)\, dz
\le \int_{B \cap B(0,2)} j(2^{-1}|z|)\, dz + \int_{B \cap B(0,2)^c} j(|z| - 1)\, dz
\le \int_{B \cap B(0,2)} c_2\, j(|z|)\, dz + \int_{B \cap B(0,2)^c} c_2\, j(|z|)\, dz
= c_2 \int_B J(z)\, dz\,.

Therefore

E_x 1_B(Y(\tau_{B(0,r)})) \le E_x \int_0^{\tau_{B(0,r)}} c_2 \int_B J(z)\, dz\, ds = c_2\, E_x(\tau_{B(0,r)}) \int 1_B(z) J(z)\, dz\,.

Using linearity we get the above inequality when 1_B is replaced by a simple function. Approximating H by simple functions and taking limits we obtain the first inequality in the statement of the proposition.

The second inequality is proved in the same way. \Box
4.4 Transition densities of symmetric geometric stable processes
Recall that for 0 < \alpha \le 2, q_{\alpha}(t,x) denotes the transition density of the symmetric geometric \alpha-stable process. The asymptotic behavior of q_{\alpha}(1,x) as |x| \to \infty is given in the following result.

Proposition 4.22 For \alpha \in (0,2) we have

q_{\alpha}(1,x) \sim \alpha\, 2^{\alpha-1} \sin\frac{\alpha\pi}{2}\; \frac{\Gamma\big(\frac{d+\alpha}{2}\big)\,\Gamma\big(\frac{\alpha}{2}\big)}{\pi^{\frac{d}{2}+1}\, |x|^{d+\alpha}}\,, \qquad |x| \to \infty.

For \alpha = 2 we have

q_2(1,x) \sim 2^{-(d+1)/2}\,\pi^{-(d-1)/2}\, \frac{e^{-|x|}}{|x|^{(d-1)/2}}\,, \qquad |x| \to \infty.

Proof. The proof of the case \alpha < 2 is similar to the proof of Proposition 3.27 and uses (4.28), while the proof of the case \alpha = 2 is similar to the proof of Theorem 4.18. We omit the details. \Box
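For d = 3 the \alpha = 2 asymptotics above can be verified exactly, since q_2(1,x) reduces to a Bessel K_{1/2} integral with a closed form; a numerical sketch (the constant is the one reconstructed here):

```python
import math
from scipy.integrate import quad

d = 3.0
# q_2(1,x) for |x| = r: Brownian motion time-changed by a gamma subordinator at t = 1
q2 = lambda r: quad(lambda s: (4*math.pi*s)**(-d/2)
                    * math.exp(-r*r/(4*s) - s), 0, math.inf)[0]

# Proposition 4.22 (alpha = 2):
# q_2(1,x) ~ 2^{-(d+1)/2} pi^{-(d-1)/2} e^{-|x|} / |x|^{(d-1)/2}
asym = lambda r: (2**(-(d+1)/2) * math.pi**(-(d-1)/2)
                  * math.exp(-r) / r**((d-1)/2))

for r in (5.0, 10.0):
    print(round(q2(r) / asym(r), 2))   # -> 1.0 (exact for d = 3, since K_{1/2}
                                       #    has a closed form)
```

The ratio is identically 1 in d = 3, which pins down the constant in the \alpha = 2 statement.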
The following theorem from [24] provides a sharp estimate for q_{\alpha}(t,x) for small time t in the case 0 < \alpha < 2.

Theorem 4.23 Let \alpha \in (0,2). There are positive constants C_1 < C_2 such that for all x \in \mathbb{R}^d and 0 < t < 1 \wedge \frac{d}{2\alpha},

C_1\, t \min(|x|^{-d-\alpha},\, |x|^{-d+t\alpha}) \le q_{\alpha}(t,x) \le C_2\, t \min(|x|^{-d-\alpha},\, |x|^{-d+t\alpha})\,.

Proof. The following sharp estimate for the stable densities, a form of (4.28), is well known (see, for instance, [12]):

p_{\alpha}(s,x) \asymp s^{-\frac{d}{\alpha}} \Big(1 \wedge \frac{s^{\frac{d+\alpha}{\alpha}}}{|x|^{d+\alpha}}\Big)\,, \qquad \forall s > 0 \text{ and } x \in \mathbb{R}^d\,.

Hence, by (4.23) it follows that q_{\alpha}(t,x) \asymp \frac{1}{\Gamma(t)} I(t,|x|), where

I(t,r) := \int_0^\infty s^{-\frac{d}{\alpha}} \Big(1 \wedge \frac{s^{\frac{d+\alpha}{\alpha}}}{r^{d+\alpha}}\Big)\, s^{t-1} e^{-s}\, ds
= \frac{1}{r^{d+\alpha}} \int_0^{r^{\alpha}} s^{t} e^{-s}\, ds + \int_{r^{\alpha}}^{\infty} s^{t-1-d/\alpha} e^{-s}\, ds\,.
From now on assume that 0 < t \le 1 \wedge \frac{d}{2\alpha}. Then for 0 < r \le 1,

I(t,r) \asymp \frac{1}{r^{d+\alpha}} \int_0^{r^{\alpha}} s^{t}\, ds + \int_{r^{\alpha}}^{1} s^{t-1-d/\alpha} e^{-s}\, ds + \int_1^{\infty} s^{t-1-d/\alpha} e^{-s}\, ds
= \frac{1}{t+1}\, r^{\alpha t - d} + \frac{1}{d/\alpha - t}\big(r^{\alpha t - d} - 1\big) + \int_1^{\infty} s^{t-1-d/\alpha} e^{-s}\, ds \asymp r^{\alpha t - d}\,.

We also have

I(t,r) \le \frac{1}{r^{d+\alpha}} \int_0^{\infty} s^{t} e^{-s}\, ds = \frac{\Gamma(t+1)}{r^{d+\alpha}} \le \frac{1}{r^{d+\alpha}}\,, \qquad r > 0\,,
I(t,r) \ge \frac{1}{r^{d+\alpha}} \int_0^{1} s^{t} e^{-s}\, ds \ge \frac{1}{r^{d+\alpha}}\, \frac{1}{(1+t)\,e} \ge \frac{1}{2e\, r^{d+\alpha}}\,, \qquad r > 1\,.

Note that

\frac{1}{r^{d+\alpha}} \wedge \frac{1}{r^{d-t\alpha}} = \begin{cases} \dfrac{1}{r^{d-t\alpha}}\,, & 0 < r \le 1\,,\\[2mm] \dfrac{1}{r^{d+\alpha}}\,, & r > 1\,. \end{cases}

Therefore I(t,r) \asymp \frac{1}{r^{d+\alpha}} \wedge \frac{1}{r^{d-t\alpha}}. This implies that

q_{\alpha}(t,x) \asymp \frac{1}{\Gamma(t)}\, I(t,|x|) \asymp \frac{1}{\Gamma(t)} \Big(\frac{1}{|x|^{d+\alpha}} \wedge \frac{1}{|x|^{d-t\alpha}}\Big) \asymp t\, \Big(\frac{1}{|x|^{d+\alpha}} \wedge \frac{1}{|x|^{d-t\alpha}}\Big)\,,

since \Gamma(t) \asymp t^{-1} for 0 < t \le 1. \Box
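The two-sided bound of Theorem 4.23 can be probed numerically via the subordination formula (4.23); a rough sketch for d = 1, \alpha = 1 (where p_1 is the Cauchy density), checking only that the ratio stays within loose constants:

```python
import math
from scipy.integrate import quad

# d = 1, alpha = 1: p_1(s,x) is the Cauchy density s/(pi (s^2 + x^2))
p1 = lambda s, x: s / (math.pi * (s*s + x*x))

def q1(t, x):   # subordination formula (4.23) with a gamma subordinator
    return quad(lambda s: p1(s, x) * s**(t-1) * math.exp(-s) / math.gamma(t),
                0, math.inf)[0]

t = 0.2         # 0 < t < 1 ^ (d/(2 alpha)) = 1/2
for x in (0.01, 0.1, 1.0, 10.0):
    # t * min(|x|^{-d-alpha}, |x|^{-d+t alpha}) with d = alpha = 1
    bound = t * min(x**(-2), x**(-1 + t))
    print(0.01 < q1(t, x) / bound < 100)
```

The ratio stays of order one over four decades of |x|, as the theorem predicts; the bracketing constants used here are deliberately loose.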
Note that by taking x = 0, one obtains that q_{\alpha}(t,0) = \infty for 0 < t < 1 \wedge \frac{d}{2\alpha}. This somewhat unusual feature of the transition density is easier to see when \alpha = 2, i.e., in the case of a gamma subordinator. Indeed, then

q_2(t,x) := \int_0^\infty (4\pi s)^{-d/2}\, e^{-|x|^2/(4s)}\, \frac{1}{\Gamma(t)}\, s^{t-1} e^{-s}\, ds\,,

and therefore

q_2(t,0) = \frac{(4\pi)^{-d/2}}{\Gamma(t)} \int_0^\infty s^{-d/2+t-1} e^{-s}\, ds = \begin{cases} +\infty\,, & t \le d/2\,,\\[2mm] \dfrac{\Gamma(t-d/2)}{(4\pi)^{d/2}\,\Gamma(t)}\,, & t > d/2\,. \end{cases}

Assume now that S^{(2)} is the iterated geometric stable subordinator with Laplace exponent \phi(\lambda) = \log(1 + \log(1+\lambda)), and let q^{(2)}_2(t,x) be the transition density of the process Y^{(2)}_t = X(S^{(2)}_t). Then by (4.30),

q^{(2)}_2(t,0) = \int_0^\infty q_2(s,0)\, \frac{1}{\Gamma(t)}\, s^{t-1} e^{-s}\, ds = \infty

for all t > 0.
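The closed form for q_2(t,0) is easy to confirm numerically; for d = 1 and t = 1 it equals \Gamma(1/2)/((4\pi)^{1/2}\Gamma(1)) = 1/2:

```python
import math
from scipy.integrate import quad

d, t = 1.0, 1.0   # need t > d/2 for a finite value at the origin

# q_2(t,0) = (4 pi)^{-d/2}/Gamma(t) * int_0^infty s^{t-d/2-1} e^{-s} ds
val = quad(lambda s: (4*math.pi*s)**(-d/2) * s**(t-1) * math.exp(-s),
           0, math.inf)[0] / math.gamma(t)

# closed form: Gamma(t - d/2) / ((4 pi)^{d/2} Gamma(t)) = 1/2 here
print(round(val, 4))   # -> 0.5
```

The integrable s^{-1/2} singularity at 0 is handled by the adaptive quadrature.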
5 Harnack inequality for subordinate Brownian motion

5.1 Capacity and exit time estimates for some symmetric Levy processes
The purpose of this subsection is to establish lower and upper estimates for the capacity
of balls and the exit time from balls, with respect to a class of radially symmetric Levy
processes.
Suppose that Y = (Yt,Px) is a transient radially symmetric Levy process on Rd. We
will assume that the potential kernel of Y is absolutely continuous with a density G(x, y) =
G(|y − x|) with respect to the Lebesgue measure. Let us assume the following condition:
G : [0,∞) → (0,∞] is a positive and decreasing function satisfying G(0) = ∞. We will have
need of the following elementary lemma.
Lemma 5.1 There exists a positive constant C_1 = C_1(d) such that for every r > 0 and all x \in B(0,r),

C_1 \int_{B(0,r)} G(|y|)\, dy \le \int_{B(0,r)} G(x,y)\, dy \le \int_{B(0,r)} G(|y|)\, dy\,.

Moreover, the supremum of \int_{B(0,r)} G(x,y)\, dy is attained at x = 0, while the infimum is attained at any point on the boundary of B(0,r).

Proof. The proof is elementary. We only present the proof of the left-hand side inequality for d \ge 2. Consider the intersection of B(0,r) and B(x,r). This intersection contains the intersection of B(x,r) and the cone with vertex x, of aperture equal to \pi/3, pointing towards the origin. Let C(x) be that intersection. Then

\int_{B(0,r)} G(|y-x|)\, dy \ge \int_{C(x)} G(|y-x|)\, dy \ge c_1 \int_{B(x,r)} G(|y-x|)\, dy = c_1 \int_{B(0,r)} G(|y|)\, dy\,,

where the constant c_1 depends only on the dimension d. It is easy to see that the infimum of \int_{B(0,r)} G(x,y)\, dy is attained at any point on the boundary of B(0,r). \Box
Let Cap denote the (0-order) capacity with respect to Y (for the definition of capacity see e.g. [13] or [58]). For a measure \mu denote

G\mu(x) := \int G(x,y)\, \mu(dy)\,.

For any compact subset K of \mathbb{R}^d, let \mathcal{P}_K be the set of probability measures supported by K. Define

e(K) := \inf_{\mu \in \mathcal{P}_K} \int G\mu(x)\, \mu(dx)\,.

Since the kernel G satisfies the maximum principle (see, for example, Theorem 5.2.2 in [25]), it follows from ([32], page 159) that for any compact subset K of \mathbb{R}^d,

Cap(K) = \frac{1}{\inf_{\mu \in \mathcal{P}_K} \sup_{x \in \mathrm{Supp}(\mu)} G\mu(x)} = \frac{1}{e(K)}\,. \qquad (5.1)

Furthermore, the infimum is attained at the capacitary measure \mu_K. The following lemma is essentially proved in [45].
Lemma 5.2 Let K be a compact subset of \mathbb{R}^d. For any probability measure \mu on K, it holds that

\inf_{x \in \mathrm{Supp}(\mu)} G\mu(x) \le e(K) \le \sup_{x \in \mathrm{Supp}(\mu)} G\mu(x)\,. \qquad (5.2)

Proof. The right-hand side inequality follows immediately from (5.1). In order to prove the left-hand side inequality, suppose that for some probability measure \mu on K it holds that e(K) < \inf_{x\in \mathrm{Supp}(\mu)} G\mu(x). Then e(K) + \varepsilon < \inf_{x\in\mathrm{Supp}(\mu)} G\mu(x) for some \varepsilon > 0. We first have

\int_K G\mu(x)\, \mu_K(dx) > \int_K (e(K)+\varepsilon)\, \mu_K(dx) = e(K) + \varepsilon\,.

On the other hand,

\int_K G\mu(x)\, \mu_K(dx) = \int_K G\mu_K(x)\, \mu(dx) = \int_K e(K)\, \mu(dx) = e(K)\,,

where we have used the fact that G\mu_K = e(K) quasi-everywhere on K, and that a measure of finite energy does not charge sets of capacity zero. This contradiction proves the lemma. \Box
Proposition 5.3 There exist positive constants C_2 < C_3, depending only on d, such that for all r > 0,

\frac{C_2\, r^d}{\int_{B(0,r)} G(|y|)\, dy} \le Cap(B(0,r)) \le \frac{C_3\, r^d}{\int_{B(0,r)} G(|y|)\, dy}\,. \qquad (5.3)

Proof. Let m_r(dy) be the normalized Lebesgue measure on B(0,r); thus, m_r(dy) = dy/(c_1 r^d), where c_1 is the volume of the unit ball. Consider \sup_{x\in B(0,r)} Gm_r(x). By Lemma 5.1, the supremum is attained at x = 0, and so

\sup_{x\in B(0,r)} Gm_r(x) = \frac{1}{c_1 r^d} \int_{B(0,r)} G(|y|)\, dy\,.

Therefore, from Lemma 5.2,

Cap(B(0,r)) \ge \frac{c_1 r^d}{\int_{B(0,r)} G(|y|)\, dy}\,. \qquad (5.4)

For the right-hand side of (5.3), it follows from Lemma 5.1 and Lemma 5.2 that

Cap(B(0,r)) \le \frac{1}{Gm_r(z)} = \frac{c_1 r^d}{\int_{B(0,r)} G(z,y)\, dy} \le \frac{c_1 r^d}{C_1 \int_{B(0,r)} G(|y|)\, dy}\,,

where z \in \partial B(0,r). \Box
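For the Riesz kernel G(|y|) = |y|^{\alpha-d} (the \alpha-stable case), the two-sided bound (5.3) reproduces the classical scaling Cap(B(0,r)) \asymp r^{d-\alpha}; a minimal numerical sketch:

```python
from scipy.integrate import quad

d, alpha = 3.0, 1.5   # arbitrary with 0 < alpha < d; G(|y|) = |y|^{alpha-d}

def cap_estimate(r):
    # r^d / int_{B(0,r)} G(|y|) dy, computed radially (the surface-area
    # constant cancels in the scaling check below)
    return r**d / quad(lambda u: u**(d-1) * u**(alpha-d), 0, r)[0]

# (5.3) predicts Cap(B(0,r)) comparable to r^{d-alpha}
print(round(cap_estimate(2.0) / cap_estimate(1.0), 4))   # -> 2^{1.5} ~ 2.8284
```

Doubling the radius multiplies the estimate by exactly 2^{d-\alpha}, the expected capacity scaling.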
In the remaining part of this section we assume in addition that G satisfies the following assumption: there exist r_0 > 0 and c_0 \in (0,1) such that

c_0\, G(r) \ge G(2r)\,, \qquad 0 < 2r < r_0\,. \qquad (5.5)

Note that if G is regularly varying at 0 with index \delta < 0, i.e., if

\lim_{r\to 0} \frac{G(2r)}{G(r)} = 2^{\delta}\,,

then (5.5) is satisfied with c_0 = (2^{\delta}+1)/2 for some positive r_0. Let \tau_{B(0,r)} = \inf\{t > 0 : Y_t \notin B(0,r)\} be the first exit time of Y from the ball B(0,r).
Proposition 5.4 There exists a positive constant C_4 such that for all r \in (0, r_0/2),

C_4 \int_{B(0,r/6)} G(|y|)\, dy \le \inf_{x\in B(0,r/6)} E_x \tau_{B(0,r)} \le \sup_{x\in B(0,r)} E_x \tau_{B(0,r)} \le \int_{B(0,r)} G(|y|)\, dy\,. \qquad (5.6)

Proof. Let G_{B(0,r)}(x,y) denote the Green function of the process Y killed upon exiting B(0,r). Clearly, G_{B(0,r)}(x,y) \le G(x,y) for x, y \in B(0,r). Therefore,

E_x \tau_{B(0,r)} = \int_{B(0,r)} G_{B(0,r)}(x,y)\, dy \le \int_{B(0,r)} G(x,y)\, dy \le \int_{B(0,r)} G(|y|)\, dy\,.

For the left-hand side inequality, let r \in (0, r_0/2), and let x, y \in B(0,r/6). Then

G_{B(0,r)}(x,y) = G(x,y) - E_x G(Y(\tau_{B(0,r)}), y) \ge G(|y-x|) - G(2|y-x|)\,.

The last inequality follows because |y - Y(\tau_{B(0,r)})| \ge \frac{2}{3} r \ge 2|y-x|. Let c_1 = 1 - c_0 \in (0,1). By (5.5) we have that for all u \in (0, r_0/2), G(u) - G(2u) \ge c_1 G(u). Hence, G(|y-x|) - G(2|y-x|) \ge c_1 G(|y-x|), which implies that G_{B(0,r)}(x,y) \ge c_1 G(x,y) for all x, y \in B(0,r/6). Now, for x \in B(0,r/6),

E_x \tau_{B(0,r)} = \int_{B(0,r)} G_{B(0,r)}(x,y)\, dy \ge \int_{B(0,r/6)} G_{B(0,r)}(x,y)\, dy
\ge c_1 \int_{B(0,r/6)} G(x,y)\, dy \ge c_1 C_1 \int_{B(0,r/6)} G(|y|)\, dy\,,

where the last inequality follows from Lemma 5.1. \Box
Example 5.5 We illustrate the last two propositions by applying them to the iterated geometric stable process Y^{(n)} introduced in Example 4.2 (iv) and (v). Hence, we assume that d > 2(\alpha/2)^n. By a slight abuse of notation we define a function G^{(n)} : [0,\infty) \to (0,\infty] by G^{(n)}(|x|) = G^{(n)}(x). Note that by Theorem 4.8, G^{(n)} is regularly varying at zero with index -d. Let r_0 be the constant from (5.5). Let us first look at the asymptotic behavior of \int_{B(0,r)} G^{(n)}(|y|)\, dy for small r. We have

\int_{B(0,r)} G^{(n)}(|y|)\, dy = c_d \int_0^r u^{d-1} G^{(n)}(u)\, du
\sim \frac{c_d\,\Gamma(d/2)}{2\alpha\pi^{d/2}} \int_0^r \frac{du}{u\, L_{n-1}(1/u^2)\, l_n(1/u^2)^2} = \frac{c_d\,\Gamma(d/2)}{4\alpha\pi^{d/2}} \int_0^{r^2} \frac{dv}{v\, L_{n-1}(1/v)\, l_n(1/v)^2}
= \frac{c_d\,\Gamma(d/2)}{4\alpha\pi^{d/2}}\, \frac{1}{l_n(1/r^2)} \sim c_{\alpha,d}\, \frac{1}{l_n(1/r)}\,, \qquad r \to 0\,.

It follows from Proposition 5.3 that there exist positive constants C_5 \le C_6 such that for all r \in (0, 1/e_n),

C_5\, r^d\, l_n(1/r) \le Cap(B(0,r)) \le C_6\, r^d\, l_n(1/r)\,.

Similarly, it follows from Proposition 5.4 that there exist positive constants C_7 \le C_8 such that for all r \in (0, (1/e_n) \wedge (r_0/2)),

\frac{C_7}{l_n(1/r)} \le \inf_{x\in B(0,r/6)} E_x \tau_{B(0,r)} \le \sup_{x\in B(0,r)} E_x \tau_{B(0,r)} \le \frac{C_8}{l_n(1/r)}\,. \qquad (5.7)

Here we also used the fact that l_n is slowly varying.

By use of Theorem 4.12 and Proposition 5.3, we can estimate the capacity of large balls. It easily follows that, as r \to \infty, Cap(B(0,r)) is of the order r^{d-\alpha(\alpha/2)^{n-1}}.
5.2 Krylov-Safonov-type estimate
In this subsection we retain the assumptions from the beginning of the previous one. Thus, Y = (Y_t, P_x) is a transient radially symmetric Levy process on \mathbb{R}^d whose potential kernel has a density G(x,y) = G(|y-x|) which is positive and decreasing with G(0) = \infty. Let r_1 \in (0,1) and let \ell : (1/r_1, \infty) \to (0,\infty) be a function which is slowly varying at \infty. Let \beta \in [0,1] be such that d + 2\beta - 2 > 0. We introduce the following additional assumption on the density G: there exists a positive constant c_1 such that

G(x) \sim \frac{c_1}{|x|^{d+2\beta-2}\,\ell(1/|x|^2)}\,, \qquad |x| \to 0\,. \qquad (5.8)

If we abuse notation and let G(|x|) = G(x), then G is regularly varying at 0 with index -d - 2\beta + 2 < 0, and hence satisfies assumption (5.5) with some r_0 > 0. In order to simplify notation, we define the function g : (0, r_1) \to (0,\infty) by

g(r) = \frac{1}{r^{d+2\beta-2}\,\ell(1/r^2)}\,.

Clearly, g is regularly varying at 0 with index -d - 2\beta + 2 < 0. Let \bar g be a monotone equivalent of g at 0. More precisely, we define \bar g : (0, r_1/2) \to (0,\infty) by

\bar g(r) := \sup\{g(\rho) : r \le \rho \le r_1\}\,.

By Theorem 1.5.3 in [10] (its 0-version), \bar g(r) \sim g(r) as r \to 0. Moreover, \bar g(r) \ge g(r), and \bar g is decreasing. Let r_2 = \min(r_0, r_1). There exist positive constants C_9 < C_{10} such that

C_9\, \bar g(r) \le G(r) \le C_{10}\, \bar g(r)\,, \qquad r < r_2\,. \qquad (5.9)
We define
\[
c = \max\left\{\frac13\left(\frac{4C_{10}}{C_9}\right)^{\frac{1}{d+2\beta-2}},\ 1\right\}\,. \tag{5.10}
\]
Since $\bar g$ is regularly varying at 0 with index $-d-2\beta+2$, there exists $r_3 > 0$ such that
\[
\frac12\left(\frac{1}{3c}\right)^{d+2\beta-2} \le \frac{\bar g(6cr)}{\bar g(2r)} \le 2\left(\frac{1}{3c}\right)^{d+2\beta-2}\,, \qquad r < r_3\,. \tag{5.11}
\]
Finally, let
\[
R = \min(r_2, r_3, 1) = \min(r_0, r_1, r_3, 1)\,. \tag{5.12}
\]
Lemma 5.6 There exists $C_{11} > 0$ such that for any $r \in (0, (7c)^{-1}R)$, any closed subset $A$ of $B(0,r)$, and any $y \in B(0,r)$,
\[
\mathbb{P}_y(T_A < \tau_{B(0,7cr)}) \ge C_{11}\,\kappa(r)\,\frac{\mathrm{Cap}(A)}{\mathrm{Cap}(B(0,r))}\,,
\]
where
\[
\kappa(r) = \frac{r^d g(r)}{\int_0^r \rho^{d-1} g(\rho)\,d\rho}\,. \tag{5.13}
\]
Proof. Without loss of generality we may assume that $\mathrm{Cap}(A) > 0$. Let $G_{B(0,7cr)}$ be the Green function of the process obtained by killing $Y$ upon exiting from $B(0,7cr)$. If $\nu$ is the capacitary measure of $A$ with respect to $Y$, then we have for all $y \in B(0,r)$,
\begin{align*}
G_{B(0,7cr)}\nu(y) &= \mathbb{E}_y[G_{B(0,7cr)}\nu(Y_{T_A})\,;\ T_A < \tau_{B(0,7cr)}]\\
&\le \sup_{z\in\mathbb{R}^d} G_{B(0,7cr)}\nu(z)\; \mathbb{P}_y(T_A < \tau_{B(0,7cr)})\\
&\le \mathbb{P}_y(T_A < \tau_{B(0,7cr)})\,.
\end{align*}
On the other hand, we have for all $y \in B(0,r)$,
\[
G_{B(0,7cr)}\nu(y) = \int G_{B(0,7cr)}(y,z)\,\nu(dz) \ge \nu(A)\inf_{z\in B(0,r)} G_{B(0,7cr)}(y,z) = \mathrm{Cap}(A)\inf_{z\in B(0,r)} G_{B(0,7cr)}(y,z)\,.
\]
In order to estimate the infimum in the last display, note that $G_{B(0,7cr)}(y,z) = G(y,z) - \mathbb{E}_y[G(Y_{\tau_{B(0,7cr)}}, z)]$. Since $|y-z| < 2r < R$, it follows by (5.9) and the monotonicity of $\bar g$ that
\[
G(y,z) \ge C_9\, \bar g(|z-y|) \ge C_9\, \bar g(2r)\,. \tag{5.14}
\]
Now we consider $G(Y_{\tau_{B(0,7cr)}}, z)$. First note that $|Y_{\tau_{B(0,7cr)}} - z| \ge 7cr - r \ge 7cr - cr = 6cr$. If $|Y_{\tau_{B(0,7cr)}} - z| \le R$, then by (5.9) and the monotonicity of $\bar g$,
\[
G(Y_{\tau_{B(0,7cr)}}, z) \le C_{10}\, \bar g(|z - Y_{\tau_{B(0,7cr)}}|) \le C_{10}\, \bar g(6cr)\,.
\]
If, on the other hand, $|Y_{\tau_{B(0,7cr)}} - z| \ge R$, then $G(Y_{\tau_{B(0,7cr)}}, z) \le G(w)$, where $w \in \mathbb{R}^d$ is any point such that $|w| = R$. Here we have used the monotonicity of $G$. For $|w| = R$ we have that $G(w) \le C_{10}\,\bar g(|w|) = C_{10}\,\bar g(R) \le C_{10}\,\bar g(6cr)$. Therefore
\[
\mathbb{E}_y[G(Y_{\tau_{B(0,7cr)}}, z)] \le C_{10}\, \bar g(6cr)\,. \tag{5.15}
\]
By use of (5.14) and (5.15) we obtain
\begin{align*}
G_{B(0,7cr)}(y,z) &\ge C_9\,\bar g(2r) - C_{10}\,\bar g(6cr) = \bar g(2r)\left(C_9 - C_{10}\,\frac{\bar g(6cr)}{\bar g(2r)}\right)\\
&\ge \bar g(2r)\left(C_9 - 2C_{10}\left(\frac{1}{3c}\right)^{d+2\beta-2}\right)
\ge \bar g(2r)\left(C_9 - 2C_{10}\,\frac{C_9}{4C_{10}}\right) = \frac{C_9}{2}\,\bar g(2r)\,,
\end{align*}
where the next to last line follows from (5.11) and the last from definition (5.10). By using
one more time that g is regularly varying at 0, we conclude that there exists a constant
C12 > 0 such that for all y, z ∈ B(0, r),
\[
G_{B(0,7cr)}(y,z) \ge C_{12}\, g(r)\,.
\]
Further, it follows from Proposition 5.3 that there exists a constant $C_{13} > 0$ such that
\[
\frac{C_{13}}{\mathrm{Cap}(B(0,r))}\,\frac{r^d}{\int_0^r \rho^{d-1} g(\rho)\,d\rho} \le 1\,. \tag{5.16}
\]
Hence
\[
G_{B(0,7cr)}(y,z) \ge C_{12} C_{13}\,\frac{1}{\mathrm{Cap}(B(0,r))}\,\frac{r^d g(r)}{\int_0^r \rho^{d-1} g(\rho)\,d\rho} \ge C_{14}\,\frac{1}{\mathrm{Cap}(B(0,r))}\,\kappa(r)\,.
\]
To finish the proof, note that
\[
\mathbb{P}_y(T_A < \tau_{B(0,7cr)}) \ge G_{B(0,7cr)}\nu(y) \ge C_{14}\,\kappa(r)\,\frac{\mathrm{Cap}(A)}{\mathrm{Cap}(B(0,r))}\,.
\]
□
Remark 5.7 Note that in the estimate (5.16) we could use $\bar g$ instead of $g$. Together with the fact that $\bar g(r) \ge g(r)$, this would lead to the hitting time estimate
\[
\mathbb{P}_y(T_A < \tau_{B(0,7cr)}) \ge C_{11}\,\bar\kappa(r)\,\frac{\mathrm{Cap}(A)}{\mathrm{Cap}(B(0,r))}\,,
\]
where
\[
\bar\kappa(r) = \frac{r^d \bar g(r)}{\int_0^r \rho^{d-1} \bar g(\rho)\,d\rho}\,. \tag{5.17}
\]
We will apply the above lemma to subordinate Brownian motions. Assume first that $Y_t = X(S_t)$, where $S = (S_t : t \ge 0)$ is the special subordinator with Laplace exponent $\phi$ satisfying $\phi(\lambda) \sim \lambda^{\alpha/2}\ell(\lambda)$, $\lambda\to\infty$, where $0 < \alpha < 2$ and $\ell$ is slowly varying at $\infty$. Then the Green function of $Y$ satisfies all assumptions of this subsection, in particular (5.8) with $\beta = 1 - \alpha/2$; see (3.27) and Lemma 4.3. Define $c$ as in (5.10) for the appropriate $C_9$ and $C_{10}$ and $\beta = 1-\alpha/2$, and let $R$ be as in (5.12).
Proposition 5.8 Assume that $Y_t = X(S_t)$, where $S = (S_t : t\ge 0)$ is the special subordinator with Laplace exponent $\phi$ satisfying one of the following two conditions: (i) $\phi(\lambda)\sim\lambda^{\alpha/2}\ell(\lambda)$, $\lambda\to\infty$, where $0<\alpha<2$ and $\ell$ is slowly varying at $\infty$, or (ii) $\phi(\lambda)\sim\lambda$, $\lambda\to\infty$. Then the following statements are true:
(a) There exists a constant $C_{15} > 0$ such that for any $r \in (0,(7c)^{-1}R)$, any closed subset $A$ of $B(0,r)$, and any $y \in B(0,r)$,
\[
\mathbb{P}_y(T_A < \tau_{B(0,7cr)}) \ge C_{15}\,\frac{\mathrm{Cap}(A)}{\mathrm{Cap}(B(0,r))}\,.
\]
(b) There exists a constant $C_{16} > 0$ such that for any $r \in (0,R)$ we have
\[
\sup_{y\in B(0,r)} \mathbb{E}_y\tau_{B(0,r)} \le C_{16} \inf_{y\in B(0,r/6)} \mathbb{E}_y\tau_{B(0,r)}\,.
\]
Proof. We give the proof for case (i), case (ii) being simpler.
(a) It suffices to show that $\kappa(r)$, $r < (7c)^{-1}R$, is bounded from below by a positive constant. Note that $g$ is regularly varying at 0 with index $-d+\alpha$. Hence there is a slowly varying function $\tilde\ell$ such that $g(r) = r^{-d+\alpha}\tilde\ell(r)$. By Karamata's monotone density theorem one can conclude that
\[
\int_0^r \rho^{d-1} g(\rho)\,d\rho = \int_0^r \rho^{\alpha-1}\tilde\ell(\rho)\,d\rho \sim \frac{1}{\alpha}\, r^\alpha \tilde\ell(r) = \frac{1}{\alpha}\, r^d g(r)\,, \qquad r \to 0\,.
\]
Therefore,
\[
\kappa(r) = \frac{r^d g(r)}{\int_0^r \rho^{d-1} g(\rho)\,d\rho} \sim \alpha\,.
\]
(b) By Proposition 5.4 it suffices to show that $\int_{B(0,r)} G(|y|)\,dy \le c \int_{B(0,r/6)} G(|y|)\,dy$ for some positive constant $c$. But, by the proof of part (a), $\int_{B(0,r)} G(|y|)\,dy \asymp r^d g(r)$, while $\int_{B(0,r/6)} G(|y|)\,dy \asymp (r/6)^d g(r/6)$. Since $g$ is regularly varying, the claim follows. □
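The Karamata step in part (a) is easy to check numerically. The sketch below uses the illustrative slowly varying function $\tilde\ell(r) = \log(e + 1/r)$ (an assumption of this sketch, not a function from the notes) and exhibits the convergence $\kappa(r)\to\alpha$ as $r\to 0$:

```python
import numpy as np

def trapz(y, x):
    # trapezoidal rule, kept explicit to avoid NumPy version differences
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

alpha, d = 1.2, 3
ell = lambda r: np.log(np.e + 1.0 / r)      # slowly varying at 0 (illustrative)
g = lambda r: r ** (alpha - d) * ell(r)     # regularly varying at 0, index -d+alpha

def kappa(r, npts=200_000):
    rho = np.linspace(r / npts, r, npts)
    return r**d * g(r) / trapz(rho ** (d - 1) * g(rho), rho)

for r in [1e-1, 1e-3, 1e-6]:
    print(r, kappa(r))   # approaches alpha, slowly, because of the log factor
```

The convergence is logarithmically slow, which is typical when a genuinely slowly varying factor is present.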
Proposition 5.9 Let $S^{(n)}$ be the iterated geometric stable subordinator and let $Y^{(n)}_t = X(S^{(n)}_t)$ be the corresponding subordinate process.
(a) Let $\gamma > 0$. There exists a constant $C_{17} > 0$ such that for any $r \in (0,(7c)^{-1}R)$, any closed subset $A$ of $B(0,r)$, and any $y \in B(0,r)$,
\[
\mathbb{P}_y(T_A < \tau_{B(0,7cr)}) \ge C_{17}\, r^\gamma\,\frac{\mathrm{Cap}(A)}{\mathrm{Cap}(B(0,r))}\,.
\]
(b) There exists a constant $C_{18} > 0$ such that for any $r \in (0,R)$ we have
\[
\sup_{y\in B(0,r)} \mathbb{E}_y\tau_{B(0,r)} \le C_{18} \inf_{y\in B(0,r/6)} \mathbb{E}_y\tau_{B(0,r)}\,.
\]
Proof. (a) By Proposition 3.19 we take
\[
g(r) = \frac{1}{r^d L_{n-1}(1/r^2)\, l_n(1/r^2)^2}\,.
\]
Recall that the functions $l_n$, respectively $L_n$, were defined in (3.32), respectively (3.33). Integration gives that
\[
\int_0^r \rho^{d-1} g(\rho)\,d\rho = \int_0^r \frac{d\rho}{\rho\, L_{n-1}(1/\rho^2)\, l_n(1/\rho^2)^2} = \frac{2}{l_n(1/r^2)}\,.
\]
Therefore,
\[
\kappa(r) = \frac{1}{2L_n(1/r^2)} \ge c\, r^\gamma\,,
\]
and the claim follows from Remark 5.7.
(b) This was shown in Example 5.5. □
Remark 5.10 We note that part (b) of both Propositions 5.8 and 5.9 is true for every pure jump process. This was proved in [55], and later also in [60].
In the remainder of this subsection we discuss briefly the Krylov-Safonov type estimate
involving the Lebesgue measure instead of the capacity. This type of estimate turns out to
be very useful in case of a pure jump Levy process. The method of proof comes from [4],
while our exposition follows [66].
Assume that $Y = (Y_t : t\ge 0)$ is a subordinate Brownian motion via a subordinator with no drift. We retain the notation $j(|x|) = J(x)$, introduce the functions
\[
\eta_1(r) = r^{-2}\int_0^r \rho^{d+1} j(\rho)\,d\rho\,, \qquad \eta_2(r) = \int_r^\infty \rho^{d-1} j(\rho)\,d\rho\,,
\]
and let $\eta(r) = \eta_1(r) + \eta_2(r)$. The proof of the following result can be found in [66].
Lemma 5.11 There exists a constant $C_{19} > 0$ such that for every $r \in (0,1)$, every $A \subset B(0,r)$ and any $y \in B(0,2r)$,
\[
\mathbb{P}_y(T_A < \tau_{B(0,3r)}) \ge C_{19}\,\frac{r^d j(4r)}{\eta(r)}\,\frac{|A|}{|B(0,r)|}\,,
\]
where $|\cdot|$ denotes the Lebesgue measure.
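For a pure power kernel the factor $r^d j(4r)/\eta(r)$ appearing in the lemma is actually constant in $r$, which is exactly what Proposition 5.12 below exploits. A numerical sketch, assuming the model kernel $j(r) = r^{-d-\alpha}$ (i.e. $\ell \equiv 1$):

```python
import numpy as np

def trapz(y, x):
    # trapezoidal rule, kept explicit to avoid NumPy version differences
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

d, alpha = 2, 1.0
j = lambda r: r ** (-d - alpha)    # model stable-like kernel (assumed, ell = 1)

def eta(r, npts=100_000):
    rho1 = np.linspace(r / npts, r, npts)
    eta1 = r ** (-2.0) * trapz(rho1 ** (d + 1) * j(rho1), rho1)
    rho2 = np.logspace(np.log10(r), 8, npts)   # tail truncated at 10^8
    eta2 = trapz(rho2 ** (d - 1) * j(rho2), rho2)
    return eta1 + eta2

for r in [0.5, 0.05, 0.005]:
    print(r, r**d * j(4 * r) / eta(r))
```

For these exponents the exact value is $4^{-d-\alpha}\big(\tfrac{1}{2-\alpha}+\tfrac{1}{\alpha}\big)^{-1} = 1/128$; for a genuinely slowly varying $\ell$ the ratio is merely bounded, rather than constant.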
Proposition 5.12 Assume that $Y_t = X(S_t)$, where $S = (S_t : t\ge 0)$ is a pure jump subordinator, and that the jumping function $J(x) = j(|x|)$ of $Y$ is such that $j$ satisfies $j(r) \sim r^{-d-\alpha}\ell(r)$, $r\to 0+$, with $0<\alpha<2$ and $\ell$ slowly varying at 0. Then there exists a constant $C_{20} > 0$ such that for every $r \in (0,1)$, every $A \subset B(0,r)$ and any $y \in B(0,2r)$,
\[
\mathbb{P}_y(T_A < \tau_{B(0,3r)}) \ge C_{20}\,\frac{|A|}{|B(0,r)|}\,.
\]
Proof. It suffices to prove that $r^d j(4r)/\eta(r)$ is bounded from below by a positive constant. This is accomplished along the lines of the proof of Proposition 5.8. □
Note that the assumptions of Proposition 5.12 are satisfied for subordinate Brownian motions via $\alpha/2$-stable subordinators, relativistic $\alpha$-stable subordinators, and the subordinators corresponding to Examples 3.15 and 3.16 (see Theorems 4.14 and 4.15).
In the case of, say, a geometric stable process $Y$, one obtains from Lemma 5.11 a weak form of the hitting time estimate: there exists $C_{21} > 0$ such that for every $r \in (0,1/2)$, every $A \subset B(0,r)$ and any $y \in B(0,2r)$,
\[
\mathbb{P}_y(T_A < \tau_{B(0,3r)}) \ge C_{21}\,\frac{1}{\log(1/r)}\,\frac{|A|}{|B(0,r)|}\,. \tag{5.18}
\]
5.3 Proof of Harnack inequality
Let $Y = (Y_t : t\ge 0)$ be a subordinate Brownian motion in $\mathbb{R}^d$ and let $D$ be an open subset of $\mathbb{R}^d$. A function $h : \mathbb{R}^d \to [0,+\infty]$ is said to be harmonic in $D$ with respect to the process $Y$ if for every bounded open set $B \subset \overline{B} \subset D$,
\[
h(x) = \mathbb{E}_x[h(Y_{\tau_B})]\,, \qquad \forall x \in B\,,
\]
where $\tau_B = \inf\{t > 0 : Y_t \notin B\}$ is the exit time of $Y$ from $B$. The Harnack inequality is a statement about the growth rate of nonnegative harmonic functions in compact subsets of $D$. We will first discuss two proofs of a scale invariant Harnack inequality for small balls. Next, we will give a proof of a weak form of the Harnack inequality for small balls for the iterated geometric stable process. All discussed forms of the inequality lead to the following Harnack inequality: for any compact set $K \subset D$, there exists a constant $C > 0$, depending only on $D$ and $K$, such that for every nonnegative function $h$ harmonic with respect to $Y$ in $D$, it holds that
\[
\sup_{x\in K} h(x) \le C \inf_{x\in K} h(x)\,.
\]
The general methodology of proving Harnack inequality for jump processes is explained
in [66] following the pioneering work [4] (for an alternative approach see [17]). The same
method was also used in [5] and [18] to prove a parabolic Harnack inequality. There are two
essential ingredients. The first one is a Krylov-Safonov-type estimate for the hitting probability discussed in the previous subsection. The form given in Lemma 5.11 and Proposition 5.12 can be used in the case of pure jump processes for which one has good control of the
behavior of the jumping function $J$ at zero. More precisely, one needs that $j(r)$ is a regularly varying function of index $-d-\alpha$ for some $0 < \alpha < 2$ as $r \to 0+$. This, as shown in Proposition 5.12, implies that the function of $r$ on the right-hand side of the estimate can be replaced by a constant, which is desirable for obtaining the scale invariant form of the Harnack inequality for small balls. In the case of a geometric stable process the behavior of $J$ near zero is known
(see Theorem 4.16), but leads to the inequality (5.18) having the factor 1/ log(1/r) on the
right-hand side. This yields a weak type of Harnack inequality for balls. In the case of
the iterated geometric stable processes, no information about the behavior of J near zero is
available, and hence one does not have any control on the factor rdj(r)/η(r) in Lemma 5.11.
In the case where $Y$ has a continuous component (i.e., the subordinator $S$ has a drift), or in the case when information on the behavior of $J$ near zero is missing, one can use the form of the Krylov-Safonov inequality described in Propositions 5.8 and 5.9.
The second ingredient in the proof is the following result which can be considered as
a very weak form of Harnack inequality (more precisely, Harnack inequality for harmonic
measures of sets away from the ball). Recall that R > 0 was defined in (5.12).
Proposition 5.13 Let $Y$ be a subordinate Brownian motion such that the function $j$ defined in (4.17) satisfies conditions (4.20) and (4.21). There exists a positive constant $C_{22}$ such that for any $r \in (0,R)$, any $y, z \in B(0,r/2)$ and any nonnegative function $H$ supported on $B(0,2r)^c$ it holds that
\[
\mathbb{E}_z H(Y(\tau_{B(0,r)})) \le C_{22}\,\mathbb{E}_y H(Y(\tau_{B(0,r)}))\,. \tag{5.19}
\]
Proof. This is an immediate consequence of Proposition 4.21 and the comparison results for the mean exit times explained in Remark 5.10 (see also Propositions 5.8 and 5.9). □
We are now ready to state the Harnack inequality under two different sets of conditions.
Theorem 5.14 Let Y be a subordinate Brownian motion such that the function j defined
in (4.17) satisfies conditions (4.20) and (4.21) and is further regularly varying at zero with
index −d − α where 0 < α < 2. Then there exists a constant C > 0 such that, for any
r ∈ (0, 1/4), and any function h which is nonnegative, bounded on Rd, and harmonic with
respect to Y in B(0, 16r), we have
\[
h(x) \le C h(y)\,, \qquad \forall x, y \in B(0,r)\,.
\]
The proof of this Harnack inequality follows [66] and uses Proposition 5.12. The second set of conditions for the Harnack inequality uses Proposition 5.8. Recall the constant $c$ defined in (5.10).
Theorem 5.15 Let Y be a subordinate Brownian motion such that the function j defined in
(4.17) satisfies conditions (4.20) and (4.21), and assume further that the subordinator S is
special and its Laplace exponent φ satisfies φ(λ) ∼ bλα/2, λ →∞, with α ∈ (0, 2] and b > 0.
Then there exists a constant C > 0 such that, for any r ∈ (0, (14c)−1R), and any function
h which is nonnegative, bounded on Rd, and harmonic with respect to Y in B(0, 14cr), we
have
\[
h(x) \le C h(y)\,, \qquad \forall x, y \in B(0,r/2)\,.
\]
Under these conditions, Harnack inequality was proved in [56]. Unfortunately, despite the
fact that Proposition 5.8 holds under weaker conditions for φ than the ones stated in the
theorem above, we were unable to carry out a proof in this more general case.
Now we are going to present a proof of a weak form of the Harnack inequality for iterated geometric stable processes. Let $S^{(n)}$ be the iterated geometric stable subordinator and let $Y^{(n)}_t = X(S^{(n)}_t)$ be the corresponding subordinate process. For simplicity we write $Y$ instead of $Y^{(n)}$. We state again Propositions 5.9(a) and 5.13:
Let $\gamma > 0$. There exists a constant $C_{17} > 0$ such that for any $r \in (0,(7c)^{-1}R)$, any closed subset $A$ of $B(0,r)$, and any $y \in B(0,r)$,
\[
\mathbb{P}_y(T_A < \tau_{B(0,7cr)}) \ge C_{17}\, r^\gamma\,\frac{\mathrm{Cap}(A)}{\mathrm{Cap}(B(0,r))}\,. \tag{5.20}
\]
There exists a positive constant $C_{22}$ such that for any $r \in (0,R)$, any $y, z \in B(0,r/2)$ and any nonnegative function $H$ supported on $B(0,2r)^c$ it holds that
\[
\mathbb{E}_z H(Y(\tau_{B(0,r)})) \le C_{22}\,\mathbb{E}_y H(Y(\tau_{B(0,r)}))\,. \tag{5.21}
\]
We will also need the following lemma.
Lemma 5.16 There exists a positive constant $C_{23}$ such that for all $0 < \rho < r < 1/e_{n+1}$,
\[
\frac{\mathrm{Cap}(B(0,\rho))}{\mathrm{Cap}(B(0,r))} \ge C_{23}\left(\frac{\rho}{r}\right)^d\,.
\]
Proof. By Example 5.5,
\[
C_5\, r^d l_n(1/r) \le \mathrm{Cap}(B(0,r)) \le C_6\, r^d l_n(1/r)
\]
for every $r < 1/e_{n+1}$. Therefore,
\[
\frac{\mathrm{Cap}(B(0,\rho))}{\mathrm{Cap}(B(0,r))} \ge \frac{C_5\,\rho^d l_n(1/\rho)}{C_6\, r^d l_n(1/r)} \ge \frac{C_5}{C_6}\left(\frac{\rho}{r}\right)^d\,,
\]
where the last inequality follows from the fact that $l_n$ is increasing at infinity. □
Theorem 5.17 Let $R$ and $c$ be defined by (5.12) and (5.10) respectively, and let $r \in (0,(14c)^{-1}R)$. There exists a constant $C > 0$ such that for every nonnegative bounded function $h$ on $\mathbb{R}^d$ which is harmonic with respect to $Y$ in $B(0,14cr)$ it holds that
\[
h(x) \le C h(y)\,, \qquad x, y \in B(0,r/2)\,.
\]
Remark 5.18 Note that the constant $C$ in the theorem may depend on the radius $r$. This is why the above Harnack inequality is weak. A version of a weak Harnack inequality appeared in [3], and our proof follows the arguments there. A similar proof, in a somewhat different context, was given in [68].
Proof. We fix $\gamma \in (0,1)$. Suppose that $h$ is nonnegative and bounded on $\mathbb{R}^d$ and harmonic with respect to $Y$ in $B(0,14cr)$. By looking at $h+\varepsilon$ and letting $\varepsilon \downarrow 0$, we may suppose that $h$ is bounded from below by a positive constant. By looking at $ah$ for a suitable $a > 0$, we may suppose that $\inf_{B(0,r/2)} h = 1/2$. We want to bound $h$ from above in $B(0,r/2)$ by a constant depending only on $r$, $d$ and $\gamma$. Choose $z_1 \in B(0,r/2)$ such that $h(z_1) \le 1$. Choose $\rho \in (1,\gamma^{-1})$. For $i \ge 1$ let
\[
r_i = \frac{c_1 r}{i^\rho}\,,
\]
where $c_1$ is a constant to be determined later. We require first of all that $c_1$ is small enough so that
\[
\sum_{i=1}^\infty r_i \le \frac{r}{8}\,. \tag{5.22}
\]
Recall that there exists $c_2 := C_{17} > 0$ such that for any $s \in (0,(7c)^{-1}R)$, any closed subset $A \subset B(0,s)$ and any $y \in B(0,s)$,
\[
\mathbb{P}_y(T_A < \tau_{B(0,7cs)}) \ge c_2\, s^\gamma\,\frac{\mathrm{Cap}(A)}{\mathrm{Cap}(B(0,s))}\,. \tag{5.23}
\]
Let $c_3$ be a constant such that
\[
c_3 \le c_2\, 2^{-4-\gamma+\rho\gamma}\,.
\]
Denote the constant $C_{23}$ from Lemma 5.16 by $c_4$. Once $c_1$ and $c_3$ have been chosen, choose $K_1$ sufficiently large so that
\[
\frac14\,(7c)^{-d-\gamma} c_2 c_4 K_1 \exp\big((14c)^{-\gamma} r^\gamma c_1 c_3\, i^{1-\rho\gamma}\big)\, c_1^{4\gamma+d} r^{4\gamma} \ge 2\, i^{4\rho\gamma+\rho d} \tag{5.24}
\]
for all $i \ge 1$. Such a choice is possible since $\rho\gamma < 1$. Note that $K_1$ will depend on $r$, $d$ and $\gamma$, as well as on the constants $c, c_1, c_2, c_3$ and $c_4$. Suppose now that there exists $x_1 \in B(0,r/2)$ with $h(x_1) \ge K_1$. We will show that in this case there exists a sequence $\{(x_j, K_j) : j \ge 1\}$ with $x_{j+1} \in B(x_j, 2r_j) \subset B(0,3r/4)$, $K_j = h(x_j)$, and
\[
K_j \ge K_1 \exp\big((14c)^{-\gamma} r^\gamma c_1 c_3\, j^{1-\rho\gamma}\big)\,. \tag{5.25}
\]
Since $1-\rho\gamma > 0$, we have $K_j \to\infty$, a contradiction to the assumption that $h$ is bounded. We can then conclude that $h$ must be bounded by $K_1$ on $B(0,r/2)$, and hence $h(x) \le 2K_1 h(y)$ if $x, y \in B(0,r/2)$.
Suppose that $x_1, x_2, \ldots, x_i$ have been selected and that (5.25) holds for $j = 1,\ldots,i$. We will show that there exists $x_{i+1} \in B(x_i, 2r_i)$ such that if $K_{i+1} = h(x_{i+1})$, then (5.25) holds for $j = i+1$; we then use induction to conclude that (5.25) holds for all $j$.
Let
\[
A_i = \{y \in B(x_i, (14c)^{-1} r_i) : h(y) \ge K_i r_i^{2\gamma}\}\,.
\]
First we prove that
\[
\frac{\mathrm{Cap}(A_i)}{\mathrm{Cap}(B(x_i,(14c)^{-1} r_i))} \le \frac14\,. \tag{5.26}
\]
To prove this claim, we suppose to the contrary that $\mathrm{Cap}(A_i)/\mathrm{Cap}(B(x_i,(14c)^{-1}r_i)) > 1/4$. Let $F$ be a compact subset of $A_i$ with $\mathrm{Cap}(F)/\mathrm{Cap}(B(x_i,(14c)^{-1}r_i)) > 1/4$. Recall that $r \ge 8r_i$. Now we have
\begin{align*}
1 \ge h(z_1) &\ge \mathbb{E}_{z_1}[h(Y_{T_F\wedge\tau_{B(0,7cr)}})\,;\ T_F < \tau_{B(0,7cr)}]\\
&\ge K_i r_i^{2\gamma}\,\mathbb{P}_{z_1}(T_F < \tau_{B(0,7cr)})\\
&\ge c_2 K_i r_i^{2\gamma} r^\gamma\,\frac{\mathrm{Cap}(F)}{\mathrm{Cap}(B(0,r))}\\
&= c_2 K_i r_i^{2\gamma} r^\gamma\,\frac{\mathrm{Cap}(F)}{\mathrm{Cap}(B(x_i,(7c)^{-1}r_i))}\,\frac{\mathrm{Cap}(B(x_i,(7c)^{-1}r_i))}{\mathrm{Cap}(B(0,r))}\\
&\ge \frac14\, c_2 K_i r_i^{2\gamma} r^\gamma\,\frac{\mathrm{Cap}(B(0,(7c)^{-1}r_i))}{\mathrm{Cap}(B(0,r))}\\
&\ge \frac14\, c_2 K_i r_i^{2\gamma} r^\gamma c_4 \left(\frac{(7c)^{-1}r_i}{r}\right)^d\\
&= \frac14\, c_2 c_4 (7c)^{-d} K_i r_i^{2\gamma} r^\gamma \left(\frac{r_i}{r}\right)^d\\
&\ge \frac14\, c_2 c_4 (7c)^{-d-\gamma} K_i r_i^{4\gamma} \left(\frac{r_i}{r}\right)^d\\
&\ge \frac14\, c_2 c_4 (7c)^{-d-\gamma} K_1 \exp\big((14c)^{-\gamma} r^\gamma c_1 c_3\, i^{1-\rho\gamma}\big)\, r_i^{4\gamma}\left(\frac{r_i}{r}\right)^d\\
&\ge \frac14\, c_2 c_4 (7c)^{-d-\gamma} K_1 \exp\big((14c)^{-\gamma} r^\gamma c_1 c_3\, i^{1-\rho\gamma}\big)\left(\frac{c_1 r}{i^\rho}\right)^{4\gamma}\left(\frac{c_1}{i^\rho}\right)^d\\
&= \frac14\, c_2 c_4 (7c)^{-d-\gamma} K_1 \exp\big((14c)^{-\gamma} r^\gamma c_1 c_3\, i^{1-\rho\gamma}\big)\, c_1^{4\gamma+d} r^{4\gamma}\, i^{-4\gamma\rho-\rho d}\\
&\ge 2\, i^{4\gamma\rho+\rho d}\, i^{-4\gamma\rho-\rho d} = 2\,.
\end{align*}
We used the definition of harmonicity in the first line, (5.23) in the third, Lemma 5.16 in the sixth, (5.25) in the ninth, and (5.24) in the last line. This is a contradiction, and therefore (5.26) is valid.
By the subadditivity of the capacity and by (5.26), it follows that there exists a compact set $E_i \subset B(x_i,(14c)^{-1}r_i)\setminus A_i$ such that
\[
\frac{\mathrm{Cap}(E_i)}{\mathrm{Cap}(B(x_i,(14c)^{-1}r_i))} \ge \frac12\,.
\]
Write $\tau_i$ for $\tau_{B(x_i,r_i/2)}$ and let $p_i := \mathbb{P}_{x_i}(T_{E_i} < \tau_i)$. It follows from (5.23) that
\[
p_i \ge c_2\left(\frac{r_i}{14c}\right)^\gamma \frac{\mathrm{Cap}(E_i)}{\mathrm{Cap}(B(x_i,(14c)^{-1}r_i))} \ge \frac{c_2}{2}\left(\frac{r_i}{14c}\right)^\gamma\,. \tag{5.27}
\]
Set $M_i = \sup_{B(x_i,r_i)} h$. Then
\begin{align*}
K_i = h(x_i) &= \mathbb{E}_{x_i}[h(Y_{T_{E_i}\wedge\tau_i})\,;\ T_{E_i} < \tau_i]\\
&\quad + \mathbb{E}_{x_i}[h(Y_{T_{E_i}\wedge\tau_i})\,;\ T_{E_i} \ge \tau_i,\ Y_{\tau_i} \in B(x_i,r_i)]\\
&\quad + \mathbb{E}_{x_i}[h(Y_{T_{E_i}\wedge\tau_i})\,;\ T_{E_i} \ge \tau_i,\ Y_{\tau_i} \notin B(x_i,r_i)]\,. \tag{5.28}
\end{align*}
We are going to estimate each term separately. Since $E_i$ is compact and $h < K_i r_i^{2\gamma}$ on $E_i$ (as $E_i$ is disjoint from $A_i$), we have
\[
\mathbb{E}_{x_i}[h(Y_{T_{E_i}\wedge\tau_i})\,;\ T_{E_i} < \tau_i] \le K_i r_i^{2\gamma}\,\mathbb{P}_{x_i}(T_{E_i} < \tau_i) \le K_i r_i^{2\gamma}\,.
\]
Further,
\[
\mathbb{E}_{x_i}[h(Y_{T_{E_i}\wedge\tau_i})\,;\ T_{E_i} \ge \tau_i,\ Y_{\tau_i} \in B(x_i,r_i)] \le M_i(1-p_i)\,.
\]
Inequality (5.26) implies in particular that there exists $y_i \in B(x_i,(14c)^{-1}r_i)$ with $h(y_i) \le K_i r_i^{2\gamma}$. We then have, by (5.21) and with $c_5 = C_{22}$,
\[
K_i r_i^{2\gamma} \ge h(y_i) \ge \mathbb{E}_{y_i}[h(Y_{\tau_i})\,;\ Y_{\tau_i} \notin B(x_i,r_i)] \ge c_5\,\mathbb{E}_{x_i}[h(Y_{\tau_i})\,;\ Y_{\tau_i} \notin B(x_i,r_i)]\,. \tag{5.29}
\]
Therefore
\[
\mathbb{E}_{x_i}[h(Y_{T_{E_i}\wedge\tau_i})\,;\ T_{E_i} \ge \tau_i,\ Y_{\tau_i} \notin B(x_i,r_i)] \le c_6 K_i r_i^{2\gamma}
\]
for the positive constant $c_6 = 1/c_5$. Consequently we have
\[
K_i \le (1+c_6) K_i r_i^{2\gamma} + M_i(1-p_i)\,. \tag{5.30}
\]
Rearranging, we get
\[
M_i \ge K_i\,\frac{1-(1+c_6) r_i^{2\gamma}}{1-p_i}\,. \tag{5.31}
\]
Now choose
\[
c_1 \le \min\left\{\frac{1}{14c}\,\frac{1}{r}\left(\frac14\,\frac{c_2}{1+c_6}\right)^{1/\gamma},\ 1\right\}\,.
\]
This choice of $c_1$ implies that
\[
2(1+c_6)\, r_i^{2\gamma} \le \frac{c_2}{2}\left(\frac{r_i}{14c}\right)^\gamma \le p_i\,,
\]
where the second inequality follows from (5.27). Therefore, $1-(1+c_6)r_i^{2\gamma} \ge 1 - p_i/2$, and hence by use of (5.31)
\[
M_i \ge K_i\,\frac{1-\frac12 p_i}{1-p_i} > \left(1+\frac{p_i}{2}\right) K_i\,.
\]
Using the definition of $M_i$ and (5.27), there exists a point $x_{i+1} \in B(x_i,r_i) \subset B(x_i,2r_i)$ such that
\[
K_{i+1} = h(x_{i+1}) \ge K_i\left(1+\frac{c_2}{4}\left(\frac{r_i}{14c}\right)^\gamma\right)\,.
\]
Taking logarithms and writing
\[
\log K_{i+1} = \log K_1 + \sum_{j=1}^{i}\big[\log K_{j+1} - \log K_j\big]\,,
\]
we have
\begin{align*}
\log K_{i+1} &\ge \log K_1 + \sum_{j=1}^{i}\log\left(1+\frac{c_2}{4}\left(\frac{r_j}{14c}\right)^\gamma\right)\\
&\ge \log K_1 + \sum_{j=1}^{i}\frac{c_2}{8}\,\frac{r_j^\gamma}{(14c)^\gamma}\\
&= \log K_1 + \frac{c_2}{8}\,\frac{1}{(14c)^\gamma}\sum_{j=1}^{i}\left(\frac{c_1 r}{j^\rho}\right)^\gamma\\
&\ge \log K_1 + \frac{c_2}{8}\,\frac{1}{(14c)^\gamma}\, r^\gamma c_1 \sum_{j=1}^{i} j^{-\rho\gamma}\\
&\ge \log K_1 + \frac{c_2}{8}\,\frac{1}{(14c)^\gamma}\, r^\gamma c_1\, i^{1-\rho\gamma}\\
&\ge \log K_1 + \frac{1}{(14c)^\gamma}\, r^\gamma c_1 c_3\,(i+1)^{1-\rho\gamma}\,.
\end{align*}
In the second line we used $\log(1+x) \ge x/2$ for $0 < x \le 1$, and in the fourth the fact that $c_1 < 1$ (so that $c_1^\gamma \ge c_1$). For the last line recall that
\[
c_3 \le c_2\, 2^{-4-\gamma+\rho\gamma} = \frac{c_2}{2^{3+\gamma}}\left(\frac12\right)^{1-\rho\gamma} \le \frac{c_2}{2^{3+\gamma}}\left(\frac{i}{i+1}\right)^{1-\rho\gamma}\,,
\]
implying that
\[
\frac{c_2}{8}\, i^{1-\rho\gamma} \ge 2^{\gamma} c_3\,(1+i)^{1-\rho\gamma} \ge c_3\,(1+i)^{1-\rho\gamma}\,.
\]
Therefore we have obtained that
\[
K_{i+1} \ge K_1 \exp\big((14c)^{-\gamma} r^\gamma c_1 c_3\,(i+1)^{1-\rho\gamma}\big)\,,
\]
which is (5.25) for $j = i+1$. The proof is now finished. □
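The growth mechanism in the proof above can be made concrete with a small numerical experiment: iterating $K_{i+1} = K_i\big(1 + \tfrac{c_2}{4}(\tfrac{r_i}{14c})^\gamma\big)$ with $r_i = c_1 r/i^\rho$ makes $\log K_i$ grow like $i^{1-\rho\gamma}$, which is exactly the unboundedness that produces the contradiction. The constants below are illustrative placeholders, not the constants of the proof:

```python
import numpy as np

gamma_, rho_ = 0.5, 1.5                 # need rho_*gamma_ < 1; here it is 0.75
c, c1, c2, r = 1.0, 0.1, 1.0, 0.01      # illustrative constants (assumed)

K = [1.0]
for i in range(1, 2001):
    r_i = c1 * r / i ** rho_
    K.append(K[-1] * (1.0 + (c2 / 4.0) * (r_i / (14 * c)) ** gamma_))

logK = np.log(np.array(K[1:]))          # logK[i-1] corresponds to K_i
# log K_i / i^{1 - rho*gamma} stabilizes, so K_i ~ exp(const * i^{1-rho*gamma})
print(logK[-1] / 2000 ** (1 - rho_ * gamma_))
print(logK[499] / 500 ** (1 - rho_ * gamma_))
```

The two printed normalized values are close to each other, reflecting the $i^{1-\rho\gamma}$ growth of $\log K_i$.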
Remark 5.19 The proof given above can be easily modified to provide a proof of Theorem
5.15. Indeed, one can modify slightly Lemma 5.16, take γ = 0 and choose any ρ > 1 in the
proof. The choice of K1 in (5.24) and Kj in (5.25) will not depend on r > 0, thus giving a
strong form of Harnack inequality.
6 Boundary Harnack Principle for subordinate Brownian motions

6.1 Some Results on One-dimensional Subordinate Brownian Motion
Suppose that $W = (W_t : t\ge 0)$ is a one-dimensional Brownian motion with
\[
\mathbb{E}\big[e^{i\xi(W_t - W_0)}\big] = e^{-t\xi^2}\,, \qquad \forall \xi \in \mathbb{R},\ t > 0\,,
\]
and $S = (S_t : t\ge 0)$ is a subordinator (a nonnegative increasing Levy process) independent of $W$ and with Laplace exponent $\phi$, that is,
\[
\mathbb{E}\big[e^{-\lambda S_t}\big] = e^{-t\phi(\lambda)}\,, \qquad \forall t, \lambda > 0\,.
\]
A $C^\infty$ function $g : (0,\infty) \to [0,\infty)$ is called a Bernstein function if $(-1)^n D^n g \le 0$ for every $n \in \mathbb{N}$. A function $g : (0,\infty) \to [0,\infty)$ is called a complete Bernstein function if there exists a Bernstein function $\eta$ such that $g(\lambda) = \lambda^2 \mathcal{L}\eta(\lambda)$, where $\mathcal{L}$ stands for the Laplace transform of $\eta$. Throughout this paper we will assume that $\phi$ is a complete Bernstein function such that
\[
\phi(\lambda) = \lambda^{\alpha/2}\ell(\lambda) \tag{6.1}
\]
for some $\alpha \in (0,2)$ and some positive function $\ell$ which is slowly varying at $\infty$.
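As a quick sanity check of the setup (the example is not from the notes), the gamma subordinator has the complete Bernstein Laplace exponent $\phi(\lambda) = \log(1+\lambda)$, and the defining identity $\mathbb{E}[e^{-\lambda S_t}] = e^{-t\phi(\lambda)}$ can be verified by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
t, lam, n = 2.0, 1.5, 400_000

# Gamma subordinator: S_t ~ Gamma(shape=t, scale=1), whose Laplace exponent
# is the complete Bernstein function phi(lambda) = log(1 + lambda)
S = rng.gamma(shape=t, scale=1.0, size=n)

empirical = np.exp(-lam * S).mean()
exact = np.exp(-t * np.log1p(lam))   # = (1 + lam)^{-t}
print(empirical, exact)              # agree up to Monte Carlo error
```

Note that this particular $\phi$ is slowly varying rather than of the form (6.1); it merely illustrates the Laplace-exponent identity itself.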
Using Corollary 2.3 of [69] or Theorem 2.3 of [56], we know that the potential measure of $S$ has a decreasing density $u$. By using the Tauberian theorem (Theorem 1.7.1 in [10]) and the monotone density theorem (Theorem 1.7.2 in [10]), one can easily check that
\[
u(t) \sim \frac{t^{\alpha/2-1}}{\Gamma(\alpha/2)}\,\frac{1}{\ell(t^{-1})}\,, \qquad t \to 0\,. \tag{6.2}
\]
Since $\phi$ is a complete Bernstein function, the Levy measure of $S$ has a density $\mu(t)$. It follows from Proposition 2.23 of [70] that
\[
\mu(t) \sim \frac{\alpha}{2\Gamma(1-\alpha/2)}\,\frac{\ell(t^{-1})}{t^{1+\alpha/2}}\,, \qquad t \to 0\,. \tag{6.3}
\]
The subordinate Brownian motion $X = (X_t : t\ge 0)$ defined by $X_t = W_{S_t}$ is a symmetric Levy process with characteristic exponent
\[
\Phi(\theta) = \phi(\theta^2) = |\theta|^\alpha\ell(\theta^2)\,, \qquad \forall \theta \in \mathbb{R}\,.
\]
Let $\chi$ be the Laplace exponent of the ladder height process of $X$. (For the definition of the ladder height process and its basic properties, we refer our readers to Chapter 6 of [7].) Then it follows from Corollary 9.7 of [31] that
\[
\chi(\lambda) = \exp\left(\frac{1}{\pi}\int_0^\infty \frac{\log(\Phi(\lambda\theta))}{1+\theta^2}\,d\theta\right) = \exp\left(\frac{1}{\pi}\int_0^\infty \frac{\log(\theta^\alpha\lambda^\alpha\ell(\theta^2\lambda^2))}{1+\theta^2}\,d\theta\right)\,, \qquad \forall \lambda > 0\,. \tag{6.4}
\]
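Formula (6.4) can be exercised numerically. In the stable case $\phi(\lambda)=\lambda^{\alpha/2}$ (so $\Phi(\theta)=|\theta|^\alpha$ and $\ell\equiv 1$) the integral should reproduce $\chi(\lambda)=\lambda^{\alpha/2}$ exactly; the sketch below substitutes $\theta = \tan u$ to compactify the infinite range:

```python
import numpy as np

def trapz(y, x):
    # trapezoidal rule, kept explicit to avoid NumPy version differences
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

alpha, lam = 1.3, 4.0
Phi = lambda theta: theta ** alpha       # stable case: ell = 1

# theta = tan(u) turns d(theta)/(1 + theta^2) into du on (0, pi/2)
u = np.linspace(1e-9, np.pi / 2 - 1e-9, 1_000_001)
chi = np.exp(trapz(np.log(Phi(lam * np.tan(u))), u) / np.pi)
print(chi, lam ** (alpha / 2))   # both approximately 2.4623
```

The agreement reflects the identity $\int_0^{\pi/2}\log\tan u\,du = 0$, which is also the reason the closed form $\lambda^{\alpha/2}$ drops out of (6.4) in this case.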
Under our assumptions, we have the following result.
Proposition 6.1 The Laplace exponent $\chi$ of the ladder height process of $X$ is a special Bernstein function, i.e., $\lambda/\chi(\lambda)$ is also a Bernstein function.
Proof. Define $\psi(\lambda) = \lambda/\phi(\lambda)$. Let $T$ be a subordinator independent of $W$ with Laplace exponent $\psi$, and let $Y = (Y_t : t\ge 0)$ be the subordinate Brownian motion defined by $Y_t = W_{T_t}$. Let $\Psi$ be the characteristic exponent of $Y$. Then
\[
\Phi(\theta)\Psi(\theta) = \phi(\theta^2)\psi(\theta^2) = \theta^2\,, \qquad \forall \theta \in \mathbb{R}\,.
\]
Let $\rho$ be the Laplace exponent of the ladder height process of $Y$. Then by (6.4) we have
\begin{align*}
\chi(\lambda)\rho(\lambda) &= \exp\left(\frac{1}{\pi}\int_0^\infty \frac{\log(\Phi(\theta\lambda)) + \log(\Psi(\theta\lambda))}{1+\theta^2}\,d\theta\right)\\
&= \exp\left(\frac{1}{\pi}\int_0^\infty \frac{\log(\Phi(\theta\lambda)\Psi(\theta\lambda))}{1+\theta^2}\,d\theta\right)\\
&= \exp\left(\frac{1}{\pi}\int_0^\infty \frac{\log(\theta^2\lambda^2)}{1+\theta^2}\,d\theta\right) = \lambda\,.
\end{align*}
Thus $\chi$ is a special Bernstein function. □
Proposition 6.2 If there are $M > 1$, $\delta \in (0,1)$ and a nonnegative integrable function $f$ on $(0,1)$ such that
\[
\left|\log\left(\frac{\ell(\lambda^2\theta^2)}{\ell(\lambda^2)}\right)\right| \le f(\theta)\,, \qquad \forall (\theta,\lambda) \in (0,\delta)\times(M,\infty)\,, \tag{6.5}
\]
then
\[
\lim_{\lambda\to\infty} \frac{\chi(\lambda)}{\lambda^{\alpha/2}(\ell(\lambda^2))^{1/2}} = 1\,. \tag{6.6}
\]
Proof. Using the identity
\[
\lambda^{\beta/2} = \exp\left(\frac{1}{\pi}\int_0^\infty \frac{\log(\theta^\beta\lambda^\beta)}{1+\theta^2}\,d\theta\right)\,, \qquad \forall \lambda, \beta > 0\,,
\]
we easily get from (6.4) that
\begin{align*}
\chi(\lambda) &= \lambda^{\alpha/2}\exp\left(\frac{1}{\pi}\int_0^\infty \frac{\log(\ell(\lambda^2\theta^2))}{1+\theta^2}\,d\theta\right)\\
&= \lambda^{\alpha/2}(\ell(\lambda^2))^{1/2}\exp\left(\frac{1}{\pi}\int_0^\infty \log\left(\frac{\ell(\lambda^2\theta^2)}{\ell(\lambda^2)}\right)\frac{1}{1+\theta^2}\,d\theta\right)\,.
\end{align*}
By Potter's Theorem (Theorem 1.5.6(1) in [10]), there exists $\lambda_0 > 1$ such that
\[
\left|\log\left(\frac{\ell(\lambda^2\theta^2)}{\ell(\lambda^2)}\right)\right|\frac{1}{1+\theta^2} \le 2\,\frac{\log\theta}{1+\theta^2}\,, \qquad \forall (\theta,\lambda)\in[1,\infty)\times[\lambda_0,\infty)\,.
\]
Thus by the dominated convergence theorem in the first integral below, the uniform convergence theorem (Theorem 1.2.1 in [10]) in the second integral, and the assumption (6.5) in the third integral, we have
\[
\lim_{\lambda\to\infty}\int_0^\infty \log\left(\frac{\ell(\lambda^2\theta^2)}{\ell(\lambda^2)}\right)\frac{1}{1+\theta^2}\,d\theta = \lim_{\lambda\to\infty}\left(\int_1^\infty + \int_\delta^1 + \int_0^\delta\right)\log\left(\frac{\ell(\lambda^2\theta^2)}{\ell(\lambda^2)}\right)\frac{1}{1+\theta^2}\,d\theta = 0\,.
\]
□
In the case $\phi(\lambda) = \lambda^{\alpha/2}$ for some $\alpha \in (0,2)$, the assumption of the proposition above is trivially satisfied. Now we give some other examples.
Example 6.3 Suppose that $\alpha\in(0,2)$ and define
\[
\phi(\lambda) = (\lambda+1)^{\alpha/2} - 1\,.
\]
Then $\phi$ is a complete Bernstein function which can be written as $\phi(\lambda) = \lambda^{\alpha/2}\ell(\lambda)$ with
\[
\ell(\lambda) = \frac{(\lambda+1)^{\alpha/2}-1}{\lambda^{\alpha/2}}\,.
\]
Using elementary analysis one can easily check that there is a nonnegative integrable function $f$ on $(0,1)$ such that (6.5) is satisfied.
Example 6.4 Suppose $0 < \beta < \alpha < 2$ and define
\[
\phi(\lambda) = \lambda^{\alpha/2} + \lambda^{\beta/2}\,.
\]
Then $\phi$ is a complete Bernstein function which can be written as $\phi(\lambda) = \lambda^{\alpha/2}\ell(\lambda)$ with
\[
\ell(\lambda) = 1 + \lambda^{(\beta-\alpha)/2}\,.
\]
Using elementary analysis one can easily check that there is a nonnegative integrable function $f$ on $(0,1)$ such that (6.5) is satisfied.
Example 6.5 Suppose that $\alpha\in(0,2)$ and $\beta\in(0,2-\alpha)$. Define
\[
\phi(\lambda) = \lambda^{\alpha/2}(\log(1+\lambda))^{\beta/2}\,.
\]
Then $\phi$ is a complete Bernstein function which can be written as $\phi(\lambda) = \lambda^{\alpha/2}\ell(\lambda)$ with
\[
\ell(\lambda) = (\log(1+\lambda))^{\beta/2}\,.
\]
To check that there is a nonnegative integrable function $f$ on $(0,1)$ such that (6.5) is satisfied, we only need to bound the function
\[
\left|\log\left(\frac{\log(1+\lambda^2\theta^2)}{\log(1+\lambda^2)}\right)\right|
\]
for large $\lambda$ and small $\theta$. We will consider two cases separately. Fix $M > 1$ and $\theta < 1$.
(1) $\lambda \ge M$, $\theta < 1$ and $\lambda > 1/\theta$. In this case, by using the fact that for any $a > 0$ the function $x \mapsto \frac{x}{x-a}$ is decreasing on $(a,\infty)$, we get that
\begin{align*}
\left|\log\left(\frac{\log(1+\lambda^2\theta^2)}{\log(1+\lambda^2)}\right)\right| &= \log\left(\frac{\log(1+\lambda^2)}{\log(1+\lambda^2\theta^2)}\right)\\
&\le \log\left(\frac{\log(1+\lambda^2)}{\log(\theta^2)+\log(1+\lambda^2)}\right)\\
&\le \log\left(\frac{\log(1+\theta^{-2})}{\log(\theta^2)+\log(1+\theta^{-2})}\right)\\
&= \log\left(\frac{\log(1+\theta^2)-\log(\theta^2)}{\log(1+\theta^2)}\right)\,.
\end{align*}
(2) $\lambda \ge M$, $\theta < 1$ and $\lambda \le 1/\theta$. In this case we have
\begin{align*}
\left|\log\left(\frac{\log(1+\lambda^2\theta^2)}{\log(1+\lambda^2)}\right)\right| &= \log\left(\frac{\log(1+\lambda^2)}{\log(1+\lambda^2\theta^2)}\right)\\
&\le \log\left(\frac{\log(1+\lambda^2)}{\log(1+M^2\theta^2)}\right)\\
&\le \log\left(\frac{\log(1+\theta^{-2})}{\log(1+M^2\theta^2)}\right)\,.
\end{align*}
Combining the results above, one can easily check that there is a nonnegative integrable function $f$ on $(0,1)$ such that (6.5) is satisfied.
Example 6.6 Suppose that $\alpha\in(0,2)$ and $\beta\in(0,\alpha)$. Define
\[
\phi(\lambda) = \lambda^{\alpha/2}(\log(1+\lambda))^{-\beta/2}\,.
\]
Then $\phi$ is a complete Bernstein function which can be written as $\phi(\lambda) = \lambda^{\alpha/2}\ell(\lambda)$ with
\[
\ell(\lambda) = (\log(1+\lambda))^{-\beta/2}\,.
\]
Similarly to the example above, one can use elementary analysis to check that there is a nonnegative integrable function $f$ on $(0,1)$ such that (6.5) is satisfied.
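Condition (6.5) for Example 6.5 can also be checked numerically: the case analysis above supplies an explicit majorant built from the two bounds, and a grid search over $(\theta,\lambda)$ confirms it. The grids and the choices $\beta = 1$, $M = 2$, $\delta = 1/2$ are assumptions of this sketch:

```python
import numpy as np

beta, M, delta = 1.0, 2.0, 0.5
ell = lambda lam: np.log1p(lam) ** (beta / 2)   # ell of Example 6.5

def lhs(theta, lam):
    # |log( ell(lam^2 theta^2) / ell(lam^2) )|, the quantity bounded in (6.5)
    return np.abs(np.log(ell(lam**2 * theta**2) / ell(lam**2)))

def f(theta):
    # Majorant assembled from the two cases worked out above
    b1 = np.log((np.log1p(theta**2) - np.log(theta**2)) / np.log1p(theta**2))
    b2 = np.log(np.log1p(theta ** (-2)) / np.log1p(M**2 * theta**2))
    return (beta / 2) * np.maximum(b1, b2)

thetas = np.logspace(-6, np.log10(delta), 50)
lams = np.logspace(np.log10(M), 8, 400)
sup = np.array([lhs(th, lams).max() for th in thetas])
print(bool(np.all(sup <= f(thetas))))   # True: the bound holds on the grid
```

Since $f(\theta)$ grows only like $2\log(1/\theta)$ as $\theta\to 0$, it is indeed integrable on $(0,1)$.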
In the remainder of this section we will always assume that the assumption of Proposition 6.2 is satisfied. It follows from Propositions 6.1 and 6.2 above and Corollary 2.3 of [69] that the potential measure $V$ of the ladder height process of $X$ has a decreasing density $v$. Since $X$ is symmetric, we know that the potential measure $\widehat V$ of the dual ladder height process is equal to $V$.
In light of Proposition 6.2, one can easily apply the Tauberian theorem (Theorem 1.7.1 in
[10]) and the monotone density theorem (Theorem 1.7.2 in [10]) to get the following result.
Proposition 6.7 As $x \to 0$, we have
\[
V((0,x)) \sim \frac{x^{\alpha/2}}{\Gamma(1+\alpha/2)(\ell(x^{-2}))^{1/2}}\,, \qquad
v(x) \sim \frac{x^{\alpha/2-1}}{\Gamma(\alpha/2)(\ell(x^{-2}))^{1/2}}\,.
\]
Proof. We omit the details. □
It follows from Proposition 6.2 above and Lemma 7.10 of [46] that the process $X$ does not creep upwards. Since $X$ is symmetric, we know that $X$ also does not creep downwards. Thus if, for any $a \in \mathbb{R}$, we define
\[
\tau_a = \inf\{t > 0 : X_t < a\}\,, \qquad \sigma_a = \inf\{t > 0 : X_t \le a\}\,,
\]
then we have
\[
\mathbb{P}_x(\tau_a = \sigma_a) = 1\,, \qquad x > a\,. \tag{6.7}
\]
Let $G_{(0,\infty)}(x,y)$ be the Green function of $X^{(0,\infty)}$, the process obtained by killing $X$ upon exiting from $(0,\infty)$. Then we have the following result.
Proposition 6.8 For any $x, y > 0$ we have
\[
G_{(0,\infty)}(x,y) = \begin{cases} \displaystyle\int_0^x v(z)v(y+z-x)\,dz\,, & x \le y\,,\\[6pt] \displaystyle\int_{x-y}^x v(z)v(y+z-x)\,dz\,, & x > y\,. \end{cases}
\]
Proof. By using (6.7) above and Theorem 20 on page 176 of [7], we get that for any nonnegative function $f$ on $(0,\infty)$,
\[
\mathbb{E}_x\left[\int_0^\infty f(X^{(0,\infty)}_t)\,dt\right] = k\int_0^\infty\int_0^x v(z)f(x+y-z)v(y)\,dz\,dy\,, \tag{6.8}
\]
where $k$ is a constant depending on the normalization of the local time of the process $X$ reflected at its supremum. We choose $k = 1$. Then
\begin{align*}
\mathbb{E}_x\left[\int_0^\infty f(X^{(0,\infty)}_t)\,dt\right]
&= \int_0^\infty v(y)\int_0^x v(z)f(x+y-z)\,dz\,dy\\
&= \int_0^x v(z)\int_0^\infty v(y)f(x+y-z)\,dy\,dz\\
&= \int_0^x v(z)\int_{x-z}^\infty v(w+z-x)f(w)\,dw\,dz\\
&= \int_0^x f(w)\int_{x-w}^x v(z)v(w+z-x)\,dz\,dw + \int_x^\infty f(w)\int_0^x v(z)v(w+z-x)\,dz\,dw\,. \tag{6.9}
\end{align*}
On the other hand,
\begin{align*}
\mathbb{E}_x\left[\int_0^\infty f(X^{(0,\infty)}_t)\,dt\right] &= \int_0^\infty G_{(0,\infty)}(x,w)f(w)\,dw\\
&= \int_0^x G_{(0,\infty)}(x,w)f(w)\,dw + \int_x^\infty G_{(0,\infty)}(x,w)f(w)\,dw\,. \tag{6.10}
\end{align*}
By comparing (6.9) and (6.10) we arrive at our desired conclusion. □
For any $r > 0$, let $G_{(0,r)}$ be the Green function of $X^{(0,r)}$, the process obtained by killing $X$ upon exiting from $(0,r)$. Then we have the following result.
Proposition 6.9 For any $R > 0$, there exists $C = C(R) > 0$ such that
\[
\int_0^r G_{(0,r)}(x,y)\,dy \le C\,\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}}\,\frac{x^{\alpha/2}}{(\ell(x^{-2}))^{1/2}}\,, \qquad x \in (0,r)\,,\ r \in (0,R)\,.
\]
Proof. For any $x \in (0,r)$, we have
\begin{align*}
\int_0^r G_{(0,r)}(x,y)\,dy &\le \int_0^r G_{(0,\infty)}(x,y)\,dy\\
&= \int_0^x\int_{x-y}^x v(z)v(y+z-x)\,dz\,dy + \int_x^r\int_0^x v(z)v(y+z-x)\,dz\,dy\\
&= \int_0^x v(z)\int_{x-z}^x v(y+z-x)\,dy\,dz + \int_0^x v(z)\int_x^r v(y+z-x)\,dy\,dz\\
&\le 2\,V((0,r))\,V((0,x))\,.
\end{align*}
Now the desired conclusion follows easily from Proposition 6.7 and the continuity of $V((0,x))$ and $x^{\alpha/2}/(\ell(x^{-2}))^{1/2}$. □
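In the stable case $\phi(\lambda)=\lambda^{\alpha/2}$ one has $v(x) = x^{\alpha/2-1}/\Gamma(\alpha/2)$ and $V((0,x)) = x^{\alpha/2}/\Gamma(1+\alpha/2)$ (the relations of Proposition 6.7 with $\ell\equiv 1$, which hold exactly here), so the formula of Proposition 6.8 and the bound just proved can be probed numerically:

```python
import numpy as np
from math import gamma

def trapz(y, x):
    # trapezoidal rule, kept explicit to avoid NumPy version differences
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

alpha = 1.0
v = lambda z: z ** (alpha / 2 - 1) / gamma(alpha / 2)   # ladder-height potential density
V = lambda x: x ** (alpha / 2) / gamma(1 + alpha / 2)   # V((0, x))

def G_half(x, y, npts=20_000):
    # Proposition 6.8 specialized to the stable case (v as above)
    lo = 0.0 if x <= y else x - y
    z = np.linspace(lo + (x - lo) / npts, x, npts)      # skip the endpoint singularity
    return trapz(v(z) * v(y + z - x), z)

r, x = 1.0, 0.3
ys = np.linspace(r / 400, r, 400)
lhs = trapz([G_half(x, y) for y in ys], ys)
rhs = 2 * V(r) * V(x)
print(lhs, rhs)   # the bound of Proposition 6.9: lhs does not exceed rhs
```

The symmetry $G_{(0,\infty)}(x,y) = G_{(0,\infty)}(y,x)$, which the formula must satisfy for a symmetric process, is also visible numerically.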
As a consequence of the result above, we immediately get the following.
Proposition 6.10 For any $R > 0$, there exists $C = C(R) > 0$ such that
\[
\int_0^r G_{(0,r)}(x,y)\,dy \le C\,\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}}\left(\frac{x^{\alpha/2}}{(\ell(x^{-2}))^{1/2}} \wedge \frac{(r-x)^{\alpha/2}}{(\ell((r-x)^{-2}))^{1/2}}\right)\,, \qquad x \in (0,r)\,,\ r \in (0,R)\,.
\]
6.2 Preliminary Results on Multi-dimensional Subordinate Brownian Motions
In the remainder of this paper we will always assume that $d \ge 2$ and that $\alpha \in (0,2)$. From now on we will assume that $B = (B_t : t\ge 0)$ is a Brownian motion on $\mathbb{R}^d$ with
\[
\mathbb{E}\big[e^{i\xi\cdot(B_t-B_0)}\big] = e^{-t|\xi|^2}\,, \qquad \forall \xi \in \mathbb{R}^d,\ t > 0\,.
\]
Suppose that $S = (S_t : t\ge 0)$ is a subordinator independent of $B$ and that its Laplace exponent $\phi$ is a complete Bernstein function satisfying all the assumptions of the previous section. More precisely, we assume that there is a positive function $\ell$ on $(0,\infty)$ which is slowly varying at $\infty$ such that $\phi(\lambda) = \lambda^{\alpha/2}\ell(\lambda)$ for all $\lambda > 0$, and that there is a nonnegative integrable function $f$ on $(0,1)$ such that (6.5) holds. As in the previous section, we will use $u(t)$ and $\mu(t)$ to denote the potential density and the Levy density of $S$, respectively.
In the remainder of this paper we will use $X = (X_t : t\ge 0)$ to denote the subordinate Brownian motion defined by $X_t = B_{S_t}$. It is easy to check that when $d \ge 3$ the process $X$ is transient. In the case $d = 2$, we will always assume the following:
A1. The potential density $u$ of $S$ satisfies
\[
u(t) \sim c\, t^{\gamma-1}\,, \qquad t \to\infty\,, \tag{6.11}
\]
for some constants $c > 0$ and $\gamma < 1$.
Under this assumption the process $X$ is also transient for $d = 2$.
We will use $G(x,y) = G(x-y)$ to denote the Green function of $X$. The Green function $G$ of $X$ is given by the formula
\[
G(x) = \int_0^\infty (4\pi t)^{-d/2}\, e^{-|x|^2/(4t)}\, u(t)\,dt\,, \qquad x \in \mathbb{R}^d\,.
\]
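In the stable case the subordination formula can be integrated numerically and compared with the closed form that the asymptotics below reduce to: there $u(t) = t^{\alpha/2-1}/\Gamma(\alpha/2)$ exactly, $\ell\equiv 1$, and the asymptotic relation is an identity for all $x \ne 0$. A sketch:

```python
import numpy as np
from math import gamma, pi

def trapz(y, x):
    # trapezoidal rule, kept explicit to avoid NumPy version differences
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

d, alpha = 3, 1.4
# Potential density of the alpha/2-stable subordinator (phi(lambda) = lambda^{alpha/2}):
u = lambda t: t ** (alpha / 2 - 1) / gamma(alpha / 2)

def G(xnorm, npts=200_000):
    t = np.logspace(-8, 8, npts)   # truncated but numerically exhaustive range
    return trapz((4 * pi * t) ** (-d / 2) * np.exp(-xnorm**2 / (4 * t)) * u(t), t)

# Closed form: G(x) = Gamma((d-alpha)/2) / (2^alpha pi^{d/2} Gamma(alpha/2)) |x|^{alpha-d}
closed = lambda xn: gamma((d - alpha) / 2) / (2**alpha * pi ** (d / 2) * gamma(alpha / 2)) * xn ** (alpha - d)

for xn in [0.5, 1.0, 2.0]:
    print(xn, G(xn), closed(xn))   # the two columns agree
```

The closed-form constant agrees with the one in Theorem 6.11 below after simplifying $\alpha/(2\Gamma(1+\alpha/2)) = 1/\Gamma(\alpha/2)$.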
Using this formula, we can easily see that $G$ is radially decreasing and continuous in $\mathbb{R}^d\setminus\{0\}$. In order to get the asymptotic behavior of $G$ near the origin, we need some additional assumption on the slowly varying function $\ell$. For any $y, t, \xi > 0$, define
\[
\Lambda_{\ell,\xi}(y,t) := \begin{cases} \dfrac{\ell(1/y)}{\ell(4t/y)}\,, & y < t\xi\,,\\[4pt] 0\,, & y \ge t\xi\,. \end{cases}
\]
We will always assume the following:
A2. There is a $\xi > 0$ such that
\[
\Lambda_{\ell,\xi}(y,t) \le g(t)\,, \qquad \forall y, t > 0\,,
\]
for some positive function $g$ on $(0,\infty)$ with
\[
\int_0^\infty t^{(d-\alpha)/2-1} e^{-t} g(t)\,dt < \infty\,.
\]
It is easy to check (see the proofs of Theorems 3.6 and 3.11 in [70]) that A1 and A2 are satisfied for the subordinators corresponding to Examples 6.3-6.6.
Under these assumptions we have the following.
Theorem 6.11 The Green function $G$ of $X$ satisfies
\[
G(x) \sim \frac{\alpha\,\Gamma((d-\alpha)/2)}{2^{\alpha+1}\pi^{d/2}\,\Gamma(1+\alpha/2)}\,\frac{1}{|x|^{d-\alpha}\,\ell(|x|^{-2})}\,, \qquad |x| \to 0\,.
\]
Proof. This follows easily from A1-A2, (6.2) above and Lemma 3.3 of [70]. We omit the details. □
Let $J$ be the jumping function of $X$; then
\[
J(x) = \int_0^\infty (4\pi t)^{-d/2}\, e^{-|x|^2/(4t)}\,\mu(t)\,dt\,, \qquad x \in \mathbb{R}^d\,.
\]
Thus $J(x) = j(|x|)$ with
\[
j(r) = \int_0^\infty (4\pi t)^{-d/2}\, e^{-r^2/(4t)}\,\mu(t)\,dt\,, \qquad r > 0\,.
\]
It is easy to see that $j$ is continuous on $(0,\infty)$. Since $t \mapsto \mu(t)$ is decreasing, the function $r \mapsto j(r)$ is decreasing on $(0,\infty)$. In order to get the asymptotic behavior of $j$ near the origin, we need some additional assumption on the slowly varying function $\ell$. For any $y, t, \xi > 0$, define
\[
\Upsilon_{\ell,\xi}(y,t) := \begin{cases} \dfrac{\ell(4t/y)}{\ell(1/y)}\,, & y < t\xi\,,\\[4pt] 0\,, & y \ge t\xi\,. \end{cases}
\]
We will always assume the following:
A3. There is a $\xi > 0$ such that
\[
\Upsilon_{\ell,\xi}(y,t) \le h(t)\,, \qquad \forall y, t > 0\,,
\]
for some positive function $h$ on $(0,\infty)$ with
\[
\int_0^\infty t^{(d+\alpha)/2-1} e^{-t} h(t)\,dt < \infty\,.
\]
It is easy to check (see the proofs of Theorems 3.6 and 3.11 in [70]) that A3 is satisfied for the subordinators corresponding to Examples 6.3-6.6.
Theorem 6.12 The function $j$ satisfies
\[
j(r) \sim \frac{\alpha\,\Gamma((d+\alpha)/2)}{2^{1-\alpha}\pi^{d/2}\,\Gamma(1-\alpha/2)}\,\frac{\ell(r^{-2})}{r^{d+\alpha}}\,, \qquad r \to 0\,.
\]
Proof. This follows easily from A1, A3, (6.3) above and Lemma 3.3 of [70]. We omit the details. □
For any open set $D$, we use $\tau_D$ to denote the first exit time of $D$, i.e., $\tau_D = \inf\{t > 0 : X_t \notin D\}$. Given an open set $D \subset \mathbb{R}^d$, we define $X^D_t(\omega) = X_t(\omega)$ if $t < \tau_D(\omega)$ and $X^D_t(\omega) = \partial$ if $t \ge \tau_D(\omega)$, where $\partial$ is a cemetery state. We now recall the definition of harmonic functions with respect to $X$.
Definition 6.1 Let $D$ be an open subset of $\mathbb{R}^d$. A function $u$ defined on $\mathbb{R}^d$ is said to be
(1) harmonic in $D$ with respect to $X$ if
\[
\mathbb{E}_x[|u(X_{\tau_B})|] < \infty \quad\text{and}\quad u(x) = \mathbb{E}_x[u(X_{\tau_B})]\,, \qquad x \in B\,,
\]
for every open set $B$ whose closure is a compact subset of $D$;
(2) regular harmonic in $D$ with respect to $X$ if it is harmonic in $D$ with respect to $X$ and for each $x \in D$,
\[
u(x) = \mathbb{E}_x[u(X_{\tau_D})]\,;
\]
(3) harmonic for X^D if it is harmonic for X in D and vanishes outside D.
In order for a scale invariant Harnack inequality to hold, we need to assume some addi-
tional conditions on the Levy density µ of S. We will always assume that
A4. The Levy density µ of S satisfies the following condition: there exists C1 > 0 such that
\[
\mu(t) \le C_1\,\mu(t+1), \qquad \forall t > 1.
\]
It follows from (6.3) that for any M > 0 there exists C2 > 0 such that
\[
\mu(t) \le C_2\,\mu(2t), \qquad \forall t \in (0, M).
\]
Using A4, the display above and repeating the proof of Lemma 4.2 of [56], we get that
(1) for any M > 0, there exists C3 > 0 such that
\[
j(r) \le C_3\, j(2r), \qquad \forall r \in (0, M); \tag{6.12}
\]
(2) there exists C4 > 0 such that
\[
j(r) \le C_4\, j(r+1), \qquad \forall r > 1.
\]
It is easy to check that for the subordinators corresponding to Examples 6.3–6.6, A4 is satisfied. Therefore, by Theorem 4.14 of [70] (see also [56]), we have the following Harnack inequality.
Theorem 6.13 (Harnack inequality) There exist r1 ∈ (0, 1) and C > 0 such that for every r ∈ (0, r1), every x0 ∈ Rd, and every nonnegative function f on Rd which is harmonic in B(x0, r) with respect to X, we have
\[
\sup_{y \in B(x_0, r/2)} f(y) \le C \inf_{y \in B(x_0, r/2)} f(y).
\]
For any bounded open set D in Rd, we will use GD(x, y) to denote the Green function of X^D. Using the continuity and the radial decreasing property of G, one can easily check that GD is continuous on (D × D) \ {(x, x) : x ∈ D}.
Proposition 6.14 For any R > 0, there exists C = C(R) > 0 such that for every open subset D with diam(D) ≤ R,
\[
G_D(x,y) \le G(x,y) \le C\,\frac{1}{\ell(|x-y|^{-2})\,|x-y|^{d-\alpha}}, \qquad \forall (x,y) \in D \times D. \tag{6.13}
\]
Proof. The results of this proposition are immediate consequences of Theorem 6.11 and the continuity and positivity of ℓ(r^{-2}) r^{d-α} on (0,∞). □
Lemma 6.15 For any R > 0, there exists C = C(R) > 0 such that for every r ∈ (0, R) and x0 ∈ Rd,
\[
E_x[\tau_{B(x_0,r)}] \le C\,\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}}\,
\frac{(r-|x-x_0|)^{\alpha/2}}{(\ell((r-|x-x_0|)^{-2}))^{1/2}}, \qquad x \in B(x_0, r).
\]
Proof. Without loss of generality, we may assume that x0 = 0. For x 6= 0, put Z_t = X_t · x/|x|. Then Zt is a Levy process on R with
\[
E\big(e^{i\theta Z_t}\big) = E\big(e^{i\theta \frac{x}{|x|}\cdot X_t}\big) = e^{-t|\theta|^{\alpha}\ell(\theta^2)}, \qquad \theta \in \mathbb{R}.
\]
Thus Zt is the type of one-dimensional subordinate Brownian motion we studied in the previous section. It is easy to see that if Xt ∈ B(0, r), then |Zt| < r; hence
\[
E_x[\tau_{B(0,r)}] \le E_{|x|}[\tau],
\]
where τ = inf{t > 0 : |Zt| ≥ r}. Now the desired conclusion follows easily from Proposition 6.10. □
Lemma 6.16 There exist r2 ∈ (0, r1] and C > 0 such that for every positive r ≤ r2 and x0 ∈ Rd,
\[
E_{x_0}[\tau_{B(x_0,r)}] \ge C\,\frac{r^{\alpha}}{\ell(r^{-2})}.
\]
Proof. The conclusion of this lemma follows easily from Theorem 6.12 above and Lemma 3.2 of [66]. □
Using the Levy system for X, we know that for every bounded open subset D, every f ≥ 0 and x ∈ D,
\[
E_x\big[f(X_{\tau_D});\ X_{\tau_D-} \ne X_{\tau_D}\big] = \int_{D^c}\int_D G_D(x,z)\,J(z-y)\,dz\,f(y)\,dy. \tag{6.14}
\]
For notational convenience, we define
\[
K_D(x,y) := \int_D G_D(x,z)\,J(z-y)\,dz, \qquad (x,y) \in D \times D^c. \tag{6.15}
\]
Thus (6.14) can be simply written as
\[
E_x\big[f(X_{\tau_D});\ X_{\tau_D-} \ne X_{\tau_D}\big] = \int_{D^c} K_D(x,y)\,f(y)\,dy.
\]
Using the continuity of GD and J, one can easily check that KD is continuous on D × Dc. As a consequence of Lemmas 6.15 and 6.16 and (6.14), we get the following proposition.
Proposition 6.17 There exist C1, C2 > 0 such that for every r ∈ (0, r2) and x0 ∈ Rd,
\[
K_{B(x_0,r)}(x,y) \le C_1\, j(|y-x_0|-r)\,\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}}\,
\frac{(r-|x-x_0|)^{\alpha/2}}{(\ell((r-|x-x_0|)^{-2}))^{1/2}} \tag{6.16}
\]
for all (x, y) ∈ B(x0, r) × B(x0, r)^c, and
\[
K_{B(x_0,r)}(x_0,y) \ge C_2\, J(2(y-x_0))\,\frac{r^{\alpha}}{\ell(r^{-2})}, \qquad \forall y \in B(x_0, r)^c. \tag{6.17}
\]
Proof. Without loss of generality, we assume x0 = 0. For z ∈ B(0, r) and y ∈ B(0, r)^c,
\[
|y| - r \le |y| - |z| \le |z-y| \le |z| + |y| \le r + |y| \le 2|y|.
\]
Thus, by the monotonicity of J,
\[
J(2y) \le J(z-y) \le j(|y|-r), \qquad (z,y) \in B(0,r) \times B(0,r)^c.
\]
Applying the above inequality and Lemmas 6.15 and 6.16 to (6.14), we have proved the proposition. □
Proposition 6.18 For every a ∈ (0, 1), there exists C = C(a) > 0 such that for every
r ∈ (0, r2), x0 ∈ Rd and x1, x2 ∈ B(x0, ar),
\[
K_{B(x_0,r)}(x_1,y) \le C\, K_{B(x_0,r)}(x_2,y), \qquad y \in B(x_0,r)^c.
\]
Proof. This follows easily from the Harnack inequality (Theorem 6.13) and the continuity of K_{B(x_0,r)}. For details, see the proof of Lemma 4.2 in [73]. □
As an immediate consequence of Theorem 6.12, we have
Lemma 6.19 There exists r3 ∈ (0, r2] such that for every y with 0 < |y| ≤ r3,
\[
\frac{1}{2}\,\frac{\ell(|y|^{-2})}{|y|^{d+\alpha}} \le J(y) \le 2\,\frac{\ell(|y|^{-2})}{|y|^{d+\alpha}}.
\]
The next inequalities will be used several times in the remainder of these notes.
Lemma 6.20 There exist r4 ∈ (0, r3] and C > 0 such that
\[
\frac{s^{\alpha/2}}{(\ell(s^{-2}))^{1/2}} \le C\,\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}}, \qquad \forall\, 0 < s < r \le 4r_4, \tag{6.18}
\]
\[
\frac{s^{1-\alpha/2}}{(\ell(s^{-2}))^{1/2}} \le C\,\frac{r^{1-\alpha/2}}{(\ell(r^{-2}))^{1/2}}, \qquad \forall\, 0 < s < r \le 4r_4, \tag{6.19}
\]
\[
s^{1-\alpha/2}\big(\ell(s^{-2})\big)^{1/2} \le C\, r^{1-\alpha/2}\big(\ell(r^{-2})\big)^{1/2}, \qquad \forall\, 0 < s < r \le 4r_4, \tag{6.20}
\]
\[
\int_r^\infty \frac{(\ell(s^{-2}))^{1/2}}{s^{1+\alpha/2}}\,ds \le C\,\frac{(\ell(r^{-2}))^{1/2}}{r^{\alpha/2}}, \qquad \forall\, 0 < r \le 4r_4, \tag{6.21}
\]
\[
\int_0^r \frac{(\ell(s^{-2}))^{1/2}}{s^{\alpha/2}}\,ds \le C\,\frac{(\ell(r^{-2}))^{1/2}}{r^{\alpha/2-1}}, \qquad \forall\, 0 < r \le 4r_4, \tag{6.22}
\]
\[
\int_r^\infty \frac{\ell(s^{-2})}{s^{1+\alpha}}\,ds \le C\,\frac{\ell(r^{-2})}{r^{\alpha}}, \qquad \forall\, 0 < r \le 4r_4, \tag{6.23}
\]
\[
\int_0^r \frac{\ell(s^{-2})}{s^{\alpha-1}}\,ds \le C\,\frac{\ell(r^{-2})}{r^{\alpha-2}}, \qquad \forall\, 0 < r \le 4r_4, \tag{6.24}
\]
and
\[
\int_0^r \frac{s^{\alpha-1}}{\ell(s^{-2})}\,ds \le C\,\frac{r^{\alpha}}{\ell(r^{-2})}, \qquad \forall\, 0 < r \le 4r_4. \tag{6.25}
\]
Proof. The first three inequalities follow easily from Theorem 1.5.3 of [10], while the last five follow from the 0-version of Theorem 1.5.11 of [10]. □
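The integral bounds of Lemma 6.20 can be checked numerically for a concrete slowly varying function. The sketch below is my own illustration (not from the text): it takes ℓ(λ) = log(1+λ) and α = 1 and verifies that the ratio of the two sides of (6.21) stays bounded as r → 0. Since ℓ(s^{-2}) is decreasing in s, the integrand is at most ℓ(r^{-2})^{1/2} s^{-1-α/2}, so the ratio is in fact at most 2/α = 2 here.

```python
import math

def ell(lam):
    """An illustrative slowly varying function at infinity."""
    return math.log1p(lam)

def lhs_integral(r, alpha=1.0, s_max=1e6, n=20000):
    """Approximate int_r^infty ell(s^-2)^(1/2) / s^(1+alpha/2) ds by the
    trapezoid rule in the variable u = log s (ds = s du)."""
    a, b = math.log(r), math.log(s_max)
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        u = a + i * h
        s = math.exp(u)
        f = math.sqrt(ell(s ** -2)) / s ** (1 + alpha / 2) * s
        total += f if 0 < i < n else f / 2
    return total * h

def rhs_bound(r, alpha=1.0):
    """The right-hand side of (6.21) without the constant C."""
    return math.sqrt(ell(r ** -2)) / r ** (alpha / 2)

# The ratio should stay bounded (here it is provably at most 2) as r -> 0.
for r in (1e-2, 1e-4, 1e-6):
    print(r, lhs_integral(r) / rhs_bound(r))
```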
Proposition 6.21 For every a ∈ (0, 1), there exists C = C(a) > 0 such that for every r ∈ (0, r4] and x0 ∈ Rd,
\[
K_{B(x_0,r)}(x,y) \le C\,\frac{r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}}\,
\frac{(\ell((|y-x_0|-r)^{-2}))^{1/2}}{(|y-x_0|-r)^{\alpha/2}}, \qquad \forall x \in B(x_0, ar),\ y \in \{r < |x_0-y| \le 2r\}.
\]
Proof. By Proposition 6.18,
\[
K_{B(x_0,r)}(x,y) \le \frac{c_1}{r^d} \int_{B(x_0,ar)} K_{B(x_0,r)}(w,y)\,dw
\]
for some constant c1 = c1(a) > 0. Thus from (6.16) we have that
\[
\begin{aligned}
K_{B(x_0,r)}(x,y) &\le \frac{c_2}{r^d} \int_{B(x_0,r)}\int_{B(x_0,r)} G_{B(x_0,r)}(w,z)\,J(z-y)\,dz\,dw\\
&= \frac{c_2}{r^d} \int_{B(x_0,r)} E_z[\tau_{B(x_0,r)}]\,J(z-y)\,dz\\
&\le \frac{c_3}{r^d}\,\frac{r^{\alpha/2}}{(\ell(r^{-2}))^{1/2}} \int_{B(x_0,r)} \frac{(r-|z-x_0|)^{\alpha/2}}{(\ell((r-|z-x_0|)^{-2}))^{1/2}}\,J(z-y)\,dz
\end{aligned}
\]
for some constants c2 = c2(a) > 0, c3 = c3(a) > 0. Now applying Lemma 6.19, we get
\[
K_{B(x_0,r)}(x,y) \le \frac{c_4\, r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \int_{B(x_0,r)} \frac{(r-|z-x_0|)^{\alpha/2}}{(\ell((r-|z-x_0|)^{-2}))^{1/2}}\,\frac{\ell(|z-y|^{-2})}{|z-y|^{d+\alpha}}\,dz
\]
for some constant c4 = c4(a) > 0. Since r − |z − x0| ≤ |y − z| ≤ 3r ≤ 3r4, from (6.18) we see that
\[
\frac{(r-|z-x_0|)^{\alpha/2}}{(\ell((r-|z-x_0|)^{-2}))^{1/2}} \le c_5\,\frac{|y-z|^{\alpha/2}}{(\ell(|y-z|^{-2}))^{1/2}}
\]
for some constant c5 > 0. Thus we have
\[
\begin{aligned}
K_{B(x_0,r)}(x,y) &\le \frac{c_6\, r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \int_{B(x_0,r)} \frac{(\ell(|z-y|^{-2}))^{1/2}}{|z-y|^{d+\alpha/2}}\,dz\\
&\le \frac{c_6\, r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \int_{B(y,|y-x_0|-r)^c} \frac{(\ell(|z-y|^{-2}))^{1/2}}{|z-y|^{d+\alpha/2}}\,dz\\
&\le \frac{c_7\, r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}} \int_{|y-x_0|-r}^{\infty} \frac{(\ell(s^{-2}))^{1/2}}{s^{1+\alpha/2}}\,ds
\end{aligned}
\]
for some constants c6 = c6(a) > 0, c7 = c7(a) > 0. Using (6.21) in the display above, we conclude that
\[
K_{B(x_0,r)}(x,y) \le \frac{c_8\, r^{\alpha/2-d}}{(\ell(r^{-2}))^{1/2}}\,\frac{(\ell((|y-x_0|-r)^{-2}))^{1/2}}{(|y-x_0|-r)^{\alpha/2}}
\]
for some constant c8 = c8(a) > 0. □
6.3 Boundary Harnack Principle
We will use A to denote the L2-generator of X, and C∞c (Rd) to denote the space of infinitely differentiable functions with compact support. It is well known that C∞c (Rd) is contained in the domain of A and, for every φ ∈ C∞c (Rd) and every ε > 0,
\[
A\varphi(x) = \int_{\mathbb{R}^d}\big(\varphi(x+y) - \varphi(x) - (\nabla\varphi(x)\cdot y)\,\mathbf{1}_{B(0,\varepsilon)}(y)\big)\,J(y)\,dy \tag{6.26}
\]
(see Section 4.1 in [63]).
Recall that G(x, y) and GD(x, y) are the Green functions of X and X^D respectively. We have AG(x, y) = −δx(y) in the weak sense. Since GD(x, y) = G(x, y) − Ex[G(XτD, y)], we have, by the symmetry of A, for any x ∈ D and any nonnegative φ ∈ C∞c (Rd),
\[
\begin{aligned}
\int_D G_D(x,y)\,A\varphi(y)\,dy
&= \int_{\mathbb{R}^d} G_D(x,y)\,A\varphi(y)\,dy\\
&= \int_{\mathbb{R}^d} G(x,y)\,A\varphi(y)\,dy - \int_{\mathbb{R}^d} E_x[G(X_{\tau_D},y)]\,A\varphi(y)\,dy\\
&= \int_{\mathbb{R}^d} G(x,y)\,A\varphi(y)\,dy - \int_0^\infty\!\int_{D^c}\int_{\mathbb{R}^d} G(z,y)\,A\varphi(y)\,dy\, P_x(X_{\tau_D} \in dz,\ \tau_D \in dt)\\
&= -\varphi(x) + \int_0^\infty\!\int_{D^c} \varphi(z)\, P_x(X_{\tau_D} \in dz,\ \tau_D \in dt) = -\varphi(x) + E_x[\varphi(X_{\tau_D})].
\end{aligned}
\]
In particular, if φ(x) = 0 for x ∈ D, we have
\[
E_x\big[\varphi(X_{\tau_D})\big] = \int_D G_D(x,y)\,A\varphi(y)\,dy. \tag{6.27}
\]
Take a sequence of radial functions φm in C∞c (Rd) such that 0 ≤ φm ≤ 1,
\[
\varphi_m(y) =
\begin{cases}
0, & |y| < \tfrac{1}{2},\\
1, & 1 \le |y| \le m+1,\\
0, & |y| > m+2,
\end{cases}
\]
and such that \(\sum_{i,j}\big|\frac{\partial^2}{\partial y_i\,\partial y_j}\varphi_m\big|\) is uniformly bounded. Define φm,r(y) = φm(y/r), so that 0 ≤ φm,r ≤ 1,
\[
\varphi_{m,r}(y) =
\begin{cases}
0, & |y| < r/2,\\
1, & r \le |y| \le r(m+1),\\
0, & |y| > r(m+2),
\end{cases}
\]
and
\[
\sup_{y\in\mathbb{R}^d}\ \sum_{i,j}\Big|\frac{\partial^2}{\partial y_i\,\partial y_j}\varphi_{m,r}(y)\Big| < c_1\, r^{-2}. \tag{6.28}
\]
We recall the constant r4 from the previous section. We claim that there exists a constant C > 0 such that for all r ∈ (0, r4),
\[
\sup_{m\ge 1}\ \sup_{y\in\mathbb{R}^d} |A\varphi_{m,r}(y)| \le C\, r^{-\alpha}\,\ell(r^{-2}). \tag{6.29}
\]
In fact, by Lemma 6.19 and (6.28), we have
\[
\begin{aligned}
&\Big|\int_{\mathbb{R}^d}\big(\varphi_{m,r}(x+y) - \varphi_{m,r}(x) - (\nabla\varphi_{m,r}(x)\cdot y)\mathbf{1}_{B(0,r)}(y)\big)\,J(y)\,dy\Big|\\
&\le \Big|\int_{|y|\le r}\big(\varphi_{m,r}(x+y) - \varphi_{m,r}(x) - (\nabla\varphi_{m,r}(x)\cdot y)\mathbf{1}_{B(0,r)}(y)\big)\,J(y)\,dy\Big| + 2\int_{|y|>r} J(y)\,dy\\
&\le \frac{c_2}{r^2}\int_{|y|\le r} |y|^2\, J(y)\,dy + 2\int_{|y|>r} J(y)\,dy\\
&\le \frac{c_3}{r^2}\int_{|y|\le r} \frac{\ell(|y|^{-2})}{|y|^{d+\alpha-2}}\,dy + c_3\int_{|y|>r} \frac{\ell(|y|^{-2})}{|y|^{d+\alpha}}\,dy\\
&\le \frac{c_4}{r^2}\int_0^r \frac{\ell(s^{-2})}{s^{\alpha-1}}\,ds + c_4\int_r^\infty \frac{\ell(s^{-2})}{s^{1+\alpha}}\,ds.
\end{aligned}
\]
Applying (6.23) and (6.24) to the display above, we get
\[
\Big|\int_{\mathbb{R}^d}\big(\varphi_{m,r}(x+y) - \varphi_{m,r}(x) - (\nabla\varphi_{m,r}(x)\cdot y)\mathbf{1}_{B(0,r)}(y)\big)\,J(y)\,dy\Big| \le c_5\, r^{-\alpha}\,\ell(r^{-2})
\]
for some constant c5 = c5(d, α, ℓ) > 0. So the claim follows. When D ⊂ B(0, r) for some r ∈ (0, r4), we get, by combining (6.27) and (6.29), that for any x ∈ D ∩ B(0, r/2),
\[
P_x\big(X_{\tau_D} \in B(0,r)^c\big) = \lim_{m\to\infty} P_x\big(X_{\tau_D} \in A(0, r, (m+1)r)\big) \le C\, r^{-\alpha}\,\ell(r^{-2}) \int_D G_D(x,y)\,dy.
\]
We have thus proved the following lemma; alternatively, one can repeat the first part of the proof of Lemma 3.3 in [71], using Lemma 6.19 and (6.23)–(6.24) above.
Lemma 6.22 There exists C > 0 such that for any r ∈ (0, r4) and any open set D with D ⊂ B(0, r), we have
\[
P_x\big(X_{\tau_D} \in B(0,r)^c\big) \le C\, r^{-\alpha}\,\ell(r^{-2}) \int_D G_D(x,y)\,dy, \qquad x \in D \cap B(0, r/2).
\]
Lemma 6.23 There exists C > 0 such that for any open set D with B(A, κr) ⊂ D ⊂ B(0, r) for some r ∈ (0, r4) and κ ∈ (0, 1), we have that for every x ∈ D \ B(A, κr),
\[
\int_D G_D(x,y)\,dy \le C\, r^{\alpha}\,\kappa^{-d-\alpha/2}\,\frac{1}{\ell((4r)^{-2})}
\left(1 + \frac{\ell\big((\tfrac{3\kappa r}{4})^{-2}\big)}{\ell((4r)^{-2})}\right)
P_x\big(X_{\tau_{D\setminus B(A,\kappa r)}} \in B(A, \kappa r)\big).
\]
Proof. Fix a point x ∈ D \ B(A, κr) and let B := B(A, κr/2). Since GD(x, ·) is harmonic in D \ {x} with respect to X,
\[
G_D(x,A) \ge \int_{D\cap B^c} K_B(A,y)\,G_D(x,y)\,dy \ge \int_{D\cap B(A,\frac{3\kappa r}{4})^c} K_B(A,y)\,G_D(x,y)\,dy.
\]
Since \(\frac{3\kappa r}{4} \le |y-A| \le 2r\) for y ∈ B(A, 3κr/4)^c ∩ D and j is a decreasing function, it follows from (6.17) in Proposition 6.17 and Lemma 6.19 that
\[
\begin{aligned}
G_D(x,A) &\ge c_1\,\frac{(\frac{3\kappa r}{4})^{\alpha}}{\ell\big((\frac{3\kappa r}{4})^{-2}\big)} \int_{D\cap B(A,\frac{3\kappa r}{4})^c} G_D(x,y)\,J(2(y-A))\,dy\\
&\ge c_1\, j(4r)\,\frac{(\frac{3\kappa r}{4})^{\alpha}}{\ell\big((\frac{3\kappa r}{4})^{-2}\big)} \int_{D\cap B(A,\frac{3\kappa r}{4})^c} G_D(x,y)\,dy\\
&\ge c_2\,\kappa^{\alpha}\, r^{-d}\,\frac{\ell((4r)^{-2})}{\ell\big((\frac{3\kappa r}{4})^{-2}\big)} \int_{D\cap B(A,\frac{3\kappa r}{4})^c} G_D(x,y)\,dy
\end{aligned}
\]
for some positive constants c1 and c2. Applying Theorem 6.13 we get
\[
\int_{B(A,\frac{3\kappa r}{4})} G_D(x,y)\,dy \le c_3 \int_{B(A,\frac{3\kappa r}{4})} G_D(x,A)\,dy \le c_4\, r^d\,\kappa^d\, G_D(x,A)
\]
for some positive constants c3 and c4. Combining these two estimates we get that
\[
\int_D G_D(x,y)\,dy \le c_5\left(r^d\kappa^d + r^d\kappa^{-\alpha}\,\frac{\ell\big((\frac{3\kappa r}{4})^{-2}\big)}{\ell((4r)^{-2})}\right) G_D(x,A) \tag{6.30}
\]
for some constant c5 > 0.
Let Ω := D \ B(A, κr/2). Note that for any z ∈ B(A, κr/4) and y ∈ Ω, we have 2^{-1}|y − z| ≤ |y − A| ≤ 2|y − z|. Thus we get from (6.15) that for z ∈ B(A, κr/4),
\[
c_6^{-1}\, K_\Omega(x,A) \le K_\Omega(x,z) \le c_6\, K_\Omega(x,A) \tag{6.31}
\]
for some c6 > 1. Using the harmonicity of GD(·, A) in D \ {A} with respect to X, we can split GD(·, A) into two parts:
\[
\begin{aligned}
G_D(x,A) &= E_x\big[G_D(X_{\tau_\Omega}, A)\big]\\
&= E_x\big[G_D(X_{\tau_\Omega}, A);\ X_{\tau_\Omega} \in B(A,\tfrac{\kappa r}{4})\big]
+ E_x\big[G_D(X_{\tau_\Omega}, A);\ X_{\tau_\Omega} \in \{\tfrac{\kappa r}{4} \le |y-A| \le \tfrac{\kappa r}{2}\}\big]\\
&=: I_1 + I_2.
\end{aligned}
\]
Using (6.31) and (6.13), we have
\[
I_1 \le c_6\, K_\Omega(x,A) \int_{B(A,\frac{\kappa r}{4})} G_D(y,A)\,dy
\le c_7\, K_\Omega(x,A) \int_{B(A,\frac{\kappa r}{4})} \frac{1}{|y-A|^{d-\alpha}}\,\frac{dy}{\ell(|y-A|^{-2})}
\]
for some constant c7 > 0. Since |y − A| ≤ 4r ≤ 4r4, by (6.18),
\[
\frac{|y-A|^{\alpha/2}}{\ell(|y-A|^{-2})} \le c_8\,\frac{(4r)^{\alpha/2}}{\ell((4r)^{-2})} \tag{6.32}
\]
for some constant c8 > 0. Thus
\[
I_1 \le c_7 c_8\, K_\Omega(x,A) \int_{B(A,\frac{\kappa r}{4})} \frac{1}{|y-A|^{d-\alpha/2}}\,\frac{(4r)^{\alpha/2}}{\ell((4r)^{-2})}\,dy
\le c_9\,\kappa^{\alpha/2}\, r^{\alpha}\,\frac{1}{\ell((4r)^{-2})}\, K_\Omega(x,A)
\]
for some constant c9 > 0. Now using (6.31) again, we get
\[
I_1 \le c_{10}\,\kappa^{\alpha/2-d}\, r^{\alpha-d}\,\frac{1}{\ell((4r)^{-2})} \int_{B(A,\frac{\kappa r}{4})} K_\Omega(x,z)\,dz
\]
for some constant c10 > 0. On the other hand, by (6.13),
\[
I_2 = \int_{\{\frac{\kappa r}{4}\le |y-A|\le \frac{\kappa r}{2}\}} G_D(y,A)\, P_x(X_{\tau_\Omega} \in dy)
\le c_{11} \int_{\{\frac{\kappa r}{4}\le |y-A|\le \frac{\kappa r}{2}\}} \frac{1}{|y-A|^{d-\alpha}}\,\frac{1}{\ell(|y-A|^{-2})}\, P_x(X_{\tau_\Omega} \in dy)
\]
for some constant c11 > 0. Using (6.32), the right-hand side is less than or equal to
\[
c_{12}\,\kappa^{\alpha/2-d}\, r^{\alpha-d}\,\frac{1}{\ell((4r)^{-2})}\, P_x\Big(X_{\tau_\Omega} \in \{\tfrac{\kappa r}{4} \le |y-A| \le \tfrac{\kappa r}{2}\}\Big)
\]
for some constant c12 > 0. Therefore
\[
G_D(x,A) \le c_{13}\,\kappa^{\alpha/2-d}\, r^{\alpha-d}\,\frac{1}{\ell((4r)^{-2})}\, P_x\big(X_{\tau_\Omega} \in B(A,\tfrac{\kappa r}{2})\big)
\]
for some constant c13 > 0. Combining the above with (6.30), we get
\[
\int_D G_D(x,y)\,dy \le c_{14}\, r^{\alpha}\,\kappa^{-d-\alpha/2}\,\frac{1}{\ell((4r)^{-2})}
\left(1 + \frac{\ell\big((\frac{3\kappa r}{4})^{-2}\big)}{\ell((4r)^{-2})}\right)
P_x\big(X_{\tau_{D\setminus B(A,\frac{\kappa r}{2})}} \in B(A,\tfrac{\kappa r}{2})\big)
\]
for some constant c14 > 0. It follows immediately that
\[
\int_D G_D(x,y)\,dy \le c_{14}\, r^{\alpha}\,\kappa^{-d-\alpha/2}\,\frac{1}{\ell((4r)^{-2})}
\left(1 + \frac{\ell\big((\frac{3\kappa r}{4})^{-2}\big)}{\ell((4r)^{-2})}\right)
P_x\big(X_{\tau_{D\setminus B(A,\kappa r)}} \in B(A,\kappa r)\big). \qquad \Box
\]
Combining Lemmas 6.22 and 6.23 and using translation invariance, we have the following.

Lemma 6.24 There exists C > 0 such that for any open set D with B(A, κr) ⊂ D ⊂ B(Q, r) for some r ∈ (0, r4) and κ ∈ (0, 1), we have that for every x ∈ D ∩ B(Q, r/2),
\[
P_x\big(X_{\tau_D} \in B(Q,r)^c\big) \le C\,\kappa^{-d-\alpha/2}\,\frac{\ell(r^{-2})}{\ell((4r)^{-2})}
\left(1 + \frac{\ell\big((\frac{3\kappa r}{4})^{-2}\big)}{\ell((4r)^{-2})}\right)
P_x\big(X_{\tau_{D\setminus B(A,\kappa r)}} \in B(A,\kappa r)\big).
\]
Let A(x, a, b) := {y ∈ Rd : a ≤ |y − x| < b}.
Lemma 6.25 Let D be an open set and 0 < 2r < r4. For any positive function u vanishing on Dc, there is a σ ∈ (10r/6, 11r/6) such that for any M ∈ (1,∞), Q ∈ Rd and x ∈ D ∩ B(Q, 3r/2),
\[
E_x\big[u(X_{\tau_{D\cap B(Q,\sigma)}});\ X_{\tau_{D\cap B(Q,\sigma)}} \in A(Q,\sigma,M)\big]
\le C\,\frac{r^{\alpha}}{\ell((2r)^{-2})}\int_{A(Q,\frac{10r}{6},M)} J(y)\,u(y)\,dy
\]
for some constant C = C(M) > 0.
Proof. Without loss of generality, we may assume that Q = 0. Note that by (6.22),
\[
\begin{aligned}
&\int_{\frac{10}{6}r}^{\frac{11}{6}r}\int_{A(0,\sigma,2r)} \ell((|y|-\sigma)^{-2})^{1/2}\,(|y|-\sigma)^{-\alpha/2}\,u(y)\,dy\,d\sigma\\
&= \int_{A(0,\frac{10}{6}r,2r)}\int_{\frac{10}{6}r}^{|y|\wedge\frac{11}{6}r} \ell((|y|-\sigma)^{-2})^{1/2}\,(|y|-\sigma)^{-\alpha/2}\,d\sigma\,u(y)\,dy\\
&\le c_1 \int_{A(0,\frac{10}{6}r,2r)} \Big(\int_0^{|y|-\frac{10}{6}r} \ell(s^{-2})^{1/2}\, s^{-\alpha/2}\,ds\Big)\,u(y)\,dy\\
&\le c_2 \int_{A(0,\frac{10r}{6},2r)} \ell\big((|y|-\tfrac{10r}{6})^{-2}\big)^{1/2}\,(|y|-\tfrac{10r}{6})^{1-\alpha/2}\,u(y)\,dy
\end{aligned}
\]
for some positive constants c1 and c2. Using (6.20), we get that there is a constant c3 > 0 such that
\[
\int_{A(0,\frac{10r}{6},2r)} \ell\big((|y|-\tfrac{10r}{6})^{-2}\big)^{1/2}\,(|y|-\tfrac{10r}{6})^{1-\alpha/2}\,u(y)\,dy
\le c_3 \int_{A(0,\frac{10r}{6},2r)} \ell(|y|^{-2})^{1/2}\,|y|^{1-\alpha/2}\,u(y)\,dy,
\]
which is less than or equal to
\[
\frac{c_4\, r^{1-\alpha/2}}{\ell((2r)^{-2})^{1/2}} \int_{A(0,\frac{10r}{6},2r)} \ell(|y|^{-2})\,u(y)\,dy
\]
for some constant c4 > 0, by (6.19). Thus, since the interval (10r/6, 11r/6) has length r/6, we can conclude that there is a σ ∈ (10r/6, 11r/6) and a constant c5 > 0 such that
\[
\int_{A(0,\sigma,2r)} \ell((|y|-\sigma)^{-2})^{1/2}\,(|y|-\sigma)^{-\frac{\alpha}{2}}\,u(y)\,dy
\le \frac{c_5\, r^{-\alpha/2}}{\ell((2r)^{-2})^{1/2}} \int_{A(0,\frac{10r}{6},2r)} \ell(|y|^{-2})\,u(y)\,dy. \tag{6.33}
\]
Let x ∈ D ∩ B(0, 3r/2). Note that, since X satisfies the hypothesis H in [72], by Theorem 1 in [72] we have
\[
\begin{aligned}
&E_x\big[u(X_{\tau_{D\cap B(0,\sigma)}});\ X_{\tau_{D\cap B(0,\sigma)}} \in A(0,\sigma,M)\big]\\
&= E_x\big[u(X_{\tau_{D\cap B(0,\sigma)}});\ X_{\tau_{D\cap B(0,\sigma)}} \in A(0,\sigma,M),\ \tau_{D\cap B(0,\sigma)} = \tau_{B(0,\sigma)}\big]\\
&= E_x\big[u(X_{\tau_{B(0,\sigma)}});\ X_{\tau_{B(0,\sigma)}} \in A(0,\sigma,M),\ \tau_{D\cap B(0,\sigma)} = \tau_{B(0,\sigma)}\big]\\
&\le E_x\big[u(X_{\tau_{B(0,\sigma)}});\ X_{\tau_{B(0,\sigma)}} \in A(0,\sigma,M)\big]
= \int_{A(0,\sigma,M)} K_{B(0,\sigma)}(x,y)\,u(y)\,dy.
\end{aligned}
\]
In the first equality above we have used the fact that u vanishes on Dc. Since σ < 2r < r4, from (6.16) in Proposition 6.17, Proposition 6.21 and Lemma 6.19 we have
\[
\begin{aligned}
E_x\big[u(X_{\tau_{D\cap B(0,\sigma)}});\ X_{\tau_{D\cap B(0,\sigma)}} \in A(0,\sigma,M)\big]
&\le \int_{A(0,\sigma,M)} K_{B(0,\sigma)}(x,y)\,u(y)\,dy\\
&\le c_6 \int_{A(0,\sigma,2r)} \frac{\sigma^{\alpha/2-d}}{(\ell(\sigma^{-2}))^{1/2}}\,\frac{(\ell((|y|-\sigma)^{-2}))^{1/2}}{(|y|-\sigma)^{\alpha/2}}\,u(y)\,dy\\
&\quad + c_6 \int_{A(0,2r,M)} j(|y|-\sigma)\,\frac{\sigma^{\alpha/2}}{(\ell(\sigma^{-2}))^{1/2}}\,\frac{(\sigma-|x|)^{\alpha/2}}{(\ell((\sigma-|x|)^{-2}))^{1/2}}\,u(y)\,dy
\end{aligned}
\]
for some constant c6 > 0. For y ∈ A(0, 2r, M), we have |y|/12 ≤ |y| − σ and σ − |x| ≤ σ ≤ 2r. Thus, by (6.18) and the monotonicity of j, we have
\[
j(|y|-\sigma)\,\frac{\sigma^{\alpha/2}}{(\ell(\sigma^{-2}))^{1/2}}\,\frac{(\sigma-|x|)^{\alpha/2}}{(\ell((\sigma-|x|)^{-2}))^{1/2}}
\le c_7\, j\Big(\frac{|y|}{12}\Big)\,\frac{r^{\alpha}}{\ell((2r)^{-2})}
\]
for some constant c7 > 0. Thus, by (6.12), we get
\[
j(|y|-\sigma)\,\frac{\sigma^{\alpha/2}}{(\ell(\sigma^{-2}))^{1/2}}\,\frac{(\sigma-|x|)^{\alpha/2}}{(\ell((\sigma-|x|)^{-2}))^{1/2}}
\le c_8\, j(|y|)\,\frac{r^{\alpha}}{\ell((2r)^{-2})}
\]
for some constant c8 = c8(M) > 0. On the other hand, by (6.18) and (6.33), there exist positive constants c9 and c10 such that
\[
\begin{aligned}
&\int_{A(0,\sigma,2r)} \frac{\sigma^{\alpha/2-d}}{(\ell(\sigma^{-2}))^{1/2}}\,\frac{(\ell((|y|-\sigma)^{-2}))^{1/2}}{(|y|-\sigma)^{\alpha/2}}\,u(y)\,dy\\
&\le \Big(\frac{10r}{6}\Big)^{-d}\,\frac{\sigma^{\alpha/2}}{(\ell(\sigma^{-2}))^{1/2}} \int_{A(0,\sigma,2r)} \frac{(\ell((|y|-\sigma)^{-2}))^{1/2}}{(|y|-\sigma)^{\alpha/2}}\,u(y)\,dy\\
&\le c_9\, r^{-d}\,\frac{(2r)^{\alpha/2}}{(\ell((2r)^{-2}))^{1/2}}\,\frac{r^{-\alpha/2}}{(\ell((2r)^{-2}))^{1/2}} \int_{A(0,\frac{10r}{6},2r)} \ell(|y|^{-2})\,u(y)\,dy\\
&\le \frac{c_{10}\, r^{\alpha}}{\ell((2r)^{-2})} \int_{A(0,\frac{10r}{6},2r)} \ell(|y|^{-2})\,|y|^{-d-\alpha}\,u(y)\,dy,
\end{aligned}
\]
which is less than or equal to
\[
\frac{c_{11}\, r^{\alpha}}{\ell((2r)^{-2})} \int_{A(0,\frac{10r}{6},2r)} J(y)\,u(y)\,dy
\]
for some constant c11 > 0, by Lemma 6.19. Hence
\[
E_x\big[u(X_{\tau_{D\cap B(0,\sigma)}});\ X_{\tau_{D\cap B(0,\sigma)}} \in A(0,\sigma,M)\big]
\le \frac{c_{12}\, r^{\alpha}}{\ell((2r)^{-2})} \int_{A(0,\frac{10r}{6},M)} J(y)\,u(y)\,dy
\]
for some constant c12 = c12(M) > 0. □
Lemma 6.26 Let D be an open set. Assume that B(A, κr) ⊂ D ∩ B(Q, r) for some 0 < 2r < r4 and κ ∈ (0, 1/2]. Suppose that u ≥ 0 is regular harmonic in D ∩ B(Q, 2r) with respect to X and u = 0 in (Dc ∩ B(Q, 2r)) ∪ B(Q, M)^c. If w is a regular harmonic function with respect to X in D ∩ B(Q, r) such that
\[
w(x) =
\begin{cases}
u(x), & x \in B(Q,\tfrac{3r}{2})^c \cup (D^c \cap B(Q,r)),\\
0, & x \in A(Q, r, \tfrac{3r}{2}),
\end{cases}
\]
then
\[
u(A) \ge w(A) \ge C\,\kappa^{\alpha}\,\frac{\ell((2r)^{-2})}{\ell((\kappa r)^{-2})}\, u(x), \qquad x \in D \cap B(Q,\tfrac{3}{2}r),
\]
for some constant C = C(M) > 0.
Proof. Without loss of generality, we may assume Q = 0 and x ∈ D ∩ B(0, 3r/2). The left-hand inequality in the conclusion of the lemma is obvious, so we only need to prove the right-hand inequality. Since u is regular harmonic in D ∩ B(0, 2r) with respect to X and u = 0 on B(0, M)^c, we know from Lemma 6.25 that there exists σ ∈ (10r/6, 11r/6) such that
\[
u(x) = E_x\big[u(X_{\tau_{D\cap B(0,\sigma)}});\ X_{\tau_{D\cap B(0,\sigma)}} \in A(0,\sigma,M)\big]
\le \frac{c_1\, r^{\alpha}}{\ell((2r)^{-2})}\int_{A(0,\frac{10r}{6},M)} J(y)\,u(y)\,dy
\]
for some constant c1 = c1(M) > 0. On the other hand, by (6.17) in Proposition 6.17, we have that
\[
w(A) = \int_{A(0,\frac{3r}{2},M)} K_{D\cap B(0,r)}(A,y)\,u(y)\,dy
\ge \int_{A(0,\frac{3r}{2},M)} K_{B(A,\kappa r)}(A,y)\,u(y)\,dy
\ge c_2 \int_{A(0,\frac{3r}{2},M)} J(2(A-y))\,\frac{(\kappa r)^{\alpha}}{\ell((\kappa r)^{-2})}\,u(y)\,dy
\]
for some constant c2 > 0. Note that |y − A| ≤ 2|y| on A(0, 3r/2, M). Hence, by the monotonicity of j and (6.12),
\[
w(A) \ge \frac{c_2\,(\kappa r)^{\alpha}}{\ell((\kappa r)^{-2})}\int_{A(0,\frac{3r}{2},M)} j(4|y|)\,u(y)\,dy
\ge \frac{c_3\,(\kappa r)^{\alpha}}{\ell((\kappa r)^{-2})}\int_{A(0,\frac{3r}{2},M)} J(y)\,u(y)\,dy
\]
for some constant c3 = c3(M) > 0. Therefore
\[
w(A) \ge c_4\,\kappa^{\alpha}\,\frac{\ell((2r)^{-2})}{\ell((\kappa r)^{-2})}\, u(x)
\]
for some constant c4 = c4(M) > 0. □
We recall the definition of κ-fat set from [71].
Definition 6.2 Let κ ∈ (0, 1/2]. We say that an open set D in Rd is κ-fat if there exists
R > 0 such that for each Q ∈ ∂D and r ∈ (0, R), D ∩B(Q, r) contains a ball B(Ar(Q), κr).
The pair (R, κ) is called the characteristics of the κ-fat open set D.
Note that all Lipschitz domains and all non-tangentially accessible domains (see [40] for the definition) are κ-fat. Moreover, every John domain is κ-fat (see Lemma 6.3 in [49]). The boundary of a κ-fat open set can be highly nonrectifiable and, in general, no regularity of its boundary can be inferred. A bounded κ-fat open set may be disconnected.
The next theorem is a boundary Harnack principle for bounded κ-fat open sets, and it is the main result of this section. A word of caution is in order here. The boundary Harnack principle below is a little different from the ones proved in [14] and [71], in the sense that here we require our harmonic functions to vanish on the whole complement of the open set. However, this will not affect our applications later, since we are mainly interested in the case when the harmonic functions are given by Green functions.
Theorem 6.27 Suppose that D is a bounded κ-fat open set with the characteristics (R, κ). There exists a constant r5 := r5(D, α, ℓ) ≤ r4 ∧ R such that if 2r ≤ r5 and Q ∈ ∂D, then for any nonnegative functions u, v on Rd which are regular harmonic in D ∩ B(Q, 2r) with respect to X and vanish in Dc, we have
\[
C^{-1}\,\frac{u(A_r(Q))}{v(A_r(Q))} \le \frac{u(x)}{v(x)} \le C\,\frac{u(A_r(Q))}{v(A_r(Q))}, \qquad x \in D \cap B\big(Q,\tfrac{r}{2}\big),
\]
for some constant C = C(D) > 1.
Proof. Since ℓ is slowly varying at ∞, there exists a constant r5 := r5(D, α, ℓ) ≤ r4 ∧ R such that for every 2r ≤ r5,
\[
\max\left(\frac{\ell(r^{-2})}{\ell((\kappa r)^{-2})},\ \frac{\ell((2r)^{-2})}{\ell((4r)^{-2})},\ \frac{\ell\big((\frac{3\kappa r}{4})^{-2}\big)}{\ell((4r)^{-2})},\ \frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\right) \le 2. \tag{6.34}
\]
Fix 2r ≤ r5 throughout this proof. Without loss of generality we may assume that Q = 0 and u(Ar(0)) = v(Ar(0)). For simplicity, we will write Ar(0) as A in the remainder of this proof. Define u1 and u2 to be the regular harmonic functions in D ∩ B(0, r) with respect to X such that
\[
u_1(x) =
\begin{cases}
u(x), & r \le |x| < \tfrac{3r}{2},\\
0, & x \in B(0,\tfrac{3r}{2})^c \cup (D^c \cap B(0,r)),
\end{cases}
\]
and
\[
u_2(x) =
\begin{cases}
u(x), & x \in B(0,\tfrac{3r}{2})^c \cup (D^c \cap B(0,r)),\\
0, & r \le |x| < \tfrac{3r}{2},
\end{cases}
\]
and note that u = u1 + u2. If D ∩ {r ≤ |y| < 3r/2} is empty, then u1 = 0 and the inequality (6.38) below holds trivially. So we assume D ∩ {r ≤ |y| < 3r/2} is not empty and let M := diam(D). Then, by Lemma 6.26,
\[
u(y) \le c_1\,\kappa^{-\alpha}\,\frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\, u(A), \qquad y \in D \cap B(0,\tfrac{3r}{2}),
\]
for some constant c1 = c1(M) > 0. For x ∈ D ∩ B(0, r/2), we have
\[
\begin{aligned}
u_1(x) &= E_x\big[u(X_{\tau_{D\cap B(0,r)}});\ X_{\tau_{D\cap B(0,r)}} \in D \cap \{r \le |y| < \tfrac{3r}{2}\}\big]\\
&\le \Big(\sup_{D\cap\{r\le|y|<\frac{3r}{2}\}} u(y)\Big)\, P_x\big(X_{\tau_{D\cap B(0,r)}} \in D \cap \{r \le |y| < \tfrac{3r}{2}\}\big)\\
&\le \Big(\sup_{D\cap\{r\le|y|<\frac{3r}{2}\}} u(y)\Big)\, P_x\big(X_{\tau_{D\cap B(0,r)}} \in B(0,r)^c\big)\\
&\le c_1\,\kappa^{-\alpha}\,\frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\, u(A)\, P_x\big(X_{\tau_{D\cap B(0,r)}} \in B(0,r)^c\big).
\end{aligned}
\]
Now using Lemma 6.24 and (6.34) we have that for x ∈ D ∩ B(0, r/2),
\[
\begin{aligned}
u_1(x) &\le c_2\,\kappa^{-d-\frac{3}{2}\alpha}\,\frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\,\frac{\ell(r^{-2})}{\ell((4r)^{-2})}
\left(1 + \frac{\ell\big((\frac{3\kappa r}{4})^{-2}\big)}{\ell((4r)^{-2})}\right)
u(A)\, P_x\big(X_{\tau_{(D\cap B(0,r))\setminus B(A,\frac{\kappa r}{2})}} \in B(A,\tfrac{\kappa r}{2})\big) \qquad (6.35)\\
&\le c_3\, u(A)\, P_x\big(X_{\tau_{(D\cap B(0,r))\setminus B(A,\frac{\kappa r}{2})}} \in B(A,\tfrac{\kappa r}{2})\big) \qquad (6.36)
\end{aligned}
\]
for some positive constants c2 = c2(M) and c3 = c3(M, κ). Since 2r < r4, Theorem 6.13 implies that
\[
u(y) \ge c_4\, u(A), \qquad y \in B(A,\tfrac{\kappa r}{2}),
\]
for some constant c4 > 0. Therefore, for x ∈ D ∩ B(0, r/2),
\[
u(x) = E_x\big[u(X_{\tau_{(D\cap B(0,r))\setminus B(A,\frac{\kappa r}{2})}})\big]
\ge c_4\, u(A)\, P_x\big(X_{\tau_{(D\cap B(0,r))\setminus B(A,\frac{\kappa r}{2})}} \in B(A,\tfrac{\kappa r}{2})\big). \tag{6.37}
\]
Using (6.36), the analogue of (6.37) for v and the assumption that u(A) = v(A), we get that for x ∈ D ∩ B(0, r/2),
\[
u_1(x) \le c_3\, v(A)\, P_x\big(X_{\tau_{(D\cap B(0,r))\setminus B(A,\frac{\kappa r}{2})}} \in B(A,\tfrac{\kappa r}{2})\big) \le c_5\, v(x) \tag{6.38}
\]
for some constant c5 = c5(M, κ) > 0. Since u = 0 on B(0, M)^c, we have that for x ∈ D ∩ B(0, r),
\[
u_2(x) = \int_{A(0,\frac{3r}{2},M)} K_{D\cap B(0,r)}(x,z)\,u(z)\,dz
= \int_{A(0,\frac{3r}{2},M)}\int_{D\cap B(0,r)} G_{D\cap B(0,r)}(x,y)\,J(y-z)\,dy\,u(z)\,dz.
\]
Let
\[
s(x) := \int_{D\cap B(0,r)} G_{D\cap B(0,r)}(x,y)\,dy.
\]
Note that for every y ∈ B(0, r) and z ∈ B(0, 3r/2)^c,
\[
\tfrac{1}{3}|z| \le |z| - r \le |z| - |y| \le |y-z| \le |y| + |z| \le r + |z| \le 2|z|.
\]
So, by the monotonicity of j, for every y ∈ B(0, r) and z ∈ B(0, 3r/2)^c,
\[
j(12|z|) \le j(2|z|) \le J(y-z) \le j(\tfrac{1}{3}|z|) \le j(\tfrac{1}{12}|z|).
\]
Using (6.12), we have that, for every y ∈ B(0, r) and z ∈ A(0, 3r/2, M),
\[
c_6^{-1}\, j(|z|) \le J(y-z) \le c_6\, j(|z|)
\]
for some constant c6 = c6(M) > 0. Thus we have
\[
c_7^{-1} \le \frac{u_2(x)}{u_2(A)} \Big/ \frac{s(x)}{s(A)} \le c_7 \tag{6.39}
\]
for some constant c7 = c7(M) > 1. Applying (6.39) to u and v, and Lemma 6.26 to v and v2, we obtain for x ∈ D ∩ B(0, r/2),
\[
u_2(x) \le c_7\, u_2(A)\,\frac{s(x)}{s(A)} \le c_7^2\,\frac{u_2(A)}{v_2(A)}\,v_2(x)
\le c_8\,\kappa^{-\alpha}\,\frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\,\frac{u(A)}{v(A)}\,v_2(x)
= c_8\,\kappa^{-\alpha}\,\frac{\ell((\kappa r)^{-2})}{\ell((2r)^{-2})}\,v_2(x), \tag{6.40}
\]
for some constant c8 = c8(M) > 0. Combining (6.38) and (6.40) and applying (6.34), we have
\[
u(x) \le c_9\, v(x), \qquad x \in D \cap B(0,\tfrac{r}{2}),
\]
for some constant c9 = c9(M, κ) > 0. □
6.4 Martin Boundary and Martin Representation
In this section we will always assume that D is a bounded κ-fat open set in Rd with the
characteristics (R, κ). We are going to apply Theorem 6.27 to study the Martin boundary
of D with respect to X.
We recall from Definition 6.2 that for each Q ∈ ∂D and r ∈ (0, R), Ar(Q) is a point in
D∩B(Q, r) satisfying B(Ar(Q), κr) ⊂ D∩B(Q, r). From Theorem 6.27, we get the following
boundary Harnack principle for the Green function of X which will play an important role
in this section. Recall that r5 is the constant defined in Theorem 6.27. Without loss of
generality, we will assume r5 < R.
Theorem 6.28 There exists a constant c = c(D, α, ℓ) > 1 such that for any Q ∈ ∂D, r ∈ (0, r5) and z, w ∈ D \ B(Q, 2r), we have
\[
c^{-1}\,\frac{G_D(z, A_r(Q))}{G_D(w, A_r(Q))} \le \frac{G_D(z,x)}{G_D(w,x)} \le c\,\frac{G_D(z, A_r(Q))}{G_D(w, A_r(Q))}, \qquad x \in D \cap B\big(Q,\tfrac{r}{2}\big).
\]
Since ℓ is slowly varying at ∞, there exist a constant r6 := r6(κ, ℓ) ≤ r5 and c > 0 such that for every 2r ≤ r6,
\[
\frac{1}{c} \le \min\left(\frac{\ell\big(\frac{\kappa^2}{64} r^{-2}\big)}{\ell(r^{-2})},\ \frac{\ell\big(\frac{4}{\kappa^2} r^{-2}\big)}{\ell(r^{-2})}\right)
\le \max\left(\frac{\ell\big(\frac{\kappa^2}{64} r^{-2}\big)}{\ell(r^{-2})},\ \frac{\ell\big(\frac{4}{\kappa^2} r^{-2}\big)}{\ell(r^{-2})}\right) \le c < \infty. \tag{6.41}
\]
Lemma 6.29 There exist positive constants c = c(D, α) and γ = γ(D, α) < α such that for any Q ∈ ∂D and r ∈ (0, r6), and any nonnegative function u which is harmonic with respect to X in D ∩ B(Q, r), we have
\[
u(A_r(Q)) \le c\,\Big(\frac{2}{\kappa}\Big)^{\gamma k}\,\frac{\ell\big((\kappa/2)^{-2k} r^{-2}\big)}{\ell(r^{-2})}\, u(A_{(\kappa/2)^k r}(Q)), \qquad k = 0, 1, \cdots. \tag{6.42}
\]
Proof. Without loss of generality, we may assume Q = 0. Fix r < r6 and let
\[
\eta_k := \Big(\frac{\kappa}{2}\Big)^k r, \qquad A_k := A_{\eta_k}(0), \qquad B_k := B(A_k, \eta_{k+1}), \qquad k = 0, 1, \cdots.
\]
Note that the Bk's are disjoint. So, by the harmonicity of u, we have
\[
u(A_k) \ge \sum_{l=0}^{k-1} E_{A_k}\big[u(X_{\tau_{B_k}});\ X_{\tau_{B_k}} \in B_l\big] = \sum_{l=0}^{k-1} \int_{B_l} K_{B_k}(A_k, z)\,u(z)\,dz.
\]
Theorem 6.13 implies that
\[
\int_{B_l} K_{B_k}(A_k,z)\,u(z)\,dz \ge c_0\, u(A_l) \int_{B_l} K_{B_k}(A_k,z)\,dz
\]
for some constant c0 = c0(d, α) > 0. Since dist(Ak, Bl) ≤ 2ηl, by (6.17) in Proposition 6.17 and the monotonicity of j we have
\[
K_{B_k}(A_k,z) \ge c\, J(2(A_k - z))\,\frac{\eta_{k+1}^{\alpha}}{\ell(\eta_{k+1}^{-2})} \ge c\, j(4\eta_l)\,\frac{\eta_{k+1}^{\alpha}}{\ell(\eta_{k+1}^{-2})}, \qquad z \in B_l.
\]
Applying Lemma 6.19 and (6.41), we get
\[
K_{B_k}(A_k,z) \ge c_1\,\frac{\eta_{k+1}^{\alpha}}{(4\eta_l)^{d+\alpha}}\,\frac{\ell((4\eta_l)^{-2})}{\ell(\eta_{l+1}^{-2})}\,\frac{\ell(\eta_{l+1}^{-2})}{\ell(\eta_{k+1}^{-2})}
\ge c_2\,\frac{\eta_{k+1}^{\alpha}}{\eta_{l+1}^{d+\alpha}}\,\frac{\ell(\eta_{l+1}^{-2})}{\ell(\eta_{k+1}^{-2})}, \qquad z \in B_l,
\]
for some constants c1 = c1(d, α, ℓ) > 0 and c2 = c2(d, α, ℓ) > 0. Thus we have
\[
\int_{B_l} K_{B_k}(A_k,z)\,dz \ge c_3\,\frac{\eta_{k+1}^{\alpha}}{\eta_{l+1}^{\alpha}}\,\frac{\ell(\eta_{l+1}^{-2})}{\ell(\eta_{k+1}^{-2})}
\]
for some constant c3 = c3(d, α, ℓ) > 0. Therefore,
\[
\eta_k^{-\alpha}\, u(A_k)\,\ell(\eta_{k+1}^{-2}) \ge c_4 \sum_{l=0}^{k-1} \eta_l^{-\alpha}\, u(A_l)\,\ell(\eta_{l+1}^{-2})
\]
for some constant c4 = c4(d, α, κ, ℓ) > 0. Let \(a_k := \eta_k^{-\alpha}\, u(A_k)\,\ell(\eta_{k+1}^{-2})\), so that \(a_k \ge c_4 \sum_{l=0}^{k-1} a_l\). By induction, one can easily check that \(a_k \ge c_5 (1 + c_4/2)^k a_0\) for some constant c5 = c5(d, α) > 0. Thus, with \(\gamma = \alpha - \ln(1 + \tfrac{c_4}{2})\,(\ln(2/\kappa))^{-1}\), we get
\[
u(A_r(Q)) \le c\,\Big(\frac{2}{\kappa}\Big)^{\gamma k}\,\frac{\ell\big((\kappa/2)^{-2(k+1)} r^{-2}\big)}{\ell\big((\kappa/2)^{-2} r^{-2}\big)}\, u(A_{(\kappa/2)^k r}(Q)).
\]
Applying (6.41), we conclude that (6.42) is true. □
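The induction mentioned in the proof can be verified as follows (a sketch added here; it in fact yields the stronger rate \((1+c_4)^k\)):

```latex
% Let s_k := \sum_{l=0}^{k-1} a_l.  The inequality a_k \ge c_4 s_k gives
s_{k+1} = s_k + a_k \ge (1 + c_4)\, s_k,
\quad\text{so}\quad
s_k \ge (1 + c_4)^{k-1} s_1 = (1 + c_4)^{k-1} a_0 \quad (k \ge 1),
% and therefore, with c_5 := c_4/(1+c_4) < 1,
a_k \ge c_4 (1 + c_4)^{k-1} a_0 = c_5\,(1 + c_4)^{k} a_0 \ge c_5\,(1 + c_4/2)^{k}\, a_0.
```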
Lemma 6.30 Suppose Q ∈ ∂D and r ∈ (0, r5). If w ∈ D \ B(Q, r), then
\[
G_D(A_r(Q), w) \ge \frac{c\,\kappa^{\alpha} r^{\alpha}}{\ell((\kappa r/2)^{-2})} \int_{B(Q,r)^c} J\big(\tfrac{1}{2}(z-Q)\big)\, G_D(z,w)\,dz
\]
for some constant c = c(D, α, ℓ) > 0.
Proof. Without loss of generality, we may assume Q = 0. Fix w ∈ D \ B(0, r) and let A := Ar(0) and u(·) := GD(·, w). Since u is regular harmonic in D ∩ B(0, (1 − κ/2)r) with respect to X, we have
\[
\begin{aligned}
u(A) &\ge E_A\big[u(X_{\tau_{D\cap B(0,(1-\kappa/2)r)}});\ X_{\tau_{D\cap B(0,(1-\kappa/2)r)}} \in B(0,r)^c\big]\\
&= \int_{B(0,r)^c} K_{D\cap B(0,(1-\kappa/2)r)}(A,z)\,u(z)\,dz\\
&= \int_{B(0,r)^c}\int_{D\cap B(0,(1-\kappa/2)r)} G_{D\cap B(0,(1-\kappa/2)r)}(A,y)\,J(y-z)\,dy\,u(z)\,dz.
\end{aligned}
\]
Since B(A, κr/2) ⊂ D ∩ B(0, (1 − κ/2)r), by the monotonicity of the Green functions,
\[
G_{D\cap B(0,(1-\kappa/2)r)}(A,y) \ge G_{B(A,\kappa r/2)}(A,y), \qquad y \in B(A, \kappa r/2).
\]
Thus
\[
u(A) \ge \int_{B(0,r)^c}\int_{B(A,\kappa r/2)} G_{B(A,\kappa r/2)}(A,y)\,J(y-z)\,dy\,u(z)\,dz
= \int_{B(0,r)^c} K_{B(A,\kappa r/2)}(A,z)\,u(z)\,dz,
\]
which is greater than or equal to
\[
c_1 \int_{B(0,r)^c} J(2(z-A))\,\frac{(\kappa r/2)^{\alpha}}{\ell((\kappa r/2)^{-2})}\,u(z)\,dz
\]
for some positive constant c1 = c1(d, α, ℓ), by (6.17) in Proposition 6.17. Note that |z − A| ≤ 2|z| for z ∈ B(0, r)^c. Let M := diam(D). Hence
\[
u(A) \ge \frac{c_2\,\kappa^{\alpha} r^{\alpha}}{\ell((\kappa r/2)^{-2})} \int_{A(0,r,M)} u(z)\,J(4z)\,dz
\ge \frac{c_3\,\kappa^{\alpha} r^{\alpha}}{\ell((\kappa r/2)^{-2})} \int_{A(0,r,M)} u(z)\,J\big(\tfrac{1}{2}z\big)\,dz \tag{6.43}
\]
for some constant c3 = c3(d, α, ℓ, M) > 0. We have used (6.12) in the last inequality above. □
Lemma 6.31 There exist positive constants c1 = c1(D, α, ℓ) and c2 = c2(D, α, ℓ) < 1 such that for any Q ∈ ∂D, r ∈ (0, r6) and w ∈ D \ B(Q, 2r/κ), we have
\[
E_x\big[G_D(X_{\tau_{D\cap B_k}}, w);\ X_{\tau_{D\cap B_k}} \in B(Q,r)^c\big] \le c_1\, c_2^k\, G_D(x,w), \qquad x \in D \cap B_k,
\]
where B_k := B(Q, (κ/2)^k r), k = 0, 1, · · ·.

Proof. Without loss of generality, we may assume Q = 0. Fix r < r6 and w ∈ D \ B(0, 4r). Let ηk := (κ/2)^k r, Bk := B(0, ηk) and
\[
u_k(x) := E_x\big[G_D(X_{\tau_{D\cap B_k}}, w);\ X_{\tau_{D\cap B_k}} \in B(0,r)^c\big], \qquad x \in D \cap B_k.
\]
Note that for x ∈ D ∩ Bk+1,
\[
\begin{aligned}
u_{k+1}(x) &= E_x\big[G_D(X_{\tau_{D\cap B_{k+1}}}, w);\ X_{\tau_{D\cap B_{k+1}}} \in B(0,r)^c\big]\\
&= E_x\big[G_D(X_{\tau_{D\cap B_{k+1}}}, w);\ \tau_{D\cap B_{k+1}} = \tau_{D\cap B_k},\ X_{\tau_{D\cap B_{k+1}}} \in B(0,r)^c\big]\\
&= E_x\big[G_D(X_{\tau_{D\cap B_k}}, w);\ \tau_{D\cap B_{k+1}} = \tau_{D\cap B_k},\ X_{\tau_{D\cap B_k}} \in B(0,r)^c\big]\\
&\le E_x\big[G_D(X_{\tau_{D\cap B_k}}, w);\ X_{\tau_{D\cap B_k}} \in B(0,r)^c\big].
\end{aligned}
\]
Thus
\[
u_{k+1}(x) \le u_k(x), \qquad x \in D \cap B_{k+1}. \tag{6.44}
\]
Let Ak := A_{ηk}(0) and M := diam(D). Since GD(·, w) is zero on Dc, we have
\[
\begin{aligned}
u_k(A_k) &= E_{A_k}\big[G_D(X_{\tau_{D\cap B_k}}, w);\ X_{\tau_{D\cap B_k}} \in A(0,r,M)\big]\\
&\le E_{A_k}\big[G_D(X_{\tau_{B_k}}, w);\ X_{\tau_{B_k}} \in A(0,r,M)\big]
\le \int_{A(0,r,M)} K_{B_k}(A_k,z)\, G_D(z,w)\,dz.
\end{aligned}
\]
Since r < r4, by (6.16) in Proposition 6.17, we get that for z ∈ A(0, r, M),
\[
K_{B_k}(A_k,z) \le c_1\, j(|z|-\eta_k)\,\frac{\eta_k^{\alpha/2}}{(\ell(\eta_k^{-2}))^{1/2}}\,\frac{(\eta_k-|A_k|)^{\alpha/2}}{(\ell((\eta_k-|A_k|)^{-2}))^{1/2}}
\]
for some constant c1 = c1(D, α) > 0 and k = 1, 2, . . . . Since ηk − |Ak| ≤ ηk ≤ r6, from (6.18) we see that
\[
\frac{(\eta_k-|A_k|)^{\alpha/2}}{(\ell((\eta_k-|A_k|)^{-2}))^{1/2}} \le c\,\frac{\eta_k^{\alpha/2}}{(\ell(\eta_k^{-2}))^{1/2}}.
\]
Thus
\[
K_{B_k}(A_k,z) \le c_2\, j(|z|-\eta_k)\,\frac{\eta_k^{\alpha}}{\ell(\eta_k^{-2})}
\]
for some constant c2 = c2(D, α, ℓ) > 0 and k = 1, 2, . . . . Therefore, by the monotonicity of j,
\[
u_k(A_k) \le \frac{c_2\,\eta_k^{\alpha}}{\ell(\eta_k^{-2})} \int_{A(0,r,M)} J\big(\tfrac{1}{2}z\big)\, G_D(z,w)\,dz, \qquad k = 1, 2, . . . . \tag{6.45}
\]
From Lemma 6.30, we have
\[
G_D(A_0, w) \ge \frac{c_3\,\kappa^{\alpha} r^{\alpha}}{\ell((\kappa r/2)^{-2})} \int_{A(0,r,M)} J\big(\tfrac{1}{2}z\big)\, G_D(z,w)\,dz \tag{6.46}
\]
for some constant c3 = c3(D, α, ℓ) > 0. Therefore (6.45) and (6.46) imply that
\[
u_k(A_k) \le c_4\,\Big(\frac{\kappa}{2}\Big)^{k\alpha}\,\frac{\ell\big((\kappa/2)^{-2} r^{-2}\big)}{\ell\big((\kappa/2)^{-2k} r^{-2}\big)}\, G_D(A_0, w)
\]
for some constant c4 = c4(D, α, ℓ) > 0. On the other hand, using Lemma 6.29, we get
\[
G_D(A_0, w) \le c_5\,\Big(\frac{2}{\kappa}\Big)^{\gamma k}\,\frac{\ell\big((\kappa/2)^{-2k} r^{-2}\big)}{\ell(r^{-2})}\, G_D(A_k, w)
\]
for some constant c5 = c5(D, α) > 0. Thus, by (6.41),
\[
u_k(A_k) \le c_6\,\Big(\frac{2}{\kappa}\Big)^{-k(\alpha-\gamma)}\, G_D(A_k, w).
\]
By Theorem 6.28, we have
\[
\frac{u_k(x)}{G_D(x,w)} \le \frac{u_{k-1}(x)}{G_D(x,w)} \le c_7\,\frac{u_{k-1}(A_{k-1})}{G_D(A_{k-1},w)} \le c_6 c_7\,\Big(\frac{2}{\kappa}\Big)^{-(k-1)(\alpha-\gamma)}
\]
for some constant c7 = c7(D, α) > 0 and k = 1, 2, · · ·. □
Let x0 ∈ D be fixed and set
\[
M_D(x,y) := \frac{G_D(x,y)}{G_D(x_0,y)}, \qquad x, y \in D,\ y \ne x_0.
\]
MD is called the Martin kernel of D with respect to X.
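For orientation, here is an illustration added to the text, using classical facts about the rotationally invariant α-stable process: in the stable case ℓ ≡ 1 with D = B(0, 1) and x0 = 0, the Martin kernel is known explicitly and the Martin boundary coincides with the Euclidean boundary.

```latex
% Symmetric \alpha-stable process, D = B(0,1), x_0 = 0:
M_D(x, w) = \lim_{D \ni y \to w} \frac{G_D(x,y)}{G_D(0,y)}
= \left(\frac{1 - |x|^2}{|x - w|^2}\right)^{\alpha/2} \frac{1}{|x - w|^{d-\alpha}}
= \frac{(1-|x|^2)^{\alpha/2}}{|x-w|^{d}}, \qquad w \in \partial B(0,1),
% normalized so that M_D(0, w) = 1 since |w| = 1.
```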
Now the next theorem follows from Theorem 6.28 and Lemma 6.31 (used in place of Lemma 13 and Lemma 14 in [14], respectively) in very much the same way as in the case of symmetric stable processes in Lemma 16 of [14] (with Green functions instead of harmonic functions). We omit the details.
Theorem 6.32 There exist positive constants R1, M1, c and β depending on D, α and ℓ such that for any Q ∈ ∂D, r < R1 and z ∈ D \ B(Q, M1r), we have
\[
|M_D(z,x) - M_D(z,y)| \le c\,\Big(\frac{|x-y|}{r}\Big)^{\beta}, \qquad x, y \in D \cap B(Q,r).
\]
In particular, the limit \(\lim_{D\ni y\to w} M_D(x,y)\) exists for every w ∈ ∂D.
There is a compactification D^M of D, unique up to a homeomorphism, such that MD(x, y) has a continuous extension to D × (D^M \ {x0}) and MD(·, z1) = MD(·, z2) if and only if z1 = z2 (see, for instance, [44]). The set ∂M D := D^M \ D is called the Martin boundary of D. For z ∈ ∂M D, we set MD(·, z) to be zero on Dc.
A positive harmonic function u for XD is minimal if, whenever v is a positive harmonic
function for XD with v ≤ u on D, one must have u = cv for some constant c. The set
of points z ∈ ∂MD such that MD(·, z) is minimal harmonic for XD is called the minimal
Martin boundary of D.
For each fixed z ∈ ∂D and x ∈ D, let
\[
M_D(x,z) := \lim_{D\ni y\to z} M_D(x,y),
\]
which exists by Theorem 6.32. For each z ∈ ∂D, set MD(x, z) to be zero for x ∈ Dc.
Lemma 6.33 For every z ∈ ∂D and every open set B whose closure is contained in D, MD(XτB, z) is Px-integrable for every x ∈ B.
Proof. Take a sequence {zm}m≥1 ⊂ D \ B converging to z. Since MD(·, zm) is regular harmonic for X in B, by Fatou's lemma and Theorem 6.32,
\[
E_x\big[M_D(X_{\tau_B}, z)\big] = E_x\Big[\lim_{m\to\infty} M_D(X_{\tau_B}, z_m)\Big] \le \liminf_{m\to\infty} M_D(x, z_m) = M_D(x,z) < \infty. \qquad \Box
\]
Lemma 6.34 For every z ∈ ∂D and x ∈ D,
\[
M_D(x,z) = E_x\big[M_D\big(X^D_{\tau_{B(x,r)}}, z\big)\big] \qquad \text{for every } 0 < r < r_5 \wedge \tfrac{1}{2}\rho_D(x). \tag{6.47}
\]
Proof. Fix z ∈ ∂D, x ∈ D and r < r6 ∧ ½ρD(x) < R. Let
\[
\eta_m := \Big(\frac{\kappa}{2}\Big)^m r \quad\text{and}\quad z_m := A_{\eta_m}(z), \qquad m = 0, 1, \cdots.
\]
Note that
\[
B(z_m, \eta_{m+1}) \subset B(z,\tfrac{1}{2}\eta_m) \cap D \subset B(z,\eta_m) \cap D \subset B(z,r) \cap D \subset D \setminus B(x,r)
\]
for all m ≥ 0. Thus, by the harmonicity of MD(·, zm), we have
\[
M_D(x, z_m) = E_x\big[M_D\big(X_{\tau_{B(x,r)}}, z_m\big)\big].
\]
On the other hand, by Theorem 6.28, there exist constants m0 ≥ 0 and c1 > 0 such that for every w ∈ D \ B(z, ηm) and y ∈ D ∩ B(z, ηm+1),
\[
M_D(w, z_m) = \frac{G_D(w, z_m)}{G_D(x_0, z_m)} \le c_1\,\frac{G_D(w, y)}{G_D(x_0, y)} = c_1\, M_D(w, y), \qquad m \ge m_0.
\]
Letting y → z ∈ ∂D, we get
\[
M_D(w, z_m) \le c_1\, M_D(w, z), \qquad m \ge m_0, \tag{6.48}
\]
for every w ∈ D \ B(z, ηm).
To prove (6.47), it suffices to show that {MD(XτB(x,r), zm) : m ≥ m0} is Px-uniformly integrable. Since MD(XτB(x,r), z) is Px-integrable by Lemma 6.33, for any ε > 0 there is an N0 > 1 such that
\[
E_x\big[M_D\big(X_{\tau_{B(x,r)}}, z\big);\ M_D\big(X_{\tau_{B(x,r)}}, z\big) > N_0/c_1\big] < \frac{\varepsilon}{4c_1}. \tag{6.49}
\]
Note that by (6.48) and (6.49),
\[
\begin{aligned}
&E_x\big[M_D\big(X_{\tau_{B(x,r)}}, z_m\big);\ M_D\big(X_{\tau_{B(x,r)}}, z_m\big) > N_0 \text{ and } X_{\tau_{B(x,r)}} \in D \setminus B(z,\eta_m)\big]\\
&\le c_1\, E_x\big[M_D\big(X_{\tau_{B(x,r)}}, z\big);\ c_1 M_D\big(X_{\tau_{B(x,r)}}, z\big) > N_0\big] < c_1\,\frac{\varepsilon}{4c_1} = \frac{\varepsilon}{4}.
\end{aligned}
\]
By (6.16) in Proposition 6.17, we have for m ≥ m0,
\[
\begin{aligned}
E_x\big[M_D\big(X^D_{\tau_{B(x,r)}}, z_m\big);\ X_{\tau_{B(x,r)}} \in D \cap B(z,\eta_m)\big]
&= \int_{D\cap B(z,\eta_m)} M_D(w, z_m)\, K_{B(x,r)}(x,w)\,dw\\
&\le c_2 \int_{D\cap B(z,\eta_m)} M_D(w, z_m)\, j(|w-x|-r)\,\frac{r^{\alpha}}{\ell(r^{-2})}\,dw
\end{aligned}
\]
for some c2 = c2(d, α, ℓ) > 0; here (6.16) is applied with x0 = x, in which case the factor (r − |x − x0|)^{α/2} equals r^{α/2}. Since |w − x| ≥ |x − z| − |z − w| ≥ ρD(x) − ηm ≥ 2r − r = r, using the monotonicity of J and (6.18) in the display above, we see that
\[
\begin{aligned}
E_x\big[M_D\big(X^D_{\tau_{B(x,r)}}, z_m\big);\ X_{\tau_{B(x,r)}} \in D \cap B(z,\eta_m)\big]
&\le c_3\, j(r)\,\frac{r^{\alpha}}{\ell(r^{-2})} \int_{D\cap B(z,\eta_m)} M_D(w, z_m)\,dw \qquad (6.50)\\
&\le c_4 \int_{B(z,\eta_m)} M_D(w, z_m)\,dw = c_4\, G_D(x_0, z_m)^{-1} \int_{B(z,\eta_m)} G_D(w, z_m)\,dw \qquad (6.51)
\end{aligned}
\]
for some c3 = c3(D, α, ℓ) > 0 and c4 = c4(D, α, ℓ, r) > 0. Note that, by Lemma 6.29, there exist c5 = c5(D, α, ℓ, m0) > 0, c6 = c6(D, α, ℓ, m0, r) > 0 and γ < α such that
\[
\begin{aligned}
G_D(x_0, z_m)^{-1} &\le c_5\,\Big(\frac{\kappa}{2}\Big)^{-\gamma m}\,\frac{\ell\big((\kappa/2)^{-2(m+1)}(\kappa/2)^{-2m_0} r^{-2}\big)}{\ell\big((\kappa/2)^{-2}(\kappa/2)^{-2m_0} r^{-2}\big)}\, G_D(x_0, z_{m_0})^{-1}\\
&\le c_6\,\Big(\frac{\kappa}{2}\Big)^{-\gamma m}\,\ell\big((\kappa/2)^{-2m}(\kappa/2)^{-2(m_0+1)} r^{-2}\big). 
\end{aligned} \tag{6.52}
\]
On the other hand, by (6.13),
\[
\int_{B(z,\eta_m)} G_D(w, z_m)\,dw \le c_7 \int_{B(z_m, 2\eta_m)} \frac{dw}{\ell(|w-z_m|^{-2})\,|w-z_m|^{d-\alpha}}
\le c_8 \int_0^{2\eta_m} \frac{s^{\alpha-1}}{\ell(s^{-2})}\,ds \le c_9\,\frac{\eta_m^{\alpha}}{\ell((2\eta_m)^{-2})}. \tag{6.53}
\]
In the last inequality above, we have used (6.25). It follows from (6.51)–(6.53) that there exists c10 = c10(D, α, ℓ, m0, r) > 0 such that
\[
E_x\big[M_D\big(X^D_{\tau_{B(x,r)}}, z_m\big);\ X_{\tau_{B(x,r)}} \in D \cap B(z,\eta_m)\big]
\le c_{10}\,\Big(\frac{\kappa}{2}\Big)^{(\alpha-\gamma)m}\,\frac{\ell\big((\kappa/2)^{-2m}(\kappa/2)^{-2(m_0+1)} r^{-2}\big)}{\ell\big((\kappa/2)^{-2m}(2r)^{-2}\big)}.
\]
Since ℓ is slowly varying at ∞, we can take N = N(ε, D, m0, r) large enough so that for m ≥ N,
\[
\begin{aligned}
&E_x\big[M_D\big(X_{\tau_{B(x,r)}}, z_m\big);\ M_D\big(X_{\tau_{B(x,r)}}, z_m\big) > N\big]\\
&\le E_x\big[M_D\big(X_{\tau_{B(x,r)}}, z_m\big);\ X_{\tau_{B(x,r)}} \in D \cap B(z,\eta_m)\big]\\
&\quad + E_x\big[M_D\big(X_{\tau_{B(x,r)}}, z_m\big);\ M_D\big(X_{\tau_{B(x,r)}}, z_m\big) > N \text{ and } X_{\tau_{B(x,r)}} \in D \setminus B(z,\eta_m)\big]\\
&< c_{10}\,\Big(\frac{\kappa}{2}\Big)^{(\alpha-\gamma)m}\,\frac{\ell\big((\kappa/2)^{-2m}(\kappa/2)^{-2(m_0+1)} r^{-2}\big)}{\ell\big((\kappa/2)^{-2m}(2r)^{-2}\big)} + \frac{\varepsilon}{4} < \varepsilon.
\end{aligned}
\]
99
As each MD(XτB(x,r), zm) is Px-integrable, we conclude that MD(XτB(x,r)
, zm) : m ≥ m0 is
uniformly integrable under Px. 2
The two lemmas above imply that M_D(·, z) is harmonic for X.

Theorem 6.35 For every z ∈ ∂D, the function x ↦ M_D(x, z) is harmonic in D with respect
to X.
Proof. Fix z ∈ ∂D and let h(x) := M_D(x, z). For any open set D_1 with D_1 ⊂ D̄_1 ⊂ D we can
always take a smooth open set D_2 such that D_1 ⊂ D̄_1 ⊂ D_2 ⊂ D̄_2 ⊂ D. Thus by the strong
Markov property, it is enough to show that for any x ∈ D_2,

h(x) = E_x[h(X_{τ_{D_2}})].

For a fixed ε > 0 and each x ∈ D_2 we put

r(x) = (1/2) ρ_{D_2}(x) ∧ ε  and  B(x) = B(x, r(x)).

Define a sequence of stopping times {T_m, m ≥ 1} as follows: T_1 = inf{t > 0 : X_t ∉ B(X_0)}, and for m ≥ 2,

T_m = T_{m−1} + τ_{B(X_{T_{m−1}})} ∘ θ_{T_{m−1}}

if X_{T_{m−1}} ∈ D_2, and T_m = τ_{D_2} otherwise. Then {h(X_{T_m}), m ≥ 1} is a martingale under P_x for
any x ∈ D_2. Since D_2 is smooth, we know from Theorem 1 in [72] that P_x(X_{τ_{D_2}} ∈ ∂D_2) = 0.
Thus we have P_x(τ_{D_2} = T_m for some m ≥ 1) = 1. Since h is bounded on D_2, we have

|E_x[h(X_{T_m}); T_m < τ_{D_2}]| ≤ c P_x(T_m < τ_{D_2}) → 0.

Take a domain D_3 such that D̄_2 ⊂ D_3 ⊂ D̄_3 ⊂ D; then h is continuous and therefore bounded
on D̄_3. By Lemma 6.33, we have E_x[h(X_{τ_{D_2}})] < ∞. Thus by the dominated convergence
theorem

lim_{m→∞} E_x[h(X_{τ_{D_2}}); T_m = τ_{D_2}] = E_x[h(X_{τ_{D_2}})].

Therefore

h(x) = lim_{m→∞} E_x[h(X_{T_m})]
 = lim_{m→∞} E_x[h(X_{τ_{D_2}}); T_m = τ_{D_2}] + lim_{m→∞} E_x[h(X_{T_m}); T_m < τ_{D_2}] = E_x[h(X_{τ_{D_2}})]. 2
Recall that a point z ∈ ∂D is said to be a regular boundary point for X if P_z(τ_D = 0) = 1
and an irregular boundary point if P_z(τ_D = 0) = 0. It is well known that if z ∈ ∂D is regular
for X, then for any x ∈ D, G_D(x, y) → 0 as y → z.
Lemma 6.36 (1) If z, w ∈ ∂D, z ≠ w and w is a regular boundary point for X, then
M_D(x, z) → 0 as x → w.
(2) The mapping (x, z) ↦ M_D(x, z) is continuous on D × ∂D.
Proof. Both of the assertions can be proved easily using our Theorems 6.28 and 6.32. We
skip the proof since the argument is almost identical to the one on page 235 of [15]. 2
Lemma 6.37 Suppose that h is a bounded singular α-harmonic function in a bounded open
set D. If there is a set N of zero capacity such that for any z ∈ ∂D \ N,

lim_{D∋x→z} h(x) = 0,

then h is identically zero.
Proof. Take an increasing sequence of open sets {D_m}_{m≥1} satisfying D̄_m ⊂ D_{m+1} and
∪_{m=1}^∞ D_m = D. Set τ_m = τ_{D_m}. Then τ_m ↑ τ_D and lim_{m→∞} X_{τ_m} = X_{τ_D} by the quasi-left
continuity of X. Since N has zero capacity, we have

P_x(X_{τ_D} ∈ N) = 0, x ∈ D.

Therefore by the bounded convergence theorem we have for any x ∈ D,

h(x) = lim_{m→∞} E_x(h(X_{τ_m}); τ_m < τ_D) = lim_{m→∞} E_x(h(X_{τ_m}) 1_{∂D\N}(X_{τ_D}); τ_m < τ_D) = 0. 2
So far we have shown that the Martin boundary of D can be identified with a subset of
the Euclidean boundary ∂D. The main result of this section is as follows:
Theorem 6.38 The Martin boundary and the minimal Martin boundary of D with respect
to X can be identified with the Euclidean boundary of D.
Proof. Let I be the set of irregular boundary points of D for X. I is semipolar by
Proposition II.3.3 in [13], and semipolar sets are polar in our case (Theorem 4.1.2 in [33]).
Thus Cap(I) = 0. Choose a decreasing sequence of open sets ∆_m containing I such that

lim_{m→∞} Cap(∆_m) = 0.

Hence

lim_{m→∞} P_x(T_{∆_m} < ∞) = 0,  x ∉ ∩_{m=1}^∞ ∆_m.
Define a sequence {D_k} of subsets of D by

D_k = {x ∈ D : dist(x, D^c) > 1/k}, k = 1, 2, . . .

Without loss of generality we may assume that x_0 ∈ D_1 \ ∆_1; thus

P_{x_0}(X_{τ_{D_k}} ∈ ∆_m ∩ D_k^c) = P_{x_0}(X_{T_{D_k^c}} ∈ ∆_m),

and hence

lim_{m→∞} sup_k P_{x_0}(X_{τ_{D_k}} ∈ ∆_m ∩ D_k^c) = 0.

For any z ∈ ∂D, we define

ν_k^z(dy) = M_D(y, z) P_{x_0}(X_{τ_{D_k}} ∈ dy), k = 1, 2, . . .
As k → ∞, the measure ν_k^z converges weakly to the unit point mass δ_z. Indeed, for every
k ≥ 1, ν_k^z(R^d) = ∫ M_D(y, z) P_{x_0}(X_{τ_{D_k}} ∈ dy) = M_D(x_0, z) = 1. Also, for each ball B = B(z, r)
and each m ≥ 1,

ν_k^z(B^c) = ∫_{B^c} M_D(y, z) P_{x_0}(X_{τ_{D_k}} ∈ dy)
  = ∫_{(B^c ∩ ∆_m) ∩ D_k^c} M_D(y, z) P_{x_0}(X_{τ_{D_k}} ∈ dy) + ∫_{(B^c \ ∆_m) ∩ D_k^c} M_D(y, z) P_{x_0}(X_{τ_{D_k}} ∈ dy)
  ≤ sup_{y ∈ B^c} M_D(y, z) · P_{x_0}(X_{τ_{D_k}} ∈ ∆_m ∩ D_k^c) + sup_{y ∈ (B^c \ ∆_m) ∩ D_k^c} M_D(y, z).
Applying the boundary Harnack principle we see that M_D(·, z) is bounded on B^c. For any
ε > 0, we can choose an m > 0 so that

sup_k P_{x_0}(X_{τ_{D_k}} ∈ ∆_m ∩ D_k^c) ≤ ε / sup_{y ∈ B^c} M_D(y, z).

Note from (b) and (c) in the first paragraph of this proof that

lim_{k→∞} sup_{y ∈ (B^c \ ∆_m) ∩ D_k^c} M_D(y, z) = 0.

Hence

lim_{k→∞} ν_k^z(B^c) = 0.
Consequently, if M_D(·, z_1) = M_D(·, z_2), then δ_{z_1} = δ_{z_2}, which implies that z_1 = z_2.
So far we have shown that the Martin boundary coincides with the Euclidean boundary
and the Martin kernels M_D(·, z), z ∈ ∂D, are all harmonic for X^D. Thus it follows from [44]
that for every nonnegative singular α-harmonic function h in D, there exists a finite measure
µ on ∂D such that (6.54) holds.
Finally we show that, for every z ∈ ∂D, M_D(·, z) is a minimal harmonic function; hence
the minimal Martin boundary of D can be identified with the Euclidean boundary. Fix
z ∈ ∂D and suppose that h ≤ M_D(·, z), where h is nonnegative and harmonic for X^D. Then
there is a finite measure µ on ∂D such that

h(·) = ∫_{∂D} M_D(·, w) µ(dw).

If µ is not a multiple of δ_z, then there is a positive measure ν ≤ µ such that dist(z, supp(ν)) > 0.
Let

u(·) = ∫_{∂D} M_D(·, w) ν(dw).

Then u is a positive harmonic function for X^D and is bounded above by M_D(·, z). Take
ε = (1/2) dist(z, supp(ν)). Then by the boundary Harnack principle, M_D(·, z) is bounded on
B(z, ε)^c and so is u. Again from the boundary Harnack principle we see that M_D(·, ·) is
bounded on B(z, ε) × supp(ν), so u is also bounded on B(z, ε). Since M_D(x, z) → 0 as x
approaches any regular boundary point different from z, u(x) → 0 as x approaches any
regular boundary point different from z as well. From Lemma 6.37 we see that u is
identically zero. Therefore µ = c δ_z for some c and M_D(·, z) is minimal α-harmonic. 2
As a consequence of Theorem 6.38, we conclude that for every nonnegative harmonic
function h for X^D, there exists a unique finite measure µ on ∂D such that

h(x) = ∫_{∂D} M_D(x, z) µ(dz), x ∈ D.   (6.54)

The measure µ is called the Martin measure of h.
7 Subordinate killed Brownian motion
7.1 Definitions
Let X = (X_t, P_x) be a d-dimensional Brownian motion. Let D be a bounded connected open
set in R^d, and let τ_D = inf{t > 0 : X_t ∉ D} be the exit time of X from D. Define

X_t^D = X_t for t < τ_D, and X_t^D = ∂ for t ≥ τ_D,

where ∂ is the cemetery point. We call X^D a Brownian motion killed upon exiting D, or simply
a killed Brownian motion. The semigroup of X^D will be denoted by (P_t^D : t ≥ 0), and its
transition density by p_D(t, x, y), t ≥ 0, x, y ∈ D. The transition density p_D(t, x, y) is strictly
positive, and hence the eigenfunction ϕ_0 of the operator −∆|_D corresponding to the smallest
eigenvalue λ_0 can be chosen to be strictly positive; see, for instance, [27]. The potential
operator of X^D is given by

G_D f(x) = ∫_0^∞ P_t^D f(x) dt

and has a density G_D(x, y), x, y ∈ D. Here, and further below, f denotes a nonnegative
Borel function on D. We recall the following well-known fact: if h is a nonnegative harmonic
function for X^D (i.e., harmonic for ∆ in D), then both h and P_t^D h are continuous functions
in D.
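In one dimension the objects above can be written down explicitly, which gives a useful sanity check. The sketch below is an illustration and not part of the notes: it takes D = (0, π) with generator ∆ = d²/dx² (Brownian motion run at twice the usual speed, as in these notes), so that G_D has the eigenfunction expansion Σ_{n≥1} (2/π) sin(nx) sin(ny)/n², which can be compared with the closed form x(π − y)/π for x ≤ y.

```python
import math

def green_series(x, y, n_terms=20000):
    """Eigenfunction expansion of the Green function of Brownian motion
    (generator d^2/dx^2) killed outside D = (0, pi):
    G_D(x, y) = sum_n (2/pi) sin(nx) sin(ny) / n^2."""
    return sum(2.0 / math.pi * math.sin(n * x) * math.sin(n * y) / n**2
               for n in range(1, n_terms + 1))

def green_closed(x, y):
    """Closed form: G_D(x, y) = x (pi - y) / pi for x <= y (symmetric in x, y)."""
    x, y = min(x, y), max(x, y)
    return x * (math.pi - y) / math.pi

for (x, y) in [(0.5, 1.0), (1.2, 2.7), (2.0, 2.0)]:
    assert abs(green_series(x, y) - green_closed(x, y)) < 1e-3
```

The series converges like 1/n², so a few thousand terms already give three-digit agreement with the closed form.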
In this section we always assume that (P_t^D : t ≥ 0) is intrinsically ultracontractive, that
is, for each t > 0 there exists a constant c_t such that

p_D(t, x, y) ≤ c_t ϕ_0(x) ϕ_0(y), x, y ∈ D,   (7.1)

where ϕ_0 is the positive eigenfunction corresponding to the smallest eigenvalue λ_0 of the
Dirichlet Laplacian −∆|_D. It is well known (see, for instance, [28]) that when (P_t^D : t ≥ 0)
is intrinsically ultracontractive there is a constant c_t > 0 such that

p_D(t, x, y) ≥ c_t ϕ_0(x) ϕ_0(y), x, y ∈ D.

Intrinsic ultracontractivity was introduced by Davies and Simon in [28]. It is well known
(see, for instance, [1]) that (P_t^D : t ≥ 0) is intrinsically ultracontractive when D is a bounded
Lipschitz domain, a Hölder domain of order 0, or a uniformly Hölder domain of order
β ∈ (0, 2).
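As a concrete illustration of the two displayed bounds (not part of the notes; the domain, the time t = 1, and the grid are arbitrary choices of ours), take D = (0, π) in one dimension with generator d²/dx²: then ϕ_0(x) = sin x up to normalization, λ_0 = 1, and p_D(t, x, y) = Σ_{n≥1} (2/π) e^{−n²t} sin(nx) sin(ny). The sketch checks numerically that p_D(t, x, y)/(ϕ_0(x)ϕ_0(y)) stays between two positive constants, as intrinsic ultracontractivity asserts.

```python
import math

def p_D(t, x, y, n_terms=50):
    # transition density of Brownian motion (generator d^2/dx^2)
    # killed outside D = (0, pi), via the eigenfunction expansion
    return sum(2.0 / math.pi * math.exp(-n * n * t)
               * math.sin(n * x) * math.sin(n * y)
               for n in range(1, n_terms + 1))

t = 1.0
grid = [0.1 + 0.05 * k for k in range(59)]   # interior points of (0, pi)
ratios = [p_D(t, x, y) / (math.sin(x) * math.sin(y)) for x in grid for y in grid]

# the ratio is pinned near (2/pi) e^{-t} ~ 0.234 by the ground state;
# in particular it is bounded above and below by positive constants
assert 0.15 < min(ratios) and max(ratios) < 0.35
```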
Let S = (S_t : t ≥ 0) and T = (T_t : t ≥ 0) be two special subordinators. Suppose that
X, S and T are independent. We assume that the Laplace exponents of S and T, denoted
by φ and ψ respectively, are conjugate, i.e., λ = φ(λ)ψ(λ). We also assume that φ has the
representation (3.2) with b > 0 or µ(0, ∞) = ∞. We define subordinate processes by

Y_t^D = X^D(S_t), t ≥ 0,
Z_t^D = X^D(T_t), t ≥ 0.

Then Y^D = (Y_t^D : t ≥ 0) and Z^D = (Z_t^D : t ≥ 0) are strong Markov processes on D. We call
Y^D (resp. Z^D) a subordinate killed Brownian motion. If we use η_t(ds) and θ_t(ds) to denote
the distributions of S_t and T_t respectively, the semigroups of Y^D and Z^D are given by

Q_t^D f(x) = ∫_0^∞ P_s^D f(x) η_t(ds),
R_t^D f(x) = ∫_0^∞ P_s^D f(x) θ_t(ds),

respectively. The semigroup Q_t^D has a density given by

q_D(t, x, y) = ∫_0^∞ p_D(s, x, y) η_t(ds).
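The time change Y_t^D = X^D(S_t) is easy to sketch by Monte Carlo. In the snippet below (an illustration only; all concrete choices are ours: D is the unit disk in R², S is a gamma subordinator, which is a special subordinator, and the Brownian path is discretized by Euler steps), a single sample of Y_t^D is drawn by first sampling S_t and then running a killed Brownian path up to that time; None plays the role of the cemetery state ∂.

```python
import math, random

def sample_Y_D(t, x0=(0.0, 0.0), dt=1e-3):
    """One sample of Y_t^D = X^D(S_t): Brownian motion run at twice the
    usual speed (variance 2s at time s, as in these notes), killed on
    exiting the unit disk, evaluated at an independent subordinator time."""
    T = random.gammavariate(t, 1.0)       # S_t for a gamma subordinator
    x, y = x0
    sd = math.sqrt(2.0 * dt)              # generator Delta, not Delta/2
    for _ in range(int(T / dt)):
        x += random.gauss(0.0, sd)
        y += random.gauss(0.0, sd)
        if x * x + y * y >= 1.0:          # exit from D: sent to the cemetery
            return None
    return (x, y)

random.seed(0)
samples = [sample_Y_D(0.5) for _ in range(200)]
alive = [p for p in samples if p is not None]
assert all(px * px + py * py < 1.0 for (px, py) in alive)
assert 0 < len(alive) < 200   # some paths survive, some are killed
```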
The semigroup R_t^D will have a density

r_D(t, x, y) = ∫_0^∞ p_D(s, x, y) θ_t(ds)

in the case b = 0, while for b > 0, R_t^D is not absolutely continuous with respect to the
Lebesgue measure. Let U and V denote the potential measures of S and T, respectively.
Then there are decreasing functions u and v defined on (0, ∞) such that U(dt) = u(t) dt and
V(dt) = b ε_0(dt) + v(t) dt. The potential kernels of Y^D and Z^D are given by

U^D f(x) = ∫_0^∞ P_t^D f(x) U(dt) = ∫_0^∞ P_t^D f(x) u(t) dt,
V^D f(x) = ∫_0^∞ P_t^D f(x) V(dt) = b f(x) + ∫_0^∞ P_t^D f(x) v(t) dt,

respectively. The potential kernel U^D has a density given by

U^D(x, y) = ∫_0^∞ p_D(t, x, y) u(t) dt,
while V^D need not be absolutely continuous with respect to the Lebesgue measure. Note
that U^D(x, y) is the Green function of the process Y^D. For the process Y^D we define the
potential of a Borel measure m on D by

U^D m(x) := ∫_D U^D(x, y) m(dy) = ∫_0^∞ P_t^D m(x) u(t) dt.

Let (U_λ^D, λ > 0) be the resolvent of the semigroup (Q_t^D, t ≥ 0). Then U_λ^D is given by a
kernel which is absolutely continuous with respect to the Lebesgue measure. Moreover, one
can easily show that for a bounded Borel function f vanishing outside a compact subset of
D, the functions x ↦ U_λ^D f(x), λ > 0, and x ↦ U^D f(x) are continuous. This implies (e.g.,
[13], p. 266) that excessive functions of Y^D are lower semicontinuous.
Recall that a measurable function s : D → [0, ∞] is excessive for Y^D (or Q_t^D) if Q_t^D s ≤ s
for all t ≥ 0 and s = lim_{t→0} Q_t^D s. We will denote the family of all excessive functions for
Y^D by S(Y^D). The notations S(X^D) and S(Z^D) are now self-explanatory.

A measurable function h : D → [0, ∞] is harmonic for Y^D if h is not identically infinite
in D and if for every relatively compact open subset U with U ⊂ Ū ⊂ D,

h(x) = E_x[h(Y^D(τ_U^Y))], ∀x ∈ U,

where τ_U^Y = inf{t : Y_t^D ∉ U} is the first exit time of Y^D from U. We will denote the family
of all nonnegative harmonic functions for Y^D by H^+(Y^D). Similarly, H^+(X^D) will denote the
family of all nonnegative harmonic functions for X^D. It is well known that H^+(·) ⊂ S(·).
7.2 Representation of excessive and harmonic functions of subordinate process
The factorization in the next proposition is similar in spirit to Theorem 4.1 (5) in [59].
Proposition 7.1 (a) For any nonnegative Borel function f on D we have

U^D V^D f(x) = V^D U^D f(x) = G_D f(x), x ∈ D.

(b) For any Borel measure m on D we have

V^D U^D m(x) = G_D m(x).

Proof. (a) We are only going to show that U^D V^D f(x) = G_D f(x) for all x ∈ D. For the
proof of V^D U^D f(x) = G_D f(x) see part (b). For any nonnegative Borel function f on D, by
using the Markov property and Theorem 3.6 we get that

U^D V^D f(x) = ∫_0^∞ P_t^D V^D f(x) u(t) dt
 = ∫_0^∞ P_t^D (b f(x) + ∫_0^∞ P_s^D f(x) v(s) ds) u(t) dt
 = b U^D f(x) + ∫_0^∞ P_t^D (∫_0^∞ P_s^D f(x) v(s) ds) u(t) dt
 = b U^D f(x) + ∫_0^∞ ∫_0^∞ P_{t+s}^D f(x) v(s) ds u(t) dt
 = b U^D f(x) + ∫_0^∞ ∫_t^∞ P_r^D f(x) v(r − t) dr u(t) dt
 = b U^D f(x) + ∫_0^∞ (∫_0^r u(t) v(r − t) dt) P_r^D f(x) dr
 = ∫_0^∞ (b u(r) + ∫_0^r u(t) v(r − t) dt) P_r^D f(x) dr
 = ∫_0^∞ P_r^D f(x) dr = G_D f(x).
(b) Similarly as above,

V^D U^D m(x) = b U^D m(x) + ∫_0^∞ P_t^D U^D m(x) v(t) dt
 = b U^D m(x) + ∫_0^∞ P_t^D (∫_0^∞ P_s^D m(x) u(s) ds) v(t) dt
 = b U^D m(x) + ∫_0^∞ ∫_0^∞ P_{t+s}^D m(x) u(s) ds v(t) dt
 = b U^D m(x) + ∫_0^∞ ∫_t^∞ P_r^D m(x) u(r − t) dr v(t) dt
 = b U^D m(x) + ∫_0^∞ (∫_0^r u(r − t) v(t) dt) P_r^D m(x) dr
 = ∫_0^∞ (b u(r) + ∫_0^r u(r − t) v(t) dt) P_r^D m(x) dr
 = ∫_0^∞ P_r^D m(x) dr = G_D m(x). 2
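The step that collapses both computations is the identity b u(r) + ∫_0^r u(t) v(r − t) dt = 1 for a.e. r > 0 (the content of Theorem 3.6 for conjugate special subordinators). A numeric sanity check (an illustration only, for the self-conjugate pair φ(λ) = ψ(λ) = λ^{1/2}, where b = 0 and u(t) = v(t) = (πt)^{−1/2}):

```python
import math

def u(t):
    # potential density of the 1/2-stable subordinator: its Laplace
    # transform is lam^{-1/2} = 1/phi(lam)
    return 1.0 / math.sqrt(math.pi * t)

def convolution(r, n=4000):
    # int_0^r u(t) v(r - t) dt with v = u, using the substitution
    # t = r sin^2(theta) to tame the endpoint singularities
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        t = r * math.sin(theta) ** 2
        jac = 2.0 * r * math.sin(theta) * math.cos(theta)
        total += u(t) * u(r - t) * jac * h
    return total

for r in (0.1, 1.0, 7.3):
    assert abs(convolution(r) - 1.0) < 1e-6  # b*u(r) + (u*v)(r) = 1 with b = 0
```

After the substitution the integrand is the constant 2/π, so the quadrature reproduces the identity essentially to machine precision, independently of r.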
Proposition 7.2 Let g be an excessive function for Y^D. Then V^D g is excessive for X^D.

Proof. We first observe that if g is excessive with respect to Y^D, then g is the increasing
limit of U^D f_n for some f_n. Hence it follows from Proposition 7.1 that

V^D g = lim_{n→∞} V^D U^D f_n = lim_{n→∞} G_D f_n,

which implies that V^D g is either identically infinite or excessive with respect to X^D. We
now prove that V^D g is not identically infinite. In fact, since g is excessive with respect to
Y^D, there exists x_0 ∈ D such that for every t > 0,

∞ > g(x_0) ≥ Q_t^D g(x_0) = ∫_0^∞ P_s^D g(x_0) η_t(ds).

Thus there is s > 0 such that P_s^D g(x_0) is finite. Hence

∞ > P_s^D g(x_0) = ∫_D p_D(s, x_0, y) g(y) dy ≥ c_s ϕ_0(x_0) ∫_D ϕ_0(y) g(y) dy,

so we have ∫_D ϕ_0(y) g(y) dy < ∞. Consequently

∫_D V^D g(x) ϕ_0(x) dx = ∫_D g(x) V^D ϕ_0(x) dx
 = ∫_D g(x) (b ϕ_0(x) + ∫_0^∞ P_t^D ϕ_0(x) v(t) dt) dx
 = ∫_D g(x) (b ϕ_0(x) + ∫_0^∞ e^{−λ_0 t} ϕ_0(x) v(t) dt) dx
 = ∫_D ϕ_0(x) g(x) dx (b + ∫_0^∞ e^{−λ_0 t} v(t) dt) < ∞.

Therefore V^D g is not identically infinite in D. 2
Remark 7.3 Note that the proposition above is valid with Y^D and Z^D interchanged: if g is
excessive for Z^D, then U^D g is excessive for X^D. Using this we can easily get the following
simple fact: if f and g are two nonnegative Borel functions on D such that V^D f and V^D g
are not identically infinite, and such that V^D f = V^D g a.e., then f = g a.e. In fact, since
V^D f and V^D g are excessive for Z^D, we know that G_D f = U^D V^D f and G_D g = U^D V^D g are
excessive for X^D. Moreover, by the absolute continuity of U^D, we have that G_D f = G_D g.
The a.e. equality of f and g follows from the uniqueness principle for G_D.
The second part of Proposition 7.1 shows that if s = G_D m is the potential of a measure,
then s = V^D g, where g = U^D m is excessive for Y^D. The function g can be written in the
following way:

g(x) = ∫_0^∞ P_s^D m(x) u(s) ds
 = ∫_0^∞ P_s^D m(x) (u(∞) + ∫_s^∞ (−du(t))) ds
 = ∫_0^∞ P_s^D m(x) u(∞) ds + ∫_0^∞ P_s^D m(x) (∫_s^∞ (−du(t))) ds
 = u(∞) s(x) + ∫_0^∞ (∫_0^t P_s^D m(x) ds) (−du(t))
 = u(∞) s(x) + ∫_0^∞ (P_t^D s(x) − s(x)) du(t).   (7.2)

In the next proposition we will show that every excessive function s for X^D can be
represented as a potential V^D g, where g, given by (7.2), is excessive for Y^D. We need the
following important lemma.
Lemma 7.4 Let h be a nonnegative harmonic function for X^D, and let

g(x) = u(∞) h(x) + ∫_0^∞ (P_t^D h(x) − h(x)) du(t).   (7.3)

Then g is continuous.

Proof. For any ε > 0 it holds that |∫_ε^∞ du(t)| ≤ u(ε). Hence from the continuity of h and
P_t^D h it follows by the dominated convergence theorem that the function

x ↦ ∫_ε^∞ (P_t^D h(x) − h(x)) du(t), x ∈ D,

is continuous. Therefore we only need to prove that the function

x ↦ ∫_0^ε (P_t^D h(x) − h(x)) du(t), x ∈ D,

is continuous. For any x_0 ∈ D choose r > 0 such that B(x_0, 2r) ⊂ D, and let B = B(x_0, r).
It is enough to show that

lim_{ε↓0} ∫_0^ε (P_t^D h(x) − h(x)) du(t) = 0

uniformly on B̄, the closure of B. For any x ∈ B, h(X_{t∧τ_B}) is a P_x-martingale. Therefore,

0 ≤ h(x) − P_t^D h(x) = E_x[h(X_{t∧τ_B})] − E_x[h(X_t), t < τ_D]
 = E_x[h(X_t), t < τ_B] + E_x[h(X_{τ_B}), τ_B ≤ t] − E_x[h(X_t), t < τ_B] − E_x[h(X_t), τ_B ≤ t < τ_D]
 = E_x[h(X_{τ_B}), τ_B ≤ t] − E_x[h(X_t), τ_B ≤ t < τ_D]
 ≤ E_x[h(X_{τ_B}), τ_B ≤ t] ≤ M P_x(τ_B ≤ t),   (7.4)

where M is a constant such that h(y) ≤ M for all y ∈ B̄. It is a standard fact that there
exists a constant c > 0 such that for every x ∈ B it holds that P_x(τ_B ≤ t) ≤ ct, for all t > 0.
Therefore, 0 ≤ h(x) − P_t^D h(x) ≤ Mct, for all x ∈ B and all t > 0. It follows that for every
x ∈ B,

|∫_0^ε (P_t^D h − h)(x) du(t)| ≤ Mc |∫_0^ε t du(t)|.

By use of (3.14) we get that

lim_{ε↓0} ∫_0^ε (P_t^D h(x) − h(x)) du(t) = 0

uniformly on B̄. The proof is now complete. 2
Proposition 7.5 If s is an excessive function with respect to X^D, then

s(x) = V^D g(x), x ∈ D,

where g is the excessive function for Y^D given by the formula

g(x) = u(∞) s(x) + ∫_0^∞ (P_t^D s(x) − s(x)) du(t)   (7.5)
 = ψ(0) s(x) + ∫_0^∞ (s(x) − P_t^D s(x)) dν(t).   (7.6)
Proof. We know that the result is true when s is the potential of a measure. Let s be an
arbitrary excessive function of X^D. By the Riesz decomposition theorem (see, for instance,
Chapter 6 of [13]), s = G_D m + h, where m is a measure on D, and h is a nonnegative
harmonic function for X^D. By linearity, it suffices to prove the result for nonnegative
harmonic functions.

In the rest of the proof we assume therefore that s is a nonnegative harmonic function
for X^D. Define the function g by formula (7.5). We have to prove that g is excessive for Y^D
and s = V^D g. By Lemma 7.4, we know that g is continuous.

Further, since s is excessive, there exists a sequence of nonnegative functions f_n such that
s_n := G_D f_n increases to s. Then also P_t^D s_n ↑ P_t^D s, implying s_n − P_t^D s_n → s − P_t^D s. If

g_n = u(∞) s_n + ∫_0^∞ (s_n − P_t^D s_n)(−du(t)),

then we know that s_n = V^D g_n and g_n is excessive for Y^D. By use of Fatou's lemma we get
that

g = u(∞) s + ∫_0^∞ (s − P_t^D s)(−du(t))
 = lim_n u(∞) s_n + ∫_0^∞ lim_n (s_n − P_t^D s_n)(−du(t))
 ≤ lim inf_n (u(∞) s_n + ∫_0^∞ (s_n − P_t^D s_n)(−du(t)))
 = lim inf_n g_n.

This implies (again by Fatou's lemma) that

V^D g ≤ V^D(lim inf_n g_n) ≤ lim inf_n V^D g_n = lim inf_n s_n = s.   (7.7)

For any nonnegative function f, put G_1^D f(x) := ∫_0^∞ e^{−t} P_t^D f(x) dt. Using the excessivity
of s, we can easily check that s_1 := s − G_1^D s is an excessive function of X^D. Using an argument
similar to that of the proof of Proposition 7.2, we can show that G_D s is not identically infinite.
Thus by the resolvent equation we get G_D s_1 = G_D s − G_D G_1^D s = G_1^D s, or equivalently,

s(x) = s_1(x) + G_1^D s(x) = s_1(x) + G_D s_1(x), x ∈ D.

By use of formula (7.2) for the potential G_D s_1 and the easy fact that V^D and G_1^D commute,
we have

G_1^D s = G_D s_1 = V^D (u(∞) G_D s_1 + ∫_0^∞ (P_t^D G_D s_1 − G_D s_1) du(t))
 = V^D (u(∞) G_1^D s + ∫_0^∞ (P_t^D G_1^D s − G_1^D s) du(t))
 = G_1^D V^D (u(∞) s + ∫_0^∞ (P_t^D s − s) du(t)).
By the uniqueness principle it follows that

s = V^D (u(∞) s + ∫_0^∞ (P_t^D s − s) du(t)) = V^D g a.e. in D.

Together with (7.7), this implies that V^D g = V^D(lim inf_n g_n) a.e. From Remark 7.3 it follows
that

g = lim inf_n g_n a.e.   (7.8)
By Fatou's lemma and the Y^D-excessiveness of g_n we get that

λ U_λ^D g = λ U_λ^D (lim inf_n g_n) ≤ lim inf_n λ U_λ^D g_n ≤ lim inf_n g_n = g a.e.

We want to show that, in fact, λ U_λ^D g ≤ g everywhere, i.e., that g is supermedian. In order
to do this we define ĝ := sup_{n∈N} n U_n^D g. Then ĝ ≤ g a.e.; hence, by the absolute continuity
of U_n^D, n U_n^D ĝ ≤ n U_n^D g ≤ ĝ everywhere. This implies that λ ↦ λ U_λ^D ĝ is increasing
(see, e.g., Lemma 3.6 in [11]), hence ĝ is supermedian. The same argument gives that
n ↦ n U_n^D g is increasing a.e. Define

g̃ := sup_{λ>0} λ U_λ^D ĝ = sup_n n U_n^D ĝ.

Then g̃ is excessive, and therefore lower semicontinuous. Moreover,

g̃ = sup_n n U_n^D ĝ ≤ ĝ ≤ g a.e.

Combining this with the continuity of g and the lower semicontinuity of g̃, we get that
g̃ ≤ g everywhere. Further, for x ∈ D such that ĝ(x) < ∞, we have by the monotone
convergence theorem and the resolvent equation

λ U_λ^D ĝ(x) = lim_{n→∞} λ U_λ^D (n U_n^D) g(x) = lim_{n→∞} [nλ/(n − λ)] (U_λ^D g(x) − U_n^D g(x)) = λ U_λ^D g(x).

Since ĝ < ∞ a.e., we have

λ U_λ^D ĝ = λ U_λ^D g a.e.

Together with the definition of g̃ this implies that

g̃ = ĝ a.e.   (7.9)

By the continuity of g and the fact that the measures n U_n^D(x, ·) converge weakly to the point
mass at x, we have that for every x ∈ D

g(x) ≤ lim inf_{n→∞} n U_n^D g(x) ≤ ĝ(x).

Hence, by using (7.9), it follows that g ≤ g̃ a.e. Since we already proved that g̃ ≤ g everywhere,
it holds that g = g̃ a.e. By the absolute continuity of U_λ^D, g ≥ g̃ ≥ λ U_λ^D g̃ = λ U_λ^D g
everywhere, i.e., g is supermedian.

Since it is well known (see, e.g., [25]) that a supermedian function which is lower semicontinuous
is in fact excessive, this proves that g is excessive for Y^D. By Proposition 7.2 we
then have that V^D g ≤ s and V^D g is excessive for X^D. Moreover, V^D g = s a.e., and both
functions being excessive for X^D, they are equal everywhere.

It remains to notice that formula (7.6) follows immediately from (7.5) by noting that
u(∞) = ψ(0) and du(t) = −dν(t). 2
Propositions 7.1 and 7.5 can be combined in the following theorem containing additional
information on harmonic functions.
Theorem 7.6 If s is excessive with respect to X^D, then there is a function g excessive with
respect to Y^D such that s = V^D g. The function g is given by formula (7.2). Furthermore,
if s is harmonic with respect to X^D, then g is harmonic with respect to Y^D.

Conversely, if g is excessive with respect to Y^D, then the function s defined by s = V^D g
is excessive with respect to X^D. If, moreover, g is harmonic with respect to Y^D, then s is
harmonic with respect to X^D.

Every nonnegative harmonic function for Y^D is continuous.
Proof. It remains to show the statements about harmonic functions. First note that every
excessive function g for Y^D admits the Riesz decomposition g = U^D m + h, where m is a
Borel measure on D and h is a harmonic function of Y^D (see Chapter 6 of [13] and note that
the assumptions on pp. 265–266 are satisfied). We have already mentioned that excessive
functions of X^D admit such a decomposition. Since excessive functions of X^D and Y^D are in
1-1 correspondence, and since potentials of measures of X^D and Y^D are in 1-1 correspondence,
the same must hold for nonnegative harmonic functions of X^D and Y^D.

The continuity of nonnegative harmonic functions for Y^D follows from Lemma 7.4 and
Proposition 7.5. 2
It follows from the theorem above that V^D is a bijection from S(Y^D) to S(X^D), and is
also a bijection from H^+(Y^D) to H^+(X^D). We are going to use (V^D)^{−1} to denote the inverse
map, and so we have for any s ∈ S(X^D),

(V^D)^{−1} s(x) = u(∞) s(x) + ∫_0^∞ (P_t^D s(x) − s(x)) du(t)   (7.10)
 = ψ(0) s(x) + ∫_0^∞ (s(x) − P_t^D s(x)) dν(t).

Although the map V^D is order preserving, we do not know whether the inverse map (V^D)^{−1}
is order preserving on S(X^D). However, from the formula above we can see that (V^D)^{−1} is
order preserving on H^+(X^D).
By combining Proposition 7.1 and Theorem 7.6 we get the following relation which we
are going to use later.
Proposition 7.7 For any x, y ∈ D, we have

U^D(x, y) = (V^D)^{−1}(G_D(·, y))(x).
7.3 Harnack inequality for subordinate process
In this subsection we are going to prove the Harnack inequality for positive harmonic functions
of the process Y^D under the assumption that D is a bounded domain such that (P_t^D)
is intrinsically ultracontractive. The proof we offer uses the intrinsic ultracontractivity in an
essential way, and differs from the existing proofs of Harnack inequalities in other settings.

We first recall that since (P_t^D : t ≥ 0) is intrinsically ultracontractive, by Theorem 4.2.5 of
[27] there exists T > 0 such that

(1/2) e^{−λ_0 t} ϕ_0(x) ϕ_0(y) ≤ p_D(t, x, y) ≤ (3/2) e^{−λ_0 t} ϕ_0(x) ϕ_0(y), t ≥ T, x, y ∈ D.   (7.11)
Lemma 7.8 Suppose that D is a bounded domain such that (P_t^D) is intrinsically ultracontractive.
There exists a constant C > 0 such that

V^D s ≤ C s, ∀s ∈ S(Y^D).   (7.12)

Proof. Let T be the constant from (7.11). For any nonnegative function f,

U^D f(x) = ∫_0^T P_t^D f(x) u(t) dt + ∫_T^∞ P_t^D f(x) u(t) dt.

We obviously have

∫_0^T P_t^D f(x) u(t) dt ≥ u(T) ∫_0^T P_t^D f(x) dt.

By using (7.11) we see that

∫_T^∞ P_t^D f(x) u(t) dt ≥ ((1/2) ∫_T^∞ e^{−λ_0 t} u(t) dt) ∫_D ϕ_0(x) ϕ_0(y) f(y) dy
 ≥ (u(T)/2) (∫_T^∞ e^{−λ_0 t} dt) ∫_D ϕ_0(x) ϕ_0(y) f(y) dy,

and

∫_T^∞ P_t^D f(x) dt ≤ ((3/2) ∫_T^∞ e^{−λ_0 t} dt) ∫_D ϕ_0(x) ϕ_0(y) f(y) dy.

The last two displays imply that

∫_T^∞ P_t^D f(x) u(t) dt ≥ (u(T)/3) ∫_T^∞ P_t^D f(x) dt.

Therefore,

U^D f(x) ≥ u(T) ∫_0^T P_t^D f(x) dt + (u(T)/3) ∫_T^∞ P_t^D f(x) dt ≥ (u(T)/3) G_D f(x) = c G_D f(x)

with c = u(T)/3. From G_D f(x) = V^D U^D f(x), we obtain V^D U^D f(x) ≤ c^{−1} U^D f(x). Since
every g ∈ S(Y^D) is an increasing limit of potentials U^D f, the claim follows with C = 3/u(T). 2
Lemma 7.9 Suppose D is a bounded domain such that (P_t^D) is intrinsically ultracontractive.
If s ∈ S(Y^D), then for any x ∈ D,

s(x) ≥ (1/(2C)) e^{−λ_0 T} (1/ψ(λ_0)) ϕ_0(x) ∫_D s(y) ϕ_0(y) dy,

where T is the constant in (7.11) and C is the constant in (7.12).
Proof. From the lemma above we know that, for every x ∈ D, V^D s(x) ≤ C s(x), where C
is the constant in (7.12). Since V^D s is in S(X^D), we have

V^D s(x) ≥ ∫_D p_D(T, x, y) V^D s(y) dy ≥ (1/2) e^{−λ_0 T} ϕ_0(x) ∫_D ϕ_0(y) V^D s(y) dy.

Hence

C s(x) ≥ V^D s(x) ≥ (1/2) e^{−λ_0 T} ϕ_0(x) ∫_D ϕ_0(y) V^D s(y) dy
 = (1/2) e^{−λ_0 T} ϕ_0(x) ∫_D s(y) V^D ϕ_0(y) dy
 = (1/2) e^{−λ_0 T} (1/ψ(λ_0)) ϕ_0(x) ∫_D s(y) ϕ_0(y) dy,

where the last line follows from

V^D ϕ_0(y) = ∫_0^∞ P_t^D ϕ_0(y) V(dt) = ∫_0^∞ e^{−λ_0 t} ϕ_0(y) V(dt) = ϕ_0(y) LV(λ_0) = ϕ_0(y)/ψ(λ_0). 2
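The last computation rests on the fact that LV(λ) = ∫_0^∞ e^{−λt} V(dt) = 1/ψ(λ). A quick numeric check (an illustration only, for the self-conjugate pair φ(λ) = ψ(λ) = λ^{1/2}, so that b = 0 and v(t) = (πt)^{−1/2}):

```python
import math

def laplace_v(lam, n=20000, s_max=40.0):
    # LV(lam) = int_0^inf e^{-lam t} v(t) dt with v(t) = (pi t)^{-1/2};
    # the substitution t = s^2 removes the singularity at t = 0, leaving
    # int_0^inf (2/sqrt(pi)) e^{-lam s^2} ds
    h = s_max / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += 2.0 / math.sqrt(math.pi) * math.exp(-lam * s * s) * h
    return total

for lam in (0.5, 1.0, 2.0):
    assert abs(laplace_v(lam) - 1.0 / math.sqrt(lam)) < 1e-4  # = 1/psi(lam)
```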
In particular, it follows from the lemma that if s ∈ S(Y^D) is not identically infinite, then
∫_D ϕ_0(y) s(y) dy < ∞.
Theorem 7.10 Suppose D is a bounded domain such that (P_t^D) is intrinsically ultracontractive.
For any compact subset K of D, there exists a constant C depending on K and D such that
for any h ∈ H^+(Y^D),

sup_{x∈K} h(x) ≤ C inf_{x∈K} h(x).
Proof. If the conclusion of the theorem were not true, then for any n ≥ 1 there would exist
h_n ∈ H^+(Y^D) such that

sup_{x∈K} h_n(x) ≥ n 2^n inf_{x∈K} h_n(x).   (7.13)

By the lemma above, we may assume without loss of generality that

∫_D h_n(y) ϕ_0(y) dy = 1, n ≥ 1.

Define

h(x) = Σ_{n=1}^∞ 2^{−n} h_n(x), x ∈ D.

Then

∫_D h(y) ϕ_0(y) dy = 1,

and so h ∈ H^+(Y^D). By (7.13) and the lemma above, for every n ≥ 1 there exists x_n ∈ K
such that h_n(x_n) ≥ n 2^n c_1, where

c_1 = (1/(2C)) e^{−λ_0 T} (1/ψ(λ_0)) inf_{x∈K} ϕ_0(x),

with T as in (7.11) and C as in (7.12). Therefore we have h(x_n) ≥ n c_1. Since K is compact,
there is a convergent subsequence of (x_n). Let x_0 be the limit of this convergent subsequence.
Theorem 7.6 implies that h is continuous, and so we would have h(x_0) = ∞. This is a
contradiction, so the conclusion of the theorem is valid. 2
7.4 Martin boundary of subordinate process
In this subsection we assume that D is a bounded Lipschitz domain in Rd where d ≥ 3. Fix
a point x_0 ∈ D and set

M_D(x, y) = G_D(x, y)/G_D(x_0, y), x, y ∈ D.

It is well known that the limit lim_{D∋y→z} M_D(x, y) exists for every x ∈ D and z ∈ ∂D. The
function M_D(x, z) := lim_{D∋y→z} M_D(x, y) on D × ∂D defined above is called the Martin
kernel of X^D based at x_0. The Martin boundary and minimal Martin boundary of X^D both
coincide with the Euclidean boundary ∂D. For these and other results about the Martin
boundary of X^D one can see [2]. One of the goals of this section is to determine the Martin
boundary of Y^D.
By using the Harnack inequality, one can easily show (see, for instance, pages 17–18
of [29]) that if (h_j) is a sequence of functions in H^+(X^D) converging pointwise to a function
h ∈ H^+(X^D), then (h_j) is locally uniformly bounded in D and equicontinuous at every point
of D. Using this, one can get that if (h_j) is a sequence of functions in H^+(X^D) converging
pointwise to a function h ∈ H^+(X^D), then (h_j) converges to h uniformly on compact subsets
of D. We are going to use this fact below.
Lemma 7.11 Suppose that x_0 ∈ D is a fixed point.

(a) Let (x_j : j ≥ 1) be a sequence of points in D converging to x ∈ D and let (h_j) be
a sequence of functions in H^+(X^D) with h_j(x_0) = 1 for all j. If the sequence (h_j)
converges to a function h ∈ H^+(X^D), then for each t > 0

lim_{j→∞} P_t^D h_j(x_j) = P_t^D h(x).

(b) If (y_j : j ≥ 1) is a sequence of points in D such that lim_j y_j = z ∈ ∂D, then for each
t > 0 and for each x ∈ D

lim_{j→∞} P_t^D (G_D(·, y_j)/G_D(x_0, y_j))(x) = P_t^D (M_D(·, z))(x).
Proof. (a) For each j ∈ N, since h_j(x_0) = 1, there exists a probability measure µ_j on ∂D
such that

h_j(x) = ∫_{∂D} M_D(x, z) µ_j(dz), x ∈ D.

Similarly, there exists a probability measure µ on ∂D such that

h(x) = ∫_{∂D} M_D(x, z) µ(dz), x ∈ D.

Let D_0 be a relatively compact open subset of D such that x_0 ∈ D_0, and also x, x_j ∈ D_0.
Then

|P_t^D h_j(x_j) − P_t^D h(x)| = |∫_D p_D(t, x_j, y) h_j(y) dy − ∫_D p_D(t, x, y) h(y) dy|
 ≤ |∫_{D_0} p_D(t, x_j, y) h_j(y) dy − ∫_{D_0} p_D(t, x, y) h(y) dy|
  + ∫_{D\D_0} p_D(t, x_j, y) h_j(y) dy + ∫_{D\D_0} p_D(t, x, y) h(y) dy.
Recall (see Section 6.2 of [26], for instance) that there exists a constant c > 0 such that

G_D(x, y) G_D(y, w) / G_D(x, w) ≤ c (1/|x − y|^{d−2} + 1/|y − w|^{d−2}), x, y, w ∈ D.   (7.14)

From this and the definition of the Martin kernel we immediately get

G_D(x_0, y) M_D(y, z) ≤ c (1/|x_0 − y|^{d−2} + 1/|y − z|^{d−2}), y ∈ D, z ∈ ∂D.   (7.15)

Recall (see [27], p. 131, Theorem 4.6.11) that there is a constant c > 0 such that

ϕ_0(x_0) ϕ_0(y) ≤ c G_D(x_0, y), y ∈ D.

By the boundedness of ϕ_0 we have that ϕ_0(u) ≤ c_1 ϕ_0(x_0) for every u ∈ D. Hence, from the
last display it follows that

ϕ_0(u) ϕ_0(y) ≤ c G_D(x_0, y), u, y ∈ D,   (7.16)

with a possibly different constant c > 0. Now using (7.1), (7.15) and (7.16) we get that for
any u ∈ D,
∫_{D\D_0} p_D(t, u, y) h(y) dy ≤ c_t ϕ_0(u) ∫_{D\D_0} ϕ_0(y) h(y) dy
 = c_t ϕ_0(u) ∫_{D\D_0} dy ϕ_0(y) ∫_{∂D} M_D(y, z) µ(dz)
 = c_t ϕ_0(u) ∫_{∂D} µ(dz) ∫_{D\D_0} ϕ_0(y) M_D(y, z) dy
 ≤ c c_t ∫_{∂D} µ(dz) ∫_{D\D_0} G_D(x_0, y) M_D(y, z) dy
 ≤ c c_t ∫_{∂D} µ(dz) ∫_{D\D_0} (1/|y − z|^{d−2} + 1/|x_0 − y|^{d−2}) dy
 ≤ c c_t ∫_{∂D} µ(dz) sup_{z∈∂D} ∫_{D\D_0} (1/|y − z|^{d−2} + 1/|x_0 − y|^{d−2}) dy
 = c c_t sup_{z∈∂D} ∫_{D\D_0} (1/|y − z|^{d−2} + 1/|x_0 − y|^{d−2}) dy.
The same estimate holds with h_j instead of h. For a given ε > 0, choose D_0 large enough so
that the last line in the display above is less than ε. Put A = sup_{D_0} h. Take j_0 ∈ N large
enough so that for all j ≥ j_0 we have

|p_D(t, x_j, y) − p_D(t, x, y)| ≤ ε and |h_j(y) − h(y)| < ε

for all y ∈ D_0. Then

|∫_{D_0} p_D(t, x_j, y) h_j(y) dy − ∫_{D_0} p_D(t, x, y) h(y) dy|
 ≤ ∫_{D_0} p_D(t, x_j, y) |h_j(y) − h(y)| dy + ∫_{D_0} |p_D(t, x_j, y) − p_D(t, x, y)| h(y) dy
 ≤ ε + A|D_0| ε,

where |D_0| stands for the Lebesgue measure of D_0. This proves the first part.
(b) We proceed similarly as in the proof of the first part. The only difference is that we use
(7.14) to get the following estimate:

∫_{D\D_0} p_D(t, x, y) [G_D(y, y_j)/G_D(x_0, y_j)] dy
 ≤ c_t ϕ_0(x) ∫_{D\D_0} ϕ_0(y) [G_D(y, y_j)/G_D(x_0, y_j)] dy
 ≤ c c_t ∫_{D\D_0} [G_D(x_0, y) G_D(y, y_j)/G_D(x_0, y_j)] dy
 ≤ c c_t ∫_{D\D_0} (|x_0 − y|^{2−d} + |y − y_j|^{2−d}) dy
 ≤ c c_t sup_j ∫_{D\D_0} (|x_0 − y|^{2−d} + |y − y_j|^{2−d}) dy.

The corresponding estimate for M_D(·, z) is given in part (a) of the lemma. For a given ε > 0,
find D_0 large enough so that the last line in the display above is less than ε. Then find
j_0 ∈ N such that for all j ≥ j_0,

|G_D(y, y_j)/G_D(x_0, y_j) − M_D(y, z)| < ε, y ∈ D_0.

Then

∫_{D_0} p_D(t, x, y) |G_D(y, y_j)/G_D(x_0, y_j) − M_D(y, z)| dy < ε for all j ≥ j_0.

This proves the second part. 2
Theorem 7.12 Suppose that D ⊂ R^d, d ≥ 3, is a bounded Lipschitz domain and let x_0 ∈ D
be a fixed point.

(a) If (x_j) is a sequence of points in D converging to x ∈ D and (h_j) is a sequence of
functions in H^+(X^D) converging to a function h ∈ H^+(X^D), then

lim_j (V^D)^{−1} h_j(x_j) = (V^D)^{−1} h(x).

(b) If (y_j) is a sequence of points in D converging to z ∈ ∂D, then for every x ∈ D,

lim_j (V^D)^{−1}(G_D(·, y_j)/G_D(x_0, y_j))(x) = lim_j (V^D)^{−1}(G_D(·, y_j))(x)/G_D(x_0, y_j) = (V^D)^{−1} M_D(·, z)(x).
Proof. (a) Normalizing by h_j(x_0) if necessary, we may assume without loss of generality
that h_j(x_0) = 1 for all j ≥ 1. Let ε > 0. We have

|(V^D)^{−1} h_j(x_j) − (V^D)^{−1} h(x)|
 = |∫_0^∞ (P_t^D h_j(x_j) − h_j(x_j)) du(t) − ∫_0^∞ (P_t^D h(x) − h(x)) du(t) + u(∞)(h_j(x_j) − h(x))|
 ≤ |∫_0^ε (P_t^D h_j(x_j) − h_j(x_j)) du(t)| + |∫_0^ε (P_t^D h(x) − h(x)) du(t)|
  + |∫_ε^∞ (P_t^D h_j(x_j) − h_j(x_j)) du(t) − ∫_ε^∞ (P_t^D h(x) − h(x)) du(t)| + u(∞) |h_j(x_j) − h(x)|.

The last term clearly converges to zero as j → ∞.
For any x ∈ D choose r > 0 such that B(x, 2r) ⊂ D and put B = B(x, r). Without loss
of generality we may and do assume that x_j ∈ B for all j ≥ 1. Since h and h_j are continuous
in D and (h_j) is locally uniformly bounded in D, there is a constant M > 0 such that h and
h_j, j = 1, 2, . . . , are all bounded from above by M on B. Now from the proof of Lemma 7.4,
more precisely from relation (7.4), it follows that there is a constant c_1 > 0 such that

0 ≤ h(y) − P_t^D h(y) ≤ c_1 t, y ∈ B,

and

0 ≤ h_j(y) − P_t^D h_j(y) ≤ c_1 t, y ∈ B, j ≥ 1.

Therefore we have

|∫_0^ε (P_t^D h − h)(y) du(t)| ≤ c_1 |∫_0^ε t du(t)|, y ∈ B,

and

|∫_0^ε (P_t^D h_j − h_j)(y) du(t)| ≤ c_1 |∫_0^ε t du(t)|, y ∈ B, j ≥ 1.

Using (3.14) we get that

lim_{ε↓0} ∫_0^ε (P_t^D h(x) − h(x)) du(t) = 0

and

lim_{ε↓0} ∫_0^ε (P_t^D h_j(x_j) − h_j(x_j)) du(t) = 0.
Further,

|∫_ε^∞ (P_t^D h_j(x_j) − h_j(x_j)) du(t) − ∫_ε^∞ (P_t^D h(x) − h(x)) du(t)|
 ≤ ∫_ε^∞ (|h_j(x_j) − h(x_j)| + |h(x_j) − h(x)|) (−du(t)) + ∫_ε^∞ |P_t^D h_j(x_j) − P_t^D h(x)| (−du(t)).

Since |h_j(x_j) − h(x_j)| + |h(x_j) − h(x)| ≤ 2M and |P_t^D h_j(x_j) − P_t^D h(x)| ≤ M for all j ≥ 1
and all x ∈ B, we can apply Lemma 7.11(a) and the dominated convergence theorem to get

lim_{j→∞} ∫_ε^∞ (|h_j(x_j) − h(x_j)| + |h(x_j) − h(x)|) (−du(t)) = 0

and

lim_{j→∞} ∫_ε^∞ |P_t^D h_j(x_j) − P_t^D h(x)| (−du(t)) = 0.

The proof of (a) is now complete.

(b) The proof of (b) is similar to that of (a); the only difference is that we use Lemma 7.11(b)
in this case. We omit the details. 2
Let us define the function $K^D_Y(x,z):=(V^D)^{-1}M_D(\cdot,z)(x)$ on $D\times\partial D$. For each fixed $z\in\partial D$, $K^D_Y(\cdot,z)\in H^+(Y^D)$. By the first part of Theorem 7.12, we know that $K^D_Y(x,z)$ is continuous on $D\times\partial D$. If $(y_j)$ is a sequence of points in $D$ converging to $z\in\partial D$, then from Theorem 7.12(b) we get that
$$K^D_Y(x,z)=\lim_{j\to\infty}(V^D)^{-1}\Big(\frac{G_D(\cdot,y_j)}{G_D(x_0,y_j)}\Big)(x)=\lim_{j\to\infty}\frac{(V^D)^{-1}(G_D(\cdot,y_j))(x)}{G_D(x_0,y_j)}=\lim_{j\to\infty}\frac{U_D(x,y_j)}{G_D(x_0,y_j)}\,,\qquad(7.17)$$
where the last equality follows from Proposition 7.7. In particular, the limit
$$\lim_{j\to\infty}\frac{U_D(x_0,y_j)}{G_D(x_0,y_j)}=K^D_Y(x_0,z)\qquad(7.18)$$
exists.
Now we define a function $M^D_Y$ on $D\times\partial D$ by
$$M^D_Y(x,z):=\frac{K^D_Y(x,z)}{K^D_Y(x_0,z)}\,,\qquad x\in D,\ z\in\partial D.\qquad(7.19)$$
For each $z\in\partial D$, $M^D_Y(\cdot,z)\in H^+(Y^D)$. Moreover, $M^D_Y$ is jointly continuous on $D\times\partial D$. From the definition above and (7.17) we can easily see that
$$\lim_{D\ni y\to z}\frac{U_D(x,y)}{U_D(x_0,y)}=M^D_Y(x,z)\,,\qquad x\in D,\ z\in\partial D.\qquad(7.20)$$
Theorem 7.13 Let $D\subset\mathbb R^d$, $d\ge3$, be a bounded Lipschitz domain. The Martin boundary and the minimal Martin boundary of $Y^D$ both coincide with the Euclidean boundary $\partial D$, and the Martin kernel based at $x_0$ is given by the function $M^D_Y$.
Proof. The fact that $M^D_Y$ is the Martin kernel of $Y^D$ based at $x_0$ was proved in the paragraph above. It follows from Theorem 7.6 that when $z_1$ and $z_2$ are two distinct points on $\partial D$, the functions $M^D_Y(\cdot,z_1)$ and $M^D_Y(\cdot,z_2)$ are not identical. Therefore the Martin boundary of $Y^D$ coincides with the Euclidean boundary $\partial D$. Since $M_D(\cdot,z)\in H^+(X^D)$ is minimal, by the order-preserving property of $(V^D)^{-1}$ we know that $M^D_Y(\cdot,z)\in H^+(Y^D)$ is also minimal. Therefore the minimal Martin boundary of $Y^D$ also coincides with the Euclidean boundary $\partial D$. □
It follows from Theorem 7.13 and the general theory of Martin boundaries that for any $g\in H^+(Y^D)$ there exists a finite measure $n$ on $\partial D$ such that
$$g(x)=\int_{\partial D}M^D_Y(x,z)\,n(dz)\,,\qquad x\in D.$$
The measure $n$ is sometimes called the Martin measure of $g$. The following result gives the relation between the Martin measure of $h\in H^+(X^D)$ and the Martin measure of $(V^D)^{-1}h\in H^+(Y^D)$.
Proposition 7.14 If $h\in H^+(X^D)$ has the representation
$$h(x)=\int_{\partial D}M_D(x,z)\,m(dz)\,,\qquad x\in D,$$
then
$$(V^D)^{-1}h(x)=\int_{\partial D}M^D_Y(x,z)\,n(dz)\,,\qquad x\in D,$$
with $n(dz)=K^D_Y(x_0,z)\,m(dz)$.
Proof. By assumption we have
$$h(x)=\int_{\partial D}M_D(x,z)\,m(dz)\,,\qquad x\in D.$$
Using (7.5) and Fubini's theorem we get
$$(V^D)^{-1}h(x)=\int_{\partial D}(V^D)^{-1}(M_D(\cdot,z))(x)\,m(dz)=\int_{\partial D}M^D_Y(x,z)K^D_Y(x_0,z)\,m(dz)=\int_{\partial D}M^D_Y(x,z)\,n(dz)\,,$$
with $n(dz)=K^D_Y(x_0,z)\,m(dz)$. The proof is now complete. □
From Theorem 7.12 we know that $(V^D)^{-1}:H^+(X^D)\to H^+(Y^D)$ is continuous with respect to the topologies of locally uniform convergence. In the next result we show that $V^D:H^+(Y^D)\to H^+(X^D)$ is also continuous.
Proposition 7.15 Let $(g_j,\ j\ge1)$ be a sequence of functions in $H^+(Y^D)$ converging pointwise to the function $g\in H^+(Y^D)$. Then $\lim_{j\to\infty}V^Dg_j(x)=V^Dg(x)$ for every $x\in D$.

Proof. Without loss of generality we may assume that $g_j(x_0)=1$ for all $j\in\mathbb N$. Then there exist probability measures $n_j$, $j\in\mathbb N$, and $n$ on $\partial D$ such that $g_j(x)=\int_{\partial D}M^D_Y(x,z)\,n_j(dz)$, $j\in\mathbb N$, and $g(x)=\int_{\partial D}M^D_Y(x,z)\,n(dz)$. It is easy to show that the pointwise convergence of the harmonic functions $g_j$ implies that $n_j\to n$ weakly. Let $V^Dg_j(x)=\int_{\partial D}M_D(x,z)\,m_j(dz)$ and $V^Dg(x)=\int_{\partial D}M_D(x,z)\,m(dz)$. Then $n_j(dz)=K^D_Y(x_0,z)\,m_j(dz)$ and $n(dz)=K^D_Y(x_0,z)\,m(dz)$. Since the density $K^D_Y(x_0,\cdot)$ is bounded away from zero and bounded from above, it follows that $m_j\to m$ weakly. From this the claim of the proposition follows immediately. □
7.5 Boundary Harnack principle for the subordinate process
The boundary Harnack principle is a very important result in potential theory and harmonic analysis. For example, it is usually used to prove that, when $D$ is a bounded Lipschitz domain, both the Martin boundary and the minimal Martin boundary of $X^D$ coincide with the Euclidean boundary $\partial D$. We have already proved in Theorem 7.13 that for $Y^D$ both the Martin boundary and the minimal Martin boundary coincide with the Euclidean boundary $\partial D$. Using this, we are going to prove a boundary Harnack principle for functions in $H^+(Y^D)$.
In this subsection we will always assume that $D\subset\mathbb R^d$, $d\ge3$, is a bounded Lipschitz domain and $x_0\in D$ is fixed. Recall that $\varphi_0$ is the eigenfunction corresponding to the smallest eigenvalue $\lambda_0$ of the Dirichlet Laplacian $-\Delta|_D$. Also recall that the potential operator $V^D$ is not absolutely continuous in case $b>0$, and is given by
$$V^Df(x)=bf(x)+\int_0^\infty P^D_tf(x)v(t)\,dt\,.$$
Define
$$V^D(x,y)=\int_0^\infty p_D(t,x,y)v(t)\,dt\,.$$
Then
$$V^Df(x)=bf(x)+\int_DV^D(x,y)f(y)\,dy\,.$$
Proposition 7.16 Suppose that $D$ is a bounded Lipschitz domain. There exist $c>0$ and $k>d$ such that
$$U_D(x,y)\le c\,\frac{\varphi_0(x)\varphi_0(y)}{|x-y|^k}\,,\qquad V^D(x,y)\le c\,\frac{\varphi_0(x)\varphi_0(y)}{|x-y|^k}\,,$$
for all $x,y\in D$.
Proof. We give a proof of the second estimate; the proof of the first is exactly the same. Note that, similarly as in (2.13),
$$\lim_{t\to0}tv(t)=0\,.\qquad(7.21)$$
It follows from Theorem 4.6.9 of [27] that the density $p_D$ of the killed Brownian motion on $D$ satisfies the estimate
$$p_D(t,x,y)\le c_1t^{-k/2}\varphi_0(x)\varphi_0(y)e^{-\frac{|x-y|^2}{6t}}\,,\qquad t>0,\ x,y\in D,$$
for some $k>d$ and $c_1>0$. Recall that $v$ is a decreasing function. From (7.21) it follows that there exists a $t_0>0$ such that $v(t)\le\frac1t$ for $t\le t_0$. Consequently,
$$v(t)\le M+\frac1t\,,\qquad t>0,$$
for some $M>0$. Now we have
$$
\begin{aligned}
V^D(x,y)&=\int_0^\infty p_D(t,x,y)v(t)\,dt\le c_1\int_0^\infty t^{-k/2}\varphi_0(x)\varphi_0(y)e^{-\frac{|x-y|^2}{6t}}v(t)\,dt\\
&\le c_1\int_0^\infty t^{-k/2-1}\varphi_0(x)\varphi_0(y)e^{-\frac{|x-y|^2}{6t}}\,dt+Mc_1\int_0^\infty t^{-k/2}\varphi_0(x)\varphi_0(y)e^{-\frac{|x-y|^2}{6t}}\,dt\\
&\le c_2\,\frac{\varphi_0(x)\varphi_0(y)}{|x-y|^k}+Mc_3\,\frac{\varphi_0(x)\varphi_0(y)}{|x-y|^{k-2}}\le c_4\,\frac{\varphi_0(x)\varphi_0(y)}{|x-y|^k}\,.
\end{aligned}
$$
The proof is now finished. □
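The last step above uses the elementary identity $\int_0^\infty t^{-a-1}e^{-r^2/(6t)}\,dt=\Gamma(a)(6/r^2)^a$ (substitute $s=r^2/(6t)$), applied with $a=k/2$ and $a=k/2-1$. A minimal numerical sketch of this identity, using only the standard library (the truncation window and step size below are ad hoc choices for the illustration):

```python
import math

def time_integral(a, r, lo=-40.0, hi=40.0, n=20000):
    """Trapezoid rule for int_0^infty t^(-a-1) * exp(-r^2/(6t)) dt after the
    substitution t = e^x, under which the integrand decays at both ends."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        g = math.exp(-a * x) * math.exp(-(r * r / 6.0) * math.exp(-x))
        total += g / 2.0 if i in (0, n) else g
    return total * h

def closed_form(a, r):
    # Gamma(a) * (6 / r^2)^a, obtained from the substitution s = r^2/(6t)
    return math.gamma(a) * (6.0 / (r * r)) ** a
```

For instance, `time_integral(2.0, 1.0)` agrees with `closed_form(2.0, 1.0) = 36` to high accuracy.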
Lemma 7.17 Suppose that $D$ is a bounded Lipschitz domain and $W$ an open subset of $\mathbb R^d$ such that $W\cap\partial D$ is non-empty. If $h\in H^+(Y^D)$ satisfies
$$\lim_{x\to z}\frac{h(x)}{(V^D)^{-1}1(x)}=0\quad\text{for all }z\in W\cap\partial D,$$
then
$$\lim_{x\to z}V^Dh(x)=0\quad\text{for all }z\in W\cap\partial D\,.$$
Proof. Fix $z\in W\cap\partial D$. For any $\varepsilon>0$ there exists $\delta>0$ such that $h(x)\le\varepsilon(V^D)^{-1}1(x)$ for $x\in B(z,\delta)\cap D$. Thus we have
$$V^Dh(x)\le V^D(h\,1_{D\setminus B(z,\delta)})(x)+\varepsilon V^D(V^D)^{-1}1(x)=V^D(h\,1_{D\setminus B(z,\delta)})(x)+\varepsilon\,,\qquad x\in D.$$
For any $x\in B(z,\delta/2)\cap D$ we have
$$V^D(h\,1_{D\setminus B(z,\delta)})(x)=b\,h(x)1_{D\setminus B(z,\delta)}(x)+\int_{D\setminus B(z,\delta)}V^D(x,y)h(y)\,dy=\int_{D\setminus B(z,\delta)}V^D(x,y)h(y)\,dy\,,$$
since $1_{D\setminus B(z,\delta)}(x)=0$ for $x\in B(z,\delta/2)\cap D$. By Proposition 7.16 there exists $c>0$ such that for any $x\in B(z,\delta/2)\cap D$ (note that $|x-y|\ge\delta/2$ for $y\in D\setminus B(z,\delta)$),
$$\int_{D\setminus B(z,\delta)}V^D(x,y)h(y)\,dy\le c\varphi_0(x)\int_{D\setminus B(z,\delta)}\frac{\varphi_0(y)}{|x-y|^k}\,h(y)\,dy\le\frac{c\varphi_0(x)}{(\delta/2)^k}\int_D\varphi_0(y)h(y)\,dy\,.$$
Hence,
$$V^Dh(x)\le\frac{c\varphi_0(x)}{(\delta/2)^k}\int_D\varphi_0(y)h(y)\,dy+\varepsilon\,.$$
From Lemma 7.9 we know that $\int_D\varphi_0(y)h(y)\,dy<\infty$. The conclusion of the lemma now follows easily from the fact that $\lim_{x\to z}\varphi_0(x)=0$. □
Now we can prove the main result of this section: the boundary Harnack principle.
Theorem 7.18 Suppose that $D\subset\mathbb R^d$, $d\ge3$, is a bounded Lipschitz domain, $W$ an open subset of $\mathbb R^d$ such that $W\cap\partial D$ is non-empty, and $K$ a compact subset of $W$. There exists a constant $c>0$ such that for any two functions $h_1$ and $h_2$ in $H^+(Y^D)$ satisfying
$$\lim_{x\to z}\frac{h_i(x)}{(V^D)^{-1}1(x)}=0\,,\qquad z\in W\cap\partial D,\ i=1,2,$$
we have
$$\frac{h_1(x)}{h_2(x)}\le c\,\frac{h_1(y)}{h_2(y)}\,,\qquad x,y\in K\cap D.$$
Proof. It follows from Corollary 4.7 of [65] and Proposition 7.16 that there exist positive constants $c_1$ and $c_2$ such that
$$c_1\varphi_0(x)\varphi_0(y)\le U_D(x,y)\le c_2\,\frac{\varphi_0(x)\varphi_0(y)}{|x-y|^k}\,,\qquad x,y\in D,$$
where $k>d$ is given in Proposition 7.16. Therefore by (7.20) we get that there exist positive constants $c_3$ and $c_4$ such that
$$c_3\varphi_0(x)\le M^D_Y(x,z)\le c_4\varphi_0(x)\,,\qquad x\in K\cap D,\ z\in\partial D\setminus W.\qquad(7.22)$$
Suppose that $h_1$ and $h_2$ are two functions in $H^+(Y^D)$ such that
$$\lim_{x\to z}\frac{h_i(x)}{(V^D)^{-1}1(x)}=0\,,\qquad z\in W\cap\partial D,\ i=1,2.$$
Then by Lemma 7.17 we know that
$$\lim_{x\to z}V^Dh_i(x)=0\,,\qquad z\in W\cap\partial D,\ i=1,2.$$
Now by Corollary 8.1.6 of [53] we know that the Martin measures $m_1$ and $m_2$ of $V^Dh_1$ and $V^Dh_2$ are supported by $\partial D\setminus W$, and so we have
$$V^Dh_i(x)=\int_{\partial D\setminus W}M_D(x,z)\,m_i(dz)\,,\qquad x\in D,\ i=1,2.$$
Using Proposition 7.14 we get that
$$h_i(x)=\int_{\partial D\setminus W}M^D_Y(x,z)\,n_i(dz)\,,\qquad x\in D,\ i=1,2,$$
where $n_i(dz)=K^D_Y(x_0,z)\,m_i(dz)$, $i=1,2$. Now using (7.22) it follows that
$$c_3\varphi_0(x)\,n_i(\partial D\setminus W)\le h_i(x)\le c_4\varphi_0(x)\,n_i(\partial D\setminus W)\,,\qquad x\in K\cap D,\ i=1,2.$$
The conclusion of the theorem follows immediately. □
From the proof of Theorem 7.18 we can see that the following result is true.
Proposition 7.19 Suppose that $D\subset\mathbb R^d$, $d\ge3$, is a bounded Lipschitz domain and $W$ an open subset of $\mathbb R^d$ such that $W\cap\partial D$ is non-empty. If $h\in H^+(Y^D)$ satisfies
$$\lim_{x\to z}\frac{h(x)}{(V^D)^{-1}1(x)}=0\,,\qquad z\in W\cap\partial D,$$
then
$$\lim_{x\to z}h(x)=0\,,\qquad z\in W\cap\partial D.$$

Proof. From the proof of Theorem 7.18 we see that the Martin measure $n$ of $h$ is supported by $\partial D\setminus W$, and so we have
$$h(x)=\int_{\partial D\setminus W}M^D_Y(x,z)\,n(dz)\,,\qquad x\in D.$$
For any $z_0\in W\cap\partial D$, take $\delta>0$ small enough so that $B(z_0,\delta)\subset\overline{B(z_0,\delta)}\subset W$. Then by (7.22), applied with $K=\overline{B(z_0,\delta)}$, we get that
$$c_5\varphi_0(x)\le M^D_Y(x,z)\le c_6\varphi_0(x)\,,\qquad x\in B(z_0,\delta)\cap D,\ z\in\partial D\setminus W\,,$$
for some positive constants $c_5$ and $c_6$. Thus
$$h(x)\le c_6\varphi_0(x)\,n(\partial D\setminus W)\,,\qquad x\in B(z_0,\delta)\cap D,$$
from which the assertion of the proposition follows immediately. □
7.6 Sharp bounds for the Green function and the jumping function of the subordinate process
In this subsection we are going to derive sharp bounds for the Green function and the jumping function of the process $Y^D$. The method uses the upper and lower bounds for the transition densities $p_D(t,x,y)$ of the killed Brownian motion. The lower bound that we need is available only in the case when $D$ is a bounded $C^{1,1}$ domain in $\mathbb R^d$. Therefore, throughout this subsection we assume that $D\subset\mathbb R^d$ is a bounded $C^{1,1}$ domain.
Recall that a bounded domain $D\subset\mathbb R^d$, $d\ge2$, is called a bounded $C^{1,1}$ domain if there exist positive constants $r_0$ and $M$ with the following property: for every $z\in\partial D$ and every $r\in(0,r_0]$, there exist a function $\Gamma_z:\mathbb R^{d-1}\to\mathbb R$ satisfying the condition $|\nabla\Gamma_z(\xi)-\nabla\Gamma_z(\eta)|\le M|\xi-\eta|$ for all $\xi,\eta\in\mathbb R^{d-1}$, and an orthonormal coordinate system $CS_z$, such that if $y=(y_1,\dots,y_d)$ in $CS_z$ coordinates, then
$$B(z,r)\cap D=B(z,r)\cap\{y:y_d>\Gamma_z(y_1,\dots,y_{d-1})\}\,.$$
When we speak of a bounded $C^{1,1}$ domain in $\mathbb R$ we mean a finite open interval.
For any $x\in D$, let $\rho(x)$ denote the distance between $x$ and $\partial D$. We will use the following two bounds for the transition densities $p_D(t,x,y)$: there exists a positive constant $c_1$ such that for all $t>0$ and any $x,y\in D$,
$$p_D(t,x,y)\le c_1t^{-d/2-1}\rho(x)\rho(y)\exp\Big(-\frac{|x-y|^2}{6t}\Big).\qquad(7.23)$$
This result (valid also for Lipschitz domains) can be found in [27] (see also [65]). The lower bound was obtained in [74] and [64] and states that for any $A>0$, there exist positive constants $c_2$ and $c$ such that for any $t\in(0,A]$ and any $x,y\in D$,
$$p_D(t,x,y)\ge c_2\Big(\frac{\rho(x)\rho(y)}{t}\wedge1\Big)t^{-d/2}\exp\Big(-\frac{c|x-y|^2}{t}\Big).\qquad(7.24)$$
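On the unit interval, which is the $d=1$ case in the degenerate sense above, $p_D$ has the explicit eigenfunction expansion $p_D(t,x,y)=\sum_{n\ge1}2\sin(n\pi x)\sin(n\pi y)e^{-n^2\pi^2t}$ (for the generator $\Delta$, matching our normalization of $X$), so a bound of the shape (7.23) can be probed numerically. A small sketch; the constant $c_1=10$ is an ad hoc choice for this illustration, not the constant from [27]:

```python
import math

def p_D(t, x, y, nmax=200):
    """Transition density of Brownian motion (generator Delta) killed on
    exiting (0, 1), via the sine eigenfunction expansion."""
    return sum(2.0 * math.sin(n * math.pi * x) * math.sin(n * math.pi * y)
               * math.exp(-n * n * math.pi * math.pi * t)
               for n in range(1, nmax + 1))

def rho(x):
    # distance from x to the boundary {0, 1}
    return min(x, 1.0 - x)

# probe the shape of (7.23) with d = 1 and an ad hoc constant c1 = 10
c1 = 10.0
ok = all(
    p_D(t, x, y) <= c1 * t ** -1.5 * rho(x) * rho(y)
                    * math.exp(-(x - y) ** 2 / (6.0 * t))
    for t in (0.01, 0.05, 0.2, 1.0)
    for x in (0.05, 0.25, 0.5, 0.9)
    for y in (0.05, 0.25, 0.5, 0.9)
)
```

On this grid `ok` comes out `True`; of course this is a spot check, not a proof.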
Recall that the Green function of $Y^D$ is given by
$$U_D(x,y)=\int_0^\infty p_D(t,x,y)u(t)\,dt\,,$$
where u is the potential density of the subordinator S. We assume that the Laplace exponent
of S is a special Bernstein function, so that u is decreasing. Instead of assuming conditions
on the asymptotic behavior of φ(λ) as λ → ∞, we will directly assume the asymptotic
behavior of u(t) as t → 0+.
Assumption A: (i) There exist constants $c_0>0$ and $\beta\in[0,1]$, and an increasing, continuous function $\ell:(0,\infty)\to(0,\infty)$ which is slowly varying at $\infty$, such that
$$u(t)\sim\frac{c_0}{t^\beta\ell(1/t)}\,,\qquad t\to0+.\qquad(7.25)$$
(ii) In the case when $d=1$ or $d=2$, there exist constants $c>0$, $T>0$ and $\gamma<d/2$ such that
$$u(t)\le ct^{\gamma-1}\,,\qquad t\ge T.\qquad(7.26)$$
Note that under certain assumptions on the asymptotic behavior of φ(λ) as λ →∞, one can
obtain (7.25) and (7.26) for the density u.
Theorem 7.20 Suppose that $D$ is a bounded $C^{1,1}$ domain in $\mathbb R^d$ and that the potential density $u$ of the special subordinator $S=(S_t:t\ge0)$ satisfies Assumption A. Suppose also that there is a function $g:(0,\infty)\to(0,\infty)$ such that
$$\int_0^\infty t^{d/2-2+\beta}e^{-t}g(t)\,dt<\infty$$
and a $\xi>0$ such that $f_{\ell,\xi}(y,t)\le g(t)$ for all $y,t>0$, where $f_{\ell,\xi}$ is the function defined before Lemma 4.3 using the $\ell$ from (7.25). Then there exist positive constants $C_1\le C_2$ such that for all $x,y\in D$,
$$C_1\Big(\frac{\rho(x)\rho(y)}{|x-y|^2}\wedge1\Big)\frac1{|x-y|^{d+2\beta-2}\,\ell\big(\frac1{|x-y|^2}\big)}\le U_D(x,y)\le C_2\Big(\frac{\rho(x)\rho(y)}{|x-y|^2}\wedge1\Big)\frac1{|x-y|^{d+2\beta-2}\,\ell\big(\frac1{|x-y|^2}\big)}\,.\qquad(7.27)$$
Proof. We start by proving the upper bound. Using the obvious upper bound $p_D(t,x,y)\le(4\pi t)^{-d/2}\exp(-|x-y|^2/4t)$ and Lemma 4.3, one can easily show that
$$U_D(x,y)\le c_1\,\frac1{|x-y|^{d+2\beta-2}\,\ell\big(\frac1{|x-y|^2}\big)}\,.$$
Next, (7.23) gives
$$U_D(x,y)\le c_2\,\rho(x)\rho(y)\int_0^\infty t^{-d/2-1}e^{-|x-y|^2/6t}u(t)\,dt\,.$$
Thus, by using Lemma 4.3 again, one easily gets
$$U_D(x,y)\le c_3\,\rho(x)\rho(y)\,\frac1{|x-y|^{d+2\beta}\,\ell\big(\frac1{|x-y|^2}\big)}\,.$$
Combining the two upper bounds obtained so far, we arrive at the upper bound in (7.27).
In order to prove the lower bound, we first recall the following result about slowly varying functions (see [10], p. 22, Theorem 1.5.12):
$$\lim_{\lambda\to\infty}\frac{\ell(t\lambda)}{\ell(\lambda)}=1$$
uniformly in $t\in[a,b]$ for every $[a,b]\subset(0,\infty)$. Together with the joint continuity of $(t,\lambda)\mapsto\ell(t\lambda)/\ell(\lambda)$, this shows that for a given $\lambda_0>0$ and an interval $[a,b]\subset(0,\infty)$, there exists a positive constant $c(a,b,\lambda_0)$ such that
$$\frac{\ell(t\lambda)}{\ell(\lambda)}\le c(a,b,\lambda_0)\,,\qquad a\le t\le b,\ \lambda\ge\lambda_0\,.\qquad(7.28)$$
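A quick numerical illustration of this uniform convergence, using the slowly varying function $\ell(\lambda)=\log(e+\lambda)$ (a positive, increasing, continuous choice of the kind allowed in Assumption A):

```python
import math

def ell(lam):
    # log(e + lam): positive, increasing, continuous, slowly varying at infinity
    return math.log(math.e + lam)

def max_ratio_dev(lam, a=0.5, b=2.0, steps=100):
    """sup over a grid of t in [a, b] of |ell(t*lam)/ell(lam) - 1|."""
    return max(abs(ell((a + i * (b - a) / steps) * lam) / ell(lam) - 1.0)
               for i in range(steps + 1))

# the deviation shrinks as lambda grows, uniformly over t in [1/2, 2]
devs = [max_ratio_dev(10.0 ** k) for k in (2, 4, 8)]
```

Here `devs` is a decreasing sequence tending towards 0, consistent with the uniform convergence statement.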
Now, by (7.24),
$$U_D(x,y)\ge c_4\int_0^A\Big(\frac{\rho(x)\rho(y)}{t}\wedge1\Big)t^{-d/2}\exp\Big(-\frac{c|x-y|^2}{t}\Big)u(t)\,dt\,.$$
Assume $x\neq y$. Let $R$ be the diameter of $D$ and assume that $A$ has been chosen so that $A=R^2$. Then for any $x,y\in D$, $\rho(x)\rho(y)<R^2=A$. The lower bound is proved by considering two separate cases.

(i) $\dfrac{|x-y|^2}{\rho(x)\rho(y)}<\dfrac{2R^2}{A}$. In this case we have
$$
\begin{aligned}
U_D(x,y)&\ge c_4\int_0^{\rho(x)\rho(y)}\Big(\frac{\rho(x)\rho(y)}{t}\wedge1\Big)t^{-d/2}\exp(-c|x-y|^2/t)\,u(t)\,dt\\
&\ge c_5|x-y|^{-d+2}\int_{c|x-y|^2/(\rho(x)\rho(y))}^\infty s^{d/2-2}e^{-s}\,u(c|x-y|^2/s)\,ds\\
&\ge c_5|x-y|^{-d+2}\int_{2cR^2/A}^{4cR^2/A}s^{d/2-2}e^{-s}\,u(c|x-y|^2/s)\,ds\,.
\end{aligned}\qquad(7.29)
$$
For $2cR^2/A\le s\le4cR^2/A$ we have $c|x-y|^2/s\le A|x-y|^2/(2R^2)\le A/2$. Hence, by (7.25) (together with the continuity and positivity of both sides on $(0,A/2]$), there exists $c_6>0$ such that
$$u\Big(\frac{c|x-y|^2}{s}\Big)\ge\frac{c_6}{\big(\frac{c|x-y|^2}{s}\big)^\beta\,\ell\big(\frac{s}{c|x-y|^2}\big)}\,.$$
Further, since $1/|x-y|^2\ge1/R^2$ for all $x,y\in D$, we can use (7.28) to conclude that there exists $c_7>0$ such that
$$\frac{\ell\big(\frac1{|x-y|^2}\big)}{\ell\big(\frac{s}{c|x-y|^2}\big)}\ge c_7\,,\qquad\frac{2cR^2}A\le s\le\frac{4cR^2}A,\ x,y\in D\,.$$
It follows from (7.29) that
$$
\begin{aligned}
U_D(x,y)&\ge c_5|x-y|^{-d+2}\int_{2cR^2/A}^{4cR^2/A}s^{d/2-2}e^{-s}\,\frac{c_6c_7}{\big(\frac{c|x-y|^2}{s}\big)^\beta\,\ell\big(\frac1{|x-y|^2}\big)}\,ds\\
&=\frac{c_8}{|x-y|^{d+2\beta-2}\,\ell\big(\frac1{|x-y|^2}\big)}\int_{2cR^2/A}^{4cR^2/A}s^{d/2+\beta-2}e^{-s}\,ds
=\frac{c_9}{|x-y|^{d+2\beta-2}\,\ell\big(\frac1{|x-y|^2}\big)}\,.
\end{aligned}
$$
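The second inequality in (7.29) is simply the change of variables $s=c|x-y|^2/t$ (on $t\le\rho(x)\rho(y)$ the minimum in the integrand equals $1$); explicitly,
\[
\int_0^{\rho(x)\rho(y)}t^{-d/2}e^{-c|x-y|^2/t}\,u(t)\,dt
= c^{\,1-d/2}\,|x-y|^{2-d}\int_{c|x-y|^2/(\rho(x)\rho(y))}^{\infty}s^{d/2-2}e^{-s}\,u\Big(\frac{c|x-y|^2}{s}\Big)\,ds\,,
\]
since $t=c|x-y|^2/s$ and $|dt|=c|x-y|^2s^{-2}\,ds$; the factor $c^{1-d/2}$ is absorbed into $c_5$.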
(ii) $\dfrac{|x-y|^2}{\rho(x)\rho(y)}\ge\dfrac{2R^2}{A}$. In this case we have
$$
\begin{aligned}
U_D(x,y)&\ge c_4\,\rho(x)\rho(y)\int_{\rho(x)\rho(y)}^At^{-d/2-1}\exp(-c|x-y|^2/t)\,u(t)\,dt\\
&=c_{10}\,\rho(x)\rho(y)|x-y|^{-d}\int_{c|x-y|^2/A}^{c|x-y|^2/(\rho(x)\rho(y))}s^{d/2-1}e^{-s}\,u(c|x-y|^2/s)\,ds\\
&\ge c_{10}\,\rho(x)\rho(y)|x-y|^{-d}\int_{cR^2/A}^{2cR^2/A}s^{d/2-1}e^{-s}\,u(c|x-y|^2/s)\,ds\,.
\end{aligned}
$$
The integral above is estimated in the same way as in case (i). It follows that there exists a positive constant $c_{11}$ such that
$$U_D(x,y)\ge c_{10}\,\rho(x)\rho(y)|x-y|^{-d}\,\frac{c_{11}}{|x-y|^{2\beta}\,\ell\big(\frac1{|x-y|^2}\big)}=\frac{c_{12}\,\rho(x)\rho(y)}{|x-y|^{d+2\beta}\,\ell\big(\frac1{|x-y|^2}\big)}\,.$$
Combining the two cases above we arrive at the lower bound in (7.27). □
Suppose that the subordinator $S$ has a strictly positive drift $b$. Then we can take $\beta=0$ and $\ell\equiv1$ in Assumption A, and Theorem 7.20 implies that the Green function $U_D$ of $Y^D$ is comparable to the Green function of $X^D$. Further, if $\phi(\lambda)\sim c_0\lambda^{\alpha/2}$ as $\lambda\to\infty$, $0<\alpha<2$, then by (3.27) it follows that Assumption A holds true with $\beta=1-\alpha/2$ and $\ell\equiv1$. In this way we recover a result from [67] saying that, under the stated assumption,
$$C_1\Big(\frac{\rho(x)\rho(y)}{|x-y|^2}\wedge1\Big)\frac1{|x-y|^{d-\alpha}}\le U_D(x,y)\le C_2\Big(\frac{\rho(x)\rho(y)}{|x-y|^2}\wedge1\Big)\frac1{|x-y|^{d-\alpha}}\,.$$
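As a concrete instance of (7.25): for the $\alpha/2$-stable subordinator with $\phi(\lambda)=\lambda^{\alpha/2}$, the potential density is $u(t)=t^{\alpha/2-1}/\Gamma(\alpha/2)$, since its Laplace transform is $1/\phi(\lambda)=\lambda^{-\alpha/2}$; this matches $\beta=1-\alpha/2$ and $\ell\equiv1$. A rough numerical check of the Laplace-transform identity (midpoint rule with ad hoc truncation):

```python
import math

def u(t, alpha):
    # potential density of the alpha/2-stable subordinator, phi(lam) = lam^(alpha/2)
    return t ** (alpha / 2.0 - 1.0) / math.gamma(alpha / 2.0)

def laplace_u(lam, alpha, n=200000, T=100.0):
    """Midpoint-rule evaluation of int_0^infty e^(-lam*t) u(t) dt, which
    should equal 1/phi(lam) = lam^(-alpha/2); u is integrable at 0."""
    h = T / n
    return sum(math.exp(-lam * (i + 0.5) * h) * u((i + 0.5) * h, alpha) * h
               for i in range(n))
```

For example, `laplace_u(2.0, 1.5)` should come out close to `2**(-0.75)`, i.e. about 0.5946.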
The jumping function $J_D(x,y)$ of the subordinate process $Y^D$ is given by the following formula:
$$J_D(x,y)=\int_0^\infty p_D(t,x,y)\,\mu(dt)\,.$$
Suppose that $\mu(dt)$ has a decreasing density $\mu(t)$ which satisfies

Assumption B: There exist constants $c_0>0$ and $\beta\in[1,2]$, and an increasing, continuous function $\ell:(0,\infty)\to(0,\infty)$ which is slowly varying at $\infty$, such that
$$\mu(t)\sim\frac{c_0}{t^\beta\ell(1/t)}\,,\qquad t\to0+.\qquad(7.30)$$
Then we have the following result on sharp bounds for $J_D(x,y)$. The proof is similar to that of Theorem 7.20 and is therefore omitted.
Theorem 7.21 Suppose that $D$ is a bounded $C^{1,1}$ domain in $\mathbb R^d$ and that the Levy density $\mu(t)$ of the subordinator $S=(S_t:t\ge0)$ exists, is decreasing and satisfies Assumption B. Suppose also that there is a function $g:(0,\infty)\to(0,\infty)$ such that
$$\int_0^\infty t^{d/2-2+\beta}e^{-t}g(t)\,dt<\infty$$
and a $\xi>0$ such that $f_{\ell,\xi}(y,t)\le g(t)$ for all $y,t>0$, where $f_{\ell,\xi}$ is the function defined before Lemma 4.3 using the $\ell$ from (7.30). Then there exist positive constants $C_3\le C_4$ such that for all $x,y\in D$,
$$C_3\Big(\frac{\rho(x)\rho(y)}{|x-y|^2}\wedge1\Big)\frac1{|x-y|^{d+2\beta-2}\,\ell\big(\frac1{|x-y|^2}\big)}\le J_D(x,y)\le C_4\Big(\frac{\rho(x)\rho(y)}{|x-y|^2}\wedge1\Big)\frac1{|x-y|^{d+2\beta-2}\,\ell\big(\frac1{|x-y|^2}\big)}\,.\qquad(7.31)$$
References
[1] Banuelos, R.: Intrinsic ultracontractivity and eigenfunction estimates for Schrodinger operators. J. Funct. Anal., 100, 181–206 (1991)
[2] Bass, R.: Probabilistic Techniques in Analysis. Springer, New York (1995)
[3] Bass, R.F., Kassmann, M.: Harnack inequalities for non-local operators of variable order. Trans. Amer. Math. Soc., 357, 837–850 (2005)
[4] Bass, R.F., Levin, D.A.: Harnack inequalities for jump processes. Potential Anal., 17, 375–388 (2002)
[5] Bass, R.F., Levin, D.A.: Transition probabilities for symmetric jump processes. Trans. Amer. Math. Soc., 354, 2933–2953 (2002)
[6] Berg, C., Forst, G.: Potential Theory on Locally Compact Abelian Groups. Springer, Berlin (1975)
[7] Bertoin, J.: Levy Processes. Cambridge University Press, Cambridge (1996)
[8] Bertoin, J.: Regenerative embeddings of Markov sets. Probab. Theory Rel. Fields, 108, 559–571 (1997)
[9] Bertoin, J., Yor, M.: On subordinators, self-similar Markov processes and some factorizations of the exponential random variable. Elect. Comm. in Probab., 6, 95–106 (2001)
[10] Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Cambridge University Press, Cambridge (1987)
[11] Bliedtner, J., Hansen, W.: Potential Theory: An Analytic and Probabilistic Approach to Balayage. Springer, New York (1986)
[12] Blumenthal, R.M., Getoor, R.K.: Some theorems on stable processes. Trans. Amer. Math. Soc., 95, 263–273 (1960)
[13] Blumenthal, R.M., Getoor, R.K.: Markov Processes and Potential Theory. Academic Press, New York (1968)
[14] Bogdan, K.: The boundary Harnack principle for the fractional Laplacian. Studia Math., 123, 43–80 (1997)
[15] Bogdan, K.: Representation of α-harmonic functions in Lipschitz domains. Hiroshima Math. J., 29, 227–243 (1999)
[16] Bogdan, K., Kulczycki, T., Kwasnicki, M.: Estimates and structure of α-harmonic functions. Probab. Theory Related Fields, to appear (2007)
[17] Bogdan, K., Stos, A., Sztonyk, P.: Potential theory for Levy stable processes. Bull. Polish Acad. Sci. Math., 50, 361–372 (2002)
[18] Chen, Z.-Q., Kumagai, T.: Heat kernel estimates for stable-like processes on d-sets. Stoch. Proc. Appl., 108, 27–62 (2003)
[19] Chen, Z.-Q., Song, R.: Estimates on Green functions and Poisson kernels of symmetric stable processes. Math. Ann., 312, 465–501 (1998)
[20] Chen, Z.-Q., Song, R.: Martin boundary and integral representation for harmonic functions of symmetric stable processes. J. Funct. Anal., 159, 267–294 (1998)
[21] Chen, Z.-Q., Song, R.: Drift transform and Green function estimates for discontinuous processes. J. Funct. Anal., 201, 262–281 (2003)
[22] Chen, Z.-Q., Song, R.: Two-sided eigenvalue estimates for subordinate processes in domains. J. Funct. Anal., 226, 90–113 (2005)
[23] Chen, Z.-Q., Song, R.: Continuity of eigenvalues of subordinate processes in domains. Math. Z., 252, 71–89 (2006)
[24] Chen, Z.-Q., Song, R.: Spectral properties of subordinate processes in domains. To appear in Stochastic Analysis and Partial Differential Equations
[25] Chung, K.-L.: Lectures from Markov Processes to Brownian Motion. Springer, New York (1982)
[26] Chung, K.-L., Zhao, Z.: From Brownian Motion to Schrodinger's Equation. Springer, Berlin (1995)
[27] Davies, E.B.: Heat Kernels and Spectral Theory. Cambridge University Press, Cambridge (1989)
[28] Davies, E.B., Simon, B.: Ultracontractivity and the heat kernel for Schrodinger operators and Dirichlet Laplacians. J. Funct. Anal., 59, 335–395 (1984)
[29] Doob, J.L.: Classical Potential Theory and its Probabilistic Counterparts. Springer, New York (1984)
[30] Erdelyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: Tables of Integral Transforms, Vol. 1. McGraw-Hill, New York (1954)
[31] Fristedt, B.E.: Sample functions of stochastic processes with stationary, independent increments. In Advances in Probability and Related Topics, Vol. 3, 241–396, Dekker, New York (1974)
[32] Fuglede, B.: On the theory of potentials in locally compact spaces. Acta Math., 103, 139–215 (1960)
[33] Fukushima, M., Oshima, Y., Takeda, M.: Dirichlet Forms and Symmetric Markov Processes. Walter De Gruyter, Berlin (1994)
[34] Geman, H., Madan, D.B., Yor, M.: Time changes for Levy processes. Math. Finance, 11, 79–96 (2001)
[35] Glover, J., Pop-Stojanovic, Z., Rao, M., Sikic, H., Song, R., Vondracek, Z.: Harmonic functions of subordinate killed Brownian motions. J. Funct. Anal., 215, 399–426 (2004)
[36] Glover, J., Rao, M., Sikic, H., Song, R.: Γ-potentials. In Classical and Modern Potential Theory and Applications (Chateau de Bonas, 1993), 217–232, Kluwer Acad. Publ., Dordrecht (1994)
[37] Grzywny, T., Ryznar, M.: Estimates of Green functions for some perturbations of fractional Laplacians. Illinois J. Math., to appear (2007)
[38] He, P., Ying, J.: Subordinating the killed process versus killing the subordinate process. Proc. Amer. Math. Soc., 135, 523–529 (2007)
[39] Jacob, N.: Pseudo Differential Operators and Markov Processes, Vol. 1. Imperial College Press, London (2001)
[40] Jerison, D.S., Kenig, C.E.: Boundary behavior of harmonic functions in non-tangentially accessible domains. Adv. Math., 46, 80–147 (1982)
[41] Kim, P., Song, R., Vondracek, Z.: Boundary Harnack inequality for subordinate Brownian motions. Preprint (2007)
[42] Klebanov, L.B., Maniya, G.M., Melamed, I.A.: A problem of V. M. Zolotarev and analogues of infinitely divisible and stable distributions in a scheme for summation of a random number of random variables. Theory Probab. Appl., 29, 791–794 (1984)
[43] Kulczycki, T.: Properties of Green function of symmetric stable process. Probab. Math. Statist., 17, 381–404 (1997)
[44] Kunita, H., Watanabe, T.: Markov processes and Martin boundaries. Illinois J. Math., 9, 485–526 (1965)
[45] Kunugui, K.: Etude sur la theorie du potentiel generalise. Osaka Math. J., 2, 63–103 (1950)
[46] Kyprianou, A.E.: Introductory Lectures on Fluctuations of Levy Processes with Applications. Springer, Berlin (2006)
[47] Lieb, E.: The stability of matter. Rev. Modern Phys., 48, 553–569 (1976)
[48] Madan, D.B., Carr, P., Chang, E.: The variance gamma process and option pricing. European Finance Review, 2, 79–105 (1998)
[49] Martio, O., Vuorinen, M.: Whitney cubes, p-capacity, and Minkowski content. Exposition. Math., 5(1), 17–40 (1987)
[50] Matsumoto, H., Nguyen, L., Yor, M.: Subordinators related to the exponential functionals of Brownian bridges and explicit formulae for the semigroups of hyperbolic Brownian motions. In Stochastic Processes and Related Topics (Siegmundsburg, 2000), 213–235, Stochastics Monogr., Vol. 12, Taylor & Francis, London (2002)
[51] Nakamura, Y.: Classes of operator monotone functions and Stieltjes functions. In The Gohberg Anniversary Collection, Vol. II: Topics in Analysis and Operator Theory, Operator Theory: Advances and Applications, Vol. 41, H. Dym et al., Eds., 395–404, Birkhauser, Basel (1989)
[52] Pillai, A.N.: On Mittag-Leffler functions and related distributions. Ann. Inst. Statist. Math., 42, 157–161 (1990)
[53] Pinsky, R.: Positive Harmonic Functions and Diffusion. Cambridge University Press, Cambridge (1995)
[54] Port, S.C., Stone, J.C.: Infinitely divisible processes and their potential theory I, II. Ann. Inst. Fourier, 21(2), 157–275, and 21(4), 179–265 (1971)
[55] Pruitt, W.E.: The growth of random walks and Levy processes. Ann. Probab., 9, 948–956 (1981)
[56] Rao, M., Song, R., Vondracek, Z.: Green function estimates and Harnack inequality for subordinate Brownian motions. Potential Anal., 25, 1–27 (2006)
[57] Ryznar, M.: Estimates of Green functions for relativistic α-stable process. Potential Anal., 17, 1–23 (2002)
[58] Sato, K.-I.: Levy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge (1999)
[59] Schilling, R.L.: Subordination in the sense of Bochner and a related functional calculus. J. Austral. Math. Soc., Ser. A, 64, 368–396 (1998)
[60] Schilling, R.L.: Growth and Holder conditions for the sample paths of Feller processes. Probab. Theory Related Fields, 112, 565–611 (1998)
[61] Sikic, H., Song, R., Vondracek, Z.: Potential theory of geometric stable processes. Probab. Theory Related Fields, 135, 547–575 (2006)
[62] Skorohod, A.V.: Asymptotic formulas for stable distribution laws. Select. Transl. Math. Statist. and Probability, Vol. 1, Inst. Math. Statist. and Amer. Math. Soc., Providence, R.I., 157–161 (1961)
[63] Skorohod, A.V.: Random Processes with Independent Increments. Kluwer, Dordrecht (1991)
[64] Song, R.: Sharp bounds on the density, Green function and jumping function of subordinate killed BM. Probab. Theory Related Fields, 128, 606–628 (2004)
[65] Song, R., Vondracek, Z.: Potential theory of subordinate killed Brownian motion in a domain. Probab. Theory Related Fields, 125, 578–592 (2003)
[66] Song, R., Vondracek, Z.: Harnack inequalities for some classes of Markov processes. Math. Z., 246, 177–202 (2004)
[67] Song, R., Vondracek, Z.: Sharp bounds for Green functions and jumping functions of subordinate killed Brownian motions in bounded C1,1 domains. Elect. Comm. in Probab., 9, 96–105 (2004)
[68] Song, R., Vondracek, Z.: Harnack inequality for some discontinuous Markov processes with a diffusion part. Glasnik Matematicki, 40, 177–187 (2005)
[69] Song, R., Vondracek, Z.: Potential theory of special subordinators and subordinate killed stable processes. J. Theoret. Probab., 19, 817–847 (2006)
[70] Song, R., Vondracek, Z.: Potential theory of subordinate Brownian motions. Preprint (2007)
[71] Song, R., Wu, J.-M.: Boundary Harnack principle for symmetric stable processes. J. Funct. Anal., 168, 403–427 (1999)
[72] Sztonyk, P.: On harmonic measure for Levy processes. Probab. Math. Statist., 20, 383–390 (2000)
[73] Sztonyk, P.: Boundary potential theory for stable Levy processes. Colloq. Math., 95(2), 191–206 (2003)
[74] Zhang, Q.S.: The boundary behavior of heat kernels of Dirichlet Laplacians. J. Differential Equations, 182, 416–430 (2002)