

Shot Noise Processes

by

Yuanhui Xiao

(Under the direction of Robert Lund)

Abstract

This dissertation studies several statistical issues for shot noise processes. A strong law of large numbers and a central limit theorem are derived under very weak conditions. Asymptotics are explored for the case of heavy-tailed shot marks. Optimal estimating equation and moment-type methods are developed for parameter estimation purposes. Asymptotic normality of the proposed estimators is established. Instructive examples are presented and simulations are included for additional feel.

Index words: Shot Noise, Law of Large Numbers, Central Limit Theorem, Heavy-tails, Inference, Estimating Equations, Conditional Expectation, Linear Prediction, Conditional Least Squares.


Shot Noise Processes

by

Yuanhui Xiao

B.S., Jiangxi University, 1988

M.S., Nankai University, 1990

M.S., The University of Georgia, 2002

A Dissertation Submitted to the Graduate Faculty

of The University of Georgia in Partial Fulfillment

of the

Requirements for the Degree

Doctor of Philosophy

Athens, Georgia

2003


© 2003

Yuanhui Xiao

All Rights Reserved


Shot Noise Processes

by

Yuanhui Xiao

Approved:

Major Professor: Robert Lund

Committee: Anand Vidyashankar

Lynne Seymour

Daniel Hall

William P. McCormick

Electronic Version Approved:

Maureen Grasso

Dean of the Graduate School

The University of Georgia

August 2003


Acknowledgments

With the completion of this dissertation, I am more and more indebted to my friends, fellow students, teachers, parents, and many other people who helped me in various ways. The achievement of this work would have been very hard to imagine without their support.

First, I am very thankful to my Christian friends for their love and supporting prayers.

I am also grateful to my Chinese, American, and international friends in Athens, Georgia, for their care, hospitality, and encouragement.

I also want to express my thanks to my fellow students at the University of Georgia for their respect and friendship.

I owe much to the staff of the Department of Statistics at the University of Georgia for serving me very faithfully.

I especially want to emphasize my gratitude to the faculty members of the Department of Statistics at the University of Georgia for their excellent teaching and valuable academic guidance.

Here I also want to mention my parents on the other side of the earth, who always share my joy and sorrow. I am very thankful to them for their love, solicitude, patience, and encouragement.

Special thanks to Dr. Anand Vidyashankar and Dr. Lynne Seymour for serving as members of my graduate committee.

Special thanks to Dr. Daniel Hall and Dr. Ishwar Basawa for their guidance in estimating equations and for much valuable advice.


Special thanks to Dr. Bill McCormick, who took the time to read several drafts of this work. His contribution to it is invaluable.

Special thanks to Dr. Robert Taylor, the Head of the Department of Mathematics at Clemson University. Dr. Taylor was previously a faculty member of the Department of Statistics at the University of Georgia and served the department very faithfully. Though he is not a member of my graduate committee, his encouragement and earnest attitude in academic matters have made me willing to pursue achievements in academics.

Finally, I want to give my special thanks to my major advisor, Dr. Robert Lund, who has taught me more than statistics. Under his guidance I did the best research of my life. Furthermore, he has rekindled my academic enthusiasm, and henceforth I am willing to develop my career in academics.

Here I must point out that all the people who gave me aid deserve my special thanks. Perhaps I should create a book to list their names. However, I want to say that their aid is unforgettable and their names will also be remembered.

Before the end of this acknowledgments section, I want to express my gratitude to a special group of people. They have passed away; however, they left a great book: the Holy Bible, of which I am a faithful reader. Through this book, I established my faith and found my eternal hope. I often find joy, comfort, encouragement, and a refreshed heart when reading this book. This is the book I most want to recommend to you, and I hope you will try to read it. Not only do I hope you will come to know the names of the people in this group, but I also wish that you will find something even more precious.

May God bless you!


Table of Contents

Acknowledgments

Chapter

1 Introduction and Literature Review
   1.1 References

2 Inference for Shot Noise†
   2.1 Introduction
   2.2 Estimation for Interval Similar Shot Functions
   2.3 Other Shot Functions
   2.4 Proof of Theorem 2.4
   2.5 Appendix
   2.6 References

3 Limiting Properties of Shot Noise Processes†
   3.1 Introduction
   3.2 The Law of Large Numbers and Central Limit Theorem
   3.3 Heavy-Tailed Shot Marks
   3.4 Proofs
   3.5 References

4 Conclusions and Future Work
   4.1 Non-interval Similar Shot Functions
   4.2 One Parameter Shot Functions
   4.3 Non-causal Shot Functions?
   4.4 Maximum Likelihood Estimators


Chapter 1

Introduction and Literature Review

Whereas the definition of shot noise varies across authors, the basic feature of a shot noise process is that of a convolution of two independent stochastic processes. In our work we use the following definition.

Definition (Shot Noise). A stochastic process A = {A_t : t ≥ 0} is called a shot noise process if, for each t ≥ 0,

A_t = \sum_{i=1}^{N_t} Y_i h(t - \tau_i),   (1.0.1)

where N = {N_t : t ≥ 0} is a Poisson process with arrival times {\tau_i}_{i=1}^{\infty} and rate generation parameter λ > 0, {Y_i}_{i=1}^{\infty} is an independent and identically distributed (IID) sequence of "shot marks", and h is a real-valued function on ℝ satisfying the causality condition h(t) = 0 for t < 0. The function h is called the impulse response function or shot function and is often assumed to be bounded.

Note that if h is constant on [0, ∞), then A is the well-known compound Poisson process.
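Although the developments below are analytic, a path of (1.0.1) is easy to simulate on a unit observation lattice. The following Python sketch is purely illustrative and is not part of the original development; the exponential shot function and exponential shot marks are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_shot_noise(n, lam=1.0, shot=lambda t: np.exp(-t), mean_mark=4.0):
    """Simulate A_1, ..., A_n from (1.0.1): Poisson(lam) arrivals on [0, n],
    IID exponential shot marks Y_i, and a causal shot function h."""
    num = rng.poisson(lam * n)                        # N_n ~ Poisson(lam * n)
    tau = np.sort(rng.uniform(0.0, n, num))           # arrival times given N_n
    Y = rng.exponential(mean_mark, num)               # shot marks
    t = np.arange(1, n + 1, dtype=float)              # observation lattice t = 1, ..., n
    lag = t[:, None] - tau[None, :]                   # t - tau_i for every (t, i) pair
    H = np.where(lag >= 0, shot(np.maximum(lag, 0.0)), 0.0)  # causality: h(t) = 0 for t < 0
    return H @ Y                                      # A_t = sum_i Y_i h(t - tau_i)

A = simulate_shot_noise(1000)
print(A.mean())  # for this h, close to lam * E[Y] * integral of h = 4
```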

Shot noise processes and their variants have been widely used in various areas. In insurance risk, the total claims process is a shot noise process with a constant impulse response function (Beirlant and Teugels, 1992). Precipitation models often center on a shot noise model (Waymire and Gupta, 1981a, b, c). Riverflows have been noted to be modeled well by a shot noise structure (Weiss, 1973, 1974; Lawrance and Kottegoda, 1977). Shot noise has even been used to characterize the irregularity of textile yarns (Linhart, 1964).


The content of water in a reservoir is shown to follow a shot noise process with h(t) = e^{-γt} for some γ > 0 if the water is released from the reservoir at rate γu when the content of the reservoir is u > 0 (Lund, 1996). Applications of shot noise have also been made to neurophysiology (Holden, 1976), optics, acoustics, membrane noise analysis, electronics, and biophysics (Bevan et al., 1979), and to traffic noise studies (Marcus, 1975).

Previous authors have enriched the statistical literature on shot noise. Papoulis (1971) studied the distance of shot noise from Gaussianity. Rice (1977) calculated the characteristic function of generalized shot noise. Lane (1984) presented a version of the central limit theorem, somewhat different from the one we derive in Chapter 3 here. McCormick and Homble (1995) and McCormick (1997) quantified the extremes of shot noise. Lund (1996) explored the stability of storage models with shot noise input; prediction of shot noise is considered in Lund et al. (1999). Shot noise generated by a semi-Markov process was studied in Smith (1973).

In this dissertation, we consider several statistical topics on shot noise. The flavor of research in this dissertation is not found in the related literature. Our focus is on process histories {A_{t_i}}_{i=1}^{n} taken on an equally spaced lattice: the time indices of observation, denoted by 0 < t_1 < t_2 < . . . < t_n, are given by t_i = δi, where δ is a constant that may be assumed to be unity without loss of generality. We will derive a strong law of large numbers and a central limit theorem for n^{-1} \sum_{i=1}^{n} A_i. Examples are given to show the use of the results in inferential settings. We also present asymptotics for the case of heavy-tailed shot marks. These results are discussed in Chapter 3. In Chapter 2, we explore estimation of parameters in a shot noise process and derive asymptotic properties of the estimators. In Chapter 4, we summarize the work and state a few open problems.

1.1 References

[1] Beirlant, J. and Teugels, J. L. (1992). Modeling large claims in non-life insurance. Insurance: Mathematics and Economics, 11, 17-29.
[2] Bevan, S., Kullberg, R., and Rice, J. (1979). An analysis of cell membrane noise. The Annals of Statistics, 7, 237-257.
[3] Holden, A. V. (1976). Models of the Stochastic Activity of Neurones (Lecture Notes in Biomathematics, 12). Springer, Berlin.
[4] Lane, J. A. (1984). The central limit theorem for the Poisson shot-noise process. Journal of Applied Probability, 21, 287-301.
[5] Lawrance, A. J. and Kottegoda, N. T. (1977). Stochastic modelling of riverflow time series. Journal of the Royal Statistical Society, Series A, 140, 1-14.
[6] Linhart, H. (1964). On the distribution of some time averages of shot noise. Technometrics, 6, 287-292.
[7] Lund, R. B. (1996). The stability of storage models with shot noise input. Journal of Applied Probability, 33, 830-839.
[8] Lund, R. B., Butler, R., and Paige, R. L. (1999). Prediction of shot noise. Journal of Applied Probability, 36, 374-388.
[9] Marcus, A. (1975). Some exact distributions in traffic noise theory. Advances in Applied Probability, 7, 593-606.
[10] McCormick, W. P. (1997). Extremes for shot noise processes with heavy-tailed amplitudes. Journal of Applied Probability, 34, 643-656.
[11] McCormick, W. P. and Homble, P. (1995). Weak limit results for the extremes of a class of shot noise processes. Journal of Applied Probability, 32, 707-726.
[12] Papoulis, A. (1971). High density shot noise and Gaussianity. Journal of Applied Probability, 8, 118-127.
[13] Rice, J. (1977). On generalized shot noise. Advances in Applied Probability, 9, 553-565.
[14] Smith, W. (1973). Shot noise generated by a semi-Markov process. Journal of Applied Probability, 10, 685-690.
[15] Waymire, E. and Gupta, V. K. (1981a). The mathematical structure of rainfall representation 1. A review of the theory of stochastic rainfall models. Water Resources Research, 17, 1261-1272.
[16] Waymire, E. and Gupta, V. K. (1981b). The mathematical structure of rainfall representation 2. A review of the theory of point processes. Water Resources Research, 17, 1273-1285.
[17] Waymire, E. and Gupta, V. K. (1981c). The mathematical structure of rainfall representation 3. Some applications of the point process theory to rainfall processes. Water Resources Research, 17, 1287-1294.
[18] Weiss, G. (1973). Filtered Poisson processes as models for daily streamflow data. Ph.D. thesis, Imperial College, London.
[19] Weiss, G. (1974). Shot noise models for synthetic generation of multisite daily streamflow data. In: Design of Water Resource Projects with Inadequate Data, Vol. 1. IAHS, pp. 457-467.


Chapter 2

Inference for Shot Noise†

†Xiao, Y. and Lund, R. Submitted to Statistical Inference for Stochastic Processes, 4/13/2003.


Abstract

This paper studies estimation issues for shot noise processes from a process history taken on a discrete-time lattice. Optimal estimating equation methods are constructed for the case when the impulse response function of the shot process is interval similar; moment-type methods are explored for compactly supported impulse responses. Asymptotic normality of the proposed estimators is established and the limiting covariance of the estimators is derived. Examples demonstrating the application of the methods to specific shot functions are presented; short simulations are included for additional feel.

Key Words: Shot Noise; Inference; Estimating Equations; Conditional Expectation; Linear Prediction; Conditional Least Squares.

2.1 Introduction

This paper studies estimation of parameters in a shot noise process A = {A_t : t ≥ 0} from observations on a discrete lattice. Specifically, A_t is defined for each time t ≥ 0 by

A_t = \sum_{i=1}^{N_t} Y_i h(t - \tau_i),   (2.1.1)

where N = {N_t : t ≥ 0} is a Poisson process with arrival times {\tau_i}_{i=1}^{\infty} and rate generation parameter λ > 0, and {Y_i}_{i=1}^{\infty} is an independent and identically distributed (IID) sequence of shot marks that is independent of N. The shot function h, also called an impulse response function, is assumed to be causal in the sense that h(t) = 0 for all t < 0. Shot noise processes are studied in Lauger (1975), Rice (1977), Bevan et al. (1979), and Lund et al. (1999).

Parameters in a shot noise process can arise from three sources. First, there is the Poissonian arrival rate parameter λ, which, for the moment, we take as unknown. Second, there may be unknown parameters in the shot function h. For example, if h(t) = e^{-γt} for t ≥ 0 and some unknown γ ≥ 0, then h has one parameter, γ, that requires estimation. Third, parameters may arise in the marginal distribution of Y_i. For example, if Y_i has a Gamma-type distribution with unknown parameters α > 0 and β > 0, then α and β constitute two additional unknown parameters.

At our disposal are observations of A on a discrete set that we arrange in the time order A_{t_1}, . . . , A_{t_n}, where 0 < t_1 < . . . < t_n. There is little to be gained by bookkeeping such general time indices; hence, we assume equally spaced observations: t_i = ∆i for some ∆ > 0. As the time axis may be rescaled, we may take ∆ = 1 without loss of generality. In short, our objective is to estimate all unknown parameters from A_1, . . . , A_n. Multiple realizations of A are not assumed to be available.

The shot noise parameters may not be identifiable. To see this simply, let c > 0 be any positive number. Then the two shot noise processes {A_t, t ≥ 0} and {A^*_t, t ≥ 0} defined by

A_t = \sum_{i=1}^{N_t} Y_i h(t - \tau_i) \quad \text{and} \quad A^*_t = \sum_{i=1}^{N_t} Y^*_i h^*(t - \tau_i),   (2.1.2)

where Y^*_i = cY_i and h^*(t) = h(t)/c, generate the same sample path from fixed realizations of N and {Y_i}_{i=1}^{\infty}. To circumvent this type of non-identifiability, we assume that \sup_{t \ge 0} |h(t)| = 1. Unfortunately, this alone is not sufficient to guarantee that all the parameters will be identifiable; we revisit identifiability issues in the next section.

For general notation, let Y represent a generic copy of Y_i for any i ≥ 1. The vector θ will denote all unknown model parameters. The covariance structure of A is derived in Lund et al. (1999). If E[|Y|] < ∞, then E[|A_t|] < ∞ for all t > 0 with

E[A_t] = \lambda E[Y] \int_0^t h(u)\,du.   (2.1.3)

If E[Y^2] < ∞, then process covariances are finite with

\mathrm{Cov}(A_{t_1}, A_{t_2}) = \lambda E[Y^2] \int_0^{t_1} h(t_1 - u) h(t_2 - u)\,du   (2.1.4)


for all 0 ≤ t_1 ≤ t_2. From the covariances in (2.1.4), one can linearly predict future process values from a history of observations (Lund et al. 1999). This is useful when no tractable form for the conditional predictor

\hat{A}_i = E[A_i \mid A_1, A_2, \ldots, A_{i-1}]   (2.1.5)

is available. We return to this issue in Section 3. In some cases, such as those studied in Section 2 here, one can explicitly compute \hat{A}_i. Both linear and conditional predictors could, in principle, serve as the basis for an estimating equation method of parameter estimation (Sørensen 2000).

The rest of this paper proceeds as follows. In Section 2, we introduce the class of interval similar shot functions; in this setting, \hat{A}_i can be computed explicitly and inference for optimal estimating equations quantified. Section 3 moves to non-interval similar shot functions. Section 4 proves the paper's main result, and Section 5 is an Appendix of technical facts.

2.2 Estimation for Interval Similar Shot Functions

This section studies estimation in shot noise processes with interval similar impulse response functions. The shot function h is called interval similar if for each n ≥ 0 and γ ∈ [0, 1),

h(n + \gamma) = \beta_n h(\gamma)   (2.2.1)

for some β_n. We take β_0 = 1. The exponential shot function h(t) = e^{-γt} 1_{[0,∞)}(t) is interval similar with β_n = e^{-nγ}. The piecewise constant shot function h(t) = δ_n if t ∈ [n, n + 1), for constants {δ_n}_{n=0}^{\infty}, is interval similar with β_n = δ_n/δ_0. In the rest of this section, we assume that h is interval similar.

For t ≥ 1, define

R_t = \sum_{\tau_i \in (t-1,\, t]} Y_i h(t - \tau_i).   (2.2.2)


Proposition 2.1. (Properties of R_t on the integer lattice).

1. {R_n}_{n=1}^{\infty} is IID with

E[R_n] = \lambda E[Y] \int_0^1 h(u)\,du \overset{\text{def}}{=} \mu_R \quad \text{and} \quad \mathrm{Var}(R_n) = \lambda E[Y^2] \int_0^1 h^2(u)\,du \overset{\text{def}}{=} \sigma_R^2.   (2.2.3)

2. R_n is independent of A_1, A_2, . . . , A_{n-1} for each n ≥ 1.

3. For each n ≥ 1,

A_n = \sum_{k=1}^{n-1} \beta_{n-k} R_k + R_n;   (2.2.4)

furthermore,

R_n = A_n + \sum_{k=1}^{n-1} \alpha_{n-k} A_k,   (2.2.5)

where the α_n's depend only on parameters in h and can be recursively computed via α_0 = β_0 and

\alpha_k = -\sum_{j=1}^{k} \alpha_{k-j} \beta_j, \quad k \ge 1.   (2.2.6)

Proof. Parts (1) and (2) follow from the stationary and independent increments of N and the IID structure assumed of {Y_i}_{i=1}^{\infty}. The moments for R_n in (2.2.3) are easily obtained by taking expectations in (2.2.2).

To prove Part (3), use (2.1.1) to get

A_n = \sum_{k=1}^{n-1} \sum_{\tau_i \in (k-1,\, k]} Y_i h(n - \tau_i) + \sum_{\tau_i \in (n-1,\, n]} Y_i h(n - \tau_i)   (2.2.7)

for each n ≥ 1. For τ_i ∈ (k - 1, k], the interval similarity in (2.2.1) gives h(n - τ_i) = h(n - k + k - τ_i) = β_{n-k} h(k - τ_i). Using this in (2.2.7) gives (2.2.4). Solving (2.2.4) for R_n yields

R_n = A_n - \sum_{k=1}^{n-1} \beta_{n-k} R_k.   (2.2.8)

As the coefficient multiplying A_n and R_n in (2.2.8) is unity, the linear form in (2.2.5) follows for some α_k's. Subtracting (2.2.8) from (2.2.5) gives

0 = \sum_{k=1}^{n-1} (\alpha_{n-k} A_k + \beta_{n-k} R_k).   (2.2.9)

To obtain the form of the α_k's quoted in (2.2.6), use an induction starting with A_1 = R_1 and equate coefficients in (2.2.9) to zero.
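The recursion (2.2.6) in Part (3) is straightforward to implement. The following minimal Python sketch is an illustration only (not part of the original text); the exponential shot function used as a test case anticipates Example 2.3 below.

```python
import math

def alpha_from_beta(beta):
    """Recursion (2.2.6): alpha_0 = beta_0 = 1 and
    alpha_k = -sum_{j=1}^k alpha_{k-j} * beta_j for k >= 1."""
    alpha = [1.0]
    for k in range(1, len(beta)):
        alpha.append(-sum(alpha[k - j] * beta[j] for j in range(1, k + 1)))
    return alpha

gamma = 0.5
beta = [math.exp(-n * gamma) for n in range(6)]  # beta_n = e^{-n*gamma}
print(alpha_from_beta(beta))  # [1.0, -e^{-gamma}, 0.0, 0.0, ...] up to rounding
```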

Proposition 2.1 facilitates computation of conditional predictors; in fact, by (2.2.5), A_n = -\sum_{k=1}^{n-1} \alpha_{n-k} A_k + R_n. Since R_n is independent of A_1, A_2, . . . , A_{n-1},

\hat{A}_n = -\sum_{k=1}^{n-1} \alpha_{n-k} A_k + \mu_R.   (2.2.10)

As A_n - \hat{A}_n = R_n - \mu_R, it follows that E[(A_i - \hat{A}_i)^2 \mid A_1, A_2, \ldots, A_{i-1}] = \mathrm{Var}(R_i) = \sigma_R^2, which is constant in i.
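To make (2.2.5) and (2.2.10) concrete, here is a minimal Python sketch of the one-step predictor. It is illustrative only; `alpha` is assumed to be the coefficient sequence from (2.2.6), zero-padded to length at least len(A), and `mu_R` is treated as known.

```python
import numpy as np

def one_step_predictions(A, alpha, mu_R):
    """Compute A_hat_n = -sum_{k=1}^{n-1} alpha_{n-k} A_k + mu_R, per (2.2.10).
    `A` holds A_1, ..., A_n; `alpha` holds alpha_0, alpha_1, ... (zero-padded)."""
    A = np.asarray(A, dtype=float)
    A_hat = np.full(len(A), float(mu_R))        # A_hat_1 = mu_R (no past observations)
    for n in range(2, len(A) + 1):              # n indexes A_n, 1-based
        coef = np.array([alpha[n - k] for k in range(1, n)])  # alpha_{n-1}, ..., alpha_1
        A_hat[n - 1] = -coef @ A[:n - 1] + mu_R
    return A_hat

# The residuals A_n - A_hat_n = R_n - mu_R are IID by Proposition 2.1,
# which is what makes the estimating equation below optimal.
```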

Returning to estimation issues, we obtain an estimate of θ in the conditional least squares sense by minimizing the sum of squares

Q_n(\theta) = \sum_{i=1}^{n} (A_i - \hat{A}_i)^2   (2.2.11)

in the argument θ. Such estimates are solutions to S_n(θ) = 0, where

S_n(\theta) = \sum_{i=1}^{n} (A_i - \hat{A}_i(\theta)) \hat{A}'_i(\theta);   (2.2.12)

here, ′ denotes partial derivative with respect to θ and the notation \hat{A}_i(θ) indicates dependence of \hat{A}_i on θ. Since \mathrm{Var}(A_i - \hat{A}_i \mid A_1, \ldots, A_{i-1}) is constant in i, (2.2.12) is also an optimal estimating equation in the sense of Godambe (1985).

Observe from (2.2.10) that

\hat{A}'_i(\theta) = -\sum_{k=1}^{i-1} \alpha'_{i-k}(\theta) A_k + \mu'_R(\theta)   (2.2.13)

is a linear combination of 1, A_1, A_2, . . . , A_{i-1}; hence \hat{A}'_i(θ) is also a linear combination of 1, R_1, . . . , R_{i-1} by (2.2.4). Therefore, \hat{A}'_i(θ) is independent of A_i - \hat{A}_i = R_i - μ_R for each i.

From this, it follows that {S_n(θ)}_{n=1}^{\infty} is a martingale adapted to {F_n}_{n=1}^{\infty}, where F_n is the σ-field generated by A_1, A_2, . . . , A_n.

An identifiability issue arises. The component in S_n(θ) corresponding to any parameter appearing in E[Y] or λ simplifies to the equation

\sum_{i=1}^{n} (A_i - \hat{A}_i) = 0.   (2.2.14)

As this same equation appears for each parameter arising in E[Y] or λ, one will not be able to identify both λ and all parameters in E[Y] simultaneously. In particular, note that parameters in the distribution of Y appear in the estimating equation only through E[Y]. Henceforth, we will focus on estimating the single parameter \lambda E[Y] \overset{\text{def}}{=} \theta_1. We view this as equivalent to assuming that λ = 1 and estimating the shot mean E[Y].

Now let θ_2, . . . , θ_{p+1} denote the p parameters of the shot function h, so that θ = (θ_1, θ_2, . . . , θ_{p+1})^T, where T indicates matrix transpose. For a fixed 2 ≤ j ≤ p + 1, use (2.2.13) to get

\sum_{i=1}^{n} \left( \sum_{k=1}^{i-1} \frac{\partial \alpha_{i-k}}{\partial \theta_j} A_k + \theta_1 \frac{\partial \kappa}{\partial \theta_j} \right) (A_i - \hat{A}_i(\theta)) = 0,   (2.2.15)

where \kappa = \int_0^1 h(u)\,du. Observe that κ depends only on θ_2, . . . , θ_{p+1}.

The remaining shot parameters θ_2, . . . , θ_{p+1} are tacitly assumed to be parameterized in an identifiable manner. Of course, redundancies in the component equations in (2.2.12) indicate non-identifiability of model parameters from the observed data.

Returning to the general theory for interval similar shot functions, let θ_∞ denote the true value of θ and let θ* be some element in the δ-neighborhood {θ : ‖θ - θ_∞‖_2 < δ} of θ_∞ for some fixed δ > 0. In our ensuing technical work, the following result, which is a combination of Theorems 6.4 and 6.5 in Hall and Heyde (1980), will be useful.

Proposition 2.2. Suppose that as n → ∞,

(Condition EE1)  n^{-1} S'_n(\theta_\infty) \xrightarrow{P} W(\theta_\infty),

(Condition EE2)  \lim_{n \to \infty} \sup_{\delta \downarrow 0} (n\delta)^{-1} |T_n(\theta^*)_{(j,k)}| < \infty \ \text{a.e.},

(Condition EE3)  n^{-1/2} S_n(\theta_\infty) \xrightarrow{D} N(0, \Sigma(\theta_\infty)),   (2.2.16)

where T_n(θ*) = S'_n(θ*) - S'_n(θ_∞), T_n(θ*)_{(j,k)} denotes the (j, k)th entry of T_n(θ*), and Σ(θ_∞) and W(θ_∞) are positive definite and symmetric matrices. Then there exists a sequence of estimators \hat{θ}_n such that \hat{θ}_n → θ_∞ a.e., and for any ε > 0 there is an event E with P(E) > 1 - ε and a natural number n_0 such that, on E with n > n_0, \hat{θ}_n satisfies the least squares equations (2.2.12) and Q_n attains a relative minimum at \hat{θ}_n. Moreover, the following asymptotic normality holds:

\sqrt{n}(\hat{\theta}_n - \theta_\infty) \xrightarrow{D} N(0,\, W^{-1}(\theta_\infty) \Sigma(\theta_\infty) W^{-1}(\theta_\infty)).   (2.2.17)

Before continuing with the general study of solutions to (2.2.12), an example with explicit computations is instructive.

Example 2.3. Consider the exponential shot function h(t) = e^{-γt} I_{[0,∞)}(t), where I_A(t) denotes the indicator function over the set A. Note that θ_1 = E[Y] and θ_2 = γ. Then β_n = e^{-nγ}, and (2.2.6) gives α_0 = 1, α_1 = -e^{-γ}, and α_k = 0 for k ≥ 2. Thus, (2.2.10) is

\hat{A}_n = e^{-\gamma} A_{n-1} + \frac{E[Y](1 - e^{-\gamma})}{\gamma}.   (2.2.18)

To simplify bookkeeping, let β = (ρ, m)^T, where ρ = e^{-γ} and m = γ^{-1} E[Y](1 - e^{-γ}). Then (2.2.18) is \hat{A}_n = ρ A_{n-1} + m and (2.2.5) is R_n = A_n - ρ A_{n-1}. Minimizing Q_n(θ) as a function of ρ and m yields

\hat{\rho}_n = \frac{n \sum_{i=2}^{n} A_i A_{i-1} - \left(\sum_{i=2}^{n} A_{i-1}\right)\left(\sum_{i=1}^{n} A_i\right)}{n \sum_{i=2}^{n} A_{i-1}^2 - \left(\sum_{i=2}^{n} A_{i-1}\right)^2}   (2.2.19)

and

\hat{m}_n = n^{-1} \left[ \sum_{i=1}^{n} A_i - \hat{\rho}_n \sum_{i=2}^{n} A_{i-1} \right].   (2.2.20)

Simple manipulations show that

\hat{\gamma}_n = -\ln(\hat{\rho}_n) \quad \text{and} \quad \widehat{E[Y]}_n = \frac{-\hat{m}_n \ln(\hat{\rho}_n)}{1 - \hat{\rho}_n}.   (2.2.21)

Example A3 in the Appendix shows that \hat{ρ}_n → ρ and \hat{m}_n → m in probability as n → ∞. Hence \hat{γ}_n → γ and \widehat{E[Y]}_n → E[Y] in probability as n → ∞, and the estimators are consistent.
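The closed forms (2.2.19)-(2.2.21) translate directly into code. The following minimal Python sketch is illustrative only; the array A is assumed to hold A_1, . . . , A_n.

```python
import numpy as np

def cls_estimates(A):
    """Conditional least squares estimates (2.2.19)-(2.2.21) for the
    exponential shot function h(t) = exp(-gamma * t); returns (E[Y]_hat, gamma_hat)."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    lagged, current = A[:-1], A[1:]                # A_1..A_{n-1} and A_2..A_n
    num = n * (current * lagged).sum() - lagged.sum() * A.sum()
    den = n * (lagged ** 2).sum() - lagged.sum() ** 2
    rho = num / den                                # (2.2.19)
    m = (A.sum() - rho * lagged.sum()) / n         # (2.2.20)
    EY = -m * np.log(rho) / (1.0 - rho)            # (2.2.21)
    gamma = -np.log(rho)                           # (2.2.21)
    return EY, gamma
```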

Now (2.2.12) is

S_n(\beta) = \begin{pmatrix} -\sum_{i=1}^{n} (A_i - \rho A_{i-1} - m) A_{i-1} \\ -\sum_{i=1}^{n} (A_i - \rho A_{i-1} - m) \end{pmatrix};   (2.2.22)

differentiating this gives

S'_n(\beta) = \begin{pmatrix} \sum_{i=1}^{n} A_{i-1}^2 & \sum_{i=1}^{n} A_{i-1} \\ \sum_{i=1}^{n} A_{i-1} & n \end{pmatrix}.   (2.2.23)

As S'_n(β) does not depend on β, Condition EE2 holds trivially. Example A3 in the Appendix establishes Condition EE1 with

W(\beta) = \begin{pmatrix} \frac{\sigma_R^2}{1-\rho^2} + \frac{m^2}{(1-\rho)^2} & \frac{m}{1-\rho} \\ \frac{m}{1-\rho} & 1 \end{pmatrix}.   (2.2.24)

Taking Y to be exponentially distributed, we have E[Y^2] = 2E^2[Y] and (2.2.3) gives

\sigma_R^2 = 2E^2[Y] \int_0^1 e^{-2\gamma u}\,du = \frac{-m^2 (\ln \rho)(1+\rho)}{1-\rho}.   (2.2.25)

Thus,

W(\beta) = \begin{pmatrix} \frac{m^2(1-\ln \rho)}{(1-\rho)^2} & \frac{m}{1-\rho} \\ \frac{m}{1-\rho} & 1 \end{pmatrix}.   (2.2.26)

As E[|Y|^3] < ∞, Proposition A6 in the Appendix proves Condition EE3 with Σ(β) = σ_R^2 W(β). Using this in (2.2.26) now gives

W(\beta)^{-1} \Sigma(\beta) W(\beta)^{-1} = \sigma_R^2 W(\beta)^{-1} = \begin{pmatrix} 1-\rho^2 & -m(1+\rho) \\ -m(1+\rho) & \frac{m^2 (1-\ln \rho)(1+\rho)}{1-\rho} \end{pmatrix}.   (2.2.27)

The asymptotic normality in Proposition 2.2 supplies

\sqrt{n}(\hat{\beta}_n - \beta) \xrightarrow{D} N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix},\, \begin{pmatrix} 1-\rho^2 & -m(1+\rho) \\ -m(1+\rho) & \frac{m^2 (1-\ln \rho)(1+\rho)}{1-\rho} \end{pmatrix} \right).   (2.2.28)

A delta method can be used to convert the analysis back into the original parameters:

\sqrt{n}\left( \begin{pmatrix} \widehat{E[Y]}_n \\ \hat{\gamma}_n \end{pmatrix} - \begin{pmatrix} E[Y] \\ \gamma \end{pmatrix} \right) \xrightarrow{D} N(0, \Sigma^*),   (2.2.29)

where

\Sigma^* = \frac{1+e^{-\gamma}}{1-e^{-\gamma}} \begin{pmatrix} \left( \gamma + \frac{(1-e^{-\gamma})^2}{\gamma^2 e^{-2\gamma}} \right) E^2[Y] & \frac{(1-e^{-\gamma})^2 E[Y]}{\gamma e^{-2\gamma}} \\ \frac{(1-e^{-\gamma})^2 E[Y]}{\gamma e^{-2\gamma}} & \frac{(1-e^{-\gamma})^2}{e^{-2\gamma}} \end{pmatrix}.   (2.2.30)

A simulation was conducted to further explore the properties of the estimators for the exponential shot function h(t) = e^{-γt} 1_{[0,∞)}(t). Ten thousand independent realizations of such a shot noise process with parameter γ = 1 and IID exponentially distributed Y_n's with mean E[Y] = 4 were simulated. Table 2.1 reports the sample means and standard errors of the resulting set of ten thousand estimates of γ and E[Y] for varying sample lengths n. Sample average parameter estimates are listed with sample standard errors in parentheses. The sample standard errors agree with the theoretical ones implied by (2.2.30). For example, when n = 5000 and γ = 1, the standard error obtained from the (2, 2)th element of Σ* in (2.2.30) is (Σ*_{(2,2)}/n)^{1/2} ≈ 0.0357, which is consistent with the sample standard error of 0.0360 reported.

Table 2.1: Shot Noise Simulation Results with h(t) = e^{-γt} I_{[0,∞)}(t). Displayed are parameter estimates and standard errors (in parentheses) for various sample sizes.

Sample Size   E[Y]              γ
50            4.7932 (2.1148)   1.2020 (0.4592)
100           4.3659 (1.2753)   1.0901 (0.2765)
200           4.1861 (0.8633)   1.0473 (0.1850)
250           4.1419 (0.7589)   1.0348 (0.1641)
500           4.0679 (0.5317)   1.0170 (0.1138)
1000          4.0419 (0.3775)   1.0106 (0.0816)
2000          4.0164 (0.2612)   1.0041 (0.0567)
5000          4.0088 (0.1652)   1.0022 (0.0360)

λ = 1, E[Y] = 4, γ = 1; #Simulations = 10,000
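For readers wishing to reproduce a study of this kind, a compact Monte Carlo loop suffices. The sketch below is illustrative only and reuses the hypothetical helpers simulate_shot_noise (from the Chapter 1 sketch) and cls_estimates (above); reps is kept small for a quick check.

```python
import numpy as np

def summarize(n, reps=1000, gamma=1.0, mean_mark=4.0):
    """Monte Carlo means and standard errors of the CLS estimates
    (E[Y]_hat, gamma_hat) for h(t) = exp(-gamma * t)."""
    shot = lambda t: np.exp(-gamma * t)
    est = np.array([cls_estimates(simulate_shot_noise(n, shot=shot, mean_mark=mean_mark))
                    for _ in range(reps)])
    return est.mean(axis=0), est.std(axis=0)

means, ses = summarize(500)
print(means, ses)  # compare with the n = 500 row of Table 2.1
```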

Returning to the asymptotic properties of estimators from a general interval similar shot function h, let a = {a_n}_{n=0}^{\infty} and b = {b_n}_{n=0}^{\infty} be two sequences, which may be vector- or matrix-valued. The convolution c = {c_n}_{n=0}^{\infty}, defined at index n via

c_n = \sum_{k=0}^{n} a_{n-k} b_k,   (2.2.31)

will be denoted by c = a ∗ b. For a sequence that begins at index 1 rather than at index 0, the zeroth term is taken as zero so that (2.2.31) remains meaningful. Let ‖·‖_1 denote the 1-norm defined by ‖a‖_1 = \sum_{n=0}^{\infty} |a_n|, and let u′(β) = {∂u_n(β)/∂β}_{n=0}^{\infty} denote the term-by-term derivative of {u_n(β)}_{n=0}^{\infty}.

For ease of exposition, we make the following notational abbreviations: A = {A_n}_{n=1}^{\infty}, α(θ) = {α_n(θ)}_{n=0}^{\infty}, and β(θ) = {β_n(θ)}_{n=0}^{\infty}. In convolution notation, (2.2.4) and (2.2.5) are A = β(θ_∞) ∗ R(θ_∞) and R(θ_∞) = α(θ_∞) ∗ A.

For θ in some sufficiently close neighborhood of θ_∞, define R(θ) = α(θ) ∗ A as a function of θ conditional on the realization of A, and write R(θ) = {R_n(θ)}_{n=0}^{\infty}.

The above convolution identities yield R(θ) = α(θ) ∗ β(θ_∞) ∗ R(θ_∞); differentiating this gives R′(θ) = α′(θ) ∗ β(θ_∞) ∗ R(θ_∞) and R″(θ) = α″(θ) ∗ β(θ_∞) ∗ R(θ_∞). As α_0(θ) ≡ 1, α′_0(θ) = 0, and α″_0(θ) = 0, it follows that the zeroth terms in α′(θ) ∗ β(θ_∞) and α″(θ) ∗ β(θ_∞) are also zero. Our main result for interval similar shot noise can now be stated.

Theorem 2.4. Suppose that A is a shot noise process with an interval similar h. Assume that E[|Y|^3] < ∞, that ‖α″(θ) ∗ β(θ_∞)‖_1 and μ″_R(θ) are continuous at θ = θ_∞, and that W(θ) = σ_R^2(θ) W_1(θ) + W_2(θ) is positive definite at θ = θ_∞, where

W_1(\theta) = \sum_{i=0}^{\infty} [\alpha'(\theta) * \beta(\theta)]_i \, [\alpha'(\theta) * \beta(\theta)]_i^T   (2.2.32)

([α′(θ) ∗ β(θ)]_i denotes the ith element in α′(θ) ∗ β(θ)) and

W_2(\theta) = \left[ \mu_R(\theta) \sum_{i=0}^{\infty} [\alpha'(\theta) * \beta(\theta)]_i - \mu'_R(\theta) \right] \left[ \mu_R(\theta) \sum_{i=0}^{\infty} [\alpha'(\theta) * \beta(\theta)]_i - \mu'_R(\theta) \right]^T.   (2.2.33)

Then the parameter estimators are consistent and asymptotically normal as n → ∞ with

\sqrt{n}(\hat{\theta}_n - \theta_\infty) \xrightarrow{D} N(0,\, \sigma_R^2(\theta_\infty) W^{-1}(\theta_\infty)).   (2.2.34)

The proof of Theorem 2.4 is given in Section 4.

Remark 2.5. As ‖a ∗ b‖_1 ≤ ‖a‖_1 ‖b‖_1, if ‖β(θ_∞)‖_1 < ∞ and ‖α″(θ)‖_1 is continuous at θ = θ_∞, then ‖α″(θ) ∗ β(θ_∞)‖_1 < ∞ and the proof and conclusions of Theorem 2.4 still hold. For general calculation of α′(θ) ∗ β(θ) and α″(θ) ∗ β(θ), we offer the following power series method. Let α(θ; z) = \sum_{i=0}^{\infty} α_i(θ) z^i and β(θ; z) = \sum_{i=0}^{\infty} β_i(θ) z^i. Then, by (2.2.6), the convolutions α′(θ) ∗ β(θ) and α″(θ) ∗ β(θ) are the coefficients of the Taylor series for the products α′(θ; z)β(θ; z) and α″(θ; z)β(θ; z), respectively.
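The power series method amounts to polynomial multiplication of coefficient sequences, so truncated convolutions can be checked numerically. The sketch below is illustrative only; it uses the shot function of Example 2.7 below, for which α_k = (-γ)^k and hence α′_k = k(-1)^k γ^{k-1}.

```python
import numpy as np

gamma, N = 0.4, 10                                  # truncation level N is arbitrary
beta = np.array([1.0, gamma] + [0.0] * (N - 1))     # beta = {1, gamma, 0, 0, ...}
alpha_prime = np.array([k * (-1) ** k * gamma ** (k - 1) if k > 0 else 0.0
                        for k in range(N + 1)])     # term-by-term derivative in gamma
conv = np.convolve(alpha_prime, beta)[: N + 1]      # coefficients of alpha'(z) * beta(z)
target = np.array([0.0] + [(-1) ** i * gamma ** (i - 1) for i in range(1, N + 1)])
print(np.allclose(conv, target))                    # True: matches [alpha' * beta]_i
```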


Example 2.6. Consider again the exponential shot function h(t) = e^{-γt} I_{[0,∞)}(t). Here, β_n(θ) = e^{-nγ} for n ≥ 0 and α(θ) = {1, -e^{-γ}, 0, 0, . . .}. Hence, α′(θ) is an absolutely summable sequence. The matrix W(θ_∞) requires computation of α′(θ) ∗ β(θ). The explicit expressions above provide α′(θ) ∗ β(θ) = {0, e^{-γ}, e^{-2γ}, . . .}. Applying (2.2.32) and (2.2.33) gives the W(θ_∞) identified in (2.2.26) earlier.

Example 2.7. Consider the interval similar shot function

h(t) = \begin{cases} 1, & \text{if } t \in [0, 1]; \\ \gamma, & \text{if } t \in (1, 2]; \\ 0, & \text{otherwise}, \end{cases}   (2.2.35)

where -1 < γ < 1. Then β(θ) = {1, γ, 0, 0, . . .} and α(θ) = {1, -γ, . . . , (-γ)^n, . . .}. It follows that the terms in α′(θ) and α″(θ) have alternating positive and negative signs, with ‖α′(θ)‖_1 = (1 - |γ|)^{-2} and ‖α″(θ)‖_1 = 2(1 - |γ|)^{-3}. Hence, Theorem 2.4 again applies. For this shot function, we obtain [α′(θ) ∗ β(θ)]_i = (-1)^i γ^{i-1} for i ≥ 1, with [α′(θ) ∗ β(θ)]_0 = 0. This gives

W(\theta_\infty) = \begin{pmatrix} 1 & \frac{E[Y]}{1+\gamma} \\ \frac{E[Y]}{1+\gamma} & \frac{E^2[Y]}{(1+\gamma)^2} + \frac{\sigma_R^2}{1-\gamma^2} \end{pmatrix},   (2.2.36)

which leads to

\sqrt{n}\left( \begin{pmatrix} \widehat{E[Y]}_n \\ \hat{\gamma}_n \end{pmatrix} - \begin{pmatrix} E[Y] \\ \gamma \end{pmatrix} \right) \xrightarrow{D} N\left(0,\, \begin{pmatrix} \sigma_R^2 + \frac{1-\gamma}{1+\gamma} E^2[Y] & -(1-\gamma)E[Y] \\ -(1-\gamma)E[Y] & 1-\gamma^2 \end{pmatrix}\right).   (2.2.37)

Note that if Y is exponentially distributed, then σ_R^2 = 2E^2[Y] and the above asymptotic covariance matrix can be further simplified.


2.3 Other Shot Functions

There exist simple shot functions that are not interval similar. One example is

h(t) = \max(1 - t/\beta,\, 0)\, I_{[0,\infty)}(t)   (2.3.1)

for some unknown β > 0. In such situations, no tractable form for E[A_n | A_1, . . . , A_{n-1}] exists (Lund et al. 1999); however, parameter estimators can be constructed from alternative methods, such as the linear prediction paradigm investigated in Sørensen (2000).

In this work, we restrict attention to the case where h is compactly supported, a common assumption in other shot noise analyses (Doney and O'Brien 1991; McCormick 1997). For a compactly supported h, there exists an L(θ) such that h(t) = 0 for all t ≥ L(θ). From (2.1.1), A is seen to be L(θ)-dependent in the sense that A_t and A_r are independent when |t - r| > L(θ) and t, r ≥ 0. Equation (2.1.3) shows that E[A_t] is constant in t when t ≥ L(θ), and that Cov(A_t, A_r) depends only on |t - r| when min{t, r} > L(θ). Proposition 3.2.1 in Brockwell and Davis (1991) implies that {A_{i+q}, i ≥ 1} is a sample from a moving-average time series of order q = ⌈L(θ)⌉. The well-studied asymptotic properties of moving-average processes could now, in principle, be used to obtain the asymptotic properties of shot noise parameter estimators. However, a more direct, and also less intensive, method of estimation can be obtained from moment techniques. We will consider the case where h has one parameter for simplicity of exposition: θ = (E[Y], θ_2)^T. The ideas are similar in higher dimensions.

From the L(θ)-dependence of A, it follows that uA_t + vA_t^2 is also L(θ)-dependent for any fixed u and v. Let m_k = \lim_{t \to \infty} E[A_t^k] be the limiting kth moment of A_t when this is finite. The following result can be established from the L(θ)-dependence of A.

Proposition 3.1. Suppose that A is a shot noise process with a compactly supported h. If E[|Y|^5] < ∞, then n^{-1} \sum_{i=1}^{n} A_i^k \xrightarrow{a.e.} m_k for 1 ≤ k < 5, and the central limit theorem applies to sample averages of uA_t + vA_t^2:

n^{-1/2} \sum_{i=1}^{n} \left( u A_i + v A_i^2 - u m_1 - v m_2 \right) \xrightarrow{D} N(0, \sigma^2(u, v))   (2.3.2)

as n → ∞, where

\sigma^2(u, v) = u^2 s_{11} + 2uv\, s_{12} + v^2 s_{22}   (2.3.3)

and

\begin{pmatrix} s_{11} & s_{12} \\ s_{12} & s_{22} \end{pmatrix}   (2.3.4)

is a positive definite matrix. The following joint normality can also be verified:

\sqrt{n} \begin{pmatrix} n^{-1} \sum_{i=1}^{n} A_i - m_1 \\ n^{-1} \sum_{i=1}^{n} A_i^2 - m_2 \end{pmatrix} \xrightarrow{D} N\left(0,\, \begin{pmatrix} s_{11} & s_{12} \\ s_{12} & s_{22} \end{pmatrix}\right).   (2.3.5)

The parameter estimators \hat{θ}_n = (\widehat{E[Y]}_n, \hat{θ}_{2,n})^T are taken as solutions to the moment equations n^{-1} \sum_{t=1}^{n} A_t = m_1 and n^{-1} \sum_{t=1}^{n} A_t^2 = m_2. Here, m_1 and m_2 are functions of E[Y] and θ_2. Proposition 3.1 and a delta method, assuming suitable regularity of m_1 and m_2, show that \hat{θ}_n is consistent and asymptotically normal. We illustrate the ideas in the following example.

Example 3.2. Suppose that Y is exponentially distributed with mean E[Y] and that h is as in (2.3.1). Then m_1 = E[Y](β/2) and m_2 = E^2[Y](2β/3 + β^2/4). Hence the solutions to the moment estimating equations are

\widehat{E[Y]}_n = \frac{3}{4} \cdot \frac{n^{-1} \sum_{i=1}^{n} A_i^2 - \left(n^{-1} \sum_{i=1}^{n} A_i\right)^2}{n^{-1} \sum_{i=1}^{n} A_i}   (2.3.6)

and

\hat{\beta}_n = \frac{8}{3} \cdot \frac{\left(n^{-1} \sum_{i=1}^{n} A_i\right)^2}{n^{-1} \sum_{i=1}^{n} A_i^2 - \left(n^{-1} \sum_{i=1}^{n} A_i\right)^2}.   (2.3.7)
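A minimal Python sketch of (2.3.6)-(2.3.7), given here only as an illustration:

```python
import numpy as np

def moment_estimates(A):
    """Moment estimators (2.3.6)-(2.3.7) for h(t) = max(1 - t/beta, 0)
    with exponential shot marks; returns (E[Y]_hat, beta_hat)."""
    A = np.asarray(A, dtype=float)
    m1_hat = A.mean()
    spread = (A ** 2).mean() - m1_hat ** 2       # sample second moment minus squared mean
    EY = 0.75 * spread / m1_hat                  # (2.3.6)
    beta = (8.0 / 3.0) * m1_hat ** 2 / spread    # (2.3.7)
    return EY, beta
```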


The law of large numbers in Proposition 3.1 and a delta method now give

\lim_{n \to \infty} n\, \mathrm{Var}(\hat{\theta}_n) = D^{-1} \begin{pmatrix} s_{11} & s_{12} \\ s_{12} & s_{22} \end{pmatrix} (D^T)^{-1},   (2.3.8)

where D is the Jacobian matrix

D = \frac{\partial(m_1, m_2)}{\partial(E[Y], \beta)} = \begin{pmatrix} \frac{\beta}{2} & \frac{E[Y]}{2} \\ \left(\frac{4\beta}{3} + \frac{\beta^2}{2}\right) E[Y] & \left(\frac{2}{3} + \frac{\beta}{2}\right) E^2[Y] \end{pmatrix}.   (2.3.9)

For the shot function in (2.3.1), s_{11}, s_{12}, and s_{22} can be computed from (2.1.1) with a sleepless afternoon:

s_{11} = E^2[Y] \left[ \frac{(q^2 + 2q^3 + q^4)\beta^{-2}}{6} - (q + q^2) + \frac{(2 + 4q)\beta}{3} \right],   (2.3.10)

s_{12} = E^3[Y] \left[ \frac{(3q + 10q^2 + 8q^3 + q^4)\beta^{-1}}{6} - (3q + 3q^2) + \frac{(3 + 4q - 2q^2)\beta}{2} + \frac{(2 + 4q)\beta^2}{3} \right],   (2.3.11)

s_{22} = E^4[Y] \left[ \frac{(10q + 126q^2 - 70q^3 - 630q^4 - 546q^5 - 42q^6 + 60q^7)\beta^{-4}}{945} + \frac{(4q - 40q^3 - 60q^4 - 24q^5)\beta^{-2}}{45} + \frac{(24q + 76q^2 + 56q^3 + 4q^4)\beta^{-1}}{9} + \frac{-62q - 41q^2 + 22q^3 + q^4}{6} + \frac{(72 + 14q - 130q^2)\beta}{15} + \frac{(35 + 61q - 9q^2)\beta^2}{9} + \frac{(2 + 4q)\beta^3}{3} \right],   (2.3.12)

where q = β - 1 if β is an integer and q = ⌊β⌋ otherwise.

Using the above expressions in (2.3.8) and performing another tedious computation gives the explicit convergence

\sqrt{n}\left( \begin{pmatrix} \widehat{E[Y]}_n \\ \hat{\beta}_n \end{pmatrix} - \begin{pmatrix} E[Y] \\ \beta \end{pmatrix} \right) \xrightarrow{D} N(0, \Sigma^*),   (2.3.13)

where

\Sigma^*_{1,1} = E^2[Y] \left[ \frac{(10q + 126q^2 - 70q^3 - 630q^4 - 546q^5 - 42q^6 + 60q^7)\beta^{-6}}{420} + \frac{(3q + 10q^2 - 10q^3 - 35q^4 - 18q^5)\beta^{-4}}{15} + (3q + 10q^2 + 8q^3 + q^4)\beta^{-3} + \frac{(-23q - 17q^2 + 6q^3)\beta^{-2}}{2} + \frac{(67 + 44q - 90q^2)\beta^{-1}}{15} + (2 + 4q) \right],   (2.3.14)

\Sigma^*_{1,2} = E[Y] \left[ \frac{(-10q - 126q^2 + 70q^3 + 630q^4 + 546q^5 + 42q^6 - 60q^7)\beta^{-5}}{420} + \frac{(-3q - 20q^2 - 10q^3 + 25q^4 + 18q^5)\beta^{-3}}{15} - \frac{(3q + 11q^2 + 10q^3 + 2q^4)\beta^{-2}}{2} + \frac{(13q + 7q^2 - 6q^3)\beta^{-1}}{2} + \frac{-79 + 22q + 180q^2}{30} - (2 + 4q)\beta \right],   (2.3.15)

\Sigma^*_{2,2} = \frac{(10q + 126q^2 - 70q^3 - 630q^4 - 546q^5 - 42q^6 + 60q^7)\beta^{-4}}{420} + \frac{(3q + 40q^2 + 50q^3 - 5q^4 - 18q^5)\beta^{-2}}{15} + (q^2 + 2q^3 + q^4)\beta^{-1} + \frac{-11q - 5q^2 + 6q^3}{2} + \frac{(52 + 14q - 90q^2)\beta}{15} + (2 + 4q)\beta^2.   (2.3.16)

A simulation study for the shot function in (2.3.1) was conducted with β = 2.5. The other specifications are identical to those in the Section 2 simulation study; in particular, the shot marks are IID and exponentially distributed with mean E[Y] = 4. The results in Table 2.2 indicate good performance of the moment estimators, with perhaps some bias; the sample variances are compatible with the theoretical variances computed in (2.3.13).

Table 2.2: Shot Noise Simulation Results with h(t) = max(1 - t/β, 0) I_{[0,∞)}(t). Displayed are parameter estimates and standard errors (in parentheses) for various sample sizes.

Sample Size   E[Y]              β
50            3.7799 (1.1700)   2.7979 (0.8141)
100           3.9006 (0.8639)   2.6502 (0.5668)
200           3.9443 (0.6308)   2.5841 (0.3999)
250           3.9514 (0.5635)   2.5644 (0.3529)
500           3.9841 (0.3997)   2.5290 (0.2451)
1000          3.9842 (0.2839)   2.5184 (0.1758)
2000          3.9916 (0.2023)   2.5096 (0.1255)
5000          3.9967 (0.1257)   2.5031 (0.0780)

λ = 1, E[Y] = 4, β = 2.5; #Simulations = 10,000

2.4 Proof of Theorem 2.4.

Our work essentially consists of verifying Conditions EE1-EE3 in Proposition 2.2 with Σ(θ) = σ_R^2(θ) W(θ). For each fixed θ, A_i - \hat{A}_i = R_i(θ) - μ_R(θ) for all i by interval similarity; differentiating this gives

S_n(\theta) = \sum_{i=1}^{n} [R_i(\theta) - \mu_R(\theta)][R'_i(\theta) - \mu'_R(\theta)].   (2.4.1)

Now make the partition S'_n(θ) = I_n(θ) + J_n(θ), where

I_n(\theta) = \sum_{i=1}^{n} [R'_i(\theta) - \mu'_R(\theta)][R'_i(\theta) - \mu'_R(\theta)]^T   (2.4.2)

and

J_n(\theta) = \sum_{i=1}^{n} [R_i(\theta) - \mu_R(\theta)][R''_i(\theta) - \mu''_R(\theta)].   (2.4.3)

Recall that R′(θ) = α′(θ) ∗ β(θ_∞) ∗ R(θ_∞) and R″(θ) = α″(θ) ∗ β(θ_∞) ∗ R(θ_∞). Proposition A4 in the Appendix implies that n^{-1} I_n(θ_∞) → W(θ_∞) in probability, and Proposition A5 gives n^{-1} J_n(θ_∞) → 0 a.e. Thus, Condition EE1 holds. Proposition A6 in the Appendix shows that Condition EE3 holds with Σ(θ_∞) = σ_R^2(θ_∞) W(θ_∞).

Hence it remains to verify Condition EE2. As an alternative to this, we prove the related condition that T_n(θ*) = T_{n,1}(θ*) + T_{n,2}(θ*), where T_{n,1}(θ) satisfies Condition EE2 and T_{n,2}(θ*)/n → 0 a.e. if θ* is sufficiently close to θ_∞. Reworking the original proof of Proposition 2.2, and invoking Conditions EE1 and EE3, shows that the conclusions of Theorem 2.4 still remain true under this 'weakened Condition EE2'.

Let θ be in a close neighborhood of θ_∞. Then I_n(θ) = I_n(θ_∞) + I_{n,1}(θ) + I_{n,2}(θ), where

I_{n,1}(\theta) = \sum_{i=1}^{n} [R'_i(\theta) - \mu'_R(\theta)][R'_i(\theta) - R'_i(\theta_\infty) - (\mu'_R(\theta) - \mu'_R(\theta_\infty))]^T,   (2.4.4)

I_{n,2}(\theta) = \sum_{i=1}^{n} [R'_i(\theta) - R'_i(\theta_\infty) - (\mu'_R(\theta) - \mu'_R(\theta_\infty))][R'_i(\theta_\infty) - \mu'_R(\theta_\infty)]^T.   (2.4.5)

Similarly, write J_n(θ) = J_n(θ_∞) + J_{n,1}(θ) + J_{n,2}(θ), where

J_{n,1}(\theta) = \sum_{i=1}^{n} [R_i(\theta) - R_i(\theta_\infty) - (\mu_R(\theta) - \mu_R(\theta_\infty))][R''_i(\theta) - \mu''_R(\theta)],   (2.4.6)

J_{n,2}(\theta) = \sum_{i=1}^{n} [R_i(\theta_\infty) - \mu_R(\theta_\infty)][R''_i(\theta) - R''_i(\theta_\infty) - (\mu''_R(\theta) - \mu''_R(\theta_\infty))].   (2.4.7)

Also, T_n(θ) = T_{n,1}(θ) + T_{n,2}(θ), where T_{n,1}(θ) = I_{n,1}(θ) + I_{n,2}(θ) + J_{n,1}(θ) and T_{n,2}(θ) = J_{n,2}(θ). Since ‖α″(θ) ∗ β(θ_∞)‖_1 is bounded in some neighborhood of θ_∞, Proposition A5 in the Appendix gives T_{n,2}(θ)/n → 0 a.e. if θ is sufficiently close to θ_∞.

By a Taylor expansion with remainder, there is some θ*_i between θ and θ_∞ such that

R'_i(\theta) - R'_i(\theta_\infty) - (\mu'_R(\theta) - \mu'_R(\theta_\infty)) = (R''_i(\theta^*_i) - \mu''_R(\theta^*_i))(\theta - \theta_\infty);   (2.4.8)

hence,

I_{n,1}(\theta) = \sum_{i=1}^{n} [R'_i(\theta) - \mu'_R(\theta)][\theta - \theta_\infty]^T [R''_i(\theta^*_i) - \mu''_R(\theta^*_i)]^T.   (2.4.9)


Unraveling the matrix multiplication in (2.4.9) shows that the (j, k)-entry of I_{n,1}(θ) can be expressed as

I_{n,1}(\theta)_{(j,k)} = \sum_{l=1}^{p+1} I_{n,1}(\theta)_{(j,k;l)},   (2.4.10)

where

I_{n,1}(\theta)_{(j,k;l)} = \sum_{i=1}^{n} [R'_i(\theta) - \mu'_R(\theta)]_{(j)} (\theta - \theta_\infty)_{(l)} [R''_i(\theta^*_i) - \mu''_R(\theta^*_i)]_{(k,l)} = (\theta - \theta_\infty)_{(l)} \sum_{i=1}^{n} [R'_i(\theta) - \mu'_R(\theta)]_{(j)} [R''_i(\theta^*_i) - \mu''_R(\theta^*_i)]_{(k,l)}.   (2.4.11)

It follows that for each l = 1, 2, . . . , p + 1,

\sup_{\delta \downarrow 0} (n\delta)^{-1} |I_{n,1}(\theta^*)_{(j,k;l)}| \le n^{-1} \left| \sum_{i=1}^{n} [R'_i(\theta_\infty) - \mu'_R(\theta_\infty)]_{(j)} [R''_i(\theta_\infty) - \mu''_R(\theta_\infty)]_{(l,k)} \right|.   (2.4.12)

By Proposition A4 in the Appendix, the right side of the above inequality converges in probability as n → ∞; hence, for all l = 1, 2, . . . , p + 1,

\lim_{n \to \infty} \sup_{\delta \downarrow 0} (n\delta)^{-1} |I_{n,1}(\theta^*)_{(j,k;l)}| < \infty \quad \text{a.e.}   (2.4.13)

Now (2.4.10) and (2.4.13) together give

\lim_{n \to \infty} \sup_{\delta \downarrow 0} (n\delta)^{-1} |I_{n,1}(\theta^*)_{(j,k)}| < \infty \quad \text{a.e.}   (2.4.14)

Similarly,

\lim_{n \to \infty} \sup_{\delta \downarrow 0} (n\delta)^{-1} |I_{n,2}(\theta^*)_{(j,k)}| < \infty \quad \text{a.e.},   (2.4.15)

\lim_{n \to \infty} \sup_{\delta \downarrow 0} (n\delta)^{-1} |J_{n,1}(\theta^*)_{(j,k)}| < \infty \quad \text{a.e.}   (2.4.16)

Now (2.4.14)-(2.4.16) imply that T_{n,1}(θ) satisfies Condition EE2, completing our proof.


2.5 Appendix

This section establishes some of the technical facts used in the previous sections. We are brief, as most of the results are elementary. Let X = {X_n}_{n=1}^{\infty} be IID with mean μ, variance σ^2, and finite third moment, and let a = {a_n}_{n=0}^{\infty} and b = {b_n}_{n=0}^{\infty} be absolutely summable sequences of real numbers. Let Y = a ∗ X and Z = b ∗ X denote convolution sequences.

Proposition A1. (Strong Law of Large Numbers for Convolutions). As n → ∞,

n^{-1} \sum_{i=1}^{n} Y_i \xrightarrow{a.e.} \mu \sum_{i=0}^{\infty} a_i.   (2.5.1)

Proof. For each n ≥ 1, let \bar{X}_n = n^{-1} \sum_{k=1}^{n} X_k. Then \bar{X}_n → μ a.e. as n → ∞ by the strong law of large numbers for IID variates. Expanding and collecting terms gives

n^{-1} \sum_{i=1}^{n} Y_i = \sum_{k=0}^{n-1} \frac{(n-k) a_k}{n} \bar{X}_{n-k}.   (2.5.2)

For any point in the probability space where \bar{X}_n → μ, apply the Dominated Convergence Theorem in (2.5.2) (\bar{X}_n is bounded in n) to get n^{-1} \sum_{i=1}^{n} Y_i → μ \sum_{i=0}^{\infty} a_i for this point. This proves the result.

Proposition A2. As n → ∞,

n^{-1} \sum_{i=1}^{n} Y_i Z_i \xrightarrow{P} \sigma^2 \sum_{i=0}^{\infty} a_i b_i + \mu^2 \left( \sum_{i=0}^{\infty} a_i \right)\left( \sum_{i=0}^{\infty} b_i \right).   (2.5.3)

Proof. Make the convention that X_n = 0 for all n ≤ 0. Then

n^{-1} \sum_{i=1}^{n} Y_i Z_i = n^{-1} \sum_{t=1}^{n} \left( \sum_{u=0}^{t-1} a_u X_{t-u} \right)\left( \sum_{v=0}^{t-1} b_v X_{t-v} \right) = \sum_{u=0}^{\infty} \sum_{v=0}^{\infty} a_u b_v \, n^{-1} \sum_{t=1}^{n} X_{t-u} X_{t-v}.   (2.5.4)

For each fixed u and v, the strong law of large numbers for IID variates with a finite variance gives n^{-1} \sum_{t=1}^{n} X_{t-u} X_{t-v} \xrightarrow{a.e.} \mu^2 + \sigma^2 1_{[u=v]}. Hence, for p ≥ 1, as n → ∞,

\sum_{u=0}^{p} \sum_{v=0}^{p} a_u b_v \, n^{-1} \sum_{t=1}^{n} X_{t-u} X_{t-v} \xrightarrow{a.e.} \mu^2 \left( \sum_{u=0}^{p} a_u \right)\left( \sum_{v=0}^{p} b_v \right) + \sigma^2 \sum_{u=0}^{p} a_u b_u.   (2.5.5)

As \sum_{u=0}^{\infty} |a_u| < ∞, \sum_{v=0}^{\infty} |b_v| < ∞, and n^{-1} E\left[\left|\sum_{t=1}^{n} X_{t-u} X_{t-v}\right|\right] \le \sigma^2 + \mu^2 < ∞, we have

\lim_{p \to \infty} n^{-1} E\left[ \left| \sum_{u=p+1}^{\infty} \sum_{v=0}^{\infty} a_u b_v \sum_{t=1}^{n} X_{t-u} X_{t-v} \right| \right] = \lim_{p \to \infty} n^{-1} E\left[ \left| \sum_{u=0}^{\infty} \sum_{v=p+1}^{\infty} a_u b_v \sum_{t=1}^{n} X_{t-u} X_{t-v} \right| \right] \le \lim_{p \to \infty} (\sigma^2 + \mu^2) \sum_{u=0}^{\infty} |a_u| \sum_{v=p+1}^{\infty} |b_v| = 0.   (2.5.6)

As convergence in mean implies convergence in probability, (2.5.3) now follows from (2.5.5) and (2.5.6).

Example A3. Here, we establish the expressions in the law of large numbers used in Example 2.3. Propositions A1 and A2 establish existence of the almost sure limits n^{-1} \sum_{i=1}^{n} A_i → m_1 and n^{-1} \sum_{i=1}^{n} A_i^2 → m_2. Equations (2.1.3) and (2.1.4) could be used in principle to identify m_1 and m_2. Alternatively, note from (2.1.1) that A_t = ρA_{t-1} + R_t, where R_t is independent of A_s for any s ≤ t - 1. Taking expectations in this now yields our claimed forms:

n^{-1} \sum_{t=1}^{n} A_t \xrightarrow{P} \frac{m}{1-\rho},   (2.5.7)

n^{-1} \sum_{t=1}^{n} A_t^2 \xrightarrow{P} \frac{\sigma_R^2}{1-\rho^2} + \frac{m^2}{(1-\rho)^2},   (2.5.8)

and

n^{-1} \sum_{i=2}^{n} A_i A_{i-1} \xrightarrow{P} \frac{\rho \sigma_R^2}{1-\rho^2} + \frac{m^2}{(1-\rho)^2}.   (2.5.9)


Propositions A1 and A2 can be easily generalized to cases where both a and b are vector-valued sequences. Specifically, we state the following.

Proposition A4. Suppose that a and b are vector-valued sequences, and that c and d are two constant vectors. Then as n → ∞,

n^{-1} \sum_{i=1}^{n} (Y_i - c)(Z_i - d)^T \xrightarrow{P} \sigma^2 \sum_{i=0}^{\infty} a_i b_i^T + \left[ \mu \sum_{i=0}^{\infty} a_i - c \right]\left[ \mu \sum_{i=0}^{\infty} b_i - d \right]^T.   (2.5.10)

We now turn to martingale versions of the previous results. Let F_n be the σ-field generated by X_1, X_2, . . . , X_n for each n. Then Y_n = \sum_{k=1}^{n-1} a_{n-k} X_k is F_{n-1}-measurable and independent of X_n. Define S_n = \sum_{i=1}^{n} (X_i - μ)(Y_i - c) for each n, where c is a constant. Then {S_n}_{n=1}^{\infty} is a martingale adapted to {F_n}_{n=1}^{\infty}. From the assumed finite third moments, it is easy to show that E[|Y_n - c|^3] is bounded in n for any fixed c; hence, E[(Y_n - c)^2] is bounded in n and

E\left[ \sum_{i=1}^{\infty} i^{-2} E[(X_i - \mu)^2 (Y_i - c)^2 \mid F_{i-1}] \right] \le \sum_{i=1}^{\infty} i^{-2} \sigma^2 E[(Y_i - c)^2] < \infty.   (2.5.11)

Theorem 2.18 in Hall and Heyde (1980) now gives a strong law of large numbers for S_n. We state this and a central limit theorem together.

Proposition A5. If E[|X_1|^3] < ∞, then n^{-1} S_n → 0 a.e. as n → ∞ and n^{-1/2} S_n \xrightarrow{D} N(0, \sigma_S^2) as n → ∞, where

\sigma_S^2 = \sigma^2 \left[ \sigma^2 \sum_{n=0}^{\infty} a_n^2 + \left( \mu \sum_{n=0}^{\infty} a_n - c \right)^2 \right].   (2.5.12)

Proof. It remains to prove the central limit statement. Using the boundedness of E[|Y_n - c|^3] in n and a Markov inequality, one can establish a Lindeberg condition for {S_n}_{n=1}^{\infty}. To get the quoted form of σ_S^2, observe that

n^{-1} \sum_{i=1}^{n} E[(X_i - \mu)^2 (Y_i - c)^2 \mid F_{i-1}] = n^{-1} \sigma^2 \sum_{i=1}^{n} (Y_i - c)^2,   (2.5.13)


which converges in probability to σ_S^2 as n → ∞ by Proposition A4. The result now follows from Corollary 3.1 in Hall and Heyde (1980).

Proposition A6. (Vector Form of Proposition A5) If E[|X_1|^3] < ∞, a is a sequence of real vectors, and c is a constant vector, then as n → ∞, n^{-1} S_n → 0 a.e. and

n^{-1/2} S_n = n^{-1/2} \sum_{i=1}^{n} (X_i - \mu)(Y_i - c) \xrightarrow{D} N(0, \Sigma_S),   (2.5.14)

where

\Sigma_S = \sigma^2 \left[ \sigma^2 \sum_{n=0}^{\infty} a_n a_n^T + \left( \mu \sum_{n=0}^{\infty} a_n - c \right)\left( \mu \sum_{n=0}^{\infty} a_n - c \right)^T \right].   (2.5.15)

Acknowledgments. The authors acknowledge National Science Foundation support from grant DMS 0071383.

2.6 References

[1] Bevan, S., Kullberg, R., and Rice, J. (1979). An analysis of cell membrane noise. The Annals of Statistics, 7, 237-257.
[2] Brockwell, P. J. and Davis, R. A. (1991). Time Series: Theory and Methods, Second Edition. Springer-Verlag, New York.
[3] Doney, R. A. and O'Brien, G. L. (1991). Loud shot noise. Annals of Applied Probability, 1, 88-103.
[4] Godambe, V. P. (1985). The foundations of finite sample estimation in stochastic processes. Biometrika, 72, 419-428.
[5] Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and Its Application. Academic Press, New York.
[6] Lauger, P. (1975). Shot noise in ion channels. Biochimica et Biophysica Acta, 413, 1-10.
[7] Lund, R. B., Butler, R., and Paige, R. L. (1999). Prediction of shot noise. Journal of Applied Probability, 36, 374-388.
[8] McCormick, W. P. (1997). Extremes for shot noise processes with heavy-tailed amplitudes. Journal of Applied Probability, 34, 643-656.
[9] Rice, J. (1977). On generalized shot noise. Advances in Applied Probability, 9, 553-565.
[10] Sørensen, M. (2000). Prediction-based estimating functions. Econometrics Journal, 3, 123-147.


Chapter 3

Limiting Properties of Shot Noise Processes†

†Lund, R., McCormick, W. P., and Xiao, Y. Submitted to Stochastic Processes and their Applications, 7/15/2003.


Abstract

This paper studies limiting properties of shot noise processes. Versions of the law of large numbers and central limit theorem are derived under very weak conditions. Asymptotics for the case of heavy-tailed shot marks are also studied. Examples are given to demonstrate the utility of the results.

Key Words: Law of Large Numbers; Central Limit Theorem; Heavy-Tails.

3.1 Introduction

This paper studies the asymptotic properties of a shot noise process A = {A_t : t ≥ 0} defined at each time t ≥ 0 via

A_t = \sum_{i=1}^{N_t} Y_i h(t - \tau_i).   (3.1.1)

In (3.1.1), N = {N_t : t ≥ 0} is a Poisson process with arrival times {\tau_i}_{i=1}^{\infty} and rate generation parameter λ > 0, and {Y_i}_{i=1}^{\infty} is an independent and identically distributed (IID) sequence of shot marks that is assumed independent of N.

The shot function h, also called an impulse response function elsewhere, is assumed to be measurable, causal in the sense that h(t) = 0 for all t < 0, and not identically zero in the sense of (3.2.2) below. Shot noise processes and their applications are considered in Lauger (1975), Rice (1977), Bevan et al. (1979), and Lund et al. (1999), amongst others. Without loss of generality, one may take \sup_{t \ge 0} |h(t)| = 1. For if this is not the case, the shot noise {A^*_t, t ≥ 0}, defined for fixed t > 0 by A^*_t = \sum_{i=1}^{N_t} Y^*_i h^*(t - \tau_i) with Y^*_i = cY_i, h^*(t) = h(t)/c, and c = \sup_{t \ge 0} |h(t)|, satisfies such assumptions and has paths the same as those of A.

We consider a discrete-time history of observations A_{t_1}, . . . , A_{t_n}, where 0 < t_1 < . . . < t_n and t_n → ∞ as n → ∞. As little is gained by bookkeeping such general time indices, we work with equally spaced observations: t_i = ∆i for some ∆ > 0. Further, we rescale the time axis to make ∆ = 1.


Much of our interest here lies with the sample average \bar{A}_n = n^{-1} \sum_{t=1}^{n} A_t. In particular, we derive a strong law of large numbers and a central limit theorem for \bar{A}_n under considerable generality. Such results are useful in inference settings and, in view of the many stochastic processes that are variants of shot noise, for general conceptual understanding.

Let Y denote a generic copy of Y_i for any i ≥ 1. The first two moments of A are easily obtained from (3.1.1). Specifically, if E[|Y|] < ∞, then E[|A_t|] < ∞ for all t > 0 with

E[A_t] = \lambda E[Y] \int_0^t h(u)\,du.   (3.1.2)

If E[Y^2] < ∞, then process covariances are finite with

\mathrm{cov}(A_{t_1}, A_{t_2}) = \lambda E[Y^2] \int_0^{t_1} h(t_1 - u) h(t_2 - u)\,du   (3.1.3)

for all 0 ≤ t_1 ≤ t_2. The compound Poisson bound

|A_t| \le \sum_{i=1}^{N_t} |Y_i|   (3.1.4)

can be used to show that E[|A_t|^α] < ∞ if E[|Y|^α] < ∞ for any α > 0.

The rest of this paper proceeds as follows. In Section 2, we state the law of large numbers and central limit theorem for \bar{A}_n. Several applications of the results are given. Section 3 moves to the case of heavy-tailed shot marks. Section 4 contains all details of proof.

3.2 The Law of Large Numbers and Central Limit Theorem

This section states shot noise versions of the law of large numbers and central limit theorem and provides examples of their uses.


Theorem 2.1. (Law of Large Numbers) Consider a shot noise process A with E[|Y|] < ∞ and \int_0^{\infty} |h(u)|\,du < ∞. Then

\bar{A}_n \xrightarrow{a.s.} \lambda E[Y] \int_0^{\infty} h(u)\,du   (3.2.1)

as n → ∞, where the notation \xrightarrow{a.s.} indicates convergence almost surely.

To avoid trite work in the central limit theorem, we assume that the function T defined for fixed γ ∈ [0, 1) by

T(\gamma) = \sum_{n=0}^{\infty} h(n + \gamma)   (3.2.2)

is non-zero on a subset of [0, 1) with positive Lebesgue measure. When the assumption \int_0^{\infty} |h(u)|\,du < ∞ is in force, the series defining T converges almost surely with respect to Lebesgue measure on [0, 1]. Further, if T is zero on a set of measure one and \int_0^{\infty} u|h(u)|\,du < ∞, then \sqrt{n}(\bar{A}_n - E[\bar{A}_n]) \xrightarrow{D} 0 as n → ∞ and any central limit would be degenerate (the notation \xrightarrow{D} indicates convergence in distribution).

Theorem 2.2. (Central Limit Theorem) Consider a shot noise process $A$ with $E[Y^2] < \infty$ and a shot function satisfying the non-degeneracy condition in (3.2.2) and $\int_0^\infty u|h(u)|\,du < \infty$. Then
\[
\frac{\bar{A}_n - E[\bar{A}_n]}{\mathrm{var}(\bar{A}_n)^{1/2}} \xrightarrow{D} N(0,1) \tag{3.2.3}
\]
as $n \to \infty$, where $N(0,1)$ denotes the standard normal distribution.

Theorems 2.1 and 2.2 are proven in Section 4. Explicit expressions for the mean and variance of $\bar{A}_n$ are easy to obtain from (3.1.2) and (3.1.3):
\[
E[\bar{A}_n] = \frac{\lambda E[Y]}{n}\sum_{k=1}^{n}(n+1-k)\int_{k-1}^{k} h(u)\,du \tag{3.2.4}
\]
and
\[
\mathrm{var}(\bar{A}_n) = \frac{\lambda E[Y^2]}{n^2}\sum_{k=0}^{n-1}\int_0^1\left(\sum_{i=0}^{k}h(i+\gamma)\right)^{\!2} d\gamma. \tag{3.2.5}
\]
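The following sketch evaluates (3.2.4) and (3.2.5) numerically; the inputs $\lambda = E[Y] = 1$, $E[Y^2] = 2$, $h(t) = e^{-t}$, and $n = 100$ are illustrative assumptions.

```python
# A sketch that evaluates the exact moment formulas (3.2.4)-(3.2.5); all
# parameter choices here are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

lam, EY, EY2, n = 1.0, 1.0, 2.0, 100
h = lambda u: np.exp(-u)

# (3.2.4): mean of the sample average
mean_An = lam * EY / n * sum(
    (n + 1 - k) * quad(h, k - 1, k)[0] for k in range(1, n + 1)
)

# (3.2.5): variance of the sample average
def inner(k):
    return quad(lambda g: sum(h(i + g) for i in range(k + 1)) ** 2, 0.0, 1.0)[0]

var_An = lam * EY2 / n**2 * sum(inner(k) for k in range(n))
print(mean_An, var_An)   # mean_An approaches lam*E[Y]*int_0^infty h(u)du = 1
```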


Example 2.3. Consider a storage process $\{X_t,\ t \ge 0\}$ taking values in $[0,\infty)$. The store starts with the initial content $X_0 = x \in [0,\infty)$ and experiences inputs of size $Y_i > 0$ at time $\tau_i$. The $\tau_i$, as before, are the arrival times of a Poisson process $N$ with generation rate $\lambda > 0$. In between input times, the store releases content at the instantaneous rate $\beta u$ for some fixed $\beta > 0$ whenever the store's content is at level $u \ge 0$. This and more general storage processes are discussed further in Harrison and Resnick (1976) and Brockwell et al. (1982).

The sample paths of $X_t$ obey the mass balance storage equation
\[
X_t = x + I_t - \int_0^t \beta X_u\,du, \tag{3.2.6}
\]
where $I_t = \sum_{i=1}^{N_t} Y_i$ is the total input to the store during the time interval $[0,t]$ and $\int_0^t \beta X_u\,du$ is the total outflow from the store during $[0,t]$.

Moran (1969) shows that the unique solution to (3.2.6) can be written as
\[
X_t = xe^{-\beta t} + \sum_{i=1}^{N_t} Y_i e^{-\beta(t-\tau_i)}, \tag{3.2.7}
\]
which we recognize as shot noise, less the $xe^{-\beta t}$ term, with $h(t) = e^{-\beta t}1_{[0,\infty)}(t)$. Observe that $\int_0^t h(u)\,du = \beta^{-1}(1 - e^{-\beta t})$.

With $\bar{X}_n = n^{-1}\sum_{t=1}^{n} X_t$, the strong law of large numbers is
\[
\bar{X}_n \xrightarrow{a.s.} \frac{\lambda E[Y]}{\beta} \tag{3.2.8}
\]
as $n \to \infty$. The central limit theorem provides the asymptotic normality
\[
\frac{\bar{X}_n - \lambda E[Y]/\beta}{\sqrt{\dfrac{\lambda E[Y^2](1+e^{-\beta})}{2n\beta(1-e^{-\beta})}}} \xrightarrow{D} N(0,1) \tag{3.2.9}
\]
as $n \to \infty$, where we used that $|E[\bar{X}_n] - \beta^{-1}\lambda E[Y]| = O(1/n)$ in the calculations.
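A quick Monte Carlo check of (3.2.8) is straightforward; all parameter values in the following sketch are illustrative assumptions.

```python
# A Monte Carlo sketch of the storage process (3.2.7) against its
# law-of-large-numbers limit lam*E[Y]/beta in (3.2.8).
import numpy as np

rng = np.random.default_rng(1)
lam, mark_mean, beta, x0, n = 1.0, 1.5, 0.5, 10.0, 2000

N = rng.poisson(lam * n)
tau = np.sort(rng.uniform(0.0, n, N))   # Poisson input times on [0, n]
Y = rng.exponential(mark_mean, N)       # positive input sizes

t = np.arange(1.0, n + 1.0)
lags = t[:, None] - tau[None, :]
X = x0 * np.exp(-beta * t) + np.where(lags >= 0, np.exp(-beta * lags), 0.0) @ Y

print(X.mean(), lam * mark_mean / beta)   # sample mean vs. the limit 3.0
```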

Our next example moves to inferential settings.

Example 2.4. Consider the Gaussian shot function $h(t) = e^{-\beta^2t^2}I_{[0,\infty)}(t)$, where $\beta > 0$. The strong law of large numbers in Theorem 2.1 gives
\[
\bar{A}_n \xrightarrow{a.s.} \frac{\lambda E[Y]\sqrt{\pi}}{2\beta} \tag{3.2.10}
\]


as $n \to \infty$. To isolate study of the shot parameter $\beta$, we take $\lambda$ and $E[Y]$ as known and equal to unity: $\lambda = E[Y] = 1$. Further, we assume that $Y$ is exponentially distributed so that $E[Y^2] = 2$.

A moment estimator of $\beta$ based on $A_1,\ldots,A_n$, denoted by $\hat{\beta}_n$, is obtained by equating $\bar{A}_n$ with $(2\beta)^{-1}\sqrt{\pi}$ and solving for $\beta$:
\[
\hat{\beta}_n = \frac{\sqrt{\pi}}{2\bar{A}_n}. \tag{3.2.11}
\]

This estimator is strongly consistent by Theorem 2.1.

To derive the asymptotic distribution of $\hat{\beta}_n$, we will need an explicit expression for $\lim_{n\to\infty} n\,\mathrm{var}(\bar{A}_n)$. Employing (3.2.5) and averaging properties of convergent sequences gives
\[
\begin{aligned}
\lim_{n\to\infty} n\,\mathrm{var}(\bar{A}_n)
&= 2\int_0^1\left[\sum_{i=0}^{\infty} h(i+\gamma)\right]^2 d\gamma\\
&= 2\left[\int_0^\infty h^2(u)\,du + 2\sum_{k=1}^{\infty}\int_0^\infty h(u)h(u+k)\,du\right]\\
&= \frac{\sqrt{2\pi}}{\beta}\left[\frac{1}{2} + 2\sum_{k=1}^{\infty}\exp\!\left(-\frac{k^2\beta^2}{2}\right)[1-\Phi(k\beta)]\right]
\end{aligned}
\tag{3.2.12}
\]

since for each $k \ge 1$,
\[
\int_0^\infty h(u)h(u+k)\,du = \frac{1}{\beta}\sqrt{\frac{\pi}{2}}\exp\!\left(-\frac{k^2\beta^2}{2}\right)[1-\Phi(k\beta)], \tag{3.2.13}
\]
where $\Phi$ denotes the cumulative distribution function of a standard normal variate.

Now use Theorem 2.2 and a delta method to get
\[
\sqrt{n}(\hat{\beta}_n - \beta) \xrightarrow{D} N(0,\sigma^2) \tag{3.2.14}
\]
as $n \to \infty$, where
\[
\sigma^2 = \frac{4\sqrt{2}\,\beta^{3}}{\sqrt{\pi}}\left[\frac{1}{2} + 2\sum_{k=1}^{\infty}\exp\!\left(-\frac{k^2\beta^2}{2}\right)[1-\Phi(k\beta)]\right]. \tag{3.2.15}
\]

Equations (3.2.14) and (3.2.15) facilitate construction of confidence intervals for $\beta$.
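As an illustration, the sketch below computes the estimator (3.2.11) and a delta-method interval from (3.2.15); the 95% plug-in interval and the hypothetical inputs $\bar{A}_n = 0.90$ and $n = 400$ are our own additions, not part of the text.

```python
# A sketch of the moment estimator (3.2.11) with a delta-method confidence
# interval built from (3.2.15); the sample-mean value below is hypothetical.
import numpy as np
from scipy.stats import norm

def sigma2(beta, terms=50):
    """Asymptotic variance in (3.2.15); the tail series converges rapidly."""
    k = np.arange(1, terms + 1)
    series = np.sum(np.exp(-(k * beta) ** 2 / 2) * (1 - norm.cdf(k * beta)))
    return 4 * np.sqrt(2) * beta**3 / np.sqrt(np.pi) * (0.5 + 2 * series)

A_bar, n = 0.90, 400                         # hypothetical sample mean and size
beta_hat = np.sqrt(np.pi) / (2 * A_bar)      # the estimator in (3.2.11)
half = 1.96 * np.sqrt(sigma2(beta_hat) / n)  # delta-method margin of error
print(beta_hat, (beta_hat - half, beta_hat + half))
```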


3.3 Heavy-Tailed Shot Marks

Next, we move to the case of heavy-tailed shot marks. Specifically, consider a shot noise process with $\{Y_i\}_{i=1}^{\infty}$ such that
\[
P(|Y| > x) = x^{-\alpha}L(x), \tag{3.3.1}
\]
where $L$ is a slowly varying function at infinity, and, further, the tail balancing condition
\[
\lim_{x\to\infty}\frac{P(Y > x)}{P(|Y| > x)} = p \quad\text{and}\quad \lim_{x\to\infty}\frac{P(Y < -x)}{P(|Y| > x)} = 1-p \tag{3.3.2}
\]
holds, where $0 \le p \le 1$. When (3.3.1) and (3.3.2) hold, we say that $Y$ is of order $\alpha$.

In this section, we assume that $h$ is compactly supported over $[0,K]$, where $K > 1$, and, further, that $h$ is nonincreasing and nonnegative on its support. More general shot function structure could be pursued, but is not done here. Recall that we can extend a stationary Poisson process defined on $[0,\infty)$ to $(-\infty,\infty)$. Explicitly, if $N$ denotes a Poisson process on $[0,\infty)$ and $N'$ denotes an independent copy of $N$, then the superposed process $N + \tilde{N}$, where $\tilde{N}(A) = N'(-A)$ and $-A = \{-x : x \in A\}$, provides the desired extension. We shall denote this extension also by $N$. Further, note that the shot noise process given by
\[
A_t = \sum_{i\,:\,\tau_i < t} Y_i h(t - \tau_i),
\]
where the $\tau_i$ are the points of the extended $N$, is stationary; moreover, if $t > K$, no point $\tau_i < 0$ ``contributes'' to the value of $A_t$ (so that the asymptotics for sums are the same for the original and extended versions). For this reason and for mathematical convenience, we assume that $N$ is extended to $(-\infty,\infty)$ in the heavy-tailed case.

For $\beta > 0$, let $\{u_n(\beta)\}$ be a sequence of constants satisfying
\[
\lim_{n\to\infty} nP\{|Y| > u_n(\beta)\} = \beta. \tag{3.3.3}
\]


The existence of such a sequence is guaranteed by (3.3.1). Our first heavy-tailed result relates the tail behavior of the shot noise process to that of the shot marks $Y_i$.
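We remark that (3.3.3) pins $u_n(\beta)$ down explicitly in simple cases. The sketch below assumes exactly Pareto tails, $P(|Y| > x) = x^{-\alpha}$ for $x \ge 1$, so that the slowly varying factor $L$ is constant; this concrete choice is our assumption.

```python
# A sketch of the normalizing sequence u_n(beta) in (3.3.3) for exact Pareto
# tails, P(|Y| > x) = x**(-alpha), x >= 1 (an illustrative assumption).
import numpy as np

def u_n(n, beta, alpha):
    # solve n * u**(-alpha) = beta  =>  u = (n / beta)**(1/alpha)
    return (n / beta) ** (1.0 / alpha)

n, alpha, beta = 10_000, 1.5, 2.0
u = u_n(n, beta, alpha)
print(u, n * u ** (-alpha))   # the second value recovers beta exactly
```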

Lemma 3.1. Let $A$ be a shot noise process with heavy-tailed shot marks $Y_i$ of order $\alpha$. Suppose that $h$ is nonincreasing over the compact support $[0,K]$, where $K \ge 1$. Then
\[
\lim_{x\to\infty}\frac{P(|A_1| > x)}{P(|Y| > x)} = \lambda\int_0^\infty h^\alpha(s)\,ds. \tag{3.3.4}
\]
Moreover,
\[
\lim_{x\to\infty}\frac{P(A_1 > x)}{P(|A_1| > x)} = p \quad\text{and}\quad \lim_{x\to\infty}\frac{P(A_1 < -x)}{P(|A_1| > x)} = 1-p. \tag{3.3.5}
\]

Theorem 3.2 next gives a detailed description of the large values in the sampled shot noise process and the associated clustering effects of these values. We use the usual notion of convergence in distribution of point processes in this paper: let $\mathcal{N} = \{\mu : \mu$ is a Radon counting measure on $[0,\infty)\}$. Equip $\mathcal{N}$ with the topology of vague convergence. For random elements $N_n, N \in \mathcal{N}$, we write $N_n \Rightarrow N$ as $n \to \infty$ to denote weak convergence with respect to the vague topology on $\mathcal{N}$.

Theorem 3.2. Let $A$ be a shot noise process with heavy-tailed shot marks $Y_i$ of order $\alpha$. Suppose that $h$ is nonincreasing over the compact support $[0,K]$, where $K \ge 1$. Define the point processes
\[
N_n = \sum_{j=1}^{\infty}\delta_{A_j/a_n} \tag{3.3.6}
\]
for $n \ge 1$, where $a_n = u_n(\beta)$, $\beta = [\lambda\int_0^\infty h^\alpha(s)\,ds]^{-1}$, and $\delta_x$ denotes a unit point mass at $x$. Then $N_n \Rightarrow N$ as $n \to \infty$, where
\[
N \stackrel{D}{=} \sum_{i=1}^{\infty}\sum_{j=0}^{\infty}\delta_{P_iQ_{ij}}, \tag{3.3.7}
\]


with $\sum_{i=1}^{\infty}\delta_{P_i}$ a Poisson random measure with mean measure $\nu$, where $\nu([x,\infty)) = \theta x^{-\alpha}$ for $x > 0$ (the notation $\stackrel{D}{=}$ denotes equality in distribution). The extremal index $\theta$ is
\[
\theta = \frac{\int_0^1 h^\alpha(u)\,du}{\int_0^\infty h^\alpha(u)\,du}. \tag{3.3.8}
\]

Furthermore, the point processes $\sum_{j=0}^{\infty}\delta_{Q_{ij}}$ are IID and independent of $\sum_{i=1}^{\infty}\delta_{P_i}$, and
\[
\sum_{j=0}^{\infty}\delta_{Q_{ij}} \stackrel{D}{=} \sum_{k=0}^{\infty}\delta_{Jh(k+\gamma)/h(\gamma)}, \tag{3.3.9}
\]
where $J$ and $\gamma$ are independent random variables with $\gamma$ having probability density $f_\gamma(x) = h^\alpha(x)/\int_0^1 h^\alpha(t)\,dt$ for $0 \le x \le 1$, and $P(J = 1) = p$ and $P(J = -1) = 1-p$, where $p$ is as in (3.3.2).
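The extremal index (3.3.8) is easy to evaluate for concrete shot functions; the sketch below assumes $h(t) = e^{-t}$ on $[0,K]$ with $K = 3$ and $\alpha = 1.5$, both illustrative choices.

```python
# A numerical sketch of the extremal index theta in (3.3.8) for a concrete
# nonincreasing, compactly supported shot function (illustrative choices).
import numpy as np
from scipy.integrate import quad

K, alpha = 3.0, 1.5
h_alpha = lambda u: np.exp(-alpha * u)      # h(u)**alpha for h(u) = exp(-u)

theta = quad(h_alpha, 0.0, 1.0)[0] / quad(h_alpha, 0.0, K)[0]
print(theta)   # theta in (0, 1]; smaller theta means heavier clustering
```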

Our main result for sums can now be stated. Introduce the random variable
\[
W = J\sum_{k=0}^{\lfloor K\rfloor}\frac{h(k+\gamma)}{h(\gamma)}; \tag{3.3.10}
\]
we use $\lfloor x\rfloor$ to denote the integer part of $x$, and we also use $\lceil x\rceil$ to denote the least integer greater than or equal to $x$.

Theorem 3.3. Let $A$ be a shot noise process with heavy-tailed shot marks $Y_i$ of order $\alpha \in (0,2)$. Suppose that $h$ is nonincreasing over the compact support $[0,K]$, where $K \ge 1$. If $0 < \alpha < 1$, then
\[
\frac{1}{u_n(\beta)}\sum_{k=1}^{n}A_k \xrightarrow{D} S = \sum_{i=1}^{\infty}\sum_{j=0}^{\infty}P_iQ_{ij} \tag{3.3.11}
\]
as $n \to \infty$. Furthermore, $S$ has a stable distribution. If $1 \le \alpha < 2$, then
\[
\frac{1}{u_n(\beta)}\sum_{k=1}^{n}\big(A_k - E[A_1I_{(0,1]}(A_1)]\big) \xrightarrow{D} S \tag{3.3.12}
\]
as $n \to \infty$, where, with $q = 1-p$, $S$ is the limit in distribution of
\[
\sum_{i=1}^{\infty}\sum_{j=0}^{\infty}P_iQ_{ij}I_{(\varepsilon,\infty)}(P_i|Q_{ij}|) - (p-q)\int_\varepsilon^1\alpha x^{-\alpha}\,dx \tag{3.3.13}
\]


as $\varepsilon \downarrow 0$. Moreover, $S$ has a stable distribution.

Our last result provides the characteristic function of the limiting stable law.

Theorem 3.4. The limiting variate $S$ in Theorem 3.3 has characteristic function
\[
E[e^{itS}] =
\begin{cases}
\exp\{imt - d|t|^{\alpha}[1 - i(p-q)\,\mathrm{sgn}(t)\tan(\pi\alpha/2)]\}, & \alpha \neq 1,\\[0.5ex]
\exp\{imt - d|t|[1 + i(p-q)(2/\pi)\,\mathrm{sgn}(t)\log|t|]\}, & \alpha = 1,
\end{cases}
\tag{3.3.14}
\]
where
\[
d =
\begin{cases}
\theta\alpha 2^{-\alpha}E[|W|^{\alpha}]\,\dfrac{\Gamma(3-\alpha)}{\alpha(\alpha-1)}\cos\!\left(\dfrac{\pi\alpha}{2}\right), & \alpha \neq 1,\\[1.5ex]
\theta\alpha 2^{-\alpha}E[|W|^{\alpha}]\,\dfrac{\pi}{2}, & \alpha = 1,
\end{cases}
\tag{3.3.15}
\]
with $W$ as in (3.3.10). Moreover, $m = 0$ when $\alpha < 1$ and $m = (p-q)\alpha/(\alpha-1)$ if $\alpha > 1$. Finally, in the case where $\alpha = 1$,

\[
\begin{aligned}
m ={}& \int_{-\infty}^{\infty}\left(\frac{\sin x - xI_{(0,1)}(|x|)}{x^2}\right)M(dx) - \theta E\big[W\log|W|\,I_{(0,1)}(|W|)\big]\\
&+ \lim_{t\to\infty}\theta E\left[\int_1^t\left(L(y,\gamma)I_{(0,t)}\big(yL(y,\gamma)\big) - \frac{W}{y}\right)dy\right],
\end{aligned}
\tag{3.3.16}
\]
where
\[
L(y,\gamma) = \sum_{k=0}^{\lfloor K\rfloor}\frac{h(k+\gamma)}{h(\gamma)}\,I_{(1,\infty)}\!\left(y\,\frac{h(k+\gamma)}{h(\gamma)}\right) \tag{3.3.17}
\]
and the measure $M$ above refers to the vague limit, as $\varepsilon \downarrow 0$, of
\[
M_\varepsilon(dx) = x^2\int_0^\infty P\left[\left(\sum_{k=0}^{\infty}y\,\frac{h(k+\gamma)}{h(\gamma)}\,I_{(\varepsilon,\infty)}\!\left(y\,\frac{h(k+\gamma)}{h(\gamma)}\right)\right)\in dx\right]\theta\alpha y^{-(\alpha+1)}\,dy. \tag{3.3.18}
\]
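Aside from $\theta$ in (3.3.8), the main non-explicit ingredient in the scale constant $d$ of (3.3.15) is $E[|W|^\alpha]$, which is easily approximated by Monte Carlo from the representation (3.3.10). The sketch below assumes $h(t) = e^{-t}$ on $[0,K]$ with $K = 3$, $\alpha = 1.5$, and $p = 0.7$, all illustrative choices.

```python
# A Monte Carlo sketch of E[|W|^alpha] from (3.3.10); the shot function and
# parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
K, alpha, p, n_mc = 3.0, 1.5, 0.7, 200_000

# gamma has density h^alpha / int_0^1 h^alpha on [0, 1]; for h(t) = exp(-t)
# this is a truncated exponential, sampled by inverting its CDF
u = rng.uniform(size=n_mc)
gamma = -np.log(1.0 - u * (1.0 - np.exp(-alpha))) / alpha

J = np.where(rng.uniform(size=n_mc) < p, 1.0, -1.0)   # P(J = 1) = p

# W = J * sum_{k=0}^{floor(K)} h(k + gamma) / h(gamma), with h zero off [0, K]
k = np.arange(0, int(np.floor(K)) + 1)
lags = k[None, :] + gamma[:, None]
h_vals = np.where(lags <= K, np.exp(-lags), 0.0)
W = J * h_vals.sum(axis=1) / np.exp(-gamma)

print(np.mean(np.abs(W) ** alpha))   # plug-in estimate for E[|W|^alpha]
```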

3.4 Proofs

Proof of Theorem 2.1. For each natural number $M$, define the $M$-truncation $g_M(u) = h(u)I_{[0,M)}(u)$ and note that $g_M$ has support contained in $[0,M]$. Let $h_M = h - g_M$ and use (3.1.1) to write
\[
A_t = \sum_{i=1}^{N_t}Y_ig_M(t-\tau_i) + \sum_{i=1}^{N_t}Y_ih_M(t-\tau_i) \stackrel{\text{def}}{=} A_t^{(M)} + \kappa_t^{(M)}. \tag{3.4.1}
\]

From the compact support of $g_M$, the marginal distribution of $A_t^{(M)}$ is seen to be constant in $t$ for $t \ge M$; furthermore, the Poisson arrivals $\{\tau_i\}$ and the IID $\{Y_i\}$ imply that $A_t^{(M)}$ and $A_{t'}^{(M)}$ are independent whenever $t, t' > 0$ with $|t - t'| \ge M$. Hence, $\{A_{k+M}^{(M)}\}_{k=0}^{\infty}$ is an $M$-dependent stationary sequence with mean $\lambda E[Y]\int_0^\infty g_M(u)\,du = \lambda E[Y]\int_0^M h(u)\,du$. The strong law of large numbers for $M$-dependent sequences gives
\[
n^{-1}\sum_{i=1}^{n}A_i^{(M)} \xrightarrow{a.s.} \lambda E[Y]\int_0^M h(u)\,du \tag{3.4.2}
\]
for fixed $M$. Note that the right-hand side of (3.4.2) converges to $\lambda E[Y]\int_0^\infty h(u)\,du$ as $M \to \infty$. The remainder of our argument will show that $\kappa_t^{(M)}$ is negligible as $M \to \infty$. Toward this, define
\[
B_{i,k}^{(M)} = \sum_{j:\tau_j\in(k-1,k]}Y_jh_M(i-\tau_j) \tag{3.4.3}
\]
for $i \ge k$ and set
\[
Z_{n,k}^{(M)} = \sum_{i=k}^{n}B_{i,k}^{(M)}. \tag{3.4.4}
\]

Combining (3.4.3) and (3.4.4) gives
\[
Z_{n,k}^{(M)} = \sum_{j:\tau_j\in(k-1,k]}Y_j\left[\sum_{i=k}^{n}h_M(i-\tau_j)\right] = \sum_{j:\tau_j\in(k-1,k]}Y_j\left[\sum_{i=0}^{n-k}h_M(i+k-\tau_j)\right]. \tag{3.4.5}
\]

It follows that
\[
|Z_{n,k}^{(M)}| \le \sum_{j:\tau_j\in(k-1,k]}|Y_j|H_M(k-\tau_j) \stackrel{\text{def}}{=} R_k^{(M)}, \tag{3.4.6}
\]


where $H_M(\gamma) = \sum_{n=M}^{\infty}|h(n+\gamma)|$ for $\gamma \in [0,1)$. By the Poisson arrivals $\{\tau_i\}$ and IID $\{Y_i\}$, $\{R_k^{(M)}\}_{k=1}^{\infty}$ is IID with mean $\lambda E[|Y|]\int_0^1 H_M(\gamma)\,d\gamma = \lambda E[|Y|]\int_M^\infty |h(u)|\,du$, which is finite by the absolute integrability of $h$ and the finiteness of $E[|Y|]$.

Observe that
\[
n^{-1}\sum_{i=1}^{n}\kappa_i^{(M)} = n^{-1}\sum_{k=1}^{n}Z_{n,k}^{(M)}. \tag{3.4.7}
\]

Using (3.4.7), the bound in (3.4.6), and the classical strong law of large numbers gives
\[
\limsup_{n\to\infty}\left|n^{-1}\sum_{i=1}^{n}\kappa_i^{(M)}\right| \le \limsup_{n\to\infty}n^{-1}\sum_{k=1}^{n}R_k^{(M)} = \lambda E[|Y|]\int_M^\infty|h(u)|\,du \tag{3.4.8}
\]
almost surely for each fixed $M$.

To finish the proof, merely note that for each fixed $M$,
\[
\begin{aligned}
\limsup_{n\to\infty}\left|\bar{A}_n - \lambda E[Y]\int_0^\infty h(u)\,du\right|
\le{}& \limsup_{n\to\infty}\left|n^{-1}\sum_{t=1}^{n}A_t^{(M)} - \lambda E[Y]\int_0^M h(u)\,du\right|\\
&+ \lambda E|Y|\int_M^\infty|h(u)|\,du + \left|\lambda E[Y]\int_M^\infty h(u)\,du\right|
\end{aligned}
\tag{3.4.9}
\]
almost surely. As the last two terms in (3.4.9) converge to zero as $M \to \infty$ by the absolute integrability of $h$ and $E[|Y|] < \infty$, the proof is complete via (3.4.2).

Proof of Theorem 2.2. For this argument, it is sufficient to consider the case where $E[Y] = 0$; this renders $E[\bar{A}_n] \equiv 0$.

Use (3.1.1) to make the partition
\[
A_n = \sum_{j:\tau_j\in(0,1]}Y_jh(n-\tau_j) + \cdots + \sum_{j:\tau_j\in(n-1,n]}Y_jh(n-\tau_j) \stackrel{\text{def}}{=} A_{n,1} + \cdots + A_{n,n}. \tag{3.4.10}
\]


Hence, $\bar{A}_n = n^{-1}\sum_{k=1}^{n}Z_{n,k}$, where
\[
Z_{n,k} = \sum_{i=k}^{n}A_{i,k} = \sum_{j:\tau_j\in(k-1,k]}Y_j\sum_{i=0}^{n-k}h(i+k-\tau_j). \tag{3.4.11}
\]
The key observation is that $Z_{n,l}$ and $Z_{n,m}$ are pairwise independent (by the Poisson arrivals $\{\tau_i\}$ and IID $\{Y_i\}$) when $l \neq m$. Taking moments in (3.4.11) gives $E[Z_{n,k}] \equiv 0$ and
\[
\mathrm{var}(Z_{n,k}) = \lambda E[Y^2]\int_0^1\left[\sum_{i=0}^{n-k}h(i+\gamma)\right]^2 d\gamma. \tag{3.4.12}
\]

Equation (3.4.12) shows that $\mathrm{var}(\bar{A}_n)$ has the form quoted in (3.2.5). Let $v_n = \mathrm{var}(\sum_{k=1}^{n}Z_{n,k})$ so that $v_n/n = \mathrm{var}(\sqrt{n}\,\bar{A}_n)$. Now scale for variances by setting $X_{n,k} = Z_{n,k}/\sqrt{v_n}$ and observe that $\bar{A}_n/\sqrt{v_n} = n^{-1}\sum_{k=1}^{n}X_{n,k}$. The above moment and independence structures give
\[
\sum_{k=1}^{n}E\big[X_{n,k}^2\,\big|\,Z_{n,1},\ldots,Z_{n,k-1}\big] = \sum_{k=1}^{n}E[X_{n,k}^2] = 1 \tag{3.4.13}
\]
for each $n \ge 1$.

For fixed $\gamma \in [0,1)$, define
\[
V(\gamma) = \sum_{i=0}^{\infty}|h(i+\gamma)| \tag{3.4.14}
\]
(the functions $V$ and $H_0$ of the strong law proof coincide). We will use the finiteness of $\int_0^\infty u|h(u)|\,du$ to show that $V$ is well defined (finite on a subset of $[0,1)$ with Lebesgue measure one) and square integrable. This is not immediate from (3.1.4), as $V(\gamma)$ does not necessarily lie in $[0,1]$ for each $\gamma \in [0,1)$.

Expanding the summation in (3.4.14) gives
\[
\int_0^1 V^2(\gamma)\,d\gamma \le \int_0^1\sum_{k=0}^{\infty}h^2(k+\gamma)\,d\gamma + 2\int_0^1\sum_{i=0}^{\infty}\sum_{j=i+1}^{\infty}|h(i+\gamma)h(j+\gamma)|\,d\gamma. \tag{3.4.15}
\]


Using $|h(t)| \le 1$ for all $t$ in the double summation gives
\[
\begin{aligned}
\int_0^1 V^2(\gamma)\,d\gamma &\le \int_0^\infty h^2(u)\,du + 2\sum_{k=1}^{\infty}k\int_k^{k+1}|h(u)|\,du\\
&\le \int_0^\infty|h(u)|\,du + 2\sum_{k=1}^{\infty}\int_k^{k+1}u|h(u)|\,du\\
&= \int_0^\infty|h(u)|\,du + 2\int_1^\infty u|h(u)|\,du\\
&\le 1 + 3\int_1^\infty u|h(u)|\,du < \infty.
\end{aligned}
\tag{3.4.16}
\]

Hence, $V$ is square integrable and $E[V^2(U)] < \infty$ when $U$ is uniformly distributed over the interval $[0,1]$.

Now set
\[
R_k = \sum_{j:\tau_j\in(k-1,k]}|Y_j|V(k-\tau_j) \tag{3.4.17}
\]
for $k \ge 1$ and observe that $\{R_k\}_{k=1}^{\infty}$ is IID by the Poisson nature of $\{\tau_i\}$ and the IID $\{Y_i\}$. Observe that $|Z_{n,k}| \le R_k$ and, hence, $|X_{n,k}| \le R_k/\sqrt{v_n}$. Let $F_R$ denote the cumulative distribution function of $R_k$. By Campbell's Theorem,
\[
E[R_1^2] = \lambda E[Y^2]\int_0^1 V^2(\gamma)\,d\gamma < \infty. \tag{3.4.18}
\]

We now show that $n/v_n$ is bounded in $n$ with $v_n \to \infty$. The partial sums $\sum_{i=0}^{n}h(i+\gamma)$ are uniformly bounded by $V(\gamma)$ for each fixed $\gamma \in [0,1)$. Dominated convergence gives
\[
\int_0^1\left[\sum_{i=0}^{n-1}h(i+\gamma)\right]^2 d\gamma \longrightarrow \int_0^1 T^2(\gamma)\,d\gamma \tag{3.4.19}
\]
as $n \to \infty$, where $T$ is as in (3.2.2). Now use (3.4.12) and averaging of limits to get
\[
\lim_{n\to\infty}\frac{v_n}{n} = \lambda E[Y^2]\int_0^1\left[\sum_{n=0}^{\infty}h(n+\gamma)\right]^2 d\gamma \in (0,\infty), \tag{3.4.20}
\]
where (3.2.2) has been applied for positivity of the limit.


Collecting the above facts gives, for each fixed $\varepsilon > 0$,
\[
\begin{aligned}
\sum_{k=1}^{n}E\big[X_{n,k}^2I(|X_{n,k}| > \varepsilon)\,\big|\,Z_{n,1},\ldots,Z_{n,k-1}\big]
&= \sum_{k=1}^{n}E\big[X_{n,k}^2I(|X_{n,k}| > \varepsilon)\big]\\
&\le \sum_{k=1}^{n}E\left[\frac{R_k^2}{v_n}I\big(R_k/\sqrt{v_n} > \varepsilon\big)\right]\\
&\le \frac{n}{v_n}\int_{\{|x| > \sqrt{v_n}\,\varepsilon\}}x^2\,dF_R(x) \longrightarrow 0
\end{aligned}
\tag{3.4.21}
\]
as $n \to \infty$, where the boundedness of $n/v_n$, the divergence $v_n \to \infty$, and the finite second moment of $F_R$ have been used. In view of (3.4.13) and (3.4.21), the central limit theorem now follows from Corollary 3.1 in Hall and Heyde (1980).

Proof of Lemma 3.1. The proof is obtained by the same argument as given in the

proof of Lemma 2.1 in McCormick (1997).

Lemma 4.1. Under the assumptions of Lemma 3.1,
\[
\lim_{n\to\infty}P\left[\max_{2\le k\le\lceil K\rceil}|A_k| \le u_n\,\Big|\,|A_1| > u_n\right] = \frac{\int_0^1 h^\alpha(s)\,ds}{\int_0^\infty h^\alpha(s)\,ds} = \theta. \tag{3.4.22}
\]

Proof. In view of Lemma 3.1, (3.4.22) is equivalent to showing that
\[
\lim_{n\to\infty}nP\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|\right] = \beta\lambda\int_0^1 h^\alpha(s)\,ds. \tag{3.4.23}
\]

Recall that the Poisson arrival process $N$ is viewed as extended over $(-\infty,\infty)$. Let $C_K = N([1-K,K])$ and note that $C_K$ has a Poisson distribution with mean $\lambda(2K-1)$. Then, for some constant $C$,
\[
P(C_K > \ell) \le C\exp\left[-\frac{1}{2}\,\ell\log(\ell)\right] \tag{3.4.24}
\]

for all $\ell \ge 0$. Now let $P^{(\ell)}(\cdot) = P(\cdot\,|\,C_K = \ell)$ and let $E^{(\ell)}$ denote expectation with respect to $P^{(\ell)}$. Further, conditional on $\{C_K = \ell\}$, label the Poisson points in $[1-K,K]$ by $1-K \le \tau_1 < \cdots < \tau_\ell \le K$. Accordingly, the marks associated with these points are denoted $Y_j$, $1 \le j \le \ell$, and, by virtue of the support set of $h$ being $[0,K]$, the variables $A_k$, $1 \le k \le \lceil K\rceil$, only involve the $Y_i$ via $Y_1,\ldots,Y_\ell$ (when $C_K = \ell$). Fix $\varepsilon > 0$ and define
\[
E_{\ell,n,\varepsilon} = \left[|Y_i|\wedge|Y_j| > \frac{\varepsilon u_n}{\log(n)}\ \text{for some}\ 1\le i<j\le\ell\right].
\]

Then
\[
P\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|\right] = \sum_{\ell=0}^{\infty}E^{(\ell)}P^{(\ell)}\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|\,\Big|\,\tau_j = t_j,\ 1\le j\le\ell\right]P(C_K = \ell)
\]
and
\[
P^{(\ell)}\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|\,\Big|\,\tau_j = t_j,\ 1\le j\le\ell\right] \le P^{(\ell)}\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|,\ E_{\ell,n,\varepsilon}^c\,\Big|\,\tau_j = t_j,\ 1\le j\le\ell\right] + P^{(\ell)}E_{\ell,n,\varepsilon}. \tag{3.4.25}
\]

First note that
\[
P^{(\ell)}E_{\ell,n,\varepsilon} \le \ell^2P^2\{|Y_1| > \varepsilon u_n/\log n\} \le \frac{2\beta^2\varepsilon^{-2\alpha}\ell^2(\log(n))^{2\alpha}}{n^2}.
\]
Hence,
\[
n\sum_{\ell=0}^{\infty}P^{(\ell)}E_{\ell,n,\varepsilon}P\{C_K = \ell\} = O\!\left(\frac{[\log(n)]^{2\alpha}}{n}\right). \tag{3.4.26}
\]

Next note that, for $0 \le \ell \le \log(n)$ and $I = [1-K,K]$,
\[
P^{(\ell)}\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|,\ E_{\ell,n,\varepsilon}^c\,\Big|\,\tau_i = t_i,\ 1\le i\le\ell\right] \le \sum_{i:t_i\in I}P^{(\ell)}\Big[\big(|Y_i|h(1-t_i) > (1-\varepsilon)u_n\big)\cap\big(|Y_i|h(2-t_i) < (1+\varepsilon)u_n\big)\Big].
\]
Next observe that, for $n$ large,
\[
\sum_{i:t_i\in I}P^{(\ell)}\{|Y_1|h(1-t_i) > (1-\varepsilon)u_n\} \le \frac{2\beta\ell}{n}.
\]


Thus, for $n$ large,
\[
nE^{(\ell)}P^{(\ell)}\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|,\ E_{\ell,n,\varepsilon}^c\,\Big|\,\tau_i = t_i,\ 1\le i\le\ell\right] \le 2\beta\ell,
\]
where the upper bound is integrable with respect to $P(C_K \in \cdot)$. Applying dominated convergence yields
\[
\limsup_{n\to\infty}nP\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|\right] \le \sum_{\ell=0}^{\infty}\lim_{n\to\infty}nE^{(\ell)}P^{(\ell)}\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|,\ E_{\ell,n,\varepsilon}^c\,\Big|\,\tau_i = t_i,\ 1\le i\le\ell\right]. \tag{3.4.27}
\]

For $\varepsilon > 0$ sufficiently small,
\[
\lim_{n\to\infty}nP^{(\ell)}\Big[\big(|Y_1|h(1-t_i) > (1-\varepsilon)u_n\big)\cap\big(|Y_1|h(2-t_i) < (1+\varepsilon)u_n\big)\Big] = \beta\big((1-\varepsilon)^{-\alpha}h^\alpha(1-t_i) - (1+\varepsilon)^{-\alpha}h^\alpha(2-t_i)\big) \le \beta\big(h^\alpha(1-t_i) - h^\alpha(2-t_i) + 2\alpha\varepsilon\big).
\]
Therefore,
\[
\lim_{n\to\infty}nP^{(\ell)}\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|,\ E_{\ell,n,\varepsilon}^c\right] \le \beta E^{(\ell)}\left[\sum_{\tau_i\in I}\big(h^\alpha(1-\tau_i) - h^\alpha(2-\tau_i) + 2\alpha\varepsilon\big)\right]. \tag{3.4.28}
\]

Hence, using (3.4.27) and (3.4.28),
\[
\begin{aligned}
\limsup_{n\to\infty}nP\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|\right]
&\le \beta E\left[\int_I\big(h^\alpha(1-t) - h^\alpha(2-t) + 2\alpha\varepsilon\big)N(dt)\right]\\
&= \beta\int_{1-K}^{K}\big(h^\alpha(1-t) - h^\alpha(2-t) + 2\alpha\varepsilon\big)\lambda\,dt\\
&\le 2\varepsilon\alpha\beta\lambda(2K-1) + \beta\lambda\int_0^1 h^\alpha(t)\,dt,
\end{aligned}
\]
where the inequality $2-K \le 1$ has been applied. Since $\varepsilon > 0$ was arbitrary,
\[
\limsup_{n\to\infty}nP\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|\right] \le \beta\lambda\int_0^1 h^\alpha(t)\,dt.
\]


An analogous argument shows that
\[
\liminf_{n\to\infty}nP\left[|A_1| > u_n \ge \max_{2\le k\le\lceil K\rceil}|A_k|\right] \ge \beta\lambda\int_0^1 h^\alpha(t)\,dt,
\]
thus establishing the lemma.

Lemma 4.2. Let $\{r_n\}$ be any sequence such that $r_n \to \infty$ and $r_n = o(n)$ as $n \to \infty$. Let $k_n = \lfloor n/r_n\rfloor$ and let $\theta$ be as in (3.3.8). Then
\[
\lim_{n\to\infty}k_nP\left[\max_{1\le k\le r_n}|A_k| > a_nx\right] = \theta x^{-\alpha}, \qquad x > 0.
\]

Proof. Since $\{|A_k|\}_{k=1}^{\infty}$ is a stationary $\lceil K\rceil$-dependent sequence, by Lemma 4.1 and Corollary 1.3 in Chernick et al. (1991), $\{|A_k|\}_{k=1}^{\infty}$ has extremal index
\[
\theta = \lim_{n\to\infty}P\left[\max_{2\le k\le\lceil K\rceil}|A_k| \le u_n\,\Big|\,|A_1| > u_n\right] = \frac{\int_0^1 h^\alpha(s)\,ds}{\int_0^\infty h^\alpha(s)\,ds}, \tag{3.4.29}
\]
where the last equality holds by Lemma 4.1. Using $\lceil K\rceil$-dependence, one obtains

where the last equality holds by Lemma 4.1. Using dKe-dependence, one obtains

P

[

max1≤k≤n

|Ak| ≤ anx

]

− P kn

[

max1≤k≤rn

|Ak| ≤ anx

]

→ 0 (3.4.30)

as n → ∞.

In view of (3.4.29), (3.4.30), and the fact that $nP\{|A_1| > a_n\} \to 1$ as $n \to \infty$ (which follows from Lemma 3.1),
\[
\lim_{n\to\infty}P^{k_n}\left[\max_{1\le k\le r_n}|A_k| \le a_nx\right] = e^{-\theta x^{-\alpha}}.
\]
Hence,
\[
\lim_{n\to\infty}k_nP\left\{\max_{1\le k\le r_n}|A_k| > a_nx\right\} = \theta x^{-\alpha},
\]
so that the lemma holds.


Define a sequence $\{Q_n\}$ of random measures by
\[
Q_n(M) = P\left[\sum_{k=1}^{r_n}\delta_{\big(A_k/\bigvee_{j=1}^{r_n}|A_j|\big)} \in M\,\Big|\,\bigvee_{j=1}^{r_n}|A_j| > a_nx\right], \tag{3.4.31}
\]
where $M \in \mathcal{B}(M_p)$ with $M_p = \{\mu : \mu$ is a locally finite counting measure on $[-1,0)\cup(0,1]$ and $\mu(\{-1\}\cup\{1\}) > 0\}$, which is given the topology of vague convergence. Further, we require that $r_n^3 = o(n)$ and $r_n \to \infty$ as $n \to \infty$.

Lemma 4.3. Let $Q_n$ be as in (3.4.31) and assume that the hypotheses of Lemma 3.1 hold. Then
\[
Q_n \Rightarrow Q \tag{3.4.32}
\]
as $n \to \infty$ in $(M_p,\mathcal{B}(M_p))$, where $Q(M) = P\{\sum_{k=0}^{\infty}\delta_{(Jh(k+\gamma)/h(\gamma))} \in M\}$ and $\gamma$ has probability density
\[
f_\gamma(s) = \frac{h^\alpha(s)}{\int_0^1 h^\alpha(t)\,dt}, \qquad 0 \le s \le 1.
\]
Moreover, $J$ is independent of $\gamma$ and $P(J = 1) = 1 - P(J = -1) = p$, with $p$ given in (3.3.5).

Proof. By Theorem 4.7 in Kallenberg (1976), to establish (3.4.32) it suffices to show, for any set $U = \bigcup_{i=1}^{m}[s_i,t_i]$ given as a disjoint union of intervals $[s_i,t_i] \subset [-1,0)\cup(0,1]$, that
\[
\lim_{n\to\infty}Q_n\{\mu : \mu(U) = 0\} = Q\{\mu : \mu(U) = 0\} \tag{3.4.33}
\]
provided $Q\{\mu : \mu(\partial U) = 0\} = 1$ and, further, for $[a,b] \subset [-1,0)\cup(0,1]$,
\[
\lim_{n\to\infty}\int_{M_p}\mu([a,b])\,Q_n(d\mu) = \int_{M_p}\mu([a,b])\,Q(d\mu) \tag{3.4.34}
\]
provided $Q\{\mu : \mu(\{a\}\cup\{b\}) = 0\} = 1$.

To that end, take $U = [s,t]$ with $0 < s < t < 1$. The general case of allowable $U$ follows by the same argument as that for an interval. Then, for large $n$,
\[
\begin{aligned}
Q_n\{\mu : \mu([s,t]) = 0\} &= P\left[A_k/\bigvee_{j=1}^{r_n}|A_j| \in [s,t]^c,\ 1\le k\le r_n\,\Big|\,\bigvee_{j=1}^{r_n}|A_j| > a_nx\right]\\
&\le \frac{(1+\varepsilon)k_nx^\alpha}{\theta}\,P\left[A_k/\bigvee_{j=1}^{r_n}|A_j| \in [s,t]^c,\ 1\le k\le r_n,\ \bigvee_{j=1}^{r_n}|A_j| > a_nx\right],
\end{aligned}
\tag{3.4.35}
\]

where $\varepsilon > 0$ is arbitrary and we have used Lemma 4.2. By arguing as in the proof of Lemma 4.1, we obtain for large $n$ that
\[
\begin{aligned}
&P\left[A_k/\bigvee_{j=1}^{r_n}|A_j| \in [s,t]^c,\ 1\le k\le r_n,\ \bigvee_{j=1}^{r_n}|A_j| > a_nx\right]\\
&\quad\le \sum_{j=1}^{r_n}P\Big[\mathrm{sgn}(Y_j)h(k-\tau_j)/h(j-\tau_j) \in [(1+\varepsilon)s,(1-\varepsilon)t]^c,\ j\le k\le j+\lceil K\rceil,\\
&\qquad\qquad j-1 < \tau_j \le j,\ |Y_j|h(j-\tau_j) > (1-\varepsilon)a_nx\Big]. 
\end{aligned}
\tag{3.4.36}
\]

Note that if $|A_{j^*}| = \bigvee_{j=1}^{r_n}|A_j|$, then outside an asymptotically negligible set,
\[
A_{j^*} = \sum_{j:\tau_j\le j^*}h(j^*-\tau_j)Y_j
\]
with $|Y_{j^*}| = \max\{|Y_j| : j$ such that $\tau_j \le r_n\}$ and, further, $j^*-1 < \tau_{j^*} \le j^*$, where we have used that $h$ is nonincreasing on its support. Moreover, as in the proof of Lemma 4.1, one can account for the number of $Y_j$'s contributing to $A_k$, $1\le k\le r_n$, as well as the magnitudes of the $Y_j$, $j \neq j^*$.

Next we find that
\[
\begin{aligned}
&P\Big[\mathrm{sgn}(Y_1)h(k-\tau_1)/h(1-\tau_1) \in [(1+\varepsilon)s,(1-\varepsilon)t]^c,\ 1\le k\le 1+\lceil K\rceil,\\
&\qquad 0 < \tau_1 \le 1,\ |Y_1|h(1-\tau_1) > (1-\varepsilon)a_nx\Big]\\
&\le \frac{(1-\varepsilon)^{-\alpha}x^{-\alpha}}{n\int_0^K h^\alpha(t)\,dt}\int_0^1 h^\alpha(1-t)\left(q + pP\left[\sum_{k=1}^{\infty}\delta_{\frac{h(k-t)}{h(1-t)}}\big([(1+\varepsilon)s,(1-\varepsilon)t]\big) = 0\right]\right)dt\\
&= \frac{(1-\varepsilon)^{-\alpha}x^{-\alpha}}{n\int_0^K h^\alpha(t)\,dt}\int_0^1 h^\alpha(1-t)\left(q + pP\left[\sum_{k=0}^{\infty}\delta_{\frac{h(k+t)}{h(t)}}\big([(1+\varepsilon)s,(1-\varepsilon)t]\big) = 0\right]\right)dt.
\end{aligned}
\tag{3.4.37}
\]

From (3.4.33)--(3.4.37), we conclude that
\[
\begin{aligned}
\limsup_{n\to\infty}Q_n(\{\mu : \mu([s,t]) = 0\})
&\le \frac{1+\varepsilon}{\int_0^1 h^\alpha(t)\,dt}\int_0^1 h^\alpha(t)\left[q + pP\left\{\sum_{k=0}^{\infty}\delta_{\frac{h(k+t)}{h(t)}}\big([(1+\varepsilon)s,(1-\varepsilon)t]\big) = 0\right\}\right]dt\\
&= (1+\varepsilon)P\left\{\sum_{k=0}^{\infty}\delta_{\frac{Jh(k+\gamma)}{h(\gamma)}}\big([(1+\varepsilon)s,(1-\varepsilon)t]\big) = 0\right\}\\
&= (1+\varepsilon)Q(\{\mu : \mu([(1+\varepsilon)s,(1-\varepsilon)t]) = 0\}).
\end{aligned}
\tag{3.4.38}
\]

Since
\[
Q(\{\mu : \mu(\{s\}\cup\{t\}) = 0\}) = q + pP\big(h(k+\gamma)/h(\gamma) \in \{s,t\}^c,\ k \ge 0\big) = 1,
\]
letting $\varepsilon \downarrow 0$ in (3.4.38) gives
\[
\limsup_{n\to\infty}Q_n(\{\mu : \mu([s,t]) = 0\}) \le Q(\{\mu : \mu([s,t]) = 0\}).
\]
The opposite inequality for the limit infimum is shown similarly. Thus (3.4.33) is established.

Now let $0 < s < 1$ and consider
\[
\begin{aligned}
\int_{M_p}\mu([s,1])\,Q_n(d\mu)
&= \frac{\int_\Omega\sum_{k=1}^{r_n}\delta_{\big(A_k/\bigvee_{j=1}^{r_n}|A_j|\big)}([s,1])\,I\big[\bigvee_{j=1}^{r_n}|A_j| > a_nx\big]\,dP}{P\big(\bigvee_{j=1}^{r_n}|A_j| > a_nx\big)}\\
&\sim \frac{p}{\int_0^1 h^\alpha(t)\,dt}\int_0^1 h^\alpha(t)\sum_{m=0}^{\lceil K\rceil}(m+1)\,I\left[\frac{h(m+t)}{h(t)} \ge s > \frac{h(m+1+t)}{h(t)}\right]dt\\
&= \int_{M_p}\mu([s,1])\,Q(d\mu),
\end{aligned}
\]
establishing (3.4.34) for intervals in the positive half line. As the other case is handled in the same way, the proof of the lemma is complete.

Proof of Theorem 3.2. We apply Theorem 2.5 in Davis and Hsing (1995). Since $\{A_k\}$ is $\lceil K\rceil$-dependent, the mixing condition in their theorem is a fortiori satisfied. Thus, by Lemmas 4.1 and 4.2, the result follows.


Proof of Theorem 3.3. We apply Theorem 3.1 in Davis and Hsing (1995). By that result, the case $0 < \alpha < 1$ holds since, from Lemma 3.1, it follows that
\[
nP\{A_1/a_n \in \cdot\} \xrightarrow{v} \mu(\cdot), \tag{3.4.39}
\]
where $\mu(dx) = [p\alpha x^{-\alpha-1}I_{(0,\infty)}(x) + q\alpha(-x)^{-\alpha-1}I_{(-\infty,0)}(x)]\,dx$ and $\xrightarrow{v}$ denotes vague convergence, and because
\[
\sum_{j=1}^{\infty}\delta_{A_j/a_n} \Rightarrow \sum_{i=1}^{\infty}\sum_{j=0}^{\infty}\delta_{P_iQ_{ij}}
\]
as $n \to \infty$ according to Theorem 3.2.

For the case $1 \le \alpha < 2$, we proceed as follows. First note that
\[
\begin{aligned}
&E\left[a_n^{-1}\sum_{k=1}^{n}\big(A_kI[|A_k|\le\varepsilon a_n] - EA_kI[|A_k|\le\varepsilon a_n]\big)\right]^2\\
&\quad\le a_n^{-2}\sum_{k=1}^{n}EA_k^2I[|A_k|\le\varepsilon a_n] + 2a_n^{-2}\sum_{j=1}^{n}\sum_{k=j+1}^{j+\lceil K\rceil}\mathrm{cov}\big(A_jI[|A_j|\le\varepsilon a_n],\,A_kI[|A_k|\le\varepsilon a_n]\big),
\end{aligned}
\]
where we have used the $\lceil K\rceil$-dependence of $\{A_k\}$. Applying the Cauchy--Schwarz inequality to the covariance terms gives
\[
\limsup_{n\to\infty}a_n^{-2}\,\mathrm{var}\left(\sum_{k=1}^{n}A_kI[|A_k|\le\varepsilon a_n]\right) \le \lim_{n\to\infty}\frac{2\lceil K\rceil+1}{a_n^2}\,nEA_1^2I[|A_1|\le\varepsilon a_n] = O(\varepsilon^{2-\alpha}).
\]

Hence, for any $\delta > 0$,
\[
\lim_{\varepsilon\to 0}\limsup_{n\to\infty}P\left[\left|\frac{1}{a_n}\sum_{k=1}^{n}\big(A_kI[|A_k|\le\varepsilon a_n] - EA_kI[|A_k|\le\varepsilon a_n]\big)\right| > \delta\right] = 0. \tag{3.4.40}
\]

This case holds from (3.4.39), (3.4.40), and Theorem 3.2 by application of Theorem

3.1 (ii) in Davis and Hsing (1995).

Proof of Theorem 3.4. Use Theorem 3.3 and apply Theorem 3.2 in Davis and

Hsing (1995).


Acknowledgments. The authors acknowledge National Science Foundation sup-

port from grants DMS 0071383 and DMS 0304407.

3.5 References

[1] Bevan, S., Kullberg, R. and Rice, J. (1979). An analysis of membrane noise, Annals of Statistics, 7, 237-257.

[2] Billingsley, P. (1995). Probability and Measure, Third Edition, John Wiley & Sons, New York.

[3] Brockwell, P. J. and Davis, R. A. (1991). Time Series: Theory and Methods, Second Edition, Springer-Verlag, New York.

[4] Brockwell, P. J., Resnick, S. I. and Tweedie, R. L. (1982). Storage processes with general release rule and additive inputs, Advances in Applied Probability, 14, 392-433.

[5] Chernick, M. R., Hsing, T. and McCormick, W. P. (1991). Calculating the extremal index for a class of stationary sequences, Advances in Applied Probability, 23, 835-850.

[6] Davis, R. A. and Hsing, T. (1995). Point process and partial sum convergence for weakly dependent random variables with infinite variance, Annals of Probability, 23, 879-917.

[7] Doney, R. A. and O'Brien, G. L. (1991). Loud shot noise, Annals of Applied Probability, 1, 88-103.

[8] Godambe, V. P. (1985). The foundations of finite sample estimation in stochastic processes, Biometrika, 72, 419-428.

[9] Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and its Application, Academic Press, New York.

[10] Harrison, J. M. and Resnick, S. I. (1976). The stationary distribution and first exit probabilities of a storage process with general release rule, Mathematics of Operations Research, 1, 347-358.

[11] Kallenberg, O. (1976). Random Measures, Academic Press, New York.

[12] Lauger, P. (1975). Shot noise in ion channels, Biochimica et Biophysica Acta, 413, 1-10.

[13] Lund, R. B., Butler, R. and Paige, R. L. (1999). Prediction of shot noise, Journal of Applied Probability, 36, 374-388.

[14] Moran, P. A. P. (1969). Dams in series with continuous release, Journal of Applied Probability, 4, 380-388.

[15] McCormick, W. P. (1997). Extremes for shot noise processes with heavy tailed amplitudes, Journal of Applied Probability, 34, 643-656.

[16] Rice, J. (1977). On generalized shot noise, Advances in Applied Probability, 9, 553-565.


Chapter 4

Conclusions and Future Work

This dissertation studied several statistical issues for shot noise processes. Our focus was on process histories taken on an equally spaced discrete-time lattice; generalizations of this should be possible. Our efforts with the law of large numbers and central limit theorem were made under very weak conditions. Some additional thought will show that integrability of the impulse response function is actually necessary for the strong law of large numbers. Likewise, in the proof of the central limit theorem, it is not hard to see that
\[
\int_0^1\left[\sum_{i=0}^{\infty}|h(i+u)|\right]^2 du < \infty \tag{4.0.1}
\]
is a necessary condition (here we use the notation of Chapter 3). We also gave asymptotics for the case of heavy-tailed shot marks.

We explored estimation of parameters appearing in a shot noise process. Methods for constructing optimal estimating equations for the case where the shot function is interval-similar were developed. Again, we point out that the conditions in Theorem 2.4 of Chapter 2 will be hard to weaken. For the case where the shot function is compactly supported, we developed moment-type methods for estimating parameters. These methods are particularly practical when the shot function has just one unknown parameter.

Though the limiting results presented are quite general, there are still a few unanswered questions in shot noise inference.


4.1 Non-interval Similar Shot Functions

In particular, we ask the following question.

Question 1. Can we establish optimal estimating equations without assuming the

interval similarity of shot functions?

Perhaps the following observation gives clues to the answer.

Theorem 1.1. Suppose $h$ is a non-negative shot function defined on $[0,\infty)$ satisfying $\int_0^\infty h(u)\,du < \infty$. Then for each natural number $d$, there exists a sequence of interval-similar functions $\{g_k\}_{k=1}^{\infty}$ such that
\[
0 \le h(n+\gamma) - \sum_{k=1}^{\infty}g_k(n+\gamma) \le \frac{1}{2^{n+d}} \tag{4.1.1}
\]
for almost all (Lebesgue) $\gamma \in [0,1)$ and $n = 0,1,2,\ldots$.

Let $C : [0,\infty) \to \mathbb{R}$ be the piecewise constant function defined by $C(n+\gamma) \equiv 1/2^{n+d}$ for each $n = 0,1,2,\ldots$ and $\gamma \in [0,1)$, where $d$ is some integer. Then $C$ is also interval-similar and can be made arbitrarily ``small'' by choosing $d$ large. Theorem 1.1 merely says that any non-negative integrable shot function can be approximated by the sum of an infinite sequence of interval-similar shot functions and, further, that the approximation error is bounded by an interval-similar shot function. It is not hard to generalize the theorem without assuming the non-negativity of $h$.

Proof of Theorem 1.1. Fix a non-negative integer $n$ and define $h_n(\gamma) = h(n+\gamma)$ for each $\gamma \in [0,1)$. Let
\[
E_{m,k} = [0,1)\cap h_n^{-1}\left(\left[m + \frac{k-1}{2^{n+d}},\ m + \frac{k}{2^{n+d}}\right)\right) \tag{4.1.2}
\]
for fixed $m = 0,1,2,\ldots$ and $k = 1,2,\ldots,2^{n+d}$. Then the $E_{m,k}$ are disjoint measurable subsets of $[0,1)$ and the union $\cup_{m,k}E_{m,k}$ has Lebesgue measure one. Let $c_{m,k} = m + (k-1)/2^{n+d}$ and define $g : [0,\infty)\to[0,\infty)$ by
\[
g(n+\gamma) = \sum_{m=0}^{\infty}\sum_{k=1}^{2^{n+d}}c_{m,k}I_{E_{m,k}}(\gamma), \tag{4.1.3}
\]
where $\gamma \in [0,1)$. It follows that for each $\gamma \in [0,1)$,
\[
0 \le h(n+\gamma) - g(n+\gamma) \le \frac{1}{2^{n+d}}. \tag{4.1.4}
\]

Let $F_{n,k}$, $k = 1,2,\ldots,m_n$ (where $m_n$ may be infinite), be all sets in the collection $\{E_{m,k} : m = 0,1,\ldots;\ k = 1,2,\ldots,2^{n+d}\}$ with positive Lebesgue measure. Then we can rewrite $g(n+\gamma)$ as
\[
g(n+\gamma) = \sum_{k=1}^{m_n}c_{n,k}I_{F_{n,k}}(\gamma) \tag{4.1.5}
\]
for $\gamma \in [0,1)$. Now let $F_1, F_2,\ldots$ be all the sets in the collection $\{\cap_{n=0}^{\infty}F_{n,i_n} : 1 \le i_n \le m_n\}$ with positive Lebesgue measure. Then the $\{F_k\}_{k=1}^{\infty}$ are disjoint and the union $\cup_{k=1}^{\infty}F_k$ has Lebesgue measure one. Furthermore, one has the representation
\[
g(n+\gamma) = \sum_{k=1}^{\infty}c_{n,k}I_{F_k}(\gamma), \tag{4.1.6}
\]
where $\gamma \in [0,1)$. Let $g_k : [0,\infty)\to\mathbb{R}$ be given by
\[
g_k(n+\gamma) = c_{n,k}I_{F_k}(\gamma) \tag{4.1.7}
\]
for each $n = 0,1,2,\ldots$ and $\gamma \in [0,1)$. From the above construction, it is routine to check that the $\{g_k\}_{k=1}^{\infty}$ are as desired.
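The construction above is, in essence, a dyadic quantization of $h$ on each unit interval. The following sketch illustrates the error bound (4.1.1) numerically; the choices $h(t) = e^{-t}$ and $d = 2$ are purely illustrative.

```python
# A numerical sketch of the dyadic quantization behind (4.1.2)-(4.1.4): on
# [n, n+1), h is rounded down to the grid of mesh 2^{-(n+d)}, so the error
# satisfies the bound in (4.1.1); h and d below are illustrative choices.
import numpy as np

h = lambda t: np.exp(-t)
d = 2

def g(t):
    """Round h(n + gamma) down to a multiple of 2^{-(n+d)}, as in (4.1.3)."""
    n = np.floor(t)
    mesh = 2.0 ** (-(n + d))
    return np.floor(h(t) / mesh) * mesh

t = np.linspace(0.0, 5.0, 1001)
err = h(t) - g(t)
scaled = err * 2.0 ** (np.floor(t) + d)   # error relative to the mesh 2^{-(n+d)}
print(err.min(), scaled.max())            # expect: >= 0 and <= 1, cf. (4.1.1)
```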

4.2 One Parameter Shot Functions

In applications it is not unusual to have a shot function with only one parameter. Motivated by the moment-type methods for the case of a compactly supported shot function in Chapter 2, we pay special attention to the case where the shot function has just one unknown parameter. One-parameter shot functions are also attractive for identifiability reasons: a shot function with too many parameters can raise identifiability issues.

The question we ask here is

Question 2. Can we generalize the moment-type methods developed in Chapter 2

without assuming compactness of the support of the shot function?

To be specific, we need to prove the following two conjectures, which supply the key technical details for the moment-type methods. Here, we use the notation and technical setting of Chapter 2, assume integrability of $h$ over $[0,\infty)$, and assume that $|h(t)| \le 1$ for all $t \ge 0$.

Conjecture 1. (Strong and Weak Laws of Large Numbers) Let $m_i = \lim_{n\to\infty}E[A_n^i]$ for $i = 1,2$. Then, as $n \to \infty$,
\[
n^{-1}\sum_{i=1}^{n}A_i \xrightarrow{a.s.} \lambda E[Y]\int_0^\infty h(u)\,du; \tag{4.2.1}
\]
\[
n^{-1}\sum_{i=1}^{n}A_i^2 \xrightarrow{P} \left[\lambda E[Y]\int_0^\infty h(u)\,du\right]^2 + \lambda E[Y^2]\int_0^\infty h^2(u)\,du. \tag{4.2.2}
\]

Conjecture 2. (Central Limit Theorem) As $n \to \infty$,
\[
n^{1/2}\begin{pmatrix} n^{-1}\sum_{i=1}^{n}A_i - m_1\\[0.5ex] n^{-1}\sum_{i=1}^{n}A_i^2 - m_2\end{pmatrix} \xrightarrow{D} N\left(\begin{pmatrix}0\\0\end{pmatrix},\ \begin{pmatrix}s_{11}^2 & s_{12}\\ s_{12} & s_{22}^2\end{pmatrix}\right), \tag{4.2.3}
\]
where $s_{11}^2$, $s_{12}$, and $s_{22}^2$ can be obtained explicitly (the computations are unwieldy, and the formulas are not given here).

A simulation study was conducted for the shot function $h(t) = e^{-\beta^2t^2}I_{[0,\infty)}(t)$ with $\beta = 1$. The shot marks are exponentially distributed with $E[Y] = 4$, and the rate generation parameter $\lambda$ is taken as unity. The two parameters we want to estimate are $E[Y]$ and $\beta$. Sample sizes of up to 500 were considered. The study gives some support for the two conjectures, but we do not discuss mathematical detail here.

Table 4.1: Shot noise simulation results with $h(t) = e^{-\beta^2t^2}I_{[0,\infty)}(t)$. Displayed are parameter estimates and simulated/theoretical standard errors (in parentheses) for various sample sizes; $\lambda = 1$, $E[Y] = 4$, $\beta = 1$; number of simulations = 2,000.

Sample Size    E[Y]                      β
50             3.7848 (1.1517/1.2520)    0.9700 (0.2778/0.3018)
100            3.9150 (0.8349/0.8853)    0.9911 (0.2048/0.2134)
200            3.9642 (0.6231/0.6260)    0.9971 (0.1501/0.1508)
250             3.9548 (0.5382/0.5599)   0.9901 (0.1306/0.1350)
500            3.9828 (0.3810/0.3959)    0.9982 (0.0957/0.0954)
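For readers wishing to reproduce a study of this kind, the sketch below fits $E[Y]$ and $\beta$ by matching the first two empirical moments of a simulated path. The estimating equations in the code are our own algebra for this particular model (with $\lambda = 1$ and exponential marks) and are not necessarily the Chapter 2 recipe.

```python
# A sketch of a moment-type fit in the spirit of Table 4.1; the moment
# equations below are our own derivation for this specific model.
import numpy as np

rng = np.random.default_rng(3)

def sample_path(n, lam=1.0, mark_mean=4.0, beta=1.0):
    N = rng.poisson(lam * n)
    tau = np.sort(rng.uniform(0.0, n, N))
    Y = rng.exponential(mark_mean, N)
    t = np.arange(1.0, n + 1.0)
    lags = t[:, None] - tau[None, :]
    return np.where(lags >= 0, np.exp(-(beta * lags) ** 2), 0.0) @ Y

def moment_fit(A):
    # For lam = 1 and exponential marks with mean EY, the stationary moments
    # are E[A] = EY*sqrt(pi)/(2*beta) and var(A) = EY**2*sqrt(pi/2)/beta.
    m1, v = A.mean(), A.var()
    beta_hat = v * np.sqrt(2.0 * np.pi) / (4.0 * m1**2)
    EY_hat = 2.0 * beta_hat * m1 / np.sqrt(np.pi)
    return EY_hat, beta_hat

fits = np.array([moment_fit(sample_path(500)) for _ in range(200)])
print(fits.mean(axis=0), fits.std(axis=0))   # compare with the n = 500 row
```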

Finally, we point out that interval-similar shot functions still deserve attention. We have shown that optimal estimating equations can be constructed for the case of interval-similar shot functions. One should not expect estimators from moment-type methods to be as efficient as those constructed via optimal estimating equations. However, moment-type equations can be solved to obtain initial values for numerically solving optimal estimating equations.

4.3 Non-causal Shot Functions?

The causality of shot functions plays a key role in establishing both the optimal estimating equation and the moment-type methods. In this connection, another question arises.

Question 3. Can we develop methods for estimating parameters in a shot noise

process with a non-causal shot function?

The idea of a non-causal shot function is not absurd: some processes in nature may exhibit anticipatory structure before the shot occurs. This was pointed out to me by Dr. Jane L. Harvill, a professor at Mississippi State University who is also studying shot noise. Harvill's work does not assume causality of the shot function and includes real data exhibiting such behavior. Cooperation in future work is planned.

4.4 Maximum Likelihood Estimators

We have made no use of any distributional features of the shot marks other than moments. An exponential distribution for the shot marks was assumed in several examples; however, all that is ultimately needed in those examples is the first few moments of the shot marks. If the first few moments are given, then the distribution of the shot marks is irrelevant in many estimating equation settings. This may be regarded as an obvious advantage, especially over exact distributional calculations when sample sizes are large, or as a drawback ripe for likelihood exploration.

It is desirable to explore the possible roles of the distribution of shot marks

in inferential settings and develop maximum likelihood estimators. Thus the final

question we ask is:

Question 4. Can we develop maximum likelihood estimators for parameters of a

shot noise process?