CAES Lecture 2 - Stochastic Processes

Prof. Michele Scarpiniti
Dept. INFOCOM - "Sapienza" University of Rome
http://ispac.ing.uniroma1.it/scarpiniti/index.htm
Rome, March 4, 2010
M. Scarpiniti CAES Lezione 2 - Processi Stocastici 1 / 46
Outline

1. Introduction
   - Basic Definition
   - Mean and Moments
2. Stochastic Processes
   - Stochastic Processes
   - Random sequences and discrete-time LTI systems
Introduction
Introduction
Deterministic signals are defined through precise mathematical rules which are able to exactly predict the sample values. Random signals, or stochastic processes, are instead not exactly predictable; they are statistically characterized by functions that describe their behavior on average.

A random variable (RV) X(ς) is a function, with domain S = {ς1, ς2, . . .}, defined as the universal set of experimental outcomes, which assigns a number x to every outcome ς ∈ S. The outcome ςk is called an event. The predictability of the event ςk is described by a non-negative function p(ςk), k = 1, 2, . . ., named the probability function of the event ςk. This function satisfies the two following properties:

1. p[X(ς) = −∞] = 0 and p[X(ς) = +∞] = 1;
2. 0 ≤ p(·) ≤ 1.

A random variable is: a discrete RV if it can assume only a discrete set of values xk; a continuous RV if it can assume all possible values.
Cumulative Distribution Functions
For simplicity, the RV X(ς) is denoted by X. The probability of the event X ≤ x, denoted p(X ≤ x), is a function of the variable x. This function is a probability measure; it is denoted by F_X(x) and named the probability distribution function or cumulative distribution function (cdf):

F_X(x) = p(X ≤ x),  for −∞ < x < ∞

Properties of the cdf:

1. 0 ≤ F_X(x) ≤ 1;
2. F_X(−∞) = 0;
3. F_X(+∞) = 1;
4. F_X(x1) ≤ F_X(x2) if x1 < x2.
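The four cdf properties above can be checked numerically on an empirical cdf. The sketch below (not from the slides; it assumes Python with NumPy and an arbitrary Gaussian sample) estimates F_X(t) = p(X ≤ t) on a grid:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)          # realizations of a Gaussian RV X

def ecdf(samples, t):
    """Empirical estimate of F_X(t) = p(X <= t)."""
    return np.mean(samples <= t)

grid = np.linspace(-4.0, 4.0, 81)
F = np.array([ecdf(x, t) for t in grid])

print(F[0], F[-1])               # ≈ 0 and ≈ 1 (properties 2 and 3)
print((np.diff(F) >= 0).all())   # True: F is non-decreasing (property 4)
```

Property 1 holds by construction, since a sample mean of indicator values always lies in [0, 1].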
Probability Density Functions
Another very meaningful function is the probability density function (pdf), defined as follows:

f_X(x) = dF_X(x)/dx

Properties of the pdf:

1. ∫_{−∞}^{+∞} f_X(x) dx = 1;
2. F_X(x) = ∫_{−∞}^{x} f_X(ν) dν.

Note that f_X(x) does not itself represent a probability: to obtain the probability of the event x ≤ X ≤ x + ∆x we have to multiply the pdf by the interval ∆x:

f_X(x)∆x ≈ ∆F_X(x) = F_X(x + ∆x) − F_X(x) = p(x ≤ X ≤ x + ∆x).
Cumulative and Probability Density Functions

The following figure shows some cdfs and the respective pdfs.
Mean and Moments
The expected value or mean of a RV X is defined by
µx = E {x} =
∫ ∞−∞
xfX(x)dx
The moment of order m is defined by
r(m)x = E {xm} =
∫ ∞−∞
xmfX(x)dx
The central moment of order m is defined by
c(m)x = E {(x − µ)m} =
∫ ∞−∞
(x − µ)mfX(x)dx
M. Scarpiniti CAES Lezione 2 - Processi Stocastici 8 / 46
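The three definitions above can be estimated from samples by replacing the integrals with sample averages. A minimal sketch, assuming NumPy and an exponential RV with scale 2 as an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500_000)  # RV with theoretical mean 2

mu = x.mean()                # sample estimate of µ_x = E{x}
r2 = np.mean(x**2)           # moment of order 2, r^(2) = E{x²}
c2 = np.mean((x - mu)**2)    # central moment of order 2, c^(2) = E{(x-µ)²}

# For an exponential RV with scale 2: µ = 2, r^(2) = 8, c^(2) = 4
print(mu, r2, c2)
```

Note that the central moment of order 2 is the variance introduced on the next slide.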
Variance
The variance σ²_x of a RV X is defined as the central moment of order 2:

σ²_x = c_x^(2) = E{(x − µ)²} = ∫_{−∞}^{∞} (x − µ)² f_X(x) dx

The positive constant σ_x = √(σ²_x) is defined as the standard deviation of the RV X.

The variance indicates how the values of the RV X are distributed around the mean value µ_x, as shown in the figure below.
Skewness and Kurtosis
The skewness k_x^(3) of a RV X is a function of the central moment of order 3:

k_x^(3) = E{[(x − µ)/σ_x]³} = (1/σ³_x) c_x^(3)

The skewness represents the degree of asymmetry of the pdf.

The kurtosis k_x^(4) of a RV X is a function of the central moment of order 4:

k_x^(4) = E{[(x − µ)/σ_x]⁴} − 3 = (1/σ⁴_x) c_x^(4) − 3

The term −3 is introduced in order to obtain k_x^(4) = 0 for a Gaussian RV.
The kurtosis represents the peakedness of the pdf.
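Both descriptors are easy to estimate from samples by standardizing first. A sketch with NumPy (the exponential distribution is an arbitrary choice of asymmetric pdf, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)

def skewness(x):
    mu, sigma = x.mean(), x.std()
    return np.mean(((x - mu) / sigma)**3)        # k^(3)

def kurtosis(x):
    mu, sigma = x.mean(), x.std()
    return np.mean(((x - mu) / sigma)**4) - 3.0  # k^(4), the -3 centers Gaussians at 0

gauss = rng.normal(size=1_000_000)
expo = rng.exponential(size=1_000_000)           # asymmetric pdf

print(skewness(gauss), kurtosis(gauss))  # both ≈ 0 for a Gaussian RV
print(skewness(expo))                    # ≈ 2: strongly right-skewed
```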
Chebyshev's inequality
Given a RV X with mean value µ and standard deviation σ_x, then

p{|x − µ| ≥ kσ_x} ≤ 1/k²,  k > 0.

This inequality is known as Chebyshev's inequality and states that a RV deviates from its mean value by at least k standard deviations with probability at most 1/k². Chebyshev's inequality is independent of the shape of f_x(x).
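Since the bound is distribution-free, it can be checked by Monte Carlo on any distribution. A sketch with NumPy, using an exponential sample as an arbitrary non-Gaussian example:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(size=1_000_000)   # any shape works: the bound is distribution-free
mu, sigma = x.mean(), x.std()

results = []
for k in (1.5, 2.0, 3.0):
    p_emp = np.mean(np.abs(x - mu) >= k * sigma)  # empirical p{|x-µ| ≥ kσ}
    results.append((k, p_emp, 1.0 / k**2))        # (k, tail probability, bound)

for k, p_emp, bound in results:
    print(k, p_emp, bound)   # the empirical probability never exceeds 1/k²
```

For well-behaved distributions the actual tail probability is usually far below 1/k²; the inequality trades tightness for complete generality.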
The Characteristic Function
The characteristic function or CF Φ_x(s) is defined as the two-sided Laplace transform, evaluated at −s, of the probability density function (pdf) f_x(x):

Φ_x(s) = ∫_{−∞}^{∞} f_x(x) e^{sx} dx = E{e^{sx}}    (1)

where s is the complex-valued Laplace variable and cannot be interpreted as a frequency.
The Taylor series expansion of the CF gives

Φ_x(s) ≡ E{e^{sx}} = E{1 + sx + (sx)²/2! + · · · + (sx)^m/m! + · · ·}
                   = 1 + sµ + (s²/2!) r_x^(2) + · · · + (s^m/m!) r_x^(m) + · · ·

So the moments can be interpreted as the coefficients of the Taylor series expansion:

r_x^(m) = d^m Φ_x(s)/ds^m |_{s=0},  for m = 1, 2, . . .

For this reason the CF is also called the moment-generating function.
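The moment formula can be verified symbolically. A sketch with SymPy (not from the slides), using as an example the exponential pdf f(x) = λe^{−λx}, x ≥ 0, whose CF E{e^{sx}} = λ/(λ − s) (for s < λ) is a standard closed form:

```python
import sympy as sp

s, lam, x = sp.symbols('s lam x', positive=True)
pdf = lam * sp.exp(-lam * x)          # exponential pdf on x >= 0
Phi = lam / (lam - s)                 # its CF E{e^{sx}}, valid for s < lam

checks = []
for m in (1, 2, 3):
    r_m = sp.diff(Phi, s, m).subs(s, 0)               # moment from the CF derivatives
    direct = sp.integrate(x**m * pdf, (x, 0, sp.oo))  # E{x^m} = m!/lam^m directly
    checks.append(sp.simplify(r_m - direct))

print(checks)   # [0, 0, 0]: both routes give m!/lam^m
```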
The Second Characteristic Function
The second characteristic function Ψ_x(s) is defined as the natural logarithm of the characteristic function Φ_x(s):

Ψ_x(s) = ln Φ_x(s) = ln E{e^{sx}}    (2)

The Taylor series expansion of the second CF gives coefficients, each of which can be interpreted as a cumulant κ_x^(m), a statistical descriptor:

κ_x^(m) = d^m Ψ_x(s)/ds^m |_{s=0},  m = 1, 2, . . .

For this reason the second CF is also called the cumulant-generating function. The first five cumulants (for a zero-mean RV) are:

κ_x^(1) = r_x^(1) = µ = 0
κ_x^(2) = r_x^(2) = σ²_x
κ_x^(3) = c_x^(3)
κ_x^(4) = c_x^(4) − 3σ⁴_x
κ_x^(5) = c_x^(5) − 10 c_x^(3) σ²_x
The Uniform Distribution
A RV X, defined in the interval [a, b], has a uniform distribution if its pdf is

f_x(x) = { 1/(b − a)  for a ≤ x ≤ b
         { 0          elsewhere

The cdf is

F_x(x) = ∫_{−∞}^{x} f_x(v) dv = { 0                  for x < a
                                { (x − a)/(b − a)    for a ≤ x ≤ b
                                { 1                  for x > b

while the CF is

Φ_x(s) = (e^{sb} − e^{sa}) / (s(b − a))

Its mean and variance are, respectively:

µ = (a + b)/2  and  σ²_x = (b − a)²/12
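The mean and variance formulas can be checked against samples. A sketch with NumPy (the interval [−1, 3] is an arbitrary choice for the example):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = -1.0, 3.0                        # arbitrary interval for the sketch
x = rng.uniform(a, b, size=1_000_000)

print(x.mean(), (a + b) / 2)            # both ≈ 1.0
print(x.var(), (b - a)**2 / 12)         # both ≈ 1.333...
```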
The Normal Distribution
A RV X, with mean µ and standard deviation σ_x, has a normal distribution if its pdf is

f_x(x) = (1/√(2πσ²_x)) e^{−(x−µ)²/(2σ²_x)}

with CF

Φ_x(s) = e^{µs + (1/2)σ²_x s²}

Usually it is denoted by N(µ, σ²_x), because the Gaussian RV is completely determined by its mean µ and variance σ²_x.

The central moments are given by

c_x^(m) = E{(x − µ)^m} = { 1 · 3 · 5 · . . . · (m − 1) σ^m_x  for m even
                         { 0                                  for m odd
Central Limit Theorem

An important result is the central limit theorem (CLT), stating that:

Theorem
Given n independent RVs x_i, consider the sum

x = x1 + x2 + . . . + xn

This is a RV with mean µ = µ1 + µ2 + . . . + µn and variance σ²_x = σ²_{x1} + σ²_{x2} + . . . + σ²_{xn}. Then the distribution f_x(x) of x approaches a normal distribution with the same mean and variance as n increases:

f_x(x) ≈ (1/√(2πσ²_x)) e^{−(x−µ)²/(2σ²_x)}

This theorem can be stated as a limit: if z = (x − µ)/σ_x, then

f_z(z) → (1/√(2π)) e^{−z²/2}  as n → ∞.
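The theorem is easy to observe numerically: sums of i.i.d. uniform RVs, once standardized, behave like N(0, 1). A sketch with NumPy (n = 30 terms is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(5)

# Sum n i.i.d. uniform RVs (each with µ_i = 0.5, σ²_i = 1/12) and standardize.
n, trials = 30, 200_000
x = rng.uniform(size=(trials, n)).sum(axis=1)
z = (x - n * 0.5) / np.sqrt(n / 12.0)     # z = (x - µ)/σ_x

# The standardized sum is approximately N(0, 1).
print(z.mean(), z.var())                  # ≈ 0 and ≈ 1
print(np.mean(np.abs(z) <= 1.96))         # ≈ 0.95, as for a standard Gaussian
```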
Vectors of Random Variables
A vector of random variables, or random vector, is defined as a row or column vector of random variables, for example X(ς) = [X1(ς), X2(ς), . . . , XN(ς)]^T. We can generalize to random vectors the definitions of mean, variance and moments.

The mean of a random vector is the vector:

µ = E{x} = [E{x1}, E{x2}, . . . , E{xN}]^T = [µ1, µ2, . . . , µN]^T

The second-order statistics are characterized by the auto-covariance matrix, defined as

C_x = E{(x − µ)(x − µ)^T}

It is a symmetric matrix: C_x = C_x^T.

The auto-correlation matrix is defined as

R_x = E{xx^T}

It is a symmetric matrix: R_x = R_x^T.
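Sample estimates of these matrices are one line each in NumPy. A sketch (the 3-dimensional mixing matrix is an arbitrary choice used only to create correlated components):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100_000

# N realizations of a 3-dimensional random vector with correlated components.
w = rng.normal(size=(N, 3))
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
x = w @ A                              # mixing introduces correlation

mu = x.mean(axis=0)                    # mean vector µ
C = (x - mu).T @ (x - mu) / N          # auto-covariance matrix C_x
R = x.T @ x / N                        # auto-correlation matrix R_x

print(np.allclose(C, C.T), np.allclose(R, R.T))  # True True: both symmetric
print(np.abs(C - R).max())             # small: zero mean ⇒ C_x ≈ R_x
```

The last line previews the next slide: for a zero-mean vector the two matrices coincide.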
Vectors of Random Variables

If the random vector X has zero mean, µ = [0, 0, . . . , 0]^T, then

C_x = R_x.

The auto-correlation matrix R_x of a vector X of RVs is always non-negative definite. In fact, for every vector w = [w1, w2, . . . , wN]^T, we have

w^T R_x w ≥ 0

that is, the quadratic form w^T R_x w is non-negative.

For a complex-valued random vector X ∈ C^N, the same formulas hold with the transpose operator (·)^T replaced by the Hermitian operator (·)^H.
Conditional distributions
Conditional probability is the probability of some event X given the occurrence of some other event Y, and it is written as

p(X|Y).

It is described by the conditional cumulative distribution function F_x(x|y) and by the conditional probability density function f_x(x|y). These are evaluated as

F_x(x|y) = F_xy(x, y)/F_y(y),   f_x(x|y) = f_xy(x, y)/f_y(y)

In addition, the following holds:

Theorem (Bayes)
p(Y|X) = p(X|Y) p(Y) / p(X)
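A small numeric instance of Bayes' theorem (the numbers below are hypothetical, chosen only to exercise the formula with a rare event Y and an imperfect observation X):

```python
# Hypothetical numbers chosen only to exercise Bayes' theorem.
p_Y = 0.01            # prior p(Y): a rare condition
p_X_given_Y = 0.99    # likelihood p(X|Y): observation fires when Y holds
p_X_given_notY = 0.05 # false-alarm rate p(X|not Y)

# Total probability: p(X) = p(X|Y)p(Y) + p(X|not Y)p(not Y)
p_X = p_X_given_Y * p_Y + p_X_given_notY * (1 - p_Y)

# Bayes' theorem: p(Y|X) = p(X|Y)p(Y)/p(X)
p_Y_given_X = p_X_given_Y * p_Y / p_X
print(round(p_Y_given_X, 4))   # 0.1667: the posterior stays far below p(X|Y)
```

The example shows why the prior p(Y) matters: even with p(X|Y) = 0.99, a rare Y yields a modest posterior.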
Statistical independence

Given two events X and Y, we say that they are statistically independent if

p(X|Y) = p(X),   p(Y|X) = p(Y)

and, correspondingly,

F_x(x|y) = F_x(x),   f_x(x|y) = f_x(x)

A consequence of statistical independence is that the joint distribution factorizes into the product of its marginals, as a consequence of Bayes' theorem:

F_xy(x, y) = F_x(x)F_y(y),   f_xy(x, y) = f_x(x)f_y(y)
Stochastic Processes
Stochastic Processes
A stochastic process (SP) x(t, ς) is a family of functions of the two variables t and ς. The variable ς ranges over the set of all experimental outcomes, while t represents the time variable, usually t ∈ R or t ∈ Z. If t is real we have a continuous-time stochastic process x(t); if t ∈ Z, a discrete-time stochastic process x[n]:

x[n] = [x1[n], x2[n], . . . , xN[n]]
Statistical representation of stochastic processes
The statistical representation of a stochastic process can be determined as for random variables. In fact, for a fixed time index n, the process is a simple random variable. Hence a stochastic process is characterized by:

the joint cdf of order k:

F_x(x1, . . . , xk; n1, . . . , nk) = p(x[n1] ≤ x1, . . . , x[nk] ≤ xk)

the joint pdf of order k:

f_x(x1, . . . , xk; n1, . . . , nk) = ∂^k F_x(x1, . . . , xk; n1, . . . , nk) / (∂x1 ∂x2 . . . ∂xk)
Ensemble means
For a stochastic process it is possible to define the ensemble means:

mean value:
µn = E{x[n]}
It usually depends on the time instant n.

autocorrelation:
r[n, m] = E{x[n]x*[m]}
In addition, r[n, n] = E{|x[n]|²} is the mean power of the process.

auto-covariance:
c[n, m] = E{(x[n] − µn)(x[m] − µm)*} = r[n, m] − µn µ*m
If the SP has zero mean for every n, m, then r[n, m] = c[n, m].

higher-order moments and higher-order central moments:
r^(m)[n1, . . . , nm] = E{x[n1] · x[n2] · . . . · x[nm]},
c^(m)[n1, . . . , nm] = E{(x[n1] − µn1)(x[n2] − µn2) · · · (x[nm] − µnm)}.
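Ensemble means are averages over realizations, at fixed time indices. A sketch with NumPy (the toy process, a cosine plus white noise, is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Ensemble of 20000 realizations of a toy SP: x[n] = cos(0.1 n) + white noise.
n = np.arange(100)
realizations = np.cos(0.1 * n) + rng.normal(size=(20_000, n.size))

mu_n = realizations.mean(axis=0)       # ensemble mean µn, one value per n
r_0_5 = np.mean(realizations[:, 0] * realizations[:, 5])   # r[0, 5]
c_0_5 = r_0_5 - mu_n[0] * mu_n[5]      # c[n, m] = r[n, m] - µn µm

print(np.max(np.abs(mu_n - np.cos(0.1 * n))))  # small: µn tracks cos(0.1 n)
print(c_0_5)                           # ≈ 0: the noise is independent across n
```

Note that µn here genuinely depends on n, as the slide anticipates.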
Ensemble means
In particular, the following moments are very important:

variance:
σ²_{x_n} = E{|x[n] − µn|²} = E{|x[n]|²} − |µn|²
The quantity σ_{x_n} is defined as the standard deviation of the SP and measures the dispersion of the observation x[n] around its mean value µn. If the SP has zero mean, then σ²_{x_n} = r[n, n] = E{|x[n]|²}.

Given two SPs x[n] and y[n], we can define the cross-correlation and the cross-covariance:
r_xy[n, m] = E{x[n]y*[m]}
c_xy[n, m] = E{(x[n] − µ_{x_n})(y[m] − µ_{y_m})*} = r_xy[n, m] − µ_{x_n} µ*_{y_m}

The normalized cross-covariance (correlation coefficient) is defined as
ρ_xy[n, m] = c_xy[n, m] / (σ_{x_n} σ_{y_m}).
Statistical Independence
The general concept of independence is fundamental.

A stochastic process is said to be independent if its joint pdf can be factorized into the product of its marginal distributions:

f_x(x1, . . . , xk; n1, . . . , nk) = f1(x1; n1) · f2(x2; n2) · . . . · fk(xk; nk),  ∀k, ni

that is, x[n] is a SP formed by the independent RVs x1[n], x2[n], . . ..

Alternatively, for two independent sequences x[n] and y[n] it follows that:

E{x[n] · y[n]} = E{x[n]} · E{y[n]}

If all the sequences of the SP are independent and characterized by the same probability density function, f1(x1; n1) = . . . = fk(xk; nk), then the SP is called an independent and identically distributed (i.i.d.) process.
Statistical Independence
A stochastic process is said to be uncorrelated if

c[n, m] = E{(x[n] − µn)(x[m] − µm)*} = σ²_{x_n} δ[n − m]

The SPs x[n] and y[n] are said to be uncorrelated if

c_xy[n, m] = E{(x[n] − µ_{x_n})(y[m] − µ_{y_m})*} = 0

that is,

r_xy[n, m] = µ_{x_n} µ*_{y_m}

The SPs x[n] and y[n] are said to be orthogonal if

r_xy[n, m] = 0.

Remark
If two SPs x[n] and y[n] are independent, then they are also uncorrelated, but the reverse is not always true. (It is true for Gaussian SPs.)
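The remark can be illustrated with a classic counterexample (not from the slides): for a zero-mean Gaussian x, the pair (x, x² − 1) is uncorrelated yet fully dependent. A sketch with NumPy:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.normal(size=1_000_000)
y = x**2 - 1.0           # fully dependent on x, yet uncorrelated with it

# Covariance: E{xy} - E{x}E{y} = E{x³} = 0 for a zero-mean Gaussian.
c_xy = np.mean(x * y) - x.mean() * y.mean()
print(c_xy)              # ≈ 0: uncorrelated

# Dependence shows up in higher-order statistics:
# E{x²y} = E{x⁴} - E{x²} = 3 - 1 = 2, while E{x²}E{y} = 0.
print(np.mean(x**2 * y)) # ≈ 2, far from 0
```

So uncorrelatedness only constrains second-order statistics; independence constrains all orders.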
Stationary stochastic processes
Loosely speaking, a SP is stationary if the statistics of x[n] are equal to the statistics of x[n − k]. To be more precise:

Definition
The SP x[n] is strict-sense stationary (SSS), or stationary of order N, if

f_x(x1, . . . , xN; n1, . . . , nN) = f_x(x1, . . . , xN; n1 − k, . . . , nN − k),  ∀k

that is, its pdf is invariant to a time translation.

Definition
The SP x[n] is wide-sense stationary (WSS) if its mean is constant,

E{x[n]} = E{x[n + k]} = µ,  ∀n, k

and its autocorrelation depends only on the time difference, r[n, m] = r[n − m]; that is, its first- and second-order statistics do not depend on absolute time.
Stationary stochastic processes
As a corollary of the definition of stationarity we can give the following:

Definition
The SP x[n] is wide-sense periodic (WSP) if

E{x[n]} = E{x[n + N]} = µ,  ∀n

that is, its first-order statistic is invariant to a translation of N samples.

Definition
The SP x[n] is wide-sense cyclostationary (WSC) if

E{x[n]} = E{x[n + N]},   r[m, n] = r[m + N, n + N],   ∀m, n

that is, its first- and second-order statistics are invariant to a translation of N samples.
Auto-correlation and auto-covariance sequences for a WSS SP
The auto-correlation sequence for a WSS SP is defined as

r[m, n] = E{x[n]x*[m]} = r[n − m]

Let us set k = n − m, defined as the correlation delay or lag; then the correlation is written as

r[k] = E{x[n]x*[n − k]} = E{x[n + k]x*[n]}

usually named the auto-correlation function (acf).

The auto-covariance sequence for a WSS SP is defined as

c[k] = E{(x[n] − µ)(x[n − k] − µ)*} = r[k] − |µ|²

For a zero-mean SP, r[k] = c[k].
Properties of the auto-correlation function
The auto-correlation function has some fundamental properties:

1. it has conjugate symmetry with respect to the delay:

r*[−k] = r[k]

2. it is non-negative definite:

Σ_{k=1}^{M} Σ_{m=1}^{M} αk r[k − m] α*m ≥ 0

This property is a necessary and sufficient condition for r[k] to be an acf.

3. the term at k = 0 is the one of maximum amplitude:

E{|x[n]|²} = r[0] ≥ |r[k]|,  ∀n, k
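These properties can be observed on a sample estimate of the acf. A sketch with NumPy, using as an arbitrary example a first-order moving average of white noise, whose true acf is r[0] = 1.64, r[±1] = 0.8, r[k] = 0 otherwise:

```python
import numpy as np

rng = np.random.default_rng(9)
N = 200_000
w = rng.normal(size=N + 1)
x = w[1:] + 0.8 * w[:-1]       # a correlated WSS sequence (first-order moving average)

def acf(x, k):
    """Biased sample estimate of r[k] = E{x[n] x[n-k]} for a real sequence."""
    k = abs(k)                 # property 1: r[-k] = r[k] for a real SP
    return np.dot(x[k:], x[:len(x) - k]) / len(x)

r = np.array([acf(x, k) for k in range(5)])
print(r)   # ≈ [1.64, 0.8, 0, 0, 0]; the maximum is at lag 0 (property 3)
```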
Cross-correlation and cross-covariance sequences for two WSS SPs
The cross-correlation sequence for two WSS SPs x[n] and y[n] is defined as

r_xy[k] = E{x[n]y*[n − k]} = E{x[n + k]y*[n]}

usually known as the cross-correlation function (ccf).

The cross-covariance sequence for two WSS SPs x[n] and y[n] is defined as

c_xy[k] = E{(x[n] − µx)(y[n − k] − µy)*} = r_xy[k] − µx µ*y

For zero-mean SPs, r_xy[k] = c_xy[k].
Ergodic processes
A stochastic process is said to be ergodic if the ensemble means coincide with the temporal means.
For a discrete sequence x[n] the temporal mean 〈x[n]〉 is defined as

µ = 〈x[n]〉 = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x[n]

So, if the SP is ergodic, we can substitute the expectation operator E{·} with the operator 〈·〉, hence

µ = 〈x[n]〉 = E{x[n]}

and

〈x[n]x*[n − k]〉 = E{x[n]x*[n − k]}

and so on.
Ergodic processes
Ergodicity is a property assuring that we can construct the statistics of order N from only one realization of the SP. This fact is very important in practical signal-processing situations, where we usually have only one realization at our disposal. From time means we can then obtain the ensemble means: e.g., from a power measurement we can obtain the variance.

Two ergodic SPs x[n] and y[n] are also jointly ergodic if

E{x[n]y*[n − k]} = 〈x[n]y*[n − k]〉

Remark
If a SP is ergodic then it is also WSS, but the reverse is not always true.
Auto-correlation matrix of sequences
Let us consider an M × 1 random vector x_n derived from the SP x[n]:

x_n ≜ [x[n]  x[n − 1]  · · ·  x[n − M + 1]]^T

Then the mean value is defined as

µ_{x_n} = [µ_{x_n}  µ_{x_{n−1}}  · · ·  µ_{x_{n−M+1}}]^T

The auto-correlation matrix is defined as

R_{x_n} = E{x_n x_n^H} =
  [ r_x[n, n]          · · ·  r_x[n, n − M + 1]
    ...                . . .  ...
    r_x[n − M + 1, n]  · · ·  r_x[n − M + 1, n − M + 1] ]

Because r_x[n − i, n − j] = r*_x[n − j, n − i] for 0 ≤ i, j ≤ M − 1, the matrix R_{x_n} is Hermitian.
Auto-correlation matrix of sequences
If the stochastic process is stationary, the auto-correlation function does not depend on the time index n. So, defining k = j − i, we have

r_x[n − i, n − j] = r_x[j − i] = r_x[k]

With this position we obtain

R_x = E{x_n x_n^H} =
  [ r_x[0]        r_x[1]        r_x[2]        · · ·  r_x[M − 1]
    r*_x[1]       r_x[0]        r_x[1]        · · ·  r_x[M − 2]
    r*_x[2]       r*_x[1]       r_x[0]        · · ·  r_x[M − 3]
    ...           ...           ...           . . .  ...
    r*_x[M − 1]   r*_x[M − 2]   r*_x[M − 3]   · · ·  r_x[0] ]

The auto-correlation matrix R_x of a stationary SP is Hermitian, Toeplitz and non-negative definite.
Properties of the auto-correlation matrix
The eigenvalues and eigenvectors of the auto-correlation matrix R_x satisfy some properties:

1. The eigenvalues λi of R_x are real and non-negative, and

λi = (q_i^H R q_i) / (q_i^H q_i)   (Rayleigh ratio)

2. The eigenvectors q_i of R_x are orthogonal for distinct λi:

q_i^H q_j = 0,  for i ≠ j

3. R_x is diagonalized by

R = QΛQ^H

where Q = [q0 q1 · · · q_{M−1}], Λ = diag(λ0, λ1, . . . , λ_{M−1}) and Q^H Q = I.

4. R_x has the following spectral representation:

R = Σ_{i=0}^{M−1} λi q_i q_i^H = Σ_{i=0}^{M−1} λi P_i

where P_i = q_i q_i^H is defined as the spectral projection.

5. The trace of R_x satisfies

tr[R] = Σ_{i=0}^{M−1} λi  ⇒  (1/M) Σ_{i=0}^{M−1} λi = r_xx[0] = σ²_x
Random sequences and discrete-time LTI systems
Let us consider a stable discrete-time LTI system, characterized by the impulse response h[n], with a WSS random sequence x[n] as input and y[n] = Σ_{l=−∞}^{∞} h[l]x[n − l] as output.

It can be useful to know the auto- and cross-correlations between input and output of such a system:

r_yx[k] = h[k] ∗ r_xx[k];
r_yy[k] = r_hh[k] ∗ r_xx[k],  where r_hh[k] = h[k] ∗ h*[−k] is the (deterministic) autocorrelation of the impulse response.

The output pdf p_y(y) is related to the input pdf p_x(x) by

p_y(y) = p_x(x) / |det(J)|

where J is the Jacobian of the transformation.
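The relation r_yx[k] = h[k] ∗ r_xx[k] is easy to check numerically: with a white input (r_xx[k] = σ²δ[k], σ² = 1), the input-output cross-correlation reduces to the impulse response itself. A sketch with NumPy (the short FIR filter is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(10)
N = 500_000
h = np.array([1.0, 0.5, 0.25])           # a short, arbitrary impulse response
x = rng.normal(size=N)                   # white input: r_xx[k] = δ[k]
y = np.convolve(x, h)[:N]                # y[n] = Σ h[l] x[n-l]

def cross_corr(y, x, k):
    """Sample estimate of r_yx[k] = E{y[n] x[n-k]} for real sequences."""
    return np.dot(y[k:], x[:len(x) - k]) / (len(x) - k)

r_yx = np.array([cross_corr(y, x, k) for k in range(5)])
print(r_yx)   # ≈ [1.0, 0.5, 0.25, 0.0, 0.0]: the impulse response, then zero
```

This identity is the basis of correlation-based system identification: exciting an unknown LTI system with white noise and measuring r_yx[k] recovers h[k].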
Power Spectral Density (PSD)
The power spectral density or PSD is defined as the DTFT of the auto-correlation sequence

R_xx(e^{jω}) = Σ_{k=−∞}^{∞} r_xx[k] e^{−jωk}

It represents a measure of the frequency distribution of the mean power of a SP.

The cross power spectral density or CPSD is defined as the DTFT of the cross-correlation sequence

R_xy(e^{jω}) = Σ_{k=−∞}^{∞} r_xy[k] e^{−jωk}

Its magnitude represents how strongly the frequency content of the SP x[n] is associated with that of the SP y[n], while its phase indicates the phase delay of x[n] with respect to y[n] at each frequency.
Coherence Function and MSC
The normalized cross-spectrum or coherence function is defined as

γ_xy(e^{jω}) = R_xy(e^{jω}) / (√(R_xx(e^{jω})) √(R_yy(e^{jω})))

The magnitude square coherence (MSC) is the magnitude square of the coherence function:

C_xy(e^{jω}) ≡ |γ_xy(e^{jω})|² = |R_xy(e^{jω})|² / (R_xx(e^{jω}) R_yy(e^{jω}))

The MSC is a kind of correlation in the frequency domain. In particular, if x[n] = y[n] then γ_xy(e^{jω}) = 1 (maximum correlation). On the contrary, if x[n] and y[n] are not correlated, then γ_xy(e^{jω}) = 0. We can deduce that

0 ≤ |γ_xy(e^{jω})| ≤ 1
Summary

Definitions of the principal correlation functions are listed below:

Definitions
Expectation:                        µ_x = E{x[n]}
Auto-correlation:                   r_xx[k] = E{x[n]x*[n − k]}
Auto-covariance:                    c_xx[k] = E{(x[n] − µ_x)(x[n − k] − µ_x)*}
Cross-correlation:                  r_xy[k] = E{x[n]y*[n − k]}
Cross-covariance:                   c_xy[k] = E{(x[n] − µ_x)(y[n − k] − µ_y)*}
Power spectral density (PSD):       R_xx(e^{jω}) = Σ_{k=−∞}^{∞} r_xx[k]e^{−jωk}
Magnitude square coherence (MSC):   |γ_xy(e^{jω})|² = |R_xy(e^{jω})|² / [R_yy(e^{jω})R_xx(e^{jω})]

Properties of the principal correlation functions are listed below:

Properties
r_xx[k] is non-negative definite                            R_xx(e^{jω}) ≥ 0 and real
r_xx[k] = r*_xx[−k]                                         R_xx(e^{jω}) = R_xx(e^{−jω}) for x[n] ∈ R
|r_xx[k]| ≤ r_xx[0]                                         R_xx(z) = R*_xx(1/z*)
|c_x[k]/σ²_x| ≤ 1                                           R_xx(z) = R_xx(z^{−1}) for x[n] ∈ R
r_xy[k] = r*_yx[−k]                                         R_xy(z) = R*_yx(1/z*)
|r_xy[k]| ≤ √(r_xx[0]r_yy[0]) ≤ ½(r_xx[0] + r_yy[0])        0 ≤ |γ_xy(e^{jω})| ≤ 1
Spectral representation of random sequences
Let us recall the following property of the Z-transform:

Z{h[n]} = H(z) ⇔ Z{h*[−n]} = H*(1/z*)

Then from the previous relations we have

R_yx(z) = H(z)R_xx(z)
R_yy(z) = H(z)H*(1/z*)R_xx(z)

In particular, considering the DTFT,

R_yx(e^{jω}) = H(e^{jω})R_xx(e^{jω})
R_yy(e^{jω}) = |H(e^{jω})|² R_xx(e^{jω})

and

R_yx(e^{jω}) = R*_xy(e^{jω})
Properties of the second-order moments

Definitions and properties of second-order moments of random sequences for discrete-time LTI systems are listed in the following table.

Time domain                          Z-domain                           Frequency domain
y[n] = h[n] ∗ x[n]                   Y(z) = H(z)X(z)                    Y(e^{jω}) = H(e^{jω})X(e^{jω})
r_yx[k] = h[k] ∗ r_xx[k]             R_yx(z) = H(z)R_xx(z)              R_yx(e^{jω}) = H(e^{jω})R_xx(e^{jω})
r_xy[k] = h*[−k] ∗ r_xx[k]           R_xy(z) = H*(1/z*)R_xx(z)          R_xy(e^{jω}) = H*(e^{jω})R_xx(e^{jω})
r_yy[k] = h[k] ∗ r_xy[k]             R_yy(z) = H(z)R_xy(z)              R_yy(e^{jω}) = H(e^{jω})R_xy(e^{jω})
r_yy[k] = h*[−k] ∗ h[k] ∗ r_xx[k]    R_yy(z) = H(z)H*(1/z*)R_xx(z)      R_yy(e^{jω}) = |H(e^{jω})|² R_xx(e^{jω})
White noise
White noise (WN) is a zero-mean sequence with auto-correlation function

r[k] = σ²_x δ[k]

where δ[k] is the unit impulse. Its power spectral density is then

R_xx(e^{jω}) = σ²_x

so a white noise has a constant spectrum at every frequency.
The output of an LTI system whose input is a WN has the following PSD:

R_yy(e^{jω}) = |H(e^{jω})|² σ²_x

An important WN is that with a Gaussian pdf, known as white Gaussian noise (WGN), with zero mean and variance σ²_x, denoted by x[n] ∼ N(0, σ²_x). It has the property

r[n − m] = E{x[n]x[m]} = 0,  for m ≠ n
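The defining property r[k] = σ²δ[k] can be verified on a generated WGN sequence with a sample acf. A sketch with NumPy (σ² = 2 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(11)
N = 500_000
sigma2 = 2.0
x = rng.normal(scale=np.sqrt(sigma2), size=N)   # WGN, x[n] ~ N(0, 2)

def acf(x, k):
    """Sample estimate of r[k] for a real zero-mean sequence."""
    return np.dot(x[k:], x[:len(x) - k]) / (len(x) - k)

print(acf(x, 0))                        # ≈ σ² = 2 (the mean power)
print([acf(x, k) for k in (1, 2, 3)])   # ≈ 0 at every non-zero lag
```

A flat sample acf like this is exactly what makes the PSD constant: the DTFT of σ²δ[k] is σ² at every frequency.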
White noise
White noise can have different pdfs, not only Gaussian: for example a uniform pdf or a Cauchy pdf, etc. The figure shows these WNs.

Remark
Differently from the uniform distribution, the support of the Gaussian or Cauchy distribution is not limited. These distributions can therefore assume any amplitude value, albeit with low probability.
References
A. Papoulis. Probability, Random Variables and Stochastic Processes. 3rd ed., McGraw-Hill, 1991.

D.G. Manolakis, V.K. Ingle, S.M. Kogon. Statistical and Adaptive Signal Processing. Artech House, Norwood, MA, 2005.

T. Kailath. Linear Systems. Prentice Hall, Englewood Cliffs, NJ, 1980.

R.A. Fisher. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society, A, 222: 309-368, 1922.