Lecture 09 - Parametric SE
TRANSCRIPT (retrieved 8/8/2019 from http://slidepdf.com/reader/full/lecture-09-parametric-se)
Lecture 09: Spectrum estimation – parametric methods
Instructor:
Dr. Gleb V. Tcheslavski
Contact:
Office Hours:
Room 2030
Class web site:
Summer I 2009, ELEN 5301 Adv. DSP and Modeling
http://www.ee.lamar.edu/gleb/adsp/Index.htm
Material comes from Hayes; image was downloaded from http://www.bubbleology.com/seeps/SeepBubbleMusic.html
Introduction
One of the limitations of the non-parametric spectral analysis methods is that they do not incorporate information that may be available about the process. For example, over short, quasi-stationary time intervals a speech waveform is often well described by an autoregressive model. For such intervals, the spectrum of the process would be
P_x(e^{j\omega}) = \frac{|b_0|^2}{\left| 1 + \sum_{k=1}^{p} a_k e^{-jk\omega} \right|^2}    (9.2.1)

while the periodogram, for instance, would return an estimate consistent with a
moving average process:
\hat{P}_{per}(e^{j\omega}) = \frac{1}{N} \left| X(e^{j\omega}) \right|^2 = \sum_{k=-N+1}^{N-1} \hat{r}_x(k) e^{-jk\omega}    (9.2.2)

Therefore, incorporating a process model into the spectrum estimation algorithm could lead to a more accurate and higher-resolution estimate.
Introduction
With a parametric approach, the first step is to select an appropriate model for the process. This selection may be based on a priori knowledge about how the process was generated, or on experimental results indicating that a particular model fits the data well. The commonly used models are autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA).
The next step (once the model is selected) is to estimate the model parameters
from the given data.
The final step is to estimate the power spectrum by using the estimated parameters.
Although it is possible to significantly improve the resolution of the spectral estimate with a parametric method, it is important to realize that, unless the model that is used is appropriate for the analyzed process, inaccurate or misleading estimates may be obtained.

[Figure: Blackman-Tukey and AR spectral estimates of x_n = 5\sin(0.45\pi n) + 5\sin(0.55\pi n) + w_n, two sinusoids in white noise.]
Signal modeling
Assume that a time-domain signal x_n consisting of N data samples x_0, …, x_{N-1} is to be transmitted across a communication channel. If the signal can be described by a small set of model parameters, it would be more efficient to transmit or store these parameters instead of the signal values. The signal would then be reconstructed from the parameters.
We view the signal x n as the response of a linear time-invariant filter to an input v n .
Therefore, our goal is to find the filter H(z) and the input v_n that make the output “as close as possible” to x_n.
We will use the rational model for
the filter in the form
H(z) = \frac{B_q(z)}{A_p(z)} = \frac{\sum_{k=0}^{q} b_k z^{-k}}{1 + \sum_{k=1}^{p} a_k z^{-k}}    (9.4.1)

Therefore, the signal model will include the filter coefficients a_k and b_k and a description of the input signal v_n.
Signal modeling
To keep the model as efficient as possible, the filter input is typically either a fixed
signal, such as a unit sample δ n , or a signal that can be represented parametrically…
In modeling stochastic processes, the input to the filter is usually assumed to be
white noise.
Once the model type is selected and the input to the filter is specified, the set of the
filter coefficients can be determined such that the filter output will approximate the
modeled signal according to some goodness measure.
Autoregressive SE
An autoregressive (AR) process x n may be represented as the output of an all-pole
filter that is driven by unit variance white noise:
H(z) = \frac{b_0}{1 + \sum_{k=1}^{p} a_k z^{-k}}    (9.6.1)

The power spectrum of a p-th-order AR process is

P_x(e^{j\omega}) = \frac{|b_0|^2}{\left| 1 + \sum_{k=1}^{p} a_k e^{-jk\omega} \right|^2}    (9.6.2)
Therefore, if the parameters b_0 and a_k can be estimated from the data, the estimate of the power spectrum may be formed using
\hat{P}_{AR}(e^{j\omega}) = \frac{|\hat{b}_0|^2}{\left| 1 + \sum_{k=1}^{p} \hat{a}_k e^{-jk\omega} \right|^2}    (9.6.3)

Clearly, the accuracy of the estimate will depend on how accurately the model parameters may be estimated and on whether an AR model is applicable.
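As a small illustration (a minimal numpy sketch, not from the slides), (9.6.3) can be evaluated on a frequency grid once the estimates \hat{b}_0 and \hat{a}_k are available; real coefficients are assumed:

```python
import numpy as np

def ar_spectrum(b0, a, n_freq=512):
    """Evaluate (9.6.3): |b0|^2 / |1 + sum_k a_k e^{-jkw}|^2 on w in [0, pi]."""
    a = np.asarray(a, dtype=float)
    w = np.linspace(0.0, np.pi, n_freq)
    k = np.arange(1, len(a) + 1)
    # A(e^{jw}) = 1 + sum_{k=1}^{p} a_k e^{-jkw}
    A = 1.0 + np.exp(-1j * np.outer(w, k)) @ a
    return w, np.abs(b0) ** 2 / np.abs(A) ** 2

# AR(1) with a_1 = -0.5: the spectrum peaks at w = 0
w, P = ar_spectrum(1.0, [-0.5])
```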
Autoregressive SE: the Autocorrelation method
The autocorrelation sequence of an AR process satisfies the Yule-Walker equations

r_x(k) + \sum_{l=1}^{p} a_l r_x(k-l) = |b_0|^2 \delta(k)    (9.7.1)

There is a variety of techniques that may be used to estimate the all-pole (AR) parameters. Once the parameters are estimated, the power spectrum estimate is formed according to (9.6.3).

1. Autocorrelation method:

The AR coefficients can be found by solving the autocorrelation normal equations:
\begin{bmatrix} r_x(0) & r_x^*(1) & r_x^*(2) & \cdots & r_x^*(p) \\ r_x(1) & r_x(0) & r_x^*(1) & \cdots & r_x^*(p-1) \\ r_x(2) & r_x(1) & r_x(0) & \cdots & r_x^*(p-2) \\ \vdots & & & & \vdots \\ r_x(p) & r_x(p-1) & r_x(p-2) & \cdots & r_x(0) \end{bmatrix} \begin{bmatrix} 1 \\ a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} \varepsilon_p \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}    (9.7.2)
Autoregressive SE: the Autocorrelation method
Here the autocorrelation estimates (the hats are omitted) are computed as:
r_x(k) = \frac{1}{N} \sum_{n=0}^{N-1-k} x_{n+k} x_n^* \; ; \quad k = 0, 1, \ldots, p    (9.8.1)
Solving (9.7.2) for the coefficients a k and setting
|b_0|^2 = \varepsilon_p = r_x(0) + \sum_{k=1}^{p} a_k r_x^*(k)    (9.8.2)

produces an estimate for the power spectrum sometimes referred to as the Yule-Walker
method, which is equivalent to the maximum entropy method. The only difference is that the Y-W method assumes that x_n is an AR process, while the ME method assumes that x_n is Gaussian.
Since the autocorrelation matrix Rx is Toeplitz, the Levinson-Durbin recursion may
be used to solve these equations. Furthermore, if the autocorrelation matrix is
positive definite, the roots of A(z) will lie inside the unit circle and the model will be stable.
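The full chain (9.8.1) → (9.7.2) → (9.8.2) can be sketched in a few lines of numpy (illustrative, not from the slides; real-valued data, and a plain linear solve stands in for the Levinson-Durbin recursion):

```python
import numpy as np

def ar_autocorrelation_method(x, p):
    """Yule-Walker / autocorrelation method: returns (a, b0_sq)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # biased autocorrelation estimates (9.8.1)
    r = np.array([np.dot(x[k:], x[:N - k]) / N for k in range(p + 1)])
    # Toeplitz normal equations (9.7.2); solve() stands in for Levinson-Durbin
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, -r[1:])
    b0_sq = r[0] + np.dot(a, r[1:])   # (9.8.2) for real data
    return a, b0_sq

# sanity check on an AR(1) process x_n = 0.5 x_{n-1} + w_n (a_1 should be near -0.5)
rng = np.random.default_rng(0)
wn = rng.standard_normal(20000)
x = np.zeros_like(wn)
x[0] = wn[0]
for n in range(1, len(wn)):
    x[n] = 0.5 * x[n - 1] + wn[n]
a, b0_sq = ar_autocorrelation_method(x, 1)
```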
Autoregressive SE: the Autocorrelation method
However, since the autocorrelation method applies a rectangular window to the
data, the data will be extrapolated with zeros. As a consequence, the resolution of…
The autocorrelation method is generally not used for short data records.
There is an artifact that may be observed with the autocorrelation method called spectral line splitting: a single spectral peak may appear as two separate distinct peaks. This usually happens when x_n is overmodeled, i.e., when p is too large. Consider, for example, an AR(2) process described by the following difference equation:
x_n = -a \, x_{n-2} + w_n \; ; \quad 0 < a < 1    (9.9.1)
where w n is unit variance white noise. The true spectrum has a single peak at ω =
π /2.
The data record of length N = 64 was generated according to (9.9.1) and two
estimates were found using model orders p = 4 and p = 12…
Autoregressive SE: the Autocorrelation method
[Figure: autocorrelation-method spectra for p = 4 and p = 12; line splitting is visible for p = 12.]
We notice that the autocorrelation estimate in (9.8.1) is biased. A variation of the
autocorrelation method is to use the unbiased estimate:
r_x(k) = \frac{1}{N-k} \sum_{n=0}^{N-1-k} x_{n+k} x_n^* \; ; \quad k = 0, 1, \ldots, p    (9.10.1)

However, in this case, the autocorrelation matrix is not guaranteed to be positive definite, and the variance of the spectrum estimate tends to become large when the matrix is ill-conditioned. Therefore, biased autocorrelation estimates are generally preferred.
Autoregressive SE: the Covariance method
2. Covariance method:

The covariance method requires finding the solution to the set of linear equations:
\begin{bmatrix} r_x(1,1) & r_x(2,1) & r_x(3,1) & \cdots & r_x(p,1) \\ r_x(1,2) & r_x(2,2) & r_x(3,2) & \cdots & r_x(p,2) \\ r_x(1,3) & r_x(2,3) & r_x(3,3) & \cdots & r_x(p,3) \\ \vdots & & & & \vdots \\ r_x(1,p) & r_x(2,p) & r_x(3,p) & \cdots & r_x(p,p) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_p \end{bmatrix} = - \begin{bmatrix} r_x(0,1) \\ r_x(0,2) \\ r_x(0,3) \\ \vdots \\ r_x(0,p) \end{bmatrix}    (9.11.1)
where
r_x(k,l) = \sum_{n=p}^{N-1} x_{n-l} x_{n-k}^*    (9.11.2)

Unlike the linear equations in the autocorrelation method, the equations in (9.11.1) are not Toeplitz.
Autoregressive SE: the Covariance method
The advantage of the covariance method over the autocorrelation method is that no windowing of the data is required in the formation of the autocorrelation estimates r_x(k,l). Therefore, for short data records, the covariance method generally produces higher-resolution spectrum estimates than the autocorrelation method.
When the data length is large compared to the model order, N >> p, the effect of the data window becomes small and the difference between the two approaches becomes negligible.
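A minimal numpy sketch of the covariance method for real-valued data (illustrative, not from the slides): build r_x(k,l) per (9.11.2) and solve the non-Toeplitz system (9.11.1):

```python
import numpy as np

def ar_covariance_method(x, p):
    """Covariance method: no data windowing; solves (9.11.1) for a_1..a_p."""
    x = np.asarray(x, dtype=float)
    n = np.arange(p, len(x))
    def r(k, l):                      # r_x(k, l) per (9.11.2), real data
        return np.dot(x[n - l], x[n - k])
    R = np.array([[r(k, l) for k in range(1, p + 1)] for l in range(1, p + 1)])
    rhs = -np.array([r(0, l) for l in range(1, p + 1)])
    return np.linalg.solve(R, rhs)

# AR(1) sanity check: a_1 should be near -0.5
rng = np.random.default_rng(0)
wn = rng.standard_normal(20000)
x = np.zeros_like(wn)
x[0] = wn[0]
for m in range(1, len(wn)):
    x[m] = 0.5 * x[m - 1] + wn[m]
a = ar_covariance_method(x, 1)
```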
3. Modified Covariance method:
The modified covariance method is similar to the covariance method in that no
window is applied to the data. However, instead of finding an AR model that
minimizes the sum of the squares of the forward prediction error, the modifiedcovariance method minimizes the sum of the squares of the forward and backward
prediction errors.
Autoregressive SE: the Modified Covariance method
The AR parameters in the modified covariance method are found by solving a set of
linear equations of the form given in (9.11.1) with
r_x(k,l) = \sum_{n=p}^{N-1} \left[ x_{n-l} x_{n-k}^* + x_{n-p+l} x_{n-p+k}^* \right]    (9.13.1)
We notice that the autocorrelation matrix is not Toeplitz.
Unlike the autocorrelation method, the modified covariance method appears to give statistically stable spectral estimates with high resolution. Furthermore, although the estimation of sinusoids in noise with this method is characterized by a shifting of the peaks from their true locations due to additive noise, this shifting is less pronounced than with other AR estimators.

Also, the peak location tends to be less sensitive to phase. Finally, unlike the previous methods, the modified covariance method is not subject to line splitting.
Autoregressive SE: the Burg algorithm
4. Burg method:
As with the modified covariance method, the Burg algorithm finds a set of AR
parameters that minimizes the sum of the squares of the forward and backwardprediction errors. However, in order to assure that the model is stable, this
minimization is performed sequentially with respect to the reflection coefficients.
Although less accurate than the modified covariance method, the AR estimates are more accurate than those obtained with the autocorrelation method (since no window is applied to the data).
However, the Burg algorithm is subject to spectral line splitting, and the peak locations are highly dependent upon the phases of the sinusoids.
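The Burg recursion can be sketched as follows (illustrative, real-valued data; not from the slides): at each stage the reflection coefficient minimizing the summed forward and backward error powers has a closed form, the prediction errors are updated in lattice fashion, and |k| ≤ 1 at every stage guarantees a stable model:

```python
import numpy as np

def burg_ar(x, p):
    """Burg's method: returns the AR polynomial [1, a_1..a_p] and error power."""
    x = np.asarray(x, dtype=float)
    f = x.copy()                      # forward prediction errors
    b = x.copy()                      # backward prediction errors
    a = np.array([1.0])
    E = np.mean(x ** 2)               # zeroth-order error power
    for _ in range(p):
        fk, bk = f[1:], b[:-1]
        # reflection coefficient minimizing the summed fwd + bwd error powers
        k = -2.0 * np.dot(fk, bk) / (np.dot(fk, fk) + np.dot(bk, bk))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = fk + k * bk, bk + k * fk
        E *= 1.0 - k ** 2
    return a, E

# AR(1) sanity check: x_n = 0.5 x_{n-1} + w_n  =>  a near [1, -0.5], E near 1
rng = np.random.default_rng(0)
wn = rng.standard_normal(20000)
x = np.zeros_like(wn)
x[0] = wn[0]
for n in range(1, len(wn)):
    x[n] = 0.5 * x[n - 1] + wn[n]
a, E = burg_ar(x, 1)
```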
Autoregressive SE: Examples
It is not easy to provide a set of examples illustrating all properties of the AR estimators. We consider the AR(4) process generated according to the difference equation
x_n = 2.7377 x_{n-1} - 3.7476 x_{n-2} + 2.6293 x_{n-3} - 0.9224 x_{n-4} + w_n    (9.15.1)
where w n is unit variance white Gaussian noise.
The filter generating x_n has two pairs of complex poles at

z = 0.98 e^{\pm j 0.2\pi} \; , \qquad z = 0.98 e^{\pm j 0.3\pi}
Using data records of length N = 128, an ensemble of 50 spectrum estimates were computed by the methods we discussed…
Autoregressive SE: Examples
1. Autocorrelation method – does not resolve peaks, biased, higher variance
[Figure: 50 spectral estimates; their average (solid) and the true spectrum (dashed).]
Autoregressive SE: Examples
2. Covariance method – unbiased, small variance
[Figure: 50 spectral estimates; their average (solid) and the true spectrum (dashed).]
Autoregressive SE: Examples
3. Modified covariance method – unbiased, small variance
[Figure: 50 spectral estimates; their average (solid) and the true spectrum (dashed).]
Autoregressive SE: Examples
4. Burg method – unbiased, small variance
[Figure: 50 spectral estimates; their average (solid) and the true spectrum (dashed).]
Autoregressive SE: Model Order selection
AR model order p selection is an important problem.
If the order that is used is too small, the resulting spectral estimate will be smoothed and will have poor resolution. On the other hand, if the model order is too large, the spectral estimate may contain spurious peaks and lead to spectral line splitting (in addition to a higher computational load).
In selecting the order, one approach would be to increase the order until the modeling error is minimized. The difficulty with this is that the error is a monotonically nonincreasing function of the model order p. This problem may be mitigated by incorporating a penalty function that increases with the model order. Several proposed criteria include a penalty term that increases linearly with p, in the form
C(p) = N \log \varepsilon_p + p \, f(N)    (9.20.1)
where ε p is the modeling error, N is the data record length, and f (N ) is a constant
that may depend on N . The idea is to select a value of p that minimizes C (p ).
Autoregressive SE: Model Order selection
Two criteria of this form are the Akaike Information Criterion:

AIC(p) = N \log \varepsilon_p + 2p    (9.21.1)

and the minimum description length criterion:

MDL(p) = N \log \varepsilon_p + p \log N    (9.21.2)
It has been observed that the AIC gives an estimate for the order p that is too small when applied to non-AR processes and that it tends to overestimate the order as N increases.

On the other hand, the MDL is a consistent order estimator in the sense that it converges to the true order as N increases.
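Given a sequence of modeling errors ε_p from any AR estimator, the criteria are straightforward to evaluate; a toy sketch with hypothetical error values (illustrative only):

```python
import numpy as np

def aic(eps_p, N, p):
    return N * np.log(eps_p) + 2 * p          # (9.21.1)

def mdl(eps_p, N, p):
    return N * np.log(eps_p) + p * np.log(N)  # (9.21.2)

# hypothetical modeling errors that stop improving noticeably after p = 4
eps = [1.0, 0.5, 0.30, 0.20, 0.10, 0.099, 0.098]
N = 100
p_aic = min(range(len(eps)), key=lambda p: aic(eps[p], N, p))
p_mdl = min(range(len(eps)), key=lambda p: mdl(eps[p], N, p))
# both criteria select p = 4 here: the tiny error decrease beyond p = 4
# does not offset the order penalty
```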
Autoregressive SE: Model Order selection
Two other popular criteria are Akaike’s Final Prediction Error:

FPE(p) = \varepsilon_p \, \frac{N + p + 1}{N - p - 1}    (9.22.1)

and Parzen’s Criterion of Autoregressive Transfer function:

CAT(p) = \frac{1}{N} \sum_{j=1}^{p} \frac{N - j}{N \varepsilon_j} - \frac{N - p}{N \varepsilon_p}    (9.22.2)
These criteria should only be used as “indicators” of the model order.

Finally, since these criteria depend on the prediction error ε_p, the estimate for the model order will depend on the modeling technique used. The estimated order for the autocorrelation method may differ from one evaluated for Burg.
Moving Average SE
A moving average process may be generated by filtering unit variance white noise w_n with an FIR filter as follows:
x_n = \sum_{k=0}^{q} b_k w_{n-k}    (9.23.1)

The power spectrum of an MA process is
P_x(e^{j\omega}) = \left| \sum_{k=0}^{q} b_k e^{-jk\omega} \right|^2    (9.23.2)
The power spectrum can also be written in terms of the autocorrelation function:
P_x(e^{j\omega}) = \sum_{k=-q}^{q} r_x(k) e^{-jk\omega}    (9.23.3)
Moving Average SE
Here r x (k ) is related to the filter coefficients b k through the Yule-Walker equations:
r_x(k) = \sum_{l=0}^{q-k} b_{l+k} b_l^* \; ; \quad k = 0, 1, \ldots, q    (9.24.1)

with

r_x(-k) = r_x^*(k) \; ; \quad r_x(k) = 0 \ \text{for} \ |k| > q    (9.24.2)

With an MA model, the spectrum may be estimated in one of two ways.

1. Since the autocorrelation sequence of an MA process is finite in length, then
\hat{P}_{MA}(e^{j\omega}) = \sum_{k=-q}^{q} \hat{r}_x(k) e^{-jk\omega}    (9.24.3)

where \hat{r}_x(k) is a suitable estimate of the autocorrelation sequence.
Moving Average SE
Although (9.24.3) is equivalent to the Blackman-Tukey estimate using a rectangular window, there is a subtle difference in the assumptions behind these two estimators. In the MA method, x_n is assumed to be an MA(q) process, so its autocorrelation sequence is zero for |k| > q. Thus, if an unbiased estimate of the autocorrelation sequence is used for |k| ≤ q, then
E\left\{ \hat{P}_{MA}(e^{j\omega}) \right\} = P_x(e^{j\omega})    (9.25.1)

Therefore, the power spectrum estimate is unbiased.
The Blackman-Tukey method, on the other hand, makes no assumptions about x_n and may be applied to any type of process. Therefore, due to the windowing of the autocorrelation sequence, unless x_n is an MA process, the Blackman-Tukey estimate will be biased.
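The MA estimate (9.24.3) with the unbiased autocorrelation estimator can be sketched as follows (illustrative numpy, real-valued data; not from the slides):

```python
import numpy as np

def ma_spectrum(x, q, n_freq=512):
    """Correlogram truncated at lag q, per (9.24.3), using unbiased r_x(k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = np.array([np.dot(x[k:], x[:N - k]) / (N - k) for k in range(q + 1)])
    w = np.linspace(0.0, np.pi, n_freq)
    k = np.arange(1, q + 1)
    # r_x(-k) = r_x(k) for real data, so the sum over k = -q..q folds into cosines
    P = r[0] + 2.0 * np.sum(r[1:, None] * np.cos(np.outer(k, w)), axis=0)
    return w, P

# white noise: the q = 4 correlogram should be roughly flat at 1
rng = np.random.default_rng(1)
x = rng.standard_normal(50000)
w, P = ma_spectrum(x, 4)
```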
Moving Average SE: Durbin’s method
2. Estimate the MA parameters b_k from x_n and then substitute these estimates into

\hat{P}_{MA}(e^{j\omega}) = \left| \sum_{k=0}^{q} \hat{b}_k e^{-jk\omega} \right|^2    (9.26.1)
For example, the MA parameters may be estimated by Durbin’s method:
1) Find a high-order all-pole (AR) model A_p(z) for the MA process;

2) Consider the coefficients a_k to be a new “data set” and find the coefficients of a q-th order MA model by modeling the sequence a_k as a q-th order AR process.
Therefore, let x n be an MA process of order q with
B_q(z) = \sum_{k=0}^{q} b_k z^{-k}    (9.26.2)
Moving Average SE: Durbin’s method
such that

x_n = \sum_{k=0}^{q} b_k w_{n-k}    (9.27.1)
where w n is white noise. Suppose next that we find a p th order all-pole model for x n
and that p is large enough so that
B_q(z) \approx \frac{1}{A_p(z)} = \frac{1}{a_0 + \sum_{k=1}^{p} a_k z^{-k}}    (9.27.2)
Therefore,

A_p(z) \approx \frac{1}{B_q(z)} = \frac{1}{b_0 + \sum_{k=1}^{q} b_k z^{-k}}    (9.27.3)

and 1/B_q(z) represents a q-th order all-pole (AR) model for the “data” a_k.
Moving Average SE: Durbin’s method
The following Matlab code illustrates Durbin’s method:

function b = durbin(x,p,q)
% DURBIN  MA(q) parameter estimation via Durbin's method.
x = x(:);
if p >= length(x), error('Model order too large'), end
[a, epsilon] = armcov(x,p);                % high-order all-pole model for x
[b, epsilon] = armcov(a/sqrt(epsilon),q);  % AR(q) fit to the coefficients a_k
b = b/sqrt(epsilon);                       % normalization (a_0 is returned as 1)
Durbin’s method is implemented here with the modified covariance AR algorithm; however, any other AR method may be used. Coefficient normalization is implemented since a_0 will be evaluated as 1.

Alternatively, spectral factorization could be used instead of Durbin’s method.
Moving Average SE: Example
We consider a 4th-order MA process generated according to the difference equation:
x_n = w_n - 1.5857 w_{n-1} + 1.9208 w_{n-2} - 1.5229 w_{n-3} + 0.9224 w_{n-4}    (9.29.1)
where w_n is a unit variance white Gaussian process. The filter generating x_n has the following four complex zeros:

z = 0.98 e^{\pm j 0.2\pi} \; , \qquad z = 0.98 e^{\pm j 0.5\pi}    (9.29.2)
Using the data, an ensemble of 50 spectrum estimates was computed using the Blackman-Tukey method with a rectangular window extending from -4 to 4. Then, MA spectra were estimated via Durbin’s method with q = 4 and p = 32.
Moving Average SE: Example
1. Blackman-Tukey method – biased.
[Figure: 50 spectral estimates; their average (solid) and the true spectrum (dashed).]
Moving Average SE: Example
2. Durbin’s method – less biased
[Figure: 50 spectral estimates; their average (solid) and the true spectrum (dashed).]
Autoregressive Moving Average SE
An autoregressive moving average (ARMA) process has a power spectrum of the form:

P_x(e^{j\omega}) = \frac{\left| \sum_{k=0}^{q} b_k e^{-jk\omega} \right|^2}{\left| 1 + \sum_{k=1}^{p} a_k e^{-jk\omega} \right|^2}    (9.32.1)

which may be generated by filtering unit variance white noise with a filter having both poles and zeros:
H(z) = \frac{B_q(z)}{A_p(z)} = \frac{\sum_{k=0}^{q} b_k z^{-k}}{1 + \sum_{k=1}^{p} a_k z^{-k}}    (9.32.2)
Autoregressive Moving Average SE
The spectrum of an ARMA(p,q ) process may be estimated using estimates of the
model parameters:
\hat{P}_{ARMA}(e^{j\omega}) = \frac{\left| \sum_{k=0}^{q} \hat{b}_k e^{-jk\omega} \right|^2}{\left| 1 + \sum_{k=1}^{p} \hat{a}_k e^{-jk\omega} \right|^2}    (9.33.1)
The AR coefficients \hat{a}_k may be estimated from the modified Yule-Walker equations either directly or by using a least squares approach. Once the AR coefficients have been estimated, an MA modeling technique, such as Durbin’s method, may be used to estimate the MA parameters \hat{b}_k.
Autoregressive Moving Average SE: MYWE
Modified Yule-Walker Equation (method)
The autocorrelation sequence of an ARMA process satisfies the Yule-Walker equations:
r_x(k) + \sum_{l=1}^{p} a_l r_x(k-l) = c_k    (9.34.1)
where c_k is the convolution of b_k and h_{-k}^*:
c_k = b_k * h_{-k}^* = \sum_{l=0}^{q-k} b_{l+k} h_l^*    (9.34.2)
and r x (k ) is the autocorrelation:
r_x(k) = E\left\{ x_n x_{n-k}^* \right\}    (9.34.3)
Autoregressive Moving Average SE: MYWE
Since h_n is assumed to be causal, c_k = 0 for k > q, and the Yule-Walker equations for k > q are a function only of the coefficients a_k:

r_x(k) + \sum_{l=1}^{p} a_l r_x(k-l) = 0 \; ; \quad k > q    (9.35.1)

which, in matrix form for k = q+1, q+2, …, q+p, is
\begin{bmatrix} r_x(q) & r_x(q-1) & \cdots & r_x(q-p+1) \\ r_x(q+1) & r_x(q) & \cdots & r_x(q-p+2) \\ \vdots & & & \vdots \\ r_x(q+p-1) & r_x(q+p-2) & \cdots & r_x(q) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = - \begin{bmatrix} r_x(q+1) \\ r_x(q+2) \\ \vdots \\ r_x(q+p) \end{bmatrix}    (9.35.2)
(9.35.2) is a set of p linear equations in the p unknowns a_k. These equations are called the Modified Yule-Walker equations and allow finding the coefficients a_k of the filter H(z) from the autocorrelation sequence r_x(k) for k = q, q+1, …, q+p. This is the Modified Yule-Walker method. In practice, autocorrelation estimates are used in place of the true values.
Autoregressive Moving Average SE: MYWE
Once the AR coefficients a k are estimated, the MA parameters b k can be found
using one of several approaches…
1) x_n is an ARMA(p,q) process with power spectrum
P_x(z) = \frac{B_q(z) B_q^*(1/z^*)}{A_p(z) A_p^*(1/z^*)}    (9.36.1)

Therefore, filtering x_n with an LTI filter having the system function A_p(z) produces an MA(q) process y_n with the power spectrum
P_y(z) = B_q(z) B_q^*(1/z^*)    (9.36.2)

Therefore, the MA parameters b_k may be estimated from y_n, for instance, by Durbin’s method.
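Forming y_n explicitly is just an FIR filtering of x_n with A_p(z); a short sketch (illustrative, real-valued data). The residual y_n can then be handed to an MA estimator such as Durbin’s method:

```python
import numpy as np

def ar_residual(x, a):
    """y_n = x_n + sum_{k=1}^{p} a_k x_{n-k}: filter x with A_p(z)."""
    x = np.asarray(x, dtype=float)
    p = len(a)
    y = x[p:].copy()
    for k, ak in enumerate(a, start=1):
        y += ak * x[p - k: len(x) - k]
    return y

# for an AR(1) process x_n = 0.5 x_{n-1} + w_n, filtering with A(z) = 1 - 0.5 z^{-1}
# should approximately recover the unit variance driving noise w_n
rng = np.random.default_rng(3)
wn = rng.standard_normal(20000)
x = np.zeros_like(wn)
x[0] = wn[0]
for n in range(1, len(wn)):
    x[n] = 0.5 * x[n - 1] + wn[n]
y = ar_residual(x, [-0.5])
```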
Autoregressive Moving Average SE: MYWE
2) Alternatively, explicit filtering can be avoided and b k may be found as follows:
Given a_k, the Yule-Walker equations may be used to find the values of the sequence c_k for k = 0, 1, …, q:
\begin{bmatrix} r_x(0) & r_x^*(1) & \cdots & r_x^*(p) \\ r_x(1) & r_x(0) & \cdots & r_x^*(p-1) \\ \vdots & & & \vdots \\ r_x(q) & r_x(q-1) & \cdots & r_x(q-p) \end{bmatrix} \begin{bmatrix} 1 \\ a_1 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_q \end{bmatrix}    (9.37.1)
which may be written in matrix form as

R_x a_p = c_q    (9.37.2)
Since c_k = 0 for k > q, the sequence c_k is then known for all k ≥ 0. We will denote the z-transform of this causal, or positive-time, part of c_k by [C(z)]_+:
Autoregressive Moving Average SE: MYWE
\left[ C(z) \right]_+ = \sum_{k=0}^{\infty} c_k z^{-k}    (9.38.1)
Similarly, we will denote the anticausal, or negative-time, part by [C(z)]_-:
\left[ C(z) \right]_- = \sum_{k=-\infty}^{-1} c_k z^{-k} = \sum_{k=1}^{\infty} c_{-k} z^{k}    (9.38.2)

Recall that c_k is the convolution of b_k with h_{-k}^*. Therefore,

C_q(z) = B_q(z) H^*(1/z^*) = \frac{B_q(z) B_q^*(1/z^*)}{A_p^*(1/z^*)}    (9.38.3)
Multiplying C_q(z) by A_p^*(1/z^*), we arrive at

P_y(z) \equiv C_q(z) A_p^*(1/z^*) = B_q(z) B_q^*(1/z^*)    (9.38.4)

which is the power spectrum of an MA(q) process.
Autoregressive Moving Average SE: MYWE
Since a_k is zero for k < 0, A_p^*(1/z^*) contains only positive powers of z.
Therefore, with
P_y(z) = C_q(z) A_p^*(1/z^*) = \left[ C_q(z) \right]_+ A_p^*(1/z^*) + \left[ C_q(z) \right]_- A_p^*(1/z^*)    (9.39.1)
since both [C_q(z)]_- and A_p^*(1/z^*) contain only positive powers of z, the causal part of P_y(z) equals
\left[ P_y(z) \right]_+ = \left[ \left[ C_q(z) \right]_+ A_p^*(1/z^*) \right]_+    (9.39.2)

Thus, although c_k is unknown for k < 0, the causal part of P_y(z) may be computed
from the causal part of c_k and the AR coefficients a_k. Using the conjugate symmetry of P_y(z), we may then determine P_y(z) for all z. Finally, performing a spectral factorization of P_y(z),

P_y(z) = B_q(z) B_q^*(1/z^*)    (9.39.3)

produces the polynomial B_q(z).
Autoregressive Moving Average SE: MYWE
Alternatively, a set of extended Yule-Walker equations can be formed and evaluated…

However, there are no simple recipes for computing ARMA model parameters, especially for relatively large model orders.