Mat-2.108 Independent Research Project in Applied Mathematics
Frequency Offset Compensation in OFDM System Using
Sequential Monte Carlo Methods
Tuomas Nyblom - 45345n
10.10.2003
Contents
1 Introduction
2 OFDM
  2.1 OFDM System Model
3 Kalman Filters
  3.1 Kalman Filter
  3.2 Extended Kalman Filter
4 Particle Filters
  4.1 Sampling
  4.2 Resampling
5 Recursive channel and frequency offset estimation in OFDM
  5.1 Extended Kalman filter in OFDM
  5.2 Regular Kalman and particle filters in OFDM
6 Simulation
  6.1 Simulation Tools
  6.2 Simulation Parameters
  6.3 Simulation Results
7 Discussion
Chapter 1
Introduction
Dynamic processes in the real world typically involve random noise, and so do the observations we obtain from them. Usually we want to estimate the true system state from the observations, that is, to separate the randomness of the system itself from the randomness of the measurements.
Filtering is a method that uses the system's previous state and the current observation to estimate the current state. There are several different filters, and they approach this problem from very different points of view. In this report we focus on two families of filters: Kalman filters and particle filters [3].
Kalman filters are designed for problems that are linear and Gaussian; in these cases their performance is optimal. They also work well in nonlinear cases, where a linearized estimator is used, and even in non-Gaussian problems if the distribution is 'close enough' to Gaussian.
In this report we consider a wireless communication system based on Orthogonal Frequency Division Multiplexing (OFDM), a multicarrier technique used for example in Wireless Local Area Networks (WLAN) and Digital Video Broadcasting - Terrestrial (DVB-T). OFDM is a data transmission technique developed by R. W. Chang in the 1960s. It interests us because it is still widely deployed today, thanks to its bandwidth efficiency and immunity against intersymbol interference (ISI).
In these systems the noise may not be Gaussian, and the system model is not necessarily linear. That is why we also use another type of filter, the particle filter. Particle filters assume neither linearity nor Gaussian distributions, and they have been used successfully in wireless communications [5][6]. The main difference between the two families is that Kalman filters solve the problem analytically while particle filters find the solution by simulation.
In this report we investigate whether Sequential Monte Carlo methods such as particle filters work better than the commonly used Kalman filters in wireless multicarrier systems. We expect this to be the case because, as noted above, the system is neither linear nor Gaussian, which the Kalman filter requires to be optimal. The practical goal is to lower the bit error rate (BER) of the transmitted data.
In Chapter 2 we introduce the OFDM transmission model and derive its transfer equations. In Chapter 3 the different Kalman filters are introduced, and in Chapter 4 the behavior of particle filters is explained. In Chapter 5 the filters are combined with the communication system. Simulation results are shown in Chapter 6, and conclusions are drawn in Chapter 7.
Chapter 2
OFDM
Orthogonal frequency division multiplexing (OFDM) is a technique for digital data transmission in broadband communication systems such as WLAN, high-speed modems and video broadcasting. It is a refined version of multicarrier modulation (MCM), whose principle is to transmit a data stream by dividing it into parallel bit streams and modulating them onto individual narrowband subcarriers. OFDM differs from the original MCM in how the subcarriers are arranged: in OFDM the subcarriers are allowed to overlap because they are mutually orthogonal [1].
The OFDM technique offers significant advantages: bandwidth efficiency, immunity against intersymbol interference (ISI) and a simple receiver structure. Besides these benefits there are also drawbacks. One of the main drawbacks is its sensitivity to frequency offset, caused by oscillator inaccuracies and by the Doppler shift due to mobility. Because of this sensitivity it is important to estimate the frequency offset and the channel very carefully.
Figure 2.1: OFDM transmission chain.
The OFDM transmission model can be represented as a chain of operations, as shown in Figure 2.1. Each block in the figure represents an operation performed on the signal. The acronyms in the figure are: S/P serial-to-parallel conversion, IDFT inverse discrete Fourier transform, CP cyclic prefix, P/S parallel-to-serial conversion, and DFT discrete Fourier transform.
2.1 OFDM System Model
In this section we build a mathematical model of the OFDM transmission system. The model is discrete-time because the observations we obtain from the system are discrete-time: the system is divided into time blocks, and each block represents the system state at a specified time. These equations follow [2].
Let us consider the k-th block. As shown in Figure 2.1, the first operation is the inverse discrete Fourier transform (IDFT), which can be written as:

x(k) = F_N a(k),   (2.1)

where F_N is the N × N IDFT matrix, N is the total number of subcarriers and a(k) is the N × 1 complex vector of transmitted symbols. The next block in the figure introduces the frequency offset into the model. This operation can be represented mathematically as follows:

x_ε(k) = C_ε x(k),   (2.2)

where the frequency offset matrix C_ε is the N × N diagonal matrix C_ε = diag{exp(j2πnε/N)} with n = 0, ..., N − 1, and the normalized offset ε is chosen such that 0 ≤ ε < 1.
The next blocks are cyclic prefix (CP) insertion, transmission over the wireless channel, and cyclic prefix removal. In the figure these are enclosed by the dashed line, and they can be treated as a single operation:

r_ε(k) = H(k) C_ε x(k) + w(k),   (2.3)

where H(k) is an N × N circulant matrix whose (i, j) entry is h_{(i−j) mod N}:
H(k) =
| h_0      0        ...  0        h_{L-1}  ...  h_1  |
| h_1      h_0      ...  ...      ...      ...  h_2  |
| ...      ...      ...                         ...  |
| h_{L-1}  ...      h_1  h_0      0        ...  0    |
| 0        h_{L-1}  ...  h_1      h_0      ...  0    |
| ...               ...                    ...  ...  |
| 0        ...      0    h_{L-1}  ...      h_1  h_0  |   (2.4)
The channel taps h_l, l = 0, ..., L − 1, are assumed to be constant during one OFDM block and to vary independently from block to block.
Since H(k) is a circulant matrix, and circulant matrices implement circular convolution, it can be diagonalized by the IDFT operation. This means that the matrices F_N and D(k) satisfy the equation:

F_N^H H(k) F_N = D(k),   (2.5)
where

D(k) = diag{ Σ_{l=0}^{L−1} h_l(k) exp(−j2πnl/N) }_{n=0,...,N−1}   (2.6)

is the frequency response of the channel.
After the receiver's Fourier transform, equation (2.3) takes the form:

r̃_ε(k) = F_N^H H(k) C_ε F_N a(k) + w̃(k),   (2.7)

where w̃(k) = F_N^H w(k).
Multiplying equation (2.5) from the right by F_N^H gives:

F_N^H H(k) = D(k) F_N^H.   (2.8)

Substituting this into equation (2.7) yields:

r̃_ε(k) = D(k) (F_N^H C_ε F_N) a(k) + w̃(k).   (2.9)
Finally we can solve for a(k), the estimate of the transmitted block, as a function of r̃_ε(k):

a(k) = (F_N^H C_ε F_N)^{−1} D(k)^{−1} r̃_ε(k).   (2.10)
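The chain of equations (2.1)-(2.10) can be checked numerically. The following sketch (an illustrative example, not part of the original report; the values of N, L and ε are arbitrary choices) builds the circulant channel matrix, verifies the diagonalization property (2.5), and recovers the transmitted block via (2.10) in the noise-free case:

```python
import numpy as np

N, L = 8, 3                       # subcarriers and channel taps (assumed values)
rng = np.random.default_rng(0)

# Unitary IDFT matrix F_N (columns are the Fourier vectors)
F = np.fft.ifft(np.eye(N), axis=0) * np.sqrt(N)

# Random channel taps h_0, ..., h_{L-1}; circulant H with zero-padded first column
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)
col = np.concatenate([h, np.zeros(N - L)])
H = np.stack([np.roll(col, i) for i in range(N)], axis=1)

# Frequency offset matrix C_eps = diag{exp(j 2 pi n eps / N)}, equation (2.2)
eps = 0.25
C = np.diag(np.exp(1j * 2 * np.pi * np.arange(N) * eps / N))

# One BPSK block through the noise-free chain r = H C F a, cf. equation (2.3)
a = rng.choice([-1.0, 1.0], size=N)
r = H @ C @ F @ a

# Diagonalization check, equation (2.5): F^H H F should be diagonal
D = F.conj().T @ H @ F
off_diag = D - np.diag(np.diag(D))
print(np.max(np.abs(off_diag)))   # essentially zero (machine precision)

# Recovery via equation (2.10): a = (F^H C F)^{-1} D^{-1} F^H r
a_hat = np.linalg.solve(F.conj().T @ C @ F,
                        np.linalg.solve(np.diag(np.diag(D)), F.conj().T @ r))
print(np.max(np.abs(a_hat - a)))  # essentially zero without noise
```

Note that the diagonal of D is exactly the channel frequency response of equation (2.6).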
Chapter 3
Kalman Filters
The Kalman filter is one of the simplest and most widely used filtering methods. Originally it was designed for linear Gaussian systems. For nonlinear systems there is a variant called the extended Kalman filter (EKF), which is based on the regular Kalman filter: the idea is to linearize the system and then apply the same machinery as in the Kalman filter.
Kalman filters are recursive techniques. The filter computes the MMSE estimate of the present state from the previous estimate and the current observation. The solution is optimal in that sense, but only for linear Gaussian systems.
3.1 Kalman Filter
Consider the problem of estimating the states of a dynamic system. Assume that every state depends on its previous value and on random noise. This can be written as:

x(k) = f(x(k − 1), v(k), k),   (3.1)

where x(k) is the system state at t = k and v(k) ∼ N(0, Q(k)). Q is the covariance matrix of the state noise and is assumed to be known.
Assume also that we obtain observations of the system, which depend on the system state and on random noise as follows:

y(k) = h(x(k), w(k), k),   (3.2)

where y(k) is the observation at t = k and w(k) ∼ N(0, R(k)). R is the covariance matrix of the observation noise and is also assumed to be known.
In the linear case the system equations (3.1)-(3.2) reduce to the form

x(k) = A(k)x(k − 1) + v(k),   (3.3)
y(k) = H(k)x(k) + w(k),   (3.4)

where A(k) is the state transition matrix and H(k) is the observation matrix.
The Kalman filter also needs the distribution of the initial state. It is assumed to be Gaussian and is denoted:

x(0) ∼ N(x̄(0), P(0)),   (3.5)

where x̄(0) is the initial mean and P(0) is the initial covariance.
The next step is to predict the present state. If the system is Gaussian, the predicted state is also a Gaussian random variable, so it suffices to compute its mean and covariance:

x(k|k − 1) ∼ N(x̂(k|k − 1), P(k|k − 1)),   (3.6)

where

x̂(k|k − 1) = A(k)x̂(k − 1|k − 1)   (3.7)

and

P(k|k − 1) = A(k)P(k − 1|k − 1)A(k)^T + Q(k).   (3.8)

P(k|k − 1) is called the covariance matrix of the prediction error.
Finally we correct the predicted mean and covariance using the new observation. The corrected state is computed from the prediction and the observation as follows:

x̂(k|k) = x̂(k|k − 1) + K(k)(y(k) − H(k)x̂(k|k − 1)),   (3.9)

where K(k) is the Kalman gain matrix, defined as:

K(k) = P(k|k − 1)H(k)^T (H(k)P(k|k − 1)H(k)^T + R(k))^{−1}.   (3.10)

The covariance matrix of the correction error is

P(k|k) = P(k|k − 1) − K(k)H(k)P(k|k − 1).   (3.11)
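The predict/correct cycle (3.7)-(3.11) can be sketched in a few lines. The example below (an assumed toy model, not from the report: a scalar random walk observed in Gaussian noise, with arbitrary Q and R values) runs the filter and compares the filtered error to the raw observation error:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0]])            # state transition matrix
H = np.array([[1.0]])            # observation matrix
Q = np.array([[0.01]])           # state noise covariance
R = np.array([[0.5]])            # observation noise covariance

def kalman_step(x, P, y):
    # Prediction, equations (3.7)-(3.8)
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correction, equations (3.9)-(3.11)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new

x_true, x_est, P = np.zeros(1), np.zeros(1), np.eye(1)
errs_raw, errs_filt = [], []
for _ in range(500):
    x_true = A @ x_true + rng.normal(0, np.sqrt(Q[0, 0]), 1)
    y = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    x_est, P = kalman_step(x_est, P, y)
    errs_raw.append((y - x_true)[0] ** 2)      # observation error ~ R
    errs_filt.append((x_est - x_true)[0] ** 2) # filtered error, much smaller
print(np.mean(errs_raw), np.mean(errs_filt))
```

Because the state noise Q is much smaller than the observation noise R, the filter averages over many observations and its mean squared error falls well below R.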
3.2 Extended Kalman Filter
The extended Kalman filter is very similar to the Kalman filter. The difference is that in the EKF the transition and observation equations do not have to be linear; in both filters the noise is assumed to be Gaussian. Due to the linearization process, the EKF is not optimal like the Kalman filter. In the prediction phase the true nonlinear functions are used, and in the correction phase their linearizations.
The nonlinear system can be written as:

x(k) = f(x(k − 1), v(k), k),   (3.12)
y(k) = h(x(k), w(k), k),   (3.13)

where v(k) ∼ N(0, Q) and w(k) ∼ N(0, R) as in the Kalman filter, and the functions f and h may be nonlinear.
The predicted mean is computed from the original function f by assuming zero noise:

x̂(k|k − 1) = f(x̂(k − 1|k − 1), 0, k).   (3.14)
At the correction stage the nonlinear functions are replaced with linear approximations.
After the approximation optimality is lost.
From the transition function we get the Jacobians:

A(k) = ∂f(x, v, k)/∂x |_{x = x̂(k|k−1), v = 0},   (3.15)
V(k) = ∂f(x, v, k)/∂v |_{x = x̂(k|k−1), v = 0},   (3.16)

and from the observation function:

H(k) = ∂h(x, w, k)/∂x |_{x = x̂(k|k−1), w = 0},   (3.17)
W(k) = ∂h(x, w, k)/∂w |_{x = x̂(k|k−1), w = 0}.   (3.18)
Using equations (3.15)-(3.16), the covariance matrix of the prediction error may be written as

P(k|k − 1) = A(k)P(k − 1|k − 1)A(k)^T + V(k)Q(k − 1)V(k)^T,   (3.19)
and the state update takes the form

x̂(k|k) = x̂(k|k − 1) + K(k)[y(k) − h(x̂(k|k − 1), 0, k)].   (3.20)

The Kalman gain is calculated as:

K(k) = P(k|k − 1)H(k)^T [H(k)P(k|k − 1)H(k)^T + W(k)R(k)W(k)^T]^{−1},   (3.21)

and finally the covariance matrix is

P(k|k) = P(k|k − 1) − K(k)H(k)P(k|k − 1),   (3.22)

which is similar to the update in the Kalman filter.
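The EKF cycle can be illustrated with an assumed scalar example (not the report's OFDM model): a mean-reverting state observed through the nonlinearity h(x) = sin(x), linearized at each step as in equation (3.17). All parameter values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
a = 0.9                           # linear transition f(x) = a*x, so A(k) = a
q, r = 0.01, 0.05                 # state / observation noise variances

def ekf_step(x, P, y):
    # Prediction with the true transition and zero noise, eq (3.14)
    x_pred = a * x
    P_pred = a * P * a + q        # eq (3.19), V(k) = 1 for additive noise
    # Linearize h(x) = sin(x) around the prediction, eq (3.17)
    Hk = np.cos(x_pred)
    # Correction, equations (3.20)-(3.22), with W(k) = 1
    K = P_pred * Hk / (Hk * P_pred * Hk + r)
    x_new = x_pred + K * (y - np.sin(x_pred))
    P_new = P_pred - K * Hk * P_pred
    return x_new, P_new

x_true, x_est, P = 0.0, 0.0, 1.0
sq_err = []
for _ in range(300):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    y = np.sin(x_true) + rng.normal(0, np.sqrt(r))
    x_est, P = ekf_step(x_est, P, y)
    sq_err.append((x_est - x_true) ** 2)
print(np.mean(sq_err[50:]))       # small once the filter has converged
```

The mean reversion keeps the state near zero, where sin is almost linear, so the linearization error stays small; far from such a region the EKF's lack of optimality would show.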
Chapter 4
Particle Filters
The Kalman filter, as we noted in the previous chapter, gives the optimal solution for linear Gaussian problems. However, there are many systems that are neither Gaussian nor linear, OFDM among them. For such problems other methods may work better. One option is to use methods like particle filters, which approximate the distribution by simulation.
The idea of particle filters is to approximate the posterior distribution by random samples, which in this context are called particles. Every particle represents one possible system state. Besides these candidate states we also compute their likelihoods. From these values it is possible to compute statistics such as the conditional mean and covariance.
Since the posterior distribution is known only through these samples, we cannot sample from it directly. There are many methods for sampling from such distributions; the most commonly used are importance sampling and Markov chain Monte Carlo [3]. The particle filters used in this report use importance sampling, or more precisely sequential importance resampling (SIR). The benefit of SIR is its time cost, O(N), where N is the number of particles [4].
The first stage of a particle filter is to draw N samples from the initial distribution, which is assumed known. When observations arrive they are used for weighting the samples. The particles are drawn from a proposal distribution, in this case the transition equation, and particles that explain the observation well receive high weights. This phase is called the sampling stage.
After the particles are updated and the importance weights are calculated, the resampling stage follows. This is an optional stage which usually improves the results: the N samples are resampled according to the weights calculated in the previous stage, so that particles with high weights are multiplied while particles with low weights disappear.
4.1 Sampling
One of the main strengths of particle filters is that they can be applied even to problems where the filtering distribution cannot be computed analytically; it only needs to be known up to a proportionality constant.
Importance sampling is a technique for drawing samples from such distributions. The idea is to draw samples from another, tractable distribution, for example a normal distribution, and then weight them according to the target distribution [3]. This estimation of the filtering distribution is repeated at every time instant from time 0 to the current time.
The weights of the particles are calculated as follows:

ω^(i)(k) = ω^(i)(k − 1) · [ p(y(k)|x^(i)(k)) p(x^(i)(k)|x^(i)(k − 1)) ] / q(x^(i)(k)|x^(i)(k − 1), y(k)),   (4.1)

where ω^(i)(k) is the weight of particle i, x^(i)(k) is a possible system state sampled from the proposal distribution, p(x^(i)(k)|x^(i)(k − 1)) is its prior probability, y(k) is the observation, p(y(k)|x^(i)(k)) is its likelihood, and q(x^(i)(k)|x^(i)(k − 1), y(k)) is the value of the density function of the particle's proposal distribution.
After that the weights are normalized so that their sum equals unity. The new weights are:

ω̃^(i)(k) = ω^(i)(k) / Σ_{j=1}^{N} ω^(j)(k).   (4.2)
From these equations we can see how much the proposal distribution matters: ideally the samples should match the true posterior as closely as possible.
Regular particle filters such as the bootstrap and SIR filters use the prior distribution as the proposal in the prediction stage. If the prior is far from the true posterior, many particles are likely to end up in a low-likelihood region and will not survive the resampling stage. The main techniques for keeping particles in the high-likelihood region are increasing the proposal sample size, prior editing, and the auxiliary variable method.
The easiest way to get more particles into the resampling stage is simply to generate more than the usual one proposal per particle. This is the increasing-proposal-sample-size technique: with more proposals it is more likely that some of them land in the high-likelihood region.
In prior editing, the proposals are computed one by one. If the likelihood of the observation given the proposal is high enough, i.e. exceeds a threshold value, we accept the proposal; otherwise it is rejected. This is continued until we have N accepted proposals.
The auxiliary variable technique is used in the simulations of this report. In this technique an auxiliary variable allows us to draw more proposals from the particles in the high-likelihood region and none from the particles in the low-likelihood region.
4.2 Resampling
Besides the prediction stage, the particle filter also needs a resampling stage. In this stage the particles with high weights are multiplied and the ones with low weights disappear. Even though resampling reduces the diversity of the particles at the current time instant, it makes future estimates more accurate, because after resampling more particles describe the high-likelihood region.
Resampling can be performed with numerous different algorithms, such as multinomial resampling, residual resampling, stratified sampling or deterministic sampling. The algorithms differ in computational complexity, in the variance of the number of children of each particle, and in bias.
Stratified sampling is used in the simulations of this report since it is the best unbiased resampling algorithm among those mentioned [7]. A resampling algorithm determines how many copies of each weighted particle are made to represent the unweighted particle set at the next time instant.
The number of copies of each particle should satisfy:

Σ_{i=1}^{N} N^(i)(k) = N,   (4.3)

where N^(i)(k) is the number of copies of the i-th particle. This guarantees that the number of particles stays constant. Since the algorithm is unbiased, it must also satisfy:

E(N^(i)(k)) = N ω^(i)(k).   (4.4)

As mentioned before, not all resampling algorithms are unbiased, but the unbiased ones are usually also faster.
The idea of the algorithm is that the probability of picking particle x^(i)(k) is its normalized weight ω^(i)(k). To implement this we define ω^(0)(k) = 0 and calculate the cumulative weights of the particles:

s^(i)(k) = Σ_{j=0}^{i} ω^(j)(k).   (4.5)
The sampling is handled as follows. We divide the interval [0, 1) into N equal parts and take one uniformly distributed sample u from each of them. Particle x^(i)(k) is chosen for a given sample u if

s^(i−1)(k) ≤ u < s^(i)(k).   (4.6)

In this algorithm the variance of the number of copies is

var(N^(i)(k)) = (Nω^(i)(k) − ⌊Nω^(i)(k)⌋)(1 − ω^(i)(k)),   (4.7)

where ⌊Nω^(i)(k)⌋ is the largest integer not greater than Nω^(i)(k).
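The stratified resampling procedure above can be sketched in a few lines. This is an assumed implementation of the description, not code from the report; the weight vector is an arbitrary example:

```python
import numpy as np

def stratified_resample(weights, rng):
    """Stratified resampling: one uniform draw per stratum of [0, 1)."""
    N = len(weights)
    # Divide [0, 1) into N equal parts; draw one uniform sample in each
    u = (np.arange(N) + rng.random(N)) / N
    cum = np.cumsum(weights)            # cumulative weights s^(i)(k), eq (4.5)
    # Particle i is chosen when s^(i-1) <= u < s^(i), cf. equation (4.6)
    return np.searchsorted(cum, u)

rng = np.random.default_rng(3)
w = np.array([0.05, 0.05, 0.6, 0.2, 0.1])   # normalized importance weights
idx = stratified_resample(w, rng)
print(idx)   # indices of surviving particles; the heavy particle dominates
```

The returned index vector always has exactly N entries, satisfying equation (4.3), and the expected number of copies of particle i is Nω^(i), satisfying the unbiasedness condition (4.4).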
Chapter 5
Recursive channel and frequency offset
estimation in OFDM
In this chapter we build two simulation models for channel and offset estimation in OFDM, using the equations of Chapter 2 and the filter equations of Chapters 3 and 4. The first model uses the extended Kalman filter, and the second uses a regular Kalman filter combined with a particle filter.
5.1 Extended Kalman filter in OFDM
First we have to choose a state vector. Since we want to estimate the channel and the offset, we define the state vector as s(k) = [h_0(k), h_1(k), ..., h_{L_h−1}(k), ε(k)]^T = [h^T(k), ε(k)]^T. The transition and observation equations are

s(k) = A s(k − 1) + v(k),   (5.1)
r_ε(k) = H(k) x_ε(k) + w(k)   (5.2)
       = X_ε(k) h(k) + w(k),   (5.3)

where X_ε(k) is the circulant matrix of size N × L_h built from x_ε(k). The observation can also be written as a function of the state vector s(k):

r_ε(k) = G(s(k)) + w(k).   (5.4)

The EKF equations in this case are:

ŝ(k) = ŝ(k|k − 1) + K(k)[r_ε(k) − G(ŝ(k|k − 1))],   (5.5)
K(k) = P(k|k − 1)G^H(k)[G(k)P(k|k − 1)G^H(k) + R_s]^{−1},   (5.6)
P(k|k − 1) = A P(k − 1|k − 1) A^T + Q_s,   (5.7)
P(k) = [I − K(k)G(k)] P(k|k − 1),   (5.8)

where the linearized observation matrix is

G(k) = ∂G/∂s = [ X_ε(k),  H(k) ∂x_ε(k)/∂ε ],

and the state noise covariance is the block-diagonal matrix

Q_s = [ Q_h      0_{L×1}
        0_{1×L}  σ_ε²   ],

with Q_h = σ_h² I, where σ_h² is the variance of the channel taps and σ_ε² the variance of the offset noise.
5.2 Regular Kalman and particle filters in OFDM
In this section we estimate the channel, which is linear, with a regular Kalman filter, and the offset, which is nonlinear, with two different particle filters: the regular bootstrap filter and the auxiliary filter.
First we estimate the channel. The state vector contains only the channel h(k). The state-space equations are:

h(k) = A h(k − 1) + v(k),   (5.9)
r_ε(k) = X_ε(k) h(k) + w(k),   (5.10)
and the Kalman filter equations are

ĥ(k) = A ĥ(k − 1) + K(k)[r_ε(k) − X_ε(k) A ĥ(k − 1)],   (5.11)
K(k) = P(k|k − 1)X_ε^H(k)[X_ε(k)P(k|k − 1)X_ε^H(k) + R]^{−1},   (5.12)
P(k|k − 1) = A P(k − 1) A^T + Q_h,   (5.13)
P(k) = [I − K(k)X_ε(k)] P(k|k − 1).   (5.14)
The offset is estimated with two different particle filters: regular bootstrap and auxiliary
filter.
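To illustrate the offset-tracking part, the following sketch runs a bootstrap particle filter on a deliberately simplified model (an assumption for illustration, not the report's full OFDM observation equation): the offset follows a slow random walk and the observation is the offset plus Gaussian noise. All parameter values except the particle count of 50 are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
Np = 50                              # particles, as in the report's simulations
q, r = 1e-4, 1e-2                    # transition / observation noise variances

particles = rng.uniform(0.0, 1.0, Np)    # prior: normalized offset in [0, 1)
eps_true, estimates, truth = 0.25, [], []
for k in range(200):
    if k == 100:
        eps_true = 0.4                   # abrupt offset change to track
    y = eps_true + rng.normal(0, np.sqrt(r))
    # Sampling stage: propagate particles through the transition equation
    particles = particles + rng.normal(0, np.sqrt(q), Np)
    # Weight by the observation likelihood p(y | eps^(i))
    w = np.exp(-0.5 * (y - particles) ** 2 / r)
    w /= w.sum()
    estimates.append(np.sum(w * particles))  # conditional-mean estimate
    truth.append(eps_true)
    # Resampling stage (multinomial here, for brevity)
    particles = particles[rng.choice(Np, Np, p=w)]
err = np.abs(np.array(estimates[150:]) - 0.4)
print(err.mean())                        # settles close to the new offset
```

The estimate lags the abrupt change for a few tens of blocks while the particle cloud drifts to the new offset, mirroring the reaction-time behavior discussed in the simulation results of Chapter 6.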
Chapter 6
Simulation
6.1 Simulation Tools
As a simulation tool we used MATLAB version 6.5, a program widely used for all kinds of simulations. However, as the simulations were quite heavy and took several days to run, we had to trade off running time against the number of particles in the particle filters.
6.2 Simulation Parameters
In the simulations we used the following values. The carrier frequency is f0 = 2.4 GHz and the number of subcarriers is N = 128. The bandwidth is B = 1 MHz, leading to a subcarrier symbol rate of 7.8 kHz. BPSK modulation is employed. The power delay profile is [0, −1, −3, −9] dB at delays [0, 1, 2, 3] µs, and the Doppler spectrum is Jakes's. The receiver speed is 30 km/h. In the particle filters we used 50 particles; beyond that, increasing the number of particles did not seem to improve the estimates significantly relative to the extra running time.
6.3 Simulation Results
In the first example we sent 250 OFDM blocks. The wireless channel is considered stationary during each OFDM block. The frequency offset changes after every 50 blocks; the normalized offset varies between 0.1 and 0.4.
Frequency offset tracking with the different filters is shown in Figure 6.1, where the SNR is 15 dB. From these figures we can see how well the estimated offset follows the true offset. Regarding how fast the filters react to changes in the offset, the EKF and the auxiliary filter are about equal, while the bootstrap filter is clearly slower. On the other hand, both particle filters vary less than the EKF.
According to these figures the auxiliary filter appears to be the best of the filters examined, because it reacted quickly to the changes and did not vary much.
[Figure 6.1 shows three frequency offset tracking plots over OFDM block indices 0-250: (a) auxiliary filter, (b) bootstrap filter, (c) EKF.]
Figure 6.1: Frequency offset tracking. The continuous line is the true offset, the dashed line the estimate. SNR is 15 dB and velocity is 30 km/h.
Finally, Figure 6.2 shows the bit error rates (BER) at different signal-to-noise ratios (SNR). Consider first the curves labeled 'No offset compensation' and 'True channel and offset'. These curves bound the BERs of the filters. The 'No offset compensation' curve shows the BER when the offset is not estimated at all; every filter should do better than that. The 'True channel and offset' curve shows the BER when the true channel and offset are known, which is of course better than any estimate.
We find that the differences between the filters at low SNR are not significant; in fact, none of the filters works well in that regime. At high SNR the particle filters work better than the EKF, and the auxiliary filter works better than the regular bootstrap filter.
[Figure 6.2 plots BER (10^−4 to 10^0, logarithmic scale) against Eb/No (5-30 dB) for five cases: no offset compensation, auxiliary filter, bootstrap filter, EKF, and true channel and offset.]
Figure 6.2: Bit error rate with the different filters; velocity 30 km/h.
Chapter 7
Discussion
The purpose of this report was to examine the application of particle filters to frequency offset estimation in wireless communications, in particular OFDM systems. We studied the performance of two particle filters and the extended Kalman filter. The performance of the algorithms was investigated in simulations using time-varying offsets and different SNR levels.
When the SNR is below 5 dB, none of the filters works reliably. When the SNR is between 10 and 15 dB, all the considered filters perform almost equally. The benefits of the particle filters appear in the high-SNR regime, where they outperform the EKF.
The main drawback of particle filters is their computational complexity. While the EKF's complexity is proportional to the dimension of the state-space model, the particle filters need time proportional to the number of particles. With particle filters we always have to trade off complexity against precision.
Although the results were promising, they were not as good as we expected. One reason could be that we had to trade off the number of particles against running time.
A natural next step would be to use particle filters also for channel estimation. However, there is at least one big problem: the offset estimation was a multiple-input single-output (MISO) process, while channel estimation is a multiple-input multiple-output (MIMO) process. The problem is that, in theory, the number of particles needed grows exponentially with the number of output variables.
Fortunately, there are methods to decrease the number of particles without losing estimation accuracy. Another way to address the problem might be to rewrite some parts of the MATLAB code in another language, for example C or C++, especially the parts that MATLAB does not run efficiently, such as some of the loops.
Bibliography
[1] Heiskala, J., Terry, J., "OFDM Wireless LANs: A Theoretical and Practical Guide", SAMS, 2001.
[2] Roman, T., Enescu, M., Koivunen, V., "Joint Time-Domain Tracking of Channel and Frequency Offset for OFDM Systems", in IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2003), Rome, Italy.
[3] Doucet, A., de Freitas, N., Gordon, N., "Sequential Monte Carlo Methods in Practice", Springer, 2000.
[4] Oja, E., "Particle Filters", University of Tampere, Department of Computer and Information Sciences, Algorithmics, 2003.
[5] Chin, W.H., Ward, D.B., Constantinides, A.G., "Channel tracking for space-time block coded systems using particle filtering", in Proc. 14th International Conference on Digital Signal Processing (DSP 2002), Vol. 2, July 2002, pp. 671-674.
[6] Huber, K., Haykin, S., "Application of particle filters to MIMO wireless communications", in Proc. IEEE International Conference on Communications (ICC '03), Vol. 4, 2003, pp. 2311-2315.
[7] Carpenter, J., Clifford, P., Fearnhead, P., "Improved particle filter for nonlinear problems", IEE Proceedings - Radar, Sonar and Navigation, Vol. 146, No. 1, Feb. 1999, pp. 2-7.