

Integration of Linear Prediction Techniques with Kalman Filter Estimation

Faheem Khan 1, Sung Ho Cho 1 and Naeem Khan 2

1 Department of Electronics and Computer Engineering, Hanyang University, South Korea [{faheemkhan, dragon}@hanyang.ac.kr]

2 Department of Electrical Engineering, University of Engineering & Technology, Peshawar, Pakistan

Abstract

The missing data samples in a state estimation process are restored by employing linear prediction theory. Three different linear prediction techniques, namely the Normal Equation, the Levinson-Durbin Algorithm (LDA) and the Leroux-Gueguen Algorithm (LGA), are implemented in this paper. The Normal Equation method has high computational complexity. LDA is less computationally expensive because it does not use matrix inversion to compute the linear prediction coefficients (LPCs); however, LDA yields LPC values with a larger dynamic range. Alternatively, LGA avoids the dynamic-range problem by applying the Schwarz inequality in computing the LPC values. It is concluded that LDA and LGA have lower computational complexity than the Normal Equation when integrated with the Kalman filter. The major contribution of this paper is to reduce the computational time required by the Normal Equation method by employing the LDA and LGA techniques.

Keywords: Kalman filter, Observation Loss, Open-Loop Estimation, Compensated Linear Prediction Techniques, Optimization

A preliminary version of this paper appeared in IEEE IBCAST 2013, 15-19 Jan. Islamabad, Pakistan.


1. Introduction

State estimation has been an interesting and active research area over the past decades due to its important role in systems where direct access to the measured state of the system is either impossible or very difficult [1]. The Kalman filter (KF) is an online recursive algorithm used to estimate the system state from noise-contaminated observations [2]. It is a set of equations that estimates the state of a process efficiently by minimizing the mean square error [2]. Broadly speaking, the KF consists of two major steps: a time update step (also called the a priori estimate or prediction step) and a measurement update step (also called the a posteriori estimate or observation update). In simple words, a constant-gain steady-state Kalman filter is a recursive method for estimating the state variables of an LTI (linear time-invariant) system subjected to stochastic noise [3]. The Kalman filter accurately estimates the system state on the basis of noisy measurements, and hence precise knowledge of the system dynamics is required. It relies on information about unmeasured stochastic inputs and noise-contaminated measurement data; the sensor readings are used to compute the minimum mean square error estimate of the system state [4][5][6].

In networked and distributed control systems, random communication packet losses may occur for several reasons, such as limited channel spectrum, channel fading, interference, buffer congestion, packet collisions and other deficits [7]. The Kalman filter algorithm depends on the system dynamics, the received signal, and knowledge of the unknown noise; it predicts the system state and then updates the prediction on the basis of the measured observations [7]. In case of loss of observations, the conventional Kalman filter (CKF) fails to offer accurate state estimation. Alternatively, Open-Loop Estimation (OLE) is used to overcome this disadvantage: in the open-loop Kalman filtering scheme, only the prediction step is performed when observations are lost, and no update step is carried out to tune the predicted signal because measurement data is not available [8]. Since OLE performs only the prediction step, it results in unbounded estimation error if observation data is missing for a long period of time [7]. Estimation techniques are therefore required that produce acceptable state estimates with bounded estimation error in case of loss of observations. Some researchers have replaced the OLE technique with other methods such as the Zero Order Hold (ZOH) or First Order Hold techniques. The ZOH method, described in [9][14], stores only the last sample and uses it in the update step throughout the estimation process. This method also has certain disadvantages: if a long data loss is encountered, employing only one sample at the update step will result in unbounded error, and ZOH requires strict correlation among the data samples.

In this work, a novel Kalman-filter-based estimation technique is presented for the case where observations for the measurement update step are not available; the missing observation data is reproduced through linear prediction techniques. The basic notion of linear prediction is to estimate future data samples as a linear combination of the past values of the input signal within a signal frame, where the weights of the linear combination are calculated by minimizing the mean square prediction error [10]. In internal prediction, the linear prediction coefficients (LPCs) are computed from the selected data frame using the autocorrelation of the data window. In external prediction, the LPCs used for predicting the lost data samples are computed from past samples of the signal [11]. The conventional Normal Equation method has been found to be computationally expensive [12]. Alternatively, the Levinson-Durbin Algorithm (LDA) considerably reduces this computational time by avoiding the large matrix inversions involved in the computation of the LPCs; however, LDA has the drawback of a larger dynamic range in the LPC values [12]. Another alternative is the Leroux-Gueguen Algorithm (LGA), which eliminates the dynamic-range problem in a fixed-point environment by taking advantage of the Schwarz inequality in the computation of the LPCs [10]. Both LDA and LGA exploit the properties of the autocorrelation matrix, thereby decreasing the computational time compared with the Normal Equation method [10].

The rest of the paper is organized as follows. Section 2 discusses the effect of measurement data loss. Section 3 outlines existing solutions for compensating data loss in state estimation. Section 4 briefly introduces linear prediction theory, and Section 5 addresses the selection of the linear prediction filter order. The modified LP techniques, i.e. Normal Equation, LDA and LGA, are discussed in Section 6. A numerical example is presented in Section 7 to show the effectiveness of the proposed method, followed by the conclusion in Section 8.

2. Problem Statement

The Kalman filter depends on sensor readings to compute the minimum mean square error estimate of the system state. However, when output data is unavailable due to channel congestion, buffer overflow and/or sensor faults, the performance of the Kalman filter degrades considerably [9]. Observation loss is a major issue in control and communication systems and has been an active research topic over the last few years [5]. The Kalman filter may face situations where measurement data is not available for the update step. The authors in [7] performed open-loop Kalman estimation in the case of loss of observation: when observation data is lost, the predicted samples are passed to the next iteration without performing the update step. Consider the following discrete-time LTI system [2]:

x_k = A x_{k-1} + B u_{k-1} + ε_{k-1}

z_k = C x_k + θ_k

Here k ∈ {0, 1, 2, 3, …}; x, ε ∈ R^n; u ∈ R^l; θ ∈ R^m; A ∈ R^{n×n} is the state transition matrix, B ∈ R^{n×l} is the input matrix, C ∈ R^{m×n} is the output matrix, and (x_0, ε_k, θ_k) are uncorrelated Gaussian white noise sequences with means (x̄_0, 0, 0) and covariances (P_0, Q_k, R_k), respectively. The estimation through the Kalman filter is summarized as follows [13].

Algorithm 1: Discrete-Time Kalman Estimation

1. Initialize x_{0|0}, u_0, ε_0, θ_0, P_{0|0} and set k = 1.
2. Prediction cycle:
   x_{k+1|k} = A x_{k|k} + B u_k   (state estimate)
   P_{k+1|k} = A P_{k|k} A^T + Q_k   (error covariance)
3. Time-step update: k → k + 1.
4. Sense measurements: z_{k+1} = H x_{k+1} + θ_{k+1}.
5. Calculate the innovation vector: r_{k+1} = z_{k+1} − H x_{k+1|k}.
6. Calculate the innovation covariance matrix: S_{k+1} = H P_{k+1|k} H^T + R_{k+1}.
7. Calculate the gain matrix: K_{k+1} = P_{k+1|k} H^T S_{k+1}^{−1}.
8. Perform the update cycle:
   x_{k+1|k+1} = x_{k+1|k} + K_{k+1} r_{k+1}   (state estimate)
   P_{k+1|k+1} = (I − K_{k+1} H) P_{k+1|k}   (error covariance)
9. Go back to step 2.

From the above algorithm it is clear that the update step depends on measurements. If the output data z_k is unavailable, the KF may not yield an optimal estimate. For this reason, three different linear prediction methods are used in this paper to predict the missing output observations for the update step.
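To make Algorithm 1 concrete, the following is a minimal sketch of one recursion in Python/NumPy; the routine and its arguments (A, B, H, Q, R and the initial state) are illustrative placeholders rather than the specific system studied later in the paper.

import numpy as np

def kalman_step(x, P, u, z, A, B, H, Q, R):
    """One prediction cycle plus measurement update of Algorithm 1."""
    # Prediction cycle (step 2)
    x_pred = A @ x + B @ u                      # a priori state estimate
    P_pred = A @ P @ A.T + Q                    # a priori error covariance
    # Measurement update (steps 5-8)
    r = z - H @ x_pred                          # innovation vector
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_upd = x_pred + K @ r                      # a posteriori state estimate
    P_upd = (np.eye(len(x)) - K @ H) @ P_pred   # a posteriori error covariance
    return x_upd, P_upd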

3. Existing Solutions

3.1 Open-Loop Kalman Filtering

In the literature, the most prevalent method used in the case of output data loss is open-loop Kalman filtering. In open-loop Kalman estimation the measurement data is ignored and the Kalman gain matrix K_k is set to the zero matrix, which means that no update step is performed. As no measurement data arrives, no Kalman gain is calculated and no state or covariance update is performed; only the prediction step is carried out [14]. OLE is a fast estimation technique with a very simple structure. Despite these advantages, OLE suffers from certain drawbacks, briefly discussed below. Open-loop Kalman estimation may diverge when a substantial number of data samples is lost. When observation data becomes available again after a data-loss period, oscillations and/or sharp spikes can be observed in the estimated parameters [14]. The steady-state values of the state and covariance are not regained immediately after the data loss ends; it takes a long time to reapproach steady state. OLE is summarized in the following algorithm.

Algorithm 2: Open-Loop Kalman Filtering

1. Initialize x_{0|0}, u_0, ε_0, θ_0, P_{0|0} and set k = 1.
2. Prediction cycle:
   x_{k+1|k} = A x_{k|k} + B u_k   (state estimate)
   P_{k+1|k} = A P_{k|k} A^T + Q_k   (error covariance)
3. Time-step update: k → k + 1.
4. Sense measurements: z_{k+1} is not available.
5. There is no innovation residual and hence no Kalman gain is calculated:
   x_{k+1|k+1} = x_{k+1|k}   (state estimate)
   P_{k+1|k+1} = P_{k+1|k}   (error covariance)
6. Return to step 2.

In case of loss of observation over a long period of time, a sub-optimal estimation technique is required that provides robust estimation performance with bounded estimation error covariance.
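As a counterpart to the sketch of Algorithm 1, the open-loop cycle of Algorithm 2 reduces to the following (same placeholder matrices as before): when z_{k+1} is unavailable, the update is skipped and the predicted quantities are carried forward unchanged.

def open_loop_step(x, P, u, A, B, Q):
    """One open-loop cycle: prediction only, no measurement update (Algorithm 2)."""
    x_pred = A @ x + B @ u       # a priori state estimate
    P_pred = A @ P @ A.T + Q     # covariance grows, uncorrected by measurements
    # No innovation and no Kalman gain; the predictions become the estimates.
    return x_pred, P_pred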

3.2 Zero Order Hold Technique

Fig. 1 Open-Loop Kalman filtering

Due to the various disadvantages associated with open-loop estimation, many researchers have tried to replace OLE with other techniques [14]. The Zero Order Hold technique is explained in Fig. 1. In ZOH, only the most recent sample of the signal is stored and reused during the whole estimation process. Since a single data sample is employed at the measurement update stage, it may not yield an optimal solution under substantial data loss. ZOH also requires strict correlation among the data samples of the signal; for these reasons the technique effectively amounts to random sampling. Because of the above-mentioned disadvantages of the existing methods, we propose a new scheme in which linear prediction techniques are embedded in Kalman estimation to compensate for the loss of observations during the update cycle of the Kalman filter. In the following sections, linear prediction techniques are discussed briefly.


4. Linear Prediction Theory

Fig. 2 Linear Prediction Techniques

4.1 Linear Prediction

The basic idea of linear prediction is that a signal is modeled as a linear combination of its past samples. A major portion of the work on system modeling has been done in the field of control systems under the subjects of system identification and estimation. The weights used to compute the linear combination are calculated by minimizing the mean-square prediction error [11]. Linear prediction is essentially an identification method in which AR (autoregressive) parameters are estimated from the observed signal [10]; the signal to be predicted is assumed to be an AR signal. The following equation gives the predicted signal:

ẑ[n] = Σ_{i=1}^{p} α_i z[n−i]   (1)

where ẑ and z represent the predicted and input signals, respectively, and α_i is the i-th prediction coefficient. Linear prediction works on the principle of minimizing the mean square error, and from this minimization a mathematical expression is derived for computing the linear prediction coefficients. The prediction error is the difference between the original and predicted signals:

e[n] = z[n] − ẑ[n]   (2)
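As a small illustration of Equations (1) and (2), the sketch below forms the one-step prediction from the p most recent samples for a given coefficient vector; how the coefficients are obtained is the subject of Sections 5 and 6, so alpha here is simply an assumed input.

import numpy as np

def lp_predict(z_recent, alpha):
    """One-step linear prediction, Eq. (1).

    z_recent: array [z[n-1], z[n-2], ..., z[n-p]] of past samples
    alpha:    LP coefficients [a_1, ..., a_p]
    """
    return float(np.dot(alpha, z_recent))   # z_hat[n] = sum_i a_i * z[n-i]

# Prediction error, Eq. (2): e[n] = z[n] - z_hat[n]
# e_n = z_n - lp_predict(z_recent, alpha)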

The two major classes of linear prediction are external and internal linear prediction as shown in Fig. 2. The detailed explanation of internal and external linear prediction is given in the following sub-sections.

4.1.1 Internal Linear Prediction


Fig. 3 Internal Linear Prediction Concept

In this type of prediction, the prediction coefficients for a given data frame are computed from the data inside that frame, so the LPCs capture the frame statistics accurately. The data frame may be static or dynamic. The advantage of a longer frame size is its low computational complexity, because LPCs are calculated, and hence transmitted, less often. However, the coding delay for longer frames grows larger, since the system must wait longer to collect the required samples [12]. Additionally, the LPCs of a long frame may fail to achieve good prediction gain when the signal is non-stationary. Conversely, a shorter data frame requires more frequent LPC updates, which yields a more accurate portrayal of the input signal statistics than a longer frame. Mostly, internal prediction relies on non-recursive autocorrelation methods for estimation, using a finite-length window to obtain the signal samples. Internal linear prediction does not really predict the signal; rather, it computes the coefficients of the input signal to be transmitted. Transmitting the LPCs of a signal requires less bandwidth and storage than the original signal, thus saving useful bandwidth and memory space [10].

As depicted in Fig. 3, the sliding-window concept is used in this work. The stationary-window approach differs from the sliding window in its update process: in the sliding-window approach the window is updated as estimation proceeds, whereas in the stationary-window concept the same window is employed for the computation of all sample values of the signal.

4.1.2 External Prediction

Fig. 4 External Linear Prediction Concept


The LPCs derived in external prediction are used on a future data frame; i.e. the coefficients related to a frame are not computed from the data samples inside that frame, but from past samples of the signal [12]. External prediction is most useful when the statistical properties of the signal change slowly with time. The frame size needs to be long enough that, in case of data loss, the signal can be recovered. Fig. 4 illustrates the sliding-window concept for external linear prediction.

4.2 Prediction Gain

Prediction gain (PG) is yet another important parameter in LP theory. PG can be calculated from the equation given below [10]

PG = 10 log10( σ_s² / σ_e² ) = 10 log10( E{s²[n]} / E{e²[n]} )   (3)

Equation (3) defines PG as the ratio of the input signal variance to the variance of the prediction error, in decibels. PG measures the performance of a predictor: a higher prediction gain implies a lower prediction error, so a predictor with a higher PG value is preferred to one with a lower value. An optimum frame size of the LP filter for maximum PG may be found by plotting PG as a function of the LP filter order; a saturation point is reached beyond which a further increase in frame size does not affect the prediction gain significantly. If the expectations in Equation (3) are replaced by summations, the prediction gain can be defined by the following equation.

PG[m] = 10 log10( ( Σ_{n=m−N+1}^{m} s²[n] ) / ( Σ_{n=m−N+1}^{m} e²[n] ) )   (4)

where

e[n] = s[n] − ŝ[n] = s[n] + Σ_{i=1}^{M} a_i[m] s[n−i],   n = m−N+1, …, m   (5)

The LPCs are calculated inside the interval [m−N+1, m] for internal prediction, and from samples with n < m−N+1 for external linear prediction. In Equation (4) the prediction gain is defined as a function of the time variable m.
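Equation (4) translates directly into code: prediction gain is ten times the base-10 logarithm of the ratio of signal energy to prediction error energy over the frame. A minimal sketch:

import numpy as np

def prediction_gain_db(s, e):
    """Prediction gain over one frame, Eq. (4), in decibels."""
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum(e ** 2))

# Example: a predictor that leaves one tenth of the signal energy as error
# has PG = 10*log10(10) = 10 dB.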

5. Linear Prediction Filter Order

Optimization is a key concern in the signal processing algorithms considered here, i.e. Normal Equation, LDA and LGA. Since the prediction error does not always decrease as the window size of the linear predictor increases, it is important to find a sub-optimal value of the frame size for the LPCs [14]. A constraint-based method for deciding sub-optimal values of the LP filter order (LPFO) for the Normal Equation, LDA and LGA techniques is given in Algorithm 3.


Algorithm 3: LP Filter Order Selection

1. Compute e_m(k−1) = max(e_i), where e_i = x_i − x̂_i for all i = 1, 2, …, k−1.
2. Initialize j = 1, then calculate R_z and r_z.
3. Recursion: for j = 2, …, p
   Obtain ẑ from Equation (1).
   Calculate the measurement-update state estimate x̂_k^c on the basis of these compensated observations.
   Compute e_j(k) = x_k − x̂_k^c.
   Check: if e_j(k) ≤ e_m(k−1), then set n ← j (the order of the LP filter); else set j ← j + 1.
4. Go back to step 3.
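A simplified sketch of Algorithm 3 follows. The helpers solve_lpc (any of the three methods of Section 6) and compensated_update (the Kalman measurement update run on the compensated observation) are hypothetical placeholders for steps defined elsewhere in the paper, and the true state history is assumed available only in the sense the algorithm itself prescribes, as the benchmark for the error bound.

def select_lp_order(x_true_hist, x_est_hist, frame, p_max,
                    solve_lpc, compensated_update):
    """Accept the first LP order whose compensated error stays within
    the largest estimation error seen before the loss (Algorithm 3)."""
    # Step 1: e_m(k-1) = maximum of the pre-loss estimation errors
    e_max = max(abs(xt - xe) for xt, xe in zip(x_true_hist, x_est_hist))
    # Steps 2-3: increase the order j until the error bound is met
    for j in range(1, p_max + 1):
        alpha = solve_lpc(frame, j)                 # LPCs from R_z and r_z
        x_comp = compensated_update(frame, alpha)   # update with predicted z
        if abs(x_true_hist[-1] - x_comp) <= e_max:
            return j                                # order of the LP filter
    return p_max                                    # fall back to maximum order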

6. Linear Prediction Techniques

6.1 Normal Equation

In a system identification application, the FIR filter output provides an estimate of the present output sample of an unknown system as a sum of linearly weighted past and/or future samples of the input to that system. The Normal Equation derivation is based on minimization of the mean square error. Conventionally, several methods have been used to solve the Normal Equations; the most common are the covariance method, used for non-stationary processes, and the autocorrelation method, appropriate for stationary processes. In the Normal Equation method the predicted signal is represented by the equation below [15]:

ẑ[n] = Σ_{i=1}^{p} α_i z[n−i]   (6)

The mathematical steps for minimizing the mean square error between the actual and predicted signals are discussed here. The cost function is defined as [8]

J = E{e²[n]} = E{ ( z[n] + Σ_{i=1}^{p} α_i z[n−i] )² }   (7)

Here J is the cost function, which is precisely a second-order function of the LPCs. (Note that Equation (7) adopts the sign convention e[n] = z[n] + Σ α_i z[n−i], so the coefficients carry the opposite sign to those in Equation (1).) To obtain the optimal LPC values, J is differentiated with respect to α_k and equated to zero [8]:

∂J/∂α_k = 2 E{ ( z[n] + Σ_{i=1}^{p} α_i z[n−i] ) z[n−k] } = 0   (8)

Rearranging Equation (8) gives:

E{z[n] z[n−k]} + Σ_{i=1}^{p} α_i E{z[n−i] z[n−k]} = 0   (9)

Now let R_z[i−k] = E{z[n−i] z[n−k]} for all k = 1, 2, 3, …, p.   (10)

This leads to Equation (11).


Σ_{i=1}^{p} α_i R_z[i−k] = −R_z[k]   (11)

The autocorrelation matrix R_z is given by:

R_z = [ R_z[0]    R_z[1]    R_z[2]    ⋯  R_z[p−1]
        R_z[1]    R_z[0]    R_z[1]    ⋯  R_z[p−2]
        ⋮         ⋮         ⋮         ⋱  ⋮
        R_z[p−1]  R_z[p−2]  R_z[p−3]  ⋯  R_z[0] ]

From Equations (10) and (11):

R_z α = −r_z   (12)

The vector r_z is the column vector formed by the elements R_z[1] to R_z[p] of the autocorrelation array. The matrix in Equation (12) is a Toeplitz matrix (a matrix whose elements along each diagonal are the same [11]). This property allows the linear equations to be solved by the Levinson-Durbin algorithm or the Schur algorithm [14].
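A minimal sketch of the autocorrelation-based Normal Equation method, under the sign convention of Equations (7)-(12): build the Toeplitz matrix from the biased sample autocorrelation and solve Equation (12) with a general O(p³) solver, i.e. without exploiting the Toeplitz structure; this is exactly the cost that LDA and LGA avoid.

import numpy as np
from scipy.linalg import toeplitz

def lpc_normal_equation(z, p):
    """Solve R_z * alpha = -r_z (Eq. 12) for the LP coefficients."""
    N = len(z)
    # Biased sample autocorrelation R[0..p]
    R = np.array([np.dot(z[:N - k], z[k:]) for k in range(p + 1)])
    Rz = toeplitz(R[:p])              # p x p Toeplitz autocorrelation matrix
    rz = R[1:p + 1]                   # vector [R[1], ..., R[p]]
    alpha = np.linalg.solve(Rz, -rz)  # general solve; no Toeplitz shortcut
    return alpha                      # error filter: e[n] = z[n] + sum_i alpha_i z[n-i]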

6.2 Levinson Durbin Algorithm (Modified)

The Levinson-Durbin Algorithm (LDA) is a recursive prediction technique in which the parameter coefficients of an autoregressive process are computed without high computational cost [14]. LDA takes advantage of the Toeplitz symmetry of the autocorrelation matrix, thereby decreasing the computational time. The autocorrelation array of Equation (12) serves as the starting input of the LDA. The coefficients computed within the LDA recursion are, in fact, reflection coefficients (RCs) [10]; the RCs obtained from this algorithm have a one-to-one correspondence with the LPC coefficients because the input signal is wide-sense stationary (WSS). Let W be the number of iterations and Q the frame size; the LDA can then be described as follows:

Algorithm 4: Constrained Levinson-Durbin Algorithm

1. Set a loop for varying the window size: for W = 10 → 90.
2. Error threshold: set a threshold limit e_th on the error, to keep the prediction error bounded.
3. Initialization: at the first iteration (l = 0), set J_0 = R[0]; the mean square prediction error is initially the first element of the autocorrelation window.
4. Recursion: for l = 1, 2, 3, …, Q.
5. Compute the value of the l-th RC as follows [12]:
   k_l = (1/J_{l−1}) ( R[l] + Σ_{i=1}^{l−1} α_i^{(l−1)} R[l−i] )
   Stop if l = Q.
6. Calculate the LPCs for the l-th order predictor as in [12]:
   α_l^{(l)} = −k_l
   α_i^{(l)} = α_i^{(l−1)} − k_l α_{l−i}^{(l−1)},   for i = 1, 2, 3, …, l−1
   Stop when l = Q.
7. Calculate the minimum mean square prediction error as in [1]:
   J_l = J_{l−1} (1 − k_l²)
8. Check the error against the threshold: if J_l ≤ e_th, stop the loop; otherwise set l = l + 1 and go to step 5.
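A minimal sketch of the core recursion of Algorithm 4 (steps 3-7, leaving out the outer window-size loop and threshold check): it consumes the autocorrelation sequence R[0..Q] and returns the LPCs, the reflection coefficients and the final mean square prediction error in O(Q²) operations, with no matrix inversion.

import numpy as np

def levinson_durbin(R, Q):
    """Levinson-Durbin recursion on the autocorrelation sequence R[0..Q]."""
    alpha = np.zeros(Q + 1)           # alpha[1..l] holds the order-l LPCs
    k = np.zeros(Q + 1)               # reflection coefficients k[1..Q]
    J = R[0]                          # step 3: J_0 = R[0]
    for l in range(1, Q + 1):         # step 4: recursion
        # Step 5: l-th reflection coefficient
        k[l] = (R[l] + np.dot(alpha[1:l], R[l - 1:0:-1])) / J
        # Step 6: order-l LPCs from the order-(l-1) LPCs
        new = alpha.copy()
        new[l] = -k[l]
        for i in range(1, l):
            new[i] = alpha[i] - k[l] * alpha[l - i]
        alpha = new
        # Step 7: updated minimum mean square prediction error
        J *= 1.0 - k[l] ** 2
    return alpha[1:], k[1:], J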

6.3 Leroux Gueguen Algorithm (Modified)

The main disadvantage of the LDA technique is that it results in a larger dynamic range in the LPC values. An alternative linear prediction method, the Leroux-Gueguen Algorithm (LGA), overcomes the dynamic-range problem in a fixed-point environment with the help of the Schwarz inequality [16]. Due to its simple structure and straightforward computations, LGA is integrated into the Kalman filtering process in order to restore the missing observations required in the measurement update step of the Kalman filter. It computes reflection coefficients (RCs) from the autocorrelation matrix without dealing with LPCs directly, and its computational time is also smaller than that of the Normal Equation method. Consider W as the number of iterations and Q as the frame size; the LGA technique can then be described as follows:

Algorithm 5: Compensated Leroux-Gueguen Algorithm

1. Set a loop for varying the frame size: for W = 10 → 90.
2. Threshold error: set a value e_th for the threshold error.
3. Initialization: for l = 0, set ε^(0)[k] = R[k], k = −Q+1, …, 0, …, Q.
4. Recursion: for l = 1, 2, 3, …, Q. Find the l-th reflection coefficient [16]:
   k_l = ε^(l−1)[l] / ε^(l−1)[0]
   Stop when l = Q.
5. Compute the values of the epsilon parameters [8]:
   ε^(l)[k] = ε^(l−1)[k] − k_l ε^(l−1)[l−k],   where k = −Q+l+1, …, 0, l+1, …, Q
   Set l = l + 1; return to step 4.
6. Compute the error e as e[n] = z[n] − ẑ[n].
7. Check if e ≤ e_th. No: increment W = W + 1. Yes: stop the loop and save all parameters.
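A minimal sketch of the core of Algorithm 5 (steps 3-5, again without the outer frame-size loop): the recursion runs entirely on the epsilon parameters, whose magnitudes are bounded by R[0] via the Schwarz inequality, which is what makes the method attractive in fixed point; it returns only the reflection coefficients. The paper's index k, running over −Q+1 … Q, is mapped onto a zero-based array by an offset.

import numpy as np

def leroux_gueguen(R, Q):
    """Leroux-Gueguen recursion: reflection coefficients from R[0..Q]."""
    off = Q - 1                               # eps index k is stored at k + off
    eps = np.zeros(2 * Q)
    for k in range(-Q + 1, Q + 1):            # init: eps^(0)[k] = R[|k|]
        eps[k + off] = R[abs(k)]
    ks = np.zeros(Q + 1)
    for l in range(1, Q + 1):
        ks[l] = eps[l + off] / eps[0 + off]   # k_l = eps^(l-1)[l] / eps^(l-1)[0]
        prev = eps.copy()
        for k in range(-Q + l + 1, Q + 1):    # epsilon update of step 5
            eps[k + off] = prev[k + off] - ks[l] * prev[l - k + off]
    return ks[1:]                             # RCs only; convertible to LPCs

On a WSS frame these RCs coincide with the k_l produced by the Levinson-Durbin sketch above, so either recursion can feed the same lattice predictor.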

The values of the epsilon parameters are used to calculate the LPC coefficients. The LGA technique performs better in a fixed-point environment because the intermediate variables have bounded values. The drawback of LGA is that it returns only RCs, which is not a major concern if the filter is in lattice form [10]; however, running LGA followed by a conversion of RCs to LPCs does not yield a substantial computational saving over LDA. For these reasons the LDA technique is more prevalent than the Normal Equation and LGA techniques. These prediction schemes, i.e. Normal Equation, LDA and LGA, are implemented in Kalman estimation, and the simulation results are presented in the following section.

7. Numerical Simulation Results and Analysis

7.1 Mass Spring Damper (MSD) System Model

The example employed for the evaluation of the above analysis is a mass-spring-damper (MSD) system, whose dynamics are described by the following equations:

ẋ(t) = A x(t) + B u(t) + L w(t)

z_k = η_k (C x_k + v_k)

The state vector x^T(t) = [x_1(t)  x_2(t)  ẋ_1(t)  ẋ_2(t)] consists of the displacements and velocities of the two bodies in the system.

A = [ 0            0             1           0
      0            0             0           1
      -k1/m1       k1/m1        -b1/m1       b1/m1
      k1/m2       -(k1+k2)/m2    b1/m2      -(b1+b2)/m2 ]

B^T = [ 0  0  1/m1  0 ]

C = [ 0  1  0  0 ],   L^T = [ 0  0  0  3 ]

The parameter values are m1 = m2 = k1 = 1, k2 = 0.15, b1 = b2 = 0.1, and the sampling time is T_s = 1 ms. The MSD plant disturbance and sensor noise are zero mean: E{w(t)} = 0, E{v(t)} = 0. Substituting the given parameter values, the above matrices become:

A = [ 0     0      1     0
      0     0      0     1
     -1     1     -0.1   0.1
      1    -1.15   0.1  -0.2 ]

and

B^T = [ 0  0  1  0 ]
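For reference, a sketch of how this model can be assembled and discretized for simulation, using scipy.signal.cont2discrete with a zero-order hold; the numerical noise covariances are illustrative assumptions, since the paper does not list Q_k and R_k explicitly.

import numpy as np
from scipy.signal import cont2discrete

m1 = m2 = k1 = 1.0
k2, b1, b2 = 0.15, 0.1, 0.1
Ts = 1e-3                                  # sampling time, 1 ms

A = np.array([[0.0,     0.0,          1.0,    0.0],
              [0.0,     0.0,          0.0,    1.0],
              [-k1/m1,  k1/m1,       -b1/m1,  b1/m1],
              [ k1/m2, -(k1+k2)/m2,   b1/m2, -(b1+b2)/m2]])
B = np.array([[0.0], [0.0], [1.0/m1], [0.0]])
C = np.array([[0.0, 1.0, 0.0, 0.0]])       # measure displacement of second body
L = np.array([[0.0], [0.0], [0.0], [3.0]])

# Zero-order-hold discretization: x_{k+1} = Ad x_k + Bd u_k
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, np.zeros((1, 1))), Ts)

# Illustrative noise covariances (assumed; not given numerically in the paper)
Qk = (L @ L.T) * Ts
Rk = np.array([[0.01]])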


In the next section, the proposed Compensated Closed-Loop KF (CCLKF) algorithms are applied to the MSD system and the results are presented. In closed-loop KF, the lost observation samples are predicted using the linear prediction schemes; hence the estimation error is reduced compared with the existing open-loop method.

7.2 Simulation Results

In the simulation we used a sampling period of T_s = 0.001 s. This section compares the simulation results obtained for OLKF and CCLKF under data loss. When output data is unavailable, OLKF does not perform the measurement update step, so the filtering reduces to the prediction step only. In CCLKF, by contrast, compensated data is obtained using the Normal Equation, LDA and LGA methods. The error is significantly reduced by the proposed techniques, as shown in Table 1. In CCLKF the missing observation samples are predicted and then used in the update stage of the Kalman filter, which reduces the error in the estimated signal. The computational time required for CCLKF exceeds that of OLKF because of the additional processing steps required to predict the lost samples, and it increases as the number of lost samples grows. The graph in Fig. 5 consists of three curves: the solid line, the dash-dotted line and the dashed line represent the actual signal, the signal predicted by the compensated CLKF, and the conventional OLKF signal, respectively. Data loss occurs from sample 2220 to sample 2650. Fig. 5 shows that the signal predicted by OLKF deviates from the original signal more than that of the CLKF, and the error of the proposed method has a smaller magnitude than the error signal of the conventional OLKF method. Note that Fig. 5 represents only one compensated CLKF technique, namely LGA; the results for the other CLKF techniques, i.e. Normal Equation and LDA, are given in detail in Table 1. Table 1 compares the computational time, estimation error and prediction gain. Open-loop KF requires less computational time because it skips the measurement update step during the loss period; however, the absolute error generated by OLKF is considerably larger than that of the other schemes. The LPC schemes (Normal Equation, Levinson-Durbin algorithm and Leroux-Gueguen algorithm) are computationally more expensive but outperform OLKF: the errors they generate are significantly lower than the error produced by OLKF.


Fig. 5 Comparison of OLKF with CCLKF. (Four panels over samples 0-4000: First, Second, Third and Fourth Estimated State; each panel plots the actual signal, the estimation by CLKF, and the estimation by OLKF.)

The results for estimation error, computational time and prediction gain are summarized in the following table for the second state of the system during the observation-loss period, i.e. samples 2220-2650. Since the prediction gain applies only to the linear prediction techniques, it is not applicable to the conventional open-loop method.

Table 1. Comparison of various parameters of different estimation techniques for the second state of the mass-spring-damper system

Technique                           Estimation Error   Computational Time (s)   Prediction Gain (dB)
Open-Loop KF                        1.2130             0.03721                  N/A
KF with Normal Equation             0.0621             0.1834                   14.2328
KF with Levinson-Durbin Algorithm   0.05327            0.0612                   17.5401
KF with Leroux-Gueguen Algorithm    0.04751            0.0702                   18.8125

From Table 1 it is clear that the estimation error of the proposed algorithms is considerably lower than that of OLKF, while the time taken by the KF integrated with the Normal Equation is the highest (0.1834 s); the difference becomes more prominent as the matrix size increases. The KF integrated with the LDA and LGA techniques requires less computational time than the Normal Equation because these techniques involve no matrix inversion in the calculation of the LPC values. The prediction gain of LGA is the highest among all the methods, so it is preferred the most.

8. Conclusion

In this paper the loss of observations in Kalman filtering is studied and new techniques for recovery of the lost observations are discussed. The linear prediction techniques implemented in state estimation outperform the existing methods in reducing the estimation error in case of loss of observations. The KF integrated with the Normal Equation method takes considerable time to compute the coefficients, so alternative techniques, the Levinson-Durbin Algorithm and the Leroux-Gueguen Algorithm, are presented. LDA and LGA have lower computational complexity than the Normal Equation method because they avoid matrix inversion in the computation of the LPCs. The results show that LGA is better than the Normal Equation and LDA techniques due to its low computational complexity as well as its bounded LPC values.

References

[1] Dochain, D. (2003). "State and parameter estimation in chemical and biochemical processes: a tutorial." Journal of Process Control 13(8): 801-818.

[2] Welch, G. and Bishop, G. (1995). An Introduction to the Kalman Filter.

[3] Yan, R., et al. (2010). "Combining Adaptive Filtering and IF Flows to Detect DDoS Attacks within a Router." KSII Transactions on Internet & Information Systems 4(3).

[4] Kieu-Xuan, T. and Koo, I. (2012). "Cooperative Spectrum Sensing using Kalman Filter based Adaptive Fuzzy System for Cognitive Radio Networks." KSII Transactions on Internet & Information Systems 6(1).

[5] Khan, N., et al. (2013). "Implementation of linear prediction techniques in state estimation." 10th International Bhurban Conference on Applied Sciences and Technology (IBCAST), IEEE.

[6] Zhao, N. and Sun, H. (2011). "Robust Power Control for Cognitive Radio in Spectrum Underlay Networks." KSII Transactions on Internet & Information Systems 5(7).

[7] Shi, Y. and Fang, H. (2010). "Kalman filter-based identification for systems with randomly missing measurements in a network environment." International Journal of Control 83(3): 538-551.

[8] Sinopoli, B., et al. (2004). "Kalman filtering with intermittent observations." IEEE Transactions on Automatic Control 49(9): 1453-1464.

[9] Fang, H., et al. (2010). "Genetic adaptive state estimation with missing input/output data." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 224(5): 611-617.

[10] Chu, W. C. (2004). Speech Coding Algorithms: Foundation and Evolution of Standardized Coders. John Wiley & Sons.

[11] Makhoul, J. (1975). "Linear prediction: A tutorial review." Proceedings of the IEEE 63(4): 561-580.

[12] Khan, F., et al. (2013). "On the optimal frame size of linear prediction techniques." International Conference on Circuits, Power and Computing Technologies (ICCPCT), IEEE.

[13] Micheli, M. (2001). Random Sampling of a Continuous-Time Stochastic Dynamical System: Analysis, State Estimation, and Applications.

[14] Khan, N. (2011). Linear Prediction Approaches for Compensation of Missing Measurements in Kalman Filtering. PhD thesis, University of Leicester.

[15] Marple Jr., S. L. (1982). "Fast algorithms for linear prediction and system identification filters with linear phase." IEEE Transactions on Acoustics, Speech and Signal Processing 30(6): 942-953.

[16] Le Roux, J. and Gueguen, C. (1977). "A fixed point computation of partial correlation coefficients." IEEE Transactions on Acoustics, Speech and Signal Processing 25(3): 257-259.

Faheem Khan received his B.Sc. degree in Electrical Engineering from the University of Engineering & Technology, Peshawar, Pakistan in 2010. He worked as a lecturer from September 2011 to August 2013 in the Department of Electrical Engineering, University of Engineering & Technology, Peshawar, Pakistan, and has published several conference papers. He is currently pursuing his Ph.D. degree in the Department of Electronics and Computer Engineering, Hanyang University, South Korea. His research areas include state estimation, wireless communication and UWB radar technologies.

Sung Ho Cho graduated from the Department of Electronics Engineering, Hanyang University, Seoul, Korea in 1982 and completed his Ph.D. degree in the Department of Electrical & Computer Engineering at the University of Utah, USA in 1989. He worked at the Electronics & Telecommunications Research Institute (ETRI), Korea, as a senior researcher for three years. Professor Cho is currently pursuing pragmatic research on design methodologies. His research areas include wireless technologies, digital signal processing, embedded systems, and networking protocol technologies. He has more than 200 publications.

Naeem Khan is working as an assistant professor in the Electrical Engineering Department, University of Engineering & Technology, Peshawar, Pakistan. He received his B.E. degree from the Department of Electrical and Electronics Engineering, University of Engineering & Technology, Peshawar, Pakistan in 2003 and completed his Ph.D. degree in control and communication systems at the University of Leicester, U.K. in 2012. He has published several journal and conference papers. His areas of interest are robust control, state estimation under intermittent observations, and OFDM channel estimation.