2D TARGET TRACKING USING KALMAN FILTER
2006-2007
PROJECT REPORT
SUBMITTED TO THE FACULTY OF ELECTRONICS AND COMMUNICATION ENGINEERING
NATIONAL INSTITUTE OF TECHNOLOGY, WARANGAL (A.P)
(DEEMED UNIVERSITY)
BACHELOR OF TECHNOLOGY IN
ELECTRONICS AND COMMUNICATION ENGINEERING
SUBMITTED BY
AMIT KUMAR KARNA (04403) DEVENDER BUDHWAR (04411)
SAHIL SANDHU (04455)
Under the Guidance of
Dr. T. Kishor Kumar
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
NATIONAL INSTITUTE OF TECHNOLOGY (DEEMED UNIVERSITY) WARANGAL-506004(A.P)
Acknowledgements
We would like to express our sincere thanks to our faculty, Dr. T. Kishor Kumar, Department of Electronics and Communication Engineering, National Institute of Technology, Warangal, for his constant encouragement and splendid and gracious guidance throughout our work. He has been a constant source of inspiration and helped us in each stage. We express our deep gratitude to him for such encouragement and kind cooperation.
Amit Kumar Karna (04403)
Devender Budhwar (04411)
Sahil Sandhu (04455)
CONTENTS
Abstract
Introduction
Kalman Filter
State-space Representation
Underlying Dynamic System Model
Code
Results
Conclusions
References
2D TARGET TRACKING USING KALMAN FILTER
Sahil Sandhu, B.Tech (E.C.E.), National Institute of Technology Warangal, [email protected]
Devender Budhwar, B.Tech (E.C.E.), National Institute of Technology Warangal, [email protected]
Amit Kumar Karna, B.Tech (E.C.E.), National Institute of Technology Warangal, [email protected]
Abstract
It is now quite common in recursive approaches to motion estimation to find applications of the Kalman filtering technique in both the time and frequency domains. In the block-based approach, however, very few applications of this technique are available to refine the estimation of motion vectors resulting from fast algorithms. This paper proposes an object motion estimation scheme which uses the Kalman filtering technique to improve the motion estimates resulting from the conventional three step algorithm.
1.0 Introduction
In the field of motion estimation for video coding many techniques have been applied. It is now quite common to see the Kalman filtering technique and some of its extensions used for the estimation of motion within image sequences. Particularly in the pixel-recursive approaches, which suit the Kalman formulation very well, one finds various ways of applying this estimation technique in both the time and frequency domains. From a very general perspective, we find use of the Kalman filter (KF), which implies linear state-space representations, and the extended Kalman filter (EKF), which uses the linearised expressions of non-linear state-space formulations. Moreover, the parallel extended Kalman filter (PEKF), which consists of a parallel bank of EKFs, is often encountered in practice. In the block-based motion-compensated prediction approaches, the most common procedure is the block-matching technique.
Given a macroblock in the current frame, the objective is to determine the block in a reference frame (one of the past frames for forward motion estimation or one of the future frames for backward motion estimation) in a certain search area that "best" matches the current macroblock according to a specific criterion. In most coding systems, the "best" match is found using the mean absolute error (MAE) criterion. There are several well known algorithms that perform the block matching motion estimation, among them being the full search algorithm (FSA), which determines the motion vector of a macroblock by computing the MAE at each location in the search area. This is the simplest method and it provides the best performance, but at a very high computational cost. To reduce these computational requirements, several heuristic search strategies have been developed, for example the two-dimensional logarithmic search, the parallel one-dimensional search, etc. [3-5]. These are often referred to as fast search algorithms. In fast algorithms the procedures applied for motion estimation are of significantly lower complexity, but yield a suboptimal solution in the sense that they may not avoid the convergence of the MAE cost function to a local minimum instead of the global one. Lately, some new fast strategies for motion estimation have been proposed. But very few applications of Kalman filtering are available for the estimation of motion vectors.
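The full search with the MAE criterion described above can be sketched as follows. This is an illustrative Python/NumPy sketch, not code from the report; the function name block_match_fsa, the frame sizes and the search range are our own choices.

```python
# Illustrative sketch of full-search block matching with the MAE criterion.
import numpy as np

def mae(a, b):
    """Mean absolute error between two equally sized blocks."""
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def block_match_fsa(ref, cur, y, x, N=16, p=7):
    """Full search: find the motion vector (dy, dx) minimising the MAE
    between the NxN block of `cur` at (y, x) and candidate blocks of
    `ref` within a +/-p search area ((2p+1)^2 locations)."""
    block = cur[y:y+N, x:x+N]
    best, best_err = (0, 0), np.inf
    H, W = ref.shape
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + N > H or xx + N > W:
                continue  # candidate block outside the reference frame
            err = mae(ref[yy:yy+N, xx:xx+N], block)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err

# Usage: a block shifted by (dy=2, dx=-3) between frames is recovered.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = np.zeros_like(ref)
cur[16:32, 16:32] = ref[18:34, 13:29]   # block moved by dy=+2, dx=-3
mv, err = block_match_fsa(ref, cur, 16, 16)
```

With N=16 and p=7 this evaluates the MAE at up to (2p+1)^2 = 225 locations per block, which is exactly the cost that the fast algorithms discussed above try to avoid.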
This paper proposes a motion estimation scheme that uses Kalman filtering to improve the motion vector estimates resulting from the conventional three step algorithm (TSA). Section 2 introduces the Kalman filter, and Section 3 the state-space representation for the motion vector together with the corresponding Kalman equations.
2.0 The Kalman Filter
The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. It is unusual in being purely a time domain filter; most filters (for example, a low-pass filter) are formulated in the frequency domain and then transformed back to the time domain for implementation. The state of the filter is represented by two variables:
x(k|k), the estimate of the state at time k;
P(k|k), the error covariance matrix (a measure of the estimated accuracy of the state estimate).
The Kalman filter has two distinct phases: Predict and Update. The predict phase uses the estimate from the previous timestep to produce an estimate of the current state. In the update phase, measurement information from the current timestep is used to refine this prediction to arrive at a new, (hopefully) more accurate estimate.
Predict
x(k|k-1) = F(k) x(k-1|k-1) + B(k) u(k)    (predicted state)
P(k|k-1) = F(k) P(k-1|k-1) F(k)' + Q(k)    (predicted estimate covariance)
Update
y(k) = z(k) - H(k) x(k|k-1)    (innovation or measurement residual)
S(k) = H(k) P(k|k-1) H(k)' + R(k)    (innovation (or residual) covariance)
K(k) = P(k|k-1) H(k)' S(k)^-1    (optimal Kalman gain)
x(k|k) = x(k|k-1) + K(k) y(k)    (updated state estimate)
P(k|k) = (I - K(k) H(k)) P(k|k-1)    (updated estimate covariance)
The formula for the updated estimate covariance above is only valid for the optimal Kalman gain. Usage of other gain values requires a more complex formula, found in the derivations section.
Invariants: If the model is accurate, and the values for x(0|0) and P(0|0) accurately reflect the distribution of the initial state values, then the following invariants are preserved. All estimates have mean error zero:
E[x(k) - x(k|k)] = E[x(k) - x(k|k-1)] = 0,    E[y(k)] = 0
where E[ξ] is the expected value of ξ, and the covariance matrices accurately reflect the covariance of the estimates:
P(k|k) = cov(x(k) - x(k|k)),    P(k|k-1) = cov(x(k) - x(k|k-1)),    S(k) = cov(y(k)).
The Kalman filter can be regarded as an adaptive low-pass infinite impulse response digital filter with cut-off frequency depending on the ratio between the process and measurement (or observation) noise, as well as the estimate covariance. Frequency response is, however, rarely of interest when designing state estimators such as the Kalman filter, whereas for digital filters frequency response is usually of primary concern. For the Kalman filter, the important goal is how accurate the filter is, and this is most often decided based on empirical Monte Carlo simulations, where "truth" (the true state) is known.
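The predict/update cycle can be illustrated numerically. The following is a minimal Python sketch of the standard equations, not code from the report; the 1-D constant-velocity model and the measurement values are assumed for illustration only.

```python
# Minimal sketch of the Kalman predict/update cycle (illustrative model).
import numpy as np

def kf_predict(x, P, F, Q, B=None, u=None):
    x = F @ x + (B @ u if B is not None else 0)  # predicted state
    P = F @ P @ F.T + Q                          # predicted covariance
    return x, P

def kf_update(x, P, z, H, R):
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # optimal Kalman gain
    x = x + K @ y                       # updated state estimate
    P = (np.eye(len(x)) - K @ H) @ P    # updated estimate covariance
    return x, P

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity transition
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x, P = np.zeros(2), 10.0 * np.eye(2)
for z in [1.1, 1.9, 3.2, 4.0]:          # noisy positions of a unit-velocity target
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
```

After four measurements the filter has pulled the position estimate close to the last measurement and inferred a velocity near 1, even though velocity is never observed directly.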
In the extended Kalman filter (EKF) the state transition and observation models need not be linear functions of the state but may instead be (differentiable) functions.
The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed.
At each timestep the Jacobian is evaluated with the current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the non-linear function around the current estimate. This results in the following extended Kalman filter equations:
Predict
x(k|k-1) = f(x(k-1|k-1), u(k))
P(k|k-1) = F(k) P(k-1|k-1) F(k)' + Q(k)
Update
y(k) = z(k) - h(x(k|k-1))
S(k) = H(k) P(k|k-1) H(k)' + R(k)
K(k) = P(k|k-1) H(k)' S(k)^-1
x(k|k) = x(k|k-1) + K(k) y(k)
P(k|k) = (I - K(k) H(k)) P(k|k-1)
where the state transition and observation matrices are defined to be the following Jacobians:
F(k) = ∂f/∂x evaluated at x(k-1|k-1), u(k)
H(k) = ∂h/∂x evaluated at x(k|k-1)
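One EKF step can be sketched as follows. This is our own illustration, not code from the report: the scalar non-linear functions f and h and all numeric values are assumed purely for demonstration of the linearisation.

```python
# Sketch of one EKF step: propagate through the non-linear functions,
# but propagate the covariance through their Jacobians.
import numpy as np

def f(x):            # non-linear state transition (assumed for illustration)
    return np.array([np.sin(x[0]) + x[0]])

def F_jac(x):        # Jacobian of f
    return np.array([[np.cos(x[0]) + 1.0]])

def h(x):            # non-linear measurement function (assumed)
    return np.array([x[0] ** 2])

def H_jac(x):        # Jacobian of h
    return np.array([[2.0 * x[0]]])

def ekf_step(x, P, z, Q, R):
    # Predict: state through f, covariance through the Jacobian F
    F = F_jac(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: linearise h around the predicted state
    H = H_jac(x_pred)
    y = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.5]), np.eye(1)
x, P = ekf_step(x, P, z=np.array([1.0]), Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
```

Note that the Jacobians are re-evaluated at every step, so the linearisation always happens around the most recent estimate.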
Relationship to recursive Bayesian estimation
The true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model. Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state:
p(x_k | x_0, ..., x_{k-1}) = p(x_k | x_{k-1})
Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state:
p(z_k | x_0, ..., x_k) = p(z_k | x_k)
Using these assumptions the probability distribution over all states of the HMM can be written simply as:
p(x_0, ..., x_k, z_1, ..., z_k) = p(x_0) Π_{i=1..k} p(z_i | x_i) p(x_i | x_{i-1})
However, when the Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current state conditioned on the measurements up to the current timestep. (This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.) This leads to the predict and update steps of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the product of the probability distribution associated with the transition from the (k-1)th timestep to the kth and the probability distribution associated with the previous state, with the true state at (k-1) integrated out:
p(x_k | Z_{k-1}) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | Z_{k-1}) dx_{k-1}
The measurement set up to time t is
Z_t = {z_1, ..., z_t}
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state:
p(x_k | Z_k) = p(z_k | x_k) p(x_k | Z_{k-1}) / p(z_k | Z_{k-1})
The denominator
p(z_k | Z_{k-1}) = ∫ p(z_k | x_k) p(x_k | Z_{k-1}) dx_k
is a normalization term. The remaining probability density functions are Gaussian:
p(x_k | x_{k-1}) = N(F_k x_{k-1}, Q_k)
p(z_k | x_k) = N(H_k x_k, R_k)
p(x_{k-1} | Z_{k-1}) = N(x(k-1|k-1), P(k-1|k-1))
Note that the PDF at the previous timestep is inductively assumed to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for x_k given the measurements Z_k is the Kalman filter estimate.
3.0 State-space representation
The scanning in a frame is from the top left to the bottom right. The motion vector of a macroblock can be predicted from that of its left spatial neighbour. The measured motion vectors
are obtained through a conventional three step procedure. In the same manner, the intraframe motion estimation process is modelled through an auto-regressive model which produces the state-space equations. From this macroblock-based representation, how do we define the 8x8-block based representation? Each 16x16-block yields a zig-zag sequence of four 8x8 blocks. This corresponds to a conventional pixel decimation for block matching in an 8x8-block. The motion vector of these sub-blocks is defined as V = (vx, vy), where vx and vy denote the horizontal and vertical components, respectively. These two components are assumed independent. The motion vector of an 8x8-block can be predicted (through KF) from the one of the previous 8x8-block according to the time index of the zig-zag order using the state equation:
V(k+1) = F(k) V(k) + W(k)
where W(k) is the state noise vector, i.e. the prediction error, with covariance matrix Q. The state noise components wx, wy are assumed independent and Gaussian distributed with zero mean and the same variance q. The matrix F(k) is the transition matrix which describes the dynamic behaviour of the motion vector from one 8x8-block to the next. It is observed that the measured motion vectors are actually obtained from the TSA run on a 16x16-block basis, which yields the zig-zag sequence of four measured motion vectors Y(k) with the same values on the 8x8-block basis. This means that the Kalman filter has four measurements instead of one to adjust the motion estimate V(k|k) when the assumption of smooth changes is not strictly valid. As a result, it is expected that we have a better motion estimate for the 8x8-block motion compensation procedure.
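The state equation V(k+1) = F(k)V(k) + W(k) with four measurements per macroblock can be sketched as follows. This is our own illustration, not the report's implementation: F(k) = I (smooth motion), the noise variances and the synthetic TSA measurements are assumed values.

```python
# Sketch of the motion-vector Kalman filter: 2-D state V = (vx, vy),
# identity transition (smooth motion), independent components.
import numpy as np

def kf_motion_step(V, P, z, q=0.1, r=0.5):
    # Predict with F = I: the motion vector is expected to persist
    V_pred = V
    P_pred = P + q * np.eye(2)
    # Update with a measured TSA motion vector z; components are
    # independent, so R = r*I and the equations decouple per component
    K = P_pred @ np.linalg.inv(P_pred + r * np.eye(2))
    V_new = V_pred + K @ (z - V_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return V_new, P_new

# Zig-zag sequence of four measured motion vectors for the 8x8 sub-blocks
# of one 16x16 macroblock (synthetic values around a true vector (2, -1))
V, P = np.zeros(2), 10.0 * np.eye(2)
for z in [np.array([2.1, -0.9]), np.array([1.9, -1.1]),
          np.array([2.0, -1.0]), np.array([2.2, -0.8])]:
    V, P = kf_motion_step(V, P, z)
```

The four noisy measurements pull the estimate towards the underlying motion vector, which is the mechanism the text describes for coping with violations of the smooth-change assumption.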
4.0 Underlying dynamic system model
Kalman filters are based on linear dynamical systems discretised in the time domain. They are modelled on a Markov chain built on linear operators perturbed by Gaussian noise. The state of the system is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls of the system if they are known. Then, another linear operator mixed with more noise generates the visible outputs from the hidden state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the key difference that the hidden state variables are continuous (as opposed to being discrete in the hidden Markov model). Additionally, the hidden Markov model can represent an arbitrary distribution for the next value of the state variables, in contrast to the Gaussian noise model that is used for the Kalman filter. There is a strong duality between the equations of the Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999). In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the framework of the Kalman filter. This means specifying the matrices Fk, Hk, Qk, Rk, and sometimes Bk for each time-step k as described below.
Fig. Model underlying the Kalman filter: circles are vectors, squares are matrices, and stars represent Gaussian noise with the associated covariance matrix at the lower right.
The Kalman filter model assumes the true state at time k is evolved from the state at (k-1) according to:
xk = Fk xk-1 + Bk uk + wk
where Fk is the state transition model which is applied to the previous state xk-1; Bk is the control-input model which is applied to the control vector uk; and wk is the process noise, which is assumed to be drawn from a zero mean multivariate normal distribution with covariance Qk.
At time k an observation (or measurement) zk of the true state xk is made according to:
zk = Hk xk + vk
where Hk is the observation model which maps the true state space into the observed space, and vk is the observation noise, which is assumed to be zero mean Gaussian white noise with covariance Rk.
The initial state, and the noise vectors at each step {x0, w1, ..., wk, v1 ... vk} are all assumed to be mutually independent.
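The model above can be simulated directly. The following Python sketch is our own illustration; the position/velocity matrices, the control input and the noise covariances are assumed values chosen only to show the two equations in action.

```python
# Illustrative simulation of the linear dynamical system:
#   x_k = F x_{k-1} + B u_k + w_k,  z_k = H x_k + v_k,
# with mutually independent Gaussian noise w_k ~ N(0, Q), v_k ~ N(0, R).
import numpy as np

rng = np.random.default_rng(42)
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # position/velocity transition
B = np.array([[0.5], [1.0]])             # control-input model (acceleration)
H = np.array([[1.0, 0.0]])               # observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # observation noise covariance

x = np.array([0.0, 1.0])                 # initial true state
states, measurements = [], []
for k in range(20):
    u = np.array([0.1])                  # constant commanded acceleration
    w = rng.multivariate_normal(np.zeros(2), Q)
    x = F @ x + B @ u + w                # hidden state evolution
    v = rng.multivariate_normal(np.zeros(1), R)
    z = H @ x + v                        # noisy visible output
    states.append(x)
    measurements.append(z)
```

The `states` sequence is exactly the hidden Markov chain the text describes, and `measurements` is all a Kalman filter would be allowed to see.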
Many real dynamical systems do not exactly fit this model; however, because the Kalman filter is designed to operate in the presence of noise, an approximate fit is often good enough for the filter to be very useful. Variations on the Kalman filter allow richer and more sophisticated models.
5.0 Code
<detect.m>
% detect.m -- detect the ball in each frame and plot its measured position
clear, clc
% compute the background image
Imzero = zeros(240,320,3);
for i = 1:5
    Im{i} = double(imread(['DATA/',int2str(i),'.jpg']));
    Imzero = Im{i} + Imzero;
end
Imback = Imzero/5;
[MR,MC,Dim] = size(Imback);
% loop over all images
for i = 1:60
    % load image
    Im = (imread(['DATA/',int2str(i),'.jpg']));
    imshow(Im)
    Imwork = double(Im);
    % extract ball
    [cc(i),cr(i),radius,flag] = extractball(Imwork,Imback,i);
    if flag == 0
        continue
    end
    hold on
    for c = -0.9*radius : radius/20 : 0.9*radius
        r = sqrt(radius^2 - c^2);
        plot(cc(i)+c, cr(i)+r, 'g.')
        plot(cc(i)+c, cr(i)-r, 'g.')
    end
    % slow motion!
    pause(0.02)
end
figure
plot(cr,'g*')
hold on
plot(cc,'r*')

<extractball.m>
% extracts the center (cc,cr) and radius of the largest blob
function [cc,cr,radius,flag] = extractball(Imwork,Imback,index)
cc = 0; cr = 0; radius = 0; flag = 0;
[MR,MC,Dim] = size(Imback);
% subtract background & select pixels with a big difference
fore = zeros(MR,MC);
fore = (abs(Imwork(:,:,1)-Imback(:,:,1)) > 10) ...
     | (abs(Imwork(:,:,2)-Imback(:,:,2)) > 10) ...
     | (abs(Imwork(:,:,3)-Imback(:,:,3)) > 10);
% morphology operation: erode to remove small noise
foremm = bwmorph(fore,'erode',2);   % 2 times
% select largest object
labeled = bwlabel(foremm,4);
stats = regionprops(labeled,'basic');
[N,W] = size(stats);
if N < 1
    return
end
% do bubble sort (large to small) on regions in case there are more than 1
id = zeros(N,1);
for i = 1:N
    id(i) = i;
end
for i = 1:N-1
    for j = i+1:N
        if stats(i).Area < stats(j).Area
            tmp = stats(i);
            stats(i) = stats(j);
            stats(j) = tmp;
            tmp = id(i); id(i) = id(j); id(j) = tmp;
        end
    end
end
% make sure that there is at least 1 big region
if stats(1).Area < 100
    return
end
selected = (labeled == id(1));
% get center of mass and radius of largest
centroid = stats(1).Centroid;
radius = sqrt(stats(1).Area/pi);
cc = centroid(1);
cr = centroid(2);
flag = 1;
return

<kalman.m>
clear, clc
% compute the background image
Imzero = zeros(240,320,3);
for i = 1:5
    Im{i} = double(imread(['DATA/',int2str(i),'.jpg']));
    Imzero = Im{i} + Imzero;
end
Imback = Imzero/5;
[MR,MC,Dim] = size(Imback);
% Kalman filter initialization
R = [[0.2845,0.0045]',[0.0045,0.0455]'];
H = [[1,0]',[0,1]',[0,0]',[0,0]'];
Q = 0.01*eye(4);
P = 100*eye(4);
dt = 1;
A = [[1,0,0,0]',[0,1,0,0]',[dt,0,1,0]',[0,dt,0,1]'];
g = 6;   % pixels^2/time step
Bu = [0,0,0,g]';
kfinit = 0;
x = zeros(100,4);
% loop over all images
for i = 1:60
    % load image
    Im = (imread(['DATA/',int2str(i),'.jpg']));
    imshow(Im)
    Imwork = double(Im);
    % extract ball
    [cc(i),cr(i),radius,flag] = extractball(Imwork,Imback,i);
    if flag == 0
        continue
    end
    hold on
    for c = -1*radius : radius/20 : 1*radius
        r = sqrt(radius^2 - c^2);
        plot(cc(i)+c, cr(i)+r, 'g.')
        plot(cc(i)+c, cr(i)-r, 'g.')
    end
    % Kalman update
    if kfinit == 0
        xp = [MC/2,MR/2,0,0]';
    else
        xp = A*x(i-1,:)' + Bu;
    end
    kfinit = 1;
    PP = A*P*A' + Q;
    K = PP*H'*inv(H*PP*H' + R);
    x(i,:) = (xp + K*([cc(i),cr(i)]' - H*xp))';
    P = (eye(4) - K*H)*PP;
    hold on
    for c = -1*radius : radius/20 : 1*radius
        r = sqrt(radius^2 - c^2);
        plot(x(i,1)+c, x(i,2)+r, 'r.')
        plot(x(i,1)+c, x(i,2)-r, 'r.')
    end
    pause(0.3)
end
% show positions
figure
plot(cc,'r*')
hold on
plot(cr,'g*')
% estimate image noise (R) from stationary ball
posn = [cc(55:60)',cr(55:60)'];
mp = mean(posn);
diffp = posn - ones(6,1)*mp;
Rnew = (diffp'*diffp)/5;
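The tracking loop of kalman.m can be sketched in NumPy as follows. This is our own translation of the same equations run on synthetic ball centres (the DATA/*.jpg frames and the extractball detector are not reproduced here); the synthetic trajectory is an assumed stand-in for the measured positions.

```python
# NumPy sketch of the kalman.m tracker: state [cx, cy, vx, vy],
# constant-velocity model plus a gravity-like term, as in the MATLAB code.
import numpy as np

dt = 1.0
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Bu = np.array([0.0, 0.0, 0.0, 6.0])      # gravity term (g = 6), as in kalman.m
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)
R = np.array([[0.2845, 0.0045],
              [0.0045, 0.0455]])
P = 100.0 * np.eye(4)

# Synthetic ball centres following the same dynamics (noise-free for clarity)
true = np.array([160.0, 20.0, 2.0, 0.0])
x = None
for k in range(30):
    z = true[:2].copy()                  # "measured" centre of the ball
    if x is None:
        xp = np.array([160.0, 120.0, 0.0, 0.0])   # MC/2, MR/2, 0, 0
    else:
        xp = A @ x + Bu                  # predicted state
    PP = A @ P @ A.T + Q
    K = PP @ H.T @ np.linalg.inv(H @ PP @ H.T + R)
    x = xp + K @ (z - H @ xp)            # updated state estimate
    P = (np.eye(4) - K @ H) @ PP
    true = A @ true + Bu                 # advance the true ball
```

Despite the deliberately wrong initial guess at the image centre, the estimated position locks onto the measured trajectory within a few frames, which is the behaviour the red and green circles visualise in the MATLAB scripts.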
6.0 Results
For comparison purposes, the Full Search Algorithm (FSA), the Three Step Algorithm (TSA), the 16x16-block based Kalman Filter (16x16-KF) presented in [2] and the proposed 8x8-block based Kalman Filter (8x8-KF) are run for different classes of video sequences. We use 50 frames of the following sequences: 'Alkistis', 'Foreman', 'Carphone' and a sub-sampled sequence of 'News'.
It is observed that all the above listed techniques can be qualified as sub-optimal techniques. Even the full search algorithm does not give the expected results regarding the real motion. Hence, in any case, the results are very close in terms of average PSNR (see Table 1).
The algorithm using the Kalman filtering on a 16x16-block basis provides a greater average peak signal to noise ratio than the one resulting from the conventional three step algorithm. As expected, the proposed 8x8-block based approach results in an even greater PSNR, better than both the TSA and the 16x16-KF. For the particular state-space representation used, when the real motion corroborates the assumptions made, i.e. only translational motion with very smooth changes, the 8x8-block based motion estimation using the Kalman filter is even better than the one resulting from the full search algorithm, as one can see from the average PSNR values and the curves in Figure 1.
The computational complexity of each of the above stated algorithms is given in Table 2. The complexity is evaluated in terms of the number of operations per block (NOPB). The formulation of the complexity uses N=16 and p=7, where N defines the size of an NxN-block and p the maximum displacement within the search area. The full search requires (2p+1)^2 search locations and the three step algorithm 25.
The Kalman filter is usually of high complexity. But for our particular application, which uses an intentionally simple state-space model for the motion, it is observed from Table 2 that the Kalman filter implementation for refining the motion estimates resulting from the TSA does not significantly increase the computational complexity. Indeed, due to the basic formulation of the motion, the Kalman filter equations can be largely simplified and thus reduced to their scalar form. Subsequently, any expensive matrix calculation is avoided.
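The scalar reduction mentioned above can be made concrete. In the sketch below (our own illustration, with assumed noise values), F = 1 and H = 1 per independent motion component, so every Kalman equation collapses to scalar arithmetic with no matrix inversion.

```python
# Scalar Kalman filter: the per-component form the text alludes to,
# obtained when F = 1, H = 1 and the components are independent.
def scalar_kf_step(x, p, z, q, r):
    # Predict (F = 1: the motion component is expected to persist)
    xp = x
    pp = p + q
    # Update (H = 1): everything is plain scalar arithmetic
    k = pp / (pp + r)          # scalar Kalman gain
    x_new = xp + k * (z - xp)  # updated estimate
    p_new = (1.0 - k) * pp     # updated variance
    return x_new, p_new

# Usage: four noisy measurements of a motion component near 2.0
x, p = 0.0, 100.0
for z in [2.2, 1.8, 2.1, 1.9]:
    x, p = scalar_kf_step(x, p, z, q=0.01, r=0.5)
```

Each step costs a handful of additions, multiplications and one division per component, which is why the refinement adds so little to the NOPB counts in Table 2.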
7.0 Conclusion
The above presented results are encouraging, in the sense that with the appropriate state model and a priori assumptions close to the real motion vector behaviour, we are able through Kalman filtering to obtain a greater PSNR than the other techniques for any frame of the sequence. It is evident that a Kalman filter using such a simple formulation as in this paper is more suitable for class A image sequences. At this point, there are a few questions / alternatives that may be of interest:
- Shall we consider sub-pixel accuracy?
- Shall we leave the motion components independency assumption and consider the real correlation that exists between the displacement in the x- and y-direction?
- Shall we envisage a more elaborate state-space representation for the motion that will imply the implementation of extended Kalman filters and parallel Kalman filters?
Any of the above considerations is feasible. Regarding the usefulness of implementing such approaches, this will be a matter of trade-off among the resulting motion estimates, the computational complexity and the real visual quality of the images.
References
[1] H. G. Musmann, P. Pirsch and H.-J. Grallert, ‘Advances in picture coding’, Proc. IEEE, Vol. 73, No. 4, pp. 523-548, 1985.
[2] I. Aksu, F. Yildiz and J. B. Burl, 'A comparison of the performance of motion estimators operating on low signal to noise ratio images', Proc. of the 34th Midwest Symposium on Circuits and Systems, Vol. 2, pp. 1124-1128, NY 1992.
[3] A. Murat Tekalp, ‘Digital video processing’, Prentice-Hall, 1995.
[4] Internet resources