
Learning to Track Motion

Maitreyi Nanjanath

Amit Bose

CSci 8980 Course Project

April 25, 2006

Background

• A vision sensor can provide a great deal of information in a short sequence of images

• Useful for determining camera position and movement

• Available local trackers are:
  – Fast
  – Error prone
  – Globally inconsistent

Motivation

• Global consistency as a cue for error correction

• Is a master algorithm possible, over and above the local trackers, that makes corrections?

• Assume the local tracking algorithm is a black box
  – Input: sequence of images
  – Output: image co-ordinates of successive feature positions

Problem Statement

• Given: inaccurate estimates from a set of local trackers

• Objective: devise a master algorithm for global tracking that
  – is based only on the erroneous estimates of the local trackers
  – can make relatively accurate estimates of feature positions

Our Approach

• Hypothesis: it is possible to develop a learning algorithm that performs corrective tracking

• The problem naturally fits the multi-dimensional regression model
  – We have seen the generalized additive (GA) algorithm of Kivinen and Warmuth
  – We have also seen a multitude of loss functions

Model

• N local trackers

• Each tracker generates 6 values
  – 2 spatial co-ordinates
  – 2 velocity components
  – 2 acceleration components

• Data point (x, y)
  – x: (6N+1)-vector of tracker-given information
  – y: 2N-vector of true spatial co-ordinates
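Concretely, one data point might be assembled as below. Note the extra "+1" entry in x is assumed here to be a constant bias term; the slides do not say what it is.

```python
import numpy as np

def make_datapoint(tracker_outputs, true_positions):
    """Assemble one (x, y) pair for N trackers.

    tracker_outputs: (N, 6) array of position, velocity, and acceleration
                     components (2 each) reported by the trackers.
    true_positions:  (N, 2) array of true spatial co-ordinates.
    The trailing 1.0 in x is an assumed bias entry, accounting for the
    "+1" in the (6N+1)-vector of the slides.
    """
    x = np.concatenate([tracker_outputs.ravel(), [1.0]])  # (6N+1,)-vector
    y = true_positions.ravel()                            # 2N-vector
    return x, y
```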

Generalized Additive Algorithm

1. Initialize the parameter matrix as Θ1 = Θ.

2. Repeat for t = 1,…,ℓ

a) Get the input xt

b) Compute the weight matrix Ωt = Ψ(Θt)

c) Compute the linear activation ât = Ωt xt

d) Output the prediction ŷt = φ(ât).

3. For j = 1,…,k, update the jth row of the parameter matrix by
   θt+1,j = θt,j – η(ŷt,j – yt,j) xt
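The loop above can be sketched in NumPy. Taking both Ψ and φ as the identity is an assumption for this sketch (the slides pair other transfer functions with their matching losses); with the identity, the update reduces to stochastic gradient descent on squared loss.

```python
import numpy as np

def ga_train(X, Y, phi, psi, eta=0.01, epochs=1):
    """Sketch of the generalized additive (GA) update from the slides.

    X: (T, d) inputs, Y: (T, k) targets.
    psi maps the parameter matrix to the weight matrix, phi is the
    transfer function; identity for both recovers plain gradient descent.
    """
    d, k = X.shape[1], Y.shape[1]
    theta = np.zeros((k, d))           # 1. initialize Theta_1
    for _ in range(epochs):
        for x, y in zip(X, Y):         # 2. repeat for t = 1,...,l
            omega = psi(theta)         # a-b) weight matrix Omega_t = Psi(Theta_t)
            a = omega @ x              # c)   linear activation a_t = Omega_t x_t
            y_hat = phi(a)             # d)   prediction y_hat_t = phi(a_t)
            # 3. row-wise update: theta_j <- theta_j - eta*(y_hat_j - y_j)*x_t
            theta -= eta * np.outer(y_hat - y, x)
    return theta
```

On noiseless linear data this recovers the true weight matrix, which is a useful sanity check before switching in other transfer/loss pairs.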

Experiment Setup

• A set of 20 points is generated
  – uniformly distributed over the unit cube

• The set of points is moved along a trajectory

• The path is projected onto 2 dimensions

• 6 values are generated for each point
  – co-ordinate, velocity, and acceleration components
  – this forms the ground truth

• The ground truth is perturbed by adding noise

• The ground truth and noisy data are input to the GA algorithm
  – it learns to produce corrected spatial co-ordinates
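A minimal sketch of this setup follows. The circular translation of the point set and the orthographic projection are assumptions; the slides do not specify the trajectory or the projection used.

```python
import numpy as np

def make_dataset(n_points=20, n_frames=800, noise_var=0.0025, seed=0):
    """Hypothetical reconstruction of the experiment setup."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(n_points, 3))            # points in the unit cube
    t = np.linspace(0.0, 2 * np.pi, n_frames)
    # assumed trajectory: a circular translation of the whole point set
    offsets = 0.25 * np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
    frames = pts[None, :, :] + offsets[:, None, :]   # (frames, points, 3)
    proj = frames[:, :, :2]                          # orthographic projection to 2-D
    vel = np.gradient(proj, axis=0)                  # finite-difference velocity
    acc = np.gradient(vel, axis=0)                   # finite-difference acceleration
    truth = np.concatenate([proj, vel, acc], axis=2) # 6 values per point
    noisy = truth + rng.normal(0.0, np.sqrt(noise_var), truth.shape)
    return truth, noisy
```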

Noise Added

• Gaussian noise, with mean 0
  – variance 0.0025
  – variance 0.025

• Chi-squared noise, with mean 0.1

• Arbitrary noise
  – certain data points suddenly become zero, which corresponds to losing a feature
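The three perturbations might be implemented as follows. The chi-squared degrees of freedom and the fraction of dropped points are assumptions, since the slides give only the means and variances.

```python
import numpy as np

def add_noise(data, kind="gaussian", rng=None):
    """Sketch of the three perturbations described in the slides."""
    rng = rng or np.random.default_rng(0)
    if kind == "gaussian":                        # mean 0, variance 0.0025
        return data + rng.normal(0.0, 0.05, data.shape)
    if kind == "chi2":                            # chi-squared, scaled to mean 0.1
        return data + 0.1 * rng.chisquare(df=1, size=data.shape)
    if kind == "dropout":                         # arbitrary: features suddenly lost
        mask = rng.random(data.shape[:1]) < 0.05  # ~5% of rows zeroed (assumed rate)
        out = data.copy()
        out[mask] = 0.0
        return out
    raise ValueError(kind)
```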

Transfer and Loss Functions

  Matching loss function   Transfer function Φ(x)             Inverse link (∇Φ)⁻¹(x)
  "Exponential" loss       Φ(x) = eˣ                          log x
  Itakura-Saito distance   Φ(x) = –log x                      –1/x
  Logistic loss            Φ(x) = x log x + (1–x) log(1–x)    eˣ / (1 + eˣ)
  I-Divergence             Φ(x) = x log x                     e^(x–1)
  Squared loss             Φ(x) = x²/2                        x
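The table rows can be checked mechanically: each listed inverse link should undo the gradient of its Φ. A small sketch, using a numerical gradient:

```python
import math

# The five transfer functions Phi and their inverse links from the table.
transfers = {
    "exponential":   (lambda x: math.exp(x),          lambda x: math.log(x)),
    "itakura-saito": (lambda x: -math.log(x),         lambda x: -1.0 / x),
    "logistic":      (lambda x: x * math.log(x) + (1 - x) * math.log(1 - x),
                      lambda x: math.exp(x) / (1 + math.exp(x))),
    "i-divergence":  (lambda x: x * math.log(x),      lambda x: math.exp(x - 1)),
    "squared":       (lambda x: x * x / 2,            lambda x: x),
}

def grad_phi(phi, x, h=1e-6):
    """Central-difference gradient of Phi, to verify the inverse links."""
    return (phi(x + h) - phi(x - h)) / (2 * h)
```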

Training data

• 800 training data points
  – corresponding to a set of image points moving randomly

• Data scaled and shifted
  – to fit the domain constraints of the loss functions

• Training performed for many epochs: 1, 100, 300, 500
  – data shuffled randomly in every epoch

• Learning rate kept fixed within a run

Results

• Test data: 400 data points

• Results with the final weight matrix
  – generated for both training and test data

Results: Gaussian Noise and Squared Loss

Results: Gaussian Noise and I-Divergence

Results: Gaussian Noise and Logistic Loss

Results: Gaussian Noise and Itakura-Saito Distance

Results: Gaussian Noise and “Exponential” Loss

Results: Other Data Sets (Chi-square and arbitrary)

Issues and Challenges

• A high learning rate made the weights oscillate

• Data preparation was a major challenge:
  – input had to be tuned for the loss functions of specialized domains
  – appropriate noise additives were needed
  – choice of feature dimensions: acceleration input was added to help track rotation

• We were unsure whether the simple regression model was sufficient

Extensions and Future Work

• Extensions:
  – varying the learning rate
  – using a non-linear high-dimensional mapping and kernels

• Future work:
  – regularization
  – using a structured regression model
  – testing on real images

Conclusions

• Over 75% reduction in error was seen in most cases

• I-divergence and squared loss performed comparably

• Itakura-Saito distance was not a good choice for this domain

• Better tuning of parameters may improve results

Q & A
