TRANSCRIPT
Value of Information
1st Year Review, UCLA 2012
ARO MURI on Value-centered Information Theory for Adaptive Learning, Inference, Tracking, and Exploitation
Information Value in Registration and Sensor Management
Doug Cochran, Arizona State University
ARO/OSD MURI Review, UCLA
28 October 2012
Joint work with Steve Howard, Utku Ilkturk, Bill Moran, and Rodrigo Platte
Summary of 2012 Research Thrusts
1. Gauge-invariant estimation in networks
In parameter estimation with a sensor network, which links contribute most?
2. Sensor Management via Riemannian geometry
The metric structure induced on a parameter manifold by the Fisher information in estimation problems provides an approach to managing sensor configurations
3. Measurement selection for observability (started in collaboration with ARL)
What are the considerations when deciding which linear measurement map to choose from a library at each stage of an iterative observation problem?
[Figure: network graph with eight numbered nodes]
Gauge-invariant Estimation in Networks: Tenets
• The purpose of sensor networks is to sense; i.e., to enable detection, estimation, classification, and tracking
• The value of network infrastructure is to enable sharing and fusion of data collected at different sensors
• To exploit this, data at the nodes must be registered
• Intrinsic data; e.g., clocks, platform orientation
• Extrinsic data: collected by sensors
• Can we quantify the value of adding particular links in terms that are meaningful to the sensing mission?
• Network graph: A directed graph G with vertex set V(G) and edge set E(G)
• Vertex labels: An element of a Lie group 𝒢 is associated with each node of G
• Edge labels: An element of 𝒢 on each edge e ∈ E(G), representing a noisy measurement of the difference between the values on the target vertex t(e) and the source vertex s(e)
• Goal: Estimate the connection, i.e., the true relative offsets between the vertex values
• The state of the network is a 𝒢-valued function x on V(G), but it is never directly observed
• If the network were aligned, this function would be constant (a flat connection)
• What is observed is the connection w ∈ 𝒢^|E(G)| relative to the chosen gauge
• In the absence of noise, for an aligned network this is the identity connection
• If the network is not aligned, a gauge transformation can be found that takes w to the identity
Gauge-invariant Estimation in Networks: Registration on a Graph
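The registration problem can be made concrete with a minimal sketch (illustrative, not from the talk) for the simplest case, the group (ℝ, +): scalar offsets at the vertices, noisy pairwise differences on the edges, and a least-squares ML estimate with the gauge fixed by pinning node 0 to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small directed graph G: edges as (source, target) pairs
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
x_true = rng.normal(size=n)      # unobserved vertex values (the network state)
sigma = 0.05

# Edge labels: noisy measurements of x[t(e)] - x[s(e)] (the observed connection w)
w = np.array([x_true[t] - x_true[s] + sigma * rng.normal() for s, t in edges])

# Incidence matrix B: one row per edge, +1 at the target, -1 at the source
B = np.zeros((len(edges), n))
for i, (s, t) in enumerate(edges):
    B[i, t], B[i, s] = 1.0, -1.0

# Gauge fixing: estimate offsets relative to node 0 (drop its column)
x_rel, *_ = np.linalg.lstsq(B[:, 1:], w, rcond=None)
x_hat = np.concatenate([[0.0], x_rel])

# The ML estimate recovers the truth up to a common offset (a gauge transformation)
print(np.round(x_hat - (x_true - x_true[0]), 3))
```

The estimate is only meaningful modulo the gauge; pinning one node is one way to pick a representative.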
• If the group 𝒢 is the real line and the noise on the edges of G is zero-mean Gaussian with covariance matrix R = σ²I:
• The Fisher information is F = L/σ², where L is any cofactor of the Laplacian of G
• det F = t(G)/σ^{2(|V(G)|−1)}, where t(G) denotes the number of spanning trees in G
• The ML estimator of the connection x modulo any chosen gauge is unbiased, with covariance F⁻¹ of determinant 1/det F
• Additional results for large classes of Lie groups 𝒢, including compact, non-compact, abelian, and non-abelian cases
• More precise general formulation of the notion of gauge-invariant estimation on graphs and properties of the estimators (Allerton 2012)
Gauge-invariant Estimation in Networks: Representative Results
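The determinant identity det F = t(G)/σ^{2(|V(G)|−1)} can be checked numerically; the sketch below (an arbitrary 4-vertex example, not from the talk) obtains both F and the spanning-tree count t(G) from a cofactor of the graph Laplacian, per the Matrix-Tree theorem.

```python
import numpy as np

sigma = 0.5

# Adjacency matrix of an undirected 4-vertex graph (K4 minus one edge)
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]])
Lap = np.diag(A.sum(axis=1)) - A
L = Lap[1:, 1:]                 # a cofactor of the Laplacian (delete row/col 0)

t_G = round(np.linalg.det(L))   # Matrix-Tree theorem: number of spanning trees
F = L / sigma**2                # Fisher information in the real-line case

n = A.shape[0]
print(t_G, np.isclose(np.linalg.det(F), t_G / sigma**(2 * (n - 1))))  # → 8 True
```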
Sensor Management via Riemannian Geometry
• Mutual information and divergences have successful histories as sensor-management surrogates for actual cost functions (i.e., as representations of VoI)
• Problem: Given what is known, what sensor trajectory optimizes information gathering over the next T seconds?
• Fact: The set of all Riemannian metrics on a smooth manifold M is a (weak) infinite-dimensional Riemannian manifold M(M)
• Observation 1: The Riemannian metrics on M corresponding to Fisher information constitute a submanifold of M(M), and
• Observation 2: For a particular problem of estimating a parameter from sensor data, each choice of sensor corresponds to a Riemannian metric on M lying in this submanifold of M(M)
Sensing action S → log-likelihood ℓ_S on M → Fisher information F_S on M → Riemannian metric on M → point in M(M)
Sensor Management via Riemannian Geometry: Estimation-theoretic Preliminaries
• Consider a family of conditional densities p(x|θ) for a random variable x on X, parameterized by θ in a smooth d-dimensional manifold M
• For given x, p(x|θ) defines the likelihood function on M
• The log-likelihood ℓ_x : M → ℝ is defined by ℓ_x(θ) = log p(x|θ)
• The optimal test for θ versus θ′ given data x is a threshold test on
  \log \frac{p(x \mid \theta)}{p(x \mid \theta')}
• The Kullback-Leibler (KL) divergence
  D(\theta \,\|\, \theta') = \int_X p(x \mid \theta) \log \frac{p(x \mid \theta)}{p(x \mid \theta')} \, dx
  is a natural measure of discrimination on M
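As a sanity check (not part of the talk), the KL integral D(θ‖θ′) = ∫ p(x|θ) log[p(x|θ)/p(x|θ′)] dx can be evaluated numerically for two 1-D Gaussians and compared against the known closed form.

```python
import numpy as np

m1, s1, m2, s2 = 0.0, 1.0, 1.0, 2.0   # two Gaussian parameter points

def pdf(x, m, s):
    return np.exp(-(x - m)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

# Riemann-sum evaluation of D(theta || theta') = ∫ p log(p/q) dx
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
p, q = pdf(x, m1, s1), pdf(x, m2, s2)
D_num = np.sum(p * np.log(p / q)) * dx

# Closed form for Gaussians: log(s2/s1) + (s1^2 + (m1-m2)^2)/(2 s2^2) - 1/2
D_closed = np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5
print(round(D_num, 4), round(D_closed, 4))   # → 0.4431 0.4431
```

Note the asymmetry: swapping the two parameter points changes the value, which is why the divergence itself is not a metric.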
Sensor Management via Riemannian Geometry: Metric Geometry of Sensor Selection
• A Riemannian metric on a smooth manifold is a (positive-definite) inner product on each tangent space that varies smoothly from point to point
• Although the KL divergence is not symmetric, it induces a Riemannian metric on M
• This Fisher information metric is defined by
  F = \mathbb{E}[\, d\ell \, d\ell^{T} \,]
• The corresponding volume form
  \mathrm{vol}_F = \sqrt{\det F} \; d\theta^1 \wedge \cdots \wedge d\theta^d
  is the Jeffreys prior on M
• Under suitable assumptions, a metric on M(M) is given by
  G_g(h, k) = \int_M \mathrm{Tr}(g^{-1} h \, g^{-1} k) \, dP
  where dP = vol_F or, more generally, a probability measure on M
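For a concrete instance (the 1-D Gaussian family with θ = (μ, σ), an assumed example rather than one from the talk), the Fisher metric F = E[dℓ dℓᵀ] can be estimated by Monte Carlo and compared with the closed form diag(1/σ², 2/σ²); the Jeffreys density is then √(det F) = √2/σ².

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.5, 2.0
x = mu + sigma * rng.normal(size=2_000_000)

# Score: gradient of the log-likelihood l_x(theta) = log p(x | mu, sigma)
d_mu = (x - mu) / sigma**2
d_sigma = (x - mu)**2 / sigma**3 - 1 / sigma

# Monte Carlo estimate of F = E[dl dl^T]
F_mc = np.array([[np.mean(d_mu * d_mu),    np.mean(d_mu * d_sigma)],
                 [np.mean(d_sigma * d_mu), np.mean(d_sigma * d_sigma)]])
F_exact = np.diag([1 / sigma**2, 2 / sigma**2])

print(np.round(F_mc, 2))                  # close to [[0.25, 0], [0, 0.5]]
print(np.sqrt(np.linalg.det(F_exact)))    # Jeffreys density sqrt(2)/sigma^2 ≈ 0.354
```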
[Diagram: the map g from the sensor manifold S into the manifold M(M) of Riemannian metrics on M]
Sensor Management via Riemannian Geometry: Manifold of Sensor Configurations
• Suppose that the sensor configuration is parameterized by a smooth manifold S
• A configuration s ∈ S gives rise to a particular Riemannian metric g_s on M
• The mapping g taking s to g_s will be assumed to be smooth and one-to-one (e.g., an immersion)
• Although M(M) is infinite-dimensional, the trajectory planning takes place in a finite-dimensional submanifold that inherits its metric structure from M(M)
• Geometrically, optimal navigation in S is via geodesics
• The geometry here is defined directly in terms of Fisher information
Sensor Management via Riemannian Geometry: Geodesics
• The geodesic structure of M(M) has been studied outside the context of information geometry
• The "energy integral" of a smooth curve γ : [0,1] → M(M) is
  E(\gamma) = \frac{1}{2} \int_0^1 \int_M \mathrm{Tr}\big( \gamma^{-1}(t) \, \dot{\gamma}(t) \, \gamma^{-1}(t) \, \dot{\gamma}(t) \big) \, dP \, dt
• Geodesics in M(M) are extremals of E(γ); they satisfy
  \gamma(t) = \gamma(0) \exp\big( t \, \gamma^{-1}(0) \, \dot{\gamma}(0) \big), \qquad \det \gamma(t) = \det \gamma(0) \, \exp\big\{ t \, \mathrm{Tr}\big( \gamma^{-1}(0) \, \dot{\gamma}(0) \big) \big\}
• With γ restricted to S, E(γ) becomes an integral I(γ) on S defined with respect to pullbacks of the quantities in the energy integral on M(M)
• Geodesics in S are extremals of I(γ); they satisfy the geodesic equation \ddot{\gamma} + \Gamma(\dot{\gamma}, \dot{\gamma}) = 0, where Γ denotes the Christoffel symbol of the Levi-Civita connection on M(M)
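Assuming geodesics of the form γ(t) = γ(0) exp(t γ⁻¹(0) γ̇(0)), the determinant identity det γ(t) = det γ(0) exp{t Tr(γ⁻¹(0) γ̇(0))} follows from Jacobi's identity det(exp M) = exp(Tr M); a small numerical check with an arbitrary 2×2 metric value and symmetric velocity (both illustrative choices):

```python
import numpy as np

g0 = np.array([[2.0, 0.3], [0.3, 1.0]])       # gamma(0): an SPD metric value
gdot0 = np.array([[0.5, 0.1], [0.1, -0.2]])   # gamma_dot(0): symmetric velocity
A = np.linalg.solve(g0, gdot0)                # gamma(0)^{-1} gamma_dot(0)

def expm(M):
    # Matrix exponential via eigendecomposition (M is diagonalizable here)
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real

t = 0.7
g_t = g0 @ expm(t * A)
lhs = np.linalg.det(g_t)
rhs = np.linalg.det(g0) * np.exp(t * np.trace(A))
print(np.isclose(lhs, rhs))   # → True
```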
Sensor Management via Riemannian Geometry: Computational Example
• Mobile sensor platforms with bearings-only sensors seek to localize a stationary emitter
• The parameter manifold M is ℝ², the position of the emitter in the plane
• Noise is independent von Mises: mean zero, concentration parameter κ
• To simplify computation, sensors are constrained to remain at right angles with respect to the emitter
• The Fisher information is
  F = \frac{I_1(\kappa)}{I_0(\kappa)} \sum_j \frac{1}{R_j^4} \begin{pmatrix} \tilde{y}_j \\ -\tilde{x}_j \end{pmatrix} \begin{pmatrix} \tilde{y}_j & -\tilde{x}_j \end{pmatrix}
  where (\tilde{x}_j, \tilde{y}_j) is the emitter position relative to sensor j, R_j^2 = \tilde{x}_j^2 + \tilde{y}_j^2, and I_0, I_1 are modified Bessel functions
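A sketch of the geometry behind the right-angle constraint (sensor placements are illustrative; the Bessel-function ratio I₁(κ)/I₀(κ) is a positive constant and is dropped, since it does not affect the comparison): with a Fisher information of the form F ∝ Σⱼ Rⱼ⁻⁴ uⱼuⱼᵀ, uⱼ = (ỹⱼ, −x̃ⱼ), and two bearings-only sensors at equal range from the emitter, det F is maximized when the sensor bearings are perpendicular.

```python
import numpy as np

def fisher(sensors):
    # F ∝ sum_j R_j^-4 u_j u_j^T with u_j = (y_j, -x_j), emitter at the origin
    F = np.zeros((2, 2))
    for (sx, sy) in sensors:
        R2 = sx**2 + sy**2
        u = np.array([sy, -sx])
        F += np.outer(u, u) / R2**2
    return F

R = 5.0
def det_at(angle):
    # Two sensors at range R, separated by `angle` as seen from the emitter
    s = [(R, 0.0), (R * np.cos(angle), R * np.sin(angle))]
    return np.linalg.det(fisher(s))

angles = np.radians([30, 60, 90, 120, 150])
dets = [det_at(a) for a in angles]
best = float(np.degrees(angles[int(np.argmax(dets))]))
print(round(best))   # → 90
```

With equal ranges, det F is proportional to sin² of the angular separation, which peaks at 90°.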
Measurement Selection for Observability: New Topic in Collaboration with ARL
• In a stochastic dynamical system, suppose the state can be measured at each time instant via a measurement map that is selectable from a library
• E.g., x ∈ ℝᵈ with x(k+1) = Ax(k) + w(k) and y(k) = C_{s(k)} x(k) + n(k), k = 1, 2, …, where each C_{s(k)} is selected from the library
• What is the most informative measurement sequence, subject to constraints, for "observation"?
• In terms of estimation fidelity for x(0)?
• For numerical conditioning?
• For hypothesis testing on functions of x(0), with myopic, finite-horizon, and infinite-horizon objectives?
• What if the dynamics are driven by an adversary?
• How do biological systems manage measurement of dynamical information for sensorimotor control?
• Brian's ongoing collaborations at UMD
• What can we learn about quantifying value of information to support multi-faceted and dynamic tasks?
Started summer 2012 during Utku Ilkturk's six-week visit to ARL. Continuing in collaboration with Brian Sadler.
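The myopic version of the selection problem above can be sketched greedily (the 2-D system and the two-map library are hypothetical, unit noise covariances are assumed, and process noise is ignored in the information recursion for simplicity): at each step, choose the measurement map C from the library that most increases the log-determinant of the accumulated information about x(0), using Φ_k = A^k to refer measurements back to the initial state.

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 0.9]])
library = {"pos": np.array([[1.0, 0.0]]),    # measure the first state component
           "vel": np.array([[0.0, 1.0]])}    # measure the second state component

J = 1e-6 * np.eye(2)   # near-flat prior information about x(0)
Phi = np.eye(2)        # Phi_k = A^k
picks = []
for k in range(6):
    # Greedy step: pick the map giving the largest log det of updated information
    name, _ = max(
        ((nm, np.linalg.slogdet(J + Phi.T @ C.T @ C @ Phi)[1])
         for nm, C in library.items()),
        key=lambda nc: nc[1])
    C = library[name]
    J = J + Phi.T @ C.T @ C @ Phi
    picks.append(name)
    Phi = A @ Phi
print(picks)
```

The greedy schedule alternates between the two maps once one direction is well observed, illustrating how the objective trades off information across state components.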