
Distributed Message Passing for Large Scale Graphical Models

Alexander Schwing, Tamir Hazan

Marc Pollefeys, Raquel Urtasun

CVPR 2011

Outline

• Introduction
• Related work
• Message passing algorithm
• Distributed convex belief propagation
• Experiment evaluation
• Conclusion

Introduction

• Vision problems are often cast as discrete labeling problems in an undirected graphical model (e.g., an MRF)

– Belief propagation (BP)
– Graph cuts

• The choice between them depends on the potentials and the structure of the graph

• The main limitations when applying these methods to real-world problems are memory and computation.

Introduction

• A new algorithm that
– distributes and parallelizes the computation and memory requirements
– conserves the convergence and optimality guarantees

• Computation can be done in parallel by partitioning the graph and imposing agreement between the beliefs at the boundaries
– Graph-based optimization program → local optimization problems (one per machine)
– Messages between machines: Lagrange multipliers (see the toy sketch below)

• Demonstrated on stereo reconstruction from high-resolution images
• Handles large problems (more than 200 labels in images larger than 10 MPixel)
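A toy sketch of this distribute-and-agree idea in Python (an illustration, not the authors' code: local_solve, the softmax form, and the step size are all assumptions). Two "machines" hold scores for one shared boundary variable; the only messages exchanged are the Lagrange multipliers, whose updates drive the local beliefs toward agreement:

# Toy illustration (not the paper's implementation): two machines share
# one boundary variable; Lagrange multipliers are the only messages.
import numpy as np

def local_solve(theta, lam, eps=1.0):
    # Hypothetical local step: a softmax belief over the labels,
    # tilted by the machine's current multiplier.
    s = (theta - lam) / eps
    e = np.exp(s - s.max())
    return e / e.sum()

def run(thetas, step=0.5, rounds=100):
    lams = [np.zeros_like(t) for t in thetas]
    for _ in range(rounds):
        beliefs = [local_solve(t, l) for t, l in zip(thetas, lams)]
        avg = np.mean(beliefs, axis=0)              # agreement target
        # Subgradient step on the disagreement; the multipliers stay
        # zero-sum across machines, as in dual decomposition.
        lams = [l + step * (b - avg) for l, b in zip(lams, beliefs)]
    return beliefs

# Each machine sees different local evidence for the same variable.
beliefs = run([np.array([2.0, 0.0, 1.0]), np.array([0.0, 1.5, 1.0])])
print(beliefs)  # the two boundary beliefs, driven toward agreement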

Related work

• Provable convergence while still being computationally tractable
– parallelizes convex belief propagation
– conserves its convergence and optimality guarantees

• Strandmark and Kahl [24]

– splits the model across multiple machines

• GraphLab
– assumes that all the data is stored in shared memory

Related work

• Split the message passing task at hand into several local optimization problems that are solved in parallel.

• To ensure convergence we force the local tasks to communicate occasionally.

• At the local level we parallelize the message passing algorithm using a greedy vertex coloring (sketched below)
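The coloring step is standard; here is a minimal sketch in Python (the adjacency format and vertex order are my choices). Vertices of the same color share no edge, so their message updates can run in parallel within one phase:

def greedy_coloring(adjacency):
    """Greedy vertex coloring: adjacency maps vertex -> set of neighbors."""
    color = {}
    for v in adjacency:                  # any fixed vertex order
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:                 # smallest color unused by neighbors
            c += 1
        color[v] = c
    return color

# A 4-cycle needs only two colors, i.e. two parallel update phases.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_coloring(adj))              # {0: 0, 1: 1, 2: 0, 3: 1}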

Message passing algorithm

• The joint distribution factors into a product of non-negative functions (written out below)

– this defines a hypergraph whose nodes represent the n random variables and whose hyperedges correspond to the subsets of variables the functions depend on

• Hypergraph
– bipartite graph: the factor graph [11]

• one set of nodes corresponding to the original nodes of the hypergraph: variable nodes

• the other set consisting of its hyperedges: factor nodes

• N(i): all factor nodes that are neighbors of variable node i
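In symbols (standard notation; the names ψ_r and Z are not taken from the slides), the factorization reads

\[
p(x_1, \ldots, x_n) \;=\; \frac{1}{Z} \prod_{r} \psi_r(x_r),
\qquad \psi_r(x_r) \ge 0,
\]

where each r indexes a subset of the variables (a hyperedge, i.e. a factor node) and Z is the normalization constant.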

Message passing algorithm

• Maximum a posteriori (MAP) assignment

• Reformulate the MAP problem as an integer linear program (standard form below).
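The standard form of this reformulation (notation mine, with θ_r = log ψ_r): MAP seeks x* ∈ argmax_x Σ_r θ_r(x_r), which over indicator variables b becomes

\[
\begin{aligned}
\max_{b}\ & \sum_{r} \sum_{x_r} b_r(x_r)\,\theta_r(x_r) \\
\text{s.t.}\ & b_r(x_r) \in \{0, 1\}, \qquad \sum_{x_r} b_r(x_r) = 1, \\
& \sum_{x_r \setminus x_i} b_r(x_r) = b_i(x_i) \qquad \forall r,\ \forall i \in r.
\end{aligned}
\]

Relaxing b to [0, 1] yields the linear programming relaxation on which message passing operates.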


Distributed convex belief propagation

• Partition the vertices of the graphical model into disjoint subgraphs
– each computer independently solves a variational program with respect to its subgraph

• The distributed solutions are then integrated through message passing between the subgraphs
– preserving the consistency of the graphical model

• Properties (the general shape of program (5) is sketched below):
– If (5) is strictly concave, then the algorithm converges for all ε ≥ 0, and converges to the global optimum when ε > 0
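Program (5) itself is not reproduced in this transcript; convex belief propagation objectives of this kind typically take the ε-smoothed shape (notation mine)

\[
\max_{b \in \mathcal{L}}\ \sum_{r} \sum_{x_r} b_r(x_r)\,\theta_r(x_r)
\;+\; \epsilon \sum_{r} c_r H(b_r),
\]

where \mathcal{L} is the relaxed (local) polytope from the ILP above, H is the entropy, and the c_r are positive counting numbers; with ε > 0 the entropy terms make the objective strictly concave, which is what the global optimality property relies on.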


Distributed convex belief propagation

• Two sets of Lagrange multipliers (spelled out below):
– one enforces the marginalization constraints within each computer
– one enforces the consistency constraints between the different computers
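The two constraint families, written out (notation mine; superscripts index machines):

\[
\sum_{x_r \setminus x_i} b_r^{(m)}(x_r) = b_i^{(m)}(x_i)
\quad \text{(marginalization within machine } m\text{)},
\qquad
b_i^{(m)}(x_i) = b_i^{(m')}(x_i)
\quad \text{(consistency for boundary } i \text{ shared by } m, m'\text{)}.
\]

One multiplier is attached to each constraint; the cross-machine multipliers are exactly the messages exchanged over the network.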


Experiment evaluation

• Stereo reconstruction
– nine 2.4 GHz x64 quad-core computers with 24 GB of memory each, connected via a standard local area network

• Baselines: libDAI 0.2.7 [17] and GraphLab [16]


Experiment evaluation

• Evaluation metric: relative duality gap (one common definition below)
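One common definition (the paper's exact normalization is not in the transcript, so take this as an assumption):

\[
\mathrm{gap} \;=\; \frac{\text{dual objective} - \text{primal objective}}{|\text{dual objective}|},
\]

which is nonnegative by weak duality and reaches 0 exactly at optimality.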

Conclusion

• Handles large scale graphical models by dividing the computation and memory requirements across multiple machines.

• Convergence and optimality guarantees are preserved.

• Main benefit: the ability to scale beyond a single machine by using multiple computers.
