
Synchronization and Calibration of Camera Networks from Silhouettes

Sudipta N. Sinha, Marc Pollefeys
University of North Carolina at Chapel Hill, USA.

Slide 2: Goal

To recover the Calibration & Synchronization of a Camera Network from only Live Video or Archived Video Sequences.

Slide 3: Motivation

• Easy Deployment and Calibration of Cameras
  – No offline calibration (patterns, LEDs, etc.)
  – No physical access to the environment

• Possibility of using unsynchronized video streams (camcorders, web-cams etc.)

• Applications in wide-area surveillance camera networks (3D tracking etc).

• Digitizing 3D events

Slide 4: Why use Silhouettes?

Visual Hull (Shape-from-Silhouette) system

• Many silhouettes from dynamic objects

• Background segmentation

Why not feature-based?
• Feature matching is hard for wide baselines
• Little overlap of backgrounds
• Few features on the foreground

Slide 5: Prior Work: Calibration from Silhouettes

Epipolar geometry from silhouettes
• Porrill and Pollard, '91
• Astrom, Cipolla and Giblin, '96

Structure-and-motion from silhouettes
• Vijayakumar, Kriegman and Ponce, '96 (orthographic)
• Furukawa and Ponce, '04 (orthographic)
• Wong and Cipolla, '01 (circular motion, at least to start)
• Yezzi and Soatto, '03 (needs initialization)

Sequence-to-sequence alignment
• Caspi and Irani, '02 (feature-based)

Slide 6: Our Approach

• Compute Epipolar Geometry from Silhouettes in synchronized sequences (CVPR’04).

• Here, we extend this to unsynchronized sequences.

• Synchronization and Calibration of camera network.

Slide 7: Multiple View Geometry of Silhouettes

Frontier points and epipolar tangents

• Always at least 2 extreme frontier points per silhouette

• Only 2-view correspondence in general.

[Figure: two views of a silhouette with corresponding frontier points x1 ↔ x'1 and x2 ↔ x'2 on the outer epipolar tangents]

Corresponding frontier points satisfy the epipolar constraint:
x'1^T F x1 = 0,   x'2^T F x2 = 0

Slide 8: Camera Network Calibration from Silhouettes

• 7 or more corresponding frontier points needed to compute epipolar geometry

• Hard to find on a single silhouette, and possibly occluded

• However, video sequences contain many silhouettes.

Slide 9: Camera Network Calibration from Silhouettes

• If we know the epipoles, draw 3 outer epipolar tangents (need at least two silhouettes in each view)

• Compute an epipolar line homography H^-T

• Epipolar geometry: F = [e]_x H
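As a minimal sketch of this composition step (assuming numpy; e2 stands for the epipole in the second view and H for a compatible 3x3 homography, both illustrative names, not the authors' code):

```python
import numpy as np

def skew(e):
    """3x3 matrix [e]_x such that skew(e) @ v == np.cross(e, v)."""
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

def fundamental_from_epipole_and_homography(e2, H):
    """Compose the fundamental matrix F = [e]_x H from an epipole and a homography."""
    F = skew(np.asarray(e2, dtype=float)) @ np.asarray(H, dtype=float)
    return F / np.linalg.norm(F)   # fix the arbitrary overall scale
```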

Slide 10: RANSAC-based algorithm

Repeat {
  • Generate a hypothesis for the epipolar geometry
  • Verify the model
}
Refine the best hypothesis.

• Note: RANSAC is used to explore the 4D space of epipole pairs, in addition to dealing with noisy silhouettes.

Slide 11: Compact Representation for Silhouettes: Tangent Envelopes

• Store the convex hull of the silhouette.

• Tangency Points for a discrete set of angles.

• Approx. 500 bytes/frame, so a whole video sequence easily fits in memory.

• Tangency Computations are efficient.
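A minimal sketch of such a per-frame structure (assuming numpy and OpenCV's cv2.convexHull; the 64-direction discretization and the field names are illustrative, not the authors' exact layout):

```python
import numpy as np
import cv2

class TangentEnvelope:
    """Compact per-frame silhouette representation: convex hull vertices plus,
    for a discrete set of directions, the hull vertex where a supporting
    (tangent) line with that outward normal touches the silhouette."""

    def __init__(self, binary_mask, n_angles=64):
        # Convex hull of the foreground pixels (in x, y order).
        ys, xs = np.nonzero(binary_mask)
        points = np.stack([xs, ys], axis=1).astype(np.int32)
        self.hull = cv2.convexHull(points).reshape(-1, 2).astype(np.float64)

        # For each discretized outward normal, the extreme hull vertex
        # (maximal projection onto that normal) is the tangency point.
        angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
        normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (n_angles, 2)
        proj = normals @ self.hull.T                                   # (n_angles, n_hull)
        self.tangency_points = self.hull[np.argmax(proj, axis=1)]
```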

Slide 12: RANSAC-based algorithm

Generate Hypothesis for Epipolar Geometry

• Pick 2 corresponding frames, pick random tangents for each of the silhouettes.

• Compute epipoles.

• Pick 1 more tangent from additional frames

• Compute homography

• Generate Fundamental Matrix.
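A hedged sketch of one way to assemble such a hypothesis (assuming numpy; the tangent lines are given in homogeneous line coordinates, e.g. sampled from the tangent envelopes above; this linear construction is illustrative and not necessarily the authors' exact formulation):

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def hypothesize_F(tangents1, tangents2):
    """One epipolar-geometry hypothesis from three pairs of hypothesized
    corresponding outer epipolar tangent lines, one tangent per silhouette
    taken from three different frames of view 1 and view 2."""
    l1a, l1b, l1c = (np.asarray(l, dtype=float) for l in tangents1)
    l2a, l2b, l2c = (np.asarray(l, dtype=float) for l in tangents2)

    # All epipolar tangents pass through the epipole, so intersect two sampled
    # tangents in each view to get candidate epipoles (line intersection = cross product).
    e1 = np.cross(l1a, l1b)
    e2 = np.cross(l2a, l2b)

    # Linear constraints on f = F flattened row-major.
    rows = [np.kron(np.eye(3), e1.reshape(1, 3)),    # F e1 = 0
            np.kron(e2.reshape(1, 3), np.eye(3))]    # e2^T F = 0
    for l, lp in ((l1a, l2a), (l1b, l2b), (l1c, l2c)):
        x = np.cross(l, [0.0, 0.0, 1.0])             # a point lying on tangent l
        # F x must equal the corresponding tangent lp up to scale: lp x (F x) = 0
        rows.append(skew(lp) @ np.kron(np.eye(3), x.reshape(1, 3)))

    # For generic inputs the stacked system has a one-dimensional null space;
    # its null vector (smallest singular value) is F up to scale.
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    F = Vt[-1].reshape(3, 3)
    return F / np.linalg.norm(F)
```

The epipole/homography factorization of the previous slides is implicit here: the F e1 = 0 and e2^T F = 0 rows pin the epipoles, and the three tangent correspondences fix the remaining degrees of freedom of the epipolar line homography.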

Slide 13: RANSAC-based algorithm

Verify the model:
For all tangents:
  • Compute the symmetric epipolar transfer error
  • Update the inlier count
  • (Abort early if the hypothesis does not look promising)
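A hedged sketch of this scoring loop (assuming numpy; the candidate frontier-point correspondences are taken as given, e.g. the outer tangency points implied by the hypothesized epipoles, and the inlier threshold and bail-out rule are illustrative, not the authors' exact criteria):

```python
import numpy as np

def point_line_distance(x, l):
    """Distance from a homogeneous 2D point x to a homogeneous line l."""
    x = x / x[2]
    return abs(l @ x) / np.hypot(l[0], l[1])

def verify_hypothesis(F, correspondences, threshold=1.5, best_inliers=0):
    """Score an F hypothesis by the symmetric epipolar transfer error over
    candidate frontier-point correspondences [(x1, x2), ...] given as
    homogeneous numpy vectors. Returns (inlier count, completed flag)."""
    inliers = 0
    for k, (x1, x2) in enumerate(correspondences, start=1):
        err = point_line_distance(x2, F @ x1) + point_line_distance(x1, F.T @ x2)
        if err < threshold:
            inliers += 1
        # Early abort (a simple bail-out test): stop if even a perfect
        # remainder could no longer beat the best hypothesis so far.
        if inliers + (len(correspondences) - k) < best_inliers:
            return inliers, False
    return inliers, True
```

In the full loop, the returned inlier count of each hypothesis would be compared against the best one so far, and the best hypothesis refined at the end.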

Slide 14: What if videos are unsynchronized?

For fixed-fps video, the same constraints remain valid up to an extra unknown temporal offset.

• Add a random temporal offset to the RANSAC hypothesis.

• Use a multi-resolution approach:
  – Keyframes with slow motion give a rough synchronization
  – Frames with fast motion provide fine synchronization
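A small sketch of how the extra unknown could enter the sampling step (the function name and the uniform offset prior are illustrative assumptions):

```python
import random

def sample_frame_pair(n_frames1, n_frames2, max_offset, rng=random):
    """Pick a frame in sequence 1, a candidate temporal offset (in frames), and
    the frame in sequence 2 it would correspond to if that offset were correct.
    Assumes both sequences are much longer than the offset search range."""
    offset = rng.randint(-max_offset, max_offset)   # the extra unknown in the hypothesis
    lo = max(0, -offset)
    hi = min(n_frames1, n_frames2 - offset)
    i = rng.randrange(lo, hi)                       # valid frame index in sequence 1
    return i, i + offset, offset
```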

Slide 15: Computed Fundamental Matrices

Slide 16: Synchronization experiment

• Total temporal offset search range of [-500, +500] frames (i.e. ±15 secs.)
• Unique peaks for correct offsets
• Possibility of sub-frame synchronization

[Table: per-sequence results listing the sequence offset (# frames), the number of RANSAC iterations (in millions), and the number of promising candidates]

Slide 17: Camera Network Synchronization

• Consider a directed graph over the cameras with the measured pairwise offsets as edge values
• For consistency, the offsets around any loop should add up to zero
• MLE of the per-camera offsets by minimizing

Σ_(i,j) ( t_ij − (t_j − t_i) )²   over per-camera offsets t_i

[Figure: directed graph with measured pairwise offsets as edge labels (+3, −5, +8, +6, +2, 0); offsets in frames (1 frame = 1/30 s), compared against ground truth]
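A minimal least-squares sketch of this consistency step (assuming numpy; the sign convention t_ij ≈ t_j − t_i, the gauge choice of fixing camera 0 at zero, and the data layout are assumptions made for illustration):

```python
import numpy as np

def solve_network_offsets(pairwise, n_cameras):
    """Per-camera time offsets t_i (with t_0 fixed to 0) that best explain the
    measured pairwise offsets {(i, j): t_ij}, minimizing sum (t_ij - (t_j - t_i))^2."""
    rows, rhs = [], []
    for (i, j), t_ij in pairwise.items():
        r = np.zeros(n_cameras)
        r[j], r[i] = 1.0, -1.0          # t_j - t_i should explain the measured t_ij
        rows.append(r)
        rhs.append(t_ij)
    # Gauge fixing: pin camera 0's offset to zero so the solution is unique.
    r0 = np.zeros(n_cameras)
    r0[0] = 1.0
    rows.append(r0)
    rhs.append(0.0)
    t, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return t
```

Loops that sum to zero in the measured offsets correspond to zero residual in this least-squares problem; non-zero residuals flag inconsistent pairwise estimates.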

Slide 18: From epipolar geometry to full calibration

• Solve for camera triplets (Levi and Werman, CVPR'03; Sinha et al., CVPR'04)

• Assemble complete camera network.

Slide 19: Metric Cameras and Visual-Hull Reconstruction from 4 views

Final calibration quality is comparable to that of an explicit calibration procedure.

Slide 20: Validation experiment: Reprojection of silhouettes

Slide 21: Taking Sub-frame Synchronization into account

Reprojection error reduced from 10.5% to 3.4% of the pixels in the silhouette

Temporal Interpolation of Silhouettes
(Sinha and Pollefeys, to appear at 3DPVT'04)

Slide 22: Conclusion and Future Work

• Camera network calibration & synchronization just from dynamic silhouettes.

• Great for visual-hull systems.
• Applications for surveillance systems.

• Extend to active PTZ camera network and asynchronous video streams.

Acknowledgments
• NSF CAREER, DARPA
• Peter Sand (MIT) for the visual hull dataset
