
Introduction

• 3D scene flow is the 3D motion field of points in the world. Structure is the depth of the scene.

• Motivation of our work: numerous applications, including intelligent robots, human-computer interfaces, surveillance systems, dynamic rendering, dynamic scene interpretation, etc.

• Challenges: absence of correspondences, image noise, structure ambiguities, occlusion, etc.

System Block Diagram

[Block diagram: image sequences 1 through N, captured by cameras 1 through N, are processed by optical flow; the 3D affine model, combined with stereo constraints and regularization constraints, yields the 3D scene flow, 3D correspondences, and dense scene structure.]

Multiple Camera Geometry

• A set of cameras C_0, C_1, ..., C_{n-1} provides N images. A 3D point P^W in the world is transformed to an image point m_i in camera i by the relation m_i = J_i · T_i · P^W.

• Normally, one pair is used as the basic stereo pair. All cameras are pre-calibrated.

• Given an image point and its disparity, we can back-project it to the 3D world.
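As an illustration of the back-projection step, here is a minimal sketch assuming a rectified pinhole stereo pair with focal length f (in pixels), baseline b, and principal point (cx, cy); the function name and the numeric values are hypothetical, not the calibration used in the talk.

```python
import numpy as np

def backproject(u, v, disparity, f, baseline, cx, cy):
    """Back-project a pixel (u, v) with a given disparity to a 3D point.

    Assumes a rectified pinhole stereo pair: depth Z = f * b / d,
    then X and Y follow from the pinhole model.
    """
    Z = f * baseline / disparity          # depth from disparity
    X = (u - cx) * Z / f                  # lateral position
    Y = (v - cy) * Z / f                  # vertical position
    return np.array([X, Y, Z])

# Example using the wave-tank camera figures quoted later in the talk
# (12 cm baseline, 255-pixel focal length); the disparity value is made up.
P = backproject(u=180, v=120, disparity=8.0, f=255.0, baseline=0.12, cx=160.0, cy=120.0)
print(P)   # 3D point in the left-camera frame, in metres
```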

Local Motion Model Selection

[Diagram: a 3D point P moves under the 3D affine motion M_t from frame t to frame t+1 (P^{t+1} = M_t P^t); camera i observes its projections m^t and m^{t+1}.]

Local Motion Model Selection

• To avoid overfitting and to ensure convergence in each local region, we assume the motion in S consecutive frames is similar over time, differing only by a scaling factor. Then,

M_t = [ a1 b1 c1 d1 ; a2 b2 c2 d2 ; a3 b3 c3 d3 ; 0 0 0 1 ],   t in [t0, t0 + S).
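To make the role of the 4x4 model concrete, the following sketch applies a hand-picked affine motion matrix to a few homogeneous 3D points; the parameter values are placeholders, not fitted values.

```python
import numpy as np

# 3D affine motion in homogeneous coordinates: the last row is [0, 0, 0, 1],
# so the model combines a linear part (a, b, c) with a translation (d).
M_t = np.array([
    [1.00, 0.02, 0.00, 0.05],   # a1 b1 c1 d1
    [0.00, 0.98, 0.01, -0.03],  # a2 b2 c2 d2
    [0.00, 0.00, 1.01, 0.10],   # a3 b3 c3 d3
    [0.00, 0.00, 0.00, 1.00],
])

def advance(points_xyz, M):
    """Move 3D points one frame forward under the affine model P_{t+1} = M P_t."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homo @ M.T)[:, :3]

pts_t = np.array([[0.1, 0.2, 1.5], [0.0, 0.0, 2.0]])
print(advance(pts_t, M_t))
```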

Motion Model Fitting

• Eliminate translation unknowns to avoid trivial solutions.

• For remaining unknowns in each local region:

• Non-linear model fitting by using Levenberg-Marquardt (LM) algorithm.

U_t* = arg min over U_t of EOF(U_t).
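A minimal sketch of such a fit, assuming SciPy's Levenberg-Marquardt solver and a stand-in residual built from direct 3D correspondences rather than the full EOF terms:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(u_t, points_t, points_t1):
    """Residuals between observed next-frame points and affinely moved points.

    u_t packs the 12 affine parameters (the last matrix row is fixed to [0,0,0,1]).
    In the talk the residuals come from the optical-flow and stereo terms of the
    EOF; a direct 3D correspondence error is used here only as a stand-in.
    """
    M = np.vstack([u_t.reshape(3, 4), [0.0, 0.0, 0.0, 1.0]])
    homo = np.hstack([points_t, np.ones((len(points_t), 1))])
    pred = (homo @ M.T)[:, :3]
    return (pred - points_t1).ravel()

# Synthetic local region: points moved by a small known translation.
rng = np.random.default_rng(0)
pts_t = rng.uniform(-1.0, 1.0, size=(20, 3))
pts_t1 = pts_t + np.array([0.05, -0.02, 0.10])

u0 = np.eye(3, 4).ravel()                       # identity initial guess
fit = least_squares(residuals, u0, args=(pts_t, pts_t1), method="lm")
print(fit.x.reshape(3, 4))                      # recovered affine parameters
```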

Available Local Constraints

Constraint Discussion

• The EOF function is defined based on all the available constraints.

– Optical flow constraints: the projected 2D motion of the 3D affine motion should be compatible with the optical flow.

– Stereo constraints: the projected image locations of the same 3D scene point on different image planes should have similar intensity patterns. Cross-correlation is used to measure this similarity.

EOF Function

• A 3D scene point is projected to different image planes of N cameras. The intensity patterns around the projective location should be similar. So,

EOF_stereo = Σ_{t in S} Σ_{i=1}^{N-1} Σ_{j=i+1}^{N} [ 1 − Corel( m_i^t, m_j^t ) ],

where m_i^t is the projection of T_i M_t P into camera i and the sum runs over the points of the local region A.
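A small sketch of the similarity measure, assuming plain normalized cross-correlation over square intensity patches; the window size and the synthetic images are illustrative only:

```python
import numpy as np

def patch(img, x, y, half=3):
    """Extract a (2*half+1)^2 intensity patch around an integer pixel location."""
    return img[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)

def ncc(p, q):
    """Normalized cross-correlation of two equally sized patches (1 = identical)."""
    p = p - p.mean()
    q = q - q.mean()
    denom = np.sqrt((p * p).sum() * (q * q).sum()) + 1e-12
    return float((p * q).sum() / denom)

# Two synthetic "views": the second is a shifted copy of the first.
rng = np.random.default_rng(1)
img_i = rng.uniform(0, 255, size=(64, 64))
img_j = np.roll(img_i, shift=2, axis=1)

# Similarity of the patterns around a projected location in each view;
# a stereo term like 1 - ncc(...) could then be summed over camera pairs.
score = ncc(patch(img_i, 30, 30), patch(img_j, 32, 30))
print(score)   # close to 1.0 for matching patches
```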

EOF Function

• The EOF in local model fitting can be denoted as,

LM algorithm is then used to minimize the EOF function.

EOF = EOF_optical_flow + w · EOF_stereo.

Regularization Constraints

• To avoid overfitting, a penalty constraint is added for large motion.

This constraint is added to EOF function and used in every iteration.

C_p^i = min( r − | z_i^{t+1} − z_i^t |, 0.0 )
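As an illustration only, a possible form of such a penalty; the threshold r and the squaring of the excess are chosen for this sketch rather than taken from the talk:

```python
import numpy as np

def depth_penalty(z_t, z_t1, r=0.05):
    """Penalty that is zero for small depth changes and grows once |dz| exceeds r.

    The threshold r and the exact functional form are illustrative; the talk
    only states that a penalty is attached to large motion in every iteration.
    """
    excess = np.minimum(r - np.abs(z_t1 - z_t), 0.0)   # negative only when |dz| > r
    return float((excess ** 2).sum())

print(depth_penalty(np.array([1.0, 2.0]), np.array([1.01, 2.30])))
```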

Initial Guesses

• The unknown vector U_t needs to be initialized. By assuming small motion between two adjacent frames, we set the affine part to the identity,

a1 = 1, b1 = 0, c1 = 0;  a2 = 0, b2 = 1, c2 = 0;  a3 = 0, b3 = 0, c3 = 1,

and the scaling factors for the S frames to 1.

• The initial structure (depth) value can be computed by a stereo algorithm.

Complete Recursive Algorithm

1. Initialize the unknown vector U_t. Set flag := 0.

2. If flag = 0, carry out affine model fitting in each local region using the LM algorithm without the smoothness constraint, and set flag := 1; else, add the smoothness constraint into the EOF function, then carry out affine model fitting in each local region.

3. If the regularization constraints are less than a threshold or the maximum number of iterations has been exceeded, end the algorithm. Else go to step 2.
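The control flow of these three steps might be sketched as follows; fit_region and reg_value are hypothetical placeholders for the per-region LM fitting and the total value of the regularization constraints:

```python
def recover_scene_flow(regions, fit_region, reg_value, max_iters=20, reg_tol=1e-3):
    """Control flow of the recursive scheme described above.

    fit_region(region, use_smoothness) stands in for the per-region LM model
    fitting; reg_value(regions) stands in for the total value of the
    regularization constraints. Both are assumptions, not the authors' code.
    """
    flag = 0                                   # step 1: initial guesses already set
    for _ in range(max_iters):
        use_smoothness = (flag != 0)           # step 2: skip smoothness on first pass
        for region in regions:
            fit_region(region, use_smoothness)
        flag = 1
        if reg_value(regions) < reg_tol:       # step 3: stop once constraints are small
            break
    return regions
```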

Integrated 3D Scene Flow and Structure Recovery

Experiments on Synthetic Data

Integrated 3D Scene Flow and Structure

Recovered Motion Fields

Integrated 3D Scene Flow and Structure

Ground Truth Validation

Integrated 3D Scene Flow and Structure

Experiments on Real Data

Integrated 3D Scene Flow and Structure

Recovered Motion Fields

Experimental Results of Rule-Based Stereo

[Figure panels: top view, right view, left view, and segmentation map.]

Experimental Results of Rule-Based Stereo

Experimental Results of Rule-Based Stereo

Initial Sparse Disparity Map    Result After Applying Rules 1 and 2

Experimental Results of Rule-Based Stereo

Experimental Results of Rule-Based Stereo

Result by Using A Direct Method Result by Using Our Method

Experimental Results of Rule-Based Stereo

Experimental Results of Rule-Based Stereo

Occlusion Map Confidence Map

Experimental Results of Sequential Formulation

• Sample input images (only reference views are shown).

Time t Time t+1

Experimental Results of Sequential Formulation

• Disparity results.

Reference View

Disparity Result

Experimental Results of Sequential Formulation

• Scene flow results.

X-Y projection of scene flow    Z motion of scene flow

Experimental Results of Integrated Formulation

• Disparity results.

Reference View

Disparity Result

Experimental Results of Integrated Formulation

• Scene flow results.

X-Y projection of scene flow    Z motion of scene flow

Local Nonrigid Motion Tracking

Scheme Overview

[Block diagram: the 2D image sequence is evenly segmented and fed to the local motion analysis module (local nonrigid motion tracking); a global motion analysis module applies global constraints (global regularization); the outputs are structure, nonrigid motion, and 3D correspondences.]

Local Affine Motion Model

• The affine motion model is assumed to remain the same for a short period of time;

• A scaling factor, λ_i, is incorporated in order to compensate for possible temporal deviations:

P^{i+1} = M_i · P^i,   P^i = (x, y, z, 1)^T,

M_i = [ λ_i·a1  λ_i·b1  λ_i·c1  λ_i·d1 ;  λ_i·a2  λ_i·b2  λ_i·c2  λ_i·d2 ;  λ_i·a3  λ_i·b3  λ_i·c3  λ_i·d3 ;  0  0  0  1 ].

Local EOF Function

• Levenberg-Marquardt method is used to perform the EOF minimization.

• Unknowns include affine parameters and the scaling factors.

[Figure: a point P_i in frame (i) moves to P_{i+1} in frame (i+1); the reference patch (R) around its projection in image I_i is compared with the corresponding patch (M) in image I_{i+1}, and the local EOF is built from the dissimilarity between I_i and the compensated image I_c.]

• GOES-8 and GOES-9 are focused on clouds. GOES-9 provides one view approximately every minute; GOES-8 provides one view approximately every 15 minutes. Both GOES-8 and GOES-9 have five multi-spectral channels.

Cloud Image Acquisition

• Experiments have been performed on the GOES image sequences of Hurricane Luis, from 09-06-95 at 1023 UTC to 09-06-95 at 2226 UTC.

Experiments

Experiments (cont.)

• Although the initial mean errors are very large, they decrease very quickly after the global fluid constraints are applied. Stable results are achieved at the end of the iterations.

Experiments on Simulation Images

Results Validation

Experiments on Real Images

Reconstruction Results

Jeab Min_Tracking

Jeab_render

Lin

Lin_render

Qian

Qian_render

Ye

Ye_render

Results Validation

Mean Error: 0.47006 Mean Error: 0.527872

Wave Tank Experiment

Experimental Setup

Stereoscopic camera used to record video sequences of ice forming in the CRREL wave tank.

Camera details:

15 fps

B/W images at 320x240 pixel resolution

12 cm baseline with 255 pixel focal length

Camera mounted on platform ~0.8 m above surface

Multiple film segments captured at various stages of ice formation

Several marker types (buoys, sprinkles) placed on the surface at various times

Wave Tank Results

Experiments Performed

Visualization via Anaglyphs

• Ice Bucket – 3D images of small ice surfaces

• Wave Tank - 3D images of ice in CRREL wave tank

Analysis

• Ice Bucket - Surface reconstruction of bench-top ice

• Wave Tank - Surface reconstruction of ice in CRREL wave tank

1. Separate the color channels (RGB)

2. For each pixel in the anaglyph:

   1. Take the Red value from the left image

   2. Take the Green and Blue values from the right image

3. View the constructed image with filtered glasses.

Visualizations

Steps to Creating an Anaglyph

[Diagram: the L image (R1, G1, B1) and the R image (R2, G2, B2) are combined into the anaglyph (R1, G2, B2).]
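A minimal sketch of these steps using NumPy and Pillow; the file names are placeholders and the two images are assumed to have the same size:

```python
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path):
    """Red channel from the left image, green and blue from the right image."""
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    out = right.copy()
    out[:, :, 0] = left[:, :, 0]        # R1 from the left view, G2/B2 from the right
    return Image.fromarray(out)

# File names are placeholders for a captured stereo pair.
make_anaglyph("left.png", "right.png").save("anaglyph.png")
```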

Visualizations

Ice Bucket Anaglyphs

Ice pieces in small bucket

Camera ~0.4 m from surface

Visualizations

Wave Tank Anaglyphs

● Wave tank motion
● Surface mostly solid
● Frames pre-aligned

Pre-study Examples

With calibration balls Without calibration balls

Stereo Analysis

Ice Bucket Experiment

Photographs taken in lab of ice in shallow bucket

Ambient lighting

Stereo camera

Correspondences determined manually

Matching points hand selected

Determining matches in specular areas still difficult

Stereo Results

Nearest Neighbor Surface

Depths calculated at given correspondence points

All other points assigned the depth of the nearest known point
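A small sketch of this nearest-neighbor assignment, assuming hand-picked correspondences with hypothetical depths and SciPy's k-d tree for the lookup:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_surface(points_xy, depths, shape):
    """Assign every pixel the depth of the nearest known correspondence point."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    _, idx = cKDTree(points_xy).query(grid)
    return depths[idx].reshape(h, w)

# Hypothetical hand-selected correspondences with depths in metres.
pts = np.array([[10, 10], [50, 20], [30, 55]])
z = np.array([0.80, 0.85, 0.78])
surface = nearest_neighbor_surface(pts, z, (64, 64))
print(surface[0, 0], surface[60, 60])
```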

Stereo Results

Thin Plate Spline Surface

Depths calculated at given correspondence points

All other points interpolated from a thin plate spline

Current Results: Wave Tank

Wave Tank Results

Photographs taken at CRREL wave tank

No special lighting used

Camera mounted above tank, facing down

Initial correspondences determined manually

Matching points hand selected

Tank walls and camera support provide context

Current Results: Wave Tank

Thin Plate Spline Surface

Depths calculated at given correspondence points

All other points interpolated from a smoothing spline

Stereo Analysis Algorithm

Thin Plate Spline Surface With Iterative Warping

1. Manually determine a set of correspondences
2. Generate a disparity surface using thin plate splines
3. Warp the left image to the right image via the disparity surface
4. Fill in any gaps in the warped image
5. Obtain dense stereo between the right and warped left images
6. Update the disparity surface from the calculated dense stereo
7. Iterate back to step 3 until the two images converge
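A rough sketch of steps 2 and 3, assuming SciPy's thin-plate-spline interpolator and a simple horizontal warp; the correspondence values and the disparity sign convention are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_disparity(points_xy, disparities, shape):
    """Step 2: dense disparity surface from sparse correspondences via a
    thin plate spline (RBFInterpolator with the thin-plate kernel)."""
    h, w = shape
    tps = RBFInterpolator(points_xy, disparities, kernel="thin_plate_spline")
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    return tps(grid).reshape(h, w)

def warp_left_to_right(left, disparity):
    """Step 3: sample the left image at x + d(x, y) so it lines up with the right."""
    h, w = left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return map_coordinates(left, [ys, xs + disparity], order=1, mode="nearest")

# Hand-picked correspondences (hypothetical values) and a synthetic left image.
pts = np.array([[20.0, 20.0], [100.0, 30.0], [60.0, 90.0], [110.0, 110.0]])
disp = np.array([3.0, 5.0, 4.0, 6.0])
left = np.random.default_rng(2).uniform(0, 255, size=(128, 128))

d = tps_disparity(pts, disp, left.shape)
warped = warp_left_to_right(left, d)
# Steps 5-7 would re-estimate dense stereo between `warped` and the right image,
# update d, and repeat until the two images converge.
```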

Stereo Analysis Algorithm

Thin Plate Spline Surface With Iterative Warping

1. Fit surface

2. Warp the left image to the right

Stereo Analysis Algorithm

Thin Plate Spline Surface With Iterative Warping

Current Results: Wave Tank

Visualizations

Deformable Dual Mesh -- Application to Stereo (cont.)

A 3D array is formed by the correlation values between the stereo pair.

(a) A stereo pair

(b) Three cross sections of a 3D array filled with the correlation values (red represents higher correlation areas)
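A simplified sketch of filling such a volume, using a zero-mean window correlation instead of the full normalized cross-correlation; the disparity range and window size are arbitrary:

```python
import numpy as np

def correlation_volume(left, right, max_disp=16, half=2):
    """Fill a 3D array V[y, x, d] with window correlation scores between the
    left image and the right image shifted by each candidate disparity d.
    Higher values mark better matches (the red regions in the figure)."""
    h, w = left.shape
    V = np.zeros((h, w, max_disp))
    L = left - left.mean()
    for d in range(max_disp):
        R = np.roll(right, d, axis=1) - right.mean()   # right image shifted by d
        prod = L * R
        for y in range(half, h - half):
            for x in range(half, w - half):
                V[y, x, d] = prod[y - half:y + half + 1, x - half:x + half + 1].sum()
    return V

# Synthetic pair: the right view is the left view shifted by 5 pixels.
rng = np.random.default_rng(3)
left = rng.uniform(0, 1, size=(40, 60))
right = np.roll(left, -5, axis=1)
V = correlation_volume(left, right, max_disp=10)
print(V[20, 30].argmax())   # best-scoring disparity at an interior pixel (about 5)
```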

• The near mesh (NM) starts deforming from the camera-side end of the volume V

• The far mesh (FM) starts deforming from the far-side end of the volume V

Deformable Dual Mesh -- Application to Stereo (cont.)

• Coarse to Fine Scheme:

A coarsely initialized 3D array V. The blue plane shows the initial position of the near mesh and the red plane shows the initial position of the far mesh

Deformable Dual Mesh -- Application to Stereo (cont.)
