

Unstructured Lumigraph Rendering

Chris Buehler, Michael Bosse, Leonard McMillan (MIT-LCS)

Steven J. Gortler (Harvard University)

Michael F. Cohen (Microsoft Research)

The Image-Based Rendering Problem

• Synthesize novel views from reference images
• Static scenes, fixed lighting
• Flexible geometry and camera configurations

The ULR Algorithm

• Designed to work over a range of image and geometry configurations
• Designed to satisfy desirable properties

[Figure: VDTM, LF, and ULR plotted on axes of geometric fidelity vs. number of images; ULR spans the range between VDTM (accurate geometry, few images) and LF (no geometry, many images).]

"Light Field Rendering," SIGGRAPH '96

[Figure: two-plane (s, u) ray parameterization; the desired camera's ray intersects the planes at s0 and u0, and the desired color is interpolated from the "nearest cameras."]

Desired Property #1: Epipole consistency

"The Lumigraph," SIGGRAPH '96

[Figure: without geometry, blending the nearest cameras' rays mixes colors from different points of the scene, a potential artifact.]

Desired Property #2: Use of geometric proxy

"The Lumigraph," SIGGRAPH '96

[Figure: rays from the desired camera are intersected with the geometric proxy of the scene and looked up in the rebinned source images.]

Rebinning

Note: all images are resampled.

Desired Property #3: Unstructured input images

"The Lumigraph," SIGGRAPH '96

Desired Property #4: Real-time implementation


View-Dependent Texture Mapping, SIGGRAPH '96, EGRW '98

[Figure: source cameras that are occluded or that see the proxy point out of view cannot contribute to the desired camera's ray.]

Desired Property #5: Continuous reconstruction

View-Dependent Texture Mapping, SIGGRAPH '96, EGRW '98

[Figure: the angles θ1, θ2, θ3 between the desired camera's ray and each source camera's ray, measured at the geometric proxy, rank the source cameras.]

Desired Property #6: Angles measured w.r.t. proxy

View-Dependent Texture Mapping, SIGGRAPH '96, EGRW '98

Desired Property #7: Resolution sensitivity

Previous Work

• Light fields and Lumigraphs: Levoy and Hanrahan; Gortler et al.; Isaksen et al.
• View-dependent texture mapping: Debevec et al.; Wood et al.
• Plenoptic modeling with hand-held cameras: Heigl et al.
• Many others…

Unstructured Lumigraph Rendering

1. Epipole consistency
2. Use of geometric proxy
3. Unstructured input
4. Real-time implementation
5. Continuous reconstruction
6. Angles measured w.r.t. proxy
7. Resolution sensitivity

Blending Fields

colordesired = Σi wi colori

Writing the weight as a function of each source camera Ci:

colordesired = Σi w(Ci) colori

• Explicitly construct blending field
  • Computed using penalties
  • Sample and interpolate over desired image
• Render with hardware
  • Projective texture mapping and alpha blending
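The reconstruction itself is just a per-pixel weighted average of the source-camera colors. A minimal sketch, assuming RGB tuples and a hypothetical helper name `blend` (the weights are expected to come from the blending field):

```python
def blend(colors, weights):
    """Weighted average of per-camera RGB colors: sum_i w_i * color_i,
    normalized so that the weights form a partition of unity."""
    assert len(colors) == len(weights)
    total = sum(weights)
    if total == 0:
        return (0.0, 0.0, 0.0)  # no camera sees this ray
    return tuple(
        sum(w * c[k] for w, c in zip(weights, colors)) / total
        for k in range(3)
    )

# Two cameras contributing equally blend their colors halfway:
print(blend([(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)], [0.5, 0.5]))  # (0.5, 0.0, 0.5)
```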

Unstructured Lumigraph Rendering

Angle Penalty

[Figure: source cameras C1…C6 and the desired camera Cdesired view a point on the geometric proxy; θ1…θ6 are the angles between each source ray and the desired ray.]

penaltyang(Ci) = θi
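The angle penalty is the angle between the two rays that meet at the proxy point. A hedged sketch (representing camera centers and the proxy point as 3-D tuples is an assumption for illustration):

```python
import math

def angle_penalty(c_i, c_desired, p):
    """Angle (radians) between the rays c_i -> p and c_desired -> p,
    i.e. penalty_ang(Ci) = theta_i measured w.r.t. the proxy point p."""
    def unit(a, b):
        v = [b[k] - a[k] for k in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    d1 = unit(c_i, p)
    d2 = unit(c_desired, p)
    # clamp to guard acos against tiny floating-point overshoot
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(d1, d2))))
    return math.acos(dot)

# A camera coincident with the desired camera pays zero angle penalty:
print(angle_penalty((0, 0, 0), (0, 0, 0), (0, 0, 1.0)))  # 0.0
```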

Resolution Penalty

[Figure: penaltyres plotted against the distance of Ci from the proxy; it is zero until dist(Ci) exceeds dist(Cdesired), then grows.]

penaltyres(Ci) = max(0, dist(Ci) − dist(Cdesired))

Field-Of-View Penalty

[Figure: penaltyfov plotted against angle; it rises as the resampled ray approaches the edge of camera Ci's field of view.]

Total Penalty

penalty(Ci) = α penaltyang(Ci) + β penaltyres(Ci) + γ penaltyfov(Ci)
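A sketch of combining the penalty terms. `resolution_penalty` follows the slide's max(0, ·) formula; the α, β, γ defaults are illustrative assumptions, and the field-of-view term is passed in precomputed rather than modeled here:

```python
def resolution_penalty(dist_i, dist_desired):
    """Penalize cameras farther from the proxy than the desired camera
    (their images undersample the surface)."""
    return max(0.0, dist_i - dist_desired)

def total_penalty(ang, res, fov, alpha=1.0, beta=1.0, gamma=1.0):
    """penalty(Ci) = alpha*ang + beta*res + gamma*fov; the relative
    weights alpha, beta, gamma are user-chosen."""
    return alpha * ang + beta * res + gamma * fov

# A camera closer than the desired camera pays no resolution penalty:
print(total_penalty(0.1, resolution_penalty(2.0, 3.0), 0.0))  # 0.1
```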

K-Nearest Continuous Blending

• Only use cameras with the K smallest penalties
• C0 continuity: a camera's contribution drops to zero as it leaves the K-nearest set:
  w̃(Ci) = 1 − penalty(Ci) / penalty(CK+1st closest)
• Partition of unity: normalize:
  w(Ci) = w̃(Ci) / Σj w̃(Cj)
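The K-nearest weighting follows directly from the two formulas above. A sketch (the function name is an assumption, and it assumes the (K+1)-st smallest penalty is nonzero):

```python
def blend_weights(penalties, k):
    """Continuous K-nearest weights: each of the K lowest-penalty cameras
    gets raw weight 1 - penalty/threshold, where the threshold is the
    (K+1)-st smallest penalty, so weights reach zero exactly as a camera
    leaves the K-nearest set; then normalize to a partition of unity."""
    order = sorted(range(len(penalties)), key=lambda i: penalties[i])
    thresh = penalties[order[k]]  # penalty of the (K+1)-st closest camera
    raw = [0.0] * len(penalties)
    for i in order[:k]:
        raw[i] = 1.0 - penalties[i] / thresh
    total = sum(raw)
    return [w / total for w in raw]

# Lowest-penalty cameras get the largest weights; the rest get zero:
print(blend_weights([0.1, 0.4, 0.2, 0.8], k=2))
```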

Blending Field Visualization

Sampling Blending Fields

[Figure: blending fields computed with epipole sampling alone vs. epipole and grid sampling.]

Select blending field sample locations
for each sample location j do
  for each camera i do
    Compute penalty(i) for sample location j
  end for
  Find K smallest penalties
  Compute blending weights for sample location j
end for
Triangulate sample locations
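The sampling pseudocode can be sketched in Python. `penalty_at` here is a hypothetical stand-in for the combined angle/resolution/FOV penalty, used only to show the control flow; triangulation of the sample locations is omitted:

```python
def penalty_at(cam, loc):
    # hypothetical stand-in for penalty(Ci) evaluated at this sample location
    return abs(cam - loc[0]) + abs(cam - loc[1])

def sample_blending_field(sample_locs, n_cams, k):
    """For each sample location: compute all camera penalties, keep the K
    smallest, and store normalized continuous blending weights."""
    field = {}
    for loc in sample_locs:
        pens = [penalty_at(i, loc) for i in range(n_cams)]
        order = sorted(range(n_cams), key=lambda i: pens[i])
        thresh = pens[order[k]]  # (K+1)-st smallest penalty (assumed > 0)
        raw = [0.0] * n_cams
        for i in order[:k]:
            raw[i] = 1.0 - pens[i] / thresh
        total = sum(raw) or 1.0
        field[loc] = [w / total for w in raw]
    return field

locs = [(x, y) for x in range(3) for y in range(3)]
field = sample_blending_field(locs, n_cams=4, k=2)
print(field[(0, 0)])
```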

Hardware-Assisted Algorithm

Clear frame buffer
for each camera i do
  Set current texture and projection matrix
  Copy blending weights to vertices' alpha channel
  Draw triangles with non-zero alphas
end for

[Two stages: sample the blending field, then render with graphics hardware.]

Blending over one triangle


Demo

Future Work

• Optimal sampling of the camera blending field
• More complete treatment of resolution effects in IBR
• View-dependent geometry proxies
• Investigation of the geometry vs. images tradeoff

Conclusions

Unstructured Lumigraph Rendering
• unifies view-dependent texture mapping and lumigraph rendering methods
• allows rendering from unorganized images
• sampled camera blending field

Acknowledgements

Thanks to the members of the MIT Computer Graphics Group and the Microsoft Research Graphics and Computer Vision Groups.

• DARPA ITO Grant F30602-971-0283

• NSF CAREER Awards 9875859 & 9703399

• Microsoft Research Graduate Fellowship Program

• Donations from Intel Corporation, Nvidia, and Microsoft Corporation
