Dynamically Reparameterized Light Fields
Aaron Isaksen, Leonard McMillan (MIT), Steven Gortler (Harvard)
SIGGRAPH 2000
Presented by Orion Sky Lawlor, cs497yzy, 2003/4/24
Introduction
Lightfield Acquisition
Image Reconstruction
Synthetic Aperture
Introduction
Rendering cool pictures is hard
Rendering them in realtime is even harder
(Partial) solution: image-based rendering
• Acquire or pre-render many images
• At display time, recombine the existing images somehow
Standard sampling problems: aliasing, acquisition, storage
Why use Image-based Rendering?
Captures arbitrarily complex material/light interactions
• Spatially varying glossy BRDFs
• Global, volumetric, subsurface, ...
Display speed is independent of scene complexity
• Excellent for natural scenes
Non-polygonal description avoids
• Difficulty doing sampling & LOD
• Cracks, watertight, manifold, ...
Why not use Image-based Rendering?
Must acquire images beforehand
Fixed scene & lighting
• Often only the camera can move
Predetermined sampling rate
• Undersampling, aliasing problems
Predetermined set of views
• Can't look in certain directions!
Acquisition painful or expensive
Must store many, many images
• Yet access must be quick
How do Lightfields not Work?
At every point in space, take a picture (or environment map):
3D Space, 2D Images => 5D
Display is just image lookup!

Why don't Lightfields work like that?
These images all contain duplicate rays, again and again
3D Space, 2D Images => 5D
How do Lightfields actually Work?
We can thus get away with just one layer of cameras:
2D Cameras, 2D Images => 4D Lightfield
Reconstructed novel viewpoint
Only assumption: rays are unchanged along their path
Display means interpolating several views
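This dimensional reduction is the key trick, and it can be stated precisely: in free space, radiance is constant along a ray. In standard plenoptic-function notation (my symbols, not the slides'):

$$L(x + \tau\,\omega,\ \omega) \;=\; L(x,\ \omega) \quad \text{for all } \tau,$$

so radiance depends only on the ray itself, and the set of oriented lines in 3D is four-dimensional. A 4D sample therefore captures everything, as long as nothing blocks or scatters the rays between the camera plane and the viewer.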
Camera Array Geometry
(Illustration: Isaksen, MIT)
Introduction
Lightfield Acquisition
Image Reconstruction
Synthetic Aperture
How do you make a Lightfield?
Synthetic scene
• Render from different viewpoints
Real scene
• Sample from different viewpoints
In either case, need:
• Fairly dense sampling (lots of data, so compression is useful)
• Good antialiasing, over both the image plane (pixels) and the camera plane (apertures)
A minimal capture loop for the synthetic case is sketched below.
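For a synthetic scene, acquisition reduces to a loop over a grid of camera positions. A minimal sketch, assuming a caller-supplied render(cam_pos) function standing in for whatever renderer you have (a hypothetical interface, not from the paper):

```python
import numpy as np

def capture_lightfield(render, grid_n=16, spacing=0.05, img_res=256):
    """Sample a 4D lightfield: a grid_n x grid_n array of cameras on the
    z=0 plane, each contributing one img_res x img_res RGB image.
    `render(cam_pos)` must return an (img_res, img_res, 3) float image."""
    lf = np.zeros((grid_n, grid_n, img_res, img_res, 3), dtype=np.float32)
    for i in range(grid_n):           # camera row (t axis)
        for j in range(grid_n):       # camera column (s axis)
            cam_pos = np.array([(j - grid_n / 2) * spacing,
                                (i - grid_n / 2) * spacing,
                                0.0])
            lf[i, j] = render(cam_pos)
    return lf
```

Even at this modest resolution the result is 16 x 16 x 256 x 256 x 3 floats, roughly 200 MB uncompressed, which is why the slides stress compression.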
XY Motion Control Camera Mount
(Isaksen, MIT)
8 USB Digital Cameras, covers removed
(Jason Chang, MIT)
Lens array (bug boxes!) on a flatbed scanner (Jason Chang, MIT)
(Lightfield: Isaksen, MIT)
Introduction
Lightfield Acquisition
Image Reconstruction
Synthetic Aperture
Lightfield Reconstruction
To build a view, just look up light along each outgoing ray:
Camera Array
Reconstructed novel viewpoint
Need both direction and camera interpolation
Two-Plane Parameterization
Parameterize any ray via its intersection with two planes:
• Focal plane, for ray direction
• Camera plane
May need 6 pairs of planes to capture all sides of a 3D object
(Slide: Levoy & Hanrahan, Stanford)
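In code, the two-plane parameterization is just a pair of ray-plane intersections. A minimal sketch, with the camera plane placed at z=0 and the focal plane at z=F (the axis choice is mine, for concreteness):

```python
import numpy as np

def ray_to_stuv(origin, direction, F=1.0):
    """Two-plane coordinates of a ray: (s, t) is where it crosses the
    camera plane z=0, and (u, v) where it crosses the focal plane z=F.
    Assumes the ray is not parallel to the planes (direction[2] != 0)."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    s, t = (o + (0.0 - o[2]) / d[2] * d)[:2]   # hit point on z=0
    u, v = (o + (F   - o[2]) / d[2] * d)[:2]   # hit point on z=F
    return s, t, u, v
```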
Camera and Direction Interpolation
(Slide: Levoy & Hanrahan, Stanford)
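Reconstruction then means evaluating the sampled 4D function at each view ray's (s, t, u, v), interpolating in both the camera dimensions and the image dimensions. A sketch of that quadrilinear lookup, indexing the array produced by the capture sketch above (the scaling of coordinates to array indices is assumed, not from the paper):

```python
import numpy as np

def sample_lightfield(lf, t, s, v, u):
    """Quadrilinear interpolation of one lightfield sample.
    lf has shape (T, S, V, U, 3); (t, s) are fractional camera indices
    and (v, u) fractional pixel indices, clamped to the valid range."""
    coords, dims = (t, s, v, u), lf.shape[:4]
    lo = [min(max(int(np.floor(c)), 0), n - 2) for c, n in zip(coords, dims)]
    frac = [c - l for c, l in zip(coords, lo)]
    out = np.zeros(3)
    for corner in range(16):                  # 16 corners of the 4D cell
        weight, idx = 1.0, []
        for axis in range(4):
            bit = (corner >> axis) & 1
            idx.append(lo[axis] + bit)
            weight *= frac[axis] if bit else 1.0 - frac[axis]
        out += weight * lf[tuple(idx)]
    return out
```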
Mapping camera views to screen
Can map camera view to new viewpoint using texture mapping (since everything's linear)
(Figure: Isaksen, MIT)
New Camera
Old Camera
Focal Plane
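One way to see why plain texture-mapping hardware suffices: the composite map from a new-view pixel, through the focal plane, into an old camera's image is a 3x3 projective transform. A sketch using the textbook plane-induced homography (my formulation and symbols, not the paper's):

```python
import numpy as np

def plane_homography(K_old, K_new, R, t, n, d):
    """3x3 homography taking new-view pixels to old-camera pixels through
    the plane n . X = d (the focal plane, in the new camera's frame).
    X_old = R @ X_new + t relates the two camera frames; K_* are the
    usual 3x3 intrinsic matrices."""
    return K_old @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_new)

def warp_pixel(H, x, y):
    """Map one pixel through H, with the homogeneous divide."""
    px, py, pw = H @ np.array([x, y, 1.0])
    return px / pw, py / pw
```

Graphics hardware applies exactly this class of transform when texture-mapping a quad, which is why the lookup runs at display rates.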
Lightfield Reconstruction (again)
To build a view, just look up light along each outgoing ray:
Camera Array
Reconstructed novel viewpoint
Reconstruction done via graphics hardware & laws of perspective
Related: Lenticular Display
Replace cameras with directional emitters, like many little lenses:
Reconstructed novel viewpoint
Reconstruction done in free space & laws of optics
Lens array
Image
Optional Blockers
(Isaksen)
Related: Holography
A hologram is just a sampling plane with directional emission:
Reconstructed novel viewpoint
Reconstruction done in free space & coherent optics
Holographic film
Interference patterns on film act like little diffraction gratings, and give directional emission.
Reference Beam
(Hanrahan)
Introduction
Lightfield Acquisition
Image Reconstruction
Synthetic Aperture
Camera Aperture & Focus
Non-pinhole cameras accept rays from a range of locations:
Stuff’s in focus here
Stuff’s blurry out here
Lens
One pixel on CCD or film
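The in-focus distance is governed by the thin-lens equation (standard optics background, not from the slides): an object at distance $d_o$ from a lens of focal length $f$ comes to focus at sensor distance $d_i$ when

$$\frac{1}{f} \;=\; \frac{1}{d_o} + \frac{1}{d_i}.$$

Points at other depths focus in front of or behind the sensor, so each spreads over a blur circle on the film.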
Camera Aperture
Can vary effective lens size by changing physical aperture ("hole")
On a camera, this is the f-stop
Small aperture: not much blurring (long depth of field)
Big aperture: lots of blurring (short depth of field)
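The tradeoff can be quantified. For a lens of aperture diameter $A$ and focal length $f$ focused at depth $d_f$, a point at depth $d$ blurs to a circle of diameter approximately (a standard thin-lens approximation, stated here for intuition):

$$c \;\approx\; A \,\frac{f}{d_f - f}\,\frac{|d - d_f|}{d}.$$

Blur grows linearly with $A$, so halving the aperture roughly doubles the usable depth of field.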
Synthetic Aperture
Can build a larger aperture in postprocessing, by combining smaller apertures
Big Assembled Aperture
Note: you can assemble a big aperture out of small ones, but not split a small aperture from a big one—it’s easy to blur, but not to un-blur.
Same depth blurring as with a real aperture!
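In code, assembling the big aperture is just an average over the small ones, once each camera's image has been shifted so the chosen focal plane lines up across views. A minimal shift-and-add sketch, assuming cameras on a plane, integer pixel shifts, and an image-space focal length f_img that I am supplying as a hypothetical calibration constant:

```python
import numpy as np

def synthetic_aperture(images, positions, focal_depth, aperture_radius,
                       f_img=256.0):
    """Average all sub-aperture images whose cameras fall inside the
    synthetic aperture, shifting each in proportion to baseline / depth
    so the plane at focal_depth aligns.
    images: (N, H, W, 3); positions: (N, 2) camera (x, y) offsets.
    np.roll wraps at image borders, which a real implementation would
    handle by padding; kept here for brevity."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    count = 0
    for img, (cx, cy) in zip(images, positions):
        if cx * cx + cy * cy > aperture_radius ** 2:
            continue  # camera lies outside the synthetic aperture
        dx = int(round(f_img * cx / focal_depth))
        dy = int(round(f_img * cy / focal_depth))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        count += 1
    return acc / max(count, 1)
```

Growing aperture_radius admits more cameras and shortens the depth of field; shrinking it toward a single camera recovers the original small-aperture image.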
Synthetic Aperture Example
Vary reconstructed camera’s aperture size: a larger synthetic aperture means a shorter “depth of field”—shorter range of focused depths.
(Illustration: Isaksen, MIT)
Camera Focal Distance
Can vary real focal distance by changing the camera's physical optics
Far Near
Synthetic Aperture Focus
With a synthetic aperture, can vary focus by varying direction
Synthetic Far Synthetic Near
Note: this only works exactly in the limit of small source apertures, but works OK for finite apertures.
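"Varying direction" amounts to a shear of the 4D data: to focus at depth $z$, each source image is shifted in proportion to its camera's offset on the camera plane, which is exactly the focal_depth parameter in the sketch above. Symbolically (the standard shift-and-add relation, in my notation):

$$\Delta u_i \;=\; \frac{f_{\mathrm{img}}}{z}\,(s_i - s_0),$$

where $s_i - s_0$ is camera $i$'s offset from the synthetic viewpoint. Changing $z$ refocuses the picture without recapturing anything.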
Synthetic Aperture Focus: Aliasing
Aliasing artifacts can be caused by focal plane mismatch
Synthetic Far Synthetic Near
Point sampling along this plane causes aliasing artifacts
Blurring along this plane due to source focal length
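A useful rule of thumb (my hedged restatement of the sampling argument, not a formula from the slides): ghosting appears once the residual disparity between adjacent cameras, for scene points off the chosen focal plane, exceeds about a pixel,

$$\Delta s \, f_{\mathrm{img}} \left| \frac{1}{z} - \frac{1}{z_{\mathrm{focal}}} \right| \;\gtrsim\; 1 \ \text{pixel},$$

where $\Delta s$ is the camera spacing. Denser cameras, a focal plane nearer the true depth, or blur on the source side (as in the figure) all suppress it.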
Variable Focal Plane Example
Vary reconstructed camera’s focal length: just a matter of changing the directions before aperture assembly.
(Illustration: Isaksen, MIT)
Advantages of Synthetic Aperture:
Can simulate a huge aperture
• Impractical with a conventional camera
Can even tilt the focal plane
• Impossible with conventional optics!
(Illustration: Isaksen, MIT)
Conclusions
Lightfields are a unique way to represent the world
• Support arbitrary light transport
• Equivalent to holograms & lenticular displays
Isaksen et al.'s synthetic aperture technique allows lightfields to be refocused
Opportunity to extract more information from lightfields