CS194 -- Light Fields
TRANSCRIPT
Image Manipulation & Computational Photography, UC Berkeley CS194, Fall 2016
Light Fields -- Guest Lecturer: Ren Ng
What’s Happening Inside the Camera?
Cross-section of Nikon D3, 14-24mm F/2.8 lens
Three Focus-Related Problems in 2D Photography
1. Need to focus before taking the shot
Credit: Simon Bruty, Sports Illustrated
2. Trade-off between depth of field and motion blur
f/4, 0.01 sec
f/11, 0.1 sec
f/32, 0.8 sec
3. Lens designs are complex due to optical aberrations
Light Field Photography Demo
Light Field Photographs
Lytro ILLUM with 30-250mm (equiv) lens F/2
Lens Designed For Light Field Computation
2D Photographs vs 4D Light Fields
Photograph = light arriving at all points in image (2D)
Light field = light traveling along every ray (4D)
The 4D Light Field Flowing Into A Camera
[Figure: cross-section ray diagram; each ray is parameterized by where it crosses the lens plane (u) and the focal plane/sensor (x).]
What Does a 2D Photograph Record?
[Figure: (x, u) ray-space diagram; a 2D photograph integrates over u at each x.]
Imagine Recording the Entire 4D Light Field
[Figure: (x, u) ray-space diagram with every ray sampled individually.]
Capturing Light Fields
A Plenoptic Camera Samples The Light Field
[Figure: (x, u) ray-space diagram showing the plenoptic camera's sampling pattern.]
Microlens array + sensor = light field sensor
Where Microlenses Go Inside Camera
[Figure: microlens array placement between the main lens and the sensor.]
Cover glass (0.5 mm thick)
Air gap (0.04 mm thick)
Microlenses (0.02 mm spacing)
CMOS pixels (0.0014 mm spacing)
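Using the spacings on this slide, a quick sanity check (a sketch using only the slide's numbers) shows how many sensor pixels fall under each microlens, which is also the angular (u,v) resolution of each disk image:

```python
# Numbers from the slide: microlens spacing and CMOS pixel spacing.
microlens_pitch_mm = 0.02     # 0.02 mm between microlens centers
pixel_pitch_mm = 0.0014       # 0.0014 mm between pixel centers

# Pixels spanned by one microlens = angular samples per disk image.
pixels_per_microlens = microlens_pitch_mm / pixel_pitch_mm
print(round(pixels_per_microlens))  # ≈ 14 pixels across each disk
```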
Raw Data From Light Field Sensor
[Figure: raw sensor image, a grid of disk images; each disk ("one disk image") is formed under a single microlens.]
Mapping Sensor Pixels to (x,y,u,v) Rays
Microlens location in image field of view gives the (x,y) coordinate
Pixel location in microlens image gives the (u,v) coordinate
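This mapping can be sketched in code. Below is a minimal decode, assuming an idealized sensor where each microlens covers an exactly aligned n_u × n_v block of pixels (real decoders must also calibrate disk centers, rotation, and vignetting):

```python
import numpy as np

def decode_raw(raw, n_u, n_v):
    """raw: (H, W) sensor image with H = n_y * n_v, W = n_x * n_u (idealized grid)."""
    H, W = raw.shape
    n_y, n_x = H // n_v, W // n_u
    blocks = raw.reshape(n_y, n_v, n_x, n_u)   # split rows/cols into per-microlens blocks
    return blocks.transpose(2, 0, 3, 1)        # reorder axes to L[x, y, u, v]

raw = np.arange(36).reshape(6, 6)              # toy sensor: 2x2 microlenses, 3x3 pixels each
L = decode_raw(raw, n_u=3, n_v=3)
print(L.shape)                                 # (2, 2, 3, 3)
```

With this layout, L[x, y, u, v] reads back the raw pixel at row 3y+v, column 3x+u.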
Test Your Understanding
Sub-Aperture Images
Image from selecting the same pixel under every microlens
[Figure: sub-aperture images for min u and max u, with the corresponding horizontal slices of the (x, u) ray-space diagram.]
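Assuming the raw data has been decoded into a 4D array L[x, y, u, v] (a hypothetical layout, stated here so the sketch stands alone), extracting a sub-aperture image is just a slice:

```python
import numpy as np

def sub_aperture(L, u0, v0):
    """Select the same pixel (u0, v0) under every microlens."""
    return L[:, :, u0, v0]

L = np.random.rand(10, 8, 5, 5)     # 10x8 microlenses, 5x5 angular samples each
view = sub_aperture(L, 2, 2)        # the central sub-aperture image
print(view.shape)                   # (10, 8)
```

Each choice of (u0, v0) is a view through a different part of the lens aperture.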
How Does Computational Refocusing Work?
Recall: How Physical Focusing Works
Sensor / lens gap determines plane of physical focus.
Credit: Stanford CS 178
Computational Refocusing
[Figure: animation frames; captured rays are reprojected to a virtual sensor plane ("compute ray projection"), allowing focus far or focus close after the shot.]
Output Image Pixel is Sum of Many Sensor Pixels
[Figure: a virtual focal plane in the ray diagram, and the corresponding slanted line of samples in the (x, u) ray-space diagram.]
Shift-And-Add Algorithm
for every sub-aperture image I(x,y):
• compute the (u,v) corresponding to that image
• shift the sub-aperture image by Δ(x,y) = C · (u,v)
• average the shifted image into an output image
Larger C means refocusing further from the physical focus. The sign of C determines whether you focus closer or further.
For non-integral Δ(x,y), use bilinear interpolation to blend into 4 nearest pixels of output image
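The steps above can be sketched as follows. This is a minimal version that rounds Δ to whole pixels and uses wrap-around shifts for brevity (the bilinear blend the slide describes handles fractional shifts properly); the L[u, v, y, x] array layout is an assumption:

```python
import numpy as np

def refocus(lf, C):
    """lf[u, v, y, x]: stack of sub-aperture images. C sets the virtual focal depth."""
    n_u, n_v = lf.shape[:2]
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0    # center of the lens aperture
    out = np.zeros(lf.shape[2:])
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(C * (v - cv)))        # shift proportional to (u, v)
            dx = int(round(C * (u - cu)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (n_u * n_v)                     # average the shifted images

lf = np.random.rand(5, 5, 32, 32)
img = refocus(lf, 1.0)
print(img.shape)                                 # (32, 32)
```

Note that C = 0 applies no shifts, reproducing the plane of physical focus as a plain average over the aperture.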
Sampling & Aliasing in Shift-And-Add Algorithm
Antialiasing Shift-And-Add Algorithm
for a dense sampling of (u,v) over the lens aperture:
• compute a virtual sub-aperture image by bilinear interpolation of the nearest 4 sub-aperture images
• shift the image by Δ(x,y) = C · (u,v)
• average the shifted image into an output image
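The fractional-shift step can be implemented with a small bilinear helper (a sketch; for simplicity shifts wrap around at the image borders):

```python
import numpy as np

def shift_bilinear(img, dy, dx):
    """Shift img by a possibly fractional (dy, dx), blending the 4 nearest pixels."""
    iy, ix = int(np.floor(dy)), int(np.floor(dx))
    fy, fx = dy - iy, dx - ix                       # fractional parts in [0, 1)
    a = np.roll(img, (iy,     ix    ), axis=(0, 1))
    b = np.roll(img, (iy,     ix + 1), axis=(0, 1))
    c = np.roll(img, (iy + 1, ix    ), axis=(0, 1))
    d = np.roll(img, (iy + 1, ix + 1), axis=(0, 1))
    return (a * (1 - fy) * (1 - fx) + b * (1 - fy) * fx
            + c * fy * (1 - fx)     + d * fy * fx)

img = np.eye(4)
half = shift_bilinear(img, 0.5, 0.0)   # blends the image with its 1-pixel shift
```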
Computationally Changing Depth of Field and Viewpoint
Computationally Extended Depth of Field
Conventional Lens at f/22
Light Field Lens at f/4, all-focus algorithm
[Agarwala 2004]
Conventional Lens at f/4
Partially Extended Depth of Field
[Figure: comparison of original DOF, partially extended DOF, and extended DOF.]
Tilted Focal Plane
View Camera, Scheimpflug Rule
Source: David Summerhayes, http://www.luminous-landscape.com/tutorials/focusing-ts.shtml
Computational Change of Viewpoint
[Figure: lateral movement (left/right), forward movement (wide-angle effect), backward movement (orthographic effect).]
Moving the Viewpoint Side-to-Side
Moving viewpoint laterally = selecting a different sub-aperture image
Marc Levoy
Moving the Viewpoint Back-to-Front
Moving viewpoint in/out = selecting pixels from different sub-aperture images
Marc Levoy
Many Ways to Capture Light Fields
Spherical Gantry ⇒ 4D Light Field
The original light field rendering paper: take photographs of an object from all points on an enclosing sphere. This captures all light leaving the object – like a hologram.
L(x, y, θ, φ)
Slide credit: Pat Hanrahan. [Levoy & Hanrahan 1996] [Gortler et al. 1996]
Multi-Camera Array ⇒ 4D Light Field
Slide credit: Pat Hanrahan
Two-Plane Light Field
2D Array of Cameras ⇒ 2D Array of Images
L(u, v, s, t)
Slide credit: Pat Hanrahan
Camera Arrays
Very large “virtual aperture”; very flexible imaging. [Wilburn et al. 2005] [Yang et al. 2002]
Light Field Microscope
Use a microlens array in the microscope imaging path [Levoy et al. 2006]
Mandibles of a silk worm
[Levoy et al 2006]
Fern spore
[Levoy et al. 2006]
Modeling Light - Another Way to Get to Light Fields
What Do We See?
The Plenoptic Function
Q: What is the set of all things that we can ever see? A: The Plenoptic Function (Adelson & Bergen)
Let’s start with a stationary person and try to parameterize everything that person can see…
Figure by Leonard McMillan
Grayscale Snapshot
P(θ, φ) is intensity of light
• Seen from a single viewpoint
• At a single time
• Averaged over the wavelengths of the visible spectrum
(can also do P(x,y), but spherical coordinates are nicer)
Color Snapshot
P(θ, φ, λ) is intensity of light
• Seen from a single viewpoint
• At a single time
• As a function of wavelength
A Movie
P(θ, φ, λ, t) is intensity of light
• Seen from a single viewpoint
• Over time
• As a function of wavelength
Holographic Movie
P(θ, φ, λ, t, Vx, Vy, Vz) is intensity of light
• Seen from ANY viewpoint
• Over time
• As a function of wavelength
The Plenoptic Function
P(θ, φ, λ, t, Vx, Vy, Vz)
• Can reconstruct every possible view, at every moment, from every position, at every wavelength
• Contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality! Not bad for a function…
The 5D Plenoptic Function
• Ignore time and wavelength
• Focus just on the spatial structure of light
P(θ, φ, Vx, Vy, Vz)
4D Light Field
• In a region of free space, the 5D plenoptic function simplifies to 4D because light is constant along a ray
• In this lecture we focused on the 4D light field that flows into the body of a camera
P(θ, φ, Vx, Vy) = P(u, v, s, t)
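The reduction can be written out as a one-step derivation. Because radiance does not change as light travels along an unoccluded ray, the function is constant along each ray, so one positional degree of freedom is redundant and a ray can instead be labeled by its intersections with two planes, as in the two-plane parameterization:

```latex
% Radiance is constant along a ray in free space:
P(\mathbf{p} + t\,\boldsymbol{\omega},\ \boldsymbol{\omega})
  = P(\mathbf{p},\ \boldsymbol{\omega}) \quad \text{for all } t,
% so a ray may be indexed by its intersections (u,v) and (s,t)
% with two parallel planes:
P(\theta, \phi, V_x, V_y, V_z)\ \longrightarrow\ P(u, v, s, t).
```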
On Simulating the Visual Experience
Just feed the eyes the right data
• No one will know the difference!
Philosophy:
• Ancient question: “Does the world really exist?”
Physics:
• “Slowglass” might be possible?
Computer Science:
• Virtual Reality
Virtual Reality Capture (Outward Facing Light Fields)
GoPro Odyssey / Google Jump
Credit: Simon Crisp, gizmag.com
Things to Remember
4D light field: radiance along every ray
Light field camera
• Capture the light field flowing into the lens in every shot
• Light field sensor = microlens array in front of the sensor
Computational refocusing
• Refocusing = reproject rays assuming a new sensor depth
• Can think of this as shift-and-add of sub-aperture images
Plenoptic Function / Light Field
• Represents the total geometric structure of light