graphics.stanford.edu/papers/lfediting/light_field_editing.pdf

Interactive Light Field Editing and Compositing

Billy Chen∗ Daniel Horn∗

∗Stanford University

Gernot Ziegler† Hendrik P. A. Lensch∗

† MPI Informatik

(a) (b) (c) (d)

Figure 1: Our system enables a user to interactively edit and composite light fields. This figure shows some of the operations that can be accomplished by the system. In (a) we create a light field by compositing multiple copies of a giraffe and flower light field. Each copy is twisted or turned differently to create a visually interesting scene. (b) is an image where objects have been selectively focused or defocused to create an artistic effect. Even though objects are defocused, occlusions between them are maintained and defocus blur blends with objects hidden behind occluders. This effect cannot be accomplished with a single image of each object. Image (c) shows light fields of refractive objects (glass balls) composited with a background light field. Because the background is a light field, the refracted image exhibits parallax and visibility change in different views. Note that we have chosen to focus on the virtual image in the refracted glass ball; consequently, the real image of the background is defocused. In (d) we approximate soft shadows cast from the giraffe light field onto the flower light field. Notice the penumbra of the shadow cast on the ground due to the light and notice that the shadow interacts with the flower believably.

Abstract

Light fields can be used to represent an object's appearance with a high degree of realism. However, unlike their geometric counterparts, these image-based representations lack user control for editing and manipulating them. We present a system that allows a user to interactively edit and composite light fields. We introduce five basic operators, alpha-compositing, apply, multiplication, warping, and projection, which may be used in combination. To describe how multiple operators and light fields are combined together, we introduce a compositing language similar to those used for combining shader programs. To enable real-time compositing, we have developed a framework for the programmable graphics hardware that implements the five basic operators and provides mechanisms for users to develop combinations of these. We built an interactive system using this framework and demonstrate its use for editing and authoring new light fields.

Keywords: light fields, image-based modeling and rendering, 3D photography

∗e-mail: {billyc|danielrh|lensch}@graphics.stanford.edu
†e-mail: [email protected]

1 Introduction

A light field [Levoy and Hanrahan 1996; Gortler et al. 1996] is a four-dimensional function mapping rays in free space to radiance. Using light fields to represent scene information has become a popular technique for rendering photo-realistic images. Thus, the need arose to manipulate these datasets as flexibly as images and 3D models. In this paper, we present a system that allows a user to interactively edit and composite 4D light fields. Interactive rendering rates are achieved by exploiting programmable graphics hardware. Users can perform light field insertion, deformation and removal, refocusing, rapid prototyping of photo-realistic scenes, or simulation of refractive effects in our framework.

To support these capabilities, we introduce five basic operators: alpha-compositing, apply, multiplication, warping, and projection. In combination, these operators enable a large number of ways to edit a light field. The operators are designed to be simple so that they can be implemented as basic functions on programmable graphics hardware. Complex operators can be created by combining several basic operators together.

To describe how this combining occurs, we introduce a compositing language, similar to those used to describe shaders [Cook 1984; Proudfoot et al. 2001] or pixel streams [Perlin 1985]. The language provides a structured foundation to composite light fields. This language also serves as an interface between the user and the graphics hardware.

We have developed a simple hardware-accelerated framework for representing light fields, their operators, and the data flow. The framework provides a high-level abstraction layer for manipulating light fields, rather than working directly at the pixel level. We use this framework to interactively edit and composite light fields.


The rest of the paper is organized as follows. First, we describe related work in the areas of compositing and image-based editing. Second, we present the basic operators and how to combine them. Third, we show how these operators can be implemented on programmable graphics hardware to enable interactive editing. Finally, we describe four useful editing operations using the basic operators and demonstrate their use on a variety of light fields.

2 Related work

The idea of compositing images was originally used in film and video production to simulate effects like placing an image of an actor in front of a different background [Fielding 1972]. In 1984, Porter and Duff introduced the alpha channel in order to composite digital images [Porter and Duff 1984]. Recently, Shum et al. extended 2D compositing to 4D light fields [Shum and Sun 2004]. They built a system that allows a user to manually segment a light field into layers. The system composites these layers together using a technique called coherence matting. Our framework offers a similar, but simpler compositing operator amongst a larger gamut of operators. Our focus is on simple operators that can be combined easily on the graphics hardware.

Compositing is one way to edit images. Many researchers have investigated other editing operations for either a single image or up to a handful of images. Oh et al. built a system that takes a single image as input and allows the user to augment it with depth information [Oh et al. 2001]. Subsequent editing operations, like cloning and filtering, can be performed with respect to scene depth. Barsky has used a similar image representation to simulate how a scene would be observed through a human optical system [Barsky 2004].

To handle multiple images of a scene, Seitz and Kutulakos developed a system that builds a voxel representation [Seitz and Kutulakos 1998]. In this system, 2D editing operations, like scissoring or morphing, are propagated to the other images using the voxel representation. In our system, we use sufficiently sampled light fields as an alternative representation of the scene, and our operators manipulate each light field individually.

For editing light fields, several researchers have built systems that take light fields as input, perform a specific editing task, and return a new light field. Two such systems allow for warping light fields [Zhang et al. 2002; Chen et al. 2005]. In the former, two input light fields are combined to produce a morphed version. In the latter, an animator can interactively deform an object represented by a light field. Our system offers a general editing framework where warping is one of many operations.

The idea of combining operators to produce novel ones is inspired by early work on flexible shading models [Cook 1984]. Cook uses a shading tree to represent a wide range of shading characteristics. In our paper, we describe a language that allows light fields to be combined.

3 Basic light field operators

A light field is a four-dimensional function that takes a ray as input, and returns the radiance along that ray. We augment the radiance at each ray with a scalar alpha quantity that represents the opacity of the light field at that ray, and call this the RGBA color.

Given this representation of a light field, we now define five basic operators that enable light field compositing and editing. The first four operators composite and manipulate light fields. The last operator computes 2D projections of light fields.

3.1 Basic operators

Notations and Conventions
Each of the following operators takes one or two light fields as input and returns a single light field. The exception is the projection operator, which returns a 2D image. When operating upon colors, we perform the same operation component-wise on each channel unless noted. To obtain the scalar opacity value from a color c we write c[α].

Alpha compositing
The alpha compositing operator enables correct depth compositing of multiple light fields in a scene. This operator can perform all twelve of Porter and Duff's [1984] compositing operations on 4D light fields¹. Similar to their formulation, we also assume colors are premultiplied by the alpha channel.

Given a ray, r, we define the compositing operator C as

CFA,FB(LA, LB)(r) = LA(r) · FA(LB(r)) + LB(r) · FB(LA(r)) (1)

where FA(c) and FB(c) are functions that take in a color and return a scalar.

As a shorthand, when FA(c) or FB(c) are defined in one of the following ways:

F(c) = 0
F(c) = 1
F(c) = c[α]
F(c) = 1 − c[α]

we give these functions the names 0, 1, α, and 1−α, respectively.

Thus the traditional over, addition, and out operators can be represented as follows:

LA over LB = C1,1−α(LA, LB)
LA + LB = C1,1(LA, LB)
LA out LB = C1−α,0(LA, LB)
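The definitions above can be sketched in a few lines of Python. This is an illustrative model only, not the paper's GPU implementation: a light field is modelled as a function from a ray to a premultiplied RGBA tuple, and all names here are ours.

```python
# Eq. 1: C_{FA,FB}(LA, LB)(r) = LA(r)*FA(LB(r)) + LB(r)*FB(LA(r))
def C(FA, FB):
    """Build a compositing operator from two color -> scalar blend functions."""
    def composite(LA, LB):
        def L(r):
            ca, cb = LA(r), LB(r)  # premultiplied RGBA colors
            return tuple(a * FA(cb) + b * FB(ca) for a, b in zip(ca, cb))
        return L
    return composite

# The four named blend functions: 0, 1, alpha, and one-minus-alpha.
ZERO      = lambda c: 0.0
ONE       = lambda c: 1.0
ALPHA     = lambda c: c[3]        # c[alpha]: the opacity channel
ONE_MINUS = lambda c: 1.0 - c[3]

# The traditional operators, exactly as listed above.
over = C(ONE, ONE_MINUS)   # LA over LB
add  = C(ONE, ONE)         # LA + LB
out  = C(ONE_MINUS, ZERO)  # LA out LB
```

For example, compositing a constant half-transparent red light field over a constant opaque blue one yields (0.5, 0.0, 0.5, 1.0) at every ray.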

Other Binary Operators: apply and multiplication
While we can use the compositing operator above to define addition by setting blending fractions FA and FB to 1, compositing does not support arbitrary binary operations on colors. Thus we define the new operator apply, which takes two light fields and a binary function that maps colors to a color. Apply then returns a new light field where the binary function has been applied to the colors of the corresponding rays in both light fields.

AF(LA, LB)(r) = F(LA(r), LB(r)) (2)

We can then define light field multiplication by passing a binary function, color multiply, to apply. color multiply is a function that takes two colors and performs a component-wise multiplication in each color channel, including alpha.

LA × LB = Acolor multiply(LA,LB) (3)
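As a rough Python sketch (light fields again modelled as ray → RGBA functions; the function names are illustrative, not the system's API):

```python
# Eq. 2: the apply operator lifts a binary color function to light fields.
def apply_op(F, LA, LB):
    """Return a light field that applies F ray-by-ray to LA and LB."""
    return lambda r: F(LA(r), LB(r))

def color_multiply(ca, cb):
    # Component-wise product over all channels, including alpha.
    return tuple(a * b for a, b in zip(ca, cb))

# Eq. 3: light field multiplication is apply with color_multiply.
def multiply(LA, LB):
    return apply_op(color_multiply, LA, LB)
```

For instance, multiplying a tint (1.0, 0.5, 0.5, 1.0) into a color (0.2, 0.4, 0.6, 1.0) gives (0.2, 0.2, 0.3, 1.0).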

¹Our compositing operators can also be applied to 2D images since an image can be expressed as a simple 4D light field.


Warping
The warping operator deforms a light field function L by transforming the directions of input rays. The warping operator takes two inputs, the original light field L and any warping function F that maps input rays to output rays. The warping operator is defined as:

W(L, F)(r) = L(F(r)) (4)
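Equation 4 is just function composition, which a minimal Python sketch makes explicit (the shift_s example warp is ours, for illustration):

```python
# Eq. 4: the warped light field evaluates L at the re-mapped ray F(r).
def W(L, F):
    return lambda r: L(F(r))

# Example warp: translate a (u, v, s, t) ray by one unit along s.
shift_s = lambda r: (r[0], r[1], r[2] + 1.0, r[3])
```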

Projection
The task of the projection operator is to map a 4D light field to a 2D image for viewing.

The projection operator selects rays from the input light field and organizes them into an image. In our system we use a canonical pinhole camera model centered at (0,0,0) and facing towards negative z, with a field of view of 90 degrees and an aspect ratio of 1. Given that L is the space of all 4D light field functions, and I is the space of all images, then:

P : L → I (5)

Additionally, we provide an abstraction over our canonical P that takes in the location of an arbitrary pinhole and the extent of an image plane, and creates a projection from that new camera location. It accomplishes this by applying a warping function to the input light field to orient it to our underlying canonical projection operation. In our paper, we use Ph,c,r,u to represent this abstraction. It denotes taking a 4D-to-2D pinhole projection at location h onto an image plane parameterized by its center, right, and up vectors (c,r,u).
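The canonical projection can be sketched as follows. This is a simplified model: the pinhole sits at the origin looking down −z with a 90-degree field of view (so the z = −1 image plane spans [−1, 1]), and we pass the light field an (origin, direction) ray rather than the paper's two-plane ray parameterization.

```python
# Sketch of the canonical projection operator P (Eq. 5): convert each
# display pixel to a ray through the pinhole and evaluate the light field.
def project(L, width, height):
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Pixel center in normalized device coordinates, in [-1, 1].
            ndc_x = 2.0 * (x + 0.5) / width - 1.0
            ndc_y = 1.0 - 2.0 * (y + 0.5) / height
            # Ray = (origin, direction); tan(45 deg) = 1, so no scaling.
            row.append(L(((0.0, 0.0, 0.0), (ndc_x, ndc_y, -1.0))))
        image.append(row)
    return image
```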

3.2 A light field compositing language

In order to describe how operators interact, we introduce a compositing language that combines operators into an expression similar to code in a shading language. For example, the following expression:

Ph,c,r,u(Lwindowframe over (Lglass × (Lflower over Ltable))) (6)

describes a pinhole image of a light field created by alpha-compositing a light field of a flower with a table and viewing it through a window frame containing tinted glass. The pinhole camera is located at h with an image plane whose center, right, and up vectors are c, r, and u, respectively.
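Expression 6 maps directly onto nested function calls. A minimal Python sketch, with over and × modelled as higher-order functions on ray → RGBA light fields, and with constant stand-ins for the actual light fields (all values here are illustrative):

```python
# "over" with premultiplied alpha: ca + cb * (1 - alpha_a), per Eq. 1.
def over(LA, LB):
    def L(r):
        ca, cb = LA(r), LB(r)
        return tuple(a + b * (1.0 - ca[3]) for a, b in zip(ca, cb))
    return L

# Light field multiplication (Eq. 3), component-wise including alpha.
def multiply(LA, LB):
    return lambda r: tuple(a * b for a, b in zip(LA(r), LB(r)))

windowframe = lambda r: (0.0, 0.0, 0.0, 0.0)   # fully transparent frame
glass       = lambda r: (0.9, 1.0, 0.9, 1.0)   # green-tinted glass
flower      = lambda r: (1.0, 0.0, 0.0, 1.0)   # opaque red flower
table       = lambda r: (0.5, 0.5, 0.5, 1.0)

# The body of Eq. 6, minus the final pinhole projection.
scene = over(windowframe, multiply(glass, over(flower, table)))
```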

4 Implementation on the GPU architecture

In order to keep the compositing process interactive, we make extensive use of the compute power and bandwidth available in modern programmable graphics hardware. Our framework provides a general interface for loading and manipulating 4D light fields on the GPU.

The framework consists of two parts: the data representation for light fields, and the functions that represent light field operators. To represent a light field, we use the two-plane parameterization [Levoy and Hanrahan 1996]. Hence, the light field is an array of images, where (u,v) describes the 2D location of a pinhole camera in the UV-plane. The (s,t) coordinates describe the location of a pixel in the ST-plane. All the camera images are aligned to the ST-plane. A ray r is parameterized by its two intersection points with the UV- and ST-planes. Because the light field is discretized, a color L(r) and its alpha are interpolated from nearby samples using quadralinear interpolation.

A light field using this representation is stored as a 3D texture on the graphics hardware. The UV axes are vectorized into the z axis of the 3D texture². The ST axes map directly to each xy-planar slice of the 3D texture. In other words, the z axis selects a camera in (u,v) coordinates, and the xy-plane is the image plane in (s,t) coordinates. The 3D texture is compressed using S3 texture compression [Iourcha et al.]. This extension provides a 4:1 compression ratio for RGBA data. The compression is key to being able to store multiple light fields in texture memory. Multiple light fields are each stored as separate 3D textures, each occupying one of the 16 available texture units. Multiple operator expressions can be associated with a single texture unit, allowing editing of multiple instances of a single light field without using additional texture units or memory.
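The (u, v, s, t) → 3D-texture addressing can be sketched as below. The 512-slice limit matches the footnote, but the tile-along-x layout and the exact index arithmetic are our assumptions for illustration; the paper does not specify the tiling scheme.

```python
def texel_address(u, v, s, t, nu, nv, width, max_depth=512):
    """Map integer light field coordinates to (x, y, z) texel coordinates
    for an nu x nv camera grid with width-pixel-wide camera images."""
    cam = v * nu + u                  # vectorize the UV grid into one index
    if nu * nv <= max_depth:
        return (s, t, cam)            # one camera image per xy slice
    # Otherwise tile several camera images side by side within each slice.
    per_slice = -(-(nu * nv) // max_depth)      # ceiling division
    return ((cam % per_slice) * width + s, t, cam // per_slice)
```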

The basic light field operators are represented as functions in a fragment program written in GLSL. To edit and composite light fields during runtime, the user specifies a GLSL expression using the defined basic functions. The user-definable functions for apply and composite are written directly in GLSL as functions mapping one or two colors to another color or scalar. The resulting fragment program is passed the virtual view, Pi,c,r,u, and the UV- and ST-plane locations for each light field. This fragment program is compiled and applied to the textures.

When the fragment program begins, the virtual view's image plane (c,r,u) is discretized into display pixels. As the fragment program is executed per display pixel, the pixel is converted into a ray using the location of the virtual view. This ray is passed into the user-supplied compositing expression to determine its RGBA color. A 2D image is formed when the fragment program has executed for all the display pixels.

5 Creating novel editing operators

We now describe four light field editing operators that are combinations of the basic operators described in Section 3. These operators are deformation, synthetic aperture focusing, refraction, and shadow casting.

5.1 Light field deformation

Light field deformation consists of supplying a warping function for each light field and compositing the resulting light fields together. Chen et al. used free-form deformation to specify the warping function [Chen et al. 2005]. In our system the user specifies a ray-to-ray warp as a function in the fragment shader. Figure 1a illustrates one example of a composition where two captured light fields were used to create a scene where a giraffe is surrounded by a grove of flowers.

To create this scene, let us begin with the captured datasets. Figure 2a is an image from a captured light field of a toy flower, after matting the flower from the background. The flower was acquired in front of a known background and we used a Bayesian matting technique [Chuang et al. 2001] to extract mattes for all images of the light field. The mattes are stored in the alpha component of the light field.

Let us call this light field Lflower and place it at the origin of a coordinate system where the xz plane is the ground plane and the y axis is the up direction in the image. We now define a twisting function on this light field. Each ray in freespace is parameterized by two points, its intersection with the UV- and the ST-planes of

²Many graphics cards limit the z-dimension of the 3D texture to 512, so we tile multiple camera images in a single xy-planar slice.


(a) (b) (c)

Figure 2: Compositing deformed light fields. (a) is an image from a captured light field of a toy flower. (b) shows an image of the toy flower deformed by the warping operator. (c) shows a simple composited scene using the compositing expression.

Lflower. We apply a rotation about the y axis on these two points by multiplying each point with the appropriate y-rotation matrix. The amount of rotation depends on the y (or height) component of each point. Points with greater y coordinates are rotated more. This creates a twisting effect on the light field, and we call this function Ftwist. We then apply Ftwist to Lflower using the warping operator:

Ltwisted = W(Lflower, Ftwist)
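The twist warp can be sketched as follows, with a ray modelled as a pair of 3D points (its intersections with the UV- and ST-planes). The rotation-rate constant is an assumption for illustration:

```python
import math

def rotate_y(p, angle):
    """Rotate a 3D point about the y axis."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def twist(ray, rate=0.5):
    """F_twist: rotate both intersection points about y by an angle
    proportional to each point's height, so higher points turn more."""
    p_uv, p_st = ray
    return (rotate_y(p_uv, rate * p_uv[1]),
            rotate_y(p_st, rate * p_st[1]))
```

The warped light field is then simply Ltwisted(r) = Lflower(twist(r)).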

Figure 2b is one image from the light field Ltwisted. Applying a similar warp to a light field of a toy giraffe, Lgiraffe, and compositing the two together produces Figure 2c, which uses the compositing expression:

W(Lgiraffe, Ftwist) over Ltwisted

Figure 5 in the results section shows a more complex example for correcting the poses of individuals in a single light field.

5.2 Synthetic aperture focusing

We now show how to define a synthetic aperture focusing operator from the basic operators to perform focusing within a light field. This form of focusing from light fields has been investigated by several researchers [Isaksen et al. 2000; Vaish et al. 2004; Levoy and Hanrahan 1996]. Figure 6b illustrates an example where the user has focused on the rear plane of flowers. We first show how this focusing operator can be represented in our framework, then we extend it to create a new form of imaging that creates views containing multiple planes of focus.

In a single-lens optical system, a pixel in the image plane is computed by taking the sum over the colors of all rays that pass through the lens and are incident to that pixel.

We transform this image formation process into a summation of pinhole images to utilize the pinhole projection operator in Equation 5. Figure 3 illustrates how this transformation is performed. The pinhole cameras have centers that lie in the lens aperture. Their image planes coincide with a common focal plane centered at fc with right and up vectors fr and fu. When an object lies off the focal plane, its pinhole images do not align. When summing these images, this misalignment creates blur due to defocus.

Therefore, to create an image with a shallow depth of field, we sum over multiple pinhole images, with each pinhole located at a sample point in the lens and with each image plane coinciding with the

Figure 3: Creating a focused image by summing pinhole images. The lens has been discretized to several sample positions. The focused image is computed by adding images produced by pinhole cameras lying in the lens aperture. Two pinhole positions, i and j, are shown as black points within the lens. Their fields of view are shown as black lines. The dashed lines show the distortion of each field of view through the lens. The image plane of each pinhole camera coincides with the focal plane.

focal plane, (fc, fr, fu). The corresponding compositing expression is as follows:

Σi∈S Pi,fc,fr,fu(L) (7)

where S is the set of discrete pinhole positions within the lens aperture, and Pi,c,r,u is a projection operator that renders a view of the light field from pinhole position i onto an image plane with center c, right vector r, and up vector u. We call this operator the focusing operator. Figure 6b illustrates focusing within a light field.
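The summation in Equation 7 can be sketched in Python. This is a toy illustration of the sum over lens samples, not the GPU implementation: the projection operator is passed in as a stub, and we average rather than sum so pixel values stay in range (an assumption on our part).

```python
def focus(project, lens_samples, L, focal_plane):
    """Synthetic aperture focusing: average the pinhole projections of L
    taken from each lens sample, all sharing one focal plane."""
    images = [project(i, focal_plane, L) for i in lens_samples]
    n = len(images)
    # Combine the per-pinhole images pixel by pixel.
    return [[sum(img[y][x] for img in images) / n
             for x in range(len(images[0][0]))]
            for y in range(len(images[0]))]
```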

This focusing operator creates an image where objects at a single plane in the scene are in focus. Sometimes a user may want to focus on objects at multiple depths. We call this image a multi-focal plane image because the final image has more than one depth in focus. This has uses in sports photography and movies.

In a multi-focal plane image, each object, represented by a separate light field, has its own focal plane. Therefore, for each object we can tweak the amount of defocus it has by simply moving its associated focal plane. In our formulation, all the light fields are viewed through a common lens aperture.

To create a multi-focal plane image, we step through each sample point on the lens as was done for a single plane of focus. However, for each sample point we create multiple pinhole images, one for each light field. The image plane for each pinhole projection is aligned to the focal plane for that light field, and a 2D image is rendered from that light field. The pinhole images are then composited using the over operator to form a single image per sample point. We then sum all sample-point images to form the multi-focal plane image.

For two light fields with two focal planes, the expression is:

Σj∈S Pj,c1,fr,fu(L1) over Pj,c2,fr,fu(L2)

where c1 and c2 are the locations of the centers of the focal planes for light fields L1 and L2, respectively. Figures 6c and d illustrate the multi-focal plane image.
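The two-light-field expression above can be sketched as a loop over lens samples. Both project and over2 are passed in as stubs here; everything is illustrative of the control flow, not of the system's API.

```python
def multifocal(project, over2, lens_samples, L1, c1, L2, c2):
    """Per lens sample j: project each light field onto its own focal
    plane, composite the two pinhole images, then sum over all samples."""
    total = None
    for j in lens_samples:
        img = over2(project(j, c1, L1), project(j, c2, L2))
        if total is None:
            total = img
        else:
            total = [[a + b for a, b in zip(ra, rb)]
                     for ra, rb in zip(total, img)]
    return total
```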

5.3 Refraction

We now define an operator that simulates objects with transmissive properties. This allows the object to refract other light fields in the scene. Figure 1c illustrates an example of two glass balls refracting two background light fields.


(a) (b)

Figure 4: Refraction and reflection. (a) shows an image of Lcalibration refracted through a glass ball light field. Notice that the virtual image is flipped, which is the correct effect for a solid glass ball. The artifacts in the virtual image are due to low sampling of the warping function Frefraction; this could be remedied by using a higher sampling rate or by defining an analytic description of the ray warp. In (b), we add a reflection light field which contains a specular highlight. Both the highlight and the refraction move with the observer.

A refractive object warps the incoming rays to different outgoing directions. This warp can be stored as a function, or it can be precomputed and stored as a 4D lookup table [Heidrich et al. 1999; Yu et al. 2005]. We call this warping function Frefraction. To create the refraction effect, we simply warp any light field behind the object with Frefraction. For example, Figure 4 shows how a calibration light field Lcalibration is refracted through a glass ball light field. The compositing expression is:

W(Lcalibration, Frefraction)

Frefraction maps incident rays through the refractive sphere. We precompute Frefraction by raytracing through a synthetic sphere and storing the ray-to-ray mapping as a 4D lookup table.
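Applying such a precomputed table as a warping function can be sketched as below. We assume, for illustration only, that ray coordinates are normalized to [0, 1) and that the table is keyed by quantized 4D indices; the nearest-neighbour lookup here is exactly the kind of finite sampling that produces the artifacts mentioned in Figure 4a.

```python
def make_lookup_warp(table, resolution):
    """Turn a precomputed ray-to-ray table into a warping function F:
    quantize the 4D ray coordinates and look up the outgoing ray."""
    def F(ray):
        idx = tuple(min(int(c * resolution), resolution - 1) for c in ray)
        return table[idx]
    return F
```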

Notice that Figure 4a does not appear realistic because it is missing the reflection component. To remedy this problem, we can also warp the background light field by a reflection warp, which maps input rays to reflected rays (as opposed to refracted ones). In this case, the reflection is a specular highlight. The final compositing expression is:

A(W(Lcalibration, Frefraction), W(Lhighlight, Freflection))

Figure 4b shows an image from this composited light field.

5.4 Shadows

We now define a shadowing operator that approximates shadow casting from and onto light fields. Figure 8a illustrates the results of this operator.

To create sharp shadows, we introduce a virtual point light source into the scene. Suppose our scene contains two light fields, Lobject and Loccluder, and Loccluder lies between the virtual light and Lobject. We now compute which colors Lobject(r) should be shadowed by Loccluder.

First we warp the occluding light field Loccluder so that its rays originate from the virtual light source and impact Lobject. We call this warped light field Llight. Llight can be computed as follows. For each ray r in freespace, consider the 3D position of its intersection with Lobject's ST-plane. We form a shadow ray which goes from this 3D position to the virtual light source. Let us define a ray-to-ray mapping called Fobject,light which maps a ray r to its corresponding shadow ray. Then it follows that Llight = W(Loccluder, Fobject,light).

Thus, Llight(r)[α] exactly describes how much a ray r's intersection with Lobject's ST-plane is in shadow. To compute the effect of the shadow on Lobject, the color returned by Lobject(r) is multiplied by 1 − Llight(r)[α]. This multiplication is the out operator described in Section 3.

The final compositing expression that computes a shadow cast from Loccluder onto Lobject is:

W(Loccluder, Fobject,light) out Lobject

Shadows cast from multiple light fields can be simulated by compositing each occluding light field together with the over operator before performing the warping. Multiple point light sources can be added by defining an Fobject,light warp for each light field, warping the light field, and prepending the warped light fields to the above expression. Figures 1d and 8 illustrate some examples of the shadowing operator.
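The attenuation step at the heart of the shadow operator can be sketched as follows (the warp producing Llight is omitted; light fields are again modelled as ray → premultiplied RGBA functions, and the names are ours):

```python
def shadowed(L_object, L_light):
    """Scale every channel of L_object by 1 - L_light(r)[alpha], i.e. the
    "out" compositing case: alpha of the warped occluder dims the object."""
    def L(r):
        c = L_object(r)
        k = 1.0 - L_light(r)[3]   # fraction of light that gets through
        return tuple(ch * k for ch in c)
    return L
```

A half-opaque occluder (alpha 0.5 in Llight) thus halves every channel of the object's color, producing a 50% shadow.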

6 Results

First we demonstrate how the deformation operator is useful for editing light fields. Figure 5 shows an image from a light field taken of two people. Unfortunately, the people were not looking straight ahead during the acquisition. To correct this problem, we define a new function Flook which linearly interpolates two Ftwist operators as defined in Section 5.1. One Ftwist operator is specified for each head, and is weighted by proximity to that head.

One view of the resulting deformed light field is shown in Figure 5b. Notice that the people are now looking straight at the camera. The resulting image shows changes in visibility (cf. the previously hidden ear of the left figure), which would be impossible to achieve with an image editing approach.

Second, we show the results of having multiple focal planes through a light field. This operator lets a user emphasize multiple depths in a composited scene by focusing on them. This is useful when there are several interesting objects at different depths, but unwanted objects between them. Figures 6c and d illustrate these multi-focal plane images. The toy giraffe in the middle is defocused. The defocus blur correctly mixes with the colors in the background light field.

Third, we demonstrate how the refraction operator can be used to simulate refraction through a light field of a glass ball. Figure 7 shows images of a glass ball light field in front of several background ones. Because the backgrounds are also light fields, the refracted image exhibits the proper parallax due to depth, which can be seen in the accompanying video. The refracted image is also upside-down, which is consistent with the optical properties of a glass sphere.

In addition, our system allows the refraction and the focusing operator to be combined. This allows a user to focus in scenes with refractive objects. In Figure 7a, the user focuses on the virtual image of the Buddha light field. Notice that the background is defocused. In Figure 7b, the user refocuses on the real image of the Buddha, and the virtual image and specular highlight are defocused.


Figure 5: Using deformation to correct the poses of individuals in a light field. (a) is an image from a captured light field of 12 x 5 cameras, each 512 x 512 in resolution. During acquisition the individuals were looking in the wrong direction. No single image in the light field contains a view where both individuals were looking in the same direction. (b) shows an image from the corrected light field, where both people are now looking in the same direction. Notice the visibility change where the ears now become visible. The bottom row shows various views of the corrected light field.

Finally, we illustrate the use of the shadow operator to cast shadows from light fields. Figures 1d and 8 illustrate this operator. Notice that the toy giraffe's shadow projects a believable silhouette onto the ground. This shadow is also correctly shaped when it interacts with the background flower light field. We simulate soft shadows by inserting multiple point light sources into the scene.
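Averaging the hard shadow masks of the individual virtual point lights gives the fractional visibility that forms the penumbra. A sketch of that accumulation, assuming each hard mask is a 2D array with 1 for lit and 0 for shadowed pixels:

```python
import numpy as np

def soft_shadow_mask(hard_masks):
    """Average per-light hard shadow masks (1 = lit, 0 = shadowed)
    into a fractional-visibility mask; intermediate values form
    the penumbra."""
    return np.mean(np.stack(hard_masks), axis=0)
```

More virtual lights yield more visibility levels and therefore a smoother penumbra, at the cost of one warped occluder mask per light.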

7 Conclusions and future work

We have presented a system that allows a user to edit and composite light fields at interactive rates. The system is useful for editing light fields, creating novel imaging effects, or creating complex optical and illumination effects for games.

Instead of developing specific operators suited for a single task, we developed a framework that is based on a few basic operators and a compositing language allowing these operators to be combined. We have shown that these combinations can create interesting operations.

We are discovering that these operators can be used in a variety of contexts. For example, our warping operator can be used to describe the aberrometry data of a human eye, produced from a Shack-Hartmann device [Platt and Shack 1971; Barsky et al. 2002]. The warped rays would sample from an acquired light field, presenting photorealistic images of scenes as seen through a human optical system.

Finally, we are investigating the use of our system to edit other 4D datasets, like 4D incident illumination [Sen et al. 2005; Chen and Lensch 2005], or general 8D reflectance fields. The ability to composite 8D reflectance fields enables captured objects in a scene to exhibit realistic inter-reflections, global illumination, and visibility changes. Such effects are currently very difficult to produce but may be possible with further extensions to our system.

Figure 6: Focusing within a light field. Image (a) shows a pinhole image from a composited light field, where every object is in focus. Both the flower and giraffe light fields were captured using 256 camera images, with each image at 256 x 256 resolution. In image (b), a focal plane is selected near the rear flowers. Foreground objects are properly blurred and allow the viewer to see around and through occlusions. In image (c) we simulate the non-photorealistic effect of having multiple focal planes in a scene. Both the front and rear flowers are in focus, but the middle region remains blurred. Image (d) is another view of this multi-focal-plane light field. Proper parallax between light fields is maintained, as well as the defocus blur for each focal plane.

References

BARSKY, B. A., BARGTEIL, A. W., GARCIA, D. D., AND KLEIN, S. 2002. Introducing vision-realistic rendering. In Eurographics Rendering Workshop (Poster).

BARSKY, B. A. 2004. Vision-realistic rendering: simulation of the scanned foveal image from wavefront data of human subjects. In APGV '04: Proceedings of the 1st Symposium on Applied Perception in Graphics and Visualization, ACM Press, New York, NY, USA, 73–81.

CHEN, B., AND LENSCH, H. P. A. 2005. Light source interpolation for sparsely sampled reflectance fields. In Workshop on Vision, Modeling and Visualization, 461–468.

CHEN, B., OFEK, E., SHUM, H.-Y., AND LEVOY, M. 2005. Interactive deformation of light fields. In Symposium on Interactive 3D Graphics and Games (I3D).

CHUANG, Y.-Y., CURLESS, B., SALESIN, D. H., AND SZELISKI, R. 2001. A Bayesian approach to digital matting. In Proceedings of IEEE CVPR 2001, IEEE Computer Society, vol. 2, 264–271.

COOK, R. 1984. Shade trees. In SIGGRAPH 1984, Computer Graphics Proceedings.

FIELDING, R. 1972. The Technique of Special Effects Cinematography. Focal/Hastings House, London, third edition.

GORTLER, S. J., GRZESZCZUK, R., SZELISKI, R., AND COHEN, M. F. 1996. The lumigraph. In Proc. SIGGRAPH 1996.

HEIDRICH, W., LENSCH, H., COHEN, M. F., AND SEIDEL, H.-P. 1999. Light field techniques for reflections and refractions. In Eurographics Rendering Workshop.

IOURCHA, K., NAYAK, K., AND HONG, Z. System and method for fixed-rate block-based image compression with inferred pixel values. US Patent 5,956,431.

Figure 7: Simulating light field refraction and focusing. Image (a) shows a view from a light field created from two refractive spheres in front of a flower and Buddha light field. The sphere and Buddha light fields were rendered synthetically with 1024 cameras, at 256 x 256 resolution. The image is created using the focusing operator described in Section 5.2 to focus on the virtual image of the flower in the refractive sphere. This makes the background appear blurry. Notice that the virtual image is correctly distorted by the sphere: it is upside-down and warped. In image (b) the focal plane is shifted to the background, and the virtual images are defocused.

Figure 8: Simulating shadows. In image (a), sharp shadows are being cast from the giraffe and flower light field. In image (b), we create multiple virtual light sources and sum their masks to create soft shadows. Notice that the soft shadow interacts correctly with the flower light fields. The image is also focused on the foreground, so that the flower and shadows are blurred.

ISAKSEN, A., MCMILLAN, L., AND GORTLER, S. J. 2000. Dynamically reparameterized light fields. In Proc. SIGGRAPH 2000, 297–306.

LEVOY, M., AND HANRAHAN, P. 1996. Light field rendering. In Proc. SIGGRAPH 1996.

OH, B. M., CHEN, M., DORSEY, J., AND DURAND, F. 2001. Image-based modeling and photo editing. In Proc. SIGGRAPH 2001.

PERLIN, K. 1985. An image synthesizer. In SIGGRAPH 1985, Computer Graphics Proceedings, ACM Press / ACM SIGGRAPH.

PLATT, B., AND SHACK, R. 1971. Lenticular Hartmann screen. Optical Sciences Center Newsletter.

PORTER, T., AND DUFF, T. 1984. Compositing digital images. In Computer Graphics 18, 3, 253–259.

PROUDFOOT, K., MARK, W. R., TZVETKOV, S., AND HANRAHAN, P. 2001. A real-time procedural shading system for programmable graphics hardware. In SIGGRAPH 2001, Computer Graphics Proceedings.

SEITZ, S., AND KUTULAKOS, K. N. 1998. Plenoptic image editing. In Proc. ICCV 1998.

SEN, P., CHEN, B., GARG, G., MARSCHNER, S. R., HOROWITZ, M., LEVOY, M., AND LENSCH, H. P. A. 2005. Dual photography. In ACM Transactions on Graphics (Proc. SIGGRAPH 2005).

SHUM, H.-Y., AND SUN, J. 2004. Pop-up light field: An interactive image-based modeling and rendering system. In ACM Transactions on Graphics.

VAISH, V., WILBURN, B., JOSHI, N., AND LEVOY, M. 2004. Using plane + parallax for calibrating dense camera arrays. In Proc. CVPR.

YU, J., YANG, J., AND MCMILLAN, L. 2005. Real-time reflection mapping with parallax. In Symposium on Interactive 3D Graphics and Games (I3D).

ZHANG, Z., WANG, L., GUO, B., AND SHUM, H.-Y. 2002. Feature-based light field morphing. In ACM Transactions on Graphics (Proc. SIGGRAPH 2002).