
Visualization and Computer Graphics on Isotropically Emissive Volumetric Displays

Benjamin Mora, Ross Maciejewski, Min Chen, Member, IEEE, and David S. Ebert, Senior Member, IEEE

Abstract—The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing x-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D data set or object as the input, creates an intermediate light field, and outputs a special 3D volume data set called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.

Index Terms—Three-dimensional displays, volume visualization, display algorithm, expectation maximization.


1 INTRODUCTION

THE ultimate display may be thought of as a device that can reproduce any given light field (LF) or plenoptic function [1], [19], [26]. However, in order for such a hypothetical 4D LF device to display images at an adequate resolution, it would need a tremendous amount of bandwidth and processing power—capabilities that may not be available for many years [33]. In the meantime, 3D displays, which have recently become more affordable, can provide users with a more immersive visualization experience when compared with traditional 2D displays. While not the final goal, these new 3D displays are an important step toward the ultimate display.

Our work is concerned with a technique for improving the shortcomings of such systems, focusing on the Perspecta Display System [15] as a typical isotropically emissive volumetric display (IEVD) [9]. This system is based on a sweeping plane that performs 24 rotations per second, projecting volume slices at a high refresh rate.

Due to the absence of absorbing materials on IEVDs that would facilitate occlusion effects, the final image perceived by the viewer consists of a set of ray integrals, each of which is the sum of all emitted light along the ray. Hence, there is no physical means for displaying a completely opaque object in a conventional manner such that the luminance of the occluded part is not perceived by the viewer [9]. Given a closed surface object, without occlusion, a ray that passes through the object will normally intersect the surface at two or more points. The perception of each surface point is view dependent (see Section 3), even before taking into account coloring and shading. Thus, the creation of shading effects on an IEVD is a nontrivial problem for which no existing solution can yet be found.

Unfortunately, occlusion and shading of object surfaces are common in real-world situations and provide optical cues for the shape of an object and its surface properties. As such, visualization on these IEVDs can be perceptually less intuitive. Fig. 1a shows a typical computer-synthesized isosurface of the UNC head data set as it would be displayed on a 2D screen. This work aims at reproducing such 3D rendering, with its viewpoint dependencies, on IEVDs. Directly displaying the original data set on an isotropically emissive display results in an x-ray-like visualization, as shown in Fig. 1b. A naive attempt to display an isosurface by using a segmented data set would produce Fig. 1c. Clearly, it is still difficult to perceive surfaces in Fig. 1c. One important reason why Figs. 1b and 1c are less intuitive than Fig. 1a is that neither surface shading nor occlusion is present in Figs. 1b and 1c.

One suggestion for improving renderings is to pass a voxelized representation of a shaded surface to an isotropically emissive display.


. B. Mora and M. Chen are with the Department of Computer Science, University of Wales Swansea, Singleton Park, Swansea SA2 8PP, UK. E-mail: {b.mora, m.chen}@swansea.ac.uk.

. R. Maciejewski and D.S. Ebert are with the School of Electrical and Computer Engineering, Purdue University, 465 Northwestern Ave, West Lafayette, IN 47907. E-mail: {rmacieje, ebertd}@purdue.edu.

Manuscript received 28 Aug. 2007; revised 26 Dec. 2007; accepted 30 June 2008; published online 22 July 2008. Recommended for acceptance by T. Moeller. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number TVCG-2007-08-0116. Digital Object Identifier no. 10.1109/TVCG.2008.99.




Unfortunately, as shown in Fig. 1d, this does not alleviate the problem. Instead, it reduces the surface perception because, in such a representation, the shading effects are intended to be perceived correctly for an opaque surface. When removing the opacity-based occlusion, the accumulative ray integrals aggregate the light emitted from the shaded front surface as well as all emitted light behind the surface. The shading effects that capture the continuity and curvature of the surface can no longer be correctly preserved and perceived.

To overcome these problems, we have developed a novel technique that enables users to visualize an isosurface with shading effects similar to Fig. 1a on an IEVD. As a result, we are able to generate, for the first time, a true 3D experience of an isosurface visualization with emulated shading effects on an IEVD, as shown in Fig. 1e, which is close to our original simulated results (Figs. 1a and 1f).

Fig. 2 illustrates the pipeline required to solve this rendering problem on IEVDs. We first render a 3D object or data set into an LF using a graphics or visualization system. This LF is then encoded inside a specific volumetric data set with the help of a reconstruction algorithm. We call this special volume data set a lumi-volume. When such a lumi-volume is fed into an IEVD, the accumulative ray integrals reproduce most shading effects on the viewer's retinal image, as exemplified in Fig. 1e. In other words, the lumi-volume recreates an approximation of the original LF on the isotropically emissive display.

This pipeline can also accommodate an arbitrary LF captured by a plenoptic camera. Nevertheless, in this work, we focus on reproducing volume visualization and computer graphics renderings, which is the main application area of IEVDs. We employ an expectation-maximization (EM) algorithm to generate a lumi-volume from an LF, which minimizes the difference between the original LF and the LF rendered by the IEVDs.

To improve current imaging on isotropically emissive displays, we must first formalize the rendering principles behind IEVDs. Then, we examine various technical issues and limitations of the proposed technique. We also show that the loss of one dimension of information in approximating a 4D LF with a 3D lumi-volume will result in a more continuous visualization of partially visible objects through transparent surfaces. We examine the visualization quality in relation to LFs generated using both direct volume rendering (DVR) and nonrealistic maximum intensity projection (MIP). Our experimentation with a real IEVD system shows that this technique is effective and deployable, and our simulation study suggests that high-quality results are attainable.

2 RELATED WORK

2.1 3D Displays

Two-dimensional displays have a limited capacity to provide depth cues. Many 3D or stereo display technologies have been developed to enrich the user's experience of depth perception. An overview of these developments can be found in the August 2005 issue of Computer.

The most widely used class of techniques for improving depth perception is stereo parallax, which provides each eye with a separate image. This is a low-cost technique, requiring only the creation of two images per frame. However, wearing special glasses or goggles is often inconvenient and may lead to user fatigue. Autostereoscopic displays [13], [28] were developed to circumvent these issues, and some multiview systems are able to horizontally multiplex many images. However, with such displays the object is always at a fixed focal depth, and the aspect ratio is only valid for a given depth.

On the other hand, volumetric displays [4], [5], [15], which use 3D pixels that absorb or emit light, do not suffer from the limitations of stereo parallax and autostereoscopic displays. One such device is the Perspecta Display System [15] (Fig. 3).


Fig. 1. The limitations of emissive displays and the improvement made. (a) Consider a reference visualization showing an isosurface that we wish to display on an emissive display. (b) Passing the raw volume to the Perspecta Display System results in an x-ray-like visualization. (c) Passing a volume containing a segmented isosurface to the display enables the identification of the surface concerned, but the perception of the surface geometry remains elusive without shading. (d) Passing a volume containing a shaded isosurface to the display produces a rather confusing visualization. (e) With our new technique, passing a lumi-volume to the display offers a great improvement over (b), (c), and (d). (f) The simulated result of displaying a lumi-volume on a future high-resolution emissive display system demonstrates the scalability and usability of the proposed technique.

Fig. 2. The pipeline of our proposed technique. The thick lines indicate the focus of this work.



This system uses a fast sweeping plane where slices of a 3D volume are projected onto a rotating screen inside a hemisphere construct, providing a nearly 4π steradian field of view (FOV). Since every emission of light on the sweeping plane will always be visible to the user, this system is classified as an IEVD, which provides x-ray-like renderings of the input data. This particular display has already been deployed in several applications [20], [35], [36].

Other types of IEVDs also exist, mostly based on fluorescent material that is excited by electron beams or lasers. Recently, some variants of the sweeping plane technique, using a mirror instead of a diffuse surface, have been proposed [9], [25]. The main advantage here is that the imaging is not limited to the isotropically emissive model. In the Transpost system [30], the mirror is replaced by a screen with a limited viewing angle.

Another well-known way to produce 3D displays is holography [33]. This is an old technique that uses Fourier optics to fully display a 4D LF. However, the amount of computation required to process the data is huge, and high-resolution holography can currently be handled only by supercomputers.

2.2 Light Field

An LF is a 4D function that associates a color to every unobtrusive ray of a 3D space. The concept was introduced in 1996 independently by Levoy and Hanrahan [26] and by Gortler et al. [19] to facilitate interactive browsing of an object without the need for rendering the object. For example, the lumigraph representation [19] uses two facing slabs to discretize the 4D LF space. A color is then associated with every possible pair of grid points chosen from both slabs, which actually defines a ray inside the LF space. Fundamentally, the LF notion is similar to the plenoptic function proposed in 1991 by Adelson and Bergen [1].

In our work, the color of a ray of light is obtained very differently, by combining all the voxels along the ray path according to a rendering model. Because an LF data set consists of samples taken from a 4D domain, a huge amount of data is produced. Such data cannot yet be displayed by current technologies, nor is it compatible with the input representation required by IEVDs. As such, our technique approximates a 4D LF through a new 3D light-emitting volume representation referred to as a lumi-volume.

2.3 Algorithms for Volume Reconstruction

Since IEVDs are not capable of accurately reproducing all user-defined images, our goal is to create 3D renderings that approximate any arbitrary LF data that the user wishes to display. There are numerous voxel-based methods that reconstruct a 3D volume from an LF [11], [14], [21], [31], sometimes with the help of specific heuristics. In contrast, our method achieves the opposite, recreating an approximation of the original LF through volume rendering (using an isotropically emissive model in our case). Here, it is important to note that the resultant lumi-volume is not intended for general use and is only to be used on IEVDs that generate x-ray-like renderings.

To create a lumi-volume, we need to define a volume whose projections will match the given LF input as closely as possible. This parallels medical techniques that reconstruct 3D volumes from projections. The most prominent techniques are the filtered back-projection (FBP) reconstruction technique, the algebraic reconstruction technique (ART), and the EM algorithm. The first technique uses a Fourier transform, while the other two are iterative techniques that solve a linear system.

The FBP reconstruction method was the first technique used in computed tomography to reconstruct a volumetric representation of an object. However, the reconstruction equation is known only for specific cases, for example, cone-beam reconstruction [16], and cannot be used for an arbitrarily positioned projection. ART methods [18] utilize an iterative process, and at every iteration, an algorithm evaluates the difference between the synthesized projections of the current volume and the actual projections. The algorithm then uses this difference to produce a new refinement. The algorithm usually converges to a solution, provided that one exists (e.g., the data is not corrupt). However, the ART algorithm may find negative solutions, which are not suitable for this work since the voxels in a volumetric display cannot emit a negative amount of light. Therefore, this work is built on the EM algorithm [12], [32], which also employs an iterative process to solve a linear system. For self-containment, this algorithm is briefly described in Section 4.

There has been a series of efforts implementing these algorithms on graphics hardware [37], including a hardware-assisted FBP implementation by Cabral et al. [6], ART implementations by Mueller and Yagel [29] and Trifonov et al. [34], and a hardware-based EM algorithm by Chidlow and Moller [8]. More recently, there has been interest in applying tomography reconstruction techniques to handling LFs that capture transparent phenomena [24]. In [3] and [23], Ihrke et al. used tomography reconstruction to reproduce fire. More closely related to our work, [17] used tomography to model data sets. Their work, though limited to convex surfaces, showed that this approach is useful. Finally, Dachille et al. [10] applied the simultaneous ART (SART) method to recreate an LF.


Fig. 3. (a) The Perspecta Display in action. (b) A close view of a voxelized tower model. (c) The traditional rendering of the tower model captured from a 2D computer screen.


While the SART method may produce negative solutions (which is undesirable in this work), they managed to achieve a root-mean-square (RMS) error close to zero for a single-image reconstruction.

3 LUMI-VOLUME

A lumi-volume is a 3D volumetric data set, which specifically aims at recreating a fixed 4D LF when displayed on an IEVD. In order to consider the usefulness of a lumi-volume, we shall first examine how IEVDs differ from traditional rendering displays in computer graphics and visualization.

Absorption and occlusion. IEVDs like the Perspecta have commonly been used to display voxelized 3D models, where the voxel values typically store the material properties of a model (e.g., a computed tomography data set) or a discrete boundary representation of an object (e.g., an isosurface). When such voxel values are mapped to luminance by the IEVD device, the light emitted from the voxels arrives at the viewer's location in a summative manner, without any absorption or occlusion events. Although the resulting x-ray-like visualizations, as seen in Figs. 1b and 1c, can be interpreted by viewers, they are very different from the visualization of a real-life object on a traditional 2D rendering system, such as Fig. 1a. As such, it is highly desirable for a lumi-volume to encode voxel luminance in a specific way to compensate for the lack of absorption and occlusion. This requirement was recognized, and conjectured to be unsolvable without physical modification of the display, in [9].

View-dependent rendering. With traditional rendering of a 3D model on a 2D display, the illumination of each visible surface element of the model can be either view dependent (e.g., Phong, Blinn, and BRDF) or view independent (e.g., diffuse-only and precomputed radiosity). For view-dependent shading effects (e.g., specular highlight), the color of each visible surface element of the model is normally recomputed every time the viewpoint is modified. However, on an IEVD, the voxel luminance, if any, will always be view independent because of the lack of occlusion. The viewpoint modification occurs when the viewer moves around; however, the 3D data set passed to the display remains the same.

Hence, as shown in Fig. 1d, when a data set that encodes view-independent shading effects is passed to an IEVD, the results at different viewing positions can be very different from what is expected. In fact, at many viewing positions, the visualization can be quite incomprehensible. Thus, it is necessary for a view-independent lumi-volume to encode shading effects in an appropriate way such that the desired view-dependent shading is partly maintained through the different view-dependent rendering integrals of an isotropically emissive display (Fig. 5).

Consistent brightness. A voxelized surface has a physical thickness. As illustrated in Fig. 4, the length of a ray-surface intersection, d/cos(θ), depends on the angle of incidence θ, and so does the brightness of the surface, since the accumulated light is proportional to the length of the intersection. Therefore, the wider the angle, the brighter the surface appearance, as demonstrated in Figs. 1c and 12a. This visual effect is a major obstacle when attempting to reproduce computer graphics on the volumetric display with voxelized scenes. Lumi-volumes address this difficulty by considering the most correct rendering solution for this task.
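As a brief restatement of this relation (we write θ for the angle of incidence, since the original symbol was lost in extraction, and assume a hypothetical constant emission density e per unit length inside a surface shell of thickness d):

$$I(\theta) \;=\; e \cdot \frac{d}{\cos\theta},$$

so a grazing ray (θ approaching 90 degrees) integrates a much longer emissive path, and hence appears much brighter, than a head-on ray (θ = 0).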

To circumvent the above issues on a display with a fixed input data set, the ideal solution would be to output a real 4D LF that captures the visualization for all viewpoints. Nevertheless, the practical restriction of the 3D input data set used by IEVDs obliges us to seek an alternative solution, that is, to approximate a 4D LF with a lumi-volume.

In order to match the lumi-volume as closely as possible to the original LF, the problem must first be formalized in terms of viewpoint, LF, and line integrals. The algorithm for constructing a lumi-volume must then make use of all the possible volume samples, not just the ones across the surface, and work in a line-integral domain, similar to the process of volume reconstruction in medical imaging. For instance, if a surface element is represented as a voxel (which can only have a unique color), it is not possible to obtain different shading properties as the viewpoint changes (Fig. 5). If one instead encodes the luminance properties along the entire light path, rather than just considering the surface elements, the resulting line integrals differ, and it becomes possible to create more independently shaded surface elements. When the lumi-volume is passed to an IEVD, it results in different visualizations for different viewpoints, collectively offering the best approximation of the wanted 4D LF.

There are, however, restrictions on the reproduced LF. For instance, the projection made according to one viewpoint must be the same as the one made for the viewpoint in the opposite direction.


Fig. 4. Effect of the viewing angle on the luminance of a surface. The wider the angle, the longer the length of the intersection, and thus the brighter the surface.

Fig. 5. To create differently shaded surface elements according to different viewpoints, the color must be spread more continuously across the entire ray path, as opposed to being determined by a given color at the intersection.



4 EXPECTATION MAXIMIZATION

The EM algorithm was proposed by Dempster et al. [12] and applied to medical imaging by Shepp and Vardi [32]. The main idea is to solve a linear system in an iterative way. Let b be an input LF (see Fig. 6), with b = (b_1, b_2, ..., b_M) ∈ R^M being the set of observed variables inside the 4D LF, and let x = (x_1, x_2, ..., x_N) ∈ R^N be the unknown voxels of a lumi-volume that must be constructed as an approximation of b. Due to the isotropic emission of the target displays, every projection b_i can be considered a linear combination of the voxel values in the lumi-volume. Therefore, we can write our system as A · x = b, where (A_ij) is an M × N matrix representing the partial volume effects. Each weight A_ij represents the volume of the intersection between b_i and x_j. The EM algorithm, which iteratively converges to the solution of our linear system, can be expressed as follows:

$$x_j^{n+1} \;=\; x_j^{n} \cdot \frac{\displaystyle\sum_{i=1}^{M} A_{ij}\,\frac{b_i}{\sum_{k=1}^{N} A_{ik}\, x_k^{n}}}{\displaystyle\sum_{i=1}^{M} A_{ij}},$$

where n is the iteration number.

Each iteration of the algorithm can be decomposed into three main steps. The first step is to project the current lumi-volume onto the set of viewing planes, resulting in a synthesized intermediate LF r = (r_1, r_2, ..., r_M) ∈ R^M, where r_i = Σ_{k=1}^{N} A_ik x_k^n, and r_i is initialized to one prior to the first iteration.

The second step is to evaluate the difference between the input LF and the synthesized intermediate LF by calculating the ratios between b_i and r_i for all i = 1, 2, ..., M.

The final step is to update every voxel x_j of the lumi-volume by multiplying its value with the mean deviation ratio of all the LF rays (or pixels) that intersect x_j. The average must also be weighted by the partial volume effects.

Unfortunately, this algorithm is rather slow, with a complexity of O(m³ · p · l), where N = m³ is the total number of voxels in the lumi-volume, p is the total number of projections in the LF, and l is the total number of iteration steps. However, the convergence of this algorithm is linear, and a relatively low number of steps (about 20-80) will usually result in an adequate approximation. To compensate for the high complexity, numerous methods have been proposed to improve the speed of the algorithm. One such method is the ordered subset expectation maximization (OSEM) method [22], which runs the algorithm on subsets, thereby improving the speed of convergence.

Converging to a solution usually makes sense only if a solution exists. In our case, we know that there is no solution to our problem, since the input LF is a full 4D representation in a 4D space, and a bijection cannot be established with the 3D space where the lumi-volume is defined. However, the EM algorithm has been used in statistics to find maximum-likelihood estimates, indicating that the algorithm may converge to a good estimate of the input LF by finding a minimum of (A · x − b)².
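For concreteness, the three steps above can be sketched as follows. This is a minimal illustrative implementation, not the authors' SSE-optimized code: it assumes the system matrix A fits in memory as a dense NumPy array, whereas a practical implementation would apply A and its transpose in a matrix-free, ray-driven fashion; all names are ours.

```python
import numpy as np

def em_reconstruct(A, b, n_iters=40, eps=1e-12):
    """Sketch of the EM (MLEM) update described above.
    A: (M, N) matrix of partial-volume weights A_ij.
    b: length-M vector holding the flattened input light field.
    Returns a non-negative length-N lumi-volume estimate x."""
    M, N = A.shape
    x = np.ones(N)                       # non-negative starting estimate
    col_sums = A.sum(axis=0)             # sum_i A_ij (normalization term)
    for _ in range(n_iters):
        r = A @ x                        # step 1: synthesize the intermediate LF
        ratios = b / np.maximum(r, eps)  # step 2: ratios b_i / r_i
        # step 3: multiplicative update, weighted by the partial-volume effects
        x *= (A.T @ ratios) / np.maximum(col_sums, eps)
    return x
```

The multiplicative form keeps every voxel non-negative, which is the property that makes EM preferable to ART in this setting.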

5 EXPERIMENTAL ENVIRONMENT

In addition to the development of the pipeline shown in Fig. 2, we have conducted a series of experiments to study a number of technical issues in deploying our system. The main results of these experiments will be presented and analyzed in Section 6. In this section, we concentrate on the data flow depicted by the solid lines in Fig. 2, describing the environment and conditions for the experimental study, and presenting the models and target rendering results (Fig. 7) for benchmarking our approach using lumi-volumes.


Fig. 6. Projection and partial volume effect. b_i represents an LF element, x_j represents a voxel, and A_ij represents how much of x_j is intersected by b_i.

Fig. 7. Models used in our experiments. (a) UNC Head, DVR. (b) Aneurism, MIP. (c) Stanford Bunny, OpenGL. (d) "IEEE TVCG," OpenGL.


5.1 Models and Rendering Methods

Fig. 7a was rendered using the DVR method, Fig. 7b was rendered using the MIP method, and Figs. 7c and 7d were rendered using the projection-based mesh rendering method. We utilized our own implementation of the DVR and MIP methods and OpenGL for mesh rendering. These three methods capture the characteristics of some of the most commonly used rendering algorithms in visualization and computer graphics.

MIP is a widely used volume rendering method in medical visualization. Since only the maximum value along a ray is kept, it provides selective visualization of the internal structure of data sets. This is often used in combination with contrast agents that aim to enhance the signal for the specific part of the data set under investigation. Although MIP does not provide realistic renderings, it presents an interesting challenge since it differs from the other two methods in many respects. On one hand, an LF resulting from MIP is more coherent since local maxima are likely to be distributed over many projections of the LF. For instance, the projection will always be the same for a specific direction and the opposite direction. On the other hand, MIP is fundamentally different from traditional surface or x-ray-like renderings.

Our LFs are computed from different rendering methods, and the lumi-volumes constructed in this work approximate a wide variety of rendering effects on an IEVD. Collectively, the results can offer more conclusive observations as to the usability of this approach and provide indicative conjecture as to its extensibility to other rendering methods.

5.2 Light Field Sampling

While the construction of a lumi-volume that would encode all the information in an LF is fundamentally unattainable due to the dimension reduction, a careful selection of sampling parameters can significantly improve the performance of the EM algorithm in terms of both speed and accuracy. The main unknown before starting this work was how the RMS error would evolve with the number of projections used. Chai et al. [7] and Lin and Shum [27] considered the efficient sampling of LFs, and their work offered useful guidance as to the optimal sampling rate.

To generate a coherent LF, one must evenly sample the 4D LF space (θ, φ, u, v), as illustrated in Fig. 8. This allows for the possibility of restricting the FOV of the LF to a smaller portion of the 4D space, enclosed by the imagery projections in the LF. We utilize polar coordinates to define the projection planes in the LF. Each projection direction is defined by two angles, θ and φ, that fall respectively in the ranges [−maxθ, +maxθ] and [−maxφ, +maxφ]. These two intervals are uniformly sampled by taking n evenly spaced samples for the angle φ and then taking n · cos(φ) evenly spaced samples for the angle θ. Objects are then rendered using an orthogonal projection for every sampled direction. The cosine term is required to avoid a biased, higher sampling density at the poles.
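A minimal sketch of this direction sampling, under the stated cos(φ) weighting; the function name and the exact rounding are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sample_lf_directions(n, max_theta_deg, max_phi_deg):
    """Generate (theta, phi) projection directions in degrees: n evenly spaced
    phi samples, and roughly n*cos(phi) theta samples per phi row, so that
    directions near the poles are not oversampled."""
    directions = []
    for phi in np.linspace(-max_phi_deg, max_phi_deg, n):
        n_theta = max(1, int(round(n * np.cos(np.radians(phi)))))
        for theta in np.linspace(-max_theta_deg, max_theta_deg, n_theta):
            directions.append((theta, phi))
    return directions

# Example: a +/-40 degree FOV sampled with n = 20 phi rows.
dirs = sample_lf_directions(20, 40.0, 40.0)
```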

5.3 Implementation of the EM Algorithm

Although the OSEM version can improve computation time by up to an order of magnitude and is relatively easy to implement, it makes the algorithm oscillate around the ideal solution. Therefore, this acceleration technique was not adopted in our experimental environment, and all results presented in this paper involve only lumi-volumes produced by the original EM algorithm. To improve performance, we utilized a CPU-based implementation that has been optimized using SSE instructions. This was preferred to a hardware-accelerated implementation because we wanted some flexibility on the implementation side. As demonstrated by previous hardware-based implementations [6], [8], [37], it is feasible to further improve the speed of a volume reconstruction algorithm using hardware-based algorithms, but this is not the focus of this work.

5.4 Perspecta Display

We conducted extensive experiments displaying constructed lumi-volume data sets on the Perspecta Display System version 1.7 (Fig. 3). The results, as shown in Fig. 1e and Section 6, are very encouraging when compared with other forms of volume data sets, such as those shown in Figs. 1b, 1c, and 1d.

The Perspecta system represents the current commercially available technology. It has a limited dynamic range. The sweeping plane does 24 revolutions per second, and 198 images (of resolution 768 × 768 pixels) are displayed for each revolution, representing a total number of 117 million voxels. Since the Texas Instruments digital light processor can only perform approximately 8,000 binary state changes per second for each pixel, a color depth of approximately 2 bits per voxel is possible. The system makes use of dithering and halftoning to improve the color depth, which could improve the rendering capability but may also change the properties of our data sets. Unfortunately, we have no control over this feature. On the API side, an 8-bit voxel color depth is used, allowing 256 colors per voxel through the use of a palette. With only a few distinct intensities, the display can still produce useful visualizations (Figs. 1b and 1c) and can support lumi-volume visualization (Fig. 1e).
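As a back-of-the-envelope check of these figures (our reading, not a statement from the paper):

$$198 \times 768^2 \approx 1.17 \times 10^{8} \ \text{voxels}, \qquad \frac{8000}{24 \times 198} \approx 1.7 \ \text{binary states per voxel per revolution},$$

which is consistent with the quoted 117 million voxels and the roughly 2-bit effective color depth.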

On the other hand, the lumi-volumes created with our method could require floating-point precision, with voxels having a high dynamic range. We expect that the effectiveness and usability of our method will improve with the gradual introduction of high-resolution IEVDs in the near future.


Fig. 8. Images in our LFs are parameterized with two polar coordinate angles, θ and φ.


In addition, taking photographs of the Perspecta system was challenging because a long exposure time must be used, which results in fuzzy images due to the internal vibrations of the device.

We have also provided high-resolution results produced from simulations in our evaluation, in conjunction with the direct experimentation on a Perspecta display. The simulated results offer a scalable evaluation of the capabilities of our method.

5.5 Voxelization of 3D Meshes

In order to have a better idea of the rendering improvements brought to isotropically emissive displays, a simple voxelization of the mesh-based models used in this paper has been implemented. The voxelization was made with OpenGL by rendering a specific image of our LF (e.g., θ = 0 and φ = 0) as many times as there are slices in the final voxelization. At each step, the front and back values of the frustum are changed in order to consider only the part of the model inside the given slice. The collection of all obtained images gives us the voxelization. Note here that voxel shading is always dependent on our reference viewpoint.
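The slab-by-slab rendering loop could look roughly like the following PyOpenGL sketch. It assumes a valid GL context already exists and that draw_model is a user-supplied callback issuing the draw calls for the shaded mesh; these names and the exact near/far convention are our assumptions, not the authors' implementation.

```python
from OpenGL.GL import (glMatrixMode, glLoadIdentity, glOrtho, glClear,
                       glReadPixels, GL_PROJECTION, GL_MODELVIEW,
                       GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT,
                       GL_RGB, GL_UNSIGNED_BYTE)

def voxelize_by_slabs(draw_model, n_slices, half_extent, image_size):
    """Render the model n_slices times, each time clipping the orthographic
    frustum to a thin slab via its near/far planes; stacking the resulting
    images yields the voxelization described above."""
    slices = []
    slab = 2.0 * half_extent / n_slices
    for i in range(n_slices):
        near = -half_extent + i * slab   # slab boundaries along the view axis
        far = near + slab
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        glOrtho(-half_extent, half_extent, -half_extent, half_extent, near, far)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        draw_model()                     # shaded mesh, fixed reference viewpoint
        slices.append(glReadPixels(0, 0, image_size, image_size,
                                   GL_RGB, GL_UNSIGNED_BYTE))
    return slices
```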

6 RESULTS

6.1 Quantitative Analysis of the Results

The different tests for assessing our method and its parameters are summarized in Table 1, including the following:

. the number of imagery projections in the rendered LF,

. the resolution of each imagery projection in the rendered LF,

. the size of the corresponding lumi-volume constructed,

. the FOV attributes, maxθ and maxφ,

. the sampling rate,

. the runtime in seconds per iteration, and

. the number of iterations used to construct the lumi-volume.

An AMD Athlon 64 X2 3800+ (2 GHz) was used for the construction of the lumi-volumes, but only one core was used. The UNC head data set has undergone several tests to analyze the effect of the FOV in generating LFs. In the various "Head 1D" tests, the projections in an LF were generated by varying the horizontal FOV (maxθ), while fixing the vertical FOV (maxφ) to zero. The "Head 2D" tests involved varying both polar angles.

We evaluate the quality of a lumi-volume by measuring the RMS error of the LF reconstructed by our lumi-volume. The RMS error considers all the samples of the original LF (b_i) and computes the difference from the reconstructed LF (r_i) at the original sampling locations using the following formula:

$$\mathrm{RMS} \;=\; \sqrt{\frac{\sum_{i=1}^{M} (r_i - b_i)^2}{M}}.$$

The measurements are based on a 256-gray-level integer scale. Fig. 9 shows the progressive changes of the RMS error during the iterations of the EM algorithm.
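Computing this measure is straightforward; a minimal sketch (hypothetical helper, on the same 0-255 integer scale):

```python
import numpy as np

def rms_error(r, b):
    """RMS error between the reconstructed LF samples r and the original LF
    samples b (flattened arrays on a 0-255 integer scale)."""
    r = np.asarray(r, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.sqrt(np.mean((r - b) ** 2))
```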

The head data set has undergone several tests to analyze the effect of the FOV on the reconstruction. The Head 1D tests (Fig. 9a) were conducted by adjusting the horizontal FOV (maxθ), while the vertical FOV (maxφ) was set to zero. Note that even with a null vertical FOV, our problem still remains unsolvable. The Head 2D and bunny tests (Fig. 9b) were carried out by adjusting both angles.

First, in Fig. 9, one can easily observe that the algorithm appears to converge to a solution as expected. Here, we see that lumi-volumes produced using smaller FOVs tend to result in decreasing RMS errors after each iteration of the EM algorithm. However, for some of the input LFs created with a wider FOV (e.g., Head 1D 70-90 degrees, Head 2D 40 degrees, Head 2D MIP 90 degrees, Aneurism MIP), the RMS error reaches a local minimum and then begins to increase.

In practice, halting the iteration process right after the local minimum does not necessarily produce a lumi-volume that will lead to the best visual results. More iteration steps always seem to lead to sharper reconstructed LFs in our experiments (Fig. 16, images 4e, 5a, and 5b).

Second, one can also observe that the RMS error is lower with lumi-volumes for MIP-rendered LFs (Fig. 9c). This is because MIP provides a more coherent rendering, as discussed previously.


TABLE 1
Summary of the Different Test Configurations

For tests 4 and 8 (Head 1D), maxθ varies in the range [10, 90] degrees, with a 20-degree increment between configurations.


For example, comparing the RMS errors for the UNC Head 1D tests in the top and bottom graphs, we can see clearly that MIP-rendered LFs are better approximated by lumi-volumes than DVR-rendered LFs.

Third, we can also observe that the quality is affected by the scene complexity of the original LF. For example, the aneurism MIP data set produces a more complex LF than the LFs of the MIP-rendered UNC Head, thus resulting in a less accurate lumi-volume. The UNC Head 2D test has a higher degree of freedom than the UNC Head 1D tests in producing an LF, which also results in higher RMS errors.

6.2 Qualitative Assessment of the Results

This section provides a visual assessment of the results in several different respects. The videos that accompany this paper also support the qualitative visual analysis by juxtaposing some of the original input LFs and the corresponding reconstructed LFs. Some screen captures of the display, showing the effectiveness of our technique from different viewpoints, are presented in Fig. 10. However, due to the limitations of the display, we will mainly discuss the synthetic results.

6.2.1 The Transparency Effect

Because of the lack of occlusion on IEVDs, real or simulated, one inevitable artifact is the transparency effect. A surface that is opaque in the original input LF is likely to become translucent in the reconstructed LF. This is noticeable with the Head 1D example, where varying the FOV parameter maxθ from 10 to 180 degrees clearly increases the transparency of the isosurface (Fig. 16). This is also the case with the MIP LFs, especially for the aneurism MIP data set (Fig. 16, image 5d). Note that this is more clearly visible in the comparative videos.

To emphasize this effect, we have created a 3D scene (Fig. 7d) where the 3D text is partially hidden by a set of vertical columns. In the reconstructed LF (Fig. 11b) obtained from a lumi-volume, the entire text is now visible, because the columns have lost their capacity for occlusion and appear to be translucent. In simulation, the summation of all luminance along a ray may also result in a saturated pixel color, due to the color quantization (e.g., the 256 gray levels). Nevertheless, this should not be the case with isotropically emissive displays since the human eye has a higher dynamic range. Although one may solve the saturation problem by scaling down the pixel intensities, we have chosen not to reduce the brightness of any images in this paper in order to make a consistent visual comparison of the results.

The apparent translucency of an opaque surface is an inevitable artifact of IEVDs. As demonstrated in Fig. 1, in comparison with the naive approach of passing raw or segmented volume data sets to an IEVD, our results have actually shown the removal of a significant amount of transparency. As such, the use of lumi-volumes offers a tangible solution to applications where an excessive level of apparent translucency is highly undesirable. This is also supported by the direct comparison of the lumi-volume-reconstructed LFs (Figs. 11 and 12) with the LFs produced from an OpenGL voxelization of the same 3D models. In these two cases, transparency is clearly reduced with our technique, though not completely removed.

The MIP case is also very interesting (Fig. 16). The main MIP structures are now transparent, which is useful for distinguishing several overlapping objects. This could represent a nice extension to MIP, but a user study will be required to determine the practicality of such an extension.

Figs. 12d and 12e show two cross sections of our bunny lumi-volume and illustrate the main difference between our method and former volumetric reconstruction methods [29] that produce a voxelization of the scene. Here, the lumi-volume stores the required information inside the entire data set, not only on the surface of the model. As such, the data set is of no use in regular rendering applications.


Fig. 9. RMS error according to the number of iterations.



In many visualization applications, a small amount of transparency can actually assist viewers in their visual interpretation of the internal structures or occluded parts of the models being observed on IEVDs. The advantage of our method is that the reconstructed rendering may allow users to interpret the occluded parts behind the front surface. For instance, Fig. 11b shows the clearly discernible text, IEEE TVCG, which is not distinguishable in the original LF. We could, for instance, further imagine an LF acquired from an airplane that would be processed with our technique to discover what is hidden below trees.

6.2.2 Rendering Fidelity

The dimension reduction from a 4D LF to a 3D lumi-volume also leads to some loss of high-frequency details, as shown in the enlarged images in Fig. 12. In this example, a maximum FOV of ±40 degrees was chosen for both the θ and φ angles, which corresponds to an approximate solid angle of 1.7 steradians. In this case, some high-frequency texture details, as well as some specular lighting effects, have partially faded. This attenuation also tends to get worse if we increase the FOV of the LF, especially for opaque surface renderings. Increasing the size of the lumi-volume does not seem to alleviate this problem (Fig. 14). Color fading is also visible with MIP-rendered LFs, but to a lesser extent.

Despite the loss of some high-frequency details in Fig. 12c, we can observe that the contours of the textures are actually quite well preserved, and the overall perception of the surface with lumi-volumes is significantly superior to a raw voxelization, as shown in Fig. 12a. More generally, all our examples clearly show that the reproduced LFs create an adequate visual approximation of the original LFs, despite the inherent limitations of such a technique. In other words, unlike results obtained from a simple object voxelization passed to the display, results produced from lumi-volumes are relatively close to what would be displayed on a 2D screen, which is the main issue this paper intended to address.

However, for realistic renderings, the FOV needs to be restricted to a given solid angle, and it seems that a ±40-degree FOV is a reasonable choice. Since these lumi-volumes are created with a particular FOV, data can only be interpreted on the Perspecta Display when users stand at the correct viewing angle. If users move outside of the appropriate viewing location, the volume becomes cloudy, and the visualization becomes incoherent (Fig. 13).

6.2.3 Sampling Rate Analysis

To evaluate the sampling quality of an original LF, we rendered the UNC head data set at different sampling rates on θ (5, 10, 20, 40, and 80 images of 192 × 192 pixels), with maxθ set to 40 degrees. Through simulation, we observed the reconstructed LFs at two consecutive sampling angles, as well as at an intermediate angle. As shown in Fig. 15 for the lowest sampling rate (with five imagery projections, two consecutive images differ by an angle of 20 degrees), one can see that the reconstruction from lumi-volumes is very good at the original sampling locations, with no visible transparency effect. However, the reconstruction at intermediate locations is far from satisfactory. By increasing the sampling rate, the reconstruction at intermediate locations becomes better and better. When this phenomenon is transferred to real displays, it means that viewers can experience a more continuous visualization between two consecutive sampling angles.

We also noticed that ringing artifacts appear if the sampling rate is too low. We have found that using 80 samples (where images differ by an angle of 1 degree) appears to be good enough in this particular case. Finally, the reconstructed LF converges to a more transparent rendering solution when the sampling rate is increased.

We have made similar observations with other data sets. From our experience, increasing the sampling rate always seems to result in a better visualization, and the EM algorithm also converges more consistently to a specific lumi-volume. However, a high sampling rate does require more time in the construction of a lumi-volume.


Fig. 10. Photographic image captures of the Perspecta device. (a) Red channel of the bunny lumi-volume displayed on the Perspecta System. (b) Head 2D (20 degrees) lumi-volume visualized from different viewpoints. (c) MIP aneurism using a full FOV.

Fig. 11. The transparency effect. Surfaces become transparent inside the reconstructed LF on emissive displays. The lumi-volume approach (b), however, significantly reduces such an effect in comparison with passing a voxelized volume of the scene directly to the emissive devices (c).



6.2.4 Lumi-Volume Sampling Rate

We have also evaluated the possible impact of using larger lumi-volumes on the rendering by varying the size from 256³ to 512³ voxels. The same input LF (Head 1D, maxθ = 40 degrees, 80 projections) was used, and the reconstructed LFs were observed. Four selected projections are shown in Fig. 14.

Our observation indicates that the lumi-volume size has relatively little impact on the rendering fidelity and the transparency effect. The intrinsic properties of our lumi-volume are still the same. The only effect we have noticed is that the reconstructed LF appears to become smoother when the size increases, which is as expected.

6.2.5 Overall Remarks on the Results

Our experimental study shows that our new technique has managed to simulate traditional rendering effects on isotropically emissive displays, which was once thought to be impossible [30], [9]. Computer graphics images can now be shown on emissive displays with almost the same appearance as images found on a basic 2D screen.

More importantly, surface shading effects can now be produced on IEVDs, and the apparent translucency effects of supposedly opaque surfaces can be controlled at a reasonable level, provided that an appropriate FOV is chosen. As long as a lumi-volume is generated from an LF consisting of a sufficiently large number of imagery projections, viewers can have a seamless 3D experience with full depth perception inside the FOV. This result can be compared with the very recent work of Jones et al. [25], where the diffuse surface of the display has been replaced by a mirror covered with a holographic diffuser, similar to [9].


Fig. 12. The textured bunny data set. (a) An emissive-only rendering of a voxelized bunny data set. (b) An original LF projection. (c) A reconstructed LF projection from the lumi-volume. (d) and (e) Horizontal (seen from below) and vertical slices extracted from the lumi-volume. Note that our 24-bit color extraction produced dithering and saturation on those two images due to the high dynamic range of the lumi-volume.

Fig. 13. Two views of the Head 1D 40-degree data set taken outside of the FOV of the original LF.

Fig. 14. Reconstructed LFs using different lumi-volume sizes.


by a mirror covered with a holographic diffuser, similar to

[9]. Both methods have specific advantages and disadvan-

tages. For instance, the user can experience a full 360-degree

horizontal FOV in [25] but at the cost of positioning the user

at a predefined distance d of the display and using head

tracking to ensure correct vertical parallax when required.

Shading is also very limited since this new display can only

work with binary images. In our case, the contribution of

several voxels to a ray allows having more gray levels, even

if the Perspecta display is also limited with respect to color

quantization.
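As the caption of Fig. 12 notes, extracting 24-bit color from the high-dynamic-range lumi-volume introduces saturation and dithering. The sketch below shows one plausible form of such an extraction step; the percentile-based white point, the uniform dithering, and the function name are our own illustrative assumptions, not the exact procedure used for the figures.

import numpy as np

def to_display_bits(lumi, bits=8, dither=True, seed=0):
    """Map one channel of a high-dynamic-range lumi-volume to a fixed-depth
    frame buffer. Values above the white point saturate; optional uniform
    dithering trades visible banding for noise."""
    levels = (1 << bits) - 1
    # A high percentile as the white point keeps a few very bright voxels from
    # compressing the rest of the range (they saturate instead).
    white = np.percentile(lumi, 99.9)
    scaled = np.clip(lumi / max(white, 1e-12), 0.0, 1.0) * levels
    if dither:
        scaled += np.random.default_rng(seed).uniform(-0.5, 0.5, lumi.shape)
    return np.clip(np.round(scaled), 0, levels).astype(np.uint8)

Raising the white point reduces saturation at the cost of quantizing the remaining range more coarsely.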

Hardware modification will be needed to further improve rendering quality, increase the FOV, and reduce transparency effects. While some recent approaches use anisotropic spinning surfaces [30], [25], [9] to achieve this goal, we plan to study the use of multiple spinning diffuse surfaces. For instance, a different image could be projected on both sides of our Perspecta display's spinning plane, leading to the use of two different lumi-volumes.

7 CONCLUSION

IEVDs offer an intuitive means for data visualization and exploration. However, computer graphics on these displays has been limited in the past to sending either voxelized 3D meshes or medical data sets to the display, resulting in low-fidelity renderings. This paper has offered a way to alleviate this issue by formalizing, for the first time, the rendering problem on isotropically emissive displays and by introducing the corresponding rendering pipeline to produce lumi-volumes.

We have provided the first set of experimental evidence to support the usability and scalability of this new technique. This finding significantly broadens the application domain of isotropically emissive displays and is expected to stimulate further research on, and deployment of, volumetric display technology.

Our simulations indicate that the lumi-volume technique can scale as the resolution and bandwidth of volumetric display technology continue to increase. We look forward to the continued enhancement of commercial volumetric display systems such as the Perspecta system.

Our future work will include investigations into faster reconstruction algorithms and into reconstruction algorithms that are not EM based, multiplane display simulations, the use of head tracking, compression methods for lumi-volumes, and the perceptual merits of enhanced transparency in MIP-based visualization on IEVDs.
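For context, the EM-based reconstruction mentioned above follows the general multiplicative update used in emission tomography [12], [32]: each voxel is scaled by the backprojected ratio of the target LF projections to the ray sums of the current estimate. The sketch below is a generic form of one such step, with hypothetical forward and backproject callables standing in for the LF projector and its adjoint; it is not our exact implementation.

import numpy as np

def em_step(vol, targets, forward, backproject, eps=1e-8):
    """One generic MLEM-style update: vol stays nonnegative, and its ray sums
    move toward the target projections. 'forward' maps a volume to a list of
    projection images; 'backproject' is the adjoint mapping back to a volume."""
    predictions = forward(vol)                               # current ray sums
    ratios = [t / (p + eps) for t, p in zip(targets, predictions)]
    norm = backproject([np.ones_like(t) for t in targets])   # sensitivity volume
    return vol * backproject(ratios) / (norm + eps)

As Fig. 16 illustrates, the number of such steps affects the reconstructed LF; more steps also mean more computation time.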

ACKNOWLEDGMENTS

The authors wish to thank Gregg Favalora and Joshua Napoli (Actuality Systems) for helping them with the use of the Perspecta Display SDK, and the Purdue University Envision Center for giving them access to a Perspecta display. This work was supported in part by a grant from EPSRC (EP/E001750/1).


Fig. 15. Here, we present the quality of a reconstructed LF (Head 1D, maximum angle of 40 degrees) according to the number of projections (columns). Rows (a) and (c) represent the reconstructed LF at two consecutive exact sampling locations. Row (b) shows the reconstruction at the middle angle.


REFERENCES

[1] E.H. Adelson and J.R. Bergen, “The Plenoptic Function and the Elements of Early Vision,” Computational Models of Visual Processing, M. Landy and J.A. Movshon, eds., MIT Press, 1991.

[2] E.H. Adelson and J.Y.A. Wang, “Single Lens Stereo with a Plenoptic Camera,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 99-106, Feb. 1992.

[3] L. Ahrenberg, I. Ihrke, and M.A. Magnor, “Volumetric Reconstruction, Compression and Rendering of Natural Phenomena,” Proc. Fourth Int’l Workshop Volume Graphics, pp. 83-90, 2005.

[4] B.G. Blundell and A.J. Schwarz, Volumetric Three-Dimensional Display Systems. John Wiley & Sons, 2000.

[5] B.G. Blundell and A.J. Schwarz, “The Classification of Volumetric Display Systems: Characteristics and Predictability of the Image Space,” IEEE Trans. Visualization and Computer Graphics, vol. 8, no. 1, pp. 66-75, Jan. 2002.

[6] B. Cabral, N. Cam, and J. Foran, “Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware,” Proc. IEEE Symp. Volume Visualization, pp. 91-98, 1994.

[7] J.-X. Chai, S.-C. Chan, H.-Y. Shum, and X. Tong, “Plenoptic Sampling,” Proc. ACM SIGGRAPH ’00, pp. 307-318, 2000.

[8] K. Chidlow and T. Moller, “Rapid Emission Tomography Reconstruction,” Proc. Third Int’l Workshop Volume Graphics, pp. 15-26, July 2003.

[9] O.S. Cossairt, J. Napoli, S.L. Hill, R.K. Dorval, and G.E. Favalora, “Occlusion-Capable Multiview Volumetric Three-Dimensional Display,” Applied Optics, vol. 46, no. 8, pp. 1244-1250, 2007.

[10] F. Dachille, K. Mueller, and A.E. Kaufman, “Volumetric Backprojection,” Proc. IEEE Symp. Volume Visualization and Graphics, pp. 109-117, Oct. 2000.

[11] J.S. De Bonet and P. Viola, “Roxels: Responsibility Weighted 3D Volume Reconstruction,” Proc. Seventh Int’l Conf. Computer Vision (ICCV ’99), pp. 418-425, 1999.


Fig. 16. The impact of the different parameters on reconstructed LFs, including the FOV angle and the number of EM steps. (1b) to (2b) Variation of the horizontal FOV. (2c) to (2e) Variation of the horizontal and vertical FOV. (3a) X-ray of the original head data set. (3b) Original MIP LF. (3c) to (4b) Variation of the horizontal FOV for the MIP head LF. (4c) Full FOV for the MIP head LF. (4d) to (5b) Impact of the number of EM steps on the reconstructed LF. (5c) to (5d) MIP aneurism using a full FOV. (5e) X-ray of the original aneurism data set.


[12] A. Dempster, N. Laird, and D. Rubin, “Maximum Likelihood from Incomplete Data via the EM Algorithm,” J. Royal Statistical Soc., Series B, vol. 39, pp. 1-38, 1977.

[13] N.A. Dodgson, “Autostereoscopic 3D Displays,” Computer, vol. 38, no. 8, pp. 31-36, Aug. 2005.

[14] P. Eisert, E. Steinbach, and B. Girod, “3D Shape Reconstruction from Light Fields Using Voxel Back-Projection,” Proc. Vision, Modeling, and Visualization Workshop (VMV ’99), pp. 67-74, Nov. 1999.

[15] G.E. Favalora, “Volumetric 3D Displays and Application Infrastructure,” Computer, vol. 38, no. 8, pp. 37-44, Aug. 2005.

[16] L.A. Feldkamp, L.C. Davis, and J.W. Kress, “Practical Cone-Beam Algorithm,” J. Optical Soc. of America, vol. 1, no. 6, pp. 612-619, June 1984.

[17] D.T. Gering and M.W. Wells III, “Object Modeling Using Tomography and Photography,” Proc. IEEE Workshop Multi-View Modeling and Analysis of Visual Scenes (MVIEW ’99), pp. 11-18, June 1999.

[18] R. Gordon, R. Bender, and G.T. Herman, “Algebraic Reconstruction Techniques (ART) for Three Dimensional Electron Microscopy and X-Ray Photography,” J. Theoretical Biology, vol. 29, pp. 471-481, 1970.

[19] S.J. Gortler, R. Grzeszczuk, R. Szeliski, and M.F. Cohen, “The Lumigraph,” Proc. ACM SIGGRAPH ’96, pp. 43-54, 1996.

[20] T. Grossman, D. Wigdor, and R. Balakrishnan, “Multi-Finger Gestural Interaction with 3D Volumetric Displays,” ACM Trans. Graphics, vol. 24, no. 3, p. 931, 2005.

[21] S.W. Hasinoff and K.N. Kutulakos, “Photo-Consistent 3D Fire by Flame Sheet Decomposition,” Proc. Ninth Int’l Conf. Computer Vision (ICCV ’03), pp. 1184-1191, 2003.

[22] H.M. Hudson and R. Larkin, “Accelerated Image Reconstruction Using Ordered Subsets of Projection Data,” IEEE Trans. Medical Imaging, pp. 100-108, 1994.

[23] I. Ihrke and M.A. Magnor, “Image-Based Tomographic Reconstruction of Flames,” Proc. ACM SIGGRAPH/Eurographics Symp. Computer Animation (SCA ’04), pp. 367-375, June 2004.

[24] I. Ihrke and M.A. Magnor, “Adaptive Grid Optical Tomography,” Graphical Models, vol. 68, no. 5, pp. 484-495, 2006.

[25] A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an Interactive 360° Light Field Display,” ACM Trans. Graphics (Proc. ACM SIGGRAPH ’07), vol. 26, no. 3, 2007.

[26] M. Levoy and P. Hanrahan, “Light Field Rendering,” Proc. ACM SIGGRAPH ’96, pp. 31-42, 1996.

[27] Z. Lin and H.-Y. Shum, “On the Number of Samples Needed in Light Field Rendering with Constant-Depth Assumption,” Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR ’00), pp. 588-597, 2000.

[28] W. Matusik and H. Pfister, “3D TV: A Scalable System for Real-Time Acquisition, Transmission and Autostereoscopic Display of Dynamic Scenes,” ACM Trans. Graphics (Proc. ACM SIGGRAPH ’04), vol. 23, no. 3, pp. 814-824, 2004.

[29] K. Mueller and R. Yagel, “Rapid 3D Cone-Beam Reconstruction with the Simultaneous Algebraic Reconstruction Technique (SART) Using 2D Texture Mapping Hardware,” IEEE Trans. Medical Imaging, vol. 19, no. 2, pp. 1227-1237, 2000.

[30] R. Otsuka, T. Hoshino, and Y. Horry, “Transpost: A Novel Approach to the Display and Transmission of 360 Degrees-Viewable 3D Solid Images,” IEEE Trans. Visualization and Computer Graphics, vol. 12, no. 2, pp. 178-185, Mar. 2006.

[31] S.M. Seitz and C.R. Dyer, “Photorealistic Scene Reconstruction by Voxel Coloring,” Int’l J. Computer Vision, vol. 35, no. 2, pp. 151-173, 1999.

[32] L.A. Shepp and Y. Vardi, “Maximum Likelihood Reconstruction for Emission Tomography,” IEEE Trans. Medical Imaging, vol. 1, pp. 113-122, 1982.

[33] C. Slinger, C. Cameron, and M. Stanley, “Computer-Generated Holography as a Generic Display Technology,” Computer, vol. 38, no. 8, pp. 46-53, Aug. 2005.

[34] B. Trifonov, D. Bradley, and W. Heidrich, “Tomographic Reconstruction of Transparent Objects,” Proc. 17th Eurographics Symp. Rendering (EGSR ’06), pp. 51-60, 2006.

[35] T. Tyler, A. Novobilski, J. Dumas, and A. Warren, “The Utility of Perspecta 3D Volumetric Display for Completion of Tasks,” Proc. 17th Ann. Symp. Electronic Imaging Science and Technology, pp. 268-279, 2005.

[36] A.S. Wang, G. Narayan, D. Kao, and D. Liang, “An Evaluation of Using Real-Time Volumetric Display of 3D Ultrasound Data for Intracardiac Catheter Manipulation Tasks,” Proc. Fourth Int’l Workshop Volume Graphics, pp. 41-45, 2005.

[37] F. Xu and K. Mueller, “Accelerating Popular Tomographic Reconstruction Algorithms on Commodity PC Graphics Hardware,” IEEE Trans. Nuclear Science, vol. 52, no. 3, pp. 654-663, 2005.

Benjamin Mora received the DEA (MSc equivalent) degree and the PhD degree in computer science from Paul Sabatier University (Toulouse 3) in 1998 and 2001, respectively. Since 2004, he has been a lecturer in the Department of Computer Science, University of Wales Swansea. His research interests include general computer graphics algorithms, volume visualization, ray tracing, and 3D displays.

Ross Maciejewski received the BS degree from the University of Missouri, Columbia, and the MS degree in electrical and computer engineering from Purdue University. He is a PhD student in the School of Electrical and Computer Engineering, Purdue University. His research interests include nonphotorealistic rendering and visual analytics.

Min Chen received the BSc degree in computer science from Fudan University in 1982 and the PhD degree from the University of Wales in 1991. He is currently a professor in the Department of Computer Science, University of Wales Swansea. In 1990, he took up a lectureship at Swansea University; he became a senior lecturer in 1998 and was awarded a personal chair (professorship) in 2001. His main research interests include visualization, computer graphics, and multimedia communications. He is a fellow of the British Computer Society and a member of the IEEE, Eurographics, and ACM SIGGRAPH.

David S. Ebert is a professor in the School of Electrical and Computer Engineering, Purdue University, as well as a university faculty scholar, the director of the Purdue University Rendering and Perceptualization Laboratory (PURPL), and the director of the Purdue University Regional Visualization and Analytics Center (PURVAC), which is part of the Department of Homeland Security's Regional Visualization and Analytics Center of Excellence. He performs research in novel visualization techniques, visual analytics, volume rendering, information visualization, perceptually based visualization, illustrative visualization, and procedural abstraction of complex massive data. He has been very active in the visualization community, teaching courses, presenting papers, cochairing many conference program committees, serving on the ACM SIGGRAPH Executive Committee, serving as the editor in chief of IEEE Transactions on Visualization and Computer Graphics, serving as a member of the IEEE Computer Society's Publications Board, serving on the National Visualization and Analytics Center's National Research Agenda Panel, and successfully managing a large program of external funding to develop more effective methods for visually communicating information. He is a senior member of the IEEE.

