
Volume Rendering for 3D Display

MAGNUS WAHRENBERG

Master of Science Thesis Stockholm, Sweden 2006

Master's Thesis in Computer Science (20 credits)
at the School of Computer Science and Engineering,
Royal Institute of Technology, 2006

Supervisor at CSC: Lars Kjelldahl
Examiner: Lars Kjelldahl

TRITA-CSC-E 2006:099
ISRN-KTH/CSC/E--06/099--SE
ISSN-1653-5715

Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE-100 44 Stockholm, Sweden
URL: www.csc.kth.se


Abstract

In medical imaging, more extensive use of hardware capable of producing 3D datasets also results in a more demanding display procedure. Conventional 2D viewing of 3D datasets imposes a very difficult and potentially unnecessary interpretation step. For viewing 3D scans, Volume Rendering, or "the display of data sampled in three dimensions", is becoming more of an issue. The immense rendering power required for this application is far beyond what is found in a standard workstation. However, the development of consumer graphics hardware, used to accelerate rendering, progresses at an incredible rate, likely sustained by the computer gaming industry. Real-time Volume Rendering with acceptable interactive frame rates has recently been enabled in the wake of this development. In a few years we might see all workstations within a hospital capable of this kind of rendering. Is Volume Rendering good enough for diagnostic purposes, and is there anything that might enhance the experience? One answer may be found in the combination of Volume Rendering and a 3D display device. In this Master's thesis, I look at Volume Rendering in relation to one of the most advanced autostereoscopic displays available, which is currently being developed by Setred. This is done through designing and implementing a Volume Rendering platform capable of output on the display. One of the questions which arose is how this special case of rendering performs on consumer graphics hardware. It seems, however, that it may take just a while longer before hardware really able to cope with the task of volume rendering becomes available at consumer level and price.


Summary

Volume Rendering for 3D Displays

Volume rendering is a special branch of rendering that focuses on the visualization of volume data, as opposed to surfaces, which are the most common representation in 3D rendering. Volume rendering is mainly used in scientific visualization, and perhaps most of all in medical contexts. With medical equipment capable of producing 3D scans comes the difficulty of visualizing that data. A simple method is to show the volume as a sequence of cross sections in the form of 2D images. This visualization, however, introduces a rather difficult interpretation step. Instead, volume rendering can be used, and thanks to the new generation of graphics hardware, the rendering methods are approaching interactive update rates. Is the rendering good enough to be used for diagnostics in a field as important as medicine? Is there anything that could improve the experience? One answer to that question may be to combine volume rendering with displays capable of conveying a sense of depth. In this work I take a closer look at volume rendering methods together with one of the most advanced autostereoscopic displays, which is being developed by Setred. In the thesis project I build a platform for testing volume rendering on the display. What can be seen is that the current generation of graphics hardware barely achieves interactive rates, even at lower resolutions. The real requirements are considerably higher, but in the near future graphics hardware may be good enough for this purpose. Rendering multiple views for stereo places even greater demands on the hardware, which means that this type of rendering will require special solutions for quite some time to come.


Contents

I Introduction

1 Setred
2 Project
   2.1 Goals
   2.2 Limitations

II Theory

3 Computer Graphics Hardware
   3.1 Geometry processing
   3.2 Rasterization
   3.3 Fragment Operation
      3.3.1 Blending
   3.4 Shaders
      3.4.1 Vertex Shader
      3.4.2 Fragment/Pixel Shader
   3.5 Texture Compression
      3.5.1 FXT1
      3.5.2 S3TC
      3.5.3 3Dc
4 Volume Rendering
   4.1 DVR and IVR
   4.2 Volume rendering integral
   4.3 Optical Models
      4.3.1 Absorption only
      4.3.2 Emission only
      4.3.3 Absorption and Emission
      4.3.4 Scattering
   4.4 Voxels
   4.5 Reconstruction
   4.6 Classification
      4.6.1 Transfer function and feature extraction
   4.7 Algorithms of Direct Volume Rendering
      4.7.1 Image and Object order
      4.7.2 Shear-Warp
      4.7.3 2D-Texture
      4.7.4 3D-Texture
      4.7.5 Raycasting
      4.7.6 Volume Splatting
      4.7.7 Maximum Intensity Projection
5 Medical Imaging and DICOM
   5.1 File format
   5.2 Medical Imaging
6 3D Displaying
   6.1 Depth Cues
      6.1.1 Physiological
      6.1.2 Psychological
   6.2 3D Display Device Technology
      6.2.1 Field sequential separation
      6.2.2 Time parallel separation
      6.2.3 Autostereoscopic displays
   6.3 Setred Holoform Display
      6.3.1 Repeating viewing regions

III Implementation

7 Samurai 3D Volume Renderer for 3D Display
   7.1 Input
   7.2 Rendering
      7.2.1 2D Texture
      7.2.2 3D Texture
      7.2.3 Splatting
      7.2.4 Raycasting
      7.2.5 Classification
   7.3 GUI
      7.3.1 Main Form
      7.3.2 Rendering Form
      7.3.3 Color Table Form
   7.4 Display integration
   7.5 Testing on display
8 Display Simulator
   8.1 Model
   8.2 Views
      8.2.1 Orthographic Image Plane View
      8.2.2 Perspective View
9 Example Images from Volume Renderer

IV Discussion

   9.1 Future work
      9.1.1 Samurai
      9.1.2 Display Simulator
   9.2 Volume Rendering
      9.2.1 Data management
      9.2.2 Algorithms
      9.2.3 Classification
      9.2.4 Summary
   9.3 Displaying

References


Part I

Introduction

1 Setred

Setred is developing an autostereoscopic display system based on technology resulting from joint research between MIT and Cambridge University. The company is based in England, Norway and Sweden. The supervisor at Setred for this project is Thomas Ericson.

2 Project

Volume rendering is an important tool for scientific visualization. Major fields include medicine and geology. Combining volume rendering with a 3D display may lead to great advantages for users, perhaps even better diagnostic results within radiology. The first step toward that research is to create a limited software testing platform to get a preview of medical imaging and rendering on the 3D display.

2.1 Goals

The goal of this project is to create a specialized volume rendering platform for testing volume imaging on the Setred 3D Display system. The system is primarily designed for medical data. In that field, the dominating image format is DICOM, which is why one goal is to enable DICOM input.

The major task of this project is to see how volume rendering combines with 3D display. Looking at multiple volume rendering techniques is vital to get good insight into the discussion.

There will only be opportunity for a very short test of the rendering software on the actual display. Generally, having only one prototype is a limitation, which motivates the second part of the rendering platform - the display simulator. Making a physically correct simulation of the display is a task far beyond the scope of this project. The display simulator will therefore be based on a simple model capable of showing the general output from the display, as well as one artifact, called tearing, that may affect rendering.

Here follows a summary of features included in the scope of this project.

1. DICOM Imaging

(a) Support datasets generated by actual medical scanning equipment

2. Rendering


(a) Multiple rendering techniques

i. 2D Texture based

ii. 3D Texture based

iii. Volume splatting

iv. Raycasting

(b) At least one technique implemented with some of the possible quality improvement measures.

(c) Configurable transfer function (see section 7.2.5)

3. Simulator

(a) Orthographic view

(b) Perspective view

(c) Variable number of slits.

(d) Support for multiple viewing regions - Slice'n Dice

The project can be summarized as follows: I will design and implement the entire rendering platform as in the list above. This involves implementing a DICOM image reader, as well as designing and implementing a volume rendering platform with renderers demonstrating four different rendering algorithms, all featuring classification. Finally, I will create a limited Display Simulator which gives some notion of the characteristics of the actual display. This involves working out a mathematical model based on the description of the workings of the display, as well as implementing it.

The parts of this platform not done by myself are the display integration library as well as some of the user controls.

2.2 Limitations

The DICOM format is one of the most extensive image formats available. There are different levels of conformance, but this project is limited to basic functionality for reading the provided datasets.

Since the goal is to try implementing as many different volume rendering methods as possible, some optimizations and quality improvement methods will be left for discussion only.

The display simulator will be developed with the simplest mathematical model possible, resulting in good performance and ease of use. The limitations within each area are listed below.

1. DICOM Imaging


(a) Basic uncompressed images only

(b) Only support for 8 and 16 bit gray-scale

(c) Only support for images created with little-endian byte order (Intel).

2. Rendering

(a) 2D Texture

i. No opacity correction

ii. Only support for 8 bit per sample

iii. Only 2/3 axis-stacks - enables rotation around y-axis.

iv. No user configurable clip-planes

(b) 3D Texture

i. Only proxy-geometry based on Spherical shells

ii. Cube shaped volume only

iii. No user configurable clip-planes

(c) Splatting

i. Based on emission-only optical model due to lack of depth sorting

ii. No clipping

(d) Raycasting

i. No clipping

ii. Few implemented optimizations

iii. Only 8 bit per axis ray setup

3. Display Simulation

(a) Only targeting tearing

(b) One eye view only - no stereo rendering

Part II

Theory

3 Computer Graphics Hardware

Today almost every computer is shipped with a graphics card capable of accelerating 3D computer graphics. A modern GPU (Graphics Processing Unit; the term VPU, Visual Processing Unit, is also commonly used) provides acceleration throughout the rendering pipeline, allowing more extensive rendering. Worth noting is the power and complexity of modern GPUs. The latest generation of GPUs generally has several times more transistors than CPUs, even if the latest Intel Pentium D CPU series, with its 376 million transistors[23], almost matches ATI's X1900 GPU with 384 million[13]. Compare with the AMD Athlon 64 4000+ (Newark) CPU, which has 105 million transistors[23]. This number only gives some notion of complexity and is not well correlated with performance.

The rendering process begins with an application storing some kind of scene description based on planar polygons. The process of rendering an image from that scene is called display traversal[14]. For a long time this process has consisted of a fixed number of steps and features, restricting quality, since real-time graphics is more or less dependent on the acceleration provided by the graphics card. Lately a new generation of graphics hardware has been introduced which offers a programmable pipeline through programs called shaders. Although this offers much flexibility, the overall pipeline still looks the same, summarized below in three steps.

1. Geometry processing

2. Rasterization

3. Fragment operation

The rendered values are finally written to the framebuffer, the memory location which holds the image currently displayed on the screen.

3.1 Geometry processing

The planar polygons enter geometry processing as vertices. Most of the operations in this stage are done per vertex, so the actual geometry isn't composed until late in the processing stage.[14]

First, the vertices are transformed according to the modeling and viewing matrices. These matrices are often combined through multiplication into a single model-view matrix.[14]

Secondly, the transformed vertices are lit based on normal vectors and some kind of local illumination model. The lighting depends on the transformations, so it comes after the transformation stage in the pipeline.[14]

Finally, the geometry primitives are assembled for clipping and the final projection onto the image plane. Polygons are almost always reduced into triangle primitives for the hardware to handle.[14]
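As a small illustration of the transformation step, the sketch below (Python with NumPy, my own example rather than anything from the thesis; the matrices and the column-vector convention are assumptions) combines a modeling and a viewing matrix into one model-view matrix and applies it to a vertex:

```python
import numpy as np

# Modeling matrix: translate the object by (1, 2, 3) in world space.
M_model = np.eye(4)
M_model[:3, 3] = [1.0, 2.0, 3.0]

# Viewing matrix: move the world so the camera sits at z = 10.
M_view = np.eye(4)
M_view[2, 3] = -10.0

M_modelview = M_view @ M_model            # combined once, reused per vertex

vertex = np.array([0.0, 0.0, 0.0, 1.0])   # homogeneous object-space vertex
print(M_modelview @ vertex)               # eye-space position (1, 2, -7, 1)
```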

3.2 Rasterization

Rasterization is where the geometry is actually converted into pixel values, or fragments. A fragment doesn't necessarily result in a pixel in the final output image, since modern graphics hardware allows multi-pass rasterization with different ways of combining values[5]. When a fragment is generated from textured geometry, one or, with multitexturing, several texture lookups occur, and the final fragment color is calculated based on texture, shading/lighting and color.

3.3 Fragment Operation

We now have a complete fragment which is ready for output. However, we still have the option of throwing the fragment away, or combining it with already written values. This is done through several available fragment operations, such as the standard OpenGL fragment operations listed below.

Scissor test: Throws away every fragment not within a user-specified rectangle on the screen.

Alpha test: Decides whether the fragment should be thrown away based on its alpha (opacity) value.

Stencil test: Checks a special stencil buffer to decide whether the fragment should be kept. The buffer must have been written before the fragment is generated.

Depth test: Tests the fragment based on its depth. Useful for hidden surface removal.

Blending: Combines the fragment with the existing value according to a blending function.

Dithering: On systems where color depth is limited, dithering can be performed to enhance the perceived color depth. Not commonly used any more, since color depth is generally high enough.

Logical operations: Different logical operations that can be performed as the fragment is written.

If a fragment fails one test, it doesn't proceed to the next.[6]

3.3.1 Blending

Blending is a vital operation for volume rendering on graphics hardware. A generated source fragment is combined (blended) with the destination fragment already written in the framebuffer.[6]

We denote the source fragment color and alpha component-wise with $C_{src} = \{R_{src}, G_{src}, B_{src}, A_{src}\}$, and the destination fragment color and alpha with $C_{dst} = \{R_{dst}, G_{dst}, B_{dst}, A_{dst}\}$. From these values we also form a source factor $F_{src} = \{f_{R,src}, f_{G,src}, f_{B,src}, f_{A,src}\}$ and a destination factor $F_{dst} = \{f_{R,dst}, f_{G,dst}, f_{B,dst}, f_{A,dst}\}$ that will be used to weight the source and destination values. There are many possible combinations, but the following, for example, is very commonly used:

$$F_{src} = A_{src}, \qquad F_{dst} = 1 - A_{src} \qquad (1)$$

The resulting pixel value is calculated with a blending equation. Most relevant in this case are add and max.[4]

$$\begin{aligned} R_{add} &= R_{src} f_{R,src} + R_{dst} f_{R,dst} \\ G_{add} &= G_{src} f_{G,src} + G_{dst} f_{G,dst} \\ B_{add} &= B_{src} f_{B,src} + B_{dst} f_{B,dst} \\ A_{add} &= A_{src} f_{A,src} + A_{dst} f_{A,dst} \end{aligned} \qquad (2)$$

$$\begin{aligned} R_{max} &= \max(R_{src}, R_{dst}) \\ G_{max} &= \max(G_{src}, G_{dst}) \\ B_{max} &= \max(B_{src}, B_{dst}) \\ A_{max} &= \max(A_{src}, A_{dst}) \end{aligned} \qquad (3)$$

If we take a look at the example blending factors in equation 1, together with the blending equation add, we see that the final result will be as if the geometry of the source fragment were transparent, with opacity $A_{src}$. This is very useful, but it requires the scene to be rendered back-to-front, which in complex scenes can be very difficult.[6]
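The following sketch (Python/NumPy, my own illustration; the layer colors and opacities are arbitrary) emulates the add blending equation with the factors of equation 1, compositing translucent layers back-to-front the way the hardware would:

```python
import numpy as np

def blend_over(src_rgb, src_a, dst_rgb):
    """One 'add' blend step with F_src = A_src, F_dst = 1 - A_src (eq. 1-2)."""
    return src_rgb * src_a + dst_rgb * (1.0 - src_a)

# Composite three translucent layers back-to-front over a black background.
layers = [                                  # (color, opacity), farthest first
    (np.array([1.0, 0.0, 0.0]), 0.5),
    (np.array([0.0, 1.0, 0.0]), 0.5),
    (np.array([0.0, 0.0, 1.0]), 0.5),
]
pixel = np.zeros(3)
for color, alpha in layers:
    pixel = blend_over(color, alpha, pixel)
print(pixel)                                # nearest (blue) layer dominates
```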

3.4 Shaders

New graphics cards offer a programmable pipeline through programs called shaders. Shaders are not a new invention thought up by the graphics card manufacturers, but rather a classical part of the rendering pipeline. High-quality software renderers, used for example in animation, have had a highly programmable pipeline for a long time. One of the first to introduce shaders into the rendering pipeline was Pixar with their RenderMan interface, but most other rendering engines soon followed. It is common to think of a shader as a sort of advanced material for the geometry, but that is just one of several applications.

The actual program consists, just like any other program, of low-level instructions carried out in the vertex and fragment processors. High-level languages were introduced early for more efficient development. Among the major languages are Cg (C for Graphics, introduced by NVidia), GLSL (OpenGL Shading Language) and HLSL (High Level Shading Language, introduced by Microsoft), all of which resemble the basic syntax of the C programming language.

3.4.1 Vertex Shader

The vertex shader is the program which substitutes part of the fixed-function geometry processing stage. The program runs once for every vertex and can carry out various vector operations as well as changing different global attributes. Vertices can, however, not be created or removed, which is one of the major limitations of this version of the pipeline. This limitation will be lifted in the next generation of graphics cards based on the Direct3D 10 specification, where geometry shaders, a third shader type, are introduced[10].

3.4.2 Fragment/Pixel Shader

The fragment shader is a program which runs for every fragment and replaces part of the rasterization stage. In Microsoft's DirectX, the fragment shader goes under the name pixel shader.

The fragment shader takes interpolated surface attributes and calculates a color value for the fragment. Fragment shaders enable per-pixel effects like real-time Phong surface shading, in contrast to the interpolated Gouraud shading used, for example, in the OpenGL fixed-function pipeline[6].

3.5 Texture Compression

The incredible performance of the modern graphics card depends on fast access to scene data. Therefore graphics cards are equipped with a great amount of high-speed memory on an internal bus matching the speed of the memory. Despite the large amount of memory on the graphics card, whole scenes seldom fit into it. Data must be streamed across the slow AGP bus, or the slightly (~2x) faster PCI-Express bus, during rendering. Much of this data consists of textures. We might not consider images as occupying a lot of space, but that's because of the extensive use of image compression.

Compression is now also available for textures stored on the graphics card. There are three main compression techniques: FXT1, S3TC and 3Dc. All three are destructive (lossy), meaning that the image quality is reduced compared to the original.


3.5.1 FXT1

FXT1 was the first texture compression technique available on graphics hardware. It was developed by 3dfx, but is now open source. It is a block-based technique which can reach compression ratios up to 8:1, with severely reduced quality.[22]

3.5.2 S3TC

S3 later developed their own compression technique, very similar to FXT1. It also uses 4x4 blocks for encoding and can only reach a compression ratio of 6:1[3]. Generally, however, the quality is better than FXT1's. Microsoft licensed S3TC as the compression technique used in DirectX, where it goes under the name DXTn.

3.5.3 3Dc

In later years, ATI developed their own technique, 3Dc, which partly targets a new type of texture, namely normal maps. The other available techniques handle such textures poorly. As normal mapping techniques become more and more common, the same will probably happen to the 3Dc compression technique. It offers very high quality at a compression ratio of 4:1[11].

4 Volume Rendering

The true pioneer of volume rendering, Lee Westover, gave this definition: "Volume rendering is the display of data sampled in three dimensions"[25]. The definition doesn't say more because of the wide range of different applications and techniques involved. The samples can represent density or velocity, and can even hold multiple properties.

A cornerstone of computer graphics is its representation of objects as surfaces[25][14]. This representation fits most items in the real world, since they are mostly opaque and reflect light rays at the surface. Even transparent objects like glass fit into the surface representation, as long as the glass has a homogeneous refractive index and color. In that case the light refracts through the glass but still only changes direction when entering and exiting through the surface. There are, however, cases where light interacts with the volume rather than just the plain surface of the volume, and this is where volume rendering comes into the picture.

In most applications of 3D computer graphics we have a camera, often representing a spectator, which implies that we're simulating light in the visual spectrum. This assumption is true when displaying, but it's important to realize that the data within the scene may not be the result of sampling light (as a photograph is). This means that we might not get the wanted result by simulating light to the furthest extent.


4.1 DVR and IVR

There are two families of volume rendering: Direct (DVR) and Indirect (IVR) Volume Rendering. In DVR we try to simulate the ray through the volume, whereas in IVR we use algorithms to generate a surface from the relevant features of the volume data. There are also hybrid techniques where both direct and indirect rendering are used. In this project, focus is set on Direct Volume Rendering, since it can produce a wider spectrum of results.

4.2 Volume rendering integral

The principle of Direct Volume Rendering is to solve the volume rendering integral for every pixel in the image. The volume rendering integral defines how much light is received along each ray cast from the camera point. The characteristics of this integral depend on what is called the optical model.

4.3 Optical Models

What we are trying to simulate with Direct Volume Rendering is how light is affected as it travels through the volume. There are mainly three physical phenomena that we must simulate to some degree in order to get a realistic result[20].

• Absorption

• Emission

• Scattering

It is not in every case that all of these affect the result, and so we choose different optical models for different applications. In radiology and x-ray imaging, for example, emission and scattering are seldom used, since absorption dominates the result[20].

4.3.1 Absorption only

Absorption only is a very simple optical model which is very useful in radiology and x-ray imaging. We assume that we have a medium consisting of particles that absorb the ray. How much energy is absorbed depends on the density or concentration of particles in the medium. If we let the ray be absorbed through a cylinder of length Δs, we have an extinction coefficient τ(s), which is a function of the particle density ρ(s) and the area A which one particle occludes.[20]

$$\tau(s) = \rho(s)\, A$$


This coefficient is strongly related to the term opacity and defines the rate at which the intensity I declines. The actual opacity is α = 1 − T(s), where T(s) is the transparency, or how much of the light passes through, as seen in equation 6.

$$\frac{dI}{ds} = -\tau(s)\, I(s) \qquad (4)$$

$$I(s) = I_0\, e^{-\int_0^s \tau(t)\, dt} \qquad (5)$$

$$T(s) = e^{-\int_0^s \tau(t)\, dt} \qquad (6)$$

We see in the last equation that T(s) goes to 0 when the exponent is −∞ and to 1 when the exponent is 0. This is natural, since the factor represents the fraction of light that passes through.
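As a sanity check on equation 6 (a minimal sketch in Python/NumPy, not from the thesis; the constant extinction value is an arbitrary assumption), the transparency can be integrated numerically and compared with the closed form for a homogeneous medium:

```python
import numpy as np

# Transparency T(s) = exp(-integral of tau) from eq. 6, evaluated
# numerically and compared with the exact result for tau(s) = tau0.
tau0, s_max, n = 0.5, 10.0, 1000
s = np.linspace(0.0, s_max, n)
tau = np.full(n, tau0)                  # extinction coefficient samples
T_numeric = np.exp(-np.trapz(tau, s))
T_exact = np.exp(-tau0 * s_max)
print(T_numeric, T_exact)               # both ~ exp(-5)
```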

4.3.2 Emission only

If we have particles emitting light, we can sometimes use the emission-only model. The existence of particles within the volume implies that absorption should occur. There are, however, cases where the particle density, and thereby also the absorption, is negligible in comparison to the emission. This can for example be the case with light-emitting gases like fire. Instead of an extinction coefficient τ(s), we here have what is called a source term g(s), which indicates how much light is emitted along the cylinder. The following equation is the base of this model, still using the cylinder definition from the previous section.[20]

$$\frac{dI}{ds} = g(s)$$

The change of intensity is not dependent on the intensity itself, which means that we only add to the accumulated intensity along the ray.

$$I(s) = I_0 + \int_0^s g(t)\, dt \qquad (7)$$

This final equation 7, integrated through the volume and all the way to the eye, is the volume rendering integral for this optical model. The eye distance is defined as D, and the volume rendering integral is I(D).

4.3.3 Absorption and Emission

Absorption and emission is a more realistic model of the case where we have particles emitting light. This model is the most used within volume rendering, which would imply that it is a good trade-off between realism and complexity[14]. In this case we have to take both the emitted light and the absorbed incoming light into account, extending equation 4. Again, the source term g(s) adds an intensity change, and the extinction coefficient τ(s) defines the reduction by absorption.

$$\frac{dI}{ds} = g(s) - \tau(s)\, I(s) \qquad (8)$$

When we solve and simplify this equation, and also integrate all the way from 0 to the eye distance D, defining the volume rendering integral, we get this equation:

$$I(D) = I_0\, T(D) + \int_0^D g(s)\, T'(s)\, ds \qquad (9)$$

The first term represents the absorbed incoming light, where T(D) is the total transparency through the ray (and T′(s) in equation 9 denotes the transparency from s to the eye). This equation gets somewhat clearer if we proceed with further simplification. We assume that g(s) = Cρ(s), where ρ(s) is a function of basically the size and amount of the particles; C can be regarded as a color. This assumption lets us simplify equation 9, resulting in equation 10.[20]

$$I(D) = I_0\, T(D) + C\, (1 - T(D)) \qquad (10)$$

We can now explain the second term, as we see that it is the color multiplied by the opacity. The term can be interpreted, with the opacity α = 1 − T(D), as the probability that the ray hits a particle and we see the color C.[20]
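To make equations 9 and 10 concrete, the following sketch (Python, my own illustration with arbitrary constants; the choice g(s) = C·τ0 for the homogeneous source term is an assumption) integrates equation 8 numerically and compares the result against the closed form of equation 10:

```python
import numpy as np

# Euler-integrate dI/ds = g(s) - tau(s) I(s) for a homogeneous medium.
tau0, C, I0, D, n = 0.3, 0.8, 0.1, 10.0, 100000
ds = D / n
I = I0                                     # intensity entering at the back
for _ in range(n):
    I += (C * tau0 - tau0 * I) * ds        # eq. 8 with g = C * tau0
T = np.exp(-tau0 * D)                      # total transparency (eq. 6)
print(I, I0 * T + C * (1.0 - T))           # numeric vs. eq. 10, near-equal
```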

4.3.4 Scattering

If we want to introduce scattering into the simulation, we take the complexity up another level. Previously we have assumed directional lighting coming from the back of the volume, but now we must take into account light received from other sources and directions: light can come directly from the light source or have been scattered previously. The complexity is immense, and correct simulation is more or less impossible. Usually a simplified model is used, for example the "Utah approximation", from the University of Utah, seen in equation 11.[20]

$$S(X, \omega) = r(X, \omega, \omega')\, i(X, \omega') \qquad (11)$$

Here i(X, ω′) represents the light received from direction ω′ at the point X, and r(X, ω, ω′) represents the BRDF (bidirectional reflection distribution function, which defines how much light is transmitted in each direction when the light interacts with the surface material[28]) of the particle. The BRDF can be defined by taking a fraction a(X) of the extinction coefficient and distributing it in directions according to the phase function p(ω, ω′). Basically, we take the light that hits the particle but doesn't get absorbed, and distribute it across the appropriate directions.[20]

$$r(X, \omega, \omega') = a(X)\, \tau(X)\, p(\omega, \omega') \qquad (12)$$

For spherically shaped particles, p(ω, ω′) is often approximated using the Henyey-Greenstein function, seen in equation 13.

$$p(\omega, \omega') = \frac{1}{4\pi} \cdot \frac{1 - c^2}{\left(1 + c^2 - 2cx\right)^{3/2}} \qquad (13)$$

Here x represents the cosine of the angle α between the incoming and outgoing light, as shown in equation 14.

$$x = \cos\alpha = \omega \cdot \omega' \qquad (14)$$

It is common to use surface shading in volume rendering without actually producing any surfaces. This effect is a special case of scattering, where the BRDF is approximated by shading models like Lambert shading (diffuse shading, which distributes light equally in all directions) or Phong shading (which includes specular reflection, giving the material a glossier character).[20] The scattering term S(X, ω) is finally introduced into the source term g from the previous sections, making it more complex than before.

$$g(X, \omega) = E(X) + S(X, \omega)$$

where E(X) is the emission taken into account previously.

4.4 Voxels

The data representation of volumes is very much the same as that of images; the difference is that images are two-dimensional and volumes three-dimensional. In an image, each sample is represented by a pixel; in volumes, the corresponding 3D entity is called a voxel. A pixel is often described as a square building block of the image, and in the same way a voxel is described as a cubic block. Both these representations are in fact misleading, even though they are useful for visualizing the concept. You should think of pixels and voxels as point samples without width and height, just distance to surrounding samples[14]. What a single pixel or voxel will look like is very much dependent on the interpolation kernel: a pixel is square only with a nearest-neighbor interpolation kernel, and the same applies to the cubic nature of a voxel. The volume is in fact a continuous function in the R³ domain which is represented by discrete samples. From these samples you make a reconstruction of the original function, and at this stage interpolation of some sort is required.[7]


4.5 Reconstruction

With a bandwidth-limited signal, the sampling theorem indicates the possibility of exact reconstruction, provided a Nyquist sampling rate is used. This would be done with the sinc function, but that is actually far too computationally intensive, since all samples must be considered at all points. Also, the assumption of a bandwidth-limited signal is often invalid for real-life data, where sharp boundaries are common. Instead, the most common filters are the box and tent filters, representing nearest-neighbor and linear interpolation.[14]
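A minimal sketch of the two filters (Python/NumPy, my own example with made-up sample values) reconstructing a 1D sampled signal:

```python
import numpy as np

samples = np.array([0.0, 1.0, 0.5, 0.8])   # illustrative voxel values
                                           # at positions 0, 1, 2, 3

def nearest(x):
    """Box filter: nearest-neighbor reconstruction."""
    return samples[np.clip(np.round(x).astype(int), 0, len(samples) - 1)]

def linear(x):
    """Tent filter: linear interpolation between neighboring samples."""
    i = np.clip(np.floor(x).astype(int), 0, len(samples) - 2)
    t = x - i
    return (1 - t) * samples[i] + t * samples[i + 1]

x = np.linspace(0.0, 3.0, 7)
print(nearest(x))    # piecewise-constant reconstruction
print(linear(x))     # piecewise-linear reconstruction
```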

4.6 Classification

The premise so far is that we have data generated through different sampling methods, but our rendering is based on light simulation. We can choose an optical model that suits the situation, but this is not enough to ensure image quality. We have to perform some kind of classification, which means assigning optical properties to the volume data[14]. We do that with a transfer function, which maps the volume data value to various properties. In radiology, for example, where the volume data represent density, the transfer function can be used to sort out different features (levels of density), either making unwanted parts transparent or assigning different colors.

4.6.1 Transfer function and feature extraction

The transfer function can be applied either before the interpolation (reconstruction) step, which is called pre-classification, or after, which is called post-classification. When possible, post-classification is used, since pre-classification has a tendency to produce visual artifacts.

The most basic transfer function is defined in a 1D domain and is implemented using a lookup table. In medical CT scans, for example, the sampled density is assigned a color and perhaps a transparency weight. This single data value is, however, not the only possible feature classifier. The transfer function can be extended to include more dimensions: a 2D transfer function with both data value and gradient magnitude gives a better result, as it can better detect material boundaries.[14]
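A 1D transfer function of this kind reduces to a table lookup. The sketch below (Python/NumPy; the ramp and threshold are arbitrary assumptions, not the thesis's actual tables) applies an RGBA lookup table after reconstruction, i.e. post-classification:

```python
import numpy as np

lut = np.zeros((256, 4))                              # RGBA lookup table
lut[:, 0] = np.linspace(0.0, 1.0, 256)                # red ramps with density
lut[:, 3] = np.where(np.arange(256) > 80, 0.6, 0.0)   # low densities transparent

def classify(density):
    """Map interpolated 8-bit densities to RGBA via the lookup table."""
    return lut[np.clip(density, 0, 255).astype(int)]

print(classify(np.array([10, 100, 200])))
```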

As automatic feature classification is very difficult, transfer functions are usually best made manually. Creating a 1D transfer function is a rather simple task which does not demand much from the user interface of the editor[14]. When more dimensions are added, the user interface difficulties become much more apparent.


4.7 Algorithms of Direct Volume Rendering

There are quite a few different approaches to Direct Volume Rendering for real-time use. Different performance characteristics and hardware requirements make choosing which to implement non-trivial.

4.7.1 Image and Object order

A volume rendering algorithm is either image order or object order. Image order means that you start from the image plane and see how the volume affects the pixel values; image order algorithms are thus basically those which work per pixel. With object order you start from the opposite side and see how each voxel projects onto the image plane, making them more or less per voxel. Raycasting (section 7.2.4) is a typical image order algorithm, while splatting (section 4.7.6) is object order.[14]

4.7.2 Shear-Warp

Shear-Warp is considered the fastest volume rendering algorithm around[14, 19]. As the name implies, the algorithm is divided into two major steps, illustrated in figure 1: one where you shear slices of the volume according to the viewing angle, and one where you warp them onto the image plane. Using these two steps to resample the volume is what makes this algorithm so fast; other approaches usually use more expensive multi-pass resampling. Shear-Warp offers a combination of advantages from both object order and image order algorithms.[15]

Figure 1: Illustration of Shear-Warp in orthogonal projection

The entire technique can be summed up by equation 15, which is the Shear-Warp factorization. P is the permutation matrix used to line up the coordinate system with the view.[15]

$$M_{view} = P \cdot S \cdot M_{warp} \qquad (15)$$


The S matrix is the shearing matrix, which transforms the slices in object space. For orthographic projection with slices perpendicular to the z-axis, S takes the shape shown in equation 16, and for perspective projection that shown in equation 17.

$$S_{ortho} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ s_x & s_y & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (16)$$

$$S_{persp} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ s_x & s_y & 1 & s_w \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (17)$$

Shear-warping is an overall good algorithm that produces image quality almost rivaling raycasting, described in section 7.2.4. Since the algorithm depends on opacity encoded in the volume, interactive transfer functions have been hard to realize, but as with other algorithms, the newly available programmable pipeline provides some relief.[19]
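As an illustration of the shear step (a sketch in Python/NumPy, my own; the row-vector convention and the choice of z as principal axis are assumptions), the matrix of equation 16 can be built so that the shear makes the viewing rays parallel to the z-axis:

```python
import numpy as np

def shear_ortho(view_dir):
    """Build S_ortho (eq. 16) for a view direction with principal axis z."""
    vx, vy, vz = view_dir
    sx, sy = -vx / vz, -vy / vz      # shear so rays become z-parallel
    S = np.eye(4)
    S[2, 0], S[2, 1] = sx, sy        # third row: (sx, sy, 1, 0)
    return S

S = shear_ortho(np.array([0.3, 0.1, 1.0]))
voxel = np.array([1.0, 2.0, 4.0, 1.0])    # homogeneous object-space point
print(voxel @ S)                          # x/y sheared by slice depth z = 4
```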

4.7.3 2D-Texture

Another fast approach is to use the graphics hardware's texturing and blending capabilities to approximate the volume rendering integral. The volume is divided into stacks of slices; one stack per axis is required for viewing the object from an arbitrary angle. This means that the amount of memory required is tripled, which is one of the major drawbacks of this method, especially since datasets tend to be rather large to begin with.[14]

Texturing methods are all object order algorithms, just like ordinary rendering, which makes them very compatible with standard graphics hardware. You project the texture onto proxy-geometry and make a composite by rendering back-to-front with blending. This is what approximates the volume rendering integral. The proxy-geometry for 2D textures is almost always textured rectangular slices.[14]

What makes this method fast is the graphics hardware's texturing performance. Ordinary 2D textures are very commonly used in all rendering, so naturally the hardware and drivers are optimized for them.

Besides the memory overhead, there are still a few problems with this approach. The interpolation is only bi-linear, since the interpolation is part of the texturing, which in this case is 2D. This means that the sampling rate on one axis is fixed, which, for most datasets, isn't optimal. There are, however, ways of solving this problem as long as you run on rather modern graphics hardware. By using multitexturing and some way to interpolate between two textures - for example with a shader - you can introduce an arbitrary number of slices in between the two fixed ones, resulting in a selectable sampling rate. This is often referred to as slice interpolation[14].[5]

Figure 2: Inaccurate sampling distance when using slices as proxy-geometry.

Another problem is the use of planes as proxy-geometry. When we approximate the volume rendering integral through blending, we have a certain sampling distance: the distance a ray travels between sampling points. This distance should be fixed for a correct result, but this is not the case with planes as proxy-geometry, where the distance Δt also depends on the angle of the ray against the plane. With a distance Δs between slices, a ray that hits the slices at an angle β will travel the Δt illustrated in figure 2 and given in equation 18.[14]

$$\Delta t = \frac{\Delta s}{\cos\beta} \qquad (18)$$

The solution is to use Opacity Correction[14] to change the opacity so that it matches the traveled distance.

$$\alpha' = 1 - (1 - \alpha)^{\Delta t / \Delta s} \qquad (19)$$

Opacity correction weights the opacity to compensate for the varying traveled distance Δt. In equation 19 we see the corrected opacity α′.

4.7.4 3D-Texture

In OpenGL 1.2, 3D textures were introduced: tri-linearly interpolated textures using three texture coordinates. The texture is defined in texture space, which is easily correlated with object space by setting the texture coordinates u, v, s to the corresponding x, y, z of object space. This transformation between coordinate systems composes the texture projection.

There is still a need for proxy-geometry to project the texture on, but since the hardware does all the resampling, you are not as tied to planes as with 2D textures. Another advantage is that only one copy of the texture is required, instead of three as with 2D textures. The only real drawback in comparison to 2D textures is speed: 3D textures are usually a bit slower, partly because of the tri-linear interpolation.

The simplest proxy-geometry in this case is view-aligned planes. Instead of having object-aligned planes as with 2D texturing, where the slices are fixed against the axes, you have planes aligned to the camera so that you always see them head on. This eliminates many of the artifacts found when rendering 2D-textured slices, but still requires the use of opacity correction.[14]

Figure 3: Correct sampling distance using spherical shells as proxy-geometry.

If instead spherical shells, centered around the camera, are used as proxy-geometry, we get a fixed sampling distance, as can be seen in figure 3. As a result, the need for opacity correction is absent. The spherical shells are, however, more difficult to set up, and depending on the tessellation of the spheres, you end up rendering a lot more geometry.[14]

4.7.5 Raycasting

Raycasting is one of the most straightforward methods of volume rendering. It is the volume equivalent of ray tracing for surfaces, and has been enabled for real-time use by the new programmable pipeline. It has several advantages over texture-based volume rendering, especially for large datasets. Only a small percentage of the voxels will influence the result, which gives great possibilities for optimization. It's generally hard to make those kinds of optimizations with texture-based methods, whereas several are available with raycasting.[14]

Raycasting is a typical image order approach. For each pixel, a ray is cast from the camera point through the image plane and into the scene, as illustrated in figure 4. Along that ray, the volume is sampled to numerically estimate the volume rendering integral[16].

Figure 4: Basic setup for Raycasting

One of the great advantages of raycasting is that it has several available optimization techniques. When enough intensity, near full intensity, has been composited, the ray can be terminated, since further sampling won't affect the result[19]. This is called early ray termination and can provide much acceleration, especially in dense volumes. When evaluating homogeneous regions of the volume, the sampling rate can be decreased, which is called space-leaping. Space-leaping can be implemented in many ways, but all have a common denominator in that some kind of pre-processing of the volume is needed. This complicates implementation but is most often worthwhile.
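A minimal image-order raycaster with front-to-back compositing and early ray termination might look as follows (Python/NumPy sketch, my own illustration rather than the thesis's Samurai renderer; the synthetic volume, toy transfer function, orthographic rays and the 0.95 threshold are all assumptions):

```python
import numpy as np

N = 32
z, y, x = np.meshgrid(*(np.arange(N),) * 3, indexing="ij")
r = np.sqrt((x - N/2)**2 + (y - N/2)**2 + (z - N/2)**2)
volume = np.clip(1.0 - r / (N/2), 0.0, 1.0)       # soft spherical density

def classify(d):                                   # toy transfer function
    return np.array([d, 0.2, 1.0 - d]), d * 0.1   # (rgb, alpha)

image = np.zeros((N, N, 3))
for py in range(N):
    for px in range(N):
        color, alpha_acc = np.zeros(3), 0.0
        for pz in range(N):                        # march the ray front to back
            rgb, a = classify(volume[pz, py, px])
            color += (1.0 - alpha_acc) * a * rgb   # front-to-back compositing
            alpha_acc += (1.0 - alpha_acc) * a
            if alpha_acc > 0.95:                   # early ray termination
                break
        image[py, px] = color
print(image.max())
```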

4.7.6 Volume Splatting

Splatting is another object order volume rendering method, introduced in 1989 by Lee Westover[25], and is therefore not based on the graphics hardware we use today. The method is, however, still widely used in volume rendering applications. It offers good quality rendering at reasonable speed. The basis of the algorithm is that each visible voxel is considered a particle which absorbs and emits light, thus leaving a footprint - a splat - on the image plane. Compositing these splats from back to front gives the approximation of the volume rendering integral. Usually some kind of Gaussian blob is used as the footprint.[17]
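A minimal sketch of the idea (Python/NumPy, my own illustration; the footprint size, sigma and voxel values are assumptions, and depth sorting and perspective are omitted, matching the emission-only simplification used in this project):

```python
import numpy as np

def gaussian_splat(radius=3, sigma=1.5):
    """The footprint: a 2D Gaussian blob of side 2*radius + 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

image = np.zeros((64, 64))
footprint = gaussian_splat()
voxels = [(20, 20, 0.8), (21, 22, 0.5)]        # (row, col, emission) samples
for r, c, e in voxels:
    image[r-3:r+4, c-3:c+4] += e * footprint   # emission only: just accumulate
print(image.sum())
```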

4.7.7 Maximum Intensity Projection

In some medical applications the goal is not to composite the volume's optical properties; instead the meaningful data lies within the maximum sample, so-called Maximum Intensity Projection (MIP). This type of rendering is used with MRI scanners, explained further in section 5.2, because of the noisy nature of those scans. MIP can be implemented using the max blending operator described in equation 3, section 3.3.1. The operator must, however, be used with care, since it calculates values component-wise and can interfere with the transfer function.
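With orthographic rays along one volume axis, MIP reduces to a per-pixel maximum (a minimal NumPy sketch on random test data, not real MRI):

```python
import numpy as np

volume = np.random.rand(32, 64, 64)   # illustrative data, not an MRI scan
mip_image = volume.max(axis=0)        # keep the maximum sample along each ray
print(mip_image.shape)                # (64, 64)
```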


5 Medical Imaging and DICOM

DICOM (Digital Imaging and Communication in Medicine) is the dominating world standard for imaging in medicine. In the 1970s, computers were introduced into radiology with the development of computed tomography (CT), which gives true 3D scans of the body. This development created a demand for a standard concerning imaging in medicine, since information must be accessible to doctors in different locations. At first the scanning hardware produced images in a variety of formats, which became a major problem. As a result, the American College of Radiology, representing the users, and the National Electrical Manufacturers Association, representing hardware manufacturers, formed a joint committee to produce a standard for imaging and communications. The first version of the standard was released in 1985, with improved versions up until the latest one from 2004.[2]

5.1 File format

Figure 5: General structure of the DICOM image format.

A DICOM file generally holds not just image data but also data relevant to the image - for example, which type of device generated the image, various settings for the device, and even patient and doctor names. The general layout of the file format can be seen in figure 5. To give structure, the data is divided into groups based on usage. For example:

Patient Group: Holds patient information, like name and date of birth.

Acquisition Group: Settings relevant to the actual image acquisition.

Image Presentation Group: Image data, bit depth and other attributes relevant to the image itself.


Within the groups, data is stored as data elements. A data element is a container for the data and defines the Value Representation (VR), which identifies the data type of the stored data. The VR can be a simple data type, like an integer, or a more complex data type like Sequences, a recursive data type for hierarchies. A VR is identified by two characters, for example "UL" for Unsigned Long.[2]
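As an illustration of the element layout (a hedged Python sketch, my own and deliberately simplified: it only handles explicit-VR little-endian elements with short-form lengths, skipping implicit VR, the long-form lengths used by VRs such as OB/OW/SQ, and nested sequences):

```python
import struct

def parse_element_header(buf, offset=0):
    """Decode (group, element, VR, length) from one short-form element."""
    group, element = struct.unpack_from("<HH", buf, offset)
    vr = buf[offset + 4:offset + 6].decode("ascii")
    (length,) = struct.unpack_from("<H", buf, offset + 6)
    return (group, element, vr, length), offset + 8

# A hand-built element: tag (0028,0100) "Bits Allocated", VR "US", value 16.
raw = struct.pack("<HH2sH", 0x0028, 0x0100, b"US", 2) + struct.pack("<H", 16)
header, value_offset = parse_element_header(raw)
print(header, struct.unpack_from("<H", raw, value_offset)[0])
```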

5.2 Medical Imaging

3D medical images can be acquired by many means, for example CT, MRIand ultrasound.

Computed Tomography (CT), introduced in the 1970s, is a special type of x-ray imaging where the target is scanned from multiple angles. A computer then processes the information to compute a cross section. Multiple scans can be made to produce several cross sections, resulting in a 3D dataset.[27]

Magnetic Resonance Imaging (MRI) is a newer scanning method which can produce 3D data. The method is very expensive and uses very advanced physics. A powerful magnetic field is applied, aligning the hydrogen nuclei of the target to the field. Some particles with a specific spin can absorb electromagnetic radiation of a certain wavelength under the influence of a magnetic field. Radio waves with the right wavelength for the hydrogen atoms are emitted; this forces the atoms into a non-aligned, high-energy state in which they stay for a short period of time. When they return to their original state they emit the electromagnetic energy, which is detected by the device. Relaxation times for the particles are used to form the final image.[26]

Ultrasound is another great tool for scanning the human body. 2D ultrasound has always had the problem that 3D tissue is experienced in 2D, resulting in a very difficult interpretation step which often requires lots of experience. 3D ultrasound is a way to address that and several other problems with conventional 2D ultrasound. Extending ultrasound to 3D, and even 4D (moving pictures), was enabled by the high frame rate (10-60 images per second) of conventional ultrasound.[8]

6 3D Displaying

The principle of 3D display systems is very old, dating back to the early 1830s when Sir Charles Wheatstone discovered stereoscopy. Wheatstone used handmade drawings as well as photographs to give each eye a separate, though static, image in his stereoscope. Since that time, most of the effort has been spent on improving two-view stereo, but since the 1980s the research has widened to include several other 3D display types.


6.1 Depth Cues

Our ability to estimate depth is very important for many of the seemingly basic tasks that we take upon ourselves - like driving or even catching a ball.

There are several cues that we use to estimate depth, and they sort under two categories: physiological and psychological. More usable cues result in a better estimation of depth, but conflicting depth information can be very misleading, especially if one of the more powerful cues is involved.

6.1.1 Physiological

The physiological category is for cues involving physical functions, as the name implies. The main feature is often called stereo vision, which means that we use our two eyes to gain a sense of depth. The actual cue is called binocular disparity, which is the difference between the images seen by the left and right eye.

Our eyes also enable two other cues, namely convergence and accommodation. Accommodation is when the eye lens changes thickness, thus changing the focal length of the eye. The focus, along with the focal distance, provides us with depth information. The convergence cue is similar to the accommodation cue, but here it is the inward rotation of the eyes that is the actual cue - again, a cue that tells us something about the distance of the object on which we focus our vision.

The last cue is motion parallax, which means that we get a different view when we move - either our entire body, when walking forward or sideways, or just through minor movements of the head. How much the view changes, or precisely how much we can look around an object, also gives us depth information.

6.1.2 Psychological

The psychological cues, also as the name implies, are depth cues that are interpretations and guesses made by our brain based on previous experience. It is these cues that enable us to look at a 2D image and still see depth.

The first one that comes to mind is linear perspective - the fact that distant objects look smaller than near objects. In graphics we use it in the perspective projection defined in equation 20. This cue goes hand in hand with another, namely retinal image size. If we see two objects with the same size, but we know that one generally is bigger, we assume, because of linear perspective, that that object is further away.

$$x = \frac{X}{Z}, \qquad y = \frac{Y}{Z} \qquad (20)$$


Another psychological cue is lighting and shadows, which also give hints about distances. With lighting we can estimate the distance from the light source based on fall-off.

Interposition is yet another very important cue. Different objects occluding one another gives us a rough estimate of depth, or at least the depth ordering, which is quite often enough.

Furthermore, there are aerial perspective, texture gradient and color. Aerial perspective is based on the fact that distant objects look less sharp as a result of haze in the atmosphere. Really distant objects also tend to look slightly blue, since the shorter wavelengths penetrate the atmosphere more easily.

Texture gradient is similar, but with texture. Due to our limited vision we can't perceive texture on distant objects, and so they look more diffuse.

We also interpret depth based on color. Bright colors make an object look closer than when the object is dark. Also, some color gradients have been found to relay depth information[24].

6.2 3D Display Device Technology

To enable basic 3D displaying we must account for the binocular disparity - that is, produce different views for the different eyes. Displays of this sort are called stereo pair displays. Stereo pair displays can only give one view of the scene at a time. To produce motion parallax, a head-tracking solution is often applied, limiting the display to one user. A device of some sort is also used to separate the views for each eye, for example the very commonly used glasses masking out different colors for each eye. There are however techniques that don't require special equipment to separate views. Those systems are autostereoscopic. [18]

It's quite common in 3D display solutions that the right and left view aren't completely separated, which is to say that the right view is slightly visible in the left view and vice versa. This is called cross talk and can result in a blurred image as well as seeing double, so-called ghosting.

Systems that give binocular disparity as well as other cues like convergence and accommodation are sorted under the name holographic or volumetric. Volumetric displays are displays that create volume-filling images, for example by scattering light with heat or smoke [9]. [18]

6.2.1 Field sequential separation

In field sequential separation the display alternates between the left and right view. A blocking mechanism prevents the eyes from seeing the wrong view. The blocking mechanism is often viewing glasses, which can be divided into active and passive groups. The passive glasses are composed of differently polarized glass for each eye. A shutter in front of the display polarizes the light to create the separation. With active glasses the shutter is placed on the glasses at each eye. This requires synchronization with the display but offers better efficiency. With field sequential techniques the refresh rate of the screen must be more or less doubled to avoid flicker. Passive systems are a bit less sensitive, and can go as low as 90 Hz without serious flicker. [18]

6.2.2 Time parallel separation

In time parallel techniques the image is presented to both eyes at the same time, hence the name time parallel. To separate the views, different filters are applied. For movies the anaglyph method is often used, where the glasses have red and green filters. This method allows for simple, inexpensive glasses, but it's very hard to get sufficient separation, with ghosting as a result. Along with ghosting come problems with headaches and nausea. [18]

6.2.3 Autostereoscopic displays

As mentioned before, an autostereoscopic display provides left-right eye separation without viewing glasses. There are many different approaches to enable autostereoscopic displaying, but the simplest uses our own ability to separate views - free viewing. This is how we can see 3D in printed dot autostereograms after a while of unfocused viewing. Free viewing is however not used for display systems. [18]

Figure 6: Parallax Barrier system principle.

Parallax barriers are a method more useful in the field of displays. The basic principle can be seen in figure 6. The system is composed of a number of very thin slits in an opaque medium. The screen space is divided into pieces to fit the slits, so that the correct region is occluded for each eye. The problem with this system is that it's very view-dependent, but it works well for situations where you can make assumptions about the placement of the viewer. A similar approach is found in lenticular sheets, where tiny lenses are used, occluding the right regions by refraction. These systems are generally better than parallax barrier systems since they don't absorb as much light, and as a result no back-lighting is required. Both techniques can provide enough views to enable motion parallax. [18]

6.3 Setred Holoform Display

The Setred display, seen in figure 7, is based on a more advanced form of parallax barrier system, often referred to as a Moving/Scanning Slit Parallax Barrier System. Instead of having static slits, controllable slits are used, which allows them to be opened one at a time. This constitutes the shutter, which is placed in front of a regular display device like a CRT or, in this case, a projector system [18]. The shutter has to be synchronized with the display device. The shutter is not mechanical but rather composed of liquid crystal columns, similar to ordinary displays.

Figure 7: Model of the Setred Display System.

6.3.1 Repeating viewing regions

With these kinds of 3D systems, bandwidth can be traded against look-around. This is done by keeping more than one slit open, evenly spaced across the screen. What would with one slit be a separate image is now an image shared with a number of other slits, hence the reduction of bandwidth as well as rendering power requirements. Opening slits in this fashion will create viewing regions as shown in figure 8.


Figure 8: Multiple repeated viewing regions produced by several opened slits.

Part III

Implementation

7 Samurai 3D Volume Renderer for 3D Display

The application targeting the capabilities discussed earlier was somewhere along the road named Samurai. Samurai was developed as a testing platform rather than as a medical diagnostics tool. The design is focused on flexibility, to avoid being cornered by the interface, all four volume rendering algorithms being very different from each other. This design has naturally taken its toll on the re-usage of code, even if in retrospect only parts of the texture loading mechanism could have been generalized. The integration with the Setred display went very smoothly. There were surprisingly few pitfalls when rendering multiple views for the display.

7.1 Input

The DICOM input module handles three types of images - 8bit and 16bit gray-scale as well as 24bit color. Compression extensions are not supported. The goal for the input module was to read the entire contents, not skipping any groups or even single data elements. This approach makes the module incompatible with certain files. Fixing incompatibilities has however mostly proved to be an easy task, only requiring small changes.

Samurai incorporates an image processor to be able to pre-process image series before rendering. This is necessary since image series taken directly from scanning hardware do not always have appropriate properties for rendering. The image processor only handles the two gray-scale formats for most operations. This is because the rendering interface shares the same limitation, only handling gray-scale data.

7.2 Rendering

The rendering is defined through a lightweight interface which must be redefined for each renderer. The renderer does not handle movement or rotation, nor some other settings like blending mode. The renderer can however override all previously applied settings. The rendering interface exports settings as C# properties which the renderer must implement. The renderer can, besides the obligatory ones, export its own properties, which also are picked up by the property grid in the rendering form. The renderer receives either just the volume data, or the volume data together with a transfer function. Volume data is passed as a DICOM image object, which treats the volume as a single byte-array, leaving it to the renderer to interpret the data into voxels. For 16bit volumes the renderer must simply convert the data into a short-array. Volumes can come in many sizes, even if power-of-two dimensions are quite common. Many datasets have power-of-two dimensions for two axes, often with the depth dimension being non-power-of-two. Not all modern graphics cards, or rather the drivers, support non-power-of-two textures. For systems lacking that capability, the 3D Texture and Raycast renderers are limited to volumes with power-of-two dimensions. For the 2D Texture renderer the limitation does not include the depth dimension, whereas the Splatting renderer is not affected at all.
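
To illustrate the interpretation step, here is a minimal C# sketch of converting the raw byte-array into 16-bit voxels and testing for power-of-two dimensions. The class and method names are hypothetical, not Samurai's actual interface, and the conversion assumes little-endian sample packing:

using System;

static class VoxelData
{
    // Reinterpret a raw DICOM pixel buffer as 16-bit voxels.
    // Assumes little-endian sample packing and an even byte count;
    // a real reader must honor the transfer syntax of the file.
    public static short[] ToShortArray(byte[] raw)
    {
        short[] voxels = new short[raw.Length / 2];
        Buffer.BlockCopy(raw, 0, voxels, 0, raw.Length);
        return voxels;
    }

    // Power-of-two test used to decide whether a volume can be
    // loaded on hardware lacking non-power-of-two texture support.
    public static bool IsPowerOfTwo(int n)
    {
        return n > 0 && (n & (n - 1)) == 0;
    }
}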

The transfer function is passed to the renderer as a bitmap. Throughout the classification pipeline, preparations to handle future 2D transfer functions have been made. This is estimated to result in only a minor loss of performance, if any at all.

All renderers share the same basic user interface, which is described in section 7.3.2.

7.2.1 2D Texture

This is the renderer which was implemented first and is also considered to be the main renderer of Samurai. 2D texturing offers excellent speed, which is important in 3D displaying since multiple views must be rendered for every frame. With the bonsai sample dataset the renderer performs 10-40 fps on a 2.2 GHz system equipped with a GeForce FX 6800 graphics card. With the default number of views, which is 16, the frame-rate would come to 0.6-2.5 fps, which is rather poor for interactive use. This demonstrates the tremendous rendering power required for 3D displaying, and also how essential speed is for these kinds of applications. The Samurai renderer has not been focused on optimization, probably leaving much performance yet to gain.


To enable full rotation when using 2D Textures, three separate stacks of textures must be loaded, one for each major axis. In Samurai only two axis-stacks are loaded, for the x- and z-axes, which enables rotation around the y-axis. Switching artifacts, when changing stack, are clearly noticeable but not severe. Settings can be tweaked to minimize those artifacts, but eliminating them completely is not possible. Because of the memory overhead of this method, all data is down-sampled to 8 bits per pixel, where medical data often is defined with 16 bits per pixel.

Despite limiting the renderer to two stacks as well as down-sampling voxels, some large sets still don't fit into the graphics card memory, with a terrible performance hit as a result. Some datasets are as large as 512×512×512, which comes to 128MB per stack, totaling 256MB for the entire set. The solution to this problem lies in compression techniques. The texture stacks in Samurai are compressed using the OpenGL ARB (Architecture Review Board, founded in 1992 to govern the OpenGL specification) texture compression extension. Which compression technique is actually used is vendor specific and is not regulated by the extension. At the moment, the dominating two are S3TC and FXT1, described in more detail in section 3.5. The compression ratio lies between 4x and 8x, bringing the total space requirement down to 32MB-64MB, which is low enough to fit into the memory of most modern graphics cards. There are two downsides to this application of compression. For volumes that would fit into memory uncompressed, the compression itself becomes a performance hit due to the additional block decoding step. Secondly, texture compression techniques available today are lossy and reduce image quality.

The quality of 2D Texture volume rendering is very poor if the volume is reconstructed using the original sampling rate, meaning rendering one slice for each scanned slice. To overcome this, slice interpolation can be used, enabling a freely selectable sampling rate for reconstruction. Before applying this step, only texturing and blending features from OpenGL 1.1 were used, not counting texture compression, that being only an optional optimization. Although performance would make it unbearable, the renderer could run on very old graphics hardware. For really small datasets a renderer could potentially even be implemented on a mobile phone platform, although it would not be very useful. With slice interpolation we have to elevate the requirements to the level of the recently available programmable pipeline; a simple shader is used as well as multi-texturing. The actual implemented renderer uses the OpenGL shading language, and associated extensions, available in the OpenGL 2.0 specification. This is an unnecessarily high requirement, since vendor specific shading extensions have been available much longer, and the complexity of the shader is very low.
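
As a hint of how simple that shader is, the following CPU-side C# reference performs the same linear blend between two adjacent slices that the shader computes per fragment. It is a hypothetical helper, not the actual shader code:

static class SliceInterpolation
{
    // CPU reference for the inter-slice blend: a linear mix of two
    // adjacent slices at parameter t in [0, 1], the same operation
    // the fragment shader performs with two bound 2D textures.
    public static byte[] Interpolate(byte[] sliceA, byte[] sliceB, float t)
    {
        byte[] result = new byte[sliceA.Length];
        for (int i = 0; i < sliceA.Length; i++)
            result[i] = (byte)(sliceA[i] * (1.0f - t) + sliceB[i] * t);
        return result;
    }
}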

Opacity Correction is a feature which the Samurai implementation lacks. Visual artifacts due to the absence of opacity correction are not obvious to the naked eye. For diagnostic implementations, where the importance of a correctly displayed volume is much greater, opacity correction would be a natural feature to implement. In Samurai, implementing opacity correction would only result in a few extra lines in the shader. The feature is left unimplemented as a way of examining the gain of the spherical shell proxy geometry, used in the 3D Texture renderer, without having to implement other proxy geometry for that renderer.

7.2.2 3D Texture

The 3D Texture implementation is somewhat slower than the 2D Texture renderer, performing 5-25 fps with the same settings as described in section 7.2.1.

Proxy geometry consists of spherical shells, with selectable tessellation, removing any need for opacity correction. For the geometry, GPU vertex buffering features aren't used, which makes rendering performance very dependent on tessellation. This is due to the fact that the texture coordinate system must be correlated to the vertex coordinate system, which rules out using regular scaling as a means of generating the multiple spherical shells. To render the shells, which are centered around the camera, the camera coordinates relative to the volume must be calculated. This is done using the model-view matrix M, or rather its inverse.

Pos_cam = (0, 0, 0, 1) · M⁻¹    (21)

Shells are rendered and composited in back-to-front order, and geometry outside the volume is clipped using OpenGL user clip-planes. Distributing shells evenly over the distance between the eye and the maximum depth would lead to an unnecessarily high percentage of clipped spheres. Concentrating geometry to the volume region involves calculating at what depth the volume region begins and ends. The renderer does this using a simple bounding sphere model, which is easy to calculate the distance to.
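
A sketch of equation 21 in C#. The helper is hypothetical; the matrix is assumed row-major with the point treated as a row vector, so the convention must be adapted to how the model-view matrix is actually stored:

static class ShellSetup
{
    // Transform a homogeneous point p by a 4x4 matrix m (row-major,
    // p treated as a row vector), as in equation 21.
    public static float[] Transform(float[] p, float[,] m)
    {
        float[] r = new float[4];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                r[col] += p[row] * m[row, col];
        return r;
    }
}

The camera position in volume space is then Transform(new float[] { 0, 0, 0, 1 }, inverseModelView), whose first three components feed the distance calculation against the bounding sphere.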

7.2.3 Splatting

The Splatting renderer is based on the simplified form of splatting. Each splat is represented by a Gaussian blob loaded as a texture. Instead of manually projecting each splat onto the image plane, the ordinary graphics pipeline is used, with the support of hardware point sprites.

Point Sprites are vertices representing single 3D points in space. For each point a textured square, aligned perpendicular to the camera, is rendered. Point Sprites do not, like regular geometry, scale according to the perspective. The size is instead defined in pixels on the image plane. Perspective-projected point sprites can however be achieved by using a simple shader. This works as long as the camera doesn't come too near, since drivers and hardware limit the maximum size of the sprite. Samurai employs this scaling technique in its rendering. The size is transformed to be dependent on viewing depth, but not necessarily correlated to real-world perspective. This point sprite projection technique is acceptable as long as voxels share the same measurements on all axes. This is fairly common but still imposes a heavy constraint. At least the power-of-two limitation is absent, since the volume isn't stored as a texture.
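
A sketch of the depth-dependent size computation such a shader could perform. The formula is a standard perspective projection and an assumption on my part, not Samurai's exact scaling:

using System;

static class PointSpriteSize
{
    // Perspective-correct sprite size in pixels for a splat with a
    // given world-space radius at a given eye-space depth.
    // fovY is the vertical field of view in radians.
    public static float SizeInPixels(float worldRadius, float eyeDepth,
                                     float viewportHeight, float fovY)
    {
        // Pixels per world unit at this depth under perspective.
        float pixelsPerUnit =
            viewportHeight / (2.0f * eyeDepth * (float)Math.Tan(fovY / 2.0f));
        return 2.0f * worldRadius * pixelsPerUnit; // sprite diameter
    }
}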

The Samurai Splatting renderer is implemented with the entire volume placed inside a vertex buffer object as coordinates and color. Every voxel occupies 16 bytes - four 32bit floating point values for position plus a 32bit color. This is a very inefficient way of storing the volume, since the coordinates, which account for most of the space, can be predicted from the order of the voxels. A large dataset of 512x512x512 would take 2GB, which is far more than even the best graphics cards can handle today. Naturally, data can be streamed across the bus to allow for these large datasets, but that would affect performance severely. The realistic maximum dataset size is thus somewhere between 128x128x128, resulting in a 32MB vertex buffer, and 256x256x256, resulting in a 256MB vertex buffer.

The voxels should be projected as splats on the image plane from back to front. This requires depth-sorting of the voxels. The Samurai implementation is experimental and implemented for the purpose of investigating the rendering method closer. Because of this, the implementation takes a short-cut which eliminates the need for back-to-front compositing. This is done through applying an emission-only optical model (see section 4.3.2, equation 7), which only adds to the intensity along the ray, making it independent of compositing order. As described in section 4.3.2, the emission-only model simulates light-emitting gases. This is quite different from the medical datasets, which are heavily dependent on absorption, showing the importance of choosing the appropriate optical model.

The speed of the Splatting renderer is horrific (several seconds per frame) if all voxels are rendered. To get around this, only voxels with intensity greater than a set threshold value are rendered. The threshold value doesn't need to be high, since datasets often have a low percentage of opaque voxels. The default threshold of 0.2 (with values between 0 and 1) results in 4% used voxels for the bonsai dataset.
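
A sketch of the thresholding step that builds the point list. The structure is hypothetical; color packing is omitted, and the 16-byte layout described above is reduced to positions:

using System.Collections.Generic;

static class SplatBuffer
{
    // Build vertex buffer contents: keep only voxels above the
    // intensity threshold, storing position (x, y, z, 1) as floats.
    public static List<float> BuildPoints(byte[] volume,
        int dimX, int dimY, int dimZ, float threshold)
    {
        var points = new List<float>();
        for (int z = 0; z < dimZ; z++)
            for (int y = 0; y < dimY; y++)
                for (int x = 0; x < dimX; x++)
                {
                    float intensity = volume[(z * dimY + y) * dimX + x] / 255.0f;
                    if (intensity > threshold)
                    {
                        points.Add(x); points.Add(y);
                        points.Add(z); points.Add(1.0f);
                    }
                }
        return points;
    }
}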

7.2.4 Raycasting

Raycasting is the method which has most recently been enabled for real-time use, with the introduction of advanced shading hardware. It is considered by many to be the most promising, since shader performance is bound to increase and it only makes the necessary amount of expensive texture lookups. The raycasting in Samurai is implemented using the simplest model. The only implemented optimization is early ray termination.

Figure 9: Color cubes representing exit points on the left, using front face culling, and entry points on the right, using back face culling.

The first step toward raycasting is the ray setup procedure. Each pixel in the final image will result in a ray cast through the scene, and those ray directions have to be calculated. This is done by rendering the bounding box of the volume with color components correlated to world coordinates. Rendering using back-face culling will produce the ray entry coordinates, and rendering with front-face culling will give the exit coordinates, as shown in figure 9. The line along which to evaluate the integral will of course be between those two points. As calculations are done using the rendering pipeline, render-to-texture techniques are used to save the result in a usable format. There is a wide range of render-to-texture methods in OpenGL, with the most attractive one being the latest framebuffer objects extension. The Samurai implementation however uses the much slower method of rendering into the ordinary framebuffer, and then using the copyTexImage2D method to transfer the framebuffer data to a texture. It is also in this step that the most significant disadvantage of this implementation lies. The framebuffer is set for 8bit precision per color component. Color values represent coordinate values in the world - one component for each of the three axes. The ray-direction coordinate precision is thus a mere 8 bits. This limitation affects the resolution of the rendering, since only 256 distinct ray directions can be formed for each axis, resulting in a maximum effective resolution of 256x256, which for other than evaluation purposes is unacceptable.

Using framebuffer objects would simplify the matter of rendering into a real 16 or 32bit per color component texture. There are ways of increasing the precision of the framebuffer as well, but since this is not the superior method in terms of performance, effort should rather be aimed toward the framebuffer object solution. A third alternative would be to use the fourth color component to obtain 10bit precision, resulting in a possible resolution of 1024x1024.
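
To make the precision problem concrete, here is a hypothetical CPU-side C# reference of the ray setup decode; each 8-bit channel can only represent 256 distinct coordinate values:

using System;

static class RaySetup
{
    // Decode a world position from the 8-bit color components of
    // the rendered bounding box; each channel holds only 256
    // distinct values, which is the precision bottleneck above.
    public static float[] Decode(byte r, byte g, byte b)
    {
        return new float[] { r / 255.0f, g / 255.0f, b / 255.0f };
    }

    // Normalized ray direction between decoded entry and exit points.
    public static float[] Direction(float[] entry, float[] exit)
    {
        float[] d = new float[3];
        float lengthSq = 0.0f;
        for (int i = 0; i < 3; i++)
        {
            d[i] = exit[i] - entry[i];
            lengthSq += d[i] * d[i];
        }
        float length = (float)Math.Sqrt(lengthSq);
        if (length > 0.0f)
            for (int i = 0; i < 3; i++)
                d[i] /= length;
        return d;
    }
}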


This entire step is basically one example of the many uses of HDR (High Dynamic Range) rendering. The third alternative is the integer HDR rendering format 10-10-10-2. Although the shader itself requires the latest generation of graphics hardware, the high-precision pipeline (HDR) is also reserved for later models, adding yet another constraint to this algorithm.

The next step in the rendering process is the actual evaluation of the volume rendering integral. This is done using a fragment shader, which in its present form requires the latest generation of graphics cards - those capable of dynamic branching, meaning having access to while, for and if clauses. Raycasting is possible without dynamic branching by using a multi-pass approach. The single-pass variant is however much less complex, making it preferable when targeting new hardware.

The basic structure of the algorithm looks like this:

• While inside the volume & accumulated intensity < threshold value
  – Perform volume texture lookup
  – Apply classification
  – Composite value
  – Update position

The second constraint in the while clause is the early ray termination. If we have composited enough intensity we can abort further calculations.

A sample is looked up in the trilinearly interpolated 3D texture. That value goes through classification, covered more closely in section 7.2.5. After classification the value has to be added to the final result. In raycasting either front-to-back or back-to-front rendering is possible. Only the compositing step has to be modified - back-to-front compositing is shown in equation 22 and front-to-back compositing in equation 23. The equations use the same semantics as in section 3.3.1. Samurai is implemented using front-to-back compositing, being more easily optimized.

C_dst = (1 − α_src) · C_dst + C_src    (22)

C_dst = C_dst + (1 − α_dst) · C_src    (23)
α_dst = α_dst + (1 − α_dst) · α_src

Updating the position is an important step for space-leaping optimizations like adaptive sampling. The step taken defines the sampling rate of the reconstruction. Samurai is implemented with fixed-step sampling, as good space-leaping techniques are generally very complex. An important notice here is that although loops are allowed, the shader has a maximum value in terms of execution time, or rather executed operations. This limit is easily overstepped with too high a sampling rate, resulting in dropped fragments. This is noticed as thick regions of the volume turning black with very abrupt boundaries. Of course, the information about the maximum number of operations is available, and a maximum sampling rate can thus be calculated. This limit however makes adaptive sampling even more important, not only for performance but for quality. With a sampling limit, the rendering benefits from placing sampling points where they matter the most.
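
A minimal CPU-side C# sketch of the single-pass loop, with front-to-back compositing (equation 23) and early ray termination. It is a hypothetical reference rather than the actual shader: color is reduced to one channel and the delegate names are made up:

using System;

static class Raycaster
{
    // Minimal single-pass raycast: march from the entry point along
    // the ray, sample, classify, composite front-to-back (eq. 23)
    // and terminate early once nearly opaque.
    public static float CastRay(
        Func<float[], float> sampleVolume,  // trilinear 3D texture lookup
        Func<float, float[]> classify,      // intensity -> { color, alpha }
        float[] entry, float[] dir, float stepSize, float maxDistance)
    {
        float colorDst = 0.0f, alphaDst = 0.0f;
        float[] pos = (float[])entry.Clone();
        // 0.95 is an illustrative early-ray-termination threshold.
        for (float t = 0.0f; t < maxDistance && alphaDst < 0.95f; t += stepSize)
        {
            float intensity = sampleVolume(pos);    // texture lookup
            float[] ca = classify(intensity);       // classification
            colorDst += (1.0f - alphaDst) * ca[0];  // equation 23
            alphaDst += (1.0f - alphaDst) * ca[1];
            for (int i = 0; i < 3; i++)             // update position
                pos[i] += dir[i] * stepSize;
        }
        return colorDst;
    }
}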

7.2.5 Classification

Samurai uses a 1D transfer function for classification - basically an interpolated lookup-table. The lookup-table can either be loaded from an image file or generated using the Color Table Form, described in more detail in section 7.3.3. The transfer function maps monochrome intensity to color value. Late in the development, corresponding transparency weights were added to the lookup procedure, though only implemented as a test in the 3D Texture renderer. The lookup table is generated before rendering, making it non-interactive, which of course is not preferable. There is however a second, simpler step in Samurai's classification pipeline - Scale'n Bias. Values, either before lookup or after, can be scaled and biased in this fashion:

V_post = Bias + (Scale · V_pre)

This equation can be applied to both the color value and the transparency, depending on the shader setup, described in more detail in section 7.3.2. These two values can be changed in real-time, giving some basic control over classification. This is not a sufficient tool for feature classification, but it works well for adjusting the volume for acceptable viewing. The Scale'n Bias and transfer function classification steps are both implemented as post-classification in all renderers, to avoid pre-classification artifacts.
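
A sketch of this post-classification path in C#. The names are hypothetical; the table lookup is the interpolated 1D transfer function, and Scale'n Bias is the equation above:

static class Classification
{
    // Scale'n Bias: V_post = Bias + (Scale * V_pre).
    public static float ScaleAndBias(float v, float scale, float bias)
    {
        return bias + scale * v;
    }

    // Interpolated 1D lookup-table: maps an intensity in [0, 1]
    // to the stored transfer function value.
    public static float Lookup(float[] table, float intensity)
    {
        float x = intensity * (table.Length - 1);
        int i = (int)x;
        if (i >= table.Length - 1)
            return table[table.Length - 1];
        float frac = x - i;
        return table[i] * (1.0f - frac) + table[i + 1] * frac;
    }
}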

The 3D Texture renderer is, as the only renderer, equipped with a transparency transfer function. This is useful for extinguishing features that occlude more interesting ones.

7.3 GUI

The Samurai user interface has two major forms. The main form, seen in figure 10, is where data is loaded and some information and configuration is presented. From there the rendering is started, displaying the modal rendering form seen in figure 11.


Figure 10: Application user interface.

7.3.1 Main Form

The main form is centred around the loaded image series, occupying most of the form. To navigate along the stack of 2D images, a slider is provided at the bottom of the screen. There exists no explicit label to tell which frame is viewed. The entire image viewing capability is just for browsing through the dataset, to see that it loaded correctly, as well as to apply contrast. Contrast is regulated with its own slider to the lower left. Changing the slider will result in a preview for the currently visible frame. To apply the contrast settings on the whole dataset, the "apply" button is supplied. This operation takes a bit of time, during which a modal progress-bar form is shown (modal forms prevent interaction with other forms until closed or hidden). An auto-contrast feature is available, scaling values over the entire range.

To the right a property grid is located, which displays different properties for the currently loaded image. Property grids support changing values, but all properties in this view are read-only.

All other features are located inside the menus:

• File
  – Open DICOM
  – Open RAW - to open RAW volume data
  – Save - save the current dataset as a DICOM image
  – Last - sub-menu with references to the last five opened DICOM files
  – Exit


• Rendering
  – 2D Texture Volume Rendering
  – 3D Texture Volume Rendering
  – Volume Splatting
  – Volume Raycasting
  – Load Color Lookup - loads a lookup table from a file
  – Disable Color Lookup
  – Create lookup table - opens the Color Table Form, see section 7.3.3

• Processing (not covered)

• Help

7.3.2 Rendering Form

Figure 11: User interface when rendering

By choosing a renderer from the Rendering menu in the main form, the modal Rendering Form is activated. When rendering there are many possible settings to vary. All can be seen and changed from the property grid located to the far right. Some settings are difficult to control using real values, and for those settings various sliders are provided. Some settings not dependent on the renderer are also located outside the property grid, for example the near/far clipping plane settings, seen to the left.

The subdivision controls are used exclusively for the slice-based renderers. This is a shortcut provided since that is the main renderer of Samurai. For all other renderers that section becomes inactive.


Another important setting is Scale'n Bias, seen on the left side of the property grid. Combined with shader selection, either further down or in the property grid, these sliders control the classification procedure described in section 7.2.5. The sliders provide a suitable range for the two variables, making it easier to set sensible values.

Navigation is done mostly using the mouse. Holding the left mouse button while dragging orbits the camera. Dragging while holding the middle mouse button moves the camera sideways, and finally, dragging while holding the right mouse button moves the camera forward and backward.

7.3.3 Color Table Form

Figure 12: User interface for transfer function editing.

The Color Table Form, seen in figure 12, is used to create color transfer functions. Intensity is looked up starting from the left. The table is controlled through keys, which are marked with white triangles at the bottom of the color table panel. The currently selected key is colored dark green. New keys are added through clicking on the color table panel at the location where the new key should be inserted. The keys can also be moved by dragging the triangles to the right location. With the "Set Color" button the color value for the selected key is changed through a standard Windows color selection dialog.

To get some idea of the final result, a preview is provided in the lower right corner. This view shows the transfer function applied to a slice of the real image. The slider to the left is used to change which frame to preview.

The "Save" button allows saving the lookup table to disk as an image file, as hitting apply only loads the table internally.

7.4 Display integration

The display requires multiple views to be rendered, together with a modified projection matrix for each view. Before clearing the framebuffer to render the next view or frame, the image is copied to another memory location, piling up a stack of images depending on the number of views. The modification of the matrix and the copying of images is done by the Setred library. Samurai has to invoke certain methods in the rendering process, and provide a loop which renders the same number of frames as views.
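
A minimal sketch of what such a render loop can look like. The delegates stand in for the Setred library calls and the renderer; all names are placeholders rather than the actual API:

using System;

static class MultiViewLoop
{
    // Render one frame for an n-view display: each pass applies a
    // view-specific projection, renders the scene, and stores the
    // result before the framebuffer is cleared for the next view.
    public static void RenderFrame(int viewCount,
        Action<int> applyViewProjection, // placeholder for the Setred call
        Action renderScene,              // the ordinary volume rendering pass
        Action<int> storeView)           // copy framebuffer to the view stack
    {
        for (int view = 0; view < viewCount; view++)
        {
            applyViewProjection(view);
            renderScene();
            storeView(view);
        }
    }
}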

7.5 Testing on display

During this thesis work the prototype for the display was located in Cambridge, England. In September, during a short two-day visit, an opportunity to see Samurai on the actual display arose. At the time, only the 2D texture renderer, and partly the 3D texture renderer, were implemented. It also gave a chance to get some point of reference for the development of the Display Simulator. The simulated result must be fairly consistent with the output from the actual display. Correlating results provided a good base for debugging both the mathematical model and the implementation.

8 Display Simulator

The Display Simulator takes a set of images rendered for the display, for example by Samurai, and simulates the resulting image on the screen for one eye. This simplified simulation provides some hint of the characteristics of the display, as well as illustrating the basic working model. The simple user interface can be seen in figure 13.

Figure 13: Display Simulator user interface.

8.1 Model

The image plane, onto which the actual image is projected, lies at x = 0 and stretches over the interval y = [0, 1]. The shutter with all the slits is located at a distance of D_shutter from the image plane along the x-axis, with the same length as the image plane. The model includes the possibility of having a distance between slits, even if that distance can be considered to be D_slit = 0. The eye is defined at E = (e_x, e_y). The width W_slit of a slit is defined through equation 24, where n is the number of slits. An overview is presented in figure 14.

Figure 14: Model representation of the display in the Display Simulator software.

W_slit = (1 − (n + 1) · D_slit) / n    (24)

P1_{i,x} = D_shutter,  P1_{i,y} = (i + 1) · D_slit + i · W_slit    (25)

P2_{i,x} = D_shutter,  P2_{i,y} = (i + 1) · (D_slit + W_slit)    (26)

The part of the image plane visible through slit i is defined by where the lines between the eye and the two points defining the slit intersect the image plane at x = 0. A line passing through the points (x_1, y_1) and (x_2, y_2) can be written as in equation 27.

y − y_1 = ((y_2 − y_1) / (x_2 − x_1)) · (x − x_1)    (27)

We want to calculate what y is for x = 0, using the line between E and P1_i as well as between E and P2_i. Substituting the values from equations 25 and 26 into equation 27 results in equations 28 and 29, which produce the final values.


y_{i,p1} − e_y = ((((i + 1) · D_slit + i · W_slit) − e_y) / (D_shutter − e_x)) · (−e_x)    (28)

y_{i,p2} − e_y = ((((i + 1) · (D_slit + W_slit)) − e_y) / (D_shutter − e_x)) · (−e_x)    (29)

The region between y_{i,p1} and y_{i,p2} is the only visible part of the image plane at the moment when the corresponding slit i is open. This is also the region onto which to project the rendered view i in the simulation. The values can naturally fall outside the image plane, and if an entire slit misses that area it is skipped. If only one of the points misses, it is cropped to either 0 or 1, depending on which side it is on.

The final algorithm looks like this:

• for each slit i
  – calculate y_{i,p1} and y_{i,p2}
  – check the values and proceed, crop values or skip the entire slit
  – project view i onto the region between y_{i,p1} and y_{i,p2}

For multiple n_reg viewing regions, using Slice'n Diced images, the last step is changed into this (a sketch of the per-slit computation follows after the list):

• for each slit i
  – calculate y_{i,p1} and y_{i,p2}
  – check the values and proceed, crop values or skip the entire slit
  – project view i (mod n_reg) onto the region between y_{i,p1} and y_{i,p2}
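
As mentioned above, here is a sketch of the per-slit computation from equations 24-29 in C#. The names are hypothetical; the method returns the cropped visible interval on the image plane, or null when the slit should be skipped:

using System;

static class SlitProjection
{
    // Visible interval [y1, y2] on the image plane (x = 0) through
    // slit i for an eye at (ex, ey); equations 24-29.
    public static float[] VisibleRegion(int i, int slitCount,
        float dSlit, float dShutter, float ex, float ey)
    {
        float wSlit = (1.0f - (slitCount + 1) * dSlit) / slitCount; // eq. 24
        float p1y = (i + 1) * dSlit + i * wSlit;                    // eq. 25
        float p2y = (i + 1) * (dSlit + wSlit);                      // eq. 26
        float y1 = ey + (p1y - ey) / (dShutter - ex) * (-ex);       // eq. 28
        float y2 = ey + (p2y - ey) / (dShutter - ex) * (-ex);       // eq. 29
        if (y2 < y1) { float tmp = y1; y1 = y2; y2 = tmp; }
        if (y2 <= 0.0f || y1 >= 1.0f)
            return null;                                            // slit skipped
        return new float[] { Math.Max(0.0f, y1), Math.Min(1.0f, y2) }; // cropped
    }
}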

8.2 Views

Two viewing regions compose the central parts of the simulator, as seen in figure 13: the larger main view to the left, and the smaller view to the right, meant for a top view.

Besides the top-view for navigation, two viewing types are implemented for use in the main viewing region.

8.2.1 Orthographic Image Plane View

This view type provides a view of the perceived image plane from the current view position, which is useful when evaluating sampling artifacts like tearing. It basically shows the image on the image plane from the angle of the eye, but without applying that exact transformation.


8.2.2 Perspective View

The perspective view provides an experience of how the display would be perceived in reality when moving around. The difference from the orthographic view is that the correct transformation according to the perspective is used.

9 Example Images from Volume Renderer

Here follow example images produced by the four volume renderers. All datasets are well known within the volume rendering community.

Figure 15: Engine sample dataset. Upper left corner: 2D Texture. Upper right corner: 3D Texture. Lower left corner: Raycasting. Lower right corner: Splatting.


Figure 16: Flower seed sample dataset. Upper left corner: 2D Texture. Upper right corner: 3D Texture. Lower left corner: Raycasting. Lower right corner: Splatting.

Part IV

Discussion

9.1 Future work

Even if Samurai touches most parts of volume rendering, it still merely scratches the surface. The entire testing platform, with the display simulator, has yet more future development ahead of it.


Figure 17: Human head MRI dataset. Upper left corner: 2D Texture. Upper right corner: 3D Texture. Lower left corner: Raycasting. Lower right corner: Splatting.

9.1.1 Samurai

Samurai has four renderers, all of which are far from performing optimally. The main restriction of the 2D Texture renderer concerns the viewing angle.

The volume can only safely be rotated around the y-axis. In one way this is an optimization, if no need for other rotational axes exists. Support for this type of optimization should remain, but the renderer should have a base handling general viewing. Extended clipping is also a feature which can easily be enabled for the 2D Texture renderer. Standard OpenGL user clip-planes can be used, which more or less only leaves user interface issues left to develop.

The 3D Texture renderer can also perform clipping, using the near camera clip-plane. More than that is however not generally possible with the use of OpenGL clip-planes. The 3D Texture renderer already uses six user clip-planes, which is the maximum for most graphics cards and drivers. Some of the clipping would have to be delegated to clipping done in software.

Figure 18: Bonsai tree dataset. Upper left corner: 2D Texture. Upper right corner: 3D Texture. Lower left corner: Raycasting. Lower right corner: Splatting.

The Splatting renderer has an enormous limitation in its emission-only optical model. This model is the one least used within medical imaging, making the renderer more or less unusable. This can be observed in figure 17, where the image of the head shows hardly any detail even though the best settings were chosen. The solution is sorting the voxels on depth before rendering, as well as using a different compositing method, which are the basic requirements to implement the optical model based on absorption. Even if these two steps are not terribly difficult to implement, they bring forward a more difficult issue - data transfer. The worst case of sorting the volume every frame would elevate the load on the bus immensely, since all data must be streamed to the graphics card. As pointed out in section 7.2.3, the implementation already suffers from an excessive amount of data in the representation of the volume. An additional transfer step would have a significant impact on performance. With the introduction of geometry shaders in the next generation of the graphics pipeline, the transfer step could possibly be avoided. Geometry for the point sprites would then be generated by a geometry shader, based on the volume stored as a 3D Texture. Unfortunately, it is unclear in what manner textures can be accessed from the geometry shader.

For the Raycast renderer there exists a great need to correct the ray direction precision issue described in section 7.2.4. This issue limits the effective resolution of the rendering to 256 × 256, which is naturally not acceptable. The defect is seen in all example renderings shown in figures 15, 16, 17 and 18, where you notice a blockiness in the image produced by raycasting. The sampling limit for the Samurai implementation of raycasting, brought forward in section 7.2.4, can be reduced by using space-leaping techniques, which also raise performance for some types of volumes.

Classification is very important in volume rendering since it allows feature separation and removal. This step has gotten little attention in Samurai, resulting in a non-interactive 1D transfer function. Perhaps the most important future work is making transfer function editing interactive with the volume rendering. This would allow users to optimize the transfer function for their intended viewing. Combining the interactivity with a 2D transfer function from section 7.2.5 would enhance the potential rendering quality enormously. Both of these changes put great demands on the user interface, since more features have to be available at the same time. Editing multi-dimensional transfer functions is, as previously noted, much more complex than the single 1D lookup-table.

9.1.2 Display Simulator

With the display simulator, the volume rendering evaluation platform for the display is complete. Evaluation in this version is based on visual inspection, basically giving users some notion of how volume data presents itself on the 3D display. This is very useful as a first tool for demonstrating the display, but the future of the evaluation platform may have more scientific purposes. The sampling artifacts due to limited bandwidth to the display can be reproduced with the display simulator. Implementing algorithms to inspect and classify the severity of the artifacts would be a great tool for creating and evaluating suitable anti-aliasing methods.

The exchange of data between Samurai and the Display Simulator is very simple. Samurai writes views as images to the hard-drive, which the Display Simulator reads. Both these actions are performed manually. Future features would include continuously streaming data from the renderer to the simulator, providing means to see other output than static images. This can be achieved in a variety of ways. Two basic issues must be addressed - sharing the data, and communicating updates and settings. Sharing OpenGL contexts, or just having shared memory spaces in which to place images, are two possible ways of sharing data. Communication can be done in even more ways, with everything from socket communication to interprocess window events.

As the Simulator is meant to give a preview of the display, the fact that it gives non-3D output is unfortunate. There exists however the possibility to extend the simulator with stereo rendering, making the characteristics come even closer to the real display. To make the Simulator as compatible as possible with standard computers, anaglyphic glasses would be the natural choice of separation method.

9.2 Volume Rendering

Volume rendering began mainly as a tool for scientific visualization, and that is perhaps still the biggest field. The complexity of the algorithms and the sheer size of the volume data resulted in late adoption within other applications, like cinema and gaming.

In the scientific environment, rendering tends to be focused on single high-resolution datasets. Quality and correctness are often more important than speed in terms of frame-rates.

Whereas in scientific visualization rendering can be narrowed down to fit a single purpose, other rendering environments like cinema and gaming have to consider the overall quality of an entire scene. Complete scenes contain multiple objects, many with unique properties for rendering, making it important to direct performance where it matters the most. Volume effects are however getting more and more attention in the entertainment fields. With non-real-time rendering for film, volume effects have been used for quite some time, but their importance and level of simulation is still growing. An example of a fairly recent area of development is subsurface scattering, used for realistic rendering of skin. Considering only the surface, as with ray-tracing, is not enough for a realistic result, since light penetrates the skin and scatters. This basically involves volume rendering, even if the algorithms, for performance reasons, are very specific to this occasion. In gaming the same evolution can be observed, even if limitations due to interactivity are evident.

Outside scientific visualization, where real-world data is the object of study, procedural generation of volume data is often used. Fire simulated using particle systems is a good example of this. This of course is due to the difficulty and expense of generating real-world data.

Looking forward, I think that we might see some change in this area, as real-world phenomena often are the aim of the simulations. In animation, character motions are often captured from real actors at a motion capture studio. I don't see why that development couldn't be applied in the volume effect area, at least in the longer term.

9.2.1 Data management

Even if rendering algorithms produce excellent results, they will always be dependent on the resolution of the original data. Storing and transferring data is today one of the major limiting factors within real-time volume rendering. Memory sizes and bus transfer rates are bound to increase, but will still have a difficult time matching the data sizes of volumes.

Different ways of compressing the volume data are today commonly used, but will most likely evolve even further. Besides simply using texture compression, the most common approach is called bricking, where the volume is divided into segments similar to an octree. Different resolutions for each segment are available, and a smart algorithm decides how detailed the segments need to be. This type of compression is very effective and resembles previous work on level-of-detail, even if depth is far from the only qualifier here. We will probably see many good algorithms along this line, but avoiding artifacts on the boundaries of segments may prove difficult.

As I see it, the best path would be to follow the development of ordinary image compression. The first thing that comes to mind is wavelet compression techniques, similar to the ones used in JPEG-2000. With the current generation of hardware, using wavelet compression internally is not possible [14]. Such development is not even evident in the near future. The problem is that textures are queried for single samples multiple times, and with wavelet coding there exists no simple way of looking up single pixel values. It is likely that the idea of using wavelets internally is flawed to such an extent that it will remain impossible. Even if that would be true, it does not mean that there will exist no use for wavelets. Another great bottleneck is the bus between CPU and GPU, today either AGP or PCI Express. This bottleneck could indeed be eased by wavelet-compressing volume data for transfer. Work is already being done in this area, and its importance will most definitely grow along with animated volume data. Consider a future case of animated sets of 512×512×512 voxels at 16 bits, which for a single frame comes to 256MB. Graphics cards with this amount of memory exist, and will in the near future be more common, making uncompressed internal storage viable. Animating this dataset at 25 fps would result in a data-rate of 6.25GB/s. Heavy compression techniques, both in storage and transfer, are clearly vital to enable these animated sequences.

9.2.2 Algorithms

It is not difficult to see the benefits of raycasting in being straightforward, having good performance and tremendous flexibility. With other algorithms it is particularly difficult to freely choose the optical model, but raycasting does not hide the integration step, which is where the optical model is defined. This does not however mean that the other algorithms are dead. Shear-Warp still has a great advantage in terms of speed. Implementation of texture based techniques is quite simple at first, but when slice interpolation, other proxy geometry or opacity correction is added, the complexity approaches that of the raycaster. Quality and speed are also quite good, and volume clipping can be done very easily with the use of standard OpenGL clip-planes.

In the graphics hardware, the texture lookups pose the major bottleneck, and with the introduction of more advanced compression methods, it will most likely remain that way. The fragment bottleneck is easily detected with Samurai and apparent for all algorithms, even strictly object-order algorithms like splatting. Splatting also relies on texture lookups, since the Gaussian distribution is stored as a texture. The fill-rate (number of fragments produced per unit of time) dependency is noticed as frame-rates drop substantially when the camera is moved toward the object. For hardware this means that it is important, besides having considerable amounts of memory, to have a fast fragment processor, or rather several processors. Rasterization is a step which can easily be parallelized, and modern GPUs have several parallel fragment pipelines. Volume rendering is far from the only rendering algorithm which benefits from faster fragment processing. A tendency toward more and more parallelism in fragment processing can clearly be noticed. The latest GPUs from ATI, called X1k, have for example up to 48 parallel fragment pipelines, in comparison with their last X800 series having 16 and the powerful NVidia 7800 series having 24 [12, 1, 21]. Of course the number of parallel processors doesn't necessarily correlate with performance, but it gives a hint of where the effort to increase performance is being put. This development will greatly benefit the volume rendering environment.

9.2.3 Classi�cation

Classification is very important for filtering out irrelevant features. For rendering, classification is applied through transfer functions. Generating transfer functions depends very much on what kind of data is being classified. For medical imaging, classification is mostly about separating different tissues from each other. A good result can be obtained from a general 2D transfer function, as described in section 4.6.1. More advanced classification gets very specific for isolated areas within a field.

From the implementation of Samurai with its 1D transfer function, a few things can be noted. First, a one-dimensional transfer function goes far, and the possibility of an easy creation interface makes it a viable option. Secondly, for most kinds of imaging, the most relevant optical property to map is opacity. Color is great when separating different features while still displaying all of them, but that still relies heavily on being able to control the opacity transfer. Also, transfer function creation is indeed an interactive process, and non-interactive creation raises an unnecessary barrier.

9.2.4 Summary

It is clear that the development and research of real-time volume rendering closely follows the development of graphics hardware. The general development at the moment, toward faster fragment processing, is excellent for volume rendering. Even though there are several rivaling algorithms, in aspects of quality and speed, raycasting seems at the moment to be the favorite, with good performance, excellent quality and especially straightforwardness.

9.3 Displaying

The limitation with volume rendering and 3D displays is all about speed. Volume rendering, as is, still barely produces interactive frame-rates, at least for heavier datasets. As described before, the renderer has to produce at least 16 times the number of frames for 3D displaying. It is doubtful whether even the most powerful graphics cards today can manage that, even for fairly lightweight datasets. It may seem that volume rendering for 3D displays is quite far into the future, but there are some properties of rendering that facilitate high-speed rendering. Rendering is generally incredibly parallelizable, which means that clustering can be used to achieve high frame-rates. The first level of clustering available today uses only consumer hardware - multiple GPUs. Both nVidia and ATI have systems to combine the performance of two graphics cards, almost doubling it. We may see support for combining more cards than that in the future, but already today the possibility of using four parallel cards exists, even though it isn't widely available yet. Some graphics card vendors have made cards with two GPUs integrated on the board. Just as with any other, these cards can be combined in pairs, or perhaps quads, which results in quad, and in the future octa, GPU systems.

This kind of system might still not be sufficient to accommodate multi-view rendering. The next step would be to use multiple computers clustered together. It is however very difficult to maintain good performance scaling for those kinds of systems, especially with high data-rates both as input and output. For volume rendering, excellent scaling could probably be achieved if the entire volume could be stored on board each graphics card. Streaming volume data across a network link while still maintaining frame-rate is not plausible.


References

[1] Radeon X800 3D architecture white paper. http://www.ati.com/products/radeonx800/RadeonX800ArchitectureWhitePaper.pdf - last visited 2006-02-20.

[2] National Electrical Manufacturers Association. DICOM standard documentation. http://medical.nema.org/dicom/2004.html - last visited 2006-04-20, 2004.

[3] Aleksey Berillo. S3TC and FXT1 texture compression. http://www.digit-life.com/articles/reviews3tcfxt1/ - last visited 2006-05-04.

[4] OpenGL Architecture Review Board. OpenGL extension EXT_blend_minmax. http://oss.sgi.com/projects/ogl-sample/registry/EXT/blend_minmax.txt - last visited 2006-01-25, 1995.

[5] Christof Rezk-Salama, Klaus Engel, Michael Bauer, Günther Greiner, and Thomas Ertl. Interactive volume rendering on standard PC hardware. Eurographics Workshop on Graphics Hardware 2000, 2000.

[6] Mason Woo, Jackie Neider, Tom Davis, and Dave Shreiner. OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 1.2 (3rd Edition). 1999.

[7] Klaus Engel and Thomas Ertl. Interactive high-quality volume rendering with flexible consumer graphics hardware. Eurographics '02, 2002.

[8] Aaron Fenster and Donal Downey. 3-D ultrasound imaging: a review. IEEE Engineering in Medicine and Biology Magazine, 15(6):41-51, 1996.

[9] Gregg Favalora, Joshua Napoli, Deirdre Hall, Rick Dorval, Michael Giovinco, Michael Richmond, and Won Chun. 100 million-voxel volumetric display.

[10] Jack Hoxley. An overview of Microsoft's Direct3D 10 API. http://www.gamedev.net/reference/programming/features/d3d10overview/ - last visited 2006-04-25, 2005.

[11] ATI Technologies Inc. 3Dc white paper. http://www.ati.com/products/radeonx800/3DcWhitePaper.pdf - last visited 2006-02-11.

[12] ATI Technologies Inc. Radeon X1000 technology overview. http://www.ati.com/products/radeonx1k/whitepapers/X1000_Family_Technology_Overview_Whitepaper.pdf - last visited 2006-05-04.

[13] ATI Technologies Inc. Radeon X1900 specification. http://www.ati.com/products/RadeonX1900/specs.html - last visited 2006-05-03.

[14] Klaus Engel, Markus Hadwiger, Joe M. Kniss, Aaron E. Lefohn, Christof Rezk-Salama, and Daniel Weiskopf. Real-time volume graphics. Course Notes 28 from SIGGRAPH '04, 2004.

[15] Philippe Lacroute and Marc Levoy. Fast volume rendering using a shear-warp factorization of the viewing transformation. Proceedings of SIGGRAPH '94, 1994.

[16] Marc Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29-37, 1988.

[17] Matthias Zwicker, Hanspeter Pfister, Jeroen van Baar, and Markus Gross. EWA splatting. IEEE Transactions on Visualization and Computer Graphics, 2002.

[18] David McAllister. Wiley Encyclopedia of Imaging, chapter 3D Displays, pages 1327-1344.

[19] Michael Meißner, Jian Huang, Dirk Bartz, Klaus Mueller, and Roger Crawfis. A practical evaluation of popular volume rendering algorithms. Proceedings of the 2000 IEEE Symposium on Volume Visualization, 2000.

[20] Nelson Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2), 1995.

[21] nVidia Inc. NVIDIA 7800 specification. http://www.nvidia.com/page/geforce_7800.html - last visited 2006-04-05.

[22] Quantum3D, Inc. Texture compression on Quantum3D. http://www.quantum3d.com/PDF/whitepapers/TextureCompression.pdf - last visited 2006-03-10.

[23] techPowerUp.com. CPU database. http://www.techpowerup.com/cpudb/index.php - last visited 2006-05-04.

[24] T. Troscianko, R. Montagnon, J. Le Clerc, E. Malbert, and P.-L. Chanteau. The role of colour as a monocular depth cue. Vision Research, 31(11), 1991.

[25] Lee Westover. Interactive volume rendering. Proceedings of the 1989 Chapel Hill Workshop on Volume Visualization, 1989.

[26] Wikipedia. Magnetic resonance imaging. http://en.wikipedia.org/wiki/Magnetic_resonance_imaging - last visited 2006-05-04.

[27] Wikipedia. Computed tomography. http://en.wikipedia.org/wiki/Computed_axial_tomography - last visited 2006-05-04, 2004.

[28] Chris Wynn. An introduction to BRDF-based lighting. http://www.cim.mcgill.ca/~image529/TA529/Image529_99/docs/BRDFIntro.pdf - last visited 2006-05-04.
