
2IV35: Visualization

Construction of an interactive application that implements several visualization techniques for a real-time simulation of fluid flow

D.J.W.M.G. Dingen (0580528)
S.J. van den Elzen (0573626)

Department of Mathematics and Computer Science
Eindhoven University of Technology

January 7, 2009

Abstract

This document describes the methods we used to construct an interactive application that implements several visualization techniques for a real-time simulation of fluid flow, for the course 2IV35: Visualization at Eindhoven University of Technology. The visualization techniques used, color mapping, glyphs, streamlines, slices, stream surfaces, and image-based flow visualization, are described in detail in separate sections.

Contents

1 Introduction

2 Skeleton compilation
  2.1 Division in components
  2.2 Graphical User Interface
  2.3 GL triangle strip color interpolation bug fix
  2.4 Compile instructions
      2.4.1 Qt 4.4.3
      2.4.2 FFTW 2.1.5
      2.4.3 Real-time visualization simulation

3 Camera, picking & controls
  3.1 Quaternion-based camera
  3.2 OpenGL picking
  3.3 Controls

4 Color mapping
  4.1 Legend
  4.2 Computation
      4.2.1 Scalar data set computation
      4.2.2 Scaling
      4.2.3 Clamping
  4.3 Color maps
      4.3.1 Rainbow color maps
      4.3.2 Grayscale color maps
      4.3.3 Alternating color map
      4.3.4 Black-body color map
      4.3.5 User-defined color map
      4.3.6 Overlay alternating color map

5 Glyphs
  5.1 Scalar and vector field data
  5.2 Glyph distribution methods
  5.3 Glyph interpolation methods
  5.4 Glyph parametrization
  5.5 Clamping
  5.6 Lighting
  5.7 Implemented glyphs
      5.7.1 Hedgehogs 2D/3D
      5.7.2 Arrows 3D
      5.7.3 Cones 3D
      5.7.4 Cones silhouette 3D

6 Gradient
  6.1 Gradient computation methods
  6.2 Noise reduction methods
  6.3 Gradient application

7 Streamlines
  7.1 Computation of the streamline
      7.1.1 Euler integration method
      7.1.2 Runge-Kutta second and fourth order integration method
      7.1.3 Integration direction
      7.1.4 A value for ∆t
      7.1.5 Integration stop criterion
  7.2 Seed point feeding strategy
      7.2.1 Interpolation
  7.3 Tapering effect

8 Slices
  8.1 Ring buffer
  8.2 Transparency
      8.2.1 Separate
      8.2.2 Blended
      8.2.3 First and Last
      8.2.4 Composed

9 Stream surfaces
  9.1 Stream surface seed curves
  9.2 Integration method
  9.3 Surface creation, splitting and joining

10 Image-Based Flow Visualization
  10.1 Method
  10.2 Injected noise texture
  10.3 Parameters
  10.4 Notices


1 Introduction

In this document the design decisions and the different visualization techniques used are described and explained. These visualization techniques are implemented in a single program: an interactive real-time simulation of fluid flow. The construction of this program is the practical assignment for the course 2IV35: Visualization.

“Construct an interactive application that implements several visualizationtechniques for a real-time simulation of fluid flow.”

In the upcoming sections the different visualization techniques are explained. Trade-offs are discussed, design decisions are explained, and general remarks are given. The assignment consists of several different steps, which are shown in Figure 1. We implemented all the steps on the right side of the tree, including the bonus step: image-based flow visualization. Section 2 describes the steps taken to adapt the provided skeleton so that it compiles with our tools. Section 4 describes and explains the color mapping technique. Section 5 explains the vector glyph technique. Section 6 explains the different ways of computing the gradient. Section 7 explains the streamline technique. Section 8 describes the slices technique with all its blending techniques. Section 9 describes the design decisions with respect to stream surfaces. Last but not least, Section 10 describes the different aspects of image-based flow visualization. Some aspects of the left assignment tree have been implemented as well. For example, by adding a circle as a seed curve for stream surfaces, stream tubes are implemented. A height plot is also implemented, because we made everything three-dimensional; even adding force to the simulation with the mouse can be done from every camera angle. All steps were implemented successfully and have led to a better understanding of the different visualization techniques.

Figure 1: Assignment tree; dark outlined steps are implemented, faded steps are omitted.


2 Skeleton compilation

We chose to adapt the given skeleton to make it compile with the following tools:

• GNU Compiler Collection (GCC) 4.3.2
  http://gcc.gnu.org/

• Qt 4.4.3
  http://trolltech.com/products/qt

• Fastest Fourier Transform in the West (FFTW) library 2.1.5
  http://www.fftw.org/

• OpenGL Utility Toolkit (GLUT)
  http://www.opengl.org/resources/libraries/glut/

Qt is a cross-platform application framework for C++. The main reason to adapt the skeleton to work with this setup is to enable cross-platform compilation. With this setup we can easily compile the constructed code to run on Linux, Mac, and Windows. Another great advantage of Qt is its extensive library of graphical user interface components, which can easily be adapted to serve our needs. The steps that were necessary to make the provided skeleton code work with the above-mentioned components are explained in the next sections.

2.1 Division in components

The first adaptation made to the provided skeleton is the division of the different components into separate header and source files. The initialization of the main window is no longer woven into the glFluids component. The code concerning glFluids and the main window is split into two different components, and the glFluids code is promoted to a QWidget. A control panel QWidget is also created to handle user interface actions; in this widget all simulation parameters can be controlled. In the main window component the glFluidsWidget and the cPanelWidget are included (see the listing below).

#ifndef __MAINWINDOW_H__
#define __MAINWINDOW_H__

#include <QMainWindow>
#include <QStatusBar>
#include "ui_mainwindow.h"
#include "glfluidswidget.h"
#include "cpanelwidget.h"

class CMainWindow : public QMainWindow,
                    private Ui::MainWindow
{
    Q_OBJECT

public:
    CMainWindow(QWidget *parent = 0);
private:
    CGLFluidsWidget* m_pGLFluidsWidget;
    CCPanelWidget*   m_pCPanelWidget;
    QStatusBar*      m_pStatusBar;
private slots:
};

#endif // __MAINWINDOW_H__


2.2 Graphical User Interface

The non-graphical user interface is adapted to a graphical user interface using Qt components. This interface maps all the previous key bindings to graphical components. Operations like direction coloring and fluid viscosity can now be performed graphically. Figure 2 shows the graphical user interface along with the control panel and the glFluids OpenGL widget.

Figure 2: First version of the OpenGL fluids widget with graphical user interface. During the completion of the assignment steps, the graphical user interface is expanded with more components as needed.

2.3 GL triangle strip color interpolation bug fix

In the original application there was a bug which prevented the matter visualization from being shown properly: ugly triangles appeared on the left side of the matter in the simulation. This was because the colors of the drawn triangle strip were not interpolated correctly. We fixed this by setting the right colors for the triangle strip so that they are interpolated correctly. Figure 3 shows the bug in the original application on the left and the fixed application on the right. Notice how in the right image the left boundary of the fluid flow simulation does not have the triangles that are present in the left image.


Figure 3: (a) Triangle strip color interpolation bug. (b) Fixed application.


2.4 Compile instructions

This section first explains how to build and install the requirements for smoke, which are Qt 4.4.3 and the FFTW 2.1.5 library. After that, the build procedure for the real-time visualization simulation program itself is given.

In this section it is assumed that the code is built for Linux. The build procedures for Windows and Mac OS X are almost the same, but are not discussed here. The main difference between the platforms is how the external libraries are included and linked. This is solved by explicitly linking the libraries in the project file, which results in the following code:

unix {
    LIBS = -lrfftw -lfftw -lglut
}
mac {
    LIBS = -framework \
           OpenGL \
           -framework \
           GLUT \
           -L/sw/lib \
           -lrfftw \
           -lfftw
    INCLUDEPATH += /sw/include
}
win32 {
    LIBS = -lglut32 -lglu32 -lopengl32 -mwindows -Llib -lFFTW
    INCLUDEPATH += ./include
}

On each platform the program can easily be built without adjustments by loading the project file into Qt Creator (freely downloadable from http://trolltech.com/developer/qt-creator) and pressing the build-all button. At code level, platform-dependent differences are taken into account (for example mouse handling) but are not discussed here.

2.4.1 Qt 4.4.3

Qt 4.4.3 can be downloaded at ftp://ftp.trolltech.com/qt/source. The file needed is called “qt-x11-opensource-src-4.4.3.tar.gz”. Save this file to a known directory, say “/qt”. After the file has been downloaded, open a terminal and run the following commands (note that the last command asks for your password):

user@host:~$ cd /qt
user@host:/qt$ tar -zxvf qt-x11-opensource-src-4.4.3.tar.gz
user@host:/qt$ cd qt-x11-opensource-src-4.4.3
user@host:/qt/qt-x11-opensource-src-4.4.3$ ./configure --prefix=/usr
user@host:/qt/qt-x11-opensource-src-4.4.3$ make
user@host:/qt/qt-x11-opensource-src-4.4.3$ sudo make install

After this, Qt 4.4.3 is installed. This can be verified by running “qmake -v”, which outputs something like:

user@host:~$ qmake -v
QMake version 2.01a
Using Qt version 4.4.3 in /usr

2.4.2 FFTW 2.1.5

FFTW 2.1.5 can be downloaded at http://www.fftw.org/fftw-2.1.5.tar.gz. The file needed is called “fftw-2.1.5.tar.gz”. Save this file to a known directory, say “/fftw”. After the file has been downloaded, open a terminal and run the following commands (note that the last command asks for your password):


user@host:~$ cd /fftw
user@host:/fftw$ tar -zxvf fftw-2.1.5.tar.gz
user@host:/fftw$ cd fftw-2.1.5
user@host:/fftw/fftw-2.1.5$ ./configure --prefix=/usr/local
user@host:/fftw/fftw-2.1.5$ make
user@host:/fftw/fftw-2.1.5$ sudo make install

After this FFTW 2.1.5 is installed.

2.4.3 Real-time visualization simulation

Once both Qt 4.4.3 and FFTW 2.1.5 are installed, the visualization simulation can be compiled. Assuming the source code of the program is placed in “/src”, run the following commands in a terminal:

user@host:~$ cd /src
user@host:/src$ qmake smoke.pro -config release
user@host:/src$ make

Smoke is now compiled. To run smoke, execute the binary found in the source directory by typing in a terminal (from the “/src” directory):

user@host:/src$ ./smoke


3 Camera, picking & controls

To allow three-dimensional movement around the fluid flow simulation, a quaternion-based camera class is implemented. This free rotating and translating becomes very useful later when visualizing slices (see Section 8) and stream surfaces (see Section 9). The free camera movement introduced some problems. Adding force to the fluid flow simulation with the mouse was not as easy as before, because the z-axis now has to be taken into account. Because the mouse coordinates no longer map one-to-one onto the fluid flow coordinates, some calculation has to be done to determine where the force has to be added. This problem is solved by implementing a procedure which calculates the right coordinates using the OpenGL picking mechanism.

3.1 Quaternion-based camera

The implemented camera class, which allows free movement around the fluid flow simulation, is based on quaternions. The details of how the quaternions are implemented are beyond the scope of this assignment and are therefore left out. The user can move freely using the W, A, S, D keys and by holding the Ctrl key while moving the mouse.
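Although the quaternion details are left out, the core operation the camera builds on (rotating a vector by a unit quaternion) can be sketched in a few lines. This is an illustrative sketch only; the names (`Quat`, `fromAxisAngle`, `rotate`) are ours, not those of the actual camera class.

```cpp
#include <cmath>

// Illustrative quaternion sketch (w + xi + yj + zk); not the actual
// camera class from this report, just the core operation it builds on.
struct Quat { float w, x, y, z; };

// Hamilton product: composes two rotations.
Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Unit quaternion for a rotation of 'angle' radians about a unit axis.
Quat fromAxisAngle(float ax, float ay, float az, float angle) {
    float s = std::sin(angle / 2);
    return { std::cos(angle / 2), s * ax, s * ay, s * az };
}

// Rotate vector v in place: v' = q * (0, v) * conj(q).
void rotate(const Quat& q, float v[3]) {
    Quat p  = { 0.0f, v[0], v[1], v[2] };
    Quat qc = { q.w, -q.x, -q.y, -q.z };
    Quat r  = mul(mul(q, p), qc);
    v[0] = r.x; v[1] = r.y; v[2] = r.z;
}
```

Camera rotations triggered by the Ctrl-drag can then be accumulated by multiplying such quaternions, which avoids the gimbal lock that Euler-angle cameras suffer from.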

3.2 OpenGL picking

The problem of no longer being able to add force to the flow simulation, caused by the free camera, is solved by implementing a picking procedure. This picking procedure calculates the exact position on the flow simulation plane to which the mouse points. An extra plane is added directly behind the simulation to enable picking when glyphs are drawn; it is made transparent, so it cannot be seen. This extra plane is added because it is possible to click in an empty spot between the glyphs. The plane makes sure the picking procedure always returns a clicked position in the simulation, so that force can be added to the fluid flow.

void CGLFluidsWidget::startPicking()
{
    GLint viewport[4];

    glSelectBuffer(PICKBUFSIZE, m_selectBuf);
    glRenderMode(GL_SELECT);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();

    glGetIntegerv(GL_VIEWPORT, viewport);
    gluPickMatrix(m_mouseX, viewport[3] - m_mouseY, 1, 1, viewport);
    gluPerspective(FIELD_OF_VIEW, 1.0 * viewport[2] / viewport[3], 0.1, 10000);
    glMatrixMode(GL_MODELVIEW);
    glInitNames();
    glLoadIdentity();
    m_pCam->Apply();
    // Push one name for picking
    glPushName(1);
    // Draw to hit
    visualize();
}

void CGLFluidsWidget::processHits(GLint hits)
{
    double modelview[16], projection[16];
    int viewport[4];
    float z;

    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetIntegerv(GL_VIEWPORT, viewport);

    if (hits > 0)
    {
        glReadPixels(m_mouseX, viewport[3] - m_mouseY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &z);
        gluUnProject(m_mouseX, viewport[3] - m_mouseY, z, modelview,
                     projection, viewport, &m_objx, &m_objy, &m_objz);
    }
}

bool CGLFluidsWidget::stopPicking()
{
    int hits;

    // restore the original projection matrix
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    // return to normal rendering mode
    hits = glRenderMode(GL_RENDER);

    if (hits > 0)
    {
        processHits(hits);
        return true;
    }
    return false;
}

3.3 Controls

Several controls are bound to different actions. An overview of all user keys is given below.

Control                    Action                           Dependency
W                          Zoom in                          Flow simulation has the focus
S                          Zoom out                         Flow simulation has the focus
A                          Move camera left                 Flow simulation has the focus
D                          Move camera right                Flow simulation has the focus
Ctrl + left mouse button   Rotate camera                    Flow simulation has the focus
Alt + P                    Pause the simulation             -
Right mouse button         Place user-defined seed point    Drawing streamlines enabled
                           for streamline
Space                      Place a seed curve for a         Drawing streamsurfaces enabled
                           streamsurface


4 Color mapping

In this step of the assignment we implemented several colormapping techniques. The colormapping techniques are applied to the three data sets in our application: the fluid density, the fluid velocity magnitude, and the force field magnitude. The user can choose between different colormaps. As stated in the assignment, three colormaps have to be implemented: a rainbow colormap, a grayscale colormap, and another of our choice. We implemented more than three colormaps to give the user the opportunity to choose the right colormap for the right kind of data. More on these ‘right colormaps’ is described in the next sections. The implemented colormaps are:

• Black-white colormap: grayscale colormap with 255 colors ranging from black to white

• Grayscale colormap: user-defined number of color bands ranging from black to white

• Rainbow colormap: rainbow (red, orange, yellow, green, blue, indigo, violet) colormap with 255 colors

• Bands colormap: rainbow colormap with a user-defined number of colors

• Alternating colormap: blue-yellow bands which alternate when values change, given a certain sensitivity

• Black-body colormap: 255 colors ranging from black to red to yellow to white

• User-defined colormap: interpolated colormap between two user-defined colors; the interpolation is also based on the user's preference. The user can choose to interpolate between RGB, Hue, Saturation, and Value.

All the above-mentioned colormaps can be overlaid with an alternating colormap for which the sensitivity is user-defined. This has the great advantage that alternations in values can be spotted much faster. More on this advantage and the overlaid alternating colormap is described in Section 4.3.6. All implemented colormaps can be applied to three different scalar data sets, namely the fluid density rho, the fluid velocity magnitude |v|, and the force field magnitude |f|. More on the computation of these scalar data sets is described in Section 4.2.1.

4.1 Legend

We chose to implement a vertical color legend bar. This, as opposed to a horizontal color legend bar, makes it intuitive for the user that colors higher in the legend represent higher data values. The legend is drawn in the same OpenGL window in which the real-time fluid simulation is rendered. Because of this, the cursor coordinates have to be adapted to the new situation. At the left side of the simulation we want to apply a correction equal to two times the legend width; at the right side we want no correction. To achieve this we interpolate the coordinates, which results in the following interpolation code:

static float correction = 2 * m_legendWidth;
if (mx < correction) return;
else {
    mx -= correction * (1 - (mx - correction) / (m_winWidth - 2 * m_legendWidth));
}


4.2 Computation

4.2.1 Scalar data set computation

To apply the colormap to the three data sets, the values of the data sets have to be computed. The first data set, the fluid density rho, is given by the external FFTW library. The second and third data sets, the fluid velocity magnitude |v| and the force field magnitude |f|, have to be computed on the fly. These magnitudes are computed by taking the vector lengths (see Formulas 1 and 2). After computation, all scalar data values are multiplied by a factor 10, because the raw data values are very small.

|v| = √(vx² + vy²)    (1)

|f| = √(fx² + fy²)    (2)
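Formulas 1 and 2, together with the factor-10 scaling mentioned above, amount to the following (function names are ours, not those of the application):

```cpp
#include <cmath>

// Vector magnitude as in Formulas 1 and 2: |v| = sqrt(vx^2 + vy^2).
float magnitude(float x, float y) {
    return std::sqrt(x * x + y * y);
}

// The scalar values are multiplied by 10, because the raw
// simulation values are very small.
float scaledMagnitude(float x, float y) {
    return 10.0f * magnitude(x, y);
}
```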

4.2.2 Scaling

Both scaling and clamping are implemented for all color maps. When the scaling option is selected, the entire actual range of the data values at the current time is mapped to the selected color map. The minimum and maximum data values are tracked by the application, so every occurring data value is shown in the visualization, for both scaling and clamping. As a consequence, the values displayed in the legend are updated in real time and therefore change dynamically. This consequence is also described on page 130 of [Tel]: “(...) if we do not know f(t) for all values t before we start the visualization, we cannot compute the absolute scalar range [fmin, fmax] (...). In such situations, a better solution might be to normalize the scalar range separately for every time frame f(t). Of course, this implies drawing different color legends for every time frame as well.” An effect of this updating of the legend is that the data is hard to interpret in real time. A solution to this problem is to pause the simulation (which is of course possible in our simulation); the values in the legend are then correct for the time frame shown and can easily be interpreted. An example of the implemented scaling and clamping features is shown in Figure 4. In the left image the data values are clamped to the range [0..1]; in the right image the exact same data values are scaled, such that the highest value corresponds to the color highest in the legend. The data values shown in the right image span the range [0.0236323, 0.539868].


Figure 4: Data values clamped (a) and scaled (b).
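The per-frame scaling described above can be sketched as a normalization against the tracked minimum and maximum of the current frame (a sketch with our own names, not the application's code):

```cpp
// Map a data value onto [0,1] given the tracked per-frame range
// [vmin, vmax]; the normalized result then indexes the color map.
float scaleToRange(float v, float vmin, float vmax) {
    if (vmax <= vmin) return 0.0f;  // degenerate frame: a single value
    return (v - vmin) / (vmax - vmin);
}
```

With the range [0.0236323, 0.539868] of Figure 4, the frame maximum 0.539868 maps to 1 and thus to the color highest in the legend.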


4.2.3 Clamping

The user can clamp the displayed data values for all color maps. A minimum and a maximum value can be defined; only data values within this range are displayed as-is using the selected color map. Data values outside the defined range, less than the minimum or larger than the maximum, are clamped to the minimum and maximum respectively. The clamping option can be used, for example, to show only the extremely high (or low) values, by specifying a small range with both a high (or low) minimum and maximum. Figure 5 shows the clamping option applied to the range [0.9, 0.95]. The left image shows the entire range [0..1]; in the right image only high values within the range [0.9, 0.95] are shown, and values outside this range are clamped.


Figure 5: (a) Data values without clamping. (b) Clamping set to the range [0.9, 0.95]; only high values are shown.
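Clamping followed by the color lookup boils down to the following (a sketch with our own names; it assumes lo < hi):

```cpp
// Clamp v into the user-defined range [lo, hi], then normalize that
// range to [0,1] for the color map lookup.
float clampNormalize(float v, float lo, float hi) {
    float c = v < lo ? lo : (v > hi ? hi : v);
    return (c - lo) / (hi - lo);
}
```

With the range [0.9, 0.95] of Figure 5, every value below 0.9 maps to the lowest color and everything above 0.95 to the highest, so the whole colormap is spent on the narrow band of interest.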

4.3 Color maps

4.3.1 Rainbow color maps


Figure 6: (a) Rainbow color map, (b) grayscale color map, and (c) black-body color map [BI07].

According to several research papers [BI07], [Hea96], the rainbow colormap (see Figure 6) is one of the worst colormaps one can use. Still, the visualization community widely uses the rainbow colormap.


Not only does the rainbow color map confuse viewers through its lack of perceptual ordering and obscure data through its inability to present small details, but it actively misleads the viewer by introducing artifacts to the visualization [BI07].

The biggest problem with the rainbow colormap is that it is not perceptually ordered. A grayscale colormap, for example, is perceptually ordered: it is immediately clear that lighter shades of gray represent higher values. For a rainbow colormap this ‘greater than’ relationship is not immediately clear. To interpret the data, one has to know the precise order of the colors of a rainbow; if this is not the case, data is misinterpreted. To solve this problem a legend could be introduced, but this leads to unnecessary distraction. Not only is the perceptual ordering a problem, the rainbow colormap also introduces artifacts. Because of the sudden changes in colors, the user may think that values suddenly differ greatly, while this is not the case at all. This effect is shown in the upper images of Figure 7; the lower image shows the black-body colormap applied to the same data values, which does not introduce artifacts. This negative effect, known as banding, becomes even stronger as the number of displayed colors in the rainbow colormap is decreased. Although we know that, for the above-mentioned reasons, a rainbow colormap is one of the worst colormaps to use, we did implement it (as it is part of the assignment), but with the option to show a legend next to it, so the data can be interpreted in the intended way.


Figure 7: (a-b) Negative banding effect of the rainbow colormap. (c) The black-body colormap applied to the same data values as in (a-b); here the banding effect is absent.
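For reference, a widely used formulation of such a rainbow colormap (blue at the low end, red at the high end) looks as follows. This is a sketch of the standard construction, not necessarily the exact code of the application:

```cpp
#include <algorithm>
#include <cmath>

// Classic piecewise rainbow: maps a value in [0,1] to blue -> cyan ->
// green -> yellow -> red. The offset dx trims the dark violet end.
void rainbow(float value, float* R, float* G, float* B) {
    const float dx = 0.8f;
    if (value < 0.0f) value = 0.0f;
    if (value > 1.0f) value = 1.0f;
    value = (6.0f - 2.0f * dx) * value + dx;   // rescale to [dx, 6 - dx]
    *R = std::max(0.0f, (3.0f - std::fabs(value - 4.0f) - std::fabs(value - 5.0f)) / 2.0f);
    *G = std::max(0.0f, (4.0f - std::fabs(value - 2.0f) - std::fabs(value - 4.0f)) / 2.0f);
    *B = std::max(0.0f, (3.0f - std::fabs(value - 1.0f) - std::fabs(value - 2.0f)) / 2.0f);
}
```

The hard hue transitions in this piecewise mapping are exactly what causes the banding artifacts visible in Figure 7.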


4.3.2 Grayscale color maps

Two grayscale color maps are implemented: the RGB-interpolated black-white colormap and the grayscale color bands map. The RGB-interpolated black-white colormap has 255 shades of gray. According to the book [Tel], a usable colormap needs 64 to 255 different shades to prevent color banding. In our implementation we give the user the option to set the number of shades for the grayscale color bands map. We indeed found that above 64 shades of gray there is no real color banding anymore. Moreover, because the implemented program is a smoke simulation, realism is preferred, and color banding breaks the realistic feel of the simulation. Figure 8 shows the grayscale color bands map with different numbers of shades of gray.


Figure 8: Grayscale colormap with an increasing number of shades of gray: (a) 4, (b) 16, (c) 32, and (d) 256 shades of gray respectively.
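The user-defined number of gray bands amounts to quantizing the normalized value. A sketch with our own names (`nBands` is assumed to be at least 2):

```cpp
#include <cmath>

// Quantize a normalized value into nBands gray levels in [0,1].
// With nBands >= 64 the banding becomes invisible, as observed above.
float grayBand(float v, int nBands) {
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    int band = (int)(v * nBands);
    if (band == nBands) band = nBands - 1;      // v == 1 falls in the last band
    return (float)band / (float)(nBands - 1);   // spread the bands over [0,1]
}
```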

4.3.3 Alternating color map

As an extra option in our program we chose to implement an alternating color map. An alternating color map is used to emphasise relatively large differences in data values. Where the data has more or less the same value, the color stays the same; when the value becomes relatively smaller or greater, the current color changes to the other color. These two colors alternate, which is why the map is called an alternating color map. The sensitivity can be set by the user; it is initialised to 0.01, as this gave the best results for our data set in practice. For the alternating colormap we chose the colors blue and yellow to deal with the problem of colorblindness. Colorblindness affects 10 percent of the male population and is therefore a significant problem in visualization [BI07]. Colorblind people tend to have difficulty seeing the difference between green and red; we avoid this problem by taking the alternating colors blue and yellow. Figure 9 shows the alternating colormap applied to matter (left image) and to arrow glyphs (right image).


Figure 9: Alternating blue-yellow colormap with sensitivity set to 0.01, (a) drawn using matter and (b) drawn using three-dimensional arrow glyphs.
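One way to realise such an alternating map is to flip between the two colors each time the value crosses a multiple of the sensitivity. This is a sketch of the idea with our own names, not necessarily the exact rule used in the application:

```cpp
// Alternating blue/yellow map: the color flips whenever the value
// crosses a multiple of 'sensitivity', so color changes mark value
// changes of at least that size. Assumes v >= 0 and sensitivity > 0.
void alternating(float v, float sensitivity, float* R, float* G, float* B) {
    int band = (int)(v / sensitivity);
    if (band % 2 == 0) { *R = 0.0f; *G = 0.0f; *B = 1.0f; }  // blue
    else               { *R = 1.0f; *G = 1.0f; *B = 0.0f; }  // yellow
}
```

Blue and yellow are used instead of red and green precisely for the colorblindness reason given above.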

4.3.4 Black-body color map

Our visualisation program includes the option to apply a black-body color map (see Figures 6 and 7). The black-body colormap is the best choice if nothing is known about the data or task [BI07], because of its perceptual ordering and its use of color to avoid contrast effects. In our case we do know something about the data, but not enough to select a typical best-fitting colormap. The only thing we know is that the data received from the FFTW library is not discrete but continuous, and resembles a fluid flow. In our opinion, the black-body color map is the best choice for our fluid simulation.
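A typical black-body ramp runs from black through red and yellow to white; it could be sketched as the following piecewise-linear mapping (illustrative only; the exact ramp used in the application may differ):

```cpp
#include <algorithm>

// Piecewise-linear black-body ramp: black -> red -> yellow -> white.
// Red rises first, then green, then blue, each over a third of [0,1].
void blackbody(float v, float* R, float* G, float* B) {
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    *R = std::min(1.0f, 3.0f * v);
    *G = std::min(1.0f, std::max(0.0f, 3.0f * v - 1.0f));
    *B = std::min(1.0f, std::max(0.0f, 3.0f * v - 2.0f));
}
```

Because all three channels only increase with v, the map is perceptually ordered, unlike the rainbow map.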

4.3.5 User defined color map

To let our simulation support people with less common types of colorblindness (for example violet colorblindness), we implemented the user-defined color map. For this color map the user can set the two colors between which the color map interpolates. The user can also select the interpolation function: the colors can be interpolated via RGB, Hue, Saturation, or Value. Figure 10 shows a user-defined colormap from blue to brown, interpolated with the four different options.


Figure 10: User-defined colormap from blue to brown, interpolated via (a) RGB, (b) Hue, (c) Saturation, and (d) Value respectively.
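Interpolation in RGB space is a per-channel linear blend (a sketch, with our own names); the Hue, Saturation, and Value variants apply the same blend to the HSV components and convert back to RGB afterwards:

```cpp
#include <cmath>

// Linear blend between two user-chosen RGB colors a and b,
// with t in [0,1]: t = 0 gives a, t = 1 gives b.
void lerpRGB(const float a[3], const float b[3], float t, float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = (1.0f - t) * a[i] + t * b[i];
}
```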


4.3.6 Overlay alternating color map

On top of all the implemented color maps we added the option of an overlaid alternating color map. This alternating color map is laid on top of the currently selected colormap. This way, the actual values of the data can be read from the currently selected color map, while large changes in values can easily be spotted thanks to the alternating overlay. The sensitivity of the alternating overlay can be set by the user. Because the smoke simulation contains no great differences between neighbouring values (you will not, for example, encounter a data value of −10 next to a data value of 100), the alternating colormap and the overlaid alternating colormap are not really useful in this particular situation. Neighbouring values do not differ greatly because of the way the data values are computed by the external FFTW library: force is simply added to the current value in the selected area, and the data value rises gradually there, so no great jumps in value occur. The overlaid alternating colormap becomes more useful when the tracked scaling option is enabled. The right image of Figure 11 shows the overlaid alternating colormap on top of the black-body colormap. Nevertheless, this option is implemented to show that in some situations it can be a good choice of colormap, and experimentation has indeed shown that in our case the alternating colormap is not a good choice.


Figure 11: (a) The black-body color map applied to the data set; (b) the overlaid alternating color map on top of the black-body color map, making it easier to see groupings of values that are more or less the same; (c-d) the same applied to the rainbow color map.
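The alternating overlay can be sketched as banding the normalized scalar value: even bands keep the color-map color, odd bands darken it, so that band boundaries (large value changes) stand out. The function below is a minimal sketch under that assumption; `bands` plays the role of the user-set sensitivity, and all names are illustrative.

```cpp
// Returns a brightness factor for the color-map color: odd bands are
// darkened, producing the alternating overlay. value is assumed to be
// normalized to [0,1].
float overlayFactor(float value, int bands) {
    int band = static_cast<int>(value * bands);
    return (band % 2 == 0) ? 1.0f : 0.7f;
}
```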


5 Glyphs

In the third step of the assignment, glyphs are implemented using several different glyph techniques. Glyphs are icons that represent the different values of a vector field; these values are encoded in a single glyph, for example through its color, width and height. The shape and appearance of the glyphs can also be varied in many ways, for example 2D/3D arrows, 2D/3D hedgehogs, or triangles. Glyphs can be used to visualize one scalar field and one vector field at the same time. Glyphs are implemented to visualize the three data sets: the fluid density, the fluid velocity magnitude and the force field magnitude.

5.1 Scalar and vector field data

In a single glyph both vector field data and a scalar are encoded. The user can set the scalar field to the density rho, the fluid velocity magnitude |v|, or the force field magnitude |f|. The vector field can be set to either the fluid velocity v or the force field f. The fluid velocity magnitude and force field magnitude are computed by taking the vector length as described in 4.2.1. The vector field direction and magnitude are visualized by the orientation and length of the glyph. The scalar field value is visualized by the color of the glyph; the length and thickness of the glyph also encode the scalar field. All these fields can be adjusted by the user.

5.2 Glyph distribution methods

Part of the glyph assignment is a mechanism to specify where to draw the glyphs. We implemented three different ways of placing them:

• Uniform

• Random

• Uniform user-defined

The uniform method aligns the glyphs on a regular grid. Because this regular grid causes interpretation problems, as described in [Tel], random placement is also implemented. In regular grids with highly dense areas, the perception of the diagonal orientation of the vector glyphs is weakened by the uniformity of the sampling points. This problem can be solved by sub-sampling the data set using a randomly distributed (instead of regularly arranged) set of points. The user can define how many random glyphs are drawn. The left image of Figure 12 shows an image that is hard to interpret because of a highly dense area; the right image shows the solution to this problem, randomly placed glyphs.
The number of uniform glyphs can also be set, by selecting the Uniform user-defined option. Initially this is set to 50 × 50 glyphs, but it can be adjusted by the user. In Figure 13 the number of glyphs is set to 300 × 300, making the glyphs look like matter. The user can break the symmetry by setting the number of horizontal glyphs to a different value than the number of vertical glyphs.
Because sample points do not always coincide with actual grid points, and thus have no value, different interpolation methods are implemented, as described in 5.3.


Figure 12: (a) Highly dense area and (b) the solution, randomly placed glyphs.

Figure 13: User-defined uniform 300 × 300 glyphs.

5.3 Glyph interpolation methods

The data values for the glyphs are computed by two different interpolation methods; the user can select the method that suits him best. The choices are nearest neighbor interpolation and bilinear interpolation. The nearest neighbor method speaks for itself: to compute the value of a point P, it takes the value of the nearest grid point Q and gives P the same value as Q. More formally, it is checked in which quadrant of the cell the point lies, and the data value of the grid point belonging to that quadrant is taken. For example, in Figure 14, if a value is needed for point P, it is checked which grid point is the nearest neighbor of P, which in this case is Q12; point P is then given the exact same value as Q12. Bilinear interpolation takes the four surrounding grid values of a point into consideration and interpolates them pairwise; the resulting values are then interpolated again to find the value for the given point. For example, in Figure 14 a value for point P has to be found. Q12, Q22, Q11 and Q21 are taken into consideration. First Q12 and Q22 are interpolated to find a value for R2; next Q11 and Q21 are interpolated to find the value for R1. Finally R2 and R1 are interpolated to find the value for P.


Figure 14: Example of bilinear interpolation [Wik08]

In formula form, bilinear interpolation is computed by (where f(x, y) gives the value in point (x, y)):

f(x, y) = f(Q11) / ((x2 − x1)(y2 − y1)) · (x2 − x)(y2 − y)
        + f(Q21) / ((x2 − x1)(y2 − y1)) · (x − x1)(y2 − y)
        + f(Q12) / ((x2 − x1)(y2 − y1)) · (x2 − x)(y − y1)
        + f(Q22) / ((x2 − x1)(y2 − y1)) · (x − x1)(y − y1)    (3)

Of course bilinear interpolation gives better results, but because of the many interpolations it is also much slower than the nearest neighbor method. In practice we do not see much difference between the two interpolation methods, and we have therefore chosen nearest neighbor interpolation as the default.
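Both methods can be sketched on a row-major grid with unit cell spacing. This is illustrative code, not the application's; bounds checking is omitted.

```cpp
#include <cmath>

// Nearest neighbor: round the sample position to the closest grid point.
float nearestNeighbor(const float* f, int nx, float x, float y) {
    int i = static_cast<int>(std::lround(x));
    int j = static_cast<int>(std::lround(y));
    return f[j * nx + i];
}

// Bilinear: interpolate along x twice (giving R1 and R2), then once along y.
float bilinear(const float* f, int nx, float x, float y) {
    int i = static_cast<int>(std::floor(x));
    int j = static_cast<int>(std::floor(y));
    float tx = x - i, ty = y - j;
    float r1 = (1 - tx) * f[j * nx + i]       + tx * f[j * nx + i + 1];
    float r2 = (1 - tx) * f[(j + 1) * nx + i] + tx * f[(j + 1) * nx + i + 1];
    return (1 - ty) * r1 + ty * r2;
}
```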

5.4 Glyph parametrization

The glyphs can be fine-tuned in a number of ways. The coloring of the glyphs can be set to color map coloring or direction coloring. If direction coloring is selected, the glyph is colored depending on its direction. If color map coloring is selected, the glyph gets a color depending on the value of the selected data set at that specific point. All previously described color maps can be applied to glyphs.

5.5 Clamping

Glyphs can also be clamped if their length is longer than the grid cell size. If this happens, glyphs will overlap each other, making it difficult to interpret the data. There are three different ways to set the clamping:

• Uniform: All glyphs are made the same length. Independent of the data value, each glyph is scaled to a uniform length.


• Normal: All glyphs are scaled depending on their data value. This can mean that glyphs overlap each other.

• Clamped: Glyphs are scaled depending on their data value, but are clamped to the grid cell size if they would exceed it.

An example of the clamping options Normal and Clamped is shown in Figure 15.


Figure 15: (a) Clamping option set to Normal; (b) set to Clamped, making it easier to interpret the data.

It would be interesting to clamp the glyphs on a logarithmic scale, making the vector length scale smaller as the data value increases. This is not implemented because of time constraints.
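The three scaling modes above can be sketched as follows; the names (`glyphLength`, `scale` for the user scaling factor) are illustrative, not the actual application code.

```cpp
#include <algorithm>

enum ScalingMode { Uniform, Normal, Clamped };

// Length of a glyph under the three scaling modes described above.
float glyphLength(float magnitude, float scale, float cellSize, ScalingMode mode) {
    switch (mode) {
        case Uniform: return cellSize;                               // same length for every glyph
        case Normal:  return magnitude * scale;                      // may exceed the cell and overlap
        case Clamped: return std::min(magnitude * scale, cellSize);  // bounded by the cell size
    }
    return 0.0f;
}
```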

5.6 Lighting

To make the interpretation of the glyphs easier for the user, lighting is applied. Because of the lighting, the position and direction of the three-dimensional glyphs are easier to see. The applied lighting is directional (a sunlight effect rather than a point light) and always comes from one direction, independent of the camera position. Bidirectional lighting is also applied to visualize the back side of polygons; this becomes important later on when visualizing stream surfaces.

5.7 Implemented glyphs

There are a number of glyphs which the user can select to apply to the different data sets. A selection can be made from 2D or 3D hedgehogs, 3D cones, 3D cone silhouettes and 3D arrows. Different glyphs are useful in different situations; each glyph has its advantages and disadvantages. Below the different types of glyphs are explained and the advantages and disadvantages of each type are clarified. Glyphs are implemented using an OpenGL display list. The great advantage of a display list is that the glyphs are precompiled, and rotations and transformations can be done on the model coordinates. After rotating and transforming, the list is called and the glyph is displayed in world coordinates, speeding up the rendering significantly.


5.7.1 Hedgehogs 2D/3D

Two-dimensional hedgehogs are basically just short lines. Within these glyphs several different variables can be encoded: the color, length and direction can be used to encode the data values. Hedgehogs have the advantage that they are simple to implement and fast to render. There are however some obvious problems. Because 2D hedgehogs are flat, no lighting model can be applied to them, which makes them hard to interpret. This problem is easily solved by 3D hedgehogs, on which for example Gouraud shading can be applied; therefore three-dimensional hedgehogs are also implemented. Also, because of the simplicity of a hedgehog it is not clear in which direction the glyph is pointing. This problem is solved by extending the hedgehog with an arrow head pointing in the direction the data is flowing, implemented in the form of three-dimensional arrows (yet these arrow heads introduce new problems, as described in 5.7.2). Other problems are less easily solved: the visual impression and interpretation of the data depend heavily on the spatial distribution of the glyphs, and using many glyphs gives unwieldy pictures. To address these problems, several glyph distribution methods are implemented, as described in Section 5.2.

5.7.2 Arrows 3D

As stated in the previous paragraphs, arrow glyphs are hedgehogs with an arrow head attached. This solves the problem of not seeing in which direction the data flows, but introduces another problem: as the glyph is scaled, how big should the arrow head be? Should it scale along, or remain the same length all the time? We chose to keep the arrow head always the same length, but this preference may differ for other users. To avoid the problem of scaling the arrow head altogether, three-dimensional cones are introduced.

5.7.3 Cones 3D

Three-dimensional cones are basically the arrow head without the arrow tube. This solves the scaling problem because the arrow head now is the entire glyph, and its length and scale follow the data to be visualized. To try to solve the problem of cluttering glyphs, which results in unwieldy pictures, we implemented three-dimensional cone silhouette glyphs.

5.7.4 Cones silhouette 3D

Three-dimensional cone silhouette glyphs (see the left image of Figure 16) are the same as 3D cone glyphs, but only the wire-frame of the cone is rendered instead of the whole surface. In theory this seemed a good idea to solve the cluttering of glyphs; in practice it turned out rather useless (see Figure 16). Rendering only the wire-frame of a glyph makes the picture even harder to interpret. Because lighting has little effect on a single wire-frame (as opposed to the surface of a solid cone), it cannot easily be seen which wire belongs to which glyph, making it extremely hard to interpret the data. This makes cone silhouettes a poor choice of glyph. Clamped 3D arrow glyphs (right image of Figure 16) turned out to be the best choice if rendering speed is less important; otherwise 2D hedgehogs are preferred.


Figure 16: (a) Three-dimensional solid cones and (b) three-dimensional cone silhouette glyphs. (c) The best choice of glyphs, clamped 3D arrow glyphs, applied to the same data set.


6 Gradient

The fourth step of the construction of the fluid visualization concerns the understanding and implementation of scalar field operators. One scalar field operator in particular has to be implemented in the fluid flow simulation: the gradient operator. By implementing scalar field operators, several different quantities on scalar fields can be computed, and the visualization of these computations enables the user to do various kinds of analysis on the scalar data field. The gradient of a scalar field is a vector which shows, at every point, the direction in which the quantity of the data varies most at that point; in other words, the direction of maximal change. The length of the vector encodes the amount of change per unit length in that direction.

6.1 Gradient computation methods

To compute the gradient, denoted ∇, of a function f, the partial derivatives of the scalar quantity are computed:

∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)    (4)

Because we do not have a continuous function but a discrete data value at each point, the formula stated above (Formula 4) cannot be applied directly to compute the gradient. In our fluid simulation each point in the data field has a discrete scalar value, so the partial derivative of this scalar is computed by the finite differences method. To compute the derivative of a discrete value, three different methods can be applied. Forward computation (Formula 5) takes the point itself and the next point into consideration, backward computation (Formula 7) takes the previous point and the current point into account, and central computation (Formula 6) considers both the previous and the next point.

forward: ∇x = x(i+1,j,k) − x(i,j,k)    (5)

central: ∇x = (1/2) (x(i+1,j,k) − x(i−1,j,k))    (6)

backward: ∇x = x(i,j,k) − x(i−1,j,k)    (7)

All three methods are implemented in the fluid flow simulation and are a user preference. For both the x and y direction, the partial derivatives of the fluid density rho and the fluid velocity magnitude |v| are computed and combined into a vector (∇x, ∇y). The vector is visualized in the data field using the glyphs described in Section 5; which gradient (rho or |v|) is visualized can be selected in the graphical user interface. The gradient of the velocity points, at each point, in the direction of the fastest increase of the velocity magnitude; the relation between the velocity and its gradient is given by the superposition of all local velocities.


6.2 Noise reduction methods

A different approach to computing the first-order derivatives of discrete data is to use the so-called Sobel operator [IS73] or the Prewitt operator [PM66]. Both techniques were originally meant for stabilizing edge detection, but have the pleasant side effect of reducing the presence of noise in the data. Both techniques are implemented to see if the noise reduction has a positive effect on the interpretation of the gradient data. These first-order derivatives are computed by Formula 8 and Formula 10. Both techniques turned out to have little effect on our data set, which is to be expected, because there is little noise in it.

Sobel operator:

∂I/∂x (i, j) = I(i+1,j−1) + 2 I(i+1,j) + I(i+1,j+1) − I(i−1,j−1) − 2 I(i−1,j) − I(i−1,j+1)    (8)

∂I/∂y (i, j) = I(i+1,j+1) + 2 I(i,j+1) + I(i−1,j+1) − I(i+1,j−1) − 2 I(i,j−1) − I(i−1,j−1)    (9)

Prewitt operator:

∂I/∂x (i, j) = I(i+1,j−1) + I(i+1,j) + I(i+1,j+1) − I(i−1,j−1) − I(i−1,j) − I(i−1,j+1)    (10)

∂I/∂y (i, j) = I(i+1,j+1) + I(i,j+1) + I(i−1,j+1) − I(i+1,j−1) − I(i,j−1) − I(i−1,j−1)    (11)


Figure 17: (a) Three-dimensional arrow glyphs; (b) gradient visualization using the Sobel operator, applied to the same data set.

6.3 Gradient application

As suggested in [Tel], an example application of the gradient is the computation of the normal vector of a surface. A surface normal is given by the formula:

n = (−∂f/∂x, −∂f/∂y, 1)    (12)


The gradient vector has the same direction as the normal vector, though not the same length. The normal can be computed from the gradient by normalizing the vector:

n = ( −(∂f/∂x) / √((∂f/∂x)² + (∂f/∂y)² + 1),  −(∂f/∂y) / √((∂f/∂x)² + (∂f/∂y)² + 1),  1 )    (13)

Note that the z component of the normal to the surface is always set to 1, as stated in [Tel]. The z component could also be computed by taking the partial derivative in the z direction, which is possible because slices are introduced in the next section. This is not done because at the time of this implementation, slices did not yet exist.
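Formula 13 transcribes directly into code (the z component kept at 1, as in the text; names illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Surface normal from the gradient (Formula 13): the x and y components
// are divided by sqrt((df/dx)^2 + (df/dy)^2 + 1), the z component stays 1.
Vec3 surfaceNormal(float dfdx, float dfdy) {
    float s = std::sqrt(dfdx * dfdx + dfdy * dfdy + 1.0f);
    return { -dfdx / s, -dfdy / s, 1.0f };
}
```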


7 Streamlines

In the fifth stage of the construction of the flow visualization program, stream objects were implemented; in particular a special case of stream objects, streamlines. Streamlines are lines visualized in the flow field that follow the trajectory of one particle: they visualize the path the particle would follow if it were released at the start of the streamline. The starting points of the streamlines are called seedpoints. “The streamline is the curved path over a given time interval T of an imaginary particle passing through a given start location or seed, in a stationary vector field over some domain D” [Tel].

7.1 Computation of the streamline

The streamlines are constructed by computing the integral of the field v, because the trajectory through the fluid velocity field v must be visualized over a given time interval. This integration can be done by either the Euler integration method or the Runge-Kutta integration method.

7.1.1 Euler integration method

The Euler integration is given by the following formula:

∫ from t=0 to τ of v(p) dt = Σ from i=0 to τ/∆t of v(p_i) ∆t    (14)

where p_i = p_(i−1) + v_(i−1) ∆t

7.1.2 Runge Kutta second and fourth order integration method

Another method for computing the streamline, instead of the Euler integration method, is the Runge-Kutta method. The Runge-Kutta method approximates the vector field v between two sample points along a stream object with the average value (v(p_i) + v(p_(i+1)))/2. The great advantage of the Runge-Kutta method over the Euler method is the accuracy of the streamlines for the same timestep ∆t. This means that to maintain the same accuracy, the timestep can be increased, which makes computing the streamlines much faster; alternatively the timestep can be kept the same as with the Euler method, which gives much more accurate results. So there is (as is often the case) a trade-off between accuracy and time. The Runge-Kutta method relies on an approximation of the Taylor polynomial. Two methods of computing the integral by approximating the Taylor polynomial are implemented: the second-order Taylor polynomial (Runge-Kutta 2nd order) and the fourth-order Taylor polynomial (Runge-Kutta 4th order). These again have the time versus accuracy trade-off: second-order Runge-Kutta is slightly faster but less accurate than the fourth-order method.


Runge-Kutta second order [PTVF02]:

k1 = h f(x_n, y_n)    (15)
k2 = h f(x_n + (1/2)h, y_n + (1/2)k1)    (16)
y_(n+1) = y_n + k2 + O(h³)    (17)

Runge-Kutta fourth order [PTVF02]:

k1 = h f(x_n, y_n)    (18)
k2 = h f(x_n + (1/2)h, y_n + (1/2)k1)    (19)
k3 = h f(x_n + (1/2)h, y_n + (1/2)k2)    (20)
k4 = h f(x_n + h, y_n + k3)    (21)
y_(n+1) = y_n + (1/6)k1 + (1/3)k2 + (1/3)k3 + (1/6)k4 + O(h⁵)    (22)

7.1.3 Integration direction

For the Euler integration method (see Formula 14) and the Runge-Kutta methods (see Formulas 17 and 22), there are a number of different ways to compute the integral. There are three ways of computing it, of which the last gives the best results:

• Implicit integration (backward integration)

• Explicit integration (forward integration)

• Bilinear integration (central integration)

Implicit integration takes the current point and the point ∆t before it into account to calculate the integral. Explicit integration takes the current point and the point ∆t after it into account. Bilinear integration takes both the point ∆t before and the point ∆t after the current point into account to calculate the integral at the current point. The choice of integration method is left to the user; initially it is set to central integration because this method gives the best results in practice. Sometimes streamlines have

7.1.4 A value for ∆t

Now there is the problem of finding the best value for ∆t. This value depends on the dataset cell sizes, vector field magnitude, vector field variation, desired streamline length, and


desired computation speed. In the ideal case, ∆t should be adapted to the situation and vary over time for the same streamline, because the values differ locally as the streamline proceeds. There is even a fourth-order Runge-Kutta method with a self-adjusting step size [WJE00]. For uniform spatial sampling, ∆t is adjusted depending on the spatial integration step and the vector magnitude:

if (m_streamLineUniformSpatialSampling)
{
    dt = m_streamLineSpatialIntegrationStep / mag;
}

There are however some guidelines on how to choose a right value for ∆t. According to Telea [Tel] it is a good idea to set the spatial integration step to around 1/3 of the cell size:

“In practice, spatial integration steps of around one-third of a cell size should yield good results for most vector fields.” [Tel]

To give the user some freedom, the integration timesteps for all three methods (Euler, Runge-Kutta second order, Runge-Kutta fourth order) are user inputs. Initially the spatial integration step is set to 1/3 × cell size, as advised by Telea. Since our cell size is 20, the spatial integration step is initialized to 20/3 = 6.667. According to [Tel], ∆t times the vector length should equal the spatial integration step. Therefore ∆t is calculated by the following formula:

∆t = (1/3 × cellsize) / length(vector)    (23)

The spatial integration step is important for the density-based streamline method. Setting this value too small could lead to overlapping streamlines because cells are ‘stepped over’.

7.1.5 Integration stop criterion

The length of the streamline depends on the actual vector field values, so a maximal time criterion alone is usually not very useful; a better approach is to integrate until a certain streamline length is reached. The integration is stopped not only when the maximum time or length is reached, but also when the actual velocity value is (close to) zero. As suggested by [Tel], this constant is set to 0.0001. A minimum length for a streamline can also be useful: very short streamlines are filtered out, and by combining the minimum length with the maximal length, certain ranges of streamlines can be visualized. The maximal integration time, maximal streamline length, minimal streamline length and the width of the streamline can all be adapted by the user. Initially these values are set to 1500, 1000, 120 and 3 respectively, because these turned out to give the best results for this fluid flow simulation.
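The combined stop test can be sketched as below. The epsilon of 0.0001 follows the text and the defaults are the initial values mentioned above; the function name is illustrative. The minimum-length criterion is not part of the stop test: it is applied afterwards, by discarding finished streamlines that are too short.

```cpp
// True when streamline integration should stop: maximal time reached,
// maximal length reached, or the local velocity is (close to) zero.
bool shouldStop(float time, float length, float velocityMagnitude,
                float maxTime = 1500.0f, float maxLength = 1000.0f,
                float eps = 0.0001f) {
    return time >= maxTime || length >= maxLength || velocityMagnitude < eps;
}
```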

7.2 Seedpoints feeding strategy

Not only the integration method, the choice of ∆t and the stop criterion have a great influence on the visualized streamlines; the location and number of seedpoints is equally important. Seedpoints can be placed regularly in the domain, but this leads to cluttering. The advantage of regularly placed seedpoints is that you know for sure the whole domain is covered; the downside is that the generated picture may no longer be useful because of the


cluttering. A solution to the cluttering problem is to trace a streamline until it gets arbitrarily close to itself or to an already traced streamline. For the fluid flow simulation three different methods of placing the seedpoints are implemented, all of which take this cluttering solution into account.

• Random — Seedpoints are placed randomly in the vector field. The number of placed seedpoints is initially set to 10 but can be adapted by the user. The streamlines generated by the seedpoints are interactive: if the flow simulation is running, the streamlines are updated in real time. Figure 18 shows the matter and the corresponding streamlines generated with 200 seedpoints.


Figure 18: (a) Matter fluid flow visualization with (b) the corresponding streamlines generated with 200 randomly placed seedpoints

• User defined — By choosing this option in the graphical user interface, the user can place the seedpoints wherever he/she wants, by right-clicking on the desired place in the flow simulation. These user-defined streamlines are also updated in real time.

• Density based — For evenly spaced streamlines of arbitrary density, the method and algorithm of [Lef97] is implemented. This method generates evenly distributed streamlines. Because the computation takes significantly more time than the Random and User defined seedpoint placement, this method is not updated in real time and the flow simulation has to be paused. It is slower than the random method, but gives far better results. An example showing the difference between placing the seedpoints on a regular grid and the density-based method of Lefer et al. [Lef97] is shown in Figure 19.
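The core of the evenly spaced placement is a distance test: a candidate point is only kept if it stays at least a separation distance dsep away from all points of already traced streamlines. A brute-force sketch is shown below; [Lef97] accelerates this with a grid of buckets. All names are illustrative.

```cpp
#include <vector>
#include <cmath>

struct Pt { float x, y; };

// True when candidate c keeps a distance of at least dsep to every point
// of the already traced streamlines.
bool farEnough(Pt c, const std::vector<Pt>& placed, float dsep) {
    for (const Pt& p : placed) {
        float dx = c.x - p.x, dy = c.y - p.y;
        if (std::sqrt(dx * dx + dy * dy) < dsep)
            return false;  // too close to an existing streamline
    }
    return true;
}
```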

7.2.1 Interpolation

Because seedpoints can be placed anywhere within the domain, they do not always coincide with a grid point. The user is not limited in placing the seedpoints


Figure 19: Long streamlines with seed points placed on a regular grid (left); the same flow field computed using the Lefer streamline placement method (right) [Lef97]

because of this. If the seedpoint does not lie on a grid point, the value at the selected seedpoint is computed by either nearest neighbor interpolation or bilinear interpolation (as described in Section 5.3): the value is computed by bilinear interpolation of the four surrounding cell points. An example of bilinear interpolation is shown in Figure 14.

7.3 Tapering effect

Because the streamlines are traced until they come close to themselves or to another already traced streamline, disparities in density appear in the resulting image, which leads to visual artifacts. Turk [TB96] suggested tapering the ends of the streamlines by decreasing the thickness of the lines as they get closer to another one. The thickness of the streamlines, to create the tapering effect, can be computed by the thickness coefficient as suggested in [Lef97]:

thicknessCoef = 1.0 if d ≥ dsep
thicknessCoef = (d − dtest) / (dsep − dtest) if d < dsep

with thicknessCoef ∈ [0, 1]    (24)

Where d is the distance to the closest streamline. This tapering effect, and the visual artifacts that appear without it, are shown in Figure 20. The tapering effect is implemented differently because of time constraints, but generates similar results: the tapering is implemented by increasing and decreasing the line width of a streamline at its beginning and end. The length over which the line width increases and decreases is initially 10 timesteps and can be adapted by the user. If the actual number of timesteps is smaller than the user-specified number, half the timesteps of the streamline are used for the tapering effect. Initially the taper effect is enabled, to prevent the appearance of visual artifacts. Figure 21 shows the fluid flow visualization without and with the tapering effect, generated with the density-based method.
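Formula 24 translates directly into code; the explicit clamp to [0, 1] makes the behaviour for d below dtest explicit (a sketch, not the application's code):

```cpp
#include <algorithm>

// Thickness coefficient of Formula 24: full width for d >= dsep, linear
// taper below it, clamped to [0, 1].
float thicknessCoef(float d, float dsep, float dtest) {
    if (d >= dsep) return 1.0f;
    float t = (d - dtest) / (dsep - dtest);
    return std::max(0.0f, std::min(1.0f, t));
}
```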


Figure 20: Streamlines computed without and with the tapering effect [Lef97]


Figure 21: Density based streamlines generated without (a) and with (b) the tapering effect


8 Slices

The goal of the sixth stage of the construction of the fluid flow simulation was to implement and understand the time-dependent slices technique. Slices are used to visualize a higher dimension of the fluid flow simulation; in this specific application, they visualize the time dimension of the fluid flow. This is achieved by stacking several two-dimensional planar grids on top of each other along the z-axis, so that the simulation becomes a cube instead of a plane (see Figure 22). With slices, volumetric data sets can be visualized; here they are used to show how the different data values change over time.

Figure 22: A hundred separate slices stacked on top of each other along the z-axis.

8.1 Ringbuffer

The slices technique is implemented with a ring buffer structure in which the two-dimensional planar grids are stored. Each place in the buffer represents a different moment in time: every consecutive buffer place stores the data values of the planar grid one moment later in time. The planar grids in this buffer are drawn on top of each other, equally spaced along the z-axis. The two-dimensional planar grids are stored in the ring buffer as slice frame structs. Each slice frame struct has its own data and a display list, which is rendered once (precompiled) and then called to speed up the rendering process. Each slice frame structure has a pointer to the next slice frame to be rendered.

// Slice frame structure
typedef struct SliceFrame {
    fftw_real *m_pVx, *m_pVy;   // (vx,vy) = velocity field at the current moment
    fftw_real *m_pFx, *m_pFy;   // (fx,fy) = user-controlled simulation forces, steered with the mouse
    fftw_real *m_pRho;          // smoke density at the current moment (rho)
    fftw_real *m_pVx0, *m_pVy0; // (vx0,vy0) = velocity field at the previous moment
    fftw_real *m_pRho0;         // smoke density at the previous moment (rho0)
    GLuint m_displayList;       // holds the display list for this slice to speed up rendering
} *SliceFramePtr;


The number of planes can be set by the user through the graphical user interface. This value is initially set to 100 because this gave the best results: with 100 timesteps the effect of the change of data values over time can be seen. More time frames increase this effect, but also need more processing power; with 100 time frames the fluid flow simulation still runs smoothly, so this is chosen as the initial value. The slices technique is implemented for every technique implemented in the previous stages of the construction. A box is drawn around the slice frames to show the area that contains all of them.

8.2 Transparency

In the previous steps the two-dimensional grid was opaque. The consequence is that when the slices are stacked on top of each other, only the first slice is visible; the rest are hidden underneath it. Therefore several different drawing methods are implemented to solve this problem: separate, blended, first and last, and composed. Below these drawing methods are explained in detail. The methods can be selected as desired; a previously selected method still applies to the earlier time frames until they disappear over time. The slice separation distance (the distance between the slice frames) can be adjusted by the user and is initially set to the width of a cell.

8.2.1 Separate

Slices are drawn along the z-axis without making them transparent. The consequence is that only the first slice is visible; by rotating the camera around the cube the outer sides of the other slices can also be seen. Slices are drawn on top of each other such that the first frame is the current time frame and each following frame is one time frame later.

8.2.2 Blended

The slices are drawn along the z-axis with blending, meaning that the color of the currently drawn pixel is combined with the color values in the frame buffer. If we denote the color of the currently drawn pixel by dst and the corresponding color of the pixel in the frame buffer by src, then the final color of the pixel is:

dst′ = sf × src + df × dst    (25)

The destination and source weight factors sf and df can be varied in the range [0..1]; these weight factors are called the blending factors. For our fluid flow simulation several different blending factors are implemented and can be selected by the user. The alpha value for slices depends on the selected drawing method: if matter is drawn, the alpha value depends on the rho data value of the fluid flow; if glyphs are drawn, the alpha depends on the velocity of the fluid flow (see the listing below). These functions are fast because they are linear, and from experience they generate reasonable results.

void CGLFluidsWidget::setGlyphAlpha(POINT2D pos)
{
    // Alpha values depend on the data, but the user can override
    // this value with a global user-defined value
    if (m_useGlobalAlphaForSlices)
    {
        // Override the alpha value
        m_currentColor[3] = m_sliceGlobalAlpha;
    }
    else
    {
        // Derive alpha value
        int mPrevVecField = m_vectorField;
        // Make sure we are sampling the velocity vector field
        m_vectorField = VECTOR_FIELD_VELOCITY;
        m_currentColor[3] = 0.5 + (VELOCITY_DATA_SCALE *
            VECTOR3D_Length(&(getInterpolatedDirection(pos))));
        m_vectorField = mPrevVecField;
    }
    glColor4fv(m_currentColor);
}

void CGLFluidsWidget::setMatterAlpha(int idx)
{
    // Alpha values depend on the data, but the user can override
    // this value with a global user-defined value
    if (m_useGlobalAlphaForSlices)
    {
        // Override the alpha value
        m_currentColor[3] = m_sliceGlobalAlpha;
    }
    else
    {
        // Derive alpha value
        m_currentColor[3] = 0.0 + (RHO_DATA_SCALE * m_pCurrentSliceFrame->m_pRho[idx]);
    }
    glColor4fv(m_currentColor);
}

The implemented blending factors are (shown in Figure 23):

B(sf, df) = (SRC_ALPHA, DST_COLOR)
B(sf, df) = (SRC_COLOR, DST_COLOR)
B(sf, df) = (SRC_ALPHA, ONE_MINUS_SRC_COLOR)
B(sf, df) = (SRC_ALPHA, DST_ALPHA)

Besides these blending factors, a global alpha value for each slice is implemented. The global alpha value can be set by the user and is initially set to 0.1. Setting the global alpha value means that every slice gets a transparency of (1 − global alpha value). The great advantage of the global alpha value is that it solves the problem of not being able to look through the slices. Because of the blending and the global alpha value, slices are drawn in reverse order: last slice first, from back to front. An example with the global alpha feature enabled and disabled is shown in Figure 24.
Note that when a white background is used for blending, the colors quickly saturate to 1, which makes the visualization completely white. Therefore a white background should be avoided when blending is enabled. The background color can be adjusted by the user and is initialized to black.


(a) (b)

(c) (d)

Figure 23: Different blending factors. (a) (SRC_ALPHA, DST_COLOR), (b) (SRC_COLOR, DST_COLOR), (c) (SRC_ALPHA, ONE_MINUS_SRC_COLOR), (d) (SRC_ALPHA, DST_ALPHA).

(a) (b)

Figure 24: (a) Global alpha blending of 0.05 enabled. (b) No global alpha blending.


8.2.3 First and Last

Only the first and last slice of the fluid flow simulation are drawn (see Figure 25). This will become very useful later on, when visualizing stream surfaces (see Section 9).

Figure 25: Only the first and last slice of the fluid flow simulation are visualized.

8.2.4 Composed

For the composed drawing method several different projection methods are implemented: the maximum intensity projection function, minimum intensity projection function, average intensity function, distance to value function, and isosurface function. These projection methods are all mentioned and explained in [Tel] (Figure 26 shows all the implemented projection functions applied to the same data set). The projection was first implemented for every pixel, but this turned out to be too slow. Because the projection functions are a valuable addition to our visualization program, we decided to keep them, but to apply them to every grid point instead, to increase rendering speed. As a result, the projection functions look somewhat more pixelated than the rest of the fluid flow simulation.

• Maximum intensity projection function
For every grid point p, the maximum intensity projection function shoots a ray r through all the slices and finds the maximum value along this ray. This value is then projected onto the first slice and visualized. Using the parametrized ray notation, this projection function is expressed as [Tel]:

I(p) = f( max_{t ∈ [0, T]} s(t) )    (26)


• Minimum intensity projection function
The minimum intensity projection function is much like the maximum intensity projection function; the only difference is that it projects the minimum value found along the ray:

I(p) = f( min_{t ∈ [0, T]} s(t) )    (27)

• Average intensity function
For every grid point p, the average intensity function shoots a ray through all the slices and averages all scalar data values found along this ray. This value is then projected onto the first slice and visualized:

I(p) = f( (1/T) ∫_{t=0}^{T} s(t) dt )    (28)

• Distance to value function
This projection function differs from the projections above in the sense that it does not project a data value, but a distance. For every grid point a ray is shot to the first point where the scalar value is at least a specified value σ. This distance is then projected onto the first slice and visualized:

I(p) = f( min_{t ∈ [0, T], s(t) ≥ σ} t )    (29)

• Isosurface function
The isosurface projection function is used to construct an isosurface. For every grid point a ray is shot; if a value of σ is found along this ray, the projected grid point gets the corresponding color. If no value of σ is found, a background color I0 is assigned:

I(p) = f(σ)  if ∃t ∈ [0, T] : s(t) = σ
I(p) = I0    otherwise    (30)
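The five projection functions reduce the samples s(t) along one ray to a single value. The following sketch shows them over a discretely sampled ray (hypothetical helpers; f is taken as the identity, and the isosurface test is approximated by a sign change between consecutive samples rather than exact equality):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Samples of s(t) along one ray through the slice stack,
// ordered from the first (front) slice to the last.
using Ray = std::vector<float>;

float maximumIntensity(const Ray& s) { return *std::max_element(s.begin(), s.end()); }
float minimumIntensity(const Ray& s) { return *std::min_element(s.begin(), s.end()); }

float averageIntensity(const Ray& s)
{
    return std::accumulate(s.begin(), s.end(), 0.0f) / s.size();
}

// Index (distance) of the first sample reaching the threshold sigma;
// returns -1 when the ray never reaches it.
int distanceToValue(const Ray& s, float sigma)
{
    for (int t = 0; t < (int)s.size(); ++t)
        if (s[t] >= sigma) return t;
    return -1;
}

// Isosurface: the iso-value sigma if the samples cross it, else background I0.
float isosurface(const Ray& s, float sigma, float I0)
{
    for (size_t t = 0; t + 1 < s.size(); ++t)
        if ((s[t] - sigma) * (s[t + 1] - sigma) <= 0.0f) return sigma;
    return I0;
}
```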

For the isosurface function a Phong lighting model is implemented [Tel]:

I(p, v, L) = I_l ( c_amb + c_diff max(−L · n, 0) + c_spec max(−r · v, 0)^α )    (31)

For calculating the normal vector, the gradient (as described in Section 6) is used.
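A scalar sketch of Equation (31) (hypothetical helper; n, L, v and r are assumed to be normalized vectors, with r the reflection direction):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

float dot(const Vec3& a, const Vec3& b)
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Phong lighting as in Equation (31):
// I = Il * (camb + cdiff * max(-L.n, 0) + cspec * max(-r.v, 0)^alpha)
float phong(const Vec3& n, const Vec3& L, const Vec3& v, const Vec3& r,
            float Il, float camb, float cdiff, float cspec, float alpha)
{
    float diff = std::max(-dot(L, n), 0.0f);
    float spec = std::pow(std::max(-dot(r, v), 0.0f), alpha);
    return Il * (camb + cdiff * diff + cspec * spec);
}
```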


(a) (b)

(c) (d)

(e)

Figure 26: The different composed projection functions. (a) Maximum intensity projection function, (b) minimum intensity projection function, (c) average intensity function, (d) distance to value function, and (e) isosurface function. All applied to the same data set.


9 Stream surfaces

The aim of the seventh step of the construction of the fluid flow simulation is the implementation and understanding of stream surfaces. Stream tubes and stream ribbons are special cases of stream surfaces. Stream surfaces can be seen as generalised streamlines (from Section 7): if these streamlines are generated over time and visualized through the slices (from Section 8), and a surface is placed over these streamlines, a stream surface is created. More formally:

“Given a seed curve Γ, a stream surface SΓ is a surface that contains Γ and is everywhere tangent to the vector field.” [Tel]

The advantage of stream surfaces over streamlines and stream ribbons is that stream surfaces are easier to follow visually. An example stream surface created with our visualization tool is shown in Figure 27.

Figure 27: Stream surface with supporting streamlines.

9.1 Stream surface seed curves

A stream surface seed curve is similar to the seed point of a streamline. The seed curve is swept through the three-dimensional volumetric data set (the slices) to create the stream surface. Several different shapes can be chosen for the seed curve. A special case is the point, which creates a single streamline and is thus a seed point; but a single seed point cannot create a surface and is therefore not useful in this context. For the fluid flow simulation three different seed curves are implemented: a line, a quad, and a circle. A circle is another special case of seed curve, namely one that creates a stream tube. The different seed curves can be selected by the user. Initially the seed curve is a line, because unlike a quad or circle this is not a special case of stream surface. Figure 28 shows the different seed curves applied to the same data set at the same seed point. For all the seed curves several different parameters can be set, for example the seed curve radius (the line length in case of a line), the number of sample points on the line, and the number of sample points on the circle. From every sample point on the seed curve a streamline is traced. Too few sample points on the seed curve will create a non-smooth surface, but too many will significantly slow down the process, and the result may become too dense.
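Sampling a circular seed curve can be sketched as follows (hypothetical helper; the line and quad seed curves are sampled analogously):

```cpp
#include <cmath>
#include <vector>

struct Point3 { float x, y, z; };

// Places n evenly spaced sample points on a circle of the given radius
// around a center; each sample later seeds one streamline.
std::vector<Point3> circleSeedCurve(Point3 center, float radius, int n)
{
    const float twoPi = 6.2831853f;
    std::vector<Point3> samples;
    samples.reserve(n);
    for (int i = 0; i < n; ++i) {
        float a = twoPi * i / n;
        samples.push_back({ center.x + radius * std::cos(a),
                            center.y + radius * std::sin(a),
                            center.z });
    }
    return samples;
}
```

Increasing n trades surface smoothness against tracing cost, exactly the trade-off described above.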

(a) (b)

(c)

Figure 28: The different implemented seed curves: (a) line, (b) circle, (c) quad. Applied to the same data set at the same seed point.

The placement of a seed curve is another important aspect. Just as with the seed points of streamlines, wrong placement of the seed curve generates an uninteresting stream surface. Because the placement is important, the user can freely move within the three-dimensional volumetric data set and place the seed curves where desired.

9.2 Integration method

The computation of the streamlines supporting the stream surface is based on the streamline integration methods of Section 7. The user can choose between Euler integration and second- or fourth-order Runge-Kutta. Again, as with the streamlines, the different parameters can be set: the time step, the spatial integration step, the maximal time, and the minimal and maximal length of the streamlines. The direction of the integration (forward, backward, or central) is again user-settable.
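A single advection step of the Euler and second-order Runge-Kutta schemes can be sketched as (hypothetical helpers; the field argument is any callable returning the interpolated velocity at a position):

```cpp
#include <functional>

struct Vec2 { float x, y; };

using Field = std::function<Vec2(Vec2)>;

// One Euler step: p' = p + dt * v(p)
Vec2 eulerStep(const Field& v, Vec2 p, float dt)
{
    Vec2 k = v(p);
    return { p.x + dt * k.x, p.y + dt * k.y };
}

// One second-order Runge-Kutta (midpoint) step:
// p' = p + dt * v(p + (dt/2) * v(p))
Vec2 rk2Step(const Field& v, Vec2 p, float dt)
{
    Vec2 k1 = v(p);
    Vec2 mid = { p.x + 0.5f * dt * k1.x, p.y + 0.5f * dt * k1.y };
    Vec2 k2 = v(mid);
    return { p.x + dt * k2.x, p.y + dt * k2.y };
}
```

On a constant field both schemes agree; on a curved field the midpoint evaluation of RK2 follows the flow more accurately per step.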

9.3 Surface creation, splitting and joining

The creation of the surface is realised by connecting consecutive streamlines. While tracing the streamlines, every time step a point is placed on each streamline. These points are then connected through a quad mesh between two consecutive streamlines. It may happen that the distance between two streamlines becomes very small or very large. When the distance becomes very small, it is wise to stop tracing one of the two streamlines: this speeds up the process, because such small polygons would not be drawn anyway. This has no influence on the other streamlines; they just keep tracing. By stopping one of the streamlines the performance increases and the computational cost is reduced. If the distance between two consecutive streamlines becomes very large, there are two options: split the surface (see Figure 29a) into two different surfaces from a certain point, or keep the surface as a whole. Because split surfaces are not always desired, the splitting of stream surfaces is a user parameter. When the joining option of stream surfaces is also enabled, it is possible to create holes in the surface (see Figure 29b). These occur if the distance between two consecutive streamlines first becomes very large, so that the surface is split, and later becomes very small again, so that the lines join. Note that streamlines cannot merge with other streamlines: streamlines only join streamlines from which they were first split.
The drawing of the supporting streamlines of a stream surface can be enabled or disabled. This option is added because if many streamlines are traced with a small distance between them, the picture becomes unclear; by not drawing the supporting streamlines the surface visualization becomes clear again.
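The distance test behind stopping and splitting can be sketched as a three-way decision (hypothetical thresholds dMin and dMax, not the application's actual parameter names):

```cpp
#include <cmath>

enum class SurfaceAction { Keep, StopOneLine, Split };

// Decides, from the offset between two neighboring streamline points,
// whether to keep meshing, stop tracing one of the two lines (distance
// too small), or split the surface (distance too large).
SurfaceAction decide(float dx, float dy, float dz, float dMin, float dMax)
{
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (d < dMin) return SurfaceAction::StopOneLine;
    if (d > dMax) return SurfaceAction::Split;
    return SurfaceAction::Keep;
}
```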

(a) (b)

Figure 29: (a) Splitting of a stream surface. (b) Splitting and joining of a stream surface, creating holes.


10 Image-Based Flow Visualization

The final step of the construction of the fluid flow simulation is the implementation of image-based flow visualization. The image-based flow visualization method was introduced by van Wijk and Telea [vW02, TvW99]. Image-based flow visualization is a method for texture-based vector visualization. Unlike line integral convolution (LIC), this method creates animated flow textures in real time, and adding force with the mouse can again be done in real time. Image-based flow visualization is even somewhat simpler to implement than the LIC method. Our implementation is mainly based on the method described in [Tel]. Figure 30 shows an image-based flow visualization generated by our visualization program.

(a) (b)

Figure 30: (a) Matter visualization. (b) Corresponding image-based flow visualization.

10.1 Method

Figure 31: Pipeline of image-based flow visualization [vW02]

Image-based flow visualization differs from other visualization techniques, such as vector glyphs and streamlines, in that it uses textures to visualize the vector field. The main idea is that the vector field grid is warped, after which a noise texture is injected onto the warped grid. Examples of warped vector field grids applied to our simulation are shown in Figure 32. The injected noise texture is then blended with a certain factor α with the previous frame buffer texture. In the next time frame the process described above is repeated, and an animated image-based flow is visualized. The image-based flow visualization pipeline is shown in Figure 31. The value for α is initially set to 0.2 as advised by [Tel]: “Good values in practice are α ∈ [0, 0.2]”. In practice it turned out that a value of α = 0.05 gave the best results for our fluid flow simulation: for this α value the flow is smooth and therefore best visualized. Integration is done by the Euler integration method (see Section 7).

(a) (b)

Figure 32: Warped vector field meshes. Note that in Figure (b) the warp pulls the mesh out of its boundaries; the force is also so great that points overlap other points. This could be avoided, but then the flow effect would also decrease, so it is left as is.

10.2 Injected noise texture

The visualized image depends heavily on the injected noise texture, so this noise texture should be chosen wisely. To achieve a high spatial contrast, neighboring pixels should have different colors. This is done by generating a random texture of black and white dots. The question is then how big to make the black and white dots. This size is important because it should be correlated with the velocity magnitude. The spot size can be set by the user and is initialized to 2, as suggested in [Tel]: “In practice, using a dot size d ∈ [2, 10] pixels gives good results”. The noise textures N′(x, t) are all time-dependent, because they are generated from the original stationary noise texture N(x) by:

N ′(x, t) = f((t+N(x)) mod 1) (32)

where f : R+ → [0, 1] is a periodic function with period 1. The number of noise sample textures is initially set to 32, because that is typically enough to capture the periodic behaviour of the function f. Following [Tel], f is set to a simple step function:

44

10.3 Parameters Visualization

// Periodic function
float CGLFluidsWidget::ibfv_f(int t)
{
    return (t > 127) ? 1 : 0;
}

This step function is called the sawtooth function in [vW02]. Another suggestion is to use a sinusoidal function; we tested this, but it gave worse results than the sawtooth function.
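The time-dependent noise of Equation (32) can be sketched per texel as follows (hypothetical helpers, with phases normalized to [0, 1) instead of the 8-bit range used above):

```cpp
#include <cmath>
#include <vector>

// Step-function profile f with period 1 (the "sawtooth" profile of
// [vW02], here on [0, 1) instead of [0, 255]).
float stepF(float t) { return (t > 0.5f) ? 1.0f : 0.0f; }

// Value of the time-dependent noise N'(x, t) = f((t + N(x)) mod 1)
// for one texel with stationary random phase n = N(x).
float noiseAt(float n, float t)
{
    float phase = std::fmod(t + n, 1.0f);
    return stepF(phase);
}

// All animation frames for one texel, sampled at k time steps;
// each texel blinks on and off with its own phase offset n.
std::vector<float> noiseFrames(float n, int k)
{
    std::vector<float> frames(k);
    for (int i = 0; i < k; ++i)
        frames[i] = noiseAt(n, (float)i / k);
    return frames;
}
```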

10.3 Parameters

The work texture and frame buffer are both set to 512 × 512 pixels, but can be adjusted by the user. The noise textures (also adjustable) are initially set to 256 × 256 pixels. The size of the noise texture is made smaller to save memory and increase performance; to fill the entire area of the frame buffer, OpenGL stretches and replicates the noise texture. All parameters have a huge impact on the generated visualizations. Pictures generated with one variable changed, and the rest of the parameters kept constant to show the effect, are shown in Figures 35, 36 and 37 on the next page.

10.4 Notices

Choosing textures bigger than 512 × 512 gave problems: when setting the texture to a size above 512 × 512, random video memory was shown on the screen (Figure 34), for unknown reasons. Choosing a different simulation time step also created unexplainable artifacts and slow rendering. When creating textures with OpenGL, the filtering method has to be set to either linear or nearest. Nearest takes the value of the closest texel, and linear interpolates. With the linear interpolation method the noise texture is smoother than with the nearest method. For the noise and working texture it is better to take the nearest interpolation method, because the noise should not be smooth; therefore the nearest interpolation method is chosen for generating the noise and working textures. Figure 33 shows the two filtering methods applied to the generated noise and working textures. Because of time constraints the insertion of dye [vW02] is not implemented; this would add great value to the image-based flow visualization.

45

10.4 Notices Visualization

(a) (b)

Figure 33: (a) Nearest filtering method applied to noise texture generation. (b) Linear filtering method applied to noise texture generation. Notice that (a) gives better results than (b).

Figure 34: Random video memory

Figure 35: The alpha factor has a huge impact on the generated visualizations. This figure shows alpha factors of 0.6, 0.4, 0.2 and 0.05 respectively.


Figure 36: The working and frame buffer texture sizes also have a huge impact on the generated visualizations. Values of 10 × 10, 50 × 50, 100 × 100 and 512 × 512 are used to generate these pictures (the noise texture size is kept constant at 512 × 512). Notice how in the rightmost image artifacts occur because the force is relatively big.

Figure 37: Different values of the noise texture size: 10 × 10, 50 × 50, 128 × 128 and 512 × 512 (the working texture size is kept constant at 512 × 512).


References

[BI07] David Borland and Russell M. Taylor II. Rainbow color map (still) consideredharmful. IEEE Comput. Graph. Appl., 27(2):14–17, 2007.

[Hea96] Christopher G. Healey. Choosing effective colours for data visualization. Technicalreport, Vancouver, BC, Canada, Canada, 1996.

[IS73] I. Sobel and G. Feldman. A 3 × 3 isotropic gradient operator for image processing. Pattern Classification and Scene Analysis, pages 271–273, 1973.

[Lef97] Wilfrid Lefer. Creating evenly-spaced streamlines of arbitrary density. pages 43–56, 1997.

[PM66] J.M.S. Prewitt and M.L. Mendelsohn. The analysis of cell images. Annals NYAcad. Sci, 128:1035–1053, 1966.

[PTVF02] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery.Numerical recipes in c++: The art of scientific computing. February 2002.

[TB96] Greg Turk and David Banks. Image-guided streamline placement. pages 453–460,1996.

[Tel] Alexandru C. Telea. Data visualization.

[TvW99] Alexandru Telea and Jarke J. van Wijk. Simplified representation of vector fields.pages 35–42, 1999.

[vW02] Jarke J. van Wijk. Image based flow visualization. pages 745–754, 2002.

[Wik08] Wikipedia. Bilinear interpolation. http://en.wikipedia.org/wiki/BilinearInterpolation,2008.

[WJE00] Rudiger Westermann, Christopher Johnson, and Thomas Ertl. A level-set methodfor flow visualization. pages 147–154, 2000.
