

Rich Visualizations From Discrete Primitives

By

BRETT EUGENE WILSON B.S. (University of California, Berkeley) 2000

M.S. (University of California, Davis) 2003

DISSERTATION

Submitted in partial satisfaction of the requirements for the degree of

DOCTOR OF PHILOSOPHY

in

Computer Science

in the

OFFICE OF GRADUATE STUDIES

of the

UNIVERSITY OF CALIFORNIA

DAVIS

Approved:

Committee in Charge

2005


Abstract

Visualization is becoming an increasingly important part of science and the arts. Over the past twenty years, computer graphics algorithms and hardware have matured to the point where any imaginable image can be created. Now, the challenge is to create clear, visually rich images which faithfully represent the data while clearly illustrating the details of interest.

Non-photorealistic rendering is one important method for creating rich images because viewers are more accustomed to abstraction, allowing less important and distracting details to be removed. The pen-and-ink non-photorealistic rendering style, consisting of discrete black primitives, is one such technique, but is particularly prone to over-detailing in areas with complex geometry. A two-dimensional rendering and image-processing pass is added after the initial three-dimensional processing of a scene. It incorporates segmentation and other image-processing methods to identify and abstract away areas that might be confusingly complex. Stippling is a related style often used for abstract images, and the dots that make up the image must be carefully placed to avoid undesirable patterns. While previous algorithms can be prohibitively slow, a pre-generated data structure is presented which provides high-quality, variable-density blue noise dot distributions at interactive rates. It supports both two- and three-dimensional rendering and provides temporal coherence, a feature lacking in previous high-quality stippling algorithms.

In the field of scientific visualization, the purpose is to identify areas of interest. Therefore, as many details as possible must be represented accurately, even when the data is far larger than can be handled by a visualization system. A hybrid point/volumetric representation is presented which optimizes storage for either point-based or volumetric data sets. The dual representation increases efficiency by exploiting the advantages of each: the point-based portion is used to represent small local details, while the volumetric portion is used to represent continuous, slowly varying portions of the data. Ultimately, richer non-photorealistic and scientific visualizations will create more effective methods of exploration and communication for scientific and artistic advancement.


Contents

Abstract
Contents
1 Introduction
  1.1 Discretization in art and science
  1.2 Rich visualization
  1.3 Outline
2 Rich illustrative rendering
  2.1 Introduction to illustrative rendering
    2.1.1 Geometry-based methods
      2.1.1.1 Pen-and-ink styles
      2.1.1.2 Geometry-based stippling
      2.1.1.3 Painting and cartoon styles
    2.1.2 Image-based methods
      2.1.2.1 Pen-and-ink styles
      2.1.2.2 Image-space stippling
      2.1.2.3 Watercolor and painting styles
    2.1.3 Volumetric methods
    2.1.4 Silhouette and edge processing methods
    2.1.5 Hybrid methods
  2.2 Point-based illustrative rendering
    2.2.1 Introduction
      2.2.1.1 Sample hierarchies
    2.2.2 Dynamic Voronoi hierarchies
      2.2.2.1 Two-dimensional dynamic Voronoi hierarchies
      2.2.2.2 Three-dimensional dynamic Voronoi hierarchies
      2.2.2.3 Sampling fine details
    2.2.3 Rendering dynamic Voronoi hierarchies
      2.2.3.1 Ambiguous point locations
      2.2.3.2 Edge position adjustments
      2.2.3.3 Matching two-dimensional input
    2.2.4 Performance
    2.2.5 Future work in stippling
    2.2.6 Review of dynamic Voronoi hierarchies
  2.3 Perception-based illustrative rendering
    2.3.1 Image segmentation
    2.3.2 The hybrid pipeline
      2.3.2.1 The three-dimensional processing stage
      2.3.2.2 The two-dimensional processing stage
      2.3.2.3 The final rendering stage
      2.3.2.4 Performance
    2.3.3 Differences from other hybrid work
    2.3.4 Rendering styles
      2.3.4.1 The pen-and-ink rendering style
      2.3.4.2 Analysis of traditional pen-and-ink illustrations
      2.3.4.3 Requirements for a renderer
    2.3.5 Complex silhouette rendering
      2.3.5.1 Generating complexity maps
      2.3.5.2 Separating regions
      2.3.5.3 Rendering lightly colored complex regions
    2.3.6 Complex region hatching
      2.3.6.1 Segmentation
      2.3.6.2 Stroke generation
    2.3.7 Examples
      2.3.7.1 Tree models
      2.3.7.2 Primate lung CT scan
    2.3.8 Review of hybrid illustrative rendering
      2.3.8.1 The two-dimensional processing stage
      2.3.8.2 The final rendering stage
      2.3.8.3 Other future directions
3 Hybrid rendering for scientific visualization
  3.1 Introduction
    3.1.1 Direct volume rendering
    3.1.2 Point-based volume rendering
  3.2 Hybrid point/volume visualization for volume rendering
    3.2.1 Hybrid data generation
      3.2.1.1 Point selection
      3.2.1.2 Optimized representations
      3.2.1.3 Storage
    3.2.2 Rendering hybrid data
      3.2.2.1 Sources of error
    3.2.3 Results
      3.2.3.1 The argon bubble data set
      3.2.3.2 The chest data set
      3.2.3.3 The Furby data set
    3.2.4 Review of hybrid visualization for volume rendering
  3.3 Hybrid rendering for particle visualization
    3.3.1 Particle beam visualization
    3.3.2 Data representation
    3.3.3 User interaction
    3.3.4 Rendering
    3.3.5 Results
    3.3.6 Review of hybrid visualization for particle rendering
  3.4 Conclusions on hybrid visualization
Conclusion
References


1 Introduction

Visualization and rendering are becoming more important parts of today’s scientific and artistic landscape as researchers from all fields increasingly use computer graphics as part of the everyday scientific workflow. Complex computer simulations and sophisticated measurements allow research efforts to be more accurately directed and experiments to be more precisely analyzed, but produce enormous streams of data. Simple algorithmic analysis of this data is sufficient for some purposes, such as verifying that certain values are as expected. However, the scientific process depends on communication with others, and the features of interest are often not known or are difficult to parameterize. It is in these tasks of communication and exploration that visualization is critical.

Until recently, “visualization” meant forming a mental image, such as imagining the final print of a photograph as it is being taken to aid in composition and exposure. Ansel Adams observed, with respect to one of his early and more famous photographs of Half Dome,

    This photograph represents my first conscious visualization; in my mind’s eye I saw (with reasonable completeness) the final image as made with the red filter. … Over the years I became increasingly aware of the importance of visualization. The ability to anticipate — to see in the mind’s eye, so to speak — the final print while viewing the subject makes it possible to apply the numerous controls of the craft in precise ways that contribute to achieving the desired result. [Adams 1983, pp. 4–5]

In the field of computer graphics, visualization has come to refer to the process of producing images from data to yield insight, that is, a “visualization” in the mind of the viewer of the intangible numbers in the computer. The result of this visualization provides guidance for what to do next: just as visualization helped Adams control his camera to achieve the desired results, scientific visualization helps scientists control their experiments to achieve the desired scientific discoveries.


Computers are also increasingly used to generate artistic output, where they serve in two primary capacities. First, computers are used by artists to generate effects that would be difficult or impossible using traditional media. One example is photographic manipulation, where photographs are combined and manipulated in the computer to achieve an impossible or otherwise novel image. The use of photographs allows a level of realism that cannot be duplicated with artistic media, while the computer allows the images to be manipulated almost as easily as a painting. Another example where a computer can be useful is animation with an artistic style: the artist carefully defines the look of the subjects of the animation and the computer adapts this specification to new frames. This frees artists from having to draw every frame individually, while allowing substantial artistic freedom.

Second, computers are used to generate artistic renderings automatically or semi-automatically. While traditional rendering has the goal of making images as photorealistic as possible, these “non-photorealistic” rendering methods attempt to duplicate a specific artistic style and are designed to replace all or part of an artist’s efforts. They are usually used to more efficiently generate a desired image where the efforts of a professional artist would be too expensive or time-consuming. Simple examples of automatic non-photorealistic rendering are certain filters in Photoshop. There is also a wide range of more complex methods available to artists and engineers that produce higher quality output but require more complex and time-consuming processing.

1.1 Discretization in art and science

Most traditional artistic styles approximate continuous functions (the desired image) with discrete components. In some styles, this effect is minimal. For example, in watercolor and oil painting, each brush stroke is discrete, but can be any of a continuous range of colors, and can also interact with underlying layers of paint to produce blending effects. This effectively eliminates, to the extent the artist desires, the discrete nature of the medium.


Other artistic methods, however, do not have the flexibility of continuous color. Pen-and-ink is one example of such a medium. Pens produce sharp, well-defined lines that are easily distinguishable from other strokes, and, because the ink dries quickly, do not usually blend with previous strokes. Changing inks in most technical pens is difficult, requiring meticulous cleaning of the pen, so the palette of colors is usually very limited. This limitation, as well as the artistic characteristics of the most common styles, means that most pen-and-ink drawings use only black ink. And, while some instruments such as quills can give variable-width lines, the more commonly used technical pens are strictly limited in the line width or dot size that they produce.

Many types of scientific simulations and measurements produce discrete output. In some cases these discrete measurements are samples of continuous fields and are often rendered as such through interpolation. Examples include scientific simulations and three-dimensional scans for medical or mechanical purposes where values of interest are measured on a grid in space. In contrast, other applications are not as appropriate for interpolation. One example is particle accelerator simulation: while the density of particles does have some meaning and can be interpolated, the greatest meaning of the data lies in the movement of the particles as the simulation progresses.

1.2 Rich visualization

When computer graphics was relatively new, visualization was difficult and merely producing useful images from data was sufficient. Now that computer graphics has matured and become more efficient, scientists and the public expect images from the data, and producing the richest possible images is the biggest challenge. Richness in visualization might mean different things in different situations, but at the lowest level it refers to getting the most out of a single image or animation. For example, it might refer to representing the most possible detail from a set of measurements, achieving high frame rates for interactive purposes, illustrating areas of interest with the least possible distraction, or even making the most interesting possible image for distribution to a curious public. The goals and approaches necessary for rich rendering vary considerably based on the desired style and application, of which four are addressed by this research.

Stipple drawings approximate continuous functions, usually a reference photograph or rendering, through the use of small dots. This style has been historically popular for illustrations in printed material because its two-toned nature reproduces well using only black ink, yet is capable of implying the full range of grayscale value. It is used on computers today for its artistic value and for its clear differentiation from other rendering styles, making it useful for multi-modality rendering. However, the placement of these dots is critical in giving the illusion of a continuous function. Any recognizable patterns or higher-level (low-frequency) structure will be interpreted by the viewer as detail in the image rather than a side effect of the representation. Accurate depictions of photographs or models, therefore, require that the dots be evenly spaced, yet devoid of any recognizable patterns. Previous research has produced algorithms capable of generating extremely detailed stipple renderings with highly optimized distributions. However, such optimization comes at the cost of performance. Section two presents an algorithm that quickly produces stipple illustrations with minimal artifacts, allowing interactive uses such as three-dimensional visualization in addition to the more traditional static black-and-white images.
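The even-but-unpatterned spacing described above is the defining property of a blue noise distribution. As a simple illustration of the concept only (this is classic dart-throwing rejection sampling, not the dynamic Voronoi hierarchy introduced later; the function name and parameters are chosen for this sketch), one can accept random candidate dots only when they keep a minimum distance from all previously accepted dots:

```python
import random

def dart_throw_stipples(width, height, min_dist, max_tries=20000):
    """Generate blue-noise-like dot positions by rejection sampling:
    a random candidate is accepted only if it lies at least min_dist
    from every previously accepted dot, giving even spacing with no
    recognizable pattern."""
    dots = []
    min_sq = min_dist * min_dist
    for _ in range(max_tries):
        x, y = random.uniform(0, width), random.uniform(0, height)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_sq for px, py in dots):
            dots.append((x, y))
    return dots

dots = dart_throw_stipples(100.0, 100.0, min_dist=5.0)
```

Each candidate is tested against every accepted dot, so the cost grows roughly quadratically with the number of dots; this is exactly the kind of expense that makes naive high-quality stippling too slow for interactive use.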

Hatching is another artistic illustration style that uses discrete black-and-white primitives to approximate continuous functions. This style uses short pen lines to represent grayscale value and surface texture, meaning that it usually concentrates more on edges than stipple drawing does. The result is that typical hatched illustrations have greater detail, but with this greater detail comes the danger of over-detailing. In places where many edges are present, drawing them all can ruin the grayscale value and create visual confusion, leaving areas of particular interest difficult to differentiate. Much of traditional computer graphics avoids this problem, primarily because most computer-generated scenes are simple and edges are not emphasized. As computers get faster, the complexity of the scenes that can be rendered increases, becoming particularly problematic for hatching and related binary illustration styles. The research presented in the second part of section two provides a generalized framework for addressing these complexity problems by incorporating a measure of the viewer’s visual response to the image into the rendering pipeline. This measurement can be used to guide the application of detail or abstraction to different areas of an image, giving simpler and clearer illustrations.
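A very simple image-space proxy for such a measure can be sketched as follows (an illustrative assumption, not the segmentation- and perception-based pipeline of section 2.3): count the fraction of edge pixels in a local window of a binary edge image, producing a per-pixel complexity map whose high-scoring regions are candidates for abstraction rather than stroke-by-stroke drawing:

```python
def complexity_map(edge_image, radius=2):
    """Per-pixel complexity estimate: the fraction of edge pixels in a
    (2*radius+1)^2 window of a binary edge image (nested lists of 0/1).
    Windows are clipped at the image border, so the fraction is always
    taken over valid pixels only."""
    h, w = len(edge_image), len(edge_image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            count = area = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        area += 1
                        count += edge_image[ny][nx]
            out[y][x] = count / area
    return out
```

Thresholding this map would separate sparse regions, where every edge can be drawn, from dense regions, where abstraction is needed to preserve grayscale value and clarity.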

While the use of discrete points as a rendering primitive may have started as a response to the need for dual-tone reproduction, points have also been used in three-dimensional visualization. In this case, the goal is the same: to produce an approximation of a continuous function with a set of discrete primitives. The reasons for using points in these applications are flexibility and efficiency rather than artistic merit or ease of reproduction. Points can be rendered more efficiently without hardware acceleration than more complex primitives such as polygons or volumes. With today’s hardware acceleration, however, points are relatively inefficient to render because the hardware is optimized for displaying texture-mapped polygons for games. A middle ground can be reached by combining the flexibility and efficient storage of point-based representations with the fast rendering performance of volume rendering. The flexibility added by the points offers a reduction in data size with a less-than-proportional reduction in quality, allowing larger data to be more efficiently visualized. This addition of discrete point-based rendering to volume rendering applications is discussed in section three.
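A minimal sketch of one way such a split could work (the block size, the min/max range test, and all names here are assumptions for illustration; the actual selection criteria are described in chapter 3) classifies each small block of a volume by its local value range, emitting high-variation blocks as explicit points and averaging slowly varying blocks into a coarse volume:

```python
def split_hybrid(volume, block=2, threshold=0.1):
    """Partition a 3-D volume (nested lists indexed [z][y][x]) into:
      - points: (x, y, z, value) samples from blocks whose value range
        exceeds threshold, where averaging would lose detail, and
      - coarse: one averaged value per remaining block, keyed by block
        coordinates, for the smooth regions."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    points, coarse = [], {}
    for bz in range(0, nz, block):
        for by in range(0, ny, block):
            for bx in range(0, nx, block):
                vals = [(volume[z][y][x], x, y, z)
                        for z in range(bz, min(bz + block, nz))
                        for y in range(by, min(by + block, ny))
                        for x in range(bx, min(bx + block, nx))]
                vmin = min(v for v, *_ in vals)
                vmax = max(v for v, *_ in vals)
                if vmax - vmin > threshold:
                    # Detailed block: keep every sample as a point.
                    points.extend((x, y, z, v) for v, x, y, z in vals)
                else:
                    # Smooth block: a single average suffices.
                    coarse[(bx // block, by // block, bz // block)] = (
                        sum(v for v, *_ in vals) / len(vals))
    return points, coarse
```

Storage then grows with the amount of fine detail rather than with the raw volume size: the coarse volume stays small for smooth regions while isolated details are preserved exactly, mirroring the dual representation described in the abstract.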

Hybrid point and volumetric representations can also be used to more efficiently represent and render data that was originally particle-based. Pure particle data renders slowly on most PCs, but some data is inherently point-based, such as particle accelerator simulations in which each point represents a subatomic particle. The slow rendering times, combined with the large size of each time step, mean that interactive visualization is impractical. Replacing key portions of the point data with volumetric representations is discussed in the second part of section three, and results in faster rendering times and lower storage requirements. In this case, continuous volume rendering is used to give the illusion of a discrete point-based rendering rather than the opposite, as is more commonly the case.

In these four application areas of visualization, this research addresses the use of discrete, visible primitives in generating rich visualizations. In some areas, such as volume rendering, the addition of discrete points enhances a type of rendering that usually relies on continuous volumes. In other areas, the inherently discrete nature of the data or the rendering style is enhanced or supplemented to provide more or clearer information or better interactivity. In all cases, this research leverages the strengths and weaknesses of the human visual system to generate visualizations with a better subjective appearance, or to provide better performance without degradation. Improved visualization and rendering allow more efficient exploration and more effective communication.

1.3 Outline

This dissertation presents contributions to visualization and rendering in two primary areas: non-photorealistic rendering and scientific visualization. In section 2.2, a new data structure called a dynamic Voronoi hierarchy is introduced for the generation of non-photorealistic stipple illustrations. It allows interactive computation of temporally coherent dot distributions with few low-frequency components. In section 2.3, a hybrid pipeline is introduced to aid in the generation of non-photorealistic renderings of scenes containing complex geometry. This technique uses an intermediate two-dimensional rendering and image-processing stage that extracts information about how a viewer might perceive the final rendering. The pipeline is demonstrated using the pen-and-ink rendering style, one that is particularly prone to over-detailing.

In section 3.2, a method for converting volume data into a hybrid point-and-volume representation for visualizing large datasets is presented. This technique uses points to represent localized areas of high detail and a volume to represent areas of low detail. Because each method is used where it is most efficient, data size is reduced, and large volumes can be viewed with minimal quality degradation. A related approach is presented in section 3.3, using the same hybrid representation for particle beam simulations. These simulations are particularly hard to visualize because the original data will not fit on a typical visualization system. The point-based portion of the hybrid data is used for areas of low density and fine detail, while the volumetric data replaces more homogeneous, dense areas. All of these methods aim to improve the visual richness and clarity of images for exploration and communication.


2 Rich illustrative rendering

2.1 Introduction to illustrative rendering

Hand-drawn non-photorealistic renderings have both artistic and practical importance. Artistic drawings

are appropriate for a variety of uses. Sometimes, all that is needed is an attractive image, and non-photo-

realistic renderings can fill this role well. For example, Schumann et al. [1996] found that many architects

preferred stylized renderings over traditional “exact” CAD drawings for some purposes, especially for the

initial design phase. The architects felt that the sketch-style renderings used in the study conveyed a sense of

uncertainty, allowing clients to focus on the large-scale design rather than on the details which may not have

been decided yet. Other techniques are particularly well-suited to certain representations. For example,

black-and-white line and stipple illustrations are ideal for many printed reproductions.

From a practical standpoint, artistic techniques can also aid in the understanding of a scene or object.

For example, silhouette edges and contour lines can convey shape that is not easily visible in a realistically

shaded object. Gooch et al. [1998] demonstrate the use of silhouette edges and artistically inspired lighting

to generate technical illustrations that are clearer than those produced with traditional lighting and rendering models.

Lum and Ma [2002] use similar methods to improve clarity in the context of interactive volume rendering.

However, silhouette edges, an important component of many such approaches, break down when there are

many small objects very close together. Silhouette edges and contour lines become so closely spaced that

they are an impediment to clarity rather than an aid. In these cases, high-level scene knowledge can assist in

simplifying complex areas to produce both meaningful and attractive results.


There have been two primary approaches for rendering in non-photorealistic styles: geometry-based

methods and image-based methods. Geometry-based methods deal with the primitives directly, and render

them using a specific texture or style. Visibility and the screen size of objects are often used to make

adjustments for level-of-detail or to scale strokes appropriately, but typically no other image-space

parameters are used.

Image-based methods operate only on images, which can be photographs or drawings in addition to

computer-generated renderings. They can emulate a wider range of styles than geometry-based methods

because they can take a higher-level view of the scene, but important details can be lost in the process. Less

commonly used are hybrid methods, such as the one presented in section 2.3, which incorporate informa-

tion from both geometry and images to form the final rendering.

2.1.1 Geometry-based methods

2.1.1.1 Pen-and-ink styles

One of the most influential works on geometry-based non-photorealistic rendering is by Winkenbach and

Salesin [1994], who use pre-generated, resolution-dependent prioritized stroke textures on simple planar

geometry to emulate pen-and-ink drawings. The user also has the option of specifying important portions of

the drawing so that unimportant areas can be left out. They extend the technique in [Winkenbach 1996] for

more complex parametric surfaces. The extensions include controlled-density hatching for constant line

density on surfaces that change area, and planar maps of the scene which identify regions that correspond to

a single visible object or shadow.

Instead of using predefined textures and parametric equations for selecting stroke directions, many

researchers have worked on choosing good stroke directions for arbitrary surfaces. Girshick, et al. [2000]

argue that principal curvature direction is the most useful for portraying the shape of the surface, but the

results lack the artistic look of some other techniques. Rössl and Kobbelt [2000] describe an interactive

system in which the user edits a segmentation of a two-dimensional direction field. Strokes are generated

along this direction field to cover a three-dimensional model. Deussen et al. [1999] compute surface lines by


intersecting the objects with sets of planes. By drawing different portions of the image, a style similar to

copper-plate engraving is produced. Ostromoukhov [1999] achieves a similar effect by defining curved

meshes over portions of the model to guide the generation of engraving lines. He also discusses this method

with the addition of some halftoning techniques in [Ostromoukhov 2000]. Hertzmann and Zorin [2000]

grow hatches out from silhouette edges along a direction field. The direction field is created by choosing the

principal curvature direction for areas where it is well-defined, and optimizing the intermediate areas. They

also provide high contrast for undercuts to highlight areas where the surface occludes itself.

While the aforementioned pen-and-ink techniques deal primarily with ease-of-use or image quality,

there has also been research on interactive techniques to emulate pen-and-ink rendering. Praun et al. [2001]

pre-compute a set of textures called tonal art maps for the model using a natural parameterization. For

arbitrary surfaces, lapped textures [Praun 2000] are used for generating a texturing of an object given an

example texture. The technique uses hardware multi-texturing to blend between differing resolutions of the

maps to create temporally coherent, interactive shading. Freudenberg et al. [2001] implement a similar mip-

mapped hatching technique geared for game engines and interactive walkthroughs.

2.1.1.2 Geometry-based stippling

Stippling is an illustration style consisting of dots made with a technical pen. These images usually consist of

one dot size and ink color (almost always black), and the artist creates the desired image by generating the

correct density of points much like dithering algorithms. Geometry-based stippling algorithms attach a set

of points to the object surface. Because the points move with the surface, these methods usually have good

temporal coherence, and the movement of the points provides important visual cues about the third

dimension. Meruvia et al. [2003] generate a hierarchy of points on the surface using mesh subdivision. This

hierarchy encodes which points to draw for a given density, allowing density to be locally adjusted at

rendering time. Lu et al. [2002 (b)] use a similar method but generate samples using a Poisson disk sequence

over the interior of each face of the mesh.


Pre-generated hierarchies offer fast, temporally coherent rendering and a selectable level-of-detail, so

they have also been used for a wide range of other rendering styles, including hatching [Winkenbach 1994,

Rössl 2000]. The limitation of geometry-based approaches is that the stipple distribution cannot be as even

as in image-space approaches. These methods trade off precise distributions for runtime variability. When a

point is removed, the local density decreases dramatically, resulting in a visible hole in the distribution as in

Figure 1(b).

2.1.1.3 Painting and cartoon styles

Much work has gone into generating temporally coherent animations of a specific rendering style. Meier

[1996] attaches a set of seed points to the scene geometry which are used as the source for brush strokes

along the surface. Strokes are rendered in image-space and Z-buffered, resulting in an abstract painting style

that exhibits frame-to-frame coherence of brush strokes. Kalnins et al. [2002] attempt a more detailed

rendering style consisting of silhouette edges and hatch shading. The user specifies some example strokes at

one or more viewpoints and the computer adapts the strokes for novel orientations so that silhouette,

shadow, and highlight strokes “stick” to the appropriate location on the surface.

There has also been some work on interactive painting styles. A straightforward approach is to use pre-

generated textures that emulate a given style, such as Majumder and Gopi’s [2002] charcoal rendering

method. Lum and Ma [2001] combine different textures to build up light and texture for emulating

Figure 1 If a slightly lower density than in (a) is required, removing one point leaves a noticeable hole (b). Re-optimizing that point’s immediate neighbors, as in the dynamic Voronoi hierarchies discussed in section 2.2, allows the change to be spread across a wider region (c).

(a) (b) (c)


watercolor rendering in graphics hardware. Textures are not required for all techniques, however. Claes et al.

[2001] present an interactive rendering technique using a simply colored cartoon model in hardware. Special

care is taken to generate sharp and continuous transitions between light and shadow areas.

2.1.2 Image-based methods

2.1.2.1 Pen-and-ink styles

Many have proposed interactive systems to aid in the manual generation of pen-and-ink renderings. While

they often require significant time be spent by a person to generate an image, these methods have the

advantage that they can capture the artistic vision of a person while still being much faster than manual

drawing. Salisbury et al. [1994] allow the user to “paint” with a set of textures from which the computer

generates individual strokes. Salisbury et al. [1997] propose a more efficient, higher-level system in which the

user edits a direction field and specifies a set of example strokes. The computer applies the example strokes

based on the direction field to generate the output strokes.

2.1.2.2 Image-space stippling

Most stippling algorithms work in two dimensions, just like their artistically generated counterparts. This

process of picking point locations is similar to sampling algorithms for antialiasing, a field with a very long

history in computer graphics. Yellot [1983], in his study of the retinal cells of primates, proposed that the

best form of sampling would have minimal low frequency components and no strong frequency

components that would cause aliasing. In signal processing, blue noise is noise with power proportional to

the corresponding frequency, but in computer graphics the term “blue noise” is often used to refer to any

signal meeting Yellot’s criteria [Mitchell 1987].

McCool and Fiume [1992] use a dart throwing technique [Mitchell 1985] to generate blue noise, in

which sampling positions are generated uniformly (the white noise distribution), but are accepted or

rejected based on a minimum-distance criterion relative to previously selected points. Hierarchical multiresolution

distributions are generated by progressively reducing the minimum distance as the points are selected. The


resulting distribution is then refined using Lloyd’s method [1982]. In this algorithm, the Voronoi diagram of

the point distribution is calculated and each point is moved to the center of its Voronoi cell. This process is

repeated until the distribution converges to a desired precision, as measured by the maximum movement of

all points during the previous iteration. Theoretical analysis of Lloyd’s method and centroidal Voronoi

diagrams can be found in [Du 1999].
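The two stages of this pipeline can be sketched in a few lines of Python. The following is an illustrative simplification rather than the original implementation: dart throwing accepts uniformly generated candidates that satisfy the minimum-distance criterion, and the discrete Lloyd relaxation approximates each Voronoi cell by the set of grid cells nearest to a point, terminating when the maximum point movement in a pass falls below a tolerance. The function names, grid resolution, and retry budget are choices made here, not values from the cited papers.

```python
import random

def dart_throw(n, radius, tries=10000):
    """Dart throwing: uniform candidates on the unit square are kept
    only if they lie at least `radius` from every accepted point,
    yielding an approximate blue-noise distribution."""
    pts = []
    for _ in range(tries):
        c = (random.random(), random.random())
        # Reject candidates violating the minimum-distance criterion.
        if all((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 >= radius * radius
               for p in pts):
            pts.append(c)
            if len(pts) == n:
                break
    return pts

def lloyd_relax(points, iterations=20, grid=48, tol=1e-4):
    """Discrete Lloyd relaxation: each grid cell is assigned to its
    nearest point (a discrete Voronoi diagram) and every point moves
    to the centroid of its cells.  Iteration stops when the maximum
    movement over one pass falls below `tol`."""
    pts = [list(p) for p in points]
    cells = [((i + 0.5) / grid, (j + 0.5) / grid)
             for i in range(grid) for j in range(grid)]
    for _ in range(iterations):
        acc = [[0.0, 0.0, 0] for _ in pts]  # sum_x, sum_y, count
        for x, y in cells:
            k = min(range(len(pts)),
                    key=lambda m: (pts[m][0] - x) ** 2 + (pts[m][1] - y) ** 2)
            acc[k][0] += x
            acc[k][1] += y
            acc[k][2] += 1
        moved = 0.0
        for k, (sx, sy, cnt) in enumerate(acc):
            if cnt:
                nx, ny = sx / cnt, sy / cnt
                moved = max(moved, abs(nx - pts[k][0]), abs(ny - pts[k][1]))
                pts[k] = [nx, ny]
        if moved < tol:
            break
    return pts
```

A production implementation would compute exact Voronoi cells, for example from a Delaunay triangulation, rather than a pixel-grid approximation; the grid version is used here only because it keeps the sketch self-contained.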

Hiller et al. [2001] refine this technique to produce sampling tiles. In this work, an initial point

distribution is generated randomly and optimized using Lloyd’s algorithm. More dense resolutions are

created by inserting a new set of points and optimizing them with Lloyd’s algorithm while keeping the

previously existing points fixed. Special consideration is given to making the distributions tile seamlessly,

allowing samplings of varying distributions to be generated by using a variety of different tiles.

Distributions with minimal low-frequency components are ideal for stippling, because points are

spaced at regular intervals without any higher-level pattern. In contrast, white noise, which contains all

frequencies, often contains visible clumps of points and empty space that the eye interprets as detail.

Deussen et al. [2000 (a)] produce blue-noise-based stipple drawings by first distributing a given number of

samples over an image. These points are relaxed using Lloyd’s algorithm to give the desired distribution, but

user input is required to achieve some effects such as sharp edges. Secord [2002] extends this method to

eliminate the need for manual input by weighting the centroids of the Voronoi cells by the grayscale value of

the reference image. Hiller et al. [2003] expand on this technique, allowing more complex primitives than

just points to be distributed.
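A single step of this weighted relaxation can be sketched as follows. This is a simplification of Secord’s formulation, which integrates the density over exact Voronoi cells and iterates to convergence; here each pixel is discretely assigned to its nearest stipple, and the function name is illustrative.

```python
def weighted_centroid_step(points, density):
    """One iteration of weighted Voronoi stippling.

    `density` is a 2-D list of ink densities in [0, 1] (1 = black).
    Each pixel is assigned to its nearest stipple point, and each
    point moves to the density-weighted centroid of its region, so
    stipples gather in the dark areas of the reference image."""
    h, w = len(density), len(density[0])
    acc = [[0.0, 0.0, 0.0] for _ in points]  # sum_x, sum_y, sum_weight
    for y in range(h):
        for x in range(w):
            rho = density[y][x]
            if rho <= 0.0:
                continue
            # Nearest stipple point (discrete Voronoi assignment).
            k = min(range(len(points)),
                    key=lambda m: (points[m][0] - x) ** 2 +
                                  (points[m][1] - y) ** 2)
            acc[k][0] += rho * x
            acc[k][1] += rho * y
            acc[k][2] += rho
    out = []
    for k, p in enumerate(points):
        sx, sy, sw = acc[k]
        # A point whose region holds no ink stays where it is.
        out.append([sx / sw, sy / sw] if sw > 0 else list(p))
    return out
```

Iterating this step concentrates stipples in the dark regions of the reference image while keeping them locally evenly spaced.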

Because Voronoi-based relaxation works on the plane, these stippling techniques work only from two-

dimensional input. Furthermore, they typically require significant processing time and the points do not

move coherently if animation is desired. Secord et al. [2002] use a different image-based approach based on

two-dimensional distribution functions that provides temporal coherence and quick generation times. The

trade-off is that the resulting distributions are not as evenly distributed as those generated with Voronoi-

based relaxation.


All two-dimensional approaches are limited by the amount of three-dimensional information they can

provide. Generally, three-dimensional information is not available (as in the form of a depth buffer, for

example), and those techniques that provide temporal coherence do so by moving points in the screen plane.

The subtle movement of the surface in three dimensions which provides depth cues is lost.

2.1.2.3 Watercolor and painting styles

Watercolor and impressionistic painting styles are popular techniques to emulate using image-based

methods due to their indistinct nature: it does not matter that many of the small details are lost since this is

an accepted part of the technique. In one of the earliest investigations of painting styles, Strassman [1986]

discusses a model for the brush, paint, and paper for manually creating simple paintings in the traditional

Japanese bokkotsu style of sumi-e. More recently, Curtis et al. [1997] present a relatively complete model for

computer-generated watercolor rendering that includes models for the brush, pigments, and paper, and

includes both interactive and automatic components. Durand et al. [2001] describe an interactive technique

for manually generating artistic renderings in a variety of styles. The user draws strokes on the image and

the system interactively generates an output image based on models of artistic materials and the source

image, giving the user a high degree of control without the need to worry about small details.

Hertzmann et al. [2001 (a)] stylize images given two source images: an original image and a stylized

version. This method can then duplicate the style for other original images. It can handle a wide range of

rendering styles and image-processing effects, but requires both an example input and output which are

seldom available. Hertzmann [2001 (b)] also generates painterly renderings from source images using a

relaxation function to optimize brush strokes. Different rendering styles are generated by varying the

weights for the energy function parameters.

2.1.3 Volumetric methods

Non-photorealistic rendering has most often been applied to surfaces and images, but some volumetric

methods have also been developed. Nagy et al. [2002] combine stroke rendering and cartoon shading to


augment volume visualization (in this sense, it could also be categorized as hybrid) and to make embedded

surfaces more clear. Likewise, Treavett and Chen [2000] present a system capable of handling three-

dimensional, two-dimensional, and two+-dimensional (multiple two-dimensional renderings) data. An

enhanced version is presented in [Treavett 2001] which describes a hybrid system that also handles volume

data by applying different stylistic filters in modeling space, rendering space, and lastly, image space.

Dong et al. [2001] detect fine structures such as muscle fiber orientations, blood vessels, and cracks in

medical volume data. These details are then enhanced in the volumetric rendering to make the rendering

more clear. Dong et al. [2003] also describe a method for direct stroke-based rendering of volume data.

Strokes along the surface of the volume in question are drawn, along with many interior strokes aligned

according to an extracted direction field. This technique allows more detail to show through than a surface

rendering alone, but because the strokes are not optimized in screen space, the resulting drawing is not as

refined as some image-space hatching techniques. Interrante [1997] generates a volumetric texture from the

source data. A set of points are uniformly distributed throughout the volume, and are turned into a series of

three-dimensional strokes by following the principal curvature direction for a short distance. Isosurfaces are

displayed by rendering the corresponding set of strokes from the volume.

Ebert and Rheingans [2000] take a different approach to volume illustration. Their model uses a

physics-based illumination model along with a variety of feature enhancement techniques, depth cues, and

tone shading to make the details and structure of the volume more clear.

2.1.4 Silhouette and edge processing methods

One important part of many non-photorealistic rendering styles is the extraction and processing of

silhouette edges. Because brute-force extraction of silhouette edges for polygonal models is straightforward,

most work has focused on faster or higher-quality results. Raskar and Cohen [1999] present a fast and

simple method for rendering silhouette edges in hardware. This method produces only bitmapped output, so

the lines cannot be drawn in any special styles. Buchanan and Sousa’s [2000] edge buffer achieves

interactive rates without hardware acceleration.
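The brute-force test mentioned above is simple to state: an edge is a silhouette edge when exactly one of its two adjacent faces points toward the viewer. A minimal sketch, assuming consistently wound triangles and an orthographic view direction (the function name and sign conventions are choices made here):

```python
def silhouette_edges(vertices, faces, view_dir):
    """Brute-force silhouette extraction for a triangle mesh.

    `view_dir` points from the camera into the scene; a face whose
    normal opposes it faces the viewer.  An edge shared by one
    front-facing and one back-facing triangle is a silhouette edge."""
    def sub(a, b):
        return [a[i] - b[i] for i in range(3)]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Record the facing (front/back) of each face adjacent to an edge.
    edge_faces = {}
    for tri in faces:
        v0, v1, v2 = (vertices[i] for i in tri)
        n = cross(sub(v1, v0), sub(v2, v0))
        front = dot(n, view_dir) < 0
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            edge_faces.setdefault((min(a, b), max(a, b)), []).append(front)
    return [e for e, f in edge_faces.items()
            if len(f) == 2 and f[0] != f[1]]
```

Every edge of every triangle is visited once, so the cost is linear in the number of faces; the faster methods cited above avoid even this pass or move it to hardware.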


Extracted edges are often not directly suitable for high-quality renderings because they represent too

much detail of the underlying geometry. For example, silhouette edges extracted from a triangle mesh will

consist of a series of straight lines, and the discrete nature of the data means that there are often small areas

of false silhouette that will also be detected. Freeman et al.’s [1999] method processes lines such as extracted

silhouette edges and produces an artistically styled version based on a set of example strokes. Northrup and

Markosian [2000] process the extracted edges in the two-dimensional projection. The result is a set of long,

smooth lines that fixes many of the small artifacts and discontinuities of lines extracted from triangle

meshes. Hertzmann and Zorin [2000], in addition to surface stroke generation (discussed above), also

address high-quality, topologically correct silhouette and cusp detection for smooth surfaces and meshes.

Claes et al. [2001] supplement their interactive cartoon-style shading algorithm with high-quality, real-

time silhouette edges. These edges are identified by traversing a pre-generated four-dimensional octree.

Edges are represented in the octree as a pair of points, one for each surface adjacent to the edge, with the

coordinates of the points being the four values from the plane equation of the surface. Silhouette edges are

detected when the dot products of these two points with a four-dimensional representation of the camera view

vector have opposite signs. Interrante et al. [1995] generate ridge and valley lines to portray important features of the surface, but

focus on minimizing occlusion of surfaces or volumes that may lie behind them.

2.1.5 Hybrid methods

One interpretation of “hybrid” non-photorealistic rendering is the use of more than one rendering style.

Many of the previously discussed interactive drawing systems either implement, or can easily be enhanced

to implement, additional drawing styles, such as [Nagy 2002] and [Treavett 2001]. Also of note is the inter-

active system of Halper et al. [2002] for image generation. It allows the user to specify, using a simple sketch

interface, the rendering styles used and the pipeline in which they are applied.

The hybrid rendering in this work, however, focuses more on hybrid image-generation pipelines, that

is, using multiple processing methods to achieve a given result. The graphics community has long

recognized the need to combine information that typically is available only to different rendering methods


or to disjoint steps of a single pipeline. The simplest example is probably the use of the screen size of an

object to control its rendering attributes. For example, small objects are drawn with fewer strokes to

maintain a consistent stroke spacing on the final image. In the case of real-time graphics, small objects can

be drawn with a faster, lower precision technique [Chamberlain 1996].

A common hybrid approach is to use a two-dimensional rendering algorithm, but to supply it with

several input variables in the form of multiple renderings generated from the original three-dimensional

data, sometimes referred to as two+-dimensional processing. Pixar’s RenderMan renderer is able to generate

multiple output variables which Pixar demonstrates can be used in a two-dimensional post-process [PRMan

App. Note № 24] to produce images rendered in different styles, including pen-and-ink and cartoon styles.

This image-generation pipeline, therefore, consists of two distinct phases, a three-dimensional rendering

phase, and a two-dimensional post-processing stage. In Hamel and Strothotte’s rendering technique [1999], a

set of renderings representing different variables, which they call base images, are also used. This hybrid

approach also consists of an initial three-dimensional rendering stage followed by a secondary two-

dimensional stage. However, this system generates pen strokes directly from the input geometry in the first

stage. These strokes are then modified in the second stage according to the base images using a template that

maps base image parameters to certain changes in line style. The system is able to extract templates from

manually generated renderings and apply them to novel models, in effect transferring the rendering style.

Deussen and Strothotte [2000 (b)] use a somewhat different hybrid approach. This work, specifically

targeted at tree and plant rendering, uses two-dimensional depth renderings to extract important features in

the foliage. In the two-dimensional rendering stage, each leaf is rendered as a disk or, for more detailed

drawings, as a leaf-shaped primitive. Strokes are generated where there are large depth differences. By

varying the size of the primitive and the depth threshold required for stroke generation, a range of rendering

styles can be emulated. Level-of-detail is handled automatically if the primitive size and depth threshold are

constant for all objects in the scene, an effect that can be exaggerated by increasing primitive sizes for

smaller objects.
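The depth-difference test at the core of this approach can be sketched directly. In this illustrative version (the function name and the 4-connected neighborhood are choices made here), a pixel is marked for stroke generation when some neighbor lies more than a threshold deeper, so marks fall on the near side of each depth discontinuity:

```python
def depth_edges(depth, threshold):
    """Mark pixels where the depth buffer jumps by more than
    `threshold` relative to a 4-connected neighbor -- the places
    where outline strokes would be seeded.  `depth` is a 2-D list
    of depth values from the intermediate rendering."""
    h, w = len(depth), len(depth[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                # Mark the nearer pixel of a large depth discontinuity.
                if (0 <= ny < h and 0 <= nx < w and
                        depth[ny][nx] - depth[y][x] > threshold):
                    edges[y][x] = True
                    break
    return edges
```

Enlarging the primitives used to render each leaf widens the flat depth regions, which is how the depth threshold and primitive size together control the level of abstraction.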


2.2 Point-based illustrative rendering

2.2.1 Introduction

Sampling is one of the fundamental concepts in computer graphics. The stipple rendering style is

particularly sensitive to the distribution of samples. This style is often used because of its aesthetic value and

because its dual-tone nature makes it well-suited for many types of printing. Its usefulness also extends into

interactive visualization. The dot pattern is minimalist, mostly transparent, and clearly differentiable from

other rendering styles, making it ideal for multi-modality rendering.

As discussed above, when using stippling to represent a model or a feature in a data set, it is very

important that the points be spaced evenly across the surface. Small low-frequency changes in density are

easily discerned by the human eye, and the perception of the shape of the surface can be lost. An ideal

stipple rendering has even sample spacing and minimal low-frequency components. An ideal rendering

would also have point density which varies smoothly according to the underlying surface and it would be

temporally coherent: not only moving points in a pleasing manner, but moving them so as to enhance the

perception of the surface.

2.2.1.1 Sample hierarchies

One simple application for a variable density sampling approach is merely to produce samplings of the plane.

Such distributions are used to reduce aliasing, for example, in ray tracing programs, as well as for evenly

distributing arbitrary objects such as plants as in [Deussen 1998]. The minimal spacing properties of blue

noise make it very desirable for many such applications because of the attenuation of low frequencies that

can be visible as changes in density. Unfortunately, the highest-quality method for generating this type of

distribution is Lloyd’s algorithm, which is very slow.

To generate a hierarchy of samples from top to bottom, a set of points is first generated over the plane.

Points are then removed one at a time, recording the local density at each removal. Likewise, the hierarchy

can be built from the bottom up by selecting a very-low-density initial distribution and repeatedly inserting


points at low-density regions. Hierarchies such as these, like McCool and Fiume [1992] and Hiller et al.

[2001], are static and, as discussed above, changes in density can be uneven. Hiller’s method offers some

flexibility in this respect. By generating fewer levels of density through adding and optimizing many points

at once, the distribution can be improved, but there will be fewer levels of distribution to choose from. This

trade-off is perfect for certain applications such as antialiasing where the precise sample density is less

important. However, for other applications such as stippling, which must represent a precise density over a

local area, such a trade-off is not worthwhile.

2.2.2 Dynamic Voronoi hierarchies

Dynamic Voronoi hierarchies are a method for quickly generating approximated blue noise sampling

distributions with a variable density. Applied to three-dimensional stippling, the algorithm is a geometry-

based approach in which the distribution problem common to static hierarchical methods is reduced; the

samples are no longer fixed to a single position, but instead move slightly on the surface of the object. This

allows the density change of removing a point to be spread across several neighboring points, making the

change less visible. This difference is illustrated in Figure 1(c) on page 10, where it is extremely difficult to tell

that a point was removed.

Figure 2 The steps in removing one point. The point removal/optimization process is repeated until all points are removed. (a) Step 1: The Delaunay neighbors (open circles) are identified for the point to be removed (center). (b) Step 2: The center point is removed and the Delaunay neighbors are re-optimized. (c) Step 3: The new positions and densities of the neighbors are stored in the hierarchy.


Compared to most previous image-space Voronoi stippling, the goal of a global minimum-energy

solution is relaxed in exchange for more local control of stippling density. The result is a slightly less optimal

distribution, but with temporally coherent animation and variable density at interactive rendering rates.

Using a three-dimensional hierarchy, points also move with the surface of the object, giving important

visual hints about the shape of the surface.

2.2.2.1 Two-dimensional dynamic Voronoi hierarchies

Dynamic Voronoi hierarchies reduce the sudden change in point density by distributing it to the

surrounding points. When a point is removed, that point’s immediate neighbors (those points with Voronoi

cells adjacent to that of the removed point) are repositioned using Lloyd’s algorithm, as shown in Figure 2 on

page 18. Because the remaining points in the distribution are kept fixed during this process, each point

typically moves a short distance toward the location of the removed point. The hierarchy can also be built in

reverse, inserting a point, and optimizing it and its neighbors.

The removal of a point and the optimization of its neighbors are repeated until the lowest

desired density is reached. As points are moved, their locations are stored, along with the index of the point

whose removal or addition caused the movement. The result is a list associated with each point containing,

in order, the indices of each neighbor that is removed, and the new position the point takes as a result of the

removal. Using these lists, the movement of points can be “played back” at rendering time to achieve smooth

transitions as point density is adjusted. Figure 3 on page 20 illustrates the difference between static (a) and

dynamic hierarchies (b). The dynamic approach reduces the difference between the most dense and the least

dense points. Figure 4 on page 21 shows three arbitrary density levels extracted from the same hierarchy,

along with the corresponding Fourier transforms. The Fourier transforms exhibit the characteristic blue

noise distribution, with few low frequency components (the dark center) and no spikes in frequency. The

inner ring represents the spacing between each point and its closest neighbor, and larger rings represent

successive “layers” of neighbors.
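The record-and-replay structure can be sketched as follows. This is a deliberately simplified illustration: the removal order is arbitrary and a fixed nudge toward the removed point stands in for the local Lloyd re-optimization described above, and the names are made up for the sketch. The playback lists, however, have the form described in the text: each affected neighbor records the index of the removed point together with its own new position, and rendering replays those moves down to any target density.

```python
def build_hierarchy(points, k=3):
    """Build per-point playback lists by removing points one at a time.

    When point `r` is removed, its `k` nearest surviving neighbors are
    nudged toward it (standing in for the local Lloyd re-optimization),
    and each neighbor records (r, new_position) for later playback."""
    pts = [list(p) for p in points]
    alive = set(range(len(pts)))
    order = []                     # removal order of point indices
    moves = [[] for _ in pts]      # per-point playback lists
    while len(alive) > 1:
        r = max(alive)             # stand-in removal policy
        alive.discard(r)
        order.append(r)
        near = sorted(alive, key=lambda i: (pts[i][0] - pts[r][0]) ** 2 +
                                           (pts[i][1] - pts[r][1]) ** 2)[:k]
        for i in near:
            pts[i] = [pts[i][0] + 0.3 * (pts[r][0] - pts[i][0]),
                      pts[i][1] + 0.3 * (pts[r][1] - pts[i][1])]
            moves[i].append((r, list(pts[i])))
    return order, moves

def playback(points, order, moves, target):
    """Replay recorded removals until `target` points remain."""
    pts = {i: list(p) for i, p in enumerate(points)}
    for r in order[:len(points) - target]:
        del pts[r]
        # Survivors take the position they recorded for this removal.
        for i in pts:
            for rem, pos in moves[i]:
                if rem == r:
                    pts[i] = list(pos)
    return pts
```

Because playback applies the same removals in the same order as the build, any intermediate density reproduces exactly the distribution that was optimized at build time, and moving between nearby densities animates only small, local point motions.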


Figure 3 A static hierarchy (a) can achieve reasonable results, but has an overall unevenness to it because points are not spaced evenly. A dynamic Voronoi hierarchy (b) can even out the point spacing to give a much smoother appearance, but still suffers from some artifacts around edges where points are drawn out-of-order (two such cases are circled). Selecting the point position by averaging all desirable positions (c) can remove most of these artifacts.

(a) Static hierarchy

(b) Dynamic hierarchy

(c) Dynamic hierarchy with averaging


Figure 4 One hierarchy can represent a range of point densities over a region with blue noise characteristics. These images were extracted from the same 5,000 point hierarchy. The Fourier transforms are shown to the right, with the spacing between the “rings” corresponding to the point spacing in the image.

(a) 50 density

(b) 25 density

(c) 20 density


2.2.2.2 Three-dimensional dynamic Voronoi hierarchies

Dynamic two-dimensional samplings are useful for generating static images (see section 2.2.3.3 “Matching

two-dimensional input” on page 29), but rendering point-sampled surfaces in three dimensions is also very

important, especially in the visualization field. The two-dimensional algorithm can be extended to arbitrary

surfaces to give good samplings at any density. This gives the ability to preserve point density regardless of

scale or angle of viewing incidence. It can also be used to give the illusion of lighting or other effects on the

surface.

As noted above, ideal distributions have few low frequencies, but this is measured in the screen plane

rather than on the surface of the object. The screen space sampling is approximated with a geometry-based

sampling, which provides good results in many cases. When the surface of the object has a large angle to the

screen plane, this approximation no longer holds (see section 2.2.3.2 “Edge position adjustments” on page

26).

The first step in building the hierarchy is to generate an initial sampling of the surface. The application

uses triangle meshes for simplicity, but the algorithm generalizes to any other surface representation

including point-based surfaces and implicit isosurfaces in a volume. Meruvia et al. [2003] use the mesh

vertices as the generators for the initial distribution, whereas I, like Lu [2002 (b)], distribute initial samples

over the interior of the triangles in the mesh. The points are inserted in each triangle with a white noise

distribution and are then globally optimized using Lloyd’s algorithm. This gives a more even global

distribution than Lu’s per-triangle Poisson disk distribution, but requires much more time to generate.
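Lloyd's algorithm alternates between assigning area to the nearest point and moving each point to the centroid of its assigned area. A minimal Python sketch, approximating each Voronoi cell with a discrete set of area samples rather than exact cell geometry (all names here are illustrative):

```python
import random

def lloyd_step(points, samples):
    """One Lloyd relaxation step: assign each area sample to its nearest
    point, then move each point to the centroid of its assigned samples."""
    acc = [[0.0, 0.0, 0] for _ in points]          # x sum, y sum, count
    for sx, sy in samples:
        i = min(range(len(points)),
                key=lambda k: (points[k][0] - sx) ** 2 + (points[k][1] - sy) ** 2)
        acc[i][0] += sx; acc[i][1] += sy; acc[i][2] += 1
    return [(a[0] / a[2], a[1] / a[2]) if a[2] else p
            for p, a in zip(points, acc)]

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(10)]   # white noise start
grid = [((i + 0.5) / 50, (j + 0.5) / 50) for i in range(50) for j in range(50)]
for _ in range(5):
    pts = lloyd_step(pts, grid)   # the distribution evens out toward blue noise
```

The exact-geometry version converges to the same fixed point (a centroidal Voronoi tessellation); the discrete approximation merely trades accuracy for simplicity.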

Using this initial distribution, higher- or lower-density representations can be computed by adding or

removing points and optimizing the neighbors of those points. The application always generates the initial

distribution at the highest density and uses decimation to generate lower levels. This requires more work in

the initial optimization stage because there are more points, but simplifies the later handling of details that

are small compared to the current sampling rate (see section 2.2.2.3 “Sampling fine details” on page 24).


Figure 5 The teapot mesh with 52,000 total points. The view and scale can be changed interactively at runtime while maintaining an even point distribution. The points move coherently with the surface as the camera is moved, giving important three-dimensional cues.


At each iteration, the most dense point is removed from the hierarchy and the area of its last Voronoi cell is

stored. To compute the new positions of the point’s neighbors, a planar projection is generated tangent to

the surface at the removed point as in [Alexa 2001]. The points are projected onto this plane so two-

dimensional Voronoi diagrams can be computed. Lloyd’s algorithm is then used to optimize the removed

point’s neighbors, keeping the rest of the points fixed. Because there are so few points being moved and their

movement is constrained, the process typically converges to within acceptable error in less than three

iterations. The points are then projected back onto the surface and their new locations are stored in the

hierarchy. This method of local planar Voronoi projection is also used to optimize the initial

distribution with Lloyd’s algorithm.
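The planar projection reduces to expressing each neighbor in an orthonormal basis of the tangent plane. A sketch (illustrative Python; this basis construction is one standard choice, not necessarily the one used in the implementation):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def norm(a):
    l = math.sqrt(dot(a, a)); return tuple(x / l for x in a)

def tangent_coords(p, n, q):
    """2D coordinates of q projected onto the plane through p with unit
    normal n, so planar Voronoi diagrams can be computed locally."""
    # pick any vector not parallel to n to seed the basis
    seed = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = norm(cross(n, seed))
    v = cross(n, u)                  # already unit length: n and u are orthonormal
    d = sub(q, p)
    return (dot(d, u), dot(d, v))
```

After the local Lloyd steps, the 2D coordinates are mapped back through the same basis and reprojected onto the surface.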

2.2.2.3 Sampling fine details

Because the Voronoi diagrams are computed over local planar projections of the surface, the point and its

immediate neighbors must be projected onto a plane with little distortion. If there are small details or

corners, or the sampling rate is very low, these projections will not be accurate.

Figure 6 The hip joint data set with 14,000 samples. Only 5,000 are needed to illustrate this image; the rest are not drawn because they are on the back or because the density of the region does not require many points.


From a sampling perspective, the Voronoi hierarchy represents a multiresolution sampling of the

surface. The ability to locally project samples onto a plane with a given error threshold is related to how well

the surface is sampled. If the underlying surface has higher frequencies than the requested sampling rate can

represent, there are no point positions that can accurately represent that detail.

One approach would be to simplify the object to the maximum representable detail for a given level of

sampling. This would be ideal for scaling the object and corresponds to level-of-detail adjustments that are

very common in computer graphics. However, one major goal is also to use the varying density to correct

for the viewing angle and for lighting. Surface simplification would not be appropriate in these cases, since

the object would change shape if the light changes intensity or a highlight moves.

From an artistic perspective, the goal is to represent some feature using fewer primitives than would

ordinarily be required. In these cases, an artist would be likely to abandon an accurate representation

altogether. Instead, most information would be abstracted away and only key details would be drawn. A

system implementing this approach would not generate very low sampling rates at all, but would switch to a

different approach such as Sousa and Prusinkiewicz’s [Sousa 2003] system for suggestive contours.

The implementation uses a simple rule to address this problem. If a point’s Voronoi cell can not be

accurately represented by projecting it and its neighbors onto a plane, that point is “locked” and no longer

considered for movement. Once points are locked, they behave as in a static hierarchy with no local

optimization steps. This does not produce visibly inferior distributions because of the nature of the surface

in these cases. If a point’s Voronoi cell can not be accurately represented on its own tangent plane, that area

can not be accurately projected onto the screen either, and the measurement of surface area and calculation

of centroids has no meaning.
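The locking rule can be sketched as follows. The precise error criterion is not spelled out above, so the test below uses one plausible measure, the fraction of each neighbor's offset that lies off the tangent plane; the threshold and names are assumptions:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def should_lock(point, normal, neighbors, max_error=0.3):
    """Lock a point when its neighborhood is not even approximately planar:
    projecting some neighbor onto the tangent plane would discard too large
    a fraction of that neighbor's offset from the point."""
    for q in neighbors:
        d = sub(q, point)
        off_plane = abs(dot(d, normal))      # distance from the tangent plane
        total = math.sqrt(dot(d, d))
        if total > 0 and off_plane / total > max_error:
            return True
    return False
```

Locked points simply skip the local optimization step, reverting to static-hierarchy behavior for that region.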

2.2.3 Rendering dynamic Voronoi hierarchies

Once the hierarchy has been created, rendering is very efficient. First, the desired points are selected. For

stippled surface rendering, this means adjusting for lighting, scale, and angle of view. Points are then

selected based on their “death” area, that is, the surface area of the largest Voronoi cell out of all of the point’s


positions. If the desired distribution requires larger Voronoi cells (that is, it is less dense) than a given point allows, the point is

not drawn.
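The selection test itself is a single comparison per point. A minimal sketch (the dictionary representation is illustrative; in practice the required cell area varies per point with lighting, scale, and viewing angle):

```python
def visible_points(points, required_cell_area):
    """Select points for rendering. Each point carries a "death" area, the
    largest Voronoi cell area over all its recorded positions; a point is
    drawn only if it can supply a cell at least as large as the desired
    (sparser = larger) cell area demands."""
    return [p for p in points if p["death_area"] >= required_cell_area]

points = [{"id": 0, "death_area": 4.0},
          {"id": 1, "death_area": 1.0},
          {"id": 2, "death_area": 0.25}]
```

Raising the required cell area (lower density) progressively drops the points that died earliest in the decimation.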

The surface normal for lighting information is computed once for each point at its first position in the

hierarchy. To properly compute densities based on each possible location of each point would require a

much more complex optimization process, and because points do not usually move very much, one test is

usually sufficient. Because all lighting information is computed at each point, the original mesh is not

required for rendering except for masking the depth buffer for hidden surface removal.

In a second pass, the ideal position of each point is selected according to which of its neighbors are

being drawn. If the point density is set globally, point selection will occur in the same order as the

decimation process and each point has a uniquely optimized position.

2.2.3.1 Ambiguous point locations

When the density of the point distribution is not scaled uniformly, each point’s list of positions may become

inconsistent. In this case, points are selected in a different order than they were removed during the hier-

archy generation process, and the hierarchy encodes more than one desirable position for a point. This

problem occurs most frequently on the edge of sharp highlights. If only one of these positions is chosen,

some artifacts can appear, as shown in Figure 3(b) on page 20.

If no one position in the hierarchy can be chosen for a point, all desired positions are averaged. This is

not guaranteed to produce good results, but works well in practice, as seen in Figure 3(c). Conceptually, the

averaging operation works because each point typically remains in a localized area on the surface. If a point’s

neighbors are “pulling” it in opposing directions, the best place to be relative to those neighbors is the center.

2.2.3.2 Edge position adjustments

The ideal stipple distribution is blue noise in screen space, but the distribution and optimization are

performed in object space. As the surface approaches a perpendicular angle to the view plane, object space is

no longer a reasonable approximation for the screen plane. The error introduced is proportional to the sine of


the angle of the surface from the screen plane. This means that, for a typical convex object, the majority of

the surface will have little error. However, some artifacts are visible around the edges, and certain objects can

exhibit this problem more significantly.

These artifacts are a result of the compression of the surface in one direction. This turns the relatively

isotropic blue noise distribution into a highly anisotropic distribution. Conceptually, the Voronoi relaxation

tends to align points in a semi-regular hexagonal grid. When viewed at an angle, these points can line up,

producing visible patterns. This effect is illustrated with an 80 degree projection in Figure 7 (a) and (b) on

page 28.

To reduce the visibility of this effect, low-frequency attenuation can be traded for decreased anisotropy by

adding a small amount of white noise. Each point is pre-assigned a random noise value between –1 and +1

to ensure temporal coherence. This noise value is scaled proportional to the amount of distortion and the

local point density when displacing each point perpendicular to the direction of distortion. The jitter vector

is J = sin θ · (d/2) · r · (E × N), where θ = cos⁻¹(E · N) is the angle of the surface from the screen

plane, d is the average distance to neighboring points in object space, r is the randomized jitter distance for

the point, E is the normalized vector to the eye, and N is the surface normal.
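In code, the jitter computation is direct (a sketch of the formula above; the function name is illustrative, and both vector inputs are assumed to be unit length):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def jitter_vector(eye, normal, avg_neighbor_dist, r):
    """J = sin(theta) * (d/2) * r * (E x N), with theta = arccos(E . N).
    eye (E) and normal (N) are unit vectors; avg_neighbor_dist (d) is the
    average object-space distance to neighboring points; r is the point's
    pre-assigned random value in [-1, 1] (fixed for temporal coherence)."""
    cos_t = sum(e * n for e, n in zip(eye, normal))
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    scale = sin_t * avg_neighbor_dist / 2.0 * r
    return tuple(scale * c for c in cross(eye, normal))
```

Note that E × N is perpendicular to both the view direction and the surface normal, so the displacement is perpendicular to the direction of foreshortening, and sin θ vanishes for surfaces facing the viewer, where no correction is needed.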

It is important not to add too much white noise or low frequency artifacts will become visible. Figure 7

shows this effect for varying amounts of noise. Without noise, as seen in (b), the horizontal compression

produces horizontal artifacts and a distractingly anisotropic distribution. More noise results in less

anisotropy but increased low frequencies which can become distracting as in (e). Moderate amounts of noise

such as (c) or (d) are probably the best compromise between low frequency attenuation and anisotropy.


Figure 7 A uniform planar distribution (a) rotated 80 degrees from the screen plane around the vertical axis (b–e). Artifacts are created because optimization is done on the untransformed plane, but the rotation around the vertical compresses the points horizontally. The horizontal distance between points in (b–e) is about 17% of the original. Adding random vertical displacements when surfaces approach perpendicular reduces some of the artifacts, trading low-frequency attenuation for decreased anisotropy. Relatively low noise values as in (c) produce the most pleasing compromise. The Fourier transforms of each distribution are seen on the bottom row. As the noise is increased, the distribution becomes more isotropic but low frequencies are more significant.

(a) (b) (c) (d) (e)


2.2.3.3 Matching two-dimensional input

Dynamic Voronoi hierarchies can also be used to generate two dimensional output that matches a reference

image. One approach would be to generate a weighted hierarchy for one specific image. The reference image

could be used to compute weighted centroids for the optimization steps and the initial distribution. This

would allow the image to be rendered at any resolution at high quality. However, because of the long

preprocessing time, it is not practical for most applications needing stippled images. More useful is the

ability to quickly match any input image using a single pre-generated hierarchy, even if quality is not as high.

This can be done using an evenly distributed hierarchy computed on the plane, and then selecting the

proper point density out of the hierarchy to match each area of the image. The process is the same as for

generating three-dimensional images: visibility is computed at each point’s initial position based on the

reference image, and the selected points are optimized relative to each other in a second pass.
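The first pass can be sketched as follows (illustrative Python; the linear mapping from gray value to density, and the use of the reciprocal density as the required cell area, are assumptions about the implementation):

```python
def stipple_density(gray, max_density):
    """Map a reference-image gray value (0 = black, 1 = white) to a target
    point density: darker areas get more points."""
    return (1.0 - gray) * max_density

def select_for_image(points, image_at, max_density):
    """points: dicts with 'pos' (x, y in [0, 1]) and 'death_area'. The
    reference image is sampled once at each point's initial position; the
    point is kept if the local density calls for Voronoi cells at least as
    small as the point's death area allows."""
    kept = []
    for p in points:
        density = stipple_density(image_at(*p["pos"]), max_density)
        if density <= 0:
            continue                      # white region: no stipples
        required_cell_area = 1.0 / density
        if p["death_area"] >= required_cell_area:
            kept.append(p)
    return kept
```

The second pass then resolves each kept point's position against its kept neighbors, exactly as in the three-dimensional case.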

Figures 8, 9, and 10 were all generated from the same 50,000 point hierarchy computed on a square.

Because a single hierarchy can be re-used for any image, preprocessing time is not significant. The run times

given in these figures are for computation of the point positions and do not include the time to render a

point at that position. These images were all generated by writing points directly to an encapsulated

PostScript file. Generating this file takes about twice as long as selecting the positions of the points.

Figure 8 Gradient image consisting of 4,957 points. Point positions were calculated in 0.06 seconds on a 1.33 GHz Athlon. The source hierarchy is an evenly sampled square consisting of 50,000 points.


Figure 9 Stippled output consisting of 12,392 points based on a photograph. The source hierarchy is an evenly sampled square consisting of 50,000 points. Around sharp transitions such as the two marked, (a) has a slight blurring effect, while with the addition of weighting (b), the points follow the edge more closely. The gap marked with the arrow is in the original photograph and is not an artifact.

(a) Image calculated using dynamic Voronoi hierarchies with “regular” point selection.

Point selection takes 0.07 seconds on a 1.33 GHz Athlon.

(b) Point positions from (a) modified using a weighted average of four nearby positions.

Point selection takes 0.13 seconds on a 1.33 GHz Athlon.


Performing only one sample of the reference image per point is very fast, but can miss some small

details that never happen to fall beneath a point’s initial position. A further problem is that the

optimization step can cause points to move some distance away from their initial position. These effects are visible as

boundaries that are smeared slightly and as gaps in small details such as the railing in Figure 9 or the thin

spike at the top of the lighthouse.

There are many possibilities for improvement if additional processing time is available. Because high-

quality but slow stippling algorithms exist for two-dimensional input, this research has focused on

generating the highest possible quality at interactive rates. My approach is to add a third pass after the point

positions are optimized relative to each other. This pass adds back in more dependence on the reference

image by sampling it at several positions around each point and moving the point to the weighted centroid of

its samples. This process can be seen as a very rough approximation of one iteration of weighted Voronoi

relaxation. The result of this optimization is visible on the lower image of Figure 9. The weighted

optimization successfully sharpens the boundaries of high-contrast edges as compared to the un-optimized

Figure 10 Stippled output consisting of 8,334 points from a volume rendering (with silhouette enhancement) of a skull. The source hierarchy is an evenly sampled square consisting of 50,000 points. The intermediate rendering was blurred before stippling because the rendering contains many high frequency details not representable by the comparatively low resolution stippling. These details would appear in the stippled result as artifacts. The blurring results in less distinct edges, but a much more coherent overall image. Point positions were calculated in 0.06 seconds on a 1.33 GHz Athlon from the blurred rendering.


image on the left. This image was computed with only four linearly interpolated samples per point.

Additional optimization is possible if more precision is required, such as adding more samples to give a

better approximation to true centroid calculation. At the extreme, the output can be used as the initial

distribution for full weighted Voronoi iteration. Because the initial distribution would already be very good,

fewer iterations should be required to achieve a high-quality result.
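The third pass can be sketched as follows (illustrative Python; the circular sample layout, darkness-as-weight scheme, and names are assumptions about how the rough weighted-centroid step might be realized):

```python
import math

def weighted_refine(pos, image_at, offset, samples=4):
    """Sample the reference image at `samples` positions on a circle of
    radius `offset` around the point and move the point to the
    darkness-weighted centroid of those samples: a rough approximation of
    one weighted Voronoi relaxation step."""
    total = wx = wy = 0.0
    for k in range(samples):
        a = 2.0 * math.pi * k / samples
        sx = pos[0] + offset * math.cos(a)
        sy = pos[1] + offset * math.sin(a)
        w = 1.0 - image_at(sx, sy)      # darker samples pull harder
        total += w; wx += w * sx; wy += w * sy
    return (wx / total, wy / total) if total > 0 else pos
```

On a high-contrast edge, the dark-side samples dominate the centroid, which is what pulls points back onto the dark region and sharpens the boundary.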

2.2.4 Performance

Like all algorithms based on Voronoi relaxation, the hierarchy generation process is very slow, even with the

use of hardware accelerated Voronoi diagram computation [Hoff 1999]: reading the frame buffer and

traversing the resulting bitmap are both time-intensive operations. For an even plane of 5,000 points, one

relaxation step takes 15 seconds on a 1.33 GHz Athlon. Removal and the accompanying optimization take

about 0.12 seconds for a single point, or about 10 minutes for all 5,000 points. With 15 initial relaxation steps,

the plane takes about 15 minutes total to generate. The teapot example has many more points and requires

extra time for projection to and from the tangent planes for Voronoi optimization. The 52,000 point

example in Figure 5 on page 23 takes about 30 minutes for one full Voronoi relaxation step, and point

removals take about ⅓ second. The full generation time with five initial relaxation steps is about seven hours.

Once the hierarchy has been created, rendering the final image is much more efficient because the

hierarchy contains all the optimization information. The 52,000 point teapot example can be rendered at 10

frames per second on a 1.33 GHz Athlon. This includes the time to compute optimal point density at each

sample, lighting, and for traversing the lists of point movements. Because most computation is done in

software, performance does not depend on the video card to a significant degree.

Due to the significant preprocessing cost, the applications of the dynamic hierarchy to surface

rendering are limited to those cases where a given hierarchy can be re-used or when sample distribution is

extremely important, as in stippling. If the three-dimensional point motion is not necessary, as in the

example of stippling based on a reference image, a single two-dimensional hierarchy can be re-used for all


possible input. As a result, rendering is extremely efficient: the example images in Figures 8, 9, and 10 can all

be generated at 10–15 frames per second using a single pre-generated 50,000 point hierarchy.

In an optimized Voronoi distribution, each point has six neighbors, so when a point is removed, its six

neighbors move to fill the hole. The result is that each point occupies an average of six different positions over its lifetime. At

each three-dimensional position, the index of the point whose removal (or addition) caused the movement

is also stored. Therefore, with 32-bit numbers, each of the (on average) six positions of each point requires 16

bytes of storage. With the addition of the threshold density at which each point is removed (4 bytes) the

surface normal (12 bytes), and bookkeeping information for the position lists (8 bytes) each point requires a

total of 120 bytes of storage. This is about 4.3 times the storage of a static hierarchy (28 bytes to store position,

normal, and threshold density).
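The byte accounting can be checked mechanically. A sketch using Python's struct module (the field ordering and the two 32-bit bookkeeping fields are assumptions; only the per-field sizes and totals come from the text):

```python
import struct

# One recorded position: x, y, z as 32-bit floats plus the 32-bit index of
# the point whose removal (or addition) caused the move.
POSITION_ENTRY = struct.Struct("<fffi")      # 12 + 4 = 16 bytes

# Per-point fixed data: removal-threshold density, surface normal, and two
# 32-bit bookkeeping fields for the position list.
FIXED_PART = struct.Struct("<f fff ii")      # 4 + 12 + 8 = 24 bytes

# A static hierarchy needs only position, normal, and threshold density.
STATIC_POINT = struct.Struct("<fff fff f")   # 12 + 12 + 4 = 28 bytes

AVG_POSITIONS = 6                            # six neighbors on average
dynamic_bytes = AVG_POSITIONS * POSITION_ENTRY.size + FIXED_PART.size
```

The little-endian ("<") formats disable padding, so each Struct.size is exactly the sum of its fields.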

Figure 11 The paths that points take as the density is changed. The longer paths belong to the points that are drawn at very sparse densities. The very short paths belong to the points that will be removed first if the density is decreased.


2.2.5 Future work in stippling

The stipple rendering style has promise in visualization mostly when combining it with other styles for

multi-modality rendering. This technique may be useful for displaying static surfaces, such as simulation

boundaries, in a minimal style clearly differentiable from the simulation itself. Because many of these types

of structures are static, one three-dimensional hierarchy can be precomputed and re-used for many

visualizations or frames of a simulation. For applications requiring more flexibility, such as frequently

changing isosurfaces, the three-dimensional hierarchies are not practical due to the high generation time. In

these applications, a two-dimensional stippling style can be used, which gives interactive updating at the

expense of a loss of three-dimensional depth cues as the object is moved.

The animation and soft transition of point additions and deletions is another area of stippling requiring

further work. Some previous work has used soft transitions as a way to deal with problems of distribution in

a static hierarchy: because points fade in and out, the change in density is less noticeable. This work

concentrates on the harder problem of high-quality binary stippling, which is more true to the original

artistic style, preserves the advantage of stippling when printed, and offers greater flexibility in rendering:

because changes in color and opacity of points are not required to give good distributions, these variables

can be used to illustrate other dimensions of the input data. However, softer transitions are useful if the

extra dimensionality is not required and the output will only be displayed on a computer screen. In this case,

point additions and deletions would be faded in and out, and the neighbors of those points would be moved

gradually to their new positions. This approach is difficult because the position of a point depends on its

neighbors, and these neighbors appear and disappear in an unpredictable order. It is likely that the only

approach would be for the rendering to lag several frames behind the actual distribution so that softer

transitions can be made.

2.2.6 Review of dynamic Voronoi hierarchies

Dynamic Voronoi hierarchies are a flexible method for computing multiresolution samplings of surfaces

with blue noise characteristics. By allowing the samples to move slightly on the surface, the density can be


adjusted while maintaining a relatively even distribution. And, because the hierarchy is precomputed,

interactive rendering times are possible.

The precomputation step uses hardware-accelerated Voronoi diagram computation, but it still takes

significant resources in time and computation power. Moreover, because reading back and processing the

frame buffer takes significant time, the hardware accelerated approach may not have provided the best

performance. The use of these discrete intermediate results also introduces numerical instability, making

implementation more difficult, especially in the case of three-dimensional hierarchies that may have

discontinuities. More study is necessary to determine the best implementation approach for Voronoi

diagram computation.

Although useful for any application needing fast blue-noise-like distributions, including sampling for

antialiasing, dynamic Voronoi hierarchies are used here to generate stippled illustrations. This drawing style

consists of distinctly visible dots, so it is particularly sensitive to the quality of point distributions. Using a

pre-generated hierarchy computed on a plane, the algorithm can generate stippled output to match a

reference image at much higher qualities than were previously possible at interactive rates. This pre-

computed hierarchy also means that pre-computation time is not a factor in image generation speed.

Dynamic Voronoi hierarchies can also be used for three-dimensional rendering, providing important

depth cues as the points move with the surface. These images are aesthetically attractive, and are also of

practical value because the style is distinct from most other forms of computer-generated rendering, making

it visually separate from other parts of an image. Three-dimensional applications are more limited than the

two-dimensional case because the hierarchies can not be re-used for different surfaces. This means that

some cases where multi-modality rendering would be desired, such as embedded isosurfaces, are not

appropriate for this technique because the surface can not be interactively changed. In some of these cases, a

two-dimensional approach would work, although depth cues would be lost. In other cases, such as

simulation boundaries, the surface is known in advance and the three-dimensional dynamic Voronoi

hierarchy is a good rendering choice.


2.3 Perception-based illustrative rendering

The stippling style discussed above is primarily concerned with the optimal placement of dots as defined by

numerical measurements. In contrast, many other techniques are focused more on incorporating higher-

level artistic concepts to produce images that are æsthetically pleasing. They can also provide concrete

benefits over traditional methods: because the representation is not constrained to be realistic, a variety of

enhancements can be added that make the image more informative.

Much research has gone into developing precise and artistically pleasing rendering methods in a

variety of mediums, from pen-and-ink [Winkenbach 1994], to charcoal [Majumder 2002], to watercolor

[Curtis 1997]. Typical applications of these algorithms involve simple, easily understandable geometry of single

objects such as sculptures. Many real-world scenes, however, include large amounts of complex overlapping

geometry that can be much more difficult to understand when viewing an image. Although these scenes

consist of increasing numbers of polygons, points, and textures, this research is focused more on perceived

complexity, that is, how hard it is to understand a complex area rather than how hard it is to process the

geometry. Perceived complexity is especially problematic for non-photorealistic techniques because artists

treat complex regions very differently from smooth surfaces with few details.

Human artists keep in mind multiple levels of abstraction when producing artwork. At the highest

level is the overarching artistic æsthetic which we can not hope to duplicate by computer. At the lowest level

of abstraction are the individual objects and geometry that make up the scene, including edges, surfaces, and

texture. In between these extremes exists knowledge about the visual interaction of objects. Using domain

knowledge and intuition about how a viewer will perceive the scene, the artist picks which details to

represent exactly, and which details to represent at a higher level of abstraction. The result is an image where

objects and features of interest are very clearly defined, and additional details are represented at a minimum

level. Figure 12 illustrates two examples of the same scene, one with a more abstract representation, and one

with potentially ambiguous boundaries highlighted.


The bulk of computer graphics deals with the lowest level of abstraction: that of the individual

primitives. If multiple objects are considered for a given calculation, it is almost always to compute the

interaction of light between them rather than how they will be perceived when combined in the final image.

While this approach is well-suited for rendering photorealistic scenes because it mimics reality, it is poor for

artistic renderings where much of the information exists only in the realm of suggestion. In these cases, a

higher-level view must be taken to generate high-quality output.

The pen-and-ink style of rendering is particularly prone to overwhelming detail. While it has the

advantage that very small details and surface properties can be very accurately and simply represented, it is

difficult to represent large amounts of detail in small areas. The binary nature of the medium means that if

too many lines are drawn close together, the result will be an incomprehensible black mass. Even if lines are

drawn far enough apart to be well-resolved, many small lines in a small area can be distracting and convey a

limited amount of information.

The goal is to deal with highly complex geometrical models in a way that gives clear, meaningful, and

artistically believable renderings. This is done by integrating a two-dimensional image processing step into

the three-dimensional rendering pipeline, providing more information to make choices about handling complex geometry and artistic style.

Figure 12 A hand-drawn example of how an artist may incorporate image-space information in a rendering: (a) a scene where different geometry has adjacent, similar-looking areas; (b) an abstract representation where similar overlapping regions are merged into larger meta-objects in screen space; (c) a more precise representation where potentially ambiguous boundaries are highlighted.

The hybrid image generation pipeline narrows the gap between the multilevel view taken by human

artists and the more limited local view typically taken by computer algorithms. This pipeline incorporates

both the low-level geometrical information of the scene and view-dependent, high-level information that

reflects how the scene will be perceived by a viewer. The additional information will allow the final

rendering to more closely emulate a human-generated drawing.

2.3.1 Image segmentation

The hybrid rendering pipeline in this work includes image segmentation. Therefore, it is important to

discuss some of the segmentation strategies that can be used, although they are not strictly non-

photorealistic rendering techniques and I do not attempt to improve on existing segmentation techniques.

Possibly the simplest segmentation technique is thresholding, in which one or more threshold values are

chosen and pixels are assigned to regions based on value alone. There have been many proposed

enhancements to make thresholding produce better results [Sahoo 1988], for example, by incorporating

spatial information [Mardia 1988].
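Thresholding can be sketched in a few lines. The function name, image values, and cut points below are invented for illustration, not taken from any cited implementation:

```python
import numpy as np

def threshold_segment(image, thresholds):
    """Assign each pixel to a region index based on value alone.

    `thresholds` is a list of cut points; each pixel is labeled with
    the number of thresholds its value exceeds.
    """
    labels = np.zeros(image.shape, dtype=int)
    for t in sorted(thresholds):
        labels += (image > t).astype(int)
    return labels

# Example: a tiny grayscale image split into three regions.
img = np.array([[0.1, 0.4], [0.6, 0.9]])
print(threshold_segment(img, [0.3, 0.7]))  # → [[0 1], [1 2]]
```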

Another common class of segmentation algorithms is region growing. In these methods, certain seed

points are selected in each region, and the regions are expanded outward to incorporate material of similar

character [Adams 1994]. Gradient information [Sato 2000] and extracted edges [Pavlidis 1990] can also be

used to help define the boundaries of segments. Watershed algorithms [Meyer 1990, Vincent 1991] treat the

image as a heightfield and simulate rising water that isolates different regions. Enhancements to this

algorithm use edges [Haris 1998] and gradient information [Gauch 1999] to improve the segmentation and

build up a logical hierarchy of regions. There are also many methods that fall under the category of

“relaxation,” in which a segmentation is iteratively optimized [Rosenfeld 1976, Rosenfeld 1981]. Some are

also interactive, for example, [Hansen 1997]. Other techniques include snakes [Kass 1987], balloons [Cohen

1991], and region competition [Zhu 1996].
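Seeded region growing in its simplest, value-based form can be sketched as a breadth-first flood fill. The image, seed, and tolerance below are hypothetical, and the acceptance rule (difference from the seed value) is only one of the similarity criteria the cited methods use:

```python
import numpy as np
from collections import deque

def grow_region(image, seed, tol):
    """Flood-fill outward from `seed`, accepting 4-connected pixels
    whose value differs from the seed value by at most `tol`."""
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(image[ny, nx] - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

img = np.array([[0.20, 0.25, 0.90],
                [0.22, 0.30, 0.95],
                [0.80, 0.85, 0.90]])
print(grow_region(img, (0, 0), 0.15).astype(int))  # top-left 2x2 block
```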


Normalized-cut image segmentation [Shi 1997] is an advanced image segmentation technique that can

achieve more meaningful results than simple methods like region growing. This method repeatedly divides

the image into two (or occasionally more) portions, maximizing both the difference between identified

regions and their internal self-similarity. As a result, it is not as sensitive to noise, small fluctuations, or

indistinct boundaries as region-growing and thresholding techniques. Since it is a global optimization, it is

also not dependent on the choice of seed points. Unfortunately, implementation is complex and subject to

numerical instability, and execution time can be prohibitive for some applications, especially for larger

images. The ability to use any metric for measuring the similarity between pixels makes this algorithm easy

to enhance. Malik et al. [2001] use this ability to segment based on the texture of regions by convolving the

image with a variety of kernels to extract texture frequency, direction, and magnitude. This method is able to

handle images with strong but regular textures that would confuse most other algorithms.
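A bare-bones sketch of a single normalized-cut bipartition, using the generalized-eigenvector relaxation of Shi and Malik [Shi 1997], may clarify the idea. The tiny affinity matrix is invented; a real implementation works on large sparse systems, recurses on each side, and must cope with the numerical-stability issues noted above:

```python
import numpy as np

def normalized_cut_once(W):
    """One bipartition via the normalized-cut relaxation: threshold
    the second-smallest generalized eigenvector of (D - W) y = λ D y.
    A minimal sketch for a small, dense affinity matrix W."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.diag(d) - W
    # The symmetrically normalized Laplacian has the same eigenvectors
    # up to the D^(1/2) scaling; eigh returns ascending eigenvalues.
    vals, vecs = np.linalg.eigh(D_inv_sqrt @ L @ D_inv_sqrt)
    y = D_inv_sqrt @ vecs[:, 1]
    return y > np.median(y)

# Two pixel clusters with strong internal affinity and one weak link.
W = np.array([[0.00, 1, 1, 0.01, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0.01, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(normalized_cut_once(W))  # separates {0,1,2} from {3,4,5}
```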

2.3.2 The hybrid pipeline

One of the most useful properties of artistic rendering styles is the ability to simplify an image while

maintaining the perception that it is complete. Simpler images are easier to understand and make the details

important to the artist generating the image more clear to the person viewing it. The ability to simplify the

image comes from both domain knowledge of what needs to be represented and what can be omitted, and

the multi-level understanding the artist has of the scene. Certain low-level details may be omitted or

simplified if they are unimportant in a higher-level context. For example, small objects are left out or

simplified, or a silhouette line that might otherwise be drawn can be omitted if the objects it divides already

have sufficient contrast.

A multi-level approach is also necessary for the high-quality implementation of many rendering styles.

A pen-and-ink artist may group objects of similar shading and material properties and treat the group as

one meta-object having a single stroke style. Shadows are often represented as long lines crossing object

boundaries. Painters often build up layers, first ignoring image boundaries and filling in large areas of

similar color. Smaller details are then filled in on top.


Traditional computer graphics techniques, which deal with only the low-level primitives, are unable to

simplify based on a higher-level understanding of the scene. A hybrid pipeline, incorporating a large

amount of two-dimensional scene information, can address many of these shortcomings. This work presents a

hybrid pipeline for the automatic or semi-automatic generation of non-photorealistic renderings. The

approach attempts to be style-independent: the pipeline should be useful for a wide range of rendering styles.

The pipeline is divided into three distinct stages. First, the three-dimensional processing stage works from

the source geometry to generate a series of renderings. Next, these renderings are processed in the two-

dimensional stage to extract information describing the scene. Last, the final rendering stage generates the

output image based on the information from the previous stages. See Figure 13 for a chart describing this

process.
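The three-stage structure can be sketched schematically. The function names, modality dictionary, and per-pixel strategy codes below are illustrative placeholders, with trivial stand-ins for the real rasterizer, segmenter, and stroke renderer:

```python
import numpy as np

def render_modalities(scene):
    """3D stage (stubbed): produce view-dependent 2D buffers. Here
    `scene` is already a dict of tiny arrays standing in for real
    rasterizer output in each modality."""
    return {"grayscale": scene["grayscale"],
            "depth": scene["depth"],
            "edges": scene["edges"]}

def extract_scene_info(buffers, threshold=0.5):
    """2D stage: a single image-processing pass; real use would run
    segmentation, blurring, and other analyses here."""
    return {"complex": buffers["edges"] > threshold}

def final_render(buffers, info):
    """Final stage: pick a strategy per pixel from the 2D analysis.
    'H' = hatch simple areas, 'A' = abstract complex areas."""
    return np.where(info["complex"], "A", "H")

scene = {"grayscale": np.array([[0.2, 0.8]]),
         "depth": np.array([[1.0, 5.0]]),
         "edges": np.array([[0.1, 0.9]])}
buffers = render_modalities(scene)
info = extract_scene_info(buffers)
print(final_render(buffers, info))  # → [['H' 'A']]
```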

2.3.2.1 The three-dimensional processing stage

The three-dimensional processing stage reads the source data and generates a series of two-dimensional

renderings for use in the two-dimensional processing stage. This work deals primarily with polygonal input

data, although volumetric data could be used directly, as a source for an isosurface, or in combination with

some other type of data such as a previously extracted isosurface. The type of renderings that must be

generated depends on the needs of the following two stages. Some possibilities are:

□ Grayscale and/or color rendering. This is the standard rendering used to select stroke densities or paint

colors when generating the final output.

□ Depth rendering. Two objects can be adjacent in the image and have identical color, but they may be at

very different depths. The depth information can be used to determine which edges need more or less

emphasis.

□ Edge rendering. The approximate density of visible edges over a given area can give hints about

complexity.


□ Normal buffer. Surface normals can give important scene information. Sharp changes in normals may

not always correspond to changes in depth or lighting, but they may be important to give the proper

impression of an object.

The silhouette is also calculated during the three-dimensional processing stage. This is done both to generate

an edge rendering for the two-dimensional processing stage, and for the final rendering stage to use directly,

if needed. Many non-photorealistic rendering techniques rely on high-quality silhouette edges. In these

cases, it is best to use software silhouette edge extraction for its precise, line-based output.

Unfortunately, this is a slow operation for large models since all edges must be checked, and high-

performance techniques rely on pre-computation or temporal coherence, neither of which is helpful for a

single computation. If such precision is not required for the final rendering, a much faster hardware-

accelerated technique could be used to generate the edge rendering, such as [Raskar 1999].

2.3.2.2 The two-dimensional processing stage

The goal of the two-dimensional rendering stage is to extract high-level information from the scene as it

would be perceived by the viewer. These image-processing operations are therefore based on the view-

dependent renderings generated in the three-dimensional processing stage.

□ Value-based region growing. One of the simplest segmentation techniques is to pick a point and grow a

region outward until pixels differ from the seed point by more than a pre-determined value. This

method is easy to implement and can be used when speed or ease-of-implementation are important.

Unfortunately, it can be very sensitive to noise and the selection of the seed point.

□ Gradient-based region growing. For some types of renderings, it is desirable to make regions of

contiguous smooth gradient. Gradient-based region growing expands regions until a given gradient

magnitude is reached. This style is appropriate for depth images that consist of large areas of smooth

gradient.

□ Gaussian blur. Gaussian blur can be useful as a pre-processing step before another technique such as

segmentation. For example, by generating a rendering of all edges in a scene (silhouette edges and/or


creases) and blurring it, one obtains a map of the approximate edge complexity of various regions.

Segmenting or thresholding the resulting image will result in regions of different complexities. Such

information could be used by the final rendering phase to select rendering strategies.

□ Normalized-cut image segmentation. Normalized cut [Shi 1997] is a more advanced image

segmentation technique that can achieve more meaningful results than simple methods like region

growing.

□ Texture-based segmentation. Some image segmentation techniques can segment images into regions of

similar texture [Malik 2001]. This type of segmentation is particularly useful when used in conjunction

with texture extraction (see below).

□ Other segmentation strategies. The field of image segmentation has a long history and a great variety of

algorithms, many of which may be well-suited for certain tasks in the pipeline. Some of these

techniques are discussed in the previous work section.

□ Texture extraction. Some regions, especially those computed by texture- or edge-complexity-based segmentation, should be rendered differently according to their texture. Applying a texture extraction step such as that used by Malik et al. [2001] for segmentation could give important information about the overall visual “feel” of a region. If the texture of the region runs strongly in one or two principal directions, a rendering style should be chosen to reflect that. If there is little texture coherence, a scribbled rendering style may be more appropriate.

Figure 13 The rendering pipeline, with an intermediate rendering stage and image processing (original geometry → intermediate renderings in depth, grayscale, and other modalities → image processing → final NPR rendering). These extra steps allow view-dependent image information to be integrated into the final rendering. In addition to the segmentation results, all geometry and the bi-directional mapping between the geometry and the segmentation are available to the final renderer.

2.3.2.3 The final rendering stage

The final rendering stage is where the data from the two-dimensional and three-dimensional stages is

brought together to form the final image. It implements one or more non-photorealistic rendering styles that

utilize the previously computed three-dimensional and two-dimensional information. More information on

specific rendering styles, their requirements, and implementation is available in section 2.3.4: “Rendering

Styles.” The final rendering stage can also implement any photorealistic technique desired.

2.3.2.4 Performance

The pipeline is not intended to be interactive. Some of the operations, especially the segmenting and stroke

generation operations, are inherently expensive, and, since there are many such steps, interactivity could not

be achieved even if each step were interactive on its own. Rather, the aim is to generate higher-quality

output, that is, output that is both informative and artistically pleasing.

The three-dimensional processing phase is the fastest and most flexible because it can take advantage

of hardware acceleration, and there are many performance/quality tradeoffs that can be made depending on

the needs of the output. The grayscale, depth buffer, normal, and edge rendering operations, for example,

can all take advantage of modern graphics cards’ rasterization capabilities for interactive or near-interactive

speeds. If higher-quality output is desired for computing visibility or shadows, software methods can be

used at the expense of speed. Unfortunately, one of the slowest operations of modern graphics hardware is

reading the frame buffer from the graphics card into main memory, an operation that is required for each

rendering style.


Silhouette edge calculation can take several seconds, especially for large models, since all edges must

be traversed. But exact silhouette edges may not be necessary for a given final rendering style, so a fast,

potentially interactive hardware accelerated method like [Raskar 1999] could be used. If pre-processing is

acceptable, a method such as [Buchanan 2000] can be used to achieve interactive rates.

The two-dimensional processing stage is very difficult to run quickly. Even simple, fast techniques such

as region growing can take many seconds to complete since there are many individual region-growing steps

that must be run to segment a single image. More complex techniques such as relaxation [Hansen 1997] or

normalized cut [Shi 1997] can require many minutes or even hours to complete. One compromise between

quality and speed is to use a very simple segmentation technique on the whole image to obtain a coarse,

first-level segmentation. A more complex algorithm could be used to refine the segmentation. Shi and Malik

showed that the run time of normalized cut is typically about O(n^(3/2)), with n being the number of pixels of

the area being segmented, so using smaller input sets can result in a substantial improvement in speed.
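The benefit of this compromise can be estimated directly from the O(n^(3/2)) cost model. The calculation below is a back-of-the-envelope sketch that ignores the cost of the coarse pass and any cross-tile boundary effects:

```python
# Refining k regions of n/k pixels each with an O(n^(3/2)) algorithm
# costs k * (n/k)^(3/2) = n^(3/2) / sqrt(k), i.e. a sqrt(k)-fold
# speedup over a single global normalized cut.
n, k = 512 * 512, 16
global_cost = n ** 1.5
tiled_cost = k * (n / k) ** 1.5
print(global_cost / tiled_cost)  # → 4.0
```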

2.3.3 Differences from other hybrid work

In some ways, the hybrid method is similar to several previous works, but there are also important

differences. One key paper is Deussen and Strothotte’s Computer-Generated Pen-and-Ink Illustration of Trees

[2000 (b)], discussed on page 16. The hybrid method is a generalization of Deussen’s technique, which

focuses on techniques specifically for dealing with the challenge of realistic tree rendering. Deussen’s

algorithm must be given special sets of data for the branches and the leaves, which it is hard-coded to treat

differently for realistic results. This limits it to rendering plants in a specific range of styles, which, while

attractive, does not nearly encompass the full range of hand drawn plants.

The goal of the hybrid method is to achieve similar results for a wider range of scenes with little or no

inherent scene knowledge. The addition of image processing, especially segmentation, for multiple

rendering modalities is key, as it helps the program differentiate areas that should be rendered with different

strategies. It incorporates both more three-dimensional information, and a wider range of two-dimensional


image processing techniques in an attempt to automatically make decisions that were a result of specific

domain knowledge in Deussen’s work.

Some of these shortcomings were addressed in Hamel and Strothotte’s Capturing and Re-Using

Rendition Styles for Non-Photorealistic Rendering [1999] (discussed on page 16). This technique does use

multiple three-dimensional rendering styles in order to learn more about the scene for the final rendering

stage. However, these two-dimensional base renderings are used only to generate input values to a mapping

function. No image segmentation or other two-dimensional operations are performed to gain a higher-level

understanding of the scene.

2.3.4 Rendering styles

Before attempting to replicate any specific rendering style, we must first determine what makes it unique,

and which types of information will be necessary in order to emulate it as best as possible. The discussion

and implementation in this research focus primarily on pen-and-ink rendering, but the technique itself is

not limited to any specific rendering style, and the potential for watercolor rendering is also analyzed.

2.3.4.1 The pen-and-ink rendering style

The pen-and-ink rendering style has been used extensively for a wide range of applications. The discrete,

black-and-white nature of pen-and-ink illustrations, like the copper-plate and wood engravings before them,

makes them easy for printers to incorporate into black-and-white text. The fine lines allow for a great deal of

detail to be represented precisely, making them ideal for technical illustrations. These limitations are less

significant for today’s increasingly electronic documents. But, the style is still useful for artistic reasons and

because abstraction can be presented in a natural way [Schumann 1996] that is not as distracting as missing

detail from an otherwise realistic image.


2.3.4.2 Analysis of traditional pen-and-ink illustrations

Silhouette edges are one of the key features of many pen-and-ink drawings because the medium is

inherently line-based and because the limited dynamic range makes drawing edges one of the most effective

ways to add detail to an illustration.

Very long lines are seldom drawn because they are difficult to control and can result in uneven texture.

As a result, artists usually divide constant or smoothly varying regions into small blocks, each of which is filled

in using a series of short strokes. To convey a very smooth effect, these blocks will have aligned stroke

directions and will be arranged in a very regular pattern. Less-regular patterns can be represented using

random block patterns or stroke directions; a variety of weave patterns are also possible. Long lines and lines

that are curved are difficult to control accurately using a pen, and so are seldom used, even when the underlying surface is curved. This observation contradicts much of the computer-generated pen-and-ink drawing

literature, which, although generally producing high-quality results, has focused a great deal of effort in

drawing contour lines [Hertzmann 2000, Girshick 2000]. Instead, an artist often chooses one stroke

direction that is appropriate for the general curvature of the surface in question, and the three-dimensional

effect is completed by careful shading or overlapping of different stroke directions. Curved lines are typically

only found on silhouette edges or in scribbles representing very uneven or complex regions.

Finally, pen-and-ink illustrations usually leave out a lot of detail. Since all strokes look similar, complex

regions can quickly become overwhelming if all details are drawn. As a result, most successful illustrations

represent important details using full detail, and leave the rest to suggestion. Although potentially

confusing, psychological studies [Cavanagh 1989] have shown that the human brain is very good at filling in

details based on simple suggestions such as shadows and outlines.

2.3.4.3 Requirements for a renderer

There is a variety of two-dimensional and three-dimensional data that can help in duplicating the attributes

of a hand-drawn pen-and-ink illustration. In particular, the renderer should have at its disposal:


□ Silhouette edges: One of the easiest and most well-studied areas is silhouette detection, which must be done

in three-dimensional space for accurate results. However, an artistic silhouette rendering leaves plenty

of room for interpretation because many silhouette edges may be omitted if they are not necessary.

□ Curvature: Stroke directions are often based on principal curvature, which should be available on a

per-pixel basis and as dominant directions for different logical regions.

□ Scene rendering: A rendered version of the scene can be used to determine if there is sufficient contrast

in an area where a silhouette edge may not be necessary. The grayscale value would also be used as a

source for stroke density.

□ Edge density map: A blurred rendering of edges indicates areas where there may be an excessive

amount of detail and a simplified representation should be used instead.

□ Segmentation: Segmentation information is very important for creating blocks of similar strokes.

2.3.5 Complex silhouette rendering

Much of the information in pen-and-ink illustrations is contributed by the silhouette lines. However, the

portions of an image with very high geometric complexity have large numbers of silhouette lines that can

interfere with each other and make the result meaningless. There are two, sometimes conflicting goals in

silhouette rendering. First, as many important silhouette lines and as much texture as possible should be preserved.

Second, the approximate grayscale built up by the lines should accurately reflect the darkness or brightness

Figure 14 Renderings of a tree model showing the target grayscale image (a) and the extracted silhouette edges (b).


of a scene. I use a reference grayscale rendering to determine the density of the scene and to ensure that

grayscale value is consistent. Value must be consistent both within complex regions and in relation to non-

complex regions potentially rendered with other algorithms because it can give important visual clues about

shape which are lost if its use is inconsistent. Figure 14 shows the reference grayscale rendering and all

extracted silhouette edges for the tree model. Despite the simple square leaves, the silhouette rendering is far

too complex to look good. More realistic leaves would be even more confusing.

2.3.5.1 Generating complexity maps

One of the key advantages that the hybrid pipeline offers is the ability to identify complex regions that may

require special treatment. For this purpose, a two-dimensional complexity map is generated that measures

the approximate complexity across the image. Texture maps, hierarchical stroke textures, and procedural

shaders are all sources of complexity in a scene. This work focuses on geometric complexity, which is

measured by rendering all visible silhouette edges and blurring the result with a Gaussian kernel of a user-

specified radius. Complex regions are therefore defined to be those areas where the complexity map value is

above a given threshold, meaning that there are a large number of silhouette edges in the vicinity. Figure 15

shows a complexity map corresponding to Figure 14.

The size of the Gaussian kernel should reflect how close lines can be without becoming ambiguous or

confusingly complex. It depends on the output resolution, the output image size, expected viewing distance,

Figure 15 Complexity map generated from the silhouette edge rendering in Figure 14(b). The leaves and trunk can be differentiated by the value of the complexity map. The radius of the blur can be adjusted according to the final image size and expected viewing distance.


and artistic aesthetic. Small kernels should be used when a very detailed output image is required, as strokes

can be very close without being merged by the blur operation. Larger kernels are appropriate for images

designed to be viewed at a greater distance, reproduced at low resolution, or when less detailed, more

impressionistic output is desired.
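The complexity-map construction — blur a binary silhouette-edge rendering with a user-sized Gaussian kernel, then threshold — can be sketched as follows. The kernel size and threshold are arbitrary illustrative choices, and the separable convolution is a simple stand-in for any Gaussian blur:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def complexity_map(edge_image, sigma):
    """Blur a binary silhouette-edge rendering with a separable
    Gaussian; high values mark areas dense with edges."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    padded = np.pad(edge_image.astype(float), radius, mode="edge")
    # Convolve rows, then columns, then crop the padding back off.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
    return out[radius:-radius, radius:-radius]

edges = np.zeros((9, 9))
edges[4, 2:7] = 1.0               # a short run of silhouette edges
cmap = complexity_map(edges, sigma=1.0)
complex_region = cmap > 0.15      # hypothetical complexity threshold
print(complex_region[4, 4], complex_region[0, 0])  # → True False
```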

2.3.5.2 Separating regions

Complex areas are separated into regions requiring different treatment during silhouette rendering. These

areas are identified using a combination of the target grayscale rendering and a rendering of the silhouette

edges that has been blurred. The blurred edge rendering represents the approximate grayscale value that the

viewer will perceive if all edges are drawn, and uses the same user-specified kernel as the complexity map.

This blurred silhouette edge rendering is identical to the complexity map in the current implementation, but

this is not generally the case. More general definitions of complexity are also possible that would contribute

to the complexity map, and it is possible that additional candidate strokes should be considered in the

silhouette rendering.

By comparing the target grayscale rendering and the blurred edge rendering, the image is separated

into three regions. First, there are those areas where the edge rendering approximates the correct grayscale

value of the target rendering. These areas need no special handling because drawing all silhouette edges

closely approximates the grayscale value while preserving all details. Second, there are the very dark

areas of the target rendering where, even though the edge density is sufficient to consider the area complex,

it is not enough to build up the desired grayscale value. Last are the regions where the impression given by

rendering all visible silhouette edges is darker than the target rendering. The areas that require darkening are

straightforward to handle because darkness can be added using traditional crosshatching on top of the

silhouette rendering, a technique also used by artists. The most common areas, however, are those requiring

lightening since complex areas will usually have large numbers of silhouette edges.
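The three-way separation can be sketched as a per-pixel comparison. The ink-density convention (0 = paper white, 1 = solid black) and the tolerance are assumptions for illustration, not the dissertation's exact parameters:

```python
import numpy as np

def classify_regions(target, blurred_edges, tol=0.1):
    """Compare a target grayscale rendering with a blurred rendering
    of all silhouette edges (both as ink densities in [0, 1]).

    Returns per-pixel labels:
      'ok'      -- drawing all edges approximates the target value
      'darken'  -- target is darker than the edges build up; add
                   crosshatching on top of the silhouette rendering
      'lighten' -- all edges together are too dark; some silhouette
                   lines must be removed
    """
    diff = target - blurred_edges
    labels = np.full(target.shape, "ok", dtype=object)
    labels[diff > tol] = "darken"
    labels[diff < -tol] = "lighten"
    return labels

target = np.array([0.30, 0.90, 0.20])
edges = np.array([0.28, 0.40, 0.70])
print(classify_regions(target, edges))  # → ['ok' 'darken' 'lighten']
```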


2.3.5.3 Rendering lightly colored complex regions

Representing very complex regions using silhouette edge lines often produces very dark masses of lines, but

lighting and surface properties may dictate that the area should actually be a light color. For these areas,

silhouette lines should be removed so that the correct value is represented, while preserving the

characteristics of the geometry as much as possible. A depth-based method is used to identify important

silhouette lines, with the added goal that the selected lines should be arranged such that the desired grayscale is

approximated.

The importance of a silhouette line segment is measured by the difference in depth on each side of the

segment. Those line segments separating objects very far apart in Z are most likely to represent the edges of

large logical regions such as clumps of leaves or the outermost edges of a group of objects. Manually

generated tags or spatial partitioning could also be used to identify important boundaries.

The depth difference for each candidate silhouette line segment is measured at its midpoint. This

method is fast and simple, and is appropriate for the finely tessellated polygonal models used here. Measuring this difference using the Z-buffer does not work well for small regions with large numbers of edges

since visible surfaces may not be well-resolved in the Z-buffer. If the model has large numbers of very long

line segments in the complex regions, depth differences may vary significantly over the length of the line. To

support these types of models, one could evaluate the depth difference at intervals along each line and break

them into smaller lines of relatively constant depth difference.
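The midpoint depth-difference measure can be sketched as below. For convenience the sketch samples a depth image on either side of the segment; as the text notes, small features may not be well-resolved in a Z-buffer, so a real implementation would evaluate depths from the geometry instead. All names and the one-pixel offset are illustrative:

```python
import numpy as np

def line_importance(depth, p0, p1, offset=1):
    """Importance of a silhouette segment = depth difference across
    its midpoint, sampled `offset` pixels to either side of the line.
    `depth` is a depth image; p0 and p1 are (y, x) endpoints."""
    mid = ((p0[0] + p1[0]) // 2, (p0[1] + p1[1]) // 2)
    # Unit normal of the segment, rounded to a pixel step.
    dy, dx = p1[0] - p0[0], p1[1] - p0[1]
    n = np.array([-dx, dy], dtype=float)
    n = np.round(offset * n / np.hypot(*n)).astype(int)
    a = depth[mid[0] + n[0], mid[1] + n[1]]
    b = depth[mid[0] - n[0], mid[1] - n[1]]
    return abs(a - b)

# A vertical silhouette between a near surface (z=1) and a far one (z=9).
depth = np.tile(np.array([1.0, 1.0, 9.0, 9.0]), (4, 1))
print(line_importance(depth, (0, 1), (3, 1)))  # → 8.0
```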

All candidate lines in a given light region are checked in order of importance to determine which

should be included in the final rendering. If the addition of a line decreases the root mean square error

between the blurred rendering of all previously selected lines and the target grayscale rendering, it is added

to the selection. Since lines are checked in order, the most important lines that can contribute the necessary

grayscale are always used.

In many cases, it is also desirable to ensure that the most important boundaries are drawn even if they

make an area darker than is indicated in the reference rendering. Therefore, important lines above a given


threshold are drawn even if they do not make a good grayscale contribution. This importance threshold is a

tunable parameter so that the user can select between renderings with high grayscale fidelity and low detail,

or renderings with high detail differentiation and low grayscale fidelity.

At first, I experimented with a scaling factor that rewards the grayscale contribution of lines proportionally to their importance. Thus, a somewhat important line could have a small negative impact on

grayscale fidelity and still be accepted, and a very important line could have a larger negative impact on

grayscale fidelity and still be accepted. However, this measure did not produce the desired results because

important lines tend to appear in clumps around the outlines of groups of objects. Once several such lines

are drawn close together, the error becomes so high that no additional lines can be drawn in the area

regardless of importance. When the boundaries of the groups of objects are broken up, much of the

advantage of having the boundary regions is lost.

The final implementation uses a hard importance threshold. All lines above the given threshold are

always drawn, and then lines below the threshold are drawn to achieve the desired grayscale contribution.

Therefore, our silhouette edge rendering method in complex regions reduces to a method very similar to the

leaf-drawing algorithm of [Deussen 2000 (b)] if an all-white target image is used, meaning that all grayscale

contribution is ignored. An example of the important edge detection is shown in Figure 16.
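The selection procedure above can be sketched as follows. The `Line` objects, the `rasterize` helper, and the simple box blur standing in for the pipeline's actual blur are all hypothetical; only the importance-ordered greedy acceptance test and the hard importance threshold come from the method described here.

```python
import numpy as np

def blur3(img):
    """3x3 box blur (a stand-in for the blur used in the pipeline)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def select_lines(lines, rasterize, target, threshold):
    """Greedy silhouette line selection.

    lines     -- candidates sorted by decreasing importance
    rasterize -- draws one line into a canvas (hypothetical helper)
    target    -- blurred grayscale reference rendering (2-D array)

    Lines at or above the importance threshold are always drawn, even if
    they darken an area past the target value; the remaining lines are
    accepted only if they reduce the RMS error against the target.
    """
    canvas = np.zeros_like(target, dtype=float)
    selected = []

    def rms(img):
        return np.sqrt(np.mean((blur3(img) - target) ** 2))

    # Pass 1: unconditionally draw the important lines.
    for line in lines:
        if line.importance >= threshold:
            rasterize(canvas, line)
            selected.append(line)

    # Pass 2: remaining lines, in importance order, only if they help.
    err = rms(canvas)
    for line in lines:
        if line.importance >= threshold:
            continue
        trial = canvas.copy()
        rasterize(trial, line)
        trial_err = rms(trial)
        if trial_err < err:
            canvas, err = trial, trial_err
            selected.append(line)
    return selected, canvas
```

Passing an all-white target makes every below-threshold line fail the error test, reducing the method to pure importance-based drawing as noted above.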

Figure 16 Tree image displaying only the most important lines. Compare this to the full silhouette rendering in Figure 14(b).


2.3.6 Complex region hatching

As with silhouette edge rendering, hatching is well-suited to simple surfaces, but does not work well when

there are many overlapping surfaces in small areas. In these cases, image-based techniques are often used to

abstract away the complex geometry to give a simple, understandable, and artistically believable image.

Because of a high degree of complexity and the difficulty of controlling long pen strokes, artists often

draw small stroke groups with similar properties with the aim of approximating local geometric behavior

[Guptill 1997]. An artist also keeps in mind the exact geometry being represented and is careful to respect

the boundaries of important features when generating strokes, an advantage over two-dimensional image-

processing techniques which can abstract away many important but subtle details.

To approximate this balance of abstraction and detail preservation used by artists, a 2½-dimensional

image processing technique [Hamel 1999] is used. This technique combines multiple renderings of the scene

along with image segmentation to extract regions that can be hatched with consistent pen strokes. The

segmentation process extracts the high-level information about how the user will perceive each region. The

use of multiple modalities helps fulfill the often opposing desire to respect important boundaries even if

they are not visible. Rendering modalities currently in use are a grayscale rendering, a depth rendering and a

rendering of the angle of the surface normal projected onto the screen. Any additional rendering modalities

representing surface texture or other properties are also straightforward to incorporate.

2.3.6.1 Segmentation

Segmentation is used to divide the image into areas that can be drawn with a consistent pen stroke. Two

segmentation strategies are used depending on the modality of the input rendering. For the grayscale and

the normal angle images, a simple seeded region growing segmentation method is used. For the depth

rendering, a gradient-based region growing technique is used. This gradient method allows the grouping of

the large, smooth areas of the gradient rendering that often occur where surfaces go perpendicular to the

view plane. Figures 17 on page 53 and 19(c) on page 56 show the resulting segmentation for the tree models.
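A minimal version of the seeded region-growing segmentation can be sketched as follows. The scan-order seeding and the fixed intensity tolerance are assumptions; the gradient-based variant used for the depth rendering would grow on gradient similarity rather than raw values.

```python
import numpy as np
from collections import deque

def region_grow(img, tol):
    """Seeded region growing on a grayscale image.

    Scans for unlabeled pixels, uses each as a seed, and grows a region
    over 4-connected neighbours whose value lies within `tol` of the
    seed value.  Returns an integer label map (labels start at 1)."""
    labels = np.zeros(img.shape, dtype=int)
    next_label = 0
    h, w = img.shape
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx]:
                continue
            next_label += 1
            seed = img[sy, sx]
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and abs(img[ny, nx] - seed) <= tol):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels
```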


2.3.6.2 Stroke generation

Once all stroke regions have been generated by intersecting the segmentations, each is separately filled with

pen strokes. As with some artistic styles, no attempt is made to follow the surface exactly; rather, small

groups of parallel lines are used to represent approximate surface properties. This approach is better suited to

very complex regions where many surfaces may be present and the exact representation is very complex. For

rendering simpler regions, a traditional surface rendering approach would probably give a better

impression of surface properties.

When generating strokes for each segmented region, the angle of the pen strokes is determined from

the average angle of all pixels in the region in the normal angle buffer. Cross-hatches are then drawn in

the region to build up the correct grayscale value first parallel to the average angle, then perpendicular, then

in each diagonal direction. An example is shown in Figure 20 on page 57.
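The per-region stroke angle can be computed as sketched below. A circular mean is used so that angles near ±180° average correctly; the ordering of the four hatch directions follows the text, while the angle-buffer and boolean-mask representation is an assumption.

```python
import numpy as np

def hatch_angles(angle_buffer, region_mask):
    """Return the four cross-hatch directions for one segmented region:
    parallel to the region's average normal angle, perpendicular to it,
    then the two diagonals.  The average is a circular mean, since
    angles wrap around at +/- pi."""
    a = angle_buffer[region_mask]
    mean = np.arctan2(np.sin(a).mean(), np.cos(a).mean())
    return [mean, mean + np.pi / 2, mean + np.pi / 4, mean - np.pi / 4]
```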

2.3.7 Examples

The processing is divided into a three-stage pipeline as in Figure 13 on page 42. The first stage is the three-

dimensional processing stage which works from the source geometry. After the user interactively picks view

parameters, the system generates the set of reference renderings used by the two-dimensional processing

stage and extracts the silhouette edges required for edge processing.

The second stage is the two-dimensional processing stage, which works only from the reference images

Figure 17 Segmentation output, combining the segmentation of the grayscale reference image, a depth rendering, and a rendering of the surface normal direction. Different colors are used only to show each region and have no inherent meaning.


and silhouette edges generated previously. Here, the reference renderings are segmented and blurred, and

the set of regions that will be used for hatch strokes are generated. These hatch regions are the intersection of

all the regions extracted from the different rendering modalities, including grayscale, depth, and angle-to-

screen. This means that hatches can cross geometry that is very similar, but do not cross important

boundaries. Last, the target grayscale approximation is generated by blurring the grayscale buffer, the

complexity map is generated by blurring a rendering of the silhouette edges, and those silhouette regions

that require lightening or darkening are extracted.
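The intersection of the per-modality segmentations can be computed by treating each pixel's tuple of labels as a key, so two pixels share a final hatch region only if they share a region in every modality. This sketch assumes the label maps are integer arrays of equal shape.

```python
import numpy as np

def intersect_segmentations(label_maps):
    """Intersect segmentations from several rendering modalities
    (grayscale, depth, normal angle, ...) into one hatch-region map.

    Each unique tuple of per-modality labels defines one final region,
    so hatches never cross a boundary present in any modality."""
    stacked = np.stack(label_maps, axis=-1)       # (H, W, n_modalities)
    flat = stacked.reshape(-1, stacked.shape[-1])
    _, inverse = np.unique(flat, axis=0, return_inverse=True)
    return inverse.reshape(label_maps[0].shape) + 1
```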

The last stage is where the actual strokes are generated. One stylistic option is a pure silhouette

rendering such as Figure 19(i) on page 56. For very complex regions, these types of renderings can capture

the approximate texture and grayscale value of a sample rendering by drawing a subset of the visible

silhouette edges. For less complex regions, a different rendering style would be required for the same effect.

Another option is a cross-hatched version of the model such as Figure 19(f) on page 56. This example uses the

segmentation-based hatch regions to pick good sizes and positions for groups of cross-hatches. By

combining the two styles, a range of styles can be generated, as seen in Figure 20 on page 57.


2.3.7.1 Tree models

Both tree models, the deciduous tree in Figures 14–18, and the conifer tree in Figures 19–20, were modeled

using the xFrog package in Maya. These trees are good examples of geometric complexity: the only regions

that are non-complex are the trunks of the trees. The hybrid pipeline is able to produce good results using

these models. Figure 18 and 20(c) are renderings consisting of important silhouette edges and hatching to fill

in the interior of the tree. The segmentation-based hatch groups allow the texture of the area to be

maintained without representing every object and boundary. Both of these renderings are displayed large so

the hatch boundaries are more visible, but the ideal reproduction size is probably smaller: a rendering this

size can communicate more detail than is contained in these images.

Figure 18 The deciduous tree. A small subset of the most important silhouette edges were drawn (as in Figure 16), combined with cross-hatching based on the segmentation in Figure 17.


Figure 19 Example of handling complex regions to produce (f) hatches and (i) silhouette edge renderings. The hatched result is based on the combined segmentation and the blurred target. The silhouette edge rendering displays the most important lines to match the blurred target.

(b) Silhouette edge rendering

(c) Combined segmentation

(a) Grayscale rendering

(d) Blurred target

(e) Complexity map

(f) Hatch only rendering

(g) Regions to lighten

(h) Regions to darken

(i) Adjusted silhouette edges


Figure 20 (a) Drawing only very important silhouette lines with no grayscale matching gives a very simple, open result. Compare to Figure 19(i). (b) Adding cross-hatching without segmentation. The result has more correct grayscale, but the interior is abstract. Compare to Figure 19(f). (c) Using different combinations of silhouette rendering and hatching to represent the tree.

(a) (b)

(c)


2.3.7.2 Primate lung CT scan

The primate lung data set is an isosurface consisting of about 3.5 million triangles that was extracted from a

256 × 512 × 512 voxel CT scan of a primate. The isosurface value was selected to show the surface of the lung

tissue, and a clipping plane was used to expose the interior. This data is challenging due to the complex

nature of the lung airways. This complexity is already confusing in the simple grayscale rendering in

Figure 21(a) on page 59. When a complex rendering style such as pen-and-ink is introduced as in Figure 22

on page 60, the detail is even more difficult to understand.

The image processing step is able to extract those areas that are complex and those that are not. Notice

that with the geometric definition of complexity, the center of the lungs, while appearing complex, is actually

identified as simple. This is a positive result, since this complexity arises from shading on a bumpy

surface, rather than actual structures as in the inside of the lungs. Because these regions are identified as

separate, using different rendering styles can further differentiate them.

The complexity map is used to identify regions where a simpler rendering style should be used. The

style used in Figure 23 on page 61 for complex regions is a simple blurred thresholding operation that

approximates India ink applied with a brush. This style does not contain the distracting high-frequency

detail of the hatch strokes, so it is less confusing when used in the complex areas. And, because it preserves

the binary nature of the hatching, the overall feel of the fully hatched illustration is preserved.
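The ink style can be sketched as a blurred thresholding pass over the grayscale rendering; the box-blur radius and the threshold level are tunable assumptions, not values from the text.

```python
import numpy as np

def ink_style(gray, radius=2, level=0.5):
    """Blurred thresholding that approximates India ink applied with a
    brush: box-blur the grayscale rendering, then threshold to pure
    black/white.  This removes distracting high-frequency detail while
    keeping the binary look of hatching.

    gray  -- grayscale rendering in [0, 1], 0 = dark
    Returns a boolean array where True means ink (solid black)."""
    p = np.pad(gray, radius, mode='edge')
    k = 2 * radius + 1
    h, w = gray.shape
    blurred = sum(p[i:i + h, j:j + w]
                  for i in range(k) for j in range(k)) / k ** 2
    return blurred < level
```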

The process of identifying complex areas through blurring can have negative side effects when the

results of the segmentation are clearly visible to the user. One example is seen along the top of the lower

edge of the lungs in Figure 23. The blurring has caused part of the simple surface on the bottom of the lungs

to be identified as complex, producing a thick black line, when it would probably be more desirable to have a sharp

edge at the surface boundary. One way to deal with this general problem is to use the complexity map to

influence the rendering in a more subtle way, making the boundaries between regions less distinct. A better

solution would be to use a method that better respects sharp boundaries in complexity, a goal requiring

further research.


Figure 21 Example showing a 3.5 million triangle primate lung isosurface. Notice that the apparently complex center area is identified as simple because it is actually a single smooth surface. It appears confusingly similar to the lung detail because of the surface shape and lighting.

(a) Original grayscale rendering

(b) Extracted simple regions (c) Extracted complex regions


Figure 22 A non-photorealistic rendering of the primate lung isosurface using only hatching. The line direction emulates an artist, who draws clumps of parallel lines over similar regions, regardless of the underlying geometry, a feature made possible by two-dimensional image segmentation. However, this rendering is overly complex, and much of the detail of the lungs is hard to understand.


Figure 23 A hybrid rendering of the primate lung isosurface using hatching and a solid ink-based style for the very complex regions. This preserves the feel of the previous rendering, but with better clarity in the finely detailed regions.


2.3.8 Review of hybrid illustrative rendering

Computer generated images are becoming increasingly complex as they capture more of the tremendous

detail present in everyday scenes. Both traditional and non-photorealistic rendering algorithms must adapt

to handle this complexity. The contribution of the hybrid rendering pipeline is to provide a framework for

making decisions about abstraction and detail in an image.

This general pipeline incorporates two-dimensional image processing techniques and traditional three-

dimensional rendering to leverage the strengths of each. The intermediate rendering stage, combined with

image processing, provides additional information to the final renderer about how the viewer will perceive

the scene. This information is combined with the original geometry to generate the final stylized rendering.

Areas that are overly complex are candidates for abstraction, while separate areas that image segmentation

identifies as appearing similar can be made more distinct.

2.3.8.1 The two-dimensional processing stage

The addition of the two-dimensional image processing is key to understanding how an image will be

perceived by the viewer. The fast and simple region growing techniques used in this work produce results

good enough for most applications. To understand why, it is important to remember that the standards for

judging a segmentation for this application are very different from those of traditional image segmentation. In

traditional image segmentation, the goal is often to segment the image exactly how a human would segment

the same image. Problems are caused by slightly ambiguous areas that look similar, but actually belong to

different objects. This is the opposite of the goal in this work, which is to identify those potentially confusing

areas.

However, this work has only touched on the full range of image processing techniques possible in the

two-dimensional processing stage. Histogram analysis and enhanced seed point selection should improve

the existing segmentation techniques. More advanced techniques such as normalized cut can also be used to

improve the segmentation, but at a significant cost in speed and ease of implementation. More significant

gains should be realized from the use of texture-based segmentation, as this information can be very helpful


in selecting rendering styles and strategies for a given region. Related to texture-based segmentation is

texture extraction. Even a very simple texture extraction implementation may be useful in the rendering

stage, since it can help choose stroke directions and styles to achieve a higher level of abstraction.

2.3.8.2 The final rendering stage

The pipeline was demonstrated with several different rendering methods. For silhouette edge renderings,

complex areas are identified and important edges in those areas are drawn to match the grayscale value of a

target rendering. For hatching, small blocks of the image with similar properties are extracted, allowing

abstraction of less-visible detail while preserving the most important boundaries.

Combinations of rendering styles are also possible. Hatching and silhouette edge renderings are a

natural fit, and mimic a common method of pen-and-ink illustration: silhouette edges are used for the

most important details, while hatching is used to build up value. A simple solid black rendering method is

also demonstrated that approximates ink applied with a brush. This style, by virtue of its simplicity, is a good

choice for simplifying very complex areas while representing most of the detail and maintaining a non-

photorealistic appearance.

The generalized hybrid 2D/3D pipeline provides a large amount of detail about the scene, and this

information can be difficult to use. This work focuses on some of the challenges associated with the pen-

and-ink rendering style on scenes with high geometric complexity, and much more work can be done in the

field. Incorporating a wider range of pen-and-ink illustration algorithms can improve quality in many cases.

For example, specialized curved-surface illustration algorithms such as [Hertzmann 2000] can be used in

areas with very low geometric complexity. Also, automatic methods such as machine learning and example-

based rendering such as [Hamel 1999] may be good ways to incorporate more information from the two-

dimensional stages in an intuitive way.

Many other rendering styles can be used with the hybrid pipeline. One good candidate for further

study is watercolor, especially if it is combined with another style such as pen-and-ink. Watercolor drawings

are typically characterized by large brush strokes and multiple layers of pigment. Because the brush is large,


watercolor illustrations tend to be less precise than pen-and-ink illustrations; artists focus more on building

up tone and value, and portray edges and details by suggestion.

A renderer that implements the watercolor rendering style can be more complex than a pen-and-ink

renderer because it must use complex models for the paper, brush, and pigment [Curtis 1997]. In other

respects, however, the technique is simpler because it is less detailed; precise edges, curvature, and texture

are usually not as important in a watercolor painting as they are in a pen-and-ink illustration. The

watercolor renderer would rely heavily on the standard color rendering and segmentation. Some styles

combine watercolor with another medium, for example, using watercolor to build up color and value, and

pen-and-ink to add important details.

2.3.8.3 Other future directions

One possible direction for future study would be to generate an interactive version of the procedure. As

many of the implemented techniques will not run in real-time on current hardware, simpler, faster

alternatives must be used in their place. Possible strategies include using fewer two-dimensional rendering

types, creative use of graphics hardware, and taking advantage of temporal coherence to re-use

segmentation information between frames.

A related area of further study would be the use of the hybrid technique in animations. Currently, the

pipeline does not handle temporal coherence, and the rendering may vary wildly if the viewpoint is moved

even a small amount. This is not acceptable for animations, where the rendering primitives should vary

smoothly between frames. Previous efforts [Meier 1996, Praun 2001] have tackled the problem in three

dimensions, attaching the strokes to the geometry and selecting which portions to draw based on the

current view parameters. As long as stroke selection and order vary smoothly with respect to view

direction and lighting, the resulting animation will also change smoothly with time. Unfortunately, the two-

dimensional processing steps, such as segmentation, can produce drastically different results for similar

frames. It would therefore be necessary to optimize segmentations across several frames, or to use a tracking


technique such as optical flow [Beauchemin 1995] to ensure that the segmentations vary smoothly as the

view and lighting change.

For scientific visualization purposes, some support for the direct handling of volume data may be

useful. One way to handle this case is presented in [Dong 2003]. A possible approach that fits better into the

pipeline would be to generate a series of two-dimensional reference renderings directly from the volume.

These reference images would be generated by averaging parameters such as the gradient using the volume

rendering integral just as color is in a standard volume renderer. Working from these two-dimensional

images, appropriate strokes could be generated.

Ultimately, user studies will be required to evaluate specific combinations of abstraction and rendering

style, an inherently subjective property. The ideal combination would provide clear impressions of a scene

with minimal extraneous information in an attractive style. One way around this problem is to add an

additional step to the pipeline that would allow interactive editing of rendering parameters. Currently, the

pipeline is designed primarily for automatic generation of images, but much better results can be achieved

with the addition of domain knowledge and artistic vision that only a human can provide. In this step, the

user would be able to edit and refine the segmentation, select rendering styles and parameters, and assign

them to the objects and regions of the scene. Ideally, this technique would allow results that approach the

quality of a hand-drawn image while requiring only a fraction of the effort from a person.


3 Hybrid rendering for scientific visualization

3.1 Introduction

Visual richness in the context of scientific visualization is often measured by interactivity and the ability to

clearly represent all details. In contrast to the previously discussed non-photorealistic rendering techniques,

the areas of interest in scientific data are often not known in advance. Therefore, it is important to preserve

all possible details and to provide a way that the data can be interactively explored and analyzed. Yet, the

ability to render high detail and to achieve interactive rendering speeds are often mutually exclusive. This

research uses a combination of volume- and point-based storage and rendering to achieve higher-quality

interaction without sacrificing significant quality.

3.1.1 Direct volume rendering

Direct volume rendering is a common technique for visualizing volume data. It is straightforward and

preserves all the features of the original data, including structures that cannot be analytically defined. Most

volume rendering techniques use an absorption-plus-emission or scattering-plus-shading model for

rendering [Max 1995] and can be applied to a wide range of cell formats, including rectilinear grid,

tetrahedral cell [Shirley 1990, Williams 1998, Röttger 2000], irregular grid [Wilhelms 1996, Rüdiger 1997],

and AMR (adaptive mesh refinement) [Weber 2001] data.

Hardware-accelerated volume rendering is usually implemented using either two-dimensional [Cabral

1994] or three-dimensional [Wilson 1994] texture-mapping hardware and paletted textures for interactive

transfer function manipulation [Meissner 1999]. Dedicated volume-rendering hardware [Pfister 1999] has


also been used. However, even on the most advanced systems, the size of many data sets can far exceed

available video memory. There are several approaches to rendering large volumetric data that does not fit

within the video card’s memory. The simplest is texture paging, which is often implemented in the display

driver and is used automatically when memory is full. Unfortunately, the latency required for transferring

data over the bus to video memory inhibits interactive display rates. Past efforts have also used parallel

rendering techniques to increase the amount of available texture memory [Lum 2002, Kniss 2001]. While

this approach is feasible, the required amount of hardware is often costly, efficient implementation can be

complex, and network latency and compositing can become bottlenecks. Finally, multi-resolution

techniques [Weiler 2000, LaMar 1999] attempt to make better use of texture memory by using lower

sampling rates for areas of low interest. The disadvantages of this technique are the introduction of artifacts

and loss of detail.

3.1.2 Point-based volume rendering

The use of points as a display primitive was first introduced in [Levoy 1985] and was more recently presented

in [Grossman 1998]. Related to this work are the point-based rendering methods used to handle the

interactive display of large and complex polygonal data sets [Rusinkiewicz 2000, Wand 2001]. In the context

of volume rendering, splatting [Westover 1989] has become an increasingly popular approach. Subsequent

work has produced faster [Laur 1991] and higher-quality [Swan 1997] splatting techniques, the use of

splatting for rendering irregular-grid data [Mao 1996, Meredith 2001], and a more universal approach in

reconstructing surface and volume data [Zwicker 2001].

3.2 Hybrid point/volume visualization for volume rendering

Volume rendering and point-based rendering each have their own unique strengths and weaknesses.

Volume rendering provides fast rendering times on commodity graphics cards and even coverage of slowly

varying regions, but is (usually) restricted to a uniform resolution. Point-based rendering provides more

flexibility about where detail is placed, but can be less efficient to render. A hybrid storage and rendering


approach using both points and volumes can help overcome the limitations of each. This approach converts

a large, high-resolution volume into a low-resolution version, using this volumetric representation for the

majority of the volume. Points are then added to make up the detail lost in the conversion. Since most data

sets have few sharp boundaries compared to slowly varying regions, many fewer points are necessary than

would be required for a full point-based representation.

Using hardware-accelerated texture-based volume rendering for the majority of the volume means that

rendering times are kept fast. Using points around high-frequency boundaries ensures that those areas have

sharp transitions that are faithful to the original data. The result is a much smaller data set that preserves

most of the interesting details in the original. The goal is not to replace high-accuracy volume rendering.

Rather, this approach allows more efficient storage, transfer, and interactive previewing of reasonably

accurate data using a single low-cost PC and graphics card.

3.2.1 Hybrid data generation

Converting a high-resolution volume data set to a hybrid representation involves generating a low-

resolution volume and a corresponding set of points. The low-resolution volume data represents the larger,

more uniform areas of the original, and the points are inserted where the low-resolution data skips or

de-emphasizes important details. Thus, storage and processing techniques are allocated according to the detail

requirements of a given region of data. Because this technique is most useful for data that is larger than that

easily handled by a single computer, processing the full-resolution version can be slow. The preprocessing

program requires on the order of tens of minutes for typical data sets on a desktop PC. Large data can also

be preprocessed in pieces, either in parallel on multiple computers, or in serial on a single computer. The

pieces are then merged to produce the final hybrid representation.

Any standard subsampling technique can be used to generate the low-resolution volume data; the best

choice depends on the nature of the data and the application it is used for. Most images in this paper were

generated using tri-cubic interpolation, although other subsampling techniques are more appropriate in

certain cases (see section 3.2.1.2 “Optimized representations” on page 69).
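As one concrete instance of a "standard subsampling technique", a 2× block-average downsample can be written as below. The thesis images mostly used tri-cubic interpolation, which this simpler sketch does not reproduce; it merely illustrates where the low-resolution volume comes from.

```python
import numpy as np

def downsample2(vol):
    """Halve each dimension of a scalar volume by averaging 2x2x2
    blocks.  Block averaging is one standard subsampling choice; other
    filters (e.g. tri-cubic) may be more appropriate for some data."""
    x, y, z = vol.shape
    assert x % 2 == 0 and y % 2 == 0 and z % 2 == 0
    return vol.reshape(x // 2, 2, y // 2, 2, z // 2, 2).mean(axis=(1, 3, 5))
```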


3.2.1.1 Point selection

Points are generated at those locations where the volumetric data contains large amounts of error. These

locations are identified by sampling the difference between the low-resolution representation and the

original data at regular intervals. Typically, the sampling interval of the error grid is the same as the original

resolution of the data, meaning the low-resolution data is compared to each of the original data values.

Lower-resolution error grids can also be used. Linear interpolation is used on the low-resolution data to

simulate the value the video card will compute at each error grid location. Point insertion is done based on a

threshold error value: if the error value is above a user-defined level, a point is inserted at that location. If a

maximum output size is given, a predefined number of points can be inserted at the locations with the

highest error magnitude.
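The point selection step can be sketched as follows. The function name is hypothetical, and the caller is assumed to supply the low-resolution volume already resampled back to the error-grid resolution; the real system simulates the video card's linear interpolation for this resampling, which the sketch does not perform itself.

```python
import numpy as np

def select_points(original, lowres_up, threshold, max_points=None):
    """Insert points where the low-resolution representation errs.

    original   -- full-resolution scalar volume
    lowres_up  -- low-resolution volume resampled to the same grid
    threshold  -- user-defined error level for point insertion
    max_points -- optional cap on output size; keeps the locations
                  with the largest error magnitude

    Returns (coords, errors): integer grid locations and the signed
    error values stored with each point."""
    err = original - lowres_up
    mask = np.abs(err) > threshold
    coords = np.argwhere(mask)
    errors = err[mask]
    if max_points is not None and len(coords) > max_points:
        order = np.argsort(-np.abs(errors))[:max_points]
        coords, errors = coords[order], errors[order]
    return coords, errors
```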

3.2.1.2 Optimized representations

One source of error results from the limitation that video cards can only add color on the screen. That is,

points can only contribute to the opacity of a given area, and never make it more transparent. On average, a

low-resolution representation will overestimate the original function as much as it underestimates, so the

addition of points can only make up 50% of the error in the best case.

If something is known at the time of preprocessing about the type of transfer function the user will

select, the error can be improved by selecting the low-resolution representation such that it never produces

positive error for the expected transfer function. One common type of transfer function is one that is never

decreasing; the user often wishes to see areas of higher value as always being more opaque. In this instance,

the low-resolution representation is selected so that it never overestimates the original function value (the

interpolation function is the minimum function). In practice, we found this method produces poor results.

It generates large areas where the original function is extremely underestimated, so it requires an impractical

number of points to produce a good image. As a compromise, the preprocessing program can take a

parameter that specifies the minimum amount the low-resolution representation can overestimate the

original, allowing the user to decide the extent to which this type of accuracy balances the number of points.
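A minimal sketch of this idea, shown in one dimension with invented data: the block minimum guarantees the representation never overestimates the original, and the `max_overestimate` parameter is the compromise described above. This is an illustration, not the preprocessing program's actual downsampling:

```python
def downsample_min(original, factor, max_overestimate=0.0):
    """Downsample by taking, in each block, the smallest value plus an
    allowed overestimation margin (clamped to the block maximum). With
    max_overestimate=0 the result never exceeds the original data, so
    additive points can always correct the remaining error."""
    out = []
    for start in range(0, len(original), factor):
        block = original[start:start + factor]
        out.append(min(min(block) + max_overestimate, max(block)))
    return out

data = [4, 9, 2, 7, 5, 5]
print(downsample_min(data, 2))        # strict minimum: never overestimates
print(downsample_min(data, 2, 1.0))   # allows overestimating by up to 1
```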


3.2.1.3 Storage

The low-resolution volumetric data and point data are calculated in a preprocessing step and stored for later

viewing. The sampling interval and error threshold are provided as parameters to the preprocessing

program. For points, the position, error value, and the value and normal of the original data at that location

are stored. The original function value is used to map the point to the corresponding color and opacity when

it is displayed. The error value indicates how much the opacity should be scaled so that, when combined

with the volumetric data, the correct opacity is presented on the screen. The preprocessor also computes

normals for each value of the volumetric data. All normals are stored as one-byte indices into a lookup table

containing normal vectors uniformly distributed in three-space.
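One way to build and use such a table is sketched below. The spherical Fibonacci spiral is an assumption for illustration; the dissertation does not specify how the uniformly distributed normals are generated:

```python
import math

def normal_table(n=256):
    """Build a lookup table of n roughly uniformly distributed unit
    vectors using a spherical Fibonacci spiral (one possible scheme)."""
    golden = math.pi * (3 - math.sqrt(5))
    table = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n
        r = math.sqrt(max(0.0, 1 - z * z))
        theta = golden * i
        table.append((r * math.cos(theta), r * math.sin(theta), z))
    return table

def quantize_normal(nx, ny, nz, table):
    """Encode a normal as the one-byte index of the table entry with the
    largest dot product (i.e. the nearest direction)."""
    return max(range(len(table)),
               key=lambda i: nx * table[i][0] + ny * table[i][1] + nz * table[i][2])

table = normal_table()
idx = quantize_normal(0, 0, 1, table)
print(idx, table[idx])
```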

If a 256³ grid is used for error calculation, each point can be stored with 8-bit coordinates, giving a storage space of six bytes (one byte each for x, y, z, normal, error, and original function value). If a higher-resolution grid is used, each point requires 16-bit coordinates, giving nine bytes of storage per point. An optimized hierarchical representation such as that presented in [Rusinkiewicz 2000] could reduce the storage required for coordinates to about 16 bits for each point, giving five bytes storage per point. We did

not implement this method because the data size is dominated by volume data and the corresponding

normals, and such optimization would only reduce the total data size by 5–10% for the examples presented

here.
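The six-byte record for the 256³ case can be illustrated with Python's `struct` module. This is a sketch: the byte order and the quantization of the error and function values are assumptions, not details given in the text:

```python
import struct

# Each point record: x, y, z coordinates (8-bit, sufficient for a 256^3
# error grid), normal table index, quantized error, and quantized
# original function value -- six unsigned bytes in all.
POINT_FORMAT = "<6B"

def pack_point(x, y, z, normal_idx, error_q, value_q):
    return struct.pack(POINT_FORMAT, x, y, z, normal_idx, error_q, value_q)

def unpack_point(blob):
    return struct.unpack(POINT_FORMAT, blob)

record = pack_point(12, 200, 45, 17, 130, 90)
assert len(record) == 6  # six bytes per point, as described in the text
print(unpack_point(record))
```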

A 512³ volume with one-byte normals can therefore be converted into a 256³ volume with enough room for 26 million points in the same amount of file space (46 million using the five-byte hierarchical coordinate

representation). In practice, many fewer points are required to achieve good results, and even lower

resolution volumes are acceptable for some purposes. Therefore, the hybrid approach can give a significant

savings in disk and memory requirements over the non-hybrid method.

3.2.2 Rendering hybrid data

The renderer for hybrid data was designed for older video cards. Therefore, three-dimensional textures are

not used and the volumetric portion of the image is rendered using parallel, axis-aligned, texture-mapped


planes. Each texture is an indexed-color slice of the data, allowing fast updating of the transfer function by

manipulation of the palette. To prevent the volume from disappearing when the slices are viewed edge-on,

three sets of slices are maintained, one for each axis, and the set most parallel to the view plane is used. Since

each slice of the data is usually partially transparent, the video card’s Z-buffer will not correctly handle

ordering of objects. Therefore, special care must be taken to draw items back-to-front.
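The axis selection and back-to-front ordering can be sketched as follows. The sign convention is an assumption for illustration: `view_dir` is taken to point from the eye into the scene, and slices are indexed in increasing coordinate order along their axis:

```python
def slice_draw_order(view_dir, num_slices):
    """Pick the slice set whose axis is most parallel to the view
    direction and return slice indices in back-to-front order.
    Assumes view_dir points from the eye into the scene."""
    axis = max(range(3), key=lambda a: abs(view_dir[a]))
    if view_dir[axis] > 0:
        # Depth increases with the slice coordinate: the far slices are
        # at large indices, so draw those first.
        return axis, list(range(num_slices - 1, -1, -1))
    return axis, list(range(num_slices))

axis, order = slice_draw_order((0.1, -0.9, 0.2), 4)
print(axis, order)
```

Because alpha blending is order-dependent, this far-to-near traversal must be recomputed whenever the view changes enough to flip the dominant axis or its sign.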

Each slice also has a corresponding normal texture map. This map contains the one-byte normal

indices computed during preprocessing, stored as an indexed texture. A palette maps these indices to the

specular (alpha channel) and diffuse (red, green, and blue channels) lighting values for the corresponding

normal vector. Multi-texturing and hardware register combiners are used to multiply each data value by its

diffuse lighting value, and then add the specular light value.

Slices of points are drawn interleaved with each pair of volumetric planes so ordering is maintained.

These points can be rendered using either an OpenGL point primitive, which is rendered as a square, or a

Gaussian splat. OpenGL points are sized to a fixed value, dependent on the resolution of the grid used to compute error, such that adjacent points overlap slightly (Gaussian splats must overlap more since their opacity

falls off toward the edges). If adjacent points do not meet, distracting patterns will appear, and if they

overlap too much, the covered area will be more opaque than was intended during preprocessing. Since

OpenGL points are drawn with integer pixel sizes only, problems can arise when the correct point size is a

small non-integer. In these cases, the round-off error is a large fraction of the correct value, resulting in

points that are much too big or too small. To combat this problem, point sizes are always rounded up when

passed to OpenGL, and their opacity is adjusted downward proportional to the increase in area of the point.
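The round-up-and-compensate rule might look like this (a sketch of the idea, not the original code):

```python
import math

def adjust_point(size, opacity):
    """OpenGL draws points at integer pixel sizes, so round the size up
    and scale opacity down by the ratio of intended to drawn area, so
    the total contribution to the screen stays roughly constant."""
    drawn = max(1, math.ceil(size))
    area_ratio = (size / drawn) ** 2  # intended area / drawn area
    return drawn, opacity * area_ratio

# A point that should be 1.2 pixels wide is drawn 2 pixels wide but
# proportionally fainter.
print(adjust_point(1.2, 0.8))
```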

For greater rendering efficiency when drawing OpenGL points, each slice of points between adjacent

pairs of planes is loaded into a display list. This triples the amount of memory required for point storage on

the video card, since each point is loaded into one display list for each axis, but yields greater rendering

efficiency: even if the video driver must page the display list data out of video memory, it is still faster than

loading the point data from scratch. Gaussian splats are rendered using a Gaussian texture mapped onto


forward-facing squares (billboards). Since rendering a forward-facing primitive is a view-dependent

operation, Gaussian splats can not be loaded into display lists, increasing rendering time.

No care is taken to render points within a slice back-to-front, and this can introduce artifacts [Levoy

1985]. Although a hardware-based solution is given in [Rusinkiewicz 2000], it increases rendering time, and

the goal of the hybrid method is to display a reasonably accurate representation of large volumes as fast as

possible. If a more accurate representation is desired, a more effective use of display resources would be to

increase volume resolution or point counts. Furthermore, artifacts are limited for two reasons. First, whereas

the previous work uses points to display primarily opaque surfaces, the points in this research are mostly

transparent and ordering is less of a factor in the combined result. Second, since the error sampling grid is

only a small factor (usually two to eight) greater than the volumetric slice resolution (the scale at which

ordering is correct), the number of points with incorrect ordering is limited to the same small factor.

The video memory requirements of the texture data could be reduced by using the volumetric texture

support of recent video cards, which would eliminate the data duplication caused by maintaining three sets

of planes for the volume. Unfortunately, most video cards do not support texture paging for portions of a

volumetric texture, preventing the use of textures larger than video memory. This means a maximum

volumetric texture size of 256³ on the nVidia GeForce4 used for this project. Nevertheless, volumetric

textures are still desirable because the parallel planes drawn on the screen do not have to be aligned on the

texture’s axis. The planes can always be drawn parallel to the view plane while the texture coordinates are

transformed, and the video card will interpolate the texture in three dimensions, reducing rendering

artifacts [Wilson 1994]. However, this method greatly complicates correct ordering when combined with

point rendering, since arbitrary slices of points must be selected to interleave with each texture-mapped

plane. It also prevents the use of display lists for point slices, since the shape of each slice of points is view-dependent. As a result, we chose to stay with axis-aligned rendering planes for better speed, ease of implementation, and to be able to generate large (512³) hardware-accelerated reference renderings.


3.2.2.1 Sources of error

Although the hybrid method can make up considerable amounts of error present in lower-resolution

volume representations, it is not without its own sources of error. First, as was discussed above, points are

only additive and can not compensate for overestimation errors. Second, if the transfer function’s color map

is rapidly changing, or error is great, the volumetric data will map to a significantly different color than the

correct value represented by a point at that location. The rendering algorithm described above only gives

the correct opacity for the combination of a point and volume; the color may be incorrect (more precisely, it will

be the average of the correct color and the color of the volume data).

Last, since the hybrid method is essentially a multi-resolution rendering method, it is subject to the

same sources of error as other multi-resolution methods. In areas displayed at higher resolution through the

use of points, the transfer function must be adjusted downward to give consistent opacity over the entire

image. In addition, since the points tend to exist on edges and corners, the amount of adjustment required is

highly view-dependent. We chose not to address the view-dependent multi-resolution problem in this

current work, and found that global point transparency can be manually adjusted to achieve acceptable

results in many cases.

3.2.3 Results

Hybrid renderings generated with different parameters were compared with standard volumetric

representations of the same data. All tests and timing measurements were done on a 1.0 GHz Pentium 3

Xeon running Linux with 1 GB of system memory and an nVidia GeForce4 Ti 4600 graphics card with 128 MB

of video memory.

3.2.3.1 The argon bubble data set

The “argon bubble” data set is one frame of a 640 × 256 × 256 time-varying simulation of an argon bubble. It

was padded and converted to 512³ using tri-cubic interpolation. This step is required for an accurate


reference, since most video cards require texture sizes to be powers of two. The hybrid representations and

lower-resolution volume representations were all generated from the 512³ version of the data.

The argon bubble contains some very fine detail in the turbulence, as well as many localized areas of

very high density surrounded by areas of low density. As can be seen in Figure 24, a 256³ volume rendering misses some details, produces incorrect density values for some small features (for example, the small protrusion at the top-center), and exhibits severe pixelation artifacts. Using this low-resolution volume, a hybrid representation was generated using an error calculation grid of 512³ and an error threshold of ⁄, producing 361k points. The size of the hybrid representation is a mere 37 MB, compared to 33 MB for the low-resolution volume rendering alone and 268 MB for the original.


Figure 24 Close-up images comparing renderings of the argon bubble (c). The top row shows standard volume renderings of the original 512³ volume (a) and the lower-resolution 256³ volume (b). The bottom row shows the same 256³ volume with 361k points, rendered with square OpenGL points (d) and with Gaussian splats (e). The middle image is the entire view at 512³ with the zoom area marked.


The hybrid rendering, even with this relatively low point count, is able to improve the low-resolution rendering in several respects. Pixelation artifacts in key areas are decreased, and shapes are more obvious. It

also corrects color in some areas, such as the small yellow feature in the top center, which appears as red in

the low-resolution version. Notice that an artifact of point rendering, particularly visible at this extreme

magnification, is that small details tend to get larger (this hybrid volume was not produced using the

optimized representation given above to reduce the effect).

The hybrid rendering drawn with OpenGL points in the lower-left of Figure 24 displays interactively

(10 fps), but the points’ square shape is clearly visible when zoomed in. The Gaussian splats in the lower-right

image look much smoother, but take several seconds to render. One approach would be to use OpenGL

points for normal interaction, and use Gaussian splats when both the image is static and the zoom level is

high. Recently, a point sprite extension was introduced in OpenGL which will probably allow better

interaction with Gaussian splats.

3.2.3.2 The chest data set

The chest data set is a 512³ MRI scan of a male chest. At full resolution, the fine airway structure of the lungs

is clearly resolved. However, at lower resolutions, almost all of this detail is lost. The hybrid approach is able

to restore much of this detail. However, because of the large area covered by fine detail, a great number of

points are required to make up the difference. Therefore, a loose error threshold for point insertions is

required to achieve acceptable performance and storage requirements.


Figure 25 Comparison of rendering techniques for the chest MRI data set. In all pictures, the front and back have been clipped by 20% to allow the detail of the lungs to be visible. Greater numbers of points in (d) and (f) fill out the fine structures of the lungs more fully, and provide more accurate values. For example, the color of the lung pathways in (e) shifts toward pink due to down-sampling. The addition of points in (f) gives truer color compared to the original values in (a).

(a) 512³ volume rendering: 0.35 s/frame, 268 MB.

(b) 128³ volume rendering: 0.01 s/frame, 4 MB.

(c) 128³ volume rendering with 3979k points: 512³ error sampling grid, ⁄ error threshold, 0.29 s/frame, 40 MB.

(d) 128³ volume rendering with 10999k points: 512³ error sampling grid, ⁄ error threshold, 0.75 s/frame, 103 MB.

(e) 256³ volume rendering: 0.04 s/frame, 33 MB.

(f) 256³ volume rendering with 7058k points: 512³ error sampling grid, ⁄ error threshold, 0.47 s/frame, 97 MB.


Figure 25 shows renderings of several different representations of the chest data. The two hybrid

renderings using a volumetric resolution of 128³ in images (c) and (d) provide a large improvement over the 128³ volume rendering alone in image (b). The ⁄ error threshold in image (d) produces an improvement over the ⁄ error threshold in image (c) by extending the very ends of the lung airways. The 256³ volume

rendering in image (e) is able to capture the larger airways, but notice that the color is slightly incorrect: the

airways are more pink than those in the original. This is because the very low density areas around the

airways smear out the higher values, resulting in density values that are too low. The hybrid representation

in (f) is able to correct much of this error and add additional detail to the fine structures.

Figure 26 shows the improvement that the addition of points offers for a 128³ volume representation.

These images were generated by subtracting the standard and hybrid representations, respectively, from a

512³ reference rendering, and inverting the result for better clarity in print. The hybrid rendering noticeably

reduces the magnitude of the darker regions (those with highest error) of the difference image. The

magnitude of the lightly colored, low-error regions was not affected, because the error value of those

regions was below the threshold set during point generation (in this case, ⁄).
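The per-pixel metric used in Figure 26 can be reproduced on toy channel values. This is a sketch: the real comparison is performed per pixel and per color channel on full rendered images:

```python
def difference_image(reference, test):
    """Per-channel difference as in Figure 26: 256 - |ref - test|,
    so 256 (white) means an exact match."""
    return [256 - abs(r - t) for r, t in zip(reference, test)]

def stats(pixels):
    """Mean and standard deviation of the difference values: a higher
    mean means less average error; a lower deviation means fewer
    high-error pixels."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var ** 0.5

ref = [200, 180, 90, 40]
low = [190, 150, 95, 40]  # toy "low-resolution" rendering values
diff = difference_image(ref, low)
print(diff, stats(diff))
```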


Figure 26 Pixel difference showing the effect of the addition of points. Values were computed as 256 − | high_res − low_res | for each channel, with white (256) indicating an exact match and black indicating that all channels had opposite values (black − white). The hybrid rendering increases the mean (reduces the average error) and reduces the standard deviation (fewer pixels have low value/high error).

(a) Pixel difference between the 512³ resolution rendering and a 128³ volume rendering. Mean = 238, std. dev. = 23.

(b) Pixel difference between the 512³ resolution rendering and the same 128³ volume with 11 million points. Mean = 245, std. dev. = 12.


3.2.3.3 The Furby data set

The hybrid rendering method is particularly well-suited for mechanical data. This type of data is often

characterized by high resolution, sharp edges, fine detail, and large differences in density. The Furby data set

is a 512 × 512 × 2048 CT scan of a Furby, a mechanical toy that interacts with people by moving its body and

playing recorded pieces of conversation. It consists of a fur-covered plastic shell (the fur is not resolved in

the limited dynamic range of the data) containing a variety of electronic and mechanical parts. The original

data was reconstructed from 361 X-ray views at a resolution of 1746 × 1869.

The Furby exhibits many of the properties of mechanical data: it is too big to visualize in real time on

commodity hardware (1 GB with one-byte normals), density values are concentrated at the upper and lower

extremes, and there are many small, sharp features. In the left column of Figure 27 on page 81, the transfer

function was selected to provide a faint outline of the plastic body in red, and to highlight the high-density

metal parts in yellow. The right column shows a slightly different transfer function and view angle, with the

outline of the cloth exterior (and, unfortunately, the box used to hold the toy) in purple, and the metal parts

in yellow and green.

Few metal parts are visible in the volume rendering (parts (a) and (b) of Figure 27) because the down-sampling blurred these fine details into the surrounding low-density area (air). These areas have incorrect

density values and so do not appear when the transfer function is selected to highlight metal parts. The

larger metal parts that do appear are poorly resolved. However, in the hybrid renderings, the printed circuit

board, with the electronics parts coming off the top, is clearly visible in the abdomen of the Furby. The

hybrid method also restores the sharp corners on the metal bars used to pivot the joints, resolves the heads

on the screws, and does a particularly good job on the wires leading to the sensors in the forehead.


Figure 27 Comparison of hybrid rendering for the Furby data set. The original resolution was 512 × 512 × 2048 (1 GB with one-byte normals), too big to display on commodity hardware. The hybrid technique in (c) and (d) is able to restore much of the detail lost by subsampling. Using more points and a finer grid in (e) and (f) gives sharper, smoother details. For example, the wires leading to the head are better separated and less blocky.

(a) and (b): 256³ volume rendering, 0.07 s/frame, 33 MB.

(c) and (d): 256³ volume rendering with 2.9 million points, 512³ error sampling grid, 1/16 error threshold, 0.36 s/frame, 59 MB.

(e) and (f): 256³ volume rendering with 4.7 million points, 1024³ error sampling grid, ⅛ error threshold, 0.58 s/frame, 71 MB.


Raising the resolution of the error sampling grid makes a visible difference in the quality of the rendering generated, but requires that the error threshold be raised to prevent too many points from being generated. Figure 27 shows the difference between error grids of 512³ and 1024³. Raising the error threshold for the 1024³ error calculation grid above ⅛ (the bottom row of Figure 27) to ⁄ (not shown) makes almost no visible difference in the quality of the rendering. This is because both ⅛ and ⁄ are enough to capture the error

from the very dense metal parts, and neither ⅛ nor ⁄ is small enough to capture the error present in the low-density plastic and cloth regions. Lowering the error threshold to ⁄ starts to capture error for the plastic regions, but using a sampling grid of 1024³ produces over 20 million points: too many to render in real time.

3.2.4 Review of hybrid visualization for volume rendering

While visualization systems can handle increasing amounts of data, there will always be data sets larger than

the available graphics subsystem memory. Reducing data size by subsampling can allow this data to be

rendered, but many details and sharp transitions can be lost in this process. The hybrid method presented

here adds points to the volume where error between the original and subsampled data is large. In typical

data sets, these regions of high error are localized to material boundaries, meaning relatively few points can

make up much of the error in the subsampled image.

The most significant limitation of the hybrid approach for volume rendering is that the point-based

rendering performance is much lower than that of the texture-based rendering used by most hardware-accelerated volume rendering methods. This means that even if data size is significantly reduced using the

hybrid representation, rendering performance might be unacceptable if large numbers of points are used.

Because of this limitation, the hybrid rendering technique is most useful when the original volume is too

large to fit into graphics memory, yet high-quality images are desired.


3.3 Hybrid rendering for particle visualization

The technique discussed above allows large-scale volumetric data to be more efficiently stored on

commodity PCs with limited memory and hard drive space. The hybrid point/volume technique can

leverage the same advantages of point-based and volume rendering for large-scale particle-based data: it

combines the speed of texture-based, hardware-accelerated volume rendering and the detail of point-based

rendering. This allows very large scale particle simulations to be interactively visualized on commodity PCs

with little degradation in image fidelity.

3.3.1 Particle beam visualization

Particle accelerators are playing an increasingly important role in basic and applied sciences, such as high-energy physics, nuclear physics, materials science, biological science, and fusion energy. The design of next-generation accelerators requires high-resolution numerical modeling capabilities to reduce cost and

technological risks, and to improve accelerator efficiency, performance, and reliability. While the use of

massively parallel supercomputers allows scientists to routinely perform simulations with 100–1,000 million

particles [Qiang 2000 (parallel)], the resulting data typically requires terabytes of storage space and overwhelms traditional data analysis and visualization tools. At the same time, the complexity and importance of

these simulations makes visualization even more necessary.

The goal of beam dynamics is to understand the beam’s evolution in phase space as it propagates along

the accelerator. Two codes currently produce the results, consisting of the spatial coordinates (x, y, z) and

the momenta (Px, Py, Pz) of each particle: one using Fortran 90 [Qiang 2000 (halo)], and the other using

the Pooma (Parallel Object-Oriented Methods and Applications) C++ framework [Humphrey 1998]. Of

particular interest is the beam halo, the very-low-density region of charge far from the beam core. The halo

is responsible for beam loss as stray particles strike the beam pipe.

In the past, researchers visualized simulated particle data by either viewing the particles directly, or by

converting the particles to volumetric data representing particle density [McCormick 1999]. Each of these


techniques has disadvantages. Direct particle rendering takes too long for interactive exploration of large

datasets. Benchmarks have shown that it takes approximately 50 seconds to render 300 million points on a

Silicon Graphics InfiniteReality engine, and PC workstations are unable to even hold this much data in their

main memory. Reducing the data to fit on a smaller system can result in artifacts, and can hide fine

structures, especially in the critical low-density halo region of the beam.

Ideally, a visualization tool would be able to interactively visualize the beam halo of a large simulation

at very high resolutions. It would also provide real-time modification of the transfer function, and run on

high-end PCs rather than a supercomputer. This tool would be used to quickly browse the data, or to locate

regions of interest for further study. Once identified, these regions can be rendered offline at maximum

quality using a parallel supercomputer.

To address these needs, the system uses a combined particle- and volume-based rendering approach.

The low-density beam halo is represented by directly rendering its constituent particles. This preserves all

fine structures of the data, especially the lowest-density regions consisting of only one or two particles that

would be invisible using a volumetric approach. The high-density beam core is represented by a low-resolution volumetric rendering. This area is of lesser importance, and is dense enough so that individual

particles do not have a significant effect on the rendering. The volume-rendered area provides context for

the particle rendering, and, with the right parameters, is not even perceived as a separate rendering style.

3.3.2 Data representation

To prepare for rendering, a multi-resolution, hierarchical representation is generated from the original,

unstructured, point data. The representation currently implemented is an octree, which is generated on a

distributed-memory parallel computer. This pre-processing step is performed once for each plot type

desired (since there are six values per point, many different three-dimensional plots can be generated from

each dataset). This data is later loaded by a viewing program for interactive visualization.

The hierarchical data consists of two parts: the octree data and the point data. At each octree node, the

density of points in the node and the minimum density of all sub-nodes is stored. At the leaf octree nodes


(the nodes at the finest level of subdivision), the index into the point data of the node’s constituent points is

stored. The leaf nodes should be small enough so that the boundary between point-rendered nodes and

volume-rendered nodes appears smooth on the screen. Simultaneously, the nodes need to be big enough to

contain enough points to accurately calculate point density.
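A sketch of the node layout and a traversal that finds the point-rendered regions is shown below. The field names and the toy four-child tree are invented for illustration; the stored minimum subtree density is what lets whole high-density subtrees be skipped:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OctreeNode:
    density: float                     # particle density within this node
    min_subtree_density: float         # minimum density over all sub-nodes
    children: Optional[List["OctreeNode"]] = None  # 8 children, or None for a leaf
    point_index: Optional[int] = None  # leaf only: index into the point data

def collect_point_nodes(node, threshold, out):
    """Gather leaf nodes below the density threshold: these are the
    regions rendered with points rather than the volume."""
    if node.min_subtree_density > threshold:
        return  # entire subtree is dense: volume-rendered only
    if node.children is None:
        if node.density <= threshold and node.point_index is not None:
            out.append(node.point_index)
    else:
        for child in node.children:
            collect_point_nodes(child, threshold, out)

# Toy tree with four leaves (a real octree node has eight children).
leaves = [OctreeNode(d, d, point_index=i) for i, d in enumerate([0.1, 5.0, 0.3, 9.0])]
root = OctreeNode(3.6, 0.1, children=leaves)
found = []
collect_point_nodes(root, 1.0, found)
print(found)  # point-data indices of the low-density leaves
```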

Since the size of the point data is several times the available memory on the workstation used for

interaction, not all of the points can be loaded at once by the viewing program. Having to load points from

disk to display each frame would result in a loss of interactivity. Instead, the application takes advantage of

the fact that only low-density regions are rendered using the point-based method. High-density regions,

consisting of the majority of points in the dataset, are only volume rendered, and the point data is never

needed. Therefore, the points belonging to lower-density nodes are stored separately from the rest of the

points in the volume. The rendering program pre-loads these points from disk when it loads the data. It can

then generate images entirely from in-core data as long as the display threshold for points does not exceed

that chosen by the partitioning program. For this reason, the partitioning program generates approximately

as much pre-loaded data as there is memory available on the viewing computer.


Figure 28 The user interface, showing the volume transfer function (black box in the top right of the window) and the point transfer function (below it) with the phase plot (x, Px, z) of frame 170 loaded. This image consists of 2.7 million points and displays in about one second on a GeForce 3.


3.3.3 User interaction

The rendering program is used to view the partitioned data generated by the parallel computer. As shown in

Figure 28, it displays the rendering of the selected volume in the main portion of the window, where it can

be manipulated using the mouse. Controls for selecting the transfer functions for the point-based rendering

and the volume-based rendering are located on the right panel.

The volume transfer function maps point density to color and opacity for the volume-rendered portion

of the image. Often, a step function is used to map low-density regions to zero (fully transparent) and higher

density regions to some low constant so that one can see inside the volume. Ramp functions are used for the

transition between the high and low values, so the boundary of the volume-rendered region is less visible.

More complex transfer functions could also be used. However, they would not necessarily hide the

transition between the point and volumetric data, and a great amount of control is not necessary for this

application: three-dimensional rendering is most commonly used for high-level exploration, while

traditional full-resolution two-dimensional renderings are used for more detailed study of identified features.

The point transfer function maps density to number of points rendered for the point-rendered portion

of the image. Below a certain threshold density, the data is rendered as points; above that threshold,

no points are drawn. Intermediate values are mapped to the fraction of points drawn. When the transfer

function’s value is at 0.75 for some density, for example, it means that three out of every four points are

drawn for areas of that density. This allows the user to see fewer points if too many points are obscuring

important features, or to make rendering faster. It also allows a smooth transition between point-rendered

portions of the image and non-point-rendered portions. Point opacity is given as a separate control, a

feature that can be useful when many points are being drawn.
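The point transfer function can be sketched the same way; with these illustrative thresholds it is the inverse of the ramp used for the volume, mirroring the linked-by-default behavior:

```python
def point_fraction(density, low=0.10, high=0.20):
    """Point transfer function: all points are drawn in sparse regions,
    none above the threshold density, and an intermediate fraction in
    between, giving a smooth transition into the volume-rendered core.
    Thresholds are illustrative."""
    if density <= low:
        return 1.0
    if density >= high:
        return 0.0
    return (high - density) / (high - low)
```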

By default, the two transfer functions are inverses of each other. Changing one results in an equal and

opposite change in the other. This way, there is always an even transition between volume- and point-

rendered regions of the image. In many cases, this transition is not even visible. The user can unlink the

functions, if desired, to provide more or less overlap between the regions.


3.3.4 Rendering

The octree data structure allows efficient extraction of the information necessary to draw both the

volumetric- and point-rendered portions of the image. Volumetric data is extracted directly from the density

values of all nodes at a given level of the octree.

To draw the volume, a palette is loaded that is based on the transfer function the user specified for the

volumetric portion of the rendering. This palette maps each 8-bit density value of the texture to a color and

an opacity; regions too sparse to be displayed for the given transfer functions are simply given zero opacity

values. Then, a series of planes is drawn through the volume. The accumulation of these planes gives the

impression of a volume rendering. While often the highest possible resolution supported by the hardware

can be used for rendering, lower resolutions often suffice in this application. This is because the core of the

beam is typically diffuse, rendered mostly transparent, and is obscured by points. All beam images presented

here were produced using a volume resolution of only 64³.
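The palette construction can be sketched as follows; the 256-entry RGBA table and the white base color are illustrative stand-ins for the system's actual paletted-texture setup:

```python
def build_palette(transfer, color=(1.0, 1.0, 1.0)):
    """Build a 256-entry RGBA palette for a paletted 3D texture: each
    8-bit density value maps to a color and an opacity, and densities
    the transfer function rejects simply get zero opacity. The constant
    white base color is illustrative."""
    return [(color[0], color[1], color[2], transfer(i / 255.0))
            for i in range(256)]

# Example: a step transfer that hides everything below a threshold
# (values illustrative).
palette = build_palette(lambda d: 0.05 if d >= 0.2 else 0.0)
```

Because only the palette depends on user input, changing the transfer function does not require re-uploading the density texture itself.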

In contrast to the volume rendering, in which only the palette is changed in response to user input,

point rendering requires that the appropriate points from the dataset be selected each time a frame is

rendered. Therefore, regions too dense for point-based rendering should be eliminated quickly.

To display a frame, the maximum density that can be represented with points is computed

based on the transfer function given by the user. Since each octree node contains the minimum density of

any of its sub-nodes, only octree paths leading to renderable leaf nodes must be traversed; octree nodes

leading only to dense regions in the middle of the beam need never be expanded.
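The pruned traversal can be sketched as below; the node structure and field names are illustrative, not the dissertation's actual octree format:

```python
class OctreeNode:
    """Minimal stand-in for the octree described in the text: each node
    records the minimum density found anywhere in its subtree, and only
    leaves carry point lists. Field names are illustrative."""
    def __init__(self, min_density, points=None, children=None):
        self.min_density = min_density
        self.points = points if points is not None else []
        self.children = children if children is not None else []

def collect_renderable(node, max_density, out):
    """Gather points from leaves sparse enough to be point-rendered.
    A subtree whose minimum density already exceeds the threshold lies
    entirely in the dense beam core and is pruned without expansion."""
    if node.min_density > max_density:
        return
    if not node.children:
        out.extend(node.points)
        return
    for child in node.children:
        collect_renderable(child, max_density, out)
```

Storing the subtree minimum (rather than the node's own density) is what makes the early-out sound: if even the sparsest descendant is too dense, nothing below can be renderable.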

Once the program decides that a leaf node must be rendered, it uses the point transfer function to

estimate the fraction of points to draw. Most commonly, this value is one (meaning all points are drawn in

the cell), but may be less than one depending on the transfer function specified by the user. The list of points

is then processed and every n-th one is drawn. The first point drawn is selected at a random index

between 0 and n. This eliminates possible visual artifacts resulting from the selection of a predictable subset of

points from data that may have structure in the order it was originally written to disk. Figure 29 illustrates

the two regions of the volume used in the image generation process.
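The every-n-th selection with a randomized start can be sketched as follows (function and parameter names are illustrative):

```python
import random

def subsample(points, fraction, rng=random):
    """Draw approximately `fraction` of a leaf's points by taking every
    n-th one with n close to 1/fraction, starting at a random offset in
    [0, n). The random start avoids artifacts from any structure in the
    order the points were originally written to disk."""
    if fraction >= 1.0:
        return list(points)
    if fraction <= 0.0:
        return []
    n = max(1, round(1.0 / fraction))
    return points[rng.randrange(n)::n]
```

Note that every-n-th selection realizes fractions of the form 1/n exactly; intermediate values such as 0.75 are only approximated by this sketch.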


3.3.5 Results

The system was tested using the results from a self-consistent simulation of charged particle dynamics in an

alternating-focused transport channel. The simulation, which was based on an actual experiment, was done

using 100 million particles. Each particle was given the same charge-to-mass ratio as a real particle. The

particles moving inside the channel were modeled including the effects of external fields from magnetic

quadrupoles and self-fields associated with the beam’s space charge. The three-dimensional mean-field

space-charge forces were calculated at each time step by solving the Poisson equation using the charge

density from the particle distribution. The initial particle distribution was generated by sampling a 6d

waterbag distribution (i.e. a uniformly filled ellipsoid in 6d phase space). At the start of the simulation, the

distribution was distorted to account for third-order nonlinear effects associated with the transport system

upstream of the starting point of the simulation. In the simulation, as in the experiment, quadrupole settings

at the start of the beam line were adjusted so as to generate a mismatched beam with a pronounced halo.

The output of the simulation consisted of 360 frames of particle phase space data, where each frame

contained phase space information at one time step.

[Figure 29 panel labels: original octree representing particle density; volume-rendered portion; particle-rendered portion; result]

Figure 29 The image is created by classifying each octree node as belonging to a volume-rendered region or a point-rendered region, depending on the transfer functions for each region (the regions can overlap, as in this example). The combination of the two regions defines the output image.


Figure 30 Comparison of a volume rendering (top) and a mixed volume/point rendering (bottom) of the phase plot (x, Px, y) of frame 170. The volume rendering has a resolution of 256³. The mixed rendering has a volumetric resolution of 64³, two million points, and displays at about three frames per second. The mixed rendering provides more detail than the volume rendering, especially in the lower-left arm. Note that the renderings look different due to slight differences in the renderer. The volume renderer is also limited in the very light areas by the resolution of the frame buffer.


Several frames of this data were moved onto a PC cluster for partitioning, although the data could have

been partitioned on the large IBM SP that was used to generate the data. Eight PCs were used, each with a

1.33 GHz AMD Athlon processor and one gigabyte of main memory. A typical partitioning step took a few

minutes, with most of the time being spent on disk I/O. The resulting data was visualized on one of the

cluster computers equipped with an nVidia GeForce 3.

Figure 30 shows a comparison of a standard volumetric rendering, and a mixed point and volumetric

rendering of the same object. The mixed rendering is able to more clearly resolve the horizontal

stratifications in the right arm, and also reveals thin horizontal stratifications in the left arm not visible in

the volume rendering from this angle.

Figure 31 shows how the view program can be used to refine the rendering from a low-quality, highly

interactive view to a higher-quality, less interactive view.

3.3.6 Review of hybrid visualization for particle rendering

Large particle accelerator simulations can be difficult to visualize because the extremely large size of the data

overwhelms most visualization systems. With the hybrid storage and rendering method presented here,

those areas of near-constant point density are replaced with the density value itself. The majority of the

points are in the dense but homogeneous beam center, so the resulting volume is able to represent the

majority of the data in any frame in a compact way. The less homogeneous and less dense areas of the beam

halo are kept in their original representation, preserving possible features of interest. The result is a

substantially smaller data set that can be rendered with the illusion of a full particle rendering.

3.4 Conclusions on hybrid visualization

The hybrid point/volumetric storage and rendering method improves the rendering quality available for a given

amount of graphics hardware, and significantly reduces the amount of storage space required. Volume

rendering lacks the spatial resolution and the dynamic range to resolve regions of very high spatial frequency,

areas which may be of significant interest to researchers. Point-based rendering lacks the interactive speed


and the ability to run on a desktop workstation for many large data sets. The hybrid point/volume approach

presented here combines the strengths of each to produce significantly smaller data with minimal loss in

quality.

The hybrid approach replaces areas with slowly varying density with a low-resolution volume

representation. For volumetric source data, this means generating a low-resolution version of the original.

For point-based source data, this means producing a low-resolution volume representing density. Points are

then added to this low-resolution volume data in those areas poorly represented in the original. In typical

data sets, these areas are localized: they are material boundaries in volume data, and they are the beam halo

regions for particle accelerator simulations. This means that relatively few points are necessary to achieve

renderings with quality close to that of the original.
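The partitioning decision above can be sketched as a single pass over density cells; the (density, points) pair structure is an illustrative stand-in for the octree leaves, and the threshold is a free parameter:

```python
def split_hybrid(cells, threshold):
    """Partition the data as in the hybrid scheme: every cell keeps its
    density for the low-resolution volume, but only cells below the
    density threshold (e.g. the sparse beam halo) keep their original
    points. `cells` is a list of (density, points) pairs, an
    illustrative stand-in for the octree leaves."""
    densities = []   # per-cell densities, rendered volumetrically
    kept_points = [] # raw points preserved for sparse, featureful cells
    for density, cell_points in cells:
        densities.append(density)
        if density < threshold:
            kept_points.extend(cell_points)
    return densities, kept_points
```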


Figure 31 A progression showing how exploration is performed. (a) Shows the initial screen, with a volume-only rendering. (b) The boundary between the high-density volume rendering and the low-density particle rendering has been moved to show more particles. (c) The transfer functions have been unlinked to show more particles while keeping the volume-rendered portion relatively transparent. (d) The point opacity has been lowered to reveal more structure. (e) The volume has been rotated to view it end-on. (f) A higher-resolution version similar to (d).



Conclusion

The increasing maturity of the computer graphics field and the power of today’s computers have made

graphics and visualization an integral part of many areas of society, including science and the arts. This

means, however, that many of the traditional methods and styles of rendering are no longer seen as unusual

or innovative. Now, the challenge is to produce the most useful and attractive images and animations

possible.

The production of artistically attractive renderings is often seen as completely separate from

visualization. But aesthetics and utility are intrinsically intertwined. Improving the speed and fluidity of an

animation makes it more attractive, but also increases the ability of viewers to comprehend the content.

Likewise, improving the clarity and simplicity of a visualization will simultaneously create an image that is

more aesthetically focused.

Attractive scientific renderings are also an important part of communication with the general public.

Attractive images are more likely to be published and viewed by the public, increasing the visibility of the

research in question. For example, the Hubble Space Telescope often takes many different exposures at

different wavelengths of a given object, useful for scientists but not understandable by the general public.

The Hubble Heritage Project was formed to take these measurements and generate attractive color images

for dissemination to the public. These color images are less scientifically useful than the originals, but “by

emphasizing compelling HST images distilled from scientific data, [they] hope to pique curiosity about our

astrophysical understanding of the universe we all inhabit” [Hubble Heritage]. It is efforts such as this that

make the Hubble telescope so widely known and appreciated by the general public. Many other scientific


projects can benefit from this type of public exposure for funding, for disseminating new research results,

and to inspire today’s students to enter careers in science.

This research has used an understanding of the human visual system to achieve better results than

traditional rendering for a variety of applications and with a variety of goals. Certain kinds of optimized

representations and rendering can increase efficiency while being visually very similar to a full rendering of

all the data. Also, an understanding of the way images are perceived helps create simpler and more

understandable renderings. With an explosion of complex data, simplicity is one of the most important

aspects of a successful visualization; the eye should be directed to the areas of primary interest, and there

should be enough surrounding data to put those details into proper context.

The approach of using discrete primitives has worked very well for the applications addressed by this

research. Looking forward to the future of rich visualization, it is likely that the most significant gains will be

made through the intelligent combination of these rendering approaches with others. For example, image

processing that models human perception of a scene can be applied to any rendering technique,

including traditional volume and surface rendering. Combining many different rendering styles such as

stippling with surface meshes and volume rendering can help make sense of a complex multimodality data

set. Combining point-based and volume structures with an even wider range of representations such as

irregular grids and surfaces might further improve efficiency for new classes of problems.

The great range of representations and rendering styles gives a nearly limitless potential. The key to

developing this potential is the intelligent application and combination of these approaches. For example, if

image segmentation indicates ambiguous areas in a volume rendering with an embedded stippled surface,

how can this ambiguity best be resolved? On a machine with specific performance characteristics, what is

the best combination of volume and point-based techniques that will balance rendering performance with

storage performance? By optimally combining a variety of rendering and storage methods with little or no

human interaction, visually rich and informative visualizations can be open to people of all scientific

disciplines.


References

[Adams 1983] Ansel Adams. Examples: The Making of 40 Photographs. Little, Brown, and Company, Boston, 1983.

[Adams 1994] Rolf Adams and Leanne Bischof. Seeded Region Growing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(6), pp. 641–647, 1994.

[Alexa 2001] Marc Alexa, Johannes Behr, Daniel Cohen-Or, Shachar Fleishman, David Levin, and Claudio T. Silva. Point Set Surfaces. Proceedings of IEEE Visualization, pp. 21–28, 2001.

[Barnard 1991] Michael Barnard. Introduction to the Printing Process. Blueprint, London, 1991.

[Beauchemin 1995] Steven S. Beauchemin and John L. Barron. The Computation of Optical Flow. ACM Computing Surveys (CSUR), 27(3), pp. 433–467, 1995.

[Buchanan 2000] John W. Buchanan and Mario C. Sousa. The Edge Buffer: A Data Structure for Easy Silhouette Detection. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering NPAR, pp. 39–42, 2000.

[Cabral 1994] Brian Cabral, Nancy Cam, and Jim Foran. Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware. Proceedings of the Workshop on Volume Visualization, pp. 91–98, 1994.

[Cavanagh 1989] Patrick Cavanagh and Yvan G. Leclerc. Shape from Shadows. Journal of Experimental Psychology, 15(1), pp. 3–27, 1989.

[Chamberlain 1996] Bradford Chamberlain, Tony DeRose, Dani Lischinski, David Salesin, and John Snyder. Fast Rendering of Complex Environments Using a Spatial Hierarchy. Proceedings of Graphics Interface, pp. 132–141, May 1996.

[Claes 2001] Johan Claes, Fabian Di Fiore, Gert Vansichem, and Frank Van Reeth. Fast 3D Cartoon Rendering with Improved Quality by Exploiting Graphics Hardware. Proceedings of Image and Vision Computing New Zealand IVCNZ, pp. 13–18, 2001

[Cline 1988] Harvey E. Cline, William E. Lorensen, Sigwalt Ludke, Carl R. Crawford, and Bruce C. Teeter. Two Algorithms for the Three-Dimensional Construction of Tomograms. Medical Physics, 15(3), pp. 320–327, 1988.

[Cohen 1991] Laurent D. Cohen. On Active Contour Models and Balloons. CVGIP: Image Understanding, 53(2), pp. 211–218, 1991.


[Curtis 1997] Cassidy Curtis, Sean Anderson, Josh Seims, Kurt Fleischer, and David Salesin. Computer-Generated Watercolor. Proceedings of SIGGRAPH, pp. 421–430, 1997.

[Deussen 2000 (a)] Oliver Deussen, Stefan Hiller, Cornelius van Overveld, and Thomas Strothotte. Floating Points: A Method for Computing Stipple Drawings. Computer Graphics Forum, 19(3) (Proceedings of Eurographics), pp. 41–51, 2000.

[Deussen 2000 (b)] Oliver Deussen and Thomas Strothotte. Computer-Generated Pen-and-Ink Illustration of Trees. Proceedings of SIGGRAPH, pp. 13–18, 2000.

[Deussen 1999] Oliver Deussen, Jörg Hamel, Andreas Raab, Stefan Schlechtweg, and Thomas Strothotte. An Illustration Technique Using Hardware-Based Intersections and Skeletons. Proceedings of Graphics Interface, pp. 175–182, 1999.

[Dong 2001] Feng Dong, G.J. Clapworthy, and M. Krokos. Volume Rendering of Fine Details Within Medical Data. Proceedings of IEEE Visualization, pp. 387–394, 2001.

[Dong 2003] Feng Dong, Gordon J. Clapworthy, Hai Lin, and Meleagros A. Krokos. Nonphotorealistic Rendering of Medical Volume Data. IEEE Computer Graphics and Applications, 23(4), pp. 44–52, 2003.

[Du 1999] Qiang Du, Vance Faber, and Max Gunzburger. Centroidal Voronoi Tessellations. SIAM Review, 41(4), pp. 637–676, 1999.

[Durand 2001] Frédo Durand, Victor Ostromoukhov, Mathieu Miller, François Duranleau, and Julie Dorsey. Decoupling Strokes and High-Level Attributes for Interactive Traditional Drawing. Proceedings of the Eurographics Workshop on Rendering, pp. 71–82, 2001.

[Ebert 2000] David Ebert and Penny Rheingans. Volume Illustration: Non-Photorealistic Rendering of Volume Models. Proceedings of IEEE Visualization, pp. 195–202, 2000.

[Freeman 1999] William T. Freeman, Joshua B. Tenenbaum, and Egon Pasztor. An Example-Based Approach to Style Translation for Line Drawings. Technical report tr99-11, Mitsubishi Electric Research Laboratories MERL, 1999.

[Freudenberg 2001] Bert Freudenberg, Maic Masuch, and Thomas Strothotte. Walk-Through Illustrations: Frame-Coherent Pen-and-Ink Style in a Game Engine. Computer Graphics Forum, 20(3), 2001.

[Gauch 1999] John M. Gauch. Image Segmentation and Analysis via Multiscale Gradient Watershed Hierarchies. IEEE Transactions on Image Processing, 8(1), pp. 69–79, January 1999.

[Girshick 2000] Ahna Girshick, Victoria Interrante, Steven Haker, and Todd Lemoine. Line Direction Matters: An Argument for the Use of Principal Directions in 3D Line Drawings. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering NPAR, pp. 43–52, 2000.

[Gooch 1998] Amy Gooch, Bruce Gooch, Peter Shirley, and Elaine Cohen. A Non-Photorealistic Lighting Model for Automatic Technical Illustration. Proceedings of SIGGRAPH, pp. 447–452, 1998.

[Grossman 1998] J. P. Grossman and William J. Dally. Point Sample Rendering. Proceedings of the Eurographics Rendering Workshop, pp. 181–192, 1998.

[Guptill 1997] Arthur L. Guptill. Rendering in Pen and Ink. Watson-Guptill Publications, New York, 1997.

[Halper 2002] Nick Halper, Stefan Schlechtweg, and Thomas Strothotte. Creating Non-Photorealistic Images the Designer’s Way. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering NPAR, pp. 97–105, 2002.

[Hamel 1999] Jörg Hamel and Thomas Strothotte. Capturing and Re-Using Rendering Styles For Non-Photorealistic Rendering. Proceedings of Eurographics, pp. 173–182, 1999.


[Hansen 1997] Michael W. Hansen and William E. Higgins. Relaxation Methods for Supervised Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(9), pp. 949–962, September 1997.

[Haris 1998] Kostas Haris, Serafim N. Efstratiadis, Nicos Maglaveras, and Aggelos K. Katsaggelos. Hybrid Image Segmentation Using Watersheds and Fast Region Merging. IEEE Transactions on Image Processing, 7(12), pp. 1684–1699, 1998.

[Hertzmann 2000] Aaron Hertzmann and Denis Zorin. Illustrating Smooth Surfaces. Proceedings of SIGGRAPH, pp. 517–526, 2000.

[Hertzmann 2001 (a)] Aaron Hertzmann, Charles E. Jacobs, Nuria Oliver, Brian Curless, and David H. Salesin. Image Analogies. Proceedings of SIGGRAPH, pp. 327–340, 2001.

[Hertzmann 2001 (b)] Aaron Hertzmann. Paint by Relaxation. Proceedings of Computer Graphics International, pp. 47–54, 2001.

[Hiller 2001] Stefan Hiller, Oliver Deussen, and Alexander Keller. Tiled Blue Noise Samples. Proceedings of Vision, Modeling, and Visualization, pp. 265–272, 2001.

[Hiller 2003] Stefan Hiller, Heino Hellwig, and Oliver Deussen. Beyond Stippling—Methods for Distributing Objects on the Plane. Computer Graphics Forum, 22(3), pp. 515–522, 2003.

[Hoff 1999] Kenneth E. Hoff III, Tim Culver, John Keyser, Ming Lin, and Dinesh Manocha. Fast Computation of Generalized Voronoi Diagrams Using Graphics Hardware. Proceedings of SIGGRAPH, pp. 277–286, 1999.

[Hubble Heritage] Hubble Heritage Project. http://heritage.stsci.edu/

[Humphrey 1998] William Humphrey, Robert Ryne, Timothy Cleland, Julian Cummings, Salman Habib, Graham Mark, Ji Qiang. Particle Beam Dynamics Simulations Using the Pooma Framework. Proceedings of the International Symposium on Computing in Object-Oriented Parallel Environments ISCOPE, pp. 25–34, 1998.

[Interrante 1995] Victoria Interrante, Henry Fuchs, and Stephen Pizer. Enhancing Transparent Skin Surfaces With Ridge and Valley Lines. Proceedings of IEEE Visualization, pp. 52–59, 1995.

[Interrante 1997] Victoria L. Interrante. Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution. Proceedings of SIGGRAPH, pp. 109–116, 1997.

[Kalnins 2002] Robert D. Kalnins, Lee Markosian, Barbara J. Meier, Michael A. Kowalski, Joseph C. Lee, Philip L. Davidson, Matthew Webb, John F. Hughes, and Adam Finkelstein. WYSIWYG NPR: Drawing Strokes Directly on 3D Models. ACM Transactions on Graphics, 21(3), pp. 755–762, 2002.

[Kass 1987] Michael Kass, Andrew Witkin, and Demetri Terzopoulos. Snakes: Active Contour Models. International Journal of Computer Vision, 1(4), pp. 321–331, 1987.

[Kniss 2001] Joe Kniss, Patrick McCormick, Allen McPherson, James Ahrens, Jamie Painter, Alan Keahey, and Charles Hansen. Interactive Texture-Based Volume Rendering for Large Data Sets. IEEE Computer Graphics and Applications, 21(4), pp. 52–61, 2001.

[LaMar 1999] Eric LaMar, Bernd Hamann, and Kenneth I. Joy. Multiresolution Techniques for Interactive Texture-Based Volume Visualization. Proceedings of IEEE Visualization, pp. 355–361, 1999.

[Laur 1991] David Laur and Pat Hanrahan. Hierarchical Splatting: A Progressive Refinement Algorithm for Volume Rendering. Proceedings of SIGGRAPH, pp. 285–288, 1991.

[Levoy 1985] Marc Levoy and Turner Whitted. The Use of Points as a Display Primitive. Technical Report tr-85-022, The University of North Carolina at Chapel Hill, 1985.

Page 103: Rich Visualizations From Discrete Primitives...The pen-and-ink non-photorealistic rendering style, consisting of discrete black primitives, is one such technique, but is particularly

99

[Lloyd 1982] S. Lloyd. Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28(2), pp. 129–137, 1982.

[Lu 2002 (a)] Aidong Lu, Christopher J. Morris, David S. Ebert, Penny Rheingans and Charles Hansen. Non-Photorealistic Volume Rendering Using Stippling Techniques. Proceedings of IEEE Visualization, pp. 211–218, 2002.

[Lu 2002 (b)] Aidong Lu, Joe Taylor, Mark Hartner, David Ebert, and Charles Hansen. Hardware-Accelerated Interactive Illustrative Stipple Drawing of Polygonal Objects. Proceedings of Vision, Modeling, and Visualization, pp. 61–68, 2002.

[Lum 2001] Eric B. Lum and Kwan-Liu Ma. Non-Photorealistic Rendering Using Watercolor Inspired Textures and Illumination. Proceedings of Pacific Graphics, pp. 322–331, 2001.

[Lum 2002] Eric B. Lum and Kwan-Liu Ma. Hardware-Accelerated Parallel Non-Photorealistic Volume Rendering. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering NPAR, pp. 67–74, 2002.

[Ma 2002] Kwan-Liu Ma, Greg Schussman, Brett Wilson, Kwok Ko, Ji Qiang, and Robert Ryne. Advanced Visualization Technology for Terascale Particle Accelerator Simulations. Proceedings of Supercomputing, pp. 1–11, 2002.

[Majumder 2002] Aditi Majumder and M. Gopi. Hardware Accelerated Real Time Charcoal Rendering. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering NPAR, pp. 59–66, 2002.

[Malik 2001] Jitendra Malik, Serge Belongie, Thomas Leung, and Jianbo Shi. Contour and Texture Analysis for Image Segmentation. International Journal of Computer Vision, 43(1), pp. 7–27, 2001.

[Mao 1996] Xiaoyang Mao. Splatting of Non-Rectilinear Volumes Through Stochastic Resampling. IEEE Transactions on Visualization and Computer Graphics, 2(2), pp. 156–170, June 1996.

[Mardia 1988] Kanti V. Mardia and T. J. Hainsworth. A Spatial Thresholding Method for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(6), pp. 919–927, November 1988.

[Max 1995] Nelson Max. Optical Models for Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2), pp. 99–108, 1995.

[McCool 1992] Michael McCool and Eugene Fiume. Hierarchical Poisson Disk Sampling Distributions. Proceedings of Graphics Interface, pp. 94–105, 1992.

[McCormick 1999] Patrick S. McCormick, Ji Qiang, and Robert D. Ryne. Visualizing High-Resolution Accelerator Physics. IEEE Computer Graphics and Applications, 19(5), pp. 11–13, 1999.

[Meier 1996] Barbara J. Meier. Painterly Rendering for Animation. Proceedings of SIGGRAPH, pp. 477–484, 1996.

[Meissner 1999] Michael Meißner, Ulrich Hoffmann, and Wolfgang Straßer. Enabling Classification and Shading for 3D Texture Mapping Based Volume Rendering Using OpenGL and Extensions. Proceedings of IEEE Visualization, pp. 207–214, 1999.

[Meredith 2001] Jeremy Meredith and Kwan-Liu Ma. Multiresolution View-Dependent Splat Based Volume Rendering of Large Irregular Data. Proceedings of the Parallel and Large Data Visualization Symposium, pp. 93–99, 2001.

[Meruvia 2003] Oscar Meruvia Pastor, Bert Freudenberg, and Thomas Strothotte. Real-Time Animated Stippling. IEEE Computer Graphics and Applications, 23(4), pp. 62–68, 2003.


[Meyer 1990] Fernand Meyer and Serge Beucher. Morphological Segmentation. Journal of Visual Communication and Image Representation, 1(1), pp. 21–46, 1990.

[Mitchell 1987] Don P. Mitchell. Generating Antialiased Images at Low Sampling Densities. Proceedings of SIGGRAPH, pp. 65–72, 1987.

[Nagy 2002] Zoltan Nagy, Jens Schneider, and Rüdiger Westermann. Interactive Volume Illustration. Proceedings of Vision, Modeling, and Visualization, pp. 497–504, 2002.

[Nice 1993] Claudia Nice. Sketching Your Favorite Subjects in Pen and Ink. North Light Books, Cincinnati, Ohio, 1993.

[Nice 1995] Claudia Nice. Creating Textures in Pen and Ink with Watercolor. North Light Books, Cincinnati, Ohio, 1995.

[Northrup 2000] J. D. Northrup and Lee Markosian. Artistic Silhouettes: A Hybrid Approach. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering NPAR, pp. 31–37, 2000.

[Ostromoukhov 2000] Victor Ostromoukhov. Artistic Halftoning—Between Technology and Art. Proceedings of SPIE, Volume 3963, pp. 489–509, 2000.

[Ostromoukhov 2001] Victor Ostromoukhov. A Simple and Efficient Error-Diffusion Algorithm. Proceedings of SIGGRAPH, pp. 567–572, 2001.

[Ostromoukhov 1999] Victor Ostromoukhov. Digital Facial Engraving. Proceedings of SIGGRAPH, pp. 417–424, 1999.

[Pavlidis 1990] Theodosios Pavlidis and Yuh-Tay Liow. Integrating Region Growing and Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(3), pp. 225–233, 1990.

[Pfister 1999] Hanspeter Pfister, Jan Hardenbergh, Jim Knittel, Hugh Lauer, and Larry Seiler. The VolumePro Real-Time Ray-Casting System. Proceedings of SIGGRAPH, pp. 251–260, 1999.

[Praun 2000] Emil Praun, Adam Finkelstein, and Hugues Hoppe. Lapped Textures. Proceedings of SIGGRAPH, pp. 465–470, 2000.

[Praun 2001] Emil Praun, Hugues Hoppe, Matthew Webb, and Adam Finkelstein. Real-Time Hatching. Proceedings of SIGGRAPH, pp. 579–584, 2001.

[PRMan App. Note № 24] PhotoRealistic RenderMan Application Note 24: Using Arbitrary Output Variables in PhotoRealistic Renderman (With Applications). Pixar, 1998. http://graphics.stanford.edu/lab/soft/prman/Toolkit/AppNotes/appnote.24.html

[Qiang 2000 (parallel)] Ji Qiang, Robert D. Ryne, Salman Habib, and Viktor Decyk. An Object-Oriented Parallel Particle-In-Cell Code for Beam Dynamics Simulation in Linear Accelerators. Journal of Computational Physics, 163(2), pp. 434–451, 2000.

[Qiang 2000 (halo)] Ji Qiang and Robert D. Ryne. Beam Halo Studies Using a 3-Dimensional Particle-Core Model. Physical Review Special Topics—Accelerators and Beams Volume 3, 2000.

[Raskar 1999] Ramesh Raskar and Michael Cohen. Image Precision Silhouette Edges. Proceedings of the Symposium on Interactive 3D Graphics I3DG, 1999.

[Reeves 1983] William T. Reeves. Particle Systems—A Technique for Modeling a Class of Fuzzy Objects. ACM Transactions on Graphics, 2(2), pp. 91–108, April 1983.

[Reeves 1985] William T. Reeves and Ricki Blau. Approximate and Probabilistic Algorithms for Shading and Rendering Structured Particle Systems. Proceedings of SIGGRAPH, pp. 313–322, 1985.

[Rosenfeld 1976] A. Rosenfeld, R. Hummel, and S. Zucker. Scene Labeling by Relaxation Operations. IEEE Transactions on Systems, Man, and Cybernetics, 6(6), pp. 420–433, 1976.

[Rosenfeld 1981] A. Rosenfeld and R. C. Smith. Thresholding Using Relaxation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 3(5), pp. 598–606, September 1981.

[Rössl 2000] Christian Rössl and Leif Kobbelt. Line-Art Rendering of 3D-Models. Proceedings of Pacific Graphics, pp. 87–97, 2000.

[Röttger 2000] Stefan Röttger, Martin Kraus, and Thomas Ertl. Hardware-Accelerated Volume and Isosurface Rendering Based on Cell-Projection. Proceedings of IEEE Visualization, pp. 109–116, 2000.

[Rüdiger 1997] Rüdiger Westermann and Thomas Ertl. The VSBUFFER: Visibility Ordering of Unstructured Volume Primitives by Polygon Drawing. Proceedings of IEEE Visualization, pp. 35–42, 1997.

[Rusinkiewicz 2000] Szymon Rusinkiewicz and Marc Levoy. Qsplat: A Multiresolution Point Rendering System for Large Meshes. Proceedings of SIGGRAPH, pp. 343–352, 2000.

[Sahoo 1988] P. K. Sahoo, S. Soltani, A. K. C. Wong, and Y. C. Chen. A Survey of Thresholding Techniques. Computer Vision, Graphics, and Image Processing, 41(2), pp. 233–260, 1988.

[Salisbury 1994] Mike Salisbury, Sean Anderson, Ronen Barzel, and David Salesin. Interactive Pen-and-Ink Illustration. Proceedings of SIGGRAPH, pp. 101–108, 1994.

[Salisbury 1997] Michael P. Salisbury, Michael T. Wong, John F. Hughes, and David Salesin. Orientable Textures for Image-Based Pen-and-Ink Illustration. Proceedings of SIGGRAPH, pp. 401–406, 1997.

[Sato 2000] Mie Sato, Sarang Lakare, Ming Wan, Arie Kaufman, and Masayuki Nakajima. A Gradient Magnitude Based Region Growing Algorithm for Accurate Segmentation. Proceedings of the International Conference on Image Processing, pp. 448–451, 2000.

[Schumann 1996] Jutta Schumann, Thomas Strothotte, Stefan Laser, and Andreas Raab. Assessing the Effect of Non-Photorealistic Rendered Images in CAD. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 35–41, 1996.

[Secord 2002] Adrian Secord, Wolfgang Heidrich, and Lisa Streit. Fast Primitive Distribution for Illustration. Thirteenth Eurographics Workshop on Rendering, pp. 215–226, 2002.

[Secord 2002] Adrian Secord. Weighted Voronoi Stippling. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering (NPAR), pp. 37–43, 2002.

[Shi 1997] Jianbo Shi and Jitendra Malik. Normalized Cuts and Image Segmentation. IEEE Conference on Computer Vision and Pattern Recognition, pp. 731–743, 1997.

[Shirley 1990] Peter Shirley and Allan Tuchman. A Polygonal Approximation to Direct Scalar Volume Rendering. ACM Computer Graphics (San Diego Workshop on Volume Visualization), 24(5), pp. 63–70, 1990.

[Simmons 1992] Gary Simmons. The Technical Pen. Watson-Guptill Publications, New York, 1992.

[Sousa 2003] Mario Costa Sousa and Przemyslaw Prusinkiewicz. A Few Good Lines: Suggestive Drawing of 3D Models. Computer Graphics Forum, 22(3) (Proceedings of Eurographics), pp. 381–390, 2003.

[Strassmann 1986] Steve Strassmann. Hairy Brushes. Proceedings of SIGGRAPH, pp. 225–232, 1986.

[Swan 1997] J. Edward Swan II, Klaus Mueller, Torsten Möller, Naeem Shareef, Roger A. Crawfis, and Roni Yagel. An Anti-Aliasing Technique for Splatting. Proceedings of IEEE Visualization, pp. 197–204, 1997.

[Treavett 2000] Steve M. F. Treavett and Min Chen. Pen-and-ink Rendering in Volume Visualization. Proceedings of IEEE Visualization, pp. 203–210, 2000.

[Treavett 2001] Steve M. F. Treavett, Min Chen, Richard Satherly, and Mark W. Jones. Volumes of Expression: Artistic Modeling and Rendering of Volume Datasets. Proceedings of Computer Graphics International, pp. 99–106, 2001.

[VanGelder 1996] Allen Van Gelder and Kwansik Kim. Direct Volume Rendering with Shading via Three-Dimensional Textures. Proceedings of the IEEE Symposium on Volume Visualization, pp. 23–30, 1996.

[Vincent 1991] Luc Vincent and Pierre Soille. Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(6), pp. 583–598, 1991.

[Wand 2001] Michael Wand, Matthias Fischer, Ingmar Peter, Friedhelm Meyer auf der Heide, and Wolfgang Straßer. The Randomized Z-Buffer Algorithm: Interactive Rendering of Highly Complex Scenes. Proceedings of SIGGRAPH, pp. 361–370, 2001.

[Weber 2001] Gunther H. Weber, Hans Hagen, Bernd Hamann, Kenneth I. Joy, Terry J. Ligocki, Kwan-Liu Ma, and John M. Shalf. Visualization of Adaptive Mesh Refinement Data. Proceedings of SPIE, pp. 121–132, 2001.

[Weiler 2000] Manfred Weiler, Rüdiger Westermann, Charles Hansen, Kurt Zimmerman, and Thomas Ertl. Level-of-Detail Volume Rendering via 3D Textures. Proceedings of the IEEE Symposium on Volume Visualization, pp. 7–13, 2000.

[Westover 1989] Lee Westover. Interactive Volume Rendering. Proceedings of the Chapel Hill Workshop on Volume Visualization, pp. 9–16, 1989.

[Wilhelms 1996] Jane Wilhelms, Allen Van Gelder, Paul Tarantino and Jonathan Gibbs. Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids. Proceedings of IEEE Visualization, pp. 57–64, 1996.

[Williams 1998] Peter L. Williams, Nelson L. Max, and Clifford M. Stein. A High Accuracy Volume Renderer for Unstructured Data. IEEE Transactions on Visualization and Computer Graphics, 4(1), pp. 37–54, 1998.

[Wilson 1994] Orion Wilson, Allen Van Gelder, and Jane Wilhelms. Direct Volume Rendering via 3D Textures. Technical Report UCSC-CRL-94-19, Jack Baskin School of Engineering, University of California at Santa Cruz, 1994.

[Winkenbach 1994] Georges Winkenbach and David H. Salesin. Computer-Generated Pen-and-Ink Illustration. Proceedings of SIGGRAPH, pp. 91–100, 1994.

[Winkenbach 1996] Georges Winkenbach and David H. Salesin. Rendering Parametric Surfaces in Pen and Ink. Proceedings of SIGGRAPH, pp. 469–476, 1996.

[Yellot 1983] John I. Yellott, Jr. Spectral Consequences of Photoreceptor Sampling in the Rhesus Retina. Science, Volume 221, pp. 382–385, 1983.

[Zhu 1996] Song Chun Zhu and Alan Yuille. Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(9), pp. 884–900, 1996.

[Zwicker 2001] Matthias Zwicker, Hanspeter Pfister, Jeroen van Baar, and Markus Gross. EWA Volume Splatting. Proceedings of IEEE Visualization, pp. 29–36, 2001.