
Computer Graphics Forum, Volume 18 (1999), Number 1, pp. 27–39
© The Eurographics Association and Blackwell Publishers Ltd 1999. Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.

Nonphotorealistic Rendering by Q-mapping

P. Hall

Department of Computer Science, University of Wales Cardiff, Cardiff, Wales.
[email protected]

Abstract

We present Q-mapping, a technique for rendering three-dimensional objects using nonphotorealistic cues, by applying Q-maps. Q-maps are three-dimensional textures that make marks on objects, and thus provide visual cues for shape, shade, and texture. Q-maps adapt to light intensity, typically by making more marks in darker areas. Q-maps can produce images with a very wide range of visual styles (e.g. half-tone shading, and pen-and-ink colour wash). The primary contribution is that these styles reside in a single parametric space. Importantly, this space includes photorealism as a style, which is therefore regarded as a special case of nonphotorealistic image rendering in general. We illustrate our explanation of Q-mapping using examples from scientific visualisation and computer graphics, and provide a gallery of images to show the versatility of the approach.

1. Introduction

The contribution of this paper is a method for rendering three-dimensional (3D) objects to produce nonphotorealistic images, or photorealistic images, using a single system. We call the method Q-mapping. It makes marks on objects by applying Q-maps. A Q-map is a 3D texture which adapts to the intensity of light; for example, more marks are made where an object is darker (a higher density of marks can be achieved by compressing the texture). The object is then projected to make a two-dimensional (2D) image. Marks from the Q-map make the object in the image look 3D by providing shape, shadow, and texture cues – hence Q-mapping (or cue-mapping).

The marks from a Q-map give the image a “visual style”. For example, some Q-maps produce a cross-hatch shading effect, which is typical of pen-and-ink drawings. Q-maps are parametrically defined textures, and the precise visual style made by a Q-map depends on the values of the parameters that define it. The way in which a Q-map adapts to light is also under parametric control, and is part of the visual style; an example is the rate at which the texture is compressed as a function of light intensity. Thus each Q-map can be regarded as a point in a parametric space, and the terms “point in parametric space”, “Q-map”, and “visual style” are synonyms in this paper.

The parametric space includes many styles. Some resemble traditional media such as pen-and-ink, but importantly the parametric space also includes photorealism. We should make clear what this means. Q-mapping does not perform any lighting calculation but instead relies on some other system, such as a ray-tracer, to do that. Q-mapping uses the results of such calculations to adapt the Q-map textures and make marks, or cues, on objects. Given a special set of parameters, the cues made are exactly those computed by the ray-tracer, say.

Placing photorealism and nonphotorealism into a single system makes unique special effects possible. For example, a single image can be rendered with some parts of it using photorealistic cues for shape, shade, and texture, and other parts using nonphotorealistic cues. Such an image can change in time so that, for example, photorealistic cues “grow into” nonphotorealistic cues, as in Figure 1. To the best of our knowledge the use of a single parametric space for all styles is a unique and novel contribution. The inclusion of photorealism as a special case of more general image synthesis is also unique and novel.


Q-mapping has useful properties: it rests neatly within the rendering pipeline, so no specialist rendering methods are needed; it is independent of particular objects and their representation; it guarantees frame-to-frame coherence in animation; and Q-maps may be functions of time.

There is a broad range of motivations for nonphotorealistic rendering (NPR). One of the most important is that illustrations are often more informative than photographs because they summarise significant elements in a scene; otherwise said, illustrations are salience maps [1, 2]. Our original motivation was to produce 3D scientific visualisations that separated colour and shape cues on a surface [3]. Briefly, colour was used not only to indicate a distribution of some scalar over the 3D surface, but also to provide photorealistic shape cues for that surface. Hence a conflict arose: the surface colour must change to cue shape, but must not change to cue distribution. We argue that using a cross-hatch style of shading resolves this conflict by separating the shape cue and colour cue: shading is fully black in well-defined areas and the correct surface colour appears elsewhere; this can be seen in Figure 2.

We developed our early work into the form presented in this paper simply because we find nonphotorealistic images more aesthetically appealing than photographs; this form is better suited to artistic environments. We had in mind images such as pen-and-ink illustrations, paintings by Roy Lichtenstein, cartoons for children, and advertising photographs that are “retouched” to make them more visually acceptable by highlighting selected features in the image. But rather than be a pale Lichtenstein, say, we wanted to explore computers as a medium for NPR. In that sense we had no definitive objective except images that pleased us and a professional artist whom we consulted occasionally.

With these motivations in mind we set primary objectives:

• To automatically render 3D models into 2D pictures. Consequently, NPR work that is 2D based, and which might be part of an advanced paintbox, is not directly relevant [4, 5, 6].

• To convey shape, shade, and texture using nonphotorealistic cues [7, 8, 9, 10]. These cues include cross-hatching and half-tone.

• To provide a range of styles under a single framework, rather than use a specific method for a specific style. In particular, we wanted all Q-mapping styles to be represented in one parametric space that includes photorealism as a special case.

• Not to emulate physical media [7, 11, 12]; rather, we wished to explore the computer as a medium for NPR in its own right, as do the authors of [2].

We set pragmatic secondary objectives too:

• To engineer the system to fit into the standard graphics pipeline, as does [8], enabling easy integration into existing ray-tracers or scan-converters. To facilitate portability it should be modular.

• To use colour effectively by employing colour compositing operations, thus allowing the application of more than one Q-map.

• To provide frame-to-frame coherence for 3D animation [12].

• To provide a method independent of any particular object, and of the way objects are represented (as B-rep, CSG, or voxels, say). This is unique so far as we know.

All objectives have been met.

We did not set out to emulate physical media, nor to model any other physical thing. Nonetheless, images rendered with Q-maps can resemble traditional media, and even natural textures such as wood [13, 14]. However, we emphasise resemble and disclaim anything stronger. In general, this is neither a disadvantage nor an advantage, merely a feature: the relative merits of photorealism and nonphotorealism depend on the application, and are necessarily subjective in nature. For example, Figure 3 is a cartoon for children in which all textures were made by Q-mapping. In this case, pictorial elements, such as the stylised sea waves, may be preferred over a physical model which is photorealistically rendered (e.g. [15]). Clearly, metrics such as how similar, or convincing, the image is in relation to a photograph are wholly inappropriate in such cases.

As an aside, we argue that because NPR specifies what the field is not, it is too broad to be meaningful; we therefore favour a more positive term, such as “artificial drawing”, to classify techniques like Q-mapping.

Section 2 explains Q-mapping in the context of related literature, Section 3 details Q-mapping and Q-maps, and Section 4 provides illustrative examples in a gallery of images. Section 5 concludes the paper.

2. Background

Compared to photorealism, NPR has been sparsely investigated. Nonetheless, it is an area of growing importance and the literature is quite voluminous. We will, therefore, restrict our attention to work which is similar to Q-mapping. In particular we will consider NPR approaches that render 3D objects as 2D images.

Some approaches “draw” on the 2D projection of the 3D scene [1, 7, 11]; that is, they draw on the image plane. This approach is not like rendering from photographs, as some painting systems do [1, 16, 17, 18, 19, 20], because the image is pre-segmented into parts (the general segmentation problem remains unsolved by the vision community). Segmentation is a real advantage because marks can be allowed to extend beyond the borders of an object, and an object that is partially obscured (a dog cut in two by a lamp-post, say) can be rendered with a consistent treatment. Furthermore, the image can contain additional information, such as depth [1, 2], which is very hard to retrieve from photographs. These systems can produce impressive images but require specialist rendering algorithms and consequently do not fit into the standard graphics pipeline; indeed, some provide very sophisticated user-interaction mechanisms to allow artists to adjust the final image [7, 11]. Finally, because the marks are 2D it is difficult to guarantee frame-to-frame coherence in an animation. Nonetheless, recent work has processed video sequences into nonphotorealistic images [19], but it is not directly relevant here because it does not render 3D models.

Figure 4: A schematic but standard rendering pipeline, showing the inclusion of Q-mapping.

An alternative approach is to draw on the objects before they are projected [3, 8, 9, 10, 12, 21]. The principal advantage is that spatial coherence (e.g. across occluding objects) and temporal coherence (between video frames) are both readily provided. Q-mapping uses this approach and is related to Leister [8] and our early research [3]. Both authors use a 3D texture map constructed from laminae of finite width, which we call slabs. Slabs are arranged into three sets; typically each set is parallel to a major plane in Cartesian space, such as the xy plane, to make a grid-like texture. Typically, a point of an object is painted black if it is inside any slab. This produces a cross-hatch style in the final image, which Leister [8] likens to copper-plate engravings. Others also mark the surface of objects with lines [9, 10], or with particles [12], but do not use 3D textures. Two refinements are common:

1. Leister [8] changes the kind of mark made by changing the condition that defines a mark. For example, if a mark requires that a point must be in three slabs simultaneously, then an image of half-tone dots is produced. Other combinations of slabs are possible, and each produces a unique style.

2. Some researchers [3, 9, 10] vary the density of marks in inverse proportion to the intensity of light. This provides a shade cue by drawing more marks in darker regions. We call this adaptation.

Q-maps include both refinements, and two additional refinements: (1) because the basic grid-like texture is too rigid and austere for artistic use, it is “wobbled” and “broken” as it is applied; (2) Q-maps use colour and colour compositing operations in such a way as to allow the application of more than one Q-map. Both of these are novel to Q-maps and add considerably to the variety of pictures that can be produced. Furthermore, Q-maps adapt all their features to light intensity, including: slab density; the mapping from object coordinates (in which the object is defined) to Q-map coordinates (in which the texture slabs are defined); the colours they use; and the way those colours are used.

3. Q-mapping

Q-mapping occupies a unique position in the rendering pipeline. Standard texture maps are applied before lighting calculations; Q-maps are applied afterwards. This is so that Q-maps can adapt to the light leaving a point in the direction of the eye, as seen in Figure 4. This light can be computed by any method and may include both reflective and refractive contributions; nonetheless, it is convenient to refer to it simply as “reflected light”. We define the intensity of reflected light, I, to be the maximum of its red, green, and blue components. Therefore, a rendering system that includes Q-mapping has three parts, which we express here as a high-level algorithm:

parts, which we express here as a high-level algorithm:

1. Compute the intensity of reflected light using any

standard method, such as ray-tracing.

2. Apply Q-maps to re-colour the point.

For each Q-map to be applied:

a. Adapt the Q-map using light intensity.

b. Transform the point from object into Q-map co-

ordinates.

c. Decide if the point is in or out of the Q-map

texture (e.g. in a slab, or not).

d. Decide a colour for the point, and composite onto

current colour.

3. Project the re-coloured point onto the image plane, using standard methods.

Figure 1: Q-maps used to animate style: here photorealistic cues grow over Venus and into the drawn lines. This effect is unique to Q-mapping, and is made possible because all Q-maps reside in a single parametric space.

Figure 2: A scientific visualisation in which surface colour and shape cue have been separated.

Figure 3: A 3D image rendered in a style reminiscent of childhood cartoons.

Figure 5: Q-mapping applied to a voxel volume in a visualisation application, showing three sets of orthogonal slabs.

We are interested only in the second of these three steps, because that step is the Q-mapping process. Notice that more than one Q-map can be applied (see also Figure 4). This lends great richness to the styles that can be produced using Q-maps; for example, textures can overlay each other, rather as prioritised textures [7] do.

Q-mapping has four sub-parts, as can be seen in the algorithm above (2a to 2d). To explain how these sub-parts work it is necessary first to describe Q-map textures in a little more detail.
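For concreteness, the following C++ sketch shows the shape of the three-part algorithm as it might sit in a host renderer. All identifiers (Colour, Vec3, QMap, shadePoint, applyQMap) are illustrative assumptions rather than the paper's code; shadePoint stands for the host renderer and applyQMap for steps 2a to 2d, which are detailed in the sections below.

#include <algorithm>
#include <vector>

struct Colour { double r, g, b, a; };
struct Vec3   { double x, y, z; };
struct QMap   { /* parameters of equations (1)-(3); see Section 3.1 */ };

// Intensity of reflected light: the maximum of its red, green, and blue
// components, as defined in the text.
double intensity(const Colour& L) { return std::max({L.r, L.g, L.b}); }

// Step 1: any standard lighting calculation; this stub stands in for the
// host ray-tracer or scan-converter.
Colour shadePoint(const Vec3& p) { (void)p; return {1, 1, 1, 1}; }

// Steps 2a-2d for one Q-map: adapt, transform, test, colour. A stub here;
// the pieces are sketched in Sections 3.2-3.4.
Colour applyQMap(const QMap& q, const Vec3& p, const Vec3& n,
                 Colour current, double I)
{ (void)q; (void)p; (void)n; (void)I; return current; }

// The three-part algorithm for one visible point.
Colour renderPoint(const std::vector<QMap>& qmaps, const Vec3& p, const Vec3& n)
{
    Colour L = shadePoint(p);            // 1. compute reflected light
    double I = intensity(L);             //    its intensity drives adaptation
    Colour c = L;
    for (const QMap& q : qmaps)          // 2. apply zero or more Q-maps
        c = applyQMap(q, p, n, c, I);
    return c;                            // 3. project c to the image plane
}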

3.1. Q-maps

Q-maps are 3D textures comprising slabs arranged parallel to a major plane. Slabs can be seen in Figure 5, which shows a Q-map used to pick out planes from a voxel volume in a scientific visualisation (the slabs are three voxels thick and the middle voxels are coloured black to promote a nonphotorealistic shape cue; see [3] for further details). Slabs define a binary partition of 3D coordinate space: any point is either in or out of the Q-map texture; this is its in/out status.

Q-map parameters control such things as the distance between slabs, the width of slabs, the way they combine, their colour, and the way in which these quantities adapt to light intensity. Q-map parameters can be divided into three broad groups:

1. Parameters that control the transform from object coordinates to Q-map coordinates (that is, from one 3D space to another). A user specifies this transform as

M = {M, R, N}   (1)

in which M is a (4 × 4) affine transform (16 parameters) from object to Q-map coordinates which orients and scales the Q-map texture; R is a random transform (comprising 10 parameters) which “wobbles” the basic texture so that straight lines resemble scribbles; and N is a multi-valued transform (comprising 3 parameters) that carries a single point into zero or more points, thus fracturing the now wobbly texture. In total, the user supplies 29 parameters to specify the transform and its adaptation.

2. Parameters that control the type of marks made by the Q-map. As mentioned above, the binary texture in Q-map coordinates comprises orthogonal sets of slabs. The user specifies this as

G = {g0, gb, gr, w0, wb, wr, H}   (2)

in which g0 gives the gap between nearest parallel slabs; w0 is the width of a slab; gb and gr, and wb and wr, control the rate at which the gap and width adapt to light intensity. H is a look-up table; each entry specifies how slabs are combined to produce different marks, and a particular mark is indexed by light intensity. Therefore, the user supplies 6 + |H| parameters to specify the texture.

3. Parameters that control the choice of colour and colour composition. The user specifies

K = {Cin, Oin,1, Oin,2, fin,1, fin,2, Cout, Oout,1, Oout,2, fout,1, fout,2}   (3)

in which Cin and Cout are colours (each with red, green, blue, and opacity components); the O are compositing operators (specified as an index number into a jump-table); and the f are flags that permit the reflected light and diffuse colour (strictly, diffuse colour coefficients) to be used in composition. The user supplies 16 parameters to specify colour and its composition.

The number of user-supplied parameters (29 + 6 + |H| + 16 = 51 + |H|) seems to imply that Q-maps are hard to control. However, because Q-maps are defined in a single parametric space they can be saved in a library of Q-maps, each defining a unique style – we found “tweaking” these styles quite easy. Additionally, many of the parameters are correlated, the values in the affine map being an obvious example. Consequently, the task of using Q-maps is not as difficult as it may seem.
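As an illustration of how the parameters group, here is a speculative C++ arrangement mirroring equations (1)–(3). The field names and layout are our assumptions, not the paper's, but the counts match the text: 16 + 10 + 3 = 29 transform parameters, 6 + |H| grid parameters, and 16 colour parameters.

#include <array>
#include <cstdint>
#include <vector>

struct Transform {            // equation (1): M = {M, R, N}
    std::array<double, 16> M; // 4x4 affine map, object -> Q-map coordinates
    std::array<double, 10> R; // random "squiggle" transform (Section 3.2)
    std::array<double, 3>  N; // multi-valued "breaking" transform
};

struct Grid {                 // equation (2): G = {g0, gb, gr, w0, wb, wr, H}
    double g0, gb, gr;        // natural gap, adaptation base and rate
    double w0, wb, wr;        // natural width, adaptation base and rate
    std::vector<std::uint8_t> H; // mark-expressions (Section 3.3),
                                 // indexed by quantised light intensity
};

struct ColourSpec {           // equation (3): colours, operators, flags
    double Cin[4], Cout[4];   // RGBA "in" and "out" colours
    int  Oin1, Oin2, Oout1, Oout2; // indices into an operator jump-table
    bool fin1, fin2, fout1, fout2; // switch in reflected light / diffuse colour
};

struct QMap { Transform t; Grid g; ColourSpec k; };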

We continue our explanation of Q-mapping by showing how to map a point into Q-map coordinates, how to adapt the grid and decide whether a point is in a Q-map, and how to combine many Q-maps by compositing colours.

3.2. Transforming a Point into Q-map Coordinates

The transform parameters in equation (1) are specified as M = {M, R, N}. We now explain each of these terms in execution order. Because the transform is defined from object to Q-map coordinates, the texture is fixed to the object, thus guaranteeing frame-to-frame coherence. This can be seen in the “stubble” on the face in Figure 3 – the hair stays in place as the head moves.

The first step is an affine transform, M (see Figure 6, top left). It carries a point x into a point y. If a unit normal to the object, n, is available, then that too is transformed to yield a new normal, ny, thus:

y = M x
ny = Mn n   (4)

where Mn is an affine transform derived from M; Mn is like M, but with translation and scale effects removed – see Watt and Watt [22], pages 6–7, for details.

Figure 6: The transform from object to Q-map coordinates comprises an affine map (top left), a random squiggle (top middle), and a multi-valued transform (top right). Their combined effect is to create a surface texture which appears squiggly and broken, but the logical texture remains Cartesian.

Next, the point (and its normal) are transformed using R (see Figure 6, top middle). This random transform is used to generate a squiggle; it adds considerably to the artistic feel of a drawing, and can resemble natural textures. It translates the point y = (y1, y2, y3) by dy = (dy1, dy2, dy3), whose components are computed via a “randomised” cosine wave (as in Figure 6):

dyi = a A(y) cos(2π (yj² + yk²)^(1/2) / (λ Λ(y)))   (5)

where a is an amplitude, λ a wavelength, and A(y) and Λ(y) are random numbers that depend on the position y; i, j, k ∈ {1, 2, 3} and i ≠ j ≠ k.

The difficulty is choosing functions for the random variables so that the squiggle retains at least zeroth-order continuity at all locations, and higher-order continuity at most locations. Our solution is to use two noise-boxes [14]. A noise-box comprises a set of voxels, and defines a continuous random function by specifying a random scalar at every vertex of every voxel. A random number is indexed from a noise-box in three steps: (1) transform the point y = (y1, y2, y3) into the noise-box, y′i = N (yi/D − ⌊yi/D⌋); (2) locate the voxel that contains the transformed point y′ = (y′1, y′2, y′3); and (3) use tri-linear interpolation within the voxel to compute a random value.

We specify a noise-box as the set {[a, b], N, D}, in which a and b give lower and upper bounds on the range of random numbers, N is the number of voxels in the noise-box, and D is the physical size of the noise-box. Thus the random transform is specified by

R = {a, λ, A, Λ}   (6)

in which A and Λ are both noise-boxes. So far as we know, squiggles are unique.
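The following C++ sketch implements a noise-box and the squiggle of equation (5) under our reading of the three indexing steps; the lattice layout and random-number source are assumptions.

#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

struct NoiseBox {
    double lo, hi;              // range [a, b] of random values
    int    N;                   // voxels per side
    double D;                   // physical size of the box
    std::vector<double> v;      // (N+1)^3 lattice of random scalars

    NoiseBox(double a, double b, int N_, double D_)
        : lo(a), hi(b), N(N_), D(D_), v((N_+1)*(N_+1)*(N_+1)) {
        for (double& s : v)     // random scalar at every lattice vertex
            s = lo + (hi - lo) * (std::rand() / (double)RAND_MAX);
    }
    double at(int i, int j, int k) const { return v[(i*(N+1) + j)*(N+1) + k]; }

    double sample(const double y[3]) const {
        double f[3]; int c[3];
        for (int i = 0; i < 3; ++i) {            // step 1: wrap into the box
            double u = N * (y[i]/D - std::floor(y[i]/D));
            c[i] = std::min((int)u, N - 1);      // step 2: containing voxel
            f[i] = u - c[i];
        }
        double s = 0;                            // step 3: tri-linear blend
        for (int dx = 0; dx < 2; ++dx)
            for (int dy = 0; dy < 2; ++dy)
                for (int dz = 0; dz < 2; ++dz)
                    s += at(c[0]+dx, c[1]+dy, c[2]+dz)
                       * (dx ? f[0] : 1-f[0]) * (dy ? f[1] : 1-f[1])
                       * (dz ? f[2] : 1-f[2]);
        return s;
    }
};

// Equation (5): displacement of component i, driven by the other two
// components and the two noise-boxes A and L (for Lambda).
double squiggle(double a, double lambda, const NoiseBox& A, const NoiseBox& L,
                const double y[3], int i) {
    const double kTwoPi = 6.283185307179586;
    int j = (i + 1) % 3, k = (i + 2) % 3;
    double radius = std::sqrt(y[j]*y[j] + y[k]*y[k]);
    return a * A.sample(y) * std::cos(kTwoPi * radius / (lambda * L.sample(y)));
}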

The multi-valued transform breaks a single texture into parts, to create regions of overlapping texture and regions blank of texture (see Figure 6, top right). It is specified as

N = {θ, DN, NN}   (7)

which defines a cube of edge DN having NN samples along each side. This is used to construct a set of randomly-oriented boxes as follows: (1) the cube is partitioned into NN³ boxes, each of edge DN/NN; (2) each box is subject to a unique random rotational transform, m, the (random) angle of rotation about any axis (through the box's centre) being limited to the range [−θ, θ]. This process yields a set of boxes, each described by (b, m⁻¹), where b defines the rotated box (for example, b is the set of rotated vertices) and m⁻¹ is the inverse transform. Transforming a point y with the multi-valued transform comprises two steps: (1) the point y is wrapped into the cube to give y′, exactly as a point is transformed into a noise-box; (2) if y′ is inside any of the boxes, b, then it is transformed by the corresponding m⁻¹ to give a mapped point; if a surface normal is available, then it too is transformed, as for the affine transform. Because y′ may be in zero, one, or more boxes, the result of the multi-valued transform is a set of points (and normals). So far as we know, the multi-valued transform is unique.
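A C++ sketch of one possible realisation follows. For brevity it rotates each box about a single axis, whereas the text permits any axis through the box centre; all names, and the brute-force search over boxes, are ours.

#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { double x, y, z; };

struct MultiValued {
    double theta, D; int N;                 // N = {theta, D_N, N_N}
    std::vector<double> angle;              // one random rotation per box

    MultiValued(double theta_, double D_, int N_)
        : theta(theta_), D(D_), N(N_), angle(N_*N_*N_) {
        for (double& a : angle)             // random angle in [-theta, theta]
            a = theta * (2.0*std::rand()/RAND_MAX - 1.0);
    }

    // Carry one point into zero or more points.
    std::vector<Vec3> apply(const Vec3& p) const {
        // step 1: wrap the point into the cube, as for a noise-box
        auto wrap = [&](double u){ return D * (u/D - std::floor(u/D)); };
        Vec3 q { wrap(p.x), wrap(p.y), wrap(p.z) };

        std::vector<Vec3> out;
        double cell = D / N;
        for (int i = 0; i < N*N*N; ++i) {   // step 2: test every rotated box
            Vec3 c { cell*(i/(N*N) + 0.5),  // centre of box i's original cell
                     cell*((i/N)%N + 0.5),
                     cell*(i%N + 0.5) };
            double a = -angle[i];           // inverse rotation m^-1 (about z)
            double dx = q.x - c.x, dy = q.y - c.y;
            Vec3 r { c.x + dx*std::cos(a) - dy*std::sin(a),
                     c.y + dx*std::sin(a) + dy*std::cos(a),
                     q.z };
            // q lies in the rotated box exactly when m^-1 q lies in the
            // axis-aligned cell; the mapped point is m^-1 q itself
            if (std::fabs(r.x - c.x) <= cell/2 && std::fabs(r.y - c.y) <= cell/2 &&
                std::fabs(r.z - c.z) <= cell/2)
                out.push_back(r);
        }
        return out;                         // possibly empty: a blank region
    }
};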

Thus, the combined effect of all three transforms is to carry a single point in object coordinates into several points, each in Q-map texture coordinates, as in Figure 6, bottom.

Visual effects are illustrated in Figure 7, which shows the effect of an extreme squiggle on the Venus figure; a lesser squiggle produced the wood texture on the table, and blemishes and marks in the wood are made using the multi-valued transform. There is also a multi-valued transform on the wall behind the bottle.

The above transforms can be functions of light intensity; for example, the wavelength of a squiggle could be a function of intensity (smaller wavelengths in darker regions, say). Such an effect is simple to implement. In addition, the transform can be a function of time; this was used to make the sea move and the hair straighten in Figure 3, in which case the affine and random transforms were both functions of time.

Having transformed a point into Q-map coordinates, the next step is to decide if it is in or out of the binary texture, as explained next.

3.3. Deciding if a Point is in the Binary Texture

Within Q-map coordinates the unadapted binary texture has a grid-like structure based on planes through Cartesian space. It is specified in equation (2) as G = {g0, gb, gr, w0, wb, wr, H}, and as explained it comprises three sets of parallel slabs and a look-up table H which controls the kind of mark made.

A slab-set is a set of parallel slabs, as seen at the left of Figure 8. Here we use three slab-sets, with slabs parallel to the major planes (xy), (yz), or (zx). A useful generalisation is to specify each slab-set individually as {g0, gb, gr, w0, wb, wr, q, m}, where m is a unit normal to each of the slabs and q is a 3D point that fixes their spatial location.

The three slab-sets are specified by {g0, gb, gr, w0, wb, wr}. In an unadapted texture each slab has width w0, and is separated from its nearest neighbour by a gap g0. The remaining terms are used to adapt the slabs by computing a new width, w, and a new gap, g, using the intensity of light, I:

g = g0 gb^⌊gr I⌋   (8)

w = w0 wb^(−⌊wr I⌋)   (9)

A single explanation suffices for both gap and width adaptations, except that width decreases with intensity – hence the negative exponent – so we consider only the gap adaptation. The intensity is partitioned into gr distinct bands. In the darkest region, where ⌊gr I⌋ = 0, the gap is just the “natural” gap, g0. The gap changes from this value at an exponential rate, with base gb. If gb is an integer, then a grid line in a bright region will continue without bend or break into a dark region; in scientific visualisation such continuity is important, but less so in computer graphics. If gb = 2, then the gap doubles between consecutive intensity bands – this creates a reasonable approximation to the logarithmic characteristic of visual perception. The adaptation is shown as the middle diagram in Figure 8.
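Equations (8) and (9) transcribe directly into code; a minimal C++ sketch, assuming I has been normalised to [0, 1]:

#include <cmath>

// Equation (8): the gap grows exponentially, base gb, through intensity bands.
double adaptGap(double g0, double gb, double gr, double I)
{
    return g0 * std::pow(gb, std::floor(gr * I));
}

// Equation (9): the width shrinks with intensity, hence the negated exponent.
double adaptWidth(double w0, double wb, double wr, double I)
{
    return w0 * std::pow(wb, -std::floor(wr * I));
}
// With gb = 2 the gap doubles between consecutive intensity bands,
// approximating the logarithmic characteristic of visual perception.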

There is an additional subtlety we apply to slab widths, provided a surface normal n is available. The angle at which the surface and slab intersect alters the width of the slab on the surface; for example, slabs intersecting a sphere will project to a wider area at the edges of the sphere than at the middle. Consequently the real slab width, w, is adjusted such that its projection onto a surface is invariant, as shown at the right of Figure 8:

w ← w |n × m|   (10)

The width of each slab-set is adjusted independently.

Once a slab-set, s, is adapted (and its width adjusted), a decision can be made whether the point is in or out of the slab-set. This is defined by

Ins(x) = 1 if w ≤ e ≤ g − w, and 0 otherwise   (11)

where

e = d − g ⌊d/g⌋   (12)

d = (x − q) · m   (13)

Clearly, the in/out status of a point with respect to an individual slab-set is a Boolean-valued variable.
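A C++ sketch of equations (10)–(13) for one generalised slab-set {g, w, q, m} follows; the vector helpers are ours.

#include <cmath>

struct Vec3 { double x, y, z; };
static double dot(const Vec3& a, const Vec3& b){ return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b){
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static double norm(const Vec3& a){ return std::sqrt(dot(a, a)); }

// Equation (10): adjust w so its projection onto the surface is invariant;
// n is the unit surface normal, m the unit slab normal.
double adjustWidth(double w, const Vec3& n, const Vec3& m)
{
    return w * norm(cross(n, m));
}

// Equations (11)-(13): in/out status of x with respect to one slab-set.
bool inSlabSet(const Vec3& x, const Vec3& q, const Vec3& m, double g, double w)
{
    double d = dot(Vec3{x.x - q.x, x.y - q.y, x.z - q.z}, m);   // (13)
    double e = d - g * std::floor(d / g);                       // (12)
    return w <= e && e <= g - w;                                // (11)
}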

This is not sufficient to decide the in/out status with respect to the Q-map, because slab-sets are combined in a variety of ways using entries of the look-up table, H. The entries of H are premised on the slab-sets partitioning 3D space into points that are in some slab and points that are out of every slab, as we now explain.

Suppose we have computed Inxy(x), Inyz(x), and Inzx(x); that is, we have the in/out status of a point with respect to each of the individual slab-sets. Boolean expressions are used to combine them. For example, Inxy ∧ Inyz ∧ Inzx makes half-tone dots, but Inxy ∨ Inyz ∨ Inzx yields a cross-hatch mark; Figure 9 has more examples. A convenient way to represent every possible Boolean expression of three variables is as the result column in a truth-table of three Boolean variables; we call such a column a mark-expression because it affects the kind of mark. There are 256 = 2^(2³) distinct mark-expressions (using N slab-sets gives 2^(2^N) distinct mark-expressions, computed as the number of relations on {0, 1}^N). Suppose b1 b2 b3 b4 b5 b6 b7 b8 is a mark-expression, where each br is a Boolean value. We combine the in/out statuses with respect to the slab-sets by first computing the subscript

r(x) = Inxy(x) + 2 Inyz(x) + 4 Inzx(x)   (14)

and then classifying using the value br(x). For example, the mark-expression 00000001 generates half-tone dots, and 01111111 makes a cross-hatching.

Figure 8: Slab-sets: their definition (left), adaptation (middle), and adjustment (right).

Figure 9: Mark-expressions affect the visual appearance of the marks made; they can be selected on the basis of light intensity. Eight of the 16 marks possible with two slab-sets are shown; the remainder are the inverses of these.
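Since a mark-expression for three slab-sets is just eight Boolean values, it packs naturally into a byte. The following C++ sketch assumes b(r+1) is stored in bit r, which is our convention rather than the paper's.

#include <cstdint>

// Equation (14): index into the truth table from the three in/out statuses.
int subscript(bool inXY, bool inYZ, bool inZX)
{
    return (inXY ? 1 : 0) + (inYZ ? 2 : 0) + (inZX ? 4 : 0);
}

// In/out status of the point with respect to the grid G: read bit r(x) of
// the mark-expression selected from H by the quantised light intensity.
bool gridStatus(std::uint8_t markExpr, bool inXY, bool inYZ, bool inZX)
{
    return (markExpr >> subscript(inXY, inYZ, inZX)) & 1;
}

// The examples from the text, under this bit convention:
//   0x80 ("00000001"): in only when in all three slab-sets - half-tone dots
//   0xFE ("01111111"): in when in at least one slab-set    - cross-hatching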

This is still not sufficient to decide the in/out status of the point with respect to the Q-map, because the transform from object to Q-map coordinates is multi-valued. Consequently, we define the in/out status of a point with respect to a Q-map as in if any of its mapped points are in with respect to the grid, and out otherwise. This completes the definition of the in/out status of a point with respect to a Q-map texture.

Examples of the effects that can be produced by the adaptive binary texture are illustrated in Figure 10. The ball shows a cross-hatch pattern, adapted by changing the grid width only. Notice that, because of the definition of light intensity, the adaptation is continuous over different underlying colours (there are standard textures on the ball and wall). The Venus figure shows the effect of changing only the width of the slabs; quite subtle shading effects can be produced. The snowman shows the effect of changing the mark-expression as a function of light intensity: throughout the darkest region every point is in; over the next darkest region points which are out are coloured black, while points which are in are coloured white; over the next brightest region the roles of black and white are reversed; finally, the very brightest region contains points that are always out. The hat is darker than the body, so it supports only three regions.

The in/out status of a point with respect to a Q-map gives shape to the texture but does not provide colour. The way this status is used with colour is explained next.

3.4. Colouring the Point and Combining Q-maps

The colour of the point is decided using the colour parameters which, recalling equation (3), are K = {Cin, Oin,1, Oin,2, fin,1, fin,2, Cout, Oout,1, Oout,2, fout,1, fout,2}; the subscripts denote which set of variables apply to the point, depending on whether the point is in or out of the texture.

Figure 7: An illustration of the effects of the random transform and multi-valued transform.

Figure 10: An illustration of the effects of different binary textures and the way they adapt to light intensity.

The simplest way to colour a point is to use either Cin or Cout. For example, suppose Cin is black and Cout is white; then, given the adaptation of the binary texture and the way in which light intensity is defined, the result is a black-and-white picture in which the density of black marks indicates both shade and tone (objects which are naturally dark are drawn darker, and note the bottle is transparent). Such an image can be seen in Figure 11. A variation on this is seen in the visualisation of Figure 2, which uses the diffuse colour of the surface if a point is out, and black if it is in. Another variation is the ball in Figure 10, where points that are in the texture are given the colour of the reflected light, and points out of the texture are given the diffuse colour of the surface. The way Q-mapping uses colour allows each of these examples, and other variations too. This flexibility is gained by the use of partially transparent colours, compositing operations, and flags that act as switches. Even greater variety is available by applying more than one Q-map to an object.

Figure 13: An illustration of the effects of colour composition.

We use three colours and two compositing steps to compute a final colour for the point; these are seen as X, Y, Z and O1, O2 in Figure 12. We use Porter and Duff's [23] representation of colours with opacity, and use their colour compositing operators. Thus, all colours are represented by C = α(r, g, b, 1), and C = A over B(CA, CB) = CA + (1 − αA)CB is a typical compositing operator.

The compositing operation is designed as a unit block, and several unit blocks can be arranged in sequence (see Figure 12). Y′ is the final colour projected to the image plane. Thus Q-maps can be composited on top of one another.
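The following C++ sketch gives Porter and Duff's over and plus operators on premultiplied colours, the alone operators, the repl operator introduced later in this section, and one unit block. The wiring Y′ = O2(O1(Z, X), Y), with X passed through, is our reading of Figure 12, not a statement of the author's code.

// Premultiplied colour: C = alpha * (r, g, b, 1).
struct Colour { double r, g, b, a; };

Colour over(Colour A, Colour B)              // A over B (Porter-Duff)
{
    double k = 1.0 - A.a;
    return { A.r + k*B.r, A.g + k*B.g, A.b + k*B.b, A.a + k*B.a };
}
Colour plus(Colour A, Colour B) { return { A.r+B.r, A.g+B.g, A.b+B.b, A.a+B.a }; }
Colour aAlone(Colour A, Colour)  { return A; }   // C = A alone
Colour bAlone(Colour, Colour B)  { return B; }   // C = B alone

// A repl B: replace the opacity of B with that of A (the paper's extra
// operator); with premultiplied colours this scales B by alphaA/alphaB.
Colour repl(Colour A, Colour B)
{
    double s = (B.a != 0.0) ? A.a / B.a : 0.0;   // guard: text assumes B.a > 0
    return { B.r*s, B.g*s, B.b*s, A.a };
}

using Op = Colour (*)(Colour, Colour);           // jump-table entry type

// One unit compositing block: inputs X, Y, Z and two operators; Y' is the
// output colour, and X passes through unchanged to the next block.
Colour compositeBlock(Op O1, Op O2, Colour X, Colour Y, Colour Z)
{
    return O2(O1(Z, X), Y);
}
// E.g. the black-and-white style: O1 = O2 = aAlone, so Y' = Z = Cin or Cout.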

We begin by discussing the application of one Q-map. Let us consider the colours X, Y, Z before the composition operators:

• X is set to be the reflected light, L. Its opacity is set to 1, fully opaque, unless the user specifies differently.

• Y is the diffuse colour, D (strictly, diffuse colour coefficients). Opacity is taken from the colour of the object if it is available; otherwise we set opacity to 1, unless the user specifies differently.

• Z is set to either Cin or Cout, depending on in/out status.

As an example, the black-and-white image previously described (and seen in Figure 11) can be reproduced by setting Cin and Cout to black and white respectively, and using the Aalone operator (C = Aalone(CA, CB) = CA) in place of all operators O∗,∗. Keeping all else constant but setting Oout,1 = Balone, Oout,2 = Aalone restores the diffuse colour to the picture in regions that were previously white, as in Figure 2. Similarly, Oin,1 = Balone, Oin,2 = Aalone restores the reflected light to in points (the ball in Figure 10). As these examples imply, the composition operations used (specified by Oin,1, Oin,2 and Oout,1, Oout,2) have a major impact on the final picture. In practice, operations are accessed via a jump-table, so that specifying an operation requires only an index number. We supplement Porter and Duff's [23] operators with one of our own – a binary operation that replaces the opacity, α, of the second argument with that of the first: A repl B(A, B) = B αA/αB. This allows the user to specify opacities for either the reflected light or the diffuse colour.

Figure 11: A still life drawn with cross-hatch marks, intended to resemble a disciplined pen-and-ink style.

Figure 12: Colour composition for a single Q-map (top), and for several Q-maps in sequence (below); the sequence includes the use of a switch which re-introduces the reflected light into the compositing pipeline. Compositing three Q-maps at one point gives the final colour over(repl(in3, L), plus(out2, over(in1, D))). Note: because of space restrictions, the diagram uses operators such as alone(Z) rather than Balone(L, Cin); ambiguities can be resolved by referring to the single compositing block.

As seen in Figure 12, the result of composition is two colours, X′ and Y′. The reason for outputting two colours is so that Q-maps can be applied in sequence to produce richer effects. In particular, X′ is used as colour X, and Y′ is used as Y, in later Q-map applications. Note that the transform and grid adaptation of any subsequent Q-map still depend on the intensity of the reflected light.

The f∗,∗ parameters are flags that are of most use when more than one Q-map is being applied. Their purpose is to allow the original reflected light, or diffuse colour, to be switched into the compositing pipeline at any stage. For example, if fin,1 = 1, then the original reflected light is input, as seen in Figure 12.

Some effects of colour composition are seen in Figure 13. The bottle is drawn using several Q-maps, some for the shading, others for the highlights; the translucent effect of the highlight is produced by overlaying two “transparent white” Q-maps. A subtle effect is seen in the shadow of the bottle, which is tinged with green – a visual effect difficult to obtain in a standard scan-line renderer such as the one that produced this picture.

The woman in the fur hat is an NPR picture; real fur would never look this way. This is an example of the kind of image that can be produced by “retouching” a photograph for advertising; in this case NPR highlights features in a subtle but effective way. The model, both face and hat, is white and is lit with a red light and a blue light; these can be seen on the face. The hat's colour comes from a Q-map whose Cout colour was set to transparent brown, and whose Cin was black (squiggles and the multi-valued map broke the black up to give a resemblance to fur shadow). Fur highlights were made with another Q-map in which the fin,1 flag was raised so that reflected light from the underlying surface could be used; by analogy, the second Q-map locates small mirrors over the surface of the hat. Of course, methods for producing natural textures already exist and we do not advocate our work as a replacement for them. However, such methods model the texture rather than “draw” it as we do, and the rendering would look photorealistic, which expressly is not the aim here.

Colour operations and colour values can also depend on light intensity. For example, colour values such as Cin may be computed by interpolation between two colours, using light intensity as the interpolating variable.

4. A Gallery of Images

Q-maps can produce many styles; certainly we have not exhausted them. Thus the gallery in Figure 14 is a mixture of many images, intended only to give examples of the possibilities.

Some images show Q-maps used to produce effects which resemble media such as pen-and-ink; see the small black-and-white bottle, for example. Other styles, such as ink wash, are possible; the corresponding small coloured bottle gives an example. We regard the large bottle, top right, as giving a pastel-like impression. However, we reiterate that we never set out to emulate any physical media, and resemblances are sufficient for us (if we wanted to make a pastel drawing, we would use pastel rather than a computer).

Natural texture can be resembled too: the wooden table, the fur hat, and the marble Venus. Notice the subtle special effect on the marbled Venus: the marble comprises white and photorealistic marks, and their foreground/background relation is reversed between dark and light regions of the model.

Shadows can be reversed so they look pale instead of dark (see the small bottle, top right), and we have already mentioned tinging shadows (the falling bottle). Q-maps can also cut away sections of objects, as in Venus.

Q-mapping is designed for use in animation; some examples have already been given. Here attention is drawn to the falling bottle: notice that the highlight and shading marks animate coherently and appropriately.

Q-mapping is designed to be independent of model type. Most images in the gallery show its application to B-rep objects. We have already shown Q-maps applied to voxels (see Figure 5). Q-mapping can also be applied to video images; the image to the right of the falling-bottle animation is one frame from a sequence.

As can be seen, we also rendered edges and differentiated between “profile” and “inner” edges. This is not part of Q-mapping, but edges are effective in drawing. Briefly, each edge borders exactly two faces: if just one of those faces is visible, the edge is a “profile” edge; otherwise it is an “inner” edge. Edges too can be Q-mapped. Inner and outer edges can be treated differently, as in the crate of the still-life images. We also filtered out inner edges if their faces subtended a shallow angle. A typical result is the drawn face wearing a red hat.
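A minimal C++ sketch of this edge classification follows, with a view-dependent visibility test and the shallow-angle filter; the threshold parameter and all names are ours.

#include <cmath>

struct Vec3 { double x, y, z; };
static double dot(const Vec3& a, const Vec3& b){ return a.x*b.x + a.y*b.y + a.z*b.z; }

enum class EdgeKind { Profile, Inner, Hidden };

// n1, n2: unit normals of the two faces bordering the edge; v: unit vector
// toward the eye; cosShallow: threshold for filtering shallow inner edges.
EdgeKind classifyEdge(const Vec3& n1, const Vec3& n2, const Vec3& v,
                      double cosShallow)
{
    bool vis1 = dot(n1, v) > 0.0;               // face 1 faces the eye?
    bool vis2 = dot(n2, v) > 0.0;
    if (vis1 != vis2) return EdgeKind::Profile; // exactly one face visible
    if (!vis1)        return EdgeKind::Hidden;  // neither face visible
    // both visible: inner edge; drop it if the faces subtend a shallow angle
    if (dot(n1, n2) > cosShallow) return EdgeKind::Hidden;
    return EdgeKind::Inner;
}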

Space prevents us from presenting more examples of results, but the reader is referred to our web site http://www.cs.cf.ac.uk/User/Peter.Hall/. In addition, we have implemented Q-mapping as a stand-alone module which can be “plugged” into many rendering systems (including ray-tracers, scan-converters, even video editors). Code is available upon request.

5. Discussion and Conclusion

We have described Q-mapping and Q-maps. We have presented Q-maps as points in a parametric space, and Q-mapping as the process that applies them. We have shown Q-maps applied to objects of a wide variety of kinds, producing a variety of effects.

The particular description of Q-maps given here is just one of several alternatives. For example, in early work we experimented with different kinds of binary texture, such as those based on cylindrical polar coordinates, on spherical polar coordinates, and on projective geometries [24]. However, we found that the Cartesian-based grids described here gave a good balance between generality and simplicity, and hence opted to describe them. One option to consider for future work is to define the binary texture using a standard texture map.

A difficulty with Q-maps, and indeed other NPR systems, arises because marks are used in two distinct ways: (1) to cue shape and shade; (2) to give visual style to the image. This inevitably leads to a conflict. If the marks are rigidly attached to objects (as we describe here), then frame-to-frame coherence is guaranteed in animation; however, the marks also scale with the object, including any scaling due to perspective projection, so the size and density of marks may change in an undesirable way. If we address this problem by ensuring that density and size are invariant when projected onto the image plane, then we lose the guarantee of frame-to-frame coherence – the texture may “swim” as density changes, say. The dilemma is not dissimilar to the use of colour in two distinct ways for a scientific visualisation – which motivated our early NPR work (see the Introduction). Clearly, this problem concerns the dual use of cues, and must therefore affect every NPR system, including hand-drawings. Investigating it is a rich and challenging area for future work.

Figure 14: A compendium of images made by Q-mapping, showing the variety of styles the method is capable of, and its application to both 3D rendering and 2D video processing.

We should mention resource implications in terms of computational cost and memory use. We implemented Q-mapping in about 400 lines of un-optimised C++ code, and integrated it into a scan-line renderer we wrote for teaching purposes. The time taken to produce any single NPR image was about 1.2 to 2 times that needed to produce the corresponding photographic image; on our SUN LX this could take up to two hours. Later we used a more optimised version and an optimised ray-tracer on a Silicon Graphics Indy; this reduced rendering time to a few minutes. Memory requirements are very low because Q-maps are parametrically defined; each requires space for 51 + |H| numbers.

In summary, Q-mapping provides an alternative to current NPR techniques. It uses a single parametric space to encode many styles, and these include photorealism. This space is unique and very useful. In addition, some Q-mapping transforms are unique – the random and multi-valued transforms, for example. Since Q-maps can be composited on top of each other, the range of styles supported is very large. The problems cited above point to possible directions for future work.

Acknowledgments

I would like to thank the referees for their hard work and helpful comments.

References

1. Saito, T. and Takahashi, T. “Comprehensible rendering of 3-D shapes”. In Computer Graphics (Proc. ACM SIGGRAPH), 24(4), pp. 197–206, (1990).

2. Lansdown, J. and Schofield, S. “Expressive rendering: a review of nonphotorealistic techniques”. IEEE Computer Graphics and Applications, 15(3), pp. 29–37, (1995).

3. Hall, P. M. “Nonphotorealistic shape cues for visualisation”. In The Third International Conference in Central Europe on Computer Graphics and Visualisation, pp. 113–122, Plzen, Czech Republic, (1995).

4. Hsu, S. C. and Lee, I. H. H. “Drawing and animation using skeletal strokes”. In Proc. SIGGRAPH, Annual Conference Series, pp. 109–118, Orlando, USA, (1994).

5. Salisbury, M. P., Anderson, S. E., Barzel, R. and Salesin, D. H. “Interactive pen-and-ink illustration”. In Proc. SIGGRAPH, Annual Conference Series, pp. 101–108, Orlando, USA, (1994).

6. Berman, D. F., Bartell, J. T. and Salesin, D. H. “Multiresolution painting and compositing”. In Proc. SIGGRAPH, Annual Conference Series, pp. 85–90, Orlando, USA, (1994).

7. Winkenbach, G. and Salesin, D. H. “Computer-generated pen-and-ink illustration”. In Proc. SIGGRAPH, Annual Conference Series, pp. 91–100, Orlando, USA, (1994).

8. Leister, W. “Computer generated copper plates”. Computer Graphics Forum, 13(1), pp. 69–77, (1994).

9. Elber, G. “Line art rendering via a coverage of isoparametric curves”. IEEE Transactions on Visualization and Computer Graphics, 1(3), pp. 231–239, (1995).

10. Winkenbach, G. and Salesin, D. H. “Rendering parametric surfaces in pen-and-ink”. In Proc. SIGGRAPH, Annual Conference Series, pp. 469–476, New Orleans, USA, (1996).

11. Strothotte, T., Preim, B., Raab, A., Schumann, J. and Forsey, D. R. “How to render frames and influence people”. Computer Graphics Forum, 13(3), pp. C455–C466, (1994).

12. Meier, B. J. “Painterly rendering for animation”. In Proc. SIGGRAPH, Annual Conference Series, pp. 477–484, New Orleans, USA, (1996).

13. Kajiya, J. T. and Kay, T. L. “Rendering fur with three-dimensional textures”. In Computer Graphics (Proc. ACM SIGGRAPH), 23(3), pp. 271–280, (1989).

14. Perlin, K. “Hypertexture”. In Computer Graphics (Proc. ACM SIGGRAPH), 23(3), pp. 253–262, (1989).

15. Fournier, A. and Reeves, W. T. “A simple model of ocean waves”. In Computer Graphics (Proc. ACM SIGGRAPH), 20(4), pp. 75–84, (1986).

16. Haeberli, P. “Paint by numbers: abstract image representations”. In Computer Graphics (Proc. ACM SIGGRAPH), 24(4), pp. 207–214, (1990).

17. Salisbury, M. P., Anderson, C., Lischinski, D. and Salesin, D. H. “Scale-dependent reproduction of pen-and-ink illustrations”. In Proc. SIGGRAPH, Annual Conference Series, pp. 461–468, New Orleans, USA, (1996).

18. Salisbury, M. P., Wong, M. T., Hughes, J. F. and Salesin, D. H. “Orientable textures for image-based pen-and-ink illustration”. In Proc. SIGGRAPH, Annual Conference Series, pp. 401–406, Los Angeles, USA, (1997).

19. Litwinowicz, P. “Processing images and video for an impressionist effect”. In Proc. SIGGRAPH, Annual Conference Series, pp. 407–414, Los Angeles, USA, (1997).

20. Curtis, C. J., Anderson, S. E., Seims, J. E., Fleischer, K. W. and Salesin, D. H. “Computer-generated watercolor”. In Proc. SIGGRAPH, Annual Conference Series, pp. 421–430, Los Angeles, USA, (1997).

21. Markosian, L., Kowalski, M. A., Trychin, S. J., Bourdev, L. D., Goldstein, D. and Hughes, J. F. “Real-time nonphotorealistic rendering”. In Proc. SIGGRAPH, Annual Conference Series, pp. 415–420, Los Angeles, USA, (1997).

22. Watt, A. and Watt, M. Advanced Animation and Rendering Techniques. Addison-Wesley, Wokingham, (1992).

23. Porter, T. and Duff, T. “Compositing digital images”. In Computer Graphics (Proc. ACM SIGGRAPH), 18(3), pp. 253–259, (1984).

24. Hall, P. M. Four New Algorithms for Visualisation. PhD thesis, Department of Computer Science, Sheffield University, (1993).
