
SIGGRAPH ’96

COURSE 10 NOTES

Procedural Modeling and Animation Techniques

Monday August 5, 1996

Organizer
David S. Ebert, University of Maryland Baltimore County

Lecturers
John Hart, Washington State University

F. Kenton Musgrave, The George Washington University

Ken Perlin, New York University

Karl Sims, Genetic Arts

Brian Wyvill, University of Calgary

Supplemental files for this course are located in its subdirectory: SUPPLMNT

Course Abstract

This course imparts a working knowledge of several procedural approaches in modeling, shading, and animation. The procedural approaches presented include solid textures, hypertextures, volume density functions, character animation, fractals, artificial evolution, L-systems, and implicit surfaces. The course provides participants with details that are normally omitted from technical papers, including useful and practical guidelines for selecting parameter values. The course will start with an overview of different procedural techniques and how they interrelate. An in-depth description of the basic primitive functions used will be presented, including noise and turbulence functions and their implementation. After the introduction, the course will follow a progression in the use of procedural techniques from solid texturing to hypertextures. Extensions of hypertextures, including gas, liquid, and fire volume density functions, will be presented. Animations using these techniques will also be presented. Procedural animation techniques for creating real-time responsive animated characters will be explored and demonstrated. Procedural modeling techniques will be explored further with presentations on modeling with implicit surfaces, fBm, IFS, L-systems, and procedural geometric instancing. The course will then describe fractal applications and their relationship to the other procedural techniques. An "automated" procedural approach, artificial evolution, will be presented, in contrast to the manual techniques described earlier in the course. The usefulness of this approach for texturing and object modeling will be explored. The course will conclude with a panel session for discussing tricks of the trade, common pitfalls, and future directions.


Speaker Biographies

David Ebert is an Assistant Professor in the Computer Science and Electrical Engineering Department at the University of Maryland Baltimore County. Previously, he was an instructor in the Department of Computer and Information Science at The Ohio State University. He received his PhD in Computer and Information Science in June, 1991 from The Ohio State University. His current research interests include procedural modeling, rendering, and animation of gases, fluids, and fire; realistic volumetric scientific, medical, and information visualization; and animation control.

Dr. Ebert has chaired courses at SIGGRAPH '92, '93, '94, and '95 on Procedural Modeling and Rendering Techniques, has authored papers on the rendering and animation of procedurally defined gases, has published several journal papers on procedural modeling and animating volumetric gases and realistic visualization techniques, and has produced 2 animations featuring this work which won numerous international awards and appeared in the SIGGRAPH '89 and SIGGRAPH '90 Animation Screening Rooms. His work has also appeared in the SIGGRAPH '89, '90, '91, '92, and '96 Technical Slide sets as well as several textbooks and the cover of IEEE Computer Graphics and Applications magazine. Dr. Ebert has co-authored the book Texturing and Modeling: A Procedural Approach for AP Professional with Ken Musgrave, Darwyn Peachey, Ken Perlin, and Steve Worley, and recently written a chapter on Advanced Geometric Modeling Techniques for the CRC Handbook of Computer Science and Engineering.

John C. Hart is an Assistant Professor in the School of Electrical Engineering and Computer Science at Washington State University. Hart received his B.S. in Computer Science from Aurora University, and his M.S. and Ph.D. in Computer Science in the Electronic Visualization Laboratory at the University of Illinois at Chicago. He also interned in Alan Norton's group at the IBM T.J. Watson Research Center, and at AT&T Pixel Machines (R.I.P.).

Dr. Hart's research, funded by the NSF and Intel, focuses on the relationship between fractal geometry and geometric design, producing both new techniques for implicit surface modeling as well as extending the geometric theory for fractal modeling. "unNatural Phenomena" (one of four animations he has produced based on his fractal modeling algorithms) appeared in the SIGGRAPH '91 Electronic Theater, ABC's "Prime Time Live" and Miramar's "Beyond the Mind's Eye." He also consulted for Kleiser-Walczak on a new fractal transition appearing in a multimedia show at the Luxor Hotel, Las Vegas. Dr. Hart is a member of ACM, SIGGRAPH and the IEEE Computer Society. At the time of this writing, he serves on the SIGGRAPH Executive Committee as a Director-at-Large, and is a candidate for Director of Communications.

Ken Musgrave is an Assistant Professor in the Computer Science Department at George Washington University. Dr. Musgrave received his PhD in computer science from Yale in 1993; he worked with Benoit Mandelbrot in the Yale Department of Mathematics from 1987 to 1993. He received his MS and BA in computer science from UC Santa Cruz in 1987 and 1984, respectively. He studied visual arts at Skidmore College and Colgate University in 1973-5, and the natural sciences at UC Santa Cruz in 1975-7.


While a researcher in the field of computer graphics, Musgrave considers the product of his work to be fine art. Thus he is interested in the implications of proceduralism for the creative process. His pioneering work in fractal imagery has led Mandelbrot to credit him with being "the first fractal-based artist". His images have appeared internationally in exhibits, and in publications both technical and popular.

Ken Perlin is an Associate Professor of Computer Science and the Director of the Media Research Laboratory at the Courant Institute of Mathematical Sciences of New York University. He received his Ph.D. in Computer Science at the Courant Institute and his B.A. in Theoretical Mathematics at Harvard University. In 1991 he was a recipient of a Presidential Young Investigator Award.

Dr. Perlin was Head of Software Development at R/GREENBERG Associates in New York, NY from 1984 through 1987. Prior to that he was the System Architect for computer generated animation at Mathematical Applications Group, Inc., Elmsford, NY, from 1979 to 1984. He has served on the Board of Directors of the New York chapter of ACM/SIGGRAPH. His algorithms for computer graphics have been widely used in commercials and feature films.

Karl Sims studied Life Sciences as an undergraduate at M.I.T. and later studied computer graphics at the M.I.T. Media Laboratory. After developing special effects software for Whitney Demos Productions, and co-founding Hollywood-based Optomystic, he collaborated with Thinking Machines Corporation for several years as an artist in residence and research scientist. He currently works as an independent in Cambridge, Massachusetts and continues to explore new techniques for creating images with computers.

His works of computer animation include "Panspermia," "Liquid Selves," "Primordial Dance," and "Particle Dreams." His interactive installation "Genetic Images" was recently exhibited at the Centre Pompidou in Paris.

Brian Wyvill is a Professor in the Department of Computer Science at the University of Calgary. After gaining his PhD in the UK in 1975, he worked as a postdoc at the Royal College of Art in London on a computer animation system used to make sequences for the film "Alien". Since coming to Calgary in 1981, Brian's research has concentrated on building the Graphicsland animation and visualization system. Recent work is in the areas of implicit surface modeling, animation techniques, and scientific visualization. Brian has directed several animations (two shown at SIGGRAPH) that feature implicit surfaces. Currently he is interested in a very efficient adaptive tiling algorithm, as well as new techniques for warping, blending, and collision detection using implicit surfaces and, most recently, the simulation of lightning and glowing objects.

Brian is a member of ACM, SIGGRAPH, and the editorial boards of The Visual Computer and the Journal of Animation and Scientific Visualization.


Speaker Contact Information

Dr. David S. Ebert, Assistant Professor
Computer Science and Electrical Engineering Department
University of Maryland Baltimore County
5401 Wilkens Ave.
Baltimore, MD 21228-5398
(410) 455-3541 (work), (410) 455-3969 (fax)
[email protected] (Internet)

Dr. John C. Hart, Assistant Professor
School of EECS
Washington State University
Pullman, WA 99164-2752
(509) 335-2343, (509) 335-3818 (fax)
[email protected] (Internet)

Dr. F. Kenton Musgrave, Assistant Professor
Computer Science Department
The George Washington University
20101 Academic Way
Ashburn, VA 22011
(703) 729-8254, (703) 729-8251 (fax)
[email protected] (Internet)

Dr. Ken Perlin, Associate Professor
Department of Computer Science
New York University
New York, NY
(212) [email protected] (Internet)

Karl Sims, Independent
Cambridge, MA
[email protected]

Dr. Brian Wyvill, Professor
Department of Computer Science
University of Calgary
2500 University Dr. NW
Calgary, Alberta, T2N 1N4 Canada
(403) 220-6009, (403) 284-4707 (fax)
[email protected] (Internet)


Course Schedule

1. Course Introduction (0.25 hr) - David Ebert

2. Procedural Texturing, Hypertextures, and Character Animation (1.0 hr) - Ken Perlin

(a) Primitives for Procedural Texturing and Modeling

(b) Hypertextures

(c) Procedural Character Animation

3. Procedural Volume Modeling and Animation of Gases (1.0 hr) - David Ebert

(a) Introduction to Procedural Modeling and Animation

(b) Volumetric Modeling of Gas Geometry

(c) Animation Techniques for Procedural Modeling

4. Building and Animating Implicit Surface Models (1.0 hr) - Brian Wyvill

(a) Parametric and Implicit Surface Models

(b) Implicit Surfaces Based on Skeletons

(c) Interactive Modeling Techniques

(d) Blending (continuity, hard and soft blending)

(e) Rendering, Polygonalization, and Animation Techniques

(f) Constructive Solid Geometry and Implicit Surface Models

5. Procedural Models of Geometric Detail (1.0 hr) - John Hart

(a) An Implicit Representation of Rough Surfaces

(b) Recurrent Iterated Function Systems

(c) Procedural Geometric Instancing


6. Procedural Models of Natural Phenomena (1.0 hr) - Ken Musgrave

(a) Introduction

(b) Procedural Models of Natural Phenomena

(c) Development of a Procedural Planet

(d) What Lies Ahead: Procedural Universes

7. Procedural Modeling with Artificial Evolution (1.0 hr) - Karl Sims

(a) Evolution

(b) Genetic programming

(c) Generating and Searching the Parameter Space

(d) Evolving Plants, Textures, and Creatures

8. Conclusion (0.25 hr) - All Speakers


Table of Contents

1. Introduction, by David Ebert

2. A Hypertexture Tutorial, by Ken Perlin

3. Volumetric Procedural Modeling and Animation, by David Ebert

4. Building and Animating Implicit Surface Models, by Brian Wyvill

5. Procedural Models of Geometric Detail, by John Hart

6. Fractals and Procedural Models of Nature, by Ken Musgrave

7. Procedural Modeling with Artificial Evolution, by Karl Sims


Chapter 1: Introduction

David S. Ebert, The University of Maryland Baltimore County

1.1 Procedural Techniques in Computer Graphics

Procedural texturing, modeling, and animation is an exciting, active, rapidly growing area of research in computer graphics. Procedural techniques have been used in computer graphics since the 1970's. One of the most popular early uses of procedural techniques was for creating textures, and it is still widely used today. Their use exploded with the introduction of three-dimensional texturing (solid texturing) by Geoffrey Gardner, Darwyn Peachey, and Ken Perlin in 1985.

The use of procedural techniques has increased since the mid 1980's and now includes modeling techniques (fractals, hypertextures, iterated function systems, L-systems, implicit surfaces, etc.) and animation techniques. They are used to create realistic images of wood, marble, water, steam, smoke, clouds, fog, entire artificial planets, flexible trains, and tribbles. Procedural techniques for animation allow flexible control of gases, fluids, and even realistic animated actors. We hope that after reading these notes and attending our course, you will continue to expand the use of procedural techniques in computer graphics.

1.2 Defining Procedural Techniques

Although a definition of procedural techniques may be difficult to pinpoint, one common characteristic is the use of code segments to create the desired effect, as opposed to the use of precomputed information, such as a scanned image for a texture. After reading these notes, you will have a better idea of the meaning of procedural techniques as they apply to computer graphics. For a good definition of procedural textures, read Darwyn Peachey's chapter in "Texturing and Modeling: A Procedural Approach" (see section 1.5).
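As a minimal sketch of that distinction (this example is not from the course notes, and the function name is illustrative), a procedural texture is just a small function evaluated on demand at any point, rather than a lookup into stored image data:

#include <math.h>

/* A trivial procedural solid texture: a 3D checkerboard computed on the fly
   at any point (x, y, z), instead of being read from a scanned image. */
float checker_texture(float x, float y, float z, float scale)
{
    int ix = (int)floorf(x / scale);
    int iy = (int)floorf(y / scale);
    int iz = (int)floorf(z / scale);
    return ((ix + iy + iz) & 1) ? 1.0f : 0.0f;
}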

1.3 Why Proceduralism?

Proceduralism is a powerful paradigm for image synthesis. Procedural approaches provide an abstraction of the model or action, encode classes of objects, and allow high-level control and specification. Code segments or algorithms are used to abstract and encode the details of the model, instead of explicitly storing vast numbers of low-level primitives. The use of algorithms unburdens the modeler/animator from low-level control, provides great flexibility, and allows amplification of their efforts through parametric control: a few parameters to the model yield large amounts of geometric detail (Alvy Ray Smith referred to this as database amplification). This amplification allows a savings in storage of data and user specification time. The modeler has the flexibility to capture the essence of the object or phenomena being modeled without being constrained by the laws of physics and nature. He can include as much physical accuracy, and also as much artistic expression, as he wishes into the model. The effects achievable are constrained only by the user's procedural design abilities. This course will illustrate the relationship among several advanced procedural modeling techniques (L-systems, fractals, implicit functions, volumetric density functions, and iterated function systems), as well as describe how procedural techniques can be used for animation.

1.4 Course Objectives

The objective of this course is to provide the attendee with an understanding and working knowledge of several uses of procedural techniques in modeling and rendering. The attendee will gain the following from the course:

- An insight into different design approaches used by the speakers in designing procedures.

- A toolbox of specific procedures and basic primitive functions (noise, turbulence, etc.) to produce realistic images.

- An understanding of several advanced procedural approaches for modeling object geometry (hypertextures, gases, fractals, L-systems, implicit surfaces, iterated function systems).

- An introduction to animating these procedural objects and textures.

- An introduction to modeling and animating with implicit surfaces.

- An introduction to artificial evolution techniques.

1.5 Course Notes

The course notes are divided into 7 chapters. The first chapter is this introduction. The next 6 chapters contain the course notes from each of the speakers. Chapter 2 by Ken Perlin is concerned with procedural textures, hypertextures, and character animation. Chapter 3 by David Ebert shows a general approach to procedural animation, and also describes procedural approaches for modeling and animating gases and fluids. Chapter 4 by Brian Wyvill discusses building and animating implicit surface models. Chapter 5 by John Hart discusses techniques for procedurally modeling geometric details. Chapter 6 by Ken Musgrave discusses procedural models of natural phenomena including atmospheric effects, fractal models and "planetary textures". Finally, Chapter 7 by Karl Sims discusses "automated" proceduralism, or, as Karl calls it, "Artificial Evolution."

For further material on procedural modeling and texturing, you are encouraged to read Texturing and Modeling: A Procedural Approach, published by Academic Press Professional. This book by David Ebert, Ken Musgrave, Darwyn Peachey, Ken Perlin, and Steve Worley contains more detailed information on procedural modeling and texturing. The book greatly expands upon the material in these notes by Ebert, Musgrave, and Perlin. The book also covers a substantial amount of new material. There is one chapter explaining solid texturing techniques by Darwyn Peachey and another by Steve Worley on "tricks of the trade" and efficiency considerations for two-dimensional and solid texturing.


CHAPTER 2 - A HYPERTEXTURE TUTORIAL

Ken Perlin, New York University

2.1 INTRODUCTION:

This tutorial is mostly to help you with the practical matters of modeling and rendering hypertextures. I will start off by explaining the general structure of my hypertexture renderer (the raymarcher), and where the hooks are for the user to insert his/her functions to create hypertextures customized to taste.

Then I will briefly go over issues of interaction with the raymarcher - how to set things up so that small iterative design changes can be made quickly.

After this I will show you several examples of hypertextures that I've been playing with. A number of interesting and useful tricks will come out of this part of the discussion.

Finally, I list the C code for the turbulence and noise functions.

2.1.1 Shape, texture, and two paradigms:

Hypertexture comes out of earlier work where I used space filling textures for surface shading, as in figure marble vase. I was curious to see what would happen if I extended solid shapes out into these same texture spaces. The result ends up being something between shape and texture. A hypertexture starts with a fuzzy shape, and modifies this by some space filling texture. When I say a "fuzzy shape", I mean a shape whose inside/outside function, instead of being just a boolean value, can take on any intermediate value between 0.0 and 1.0. The portion of space having such intermediate values constitutes the shape's "fuzzy region". In hypertexture, both shape and texture are mappings from R^3 to density.

There are two basic mathematical paradigms you can use to model hypertexture. In the first paradigm, you use the texture to distort the space before you evaluate the shape. It's sort of like looking at the shape through a space-filling ripple glass effect:

    f(x) = shape(x + texture(x) * v)

where x is a point in R^3 and v is a simple vector expression (such as the local gradient of the shape function, or the direction of one of the principal coordinate axes).

In the other paradigm, you add the texture value to a fairly soft shape function (i.e. one having a wide fuzzy region). Then you apply a "sharpening" function to the result:


[Figure: marble vase]


    f(x) = sharpen( shape(x) + texture(x) )

I will be illustrating both types of hypertexture, and it will become clear what each one does for you.

2.1.2 You need a fast computer, preferably two or more.

Hypertexture is compute intensive, but very easy to run in parallel - you can run each ray on a different processor.

I use an AT&T Pixel Machine - which distributes the rays among 64 processors by interleaving - each processor handles every 8th pixel across and every 8th pixel down. Each processor has an identifier [i,j], where 0 <= i < 8 and 0 <= j < 8, so processor [i,j] computes all pixels with coordinates [ 8a + i, 8b + j ].

This arrangement is ideal for load balancing, since any 8x8 square in the image is seen by every processor. If you intend to farm out a hypertexture computation to multiple computers, you should use an analogous load balancing technique.
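A rough sketch of this kind of interleaved work assignment (this is illustrative C, not the Pixel Machine code itself; the image dimensions and the render_pixel callback are placeholders):

#define NPROC_X 8
#define NPROC_Y 8

/* Interleaved pixel assignment: processor (i,j) renders every pixel
   (NPROC_X*a + i, NPROC_Y*b + j), so any 8x8 tile of the image is shared
   by all processors and the load stays balanced.                          */
void render_interleaved(int width, int height, int i, int j,
                        void (*render_pixel)(int px, int py))
{
    int px, py;
    for (py = j ; py < height ; py += NPROC_Y)
        for (px = i ; px < width ; px += NPROC_X)
            render_pixel(px, py);
}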

2.1.3 You need a good library of texture generator functions.

Part of the reason that this stuff looks any good is that it incorporates randomness in a controlled way. Most of the work involved in achieving this is contained inside the noise() function, which is an approximation to what you would get if you took white noise (say, every point mapped to a random value between -1.0 and 1.0) and blurred it to dampen out frequencies greater than 1.0.

The key observation here is that, even though you are creating "random" things, you can use noise() to introduce high frequencies in a controlled way by applying noise() to an appropriately scaled domain. For example, if x is a point in R^3, then noise(2x) introduces frequencies twice as high as does noise(x). Another way of saying this is that it introduces details that are twice as small.

In the appendix I go into some detail about how the noise() function works, and I give my original C implementation (which hasn't really changed since 1984). I also talk in detail about the turbulence() function, which is a simple function built on top of noise() that you use to make textures that look like marble and flame and smoke. David Ebert uses this function extensively in his 3D atmospheric simulations.

2.2 WEARING TWO HATS - SYSTEM AND APPLICATIONS

I've done this work wearing two hats. First I built a system layer - the part that never changes. This is a "raymarcher" that renders hypertexture by marching a step at a time along each ray, accumulating density at each sample along the way. There are all kinds of hooks for user interaction built into the raymarcher.


Then, for each type of hypertexture, I construct different "application" code, describing a particular hypertexture. At this point in the process I can safely wear the hat of a "naive user". Everything is structured so that at this level I don't have to worry about any system details. All I really need to worry about is what density to return at any point in R^3. This makes it very painless to try out different types of hypertexture.

2.2.1 System Code - The Raymarcher

I render hypertexture by "raymarching" - stepping front-to-back along each ray from the eye until either total opacity is reached or until the ray hits a back clipping plane. Conceptually, the raymarching is done inside the unit cube: -0.5 < x,y,z < 0.5. A 4x4 viewing matrix transforms this cube to the desired view volume.

One ray is fired per pixel; the general procedure is as follows:

step = 1.0 / resolution;
for (y = -0.5 ; y < 0.5 ; y += step)
for (x = -0.5 ; x < 0.5 ; x += step) {

    [point, point_step] = create_ray([x,y,-0.5], [x,y,-0.5+step], view_matrix);
    previous_density = 0.;
    init_density_function();                        -- User supplied
    color = [0,0,0,0];
    for (z = -0.5 ; z < 0.5 && color.α < 0.999 ; z += step) {

        density = density_function(point);          -- User supplied
        c = compute_color(density);                 -- User supplied

        -- Do shading only if needed

        if (is_shaded && density != previous_density) {
            normal = compute_normal(point, density);
            c = compute_shading(c, point, normal);  -- User supplied
            previous_density = density;
        }

        -- Attenuation varies with resolution

        c[3] = 1.0 - pow( 1.0 - c[3], 100. * step );

        -- Integrate front to back

        if (c[3] > 0.) {
            t = c[3] * (1.0 - color.α);
            color += [ t*c.red, t*c.green, t*c.blue, t ];
        }

        -- March further along the ray

        point += point_step;
    }
}

2.2.2 Application Code - User defined functions:

The "user" gets to define four functions to create a particular hypertexture:

void init_density_function();
float density_function(float x, float y, float z);
color compute_color(float density);
color compute_shading(color c, vector point, vector normal);

What makes things really simple is that as a user you only have to define behavior at any given single point in space - the raymarcher then does the rest for you.

init_density_function()

This function is called once per ray. It gives you a convenient place to compute things that don't change at every sample.

density_function()

This is where you specify the mapping from points to densities. Most of the behavior of the hypertexture is contained in this function.

compute_color()

Here you map densities to colors. This also gives you a chance to calculate a refractive index.

compute_shading()

Non-luminous hypertextures react to light, and must be shaded. The model I use is to treat any substance that has a density gradient as a translucent surface, with the gradient direction acting as normal vector, as though the substance consists of small shiny suspended spheres.

In the raymarcher library I've included a Phong shading routine. I usually just call that with the desired light direction, highlight power, etc.


Shading is relatively expensive, since it requires a normal calculation. Also, in many cases (e.g. self-luminous gases) shading is not necessary. For this reason, shading is only done if the user sets an is_shaded flag.

The raymarcher computes normals for shading by calling the user's density function three extra times:

vector compute_normal(point, density) {

    vector d = [ density_function(point.x-ε, point.y, point.z) - density,
                 density_function(point.x, point.y-ε, point.z) - density,
                 density_function(point.x, point.y, point.z-ε) - density ];

    return d / |d|;
}

The above is the basic raymarcher. Two features have not been shown - refraction and shadows. Shadows are done by shooting secondary rays at each ray step where density != 0. They are prohibitively expensive except for hypertextures with "hard", fairly sharpened, surfaces. In this case the accumulated opacity reaches totality in only a few steps, and so relatively few shadow rays need be followed.

Of course if we used a large shared memory to store a shadow volume, then cast shadows would only involve an additional raymarch pass. Unfortunately, the AT&T Pixel Machine architecture does not support large shared memory.

Refraction is done by adding a fifth component to the color vector - an index of refraction. The user sets c.irefract (usually from density) in the "compute_color" function. The raymarcher then uses Snell's law to shift the direction of "point_step" whenever c.irefract changes from one step along the ray to the next. An example of this, described also in the attached Hypertexture paper, is shown in figure blue glass.

Since the density can change from one sample point to the next, it follows that the normal vector can also change continuously. This means that refraction can occur continuously. In other words, light can travel in curved paths inside a hypertexture. This raises some interesting possibilities. For example, imagine a manufacturing process that creates wafers whose index of refraction varies linearly from one face to the other (probably by some diffusion process). By carving such a material, one could create optical components within which light travels in curved paths. It might be possible to do things this way that would be very difficult or impossible to do with traditional optical components (in which light only bends at discrete surfaces between materials). The results of such components should be quite straightforward to visualize using refractive hypertexture.
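The notes do not list the refraction step itself; the following sketch shows one standard form of Snell's law for bending a unit ray direction, which is the kind of update applied to point_step when c.irefract changes between samples (the names here are illustrative, not the raymarcher's own):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Bend a unit direction d crossing from refractive index n1 into n2, given
   the unit normal n of the density gradient (facing back toward the ray).
   Returns 0 and leaves *out untouched on total internal reflection.        */
int refract_dir(vec3 d, vec3 n, float n1, float n2, vec3 *out)
{
    float eta  = n1 / n2;
    float cosi = -dot3(n, d);
    float k    = 1.f - eta * eta * (1.f - cosi * cosi);
    if (k < 0.f)
        return 0;                             /* total internal reflection  */
    out->x = eta * d.x + (eta * cosi - sqrtf(k)) * n.x;
    out->y = eta * d.y + (eta * cosi - sqrtf(k)) * n.y;
    out->z = eta * d.z + (eta * cosi - sqrtf(k)) * n.z;
    return 1;
}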

2.3 INTERACTION


[Figure: blue glass]


2.3.1 Levels of editing - changing algorithms to tweaking knobs

There are three levels of changes you can make. I describe them in order of slowest (and most sweeping) to fastest.

- Semantic changes - changing the user functions

I.e.: redefining your hypertexture methods. This type of change is covered in detail in section 2.5 below.

- Parameters you change by editing an input file of numeric parameters

This saves the time of recompiling the user functions, when all you want to change is some numeric value. The raymarcher has a mechanism built into it that lets you refer to a file that binds symbols to floating point values when you run the program. These bindings are made accessible to the hypertexture designer at the inner rendering loop.

In the examples below, I will adopt the following convention: Any symbol that begins with a Capital letter refers to a parameter whose value has been set in this input file. Symbols beginning with lower case letters refer to variables that are computed within the individual rays and samples.

- Parameters you change from the command line

These override any parameters with the same name in the input file. They are used to make animations showing things changing. For example, let's say you want to create an animation of a sphere with an expanding Radius, and you are working in the UNIX csh shell:

set i = 0
while ($i < 100)
    hypertexture -Radius $i sphere > $i
    @ i++
end

There are also some special parameters: XFORM for the view matrix, RES for image resolution, CLIP for image clipping (when you just want to recalculate part of an image). These can be set either from the command line or as an environment variable (the former overrides the latter, of course).

In these notes, I have hard wired numerical parameters into a number of expressions. These are just there to "tune" the model in various useful ways. For example, the expression "100 * step" appearing above in the attenuation step of the raymarcher has the effect of scaling the integrated density so that the user can get good results by specifying densities in the convenient range [0.0...1.0].


2.3.2 Z-slicing

For much of the time when designing hypertextures, you just need a general sense of the shape and position of the textured object. In this case it is useful to evaluate only at a fixed z - setting the value of a ray to the density at only one sample point a fixed distance away. This obviously runs many times faster than a full raymarch. I use Z-slicing for general sizing and placement, often going through many fast iterations in Z-slice mode to get those things just right.
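A minimal sketch of such a Z-slice preview pass, ignoring the view transform for brevity (Z_SLICE and write_gray are placeholders; density_function() is the user function described in section 2.2.2):

#define Z_SLICE 0.0f          /* fixed depth in the unit cube, -0.5 .. 0.5 */

extern float density_function(float x, float y, float z);

/* Z-slice preview: evaluate the user's density function once per pixel at
   a fixed depth and display it as a gray level, instead of raymarching.   */
void z_slice_preview(int resolution, void (*write_gray)(int px, int py, float g))
{
    int px, py;
    float step = 1.0f / resolution, x, y, d;
    for (py = 0 ; py < resolution ; py++)
        for (px = 0 ; px < resolution ; px++) {
            x = -0.5f + px * step;
            y = -0.5f + py * step;
            d = density_function(x, y, Z_SLICE);
            if (d < 0.f) d = 0.f;
            if (d > 1.f) d = 1.f;
            write_gray(px, py, d);
        }
}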

2.4 SOME SIMPLE SHAPES TO PLAY WITH

2.4.1 Sphere:

Start with a sphere with inner radius I_radius and outer radius O_radius. Inside I_radius, density is everywhere 1.0. Beyond O_radius, density has dropped completely to 0.0. The interesting part is the hollow shell in between:

(1) Precompute (only once):
        rr0 = O_radius * O_radius;
        rr1 = I_radius * I_radius;

(2)     t = x * x + y * y + z * z;

(3)     if (t > rr0)
            return 0.;
        else if (t < rr1)
            return 1.;
        else
            return (t - rr0) / (rr1 - rr0);
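Putting the three steps together as one C density function, in the spirit of the notes (here I_radius and O_radius are assumed to be floats bound from the parameter file, per the convention of section 2.3.1):

extern float I_radius, O_radius;   /* bound from the parameter file */

static float rr0, rr1;             /* squared outer and inner radii */

void init_density_function()       /* step (1): precompute          */
{
    rr0 = O_radius * O_radius;
    rr1 = I_radius * I_radius;
}

float density_function(float x, float y, float z)
{
    float t = x * x + y * y + z * z;          /* step (2)           */
    if (t > rr0) return 0.;                   /* step (3): outside  */
    if (t < rr1) return 1.;                   /*   solid core       */
    return (t - rr0) / (rr1 - rr0);           /*   fuzzy shell      */
}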

2.4.2 Egg:

To create an egg, you start with a sphere, but distort it by making it narrower at the top. A good maximal "narrowing" value is 2/3, which is obtained by inserting the following step into the sphere procedure:

(1.5)   e = ( 5. - y / O_radius ) / 6.;
        x = x / e;
        z = z / e;

Notice that we must divide, not multiply, by the scale factor. This is because x and z are the arguments to a shape defining function - to make the egg thinner at the top, we must increase (not decrease) the scale of x and z.
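As a sketch, the same distortion folded into the sphere function from 2.4.1 (again assuming I_radius and O_radius come from the parameter file):

extern float I_radius, O_radius;   /* bound from the parameter file */

float egg_density_function(float x, float y, float z)
{
    float e, rr0, rr1, t;

    /* step (1.5): scale x and z up toward the top (large y), which makes
       the evaluated shape thinner there; the factor reaches 2/3 at
       y = O_radius */
    e = (5. - y / O_radius) / 6.;
    x = x / e;
    z = z / e;

    /* then the ordinary fuzzy sphere of section 2.4.1 */
    rr0 = O_radius * O_radius;
    rr1 = I_radius * I_radius;
    t = x * x + y * y + z * z;
    if (t > rr0) return 0.;
    if (t < rr1) return 1.;
    return (t - rr0) / (rr1 - rr0);
}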

2.5 EXAMPLES OF HYPERTEXTURE

2.5.1 Explosions:

The texture component here is turbulence uniformly positioned throughout space.

t = 0.5 + Ampl * turbulence(x, y, z);
return max(0., min(1., t));

Shape is just a sphere with I_radius = 0.0, which ensures that the fuzzy region will consist of the entire sphere interior.

The density function is:

d = shape(x, y, z);
if (d > 0.)
    d = d * texture(x, y, z);
return d;
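A sketch of the two fragments combined into a single density function (shape() is the fuzzy sphere of 2.4.1 with I_radius = 0; Ampl is a parameter-file knob; turbulence3() stands in here for a call to the turbulence() function of section 2.6 with suitable frequency bounds):

extern float Ampl;                                    /* parameter-file knob */

extern float shape(float x, float y, float z);       /* fuzzy sphere, 2.4.1  */
extern float turbulence3(float x, float y, float z); /* wraps turbulence()   */

float explosion_density_function(float x, float y, float z)
{
    float d = shape(x, y, z);        /* I_radius = 0: all fuzzy region      */
    if (d > 0.) {
        /* texture: turbulence recentered to [0..1] and scaled by Ampl      */
        float t = 0.5 + Ampl * turbulence3(x, y, z);
        if (t < 0.) t = 0.;
        if (t > 1.) t = 1.;
        d = d * t;
    }
    return d;
}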

You can animate an explosion by increasing the sphere O_radius over time. Figure explode1 shows an explosion with O_radius set to 0.4. Figure explode2 shows the same explosion with O_radius set to 0.8.

To create these explosions I oriented the cusps of the texture inward, creating the effect of locally expanding balls of flame on the surface. Contrast this with figure fire ball (which is discussed further in the attached Hypertexture paper), where the cusps were oriented outward to create a licking flame effect.

2.5.2 Lifeforms:

Just for fun, I placed a shape similar to the above explosions inside of an egg shape of constant density, as in figure embryo. By pulsing the O_radius and Ampl rhythmically, while rotating slightly over time, I managed to hatch some rather intriguing simulations.

2.5.3 Space filling fractals:

Figures eggs 1,2,3,4 show steps in the simulation of a sparsely fractal material. At each step, noise() is used to carve volume away from the egg. Then noise() of twice the frequency is carved away from the remainder, and so on.

[Figures: explode1, explode2]
[Figure: fire ball]
[Figure: embryo]
[Figures: egg 1, egg 2, egg 3, egg 4]

Figure fractal egg shows one possible result of such a process, a shape having infinite surface area and zero volume.

[Figure: fractal egg]

2.5.4 Woven cloth:

Cloth is defined by the perpendicular interweaving of warp threads and woof threads. We define a warp function: warp(x, y, z), where y is the direction perpendicular to the cloth:

(1) make an undulating slab:

        if (fabs(y) > PI)
            return 0.;
        y = y + PI/2 * cos(x) * cos(z);
        if (fabs(y) > PI/2)
            return 0.;

        density = cos(y);

(2) separate the undulating slab into fibers via cos(z):

density = density * cos(z);

(3) shape the boundary into a hard surface:

        density = density * density;
        density = bias(density, Bias);
        density = gain(density, Gain);
        return density;

We can then define a woof function by rotating 90 degrees in z,x and flipping in y. The complete cloth function is then:

cloth(x, y, z) = warp(x, y, z) + warp(z, -y, x);

You can make the cloth wrinkle, fold, etc. by transforming x, y, and z before applying the cloth function. You can also add high frequency noise() to x,y,z before applying cloth(), to simulate the appearance of roughly formed fibers. In the examples shown I have done both sorts of things.
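Gathered into one C sketch (bias() and gain() are the shaping functions from the attached Hypertexture paper; Bias and Gain are parameter-file knobs; the function layout here is mine, not code from the notes):

#include <math.h>

#define PI 3.14159265358979

extern float Bias, Gain;                 /* parameter-file knobs            */
extern float bias(float t, float b);     /* shaping functions from the      */
extern float gain(float t, float g);     /* attached Hypertexture paper     */

float warp(float x, float y, float z)
{
    float density;

    /* (1) undulating slab of threads */
    if (fabs(y) > PI)
        return 0.;
    y = y + PI/2 * cos(x) * cos(z);
    if (fabs(y) > PI/2)
        return 0.;
    density = cos(y);

    /* (2) separate the slab into individual fibers via cos(z) */
    density = density * cos(z);

    /* (3) shape the boundary into a hard surface */
    density = density * density;
    density = bias(density, Bias);
    density = gain(density, Gain);
    return density;
}

/* warp plus the perpendicular woof: rotate 90 degrees in z,x, flip in y */
float cloth(float x, float y, float z)
{
    return warp(x, y, z) + warp(z, -y, x);
}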

In the cloth examples shown here, I "sharpen" the surface by applying the bias() and gain() functions described in the attached Hypertexture paper. Figures cloth 1,2,3,4 are extreme closeups of cloth with various bias and gain settings. Figure cloth 1 has low bias and gain. In cloth 2 I increase gain, which "sharpens" the surface. In cloth 3 I increase bias, which expands the surface, in effect fattening the individual threads. In cloth 4 I increase both bias and gain. Figure cloth 5 shows a high resolution rendering of a low bias, high gain cloth, which gives a "thread" effect. Conversely, a high bias, low gain would give a "woolen" effect.

[Figures: cloth 1, 2, 3, 4]
[Figure: cloth 5]

2.5.5 Architexture:

Now let's take an architectural sketch and "grow" solid texture around it, ending up with hard textured surfaces. This is similar in spirit to Ned Greene's voxel automata algorithm. The difference is that whereas he literally "grows" a volume from a defining skeleton, one progressive voxel layer at a time, the hypertexture approach directly evaluates its result independently at each point in space.

I start with a skeleton of architectural elements. This can be supplied by freehand drawing or, alternatively, generated from a CAD program. Each architectural element is a "path" in space formed by consecutive points Pi.

Each path defines an influence region around it, which gives the architexture its shape component. This region is created by "blurring" the path. To do this I treat each point along the path as the center of a low density soft sphere of radius R. The shape density at a given point x is given by:

    path_shape(x) = ( 1 / (K R^2) ) * sum over i of max( 0, R^2 - |Pi - x|^2 ),

where the normalizing constant K is the distance between successive points on the path. For each volume sample, the cost per path point is a dot product and some adds, which is fairly expensive. To speed things up I maintain a bounding box around each path, which eliminates most paths from consideration for any given sample point.
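A sketch of that shape term in C (the path data structure and the bounding-box test are assumptions for illustration, not code from the notes):

typedef struct { float x, y, z; } point3;

typedef struct {
    point3 *pts;          /* consecutive points P[i] along the path          */
    int     npts;
    float   K;            /* distance between successive path points         */
    point3  bbmin, bbmax; /* bounding box of the path, padded by R           */
} path;

float path_shape(const path *p, float R, point3 xp)
{
    float RR = R * R, sum = 0.;
    int i;

    /* quick rejection: most paths miss most sample points */
    if (xp.x < p->bbmin.x || xp.x > p->bbmax.x ||
        xp.y < p->bbmin.y || xp.y > p->bbmax.y ||
        xp.z < p->bbmin.z || xp.z > p->bbmax.z)
        return 0.;

    for (i = 0 ; i < p->npts ; i++) {
        float dx = p->pts[i].x - xp.x;
        float dy = p->pts[i].y - xp.y;
        float dz = p->pts[i].z - xp.z;
        float d2 = dx*dx + dy*dy + dz*dz;     /* |P[i] - x|^2                */
        if (d2 < RR)
            sum += RR - d2;                   /* max(0, R^2 - |P[i] - x|^2)  */
    }
    return sum / (p->K * RR);                 /* 1/(K R^2) times the sum     */
}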

I've only played so far with rocklike textures for architexture. The texture component of this is given by a simple noise based fractal generator:

    rock_texture(x) = sum over f, from f = log2(base_freq) to log2(resolution), of 2^(-f) * noise(2^f * x)

and I define the final density by:

    sharpen( path_shape(x) + rock_texture(x) )

where I use the sharpening function to reduce the effective fuzzy region size to about one volume sample. For a given image resolution and shape radius R, correct sharpening is done by:

- scaling the density gradient about 0.5 by a factor of 1/R (adjusting also for variable image resolution)

- clipping the resulting density to between 0.0 and 1.0.

The idea of the above is that the larger R becomes, the smaller will be the gradient of density within the fuzzy region, so the more sharpening is needed. The actual code I use to do this is:

density = (density - 0.5) * (resolution / 600) / R + 0.5;
density = max(0.0, min(1.0, density));

I also decrease the shape's radius R with height (y coordinate), which gives architectural elements a sense of being more massive in their lower, weight supporting, regions.
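A sketch of the rock texture term and the final sharpened density, following the formulas above (base_freq, resolution and R are assumed to be floats from the input file; the shape term would come from path_shape(); noise3() is the function listed in section 2.6):

extern float base_freq, resolution, R;     /* assumed input-file parameters */

float noise3(float vec[3]);                /* from section 2.6              */

/* fractal "rock" texture: octaves of noise from base_freq up to the image
   resolution, each octave weighted by 1/frequency                          */
float rock_texture(float x, float y, float z)
{
    float p[3], t = 0., freq;
    for (freq = base_freq ; freq < resolution ; freq *= 2.) {
        p[0] = freq * x;  p[1] = freq * y;  p[2] = freq * z;
        t += noise3(p) / freq;
    }
    return t;
}

/* final density: shape term plus texture term, then sharpened by
   steepening the gradient about 0.5 in proportion to 1/R                   */
float architexture_density(float shape_term, float x, float y, float z)
{
    float density = shape_term + rock_texture(x, y, z);
    density = (density - 0.5) * (resolution / 600) / R + 0.5;
    return density < 0. ? 0. : (density > 1. ? 1. : density);
}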

In the illustrations I show a typical arch (which I suppose could archly be called archetypical architexture). This started out as a simple curved line tracing the inner skeleton, which was then processed as described above.

Figure arch1 shows a sequence where three parameters are being varied. From left to right the width of the arch at the base increases. From top to bottom the thickness enhancement towards the base increases. These two parameters act in concert to add a "weight supporting" character to architextural elements. Finally, the amplitude of the texture is increased linearly from the first image in the sequence to the last.

Figure arch2 shows a high resolution version of the bottom right image in the sequence.

[Figures: arch 1, arch 2]

2.5.6 The NYU Torch:

In figure NYU torch, a number of hypertextures were used for various parts of the same object to build an impression of this well known New York University icon. The various amounts of violet reflected down from the flame to the torch handle were just added manually, as a function of y.

[Figure: NYU torch]

2.5.7 Smoke:

In a recent experiment we tried to create the look of an animating smoke column, using as simple a hypertexture as possible. This work was done in collaboration with Ajay Rajkumar at NYU.

The basic approach was to create a smooth smoke "column" along the y axis, and then to perturb this column in x,z - increasing the perturbation at greater y values.


We added knobs for such things as column opacity and width. These "shaping" knobs can have a great effect on the final result. For example, figures smoke1 and smoke2 vary only in their column width. Yet the difference in appearance between them is drastic.

2.5.7.1 Time dependency:

We make the smoke appear to "drift" in any direction over time by moving the domain of the turbulence in the opposite direction. In the example below, we do this domain shift in both x and y.

We move y linearly downward, to give the impression of a rising current. We move x to the left, but increase the rate of movement at greater y values. This creates the impression that the smoke starts out moving vertically, but then drifts off to the right as it dissipates near the top.

The particular shape of the smoke can vary dramatically over time, yet the general feel of the smoke stays the same. Compare, for example, the two images smoke3 and smoke4, which are two frames from the same smoke animation.

[Figures: smoke 1, 2, 3, 4]

2.5.7.2 Smoke rings:

Figure smoke ring shows the formation over time of a smoke ring. Smoke rings will occur when the turbulence function distorts the space sufficiently so that the column appears to horizontally double over on itself. We need to only let this happen fairly high up on the column. If it happens too low, then the rings will appear to be forming somewhere off in space, not out of the column itself.

[Figure: smoke rings]

For this reason, we employ two different gain curves - one controls turbulence amplitude near the column as a function of y, the other controls turbulence amplitude far from the column as a function of y. The latter curve always lags behind the former, thereby preventing low lying smoke rings.

2.5.7.3 Optimization:

Since smoke is quite sparse within its sampled volume, it is a good candidate for optimization based on judicious presampling. Tests on the AT&T Pixel Machine with 64 DSP32 processors showed that it took about 8 hours to compute a single frame of a 640x480x640 volume. This is rather impractical for animation. To speed this up, we pre-compute the image at a smaller resolution to find out where the smoke lives within the volume. We then do the final computation only within those parts of the volume.

More specifically, we do a preliminary ray march at 1/4 the final x,y,z resolution. Note that this requires only 1/64 as many density evaluations as would a full computation. At each 4x4 pixel, we save the interval along z which bounds all non-zero densities. For many pixels, this will be a null interval. Then to be conservative, at each pixel we extend this interval to be its union with the intervals at all neighbor pixels.

We use this image of bounding z intervals to restrict the domain of the final ray marching. We have found about a 30-fold speed-up using this method - each image now takes about 16 minutes to compute (including the time for the sub-sampled prepass). Smoke is optimal for this type of speedup for two reasons: (1) since it is sparse, the speedup is great and (2) since density takes many sample widths to fall off to zero, tiny details are not inadvertently skipped over.
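A sketch of that two-pass idea (the interval-image layout and names are illustrative, not the code used at NYU; density_function() is the smoke density function):

typedef struct { float zmin, zmax; } zspan;    /* zmin > zmax means "empty" */

extern float density_function(float x, float y, float z);

/* Pass 1: march one ray per 4x4 pixel block, at 1/4 the final z rate, and
   record the z interval that contains all nonzero densities.  Pass 2 (not
   shown) widens each interval to the union of its neighbors' intervals and
   restricts the full-resolution raymarch of each pixel to that interval.   */
void coarse_prepass(int res, zspan *spans)     /* spans is (res/4)*(res/4)  */
{
    int cx, cy;
    float x, y, z, step = 4.0f / res;
    for (cy = 0 ; cy < res / 4 ; cy++)
        for (cx = 0 ; cx < res / 4 ; cx++) {
            zspan s;
            s.zmin =  1.f;
            s.zmax = -1.f;
            x = -0.5f + (4 * cx + 2) * (1.0f / res);
            y = -0.5f + (4 * cy + 2) * (1.0f / res);
            for (z = -0.5f ; z < 0.5f ; z += step)
                if (density_function(x, y, z) > 0.f) {
                    if (z < s.zmin)        s.zmin = z;
                    if (z + step > s.zmax) s.zmax = z + step;
                }
            spans[cy * (res / 4) + cx] = s;
        }
}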

Here is pseudocode for the smoke density function. It's mostly just C code with some unimportant details and declarations left out.

smoke_density_function(x, y, z)
{
    /* k1, k2 ... are the column shape knobs */

    /* rotate z randomly about y, based on noise function */
    /* this creates the impression of random "swirls"     */

    t = noise(x, y, z);
    s = sin(t / 180. * PI);
    c = cos(t / 180. * PI);
    z = x * s + z * c;

    /* once the space is "swirled", create the column of smoke */

    /* 1) phase shift x and z using turbulence; this varies with time */

    x += k1 * phase_shift(x, y, z);
    z += k2 * phase_shift(x, y, z + k3);

    /* 2) define column by distance from the y-axis */

    rx = (x * x + z * z) * k4;

    /* 3) if inside the column, make smoke */

    if (rx < 1.) {
        rx = bias(rx, k5);        /* the basic column shape is */
        s = sin(PI * rx);         /* a tube with hollow core   */
        return s * s;
    }
    else
        return 0.;
}


phase_shift(x, y, z)
float x, y, z;
{
    /* c1, c2 ... are the "texture" knobs */

    p[0] = c1 * x + bias(y + .5, .3) * TIME;   /* vary with time */
    p[1] = c1 * y + TIME;
    p[2] = c1 * z + c2;
    g = gain(y + .5, c3);                      /* dropoff with y */

    /* these 3 lines remove smoke rings that are */
    /* too low in y to be physically plausible   */

    r = max(0., 1. - (x * x + z * z) * c5);
    gl = gain(bias(y + .5, c4), c3);           /* smoke ring dropoff with y */
    g = g * LERP(gl, r, 1.);

    return g * (turbulence(p, 1., RES) + c6);  /* c6 recenters the column */
}

2.6 TURBULENCE AND NOISE

2.6.1 The turbulence function

The turbulence function, which you use to make marble, clouds, explosions, etc., is just a simple fractal generating loop built on top of the noise function. It is not a real turbulence model at all. The key trick is the use of the fabs() function, which makes the function have gradient discontinuity "fault lines" at all scales. This fools the eye into thinking it is seeing the results of turbulent flow. The turbulence() function gives the best results when used as a phase shift, as in the familiar marble trick:

sin(point.x + turbulence(point));

Note the second argument below, lofreq, which sets the lowest desired frequency component of the turbulence. The third argument, hifreq, is used by the function to ensure that the turbulence effect reaches down to the single pixel level, but no further. I usually set this argument equal to the image resolution.

float turbulence(point, lofreq, hifreq)
float point[3], lofreq, hifreq;
{
    float noise3(), freq, t, p[3];

    p[0] = point[0] + 123.456;
    p[1] = point[1];
    p[2] = point[2];

    t = 0;
    for (freq = lofreq ; freq < hifreq ; freq *= 2.) {
        t += fabs(noise3(p)) / freq;
        p[0] *= 2.;
        p[1] *= 2.;
        p[2] *= 2.;
    }
    return t - 0.3;   /* readjust so that mean returned value is 0.0 */
}

2.6.2 The noise function

noise3 is a rough approximation to "pink" (band-limited) noise, implemented by a pseudorandom tricubic spline. Given a vector in R^3, it returns a value between -1.0 and 1.0. There are two principal tricks to make it run fast:

- Precompute an array of pseudo-random unit length gradients g[n].

- Precompute a permutation array p[] of the first n integers.

Given the above two arrays, any integer lattice point (i,j,k) can be quickly mapped to a pseudorandom gradient vector by:

g[ (p[ (p[i] + j) % n ] + k) % n]

By extending the g[] and p[] arrays, so that g[n+i]=g[i] and p[n+i]=p[i], the above lookup can be replaced by the (somewhat faster):

g[ p[ p[i] + j ] + k ]

Now for any point in R^3 we just have to do the following two steps:

(1) Get the gradient for each of its surrounding 8 integer lattice points as above.

(2) Do a tricubic hermite spline interpolation, giving each lattice point the value 0.0.

The second step above is just an evaluation of the hermite derivative basis function 3t^2 - 2t^3 in each dimension, where the value contributed by each lattice point is given by a dot product of the gradient at that lattice point with the fractional offset from it (this is what the s_curve() and at() macros in the code below compute).


Here is my implementation in C of the noise function. Feel free to use it, as long as you reference where you got it. ;v)

/* noise function over R3 - implemented by a pseudorandom tricubic spline */

#include <stdio.h>
#include <math.h>

#define DOT(a,b) (a[0] * b[0] + a[1] * b[1] + a[2] * b[2])

#define B 256

static int p[B + B + 2];
static float g[B + B + 2][3];
static int start = 1;

#define setup(i,b0,b1,r0,r1) \
        t = vec[i] + 10000.; \
        b0 = ((int)t) & (B-1); \
        b1 = (b0+1) & (B-1); \
        r0 = t - (int)t; \
        r1 = r0 - 1.;

float noise3(vec)
float vec[3];
{
    int bx0, bx1, by0, by1, bz0, bz1, b00, b10, b01, b11;
    float rx0, rx1, ry0, ry1, rz0, rz1, *q, sx, sy, sz, a, b, c, d, t, u, v;
    register int i, j;

    if (start) {
        start = 0;
        init();
    }

    setup(0, bx0,bx1, rx0,rx1);
    setup(1, by0,by1, ry0,ry1);
    setup(2, bz0,bz1, rz0,rz1);

    i = p[ bx0 ];
    j = p[ bx1 ];

    b00 = p[ i + by0 ];
    b10 = p[ j + by0 ];
    b01 = p[ i + by1 ];
    b11 = p[ j + by1 ];


#define at(rx,ry,rz) ( rx * q[0] + ry * q[1] + rz * q[2] )

#define s_curve(t) ( t * t * (3. - 2. * t) )

#define lerp(t, a, b) ( a + t * (b - a) )

    sx = s_curve(rx0);
    sy = s_curve(ry0);
    sz = s_curve(rz0);

    q = g[ b00 + bz0 ] ; u = at(rx0,ry0,rz0);
    q = g[ b10 + bz0 ] ; v = at(rx1,ry0,rz0);
    a = lerp(sx, u, v);

    q = g[ b01 + bz0 ] ; u = at(rx0,ry1,rz0);
    q = g[ b11 + bz0 ] ; v = at(rx1,ry1,rz0);
    b = lerp(sx, u, v);

    c = lerp(sy, a, b);             /* interpolate in y at lo x */

    q = g[ b00 + bz1 ] ; u = at(rx0,ry0,rz1);
    q = g[ b10 + bz1 ] ; v = at(rx1,ry0,rz1);
    a = lerp(sx, u, v);

    q = g[ b01 + bz1 ] ; u = at(rx0,ry1,rz1);
    q = g[ b11 + bz1 ] ; v = at(rx1,ry1,rz1);
    b = lerp(sx, u, v);

    d = lerp(sy, a, b);             /* interpolate in y at hi x */

    return 1.5 * lerp(sz, c, d);    /* interpolate in z */
}

static init()
{
    long random();
    int i, j, k;
    float v[3], s;

    /* Create an array of random gradient vectors uniformly on the unit sphere */
    srandom(1);
    for (i = 0 ; i < B ; i++) {
        do {                            /* Choose uniformly in a cube */
            for (j = 0 ; j < 3 ; j++)
                v[j] = (float)((random() % (B + B)) - B) / B;
            s = DOT(v,v);
        } while (s > 1.0);              /* If not in sphere try again */
        s = sqrt(s);
        for (j = 0 ; j < 3 ; j++)       /* Else normalize */
            g[i][j] = v[j] / s;
    }

    /* Create a pseudorandom permutation of [1..B] */
    for (i = 0 ; i < B ; i++)
        p[i] = i;
    for (i = B - 1 ; i > 0 ; i -= 2) {
        k = p[i];
        p[i] = p[j = random() % B];
        p[j] = k;
    }

    /* Extend g and p arrays to allow for faster indexing */
    for (i = 0 ; i < B + 2 ; i++) {
        p[B + i] = p[i];
        for (j = 0 ; j < 3 ; j++)
            g[B + i][j] = g[i][j];
    }
}


Real time responsive animation with personality

Ken Perlin

Media Research Laboratory

Department of Computer Science

New York University

715 Broadway, NY, NY 10003

[email protected]

Abstract

Building on principles from our prior work on procedural texture synthesis, we are able to create remarkably lifelike, responsively animated characters in real time. Rhythmic and stochastic noise functions are used to define time varying parameters that drive computer generated puppets. Because we are conveying just the "texture" of motion, we are able to avoid computation of dynamics and constraint solvers.

The subjective impression of dynamics and other subtle influences on motion can be conveyed with great visual realism by properly tuned expressions containing pseudo-random noise functions. For example, we can make a character appear to be dynamically balancing herself, to appear nervous, or to be gesturing in a particular way.

Each move has an internal rhythm, and transitions between moves are temporally constrained so that "impossible" transitions are precluded. For example, if while the character is walking we specify a dance turn, the character will always step into the turn onto the correct weight-bearing foot. An operator can make a character perform a properly connected sequence of actions, while conveying particular moods and attitudes, merely by pushing buttons at a high level.

Potential uses of such high level "textural" approaches to computer graphic simulation include Role Playing Games, simulated conferences, "clip animation", graphical front ends for MUDs (Ste92) (Ger92), and synthetic performances.

1 Introduction

1.1 Description of the Problem

In previous work (Per85) we used pseudo-random functions to create natural surface textures of surprisingly realistic appearance without having to model the underlying physics. This was done with a set of interactive tools, consisting of:

- a powerful interactive prototyping language
- a good set of signal generation and modification functions
- a good controllable noise primitive
- a set of conventions for fitting things together


In the recent work described in this paper we have applied this approach to the problem of building real time graphic puppets that appear to be emotionally responsive.

We choose the word "puppets" very deliberately here. This work is not artificial intelligence - these animated characters do not encode any real intentionality; they only encode a visual impression of personality. This work was first presented in (Per94). We will refer to the "dancer" figure from that animation in our examples.

1.2 Related work

Simulated actors that embody true physical constraints are being developed by Badler et al at the University of Pennsylvania (BPW93). Dynamic balancing walking robots have been developed by Raibert (Rea86), and animal and human figure simulations based on inverse dynamics have been developed by Girard (GM85).

Using layered construction in the design of articulated movement has been explored by Chadwick et al (CHP89). Similarly, layered control structures for walking robots (subsumption architectures) have been used effectively by Brooks (Bro86).

Morawetz and Calvert have added a sense of personality to simulated human movements by supporting secondary movements (MC90). In a radically different approach, genetic algorithms have been seen to induce movement that gives an intriguing impression of personality in goal directed mutations of articulated figures (Sim94).

1.3 Guiding principles

We adopt the following general approach: Program individual motions into the puppets beforehand, but also ensure that transitions between any pair of actions are visually correct. A potential objection to this approach is that things might look repetitious. But by using randomization we can easily build actions that are very controllable yet never actually repeat themselves.

In addition to the forward kinematics of individual actions, we build in only three simple constraints. Characters never walk through walls, they never spin their heads all the way around backwards, and they maintain fixed foot contact with the floor when doing "walking" actions.

1.4 Comparison with dynamics approaches

This approach has advantages as well as disadvantages when compared with more ambitious approaches that try to model the underlying physics. One advantage is that it allows much more direct control over the subtle movements that convey the appearance of emotional expressiveness. Another is that computational costs are far lower.

A disadvantage is that the model can't teach itself new actions. If the puppet's foot is snagged by a rock while walking, the puppet cannot properly perform the particular movement of tripping and recovering its balance, unless we've already taught it how to do that.

The two approaches are compatible. Ideally a system would employ both predetermined movements, as well as physical laws that allow it to deal with unexpected events in its environment, such as the sorts of dynamic balancing of the `Jack' figure from Badler's group at U. Penn.

2 Method

2.1 Actions, Weights, Transitions

The two major components of our method are actions and weights. An action is some simple or repetitive movement, such as walking or standing, or a pirouette turn. The relative contribution of an action is given by a weight, which is always a scalar value between 0.0 and 1.0.

We cause the puppet to respond to her environment in real time mainly by changing these various weights. For example, decreasing the weight for one action, while simultaneously increasing the weight for another, causes a transition in behavior from the first action to the second [figure 1].

Note that if this is done properly, the actions can be connected together seamlessly in arbitrary sequences, like characters in a string of text, to build up complex behaviors [figure 2]. In spirit this is similar to the summation of B-spline knot functions to construct smooth piecewise-cubic curves.

If these transitions are applied naively, the results can be disastrous. An example would be a transition from an action in which our puppet is stepping onto her left foot to an action in which she is stepping onto her right foot. Another example would be a transition that would make the puppet's arm interpenetrate her body on its way from the first action to the second action.

We solve this problem by controlling the times of transitions between actions, and by designing actions in such a way that it is possible to control their transitions.

In this section we will describe the structure of our system, proceeding in a bottom up fashion. We start with joint kinematics, go on to explain how coherent actions are developed, and then to the scalar controls that combine those actions. The bottom to top structure of our system is as follows:

- joint kinematics
- individual actions
- scalar weights to blend actions
- synchronizing actions
- discrete choice controls
- layers of choice controls
- constraints

2.2 Bottom level kinematic hierarchy

19 universal joints are used in our current approximation of the human figure: one each for waist, neck, and head, plus four for each limb [figure 3]. Ideally there should be more; this was the minimum that seemed necessary to allow emotional expressiveness.

Separate joints appear at the base and top of the neck. The first universal "arm" joint is actually at the chest. This controls the position of the shoulder. The arm structure involves the following universal joints: chest, shoulder, elbow, and wrist. Similarly, a leg has: pelvis, hip, knee, and ankle joints.

Each universal joint allows three rotations: x, then z (the two "aiming" parameters), followed by y (rotation around the limb axis). Any angle not specified defaults to a "zero" position where the puppet is standing upright with arms at her side.

Here is the actual code of the main routine for positioning and drawing the body, executed once per frame, as expressed in our modeling language's reverse polish notation:

    Push
        Neck joint
        Nod joint draw_head
        Push
            Lchest joint
            Lshoulder joint
            Lelbow joint
            Lwrist joint draw_arm
            -1,1,1 scale
            Rchest joint
            Rshoulder joint
            Relbow joint
            Rwrist joint draw_arm
        Pop
        Waist draw_torso
        Push
            Lpelvis joint
            Lhip joint
            Lknee joint
            Lankle joint 1 draw_leg
            -1,1,1 scale
            Rpelvis joint
            Rhip joint
            Rknee joint
            Rankle joint 2 draw_leg
        Pop
    Pop

Push and Pop manipulate a local matrix stack, as in the Silicon Graphics GL model (SGI94). Waist, Neck, Nod, Lchest, etc., are joint variables, and draw_head, draw_arm, draw_torso, draw_leg are procedures to draw parts of the body. Note the scaling by -1 in the x dimension to draw the right arm and leg as mirrors of the left arm and leg.

The variables Waist, Neck, Nod, etc., represent the 19 universal joints of the figure. Each is a vector of length three. The values in these vectors, which change at every frame, drive the figure's joints. They are set by actions and by transitions between actions.

The actual work is done in subprocedures draw_head, draw_torso, draw_arm and draw_leg. Each of these routines does successive forward kinematic transformations on the current matrix in order to compute locations for the puppet's component parts.

2.3 Actions

A primitive action is constructed by varying the puppet's scalar joint angles over time t via expressions of raised sine and cosine, as well as noise, where "raised sine" and "raised cosine" are defined by (1 + sin(t))/2 and (1 + cos(t))/2.

At each frame we compute raised sine and cosine, as a function of time, at two frequencies one octave apart:

    s1 := rsin(time)
    c1 := rcos(time)
    s2 := rsin(2*time)
    c2 := rcos(2*time)

The variables s1 and c1 are used together within actions to impart elliptical rotations. Variables s2 and c2 do the same at double frequency.

Together the expressions s1, c1, (1 - s1), and (1 - c1) collectively generate a four phase periodic signal. In practice we have not found any need for finer phase control of periodic actions than this quarter cycle accuracy.

We also provide a set of independent coherent noise sources n1, n2, ...:

    n1 := .5 * (1 + noise(t))
    n2 := .5 * (1 + noise(t + 100))
    n3 := .5 * (1 + noise(t + 200))

The coherent noise source is defined as in (Per85). Here the noise is simpler, since it need only be defined over a one dimensional temporal domain, rather than over a three dimensional spatial domain. The algorithm we use is:


(1) if x is an integer, noise(x) = 0.

(2) define a mapping G(i) from the integers to a fixed set of pseudorandom gradients.

(3) given any i < x < i + 1, do a hermite spline interpolation, using the two neighboring gradients G(i) and G(i+1).

The only tricky step above is (2). To implement this step efficiently, we precompute a table of pseudorandom gradients g[0..255]. Then for any integer i we return g[i mod 256]. A comprehensive discussion of noise implementations can be found in (Eea94).
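To make steps (1)-(3) concrete, here is a minimal sketch of one dimensional gradient noise in C. The table size, the gradient initialization, and the names noise1 and init_noise1 are assumptions made for this illustration, not the routines used in the system; only the structure comes from the steps above: the value is zero at integers, gradients come from a table indexed by i mod 256, and a hermite curve blends the two neighboring gradient ramps.

    #include <stdlib.h>

    #define N 256

    static float grad1[N];             /* pseudorandom gradients G(i mod N) */
    static int   noise1_ready = 0;

    static void init_noise1(void)
    {
        int i;
        srandom(1);
        for (i = 0; i < N; i++)                       /* gradients in [-1,1] */
            grad1[i] = 2.0f * (random() % 10000) / 9999.0f - 1.0f;
        noise1_ready = 1;
    }

    float noise1(float x)
    {
        int i;
        float t, s, a, b;

        if (!noise1_ready)
            init_noise1();

        i = (int)x - (x < (int)x);     /* floor(x), correct for negative x  */
        t = x - i;                     /* fractional position, 0 <= t < 1   */

        a = grad1[ i      & (N-1)] * t;           /* ramp from left gradient  */
        b = grad1[(i + 1) & (N-1)] * (t - 1.0f);  /* ramp from right gradient */

        s = t * t * (3.0f - 2.0f * t);            /* hermite blend 3t^2 - 2t^3 */
        return a + s * (b - a);        /* zero whenever x is an integer      */
    }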

Every action is built by using some combination of the above source signals in simple expressions, to control the rotation of some joints within some range. Each joint is at its zero position when the puppet is standing at attention with both arms at the side. An action is specified by a table of ranges and time dependent behavior for each joint that this action affects.

The stylized way in which these are coded makes it simpler to provide high level descriptions of rhythmic motions. Here is the code we use to specify a "rhumba" dance [figure 4]:

    {
        { 5 5 5 } { -5 -5 -5 } { n1 n2 n3 } Nod
        { 15 0 5 } { -15 0 -5 } { c1 0 s1 } Rchest
        { 0 0 0 } { 0 0 0 } { s1 0 s1 } Rshoulder
        { -90 0 0 } { -70 0 0 } { s1 0 s1 } Relbow
        { 0 0 0 } { 0 0 0 } { s1 0 s1 } Rpelvis
        { -25 -15 5 } { 0 0 -10 } { s1 s1 s1 } Rhip
        { 50 0 0 } { 0 0 0 } { s1 0 s1 } Rknee
        { 0 0 0 } { 0 0 0 } { s1 0 s1 } Rankle
        { 0 0 10 } { 0 0 -10 } { s1 0 s1 } Waist
        { -15 0 -5 } { 15 0 5 } { c1 0 s1 } Lchest
        { 0 0 0 } { 0 0 0 } { s1 0 s1 } Lshoulder
        { -70 0 0 } { -90 0 0 } { s1 0 s1 } Lelbow
        { 0 0 0 } { 0 0 0 } { s1 0 s1 } Lpelvis
        { 0 0 -20 } { -10 -25 20 } { s1 s1 s1 } Lhip
        { 0 0 0 } { 20 0 0 } { s1 0 s1 } Lknee
        { 0 0 0 } { 0 0 0 } { s1 0 s1 } Lankle
    } 'rhumba define_action

Each line of the above code specifies an assignment of three items to a particular joint (Nod, Rchest, etc). Each of these items contains three numeric values. Each of the first two items is run immediately and packaged up as a vector, representing an extreme position of motion for the joint. The third item is evaluated at every frame where the action is performed, and is used as a linear interpolant between the two extremes.

Let us take the second line as an example. It specifies motion for the "Rchest" joint, the universal joint which pivots around the chest to displace the right shoulder. For this joint, the limits of rotation about the x axis are -15 degrees to 15 degrees. Similarly, the y axis is fixed at 0 degrees, and the z axis varies from 5 degrees to -5 degrees.

The time varying behavior for this joint is as follows. The x axis interpolates between its limits as c1 = rcos(time) and the z axis interpolates between its limits as s1 = rsin(time). The y axis stays fixed.

The action defined above is a relatively stylized dance step, so most of its motion is rhythmic, controlled by periodic functions. Only the head motion has a little randomness, which in this case gives the impression that the puppet is looking around while she dances.

Notice also that the joints at the chest are driven by c1 in their x axis and by s1 in their z axis. This gives an elliptical motion to the shoulders, which is crucial for giving the subtly "latin" feel of this dance move.

In contrast, here is the definition for standing in a casual pose [figure 5]:

    {
        { 0 15 0 } { 0 -15 0 } { 0 n1 0 } Neck
        { 20 0 0 } { } { } Nod
        { 0 0 -5 } { } { } Lchest
        { 0 0 0 } { } { } Rchest
        { -10 0 0 } { } { } Lshoulder
        { -10 0 0 } { } { } Rshoulder
        { 0 0 -10 } { } { } Lelbow
        { 0 0 -10 } { 0 0 -5 } { 0 0 n1 } Relbow
        { 0 0 5 } { } { } Waist
        { -2 0 2 } { 2 0 -2 } { n1 0 n1 } Lpelvis
        { -2 0 -2 } { 2 0 2 } { n1 0 n1 } Rpelvis
        { 0 0 -14 } { } { } Lhip
        { -10 25 12 } { } { } Rhip
        { -5 0 0 } { } { } Lknee
        { 25 0 0 } { } { } Rknee
    } 'stand define_action

Here most of the vectors are left blank. This means that these joints are completely static for this action. All of the motion of this action is driven by noise - there is no rhythmic motion at all. The noise gives the effect of subtle restlessness and weight shifting. The motion is subtle, but if it is left out, the puppet looks stiff and unrealistic.

For some actions, we put small additional expressions in the third, time dependent, vector in order to couple actions between the joints, or to modify the bias or gain of a joint (Per85). For example, here is the specification of a running action. For clarity of exposition, we have assigned the four phase signals to variables A, B, C and D, respectively. We have also assigned double speed oscillation and its complement to variables A2 and B2, respectively.

{


        c1 => A
        s1 => B
        1 A - => C
        1 B - => D
        c2 => A2
        1 A2 - => B2
        { 0 -15 0 } { 5 15 0 } { A2 C 0 } Waist
        { 0 -90 0 } { 0 90 0 } { 0 N 0 } Head
        { 0 -10 -5 } { 0 10 5 } { 0 C D } Rchest
        { 0 0 0 } { 45 0 0 } { D 0 0 } Rshoulder
        { -120 0 -10 } { 0 0 0 } { C 0 B2 } Relbow
        { -10 0 0 } { 10 0 0 } { C 0 0 } Rwrist
        { 0 -10 -5 } { 0 10 5 } { 0 A B } Lchest
        { 0 0 0 } { 45 0 0 } { B 0 0 } Lshoulder
        { -120 0 -10 } { 0 0 0 } { A 0 B2 } Lelbow
        { -10 0 0 } { 10 0 0 } { A 0 0 } Lwrist
        { 0 -10 0 } { 0 10 0 } { 0 A 0 } Rpelvis
        { -40 0 0 } { 40 0 0 } { A B .3 * - 0 0 } Rhip
        { 0 0 0 } { 130 0 0 } { B .2 bias 0 0 } Rknee
        { -45 0 0 } { 45 0 0 } { C .7 bias 0 0 } Rankle
        { 0 -10 0 } { 0 10 0 } { 0 C 0 } Lpelvis
        { -40 0 0 } { 40 0 0 } { C D .3 * - 0 0 } Lhip
        { 0 0 0 } { 130 0 0 } { D .2 bias 0 0 } Lknee
        { -45 0 0 } { 45 0 0 } { A .7 bias 0 0 } Lankle
    } 'running define_action

This action consists entirely of rhythmic motion, except for the head's "looking around" movements, which are driven by coherent noise. The double speed oscillations serve to slightly bend and unbend the waist twice per cycle, as weight is alternately borne either by one foot or by two feet. Double speed oscillations are also used to rotate slightly about the elbow's y axis. This rotation pulls the forearms a bit closer to the body twice per cycle - both when in front and when behind the body.

Note the use of the bias function in the rotations about the knee joints. These give the non-weight bearing knee a little extra kick at the time it swings forward. For the same reason, we add a small amount of rotation to each hip, 90 degrees out of phase with its primary motion.

Once an action is designed, this sort of structure provides many opportunities for customization. In the above example, we can replace each of the constants in the knee and hip "bias" expressions by variables. As we modify these variables we obtain walks that reflect different emotive states and degrees of energy.


2.4 Combining multiple weighted actions

To combine actions we assign numerical weights to every potential action in the action mix. These weights give the relative contribution of each action to the total motion of the puppet. The weights vary over time; at any given moment, only a few actions have a non-zero weight. For each joint, the contributions from all the actions to that joint's position are combined via a convex sum (a weighted sum in which the weights add to unity). We do this as follows.

Assume there are k actions with associated weights w1, w2, ..., wk, and that there are n universal joints in the entire body, where each joint involves three rotational degrees of freedom [x, y, z]. Any one of the k actions will use only some subset of these n joints. Let Ci be a vector whose values are 1 for each joint j used by action i, and 0 otherwise. Let Ai = [V1, V2, ..., Vn] be a vector representing the value [x, y, z] that is generated by action i for every universal joint j. We obtain the position of universal joint j via:

    Σ_i (A_ij C_ij w_i)  /  Σ_i (C_ij w_i)
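As a concrete illustration of that convex sum, here is a minimal C sketch for a single frame. The array names (weight, uses_joint, action_value, joint_angle) and sizes are assumptions for illustration rather than the names used in the actual system; the point is only that each joint's [x, y, z] value is the weighted average of the contributions from whichever actions drive that joint.

    #define NUM_ACTIONS  8
    #define NUM_JOINTS  19

    float weight[NUM_ACTIONS];                       /* wi: current weight of each action   */
    int   uses_joint[NUM_ACTIONS][NUM_JOINTS];       /* Cij: 1 if action i drives joint j   */
    float action_value[NUM_ACTIONS][NUM_JOINTS][3];  /* Aij: [x,y,z] angles action i wants  */
    float joint_angle[NUM_JOINTS][3];                /* blended result driving the figure   */

    void blend_actions(void)
    {
        int i, j, k;

        for (j = 0; j < NUM_JOINTS; j++) {
            float sum[3] = { 0.0f, 0.0f, 0.0f };
            float total = 0.0f;

            for (i = 0; i < NUM_ACTIONS; i++) {
                float w = uses_joint[i][j] * weight[i];   /* Cij * wi */
                for (k = 0; k < 3; k++)
                    sum[k] += w * action_value[i][j][k];  /* Aij * Cij * wi */
                total += w;
            }

            if (total > 0.0f)              /* normalize so the weights sum to one */
                for (k = 0; k < 3; k++)
                    joint_angle[j][k] = sum[k] / total;
        }
    }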

In the next sections we describe the way in which we coordinate the variation over time of the weights of different actions.

2.5 Synchronizing actions

The phases of all the actions are synchronized. For example, when a dancer puppet is walking, and we ask her to do a classical fondue turn to the right, she won't begin the turn until she is about to put her weight down on her right foot [figure 6]. This is the only sensible thing to do, since one must begin a fondue turn by stepping into it. This requires that both the walk and the fondue turn are built from expressions that run off the same master clock.

We handle transitions between two actions that we wish to have different tempos via a morphing approach: At the start of the transition, we use the tempo of the first action; at the end, we use the tempo of the second action. During the time of the transition, we continuously vary the speed of the master clock from the first to the second tempo. In this way, the phases of the two actions are always aligned during transitions.
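A minimal sketch of that clock-speed morph follows, assuming a transition parameter u that ramps from 0 to 1 over the transition; the names master_phase, advance_clock, tempo_a, and tempo_b are illustrative, not taken from the actual system. The master phase is advanced by an interpolated tempo, so both actions read the same phase and stay aligned throughout the transition.

    static double master_phase = 0.0;   /* shared phase that all actions read */

    /* Advance the master clock by dt seconds while morphing from tempo_a
       (cycles/sec of the outgoing action) to tempo_b (incoming action).
       u is the transition parameter, 0.0 at the start and 1.0 at the end. */
    void advance_clock(double dt, double tempo_a, double tempo_b, double u)
    {
        double tempo = tempo_a + u * (tempo_b - tempo_a);   /* interpolated tempo  */
        master_phase += tempo * dt;                         /* phases stay aligned */
    }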

We may also define new actions as extended transitions between two or more other actions. For example, we may morph between the "running" example above and a "standing at attention" pose, in which all joint angles are fixed at zero. When the interpolant is in the range 0.3 to 0.5, this interpolated action becomes a visually realistic walk (to the author's surprise). But a human walking tempo is also 0.3 to 0.5 that of a human running tempo. For this reason, we use the morph transition parameter to modulate the tempo. In this way a puppet can be made to continually and realistically transition from standing still, through walking, to running, or to anywhere in between.

In a scene with multiple puppets (see section 4 below) each puppet maintains its own individual tempo.


2.6 Dependencies between weights

Let's say that the puppet is walking, and we decide to have her do a pirouette. It would make no sense for her to continue walking while she is pirouetting. The mechanism we employ is to build dependencies between weights.

The pirouette weight acts as an inhibitor - as its value rises from zero to one, it drives down the effect of the weight that controls such steady state actions as walking. Then as the pirouette weight drops down to zero again, the walking weight is allowed to take effect again.

The numerical value of the walking weight is not itself modified. But anything that depends upon it is seen through the filter of the pirouette weight. Conceptually, the pirouette weight "blocks" the walking weight, much as the alpha channel of a foreground image blocks a background image during a compositing operation [figure 7].

Note that an action need not involve all joints. Examples include such actions as waving with the left arm, shrugging the shoulders, or scratching one's head. If an action which involves only a subset of the joints blocks another action, then it will only block those joints included in this subset. So we may use this structure to "layer" partial actions. For example, a puppet that is running can be told to wave his hand, without breaking his stride.

A small section in the program creates a layering structure for such dependencies. This control is divided into two levels - states and weights. The state level consists entirely of discrete boolean values. For example, either the puppet "wants" to walk, or she does not. The weight level consists of continuous values between zero and one. These are derived by integrating the effect over time of the discrete states [figure 8]. These continuous weights are what go into the convex sum above, to drive the puppet. Some of these weights are dependent on others.

For example, if a user directs the puppet to walk, then the discrete walk state turns on, and the continuous weight controlling the walking action gradually rises from zero to one. If the user subsequently directs the puppet to perform a pirouette, the weight of the pirouette action gradually rises to one. The discrete walk state continues to stay on, but the continuous weight of the walk is driven down to zero by its dependency on the weight of the pirouette action. When the pirouette state is disabled, the weight of the pirouette action gradually falls to zero, and the walking action at the joints gradually reappears.

The dependencies between weights are implemented by a sequence of conditional expressions. This approach is similar in spirit to Brooks' subsumption architecture (Bro86) for walking robots, in which more immediate goals (eg: "don't fall over") block out longer term goals (eg: "walk to the edge of the table").
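The blocking mechanism can be sketched in a few lines of C. The names walk_weight, pirouette_weight, and effective_walk_weight are assumptions for this illustration; the actual system expresses such dependencies as conditional expressions in its own language. The key point is that the walking weight is never itself modified - only the value seen by the convex sum is attenuated.

    /* Raw weights, each integrated over time from its discrete state (0..1). */
    float walk_weight      = 1.0f;
    float pirouette_weight = 0.0f;

    /* The value actually fed into the convex sum: the pirouette weight acts
       as an inhibitor on walking, the way a foreground alpha channel blocks
       a background image during compositing. */
    float effective_walk_weight(void)
    {
        return walk_weight * (1.0f - pirouette_weight);
    }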

2.7 Transition times

Each user specified action starts the rise of some scalar weight from zero to one, via an S-shaped ramp. We have found that the only tuning needed for controlling the shape of any given transition is a single scalar value that specifies the duration of the transition from zero up to one or back down again. This is specified in seconds, not frames - behavior should not change with frame rate! In order to effect this, we use the actual system clock, not a frame counter, to time transitions. We also use the system clock to drive the signal sources described above in section 2.3.
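As an illustration, a clock-driven S-shaped ramp might be written as follows, assuming a POSIX gettimeofday() for wall-clock time; the names now_seconds and transition_weight, and the reuse of the 3t^2 - 2t^3 blending curve, are choices made for this sketch, not a description of the production code. Because the weight depends only on elapsed seconds, behavior is independent of frame rate.

    #include <sys/time.h>

    /* Wall-clock time in seconds. */
    static double now_seconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec + tv.tv_usec / 1000000.0;
    }

    /* S-shaped weight ramp from 0 to 1, starting at start_time and lasting
       duration seconds (the duration is the single tuning value). */
    float transition_weight(double start_time, double duration)
    {
        double t = (now_seconds() - start_time) / duration;
        if (t <= 0.0) return 0.0f;
        if (t >= 1.0) return 1.0f;
        return (float)(t * t * (3.0 - 2.0 * t));   /* smooth ease in and out */
    }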

Some transitions look better when they are fast, and others look better when slow. It is surprising how much expressiveness one can achieve by tuning these transition times. For example, when the dancer puppet performs the action "put hands on hips indignantly and look at the camera", she puts her hands on her hips first, and only then, a beat later, does she turn to look at the viewer [figure 9]. Then when she goes from this state into the slow dance, she continues to look at the viewer for a second or so, even while she's already dancing and her body is turning away.

We conjecture that this behavior looks correct because fixing one's gaze on another person is a more explicit emotional signifier than is changing one's bodily activity, and therefore should happen more slowly. In this case, even when she is beginning to dance we still want the dancer to convey the reminder that she was annoyed at us just a moment ago.

We believe that the use of different transition times for various parts of a gesture is a very powerful means for conveying subtle impressions of intention. Although our approach to this is currently ad hoc, we hope that experimentation of this kind can lead to a set of useful rules for understanding of human body language. Related work in the role of emotion and communication in gesture is found in the chapter by Calvin and Morevic in (BBZ91).

2.8 Non-hierarchical motions

The puppet will always conform to certain simple constraints no matter what the forward kinematics specify. Here are the key constraints:

- the foot on the ground propels the puppet
- the supporting foot must be at floor level
- obstacles are avoided by turning away from them
- the head won't turn all the way around backwards

Each of these constraints is imposed by a few lines within the code that models the body. For example, to propel the puppet from the foot we detect the lowest foot. We measure how far this foot has moved since the previous frame. We then add this to a cumulative displacement, and apply a positional offset to the rendered body equal to the opposite of this displacement. The effect is that the lower foot always stays in place and propels the body.

Also, whenever we compute each new total foot position, we average in half of the previous total just for the y (vertical) component. The effect of this is to always keep the supporting foot level with the ground.
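The following C fragment sketches that bookkeeping under assumed names (support_foot, prev_support_foot, body_offset, foot_y); it is an illustration of the two rules above, not the actual body-modeling code.

    typedef struct { float x, y, z; } vec3;

    static vec3  body_offset;          /* cumulative offset applied to the rendered body */
    static vec3  prev_support_foot;    /* support-foot position at the previous frame    */
    static float foot_y = 0.0f;        /* filtered height of the supporting foot         */

    /* Called once per frame with the forward-kinematic position of the lowest foot. */
    void apply_foot_constraints(vec3 support_foot)
    {
        /* The foot on the ground propels the puppet: accumulate how far that
           foot has moved since the previous frame, and offset the rendered
           body by the opposite amount, so the foot stays planted. */
        body_offset.x -= support_foot.x - prev_support_foot.x;
        body_offset.z -= support_foot.z - prev_support_foot.z;

        /* Keep the supporting foot at floor level: average half of the
           previous total into each new vertical value. */
        foot_y = 0.5f * (foot_y + support_foot.y);
        body_offset.y = -foot_y;

        prev_support_foot = support_foot;
    }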


Object avoidance is done as follows. Each wall emits a repulsive force vector, which increases near the wall. We sum all of these vectors. If the puppet walks into such a vector field, and is angled off to the left (or right) of facing the wall, then we give her a tendency to turn more to the left (or right). When this is tuned properly, she just avoids walls, and so we don't have to worry about collisions. In the more general case, we would put a similar repulsive vector field around any object we want her to avoid, as well as an attractor field at each open doorway. This would act as a variety of remote compliance to help her find her way in.

We can make the puppet "look at the camera", just by turning her Neck joint. Actually she can look at any aim point in the scene. But this is not desirable when the puppet's body is facing directly opposite from this direction. To avoid complete backward head turns, we add a constraint into the neck turning joint. A dot product of the body's forward direction and the desired aim direction is calculated. As the value of this dot product drops from 1.0 down to -1.0, we continually lessen the factor by which we influence the head to turn in the aim direction. We found through trial and error that we get the most natural results when this factor reaches zero at a dot product value of -0.6.
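A minimal sketch of that falloff in C might look as follows; the linear mapping from the dot product to the influence factor is an assumption made for illustration (the text above fixes only the endpoints: full influence when the body faces the aim point, zero influence at a dot product of -0.6).

    /* Given the body's forward direction and the desired aim direction
       (both unit vectors), return how strongly the head is turned toward
       the aim point: 1.0 when facing it, falling to 0.0 at a dot product
       of -0.6, and 0.0 for anything more nearly backwards. */
    float gaze_influence(const float forward[3], const float aim[3])
    {
        float d = forward[0]*aim[0] + forward[1]*aim[1] + forward[2]*aim[2];
        float f = (d + 0.6f) / 1.6f;     /* maps d = -0.6 .. 1.0 onto 0 .. 1 */
        if (f < 0.0f) f = 0.0f;
        if (f > 1.0f) f = 1.0f;
        return f;
    }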

The visual effect is that as the puppet turns away from us, she holds our gaze for a bit, and then gradually ignores us as she continues to turn further away. Then as she continues turning, she eventually locks her gaze with us again on the other side, by turning her head over her other shoulder [figure 10].

The above constraints constitute all of the "physics" built into the model, other than the natural constraints imposed by the forward kinematics itself (ie: that the limbs never fly apart).
apart).

2.9 Shifting body parts

In order to achieve real-time performance, it is important to limit the elaboration of puppet geometry (see next section). Yet we do not wish to sacrifice the appearance of human form. To attain a natural appearance without using large numbers of body parts, we shift body parts around as joints flex, in order to keep the visual appearance of human form for all body angles.

For example, the thigh consists of three intersecting ellipsoids, one for the main thigh mass, a second for the muscle in back of the thigh, and a third for the muscle high up and inside the thigh. When the puppet bends the thigh very far backward about the hip (such as in a fondue turn [figure 6]) the two front ellipsoids have a tendency to separate from the puppet's pelvis. We compensate as follows. As the hip joint bends backwards, we slide these ellipsoids down the thigh (away from the hip) in linear proportion to the degree of bending [figure 11]. We provide similar sliding mechanisms at all body parts where such compensation is needed.
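As a sketch of that compensation (with an illustrative name and constant - thigh_slide and slide_per_degree are not from the paper), the offset applied to the front thigh ellipsoids might be computed like this:

    /* Slide the front thigh ellipsoids down the thigh axis in linear
       proportion to how far the hip is bent backwards (in degrees). */
    float thigh_slide(float hip_bend_degrees)
    {
        const float slide_per_degree = 0.002f;   /* illustrative constant */
        if (hip_bend_degrees < 0.0f)             /* only backward bending */
            return -slide_per_degree * hip_bend_degrees;
        return 0.0f;
    }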


2.10 User interaction

User interaction is quite simple. The user only needs to control the discrete states of the puppet. Currently this is handled by a panel of buttons [figure 12]. All continuous behavior is automatically derived by the system through integration over time, as previously described.

3 Implementation

In our current instantiation, all parts are rendered as polygonal mesh approximations of 50 ellipsoids. The simulation has been run on an SGI Indigo Elan (at 7.5 frames/sec) or an Indigo 2 (at 15 frames/sec), and calls the SGI GL library for rendering.

It also runs efficiently on any UNIX or 486 based LINUX machine, but for this instantiation the figure can be rendered only in silhouette. In this case rendering is done by computing the silhouette ellipse for each ellipsoid and then doing a software scan conversion [figure 13]. Since this is a silhouette rendering, front-to-back ordering does not need to be taken into account. Using this method, the dancer runs at 6 frames per second on a 486/DX66 processor.

The button panel is implemented via a small stand-alone tcl/tk program. Communication with this program is done through a two-way ascii pipe.

4 Ongoing and Future work

We are looking at the combination of these procedural techniques with motion capture. The research question here is how to analyze motion capture of walk cycles or gestures in order to convert them into a form compatible with the procedural synthesis techniques. Our approach is to align the natural cycles of walks and other rhythmic motions so that they can be blended together.

We also are beginning to study group interactions between these simulated puppets. This research is focused on situations in which people communicate richly through body language, such as parties, bar scenes, and meetings.

Because all control of a puppet's state is discrete, knowledge of actions and transitions between two or more interacting puppets can be shared by exchanging state tokens. If a character knows the state of another character, then for non-contact pairwise interactions it suffices to know only the position and facing direction of the other character. This method of communication is fast and compact, and scales up gracefully in simulations with large numbers of puppets.

Using this approach, we make the characters "press each others' buttons." For example, when two characters are engaged in conversation, they tend not to simultaneously talk at once (although they occasionally do), and they also tend to avoid long collective silences. When a third character walks up, the behavior of the other two shifts, depending upon the status of the newcomer and how each of the other two feels about him/her.


A related notion that we will explore is peripheral attention. For example, suppose a man and a woman are engaged in conversation, and another man appears in the line of vision of the first. How would one show the effect of the woman's attention involuntarily drifting toward the other man, even though her intention is to maintain the conversation? Similarly, how would one show the shift in each man's attitude when the woman's behavior is noticed? The first man might begin to talk more frequently; the second might drift toward the conversation.

In recent work, we have developed methods of running these group simulations on multi-processors and across multiple networked computers. This is done over a network of UNIX workstations as follows. Each actor is a separate program which communicates through its standard input and standard output. A supervisory rendering process opens up a two way read/write pipe to each actor. Each actor may be invoked via a remote shell, so that it need not be on the same workstation as the renderer. To send messages, actors print commands to their standard output which are parsed and executed at each frame by the supervisor program. If one actor wants to send a message to another, then it prints a wrapper command. This wrapper command instructs the supervisor program to print a message command string to the standard input of the recipient actor. The recipient then parses and executes this message.

This approach makes it quite easy to allow different kinds of actors to each respond in the most appropriate way to a given message. The Camera is an actor which possesses behavior like any other. For example, in our current system Actor1 can send the messages:

    { my_location "look_here" } "Camera" send_message
    { my_location "look_here" } "Actor2" send_message

where my_location is the current x, y, z position of Actor1. Note that different recipients are free to interpret any message as they see fit. For example, the Camera actor generally averages all "look_here" requests, so that it keeps all attention seeking actors in its range of vision. In contrast, if several "look_here" requests are sent to a human actor, it will generally honor only one of them. As a result, an actor will adjust his/her gaze to track only the message sender of greatest interest. This provides a simple object-oriented message capability, with overloading of methods based on the type of the recipient.

In addition, we are exploring immersive interactions using projector screens and position sensors, so that real people can interact with these characters, which are digitally composited into miniature models of interior spaces. Within this experimental laboratory we explore questions of how to convey peripheral awareness, approach/avoidance, "paying attention", "listening", etc. This is in the spirit of the recent Alive project of Maes et al at MIT (Mae93). We are particularly interested in immersive scenarios involving two or more projection screens, in order to see to what extent simulated body language will help to convey the impression of various competing social or attention getting activities.

We are also studying the semantics of the discrete state transitions that visually represent shifts in attitude and attention. We are particularly interested in determining to what extent we can encode merely the rhythm of interpersonal interaction, in order to convey the impression of social complexity. For example, could one structure entire narratives in this manner?

5 Conclusions

Using ideas from procedural texture synthesis, we are able to create remarkably lifelike, responsively animated characters in real time. By conveying just the "texture" of motion, we are able to avoid computation intensive dynamics and constraint solvers. We believe these techniques have the potential to have a large impact on computer Role Playing Games, simulated conferences, "clip animation", graphical front ends for MUDs, and synthetic performances.

6 Acknowledgements

I would like to thank Athomas Goldberg for production support on this paper, and in particular for the illustrations, as well as on the work itself. I would also like to thank Cynthia Allen, David Bacon, Troy Downing, Mehmet Karaul, Tom Laskawy, Kuochen Lin, Jon Meyer, and Jack Schwartz for all their help and encouragement. Ben Bederson, Bruce Naylor, and Silicon Graphics Inc. have provided hardware assistance for this research. Thanks as well to Marcelo Zuffo, Roseli Lopez, and the folks down at the University of Sao Paulo for all their support. And muito obrigado to Emi, who inspires the dance.

7 References

Norman I. Badler, Brian A. Barsky, and David Zeltzer. Making Them Move: Mechanics, Control, and Animation of Articulated Figures. Morgan Kaufmann Publishers, San Mateo, CA, 1991.

N. I. Badler, C. Phillips, and B. L. Webber. Simulating Humans: Computer Graphics, Animation, and Control. Oxford University Press, 1993.

R. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1):14-23, 1986.

J. E. Chadwick, D. R. Haumann, and R. E. Parent. Layered construction for deformable animated characters. Computer Graphics (SIGGRAPH '89 Proceedings), 23(3):243-252, 1989.

D. Ebert et al. Texturing and Modeling: A Procedural Approach. Academic Press, London, 1994.

D. Gelernter. Mirror Worlds. Oxford University Press, 1992.

M. Girard and A. A. Maciejewski. Computational modeling for the computer animation of legged figures. Computer Graphics (SIGGRAPH '85 Proceedings), 19(3):263-270, 1985.

P. Maes. The MIT Alive project. Computer Graphics (SIGGRAPH '93 Proceedings), 1993.

Claudia L. Morawetz and Thomas W. Calvert. Goal-directed human animation of multiple movements. Proc. Graphics Interface, pages 60-67, 1990.

K. Perlin. An image synthesizer. Computer Graphics (SIGGRAPH '85 Proceedings), 19(3):287-293, 1985.

K. Perlin. Danse interactif. Computer Graphics (SIGGRAPH '94 Proceedings), 28(3), 1994.

M. Raibert et al. Legged Robots That Balance. MIT Press, 1986.

SGI. SGI Programmers Manual. Silicon Graphics Incorporated, Mountain View, 1994.

Karl Sims. Evolving virtual creatures. Computer Graphics (SIGGRAPH '94 Proceedings), 28(3):15-22, 1994.

Neal Stephenson. Snow Crash. Bantam Doubleday, New York, 1992.


Chapter 3: Volumetric Procedural Modeling and Animation

David S. Ebert

3.1 Introduction

This chapter explores the design and animation of volumetric procedural models for creating realistic images and animations of gases and fluids. Volumetric procedural models use three-dimensional volume density functions (vdf(x, y, z)) that define the density of a continuous three-dimensional space. Volume density functions (vdf's) are the natural extension of solid texturing to describing the actual geometry of objects.

Many advanced geometric modeling techniques are inherently procedural. L-systems, fractals, particle systems, and implicit surfaces are, to some extent, procedural and can be combined nicely into the framework of volumetric procedural models. This chapter will mainly concentrate on the development and animation of several volume density functions. The inclusion of fractal techniques into vdf's is presented in most of the example volume density functions because they rely on a statistical simulation of turbulence that is fractal in nature. This chapter will also briefly discuss the inclusion of implicit function techniques into vdf's.

3.1.1 Background

Volume density functions are used extensively in computer graphics for modeling and animating gases, fire, fur, liquids, and other "soft" objects. Hypertextures [18], metaballs [23] (also called implicit surfaces and soft objects), and Inakage's flames [10] are other examples of the use of volume density functions.

There have been several previous approaches to modeling gases in computer graphics. Kajiya [11] has used a simple physical approximation for the formation and animation of clouds. Gardner [8] has used solid textured hollow ellipsoids in modeling clouds and more recently produced animations of smoke rising from a forest fire [9]. Other approaches include the use of height fields [14], constant density media [13, 15], and fractals [22]. The author has developed several approaches for modeling and controlling the animation of gases [2, 7, 5, 3, 1]. Recently, Stam has used "fuzzy blobbies" as a three-dimensional model for animating gases [20] and has extended their use to modeling fire [21].

This chapter describes my design approach for modeling, rendering, and animating gases, and shows the development of several example procedures. In the discussion that follows, an overview of gas rendering issues is given first. Next, a brief description of my approach to modeling and animating gases, called solid spaces, is presented, followed by an in-depth description of the modeling of the gases and fluids. Finally, animation techniques for solid textures, hypertextures, and vdf's are thoroughly discussed, including detailed descriptions of several example procedures.


3.1.2 Overview of a Rendering System for Volumetric Procedural Models

For realistic images and animations of gases, volume rendering must be performed. Any procedure-based volume rendering system, such as Perlin's [18], or my system [5, 7], can be used for rendering volumetric procedural models. My system is a hybrid rendering system that uses a fast scanline a-buffer rendering algorithm for the surface-defined objects in the scene, while volume modeled objects are volume rendered. The algorithm first creates the a-buffer for a scanline, containing for each pixel a list of all the fragments that partially or fully cover the pixel. Then, for each volume that covers this scanline, the volume rendering is performed, creating a-buffer fragments for the separate sections of the volumes. Volume rendering ceases once full coverage of the pixel by volume or surface-defined elements is achieved. Finally, these volume a-buffer fragments are sorted into the a-buffer fragment list based on their average Z-depth values, and the a-buffer fragment list is rendered to produce the final color of the pixel. This rendering system features a physically-based low-albedo illumination and atmospheric attenuation model for the gases. Volumetric shadows are also efficiently combined into the system through the use of three-dimensional shadow tables [4].

3.2 Solid Spaces

My approach to modeling and animating gases started with work in solid texturing. Solid texturing can be viewed as creating a three-dimensional color space that surrounds the object. When the solid texture is applied to the object, the defining space is simply being carved away. My first procedural gas image used solid textured transparency for simulating a butterfly emerging from fog. My rendering system was designed to allow most of the rendering characteristics of an object to be textured or procedurally textured, similar to the idea behind Cook's "shade trees" that evolved into the Renderman shading language. Most of my initial solid textured transparency functions were based on Perlin's simulation of turbulent flow and produced effects similar to Gardner's approach [8]. The problems with surface-based approximations to volumetric gases led to the development of volumetric procedural functions for modeling gases and the idea of solid spaces. Solid spaces are three-dimensional spaces associated with an object that allow control of the attributes of the object. This solid space framework encompasses traditional solid texturing, hypertextures, and other techniques within a unified framework. Solid spaces have many uses in describing object attributes. For a more detailed description of the uses of solid spaces, please see [2, 4, 5, 7].

3.3 Gas Geometry

As mentioned in the introduction, the geometry of the gases is modeled using turbulent flow based volume density functions. I have used a "visual simulation" of turbulent flow similar to Ken Perlin's approach [16]. The volume density functions take the location of the point in world space, find its corresponding location in the turbulence space (a three-dimensional space), and apply the turbulence function. The value returned by the turbulence function is used as the basis for the gas density and is then "shaped" to simulate the type of gas desired by using simple mathematical functions. (My implementation of noise and turbulence can be found in [4].) In the discussion that follows, the use of basic mathematical functions for shaping the gas is described, followed by the development of several example procedures for modeling the geometry of the gases.


3.3.1 Basic Gas Shaping

Several basic mathematical functions are used to shape the geometry of the gas. The first of these is the power function. Let's look at a simple procedure for modeling a gas and see the effects of the power function and other functions on the resulting shape of the gas.

basic_gas(pnt, density, parms)
xyz_td pnt;
float  *density, *parms;
{
  float turb;
  int   i;
  static float pow_table[POW_TABLE_SIZE];
  static int   calcd = 1;

  if(calcd)
    { calcd = 0;
      for(i = POW_TABLE_SIZE-1; i >= 0; i--)
        pow_table[i] = (float)pow(((double)(i))/(POW_TABLE_SIZE-1)*
                                  parms[1]*2.0, (double)parms[2]);
    }

  turb = fast_turbulence(pnt);
  *density = pow_table[(int)(turb*(.5*(POW_TABLE_SIZE-1)))];
}

This procedure takes as input the location of the point being rendered in the solid space, pnt, and a parameter array of floating point numbers, parms. The returned value is the density of the gas. Parms[1] is the maximum density value for the gas with a range of 0.0 to 1.0, and parms[2] is the exponent for the power function. The fast_turbulence function called in the above procedure is simply an optimized version of the turbulence function described in Chapter 2 of these course notes. Figure 1 shows the effects of changing the power exponent, with parms[1] = 0.57. As you can see, the greater the exponent, the greater the contrast and definition to the gas plume shape. With the exponent at 1 there is a continuous variation in the density of the gas; whereas, with the exponent at 2, it appears to be separate individual plumes of gas. So depending on the type of gas you are trying to model, you can choose the appropriate exponent value. This procedure also shows how precalculated tables can increase the efficiency of the procedures. The pow_table[] array is calculated once per image and assumes that the maximum density value, parms[1], is constant for each given image. A table size of 10,000 should be sufficient for producing accurate images. This table is used to limit the number of pow function calls. If the following straightforward implementation was used, a power function call would be needed per volume density function evaluation:

*density = (float) pow((double)turb*parms[1],(double)parms[2]);

Assuming an image size of 640x480, with 100 volume samples per pixel, the use of the precomputed table saves 30,710,000 pow function calls.

Another useful mathematical function is the sine function. Perlin [16] uses the sine function in solid texturing to create marble, which will be described in a later section. This function can also be used in shaping gases. This can be accomplished by making the following change to the basic_gas function:

turb = (1.0 + sin(fast_turbulence(pnt)*M_PI*5))*.5;

The sine function has a similar effect as in its use for marble: the above change creates "veins" in the shape of the gas. As you can see from these simple examples, it is very easy to shape the gas using simple mathematical functions. Next, we'll see how to produce more complex shapes in the gas.


3.3.2 Steam Rising From a Teacup

The goal is to create a realistic image of steam rising from a teacup. The first step is to place a "slab" [12] of volume gas over the teacup. (Any raytracable solid can be used for defining the extent of the volume.) Since steam is not a very thick gas, a maximum density value of 0.57 will be used with an exponent of 6.0 for the power function. The resulting image can be seen in Figure 6(a). This was produced from the above basic_gas procedure.

The image created, however, does not look like steam rising from a teacup. First of all, the steam is not confined to be only above and over the cup. Secondly, the steam's density does not decrease as it rises. These problems can be easily corrected. First, ramp off the density spherically from the center of the top of the coffee. This will make the steam be only within the radius of the cup and will make the steam rise higher over the center of the cup. The following addition to the basic_gas procedure will accomplish this:

steam_slab1(pnt, pnt_world, density, parms, vol)
xyz_td pnt, pnt_world;
float  *density, *parms;
vol_td vol;
{
  float turb, dist_sq, density_max;
  int   i, indx;
  xyz_td diff;
  static float pow_table[POW_TABLE_SIZE], ramp[RAMP_SIZE];
  static int   calcd = 1;

  if(calcd)
    { calcd = 0;
      for(i = POW_TABLE_SIZE-1; i >= 0; i--)
        pow_table[i] = (float)pow(((double)(i))/(POW_TABLE_SIZE-1)*
                                  parms[1]*2.0, (double)parms[2]);
      make_ramp_table(ramp);
    }

  turb = fast_turbulence(pnt);
  *density = pow_table[(int)(turb*0.5*(POW_TABLE_SIZE-1))];

  /* determine distance squared from center of the slab */
  XYZ_SUB(diff, vol.shape.center, pnt_world);
  dist_sq = DOT_XYZ(diff, diff);
  density_max = dist_sq*vol.shape.inv_rad_sq.y;
  indx = (int)((pnt.x+pnt.y+pnt.z)*100) & (OFFSET_SIZE-1);
  density_max += parms[3]*offset[indx];

  if(density_max >= .25)   /* ramp off if > 25% from center */
    { i = (density_max - .25)*4/3*RAMP_SIZE;   /* get table index 0:RAMP_SIZE-1 */
      i = MIN(i, RAMP_SIZE-1);
      density_max = ramp[i];
      *density *= density_max;
    }
}

make_ramp_table(ramp)
float *ramp;
{
  int i;
  float dist;

  for(i = 0; i < RAMP_SIZE; i++)
    { dist = i/(RAMP_SIZE - 1.0);
      ramp[i] = (cos(dist*M_PI) + 1.0)/2.0;
    }
}

To achieve the more realistic image, several additional parameters are used in the new procedure: pnt_world and vol. pnt_world is the location of the point in world space. vol is a structure containing information on the volume being rendered. The following table will help clarify the use of the various variables:


Variable               Description
pnt                    location of the point in the solid texture space
pnt_world              location of the point in world space
density                the value returned from the function
parms[1]               maximum density of the gas
parms[2]               exponent for the power function for gas shaping
parms[3]               amount of randomness to use in the fall off
vol.shape.center       center of the volume
vol.shape.inv_rad_sq   1/radius squared of the slab
dist_sq                point's distance squared from the center of the volume
density_max            density scaling factor based on distance squared from the center
indx                   an index into a random number table
offset                 a precomputed table of random numbers used to add noise to the ramp off of the density
ramp                   a table used for cosine falloff of the density values

The procedure now ramps off the density spherically using a cosine falloff function. If the distance from the center squared is greater than 25%, the cosine falloff is applied. The resulting image can be seen in Figure 6(b).

Second, we need to ramp off the density as it rises to get a more natural look. The following addition will accomplish this:

  dist = pnt_world.y - vol.shape.center.y;
  if(dist > 0.0)
  { dist = (dist + offset[indx]*.1)*vol.shape.inv_rad.y;
    if(dist > .05)
    { offset2 = (dist -.05)*1.111111;
      offset2 = 1 - (exp(offset2)-1.0)/1.718282;
      offset2 *= parms[1];
      *density *= offset2;
    }
  }

This procedure uses the e^x function to decrease the density as the gas rises. If the vertical distance above the center is greater than 5% of the total distance, the density is exponentially ramped off to 0. The results of this addition to the above procedure can be seen in Figure 7. As you can see in this image, the resulting steam is very convincing. In a later section, animation effects using this basic steam model will be presented.

3.3.3 A Single Column of Smoke

Another example procedure that I will describe creates a single column of rising smoke. For this smoke column, the basis is a vertical cylinder. By deforming this shape, we can create a realistic smoke column. The obvious visual characteristics of a smoke column are that it disperses as it rises and that it is initially smooth and becomes more turbulent as it rises. Turbulence can be added to the cylinder’s center to make a more natural looking smoke column. To simulate turbulent effects and air currents, the amount of turbulence is increased as a function of the height of the smoke column. Also, you need to decrease the density as a function of height to help simulate the dispersion. These ideas, which are similar to the steam rising procedure, are only the start to a realistic column of smoke. The shape also needs to be bent and swirled as the smoke rises. This can be achieved by using a vertical helix to displace each point before calculating its distance from the center of the cylinder. A more detailed description of this procedure can be found in [4].
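
The notes do not include the column-of-smoke procedure itself; the following is a minimal sketch of the ideas just described, written only as an illustration. fast_turbulence() is the turbulence primitive used throughout this chapter, while the helix frequency, falloff shape, and scaling constants below are assumptions rather than the values used for the published images.

/* Sketch only: a turbulent column of smoke built on a vertical cylinder.
 * All constants here are illustrative; the procedure in [4] differs in detail. */
#include <math.h>

typedef struct { float x, y, z; } xyz_td;
extern float fast_turbulence(xyz_td pnt);   /* assumed Perlin-style turbulence */

float smoke_column_density(xyz_td pnt, float radius, float height)
{
    float h = pnt.y / height;                 /* 0 at the base, 1 at the top */
    float cx, cz, turb, dx, dz, r, density;

    if (h < 0.0f || h > 1.0f) return 0.0f;

    /* Displace the cylinder's center along a vertical helix so the
     * column bends and swirls as it rises. */
    cx = 0.3f * radius * (float)cos(h * 4.0 * M_PI);
    cz = 0.3f * radius * (float)sin(h * 4.0 * M_PI);

    /* Turbulence grows with height to suggest air currents and dispersion. */
    turb = fast_turbulence(pnt) * (0.2f + 0.8f * h);

    dx = pnt.x - cx;
    dz = pnt.z - cz;
    r  = (float)sqrt(dx*dx + dz*dz) / radius + turb * 0.5f;

    density = 1.0f - r;                       /* fall off away from the axis */
    if (density < 0.0f) density = 0.0f;

    return density * (1.0f - h);              /* thin out as the smoke rises */
}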


3.3.4 Volumetric Procedural Implicit Functions

As mentioned in the introductory chapter, implicit surfaces are procedural in nature, since they use three-space implicit functions to define a three-dimensional density field. This density field is evaluated to determine a surface where the density is a constant. This produces an iso-surface that is then rendered and animated. The advanced blending techniques and basic implicit procedural models can be combined into a volumetric procedural model to create impressive images. One example is shown in Figure 11. To create this cloud image, eight elliptical implicit functions were positioned in space. Their definition was modulated with turbulence functions and the result was volume rendered using a low-albedo illumination model, atmospheric attenuation, and physically-based shadowing. The resulting cloud image is quite convincing.
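
As a rough illustration of this idea (not the procedure used for Figure 11), a volumetric density can be built by summing a few implicit blob primitives and modulating the result with turbulence. The blob falloff, the number of blobs, and the modulation weights in this sketch are assumptions.

/* Sketch: a volumetric density built from implicit "blob" primitives whose
 * field is modulated by turbulence.  Blob centers, radii, and the smooth
 * falloff used here are illustrative. */
typedef struct { float x, y, z; } xyz_td;
typedef struct { xyz_td center; float radius; } blob_td;

extern float turbulence(xyz_td pnt, double size);   /* assumed */

static float blob_field(xyz_td p, blob_td b)
{
    float dx = p.x - b.center.x, dy = p.y - b.center.y, dz = p.z - b.center.z;
    float r2 = (dx*dx + dy*dy + dz*dz) / (b.radius * b.radius);
    if (r2 >= 1.0f) return 0.0f;
    return (1.0f - r2) * (1.0f - r2);        /* smooth falloff to zero */
}

float cloud_density(xyz_td p, blob_td *blobs, int nblobs)
{
    float density = 0.0f;
    int i;

    for (i = 0; i < nblobs; i++)
        density += blob_field(p, blobs[i]);

    /* Modulate the smooth implicit field with turbulence to get a wispy,
     * cloud-like boundary. */
    density *= 0.5f + 0.5f * turbulence(p, .0125);
    return density;
}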

3.4 Animating Solid Spaces

Now that you have seen how to model the geometry of the gases, a discussion of animating these gas procedures and other solid space procedures will be presented. There are several ways that solid spaces can be animated. These notes will consider two approaches:

Changing the solid space over time.

Moving the point being rendered through the solid space.

The first approach has time as a parameter which changes the definition of the space over time. This is a very natural and obvious way to animate procedural techniques.

The second approach is to not change the solid space, but actually move the point in the volume or object over time through the space. The movement of the gas (solid texture, hypertexture) is created by moving the fixed three-dimensional screen space point along a path over time through the turbulence space before evaluating the turbulence function. Each three-dimensional screen space point is inversely mapped back to world space. Then from world space, it is mapped into the gas and turbulence space through the use of simple affine transformations. Finally, it is moved through the turbulence space over time to create the movement. Therefore, the path direction will have the reverse visual effect. For example, a downward path applied to the screen space point will show the texture or volume object rising.
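
The mapping itself can be sketched as follows, assuming the renderer supplies the inverse screen-to-world mapping and the affine world-to-gas-space transform; screen_to_world(), world_to_gas(), and the speed parameter are placeholders introduced here for illustration, not functions from these notes.

/* A minimal sketch of the "move the point" approach, not the course code.
 * screen_to_world() and world_to_gas() stand in for whatever inverse mapping
 * and affine transform the renderer provides; turbulence() is a Perlin-style
 * turbulence function.  A downward path (negative y offset) makes the
 * rendered gas appear to rise. */
typedef struct { float x, y, z; } xyz_td;

extern xyz_td screen_to_world(xyz_td screen_pnt);   /* assumed renderer hook */
extern xyz_td world_to_gas(xyz_td world_pnt);       /* assumed affine map    */
extern float  turbulence(xyz_td pnt, double size);

float animated_turbulence(xyz_td screen_pnt, int frame_num, float speed)
{
    xyz_td p = world_to_gas(screen_to_world(screen_pnt));

    /* Move the evaluation point downward over time so the gas appears
     * to drift upward in the final image. */
    p.y -= (float)frame_num * speed;

    return turbulence(p, .0125);
}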

Both of these techniques can be applied to solid texturing, gases, and hypertextures. The application of these techniques to solid texturing will be discussed first, followed by the use of these techniques for gas animation, and finally, the use of these techniques for hypertextures, including liquids and fire.

3.5 Animating Solid Textures

The previous section described two animation approaches. This section will show how these approaches can be used for solid texturing. The use of these two approaches for color solid texturing will be presented first, followed by a discussion of these approaches for solid textured transparency.

The example to be used for describing the animation of color solid texturing is a marble function. A simple marble function is given below.

rgb_td marble(pnt)
xyz_td pnt;
{
  float y;

  y = pnt.y + 3.0*turbulence(pnt, .0125);
  y = sin(y*M_PI);
  return (marble_color(y));
}


rgb_td marble_color(x)
float x;
{
  rgb_td clr;

  x = sqrt(x+1.0)*.7071;
  clr.g = .30 + .8*x;
  x = sqrt(x);
  clr.r = .30 + .6*x;
  clr.b = .60 + .4*x;
  return (clr);
}

This function applies a sine function to the turbulence of the point. The resulting value is then used to determine the color. The results achievable by this procedure can be seen in Figure 3(d).

The application of the above two animation approaches to this function will have very different effects. When the first approach is used, changing the solid space over time, the formation of marble from banded rock can be achieved. Initially, no turbulence is added to the point, so we have the sine function determining the color. This produces banded material. As the frame number increases, the amount of turbulence added to the point is increased, deforming the bands into the marble vein pattern. The resulting procedure is given below.

rgb_td marble_forming(pnt, frame_num, start_frame, end_frame)
xyz_td pnt;
int frame_num, start_frame, end_frame;
{
  float x, turb_percent, displacement;

  if(frame_num < start_frame)
  { turb_percent=0;
    displacement=0;
  }
  else if (frame_num >= end_frame)
  { turb_percent=1;
    displacement=3;
  }
  else
  { turb_percent= ((float)(frame_num-start_frame))/(end_frame-start_frame);
    displacement = 3*turb_percent;
  }

  x = pnt.x + turb_percent*3.0*turbulence(pnt, .0125) - displacement;
  x = sin(x*M_PI);
  return (marble_color(x));
}

The displacement value in the above procedure is used to stop the entire texture from moving. Without the displacement value, the entire banded pattern moves horizontally to the left of the image, instead of the veins forming in place. The realism of this effect can be increased in several ways. First of all, ease-in and ease-out of the rate of turbulence addition will give more natural motion. Secondly, the color of the marble can be changed to simulate heating before and while the bands begin to deform and to simulate cooling after the deformation. This can be achieved by the following additions:

rgb_td marble_forming2(pnt, frame_num, start_frame, end_frame, heat_length)
xyz_td pnt;
int frame_num, start_frame, end_frame, heat_length;
{
  float x, turb_percent, displacement, glow_percent;
  rgb_td m_color;

  if(frame_num < (start_frame-heat_length/2) ||
     frame_num > end_frame+heat_length/2)
    glow_percent=0;
  else if (frame_num < start_frame + heat_length/2)
    glow_percent= 1.0 - ease(((float)(start_frame+heat_length/2-frame_num))/
                             heat_length, 0.4, 0.6);
  else if (frame_num > end_frame-heat_length/2)
    glow_percent = ease(((float)(frame_num-(end_frame-heat_length/2)))/
                        heat_length, 0.4, 0.6);
  else
    glow_percent=1.0;

  if(frame_num < start_frame)
  { turb_percent=0;
    displacement=0;
  }
  else if (frame_num >= end_frame)
  { turb_percent=1;
    displacement=3;
  }
  else
  { turb_percent= ((float)(frame_num-start_frame))/(end_frame-start_frame);
    turb_percent= ease(turb_percent, 0.3, 0.7);
    displacement = 3*turb_percent;
  }

  x = pnt.y + turb_percent*3.0*turbulence(pnt, .0125) - displacement;
  x = sin(x*M_PI);
  m_color = marble_color(x);
  glow_percent = .5*glow_percent;
  m_color.r = glow_percent*(1.0) + (1-glow_percent)*m_color.r;
  m_color.g = glow_percent*(0.4) + (1-glow_percent)*m_color.g;
  m_color.b = glow_percent*(0.8) + (1-glow_percent)*m_color.b;
  return(m_color);
}

The resulting images can be seen in Figure 3. Of course, the resulting sequence would be even more realistic if the material actually deformed, instead of the color simply changing. This effect will be described in a later section.

A different effect can be achieved by the second animation approach, moving the point through the solid space. The procedure below moves the point along a helical path before evaluating the turbulence function. This produces the effect of the marble pattern moving through the object. This technique can be used by a designer in determining the portion of marble to ‘‘cut’’ his/her object from in order to achieve the most pleasing vein patterns.

rgb_td moving_marble(pnt, frame_num)
xyz_td pnt;
int frame_num;
{
  float x, tmp, tmp2;
  static float down, theta, sin_theta2, cos_theta2;
  xyz_td hel_path, direction;
  static int calcd=1;

  if(calcd)
  { theta = (frame_num%SWIRL_FRAMES)*SWIRL_AMOUNT;   /* swirling effect */
    cos_theta2 = RAD1 * cos(theta) + 0.5;
    sin_theta2 = RAD2 * sin(theta) - 2.0;
    down = (float)frame_num*DOWN_AMOUNT + 2.0;
    calcd=0;
  }

  tmp = fast_noise(pnt);     /* add some randomness */
  tmp2 = tmp*1.75;

  /* calculate the helical path */
  hel_path.y = cos_theta2 + tmp;
  hel_path.x = (-down) + tmp2;
  hel_path.z = sin_theta2 - tmp2;
  XYZ_ADD(direction, pnt, hel_path);

  x = pnt.y + 3.0*turbulence(direction, .0125);
  x = sin(x*M_PI);
  return (marble_color(x));
}


In the above procedure, SWIRL_FRAMES = 126 and SWIRL_AMOUNT = 2π/126, which makes the path swirl around completely every 126 frames. DOWN_AMOUNT = 0.0095 controls the speed of the downward movement along the helical path. RAD1 and RAD2 are the y and z radii of the helical path.

3.5.1 Animating Solid Textured Transparency

The previous section described two different ways that solid space functions can be animated for color solid texturing and the results achievable by both techniques. This section describes the use of animation techniques for solid textured transparency.

The animation technique of moving the point through the solid space was my original animation technique. The results of this technique applied to solid textured transparency can be seen in [1]. The following procedure, which is similar in animation technique to the above moving_marble procedure, produces fog moving through the surface of an object. Again, a downward helical path is used for the movement through the space. This produces an upward swirling of the gas movement.

void fog(pnt, transp, frame_num)
xyz_td pnt;
float *transp;
int frame_num;
{
  float tmp;
  xyz_td direction, cyl;
  double theta;

  pnt.x += 2.0 + turbulence(pnt, .1);
  tmp = noise_it(pnt);
  pnt.y += 4 + tmp;
  pnt.z += -2 - tmp;

  theta = (frame_num%SWIRL_FRAMES)*SWIRL_AMOUNT;
  cyl.x = RAD1 * cos(theta);
  cyl.z = RAD2 * sin(theta);

  direction.x = pnt.x + cyl.x;
  direction.y = pnt.y - frame_num*DOWN_AMOUNT;
  direction.z = pnt.z + cyl.z;

  *transp = turbulence(direction, .015);
  *transp = (1.0 - (*transp)*(*transp)*.275);
  *transp = (*transp)*(*transp)*(*transp);
}

A still of this procedure applied to a cube can be seen in Figure 2. For these images, the following values were used: DOWN_AMOUNT = 0.0095, SWIRL_FRAMES = 126, SWIRL_AMOUNT = 2π/126, RAD1 = 0.12, and RAD2 = 0.08. This technique is similar to Gardner’s technique for producing images of clouds [8], except that it uses turbulence to control the transparency instead of Fourier synthesis.
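
Written out as preprocessor constants (the notes do not list the actual #defines, so the form below is just one way of declaring them; M_PI comes from math.h):

#define SWIRL_FRAMES 126
#define SWIRL_AMOUNT (2.0*M_PI/126.0)  /* one full swirl every 126 frames  */
#define DOWN_AMOUNT  0.0095            /* speed of downward helical motion */
#define RAD1         0.12              /* y radius of the helical path     */
#define RAD2         0.08              /* z radius of the helical path     */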

3.6 Animation of Gaseous Volumes

As described in a previous section, the movement of the gas is created by moving the fixed three-dimensional screen space point along a path over time through the turbulence space before evaluating the turbulence function. Each three-dimensional screen space point is inversely mapped back to world space. Then from world space, it is mapped into the gas and turbulence space through the use of simple affine transformations. Finally, it is moved through the turbulence space over time to create the movement of the gas. Therefore, the path direction will have the reverse visual effect. For example, a downward path applied to the screen space point will show the gas as rising.


Several interesting animation effects can be achieved through the use of simple helical paths for the movement through the solid space. A discussion of these effects is presented first, followed by a discussion of the use of three-dimensional tables for controlling the gas movement. Finally, several additional primitive functions for creating gas animation will be presented. The second animation technique, moving the point through the gas space, is used in all the procedures in this section.

Helical Path Effects

As mentioned above, helical paths can be used to create several different animation effects for gases. Earlier in these notes, a procedure for producing a still image of steam rising from a teacup was described. This procedure can be modified to produce convincing animations of steam rising from the teacup by the addition of helical paths for motion. The modification needed is given below. This is the same technique that was used in the moving_marble function.

steam_moving(pnt, pnt_world, density, parms, vol)
xyz_td pnt, pnt_world;
float *density, *parms;
vol_td vol;
{
  float tmp, turb, dist_sq, density_max, offset2, theta, dist;
  static float ramp[RAMP_SIZE];
  extern float offset[OFFSET_SIZE];
  extern int frame_num;
  xyz_td direction, diff;
  int i, indx;
  static float pow_table[POW_TABLE_SIZE];
  static int calcd=1;
  static float down, cos_theta2, sin_theta2;

  if(calcd)
  { calcd=0;
    /* determine how to move the point through the space (helical path) */
    theta = (frame_num%SWIRL_FRAMES)*SWIRL;
    down = (float)frame_num*DOWN*3.0 + 4.0;
    cos_theta2 = RAD1*cos(theta) + 2.0;
    sin_theta2 = RAD2*sin(theta) - 2.0;

    for(i=POW_TABLE_SIZE-1; i>=0; i--)
      pow_table[i] = (float)pow(((double)(i))/(POW_TABLE_SIZE-1)*
                                parms[1]*2.0, (double)parms[2]);
    make_ramp_table(ramp);
  }

  tmp = fast_noise(pnt);
  direction.x = pnt.x + cos_theta2 + tmp;
  direction.y = pnt.y - down + tmp;
  direction.z = pnt.z + sin_theta2 + tmp;

  turb = fast_turbulence(direction);
  *density = pow_table[(int)(turb*0.5*(POW_TABLE_SIZE-1))];

  /* determine the distance squared from the center of the slab */
  XYZ_SUB(diff, vol.shape.center, pnt_world);
  dist_sq = DOT_XYZ(diff, diff);
  density_max = dist_sq*vol.shape.inv_rad_sq.y;
  indx = (int)((pnt.x+pnt.y+pnt.z)*100) & (OFFSET_SIZE -1);
  density_max += parms[3]*offset[indx];

  if(density_max >= .25)   /* ramp off if > 25% from the center */
  { i = (density_max -.25)*4/3*RAMP_SIZE;   /* get table index 0:RAMP_SIZE-1 */
    i = MIN(i, RAMP_SIZE-1);
    density_max = ramp[i];
    *density *= density_max;
  }

  /* ramp it off vertically */
  dist = pnt_world.y - vol.shape.center.y;
  if(dist > 0.0)
  { dist = (dist + offset[indx]*.1)*vol.shape.inv_rad.y;
    if(dist > .05)
    { offset2 = (dist -.05)*1.111111;
      offset2 = 1 - (exp(offset2)-1.0)/1.718282;
      offset2 *= parms[1];
      *density *= offset2;
    }
  }
}

This function creates upward swirling movement in the gas, which swirls around 360 degrees every SWIRL_FRAMES frames. Noise is applied to the path so that it appears more random. The parameters RAD1 and RAD2 determine the elliptical shape of the swirling path.

A downward helical path through the gas space produces the effect of the gas rising and swirling in the opposite direction. The same technique can be used to produce animations of fog developing and rolling by. A horizontal helical path creates the movement of the gas. A description of this can be found in [5].

For more realistic steam motion, a simulation of air currents is helpful. This can be approximated by adding turbulence to the helical path. The amount of turbulence added will be proportional to the height above the teacup, with no turbulence added at the surface.
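
A small sketch of this idea is given below; it assumes the vertical extent of the steam volume is known, and the helper name and the linear 0-to-1 ramp are illustrative rather than the procedure used for the images.

/* Sketch (not the course code): add height-proportional turbulence to a
 * point on the helical path, so the steam is smooth at the cup surface
 * and grows more turbulent as it rises.  cup_top_y and steam_height are
 * assumed scene parameters; fast_turbulence() is the turbulence primitive
 * used throughout these notes. */
typedef struct { float x, y, z; } xyz_td;
extern float fast_turbulence(xyz_td pnt);

void add_air_currents(xyz_td *direction, xyz_td pnt, xyz_td pnt_world,
                      float cup_top_y, float steam_height)
{
    float h = (pnt_world.y - cup_top_y) / steam_height;  /* 0 at the surface */
    float turb;

    if (h < 0.0f) h = 0.0f;
    if (h > 1.0f) h = 1.0f;

    /* no turbulence at the surface, increasing with height */
    turb = fast_turbulence(pnt) * h;
    direction->x += turb;
    direction->z += turb;
}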

As shown above, a wide variety of effects can be achieved through the use of helical paths. This requires the same type of path being used for movement throughout the entire volume of gas. Obviously, more complex motion can be achieved by having different movement paths for different locations within the gas. A three-dimensional table specifying different procedures for different locations within the volume creates a flexible method for creating complex motion in this manner.

3.6.1 Three-dimensional tables

The use of three-dimensional tables (solid spaces) to control the animation of the gases is an extension of my previous use of solid spaces, in which three-dimensional tables were used for volume shadowing effects [5].

The three-dimensional tables are handled in the following manner: the table surrounds the gas volume in world space and values are stored at each of the lattice points in the table. These values represent the calculated values for that specific location in the volume. To determine the values for other locations in the volume, the eight table entries forming the parallelepiped surrounding the point are interpolated.

There are two types of tables for controlling the motion of the gases: vector field tables and functional flow field tables. The vector field tables will not be described in detail in these notes. A thorough description of their use and merits can be found in [7]. The vector field tables store direction vectors, density scaling factors, and other information at each point in the lattice. These tables are therefore suited for visualizing computational fluid dynamics simulations or for using external programs to control the gas movement [6].

The flow field function and vector field tables are incorporated into the volume density functions for controlling the shape and movement of the gas. Each volume density function has a default path and velocity for the gas movement. First, the default path and velocity are calculated; then the vector field tables are evaluated and functions that calculate direction vectors, density scaling factors, etc., from the functional flow field tables are applied. The default path vector, the vector from the vector field table, and the vector from the flow field function are combined to produce the new path for the gas.


3.6.2 Accessing The Table Entries

For accessing values from these tables during rendering, the location of the sample point within the table is determined. The values at the eight points forming the surrounding parallelepiped are interpolated to determine the final value. The location within the table is determined by first mapping the three-dimensional screen space point back into world space, then into the three-dimensional table. The following formula is then used to find the location of the point within the table (ptable), given the point in world space (point):

ptable.x = (point.x - table_start.x) * table_inv_step.x
ptable.y = (point.y - table_start.y) * table_inv_step.y
ptable.z = (point.z - table_start.z) * table_inv_step.z

table_start is the location in world space of the starting table entry and table_inv_step is the inverse of the step size between table elements in each dimension. Once the location within the table is determined, the values corresponding to the eight surrounding table entries are then interpolated (tri-linear interpolation is normally sufficient).
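
The lookup can be sketched as follows, assuming the table is stored as a flat array of floats with known dimensions (the storage layout here is an assumption; the notes do not specify one):

/* Sketch: map a world-space point into table coordinates and tri-linearly
 * interpolate the eight surrounding entries. */
typedef struct { float x, y, z; } xyz_td;

float table_lookup(const float *table, int nx, int ny, int nz,
                   xyz_td point, xyz_td table_start, xyz_td table_inv_step)
{
    float px = (point.x - table_start.x) * table_inv_step.x;
    float py = (point.y - table_start.y) * table_inv_step.y;
    float pz = (point.z - table_start.z) * table_inv_step.z;

    int ix = (int)px, iy = (int)py, iz = (int)pz;
    if (ix < 0) ix = 0;  if (ix > nx - 2) ix = nx - 2;
    if (iy < 0) iy = 0;  if (iy > ny - 2) iy = ny - 2;
    if (iz < 0) iz = 0;  if (iz > nz - 2) iz = nz - 2;

    float fx = px - ix, fy = py - iy, fz = pz - iz;

    /* interpolate along x, then y, then z */
#define T(i,j,k) table[((k) * ny + (j)) * nx + (i)]
    float c00 = T(ix,iy,  iz  )*(1-fx) + T(ix+1,iy,  iz  )*fx;
    float c10 = T(ix,iy+1,iz  )*(1-fx) + T(ix+1,iy+1,iz  )*fx;
    float c01 = T(ix,iy,  iz+1)*(1-fx) + T(ix+1,iy,  iz+1)*fx;
    float c11 = T(ix,iy+1,iz+1)*(1-fx) + T(ix+1,iy+1,iz+1)*fx;
#undef T
    float c0 = c00*(1-fy) + c10*fy;
    float c1 = c01*(1-fy) + c11*fy;
    return c0*(1-fz) + c1*fz;
}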

3.6.3 Functional Flow Field Tables

The type of table described in these notes to control the gas movement is the functional flow field table. The major use of these functional flow field tables is for choreographed animation of the gases. These tables define, for each region of the gas, which function to evaluate to control its movement. Each flow field table entry can either contain one specific function to evaluate, or a list of functions to evaluate to determine the path for the movement of the gas (the path through the gas space). For each function, a file is specified which contains the type of function and parameters for that function. The functions evaluated by the flow field tables return the following information:

Flow Field Function Values

- direction vector
- density scaling value
- percent of vector to use
- velocity

The advantage of the flow field functions over the vector field tables is that they can provide infinite detail in the motion of the gas; they are evaluated for each point that is volume rendered, not stored at a fixed resolution. The disadvantage of the functional flow field tables is that the functions are much more expensive to evaluate than simply interpolating values from the vector field table.

The ‘‘percent of vector to use’’ value in the above table is used to provide a smooth transition between control of the gas movement by the flow field functions, the vector field tables, and the default path of the gas. This value is also used to allow a smooth transition between control of the gas by different flow field functions. This value will decrease as you move away from the center of control for a given flow field function.
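
The blend itself is a simple linear interpolation between the two paths; a sketch is given below (the default-path argument stands for whatever path the volume density function computes on its own, and the function name is introduced only for this example).

/* Sketch: blend the path returned by a flow field function with the gas's
 * default path using the "percent of vector to use" value. */
typedef struct { float x, y, z; } xyz_td;

xyz_td blend_paths(xyz_td default_path, xyz_td ff_path, float percent_to_use)
{
    xyz_td path;
    path.x = percent_to_use*ff_path.x + (1.0f - percent_to_use)*default_path.x;
    path.y = percent_to_use*ff_path.y + (1.0f - percent_to_use)*default_path.y;
    path.z = percent_to_use*ff_path.z + (1.0f - percent_to_use)*default_path.z;
    return path;
}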

3.6.4 Functional Flow Field Functions

Two powerful types of functions for controlling the movement of the gases are attractors/repulsors and vortex functions. Repulsors are the exact opposite of attractors, so only attractors will be described here. To create a repulsor from an attractor, simply negate the direction vector.


Attractors

Attractors are primitive functions that can provide a wide range of effects. Figure 4 shows several frames of an attractor whose attraction increases in strength over time. Each attractor has a minimum and maximum attraction value. In this figure, the interpolation varies over time between the minimum and maximum attraction values of the attractor. By animating the location and strength of an attractor, many different effects can be achieved. Effects such as a breeze blowing (see Figure 8) and the wake of a moving object can easily be created. Spherical attractors simply create paths radially away from the center of attraction (as stated previously, path movement needs to be in the opposite direction of the desired visual effect). The following is an example of a simple spherical attractor function:

spherical_attractor(point, FF, direction, density_scaling, velocity, percent_to_use)
xyz_td point, *direction;
flow_func_td FF;
float *density_scaling, *velocity, *percent_to_use;
{
  float dist, d2;

  /* calculate the distance and direction from the center of the attractor */
  XYZ_SUB(*direction, point, FF.CENTER);
  dist = sqrt(DOT_XYZ(*direction, *direction));

  /* set the density scaling and the velocity to 1 */
  *density_scaling = 1.0;
  *velocity = 1.0;

  /* calculate the falloff factor (cosine) */
  if(dist > FF.DISTANCE)
    *percent_to_use = 0;
  else if (dist < FF.FALLOFF_START)
    *percent_to_use = 1.0;
  else
  { d2 = (dist - FF.FALLOFF_START)/(FF.DISTANCE - FF.FALLOFF_START);
    *percent_to_use = (cos(d2*M_PI)+1.0)*.5;
  }
}

The flow_func_td structure contains parameters for each instance of the spherical attractor. The parameters include the center of the attractor, FF.CENTER, the effective distance of attraction, FF.DISTANCE, and where to begin the falloff from the attractor path to the default path, FF.FALLOFF_START. This function ramps the use of the attractor path from FF.FALLOFF_START to FF.DISTANCE. A cosine function is used for a smooth transition between the path defined by the attractor and the default path of the gas.

Extensions of Spherical Attractors

Variations on this simple spherical attractor include moving attractors, angle-limited attractors, attractors with variable maximum attraction, non-spherical attractors, and of course combinations of any or all of these types. These variations can be animated over time to achieve more complex and interesting effects. For example, the minimum and maximum attraction can be animated over time to produce the effects seen in Figure 4 and Figure 8.

Instead of having the attraction be spherical in geometry, the geometry of the attraction can, for example, be planar or linear. A linear attractor can be used for creating the flow of a gas along a wall, as will be explained in a later section.
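
One plausible way to build the angle-limited variant mentioned above (a sketch, not the notes' implementation) is to scale percent_to_use by the angle between the attractor's axis and the direction from its center to the point:

/* Sketch of an angle-limited attractor weight.  The structure fields and the
 * cosine falloff over the cone are illustrative; they are not the notes'
 * flow_func_td.  axis is assumed to be unit length. */
#include <math.h>

typedef struct { float x, y, z; } xyz_td;

float angle_limited_weight(xyz_td point, xyz_td center, xyz_td axis,
                           float max_angle /* radians */)
{
    xyz_td d;
    float len, cos_ang, ang;

    d.x = point.x - center.x;
    d.y = point.y - center.y;
    d.z = point.z - center.z;

    len = (float)sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len == 0.0f) return 1.0f;

    /* angle between the point direction and the attractor's axis */
    cos_ang = (d.x*axis.x + d.y*axis.y + d.z*axis.z) / len;
    if (cos_ang >  1.0f) cos_ang =  1.0f;
    if (cos_ang < -1.0f) cos_ang = -1.0f;
    ang = (float)acos(cos_ang);

    /* cosine falloff from full attraction along the axis to zero at the
     * limiting angle; this weight scales percent_to_use */
    if (ang >= max_angle) return 0.0f;
    return 0.5f * (1.0f + (float)cos(ang / max_angle * M_PI));
}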


3.6.5 Spiral Vortex Functions

Vortex functions are very useful for creating realistic gas motion. They have a variety of uses, from simulating actual physical vortices to creating interesting disturbances in flow patterns as an approximation of turbulent flow. One vortex function is based on the simple 2D polar coordinate function

r = θ

which translates into three-dimensional coordinates as

x = r · cos(θ)
y = r · sin(θ)

The third dimension is normally just linear movement over time along the third axis. To animate this function, θ is made relative to the frame number. To increase the vortex action, a scalar multiplier for the sine and cosine terms based on the distance from the vortex’s axis is added. This is by no means a true physical simulation of gaseous vortices. Simulating true turbulent flow characteristics, such as those found in Karman vortex streets (turbulent-flow-induced vortices in the wake of the flow about an object), is extremely complex and requires large amounts of supercomputer time for approximation models. A simpler vortex function is given below.

calc_vortex(pt, ff, path, velocity, percent_to_use, frame_num)
xyz_td *pt, *path;
flow_func_td *ff;
float *percent_to_use, *velocity;
int frame_num;
{
  static tran_mat_td mat={0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
  xyz_td dir, pt2, diff;
  float theta, dist, d2, dist2;
  float cos_theta, sin_theta, compl_cos, ratio_mult;

  /* calculate the distance from the center of the vortex */
  XYZ_SUB(diff, (*pt), ff->center);
  dist = sqrt(DOT_XYZ(diff, diff));
  dist2 = dist/ff->distance;

  /* calculate the angle of rotation about the axis */
  theta = (ff->parms[0]*(1+.001*(frame_num)))/
          (pow((.1+dist2*.9), ff->parms[1]));

  /* calculate the matrix for rotating about the cylinder's axis */
  calc_rot_mat(theta, ff->axis, mat);
  transform_XYZ((long)1, mat, pt, &pt2);
  XYZ_SUB(dir, pt2, (*pt));
  path->x = dir.x;
  path->y = dir.y;
  path->z = dir.z;

  /* Have the maximum strength increase from frame parms[4] to
   * parms[5] to a maximum of parms[2] */
  if(frame_num < ff->parms[4])
    ratio_mult = 0;
  else if (frame_num <= ff->parms[5])
    ratio_mult = (frame_num - ff->parms[4])/
                 (ff->parms[5] - ff->parms[4]) * ff->parms[2];
  else
    ratio_mult = ff->parms[2];

  /* calculate the falloff factor */
  if(dist > ff->distance)
  { *percent_to_use = 0;
    *velocity = 1;
  }
  else if (dist < ff->falloff_start)
  { *percent_to_use = 1.0*ratio_mult;
    /* calculate the velocity */
    *velocity = 1.0 + (1.0 - (dist/ff->falloff_start));
  }
  else
  { d2 = (dist - ff->falloff_start)/(ff->distance - ff->falloff_start);
    *percent_to_use = (cos(d2*M_PI)+1.0)*.5*ratio_mult;
    *velocity = 1.0 + (1.0 - (dist/ff->falloff_start));
  }
}

This vortex function uses some techniques by Karl Sims [19]. For these vortices, the angle of rotation about an axis is determined by both the frame number and the relative distance of the point from the center (or axis) of rotation. The direction vector is then the vector difference of the transformed point and the original point.

A third type of vortex function is based on the conservation of angular momentum: r × θ = constant, where r is the distance from the center of the vortex. This can be used in the above vortex procedure in calculating the angle of rotation about the axis of the vortex: θ = (time × constant)/r. This gives more realistic motion since it conserves the angular momentum.
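
In code, this variant only changes how theta is computed in calc_vortex; a sketch of that calculation, with an assumed clamp near the axis to avoid the singularity, is:

/* Sketch: rotation angle for the momentum-conserving vortex variant,
 * theta = (time * constant) / r.  How "time" is derived from the frame
 * number and the clamp on r are assumptions made for this example. */
float vortex_theta(float r, int frame_num, float frames_per_sec, float momentum)
{
    float time = (float)frame_num / frames_per_sec;
    if (r < 0.001f) r = 0.001f;    /* avoid the singularity at the axis */
    return momentum * time / r;
}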

An example of the effects achievable by these vortex functions can be seen in Figure 5. Animating the location of these vortices produces interesting effects, especially when coordinating their movement with the movement of objects in the scene, such as producing a swirling wake created by an object moving through the gas.

3.6.6 Combinations of Functions

As mentioned above, combining these simple types of functions to control the gas movement through different parts of the volume gives the most interesting and complex effects. The real power of flow field functions is the ability to combine simple primitives to produce these effects. Two examples of the combination of flow field functions, wind blowing and flow into a hole, are presented below to illustrate the power of this technique.

Wind Effects

The first complex gas motion example we will look at is wind blowing the steam rising from a teacup. A spherical attractor will be used to create the wind effect. Figure 8 shows frames of an animation of a breeze blowing the steam from the left of the image. To produce this effect, an attractor was placed to the upper right of the teacup and the strength of attraction was increased over time. The maximum attraction was only 30%, so it appears as a light breeze. Increasing the maximum attraction would simulate an increase in the strength of the wind. The top-left image has the steam rising only vertically with no effect of the wind. The top-right image to the bottom-right image show the effect on the steam as the breeze starts blowing toward the right of the image. This is a simple combination of helical motion with an attractor. Notice how the volume of the steam as well as the motion of the individual plumes is ‘‘blown’’ toward the upper right. This effect was created by moving, over time, the center point of the volume used for the ramping off of the density: the x value of the center point is increased based on the height above the cup and the frame number. By changing the spherical attractor flow function and the steam_moving procedure given above, the blowing effect can be implemented. The following is the addition needed to the spherical attractor procedure:

/****************************************************************************
 * Move the volume of the steam.
 * The shifting is based on the height above the cup (ff->parms[6] to
 * ff->parms[7]) and the frame range for increasing the strength of the
 * attractor, which determines ratio_mult (calculated below).
 ****************************************************************************/

/* Have the maximum strength increase from frame parms[4] to
 * parms[5] to a maximum of parms[2] */
if(frame_num < ff->parms[4])
  ratio_mult = 0;
else if (frame_num <= ff->parms[5])
  ratio_mult = (frame_num - ff->parms[4])/
               (ff->parms[5] - ff->parms[4]) * ff->parms[2];
else
  ratio_mult = ff->parms[2];

if(point.y < ff->parms[6])
  x_disp = 0;
else
{ if(point.y <= ff->parms[7])
    d2 = COS_ERP((point.y - ff->parms[6])/(ff->parms[7] - ff->parms[6]));
  else
    d2 = 0;
  x_disp = (1-d2)*ratio_mult*ff->parms[8] + fast_noise(point)*ff->parms[9];
}
return(x_disp);

The following table should clarify the use of all the parameters.

Variable         Description

point            location of the point in world space
ff->parms[2]     maximum strength of attraction
ff->parms[4]     starting frame for the attraction increase
ff->parms[5]     ending frame for the attraction increase
ff->parms[6]     minimum y value for steam displacement
ff->parms[7]     maximum y value for steam displacement
ff->parms[8]     maximum amount of steam displacement
ff->parms[9]     amount of noise to add in

The ratio_mult value for increasing the strength of the attraction is calculated in the same way as in the calc_vortex procedure. The x_disp value needs to be returned to the steam rising function. This value is then added to the center variable before the ramping off of the density. The following addition to the steam rising procedure will accomplish this:

center = vol.shape.center;
center.x += x_disp;

Flow Into a Hole in a Wall

The next example of combining flow field functions constrains the flow into an opening in a wall. The resulting images are shown in Figure 9(a) and (b). For this example, three types of functions are used. The first function is an angle-limited spherical attractor placed at the center of the hole. This attractor has a range of 180 degrees from the axis of the hole toward the left. The next function is an angle-limited repulsor placed at the same location, again with a range of repulsion of 180 degrees, but to the right of the hole. These two functions create the flow into the hole and through the hole. The final type of function creates the tangential flow along the walls. This function can be thought of as a linear attraction field on the left side of the hole. The line in this case would be through the hole and perpendicular to the wall (horizontal). This attractor has maximum attraction near the wall, with the attraction decreasing as you move away from the wall. As you can see from the flow patterns toward the hole and along the wall in Figure 9, the effect is very convincing. This figure also shows how these techniques can be applied to hypertextures. The right image is rendered as a hypertexture to simulate a (compressible) liquid flowing into the opening.
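
A sketch of such a linear attractor is given below; the parameter names, the choice of an x-aligned wall, and the linear falloff with distance from the wall are assumptions made for this example, not the procedure used for Figure 9.

/* Sketch: attraction toward a horizontal line through the hole (perpendicular
 * to the wall), strongest at the wall and fading away from it.
 * line_dir is assumed to be unit length. */
#include <math.h>

typedef struct { float x, y, z; } xyz_td;

void linear_attractor(xyz_td point, xyz_td hole_center, xyz_td line_dir,
                      float wall_x, float falloff_dist,
                      xyz_td *direction, float *percent_to_use)
{
    xyz_td d;
    float t, dist;

    d.x = point.x - hole_center.x;
    d.y = point.y - hole_center.y;
    d.z = point.z - hole_center.z;

    /* closest point on the line, then the direction toward it */
    t = d.x*line_dir.x + d.y*line_dir.y + d.z*line_dir.z;
    direction->x = hole_center.x + t*line_dir.x - point.x;
    direction->y = hole_center.y + t*line_dir.y - point.y;
    direction->z = hole_center.z + t*line_dir.z - point.z;

    /* maximum attraction at the wall, decreasing away from it */
    dist = (float)fabs(point.x - wall_x);
    *percent_to_use = dist >= falloff_dist ? 0.0f : 1.0f - dist/falloff_dist;
}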

3.7 Animating Hypertextures

All of the animation techniques described above can be applied to hypertextures. The only change needed is in the rendering algorithm. By using a non-gaseous model for illumination and for converting densities to opacities, the techniques described above will produce hypertexture images. As mentioned above, an example of this is Figure 9. The geometry and motion procedures are the same for both of the images in Figure 9. Two other examples of hypertexture animation will be explored: simulating molten marble and fire.

3.7.1 Molten Marble

Previously in these notes, a procedure was given for simulating the formation of marble. The addition of hypertexture animation to the solid texture animation can increase the realism of the animation considerably.

One way of animating hypertextures for the simulation of marble forming is described below. However, the reader is encouraged to try various techniques to produce different results.

The main idea behind this approach is to base the density changes on the color of the marble. Initially, no turbulence will be added to the ‘‘fluid’’: density values will be determined in a manner similar to the marble color values, giving the different bands different densities. Just as in the earlier marble forming procedure, turbulence will be added over time. As you can see in the procedure below, all of the above is achieved by returning the amount of turbulence from the solid texture function, marble_forming, described earlier. The density is based on the turbulence amount from the solid texture function. This is then shaped using the power function in a manner similar to the gas functions given before. Finally, a trick by Perlin [17] is used to form a hard surface more quickly. The result of this function can be seen in Figure 10.

/***********************************************************************
 * parms[1] = Maximum density value: density scaling factor            *
 * parms[2] = exponent for density scaling                             *
 * parms[3] = x resolution for Perlin's trick (0-640)                  *
 * parms[8] = 1/radius of fuzzy area for Perlin's trick (> 1.0)        *
 ***********************************************************************/

molten_marble(pnt, density, parms, vol)
xyz_td pnt;
float *density, *parms;
vol_td vol;
{
  float parms_scalar, turb_amount;

  turb_amount = solid_txt(pnt, vol);
  *density = (pow(turb_amount, parms[2]))*0.35 + .65;

  /* Introduce a harder surface more quickly.
   * parms[3] is multiplied by 1/640. */
  *density *= parms[1];
  parms_scalar = (parms[3]*.0015625)*parms[8];
  *density = (*density - .5)*parms_scalar + .5;
  *density = MAX(0.2, MIN(1.0, *density));
}


3.7.2 Fire

Simulating fire is a very complex problem. Flames are another example of a flow problem. These notes do not describe a complete solution for modeling fire. A true physical simulation would require the solution of the flow equations for the oxidants and the reactants and the chemical equilibrium equations. The technique described here is a very preliminary approximation to simulating the visual characteristics of flames. The flames can be modeled as a three-dimensional volume density. To simulate the luminous characteristics of the flames, a constant illumination will be assumed and the emittance of light from the flames will be ignored. For flames produced from wood, paper, etc., the light is emitted from carbon particles in the flame; hence, the flames will cast shadows on other objects in the scene.

To create the flames, a base height of the fire is used to give a relatively continuous area of fire. Above this area, individual flames will become more prominent. For the distribution of the flames, a combination of turbulent sine waves is used. The flames’ density will also decrease as the flames rise.

Finally, a simulation of the flame color is needed. A simple way to do this is to have the most dense portions of the flames be red and have the color change to yellow as the flame density decreases.

Here is a very rough procedure for modeling fire:

/***********************************************************************
 * Fire                                                                *
 ***********************************************************************
 * parms[1] = Maximum density value - density scaling factor           *
 * parms[2] = exponent for density scaling                             *
 * parms[3] = amount of randomness to add into the ramp-off            *
 * parms[4] = gas density threshold: if less than this, density = 0    *
 * parms[5] = center point x value for the ramp-off                    *
 * parms[6] = percent of height for the base fire                      *
 * parms[7] = minimum density for the base fire                        *
 * parms[8] = sine multiplier value                                    *
 ***********************************************************************/

fire(pnt, density, parms, pnt_w, vol, final_pnt)
xyz_td pnt, pnt_w, *final_pnt;
float *density, *parms;
vol_td *vol;
{
  float tmp, dist_sq, density_max, tmp2, offset3;
  float vect_len, compl_len, hel_len, flow_amount;
  extern float offset[OFFSET_SIZE];
  extern int frame_num;
  xyz_td direction, cyl, diff, hel_path, center, pnt2;
  int i, indx;
  static float ramp[RAMP_SIZE];
  static float pow_table[POW_TABLE_SIZE];
  static int calcd=1;
  static float down, cos_theta2, sin_theta2;
  double ease(), height_ratio, compl, turb_amount, d_color, begin_ramp,
         ease_amt, theta_fire, cos_theta, sin_theta;
  rgb_td colr;

  if(calcd)
  { theta_fire = (frame_num%SWIRL_FRAMES_FIRE)*SWIRL_FIRE;  /* swirling effect */
    cos_theta = cos(theta_fire);
    sin_theta = sin(theta_fire);
    down = (float)frame_num*DOWN_FIRE - 4.0;
    cos_theta2 = .09*cos_theta + 2.0;
    sin_theta2 = .06*sin_theta - 2.0;
    calcd=0;

    for(i=POW_TABLE_SIZE-1; i>=0; i--)
    { pow_table[i] = (float)pow(((double)(i))/(POW_TABLE_SIZE-1)*
                                parms[1]*2.0, (double)parms[2]);
    }
    make_ramp_table(ramp);
  }

  tmp = fast_noise(pnt);

  /* calculate the amount of turbulence to add onto the path based on the
   * height above the surface */
  height_ratio = (pnt_w.y - vol->shape.b_box.center.y)*vol->shape.b_box.inv_rad.y;
  if (height_ratio < 0)
    height_ratio = 0;
  else
    height_ratio = ease(height_ratio, 0.4, 0.6);
  pnt2.x = pnt.x*1.75;  pnt2.y = pnt.y*.5;  pnt2.z = pnt.z*.75;
  turb_amount = new_turbulence_three(pnt2)*height_ratio;

  /* calculate the path based on the unperturbed flow: helical path */
  hel_path.x = cos_theta2 + tmp + turb_amount;
  hel_path.y = (-down) - tmp - turb_amount;
  hel_path.z = sin_theta2 + tmp + turb_amount;
  hel_len = NORM_XYZ(hel_path);
  XYZ_ADD(direction, pnt, hel_path);

  /* The flame shaping part:
   * use multiple sine waves to get the general shape of the flames */
  tmp = new_turbulence_three(direction);
  tmp = (sin((direction.x+tmp)*parms[8])+1.25)*.4444444444444;
  tmp *= ((sin((direction.z+tmp)*2*parms[8])+1.0)*.5);
  *density = pow_table[(int)((tmp)*(.5*(POW_TABLE_SIZE-1)))];

  /* ramp it off based on the distance from the center */
  center = vol->shape.center;
  XYZ_SUB(diff, center, pnt_w);
  dist_sq = DOT_XYZ(diff, diff);
  density_max = dist_sq*vol->shape.b_box.inv_rad_sq.y;

  indx = (int)((pnt.x+pnt.y+pnt.z)*100) & (OFFSET_SIZE -1);
  density_max += parms[3]*offset[indx];
  if(density_max >= .25)
  { i = (density_max -.25)*266.66;   /* get table index 0:199 */
    if(i > 199)
      i = 199;
    density_max = ramp[i];
    *density *= density_max;
  }

  /* ramp it off vertically */
  tmp2 = 2*(pnt_w.y - center.y);
  if(tmp2 > 0.0)
  { tmp2 = (tmp2 + offset[indx]*parms[3])*vol->shape.inv_rad.y;
    if(tmp2 > 1.0) tmp2 = 1.0;
    if(tmp2 > .05)
    { offset3 = (tmp2 -.05)*1.111111;
      offset3 = 1 - (exp(offset3)-1.0)/1.718282;
      *density = offset3*parms[1];
    }
  }

  if(*density < parms[4])
    *density = 0;

  /* give an area of the fire where there is a minimum density */
  if (*density < parms[7])
  { if (pnt_w.y < center.y - parms[6]*vol->shape.rad.y)
    { begin_ramp = pnt_w.y - (center.y - parms[6]*vol->shape.rad.y*.5);
      if (begin_ramp > 0)
      { /* ease to no minimum density */
        begin_ramp = begin_ramp*(vol->shape.b_box.inv_rad.y*2.0);
        ease_amt = ease(begin_ramp, .4, .6);
        *density = *density*ease_amt + (1 - ease_amt)*parms[7]*density_max;
      }
      else
      { *density = parms[7]*density_max;
      }
    }
  }

  /* Determine the fire color and store it in the vol->color structure. */
  d_color = 1 - *density*1.5/parms[1];
  if (d_color < 0.0) d_color = 0.0;
  compl = 1 - d_color;
  colr.r = 1.0;
  colr.g = d_color*.2 + compl*.85;
  colr.b = d_color*.2 + compl*.5;
  vol->color = colr;
}

3.8 Conclusion

The goal of these notes has been to describe in detail my techniques for creating realistic images and animations of gases and fluids, as well as to provide the reader with an insight into the development of these techniques. These notes have shown a useful approach to modeling gases and powerful animation techniques for procedural modeling. A more detailed and expanded description of these techniques can be found in [4]. To aid the reader in reproducing the results presented here, all of the images in these notes are accompanied by detailed descriptions of the procedures used to create them. This gives the reader not only the opportunity to reproduce the results, but also the opportunity and challenge to expand upon the techniques presented in these notes. These notes should also give the reader an insight into the procedural design approach I use and will hopefully help the reader explore and expand procedural modeling and animation techniques.

References

[1] Ebert, David, Boyer, Keith, and Roble, Doug. Once a Pawn a Foggy Knight... [videotape]. In SIGGRAPH Video Review, 54 (November 1989), ACM SIGGRAPH, New York. Segment 3.

[2] Ebert, David, Carlson, Wayne, and Parent, Richard. Solid spaces and inverse particle systems for controlling the animation of gases and fluids. The Visual Computer 10, 4 (1994), 179--190.

[3] Ebert, David, Ebert, Julia, and Boyer, Keith. Getting Into Art [videotape]. Department of Computer and Information Science, The Ohio State University, May 1990.

[4] Ebert, David, Musgrave, F. Kenton, Peachey, Darwyn, Perlin, Ken, and Worley, Steven. Texturing and Modeling: A Procedural Approach. Academic Press, Oct. 1994. ISBN 0-12-228760-6.

[5] Ebert, David, and Parent, Richard. Rendering and Animation of Gaseous Phenomena by Combining Fast Volume and Scanline A-buffer Techniques. Proceedings of SIGGRAPH '90 (Dallas, Texas, Aug. 6-10, 1990). In Computer Graphics 24, 4 (August 1990), 357--366.

[6] Ebert, David, Yagel, Roni, Scott, Jim, and Kurzion, Yair. Volume Rendering Methods for Computational Fluid Dynamics Visualization. Proceedings of Visualization '94, 232--239.

[7] Ebert, David S. Solid Spaces: A Unified Approach to Describing Object Attributes. PhD thesis, The Ohio State University, 1991.

[8] Gardner, Geoffrey. Visual Simulation of Clouds. Proceedings of SIGGRAPH '85 (San Francisco, California, July 22-26, 1985). In Computer Graphics 19, 3 (July 1985), 297--303.

[9] Gardner, Geoffrey. Forest Fire Simulation. In Computer Graphics (SIGGRAPH '90 Proceedings) (Aug. 1990), F. Baskett, Ed., vol. 24, 430.

[10] Inakage, Masa. Modeling Laminar Flames. In SIGGRAPH '91: Course Notes 27 (July 1991), ACM SIGGRAPH.

[11] Kajiya, James, and Von Herzen, Brian. Ray Tracing Volume Densities. Proceedings of SIGGRAPH '84 (Minneapolis, Minnesota, July 23-27, 1984). In Computer Graphics 18, 3 (July 1984), 165--174.

[12] Kay, Timothy, and Kajiya, James. Ray Tracing Complex Scenes. Proceedings of SIGGRAPH '86 (Dallas, Texas, August 18-22, 1986). In Computer Graphics 20, 4 (August 1986), 269--278.

[13] Klassen, R. Victor. Modeling the Effect of the Atmosphere on Light. ACM Transactions on Graphics 6, 3 (July 1987), 215--237.

[14] Max, Nelson. Light Diffusion Through Clouds and Haze. Computer Vision, Graphics, and Image Processing 33 (1986), 280--292.

[15] Nishita, Tomoyuki, Miyawaki, Yasuhiro, and Nakamae, Eihachiro. A Shading Model for Atmospheric Scattering Considering Luminous Intensity Distribution of Light Sources. Proceedings of SIGGRAPH '87 (Anaheim, California, July 27-31, 1987). In Computer Graphics 21, 4 (July 1987), 303--310.

[16] Perlin, Ken. An Image Synthesizer. Proceedings of SIGGRAPH '85 (San Francisco, California, July 22-26, 1985). In Computer Graphics 19, 3 (July 1985), 287--296.

[17] Perlin, Ken. A Hypertexture Tutorial. In SIGGRAPH '92: Course Notes 23 (July 1992), ACM SIGGRAPH.

[18] Perlin, Ken, and Hoffert, Eric. Hypertexture. Proceedings of SIGGRAPH '89 (Boston, Massachusetts, July 31-Aug. 4, 1989). In Computer Graphics 23, 3 (July 1989), 253--262.

[19] Sims, Karl. Particle Animation and Rendering Using Data Parallel Computation. Proceedings of SIGGRAPH '90 (Dallas, Texas, Aug. 6-10, 1990). In Computer Graphics 24, 4 (August 1990), 405--413.

[20] Stam, Jos, and Fiume, Eugene. Turbulent Wind Fields for Gaseous Phenomena. In Computer Graphics (SIGGRAPH '93 Proceedings) (Aug. 1993), J. T. Kajiya, Ed., vol. 27, 369--376.

[21] Stam, Jos, and Fiume, Eugene. Depicting Fire and Other Gaseous Phenomena Using Diffusion Processes. In SIGGRAPH 95 Conference Proceedings (Aug. 1995), R. Cook, Ed., Annual Conference Series, ACM SIGGRAPH, Addison Wesley, 129--136. Held in Los Angeles, California, 06-11 August 1995.

[22] Voss, Richard. Fourier Synthesis of Gaussian Fractals: 1/f Noises, Landscapes, and Flakes. In SIGGRAPH 83: Tutorial on State of the Art Image Synthesis (1983), vol. 10, ACM SIGGRAPH.

[23] Wyvill, Brian, and Bloomenthal, Jules. Modeling and Animating with Implicit Surfaces. In SIGGRAPH 90: Course Notes 23 (August 1990), ACM SIGGRAPH.


Figure 1: The effects of the power and sine functions on the gas shape. (a) has a power exponent of 1, (b) has a power exponent of 2, (c) has a power exponent of 3, and (d) has the sine function applied to the gas.


Figure 2: Solid textured transparency based fog.


Figure 3: Marble forming. The images show the banded material heating, deforming, then cooling and solidifying.


Figure 4: Effect of a spherical attractor increasing over time. Images are every 45 frames. The top-left image has 0 attraction. The lower-right image has the maximum attraction.


Figure 5: Spiral vortex. Images are every 21 frames. The top-left image is the default motion of the gas. The remaining images show the effects of the spiral vortex.


Figure 6: Preliminary steam rising from a teacup. (a) has no shaping of the steam. (b) has only spherical attenuation.


Figure 7: Final image of steam rising from a teacup, with both spherical and height density attenuation.


Figure 8: An increasing breeze blowing toward the right, created by an attractor.


Figure 9: (a) Gas flow into a hole in a wall. (b) Liquid Flow into a hole in a wall.


Figure 10: Liquid Marble Forming


Figure 11: Volumetric Procedural Implicit Cloud


Movie Placeholder


Volumetric Procedural Modeling and Animation

David S. Ebert
Computer Science & Electrical Engineering Department
University of Maryland Baltimore County

Overview

• Background and Introduction

• Modeling Gases

• Procedural Animation Techniques

• Animating Gases

• Animating Hypertextures

• Conclusion

Background & Introduction

Why Model Gases ?

Rendering System Considerations


Why Model Gases ?

Visual Realism

Artistic Effects


Rendering System Considerations

Volume Rendering Support

Illumination Issues

• Participating media - scatters, reflects, absorbs light

• Low-albedo models (single scattering)

• High-albedo models (multiple scattering)

Shadowing

Modeling Capability


Hybrid Volume and Surface Rendering

Volume Rendering Support

• Scanline a-buffer with volume tracing

Illumination Issues

• Low-albedo illumination model



Hybrid Volume and Surface Rendering (continued)

Shadowing

• Fast, efficient table-based volume shadowing

Modeling Capability

• Procedural volume density functions (solid spaces)

• Volume visualization support


Solid Spaces

Three-dimensional Spaces that Control Object Attributes

Examples:

• Solid texturing, hypertextures, volumetric shadows

Access:

• Find location of point in world space

• Map into solid space, evaluate solid space function


Modeling Gases

Previous Approaches

Volumetric Procedural Modeling Approach

• Turbulence-based approach

Gas Shaping Primitives

Example: Steam Rising from a Teacup

Volumetric Implicit Functions


Previous Approaches

Surface Approaches

• Hollow/ flat objects

• Interaction problems

• Fast

Volume Approaches

• Greater realism, flexibility

• Slower


Previous Approaches

Surface Approaches

• Constant density (Klassen; Nishita, et al)

• Solid textured ellipsoids (Gardner)

• Height fields (Max), Fractals (Musgrave, Voss)

Volume Approaches

• Physically-based clouds (Kajiya)

• Volume fractals (Sakas), Fuzzy blobbies (Stam)


Procedural Gas Modeling

Turbulence-based Procedures

• (Perlin’s noise and turbulence functions)

Shape Resulting Gas

• Simple mathematical functions

Originally Used for Solid Textured Transparency

Currently Used to Define Volume Density


Basic Gas Procedure

Density = (turbulence(pnt) * density_scaling)^exponent

• Exponent typically 1.0 to 10.0

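A minimal C sketch of this basic density procedure (not from the original notes; turbulence() stands in for an externally supplied Perlin-style turbulence function and the parameter names are illustrative):

#include <math.h>

extern double turbulence(double x, double y, double z);  /* Perlin-style, roughly in [0,1] */

/* density = (turbulence(pnt) * density_scaling)^exponent */
double gas_density(double x, double y, double z,
                   double density_scaling, double exponent)
{
    double d = turbulence(x, y, z) * density_scaling;
    if (d < 0.0) d = 0.0;              /* keep pow() well defined      */
    return pow(d, exponent);           /* exponent typically 1.0 to 10.0 */
}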

Gas Shaping Primitives

Power Function

Sine Function

Exponent Function


Steam Rising From a Teacup

Volume of Gas Over the Teacup

Basic Gas Procedure Used for Density

Shape Gas Spherically

Shape Gas Vertically


Volumetric Implicit Functions

Use Implicit Surface Primitives and Bending Functions

Procedurally Manipulate the Result as Desired

Volume Render the Resulting Function

Example: Volumetric Clouds


Procedural Animation Techniques

Change the Definition of the Space over Time

Move the Point Before Evaluation of the Procedure


Animating Solid Textures

Color

• Marble forming

• Marble moving

Transparency

• Flat fog


Simple Marble Function

Val = (sin((point.y + 3*turbulence(point))*PI) + 1) / 2

Calculate Color Based on Val


Marble Forming

Animation Technique:

• Change the definition of the space over time

Marble Formed From Turbulent Mixing of Different Materials

Simulate Banded Material by sin(y)


Marble Forming (continued)

Add Turbulence Over Time:

• percent = MIN((frame# - start_frame)/(end_frame - start_frame), 1.0)

• Val = (sin((point.y + 3*percent*turbulence(point))*PI)+1)/2

• Calculate color based on Val

Simulate Heating and Cooling

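A hedged C sketch combining the two formulas above (turbulence() is assumed to be an external Perlin-style function; the frame bounds are user parameters, and none of the names come from the original notes):

#include <math.h>
#define PI 3.14159265358979

extern double turbulence(double x, double y, double z);

/* Phase in the turbulence over the animation so the banded material appears
 * to mix: percent goes from 0 at start_frame to 1 at end_frame. */
double marble_forming_val(double x, double y, double z,
                          int frame, int start_frame, int end_frame)
{
    double percent = (double)(frame - start_frame) / (double)(end_frame - start_frame);
    if (percent < 0.0) percent = 0.0;
    if (percent > 1.0) percent = 1.0;
    return (sin((y + 3.0 * percent * turbulence(x, y, z)) * PI) + 1.0) / 2.0;
}
/* The returned value is then mapped to a colour, exactly as in the static
 * marble function above. */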

Marble Moving

Animation Technique:

• Move the point before evaluation of the procedure

Useful to Select Pieces of Marble

Move Point Over Time

Evaluate Turbulence Function for New Point


Animating Solid Textured Transparency

Animation Technique:

• Move the point before evaluation of the procedure

Useful for Flat Planes of Fog, Smoke, Clouds

Control transparency of “Hollow” Objects

• Transparency = (turbulence(pnt) * transp_scale)^exponent


Procedural Gas Animation

Helical Path Effects

Functional Flow Field Tables

Flow Function Primitives

Combination of Flow Functions


General Animation Approach

Animation Technique:

• Move point over time before evaluating procedure

Map Screen Space Point to Object Space

Move Through Gas Space Over Time


General Animation Approach (continued)

Evaluate Density Procedure

Path Direction Produces Opposite Movement

Path Specified by

• Helical Paths or Flow Field Functions


Helical Paths

Give General Direction of Motion with Some Variation

Example of Downward Helical Path

• θ based on the frame number

• x = sin(θ) * radius1

• z = cos(θ) * radius2

• y = - (down_velocity * frame_number)

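A short C sketch of the downward helical path (the angle_per_frame parameter is an assumption; the notes only say that θ is based on the frame number, and the cos on the z term follows the corrected slide above):

#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* Offset added to the gas-space point before the density procedure is
 * evaluated; moving the space downward makes the gas appear to rise. */
Vec3 downward_helical_path(int frame_number, double radius1, double radius2,
                           double down_velocity, double angle_per_frame)
{
    double theta = frame_number * angle_per_frame;  /* theta from the frame number */
    Vec3 p;
    p.x = sin(theta) * radius1;
    p.z = cos(theta) * radius2;
    p.y = -(down_velocity * frame_number);
    return p;
}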

Helical Path Effects

Steam Rising

• Downward helical path

Fog Rolling In

• Horizontal helical path


Helical Path Effects

Smoke Column

• Two downward helical paths

Teapot with Steam

• Two downward helical paths

Getting Into Art

• Horizontal helical path


Functional Flow Field Tables

Three-Dimensional Table

Entry Specifies Functions to Evaluate

Functions Control Gas Movement

• Direction vector, density & velocity scaling factors

• “Percent to Use” value

Provides More Complex Gas Motion Than Helical Paths


Flow Field Functions

Attractors & Repulsors

• Spherical, linear, planar

• Angle-limited, range-limited, directional

Vortices - visual simulation based on r = θ

• θ calculated from frame number

Combinations of Functions

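As an illustration only (none of these names come from the notes), a spherical attractor flow function might look like the C sketch below: it returns a unit direction toward the attractor centre, scaled by a strength that the animator can increase over time.

#include <math.h>

typedef struct { double x, y, z; } Vec3;

Vec3 spherical_attractor(Vec3 p, Vec3 center, double strength)
{
    Vec3 d = { center.x - p.x, center.y - p.y, center.z - p.z };
    double len = sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len > 0.0) {                      /* normalize the direction vector */
        d.x /= len;  d.y /= len;  d.z /= len;
    }
    d.x *= strength;  d.y *= strength;  d.z *= strength;
    return d;                             /* added to the point's path through gas space */
}

Angle-limited and range-limited variants simply zero or attenuate the result outside a chosen cone or radius.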

Combination of Functions

Wind Effects

Flow Into an Opening


Wind Effects - Gentle Breeze

Helical Path (for upward swirling)

Spherical Attractor:

• Strength increases over time

• Moves volume of steam


Flow Into An Opening

Motion Created by

• Angle limited spherical attractor

• Angle limited spherical repulsor

• Linear attractor


Animating Hypertextures

Same Procedural Animation Techniques

Flow Into an Opening

Marble Forming


Marble Forming

Density “Based” on Marble Color

Animation Created by Solid Texture Function

• Solid texture function’s color based on

– val = sin((pnt.y+percent*3*turbulence(pnt))*PI)

• Density = (sqrt(val + 1.005)/2.005)*0.5 + 0.5

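A C sketch of the density computation above (turbulence() is an external Perlin-style function; the parenthesisation of the density line is ambiguous in the notes, so the grouping below is an assumption):

#include <math.h>
#define PI 3.14159265358979

extern double turbulence(double x, double y, double z);

double marble_density(double x, double y, double z, double percent)
{
    double val = sin((y + percent * 3.0 * turbulence(x, y, z)) * PI);
    /* assumed grouping: (sqrt(val + 1.005) / 2.005) * 0.5 + 0.5 */
    return (sqrt(val + 1.005) / 2.005) * 0.5 + 0.5;
}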

Conclusion

Important Aspects

• Flexible, volume modeling system

• Accurate illumination and shadowing

Procedural Gas Modeling

• Flexible, turbulence-based volume modeling

Procedural Animation - Flexible, Extensible

• Two general techniques



Chapter 4: Building and Animating Implicit Surface Models

Brian Wyvill

4.1 Introduction

The idea of building 3D models by using equations which describe their shape is often the approach taken when first confronted with the problem of making a geometric computer model. For example the implicit equation:

x² + y² + z² − r² = 0    (1)

describes a sphere in that (x, y, z) values that satisfy the equation are on the surface of the sphere of radius r. It soon becomes obvious that this is not necessarily a good approach for computer graphics, since finding the right (x, y, z) values requires some kind of a search procedure. However a sphere can be defined by the parametric equations:

x = r cos(θ) sin(φ)

y = r sin(θ)

z = r cos(θ) cos(φ)

where (0 ≤ θ ≤ 2π) and (0 ≤ φ ≤ π)

All points on the surface can be found by choosing values for the parameters θ and φ as indicated above. Thus the parametric approach is the one usually taken in computer graphics. However, despite the difficulty of finding the surface defined by the implicit equation there are a number of advantages of using this technique outlined in this paper. For a given (x, y, z), equation 1 will produce a value that corresponds to the point being inside the sphere (negative), or outside the sphere (positive), or on the surface of the sphere (zero). The sign is easily changed by re-arranging equation 1. Later in this paper I have adopted the opposite sign convention, for compatibility with other work. For each point in space there is a corresponding value calculated from the implicit equation. A scalar field is one in which every point has a scalar value, thus an implicit equation defines a scalar field. For equation 1 the surface is defined by points which have a value

of zero in the field. Since these points have equal values they form an iso-surface (or contour surface) and by simply choosing a constant other than zero, different iso-surfaces can be modeled.

Using implicit equations as a starting point, we have begun to think of models as defined by iso-surfaces. The field values define the shape of the model. Thus any procedural way of producing the right field values will in turn define the desired shape of the model.

Jim Blinn introduced the idea of modeling with iso-surfaces as a side effect of visualizing electron density fields [1]. Such models have various desirable properties including the ability to blend with their close neighbours. These models have been given a variety of names, in particular: Blobby Molecules (Blinn), Soft Objects (Wyvill) [26] and MetaBalls (Nishimura) [15]. Jules Bloomenthal pointed out that these models could be grouped under the more general heading of implicit surfaces, defined as the point set F(P) = 0 [2]. In these course notes I have concentrated on implicit surfaces built from skeletons rather than from implicit equations directly.

A skeleton is composed of a number of skeletal elements. A scalar field is defined around a skeletal element. The shape and a variety of properties of a skeletal element are discussed below. Implicit surface modeling techniques are now beginning to penetrate the animation industry. Several examples in commercial animation exist, including at least one commercial system (the MetaEditor - Meta Corporation), an interactive editor which uses metaballs. In this paper I have outlined the major trends and techniques using skeletal implicit surfaces; various problems of the technique and areas of future research are also discussed.

These notes are organized as follows:

• Introduction, Parametric and Implicit Surface definitions.

• Skeletal Implicit Surface Models (implicit, distance, blending and weighting functions)

• Rendering (Polygonizing, Shrinkwrap, Breakfast algorithm, glowing objects)

• Animating Implicit Surfaces (curve following, warping, blending, negative fields, metamorphosis).

4.2 Skeletal Implicit Surface Models

The basic idea is that a model can be built from a primitive skeleton by combining elements such as points, lines, polygons, circles and splines. A surface representing a blended offset from the skeletal elements is calculated and visualized. The skeletal elements are linked hierarchically. At each frame an implicit surface encloses the skeleton using the techniques described in [4]. In general, any three dimensional object can be a part of the skeleton, as long as it is possible to determine the distance from a given point in space to the object.

The skeleton and its primitives can be viewed as black box functions. Such a function will provide an implicit field value and the gradient of the field at a given point. The basic method for composing the primitive functions is by summing the field values. The gradient is calculated from a weighted average of

Figure 1: Glass Dinosaur

the gradients from the individual primitives weighted by the field value. Other methods are discussed later in this paper.

An example of the skeleton for a dinosaur, constructed from spheres, ellipsoids and lines, is given in figure 1. The primitives are shown as small solid icons; the fields around the actual primitives will overlap to produce the blended surface represented by the simulated glass in the figure.

Skeletons are useful for several reasons:

• Skeletons provide intuitive representation for many natural objects.

• Skeletons themselves are easily manipulated and displayed.

• Skeletons provide a more concise representation than parametric surfaces.

• As in Constructive Solid Geometry (CSG), complex shapes can be modeled with few elements. But unlike CSG it is much easier to add new primitive types. (CSG requires O(n²) intersection algorithms for n primitive types, implicit surfaces require O(n) distance algorithms).

4.2.1 The Implicit Function

The skeleton is surrounded by a scalar field F_total(P) (equation 2). The intensity of the field is highest on the skeleton, and decreases with distance from the skeleton. The function F_total(P) relates the field value (intensity) to distance from the skeleton; it has an impact on the shape of the surface, and determines how separate surfaces blend together (see [9]). The surface is defined by the set of points in space for which the intensity of the field has some chosen constant value (or iso-value, thus the name iso-surface). Fields from the individual elements of the skeleton are added to find the potential at some chosen point. (Values can be negative or positive). The value at some point in space is calculated as follows:

F_total(P) = Σ_{i=1..n} c_i F_i(r_i)    (2)

where

P is a point in space
F_total(P) is the value of the field at P
n is the number of skeletal elements
c_i is a scalar value (used for positive or negative elements)
F_i is the blending function of the ith element
r_i is the distance from P to the nearest point Q_i on the ith element.
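A minimal C sketch of equation 2 (the element representation and the function pointers are illustrative only, not the notes' actual data structures):

typedef struct { double x, y, z; } Point3;

typedef struct {
    double c;                                   /* weight c_i (may be negative)    */
    double (*distance)(Point3 P, void *geom);   /* r_i: distance from P to element */
    double (*blend)(double r, void *geom);      /* F_i: blending function          */
    void *geom;                                 /* the element's geometric data    */
} SkeletalElement;

/* F_total(P) = sum over all elements of c_i * F_i(r_i) */
double field_value(Point3 P, SkeletalElement *e, int n)
{
    double f = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        double r = e[i].distance(P, e[i].geom);
        f += e[i].c * e[i].blend(r, e[i].geom);
    }
    return f;   /* points where f equals the iso-value lie on the surface */
}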

4.2.2 Distance Function

The evaluation of F_total(P) has two steps. The first step involves finding the nearest point Q_i on the skeletal element to the given query point P and calculating the distance between them. The second step involves evaluation of the blending function discussed in the next section.

This procedure depends on the geometry of the skeletal element and can be very simple (trivial in the case of a point skeleton), or quite complex in the case of spline curves and patches, when an iterative or numerical method is necessary. For example, to implement a straight line primitive of thickness t, write a function that given a point in space returns the distance to the nearest point on the line. Figure 2 shows various possibilities. AB is the line. From the query point, drop a perpendicular to AB; P1 is positive and P2 is negative. The perpendicular to the line from P3 is not on the segment AB so the distance to the nearest end point (A or B) is determined and compared against t. This method produces cylindrical lines with hemi-spherical ends.

If the line is represented by vector v, a is the vector from A to C, b is the vector from A to the point in question, P, then by projecting P onto the line we get:

a = (v / ‖v‖²) (b · v)    (3)

The distance of P from the line is returned and plugged into the blending function.
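A C sketch of the line-primitive distance query just described (projection of the query point onto the segment, clamped to the endpoints, giving a cylinder with hemispherical ends); the variable names loosely follow figure 2 and are otherwise illustrative:

#include <math.h>

typedef struct { double x, y, z; } Point3;

static double dot3(Point3 a, Point3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

double line_primitive_distance(Point3 P, Point3 A, Point3 B)
{
    Point3 v = { B.x - A.x, B.y - A.y, B.z - A.z };   /* the segment AB            */
    Point3 b = { P.x - A.x, P.y - A.y, P.z - A.z };   /* from A to the query point */
    double t = dot3(b, v) / dot3(v, v);               /* projection onto the line  */
    Point3 q, d;
    if (t < 0.0) t = 0.0;                             /* nearest point is A        */
    if (t > 1.0) t = 1.0;                             /* nearest point is B        */
    q.x = A.x + t * v.x;  q.y = A.y + t * v.y;  q.z = A.z + t * v.z;
    d.x = P.x - q.x;      d.y = P.y - q.y;      d.z = P.z - q.z;
    return sqrt(dot3(d, d));                          /* fed to the blending function */
}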

Figure 2: Line Primitive

We can see then that a primitive, or skeletal element, is just a black box procedure that returns a scalar value. Another example, derived from the line primitive, is an offset surface from a planar polygon. For a given point P, first project P onto the polygon; if the projected point is inside the polygon (see [?]), return the distance between P and the projection of P; if outside, test against each line as indicated above and return the minimum distance. Since the scalar field value is greatest at a distance of zero from the primitive, a value representing infinity is needed to indicate that P is not influenced by this primitive.

4.2.3 Blending Function

Figure 4 shows some blending spheres. As the primitives approach each other they smoothly blend using the blending function introduced in [26] and pictured in figure 3. In Blinn's earlier work [1] he used an exponential function, since he was modeling electron density fields. However it is more convenient for skeletal modeling if the blending function falls to zero, so that fewer primitives need be considered as affecting the field. (Further work has been done to reduce this, see [26]). The cubic of figure 3 has other properties, such as: it has zero derivatives at r = 0 and r = R, and it is symmetrical about the contour value 0.5; we use this value for convenience in our modeling system. This function makes for nice smooth blends as shown in figure 4.

The blending function may be modified by noise or other perturbation function as described in [9]. The warping described below (see section 4.5.6) is separate from these modification functions. The surface

Figure 3: Cubic Blending Function F_cub(r) = −(4/9)·r⁶/R⁶ + (17/9)·r⁴/R⁴ − (22/9)·r²/R² + 1

Figure 4: Spheres Blending using the cubic blending function and demonstrating the glow.

Figure 5: Lines blended with the cubic blending function (left) and an arctan function (right)

is controlled by applying local or global transformations, such as scaling, translation, and rotation, to the elements of the skeleton, and by changing the blending functions. Also in [9] smoother (higher continuity) blends are described.

Figure 5 shows two line primitives blending. The left hand pair of lines are blended with the original cubic, giving a soft blend. The right hand pair use an arctan function which has the property of dropping off very quickly around the contour value, giving rise to a hard blend.
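The cubic of figure 3 is straightforward to implement; a small C sketch:

double cubic_blend(double r, double R)
{
    double s;
    if (r >= R) return 0.0;        /* no influence beyond the radius R */
    s = (r * r) / (R * R);         /* work in terms of (r/R)^2         */
    return -(4.0/9.0)*s*s*s + (17.0/9.0)*s*s - (22.0/9.0)*s + 1.0;
}
/* F(0) = 1, F(R) = 0, with zero derivative at both ends and value 0.5 at r = R/2. */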

4.2.4 Weighting Function

Equation 2 contains a scalar weight (c_i); this value is used to increase or reduce the effect of the primitive on the surrounding field, thus allowing more control over the shape of an object in an interactive modeling system. Also, the weight may be negative so that the primitive subtracts from the field. An example is shown in figure 6. An offset surface from a triangle has a negative cylinder placed across the flat part of the surface.

4.2.5 Modeling Problems

Before better interactive design tools can be built, solutions need to be found to a number of outstanding problems. Firstly, the method is not localized sufficiently. Moving a skeletal element (primitive) has a global effect on the field. Some progress towards building a hierarchical scheme, where primitives may be subdivided and their children have limited effect, has been made [8] but as yet, the design tools do not have the intuitive feel of Forsey's hierarchical parametric surface editor [6]. A second problem is known as the bulging problem. Fields from neighbouring skeletal elements are summed and produce a bulge where

Figure 6: Negative Cylinder on Positive Triangle

their combined values are equal to the iso-value. Jules Bloomenthal and Ken Shoemake offered a solution to this problem [3], however their convolution surface technique is relatively slow and involves a lengthy pre-processing step at a discrete resolution. Another possibility would be to use some other function as an alternative to taking the sum of the implicit values. For example, given two primitives, A and B, for some point P the values could be combined thus: V_total = (V_A + V_B) / (V_A² + V_B²). The bulge is reduced using this technique, although not eliminated.

Another source of further research is the unwanted blending problem. A good example is a human hand. The fingers blend at their roots but not along their length. This has been partially solved in the Meta-Editor by introducing the concept of a primitive which has an asymmetric effect on the surrounding field. Another approach is to find the implicit value by taking the maximum of the contributions of the surrounding primitives. This technique was first introduced by Thad Bier [23]. This produces no blending between primitives. Blending can thus be designated between groups of primitives in a graph like structure [22]. Figure 7 shows an example which extends this technique in a similar manner to that described in [7]. In regions affected by two or more groups that don't blend, the field value is calculated by taking the maximum contribution from any group and subtracting the contributions from the other groups. Models produced using this technique polygonize badly as can be seen between the fingers.

Figure 7: Unblending Primitives (Courtesy Andrew Guy University of Calgary)

Figure 8: Wheel showing blending and CSG operations

Figure 9: Train showing blending and CSG operations

4.3 ISM and CSG

Implicit Surface Models and models built by Constructive Solid Geometry have in common the fact that the primitives define a closed volume of space. For an explanation of CSG see [17] and [13]. ISM has been used in conjunction with CSG, for example see [25], using ray tracing to both traverse the CSG tree and render the objects. Recently, work at the University of Calgary has produced a polygonizer to handle both CSG and ISM primitives. The wheel (Figure 8) shows some cylinders blended with a torus; the sides are flattened by taking the intersection with two cutting planes. Another torus is subtracted to form the flange and a third to form the groove in the wheel's surface. Finally two half space planes have been intersected with the wheel to reveal the inside.

The wheel is formed by defining CSG operations on groups of ISM primitives. We call such a group a gang since the word group has other particular meanings in our system. As can be seen this works well where the relationship between the ISM primitives is simple, i.e. all primitives within a group are blended. Another example is the train model of figure 9.

In the following table we list some significant differences between CSG features and ISM features.

property | CSG features | ISM features | BCSO features
primitives | geometric primitives, e.g. sphere, cone, torus | skeletal elements, e.g. soft ellipsoids, soft lines, soft polygons, ... | unlimited variety of ISM's, either skeletal elements or blended groups thereof
junctions | unblended (non-C1), difficult to get C1 | blended (C1), difficult to get non-C1 | both C0 and C1 in one object
rendering | either output b-rep surface or ray trace while doing CSG-classification for each traced ray | either output polygon mesh or ray-trace, finding intersections with ISM for each ray | CSG operations performed on-the-fly during polygonisation
main areas of application | CAGD, industrial design and NC-machining | computer (cartoon-type) animation, (pseudo-)biologic or organic looking shapes, e.g. also in the realm of industrial design | traditional CSG combined with ISM

For further information on this work please see [24].

4.4 Rendering

Implicit surfaces can be visualized by using a ray tracer. The ray surface intersection can be found by one of a number of numerical techniques. A good survey of these techniques is given in [23]. A popular method of rendering implicit surfaces is to convert the surface to a polygonal approximation and then use a standard renderer to visualize the polygons. A survey of methods of doing this polygonization is detailed in [14]. In an interactive environment, the surface must be visualized as fast as possible, to keep up with changes to a model entered interactively by the user. An additional advantage of the polygonization approach is the availability of a full 3-D approximation of the implicit surface which allows for fast viewing from arbitrary directions.

4.4.1 Polygonizing Implicit Surfaces

Most currently existing techniques for the polygonization of implicit surfaces are based on data structures that allow spatial indexing: a voxel-based structure [2] or the hash-table structure [26] may be used. The data structures have some inherent disadvantages. Firstly, the data structure comprises a partitioning of the space rather than a tesselation of the surfaces to be polygonized. Especially in the case of animation (e.g. in the computer animation "The great train rubbery", [21]), this is likely to cause geometric artifacts that

are fixed with respect to space, thus moving in an incoherent way over every animated surface. Second, there is an apparent mismatch between the number of triangles that is generated by these algorithms and the complexity of the surface that is approximated: even relatively smooth and flat segments of an implicit surface usually result in large amounts of facets. Bloomenthal ([2]) uses an adaptive version of the spatial indexing data structures, an octree, in order to reduce the amount of polygons produced in tesselating an implicit surface. This indeed reduces the amount of polygons generated, but full advantage of large cells can only be taken if the flat regions of the surface happen to fall entirely within the appropriate octants. The algorithm proves in practice to be considerably slower than the uniform voxel algorithm [26], and is very complicated to implement.

4.4.2 Shrinkwrap

A new, fast, adaptive algorithm, called ShrinkWrap, was offered in [18]. This algorithm took the following approach. When constructing an adaptive tesselation for a curved surface the most obvious parameter to use as an indicator for the local tesselation-resolution is the local curvature of the surface. In many cases, a measure for this local curvature (the Gaussian curvature) can be computed analytically. This is also true for implicit (or equi-potential) surfaces in the case of 1/r-type potentials. When tesselating the surface, i.e. approximating the surface by a discrete set of samples plus some connecting topology, the interpretation of the value of this curvature in the sampled points is not at all clear. For instance, the surface may be highly curved between two adjacent sample vertices, but if it happens to be flat in these vertices we won't know, and the tesselation is likely to miss this curved feature. In this algorithm the tesselation consists of a mesh of triangles, but the error analysis takes place on the edges of the triangles (the chords) rather than on the triangles themselves. Chords are considered as approximations of segments of curves in the implicit surface.

The following criteria lead to a definition of an acceptable surface. ε, L and Lmax are real-valued parameters that are used to characterize the algorithm.

• the surface is given by f(r) = 0 where r ∈ IR³;

• a chord is a tuple (a, b, n_a, n_b) ∈ IR³ × IR³ × IR³ × IR³ where f(a) = f(b) = 0 and n_a = ∇f(a) and n_b = ∇f(b);

• the surface is called acceptable iff for every chord on the surface ∠(n_a, n_b) ≤ L·|a − b|, where ∠(p, q) is the angle between p and q;

• a chord is called acceptable iff ∠(n_a, n_b) ≤ ε and |a − b| ≤ Lmax;

• a triangle, consisting of three chords, is acceptable iff all three chords are acceptable.

The parameter L is essentially the Lipschitz parameter of the surface to be tesselated. The Lipschitz criterion bounds the variation of the derivative of a function between neighbour points in the domain of that

Figure 10: chord criteria

Figure 11: triangle subdivision

function. The Lipschitz criterion has been used in 1989 by Kalra and Barr [10] to compute ray-intersections with implicit surfaces with a guaranteed accuracy. Before that date, B. von Herzen [19] studied its application in computer graphics.

The algorithm attempts to create a tesselation which consists of acceptable triangles. The simplest closed polygon mesh is a tetrahedron, consisting of 4 vertices, 4 triangles and 6 chords. If one or more chords are unacceptable, they have to be split. Figure 11 shows a splitting scheme which illustrates how a triangle can be subdivided into smaller triangles. In a, one of the chords is subdivided; in c, three chords are subdivided; and in b, one of the two possible ways of subdividing two chords. This subdivision scheme can be used to assign surface coordinates to all newly created vertices to be the average of the surface coordinates in the two extreme vertices of the split edge. The new midpoint is then moved onto the surface by an iterative method (see [18]).

One of the problems is that if the surface is too convoluted, the iterative algorithm mentioned above will not be able to converge onto the surface. However we observe that equi-potential surfaces

{ r ∈ IR³ | Σ_i λ_i / |r − R_i| = V0 }

with V0 < 1 have a shape which is less involved, whereas equi-potential surfaces with V0 > 1 are more involved (see Figure 12). An extreme case of the first example is V0 = 0, which produces a sphere with an

Figure 12: Implicit Contours (courtesy K. van Overveld)

infinitely large radius. We take advantage of this observation and start with a tetrahedron which is compared to a surface with a low value of V0. The triangles are then split according to the acceptability criteria and the resulting mesh then compared with a surface where the value V0 is increased. Thus we have an iterative process which creates gradually more involved surfaces.

Figures 13 and 14 show the development of a penguin model through successive contour values, using the Shrinkwrap algorithm.¹

4.4.3 The Breakfast Algorithm

The above process does not solve all the problems. As the values of V0 are increased the surface can break off into separate parts. The above algorithm cannot cope with holes (as in a toroidal model) or separate manifolds. The Breakfast Algorithm partially solves the problem by using the voxel algorithm [26] to identify separate surfaces and then applies Shrinkwrap to each of these. Voxels are classified as inside, where all vertices have an implicit value greater than the iso-value; outside, where all vertices have an implicit value less than the iso-value; or containing, in which at least one vertex has the opposite sign. A search algorithm is applied to produce groups of voxels that have neighbours in the containing category. These groups of voxels are polygonized and form the starting mesh for applying the Shrinkwrap Algorithm. At first sight it seems that the Breakfast Algorithm combines the best of both the previous approaches. However the number of manifold surfaces emerging from the first pass is dependent on the size of the voxels, in our system a user chosen parameter. Further experiments have to be done to establish the utility and efficiency of these algorithms.

4.4.4 Ray Tracing Glowing Objects

Making an object glow is a useful effect in computer animation. This can be done relatively easily with implicit surface models. A glow should be seen surrounding an object and fade to zero at some distance away. A value for the brightness of the glow can be obtained as a function of the field in which the model is defined.

This has been implemented as a part of a ray tracer. The nearest distance between each ray and each primitive ellipsoid is calculated. The point on the ray is shown as P in figure 15. One way to calculate the glow is to take the value of P corresponding to the shortest of these distances and pass it to equation 2 to obtain the implicit value, v. This is used to calculate the brightness of the glow as follows:

G_λ = m_λ · v'    (4)

where

v' = 0     if v < 0
v' = 0.5   if v > 0.5
v' = v     otherwise    (5)

¹ This work was developed with Kees van Overveld, University of Eindhoven, who also created the Penguin model.

Figure 13: Penguin at Varying Potentials (courtesy K. van Overveld)

Figure 14: Final Penguin (courtesy K. van Overveld)

and m_λ ∈ [0, 1] are scalars controlling the intensity of independent wavelengths. There is a problem with this procedure. Consider two neighbouring rays, Ra and Rb. The nearest

primitive to Ra may be different from the nearest primitive to Rb, thus returning two different points with correspondingly different implicit values, causing a discontinuity in the glow. An obvious method around this problem would be to find the closest distance for each primitive and sum the implicit values contributed by each primitive for each ray. Since each primitive contributes a value between 0 and 1, the sum can be normalized by simply dividing by the number of primitives. This procedure causes the glow to be very dense near areas where there are lots of primitives grouped together, but far too sparse in areas where there are few primitives. For example, figure 16 shows a dinosaur model next to its glowing counterpart. The tail section received no glow by this method. To achieve the even glow in the figure, the points along the ray representing the closest point to each primitive are selected as above. The implicit values are calculated as before but only the N highest values are used for the glow calculation. It has been found empirically for our models that N = 3 is a reasonable number to choose. With N < 3, discontinuities appear; with N > 3, no glow is seen where there are few primitives.

Figure 4 shows two rows of merging sphere primitives. The top row has been raytraced and a texture applied. The bottom row is the same set of textured primitives with a glow added. The advantage of using implicit surface models is that the glow can be calculated directly from the field as shown above. The shape

P = R0 + (Rd · (M0 − R0)) Rd

(Rd is the unit vector in the ray direction, M0 the modeling primitive origin, R0 the ray origin; points are taken as vectors from the world origin)

Figure 15: Finding the nearest distance from a ray to an implicit surface primitive

of the glow follows the zero contour. Values of the field greater than zero are brighter than the background. It is also possible to change the shape of the glow, by altering the blending function used in the expression

for F_total(P). For the dinosaurs we use the cubic (introduced in [26]) shown in figure 3. The glow field can be increased by using a function that falls to zero more slowly than the blending function for the train model itself.
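Equations 4 and 5 amount to a clamp followed by a per-wavelength scale; a minimal C sketch (one call per colour channel, names illustrative):

double glow_brightness(double v, double m_lambda)   /* m_lambda in [0, 1] */
{
    double vp = v;
    if (vp < 0.0) vp = 0.0;        /* v' = 0    if v < 0   */
    if (vp > 0.5) vp = 0.5;        /* v' = 0.5  if v > 0.5 */
    return m_lambda * vp;          /* G_lambda = m_lambda * v' */
}

In practice v is taken from the N = 3 highest per-primitive contributions along the ray, as described above.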

4.5 Animation

Various animation techniques have been devised to make use of the blending and other properties of implicit surface models. For example:

• Path Following

• Negative Primitives

• Metamorphosis

• Collision Detection (see [7])

• Warping

Traditional animators often criticize 3D computer generated animation for the stilted way objects move. Computer generated objects or characters tend to lack the subtleties in motion seen in traditional animation.

Figure 16: When Dinosaurs Glowed!

Characters do not have to be humanoid; objects can be given character such as the brooms in the Sorcerer's Apprentice sequence from Disney's Fantasia. In other words they are anthropomorphic. The motion of such objects is controlled to a fine degree to characterize their movement. Characters tend to bend as they move. At times, they must conform to their surroundings: a figure sitting in a chair is an example, or flowing water. In computer animation, popular modeling techniques such as polygon meshes or spline patches do not lend themselves to the manufacture of objects which can be given this type of motion. However implicit surface models do lend themselves to certain kinds of shape change. The intention is to let the animator design the skeleton of the character or object and then automatically clothe this skeleton with a surface. If the skeleton moves then the surface changes its shape smoothly to conform. If the skeleton undergoes metamorphosis to a totally different skeleton or inbetweens to a skeleton in a new position then a new surface is calculated at every frame. The surface is represented in such a way that it maintains nearly constant volume as the skeleton moves, thus providing convincing character coherence. This property of coherence is very important. Intuitively, it means that throughout metamorphosis, the model remains recognizable as the same character. A model built from skeletal elements can be moved without shape change by maintaining the relative spatial relationship between the elements, in the absence of any influencing field. Achieving shape change without distorting the model beyond recognition requires that certain constraints be placed on the animation system. The following techniques can be used to animate a model built from a skeleton to achieve

Figure 17: Train Skeleton

shape change:

• Geometric Transformation: motion of skeletal elements relative to each other.

• Altering the Field: influence of global or local field and warping.

• Altering parameters describing a skeletal element.

The following sections describe how these methods can be used by high level motion specifications to create the appropriate blended surfaces. It should be noted that this method does not manufacture accurate human models. However it does provide a very fast way of producing blended surfaces that can approximate a humanoid character. The methods described here are low level; they affect individual skeletal elements or groups of skeletal elements. It is intended that they can be assimilated into a higher level animation system.

Figure 18: Bending the train

4.5.1 Geometric Motion

By simply altering the relative spatial relationship between skeletal elements a surface can be made to alter its shape. Figure 18 shows the train model with the position of the skeletal elements marked. By moving the skeletal elements the surface blends smoothly and the train appears to bend.

4.5.2 Path Deformation

Motion paths are extremely useful when applied to the skeletal elements of an implicit surface model. A cartoon-like character can be made to conform to its surroundings by using the shape of an object to define the path which a group of skeletal elements must follow. This effect was demonstrated in the movie SOFT (see [20]). The character in this case was a group of letters spelling the word “SOFT”. Each letter was made from about 20 skeletal elements; the path was drawn up a flight of stairs by marking various positions and passing a spline through these points using a spline technique similar to that described in [11]. Each skeletal element in the character is moved to the interpolated path position at each frame. Each point along the path represents a position at a particular instance in time. Rather than define a separate path for each key, the skeletal elements are grouped so that each group is moved together to the next point in the path. Normally a group is chosen according to some spatial relationship, for example all skeletal elements within a specific range of z values. To maintain “character coherence” the relative positions of the skeletal elements within a group must be maintained within certain constraints. In this case the amount each group is allowed to vary its position is constrained by the change along the path. Each skeletal element maintains its position relative to the group. It is the origin of the group that follows the path. The positions of the groups change relative to the other groups by an amount specified in the path. Thus the model undergoes the desired deformation but maintains the overall shape and retains the character coherence property which distinguishes shape distortion from metamorphosis. When the letters are moved up the stairs in the movie SOFT the direction of motion was along the negative z axis, so the groups were chosen as skeletal elements with the same z value. As each group advances along the path the y value of the group was altered according to the path specification. Figure 18 is another example of path animation. The train model is comprised of a number of elements arranged in vertical columns (see figure 17); each column is moved as a single unit to the next point along the path.

4.5.3 Altering the Field

Implicit surface objects may also be animated by altering the field in which the objects exist. Local deformation can be achieved by moving other skeletal elements relative to these objects, or by placing some global influence into the field. Global deformation is more difficult to specify in a completely general way. The field itself could have some external influence such as a plane of constant value. Objects approaching that plane will deform according to the value chosen. This technique has been adopted along with warping, see section 4.5.6.

4.5.4 Inbetweening and Metamorphosis

One area in which implicit surface models are particularly useful is in metamorphosis, or inbetweening. A method suitable for two-dimensional (cartoon) models would have the character drawn on a vector oriented display in two positions, and the inbetween frames interpolated by the system. The simplest way to do this is to interpolate each point of the first (source) object to a corresponding point on the second (destination) object. Difficulties arise if the number of points is different on source and destination objects. New points have to be created or several points have to collapse onto a single point. Even if the number of points is the same on the two objects, if they are distributed in a different way the image will be scrambled as each point is interpolated. Implicit surface models always guarantee a closed surface, so this problem does not arise. However, it is still easy to lose character coherence in the inbetween versions.

Inbetweening is not only used to show motion of a character from one position to another; it can also be used to show metamorphosis from one character to another. If the characters are very different in shape and number of skeletal elements, then the scrambling problem is difficult to avoid. Peter Foldes uses software by Burtnyk and Wein [5] in the film “La Faim” and exploits this technique to good advantage. However avoiding scrambling is a difficult and tedious task which requires very careful design of the skeletal element positions for inbetweening.

Burtnyk and Wein developed a computer version of this technique using skeletons. These skeletons defined a conformal mapping from one key frame to the next. The space itself was distorted, thus any line within the space was similarly distorted according to the mapping function. In contrast, 3D computer generated characters are often moved by applying geometric transforms to the different parts, which changes their relative positions but does not necessarily give the character a smooth change of shape as can be achieved using the 2D techniques. An effective way of producing shape change in 3D animation is to use the inbetween technique. However extending the 2D technique to 3D introduces new problems. Reeves points out in [16] that it is difficult to identify corresponding points (and polygons) on different characters. Even with functional representations, the parameters from which a surface is manufactured must be chosen so that the source model matches the destination model. Each of the parameters defining the source model must be changed to one of the parameters defining the destination model. The matching process chooses the appropriate destination parameters corresponding to the source parameters. At each intermediate stage during the inbetween, a model will be manufactured from an interpolated set of parameters.

In the following paragraphs several different heuristics for matching the models are presented. The shape of the intermediate models varies according to the chosen method, based on one or more of the heuristics.

4.5.5 Heuristics for Point Matching in Metamorphosis

In this section four approaches are described that we have found useful for defining the matching process. Although the implicit surface modeling system has been used to illustrate how these heuristics may be applied, the methods are general and can be extended to other modeling techniques. In practice an animator will want to experiment with different combinations of these techniques to arrive at the desired effect.

We start with two models: the source model and the destination model. The source model must be made to change into the destination model. The models are defined as a set of skeletal elements as described above. Although point primitives are used in the discussion on metamorphosis, these techniques do have application using other geometric primitives. Each ellipsoid skeletal element has the following properties:

Axes Vectors: v1, v2, v3
Position: x, y, z
Force: F

Each of these methods assumes that the objects have been pre-processed so that there are the same number of skeletal elements defining each object. This may involve creating zero weighted skeletal elements. A skeletal element can be weighted using the force, F, which scales the contribution to the field, or by scaling the axes. A zero weighted element has its axis vectors set to zero or F = 0. When a source element is inbetweened to a destination element, at each frame a new element is chosen which has an interpolated value for position, axes vectors and force. The start or finish position of the new skeletal elements is chosen by the appropriate method.

4.5.5.1 Hand Matched

The simplest method of establishing which skeletal elements are to be interpolated is to order the source and destination skeletal elements by hand and to process each pair in turn. Since the number of skeletal elements is small compared to the number of polygons in an equivalent model, this method is feasible for some objects. However computer animation is generally moving towards higher levels of control so this method is considered a last resort.

4.5.5.2 Hierarchical Matching

In this heuristic it is assumed that each model is represented by a hierarchy of skeletal elements. Each node in the hierarchy has an arbitrary number of sibling nodes and one or zero child nodes. Nodes are matched at the same level in the hierarchy. It is assumed that the hierarchies are designed so that each level of the source object has an equivalent level in the destination object. If a man is to change into a rabbit the heads will be matched, the arms of the man can match the front legs of the rabbit, and so on. The main problem with this approach is that the hierarchies have to be constructed carefully. Not only do the levels have to match but within a level the nodes must either be ordered or labeled to match. Despite these drawbacks this method is still preferable to ordering all the skeletal elements by hand and, for small sets of skeletal elements with suitable interactive tools, quite acceptable.

Figure 19: Cellular Inbetweening

4.5.5.3 Cellular Inbetweening

In this technique the models are matched corresponding to the space they occupy. The world is first divided into a 3D grid of cells. This is done by finding the extents of each model and manufacturing the corresponding rectangular box. Each box is then divided along the x, y, z axes by some user defined amount. The two boxes may be different shapes but they are divided into an equal number of cells. The skeletal elements are then sorted into the cellular grid and the skeletal elements in each source grid cell are then interpolated to the skeletal elements in the corresponding grid cell of the destination model. The objects can have different sizes but the method tries to maintain some sort of position coherence between source and destination objects. Figure 19 shows a 2D version of how the skeletal elements are matched. Circles with similar shading patterns are matched between source and destination. In the top diagram the points match exactly: for every element in every cell in the destination object there is a corresponding element in the corresponding cell in the source object. In the lower diagram there are some cells containing skeletal elements in the destination object for which there are no skeletal elements in the corresponding cell in the source object. In this case a zero weighted element (indicated as a small circle) will be manufactured in the source object. Similarly skeletal elements which exist in the source object are grown in the destination.

4.5.5.4 Surface Inbetweening

In this method there is no matching necessary. All the skeletal elements from both source and destination objects define each intermediate model. However the force property of each source element is weighted. The weighting is gradually changed from one to zero as the inbetween progresses. Also dependent on time is a second weighting applied to the force property of the destination skeletal elements. This value changes from zero to one. The shape of the weight value vs. time curve controls the shape of the intermediate model.

Figure 20: Surface Inbetweening

This is shown in figure 20. A simple linear interpolation means that both source and destination objects are reduced to half the weight half way through the simulation. In practice this gives poor results as the surfaces around each element no longer merge. If the source is weighted by a cosine function and the destination weighted by a sine function, each object is never weighted by less than 1/√2. The objects can still be matched by one of the sorting techniques (hand, hierarchy, cellular), but the inbetweening process is different using surface inbetweening. An example is shown in [23].
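A C sketch of the cosine/sine weighting (t runs from 0 to 1 over the inbetween; the names are illustrative, not from the original system):

#include <math.h>
#define HALF_PI 1.5707963267948966

void surface_inbetween_weights(double t, double *w_source, double *w_dest)
{
    *w_source = cos(t * HALF_PI);   /* 1 at the start, 0 at the end */
    *w_dest   = sin(t * HALF_PI);   /* 0 at the start, 1 at the end */
    /* at the midpoint both weights are 1/sqrt(2) rather than 1/2,
     * so the fields around the elements continue to merge          */
}
/* Each source element's force F is scaled by *w_source and each
 * destination element's force by *w_dest before the field is evaluated. */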

4.5.6 Warping

A useful tool in our system is the ability to distort the shape of a surface by warping space around it. A warp is a continuous function, w, from IR³ into IR³. In the following section we suggest some specific warp functions that are useful for producing some unusual animations.

The warped surface is defined from equation 2 above:

F_total(P) = Σ_{i=1..n} c_i F_i(r_i)

r_i = f_i(P) = dist(w_i(P), Q_i)

where w_i(P) is the position of the point P in warped space. In fact each skeletal element may reside in a different warped space. So when evaluating the contribution from the ith skeletal element, P is first warped to the appropriate position before evaluating the distance function. As a first example, we study a warp function w_i(P), which warps a point P to a point Q along a given vector v; it may be given by the vector equation:

Figure 21: P is warped to Q. The original sphere is warped to an ellipsoid.

w_i(P) = P − v̂ (v · p)

where p is P − S_{0,i}; S_{0,i} is the origin of the ith skeleton; v̂ = v / ‖v‖

To understand how this affects the iso-surface, consider P to be a point some distance from the surface of a sphere, such that the point is warped in the direction of the center of the sphere, to a position Q which is on the iso-surface. The value returned for P by the implicit function is the value that would have been returned for Q if warping were not in effect. In this case that value is the iso-value. Thus P becomes a point on the surface and the sphere is warped to an ellipsoid. (See Figure 21)
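The following C sketch evaluates this linear velocity warp; the vector helpers are illustrative, and the returned point is what would be fed to the distance function in place of P:

#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3   vsub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static double vdot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* w(P) = P - v_hat (v . p), with p = P - S0.
 * v is the (unnormalized) velocity, S0 the skeleton origin. */
static Vec3 velocity_warp(Vec3 P, Vec3 v, Vec3 S0)
{
    Vec3 p = vsub(P, S0);
    double len = sqrt(vdot(v, v));
    if (len == 0.0) return P;                 /* no motion, no warp */
    Vec3 vhat = { v.x/len, v.y/len, v.z/len };
    double s = vdot(v, p);                    /* v . p (note: v, not v_hat) */
    Vec3 w = { P.x - vhat.x*s, P.y - vhat.y*s, P.z - vhat.z*s };
    return w;                                 /* evaluate F at w instead of P */
}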

Many kinds of warp are possible, by simply writing a function that transforms a point from ordinary space into warped space. Each skeletal element contributes in a local way to the warped space, since each has its own local warp function associated with it. John Lasseter notes [12] the importance of squash and stretch in traditional animation. The example he gives is of a ball traveling along a parabolic path and bouncing. The shape of the ball distorts into an ellipsoid to give the feeling of speed; when it bounces the distortion changes so that the long axis of the ellipse is parallel to the ground, in other words a squash effect.

To simulate the distortion of the ball to the ellipsoid, the usual computer graphics approach would be to use a scale operation over time. Since the scale is in the direction of the velocity vector v, two rotations are necessary: first to align the object with one of the major axes, then to rotate back again. With a complex 3D object consisting of many skeletal elements, shape distortion using a scaling operation may not be exactly what the animator requires. For instance, on the impact plane the ball should flatten out, and the distortion is different from the deformation when the ball is further away, an effect which cannot be achieved with linear transformations. By exaggerating the non-linearity the ball could appear to be made of putty. Such a non-linear operation can easily be achieved by a warp operation, as shown in the example below.

Figure 22 shows some frames selected from an animation showing the putty-like ball bouncing. This is implemented in the warp function in the following manner:

Let P be a point in space at which the implicit function is to be evaluated. For simplicity, assume the collision plane to be the plane y = 0.

w(P) = P - \beta\,\hat{v}(v \cdot p) - \gamma\,S\,p_\parallel   if P.y > 0
w(P) = (P.x, \infty, P.z)                                       otherwise

Here

p = P - S_0
S_0 = the origin of the skeleton
p_\parallel = (P.x, 0, P.z)
\gamma = h(S_0.y / y_0)\,h(P.y / y_0)
h(t) = a decreasing differentiable function (e.g. a cubic polynomial) such that h(t) = 1 for t \le 0 and h(t) = 0 for t \ge 1
\beta = clamp(1 - 2.0\,\gamma)
S = a parameter, typically around 0.5

The interpretation of the terms is as follows:

(P.x, \infty, P.z) guarantees that no part of the object protrudes below the collision plane.

-\gamma S p_\parallel accounts for spreading out the lower part of the object (squashing). The vector p_\parallel is parallel to the collision plane, indicating that squashing should be a horizontally directed effect. The factor \gamma assures that squashing increases near the collision plane. No squash is applied if the center of the object is higher than y_0 or if P is higher than y_0. The parameter S controls the amount of squashing.

-\beta \hat{v}(v \cdot p) is a modified version of the simple linear velocity warping discussed above. The factor \beta is introduced to quench the velocity warping near the impact point, to avoid discontinuities in the warp function.

Figure 22: Frames from the bouncing ball animation.

At the point of impact, v undergoes a discontinuous change, (v.x, v.y, v.z) \to (v.x, -v.y, v.z). This would cause a discontinuous warp function if it were not compensated. The factor 2.0 in the expression for \beta has been found experimentally to be a reasonable value. The function clamp(..) clamps its argument value between 0 and 1.
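A sketch of the complete bounce warp in C, under the assumptions above (collision plane y = 0, a single skeleton at S_0, and h(t) implemented as a smoothstep-style cubic); the names are ours, and the gamma and beta expressions follow the reconstruction given earlier rather than any published code:

#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double clamp01(double t) { return t < 0 ? 0 : (t > 1 ? 1 : t); }

/* Decreasing differentiable step: 1 for t <= 0, 0 for t >= 1. */
static double h(double t)
{
    if (t <= 0.0) return 1.0;
    if (t >= 1.0) return 0.0;
    return 1.0 - t*t*(3.0 - 2.0*t);           /* smooth cubic falloff */
}

static Vec3 bounce_warp(Vec3 P, Vec3 v, Vec3 S0, double y0, double S)
{
    if (P.y <= 0.0) {                         /* below the plane: push to infinity */
        Vec3 inf = { P.x, HUGE_VAL, P.z };
        return inf;
    }
    double gamma = h(S0.y / y0) * h(P.y / y0);    /* squash strength near the plane */
    double beta  = clamp01(1.0 - 2.0 * gamma);    /* quench velocity warp at impact */

    Vec3 p = { P.x - S0.x, P.y - S0.y, P.z - S0.z };
    double len = sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    Vec3 vhat = { 0.0, 0.0, 0.0 };
    if (len > 0.0) { vhat.x = v.x/len; vhat.y = v.y/len; vhat.z = v.z/len; }
    double vp = v.x*p.x + v.y*p.y + v.z*p.z;
    Vec3 ppar = { P.x, 0.0, P.z };                /* p_parallel, per the text */

    Vec3 w = { P.x - beta*vhat.x*vp - gamma*S*ppar.x,
               P.y - beta*vhat.y*vp,
               P.z - beta*vhat.z*vp - gamma*S*ppar.z };
    return w;
}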

This approach initially aligns the warp vector with the velocity vector of the ball in the bouncing ball example. When the impact occurs the ball will start to deform in a non-linear oblate fashion according to the -\gamma S p_\parallel term. At the same time the linear prolate deformation (due to -\beta \hat{v}(v \cdot p)) subsides. Our implicit surface models can be a collection of skeletal elements, rather than a single spherical element such as the ball. Thus we can easily apply this non-linear warp to a complex shape. In figure 22 some frames are shown from an animation which applies this method to a bouncing ball (one soft primitive, S = 0.25). The ball is distorted into an ellipsoid (Frame 1) whose long axis grows as the vertical velocity increases (Frame 2). On impact the ball warps (Frame 3) into a bulging shape. As the ball bounces it regains its ellipsoid shape, and as its vertical velocity decreases the ball remains stretched along the horizontal axis by an amount corresponding to its horizontal velocity. The technique easily extends to skeletons consisting of many primitives.

In figure 23 the same method is shown to work for Nelson, the jumping bear (consisting of 25 soft primitives). The slug in Figure 24 is in fact a warp applied to three ellipsoid primitives. Warping can be applied in space or time and may be non-linear; for example, a warp can be applied:

- To the space in which a model exists, then move the model.

- To the space over time, so that the model will change with time.

We have tried several different types of warp and the outstanding problem is to present warping to the animator with a consistent user interface, so that custom warps may be designed.

4.6 Conclusions

In this paper some techniques for modeling, rendering and animating implicit surfaces have been presented. Attention has been focused on previously unpublished techniques such as glowing objects and the breakfast algorithm, which has been fully revised; a new version of a detailed paper on this algorithm will be available from the University of Calgary, Dept. of Computer Science via the World Wide Web in August 1996. Skeletal Implicit Surface techniques have proved useful for designing models but still await widespread use in the design community. Before this can happen, better interactive tools need to be built for the designer and animator.

Figure 23: Frames from Nelson the bouncing bear animation.

Figure 24: Frames from the slug animation.

4.7 Acknowledgments

I would like to thank the many students who have contributed so greatly to this research. In these notes Andrew Guy helped with the proof reading and made the fist, and Radomir Mech made the glass dinosaur from one of my original models. I would also like to thank Jules Bloomenthal for his encouragement and ideas over the years, and Kees van Overveld, who is my co-worker in developing the Shrinkwrap and Breakfast algorithms as well as the work on extending our implicit surface polygonizer to CSG. I am particularly indebted to my brother, friend and colleague, Geoff Wyvill, who started the whole thing off by solving a problem in scientific visualization, the solution to which turned out to be so useful for building models.

This work is partially supported by the Natural Sciences and Engineering Research Council of Canada in the form of a research grant and equipment grants.

References

[1] James Blinn. A Generalization of Algebraic Surface Drawing. ACM Transactions on Graphics, 1:235, 1982.

[2] Jules Bloomenthal. Polygonisation of Implicit Surfaces. Computer Aided Geometric Design, 4(5):341–355, 1988.

[3] Jules Bloomenthal and Ken Shoemake. Convolution Surfaces. Computer Graphics, 25(4):251–256, August 1991.

[4] Jules Bloomenthal and Brian Wyvill. Interactive Techniques for Implicit Modeling. Computer Graphics, 24(2):109–116, 1990.

[5] N. Burtnyk and M. Wein. Interactive Skeleton Techniques for Enhancing Motion Dynamics in Key Frame Animation. CACM, 19(10):564, Oct 1976.

[6] David R. Forsey and Richard H. Bartels. Hierarchical B-spline refinement. Computer Graphics (Proc. SIGGRAPH 88), 22(4):205–212, August 1988.

[7] Marie-Paule Gascuel. An Implicit Formulation for Precise Contact Modeling Between Flexible Solids. Computer Graphics (Proc. SIGGRAPH 93), pages 313–320, August 1993.

[8] Andrew Guy and Brian Wyvill. Controlled Blending For Implicit Surfaces. Proc. Implicit Surfaces '95, April 18–19, 1995.

[9] Z. Kacic-Alesic and B. Wyvill. Controlled Blending of Procedural Implicit Surfaces. Technical Report 90/415/39, University of Calgary, Dept. of Computer Science, 1990.

[10] D. Kalra and A. Barr. Guaranteed Ray Intersections with Implicit Functions. Computer Graphics (Proc. SIGGRAPH 89), 23(3):297–306, July 1989.

[11] D. Kochanek. Interpolating Splines with Local Tension, Continuity and Bias Control. Computer Graphics (Proc. SIGGRAPH 84), 18(3):33–41, 1984.

[12] John Lasseter. Principles of Traditional Animation Applied to 3D Computer Animation. Computer Graphics (Proc. SIGGRAPH 87), 21(4):35–44, July 1987.

[13] Martti Mantyla. An Introduction to Solid Modeling. Computer Science Press, Rockville, Maryland 20850, 1988.

[14] Paul Ning and Jules Bloomenthal. An evaluation of implicit surface tilers. IEEE Computer Graphics and Applications, 13(6):33–41, November 1993.

[15] H. Nishimura, A. Hirai, T. Kawai, T. Kawata, I. Shirakawa, and K. Omura. Object Modeling by Distribution Function and a Method of Image Generation. Journal of papers given at the Electronics Communication Conference '85, J68-D(4), 1985. In Japanese.

[16] W. Reeves. Inbetweening for Computer Animation Utilizing Moving Point Constraints. Computer Graphics (Proc. SIGGRAPH 81), 2:263–269, 1981.

[17] A.A.G. Requicha. Representations for Rigid Solids: Theory, Methods, and Systems. ACM Computing Surveys, 12(4):437–464, December 1980.

[18] Kees van Overveld and Brian Wyvill. Potentials, Polygons and Penguins: An efficient adaptive algorithm for triangulating an equi-potential surface. pages 31–62, 1993.

[19] B. von Herzen. Application of Surface Networks to Sampling Problems in Computer Graphics. PhD thesis, CalTech, Dept. of Computer Science, 1988.

[20] Brian Wyvill. SOFT. SIGGRAPH 86 Electronic Theatre and Video Review, Issue 24, 1986.

[21] Brian Wyvill. The Great Train Rubbery. SIGGRAPH 88 Electronic Theatre and Video Review, Issue 26, 1988.

[22] Brian Wyvill. Warping Implicit Surface for Animation Effects. pages 55–63, 1992.

[23] Brian Wyvill, Jules Bloomenthal, Geoff Wyvill, Jim Blinn, John Hart, Chandrajit Bajaj, and Thad Bier. Course Notes. SIGGRAPH '93, Course #25, Modeling and Animating with Implicit Surfaces, 1993.

[24] Brian Wyvill and Kees van Overveld. Constructive Soft Geometry: The Unification of CSG and Implicit Surfaces. Technical report, University of Calgary, Dept. of Computer Science, 1995.

[25] G. Wyvill and A. Trotman. Ray tracing soft objects. Proc. CG International 90, 1990.

[26] Geoff Wyvill, Craig McPheeters, and Brian Wyvill. Data Structure for Soft Objects. The Visual Computer, 2(4):227–234, February 1986.

On Efficiently Representing Procedural Geometry

John C. Hart
School of EECS
Washington State University
Pullman, WA 99164-2752

[email protected]

Abstract

Procedural geometry paradigms are analyzed and classified as either “data amplifier” or “lazy evaluation.” Lazy evaluation reduces the size of the geometric representation passed to renderers and increases the flexibility of procedural experimentation. Several existing procedural geometry systems are compared in this light, and a new one is proposed, called procedural geometric instancing.

Inspired by shading languages, procedural geometric instancing is a modeling language that embeds function calls in the geometric representation which are evaluated on demand during rendering. Procedural geometric instancing enables articulation of highly detailed models, with the ability to inductively specify billions of objects with a few lines of code and to pass parameters, facilitating more complex relationships between parent and child geometries. Furthermore, it defines functions for local access to the world coordinate system, which supports global effects such as tropism. The hierarchical z-buffer is adapted to operate on hierarchical bounding volumes, efficiently rendering procedural geometric instancing models.

1 Introduction

The procedural modeling of geometric detail poses at least two problems to computer graphics research. The first and foremost problem is that of constructing procedures that generate realistic highly-detailed objects and textures. Two examples from natural modeling are the developmental L-system to simulate plant growth [Prusinkiewicz & Lindenmayer, 1990] and an erosion model to simulate terrain formation [Musgrave et al., 1989].

This discussion instead focuses on a second problem, that of efficiently managing immense amounts of procedurally-generated geometric detail. One may have an extremely accurate model of a tree described by millions of polygons, but rendering a forest of these trees may take so long as to prohibit animation. New tools need to be developed to solve this lesser-studied problem of representing procedural geometric detail efficiently.

These notes begin with a general analysis of the problem of managing procedural geometric detail. The further sections propose to solve these problems with a new paradigm called “Procedural Geometric Instancing.”

5-1

Page 147: SIGGRAPH ’96 COURSE 10 NOTES Procedural …bobl/cpts548/u05_proc...SIGGRAPH ’96 COURSE 10 NOTES Procedural Modeling and Animation Techniques Monday August 5, 1996 Organizer David

2 Managing Procedural Geometric Detail

2.1 Paradigms

Figure 1: The “Data Amplifier” procedural modeling system.

Figure 1 illustrates a typical graphics system in the context of natural modeling. The user begins with a conceptual goal object, then articulates the object to the modeler. The modeler interprets the model, and converts it into an intermediate representation suitable for processing and rendering. The renderer accepts the intermediate representation and synthesizes an image of the object. This figure is based on the typical graphical modeling system for a variety of applications. Denoting the modeler as the “application program,” Figure 1 is an instance of the general interactive computer graphics framework shown in Figures 1.5, 3.2 and 7.3 of Foley et al. [1990].

The modeler in Figure 1 is a data amplifier. Smith [1984] coined this term to explain how fractal methods transform small amounts of information into highly-detailed objects described by massive amounts of geometry. (The figure's representation of the data amplifier as an op-amp circuit is not far-fetched. The electronic amplifier scales its output voltage and then combines it with the input voltage. Such rescale-and-add processes form the basis of a variety of fractal procedural models.)

Some procedural modelers pass low-level polygonal representations to the renderer through extremely large scene description files (e.g. Appendix A of Prusinkiewicz & Lindenmayer [1990]). Such immense amounts of unorganized geometric detail cause an intermediate storage problem of passing megabytes of polygonal data to the renderer. On computers lacking sufficient main memory, the excess is stored in secondary memory whose swapping degrades rendering time. Moreover, rendering large amounts of unorganized geometry is inefficient. A grid or octree structure can organize arbitrary geometric data, but at the expense of further increasing the storage requirement.

The concept of lazy evaluation avoids the intermediate representation problems. Such systems implement the model at the source-code level, compiling it directly into the renderer for on-demand evaluation during rendering (e.g. the C++ object-oriented implementation of subdivision processes in Amburn et al. [1986]). The lazy evaluation paradigm is illustrated diagrammatically in Figure 2.

This system facilitates a client-server relationship between the modeler and the renderer, allowing it to generate only the geometry it needs to draw an accurate picture, avoiding the need to store and process a massive intermediate geometric representation.

5-2

Page 148: SIGGRAPH ’96 COURSE 10 NOTES Procedural …bobl/cpts548/u05_proc...SIGGRAPH ’96 COURSE 10 NOTES Procedural Modeling and Animation Techniques Monday August 5, 1996 Organizer David

Figure 2: The “Lazy Evaluation” procedural modeling system

The lazy-evaluation paradigm is useful in a variety of applications. For example, some visualize implicit surfaces (surfaces defined where some scalar function defined over 3-D space evaluates to zero) using the data-amplifier paradigm by first sampling the space and converting the resulting samples to polygons, then passing these polygons to a renderer. Others visualize the implicit surface by ray tracing the shape directly, using the lazy-evaluation paradigm, determining the intersection of the ray with the mathematically-defined surface. The former generates the entire surface whereas the latter concentrates its efforts only on the visible portion of the surface.

Implicit surfaces can be described by a single scalar function whereas procedural models typically require many lines of source code to implement. Procedural shading has been standardized by the RenderMan shading language [Hanrahan & Lawson, 1990]. However, a standard modeling little-language for the procedural generation of geometry does not currently exist. Section 3 proposes such a language, based on the object instancing paradigm.

2.2 Hierarchical Organization and Bounding Volumes

The key to the efficient representation of detail has been the use of hierarchical organization and bounding volumes. The ability to cull significant portions of a large geometric database allows the renderer to focus its attention only on the visible components of the scene.

Since its inception, ray tracing has benefited from the hierarchical bounding of detailed scenes [Rubin & Whitted, 1980; Kay & Kajiya, 1986]. More recently, Greene et al. [1993] have shown how this hierarchical organization also benefits z-buffering.

Furthermore, several researchers have initiated study on the determination of global illumination for highly-detailed natural scenes to simulate the reflectance of soil, sand and crop canopies to better monitor the earth's resources from space [Goel et al., 1991; Gerstl & Borel, 1992; Chiu & Shirley, 1994]. Hanrahan et al. [1991] show that the time and space complexity of a portion of the radiosity solution can be reduced when the scene is organized hierarchically, and such a technique could use the hierarchical organization of a procedural model to more efficiently determine canopy reflection.

Few tools exist for processing highly-detailed representations. While the application is rendering in Figures 1 and 2, other geometry processing applications exist, such as blending [Hart, 1995] and collision detection. Computer-aided geometric design has devoted much of its study to the advantages and tradeoffs of various representations of smooth curves and surfaces, many of them hierarchical.

Demonstrating similar advantages and tradeoffs of various representations of detail broadens the scope of computer-aided geometric design to include non-smooth (rough) curves and surfaces. Formulating detailed surfaces using the same hierarchical methods as some smooth surfaces extends CAGD algorithms to operate on their rough counterparts.

Three-dimensional arrays of cells containing lists of objects the cell intersects have been similarly effective at hastening the rendering of detailed scenes [Snyder & Barr, 1987], though they typically cost more spatially than hierarchical bounding volumes.

For lazy evaluation of geometric primitives to be effective, some of the geometry must not be evaluated. Hierarchical methods are used on the assumption that all of the geometry in a scene is not visible from any given vantage point. An example where this assumption fails is a level field of grass viewed from a reasonable altitude. In such cases where all of the geometry is visible, hierarchical methods fail to reduce (and in fact increase) time and space complexity.

2.3 Procedural Geometric Modeling and the Web

Procedural modeling could play a critical role in multimedia, networking and the World Wide Web. Two standards have recently become popular: Java and VRML. Java is a system that allows programs to be automatically loaded from a remote site and run safely on any architecture and operating system. VRML stands for the Virtual Reality Modeling Language, and has become a standard for transmitting geometric scene databases.

In their current form, VRML geometric databases are transmitted in their entirety over the network, and Java has little support for generating complicated geometric databases. However, both can be enhanced to support the lazy-evaluation paradigm for procedural modeling.

One example extends the capabilities of Java or an equivalent language to support the generation and hierarchical organization of detailed geometry. A user may then download this script, and a renderer then runs this script to procedurally generate the necessary geometry to accommodate the vantage point of the viewer. This example places the network at the “articulation” step of the paradigm.

A second example places the network at the “geometry/coordinates” bidirectional step of the lazy-evaluation paradigm. In this example, a powerful server generates the geometry needed by a remote client renderer to view a scene and transmits only this geometry over the network. As the client changes viewpoint, the server then generates and transmits only the new geometry needed for the new scene.

2.4 Current Procedural Geometry Models

Given the above paradigms, we can examine existing procedural geometry methods, specifically L-systems, particle systems, recurrent iterated function systems and object instancing.

L-Systems: The L-system (a.k.a. graftal) is a popular model of detailed geometry, and has been well covered in the graphics literature: [Kawaguchi, 1982; Aono & Kunii, 1984; Smith, 1984; Prusinkiewicz et al., 1988]. In its most general form, an L-system is a context-sensitive parallel grammar that operates on symbols whose geometric meaning is typically defined by the turtle paradigm.

The turtle is controlled by simple instructions, typically one character long. For example, “f” moves the turtle forward one unit whereas “F” does the same but leaves a trail. Although the turtle was initially designed as a two-dimensional plotting tool for education [Abelson & diSessa, 1982], it has evolved into a full-featured three-dimensional graphics system [Prusinkiewicz & Lindenmayer, 1990] complete with characters that rotate about the x, y, z axes, change/restore coordinate systems [Prusinkiewicz & Hammel, 1994], and trace out filled polygons/Bezier patches. These enhancements have greatly extended the power of L-systems and turtle graphics for articulating and representing natural phenomena. However, the massive amounts of geometry traced out by the turtle lack structure, hampering efficient rendering and prompting the following observation:

Observation 1 Turtles are slow.

Particle Systems: Confronted with the problem of rendering a forest modeled by an L-system, Reeves & Blau [1985] developed a forward rendering method which approximated the visibility and shadowing of trees and grass, based on the following observation:

Observation 2 [Reeves & Blau, 1985] “[Data amplification paradigms] create so much irregular, three-dimensional detail that exact visible surface and shading calculations become infeasible.”

They replaced the turtle with a particle, visually simulating complex geometries with antialiased one-dimensional streaks of shading. While structured particle systems have synthesized realistic-appearing images and animations, they do not yield a geometric structure useful for other forms of processing.

They justified their approximations with the observation:

Observation 3 [Reeves & Blau, 1985] “The rich detail in the images tends to mask deviations from an exact rendering.”

Kajiya & Kay [1989] merged the concept of a structured particle system with volume rendering to create a volume of shading parameters. Like particle systems, this technique paints detail with voxels of shading, but uses volume rendering techniques to more accurately determine visibility. This technique simulated fur and carpeting.

The particle system approach [Reeves & Blau, 1985] follows the data amplification paradigm but avoids the large intermediate representation by generating its geometry directly into the frame buffer. The volumetric version [Kajiya & Kay, 1989] similarly generates its geometry directly into a volume array.

Recurrent Iterated Function Systems: The iterated function system was an early procedural model of self-similar detail [Williams, 1971; Hutchinson, 1981; Barnsley & Demko, 1985], which consisted of transformations that map an object to each of its components, and more recently, the recurrent iterated function system [Barnsley et al., 1989], whose transformations map collections of components to each component. These representations have been investigated several times in the computer graphics literature for modeling various natural phenomena [Demko et al., 1985; Barnsley et al., 1988; Hart & DeFanti, 1991].

Hart & DeFanti [1991], using the lazy-evaluation paradigm, developed an efficient and exact method for determining the visibility and shadowing of forest scenes modeled with iterated function systems. This method overcame the feasibility restrictions observed by Reeves & Blau [1985] by considering only exactly self-affine shapes (IFS attractors) and organizing the geometric detail via hierarchical bounding volumes. The efficiency of the lazy evaluation system supported not only the rendering of the forest scene, but its animation as well.

Object Instancing: While many natural objects have been observed to possess fractal structure, they are never exactly self-similar. Modern fractal models have hence focused on articulating and representing the exceptions to self-similarity. Self-similar fractal models can be converted into the more general object instancing structure, which allows exceptions to self-similar construction (such as terminating geometric refinement at a visible level of detail).

Oppenheimer [1986] used instancing to achieve real-time graphics speeds for the interactive modeling of trees and other structures based on branching L-systems. Hart [1992] described a method for translating a model from a turtle-based L-system into a standard instancing hierarchy which organized the resulting geometry for more efficient processing and rendering using lazy evaluation. This method replaced the turtle's drawing commands with instances of geometric primitives, the turtle's motion commands with affine transformations, and the L-system's productions with an instancing hierarchy.

This method successfully converted a small subfamily of L-systems from the data amplifier paradigm to the lazy-evaluation paradigm, specifically deterministic non-parametric context-free L-systems without global effects (for example, tropism). In order to convert the more powerful extensions of L-systems into the more efficient paradigm, object instancing needs to be enhanced to allow global interaction and parameter passing. Such extensions are the subject of the remainder of these notes.

3 Procedural Geometric Instancing

Geometric instancing is a time and space saving tool for modeling in computer graphics. Its concept is fundamental in computer graphics, first appearing in “Sketchpad” [Sutherland, 1963], and was one of the first enhancements to ray tracing [Rubin & Whitted, 1980].

A geometric instance is a pointer to a predefined geometry, often accompanied by a transformation and new shading rules. Although one could extend the instancing concept into an object-oriented graphics system, this classical definition divorces geometric instancing from the more modern connotation of object instancing in object-oriented design. The classes in geometric instancing are simply other objects, and the properties inherited at instantiation are typically no more sophisticated than shading parameters.

Procedural geometric instancing augments the standard instance with the addition of a pointer to a procedure. This procedure is executed at the time of instantiation (i.e. every time the object appears in a scene). The procedure is passed a pointer to the instance structure, and can alter the instance's object pointer, its transformation and/or its shading parameters. The procedure also has access to external global variables, such as the current object-to-world coordinate transformation matrix.
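As a sketch only, a procedural instance record might look like the following in C; the field and type names are illustrative rather than the system's actual data structures:

struct Object;                     /* predefined geometry, defined elsewhere */

typedef struct { double m[4][4]; } Matrix4;
typedef struct { double ka, kd, ks; } Shading;   /* example shading parameters */

typedef struct Instance Instance;
struct Instance {
    struct Object *object;         /* pointer to predefined geometry */
    Matrix4        xform;          /* instance transformation */
    Shading        shade;          /* shading overrides */
    /* Executed at instantiation: may alter the object pointer, the
     * transformation and/or the shading, using the instantiation's
     * parameters and the current object-to-world matrix W. */
    void         (*proc)(Instance *self, const Matrix4 *W,
                         const double *params, int nparams);
};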

Procedural geometric instancing augments standard instancing with the ability to pass parameters and access world coordinates. The power of procedural geometric instancing is derived from its ability to modify an instance's geometry and transformation based on the instantiation's parameters and its state with respect to world coordinates. These features rely on lazy evaluation, the dynamic on-demand evaluation of expressions in the model during rendering at the time of instantiation.

3.1 Parameter Passing

One enhancement to the L-system model allows it to describe shapes using real values [Prusinkiewicz & Lindenmayer, 1990]. Among other abilities, parametric L-systems can create more complex relationships between parent and child geometries.

Parameters are by no means new to the instancing paradigm. For example, they are present in SPHIGS [Foley et al., 1990]. Their use here to control hierarchical subdivision, coupled with access to world coordinates, differentiates them from previous implementations. Parameters are bound at instantiation in the standard fashion using a stack to allow recursion. Parameters may alter transformations or may be passed along to further instances. Both uses are demonstrated in the following example.

Example 1 Inductive Instancing

Iterative instancing builds up vast databases hierarchically, in O(log n) steps, by instancing lists of lists [Hart, 1992]. For example, a list of four instances of a blade of grass makes a clump, four clumps make a patch and four patches make a plot — of 64 blades of grass. Though each list consists of four instances of the same element, each instance's transformation component differs to describe four distinct copies of the geometry.

Even the step-saving process of iterative instancing becomes pedantic when dealing with billions of similar objects. Inductive instancing uses instancing parameters to reduce the order of explicit instancing steps in the definition from O(log n) to O(1). Using the field-of-grass example, one defines an object grass(n) as a list of four instances of grass(n-1), and ends the induction with the definition of grass(0), which instances a single blade of grass, as shown in Figure 3. Hence, a single instance of grass(i) inductively produces a scene containing 4^i blades of grass.

Inductive instancing is similar in appearance to the predicate logic of Prolog. Organized properly, the defined names may be compared against the calling instance name until a match is found. For example, in Figure 3 grass(15) would not match grass(0) but would match grass(n).

3.2 Accessing World Coordinates

Objects are defined in a local coordinate system, and are instanced into a world coordinate system. In certain situations, an instance may need to change its geometry based on its location and/or orientation with respect to world coordinates. Given an object definition and a specific instantiation, let W denote the 4×4 homogeneous transformation matrix that maps the object to its instantiation. The transformation W maps local coordinates to world coordinates.

define grass(0) blade end

define grass(n)
    grass(n-1)
    grass(n-1) translate 2^n (0.1, 0.0, 0.0)
    grass(n-1) translate 2^n (0.0, 0.0, 0.1)
    grass(n-1) translate 2^n (0.1, 0.0, 0.1)
end

grass(15)

Figure 3: Inductive instancing specification of a gigablade of grass

Let p_local be a point in an object's local coordinates (a homogeneous column vector of the form (x, y, z, 1)^T). Then its position at instantiation is

p_world = W p_local.    (1)

Let v_local = (v_x, v_y, v_z, 0)^T likewise be a vector. Then its direction at instantiation is

v_world = (W^{-1})^T v_local.    (2)

Procedural geometric instancing adopts the convention that within the scope of an instance, the object-to-world transformation is unaffected by the instance's transformations. This solves an ordering problem where a scale followed by a rotation (which ordinarily are mutually-commutative operations) might not be equivalent to a rotation followed by a scale if either depends on global position or orientation.
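A small C sketch of equations (1) and (2); the matrix helpers are assumed, and invert4 stands in for any 4×4 inversion routine:

typedef struct { double v[4]; } Vec4;
typedef struct { double m[4][4]; } Mat4;

static Vec4 mat_mul_vec(const Mat4 *A, Vec4 x)
{
    Vec4 r = {{0, 0, 0, 0}};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r.v[i] += A->m[i][j] * x.v[j];
    return r;
}

Mat4 invert4(const Mat4 *A);       /* assumed to exist elsewhere */

static Mat4 transpose4(const Mat4 *A)
{
    Mat4 T;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            T.m[i][j] = A->m[j][i];
    return T;
}

/* p_world = W p_local, equation (1) */
static Vec4 to_world_point(const Mat4 *W, Vec4 p_local)
{
    return mat_mul_vec(W, p_local);
}

/* v_world = (W^-1)^T v_local, equation (2) */
static Vec4 to_world_vector(const Mat4 *W, Vec4 v_local)
{
    Mat4 Winv  = invert4(W);
    Mat4 WinvT = transpose4(&Winv);
    return mat_mul_vec(&WinvT, v_local);
}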

The following three examples demonstrate procedural models requiring access to world coordinates.

Example 2 Tropism

L-systems simulate biological systems more realistically through the use of global effects. One such effect is tropism, an external directional influence on the branching patterns of trees [Prusinkiewicz & Lindenmayer, 1990]. Downward tropism simulates gravity, resulting in sagging branches; sideways tropism results in a wind-blown tree; and upward tropism simulates branches growing toward sunlight. In each case, the tropism direction is uniform, regardless of the local coordinate system of the branch it affects.

Given its global orientation, an instance affected by tropism can react by rotating itself into the direction of the tropism. For example, Figure 4 models Figure 5, illustrating two perfectly ternary trees, although one is made more realistic through the use of tropism.

The transformation tropism is equivalent to

tropism(v, \theta, t) \equiv rotate \theta \| ((W^{-1})^T v) \times t \|, ((W^{-1})^T v) \times t

where rotate expects its parameters in “angle, axis” format. This tropism definition may be substituted by hand, translated via a macro, or hard-coded as a new transformation.
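The following C sketch computes the tropism rotation; world_vector is the equation-(2) transform and rotate_about_axis stands in for whatever rotation primitive the modeler provides, both assumed rather than taken from any published code:

#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

Vec3 world_vector(const double W[4][4], Vec3 v_local);   /* eq. (2), assumed */
void rotate_about_axis(double angle_deg, Vec3 axis);      /* assumed primitive */

static void tropism(const double W[4][4], Vec3 v, double theta, Vec3 t)
{
    Vec3 vw   = world_vector(W, v);           /* local "up" in world coordinates */
    Vec3 axis = cross(vw, t);                 /* bend toward the tropism vector  */
    double mag = sqrt(axis.x*axis.x + axis.y*axis.y + axis.z*axis.z);
    if (mag == 0.0) return;                   /* already aligned: nothing to do  */
    rotate_about_axis(theta * mag, axis);     /* angle scales with misalignment  */
}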

define limb(r, l) cone(l, r, 0.577r) end

define tree(0, r, l, t, θ)
    limb(r, l)
    leaf translate (0, l, 0)
end

define branch(n, r, l, t, θ)
    tree(n, r, l, t, θ) rotate 30°, (0,0,1)
end

define tree(n, r, l, t, θ)
    limb(r, l)
    branch(n-1, 0.577r, 0.9l, t, θ)
        tropism((0,1,0,0), -θ, t)
        translate (0, l, 0)
    branch(n-1, 0.577r, 0.9l, t, θ)
        tropism((0,1,0,0), -θ, t)
        rotate 120°, (0,1,0) translate (0, l, 0)
    branch(n-1, 0.577r, 0.9l, t, θ)
        tropism((0,1,0,0), -θ, t)
        rotate 240°, (0,1,0) translate (0, l, 0)
end

tree(8, .2, 1, (0,-1,0), 0) translate (-4,0,0)
tree(8, .2, 1, (0,-1,0), 20) translate (4,0,0)

Figure 4: Perfect ternary trees specified with and without tropism.

Figure 5: These trees have equivalent instancing structures except that the one on the right is influenced by downward tropism simulating the effect of gravity on its growth.

The object branch consists of an instance of tree rotated 30°. Under standard instancing, this object could be eliminated and the 30° could be inserted in the definition of tree. However, this operation is pulled out of the definition of tree because the world transformation matrix W used in the definition of tropism is only updated at the time of instantiation. The separate definition causes the tropism effect to operate on a branch after it has rotated 30° away from its parent's major axis.

Tropism is typically constant, although a more accurate model would increase its severity as branches become slimmer. Thompson [1942] demonstrates that surface tension dictates many of the forms found in nature. In the case of trees, the strength of a limb is proportionate to its surface area, l·r, whereas its mass (disregarding its child limbs) is proportionate to its volume, l·r².

We can simulate this by simply increasing the degree of tropism θ inversely with respect to the branch radius r. Hence, branch would be instantiated with the parameters

branch(n-1, 0.577r, 0.9l, t, (1-r)θ_0)

where θ_0 is a constant, maximum angle of tropism. While variable tropism could be incorporated, via source-code modification, as a new parameterized character in the turtle-based L-system paradigm, procedural geometric instancing provides the tools to articulate it in the scene description.

Example 3 Crop Circles

Prusinkiewicz et al. [1994] added a query command “?” to the turtle's vocabulary that returned the turtle's world position. This information was used to prune L-system development based on external influences. The world-coordinate transformation can similarly detect the presence of an external influence, and the geometry can appropriately respond. A crop circle such as those allegedly left by UFO encounters is similar to the simulation of topiary, except that the response of pruning is replaced with bending, as demonstrated in Figure 6.

Such designs can be described implicitly [Prusinkiewicz et al., 1994], in the case of a circle, or through a texture map (cropmap) [Reeves & Blau, 1985], in the case of the teapot.

Figure 6: Crop circles trampled in a field of cones.

Example 4 Geometry Mapping

The grass example from the previous section shows how geometry can be instanced an arbitrary number of times onto a given section of a plane. Consider the definition of a Bezier patch as a mapping from the plane into space. This mapping takes blades of grass from the plane onto the Bezier patch. Replacing the blades of grass with fine filaments of hair yields a fully geometric specification of the fur previously modeled using volumes of shading parameters [Kajiya & Kay, 1989].

3.3 Other Functions

While the power of procedural geometric instancing is based on its ability to pass parameters and access world coordinates, several other functions based on these abilities make specification of procedural models easier.

3.3.1 Random Numbers

Randomness can simulate the chaos found in nature, and is found in almost all procedural natural modeling systems. Moreover, various kinds of random numbers are useful for natural modeling.

The notation [a, b] returns a random number uniformly distributed between a and b. The notation {a, b} likewise returns a Gaussian-distributed random number.

The Perlin noise function provides a band-limited random variable [Perlin, 1985], and is implemented as the scalar-valued function noise. A typical invocation of the noise function using the world coordinate position is specified: noise(W(0,0,0,1)^T).

The summation of noise into 1/f fractional Brownian motion forms the basis for several procedural modeling methods.

Geometric randomness requires consistency. This was noted early by Fournier et al. [1982] where inconsistent random displacements threatened to rip the creases of midpoint displacement terrains. Procedural models need to consistently generate the exact same shape.

Inconsistencies cause problems with independent pixels when ray tracing, shadow determination when z-buffering, and continuity when adjoining random surfaces. Procedural geometric instancing consistently seeds its random number generator using the world transformation matrix W.
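One way to obtain such consistency, sketched below in C, is to hash the entries of W into a seed so that the same instantiation always sees the same random sequence regardless of pixel or ray order; the particular hash is an arbitrary choice of ours, not the system's:

#include <stdint.h>
#include <stdlib.h>

static void seed_from_world_matrix(const double W[4][4])
{
    uint32_t seed = 2166136261u;              /* FNV-1a style accumulation */
    const unsigned char *bytes = (const unsigned char *)W;
    for (size_t i = 0; i < sizeof(double) * 16; i++) {
        seed ^= bytes[i];
        seed *= 16777619u;
    }
    srand(seed);        /* subsequent [a,b] and {a,b} draws are now repeatable */
}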

Example 5 Meadows

Fractional Brownian motion models a variety of terrain. This example uses three octaves of a 1/f² power distribution to model the terrain of a hilly meadow. Grass is instanced on the meadow through a translation procedurally modified by the noise function. The placement of the grass is further perturbed by a uniformly-random lateral translation and its orientation is perturbed by the noise function.

Figure 7 specifies the meadow displayed in Figure 8. The vector-valued function rot(x, θ, a) returns the vector x rotated by θ about the axis a. Its use in the definition of fnoise disguises the creases and non-isotropic artifacts of the noise function due to a simplified interpolation function.

3.3.2 Levels of Detail

The scale at which geometry projects to the area of a pixel on the screen under the rules of perspective is bound from above by the function

lod(x) = \|x - x_0\| \, 2 \tan(\theta/2) / n    (3)

where x_0 is the eyepoint, \theta is the field of view and n is the horizontal resolution. The condition lod(world(0,0,0,1)) > 1 was used to halt recursive subdivision of fractal shapes constructed by scaled instances of the unit sphere [Hart & DeFanti, 1991]. In typical use, lod replaces complex geometries with simpler ones to optimize display of detailed scenes.
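A direct C transcription of equation (3); the names are illustrative:

#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* World-space size that projects to one pixel at the distance of x,
 * given field of view fov (radians) and horizontal resolution n. */
static double lod(Vec3 x, Vec3 x0, double fov, int n)
{
    double dx = x.x - x0.x, dy = x.y - x0.y, dz = x.z - x0.z;
    double dist = sqrt(dx*dx + dy*dy + dz*dz);
    return dist * 2.0 * tan(fov / 2.0) / (double)n;
}

/* Recursive instancing can stop once lod(...) exceeds the object's size
 * (1 for a unit sphere), i.e. when it would project to under a pixel. */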

3.4 Residual vs. Non-Residual Models

The structures modeled by subdivision processes fall into two categories. Non-residual models instance geometry only at the terminal level of subdivision whereas residual models instance geometry at non-terminal levels, leaving a residue of the intermediate subdivision levels. Mandelbrot [1982] notes that for fractal subdivision processes, the actual fractal component results from the terminal level “limit set” geometry, but the dimension of the resulting set is dominated by the residual geometry of the intermediate levels. The grass example in Figure 3 demonstrates non-residual subdivision whereas the tree example in Figure 4 demonstrates residual subdivision.

3.5 Comparison with L-systems

L-systems are organized into families based on their representational power. The simplest family is the deterministic context-free L-system. Parameters were added to handle geometric situations requiring non-integral lengths [Prusinkiewicz & Lindenmayer, 1990]. Stochasticism was added to simulate the chaotic influences of nature. Various degrees of context-sensitivity can be used to simulate the transmission of messages from one section of the L-system model to another during development.

#define NS 16     /* noise scale */
#define fnoise(x) (NS (noise((1/NS) (x)) + 0.25 noise((2/NS) rot((x),30°,(0,1,0))) +
                        0.0625 noise((4/NS) rot((x),60°,(0,1,0)))))
#define RES 0.1   /* polygonization resolution */

define plate(-1)
    polygon (-RES, fnoise(W(-RES,0,-RES,1)^T), -RES),
            (-RES, fnoise(W(-RES,0, RES,1)^T),  RES),
            ( RES, fnoise(W( RES,0, RES,1)^T),  RES)
    polygon ( RES, fnoise(W( RES,0, RES,1)^T),  RES),
            ( RES, fnoise(W( RES,0,-RES,1)^T), -RES),
            (-RES, fnoise(W(-RES,0,-RES,1)^T), -RES)
end

define plate(n)
    plate(n-1) translate 2^n( RES,0, RES)
    plate(n-1) translate 2^n(-RES,0, RES)
    plate(n-1) translate 2^n( RES,0,-RES)
    plate(n-1) translate 2^n(-RES,0,-RES)
end

define blade(0) polygon (-.05,0,0),(.05,0,0),(0,.3,0) end

define blade(n)
    blade(n-1) scale (.9,.9,.9) rotate 10,(1,0,0) translate (0,.2,0)
    polygon (-.05,0,0),(.05,0,0),(.045,.2,0),(-.045,.2,0)
end

define grass(-2)
    blade(10)
        rotate 360° noise((1/16)(W(0,0,0,1)^T)),(0,1,0)
        translate ([-.05,.05], fnoise(W(0,0,0,1)^T), [-.05,.05])
end

define grass(n)
    grass(n-1) translate 2^n( .1,0, .1)
    grass(n-1) translate 2^n(-.1,0, .1)
    grass(n-1) translate 2^n( .1,0,-.1)
    grass(n-1) translate 2^n(-.1,0,-.1)
end

plate(6)
grass(6)

Figure 7: Specification for a grassy meadow.

Figure 8: A grassy meadow with an exaggerated specular component to demonstrate reflection “hot spots.” No texture maps were used in this image.

Global influences affect only the resulting geometry, such as tropism, which can simulate the effect of gravitational pull when determining the branching direction of a tree limb.

Figure 9 depicts the representational power of procedural geometric instancing with respect to the family of L-system representations. Standard geometric instancing can efficiently represent only the simplest L-system subfamily, whereas there is currently no geometrically-efficient representation for any form of context-sensitive L-system. Procedural geometric instancing is a compromise, efficiently representing the output of a stochastic context-free parametric L-system with global effects.

stochastic context-sensitive parametric L-systems

procedural geometric instancing ≡ stochastic context-free parametric L-systems

standard geometric instancing ≡ deterministic context-free L-systems

Figure 9: Hierarchy of representations.

3.6 Rendering

Reeves & Blau [1985] rendered particles as antialiased lines, requiring a sort for visibility, which was approximated using a linear-time bucket sort.

Kajiya & Kay [1989] rendered fine geometry by casting rays through a volume of shading parameters. Procedural geometric instancing is based on surface models and can choose between ray casting and z-buffering for exact visibility determination. Both methods are made more efficient through the use of bounding volume hierarchies.

3.6.1 Ray Tracing

Ray tracing lends itself to rendering hierarchically-organized highly-detailed scenes. Ray casting operates from the pixel to the object, and the hierarchical organization allows it to focus only on objects that project onto the pixel.

Greene et al. [1993] critically analyzed ray tracing and noticed that the intersection computation for every ray begins at the top of the object hierarchy and traverses through the tree in order until an object is hit. This traversal cost is compounded by the overhead of lazy evaluation required by procedural geometric instancing. Moreover, extents and procedural bounding volumes yield sub-optimal bounding volumes that further delay ray intersection, yielding the following observation:

Observation 4 Ray tracing is slow.

While ray tracing served as the initial proof-of-concept rendering paradigm for procedural geometric instancing, we have abandoned it for the hierarchical z-buffer. One result of the implementation of procedural geometric instancing in a ray tracing environment was the merging of lists and grids.

3.6.2 Z-Buffering

Greene et al. [1993] developed a hierarchical variation of the z-buffer for rendering complex scenes. The hierarchical z-buffer organizes the objects using an octree, and the image using a quadtree.

The hierarchical z-buffer was originally designed to operate on polygonal data organized in an octree. While time efficient, the octree organization suffers the intermediate storage problem described in the Introduction. We avoided this problem by adapting the hierarchical z-buffer, replacing the octree object hierarchy with a bounding volume hierarchy [Ramagopalrao, 1994].

Octree'd geometry can be processed in a mostly front-to-back order, which optimizes the performance of the hierarchical z-buffer. This ordering property partially transfers to the hierarchical bounding volume case through the use of axially-sorted lists. The contents of each list are sorted six times, in non-increasing order of their most extreme points in the positive and negative x, y and z directions. Each list is then threaded six times according to these orderings. At instantiation, we determine which of the six vectors (W^{-1})^T(±1,0,0,0), (W^{-1})^T(0,±1,0,0) or (W^{-1})^T(0,0,±1,0) has the maximum dot product with a unit vector from the position W(0,0,0,1) pointing toward the viewer. The elements of the list are then instantiated according to this order. While not perfect, this method does an effective job of instancing geometry globally in a front-to-back order.
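A sketch of this thread-selection step in C; the six candidate axis directions are taken into world space with the equation-(2) transform and compared against the direction toward the viewer (helper names are ours):

typedef struct { double x, y, z; } Vec3;

Vec3 world_vector(const double W[4][4], Vec3 v_local);   /* eq. (2), assumed */

static double dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 0..5, selecting the thread sorted along +x, -x, +y, -y, +z, -z.
 * to_viewer is a unit vector from the instance position toward the viewer. */
static int pick_sort_thread(const double W[4][4], Vec3 to_viewer)
{
    static const Vec3 axes[6] = {
        { 1,0,0}, {-1,0,0}, {0, 1,0}, {0,-1,0}, {0,0, 1}, {0,0,-1}
    };
    int best = 0;
    double best_dot = -1e30;
    for (int i = 0; i < 6; i++) {
        double d = dot3(world_vector(W, axes[i]), to_viewer);
        if (d > best_dot) { best_dot = d; best = i; }
    }
    return best;
}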

The hierarchical z-buffer was designed to avoid rasterization of clusters of occluded detail. We have also found that in typical detailed natural environments, such as forests, the viewer is immersed in the geometry.

Most of such forests are clipped, and the use of bounding volumes allows entire structures to be culled immediately instead of independently culling every off-screen polygon.

An initial (unoptimized) test on fields of grass shows a speed-up from 3 minutes 23 seconds to 23 seconds for a planar field of 65,536 blades of grass. Each blade of grass was modeled as a single triangle, and the field was viewed from an altitude higher than the height of a blade of grass. Hence, the performance increase in this case was due entirely to clipping and not occlusion.

Further tests on simple iteratively instanced scenes of office cubicles, viewed from slightly above the cubicle wall height, showed the hierarchical z-buffer to render 32 cubicles in 79% of the standard z-buffer rendering time, and 128 cubicles in 58% of the standard time.

4 Conclusion

Procedural geometric instancing is a language for articulating geometric detail. Its main impact is the insertion of procedural “hooks” in the scene specification. These hooks allow it to solve the intermediate storage problem by on-demand lazy evaluation performed at the time of instantiation, and only for objects affecting the current rendering of the scene.

Procedural geometric instancing is a geometric complement to shading languages. It provides the renderer with a procedural interface to the model's geometry definition in the same way that a shading language provides the renderer with a procedural interface to the model's shading definition. Cook [1984] introduced shade trees as a procedural interface into rendering systems. This interface can modify the appearance of input geometry by altering its normals, textures, and reflectance functions. Shade trees even allow small perturbations to the underlying geometry through displacement mapping, though major changes to the geometry are left to the modeler. Shade trees evolved into a sophisticated high-level little language [Hanrahan & Lawson, 1990]. This language accesses various shading parameters through variables, such as position and surface normal, in both local and world coordinate systems. The shading language also supports high-level functions such as dot and cross products, noise generation, automatic differentiation and frequency clamping. Procedural geometric instancing is a long overdue geometric counterpart to shading languages, similarly providing high-level access to model information to dynamically alter shape construction on-the-fly during rendering.

Procedural geometric instancing enjoys the best of both worlds — ease of articulation and efficient rendering. Procedural geometric instancing makes standard textual geometric descriptions of natural models more compact and readable. The resulting representation is more efficiently processed and rendered. It is general enough to handle special cases and provides the renderer with a functional, parametric interface into the geometric representation.

The fundamental requirements for the procedural specification of geometry require parameter passing and access to world coordinates. Section 3 introduced these features into the geometric instancing paradigm and demonstrated their use for a variety of applications, in particular for the incorporation of global effects into a natural model.

4.1 Implementation

Our procedural geometric instancing specification language was originally prototyped within the rayshade ray-tracing system [Kolb, 1991]. Rayshade already had lazy evaluation tools for motion blur and other animation tasks, which greatly eased the early implementation.

We have since moved on to the hierarchical z-buffer. We have written a stand-alone forward rendering system called pgi, based on the generic polygon scan converter [Heckbert, 1990] for rasterization and SRGP [Foley et al., 1990] for display.

Some rendering systems, in particular rayshade, pre-apply their transformations to the bounding volume. Such enhancements are not possible with the dynamic transformations of procedural geometric instancing. Hence, in procedural geometric instancing, a bounding volume must be instantiated before it can indicate the presence of visible geometry.

We are also anxious to begin developing a bounding volume implementation of the antialiasing version of the hierarchical z-buffer described by Greene & Kass [1994]. We plan to combine these elements into a system capable of efficiently rendering image sequences for the animation of highly detailed natural scenes.

4.2 Future Work

The major obstacle to efficient rendering for procedural geometric instancing is the construction of effective bounding volume hierarchies. Since geometry is created on demand, a bounding volume must be able to predict the extent of its contents. Such procedural bounding volumes were constructed for fractal terrain models by Kajiya [1983] and Bouville [1985], but their generalization to arbitrary subdivision processes remains unsolved.

Collision detection remains an elusive goal of procedural geometric instancing. Determining, at instantiation, whether a tree branch should be instanced or pruned depends on the existence of another tree branch, which in turn may depend on the existence of a third tree branch, which may depend on the existence of the current tree branch. Solving such recurrences in a context-free data structure apparently requires exponential algorithms, and deserves further research.

Amburn et al. [1986] developed a system in which context was weighted between independent subdivision-based models. Fowler et al. [1992] developed a geometric context-sensitive model of phyllotaxis based on the currently-generated geometry of a procedural model. A similar technique could model the upward tropism of the tips of branches on some evergreen trees. These tips bend upward depending on the visibility of sunlight. If the tree were instanced from the highest branches, working its way down to the ground level, the visibility of each branch with respect to the previously instanced branches could be efficiently computed using a separate frame buffer as a progressively-updated “light” map.

4.3 Acknowledgments

Przemyslaw Prusinkiewicz hosted a wonderful visit to the University of Calgary, and conversations with him regarding this project have helped me considerably. Holly Rushmeier gave me some interesting papers on canopy reflectance, which appear to link L-systems to radiosity. Anand Ramagapalrao developed and implemented the bounding-box version of the hierarchical visibility and antialiasing algorithms as part of his master's project. Conversations with Daryl Hepting and Brian Wyvill were also helpful. The SIGGRAPH reviewers provided their usual valuable and encouraging feedback when rejecting this paper.

References

[Abelson & diSessa, 1982] Abelson, H. and diSessa, A. A. Turtle Geometry. MIT Press, 1982.

[Amburn et al., 1986] Amburn, P., Grant, E., and Whitted, T. Managing geometric complexity with enhanced procedural models. Computer Graphics 20(4), Aug. 1986, pp. 189–195.

[Aono & Kunii, 1984] Aono, M. and Kunii, T. L. Botanical tree image generation. IEEE Computer Graphics and Applications 4(5), Sept. 1984, pp. 10–34.

[Barnsley & Demko, 1985] Barnsley, M. F. and Demko, S. G. Iterated function schemes and the global construction of fractals. Proceedings of the Royal Society A 399, 1985, pp. 243–275.

[Barnsley et al., 1988] Barnsley, M. F., Jacquin, A., Mallassenet, F., Rueter, L., and Sloan, A. D. Harnessing chaos for image synthesis. Computer Graphics 22(4), 1988, pp. 131–140.

[Barnsley et al., 1989] Barnsley, M. F., Elton, J. H., and Hardin, D. P. Recurrent iterated function systems. Constructive Approximation 5, 1989, pp. 3–31.

[Bouville, 1985] Bouville, C. Bounding ellipsoids for ray-fractal intersection. Computer Graphics 19(3), 1985, pp. 45–51.

[Chiu & Shirley, 1994] Chiu, K. and Shirley, P. Rendering, complexity and perception. In Proceedings of 5th Eurographics Rendering Workshop, 1994.

[Cook, 1984] Cook, R. L. Shade trees. Computer Graphics 18(3), July 1984, pp. 223–231.

[Demko et al., 1985] Demko, S., Hodges, L., and Naylor, B. Construction of fractal objects with iterated function systems. Computer Graphics 19(3), 1985, pp. 271–278.

[Foley et al., 1990] Foley, J. D., van Dam, A., Feiner, S. K., and Hughes, J. F. Computer Graphics: Principles and Practice. Systems Programming Series. Addison-Wesley, second ed., 1990.

[Fournier et al., 1982] Fournier, A., Fussel, D., and Carpenter, L. Computer rendering of stochastic models. Communications of the ACM 25(6), June 1982, pp. 371–384.

[Fowler et al., 1992] Fowler, D. R., Prusinkiewicz, P., and Battjes, J. A collision-based model of spiral phyllotaxis. Computer Graphics 26(2), July 1992, pp. 361–368.

[Gerstl & Borel, 1992] Gerstl, S. A. W. and Borel, C. C. Principles of the radiosity method versus radiative transfer for canopy reflectance modeling. IEEE Transactions on Geoscience and Remote Sensing 30(2), March 1992, pp. 271–275.

[Goel et al., 1991] Goel, N. S., Bozehnal, I., and Thompson, R. L. A computer graphics based model for scattering from objects of arbitrary shapes in the optical region. Remote Sensing of Environment 36, 1991, pp. 73–104.


[Greene & Kass, 1994] Greene, N. and Kass, M. Error-bounded antialiased rendering of complex environments. Computer Graphics (Annual Conference Series), July 1994, pp. 59–66.

[Greene et al., 1993] Greene, N., Kass, M., and Miller, G. Hierarchical Z-buffer visibility. In Kajiya, J. T., ed., Computer Graphics (SIGGRAPH '93 Proceedings), vol. 27, Aug. 1993, pp. 231–238.

[Hanrahan & Lawson, 1990] Hanrahan, P. and Lawson, J. A language for shading and lighting calculations. Computer Graphics 24(4), Aug. 1990, pp. 289–298.

[Hanrahan et al., 1991] Hanrahan, P., Salzman, D., and Aupperle, L. A rapid hierarchical radiosity algorithm. Computer Graphics 25(4), July 1991, pp. 197–206.

[Hart & DeFanti, 1991] Hart, J. C. and DeFanti, T. A. Efficient antialiased rendering of 3-D linear fractals. Computer Graphics 25(3), 1991.

[Hart, 1992] Hart, J. C. The object instancing paradigm for linear fractal modeling. In Proc. of Graphics Interface. Morgan Kaufmann, 1992, pp. 224–231.

[Hart, 1995] Hart, J. C. Implicit representations of rough surfaces. In Proc. of Implicit Surfaces '95 (Eurographics Workshop), April 1995, pp. 33–44. Revised version to appear: Computer Graphics Forum.

[Heckbert, 1990] Heckbert, P. S. Generic convex polygon scan conversion and clipping. In Glassner, A. S., ed., Graphics Gems, pp. 84–86. Academic Press, 1990.

[Hutchinson, 1981] Hutchinson, J. Fractals and self-similarity. Indiana University Mathematics Journal 30(5), 1981, pp. 713–747.

[Kajiya & Kay, 1989] Kajiya, J. T. and Kay, T. L. Rendering fur with three dimensional textures. Computer Graphics 23(3), July 1989, pp. 271–280.

[Kajiya, 1983] Kajiya, J. T. New techniques for ray tracing procedurally defined objects. ACM Transactions on Graphics 2(3), 1983, pp. 161–181. Also appeared in Computer Graphics 17(3), 1983, pp. 91–102.

[Kawaguchi, 1982] Kawaguchi, Y. A morphological study of the form of nature. Computer Graphics 16(3), July 1982, pp. 223–232.

[Kay & Kajiya, 1986] Kay, T. L. and Kajiya, J. T. Ray tracing complex scenes. Computer Graphics 20(4), 1986, pp. 269–278.

[Kolb, 1991] Kolb, C. Rayshade. Public domain rendering software, 1991.

[Mandelbrot, 1982] Mandelbrot, B. B. The Fractal Geometry of Nature. W. H. Freeman, San Francisco, 1982.

[Musgrave et al., 1989] Musgrave, F. K., Kolb, C. E., and Mace, R. S. The synthesis and rendering of eroded fractal terrains. Computer Graphics 23(3), July 1989, pp. 41–50.


[Oppenheimer, 1986] Oppenheimer, P. E. Real time design and animation of fractal plants and trees. Computer Graphics 20(4), Aug. 1986, pp. 55–64.

[Perlin, 1985] Perlin, K. An image synthesizer. Computer Graphics 19(3), July 1985, pp. 287–296.

[Prusinkiewicz & Hammel, 1994] Prusinkiewicz, P. and Hammel, M. Language restricted iterated function systems, Koch constructions and L-systems. In Hart, J. C., ed., New Directions for Fractal Modeling in Computer Graphics, pp. 4-1 – 4-14. SIGGRAPH '94 Course Notes, July 1994.

[Prusinkiewicz & Lindenmayer, 1990] Prusinkiewicz, P. and Lindenmayer, A. The Algorithmic Beauty of Plants. Springer-Verlag, New York, 1990.

[Prusinkiewicz et al., 1988] Prusinkiewicz, P., Lindenmayer, A., and Hanan, J. Developmental models of herbaceous plants for computer imagery purposes. Computer Graphics 22(4), August 1988, pp. 141–150.

[Prusinkiewicz et al., 1994] Prusinkiewicz, P., James, M., and Mech, R. Synthetic topiary. In Computer Graphics (Annual Conference Series), July 1994, pp. 351–358.

[Ramagopalrao, 1994] Ramagopalrao, A. Rendering geometries using hierarchical bounding volumes and hierarchical z-buffer for visibility determination. Master's thesis, School of EECS, Washington State University, Dec. 1994.

[Reeves & Blau, 1985] Reeves, W. T. and Blau, R. Approximate and probabilistic algorithms for shading and rendering structured particle systems. Computer Graphics 19, July 1985, pp. 313–322.

[Rubin & Whitted, 1980] Rubin, S. M. and Whitted, T. A 3-dimensional representation for fast rendering of complex scenes. Computer Graphics 14(3), 1980, pp. 110–116.

[Smith, 1984] Smith, A. R. Plants, fractals, and formal languages. Computer Graphics 18(3), July 1984, pp. 1–10.

[Snyder & Barr, 1987] Snyder, J. M. and Barr, A. H. Ray tracing complex models containing surface tessellations. Computer Graphics 21(4), 1987, pp. 119–128.

[Sutherland, 1963] Sutherland, I. E. Sketchpad: A man-machine graphical communication system. Proc. of Spring Joint Computer Conference, 1963.

[Thompson, 1942] Thompson, D. On Growth and Form. University Press, Cambridge, 1942. Abridged edition (1961) edited by J. T. Bonner.

[Williams, 1971] Williams, R. F. Composition of contractions. Bol. Soc. Brasil Mat. 2, 1971, pp. 55–59.


Movie Placeholder


Fractals and Procedural Models of Nature

F. Kenton Musgrave
The George Washington University

20101 Academic Way
Ashburn, VA 22011

(703) [email protected]

1. Introduction

These course notes are composed of two chapters and an appendix from my doctoral dissertation. [42]* As such, the text may make reference to sections (e.g., chapters or illustrations) not reproduced here. Please pardon this state of affairs, but we (or at least I) naturally regard the book [9] we created from the notes for this course in years past, as the "real" notes for this course. Those of you who've written a part of a book can surely appreciate this!

Nevertheless, the material here and in the book is not identical. If you're really interested in these topics, I recommend that you peruse them both. Particularly, the fourth part of my section of these course notes consists of an essay on artistic process which, while not necessarily directly relevant to the general thrust of this SIGGRAPH course, is perhaps of interest; at any rate it certainly does not appear in the book.

1.1 Two Caveats on the Terrain Models

First, when I was writing my dissertation, I had the immediate goal of getting out of graduate school as soon as possible. Therefore I deleted all references to the term "multifractal" in the text, not wanting to be held accountable for a whole new domain of knowledge at the eleventh hour. I substituted the term "heterogeneous" for "multifractal" throughout, to avoid incurring the undue scrutiny of my advisor, the esteemed Benoit Mandelbrot. Having since escaped the Oppression of the Great Man (yes, I'm just kidding!) I've felt free to bandy about the term "multifractal". See the book for more details on that topic.

Second, there are two types of models presented in the terrain modeling section: a physical model (the erosion model) and a series of ontogenetic fractal models. I encourage you to compare them in terms of complexity: the fractal models, I suggest, are striking in their terse fecundity, while the physical erosion model is ... well, shall we say, "complex". Mandelbrot pounded into my thick head a healthy respect for (and fanatical adherence to?) Occam's razor: the metaphysical principle of science and engineering indicating that "the simpler model is the preferred model". A shave with Occam's razor will generally perform a neat partition of physical and ontogenetic models. It is my considered view that, within the academic field of computer graphics today, ontogenetic modeling is an example of the honorable practice of engineering, the art of building something useful by whatever means are convenient, while physical modeling is a weak form of science, the practice of constructing, verifying, and refining formal models which reflect the behavior of Nature. But you should note that I've been rendered ornery and defensive of my peculiar practice of graphics by my own perception of vacuous arrogance on the part of countless reviewers and academics who've taken exception to my methods.

* The full text of my dissertation is available from the Yale computer science department (email Chris Hatchell: [email protected]) or in bound form from UMI. [39] Neither, due to the cost of color reproduction, includes the color plates. You can obtain those, in the form of 35mm slides, directly from the author (email: [email protected]) at my cost of $40 U.S.

To fortify my little Jihad, let me start with an excerpt from my dissertation, illuminating this argument.

1.2 The Ontogenetic Manifesto

My work is image-driven. That is to say, the measure of success of the models I have developed has generally been visual appearance rather than, for instance, scientific veracity. Thus the emphasis has been more on the creation of realistic-looking pictures, than on the construction of mathematically or physically correct models (the erosion models being an exception). Many of the models may seem ad hoc, but there is an underlying theme: they have been required to be (at least somewhat) elegant. That is, they are algorithmically minimal, implemented in a relatively small amount of C code, and reasonably efficient. As each subroutine in a renderer is typically called on the order of millions of times per image, efficiency is a primary concern in the discipline of image synthesis. Efficiency is often at odds with elegance; I have generally required both in my algorithms. Elegance in my algorithms is often attributable to their fractal nature, which leads to what Smith [64] has termed "database amplification": enormous geometric and visual complexity is abstracted into terse procedural descriptions. The fractional Brownian motion which is the basis of most of our visual complexity is an outstanding example of this: the code need simply describe the construction of a spectral sum with a 1/f^β power spectrum for a given exponent β.

There can be seen to be two polar extremes in approaches to modeling: the visually driven "if it looks good, it is good" approach, which I refer to as ontogenetic modeling, and the more rigorous approach of developing computational implementations of mathematical models taken from the scientific literature, which in its most extreme form is sometimes referred to as teleological modeling. [2] Early work in the (still-young) field of computer graphics was primarily of the first sort; as the field matures and the ability to readily produce reasonably good-looking images is more firmly established, the emphasis is shifting towards the second approach, as exemplified by "physically based" modeling. The first approach emphasizes efficiency; the second, veracity. Not surprisingly, the first approach generally leads to better-looking pictures more quickly, while the second leads to impressive-looking technical papers for publication and an increased understanding of the "right" way to model (as well as, often enough, experience in how not to tackle modeling problems).

Physical models can entrain great elegance and descriptive power: witness the spectacular global illumination effects afforded by the semi-physical ray tracing model, based on the geometric optics associated with the particle model of photons. They can also create computational nightmares: consider image synthesis based on the wave model of photons. [35] The radiative equilibrium models of radiosity exemplify a middle ground, where complexity of the implementation is moderate; computation time is generally significant; and the results are striking and highly desirable, yet often suffer significant visual flaws due to simplifying assumptions. As the ultimate goal of realistic image synthesis is to have the right picture for the right reasons, both approaches to modeling are justified and should be pursued. As the field matures, the twain shall meet; in the meantime, work from both ends of the spectrum is justified. In the author's view, the key is intellectual honesty: when one is working from the ad-hoc, image-driven or ontogenetic approach, it behooves us to be clear about what is and is not modeled accurately and with veracity, and to what extent.

The work presented in this dissertation is perhaps best viewed as breadth-first research. The area of realistic imaging of synthetic landscapes was quite immature at the outset of this work. It is a broad field, as the variety of topics addressed here indicates. Thus the research presented here comprises a quantity of relatively small results in disparate areas, united under the umbrella of a common goal: improved landscape imagery. It is not a depth-first inquiry into a recondite and/or picayune problem in computer science, couched in the context of previous work on similar problems. Rather it represents a survey of preliminary approaches to a variety of problems which shall continue to command the attention of researchers for some time to come.

Most of my technical results were motivated by artistic needs; the rest by recognition of easy opportunity. The majority of my time has been spent on refining the artistic process, i.e., developing the procedural approach to realistic image synthesis of models of natural phenomena, and in mastering the medium, i.e., attaining the goal of superior craftsmanship by gaining control of the algorithmic processes, of devices for high-resolution display, and of the nonlinear transformations in color and contrast involved in color reproduction. (Unfortunately, most of the work in color reproduction simply adds up to practical experience, and does not constitute publishable research. [36]) A significant amount of time and energy has been expended on the highly subjective, aesthetic tasks of visual composition and color usage; the reward of this has been effective artistic self-expression. The net result of my image-driven approach to research in computer graphics has been the creation of images of aesthetic significance which simultaneously serve as illustrations of original image synthesis techniques and the descriptive power of fractal geometry, and as examples of artworks born of a novel creative process.

The primary purpose of this work has been to illustrate and expand the descriptive power of fractal geometry as a visual language of natural form. More specifically, it has focused on expanding and refining the uses of fractional Brownian motion as the basis for models of natural phenomena, and clarifying its limitations (e.g., in terrain and turbulence models). Significant progress has been made in this, and directions for future work are clear.

1.3 See the Egress

Now let us transition unceremoniously to the turgid first person plural and get on with the technical content of our presentation.


2. Terrain Modeling

2.1 Introduction

This chapter covers our work in improving geometric models of fractal terrains. Our fundamental contributions to the area of terrain modeling have been the development of point-evaluated heterogeneous terrain models, and erosion processes to be applied to terrain models to generate drainage network features, fluvial deposits, and talus slopes. The heterogeneous terrain models expand the repertoire of terrains which may be described with fractal models, but have no more basis in physics than do conventional, homogeneous fractional Brownian motion terrain models. The erosion models, on the other hand, are derived from the scientific literature of fluvial morphology and represent physical simulations of natural processes.

Standard fractal terrain models based on fractional Brownian motion (fBm) lack realism partly because the statistical character of the surface is the same everywhere, i.e., it is a homogeneous, or stationary, stochastic function (or at least, it should be [27]). To expand the vocabulary of the fractal language of descriptions of Nature, we present a new approach to the synthesis of fractal terrain height fields. In contrast to previous techniques, this method features locally independent control of the spectral sum comprising the fBm, and thus local control of fractal dimension, lacunarity, and crossover scales. Noise synthesis [45] or rescale and add [59] achieves this flexibility by being point-evaluated, or context-free. This distinguishes it from Fourier synthesis and polygon-subdivision schemes, which may not be so flexible.

In noise synthesis, modifications are made to the inner loop of the frequency summation in the construction of the fractal function, which remains a variation on fBm. The modifications are sufficiently minor that most of the elegance of the native fBm model is retained. Point-evaluation allows the parameterized modifications to vary locally, without reference to values of neighboring points, hence the context-independence. Varying the statistics of the terrain model with altitude or lateral position yields more realistic first approximations to eroded landscapes. A slightly more subtle manipulation of the fBm construction loop is shown to yield a model of terrain on large scales, e.g., across tens or hundreds of kilometers.

Preliminary physical erosion models are outlined which simulate fluvial, thermal, and diffusive erosion processes to create global stream/valley networks, talus slopes, and rounding of terrain features. At the time of this writing, the erosion models suffer from numerical instabilities in the finite difference scheme used to model fluid transport. Thus they have not yet been used in their foreseen application as computational testbeds for experimental verification of the published formal transport models of fluvial geomorphology. Work is currently underway to fix the transport problems, so that such research may commence.


2.2 Background

Our contributions in the area of fractal terrain models spring from four basic observations of fBm based models: 1) Existing fBm schemes are flawed. These flaws range from nonstationary statistics [27] or the creasing problem [29] in polygon subdivision schemes to periodicity in Fourier-synthesis schemes. [71] 2) The small sum of frequencies used in synthesizing fBm for computer graphics purposes allows the character of the basis function to show through clearly. The basis function is usually implicit in the generation algorithm: a saw-tooth wave in polygon subdivision schemes, a sine wave in Fourier synthesis, a piecewise tri-cubic polynomial in noise synthesis, etc. Varying the basis function affects the character of the final surface. 3) Pure fBm is insufficient as a model of natural terrains. FBm is statistically symmetric across the horizontal plane; real terrains are not. [29] FBm is homogeneous (i.e., stationary) and isotropic; real terrains are not. 4) Terrain models based on fBm lack river networks, deltas, alluvial deposits, talus slopes and other such erosional features which are salient in natural terrains. (This may be seen as part of observation 3, but in practice it raises a different set of problems.)

Convincing fractal drainage systems are difficult to arrange in the first-order algorithmic generation of the surface, specifically because they result from global processes and thereby require global communication in the generation algorithm. In nature, drainage systems are due not to the primary orogenesis (mountain-building process) itself, but to "post-processing" of the uplifted substrate by flowing water and glaciers. This suggests the approach of generating them by post-processing in synthetic models as well.

2.2.1. Origins of Fractal Terrain Models

The origin of fractal landscapes in computer graphics is this: some time ago, Benoit Mandelbrot noticed the similarity between the trace of fractional Brownian motion over time, and the skyline of jagged mountain peaks. [28] He reasoned that this process, extended to two dimensions, would result in a "Brownian surface" which would provide a visual approximation to mountains in nature. Some of Mandelbrot's earliest computer graphics images were of such surfaces [26] and indeed, they do capture the essence of "mountain-ness". Voss and Mandelbrot later used fractional Brownian noises to create some very convincing forgeries of nature. [70] New terrain synthesis methods have since been proposed by Fournier et al [10], Miller [33], and Lewis. [23]

Most synthetic fractal terrains hitherto have been varieties of fractional Brownian motion, or, more loosely, 1/f^β surfaces. FBm is simply a mathematical phenomenon which happens to resemble (very mountainous) natural terrains. There is no known, generative link between the shape of fBm and that of real landscapes. While this is not unusual in fractal descriptions of natural phenomena, it sometimes leads to questions about such descriptions, as noted by Sommerer [65]:

Although physical fractal spatial patterns are frequently observed, a quantitative connection between the measured numerical value of the fractal dimension and underlying physics of the process has largely been lacking (exceptions are [6, 15, 61, 66]). This lack of connection, while not reducing the utility of fractals for phenomenological characterization, has led to skepticism about the ultimate meaningfulness of fractal descriptions in physics. [18, 55, 56]

Fortunately, our discipline is image synthesis, not physics, and therefore good ontogenetic descriptions are often more useful to us than thorough teleological ones. Furthermore, lack of knowledge of linkage does not necessarily imply that there exists no such link; it may simply be as-yet undiscovered.


2.2.2. Limitations of FBm Terrain Models

The root of the problem with fBm surfaces as models of natural landscapes traces back to the fact that fBm was chosen for use as such a model for purely ontogenetic reasons: fBm looks like mountains. But native fBm is a mathematical function designed to incorporate certain properties. Chief among these is stationarity: fBm is a stochastic function which is statistically invariant under translation. Real terrains decidedly are not. FBm terrains are statistically symmetrical about the horizontal plane due to the symmetry of the Gaussian distribution of displacements inherent in their definition. Again, real terrains are not. FBm terrains in general have no global erosion features, in part, because fBm is by design isotropic and stationary. Such homogeneity cannot encompass the morphology of stream beds. In practice, fractal terrain models lack drainage features because of the difficulties inherent in implementation and computation of models of such global structures, which require global communication and context-sensitivity. Free of concerns with computational efficiency, real terrains generate such structures spontaneously, reliably, and ubiquitously.

In this chapter, we describe a flexible approach to the generation of variable smoothness and asymmetry in fractal terrain models at the terrain model generation stage, and suggest a global, physical erosion post-process for height fields which generates both local and global erosion features through a simulation of natural erosion processes.

2.3 Previous Work

We now review the prior state of the art in fractal terrain synthesis.

2.3.1. Fractional Brownian Motion

This section describes fBm and methods used for its generation in computer graphics. In an attempt to convey to the reader an intuitive grasp of fBm, we first offer several high-level views of fBm. For a thoroughly detailed presentation, we refer the reader to the excellent existing literature, particularly Voss [71] and Saupe. [58]

2.3.1.1 Characterization of fBm

FBm is a stochastic fractal function, which is characterized by its statistical behavior. Here are several ways of viewing it:

• Definition: FBm is the integral over time of the increments of the path of Gaussian Brownian motion, i.e., a random walk.

• FBm is a non-differentiable (read: infinitely wiggly) function of time, with a characteristic power spectrum (e.g., graph of signal amplitude by frequency): the power spectrum corresponds to the function 1/f^β, where f is frequency, and 1 ≤ β ≤ 3.

• FBm is statistically self-affine: its statistics (and appearance) are similar at all scales, modulo a vertical scaling factor (this need for a different change in vertical scale, with horizontal scaling, distinguishes self-affinity from self-similarity).

• FBm, as a terrain model, is a two-dimensional extension of the (originally) one-dimensional fBm formalization.


• (Fourier synthesis model:) fBm is a sum of sine waves with random phase, and amplitude scaled as 1/f^β.

• (Noise synthesis model:) fBm is a sum of band-limited random basis functions, with amplitude scaled as 1/f^β, where f is the mean frequency of the band-limited basis function.

• (Midpoint displacement model:) fBm is a sum of sawtooth waves of successively doubled frequencies, with Gaussian offsets for peaks, scaled by frequency f as 1/f^β.

Fractional Brownian motion is perhaps most succinctly described by the Weierstrass-Mandelbrot function [71]:

    V(t) = Σ_{f = −∞}^{∞} A_f r^{fH} sin( 2π r^{−f} t + θ_f )        (2.2.1)

where A is a Gaussian random variable, r is the spatial resolution or lacunarity, θ is a uniform random variable in the range [0, 2π] providing a random phase, and H is the Hölder exponent, which determines the fractal dimension. The Weierstrass-Mandelbrot function is basically an infinite sum of sine waves at discrete frequencies and with random phase, with a certain size gap between successive frequencies (the lacunarity), and an exponent H which scales amplitude by frequency. Amplitude relates to frequency f by the plot of the power spectrum [71]

    S_WM( f ) ∝ 1 / f^{2H+1}        (2.2.2)

and hence the result is referred to as 1/f noise.

The important point to note here is that pure fBm is composed of a sum of sine waves of discrete frequencies, with random phases, and amplitudes related to frequency by the 1/f^β weighting. Thus fBm for computer graphics purposes is generally constructed by summing discrete frequencies of some basis function, with the proper amplitude scaling for each frequency.

2.3.1.2. Band-Limiting

Note that the Weierstrass-Mandelbrot function involves an infinite sum, generally a bad idea for computations which are expected to terminate. It also contains an unbounded range of frequencies while, as prescribed by the Nyquist sampling theorem, digital displays have an upper limit to the spatial frequencies they can accurately reproduce. Thus, fBm for computer graphics is always discretized band-limited fBm, that is, fBm composed of a finite sum of discrete frequencies.

What does this mean, in practice? It means that we generally construct fBm as a sum of about three to eight frequencies of the basis function, a relatively simple, fast operation for the computer. Note that we state a lower limit of three octaves of the basis function for fBm; with less than three octaves any claim of self-similarity or adherence to a particular characteristic power spectrum becomes rather vacuous.
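
As a worked instance of the band limit (added here for illustration, not from the original text): if the lowest frequency places a single cycle of the basis function across the patch and the patch is displayed at 512 samples across, the Nyquist limit is 256 cycles across the patch, which is reached after log2(256) = 8 frequency doublings; octaves beyond that would merely alias, so roughly eight or nine frequencies is the practical ceiling at that resolution.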


2.3.1.3. Generation of fBm

FBm generation methods can be categorized, for the purposes of computer graphics, into procedural and non-procedural methods. Procedural models are evaluated when and where needed, at rendering time, whereas non-procedural models are evaluated globally and stored in advance of rendering. Non-procedural methods, such as polygon subdivision, are generally fast and easy to implement. Procedural methods, specifically noise synthesis, [45] are flexible, memory-efficient (as only visible parts of the model need be computed), and simple, given an implementation of the basis (i.e., "noise") function. They are not particularly efficient, in terms of computing time, as the noise function generally requires many floating point operations to evaluate. Nevertheless, due to the flexibility and elegance of the procedural approach, noise synthesis remains the author's fBm generation method of choice.

For background on procedural means for generating fBm and other stochastic procedural textures, see Gardner [12], Lewis [23, 24], Mandelbrot et al [25], Musgrave et al [40, 45], Perlin [52], Saupe [59], Stam et al [67], and van Wijk. [73]

2.3.1.4. Terminology

We now give a very brief description of some of the mathematical terminology associated with the generation of fractal terrains. For greater depth, see Saupe. [58]

We define Df as the fractal dimension of the surface, DE as the Euclidean dimension of the surface, and H as the fractal dimension parameter. (Note that previous authors, specifically Bouville [4] and Miller, [33] sometimes erroneously refer to H as the fractal dimension.) For terrain models DE = 2 and Df = 3 − H.

The fractal dimension Df, Euclidean dimension DE, fractal dimension parameter H, and spectral exponent β of 1/f^β noise or fBm are related as:

    Df = DE + 1 − H = DE + (3 − β)/2        (2.2.3)

It follows that β = 1 + 2H and H = (β − 1)/2. Since DE = 2, for our purposes, Df = 3 − H = (7 − β)/2. Note that H is in the interval [0,1] and β is in the interval [1,3] for fBm, by convention. Outside this range, the function may not formally be fBm.
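
As a quick worked instance of these relations (added here for illustration): a spectral exponent of β = 2.2 gives H = (2.2 − 1)/2 = 0.6 and, with DE = 2, a fractal dimension Df = 3 − 0.6 = 2.4; the terrain value of H ≈ 0.8 mentioned below corresponds to β = 2.6 and Df = 2.2.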

Fractional Brownian motion in one dimension is a stochastic process X(t) with a power spectrum S(f) scaling with f as

    S( f ) ∝ 1 / f^β        (2.2.4)

where β is referred to as the spectral exponent of the fBm and is in the interval [1,3].

FBm itself is not a stationary process, but its increments I(∆t) = X(t + ∆t) − X(t) are; that is, the expected value of I(t, ∆t) is zero for all t and ∆t and the variance σ² of I(t, ∆t) does not depend on t. In the special case of Brownian motion, H = 0.5 and σ² varies as ∆t^{2H}. Thus for H = 0.5 increments are uncorrelated; for H > 0.5 (as in fractal terrains, where H is approximately equal to 0.8) increments are positively correlated; for H < 0.5 they are negatively correlated (corresponding to a very rough surface). In more than one dimension fBm is a random field X(x, y, ...) with X on any straight line being a 1/f^β noise.

Crossover scale is the scale at which fractal character vanishes. Upper crossover scale is heuristically defined as the scale where vertical and horizontal displacements are equal. Thus, for a mountain range rising from sea level to peaks which are at most one kilometer high, the upper crossover scale is one kilometer. Lower crossover scale would be the smaller scale at which the surface becomes smooth and non-fractal.

Lacunarity generally refers to gaps in fractals [28]; here it can be thought of as the gap between frequencies composing the fBm of the fractal terrain. Thus when iteratively summing the frequencies composing fBm, if the frequency f_i added at stage i is a multiple λ of f_{i−1},

    f_i = λ f_{i−1}        (2.3.5)

then λ is the lacunarity of the fBm. While lacunarity affects the texture of the fBm, this effect is usually only noticeable for λ > 2. Thus we usually let λ = 2, as lower values involve more computation for a given frequency range of fBm and larger values can affect the surface appearance.

2.3.2. Fractal Terrain Models

Most fractal terrain models have been based on one of five approaches: Poisson faulting [28, 70], Fourier filtering [28, 32, 70], midpoint displacement [10, 23, 29, 33], successive random additions [70], and summing band-limited noises. [11, 33, 59] The approach we have developed is of the last type, which we will refer to as the noise synthesis method. We will now briefly review these five techniques (for a more detailed review, see Voss [71] and Saupe [58]).

2.3.2.1. Poisson Faulting

The original terrain generation technique employed by Mandelbrot [28] and Voss [70] was Poisson faulting. The Poisson faulting technique involves applying Gaussian random displacements (faults, or step functions) to a plane or sphere at Poisson distributed intervals. The net result is a Brownian surface. This approach has been employed to create fractal planets by Mandelbrot and Voss. [28] It has the advantage of being suitable for use on spheres for creation of planets. Its primary drawback is the O(n³) time complexity, where n is the number of samples generated.

2.3.2.2. Midpoint Displacement

Midpoint displacement methods were introduced to the computer graphics community as an efficient terrain generation technique by Fournier, Fussell, and Carpenter. [10] We classify the various midpoint displacement techniques by locality of reference: wireframe midpoint displacement, tile midpoint displacement, generalized stochastic subdivision, and unnested* subdivision.

Pure wireframe subdivision is used only in the popular triangle subdivision scheme [10] and involves the interpolation between two points in the subdivision process. Tile midpoint displacement involves the interpolation of three or more non-collinear points; it is used in the "diamond-square" scheme of Miller [33], the square scheme of Fournier et al [10], and the hexagon subdivision of Mandelbrot and Musgrave. [29] Generalized stochastic subdivision [23] interpolates several local points, constrained by an autocorrelation function. Miller [33] also proposed an unnested "square-square" subdivision scheme.

Wireframe and tile midpoint displacement methods are generally efficient and easy to implement, but have fixed lacunarity and are nonstationary due to nesting (see Miller [33] for a disposition of the resulting artifacts). Generalized stochastic subdivision and unnested subdivision schemes are stationary; the former is flexible but not particularly easy to implement, while the latter features fixed lacunarity and is very simple to implement. Note that all midpoint displacement techniques produce true fractal surfaces [46] but simply have the wrong statistical characteristics to qualify as pure fractional Brownian motion. [27]

2.3.2.3. Successive Random Additions

Successive random additions is a flexible unnested subdivision scheme. When points determined in previous stages of subdivision are re-used, they are first displaced by addition of a random variable with an appropriate distribution. Previous points need not be re-used; new grid points to be displaced can be determined from the previous level of subdivision by linear or nonlinear interpolation. Successive random additions features continuously variable level of detail, which is useful for zooms in animation, and arbitrary lacunarity. The lacunarity λ depends on the change of resolution at successive generations; time complexity of the algorithm is a function of λ and the final resolution R. The successive random additions algorithm is easy to implement.

2.3.2.4. Fourier Synthesis

Fourier synthesis generates fBm by taking the Fourier transform of a two dimensional Gaussian white noise, then multiplying it in frequency space with an appropriate filter, and interpreting the inverse Fourier transform of the product as a height field. Alternatively, one can simply choose the coefficients of the discrete Fourier transform, subject to the proper constraints, and interpret the inverse Fourier transform as above. [58] Advantages of this approach include the availability of arbitrary lacunarity and precise control of global frequency content. Disadvantages include periodicity of the final surface, which can require that substantial portions of the computed height field patch be discarded, the O(n log n) time complexity of the FFT algorithm, the level of complexity of implementation, and lack of local control of detail.

* For an explanation of the "nesting" issue, see Mandelbrot. [29]


2.3.2.5. Heterogeneous Models

The issue of statistical symmetry across the horizontal plane in fractal terrain models has been addressed by Mandelbrot and Voss [28, 70] through the use of nonlinear scaling in a post-processing step, and by Mandelbrot [29] through the use of random variables with non-Gaussian distributions in the displacement process. These approaches yield peaks which are more jagged and valleys which are smoother, but they still lack global erosion features. A global river system, created algorithmically at terrain generation time, has been demonstrated by Mandelbrot and Musgrave [29] but with less-than-satisfactory results, e.g., the resulting surface is far too regular to appear convincingly natural.

2.3.2.6. Summing Band-Limited Basis Functions

What we call noise synthesis can be described as the iterative addition of tightly band-limited frequencies, each of which has a randomly varying, or "noisy", amplitude. Noise-synthetic surfaces have been used by Miller, [33] Gardner [11] and Saupe. [59] Miller has used Perlin's procedural 1/f^β noise [34] as a displacement map [7] to add detail to the (otherwise straight) edges of polygons tessellating a Brownian surface of similar spectral content. Gardner has interpreted his noise function, based on a "poor man's Fourier series" [12] (actually a variation of the Mandelbrot-Weierstrass function) as a height field. The quantization of altitude values of the height field yields terraced land, such as mesas. Our approach differs from Gardner's in that we exercise local control over frequency content based on the amplitude of existing signal and other functions. The Perlin noise function is notably more isotropic than Gardner's noise function, and is not periodic; Gardner's terrains and textures suffer visible artifacts due to these factors. In addition, Gardner's noise function requires some critical values of the constants for good results, which values must be determined through subjective experimentation. Driven by table lookups, the Gardner noise function is much faster than the Perlin function.

Saupe independently developed an approach to noise synthesis similar to ours; he terms it rescale and add. Saupe's publication of that work featured an emphasis on mathematical foundations, while ours have emphasized applications. For a thorough mathematical treatment of the issues of noise synthesis which is complementary to this document, see Saupe. [59]

2.3.3. Erosion Models

Kelley et al [19] have used empirical hydrology data to derive a system for the generation of stream network drainage patterns, which are subsequently used to determine the topography of a terrain surface. This approach features the global dependence necessary for realistic fluvial erosion patterns, and has a strong basis in measurements of real physical systems. This approach to modelling fluvial erosion is relatively efficient; what it lacks is the detail of a fractal surface. While the stream network may be fractal, the "surface under tension" used for the terrain surface is not, and cannot be readily made so without disturbing the drainage basins and stream paths.

We propose a simple fluvial erosion simulation in which water is dropped on each vertex in a fractal height field and allowed to run off the landscape, eroding and depositing material at different locations as a function of energy and sediment load of water passing over each vertex. The erosion laws are taken from the literature of fluvial geomorphology. [1, 60] The model features the global communication necessary to create global features, and is slow despite the O(n) time complexity. We also present a global model for simulation of what we refer to as thermal weathering. While fluvial erosion creates valleys and drainage networks, thermal weathering wears down steep slopes and creates talus slopes at their feet. The thermal weathering simulation can create realistic results in much less computing time than the fluvial erosion simulation, and is also O(n) in time complexity. Diffusive erosion simulation is trivial; it simply consists of a progressive low-pass spatial filter over time. These models are discussed in section 2.4.
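
To make the last remark concrete, here is a minimal sketch (an added illustration, not the dissertation's erosion code) of one step of such a low-pass filter applied to a height field; repeated application progressively rounds off the terrain. The grid size, boundary handling, and blending rate are arbitrary assumptions.

/* One pass of diffusive "erosion": blend each interior sample toward the
 * average of its four neighbors.  Calling this repeatedly acts as a
 * progressive low-pass spatial filter over the height field. */
#define GRID 256

void diffuse_step( double h[GRID][GRID], double rate )   /* 0 < rate <= 1 */
{
    static double tmp[GRID][GRID];
    int x, y;

    for ( y = 1; y < GRID-1; y++ )
        for ( x = 1; x < GRID-1; x++ ) {
            double avg = 0.25 * ( h[y-1][x] + h[y+1][x] + h[y][x-1] + h[y][x+1] );
            tmp[y][x] = h[y][x] + rate * ( avg - h[y][x] );
        }

    for ( y = 1; y < GRID-1; y++ )
        for ( x = 1; x < GRID-1; x++ )
            h[y][x] = tmp[y][x];
}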

2.4 Original Terrain Synthesis Models

We now present our original models for fractal terrain synthesis. First we describe our basis function, the so-called "noise" function, then we describe how we use it to generate novel terrain models.

2.4.1. Noise Function

Noise-synthetic terrain generation is accomplished by the addition of successive frequencies of a band-limited "noise" function. The source of the noise we use is a version of the Perlin [52] noise function. The ideal noise function for our purposes would be monochromatic (i.e., single-frequency), homogeneous (invariant under translation), and isotropic (invariant under rotation). The Perlin function supplies a band-limited signal of random amplitude variation; it is stationary and nearly isotropic.*

The Perlin noise function N: R^n → R is implemented as a set of random gradient values defined at integer points of a lattice or grid in space (of dimension n = 1, 2, 3, or 4) which are interpolated by a cubic function. At lattice points in space (points in space with integer coordinates), the value of the function is zero (a zero crossing) and its rate of change is the gradient value associated with that lattice point. The trajectory of the function, given by the gradient value at the integer points, is interpolated at non-integer points with the cubic function y = 3x² − 2x³, which interpolant features first derivative continuity and zero rate of change at the end points, where x = 0 and x = 1. Since the gradient might be, for instance, increasing at two consecutive lattice points i and i + 1, there may also be a zero crossing between lattice points. This gives rise to frequencies in the noise function higher than f, f being the primary frequency which is that of the spacing of the integer lattice.
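
For readers who want to see the lattice-and-gradient construction spelled out, here is a minimal one-dimensional sketch in the spirit of the description above. It is an added illustration with assumed names and table size, not the Noise3() routine used in these notes; a real implementation would hash the lattice index and extend the scheme to two or three dimensions.

/* Minimal 1-D gradient ("Perlin-style") noise: zero value at every integer
 * lattice point, slope there given by a random gradient, cubic 3t^2 - 2t^3
 * blend in between (zero derivative at both cell ends). */
#include <stdlib.h>
#include <math.h>

#define TABLE_SIZE 256
static double gradient[TABLE_SIZE];

void init_noise1( void )                      /* call once before noise1() */
{
    int i;
    for ( i = 0; i < TABLE_SIZE; i++ )
        gradient[i] = 2.0 * drand48() - 1.0;  /* random slope in [-1,1] */
}

double noise1( double x )
{
    int    ix = (int) floor( x );
    double fx = x - ix;                       /* position within the cell */
    double g0 = gradient[  ix      & (TABLE_SIZE-1) ];
    double g1 = gradient[ (ix + 1) & (TABLE_SIZE-1) ];
    double v0 = g0 * fx;                      /* ramp from left lattice point  */
    double v1 = g1 * ( fx - 1.0 );            /* ramp from right lattice point */
    double t  = fx * fx * ( 3.0 - 2.0 * fx ); /* cubic blend */
    return v0 + t * ( v1 - v0 );
}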

The noise function can be modified to have an arbitrary, non-zero value at the lattice points. This increases the variance of the function, but adds low frequency components to the signal which cannot be controlled or subsequently removed [23]; this has implications for the statistics of the surfaces to be generated. For an analysis of the spectral characteristics of such a noise function see Saupe. [59]

The band-limited Perlin noise function N(p) outputs a signal with a fixed lower frequency f equal to half the frequency of integers in the domain p ∈ R^n. To scale the frequency of N by a factor u, one simply performs a scalar multiplication up of the domain vectors supplied to N. This has the effect of scaling the reference points in the noise lattice, producing the desired frequency shift in the output of N. We will see this practice used below.

* It is geometrically impossible to reconcile the three criteria of monochromaticity, stationarity, and isotropism in a multidimensional Perlin noise function. If the frequency of an n dimensional Perlin function is f along the axes of the grid upon which it is defined, then the frequency will be √n f along the long diagonal of that grid.

2.4.2. Local Control of Statistics

We now begin to describe how we have employed the flexibility of procedural methods to construct terrain models of improved realism.

Subjective observation of natural landscapes reveals that in certain types of mountain ranges there is a marked change in the statistics of the surface as one moves from the foothills to the highest peaks. The foothills are more rounded, while the higher mountains are more jagged. Sometimes, as in the eastern slope of the Sierra Nevada, the entire mountain range rises in a relatively short distance from a nearly flat plain. This change of character can be characterized as a change of fractal dimension Df, crossover scale, or both.

Using the noise synthesis technique we can easily devise terrain models with such features, by modulating the power spectrum of the surface as a function of horizontal position and/or vertical altitude, or of prior values in the spectral sum. These techniques produce heterogeneous terrain models. Note that we might also refer to such models as nonstationary, but as history has imbued that term with a pejorative cast in fractal terrain modelling [27], we prefer to use the term "heterogeneous".

We will begin our exposition with a description of a procedural construction of homogeneous fBm. We will then describe our modifications to this algorithm, for generating heterogeneous terrains.

2.4.2.1. Homogeneous Procedural FBm Construction

Homogeneous fBm is simply a stochastic function with a 1/f^β power spectrum, for 1 ≤ β ≤ 3. In a procedural context it may be described as:

    fBm( p ) = Σ_{i=1}^{n} N( f_i ) ϖ^i

where p is the point at which the function is evaluated, n is typically between 3 and 12, N is the basis (e.g., "noise") function, f_i = p λ^i is the scaled domain point (usu. lacunarity λ = 2.0), and ϖ = λ^{−0.5β} (a constant, determining the fractal dimension Df). The randomness in the function is embodied in the noise function N.

Below is pseudo-code for Noise-based fBm:

fBm( p, H, r, n )
    result := Noise( p )
    for octave := 2 to n
        p := p * r
        amplitude := pow( r, -H * octave )
        result := result + amplitude * Noise( p )
    return ( result )


where H is the fractal dimension parameter, r is the lacunarity (usu. equal to 2.0), and n is the number of octaves. Note how terse the fBm specification is.

C code to generate such fBm might look like:

double
fBm( point, spectral_exp, lacunarity, octaves )
Vector point;
double spectral_exp, lacunarity, octaves;
{
    register double i, result, amplitude, frequency = 1.0;

    result = Noise3( point );
    for ( i = octaves-1; i > 0.0; i-- ) {
        point.x *= lacunarity;
        point.y *= lacunarity;
        point.z *= lacunarity;
        frequency *= lacunarity;
        amplitude = pow( frequency, -spectral_exp );
        if ( i < 1.0 ) amplitude *= i;          /* for octaves remainder */
        result += amplitude * Noise3( point );
        if ( amplitude < VERY_SMALL ) break;
    }
    return( result );
} /* fBm() */

where point is the point at which the function is being evaluated; spectral_exp, the fractal dimension parameter, is typically ~0.5; lacunarity is almost always 2.0 (in fact, this might as well be hard-coded); and octaves determines the number of frequencies in the spectral sum (note that the variable name "octaves" is only appropriate when lacunarity equals 2.0, as "octave" implies a frequency doubling). This fBm function rolls off the remainder of the floating point value of octaves, for smooth transitions in adaptively band-limited textures.
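
A small, hypothetical usage sketch (added here; it assumes the Vector type and Noise3() basis function already used in the listing above): sampling one height value from the routine.

/* Sample a height from the fBm() routine above at world position (x, y). */
double sample_height( double x, double y )
{
    Vector p;

    p.x = x;  p.y = y;  p.z = 0.0;
    return fBm( p, 0.5, 2.0, 6.0 );   /* spectral_exp ~0.5, lacunarity 2, 6 octaves */
}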

Note that this function should not be regarded as statistically-accurate, as Saupe has shown [59] that the cubic polynomial interpolant employed in the noise functions affects the power spectrum of the final discretized fBm sum in unexpected ways. The power spectrum of Perlin's "Turbulence()" function [52] is, for instance, 1/f² rather than the expected 1/f. Therefore, we should regard this and the other versions of fBm functions described here as parameterized versions of random fractal functions, the parameter values of which are to be interpreted subjectively rather than empirically. Formal correspondence between parameter values and fractal statistics could be established using the results described by Saupe, but we have not found this to be necessary or useful for successful computer graphics practice.

2.4.2.2. Local Modulation of Fractal Dimension

In our procedural approach, fractal dimension can be modulated locally by varying β with position. This is a flexibility previously unavailable in fBm generation schemes. Plate 6.2.2 shows a patch which goes from planar on the left to space filling on the right (modulo the upper and lower crossover scales, which are approximately 7 and 1/128, respectively). In this case, we have β = x^{−1} (corresponding to H = 1/x − 3/2), and x in the interval (0,1].


In Plate 6.2.3, we linearly change fractal dimension Df from 2 (H = 1, β = 3) to 3 (H = 0, β = 1) on the right. Note that this is not the same as going from planar (1/f^∞) to filling all of 3-space (1/f^0), as in Plate 6.2.2.
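
The following sketch (an added illustration, not from the dissertation) shows how little code such local modulation requires in the procedural setting: in the spirit of Plate 6.2.3, H is interpolated linearly across the patch, and the fBm() routine listed earlier does the rest. The function name and the normalization of x to [0,1] are assumptions.

/* Local modulation of fractal dimension: H goes linearly from 1 (Df = 2,
 * smooth) at x = 0 to 0 (Df = 3, space filling) at x = 1.  fBm() and Vector
 * are as in the earlier listing. */
double linear_dimension_fBm( Vector point, double x, double lacunarity, double octaves )
{
    double H = 1.0 - x;           /* H = 1 on the left edge, 0 on the right */
    return fBm( point, H, lacunarity, octaves );
}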

2.4.2.3. Fractal Statistics by Altitude

We now begin to describe our heterogeneous procedural terrain models. Our first model varies the spectral exponent in the spectral summation by the magnitude of the contribution of previous, lower frequencies. The idea is to keep the terrain smooth near an artificial "sea level", while allowing it to get rougher at higher altitudes, in a first approximation to naturally occurring, eroded terrains.

2.4.2.3.1. The Algorithm

Presuming that our basis (e.g., noise) function N: R^n → R has a range in the interval [−1,1], we may wish to translate N by a constant c_t so that it is, for instance, always or nearly always positive (the sign will be important in later iterations). We may also wish to scale N by a factor c_s to reduce or expand its range (the positive portion of which we may wish to keep normalized to a maximum value of 1, for instance). In the patch illustrated in Plate 6.2.4, we insert the lowest frequency first:

    A_0 = ( N( f_0 ) + c_t ) c_s + c_0

where A_0 is the initial height of a point in the height field, f_0 is the initial object space coordinate vector, p_0, of the height field position being calculated, and c_0 is an offset constant which determines the zero value or "sea level" of the terrain. Note that f_0 can be an element of R² or R³, depending on the arity of the noise function domain.

Iterating fBm noise at lacunarity λ > 1 requires that, at iteration n, the amplitude of the frequency added is proportional to (f_0 λ^n)^{−0.5β}. Setting the lowest frequency f_0 = 1 gives a frequency increment at iteration n of λ^{−0.5βn}. Thus we have for the altitude A_i at stage i > 0:

    A_i = A_{i−1} + A_{i−1} ( N( f_i ) + c_t ) c_s ϖ^i

where ϖ = λ^{−0.5β} (a constant) and f_i = p_{i−1} λ. Note that for a noise function N: R² → R, we have f_i = p_0 λ^i. For N: R³ → R, we may have f_i ≠ p_0 λ^i due to vertical displacement.

To clarify, here is pseudo-code for the algorithm. Note that it is only a minor modification to the fBm routine described above.

hfBm( p, H, lacunarity, octaves, sea_level )
    result := Noise( p ) + sea_level          /* Noise: R³ → [−1,1] */
    for octave := 2 to octaves
        p := p * lacunarity                   /* lacunarity usu. ≅ 2.0 */
        amplitude := pow( lacunarity, -H * octave )
        result := result + min( 1.0, result ) * amplitude * Noise( p )
    return ( result )

Below is C code implementing this algorithm. The point is not to show the details of the code, but rather its brevity.

/* heterogeneous fractional Brownian motion routine */
double
hfBm( point, omega, octaves )
Vector point;
double omega, octaves;
{
    register double lacunarity, a, o, prev;
    register int i;
    register Vector tp;

    lacunarity = 2.0;  a = 0.0;  o = omega;  tp = point;
    /* get initial value */
    a = prev = 0.7 + Noise3( point );
    for ( i = 1; i < octaves; i++ ) {
        tp.x *= lacunarity;  tp.y *= lacunarity;  tp.z *= lacunarity;
        /* get subsequent values, weighted by previous value */
        prev = prev * o * ( 0.7 + Noise3( tp ) );
        a += prev;
        if ( prev < SMALL ) break;
        o *= omega;
    } /* for */

    return( 0.15/omega * a );
} /* hfBm() */

2.4.2.3.2. Discussion

This algorithm does not use a uniform spectral exponent β for all frequencies: it is a monotonically decreasing function as frequency increases, bounded above by the "base" fractal dimension, which is the highest dimension attainable in the terrain. This amounts to modulating both lower crossover scale and fractal dimension with altitude. Yet it is not even as simple as that: the spectral exponent, which determines the fractal dimension, varies with frequency, since altitude also varies with frequency as the spectral summation proceeds. We have not yet elegantly characterized the complex statistical behavior of this and the below heterogeneous fractal functions.

It is interesting to note that subjective experience indicates that modulation of crossover scale is more important than modulation of fractal dimension for generating realistic-looking terrain. That is, it is not so much that terrains have different roughnesses at different locations, as that they have a rather uniform roughness, but expressed at different scales. That changing crossover scale alone would have such a dramatic effect is not surprising, for as Mandelbrot has pointed out [30], the fractal dimension of the Himalayas is approximately the same as that of the runway at JFK airport; it is simply that the upper crossover scale of the latter is on the order of millimeters while that of the former is on the order of kilometers. Future work in terrain models might profit from experiments with models of uniform fractal dimension, but heterogeneous crossover scales.

2.4.2.3.3. Results

Plate 6.2.5 is a rendering of a detail of a 50 × 50 patch similar to that in Plate 6.2.4. Note that the triangles, which are barely visible due to bump mapping but can be discerned around the snow-covered peak, are quite large in comparison with the overall image. By including only relatively low frequencies in the terrain, and leaving high-frequency details to the texture map, we can get realistic terrains from very small height fields. Such height fields can be rendered very rapidly. In Plates 6.2.4 and 6.2.5, as in subsequent plates of terrain patches, λ = 2.

Plate 6.2.14 illustrates a convincing model of ancient, heavily eroded terrain produced by the above terrain model. Here the difference in statistics between the (visible) low areas and the peaks is relatively small, and the fractal dimension is quite low throughout. Such smooth, rolling terrain could not be generated by standard polygon subdivision algorithms, as it requires a smooth basis function, rather than a saw-tooth wave. Careful inspection reveals that the polygons tessellating the terrain surface are very large indeed — note the piece-wise linear ridgelines. Thus the terrain model is composed of relatively few polygons, and renders very rapidly.

In Plate 6.2.6 the spectral exponent varies with both altitude and horizontal position. Here we have:

$$A_0 = F(x) \left( N(\vec{f}_0) + c_t \right) c_s + c_0$$

with F(x) = min(2x, 2 − 2x), assuming that x varies from 0 to 1. To give the ridge a more natural path than that of a straight line, we add some noise to x before calculating F(x). The contribution of higher frequencies is again scaled as:

$$A_i = A_{i-1} + A_{i-1} \left( N(\vec{f}_i) + c_t \right) c_s \, \varpi^i$$

2.4.2.4. Modulating Lacunarity

It is readily apparent that the global value for lacunarity λ is subject to exact user control in the noise synthesis scheme. Computational cost in the creation of a model instance varies directly with the number of frequencies used. Generally, there is some desired bandwidth for a given fractal model, dictated by display resolution, available storage space, desired level of geometric detail, etc. Cost per unit bandwidth varies as the inverse of the lacunarity. Thus surfaces generated with small lacunarity will be more expensive to compute than those with large lacunarity.

Due to the point-evaluated character of the noise synthesis method, we may also exercise local control over lacunarity. This can be accomplished by displacing the initial coordinate $\vec{p}_0$ supplied to the noise function by a vector-valued noise function $\vec{N}$ (e.g., Perlin's "DNoise()" [52]). The effects of such local change of lacunarity are shown in Plate 6.2.7b, where we modulate intensity I on the image plane as:

$$I = N\!\left( \vec{p}_0 + \vec{N}(\vec{p}_0) \right)$$

Note that local change of lacunarity interferes with the precise local control of frequency, as it amounts to expanding the band limits of the basis function. While it is not immediately apparent that this local modulation of lacunarity is enormously useful for terrain synthesis, it has demonstrated value in the synthesis of textures such as clouds, smoke, and flames. The function illustrated in Plate 6.2.7b may be used as a novel basis function for the construction of fBm. Again, due to the small (usually ~3-12 octaves, at lacunarity λ = 2) spectral sums used in constructing fBm for computer graphics purposes, the character of the basis function shows through clearly in the result. Compare the fBm constructions in Plates 6.2.7c and d. A variation of the fBm shown in Plate 6.2.7d can be used in the clouds, providing a peculiar quality in the cloud texture, hitherto unavailable. Presumably, similar terrain models would also have a unique qualitative appearance, but we have yet to experiment with such models.

2.4.2.5. A Large-Scale Terrain Model

The simple observation that local minima (valleys) should be smoother at all scales has led us to develop a novel terrain model, which turns out to be useful for representing terrains on very large scales. Over large distances heterogeneity in topography is salient: plains, rolling hills, foothills, and alpine areas might all be present in a large area of terrain. This kind of variety is not present in naive fBm, which again is designed to be a homogeneous and isotropic function. Such heterogeneity in a fractal terrain model is useful, however, not only as a novel type of terrain model in its own right, but also in realistic modelling of terrains on planetary scales. Such capability is required to realize our goal of situating our realistic landscape scenes in a coherent, global context, eventually to be explored interactively, perhaps in a VR (virtual reality) setting.

A surprisingly simple variation of the fBm functions described above allows us to generate highly heterogeneous terrains, which are more appropriate to very large scale terrain modelling than prior fractal terrain models. This model has applications in both planetary modelling (see Plate 6.4.3) and more local terrain modelling (Plates 6.2.8 and 6.2.9).

2.4.2.5.1. The Algorithm

In the above model, we made the fractal statistics of terrain models a function of altitude and spatial position. The observation that local minima (valleys) on all scales should fill up with debris and thereby become smoother than local maxima led to the following construction, in which the amplitudes of subsequent (higher) frequencies $v_i = N(\vec{f}_i)$ in the spectral sum are weighted by the amplitude of the previous (lower) frequency v_{i−1}:

$$A_i = A_{i-1} + \frac{F(v_{i-1})\, v_i}{f_i^{\beta}}$$

where F(v) is a displaced, scaled, and clamped linear function of v:

$$F(v) = \begin{cases} 0 & v < 0 \\ 1 & v > c_c \\ v/c_c & \text{otherwise} \end{cases}$$

The dynamics of this spectral construction is perhaps more clear in pseudo-code:

Terrain( vp, H, lacunarity, octaves, offset, threshold )
    signal := 0.5 * ( Noise( vp ) + offset )             /* scale to range of 1 */
    result := signal
    for octave := 2 to octaves
        vp := vp * lacunarity
        amplitude := pow( lacunarity, -H * octave )
        weight := clamp( signal/threshold, 0.0, 1.0 )    /* clamp to interval [0,1] */
        signal := weight * 0.5 * ( Noise( vp ) + offset )
        result := result + amplitude * signal
    return ( result )

Below is C code implementing this algorithm. Again, the point is not to show the details of the code, but rather its brevity.

/* a highly heterogeneous fractal terrain function */
double
Terrain( point, spectral_exp, lacunarity, octaves, offset, threshold )
Vector point;
double spectral_exp, lacunarity, octaves, offset, threshold;
{
        register double i, result, amplitude, frequency=1.0;
        register double signal, weight;

        signal = result = 0.5*(offset + Noise3( point ));
        for ( i=octaves-1; i>0.0; i-- ) {
                point.x *= lacunarity;
                point.y *= lacunarity;
                point.z *= lacunarity;
                frequency *= lacunarity;
                amplitude = pow( frequency, -spectral_exp );
                if ( i < 1.0 )  amplitude *= i;    /* for octaves remainder */
                /* weight successive contributions by previous signal */
                weight = signal / threshold;
                if ( weight > 1.0 )  weight = 1.0;
                if ( weight < 0.0 )  weight = 0.0;
                signal = weight * 0.5*(offset + Noise3( point ));
                result += amplitude * signal;
                if ( amplitude<VERY_SMALL || weight<VERY_SMALL )  break;
        }
        return( result );
} /* Terrain() */

Comparing this code to that for homogeneous fBm, as seen in Section 2.3.2.1, we see that the elegance of the fBm model has not been too severely compromised, yet we have gained much complexity in the statistical behavior of the resulting fractal function.

2.4.2.5.2. Discussion

The operative feature of this algorithm is that subsequent contributions of higher frequencies are weighted by the (displaced) value of previous, lower frequencies. Thus, once a local minimum has been established at a particular frequency, all higher-frequency contributions will be small-to-zero, keeping that area "smooth" at smaller scales. This is equivalent to a monotonically-decreasing power spectrum, or a monotonically-increasing spectral exponent. The fractal dimension of the function will never exceed that specified by the spectral exponent β, but can be lower-to-flat at any scale. Thus we have stochastically modulated the fractal dimension, upper crossover scale, and bandwidth of the spectral sum.

Note that this algorithm is fundamentally flawed, as an implementation of the stated intention. That idea was to smooth out local minima at all scales, in imitation of valleys filling up with dirt, mud, etc. This algorithm smoothes out areas where there is a local minimum in a higher frequency in the current spectral sum, not in the resulting function. The minima in higher frequencies may occur where the local slope of the existing terrain is arbitrarily steep, due to the previous contributions of lower frequencies. Correct implementation of the idea would require knowing the local derivative of the sum-to-date. This should be implemented in the future. The current model, flawed though it may be, has nevertheless proved a powerful descriptor of natural terrains.

2.4.2.5.3. Results

This model is useful for modelling very large-scale terrain, as seen in Plate 6.2.8: it can generate plains in some areas, rolling hills in others, and jagged alpine mountains in still others. Again, this better mimics the appearance of Earthly terrains than the homogeneous fBm usually used for fractal terrain models, but with little added complexity in the generation function or compromise to the fundamental elegance of the naive fBm model. One way to think of this model is that fBm represents the simplest possible form of complexity — a precisely defined spectral sum — whereas this model bumps up the complexity by a notch: there is a complex, stochastic definition of the spectral sum itself.

In Plate 6.4.3 we have used the model in the creation of continents on an Earth-like planet. Note that the fractal dimension, or "wigglyness", of the coastline varies widely from place to place; also, we have areas of the continents which are rather bland and featureless, along with highly complex and detailed behavior in other areas.

Plate 6.2.9 illustrates the utility of the model in creating more interesting terrains on the local scale. Here we have used $1 - \left| N(\vec{p}) \right|$ as the basis function. Taking the absolute value of the (cubic polynomial) noise function introduces discontinuities in the derivative of the surface, which create ridges at all scales.

2.5 Dynamic Erosion Models

The terrain models described above were designed as attempts to create convincing emulations of eroded landscapes at terrain generation-time. They are fairly successful in increasing both the realism of fBm-based terrains and the repertoire of landforms which can be so represented. Yet there remain many aspects of erosion features in Nature which are not addressed by these models: stream and river networks, alluvial fans, deltas, talus slopes, lakes, glacial valleys, cirques, moraines, etc. The most difficult of these to model are the globally coherent features, notably the drainage networks associated with rivers and glaciers.

Such globally-coherent features require global communication, or context sensitivity. Without it, problems such as avoiding self-intersection of riverbeds cannot be reliably solved. Context-sensitive problems are notoriously difficult to solve in a computationally efficient manner. In this section we offer a preliminary presentation of some dynamic erosion processes, designed to be used on existing terrain models in a post-processing pass to add some of these challenging but important morphological features. One of these models can lay no claim to computational efficiency; the other two are reasonably efficient in generating the desired features. Unfortunately it is, of course, the former algorithm and its intended results which are of by far the greatest interest.

Of all the areas addressed in this dissertation, this is the least developed. Work is very much underway in refining and extending the models and results discussed here. Thus we disclaim this as a preliminary presentation of preliminary results. The promise of this research is great, as are the difficulties faced (primarily in obtaining a stable, efficient numerical solution to the nonlinear partial differential equations involved in modelling fluid transport, and their boundary conditions). Work in the area will proceed into the foreseeable future.


Our erosive processes fall into three categories: fluvial erosion, thermal weathering, and diffusive erosion. Fluvial erosion is that caused by running water. What we term "thermal weathering" subsumes the non-fluvial processes which cause rock to flake off steep inclines and form talus slopes at their bases. Diffusive erosion includes processes which transport substrate laterally, without regard to local slope (i.e., whether uphill or downhill). In this section we will illuminate these erosion simulation algorithms.

2.5.1. Fluvial Erosion

The fluvial erosion model involves depositing water ("rain") on vertices of the height field and allowing the water, along with sediment suspended therein, to move to lower neighboring vertices. The erosive power of a given amount of water is a function of its volume, the local slope of its transport path and the amount of sediment already carried in the water; this function is known as the erosion law. [1, 60]

The fluvial erosion model is implemented by associating with each vertex v at time t an altitude $a_t^v$, a volume of water $w_t^v$, and an amount of sediment $s_t^v$ suspended in the water. At each time step, we pass excess water and suspended sediment from v to each neighboring vertex u. The amount of water passed, Δw, is defined as:

$$\Delta w = \min\!\left( w_t^v,\; (w_t^v + a_t^v) - (w_t^u + a_t^u) \right)$$

If Δw is less than or equal to zero, we simply allow a fraction of the sediment suspended in the water at v to be deposited at v:

$$a_{t+1}^v = a_t^v + K_d\, s_t^v$$
$$s_{t+1}^v = (1 - K_d)\, s_t^v$$

This amounts to asymptotic settling out of suspended sediment in standing water. Otherwise, we set

$$w_{t+1}^v = w_t^v - \Delta w$$
$$w_{t+1}^u = w_t^u + \Delta w$$
$$C_s = K_c\, \Delta w$$

Here, $C_s$ is the sediment capacity of Δw. When passing sediment from v to u, we remove at most this amount of sediment from $s_t^v$ and add it to $s_{t+1}^u$. If $C_s$ is greater than $s_t^v$, a fraction of the difference is subtracted from $a_t^v$ and is added to $s_{t+1}^u$, which constitutes the erosion of soil from v. Finally, we allow a fraction of the sediment remaining at v to be deposited as above. Thus, if $s_t^v \ge C_s$, we set:

$$s_{t+1}^u = s_t^u + C_s$$
$$a_{t+1}^v = a_t^v + K_d\left( s_t^v - C_s \right)$$
$$s_{t+1}^v = (1 - K_d)\left( s_t^v - C_s \right)$$

Otherwise:


$$s_{t+1}^u = s_t^u + s_t^v + K_s\left( C_s - s_t^v \right)$$
$$a_{t+1}^v = a_t^v - K_s\left( C_s - s_t^v \right)$$
$$s_{t+1}^v = 0$$

The constants $K_c$, $K_d$, and $K_s$ are, respectively, the sediment capacity constant, the deposition constant and the substrate softness constant. $K_c$ specifies the maximum amount of sediment which may be suspended in a unit of water. $K_s$ specifies the softness of the substrate and is used to control the rate at which "bedrock" is converted to sediment. $K_d$ specifies the rate at which suspended sediment settles out of a unit of water and is added to the altitude of a vertex.
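To make the bookkeeping concrete, here is a minimal C sketch (ours, not the implementation used for the plates) of a single exchange between a vertex v and one neighboring vertex u, following the update rules above. The Vertex structure and the function name are hypothetical, and the loop over all neighbors, the rainfall, and the time stepping are left to the surrounding simulation.

/* one fluvial-erosion exchange from vertex v to a neighbor u */
typedef struct { double altitude, water, sediment; } Vertex;

void FluvialExchange( Vertex *v, Vertex *u, double Kc, double Kd, double Ks )
{
        double dw, Cs, excess;

        /* water passed from v to u: at most v's water, driven by the
           difference in water-surface heights */
        dw = ( v->water + v->altitude ) - ( u->water + u->altitude );
        if ( dw > v->water )  dw = v->water;

        if ( dw <= 0.0 ) {
                /* standing water: suspended sediment settles out asymptotically */
                v->altitude += Kd * v->sediment;
                v->sediment  = ( 1.0 - Kd ) * v->sediment;
                return;
        }
        /* move the water and compute its sediment capacity */
        v->water -= dw;
        u->water += dw;
        Cs = Kc * dw;

        if ( v->sediment >= Cs ) {
                /* enough suspended sediment to fill the capacity */
                u->sediment += Cs;
                v->altitude += Kd * ( v->sediment - Cs );
                v->sediment  = ( 1.0 - Kd ) * ( v->sediment - Cs );
        }
        else {
                /* capacity exceeds suspended sediment: erode substrate at v */
                excess = Cs - v->sediment;
                u->sediment += v->sediment + Ks * excess;
                v->altitude -= Ks * excess;
                v->sediment  = 0.0;
        }
}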

Through the above process water and, more importantly, substrate mass from higher points on the landscape is transported to and deposited in lower areas. This movement constitutes the communication necessary for modelling the global process of erosion. Unfortunately, it also involves finding a numerical solution to some challenging nonlinear partial differential equations, and current work is focusing on finding a reasonable solution thereto. (As the author's primary area of interest and expertise is computer graphics, not numerical analysis, the involvement of other researchers in this project is essential.)

The resulting features bear reasonable resemblance to natural erosion patterns (see Plate 6.2.11). Ongoing research is concentrating on constructing a more sophisticated, physically accurate model, based on the erosion laws of the literature of fluvial geomorphology. [1, 60]

Plates 6.2.10 and 6.2.11 show a 200 × 200 terrain patch before and after 2000 time steps of fluvial and thermal erosion. The erosion simulation required approximately 4 hours of CPU time on a Silicon Graphics Iris 4D/70 workstation. In this simulation, $K_c = 5.0$, $K_d = 0.1$, and $K_s = 0.3$. Note the gullies, confluences, and alluvial fans that have appeared in the eroded patch, which is rendered as a dry wash, i.e., without water present.

The uneroded patch shown in Plate 6.2.10 demonstrates a reasonable first approximation to an eroded landscape with a central stream bed. The uneroded patch was created by weighting the addition of always-positive noise values by the distance d of the point from the diagonal of the patch, which diagonal is also "higher" at the far end. The stream bed is made non-linear by the addition of an fBm offset to the distance d. This patch demonstrates the flexibility of the noise synthesis method for terrain modelling; it did not require much time to construct.

The distribution of rainfall on landscapes in nature is strongly influenced by adiabatics, or the behavior of moisture-laden air as it rises and descends. As air rises, it cools and the relative humidity rises. When the relative humidity becomes great enough, clouds form; when the clouds become sufficiently dense, precipitation occurs. Wind blowing over mountains causes air to rise as it passes over the mountains, thus precipitation is much greater in the vicinity of mountain peaks. It is easy to include a rough approximation of adiabatic effects in our erosion model by making precipitation a linear function of altitude. This has a significant effect on the erosion patterns produced. Obviously, some complexity could be introduced by attempting to model prevailing winds, rain shadows, seasonal variations, etc.
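A minimal sketch of such an altitude-dependent rainfall rule follows; the function name and the base_rain and gain parameters are hypothetical, and altitude is assumed to be normalized to [0, 1].

double Rainfall( double altitude, double base_rain, double gain )
{
        /* precipitation increases linearly with altitude,
           a crude stand-in for adiabatic cooling of rising air */
        return base_rain * ( 1.0 + gain * altitude );
}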


In our use of the fluvial erosion model, we have simply allowed a fixed amount of rain (approximately one one-thousandth of the height of the vertex) to fall at regular intervals (approximately every sixty to one hundred time steps). Mandelbrot [31] has shown that records of flooding of the Nile river show a 1/f noise distribution, i.e., large floods happen with low frequency. A correspondingly noisy distribution in the rainfall rate would constitute a more realistic simulation of nature. It is probable that it would have a long-term effect on the erosion features created: hundred-year and thousand-year floods may well, after all, exert a greater morphogenic influence than all the intervening years of less dramatic erosion. This is an idea that has yet to be explored.

2.5.2. Thermal Weathering

Another erosion process we model is "thermal weathering", which is our catch-all term for any process that loosens substrate, which subsequently falls down to pile up at the bottom of an incline. The thermal weathering process creates talus slopes of uniform angle. Thermal weathering is a kind of relaxation process and is both simple and fast. At each time step t+1, we compare the difference between the altitude $a_t^v$ at the previous time step t of each vertex v and that of its neighbors u to the (global) constant talus slope* T. If the difference exceeds the talus slope, we move some fixed percentage $c_t$ of the difference onto the neighbor:

$$a_{t+1}^u = \begin{cases} a_t^u + c_t\left( a_t^v - a_t^u - T \right) & a_t^v - a_t^u > T \\ a_t^u & a_t^v - a_t^u \le T \end{cases}$$

With care taken to assure the equitable distribution of talus material to all neighboring vertices, the slope to the neighboring vertices asymptotically approaches the talus angle.
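A minimal C sketch of one such relaxation step between a vertex and a single neighbor might look like the following; the function name is ours, and the matching removal of material from the upper vertex, implied by "moving" the excess, is made explicit here.

void ThermalWeather( double *a_v, double *a_u, double T, double ct )
{
        double excess = *a_v - *a_u - T;   /* height difference beyond the talus slope */

        if ( excess > 0.0 ) {
                /* slope exceeds the angle of repose: move a fraction downhill */
                *a_v -= ct * excess;
                *a_u += ct * excess;
        }
        /* otherwise the slope is stable and nothing moves */
}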

Plates 6.2.12 and 6.2.13 show a patch created with non-uniform lacunarity before and after slumping, or thermal weathering. This process has created a rough approximation of sand dunes.

2.5.3. Diffusive Erosion

Diffusive erosion, also known as dry creep, essentially amounts to a progressive spatial low-pass filter applied over time. The dynamic of the process, caused in nature by bioturbation (as by the footfalls of grazing animals) and sediment transport by raindrop splashing, consists of the rounding-off of sharp features, both peaks and valleys. It is trivially implemented by transporting a quantity of substrate in each time step, as a function of local slope:

$$a_{t+1}^u = a_t^u + c_d\left( a_t^v - a_t^u \right)$$

where $c_d$ is the coefficient of diffusion. This process is exactly equivalent to repeated convolution with a low-pass filter kernel, as the diffusive transport mechanism may move material uphill as well as downhill. [3]
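A minimal C sketch of one diffusive exchange follows, under the assumption that the quantity transported is removed from one vertex and added to its neighbor; the function name is ours.

void DiffuseStep( double *a_v, double *a_u, double cd )
{
        double transfer = cd * ( *a_v - *a_u );   /* proportional to local slope */

        /* material moves toward the lower vertex (or uphill when the
           difference is negative), rounding off both peaks and valleys */
        *a_v -= transfer;
        *a_u += transfer;
}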

* The talus slope or angle of repose is a fixed angle or slope which is characteristic of rubble of a given size, shape, and composition. Piles of rubble with sides steeper than this will spontaneously collapse under the influence of gravity; slopes at or below this slope are stable.


2.5.4. Discussion of Erosion Models

Again, this write-up represents a preliminary exposition of preliminary results. Many extensions of this model have already been implemented, but are not yet sufficiently developed to merit careful documentation. The models represent a rich area for future research. We now mention a few of the foreseen (and, in some cases, already realized) extensions, modifications, and applications of this work.

One extension accounts for the differing hardnesses of bedrock, silt, and talus. This is accomplished by adding appropriate fields to the vertex data structure and making the simplifying assumption that silt is on top of talus, which is in turn on top of bedrock. Hardness increases from silt to bedrock. Another simple and interesting extension is to modulate the hardness of the bedrock: this can be done in strata, as in sedimentary rock, or with a space-filling fractal field of hardness values. Either can be easily implemented with a procedural solid texture, as described in Chapter 5.
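As a sketch of the strata variant, a hardness field might be built from the same Noise3() basis used elsewhere in these notes; the function name, the strata_freq and perturb parameters, and the output range are our assumptions.

/* requires sin() from the math library */
double RockHardness( Vector point, double strata_freq, double perturb )
{
        double strata;

        /* horizontal strata as a function of altitude (z), with the
           boundaries fractally displaced by noise */
        strata = 0.5 * ( 1.0 + sin( strata_freq * ( point.z + perturb * Noise3( point ) ) ) );
        return 0.2 + 0.8 * strata;      /* 1.0 == hardest rock */
}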

Creating animations of erosion will be trivial, as it is a dynamic process which need only be imaged at every n time steps and played back in sequence, to yield a visualization of the process acting through time. It is hoped that the process of orogenesis (mountain building) and perhaps the creation of features like the Grand Canyon could be produced using a combination of the techniques mentioned above.

Other problems, such as the realistic rendering of landscapes including running water, dry washes, deltas, alluvial fans, etc., may keep researchers occupied for some time to come, as they do not seem to admit of obvious, simple solutions.

2.6 Conclusions

We have demonstrated some novel methods for generating fractal terrain models of enhanced realism. These models include terrain generation-time models of erosion features and heterogeneity that enhance their resemblance to natural terrains. The noise synthesis modelling method derives most of its power from its point-evaluated, procedural nature. It can be utilized in procedural rendering of terrain models with adaptive level of detail, as described in Chapter 3.

Our generation-time erosion models have been supplemented by dynamic erosion models which create globally context-sensitive features which are difficult to include in generation-time algorithms. These erosion models represent work in progress, and offer rich potential for future research.

The net result of this work has been a substantial improvement in the fidelity and variety of "fractal forgeries of Nature" we can create.


3. Procedural Textures

3.1 Introduction

In this chapter we will describe a very small part of our work in developing procedural textures as models of natural phenomena. As these functions are developed and evaluated in an entirely subjective manner, there is relatively little of scientific or technical depth here. These functions are, however, critical to the visual success of our images. As one of the foremost practitioners in this area, the author has been called upon to share his methodology [38, 40, 41] that others may profit from whatever insights may be obtained from such an exposition. The main point of interest, in the context of this document, is the overall strategy of encapsulating complex visual behavior reflecting that found in Nature, in comparatively terse functions or algorithms — the process of proceduralism.

3.1.1. Proceduralism as Paradigm

Proceduralism is a powerful paradigm for image synthesis. In a procedural approach, rather than explicitly specifying and storing all the complex details of a scene or sequence, we abstract them into a function or an algorithm (i.e., a procedure) and evaluate that procedure when and where needed, i.e., via lazy evaluation. We gain a savings in storage space, as the details are no longer explicitly specified but rather are implicit in the procedure, and we shift the time requirements for specification of details from the programmer to the computer. We also gain the power of parametric control with its conceptual abstraction (e.g., a number which makes mountains "rougher" or "smoother") and the serendipity inherent in an at-least-semiautonomous process: we are often pleasantly surprised by unexpected behaviors, particularly in stochastic procedures.

Some aspects of image synthesis are perforce procedural, i.e., they can't practically be evaluated in advance: view-dependent specular shading and atmospheric effects, for example. It is implausible to evaluate and store a realistic model of atmospheric scattering in advance of rendering; rather, the atmospheric effects are more readily evaluated along the interval between points of interest during the rendering process.

In an essential sense, anything done with a computer can be thought of as being "procedural", but we in computer graphics have a somewhat more specific idea of what constitutes "proceduralism", though the term may defy exact definition. It may (or may not) be safe to say that when we computer graphicists think "procedural" we are usually thinking of "modelling that is done at rendering-time", as opposed to being done in a previous, separate modelling step.

The process of developing a procedural model embodies the basic loop of scientific discovery [47, 49]: a formal model is posited, observations and comparisons of the model and Nature are made, the model is refined accordingly, and more observations are made. The process of observation and refinement proceeds in an iterative loop. Hanrahan has observed [13] that a fundamental difference between the way this loop proceeds in traditional sciences versus the development of a procedural model for computer graphics is the time required for a single iteration: for a traditional laboratory scientist this period may be on the order of years or even a lifetime; when developing a model for computer graphics it is typically more like minutes.

3.1.2. Proceduralism and Fractal Models

We noted in chapter 1 that fractals are inherently procedural, as they are specified by recursive or iterative algorithms, and that computer graphics has always been required to grasp the complexity of their behavior. Thus it is not surprising that much of what is visually complex in computer graphics has fractal underpinnings. There is simply no known way of specifying complexity more simply.

Our work has largely involved developing fractal models of natural phenomena for computer graphics. Being very taken with this concept of simple specification and encoding of complexity, proceduralism has been our chosen modus operandi. There are powerful capabilities to be gained through this approach.

Fractals contain potentially unlimited high and low spatial frequency content. The former can play merry hell with the point sampling schemes of computer graphics, due to the effects described by the Nyquist sampling theorem. The high frequency content can, and should, be parameterized in procedural fractal models. We should be able to vary the high frequency content with the screen resolution of the rendering, e.g., higher-resolution images need more small details. Optimally, we would also like to be able to vary the frequency content adaptively in rendering, as the perspective projection makes feature sizes on the screen vary as the inverse square of their distance from the eye.

Procedural approaches can readily accommodate these needs, simply by parameterizing the number of iterations in the fractal construction loop, and assigning that parameter appropriate values at the various locations in the scene. As accomplishing this represents work currently in progress, we will not describe it here, but refer the interested reader to the literature. [50]
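As a sketch of one way such a parameterization might work, the following hypothetical routine chooses an octave count from the distance to the eye, assuming a perspective camera, a lacunarity of 2, and a known size for the largest terrain features; none of these names appear elsewhere in these notes.

#include <math.h>

double AdaptiveOctaves( double distance, double fov, int image_width, double base_feature_size )
{
        double pixel_size, octaves;

        /* approximate footprint of one pixel at the given distance */
        pixel_size = 2.0 * distance * tan( fov * 0.5 ) / (double) image_width;
        /* with lacunarity 2, each octave halves feature size; stop adding
           octaves once features shrink below the pixel footprint */
        octaves = log( base_feature_size / pixel_size ) / log( 2.0 );
        return ( octaves > 0.0 ) ? octaves : 0.0;
}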

3.1.3. Procedural Solid Textures

Perhaps the best-known form of proceduralism in computer graphics is in procedural textures, as introduced in 1985 by Gardner [12], Peachey [51], and Perlin [52]. A procedural, or "solid", texture, rather than existing a priori as a two-dimensional image which must be mapped or "wallpapered" onto surfaces, is implemented as a function defined throughout three-space and evaluated when and where needed, e.g., on visible surfaces of objects. Such functions can also be evaluated throughout a volume, as Perlin has suggested in his "Hypertextures" paper [53], but this remains an exotic application — in fact, these wonderful procedural textures are so computationally expensive that many practitioners may hesitate to use them even on surfaces, in everyday production image synthesis.

Currently it takes on the order of hours for several high performance workstations, working in parallel, to create a single high resolution image using the (sometimes particularly elaborate) procedural textures described below. Inevitably (and hopefully within five to ten years) we will have improved the efficiency of these procedures, and have sufficient computational power at our disposal, to evaluate such models in real time. Then we will have the capability of interactively exploring the worlds we create, in a virtual reality setting.


3.1.4. Proceduralism and Parallel Computing

How do we currently overcome the high cost of evaluating these procedures? In a word, parallelism. Perlin uses an AT&T Pixel Machine to evaluate his hypertextures [53]; Sims has harnessed a Connection Machine 2 to evaluate his genetically-derived texture expressions [62, 63]. These are both SIMD (single-instruction, multiple-data) implementations, the underlying architectures allowing access to massive, if inflexible, parallel processing power. Our work has been made possible by C-Linda [5], a minimal extension of the C programming language (by exactly six statements) which allows transparent access to multiple processors in a MIMD (multiple-instruction, multiple-data) parallel environment.

3.1.5. Overview

Below is our exposition on procedural textures. It consists largely of segments from notes for courses we have participated in, on the topic. [38, 40, 41] It is a relatively small segment thereof, because this area is more a matter of practice than of research, and we feel that the basic flavor of such work can be conveyed by a small number of examples. It may be seen as a continuation of the discussion of procedural methods begun in chapter 2, the main difference being that now we are interpreting the functions as surface textures, rather than as height fields. The examples we give come from our current efforts at planetary modelling, culminating in a model of an Earth-like planet which incorporates many of the elements described in this dissertation, upon the face of which we seek to situate our more local renderings. This model marks an important step towards the creation of a viable "virtual world", to be explored with future interactive technologies.

3.2 Planetary-Scale Clouds

We have developed a reasonable procedural model of the coloring of the surface of the Earth (presented in section 3.6) and an atmosphere in which to ensconce the planet. But the most salient visual features of the Earth, as seen from space, are the bright white clouds, and the wonderful shapes they form in the global circulation of the atmosphere. We need, then, to develop a model of global cloud cover. While the model we present is inherently two-dimensional (though it could conceivably be implemented in three dimensions as a hypertexture [53]) and has visually obvious shortcomings,* it is very simple and gives a reasonable first approximation to nature. We use it as a transparency map applied to a sphere concentric with, and of radius slightly greater than, the earth.

We start with a special variety of fBm. We have found that an fBm constructed using the noise shown in Plate 6.2.7b as a basis function has a stringy, wispy character, somewhat more cloud-like than fBm constructed using the conventional Perlin noise function — compare Plates 6.2.7c and d. We refer to this noise as VLNoise(), for "variable lacunarity noise". It is constructed by composition of noise functions: a vector-valued noise function, DNoise(), is used to displace the texture-space coordinates of the argument vector to a scalar-valued noise function. Again, this has the net effect of expanding the band limits of the noise function, while also changing its visual character significantly.

* The major shortcoming of this model, as well as of every other procedural model involving turbulent fluid flow, is the lack of vortices. Short of full dynamic solution of the Navier-Stokes equation [75], there is currently no good, continuous model of fractal vortices in the computer graphics literature.


Note how simple the pseudo-code for this is:

VLNoise( vp )
    return ( Noise( vp + DNoise( vp + 0.5 ) ) )

Note that the argument vector vp, when passed to DNoise(), is displaced by 0.5 in all axes, to intentionally misregister the underlying integer lattices of the two noise functions.

C code implementing this function looks like:

double VLNoise( point, scale )
Vector point;
double scale;
{
        Vector temp, DNoise();
        double Noise();

        temp = DNoise( point );
        temp.x += scale * point.x;
        temp.y += scale * point.y;
        temp.z += scale * point.z;
        return ( Noise(temp) );
} /* VLNoise() */

Here we employ the variable scale to modulate the magnitude of the texture-space distortion.

For simulating the swirling and streaming of the large-scale flow of global weather systems, we can apply the same concept used in the construction of VLNoise(), that of distorting texture space by composition of noise functions. In this case, we add vector-valued noise to the evaluation-point vector passed to the fBm function VLfBm(), which in turn creates fBm from the VLNoise() basis function. The magnitude of the texture-space distortion is greater than that used above; the exact value used being determined simply by qualitative evaluation of the results.

Pseudo-code for the function might look like:

Planet_Clouds( vp, distortion, rescale, cutoff )
    vq := distortion * DNoise( vp )           /* get scaled distortion vector */
    result := VLfBm( rescale * (vp + vq) )    /* get fBm cloud texture, in rescaled, distorted texture space */
    if result < cutoff                        /* clamp the result, to provide cloud-free areas */
        return ( 0 )
    else                                      /* return a normalized value */
        return ( (result - cutoff) / ( 1 - cutoff ) )

The weather-system texture is generated by the following C function:

double Planet_Clouds( point, distortion, rescale, H, lacunarity, octaves, offset, cutoff )
Vector point;
double distortion, rescale, H, lacunarity, octaves, offset, cutoff;
{
        Vector p, s, DNoise();
        double result, fBm(), VLfBm();

        /* get "distortion" vector */
        p = DNoise( point );
        /* scale distortion */
        SCALAR_MULT( distortion, p );
        /* insert distortion */
        s = VADD( p, point );
        SCALAR_MULT( rescale, s );
        result = VLfBm( s, H, lacunarity, octaves );
        /* adjust zero crossing (where the clouds disappear) */
        result += offset;
        if ( result < 0.0 )
                return ( 0.0 );
        else    /* normalize density */
                return( result / (1.0 + offset) );
} /* Planet_Clouds() */

This function, called with the argument vector (3.0, 0.7, 2.0, 9.0, 3.0, 0.0), created the clouds over the planet seen in Plate 6.4.3. The computed value in result is used to represent the density of the clouds. The cloud texture modulates the transparency of the surface to which it is applied, in a manner similar to the fog in Plate 6.3.1, and it should have the ability to cast shadows, as seen in Plate 6.3.2. This requires evaluation of the texture for shadow rays intersecting the cloud-sphere and attenuation of illumination by the computed cloud density.

This stretched cloud texture is also useful for more prosaic landscape renderings. [43, 44]

The texture-space stretching described above varies smoothly, with the value of DNoise(). It is also possible to apply an fBm-valued distortion; indeed this might seem to be a logical way to emulate the fractal character of turbulence. Plate 6.3.3 shows this in practice. Unfortunately, the result looks more like raw cotton than turbulent fluid flow, again due to the lack of vortices.

3.3 A Cyclonic Storm

One of the most salient features of the Earth's clouds at the global scale is the spiral eddy structure of the cyclonic and anticyclonic flows. Models of cyclonic storms are under development, as illustrated in Plate 6.3.4. A number of features similar to this (but less exaggerated), scattered with a Poisson disk distribution through the cloud texture described above, is foreseen to be a plausible approach to procedural modelling of atmospheric eddies.

Below is a function designed not to create fractal eddies, but rather one large cyclonic storm with fractal cloud distributions that vary with scale. It is quite preliminary and ad hoc, but marks an interesting experiment in the modelling of natural phenomena with procedural texture.

void Cyclone( texture, intersect, colour, max_radius, twist, scale, offset, omega, octaves )
Vector texture, intersect;      /* texture & intersection coords */
Colour *colour;                 /* surface color */
/* remaining arguments specified in the vector given in the text below */
double max_radius, twist, scale, offset, omega, octaves;
{
        double radius, dist, angle, sine, cosine, eye_weight, value;
        Vector point;

        /* rotate hit point to "cyclone space" */
        radius = sqrt( intersect.x*intersect.x + intersect.y*intersect.y );
        if ( radius < max_radius ) {    /* inside of cyclone */
                /* invert distance from center */
                dist = max_radius - radius;
                dist *= dist * dist;
                angle = PI + twist*TWOPI*(max_radius-dist)/max_radius;
                sine = sin( angle );
                cosine = cos( angle );
                point.x = texture.x*cosine - texture.y*sine;
                point.y = texture.x*sine   + texture.y*cosine;
                point.z = texture.z;
                /* subtract out "eye" of storm */
                if ( radius < 0.1*max_radius ) {        /* if in "eye" */
                        eye_weight = (.1*max_radius-radius)*10.;  /* normalize */
                        eye_weight = 1.- eye_weight;    /* invert */
                        eye_weight *= eye_weight;       /* make nonlinear */
                        eye_weight *= eye_weight;       /* make nonlinear */
                }
                else    eye_weight = 1.;        /* not in "eye" */
        }
        else {
                point = texture;        /* not in storm radius */
                /* the original listing leaves eye_weight unset here;
                   1.0 assumed, so the undistorted clouds continue */
                eye_weight = 1.;
        }

        if ( eye_weight ) {     /* if in "storm" area */
                value = eye_weight *
                        (offset + scale*VLfBm( point, omega, 2., octaves ));
                if ( value < 0. )  value = -value;
        }
        else    value = 0.;

        /* thin the (default == 1) density of the clouds */
        colour->red   *= value;
        colour->green *= value;
        colour->blue  *= value;
} /* Cyclone() */

This function, called with the argument vector (1.0, 0.5, 0.7, 0.5, 0.675, 4.0, 0.7), created the cyclone shown in Plate 6.3.4.

Note that in Plate 6.3.4, the large-scale features are distorted by stretching, while the small-scale cloud features are not. As the dynamics of the processes governing cloud formation and dissipation vary with scale, a single fractal description is not sufficient. (Our model may be seen as a crude first approximation to viscous damping in turbulent flow.) The particular formulation used in Plate 6.3.4 is indicated by observations of nature (see Kelley [20] for striking photographs of the Earth's weather systems, seen from space). Incorporating variation of features with scale in our global cloud models may increase their realism; this is currently under investigation. Clearly much background work could be done to link the statistical distribution of our clouds, at a variety of scales, to that of clouds in nature.

3.4 Venus

Venus is a planet which can be modelled remarkably well with a procedural texture. Venus has a particularly simple, fractal appearance: its primary markings are the huge streaks in the clouds, as distorted by the Coriolis effect. The Coriolis effect amounts to a twist of the clouds as the square of the radius from the axis of rotation; see Plate 6.3.5. The following function creates the entire effect when applied to a pale yellow sphere.

Coriolis( texture, intersect, colour, scale, twist, offset, omega, octaves )
Vector texture, intersect;
Colour *colour;
double scale, twist, offset, omega, octaves;
{
        double radius_sq, angle, sine, cosine, value;
        Vector point;

        radius_sq = intersect.x*intersect.x + intersect.y*intersect.y;
        angle = twist*TWOPI*radius_sq;
        sine = sin( angle );
        cosine = cos( angle );
        point.x = texture.x*cosine - texture.y*sine;
        point.y = texture.x*sine   + texture.y*cosine;
        point.z = texture.z;
        value = offset + scale*VLfBm( point, omega, 2., octaves );
        if ( value < 0. )  value = -value;
        colour->red   *= value;
        colour->green *= value;
        colour->blue  *= value;
} /* Coriolis() */

3.5 Jupiter and Saturn

Gas giants such as Jupiter and Saturn are particularly easy to model, modulo the eddy structure in the clouds. For this we use a function much like Perlin's "marble" texture [52]. This function perturbs appropriately-colored horizontal strata, which represent the cloud bands on the planet. Enclosing a sphere so textured in a fairly dense, hazy atmosphere can give a good approximation to the appearance of these planets. The image in Plate 6.3.6 is the result of a few hours of work, emulating the Voyager image in Plate 6.3.7. Note that we can readily get a close approximation, but that the eddy structure that characterizes turbulent flow on all scales is conspicuously missing.
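A minimal sketch of such a banded, marble-like texture might look like the following; fBm() follows the calling convention used elsewhere in these notes, while the band_map[] color table and the assumption that the perturbed index lies roughly in [−1, 1] are ours.

Colour GasGiantBands( Vector point, double perturb, double H, double lacunarity, double octaves )
{
        double index;
        int    i;

        /* horizontal strata indexed by z, with the band boundaries
           displaced by fBm, much as in Perlin's "marble" texture */
        index = point.z + perturb * fBm( point, H, lacunarity, octaves );
        i = (int)( 128.0 * ( index + 1.0 ) );   /* map roughly [-1,1] into [0,255] */
        if ( i < 0 )    i = 0;
        if ( i > 255 )  i = 255;
        return band_map[i];     /* hypothetical 256-entry table of band colors */
}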

We can emulate Saturn-like rings using a disk concentric with the planet, with an appropriate texture applied (see Plate 6.3.8). Here we have used a one-dimensional fBm as a transparency map on the disk, indexed by radius and suitably rolled off at the appropriate inside and outside radius values. Again, the model is entirely stochastic and subjective. Any resemblance to Saturn's rings, or the gaps therein, is entirely fictitious.
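A sketch of such a ring-transparency map might read as follows; fBm() is again the routine used throughout these notes, while the function name and the roll-off width fade are hypothetical.

double RingTransparency( double radius, double r_inner, double r_outer, double fade,
                         double H, double lacunarity, double octaves )
{
        Vector p;
        double density, rolloff;

        if ( radius <= r_inner || radius >= r_outer )
                return 1.0;             /* fully transparent off the ring */
        /* index a one-dimensional fBm by radius alone */
        p.x = radius;  p.y = 0.0;  p.z = 0.0;
        density = 0.5 * ( 1.0 + fBm( p, H, lacunarity, octaves ) );
        /* roll the density off smoothly near the inner and outer radii */
        rolloff = 1.0;
        if ( radius - r_inner < fade )  rolloff = ( radius - r_inner ) / fade;
        if ( r_outer - radius < fade )  rolloff = ( r_outer - radius ) / fade;
        density *= rolloff;
        return 1.0 - density;           /* transparency is the complement of density */
}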

3.6 Terran Procedural Texture

The planet we geocentric humans are certain to be most interested in modelling is the Earth. In this section we develop such a model. Note that in our planetary-scale renderings the solid earth is represented, for reasons of rendering efficiency, with a smooth sphere to which an elaborate procedural texture is applied. There are two drawbacks to this approach, which might be ameliorated by using displacement maps in a non-raytracing renderer: the "mountains" cannot cast shadows, and they do not rise up through the atmosphere. The latter is a significant effect in the appearance of the Earth from space - high mountain peaks rise above a significant portion of the atmospheric haze and scattering. See Plates 36, 77 and 113 in Kelley [20] for a view from space of the island of Hawaii, and see how Mauna Kea rises from hazy sea level right through much of the atmosphere. (Cartographers are well aware of this effect, and its usefulness as an altitude cue in topographic maps. [17]) Modelling this effect is an area for future work; for a first approximation, the smooth sphere is acceptable.

We now describe the step-by-step development of a "terran" procedural texture, largely as an illustrative example of how such complex texture functions come into being. In the process, we hope to illuminate the source of the "parameter proliferation" that tends to plague complex procedural textures, as well as comprehensive and more-scientific models of natural phenomena.* Plate 6.3.9 documents various steps in the development. C code for the completed texture is provided at the end of the chapter.

3.6.1. Continents and Oceans

The first step in creating an earth is to create continents and oceans. This can be accomplished by quantizing a fractal (fBm) bump map as follows:

bump = VfBm( point );
if ( dot(bump, surface.normal) < threshold )
        surface.color = ocean_color;
else    surface.normal += bump;

where point is the ray/earth intersection point, VfBm() is a procedural vector-valued fBm, and threshold controls the "sea level". (Note that in our code fragments henceforth we assume that operators such as += are valid for vectors as well as scalars.) This quantization, with very simple blue/grey coloring, gives us the effect seen in Plate 6.3.9a.

3.6.2. Climatic Zones by Latitude

Next we provide a color lookup table to simulate climatic zones by latitude; see Plate 6.3.9b. Our goal is to have white polar caps, barren grey sub-Arctic zones blending into green, temperate-zone forests, which in turn blend into buff-colored desert sands representing equatorial deserts. (Note that this is not necessarily the most accurate color scheme for emulating the Earth, where the major deserts generally bracket green tropical equatorial zones, and are more ruddy than buff-colored.) The coloring is accomplished with a 256-entry color lookup table, which is indexed by the latitude of the ray/earth intersection point.
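A minimal sketch of such a latitude-indexed lookup, assuming a unit sphere centered at the origin and a hypothetical 256-entry climate_map[] table, might read:

Colour ClimateByLatitude( Vector intersect )
{
        int index;

        /* on a unit sphere, |z| runs from 0 at the equator to 1 at the poles */
        index = (int)( 255.0 * ABS( intersect.z ) );
        if ( index < 0 )    index = 0;
        if ( index > 255 )  index = 255;
        return climate_map[index];      /* buff near 0, through green and grey, white at 255 */
}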

3.6.3. Fractally Perturbing the Climatic Zones

This rough coloring-by-latitude is then fractally perturbed, as in Plate 6.3.9c. We accomplish this perturbation by adding a random component to the latitude value when determining the color, and taking into account the bump map, so that the "altitude" of the terrain may affect the climate. This can be accomplished with a code fragment similar to this:

index = point.z + c1*fBm(point) + c2*DOT(bump, surface.normal);
surface.color = colormap[index];

where fBm() is a scalar-valued procedural fBm routine and c1 and c2 are scaling parameters for adjusting the influence of latitude and terrain altitude. The dot product term represents the magnitude of the bump map in the direction normal to the surface; this quantity should be computed and stored prior to applying the bump map. Note that altitude and latitude represent two independent quantities that could be used as parameters to a two-dimensional color map; to date we have used only a one-dimensional color lookup table for simplicity.

* It is precisely the problem of managing this huge n-space of parameters that Sims' genetic algorithms [62, 63] so elegantly address.

Next we add an exponentiation parameter to the value of index computed above, to allow us to "drive back the glaciers" and expand the deserts to a favorable balance, as in Plate 6.3.9d.

3.6.4. Adding Depth to the Oceans

We now modify the oceans, adjusting the sea level for a pleasing coastline and making the color a function of "depth" to highlight shallow waters along the coastlines (Plate 6.3.9e). Depth of the water is calculated in exactly the same way as the "altitude" of the mountains, i.e., as the magnitude of the bump vector in the direction of the surface normal. This depth value is used to darken the blue of the water almost to black in the deep areas; the blue provided by atmospheric scattering will bring the color up to a realistic value. Note that, while we have not yet implemented it, it would also be desirable to modify the surface properties of the earth-sphere object in the ocean areas, specifically the specular highlight, as this significantly affects the appearance of the Earth from space (again, see Kelley [20]).
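A minimal sketch of this depth-based darkening, with hypothetical deep_scale and max_darken parameters, might read:

void DarkenByDepth( Colour *colour, double depth, double deep_scale, double max_darken )
{
        double darken;

        darken = deep_scale * depth;                      /* deeper water darkens more */
        if ( darken > max_darken )  darken = max_darken;  /* limit how dark it gets */
        colour->red   -= darken * colour->red;
        colour->green -= darken * colour->green;
        colour->blue  -= darken * colour->blue;
}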

3.6.5. Increasing Realism by Fractal Color Perturbation

Finally, we note that the "desert" areas about the equator in Plate 6.3.9e are quite flat and unrealistic in appearance. The Earth, by contrast, features all manner of random fractal mottling of color. By interpreting a vector-valued fBm as an RGB tuple [46], scaling it appropriately and adding the result to the color given by the index to the color lookup table, we can add significantly to the realism of our model - compare Plates 6.3.9e and f.
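A minimal sketch of this color mottling, using the VfBm() convention from the fragments above and a hypothetical amount parameter, might read:

void MottleColor( Colour *colour, Vector point, double amount,
                  double H, double lacunarity, double octaves )
{
        Vector mottle;

        mottle = VfBm( point, H, lacunarity, octaves );
        /* interpret the vector (x,y,z) as an (r,g,b) perturbation */
        colour->red   += amount * mottle.x;
        colour->green += amount * mottle.y;
        colour->blue  += amount * mottle.z;
        /* clamp errant values to [0,1] */
        if ( colour->red   < 0. )  colour->red   = 0.;
        if ( colour->green < 0. )  colour->green = 0.;
        if ( colour->blue  < 0. )  colour->blue  = 0.;
        if ( colour->red   > 1. )  colour->red   = 1.;
        if ( colour->green > 1. )  colour->green = 1.;
        if ( colour->blue  > 1. )  colour->blue  = 1.;
}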

3.6.6. Result

The resulting texture provides the surface for an Earth-like planet, the realism of which is designed to be enhanced by the clouds described in section 3.2 and the atmosphere model described in section 4.1.6. The ensemble is seen in Plates 6.4.3 and 6.4.4, with another similarly-complex procedural model of a moon. Note that in those plates the fractal function used to create the continents is the heterogeneous fBm described in section 2.4.2.5, thus the coastlines have a heterogeneous fractal dimension and the land masses have an interesting variety of detail. This is our preliminary model of a synthetic planet devised to contain enough complexity and variety to merit extended investigation, as a complete world unto itself.

3.7 Conclusions

The complexity of natural scenes is, in general, far greater than what we are currently capable of modelling geometrically. As long as this is an obstacle, surface textures will be useful for providing visual complexity beyond that in the actual shapes of the objects in synthetic scenes.

The textures we have demonstrated here have proven usefulness in this application, as seen in the color plates from throughout this dissertation. They embody rich visual expression, in relatively compact, formal, and deterministic procedural specifications; therein lies their power.


The flexibility of the procedural approach to modelling fractal natural phenomena will be demonstrated in the animation "Spirit of Gaea", which is in production at the time of this writing.* This animation, which is being made with the help of Robert Cook, Matthew Pharr, and Gordon Palumbo (senior undergraduate students in the Yale Department of Computer Science), consists of a logarithmic zoom from space down to the terrain of a fractal planet. The range of scales will be extreme, starting with the entire planet appearing pixel-sized on the screen, and moving in to a close-up of details of the procedurally-rendered fractal terrain. To demonstrate the scaling properties of the procedural models, the zoom will be continuous, employing the same models (planet texture, atmosphere, cloud texture, and procedural height field) at all scales. While many such "Powers of Ten" zooms have been produced before, none has been made without changes of models at different scales. Thus the "Spirit of Gaea" animation will be an unprecedented technical achievement.

* A preliminary version of this animation, in MPEG form, may be found at http://www.seas.gwu.edu/faculty/musgrave.


3.8 Sample Texture Code with Auxiliary Functions

/*
 * Below is an example of an elaborate procedural texture which,
 * when applied to the surface of a sphere using the right 20-something
 * magic parameter values and a just-so color map, can do a nice imitation
 * of an Earth-like planet, literally creating (part of) a "virtual world".
 * Note that this function is a surface-coloring and bump-mapping effect,
 * thus the sphere remains geometrically smooth.
 *
 * There are 22 arguments to this procedure, stored in the textArg[] array.
 * The following vector of parameter values (plus a color map) created the
 * planet seen in Color Plate 6.3.10 of these course notes:
 *
 *   .25 2 -.48 10 0 .45 300 2.6 .7 20 .75 220 170 20 .6 4 1.125 1.2 -.085 1 .3 0
 *
 * The code is in the form of one of the (many) cases in an enormous "switch"
 * statement in our ray tracer (John Amanatides' and Andrew Woo's "Optik"
 * from the University of Toronto's Dynamic Graphics Project).
 */

case PLANET:
    /*
     * Perform initialization of fractal (spectral exponent) parameter:
     *   textArg[1] is lacunarity, or the gap between frequencies
     *   textArg[2] is the fractal codimension parameter
     *   textArg[4] serves as static storage for the computed exponent
     *     (note that this is an idiosyncrasy of our implementation)
     *   firstPlanetCall is a static flag variable, initialized to TRUE
     */
    if ( firstPlanetCall ) {
        textArg[4] = pow( textArg[1], (-0.5 - textArg[2]) );
        firstPlanetCall = FALSE;
    }

    /*
     * Choose between fractal bump functions, based on the value in textArg[19]
     * (note that the high parameter number -- 19 -- indicates that this was
     * added quite late in the development of the texture).
     */
    if ( !textArg[19] )     /* use a "standard" fBm bump function */
        bump = VfBm( texture, textArg[4], textArg[1], textArg[3] );
    else {                  /* use a "multifractal" fBm bump function */
        /* get "distortion" vector, as used with clouds */
        distort = DNoise( texture );
        /* scale distortion vector */
        SMULT( textArg[20], distort );
        /* insert distortion vector */
        texture = VADD( distort, texture );
        /* compute bump vector using displaced point */
        bump = MVfBm( texture, textArg[4], textArg[1], textArg[3] );
    }

    /* get the "height" of the bump, displacing by textArg[18] */
    chaos = -DOT( bump, hit->normal ) + textArg[18];

    /* set bump for land masses (i.e., areas above "sea level") */
    if ( chaos > 0. ) {
        chaos *= textArg[5];
        hit->normal.x += textArg[0] * bump.x;
        hit->normal.y += textArg[0] * bump.y;
        hit->normal.z += textArg[0] * bump.z;
        Normalize( &hit->normal );


    }

    /* if there's a colormap associated with the texture, use it */
    if ( cmap ) {
        /* use a scaled "z" value for offset in color map */
        temp = ABS( hit->intersect.z ) * textArg[16];

        /* fractally perturb color map offset using "chaos" */
        /* textArg[7] scales perturbation-by-z */
        /* textArg[17] scales overall perturbation */
        temp = chaos * (textArg[7]*(1.-temp) + textArg[17]) + temp;
        if ( temp > 0. )        /* if above "sea level" */
            /* "mountains" appear too "chunky", */
            /* so exponentiate the color map offset */
            offset = (int)( textArg[6] * pow(temp, textArg[15]) );
        else offset = 0;        /* (don't mess with oceans) */

        /* now do oceans; textArg[11] sets polar ice caps */
        if ( (offset < 0.) || ((chaos <= 0.) && (offset < textArg[11])) )
            offset = 0;

        /* clamp color map offset to upper bound */
        if ( offset > 255 ) offset = 255;

        /* set surface color to calculated color map entry */
        color = cmap[offset];

        /* darken the "deep waters" */
        /* (note that "chaos" is less than 0 here) */
        if ( offset == 0 ) {
            /* scale the effect */
            chaos *= textArg[9];
            /* make the effect nonlinear, according to */
            /* the whim encoded in textArg[21] */
            if ( textArg[21] )
                chaos *= 1. - texture.z*texture.z;

            /* limit how dark deepest waters get */
            if ( chaos < -textArg[10] ) chaos = -textArg[10];

            /* now darken the color of the deeper waters */
            color.red   += chaos * color.red;
            color.green += chaos * color.green;
            color.blue  += chaos * color.blue;
        }
        /* else we are in the landmass areas, where the color */
        /* is ...boring, so we'll mottle it with a vector-valued */
        /* fBm, interpreted as an RGB value */
        else if ( offset < textArg[12] ) {      /* don't mottle snow! */
            /* scale size of color-bumps */
            SMULT( textArg[13], texture );
            /* get the vector-valued fBm */
            /* (note that we've hard-coded some constants */
            /* in a feeble effort to fight parameter */
            /* proliferation.) */
            bump = VfBm( texture, textArg[14], 2., 8. );
            /* using only bump.x is a "feature", */
            /* not a bug (don't ask me why!); */
            /* more hard-coded constants used, */
            /* to lessen parameter proliferation */
            color.red   += color.red   * 0.5   * textArg[8] * bump.x;
            color.green += color.green * 0.175 * textArg[8] * bump.x;
            color.blue  += color.green * 0.5   * textArg[8] * bump.x;

            /* now clamp errant color values */
            if ( color.red   < 0. ) color.red   = 0.;
            if ( color.green < 0. ) color.green = 0.;


            if ( color.blue  < 0. ) color.blue  = 0.;
            if ( color.red   > 1. ) color.red   = 1.;
            if ( color.green > 1. ) color.green = 1.;
            if ( color.blue  > 1. ) color.blue  = 1.;
        }
    } else {    /* no color map, so just use mottled texture */
        color.red   *= chaos;
        color.green *= chaos;
        color.blue  *= chaos;
    }
    break;      /* whew! */


/*
 * And now for some of the auxiliary functions and data referred to above...
 */

/* fBm constructed with VLNoise() */
double VLfBm( point, omega, lambda, octaves )
Vector point;
double omega, lambda, octaves;
{
    register double l, a, o;
    register int    i;
    register Vector tp;
    double          VLNoise();

    l = lambda;  o = omega;
    a = VLNoise( point, 1.0 );
    for ( i = 1; i < octaves; i++ ) {
        tp.x = l * point.x;
        tp.y = l * point.y;
        tp.z = l * point.z;
        a += o * VLNoise( tp, 1.0 );
        l *= lambda;
        o *= omega;
        if ( o < VERY_SMALL ) break;
    }
    return( a );
} /* VLfBm() */


/* vector-valued fBm */
Vector VfBm( point, omega, lambda, octaves )
Vector point;
double omega, lambda, octaves;
{
    register double l, o;
    register int    i;
    register Vector tp, n, a;

    l = lambda;  o = omega;
    a = DNoise( point );
    for ( i = 1; i < octaves; i++ ) {
        tp.x = l * point.x;
        tp.y = l * point.y;
        tp.z = l * point.z;
        n = DNoise( tp );
        a.x += o * n.x;
        a.y += o * n.y;
        a.z += o * n.z;
        l *= lambda;
        o *= omega;
        if ( o < VERY_SMALL ) break;
    }


    return( a );
} /* VfBm() */


/* vector-valued multifractal fBm routine */
Vector MVfBm( point, omega, lambda, octaves )
Vector point;
double omega, lambda, octaves;
{
    register double tmp, lacunarity, o, weight;
    register int    i;
    register Vector tp, tv, result;
    double          Noise(), VLNoise();
    Vector          DNoise();

    lacunarity = lambda;  o = omega;  tp = point;
    result.x = 0.0;  result.y = 0.0;  result.z = 0.0;

    /* get initial value */
    weight = VLNoise( tp, 1.5 );
    if ( weight < 0. ) weight = -weight;
    tv = DNoise( tp );
    result = SMULT( weight, tv );
    for ( i = 1; i < octaves; i++ ) {
        tp = SMULT( lacunarity, tp );
        /* get subsequent values, weighted by previous value */
        weight *= o * ( N_OFFSET + Noise(tp) );
        if ( weight < 0. )  weight = -weight;
        if ( weight > 1.0 ) weight = 1.0;
        if ( (weight < VERY_SMALL) && (weight > -VERY_SMALL) ) break;
        tv = DNoise( tp );
        tmp = MIN( weight, omega );
        tv = SMULT( tmp, tv );
        result = VADD( tv, result );
        o *= omega;
    } /* for */

    return( result );
} /* MVfBm() */

/*
 * And now for the maniacs (we know you're out there) who'd type this in,
 * here's the color map used to create the planet seen in Color Plate 1:
 */
char planet_map[256][3] = {
{1,14,81}, {176,134,80}, {170,123,72}, {164,113,64}, {158,103,56}, {153,93,48}, {153,92,46}, {153,90,44}, {154,88,42}, {154,86,40}, {154,84,38}, {154,83,36}, {155,81,34}, {155,79,32}, {155,77,30}, {156,76,28}, {155,74,26}, {154,72,24}, {153,70,22}, {153,68,21}, {152,66,19}, {151,64,17}, {150,62,15}, {150,60,14}, {149,58,12}, {148,56,10}, {147,54,8}, {147,52,7}, {146,50,5}, {145,48,3}, {144,46,1}, {144,45,0}, {142,45,1}, {141,46,2}, {139,47,3}, {138,47,4}, {137,48,5}, {135,49,6}, {134,49,7}, {133,50,8}, {131,51,9}, {130,51,10}, {128,52,11}, {127,53,12}, {126,53,13}, {124,54,14}, {123,55,15}, {122,56,16}, {121,57,17}, {120,57,17}, {119,58,18}, {118,59,19}, {117,59,20}, {116,60,20}, {114,61,21}, {113,61,22}, {112,62,22}, {111,63,23}, {110,63,24}, {109,64,25}, {108,65,25}, {107,65,26}, {106,66,27}, {105,67,28}, {103,68,28}, {101,68,26}, {99,69,24}, {97,69,23}, {95,70,21}, {93,70,19}, {92,71,18}, {90,72,16}, {88,72,14}, {86,73,13}, {84,73,11}, {83,74,9}, {81,75,8}, {79,75,6}, {77,76,4}, {76,77,3}, {74,76,3}, {73,75,4}, {71,74,5}, {70,74,5}, {68,73,6}, {67,72,7}, {65,71,7}, {64,71,8}, {62,70,9}, {61,69,9}, {59,68,10}, {58,68,11}, {56,67,11}, {55,66,12}, {53,65,13}, {52,65,14}, {51,64,14}, {49,63,15}, {48,62,16}, {46,62,16}, {45,61,17}, {43,60,18}, {42,59,18}, {40,59,19}, {39,58,20},


{37,57,20}, {36,56,21}, {34,56,22}, {33,55,22}, {31,54,23}, {30,53,24}, {29,53,25}, {28,52,25}, {27,51,25}, {25,50,24}, {24,50,24}, {22,49,24}, {21,48,23}, {19,47,23}, {19,47,23}, {19,47,23}, {19,47,23}, {19,47,24}, {19,47,24}, {19,47,24}, {20,47,25}, {20,47,25}, {20,47,25}, {20,47,26}, {20,47,26}, {20,47,26}, {21,47,27}, {21,47,27}, {21,47,27}, {21,47,28}, {21,47,28}, {21,47,28}, {21,47,28}, {21,47,28}, {21,48,28}, {21,48,28}, {21,48,28}, {21,48,28}, {21,49,28}, {21,49,28}, {21,49,28}, {21,49,28}, {21,50,28}, {21,50,28}, {21,50,28}, {21,50,28}, {21,51,28}, {23,52,30}, {26,53,32}, {28,54,34}, {31,55,37}, {34,56,39}, {36,57,41}, {39,58,43}, {41,59,46}, {44,60,48}, {47,61,50}, {49,62,53}, {52,63,55}, {54,64,57}, {57,65,59}, {60,66,62}, {62,67,64}, {65,68,66}, {68,69,69}, {69,69,69}, {69,69,69}, {69,70,70}, {70,70,70}, {70,70,70}, {70,70,70}, {71,71,71}, {71,70,70}, {72,70,69}, {72,69,69}, {73,69,68}, {74,68,68}, {74,68,67}, {75,67,67}, {76,67,66}, {76,66,66}, {77,66,65}, {78,65,65}, {78,65,64}, {79,64,64}, {80,64,63}, {81,64,63}, {82,63,62}, {84,62,61}, {94,74,73}, {104,87,86}, {115,100,99}, {125,113,112}, {135,125,125}, {146,138,138}, {156,151,151}, {166,164,164}, {177,177,177}, {196,196,196}, {216,216,216}, {235,235,235}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}, {255,255,255}};


Formal Logic and Self-Expression

F. Kenton Musgrave
The George Washington University

Department of Electrical Engineering and Computer Science

Washington, DC 20052

Abstract

The digital computer can be used to synthesize images of Nature from first principles of mathematics and the natural sciences; these images can in turn serve as vehicles of self-expression for the artist directing the synthesis. This peculiar artistic process juxtaposes the deterministic formalisms of the scientific method with the subjective aspects of visual aesthetics and with the pursuit of artistic self-expression, which may best be characterized as a spiritual undertaking. We claim that this novel creative process is unprecedented in the visual arts — not only because it can be practiced only with the aid of powerful computers, devices that are a recent phenomenon, but for deeper reasons as well. Furthermore, we claim that this new process has exceptionally deep conceptual underpinnings entrained from the mature formal disciplines from which it derives. Generally, practitioners of these formal fields — scientists and mathematicians — are not fully cognizant of the disciplines of visual aesthetics, just as practitioners of the fine arts are (though with arguably greater awareness) generally ignorant of the finer points of science and mathematics. This division of expertise has led to the false contemporary cultural notion that the two methodologies are somehow inherently separate and irreconcilable. Not so, we will argue. Yet mutual appreciation will require mutual understanding of the richness of the underlying disciplines; the purpose of this essay is therefore to illustrate that art and science can be brought together in a single creative process, and to convey some appreciation for the depth of the resulting fusion. This essay is addressed to the intelligent layperson: We proceed by elucidating certain formal methods and how they relate to the computational process of image synthesis, then by pointing out some of the implications to the visual arts of working in this new process, and finally by illuminating the philosophical dilemma posed by the pure use of formal methods as the means to achieving qualitative, even spiritual, humanistic goals.


4. Introduction: The Birth of a New Art Form

The advent of the digital computer represents a rare event in the fine arts: the birth of an entirely new creative process. This process is distinguished by its deep conceptual roots in areas normally held as alien to the arts and by its peculiar deterministic mechanism of expression, which is wholly novel to the practice of artistic creation. In this process the computer acts as a potent engine of interpretation, reifying in images the dry but austerely beautiful abstractions of formal scientific models of the universe we inhabit. The images so created may in turn become artworks, spun from a process based upon philosophy, mathematics and the physical sciences, and entraining the rigor, beauty, and intellectual depth of those fields. Art and science, so often viewed as mutually inimical and irreconcilable, come together in pursuit of the common goal of visual aesthetics.

In the seventeenth century René Descartes and Isaac Newton, as natural philosophers, fleshed out a world view so compelling that, if the average educated person in our society today stops and thinks about it, it seems to be "the obvious way that things are": In the Cartesian universe with Newtonian dynamics, if we knew A) the position and velocity of every particle in a closed system and B) the rules for their interactions, and we had sufficient power to compute all those interactions, we would have the power to predict the future, forever, for that system. If the closed system in question were the entire universe, this would have profound philosophical implications: There could be no free will; it would imply that we are all witless automatons, mere puppets in some sort of deterministic, already-written cosmic script. It would affirm the nihilistic philosophy of fatalism, and undermine the basis of human morality: that we have a choice in matters, and that what we choose to do — and not to do — makes some kind of a difference.

In a broad view, the new artistic process we describe here juxtaposes formal logic with human self-expression. The former is founded upon determinism, a philosophical assumption that denies the possibility of free will. Self-expression, on the other hand, is an ultimate manifestation of free will. I emphasize these two extremes to highlight what is interesting in this new process. One may substitute "aesthetic judgment" for "self-expression" throughout the text, but I will maintain the latter usage, in the interest of contrast.

As a practitioner of the new process, I will attempt to illuminate both its specific concerns and the significance of its links to the fields of formal logic, the natural sciences, and computer science. While the arguments presented are necessarily technical at turns, I will make every attempt to keep them comprehensible to the intelligent layperson. As well, I will attempt along the way to point out some of the implications of the advent of this new way of working, and how they relate to current trends in the visual arts.

4.1 The Thesis

The thesis I propose is this: Self-expression in representational imagery may be obtained strictly through formal logic; and the practice of doing so marks a discontinuity of significant import in the history of the creative process. In addition, I claim that the resulting artworks are conceptually enriched by the intellectual underpinnings of this approach: When an artwork represents the unaltered result of a deterministic logical derivation, it entrains a conceptual depth and richness not commonly achieved in the realm of visual arts.

Only time and our culture can determine the validity of the first two claims I make. The last I can illuminate; that is what this essay attempts to do.

4.2 Foundations

Science is the discipline of observing Nature and deriving potent and internally consistent descriptions (models) of systems observed therein. Mathematics is the language of science; it provides both a terse notation and a logically consistent framework in which to couch such descriptions. Computer science is the study of the complex logical system that is the computer; it is largely based on the discipline of mathematical logic: The operation of the modern digital computer is described completely by, and at the lowest level is literally implemented in terms of, the predicate calculus of formal logic.

It is worth pointing out that it is a specific sub-branch of computer science, numerical analysis, which concerns itself with the problems of performing mathematical computations with a digital "computer." Note the sudden appearance of quotes around the word computer — it turns out that this appellation is a misnomer: a (digital) computer is more rightly viewed as a symbol-manipulator, a string-rearranger,1 than as a mathematical calculating device. I point this out because this "string rearranger" model of the computer will be essential to my treatment of the computer as an artistic tool and process.

4.2.1. Formal Logic and Formal Systems

Our arguments are based on the discipline of formal logic, the study of logic in its pure form. Interestingly, a first college course in logic will typically be taught in the philosophy department — logic being, of course, the foundation of reasoning and reasoning being the foundation of most precisely-communicable human understanding. A first course in formal logic may also be taught in the philosophy department. Formal logic is codified in the predicate calculus, which explicitly lays out the valid and invalid forms of logical inference as well as the minimal set of rules required for full logical reasoning. More advanced courses in formal logic are likely to be taught in the mathematics department, under the moniker mathematical logic. Mathematicians are intimately involved with formal logic, as all of their work is in the domain of the logical framework provided by formal systems, the reasoning systems of formal logic. Finally, formal logic is also taught in computer science departments, as computers are implemented in terms of logical operations; thus all theoretical models of computation reduce to the study of formal logic and formal systems. We see then that formal logic is already a highly interdisciplinary field; in this essay we will extend its scope by establishing a direct linkage to the visual arts.

A formal system is a sort of game; it is a fundamental concept of mathematical, or formal, logic.2 For our purposes we may think of a formal system as a set of given input strings, called axioms, along with a set of rules for performing transformations on, or changes to, those strings. The latter are called rules of production or rules of inference. Consecutive application of the rules of production to the axioms constitutes the derivation of a theorem in the system. The specific sequence of application of rules of production in the derivation constitutes a formal proof of the theorem.

1 The term "string" has a very specific definition in computer science, but for our purposes it is sufficient to think of it as an arbitrary sequence of characters or digits.

2 See Douglas Hofstadter's "Gödel, Escher, Bach" [16] for a thorough layperson's treatment of formal systems. Bertrand Russell and Alfred North Whitehead's "Principia Mathematica" [57] provides the definitive mathematical treatment thereof.

A simple, illustrative example of a formal system is Hofstadter's MIU system. [16] In this system, the only recognized symbols from which to compose strings are the characters M, I, and U. The only axiom, or starting (i.e., input) string, is MI. There are four rules of production that may be applied to the axiom and its successors:

Rule I: If a string ends in I, you may add a U to the end.

Rule II: If you have Mx, where x is an MIU string, you may add Mxx to your collection.

Rule III: If III occurs in a string, it may be replaced with U.

Rule IV: If UU occurs inside a string, you may drop it.

Every string derived from the axiom by these rules of production may be added to your collection of valid strings. Hofstadter challenges the reader to derive the string MU from the given axiom MI, using the rules of production given for the formal system. Note that there is no ambiguity in these rules, no "maybes" or "kind-of-likes." The results of an application of a rule are deterministic; but there is free choice in the order of application of the rules.

This is a very simple formal system, but it reflects exactly the behavior to which a computer is constrained: modifying strings by the application of well-defined deterministic rules of production.
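For the reader who wants to see this made concrete, here is a minimal sketch in C of the four MIU rules of production; it is purely illustrative (the function names are ours, not Hofstadter's), and it simply applies each rule once to the axiom MI.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rule I: if the string ends in I, you may append a U */
static char *rule1( const char *s )
{
    size_t n = strlen( s );
    char *t;
    if ( n == 0 || s[n-1] != 'I' ) return NULL;
    t = malloc( n + 2 );
    strcpy( t, s );
    strcat( t, "U" );
    return t;
}

/* Rule II: from Mx you may derive Mxx */
static char *rule2( const char *s )
{
    size_t n = strlen( s );
    char *t;
    if ( n < 2 || s[0] != 'M' ) return NULL;
    t = malloc( 2*n );
    strcpy( t, s );
    strcat( t, s + 1 );
    return t;
}

/* Rule III: the first occurrence of III may be replaced with U */
static char *rule3( const char *s )
{
    const char *p = strstr( s, "III" );
    char *t;
    if ( !p ) return NULL;
    t = malloc( strlen(s) - 1 );
    sprintf( t, "%.*sU%s", (int)(p - s), s, p + 3 );
    return t;
}

/* Rule IV: the first occurrence of UU may be dropped */
static char *rule4( const char *s )
{
    const char *p = strstr( s, "UU" );
    char *t;
    if ( !p ) return NULL;
    t = malloc( strlen(s) - 1 );
    sprintf( t, "%.*s%s", (int)(p - s), s, p + 2 );
    return t;
}

int main( void )
{
    const char *axiom = "MI";
    char *(*rules[4])( const char * ) = { rule1, rule2, rule3, rule4 };
    int i;

    for ( i = 0; i < 4; i++ ) {
        char *derived = rules[i]( axiom );
        printf( "Rule %d applied to %s: %s\n", i+1, axiom,
                derived ? derived : "(not applicable)" );
        free( derived );
    }
    return 0;
}

Running it prints MIU for Rule I and MII for Rule II; Rules III and IV do not apply to the axiom. Every application is deterministic, but the choice of which rule to apply next is free, exactly as described above.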

Why do we bring up this rigmarole? Because this is exactly how a computer operates. We can describe the functioning of a computer completely through this kind of formal treatment; all other "higher-level" functions of a computer are built on top of, and implement different instances of, such formal systems. Formal systems have, in turn, been studied intensively. Early in this century, Bertrand Russell and Alfred North Whitehead [57] set out to map all of mathematics into a single, unifying formal system; their difficulties were shown to be theoretically insurmountable by Kurt Gödel in his famous Incompleteness Theorem. (This theorem demonstrates that, for any 'sufficiently powerful' formal system, there exist statements that are neither inconsistent with the system nor provable or disprovable within the system; in short, the system has nothing to say about them. MU is such a statement in the MIU system.) In short, great minds of our century and before have worked on the ramifications of the machinations of formal systems; in fact, many smart mathematicians and logicians continue to do so today.

The consequence to us, of all the above, is that when we are using formal logic (i.e., formal systems) we are "standing on the shoulders of giants," intellectually. There is a rich, preexisting mathematical and philosophical body of knowledge in this area, which we are implicitly drawing upon when we use the computer.

4.2.2. Artwork as Theorem

How does this concern us, artistically? As it turns out, all computer programs can be mapped into formal systems. Thus, when we use a computer, we are using a formal system; we are utilizing formal logic. Every time we execute a computer program, we are causing the derivation of a theorem in a formal system. "So what?" you might ask. This observation might seem to trivialize, to render vacuous, any claim that the derivation of a theorem in a formal system is in any way something special or intellectually weighty — after all, computers do it all the time, day in and day out, all around us.

But how often do we call the result art? ("Too often," indeed.) How often is the artist cognizant of these arcane machinations? How often can the artist claim to have consciously engineered the entire procedure? When the formal system involved is a computer program written specifically by the artist for the purpose of producing the artwork, when the program itself embodies much or even most of the power to create the work, when the artwork represents something that could not have come into being in any other way, then these observations vis-à-vis formal logic become interesting. Indeed, they gain great import.

4.2.2.1. Theorem as Self-Expression

It is one thing to label a theorem derived in a formal system a work of art; it is another to claim that that work of art represents self-expression on the part of the artist. Scientifically, such a claim is weak, as it can be verified only by the artist; no independent formal verification is possible. I maintain that the claim can nevertheless be valid, and is even readily verifiable in many cases, if only qualitatively (as opposed to quantitatively). There exist examples of such artworks — the pure output of a computer program — wherein it is readily evident that something of the artist's soul has been bared. As an example I offer "Blessed State," Plate 1.

Claiming self-expression purely through formal logic obviously involves massive constraints on what constitutes successful practice and an acceptable result in the artistic process. Yet another significant set of constraints is generated by the requirement that the works be representational. Abstract expressionism is an honored aesthetic in its own right, and formal systems and the computer can be — and have been — used in this context. But the requirement of literal realism in formal imagery spawns a host of problems and concerns that are only starting to be addressed, primarily in the research literature of computer graphics. Representationalism in synthetic imagery remains, in general, an open problem.

Many of the problems of generating realistic-looking synthetic imagery have been solved, albeit sometimes in ad hoc ways; many such problems yet await satisfactory solution. For example, specular reflection from glossy surfaces has been handled to nearly everyone's satisfaction by ray tracing; however, general realistic lighting models — including atmospheric scattering of light and interreflection between semi-glossy surfaces — are still under active development. In short, there are some things we can do very well with the current techniques of computer graphics; others that are imminently doable but not-yet-done; and some which are, and are expected to remain, refractory. As an active researcher in the field of computer graphics, I am involved in the effort to move more phenomena from the category of "doable" to that of "done." As a result, my own artworks more often than not serve simultaneously as a form of aesthetic self-expression and as illustrations of techniques new to the field of computer graphics. This adds a dimension of technical significance to the works; however, I generally intend this to be transparent to the uninformed observer.

In fact, one of my key intentions as an artist is to keep this entire esoteric process that I am describing transparent, to make it invisible to the viewer. There are a variety of reasons for this: First, I do not wish to immediately and automatically invoke the instinctive fear of mathematics that the average person is prone to feeling (myself included). Second, it is a research goal to have the image look as natural, i.e., non-computer-generated, as possible; thus the formal process should be thoroughly sublimated in the result. Third, and most important, is that it would be no better than arrogant and obfuscatory to require the audience to confront and grapple with these issues — the images should be able to stand on their own as aesthetic visual statements, outside of this technical context. I say: "Let them, or let them fail."

4.3 Deterministic Formalism and the Creative Process

An artist requires constraints, if for no other reason than to narrow down the "search space" (to put it in computer science terminology) wherein the desired result is sought. The formal logic approach certainly provides a rigorous set of constraints on the creative process. It also provides some interesting side effects.

The determinism of the logic involved means that the result is reproducible: repeated runs of the same program with the same input provide, modulo the occasional hardware glitch, precisely the same output. The artwork is reproduced exactly. (Or at least the numerical metarepresentation of it is; more on this later.) This is true despite the fact that randomness is an essential element in all my images — the randomness employed is a deterministic randomness; it is not "truly" random, but what we computer scientists refer to as "pseudo-random."3 Pseudo-random processes are simple yet sophisticated constructions from the mathematical discipline of number theory that are, for practical purposes, fully random (i.e., they lack discernible order or structure) yet which are simultaneously fully deterministic and therefore exactly reproducible.

The fact that I constrain my artworks to be purely the output of a computer program ensures that they feature this peculiar reproducibility. This could never be true of a painting, for instance, as a brush stroke is not an exactly reproducible act, on the microscopic scale at least. In the case of a computation the result is a string or, at the lowest level, a number or sequence of numbers or digits. This string or number can be checked character-by-character, digit-by-digit, for exact fidelity; there is no ambiguity or latitude for imprecision in the representation. Viewed in the light of computational result as artwork, and artwork as representational self-expression, this determinism and exact reproducibility are rather bizarre.
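A minimal sketch in C of why pseudo-randomness implies this reproducibility: the generator below is a textbook linear congruential recurrence, chosen only for brevity; it is not the noise generator used in any of the images discussed here. Given the same seed, it always produces the same sequence, so an image built from such numbers can be regenerated exactly.

#include <stdio.h>

static unsigned long state;

static void seed_rng( unsigned long s ) { state = s; }

/* a textbook linear congruential generator, mapped to [0,1) */
static double frand( void )
{
    state = state * 1664525UL + 1013904223UL;
    return (double)(state & 0xffffffffUL) / 4294967296.0;
}

int main( void )
{
    int run, i;

    for ( run = 0; run < 2; run++ ) {   /* two "renders" with the same seed */
        seed_rng( 19960805UL );
        printf( "run %d:", run );
        for ( i = 0; i < 5; i++ )
            printf( " %.6f", frand() );
        printf( "\n" );                 /* both lines come out identical */
    }
    return 0;
}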

4.4 Distinguishing the Process

It is worthwhile to take a little time to point out what distinguishes this process from the more traditional practices of fine arts such as painting, sculpture and photography.

4.4.1. Dimensionality

The product of this process is a two-dimensional image; this characteristic it shares with painting and photography. Like a painter or photographer, the artist is responsible for choosing an interesting point of view and framing for the image. As with a camera, a geometrically precise projection of the three-dimensional world onto the image plane is performed; painters have much greater latitude here. Like a photographer, one is free to roam the three-dimensional world, even to employ cinematography to add motion in a temporal exploration.

3 Note that throughout the text, I will freely use the term "random," generally meaning "pseudo-random" and thereby implying an ultimate determinism.


In this new process, though, the artist is responsible for the creation of the entire world being imaged: there are no preexisting objects "out there to be found" and creatively imaged; all objects and all interesting visual detail must be created explicitly. The elegant means we have for creating such visual complexity are at the heart of what makes this process successful and interesting.

4.4.2. Visual Complexity: Fractal Models

Fractal geometry [28] is the key to generating potentially unlimited visual complexity in my work, and in computer graphics in general. Fractal geometry is a language of shape, similar to the language of planes, circles, spheres, triangles, cones and cylinders of the more familiar Euclidean geometry. But as Benoit Mandelbrot has observed [28]:

Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line...

The vocabulary of shape that fractal geometry provides can describe such complex natural shapes with striking elegance.

There are two key aspects to fractal descriptions of natural forms: self-similarity, or the repetition of similar shapes at different scales, and randomness in the model. The first means that we need only describe one fundamental shape plus the relationship of its manifestation to the scale at which it is manifest — a very simple description indeed, for an object of potentially unlimited complexity. (The complexity is simply a function of the number of different scales at which we manifest the basic shape; the shape itself is typically simple, e.g., a triangle or a sine wave.) The second aspect, randomness, is the key to having the resulting shapes look natural, rather than man-made or (worse still) computer-made. Control then takes the form of shaping statistical distributions in random processes, rather than explicit specification of exact form. Thus we exchange exact control over form, for power in automatic generation of complex shapes. [9]
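To make "one simple shape, manifested at many scales, with randomness" concrete, here is a minimal one-dimensional sketch in C. It parallels the fBm routines listed in Section 3.8 of these notes, but it is purely illustrative; the function names and constants below are ours, not part of any renderer.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI      3.14159265358979
#define OCTAVES 6

static double phase[OCTAVES];   /* one random phase per scale */

static void init_phases( void )
{
    int i;
    srand( 42 );                /* fixed seed: the profile is reproducible */
    for ( i = 0; i < OCTAVES; i++ )
        phase[i] = 2.0 * PI * ( rand() / (double)RAND_MAX );
}

/* sum one basic shape (a sine wave) over OCTAVES scales;
 * H controls how quickly amplitude falls off with scale */
static double fractal_profile( double x, double H )
{
    double value = 0.0, freq = 1.0, amp = 1.0;
    int i;

    for ( i = 0; i < OCTAVES; i++ ) {
        value += amp * sin( freq * x + phase[i] );
        freq  *= 2.0;               /* manifest the shape at a smaller scale */
        amp   *= pow( 2.0, -H );    /* ...with proportionally smaller amplitude */
    }
    return value;
}

int main( void )
{
    double x;

    init_phases();
    for ( x = 0.0; x <= 1.0; x += 0.125 )
        printf( "%5.3f  %8.4f\n", x, fractal_profile( x, 0.9 ) );
    return 0;
}

The number of octaves sets the complexity, the exponent H shapes the statistics, and the random phases supply the irregularity that keeps the result from looking man-made.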

4.4.3. Purity of Algorithmic Process

Of course, I could employ my omnipotent powers in this synthetic universe to intervene and make specific, local changes wherever I saw fit. In adherence to a self-imposed constraint of process, however, I do not allow myself to do this. This often proscribes the shortest route to a desired result (as in obtaining a desired hue in a given highlight) by disallowing local intrusions and modifications to the world or the image that would, in practice, be relatively easy to execute. What is gained in exchange, however, is purity of algorithmic process. Creation of an image becomes a dance with the opportunities and serendipity granted by the powerful, random fractal models that I create, embellish and (more or less) control. By disallowing post-process meddling with the results of the various algorithmic processes I employ, I gain two compelling benefits: legitimacy in illustrating the descriptive power of these abstract fractal models, and claim to an elegance in the creative process — the image is indeed a theorem proved, in one pass, in a formal system.4

4 One could technically contest the claim that this is a one-pass process, on the grounds that the terrain models I image have generally been generated outside of the rendering process, in a separate step. In my own defense I point out that A) I apply the same rules of algorithmic purity in the terrain-model generation process, B) the program that generates the terrain could readily be incorporated into the renderer, thus coupling them, and C) I am moving towards exactly that: a completely procedural process wherein even the terrain model is created on-the-fly as the picture is being created. (See Plates 2 and 3, which are entirely procedural.)


Adherence to principles of algorithmic purity legitimizes one of the key claims I make about the significance of this process: that it entrains the intellectual depth of logic, mathematics and computer science as its foundations. In practice, it entails the pure use of formal logic to obtain the desired result. If I were to indulge in local meddling, this claim would be compromised and/or invalidated.

Again, another (more or less arbitrary) constraint I impose upon the process is that the results represent self-expression. Expressionism is a practice the popularity of which perhaps waxes and wanes through the history of the fine arts; I do not claim that it makes my work in any way "better," I only note that it constitutes a significant constraint upon what I, as an artist, consider to be a successful result.

4.4.4. Proceduralism

These concerns lead us to proceduralism. [9, 54] Proceduralism is the practice of abstracting complex behaviors into relatively terse functions or algorithms that do not contain specific information about details of the phenomenon, but rather encode a given behavior in a formal set of instructions that specify the behavior everywhere it might manifest itself, and which may be evaluated only when and where such information is desired (what we charmingly call "lazy evaluation" in computer science).

Thus, in the procedural approach, a "virtual world" is abstracted into a compact procedure or set of procedures. These procedures are in turn controlled by a relatively few parameters which afford (only) global control. Alvy Ray Smith [64] called this database amplification; I refer to the process of creating landscape images within this paradigm as "playing God in a found Universe" — I may have God-like powers over these worlds, but in practice, because of the randomness they embody, they behave as if they have a will of their own. Furthermore, they have an ineffable sense of having existed a priori; of somehow being inherent in the timeless, universal formal procedures that specify them and of always having existed there as an aspect of Nature, or at least of Mathematics, just waiting to be discovered. As an artist, I simply interpret these forms visually. Thus they may represent, at least in part, "found art." But there nevertheless remains enormous latitude for the exercise of aesthetic judgment in the development of any given image. It is, after all, but one out of an unimaginably huge, if finite, multitude of images that might have been selected (more on this later).

4.4.4.1. Functions and Algorithms

Proceduralism in practice consists of devising functions which in turn are implemented as algorithms, or unambiguous sequences of instructions telling the computer exactly what to do, for a given input. Functions are a mathematical concept. They may be viewed very simply as contraptions that change values given as input, to other values — the output. The input and output values might be very different: input may be numbers and output colors, or other stranger and more subtle mappings.

Mathematically, we refer to the action of a function f like this:

f : D → R

which simply says that function f sends (maps) input values from D (the domain) to values in R (the range). It is useful to distinguish the set of possible input values D from the set of possible output values R, as they may be quite different kinds of things.


The simplest kind of function is a scalar-valued function of a single variable, denoted f(x) = y. (We use lower case letters to refer to specific values, upper case to refer to the entire set from which those values may be chosen.) A scalar value is just a single number. A function of one variable has only one input value.

Most interesting functions are the more complex vector-valued functions of several variables, denoted f(x1, x2, ..., xn) = [y1, y2, ..., ym]. This particular function takes a number (n) of input values, and maps them to another number (m) of output values. Such functions are more common in my images. They typically take more than three values as input: the three spatial coordinates of the location where the function is being evaluated (as the function is usually defined over all of space) plus a set of variables controlling the behavior of the function. They output some small number of values, such as the primary-color components of a certain color and a spatial vector used to modify the apparent orientation of a surface (as with the water in Plate 1).

It is the concoction of functions like this, with interesting visual behaviors, which constitutes the first step in this formal creative process. These functions are small parts of a much larger program that orchestrates the overall creation of the picture. Examples of such functions in action can be seen in the ripples in the water in Plate 1, as well as in the roughness of the moon and the coloring of the mountains. Each of these effects issues entirely from the functions evaluated on the surfaces there. (Believe it or not, the water is a perfectly flat plane, and the moon is a perfectly smooth sphere!) The fact that the functions are defined over all of space allows us to evaluate them anywhere we desire. Thus the moon is carved out of an infinite block of "moon-ness," the mountains out of an infinite virtual block of snow, rock and greenery, and the water out of an infinite expanse of abstract "sea."
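As a concrete, hypothetical illustration in C of such a vector-valued function: given a point anywhere in space plus a few controlling parameters, it returns a color and a small vector for perturbing the surface normal. The noise() stand-in below is a cheap hash, not a proper noise function; it only serves to make the sketch self-contained. None of these names belong to an actual renderer.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { double r, g, b; } Color;
typedef struct { Color color; Vec3 bump; } Shade;

/* a cheap, deterministic stand-in for a proper noise function (hypothetical) */
static double noise( double x, double y, double z )
{
    double s = sin( x*12.9898 + y*78.233 + z*37.719 ) * 43758.5453;
    return 2.0 * ( s - floor(s) ) - 1.0;    /* roughly in [-1, 1) */
}

/* a vector-valued function of several variables, defined over all of space:
 * input is a point plus global parameters; output is a color and a small
 * vector later added to the surface normal */
static Shade shade_point( Vec3 p, double frequency, double bumpiness, Color base )
{
    Shade  s;
    double n = noise( frequency*p.x, frequency*p.y, frequency*p.z );

    s.color.r = base.r * ( 0.5 + 0.5*n );
    s.color.g = base.g * ( 0.5 + 0.5*n );
    s.color.b = base.b * ( 0.5 + 0.5*n );

    s.bump.x = bumpiness * noise( frequency*p.x + 7.3, frequency*p.y, frequency*p.z );
    s.bump.y = bumpiness * noise( frequency*p.x, frequency*p.y + 7.3, frequency*p.z );
    s.bump.z = bumpiness * noise( frequency*p.x, frequency*p.y, frequency*p.z + 7.3 );

    return s;
}

int main( void )
{
    Vec3  p    = { 0.3, 1.7, -2.2 };
    Color base = { 0.2, 0.5, 0.8 };
    Shade s    = shade_point( p, 4.0, 0.1, base );

    printf( "color (%.3f %.3f %.3f)  bump (%.3f %.3f %.3f)\n",
            s.color.r, s.color.g, s.color.b, s.bump.x, s.bump.y, s.bump.z );
    return 0;
}

Because the function is defined at every point of space, the same few lines "carve" a pattern out of an infinite block of material wherever a surface happens to intersect it.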

4.4.4.2. Global Parametric Control

The values xi (i denoting the numbers 1 through n) which serve as input to our functions are known as parameters. The parameters beyond the three spatial coordinates at which the function is being evaluated are used to determine the overall behavior of the function. The way these functions are usually constructed, the parameter values affect the function's output everywhere in space. This amounts to global parametric control of the function's behavior.

In practice this means that, for instance, I may exactly specify a color for a light source; if I dislike the resulting hue in a particular highlight (a local effect) I may change the color of the light source accordingly, but this changes tones everywhere that light falls in the scene. Similarly, if I dislike the shape or location of a given wave in the water or mountain peak in the terrain, I may change it, but this change will also affect all other waves or peaks and valleys. The randomness at the heart of the fractal models I use grants both enormous flexibility and expressive power, but it also entails complete abdication of control over specific details in relation to their global context. While this global parametric control represents a profound creative constraint, it also entails an enormous (and often elegant) simplification of the final stage in the creation of the image: After the program is written, all that is to be done is to select values for these parameters.

4.4.5. Representationalism and Conceptualism

When manual renderings were the only source of pictures, artists were deeply concerned with accurate representation; they developed a full set of techniques for realistic rendering. Since the invention of the camera, representationalism has not generally been a vital issue in the visual arts. Our new process, however, reopens the problem of representationalism: We simply do not yet know, in general, how to reproduce the visual appearance and complexity of the everyday world in computer-synthesized imagery. It may thus push us back several steps in the cycle of aesthetic evolution (or is it simply forward, one step?)

Recently, conceptualism has sometimes given the ideas behind an artwork precedence over the artwork's physical manifestation. For that reason, I wish to emphasize throughout this essay the depth of the conceptualism inherent in this process, and to cast a faint glimmer of light into those depths.

4.4.6. Lighting

The artist's responsibility for lighting in synthetic scenes brings this process into relation with lighting as used for photography and stage performances. Direct responsibility for lighting is something new for landscape rendering, where artists have traditionally relied on serendipity in Nature to provide striking effects. As the authors of a synthetic world, we will find nothing that we do not explicitly create. And of course, due to the nascence of the process, we have yet to approach the kinds of diverse, subtle and spectacular effects captured by the masters of more mature art forms like Bierstadt, Monet, Turner and Adams.

The process of providing synthetic lighting is exactly analogous to stage lighting. We have light sources with color, brightness, direction, and area of influence. We can position those lights wherever we want. We can have as many of them as we like (though in practice I rarely use more than two — a warm sunlight and a cool skylight). In addition, we are responsible for specifying, mathematically, the interaction of light with surfaces in the scene: are those surfaces mirror-like, glossy, or matte? Or something different, perhaps completely unnatural? There are no set limits here. This mathematical treatment of light and color also marks a new practice in the visual arts; we will expand upon it later.
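A minimal sketch in C of what "specifying, mathematically, the interaction of light with surfaces" can look like: a matte (Lambertian) term plus a glossy, Blinn-Phong style term for a single light. The vector types and helper here are hypothetical and this is not the shading code of any particular renderer; it simply shows that "matte versus glossy" reduces to a small formula with a few parameters.

#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { double r, g, b; } Color;

static double dot( Vec3 a, Vec3 b ) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* n: surface normal, l: direction to the light, v: direction to the eye,
 * all assumed to be unit vectors; gloss controls how tight the highlight is */
static Color shade( Vec3 n, Vec3 l, Vec3 v, Color surface, Color light, double gloss )
{
    Vec3   h;
    double len, diffuse, specular;
    Color  out;

    diffuse = dot( n, l );
    if ( diffuse < 0.0 ) diffuse = 0.0;          /* matte (Lambertian) term */

    h.x = l.x + v.x;  h.y = l.y + v.y;  h.z = l.z + v.z;
    len = sqrt( dot( h, h ) );
    if ( len > 0.0 ) { h.x /= len; h.y /= len; h.z /= len; }
    specular = pow( dot( n, h ) > 0.0 ? dot( n, h ) : 0.0, gloss );   /* glossy term */

    out.r = surface.r * light.r * diffuse + light.r * specular;
    out.g = surface.g * light.g * diffuse + light.g * specular;
    out.b = surface.b * light.b * diffuse + light.b * specular;
    return out;
}

int main( void )
{
    Vec3  n = { 0.0, 1.0, 0.0 };                 /* surface facing up */
    Vec3  l = { 0.0, 0.707, 0.707 };             /* warm "sun" at 45 degrees */
    Vec3  v = { 0.0, 1.0, 0.0 };                 /* eye straight overhead */
    Color surface = { 0.4, 0.5, 0.3 };
    Color sun     = { 1.0, 0.9, 0.7 };
    Color c = shade( n, l, v, surface, sun, 50.0 );

    printf( "shaded color: %.3f %.3f %.3f\n", c.r, c.g, c.b );
    return 0;
}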

4.4.7. A Model of the Creative Process

A particularly fascinating view of the parametrically controlled creative process is that of searching n-space for local maxima of an aesthetic gradient. Let me explain: We have created a procedural, parametrically-controlled model of a synthetic microcosm. Say there are n independent parameters in that model and the specification of its projection onto the image plane. As these parameters are independent, we can think of each as representing a degree of freedom, or an additional dimension or direction in which we may move. Taken together, the n parameters define an n-dimensional space, or n-space for short. In this space we are free to move not just up and down, right and left, or forward and back, but in a whole lot of other abstract directions as well. This may seem abstruse to the layperson, but mathematicians, scientists and engineers never hesitate to work in spaces with many more dimensions than the familiar three of our everyday world.

The task of the artist then is first to create these n parameters (n being usually around two to five hundred in my own images) and their (deterministic) meaning through creating the procedures or functions that they drive, then to "tweak" the values of these parameters to obtain a satisfactory result or image. The creation of the parameters in formulating the formal system corresponds to defining the n-space; the process of refining the parameter values, or choosing the axioms to start with, corresponds to searching that n-space for local maxima of an aesthetic gradient. A local maximum is a location in the space from which all directions lead "downhill," that is, it is a kind of hilltop in n-space. "Downhill" is defined by the aesthetic gradient function — the completely subjective (non-deterministic) assessment on the part of the artist of what constitutes a "better" image, in terms of the parameter values. Obviously, this so-called "function" is not unambiguous: Its value will depend on the criterion by which the image is being assessed, and even upon the mood of the artist at the moment of evaluation.5 The local maximum is then a point in n-space from which a small move in any direction would result in a "less good" image.

Ambiguity notwithstanding, this n-space gradient-ascent model is more than just entertaining: It points out that a given image represents merely a local maximum of the aesthetic gradient field. Other, more global maxima ("higher hilltops," corresponding to "better" pictures or possibly "better" self-expression) undoubtedly exist elsewhere in the rich abstract n-space of potential images defined by the formal system. This is very much akin to noting that a photographer might have gotten a better shot by choosing a different vantage point or time, except that we have much, much more control here. Creating and searching this n-space is, I submit, a singular way of obtaining self-expression.

4.4.7.1. Searching N-Space for Aesthetic Maxima

What does this process look like, in practice? I have a bunch of numbers, usually about two to five hundred, which define the entire scene I'm creating (other than the landscape itself, which consists of thousands of numbers that, again, I don't allow myself to change or fiddle with). This is a lot of numbers to deal with. And it turns out that if you change more than one or two at a time, the effects are usually conflated, and you can't be sure which change accomplished what effect. Thus I spend long hours massaging the values one or two at a time, until I am sufficiently satisfied or exhausted to "call it a picture."
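A minimal sketch in C of this "tweak one parameter at a time" search: rate_image() stands in for the artist's eye (here it is just a made-up score so the program runs; in practice a human looks at a rendering and decides whether the change helped). The code is hypothetical and not part of any production renderer; it simply walks each parameter up or down, keeping a change only when the score improves.

#include <stdio.h>

#define N_PARAMS 8      /* in practice, two to five hundred */

/* a stand-in for the artist's judgment: a made-up score, highest when each
 * parameter sits at an arbitrary "ideal" value (0.3 * its index) */
static double rate_image( const double p[] )
{
    double score = 0.0;
    int i;
    for ( i = 0; i < N_PARAMS; i++ )
        score -= ( p[i] - 0.3*i ) * ( p[i] - 0.3*i );
    return score;
}

int main( void )
{
    double p[N_PARAMS] = { 0.0 }, step = 0.25, best;
    int    i, pass;

    best = rate_image( p );
    for ( pass = 0; pass < 20; pass++ ) {
        for ( i = 0; i < N_PARAMS; i++ ) {           /* one parameter at a time */
            double saved = p[i], trial;

            p[i] = saved + step;  trial = rate_image( p );
            if ( trial > best ) { best = trial; continue; }

            p[i] = saved - step;  trial = rate_image( p );
            if ( trial > best ) { best = trial; continue; }

            p[i] = saved;                            /* neither direction helped */
        }
        step *= 0.8;                                 /* refine the search */
    }
    printf( "settled at a local maximum of the (fake) score: %f\n", best );
    return 0;
}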

This is a very tedious process. It is also very obscure: No one else can hope to use my programs — the meanings of the parameters are simply too obscure for another artist to practically deal with. In fact, I am only really cognizant of their intended effects when I create the functions; this intent is quickly forgotten in the complexities of my work and daily life. If later I need to reconstruct that meaning, I generally have to go back and look at the computer code that I've written to implement the functions, and figure it out by inspection, reverse engineering, and the memories of my original intent that the inspection triggers.

This is not a highly desirable interface or working methodology. When people ask me "Can other people use your programs, too?" I have to answer "No." (I certainly lack the time and patience to explain or document all of these things.) This deplorable state of affairs I would attribute to the youth of the method — it is certain to be improved over time. Powerful mathematical methods can be brought to bear in such endeavors. Principal components analysis may be used, for instance, to reduce the dimensionality of the parameter space, and to maximize the effects of changes in parameter values (though the resulting reorientation of parameter vectors in n-space may destroy any original intuition as to parameter meaning).

5 The subjectivity of the aesthetic gradient function is brilliantly illustrated in Karl Sims' installation of interactive genetic art at the Pompidou Museum in Paris, and in certain interactive web sites, where the viewer is allowed to "vote" on an image they favor, which is then developed further. While driven by a deterministic program, the process is highly divergent — different viewers will generally express different preferences, as dictated by their internal aesthetic gradient function which indicates to them "which direction to go in" to obtain a "better" image. My own experience with Sims' system indicates that it might well comprise a semi-deterministic testbed for assessing a user's aesthetic discrimination: A user with superior aesthetic discrimination will converge more rapidly on interesting images.


4.4.7.2. Genetic Programming

One very promising method for managing the creation and search of the high-dimensional parameter space is genetic programming. In the genetic approach, we borrow some concepts from biology, namely genotype, phenotype, mutation, and sexual reproduction. Genotype is the encoding of an organism's form in its DNA, while phenotype is the physical manifestation of that coded form in an actual organism. Mutation is the spontaneous change in the encoding itself, and sexual reproduction is the recombination of genotype information from two individuals, by "mixing and matching" parts of their genetic code. This is a powerful approach to creation — after all, it appears to have gotten us to where we are today, as intelligent sentient beings.

Richard Dawkins popularized the genetic approach in his book "The Blind Watchmaker." [8] Several artists are using genetic algorithms to create striking works (though they are not representational, in the sense that I am using here). Karl Sims creates wonderful abstract images very rapidly with his genetic software, running on a massively parallel supercomputer. [63] My personal experience with his system showed it to be an astonishingly fecund process. And it is as simple as can be: The computer puts up a sequence of images, you pick one you like which the computer then proceeds to mutate for you, or you pick two which the computer then "breeds" for your pleasure. Mutation is random, and "natural" and sexual selection are performed by you, the user. William Latham uses similar genetic methods in creating his fantastic sculptural forms of Cambrian beasts-that-never-were. [68]
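A minimal sketch in C of that select-and-mutate loop over a parameter vector (the "genotype"): each generation, several mutated children are produced and the user picks the one they like, which becomes the next parent. Here the "image" is just the vector printed to the terminal; in a real system each child would be rendered and judged by eye. All names and constants here are hypothetical, and this is not Sims' or Latham's software.

#include <stdio.h>
#include <stdlib.h>

#define GENES    6
#define CHILDREN 4

static double frand( void ) { return rand() / (double)RAND_MAX; }

/* mutation: copy the parent "genotype" with a small random change per gene */
static void mutate( const double parent[], double child[] )
{
    int i;
    for ( i = 0; i < GENES; i++ )
        child[i] = parent[i] + 0.2 * ( frand() - 0.5 );
}

int main( void )
{
    double parent[GENES] = { 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 };
    double child[CHILDREN][GENES];
    int    generation, i, j, pick;

    srand( 1996 );
    for ( generation = 0; generation < 3; generation++ ) {
        for ( i = 0; i < CHILDREN; i++ ) {
            mutate( parent, child[i] );              /* each child is a mutation */
            printf( "child %d:", i );
            for ( j = 0; j < GENES; j++ )
                printf( " %5.2f", child[i][j] );
            printf( "\n" );
        }
        printf( "pick the child you like best (0-%d): ", CHILDREN - 1 );
        if ( scanf( "%d", &pick ) != 1 || pick < 0 || pick >= CHILDREN )
            break;
        for ( j = 0; j < GENES; j++ )                /* selection: the user's choice */
            parent[j] = child[pick][j];              /* becomes the next parent */
    }
    return 0;
}

Sexual reproduction would be a second operator that splices genes from two chosen parents; mutation alone already shows the division of labor, with the computer supplying variation and the user supplying the aesthetic selection.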

While this genetic approach to the management of procedural models is incredibly promising, it is currently limited to the creation of such free-form objects and images as Latham's and Sims'. Indeed, one of the points that Dawkins stresses is that evolution (of life on Earth) never has any goals as such. Rather, its only "values" are propagation and persistence; organisms satisfying those two criteria are "successful," those which do not are "failures." Unfortunately, it is therefore not immediately apparent how to apply the genetic methods of selection and random mutation to the evolution of models of non-biological natural phenomena or, more generally, to the problem of converging on any highly specific and complex a priori goal.

4.4.7.3. A Biological Analogue

Roman Verostko [69] has likened software that embodies an artist's aesthetic judgment to the genotype, and the resulting artwork to the phenotype. Program execution then corresponds to epigenesis, the biological process of the development of an undifferentiated cell, as a spore or an egg, into a complex organism.

The work of Verostko and others is closely related to the process I am describing here; we may be regarded as being of the same school of algorithmic art. Yet our processes are not identical. Verostko's "Hodos" system has a deterministic front end: the computer driving the plotter. The back end, the plotter, is no more deterministic than any paintbrush. As a result, none of the phenotypes is exactly reproducible (not that that is a desired trait, it is simply a distinction between the processes). The main distinction between Verostko's process and the one I am describing is that Hodos creates an artwork, while my process creates only a metarepresentation (again, more on this later). In this sense, Hodos is more mature and complete; as we will see, our new process as yet lacks a satisfactory medium in which to manifest the final piece.


4.5 What the Process Is Not

To further distinguish the process, it is worthwhile to point out certain aspects of what the process is not, to clarify by defining the negative space around it.

4.5.1. A 2-D Canvas

One thing this process is not, is a flat canvas. While the final image is indeed two-dimensional, its creation takes place in three dimensions (excluding time). We are responsible for the creation of an entire three-dimensional world, which we proceed to image by projecting it onto a film plane like a photographer, only doing so with mathematics.6 The potential of the process will be expanded when we gain the capability of rendering scenes at video frame rates — then the viewer will no longer need be passive, but will be able to enter the synthetic world and explore it, much as one moves about to inspect a piece of sculpture or a physical environment. In an immersive VR environment, this is foreseen to be quite an exciting development, though one better suited to entertainment than art, perhaps.

It is important to me, as an artist, to emphasize a certain point: The really interesting uses of the computer in the creation of artworks will not be in the traditional role of a canvas and paintbrush. Certainly, the computer can function as such and offers some unique capabilities, such as infinite erasure and reworking, not possible with paints. But that does not mark a significant conceptual breakthrough, merely incremental progress for an established process. Not that there is anything wrong with using the computer in this way — most of the best computer art has been, and will continue to be, produced in this way. I simply wish to emphasize that the process I am describing has very little in common with that, aside from using some common hardware devices and their common aesthetic disciplines of composition, color usage, and so forth. The means of creation are utterly different, and it is only the new process that is truly significant as an intellectual event in the history of art, I maintain.

4.5.2. Local Control

Almost every established process in the visual arts involves local control: details are manipulated in isolation from the whole. Any given brush stroke, for instance, while it certainly may indirectly affect, and be indirectly affected by, its global context, represents an absolutely local act. It does not directly affect anything beyond the area where the paint is applied.

Changing a global parameter, in contrast, immediately and directly affects everything, everywhere its function has influence. Thus I again wish to emphasize the contrast with, for instance, painting and sculpture, where the work is usually realized incrementally by a series of fundamentally local actions. When working with global control only, we have a much less precise control over details, but gain in return something akin to the power of "painting with a broad brush" — we cover a lot of territory with a single action.

6 Because the projection is described in the abstract, with mathematics, we may employ projections which would be difficult or impossible to obtain with a camera, such as one which instantaneously maps the entire celestial sphere to the image plane. [37] Indeed the abstract projections may be quite non-intuitive as, for example, in the projection of a four-dimensional quaternion Julia set down into three dimensions, and subsequently down to the two dimensions of the image plane. [14]


4.5.3. “Of the Hand”

As the only access to expression is through the formal logic of the computer program, there is no “evidence of the hand” in the final work (or if there appears to be, it is illusory). Some may find this anathema, but it is important to point it out as a distinction of the process. The mechanism of creation that I use is extraordinarily abstract and removed from the product. This is part of what is interesting and bizarre about the process: that such prosaic imagery comes about through such indirection and abstraction. I claim that this is significant in itself.

4.5.4. Pure Mathematics

I am often mistaken for a mathematician. That I am not. While all the models employed are based on logic, and many are mathematical models of natural phenomena, the mathematics I employ is generally quite simple compared to what a “real” research mathematician would be involved with.

Pure mathematics, after all, assiduously shuns applications and other associations with “reality.” And what I am up to is recreating reality as we see it.

4.5.5. Computer as Creator

Finally, and most importantly, this process does not represent creative action on the part of the computer. A computer, given no instructions, will just sit there, dumb as a rock, if a little warmer. A computer (on a good day) will cheerfully do exactly what you tell it to do, with blinding speed and precision. It will never do anything useful that you, the human operator, did not describe explicitly and in excruciating detail precisely how to do (this is the tedious art of computer programming). Remember: the computer operates as a formal system, and that admits no ambiguity and no choice, only deterministic, cut-and-dried, yes-or-no instructions and conditionals. Certainly, the complexity of the instructions we hand the computer rapidly surpasses our human ability to track every detail thereof, while the computer never loses track of one iota. But the computer remains a simpleton; a very fast and capable simpleton, but a simpleton nevertheless. If we puny humans were given eons of time and inhuman patience, we could track, produce, and reproduce every tiny detail of what the computer does — only we’d make a lot more mistakes along the way.

The point is, the computer acts as a powerful tool, maybe even like a semi-intelligent slave/apprentice in practice, but is in no way the creator, the author of the product. It simply did as it was manipulated to do, as with a paintbrush in the hand of a painter. The main difference is that the form of the manipulation is highly abstract and rigorous, and very different from the physical manipulation of tangible media that we are more familiar with in the visual arts.

4.6 The Process in Action

How does one proceed to create an image through this process? First, we have to posit an abstract model of a world; then we must map that model into a formal system — a computer program. Next we devise axioms, or input to the program. Finally, we run the program to create the output, which we will interpret as an image. This output is, like the input, in the form of a string of symbols or values (i.e., numbers; ones and zeros). Such a string is hardly an image; therefore we call this the metarepresentation of the image. This metarepresentation still requires a considerable array of sophisticated machinery and methodology of interpretation, to translate it into the intended image.


We can then further subdivide the process of image creation into two separate undertakings: creating the metarepresentation and interpreting it. This essay concerns itself primarily with the first; it is here that the bulk of the intellectual content resides. The second represents primarily an engineering problem, though there is a considerable dose of color science involved, and that is none too simple in itself. [74] In artistic terms, these two parts correspond to process and medium: the first concerns itself with the machinations of artistic creation, while the second is about producing a physical manifestation. After the first part is done, all we have generated is a still highly abstract and intangible form. It is the second step that maps this abstraction into something that can be perceived in a sensible way, and maybe even felt, held, or hung on a wall. It is interesting that the two, process and medium, are so neatly partitioned in this new way of working.

4.6.1. Creating a Metarepresentation

Again, the first phase is the creation of the metarepresentation: the theorem, the string, the sequence of digits, the one huge number, the signature on a magnetic or optical storage medium, or the image file; however you care to view it.

4.6.1.1. Creating the Formal System

We begin the process unconsciously as a young child: observing and cataloging sights, phenomena, and behaviors in Nature. Over time we build some potent and internally consistent models of Nature and the behavior and visual manifestations of phenomena there: clouds, mountains, water, light and color, to name but a few. Some training in the sciences teaches us the practice of mapping this intuition into formal, mathematical models of the behavior of natural systems, and the practice of empirical testing and verification of those models. We become familiar with many such formal models that scientists before us have devised and refined, and we learn where to find descriptions of such models — in the scientific literature. Becoming a practitioner of computer graphics, we learn the practice of mapping such models into formal systems that the computer can efficiently use to generate pictures. Note the qualification “efficiently,” as the scientific literature consists mainly of picayune and non-general models, along with some very elegant and general ones that are simply not well suited to the practice of image synthesis: Witness the wave model of light. This is a potent, elegant model of Nature that the computer just can’t practically deal with for image synthesis, as it involves too much complex calculation. What we require are models with potent descriptive capabilities, which also admit of reasonable computational implementations.

It is this formulation of a model of Nature, and the mapping of it into a computer program, that constitutes the first phase of the process. It is in the act of creating the functions, in the writing of the program, that we create the parameters and give them their functional “meaning” (the program semantics). The program, again, represents the rules of production in the formal system, which will be repeatedly applied to the axioms, or the input, in the process of deriving the theorem that is the result or image metarepresentation.

Our tools at this stage are such abstractions as shaping functions (e.g., polynomials with continuity in a desired number of derivatives), numerical integration methods, logic in the form of conditional “if/then” statements, and algebra as applied to color (more on that later). Largely by combination and recombination of a series of standard building blocks, such as fractal functions, bump maps, color maps, etc., we construct a relatively small set of functions with which we intend to generate a world, and the given image of that world.
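To make the notion of a “shaping function” concrete, here is a minimal, hypothetical sketch in C (it is not code from these notes): a cubic polynomial with zero first derivative at both endpoints, plus a small driver that remaps a value through it. The function names, the snow-line application, and the parameter values are all invented for illustration.

/* A minimal sketch of a "shaping function", assuming nothing beyond ANSI C.
 * smoothstep() is a cubic polynomial with zero first derivative at both
 * endpoints, so values remapped through it start and stop smoothly. */
#include <stdio.h>

static double smoothstep(double lo, double hi, double x)
{
    double t;
    if (x <= lo) return 0.0;
    if (x >= hi) return 1.0;
    t = (x - lo) / (hi - lo);          /* normalize to [0,1] */
    return t * t * (3.0 - 2.0 * t);    /* cubic ease-in / ease-out */
}

/* Example use: fade a hypothetical "snow line" in over a band of altitudes. */
int main(void)
{
    double altitude;
    for (altitude = 0.0; altitude <= 1.0; altitude += 0.1)
        printf("altitude %.1f -> snow coverage %.3f\n",
               altitude, smoothstep(0.4, 0.8, altitude));
    return 0;
}

Building blocks of roughly this flavor are then combined and recombined, as described above, until a relatively small set of functions defines the whole scene.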

The process of generating the formal system is so involving that, in practice, almost all of my own images have come about as verifications of some abstract idea that I was attempting to map into such a system. In this sense they represent illustration of the model being developed; I use the word “illustration” deliberately, despite the stigma that may be attached to it in the visual arts. Keep in mind that in our new paradigm, representationalism is no longer a “pedestrian concern” — it is again an unsolved problem, and we are actively working towards solving it. Thus the work cannot be dismissed as “mere illustration” or “simply representational”; these are highly honorable labels in our context. This may mark an inversion of contemporary thinking in the visual arts.

There is one inevitable and undesirable side effect of this stage of the process: parameter proliferation. In the process of developing a potent model of complex phenomena, we almost always end up introducing a large number of parameters that control the behavior of our models or functions. This means that the artist will be faced with a bewildering array of values which must be fixed, to create an image, and refined, to create an artwork. Again, we currently know of no way around this, but that may just be another symptom of the youth of our endeavor.

4.6.1.2. Generating Axioms

The next step is to formulate values for those multifarious parameters. This is not quite as bleak a prospect as it may sound, as the same intuition that drove the formulation of the model and the functions also informs the choice of values for the parameters. Thus we are not groping in complete darkness; we generally have a good idea of where to start and how to change the values to obtain the desired effects.

Nevertheless, as described before, fixing these values is a long and tedious process in practice. The goal is the creation of an input file, to be fed to the program upon its execution, which in turn results in an image. The process in practice consists of sitting in front of a terminal, working in a text editor to change the strings in the input file, running the program with the modified file, inspecting the results, going back into the editor to make changes, running again, and so forth. I generally spend the equivalent of about two to six weeks of full-time work in this loop, for each of my finished images.

But it is important to note that the procedure isn’t quite as neat and sequential as I’ve presented it to be so far. These first two stages are not really so distinct — while I am refining the parameters to the functions, I am generally simultaneously developing, extending, and refining the functions themselves. Since the ultimate theorem proved is determined by both the axioms and the rules of production, we naturally massage both the axioms and the rules more or less simultaneously as we develop that theorem into the image we desire. Furthermore, even the author of the formal system would not generally care to be confronted with the need to explicitly specify every single parametric value in the model in the input file — there are simply too many hundreds of them. For this reason, many of the axiomatic values are hard-coded as constants in the program and thus are not part of the input file. (This constitutes poor programming practice, from a computer science standpoint, but is nevertheless necessary from a practical, user’s standpoint.) Thus the separation of axioms into input and rules of production into program is not very precise. It would actually be easy to be very thorough about so partitioning the system, but in practice it is neither necessary nor desirable.

4.6.1.3. Deriving the Theorem: Epigenesis

Once we have a set of production rules and axioms — a complete formal system — we may proceed to derive a theorem, to create an image. Again, this means firing up the program and running it with the given input file. Execution time for the program varies widely for my own images, from a minimum of about a minute to a maximum of several weeks. This is at a rate of tens or hundreds of millions of operations per second7 — there are obviously many, many steps in the derivation of the theorem, far more than any human being could ever hope to perform or even follow.

Again, each of these operations (other than, perhaps, memory accesses) represents a transformation to a string: One sequence of ones and zeros is translated, deterministically, into another. The sum total transformation is that of translating the input file into an image, an image that may represent self-expression in an artwork to the person orchestrating the execution of the program.

The formal system embodies the aesthetic judgment of the artist; those judgments are implicit in its construction. Execution of the program, derivation of the theorem, corresponds to the epigenesis discussed by Verostko [69] and Waddington. [72] A successful result reflects the artist’s aesthetic judgment and may represent self-expression for the artist, derived through deterministic mechanism. Again, this juxtaposition of determinism and free will is at the heart of what makes the process interesting, from a philosophical standpoint. Determinism ultimately precludes free will, yet here it is used as the vehicle of expressing free will and the latitude for expression of individual judgment which free will grants.

4.6.1.4. The Loop of Scientific Discovery

Gregory Nielson points out [49] that this process embodies the basic loop of scientific discovery: One posits a formal model, observes the behavior of the model in comparison to Nature, then refines the model and makes further observations, proceeding in an iterative loop. Perhaps the main difference between mainstream science and this practice in computer graphics is the time required for a single iteration of the loop: For a scientist, it may be decades, even a lifetime or longer, whereas in computer graphics it is typically measured in minutes.

4.6.1.5. The Role of Intuition

Both science and art are ultimately driven by intuition. No scientist derives potent models of Nature through exhaustive search of all the possibilities provided by first principles. Neither does any mathematician originally get to the proof of a hard theorem by simple extrapolation of logical principles. Rather, they both retrofit their (originally) intuitive conjectures with a deterministic logical derivation to advance them to the state of logical conclusions. These logical derivations then become what both mathematics and the physical sciences base their claims of irrefutable legitimacy upon. And indeed, when well-formed, these arguments are (logically) irrefutable, and because of this, when they are fully comprehended they may have a truly compelling and seductive character of somehow reifying, or at least reflecting, the self-evident design of the universe.8 But if not for the role of intuition in positing the original conjecture and in formulating the logical derivation, computers would immediately leave us all in the dust, intellectually, because we could program them to do the same far faster and more accurately than we humans. Curiously, though — and to the great detriment of the field of “artificial intelligence” — it turns out that, ultimate expositions in deterministic proof notwithstanding, no mathematician or scientist can explain exactly how they originally conjectured the result, or even how they arrived at the formal derivation finally presented. No, in the creative process scientists, mathematicians, and artists all rely on intuition to the same degree and in exactly the same way. It is only in aspects of their final respective products that they so differ: Scientists’ and mathematicians’ final product is the logical edifice itself; the artist’s final product is a physical object or temporal event, the accurate apprehension of which is often highly dependent on dynamic intangibilities such as cultural context.

7 I often perform my computations in parallel on several computers, each of which is capable of performing several million operations per second.

8 Further analysis of the character of this seductive appeal leads down a more epistemological path than we choose to pursue in this essay; we refer the interested reader to Kuhn. [21]

The point is, none of us knows precisely how to get where we want to go a priori, but we all conjecture worthwhile goals and eventually intuit some path that indeed gets us to our desired ends. Such is the magic of human intelligence, and this is what continues to distinguish us from any “artificial intelligence” yet devised.

4.6.1.6. The Role of Serendipity

Finally, we must note the role of serendipity in this formal process. The fact is, we don’t always know exactly what the results of our derivations will be, and we can’t realistically expect to always be able to accurately foresee the behavior of our deterministic models (the emerging science of chaos is making that abundantly clear).

Serendipity emerges from the unforeseeable, as with random models; from the unforeseen, as with a model that has not yet been subjected to thorough intellectual scrutiny; and from errors and mistakes, as with typos and program bugs. Each of these factors has played an important role in the genesis of my own images. Plate 1, for example, did not come from a preconceived idea for a visual composition. Rather it came from the unforeseen, or a sort of bug: I had moved the program that I expected to generate a thoroughly familiar mountain range to another computer. This new computer had a different random number generator, which I had not foreseen in writing and porting my program. Thus when I ran the program I was confronted with a wholly unexpected landscape, which serendipitously harmonized with the large moon I had put in the sky, but not yet scaled down to a reasonable size. Perhaps every artist can tell similar tales, but here it is important to see that, though we work through a formal, deterministic process, we are still in an intimate dance with chance, the unknown, and the unpredictable.

4.6.2. Interpreting the Metarepresentation

As I said before, the theorem we derive is nothing more than a string of symbols in the computer’s memory. Nothing tangible or image-like about that, yet. But we do intend an image, and we have (thankfully) a preexisting machinery of interpretation for that metarepresentation. I will now outline that machinery, and sketch how that machinery is currently woefully inadequate to the creation of works of art. This, too, is a symptom of immaturity of the process and medium, and will change for the better with time.

The problem at hand is how to map the formal metarepresentation, i.e., the string or sequence of numbers, to a certain appearance in a physical manifestation. Obviously, we have enormous latitude in this transformation, as the metarepresentation has no intrinsic meaning: It is merely the deterministic result of applying a series of abstract transformations to some input symbols; there is no meaning in that other than what we (more or less arbitrarily) ascribe to it.9 Also obviously, we always had a certain interpretation in mind for the result, throughout the process.

Unfortunately, when we leave the idealized, uncertainty-free world of formal logic and its embodiment in the computer to enter the “meat” world of physical manifestations, we lose the grace and precision of Boolean digital representation and enter the fickle, imprecise, and heinously ill-defined world of things analog, physical, and continuous. The real, “analog” world is far less well behaved than the formal and deterministic world in which we have been dwelling. We face a whole new, different, and largely unrelated set of problems, problems usually without the clean, irrefutable solutions we’ve been using. This is the world of color monitors, color printers, and photographic reproduction. This is where we do well to hand our theorem over to the artisans skilled in working with such things, and beg, cajole, plead with, and threaten them to do our bidding.

Such is the real world, with which our abstract idealizations must eventually interface.

4.6.2.1. Numbers as Colors

We have a huge string, usually of hundreds of millions of symbols, or megabytes of data, which we wish to interpret as a picture. “How?” one might ask. Well, again fortunately, there are conventions for this interpretation that we can follow to make our lives easier.

The primary convention is to regard the string as a sequence of numbers, usually comprised of eight 0/1 symbols or digits each. Such an eight-bit string can, by logical and mathematical convention, encode a single number between 0 and 255, inclusive (those 256 values correspond to the 2^8 possible distinct combinations of eight ones and zeros). According to the tristimulus [22] model of human vision, we can encode all perceptible colors into combinations of exactly three primary colors.10 By more or less arbitrary convention, we may interpret our string of eight-bit numbers as representing consecutive triplets of eight-bit values for those primary colors. Thus we know what the derived string “means”: It is a sequence of color values for pixels (pixels being the atomic colored dots of which our final image is composed). These color values proceed in a canonical order, as do the pixels they are meant to represent. (There is a wide variety of standard digital image file formats which specify the actual form and sequence of data elements, such as GIF, TIFF, TARGA, etc., but they all simply represent different conventions for encodings of the same information.)
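As a purely illustrative sketch of this convention, the following C fragment treats a raw buffer of bytes as consecutive red, green, and blue triplets in scanline order. The file name, the image dimensions, and the absence of any header are assumptions made for brevity; real formats such as GIF, TIFF, or TARGA wrap the data differently.

/* Interpreting a raw byte string as RGB pixel triplets -- an illustrative
 * sketch only; real image formats (GIF, TIFF, TARGA) wrap such data in
 * headers that are not handled here. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int width = 640, height = 480;        /* assumed dimensions */
    size_t nbytes = (size_t)width * height * 3; /* 3 eight-bit primaries per pixel */
    unsigned char *data = malloc(nbytes);
    FILE *fp = fopen("image.raw", "rb");        /* hypothetical raw dump */

    if (!fp || !data || fread(data, 1, nbytes, fp) != nbytes) {
        fprintf(stderr, "could not read raw image data\n");
        return 1;
    }
    fclose(fp);

    /* Pixel (x, y) starts at byte 3 * (y * width + x): red, then green, then blue. */
    {
        int x = 10, y = 20;
        unsigned char *p = data + 3 * (y * width + x);
        printf("pixel (%d,%d): R=%u G=%u B=%u\n", x, y, p[0], p[1], p[2]);
    }
    free(data);
    return 0;
}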

This interpretation is arbitrary, but then so is any interpretation of an intrinsically meaningless formalism. By being as specific as we can be about the intended meaning or interpretation of the metarepresentation, we take on another arbitrary set of constraints that greatly simplify our task.

9 An essential element of the full philosophical definition of formal systems is that they are closed systems: They make absolutely no reference to anything outside of themselves. Thus they are perfectly divorced from any intrinsic meaning — as implied, for instance, by correspondence to the behavior of real-world phenomena. While this has bizarre consequences for the foundations of scientific reasoning, for us, in this artistic inquiry, it simply points to the fact that all “meaning,” no matter how intentional or self-evident in our results, is simply an act of creative ascription and not an inherent property of the system.

10 The tristimulus model of color vision is predicated on the long-standing observation that there appear to be three distinct types of color receptors in the human retina, which are sensitive to red, green, and blue light. Recent genetic research indicates that there may actually exist the potential for several long-wavelength (red) photoreceptors. [48] This might have interesting consequences for our best models of the range and limits of human visual perception.

4.6.2.2. The Finite Number of Possible Outcomes

As each pixel is represented by three eight-bit numbers, it can have exactly one of 2^8 × 2^8 × 2^8 = 2^24 ≈ 16 million values. If we have, say, about 10^6 (one million) pixels in the image, then the entire image can take on exactly one of (2^24)^(10^6) values. While (2^24)^(10^6) is a very large number, it is finite. Thus, at a given number of pixels (or image resolution) and a given number of possible colors, there is a large but finite number of pictures that can be represented.11 The actual number will be considerably less than (2^24)^(10^6), of course, as no human observer would be able to distinguish between the different visual representations of many slightly different metarepresentations.
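A back-of-the-envelope check of this counting argument can be written in a few lines of C; the one-megapixel image size is the nominal figure used above, and the constant 0.30103 is just log10(2), used to restate the count as a power of ten.

/* A back-of-the-envelope check of the image-counting argument, assuming
 * 24 bits per pixel and a nominal one-megapixel image. */
#include <stdio.h>

int main(void)
{
    const long pixels = 1000000L;          /* roughly 10^6 pixels, an assumption */
    const int  bits_per_pixel = 24;        /* 3 x 8-bit primaries */
    long total_bits = pixels * bits_per_pixel;

    /* The number of distinct images is 2^total_bits; far too large to hold
     * in any machine integer, so we just report the exponent. */
    printf("distinct images = 2^%ld (about 10^%.0f)\n",
           total_bits, total_bits * 0.30103);
    return 0;
}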

We can then view our elaborate logical formalisms and derivations as simply selectors that choose one out of a truly vast, but finite, set of possible outcomes. That this set exists, perfectly defined a priori, adds to the sense that this is all “found art” that exists, and always has existed, in the immutable formalism of that predefined set. The simplicity of the definition of that set — as all (2^24)^(10^6) possible combinations of ones and zeros — is part of the compelling beauty of the mathematical logic that underlies the artistic processes we are illuminating here.

4.6.2.3. Additive vs. Subtractive Color

Another factor that distinguishes working with the computer from most other visual media is that we work in an additive color space, versus the more familiar subtractive colors. The difference is that when using pigments, one is subtracting color energy out of the impinging light that illuminates the work. If there is no illumination there is no visible work, and presumably the optimum illuminant is white light, as it contains all the colors in equal proportions to start with. In the subtractive model, a red pigment absorbs the green and blue energy in a white illuminant, and reflects the red.

In computer graphics, we start with a dark (optimally, black) surface, and add in the color energy we desire. Thus a red area is simply made to emit red light, and the work is visible in complete darkness (and conversely, may be hard to see clearly in a brightly lit environment). This convention came about because the standard output device for computer graphics is a television monitor, as opposed to a canvas or sheet of paper.

The main difference between additive and subtractive color is that the primary colors of the two systems are complementary. In subtractive color (contrary to what you were taught in grade school), the primaries are magenta, yellow, and cyan. In additive color, they are red, green, and blue. Thus, for instance, in additive color we must learn to think of yellow as a sum of red and green (not immediately obvious), and brown as a dim version of a reddish orange.

11 A fascinating observation is that many of these (2^24)^(10^6) possible bit-strings will actually be illegal to possess or circulate, as some will, by exhaustion of all possible images, represent scenes such as child pornography! Of course, the chances of deriving such a string are minuscule and furthermore, as Gödel’s Incompleteness Theorem shows, not all (2^24)^(10^6) strings are necessarily derivable within any given formal system or program. As Terence McKenna points out, it will be more fascinating still when we develop formal systems so potent in their range of visual (or other) expression that the formal system itself will be perceived as a sufficient threat to the dominant paradigm that it will be declared illegal.
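The additive arithmetic described above can be made concrete with a small, purely illustrative C sketch (it is not drawn from the course software): channels are light energies that add, and dimming is a per-channel multiplication, so yellow and brown fall out of the sums exactly as stated.

/* Additive color mixing: channels are light energies that add, unlike
 * pigments, which subtract. Values are 8-bit, as in the image convention. */
#include <stdio.h>

typedef struct { int r, g, b; } Color;

static int clamp255(int v) { return v > 255 ? 255 : (v < 0 ? 0 : v); }

static Color add(Color a, Color b)
{
    Color c = { clamp255(a.r + b.r), clamp255(a.g + b.g), clamp255(a.b + b.b) };
    return c;
}

static Color scale(Color a, double k)      /* dimming: scale the energy down */
{
    Color c = { clamp255((int)(a.r * k)), clamp255((int)(a.g * k)),
                clamp255((int)(a.b * k)) };
    return c;
}

int main(void)
{
    Color red    = { 255,   0, 0 };
    Color green  = {   0, 255, 0 };
    Color orange = { 255, 128, 0 };        /* a reddish orange */

    Color yellow = add(red, green);        /* red + green light = yellow */
    Color brown  = scale(orange, 0.35);    /* brown: a dim reddish orange */

    printf("yellow = (%d, %d, %d)\n", yellow.r, yellow.g, yellow.b);
    printf("brown  = (%d, %d, %d)\n", brown.r, brown.g, brown.b);
    return 0;
}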

We also find that images developed on the luminous monitor may not be nearly so striking when mapped to a subtractive medium. Plate 1 is one of the few examples, in my own experience, that looks fairly good in both media — though there is a magical luminous quality on the monitor which is missing in a reflective print.

There is a hard-copy solution to this: back-lit transparencies. Unfortunately, these are quite expensive to produce: The light box alone can cost several hundred dollars (and be ugly to boot) for a good-sized print. Back-lit transparencies do have one significant advantage over reflective prints, however: The computer image’s inherent lack of surface detail, as in the impasto of a painting, is obscured, as one’s attention is simply not naturally drawn to the physical surface in a luminous display.

4.6.2.4. Archival Reproduction

Color reproduction from digital data is a difficult problem. It seems unlikely that a television monitor would be accepted as an artwork by collectors or the art-consuming public. Monitors are large, heavy, low-resolution, and, to face facts squarely, they look like big TVs and not like something to hang on your wall. The market is, and will remain for some time to come, for (thin) two-dimensional images-on-a-surface, like a painting or a print; not for four-inch-deep, ungainly light boxes with dangling power cords, and certainly not for big, ugly, expensive, high-quality video monitors.

Thus we face the problem of making high-quality reflective prints of the artworks, which both the artist and the collector can be happy with. Achieving the artist’s satisfaction may require a large investment of time and money on his or her part, to find a printing house to produce such objects. The artist can expect to spend several thousand dollars on this, and what is produced is not generally a one-of-a-kind object, but a series of prints. This affects the market for the work; it is not like a painting, but more like a lithograph or photographic print series.

The second criterion, making the collector happy, complicates the reproduction problem further. Serious collectors require archival artworks — pieces that can be expected to last 100 years without significant fading or other such degradation. This rules out color photographic prints, none of which are considered archival. (Gloss Cibachrome prints are considered to be semi-archival, i.e., they may last about 50 years; no backlit transparency even comes close, due to the high, UV-rich light levels in a light box.) What this leaves us with, at the time of this writing, is four-color offset printing. Such prints can be made on acid-free paper, or at very high (400 dot-per-inch) line screen resolution using carbon pigments on a polyester substrate. The former is the equivalent of a quality lithographic print; the latter is superarchival, with a life expectancy of about 500 years, but is very expensive and constrained to modest physical dimensions.

These problems mean, in practice, that color reproduction is largely an unsolved problem. It is not realistic to expect the artist to be able to sink several thousand dollars into each finished work, as artists are notoriously indigent. Thus I, for one, consider myself to be, so far, an artist without a product.

4.6.2.5. What is the Product?

Given that there were a product, we face the well-known question in computer art: What exactly is the “product”? Is it an object, such as a color print? Is it the metarepresentation, the image data? Is it the formal system? Or is it the formal system plus its machinery of interpretation, i.e., the program, the input, and the computer that runs it?

Of all of these possibilities, the only reasonable one is the first: The product is some tangible hard-copy object or print. The metarepresentation is not particularly valuable, as it is exactly reproducible, due to the determinism of the process that creates it, and because it is so difficult to translate into an art object. The product cannot be the formal system — I have years invested in the program that creates all of my images; I would not sell exclusive rights for its use for any price. And even if I did, I am capable, in principle at least, of recreating an exactly equivalent formal system, and indeed upon a sale of this sort I would immediately have to do my best to do exactly that, just to be able to get back to work again. That would no doubt lead to disgruntlement among the collectors of my work. Finally, it is absurd to propose the last option, that the work consists of both the program and the computers that run it: Even if I were to give the software away for free, the hardware cost could reach into six figures, and that hardware would have no special value whatsoever to the collector, as the computers I use are always off-the-shelf units, exactly replaceable by the manufacturer. That is, the hardware has absolutely no uniqueness associated with the image — it would be like including the paints, paintbrushes, and easel in the sale of a painting; they are of no use to the collector, are generally quite replaceable to the artist, and are of no direct relevance to the finished piece.

4.6.2.6. What Constitutes the Original?

A final image is typically rendered at a very high resolution: perhaps three hundred dots per inch, at a final print size of two to four feet on a side. The television monitor on which the images are developed can display at most about two thousand pixels (dots) on the horizontal axis, and typically closer to one thousand. Therefore the image that is sent to the printing device, regardless of what technology or medium that device is based on, is usually of much higher resolution than can ever be previewed — the first preview is of what comes off the press, so to speak. Therefore, given that the product is the final physical print, that print is also the original in a very real and significant sense, as there never existed any visible, full-resolution representation or even metarepresentation prior to the final print. This may have consequences for the value of the print, in the eyes of collectors.

4.7 Discussion

Let us now briefly discuss the implications of this new process.

4.7.1. What Role Intent and Understanding?

As I have pointed out, the computer can quite readily be used as a novel canvas, paint, and paintbrush, much as with prior two-dimensional media for the visual arts. Used that way, the resulting works will be essentially “of the hand,” and thus part of the existing continuum of two-dimensional media.

When the artwork is algorithmic, issuing directly and unmodified from a formal description, it becomes more interesting. When the algorithm is deterministic, it becomes more interesting still (after all, artists such as Sol LeWitt have produced non-deterministic algorithmic artworks for some time now).

But I maintain that deterministic algorithmic artwork is only truly significant when the artist is also the author of the formal system, and can claim to understand it thoroughly and to have intended (modulo serendipity) to create the result produced. Thus artworks created by someone else using my software would lack conceptual significance, even if they were more aesthetically sophisticated. If Picasso had invented a “Picasso engine,” and others used it to create Picasso-like works, these works simply would not be quite the same as an original Picasso, after all — even if others were able to “improve upon” Picasso.

The artist can only really claim to have accomplished self-expression through formal logic when he or she authored, for that specific purpose, the formal system through which the expression is obtained.

4.7.2. What of Turnkey Systems?

What, then, of turnkey software for creating computer art? There are many powerful programs becoming available that unlock the substantial potential of the digital medium, and there will continue to be ever more, of greater sophistication, power, and novelty. Programs such as Adobe Illustrator and Photoshop are revolutionizing the way many artists, and perhaps most designers, work.

There is, and will always be, a role for such systems. Indeed, the vast majority of practicing “computer artists” will always use such “canned,” preexisting software. It would be absurd to propose that all, or even many, artists pay the substantial dues required to get up to speed in this peculiar process I am describing. No, this process will always exist and be practiced on the fringes — there will never be more than a handful of people who are qualified to use this process, requiring as it does an extensive background in art, science, mathematics, logic, and computers.

Let me use an analogy: there have been great drivers for almost as long as there have been cars. But these drivers are rarely the builders of the cars they drive. Indeed, no single person can expect to build an automobile of any sort, much less a race car, without the help of many others (no more can I expect to build the computers I use, or to have invented every technique I apply). But a good driver, whose vehicle is largely the result of their own creative vision, would always be a special competitor, though they might never turn in the fastest time.

There will always be room for the virtuoso users of tools provided by others, and such users can always be expected to predominate in the field of performers. The greatest violin maker is not the greatest violinist. Likewise, there will always arise, here and there, now and again, visionaries with “the madness of the poet” who will create their own tools and do with them what might never have occurred to others. And there is, at least, always some significance to being the first to have done something of interest and of significant difficulty. This process I describe is probably best characterized as such an undertaking.

4.7.3. The Role of Traditional Media

New as it may be, this process certainly does not stand outside precedent. As the result is a two-dimensional work, all the rules and discipline of two-dimensional art apply, most saliently those of visual composition and color usage. As the modeling is done in three dimensions, rules of form and lighting also apply. When animation is undertaken, the rules of cinematography will come into play. When we produce a tangible product, any sort of physical manifestation, all the rules and practice of the medium in which that product is executed will apply. We cannot presume to create a new art form in a vacuum; we will need to borrow and appropriate everything we can use from what has come before.


We may, however, need to invent a viable new medium in which to represent the product. It may be that computer art as a whole will not truly come into its own until some essentially new display technology, such as large, bright, flat-panel color displays or laser projectors, comes into common usage. Immersive VR technology, for instance, holds considerable promise as the unique, new medium for the apotheosis of computer imagery.

4.7.4. Mastering the Process and Medium

As painting has been mastered, so must this new process be. Painting, photography, and sculpture did not reach maturity overnight; neither can we expect computer art to do so. The fact is, the computer artwork has not yet been produced which could stand a side-by-side comparison with, say, a great van Gogh painting. My own best image would pale, stood beside a Bierstadt. The austere beauty of the underlying formalism denies computer-generated imagery access to the fascinating, continuous behavior of such a medium as oil paint — there is simply nowhere near the amount of information in a standard digital image file as there is in a well-executed painting. The range of scales over which a good painting is interesting is also generally much larger than that for a computer-generated image, fractals notwithstanding. There are at least two scales at which a good painting is interesting: the large scale, where visual composition predominates, and the small scale, where surface texture, impasto, juxtaposition of colors in a stroke (as with Seurat), etc., provide another visual richness. We will need to include such complexity, or simply find other grounds for legitimacy with as great an aesthetic significance, before we can call our works truly fine art.

One interesting and useful distinction was drawn by Ansel Adams, who posited the analogy of the negative as the score, and the print as the performance. In this analogy, we currently have the capability in this new process to produce the negative, or the score, almost literally. But we currently lack the means of translating this score into an impressive performance. That is the challenge of creating the final artwork.

One wonderful distinction of the process I’ve presented is its simultaneous use of both analytic and intuitive thinking. Sitting at the computer creating an image, one must rapidly switch back and forth between the “right brain” mental faculties required to assess aesthetic issues and the “left brain” analytic processes required to deal with the logic-based machinery of production. This is certainly an unusual way to go about producing a visual artwork; its closest analogue may be in musical composition.

4.8 Conclusions

This new process may mark a truly novel event in the history of creative process in the fine arts. Provided, of course, that the artist intends, understands, and can in some valid sense take responsibility for, the formalisms behind the product. I am claiming that a number, along with the appropriate (and well-defined) interpretation machinery, can represent artistic self-expression, that this number can be derived deterministically, and that the method of this derivation adds conceptual significance to the result.

Be careful to note that I am not claiming that the machine is self-expressing. A computer has no more aesthetic ability than any inanimate object, and indeed, it can be more refractory than most. The expression is the human artist’s; the computer is the tool through which the artist makes his or her statement; it is at best an idiot-savant assistant.

Biographically, I wish to add that it is fortunate that landscapes are my personal predilection for self-expression. In painting and photography, I have always preferred landscapes. When I entered the field of computer graphics research, landscape modeling and rendering were in their infancy; it has been my pleasure to substantially improve the state of the art in such through the course of my doctoral research [39] at Yale University under Benoit Mandelbrot, the father of fractal geometry. In a remarkable bit of serendipity, I appear to have been the right person in the right place at the right time. There was a narrow temporal opportunity, which I happened to meet precisely; had I shown up a few months or years earlier or later, the opportunity would not have existed.

4.8.1. Constraints and Opportunities

Let me quickly recap the significant constraints and opportunities of this new process, as I see them.

4.8.1.1. Working in Three Dimensions

While sculptors and stage designers have worked in three dimensions for millennia, the peculiar way in which we do so in this new process is significantly different. We differ at least in scale: we are creating landscapes, entire planets, and even, potentially, a finite synthetic universe. The challenges are different, and appropriate practice will therefore undoubtedly be different. Thus we will need to invent and refine some new methodologies. While landscape rendering has as rich a precedent as any other area of visual art, prior landscape artists were not generally responsible for creating their entire scene, in full three dimensions. Soon, when interactive exploration of our scenes becomes possible, we may also find ourselves confronted with responsibility for guiding, through whatever means we find artful, the explorations of visitors to our worlds.

4.8.1.2. Algebraic Color

While we cannot and should not expect to redefine the rules of color usage, neither can we manipulate color in the ways to which visual artists are accustomed. First, we work in the unfamiliar additive color space, where heuristics for mixing colored pigments are either inverted or simply invalidated. Second, there is no physical system in which color interacts — it is all simply a model. Nothing happens at all, except for what we explicitly specify. We may seek to have those specifications mimic as closely as possible the behavior of the real world (a very hard thing to do, in general) or we may bend or break such laws in our system. In any case, the specification and interaction of colors on surfaces is couched in the mathematical language of algebra — certainly an unfamiliar way of dealing with color for the average studio artist. Colors are all numbers; they mix by the arithmetic operations of addition, subtraction, and multiplication, and they are often modulated by exponential operators (such as gamma correction).
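What “algebra on colors” looks like in practice can be sketched as follows; the normalized 0-to-1 channels, the particular surface and light values, and the display gamma of 2.2 are assumptions of mine, not values taken from these notes. A surface color multiplies the light color channel by channel, and the result is pushed through an exponential gamma correction before being quantized to eight bits.

/* Algebraic color: channels are numbers that are added, multiplied, and
 * raised to powers. Normalized [0,1] channels and a display gamma of 2.2
 * are assumptions for this sketch. */
#include <stdio.h>
#include <math.h>

typedef struct { double r, g, b; } Color;

static Color modulate(Color light, Color surface)   /* channel-wise product */
{
    Color c = { light.r * surface.r, light.g * surface.g, light.b * surface.b };
    return c;
}

static Color gamma_correct(Color c, double gamma)   /* exponential operator */
{
    Color out = { pow(c.r, 1.0 / gamma), pow(c.g, 1.0 / gamma),
                  pow(c.b, 1.0 / gamma) };
    return out;
}

int main(void)
{
    Color sunlight = { 1.0, 0.95, 0.85 };   /* warm white light, an assumption */
    Color rock     = { 0.45, 0.38, 0.30 };  /* brownish surface reflectance */

    Color shaded  = modulate(sunlight, rock);
    Color display = gamma_correct(shaded, 2.2);

    printf("linear   result: %.3f %.3f %.3f\n", shaded.r, shaded.g, shaded.b);
    printf("8-bit on screen: %d %d %d\n",
           (int)(display.r * 255.0 + 0.5),
           (int)(display.g * 255.0 + 0.5),
           (int)(display.b * 255.0 + 0.5));
    return 0;
}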

Color theory for computer graphics is often elegant, and is quite internally consistent. But it is not something familiar to the average artist.

4.8.1.3. Proceduralism

Proceduralism, the practice of encoding behaviors in formally defined, deterministic functions, is at the very heart of this process. Strict adherence to this practice is whence the intellectual significance we claim for the process emanates. We can gain a wonderful elegance in this approach, as with the fractal models that can so succinctly describe manifestations of potentially unbounded visual complexity. It is a significant challenge to maintain the discipline of using only such relatively simple logical constructs for visual expression, and it is a significant constraint to work only with global parametric control.


There can come great benefits from such discipline, though. Imagine a procedurally defined planet, or array of planets, which possesses a wealth of detail everywhere, detail that the artist did not explicitly and laboriously specify, but which issues directly and automatically from the functions from which the model is composed. Plates 2 and 3 illustrate an example of such a model: Plate 2 shows an entire planet, and Plate 3 is a landscape that is actually physically situated on the face of that planet! The animation “Spirit of Gaea,” currently in production, will serve as a proof of this concept. In this animation, the point of view will move in from deep space, up to the planet, down through its atmosphere to its landscape, up very close to the terrain (the equivalent of a few feet away), then straight back out into deep space. This will all be accomplished with a single procedural model, and while rendering will be far from real-time, it is only a matter of engineering to get to where we can move around the planet (and its universe) interactively, at will. That will be an unprecedented development.
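As a hedged illustration of the kind of function that supplies such unspecified detail, here is a minimal fractional Brownian motion (fBm) sketch in C. The hashed value-noise basis is a stand-in of my own choosing rather than the noise function presented elsewhere in these notes; the point is the octave loop, whose lacunarity and roughness exponent H give the model detail at every scale.

/* A minimal fBm sketch: a sum of rescaled copies of a basis "noise" function.
 * The hashed value-noise basis below is a stand-in chosen for brevity; the
 * structure of fbm(), with its octaves, lacunarity, and spectral exponent H,
 * is what gives procedural models detail at every scale. */
#include <stdio.h>
#include <math.h>

static double hash_noise(int x, int y)        /* pseudo-random value in (-1,1] */
{
    unsigned int n = (unsigned int)(x + y * 57);
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return 1.0 - (double)(n & 0x7fffffffu) / 1073741824.0;
}

static double lerp(double a, double b, double t) { return a + t * (b - a); }

static double value_noise(double x, double y) /* bilinear interpolation of lattice */
{
    int ix = (int)floor(x), iy = (int)floor(y);
    double fx = x - ix, fy = y - iy;
    double v00 = hash_noise(ix, iy),     v10 = hash_noise(ix + 1, iy);
    double v01 = hash_noise(ix, iy + 1), v11 = hash_noise(ix + 1, iy + 1);
    return lerp(lerp(v00, v10, fx), lerp(v01, v11, fx), fy);
}

static double fbm(double x, double y, int octaves, double lacunarity, double H)
{
    double value = 0.0, freq = 1.0, amp = 1.0;
    int i;
    for (i = 0; i < octaves; i++) {
        value += amp * value_noise(x * freq, y * freq);
        freq  *= lacunarity;           /* higher frequency each octave   */
        amp   *= pow(lacunarity, -H);  /* lower amplitude: 1/f^H scaling */
    }
    return value;
}

int main(void)
{
    double x;
    for (x = 0.0; x < 4.0; x += 0.5)   /* a short transect of "terrain" heights */
        printf("height(%.1f, 0.7) = %+.4f\n", x, fbm(x, 0.7, 8, 2.0, 0.9));
    return 0;
}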

4.8.1.4. Ambiguities in Logic and Art

Our use of formal logic for self-expression entrains with it the precision and lack of ambiguity of mathematics and science. Lack of ambiguity is not familiar, or even desirable, in the arts. But such precision in the creative process does not in any way preclude the kind of deliberate ambiguity that lends depth and interest to art. Rather, it stands beneath, as an unusually solid foundation for artistic creation. Its use allows scientific models to be mapped into creative opportunities — something that I personally find an exciting undertaking, having always been fascinated with the beauty of such models in their own right. Finally, our basis in formal logic entrains with it the intellectual depth of the philosophical discipline of logic and the mathematical models of the sciences. These are deep conceptual roots indeed, which we have only begun to tap.

4.8.2. Some Parting Questions

I will conclude with some questions, questions that do not admit of an immediate answer.

• How do we obtain self-expression through formal logic?

I claim to have done so, but I can no more tell you how than the average painter can tell you precisely how they painted a particular painting. I hope that, with my own artworks, I have shown it to be possible, and I fervently hope to be surpassed by future practitioners.

• How do we know when we have?

If my claim is valid, it should be verifiable. There are only two ways to do this: Ask the artist, and ask yourselves, the audience. Success or failure will be found to be a fickle thing.

• So what’s new here?

I have attempted to illuminate that in this essay; nevertheless I feel very incomplete about it. My own analysis of this event is preliminary; I may spend the rest of my days fleshing it out. It seems to me that, as is typical in new areas of intellectual inquiry, the ideas formulated and presented to date are perforce preliminary and vague. Certainly my own arguments could benefit from a deeper foundation in the history of art. But I hope that the time is ripe to begin to expound them, that they might be honed or discredited through the dialectic.

• Is it important?


Time, of course, will tell, at least in the eyes of our culture. Obviously, I think so. But then, I am primarily trained as a scientist rather than as an artist, and I am certainly not an authority on art history. Nevertheless, I do know enough to recognize, and to stake my professional reputation on the claim, that something big is going on here. Unfortunately, the requirements for a full appreciation are backgrounds in mathematical logic, natural sciences, and computer science, as well as aesthetic training and sensitivity. Thus the audience who can apprehend, and perhaps be impressed by, these arguments is necessarily small.

• Will it fly?

Again, time will tell. If I continue to suffer occasional visual inspiration, I may help bring it to maturity as an art form. I am certainly counting on others to help, and hopefully to soon create works that will make my own appear crude and preliminary. Fortunately I am not alone in my views or my efforts. To quote Judson Rosebush [54]:

In practice, proceduralist computer art is among the most contemporary products of our culture, and will increasingly be appreciated as a major art movement by this and future generations.

If Mr. Rosebush and I are correct, we may be witnessing one of the truly definitive events in the history of Art.


References

1. Anderson, M.G., Modelling Geomorphological Systems. ed. M.G. Anderson. 1988, New York: John Wiley & Sons.

2. Barr, A.H., Teleological modeling, in Making them move: mechanics, control, and animation of articulated figures, N.I. Badler, B.A. Barsky, and D. Zeltzer, Editors. 1991, Morgan Kaufmann: p. 315-321.

3. Bolton, E., personal communication, 1991, Yale University.

4. Bouville, C., Bounding Ellipsoids for Ray-Fractal Intersection. Computer Graphics, July 1985. 19(3): p. 45-52.

5. Carriero, N. and D. Gelernter, Linda in Context. Communications of the ACM, April, 1989. 32(4): p. 444-458.

6. Constantin, P., I. Procaccia, and K.R. Sreenivasan, Isoscalar Contours in Turbulence. Phys. Rev. Lett., 1991. 67: p. 1739.

7. Cook, R.L., Shade Trees. Computer Graphics, July, 1984. 18(3): p. 223-230.

8. Dawkins, R., The Blind Watchmaker. 1987, New York: W. W. Norton & Co.

9. Ebert, D.S., ed. Textures and Modeling: A Procedural Approach. 1994, Academic Press: Cambridge, MA.

10. Fournier, A., D. Fussell, and L. Carpenter, Computer Rendering of Stochastic Models. Communications of the ACM, 1982. 25: p. 371-384.

11. Gardner, G.Y. SIGGRAPH course notes: Functional Modelling. in SIGGRAPH '88. 1988. Atlanta.

12. Gardner, G.Y., Visual Simulation of Clouds. Computer Graphics, July, 1985. 19(3): p. 297-303.

13. Hanrahan, P., personal communication, 1991, Princeton University.

14. Hart, J.C. and T.A. DeFanti, Efficient Antialiased Rendering of 3-D Linear Fractals. Computer Graphics, July 1991. 25(4): p. 91-100.

15. Hentschel, H.G.E. and I. Procaccia, Phys. Rev., 1984. A29: p. 1461.

16. Hofstadter, D.R., Gödel, Escher, Bach: an Eternal Golden Braid. 1979, New York: Vintage Books.

17. Imhof, E., Cartographic Relief Presentation. 1982, New York: Walter de Gruyter.


18. Kadanoff, L.P., Physics Today, February, 1986. 39: p. 6.

19. Kelley, A.D., M.C. Malin, and G.M. Nielson, Terrain Simulation Using a Model of Stream Erosion. Computer Graphics, August, 1988. 22(4): p. 263-268.

20. Kelley, K.W., The Home Planet. 1988, New York: Addison Wesley.

21. Kuhn, T.S., The Structure of Scientific Revolutions. 1970, Chicago: University of Chicago Press.

22. Land, E.H., Six Eyes of Man. Three-Dimensional Imaging, SPIE, 1977. 120: p. 43-50.

23. Lewis, J.P., Generalized Stochastic Subdivision. ACM Transactions on Graphics, July, 1987. 6(3): p. 167-190.

24. Lewis, J.P., Algorithms for Solid Noise Synthesis. Computer Graphics, July, 1989. 23(3): p. 263-270.

25. Lovejoy, S. and B.B. Mandelbrot, Fractal Properties of Rain, and a Fractal Model. Tellus, 1985. 37A: p. 209-232.

26. Mandelbrot, B.B., Fractals: Form, Chance, and Dimension. 1977, San Francisco: W. H. Freeman and Co.

27. Mandelbrot, B.B., Comment on Computer Rendering of Stochastic Models. Communications of the ACM, 1982. 25(8): p. 581-583.

28. Mandelbrot, B.B., The Fractal Geometry of Nature. 1982, New York: W. H. Freeman and Co.

29. Mandelbrot, B.B., Fractal landscapes without creases and with rivers, in The Science of Fractal Images, H.O. Peitgen and D. Saupe, Editors. 1988, Springer-Verlag: New York. p. 243-260.

30. Mandelbrot, B.B., personal communication, 1988.

31. Mandelbrot, B.B. and J.R. Wallis. Some Long-Run Properties of the Geophysical Records. in Water Resources Research 5. 1969.

32. Mastin, G.A., P.A. Watterberg, and J.F. Mareda, Fourier Synthesis of Ocean Waves. IEEE Computer Graphics and Applications, March, 1987. 7(3): p. 16-23.

33. Miller, G.S.P., The Definition and Rendering of Terrain Maps. Computer Graphics, 1986. 20(4): p. 39-48.

34. Miller, G.S.P., personal communication, 1988.

35. Moravec, H.P., 3D Graphics and Wave Theory. Computer Graphics, August 1981. 15(3): p. 289-296.

36. Musgrave, F.K., Some Tips for Making Color Hardcopy, in Graphics Gems II, J. Arvo, Editor. 1991, Academic Press: Boston.


37. Musgrave, F.K., A Panoramic Virtual Screen for Ray Tracing, in Graphics Gems III, D.B. Kirk, Editor. 1992, Academic Press: Boston.

38. Musgrave, F.K. Procedural Landscapes and Planets. in SIGGRAPH '93 course Procedural Modeling and Rendering Techniques. 1993. Anaheim, California: ACM SIGGRAPH.

39. Musgrave, F.K., Methods for Realistic Landscape Imaging. 1994, Ann Arbor, Michigan: UMI Dissertation Services (Order Number 9415872).

40. Musgrave, F.K. Uses of Fractional Brownian Motion in Modelling Nature. in Fractal Modelling in 3D Computer Graphics and Imaging - SIGGRAPH '91 Course 14. July, 1991.

41. Musgrave, F.K. Procedural Models of Natural Phenomena. in Notes for SIGGRAPH '92 Course 23: Procedural Modelling and Rendering Techniques. July, 1992.

42. Musgrave, F.K., Methods for Realistic Landscape Imaging. May, 1993, Yale University Dept. of Computer Science.

43. Musgrave, F.K., About the Cover: Natura ex Machina II. IEEE Computer Graphics and Applications, November, 1990. 10(6): p. 5-7.

44. Musgrave, F.K., A Note on Ray Tracing Mirages. IEEE Computer Graphics and Applications, November, 1990. 10(6): p. 10-12.

45. Musgrave, F.K., C.E. Kolb, and R.S. Mace, The Synthesis and Rendering of Eroded Fractal Terrains. Computer Graphics, July, 1989. 23(3): p. 41-50.

46. Musgrave, F.K. and B.B. Mandelbrot, Natura ex Machina (About the Cover). IEEE Computer Graphics and Applications, January, 1989. 9(1): p. 4-7.

47. Musgrave, F.K. and B.B. Mandelbrot, The Art of Fractal Landscapes. IBM Journal of Research and Development, September, 1991. 35(4): p. 535-539.

48. Neitz, M. and J. Neitz, Numbers and Ratios of Visual Pigment Genes for Normal Red-Green Color Vision. Science, 1995. 267(February 17): p. 1013-1016.

49. Nielson, G.M., Visualization in Scientific and Engineering Computation. IEEE Computer, September 1991. 24(9): p. 58-66.

50. Peachey, D. Procedural Textures. in Notes for SIGGRAPH '92 Course 23: Procedural Modelling and Rendering Techniques. 1992.

51. Peachey, D.R., Solid Texturing of Complex Surfaces. Computer Graphics, 1985. 19(3): p. 279-286.

52. Perlin, K., An Image Synthesizer. Computer Graphics, July, 1985. 19(3): p. 287-296.

53. Perlin, K. and E.M. Hoffert, Hypertexture. Computer Graphics, July, 1989. 23(3): p. 253-262.

Page 237: SIGGRAPH ’96 COURSE 10 NOTES Procedural …bobl/cpts548/u05_proc...SIGGRAPH ’96 COURSE 10 NOTES Procedural Modeling and Animation Techniques Monday August 5, 1996 Organizer David

6-71

54. Rosebush, J., The Proceduralist Manifesto. Leonardo, 1989. (supplemental issue):p. 55-56.

55. Ruelle, D., Proc. R. Soc. London Ser., 1990. A 427: p. 241.

56. Ruelle, D., ed. Chance and Chaos. 1991, Princeton University Press: Princeton, NJ.178.

57. Russell, B. and A.N. Whitehead, Principia Mathematica. 1962, CambridgeUniversity Press.

58. Saupe, D., Algorithms for random fractals, in The Science of Fractal Images, H.O.Peitgen and D. Saupe, Editors. 1988, Springer-Verlag: New York. p. 71-136.

59. Saupe, D., Point Evaluation of Multi-Variable Random Fractals, in Visualisierung inMathematik und Naturissenschaft - Bremer Computergraphik Tage 1988, H. Juergenand D. Saupe, Editors. 1989, Springer-Verlag: Heidelberg.

60. Schumm, S.A., Experimental Fluvial Geomorphology. 1987, New York: JohnWiley & Sons.

61. Shaw, T.M., Phys. Rev. Lett., 1987. 59: p. 1671.

62. Sims, K. Interactive Evolution of Dynamical Systems. in Proceedings of theEuropean Conference on Artificial Life. December 1991. Paris: MIT Press.

63. Sims, K., Artificial Evolution for Computer Graphics. Computer Graphics, July1991. 25(4): p. 319-328.

64. Smith, A.R., Plants, Fractals, and Formal Languages. Computer Graphics, July,1984. 18(3): p. 1-10.

65. Sommerer, J.C. and E. Ott, Particles Floating on a Moving Fluid: A DynamicallyComprehensible Physical Fractal. Science, 1993. 259(15 January): p. 335-339.

66. Sreenivasan, K.R., R. Ramshankar, and C. Menveau, Proc. R. Soc. London Ser.,1989. A 421: p. 79.

67. Stam, J. and E. Fiume. A Multiple-Scale Stochastic Modelling Primitive. inProceedings of Graphics Interface '91. June, 1991. Morgan-Kaufmann.

68. Todd, S. and W. Latham, Evolutionary Art and Computers. 1992, London:Academic Press.

69. Verostko, R., Epigenetic Painting: Software as Genotype. Leonardo, 1990. 23(1): p.17-23.

70. Voss, R.F., Random Fractal Forgeries, in Fundamental Algorithms for ComputerGraphics, R.A. Earnshaw, Editor. 1985, Springer-Verlag: Berlin.

71. Voss, R.F., Fractals in nature: from characerization to simulation, in The Science ofFractal Images, H.O. Peitgen and D. Saupe, Editors. 1988, Springer-Verlag: NewYork. p. 21-70.

Page 238: SIGGRAPH ’96 COURSE 10 NOTES Procedural …bobl/cpts548/u05_proc...SIGGRAPH ’96 COURSE 10 NOTES Procedural Modeling and Animation Techniques Monday August 5, 1996 Organizer David

6-72

72. Waddington, C.H., The Strategy of Genes. 1957, London: George Allen andUnwin.

73. Wijk, J.J.v., Spot Noise: Texture Synthesis for Data Visualization. ComputerGraphics, 1991. 25(4): p. 309-318.

74. Wyszecki, G. and W.S. Stiles, Color Science: Concepts and Methods, QuantitativeMethods and Formulas. 1967, New York: Wiley-Interscience.

75. Yaeger, L., C. Upson, and R. Meyers, Combining Physical and Visual Simulation -Creation of the Planet Jupiter for the Film "2010". Computer Graphics, August,1986. 20(4): p. 85-93.


Procedural Modeling with Artificial Evolution

Brief Talk Outline

Karl Sims

1 Introduction

Procedural modeling: procedural descriptions --> complex results.

How to efficiently explore the hyper-space of a procedural grammar?
How to avoid having to adjust each individual parameter by hand?
How to avoid having to understand the procedural model?
How to enable the computer to "learn" what you like?
How to enable the computer to surpass what you could design?

Procedural procedural modeling: some meta process --> procedural descriptions --> complex results.

1.1 Simulating Darwinian Evolution

A biological analogy:

Genotype: the procedural information - a coded description for generating something.
Phenotype: the result - an entity generated from a genotype.
Fitness: the survivability of the phenotype - some measure of "success".

A simple evolutionary cycle of selection and reproduction can be simulated as follows:

a. Generate a set of genotypes to initialize the population. (These might be generated from scratch at random, or seeded with previously existing genotypes.)

b. Grow a phenotype from each new genotype.

c. Determine a fitness for each phenotype. A simple way to determine fitness is to interactively inspect each phenotype and choose one or two of them to survive (fitness = 1.0), while the others will be removed (fitness = 0.0).

d. Remove the individuals with low fitness from the population.

e. Reproduce the surviving individuals (those with the highest fitness) by replicating their genotypes. When replicating, introduce random mutations into the new genotypes.


Optionally make the new genotypes by mating -- combine the genetic material from more than one parent.

f. Repeat the cycle for each new generation -- go back to step b.

This is the same basic process as a genetic algorithm -- often used to perform optimization.
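To make steps a through f concrete, here is a minimal sketch in C (not from the original notes) using fixed-length parameter-vector genotypes. The interactive fitness step is reduced to a placeholder, and names such as grow_and_rate and the population and mutation constants are illustrative assumptions.

/* Minimal sketch of the selection/reproduction cycle (steps a-f).
   Genotypes are fixed-length parameter vectors; the fitness step is
   a stub standing in for interactive user selection. */
#include <stdio.h>
#include <stdlib.h>

#define POP_SIZE      16
#define N_PARAMS       8
#define N_GENERATIONS 10
#define MUTATION_RATE  0.2        /* chance of perturbing each parameter */

typedef struct { double p[N_PARAMS]; } Genotype;

static double frand(void) { return (double)rand() / RAND_MAX; }

/* step a: a random initial genotype */
static void init_genotype(Genotype *g) {
    for (int i = 0; i < N_PARAMS; i++)
        g->p[i] = frand();
}

/* steps b and c: "grow" the phenotype and rate it.  Here the phenotype
   is just printed and given a random score; a real system would render
   it and let the user pick the survivors. */
static double grow_and_rate(const Genotype *g, int index) {
    printf("phenotype %2d:", index);
    for (int i = 0; i < N_PARAMS; i++)
        printf(" %.2f", g->p[i]);
    printf("\n");
    return frand();               /* placeholder fitness */
}

/* step e: copy a surviving parent and apply random mutations */
static Genotype mutate_copy(const Genotype *parent) {
    Genotype child = *parent;
    for (int i = 0; i < N_PARAMS; i++)
        if (frand() < MUTATION_RATE)
            child.p[i] += 0.1 * (frand() - 0.5);
    return child;
}

int main(void) {
    Genotype pop[POP_SIZE];
    double fitness[POP_SIZE];

    for (int i = 0; i < POP_SIZE; i++)          /* step a */
        init_genotype(&pop[i]);

    for (int gen = 0; gen < N_GENERATIONS; gen++) {   /* step f: repeat */
        int best = 0;
        for (int i = 0; i < POP_SIZE; i++) {          /* steps b and c */
            fitness[i] = grow_and_rate(&pop[i], i);
            if (fitness[i] > fitness[best]) best = i;
        }
        /* steps d and e: keep only the best individual and refill the
           population with mutated copies of it */
        Genotype survivor = pop[best];
        pop[0] = survivor;
        for (int i = 1; i < POP_SIZE; i++)
            pop[i] = mutate_copy(&survivor);
    }
    return 0;
}

A real interactive system would replace the placeholder score with the user's choice of survivors, as discussed in 3.1 below.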

2 Forms of Genetic Grammars

2.1 Biological genotypes

DNA --> enzymes --> organism
Inspiring, but not practical to simulate.
Simplify, and modify for reasonable computer simulation...

2.2 Parameter sets

The phenotypes are created by an algorithm that varies the results depending on a set of parameters. For N parameters, an N-dimensional "genetic space" is defined.

Example: procedurally grown plants.
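As an illustration (the parameter names here are invented, not taken from the notes), such a genotype might be nothing more than a small record of numbers; each genotype is then a point in an N-dimensional genetic space, and mutation simply perturbs its coordinates.

/* Sketch: a parameter-set genotype for a hypothetical procedural plant.
   With N parameters, each genotype is a point in an N-dimensional
   "genetic space"; mutation nudges the point, and mating mixes
   coordinates from two parents. */
typedef struct {
    double branch_angle;   /* degrees between parent and child branches */
    double length_ratio;   /* child branch length / parent branch length */
    double taper;          /* how quickly branches thin out */
    int    depth;          /* levels of recursive branching */
    double leaf_density;   /* leaves per unit of branch length */
} PlantGenotype;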

2.3 Lisp Expression Hierarchies

- Genotypes can vary in length, and a "genetic hyper-space" is defined.
- Functions can be included in genotypes, not just data.

Genotypes of this form are often appropriate for representing functions.

Examples:

2D textures: color <-- f(x,y)
Color as a function of x and y pixel coordinates.

3D volume textures: color <-- f(x,y,z)
Color as a function of x, y, and z coordinates.

Image processing functions: color <-- f(x, y, image)
Color as a function of an input image and pixel coordinates.

3D parametric shapes: xyz <-- f(u,v)
Spatial coordinates as a function of parametric variables.

Dynamical systems: A_0 = f(x,y), dA/dt = f(A)
Reaction-diffusion equations, etc.
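A minimal sketch (written in C rather than Lisp, and not from the notes) of how a genotype of this form might be represented and evaluated as a 2D texture, color <-- f(x,y); the node set and the example expression are illustrative assumptions.

/* Sketch: an expression-tree genotype evaluated as a 2D texture value.
   Mutation (not shown) would replace or perturb nodes; mating would
   swap subtrees between two parent expressions. */
#include <math.h>
#include <stdio.h>

typedef enum { OP_X, OP_Y, OP_CONST, OP_ADD, OP_MUL, OP_SIN } Op;

typedef struct Node {
    Op op;
    double value;            /* used only by OP_CONST */
    struct Node *a, *b;      /* children, NULL where unused */
} Node;

static double eval(const Node *n, double x, double y) {
    switch (n->op) {
    case OP_X:     return x;
    case OP_Y:     return y;
    case OP_CONST: return n->value;
    case OP_ADD:   return eval(n->a, x, y) + eval(n->b, x, y);
    case OP_MUL:   return eval(n->a, x, y) * eval(n->b, x, y);
    case OP_SIN:   return sin(eval(n->a, x, y));
    }
    return 0.0;
}

int main(void) {
    /* genotype for f(x,y) = sin(x * y) + 0.5 */
    Node cx   = { OP_X,     0.0, NULL, NULL };
    Node cy   = { OP_Y,     0.0, NULL, NULL };
    Node mul  = { OP_MUL,   0.0, &cx,  &cy  };
    Node sn   = { OP_SIN,   0.0, &mul, NULL };
    Node half = { OP_CONST, 0.5, NULL, NULL };
    Node root = { OP_ADD,   0.0, &sn,  &half };

    printf("f(1.0, 2.0) = %f\n", eval(&root, 1.0, 2.0));
    return 0;
}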


2.4 Directed Graphs

- Can describe recursive and multiply instanced structures.
- Can describe dynamical systems.

Example: Virtual creatures' 3D morphology, and control systems.
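As a rough sketch (not from the notes, and only loosely patterned after the creature genotypes in "Evolving Virtual Creatures"; all field names are assumptions), a directed-graph genotype might be declared like this:

/* Sketch of a directed-graph genotype: nodes describe parts, edges
   describe how a part instances its children.  A recursive limit on
   each edge allows self-referencing (recursive) structures, and a node
   reached by several edges gives multiple instancing. */
typedef struct GraphNode GraphNode;

typedef struct {
    GraphNode *child;        /* node to instance (may point back to an ancestor) */
    double position[3];      /* where on the parent the child attaches */
    double scale;            /* relative size of the instanced child */
    int recursive_limit;     /* stop unrolling self-references at this depth */
} GraphEdge;

struct GraphNode {
    double dims[3];          /* dimensions of this body part */
    int joint_type;          /* how the part attaches to its parent */
    GraphEdge edges[4];      /* outgoing connections */
    int n_edges;
};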

3 Methods of fitness determination

3.1 Interactively by the user.

Pros:
- Fitness is very flexible, and can depend on individual users' aesthetics.
- The user has some degree of control.

Cons:
- The generation time needs to be fast enough to keep the user from being bored.
- The population size needs to be fairly small, so each phenotype can be observed.

3.2 Defined measurable level of success.

Pros:
- The process can be automated to evolve towards some given goal.
- Human user time is not required.
- Larger populations and more generations can usually be simulated.

Cons:
- Interesting goals can be difficult to specify.
- The evolution may never reach the goal.
- Some loss of control.

3.3 Emergent fitness -- survival in an evolving environment.

Pros:
- Automated, and the "fitness" function is also automated in a sense.
- More like biological evolution, so it might be more useful as a study of evolution.
- Perhaps more interesting results will emerge.

Cons:
- More complex environments must be simulated.
- Significant loss of control.

4 Genetic Morphs

Interpolate in genetic space to make smooth (although not always smooth) cross dissolves from one evolved phenotype to another.
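For fixed-length parameter-set genotypes the idea reduces to interpolating the two parameter vectors; the sketch below (not from the notes) assumes that representation, and the function name and frame count are illustrative. Genotypes with differing structure (expressions, graphs) would first need a correspondence between their parts before they could be blended this way.

/* Sketch: linear interpolation between two parameter-vector genotypes.
   Growing a phenotype from each in-between genotype yields the frames
   of a "genetic morph" from one evolved form to another. */
#define N_PARAMS 8

typedef struct { double p[N_PARAMS]; } Genotype;

static Genotype genetic_morph(const Genotype *a, const Genotype *b, double t)
{
    Genotype g;                          /* t = 0 gives a, t = 1 gives b */
    for (int i = 0; i < N_PARAMS; i++)
        g.p[i] = (1.0 - t) * a->p[i] + t * b->p[i];
    return g;
}

/* e.g. for (f = 0; f < 30; f++) grow_phenotype(genetic_morph(&a, &b, f / 29.0));
   where grow_phenotype is whatever routine builds and renders a phenotype. */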


References by the speaker:

Sims, K., "Artificial Evolution for Computer Graphics," Computer Graphics (Siggra p'91 proceedings) Vol.25, No.4, July 1991, pp.319-328.

Sims, K., "Interactive Evolution of Equations for Procedural Models," The VisualComputer, Vol.9, 1993, pp.466-476.

Sims, K., "Evolving Virtual Creatures," Computer Graphics, Annual Conference Seri e(Siggraph '94 proceedings), July 1994, pp.15-22.

Sims, K., "Evolving 3D Morphology and Behavior by Competition," Artificial Life IVProceedings, ed. by R.Brooks & P.Maes, MIT Press, 1994, pp.28-39.

Other references:

Dawkins, R., The Blind Watchmaker, Longman, Harlow, 1986.

Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.

Holland, J.H., Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.

Koza, J., Genetic Programming: on the Programming of Computers by Means of NaturalSelection, MIT Press, 1992.

Ray, T., "An Approach to the Synthesis of Life," Artificial Life II, ed. by Langton, Taylor, Farmer, & Rasmussen, Addison-Wesley, 1991, pp. 371-408.

Todd, S., and Latham, W., Evolutionary Art and Computers, Academic Press, London, 1992.

Yaeger, L., "Computational Genetics, Physiology, Metabolism, Neural Systems, Learning, Vision, and Behavior or PolyWorld: Life in a New Context," Artificial Life III, ed. by C. Langton, Santa Fe Institute Studies in the Sciences of Complexity, Proceedings Vol. XVII, Addison-Wesley, 1994, pp. 263-298.