
Pure Contour Drawings Using Dynamic Lines

Mayank Singh∗
Texas A&M University

Donald H. House†
Clemson University

Figure 1: Representative Line Drawing using Dynamic Lines — Left to right: Cow, Chess Pawn, Station Wagon, 1932 Dodge

Abstract

Real-world line drawings are generated by the motion of an artist’s hand over a paper plane. Our system attempts to reproduce line drawings by mimicking this dynamic motion. The artist maintains his or her gaze on the underlying model, conceptualizes the shape of the line that best depicts that part of the model, and then, with a smooth motion of his or her hand, produces a line on the paper.

We present a conceptual framework, along with a working prototype, that attempts to closely follow this real-world process. The framework builds poses for the model, highlights interesting feature lines on the model, creates a representative shape of each feature line on the paper, and then generates line drawings with the motion of a moving pen. The framework also provides flexibility and support for existing techniques and algorithms available in non-photorealistic rendering research, in terms of finding perceptual aids, selecting feature lines, and rendering algorithms. We provide several results from a simple prototype system following this structure that demonstrate the validity of the concept.

Keywords: non-photorealistic rendering, dynamic line drawing, physically based, conceptual framework

1 Introduction

The shape of the model, its salient features, its surface texture, and how it is lit are all depicted by the shape and quality of its line drawing.

∗e-mail: [email protected]
†e-mail: [email protected]

Past research in Non-photorealistic Rendering (NPR) has focused mainly on two distinct areas, namely Line Identification and Line Rendering.

Line or Feature Identification attempts to identify a set of points on the surface of the model that best represents an artist’s vision of the model’s line drawing. These lines are undeniably artistic and are extremely helpful in aiding the user’s understanding of the shape of the model. Silhouettes and other heuristic contours fall into this category of research.

The other highly researched area in NPR deals with the artistic ’look and feel’ rendering of the above-mentioned set of feature points. The feature point set can be rendered as short sketchy strokes or as smooth, hand-drawn-looking splines. Modulating rendering styles is one simple means of achieving varied-looking results.

All these efforts in NPR have produced some very convincing and aesthetically pleasing non-photorealistic results. One missing component in this body of research is the part dealing with the process of line creation. Our framework attempts to address this missing component.

Figure 2: Observational Drawing by an Artist

Figure 2 is a conceptual depiction of how an artist would create a line drawing of a model. He or she would first select a pose based on his or her artistic discretion. Next, the artist would select the feature of the model that needs to be drawn. Once the feature is selected, its representation on the paper plane is imagined by the artist. Then, while keeping his or her gaze on the model, the artist’s moving hand generates smooth strokes on the paper. This set of curves together forms the line drawing of the model. The process of generating lines runs in a loop, with the brain of the artist constantly adapting to the feedback provided by the line drawing so far, making it a time-dependent, constantly changing and adapting process.

Our framework attempts to adhere as closely as possible to this process of how an observational artist would create a line drawing.

The framework operates upon a three-dimensional mesh model. It presents the user with a set of suggested poses that best describe the shape of the object. Using the framework, the user can then select a set of feature lines on the model. The selected feature lines then become the basis for generating dynamic lines by moving a pen on the paper plane. The strokes thus generated can be rendered with a variety of techniques.

The essential contribution of this paper is the introduction of a conceptually simple, yet flexible, framework that captures the process of line drawing.

2 Background

Since we are attempting to understand how to draw three-dimensional objects with lines, all our underlying algorithms also operate on three-dimensional polygonal meshes.

2.1 Framework for producing NPR

OpenNPAR by Halper et al. [Halper et al. 2003] and Programmable Style for NPR Line Drawing by Grabli et al. [Grabli et al. 2004] are two of the most significant efforts to put together a conceptual as well as a working framework for producing non-photorealistic drawings. OpenNPAR is a designer toolbox for producing non-photorealistic drawings, not particularly aimed at producing pure contour drawings. The essential subparts of this system are surface operations, stroke building, and image-based operations. Each part is linked to the others, aiding designers in going back and forth depending upon their intentions. Grabli’s work was inspired by programmable shading systems such as Pixar’s RenderMan. The NPR operators can be applied to any geometry using a simple scripting language.

2.2 Components of the Framework

In order to better understand NPR in the context of our framework, we have classified the NPR literature into the following four components:

• Observing the Shape of the Model (Perceptual Cues)

• Lines to Depict the Shape of the Model (Set of Feature Points)

• Creating the Line (Stroke Generation)

• Instrument of Choice (Rendering Style)

2.2.1 Observing the Shape of the Model

Finding the right pose is a critical first step for an artist. If possible, he or she would explore a set of alternatives to find the best possible pose for the model to be drawn. Computationally, one way of finding the right pose could be to maximize the projected area of interesting surface regions. Other issues that could be considered include lighting and depth complexity.

Several researchers have done work in this area. Kamada and Kawai [Kamada and Kawai 1988] defined a view to be optimal if it minimizes the number of degenerate faces under orthogonal projection. Roberts and Marshall [Roberts and Marshall 1998] approached the problem by minimizing the angle between the normal of a face and the view direction. Since depth is a powerful cue for the perception of shape, Stoev and Straßer used the idea of maximizing not only the projected area but also the depth of the scene. Based on information-theoretic principles, Vazquez et al. [Vazquez et al. 2001] introduced a formal vocabulary to measure the quality of a viewpoint for a given model, which they call viewpoint entropy.
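As an illustration of the viewpoint-entropy idea, the following is a minimal sketch computing the Shannon entropy of the relative projected face areas for one viewpoint (the function name and input layout are our own; views where faces are more evenly visible score higher):

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy of the projected face areas for one viewpoint.

    projected_areas: per-face projected areas (the background can be
    included as an extra entry). Higher entropy means the faces are
    more evenly visible, which serves as a viewpoint-quality measure.
    """
    total = sum(projected_areas)
    h = 0.0
    for a in projected_areas:
        if a > 0.0:
            p = a / total
            h -= p * math.log2(p)
    return h

# A view dominated by one face scores lower than an even view.
skewed = viewpoint_entropy([97.0, 1.0, 1.0, 1.0])
even = viewpoint_entropy([25.0, 25.0, 25.0, 25.0])
```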

2.2.2 Lines to Depict the Shape of the Model

In his book The Natural Way to Draw, Nicolaides [Nicolaides 1969] states that one of the first and simplest lessons for an aspiring artist undertaking an observational line drawing is to lock the gaze on the model and follow its contours (defined as the outlines of the model). Similar advice is found in Edwards [Edwards 1926-]. Each time a contour comes to an apparent end, the pen or pencil restarts where the line turned inward on the model. Nicolaides makes a distinction between outlines and contours by defining the former as lines the eyes can follow but not the sense of touch [Nicolaides 1969].

For the purpose of this paper, we will follow the NPR literature that categorizes lines in a drawing as either silhouettes or contours. A silhouette is a line separating surfaces facing toward the observer from those facing away. This set of lines naturally includes outlines, but also includes lines formed by ridges across the face of an object. All other lines will be referred to as contours.
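Given this definition, the standard mesh-based silhouette test marks an edge whenever its two adjacent faces disagree on whether they face the viewer. A minimal sketch, with an edge–face adjacency layout of our own for illustration:

```python
def is_front_facing(normal, view_dir):
    # A face is front-facing when its normal points toward the viewer,
    # i.e. the dot product with the view direction is negative.
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    return nx * vx + ny * vy + nz * vz < 0.0

def silhouette_edges(edges, face_normals, view_dir):
    """Edges whose two adjacent faces disagree on facing direction.

    edges: list of (edge_id, face_a, face_b) adjacency records;
    face_normals: per-face unit normals. Both are illustrative
    stand-ins for a real mesh data structure.
    """
    result = []
    for edge_id, fa, fb in edges:
        front_a = is_front_facing(face_normals[fa], view_dir)
        front_b = is_front_facing(face_normals[fb], view_dir)
        if front_a != front_b:
            result.append(edge_id)
    return result
```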

Dooley and Cohen [Dooley and Cohen 1990] first demonstrated the computation of silhouettes, along with boundary and discontinuity lines, using a hiddenness matrix. Later, Benichou and Elber [Benichou and Elber 1999] and Elber [G. 1999] introduced algorithms that preprocess a polygonal mesh to compute silhouette lines in real time. In 2003, Isenberg et al. [Isenberg et al. 2003] compiled a comprehensive list of silhouette extraction algorithms operating in either mesh or image space.

Watanabe [Kouki Watanabe 2001] presents a robust algorithm for detecting perceptually salient features on a surface using curvature information on a 3D mesh. Another set of feature lines of interest are ridges and valleys. Belyaev et al. [Belyaev et al. 1998] derived mathematical formulae to compute ridges and ravines on implicit surfaces. Page et al. [Page et al. 2001] used a normal-voting approach to compute ridge and valley lines on 3D objects. Ohtake et al. [Ohtake et al. 2004] used curvature filtering to obtain high-quality ridge and valley lines on a 3D mesh.

DeCarlo et al. [DeCarlo et al. 2003] presented the idea of drawing contours on the surface called Suggestive Contours. These are contours where the radial curvature (curvature in the viewing direction) is zero and the derivative of the radial curvature is positive. Judd et al. [Judd et al. 2007] introduced the idea of using the maximum change of curvature in the projected image space to derive another set of contours, called Apparent Ridges, to depict the shape of an object. These Apparent Ridges successfully show the ridge-like areas on the object.
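The suggestive-contour condition stated above can be sketched as a simple per-vertex test, assuming the radial curvature and its directional derivative have already been computed for each vertex (the input layout and the tolerance are our own illustration):

```python
def suggestive_contour_vertices(radial_curvature, radial_derivative, eps=1e-3):
    """Vertices satisfying the suggestive-contour condition:
    radial curvature near zero AND a positive derivative of radial
    curvature along the view direction.

    radial_curvature, radial_derivative: per-vertex scalars assumed
    precomputed from the mesh; eps is an illustrative tolerance for
    the zero-crossing test on a discrete mesh.
    """
    marked = []
    for i, (kr, dkr) in enumerate(zip(radial_curvature, radial_derivative)):
        if abs(kr) < eps and dkr > 0.0:
            marked.append(i)
    return marked
```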


2.2.3 Creating the Line

In order to render features of interest as lines, connectivity information must be built over the feature points. In 2003, Sousa and Prusinkiewicz [Sousa and Prusinkiewicz 2003] introduced the idea of using B-splines to give their drawings a hand-drawn look and feel. For each feature type, the vertices of the edges are treated as control points for splines. This produces a static piece of geometry with a varying degree of smoothness. Grabli et al. [Grabli et al. 2004] also used the idea of chaining in their Programmable Style for NPR Line Drawings. These chains were built in screen space and were used as backbones for stylistic operations. House and Singh [House and Singh 2007] introduced the concept of drawing dynamic lines by having a control system guide a pen over a paper plane.

2.2.4 Instrument of Choice

For the purpose of rendering high-quality aesthetic strokes, Gooch et al. [Gooch et al. 1998] first introduced the idea of modulating line thickness based on the perspective depth of the corresponding point in 3D space. Sousa and Prusinkiewicz [Sousa and Prusinkiewicz 2003] mention the idea of further taking into account the curvature of the surface while rendering a line. Using hand-drawn artistic examples as reference material, Goodwin et al. [Goodwin et al. 2007] were able to draw a strong link between the width of strokes and lighting levels on the surface. The width of the line is modulated based on lighting, curvature, and depth.

3 Our Framework

Figure 3: Conceptual Overview of Line Drawing Framework

The top part of Figure 3 shows how the process of line drawing flows in the real world. Below it is an overview of the structure of our line drawing framework’s design. Its components closely mimic those of their real-world counterparts. The artist’s model and lighting are represented by a 3D model and lighting system, as in conventional 3D graphics. The artist’s gaze and observation yield the identification of various features that might be rendered, leading to an intention to draw a set of lines. Similarly, in our framework, a feature extractor identifies surface features to be drawn, and these go to a planner that organizes a set of lines to be drawn. The artist draws lines by movement of the hand and drawing instrument over the paper, which results in rendered lines on the drawing surface. In the framework, the planner sends lines to a control system, and a dynamic controller recreates the art of line creation by moving a pen over the paper plane.

We used this conceptual overview as a guide in designing and building a prototype line drawing system. The prototype is meant to test our ideas and to gain some experience with the overall structure. Our future work is to flesh out each of the conceptual components in the form of a more fully configured drawing machine. To show how the prototype system works, we examine each step in detail.

3.1 Model and Lighting

The artist positions himself or herself before the model. If possible, the model would be placed on a turntable and an aesthetic choice of pose made. Ideally, the lighting conditions would be modulated to best highlight the aspects of the model that the artist wishes to emphasize.

The system operates upon a three-dimensional triangulated polygonal mesh, with a simple white headlight attached to the camera.

In an attempt to guide novice users toward more aesthetically pleasing poses for the model, our system highlights a subset of canonical viewpoints. These viewpoints are off-axis, slightly above the ground plane, with respect to the model placed at the center. This definition of canonical viewpoints is based on the work of Blanz et al. [Blanz V. 1999] on finding aesthetically pleasing poses. The subset of points is selected from a uniformly sampled set of points, created by linearly subdividing a bounding octahedron. The vertices of the subdivided octahedron are then projected onto the bounding sphere.
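The octahedron sampling described above can be sketched as follows (a minimal illustration; the further selection of the off-axis, above-ground subset is omitted):

```python
import math

def normalize(p):
    # Project a point radially onto the unit sphere at the origin.
    x, y, z = p
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def subdivide(triangles):
    """One level of linear 1-to-4 triangle subdivision."""
    out = []
    for a, b, c in triangles:
        ab = tuple((a[i] + b[i]) / 2 for i in range(3))
        bc = tuple((b[i] + c[i]) / 2 for i in range(3))
        ca = tuple((c[i] + a[i]) / 2 for i in range(3))
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

def viewpoint_samples(levels=2):
    """Uniform viewpoint candidates: linearly subdivide a bounding
    octahedron, then push every vertex out to the bounding sphere."""
    px, nx = (1, 0, 0), (-1, 0, 0)
    py, ny = (0, 1, 0), (0, -1, 0)
    pz, nz = (0, 0, 1), (0, 0, -1)
    tris = [(py, px, pz), (py, pz, nx), (py, nx, nz), (py, nz, px),
            (ny, pz, px), (ny, nx, pz), (ny, nz, nx), (ny, px, nz)]
    for _ in range(levels):
        tris = subdivide(tris)
    return {normalize(v) for tri in tris for v in tri}
```

Two subdivision levels of the 6-vertex octahedron give 66 distinct directions, all on the unit sphere.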

Figure 4: Shaded Model of Max Planck

Figure 4 shows a shaded model of Max Planck that we will follow through the stages of the process.

3.2 Observation

Once the pose of the model, and the position of the artist with respect to the model, are decided, the artist decides upon the features of the model that he or she would like to depict. For each of these features, a representative line is imagined. The line can either accurately depict the exact shape of the underlying feature, or loosely represent that shape. The former are referred to as Contour Drawings and the latter as Gestural Drawings [Nicolaides 1969].

Since our system has access to a polygonal mesh of the model, we can extract a variety of feature lines, such as silhouettes [Isenberg et al. 2003], ridges and valleys [Ohtake et al. 2004], suggestive contours [DeCarlo et al. 2003], and apparent ridges [Judd et al. 2007]. Potentially, any algorithm that operates upon a polygonal mesh could be used. Figure 5 and Figure 6 show examples of the Max Planck model with Silhouettes, Suggestive Contours, and Apparent Ridges highlighted.

Figure 5: Silhouettes and Suggestive Contours

Figure 6: Silhouettes and Apparent Ridges

3.3 Form Intention

Once the artist decides upon the features he or she would like to draw, their representative shapes in the two-dimensional plane are imagined, and a plan is made of how best to draw lines for these features.

In our framework, the selected set of feature lines is stitched together and projected onto the paper plane. This projected set of lines then forms the backbone for the tracker motion. The tracker in the framework is the set point for a Proportional, Integral & Derivative (PID) controller.

The feature lines are stitched together in three-dimensional space using a simple graph-building algorithm. Each feature type has its own graph of vertices and edges. Along with the vertices, other information is stored as well, such as the normal at the vertex, an occlusion flag, and the distance of the vertex from the eye.

The feature edges can run across the faces of the polygons or along the edges of the polygonal mesh. The graph-building algorithm varies slightly for each case.

Figure 7: Graph of Edges

Let us examine the graph-building algorithm in detail. Case 1: Feature edges run across faces. For each type of feature (i.e., silhouette, suggestive contour, or apparent ridge), we build a separate graph of edges. To begin with, each edge is marked as unvisited, and each edge is augmented with the index of the face it is contained within.

Next, the framework builds an order-less hash table with the face index as the key and the edge index as the value. This provides O(log n) time for the edge–face search.

Now, for each edge in the feature set, we build an adjacency matrix as shown in Figure 7. Since the underlying mesh is triangulated, we have at most three faces to look at for each edge. Having the adjacency information, we select an unvisited edge and randomly pick one of the unvisited edges from its adjacency matrix. The adjacency matrix provides a constant-time search for the continuing edge. This process is repeated until we reach an edge that has no continuing edge in its vicinity.

Then the whole stitching process is repeated until there are no unvisited edges in the feature set.
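The stitching loop for Case 1 can be sketched as follows, assuming the edge–face adjacency has already been extracted (the data layout and names are our own illustration; two edges are treated as adjacent when they share a face):

```python
import random

def build_chains(edge_faces, seed=0):
    """Stitch feature edges into chains.

    edge_faces: dict mapping edge id -> the faces that edge touches,
    an illustrative stand-in for the face-keyed hash table described
    in the text.
    """
    rng = random.Random(seed)
    # Face-keyed lookup: face id -> edges touching it.
    face_edges = {}
    for e, faces in edge_faces.items():
        for f in faces:
            face_edges.setdefault(f, set()).add(e)

    unvisited = set(edge_faces)
    chains = []
    while unvisited:
        e = unvisited.pop()
        chain = [e]
        while True:
            # Candidate continuations: unvisited edges sharing a face
            # with the end of the current chain.
            nbrs = set()
            for f in edge_faces[chain[-1]]:
                nbrs |= face_edges[f] & unvisited
            if not nbrs:
                break  # no continuing edge in the vicinity
            nxt = rng.choice(sorted(nbrs))
            unvisited.discard(nxt)
            chain.append(nxt)
        chains.append(chain)
    return chains
```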

Case 2: Feature edges run along mesh edges. Again, all edges are marked unvisited. For each edge, a lookup tuple of vertex indices is precomputed. Starting with an unvisited edge, the next following edge is found using the vertex index. As soon as the follow-up edge is found, it is marked visited. This technique is very similar to the one described in House and Singh [House and Singh 2007].

Feature edges such as silhouettes, suggestive contours, and apparent ridges run across faces, while boundary features run along the edges of the triangles.

Figure 8 and Figure 9 show examples of feature chains on the Max Planck model. The former highlights silhouettes and suggestive contours, and the latter shows silhouettes and apparent ridges. The minimum span of the graph is set to four edges. Adjusting the minimum span helps get rid of short, noisy strokes. The occlusion flag is turned on, showing the occluded feature edges in a lighter shade.

Figure 8: Model with graph of feature edges. Features are Silhouette and Suggestive Contours.

Figure 9: Model with graph of feature edges. Features are Silhouette and Apparent Ridges.

Also, for each vertex in the graph, an occlusion flag is set. The occlusion for each vertex is computed by offsetting the vertex along its normal by a tiny fraction and comparing its projected depth value with the corresponding depth value of the pixel in the depth buffer. This occlusion flag is used later for the tracker motion and subsequently for the pen motion.
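The per-vertex occlusion test can be sketched as follows (a minimal illustration; `project` and the buffer layout are stand-ins for the real rendering pipeline, and the offset and bias values are our own):

```python
def occlusion_flag(vertex, normal, project, depth_buffer, offset=1e-3, bias=1e-4):
    """True when the vertex is hidden by other geometry.

    The vertex is nudged along its normal (to avoid flagging the
    surface it sits on) and its depth is compared against the depth
    buffer. `project` maps a 3D point to (pixel_x, pixel_y, depth).
    """
    nudged = tuple(v + offset * n for v, n in zip(vertex, normal))
    px, py, depth = project(nudged)
    # Occluded if something nearer already occupies that pixel.
    return depth > depth_buffer[py][px] + bias
```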

3.3.1 Move Hand

A very critical aspect of line drawing in real life is that the act of drawing is a dynamic process, carried out by the nervous system in conjunction with a physical hand and drawing instrument. The line, to the artist, is not a piece of geometry; its shape is constantly adapted while the artist draws it. The artist’s brain constantly adjusts the hand to form the shape of the line as it is being drawn on the paper.

Our framework likewise generates lines by the dynamic motion of a moving pen.

The graph of edges is projected onto the two-dimensional paper plane. The vertices of the graph act as basis control points. The distance between two consecutive control points is subdivided at pixel spacing. For each pixel, a set of values is interpolated: its corresponding position in 3D space, its normal in 3D space, its occlusion flag, and its distance from the eye in 3D space. This per-pixel sampling then forms the path of the tracker motion.
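This per-pixel resampling can be sketched as follows (a minimal illustration interpolating a single scalar attribute; the actual system interpolates position, normal, occlusion flag, and eye distance at each sample):

```python
import math

def resample_polyline(points, attrs, spacing=1.0):
    """Resample a projected polyline at roughly pixel spacing,
    linearly interpolating a per-vertex attribute along each segment.

    points: 2D control points; attrs: one scalar per point (e.g.
    distance from the eye).
    """
    out = []
    for (p0, a0), (p1, a1) in zip(zip(points, attrs), zip(points[1:], attrs[1:])):
        seg = math.dist(p0, p1)
        steps = max(1, int(seg / spacing))
        for k in range(steps):
            t = k / steps
            out.append(((p0[0] + t * (p1[0] - p0[0]),
                         p0[1] + t * (p1[1] - p0[1])),
                        a0 + t * (a1 - a0)))
    out.append((points[-1], attrs[-1]))
    return out
```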

The tracker is the set point of a control system, and attached to the control system is the pen. As the tracker moves along this path, one pixel at a time, the pen is dragged along, thus generating a line on the paper. The lines generated by the motion of the pen are therefore dynamic in nature and can be adapted as they are drawn. In essence, this captures the underlying motion of the artist’s hand.

The control system is a PID system with a zero-length spring.
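A minimal sketch of such a controller: a pen with mass, pulled toward the moving tracker by a zero-length spring plus damping (a PD law; the constants, time step, and integration scheme are our own illustration, and the integral term of the full PID loop is omitted):

```python
def simulate_pen(tracker_path, k=40.0, c=10.0, mass=1.0, dt=0.01, steps_per_point=4):
    """Drag a pen behind a moving set point with a zero-length
    spring (stiffness k) and a damper (coefficient c)."""
    x, y = tracker_path[0]
    vx = vy = 0.0
    drawn = [(x, y)]
    for tx, ty in tracker_path[1:]:
        for _ in range(steps_per_point):
            # Spring force toward the tracker plus damping.
            ax = (k * (tx - x) - c * vx) / mass
            ay = (k * (ty - y) - c * vy) / mass
            vx += ax * dt
            vy += ay * dt
            x += vx * dt
            y += vy * dt
            drawn.append((x, y))
    return drawn
```

Stiffening the spring and raising the damping makes the pen hug the tracker path; loosening them yields the smoother, lagging strokes shown later.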

Depending upon how the tracker is moved and how the control system’s variables are configured, the lines thus generated can either very closely follow the feature edges or loosely represent them.

Figure 10: Model drawn with a slow moving pen

Figure 10 shows an example of the model with the tracker path and the pen motion. The black lines are overlaid onto the tracker’s path. As one can notice from the figure, the pen follows the tracker’s path very closely. An important distinction here, in comparison with past work that fits splines to feature lines to give a hand-drawn look and feel [Sousa and Prusinkiewicz 2003], is that the pen motion preserves the geometry of the underlying model without adding additional smoothing to the model’s line drawing.

Smoother, flowing lines can be generated by tuning the control system’s parameters, such as the spring or damping constants. Figure 11 shows an example where the spring is loose and the damping constant is lowered as well.


Figure 11: Model drawn with a lowered spring and damping constant

3.4 Render Strokes

The artist can generate line drawings using any instrument of choice, such as a pen, pencil, or brush.

Figure 12 shows the model drawn with a thick-nib pen. In our system, once a set of points is generated by the motion of the pen, the points can be rendered using any technique. The motion of the pen can also take into account its interaction with the medium it is being drawn upon.

Figure 13 and Figure 14 show examples where the pen motion is rendered as simple lines with constant width and as lines with varying width.

Our system takes into account some very simple principles for modulating line quality, such as presetting the minimum thickness of a line and changing its width based on the perspective depth of the model, the lighting of the model, and the surface curvature of the model.
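These modulation principles can be sketched as a simple width function (the blend weights and clamping are our own illustration, not the system's actual coefficients):

```python
def stroke_width(depth, lighting, curvature,
                 min_width=0.5, max_width=3.0):
    """Modulate stroke width from depth, lighting, and curvature.

    All inputs are assumed normalized to [0, 1]: nearer, darker, and
    higher-curvature regions draw thicker, and the result is clamped
    to a preset minimum thickness.
    """
    w = max_width
    w *= 1.0 - 0.5 * depth        # farther points get thinner lines
    w *= 1.0 - 0.3 * lighting     # brightly lit regions get thinner
    w *= 1.0 + 0.3 * curvature    # high curvature gets emphasized
    return max(min_width, min(max_width, w))
```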

Figure 15 and Figure 16 show examples where the parameters of the control system are modified to have low and high damping, producing wavy lines.

3.5 Results

4 Future Work

So far, we have developed a prototype system that produces simple line drawings from polygonal models, using a standard set of features already developed in the NPR literature. We intend to extend this work in four distinct areas.

We plan to add an automated system for suggesting a set of good poses for the 3D model, along with recommending and adjusting appropriate lighting conditions for the model.

So far, we have tested our system with Silhouettes, Suggestive Contours, and Apparent Ridges. An obvious problem with highlighting features on a discrete polygonal mesh is numerical inaccuracy and noise. Even after locating the edges that satisfy the feature-finding algorithm, building a set of connections between these edges is non-trivial. Even a small numerical inaccuracy produces disjoint segments while traversing the feature edges. We would like to improve upon our graph-building algorithm to make it more robust.

Figure 12: Model drawn with a thick pen

Figure 13: Model of a cow rendered with constant thickness lines

For the purpose of demonstrating the viability of our framework, we are using a Proportional–Integral–Derivative (PID) controller as our control system. In the future, we would like to explore the option of a more complex and smarter control system that would reassess its motion based on a varied set of parameters.

We have focused only on producing line drawings made by a pen. There is no reason why this cannot be extended to other styles of rendering, such as charcoal, paintbrush, and oil. Each would add additional complexity as well as provide a richer set of options for the user.

5 Conclusion

We have proposed a simple conceptual framework within which a complete, high-quality line drawing system could be developed. In this paper, we have demonstrated the effectiveness of our framework with a small set of examples.

The suggested approach follows very closely the process used by an artist producing an observational line drawing, while providing the flexibility to incorporate existing and new advancements in NPR research.


Table 1: Time Complexity

Model          Triangles   No. of Chains   Edge Graph (ms)   Pen Motion (ms)   Feature Types
Max Planck     98260       241             39                186               Silhouettes and Suggestive Contours
Max Planck     98260       499             75                277               Silhouettes and Apparent Ridges
Cow            92846       198             42                182               Silhouettes and Suggestive Contours
Cow            92846       421             178               291               Silhouettes and Apparent Ridges
’32 Dodge      16646       616             1813              649               Silhouettes
Station Wagon  8814        1133            1909              932               Silhouettes
Chess          7872        10              5                 36                Silhouettes and Suggestive Contours

Figure 14: Model of a cow rendered with varying thickness lines

Figure 15: Model of a station wagon drawn with wiggling lines

The pen motion provides additional smoothing compared to the underlying feature edges. Even if the model is coarse, the line drawings produced have very smooth strokes.

The system provides a paradigm shift in terms of thinking about how line drawings are made. It takes into account the dynamism of human line drawing, both in terms of the intentionality of the artist and the dynamics of the hand and the pen. We believe that this is a key step forward toward building a thinking machine that can draw.

References

BAREQUET, G., DUNCAN, C. A., GOODRICH, M. T., KUMAR, S., AND POP, M. 1999. Efficient perspective-accurate silhouette computation. In SCG ’99: Proceedings of the fifteenth annual symposium on Computational geometry, ACM, New York, NY, USA, 417–418.

BELYAEV, A., PASKO, A., AND KUNII, T. 1998. Ridges and ravines on implicit surfaces. In Proceedings of Computer Graphics International, 530.

Figure 16: Model of a station wagon drawn with wiggling lines

Figure 17: ’32 Dodge

BENICHOU, F., AND ELBER, G. 1999. Output sensitive extraction of silhouettes from polygonal geometry. In PG ’99: Proceedings of the 7th Pacific Conference on Computer Graphics and Applications, IEEE Computer Society, Washington, DC, USA, 60.

BLANZ, V., TARR, M. J., AND BÜLTHOFF, H. H. 1999. What object attributes determine canonical views? Perception 28, 5, 575–600.

DECARLO, D., FINKELSTEIN, A., RUSINKIEWICZ, S., AND SANTELLA, A. 2003. Suggestive contours for conveying shape. In Proceedings of ACM SIGGRAPH 2003.

DOOLEY, D., AND COHEN, M. 1990. Automatic illustration of 3d geometric models: lines. In SI3D ’90: Proceedings of the 1990 symposium on Interactive 3D graphics, ACM, New York, NY, USA, 77–82.


Figure 18: ’32 Dodge drawn in gestural style

Figure 19: ’32 Dodge drawn with wiggly lines

EDWARDS, B. 1926-. The New Drawing on the Right Side of the Brain, 2nd revised ed. New York: Jeremy P. Tarcher/Putnam, c1999.

ELBER, G. 1999. Interactive line art rendering of freeform surfaces. Computer Graphics Forum 18, 3, 1–12.

GIRSHICK, A., INTERRANTE, V., HAKER, S., AND LEMOINE, T. 2000. Line direction matters: an argument for the use of principal directions in 3d line drawings. In NPAR ’00: Proceedings of the 1st international symposium on Non-photorealistic animation and rendering, ACM, New York, NY, USA, 43–52.

GOOCH, A., GOOCH, B., SHIRLEY, P., AND COHEN, E. 1998. A non-photorealistic lighting model for automatic technical illustration. In SIGGRAPH ’98: Proceedings of the 25th annual conference on Computer graphics and interactive techniques, ACM, New York, NY, USA, 447–452.

GOODWIN, T., VOLLICK, I., AND HERTZMANN, A. 2007. Isophote distance: a shading approach to artistic stroke thickness. In NPAR ’07: Proceedings of the 5th international symposium on Non-photorealistic animation and rendering, ACM, New York, NY, USA, 53–62.

GRABLI, S., TURQUIN, E., DURAND, F., AND SILLION, F. 2004. Programmable style for NPR line drawing. In Rendering Techniques 2004 (Eurographics Symposium on Rendering), ACM Press.

HALPER, N., SCHLECHTWEG, S., AND STROTHOTTE, T. 2002. Creating non-photorealistic images the designer’s way. In NPAR ’02: Proceedings of the 2nd international symposium on Non-photorealistic animation and rendering, ACM, New York, NY, USA, 97–ff.

HALPER, N., ISENBERG, T., RITTER, F., FREUDENBERG, B., MERUVIA, O., SCHLECHTWEG, S., AND STROTHOTTE, T. 2003. OpenNPAR: A system for developing, programming, and designing non-photorealistic animation and rendering. In Proceedings of Pacific Graphics 2003, 424.

HOUSE, D. H., AND SINGH, M. 2007. Line drawing as a dynamic process. In PG ’07: Proceedings of the 15th Pacific Conference on Computer Graphics and Applications, IEEE Computer Society, Washington, DC, USA, 351–360.

INTERRANTE, V., FUCHS, H., AND PIZER, S. 1996. Illustrating transparent surfaces with curvature-directed strokes. In Proceedings of IEEE Visualization ’96, 211.

ISENBERG, T., FREUDENBERG, B., HALPER, N., SCHLECHTWEG, S., AND STROTHOTTE, T. 2003. A developer’s guide to silhouette algorithms for polygonal models. IEEE Computer Graphics and Applications 23, 4, 28–37.

JUDD, T., DURAND, F., AND ADELSON, E. H. 2007. Apparent ridges for line drawing. ACM Trans. Graph. 26, 3, 19.

KAMADA, T., AND KAWAI, S. 1988. A simple method for computing general position in displaying three-dimensional objects. Comput. Vision Graph. Image Process. 41, 1, 43–56.

KIM, S. Y., KIM, H., KIM, B., AND KOO, B. 2006. A unified framework for 3d non-photorealistic rendering. In SIGGRAPH ’06: ACM SIGGRAPH 2006 Research posters, ACM, New York, NY, USA, 110.

KNILL, D. C. 1992. Perception of surface contours and surface shape: from computation to psychophysics. J. Opt. Soc. Am. A 9, 9, 1449.

KNILL, D. C. 2001. Contour into texture: information content of surface contours and texture flow. J. Opt. Soc. Am. A 18, 1, 12–35.

KOENDERINK, J., AND VAN DOORN, A. 1982. The shape of smooth objects and the way contours end. Perception 11, 129–137.

KOENDERINK, J. J., VAN DOORN, A. J., KAPPERS, A. M. L., AND TODD, J. T. 2001. Ambiguity and the ’mental eye’ in pictorial relief. Perception 30, 4, 431–448.

KOENDERINK, J. 1984. What does the occluding contour tell us about solid shape? Perception 13, 321–330.

WATANABE, K., AND BELYAEV, A. G. 2001. Detection of salient curvature features on polygonal surfaces. Computer Graphics Forum 20, 3 (September), 385–392.

NICOLAIDES, K. 1969. The Natural Way to Draw: A Working Plan for Art Study, 2nd revised ed. Houghton Mifflin Co.

OHTAKE, Y., BELYAEV, A., AND SEIDEL, H.-P. 2004. Ridge-valley lines on meshes via implicit surface fitting. In SIGGRAPH ’04: ACM SIGGRAPH 2004 Papers, ACM, New York, NY, USA, 609–612.

PAGE, D. L., KOSCHAN, A., SUN, Y., PAIK, J., AND ABIDI, M. A. 2001. Robust crease detection and curvature estimation of piecewise smooth surfaces from triangle mesh approximations using normal voting. In Proceedings of CVPR 2001, vol. 1, 162.

PHILLIPS, F., TODD, J. T., KOENDERINK, J. J., AND KAPPERS, A. M. L. 2003. Perceptual representation of visible surfaces. Perception & Psychophysics 65, 5 (July), 747–762.

ROBERTS, D., AND MARSHALL, A. 1998. Viewpoint selection for complete surface coverage of three dimensional objects. In Proceedings of BMVC ’98.

SOUSA, M. C., AND PRUSINKIEWICZ, P. 2003. A few good lines: Suggestive drawing of 3d models. Proceedings of Eurographics 2003: Computer Graphics Forum 22, 3.

STOEV, S. L., AND STRAßER, W. 2002. A case study on automatic camera placement and motion for visualizing historical data. In VIS ’02: Proceedings of the conference on Visualization ’02, IEEE Computer Society, Washington, DC, USA.

VAZQUEZ, P.-P., FEIXAS, M., SBERT, M., AND HEIDRICH, W. 2001. Viewpoint selection using viewpoint entropy. In VMV ’01: Proceedings of the Vision Modeling and Visualization Conference 2001, Aka GmbH, 273–280.