
To appear in the SIGGRAPH 2000 conference proceedings – UPDATED EXTENDED VERSION

Conservative Visibility Preprocessing using Extended Projections

Frédo Durand†‡, George Drettakis†, Joëlle Thollot† and Claude Puech†

†iMAGIS∗ - GRAVIR/IMAG - INRIA   ‡Laboratory for Computer Science - MIT

Abstract

Visualization of very complex scenes can be significantly accelerated using occlusion culling. In this paper we present a visibility preprocessing method which efficiently computes potentially visible geometry for volumetric viewing cells. We introduce novel extended projection operators, which permit efficient and conservative occlusion culling with respect to all viewpoints within a cell, and take into account the combined occlusion effect of multiple occluders. We use extended projections of occluders onto a set of projection planes to create extended occlusion maps; we show how to efficiently test occludees against these occlusion maps to determine occlusion with respect to the entire cell. We also present an improved projection operator for certain specific but important configurations. An important advantage of our approach is that we can re-project extended projections onto a series of projection planes (via an occlusion sweep), and accumulate occlusion information from multiple blockers. This new approach allows the creation of effective occlusion maps for previously hard-to-treat scenes such as leaves of trees in a forest. Graphics hardware is used to accelerate both the extended projection and reprojection operations. We present a complete implementation demonstrating significant speedup with respect to view-frustum culling only, without the computational overhead of on-line occlusion culling.

KEYWORDS: Occlusion culling, visibility determination, PVS

1 Introduction

Visualization of very complex geometric environments (millions of polygons) is now a common requirement in many applications, such as games, virtual reality for urban planning, landscaping, etc. Efficient algorithms for determining visible geometry are a key to achieving interactive or real-time display of such complex scenes; much research in computer graphics is dedicated to this domain. Object simplification using different levels of detail (LOD) (e.g., [Cla76, FS93]) or image-based approaches (e.g., [SLSD96]) have also been used to accelerate display, either as an alternative or in tandem with visibility algorithms.

Visibility culling algorithms try to reduce the number of primitives sent to the graphics pipeline based on occlusion with respect to the current view. In view-frustum culling, only the objects contained in the current view frustum are sent to the graphics pipeline.

Occlusion culling attempts to identify the visible parts of a scene, thus reducing the number of primitives rendered. We can distinguish two classes of occlusion culling: point-based methods, which perform occlusion culling on-the-fly for the current viewpoint, and preprocessing approaches, which perform this calculation beforehand, typically for given regions (volumetric cells).

Point-based methods are very effective, and in particular can treat the case of occluder fusion, i.e. the compound effect of multiple occluders. This is important, for example in a forest: each individual leaf hides very little, but all the trees together obscure everything behind them. However, point-based methods have significant computational overhead during display, and cannot be simply adapted for use with pre-fetching if the model cannot fit in memory.

∗iMAGIS is a joint research project of CNRS/INRIA/UJF/INPG. E-mail: {Fredo.Durand|George.Drettakis|Joelle.Thollot|Claude.Puech}@imag.fr http://www-imagis.imag.fr/

No previous preprocessing method exists which can handle occluder fusion. In addition, the success of such preprocessing methods is often tied to the particular type of scene they treat (e.g., architectural environments [TS91, Tel92]).

In this paper we present a visibility preprocessing algorithm based on a novel extended projection operator, which generalizes the idea of occlusion maps to volumetric viewing cells. These operators take occluder fusion into account, and result in an occlusion culling algorithm which is efficient both in memory and computation time. Our algorithm results in a speedup of up to 18 compared to optimized view-frustum culling only. In addition, using repeated re-projection onto several projection planes, we can treat particularly difficult scenes (e.g., forests), for which we obtain speedups of 24, again compared to view-frustum culling. An example is shown in Fig. 1.

1.1 Previous work

It is beyond the scope of this paper to review all previous work on visibility. Comprehensive surveys can be found in e.g., [Dur99]. In particular, we do not review analytical 3D visibility methods (e.g., [PD90, Dur99]) because their algorithmic complexity and their robustness problems currently prevent their practical use for large scenes. In what follows, we first briefly overview preprocessing occlusion culling algorithms and then discuss point-based approaches.

Occlusion culling techniques were first proposed by Jones [Jon71] and Clark [Cla76]. Airey et al. [ARB90] and Teller et al. [TS91, Tel92] were the first to actually perform visibility preprocessing in architectural environments. They exploit the fact that other rooms are visible only through sequences of portals (doors, windows). These methods have proven very efficient in the context of walkthroughs, where they can be used in conjunction with LOD [FS93] for faster frame-rates. The problem of data size and the consequent treatment of disk pre-fetching and network bandwidth has been addressed, notably by Funkhouser (e.g., [Fun95, Fun96]). Applications to global lighting simulation have also been demonstrated, e.g., [TH93]. Unfortunately these methods rely heavily on the properties of indoor scenes, and no direct generalization has been presented. Visibility for terrain models has also been treated (e.g., [Ste97]).

Preprocessing algorithms capable of treating more general scenes have recently begun to emerge (e.g., [COFHZ98, COZ98, WBP98]). Nonetheless, they are currently restricted to occlusions caused by a single convex occluder at a time. Since they cannot handle the general case of occluder fusion, they compute potentially visible sets which are often very large. However, the technique by Schaufler et al. [SDDS00] in this volume can also handle occluder fusion for volumetric visibility.

A novel approach using conservative occluder simplification has also recently been proposed [LT99] to decrease the cost of occlusion preprocessing.

On the other hand, point-based methods have been proposed which perform an occlusion culling operation for each frame on-the-fly.


Figure 1: Top: A 7.8M-polygon forest scene (only a tenth of the leaves are rendered here). The inset shows a close-up on the viewing cell (in red) and one of the projection planes. Bottom: Visibility computation from the view cell. In yellow we show the nodes of the hierarchy of bounding boxes culled by our method. We also show one of the projection planes used.

Greene et al. [GKM93] create a 2D hierarchical z-buffer, used in conjunction with a 3D octree hierarchy to accelerate the occlusion of hidden objects. A related algorithm using hierarchical occlusion maps was introduced by Zhang et al. [ZMHH97], which uses existing graphics hardware. This approach includes "important" blocker selection and approximate culling for regions almost occluded. It is important to note that these approaches only work for a single given viewpoint and are thus unsuitable for precomputing visibility information. In a different vein, Luebke and Georges [LG95] presented an algorithm which extends view-frustum culling by restricting views through convex portals. Wonka and Schmalstieg [WS99] use a z-buffer from above the scene to perform occlusion culling in terrains.

Other point-based methods include the work by Coorg and Teller [CT96, CT97], which uses spatial subdivision and maintains separating planes with respect to the viewpoint to achieve rapid culling of hidden surfaces. A similar technique is presented by Hudson et al. [HMC+97].

H. L. Lim [Lim92] proposed a fuzzy hidden surface removal technique¹. He defined fuzzy projections as logical-AND and logical-OR operations on views from sets of viewpoints. By using a modified z-buffer, he was then able to perform visibility computation from a volume of space. The basic idea of extended projections, which we are about to present in Section 2, is similar to this idea.

¹This reference came to our attention well after the completion of the camera-ready version of our paper. It is thus missing from the proceedings version.

We will discuss fuzzy projections further in Section 7.4.

1.2 Overview

Our visibility preprocessing algorithm adaptively subdivides the scene into viewing cells, which are the regions of observer movement. For each such cell, we compute the set of objects which are potentially visible from all the points inside the cell. This set is called the PVS or potentially visible set [ARB90, Tel92, TS91].

To compute these sets efficiently, we introduce a novel extended projection operator. If we consider each individual viewpoint in a cell, the extended projections are an underestimate of the projection of occluders, and an overestimate for the occludees. By defining these operators carefully, we can check if an occludee is hidden with respect to the entire cell by simply comparing the extended projection of the occludee to the extended projections of the occluders. Extended projections can handle occluder fusion and are efficient in the general case. We also present an improved extended projection for occludees in specific, but not uncommon, configurations. Once occluders have been projected onto a given projection plane, we can re-project them onto other planes, aggregating the effect of many small occluding objects. This occlusion sweep allows us to create occlusion maps for difficult cases such as the leaves in a forest.

The rest of this paper is organized as follows. In the next section we define the extended projection operators, and show how to compute them (section 3). An improvement for specific cases is presented in section 4. We then introduce a reprojection operator and the occlusion sweep in section 5. In section 6 we describe the preprocess to compute PVSs and discuss the interactive viewing algorithms which use the result of the preprocess. In section 7 we present the results of our implementation, together with a discussion and future work. The interested reader will find more details in the extended version present on the proceedings CD-ROM or on the authors' web page, as well as in the first author's thesis [Dur99].

2 Extended projections

To compute the potentially visible geometry in every direction for a given viewing cell, we perform a conservative occlusion test for each object (occludee) of the scene, with respect to every viewpoint in the cell. To do this efficiently, we use a representation of the occlusion caused by occluders which is based on extended projections onto a plane.

2.1 Principle

In point-based occlusion culling algorithms [GKM93, ZMHH97], occluders and occludees are projected onto the image plane. Occlusion is detected by testing if the projection of an occludee is contained in the projection of the occluders (overlap test) and if this occludee is behind them (depth test) for the given viewpoint.

Our approach can be seen as an extension of these single-viewpoint methods to volumetric viewing cells. This requires the definition of extended projection operators for occludees and occluders. To determine whether an occludee is hidden with respect to all viewpoints within the viewing cell, the new projection operators need to satisfy the following conditions: (i) the extended projection of the occludee must be contained in the extended projection of the occluders and (ii) the occludee must be behind the occluders.

Even though we describe our method for a single plane, six planes will actually be necessary to test occlusion in all directions. The position of the projection plane is an important issue and will be discussed in section 6.1.

We define a view as the perspective projection from a point onto a projection plane. However, in what follows, the projection plane will be shared by all viewpoints inside a given cell, resulting in sheared viewing frusta.

2.2 Extended projections

We next define extended projection operators for both occluders and occludees using views as defined above.

Definition 1 We define the extended projection (or Projection) of an occluder onto a plane with respect to a cell to be the intersection of the views from all points within the cell.

Definition 2 The extended projection (or Projection) of an occludee is defined as the union of the views from all points of the cell.

In what follows, we will simply use Projection to refer to an extended projection. The standard projection from a point will still be named view.

Fig. 2 illustrates the principle of our extended projections. The Projection of the occluder is the intersection of all views onto the projection plane (dark gray), while the Projection of the triangular occludee is the union of views, shown in light green.

Figure 2: Extended projection of an occluder and an occludee. The view from point V is shown in bold on the projection plane. A view from another viewpoint is also shown with thinner lines. The extended projection of an occluder on a plane is the intersection of its views. For an occludee, it is the union of the views.

This definition of Projection yields conservative occlusion tests. To show this, consider the case of a single occluder (Fig. 2). Assume (for the purposes of this example) that the occludee is behind the occluder. It is declared hidden if its Projection is contained in the Projection of the occluder. This means that the union of the occludee views is contained in the intersection of the views of the occluder. From any viewpoint V inside the cell, the view of the occludee is then contained in the view of the occluder.

Consider now the case of an occludee whose Projection is contained in the cumulative Projection of two (or more) occluders. This means that from any viewpoint V in the cell, the view of the occludee is contained in the cumulative view of the occluders. To summarize, we have:

view_V(occludee) ⊂ ⋃_{V∈cell} view_V(occludee) ⊂ ⋃_{occluders} ⋂_{V∈cell} view_V(occluder) ⊂ ⋃_{occluders} view_V(occluder)

The occludee is thus also hidden in this case (see Fig. 3). Our Projection operators handle occluder fusion.


Figure 3: Projections handle occluder fusion of two occluders A and B. We show the example of a view from point V. The view of the occludee is contained in the cumulative view of the two occluders, as determined by the Projection occlusion test.

We do not, however, claim that they always find all occluder fusions; as will be discussed in section 7.3, the position of the projection plane is central. Note that convexity is not required in any of our definitions, just as for point-based occlusion culling.

2.3 Depth

Unfortunately there is no one-to-one correspondence between a point in a Projection and a projected point of an object, as with a standard perspective view (see Fig. 4(a)). We can see that many depth values correspond to a single point in the Projection, depending on which point of the viewing cell is considered.


Figure 4: (a) We show the points of the occluder corresponding to P in the views from V and V′. The light blue region of the occluder corresponds to the set of points which project on P. (b) The Depth of a point P in the Projection of an occluder A is the maximum of the depth of the corresponding points (z_A2 here). In the case of multiple occluders, the occluder closest to the cell is considered (occluder A here). Note that Depths are negative here.

Depth comparison is more involved than in the single-viewpoint case, since a simple distance between the viewpoint and a projected point cannot be defined (see Fig. 4(a)). Our definition of depth must be consistent with the properties of occlusion: for each ray emanating from a viewpoint inside the cell and going through the projection plane, depth must be a monotonic function of the distance to the viewpoint. We define depth along the direction orthogonal to the projection plane. We chose the positive direction leaving the cell and placed zero at the projection plane.

Definition 3 We define the extended depth (or Depth) of a point in the Projection of an occluder as the maximum of the depth of all the corresponding projected points. Similarly, the extended depth (Depth) of a point in the Projection of an occludee is the minimum depth of its projected points.

See Fig. 4(b) for an illustration where, for point P and occluder A, depth z_A2 is maximum and is thus used as Depth.

If the Depth of a point in the Projection of an occluder is smaller than the Depth of the point in the Projection of an occludee, all the corresponding points of the occludee are behind the occluder from any viewpoint inside the cell. As a result, this definition of Depth satisfies our conservative depth test requirement and yields valid occlusion computation.

We construct a Depth Map as follows: for each point of the projection plane, we define the value of the Depth Map as the Depth of the occluder closest to the cell which projects onto it (that is, the occluder with the minimum Depth). In the example of Fig. 4(b), occluder A is closest to the cell, and is thus chosen.

2.4 Implementation choices

Until now, our definitions have been quite general, and do not depend on the cell, plane, or occludee, nor on the way that we test for containment of an occludee Projection in an occluder Projection, or the way that Depths are compared. We now present the choices we have made for our implementation.

The viewing cells are non-axis-aligned bounding boxes. This allows them to fit the geometry more tightly. This has proved to be especially useful for the city example, where streets are long viewing cells with poor aspect ratios which are not well suited to axis-aligned boxes.

The projection planes are restricted to the three directions of the cell (note that these three directions depend on the cell). Again, this allows us to capture occlusion more efficiently for long viewing cells such as streets.

The occludees are organized in a hierarchy of axis-aligned bounding boxes. Oriented bounding boxes [GLM96] could also be used to fit the geometry more tightly, but this would make the computation of occludee projections (Section 3.1) more expensive.

Evidently, the projection planes we use are finite, rather than infinite, rectangles. We use a pixel-map representation of the Depth Map. This may at first seem like a concession to conservatism, but we will see in section 3.2 that a conservative rasterization is used. This allows the use of graphics hardware, simplifying most of the computation, and it avoids the robustness issues inherent in geometrical computations.

We store a Depth value for each pixel of the Depth Map. As described above, for each pixel we consider the closest occluder, i.e. the minimum Depth. Occluder fusion is handled by the natural aggregation in the Depth Map. Following [GKM93], we organize the Depth Map into a pyramid for efficient testing. We call it the Hierarchical Depth Map.
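
To make the data structure concrete, the sketch below (hypothetical names and layout, not the authors' code) builds such a pyramid in software and performs the conservative rectangle test used for occludees. Each coarser level stores the maximum Depth of its 2x2 children, and pixels covered by no occluder hold +infinity, so a single coarse lookup conservatively bounds a whole block of pixels.

```cpp
#include <vector>
#include <algorithm>

struct PixelRect { int x0, y0, x1, y1; };           // half-open pixel bounds

struct HierarchicalDepthMap {
    std::vector<std::vector<float>> level;          // level[k]: (n>>k)^2 values
    int n;                                          // resolution, power of two

    // depthMap: pixels with no occluder must hold +infinity.
    HierarchicalDepthMap(std::vector<float> depthMap, int res) : n(res) {
        level.push_back(std::move(depthMap));       // level 0: the Depth Map
        for (int s = n / 2; s >= 1; s /= 2) {
            const std::vector<float>& fine = level.back();
            int fs = 2 * s;
            std::vector<float> coarse(s * s);
            for (int y = 0; y < s; ++y)
                for (int x = 0; x < s; ++x)
                    coarse[y * s + x] = std::max(
                        std::max(fine[(2 * y) * fs + 2 * x],
                                 fine[(2 * y) * fs + 2 * x + 1]),
                        std::max(fine[(2 * y + 1) * fs + 2 * x],
                                 fine[(2 * y + 1) * fs + 2 * x + 1]));
            level.push_back(std::move(coarse));
        }
    }

    // Is an occludee (Projection rectangle r, extended Depth d) hidden?
    // A pixel occludes it when the stored occluder Depth is <= d. Start
    // the recursion at the coarsest level: hidden(level.size()-1, 0, 0, r, d).
    bool hidden(int k, int x, int y, const PixelRect& r, float d) const {
        int side = 1 << k;                          // node footprint in pixels
        int px = x << k, py = y << k;
        if (px >= r.x1 || px + side <= r.x0 ||
            py >= r.y1 || py + side <= r.y0)
            return true;                            // node outside the rectangle
        int dim = n >> k;
        if (level[k][y * dim + x] <= d)
            return true;                            // whole block occludes enough
        if (k == 0)
            return false;                           // one pixel fails: visible
        for (int dy = 0; dy < 2; ++dy)              // refine into the 4 children
            for (int dx = 0; dx < 2; ++dx)
                if (!hidden(k - 1, 2 * x + dx, 2 * y + dy, r, d))
                    return false;
        return true;
    }
};
```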

3 Computation of extended projections

Using the extended projections we have defined, we can efficiently test occludee Projections against occluder Projections in a preprocess to find the potentially visible geometry for viewing cells. We next describe how to compute Projections for occludees and then for occluders (both convex and concave).

3.1 Occludee Projection

Recall that the Projection of an occludee is the union of its views. Our cells are convex, as is the bounding box of an occludee. The Projection of such a box reduces by convexity to the 2D convex hull of its views from the vertices of the cell.

To simplify computation, we use the bounding rectangle of the Projection on the projection plane as an overestimate of the Projection (see Fig. 5). We then split the problem into two simpler 2D cases. We project the cell and the occludee bounding box onto two planes orthogonal to the projection plane and parallel to the sides of the cell. The 2D projection of the cell is a rectangle, while the 2D projection of the occludee bounding box is in general a hexagon (Fig. 5 shows the special case of a quadrilateral for simplicity).


Figure 5: Occludee Projection is reduced to two 2D problems.

We then compute the separating and supporting lines [CT97] of the rectangle and hexagon. The intersections of these lines with the projection plane define a 2D bounding segment. A bounding rectangle on the projection plane is defined by the Cartesian product of the two 2D segments, as illustrated in Fig. 5. Separating lines are used when the occludee is between the cell and the projection plane, while supporting lines are used if the occludee lies behind the plane.

This method to compute an occludee Projection is general and always valid, but can be overly conservative in certain cases. In section 4 we will present an improvement of this Projection for some particular, but not uncommon, configurations.
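
Because both the cell and the occludee box are convex, the extreme separating and supporting lines always pass through one vertex of each shape, so each 2D bounding segment can also be obtained by brute force over vertex pairs. The following sketch (hypothetical names; it assumes every occludee vertex lies strictly in front of every cell vertex along the depth axis) illustrates this equivalence.

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

struct Pt { double x, y; };    // x = depth axis, 0 at the projection plane

// Union-of-views extent of a convex 2D occludee on the plane x = 0, seen
// from a convex 2D cell. The extremes are reached on lines through one
// cell vertex and one occludee vertex, which is exactly what the
// separating lines (occludee in front of the plane) or supporting lines
// (occludee behind it) express.
void boundingSegment(const std::vector<Pt>& cell,
                     const std::vector<Pt>& occludee,
                     double& yMin, double& yMax) {
    yMin =  1e300; yMax = -1e300;
    for (const Pt& v : cell)
        for (const Pt& p : occludee) {
            assert(p.x > v.x);                 // occludee beyond the cell
            double t = -v.x / (p.x - v.x);     // line(v,p) hits x = 0 at t
            double y = v.y + t * (p.y - v.y);
            yMin = std::min(yMin, y);
            yMax = std::max(yMax, y);
        }
}
// The 3D occludee Projection is then the Cartesian product of the two
// segments obtained on the two orthogonal 2D projections.
```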

3.2 Projection of convex occluders using intersections

By convexity of the cell and occluder, the intersection of all possible views from inside the cell is the intersection of the views from the vertices of the cell. This is illustrated in Fig. 6(a). This Projection can be computed using standard geometric intersection computation.

We have nevertheless developed an efficient method which takes advantage of the graphics hardware. It is a multipass method using the stencil buffer. The stencil buffer can be written to, and compared to a test value for conditional rendering.

The basic idea is to project the occluder from each vertex of the cell, and increment the stencil buffer of the projected pixels without writing to the frame-buffer. The pixels in the intersection are then those with a stencil value equal to the number of vertices.

The consistent treatment of Depth values (as described in section 2.3) in the context of such an approach requires some care. More details on the hardware implementation are given in appendix A.
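
In OpenGL terms, the multipass counting can be sketched as follows — a minimal sketch with hypothetical types and helpers (Occluder, Vec3, Plane, loadShearedProjection, drawOccluder, markIntersection), and without the Depth handling of appendix A:

```cpp
#include <GL/gl.h>

// Stencil-based intersection of the 8 views of a convex occluder, one
// pass per cell vertex. All passes share the same pixel grid on the
// projection plane (sheared frusta), so a pixel lies in the extended
// projection iff its stencil count reaches 8.
void projectConvexOccluder(const Occluder& occ,       // assumed type
                           const Vec3 cellVerts[8],   // assumed type
                           const Plane& projPlane) {  // assumed type
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glDisable(GL_DEPTH_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // stencil only
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);           // count covering passes

    for (int v = 0; v < 8; ++v) {
        loadShearedProjection(cellVerts[v], projPlane); // assumed helper
        drawOccluder(occ);                              // assumed helper
    }

    // Second stage: touch only pixels covered in all 8 passes.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 8, ~0u);
    markIntersection(projPlane);                      // assumed helper
    glDisable(GL_STENCIL_TEST);
}
```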

If standard OpenGL rasterization is used, the Projections computed are not conservative, since partially covered pixels on the edges may be drawn.


Figure 6: Convex occluder Projection. (a) The Projection of an occluder is the intersection of the views from the vertices of the cell (we have represented a square cell for simplicity). (b) Computation using the stencil buffer. The values of the stencil buffer indicate the number of views projecting onto a pixel.

We use a technique proposed by Wonka et al. [WS99] which "shrinks" polygons to ensure that only completely covered pixels will be drawn. Each edge of a displayed polygon is translated in the 2D projection plane by a vector of (±1 pixel, ±1 pixel) towards the interior of the polygon (the sign is determined by the normal). The 2D lines corresponding to the edges are translated and the vertices are recomputed by intersection. Note that only silhouette edges need be translated. If the polygons to be displayed are too small, the shrinking results in a void Projection.
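
A minimal 2D sketch of this shrinking (hypothetical names; it assumes a counter-clockwise convex contour, and translates each edge line by the (±1, ±1)-pixel vector described above):

```cpp
#include <vector>
#include <cmath>
#include <optional>

struct V2   { double x, y; };
struct Line { double a, b, c; };  // a*x + b*y >= c holds inside the polygon

// Edge line of a counter-clockwise polygon, translated towards the
// interior by the vector (±pixel, ±pixel), i.e. by (|nx|+|ny|)*pixel
// along the unit inward normal (nx, ny).
static Line shrunkEdgeLine(V2 p, V2 q, double pixel) {
    double nx = -(q.y - p.y), ny = q.x - p.x;         // inward for CCW
    double len = std::hypot(nx, ny);
    nx /= len; ny /= len;
    double shift = (std::fabs(nx) + std::fabs(ny)) * pixel;
    return { nx, ny, nx * p.x + ny * p.y + shift };
}

// Conservative occluder contour: intersect consecutive shrunk edge lines.
// Returns nothing if a corner degenerates (void Projection).
std::optional<std::vector<V2>> shrinkPolygon(const std::vector<V2>& poly,
                                             double pixel) {
    size_t n = poly.size();
    std::vector<V2> out(n);
    for (size_t i = 0; i < n; ++i) {
        Line l0 = shrunkEdgeLine(poly[(i + n - 1) % n], poly[i], pixel);
        Line l1 = shrunkEdgeLine(poly[i], poly[(i + 1) % n], pixel);
        double det = l0.a * l1.b - l1.a * l0.b;
        if (std::fabs(det) < 1e-12) return std::nullopt;
        out[i] = { (l0.c * l1.b - l1.c * l0.b) / det,
                   (l0.a * l1.c - l1.a * l0.c) / det };
    }
    // A polygon narrower than a pixel inverts its orientation; a full
    // implementation would detect this and report a void Projection.
    return out;
}
```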

3.3 Concave occluder slicing

Concave polygonal meshes can be treated in our method by computing the Projection of each individual triangle of the mesh. However, some gaps will appear between the Projections, resulting in the loss of the connectivity of occluders. To overcome this problem, we use the following simple observation: the Projection of a closed manifold lying in the projection plane is the object itself. We thus consider the intersection of concave objects with the projection plane, which we call a slice of the object.

If the projection plane cuts the object, we compute the intersection and find a 2D contour. The normals of the faces are used to treat contours with holes accurately. The contour is then conservatively scan-converted with the value of the Depth Map set to zero (i.e. the Depth value of the projection plane).

4 Improved Projection of occludees

In this section we present an improvement to the extended projection calculation for the case of convex or planar occluders, for configurations in which our initial Projection yields results which are too conservative. In what follows, we discuss only the case where the occludee is between the projection plane and the cell. If the occludee is behind the plane, the Projection which we presented in section 2.2 yields satisfactory results.

In Fig. 7(a) we show a 2D situation in which our Projection is too restrictive. The Projection of the occludee is not contained in the Projection of the occluder, even though the occludee is evidently hidden from any viewpoint in the cell. As illustrated in Fig. 10(a), we will show that in this case we can use the supporting lines instead of the separating lines in the computation of the occludee Projection.

To improve our occlusion test, we will first discuss conditions required to prove that an occludee is hidden by an occluder. We will then deduce a sufficient condition on the occludee Projection to yield valid tests. In 2D, this condition can be simply translated into the definition of an improved Projection. Based on this 2D construction, we develop an improved 3D occludee Projection, using a projection approach similar to that of section 3.1.


Figure 7: (a) Configuration where the initial Projection is too restrictive. The Projection of the occludee is not contained in the Projection of the occluder, even though it is obviously hidden. (b) Any point P′ inside the cone defined by P and the cell and behind the occluder is hidden.

4.1 Some properties of umbra

Before defining the actual improved Projection, we introduce two important properties used to demonstrate that our improved Projection provides valid occlusion results.

Property 1 For a given point P in the occluder umbra region with respect to a cell, all points P′ in space behind the occluder which are also contained in the cone defined by P and the cell are hidden with respect to the cell.

The occluder umbra region with respect to a cell is the umbra (totally hidden) volume which results if the cell is considered as an area light source. This property is illustrated in Fig. 8. The proof is evident for convex occluders, since the cone defined by P′ and the cell is contained in the cone defined by P. The section of the occluder which occludes P is a superset of the section which occludes P′.


Figure 8: Case of a concave planar occluder. The intersection of the occluder and the cone (defined by point P and the cell) is shown in dark blue. Note that the planar occluder need not be parallel to the projection plane.

The 3D case of concave planar occluders is similar to the convex case. Consider a point P in the umbra of a concave occluder (Fig. 8). Since P is in the umbra, the cone defined by P and the cell is "occluded": the intersection of this cone and the occluder is equal to the intersection of the cone and the plane of the occluder. The intersection of the cone defined by P′ (the light blue inner square in Fig. 8) is a subset of this intersection. P′ is thus also hidden.

This is not true for general concave occluders, as illustrated in Fig. 9.


Figure 9: Point P′ is in the cone defined by point P in the shadow of the concave occluder A ∪ B, but it can see the view cell.

Planarity of the occluder is required to ensure that the intersection of the cone and the occluder is convex. The planar occluder can be concave and have a hole (as in Fig. 8), but if P is in the umbra, the intersection is convex.

To yield valid occlusion tests, our improved Projection must have the following property:

Property 2 The union of the cones defined by each point of the improved Projection of the occludee and the viewing cell must contain the occludee.

To see why this property is important, consider the cone defined by P and the cell in Fig. 7(b). The points of the occludee contained in this cone are occluded by Property 1. Consider the union of the cones defined by all the points of a hypothetical improved Projection and the cell. If the occludee is contained in this union of cones, any point of the occludee is in one of these cones, and is thus hidden by Property 1. Note that occluder fusion is still taken into account: all points P defining the cones need not be hidden by the same convex or planar occluder.


Figure 10: Improved Projection in 2D.

4.2 2D improved Projection

An improved Projection respecting property 2 is defined in 2D by considering the supporting lines of the cell and the occludee, as illustrated in Fig. 10(a). However, if the occludee is too small, the two supporting lines intersect in front of the projection plane, at the vanishing point. In this case, any point P between the intersections of the two supporting lines and the projection plane satisfies property 2 (Fig. 10(b)). In practice, we use the mid-point in our calculations.

Note that this computation using supporting lines is the same for objects behind the projection plane, since the supporting lines then define the limits of the union of the views of the occludee. As opposed to the standard Projection presented in section 2.2, we no longer have to consider separating lines. In the case where the occludee is in front of the projection plane, we have substituted the umbra for the penumbra.

We now summarize our 2D improved Projection for any occludee (in front of or behind the projection plane). We compute the intersection with the projection plane of the upper and lower supporting lines defined by the cell and the occludee. If the plane is beyond the vanishing point (i.e., if the intersection of the lower supporting line is above the intersection of the upper line), we consider the mid-point of the two intersections; otherwise we consider the entire segment. The point or segment is then used in the occlusion test. This is illustrated in Fig. 10. The 2D improved Projection of an occludee is a segment or a point, depending on whether the projection plane is behind the vanishing point.
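
The following 2D sketch (hypothetical names; plane at x = 0, depth increasing away from the cell, occludee strictly in front of every cell vertex) finds the supporting-line cuts by brute force over vertex pairs and applies the mid-point rule:

```cpp
#include <vector>

struct Pt { double x, y; };

static double cutOnPlane(Pt v, Pt p) {               // line(v,p) at x = 0
    double t = -v.x / (p.x - v.x);
    return v.y + t * (p.y - v.y);
}

// sign = +1: upper supporting line (every vertex below or on it);
// sign = -1: lower supporting line (every vertex above or on it).
static double supportingCut(const std::vector<Pt>& cell,
                            const std::vector<Pt>& occ, double sign) {
    for (const Pt& v : cell)
        for (const Pt& p : occ) {
            auto sideOK = [&](const Pt& q) {
                double c = (p.x - v.x) * (q.y - v.y)
                         - (p.y - v.y) * (q.x - v.x);
                return sign * c <= 1e-9;             // q on the required side
            };
            bool ok = true;
            for (const Pt& q : cell) ok = ok && sideOK(q);
            for (const Pt& q : occ)  ok = ok && sideOK(q);
            if (ok) return cutOnPlane(v, p);
        }
    return 0.0;                                      // unreachable for convex input
}

struct Segment2D { double lo, hi; };                 // lo == hi means a point

Segment2D improvedProjection2D(const std::vector<Pt>& cell,
                               const std::vector<Pt>& occ) {
    double up = supportingCut(cell, occ, +1.0);
    double lo = supportingCut(cell, occ, -1.0);
    if (lo <= up) return { lo, up };                 // plane before vanishing point
    double mid = 0.5 * (lo + up);                    // beyond it: the mid-point
    return { mid, mid };
}
```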

4.3 3D improved Projection

Unfortunately, supporting planes cannot be used in 3D as simply as supporting lines, and the vanishing point is ill-defined. Even if the umbra volume of an occludee intersects the projection plane, the union of the cones defined by the cell and the points of the plane in umbra is not guaranteed to contain the occludee.

We thus project onto two planes orthogonal to the projection plane and parallel to faces of the cell, as illustrated in Fig. 11(a). On each plane we use our 2D improved Projection. The Cartesian product of these two 2D improved Projections defines our 3D improved Projection.

The 3D improved Projection is the Cartesian product of 2D improved Projections which are points or segments. It is a rectangle (segment × segment), a segment (segment × point) or a point (point × point).

This improved 3D Projection verifies property 2. Consider a point P′ of the occludee. Its projection P′_i, i = 1, 2, onto each of the two planes is inside a cone defined by a point P_i of the 2D improved Projection and the projection of the cell. The 3D point P′ is thus in the cone defined by the cell and the Cartesian product P of P_1 and P_2 (see Fig. 11(a)).

This is true because the cell is the Cartesian product of its 2D projections, since it is a box and the 2D planes are parallel to the cell faces. Thus a cone defined by a point P and the cell is the Cartesian product of the cones defined by the 2D projections of the cell and the 2D projection of P.

Our 3D improved Projection thus yields conservative occlusion tests in the case of convex or planar blockers. As mentioned previously in the 2D case, occluder fusion is still handled, since all cones containing the occludee need not be hidden by the same occluder.

5 Occluder reprojection and occlusion sweep

In the previous sections, we have limited the discussion to a single projection plane for a given cell. However, it can be desirable to use multiple parallel projection planes to take into account the occlusion due to multiple groups of occluders (Fig. 11(b)).

An important property of our method is that we can re-project occluders onto subsequent planes. The aggregated Projections are reprojected to compute a Depth Map on new projection planes (Fig. 11(b)). Occluder fusion occurring on the initial plane is thus also taken into account on the new planes. We next describe how we perform reprojection, and its generalization which we call the occlusion sweep.


Figure 11: (a) The 3D improved Projection is the Cartesian product of two 2D improved Projections. Any point P′ of the occludee is contained in a cone defined by one point P of the 3D improved Projection and the cell. This cone can be constructed by considering the two corresponding 2D projections. (b) If projection plane 2 is used for re-projection, the occlusion of group 1 of occluders is not taken into account. The shadow cone of one cube shows that its Projection would be void, since it vanishes in front of plane 2. The same constraints apply for group 2 and plane 1. It is thus desirable to project group 1 onto plane 1, and re-project the aggregate equivalent occluder onto plane 2. (c) Occluder reprojection. The extended occlusion map of plane 1 is re-projected onto plane 2 from the center of the equivalent "cell". It is then convolved with the inverse image of the equivalent "cell" (dark region on plane 2).

5.1 Reprojection

The Projections of occluders can be reprojected only if the initial projection plane is behind them. In this case, the initial Projections are inside the umbra of the occluders and can thus be used as a single conservative equivalent occluder (see appendix B for a proof).

The Projections onto the initial projection plane (and the conservative bit-map encoding) define a new planar occluder. The reprojection scheme we are about to present is in fact an extended projection operator for the special case of planar blockers parallel to the projection plane.

We base our reprojection technique on the work by Soler and Sillion [Max91, SS98] on soft shadow computation, even though we are interested only in the umbra region. They show that in the case of planar blockers parallel to the source and to the receiver, the computation of soft shadows is equivalent to the convolution of the projection of the blockers with the inverse image of the source. Their method is an approximation in the general case, but we use it in the particular case of parallel source, blockers and receiver, where it is exact.

We are nearly in this ideal case: our blocker (the Projections on the initial projection plane) and the receiver (the new projection plane) are parallel. However, our light source (the cell) is a volume. We define an equivalent "cell" which is parallel to the projection planes and which yields a conservative Projection on the new projection plane. Its construction is simple and illustrated in Fig. 11(c). We use the fact that our projection planes are actually finite rectangles. Our equivalent "cell" is the planar rectangle defined by the face of the cell closest to the plane, the supporting planes of the cell and the final projection rectangle.

Any ray going through the cell and the projection plane also intersects our equivalent "cell". Thus if an object is hidden from the equivalent "cell", it is also hidden from the cell.

To obtain a conservative umbra region, the inverse image of our equivalent cell (i.e. the convolution kernel) is conservatively rasterized (overestimated, as opposed to the underestimating conservative rasterization used for occluders), which is straightforward since it is a 2D rectangle.

The convolution method computes continuous grey levels. To obtain a binary Occlusion Map, we keep only the black pixels (i.e. the umbra region). A Depth Map can be used on the final plane. The Depth of the re-projected equivalent occluder is the depth of the initial plane.

5.2 Occlusion sweep

To handle the case where multiple concave or small occluders have to be considered, we generalize re-Projection to the occlusion sweep. This is a sweep of the scene by parallel projection planes leaving the cell. Occlusion is aggregated on these planes using re-Projection.

We project the occluders which lie in front of the current projection plane P onto P, and also compute the slices of the concave objects which intersect the plane. We then advance to the following projection plane by re-projecting the Projections of the previous plane. We compute the new slices of concave objects, and project the occluders which lie between the two planes. This defines the Depth Map of the new projection plane.

The distance ∆D between two projection planes is chosen to make optimal use of the discrete convolution. The size of the convolution kernel (the inverted image of the equivalent cell) must be an integer number K of pixels (there is then no loss in the conservative rasterization). Let D be the distance between the initial plane and the equivalent "cell", and C the size of the equivalent "cell". N is the resolution of the Depth Map, and P is the size of the projection plane. Applying Thales' theorem gives:

∆D = D (K − 1) P / (C N)

In practice we use K = 5 pixels. Note that this formula results in planes which are not equidistant. This naturally reflects the fact that occlusion varies more quickly near the viewing cell.
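
A small sketch of the resulting plane placement (hypothetical names; it simply iterates the formula above, so the spacing grows with the distance to the equivalent "cell"):

```cpp
#include <vector>

std::vector<double> sweepPlanes(double d0,     // equivalent cell -> first plane
                                double sceneDepth,
                                double C,      // size of the equivalent "cell"
                                double P,      // size of the projection plane
                                int N,         // Depth Map resolution
                                int K = 5) {   // kernel size in pixels
    std::vector<double> planes;
    double D = d0;
    while (D < sceneDepth) {
        planes.push_back(D);
        D += D * (K - 1) * P / (C * N);        // Delta-D = D (K-1) P / (C N)
    }
    return planes;
}
```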

6 Occlusion culling algorithm

The elements presented so far (Projection and re-Projection of occluders and occludees) can now be put together to form a complete occlusion culling algorithm. The algorithm has two main steps: preprocessing and interactive viewing.

6.1 Preprocess

The preprocess creates the viewing cells and the PVSs corresponding to the original input scene. Viewing cells are organized in a spatial hierarchical data structure, potentially related to the specific application (e.g., the streets of a city). The geometry of the scene itself is organized into a separate spatial hierarchy (e.g., a hierarchy of bounding boxes).

Two versions of the preprocess have been implemented: one which simply uses the Projection onto 6 planes for each view cell (used in the city example), and one which uses the occlusion sweep (used for the forest scene).

Adaptive preprocess

We start by performing an occlusion computation for each viewing cell. First, we choose the appropriate occluders and projection planes (see the following two sections). We then Project the occluders and build a Hierarchical Depth Map. Finally, the occludees are tested recursively. If a node is identified as hidden or fully visible, the recursion stops. By fully visible, we mean that its Projection intersects no occluder Projection, in which case no child of this node can be identified as hidden. The occludees declared visible are inserted in the PVS of the cell.

If we are satisfied with the size of the PVS, we proceed to the next cell. Otherwise, the cell is subdivided and we recurse on the sub-cells. Occlusion culling is then performed only on the remaining visible objects, i.e. those contained in the PVS of the parent.

Performing the computation on smaller viewing cells improves occlusion detection because the viewpoints are closer to each other. The views from all these viewpoints are thus more similar, resulting in larger occluder Projections and smaller occludee Projections.

The termination criterion we use is a polygon budget: if the PVS has more than a certain number of polygons, we subdivide the cell (down to a minimum size threshold). A more elaborate criterion would be to compare the PVS to sample views from within the cell. Note that our adaptive process naturally subdivides cells more in zones of larger visibility changes. However, more elaborate discontinuity-meshing-like strategies could be explored.
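
In pseudo-C++, the adaptive per-cell loop looks as follows — a high-level sketch with assumed stand-in types and helpers (Cell, Node, DepthMap, projectOccluders, testRecursive) and a hypothetical budget value, not the authors' implementation:

```cpp
#include <vector>

// Minimal stand-in types; the real system stores geometry, Projections
// and the Hierarchical Depth Map described in sections 2-3.
struct Node { long polygons = 0; };
struct Cell {
    std::vector<Node*> pvs;
    bool minimal = false;
    std::vector<Cell> children;
    std::vector<Cell>& subdivide();      // assumed: splits the cell
};
struct DepthMap;                         // assumed: built in hardware
DepthMap* projectOccluders(const Cell&, const std::vector<Node*>& parentPVS);
void testRecursive(Node*, const DepthMap*, std::vector<Node*>& pvs);

const long kPolygonBudget = 50000;       // hypothetical value

void computePVS(Cell& cell, const std::vector<Node*>& parentPVS) {
    // Occluders hidden from the parent have already been culled, which
    // matters because occluder Projection is the bottleneck.
    DepthMap* hdm = projectOccluders(cell, parentPVS);

    std::vector<Node*> pvs;
    for (Node* node : parentPVS)
        testRecursive(node, hdm, pvs);   // stops on hidden / fully visible

    long count = 0;
    for (Node* n : pvs) count += n->polygons;

    if (count <= kPolygonBudget || cell.minimal) {
        cell.pvs = pvs;                  // stored on disk as a delta-PVS
        return;
    }
    for (Cell& sub : cell.subdivide())   // smaller cells: views more alike,
        computePVS(sub, pvs);            // so occlusion detection improves
}
```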

PVS data can become overwhelmingly large in terms of the memory required. To avoid this, we use a delta-PVS storage mechanism. We store the entire PVS for a single arbitrary initial cell (or for a small number of seed or "key" cells). Adjacencies are stored with each cell; a cell simply contains the difference with respect to the PVSs of the neighboring cells. Our storage scheme is thus not very sensitive to the number of viewing cells, but to the actual complexity of the visibility changes. Other compression schemes could also be implemented [vdPS99].
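
A possible record layout for such a delta-PVS (a sketch under assumed naming, not the authors' file format):

```cpp
#include <vector>
#include <cstdint>
#include <algorithm>

struct DeltaPVS {
    uint32_t neighbour;               // id of the adjacent cell
    std::vector<uint32_t> shown;      // object ids entering the PVS
    std::vector<uint32_t> hidden;     // object ids leaving the PVS
};

struct CellRecord {
    std::vector<uint32_t> fullPVS;    // only filled for the "key" cells
    std::vector<DeltaPVS> deltas;     // one per adjacent cell
};

// Apply the delta stored for the adjacency being crossed.
void applyDelta(std::vector<uint32_t>& currentPVS, const DeltaPVS& d) {
    for (uint32_t id : d.hidden)
        currentPVS.erase(std::remove(currentPVS.begin(), currentPVS.end(), id),
                         currentPVS.end());
    currentPVS.insert(currentPVS.end(), d.shown.begin(), d.shown.end());
}
```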

Occluder selection

For each viewing cell we choose the set of relevant occluders using a solid-angle heuristic similar to those presented previously [CT97, ZMHH97, HMC+97]. The importance of an occluder is judged based on its approximate solid angle; in practice, it is computed at the center of the viewing cell. To optimize this selection, we use a preprocess similar to the one proposed in previous approaches [HMC+97, ZMHH97]. For each occluder, we precompute the region of the scene for which it may be relevant. We store this information in a kd-tree. Then, for each cell, we use the occluders stored at the nodes of the kd-tree that the cell spans.
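
One common form of this heuristic, sketched below with hypothetical names (the paper does not give its exact scoring formula), approximates the subtended solid angle by area · cos θ / r² evaluated at the cell center:

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct OccluderInfo { double area; Vec3 center, normal; int id; };

// The classic area * |cos(theta)| / r^2 solid-angle estimate.
static double approxSolidAngle(const OccluderInfo& o, Vec3 cellCenter) {
    Vec3 d = sub(cellCenter, o.center);
    double r2 = dot(d, d);
    if (r2 < 1e-12) return 1e9;            // degenerate: treat as maximal
    double cosT = std::fabs(dot(o.normal, d)) /
                  std::sqrt(r2 * dot(o.normal, o.normal));
    return o.area * cosT / r2;
}

// Keep the maxCount candidates that subtend the largest solid angle.
std::vector<OccluderInfo> selectOccluders(std::vector<OccluderInfo> cands,
                                          Vec3 cellCenter, size_t maxCount) {
    std::sort(cands.begin(), cands.end(),
              [&](const OccluderInfo& a, const OccluderInfo& b) {
                  return approxSolidAngle(a, cellCenter) >
                         approxSolidAngle(b, cellCenter);
              });
    if (cands.size() > maxCount) cands.resize(maxCount);
    return cands;
}
```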

To improve the efficiency of occlusion tests, we have also implemented an adaptive scheme which selects more occluders in the directions where many occludees are still identified as visible, in a manner similar to Zhang [ZMHH97].

Since our preprocess is recursive, we also use the PVS of the parent cell to cull hidden occluders. Since, as we shall see, the Projection of the occluders is the bottleneck of the method, this results in large savings.

Choice of the projection plane

The heuristic we use is simple, and based on maximizing the number of pixels filled in our Depth Map. We place a candidate plane just behind each occluder. We then evaluate the size of the Projection on each such plane for each occluder. This method is brute force, but remains very fast.

Moreover, since we discard occluders which are hidden from the parent cell, the heuristic is not biased towards regions where many redundant occluders are present.

Six projection planes are used to cover all directions. Unlike, e.g., the hemicube methods [CG85], our six planes do not define a box. The planes are extended (by a factor of 1.5 in our tests) to improve occlusion detection in the corner directions, as shown in Fig. 15(c).

6.2 Interactive Viewing

For on-line rendering we use the SGI Performer library, which maintains a standard scene-graph structure [RH94]. A simple flag for each node determines whether it is active for display. Each time the observer enters a new cell, the visibility status of the nodes of the scene-graph is updated. This is very efficient thanks to our delta-PVS encoding. Nodes which were previously hidden are restored as visible, while newly hidden ones are marked as inactive. The viewer process adds very low CPU overhead, since it only performs simple scene-graph flag updates.
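
With the delta-PVS record sketched earlier, the per-cell update reduces to a few flag flips (a hypothetical scene-graph API standing in for the Performer calls):

```cpp
#include <cstdint>
#include <vector>

struct DeltaPVS {                        // as sketched in section 6.1
    std::vector<uint32_t> shown, hidden;
};
struct SceneGraph {                      // assumed stand-in for Performer
    struct NodeRef { void setActive(bool); };
    NodeRef node(uint32_t id);
};

// Entering a new cell: flip only the flags that actually change.
void onEnterCell(const DeltaPVS& d, SceneGraph& sg) {
    for (uint32_t id : d.hidden) sg.node(id).setActive(false); // newly hidden
    for (uint32_t id : d.shown)  sg.node(id).setActive(true);  // restored
}
```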

Dynamic objects with static occluders can be treated using our extended projection approach; dynamic occluders, however, cannot be handled. A hierarchy of bounding boxes is constructed over the regions of space in which dynamic objects can move [SC96]. During the preprocess, these bounding boxes are also tested for occlusion. In the viewing process, a dynamic object is displayed if the bounding box containing it is in the current PVS.

In the urban driving simulator example, the roads form the dynamic-object bounding box hierarchy. During the interactive walkthrough, a car is displayed only if the street in which it lies is visible from the current street of the observer (see the video).

One of the advantages of our preprocess is that it could be used for scenes which are too big to fit into main memory, or which are loaded on-the-fly from the network. The techniques developed by Funkhouser et al. [Fun96] can easily be adapted. A separate process is in charge of the database management. Using the PVS of the neighboring viewing cells, the priority of the objects which are not yet loaded is evaluated. Similarly, a priority order is computed to delete invisible objects from memory. The prediction offered by our method cannot be achieved by previous online occlusion culling methods.

7 Implementation and results

We have implemented two independent systems for the preprocessor and the viewer. The preprocessor uses graphics hardware acceleration wherever possible, notably for the Projection and convolution. The Depth Maps are read from graphics memory, and the occludee test is performed in software. The delta-PVSs computed are stored to disk and made available to the interactive viewer.

Our current implementation of the viewer is based on SGI Performer [RH94]. Performer implements a powerful scene-graph and view-frustum culling, providing a fair basis for comparison. All timings presented are on an SGI Onyx2 Infinite Reality (200 MHz R10K) using one processor for the preprocess and two processors for the viewer.

7.1 Projection onto a single projection plane

The test scenes we have used for the single-plane method consist of a model of a city district, which contains a total of about 150,000 polygons, replicated a variable number of times. The improved Projection was used to cull occludees lying between the projection plane and the occluders more efficiently.


We first present statistics on the influence of the resolution of the Depth Maps on the running time of the preprocess in Fig. 12. Surprisingly, the curve is not monotonic. If the resolution is below 256x256, the occlusion efficiency (i.e. the percentage of geometry declared hidden) of the method is low because of the conservative rasterization. More recursion is thus needed, and more occluders are used because they have not been culled from the parent cell. If the resolution is higher than 256x256, occlusion efficiency is not really improved, but the time required to build the Hierarchical Depth Map becomes a bottleneck (the huge increase for a resolution of 1,024x1,024 may be explained by cache failures). This part of the algorithm could have been optimized using the graphics hardware, but the very low occlusion efficiency gain made us use a resolution of 256x256 for the rest of our computations. Since we use pyramids, only resolutions which are powers of 2 were used.

Figure 12: (left) Preprocess running time (sec./cell) versus depth map resolution, broken down into occluder selection, occluder projection, occludee test, and pyramid construction. (right) Geometry declared hidden vs. resolution of the depth map: 39.0% at 128x128, 92.2% at 256x256, 92.6% at 512x512, and 92.7% at 1,024x1,024. All timings are for a scene consisting of 600,000 polygons.
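
For concreteness, the pyramid construction and the conservative coarse-level test can be sketched as follows, in the spirit of the hierarchical z-buffer [GKM93] (C++; the array layout and function names are ours, not the actual implementation):

    #include <algorithm>
    #include <vector>

    // Level 0 is the full-resolution extended depth map (res x res, res a
    // power of two); pixels covered by no occluder hold +infinity. Each
    // coarser texel keeps the MAX depth of its 4 children, so a single
    // comparison at a coarse level is conservative.
    std::vector<std::vector<float>> buildPyramid(std::vector<float> base, int res) {
        std::vector<std::vector<float>> pyr;
        pyr.push_back(std::move(base));
        for (int r = res / 2; r >= 1; r /= 2) {
            const std::vector<float>& fine = pyr.back();
            int fr = 2 * r; // resolution of the finer level
            std::vector<float> coarse(r * r);
            for (int y = 0; y < r; ++y)
                for (int x = 0; x < r; ++x) {
                    float a = fine[(2 * y) * fr + 2 * x];
                    float b = fine[(2 * y) * fr + 2 * x + 1];
                    float c = fine[(2 * y + 1) * fr + 2 * x];
                    float d = fine[(2 * y + 1) * fr + 2 * x + 1];
                    coarse[y * r + x] = std::max(std::max(a, b), std::max(c, d));
                }
            pyr.push_back(std::move(coarse));
        }
        return pyr;
    }

    // An occludee is hidden if its minimal depth is greater than the
    // occluder depth stored at every pixel its extended projection covers.
    // Testing one coarse texel first is conservative; on failure one
    // descends to the finer levels (omitted here).
    bool hiddenAtTexel(const std::vector<std::vector<float>>& pyr, int res,
                       int level, int x, int y, float occludeeMinDepth) {
        int r = res >> level; // resolution of this level
        return occludeeMinDepth > pyr[level][y * r + x];
    }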

We varied the complexity of the input scene by replicating the district. The average preprocessing time per cell is presented in Fig. 13. The projection of the occluders is the most time-consuming task. A log-log linear fit reveals that the observed growth of the total running time per cell is in √n, where n is the number of input polygons. If the total volume of all viewing cells varies proportionally to the number of input polygons (i.e., the number of cells grows linearly in n), the growth of the complete preprocess is then n · √n = n^1.5.

The improved Projection presented in Section 4 results in PVSs 5 to 10% smaller. This is not dramatic, but recall that it comes at no cost. In addition, the implementation is simpler, since only supporting lines are considered.

Figure 13: Preprocess running time (sec./cell) versus input scene size (Kpoly), with separate curves for the total and for occluder projection, occluder selection, occludee test, and pyramid construction.

We used a scene consisting of the city district replicated 12 times (1.8M polygons) and 3,000 moving cars of 1.4K polygons each, resulting in a total of 6M polygons. We performed the preprocess for the streets of only one district. The 1,500 initial visibility cells were subdivided into 6,845 leaf cells by our adaptive method (12,166 cells were evaluated, parent cells included). The average occlusion efficiency was 96%, and the delta-PVS required 60 MBytes of storage. The total preprocess took 165 minutes (0.81 s/cell), of which 101 minutes were spent on the occluder Projection. We can extrapolate that the preprocess would take 33 hours for the streets of all 12 districts. For an 800-frame walkthrough, an average speed-up of 18 was obtained over SGI Performer view-frustum culling (Fig. 14). This is lower than the average geometry ratio of 24 (i.e., the number of polygons after frustum culling divided by the number of polygons after occlusion culling) because of constant costs in the walkthrough loop of Performer. Fig. 15 illustrates our results.


Figure 14: Statistics gathered during an interactive walkthrough for a scene of 6M polygons on an Onyx2. (a) Total frame time (app+cull+draw) in seconds. (b) Number of triangles sent to the graphics pipeline.

As an informal comparison, we have implemented the algorithm of Cohen-Or et al. [COFHZ98, COZ98]. For the city model, their algorithm declares four times more visible objects on average, and the computation time in our implementation is 150 times higher than for extended projection. The large difference in speed is due to the necessity in their algorithm of casting rays between each cell and each occludee. Our implementation of their method uses early ray termination when a ray is occluded, but our ray-caster could be much improved, so the comparison should be taken as an indication only.

7.2 Occlusion sweep

To test the occlusion sweep, we used a model of a forest containing around 7,750 trees with 1,000 leaves each (7.8M triangles). The Projections of the leaves close to the projection plane were computed using the convex occluder Projection with the stencil buffer. The size of the convolution kernel was fixed to 5 pixels, and we used 15 planes for the sweep. The occlusion sweep took around 23 seconds per cell, i.e., 59 minutes for all 158 cells (no adaptive recursion was performed). This is slower than for the city because 15 successive planes are used for the occlusion sweep. 95.5% of the geometry was culled. Fig. 17 shows our sweeping process (we show only one quadrant of the forest for clarity). Observe how the leaves aggregate on the occlusion map.
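
One step of the sweep can be sketched as follows (C++). We assume here that reprojecting the binary umbra map onto the next plane reduces to a conservative erosion, i.e., a pixel stays in umbra only if its whole kernel footprint was in umbra, with the 5-pixel kernel used in our experiments; the identifiers are ours:

    #include <vector>

    // One reprojection step of the occlusion sweep, as a binary erosion:
    // a pixel of the new plane is in umbra only if ALL pixels of the
    // previous map under the kernel footprint are in umbra.
    std::vector<unsigned char> reprojectUmbra(
            const std::vector<unsigned char>& umbra, int res, int kernel = 5) {
        std::vector<unsigned char> out(res * res, 0);
        int h = kernel / 2;
        for (int y = 0; y < res; ++y)
            for (int x = 0; x < res; ++x) {
                bool blocked = true;
                for (int dy = -h; dy <= h && blocked; ++dy)
                    for (int dx = -h; dx <= h && blocked; ++dx) {
                        int xx = x + dx, yy = y + dy;
                        // pixels outside the map are treated as not occluded
                        if (xx < 0 || yy < 0 || xx >= res || yy >= res ||
                            !umbra[yy * res + xx])
                            blocked = false;
                    }
                out[y * res + x] = blocked ? 1 : 0;
            }
        return out;
    }

The Projections of the occluders lying close to the new plane are then added to the reprojected map before the next step of the sweep.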


Figure 16: Statistics gathered during an interactive walkthrough for a forest scene of 7.8M polygons on an Onyx2. (a) Total frame time (app+cull+draw) in seconds. (b) Number of triangles sent to the graphics pipeline.

For a walkthrough of 30 seconds, we obtained an average speed-up of 24 for the interactive display, achieving a framerate between 8.6 and 15 fr/s (Fig. 16).

Figure 15: Results of our algorithm. (a) The scene from a bird's-eye view with no culling; the scene contains 600,000 building polygons and 2,000 moving cars of 1,000 polygons each. (b) The same view using the result of our visibility culling algorithm (the terrain and street are poorly culled because of the poor hierarchical organization of our input scene). (c) Visualization of the occlusion culling approach, where yellow boxes represent the elements of the scene-graph hierarchy which have been occluded.

Figure 17: The sweeping process: (a) part of our 7.8M polygon forest model; (b)-(d) three positions of the sweep projection plane. The yellow bounding boxes are the culled occludees.

7.3 Discussion

We first have to note that the occlusions our method identifies are a subset of those detected by a point-based method [GKM93, ZMHH97]. Advantages of those methods also include their ability to treat dynamic occluders and the absence of preprocess or PVS storage. However, our method incurs no cost at display time, while in the massive rendering framework implemented at UNC [ACW+99] two processors are sometimes used just to perform occlusion culling. Moreover, for some applications (games, network-based virtual tourism, etc.), the preprocessing time is not really a problem since it is performed once by the designer of the application. Our PVS data then permits an efficient predictive approach to database pre-fetching, which is crucial when displaying scenes over a network or scenes which cannot fit into main memory.

We now discuss the conditions in which our method succeeds or fails to detect occlusion (infinite resolution of the depth map is assumed here). In Fig. 18 we represent in grey the volume corresponding to all the possible Projections of the occluders. The actual Projection corresponds to the intersection of these Projection volumes with the projection plane. The occlusion due to a single occluder is completely encoded if the occluder is in front of the plane and if its Projection volume intersects the plane (Fig. 18(a)). If the plane is farther away, the Projection of the occluder becomes smaller, but so do the improved Projections of the occludees (however, if resolution is taken into account, more distant planes are worse because of our conservative rasterization).

Figure 18: Projection volumes are represented in grey (supporting and separating lines are not represented for clarity). (a) Plane 1 does not completely capture the occlusion due to A (e.g., for the occludee); plane 2 does, but it does not capture the occlusion due to B. (b) Plane 1 captures the occluder fusion between A and B, while plane 2 does not.

On the other hand, occluder fusion occurs between two occluders when their Projection volumes intersect in the region of the plane (Fig. 18(b)). In this case, the position of the plane is crucial, hence our heuristic which tries to maximize the projected surface on the plane. More elaborate heuristics could search for intersecting Projection volumes.

There are situations in which even a perfect occlusion culling method (i.e., an exact hidden-part removal) cannot cull enough geometry to achieve real-time rendering. For example, if the observer is at the top of a hill or on the roof of a building, the entire city may be visible. Other display acceleration techniques should thus be used together with our occlusion culling.

The trees used in the forest scene are coarse and made of large triangles. A typical high-quality 3D model of a tree may require around 50,000 triangles, and leaves are often smaller than our triangles. However, we consider that our results are obtained on a scene which is at least half-way between previous simple architectural examples with large occluders and a high-quality forest scene.

7.4 Fuzzy visibility

Building on concepts from the AI community, H. L. Lim defines fuzzy projections as the 1-cut of a fuzzy set [Lim92]. The degree of a point x on the projection plane with respect to a projected object S is defined as the number of viewpoints for which the projection of S maps onto x, divided by the total number of viewpoints. The fuzzy projection (or equivalently the extended projection) is then the set of points x with value 1, that is, the 1-cut of this fuzzy set.
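
Writing V for the set of viewpoints in the cell and π_v(S) for the projection of S from viewpoint v, this definition can be restated as follows (our notation, not Lim's original formulation):

    \mu_S(x) = \frac{|\{\, v \in V : x \in \pi_v(S) \,\}|}{|V|},
    \qquad
    \mathrm{Proj}(S) = \{\, x : \mu_S(x) = 1 \,\}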

The main difference between the definition proposed by H. L. Lim and our extended projection as defined in Section 2.2 is that he uses direction space: each point on the projection plane (or projection cube) corresponds to a direction, which is equivalent to placing the projection plane at infinity. In contrast, with our definition, the occlusion due to occluders whose shadow is finite is also captured.

H. L. Lim also proposes an efficient method for the computation of fuzzy projections of concave meshes, by "shrinking" the edges of the silhouette. This method could be adapted to improve our treatment of concave objects.

8 Conclusions and future work

We have presented extended projection operators which permit conservative occlusion tests with respect to volumetric viewing cells and which handle occluder fusion. We have presented an efficient implementation of both operators, as well as an improvement for the case of convex or planar blockers.

We have defined a reprojection operator which allows us to reproject the Projections computed on a given projection plane onto another one, defining an occlusion sweep. Our results show a significant speed-up (18 times on average) compared to a high-end interactive rendering library with view-frustum culling only, on a 6 million polygon city model.

Results have also been presented showing that our occlusion sweep makes it possible to compute the occlusion caused by the cumulative effect of many small objects, such as leaves in a forest, with respect to a volumetric viewing cell.

Future work

Future work includes the computation of Projections for portals in architectural environments and the use of unions of convex shapes for the Projection of concave occluders. Concave polygonal meshes could also be projected from the center point of the cell, then "shrunk" to compute the Projection.

The speed of our method could make it usable in an on-demand fashion, computing visibility information only for the neighbourhood of the observer. The extended projection concepts can also be used to add prediction to on-line occlusion culling methods [GKM93, ZMHH97]: instead of testing only the bounding boxes of the objects, their extended projections with respect to a volume centered around the observer could be tested as well.

Our method could also be applied to global illumination computation [TH93], to LOD for animation (e.g., [CH97]), etc. The occlusion sweep could be extended to compute soft shadows.

To be really efficient for complex and cluttered scenes such as forests, our method should be extended to compute semi-quantitative occlusion to drive level-of-detail or image-based acceleration: the more hidden the object, the coarser its display. Human perception and masking effects should then be taken into account.

Acknowledgments

The ideas of this paper were initially developed while the first author was visiting Stanford University at the invitation of Leo Guibas. The discussions with Leo and Mark de Berg were invaluable for exploring and refining our ideas. Many thanks to Seth Teller, Pierre Poulin, Fabrice Neyret and Cyril Soler. This work was supported in part by a research grant of NSF and INRIA (INT-9724005).

References

[ACW+99] D. Aliaga, J. Cohen, A. Wilson, E. Baker, H. Zhang, C. Erikson, K. Hoff, T. Hudson, W. Stuerzlinger, R. Bastos, M. Whitton, F. Brooks, and D. Manocha. MMR: An interactive massive model rendering system using geometric and image-based acceleration. In ACM Symp. on Interactive 3D Graphics, 1999.

[ARB90] J. Airey, J. Rohlf, and F. Brooks, Jr. Towards image realism with interactive update rates in complex virtual building environments. In ACM Symp. on Interactive 3D Graphics, 1990.

[CG85] M. Cohen and D. Greenberg. The hemicube: A radiosity solution for complex environments. In Computer Graphics (Proc. Siggraph), 1985.

[CH97] D. A. Carlson and J. K. Hodgins. Simulation levels of detail for real-time animation. In Graphics Interface, 1997.

[Cla76] J. H. Clark. Hierarchical geometric models for visible surface algorithms. Communications of the ACM, October 1976.

[COFHZ98] D. Cohen-Or, G. Fibich, D. Halperin, and E. Zadicario. Conservative visibility and strong occlusion for visibility partitioning of densely occluded scenes. In Eurographics, 1998.

[COZ98] D. Cohen-Or and E. Zadicario. Visibility streaming for network-based walkthroughs. In Graphics Interface, 1998.

[CT96] S. Coorg and S. Teller. Temporally coherent conservative visibility. In ACM Symp. on Computational Geometry, 1996.

[CT97] S. Coorg and S. Teller. Real-time occlusion culling for models with large occluders. In ACM Symp. on Interactive 3D Graphics, 1997.

[Dur99] F. Durand. 3D Visibility, Analysis and Applications. PhD thesis, U. Joseph Fourier, Grenoble, 1999. http://www-imagis.imag.fr.

[FS93] T. Funkhouser and C. Sequin. Adaptive display algorithm for interactive frame rates during visualization of complex virtual environments. In Computer Graphics (Proc. Siggraph), 1993.

[Fun95] T. Funkhouser. RING - A client-server system for multi-user virtual environments. In ACM Symp. on Interactive 3D Graphics, 1995.

[Fun96] T. Funkhouser. Database management for interactive display of large architectural models. In Graphics Interface, 1996.

[GKM93] N. Greene, M. Kass, and G. Miller. Hierarchical Z-buffer visibility. In Computer Graphics (Proc. Siggraph), 1993.

[GLM96] S. Gottschalk, M. Lin, and D. Manocha. OBB-Tree: A hierarchical structure for rapid interference detection. In Computer Graphics (Proc. Siggraph), 1996.

[HMC+97] T. Hudson, D. Manocha, J. Cohen, M. Lin, K. Hoff, and H. Zhang. Accelerated occlusion culling using shadow frusta. In ACM Symp. on Computational Geometry, 1997.

[Jon71] C. B. Jones. A new approach to the 'hidden line' problem. The Computer Journal, 14(3):232-237, August 1971.

[LG95] D. Luebke and C. Georges. Portals and mirrors: Simple, fast evaluation of potentially visible sets. In ACM Symp. on Interactive 3D Graphics, 1995.

[Lim92] H. L. Lim. Toward a fuzzy hidden surface algorithm. In Computer Graphics International, 1992.

[LT99] F. Law and T. Tan. Preprocessing occlusion for real-time selective refinement. In ACM Symp. on Interactive 3D Graphics, 1999.

[Max91] N. Max. Unified sun and sky illumination for shadows under trees. CVGIP: Graphical Models and Image Processing, 53(3):223-230, May 1991.

[PD90] H. Plantinga and C. R. Dyer. Visibility, occlusion, and the aspect graph. Int. J. of Computer Vision, 5(2), 1990.

[RH94] J. Rohlf and J. Helman. IRIS Performer: A high performance multiprocessing toolkit for real-time 3D graphics. In Computer Graphics (Proc. Siggraph), 1994.

[SC96] O. Sudarsky and C. Gotsman. Output-sensitive visibility algorithms for dynamic scenes with applications to virtual reality. In Proc. Eurographics Conf., 1996.

[SDDS00] G. Schaufler, J. Dorsey, X. Decoret, and F. Sillion. Conservative volumetric visibility with occluder fusion. In Computer Graphics (Proc. Siggraph), 2000.

[SLSD96] J. Shade, D. Lischinski, D. Salesin, and T. DeRose. Hierarchical image caching for accelerated walkthroughs of complex environments. In Computer Graphics (Proc. Siggraph), 1996.

[SS98] C. Soler and F. Sillion. Fast calculation of soft shadow textures using convolution. In Computer Graphics (Proc. Siggraph), 1998.

[Ste97] A. J. Stewart. Hierarchical visibility in terrains. In Eurographics Workshop on Rendering, June 1997.

[Tel92] S. J. Teller. Visibility Computations in Densely Occluded Polyhedral Environments. PhD thesis, UC Berkeley, 1992.

[TH93] S. Teller and P. Hanrahan. Global visibility algorithms for illumination computations. In Computer Graphics (Proc. Siggraph), 1993.

[TS91] S. Teller and C. Sequin. Visibility preprocessing for interactive walkthroughs. In Computer Graphics (Proc. Siggraph), 1991.

[vdPS99] M. van de Panne and J. Stewart. Effective compression techniques for precomputed visibility. In Eurographics Workshop on Rendering, 1999.

[WBP98] Y. Wang, H. Bao, and Q. Peng. Accelerated walkthroughs of virtual environments based on visibility processing and simplification. In Proc. Eurographics Conf., 1998.

[WS99] P. Wonka and D. Schmalstieg. Occluder shadows for fast walkthroughs of urban environments. In Proc. Eurographics Conf., 1999.

[ZMHH97] H. Zhang, D. Manocha, T. Hudson, and K. E. Hoff III. Visibility culling using hierarchical occlusion maps. In Computer Graphics (Proc. Siggraph), 1997.

A Extended projection using OpenGL

Recall that in Section 3.2 we described how to project convex occluders onto a projection plane as the intersection of the views from the vertices of the viewing cell. Here we present the details of an efficient OpenGL implementation. One of the problems is that during the projection of convex occluders we need to write consistent z-values and also treat the case of multiple blockers. An efficient way to do this in OpenGL is to use the stencil buffer and a slightly involved z-buffer setup.

ProjectBlocker ( occluder A, cell C, projection plane P )
    for each vertex v_i of C, i = 1..8
        project A onto P from v_i in software
        create 2D polygon p_i
    endfor
    // first pass: count coverage in the stencil buffer
    enable stencil buffer, increment by one
    // do z-test in case a previous blocker
    // mapped to the same pixels
    enable z-test
    disable z-write
    for each 2D polygon p_i, i = 1..8
        render p_i orthographically
    endfor
    // second pass: initialize for max calculation
    enable z-write
    render polygon p_1
    // use inverted z-test for max calculation
    enable inverted z-test
    for each 2D polygon p_i, i = 2..8
        render polygon p_i
    endfor

Figure 19: Efficient OpenGL implementation of blocker projection.

For a perspective projection, depth is considered from the viewpoint. Mapping the z value to our definition of depth requires an addition to set the zero on the projection plane. Unfortunately, OpenGL stores 1/z in the z-buffer, preventing a simple addition. For a given occluder and a given cell, we project the blocker (in software) onto the projection plane, including the calculation of z values. The resulting 2D polygons are then rendered orthographically using a stencil buffer. Z-testing is performed with respect to z-values potentially written by a previously projected blocker, but depth values are not written. The stencil buffer is incremented by one. After all the polygons corresponding to each cell vertex have been rendered, the umbra region is defined by the region of the stencil buffer with the value 8 (i.e., blocked with respect to all cell vertices).

The eight 2D polygons are rendered again, using the stencil buffer to restrict writing to the umbra region only. The first polygon is rendered and its z-values are written to the z-buffer. The 7 other polygons are then rendered with the z-test inverted. This results in the maximum z-value being written to the z-buffer.

This process is summarized in the pseudo-code of Figure 19.
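
In terms of concrete GL calls, the two passes could look as follows. This is a minimal sketch using the classic OpenGL 1.x API; renderPolygon(i) is a hypothetical helper issuing the pre-projected 2D polygon p_i, and buffer clears and state restoration are omitted:

    #include <GL/gl.h>

    void renderPolygon(int i); /* hypothetical: orthographic render of p_i */

    void projectBlockerGL(void) {
        /* Pass 1: count, per pixel, for how many cell vertices the pixel
           is covered; the stencil value reaches 8 inside the umbra. */
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR); /* increment where z-test passes */
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);   /* test against previously projected blockers */
        glDepthMask(GL_FALSE);  /* ...but do not write depth */
        for (int i = 0; i < 8; ++i)
            renderPolygon(i);

        /* Pass 2: restrict writes to the umbra (stencil == 8) and keep
           the maximum z by inverting the test after the first polygon. */
        glStencilFunc(GL_EQUAL, 8, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_ALWAYS);  /* initialize the z-buffer with p_1 */
        renderPolygon(0);
        glDepthFunc(GL_GREATER); /* inverted test keeps the max z */
        for (int i = 1; i < 8; ++i)
            renderPolygon(i);
    }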

B Validity of the reprojection

We now show that the Projection of several occluders can be used as a single conservative equivalent occluder, i.e., an occludee hidden by this Projection is also hidden by the occluders. We prove the following more general property.

Property 3 Consider an extended light source, any object A (convex or concave) and U the umbra region of A. Then the umbra of any subset U′ of U lies inside U.

To prove this, consider a point P in the umbra of U′ (Fig. 20). Any ray r going through P and the source intersects U′. Consider an intersection point P′. Since P′ ∈ U′ ⊂ U, P′ is in the umbra of A. Thus any ray going through P′ and the source intersects A; in particular, r intersects A. We have shown that any ray going through P and the source intersects A, so P is in the umbra of A. Note that Property 3 presupposes neither convexity nor planarity of the object A.
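
In symbols, writing \mathcal{U}(\cdot) for the umbra with respect to the extended source (our notation), the property and its proof can be summarized as:

    U' \subseteq \mathcal{U}(A) \;\Longrightarrow\; \mathcal{U}(U') \subseteq \mathcal{U}(A),
    \quad \text{since } \forall P \in \mathcal{U}(U'), \text{ every ray } r
    \text{ through } P \text{ and the source meets } U' \text{ at some }
    P' \in \mathcal{U}(A), \text{ hence } r \cap A \neq \emptyset.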

Figure 20: Umbra of a subset of an umbra. Point P is in the umbra of U′, which is a subset of the umbra U of A; it is thus also in the umbra of A.

If the cell is considered as a light source, this proves that any subset of the umbra of a set of occluders is a conservative version of these occluders. As we have seen, the Projection of an occluder which lies in front of the projection plane is its umbra on the plane. This Projection can thus be re-Projected as a new occluder. If the occluder lies behind the projection plane, its Projection does not lie inside its umbra, because the projection plane is closer to the viewing cell than the occluder; thus Property 3 does not apply.

This proof, together with Property 1, also provides an alternative proof of the validity of the Projection when occluders are in front of the plane.
