

Triangulation of cubic panorama for view synthesis

Chunxiao Zhang, Yan Zhao, and Falin Wu*
School of Instrumentation Science and Optoelectronics Engineering, Beihang University, Beijing 100191, China

*Corresponding author: [email protected]

Received 16 March 2011; revised 16 June 2011; accepted 17 June 2011; posted 24 June 2011 (Doc. ID 144242); published 21 July 2011

An unstructured triangulation approach, new to our knowledge, is proposed to apply triangular meshes for representing and rendering a scene on a cubic panorama (CP). It converts a complicated three-dimensional triangulation into a simple three-step triangulation. First, a two-dimensional Delaunay triangulation is individually carried out on each face. Second, an improved polygonal triangulation is implemented in the intermediate regions between each two faces. Third, a cobweblike triangulation is designed for the remaining intermediate regions after unfolding four faces to the top/bottom face. Since the last two steps solve the boundary problem arising from cube edges, the triangulation with irregularly distributed feature points is implemented on a CP as a whole. The triangular meshes can be warped from multiple reference CPs onto an arbitrary viewpoint by face-to-face homography transformations. The experiments indicate that the proposed triangulation approach provides a good modeling of the scene with photorealistic rendered CPs. © 2011 Optical Society of America

OCIS codes: 100.3010, 100.4994.

1. Introduction

A cubic panorama (CP) consisting of six equirectangular faces not only provides additional scene images on the top and bottom faces, in contrast to a cylindrical panorama, but is also easily stored and converted compared with a spherical panorama. It has received much attention in recent years, being widely applied in telepresence and virtual navigation since it can easily produce an immersive experience [1]. The well-known "Google Street View" implements virtual navigation by displaying panoramic scenes located at hot spots [2]. Unlike hopping among the hot spots, "Street Slide" can accomplish a seamless transition between bubbles by dynamically altering the alignment and visible portions of each image to simulate a pseudoperspective view of the street side from a distance [3]. However, unconstrained navigation through arbitrary views is more attractive because it allows users to freely choose their view-scan routes rather than being restricted to zooming in/out at these hot spots or along the baselines.

The image-based rendering technique provides an alternative to conventional model-based rendering; one of its advantages is generating photorealistic virtual views with considerably fewer computational resources regardless of the scene complexity [4]. The quality of synthesized images depends on the primitives used to represent the scene. The patch-based representation can alleviate the aperture problem that is commonly acute for the pixel-based representation. The irregular patch-based representation is usually associated with segmentation and a pseudodense matching propagation operation [5,6]. In contrast, polygonal patches, especially triangular meshes, are more easily manipulated by transforming their vertices. The concept of occlusion-adaptive, content-based triangular meshes was initially introduced in motion compensation and tracking, where an affine transformation is used to track a single moving object against a still background [7]. For virtual view synthesis under wide-baseline reference views, Siu and Lau proposed a hybrid two-dimensional (2-D)/three-dimensional (3-D) mesh, called the relief occlusion-adaptive mesh (ROAM), where 3-D meshes (i.e., matched triangular meshes) represent common regions of the scene in the 3-D spatial domain and 2-D meshes (i.e., mismatched

0003-6935/11/224286-09$15.00/0
© 2011 Optical Society of America

4286 APPLIED OPTICS / Vol. 50, No. 22 / 1 August 2011


triangular meshes) represent the occluded regions visible only to individual images [8,9]. This method can not only represent object discontinuities but also support arbitrary view synthesis rather than the purely linear interpolation of view morphing [10–12]. In addition, its fast rendering speed is attractive for real-time free-view virtual navigation compared to many methods based on dense matches [13–15]. Therefore, ROAM can be employed to render a virtual CP at an arbitrary viewpoint from neighboring multiple reference CPs. The matching points for triangular meshes can be detected by existing methods, such as the scale-invariant feature transform and the method proposed in [16]. The triangulation is then performed on one CP, and the corresponding triangles on the other CPs are confirmed simultaneously. A consistency check is carried out for each triangular pair to judge whether it is matched or not, and the triangular meshes are thus grouped into 2-D and 3-D meshes by estimating the similarity. The novel view is finally rendered by warping triangular meshes from the reference views by homography transformations. These two groups of triangular meshes are weighted with different strategies so as to handle the occlusion problem well. More detailed descriptions of this full process are given in [8]. All reference views should be calibrated, so that the homography transformation used for warping meshes can be made from each reference view to a novel view.

Because of the specific structure of a CP, where the scene imaging is disrupted at the edges of the cube, it is impossible to directly apply 2-D Delaunay triangulation to an unfolded cubic-panorama image. Triangulation on a CP is essentially a 3-D problem instead of the 2-D planar problem of a cylindrical panorama, which greatly increases the complexity. Given that little research has been devoted to triangulation on a CP, this paper presents an unstructured triangulation approach, new to our knowledge, for irregularly distributed feature points on a cubic surface. The classical 2-D Delaunay triangulation is performed in the central region of each face, and two particular bridging triangulations involving 3-D operations are designed for the intermediate regions. One is an improved polygonal triangulation for the intermediate regions between each two faces [left-front (L-F), front-right (F-R), right-back (R-B), back-left (B-L)]. The other is a cobweblike triangulation after unfolding four faces [left (L), front (F), right (R), back (B)] to the top/bottom [up (U), down (D)] face. Moreover, since the divided triangular meshes might cover more than one cube face, the homography transformation between two planes should be made for each individual face so as to warp the triangular mesh to the novel view as a whole.

To address the aforementioned problems caused by the cube-edge discontinuity, feasible solutions are provided in this paper, involving the unstructured 3-D triangulation and the face-to-face homography transformation, thereby achieving scene modeling based on CPs by ROAM. Section 2 illustrates the details of the proposed CP triangulation with irregularly distributed matching points, especially the two bridging triangulations for dealing with the boundary problem in 3-D triangulation. Considering that the triangular meshes might cover more than one face, Section 3 presents a face-to-face homography transformation for warping meshes from reference CPs to a novel view. The experiments are conducted in Section 4 to validate the proposed unstructured triangulation with randomly distributed points, and the CP synthesized by the ROAM method is also analyzed. The conclusion is given in Section 5.

2. Triangulation of CP

A. Overview

The proposed CP triangulation approach converts a complicated 3-D triangulation into a simple three-step triangulation, as shown in Fig. 1.


Fig. 1. Pipeline of CP triangulation.


Step I. Carrying out 2-D Delaunay triangulations on each face (L, F, R, B, U, D), illustrated as the shaded region in Fig. 2.

Step II. Performing improved polygonal triangulations in the intermediate regions L-F, F-R, R-B, and B-L, respectively. Taking B-L as an example, the points are selected from the boundaries on face B and the translated face L as vertices of the polygon shown as the dark region in Fig. 3. An improved polygonal triangulation is carried out in this region.

Step III. Executing cobweblike triangulations in the remaining intermediate regions surrounding faces U and D, respectively. Faces L, F, R, and B are unfolded to face U/D (Fig. 4). Then the closest boundary loop outside face U/D, drawn in the thick line, is confirmed from the divided triangles, and the cobweblike triangulation is implemented in the region enclosed by the boundary of the triangles on face U/D and this closest loop, shown as the gray region in Fig. 4.
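As an illustration of Step I, the per-face 2-D Delaunay triangulation can be sketched as follows. This is a minimal sketch in Python using SciPy; the paper's implementation is in C/C++, and the function name and sample coordinates are our own assumptions:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_face(points_2d):
    """Step I: classical 2-D Delaunay triangulation of the feature
    points lying on one cube face, in face-local pixel coordinates."""
    pts = np.asarray(points_2d, dtype=float)
    if len(pts) < 3:                        # a face needs at least 3 points
        return np.empty((0, 3), dtype=int)
    return Delaunay(pts).simplices          # (n_tri, 3) vertex indices

# hypothetical points on a 512 x 512 face: 4 convex points -> 2 triangles
tris = triangulate_face([(10, 10), (500, 20), (480, 490), (30, 470)])
```

Each face is triangulated independently in this step; the edge-adjacent strips left untouched here are handled by Steps II and III.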

In this paper, the intersection judgment of two segments is frequently used in the proposed CP triangulation. It involves the intersection of two planes (Fig. 5). ab and ef are two segments on the cubic surface, each of which defines a plane passing through the center c of the cube. If a, b, e, and f are not on the same plane, the two planes must intersect along a line, represented as a vector cg. Hence, on the plane defined by a, b, and g, with c as the origin, it is easy to judge whether the vector cg lies within the range spanned by the vectors cb and ca. When g is located within both ab and ef, the two segments are established to intersect.
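A sketch of this intersection judgment, with the cube centered at c = (0, 0, 0) so that surface points double as direction vectors. The function names and the sign-based arc test are our own; the paper does not give an implementation:

```python
import numpy as np

def arc_contains(a, b, g):
    """True when direction g lies between directions a and b on their
    common plane through the cube center (g assumed on plane(a, b))."""
    n = np.cross(a, b)
    return np.dot(np.cross(a, g), n) >= 0 and np.dot(np.cross(g, b), n) >= 0

def segments_intersect(a, b, e, f):
    """ab and ef each span a plane through the center c; the planes meet
    along the directions +/-g = (a x b) x (e x f), and the segments
    intersect iff one of those directions falls within both arcs."""
    g = np.cross(np.cross(a, b), np.cross(e, f))
    if np.allclose(g, 0):          # a, b, e, f coplanar with c: degenerate
        return False
    return any(arc_contains(a, b, d) and arc_contains(e, f, d)
               for d in (g, -g))

# two segments on the top face (z = 1) of a cube centered at the origin
hit = segments_intersect(np.array([-0.5, 0, 1.0]), np.array([0.5, 0, 1.0]),
                         np.array([0.0, -0.5, 1]), np.array([0.0, 0.5, 1]))
miss = segments_intersect(np.array([-0.5, 0, 1.0]), np.array([0.5, 0, 1.0]),
                          np.array([0.8, -0.5, 1]), np.array([0.8, 0.5, 1]))
```

In the first call the segments cross at (0, 0, 1); in the second the crossing direction falls outside the arc of ab, so no intersection is reported.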

The details of CP triangulation are discussed next.

B. Boundary of a Set of Triangles

Among all edges of a set of triangles, the boundary must be composed of the edges that never occur repeatedly. Thus, the ending points of these edges are linked successively to form the boundary of this set.
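This boundary extraction can be sketched compactly, with triangle connectivity given as index triples. The names are ours; this is an illustrative sketch rather than the paper's code:

```python
from collections import Counter

def boundary_loop(triangles):
    """The boundary of a set of triangles consists of the edges that
    occur exactly once; chaining their endpoints yields the loop."""
    edges = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges[frozenset(e)] += 1          # shared edges count twice
    border = [tuple(e) for e, n in edges.items() if n == 1]
    adj = {}
    for u, v in border:                        # link successive endpoints
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    loop, prev = [border[0][0]], None
    while True:
        nxt = [w for w in adj[loop[-1]] if w != prev][0]
        prev = loop[-1]
        if nxt == loop[0]:
            return loop
        loop.append(nxt)

# a square split into two triangles: boundary is the four outer vertices
loop = boundary_loop([(0, 1, 2), (0, 2, 3)])
```

The shared diagonal (0, 2) appears twice and is correctly excluded from the boundary.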

C. Improved Triangulation in a Polygon

Taking the polygonal triangulation of B-L as an example, the boundaries are respectively acquired from the existing triangles on faces B and L in the way described in Subsection 2.B. The vertices on faces B and L are identified from both boundaries. The piecewise lines linking the top point to the bottom point are selected in the left part of face B and the right part of face L, respectively. After translating face L to the right of face B, the region enclosed by these two piecewise lines needs to be rectified to construct a

Fig. 2. Step I of CP triangulation.

Fig. 3. Step II of CP triangulation.

Fig. 4. Step III of CP triangulation.

Fig. 5. Intersection judgment of two segments on CP.

4288 APPLIED OPTICS / Vol. 50, No. 22 / 1 August 2011

Page 4: Triangulation of cubic panorama for view synthesis

polygon without any self-intersection. Figure 6(a) shows a self-intersection where the segment cb intersects part of the piecewise lines. Once a self-intersection happens, the point f with the maximum distance to the line cb is selected from the vertices below the segment cb as an updated bottom point. The top points of the two piecewise lines are checked and updated in the same way. The rectified polygon is shown as the gray region in Fig. 6(b).

In view of the established approaches to triangulating a convex polygon, the idea of triangulation for a concave polygon is to iteratively depart the separable convex vertices and their corresponding triangles until the remaining region is convex. In Fig. 7(a), the gray region is a concave region with four convex vertices a, d, c, and f. The vertex c is not separable because the segment linking its two adjacent points g and f intersects a side be of the concave polygon. In Fig. 7(b), the convex vertex c is not separable either, since the segment fb passes through the existing triangles shown as the grid region. Hence, in this polygonal triangulation, a separable convex vertex satisfies two constraints: its departing segment intersects neither any side of the polygon nor any existing triangle. For the separable vertex a, a triangle Δabd is departed, and the new polygon with vertex a removed is iteratively analyzed in the same way until the remaining region is convex.
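Departing separable convex vertices in this way is essentially ear clipping. Below is a minimal 2-D sketch under the assumption of a counterclockwise simple polygon; the separability test here checks only that no other vertex falls inside the candidate triangle, a common simplification of the paper's two constraints:

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); > 0 means CCW turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_tri(p, a, b, c):
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def ear_clip(poly):
    """Iteratively depart separable convex vertices (ears) of a simple
    CCW polygon until only one triangle remains."""
    idx, tris = list(range(len(poly))), []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % len(idx)]
            if cross(poly[i], poly[j], poly[l]) <= 0:
                continue                        # reflex vertex: not separable
            if any(point_in_tri(poly[m], poly[i], poly[j], poly[l])
                   for m in idx if m not in (i, j, l)):
                continue                        # another vertex inside: skip
            tris.append((i, j, l))
            idx.pop(k)                          # depart the convex vertex
            break
    tris.append(tuple(idx))
    return tris

# a concave CCW polygon (reflex vertex at (2, 1)) of area 10
poly = [(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]
tris = ear_clip(poly)
```

A simple polygon with n vertices always yields n - 2 triangles, and their areas sum to the polygon area.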

D. Cobweblike Triangulation

After obtaining the triangles on each face and in their intermediate regions, the boundary of the set of triangles on face U/D (inner boundary) and the closest boundary loop on the four unfolded faces (outer boundary) can be acquired, both of which enclose the gray region in Fig. 8. As shown in Fig. 8, taking the region surrounding face U as an example, faces L, F, R, and B are unfolded to face U, and the cobweblike triangulation is implemented in this intermediate region. Attention should be paid to making sure the two loops are separate, without intersecting at any part. The case of two intersecting loops is discussed in Subsection 2.E; here we presume that they do not join at any point.

The point nearest to face U is chosen as the beginning point for searching the outer boundary. If the searching direction is neglected, the search may be stuck in a local loop without visiting the other vertices on the boundary. In Fig. 8, when a is the beginning point of the search and k is its subsequent point, the next searching point is possibly b, and then a again, which gives rise to a dead loop. For a knot point with more than two connected boundary segments (e.g., k), the invalid segments should be removed, reserving the two valid segments for the correct searching direction. In Fig. 9, u is the center point of the triangle set on face U. When the segment bu intersects any part of the boundary, such as ak, the segment bk is regarded as an invalid segment to be removed. The other two valid segments, ak and dk, can guarantee a successful searching loop. For the bottom intermediate region, all operations associated with face U are substituted by face D, and the valid segments are bk and ck instead.

Prior to a cobweblike triangulation, each vertex of the outer boundary should link to at least one vertex of the inner boundary without intersecting any part of the boundary. In Fig. 10, vertex a violates this condition, while both of its adjacent points b and c meet it. Thus, the triangle Δabc is constructed, and the segment bc replaces the prior segments ba and ac to form a rectified outer boundary, drawn as a thick line.

Fig. 6. (a) Self-intersection; (b) rectified polygon without self-intersection.

Fig. 7. A convex vertex c is not separable in both cases.

Fig. 8. Cobweblike triangulation.


Given the inner and outer boundaries, the idea of the cobweblike triangulation is as follows:

a. Keeping the same ordering direction. If the ordering direction of the vertices of the inner boundary is clockwise, the ordering direction of the outer vertices should be clockwise, and vice versa.

b. Finding the closest outer vertex for each inner vertex. Because the ordering directions of the inner and outer boundaries are the same, the closest outer vertex of an inner vertex is never arranged prior to the closest outer vertex of its preceding inner vertex. This means that outer vertex 4′ is never the closest outer vertex of inner vertex 2 when outer vertex 0′ is the closest one for inner vertex 1, as shown in Fig. 11, because a topological conflict occurs when outer vertex 4′ is prior to outer vertex 0′ while inner vertex 2 is posterior to inner vertex 1.

c. Going on with the triangulation in the gray region encircled by two adjacent inner vertices and their closest outer vertices. The first step is to get the valid inner vertices for each outer vertex. In Fig. 11, the valid inner vertices of 2′ can only be one of {2}, {3}, and {2, 3}. Because each outer vertex has at least one valid inner vertex to connect, the same ordering direction keeps the valid inner vertices from going beyond the two inner vertices 2 and 3. The second step is to successively link two adjacent outer vertices and their minimum common valid inner vertex into a triangle. The last step is to find the remaining undivided region, which forms the last triangle. The whole process successively yields the triangles Δ1′2′2, Δ2′3′3, and Δ232′.
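Step b can be sketched as a monotone nearest-neighbor search. The coordinates and names below are illustrative, and both loops are assumed to be listed in the same ordering direction:

```python
import math

def monotone_matches(inner, outer):
    """For each inner vertex, pick the closest outer vertex, but never
    one arranged prior to the match of the preceding inner vertex; this
    enforces the conflict-free topological ordering described above."""
    start, matches = 0, []
    for p in inner:
        k = min(range(start, len(outer)),
                key=lambda j: math.hypot(p[0] - outer[j][0],
                                         p[1] - outer[j][1]))
        matches.append(k)
        start = k            # later inner vertices cannot step backwards
    return matches

# concentric square loops, both listed counterclockwise
inner = [(1, 1), (2, 1), (2, 2), (1, 2)]
outer = [(0, 0), (3, 0), (3, 3), (0, 3)]
m = monotone_matches(inner, outer)
```

For these loops each inner corner pairs with the outer corner nearest to it, in order.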

E. Posterior Procedure of Cobweblike Triangulation

When the inner boundary intersects the outer boundary, the affected region on face U/D is enclosed by the outer boundary and the border of face U/D, and the triangles in the unaffected region of face U/D are preserved to form an updated inner boundary.

After performing the cobweblike triangulation, the points in the affected region of face U are inserted successively, each dividing one triangle into three triangles.
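Inserting an affected point p inside an existing triangle (a, b, c) is the standard one-to-three split; a sketch with index-based connectivity (the function name is ours):

```python
def insert_point(triangles, tri, p):
    """Replace triangle tri = (a, b, c) by the three triangles
    (a, b, p), (b, c, p), (c, a, p) sharing the inserted vertex p."""
    a, b, c = tri
    triangles.remove(tri)
    triangles += [(a, b, p), (b, c, p), (c, a, p)]
    return triangles

# inserting vertex 3 into triangle (0, 1, 2) yields three triangles
tris = insert_point([(0, 1, 2)], (0, 1, 2), 3)
```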

This is the whole procedure of CP triangulation. Given the matching points on the reference CPs, the proposed unstructured triangulation is performed on one CP, and triangular pairs are confirmed simultaneously among these reference CPs; they are subsequently grouped into 2-D/3-D triangular meshes by consistency checking to represent the scene.

3. Face-to-Face Homography Transformation

Virtual CP synthesis maps the divided triangular meshes from reference CPs to a novel viewpoint by face-to-face homography transformations. Because of the cube-edge discontinuity, the projection of a triangle on a cubic surface generally covers more than one face, presenting three cases according to the different numbers of contained cube vertices, illustrated as the gray regions in Fig. 12.

Figure 13 illustrates the reprojection of triangular meshes from one of the reference CPs (e.g., CP1) to the novel view (e.g., CP2). The triangles Δabc in CP1 and Δa′b′c′ in CP2 correspond to the common spatial region, whose projections on the cubic surface are presented as Δadg on face_i of CP1 adjoining the quadrangle dbcg on face_j of CP1, and Δa′d′g′ on face_j of CP2, respectively. The face-to-face homography transformation from Δabc to Δa′b′c′ can be written as

H_{face1_i → face2_j} = K R_{face_j}^{-1} R_{12}^{-1} (I + T_{12} π̃^T) R_{face_i} K^{-1},   (1)

where I is the identity matrix, K is the intrinsic matrix of the CP, R_{face_i} is the rotation matrix of face i in a CP, and R_{12} and T_{12} denote the rotation matrix and

Fig. 9. Removing invalid segments of a knot.

Fig. 10. Rectifying the outer boundary.

Fig. 11. Topological ordering confliction.


translation matrix from CP2 to CP1, respectively, and π̃ is a vector composed of the normalized parameters of the spatial plane equation, which is defined by the spatial points corresponding to the vertices of the two triangles. After warping Δadg from CP1 to CP2, the segment d′g′ must lie within Δa′b′c′ on face_j of CP2, and the warped quadrangle d′b′c′g′ is also within Δa′b′c′, adjoining the warped Δa′d′g′. If the roles of CP1 and CP2 are exchanged, the transformation warps Δa′b′c′ from CP2 to CP1. After warping Δa′b′c′ from face_j of CP2 to face_i of CP1, the part outside Δadg falls naturally outside the border of face_i, so only the part within face_i is selected. After warping Δa′b′c′ from face_j of CP2 to face_j of CP1, the part within face_j is kept in the same way; the whole-triangle warping is thereby implemented from CP2 to CP1.
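Eq. (1) composes directly as a product of 3 × 3 matrices; a sketch with NumPy, where the calibration quantities below are placeholders rather than values from the paper:

```python
import numpy as np

def face_homography(K, R_face_i, R_face_j, R12, T12, pi):
    """Eq. (1): H = K R_facej^{-1} R12^{-1} (I + T12 pi~^T)
    R_facei K^{-1}, mapping pixels on face i of the reference CP
    to face j of the novel CP (all quantities assumed calibrated)."""
    T12 = np.asarray(T12, float).reshape(3, 1)   # translation, column
    pi = np.asarray(pi, float).reshape(3, 1)     # normalized plane vector
    return (K @ np.linalg.inv(R_face_j) @ np.linalg.inv(R12)
            @ (np.eye(3) + T12 @ pi.T) @ R_face_i @ np.linalg.inv(K))

# sanity check: identical viewpoint and face gives the identity mapping
K = np.array([[256.0, 0, 256], [0, 256, 256], [0, 0, 1]])
H = face_homography(K, np.eye(3), np.eye(3), np.eye(3),
                    np.zeros(3), np.array([0, 0, 1.0]))
```

With zero translation and identity rotations the plane-induced term vanishes, so H reduces to the identity regardless of K.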

Once the triangles from each reference CP are reprojected onto the novel view, weights should be individually established for the warped 2-D/3-D triangles so as to intelligently deal with the visibility variation; the details are described in [8].

4. Experiments and Discussion

The proposed unstructured triangulation approach has been implemented in C/C++ and tested on a PC with a Core i7 2.67 GHz CPU and 2 GB RAM. We input an arbitrary number of randomly distributed points on a cubic surface to test the capability of the proposed approach. The number of points on each face should be at least three for constructing at least one triangle. The divided triangles after performing the proposed triangulation should never overlap each other under any condition. Then the real feature points, which are detected and matched from a group of calibrated CPs by the method proposed in [16], are input to further verify the efficiency of the proposed triangulation for unstructured points. The reference CPs used in our experiments were taken at the University of Ottawa, and the raw data from a Point Grey Ladybug camera were converted to CPs (each face with a resolution of 512 × 512) using the methods described in [17]. The divided triangular meshes can be warped from the reference CPs to a novel view by face-to-face homography transformations. By comparing the rendered virtual CP (a composite of warped meshes with different weights) with the ground truth at the novel viewpoint, the perspective correctness of the face-to-face homography transformation can be validated.

Figures 14–16 show the divided triangles when the number of input points is 50, 500, and 2000, respectively. To clearly illustrate the three steps of the triangulation, the results are displayed as different regions for each step. The off-white regions are the results after performing Step I, and the grayer regions between each two faces are divided by the improved

Fig. 12. Three kinds of different projections of a triangle Δabd.

Fig. 13. Face-to-face homography transformation.

Fig. 14. Triangulation given 50 randomly distributed points on a cubic surface.


polygonal triangulation in Step II. In Step III, the outer boundaries for the cobweblike triangulation are drawn as thick piecewise lines. It is observed that no triangles overlap each other and that the proposed CP triangulation handles irregularly distributed points well. We also study the performance of the proposed triangulation. Figure 17 shows the relationship between the run time of the current code and the number of input points, which presents an approximately linear curve. Since a large number of temporary values are still computed repeatedly and redundant operations have not been simplified, the performance of the program can be expected to improve further after the code is optimized.

Then we choose a group of typical CPs as experimental images. The matching points are found among these reference CPs by the method proposed in [16]. Figures 18 and 19 show the matches in two reference CPs. The triangular meshes can be confirmed by performing the proposed triangulation after inputting these matches, as shown in Figs. 20 and 21. No triangles overlap in Fig. 20, while some corresponding triangles overlap each other in Fig. 21. These ragged triangles are actually caused by mismatches or occlusion, especially in textureless regions and for close objects with a large scaling variation. Most of the ragged triangles remain in the group of 2-D triangles. As in the method presented in [8], the triangle pairs are reprojected to a novel

Fig. 15. Triangulation given 500 randomly distributed points on a cubic surface.

Fig. 16. Triangulation given 2000 randomly distributed points on a cubic surface.

Fig. 17. Relationship between the run time and the number of input points.

Fig. 18. Matching points in the first CP.

Fig. 19. Matching points in the second CP.


viewpoint with distinct weights to generate the synthesized CP shown in Fig. 22. Compared with its ground truth in Fig. 23, the rendered image is photorealistic and perspectively correct in general, except for some local artifacts and distortions. The fidelity of the synthesized CP indicates that the face-to-face homography transformation is an effective warping function of triangular meshes for CP synthesis. For the distortions arising from occlusion, segmentation by inserting more matching points in such regions may be an intuitive way to further relieve the artifacts.

5. Conclusions

In this paper, an unstructured triangulation approach for irregularly distributed points on a cubic surface is presented, which converts a complicated 3-D triangulation into a simple three-step triangulation. The last two 3-D bridging triangulation steps, involving an improved polygonal triangulation for the intermediate regions among the four side faces and a cobweblike triangulation after unfolding the four faces to the top/bottom face, solve the boundary problem. This approach takes the special cubic structure into account to overcome the difficulty of directly applying 2-D triangulation to a CP. The experiments demonstrate that the proposed approach is able to implement triangulations on CPs given unstructured matching points, and a photorealistic virtual CP can be generated by reprojecting the divided triangular meshes with the face-to-face homography transformations. In this paper, the novel view is generated only from its neighboring reference CPs, and the variation of reference views will result in illumination jumps in an environment walkthrough. Our future work will investigate how to achieve smooth transitions of virtual views.

The authors thank Prof. Eric Dubois of the University of Ottawa for his valuable comments and help. This work was supported in part by the National Natural Science Foundation of China (NSFC) and the Research Fund for the Doctoral Program of Higher Education of China.

Fig. 20. Triangles in the first CP after performing CPtriangulation.

Fig. 21. Triangles in the second CP after performing CPtriangulation.

Fig. 22. Synthesized CP.

Fig. 23. Ground truth of the synthesized CP.


References

1. S. E. Chen, "QuickTime VR: an image-based approach to virtual environment navigation," in Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1995), pp. 29–38.

2. Wikipedia, "Google Street View" (2011), http://en.wikipedia.org/wiki/Google_Street_View.

3. J. Kopf, B. Chen, R. Szeliski, and M. Cohen, "Street slide: browsing street level imagery," ACM Trans. Graph. 29(4), Article 96 (2010).

4. C. Zhang, "A survey on image-based rendering: representation, sampling and compression," Signal Process. Image Commun. 19, 1–28 (2004).

5. M. Lhuillier and Q. Long, "Match propagation for image-based modeling and rendering," IEEE Trans. Pattern Anal. Machine Intell. 24, 1140–1146 (2002).

6. C. L. Zitnick and S. B. Kang, "Stereo for image-based rendering using image over-segmentation," Int. J. Comput. Vis. 75, 49–65 (2007).

7. Y. Altunbasak and A. M. Tekalp, "Occlusion-adaptive, content-based mesh design and forward tracking," IEEE Trans. Image Process. 6, 1270–1280 (1997).

8. A. M. K. Siu and R. W. H. Lau, "Image registration for image-based rendering," IEEE Trans. Image Process. 14, 241–252 (2005).

9. A. M. K. Siu and R. W. H. Lau, "Relief occlusion-adaptive meshes for 3D imaging," in Proceedings of the International Conference on Multimedia and Expo (IEEE, 2003), pp. 101–104.

10. S. M. Seitz and C. R. Dyer, "Physically-valid view synthesis by image interpolation," in Proceedings of IEEE Workshop on Representation of Visual Scenes (IEEE, 1995), pp. 18–25.

11. S. M. Seitz and C. R. Dyer, "View morphing," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 21–30.

12. M. Goesele, J. Ackermann, S. Fuhrmann, C. Haubold, R. Klowsky, D. Steedly, and R. Szeliski, "Ambient point clouds for view interpolation," ACM Trans. Graph. 29, 95 (2010).

13. F. Shi, R. Laganiere, and E. Dubois, "On the use of ray-tracing for viewpoint interpolation in panoramic imagery," in Proceedings of the 2009 Canadian Conference on Computer and Robot Vision (IEEE, 2009), pp. 200–207.

14. S. Ince, "Occlusion-aware view interpolation," EURASIP J. Image Video Process. 2008, 803231 (2008).

15. S. Fleck, F. Busch, P. Biber, and W. Straßer, "Graph cut based panoramic 3D modeling and ground truth comparison with a mobile platform—the Wägele," Image Vis. Comput. 27, 141–152 (2009).

16. C. Zhang, E. Dubois, and Y. Zhao, "Intermediate cubic-panorama synthesis based on triangular re-projection," in 17th IEEE International Conference on Image Processing (IEEE, 2010), pp. 3985–3988.

17. M. Beermann and E. Dubois, "Acquisition processing chain for dynamic panoramic image sequences," in IEEE International Conference on Image Processing (IEEE, 2007), pp. 217–220.
