

Int. J. of Heavy Vehicle Systems, Vol. 11, Nos 3/4, 2004 434

Towards computer-aided reverse engineering of heavy vehicle parts using laser range imaging techniques

D. Page, A. Koschan, Y. Sun, Y. Zhang, J. Paik,* and M. Abidi Imaging, Robotics, and Intelligent Systems Laboratory, 336 Ferris Hall, The University of Tennessee, Knoxville, Tennessee 37996-2100, USA Email: [email protected]

*Image Processing and Intelligent Systems Laboratory, Department of Image Engineering, Graduate School of Advanced Imaging Science, Multimedia and Film, Chung-Ang University, Seoul, Korea

Abstract: We present an integrated system to automatically generate computer-aided design (CAD) models of existing vehicle parts using laser range imaging techniques. The proposed system integrates data acquisition, model reconstruction, and post processing to generate a set of models from real-world automotive parts. This range image-based, computer-aided reverse engineering (CARE) system has potential for faster model reconstruction over traditional reverse engineering technologies. As part of this system, we also propose a novel crease detection algorithm, which segments the surfaces of reconstructed models along smoothness discontinuities. We present results for both the CARE system and the proposed crease detection algorithm for a set of automotive parts.

Keywords: computer-aided reverse engineering (CARE), heavy vehicles, image-based modelling, reverse engineering, vehicle modelling.

Reference to this paper should be made as follows: Page, D., Koschan, A., Sun, Y., Zhang, Y., Paik, J. and Abidi, M. (2004) ‘Towards computer-aided reverse engineering of heavy vehicle parts using laser range imaging techniques’, Int. J. of Heavy Vehicle Systems, Vol. 11, Nos 3/4, pp. 434–452.

1 Introduction

Reverse engineering (RE) of heavy vehicle components serves as a valuable feedback path in a distributed design team, and an important validation tool in a rapid manufacturing system. Heavy military vehicles, in particular, with their special design requirements and operating conditions, benefit significantly when RE is an element of their system design. Traditional methods for reconstructing the exterior surfaces of objects for RE include coordinate measuring machines (CMM), which are often tedious and time consuming. In computer vision and image processing, recent advances in 3D free form scanning, however, have led to efficient, accurate, and fast laser-based systems that rapidly generate high fidelity computer models of existing vehicle parts. These scanning systems offer the opportunity to incorporate RE more fully into the mainstream design flow (Várady et al., 1997). Before these scanning systems become a robust technology, computer vision researchers must address a few technical challenges. In this paper, we explore these challenges and present results from an RE system developed within our laboratory. We also present a novel algorithm for crease detection in reconstructed models from this system.

Copyright © 2004 Inderscience Enterprises Ltd.

1.1 Motivation

Although RE is a broad field that encompasses many concepts, our specific definition is the ability to create a computer-aided design (CAD) model of a real-world part (Bernardini et al., 1999a). By contrast, forward engineering is the creation of a real-world part from a CAD model. The automation of forward engineering, or computer-aided manufacturing (CAM), has significantly impacted recent technologies in system design. CAM has also introduced rapid prototyping into the design loop and facilitated changes on demand after the deployment of a design (Yan and Gu, 1996). The automation of RE, or computer-aided reverse engineering (CARE), promises to impact the design process in a similar fashion, especially with a collaborative, distributed design team. CARE allows electronic dissemination of as-built parts for comparison of original designs with manufactured results. Additionally, CARE allows construction of CAD models of existing parts when such models no longer exist as when parts are out of production (Thompson et al., 1999).

For military vehicles, this latter capability is of additional importance when repairs and modifications occur during combat. Of particular interest is the Mobile Parts Hospital initiative within the US Army National Automotive Center (NAC) at the Research, Development and Engineering Command (RDECOM). The vision for the parts hospital is an emergency manufacturing unit that is designed for frontline deployment. Although the hospital should ideally have access to a CAD database, CAD models for a part may not necessarily be available, for example for vehicles that have undocumented field modifications. A CARE scanner allows even a soldier untrained in engineering practices to create high-quality CAD models for manufacturing. Additionally, a CARE scanner is a valuable tool for documenting part failures. Such data provides an electronic history of the part life cycle and aids future designs.

1.2 CMM systems versus computer vision systems

The current approach to CARE involves CMM technology. A CMM is essentially a mechanical measurement device that uses a multi-degree-of-freedom manipulator with a touch probe. CMMs are a mature technology with highly accurate and repeatable results, and thus for part inspection, CMMs have become an industry standard. CMMs, however, do have a few drawbacks, and we propose that computer vision systems could either aid CMMs themselves or replace them entirely with future research.

As with most mechanical systems, the manipulator arm and the touch probe of the CMM have physical limitations. For example, the range of motion of the manipulator restricts the size and variety of parts that the system can scan. Another drawback is the setup time and personnel training that CMMs require. Path planning for the probe requires the intuition of a skilled operator to ensure coverage of a part and to avoid collisions. Computer vision techniques, in particular laser range imaging, offer potential to alleviate or avoid many of these problems. With imaging systems, the mechanical limitations of a manipulator and probe are no longer a factor; rather, the viewing constraints of the camera and the ability to move the camera around the part become the limiting factors, which are much less restrictive. By analogy, consider aerial imaging for terrain mapping as opposed to walking that same terrain. CMMs are similar to walking across the terrain, while range imaging is similar to flying over the terrain. Also, the computer framework of vision systems, as opposed to the mechanical nature of CMMs, offers more opportunity to automate the RE process and thus to reduce the training requirements of operators. Finally, computer vision significantly increases data acquisition rates over CMMs.

1.3 Research areas

Computer-vision-based CARE technology is only in its infancy and has not reached the maturity that one finds with CMM and CAM. The three main areas of research are data acquisition, model reconstruction, and post processing. (Interestingly, reconstruction and post-processing algorithms are also of direct benefit to CMM technology.) The contributions of this paper are:

• an integrated CARE system using acquisition, reconstruction, and post-processing algorithms from computer vision,

• a novel crease detection algorithm implemented in the above system, and

• a set of reconstructed models of automotive parts using this system.

In Sections 2, 3, and 4, we discuss the technical issues associated with each area of the integrated system. In Section 4, we also present the new crease detection algorithm. Then, in Section 5, we present the reconstructed models from the CARE system developed from the research in the previous sections. A block diagram of the system appears in Figure 1. Finally, we conclude in Section 6 with a brief summary of our research and a discussion of future directions.

Figure 1 The proposed block diagram of the integrated CARE system.

2 Range image acquisition

In comparison to CMM systems and their contact sensors, computer vision systems are a more stand-off, less intrusive approach to gathering surface measurements of automotive parts. Our proposed approach is to use range imaging cameras. We define range image acquisition as the process of determining the distance (or depth) from a given camera location to each of the surface points on a part (Besl, 1989). Figure 2 is an example of a colour-coded range image (although reproduced here in greyscale). The most well-known method of range image acquisition is passive triangulation, also known as stereo vision. Stereo vision involves coordinating two cameras to generate depth information about a scene. Unfortunately, computer implementations of stereo vision lack the accuracy and precision necessary for RE. More accurate solutions to range image acquisition are laser-based techniques. These methods include continuous wave modulation, time-of-flight estimation, and structured-light triangulation (Bernardini et al., 1999a). Within our laboratory, we have studied each type of these sensors, with examples shown in Figure 3.
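For intuition on why stereo precision falls short for RE, consider the textbook depth-from-disparity relation for a rectified stereo pair, Z = fB/d: because depth varies inversely with disparity, a fixed one-pixel disparity error produces a depth error that grows with the square of the range. The sketch below is a generic illustration with values of our choosing, not part of the paper's system.

```python
# Illustrative rectified-stereo depth calculation (not from the paper):
# Z = f * B / d, where f is the focal length in pixels, B the baseline
# in metres, and d the disparity in pixels. Disparities are quantised
# to pixels, which limits the achievable depth precision.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point observed with the given disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A one-pixel disparity change at long range causes a large depth jump:
z1 = stereo_depth(1000.0, 0.1, 10.0)   # 10 m
z2 = stereo_depth(1000.0, 0.1, 11.0)   # ~9.09 m, so ~0.9 m per pixel
```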

(a) (b)

Figure 2 An example of a range image (a) for an automotive part (b). The greyscale coding in (a) indicates the distance (or range) from the camera.

(a) (b) (c)

Figure 3 Three range systems. (a) Perceptron wave modulation system (Perceptron, Inc., 1994); (b) LMS-Z210 time-of-flight sensor (RIEGL Laser Measurement Systems, 2001); (c) MAPP 2500 Ranger sheet-of-light system (Integrated Vision Products, 2000).


2.1 Sheet-of-light range scanner

Our research indicates that the most promising method for a CARE scanner is structured light triangulation. Structured light systems are similar to stereo vision except that, instead of two cameras, these systems consist of a single camera and an active light source replacing the other camera. With a structured light system, the range value is a function of three parameters: the baseline distance between the light source and the camera, the viewing angle of the camera, and the illumination angle of the light source, see Figure 4(a). We have the following equation for the range value, r:

r = F(i, j, α, β, B, …),  (1)

where F is a function, possibly nonlinear, i and j respectively represent the horizontal and vertical position of a pixel in the coordinates of the range image, α is illumination angle, β is the camera view angle, and B is the baseline distance between camera and light source. These variables are the dominating parameters of a larger set that affect the range values. We can usually ignore the other parameters and focus mainly on the above (Integrated Vision Products, 2000).
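Equation (1) takes a simple closed form in an idealised planar geometry. The sketch below is an assumption for illustration, not the scanner's actual F: it applies the law of sines to the triangle formed by the laser, the camera, and the illuminated point, ignoring the lens and pixel parameters i and j.

```python
import math

# Hypothetical planar specialisation of equation (1): with baseline B
# between laser and camera, illumination angle alpha and viewing angle
# beta (both measured from the baseline), the law of sines gives the
# range from the camera to the illuminated point:
#   r / sin(alpha) = B / sin(pi - alpha - beta) = B / sin(alpha + beta)

def triangulation_range(alpha: float, beta: float, baseline: float) -> float:
    """Camera-to-point range for an idealised planar triangulation geometry."""
    return baseline * math.sin(alpha) / math.sin(alpha + beta)

# Example: a 0.5 m baseline with 60 degree angles on both sides forms an
# equilateral triangle, so the range equals the baseline.
r = triangulation_range(math.radians(60), math.radians(60), 0.5)  # 0.5 m
```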

(a) (b)


Figure 4 Diagrams of structured light systems. (a) A point-laser system. The laser illuminates point p and projects to pixel a. (b) A sheet-of-light system. The laser stripe across the object projects to the image plane of the camera such that it illuminates only one pixel in each image column.

The type of light source determines the type of a structured light system. The three methods of illumination are single point, sheet-of-light, and coded light. Selection of an appropriate light source depends primarily on the scanning speed. Accuracy itself varies little among the systems, where one can expect millimetre to sub-millimetre accuracy. Although CMMs have better accuracy by an order of magnitude, active triangulation systems require further improvements before they can overtake CMMs. The advantage of these systems, however, is their faster scanning speeds, with the sheet-of-light systems providing the fastest and most efficient scanning configuration of the three (Integrated Vision Products, 2000). For example, some sheet-of-light systems can scan approximately one million points per second. CMMs have difficulty competing with such speeds. Figure 4(b) illustrates the sheet-of-light configuration, and an example of a laser stripe is shown in Figure 5(a).

(a) (b) (c)

Figure 5 (a) A laser stripe from a sheet-of-light system projects onto a set of objects. (b) A calibration target for the MAPP 2500 Ranger System. Note the dots on each plane that serve as fiducial markers for calibration pairs. (c) A range image of the target in (b).

2.2 Calibration problem

Camera calibration essentially involves transforming the range image data (consisting of pixel coordinates for row i and column j and of the range value r) into camera coordinates x, y, and z. Typically, image coordinates r = (r, i, j)^T are represented as a vector of integers, and the camera frame coordinates x = (x, y, z)^T are a vector of floating point numbers. We can write the following equation to define calibration:

x = T(r),  (2)

where T is, in general, a nonlinear transformation. The vector x represents the measured coordinates of a point p on the surface of an object. We must know T to transform each of the points in a range image into the coordinate frame of the camera.

The calibration problem involves estimating T for a given system configuration. The most common approach is to place a target of known dimension into the field of view of the range camera. An example target appears in Figure 5(b) and (c). The target contains special calibration markers so that one can easily match the image data r_k to the known camera frame data x_k, where we use the subscript k to denote the kth such pairing. Given k = 1, …, K target pairs, we can use either closed form or iterative optimisation techniques to estimate T, depending on the linearity of the transform. Several authors propose calibration methods, for example, Krotkov (1991) and Beraldin et al. (1993). We use a method (Integrated Vision Products, 2000) that first calibrates the projection of image pixels into the world coordinates using the calibration target and then a second step that calibrates where the sheet-of-light plane falls on the calibration target. The equation details are outlined in Faugeras (1993).
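As an illustration of estimating T from target pairs, the sketch below assumes, purely for simplicity, that T is affine, x = A r + b; a real range camera requires a nonlinear, multi-step calibration as described above. All function names here are ours.

```python
import numpy as np

# Hypothetical sketch: least-squares fit of an affine calibration map
# x = A r + b from K matched pairs (r_k, x_k). A real scanner's T is
# nonlinear and is usually fitted iteratively.

def fit_affine_calibration(r_pairs, x_pairs):
    """Fit x ~ A r + b from matched (K,3) arrays of image and camera coords."""
    R = np.hstack([r_pairs, np.ones((len(r_pairs), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(R, x_pairs, rcond=None)       # (4,3) solution
    A, b = M[:3].T, M[3]
    return A, b

def apply_calibration(A, b, r):
    """Transform one image-coordinate vector into camera coordinates."""
    return A @ r + b
```

Given noise-free pairs generated by a truly affine map, the fit recovers A and b exactly; with real data the residuals of this fit indicate how badly an affine model underfits the camera.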

2.3 Occlusion problem

The occlusion problem results from surfaces on a part occluding or obstructing the view of other surfaces on the same part. Consider the distributor cap in Figure 6(a). The knob projections at the top of the cap create an intricate structure such that, when viewing from a particular angle, an observer is unable to see the surfaces behind a knob. The closest knob occludes, or obscures, the surfaces behind it. For a range imaging system, this problem means that we are unable to collect data points for the occluded surfaces if we only view the object from a single angle. To overcome this problem, we naturally take multiple views of an object such that we image each surface and thus have full coverage, see Figure 6(b).

(a) (b)

Figure 6 The occlusion problem. (a) Distributor cap example. (b) Multiple camera views avoid occlusions. View B is able to see the hashed line segment while the same segment is an occluded region for view A. In addition, the backsides of the object, which views A and B cannot see, are well within the view of C and D.

With multiple views, however, the scanning system becomes more complicated in that we now have more data to handle, and we must ensure that we properly cover the object. The fact that we have more data to handle is not necessarily a problem in itself since most modern computer systems have vast amounts of data storage, but the problem is that each new view of an object leads to a new camera coordinate frame. So, the question becomes how to align the local coordinate frames for each viewing angle into a single world coordinate frame. This problem is known as the registration problem (Bergevin et al., 1996), and we address it in a later section. A more immediate challenge is to ensure that we properly cover the entire surface of an object. This challenge is essentially a sensor placement problem where solutions are known as next best view algorithms (Wong et al., 1998).

2.4 Measurement error

The third research area for range acquisition is error measurement. As with most data acquisition systems, measurement of error is a problem that is ever present. We define error with the following equation:

x = p + e,  (3)

where x is the measured point in camera frame coordinates, p is the actual point on the object surface also in the camera frame coordinates, and e is the measurement error vector. Multiple sources of error arise in range systems, from digital quantisation of sensor signals to poor reflective properties of surfaces. Because of the random nature of these errors, we lump them into a single concept of noise error. An example of error measurement appears in Figure 7. Error characterisation (Hashemi et al., 1994) and methods to overcome error (Page et al., 2001) are important research avenues.
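One common way to characterise the lumped noise term e of equation (3), not specific to this paper, is to scan a nominally flat plate, fit a plane to the measurements, and report the RMS residual as a noise estimate. The sketch below uses an SVD plane fit; the names are ours.

```python
import numpy as np

# Illustrative noise characterisation: fit a plane to points measured on
# a nominally flat surface and report the RMS distance to that plane.
# The best-fit plane normal is the direction of least variance of the
# centred points, i.e. the last right singular vector.

def plane_rms_error(points: np.ndarray) -> float:
    """RMS distance of (N,3) points from their best-fit plane."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    residuals = centred @ vt[-1]          # signed distances along the normal
    return float(np.sqrt(np.mean(residuals ** 2)))
```

A perfectly planar scan gives zero; the value returned for a real scan of a reference flat is a practical estimate of the sensor's noise floor.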

(a) (b)

Figure 7 Example of quantisation error. (a) A single range image of a water neck part. (b) Zoom view of (a) highlighting the quantisation error.

3 Model reconstruction

After data acquisition, we have a set of range images representing multiple viewpoints around an object. The task now is to reconstruct a CAD model from these multiple range images. Model reconstruction is a multi-faceted problem that is perhaps one of the most difficult problems in computer vision, and it attracts much attention in the field. A frequently cited paper that first formalised the problem in the context of range imaging is Hoppe et al. (1992), with another important work by Curless and Levoy (1996). The basic fundamental problems for reconstruction are:

• aligning multiple views into a global coordinate frame, and

• integrating and merging aligned views into a CAD representation.

3.1 Multiple view registration

As discussed previously, multiple views of an object are necessary to overcome occlusions. Recall Figure 6. As the camera moves to a new view, the resulting data is relative to the new view position. Registration is the process whereby we align these multiple views and their associated coordinate frames into a single global coordinate frame. The registration problem is essentially recovering the rigid transformation given the raw range data. We define the rigid transform as

Page 9: Towards computer-aided reverse engineering of heavy ... · scanning systems offer the opportunity to incorporate RE more fully into the mainstream design flow (Várady et al., 1997)

442 D. Page, A. Koschan, Y. Sun, Y. Zhang, J. Paik and M. Abidi

y = Rx + t,  (4)

where R represents a rotation matrix, t a translation vector, x the point in camera frame coordinates, and y the same point in world coordinates. Registration involves finding R and t. From Horn's work (Horn et al., 1988), given three or more pairs of non-coplanar corresponding 3D points between two views, the unknown rigid transformation has a closed form solution. Thus, the registration problem becomes a point matching problem to establish point pairs.

The most popular algorithm for registration is the Iterative Closest Point (ICP) algorithm (Besl and McKay, 1992; Chen and Medioni, 1992; Zhang, 1994). The ICP algorithm requires an initial estimate of the registration transformation and uses this estimate to establish closest point pairs. With these pairs, the algorithm estimates a new transformation that better aligns the data and iterates these steps until the alignment error falls below some threshold.

The problems with ICP are the need for an initial registration and the extension of the algorithm to more than two views. A few possible methods for establishing the initial estimate are Spin Images (Johnson and Hebert, 1997) and Point's Fingerprint (Sun and Abidi, 2001). As these papers highlight, the difficulty with establishing even a rough initial registration is the need for a sufficient amount of overlap between views. By overlap, we mean that two views must observe at least part of the same surfaces of an object. The other problem with ICP is that it only registers two views. In practice we must register multiple views for a single object. Chen and Medioni (1992) and Bergevin et al. (1996) proposed multi-view registration methods. Campbell and Flynn (2001) present a survey and comparison of registration methods.
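The two-view ICP loop can be sketched as follows. This is a generic textbook version, not the cited implementations: the closed-form alignment step uses an SVD (Kabsch) solution in place of Horn's quaternion method, correspondences are brute-force nearest neighbours, and there is no outlier rejection. All names are ours.

```python
import numpy as np

def best_rigid_transform(x, y):
    """Closed-form R, t minimising sum ||R x_i + t - y_i||^2 over paired points."""
    cx, cy = x.mean(axis=0), y.mean(axis=0)
    H = (x - cx).T @ (y - cy)                 # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cy - R @ cx

def icp(source, target, iters=20):
    """Iteratively align source (N,3) to target (M,3); returns total R, t."""
    R_total, t_total = np.eye(3), np.zeros(3)
    moved = source.copy()
    for _ in range(iters):
        # Closest-point pairing (brute force, fine for a sketch).
        d = np.linalg.norm(moved[:, None] - target[None], axis=2)
        pairs = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(moved, pairs)
        moved = moved @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

As the text notes, this only works when the initial misalignment is small; a poor starting estimate sends the nearest-neighbour pairing to a wrong local minimum.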

(a) (b) (c)

Figure 8 An integration algorithm brings together two views of a knob protruding from a plane into a single surface representation. (a) and (b) Views of the knob from different angles. (c) The resulting integrated mesh.

3.2 View integration

Once we have multiple views registered, we next need to integrate these views into a single consistent surface representation, as shown in Figure 8. For this step, we can either consider the registered range data as a cloud of points (Hoppe et al., 1992) or a set of surfaces (Turk and Levoy, 1994). The objective of view integration is to reconstruct the topology and surfaces of an object from the range samples.

One of the most overlooked aspects of surface reconstruction is sampling density. Although the Nyquist sampling criterion is an elegant theory for 1D functions, its extension to 3D surfaces is unclear, particularly when the surfaces are piecewise smooth as is the case for automotive parts. A few authors have fortunately attempted to address sampling constraints, such as Hoppe et al. (1992) and Amenta et al. (1998). If we assume that we do have some suitable sampling of an object's surfaces, a wide variety of algorithms are available to convert these samplings into triangle meshes. The two main categories are non-volumetric and volumetric methods.

Among the non-volumetric approaches, Hoppe et al. (1992) is the most common in the literature. They assume the input to their algorithm is an unorganised cloud of points. Their approach is to define a signed distance function from the point cloud data and then approximate the object under consideration as the zero level set of that function. Amenta et al. (1998), Edelsbrunner and Mücke (1994), and Bernardini et al. (1999b) have introduced slightly different algorithms that interpolate among the unorganised point cloud instead of approximating as in Hoppe et al. With a different approach, Turk and Levoy (1994) and Soucy and Laurendeau (1995) propose ‘zippering’ methods whereby they first represent each range view as a triangle mesh (instead of as a point cloud) and then stitch these meshes together after registration. The advantage of these stitching methods is that they leverage the sensor information about the organisation of the range points. Algorithms that consider the data as unorganised point clouds, on the other hand, ignore the geometry of the data collection process. Thus, although the point cloud methods are often more theoretically elegant, they are not as practical as other methods such as zippering.
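The signed distance idea behind Hoppe et al. (1992) can be illustrated with a toy approximation, ours rather than theirs: sign the distance from a query point to the nearest sample by which side of that sample's tangent plane the query falls on. A contouring pass (e.g. marching cubes) over this function would then extract the zero level set as a mesh; that step is omitted here.

```python
import numpy as np

# Toy signed distance to a surface sampled as oriented points: project
# the offset from the nearest sample onto that sample's normal. Negative
# values are inside the surface, positive values outside.

def signed_distance(q, points, normals):
    """Approximate signed distance from query q to a sampled oriented surface."""
    i = np.argmin(np.linalg.norm(points - q, axis=1))
    return float(np.dot(q - points[i], normals[i]))

# For samples on the unit sphere, outward normals equal the points
# themselves, so the origin is inside and distant queries are outside.
```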

The volumetric approaches are yet another avenue to the reconstruction problem. These methods decompose the space around the data into a grid of volume elements (voxels). The earliest attempt for volumetric methods is a Delaunay approach proposed by Boissonnat (1984), and a very practical algorithm is the method of Curless and Levoy (1996) with the Digital Michelangelo Project (Levoy et al., 2000). In general, the volumetric methods tend to provide more practical results for real data sets than the non-volumetric methods because they often use space carving, which overcomes many of the problems discussed previously, such as occluded data and poor registration. The drawback to volumetric methods is the memory requirements and data structures necessary to manipulate the volume representation, especially since most reconstructed surfaces occupy only a small portion of the total volume in and around the surfaces. Curless and Levoy use a special run-length coding to handle these unoccupied regions.

4 Post-processing algorithms

The above algorithms lead to a triangle mesh representation that approximates the object from the range measurements. For CAD applications, these meshes typically require additional post processing, such as hole filling, surface smoothing (including spline fitting), and region segmentation. In this section, we briefly overview a few of these techniques, and then we present a novel algorithm for region segmentation, specifically a crease detection algorithm.


4.1 Example algorithms

A variety of post-processing algorithms are available for improving the quality of the mesh representations for CAD purposes. For example, a problem that often arises with many reconstruction techniques is holes in the reconstructed surfaces, as in Figure 9. These holes arise from incomplete scans and, in practice, are inevitable. Some hole filling algorithms include Turk (1992) and Schroeder et al. (1992). A second post-processing example is surface smoothing. Two types of smoothing are important to consider. The first type involves only smoothing the triangle meshes themselves. A few important algorithms are Welch and Witkin (1994), Taubin (1995), Desbrun et al. (1999), and Schneider and Kobbelt (2000). The second type of smoothing involves fitting smooth surfaces, such as splines, to the meshes. This second type is important since most CAD systems represent objects with splines. The challenge with fitting smooth surfaces to triangle meshes is establishing parameterisations and boundaries for the fitting process. Methods that address these problems are Eck and Hoppe (1996), Bajaj et al. (1997), Guo (1997), and Bernardini et al. (1999a). A third type of post-processing algorithm is segmentation. Although different segmentation algorithms are available, such as Mangan and Whitaker (1999) and Wu and Levine (1997), we focus on a specific example in the next subsection.
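The first type of smoothing can be illustrated with a generic umbrella-operator pass of the kind used in the cited mesh-smoothing literature: each vertex moves a fraction λ toward the average of its neighbours. Alternating a positive (shrinking) step with a negative (inflating) step is Taubin's (1995) anti-shrinkage idea; the code itself is our sketch, not any cited implementation.

```python
import numpy as np

def laplacian_step(vertices, neighbours, lam):
    """One umbrella-operator pass. neighbours[i] lists indices adjacent to i."""
    out = vertices.copy()
    for i, nbrs in enumerate(neighbours):
        if nbrs:
            # Move vertex i a fraction lam toward its neighbour centroid.
            out[i] += lam * (vertices[nbrs].mean(axis=0) - vertices[i])
    return out

def taubin_smooth(vertices, neighbours, passes=10, lam=0.5, mu=-0.53):
    """Alternate shrinking (lam > 0) and inflating (mu < 0) passes."""
    v = np.asarray(vertices, dtype=float)
    for _ in range(passes):
        v = laplacian_step(v, neighbours, lam)
        v = laplacian_step(v, neighbours, mu)
    return v
```

Plain Laplacian smoothing (λ only) visibly shrinks closed meshes; the λ/μ alternation largely preserves volume while still attenuating noise.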

(a) (b)

Figure 9 (a) Reconstruction of a pulley with a hole on the right side. (b) Results after a hole-filling algorithm. The original point cloud data is from the US Army RDECOM National Automotive Center.

4.2 Crease detection algorithm

We now present a novel post-processing algorithm for crease detection. Creases in surfaces are smoothness discontinuities that occur where two or more smooth surfaces join. For example, creases occur at the edges and corners of a cube. Such discontinuities are typical in CAD models and automotive parts. Detection of these creases is important for such tasks as spline fitting and part decomposition.

The algorithm that we propose combines the robust normal voting in Page et al. (2001) with the morphological operations of Rössl et al. (2000). Our algorithm begins by estimating the surface normal vector at each vertex of a triangle mesh. This step follows the normal voting algorithm as outlined in Page et al. with a slight improvement. Then, we apply the morphological operations of Rössl et al. to identify vertices whose normal vectors fall below a discontinuity threshold, which determines where creases in the surface occur. This combined approach is more robust than Rössl et al. alone, which is justified by the graphs in Page et al. We outline our proposed method in the following paragraphs, and Figure 10 shows an example.


Figure 10 Results from the proposed crease detection algorithm. (a) Initial threshold after normal vector voting. (b) Morphological operations clean up undesired artifacts. (c) Results of the thinning operation followed by a connected components analysis. Each greyscale label denotes a unique surface, such that crease discontinuities bound each surface.

Consider a vertex, v, of a mesh. To estimate the surface normal at v, the normal voting algorithm first uses the fast marching algorithm (Kimmel and Sethian, 1998) to find triangles in the geodesic neighbourhood of v. The user specifies a geodesic distance that defines the extent of this neighbourhood; typically this parameter is three times the average edge length of triangles in the mesh. We label this neighbourhood, a subset of the original mesh, as M_v. Using the continuity constraints in Medioni et al. (2000), each triangle, f_i, in M_v generates a vote vector, n_i, at v. We define n_i as

$$\mathbf{n}_i = \mathbf{n}_{f_i} - 2\left(\mathbf{n}_{f_i}^{\mathrm{T}} \mathbf{u}_i\right) \mathbf{u}_i \,, \qquad (5)$$

where n_{f_i} is the normal of f_i and u_i is the unit vector from v towards the centre of f_i. We define u_i as follows:

$$\mathbf{u}_i = \frac{\mathbf{p}_{f_i} - \mathbf{p}_v}{\left\| \mathbf{p}_{f_i} - \mathbf{p}_v \right\|} \,, \qquad (6)$$

where p_{f_i} is the centre of f_i and p_v is the position vector of v. The algorithm collects these votes from each triangle as a weighted vector sum:

$$\mathbf{v} = \sum_{f_i \in M_v} w_i \mathbf{n}_i \,, \qquad (7)$$

where the weight, w_i, is an exponential decay function of the triangle area and its geodesic distance from v. This formulation differs from Page et al. (2001) in that we no longer collect votes as a covariance matrix and, thus, do not need to decompose such a matrix, a significant computational benefit. Now, we compute the length of v and then threshold as in Figure 10(a). This approach avoids the eigen analysis of the previous method. After thresholding, we are left with a set of vertices where the normal voting algorithm has little agreement with regard to the normal vector. We interpret these vertices as either on or near a crease discontinuity. We finally apply the morphological opening, closing, and thinning operations as discussed in Rössl et al. (2000) to improve our initial regions. Figure 10(b) shows the results of opening and closing, while Figure 10(c) shows the final segmentation after thinning.
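The voting and thresholding steps above can be sketched in code as follows. This is a simplified illustration, not the authors' implementation: the geodesic neighbourhood is assumed to be precomputed (rather than found by fast marching), and the exact form of the exponential decay weight w_i is an assumption.

```python
import math

def vote_agreement(v_pos, neighborhood, sigma=1.0):
    """Length of the normalised vote sum at a vertex (Equations 5-7).

    `neighborhood` lists (centre, normal, area, geodesic_dist) tuples
    for each triangle f_i in M_v; normals are assumed unit length.
    Values near 1 mean the votes agree (smooth surface); small values
    flag a crease candidate.
    """
    vote = [0.0, 0.0, 0.0]
    total_w = 0.0
    for centre, n_f, area, dist in neighborhood:
        # u_i: unit vector from v towards the triangle centre (Eq. 6).
        u = [c - p for c, p in zip(centre, v_pos)]
        norm_u = math.sqrt(sum(x * x for x in u))
        if norm_u == 0.0:
            continue
        u = [x / norm_u for x in u]
        # n_i = n_f - 2 (n_f^T u_i) u_i: the vote of Equation (5).
        d = sum(a * b for a, b in zip(n_f, u))
        n_i = [a - 2.0 * d * b for a, b in zip(n_f, u)]
        # w_i: exponential decay in area and distance (assumed form).
        w = area * math.exp(-dist / sigma)
        vote = [s + w * x for s, x in zip(vote, n_i)]
        total_w += w
    if total_w == 0.0:
        return 0.0
    return math.sqrt(sum(x * x for x in vote)) / total_w
```

On a flat patch all votes coincide and the agreement is 1; at a sharp edge the votes point in different directions and the sum shortens, which is exactly what the threshold in Figure 10(a) detects.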

5 Part database results

Based on the above research, we have developed a CARE system within our laboratory. The objective of this system has been to explore CARE technology in the context of vehicle parts. This study has led to the generation of 3D models with examples shown in Figure 11. We consider details in generating these models in the following subsections.


Figure 11 Reconstruction examples: (a) hand crank; (b) disc brake; (c) distributor casing; (d) water neck; (e,f,g,h) reconstructed 3D models. These models and their raw data are available at http://iristown.engr.utk.edu/~page/database/.

5.1 Range image acquisition

For the experiment, we used the MAPP 2500 Ranger System, which is a sheet-of-light system (Integrated Vision Products, 2000). The Ranger consists of a special 512 × 512 pixel camera and a low-power stripe laser, as shown in Figure 12. The Ranger designers have specifically tailored the camera and its supporting electronics to integrate image processing functions onto a single parallel-architecture chip. This chip, contained within the camera housing, has dedicated range processing functions that allow for high-speed acquisition of nearly one million points per second.


Figure 12 The MAPP 2500 Ranger System. (a) SMART camera with on-board 512 × 512 sensor integrated with the Ranger processor. (b) Stripe laser with less than 500 mW power and a 400-700 nm wavelength. (c) Optical filter attached to the camera.

The most common arrangement of the system is to mount the camera and the laser source relative to the proposed target area to form a triangle where the camera, laser, and target are each corners of the triangle. Recall Figure 4(b). The angle where the laser forms a corner is typically a right angle, such that the laser stripe projects along one side of the triangle. The angle, α, at the camera corner is typically 30 to 60 degrees. The baseline distance, denoted by B, between the camera and the laser completes the specification of the triangle. Given B and α, and recalling Equation (1), the equation for range is as follows:

$$r(s) = B \, \frac{f \tan\alpha - s}{f + s \tan\alpha} \,, \qquad (8)$$

where f represents the focal length of the camera and s is the detected pixel row of the laser on the sensor (Integrated Vision Products, 2000). We calibrate the system using the target in Figure 5(b). The range resolution, ∆r, for this geometry is as follows.

$$\Delta r(s) = -\frac{B f}{\left( f \cos\alpha + s \sin\alpha \right)^2} \, \Delta s \,, \qquad (9)$$

where ∆s is the pixel width (Integrated Vision Products, 2000). Notice that the resolution is a function of the sensor offset position, denoted by s, which results in nonlinear range increments. Further, note that B, which controls the size of the system triangle, determines the system resolution. Finally, we have added a band-pass optical filter (685 nm) to the camera lens to minimise the effects of spectral reflections and inter-surface reflections. Figure 12(c) shows the lens configuration.
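Equations (8) and (9) translate directly into code; the sketch below uses illustrative values for B, f, and α, not the actual Ranger calibration. A useful sanity check is that a finite difference of Equation (8) over one sensor row matches Equation (9) at the midpoint.

```python
import math

def range_from_row(s, B, f, alpha):
    # Equation (8): range as a function of the detected sensor row s,
    # baseline B, focal length f, and camera angle alpha (radians).
    return B * (f * math.tan(alpha) - s) / (f + s * math.tan(alpha))

def range_resolution(s, B, f, alpha, ds=1.0):
    # Equation (9): range increment for a change ds in the sensor row.
    return -B * f * ds / (f * math.cos(alpha) + s * math.sin(alpha)) ** 2
```

Evaluating `range_resolution` at several values of s shows the nonlinear range increments noted above: the increment depends on the sensor offset position, not only on B and f.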

5.2 Model reconstruction

After scanning and calibrating the data, the next step is reconstruction. The objects in Figure 11 require multiple scans from the system described. We selected each of the views with a brute-force, trial-and-error approach. Table 1 summarises the number of views necessary to reconstruct each of the models. We register each of these scans using both an automatic algorithm (Sun and Abidi, 2001) and a manual algorithm. The computing platform for these algorithms is an SGI Octane with a single 195 MHz MIPS R10000 processor and 128 megabytes of memory. The manual algorithm was necessary when the automatic algorithm failed. The ICP algorithm refines the results for both methods. Recall that Equation (4) defines registration. Once the views are registered, we reconstruct the triangle meshes with an integration algorithm based on Hilton et al. (1998). This method defines an implicit field in a volumetric grid using the raw data from each range scan. We compute a signed distance function to the raw data at each voxel of the grid. After integrating each view into the voxel structure, we extract the final mesh model using the marching cubes algorithm (Lorensen and Cline, 1987). Finally, we smooth the models with a regularisation technique similar to Sun et al. (2000).

Table 1 Number of views necessary to reconstruct each vehicle part.

Part                   Number of views   Size of final 3D model (number of triangles)
Crank                  35                93 752
Disc brake             13                73 553
Distributor casing     15                117 036
Water neck connector   28                117 564


Figure 13 Crease detection results for other piecewise smooth models: (a) disc brake; (b) water neck; (c) distributor casing.

5.3 Crease detection results

Using the models from the above reconstructions, we apply our proposed crease detection algorithm, as discussed in Section 4. Besides the disc brake shown in Figure 10, the water neck and the distributor casing are the other parts that have piecewise smooth discontinuities. Figure 13 shows the crease detection results and the subsequent surface segmentation for each of these parts. Each greyscale label in these figures represents a distinct smooth surface. The boundaries between these surfaces are crease discontinuities. This segmentation is useful for other post-processing algorithms, such as spline fitting, where we might fit one spline to each smooth surface.
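The labelling of distinct smooth surfaces, a connected components analysis over non-crease vertices as in Figure 10(c), can be sketched as a breadth-first flood fill over the mesh adjacency graph. The adjacency-list representation and the `is_crease` flags here are hypothetical inputs, assumed to come from the crease detection step.

```python
from collections import deque

def label_smooth_regions(num_verts, neighbors, is_crease):
    """Flood-fill labels over non-crease vertices; crease vertices get -1."""
    labels = [-1] * num_verts
    current = 0
    for seed in range(num_verts):
        if is_crease[seed] or labels[seed] != -1:
            continue
        # Breadth-first traversal of one smooth region.
        queue = deque([seed])
        labels[seed] = current
        while queue:
            v = queue.popleft()
            for u in neighbors[v]:
                if not is_crease[u] and labels[u] == -1:
                    labels[u] = current
                    queue.append(u)
        current += 1
    return labels
```

Each resulting label corresponds to one greyscale region in Figure 13, and each region could then be handed to a spline-fitting routine.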

6 Summary and conclusions

In this paper, we have presented the results of a CARE system developed within our laboratory and a new crease detection algorithm that we have integrated into this system. With our CARE system, we have explored the research challenges facing the reconstruction of automotive parts from range images. The discussion in previous sections highlighted these issues and noted specific research that addresses them. For data acquisition, the primary areas of research are data calibration, the occlusion problem, and measurement error. For reconstruction, the areas are view registration and view integration. We have addressed each of these issues using the IVP MAPP 2500 scanner and algorithms developed in our laboratory. The resulting system has led to the creation of a database of automotive parts, as shown in Figure 11. These models are available online and are intended to serve the research community.

Based on our experience in generating these models, we have found that the occlusion problem is one of the more significant factors in the quality of the reconstructed models and, unfortunately, it is often ignored in the research literature. Recall Table 1 and in particular the crank model. The crank is a rather small object with approximately a 10 cm diameter wheel, but because of the four spokes on the crank, accurate reconstruction required 35 different views. Many of those views contributed only a few dozen triangles to the final model. For view selection, we used a trial-and-error approach: we would reconstruct the object and then return to our scanner to acquire new data for occluded regions. In contrast to our brute-force approach, next-best-view research is an important topic that could aid this view selection process and bring some order to it. Few reconstruction algorithms, including those in CMM research, address view selection, other than to state that an adequate number of views is necessary. As CARE technology matures, we believe that addressing the occlusion problem both during data acquisition and model reconstruction is an important topic of future research.

With the occlusion problem, the solution is to use additional views to completely cover the exterior of a surface. A further question concerns the interior features of an object. Consider a gear box, for example. With range scanning, we reconstruct only the outside shell, i.e. the exterior surfaces, of an object: we reconstruct the box but not the gearing inside it. To scan the interior features, a user must disassemble the object into its constituent pieces and then range scan each piece individually. With the gear box, an individual would need to remove the gears from inside the box and then scan them. From an imaging standpoint, this problem is not strictly an occlusion problem, since no view from the exterior of the object allows a range scanner to ever see the interior features. Systems that do allow reconstruction of interior features without disassembly are computed tomography (CT) scanning systems (Flisch et al., 1999). In this paper, we have not investigated these techniques, but many of the imaging principles, such as registration, integration, and processing, are directly applicable to the CT point clouds that one might use for CARE.


Besides the CARE system, another contribution of this paper is the crease detection algorithm outlined in Section 4. This algorithm is a first step towards generating complete spline models of free-form surfaces. Splines are smooth surfaces by definition; consequently, when fitting splines to triangle mesh models, we must estimate where smoothness discontinuities occur. The proposed algorithm detects these discontinuities and segments the mesh into distinct smooth surfaces. The next phase of our research is to use the output from this algorithm and methods such as Eck and Hoppe (1996) and Bernardini et al. (1999a) to generate spline representations of the automotive parts.

The system proposed in this paper represents our initial investigation into CARE technology using computer vision techniques. For future research, we intend to consider new avenues to improve the accuracy of the system and to expand the types of parts in our database. In particular, we plan to compare these models with reconstructions from CMM data. The goal of such a comparison would be to validate computer vision as an RE technology and to verify the accuracy of our models. This comparison would also focus on the occlusion problem and thus evaluate view selection in the context of CMMs. Additionally, the current system requires some user intervention throughout the reconstruction process. With additional research, we seek to fully automate this process, end to end.

Acknowledgements

The authors would like to thank Coryne Forest and Robert Berlin of the U.S. Army RDECOM National Automotive Center for the point cloud data of the pulley. Further, we would like to acknowledge Faysal Boughorbel, Umayal Chidambaram, and Brad Grinstead for their contributions to this research. This work was supported by the DOD/RDECOM/NAC/ARC, R01-1344-18, by the University Research Program in Robotics under grant DOE-DE-FG02-86NE37968, by the FAA/NSSA Program, R01-1344-48/49, and by the National Science Foundation under grant INT-0318427.

References

Amenta, N., Bern, M. and Kamvysselis, M. (1998) 'A new Voronoi-based surface reconstruction algorithm', Proc. of ACM SIGGRAPH, pp. 415-421.

Bajaj, C., Bernardini, F. and Xu, G. (1997) 'Reconstructing surfaces and surfaces on surfaces', Algorithmica, Vol. 19, pp. 243-261.

Beraldin, J., El-Hakim, S. and Cournoyer, L. (1993) 'Practical range camera calibration', Proc. SPIE Conference Videometrics II, pp. 21-31.

Bergevin, R., Soucy, M., Ganon, H. and Laurendeau, D. (1996) 'Towards a general multi-view registration technique', IEEE Trans. Patt. Anal. Machine Intell., Vol. 18, No. 5, pp. 540-547.

Bernardini, F., Bajaj, C.L., Chen, J. and Schikore, D.R. (1999a) 'Automatic reconstruction of 3D CAD models from digital scans', Intl. J. of Comp. Geometry and Applications, Vol. 9, Nos. 4&5, pp. 327-369.

Bernardini, F., Mittleman, J., Rushmeier, H., Silva, C. and Taubin, G. (1999b) 'The ball-pivoting algorithm for surface reconstruction', IEEE Trans. on Vis. and Comp. Graphics, Vol. 5, No. 4, pp. 349-359.

Besl, P.J. (1989) 'Active optical range imaging', in Advances in Machine Vision (J.L.C. Sanz, ed.), Springer-Verlag, New York, NY, pp. 1-53.

Besl, P.J. and McKay, N.D. (1992) 'A method for registration of 3D shapes', IEEE Trans. Patt. Anal. Machine Intell., Vol. 14, No. 2, pp. 239-256.

Boissonnat, J.D. (1984) 'Geometric structures for three-dimensional shape representation', ACM Trans. on Graphics, Vol. 3, No. 4, pp. 266-286.

Campbell, R. and Flynn, P. (2001) 'A survey of free-form object representation and recognition techniques', Computer Vision and Image Understanding, Vol. 81, No. 2, pp. 166-210.

Chen, Y. and Medioni, G. (1992) 'Object modeling by registration of multiple range images', Image and Vision Computing, Vol. 10, No. 3, pp. 145-155.

Curless, B. and Levoy, M. (1996) 'A volumetric method for building complex models from range images', Proc. of ACM SIGGRAPH, pp. 303-312.

Desbrun, M., Meyer, M., Schröder, P. and Barr, A. (1999) 'Implicit fairing of irregular meshes using diffusion and curvature flow', Proc. of ACM SIGGRAPH, pp. 317-324.

Eck, M. and Hoppe, H. (1996) 'Automatic reconstruction of B-spline surfaces of arbitrary topological type', Proc. of ACM SIGGRAPH, pp. 325-334.

Edelsbrunner, H. and Mücke, E.P. (1994) 'Three-dimensional alpha shapes', ACM Trans. Graph., Vol. 13, No. 1, pp. 43-72.

Faugeras, O. (1993) Three Dimensional Computer Vision - A Geometric Viewpoint, MIT Press.

Flisch, A., Wirth, J., Zanini, R., Breitenstein, M., Rudin, A., Wendt, F. and Golz, R. (1999) 'Industrial computed tomography in reverse engineering applications', Proc. of Computerized Tomography for Industrial Applications and Image Processing in Radiology, BB 67-CD Paper 8, Berlin, Germany.

Guo, B. (1997) 'Surface reconstruction from points to splines', Computer-Aided Design, Vol. 29, No. 4, pp. 269-277.

Hashemi, K.S., Hurst, P.T. and Oliver, J.N. (1994) 'Sources of error in a laser rangefinder', Review of Scientific Instruments, Vol. 65, No. 10, pp. 3165-3171.

Hilton, A., Stoddart, A., Illingworth, J. and Windeatt, T. (1998) 'Implicit surface-based geometric fusion', Computer Vision and Image Understanding, Vol. 69, pp. 273-291.

Hoppe, H., DeRose, T., Duchamp, T., McDonald, J. and Stuetzle, W. (1992) 'Surface reconstruction from unorganized points', Proc. of ACM SIGGRAPH, pp. 71-78.

Horn, B., Hilden, H. and Negahdaripour, S. (1988) 'Closed-form solution of absolute orientation using orthonormal matrices', J. Optical Society of America A (Optics and Image Science), Vol. 5, No. 7, pp. 1127-1135.

Integrated Vision Products (2000) MAPP 2500 Ranger User Manual, Integrated Vision Products, Sweden.

Johnson, A. and Hebert, M. (1997) 'Surface registration by matching oriented points', Proc. Intl. Conf. on Recent Advances in 3D Digital Imaging and Modeling, pp. 121-128.

Kimmel, R. and Sethian, J.A. (1998) 'Computing geodesic paths on manifolds', Proc. of the Natl. Academy of Sciences, Vol. 95, pp. 8431-8435.

Krotkov, E. (1991) 'Laser rangefinder calibration for a walking robot', Proc. IEEE International Conference on Robotics and Automation, Vol. 3, pp. 2568-2573.

Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., Ginzton, M., Anderson, S., Davis, J., Ginsberg, J., Shade, J. and Fulk, D. (2000) 'The Digital Michelangelo Project: 3D scanning of large statues', Proc. of ACM SIGGRAPH, pp. 131-144.

Lorensen, W.E. and Cline, H.E. (1987) 'Marching cubes: a high resolution 3D surface construction algorithm', Proc. of ACM SIGGRAPH, pp. 163-169.

Mangan, A.P. and Whitaker, R.T. (1999) 'Partitioning 3D surface meshes using watershed segmentation', IEEE Trans. on Vis. and Computer Graphics, Vol. 5, No. 4, pp. 308-321.

Medioni, G., Lee, M. and Tang, C.K. (2000) A Computational Framework for Segmentation and Grouping, Elsevier, Amsterdam.

Page, D.L., Koschan, A.F., Sun, Y., Paik, J.K. and Abidi, M.A. (2001) 'Robust crease detection and curvature estimation of piecewise smooth surfaces from triangle mesh approximations using normal voting', Proc. Intl. Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 162-167.

Perceptron, Inc. (1994) Product Information, Perceptron, Inc., 23855 Research Drive, Farmington Hills, MI 48335.

RIEGL Laser Measurement Systems (2001) Laser Mirror Scanner LMS-Z210(-HT) Technical Documentation and Users Instructions, RIEGL Laser Measurement Systems, Austria.

Rössl, C., Kobbelt, L. and Seidel, H.P. (2000) 'Extraction of feature lines on triangulated surfaces using morphological operators', Smart Graphics, 2000 AAAI Symposium, Vol. vii, No. 181, pp. 71-75.

Schneider, R. and Kobbelt, L. (2000) 'Generating fair meshes with G1 boundary conditions', Geometric Modeling and Processing Proceedings, pp. 251-261.

Schroeder, W.J., Zarge, J.A. and Lorensen, W.E. (1992) 'Decimation of triangle meshes', Proc. of ACM SIGGRAPH, pp. 65-70.

Soucy, M. and Laurendeau, D. (1995) 'A general surface approach to the integration of a set of range views', IEEE Trans. on Patt. Anal. and Machine Intell., Vol. 17, No. 4, pp. 344-358.

Sun, Y. and Abidi, M.A. (2001) 'Surface matching by 3D point's fingerprint', Proc. IEEE Int'l Conf. on Computer Vision, Vol. 2, pp. 263-269.

Sun, Y., Paik, J.K., Price, J.R. and Abidi, M.A. (2000) 'Dense range image smoothing using adaptive regularization', Proc. IEEE Int'l Conf. on Image Processing, Vol. II, pp. 744-747.

Taubin, G. (1995) 'A signal processing approach to fair surface design', Proc. of ACM SIGGRAPH, pp. 351-358.

Thompson, W.B., Owen, J.C. and de St. Germain, H.J. (1999) 'Feature-based reverse engineering of mechanical parts', IEEE Trans. on Robotics and Automation, Vol. 15, pp. 57-66.

Turk, G. (1992) 'Re-tiling polygonal surfaces', Proc. of ACM SIGGRAPH, pp. 55-64.

Turk, G. and Levoy, M. (1994) 'Zippered polygon meshes from range images', Proc. of ACM SIGGRAPH, pp. 311-318.

Várady, T., Martin, R.R. and Cox, J. (1997) 'Reverse engineering of geometric models - an introduction', Computer-Aided Design, Vol. 29, No. 4, pp. 255-268.

Welch, W. and Witkin, A. (1994) 'Free-form shape design using triangulated surfaces', Proc. of ACM SIGGRAPH, pp. 247-256.

Wong, L.M., Dumont, C. and Abidi, M.A. (1998) 'Determining optimal sensor poses in 3-D object inspection', Conf. on Quality Control by Artificial Vision, November, Japan, pp. 371-377.

Wu, K. and Levine, M.D. (1997) '3D part segmentation using simulated charge distribution', IEEE Trans. on Patt. Anal. and Machine Intell., Vol. 19, No. 11, pp. 1223-1235.

Yan, X. and Gu, P. (1996) 'A review of rapid prototyping technologies and systems', Computer-Aided Design, Vol. 28, No. 4, pp. 307-318.

Zhang, Z. (1994) 'Iterative point matching for registration of free-form curves and surfaces', Intl. J. of Computer Vision, Vol. 13, No. 2, pp. 119-152.