
Partial Shape Classification Using Contour Matching in Distance Transformation

Hong-Chih Liu and Mandyam D. Srinath, Senior Member, IEEE

Abstract-In this paper, an algorithm is presented to recognize and locate partially distorted two-dimensional shapes without regard to their orientation, location, and size. The proposed algorithm first calculates the curvature function from the digitized image of an object. The points of local maxima and minima extracted from the smoothed curvature function are used as control points to segment the boundary and to guide the boundary matching procedure. The boundary matching procedure considers two shapes at a time, one shape from the template data bank, and the other being the object under classification. The procedure tries to match the control points in the unknown shape to those of a shape from the template data bank, and estimates the translation, rotation, and scaling factors to be used to normalize the boundary of the unknown shape. The chamfer 3/4 distance transformation and a partial distance measurement scheme are used as the final step to measure the similarity between these two shapes. The unknown shape is assigned to the class corresponding to the minimum distance. The algorithm developed in this paper has been successfully tested on partial shapes using two sets of data, one with sharp corners, and the other with curve segments. This algorithm not only is computationally simple, but also works reasonably well in the presence of a moderate amount of noise.

Index Terms-Distance transformations, edge matching, image recognition, local features, shape identification.

    I. INTRODUCTION

RECOGNITION of shapes which are only partially correct is important in many image analysis applications. In this paper, we present an algorithm to recognize and locate two-dimensional shapes which are partially distorted. The algorithm is designed with the following objectives:

1) The algorithm should be able to handle images which contain sharp corners, e.g., industrial parts or aircraft, as well as images which contain only curve segments, e.g., the shape of a lake.

2) The algorithm must be computationally simple.

3) The algorithm must be responsive to arbitrary changes in orientation, position, and scale of the objects under classification.

The proposed algorithm tries to estimate the orientational, scaling, and translational data between a test shape and a model shape by using a small number of control points extracted from both shapes. The goodness of this estimation can be represented by a numerical value.

The algorithm first calculates the curvature function from the digitized image of an object. The points of local maxima and minima are extracted from the smoothed curvature function, and are used as control points to segment the boundary and to

Manuscript received January 15, 1990; revised March 12, 1990. This work was supported in part by the Defense Advanced Research Projects Agency under Grant MDA-903-86-C-0182.

H.-C. Liu is with the Institute of Nuclear Energy Research, P.O. Box 3-11, Lung-Tan, Taiwan, Republic of China.

M. D. Srinath is with the Department of Electrical Engineering, Southern Methodist University, Dallas, TX 75275.

IEEE Log Number 9037573.

guide the boundary matching procedure. The boundary matching procedure considers two shapes at a time, one shape from the template data bank, and the other being the object under classification. The procedure tries to match the control points in the test shape to those of a shape from the template data bank, and estimates the translation, rotation, and scaling factors to be used to normalize the boundary of the test shape. The chamfer 3/4 distance transformation [3] and a partial distance measurement scheme are proposed as the final step to measure the similarity between these two shapes. The test shape is assigned to the class corresponding to the minimum distance.

This algorithm differs from many of the previous methods for partial shape classification. The Bolles and Cain technique [2] requires knowledge about the scale of the object under classification. Their algorithm also requires that objects contain several special local features such as holes and corners; objects that are characterized by large and continuous arcs cannot be located. The algorithm of Gorman, Mitchell, and Kuhl [8] can recognize objects with different orientation and location, and precise knowledge about the scale of the object is not required; however, an automatic method for selecting a threshold value to perform the polygon approximation required in the algorithm is not presented, so appropriate threshold values must be found experimentally. The algorithms of Mitchell and Grogan [13], Singer and Chellappa [16], Lin and Chellappa [12], and Tejwani and Jones [17] are computationally very expensive. Knoll and Jain's [11] and Perkins' [14] algorithms require precise knowledge of the scale. It can be seen that most of the previous work on partial shape classification either concentrates on shapes with known scale or is computationally very expensive. The algorithm developed in this paper has been successfully tested on partial shapes using two sets of data (one with sharp corners and the other with curve segments) without knowledge about the scale, orientation, and location of the object. The algorithm not only is computationally simple, but also works reasonably well in the presence of a moderate amount of noise.

The paper is structured as follows. In Section II, boundary segmentation based on control point extraction from the smoothed curvature function is discussed. The boundary matching procedure is presented in Section III. In Section IV, the chamfer 3/4 distance transformation and partial distance measurement scheme are presented to calculate the goodness of the match between two shapes. Experimental procedures and results are presented in Section V. Section VI provides conclusions.

II. BOUNDARY SEGMENTATION

The present algorithm uses the local maximum and minimum points of the smoothed curvature function to segment the boundary of the object. If the boundary function, defined parametrically by $s(t) = (x(t), y(t))$, is twice differentiable, the slope function $\theta(t)$


(the angle between the tangent line and the positive $x$ axis) of the boundary can be computed by

$$\theta(t) = \tan^{-1}\!\left(\frac{\dot{y}(t)}{\dot{x}(t)}\right).$$

The curvature function $k(t)$ (the derivative of the slope with respect to the arc length) is defined as

$$k(t) = \frac{\dot{x}(t)\,\ddot{y}(t) - \dot{y}(t)\,\ddot{x}(t)}{\left[\dot{x}^2(t) + \dot{y}^2(t)\right]^{3/2}}$$

where $\dot{x}$, $\dot{y}$, $\ddot{x}$, and $\ddot{y}$ are the first and second derivatives of $x(t)$ and $y(t)$ with respect to $t$.

In practice, only the digitized boundary functions are available. Thus, the discrete nature of the image causes the curvature to be quantized. Also, because the curvature function is the output of the second derivative of the boundary function, directly using the angle between successive line segments joining the sample points of the digitized boundary function gives rise to unreliable measures of curvature. Techniques such as filtering the curvature function must be introduced to reduce the effect of quantization and to smooth the curvature function.

Fig. 1 shows the process of extracting the curvature function from the object silhouette image. An encoding algorithm is first applied to the input image to obtain the chain code representation of the boundary. The edge gradient direction of each boundary pixel is computed from the gradient of the binary image. At pixel P, whose neighbors have gray levels

    A  B  C
    D  P  E
    F  G  H

the Sobel differences are computed by

$$\Delta x = \tfrac{1}{4}\left[(C + 2E + H) - (A + 2D + F)\right]$$

$$\Delta y = \tfrac{1}{4}\left[(A + 2B + C) - (F + 2G + H)\right].$$

The edge gradient direction of pixel P is defined as

$$\theta_P = \tan^{-1}\!\left(\frac{\Delta y}{\Delta x}\right).$$
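As an illustration of the gradient computation above, the following sketch evaluates the Sobel differences and the edge gradient direction at a single boundary pixel. The array layout (row index i increasing downward) and the function name are assumptions, not the authors' code.

```python
import numpy as np

def edge_gradient_direction(img, i, j):
    """Sobel edge gradient direction at pixel P = (i, j) of a binary image,
    using the 3x3 neighborhood labeled A..H in the text."""
    A, B, C = img[i - 1, j - 1], img[i - 1, j], img[i - 1, j + 1]
    D, E = img[i, j - 1], img[i, j + 1]
    F, G, H = img[i + 1, j - 1], img[i + 1, j], img[i + 1, j + 1]
    dx = ((C + 2 * E + H) - (A + 2 * D + F)) / 4.0
    dy = ((A + 2 * B + C) - (F + 2 * G + H)) / 4.0
    return np.arctan2(dy, dx)   # angle in (-pi, pi]
```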

A normalization step is performed here to eliminate the artificial discontinuities from wraparound that often occur in the edge gradient direction. More precisely, the edge gradient direction is calculated from the arctangent of the Sobel differences. Since the arctangent function only returns an angle in the range (−π, π), any angle outside this range is wrapped around, thus resulting in an artificial discontinuity in the edge gradient direction. The normalization step traces the function and searches for local discontinuities greater than π or less than −π. An offset of ±2π is added to the function at each following point to correct the wraparound. This results in a continuous function along the entire contour, with the exception of the initial and final points of the contour. For a closed contour, these two points will always have an artificial discontinuity of 2π.
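The normalization step is essentially phase unwrapping of the sampled edge direction function. A minimal sketch, assuming the directions are given as a 1-D array in boundary order (NumPy's np.unwrap performs the same correction):

```python
import numpy as np

def unwrap_edge_directions(theta):
    """Remove artificial 2*pi wraparound discontinuities from the edge
    direction function traced along the contour."""
    theta = np.asarray(theta, dtype=float)
    out = theta.copy()
    offset = 0.0
    for i in range(1, len(theta)):
        jump = theta[i] - theta[i - 1]
        if jump > np.pi:        # downward wrap: subtract 2*pi from following points
            offset -= 2.0 * np.pi
        elif jump < -np.pi:     # upward wrap: add 2*pi to following points
            offset += 2.0 * np.pi
        out[i] = theta[i] + offset
    return out
```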

After normalization, the curvature function is calculated by convolving the edge direction function of the boundary points, O(s), with the first derivative of a Gaussian function G(s) [6], where

$$G'(s) = \frac{-s}{\sigma^{3}\sqrt{2\pi}}\exp\!\left(-\frac{s^{2}}{2\sigma^{2}}\right).$$

Fig. 1. Image processing, curvature function calculation, and control point extraction. (Block diagram: object silhouette image → boundary following → edge direction computation → curvature computation → find local maxima/minima → control points.)

Varying the value of σ in the filter controls the degree of smoothness of the resulting curve.

The points of local maxima and minima are determined from the smoothed curvature function and are assigned as the control points to break the boundary of the shape into segments. Each shape is represented by an ordered sequence of line segments with control points as vertex points. The value of σ serves to govern the number of vertices that will be found. Proper selection of the value of σ is important to the success of this algorithm. Each line segment must be long enough to have distinguishing characteristics, but also short enough to ensure that an adequate number of segments is used to describe the shape.
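A minimal sketch of the smoothing and control-point extraction described above: convolve the unwrapped edge direction function with the sampled first derivative of a Gaussian and keep the local extrema of the result. The kernel width, the use of circular convolution for a closed contour, and the helper names are assumptions.

```python
import numpy as np

def gaussian_first_derivative(sigma):
    """Sampled first derivative of a Gaussian with standard deviation sigma."""
    half = int(np.ceil(4 * sigma))
    s = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-s**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return -s / sigma**2 * g

def smoothed_curvature_and_control_points(theta_unwrapped, sigma):
    """Smoothed curvature of a closed contour and the indices of its local
    maxima and minima (the control points)."""
    kernel = gaussian_first_derivative(sigma)
    half = len(kernel) // 2
    n = len(theta_unwrapped)
    # pad circularly because the contour is closed
    ext = np.concatenate([theta_unwrapped[-half:],
                          theta_unwrapped,
                          theta_unwrapped[:half]])
    k = np.convolve(ext, kernel, mode='same')[half:half + n]
    # local extrema of the smoothed curvature
    prev, nxt = np.roll(k, 1), np.roll(k, -1)
    extrema = np.where(((k > prev) & (k > nxt)) | ((k < prev) & (k < nxt)))[0]
    return k, extrema
```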

Fig. 2 shows the curvature functions of several scaled aerial views of Lake Huron. These curvature functions have been smoothed by the first derivative of the Gaussian function, with the value of σ chosen proportional to the number of boundary points of the shape. From these results, it is clear that when the number of boundary points of a complete shape is known, the curvature functions can be properly smoothed such that objects of the same class have similar curvature functions. The use of control points extracted from these curvature functions will therefore segment these shapes in the same manner, without regard to size, orientation, and location. When the size of the shape is unknown, an automatic method to select a value of σ that makes the local maxima and minima of the smoothed curvature function meet the above criteria must be developed. Experimental studies show that choosing a value of σ which breaks the boundary of the shape into approximately 40 segments meets the above requirements: line segments long enough to have distinguishing characteristics, yet short enough to ensure that an adequate number of segments is used to describe the shape, and breaking up the boundary of a shape in a similar manner irrespective of its size. This choice of 40 segments may be shape dependent, in that other segment numbers may perform better on a different set of shapes.

Fig. 3 shows the original shapes and the shapes created by linking all the control points extracted from the local maxima and minima of the smoothed curvature function. The number of boundary points for these models ranges from 439 to 713 pixels, and the smoothing factor ranges from 4.117 to 6.891. Although this kind of boundary segmentation does not necessarily correspond to the human visual mechanism, it is computationally simple and relatively invariant to size changes.


Fig. 2. Shape and curvature function of Lake Huron at different scales. The number in the upper right corner of each shape represents the length of the contour. The curvature functions are normalized along both the θ and s axes to make the comparison easier.

III. BOUNDARY MATCHING

In the previous section, a process was presented to divide a given shape into segments. A boundary matching process which calculates the rotational, scaling, and translational data between two shapes is presented in this section. The matching process considers two shapes at a time, one shape from the model data bank, and the other being the object under classification. The matching is done from local to global, that is, from matching one segment pair to matching groups of segment pairs. Similar to the method used in [15], there are two passes in the matching process. Pass 1 compares segments between two shapes. Only those segment pairs which are compatible will be used in pass 2. Pass 2 matches groups of segment pairs. The matching process developed in [15] was designed to handle shapes with arbitrary changes in orientation and position. Possible matches are determined by comparing the segment length and by comparing the angle between the current segments and their respective successors. If they do match, then the orientation difference is stored in an array called the disparity array. Since the matching algorithm considered here also has to deal with shapes of different scale, the comparison must be extended to include the length ratio and the angle between the current segments and their respective successors. The transformation which maps two segments is thus specified by a quad of parameters (θ, Δx, Δy, S), where θ is the angle difference between two matched segments, S is the length ratio of these two segments, and Δx and Δy are the translations necessary to match these two segments after rotation and scaling. The details of the two passes are described in the following sections.

Pass 1: Segment Matching

The segment match between two shapes can be briefly described by the following steps.

Step 1: The boundary of each shape is represented by an ordered sequence of line segments with control points as vertex points.

Step 2: For each shape, number each segment in order around the boundary; thus, consecutive segment numbers correspond to adjacent segments in the boundary.

Step 3: Calculate the length ratio between the current segments and their respective successors.

Step 4: Calculate the angle between the current segments and their respective successors.

Step 5: A segment match is recorded if the data from Steps 3 and 4 of one shape match those data from the second shape within a threshold (chosen by the user).

The algorithm compares every segment of the shape from the template data bank to every segment of the shape under classification. To see how the length ratio and angle between two segments are defined, refer to Fig. 4. Assume that the model shape is represented by vertices (u1, v1), (u2, v2), (u3, v3) and that the unknown object is represented by vertices (x1, y1), (x2, y2), (x3, y3), (x4, y4). Let the segment pairs to be matched be bounded by [(u2, v2), (u3, v3)] and [(x1, y1), (x2, y2)]. The conditions for these two segments to match are that their length ratios and the angles between them and their respective successors agree within thresholds T0 and T1, which are chosen by the user.

If the segment pairs are compatible in terms of length ratio and the angle between the two segments, the transformation information between these two segments is entered into a segment match record. The record consists of a list of all segments from the two shapes for which the length ratios and angles computed in Steps 3 and 4 do not differ by more than 30% and 20°, respectively. Fig. 5(a) shows an example of a segment match record.

Each segment will have many possible matches if only these two criteria are used, but there should be fewer cases where several consecutive segments in one shape match consecutive segments in the other shape. Thus, a group match procedure is used in pass 2.
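A sketch of pass 1 under the description above. The polygon representation (a list of vertex coordinates per shape), the helper names, and the default thresholds (20° for angles, 0.3 for length ratios, standing in for the user-chosen T0 and T1) are assumptions.

```python
import math

def _length(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def _direction(p, q):
    return math.atan2(q[1] - p[1], q[0] - p[0])

def _wrap(angle):
    """Wrap an angle difference into (-pi, pi]."""
    return (angle + math.pi) % (2.0 * math.pi) - math.pi

def segment_features(vertices):
    """For each segment of a closed polygon: (length ratio to successor,
    turn angle toward successor)."""
    n = len(vertices)
    feats = []
    for i in range(n):
        a, b, c = vertices[i], vertices[(i + 1) % n], vertices[(i + 2) % n]
        ratio = _length(a, b) / _length(b, c)
        turn = _wrap(_direction(b, c) - _direction(a, b))
        feats.append((ratio, turn))
    return feats

def pass1_segment_matches(model, unknown, ratio_tol=0.3,
                          angle_tol=math.radians(20.0)):
    """Record every (model segment, unknown segment) pair whose length ratio
    and turn angle agree within the chosen thresholds."""
    fm, fu = segment_features(model), segment_features(unknown)
    record = []
    for i, (rm, tm) in enumerate(fm):
        for j, (ru, tu) in enumerate(fu):
            if abs(rm - ru) <= ratio_tol and abs(_wrap(tm - tu)) <= angle_tol:
                record.append((i, j))
    return record
```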

Pass 2: Group Match

The pass 2 procedure removes those segments for which less than three consecutive segments in one shape match consecutive segments in the other shape. Since these segments are not members of any consecutive segment sequence, they usually represent extraneous data and should be deleted. Using the segment match record shown in Fig. 5(a) as an example, the consecutive compatible segment pairs between shape 1 and shape 2 are segments 25-39 of shape 1, which match segments 13-27 of shape 2. Pass 2 will only retain the transformation data corresponding to this match because they are the most likely correct matching pairs [see Fig. 5(b)]. If matched pairs of less than two consecutive segments are located, the algorithm considers that the two shapes belong to different classes, and no further distance calculation is performed. If more than two sets of


Fig. 3. Template shapes and shapes created by linking the control points.

Fig. 4. Example contours for the explanation of segment pair matching.

matched consecutive segments are located between two shapes, the algorithm only records the one with the largest number of matched pairs.

The final task to be accomplished in this pass is to compute the rotational, scaling, and translational averages of those segments that match. The distance calculation will use the average of these quad parameters (θ, Δx, Δy, S) to normalize the unknown shape. Examples of partial shapes and their successful boundary matches are shown in Fig. 6.
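A sketch of pass 2 as described above: keep the longest run of consecutive compatible segment pairs and average its transformation parameters. The record layout (one (model segment, unknown segment, θ, Δx, Δy, S) tuple per compatible pair) and the minimum run length parameter are assumptions.

```python
def pass2_group_match(record, min_run=3):
    """Return the averaged (theta, dx, dy, s) of the longest run of
    consecutive matched segment pairs, or None if the run is too short."""
    best, current = [], []
    for entry in sorted(record):
        if current and entry[0] == current[-1][0] + 1 \
                   and entry[1] == current[-1][1] + 1:
            current.append(entry)
        else:
            current = [entry]
        if len(current) > len(best):
            best = list(current)
    if len(best) < min_run:
        return None   # the two shapes are taken to belong to different classes
    n = len(best)
    theta = sum(e[2] for e in best) / n
    dx    = sum(e[3] for e in best) / n
    dy    = sum(e[4] for e in best) / n
    s     = sum(e[5] for e in best) / n
    return theta, dx, dy, s
```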

IV. DISTANCE CALCULATION

The final step of the algorithm uses the distance transformation and distance measurement techniques to measure the similarity between two shapes. As explained later, the distance transformation converts the boundary pixels of a shape into a gray-level image where all pixels have a value corresponding to the distance to the nearest boundary pixel. Then each boundary point of the second shape is normalized based on the (θ, Δx, Δy, S) calculated from pass 2 of the boundary matching procedure and superimposed on this image. The goodness of the match between the two shapes can be computed from the rms average of the normalized boundary pixels of the second shape on the distance image created from the first shape.

    Distance Transformation

Because of the discrete nature of the digital image and the influence of noise on the boundary points, it is an unnecessary waste of effort to compute exact Euclidean distances [7] from the inexact boundary pixels. In most digital image processing applications, it is preferable to use integers to represent distance. Good integer approximations of Euclidean distance can be computed by the process known as the chamfer 3/4 distance [1], [3]-[5].

According to [5], the chamfer 3/4 distance can be calculated sequentially by a two-pass algorithm. First, a distance image is created such that each boundary pixel is set to zero and each nonboundary pixel is set to infinity. The forward pass, moving from the top-left to the bottom-right corner, modifies the image by

$$v_{i,j} = \min\left(v_{i,j},\; v_{i-1,j-1}+4,\; v_{i-1,j}+3,\; v_{i-1,j+1}+4,\; v_{i,j-1}+3\right)$$

and the backward pass, moving from the bottom-right to the top-left corner, then applies

$$v_{i,j} = \min\left(v_{i,j},\; v_{i,j+1}+3,\; v_{i+1,j-1}+4,\; v_{i+1,j}+3,\; v_{i+1,j+1}+4\right)$$

Fig. 6. Examples of successful boundary matching between unknown shape and template shape.

where $v_{i,j}$ is the value of the pixel in position (i, j). Some sample distance images are shown in Fig. 7.
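A sketch of the two-pass chamfer 3/4 transform as reconstructed above, assuming a 2-D boolean array marking the boundary pixels (a plain double loop; a practical implementation would vectorize or use a library transform):

```python
import numpy as np

def chamfer_34(boundary):
    """Two-pass chamfer 3/4 distance transform of a boolean boundary image."""
    INF = 10**9
    h, w = boundary.shape
    v = np.where(boundary, 0, INF).astype(np.int64)
    # forward pass: top-left to bottom-right
    for i in range(h):
        for j in range(w):
            if i > 0:
                v[i, j] = min(v[i, j], v[i - 1, j] + 3)
                if j > 0:
                    v[i, j] = min(v[i, j], v[i - 1, j - 1] + 4)
                if j < w - 1:
                    v[i, j] = min(v[i, j], v[i - 1, j + 1] + 4)
            if j > 0:
                v[i, j] = min(v[i, j], v[i, j - 1] + 3)
    # backward pass: bottom-right to top-left
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                v[i, j] = min(v[i, j], v[i + 1, j] + 3)
                if j > 0:
                    v[i, j] = min(v[i, j], v[i + 1, j - 1] + 4)
                if j < w - 1:
                    v[i, j] = min(v[i, j], v[i + 1, j + 1] + 4)
            if j < w - 1:
                v[i, j] = min(v[i, j], v[i, j + 1] + 3)
    return v
```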

    Distance Measurement

As stated earlier, in order to measure the similarity between two shapes, one shape is converted into a distance image by the chamfer 3/4 distance transformation. The boundary points of the second shape are normalized based on the (θ, Δx, Δy, S) calculated from the group matching procedure. These normalized boundary points are superimposed on the distance image created from the first shape, as shown in Fig. 8. The rms average [4] of the normalized boundary pixels on the distance image is used to measure the goodness of the match. A perfect match between the two shapes will result in zero distance, as each normalized boundary point from the second shape will then be a zero-distance point in the distance image. If the matching is inexact, the average of these values gives a measure of the goodness of the match. For example, a distance of 2 means that each boundary pixel of the second shape is, on the average, located two pixels away from the first shape after being normalized by the transformation data (θ, Δx, Δy, S).
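The full-boundary measure can be sketched as follows; the order in which rotation, scaling, and translation are applied, and the rounding of transformed points to the nearest pixel, are assumptions beyond what the text states.

```python
import numpy as np

def rms_match_distance(dist_image, boundary_pts, theta, dx, dy, s):
    """RMS of the chamfer 3/4 values under the normalized boundary points of
    the second shape, divided by 3 to recover unit (pixel) distances."""
    c, si = np.cos(theta), np.sin(theta)
    h, w = dist_image.shape
    total, count = 0.0, 0
    for x, y in boundary_pts:
        xn = s * (c * x - si * y) + dx      # rotate and scale, then translate
        yn = s * (si * x + c * y) + dy
        i, j = int(round(yn)), int(round(xn))
        if 0 <= i < h and 0 <= j < w:
            total += float(dist_image[i, j]) ** 2
            count += 1
    return np.sqrt(total / count) / 3.0 if count else float('inf')
```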

Using every pixel of the boundary to calculate the distance works very well when the object under classification is undistorted or only a small part of it is distorted. When a large part of the object is distorted, the distance measurement scheme cannot serve the purpose of distance calculation. An example of a distorted shape is shown in Fig. 9(a). The segment matching records against two template shapes are listed in Fig. 9(b) and (c), respectively. From these transformation data, the proposed algorithm assumes that both template shapes have the possibility of matching the unknown shape. Fig. 9(d) and (e) show the result of normalizing the boundary of the template shapes and superimposing these points on the unknown shape. Fig. 9(d) shows a good match for the boundary points which are not distorted. However, using every pixel of the boundary to measure the similarity yields a distance of 15.153 between these two shapes. Fig. 9(e) shows an incorrect matching between two shapes with


Segment match record excerpt (θ, Δx, Δy, S) for matched segment pairs:

    Shape 1 Segment   Shape 2 Segment      θ          Δx          Δy        S
    1                 13               120.846     -91.561    -269.599   0.959
    2                 14               118.072     -87.824    -274.294   1.000
    3                 15               121.369    -100.828    -278.071   1.027
    4                 16               118.961     -88.785    -274.614   0.999

Fig. 7. Examples of chamfer 3/4 distance images. The distances are gray-level coded: the larger the distance, the lighter the tone.

Fig. 8. Computation of the distance. The normalized boundary pixels are placed over the distance image. The rms average of the normalized boundary pixels on the distance image is used to measure the goodness of the fit.

a distance of 8.674. Obviously, the unknown object will be classified into a wrong class.

When a human recognizes a partially distorted shape, his judgment relies mainly on the undistorted part. Thus, a distance measure scheme which is based on parts of a shape is proposed to overcome the problem. Because there is no knowledge about how severely the object is distorted or which part or parts of the object are distorted, an approximate procedure is used to find the part of the boundary which corresponds to a match over one half of the boundary points. That is, the boundary of the template

Fig. 9. Example of a misclassified shape. (a) Unknown partial shape. (b), (c) The segment matching records between the template shapes and the unknown shape. (d), (e) The matched boundaries between the two template shapes and the unknown shape.

shape is broken into N equal-length segments with an arbitrary starting point. The local distance between each segment and the unknown object is defined similarly to the edge distance of [4]:

$$d_{\mathrm{local}} = \frac{1}{3}\left[\frac{1}{n}\sum_{i=1}^{n} v_i^2\right]^{1/2}$$

where n is the number of boundary points in the segment and the $v_i$ are their superimposed distance values. The average is divided by 3 to compensate for the unit distance 3 in the chamfer 3/4 distance transformation. Then the minimum distance over N/2 consecutive segments is considered to be the partial distance between the two shapes.
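A sketch of the partial distance computation; how the window score is aggregated (here the mean of the local distances in each window of N/2 consecutive segments) is an assumption beyond what the text states.

```python
import numpy as np

def partial_distance(dist_values, n_segments=10):
    """Break the chamfer values under the template boundary (in boundary
    order) into N equal-length segments, compute each segment's local
    distance, and return the minimum over all windows of N/2 consecutive
    segments, wrapping around the closed contour."""
    v = np.asarray(dist_values, dtype=float)
    segments = np.array_split(v, n_segments)
    local = [np.sqrt(np.mean(seg ** 2)) / 3.0 for seg in segments]
    half = n_segments // 2
    windows = [np.mean([local[(start + k) % n_segments] for k in range(half)])
               for start in range(n_segments)]
    return min(windows), local
```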

The results displayed in Fig. 10 show that partial distance measurements can correctly classify the unknown shape shown in


Fig. 10. Local distances and partial distances of the shapes in Fig. 9.

Fig. 9(a). Both template shapes are broken into ten equal-length segments. After using the transformation data from Fig. 9(b) and (c) to normalize the segments, the local distances are listed in Fig. 10(a) and (c). It can be seen that some of these segments match very well with the boundary of the unknown object. Fig. 10(d) shows that the best match for five boundary segments is obtained by using segments [9, 10, 1, 2, 3] with a partial distance of 1.825, which is better than the partial distance of 3.885 from Fig. 10(b) for the misclassified shape.

V. EXPERIMENTAL RESULTS

The algorithm described above was implemented on a Sequent S81 computer system using images obtained from a TWIX 55/256c image processor system with a VAX 11/750 host computer. Both template data and partial shape data were determined from images of object silhouettes cut out of paper. Images were collected at a size of 256 x 256 with 8-bit gray levels, and then thresholded at 128 to create the object silhouette shapes. For each class of shape, one template image (with arbitrary orientation and location) is collected and stored in the template database. Partial shapes were generated by a method similar to [9]: chopping 0%, 10%, 20%, and 30% of the contour, and replacing the chopped-out portion either by a straight line or by a straight line-90° turn-straight line to close the contour. For each class, ten partial shapes, each with different orientation, location, and size, were collected for each percentage of distorted pixels on the boundary. Different sizes of the partial shapes were obtained by moving the camera closer to and farther from the object. The size of each partial shape varies from 0.8 to 1.2 times the size of the shape in the template database.

Two sets of data were used in the classification experiments to test the effectiveness of the proposed algorithm. The first set consists of four kinds of aircraft. These shapes were selected to study the effects of sharp corners and straight segments on the proposed algorithm. The second data set consists of four


classes of shapes which are the aerial views of Lakes Erie, Huron, Michigan, and Superior. Each of these lake shapes contains only long curve segments. Fig. 3 shows the images in the template database. Some sample partial shapes are shown in Fig. 11.

It should be noted that the distance measure is asymmetric. That is, the distance calculated by converting one shape into a distance image and superimposing the other shape onto this image is not equal to the distance calculated by interchanging the two shapes. In this investigation, the input unknown shape is converted into a distance image, and the template shape is normalized and superimposed on the distance image. This assignment has the advantage that, in partial distance measurements, the superimposed shape will be the complete shape. It is easier to get 50% of the boundary points out of a complete shape, and a better partial distance measurement can be achieved from this assignment.

In the first experiment, the partial distance measurement scheme was used to test the performance of this algorithm. The algorithm has been successfully used to recognize partial shapes of lakes and aircraft without any misclassification.

The performance of this algorithm on noisy images was also studied in the second experiment. The noisy shapes were generated by a method similar to that in [10]. To create the noisy shapes, for every image in the partial shape database, a chain code is used to trace the boundary points; then, for each boundary point, one of its eight neighbors is randomly selected and its value is reversed from 0 to 1, or vice versa, to create a new noisy image. The algorithm performs very well, with no misclassification for the lake shapes and only one shape misclassified for the aircraft with 20% distortion.
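A sketch of the noise model described above; everything besides the toggling rule itself (array layout, random number generator) is an assumption.

```python
import numpy as np

def add_boundary_noise(image, boundary_pts, rng=None):
    """For every boundary point of a binary image, randomly pick one of its
    eight neighbors and reverse that pixel's value (0 <-> 1)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    h, w = image.shape
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    for i, j in boundary_pts:
        di, dj = neighbors[rng.integers(len(neighbors))]
        ni, nj = i + di, j + dj
        if 0 <= ni < h and 0 <= nj < w:
            noisy[ni, nj] = 1 - noisy[ni, nj]
    return noisy
```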

    VI. CONCLUSIONS

This paper has presented a new method to recognize a partial shape without knowing its orientation, location, and size. The method uses the control points extracted from the smoothed curvature function to segment the shape. A two-pass boundary matching process is introduced to estimate the transformation data between two shapes. A partial distance measurement scheme is implemented to measure the goodness of the boundary match. Experimental results on shapes with sharp corners and circular arcs show that the method yields satisfactory results.

REFERENCES

[1] H. G. Barrow, J. M. Tenenbaum, and H. C. Wolf, "Parametric correspondence and chamfer matching: Two new techniques for image matching," in Proc. 5th Int. Joint Conf. Artificial Intell., 1977, pp. 659-663.
[2] R. C. Bolles and R. A. Cain, "Recognizing and locating partially visible objects: The local-feature-focus method," in Robot Vision, A. Pugh, Ed., 1984.
[3] G. Borgefors, "Distance transformations in digital images," Comput. Vision, Graphics, Image Processing, vol. 34, pp. 344-371, 1986.
[4] G. Borgefors, "An improved version of the chamfer matching algorithm," in Proc. Int. Conf. Pattern Recognition, 1984.
[5] G. Borgefors, "Hierarchical chamfer matching: A parametric edge matching algorithm," IEEE Trans. Pattern Anal. Machine Intell., vol. 10, pp. 849-865, Nov. 1988.
[6] J. F. Canny, "Finding lines and edges in images," M.S. thesis, Massachusetts Inst. Technol., Cambridge, 1983.
[7] P. E. Danielsson, "Euclidean distance mapping," Comput. Vision, Graphics, Image Processing, vol. 14, pp. 227-248, 1980.
[8] J. W. Gorman, O. R. Mitchell, and F. P. Kuhl, "Partial shape recognition using dynamic programming," IEEE Trans. Pattern Anal. Machine Intell., vol. 10, pp. 257-266, Mar. 1988.
[9] T. A. Grogan, "Shape recognition and description: A comparative study," Ph.D. dissertation, Purdue Univ., West Lafayette, IN, 1983.


Fig. 11. Typical partial shapes with (a) 0%, (b) 10%, (c) 20%, and (d) 30% of the boundary distorted.

[10] L. Gupta, "Contour transformations for shape classification," Ph.D. dissertation, Southern Methodist Univ., Dallas, TX, 1986.
[11] T. F. Knoll and R. C. Jain, "Recognizing partially visible objects using feature indexed hypotheses," IEEE J. Robot. Automat., vol. RA-2, pp. 3-13, Mar. 1986.
[12] C. C. Lin and R. Chellappa, "Classification of partial 2-D shapes using Fourier descriptors," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 686-689, Sept. 1987.
[13] O. R. Mitchell and T. A. Grogan, "Global and partial shape discrimination for computer vision," Opt. Eng., vol. 23, no. 5, pp. 484-491, 1984.
[14] W. A. Perkins, "A model-based vision system for industrial parts," IEEE Trans. Comput., vol. C-27, pp. 126-143, Feb. 1978.
[15] K. E. Price, "Matching closed contours," in Proc. IEEE 1984 Workshop Comput. Vision, 1984, pp. 130-134.
[16] P. F. Singer and R. Chellappa, "Machine perception of partially specified planar shapes," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, 1985.
[17] Y. J. Tejwani and R. A. Jones, "Machine recognition of partial shapes using feature vectors," IEEE Trans. Syst., Man, Cybern., vol. SMC-15, pp. 504-516, July/Aug. 1985.

Hong-Chih Liu received the B.S. degree in electronic engineering from Chung-Yuan University, Chung-Li, Taiwan, Republic of China, in 1973, the M.S. degree in electrical engineering from Washington University, St. Louis, MO, in 1981, and the Ph.D. degree in electrical engineering from Southern Methodist University, Dallas, TX, in 1989.

He is currently a Research Assistant at the Institute of Nuclear Energy Research, Lung-Tan, Taiwan, Republic of China.

Mandyam D. Srinath (M'59-SM'74) received the B.Sc. degree from the University of Mysore, India, in 1954, the diploma in electrical technology from the Indian Institute of Science, Bangalore, in 1957, and the M.S. and Ph.D. degrees in electrical engineering from the University of Illinois, Urbana, in 1959 and 1962, respectively.

He has been a faculty member of the Department of Electrical Engineering, University of Kansas, Lawrence, and the Indian Institute of Science, Bangalore. He is currently a Professor of Electrical Engineering at Southern Methodist University, Dallas, TX, where he has been since 1967. He has published numerous papers in signal processing, control and estimation theory, and image processing, and is the coauthor of the books Introduction to Statistical Signal Processing with Applications (Wiley-Interscience, 1979) and Continuous and Discrete Time Signals and Systems (Prentice-Hall, 1990).