

    A REPORT

    ON

A GUIDED TOUR TO IMAGE PROCESSING ANALYSIS AND ITS APPLICATION

    BY

Name of the Student: Adithya Reddy Gangidi
Discipline: Electrical and Electronics Engineering

    Prepared in partial fulfillment of the

    IASc-INSA-NASI Summer Research Fellowship

    AT

    INDIAN STATISTICAL INSTITUTE, KOLKATA


    IASc-INSA-NASI Summer Research Fellowship

Station: Indian Statistical Institute
Center: Kolkata
Duration: 8 weeks
Date of Start: 28th April, 2008
Date of Submission: 12th July, 2008

Title of the Project: A Guided Tour to Image Processing Analysis and its Application

    Name: Adithya Reddy Gangidi

    Discipline: B. E. (Hons.) Electrical and Electronics Engineering

    Name of Guide: Prof. Malay Kumar Kundu

    Key Words: Image processing, Image Enhancement, Texture measurement, Fuzzy Logic

    Project Areas: Soft Computing

    Signature of Guide Signature of Student


    ACKNOWLEDGEMENTS

I would like to take this opportunity to thank my guide Prof. Malay Kumar Kundu, Machine Intelligence Unit, ISI Kolkata, for his excellent supervision. He has been a constant source of motivation and I am really honored to have been able to work under his guidance. He constantly helped me gain the clarity needed to understand the various topics involved in this project.

I thank Dr. P. Maji for providing knowledge and guidance throughout my fellowship. I thank him for his willingness to help me achieve my goals.

I would also like to extend my gratitude towards Mr. G Madhavan, Executive Secretary, Indian Academy of Sciences, for addressing all my correspondence and enquiries.

Finally, my heartfelt thanks to everyone at ISI and IAS and all others whose names I did not mention, but who contributed in any form towards the successful completion of the project.


    ABSTRACT

The project involved understanding image processing techniques such as enhancement, working on automatic selection of an object enhancement operator based on fuzzy set theoretic measures, and implementing these operations in C. It involved modifying the existing approach, which minimized grayness ambiguity and spatial ambiguity (compactness). The modification incorporates a measure of texture ambiguity (entropy) and total connectedness, and a fuzzy membership function is assigned to each of them. Textural ambiguity, a function of the textural membership, is minimized while connectedness is maximized, so as to arrive at an optimum point with enhanced texture while maintaining the original connectivity of the pixels. Based on these results a nonlinear enhancement function is chosen. Finally, as an application of image processing, image thinning algorithms are applied to welding seam images as a starting step of the algorithm for vision based seam tracking. This work is directed at obtaining a more deterministic measure of the welding groove parameters and thus more accurate control.


    CONTENTS

1. Chapter 1
   Introduction
   Scope of the project

2. Chapter 2
   Existing enhancement algorithms
   Implementation

3. Chapter 3
   Some relevant definitions
   A. Texture: a brief introduction
   B. Entropy
   C. Non-linear enhancement functions

4. Chapter 4
   Automatic enhancement
   A. Previous approach
   B. Modified approach
      (i) Calculation of Textural Ambiguity
      (ii) Calculation of Connectedness
      (iii) Algorithm
      (iv) Implementation

5. Chapter 5
   Application of Image Processing: Seam Tracking
   A. Seam Tracking
   B. Previous approach
   C. Usage of thinning algorithms

    Chapter 1


    INTRODUCTION

In the scientific community, a lot of essential work goes into the application of digital image processing. In particular, digital image processing is the only practical technology for tasks such as classification, feature extraction, pattern recognition, projection and multi-scale signal analysis. One of the most elementary steps in image processing is enhancement, also termed pre-processing.

In the beginning of my project I learnt about the various steps involved in digital image processing, such as enhancement in the spatial and frequency domains. I understood the mechanics of spatial filtering, the use of smoothing spatial filters for blurring, and sharpening spatial filters for highlighting fine detail. In sharpening I encountered the use of the Laplacian (2nd order derivative) and the gradient (1st order derivative) needed for such enhancements. I then studied segmentation, specifically edge detection, in detail, using gradient operators such as the Roberts, Prewitt and Sobel operators and the Laplacian operator, as well as representation and description. I then went on to implement enhancement algorithms in C, such as basic gray level manipulation, log transforms, negatives and thresholding. I also implemented histogram construction in C.

I then started to work on modifying an algorithm for automatic selection of a nonlinear function appropriate for object enhancement of a given image. I referred to [2], and learnt the basics of fuzzy sets, operations on fuzzy sets, and fuzzy relations and composition [5]. The previous algorithm minimizes fuzziness (ambiguity) both in grayness and in the spatial domain, entropy being a measure of grayness ambiguity and compactness a measure of spatial ambiguity.

In my modification, I tried to incorporate texture instead of brightness. For this purpose I referred to [1], which explains various methods of texture measurement. I used a statistical approach, edges per unit area, and implemented it using the Roberts gradient operator. I also calculated the connectedness associated with the image.

I then assigned a fuzzy membership function to each of these measures and optimized them, maintaining the original connectivity while enhancing the image texture.

    SCOPE OF THE PROJECT

The basic need for image enhancement is to improve the quality of a picture for visual judgment. Most existing enhancement techniques are heuristic and problem dependent. When an image is processed for visual interpretation, it is ultimately up to the viewers to judge its quality for a specific application and how well a particular method works. The evaluation of image quality therefore becomes subjective, which makes the definition of a well-processed image an elusive standard for comparing algorithm performance. Hence it becomes necessary to have an iterative process with human interaction in order to select an appropriate operator for obtaining the desired processed output. Given an arbitrary image, two problems arise: choosing an appropriate nonlinear function without prior knowledge of the image statistics, and, knowing the function, quantifying the enhancement quality in order to obtain the optimal one. Resolving these normally requires human interaction in an iterative process.

Therefore, to avoid such human interaction, we apply the theory of fuzzy sets. The original algorithm minimizes (optimizes) two types of ambiguity (fuzziness), namely ambiguity in grayness and ambiguity in the geometry of an image containing an object. We extend this further using the concept of texture in image processing, by choosing edges per unit area as a statistical measure of image texture. This is done in order to obtain automatic enhancement of the image texture.

    Chapter 2


    EXISTING ENHANCEMENT ALGORITHMS

The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application, that is, to highlight certain features of interest in the image. When an image is processed for visual interpretation, the viewer is the ultimate judge of how well a particular method works. Visual evaluation of image quality is a highly subjective process, making the definition of a good image an elusive standard by which to compare algorithm performance. Some of the simplest image enhancement techniques are gray-level transformation functions.

Some basic types of gray-level transformation function frequently used for image enhancement are the negative, the logarithmic transformation, gray-level manipulation and thresholding.

Image negative: The negative of an image with gray levels in the range [0, L-1], where L is the number of gray levels, is obtained by using the transformation function

s = L - 1 - r

where s is the transformed gray-level value and r is the initial gray-level value. This reverses the intensity levels of the image and produces the equivalent of a photographic negative. It is suited to enhancing white or gray detail embedded in dark regions of an image.

Gray-level manipulation: Given an image, we can perform several types of gray-level manipulation simply by multiplying each gray-level value by a constant. The value of the constant is usually taken as 3 or 5.

Log transformation: This transformation maps a narrow range of low gray-level values in the input image to a wider range of output levels; it can be seen as an expansion of the low gray levels. The basic log transform is given by

s = c log(1 + r)

where c is a constant and r >= 0.
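For a rough sense of the scale involved, assume the natural logarithm (as computed by C's log()) and c = 20, the constant used in the implementation later in this report: a gray value r = 5 maps to s = 20 ln 6, roughly 35.8, while r = 255 maps to s = 20 ln 256, roughly 110.9, so a 51:1 spread of inputs is compressed to roughly a 3:1 spread of outputs, stretching the low end relative to the high end.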

Thresholding operation: This operation creates a two-level image called a binary image; it is also termed contrast stretching/enhancement. If the chosen threshold is r, all values below r are darkened (taken as 0) and all values above r are brightened (taken as L-1).

Averaging filter operation: This is a smoothing operation used for blurring and noise reduction. The output of running a smoothing, linear spatial filter is the average of the pixels contained in the neighborhood of the filter mask.

    IMPLEMENTATION


As a first step I processed a PGM (Portable Gray Map) image in C. I used basic file I/O instructions in C and read the value of each of the pixels into an array. This involved a basic understanding of the PGM format specification.

Once the image data has been read into an array, all transformations on the image are in effect achieved by manipulating each data element of this array. The data elements in the array are nothing but the gray-level values at each pixel position.

An array picture is created after reading the PGM image; it contains the gray value of every pixel. Each gray value is read as picture[row][col], modified, and written into a temp variable; temp is then written into the new image for each modification of the pixel.
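A minimal sketch of this read-modify-write loop is shown below. It assumes an ASCII (P2) PGM without comment lines and a fixed maximum image size; the file names and the negative transformation are only examples of the general pattern.

#include <stdio.h>

#define MAXROWS 1024
#define MAXCOLS 1024

int picture[MAXROWS][MAXCOLS];

int main(void)
{
    FILE *in  = fopen("baloon.pgm", "r");
    FILE *out = fopen("result.pgm", "w");
    char magic[3];
    int numRows, numCols, maxVal, row, col, temp;

    if (in == NULL || out == NULL)
        return 1;

    /* PGM header: magic number, width (columns), height (rows), maximum gray value */
    fscanf(in, "%2s %d %d %d", magic, &numCols, &numRows, &maxVal);
    fprintf(out, "P2\n%d %d\n%d\n", numCols, numRows, maxVal);

    /* read every gray value into the picture array */
    for (row = 0; row < numRows; row++)
        for (col = 0; col < numCols; col++)
            fscanf(in, "%d", &picture[row][col]);

    /* modify each pixel into temp and write temp into the new image */
    for (row = 0; row < numRows; row++)
        for (col = 0; col < numCols; col++) {
            temp = maxVal - picture[row][col];   /* image negative, as an example */
            fprintf(out, "%d\n", temp);
        }

    fclose(in);
    fclose(out);
    return 0;
}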

I ran the code on several images, and the results are shown here for one particular image. I chose the image shown in Figure 1 as my original image and then performed the following operations on it:

Image negative: as shown in Figure 2.
temp = (255 - picture[row][col]);
In our image there are 256 gray levels, so the maximum gray-level value is 255.

Gray level manipulation: as shown in Figure 3.
temp = (picture[row][col] * 5);
The constant value is chosen as 5.

Log transformation: as shown in Figure 4.
temp = (20 * log(1 + picture[row][col]));

Threshold operation: as shown in Figure 5, with a threshold value of 127.
if (picture[row][col] > 127)
    temp = 255;
else
    temp = 0;

Histogram plot: as shown in Figure 6.

The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function which gives the probability of occurrence of each of the gray levels present in the given image.
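For reference, a minimal C sketch of such a histogram computation (assuming 256 gray levels and the MAXCOLS bound from the sketch above; hist[g] receives the probability of occurrence of gray level g):

/* fill hist[g] with the probability of occurrence of gray level g */
void histogram(int picture[][MAXCOLS], int numRows, int numCols, double hist[256])
{
    long count[256] = {0};
    int row, col, g;

    for (row = 0; row < numRows; row++)
        for (col = 0; col < numCols; col++)
            count[picture[row][col]]++;          /* occurrences of each gray level */

    for (g = 0; g < 256; g++)
        hist[g] = (double)count[g] / ((long)numRows * numCols);
}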


Averaging filter operation: as shown in Figure 7.
if ((col == 0) || (row == 0) || (row == (numRows - 1)) || (col == (numCols - 1)))
    temp = 255;
else
    temp = (picture[row-1][col+1] + picture[row-1][col] + picture[row-1][col-1] +
            picture[row][col+1]   + picture[row][col]   + picture[row][col-1] +
            picture[row+1][col-1] + picture[row+1][col] + picture[row+1][col+1]) / 9;

    Figure 1: Original Image (baloon.pgm) Figure 2: Image negative

    Figure 3: Gray Level Manipulation Figure 4: Log Transformation


    Figure 5: Threshold Operation Figure 6: Histogram Plot

Figure 7: Pre-filtered image and post-filtered image (smoothing operation)


    Chapter 3

SOME RELEVANT DEFINITIONS

A. Texture: a brief introduction

Texture is an important characteristic for the analysis of many types of images. Texture measures look for visual patterns in images and how they are spatially defined. Texture can be seen in all kinds of images, from multi-spectral scanner images obtained from aircraft or satellite platforms (analyzed by the remote sensing community) to microscopic images of cell cultures or tissue samples (analyzed by the biomedical community). Texture, when decomposable, is described along two basic dimensions. The first dimension describes the primitives out of which the image texture is composed, i.e. the tonal primitives or local properties; the second dimension is concerned with the spatial organisation of these tonal primitives. Thus, image texture can be quantitatively evaluated as having properties such as fineness, coarseness, smoothness, granulation, randomness and many more. There are statistical as well as structural approaches to the measurement and characterization of image texture.

Haralick in [1] summarizes some of the extraction techniques and models which investigators have been using to measure textural properties.

    The number and types of its primitives and the spatial organization or layout of its primitives describe an image texture. The spatial organization may be random, may havea pair-wise dependence of one primitive on a neighboring primitive, or may have adependence of n primitives at a time. The dependence may be structural, probabilistic,or functional (like a linear dependence).

    One such measure of texture is quoted here:

Edge per unit area: Rosenfeld and Troy [3] and Rosenfeld and Thurston [4] suggested the amount of edge per unit area as a texture measure. The primitive here is the pixel and its property is the magnitude of its gradient. The gradient can be calculated by any one of the gradient neighborhood operators. For a specified window centered on a given pixel, the distribution of gradient magnitudes can then be determined; the mean of this distribution is the amount of edge per unit area associated with the given pixel. The image in which each pixel's value is its edge per unit area is in effect a defocused gradient image.
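To make this measure concrete, below is a rough C sketch using the Roberts cross gradient (a 2x2 operator, the operator used later in this report; the exact operator and window size in the project may differ) followed by a square averaging window. MAXROWS and MAXCOLS are the array bounds assumed earlier, and abs() comes from <stdlib.h>.

/* edge-per-unit-area image: Roberts gradient magnitude averaged over a
   (2k+1) x (2k+1) window around each pixel (a "defocused gradient image") */
void edges_per_unit_area(int pic[][MAXCOLS], int numRows, int numCols,
                         double epua[][MAXCOLS], int k)
{
    static double grad[MAXROWS][MAXCOLS];
    int r, c, i, j;

    for (r = 0; r < numRows; r++)
        for (c = 0; c < numCols; c++) {
            int d1 = 0, d2 = 0;
            if (r + 1 < numRows && c + 1 < numCols) {
                d1 = pic[r][c]     - pic[r + 1][c + 1];  /* Roberts diagonal differences */
                d2 = pic[r][c + 1] - pic[r + 1][c];
            }
            grad[r][c] = abs(d1) + abs(d2);
        }

    for (r = 0; r < numRows; r++)
        for (c = 0; c < numCols; c++) {
            double sum = 0.0;
            int n = 0;
            for (i = -k; i <= k; i++)
                for (j = -k; j <= k; j++)
                    if (r + i >= 0 && r + i < numRows && c + j >= 0 && c + j < numCols) {
                        sum += grad[r + i][c + j];
                        n++;
                    }
            epua[r][c] = sum / n;   /* mean gradient magnitude in the window */
        }
}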

    B. Entropy

The entropy of a given image provides global information: it gives the average amount of fuzziness in the grayness of an image X. This is the degree of difficulty (ambiguity) in deciding whether a pixel should be treated as black (dark) or white (bright). The difficulty is minimum when the fuzzy membership is 0 or 1 (that is, the image is crisp, with either fully black or fully white pixels) and maximum when the fuzzy membership is 0.5 (that is, semi-bright pixels).

Given an image X, the entropy H(X) can be calculated as follows.
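One standard form of this measure, the logarithmic entropy of a fuzzy set (given here as a reference point and an assumption, not as a reproduction of the report's original expression), is

\[
H(X) = \frac{1}{MN \ln 2} \sum_{m=1}^{M} \sum_{n=1}^{N} S_n\big(\mu_X(x_{mn})\big),
\qquad S_n(\mu) = -\mu \ln \mu - (1 - \mu) \ln(1 - \mu),
\]

where \mu_X(x_{mn}) is the membership value of pixel (m, n) in an M x N image and S_n is Shannon's function. H(X) is 0 when every membership is 0 or 1 and maximal when every membership is 0.5, matching the description above.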

    C. Non-linear enhancement functions

There are four basic types of non-linear mapping functions used for enhancement. The different forms of non-linear enhancement function are discussed below along with their formulas.

The mapping function in Figure 1 is represented by

    Where the parameter b is a positive constant.

When applied to an image, this function stretches the dark area (the lower range of gray levels) and compresses the bright area, resulting in an increase in contrast within the darker area of the image.

    The mapping function in Figure 3 is represented by

where F_e and F_d are positive constants and the remaining constant is the value of f(X_mn) for X_mn = 0. The application of this mapping function to an image produces the direct opposite effect to that of the function above.


    The mapping function in Figure 2 is represented by

    The use of this function will result in stretching of the middle range gray levels.

    The mapping function in Figure 4 is represented by

where F_e and F_d are positive constants and the remaining constant is the value of f(X_mn) for X_mn = 0. When used as the mapping function, it drastically compresses the mid-range values and at the same time stretches the gray levels at the upper and lower ends.

Figures 1 to 4: Plots of the four non-linear mapping functions


    Chapter 4

    AUTOMATIC ENHANCEMENT

    A. Previous approach

As discussed in Chapter 1, this approach is an attempt to demonstrate an application of the theory of fuzzy sets in order to avoid iterative human interaction and to make the task of subjective evaluation objective.

An algorithm for automatic selection of a nonlinear function appropriate for object enhancement of a given image is described in [2]. The algorithm needs neither iterative visual interaction nor prior knowledge of image statistics in order to select the transformation function for optimal enhancement. A quantitative measure for evaluating enhancement quality is provided based on fuzzy geometry. The concept of minimizing fuzziness (ambiguity) both in grayness and in the spatial domain, as used by Pal and Rosenfeld [4], has been adopted in [2]. The selection criteria are further justified in terms of the bounds of the membership function. The effectiveness of the algorithm is demonstrated for unimodal, multimodal and right-skewed images when the possible nonlinear transformation functions are taken into account.

The algorithm proposed in [2] has three parts. Given an input image X and a set of nonlinear transformation functions, it first enhances the image with a particular enhancement function over its varying parameters. The second phase consists of measuring both the spatial ambiguity and the grayness ambiguity of the various enhanced images X' using the algorithm in [4], and checking whether these measures possess any valley (minimum) as the parameters change. The same procedure is repeated in the third stage for the other functions. Among all the valleys, the global one is selected. The corresponding function with the prescribed parameter values can be regarded as optimal, and the value of ambiguity corresponding to the global minimum can be viewed as a quantitative measure of enhancement quality.

    B. Modified approach

This process has been modified to enhance texture automatically. This report describes the modifications proposed and the results obtained with them. In [2] the ambiguity in grayness and the spatial ambiguity (compactness) of the image are minimized; here, instead, the ambiguity in texture is minimized. This enhancement is applied prior to the feature extraction stage of content based image retrieval, so the texture distribution needs to remain as close to that of the initial image as possible. Hence the measure of connectedness in the image is also maximized.


    1. Calculation of Textural Ambiguity

A 3x3 gradient operator is run over the initial image (say Y), and a 5x5 averaging operator is run on the resultant image to obtain the edges-per-unit-area image. Each value in this image is put through Zadeh's S-function, which yields the membership value at each location, thus constituting a membership plane for texture (X).

The entropy can then be calculated from this membership plane using the expression defined in Chapter 3, as sketched below.
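A minimal C sketch of this step follows. Zadeh's standard S-function with crossover point b = (a + c)/2 is assumed, and the values of a and c actually used in the project are not reproduced here; MAXCOLS is the array bound assumed earlier.

#include <math.h>

/* Zadeh's S-function: 0 below a, 1 above c, smooth in between, crossover at b = (a+c)/2 */
double s_function(double x, double a, double c)
{
    double b = (a + c) / 2.0;
    if (x <= a) return 0.0;
    if (x >= c) return 1.0;
    if (x <= b) return 2.0 * pow((x - a) / (c - a), 2.0);
    return 1.0 - 2.0 * pow((x - c) / (c - a), 2.0);
}

/* logarithmic entropy of the texture membership plane mu (see chapter 3) */
double texture_entropy(double mu[][MAXCOLS], int numRows, int numCols)
{
    double H = 0.0;
    int r, c;

    for (r = 0; r < numRows; r++)
        for (c = 0; c < numCols; c++) {
            double m = mu[r][c];
            if (m > 0.0 && m < 1.0)              /* Shannon's function is 0 at 0 and at 1 */
                H += -m * log(m) - (1.0 - m) * log(1.0 - m);
        }
    return H / ((double)numRows * numCols * log(2.0));
}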

    2. Calculation of Connectedness

    In each neighborhood the center pixel value is subtracted from each of the 4neighbors and minimum of absolute values of gradients is considered to construct animage. This resultant image is used to calculate the membership function for Connectedness with the Zadehs S function. The sum of Membership function valuesat each pixel is considered to be the measure of disconnectedness in the whole image.
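A corresponding sketch of the disconnectedness measure, reusing s_function() from above (a and c are again placeholder parameters; abs() comes from <stdlib.h>):

/* disconnectedness: sum over all interior pixels of the S-function membership of
   the minimum absolute difference between a pixel and its four neighbours */
double disconnectedness(int pic[][MAXCOLS], int numRows, int numCols,
                        double a, double c)
{
    double total = 0.0;
    int r, col, d, mind;

    for (r = 1; r < numRows - 1; r++)
        for (col = 1; col < numCols - 1; col++) {
            mind = abs(pic[r][col] - pic[r - 1][col]);
            d = abs(pic[r][col] - pic[r + 1][col]); if (d < mind) mind = d;
            d = abs(pic[r][col] - pic[r][col - 1]); if (d < mind) mind = d;
            d = abs(pic[r][col] - pic[r][col + 1]); if (d < mind) mind = d;
            total += s_function((double)mind, a, c);
        }
    return total;
}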

    3. Algorithm

Enhance the given image by any of the four enhancement functions with a specific set of parameter values.

Calculate the product of the texture entropy and the disconnectedness. Change the parameter value and repeat the calculation of the product. This is done for a set of parameter values, and the plot of the product is then probed for a global minimum. If a global minimum is found, the process is stopped.

If it is not, another form of enhancement function is selected and the process repeated until a minimum is found for one of the forms of the function.

The image corresponding to the minimum is the enhanced image. A sketch of this search is given below.
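A minimal sketch of this search loop follows. enhance() and make_texture_membership() stand in for the project's own routines and are placeholders, as are the constants P_MIN, P_MAX, P_STEP, A_CONN and C_CONN; the fragment only illustrates the product minimization described in the steps above.

/* sweep the parameter p of one enhancement-function form and keep the value of p
   that minimizes (texture entropy) x (disconnectedness) */
double p, best_p = P_MIN, best_product = 1e30;

for (p = P_MIN; p <= P_MAX; p += P_STEP) {
    enhance(picture, enhanced, numRows, numCols, p);            /* placeholder routine */
    make_texture_membership(enhanced, mu, numRows, numCols);    /* chapter 4, step 1   */
    double product = texture_entropy(mu, numRows, numCols)
                   * disconnectedness(enhanced, numRows, numCols, A_CONN, C_CONN);
    if (product < best_product) {
        best_product = product;
        best_p = p;
    }
}
/* if the plot of product against p shows an interior (global) minimum, best_p gives
   the enhanced image; otherwise repeat the sweep with another of the four forms */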

    4. Implementation

The algorithm was tested on the following cases:

Case 1: A smoothened image is given as input to the algorithm and the product gives a global minimum for function form 4.

The original image (a LANDSAT image) is shown in Figure 1. The output image after applying the algorithm described in (3) is shown in Figure 2.

Figure 3 shows the plot of the product of textural ambiguity and disconnectedness as the parameter of functional form 4 (as described in Chapter 3) varies.


    Figure 1 : Input Image1 Figure 2 : Enhanced Image

    Figure 3: Product Plot of Textural Ambiguity and Total connectedness


Case 2: The image shown in Figure 1 is taken as another input image for the algorithm. Figure 2 shows its corresponding histogram; it can be seen that it is concentrated in the middle.

Figure 3 shows the output image obtained after applying the algorithm described in (3), and Figure 4 shows the plot of the product of textural ambiguity and disconnectedness as the parameter of functional form 4 (as described in Chapter 3) varies.

    Figure 1: Input Image2

Figure 2: Histogram of the LANDSAT image, concentrated mostly in the center


Figure 3: The LANDSAT image after enhancement, corresponding to the stretched histogram

Figure 4: Product plot of textural ambiguity and disconnectedness as the parameter of functional form 4 varies


Case 3: The image shown in Figure 1 is taken as another input image for the algorithm. It can be seen that its histogram is concentrated in the left region.

Figure 2 shows the output image obtained after applying the algorithm described in (3), and Figure 3 shows the plot of the product of textural ambiguity and total connectedness as the parameter of the functional form (as described in Chapter 3) varies.

    Figure 1 Input Image3 Figure 2 Exposure corrected

    Figure 3 Product Plot of Textural Ambiguity and total connectedness


Case 4: The image shown in Figure 1 is taken as another input image for the algorithm. It can be seen that its histogram is concentrated in the right region (over-exposed).

Figure 2 shows the output image obtained after applying the algorithm described in (3), and Figure 3 shows the plot of the product of textural ambiguity and total connectedness as the parameter of the functional form (as described in Chapter 3) varies.

    Figure 1: Over Exposed image Figure 2: Corrected image

Figure 3: Product plot of textural ambiguity and total connectedness


Chapter 5

    APPLICATION OF IMAGE PROCESSING: SEAM TRACKING

    A. Seam Tracking:

    1. Introduction:

The use of robots in the manufacturing industry has increased rapidly during the past decade. Arc welding is an actively growing area and many new procedures have been developed for use with new lightweight, high strength alloys. One of the basic requirements of such applications is seam tracking. Seam tracking is required because of inaccuracies in joint fit-up and positioning, warpage, and distortion of the work piece caused by thermal expansion and stresses during welding.

    2. Description of apparatus arrangement:

A laser beam is projected onto a measurement surface, where it is scattered from the surface, and its image is detected by an optical detector, a camera. The images [6] contain the details of the groove measurements. A PC processes the images, infers the details and gives feedback to the motor.

    Fig 1: Apparatus of the system with interconnections


    B. Previous Approach:

As quoted in [6], the algorithm involved takes the image ROI as direct input (with a thick LASER line), and then operations like filtering, edge detection and thresholding are performed on the image.

The algorithm in the previous approach involves the following steps (the last two steps are sketched after this list):

Calibrate the image with the help of a set of blocks of known dimensions.

Get the image and store it in an array variable.

Convert the true color image into a gray scale image.

Select a region of interest for further processing.

Filter the image by removing noise.

Detect the edges of the image using an edge detection operator.

Once edge detection is done, the program executes the MATLAB code to find the various parameters required.

Calculate the pixel values of the edge and root centers.

Estimate the amount of deviation in successive values of the edge center using the calibration measurements.

Give corresponding feedback to the motor so as to correct the deviation.
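As an illustration of the last two steps only, the deviation-to-feedback computation might look like the following. The names mm_per_pixel, reference_center_px and the simple proportional correction are assumptions for the sketch, not the implementation quoted in [6].

/* convert the measured edge-centre deviation (in pixels) into a motor correction (in mm) */
double seam_correction(double edge_center_px, double reference_center_px,
                       double mm_per_pixel, double gain)
{
    double deviation_mm = (edge_center_px - reference_center_px) * mm_per_pixel;
    return -gain * deviation_mm;   /* feedback opposes the measured deviation */
}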

Figure 2: Typical region of interest obtained from the welding groove for the LASER line source welding seam tracking apparatus


Figure 3: Typical region of interest obtained from the calibration bar for the LASER line source welding seam tracking apparatus

As we observe, in both cases the LASER line is 6-8 pixels wide, whereas precise measurement demands a very thin LASER line; template matching and other measurement methods also demand thin lines. Since obtaining a thinner LASER line would require a costly source, we can instead use thinning algorithms to obtain a thin LASER line.

C. Usage of thinning algorithms:

Algorithm description:

In [7] a parallel gray-tone thinning algorithm is quoted. Gray-tone thinning (GT) can be thought of as a generalization of two-tone thinning. In a two-tone thinning algorithm, object pixels which are adjacent to the background are mapped to the background value. Similarly, in GT, pixels which are very close to the background both in location and in gray level are mapped to the local maximum value (the local background value). This similarity suggests that a two-tone thinning algorithm can be modified to suit the gray level environment.
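To make the idea concrete, here is one illustrative pass of such a gray-tone mapping in C. It shows only the principle described above: the closeness threshold thresh is a placeholder, and the exact conditions of the PGTA in [7] are not reproduced.

/* one illustrative pass: a pixel whose gray level is within thresh of its local
   background (the local maximum of its 3x3 neighbourhood) is mapped to that value */
void gt_thin_pass(int pic[][MAXCOLS], int out[][MAXCOLS],
                  int numRows, int numCols, int thresh)
{
    int r, c, i, j;

    for (r = 1; r < numRows - 1; r++)
        for (c = 1; c < numCols - 1; c++) {
            int localmax = pic[r][c];
            for (i = -1; i <= 1; i++)
                for (j = -1; j <= 1; j++)
                    if (pic[r + i][c + j] > localmax)
                        localmax = pic[r + i][c + j];
            out[r][c] = (localmax - pic[r][c] <= thresh) ? localmax : pic[r][c];
        }
}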


To implement this algorithm for a gray-level picture with a similar checking of conditions, the neighborhood pixels around the candidate pel are temporarily mapped to some compatible state.

The threshold value calculated over an (N x N) window is given by

where p_i for a (3 x 3) window is as specified above. For a (3 x 3) window, the mapped threshold value of the neighborhood pixels is

The algorithm: It is a two-pass algorithm; the conditions checked in each pass follow the parallel gray-tone thinning algorithm of [7].


Implementation of the thinning algorithm in C: This algorithm was coded in C, and for several seam tracking images the automatic enhancement and thinning algorithms were run in conjunction to obtain thinned versions of the images. The results are shown below:

Figure 4: Calibration image, thinned version

Figure 5: Seam image, thinned version


    FUTURE PROSPECTS OF WORK:

The algorithm quoted in [6] is a typical machine vision algorithm, but not much intelligence is embedded into it; doing so could make the measurements less error prone. Soft computing tools like fuzzy logic can be used to modify the existing algorithm; the automatic enhancement and thinning algorithms are efforts in this direction. Further progress can be made by incorporating contour searching algorithms and detecting patterns in the search results. The ambiguity in detecting the corners can be modeled by fuzzy cornerness [8]. Prof. Kundu's guidance and the exposure to image processing and soft computing tools obtained at the Center for Soft Computing Research will, I believe, help me make more progress in this direction.


    CONCLUSION:

Various forms of enhancement functions were implemented in C. A fuzzy measure of textural ambiguity and disconnectedness has been proposed. An algorithm for automatic enhancement using fuzzy sets has been modified, using minimization of textural ambiguity and disconnectedness, so that it automatically enhances the texture in the image. The algorithm was tested on images of various kinds and the outputs are encouraging. As an application of image processing to seam tracking, thinning algorithms were successfully used to process welding seam images. Work has been done in this direction so as to suggest an algorithm with more deterministic control for seam tracking.


    REFERENCES:

[1] R. M. Haralick, "Statistical and structural approaches to texture," Proc. IEEE, vol. 67, no. 5, pp. 786-804, May 1979.

[2] M. K. Kundu and S. K. Pal, "Automatic selection of object enhancement operator with quantitative justification based on fuzzy set theoretic measure," Pattern Recognition Letters, vol. 11, pp. 811-829, 1990.

[3] A. Rosenfeld and E. Troy, "Visual texture analysis," Tech. Rep. 70-116, University of Maryland, College Park, MD, June 1970. Also in Conference Record of the Symposium on Feature Extraction and Selection in Pattern Recognition, Argonne, IL, IEEE Publication 70C-51C, Oct. 1970, pp. 115-124.

[4] A. Rosenfeld and M. Thurston, "Edge and curve detection for visual scene analysis," IEEE Trans. Comput., vol. C-20, pp. 562-569, May 1971.

[5] K. H. Lee, First Course on Fuzzy Theory and Applications, Advances in Soft Computing, vol. 27, Springer-Verlag, Berlin, March 2005.

[6] A. Raman, A. Reddy and H. Reddy, "Laser Vision Based Seam Tracking System for Welding Automation," The International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV, WorldComp'08 Congress, Las Vegas, USA), July 2008.

[7] M. K. Kundu, B. B. Chaudhuri and D. Dutta Majumder, "A parallel graytone thinning algorithm (PGTA)," Pattern Recognition Letters, vol. 12, no. 8, pp. 491-496, 1991.

[8] M. Banerjee and M. K. Kundu, "Content Based Image Retrieval with Multiresolution Salient Points," ICVGIP 2004, pp. 399-404.

[9] D. Phillips, Image Processing in C: Analyzing and Enhancing Digital Images, R&D Publications, 1994.