Comparison of Different Feature Extraction Techniques in Content-Based Image Retrieval for CT Brain Images

Wan Siti Halimatul Munirah Wan Ahmad (1) and Mohammad Faizal Ahmad Fauzi (2)
Faculty of Engineering, Multimedia University, Cyberjaya
[email protected], [email protected]
Abstract - A content-based image retrieval (CBIR) system helps users retrieve relevant images based on their contents. A reliable content-based feature extraction technique is therefore required to effectively extract most of the information from the images. These important elements include the texture, colour, intensity or shape of the objects inside an image. When used in medical applications, CBIR can assist medical experts in their diagnosis, for example by retrieving similar cases of a disease or monitoring a patient's progress. In this paper, several feature extraction techniques are explored to assess their effectiveness in retrieving medical images: Gabor Transform, Discrete Wavelet Frame, Hu Moment Invariants, Fourier Descriptor, Gray Level Histogram and Gray Level Coherence Vector. Experiments are conducted on 3,032 CT images of the human brain, and promising results are reported.
I. INTRODUCTION

The advancement of computer technologies produces huge volumes of multimedia data, particularly image data. As a result, content-based image retrieval (CBIR) has emerged as an active research area. A CBIR system finds images based on their visual content, so the retrieved results are visually similar in appearance to the query image. To describe the image content, low-level numerical features are extracted from the image itself [3]. Numerous elements such as texture, motion, colour, intensity and shape have been proposed and used to quantitatively describe visual information [4]. The image features generated by specific algorithms are then stored and maintained in a separate database.
A number of previous works have addressed different techniques of these image elements for image retrieval. In 2002, Nikolaou et al. [8] proposed a fractal scanning technique for colour image retrieval, with the Discrete Cosine Transform (DCT) and Fourier descriptors as feature extraction techniques. Qiang et al. [10] developed a CBIR framework based on global colour moments in the HSV colour space. Later, in 2006, a user concept pattern learning framework was presented by Chen et al. [9] for CBIR using HSV colour features and the Daubechies wavelet transform. Works on CBIR for medical applications were previously rare; however, the area has been gaining a lot of attention recently due to the large number of medical images in digital format generated by medical institutions every day. In 2003, Zheng et al. [7] developed a content-based pathology image retrieval system based on colour histograms, texture representation by the Gabor transform, Fourier coefficients and wavelet coefficients. Recently, Rahman et al. [13] proposed a CBIR framework consisting of machine learning methods for image prefiltering, statistical similarity matching and a relevance feedback scheme for medical images. The features are extracted using a colour moment descriptor, the gray-level co-occurrence matrix for texture characteristics, and shape features based on Canny edge detection.

In this paper, a detailed comparison of the accuracy of different feature extraction techniques on medical images is discussed and evaluated experimentally. The motivation is to identify the best technique for use in further medical image retrieval applications. The techniques cover the texture, intensity and shape elements: the texture techniques are the Gabor Transform and Discrete Wavelet Frame, the intensity techniques are the Gray Level Histogram and Gray Level Coherence Vector, and the shape methods are Hu Moment Invariants and Fourier Descriptors.
This paper is organized as follows. The next section briefly describes the feature extraction techniques used in the comparison, followed by a review of the medical images used in the experiment in Section III. The experimental setup is discussed in Section IV, followed by the results and discussion in Section V. Finally, the conclusion is presented in Section VI.
978-1-4244-2295-1/08/$25.00 (c)2008 IEEE, MMSP 2008
Authorized licensed use limited to: UNIVERSIDADE FEDERAL DO RIO GRANDE DO NORTE. Downloaded on May 23, 2009 at 09:01 from IEEE Xplore. Restrictions apply.

II. REVIEW OF FEATURE EXTRACTION TECHNIQUES

A. Gabor Transform (texture)
The Gabor transform is a technique that extracts texture information from an image. The one used in this research is the two-dimensional Gabor function proposed by Manjunath and Ma [1]. Expanding the mother Gabor wavelet forms a complete but non-orthogonal basis set. The non-orthogonality implies that there will be redundant information between different resolutions in the output data. This redundancy is reduced in [1] with the following strategy. Let U_l and U_h denote the lower and upper frequencies of interest, S be the total number of scales, and K be the total number of orientations to be computed. The design strategy is then to ensure that the half-peak magnitude supports of the filter responses in the frequency spectrum touch each other, as shown in Fig. 1 for S = 4 and K = 6. The Gabor transform is then defined by:
W_{mn}(x, y) = \iint I(x_1, y_1)\, g_{mn}^{*}(x - x_1, y - y_1)\, dx_1\, dy_1    (1)
where * indicates the complex conjugate and m, n are integers, m = 1, 2, ..., S and n = 1, 2, ..., K. The Gabor transform therefore produces S x K output images, and the energy within each image is used as a feature, resulting in an (S x K)-dimensional feature vector, where S = 6 and K = 4.
E_{mn} = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} |W_{mn}(i, j)|    (2)

where M x N is the size of each output image.
Fig. 1. Frequency spectrum of 2D Gabor transforms
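As a rough illustration of how such a filter bank yields an S x K energy feature vector, the sketch below builds Gabor kernels at log-spaced centre frequencies and K orientations, filters the image, and sums the response magnitudes as in (2). The kernel parameterization (Gaussian width, kernel size, frequency range) is an illustrative assumption, not the exact filter design of [1]:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=2.0, size=15):
    """Complex 2-D Gabor kernel at a given radial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame to the filter orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))   # Gaussian envelope
    return env * np.exp(2j * np.pi * freq * xr)       # complex carrier

def gabor_features(image, scales=4, orientations=6, f_low=0.05, f_high=0.4):
    """S x K energy features: sum of |W_mn| over each filtered image."""
    image = np.asarray(image, dtype=float)
    freqs = np.geomspace(f_low, f_high, scales)       # log-spaced, as in [1]
    feats = []
    for f in freqs:
        for k in range(orientations):
            theta = k * np.pi / orientations
            w = fftconvolve(image, gabor_kernel(f, theta), mode='same')
            feats.append(np.abs(w).sum())             # energy of W_mn, Eq. (2)
    return np.array(feats)
```

With the default S = 4 and K = 6, this yields the 24-dimensional feature vector used in the experiments.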
B. Discrete Wavelet Frame (texture)
The Discrete Wavelet Frame (DWF) [2] is an overcomplete wavelet decomposition in which the filtered images are not sub-sampled. This results in four wavelet coefficient images with the same size as the input image, corresponding to the low-low (LL), low-high (LH), high-low (HL) and high-high (HH) channels. The decomposition then continues on the LL channel just as in the normal wavelet transform, but since the image is not sub-sampled, the filters have to be upsampled by inserting zeros between their coefficients. The number of channels generated by the DWF is 3l + 1, where l is the number of decomposition levels. The energy within each channel is used as a feature; with l = 3, a 10-dimensional feature vector is produced.
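The undecimated (a trous) decomposition described above can be sketched as follows: at each level the filters, not the image, are upsampled with zeros, giving 3l + 1 channel energies. The Haar filter pair and the mean-absolute energy measure are illustrative choices, not necessarily those of [2]:

```python
import numpy as np
from scipy.ndimage import convolve1d

def dwf_energies(image, levels=3):
    """Discrete Wavelet Frame energy features: 3*levels + 1 values."""
    ll = np.asarray(image, dtype=float)
    h, g = np.array([0.5, 0.5]), np.array([0.5, -0.5])   # Haar low/high pair
    feats = []
    for lev in range(levels):
        def up(f):
            # Insert 2^lev - 1 zeros between taps (no image subsampling).
            u = np.zeros((len(f) - 1) * 2**lev + 1)
            u[::2**lev] = f
            return u
        hu, gu = up(h), up(g)
        rows_l = convolve1d(ll, hu, axis=0, mode='reflect')
        rows_h = convolve1d(ll, gu, axis=0, mode='reflect')
        lh = convolve1d(rows_l, gu, axis=1, mode='reflect')
        hl = convolve1d(rows_h, hu, axis=1, mode='reflect')
        hh = convolve1d(rows_h, gu, axis=1, mode='reflect')
        ll = convolve1d(rows_l, hu, axis=1, mode='reflect')
        feats += [np.abs(lh).mean(), np.abs(hl).mean(), np.abs(hh).mean()]
    feats.append(np.abs(ll).mean())   # final LL channel
    return np.array(feats)
```

With l = 3 this returns the 10-dimensional feature vector described above; a constant image has zero energy in all nine detail channels.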
C. Hu Moment Invariants (shape)
For this shape representation, the invariant moments used are based on those derived by Hu [11]. Hu defined seven such moments, which are invariant under translation and under changes in scale and rotation. They include a skew invariant that can distinguish mirror images of otherwise identical images. The seven moments are used as features, producing a 7-dimensional feature vector.
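The seven invariants can be computed directly from normalized central moments; the sketch below follows Hu's standard formulas [11]:

```python
import numpy as np

def hu_moments(image):
    """Seven Hu invariant moments of a grayscale image."""
    img = np.asarray(image, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        # Normalized central moment (translation- and scale-invariant).
        mu = ((x - xc)**p * (y - yc)**q * img).sum()
        return mu / m00**(1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02)**2 + 4 * n11**2,
        (n30 - 3 * n12)**2 + (3 * n21 - n03)**2,
        (n30 + n12)**2 + (n21 + n03)**2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12)**2 - 3 * (n21 + n03)**2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12)**2 - (n21 + n03)**2),
        (n20 - n02) * ((n30 + n12)**2 - (n21 + n03)**2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12)**2 - 3 * (n21 + n03)**2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12)**2 - (n21 + n03)**2),
    ])
```

A quick sanity check of the invariance: the same shape placed at two different positions in the image yields the same seven values.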
D. Fourier Descriptor (shape)
Fourier Descriptors (FDs) are a powerful feature for representing boundaries and objects. Consider an N-point digital boundary; starting from an arbitrary point (x_0, y_0) and following a steady counterclockwise direction along the boundary, a set of coordinate pairs (x_0, y_0), (x_1, y_1), ..., (x_{N-1}, y_{N-1}) can be generated. These coordinates can be expressed in complex form as
z(n) = x(n) + j\, y(n), \quad n = 0, 1, 2, \ldots, N-1    (3)
The discrete Fourier transform (DFT) of z(n) gives

a(k) = \sum_{n=0}^{N-1} z(n) \exp\!\left( \frac{-j 2 \pi k n}{N} \right), \quad 0 \le k \le N-1    (4)
The complex coefficients a(k) are called the Fourier Descriptors of the boundary. A 64-point DFT is used, resulting in a 64-dimensional feature vector.
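A minimal sketch of the 64-point descriptor: the boundary is expressed as complex numbers z(n), resampled to 64 points, and the DFT coefficient magnitudes are taken as the feature vector. The resampling step is an assumption made here so that boundaries of any length produce the same feature dimension:

```python
import numpy as np

def fourier_descriptors(boundary_xy, n_points=64):
    """Fourier Descriptors from an (N, 2) array of boundary (x, y) pairs."""
    xy = np.asarray(boundary_xy, dtype=float)
    z = xy[:, 0] + 1j * xy[:, 1]                  # z(n) = x(n) + j*y(n), Eq. (3)
    # Resample the boundary to n_points so every image yields the
    # same feature length (the paper uses a 64-point DFT).
    idx = np.linspace(0, len(z), n_points, endpoint=False).astype(int)
    a = np.fft.fft(z[idx])                        # coefficients a(k), Eq. (4)
    # Magnitudes are used as features; a(0) encodes position, so only
    # that component changes under translation of the boundary.
    return np.abs(a)
```

Translating the boundary changes only the a(0) component, which is easy to verify on a sampled circle.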
E. Gray Level Histogram (intensity)
Colour histograms are the most common way of describing the low-level colour properties of images. Since the medical images used here are only available in grayscale, a simpler histogram called the gray level histogram (GLH) is used to describe the intensity distribution over the gray level colour map. A GLH is represented by a set of bins, where each bin covers one or more gray intensity levels. It is obtained by counting the number of pixels that fall into each bin based on their intensity [6]. Fig. 2 shows examples of GLHs for different images using a 64-bin histogram.
Fig. 2. Example of gray level histogram distribution
with number of bins = 64
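A 64-bin GLH takes only a few lines; normalizing by the pixel count (as done for the combined experiments in Section V) is included here so that histograms of differently sized images are comparable:

```python
import numpy as np

def gray_level_histogram(image, bins=64, levels=256):
    """64-bin gray level histogram, normalized by the pixel count."""
    img = np.asarray(image)
    # Each bin covers levels/bins consecutive gray levels.
    hist, _ = np.histogram(img, bins=bins, range=(0, levels))
    return hist / img.size
```

For an all-black 8-bit image, every pixel falls into bin 0 and the normalized histogram sums to 1.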
F. Gray Level Coherence Vector (intensity)
The Gray Level Coherence Vector (GLCV) is another technique for extracting intensity features from an image. The idea is similar to the Colour Coherence Vector (CCV) proposed by Pass et al. [5]. This technique incorporates some spatial information about the image: each pixel in a given bin is classified as either coherent or incoherent. A pixel is coherent if it belongs to a large connected group of similar pixels; otherwise it is incoherent. The first step is to discretize the gray colourspace so that only n distinct gray levels (or bins) are used in the image. The next step is to categorize the pixels within each bin as either coherent or incoherent, by comparing the size of each connected group with a predefined threshold value tau. The values of tau and n used in [5] are 300 and 64 respectively. In this experiment, the number of bins is also set to 64, and several values of tau were tested; the optimal value was found to be tau = 2,600. All images in our database contain 262,144 (512x512) pixels, so a coherent region is set to be approximately 1% of the image. With tau = 2,600, the average proportion of coherent pixels over all images in our database is about 70%; with 64 bins, the number of features produced is 128, i.e. 64 each for the coherent and incoherent vectors.
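A sketch of the coherence test: after quantization into 64 bins, the pixels of each bin are split into connected components, and components of at least tau pixels are counted as coherent. The use of 4-connectivity is an assumption made for this sketch:

```python
import numpy as np
from scipy.ndimage import label

def glcv(image, bins=64, levels=256, tau=2600):
    """Gray Level Coherence Vector: per-bin coherent/incoherent pixel counts."""
    img = np.asarray(image)
    # Quantize the gray levels into `bins` bins.
    quant = np.clip((img.astype(int) * bins) // levels, 0, bins - 1)
    coherent = np.zeros(bins)
    incoherent = np.zeros(bins)
    for b in range(bins):
        labels, n = label(quant == b)      # 4-connected components of bin b
        for comp in range(1, n + 1):
            size = int((labels == comp).sum())
            if size >= tau:                # large group -> coherent pixels
                coherent[b] += size
            else:
                incoherent[b] += size
    return np.concatenate([coherent, incoherent])   # 128-dimensional
```

A uniform image forms a single connected group, so with a small threshold all of its pixels land in the coherent half of the vector.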
III. MEDICAL IMAGES

The medical image collection used in this experiment was provided by Putrajaya Hospital, Malaysia. It consists of 3,032 computed tomography (CT) images of the human brain in DICOM format. The images have a resolution of 512x512 and were scanned from 95 patients, with each patient having between 15 and 56 scans. To quantitatively evaluate the performance of the texture- and intensity-based feature extraction techniques, the images are divided into 4 classes according to visual similarity, called the general image classification. The ability of the system to retrieve images from the same class as the query image indicates the accuracy of the feature extraction techniques. To evaluate the performance of the shape-based techniques, a different classification is used, called the shape image classification. This classification is based on the head contour obtained by segmenting the head from its background using the fuzzy C-means clustering algorithm. Note that the shape-based feature extraction techniques employed only search for similar shapes of the head itself, not the shapes of different objects inside it. Visually, the shape of the head can also be classified into 4 classes. Some examples of images for both the general and shape classifications are shown in Tables I and II. For the general image classification, 638 of the 3,032 images in the database belong to Class 1, 808 to Class 2, 1,134 to Class 3 and 452 to Class 4. For the shape image classification, 293 belong to Class 1, 1,012 to Class 2, 981 to Class 3 and 746 to Class 4.
TABLE I
GENERAL IMAGE CLASSIFICATION

                 Class 1   Class 2   Class 3   Class 4
Total images
per class          638       808      1134       452
Example A        [image]   [image]   [image]   [image]
Example B        [image]   [image]   [image]   [image]
TABLE II
SHAPE IMAGE CLASSIFICATION

                 Class 1   Class 2   Class 3   Class 4
Total images
per class          293      1012       981       746
Example A        [image]   [image]   [image]   [image]
Example B        [image]   [image]   [image]   [image]
IV. EXPERIMENTAL SETUP

The retrieval system consists of two stages: the offline feature extraction stage and the online retrieval stage. During the offline stage, the six feature extraction techniques are applied to all 3,032 images in the database. Feature vectors of different lengths are generated depending on the technique used (Table III), and these vectors are stored in separate feature vector databases, one per technique. During the online stage, the feature vector of the query image is computed using one selected technique and compared to all feature vectors in that technique's feature vector database. A distance metric is used to compute the similarity between the feature vectors; a small distance implies that the corresponding database image is similar to the query image, and vice versa. Images are then retrieved in order of increasing distance. The flow of this process is shown in Fig. 4.
TABLE III
DIFFERENT LENGTH OF FEATURE VECTORS
Technique FV Length
Gabor Transform 24
Discrete Wavelet Frame 10
Hu Moment Invariants 7
Fourier Descriptor 64
Gray Level Histogram 64
Gray Level Coherence Vector 128
Fig. 3. Offline feature extraction stage
Fig. 4. Online stage of retrieval process
Measuring dissimilarity between images is of central importance when retrieving images by content. In this work, the L1 and L2 metrics, as well as their normalized versions, are considered with respect to each extraction technique. The L1 metric, also known as the Manhattan distance, is calculated by summing the absolute differences between the feature vectors, whereas the L2 metric, known as the Euclidean distance, is calculated by taking the root of the summed squared differences between the feature vectors. The normalized Euclidean and Manhattan metrics are computed by dividing each feature difference by the standard deviation of that particular feature over the entire database. The four distance metrics are given below:
Euclidean = \sqrt{ \sum_{k=1}^{n} (x_{ik} - x_{jk})^2 }    (5)
Manhattan = \sum_{k=1}^{n} | x_{ik} - x_{jk} |    (6)
Normalized Euclidean = \sqrt{ \sum_{k=1}^{n} \left( \frac{x_{ik} - x_{jk}}{\sigma_k} \right)^{2} }    (7)
Normalized Manhattan = \sum_{k=1}^{n} \frac{ | x_{ik} - x_{jk} | }{ \sigma_k }    (8)
where \sigma_k is the standard deviation of the k-th feature over the feature vector database.

After the performance of each individual technique is obtained, the best technique among the intensity, texture and shape features is chosen for a further experiment. The selected techniques are combined to see if the retrieval performance can be further improved. This is achieved by adding up the dissimilarity measures of the combined techniques, without affecting the relative distances between the query image and the database images for each technique.
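The four metrics, and the additive combination used later in Section V, can be written directly; the per-feature standard deviations over the database are supplied as a vector:

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())          # L2, Eq. (5)

def manhattan(a, b):
    return np.abs(a - b).sum()                    # L1, Eq. (6)

def norm_euclidean(a, b, sd):
    return np.sqrt((((a - b) / sd) ** 2).sum())   # Eq. (7)

def norm_manhattan(a, b, sd):
    return (np.abs(a - b) / sd).sum()             # Eq. (8)

def combined_distance(query_fvs, image_fvs, metrics):
    """Sum the per-technique dissimilarities for one database image."""
    return sum(m(q, f) for m, q, f in zip(metrics, query_fvs, image_fvs))
```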
V. RESULTS AND DISCUSSION

In the initial setup of the experiment, eight images were selected (two from each class) to test all the techniques with all distance metrics, in order to find the most suitable metric for each technique. The results are summarized in Table IV. It was found that different feature extraction techniques perform differently under each distance metric. The Gabor transform shows the best results using the normalized Manhattan metric, the Discrete Wavelet Frame performs best using the normalized Euclidean metric, the Fourier descriptor achieves high accuracy using the Manhattan metric, while Hu moment invariants, the gray level histogram and the gray level coherence vector show high accuracy when using the Euclidean distance metric.
TABLE IV
RETRIEVAL ACCURACY FOR ALL FEATURE EXTRACTION TECHNIQUES
TESTED USING ALL DISTANCE METRICS

Technique        Average retrieval accuracy of 8 query images for TOP 50 (%)
                    E        M        NE       NM
Gabor             58.25    58       59.25    59.5
DWF               43.25    44.5     67.75    66.25
Hu moment         62       55.75    51       45.25
Fourier Desc.     89.75    91.75    90.75    88
GLH               71.25    70.75    69.25    71.5
GLCV              74       73.25    25       25

E = Euclidean, M = Manhattan, NE = Normalized Euclidean, NM = Normalized Manhattan
To evaluate the performance of each feature extraction technique, all 3,032 CT brain images are used as queries one by one, to check whether similar images from the same patient and class are retrieved successfully. It is easier to analyse the similarity per patient rather than across all images in the database, because the total number of images to be considered is then smaller. This operation involves the hybrid-based image retrieval from our previous work in [12], where the PatientID is used as a text query and combined with CBIR. As an example, Patient14 (ID 156027) has 25 scans: 6 of them belong to Class 1, 5 to Class 2, 10 to Class 3 and 4 to Class 4. The first image from Class 1 is selected as the query image, the keyword 156027 is used in the Patient ID field, and the system retrieves all 25 images of Patient14 in order of increasing distance. Perfect retrieval for this query image would return the other five images from Class 1 (excluding the query image itself) as the top 5 ranked images, followed by images from the other classes. The average recognition rate is used to evaluate the retrieval accuracy, and is calculated as shown in (9): if there are N images in a given class for a particular patient, the average recognition rate is the number of images from the same class within the top N retrieved images, divided by N.
Average recognition rate = \frac{ \text{No. of images from the same class within the top } N \text{ retrieved images} }{ N }    (9)

where N is the number of images per class.
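Equation (9) in code, where `ranked_classes` is the class label of each retrieved image in ranking order (query excluded):

```python
def average_recognition_rate(ranked_classes, query_class, n):
    """Eq. (9): fraction of the top-n retrieved images whose class
    matches the query class, where n is the query's class size."""
    top = ranked_classes[:n]
    return sum(1 for c in top if c == query_class) / n
```

For instance, a ranking whose top four results contain three same-class images scores 0.75.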
The retrieval process is performed on all CT brain images from the 95 patients in the image database, using the six feature extraction techniques with the suitable distance metric as discussed previously. Table V summarizes the retrieval accuracy for each class for all texture and intensity techniques, and Table VI for the two shape techniques. From Table V, the recognition rate of all techniques for Class 1 and Class 3, and to some extent Class 4, is satisfactory, but not for Class 2. The reason is that the classification was done visually, based on human vision, and some ambiguity is present: images from Class 2 can also be classified as Class 1 or Class 3, which affects the overall accuracy for Class 2 images. Overall, the average recognition rate per patient is above 70% for the texture and intensity extraction techniques. From Table VI, the accuracy of the shape classification varies by class for both techniques, with retrieval for Class 3 recording the highest accuracy. However, the accuracy is substantially lower than for the four techniques in Table V, because the shape features are derived from the contour of the segmented object in the image and thus depend heavily on the segmentation accuracy itself. This problem can be addressed with better segmentation and shape extraction techniques for distinguishing images of the human brain in CT scans.
TABLE V
PERCENTAGE OF RETRIEVAL ACCURACY FOR TEXTURE AND INTENSITY TECHNIQUES

Technique    % Average for 95 patients per class          % Average
             Class 1   Class 2   Class 3   Class 4        per patient
Gabor         78.45     59.48     84.43     76.78           74.51
DWF           80.72     64.91     84.44     78.98           77.05
GLH           86.98     55.31     88.26     60.88           72.02
GLCV          85.48     56.11     87.49     62.84           72.21
TABLE VI
PERCENTAGE OF RETRIEVAL ACCURACY FOR SHAPE TECHNIQUES

Technique             % Average for 95 patients per class     % Average
                      Class 1   Class 2   Class 3   Class 4   per patient
Hu moment              52.75     43.14     67.1      32.22      48.53
Fourier Descriptor     47.55     67.05     75.35     74.28      67.39
Another observation is that the average recognition rate varies from patient to patient, depending on how difficult it is to visually classify the images into a particular class. Certain patients recorded a low average recognition rate because some of their images could be visually classified into two classes, which affects the recognition measurement. The average recognition rate for all 95 patients using all techniques is presented in Fig. 5.
The experiments were conducted using Matlab 7.3 on an Intel Core Duo 2.0 GHz processor with 1 GB of memory. The average time taken by each technique to complete the retrieval process is summarized in Table VII. For the texture element, the average times recorded for both techniques are the same, but in terms of retrieval accuracy DWF gives the better results. For gray level intensity, the histogram technique performs the retrieval much faster than the coherence vector technique, even though both techniques yield similar retrieval accuracy. Between the two tested shape features, the Hu moment technique can execute the retrieval up to three times faster than the Fourier Descriptor (FD); however, considering its poor retrieval accuracy, FD is chosen instead for the further experiments on combining feature extraction techniques.
TABLE VII
AVERAGE RETRIEVAL TIME FOR EACH TECHNIQUE

Technique                      Average time taken
Gabor Transform                5s-6s
Discrete Wavelet Frame         5s-6s
Gray Level Histogram           11s-12s
Gray Level Coherence Vector    19s-20s
Hu moment                      3s-4s
Fourier Descriptor             10s-11s
Since the combination of feature extraction techniques involves summing the techniques' dissimilarity measures, the distance for any one technique cannot be allowed to dominate the others, so a small modification is needed. For the gray level histogram (GLH), the distances are very large, ranging from 10^7 to 10^9, because the feature vector consists of pixel counts per bin. To normalize the GLH features, the number of pixels in each bin is divided by the total number of pixels over all bins. For the FD technique, it was found that the Manhattan distance metric produces very small measurements, so the normalized Euclidean metric is used instead, as it yields the second-highest accuracy in Table IV. No change is made to the DWF technique. Results for the combinations of techniques are shown in Table VIII. From the table, it can be seen that the combination of the DWF and FD techniques gives the highest average retrieval rate. The pattern of accuracy per class matches Table V: Class 1 and Class 3 give better results, as does Class 4, but Class 2 is somewhat lower. Combining DWF with either GLH or FD performs retrieval faster than the other combinations; more time is obviously needed to compute the combination of all three techniques. It is also interesting to note that combining all three techniques does not further improve the retrieval accuracy; in fact, it performs worse than all the two-technique combinations. This shows that we cannot simply bundle together many feature extraction methods in order to obtain higher accuracy.
TABLE VIII
PERCENTAGE OF RETRIEVAL ACCURACY FOR MULTI-TECHNIQUES

Techniques        % Average for 95 patients per class    % Average     Average
                  Class 1  Class 2  Class 3  Class 4     per patient   time taken
DWF + GLH          88.04    57.93    88.19    66.37        77.81       10s-11s
DWF + FD           82.03    64.94    84.46    78.98        80.60       11s-12s
GLH + FD           86.97    55.31    88.26    60.89        75.34       15s-16s
DWF + GLH + FD     69.17    61.37    87.96    72.93        74.01       18s-19s
To ease the work of testing and analysing the images, a graphical user interface (GUI) was developed in the Matlab environment. It consists of two main panels: the Query Panel (left side) and the Result Panel (right side). The system is meant to be a flexible hybrid retrieval system, so in the Query Panel the type of retrieval can be selected: content-based (CBIR), text-based (TBIR) or both (Hybrid). The accuracy of the system can be analysed visually by inspecting the Result Panel. Fig. 6 shows an example of the retrieval results obtained with the DWF texture extraction technique; as can be seen, visually similar scans are retrieved accordingly.
Fig. 5. Average recognition rate for 95 patients
Fig. 6. GUI for image retrieval system
VI. CONCLUSIONS

An efficient content-based image retrieval system requires an excellent content-based technique to effectively use most of the information in the images. In this paper, a study was carried out on six feature extraction techniques covering the texture, intensity and shape image elements, to obtain a detailed comparison of the retrieval accuracy of each technique on medical images. The experiment was performed on 3,032 CT human brain images from 95 patients, visually divided into four classes, and every image was used as a query to obtain the average recognition rate. The techniques with the highest accuracy among the different approaches were then combined. The reported results show that the best texture extraction technique is the Discrete Wavelet Frame (DWF), the best intensity technique is the Gray Level Histogram (GLH), and the best shape feature is the Fourier Descriptor (FD). Among the combinations of techniques, the DWF and FD combination gives the best results. These techniques can be used in medical applications to provide a reliable image retrieval system.

Our current work uses these promising techniques to retrieve medical images based on regions of interest instead of the whole image. A block-based algorithm has been developed, based on a simple gray level histogram with an image partitioning algorithm, and we are integrating this block-based method with the DWF, GLH and FD techniques.
ACKNOWLEDGEMENT
The authors would like to acknowledge Putrajaya Hospital, Malaysia, for contributing the medical images used in this study. This work is funded by the Ministry of Science, Technology and Innovation Malaysia under the Science Fund grant (ID: 01-02-01-SF0014).
REFERENCES

[1] B. S. Manjunath and W. Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 18, pp. 837-842, Aug. 1996.
[2] M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Trans. on Image Processing, vol. 4, pp. 1549-1560, Nov. 1995.
[3] S. Liapis and G. Tziritas, "Color and Texture Image Retrieval Using Chromaticity Histograms and Wavelet Frames," IEEE Trans. on Multimedia, vol. 6, no. 5, pp. 676-686, Oct. 2004.
[4] V. N. Gudivada and V. V. Raghavan, "Content based image retrieval systems," IEEE Computer, vol. 28, Sept. 1995.
[5] G. Pass, R. Zabih, and J. Miller, "Comparing Images Using Colour Coherence Vectors," Proc. Fourth ACM International Multimedia Conference, Boston, MA, pp. 65-74, 1996.
[6] A. Coman, "Exploring the Colour Histograms Dataspace for Content-based Image Retrieval," Technical Report TR 03-01, Univ. of Alberta, Canada, Jan. 2003.
[7] L. Zheng, A. W. Wetzel, J. Gilbertson and M. J. Becich, "Design and Analysis of a Content-Based Pathology Image Retrieval System," IEEE Trans. on Info. Tech. in Biomed., vol. 7, no. 4, pp. 245-255, Dec. 2003.
[8] N. Nikolaou and N. Papamarkos, "Image Retrieval Using a Fractal Signature Extraction Technique," IEEE Trans. on DSP, pp. 1215-1218, 2002.
[9] S.-C. Chen, S. H. Rubin, M.-L. Shyu and C. Zhang, "A Dynamic User Concept Pattern Learning Framework for Content-Based Image Retrieval," IEEE Trans. on Systems, Man and Cybernetics, vol. 36, no. 6, pp. 772-783, Nov. 2006.
[10] X. Qiang and Y. Baozong, "A New Framework of CBIR Based on KDD," 6th ICSP'02 Proc., vol. 2, pp. 973-976, Aug. 2002.
[11] M. K. Hu, "Visual Pattern Recognition by Moment Invariants," IRE Trans. on Info. Theory, vol. 8, 1962.
[12] W. S. Halimatul Munirah W. Ahmad, M. Faizal A. Fauzi, W. M. Diyana W. Zaki and R. Logeswaran, "Hybrid Image Retrieval System Using Text and Gabor Transform for CT Brain Images," MMU International Symposium on Info. and Comm. Tech. 2007 (M2USIC2007), Selangor, Malaysia, TS2B3, Nov. 2007.
[13] M. M. Rahman, P. Bhattacharya and B. C. Desai, "A Framework for Medical Image Retrieval Using Machine Learning and Statistical Similarity Matching Techniques with Relevance Feedback," IEEE Trans. on Info. Tech. in Biomed., vol. 11, no. 1, pp. 58-69, Jan. 2007.