
Fully automatic brain extraction algorithm for axial T2-weighted magnetic resonance images

K. Somasundaram, T. Kalaiselvi

Department of Computer Science and Applications, Gandhigram Rural Institute, Gandhigram, Tamilnadu 624302, India

Article info

Article history:

Received 15 April 2008

Accepted 19 August 2010

Keywords:

Brain extraction algorithms

Diffusion process

Overlap test

Morphological operations

Region selection

Similarity index

T2-weighted MRI scans

Abstract

In this paper we propose two brain extraction algorithms (BEA) for T2-weighted magnetic resonance imaging (MRI) scans. The T2-weighted image is first filtered with a low pass filter (LPF) to remove or subdue the background noise. Then the image is diffused to enhance the brain boundaries. A threshold value for intensity is obtained using Ridler's method, and with this threshold a rough binary brain image is generated. By performing morphological operations and largest connected component (LCC) analysis, a brain mask is obtained from which the brain is extracted. This method uses only the 2D information of individual slices and is named 2D-BEA. The LCC assumption fails in a few slices. To overcome this problem, the 3D information available in adjacent slices is used, which results in the 3D-BEA. Experimental results on 20 MRI data sets show that the proposed 3D-BEA gives excellent results. Its performance is better than that of 2D-BEA and of two other popular methods, the brain extraction tool (BET) and the brain surface extractor (BSE).

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Magnetic resonance imaging (MRI) plays an important role in diagnosing several diseases of the human brain. Sequences of images in MRI, called slices, are obtained in any of three orientations: axial (neck to head), sagittal (ear to ear) or coronal (front to back). Three types of images, based on proton density (PD), longitudinal relaxation time (T1) and transverse relaxation time (T2), are produced in MRI. Experts usually combine the multispectral MRI information of a patient to decide on the location, extent and prognosis of brain abnormalities and to diagnose them.

Numerous brain extraction algorithms (BEAs) are available in neuroradiological research. These BEAs are useful for several subsequent image processing operations such as segmentation, registration and compression. Some of the popular BEAs are statistical parametric mapping (SPM) [1], the brain extraction tool (BET) [2], the brain surface extractor (BSE) [3], 3dIntracranial [4], MRI watershed [5], model based level sets (MLS) [6], exbrain [7] and the Simon Fraser University (SFU) method [8]. The above-mentioned BEAs employ a single algorithmic strategy and are found to perform less satisfactorily than BEAs based on hybrid strategies [9,10]. Some of the popular hybrid BEAs are the Minneapolis Consensus Strip (MCS) [9], the brain extraction meta algorithm (BEMA) [10] and the hybrid watershed algorithm (HWA) [11]. The processing time of hybrid algorithms is always higher than that of single-strategy algorithms.

Most of the existing BEAs take T1-weighted images of normal subjects as input. T1-weighted images are taken as the gold standard for anatomical or morphological imaging due to their high resolution, often isotropic voxels, intrinsic 3D nature and excellent gray matter (GM)–white matter (WM) contrast-to-noise ratio [7]. But T2-weighted images are highly sensitive to most pathologic processes. Since T2-weighted images are the most sensitive for detecting brain pathology, patients with suspected intracranial disease are first screened with T2-weighted spin-echo and FLAIR images [12]. T1-weighted images are acquired only if the T2-weighted images show abnormalities. Therefore, T2-weighted images are to be processed before proceeding to T1-weighted images. This kind of analysis could reduce the time and space complexity of the computer aided detection (CAD) process. But algorithms for automatically extracting the brain volume from T2-weighted images, as well as from abnormal volumes, are very limited.

A recent study by Fennema-Notestine et al. [13] reported that BET, BSE and 3dIntracranial are applicable to T2-weighted images. BET and BSE are the most popular methods and are used by several studies, either to evaluate their proposed methods or to analyse performance across scanning protocol, imaging, subject-specific and clinical characteristics [6,9,10,13–15].

BET, developed by Smith [2], is a brain surface model based method. Initially, a rough brain mask is created using two thresholds estimated from the image histogram. Then a tessellated sphere centered at the approximate centre of gravity (COG) of the brain is expanded towards the brain's edge. This is a purely deformable approach and the expansion is controlled by two parameters, a smoothness criterion and a local intensity threshold.

BSE, developed by Shattuck et al. [3], is an edge based approach. Initially, the scans are smoothed by an anisotropic diffusion filter and then the edges are identified by the Marr–Hildreth edge detector [3]. Then a component is selected as brain based on its size, location and intensity within the frame, using a sequence of morphological and connected component operations. A dilation operation is done to fill small holes. BSE is controlled by four parameters: diffusion constant, edge constant, iteration parameter and erosion size.

BET tends to produce smoother edges than other methods and often includes additional non-brain tissue [3,6,13–15]. The degree of over-extraction is often high with BSE [14,15]. Both BET and BSE require customization of parameters before they can be run [14]. They failed to skull-strip T1-weighted images with extremely high noise or poor contrast resolution [6]. Fennema-Notestine et al. [13] and Hartley et al. [14] concluded that the accuracy of these methods on abnormal data sets is sensitive to subject and clinical characteristics. Therefore, care should be taken before using either of them for any application.

In this paper, we present a BEA that overcomes the above problems and works efficiently for T2-weighted images. The proposed algorithm is a two-stage BEA for T2-weighted axial scans of 3–5 mm thickness. It is an intensity-based approach and uses anatomical knowledge about the brain. A low pass filter (LPF) and anisotropic diffusion are applied to the T2-weighted image. From the resulting image an optimal intensity threshold value is computed. Using the threshold value a binary image of the coarse brain is generated. Finally, the fine brain is extracted from the coarse brain by performing morphological and region selection operations. Both 2D and 3D approaches to extract the brain are presented. The method is fully automated; no human intervention or initial parameter is needed. This unsupervised tool is also adaptive, as the extracted brain is subjected to a similarity measure to select separated brain portions. The scheme was tested using twenty data sets containing both normal and abnormal volumes. Experimental results show that the proposed method accurately extracts the brain from low contrast T2-weighted images and performs better than the well known existing brain extraction algorithms BET and BSE. The paper is organized as follows. In Section 2, we present our methods. In Section 3, the materials used in our experiments are given. The experimental results and discussion are given in Section 4. Conclusions are given in Section 5 and a summary of the work in Section 6.

2. Method

The proposed methods consist of a two-stage brain extraction algorithm to extract the brain from T2-weighted MRI images. The flowchart of our method is given in Fig. 1. In the first stage a mask for the coarse brain is generated using filtering and thresholding operations. In the second stage, a segmentation operation based on morphological operations and connected component analysis (CCA) is done to extract the fine brain from the coarse brain portion obtained in the first stage.

In clinical practice, T2-weighted volumes usually consist of thick slices with anisotropic voxels whose size ratios x:y:z are approximately 1:1:5. Anisotropy in voxel dimensions, especially with thick slices, causes poor performance of many 3D image operations [7,16,17]. One way to utilize the 3D information in T2-weighted image processing is to convert the image data to isotropic form. But Lemieux et al. [7] suggested considering the data as actually acquired in clinical practice for any image based study, rather than introducing post-processing that would significantly influence clinical image acquisition practice. Hence the neighborhood operations involved in our method, such as filtering, the diffusion process, morphological operations and connected component analysis, are implemented strictly as 2D operations. In one of our methods, the third dimension of the image is used to ensure that the extracted component is a brain portion, by performing an overlap test between adjacent slices.

2.1. Stage-1: generating coarse brain

First we process the image with a LPF. The LPF is applied to subdue or remove small details that appear in the background and to enhance large features like the brain portion. This process also removes the background noise that is introduced during the digital image construction phase. In all T2-weighted MR head scans, the skull appears darker than the brain and other tissues; its intensity is comparable to that of the background.

Fig. 1. Flowchart of our 2D-BEA. (Stage 1: input slice → low pass filter → anisotropic diffusion → bilevel thresholding → rough brain mask. Stage 2: binary erosion → brain area selection → binary dilation → output: final brain mask.)

In T2-weighted images, the cerebrospinal fluid (CSF) is bright and muscles are often dark. Thus, the bright cerebral CSF compartment around the brain in a T2-weighted image separates the brain tissues (WM and GM) from non-brain tissues like the meninges, skull, scalp and eyes. The diffusion process enhances this boundary in brightness and subdues other uniform areas; this helps to recover clear edges lost in the LPF [18]. With this process it is easy to trace the brain–skull boundary and to separate the brain from non-brain tissues. The scalp–skull boundary is not strong in T2-weighted images; therefore, it is diffused heavily and thus not preserved. The diffusion process also helps to compute an intensity threshold value automatically to segment the brain from non-brain tissues. Thus, the LPF and diffusion smooth the brain tissues while preserving the brain borders. Finally, an optimal intensity threshold value is calculated, with which a rough brain mask is produced.

2.1.1. Low pass filter

The original T2-weighted MRI image f(x, y) is first subjected to the LPF. LPF in the frequency domain is given by

L(u, v) = H(u, v) F(u, v)    (1)

where F(u, v) is the Fourier transform of the input image f(x, y), H(u, v) is the transfer function of the LPF, L(u, v) is the Fourier transform of the output image, and u and v are frequency variables. The filtered image is obtained simply by taking the inverse Fourier transform (IFT) of L(u, v):

I(x, y) = \mathrm{IFT}(L(u, v))    (2)

The LPF produces a blurred or smoothed image. As the size of the LPF increases, it smooths out the entire image, including the sharp edges, especially the cerebral CSF borders of T2-weighted scans. Hence we consider a small filter of size 3×3 pixels, used to remove the background noise in the MR image, as given in Fig. 2(a). Its behavior is similar to smoothing by standard averaging.
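As an illustration (not the authors' original Matlab implementation), the 3×3 averaging kernel of Fig. 2(a) can be applied in the spatial domain, which is equivalent to the frequency-domain product of Eq. (1). A minimal Python sketch, assuming the slice is available as a 2D array:

```python
import numpy as np
from scipy.ndimage import convolve

def low_pass_filter(slice_2d: np.ndarray) -> np.ndarray:
    """Smooth a single MR slice with the 3x3 averaging kernel of Fig. 2(a)."""
    kernel = np.ones((3, 3), dtype=float) / 9.0   # each coefficient weighted by 1/9
    # Spatial convolution with this kernel corresponds to multiplying by its
    # transfer function H(u, v) in the frequency domain (Eq. (1)).
    return convolve(slice_2d.astype(float), kernel, mode="nearest")
```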

As an illustration, two sample slices with different anatomy were selected and are shown in column 1 of Fig. 3. The slice in row 1 is chosen from a normal volume and contains more substructures than the other slices. The slice in row 2 is chosen for its abnormality and background clutter. The results obtained at different stages are shown in Figs. 3 and 4. The image treated with the LPF is given in column 2 of Fig. 3.

2.1.2. Diffusion

We then apply a diffusion process to the filtered image. For diffusion we use the anisotropic diffusion equation given by Perona and Malik [18]:

\partial I / \partial t = \mathrm{div}(C(\nabla I)\,\nabla I)    (3)

where \nabla I is the local image gradient and C(\nabla I) is the diffusion function, a monotonically decreasing function of the image gradient magnitude. We have chosen the diffusion function given by Perona and Malik [18],

C(\nabla I) = \exp(-(|\nabla I| / k)^2)    (4)

where k is the diffusion constant. Eq. (3) can be discretized using the four nearest neighbors as

I_{i,j}^{n+1} = I_{i,j}^{n} + \Delta t\,(C_N \nabla_N I + C_S \nabla_S I + C_E \nabla_E I + C_W \nabla_W I)_{i,j}^{n}    (5)

where N, S, E and W represent the north, south, east and west directions, respectively, \nabla I is the local gradient and \Delta t is an iteration constant. The local gradients are calculated using nearest neighbor differences.

The 2D anisotropic diffusion process is controlled by the number of iterations (n) and the diffusion constant (k). More iterations produce more blurring, but this effect is subtle compared to the effect of changing the diffusion constant k. The behavior of the diffusion function also depends on k: the larger the value of k, the larger the blur. The diffusion constant k controls the relation between the diffusion strength and the local edge strength and has to be tuned for a particular application. In our method, we set k to 60, which is the nominal edge gradient at cerebral CSF junctions in T2-weighted images. A small number of iterations (n) is used. The diffused image is shown in column 3 of Fig. 3.
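A compact sketch of the discretization in Eq. (5), assuming a 2D floating point image and the parameter values used here (k = 60 and a few iterations; the step size dt = 0.2 is our assumption, since the paper does not state it), could look like the following; it illustrates the Perona–Malik scheme rather than reproducing the authors' code:

```python
import numpy as np

def anisotropic_diffusion(img: np.ndarray, n_iter: int = 3,
                          k: float = 60.0, dt: float = 0.2) -> np.ndarray:
    """Perona-Malik diffusion with 4-neighbour differences (Eq. (5))."""
    out = img.astype(float).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences in the four compass directions.
        grad_n = np.roll(out, -1, axis=0) - out
        grad_s = np.roll(out,  1, axis=0) - out
        grad_e = np.roll(out, -1, axis=1) - out
        grad_w = np.roll(out,  1, axis=1) - out
        # Diffusion function of Eq. (4): strong smoothing in flat areas,
        # little smoothing across strong edges (|grad| >> k).
        c_n = np.exp(-(np.abs(grad_n) / k) ** 2)
        c_s = np.exp(-(np.abs(grad_s) / k) ** 2)
        c_e = np.exp(-(np.abs(grad_e) / k) ** 2)
        c_w = np.exp(-(np.abs(grad_w) / k) ** 2)
        out += dt * (c_n * grad_n + c_s * grad_s + c_e * grad_e + c_w * grad_w)
    return out
```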

2.1.3. Thresholding

The diffused image I(x, y) is further processed to generate a binary image. For this, an optimal intensity threshold value (T_opt) for I(x, y) is calculated using Ridler's method as given by Sonka et al. [19]. T_opt is used to separate objects from the surrounding uniform background [19]. In Ridler's method, the initialization is done by considering the pixels at the corners of the image as background pixels and the remainder as object pixels. This assumption applies well to MRI slices, where the regions of interest (ROIs) of arbitrary shape are surrounded by a dark background that pads the slice into a rectangular/square image.

The coarse binary image g_rb(x, y) is obtained as

g_{rb}(x, y) = \begin{cases} 1 & \text{if } I(x, y) \ge T_{opt} \\ 0 & \text{otherwise} \end{cases}    (6)

The binary image g_rb(x, y) obtained by applying the threshold condition of Eq. (6) to the diffused image I(x, y) is shown in column 4 of Fig. 3. This binary image is taken as the coarse brain mask and is passed as input to Stage 2.
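Ridler's method is an iterative (ISODATA-style) threshold selection: the threshold is repeatedly set to the mean of the current background and object class means until it stabilizes. A minimal sketch under that reading, with the corner-based initialization described above (the 10-pixel corner patch size and the tolerance are our assumptions):

```python
import numpy as np

def ridler_threshold(img: np.ndarray, corner: int = 10, tol: float = 0.5) -> float:
    """Iterative optimal threshold (Ridler's method) for a diffused slice."""
    img = img.astype(float)
    # Initialization: pixels at the image corners are assumed to be background.
    corners = np.concatenate([
        img[:corner, :corner].ravel(),  img[:corner, -corner:].ravel(),
        img[-corner:, :corner].ravel(), img[-corner:, -corner:].ravel()])
    t = 0.5 * (corners.mean() + img.mean())        # first guess
    while True:
        background = img[img < t]                  # current background class
        objects = img[img >= t]                    # current object class
        t_new = 0.5 * (background.mean() + objects.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```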

2.2. Stage-2: generating final brain portion

In this stage, morphological operations are performed on the binary image obtained in Stage 1 to segment the fine brain portion. These operations are primarily used to simplify the image structure and to detect and preserve the main shape characteristics of objects [19]. Further, a connected component analysis (CCA) is done to select the brain region. We use two primary morphological operations, erosion and dilation, to remove non-brain regions such as the eyes and the surrounding dura without losing much brain tissue.

Fig. 2. (a) 3×3 kernel of the LPF, with all nine coefficients equal and a 1/9 weighting, and (b) the 9×9 structuring element (STEL) used for the morphological operations, in which the three pixels at each corner are set to 0 so that the active pixels form an octagonal, roughly oval shape.

For performing the morphological operations we define a structuring element (STEL) as shown in Fig. 2(b). The STEL is a square element of d×d pixels. We need curved corners in the STEL to handle the curved boundaries of the brain. Therefore, the three pixel positions in each corner of the STEL are disabled by setting them to '0'. All the remaining pixel positions are active with a value of '1'. The active points form an octagonal, roughly oval shape, and therefore we denote the element by the symbol O_d. The oval-shaped STEL is necessary to produce appropriate results for erosion and dilation at the curved boundaries of the brain regions in the binary image.
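The octagonal structuring element O_d can be built by zeroing the three corner positions of a d×d square of ones; a small sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def make_stel(d: int = 9) -> np.ndarray:
    """Octagonal structuring element O_d: a d x d square of ones with the
    three pixels in each corner disabled, as in Fig. 2(b)."""
    stel = np.ones((d, d), dtype=bool)
    for r, c in [(0, 0), (0, d - 1), (d - 1, 0), (d - 1, d - 1)]:
        dr = 1 if r == 0 else -1
        dc = 1 if c == 0 else -1
        stel[r, c] = False              # corner pixel
        stel[r, c + dc] = False         # neighbour along the row
        stel[r + dr, c] = False         # neighbour along the column
    return stel
```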

2.2.1. Erosion

Erosion is the process of peeling away a layer of points at the boundary of the binary image; it decomposes complicated objects into several simpler ones. Erosion is done using the STEL. We used the two dimensional STEL of size d = 9 shown in Fig. 2(b); O_9 is wide enough to detach the eyes and other small structures from the brain in axial scans. The eroded image X_1 is obtained as

X_1 = g_{rb} \ominus O_9    (7)

where \ominus represents the erosion operation. Erosion detaches the weakly connected regions from the brain portion.

Fig. 3. Results of Stage 1. Two T2 scans with normal and abnormal anatomy are given in row 1 and row 2. Original scans are given in column 1, low pass filtered images in column 2, diffused images in column 3 and binary images of the rough brain portion in column 4.

Fig. 4. Results of Stage 2. Eroded images are in column 1, selected brain portions in column 2, dilated images (brain mask) in column 3 and final brain portions extracted using the brain mask in column 4.

Sometimes it also detaches brain tissue from the surrounding dura. The result obtained by performing the erosion process on column 4 of Fig. 3 is shown in column 1 of Fig. 4.

2.2.2. Brain area selection

The erosion process decomposes the binary image into several isolated regions. A test has to be done to determine which of the regions form the brain portion. It is assumed that the brain is the largest connected component (LCC). Therefore the LCC among the regions obtained by erosion is taken as the brain. The run length identification scheme for region labeling and selection given by Sonka et al. [19] is used to find the LCC. By applying the LCC procedure to X_1, we get the brain region X_2 as

X_2(x, y) = \begin{cases} 1 & \text{if } (x, y) \in \mathrm{LCC}(X_1) \\ 0 & \text{otherwise} \end{cases}    (8)
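The paper uses a run length based labeling scheme [19] to find the LCC; an equivalent result can be obtained with any connected component labeling routine. A sketch using scipy.ndimage.label as a stand-in:

```python
import numpy as np
from scipy.ndimage import label

def largest_connected_component(binary: np.ndarray) -> np.ndarray:
    """Return a mask containing only the largest connected component (Eq. (8))."""
    labels, n = label(binary)          # 4-connectivity with the default structure
    if n == 0:
        return np.zeros_like(binary, dtype=bool)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                       # label 0 is the background; ignore it
    return labels == sizes.argmax()
```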

2.2.3. Dilation

Next we perform a dilation operation on the selected brain portion. Dilation is the process of growing a layer of pixels at the boundary of the binary image. The dilation operation is done with the same STEL that was used for erosion. The dilated image is

X_3 = X_2 \oplus O_9    (9)

where \oplus represents the dilation operation. The size and shape of the points to be included are defined by the STEL. Dilation increases the size of the binary image and is needed if the original size of the brain is to be preserved [19]. It is mainly used to recapture the brain tissue that was lost in the erosion or thresholding steps. The result obtained after performing the dilation operation on column 2 of Fig. 4 is shown in column 3 of Fig. 4.

The dilated binary image X_3 is the final binary mask of the brain portion and is used to extract the brain from the original MRI scan. The final brain portion f_fb is obtained as

f_{fb}(x, y) = \begin{cases} f(x, y) & \text{if } X_3(x, y) = 1 \\ 0 & \text{otherwise} \end{cases}    (10)

The final brain portion obtained by our method is shown in column 4 of Fig. 4. All operations performed so far are on the same slice and use 2D information only; therefore we name this process 2D-BEA.
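Putting the pieces together, one pass of the 2D-BEA over a single slice could be sketched as follows, reusing the helper functions from the earlier sketches (all helper names are ours; the parameter values follow Table 2):

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def bea_2d(slice_2d: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stage 1 + Stage 2 of the 2D-BEA for one axial T2-weighted slice.
    Returns (brain_mask, extracted_brain)."""
    # Stage 1: coarse brain mask.
    smoothed = low_pass_filter(slice_2d)
    diffused = anisotropic_diffusion(smoothed, n_iter=3, k=60.0)
    t_opt = ridler_threshold(diffused)
    coarse = diffused >= t_opt                        # Eq. (6)
    # Stage 2: fine brain mask.
    stel = make_stel(9)                               # O_9
    eroded = binary_erosion(coarse, structure=stel)   # Eq. (7)
    lcc = largest_connected_component(eroded)         # Eq. (8)
    mask = binary_dilation(lcc, structure=stel)       # Eq. (9)
    brain = np.where(mask, slice_2d, 0)               # Eq. (10)
    return mask, brain
```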

2.3. 3D-BEA

The assumption that the brain is the LCC in the image fails in certain slices. In such cases, 2D-BEA extracts only partial brain portions. Such slices are:

i. The uppermost slices, near the top of the head, where the cerebral hemispheres are separated by the longitudinal fissure; the brain may therefore appear as two regions.

ii. The lowermost slices, where the temporal and frontal lobes may be separated from the cerebellum; the brain may appear as more than two regions.

In order to select the complete brain, an additional process is introduced into our 2D-BEA. This process makes use of information in the third dimension, and we call the extended method 3D-BEA. The flowchart of our 3D-BEA is shown in Fig. 5. Brummer et al. [16] suggested that the geometric continuity of the brain in a volume could be exploited across neighboring slices to select the regions corresponding to brain. In an MRI head stack there is a close similarity between two successive slices, and we make use of this similarity in our method. The similarity between two successive slices is checked by computing a similarity index, which is used to check whether the correct portion of the brain has been extracted. The slice-to-slice similarity index J is calculated using the Jaccard coefficient [20] as

J(A, B) = \frac{T(A \cap B)}{T(A \cup B)}    (11)

where A and B are two data sets and T(X) is the total number of pixels in a region X.

The value of J is calculated between the brain mask of the current slice and that of the previous slice. When J is greater than 70%, the brain is assumed to have been correctly extracted from the slice and the result produced by the brain mask is taken as the final brain; otherwise an overlap test (OT) is performed to select the disconnected or missed brain regions. The critical value of 70% for J was selected after running the algorithm on several data sets with different J values. We call this method 3D-BEA.
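For binary masks, the slice-to-slice Jaccard index of Eq. (11) reduces to a few lines; a minimal sketch:

```python
import numpy as np

def jaccard_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard coefficient J(A, B) of Eq. (11) for two binary brain masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0   # two empty masks are considered identical
    return np.logical_and(a, b).sum() / union
```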

2.4. Overlap test procedure

The OT is done to find what percentage of a region of the current brain mask overlaps with the previous brain mask. The procedure starts with the eroded binary image X_1 of Eq. (7) that was produced in Stage 2. The overlap ratio V is calculated for each region R of X_1 with respect to the brain mask P of the previous slice using

V = \frac{T(R \cap P)}{T(R)}    (12)

If the ratio V is above 95%, the region R is treated as a brain portion; otherwise it is discarded. Hence for the eroded brain X_1 with more than one connected component (R_i, i = 1, ..., m), Eq. (8) for the brain area selection is modified as

X_2(x, y) = \begin{cases} 1 & \text{if } (x, y) \in R_i \text{ and } V_i > 95\% \text{ for some } i \in \{1, \ldots, m\} \\ 0 & \text{otherwise} \end{cases}    (13)

The value of 95% for V was obtained after running our algorithm on several data sets. To recover the pixels lost during erosion and thresholding, the dilation operation of Eq. (9) is performed on X_2. The dilated binary image is treated as the brain mask and is used to produce the final brain portion.
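The region-wise overlap test of Eqs. (12) and (13) could be sketched as below, again with scipy.ndimage.label standing in for the run length labeling of [19]; the 95% default follows the text, everything else is our illustration (the spatial eye-removal test described in the next paragraph is not included):

```python
import numpy as np
from scipy.ndimage import label

def overlap_test(eroded: np.ndarray, prev_mask: np.ndarray,
                 v_min: float = 0.95) -> np.ndarray:
    """Keep every eroded region that overlaps the previous slice's brain
    mask by more than v_min of its own area (Eqs. (12) and (13))."""
    labels, n = label(eroded)
    selected = np.zeros_like(eroded, dtype=bool)
    prev = prev_mask.astype(bool)
    for i in range(1, n + 1):
        region = labels == i
        v = np.logical_and(region, prev).sum() / region.sum()   # Eq. (12)
        if v > v_min:
            selected |= region
    return selected
```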

For the lower slices, in addition to the overlap ratio measure, spatial information about the region within the slice is also used to select the brain region. This is done to avoid the eyes and other non-brain regions even when they satisfy the overlap ratio. Removal of the eye portions from an MRI slice is the most difficult task for any BEA. For removing them, a knowledge based test is performed using the spatial information of the overlapped regions and the brain mask of the previous slice. The overlapped regions that lie entirely within the upper half of the current slice, together with the rectangle that bounds the brain mask of the previous slice, are considered for the spatial analysis. First the location of the bounding rectangle within the previous slice is checked; its top border should lie within the upper half of the previous slice. If so, the location of the overlapped region is then tested. If it lies entirely within the upper left (quadrant II) or upper right (quadrant I) region of the bounding rectangle, it is marked as a non-brain region and removed from the final brain portion; otherwise it is selected as a brain region. This spatial knowledge was obtained after testing our algorithm on several lower slices of different MR brain volumes.

The 3D-BEA starts with the centre slice of each brain volume, at approximately the Z/2 position, where Z is the total number of slices to be processed. This centre slice contains the brain as a single connected region, where the LCC works without ambiguity. The axial brain volume is divided into two halves: lower slices (LS) between the centre slice and the neck, and upper slices (US) between the centre slice and the top of the head. Our processing propagates from the centre slice to the LS and then from the centre slice to the US, one direction at a time, and produces the brain mask of each slice. These brain masks are used to produce the final brain portion of each T2-weighted image in the volume.
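A schematic driver for this outward propagation from the centre slice, consistent with the flowchart in Fig. 5 and reusing the sketches above (all names and defaults are ours), might look like:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def bea_3d(volume: np.ndarray, j_min: float = 0.70) -> np.ndarray:
    """3D-BEA: propagate brain masks outward from the centre slice.
    `volume` has shape (Z, H, W); returns one binary mask per slice."""
    z = volume.shape[0]
    masks = np.zeros(volume.shape, dtype=bool)
    stel = make_stel(9)
    centre = z // 2
    masks[centre], _ = bea_2d(volume[centre])      # unambiguous LCC here
    # Lower slices (towards the neck) and upper slices (towards the head),
    # one direction at a time, always comparing with the previous mask.
    for direction in (range(centre - 1, -1, -1), range(centre + 1, z)):
        prev = masks[centre]
        for y in direction:
            mask, _ = bea_2d(volume[y])
            if jaccard_index(mask, prev) < j_min:
                # LCC selection looks wrong: redo Stage 2 with the overlap test.
                diffused = anisotropic_diffusion(low_pass_filter(volume[y]), 3, 60.0)
                eroded = binary_erosion(diffused >= ridler_threshold(diffused),
                                        structure=stel)
                mask = binary_dilation(overlap_test(eroded, prev), structure=stel)
            masks[y] = mask
            prev = mask
    return masks
```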

3. Materials

Twenty data sets of normal and abnormal subjects were used in our experiments. The details of the data sets are given in Table 1. The first ten data sets (v01–v10) consist of 2 normal volumes and 8 abnormal volumes, including brain tumor (neoplastic disease) and multiple sclerosis cases, taken from the website 'The Whole Brain Atlas' (WBA) maintained by the Departments of Radiology and Neurology at Brigham and Women's Hospital, Harvard Medical School, the Library of Medicine and the American Academy of Neurology. The remaining ten data sets (v11–v20) were taken from KGS Advanced MR and CT Scans, Madurai, Tamilnadu, India, and were acquired on a 1.5 T Siemens machine from 8 normal subjects and 2 abnormal subjects affected by tumors. The dimension of each of the volumes v01–v10 is 256×256 pixels, the slice thickness varies from 3 to 5 mm, and the field of view is 260 mm; hence the pixel dimension is fitted to 1 mm × 1 mm. The slices taken from the 1.5 T Siemens machine have a dimension of 448×512 pixels and 5 mm thickness (except v18: 3 mm) with a 40–50% inter-slice gap. The field of view is fixed at 210×240 mm and hence the pixel dimension is set to 0.47 mm × 0.47 mm. Naturally, the voxel dimensions are anisotropic in all our data sets.

Fig. 5. Flowchart of our 3D-BEA. (Input: a T2-weighted MRI volume with Z slices. The middle slice U is processed by 2D-BEA to give the mask Q(U). Processing then proceeds slice by slice through the upper slices (Y = Y + 1 until Y > Z) and the lower slices (Y = Y − 1 until Y = 0). For each slice Y, 2D-BEA produces Q(Y), the Jaccard index J between the previous mask P and Q(Y) is computed, and if J < 70 the mask Q(Y) is regenerated by the overlap test (OT) procedure. Output: the final brain volume generated using the saved masks Q(Y).)

4. Results and discussions

For evaluating the performance of our BEAs we applied both 2D-BEA and 3D-BEA to the 20 T2-weighted volumes and extracted the brain portions. For illustration, the results obtained by applying 2D-BEA to v03 (Fig. 6; 7 slices per row), an abnormal brain volume of a 51 year old woman containing 56 slices (the 56th slice being empty), are shown in Fig. 7. In 2D-BEA, the LCC is taken as the brain area in each slice, which can mislead the method into selecting non-brain portions as well. Such wrong extractions can be seen in the first 9 slices and the last 6 slices (last row) of Fig. 7. The application of our 3D-BEA to the same data set is shown in Fig. 8. It can be seen in Fig. 8 that the correct brain regions (slices 7–9 and 50) are identified and extracted. Note that it even selects disconnected brain regions such as the temporal lobes and cerebellum (slices 14–17).

For comparing the performance of our 2D-BEA and 3D-BEA with that of BET and BSE, we considered the data set v13, which has 19 slices. For BET we used MRIcro 1.40 and for BSE we used BrainSuite 2.0. The parameter settings for each BEA are given in Table 2. For both BET and BSE the parameters were selected so as to get the best results. The extracted brain portions are shown in Fig. 9 for slices 1–10 (LS) and in Fig. 10 for slices 11–19 (US). It can be seen in Fig. 9 that our 2D-BEA was unable to extract the correct brain portion in the first 4 slices: it over-extracted the brain area, while BET under-extracted the brain and BSE completely failed to detect the brain portion. Our 3D-BEA gives much better and satisfactory results for these first 4 slices. For slices 5–10, both of our methods, 2D- and 3D-BEA, give better results than BET and BSE. For the upper slices (Fig. 10) from 11 to 16, both of our methods give accurate results, whereas BET and BSE remove brain area as non-brain area. For slices 17–19, 3D-BEA and BET extract the brain accurately, 2D-BEA fails in slice 19, and BSE could not detect the brain in any of the three slices. This qualitative evaluation was carried out on all our data sets and showed that 3D-BEA gives more acceptable and accurate performance than the 2D-BEA, BET and BSE methods.

Manual segmentation masks were unavailable for the images collected from the WBA website and the KGS scan centre; therefore, the results were evaluated qualitatively by visual inspection by neurological experts. The experts inspected the final outputs of our BEAs separately and confirmed that the results of 3D-BEA are acceptable, and all of them appreciated its performance over 2D-BEA and the other methods.

The average processing times of the proposed 2D-BEA and 3D-BEA methods on the WBA data sets were approximately 0.85 and 0.92 s/slice, respectively; on the KGS data sets they were approximately 3.5 and 4 s/slice, respectively.

Table 1. Details of the data sets used.

Dataset | Volume identity | Gender | Age | Clinical                          | Total slices
1       | v01             | Female | 81  | Normal                            | 54
2       | v02             | Female | 76  | Normal                            | 43
3       | v03             | Female | 51  | Anaplastic astrocytoma            | 56
4       | v04             | Male   | 35  | Astrocytoma                       | 29
5       | v05             | Male   | 62  | Metastatic adenocarcinoma         | 24
6       | v06             | Female | 42  | Metastatic bronchogenic carcinoma | 24
7       | v07             | Male   | 75  | Meningioma                        | 27
8       | v08             | Male   | 22  | Sarcoma                           | 24
9       | v09             | Male   | 30  | Multiple sclerosis                | 24
10      | v10             | –      | –   | Multiple sclerosis                | 54
11      | v11             | Female | 18  | Normal                            | 19
12      | v12             | Female | 19  | Normal                            | 19
13      | v13             | Male   | 43  | Normal                            | 19
14      | v14             | Female | 32  | Normal                            | 19
15      | v15             | Male   | 45  | Normal                            | 19
16      | v16             | Male   | 51  | Normal                            | 19
17      | v17             | Female | 39  | Normal                            | 19
18      | v18             | Male   | 43  | Normal                            | 19
19      | v19             | Female | 55  | Tumor                             | 19
20      | v20             | Male   | 38  | Tumor                             | 19

Table 2. BEA parameters used with the existing and proposed methods.

BEA                | Fixed parameter                | Value
BET (existing)     | Fractional intensity threshold | 0.5 (default)
                   | Threshold gradient             | 0.0 (default)
BSE (existing)     | Diffusion iteration (n)        | 3
                   | Diffusion constant (k)         | 100
                   | Edge constant                  | 0.5
                   | Erode size                     | 3
2D-BEA (proposed)  | LPF kernel size                | 3
                   | Diffusion iteration (n)        | 3
                   | Diffusion constant (k)         | 60
                   | Morphological element (STEL)   | O_9 (Fig. 2(b))
3D-BEA (proposed)  | LPF kernel size                | 3
                   | Diffusion iteration (n)        | 3
                   | Diffusion constant (k)         | 60
                   | Morphological element (STEL)   | O_9 (Fig. 2(b))
                   | Overlap index (J)              | 70%
                   | Overlap ratio (V)              | 95%


The experiments were performed on a 1.73 GHz Intel Pentium dual-core processor with 1 GB RAM, running Windows XP, using Matlab 6.5. BET and BSE took less than 1 min per data set, but the excess portions included by BET and the over-extraction by BSE are unavoidable. This shows that manual intervention is required to refine the final results produced by BET and BSE.

The predefined ranges of J used in our experiments are given in Table 3. These values were computed using the brain size and the slice thickness including the inter-slice gap. T2-weighted scans usually consist of strongly anisotropic voxels acquired as thick slices. In clinical practice, a thickness of 3 or 5 mm is generally used for T2-weighted scans, with 40–50% of the thickness as inter-slice gap. Hence for T2-weighted slices the slice thickness including the inter-slice gap ranges from 3 to 7.5 mm. From an analysis of MR axial T2-weighted head scans generated in clinical practice, the brain size in the axial direction varies from 120 to 150 mm. MR head scans of several subjects, varying in sex, age and disease and produced by Siemens and GE machines, were used for analysing and defining the relationship between J, slice thickness and brain size given in Table 3.

The sole limitation of our study is that the comparison between the results of the existing and proposed methods was done only qualitatively, in the form of visual inspection by medical experts, as there is no "gold standard" available for T2 images. For a quantitative evaluation, additional work has to be done, either to construct gold standards for the tested data sets and compare the results against them, or to design a statistical procedure to test the quality of the extracted brain portion. Our future study will focus on these issues to demonstrate quantitatively the robustness of our method over other existing BEA methods.

Fig. 6. A T2-weighted brain volume (v03) of an abnormal subject.

5. Conclusion

We have developed two brain extraction algorithms, 2D-BEA and 3D-BEA, to extract the brain portion from T2-weighted MRI head scans automatically. Neither of them requires any initial parameters to be supplied by the user. An evaluation based on qualitative analysis showed that the proposed 3D-BEA performs far better than the BET and BSE methods. The proposed methods work well on both normal and abnormal T2-weighted images. In the absence of a "gold standard" for T2 images this study lacks a quantitative analysis, which will be taken up as a separate study in the future.

6. Summary

This work presents two brain extraction algorithms (BEAs) for T2-weighted MRI axial head scans. T2-weighted MRI axial scans are best for analyzing the pathological changes occurring in the human brain; the standard scanning procedure is needed only if some abnormality is found in the T2-weighted axial scans.

Brain extraction is an essential preprocessing tool for several computer-aided brain processing techniques such as brain tissue segmentation, brain tumor/lesion detection, brain image compression and registration. This tool helps to speed up computer-aided diagnosis and to produce accurate results. Most popular existing brain extraction algorithms focus on T1-weighted scans, were developed using the properties of T1-weighted MRI scans, and fail to work on T2-weighted scans. Only a very few BEAs, such as BET, BSE and 3dIntracranial, work with T2-weighted scans, but they too fail to give satisfactory results. Hence we have developed two new BEAs for T2-weighted scans. Our methods make use of simple techniques like diffusion, thresholding and morphological operations to remove the non-brain tissues from T2-weighted head scans. Both 2D and 3D information are used.

Fig. 7. Brain portions extracted from v03 (Fig. 6) using 2D-BEA.

Initially, all scans are processed with a low pass filter to remove any background noise introduced at acquisition time. Next, an anisotropic diffusion process is applied to highlight the brain portion by suppressing the skull and scalp regions. A simple erosion operation is then performed to disconnect the brain from weakly connected regions, and region selection is done using a run length identification scheme to select the brain regions. Finally, a dilation operation is done to expand the brain by adding back the brain tissues lost during the previous operations. The Jaccard similarity index (J) is used to compare the extracted brain portions of adjacent slices, which makes the method adaptive.

The proposed methods were tested using twenty data sets containing both normal and abnormal volumes and work well on both. As no hand-stripped or manually extracted volumes were available, only qualitative validation could be performed. The results of the automated brain extraction algorithms were visually analyzed by several experts from the radiology and neurology fields, who confirmed that the results are adequate for further processing.

Experimental results show that the proposed 3D-BEA extracts the entire brain even when its parts are separated by fissures. It also eliminates non-brain tissues such as the eyes and scalp even when they have intensities similar to brain tissue. The performance of the proposed methods is found to be better than that of the well known methods BET and BSE. The proposed methods are automatic and hence can be used as part of any automatic brain image processing system.

Fig. 8. Brain portions extracted from v03 given in Fig. 6 using 3D-BEA.


The extraction of the brain from T2-weighted data sets reduces the file size of the MRI and thus decreases the transmission time in a network application. Therefore, stripping the skull in MRI facilitates fast access to an expert doctor even across hospitals in remote places. Hence our method may find a place in telemedicine technologies, particularly in developing countries like India.

Fig. 9. Brain extraction result for slices 1–10 (LS) of v13. Row 1 shows original slices. Row 2 shows the brain extracted by 2D-BEA, row 3 by 3D-BEA, row 4 by BET and row 5 by BSE.

Fig. 10. Brain extraction result for slices 11–19 (US) of v13. Row 1 shows original slices. Row 2 shows the brain extracted by 2D-BEA, row 3 by 3D-BEA, row 4 by BET and row 5 by BSE.


Conflict of interest statement

None declared.

Acknowledgements

The authors wish to thank Dr. K.G. Srinivasan, MD, RD, Consultant Radiologist, and Dr. K.P. Usha Nandhini, DNB, KGS Advanced MR & CT Scan, Madurai, Tamilnadu, India, and Dr. N. Karunakaran, DMRD, DNB, Consultant – Radiodiagnosis, Meenakshi Mission Hospital and Research Centre, Madurai, Tamilnadu, India, for providing the MR head scans and for giving the qualitative validation. The authors also wish to thank Dr. R. Durairaj, MD (Paed), DM (Neuro), Consultant Neurologist, Department of Neurology, and Dr. K. Selvamuthukumaran, MCh (Neuro), Sr. Consultant, Department of Neuro Surgery, Meenakshi Mission Hospital and Research Centre, Madurai, Tamilnadu, India, Dr. S.P. Balachandran, MD, DM (Neuro), Neurologist, Dindigul Neuro Centre, Dindigul District, Tamilnadu, and Dr. R.S. Jayasree, Scientist, Sree Chitra Tirunal Institute for Medical Science and Technology (SCTIMST), Thiruvananthapuram, Kerala, India, for their help in verifying the results. The authors thank Louis Lemieux, Professor, Department of Clinical and Experimental Epilepsy, University College London, Stephen M. Smith, Professor, FMRIB Centre, Oxford University, and David Rottenberg, Professor, Department of Neurology and Radiology, University of Minnesota, for their suggestions through personal correspondence. The authors wish to thank the unknown referees for their constructive suggestions, which resulted in presenting this paper in a clear and precise manner.

This work is catalysed and funded by the Science for Equity, Empowerment and Development (SEED) Division, Department of Science and Technology (DST), Government of India, New Delhi, Grant no. SP/YO/011/2007, under the Scheme for Young Scientists and Professionals (SYSP).

References

[1] J. Ashburner, K.J. Friston, Voxel based morphometry: the methods, NeuroImage 11 (2000) 805–821.

[2] S.M. Smith, Fast robust automated brain extraction, Human Brain Mapping 17 (2002) 143–155.

[3] D.W. Shattuck, S.R. Sandor-Leahy, K.A. Schaper, D.A. Rottenberg, R.M. Leahy, Magnetic resonance image tissue classification using a partial volume model, NeuroImage 13 (5) (2001) 856–876.

[4] B.D. Ward, Intracranial Segmentation, Biophysics Research Institute, Medical College of Wisconsin, Milwaukee, WI, 1999.

[5] H. Hahn, H.O. Peitgen, The skull stripping problem in MRI solved by a single 3D watershed transform, in: Proceedings of MICCAI, LNCS 1935, 2000, pp. 134–143.

[6] A.H. Zhuang, D.J. Valentino, A.W. Toga, Skull-stripping magnetic resonance brain images using a model-based level set, NeuroImage 32 (1) (2006) 79–92.

[7] L. Lemieux, G. Hagemann, K. Krakow, F.G. Woermann, Fast, accurate, and reproducible automatic segmentation of the brain in T1-weighted volume MRI data, Magnetic Resonance in Medicine 42 (1) (1999) 127–135.

[8] M.S. Atkins, B.T. Mackiewich, Fully automatic segmentation of the brain in MRI, IEEE Transactions on Medical Imaging 17 (1) (1998) 98–107.

[9] K. Boesen, L. Rehm, K. Schaper, S. Stoltzner, R. Woods, E. Luders, D. Rottenberg, Quantitative comparison of four brain extraction algorithms, NeuroImage 22 (2004) 1255–1261.

[10] D.E. Rex, D.W. Shattuck, R.P. Woods, K.L. Narr, E. Luders, K. Rehm, S.E. Stolzner, D.A. Rottenberg, A.W. Toga, A meta-algorithm for brain extraction in MRI, NeuroImage 23 (2004) 625–637.

[11] F. Segonne, A.M. Dale, E. Busa, M. Glessner, D. Salat, H.K. Hahn, B. Fischl, A hybrid approach to the skull stripping problem in MRI, NeuroImage 22 (2004) 1060–1075.

[12] J.R. Hesselink, Basic Principles of MR Imaging, Department of Radiology, University of California, San Diego, 2007. <http://spinwarp.ucsd.edu/NeuroWeb/Text/br-100.htm> (accessed 12 November 2007).

[13] C. Fennema-Notestine, I.B. Ozyurt, C.P. Clark, S. Morris, A. Bischoff-Grethe, M.W. Bondi, T.L. Jernigan, B. Fischl, F. Segonne, D.W. Shattuck, R.M. Leahy, D.E. Rex, A.W. Toga, K.H. Zou, G.G. Brown, Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location, Human Brain Mapping 27 (2) (2006) 99–113.

[14] S.W. Hartley, A.I. Scher, E.S.C. Korf, L.R. White, L.J. Launer, Analysis and validation of automated skull stripping tools: a validation study based on 296 MR images from the Honolulu Asia aging study, NeuroImage 30 (2006) 1179–1186.

[15] J.M. Lee, J.H. Kim, I.Y. Kim, J.S. Kwon, S.I. Kim, Evaluation of automated and semi-automated skull-stripping algorithms: similarity index and segmentation error, Computers in Biology and Medicine 33 (6) (2003) 495–507.

[16] M.E. Brummer, R.M. Mersereau, R.L. Eisner, R.J. Lewine, Automatic detection of brain contours in MRI data sets, IEEE Transactions on Medical Imaging 12 (2) (1993) 153–166.

[17] S.P. Raya, Low-level segmentation of 3-D magnetic resonance brain images—a rule based system, IEEE Transactions on Medical Imaging 9 (1990) 327–393.

[18] P. Perona, J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (7) (1990) 629–639.

[19] M. Sonka, V. Hlavac, R. Boyle, Image Processing, Analysis, and Machine Vision, second ed., Brooks/Cole Publishing Company, 1999.

[20] P. Jaccard, The distribution of the flora in the alpine zone, New Phytologist 11 (2) (1912) 37–50.

Somasundaram K was born in 1953. He received the MSc degree in Physics from the University of Madras, Chennai, India, in 1976, the Post Graduate Diploma in Computer Methods from Madurai Kamaraj University, Madurai, India, in 1989, and the PhD degree in theoretical Physics from the Indian Institute of Science, Bangalore, India, in 1984. He is presently Professor and Head of the Department of Computer Science and Applications, and Head of the Computer Centre, at Gandhigram Rural Institute, Gandhigram, India. From 1976 to 1989 he was a Professor in the Department of Physics at the same Institute. He was previously a Researcher at the International Centre for Theoretical Physics, Trieste, Italy, and a Development Fellow of Commonwealth Universities at the School of Multimedia, Edith Cowan University, Australia. His research interests are in image processing, image compression and medical imaging. He is a Life member of the Indian Society for Technical Education and an annual member of ACM, USA, and the IEEE Computer Society, USA.

Kalaiselvi T was born in Tamilnadu, India, in 1974. She received her Bachelor of Science (BSc) degree in Mathematics and Physics in 1994 and Master of Computer Applications (MCA) degree in 1997 from Avinashilingam University, Coimbatore, Tamilnadu, India. From 1997 to 1998 she was a Lecturer in the Department of Computer Science, Avinashilingam University, India. Since 1998 she has been with Gandhigram Rural Institute, Dindigul, Tamilnadu, India, where she was a Lecturer in the Department of Computer Science and Applications until 2006. From 2006 she was a full time research scholar in the same department and was awarded the PhD degree in February 2010. In 2008 she received a project from the Department of Science and Technology (DST), Government of India, under the Scheme for Young Scientists and Professionals (SYSP), Science for Equity, Empowerment and Development (SEED) Division, for 3 years (2008–2011). She is now a young scientist, working as the Principal Investigator (PI) of the sanctioned project in the same department. Her research focuses on brain image processing and brain tumor or lesion detection from MR head scans to enrich the computer aided diagnostic process, telemedicine and teleradiology services. She is a member of ACM, India.

Table 3. The Jaccard coefficient threshold (J) computed from the slice thickness (inter-slice gap included) and the brain size. In the axial direction the brain size ranges from 120 to 150 mm.

Slice thickness (including inter-slice gap) | Similarity percentage used (J)
5.5 mm and above (up to 7.5 mm)             | Above 65%
3 mm and above (up to 5.4 mm)               | Above 70%
2 mm and above (up to 2.9 mm)               | Above 75%
1 mm and above (up to 1.9 mm)               | Above 80%
