
Brain Disorder Detection using Artificial Neural Network

A. RajaRajan

Assistant Professor, Department of Computer Science and Engineering, PRIST University, Vallam, Thanjavur.

Email: [email protected]

Abstract - This paper describes an XML-based language for building artificial neural network (ANN) applications. We show how Neural XML (NXML) can be used for an 'intelligent' task, namely identifying images based on various criteria, with an interesting 'pseudo' brain disorder detection example. The biological brain is the model on which artificial neural networks are based. Thus far, artificial neural networks have not come close to modeling the complexity of the brain, but they have proven to be good at problems which are easy for a human but difficult for a traditional computer, such as image recognition and prediction based on past knowledge. The algorithm we use here is back-propagation (BPN), which is well suited to pattern recognition problems. In this study we consider a perceptron-based feed-forward neural network for the detection of brain disorder.

Keywords: Artificial Neural Network, XML, Neural XML, BPN.

I. INTRODUCTION

One type of network treats its nodes as 'artificial neurons'; such networks are called artificial neural networks (ANNs). An artificial neuron is a computational model inspired by natural neurons. Natural neurons receive signals through synapses located on the dendrites or membrane of the neuron. When the signals received are strong enough, the neuron is activated and emits a signal through the axon. This signal may be sent to another synapse and may activate other neurons. An artificial neural network is a system based on the operation of biological neural networks. Although computing today is truly advanced, there are certain tasks that a program written for a common microprocessor cannot perform well. Artificial neural networks are among the newest signal-processing technologies in the engineer's toolbox. An artificial neural network is an adaptive, most often nonlinear, system that learns to perform a function (an input/output map) from data. [5] The nonlinear nature of the neural network's processing elements (PEs) gives the system great flexibility to achieve practically any desired input/output map. [1] An artificial neural network consists of a pool of simple processing units which communicate by sending signals to each other over a large number of weighted connections.
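The paper gives no equation for the artificial neuron; a standard formulation (our addition, not taken from the source) computes a weighted sum of the inputs and passes it through an activation function:

$$y = \varphi\!\left(\sum_{i=1}^{n} w_i x_i + \theta\right),$$

where the $x_i$ are the inputs, the $w_i$ are the connection weights, $\theta$ is a bias term, and $\varphi$ is the activation (transfer) function.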

II. ARCHITECTURE OF ARTIFICIAL NEURAL NETWORK

The basic architecture consists of three types of neuron layers: input, hidden, and output. As shown in Figure 1, in feed-forward networks the signal flows from input to output units strictly in the forward direction. The data processing can extend over multiple (layers of) units, but no feedback connections are present. A neural network has to be configured such that the application of a set of inputs produces the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to train the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.

    Figure 1. A Feed Forward Network.

Here we use feed-forward networks to perform the operations. [2] Feed-forward networks have the following characteristics:

Neurons are arranged in layers, with the first layer taking in inputs and the last layer producing outputs. The middle layers have no connection with the external world, and hence are called hidden layers.

Each neuron in one layer is connected to every neuron in the next layer. Information is thus constantly "fed forward" from one layer to the next, which explains why these networks are called feed-forward networks.

There are no connections among neurons in the same layer.

In neural networks, the most popular method of learning is called back-propagation. Learning in feed-forward networks belongs to the realm of supervised learning. In supervised learning, or associative learning, the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).
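In notation (ours, not the paper's), the training set is a collection of such pairs,

$$\{(\mathbf{x}^{(p)}, \mathbf{t}^{(p)})\}_{p=1}^{P},$$

where $\mathbf{x}^{(p)}$ is an input pattern and $\mathbf{t}^{(p)}$ is the matching target output supplied by the teacher.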

    2.1 Topology of a multi-layer perceptron / Back Propagation Algorithm


The principal importance of a neural network lies not only in the way a neuron is implemented but also in how the interconnections (more commonly called the topology) are made. The topology of a human brain is too complicated to be used as a model, because a brain is made of hundreds of billions of connections which cannot be effectively described using such a low-level (and highly simplified) model. The topology we study is therefore not the topology of a human (or even fruit-fly!) brain but a simple topology designed for easy implementation on a digital computer.

One of the simplest forms of this topology consists of three layers:

one input layer (the inputs of our network)
one hidden layer
one output layer (the outputs of our network)

The back-propagation network topology is shown in Figure 2. All neurons from one layer are connected to all neurons in the next layer.
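A consequence of this full connectivity is easy to quantify from the paper's own numbers: between a layer of $m$ neurons and a layer of $n$ neurons there are $m \times n$ weights (ignoring bias terms), so the 256-256-2 network built in Section V has $256 \times 256 + 256 \times 2 = 66{,}048$ weights to train.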

    Figure 2. Back-propagation Network topology

III. BPN TRAINING ALGORITHM

The following description assumes a pattern classification problem, since that is where the BP network has its greatest strength. However, back-propagation can be used for many other problems as well, including compression, prediction, and digital signal processing. When you present your network with data and find that the output is not as desired, what do you do? The answer is obvious: modify some connection weights. Since the network weights are initially random, the initial output value is likely to be very far from the desired output. We wish to improve the behavior of the network. Which connection weights must be modified, and by how much, to achieve this objective? To put it another way, how do you know which connection is responsible for the greatest contribution to the error in the output? Clearly, we must use an algorithm which efficiently modifies the different connection weights to minimize the errors at the output. This is a common problem in engineering, known as optimization. The famous LMS algorithm was developed to solve a similar problem; however, the neural network is a more generic system and requires a more complex algorithm to adjust its many parameters.

One algorithm which has contributed hugely to the fame of neural networks is the back-propagation algorithm. Its principal advantages are simplicity and reasonable speed (though several modifications can make it work faster). Back-propagation is well suited to pattern recognition problems.
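The paper states the algorithm only in words; for reference, the standard gradient-descent formulation (our addition, not taken from the source) minimizes the squared output error by moving each weight against its error gradient:

$$E = \frac{1}{2}\sum_{k}(t_k - o_k)^2, \qquad \Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}},$$

where $t_k$ are the desired outputs, $o_k$ the actual outputs, and $\eta$ is the learning rate.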

The training algorithm for a BPN consists of the following steps:

Selection and preparation of training data
Repetition
Running
Hazards

    3.1 Selection and Preparation of Training Data

A neural network is useless if it sees only one example of a matching input/output pair. It cannot infer the characteristics of the input data you are looking for from only one example; rather, many examples are required. The best training procedure is to compile a wide range of examples (for more complex problems, more examples are required) which exhibit all the different characteristics you are interested in. It is important to select examples which do not have major dominant features that are of no interest to you but are common to your input data anyway.

If possible, prior to training, add some noise or other randomness to your examples (such as a random scaling factor). This helps to account for noise and natural variability in real data, and tends to produce a more reliable network.
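For the binary pixel inputs used later in this paper, one simple way to realize this (our illustration; the source does not specify a noise model) is to flip each input bit independently with a small probability:

$$x_i' = x_i \oplus b_i, \qquad b_i \sim \mathrm{Bernoulli}(p), \quad p \ll 1,$$

so each training presentation sees a slightly perturbed version of the image.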

If you are using a standard unscaled sigmoid node transfer function, note that the desired output must never be set to exactly 0 or 1. The reason is simple: whatever the inputs, the outputs of the nodes in the hidden layer are restricted to between 0 and 1 (these values are the asymptotes of the function). Approaching these values would require enormous weights and/or input values, and, most importantly, they can never be exceeded. By contrast, setting a desired output of (say) 0.9 allows the network to approach and ultimately reach this value from either side, or indeed to overshoot. This allows the network to converge relatively quickly. It is unlikely ever to converge if the desired outputs are set too high or too low.
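The paper does not write the transfer function out; the standard unscaled sigmoid it refers to is

$$\sigma(x) = \frac{1}{1 + e^{-x}},$$

which approaches, but never reaches, its asymptotes 0 (as $x \to -\infty$) and 1 (as $x \to +\infty$). This is why targets such as 0.1 and 0.9 are preferred over exact 0 and 1.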

    Once again, it cannot be overemphasized: a neural network is only as good as the training data! Poor training data inevitably leads to an unreliable and unpredictable network. Having selected an example, we then present it to the network and generate an output.

    3.2 Repetition

    Since we have only moved a small step towards the desired state of a minimized error, the above procedure must be repeated many times until the MSE drops below a specified value. When this happens, the network is performing satisfactorily, and this training session for this particular example has been completed.
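The stopping criterion can be written (our notation) as

$$\mathrm{MSE} = \frac{1}{P}\sum_{p=1}^{P}\left\| \mathbf{t}^{(p)} - \mathbf{o}^{(p)} \right\|^2 < \varepsilon,$$

where the sum runs over the $P$ training presentations, $\mathbf{t}^{(p)}$ and $\mathbf{o}^{(p)}$ are the desired and actual outputs, and $\varepsilon$ is the specified error threshold.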


Once this occurs, randomly select another example and repeat the procedure. Continue until you have used all of your examples many times ('many' may be anywhere between twenty or fewer and ten thousand or more, depending on the particular application, the complexity of the data, and other parameters).

    3.3 Running

    Finally, the network should be ready for testing. While it is possible to test it with the data you have used for training, this isn't really telling you very much. Instead, get some real data which the network has never seen and present it at the input. Hopefully it should correctly classify, compress, or otherwise process (however you trained it!) the data in a satisfactory way.

    3.4 Hazards

A consequence of the back-propagation algorithm is that there are situations where it can get 'stuck'. Think of it as a marble dropped onto a steep road full of potholes. The potholes are 'local minima': they can trap the algorithm and prevent it from descending further. If this happens, you can resize the network (add extra hidden-layer nodes, or even remove some) or try a different starting point (i.e., randomize the network again). Some enhancements to the BP algorithm have been developed to get around this; for example, one approach adds a momentum term, which essentially makes the marble heavier, so it can escape from small potholes. Other approaches may use alternatives to the mean squared error as the measure of how well the network is performing.
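The momentum enhancement mentioned above is conventionally written (a standard form, not given in the paper) as

$$\Delta w_{ij}(t) = -\eta\,\frac{\partial E}{\partial w_{ij}} + \alpha\,\Delta w_{ij}(t-1),$$

where $\alpha \in [0,1)$ is the momentum coefficient; the accumulated previous weight change carries the search through small local minima, which is the "heavier marble" of the analogy.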

IV. THE SCHEME OF NEURAL MODEL DESCRIPTION FILE PROCESSING

An instance of the processing of neural model description files is presented in Figure 3. XML was chosen as the tool for creating an electronic format for describing neural network models. [4] XML notation permits the description of structured data and is widely used as a universal basis for developing specialized description languages for objects of various kinds. An advantage of using XML is standardized access to the data stored in XML documents, independent of their structure and subject area.

    Figure 3. The scheme of the neural model description files processing.

V. METHOD OF BRAIN DISORDER DETECTION

We use the 13 images shown in Figure 4 to train the network. According to the density of white pixels in a corner, we assign a value, or condition (0, 1, 2, or 3), to each image, starting counter-clockwise from the bottom left corner. The value of each image is shown below.

    Figure 4. Sample images of Brain

We use the density of white pixels in a particular corner to detect the possibility of a brain tumor. For example, the first image has more white pixels at the bottom left. [5] We assume that:

If the density of white is at the bottom left corner, the condition is 0.

If the density of white is at the bottom right, the condition is 1.

If the density of white is at the top right, the condition is 2.

If the density of white is at the top left, the condition is 3.

Let us define the following possibilities:

Condition 0 - No problem, the brain has no tumor.
Condition 1 - Somewhat infected.
Condition 2 - Infected.
Condition 3 - Critical; the brain should be changed.

The assumptions and conditions are shown in Table 1.

Density of white pixels   Condition   Defect
Bottom left               0           No problem
Bottom right              1           Somewhat infected
Top right                 2           Infected
Top left                  3           Critical

Table 1. Assumptions and conditions.

The following are the steps involved in detecting the brain disorder:


Creating a neural network
Training the neural network
Running the neural network

    5.1 Creating a Neural Network


A neural network can be created using the NXML tool. [4] To perform an action on a neural network, you must first load it. Using NXML, you can specify which network to load, what operation to perform on the network, and so on.

To create a network, we must determine:

the number of neurons in the input layer;
the number of neurons in the hidden layer, and the number of hidden layers;
the number of neurons in the output layer.

Let us see how to decide these.

Neurons in the input layer - We provide a 16 x 16 image as input. When we digitize this picture, we see that there are 16 x 16 = 256 pixels in each image. So let us take 256 neurons in the input layer; we feed the value of each pixel (1 for a white pixel, 0 for a black one) to each neuron in the input layer.

Neurons in the output layer - We have four conditions: condition 0 (00 in binary), condition 1 (01 in binary), condition 2 (10 in binary), and condition 3 (11 in binary). So let us take two neurons in the output layer, because the highest output value we need (i.e., 3) requires two bits to represent it.

Neurons in the hidden layer - Let us assume we have one hidden layer, with the same number of neurons as the input layer.
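In symbols (our notation, not the paper's), each image becomes a 256-component binary input vector and each condition a 2-bit target:

$$\mathbf{x} \in \{0,1\}^{256}, \qquad x_{16r+c} = \begin{cases} 1 & \text{if pixel } (r,c) \text{ is white} \\ 0 & \text{otherwise} \end{cases}, \qquad \mathbf{t} \in \{0,1\}^{2},$$

with $r, c \in \{0, \dots, 15\}$ and, for example, condition 2 encoded as $\mathbf{t} = (1, 0)$.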

    5.2 Training the Neural Network

A neural network has to be configured such that the application of a set of inputs produces the desired set of outputs. As noted earlier, one way is to set the weights explicitly using a priori knowledge; another is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule. [3]

For training the neural network, create an NXML file, train.n.xml. [4] All images are in the samples folder, relative to the folder where train.n.xml resides. The 'Network' tag is used to load an XML file from disk. It has the following attributes:

LoadPath - the path of the XML file which holds our network; in this case, densitydetect.xml.

SaveOnFinish - specifies whether the network should be saved after a DataBlock operation is performed.

SavePath - the file path to which the network is saved, here density.xml.

A DataBlock tag specifies which operation we are going to perform. It has the following attributes:

Type - the type of operation to perform; it can be either Train or Run.

TrainCount - the number of times the operation should be performed; it is not required if the Type attribute is Run.

A sketch of a complete train.n.xml combining these attributes follows.
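The Network and DataBlock tags and the attributes above are named in the text; the element nesting and the per-image Item/InputValues/OutputValues syntax below are assumptions about the NXML schema, shown only as an illustrative sketch:

    <!-- train.n.xml: hypothetical sketch; only the Network/DataBlock tags
         and their attributes are taken from the paper's description -->
    <Network LoadPath="densitydetect.xml"
             SaveOnFinish="true"
             SavePath="density.xml">
      <!-- TrainCount value is illustrative -->
      <DataBlock Type="Train" TrainCount="1000">
        <!-- one entry per training image; element and file names assumed -->
        <Item InputValues="samples/brain01.bmp" OutputValues="0,0" />
        <Item InputValues="samples/brain02.bmp" OutputValues="0,1" />
        <!-- ... the remaining sample images ... -->
      </DataBlock>
    </Network>

Here OutputValues would carry the two-bit target for the image's condition (e.g., "1,0" for condition 2).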

    5.3 Running the Neural Network

Create a file, run.n.xml, to run the trained network. The only change is that the Type of the DataBlock is changed from 'Train' to 'Run'. Also, we do not provide any OutputValues, because we expect the network to predict and write these output values. Once the network is trained, we can use it to check image samples.
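A matching run.n.xml, under the same schema assumptions as the training sketch above, might look like this; note the Type change and the absence of OutputValues:

    <!-- run.n.xml: hypothetical sketch under the same assumed schema -->
    <Network LoadPath="density.xml" SaveOnFinish="false">
      <DataBlock Type="Run">
        <!-- OutputValues omitted: the network predicts and writes them -->
        <Item InputValues="samples/test1.bmp" />
        <Item InputValues="samples/test2.bmp" />
        <Item InputValues="samples/test3.bmp" />
      </DataBlock>
    </Network>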

Here we have three images for testing. Based on the density of white pixels, the network provides the corresponding result for each.

    Test 1:

    Test 2:

    Test 3:

In the image Test 1, the density of white is greatest at the top right, so based on the conditions we set, it belongs to condition 2 and the corresponding result is Infected. The same testing is carried out on the other two images, which give the results Somewhat infected and Critical respectively.

VI. CONCLUSION

A simple approach to brain disorder detection using artificial neural networks has been described. Despite the computational complexity involved, artificial neural networks offer several advantages. Our future work will focus on the use of ANNs for detecting various other problems in the human brain.

REFERENCES

[1] Leighton, R. R. The Aspirin/MIGRAINES Neural Network Software: User's Manual. MITRE Corp., 1992. 119 p.

[2] Fiesler, E. Neural network classification and formalization. Computer Standards and Interfaces, vol. 16, Elsevier Science Publishers, Amsterdam, 1994. 13 p.

    [3] Atencia, M. A., G. Joya and F. Sandoval. A formal model for definition and simulation of generic neural networks. Neural Processing Letters 11: 87-105, 2000. Kluwer Academic Publishers.

    [4] Extensible Markup Language (XML) 1.0. W3C Recommendation. http://www.w3.org/TR/1998/REC-xml-19980210.


[5] Taylor, M.; Lisboa, P. Techniques and Applications of Neural Networks. Ellis Horwood, 1993.

BIOGRAPHY

A. RajaRajan is an Assistant Professor of Computer Science and Engineering at PRIST University, Thanjavur. He received his B.Sc. in Physics from Bharathidasan University, Trichy, and his Master of Computer Applications from Anna University, Chennai. He has presented papers at various national and international conferences. His major interests are machine learning and neural network learning paradigms.
