
Neural Computing

• Definition: Neural computing is the study of nets of adaptable nodes which, through a process of learning from task examples, store experiential knowledge and make it available for use.

A Simple Adaptable Node (TAN)

• A device for explaining what any adaptable node has to do, called the Toy Adaptive Node (TAN).

A Simple Adaptable Node (TAN)

• The node may operate in two modes:

• 1- It can be taught, during which time its function can change.

• 2- It can be used, during which time its function cannot change.

• The mode is selected by setting the Teach/Use terminal to 1 for teaching or to 0 for using.

A Simple Adaptable Node

• For example, a 3-input node is taught to output 1 when the input (X1, X2, X3) is 111 or 101, and to output 0 when the input is 000 or 001. The truth table is:

X1:  0   0   0   0   1   1   1   1
X2:  0   0   1   1   0   0   1   1
X3:  0   1   0   1   0   1   0   1
OUT: 0   0   0/1 0/1 0/1 1   0/1 1

Firing Rules

• The word firing comes from the language of biologists interested in real neurons in the brain. A firing rule determines how one calculates whether the node should fire for any input pattern. In the simple TAN model, one can relate the 1 state of a node's output to firing, and the 0 state to the absence of firing.

Firing Rules

• The firing rule is an important concept in neural networks and accounts for their high flexibility. A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained.

Firing Rules

• Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set). A pattern not in the collection causes the node to fire if, on comparison, it has more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set. If there is a tie, the output remains in the undefined state.
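As an illustration, here is a minimal sketch of this firing rule in Python, applied to the 3-input example taught earlier; the function names are ours, and ties are left undefined as described above:

```python
# Minimal sketch of the TAN firing rule described above.
# A node fires on an untaught pattern if the pattern is closer (fewer
# differing input elements, i.e. Hamming distance) to the nearest
# 1-taught pattern than to the nearest 0-taught pattern.

def hamming(a, b):
    """Number of positions at which two bit-tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, taught_1, taught_0):
    """Return 1 (fire), 0 (no fire), or None (undefined) for a pattern."""
    d1 = min(hamming(pattern, p) for p in taught_1)
    d0 = min(hamming(pattern, p) for p in taught_0)
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return None  # tie: the output remains in the undefined state

# The 3-input example from the truth table above:
taught_1 = [(1, 1, 1), (1, 0, 1)]
taught_0 = [(0, 0, 0), (0, 0, 1)]
for x in [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1),
          (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]:
    print(x, firing_rule(x, taught_1, taught_0))
```

Applied to the truth table above, this resolves 010 to 0 and 110 to 1, while 011 and 100 remain undefined (ties).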

Pattern Recognition

• Pattern recognition can be implemented using a feed-forward network. The following figure shows a feed-forward network of three TANs.

Pattern Recognition

• An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward (figure 1) neural network that has been trained accordingly. During training, the network learns to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern. The power of neural networks comes to life when a pattern that has no output associated with it is given as an input. In this case, the network gives the output that corresponds to a taught input pattern that is least different from the given pattern.

Pattern Recognition

• Such a net is trained to recognize a T pattern, and an H pattern is used to train the net not to fire.

For the T pattern the outputs F1, F2, and F3 are black (1), i.e. the nodes fire, while for the H pattern the outputs F1, F2, and F3 are white (0), i.e. they do not fire.
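A sketch of how this net might be wired; the original figure is not reproduced here, so the one-TAN-per-row layout and the 3x3 images are assumptions for illustration:

```python
# Hypothetical wiring for the T-vs-H example above: each of the three
# TANs (F1..F3) sees one row of a 3x3 binary image and applies the
# nearest-pattern firing rule from the previous sketch.

T = [(1, 1, 1), (0, 1, 0), (0, 1, 0)]   # rows of the T image (taught: fire)
H = [(1, 0, 1), (1, 1, 1), (1, 0, 1)]   # rows of the H image (taught: don't fire)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def node_output(row, t_row, h_row):
    d1, d0 = hamming(row, t_row), hamming(row, h_row)
    return 1 if d1 < d0 else 0 if d0 < d1 else None  # None = undefined

def classify(image):
    """Per-row outputs F1, F2, F3 for a 3x3 image (list of three rows)."""
    return [node_output(r, t, h) for r, t, h in zip(image, T, H)]

print(classify(T))  # [1, 1, 1] -> all outputs black (fire)
print(classify(H))  # [0, 0, 0] -> all outputs white (no fire)
```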

Perceptron

• The simplest form of a perceptron (Rosenblatt, 1962) is shown in the following figure.

Perceptron

• Such a perceptron turns out to be a McCulloch and Pitts node with some additional, fixed preprocessing units called A1, A2, ..., Aj, ..., Ap (association units), whose task is to extract specific, localized features from an input image. In fact, perceptrons were presented as pattern recognition devices, although their applications may be as general as those of any neural net.

Threshold Logic Unit (TLU)

• According to a simplified account, the human brain consists of about ten billion neurons, and a neuron is, on average, connected to several thousand other neurons. By way of these connections, neurons both send and receive varying quantities of energy. One very important feature of neurons is that they don't react immediately to the reception of energy. Instead, they sum their received energies, and they send their own quantities of energy to other neurons only when this sum has reached a certain critical threshold. The brain learns by adjusting the number and strength of these connections. Even though this picture is a simplification of the biological facts, it is sufficiently powerful to serve as a model for the neural net.

Threshold Logic Unit (TLU)

• The simplest neuron model is the TLU. Its basic operation is to perform a weighted sum of its inputs and then output a '1' if this sum reaches its threshold, and a '0' otherwise.

Threshold logic units (TLUs)

Training a Single TLU

• TLU geometry and definition: an internal knowledge representation; an abstract computational tool that calculates

f(X) = f_s(W ⋅ X − θ)

where the input is X, the output is f(X), W is the weight vector, θ is the threshold, and the transfer function f_s(s) is a step function equal to 1 for s ≥ 0 and 0 otherwise.
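A minimal sketch of this TLU in Python, matching the definition above (the function name and the AND example are ours):

```python
# Minimal TLU forward pass: f(X) = f_s(W . X - theta), where f_s is a
# step function (1 if its argument is >= 0, else 0).

def tlu(x, w, theta):
    """Weighted sum of inputs, thresholded at theta."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s - theta >= 0 else 0

# Example: weights (1, 1) and threshold 1.5 give a 2-input AND.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, tlu(x, (1, 1), 1.5))
```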

Binary threshold neurons

• McCulloch-Pitts (1943): influenced Von Neumann!

• First compute a weighted sum of the inputs from other neurons.

• Then send out a fixed-size spike of activity if the weighted sum exceeds a threshold.

• Maybe each spike is like the truth value of a proposition, and each neuron combines truth values to compute the truth value of another proposition!

z = Σᵢ xᵢ wᵢ

y = 1 if z ≥ θ, and y = 0 otherwise

(The transfer function is a step: y jumps from 0 to 1 as z crosses the threshold.)

Sigmoid neurons

• These give a real-valued output that is a smooth and bounded function of their total input. Typically they use the logistic function

y = 1 / (1 + e^(−z))

• They have nice derivatives, e.g. dy/dz = y(1 − y), which make learning easy (see lecture 3).

• If we treat the output y as a probability of producing a spike, we get stochastic binary neurons.
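A small sketch of a sigmoid neuron following the formulas above, assuming the same weighted-input convention as the TLU (z = W ⋅ X − θ):

```python
import math
import random

def sigmoid(z):
    """Logistic function: smooth, bounded output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_neuron(x, w, theta):
    """Real-valued output of a sigmoid neuron, plus its 'nice' derivative."""
    z = sum(wi * xi for wi, xi in zip(w, x)) - theta
    y = sigmoid(z)
    dy_dz = y * (1.0 - y)   # dy/dz = y(1 - y): cheap to compute for learning
    return y, dy_dz

def stochastic_binary(y):
    """Treat y as the probability of producing a spike."""
    return 1 if random.random() < y else 0

y, dy = sigmoid_neuron((1, 0), (0.5, 0.5), 0.2)
print(y, dy, stochastic_binary(y))
```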

How a TLU learns

• Since TLUs can classify, they know stuff. Neural nets are also supposed to learn. Their learning mechanism is modeled on the brain's adjustments of its neural connections. A TLU learns by changing its weights and threshold. Actually, the weight-threshold distinction is somewhat arbitrary from a mathematical point of view. Recall that a TLU outputs 1 when Σᵢ xᵢ wᵢ ≥ θ, and 0 otherwise.
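The text says a TLU learns by changing its weights and threshold; as a concrete sketch, here is the classic perceptron learning rule, one standard way to do this (the learning rate, epoch count, and AND example are our assumptions, not from the slides):

```python
# Perceptron-style training for a TLU: after each example, nudge the
# weights (and threshold) in the direction that reduces the error.

def train_tlu(samples, n_inputs, lr=0.1, epochs=20):
    w = [0.0] * n_inputs
    theta = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            err = target - y                      # +1, 0, or -1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            theta -= lr * err                     # more output -> lower threshold
    return w, theta

# Example: learn AND from its truth table.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, theta = train_tlu(samples, n_inputs=2)
print(w, theta)
```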


History of Artificial Neural Networks (ANNs)

• Pre-1940: von Helmholtz, Mach, Pavlov, etc.
  - General theories of learning, vision, conditioning
  - No specific mathematical models of neuron operation

• 1940s: Hebb, McCulloch and Pitts
  - Hebb: explained a mechanism for learning in biological neurons
  - McCulloch and Pitts: first neural model

• 1950s: Rosenblatt, Widrow and Hoff
  - First practical networks (Perceptron and Adaline) and corresponding learning rules

• 1960s: Minsky and Papert
  - Demonstrated limitations of existing neural networks; new learning algorithms were not forthcoming, and most research was suspended

• 1970s: Amari, Anderson, Fukushima, Grossberg, Kohonen
  - Progress continues, although at a slower pace

• 1980s: Grossberg, Hopfield, Kohonen, Rumelhart, etc.
  - Important new developments cause a resurgence in the field (backpropagation algorithm)

History of Artificial Neural Networks (Details)

• McCulloch and Pitts (1943): first neural network model

• Hebb (1949): proposed a mechanism for learning, as increasing the synaptic weight between two neurons by repeated activation of one neuron by the other across that synapse (lacked the inhibitory connection)

• Rosenblatt (1958): Perceptron network and the associated learning rule

• Widrow & Hoff (1960): a new learning algorithm for linear neural networks (ADALINE)

• Minsky and Papert (1969): widely influential book about the limitations of single-layer perceptrons, causing research on NNs mostly to come to an end

• Some work that still went on:
  - Anderson, Kohonen (1972): use of ANNs as associative memory
  - Grossberg (1980): Adaptive Resonance Theory
  - Hopfield (1982): Hopfield network
  - Kohonen (1982): self-organizing maps

• Rumelhart and McClelland (1986): backpropagation algorithm for training multilayer feed-forward networks; started a resurgence in NN research

McCulloch and Pitts Model

The Computer Metaphor of Mind

Q: What is the computer metaphor of mind?

A: The brain is a computer.

George A. Miller

• Started off as a behaviourist.

• Interested in information theory.

• Helped develop the idea that the similarities between humans and machines were real.

McCulloch & Pitts – Neural Networks

Warren McCulloch

• A neuropsychiatrist.

• Provided the biological information necessary to appropriately model the behaviours of neurons.

Walter Pitts

• A mathematician.

• Provided the math necessary to model the behaviour of neurons.

Model Neuron

[Figure: a model neuron, showing the cell body and axon]

Nerve Properties

[Figure: nerve properties]

Logical Operations

[Figure: two neurons, A and B, connected so that "If A then B"]

Applications for MCP Model

AND/OR Gates

XOR Gate

Linear Separability
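Since the gate figures above are not reproduced, here is a sketch with standard textbook weight and threshold choices (not taken from the slides) showing MCP units computing AND and OR, and why XOR needs two layers: it is not linearly separable, so no single unit can compute it.

```python
# McCulloch-Pitts units as logic gates (weights/thresholds are the usual
# textbook choices, not taken from the slides).

def mcp(x, w, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def AND(a, b): return mcp((a, b), (1, 1), 2)   # fires only if both inputs fire
def OR(a, b):  return mcp((a, b), (1, 1), 1)   # fires if either input fires

# XOR is not linearly separable, so no single unit computes it.
# A two-layer arrangement works: XOR(a, b) = AND(OR(a, b), NOT(AND(a, b))).
def XOR(a, b): return AND(OR(a, b), 1 - AND(a, b))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, AND(a, b), OR(a, b), XOR(a, b))
```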

Learning and Generalization

Generalization in Function Approximation