Neural Networks
Natural Neural Networks
• Signals “move” between neurons via electrochemical impulses
• The synapses release a chemical transmitter; the sum of these can cause a threshold to be reached, causing the neuron to “fire”
• Synapses can be inhibitory or excitatory
• We are born with about 100 billion neurons
• A neuron may connect to as many as 100,000 other neurons
• Many of their ideas are still used today, e.g.
  – many simple units (“neurons”) combine to give increased computational power
  – the idea of a threshold
Modelling a Neuron
• aj : activation value of unit j
• wj,i : weight on the link from unit j to unit i
• ini : weighted sum of the inputs to unit i
• ai : activation value of unit i
• g : activation function

ini = Σj wj,i aj
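The weighted-sum rule above can be sketched in a few lines of Python (an illustrative sketch; the function names are our own, chosen to follow the slide’s notation):

```python
# Computing a_i = g(in_i), where in_i = sum_j w_ji * a_j (slide notation).
def unit_activation(a, w, g):
    """a: activations a_j of the units feeding unit i; w: weights w_ji."""
    in_i = sum(w_ji * a_j for w_ji, a_j in zip(w, a))  # weighted sum in_i
    return g(in_i)                                     # a_i = g(in_i)

def step(t):
    """Step_t: threshold activation function (see the next slide)."""
    return lambda x: 1 if x >= t else 0

# Example: inputs (1, 1, 0) through weights (2, 2, -1), threshold 2.
print(unit_activation([1, 1, 0], [2, 2, -1], step(2)))  # -> 1
```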
Activation Functions
• Stept(x) = 1 if x ≥ t, else 0   (threshold = t)
• Sign(x) = +1 if x ≥ 0, else −1
• Sigmoid(x) = 1/(1 + e^(−x))
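The three functions are easy to state directly in code (a minimal sketch; the names are ours):

```python
import math

def step(x, t):
    """Step_t(x) = 1 if x >= t, else 0 (threshold t)."""
    return 1 if x >= t else 0

def sign(x):
    """Sign(x) = +1 if x >= 0, else -1."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid(x) = 1 / (1 + e^-x): a smooth squash into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(step(1.5, 1), sign(-0.2), sigmoid(0))  # -> 1 -1 0.5
```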
Building a Neural Network
1. “Select structure”: design the way the neurons are interconnected
2. “Select weights”: decide the strengths with which the neurons are interconnected
   – weights are selected to give a “good match” to a “training set”
   – “training set”: a set of inputs and their desired outputs
   – a “learning algorithm” is often used
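The slides do not fix a particular learning algorithm; as one concrete example (an assumption here, not the slides’ method), the classic perceptron update rule nudges each weight until the outputs match the training set:

```python
# A sketch of "selecting weights to match a training set" using the classic
# perceptron learning rule - one common choice of learning algorithm.

def step(x, t=0):
    return 1 if x >= t else 0

def train(examples, n_inputs, lr=0.1, epochs=100):
    """examples: list of (inputs, desired_output) pairs - the "training set"."""
    w = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, desired in examples:
            out = step(sum(wi * xi for wi, xi in zip(w, inputs)) + bias)
            err = desired - out                    # 0 when the net is right
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            bias += lr * err
    return w, bias

# Learn AND from its truth table.
and_set = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_set, 2)
print([step(w[0]*x1 + w[1]*x2 + b) for (x1, x2), _ in and_set])  # -> [0, 0, 0, 1]
```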
Basic Neural Networks

• We will first look at the simplest networks
• “Feed-forward”
  – signals travel in one direction through the net
  – the net computes a function of the inputs
The First Neural Networks
Neurons in a McCulloch-Pitts network are connected by directed, weighted paths
[Figure: a McCulloch-Pitts unit Y with inputs X1 and X2 (weight 2 each) and X3 (weight −1)]
If the weight on a path is positive, the path is excitatory; otherwise it is inhibitory.
The activation of a neuron is binary. That is, the neuron either fires (activation of one) or does not fire (activation of zero).
For the network shown here, the activation function for unit Y is

f(y_in) = 1 if y_in ≥ θ, else 0

where y_in is the total input signal received and θ is the threshold for Y.
Originally, all excitatory connections into a particular neuron had the same weight, although differently weighted connections could feed into different neurons. Later, weights were allowed to be arbitrary.
Each neuron has a fixed threshold. If the net input into the neuron is greater than or equal to the threshold, the neuron fires
The threshold is set such that any non-zero inhibitory input will prevent the neuron from firing
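The last few slides can be combined into a small sketch of the pictured unit. We assume the figure’s weights of 2 on X1 and X2 and −1 on X3; the rule that any non-zero inhibitory input prevents firing is implemented as an explicit veto:

```python
# A McCulloch-Pitts unit: binary activations, a fixed threshold, and any
# non-zero inhibitory (negative-weight) input vetoes firing.
# Weights (2, 2, -1) follow the figure in the slides.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum reaches the threshold AND no
    inhibitory input is active."""
    if any(x != 0 for x, w in zip(inputs, weights) if w < 0):
        return 0                                  # inhibition vetoes firing
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

weights, theta = (2, 2, -1), 2
print(mp_neuron((1, 0, 0), weights, theta))  # X1 alone fires the unit -> 1
print(mp_neuron((1, 1, 1), weights, theta))  # X3 active: inhibited    -> 0
```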
Building Logic Gates
• Computers are built out of “logic gates”
• Use threshold (step) function for activation function– all activation values are 0 (false) or 1 (true)
AND Function
[Figure: AND unit – X1 and X2 each connected to Y with weight 1]

AND truth table:

| X1 | X2 | Y |
|----|----|---|
| 1  | 1  | 1 |
| 1  | 0  | 0 |
| 0  | 1  | 0 |
| 0  | 0  | 0 |

Threshold(Y) = 2
OR Function

[Figure: OR unit – X1 and X2 each connected to Y with weight 2]

OR truth table:

| X1 | X2 | Y |
|----|----|---|
| 1  | 1  | 1 |
| 1  | 0  | 1 |
| 0  | 1  | 1 |
| 0  | 0  | 0 |

Threshold(Y) = 2
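Both gates above are single threshold units; a quick check against their truth tables, using the weights and thresholds from the two slides:

```python
# AND and OR as threshold units, with the slides' weights and thresholds.

def fires(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda x1, x2: fires((x1, x2), (1, 1), 2)   # weights 1, 1; threshold 2
OR  = lambda x1, x2: fires((x1, x2), (2, 2), 2)   # weights 2, 2; threshold 2

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, AND(x1, x2), OR(x1, x2))
# -> 1 1 1 1 / 1 0 0 1 / 0 1 0 1 / 0 0 0 0
```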
Perceptron

• Synonym for a single-layer, feed-forward network
• First studied in the 1950s
• Other networks were known about, but the perceptron was the only one capable of learning, so research concentrated in this area
• A single weight only affects one output, so we can restrict our investigation to a model with a single output unit
• The notation can then be simpler, i.e.

O = Step0( Σj Wj Ij )
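One common way to read Step0 here (an assumption on our part) is that the threshold t has been folded into an extra “bias” weight on a fixed input, so the step can always be taken at zero:

```python
# O = Step_0( sum_j W_j I_j ): the threshold t folded into a bias weight
# W_0 on a fixed input I_0 = -1 (a standard trick, assumed here).

def perceptron_output(weights, inputs):
    net = sum(w * i for w, i in zip(weights, inputs))
    return 1 if net >= 0 else 0                    # Step_0

# AND with threshold 2 becomes weights (2, 1, 1) over inputs (-1, x1, x2).
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, perceptron_output((2, 1, 1), (-1, x1, x2)))
# -> only (1, 1) gives output 1
```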
What can perceptrons represent?
AND:

| Input 1 | 0 | 0 | 1 | 1 |
| Input 2 | 0 | 1 | 0 | 1 |
| Output  | 0 | 0 | 0 | 1 |

XOR:

| Input 1 | 0 | 0 | 1 | 1 |
| Input 2 | 0 | 1 | 0 | 1 |
| Output  | 0 | 1 | 1 | 0 |
[Figure: the four inputs (0,0), (0,1), (1,0), (1,1) plotted in the plane for AND and for XOR – a single straight line can separate the AND outputs, but no line can separate the XOR outputs]
• Functions which can be separated in this way are called linearly separable
• Only linearly separable functions can be represented by a perceptron
• XOR cannot be represented by a perceptron
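The claim that XOR cannot be represented can be checked by brute force over a grid of weights and thresholds (an illustrative search, not a proof, though it agrees with the linear-separability argument):

```python
# Search a grid of weights and thresholds for a single threshold unit
# computing a given truth table. AND is found; XOR never is.
from itertools import product

def solvable(truth_table):
    grid = [x / 2 for x in range(-8, 9)]            # -4.0 .. 4.0 in 0.5 steps
    for w1, w2, t in product(grid, repeat=3):
        if all((1 if w1 * x1 + w2 * x2 >= t else 0) == y
               for (x1, x2), y in truth_table):
            return True
    return False

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(solvable(AND), solvable(XOR))  # -> True False
```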
XOR
• XOR is not “linearly separable”
  – it cannot be represented by a perceptron
• What can we do instead?
  1. Convert XOR into logic gates that can be represented by perceptrons
  2. Chain the gates together
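Steps 1 and 2 can be sketched directly: decompose XOR as AND(OR(x1, x2), NAND(x1, x2)), where each gate is a single threshold unit (this particular decomposition is our choice; others work too):

```python
# XOR built by chaining threshold-representable gates - a two-layer net.

def unit(weights, threshold):
    return lambda *xs: 1 if sum(w * x for w, x in zip(weights, xs)) >= threshold else 0

OR   = unit((1, 1), 1)     # fires if either input is on
NAND = unit((-1, -1), -1)  # fires unless both inputs are on
AND  = unit((1, 1), 2)     # fires only if both inputs are on

def XOR(x1, x2):
    return AND(OR(x1, x2), NAND(x1, x2))   # hidden layer: OR and NAND

print([XOR(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 1, 1, 0]
```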
Single- vs. Multiple-Layers
• Once we chain the gates together we have “hidden layers” – layers that are “hidden” from the output lines
• We have just seen that hidden layers allow us to represent XOR
  – the perceptron is single-layer
  – multiple layers increase the representational power, e.g. they can represent XOR
• Generally useful nets have multiple layers – typically 2–4 layers