Neural Networks
Plan

- Perceptron
  - Linear discriminant
- Associative memories
  - Hopfield networks
  - Chaotic networks
- Multilayer perceptron
  - Backpropagation
Perceptron

Historically the first neural network, inspired by the human brain. Proposed by Rosenblatt between 1957 and 1961. At the time, the brain appeared to be the best available computer. Goal: associate input patterns with recognition outputs. Akin to a linear discriminant.
Perceptron

Constitution: binary inputs (1/0) form the input layer (the retina); connections (synapses) feed summation units ∑ that produce binary outputs {0/1} in the output layer.
Perceptron

Constitution: each input neuron i carries an activation xi (here x0 … x3), connected to output neuron j with weight wij (w0j … w3j). The output neuron computes

aj = ∑i xi wij
oj = f(aj), with f a sigmoid
Perceptron

Constitution

aj = ∑i xi wij
oj = f(aj)

aj : activation of output neuron j
xi : activation of input neuron i
wij : connection weight between input neuron i and output neuron j
oj : decision rule, oj = 0 for aj ≤ θj, 1 for aj > θj
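The constitution above can be sketched as a single output neuron; a minimal sketch, with illustrative function and parameter names:

```python
def perceptron_output(x, w, theta=0.0):
    """Activation aj = sum_i x[i]*w[i]; decision rule oj = 1 if aj > theta else 0."""
    a = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if a > theta else 0

# With weights (1, 1) and threshold 1.5 the neuron computes a logical AND.
print(perceptron_output([1, 1], [1.0, 1.0], theta=1.5))   # prints 1
print(perceptron_output([1, 0], [1.0, 1.0], theta=1.5))   # prints 0
```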
Perceptron
Need an associated learning Learning is supervised
Based on a couple input pattern and desired output If the activation of output neuron is OK => nothing happens Otherwise – inspired by neurophysiological data
If it is activated : decrease the value of the connection If it is unactivated : increase the value of the connection
Iterated until the output neurons reach the desired value
![Page 10: Neural Networks. Plan Perceptron Linear discriminant Associative memories Hopfield networks Chaotic networks Multilayer perceptron Backpropagation](https://reader037.vdocument.in/reader037/viewer/2022102801/56649e455503460f94b39e07/html5/thumbnails/10.jpg)
Perceptron

Supervised learning: how to decrease or increase the connections? The learning rule of Widrow-Hoff, close to Hebbian learning:

wij(t+1) = wij(t) + η(tj − oj)xi = wij(t) + ∆wij

where tj is the desired value of output neuron j and η is the learning rate.
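The Widrow-Hoff rule above can be sketched as follows; the data set, learning rate, and epoch count are illustrative choices:

```python
# Widrow-Hoff rule: w(t+1) = w(t) + eta*(t_j - o_j)*x_i.

def train_perceptron(samples, n_inputs, eta=0.1, theta=0.0, epochs=20):
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for x, target in samples:
            a = sum(xi * wi for xi, wi in zip(x, w))
            o = 1 if a > theta else 0
            # Nothing changes when o == target; otherwise each connection is
            # increased or decreased in proportion to its input.
            w = [wi + eta * (target - o) * xi for wi, xi in zip(w, x)]
    return w

# Learn logical OR; the third input is a constant 1 acting as a bias.
samples = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w = train_perceptron(samples, n_inputs=3)
```

Because OR is linearly separable, iterating the rule converges to weights that classify every sample correctly.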
Theory of linear discriminant

Compute:

g(x) = Wᵀx + W0

Choose:

class 1 if g(x) > 0
class 2 otherwise

The hyperplane g(x) = 0 separates the region g(x) > 0 from the region g(x) < 0. But how to find W on the basis of the data?
Gradient descent:

∆Wi = −η ∂E/∂Wi

In general a sigmoid is used for the statistical interpretation, so that Y ∈ (0,1):

Y = 1 / (1 + exp(−g(x)))

The error could be least squares: (Y − Yd)²
Or maximum likelihood: Yd log Y + (1 − Yd) log(1 − Y)

The sigmoid is easy to derive: f′ = Y(1 − Y). At the end, you get the learning rule:

∆Wj = η(Yd − Y)Xj

Class 1 if Y > 0.5 and class 2 otherwise.
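Putting the sigmoid output and the learning rule ∆Wj = η(Yd − Y)Xj together gives a trainable linear discriminant; a minimal sketch with illustrative data and parameters:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_discriminant(samples, n, eta=0.5, epochs=200):
    """Online gradient descent on the logistic output Y = sigmoid(W.x + W0)."""
    w, w0 = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, yd in samples:
            y = sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + w0)
            # Learning rule dWj = eta*(Yd - Y)*Xj (the bias W0 uses X = 1).
            w = [wi + eta * (yd - y) * xi for wi, xi in zip(w, x)]
            w0 += eta * (yd - y)
    return w, w0

# Two 1-D classes; decide class 1 when Y > 0.5, class 2 otherwise.
samples = [([0.0], 0), ([1.0], 0), ([3.0], 1), ([4.0], 1)]
w, w0 = train_discriminant(samples, n=1)
```

The same update falls out of either error choice, since with the likelihood the factor Y(1 − Y) cancels against the sigmoid derivative.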
Perceptron limitations

Not always easy to train, but above all: it cannot separate data that are not linearly separable. Why so? The XOR problem: the points (0,0) and (1,1) belong to one class, (0,1) and (1,0) to the other, and no single line separates them. XOR stalled neural-network research for 20 years (Minsky and Papert were responsible). Consequence: we had to wait for the magical hidden layer, and for backpropagation.
Associative memories

Around 1970. Two types: hetero-associative and auto-associative. We will treat here only the auto-associative ones. They make an interesting connection between neuroscience and the physics of complex systems (John Hopfield).
Auto-associative memories

Constitution: the input feeds a fully connected neural network.
Associative memories

Hopfield -> DEMO

Fully connected graphs: input layer = output layer = the network itself, with the output fed back as input. The connections have to be symmetric (wij = wji). It is again a Hebbian learning rule.
Associative memories

Hopfield

The network becomes a dynamical machine. It has been shown to converge to a fixed point, and this fixed point is a minimum of a Lyapunov energy function. These fixed points are used for storing « patterns ». Discrete time and asynchronous updating, with inputs in {−1, 1}:

xi ← sign(∑j wij xj)
Associative memories

Hopfield

The learning is done by Hebbian learning, summing over all patterns p to learn:

Wij = ∑p Xiᵖ Xjᵖ
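The Hebbian storage rule and the asynchronous recall dynamics can be sketched together; a minimal illustration with a single stored pattern:

```python
# Hopfield auto-associative memory: Hebbian storage W_ij = sum_p X_i^p X_j^p
# (zero diagonal, hence symmetric), asynchronous recall x_i <- sign(sum_j W_ij x_j).

def store(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]   # symmetric: W[i][j] == W[j][i]
    return W

def recall(W, x, sweeps=5):
    """Asynchronous updating: units are updated one at a time, in sequence."""
    x = list(x)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            a = sum(W[i][j] * x[j] for j in range(n))
            x[i] = 1 if a >= 0 else -1
    return x

# Store one pattern (states in {-1, 1}) and recover it from a corrupted copy.
pattern = [1, -1, 1, -1, 1, -1]
W = store([pattern])
noisy = [-pattern[0]] + pattern[1:]          # flip the first unit
restored = recall(W, noisy)                  # converges back to the stored pattern
```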
My research: chaotic encoding of memories in the brain
Multilayer perceptron

Constitution: an input layer x (I neurons), a hidden layer h (L neurons), and an output layer o (J neurons), linked by the connection matrices W (input to hidden) and Z (hidden to output).
Multilayer Perceptron

Constitution: as in the perceptron, each hidden neuron j computes aj = ∑i xi wij and oj = f(aj); the hidden activations are then fed through the weights Zjl to the output layer.
Error backpropagation

Learning algorithm, how it proceeds:

- Inject an input
- Get the output
- Compute the error with respect to the desired output
- Propagate this error back from the output layer to the input layer of the network
- This is just a consequence of the chain rule in the derivative of the gradient descent
Backpropagation

Select a differentiable transfer function. Classically used: the logistic

f(x) = 1 / (1 + e^(−x))

and its derivative

f′(x) = f(x)[1 − f(x)]
Backpropagation

The algorithm:

1. Inject an input x
2. Compute the intermediate hidden layer: h = f(W*x)
3. Compute the output: o = f(Z*h)
4. Compute the output error: δoutput = f′(Z*h)*(t − o)
5. Adjust Z on the basis of this error: Z(t+1) = Z(t) + η δoutput h = Z(t) + ∆Z(t)
6. Compute the error on the hidden layer: δhidden = f′(W*x)*(Zᵀ δoutput)
7. Adjust W on the basis of this error: W(t+1) = W(t) + η δhidden x = W(t) + ∆W(t)
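The seven steps above can be sketched for a one-hidden-layer network trained on XOR, the very problem a single-layer perceptron cannot solve. Layer sizes, learning rate, epoch count, and the random seed are illustrative choices:

```python
import math
import random

random.seed(1)
f = lambda a: 1.0 / (1.0 + math.exp(-a))      # logistic transfer function

I, L, J, eta = 2, 4, 1, 1.0                   # layer sizes and learning rate
W = [[random.uniform(-1, 1) for _ in range(I)] for _ in range(L)]  # input -> hidden
Z = [[random.uniform(-1, 1) for _ in range(L)] for _ in range(J)]  # hidden -> output

def forward(x):
    """Steps 2-3: h = f(W*x), o = f(Z*h)."""
    h = [f(sum(W[l][i] * x[i] for i in range(I))) for l in range(L)]
    o = [f(sum(Z[j][l] * h[l] for l in range(L))) for j in range(J)]
    return h, o

data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
err0 = sum((t[0] - forward(x)[1][0]) ** 2 for x, t in data)  # error before training

for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Step 4: output error, using f'(a) = f(a)[1 - f(a)] = o(1 - o).
        d_out = [(t[j] - o[j]) * o[j] * (1 - o[j]) for j in range(J)]
        # Step 6: hidden error, back-propagated through Z (computed before Z changes).
        d_hid = [h[l] * (1 - h[l]) * sum(Z[j][l] * d_out[j] for j in range(J))
                 for l in range(L)]
        # Step 5: adjust Z; step 7: adjust W.
        for j in range(J):
            for l in range(L):
                Z[j][l] += eta * d_out[j] * h[l]
        for l in range(L):
            for i in range(I):
                W[l][i] += eta * d_hid[l] * x[i]
```

After training, the squared error on the four XOR patterns is lower than at initialization, which a single-layer perceptron could not achieve on this data.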
Neural network
Simple linear discriminant
Neural networks
Few layers – Little learning
Neural networks
More layers – More learning
Neural networks

Tricks:

- Favour simple NNs (you can add the structure in the error term)
- Few layers are enough (theoretically only one hidden layer)
- Exploit cross-validation…
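Cross-validation, the last trick above, can be sketched as a generic k-fold loop; `train_fn` and `error_fn` are hypothetical placeholders for training a candidate network and measuring its error, so that the simplest network with a competitive average validation error can be favoured:

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cross_validate(samples, k, train_fn, error_fn):
    """Average validation error over k folds: train on k-1, validate on the rest."""
    errs = []
    for held in kfold_indices(len(samples), k):
        held_set = set(held)
        train = [s for i, s in enumerate(samples) if i not in held_set]
        model = train_fn(train)
        errs.append(sum(error_fn(model, samples[i]) for i in held) / len(held))
    return sum(errs) / k
```

Calling `cross_validate` once per candidate architecture and comparing the returned averages gives a principled way to pick the network size.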