MVN-based Multilayer Feedforward Neural Network (MLMVN) and a Backpropagation Learning Algorithm

Introduced in 2004-2007
MLMVN

I. Aizenberg and C. Moraga, "Multilayer Feedforward Neural Network Based on Multi-Valued Neurons and a Backpropagation Learning Algorithm", Soft Computing, vol. 11, No. 2, January 2007, pp. 169-183.

I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, "Blur Identification by Multilayer Neural Network Based on Multi-Valued Neurons", IEEE Transactions on Neural Networks, vol. 19, No. 5, May 2008, pp. 883-898.
MVN-based Multilayer Feedforward Neural Network

[Figure: a feedforward network with hidden layers (neurons 1, 2, ..., N per layer) and an output layer]
MLMVN: Key Properties

• Derivative-free learning algorithm based on the error-correction rule
• Self-adaptation of the learning rate for all the neurons
• Much faster learning than in other neural networks
• A single step is always enough to adjust the weights for a given set of inputs, independently of the number of hidden and output neurons
• Better recognition/prediction/classification rates in comparison with other neural networks, neuro-fuzzy networks, and kernel-based techniques, including SVM
MLMVN: Key Properties

MLMVN can operate with both continuous and discrete inputs/outputs, as well as with hybrid inputs/outputs:

• continuous inputs, discrete outputs
• discrete inputs, continuous outputs
• hybrid inputs, hybrid outputs
A Backpropagation Derivative-Free Learning Algorithm

T_km is the desired output of the kth neuron of the mth (output) layer.
Y_km is the actual output of the kth neuron of the mth (output) layer.
δ*_km = T_km − Y_km is the network error for the kth neuron of the output layer.
δ_km = δ*_km / s_m is the error for the kth neuron of the output layer.
s_m = N_{m−1} + 1 is the number of all neurons on the previous layer (m−1, to which the error is backpropagated), incremented by 1.
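In code, this error definition amounts to a couple of lines; the sketch below (hypothetical helper names, not the authors' implementation) uses Python complex numbers for the unit-circle outputs:

```python
import cmath

def output_error(T_km: complex, Y_km: complex, N_prev: int) -> complex:
    """Error of the k-th output-layer neuron: delta_km = (T_km - Y_km) / s_m,
    with s_m = N_{m-1} + 1 (neurons on the preceding layer, plus one)."""
    s_m = N_prev + 1
    return (T_km - Y_km) / s_m

# Example: desired and actual outputs on the unit circle, 2 neurons on layer m-1
delta = output_error(cmath.exp(0.76j), cmath.exp(0.70j), N_prev=2)
```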
A Backpropagation Derivative-Free Learning Algorithm

The error backpropagation. The error for the kth neuron of the hidden (jth) layer, j = 1, ..., m−1, is

δ_kj = (1/s_j) Σ_{i=1..N_{j+1}} δ_{i,j+1} (w_k^{i,j+1})^{−1},

where s_j = N_{j−1} + 1, j = 2, ..., m; s_1 = 1 is the number of all neurons on the layer preceding layer j (to which the error is backpropagated), incremented by 1.
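A minimal sketch of this backpropagation step, assuming errors and weights are plain Python complex numbers (the function name and data layout are illustrative):

```python
def hidden_error(k, deltas_next, weights_next, s_j):
    """Error of neuron k on hidden layer j, backpropagated from layer j+1:
    delta_kj = (1/s_j) * sum_i delta_{i,j+1} * (w_k^{i,j+1})**(-1).

    deltas_next[i]  -- error of the i-th neuron on layer j+1
    weights_next[i] -- weight list of that neuron; entry k multiplies the
                       output of neuron k of layer j
    Note: the errors are multiplied by the *reciprocals* of the weights,
    not by derivatives; the learning algorithm is derivative-free.
    """
    total = sum(d * (w[k] ** -1) for d, w in zip(deltas_next, weights_next))
    return total / s_j
```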
A Backpropagation Derivative-Free Learning Algorithm

Correction rule for the neurons of the mth (output) layer (kth neuron of the mth layer):

w̃_i^{km} = w_i^{km} + (C_km / (n+1)) δ_km Ȳ_i,  i = 1, ..., n,
w̃_0^{km} = w_0^{km} + (C_km / (n+1)) δ_km,

where n is the number of the neuron's inputs, Ȳ_i is the complex conjugate of the ith input (the output of the ith neuron of the preceding layer), and C_km is the learning rate.
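The output-layer rule can be sketched as follows (an illustration with C_km = 1 by default; names are hypothetical):

```python
def correct_output_neuron(w, delta, inputs, C=1.0):
    """Error-correction update for an output-layer neuron.

    w      -- weights [w0, w1, ..., wn] (w0 is the bias weight)
    inputs -- outputs Y_1, ..., Y_n of the preceding layer
    The shared factor C/(n+1) spreads the error evenly over the n+1
    weights; the inputs enter the update conjugated (Y-bar).
    """
    n = len(inputs)
    factor = C / (n + 1)
    updated = [w[0] + factor * delta]
    updated += [w[i + 1] + factor * delta * inputs[i].conjugate()
                for i in range(n)]
    return updated
```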
A Backpropagation Derivative-Free Learning Algorithm

Correction rule for the neurons of the 2nd through (m−1)st layers (kth neuron of the jth layer, j = 2, ..., m−1):

w̃_i^{kj} = w_i^{kj} + (C_kj / ((n+1) |z_kj|)) δ_kj Ȳ_i,  i = 1, ..., n,
w̃_0^{kj} = w_0^{kj} + (C_kj / ((n+1) |z_kj|)) δ_kj,

where |z_kj| is the absolute value of the neuron's weighted sum.
A Backpropagation Derivative-Free Learning Algorithm

Correction rule for the neurons of the 1st hidden layer:

w̃_i^{k1} = w_i^{k1} + (C_k1 / ((n+1) |z_k1|)) δ_k1 x̄_i,  i = 1, ..., n,
w̃_0^{k1} = w_0^{k1} + (C_k1 / ((n+1) |z_k1|)) δ_k1,

where x̄_i is the complex conjugate of the ith network input.
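A sketch of the hidden-layer update; it differs from the output-layer rule only by the extra 1/|z| factor (names are hypothetical):

```python
def correct_hidden_neuron(w, delta, x, z, C=1.0):
    """Update for a first-hidden-layer neuron: the same error-correction
    rule, additionally scaled by 1/|z_k1| (the magnitude of the neuron's
    weighted sum), which self-adapts the learning rate; x holds the
    network inputs, used in conjugated form."""
    n = len(x)
    factor = C / ((n + 1) * abs(z))
    updated = [w[0] + factor * delta]
    updated += [w[i + 1] + factor * delta * x[i].conjugate() for i in range(n)]
    return updated
```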
Criteria for the convergence of the learning process

Learning should continue until either the minimum MSE/RMSE criterion is satisfied or zero error is reached.
MSE criterion

E(W) = (1/N) Σ_{s=1..N} Σ_k (δ*_kms)² = (1/N) Σ_{s=1..N} E_s ≤ λ

λ is the maximum possible MSE for the training data.
N is the number of patterns in the training set.
E_s = Σ_k (δ*_kms)² is the network square error for the sth pattern.
RMSE criterion

E(W) = √( (1/N) Σ_{s=1..N} Σ_k (δ*_kms)² ) = √( (1/N) Σ_{s=1..N} E_s ) ≤ λ

λ is the maximum possible RMSE for the training data.
N is the number of patterns in the training set.
E_s = Σ_k (δ*_kms)² is the network square error for the sth pattern.
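Both stopping criteria can be sketched together (hypothetical helper names; `desired` and `actual` are sequences of per-pattern complex output lists):

```python
import math

def pattern_errors(desired, actual):
    """E_s = sum over output neurons k of |T_ks - Y_ks|**2, per pattern."""
    return [sum(abs(t - y) ** 2 for t, y in zip(T_s, Y_s))
            for T_s, Y_s in zip(desired, actual)]

def mse(desired, actual):
    errs = pattern_errors(desired, actual)
    return sum(errs) / len(errs)

def rmse(desired, actual):
    return math.sqrt(mse(desired, actual))

def converged(desired, actual, lam, criterion="mse"):
    """Stop learning once the MSE (or RMSE) drops to lambda or below."""
    value = mse(desired, actual) if criterion == "mse" else rmse(desired, actual)
    return value <= lam
```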
MLMVN Learning: Example

Suppose we need to classify three vectors belonging to three different classes:

X₁ = (exp(4.23i), exp(2.10i))ᵀ,
X₂ = (exp(5.34i), exp(1.24i))ᵀ,
X₃ = (exp(2.10i), exp(0i))ᵀ.

The desired outputs are T₁ = exp(0.76i), T₂ = exp(2.56i), T₃ = exp(5.35i).

The classes are determined in such a way that the argument of the output of the network must belong to the interval [arg(T_j) − 0.05, arg(T_j) + 0.05], j = 1, 2, 3.
MLMVN Learning: Example

Thus, we have to satisfy the condition |arg(T_j) − Arg(z)| ≤ 0.05, where e^{i Arg(z)} is the actual output, and for the mean square error, E ≤ 0.05² = 0.0025.
MLMVN Learning: Example

Let us train the 2→1 MLMVN (two hidden neurons and a single neuron in the output layer):

[Figure: inputs x₁ and x₂ feed hidden neurons 11 and 12, whose outputs feed output neuron 21, which produces f(x₁, x₂)]
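The 2→1 network in the figure can be sketched with the continuous MVN activation P(z) = z/|z| (assuming continuous-valued neurons, consistent with the continuous desired outputs above; names are illustrative):

```python
def mvn_output(w, x):
    """Continuous MVN: z = w0 + w1*x1 + ... + wn*xn, projected onto the
    unit circle, P(z) = z/|z| = exp(i * arg z)."""
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return z / abs(z)

def mlmvn_2_1(hidden_weights, output_weights, x):
    """Forward pass of the 2->1 network: two hidden MVNs feed one output MVN."""
    hidden_out = [mvn_output(w, x) for w in hidden_weights]
    return mvn_output(output_weights, hidden_out)

# Toy weights: the output always lies on the unit circle
y = mlmvn_2_1([[0j, 1 + 0j, 0j], [0j, 0j, 1 + 0j]],
              [0j, 1 + 0j, 1 + 0j],
              [1 + 0j, 1j])
```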
MLMVN Learning: Example

The training process converges after 7 training epochs. Update of the outputs:
MLMVN Learning: Example

| Epoch | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| MSE | 2.4213 | 0.1208 | 0.2872 | 0.1486 | 0.0049 | 0.0026 | 0.0009 |

The learning criterion is E ≤ 0.05² = 0.0025.
MLMVN: Simulation Results
Simulation Results: Benchmarks

All simulation results for the benchmark problems are obtained using a network n→S→1 with n inputs, a single hidden layer containing S neurons, and a single neuron in the output layer.

[Figure: hidden layer and output layer of the n→S→1 network]
The Two Spirals Problem
Two Spirals Problem: complete training of 194 points

| Structure of the network | Network type | Results |
|---|---|---|
| 2→40→1 | MLMVN | Trained completely, no errors |
| 2→40→1 | Traditional MLP, sigmoid activation function | 4% errors still remain |
| 2→30→1 | MLMVN | Trained completely, no errors |
| 2→30→1 | Traditional MLP, sigmoid activation function | 14% errors still remain |
Two Spirals Problem: cross-validation (training on 98 points and prediction of the remaining 96 points)

• The prediction rate is stable at 68-72% for all the networks from 2→26→1 through 2→40→1
• A prediction rate of 74-75% appears 1-2 times per 100 experiments with the 2→40→1 network
• The best known result, obtained using the Fuzzy Kernel Perceptron (Nov. 2002), is 74.5%, but it was obtained using a more complicated and larger network
Mackey-Glass time series prediction

Mackey-Glass differential delay equation:

dx(t)/dt = 0.2 x(t−τ) / (1 + x¹⁰(t−τ)) − 0.1 x(t) + n(t)

The task of prediction is to predict x(t+6) from x(t), x(t−6), x(t−12), x(t−18).
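The series itself can be generated by simple Euler integration of the noise-free equation. The delay τ = 17 below is the value commonly used for this benchmark and is an assumption, since the slides do not state it:

```python
def mackey_glass(n_samples, tau=17, dt=0.1, x0=1.2):
    """Euler integration of the noise-free Mackey-Glass equation:
    dx/dt = 0.2*x(t - tau) / (1 + x(t - tau)**10) - 0.1*x(t).
    Samples the series once per time unit."""
    per_unit = int(round(1 / dt))
    history = [x0] * (tau * per_unit)   # constant pre-history: x(t) = x0, t <= 0
    x = x0
    series = []
    for step in range(n_samples * per_unit):
        x_tau = history.pop(0)          # delayed value x(t - tau)
        history.append(x)
        x = x + dt * (0.2 * x_tau / (1 + x_tau ** 10) - 0.1 * x)
        if step % per_unit == 0:
            series.append(x)
    return series

series = mackey_glass(500)              # 500 points of the chaotic series
```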
Mackey-Glass time series prediction
[Figures: the training data and the testing data]
Mackey-Glass time series prediction
[Figures: RMSE for the training set and for the testing set]
Mackey-Glass time series prediction
Testing Results:
Blue curve: the actual series; red curve: the predicted series.
Mackey-Glass time series prediction

The results of 30 independent runs:

| | Run 1 | Run 2 | Run 3 |
|---|---|---|---|
| # of neurons on the hidden layer | 50 | 50 | 40 |
| ε, a maximum possible RMSE | 0.0035 | 0.0056 | 0.0056 |
| Actual RMSE for the training set (min-max) | 0.0032-0.0035 | 0.0053-0.0056 | 0.0053-0.0056 |
| RMSE for the testing set: min | 0.0056 | 0.0083 | 0.0086 |
| RMSE for the testing set: max | 0.0083 | 0.0101 | 0.0125 |
| RMSE for the testing set: median | 0.0063 | 0.0089 | 0.0097 |
| RMSE for the testing set: average | 0.0066 | 0.0089 | 0.0098 |
| RMSE for the testing set: SD | 0.0009 | 0.0005 | 0.0011 |
| Number of training epochs: min | 95381 | 24754 | 34406 |
| Number of training epochs: max | 272660 | 116690 | 137860 |
| Number of training epochs: median | 145137 | 56295 | 62056 |
| Number of training epochs: average | 162180 | 58903 | 70051 |
Mackey-Glass time series prediction

Comparison of MLMVN to other models (RMSE):

| Model | RMSE |
|---|---|
| MLMVN (min) | 0.0056 |
| MLMVN (average) | 0.0066 |
| GEFREX (M. Russo, genetic fuzzy learning, 2000), min | 0.0061 |
| EPNet (X. Yao and Y. Liu, evolutionary system, 1997) | 0.02 |
| ANFIS (J.-S. R. Jang, neuro-fuzzy, 1993) | 0.0074 |
| CNNE (M. M. Islam et al., neural network ensembles, 2003) | 0.009 |
| SuPFuNIS (S. Paul et al., fuzzy-neuro, 2002) | 0.014 |
| Classical backpropagation NN (1994) | 0.02 |

MLMVN outperforms all other networks in:
• The number of either hidden neurons or support vectors
• Speed of learning
• Prediction quality
Real-World Problems Solved Using MLMVN
Generation of the Genetic Code using MLMVN
Generation of the Genetic Code using MLMVN

I. Aizenberg and C. Moraga, "The Genetic Code as a Function of Multiple-Valued Logic Over the Field of Complex Numbers and its Learning using Multilayer Neural Network Based on Multi-Valued Neurons", Journal of Multiple-Valued Logic and Soft Computing, No. 4-6, November 2007, pp. 605-618.
Genetic code

There are exactly four different nucleic acids: Adenosine (A), Thymidine (T), Cytidine (C), and Guanosine (G). Thus, there exist 4³ = 64 different combinations of them "by three". Hence, the genetic code is the mapping between the four-letter alphabet of the nucleic acids (DNA) and the 20-letter alphabet of the amino acids (proteins).
Genetic code as a multiple-valued function

Let G_j, j = 1, ..., 20, be the amino acids, and let x_i ∈ {A, G, C, T}, i = 1, 2, 3, be the nucleic acids. Then the discrete function of three variables

G_j = f(x₁, x₂, x₃)

is a function of 20-valued logic, which is partially defined on the set of four-valued variables.
Genetic code as a multiple-valued function

A multiple-valued logic over the field of complex numbers is a very appropriate model to represent the genetic code function. This model makes it possible to represent mathematically the most essential biological properties (e.g., the complementary nucleic acids A-T and G-C that are stuck to each other in a DNA double helix only in this order).
DNA double helix

The complementary nucleic acids A-T and G-C are always stuck to each other in pairs.
Representation of the nucleic acids and amino acids

[Figure: the nucleic acids and amino acids located on the unit circle]

The complementary nucleic acids A-T and G-C are located such that A = 1, G = i, T = −A = −1, and C = −G = −i. All amino acids are distributed along the unit circle in a logical way, to ensure their closeness to those nucleic acids that form each particular amino acid.
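The unit-circle encoding of the bases can be sketched directly (an illustrative encoding table, not the authors' code):

```python
from itertools import product

# Complex-valued encoding of the nucleic acids: complementary bases map to
# opposite points on the unit circle (A = -T, G = -C).
NUCLEIC = {"A": 1 + 0j, "G": 1j, "T": -1 + 0j, "C": -1j}

def encode_codon(codon):
    """Map a three-letter codon, e.g. 'ATG', to the 3 complex network inputs."""
    return [NUCLEIC[base] for base in codon]

codons = ["".join(c) for c in product("AGCT", repeat=3)]   # all 64 codons
```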
Generation of the genetic code

The genetic code can be generated using the MLMVN 3→4→1 (3 inputs, 4 hidden neurons, and a single output neuron: 5 neurons in total). The best known result for the classical backpropagation neural network is generation of the code using the network 3→12→20 (32 neurons).
Blurred Image Restoration (Deblurring) and Blur Identification by MLMVN
Blurred Image Restoration (Deblurring) and Blur Identification by MLMVN
I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, "Blur Identification by Multilayer Neural Network based on Multi-Valued Neurons", IEEE Transactions on Neural Networks, vol. 19, No 5, May 2008, pp. 883-898.
Problem statement: capturing

Mathematically, a variety of capturing principles can be described by the Fredholm integral of the first kind:

z(x) = ∫_{ℝ²} v(x, t) y(t) dt,  x, t ∈ ℝ²,

where v(x, t) is a point-spread function (PSF) of a system, y(t) is a function of a real object, and z(x) is an observed signal.

[Figure: capturing in microscopy, tomography, and photography]
Image deblurring: problem statement

• Mathematically, blur is caused by the convolution of an image with a distorting kernel.
• Thus, removal of the blur reduces to deconvolution.
• Deconvolution is an ill-posed problem, which results in the instability of a solution. The best way to solve it is to use some regularization technique.
• To use any regularization technique, it is absolutely necessary to know the distorting kernel corresponding to a particular blur: it is necessary to identify the blur.
Image deblurring: problem statement

The observed image is given in the following form:

z(x) = (v ∗ y)(x) + ε(x),

where "∗" denotes the convolution operation, y is an image, v is the point-spread function (PSF) of a system, which is exactly the distorting kernel, and ε is noise.

In the continuous frequency domain the model takes the form:

Z(ω) = V(ω) Y(ω) + Ε(ω),

where Z(ω) = F{z(x)}, ω ∈ ℝ², is the representation of the signal z in the Fourier domain and F{·} denotes the Fourier transform.
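The equivalence of the spatial-domain and frequency-domain forms of the model can be checked numerically with NumPy FFTs (a toy example with a stand-in random image and a 3×3 boxcar PSF; cyclic convolution is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random((64, 64))                       # stand-in "true" image
v = np.zeros((64, 64))
v[:3, :3] = 1.0 / 9.0                          # 3x3 boxcar PSF
eps = 0.01 * rng.standard_normal((64, 64))     # additive noise

# Spatial domain: z = (v * y)(x) + eps(x), via cyclic convolution
z = np.real(np.fft.ifft2(np.fft.fft2(v) * np.fft.fft2(y))) + eps

# Frequency domain the convolution becomes multiplication: Z = V*Y + E
Z = np.fft.fft2(v) * np.fft.fft2(y) + np.fft.fft2(eps)
```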
Blur Identification

We use MLMVN to recognize Gaussian, motion, and rectangular (boxcar) blurs. We aim to identify simultaneously both the blur (PSF) and its parameters using a single neural network.
Considered PSF

[Figure: the Gaussian, motion, and rectangular PSF in the time domain and in the frequency domain]
Considered PSF

The Gaussian PSF (σ² is a parameter, the variance):

v(t) = (1 / (2πσ²)) exp(−(t₁² + t₂²) / (2σ²))

The uniform linear motion (h is a parameter, the length of motion; θ is the motion direction):

v(t) = 1/h, if √(t₁² + t₂²) ≤ h/2 and t₁ sin θ = t₂ cos θ; 0 otherwise

The uniform rectangular (h is a parameter, the size of the smoothing area):

v(t) = 1/h², if |t₁| ≤ h/2 and |t₂| ≤ h/2; 0 otherwise
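The three PSFs can be sketched as discrete kernels (illustrative helpers; the motion kernel is shown for the horizontal direction only, and boundary handling is simplified):

```python
import numpy as np

def gaussian_psf(size, sigma):
    """v(t) ~ exp(-(t1^2 + t2^2) / (2*sigma^2)), normalized to sum to 1."""
    r = np.arange(size) - size // 2
    t1, t2 = np.meshgrid(r, r)
    v = np.exp(-(t1 ** 2 + t2 ** 2) / (2.0 * sigma ** 2))
    return v / v.sum()

def motion_psf(size, h):
    """Uniform horizontal motion of length h: value 1/h on a line segment."""
    v = np.zeros((size, size))
    c = size // 2
    v[c, c - h // 2 : c - h // 2 + h] = 1.0 / h
    return v

def boxcar_psf(size, h):
    """Uniform rectangular h x h smoothing area: value 1/h^2 in a square."""
    v = np.zeros((size, size))
    c = size // 2
    v[c - h // 2 : c + h // 2 + 1, c - h // 2 : c + h // 2 + 1] = 1.0 / h ** 2
    return v
```

All three kernels integrate to 1, so the blur preserves the mean brightness of the image.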
Degradation in the frequency domain:
[Figure: the true image and images degraded by Gaussian, rectangular, horizontal-motion, and vertical-motion blur, with the logs of their power spectra, log|Z|]
Training Vectors

We state the problem as recognition of the shape of V, the Fourier spectrum of the PSF v, and of its parameters, from the power spectral density, whose distortions are typical for each type of blur.
Training Vectors

The training vectors X = (x₁, ..., x_n) are formed from log|Z|, the log of the power spectrum of the observed image (the zero-frequency component (0,0) is excluded):

x_j = exp( i 2π (K−1)/K · (log Z_{k₁k₂} − log Z_min) / (log Z_max − log Z_min) )

for
j = 1, ..., L/2 − 1, for k₁ = k₂, k₁ = 1, ..., L/2 − 1,
j = L/2, ..., L − 2, for k₂ = 1, k₁ = 1, ..., L/2 − 1,
j = L − 1, ..., 3L/2 − 3, for k₁ = 1, k₂ = 1, ..., L/2 − 1,

where L×L is the size of the image; the length of the pattern vector is n = 3L/2 − 3.
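A sketch of building one pattern vector (the sampling pattern along the diagonal, one row, and one column, and the (K−1)/K phase-scaling factor, are assumptions; names are illustrative):

```python
import numpy as np

def pattern_vector(img, K=6):
    """Form an MLMVN input vector from log|Z|, the log power spectrum of an
    L x L image: sample n = 3L/2 - 3 values (skipping the zero-frequency
    component), normalize them to [0, 1], and map onto the unit circle."""
    L = img.shape[0]
    logZ = np.log(np.abs(np.fft.fft2(img)) + 1e-12)
    samples = [logZ[k, k] for k in range(1, L // 2)]        # main diagonal
    samples += [logZ[1, k] for k in range(1, L // 2)]       # one row
    samples += [logZ[k, 1] for k in range(1, L // 2)]       # one column
    s = np.array(samples)
    s = (s - s.min()) / (s.max() - s.min())                 # into [0, 1]
    return np.exp(1j * 2 * np.pi * (K - 1) / K * s)         # onto unit circle

x = pattern_vector(np.random.default_rng(1).random((16, 16)))
```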
Examples of training vectors
[Figure: training vectors for the true image and for Gaussian, rectangular, horizontal-motion, and vertical-motion blur]
Neural Network 5→35→6

[Figure: the training (pattern) vector components 1, 2, ..., n feed the hidden layers; each output-layer neuron corresponds to one blur class (Blur 1, Blur 2, ..., Blur N)]
Output Layer Neuron
[Figure: reservation of domains on the unit circle for the output neuron]
Simulation

Experiment 1 (2700 training pattern vectors corresponding to 72 images): six types of blur with the following parameters; MLMVN structure 5→35→6:

1) The Gaussian blur with σ ∈ {1, 1.33, 1.66, 2, 2.33, 2.66, 3};
2) The linear uniform horizontal motion blur of lengths 3, 5, 7, 9;
3) The linear uniform vertical motion blur of lengths 3, 5, 7, 9;
4) The linear uniform diagonal motion (from South-West to North-East) blur of lengths 3, 5, 7, 9;
5) The linear uniform diagonal motion (from South-East to North-West) blur of lengths 3, 5, 7, 9;
6) The rectangular blur of sizes 3×3, 5×5, 7×7, 9×9.
Classification Results

| Blur | MLMVN: 381 inputs, 5→35→6, 2336 weights in total | SVM: ensemble of 27 binary decision SVMs, 25,717,500 support vectors in total |
|---|---|---|
| No blur | 96.0% | 100.0% |
| Gaussian | 99.0% | 99.4% |
| Rectangular | 99.0% | 96.4% |
| Motion horizontal | 98.5% | 96.4% |
| Motion vertical | 98.3% | 96.4% |
| Motion North-East diagonal | 97.9% | 96.5% |
| Motion North-West diagonal | 97.2% | 96.5% |
Restored images
[Figures: a blurred noisy image (rectangular 9×9) and its restoration; a blurred noisy image (Gaussian, σ = 2) and its restoration]
Classification of gene expression microarray data
Classification of gene expression microarray data

I. Aizenberg and J. Zurada, "Solving Selected Classification Problems in Bioinformatics Using Multilayer Neural Network based on Multi-Valued Neurons (MLMVN)", Proceedings of the International Conference on Artificial Neural Networks (ICANN-2007), Lecture Notes in Computer Science (J. Marques de Sá et al., Eds.), vol. 4668, Part I, Springer, Berlin, Heidelberg, New York, 2007, pp. 874-883.
"Lung" and "Novartis" data sets

• 4 classes in both data sets
• "Lung": 197 samples with 419 genes
• "Novartis": 103 samples with 697 genes
K-fold cross-validation with K = 5

• "Lung": 117-118 of the 197 samples in each training set and 39-40 in each testing set
• "Novartis": 80-84 of the 103 samples in each training set and 19-23 in each testing set
MLMVN
Structure of the network: n→6→4 (n inputs, 6 hidden neurons, and 4 output neurons).

The learning process converges very quickly: 500-1000 iterations, taking up to 1 minute on a PC with a Pentium IV 3.8 GHz CPU, are required for both data sets.
Results: Classification Rates
• MLMVN: 98.55% ("Novartis"), 95.945% ("Lung")
• kNN (k = 1): 97.69% ("Novartis"); kNN (k = 2): 92.55% ("Lung")
Wisconsin Breast Cancer Database
Wisconsin Breast Cancer Database

• 699 samples drawn from the classes "Benign" (458 samples) and "Malignant" (241 samples); each entry is described by a 9-dimensional vector of 10-valued features
• The training set contains 350 randomly selected samples; the test set contains the remaining 349 samples
MLMVN
Structure of the network: 9→8→1 (9 inputs, 8 hidden neurons, and a single output neuron).

The hidden neurons operate with "discrete inputs / continuous output"; the output neuron operates with "continuous inputs / discrete output".
Results: Classification Rates
• MLMVN 9→8→1: 95.99%
• Standard backpropagation network 9→9→1: 91.2%
• Standard backpropagation network 9→9→1 followed by a fuzzy-rules-based classifier: 95%
Some Prospective Applications

• Solving various recognition and classification problems, especially those where formalizing the decision rule is complicated
• Classification and prediction in biology, including protein secondary structure prediction
• Time series prediction
• Modeling of complex systems, including hybrid systems that depend on many parameters
• Intelligent image processing: edge detection, nonlinear filtering (MLMVN can be used as a nonlinear filter), impulse noise detection
• Etc.