TRANSCRIPT
Data Mining
What’s Machine Learning
M. Brescia - Data Mining - lezione 3 2
Field of study that gives computers the ability to learn without being explicitly programmed.
Arthur Samuel (1959)
A computer program is said to learn from experience E with respect to some task T and some
performance measure P, if its performance on T, as measured by P, improves with experience
E.
Tom Mitchell (1998)
Machine Learning is a scientific discipline that is concerned with the design and development
of algorithms that allow computers to learn based on data-driven resources (sensors,
databases). A major focus of machine learning is to automatically learn to recognize complex
patterns and make intelligent decisions based on data.
ML origins: from Aristotle to Darwin
The Greek philosopher Aristotle was one of the first to attempt to codify
"right thinking," that is, syllogisms: irrefutable reasoning processes. His
syllogisms provided patterns for argument structures that always yielded
correct conclusions when given correct premises. For example, "Socrates
is a man; all men are mortal; therefore, Socrates is mortal." These laws of
thought were supposed to govern the operation of the mind; their
study initiated the field called logic
By 1965, programs existed that could, in principle, process any
solvable problem described in logical notation. The so-called
logicist tradition within artificial intelligence hopes to build on such
programs to create intelligent systems, and ML theory
represents their demonstration discipline. A reinforcement in this
direction came from integrating the ML paradigm with statistical
principles, following Darwin's law of natural evolution
ML supervised paradigm
In supervised ML we have a set of data points or observations for which we know the desired
output, expressed in terms of categorical classes, numerical or logical variables, or as a
generic observed description of any real problem. The desired output in fact provides
some level of supervision: it is used by the learning model to adjust parameters or
make decisions, allowing it to predict the correct output for new data.
Finally, when the algorithm is able to correctly predict observations, we call it a classifier.
Some classifiers are also capable of providing results in a more probabilistic sense, i.e. the
probability of a data point belonging to a class. We usually refer to such model behavior as
regression
ML supervised process (1/2)
Pre-processing of data
build input patterns appropriate for feeding into our supervised learning algorithm. This
includes scaling and preparation of data;
Create data sets for training and evaluation
randomly splitting the universe of data patterns. The training set consists of the data used by
the classifier to learn the internal feature correlations, whereas the evaluation set is used to
validate the already trained model in order to obtain an error rate (or other validation
measures) that helps quantify the performance and accuracy of the classifier. Typically
you will use more training data than validation data;
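These two steps can be sketched in a few lines of Python. This is a minimal sketch with synthetic data; the array shapes, the 70/30 split ratio, and the zero-mean/unit-variance scaling are illustrative assumptions, not from the slides:

```python
import numpy as np

# Minimal sketch: 100 synthetic patterns with 4 features and a binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)

# Pre-processing: scale each feature to zero mean and unit variance.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

# Randomly split the universe of patterns: more training than validation data.
idx = rng.permutation(len(X_scaled))
n_train = int(0.7 * len(idx))
train_idx, eval_idx = idx[:n_train], idx[n_train:]
X_train, y_train = X_scaled[train_idx], y[train_idx]
X_eval, y_eval = X_scaled[eval_idx], y[eval_idx]
```

The random permutation guarantees that no pattern appears in both sets.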
Training of the model
We execute the model on the training
data set. The output result consists of
a model that (in the successful case)
has learned how to predict the
outcome when new unknown data
are submitted;
ML supervised process (2/2)
Validation
After we have created the model, a test of its performance is of course required: accuracy,
completeness and contamination (or its dual, purity). It is particularly crucial to do this on
data that the model has not seen yet. This is the main reason why, in the previous steps, we
separated the data set into training patterns and a subset of the data not used for training.
Use
If validation was successful, the model
has correctly learned the underlying
real problem. We can then proceed
to use the model to classify/predict
new data.
Verify Model
verify and measure the generalization capabilities of the model. It is very easy to learn every
single combination of input vectors and their mappings to the output as observed on the
training data, and we can achieve a very low error in doing so; but how do the very same
rules or mappings perform on new data that may have different input-to-output mappings?
If the classification error of the validation
set is higher than the training error, then
we have to go back and adjust model
parameters.
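The verification step just described, comparing training error against validation error to detect over-fitting, can be illustrated with a toy polynomial fit; the dataset, polynomial degrees, and noise level are invented for illustration:

```python
import numpy as np

# Toy regression: fit polynomials of different complexity and compare
# the training error with the error on held-out validation points.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=40)
x_tr, y_tr = x[:30], y[:30]          # training patterns
x_va, y_va = x[30:], y[30:]          # patterns never seen during training

def errors(degree):
    coef = np.polyfit(x_tr, y_tr, degree)               # "train" the model
    err_tr = np.mean((np.polyval(coef, x_tr) - y_tr) ** 2)
    err_va = np.mean((np.polyval(coef, x_va) - y_va) ** 2)
    return err_tr, err_va

tr_lo, va_lo = errors(3)    # moderate complexity
tr_hi, va_hi = errors(15)   # high complexity: tends to memorize training data
# If va_hi is much larger than tr_hi, we go back and adjust the model.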
Machine Learning - Supervised
[Diagram: the Knowledge Base is split into a Train Set and a Blind Test Set; the model is trained on the former, blind-tested on the latter, and the results are then analysed (Analysis of results).]
[Diagram: the Trained Network, applied to data from The World, produces New Knowledge.]
M. Brescia - Data Mining – ViaLactea Progress Meeting – Catania Feb 14, 2014
Glossary
§ Data can be tables, images, streaming vectors. They may be represented in the form of
numbers, percentages, pixel values, literals, strings, probabilities, or any other entity giving
information on a physical/conceptual/simulated event or phenomenon of our world.
§ Dataset is a set of samples representing a problem. All samples must be expressed in a
uniform way (i.e. same dimensions and representation).
§ Pattern is a sequence of symbols/values identifying a single sample of any dataset.
§ Feature is an atomic element of a pattern, i.e. a number or symbol representing one
characteristic of the pattern (carrier of hidden information).
§ Target (supervised dataset) is usually a label (number/symbol) or a set of labels
representing the solution (desired/known output) of a single pattern. If unknown or missing,
the pattern belongs to the unsupervised category of datasets.
§ Base of Knowledge (BoK) is the ensemble of datasets in which the patterns contain the
target (known solutions to a real problem). It is always available for supervised ML.
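The glossary terms map naturally onto arrays; a minimal sketch in which all feature values and labels are hypothetical:

```python
import numpy as np

# A toy supervised dataset: each row is a pattern, each column a feature,
# and a separate vector holds the target (known solution) of each pattern.
patterns = np.array([[5.1, 3.5, 1.4],   # pattern 0: three features
                     [6.2, 2.9, 4.3],   # pattern 1
                     [5.9, 3.0, 5.1]])  # pattern 2
targets = np.array(["classA", "classB", "classB"])  # hypothetical labels

n_patterns, n_features = patterns.shape

# The Base of Knowledge (BoK) pairs every pattern with its known target:
bok = list(zip(patterns, targets))
```

If the `targets` vector were missing, the same `patterns` array would belong to the unsupervised category of datasets.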
Examples of BoK - Astronomy
UMAG,GMAG,RMAG,IMAG,ZMAG,nuv,fuv,YMAG,JMAG,HMAG,KMAG,w1,w2,w3,w4,zspec
20.38,20.46,20.32,20.09,20.04,0.65,3.21,19.28,18.963,19.286,17.505,16.828,15.238,12.238,8.579,1.824
19.465,19.368,19.193,19.015,0.219,1.397,18.29,17.76,16.97,15.77,14.26,13.2,10.76,8.158,0.459,1.934
17.995,17.934,17.873,1.865,0.132,16.863,16.597,15.902,14.75,13.33,12.28,9.5,7.37,0.478,20.49,2.247
20.13,20.36,1.43,4.22,19.906,19.409,18.427,17.935,17.076,15.589,12.619,8.863,1.4365, 8.15,0.45,1.93
Dataset: a set of galaxies observed by a space telescope. Each pattern has 15 features
(fluxes at different emission wavelengths) + one target (the redshift of each galaxy,
related to its recession velocity).
Usually such a BoK is made of hundreds of
thousands of galaxies (patterns).
The ML problem is to learn to predict the
redshift of new objects observed in
further space missions.
Examples of BoK - Astronomy
Dataset: a large multi-band image of a nebula, with millions of patterns of galaxies and stars
(their spectra). Features are peaks in the object spectrum and the target is the type of object.
ML problem: learn to classify objects (star/galaxy separation)
star
galaxy
Examples of BoK – Web Traffic
Dataset: a huge amount of TCP/UDP packets over the network, to be classified while respecting
privacy and evaluating their impact on CPU load and transmission frequency
Examples of BoK – fusion reactors
Goals: In the core of a tokamak there is the vacuum vessel where the fusion plasma is confined by
means of strong magnetic fields and plasma currents (up to 4 tesla and 5 mega amperes).
a) to “surf” JET discharges, i.e. to find the list of discharges with the parameters required by the user
b) Help for decision-making
c) to integrate new and existing diagnostic faults and processing
d) to monitor and improve data quality and validation of scientific production.
e) to understand the effective use of a diagnostic, in order to improve the efficiency of data storage
and production but also to identify redundant, unusable or false data
f) to allow searches in which the data analysis can be completely replaced by a simple query (with an
easy and quick interface)
g) to develop new diagnostic systems: for example producing a cross correlation between
measurements of the same physical quantity made by different techniques
h) to obtain important and unpredictable results using a data mining system that allows the discovery
of hidden connections in the data
i) Cost reductions for analysis tools and maintenance.
Examples of BoK – facial recognition
Goal: identification of a face among millions of image samples, based on the dimension reduction of the
facial parameter space and pattern recognition
Examples of BoK – smoke & fire detection
fast smoke/fire detection, on-line alert
Examples of BoK – wine classification
chemical analysis of wines grown in the same region in Italy but
derived from three different cultivars.
The analysis determined the quantities of 13 constituents
found in each of the three types of wines.
1) Alcohol
2) Malic acid
3) Ash
4) Alcalinity of ash
5) Magnesium
6) Total phenols
7) Flavanoids
8) Nonflavanoid phenols
9) Proanthocyanins
10) Color intensity
11) Hue
12) OD280/OD315 of diluted wines
13) Proline
14) Target class:
1) Aglianico
2) Falanghina
3) Lacryma christi
14.23 1.71 2.43 15.6 127 2.8 3.06 .28 2.29 5.64 1.04 3.92 1065 1
13.2 1.78 2.14 11.2 100 2.65 2.76 .26 1.28 4.38 1.05 3.4 1050 1
13.16 2.36 2.67 18.6 101 2.8 3.24 .3 2.81 5.68 1.03 3.17 1185 2
14.37 1.95 2.5 16.8 113 3.85 3.49 .24 2.18 7.8 .86 3.45 1480 3
13.24 2.59 2.87 21 118 2.8 2.69 .39 1.82 4.32 1.04 2.93 735 1
14.2 1.76 2.45 15.2 112 3.27 3.39 .34 1.97 6.75 1.05 2.85 1450 3
14.39 1.87 2.45 14.6 96 2.5 2.52 .3 1.98 5.25 1.02 3.58 1290 2
14.06 2.15 2.61 17.6 121 2.6 2.51 .31 1.25 5.05 1.06 3.58 1295 2
…
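A hedged sketch of how such rows could be parsed into patterns (13 features) and targets; only the two rows copied from the listing above are used:

```python
# Parse whitespace-separated rows like those above: 13 chemical features
# followed by the target class (only two rows copied from the slide).
rows = """\
14.23 1.71 2.43 15.6 127 2.8 3.06 .28 2.29 5.64 1.04 3.92 1065 1
13.16 2.36 2.67 18.6 101 2.8 3.24 .3 2.81 5.68 1.03 3.17 1185 2
"""

features, labels = [], []
for line in rows.strip().splitlines():
    values = [float(v) for v in line.split()]
    features.append(values[:-1])    # the 13 constituents
    labels.append(int(values[-1]))  # target class: 1, 2 or 3
```

The resulting `features`/`labels` pair is exactly the BoK structure a supervised classifier expects.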
Examples of BoK - Medicine
Machine learning systems can be used to develop the knowledge bases used by expert systems. Given a
set of clinical cases that act as examples, a machine learning system can produce a systematic description
of those clinical features that uniquely characterize the clinical conditions. This knowledge can be
expressed in the form of simple rules, or often as a decision tree.
It is possible, using patient data, to automatically construct pathophysiological models that describe the
functional relationships between the various measurements. For example, a learning system can take
real-time patient data obtained during cardiac bypass surgery and then create models of normal and
abnormal cardiac physiology. These models might be used to look for changes in a patient's condition if
used at the time they are created. Alternatively, if used in a research setting, these models can serve as
initial hypotheses that can drive further experimentation.
One particularly exciting development has been the use of
learning systems to discover new drugs. The learning system is
given examples of one or more drugs that weakly exhibit a
particular activity, and based upon a description of the
chemical structure of those compounds, the learning system
suggests which of the chemical attributes are necessary for
that pharmacological activity.
ML Functionalities
In the DM scenario, the choice of ML model should always be accompanied by the functionality domain,
i.e. the functional context in which the exploration of data is performed. To be more precise, several ML
models can be used within the same functionality domain.
Examples of such domains are:
Dimensional reduction;
Classification;
Regression;
Clustering;
Segmentation;
Forecasting;
Data Model Filtering;
Statistical data analysis;
The core of Machine Learning
Whatever the functionality or the model of interest in the machine learning context, the key point is
always the concept of LEARNING
More practically, having in mind the functional taxonomy described in the previous slide, there are
essentially four kinds of learning related to ML for DM:
Learning by association;
Learning by classification;
Learning by prediction;
Learning by grouping (clustering);
Learning by association
Learning by association consists of identifying any structure hidden in the data. The goal is not
to identify the membership of patterns in specific classes, but to predict the value of any feature
attribute by simply recalling it, i.e. by associating it to a particular state or sample of the real problem.
It is evident that in the case of association we are dealing with very generic problems, i.e. those requiring
less precision than in the classification case. In fact, the complexity grows with the range of possible
multiple values for feature attributes, potentially causing mismatches in the association results.
In practical terms, fixed percentage thresholds are given in order to
reduce the mismatch occurrence for different association rules, based
on the experience on that problem and related data. The
representation of data for associative learning is thus based on the
labeling of features with non-numerical values or by alpha-numeric
coding.
Learning by classification
Classification learning is often named simply “supervised” learning, because the process of learning the
right assignment of a label to a datum, representing its category or “class”, is usually done by examples.
Learning by examples stands for a training scheme operating under the supervision of an oracle, able to
provide the correct, already known, outcome for each of the training samples. This outcome is
properly a class or category of the examples. Its representation depends on the available Base of
Knowledge (BoK) and on its intrinsic nature, but in most cases it is based on a series of numerical
attributes, related to the extracted BoK, organized and submitted in a homogeneous way.
The success of classification learning is
usually evaluated by trying out the
acquired feature description on an
independent set of data, having
known output but never submitted to
the model before.
Learning by prediction
Slightly different from the classification scheme is prediction learning. In this case the outcome consists
of a numerical value instead of a class label (this is often called REGRESSION).
The numeric prediction is obviously related to a quantitative result, because the predicted value is much
more interesting than the structure of the concept behind the numerical outcome.
Learning by clustering
Whenever there is no class attribution, clustering is used to group data that show naturally
similar features. Of course the challenge of a clustering experiment is to find these clusters and assign
the input data to them.
The data may be given in the form of categorical/numerical tables, and the success of a clustering
process can be evaluated in terms of human experience on the problem, or a posteriori by means of a
second step of the experiment, in which a classification learning process is applied in order to learn an
intelligent mechanism for how new data samples should be clustered.
Clustering can be performed top-down
(from the largest clusters down to singletons), or
bottom-up (from singletons up to larger clusters).
Both types may be represented by dendrograms
How to learn data?
In the wide variety of possible applications for ML, DM is of course one of the most important, but also
the most challenging. The more massive the data set to be investigated, the more problems users
encounter. Finding hidden relationships between multiple features in thousands of patterns is hard,
especially considering the limited capacity of the human brain to form a clear vision in a parameter
space of more than 3 dimensions.
In order to discuss the learning of data in depth we recall the ML paradigm, distinguishing between data
whose features are provided with known labels (target attributes), defined as supervised learning, and
data whose features are unlabeled, called unsupervised learning. With these concepts in mind we can
discuss in the next sections the wide-ranging issues of both kinds of ML.
Machine Learning & Statistical Models
Neural
Networks
Feed Forward
Recurrent / Feedback
• Perceptron
• Multi Layer Perceptron
• Radial Basis Functions
• Competitive Networks
• Hopfield Networks
• Adaptive Reasoning Theory
• Bayesian Networks
• Hidden Markov Models
• Mixture of Gaussians
• Principal Probabilistic Surface
• Maximum Likelihood
• χ2
• Negentropy
Decision
Analysis
• Fuzzy Sets
• Genetic Algorithms
• K-Means
• Principal Component Analysis
• Support Vector Machine
• Soft Computing
Statistical
Models
• Decision Trees
• Random Decision Forests
• Evolving Trees
• Minimum Spanning Trees
Hybrid
Artificial Neural Networks
Artificial Neural Network:
- consists of simple, adaptive processing units, called neurons
- the neurons are interconnected, forming a large network
- parallel computation, often in layers
- nonlinearities are used in computations
Important property of neural networks: learning from
input data.
- with teacher (supervised learning)
- without teacher (unsupervised learning)
Artificial neural networks have their roots in:
- neuroscience
- mathematics and statistics
- computer science
- engineering
Neural computing was inspired
by computing in human brains
ANN - introduction
Application areas of neural networks:
– modeling
– time series processing
– pattern recognition
– signal processing
– automatic control
Neural networks resemble the brain in
two respects:
1. The network acquires knowledge from its environment
using a learning process (algorithm)
2. Synaptic weights, which are inter-neuron connection
strengths, are used to store the learned information.
Fully connected 10-4-2 feedforward
network with 10 source (input) nodes,
4 hidden neurons, and 2 output neurons.
Principle of neural modeling
The inputs are known or they can be measured.
The behavior of outputs is investigated when input varies.
All information has to be converted into vector form.
Benefits of ANNs
Nonlinearity
- Allows modeling of nonlinear functions and processes.
- Nonlinearity is distributed through the network.
- Each neuron typically has a nonlinear output.
- Using nonlinearities has drawbacks: local minima.
Input-Output Mapping
- In supervised learning, the input-output mapping is learned from training data.
- For example known prototypes in classification.
- Typically, some statistical criterion is used.
- The synaptic weights (free parameters) are modified to optimize the criterion.
Adaptivity
- Weights (parameters) can be retrained with new data.
- The network can adapt to non-stationary environment.
- However, the changes must be slow enough.
Evidential Response
Contextual Information
Fault Tolerance
VLSI (Very Large Scale Integration) Implementability
Uniformity of Analysis and Design
Neurobiological Analogy
- Human brains are fast, powerful, fault tolerant, and use massively parallel computing.
- Neurobiologists try to explain the operation of human brains using artificial neural networks.
- Engineers use neural computation principles for solving complex problems.
Model of a Neuron
A neuron is the fundamental information processing unit of a neural network.
The neuron model consists of three (or four) basic elements:
A set of synapses or connecting links:
- Characterized by weights (strengths).
- xj denote a signal at the input of synapse j.
- When connected to neuron k, xj is multiplied by the synaptic weight wkj .
- weights are usually real numbers.
An adder (linear combiner):
- Sums the weighted inputs wkjxj
An activation function:
- Applied to the output of a neuron,
limiting its value.
- Typically a nonlinear function.
- Called also squashing function.
Sometimes a neuron includes an
externally applied bias term bk
Neuron mathematics
Mathematical equations describing neuron k:
u_k = Σ_{j=1..m} w_kj x_j   (1)        y_k = φ(u_k + b_k)   (2)
Here:
- uk is the linear combiner output;
- ϕ(.) is the activation function;
- yk is the output signal of the neuron;
- x1, x2, . . . , xm are the m input signals;
- wk1,wk2, . . . ,wkm are the respective m synaptic weights;
A mathematically equivalent representation:
Add an extra synapse with input x0 = +1 and weight wk0 = bk. The equations are now slightly simpler:
v_k = Σ_{j=0..m} w_kj x_j   (1)
y_k = φ(v_k)   (2)
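Equations (1) and (2), and their bias-as-weight variant, can be checked numerically; the input values, weights, and the choice of tanh as the activation φ are illustrative:

```python
import numpy as np

def neuron_output(x, w, b, phi=np.tanh):
    """y_k = phi(u_k + b_k), where u_k = sum_j w_kj * x_j (the linear combiner)."""
    u = np.dot(w, x)      # the adder sums the weighted inputs w_kj * x_j
    return phi(u + b)     # the activation function limits the output value

def neuron_output_bias_as_weight(x, w, b, phi=np.tanh):
    """Equivalent form: bias absorbed as weight wk0 on a constant input x0 = +1."""
    x_ext = np.concatenate(([1.0], x))
    w_ext = np.concatenate(([b], w))
    return phi(np.dot(w_ext, x_ext))

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
y1 = neuron_output(x, w, b=0.3)
y2 = neuron_output_bias_as_weight(x, w, b=0.3)
```

Both forms produce the same output, which is why the bias is often treated as just another synaptic weight.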
Typical activation functions
Threshold function ϕ(u) = 1, u ≥ 0; ϕ(u) = 0, u < 0;
Piecewise-linear function: saturates at 1 and 0
Sigmoid function:
• Most commonly in ANNs;
• The figure shows the logistic function defined by (3);
• The slope parameter a is important;
• When a → ∞, the logistic sigmoid approaches the threshold function;
• Continuous balance between linearity and non-linearity;
• The tanh(au) function allows the activation to take negative values (it is one
of the most used when the network parameters (weights) are normalized in [-1, +1]);
φ(u) = 1 / (1 + exp(−a u))   (3)
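A quick numerical illustration of these activation functions; the sample points and the steep slope value are arbitrary choices:

```python
import numpy as np

def threshold(u):
    return np.where(u >= 0, 1.0, 0.0)

def logistic(u, a=1.0):
    # Equation (3): phi(u) = 1 / (1 + exp(-a*u)); a is the slope parameter.
    return 1.0 / (1.0 + np.exp(-a * u))

u = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])   # sample points (u != 0)
# With a very steep slope the logistic approaches the threshold function:
gap = float(np.max(np.abs(logistic(u, a=100.0) - threshold(u))))
# tanh allows negative activation values, unlike the logistic:
neg = float(np.tanh(-2.0))
```

Away from u = 0 the gap between the steep logistic and the threshold function is negligible, confirming the limit a → ∞.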
Stochastic model of a neuron
The activation function of the McCulloch-Pitts early neuronal model (1943) is the threshold function.
The neuron is permitted to reside in only two states, say x = +1 and x = −1.
In the stochastic model, a neuron fires (switches its state x) according to a probability.
The state is x = 1 with probability P(v)
The state is x = −1 with probability 1 − P(v)
A standard choice for the probability is the sigmoid-type function P(v) = 1 / [1 + exp(−v/T)].
Here T is a parameter controlling the uncertainty in firing, called pseudotemperature.
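The stochastic firing rule can be simulated directly; the sample size and the values of v and T are arbitrary:

```python
import numpy as np

def fire_probability(v, T=1.0):
    """Sigmoid-type probability P(v) = 1 / (1 + exp(-v/T)); T is the pseudotemperature."""
    return 1.0 / (1.0 + np.exp(-v / T))

def stochastic_state(v, T, rng):
    """The state is +1 with probability P(v), and -1 with probability 1 - P(v)."""
    return 1 if rng.random() < fire_probability(v, T) else -1

rng = np.random.default_rng(42)
states = [stochastic_state(v=0.5, T=1.0, rng=rng) for _ in range(10_000)]
fraction_up = states.count(1) / len(states)   # should approach P(0.5) ~ 0.62
```

Raising T flattens the sigmoid and makes firing more uncertain; lowering T toward 0 recovers the deterministic McCulloch-Pitts threshold behavior.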
Neural networks can be represented in terms of signal-flow graphs.
Nonlinearities appearing in a neural network cause that two different types of links
(branches) can appear:
1. Synaptic links, having a linear input-output relation: y_k = w_kj x_j
2. Activation links, with a nonlinear input-output relation: y_k = φ(x_j)
Neurons as signal-flow graphs
Signal-flow graph consists of directed branches
• The branches sum up in nodes
• Each node j has a signal xj
• Branch kj starts from node j and ends at node k; wkj is the synaptic weight corresponding to the signal damping
Three basic rules:
– Rule 1
Signal flows only to the direction of arrow.
Signal strength will be multiplied with strengthening factor wkj
– Rule 2
Node signal = Sum of incoming signals from branches
– Rule 3
Node signal will be transmitted to each outgoing branch
Neuron as an architectural graph
Single-loop feedback system
Feedback: the output of an element of a dynamic system affects the input of that element.
• Thus in a feedback system there are closed paths.
• Feedback appears almost everywhere in natural nervous systems.
• Important in recurrent networks
• Signal-flow graph of a single-loop feedback system
ANN are feedback systems
There are three fundamentally different classes of network architectures
1 - Single-layer feed-forward network
The simplest form of neural networks.
• The input layer of source nodes projects onto an output layer of neurons (computation nodes).
• The network is strictly a feedforward or acyclic type, because there is no feedback.
• Such a network is called a single-layer network.
Single-Layer: Perceptron
The SLP network that emulates the AND function:
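A sketch of such an SLP; the weights and bias emulating AND are one possible hand-picked choice, not necessarily those shown on the slide:

```python
import numpy as np

def perceptron(x, w, b):
    """Single-layer perceptron: threshold activation on the weighted sum."""
    return 1 if np.dot(w, x) + b >= 0 else 0

# Hand-picked weights emulating the logical AND of two binary inputs:
# the neuron fires only when both inputs are 1 (sum 2 - 1.5 = 0.5 >= 0);
# any other combination leaves the weighted sum below zero.
w_and, b_and = np.array([1.0, 1.0]), -1.5

truth_table = {(x1, x2): perceptron(np.array([x1, x2]), w_and, b_and)
               for x1 in (0, 1) for x2 in (0, 1)}
```

A single layer suffices here because AND is linearly separable; the line x1 + x2 = 1.5 cleanly divides the two output classes.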
2 - Multi-Layer feed-forward networks
In a multilayer network, there are one or more hidden layers.
• Their computation nodes are called hidden neurons or hidden units.
• Hidden neurons can extract higher-order statistics and acquire more global information.
• Typically, input signals of a layer consist of the output signals of the preceding layer only.
Multi-Layer Perceptron
The MLP network that emulates the XOR function:
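A sketch of such an MLP; the hand-picked weights (an OR unit and an AND unit feeding the output) are one common construction, not necessarily the one shown on the slide:

```python
import numpy as np

def step(u):
    return (np.asarray(u) >= 0).astype(int)

def mlp_xor(x1, x2):
    """Two-layer perceptron computing XOR: one hidden unit detects OR,
    the other detects AND, and the output fires for OR AND NOT AND."""
    x = np.array([x1, x2])
    W_hidden = np.array([[1.0, 1.0],      # OR unit (bias -0.5)
                         [1.0, 1.0]])     # AND unit (bias -1.5)
    b_hidden = np.array([-0.5, -1.5])
    h = step(W_hidden @ x + b_hidden)
    w_out, b_out = np.array([1.0, -2.0]), -0.5   # output = OR - 2*AND - 0.5
    return int(np.dot(w_out, h) + b_out >= 0)

outputs = {(a, b): mlp_xor(a, b) for a in (0, 1) for b in (0, 1)}
```

XOR is not linearly separable, so no single-layer perceptron can compute it; the hidden layer extracts the higher-order OR/AND features that make the final decision linear.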
3 – Recurrent Networks
A recurrent neural network has at least one feedback loop.
• In a feedforward network there are no feedback loops.
• Recurrent network with:
- No self-feedback loops to the “own” neuron.
- No hidden neurons.
The feedback loops have a profound impact on the learning
capability and performance of the network.
The unit-delay elements result in a nonlinear dynamical
behavior if the network contains nonlinear elements.
Knowledge representation
Definition: Knowledge refers to stored information or models used by a person or machine to
interpret, predict, and appropriately respond to the outside world.
• In knowledge representation one must consider:
1. What information is actually made explicit;
2. How the information is physically encoded for subsequent use.
• A well performing neural network must represent the knowledge in an appropriate way.
• A real design challenge, because there are highly diverse ways of representing information.
• A major task for a neural network: learn a model of the world (environment) where it is
working.
Two kinds of information about the environment:
1. Prior information = the known facts.
2. Observation (measurements). Usually noisy, but give examples for training the neural
network.
• The examples can be:
- labeled, with a known desired response (target output) to an input signal.
- unlabeled, consisting of different realizations of the input signal.
• A set of pairs, consisting of an input and the corresponding desired response, form a set of
training data or training sample
Knowledge representation
An example: Handwritten digit recognition
• Input signal: a digital image with black and white pixels.
• Each image represents one of the 10 possible digits.
• The training sample consists of a large variety of hand-written digits from a real-world
situation.
• An appropriate architecture in this case:
- Input signals consist of image pixel values.
- 10 outputs, each corresponding to a digit class.
• Learning: The network is trained using a suitable algorithm with a subset of examples.
• Generalization: After this, the recognition performance of the network is tested with data
not used in learning.
Rules for knowledge representation
The free parameters (synaptic weights and biases) represent knowledge of the surrounding
environment.
• Four general rules for knowledge representation.
• Rule 1. Similar inputs from similar classes should produce similar representations inside
the network, and they should be classified to the same category.
• Let x_i denote the column vector x_i = [x_i1, x_i2, …, x_im]^T
Rules for knowledge representation
Rule 2: Items to be categorized as separate classes should be given widely different
representations in the network.
• Rule 3: If a particular feature is important, there should be a large number of neurons
involved in representing it in the network.
• Rule 4: Prior information and invariances should be built into the design of a neural
network.
How to build invariance in ANNs
Classification systems must be invariant to certain transformations depending on the
problem.
• For example, a system recognizing objects from images must be invariant to rotations and
translations.
• At least three techniques exist for making classifier-type neural networks invariant to
transformations.
1. Invariance by Structure
- Synaptic connections between the neurons are created so that transformed versions of the
same input are forced to produce the same output.
- Drawback: the number of synaptic connections tends to grow very large.
2. Invariance by Training
- The network is trained using different examples of the same object corresponding to
different transformations (for example rotations).
- Drawbacks: computational load, generalization mismatch for other objects.
3. Invariant feature space
- Try to extract features of the data invariant to transformations.
- Use these instead of the original input data.
- Probably the most suitable technique to be used for neural classifiers.
- Requires prior knowledge on the problem.
How to build invariance in ANNs
• Optimization of the structure of a neural network is difficult.
• Generally, a neural network acquires knowledge about the problem through training.
• The knowledge is represented by a distributed and compact form by the synaptic
connection weights.
• Neural networks lack an explanation capability, are not able to handle uncertainty, and do
not gain from probabilistic evolution of data samples.
• A possible solution: integrate a neural network and artificial intelligence into a hybrid system.
Neural Networks: connectionism, learning, generalization
Fuzzy Logic: uncertainty, incompleteness, approximation
Genetic Algorithms: optimization, randomness, evolution, robustness
MLP – learning – Back Propagation
[Figure: MLP training with Back Propagation. The forward phase propagates the input patterns through the layers (activation function); the backward phase back-propagates the output error (compared against a stopping threshold) and applies the law for updating the hidden weights, governed by the learning rate and by a momentum term used to jump over the error surface.]
Back Propagation learning rule
Other typical problems of the back-propagation algorithm are the speed of convergence and the
possibility of ending up in a local minimum of the error function.
Back Propagation requires that the activation function used by the artificial neurons (or "nodes")
is differentiable. Main formulas are:
•(3) and (4) are the activation function for a neuron of the, respectively, hidden layer and output layer. This
is the mechanism to process and flow the input pattern signal through the “forward” phase;
•At the end of the “forward” phase the network error is calculated (inner argument of the (5)), to be used
during the “backward” or top-down phase to modify (adjust) neuron weights;
•(5) and (6) are the descent gradient calculations of the “backward” phase, respectively, for a generic
neuron of the output and hidden layer;
•(7) and (8) are the most important laws of the backward phase. They represent the weight modification
laws, respectively, between output and hidden layers (7) and between hidden and input layers (or
hidden-hidden, if more than one hidden layer is present in the network topology) (8). The new weights
are adjusted by adding two terms to the old ones:
•ηδhf(j): the descent gradient multiplied by a parameter, defined as the “learning rate”, generally
chosen sufficiently small in [0, 1[, in order to induce a smooth learning variation at each backward
stage during training;
•αΔw_oldjh: the weight variation multiplied by a parameter, defined as the “momentum”, generally
chosen quite high in [0, 1[, in order to give a large change to the weights to prevent getting stuck in
“local minima”
w_new_jh = w_old_jh + η δ_h f(j) + α Δw_old_jh   (7)
w_new_hi = w_old_hi + η δ_i f(h) + α Δw_old_hi   (8)
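A minimal numerical sketch of one forward/backward cycle using the update law (7) for the output layer; the 2-2-1 topology, the logistic activation, and the parameter values are illustrative assumptions:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])                 # one training pattern
t = 1.0                                  # its desired output
W1 = rng.normal(scale=0.5, size=(2, 2))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=2)       # hidden -> output weights
dW2_old = np.zeros_like(W2)              # previous weight variation (momentum)
eta, alpha = 0.1, 0.8                    # learning rate (small), momentum (high)

# Forward phase: propagate the input pattern through the layers.
h = sigmoid(W1 @ x)
y = sigmoid(np.dot(W2, h))
err_before = 0.5 * (t - y) ** 2

# Backward phase: descent-gradient term for the output neuron,
# delta = (t - y) * f'(v), with f'(v) = y * (1 - y) for the logistic.
delta_out = (t - y) * y * (1 - y)
dW2 = eta * delta_out * h + alpha * dW2_old   # update law (7)
W2_new = W2 + dW2

err_after = 0.5 * (t - sigmoid(np.dot(W2_new, h))) ** 2
```

A full implementation would also apply law (8) to W1 and iterate over all training patterns until the error falls below the stopping threshold.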
BP – Regression error estimation
An MLP with BP can be used for regression and classification problems. Both are treated as
optimization problems (i.e. minimization of the error function to achieve the best training in a supervised
fashion).
For regression problems the basic goal is to model the conditional distribution of
the output variables, conditioned on the input variables.
This motivates the use of a sum-of-squares error function. But for classification
problems the sum-of-squares error function is not the most appropriate choice.
BP – Classification error estimation
By assigning a sigmoidal activation function on the output layer of the neural network, the outputs can be
interpreted as posterior probabilities.
In fact, the outputs of a network trained by minimizing a sum-of-squares error function approximate
the posterior probabilities of class membership, conditioned on the input vector, according to the
maximum likelihood principle, under the assumption that the target data were generated from a smooth
deterministic function with added Gaussian noise. For classification problems, however, the targets are
binary variables and hence far from having a Gaussian distribution, so they cannot be described by a
Gaussian noise model.
Therefore a more appropriate choice of error function is needed.
Let us now consider problems involving two classes. One approach to such problems would be to use a
network with two output units, one for each class. But let’s discuss an alternative approach in which we
consider a network with a single output y. We would like the value of y to represent the posterior
probability for class C1, i.e. P(C1|x) = y. The posterior probability of class C2 will then be given by
P(C2|x) = 1 − y.
This can be achieved if we consider a target coding scheme for which t = 1 if the input vector belongs to
class C1 and t = 0 if it belongs to class C2. We can combine these into a single expression, so that the
probability of observing either target value is the Bernoulli distribution:

P(t|x) = y^t (1 − y)^(1−t)

and the likelihood of the whole training set is ∏_n y_n^(t_n) (1 − y_n)^(1−t_n).
By minimizing the negative logarithm of the likelihood, we get the cross-entropy error function in the form

E = − Σ_n [ t_n ln y_n + (1 − t_n) ln(1 − y_n) ]
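A minimal numeric sketch of the cross-entropy error above (function and variable names are illustrative):

```python
import math

def cross_entropy(targets, outputs):
    """E = -sum_n [ t_n ln y_n + (1 - t_n) ln(1 - y_n) ],
    for binary targets t_n and single sigmoidal outputs y_n in (0, 1)."""
    return -sum(t * math.log(y) + (1 - t) * math.log(1 - y)
                for t, y in zip(targets, outputs))

# confident correct answers give a small error,
# confident mistakes are penalized heavily
good = cross_entropy([1, 0], [0.9, 0.1])
bad = cross_entropy([1, 0], [0.1, 0.9])
```

Confident mistakes cost far more than confident correct answers, which is exactly the behavior wanted for binary targets.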
MLP heuristic rules – activation function
If there are good reasons to select a particular activation function, then do it:
Mixture of Gaussians → Gaussian activation function;
Hyperbolic tangent;
Arctangent;
Linear threshold;
General "good" properties of an activation function:
Non-linear;
Saturation – some max and min value;
Continuity and smoothness;
Monotonicity: convenient but not essential;
Linearity for a small value of net;
The sigmoid function has all the good properties:
Centered at zero;
Anti-symmetric;
f(−net) = −f(net);
Faster learning;
Overall range and slope are not important;
MLP heuristic rules – activation function
We can also use the bipolar logistic function as the activation function in the hidden and output layers.
Choosing an appropriate activation function can also contribute to a much faster learning. Theoretically,
a sigmoid function with a lower saturation speed will give a better result.
Its slope can be manipulated to see how it affects the learning speed. A larger slope will make weight
values move faster to the saturation region (faster convergence), while a smaller slope will make weight
values move slower but allows a more refined weight adjustment.
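The slope effect can be sketched with a bipolar logistic function (anti-symmetric, range ]−1, +1[); the slope parameter `s` is an illustrative name:

```python
import math

def bipolar_logistic(net, s=1.0):
    """Bipolar logistic f(net) = 2 / (1 + exp(-s*net)) - 1.
    Anti-symmetric, centered at zero, saturating at -1 and +1;
    a larger slope s reaches the saturation region faster."""
    return 2.0 / (1.0 + math.exp(-s * net)) - 1.0

# the same input saturates faster with a steeper slope
shallow = bipolar_logistic(2.0, s=0.5)   # farther from +1
steep = bipolar_logistic(2.0, s=4.0)     # close to +1
```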
MLP heuristic rules – scaling of data
Standardize:
Large scale differences → the error depends mostly on the large-scale feature;
Shift to zero mean, unit variance;
Needs to be done once, before training;
Needs the full data set;
Target values:
If the output is saturated, during training the output never reaches the saturated value;
Full training would never terminate;
The range [−1, +1] is suggested;
Scale both input and target values.
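The standardization step above (zero mean, unit variance, computed once on the full data set) can be sketched as:

```python
import numpy as np

def standardize(X):
    """Shift each feature (column) to zero mean and unit variance,
    so no feature dominates the error by its scale alone."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# two features on wildly different scales
X = np.array([[1.0, 1000.0], [2.0, 3000.0], [3.0, 2000.0]])
Xs, mu, sigma = standardize(X)
```

The returned `mu` and `sigma` must be kept, so the same transformation can be applied to new patterns at prediction time.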
MLP heuristic rules – hidden nodes
The number of hidden units governs the expressive power of the net and the complexity of the decision
boundary;
Well-separated classes → fewer hidden nodes;
Complicated distribution or large spread over the parameter space → many hidden nodes;
Heuristic rules of thumb:
More training data yields better results;
Number of weights << number of training data;
Number of weights ≈ (number of training data)/10 (impossible for massive data);
Adjust the number of weights in response to the training data:
Start with a "large" number of hidden nodes, then decay, prune weights, …
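The weights-vs-data rule of thumb above can be turned into a quick sizing sketch (purely illustrative; the single-hidden-layer weight count used here, including biases, is an assumption):

```python
def suggested_hidden_nodes(n_train, n_in, n_out):
    """Rule of thumb: number of weights ~ n_train / 10.
    For a single-hidden-layer MLP with biases, the weight count is
    roughly h*(n_in + 1) + n_out*(h + 1); solve that for h."""
    target_weights = n_train / 10
    h = (target_weights - n_out) / (n_in + 1 + n_out)
    return max(1, int(round(h)))

# e.g. 2100 training patterns, 11 inputs, 1 output
h = suggested_hidden_nodes(2100, 11, 1)
```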
MLP heuristic rules – hidden layers
In the mathematical theory of ANNs, the universal approximation theorem states:
A standard MLP feed-forward network with a single hidden layer, which contains a finite number
of hidden neurons, is a universal approximator among continuous functions on compact
subsets of R^n, under mild assumptions on the activation function.
The theorem was proved by George Cybenko in 1989 for a sigmoid activation function, thus it
is also called the Cybenko theorem.
Kurt Hornik proved in 1991 that not the specific activation function, but rather the feed-
forward architecture itself allows ANNs to be universal approximators.
Then, in 1998, Simon Haykin added the conclusion that a 2-hidden layer feed-forward ANN
has more chances to converge in local minima than a single layer network.
Cybenko., G. (1989). "Approximations by superpositions of sigmoidal functions", Mathematics of Control,
Signals, and Systems, 2 (4), 303-314
Hornik, K. (1991). "Approximation Capabilities of Multilayer Feedforward Networks", Neural Networks,
4(2), 251–257
Haykin, S. (1998). Neural Networks: A Comprehensive Foundation, Volume 2, Prentice Hall. ISBN 0-13-
273350-1.
MLP heuristic rules – hidden layers
One or two hidden layers are OK, as long as the activation function is differentiable;
But one layer is generally sufficient;
More layers → may induce more chances of local minima;
Single hidden layer vs double (multiple) hidden layers:
single is good for any approximation of a continuous function;
double may sometimes be good, when the parameter space is largely spread;
Problem-specific reasons for more layers:
Each layer learns different aspects (different levels of non-linearity);
Each layer is a hyperplane performing a separation of the parameter space;
Recently, according to the experimental results discussed in Bengio & LeCun 2007, problems
where the data are particularly complex and with a high variation in the parameter space
should be treated by "deep" networks, i.e. with more than one computational layer.
Hence, the universal ANN theorem has evident limits! The choice must be driven by
experience!
MLP heuristic rules – weight initialization
Do not set weights to zero – no learning takes place;
Select a good seed for fast and uniform learning;
All weights should reach their final equilibrium values at about the same time;
For standardized data:
Choose randomly from a single distribution;
Give positive and negative values equally: −ω < w < +ω;
If ω is too small, the net activation is small – linear model;
If ω is too large, hidden units will saturate before learning begins;
The particular initialization values influence the speed of convergence. There are
several methods available for this purpose.
The most common is to initialize the weights at random with uniform distribution inside
a certain small range. In the MLP-BP we call this method HARD_RANDOM.
Another, better method bounds the range as expressed in the equation below. We call
this method simply RANDOM.
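A minimal sketch of such a bounded uniform initialization; since the transcript does not reproduce the RANDOM bound equation, the fan-in based bound ω = 1/√n_in used here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_in, n_out, omega=None):
    """Uniform initialization in ]-omega, +omega[.
    If omega is not given, bound it by the fan-in (an assumed choice:
    omega = 1/sqrt(n_in)), so larger layers start with smaller weights
    and hidden units do not saturate before learning begins."""
    if omega is None:
        omega = 1.0 / np.sqrt(n_in)
    return rng.uniform(-omega, omega, size=(n_out, n_in))

W = init_weights(100, 10)
```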
MLP heuristic rules – weight initialization
A widely known and very good weight initialization method is the Nguyen-Widrow method.
We call this method NGUYEN. The Nguyen-Widrow weight initialization algorithm can be expressed in
the following steps:
Remember that:
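A sketch of the commonly published form of the Nguyen-Widrow procedure (scale factor β = 0.7·h^(1/n), hidden-unit weight vectors rescaled to norm β); variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def nguyen_widrow(n_in, n_hidden):
    """Nguyen-Widrow initialization for an input-to-hidden layer:
    1) draw uniform weights in [-1, 1];
    2) compute the scale factor beta = 0.7 * n_hidden**(1/n_in);
    3) rescale each hidden unit's weight vector to norm beta;
    4) draw biases uniformly in [-beta, beta]."""
    beta = 0.7 * n_hidden ** (1.0 / n_in)
    W = rng.uniform(-1.0, 1.0, size=(n_hidden, n_in))
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W = beta * W / norms
    b = rng.uniform(-beta, beta, size=n_hidden)
    return W, b

W, b = nguyen_widrow(n_in=11, n_hidden=16)
```

The rescaling spreads the hidden units' active regions over the input space, which is why this method tends to converge faster than plain uniform initialization.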
MLP heuristic rules – parameters
Momentum has the benefit of preventing the learning process from terminating in a shallow local minimum;
α is the momentum constant;
convergence if 0 ≤ |α| < 1, typical value = 0.9;
α = 0: standard Back Propagation.
A smaller learning-rate parameter makes a smoother path;
increase the rate of learning while avoiding the danger of instability;
First choice: η ≈ 0.1;
Suggestion: η inversely proportional to the square root of the number of synaptic connections (m^(−1/2));
It may change during training;
An adaptive learning rule is also available. The idea is to change the learning rate automatically
based on the current and previous error. The formula is:
The idea is to observe the last two errors, moving in the direction that would have reduced the second
error. E and E_i are the current and previous errors. Parameter A determines how rapidly the learning
rate is adjusted; it should be greater than zero and less than one.
MLP heuristic rules – training error
Standard rules to evaluate the learning error are the MSE (Mean Square Error) and the RMSE (Root MSE):

MSE = Σ_{k=1}^{NP} (t_k − Out(pat_k))² / NP

RMSE = √[ Σ_{k=1}^{NP} (t_k − Out(pat_k))² / NP ]

with the per-pattern error E_k = (t_k − Out(pat_k))².

Sometimes it may happen that a better solution for the MSE is a worse solution for the net.
To avoid this problem we can use the so-called convergence tube: for a given radius R, the error
within R is set equal to 0, obtaining:

if (t_k − Out(pat_k))² ≥ R:  E_k = (t_k − Out(pat_k))²
if (t_k − Out(pat_k))² < R:  E_k = 0
Don’t believe it? Let’s see an example
MLP heuristic rules – training error
A simple classification problem with two patterns, one of class 0 and one of class 1.
Class 0 → 0.49
Class 1 → 0.51
If the solutions are 0.49 for class 0 and 0.51 for class 1, we have an efficiency of 100% (each pattern
correctly classified) and an MSE of 0.24; a solution of 0 for class 0 and 0.49 for class 1 (efficiency of
50%) gives back an MSE equal to 0.13, so the algorithm will prefer this second kind of solution:
Class 0 → 0.49
Class 1 → 0.51    MSE = 0.24 and efficiency = 100% → OK
Class 0 → 0.0
Class 1 → 0.49    MSE = 0.13 and efficiency = 50% → BAD
But the selected one is the bad one! → GASP!
Instead, using the convergence tube with R = 0.25, in the first case we have an MSE
equal to 0 and in the second MSE = 0.13, so the algorithm recognizes the first
solution as better than the second:
Class 0 → 0.49 (Ek = 0)
Class 1 → 0.51 (Ek = 0)    MSE = 0.0 and efficiency = 100% → OK
Class 0 → 0.0 (Ek = 0)
Class 1 → 0.49 (Ek = 0.26)    MSE = 0.13 and efficiency = 50% → BAD
The selected one is the good one! → OK!
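The worked example above can be checked numerically; in this sketch the comparison against the radius R is applied to the squared error, which reproduces the slide's figures:

```python
def mse_with_tube(targets, outputs, R=None):
    """Mean square error; with a convergence tube of radius R,
    squared errors below R are set to 0."""
    errs = [(t - o) ** 2 for t, o in zip(targets, outputs)]
    if R is not None:
        errs = [e if e >= R else 0.0 for e in errs]
    return sum(errs) / len(errs)

targets = [0, 1]
good = [0.49, 0.51]   # 100% efficiency
bad = [0.0, 0.49]     # 50% efficiency

plain_good = mse_with_tube(targets, good)         # ~0.24
plain_bad = mse_with_tube(targets, bad)           # ~0.13: plain MSE picks the bad one
tube_good = mse_with_tube(targets, good, R=0.25)  # 0.0
tube_bad = mse_with_tube(targets, bad, R=0.25)    # ~0.13: the tube picks the good one
```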
Radial Basis Functions
A linear model for a function y(x) takes the form:
The model f is expressed as a linear combination
of a set of m fixed functions often called basis
functions
Any set of functions can be used as the basis set although it helps if they are well behaved
(differentiable).
Combinations of sinusoidal waves (Fourier series):
Logistic functions (common in ANNs):
Radial functions are a special class of functions. Their response decreases (or increases)
monotonically with distance from a central point.
The center, the distance scale and the precise shape of the radial function are parameters of
the neural model which uses radial functions as neuron activations.
Radial Functions
Radial Basis Function Networks
Radial basis function networks are a special kind of MLP, where each of the n components of the input
vector x feeds forward to m basis functions whose outputs are linearly combined with weights w_j into
the network model output f(x). RBFs are specialized as approximators of functions.
When applied to supervised learning, the least squares principle leads to a particularly easy
optimization problem. If the model is

f(x) = Σ_{j=1}^{m} w_j h_j(x)

and the training set is {(x_p, ŷ_p)}, p = 1, …, P, then the least squares recipe is to minimize the
sum squared error

S = Σ_{p=1}^{P} ( ŷ_p − f(x_p) )²

If a weight penalty term is added to the sum squared error, then the minimized cost function is

C = Σ_{p=1}^{P} ( ŷ_p − f(x_p) )² + Σ_{j=1}^{m} λ_j w_j²
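A minimal sketch of this least-squares recipe with Gaussian radial functions (the centers, width and toy data are illustrative choices):

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """H[p, j] = exp(-||x_p - c_j||^2 / (2*width^2)):
    Gaussian radial functions, decreasing with distance from the center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# toy 1-D regression: fit y = sin(x) on a few samples
X = np.linspace(0, np.pi, 20)[:, None]
y = np.sin(X).ravel()
centers = np.linspace(0, np.pi, 5)[:, None]
H = rbf_design_matrix(X, centers, width=0.5)

# least squares for the linear weights w; lam*I is the weight penalty term
lam = 1e-6
w = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)
pred = H @ w
```

Because the basis functions are fixed, only the linear weights are unknown, so training reduces to solving one linear system.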
MLP trained by Quasi Newton rule
Training an MLP can be written as the minimization of a global error function over the weight vector w:

min E(w) = (1/2) Σ_{p=1}^{P} ( y(x_p; w) − d_p )²,   E(w) = Σ_{p=1}^{P} E_p

where E_p is a measure of the error related to the p-th pattern. The weights are updated iteratively
along a direction of search:

w_{k+1} = w_k + α_k d_k,   d_k ∈ R^N (DIRECTION OF SEARCH),   α_k ∈ R

Descent gradient (BP): d_k = −∇E(w_k)
Genetic Algorithms (GA): d_k = genetic operators
Hessian approx. (QNA): ∇²E(w_k) d_k ≈ −∇E(w_k)
MLP-BP Algorithm
By putting the mathematical relations into practice, we can derive the complete standard algorithm for a
generic MLP trained by the BP rule as follows (ALG-1):
Let us consider a generic MLP with m output and n input nodes and with w_ij(t) the weight between the
i-th and j-th neuron at time (t).
1) Initialize all weights w_ij(0) with small random values, typically normalized in [−1, 1];
2) Present to the network a new pattern x = (x_1, …, x_n) together with the target t = (t_1, …, t_m) as
the value expected for the network output;
3) Calculate the output of each neuron j (layer by layer) as: o_pj = f(Σ_i w_ij x_pi), except for the input
neurons (whose output is the input itself);
4) Adapt the weights between neurons at all layers, proceeding in the backward direction from output
to input layer, with the following rule: w_ij(t+1) = w_ij(t) + η δ_pj o_pj + μ Δw(t), where η is the gain term
(also called learning rate) and μ the momentum factor, both typically in ]0, 1[;
5) Goto 2 and repeat the steps for all input patterns of the training set;
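ALG-1 can be sketched as a runnable program; the hyperparameters, sigmoid activation and XOR toy data are illustrative choices, not from the original:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp_bp(X, T, n_hidden=4, eta=0.5, mu=0.5, epochs=2000):
    """ALG-1 sketch: forward phase layer by layer, then backward
    adaptation w(t+1) = w(t) + eta*delta*output + mu*delta_w(t)."""
    n_in, n_out = X.shape[1], T.shape[1]
    # step 1: small random weights in [-1, 1]; last column acts as bias
    W1 = rng.uniform(-1, 1, (n_hidden, n_in + 1))
    W2 = rng.uniform(-1, 1, (n_out, n_hidden + 1))
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    history = []
    for _ in range(epochs):
        sq_err = 0.0
        for x, t in zip(X, T):                 # step 2: present a pattern
            xb = np.append(x, 1.0)
            h = sigmoid(W1 @ xb)               # step 3: forward phase
            hb = np.append(h, 1.0)
            o = sigmoid(W2 @ hb)
            sq_err += float(np.sum((t - o) ** 2))
            # step 4: backward phase, output then hidden deltas
            delta_o = (t - o) * o * (1 - o)
            delta_h = (W2[:, :-1].T @ delta_o) * h * (1 - h)
            dW2 = eta * np.outer(delta_o, hb) + mu * dW2
            dW1 = eta * np.outer(delta_h, xb) + mu * dW1
            W2 += dW2                          # step 5: repeat over patterns
            W1 += dW1
        history.append(sq_err / len(X))
    return W1, W2, history

# XOR: a classic problem a single-layer net cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2, history = train_mlp_bp(X, T)
```

The recorded `history` shows the per-epoch MSE decreasing as the forward/backward cycle repeats over the training set.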
MLP-QNA Algorithm
Let us consider a generic MLP with w(t) the weight vector at time (t).
1) Initialize all weights w(0) with small random values (typically normalized in [−1, 1]), set the
threshold ε, set t = 0 and H(0) = I;
2) Present to the network the whole training set and calculate E(w(t)) as the error function for the
current weight configuration;
3) If t = 0
then d(t) = −∇E(t) (gradient of the error function)
else d(t) = −H(t−1) ∇E(t−1)
4) Calculate w(t+1) = w(t) − λ d(t), where the step λ(t) is obtained by a line search along d(t);
5) Calculate H(t+1) with the rank-two (secant) update

H(t+1) = H(t) + (p pᵀ)/(pᵀ v) − (H(t) v vᵀ H(t))/(vᵀ H(t) v)

where p = w(t+1) − w(t) and v = ∇E(t+1) − ∇E(t);
6) If E(w(t+1)) ≥ ε then t = t+1 and goto 2, else STOP.
The search direction thus satisfies ∇²E(w_k) d_k ≈ −∇E(w_k) without explicitly computing the Hessian.
http://dame.dsf.unina.it/documents/DAME_MLPQNA_Model_Mathematics.pdf
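The rank-two update in step 5 can be checked numerically: after the update, the new matrix maps the gradient change v back onto the step p (the secant condition), which is what makes H an inverse-Hessian approximation. A sketch:

```python
import numpy as np

def dfp_update(H, p, v):
    """H(t+1) = H + p p^T / (p^T v) - (H v v^T H) / (v^T H v),
    with p the weight step and v the gradient change."""
    Hv = H @ v
    return H + np.outer(p, p) / (p @ v) - np.outer(Hv, Hv) / (v @ Hv)

rng = np.random.default_rng(0)
H = np.eye(4)                 # H(0) = I, as in step 1
p = rng.standard_normal(4)
v = rng.standard_normal(4)
if p @ v < 0:                 # enforce the curvature condition p^T v > 0
    v = -v
H_new = dfp_update(H, p, v)
```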
Machine Learning for control systems
A hybrid solution combines control schemes with NNs (VSPI + NN = NVSPI) to obtain an optimized adaptive
control system, able to correct the motorized axis position in case of unpredictable and unexpected position
errors.
NVSPI = Neural VSPI
VSPI = Variable Structure PI
Submit a dataset (reference position trajectories) through the system to an MLP network, in order to
teach the NN to recognize fault conditions of the VSPI response.
"A Neural Tool for Ground-Based Telescope Tracking control",
Brescia M. et al.: AIIA NOTIZIE, Anno XVI, N°4,
pp. 57-65, 2003
Machine Learning & Statistical Models
Neural
Networks
Feed Forward
Recurrent / Feedback
• Perceptron
• Multi Layer Perceptron
• Radial Basis Functions
• Competitive Networks
• Hopfield Networks
• Adaptive Reasoning Theory
• Bayesian Networks
• Hidden Markov Models
• Mixture of Gaussians
• Principal Probabilistic Surface
• Maximum Likelihood
• χ2
• Negentropy
Decision
Analysis
• Fuzzy Sets
• Genetic Algorithms
• K-Means
• Principal Component Analysis
• Support Vector Machine
• Soft Computing
Statistical Models
• Decision Trees
• Random Decision Forests
• Evolving Trees
• Minimum Spanning Trees
Hybrid
Genetic Algorithms
A class of probabilistic optimization algorithms
Inspired by the biological evolution process
Uses concepts as “Natural Selection” and “Genetic
Inheritance” (Darwin 1859)
Originally developed by John Holland (1975)
A genetic algorithm maintains a population of candidate solutions for the
problem at hand, and makes it evolve by iteratively applying a set of stochastic
operators
GA artificial vs natural
Genetic Algorithms
[Diagram: the GA cycle — initiate & evaluate a population; selection of parents; modification into
offspring; evaluation of the offspring; discard of deleted members — compared side by side with Nature]
Genetic operators
s1 = 1111010101
s2 = 1110110101
Before:
After:
s1` = 1110110101
s2` = 1111010101
crossover
Before:
s1 = 1110110100
After:
s1` = 1110110101
mutation
Maintain the best N solutions in the next population
elitism
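The three operators above (crossover, mutation, elitism) can be sketched on bit strings (illustrative, fixed-seed implementation):

```python
import random

random.seed(1)

def crossover(s1, s2, point):
    """One-point crossover: swap the tails after the cut point."""
    return s1[:point] + s2[point:], s2[:point] + s1[point:]

def mutate(s, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return "".join(b if random.random() > rate else "10"[int(b)] for b in s)

def elitism(population, fitness, n):
    """Keep the best n solutions unchanged in the next population."""
    return sorted(population, key=fitness, reverse=True)[:n]

c1, c2 = crossover("1111010101", "1110110101", point=4)
elite = elitism(["0011", "1111", "0101"], fitness=lambda s: s.count("1"), n=1)
```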
Extracts k individuals from the population with uniform probability (without re-insertion) and makes
them play a "tournament", where the probability for an individual to win is generally proportional to its
fitness. Selection pressure is directly proportional to the number k of participants.
Rank Tournament
Roulette wheel
All the above operators are quite invariant with
respect to the particular problem.
What drastically has to change is the
fitness function (how to evaluate
population individuals).
With roulette wheel selection, individual i will have a
probability f(i) / Σ_i f(i) to be chosen.
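Roulette wheel selection with that probability can be sketched as (illustrative names):

```python
import random

random.seed(0)

def roulette_select(population, fitness):
    """Pick one individual with probability f(i) / sum_i f(i)."""
    weights = [fitness(ind) for ind in population]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for ind, w in zip(population, weights):
        acc += w
        if r <= acc:
            return ind
    return population[-1]

pop = ["0001", "0111", "1111"]
fit = lambda s: s.count("1")           # toy fitness: count of 1-bits
picks = [roulette_select(pop, fit) for _ in range(3000)]
```

Over many draws the fittest string is selected roughly four times as often as the weakest, matching the f(i)/Σf(i) proportion.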
Population
Chromosomes could be:
Bit strings (0101 ... 1100)
Real numbers (43.2 -33.1 ... 0.0 89.2)
Permutations of elements (E11 E3 E7 ... E1 E15)
Lists of rules (R1 R2 R3 ... R22 R23)
Program elements (genetic programming)
... any data structure ...
Initial Population
(5,3,4,6,2) (2,4,6,3,5) (4,3,6,5,2)
(2,3,4,6,5) (4,3,6,2,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
Example of genetic evolution
Select Parents
(5,3,4,6,2) (2,4,6,3,5) (4,3,6,5,2)
(2,3,4,6,5) (4,3,6,2,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
Try to pick the better ones.
Example of genetic evolution
Create Off-Spring – 1 point
(5,3,4,6,2) (2,4,6,3,5) (4,3,6,5,2)
(2,3,4,6,5) (4,3,6,2,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
(3,4,5,6,2)
Example of genetic evolution
(3,4,5,6,2)
Create More Offspring
(5,3,4,6,2) (2,4,6,3,5) (4,3,6,5,2)
(2,3,4,6,5) (4,3,6,2,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
(5,4,2,6,3)
Example of genetic evolution
(3,4,5,6,2) (5,4,2,6,3)
Mutate
(5,3,4,6,2) (2,4,6,3,5) (4,3,6,5,2)
(2,3,4,6,5) (4,3,6,2,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
Example of genetic evolution
Mutate
(5,3,4,6,2) (2,4,6,3,5) (4,3,6,5,2)
(2,3,4,6,5) (2,3,6,4,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
(3,4,5,6,2) (5,4,2,6,3)
Example of genetic evolution
Eliminate
(5,3,4,6,2) (2,4,6,3,5) (4,3,6,5,2)
(2,3,4,6,5) (2,3,6,4,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
Tend to kill off the worst ones.
(3,4,5,6,2) (5,4,2,6,3)
Example of genetic evolution
Integrate
(5,3,4,6,2) (2,4,6,3,5)
(2,3,6,4,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
(3,4,5,6,2)
(5,4,2,6,3)
Example of genetic evolution
Restart
(5,3,4,6,2) (2,4,6,3,5)
(2,3,6,4,5) (3,4,5,2,6)
(3,5,4,6,2) (4,5,3,6,2) (5,4,2,3,6)
(4,6,3,2,5) (3,4,2,6,5) (3,6,5,1,4)
(3,4,5,6,2)
(5,4,2,6,3)
Example of genetic evolution
When to use a GA
Alternate solutions are too slow or overly complicated
Need an exploratory tool to examine new approaches
Problem is similar to one that has already been successfully solved by using a GA
Want to hybridize with an existing solution
Benefits of the GA technology meet key problem requirements
Benefits of GAs
Concept is easy to understand
Modular, separate from application
Supports multi-objective optimization
Good for “noisy” environments
Always an answer; answer gets better with time
Inherently parallel; easily distributed
Many ways to improve a GA application as knowledge about problem domain is gained
Easy to exploit previous or alternate solutions
Flexible building blocks for hybrid applications
Soft Computing – MLP with GAs
Output error — convergence threshold
Activation function
Backward phase with back-propagation of the error
Forward phase with propagation of the input through the activation functions
Different weight configurations (populations of neural networks) obtained by genetic evolution
If we identify the weight matrix of an MLP as a chromosome, we obtain a population of weight
matrices (a population of MLPs) evolved through genetic operators.
Soft Computing – MLP with GAs
Linked genes represent the individual neuron weight values and thresholds which
connect that neuron to the previous neural network layer. The genetic algorithm
optimization over genes structured in this way is standard.
GAME (GA Model Experiment)
Given a generic dataset with N features and a target t, let pat be a generic input pattern of the
dataset, pat = (f_1, ⋯, f_N, t), and let g(x) be a generic real function. The representation of a generic
feature f_i of a generic pattern, with a polynomial sequence of degree d, is:

G(f_i) ≅ a_0 + a_1 g(f_i) + ⋯ + a_d g^d(f_i)

Hence, the k-th pattern (pat_k) with N features may be represented by:

Out(pat_k) ≅ Σ_{i=1}^{N} G(f_i) ≅ a_0 + Σ_{i=1}^{N} Σ_{j=1}^{d} a_j g^j(f_i)   (1)

The target t_k, associated with pattern pat_k, can be used to evaluate the approximation error of
the input pattern with respect to the expected value:

E_k = (t_k − Out(pat_k))²

With NP the number of patterns (k = 1, …, NP), at the end of the "forward" (batch) phase of the GA,
we have NP expressions (1), which represent the polynomial approximation of the dataset.
In order to evaluate the fitness of the patterns, the Mean Square Error (MSE) or Root Mean
Square Error (RMSE) may be used:

MSE = Σ_{k=1}^{NP} (t_k − Out(pat_k))² / NP   RMSE = √[ Σ_{k=1}^{NP} (t_k − Out(pat_k))² / NP ]

Cavuoti, S. et al. (2012). "Genetic Algorithm Modeling with GPU Parallel Computing Technology". Neural
Nets and Surroundings, Smart Innovation, Systems and Technologies, Vol. 19, p. 11, Springer
GAME (GA Model Experiment)
Num_terms = B · N + 1

where N is the number of features of the patterns and B is a multiplicative factor that
depends on the g(x) function; in the simplest case it is just 1, but it can rise to 3 or 4.
We use the trigonometric polynomial sequence, given by the following expression, for which B = 2:

g(x) = a_0 + Σ_j a_j cos(jx) + Σ_j b_j sin(jx)

With 2100 patterns of 11 features each, the expression for the single (k-th) pattern, using (1)
with degree 6, will be:

Out(pat_k) ≅ Σ_i G(f_i) ≅ a_0 + Σ_i Σ_j a_j cos(j f_i) + Σ_i Σ_j b_j sin(j f_i)   for k = 1, …, 2100.

Num_terms = 2 · 11 + 1 = 23

The number of genes is

Num_genes = d · B + 1

where d is the degree of the polynomial, so here Num_genes = 6 · 2 + 1 = 13.
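The trigonometric expansion and the gene-count formula above can be sketched numerically (the coefficient values are illustrative):

```python
import math

def out_pattern(features, a, b):
    """Out(pat) ~ a[0] + sum_i sum_j [a[j]*cos(j*f_i) + b[j]*sin(j*f_i)],
    a trigonometric polynomial of degree d = len(a) - 1
    (b[0] is unused; it only keeps the two coefficient lists aligned)."""
    d = len(a) - 1
    total = a[0]
    for f in features:
        for j in range(1, d + 1):
            total += a[j] * math.cos(j * f) + b[j] * math.sin(j * f)
    return total

def num_genes(d, B=2):
    """Num_genes = d * B + 1; B = 2 for the trigonometric sequence."""
    return d * B + 1

pat = [0.1] * 11                 # 11 features, as in the example
a = [0.5] + [0.1] * 6            # degree-6 coefficients
b = [0.0] + [0.1] * 6
y = out_pattern(pat, a, b)
```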
References
Brescia, M.; 2011, New Trends in E-Science: Machine Learning and Knowledge Discovery in
Databases, Contribution to the Volume Horizons in Computer Science Research, Editors:
Thomas S. Clary, Series Horizons in Computer Science, ISBN: 978-1-61942-774-7, available
at Nova Science Publishers
Kotsiantis, S. B.; 2007, Supervised Machine Learning: A Review of Classification Techniques,
Informatica, Vol. 31, 249-268
Shortliffe, E. H.; 1993, The adolescence of AI in medicine: will the field come of age in the
'90s?, Artif Intell Med. 5(2):93-106. Review.
Hornik, K.; 1989, Multilayer Feedforward Networks are Universal Approximators, Neural
Networks, Vol. 2, pp. 359-366, Pergamon Press.
Brescia, M. et al.; 2003, A Neural Tool for Ground-Based Telescope Tracking control, AIIA
NOTIZIE, periodico dell’Associazione Italiana per l’Intelligenza Artificiale, Anno XVI, N° 4, pp.
57-65.
Bengio & LeCun; 2007, Scaling Learning Algorithms towards AI, to appear in “Large Scale
Kernel Machines” Volume, MIT Press.