Artificial Neural Networks:
An Alternative Approach to Risk-Based Design
By George Mermiris
Introduction
- Inspiration from the study of the human brain and physical neurons
- Response speed of physical neurons is 10⁻³ s, compared to 10⁻⁹ s for electrical circuits
- Massive parallel structure: 10¹¹ neurons with 10⁴ connections per neuron
- The efficiency of the brain depends directly on accumulated experience: new connections are established which determine our capabilities
The Biological Model
Cell Body
Dendrites
Synapses
Axon
Artificial Neural Networks (ANN): Basic Forms, Feed-Forward Networks
a = f(n) = f(wp + b)

For R inputs: n = w(1,1)·p(1) + w(1,2)·p(2) + … + w(1,R)·p(R) + b

General pattern: a = f(n) = f(wp + b), where
• p: input vector
• w: weight matrix
• b: bias vector
• n: net input of the neuron
• f: activation function
• a: output vector of the network
Multi-Neuron, Single-Layer ANN
a = f(n) = f(Wp + b)
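The single-layer computation a = f(Wp + b) can be sketched in a few lines of NumPy. All names and numbers below are illustrative, not from the slides; a log-sigmoid activation is assumed:

```python
import numpy as np

def logsig(x):
    """Log-sigmoid activation, f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(W, p, b, f=logsig):
    """Single-layer feed-forward output: a = f(W p + b).

    Shapes: W is (S, R), p is (R,), b is (S,), a is (S,)
    for S neurons and R inputs.
    """
    return f(W @ p + b)

# Illustrative example: 3 neurons, 2 inputs
W = np.array([[0.5, -0.2],
              [0.1,  0.4],
              [-0.3, 0.8]])
p = np.array([1.0, 2.0])
b = np.array([0.0, 0.1, -0.1])
a = forward(W, p, b)   # 3 outputs, each in (0, 1) because of logsig
```

The matrix form makes the multi-neuron case identical to the single-neuron one: each row of W holds one neuron's weights.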
Multi-Layer, Multi-Neuron Network
Abbreviated Form of a Network
Activation Functions
Log-Sigmoid Function: f(x) = 1 / (1 + e^(−x))

Hyperbolic Tangent Sigmoid Function: f(x) = (e^x − e^(−x)) / (e^x + e^(−x))

Linear Function: f(x) = x
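The three activation functions above can be written directly from their formulas; a minimal sketch (function names follow the common logsig/tansig/purelin convention, which the slides do not themselves use):

```python
import math

def purelin(x):
    """Linear function: f(x) = x."""
    return x

def logsig(x):
    """Log-sigmoid: f(x) = 1 / (1 + e^(-x)), output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """Hyperbolic tangent sigmoid: f(x) = (e^x - e^-x) / (e^x + e^-x),
    output in (-1, 1); identical to tanh(x)."""
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))
```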
Training Neural Networks
The training of a network follows the same concept as for humans: the larger its experience, the better its response

For an ANN, learning is established by suitable adjustment of its weights and biases

Requirements: training data and a proper algorithm
The Backpropagation Algorithm
A three-fold concept:
1. Performance Index: Approximate Square Error: F(x) = (t − a)ᵀ(t − a) = eᵀe

The Steepest Descent Algorithm for function F and its modifications:

w(k+1) = w(k) − α·∂F/∂w
b(k+1) = b(k) − α·∂F/∂b
x(k+1) = x(k) − α(k)·g(k),  g: gradient
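The steepest descent update x(k+1) = x(k) − α·g(k) can be demonstrated on a toy one-dimensional problem. This is an illustrative sketch on F(x) = x², not the network performance index itself:

```python
# Steepest descent: x_{k+1} = x_k - alpha * g_k, where g is the gradient of F.
# Toy objective F(x) = x^2, so grad F(x) = 2x and the minimum is at x = 0.

def steepest_descent(grad, x0, alpha=0.1, steps=100):
    """Run fixed-step steepest descent from x0 for a given number of steps."""
    x = x0
    for _ in range(steps):
        x = x - alpha * grad(x)
    return x

x_min = steepest_descent(lambda x: 2.0 * x, x0=5.0)
# Each step multiplies x by (1 - 2*alpha) = 0.8, so x shrinks toward 0.
```

The same update, applied to every weight and bias with the gradients supplied by backpropagation, is the training loop.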
2. Chain Rule of Calculus:

∂F/∂w = (∂F/∂n)·(∂n/∂w)
∂F/∂b = (∂F/∂n)·(∂n/∂b)

3. Calculation of the first derivatives of the performance index, starting from the last layer and backpropagating to the first (!)
Levenberg–Marquardt algorithm: the main variation of the method, based on Newton's method with the second derivatives replaced by a simple approximation
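The three steps above (squared-error index, chain rule, last-to-first derivative sweep) can be sketched as a minimal backpropagation loop. This is a hypothetical 1-2-1 network with a log-sigmoid hidden layer and a linear output, trained by plain steepest descent on made-up data, not the Levenberg–Marquardt variant or any example from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

# 1-2-1 network: 1 input, 2 logsig hidden neurons, 1 linear output neuron.
W1 = rng.normal(size=(2, 1)); b1 = np.zeros((2, 1))
W2 = rng.normal(size=(1, 2)); b2 = np.zeros((1, 1))

# Illustrative data: targets t = 0.5 * sin(p) on 20 sample points.
P = np.linspace(0.0, 3.0, 20).reshape(1, -1)
T = 0.5 * np.sin(P)

def loss():
    """Mean squared error F = mean(e^T e) over the data set."""
    return float(np.mean((W2 @ logsig(W1 @ P + b1) + b2 - T) ** 2))

alpha = 0.5
loss_before = loss()
for _ in range(2000):
    # Forward pass
    A1 = logsig(W1 @ P + b1)
    A2 = W2 @ A1 + b2                    # linear output layer
    E = A2 - T
    # Backward pass: sensitivities from the last layer to the first
    S2 = 2.0 * E / P.shape[1]            # dF/dn2 (linear layer: f' = 1)
    S1 = (W2.T @ S2) * A1 * (1.0 - A1)   # dF/dn1 (logsig: f'(n) = a(1 - a))
    # Steepest descent updates on weights and biases
    W2 -= alpha * S2 @ A1.T; b2 -= alpha * S2.sum(axis=1, keepdims=True)
    W1 -= alpha * S1 @ P.T;  b1 -= alpha * S1.sum(axis=1, keepdims=True)
loss_after = loss()   # smaller than loss_before after training
```

Each sensitivity S is exactly ∂F/∂n for its layer, computed via the chain rule from the layer after it, which is the backpropagation of step 3.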
Example 1: Resistance Experiment
Case 1: 1 cm wave amplitude
ANN Architecture: 1-4-3-1
Activation Function: Log – Sigmoid for hidden layers and Linear for output layer
Example 1: Resistance Experiment
Case 2: 2 cm wave amplitude
ANN Architecture: 1-3-2-1
Activation Function: Log – Sigmoid for hidden layers and Linear for output layer
Example 2: Section Areas Curve
Input: L, Amax, ∇ (displacement volume), LCB, Cp
ANN Architecture: 5-10-12-21
Activation Function: Log-Sigmoid for hidden layers and Linear for output layer
[Network diagram, abbreviated form: p (5×1) → W (10×5), b (10×1), logsig → a¹ (10×1) → W (12×10), b (12×1), logsig → a² (12×1) → W (21×12), b (21×1), purelin → a³ (21×1)]
Example 2: Section Areas Curve - Training Set
- L = [153 156 159 … 180], in m
- Amax = [335 345 355 … 425], in m²
- ∇ = [36000 37000 38000 … 45000], in m³
- LCB = [−2.4 −2.5 −2.6 … −3.3], in m
- Cp = [0.702 0.688 0.660 … 0.588]
Ordinates of SA curves for each combination

Generalisation Sets [L Amax ∇ LCB Cp]
- Set1 = [160 360 38500 −2.65 0.6664]
- Set2 = [178.5 420 44500 −3.25 0.594]
- Set3 = [150 325 35000 −2.3 0.718]
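As a sketch of how such a training-set input matrix could be assembled for the 5-input network, one column per design: the element-wise pairing of the parameter vectors, the even Cp spacing, and all variable names below are assumptions for illustration, since the slides only list the value ranges:

```python
import numpy as np

# Hypothetical assembly of the 5-row input matrix for the 5-10-12-21 network.
L    = np.arange(153.0, 181.0, 3.0)         # length, m: 153 ... 180
Amax = np.arange(335.0, 426.0, 10.0)        # max section area, m^2: 335 ... 425
Vol  = np.arange(36000.0, 45001.0, 1000.0)  # displacement volume, m^3
LCB  = np.arange(-2.4, -3.35, -0.1)         # LCB position, m: -2.4 ... -3.3
Cp   = np.linspace(0.702, 0.588, 10)        # prismatic coefficient
                                            # (even spacing is illustrative;
                                            #  the listed values are not evenly spaced)

P = np.vstack([L, Amax, Vol, LCB, Cp])      # shape (5, 10): 5 inputs x 10 designs
```

Each column of P is then one input vector p for the network, with the 21 section-area ordinates of that design as its target vector.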
[Diagram: network input → testing the network → network output]
Example 2: Section Areas Curve (Set1)
Example 2: Section Areas Curve (Set2)
Example 2: Section Areas Curve (Set3)
Strong points of ANN
1. Readily applicable to any stage of the design process, especially preliminary design, where rough approximations are necessary
2. Potential to include different design parameters in the training set and avoid iterations
3. Results are obtained very fast with high accuracy
4. No highly sophisticated mathematical technique is involved, only basic concepts of Linear Algebra and Calculus
5. Very short computation times on common PCs
Weak points of ANN
1. Basic requirement is the existence of historical data for creating the training set
2. Not readily applicable to novel ship types
3. The results are very sensitive to the network's architecture and the training method selected each time, although these two parameters are easily adjusted
4. There is no unique network architecture for a given calculation: different architectures can provide the same results. The general rule is to use the simplest possible network
Future Work
- Other networks and training algorithms: recurrent ANNs
- Suitable database for creating the training set for different applications
- Application to Global Ship Design, including Risk Data and Human Reliability Data
Thank You!