
Ph.D. Final Exam

Neural Network Ensonification Emulation: Training And Application

JAE-BYUNG JUNG

Department of Electrical Engineering, University of Washington, August 8, 2001

2

Overview

Review of adaptive sonar
Neural network training for varying output nodes
– On-line training
– Batch-mode training

Neural network inversion
Sensitivity analysis

Maximal area coverage problem
Conclusions and ideas for future work

3

INTRODUCTION - Sonar Surveillance

Software model emulating acoustic propagation
– Computationally intensive
– Not suitable for real-time control

[Diagram: the sonar surveillance system takes Control and Environment inputs and produces a Sonar Performance Map]

4

Sonar Performance Map

5

Sonar Data 1

The physical range-depth output surveillance area is sampled at 30 ranges from 0 to 6 km in steps of 0.2 km and 13 depths from 0 to 200 m in steps of 15 m. Data size: 2,500 pattern vectors (2,000 patterns are used for training the neural network; the remaining 500 are held out for testing).

Input parameters:
1. Sonar depth [m]
2. Wind speed [m/s]
3. Surface sound speed [m/s]
4. Bottom sound speed [m/s]
5. Bottom type (grain size)

Output parameters:
1–390: 13x30 range-depth SE values [dB]

6

Sonar Data 2

A wider surveillance area is considered, with 75 sampled ranges from 0 to 15 km in steps of 0.2 km and 20 sampled depths from 0 to 400 m in steps of 20 m. The shape of the SE map varies with the bathymetry. Data size: 8,000 pattern vectors (5,000 patterns are used for training the neural network; the remaining 3,000 are held out for testing).

7

Neural Network Replacement

Fast reproduction of the SE map
Inversion (derivative existence)
Real-time control (optimization)

8

Training NN

High dimensionality of output space
– Multi-layered perceptrons
– Neural smithing (e.g. input data jittering, pattern clipping, and weight decay)

Widely varying bathymetry
– Adaptive training strategy for flexible output dimensionality

9

MLP Training

[Diagram: MLP mapping input i to an output, trained against target t]

Multilayer perceptrons (MLP’s) typically use a fixed network topology for all training patterns in a data set.

10

MLP Training with varying output

[Diagram: MLP with input i and target t, where supervisory don't-care signals (DC) mark output nodes excluded from training]

We consider the case where the dimension of the output vector can vary from training pattern to training pattern.

11

Flexible Dimensionality

Generally, an MLP must have a fixed network topology; a single neural network cannot handle a flexible network topology.

A modular neural network structure can use local experts for dimension-specific training patterns. It becomes increasingly difficult, however, to implement a large number of neural networks as the number of local experts increases.

12

Flexible Dimensionality

Let's define a new output vector, O'(n), to fix the output dimensionality as

O'(n) = { O(n), O_A(n) }

where O(n) is the nth actual output training pattern vector and O_A(n) is an arbitrary output vector appended to O(n), filled with arbitrary "don't care" constant values.

The dimensionality becomes enlarged to the spanned dimension and is fixed as

D(O') = min D( Span_{n=1..N} O(n) )

where N is the number of training pattern vectors, Span(·) represents a dimensional span from each output vector to the maximally expandable dimensions over the N pattern vectors, and D(·) represents the dimensionality of an output vector.

13

Flexible Dimensionality

Train a single neural network using the fixed-dimensional output vector O'(n) by
1) filling an arbitrary constant value into O_A(n): high spatial frequency components are washed out, or
2) smearing neighborhood pixels into O_A(n): the unnecessary part still needs to be trained (longer training time).

O_A(n) can be ignored when O'(n) is projected onto O(n) in the testing phase.

14

Don’t Care Training

Inputs: I(n) = {I_C(n), I_P(n)}
– I_P(n): profile inputs, which describe the output profile and assign each output neuron to either O(n) or O_A(n)
– I_C(n): characteristic inputs, which contain the other input parameters

Outputs: O'(n) = {O(n), O_A(n)}
– O_A(n): "don't care" category; the weights associated with these neurons are not updated for the nth pattern vector
– O(n): normal weight correction; the weights associated with O(n) are updated with step size modification

[Diagram: MLP with the input layer split into I_C(n) and I_P(n) and the output layer split into O(n) and O_A(n)]

15

Don’t Care Training

Advantages
– Significantly reduced training time by not correcting weights in the don't care category
– Boundary problem is solved
– Fewer training vectors required
– Focus on active nodes only

Drawbacks
– Rough weight space due to irregular weight correction
– Possibly leading to local minima

16

Step Size Modification

Give every output neuron an equal opportunity for weight correction. Statistical information: from the training data set, the frequency of weight correction associated with each output neuron is estimated.

P(o_j) = f_W(o_j) / N

where f_W(o_j) is the number of training patterns in which output neuron o_j receives a weight correction (i.e. belongs to O(n)) and N is the total number of training patterns.

17

Step Size Modification

For the nth training pattern, the step size associated with each neuron is

η_j(n) = η,   for output neurons o_j ∈ O(n)
η_j(n) = 0,   for output neurons o_j ∈ O_A(n)
η_j(n) = η,   for all hidden neurons

and, so that every output neuron receives an even amount of weight correction over the whole data set, the step size is modified by the correction frequency:

η̂_j = η / P(o_j),   for output neurons o_j
η̂_j = η,             for all hidden neurons
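As an illustration only, here is a minimal NumPy sketch of don't-care training with step size modification for a one-hidden-layer sigmoid MLP. The function name `dont_care_step`, the learning rate, and the small floor on P(o_j) are assumptions, not the dissertation's code; `mask` plays the role of the don't-care assignment and `p_corr` the role of P(o_j).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dont_care_step(W1, W2, x, target, mask, p_corr, eta=0.05):
    """One 'don't care' training step for a 1-hidden-layer sigmoid MLP.

    mask[j]   = 1 if output neuron j belongs to O(n), 0 if it is in O_A(n).
    p_corr[j] = P(o_j): fraction of patterns in which output neuron j is active.
    """
    h = sigmoid(W1 @ x)
    o = sigmoid(W2 @ h)

    # Don't-care outputs get zero error, so their weights stay untouched.
    delta_o = (target - o) * mask * o * (1.0 - o)
    delta_h = (W2.T @ delta_o) * h * (1.0 - h)

    # Step size modification: rarely trained output neurons get a larger step.
    step = eta / np.maximum(p_corr, 1e-3)
    W2 += np.outer(delta_o * step, h)      # per-output-neuron step size
    W1 += eta * np.outer(delta_h, x)       # hidden-layer weights: plain step
    return W1, W2
```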

18

Performance Comparison

MSE (mean squared error): MSE cannot represent the training performance well because the output vector size (dimensionality) differs from pattern to pattern. Average MSE: a pixel-wise representation of MSE.

e_j(n) = d_j(n) - o_j(n),   for o_j ∈ O(n)

E(n) = Σ_{j | o_j ∈ O(n)} e_j²(n)

E_MSE = (1/N) Σ_{n=1..N} E(n)

E_AMSE = (1/N) Σ_{n=1..N} E(n) / D(O(n))

where d_j(n) is the desired output and o_j(n) the actual output of neuron j for the nth pattern, N is the number of training patterns, and D(O(n)) is the dimensionality of the nth output vector.
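For concreteness, a small sketch (names and data are assumed, not from the dissertation) of the two error measures above, computed over patterns whose active output sets differ in size:

```python
import numpy as np

def mse_and_amse(errors):
    """errors: list of 1-D arrays, each holding e_j(n) for the active
    outputs o_j in O(n) of one pattern; lengths may differ per pattern."""
    N = len(errors)
    E = [np.sum(e ** 2) for e in errors]                      # E(n)
    mse = sum(E) / N                                          # E_MSE
    amse = sum(En / len(e) for En, e in zip(E, errors)) / N   # E_AMSE
    return mse, amse

# Example: two patterns with 390 and 1500 active outputs respectively.
errs = [np.random.randn(390) * 0.1, np.random.randn(1500) * 0.1]
print(mse_and_amse(errs))
```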

19

Performance Comparison: Training of neural networks

20

Training Sample

21

Performance Comparison: Generalization performance from testing error

22

Testing Sample

23

Contributions: Training

A novel neural network learning algorithm for data sets with varying output node dimension is proposed.
– Selective weight update
  • Fast convergence
  • Improved accuracy
– Step size modification
  • Good generalization
  • Improved accuracy

24

Inversion of neural network

NN Training: Finding W from given input-output relationship

NN Inversion: Finding I from given target output T

[Diagram: in training, W is adjusted for a given input I and output O; in inversion, I is adjusted for a given W and target T]

W: NN weight, I: input, O: output, T: target

25

Inversion of neural network

We want to find a subset of the input vector, i, that minimizes the objective function E(i), which can be denoted as

E(i) = 0.5 (t_i - o_i)²

where o_i is the neural network output for input i and t_i is the desired output. If i_k^t is the kth component of the input vector at iteration t, then gradient descent suggests the recursion

i_k^(t+1) = i_k^t - η ∂E/∂i_k^t,   k ∈ I

where η is the step size and t is the iteration index.

The iteration for inversion can be solved with back-propagated deltas, where, for any neuron j,

δ_j = φ'_j(net_j) (t_j - o_j),        for j ∈ O
δ_j = φ'_j(net_j) Σ_m δ_m w_mj,       for j ∈ H, I

and
– net_j is the weighted sum of incoming signals to the jth neuron
– t_j is the desired output of the jth neuron
– o_j is the activation of the jth neuron
– φ'_j is the derivative of the jth neuron's squashing function
– w_mj is the weight value connecting neuron j to neuron m
– I, H, O are the sets of input, hidden, and output neurons respectively
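The recursion above can be sketched directly in NumPy for a one-hidden-layer network. The network sizes, the function name `invert_input`, and the two masks (`free` for which inputs may move, `out_mask` for which outputs count in the error, with the rest floated) are illustrative assumptions rather than the dissertation's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def invert_input(W1, W2, i0, target, free, out_mask, eta=0.1, iters=500):
    """Gradient-descent inversion: adjust only the 'free' input components so
    the masked network outputs approach 'target'; clamped inputs stay fixed."""
    i = i0.copy()
    for _ in range(iters):
        h = sigmoid(W1 @ i)
        o = sigmoid(W2 @ h)
        err = (o - target) * out_mask            # floated outputs contribute nothing
        delta_o = err * o * (1.0 - o)            # dE/dnet at the output layer
        delta_h = (W2.T @ delta_o) * h * (1.0 - h)
        grad_i = W1.T @ delta_h                  # dE/di_k at the input layer
        i -= eta * grad_i * free                 # update the free inputs only
    return i
```

Clamping the 4 environmental inputs and inverting one output pixel (or a 2x2 group) at a time, as on the next slides, corresponds to choosing `free` and `out_mask` accordingly.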

26

Single element inversion

The subset of outputs to be inverted is confined to one output pixel at a time during the inversion process while the other outputs are floated. A single input control parameter is obtained by the iterative inversion process while the 4 environmental parameters are clamped (fixed) to specific values [wind speed = 7 m/s, surface sound speed = 1500 m/s, bottom sound speed = 1500 m/s, and bottom type = 9 (soft mud)].

27

Multiple parameter inversion and maximizing the target area

Multiple output SE values can be inverted at a time. The output target area is tiled with 2x2 pixel regions; these 2x2 output pixel groups are inverted one at a time to find the best combination of the 5 input parameters that satisfies the corresponding SE values.

28

Pre-Clustering for NN Inversion

Pre-clustering of the data set in the output space
→ Separate training of partitioned data sets (local experts)
→ Training improvement
→ Inversion improvement

29

Unsupervised Clustering

Partitioning a collection of data points into a number of subgroups.
– When the number of prototypes is known: K-NN, Fuzzy C-Means, …
– When no a priori information is available: ART, Kohonen SOFM, …

30

Adaptive Resonance Theory

Unsupervised learning network developed by Carpenter and Grossberg in 1987.
– ART1 is designed for clustering binary vectors.
– ART2 accepts continuous-valued vectors.

[Diagram: ART2 architecture, with the input vector (pattern) S feeding the F1 layer and the F2 layer on top]

The F1 layer is an input processing field comprising the input portion and the interface portion.

The F2 layer is a cluster unit: a competitive layer in which the units compete in a winner-take-all mode for the right to learn each input pattern.

The third layer is a reset mechanism that controls the degree of similarity of patterns placed on the same cluster.

31

Training Phase

[Diagram: training phase. ART2 (unsupervised learning) partitions the entire data set into K clusters; the sub-data of cluster k is used for supervised training of ANN k, k = 1, …, K]

[Bar chart: RMS errors of 2.19 with no clustering versus 1.95, 1.57, 1.69, 1.58, and 1.89 for clusters 1 to 5]
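A rough sketch of the training phase above: partition the patterns by output-space clusters, then fit one network per cluster. Since scikit-learn has no ART2, KMeans is used here purely as a stand-in clusterer and MLPRegressor stands in for each local expert; all names, sizes, and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

def train_local_experts(X, Y, n_clusters=5):
    """Cluster patterns in output space, then train one MLP per cluster."""
    clusterer = KMeans(n_clusters=n_clusters, n_init=10).fit(Y)
    experts = {}
    for k in range(n_clusters):
        idx = np.where(clusterer.labels_ == k)[0]
        net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000)
        experts[k] = net.fit(X[idx], Y[idx])
    return clusterer, experts

def predict(clusterer, experts, x, y_hint):
    """Route a query to the expert whose output cluster is closest to y_hint."""
    k = clusterer.predict(y_hint.reshape(1, -1))[0]
    return experts[k].predict(x.reshape(1, -1))[0]
```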

32

Testing Comparison

33

Inversion Phase

[Diagram: inversion phase. The desired output (M-dimensional) is projected onto the ART2 clusters (N-dimensional) for cluster selection; the ANN of the selected cluster is then inverted to yield the optimal input parameters]

34

Inversion from ART2 Modular Local Experts

Multiple parameter inversion and maximization of the target area. The output target area is tiled with 2x2 pixel regions; these 2x2 output pixel groups are inverted one at a time to find the best combination of the 5 input parameters that satisfies the corresponding SE values.

35

Contributions: Inversion

A new neural network inversion algorithm was proposed whereby several neural networks are inverted in parallel. Advantages include the ability to segment the problem into multiple sub-problems, each of which can be independently modified as changes to the system occur over time. The concept is similar to the mixture-of-experts approach applied to neural network inversion.

36

Sensitivity Analysis

Feature selection while the neural network is being trained or after training. Useful to eliminate superfluous input parameters [Rambhia]:
– reducing the dimension of the decision space, and
– increasing the speed and accuracy of the system.

When implemented in hardware, the non-linearities occurring in the operation of various network components may make a network practically impossible to train [Jiao].
– Very important in the investigation of non-ideal effects (an important issue from the engineering point of view).

Once the neural network is trained, it is very important to determine which of the control parameters are critical to the decision making at a certain operating point.

37

NN Sensitivity

The neural network sensitivity shows how sensitively the output O_T responds to a change of the input parameter i_k.

[Diagram: output map divided into O_T (target surveillance area) and O_DC (don't care area), with the sensitivity taken with respect to input i_k]

38

NN Sensitivity

The chain rule is used to derive ∂O_T/∂i_k.

For a single hidden layer:

∂O_T/∂i_k = Σ_h (∂O_T/∂h)(∂h/∂i_k)

where h represents hidden neurons.

Generally, for n hidden layers:

∂O_T/∂i_k = Σ_{h_n} … Σ_{h_1} (∂O_T/∂h_n)(∂h_n/∂h_{n-1}) … (∂h_1/∂i_k)

where h_n represents neurons in the nth hidden layer.

39

Inversion vs. Sensitivity

Inversion: using the chain rule from the output error to the input, finds ∂E/∂i_k^t and updates a new input:

i_k^(t+1) = i_k^t - η ∂E/∂i_k^t

Sensitivity: using the chain rule from the output to the input, finds ∂O/∂i_k.

40

NN Sensitivity – output neuron

1. Output layer: find the local gradient of o_i at output neuron i

δ_i = ∂o_i/∂net_i = φ'(net_i) = φ(net_i)(1 - φ(net_i))

where net_i = Σ_j w_ij h_j and φ(net_i) = 1/(1 + exp(-net_i)).

41

NN Sensitivity – hidden neuron

2. Hidden layer: find the gradient at hidden neuron j with respect to O_T

δ_j = φ'_j(net_j) Σ_{i ∈ O_T} w_ij δ_i

42

NN Sensitivity – input neuron

3. Input layer: find the gradient of O_T at input neuron k

∂O_T/∂i_k = Σ_{j ∈ H} w_jk δ_j

43

Nonlinear Sensitivity

Absolute sensitivity:

∂O_T/∂i_k

Relative (logarithmic) sensitivity:

∂ln(O_T)/∂ln(i_k) = (∂O_T/O_T)/(∂i_k/i_k) = (∂O_T/∂i_k) · (i_k/O_T)
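A minimal sketch (assumed network shapes and names) of the chain-rule sensitivity above for a one-hidden-layer sigmoid MLP, returning both the absolute and the relative sensitivity of one target output with respect to every input:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sensitivity(W1, W2, x, t_idx):
    """Absolute and relative sensitivity of O_T = o[t_idx] w.r.t. each input."""
    h = sigmoid(W1 @ x)
    o = sigmoid(W2 @ h)
    d_o = o[t_idx] * (1.0 - o[t_idx])        # phi'(net) at the target output
    d_h = h * (1.0 - h) * W2[t_idx] * d_o    # deltas at the hidden neurons
    abs_sens = W1.T @ d_h                    # dO_T/di_k for every input k
    rel_sens = abs_sens * x / o[t_idx]       # logarithmic sensitivity
    return abs_sens, rel_sens
```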

44

Sonar Sensitivity

45

Contributions: Sensitivity Analysis

Once the neural network is trained, it is very important to determine which of the control parameters are critical to the decision making at a certain operating point, for a given environmental situation and/or control criteria.

46

Multiple Objects Optimization

Optimization of multiple objects in order to achieve the system team's maximum cooperative performance. The composite effort of the system team is significantly more important than a single system's individual performance.

47

Target Covering Problem

Multiple rectangular boxes move to cover the circular target area. Each box is specified by 4 parameters: 2 position variables (x0, y0), an orientation θ, and an aspect ratio r. The area of each box is fixed.

48

Box Parameters

[Diagram: a rectangular box of length L and width W, centered at (x0, y0) and rotated by θ in the x-y plane]

f_xy(x0, y0, θ, r) = Π[ ((x - x0)cosθ + (y - y0)sinθ) / L ] · Π[ ((y - y0)cosθ - (x - x0)sinθ) / W ]

where

Π(x) = 1 if |x| ≤ 1/2, 0 else,   L = √(A·r),   W = √(A/r),

and A is the fixed box area.

49

Target Parameters

[Diagram: circular target of radius R centered at (c1, c2) in the x-y plane]

t(x, y) = 1 if (x - c1)² + (y - c2)² ≤ R², 0 otherwise

where (c1, c2) is the center of gravity and R is the radius of the circular target.

50

Aggregation & Evaluation

Aggregation of N boxes:

g(x, y) = Σ_{i=1..N} f_xy(x0_i, y0_i, θ_i, r_i)

Evaluation of coverage:

eval(x, y) = g(x, y) · t(x, y)
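As an illustration (the grid size, function names, and example box parameters are assumptions), the box membership, aggregation, and coverage evaluation above can be written as:

```python
import numpy as np

def box(xg, yg, x0, y0, theta, r, A=2000.0):
    """Membership of grid points in a rotated rectangle of fixed area A."""
    L, W = np.sqrt(A * r), np.sqrt(A / r)
    u = ((xg - x0) * np.cos(theta) + (yg - y0) * np.sin(theta)) / L
    v = ((yg - y0) * np.cos(theta) - (xg - x0) * np.sin(theta)) / W
    return ((np.abs(u) <= 0.5) & (np.abs(v) <= 0.5)).astype(float)

def coverage(boxes, c=(128.0, 128.0), R=64.0, size=256):
    """Aggregate N boxes and evaluate their overlap with the circular target."""
    yg, xg = np.mgrid[0:size, 0:size].astype(float)
    g = sum(box(xg, yg, *b) for b in boxes)                      # g(x, y)
    t = ((xg - c[0]) ** 2 + (yg - c[1]) ** 2 <= R ** 2).astype(float)
    return np.sum(g * t)                                         # sum of eval(x, y)

# Two boxes, each given as (x0, y0, theta, r)
print(coverage([(100, 128, 0.0, 1.5), (156, 128, np.pi / 4, 1.5)]))
```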

51

Genetic Algorithm (Optimization)

Optimization deals with problems of function maximization or minimization in several variables, usually subject to certain constraints. While traditional search algorithms commonly impose severe constraints on the functions to be minimized (or maximized), such as continuity, derivative existence, or unimodality, genetic algorithms work in a different way, acting as a global probabilistic search method.

Chromosomal representation:

[Chromosome: (x1, y1, θ1, r1) for Box 1 through (xN, yN, θN, rN) for Box N]
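A compact genetic algorithm skeleton in the spirit of the chromosomal representation above. The selection scheme (tournament), the population size and rates, and the normalization of decoded genes to [0, 1] are illustrative assumptions; the experiments on the following slides use binary strings of the stated resolutions.

```python
import numpy as np

def run_ga(fitness, n_genes, bits=8, pop=20, pc=0.7, pm=0.03, gens=200):
    """Minimal binary-string GA: tournament selection, one-point crossover
    with probability pc, and bit-flip mutation with probability pm."""
    rng = np.random.default_rng(0)
    P = rng.integers(0, 2, size=(pop, n_genes * bits))

    def decode(ch):
        genes = ch.reshape(n_genes, bits)
        ints = genes @ (1 << np.arange(bits)[::-1])   # bit strings -> integers
        return ints / (2 ** bits - 1)                 # scale each gene to [0, 1]

    for _ in range(gens):
        fit = np.array([fitness(decode(ch)) for ch in P])
        idx = rng.integers(0, pop, size=(pop, 2))     # tournament selection
        parents = P[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        children = parents.copy()
        for i in range(0, pop - 1, 2):                # one-point crossover
            if rng.random() < pc:
                cut = rng.integers(1, n_genes * bits)
                children[i, cut:], children[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        flip = rng.random(children.shape) < pm        # bit-flip mutation
        P = np.where(flip, 1 - children, children)
    best = P[np.argmax([fitness(decode(ch)) for ch in P])]
    return decode(best)
```

For the box-covering experiments, `fitness` could scale the normalized genes to (x0, y0, θ, r) for each box and return the coverage evaluation sketched after slide 50.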

52

Experiment 1 : 2-Box Problem

Box and Target Parameters
– A = 2000
– R = 64
– (C1, C2) = (128, 128)

GA Parameters
– Population size = 20 chromosomes
– Bit resolution = 8 bits
– Probability of crossover = 0.7
– Probability of mutation = 0.03

53

Experiment 2 : 4-Box Problem

Box and Target Parameters
– A = 1000
– R = 64
– (C1, C2) = (128, 128)

GA Parameters
– Population size = 30 chromosomes
– Bit resolution = 8 bits
– Probability of crossover = 0.8
– Probability of mutation = 0.02

54

MULTIPLE SONAR PING OPTIMIZATION

Sonar coverage: when only the output pixels above a certain threshold value are meaningful, those pixels in the output surveillance area are considered "covered".

Maximization of the aggregated sonar coverage from the given number of pings allows minimization of counter-detection by the object for which we are looking.

A genetic algorithm based approach is used to find the best combination of control parameters for a given environment.

55

Multiple Sonar Ping Coverage Optimization

56

Aggregation

Given the K sonar ping output maps O_1, O_2, …, O_K:

Maximization (per output pixel j):

O_max,j = max_{m=1..K} O_m,j

Sigmoid squashing function (soft thresholding of coverage):

σ(O_max,j) = 1 / (1 + exp(-(O_max,j - T)))

Summation (aggregated coverage A over the N output pixels):

A = Σ_{j=1..N} σ(O_max,j)
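A small sketch of the aggregation above, turning K ping SE maps into a single scalar coverage score for the GA to maximize. The threshold value, the sharpness factor, and the function name are assumptions; the source's sigmoid form is only partly recoverable.

```python
import numpy as np

def aggregated_coverage(se_maps, threshold=60.0, sharpness=1.0):
    """se_maps: array of shape (K, depths, ranges), one SE map per ping."""
    o_max = se_maps.max(axis=0)                                      # pixel-wise MAX over pings
    soft = 1.0 / (1.0 + np.exp(-sharpness * (o_max - threshold)))    # sigmoid squashing
    return soft.sum()                                                # SUM over output pixels

# Example: 4 pings over a 20 x 75 surveillance grid
maps = np.random.uniform(40, 80, size=(4, 20, 75))
print(aggregated_coverage(maps))
```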

57

Genetic Algorithm : Population

The sonar control parameter (depth) is in the range [10 m, 130 m] and the required precision is 3 places after the decimal point, so the required number of bits for one depth is 17. The population consists of 40 chromosomes, each a 68 (= 4 x 17) bit string.

[Chromosomes 1 to P each encode depth 1 | depth 2 | depth 3 | depth 4, with 17 bits per depth]
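For illustration, decoding one 17-bit gene back to a depth in [10 m, 130 m] might look like the following; the linear scaling rule is an assumption consistent with the stated 3-decimal precision (2^17 - 1 = 131071 steps over a 120 m span gives roughly 0.0009 m resolution).

```python
def decode_depth(bits, lo=10.0, hi=130.0):
    """bits: string of 17 '0'/'1' characters -> depth in metres."""
    value = int(bits, 2)                        # 0 .. 2**17 - 1
    return lo + (hi - lo) * value / (2 ** 17 - 1)

print(decode_depth("0" * 17), decode_depth("1" * 17))   # 10.0  130.0
```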

58

4 Sonar Ping Coverage Problem

59

2 Sonar Ping Problem

60

Contributions: Maximal Area Coverage

The systems need not be replications of each other but can, for example, specialize in different aspects of satisfying the fitness function. The search can also be constrained, for example by inserting a constraint-imposing module.

61

Conclusions

A novel neural network learning algorithm (don't care training with step size modification) for data sets with varying output dimension is proposed.

A new neural network inversion algorithm was proposed whereby several neural networks are inverted in parallel, giving the ability to segment the problem into multiple sub-problems.

The sensitivity of the neural network is investigated. Once the neural network is trained, determining which control parameters are critical to the decision making at a certain operating point can be done through input parameter sensitivity analysis.

There exist numerous generalizations of the fundamental architecture of the maximal area coverage problem that allow application to a larger scope of problems.

62

Ideas for Future Works

More work could be done on more accurate training with the sonar data. In particular, multi-resolution neural networks could help extract discrete detection maps.

Data pruning using nearest neighbor analysis before training, or query-based learning using sensitivity analysis during training, could improve the training time and/or accuracy.

Extensive research on the use of evolutionary algorithms could improve inversion speed and precision. Particle swarm optimization or genetic algorithms could be considered for more flexibility in imposing feasibility constraints.

63
