
HAMMING NET AND MAXNET

PRESENTED BY:
Er. Abhishek K. Upadhyay
ECE (Regular), 2015

Introduction

A neural network is a processing device whose design was inspired by the structure and functioning of the human brain and its components.

Different neural network algorithms are used for recognizing patterns.

The various algorithms differ in their learning mechanism.

All learning methods used for adaptive neural networks can be classified into two major categories: supervised learning and unsupervised learning.

Problems

The network's capability for solving complex pattern recognition problems is examined with respect to:
- noise in weights
- noise in inputs
- loss of connections
- missing information and added information

Hamming Net and MAXNET

The primary function of this network is to retrieve a pattern stored in memory when an incomplete or noisy version of that pattern is presented.

It is a two-layer classifier of binary bipolar vectors.

The first layer, the Hamming network, is itself capable of selecting the stored class that is at the minimum Hamming distance (HD) from the test vector presented at the input.

The second layer, MAXNET, only suppresses the weaker outputs.


The Hamming network is of the feed-forward type. The number of output neurons in this part equals the number of classes.

The strongest response of a neuron in this layer indicates the minimum HD between the input vector and the class this neuron represents.

The second layer is MAXNET, which operates as a recurrent network. It involves both excitatory and inhibitory connections.


The purpose of the first layer is to compute, in a feed-forward manner, the values of (n - HD), where HD is the Hamming distance between the search argument and the encoded class prototype vector.

For the Hamming net we have an input vector X, p classes (and hence p output neurons), and an output vector Y = [y1, …, yp].

For any output neuron m, m = 1, …, p, we take

Wm = [wm1, wm2, …, wmn]^t, m = 1, 2, …, p

to be the weights between the input X and that output neuron.

Also assume that for each class m one has the prototype vector S(m) as the standard to be matched.

For classifying p classes, the m'th output should be the largest if and only if the input belongs to class m.

The outputs of the classifier are X^tS(1), X^tS(2), …, X^tS(m), …, X^tS(p).

So when X = S(m), the m'th output is n and the other outputs are smaller than n; with W(m) = S(m), this maximum is reached only when X = S(m).

Since the n - HD(X, S(m)) matching positions each contribute +1 to the product and the HD(X, S(m)) mismatching positions each contribute -1,

X^tS(m) = (n - HD(X, S(m))) - HD(X, S(m)), and therefore ½X^tS(m) = n/2 - HD(X, S(m)).

So the weight matrix is WH = ½S:

$$
W_H = \frac{1}{2}
\begin{bmatrix}
s_1^{(1)} & s_2^{(1)} & \cdots & s_n^{(1)} \\
s_1^{(2)} & s_2^{(2)} & \cdots & s_n^{(2)} \\
\vdots & \vdots & & \vdots \\
s_1^{(p)} & s_2^{(p)} & \cdots & s_n^{(p)}
\end{bmatrix}
$$

By giving a fixed bias n/2 to the input, we get

netm = ½X^tS(m) + n/2, for m = 1, 2, …, p

or netm = n - HD(X, S(m)).

To scale the outputs from the range 0…n down to 0…1, one can apply the transfer function

f(netm) = (1/n)·netm, for m = 1, 2, …, p.
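As a rough illustration of how this first layer can be computed, here is a minimal NumPy sketch (the function name hamming_layer and the argument names are our own; the slides give no code):

```python
import numpy as np

def hamming_layer(S, x):
    """First layer of the Hamming net.

    S : (p, n) matrix whose rows are the bipolar prototype vectors S(m).
    x : (n,) bipolar input vector.
    Returns f(net) with net_m = 1/2 * x^t S(m) + n/2 = n - HD(x, S(m)),
    scaled by 1/n so the outputs lie between 0 and 1.
    """
    S = np.asarray(S, dtype=float)
    x = np.asarray(x, dtype=float)
    n = S.shape[1]
    W_H = 0.5 * S                # weight matrix W_H = (1/2) S
    net = W_H @ x + n / 2.0      # fixed bias n/2 added to every neuron
    return net / n               # transfer function f(net_m) = net_m / n
```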


The node with the highest output is thus the node with the smallest HD between the input and the prototype vectors S(1), …, S(p); for an exact match f(netm) = 1, and for the other nodes f(netm) < 1.

The purpose of MAXNET is to keep only the largest of { y1, …, yp } active and drive all the others to 0.


ε is the lateral interaction coefficient, bounded by 0 < ε < 1/p, and the MAXNET weight matrix is

$$
W_M =
\begin{bmatrix}
1 & -\varepsilon & \cdots & -\varepsilon \\
-\varepsilon & 1 & \cdots & -\varepsilon \\
\vdots & & \ddots & \vdots \\
-\varepsilon & -\varepsilon & \cdots & 1
\end{bmatrix}_{p \times p}
$$

The transfer function of MAXNET is

$$
f(net) =
\begin{cases}
net, & net \ge 0 \\
0, & net < 0
\end{cases}
$$

and the recursion is

$$
net^{k} = W_M\, Y^{k}, \qquad Y^{k+1} = f(net^{k}).
$$
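A minimal sketch of this MAXNET recursion, under the assumption that iteration stops once at most one entry is still positive (the function name maxnet, the default ε and the stopping rule are our own choices, not taken from the slides):

```python
import numpy as np

def maxnet(y0, eps=None, max_iter=100):
    """Winner-take-all recursion Y^{k+1} = f(W_M Y^k).

    y0  : (p,) vector of initial activations from the Hamming layer.
    eps : lateral interaction coefficient, required to satisfy 0 < eps < 1/p.
    """
    y = np.asarray(y0, dtype=float).copy()
    p = y.size
    if eps is None:
        eps = 0.5 / p                                       # any value in (0, 1/p)
    W_M = (1.0 + eps) * np.eye(p) - eps * np.ones((p, p))   # 1 on the diagonal, -eps elsewhere
    for _ in range(max_iter):
        y = np.maximum(W_M @ y, 0.0)                        # f(net) = net if net >= 0 else 0
        if np.count_nonzero(y > 0) <= 1:                    # only the winner is left
            break
    return y
```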

Each entry of the updated vector decreases at every recursion step of the MAXNET update, with the largest entry decreasing slowest, so eventually only the largest entry remains positive.

Algorithm

Step 1: Let the patterns to be classified be a1, a2, …, ap, each of them n-dimensional. The weights connecting the inputs to the neurons of the Hamming network are given by the weight matrix

$$
W_H = \frac{1}{2}
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{p1} & a_{p2} & \cdots & a_{pn}
\end{bmatrix}
$$

Step 2: The n-dimensional input vector x is presented to the input.

Step 3: The net input of each neuron of the Hamming network is

netm = ½X^tS(m) + n/2, for m = 1, 2, …, p,

where n/2 is the fixed bias applied to the input of each neuron of this layer.

Step 4: The output of each neuron of the first layer is

f(netm) = (1/n)·netm, for m = 1, 2, …, p.

Step 5: The output of the Hamming network is applied as the input to MAXNET:

y0 = f(netm)

Step 6: The weights connecting the neurons of the Hamming network and MAXNET are taken as

$$
W_M =
\begin{bmatrix}
1 & -\varepsilon & \cdots & -\varepsilon \\
-\varepsilon & 1 & \cdots & -\varepsilon \\
\vdots & & \ddots & \vdots \\
-\varepsilon & -\varepsilon & \cdots & 1
\end{bmatrix}
$$

where ε must be bounded by 0 < ε < 1/p; ε is called the lateral interaction coefficient. The dimension of WM is p×p.

Step 7: The output of MAXNET is calculated as

$$
net^{k} = W_M\, Y^{k}, \qquad Y^{k+1} = f(net^{k}), \qquad
f(net) =
\begin{cases}
net, & net \ge 0 \\
0, & net < 0
\end{cases}
$$

where k = 1, 2, 3, … denotes the number of the iteration.
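Putting Steps 1-7 together, a sketch of the complete classifier might look as follows (hamming_maxnet_classify and its return convention are our own naming; it simply chains the two sketches above):

```python
import numpy as np

def hamming_maxnet_classify(prototypes, x, eps=None, max_iter=100):
    """Classify a bipolar vector x against the stored prototypes (Steps 1-7).

    prototypes : (p, n) matrix whose row m is the prototype of class m.
    Returns (winner_index, y) where y is the final MAXNET output vector.
    """
    S = np.asarray(prototypes, dtype=float)
    x = np.asarray(x, dtype=float)
    p, n = S.shape

    # Steps 1-4: Hamming layer with weights W_H = S/2, bias n/2, scaling 1/n.
    y = (0.5 * S @ x + n / 2.0) / n

    # Steps 5-7: MAXNET with lateral interaction coefficient eps in (0, 1/p).
    if eps is None:
        eps = 0.5 / p
    W_M = (1.0 + eps) * np.eye(p) - eps * np.ones((p, p))
    for _ in range(max_iter):
        y = np.maximum(W_M @ y, 0.0)
        if np.count_nonzero(y > 0) <= 1:
            break
    return int(np.argmax(y)), y
```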

Example

To build a Hamming net for classifying the characters C, I and T, take

S(1) = [  1  1  1  1 -1 -1  1  1  1 ]^t
S(2) = [ -1  1 -1 -1  1 -1 -1  1 -1 ]^t
S(3) = [  1  1  1 -1  1 -1 -1  1 -1 ]^t

So

$$
W_H = \frac{1}{2}
\begin{bmatrix}
 1 &  1 &  1 &  1 & -1 & -1 &  1 &  1 &  1 \\
-1 &  1 & -1 & -1 &  1 & -1 & -1 &  1 & -1 \\
 1 &  1 &  1 & -1 &  1 & -1 & -1 &  1 & -1
\end{bmatrix}
$$


Consider a test vector X that is at Hamming distances 2, 6 and 4 from S(1), S(2) and S(3) respectively. Then

net = WH·X + (n/2)·[ 1 1 1 ]^t = [ 7  3  5 ]^t

and

Y = f(net) = [ 7/9  3/9  5/9 ]^t ≈ [ 0.777  0.333  0.555 ]^t.

This Y is applied as the input Y^0 to MAXNET. Select ε = 0.2 < 1/3 (= 1/p), so

$$
W_M =
\begin{bmatrix}
1 & -\tfrac{1}{5} & -\tfrac{1}{5} \\
-\tfrac{1}{5} & 1 & -\tfrac{1}{5} \\
-\tfrac{1}{5} & -\tfrac{1}{5} & 1
\end{bmatrix}
$$

and

$$
net^{k} = W_M\, Y^{k}, \qquad Y^{k+1} = f(net^{k}).
$$

K = 0:

$$
net^{0} =
\begin{bmatrix}
1 & -0.2 & -0.2 \\
-0.2 & 1 & -0.2 \\
-0.2 & -0.2 & 1
\end{bmatrix}
\begin{bmatrix} 0.777 \\ 0.333 \\ 0.555 \end{bmatrix}
=
\begin{bmatrix} 0.599 \\ 0.067 \\ 0.333 \end{bmatrix},
\qquad
Y^{1} = f(net^{0}) =
\begin{bmatrix} 0.599 \\ 0.067 \\ 0.333 \end{bmatrix}
$$

K = 1:

$$
net^{1} = \begin{bmatrix} 0.520 \\ -0.120 \\ 0.200 \end{bmatrix},
\qquad
Y^{2} = f(net^{1}) = \begin{bmatrix} 0.520 \\ 0 \\ 0.200 \end{bmatrix}
$$

K = 2:

$$
net^{2} = \begin{bmatrix} 0.480 \\ -0.144 \\ 0.096 \end{bmatrix},
\qquad
Y^{3} = f(net^{2}) = \begin{bmatrix} 0.480 \\ 0 \\ 0.096 \end{bmatrix}
$$

K = 3:

$$
net^{3} = \begin{bmatrix} 0.461 \\ -0.115 \\ 0 \end{bmatrix},
\qquad
Y^{4} = f(net^{3}) = \begin{bmatrix} 0.461 \\ 0 \\ 0 \end{bmatrix}
$$

The result computed by the network after four recurrences indicates that the vector X presented at the input has been at the smallest Hamming distance from S(1).

So it represents the distorted character C.
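The worked example can be reproduced with the hamming_maxnet_classify sketch from the algorithm section. The exact distorted input vector did not survive in this transcript, so the X below is an assumed test vector at HD 2 from C that yields the same net = [7 3 5]^t; with ε = 0.2 the recursion again ends with only the first output active:

```python
import numpy as np

# Prototypes for C, I and T on a 3x3 grid (rows of S), as given above.
S = np.array([[ 1,  1,  1,  1, -1, -1,  1,  1,  1],   # C
              [-1,  1, -1, -1,  1, -1, -1,  1, -1],   # I
              [ 1,  1,  1, -1,  1, -1, -1,  1, -1]])  # T

# Assumed distorted C (pixels 2 and 5 flipped); it gives net = [7, 3, 5]
# and Y^0 = [7/9, 3/9, 5/9], matching the values in the example.
x = np.array([1, -1, 1, 1, 1, -1, 1, 1, 1])

winner, y = hamming_maxnet_classify(S, x, eps=0.2)
print(winner)   # 0, i.e. the input is classified as the distorted character C
print(y)        # roughly [0.46, 0, 0], as in the last recursion step above
```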

Effect of Noise in Inputs

Noise is introduced in the input by adding random numbers to it.

The Hamming network and MAXNET recognize all the stored strings correctly even after this noise is introduced at testing time, as sketched below.
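One hedged way to try such a noise test, assuming the prototype matrix S and the hamming_maxnet_classify sketch from the example above (the noise level 0.3 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(42)

# Add random numbers to each stored pattern and check that the network
# still returns the correct class index.
for m, proto in enumerate(S):
    noisy = proto + rng.normal(scale=0.3, size=proto.size)  # additive random noise
    winner, _ = hamming_maxnet_classify(S, noisy)
    print(f"stored pattern {m}: classified as {winner}")
```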

Loss of Connection

In the network, neurons are interconnected and every interconnection has an interconnecting coefficient called a weight.

If some of these weights are set to zero, the question is how this affects classification or recognition, i.e. how many connections can be removed without degrading the network's performance. A small experiment of this kind is sketched below.
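The sketch again assumes S and the maxnet function from above; the number of removed connections (here 5 out of 27) is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(7)
p, n = S.shape

# Hamming-layer weights, with a few randomly chosen connections set to zero.
W_H = 0.5 * S.astype(float)
removed = rng.choice(W_H.size, size=5, replace=False)
W_H_pruned = W_H.copy()
W_H_pruned.ravel()[removed] = 0.0

# Check how classification of the stored patterns is affected.
for m, proto in enumerate(S):
    y0 = (W_H_pruned @ proto + n / 2.0) / n      # first layer with pruned weights
    winner = int(np.argmax(maxnet(y0, eps=0.2)))
    print(f"stored pattern {m}: classified as {winner}")
```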

Missing Information

Missing information means that some of the "on" pixels in the pattern grid are switched off.

How much information can be missing while the strings are still recognized correctly varies from string to string; the quantity of interest is the number of pixels that can be switched off for each stored string. One way to probe this is sketched below.
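This probe assumes S and hamming_maxnet_classify from the example above; switching off two "on" pixels per pattern is an arbitrary choice, and an analogous test for added information would switch "off" pixels on instead:

```python
import numpy as np

rng = np.random.default_rng(3)

# Switch a couple of "on" pixels off (+1 -> -1) in each stored pattern
# and see whether the string is still recognized.
for m, proto in enumerate(S):
    on_pixels = np.flatnonzero(proto == 1)
    dropped = rng.choice(on_pixels, size=2, replace=False)
    degraded = proto.copy()
    degraded[dropped] = -1
    winner, _ = hamming_maxnet_classify(S, degraded)
    print(f"stored pattern {m}: classified as {winner}")
```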

Adding Information

Adding information means that some of the "off" pixels in the pattern grid are switched on.

The quantity of interest here is the number of pixels that can be switched on for each string stored in the network.

Merits

The network architecture is very simple.

This network is a counterpart of the Hopfield auto-associative network.

Its advantage is that it involves fewer neurons and fewer connections than its counterpart.

There is no capacity limitation.

Demerits

The Hamming network retrieves only the closest class index, not the entire prototype vector.

It is not able to restore any of the key patterns; it provides passive classification only.

The network has no mechanism for data restoration, so it cannot restore a distorted pattern.

References

Jacek M. Zurada, "Introduction to Artificial Neural Systems", Jaico Publishing House, New Delhi, India.

Amit Kumar Gupta, Yash Pal Singh, "Analysis of Hamming Network and MAXNET of Neural Network Method in the String Recognition", IEEE, 2011.

C. M. Bishop, "Neural Networks for Pattern Recognition", Oxford University Press, Oxford, 2003.


Thanks
