
EE04 804(B) Soft Computing Ver. 1.2, Module 2

The Complete Back Propagation Algorithm
March 23rd, 2011

Dr. Sasidharan Sreedharan
www.sasidharan.webs.com


Contents

Backpropagation networks: architecture, multilayer perceptron, back-propagation learning, input layer, hidden layer, and output layer computations, calculation of error, training of ANN, the BP algorithm, momentum and learning rate, and selection of the various parameters in BP networks.

Variations on the standard BP algorithm: adaptive learning rate BP, resilient BP, Levenberg-Marquardt, and conjugate gradient BP algorithms (basic principle only). Applications of ANN.


Step 1: Normalization

Normalize the inputs and outputs with respect to their maximum values. A neural network works better if the inputs and outputs lie between 0 and 1.

Step 2: Neurons in the hidden layer

The number of neurons $m$ in the hidden layer should lie within the limit

$l \le m \le 2l$

where $l$ is the number of input neurons.
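A minimal Python (NumPy) sketch of these two steps; the data arrays and the particular choice of m are made up for illustration:

```python
import numpy as np

# Hypothetical training data: 4 patterns, 3 inputs, 2 outputs.
X = np.array([[2.0, 4.0, 1.0],
              [1.0, 3.0, 5.0],
              [4.0, 2.0, 3.0],
              [3.0, 1.0, 2.0]])
T = np.array([[0.2, 0.8],
              [0.6, 0.4],
              [0.9, 0.1],
              [0.5, 0.5]])

# Step 1: normalize inputs and outputs by their maximum values
# so that every value lies between 0 and 1.
X_norm = X / X.max()
T_norm = T / T.max()

# Step 2: pick the number of hidden neurons m within l <= m <= 2l,
# where l is the number of input neurons. m = 1.5*l is an arbitrary pick.
l = X.shape[1]
m = int(1.5 * l)
assert l <= m <= 2 * l
```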


Step 3: Initialize weights

Initialize the weights to small random values between -1 and +1:

$[V]^0 = [\text{random weights}]$

$[W]^0 = [\text{random weights}]$

$[\Delta V]^0 = [\Delta W]^0 = [0]$

The sigmoid gain $\lambda$ can be assumed to be 1 and the threshold values can be taken as zero.
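A small sketch of the initialization, assuming l = 3 input, m = 4 hidden, and n = 2 output neurons (sizes and names are illustrative):

```python
import numpy as np

l, m, n = 3, 4, 2                      # assumed layer sizes
rng = np.random.default_rng(0)

# Step 3: small random weights in [-1, +1];
# [V] connects input to hidden, [W] connects hidden to output.
V = rng.uniform(-1.0, 1.0, size=(l, m))
W = rng.uniform(-1.0, 1.0, size=(m, n))

# The weight-change matrices start at zero; they carry the momentum term later.
dV = np.zeros((l, m))
dW = np.zeros((m, n))
```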


Step 4: Input layer computation

Using a linear activation function, the output of the input layer equals its input:

$\{O\}_I = \{I\}_I$


Step 5: Hidden layer computation

Compute the input to the hidden layer by multiplying the input-layer output with the corresponding synaptic weights:

$\{I\}_H = [V]^T \{O\}_I$

Assuming a sigmoidal activation function in the hidden layer, evaluate its output as

$O_{Hi} = \dfrac{1}{1 + e^{-I_{Hi}}}$
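Steps 4 and 5 form the first half of the forward pass. A sketch with made-up weights and a single input pattern:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

V = np.array([[ 0.1, -0.4],
              [ 0.3,  0.2],
              [-0.2,  0.5]])          # made-up [V]: l = 3 inputs, m = 2 hidden
I_I = np.array([0.4, 0.6, 0.2])       # one normalized input pattern

# Step 4: linear activation at the input layer, so {O}_I = {I}_I.
O_I = I_I

# Step 5: net input to the hidden layer, then its sigmoidal output.
I_H = V.T @ O_I                       # {I}_H = [V]^T {O}_I
O_H = sigmoid(I_H)                    # O_Hi = 1 / (1 + e^(-I_Hi))
```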


Step 6: Output layer computation

Compute the input to the output layer by multiplying the hidden-layer output with the corresponding weights:

$\{I\}_O = [W]^T \{O\}_H$

Assuming a sigmoidal activation function in the output layer, evaluate its output as

$O_{Oj} = \dfrac{1}{1 + e^{-I_{Oj}}}$
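Continuing the forward pass with made-up values for [W] and the hidden output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W = np.array([[0.2, -0.1],
              [0.4,  0.3]])           # made-up [W]: m = 2 hidden, n = 2 outputs
O_H = np.array([0.55, 0.62])          # hidden-layer output from Step 5

# Step 6: net input to the output layer, then the network output.
I_O = W.T @ O_H                       # {I}_O = [W]^T {O}_H
O_O = sigmoid(I_O)                    # O_Oj = 1 / (1 + e^(-I_Oj))
```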


Step 7: Error calculation

Calculate the error for training pattern $p$ from the difference between the network output and the desired output:

$E_p = \dfrac{\sqrt{\sum_j (T_j - O_{Oj})^2}}{n}$
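A one-line check of the formula, assuming the square-root form reconstructed above (target and output values made up):

```python
import numpy as np

T_p = np.array([0.9, 0.1])            # desired output for pattern p (made up)
O_O = np.array([0.6, 0.3])            # network output from Step 6
n = T_p.size

# Step 7: error for training pattern p.
E_p = np.sqrt(np.sum((T_p - O_O) ** 2)) / n
```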


Step 8: Find {d}

Compute the error term of each output neuron using the derivative of its sigmoidal output:

$d_k = (T_k - O_{Ok})\, O_{Ok}\, (1 - O_{Ok})$
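The same computation, vectorized over all output neurons (values made up):

```python
import numpy as np

T_p = np.array([0.9, 0.1])
O_O = np.array([0.6, 0.3])

# Step 8: error term of each output neuron,
# d_k = (T_k - O_Ok) * O_Ok * (1 - O_Ok).
d = (T_p - O_O) * O_O * (1.0 - O_O)
```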


Step 9: Find [Y]

$[Y] = \{O\}_H \langle d \rangle$

Step 10: Find $[\Delta W]^{t+1}$

Update the weight change for the hidden-to-output synapses using the momentum term $\alpha$ and the learning rate $\eta$:

$[\Delta W]^{t+1} = \alpha [\Delta W]^t + \eta [Y]$
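A sketch of the outer product and the momentum update; the values alpha = 0.9 and eta = 0.3 are assumptions, as the slides do not fix them:

```python
import numpy as np

O_H = np.array([0.55, 0.62])          # hidden output (m = 2)
d = np.array([0.072, -0.042])         # output error terms from Step 8 (n = 2)
dW = np.zeros((2, 2))                 # previous weight change [dW]^t
alpha, eta = 0.9, 0.3                 # assumed momentum and learning rate

# Step 9: [Y] = {O}_H <d>, an m-by-n outer product.
Y = np.outer(O_H, d)

# Step 10: momentum update of the hidden-to-output weight change.
dW = alpha * dW + eta * Y
```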


Step 11: Find {e}

Propagate the output error terms back through the hidden-to-output weights:

$\{e\} = [W] \{d\}$

Step 12: Find {d*}

Compute the error term of each hidden neuron:

$d^*_i = e_i\, O_{Hi}\, (1 - O_{Hi})$
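A sketch of back-propagating the error terms to the hidden layer (values made up):

```python
import numpy as np

W = np.array([[0.2, -0.1],
              [0.4,  0.3]])           # hidden-to-output weights (m = 2, n = 2)
d = np.array([0.072, -0.042])         # output error terms
O_H = np.array([0.55, 0.62])

# Step 11: propagate the output error terms back through [W].
e = W @ d                             # {e} = [W] {d}

# Step 12: error term of each hidden neuron.
d_star = e * O_H * (1.0 - O_H)        # d*_i = e_i * O_Hi * (1 - O_Hi)
```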


Step 13: Find [X]

$[X] = \{O\}_I \langle d^* \rangle = \{I\}_I \langle d^* \rangle$

Step 14: Find $[\Delta V]^{t+1}$

$[\Delta V]^{t+1} = \alpha [\Delta V]^t + \eta [X]$
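The mirror-image update for the input-to-hidden weights, again with assumed alpha and eta:

```python
import numpy as np

O_I = np.array([0.4, 0.6, 0.2])       # input-layer output (= {I}_I), l = 3
d_star = np.array([0.010, -0.004])    # hidden error terms from Step 12 (m = 2)
dV = np.zeros((3, 2))                 # previous weight change [dV]^t
alpha, eta = 0.9, 0.3                 # assumed momentum and learning rate

# Step 13: [X] = {O}_I <d*>, an l-by-m outer product.
X = np.outer(O_I, d_star)

# Step 14: momentum update of the input-to-hidden weight change.
dV = alpha * dV + eta * X
```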


Step 15: Update the weights

$[V]^{t+1} = [V]^t + [\Delta V]^{t+1}$

$[W]^{t+1} = [W]^t + [\Delta W]^{t+1}$

Step 16: Error rate

$\text{error rate} = \dfrac{\sum E_p}{n_{\text{set}}}$

where $n_{\text{set}}$ is the number of training patterns.

Step 17: Repeat steps 4 to 16 until the convergence in the error rate is less than the tolerance value.
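Putting steps 3 to 17 together, a compact training-loop sketch; the function name, the per-pattern update order, the tolerance, and the epoch cap are illustrative choices rather than anything prescribed by the slides:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, T, m, eta=0.3, alpha=0.9, tol=1e-3, max_epochs=10000):
    """Steps 3-17 of the slide algorithm for a one-hidden-layer BP network."""
    l, n = X.shape[1], T.shape[1]
    rng = np.random.default_rng(0)
    V = rng.uniform(-1, 1, (l, m))                     # Step 3
    W = rng.uniform(-1, 1, (m, n))
    dV, dW = np.zeros_like(V), np.zeros_like(W)
    for epoch in range(max_epochs):
        errors = []
        for I_I, T_p in zip(X, T):
            O_I = I_I                                  # Step 4
            O_H = sigmoid(V.T @ O_I)                   # Step 5
            O_O = sigmoid(W.T @ O_H)                   # Step 6
            errors.append(np.sqrt(np.sum((T_p - O_O) ** 2)) / n)  # Step 7
            d = (T_p - O_O) * O_O * (1 - O_O)          # Step 8
            dW = alpha * dW + eta * np.outer(O_H, d)   # Steps 9-10
            d_star = (W @ d) * O_H * (1 - O_H)         # Steps 11-12
            dV = alpha * dV + eta * np.outer(O_I, d_star)  # Steps 13-14
            V, W = V + dV, W + dW                      # Step 15
        error_rate = sum(errors) / len(errors)         # Step 16
        if error_rate < tol:                           # Step 17
            break
    return V, W

# Example usage with made-up, already-normalized data:
X = np.array([[0.4, 0.8, 0.2], [0.2, 0.6, 1.0]])
T = np.array([[0.2, 0.8], [0.6, 0.4]])
V, W = train_bp(X, T, m=4)
```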


    Regards

    www.sasisreedhar.webs.com