
A PAPER PRESENTATION ON "OPTIMIZED APPROXIMATION ALGORITHM IN NEURAL NETWORKS WITHOUT OVERFITTING" by Yinyin Liu, Janusz A. Starzyk, Senior Member, IEEE, and Zhen Zhu, Member, IEEE. Presented by Suresh Kumar Chhetri, 069/MSI/620. Date: 2070/3/12.



TRANSCRIPT

  • A PAPER PRESENTATION ON "OPTIMIZED APPROXIMATION ALGORITHM IN NEURAL NETWORKS WITHOUT OVERFITTING"
    Yinyin Liu, Janusz A. Starzyk, Senior Member, IEEE, and Zhen Zhu, Member, IEEE
    Presented by: Suresh Kumar Chhetri, 069/MSI/620
    Date: 2070/3/12

  • ABSTRACT
    An optimized approximation algorithm (OAA) is proposed to address the overfitting problem in function approximation using neural networks.
    The optimized approximation algorithm avoids overfitting.
    The algorithm has been applied to the problem of optimizing the number of hidden neurons in a multilayer perceptron.

  • INTRODUCTION
    Without prior knowledge of system properties, we can only obtain a limited number of observations and use a set of basis functions to fit the data.
    In NN learning, adding more hidden neurons is equivalent to adding more basis functions to the approximation.
    Several techniques have been developed to optimize the number of hidden neurons.
    If the NN training uses the back-propagation (BP) algorithm, the number of hidden neurons is typically increased gradually until the data are fit adequately.

  • INTRODUCTION CONTD.
    Using an excessive number of basis functions will cause overfitting.
    Too many epochs used in BP training will lead to overtraining, a concept similar to overfitting.
    The performance of a NN can be improved by introducing additive noise to the training samples.
    To find the optimal network, constructive/destructive algorithms were adopted.

  • Fig: Variation of errors in function approximation.

  • INTRODUCTION CONT.
    A signal-to-noise ratio figure (SNRF) is defined to measure the goodness of fit using the training error.
    Based on the SNRF measurement, an optimized approximation algorithm (OAA) is proposed to avoid overfitting in function approximation.

  • ESTIMATION OF SIGNAL-TO-NOISE RATIO FIGURE
    A. SNRF of the Error Signal
    The SNRF can be precalculated for a signal that contains solely WGN.
    Comparing the SNRF of the error signal with that of WGN determines whether WGN dominates the error signal.
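
A minimal sketch of that precalculation, assuming Python with NumPy (my own illustration, not code from the paper or the slides): draw many pure-WGN signals, apply the 1-D SNRF estimator defined as eq. (3) on the next slide, and read off an empirical detection level. The trial count and the 95% quantile are illustrative assumptions.

    import numpy as np

    def snrf_1d(e):
        # 1-D SNRF estimator (eq. (3) on the next slide): correlation of
        # neighboring samples over the remaining (noise-like) energy.
        c_shift = np.dot(e[:-1], e[1:])
        c_total = np.dot(e, e)
        return c_shift / (c_total - c_shift)

    def wgn_snrf_level(n_samples, n_trials=10000, quantile=0.95, seed=0):
        # Precalculate the SNRF level of pure WGN by Monte Carlo:
        # draw many WGN signals and take a high quantile of their SNRFs.
        rng = np.random.default_rng(seed)
        snrfs = [snrf_1d(rng.standard_normal(n_samples)) for _ in range(n_trials)]
        return np.quantile(snrfs, quantile)

    print(wgn_snrf_level(100))  # comparable to the analytic threshold 1.7/sqrt(100)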

  • ESTIMATION OF SIGNAL-TO-NOISE RATIO FIGURE
    B. SNRF Estimation for 1-D Function Approximation
    The error signal: e_i = y_i - ŷ_i = s_i + n_i, (i = 1, 2, ..., N) ... (1)
    where N is the number of samples, n the noise component, and s the useful signal.
    The energy of the signal e: E_e = C(e, e) = Σ_{i=1}^{N} e_i² ... (2)
    The SNRF of the error signal: SNRF_e = E_s / E_n = C(e, e') / (C(e, e) - C(e, e')) ... (3)
    where e' is the error sequence shifted by one sample, so C(e, e') = Σ_{i=1}^{N-1} e_i e_{i+1}.
    For WGN, SNRF_WGN has zero mean with standard deviation 1/√N ... (4)
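
To make eqs. (1)-(3) concrete, a toy 1-D example, again assuming Python with NumPy; the sine target, noise level, and polynomial fits are illustrative assumptions, not the paper's experiments:

    import numpy as np

    def snrf_1d(e):
        # Eq. (3): SNRF_e = E_s / E_n = C(e, e') / (C(e, e) - C(e, e'))
        c_shift = np.dot(e[:-1], e[1:])   # C(e, e'): neighbor correlation
        c_total = np.dot(e, e)            # C(e, e): total error energy, eq. (2)
        return c_shift / (c_total - c_shift)

    rng = np.random.default_rng(1)
    N = 200
    x = np.linspace(0.0, 1.0, N)                               # ordered samples
    y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(N)   # s + n, eq. (1)

    threshold = 1.7 / np.sqrt(N)
    for degree in (1, 9):
        e = y - np.polyval(np.polyfit(x, y, degree), x)        # training error
        print(f"degree {degree}: SNRF = {snrf_1d(e):+.3f}, threshold = {threshold:.3f}")

The degree-1 underfit leaves the sine in the training error, so its SNRF lands far above the threshold; the degree-9 fit leaves mostly noise, so its SNRF is close to zero. Ordering the samples by x is what makes the neighbor correlation in eq. (3) meaningful.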

  • ESTIMATION OF SIGNAL-TO-NOISE RATIO FIGURE
    C. SNRF Estimation for Multidimensional Function Approximation
    Equation (5) extends the SNRF to the multidimensional case: the signal and noise levels at each sample are estimated from a weighted combination of its neighboring samples, and SNRF_e = E_s / E_n as before.

  • OPTIMIZED APPROXIMATION ALGORITHM
    1) Assume that an unknown function F, with input space X, is described by N training samples as F(x_i) = y_i, (i = 1, 2, ..., N).
    2) The signal detection threshold is precalculated for the given number of samples N as th_SNRF(N) = 1.7/√N.
    3) Select B as the initial value for the target parameter.
    4) Use the MLP to obtain the approximation function ŷ_i = F̂(x_i), (i = 1, 2, ..., N).

  • OPTIMIZED APPROXIMATION ALGORITHM
    5) Calculate the error signal e_i = y_i - ŷ_i, (i = 1, 2, ..., N).
    6) Determine the SNRF of the error signal; for a 1-D problem, use equation (3).
    7) Stop if SNRF_e is less than th_SNRF or if B exceeds its maximum value. Otherwise, increment B and repeat steps 4-7.
    8) If SNRF_e is equal to or less than th_SNRF, F̂ is the optimized approximation.
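
A minimal sketch of the whole OAA loop for a 1-D problem, assuming scikit-learn's MLPRegressor as the MLP and taking the target parameter B to be the number of hidden neurons; the data, B_max, and training settings are illustrative assumptions, not the paper's setup:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def snrf_1d(e):
        # Eq. (3): SNRF of the training error
        c_shift = np.dot(e[:-1], e[1:])
        c_total = np.dot(e, e)
        return c_shift / (c_total - c_shift)

    # Step 1: N training samples of an unknown function, F(x_i) = y_i
    rng = np.random.default_rng(0)
    N = 300
    x = np.sort(rng.uniform(-1.0, 1.0, N))   # sorted so eq. (3) applies
    y = np.sin(3 * np.pi * x) + 0.1 * rng.standard_normal(N)

    # Step 2: precalculate the signal detection threshold for N samples
    threshold = 1.7 / np.sqrt(N)

    # Steps 3-8: grow B until the training error looks like pure WGN
    B, B_max = 1, 50
    while B <= B_max:
        mlp = MLPRegressor(hidden_layer_sizes=(B,), max_iter=5000,
                           random_state=0).fit(x.reshape(-1, 1), y)  # step 4
        e = y - mlp.predict(x.reshape(-1, 1))                        # step 5
        if snrf_1d(e) <= threshold:      # steps 6-8: SNRF at or below the
            print(f"optimized approximation with B = {B} hidden neurons")
            break                        # threshold, so F̂ is optimized
        B += 1                           # step 7: increment B and repeat
    else:
        print("B exceeded its maximum value before reaching the threshold")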

  • SIMULATION
    Fig: Simulation results. (a) SNRF of the error signal and the threshold. (b) Training performance. (c) Validation performance.

  • CONCLUSION
    An optimized approximation algorithm is proposed to solve the problem of overfitting in function approximation using NNs.
    The algorithm can automatically detect overfitting based on the training error only.
    It can be applied to the optimization of any learning model.

  • REFERENCES
    1. S. I. Gallant, "Perceptron-based learning algorithms," IEEE Trans. Neural Netw., vol. 1, no. 2, pp. 179-191, Jun. 1990.
    2. S. Lawrence, C. L. Giles, and A. C. Tsoi, "Lessons in neural network training: Overfitting may be harder than expected," in Proc. 14th Nat. Conf. Artif. Intell., 1997, pp. 540-545.
    3. R. Reed, "Pruning algorithms - A survey," IEEE Trans. Neural Netw., vol. 4, no. 5, pp. 740-747, Sep. 1993.

  • Thank you !!!