Neural Network Learning Rules

Artificial Neural Networks

Upload: vijaya-lakshmi

Post on 22-Nov-2015


TRANSCRIPT

  • Artificial Neural Networks

  • Definition of Learning: Learning is the process by which the free parameters of a neural network get adapted through a process of stimulation by the environment in which the network is embedded.

  • Learning: an ANN has the ability to learn by interacting with its environment. Learning is accomplished through an adaptive procedure, known as a learning rule or algorithm, whereby the weights of the network are incrementally adjusted so as to improve a predefined performance measure over time.

  • Basic Learning Rules
    - Hebbian learning rule
    - Perceptron learning rule
    - Delta learning rule
    - Widrow-Hoff learning rule
    - Correlation learning rule
    - Winner-take-all learning rule

  • The General Learning Rule: the weight vector wᵢ is adjusted in proportion to the product of the input x and a learning signal r, Δwᵢ = c · r(wᵢ, x, dᵢ) · x, where c is the learning constant and dᵢ is the desired response (used only by supervised rules).
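The general rule, Δwᵢ = c · r(wᵢ, x, dᵢ) · x, can be sketched in Python as follows; the learning signal r is rule-specific, and the function names below are illustrative, not from the slides:

```python
def general_update(w, x, c, r, d=None):
    """One step of the general learning rule: w <- w + c * r(w, x, d) * x.
    r is the rule-specific learning signal; d is the desired response,
    used only by supervised rules."""
    signal = r(w, x, d)
    return [wi + c * signal * xi for wi, xi in zip(w, x)]

# Hebbian learning as a special case: r is the neuron's own output f(w.x),
# here with f(net) = sgn(net).
def hebbian_signal(w, x, d):
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 if net >= 0 else -1.0

w = general_update([1.0, -1.0], [2.0, 0.5], c=1.0, r=hebbian_signal)
```

Each rule on the following slides plugs a different learning signal r into this same skeleton.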

  • Illustration for Weight Learning Rules

  • Hebbian Learning

    When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

    This is often paraphrased as "Neurons that fire together wire together." It is commonly referred to as Hebb's Law.

  • Hebb's Law can be represented in the form of two rules:
    - If two neurons on either side of a connection are activated synchronously, then the weight of that connection is increased.
    - If two neurons on either side of a connection are activated asynchronously, then the weight of that connection is decreased.
    Hebb's Law provides the basis for learning without a teacher. Learning here is a local phenomenon occurring without feedback from the environment.

  • Hebbian Learning Rule: the learning signal is equal to the neuron's output, r = o = f(wᵀx), so the weight increment is Δw = c o x.

    FEED FORWARD UNSUPERVISED LEARNING

  • Example 1: c = 1 and f(net) = sgn(net). (Final answer shown on the original slide.)

  • Example 2. (Final answer shown on the original slide.)
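A worked Hebbian step in the spirit of Example 1 can be sketched as follows; the weight and input vectors are chosen for illustration and are assumptions, not taken from the slides:

```python
def sgn(net):
    """Bipolar sign activation: +1 if net >= 0, else -1."""
    return 1.0 if net >= 0 else -1.0

def hebbian_step(w, x, c=1.0):
    """Unsupervised Hebbian update: the learning signal is the output
    o = f(w.x), so delta_w = c * o * x."""
    o = sgn(sum(wi * xi for wi, xi in zip(w, x)))
    return [wi + c * o * xi for wi, xi in zip(w, x)]

w = [1.0, -1.0, 0.0, 0.5]
for x in ([1, -2, 1.5, 0], [1, -0.5, -2, -1.5], [0, 1, -1, 1.5]):
    w = hebbian_step(w, x)   # each input pulls w along +x or -x
```

Note that no desired response appears anywhere: the weights simply grow along directions the network is repeatedly stimulated in.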

  • Perceptron Learning Rule: the learning signal is the difference between the desired and the actual neuron response, r = d − o. Learning is supervised; weights change only when the response is incorrect, Δw = c(d − o)x.

  • Example 1. (Final answer shown on the original slide.)

  • Example 2. (Final answer shown on the original slide.)
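A runnable sketch of the perceptron rule on a small bipolar training set; the data (bipolar AND with a bias input) and the learning constant are illustrative assumptions, not the slides' example:

```python
def perceptron_step(w, x, d, c=0.5):
    """Perceptron update: r = d - o with o = sgn(w.x); weights change
    only when the response is wrong: delta_w = c * (d - o) * x."""
    o = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1.0
    return [wi + c * (d - o) * xi for wi, xi in zip(w, x)]

# Bipolar AND, with a constant bias input of 1 prepended to each pattern.
data = [([1, 1, 1], 1), ([1, 1, -1], -1), ([1, -1, 1], -1), ([1, -1, -1], -1)]
w = [0.0, 0.0, 0.0]
for _ in range(10):            # a few epochs suffice for this separable set
    for x, d in data:
        w = perceptron_step(w, x, d)

predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
               for x, _ in data]
```

Because r = d − o is zero whenever the response is already correct, training stops changing the weights once every pattern is classified correctly.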

  • Delta Learning Rule
    - Only valid for continuous activation functions
    - Used in supervised training mode
    - The learning signal for this rule is called delta: r = [d − f(wᵀx)] f′(wᵀx)
    - The aim of the delta rule is to minimize the error over all training patterns

  • Delta Learning Rule (contd.)
    - The learning rule is derived from the condition of least squared error, E = ½(d − o)².
    - Calculating the gradient vector with respect to w gives ∇E = −(d − o) f′(net) x.
    - Minimization of the error requires the weight changes to be in the negative gradient direction: Δw = −c∇E = c(d − o) f′(net) x.

  • Assuming a bipolar continuous sigmoid activation function, f(net) = 2/(1 + exp(−net)) − 1, the derivative is f′(net) = ½(1 − o²). (Final answer shown on the original slide.)
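A minimal delta-rule step with the bipolar continuous sigmoid o = 2/(1 + exp(−net)) − 1 and its derivative f′(net) = ½(1 − o²); the vectors and learning constant below are illustrative assumptions:

```python
import math

def delta_step(w, x, d, c=0.1):
    """Delta update: delta_w = c * (d - o) * f'(net) * x, with the
    bipolar sigmoid o = 2/(1+exp(-net)) - 1 and f'(net) = 0.5*(1 - o*o)."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    o = 2.0 / (1.0 + math.exp(-net)) - 1.0
    fprime = 0.5 * (1.0 - o * o)
    return [wi + c * (d - o) * fprime * xi for wi, xi in zip(w, x)]

def squared_error(w, x, d):
    net = sum(wi * xi for wi, xi in zip(w, x))
    o = 2.0 / (1.0 + math.exp(-net)) - 1.0
    return 0.5 * (d - o) ** 2

w, x, d = [1.0, -1.0], [2.0, 0.0], -1.0
e0 = squared_error(w, x, d)
for _ in range(50):
    w = delta_step(w, x, d)
e1 = squared_error(w, x, d)   # error shrinks along the negative gradient
```

Repeated steps on the same pattern illustrate the gradient-descent character of the rule: the squared error decreases monotonically for a small enough c.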

  • Widrow-Hoff Learning Rule
    - Also called the least mean square (LMS) learning rule
    - Introduced by Widrow (1962); used in supervised learning
    - Independent of the activation function
    - A special case of the delta learning rule in which the activation function is the identity, f(net) = net
    - Minimizes the squared error between the desired output value dᵢ and the activation netᵢ
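Since f(net) = net, the Widrow-Hoff update reduces to Δw = c (d − wᵀx) x. A sketch fitting a small linear problem (the samples and target weights are assumptions for illustration):

```python
def lms_step(w, x, d, c=0.1):
    """LMS update: identity activation, so delta_w = c * (d - w.x) * x."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + c * (d - net) * xi for wi, xi in zip(w, x)]

# Fit the linear mapping d = 2*x1 - x2 from three consistent samples.
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = [0.0, 0.0]
for _ in range(200):
    for x, d in samples:
        w = lms_step(w, x, d)
# w converges toward [2, -1]
```

Because no f′(net) factor appears, the rule works regardless of which activation is later applied at the output.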

  • Correlation Learning Rule
    - Similar to Hebbian learning, but supervised
    - The learning signal is the desired response itself: r = dᵢ

    Δwᵢ = c dᵢ x
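The correlation rule replaces the actual output in the Hebbian update with the desired output, so the increment Δwᵢ = c dᵢ x needs no activation function at all. A minimal sketch (values are illustrative):

```python
def correlation_step(w, x, d, c=1.0):
    """Supervised Hebbian-like update: delta_w = c * d * x."""
    return [wi + c * d * xi for wi, xi in zip(w, x)]

w = correlation_step([0.0, 0.0], [1.0, 2.0], d=-1.0, c=0.5)
```

Starting from zero weights, repeated application simply accumulates the input patterns scaled by their desired responses, i.e. a correlation (outer-product) memory.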

  • Competitive Learning
    - In competitive learning, neurons compete among themselves to be activated.
    - While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time.
    - The output neuron that wins the competition is called the winner-takes-all neuron.

  • The basic idea of competitive learning was introduced in the early 1970s.In the late 1980s, Teuvo Kohonen introduced a special class of artificial neural networks called self-organizing feature maps. These maps are based on competitive learning.The neuron with the maximum output and its neighbors are allowed to adjust their weights.

  • Winner-Take-All learning rules

  • Winner-Take-All Learning Rule (contd.)
    - Can be explained for a layer of p neurons
    - An example of competitive learning, used for unsupervised network training
    - Learning is based on the premise that one of the neurons in the layer, say the m-th, has the maximum response to the input x
    - This neuron is declared the winner, and its weight vector is the only one updated: Δwₘ = c(x − wₘ)

  • The winner is selected by the criterion of maximum activation among all p neurons participating in the competition (equivalently, by finding the weight vector closest to the input vector x); only the weights connecting the inputs to the neuron with the maximum response are updated.
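The winner selection and update above can be sketched as follows; the layer size, learning constant, and input are illustrative assumptions:

```python
def wta_step(W, x, c=0.2):
    """One winner-take-all update for a layer of p neurons (rows of W).
    The winner m has the maximum net input w_m . x; only its weight
    vector moves toward the input: delta_w_m = c * (x - w_m)."""
    nets = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]
    m = max(range(len(W)), key=lambda i: nets[i])
    W[m] = [wi + c * (xi - wi) for wi, xi in zip(W[m], x)]
    return m, W

W = [[1.0, 0.0], [0.0, 1.0]]      # two neurons with normalized weights
m, W = wta_step(W, [0.9, 0.1])    # neuron 0 responds most strongly and wins
```

With repeated presentations, each winning weight vector drifts toward the cluster of inputs it responds to, which is the mechanism Kohonen's self-organizing maps extend by also updating the winner's neighbors.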

  • Summary of learning rules

  • THANK YOU