Brain-like design of sensory-motor programs for robots (G. Palm, University of Ulm)
Post on 19-Dec-2015
The cerebral cortex is a huge associative memory,
or rather a large network of associatively connected topographical areas.
Associations between patterns are formed by Hebbian learning.
Even simple tasks require the interaction of many cortical areas.
Modelling Cortical Areas with Associative Memories
Andreas Knoblauch & Günther Palm
Department of Neural Information Processing, University of Ulm, Germany
Overview
- Introduction
- Neural associative memory
  - Willshaw model
  - Spiking associative memory (SAM)
- Modeling cortical areas for the MirrorBot project
  - Cortical areas for the minimal scenario "Bot show plum!"
  - Implementation of the language areas using SAM
- Summary and discussion
Associative Memory (AM)
(1) Learning patterns: P^1, P^2, ..., P^M are stored in the AM.
(2) Retrieving patterns: the AM is addressed with one or more noisy patterns,
    X = P^(i1) + P^(i2) + ... + P^(im) + noise,
and returns the stored patterns (P^(i1), P^(i2), ..., P^(im)).
Neural Associative Memory (NAM)
Binary Willshaw model (Willshaw 1969, Palm 1980, Hopfield 1982)
- sparse coding: pattern P in {0,1}^n with k ones, k = O(log n)
- with n neurons, O(n^2 / log^2 n) patterns can be stored
- memory capacity: ln 2 ≈ 0.7 bit/synapse
Extensions:
- iterative retrieval (Schwenker/Sommer/Palm 1996, 1999)
- spiking associative memory (Wennekers/Palm 1997), mainly for biological modelling
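The capacity estimate above can be checked numerically. A minimal sketch (the concrete value n = 1024 is an illustrative choice, not from the slides):

```python
import math

# Willshaw capacity estimate: with n binary neurons and a sparse code
# of k = O(log n) active units per pattern, roughly M ~ ln2 * n^2 / k^2
# patterns can be stored, at about ln2 ~ 0.69 bit per synapse.
n = 1024
k = int(math.log2(n))                 # sparse code: k = 10 active units
M = int(math.log(2) * n**2 / k**2)    # ~ 7268 storable patterns
print(k, M)
```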
Binary Willshaw NAM: Learning
Patterns P(k) in {0,1}^n, k = 1, ..., M, are stored via the clipped Hebbian rule
    A_ij = min(1, sum_k P(k)_i · P(k)_j)
Example (n = 8):
P(1) = 1 1 1 1 0 0 0 0
P(2) = 0 0 1 1 1 1 0 0
After storing P(1), A_ij = 1 for i, j in {1,...,4}; after additionally storing P(2), also for i, j in {3,...,6}:
    1 1 1 1 0 0 0 0
    1 1 1 1 0 0 0 0
    1 1 1 1 1 1 0 0
    1 1 1 1 1 1 0 0
    0 0 1 1 1 1 0 0
    0 0 1 1 1 1 0 0
    0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0
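The clipped Hebbian learning rule can be sketched in a few lines of NumPy (assuming NumPy is available); this reproduces the 8-neuron example:

```python
import numpy as np

# Willshaw learning: A_ij = min(1, sum_k P(k)_i * P(k)_j),
# i.e. Hebbian outer products with binary weights clipped at 1.
P1 = np.array([1, 1, 1, 1, 0, 0, 0, 0])
P2 = np.array([0, 0, 1, 1, 1, 1, 0, 0])

A = np.zeros((8, 8), dtype=int)
for P in (P1, P2):
    A = np.minimum(1, A + np.outer(P, P))   # clip weights at 1

print(A)
```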
Binary Willshaw NAM: Retrieving
Learned patterns: P(1) = 1 1 1 1 0 0 0 0, P(2) = 0 0 1 1 1 1 0 0
Address pattern: PX = 0 1 1 0 0 0 0 0
Neuron potentials: x = A PX = 2 2 2 2 1 1 0 0
Retrieval result (threshold Θ = 2): PR = 1 1 1 1 0 0 0 0 = P(1)
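One-step retrieval for this example, sketched with NumPy (thresholding at the maximum potential recovers the slides' Θ = 2 here):

```python
import numpy as np

# Willshaw retrieval: potentials x = A @ PX, then threshold.
P1 = np.array([1, 1, 1, 1, 0, 0, 0, 0])
P2 = np.array([0, 0, 1, 1, 1, 1, 0, 0])
A = np.minimum(1, np.outer(P1, P1) + np.outer(P2, P2))

PX = np.array([0, 1, 1, 0, 0, 0, 0, 0])   # noisy/partial address
x = A @ PX                                 # potentials: 2 2 2 2 1 1 0 0
PR = (x >= x.max()).astype(int)            # threshold at Theta = 2
print(x, PR)                               # PR equals P(1)
```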
NAM and Problems with Superpositions
Classical retrieval: addressing with half a pattern (k/2 ones) plus noise (f) works well.
Superposition: addressing with two half patterns at once plus noise is problematic, since classical one-step retrieval cannot separate the two patterns.
Possible solutions: spiking neuron models, iterative retrieval, or a combination of both?
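A minimal sketch of the superposition problem (the two disjoint stored patterns are an illustrative choice): addressing with halves of two patterns makes one-step retrieval return their union rather than either pattern alone:

```python
import numpy as np

# Store two disjoint patterns, then address with half of each.
P1 = np.array([1, 1, 1, 1, 0, 0, 0, 0])
P2 = np.array([0, 0, 0, 0, 1, 1, 1, 1])
A = np.minimum(1, np.outer(P1, P1) + np.outer(P2, P2))

PX = np.array([1, 1, 0, 0, 1, 1, 0, 0])   # half of P1 + half of P2
x = A @ PX
PR = (x >= x.max()).astype(int)
print(PR)    # the union of P1 and P2: the patterns are not separated
```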
Working Principle of Spiking Associative Memory
Interpretation of the classical potentials x as temporal dynamics dx/dt:
- the most excited neurons fire first
- feedback breaks the symmetry: one pattern pops out, the others are suppressed
Problem: how to achieve threshold control? (e.g. Wennekers/Palm 1997)
Counter Model of Spiking Associative Memory
States ("counters") of neuron i at time t:
- CH_i(t): number of spikes received heteroassociatively until time t
- CA_i(t): number of spikes received autoassociatively until time t
- C(t): total number of spikes so far
Instantaneous Willshaw retrieval strategy at time t: neuron i is probably part of the pattern to be retrieved if CA_i(t) ≈ C(t).
Simple linear example: dx_i/dt = a CH_i + b (CA_i - C), with b >> a > 0.
(Knoblauch/Palm 2001)
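A toy discrete-time sketch of the counter dynamics (the update loop, one spike per step, and the parameter values a = 1, b = 10 are illustrative assumptions; the model itself is continuous-time):

```python
import numpy as np

# Counter model sketch: drive_i = a*CH_i + b*(CA_i - C), b >> a > 0.
# The most excited neuron fires first; its spike feeds back through
# the auto-association matrix A, pulling pattern members ahead.
P1 = np.array([1, 1, 1, 1, 0, 0, 0, 0])
P2 = np.array([0, 0, 1, 1, 1, 1, 0, 0])
A = np.minimum(1, np.outer(P1, P1) + np.outer(P2, P2))

a, b = 1.0, 10.0
CH = A @ np.array([0, 1, 1, 0, 0, 0, 0, 0])   # hetero input from the address
CA = np.zeros(8)                               # auto spikes received so far
fired = np.zeros(8, dtype=bool)
order = []
for _ in range(4):                             # let the k = 4 most driven fire
    drive = a * CH + b * (CA - fired.sum())    # C = total spikes so far
    drive[fired] = -np.inf                     # each neuron fires at most once
    i = int(np.argmax(drive))
    order.append(i)
    fired[i] = True
    CA += A[:, i]                              # feedback spike via A
print(sorted(order))    # the members of P(1) (indices 0..3) fire first
```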
Overview
- A minimal cortical model for a very simple scenario, "Bot show plum!"
- Information flow and binding in the model:
  - hearing and understanding "Bot show plum!"
  - reacting: seeking the plum and pointing to it
Minimal model "Bot show plum!" - Overview
- 3 auditory sensory areas
- 2 grammar areas
- 3 visual sensory areas
- 1 somatic sensory area
- 1 visual attention area
- 4 goal areas
- 3 motor areas
Together 17 cortical areas (incl. 2 sequence areas A4/G1), plus evaluation fields and activation fields.
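The area inventory can be written down as a small data structure; a hypothetical sketch (the short labels are placeholders of my own, the slides name only the sequence areas A4 and G1, and the wiring between areas is not reproduced here):

```python
# Hypothetical inventory of the model's 17 cortical areas; each area
# would be realised as an associative memory in the implementation.
areas = {
    "auditory sensory": ["A1", "A2", "A3"],
    "grammar":          ["G1", "G2"],
    "visual sensory":   ["V1", "V2", "V3"],
    "somatic sensory":  ["S1"],
    "visual attention": ["Att"],
    "goal":             ["Gl1", "Gl2", "Gl3", "Gl4"],
    "motor":            ["M1", "M2", "M3"],
}
n_areas = sum(len(names) for names in areas.values())
print(n_areas)   # 17 cortical areas in total
```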
Minimal model "Bot show plum!" - Integration
Input from: robot sensors, robot software, simulation environment
Output to: robot actuators, robot software, simulation environment