TRANSCRIPT
Mechanisms and Models of Persistent Neural Activity:
Linear Network Theory
Mark Goldman, Center for Neuroscience
UC Davis
Outline
1. Neural mechanisms of integration: Linear network theory
2. Critique of traditional models of memory-related activity & integration, and possible remedies
In many memory & decision-making circuits, neurons accumulate and/or maintain signals for ~1-10 seconds
Issue: How do neurons accumulate & store signals in working memory?
[Figure: stimulus and neuronal activity (firing rates) vs. time, showing accumulation and storage (working memory)]
Puzzle: Most neurons intrinsically have brief memories
[Figure: an input stimulus drives synaptic input and firing rate r, which decay with time constants τ_synapse and τ_neuron of ~10-100 ms]
Neural Integrator of the Goldfish Eye Movement System
Bob Baker
David Tank
Sebastian Seung
Emre Aksay
The Oculomotor Neural Integrator
[Figure: eye-velocity-coding command neurons (excitatory and inhibitory) drive integrator neurons; eye position and integrator firing vs. time. Data from Aksay et al., Nature Neuroscience, 2001]
Persistent activity: stores running total of input commands
Network Architecture
[Figure: firing rate (0-100 Hz) vs. L-R eye position tuning curves for right-side and left-side neurons (Aksay et al., 2000)]
Firing rates: 4 neuron populations (excitatory and inhibitory on each side of the midline), receiving background inputs & eye movement commands
Connectivity: recurrent excitation within each side; recurrent (dis)inhibition across the midline
Standard Model: Network Positive Feedback
Typical isolated single neuron firing rate: decays with time constant τ_neuron
Neuron receiving network positive feedback: activity persists after a command input
(Machens et al., Science, 2005)
1) Recurrent excitation
2) Recurrent (dis)inhibition
(H.S. Seung, D. Lee)
Many-neuron Patterns of Activity Represent Eye Position
Eye position is represented by location along a low-dimensional manifold ("line attractor").
[Figure: activity of 2 neurons; saccades step the activity along the line]
Line Attractor Picture of the Neural Integrator
"Line Attractor" or "Line of Fixed Points"
Geometrical picture of eigenvectors (in the (r1, r2) plane):
No decay along the direction of the eigenvector with eigenvalue = 1
Decay along the directions of eigenvectors with eigenvalue < 1
Outline
1) A nonlinear network model of the oculomotor integrator, and a brief discussion of Hessians and sensitivity analysis
2) The problem of robustness of persistent activity
3) Some "non-traditional" (non-positive feedback) models of integration
 a) Functionally feedforward models
 b) Negative-derivative feedback models
[4) Project: Finely discretized vs. continuous attractors, and noise: phenomenology & connections to Bartlett's & Bard's talks]
Network Model
Wij = weight of connection from neuron j to neuron i
Firing rate dynamics of each neuron:
τ dri/dt = −ri + f( Σj Wij^ipsi s(rj^ipsi) − Σj Wij^contra s(rj^contra) + Ti + Bi(t) )
(dri/dt: firing rate changes; −ri: intrinsic leak; W^ipsi: same-side excitation; W^contra: opposite-side inhibition; Ti: tonic background input; Bi: burst command input)
[Figure: circuit schematic with burst commands & tonic background inputs, W^ipsi and W^contra connections, and outputs]
Network Model (continued)
For persistent activity, the leak, recurrent, and background terms on the right-hand side must sum to 0 during fixations; the burst command is then accumulated:
ri(t) = (1/τ) ∫ Bi(t′) dt′  →  Integrator!
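The rate dynamics above can be sketched numerically. Below is a minimal illustration with one neuron per side and illustrative weights (W_ipsi = W_contra = 0.5 and T = 10 are assumptions chosen so that the left-right difference mode has eigenvalue 1; this is not the fitted conductance-based model):

```python
# Minimal two-population sketch of
#   tau dr_i/dt = -r_i + f( W_ipsi*s(r_same) - W_contra*s(r_opp) + T + B(t) )
# With W_ipsi + W_contra = 1 the difference mode r_R - r_L has eigenvalue 1,
# so a brief burst command B(t) is integrated and then held.
def f(x):                 # threshold-linear rate nonlinearity (assumed form)
    return max(x, 0.0)

def s(r):                 # synaptic activation; linear here for simplicity
    return r

tau, dt = 0.1, 0.001      # seconds
W_ipsi, W_contra, T = 0.5, 0.5, 10.0
rR, rL = 10.0, 10.0       # start at a fixed point of the fixation condition
rates = []
for step in range(2000):  # 2 s of simulated time
    B = 20.0 if 500 <= step < 550 else 0.0   # 50 ms rightward burst at t = 0.5 s
    drR = (-rR + f(W_ipsi * s(rR) - W_contra * s(rL) + T + B)) / tau
    drL = (-rL + f(W_ipsi * s(rL) - W_contra * s(rR) + T)) / tau
    rR, rL = rR + dt * drR, rL + dt * drL
    rates.append((rR, rL))
# After the burst, the difference rR - rL stores its integral persistently.
```

Before the burst both sides sit at the fixed point; afterwards the difference mode holds the integral of B (here 10 Hz) while the summed activity relaxes back.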
Fitting the Model
Conductance-based model fit by constructing a cost function that simultaneously enforced:
• Intracellular current injection experiments
• Database of single-neuron tuning curves
• Firing rate drift patterns following focal lesions
During fixations (B = 0, dr/dt = 0), the rate equation reduces to:
ri = f( Σj Wij^ipsi s(rj^ipsi) − Σj Wij^contra s(rj^contra) + Ti )
Model Integrates its Inputs and Reproduces the Tuning Curves of Every Neuron
[Figure: firing rate (Hz) vs. time (sec); gray: raw firing rate; black: smoothed rate; green: perfect integral. Solid lines: experimental tuning curves; boxes: model rates (& variability)]
Network integrates its inputs, and all neurons precisely match the tuning curve data.
Inactivation Experiments Suggest Presence of a Threshold Process
Experiment: Remove inhibition (inactivate one side, record from the other)
[Figure: firing rate vs. time: stable at high rates, drift at low rates]
Model: Persistence is maintained at high firing rates. These high rates occur when the inactivated side would be at low rates, suggesting such low rates are below a threshold for contributing.
Two Possible Threshold Mechanisms Revealed by the Model
Mechanism 1: Synaptic thresholds
Mechanism 2: High-threshold cells dominate the inhibitory connectivity
[Figure: synaptic nonlinearity s(r) (synaptic activation vs. firing rate) and anatomical connectivity Wij for the 2 model networks, showing excitatory and inhibitory connections among right-side and left-side neurons; the second network contains low-threshold inhibitory neurons]
Mechanism for generating persistent activity
[Figure: network activity when eyes are directed rightward, right side vs. left side]
Implications:
- The only positive feedback LOOP is due to recurrent excitation
- Due to thresholds, there is no mutual inhibitory feedback loop
Excitation, not inhibition, maintains persistent activity!
Inhibition is anatomically recurrent, but functionally feedforward.
Sensitivity Analysis: Which features of the connectivity are most critical?
Cost function curvature is described by the "Hessian" matrix of 2nd derivatives:
Hij = ∂²Cost / ∂Wi ∂Wj
i.e. the N×N matrix whose (i, j) entry is ∂²Cost/∂Wi∂Wj, from ∂²Cost/∂W1² down to ∂²Cost/∂WN²
[Figure: cost function surface C(W1, W2), with an insensitive direction (low curvature) and a sensitive direction (high curvature)]
diagonal elements: sensitivity to varying a single parameter
off-diagonal elements: interactions
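The role of the Hessian's eigenvalues can be illustrated on a toy cost function with one stiff and one sloppy direction (an assumed example for intuition, not the integrator fit's actual cost):

```python
# Numerical Hessian of a toy cost with a sensitive direction (w1 + w2)
# and an insensitive direction (w1 - w2).
def cost(w1, w2):
    return 10.0 * (w1 + w2 - 1.0)**2 + 0.01 * (w1 - w2)**2

def hessian(f, w1, w2, h=1e-4):
    # Central finite differences for the 2nd derivatives
    d11 = (f(w1 + h, w2) - 2*f(w1, w2) + f(w1 - h, w2)) / h**2
    d22 = (f(w1, w2 + h) - 2*f(w1, w2) + f(w1, w2 - h)) / h**2
    d12 = (f(w1 + h, w2 + h) - f(w1 + h, w2 - h)
           - f(w1 - h, w2 + h) + f(w1 - h, w2 - h)) / (4 * h**2)
    return [[d11, d12], [d12, d22]]

H = hessian(cost, 0.5, 0.5)
# Eigenvalues of a symmetric 2x2 matrix, computed by hand:
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
disc = (tr**2 / 4 - det) ** 0.5
lam_hi, lam_lo = tr/2 + disc, tr/2 - disc
# lam_hi ~ 40 (sensitive direction w1+w2), lam_lo ~ 0.04 (insensitive w1-w2)
```

The large eigenvalue's eigenvector is the "sensitive" pattern of weight changes; the small eigenvalue's eigenvector is the pattern along which the circuit can vary freely.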
Oculomotor Integrator: Which features of the connectivity are most critical?
Hessian of the fit's cost function onto neuron i:
H(i)jk = ∂²Cost / ∂Wij ∂Wik
1. Diagonal elements: sensitivity to mistuning individual weights
2. Largest principal components: most sensitive patterns of weight changes
[Figure: eigenvalue spectrum of the Hessian, with the 3 most important components highlighted]
Sensitive & Insensitive Directions in Connectivity Matrix
Sensitive directions (of the model-fitting cost function):
Eigenvector 1: make all connections more excitatory
Eigenvector 2: strengthen excitation & inhibition
Eigenvector 3: vary high- vs. low-threshold neurons
Insensitive direction, e.g. Eigenvector 10: offsetting changes in weights
[Figure: perturbations of excitatory and inhibitory weights for each eigenvector]
Fisher et al., Neuron, in press
Diversity of Solutions: Example Circuits Differing Only in Insensitive Components
Two circuits with different connectivity, but near-identical performance, differ only in their insensitive eigenvectors.
Issue: Robustness of Integrator
Integrator equation: τ_bio dr/dt = −r + w r + I
[Figure: neuron with external input I, firing rate r(t), and recurrent synaptic feedback w]
Single isolated neuron: τ_bio ~ 100 ms
Integrator circuit: τ_network = τ_bio / (1 − w)
Experimental values: τ_network ~ 30 s
Synaptic feedback w must therefore be tuned to an accuracy of |1 − w| = τ_bio/τ_network ~ 0.3%
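The tuning requirement can be checked directly from the integrator equation in closed form, using the slide's values τ_bio = 100 ms and τ_network = 30 s:

```python
import math

tau_bio = 0.1   # s, single-neuron time constant (from the slide)

def decay_after(t, w):
    """Rate at time t after input offset, with r(0) = 1:
       tau_bio dr/dt = -(1 - w) r  =>  r(t) = exp(-(1 - w) t / tau_bio)."""
    return math.exp(-(1.0 - w) * t / tau_bio)

# Perfect tuning target: tau_network = tau_bio / (1 - w) = 30 s
w_tuned = 1.0 - tau_bio / 30.0           # 1 - w ~ 0.33%
r_tuned = decay_after(2.0, w_tuned)      # only ~6% decay over a 2 s fixation
r_mistuned = decay_after(2.0, w_tuned - 0.01)   # a mere 1% weight error
```

A 1% error in w shortens the memory time constant from 30 s to under 10 s, so the stored rate loses roughly a quarter of its value within a single 2 s fixation.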
Need for fine-tuning in linear feedback models
Fine-tuned model: τ_neuron dr/dt = −r + w r + external input
(−r: decay; w r: feedback)
[Figure: dr/dt vs. r for the decay term r and the feedback term wr; rate vs. time (sec) traces showing leaky behavior when feedback is too weak and unstable behavior when it is too strong]
Geometry of Robustness & Hypothesis for Robustness on Faster Time Scales
1) Plasticity on slow time scales: Reshapes the trough to make it flat
2) To control on faster time scales: add ridges to the surface for "friction"-like slowing of drift, or fill the attractor with viscous fluid to slow drift
Course project!
Questions:
1) Are positive feedback loops the only way to perform integration? (the dogma)
2) Could alternative mechanisms describe persistent activity data?
Working memory task not easily explained by traditional feedback models
5 neurons recorded during a PFC delay task (Batuev et al., 1979, 1994):
Response of Individual Neurons in Line Attractor Networks
All neurons exhibit similar slow decay, due to the strong coupling that mediates positive feedback.
[Figure: neuronal firing rates and summed output vs. time (sec)]
Problem 1: Does not reproduce the differences between neurons seen experimentally!
Problem 2: To generate stable activity for 2 seconds (±5%) requires a 10-second-long exponential decay.
Feedforward Networks Can Integrate!
Chain of neuron clusters that successively filter an input
Simplest example: (Goldman, Neuron, 2009)
[Figure: feedforward chain; the summed output tracks the integral of the input, up to duration ~Nτ (can prove this works analytically)]
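A minimal sketch of such a chain, in which each stage simply low-pass filters the previous one (the specific N, τ, and input are illustrative):

```python
# Feedforward chain (assumed form): stage 1 is driven by the input u(t),
# and each later stage low-pass filters the one before it:
#   tau dr1/dt = -r1 + u(t),    tau drk/dt = -rk + r_{k-1}
# Summing the equations: tau d(sum r)/dt = u - rN, so the summed rate
# integrates u until activity propagates to the last stage (~ N*tau).
N, tau, dt = 20, 0.1, 0.001
r = [0.0] * N
readout = []
for step in range(1000):                 # 1 s of simulated time
    u = 1.0 if step < 100 else 0.0       # 0.1 s input pulse
    drive = [u] + r[:-1]                 # each stage driven by its predecessor
    r = [ri + dt * (-ri + di) / tau for ri, di in zip(r, drive)]
    readout.append(sum(r))
# readout should hold ~ (1/tau) * integral(u) = 0.1 / 0.1 = 1.0
# for a duration ~ N*tau = 2 s, well past the end of this simulation.
```

Here the summed output rises to the integral of the pulse and then holds it, even though every individual unit's activity is transient.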
Same Network Integrates Any Input for ~Nτ
Improvement in Required Precision of Tuning
[Figure: neuronal firing rates and summed output vs. time (sec) for both models]
Feedback-based line attractor: 10 sec decay to hold 2 sec of activity
Feedforward integrator: 2 sec decay to hold 2 sec of activity
Feedforward Models Can Fit PFC Recordings
[Figure: line attractor and feedforward network fits to the PFC data]
Recent data: "time cells" observed in rat hippocampal recordings during a delayed-comparison task show a feedforward progression (data courtesy of H. Eichenbaum; similar to data of Pastalkova et al., Science, 2008, and Harvey et al., Nature, 2012)
(Goldman, Neuron, 2009)
Generalization to Coupled Networks: Feedforward transitions between patterns of activity
Feedforward network connectivity matrix Wij:
W = [ 0 0 0 ; 1 0 0 ; 0 1 0 ]
Recurrent (coupled) network: W_recurrent = R W R⁻¹
Map each neuron to a combination of neurons by applying a coordinate rotation matrix R (Schur decomposition)
[Geometric picture of the two networks]
(Math of Schur: see Goldman, Neuron, 2009; Murphy & Miller, Neuron, 2009; Ganguli et al., PNAS, 2008)
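The disguise can be made concrete: rotating the slide's feedforward matrix W by an orthogonal R gives a dense, recurrent-looking W_recurrent = R W Rᵀ that is still nilpotent, i.e. purely feedforward in disguise. A sketch with a hypothetical rotation angle:

```python
import math

def matmul(A, B):
    """3x3 matrix product, kept stdlib-only for this sketch."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Feedforward chain connectivity from the slide (strictly lower triangular)
W = [[0, 0, 0],
     [1, 0, 0],
     [0, 1, 0]]

# An arbitrary coordinate rotation R (angle 0.7 is illustrative)
theta = 0.7
c, s = math.cos(theta), math.sin(theta)
R = [[c, -s, 0],
     [s,  c, 0],
     [0,  0, 1]]
Rt = [[R[j][i] for j in range(3)] for i in range(3)]   # R transpose = R inverse

W_rec = matmul(matmul(R, W), Rt)       # dense, "recurrent-looking" connectivity
W_rec_cubed = matmul(W_rec, matmul(W_rec, W_rec))
# W_rec has nonzero entries everywhere, yet (W_rec)^3 = 0: every eigenvalue
# is 0, so there is no eigenvalue-1 positive-feedback mode.
```

The rotated network and the original chain have identical dynamics up to the change of coordinates, which is the content of the Schur picture above.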
Responses of functionally feedforward networks
[Figure: feedforward network activity patterns and functionally feedforward activity patterns; effect of stimulating pattern 1 on subsequent patterns & neuronal firing rates]
Math Puzzle: Eigenvalue analysis does not predict long time scale of response!
Eigenvalue spectra & neuronal responses:
Line attractor networks: one eigenvalue at Real(λ) = 1 → a persistent mode
Feedforward networks: no eigenvalue at 1, yet responses persist → no persistent mode???
[Figure: eigenvalue spectra in the complex plane (Real(λ), Imag(λ)) and the corresponding neuronal responses]
(Goldman, Neuron, 2009; see also: Murphy & Miller, Neuron 2009; Ganguli & Sompolinsky, PNAS 2008)
Math Puzzle: Schur vs. Eigenvector Decompositions
Answer to Math Puzzle: Pseudospectral analysis
Eigenvalues λ: satisfy the equation (W − λ1)v = 0; govern long-time asymptotic behavior.
Pseudoeigenvalues λε: the set of all values λε that satisfy the inequality ||(W − λε1)v|| < ε; govern transient responses. They can differ greatly from the eigenvalues when the eigenvectors are highly non-orthogonal (nonnormal matrices).
(Trefethen & Embree, Spectra & Pseudospectra, 2005)
[Figure: black dots: eigenvalues; surrounding contours: boundaries of the set of pseudoeigenvalues for different values of ε (from Supplement to Goldman, Neuron, 2009)]
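The transient effect is visible in a minimal nonnormal example: a 2-unit functionally feedforward system with an assumed coupling strength k. Both eigenvalues are −1, so eigenvalue analysis predicts fast decay, yet the response grows for a long transient:

```python
import math

# Nonnormal system dx/dt = A x with A = [[-1, k], [0, -1]]:
# both eigenvalues are -1, but the strong one-way coupling k from x2 to x1
# produces large transient growth -- the pseudospectral effect.
k = 20.0

def simulate(steps=10000, dt=1e-3):
    x1, x2 = 0.0, 1.0           # start in the "source" component
    norms = []
    for _ in range(steps):
        dx1 = -x1 + k * x2      # x2 feeds x1; nothing feeds back to x2
        dx2 = -x2
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        norms.append(math.hypot(x1, x2))
    return norms

norms = simulate()              # 10 s of simulated time
peak = max(norms)
# Exact solution: x2 = e^-t, x1 = k t e^-t, so the norm peaks near k/e ~ 7.4
# around t = 1 before eventually decaying at the eigenvalue rate.
```

An isolated eigenvalue −1 mode would have shrunk to e⁻¹ ≈ 0.37 of its initial size by t = 1; instead the state has grown roughly sevenfold, which is what the pseudoeigenvalues (but not the eigenvalues) predict.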
Answer to Math Puzzle: Pseudo-eigenvalues
Normal networks: one eigenvalue at 1 → a persistent mode
Feedforward networks: all eigenvalues at 0 → seemingly no persistent mode; but the pseudoeigenvalues extend out toward 1, so the network transiently acts like a persistent mode
[Figure: eigenvalues, pseudoeigenvalues (spanning the range −1 to 1), and the corresponding neuronal responses]
(Goldman, Neuron, 2009)
Challenging the Positive Feedback Picture, Part 2: Corrective Feedback Model
Fundamental control theory result: Strong negative feedback of a signal produces an output equal to the inverse of the negative feedback transformation
[Block diagram: input x; the fed-back signal f(y) is subtracted; the error x − f(y) passes through a high-gain element g to give the output y]
Equation: y = g(x − f(y))
For large gain g: x − f(y) = y/g ≈ 0, so f(y) ≈ x
Claim: y ≈ f⁻¹(x)
Integration from Negative Derivative Feedback
Now let the negative feedback element compute the derivative, f(y) = dy/dt:
[Block diagram: input x, high-gain element g, negative feedback through d/dt]
y ≈ f⁻¹(x) = ∫ x(t) dt  →  Integrator!
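A discrete-time sketch of this claim (illustrative parameters): solving y = g(x − dy/dt) for the derivative gives dy/dt = x − y/g, a leaky integrator whose leak time constant equals the gain g, so the larger the feedback gain, the closer y tracks the integral of x.

```python
# High-gain negative feedback of the derivative: y = g * (x - dy/dt)
# rearranges to dy/dt = x - y/g, simulated here with forward Euler.
dt = 1e-3

def run(g, steps=2000):          # 2 s of simulated time
    y = 0.0
    for step in range(steps):
        x = 1.0 if step < 500 else 0.0   # 0.5 s step input
        y += dt * (x - y / g)
    return y

y_high_gain = run(g=100.0)   # ~ integral of x = 0.5, held after input offset
y_low_gain  = run(g=1.0)     # leaks away quickly: a poor integrator
```

With g = 100 the output still holds nearly the full integral 1.5 s after the input ends, while with g = 1 most of it has leaked away.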
Positive vs. Negative-Derivative Feedback
Positive feedback mechanism: τ dr/dt = −r + W_pos r + Input
[Figure: energy landscape (W_pos = 1)]
Derivative feedback mechanism: τ dr/dt = −r + r − W_der dr/dt + Input
[Figure: firing rate vs. time, with (−) and (+) corrective signals opposing upward and downward drifts]
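The robustness difference can be quantified in closed form for a common fractional mistuning δ of the feedback (illustrative parameters; the derivative-feedback decay rate follows from the rate equation with its r terms mistuned by δ):

```python
import math

# Memory decay over a delay period t under a 10% feedback mistuning (delta),
# comparing tuned positive feedback with negative-derivative feedback.
tau, delta, t = 0.1, 0.1, 2.0    # s, fractional mistuning, delay duration (s)

# Positive feedback with W_pos = 1 - delta:
#   tau dr/dt = -delta * r   =>   r(t) = exp(-delta * t / tau)
r_pos = math.exp(-delta * t / tau)

# Derivative feedback with the balance mistuned by the same delta:
#   (tau + W_der) dr/dt = -delta * r
# The large derivative-feedback term W_der slows any residual drift.
W_der = 10.0                     # illustrative feedback strength
r_der = math.exp(-delta * t / (tau + W_der))
```

The same 10% mistuning that destroys the positive-feedback memory (decay to ~0.14 of the stored rate in 2 s) barely affects the derivative-feedback network (~0.98 remaining), because the mistuning sets the drift rate relative to the large τ + W_der rather than to τ alone.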
Negative derivative feedback arises naturally in balanced cortical networks
Derivative feedback arises when: 1) positive feedback is slower than negative feedback (slow excitation, fast inhibition), and 2) excitation & inhibition are balanced
Lim & Goldman, Nature Neuroscience, in press
Networks Maintain Analog Memory and Integrate their Inputs
Robustness to Loss of Cells orIntrinsic or Synaptic Gains
Change: -intrinsic gains -synaptic gains -Exc. cell death -Inh. cell death
Balanced Inputs Lead to Irregular Spiking Across a Graded Range of Persistent Firing Rates
[Figure: spiking model structure; model output (purely derivative feedback); experimental distribution of CVs of interspike intervals (Compte et al., 2003)]
Summary
Short-term memory (~10s of seconds) is maintained by persistent neural activity following the offset of a remembered stimulus.
Possible mechanisms:
1) Tuned positive feedback (attractor model)
2) Feedforward network (possibly in disguise)
 - Disadvantage: finite memory lifetime ~ # of feedforward stages
 - Advantage: higher-dimensional representation can produce many different temporal response patterns
 - Math: not well-characterized by eigenvalue decomposition; Schur decomposition or pseudospectral analysis better
3) Negative derivative feedback
 - Features: balance of excitation and inhibition, as observed; robust to many natural perturbations; produces observed irregular firing statistics
Acknowledgments
Theory (Goldman lab, UCD): Itsaso Olasagasti (USZ), Dimitri Fisher
Experiments: David Tank (Princeton Univ.), Emre Aksay (Cornell Med.), Guy Major (Cardiff Univ.), Robert Baker (NYU Medical)