Mechanisms and Models of Persistent Neural Activity:
Linear Network Theory
Mark Goldman
Center for Neuroscience, UC Davis
Outline
1. Neural mechanisms of integration: Linear network theory
2. Critique of traditional models of memory-related activity & integration, and possible remedies
In many memory & decision-making circuits, neurons accumulate and/or maintain signals for ~1-10 seconds
Issue: How do neurons accumulate & store signals in working memory?
[Schematic: stimulus pulse; neuronal activity (firing rates) vs. time, showing accumulation, then storage (working memory)]

Puzzle: Most neurons intrinsically have brief memories

[Schematic: synaptic input filtered by a neuron with intrinsic time constant τ_neuron ~ 10-100 ms to produce firing rate r]
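This puzzle can be made concrete with a minimal rate-model sketch (all parameter values assumed for illustration): a single neuron obeying τ dr/dt = −r + input forgets a stimulus pulse within a few hundred milliseconds.

```python
import numpy as np

# Single-neuron rate model: tau * dr/dt = -r + stimulus(t).
# With tau ~ 100 ms (assumed), the neuron's "memory" of a pulse is brief.
tau, dt = 0.1, 0.001
t = np.arange(0, 2.0, dt)
stim = (t < 0.2).astype(float)      # 200-ms stimulus pulse
r = np.zeros_like(t)
for k in range(1, len(t)):
    r[k] = r[k-1] + dt / tau * (-r[k-1] + stim[k-1])
# r rises during the pulse, then decays as exp(-t/tau):
# essentially gone within ~0.5 s, far short of seconds-long working memory
```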
Neural Integrator of the Goldfish Eye Movement System
Bob Baker
David Tank
Sebastian Seung
Emre Aksay
The Oculomotor Neural Integrator

[Recordings vs. time: eye position; eye velocity-coding command neurons; integrator neurons (excitatory, inhibitory)]
(data from Aksay et al., Nature Neuroscience, 2001)

Persistent activity: stores the running total of input commands
Network Architecture
(Aksay et al., 2000)

4 neuron populations: excitatory & inhibitory on each side of the midline
Inputs: background inputs & eye movement commands
Connectivity: recurrent excitation (same side); recurrent (dis)inhibition (across the midline)

[Tuning curves: firing rate (0–100 Hz) vs. R–L eye position, for right-side and left-side neurons]
Standard Model: Network Positive Feedback
Typical isolated single-neuron firing rate: decays with intrinsic time constant τ_neuron
[Schematic: firing rate vs. time]
Neuron receiving network positive feedback:
Command input:
(Machens et al., Science, 2005)
1) Recurrent excitation
2) Recurrent (dis)inhibition
(H.S. Seung, D. Lee)
Eye position represented by location along a low-dimensional manifold (“line attractor”)
saccade
Many-neuron Patterns of Activity Represent Eye Position
Activity of 2 neurons
Line Attractor Picture of the Neural Integrator
No decay along the direction of the eigenvector with eigenvalue = 1
Decay along the directions of eigenvectors with eigenvalues < 1
“Line Attractor” or “Line of Fixed Points”

[Geometrical picture of eigenvectors in the (r1, r2) plane]
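A minimal two-neuron sketch of this geometry (the weight matrix is assumed for illustration): connectivity with eigenvalues 1 and 0 leaves the component along the eigenvalue-1 eigenvector undecayed, giving a line of fixed points.

```python
import numpy as np

# Two-neuron line-attractor sketch, tau dr/dt = -r + W r, with an assumed
# weight matrix whose eigenvalues are 1 (eigenvector (1,1): the attractor
# line) and 0 (eigenvector (1,-1): decays with time constant tau).
tau, dt = 0.1, 0.001
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
r = np.array([0.8, 0.2])            # start off the attractor line
for _ in range(int(2.0 / dt)):      # simulate 2 s
    r = r + dt / tau * (-r + W @ r)
# The sum r1 + r2 (position along the line) is preserved exactly;
# the difference r1 - r2 decays, so r relaxes onto the line at (0.5, 0.5).
```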
Outline
1) A nonlinear network model of the oculomotor integrator, and a brief discussion of Hessians and sensitivity analysis
2) The problem of robustness of persistent activity
3) Some “non-traditional” (non-positive feedback) models of integration
a) Functionally feedforward models
b) Negative-derivative feedback models
[4) Project: Finely discretized vs. continuous attractors, and noise: phenomenology & connections to Bartlett’s & Bard’s talks]
The Oculomotor Neural Integrator

[Recordings vs. time (secs): eye position; eye velocity-coding command neurons; integrator neurons (excitatory, inhibitory)]
(data from Aksay et al., Nature Neuroscience, 2001)

Persistent activity: stores the running total of input commands
Network Architecture
(Aksay et al., 2000)

4 neuron populations: excitatory & inhibitory on each side of the midline
Inputs: background inputs & eye movement commands
Connectivity: recurrent excitation (same side); recurrent (dis)inhibition (across the midline)

[Tuning curves: firing rate (0–100 Hz) vs. R–L eye position, for right-side and left-side neurons]
Network Model
Wij = weight of connection from neuron j to neuron i

Firing rate dynamics of each neuron:

τ_neuron dr_i/dt = −r_i + f( Σ_j W^ipsi_ij s(r^ipsi_j) − Σ_j W^contra_ij s(r^contra_j) + T_i + B_i(t) )

(firing rate changes = intrinsic leak + same-side excitation − opposite-side inhibition + tonic background input T + burst command input B)

[Diagram: burst commands & tonic background inputs feed the W^ipsi and W^contra populations, which project to the outputs]
Network Model
Wij = weight of connection from neuron j to neuron i

Firing rate dynamics of each neuron:

τ_neuron dr_i/dt = −r_i + f( Σ_j W^ipsi_ij s(r^ipsi_j) − Σ_j W^contra_ij s(r^contra_j) + T_i + B_i(t) )

(intrinsic leak + same-side excitation − opposite-side inhibition + tonic background input + burst command input)

For persistent activity, all terms except the burst command must sum to 0; then

r_i(t) = (1/τ_neuron) ∫ B_i(t′) dt′ → Integrator!
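The tuned case can be sketched as a single effective unit (assumed parameters, not the full bilateral model): when the leak and feedback terms cancel, the rate equation reduces to τ_neuron dr/dt = B(t), and the rate stores the running integral of the burst commands.

```python
import numpy as np

# Tuned-feedback sketch (single effective unit, assumed parameters):
# with leak and feedback canceling, tau * dr/dt = B(t), so the rate
# steps up with each burst command and then holds.
tau, dt = 0.1, 0.001
t = np.arange(0, 3.0, dt)
B = np.where((t > 0.5) & (t < 0.6), 1.0, 0.0)   # one burst command
r = np.zeros_like(t)
for k in range(1, len(t)):
    r[k] = r[k-1] + dt / tau * B[k-1]           # leak canceled by feedback
# r jumps by ~(burst area)/tau ~ 1 during the burst, then stays constant
```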
Fitting the Model
Conductance-based model fit by constructing a cost function that simultaneously enforced:
• Intracellular current injection experiments
• Database of single-neuron tuning curves
• Firing rate drift patterns following focal lesions

Firing rate dynamics (intrinsic leak + same-side excitation − opposite-side inhibition + background & burst inputs):

τ_neuron dr_i/dt = −r_i + f( Σ_j W^ipsi_ij s(r^ipsi_j) − Σ_j W^contra_ij s(r^contra_j) + T_i + B_i(t) )

During fixations (B = 0, dr/dt = 0):

r_i = f( Σ_j W^ipsi_ij s(r^ipsi_j) − Σ_j W^contra_ij s(r^contra_j) + T_i )
Model Integrates its Inputs and Reproduces the Tuning Curves of Every Neuron

solid lines: experimental tuning curves
boxes: model rates (& variability)

Network integrates its inputs … and all neurons precisely match the tuning-curve data

gray: raw firing rate (black: smoothed rate); green: perfect integral
[Axes: firing rate (Hz) vs. time (sec)]
Inactivation Experiments Suggest Presence of a Threshold Process

Experiment: Remove inhibition (inactivate one side; record the other)
[Firing rate vs. time: stable at high rates, drift at low rates]

Model: Persistence is maintained at high firing rates.
These high rates occur when the inactivated side would be at low rates,
suggesting such low rates are below a threshold for contributing.
Two Possible Threshold Mechanisms Revealed by the Model
Mechanism 1: Synaptic thresholds
Mechanism 2: High-threshold cells dominate the inhibitory connectivity

[For the 2 model networks: synaptic nonlinearity s(r) (synaptic activation vs. firing rate) and anatomical connectivity Wij (excitatory & inhibitory, right-side vs. left-side neurons); the second network includes low-threshold inhibitory neurons]
Mechanism for generating persistent activity
Network activity when eyes directed rightward:
Implications:
-The only positive feedback LOOP is due to recurrent excitation
-Due to thresholds, there is no mutual inhibitory feedback loop
[Diagram: left-side and right-side populations]
Excitation, not inhibition, maintains persistent activity!
Inhibition is anatomically recurrent, but functionally feedforward
Sensitivity Analysis: Which features of the connectivity are most critical?
Cost function curvature is described by the “Hessian” matrix of 2nd derivatives:

Cost function surface:
[3-D surface of cost C vs. weights W1, W2: a trough with an insensitive direction (low curvature) along it and a sensitive direction (high curvature) across it]
H_ij = ∂²Cost / (∂W_i ∂W_j)

H = [ ∂²Cost/∂W_1²        ∂²Cost/(∂W_1 ∂W_2)   …   ∂²Cost/(∂W_1 ∂W_N)
      ∂²Cost/(∂W_2 ∂W_1)  ∂²Cost/∂W_2²         …
      ⋮                                              ⋮
      ∂²Cost/(∂W_N ∂W_1)  …                         ∂²Cost/∂W_N²      ]
diagonal elements: sensitivity to varying a single parameter
off-diagonal elements: interactions
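A toy version of this analysis (the quadratic cost is invented for illustration): build the Hessian by finite differences and read off the sensitive and insensitive weight patterns from its eigenvectors.

```python
import numpy as np

# Sensitivity-analysis sketch on an invented quadratic cost over two weights:
# the Hessian's large-eigenvalue eigenvector is the "sensitive" weight
# pattern, the small-eigenvalue eigenvector the "insensitive" one.
def cost(w):
    w1, w2 = w
    return 10 * (w1 + w2) ** 2 + 0.1 * (w1 - w2) ** 2  # stiff along (1, 1)

def hessian(f, w, h=1e-4):
    # Second derivatives by central finite differences.
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(w + ei + ej) - f(w + ei - ej)
                       - f(w - ei + ej) + f(w - ei - ej)) / (4 * h * h)
    return H

H = hessian(cost, np.zeros(2))
evals, evecs = np.linalg.eigh(H)      # eigenvalues in ascending order
# evals ~ [0.4, 40]: curvature ~100x larger along the sensitive pattern
# (1,1)/sqrt(2) than along the insensitive pattern (1,-1)/sqrt(2).
```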
Oculomotor Integrator: Which features of the connectivity are most critical?
Hessian of the fit’s cost function onto neuron i:

H^(i)_jk = ∂²Cost / (∂W_ij ∂W_ik)
1. Diagonal elements: sensitivity to mistuning individual weights
3 most important components!
2. Largest principal components: most sensitive patterns of weight changes
Sensitive & Insensitive Directions in Connectivity Matrix
Sensitive directions (of the model-fitting cost function):
Eigenvector 1: make all connections more excitatory
Eigenvector 2: strengthen excitation & inhibition
Eigenvector 3: vary high- vs. low-threshold neurons

Insensitive direction:
Eigenvector 10: offsetting changes in weights

[Bar plots of excitatory and inhibitory weight perturbations for each eigenvector]
Fisher et al., Neuron, in press
Diversity of Solutions: Example Circuits Differing Only in Insensitive Components
Two circuits with different connectivity…
…but near-identical performance…
…differ only in their insensitive eigenvectors
Issue: Robustness of Integrator

[Circuit: input I, rate r(t), recurrent feedback weight w]

Integrator equation: τ_bio dr/dt = −(1 − w) r + I, so the network time constant is τ_network = τ_bio / |1 − w|

Experimental values:
Single isolated neuron: τ_bio ~ 100 ms
Integrator circuit: τ_network ~ 30 sec

Synaptic feedback w must be tuned to an accuracy of:
|1 − w| ~ τ_bio / τ_network ~ 0.3%
Need for fine-tuning in linear feedback models

[Circuit: external input and feedback weight w onto rate r(t)]

Fine-tuned model: τ_neuron dr/dt = −r + w r + external input
(decay −r; feedback +w r)

Leaky behavior (w < 1): feedback falls short of the decay; rate decays back to baseline
Unstable behavior (w > 1): feedback exceeds the decay; rate grows
[Plots for each case: dr/dt vs. r, and rate vs. time (sec)]
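The fine-tuning requirement follows from one line of algebra: τ_bio dr/dt = −(1 − w) r + input has effective time constant τ_bio / (1 − w). A quick check with the slides' values:

```python
# Effective memory time constant of the linear feedback model:
# tau_network = tau_bio / (1 - w). Values from the slides (tau_bio ~ 100 ms).
tau_bio = 0.1
weights = [0.9, 0.99, 0.997]
taus = [tau_bio / (1 - w) for w in weights]
# w = 0.9 -> 1 s; w = 0.99 -> 10 s; w = 0.997 -> ~33 s.
# Holding activity for ~30 s therefore requires tuning w to ~0.3%.
```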
Geometry of Robustness & Hypothesis for Robustness on Faster Time Scales
1) Plasticity on slow time scales: Reshapes the trough to make it flat
2) To control on faster time scales:
Add ridges to the surface to add “friction”-like slowing of drift
-OR- Fill the attractor with viscous fluid to slow drift

Course project!
Questions:
1) Are positive feedback loops the only way to perform integration? (the dogma)
2) Could alternative mechanisms describe persistent activity data?
Working memory task not easily explained by traditional feedback models
5 neurons recorded during a PFC delay task (Batuev et al., 1979, 1994):
Response of Individual Neurons in Line Attractor Networks

All neurons exhibit similar slow decay, due to the strong coupling that mediates positive feedback
[Plots: neuronal firing rates and summed output vs. time (sec)]

Problem 1: Does not reproduce the differences between neurons seen experimentally!
Problem 2: Generating stable activity for 2 seconds (+/- 5%) requires a 10-second-long exponential decay
Feedforward Networks Can Integrate!
Chain of neuron clusters that successively filter an input
Simplest example: (Goldman, Neuron, 2009)
Feedforward Networks Can Integrate!
Chain of neuron clusters that successively filter an input
Simplest example:
Integral of input! (up to duration ~Nτ; can prove this works analytically)
(Goldman, Neuron, 2009)
Same Network Integrates Any Input for ~Nτ
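A sketch of the feedforward-chain integrator (cluster number and time constant assumed): each cluster low-pass filters the previous one, and the summed output tracks the integral of the input until roughly Nτ.

```python
import numpy as np

# Feedforward-chain integrator sketch (N and tau assumed):
# tau dr_i/dt = -r_i + r_{i-1}, with r_0 = input. The sum S over clusters
# obeys tau dS/dt = input - r_N (the chain telescopes), so S tracks the
# integral of the input until activity reaches the last cluster, at t ~ N*tau.
N, tau, dt = 20, 0.1, 0.001
t = np.arange(0, 1.0, dt)              # 1 s, well under N*tau = 2 s
u = np.where(t < 0.3, 1.0, 0.0)        # input pulse (shape assumed)
r = np.zeros(N)
out = np.zeros_like(t)
for k in range(len(t)):
    prev = np.concatenate(([u[k]], r[:-1]))   # each cluster filters the previous
    r = r + dt / tau * (-r + prev)
    out[k] = r.sum()
# out ramps to ~(1/tau) * integral(u) = 3 and then holds
```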
Improvement in Required Precision of Tuning
Feedback-based line attractor: 10 sec decay to hold 2 sec of activity
Feedforward integrator: 2 sec decay to hold 2 sec of activity
[Plots for each model: neuronal firing rates and summed output vs. time (sec)]
Feedforward Models Can Fit PFC Recordings
Line Attractor Feedforward Network
Recent data: “Time cells” observed in rat hippocampal recordings during delayed-comparison task
Data courtesy of H. Eichenbaum
[Similar to data of Pastalkova et al., Science, 2008; Harvey et al., Nature, 2012]

[Heat map: time cells show a feedforward progression of activity during the delay]
(Goldman, Neuron, 2009)
Generalization to Coupled Networks: Feedforward transitions between patterns of activity
Feedforward network
Connectivity matrix Wij:
W = [ 0 0 0
      1 0 0
      0 1 0 ]
[Geometric picture]

Recurrent (coupled) network
W_recurrent = R W R^-1: map each neuron to a combination of neurons by applying a coordinate rotation matrix R
(Schur decomposition)
(Math of Schur: see Goldman, Neuron, 2009; Murphy & Miller, Neuron, 2009; Ganguli et al., PNAS, 2008)
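A small numerical illustration of this disguise (the rotation matrix R is an arbitrary choice): conjugating the feedforward connectivity by a rotation produces a dense, recurrent-looking weight matrix with identical eigenvalues and, up to the change of basis, identical dynamics.

```python
import numpy as np

# A feedforward network "in disguise": conjugate the feedforward
# connectivity W_ff by a rotation R (an arbitrary choice here). The result
# W_rec = R W_ff R^T looks fully recurrent, but its Schur form is W_ff,
# so in rotated coordinates the dynamics are exactly feedforward.
W_ff = np.array([[0., 0., 0.],
                 [1., 0., 0.],
                 [0., 1., 0.]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0.,             0.,            1.]])
W_rec = R @ W_ff @ R.T        # dense, recurrent-looking weights

# Eigenvalues are unchanged by the similarity transform: all zero,
# even though the network supports a long ~N*tau transient.
```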
Responses of functionally feedforward networks
Feedforward network activity patterns
Functionally feedforward activity patterns & neuronal firing rates
Effect of stimulating pattern 1:
Math Puzzle: Eigenvalue analysis does not predict long time scale of response!
Line attractor networks: eigenvalue spectrum has a mode at Real(λ) = 1 (a persistent mode), matching the slowly decaying neuronal responses
Feedforward networks: no eigenvalue near 1 (no persistent mode???), yet the neuronal responses still persist
[Plots: eigenvalue spectra in the complex plane (Imag(λ) vs. Real(λ)) and neuronal responses]
(Goldman, Neuron, 2009; see also: Murphy & Miller, Neuron 2009; Ganguli & Sompolinsky, PNAS 2008)
Math Puzzle: Schur vs. Eigenvector Decompositions
Answer to Math Puzzle: Pseudospectral analysis
Eigenvalues λ:
satisfy the equation (W − λ1)v = 0
govern long-time asymptotic behavior

Pseudoeigenvalues λ_ε:
the set of all values λ_ε that satisfy the inequality ||(W − λ_ε 1)v|| < ε
govern transient responses
can differ greatly from eigenvalues when eigenvectors are highly non-orthogonal (nonnormal matrices)
(Trefethen & Embree, Spectra & Pseudospectra, 2005)

[Plot] Black dots: eigenvalues; surrounding contours: colors give boundaries of the set of pseudoeigenvalues, for different values of ε
(from Supplement to Goldman, Neuron, 2009)
Answer to Math Puzzle: Pseudo-eigenvalues

Normal networks: eigenvalue at Real(λ) = 1 → persistent mode
Feedforward networks: no eigenvalue near 1 → no persistent mode???
[Plots: eigenvalue spectra (Imag(λ) vs. Real(λ)) and neuronal responses]
Answer to Math Puzzle: Pseudo-eigenvalues

Normal networks: eigenvalue at Real(λ) = 1 → persistent mode
Feedforward networks: pseudoeigenvalues extend to 1, so the network transiently acts like a persistent mode
[Plots: eigenvalue spectra with pseudoeigenvalue contours (Real(λ) from −1 to 1) and neuronal responses]
(Goldman, Neuron, 2009)
Challenging the Positive Feedback Picture, Part 2: Corrective Feedback Model
Fundamental control theory result: Strong negative feedback of a signal produces an output equal to the inverse of the negative feedback transformation
[Block diagram: input x; error signal x − f(y); high-gain element g; output y; feedback path f(y) with negative sign]

Equation:
y = g( x − f(y) )
⟹ (1/g) y + f(y) = x
For g ≫ 1: f(y) ≈ x, i.e. y ≈ f^-1(x)

Claim: y = f^-1(x)
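A numeric sanity check of the claim, with a hypothetical feedback transformation f(y) = y³ and a simple first-order relaxation standing in for the loop dynamics (both assumed for illustration):

```python
# High-gain negative feedback inverts the feedback transformation:
# y = g * (x - f(y)); for g >> 1 the steady state has f(y) ~ x, so
# y ~ f_inverse(x). Hypothetical choice: f(y) = y**3, so the loop
# should output the cube root of x.
g = 1e4
x = 8.0
y = 0.0
for _ in range(200_000):            # relax dy/dt = -y + g * (x - f(y))
    y += 1e-6 * (-y + g * (x - y ** 3))
# y converges to ~2.0, the cube root of 8
```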
Integration from Negative Derivative Feedback
Fundamental control theory result: Strong negative feedback of a signal produces an output equal to the inverse of the negative feedback signal
[Block diagram: input x; gain g; output y; negative feedback through d/dt]

With f = d/dt, the loop inverts differentiation:
y ≈ ∫ x(t) dt → Integrator!
Positive vs. Negative-Derivative Feedback

Positive feedback mechanism:
τ dr/dt = −r + W_pos r + Input
[Energy landscape (W_pos = 1): a flat trough]

Derivative feedback mechanism:
τ dr/dt = −r + r − W_der dr/dt + Input
[Firing rate vs. time, with (+) and (−) corrective signals opposing drift]
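A single-unit sketch of the derivative-feedback mechanism (parameter values assumed): with the leak canceled, (τ + W_der) dr/dt = Input, so the unit integrates, and because W_der ≫ τ, mistuning mainly rescales the gain rather than introducing a fast decay.

```python
import numpy as np

# Single-unit sketch of negative-derivative-feedback integration:
# tau dr/dt = -r + r - W_der dr/dt + I  =>  (tau + W_der) dr/dt = I.
# W_der >> tau (values assumed), so the effective integration time
# constant is set by the derivative feedback, not by fine tuning.
tau, W_der, dt = 0.1, 10.0, 0.001
t = np.arange(0, 2.0, dt)
I = np.where((t > 0.5) & (t < 1.0), 1.0, 0.0)   # input pulse
r = np.zeros_like(t)
for k in range(1, len(t)):
    r[k] = r[k-1] + dt / (tau + W_der) * I[k-1]
# r ramps while the input is on, then holds its value (persistent activity)
```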
Negative derivative feedback arises naturally in balanced cortical networks

Derivative feedback arises when:
1) Positive feedback is slower than negative feedback
2) Excitation & inhibition are balanced
[Diagram: slow excitatory feedback, fast inhibitory feedback]
Lim & Goldman, Nature Neuroscience, in press
Networks Maintain Analog Memory and Integrate their Inputs
Robustness to Loss of Cells or Intrinsic or Synaptic Gains

Change: intrinsic gains; synaptic gains; exc. cell death; inh. cell death
Balanced Inputs Lead to Irregular Spiking Across a Graded Range of Persistent Firing Rates

Spiking model structure; model output (purely derivative feedback)
Experimental distribution of CVs of interspike intervals (Compte et al., 2003)
Summary

Short-term memory (~10’s of seconds) is maintained by persistent neural activity following the offset of a remembered stimulus

Possible mechanisms:

1) Tuned positive feedback (attractor model)

2) Feedforward network (possibly in disguise)
- Disadvantage: finite memory lifetime ~ # of feedforward stages
- Advantage: higher-dimensional representation can produce many different temporal response patterns
- Math: not well-characterized by eigenvalue decomposition; Schur decomposition or pseudospectral analysis better

3) Negative derivative feedback
- Features: balance of excitation and inhibition, as observed; robust to many natural perturbations; produces observed irregular firing statistics
Acknowledgments

Theory (Goldman lab, UCD): Itsaso Olasagasti (USZ), Dimitri Fisher
Experiments: David Tank (Princeton Univ.), Emre Aksay (Cornell Med.), Guy Major (Cardiff Univ.), Robert Baker (NYU Medical)