Statistical methods for
understanding neural codes
Liam Paninski
Department of Statistics and Center for Theoretical Neuroscience
Columbia University
http://www.stat.columbia.edu/~liam
May 9, 2006
The neural code
Input-output relationship between
• External observables x (sensory stimuli, motor responses...)
• Neural variables y (spike trains, population activity...)
Probabilistic formulation: p(y|x)
Example: neural prosthetic design
Nicolelis, Nature ’01
Donoghue; Cyberkinetics, Inc. ’04
(Paninski et al., 1999; Serruya et al., 2002; Shoham et al., 2005)
Basic goal
...learning the neural code.
Fundamental question: how to estimate p(y|x) from
experimental data?
General problem is too hard — not enough data, too many
inputs x and spike trains y
Avoiding the curse of insufficient data
Many approaches to make problem tractable:
1: Estimate some functional f(p) instead
e.g., information-theoretic quantities (Nemenman et al., 2002;
Paninski, 2003b)
2: Select stimuli as efficiently as possible
e.g., (Foldiak, 2001; Machens, 2002; Paninski, 2003a)
3: Fit a model with small number of parameters
Part 1: Neural encoding models
“Encoding model”: pθ(y|x).
— Fit parameter θ instead of full p(y|x)
Main theme: we want the model to be flexible, but not so flexible that it can no longer be fit reliably
Flexibility vs. “fittability”
Multiparameter HH-type model
— highly biophysically plausible, flexible
— but very difficult to estimate parameters given spike times alone
(figure adapted from (Fohlmeister and Miller, 1997))
Cascade (“LNP”) model
— easy to estimate: spike-triggered averaging
(Simoncelli et al., 2004)
— but not biophysically plausible (fails to capture spike timing
details: refractoriness, burstiness, adaptation, etc.)
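The spike-triggered average itself is a few lines of linear algebra. A minimal sketch in Python (NumPy), with a simulated white-noise stimulus and Poisson spiking standing in for real data; the filter shape, nonlinearity, and constants are illustrative assumptions, not values from the talk:

```python
import numpy as np

# Minimal sketch of spike-triggered averaging for an LNP model.
# Stimulus, filter, and nonlinearity below are illustrative stand-ins.
rng = np.random.default_rng(0)
T, d = 100_000, 20                          # time bins; filter length
x = rng.standard_normal(T)                  # white-noise "flicker" stimulus
k_true = np.exp(-np.arange(d) / 5.0)
k_true /= np.linalg.norm(k_true)            # linear filter k
drive = np.convolve(x, k_true)[:T]          # k . x(t)
y = rng.poisson(0.05 * np.exp(drive))       # L -> N (exp) -> P (Poisson spiking)

# STA: spike-weighted average of the stimulus segment preceding each bin
X = np.stack([x[t - d + 1:t + 1][::-1] for t in range(d - 1, T)])
sta = (y[d - 1:] @ X) / y[d - 1:].sum()     # ~ proportional to k_true
```

For Gaussian white-noise stimuli the STA is proportional to the true filter, which is what makes this estimator so cheap relative to full likelihood fits.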
Two key ideas
1. Use likelihood-based methods for fitting.
— well-justified statistically
— easy to incorporate prior knowledge, explicit noise
models, etc.
2. Use models that are easy to fit via maximum likelihood
— concave (downward-curving) functions have no
non-global local maxima =⇒ concave functions are easy to
maximize by gradient ascent.
Recurring theme: find flexible models whose loglikelihoods are
guaranteed to be concave.
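As a concrete toy instance of this recipe (not the IF model itself), consider a Poisson neuron with rate exp(θ·x): its loglikelihood is concave in θ, so plain gradient ascent finds the global maximum. A minimal sketch with simulated data and an assumed fixed step size:

```python
import numpy as np

# Toy concave-loglikelihood fit: Poisson spiking with rate exp(theta . x).
# The loglikelihood sum_t [ y_t (theta . x_t) - exp(theta . x_t) ] is concave
# in theta, so gradient ascent cannot get stuck in a non-global local maximum.
rng = np.random.default_rng(1)
n, d = 5000, 10
X = rng.standard_normal((n, d))             # stimuli, one row per time bin
theta_true = rng.standard_normal(d) / np.sqrt(d)
y = rng.poisson(np.exp(X @ theta_true))     # simulated spike counts

theta = np.zeros(d)
for _ in range(500):
    grad = X.T @ (y - np.exp(X @ theta)) / n  # gradient of mean loglikelihood
    theta += 0.1 * grad                       # fixed-step gradient ascent
# theta is now close to the (unique) maximum likelihood estimate
```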
Filtered integrate-and-fire model
$$
dV(t) \;=\; \Big( -g(t)\,V(t) + I_{\mathrm{DC}} + \vec{k} \cdot \vec{x}(t) + \sum_{j=-\infty}^{0} h(t - t_j) \Big)\, dt \;+\; \sigma\, dN_t
$$
(Gerstner and Kistler, 2002; Paninski et al., 2004b)
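To make the model concrete, here is a forward Euler simulation sketch of the equation above; all constants and the shapes of the filter k and the after-spike current h are assumptions for illustration, not fitted values:

```python
import numpy as np

# Euler simulation sketch of the filtered integrate-and-fire model above.
# Parameters, k, and h are illustrative assumptions, not fit values.
rng = np.random.default_rng(2)
dt, n = 1e-4, 10_000                          # 1 s at 0.1 ms resolution
g, I_DC, sigma = 20.0, 16.0, 0.5              # leak, DC current, noise level
V_th, V_reset = 1.0, 0.0
k = np.exp(-np.arange(200) / 50.0)            # stimulus filter k (20 ms)
h = -2.0 * np.exp(-np.arange(500) / 100.0)    # after-spike current h (refractory)
x = rng.standard_normal(n)
kx = np.convolve(x, k)[:n]                    # k . x(t)

V, I_h, spikes = V_reset, np.zeros(n), []
for t in range(n):
    V += (-g * V + I_DC + kx[t] + I_h[t]) * dt \
         + sigma * np.sqrt(dt) * rng.standard_normal()
    if V >= V_th:                             # threshold crossing: spike and reset
        spikes.append(t * dt)
        V = V_reset
        end = min(n, t + 1 + h.size)
        I_h[t + 1:end] += h[:end - t - 1]     # the sum_j h(t - t_j) term
```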
Model flexibility: Adaptation
The estimation problem
(Paninski et al., 2004b)
First passage time likelihood
P(spike at ti) = fraction of paths crossing threshold for the first time at ti
(computed numerically via Fokker-Planck or integral equation methods)
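A crude but transparent Monte Carlo stand-in for those Fokker-Planck / integral-equation solvers: simulate many noisy paths from reset and histogram the bin in which each first crosses threshold. Parameters are illustrative, and the input current is held constant for simplicity:

```python
import numpy as np

# Monte Carlo stand-in for the first-passage computation: simulate many noisy
# LIF paths from reset and record the bin of each path's first crossing.
rng = np.random.default_rng(3)
dt, g, sigma = 1e-3, 20.0, 0.5
V_reset, V_th, I = 0.0, 1.0, 25.0            # constant stand-in input current
n_t, n_paths = 500, 20_000
V = np.full(n_paths, V_reset)
alive = np.ones(n_paths, bool)               # paths that have not yet spiked
fpt = np.zeros(n_t)                          # first-passage time histogram
for t in range(n_t):
    dW = np.sqrt(dt) * rng.standard_normal(alive.sum())
    V[alive] += (-g * V[alive] + I) * dt + sigma * dW
    crossed = alive & (V >= V_th)
    fpt[t] = crossed.sum() / n_paths         # ~ P(first spike in bin t)
    alive &= ~crossed
```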
Maximizing likelihood
Maximization seems difficult, even intractable:
— high-dimensional parameter space
— likelihood is a complex nonlinear function of parameters
Main result: The loglikelihood is concave in the parameters,
no matter what data {~x(t), ti} are observed.
=⇒ no non-global local maxima
=⇒ maximization easy by ascent techniques.
Application: retinal ganglion cells
Preparation: dissociated salamander and macaque retina
— extracellularly-recorded responses of populations of RGCs
Stimulus: random “flicker” visual stimuli (Chander and Chichilnisky, 2001)
Spike timing precision in retina
[Figure: PSTH firing rate (sp/sec) and across-trial variance (sp²/bin) over a 1 s stimulus, comparing recorded RGC responses with LNP and IF model predictions]
(Pillow et al., 2005)
Linking spike reliability and
subthreshold noise
(Pillow et al., 2005)
Likelihood-based discrimination
Given spike data, optimal decoder chooses stimulus ~x according
to likelihood: p(spikes|~x1) vs. p(spikes|~x2).
Using the correct model is essential (Pillow et al., 2005)
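A sketch of the two-alternative decoder, with a binned Poisson observation model standing in for the IF first-passage likelihood; the hypothetical rate_1 and rate_2 arrays would be the fitted model's rate predictions under the two candidate stimuli:

```python
import numpy as np
from scipy.stats import poisson

# Two-alternative likelihood decoder sketch. A Poisson model stands in for
# p(spikes | x); rate_1 and rate_2 are (hypothetical) model rate predictions.
def decode(counts, rate_1, rate_2, dt=1e-3):
    """Return 1 if p(spikes | x1) >= p(spikes | x2), else 2."""
    ll_1 = poisson.logpmf(counts, rate_1 * dt).sum()
    ll_2 = poisson.logpmf(counts, rate_2 * dt).sum()
    return 1 if ll_1 >= ll_2 else 2

# example: counts drawn under stimulus 1 should usually decode as 1
rng = np.random.default_rng(4)
rate_1, rate_2 = 30 * rng.random(1000), 30 * rng.random(1000)
counts = rng.poisson(rate_1 * 1e-3)
print(decode(counts, rate_1, rate_2))
```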
Generalization: population responses
Pillow et al., SFN ’05
Population retinal recordings
Pillow et al., SFN ’05
Part 2: Decoding subthreshold activity
Given extracellular spikes, what is the most likely intracellular V(t)?
[Figure: recorded spike times over a ~0.5 s window; the underlying intracellular voltage V(t) (roughly −80 to 10 mV) is unobserved]
Computing VML(t)
Loglikelihood of V(t) (given LIF parameters, white noise Nt):

$$
L\big(\{V(t)\}_{0 \le t \le T}\big) \;=\; -\frac{1}{2\sigma^2} \int_0^T \Big[\, \dot V(t) - \big( -g\,V(t) + I(t) \big) \Big]^2 \, dt
$$
Constraints:
• Reset at t = 0: V(0) = Vreset
• Spike at t = T: V(T) = Vth
• No spike for 0 < t < T: V(t) < Vth
Quadratic programming problem: optimize quadratic function
under linear constraints. Concave: unique global optimum.
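A discretized sketch of this quadratic program, written with the cvxpy modeling library (an assumed tooling choice; any QP solver works). The strict inequality is relaxed to ≤ on the grid, and all parameters and the input current are placeholders:

```python
import numpy as np
import cvxpy as cp

# Discretized most-likely-voltage-path QP: minimize the squared Euler
# residuals of dV = (-g V + I) dt subject to reset, threshold, and
# subthreshold constraints. Parameters and input are illustrative.
dt, g, V_reset, V_th = 1e-3, 20.0, 0.0, 1.0
n = 500                                       # bins between the two spikes
rng = np.random.default_rng(5)
I = 25.0 + 5.0 * rng.standard_normal(n)       # stand-in for I_DC + k.x + h terms

V = cp.Variable(n + 1)
resid = (V[1:] - V[:-1]) / dt - (-g * V[:-1] + I)
prob = cp.Problem(
    cp.Minimize(cp.sum_squares(resid) * dt),  # proportional to -loglikelihood
    [V[0] == V_reset,                         # reset at t = 0
     V[n] == V_th,                            # spike (threshold) at t = T
     V[1:n] <= V_th])                         # no crossing in between
prob.solve()
V_ML = V.value                                # unique global optimum (concavity)
```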
Most likely vs. average V(t)
[Figure: most likely vs. average voltage paths between spikes, for noise levels σ = 0.1, 0.2, and 0.4]
(Applications to spike-triggered average (Paninski, 2005a; Paninski, 2005b))
Application: in vitro data
Recordings: rat sensorimotor cortical slice; dual-electrode whole-cell
Stimulus: Gaussian white noise current I(t)
Analysis: fit IF model parameters {g,~k, h(.), Vth, σ} by maximum likelihood
(Paninski et al., 2003; Paninski et al., 2004a), then compute VML(t)
Application: in vitro data
[Figure: recorded spikes (top); true V(t) vs. VML(t), in mV, over a ~0.1 s window (bottom)]
P(V(t) | {ti}, θ̂ML, ~x) computed via the forward-backward hidden
Markov model method (Paninski, 2005a).
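That forward-backward computation can be sketched on a discretized voltage grid, here for a single interspike interval between one reset and the next spike; grid size, parameters, and the input current are assumptions, not the fitted values used above:

```python
import numpy as np

# Forward-backward sketch for P(V(t) | spikes) on a discretized voltage grid,
# between a reset at t = 0 and a spike at the final bin. All parameters,
# the grid, and the input current are illustrative assumptions.
dt, g, sigma = 1e-3, 20.0, 0.5
V_reset, V_th = 0.0, 1.0
grid = np.linspace(-0.5, V_th, 200)          # voltage states; last bin = V_th
n_t = 300
I = np.full(n_t, 22.0)                       # stand-in filtered input current

def trans(t):                                # Euler drift + Gaussian noise kernel
    m = grid + (-g * grid + I[t]) * dt       # predicted mean from each state
    P = np.exp(-(grid[None, :] - m[:, None]) ** 2 / (2 * sigma ** 2 * dt))
    return P / P.sum(axis=1, keepdims=True)

obs = np.ones((n_t, grid.size))              # "no spike yet": V below threshold
obs[:, -1] = 0.0
obs[-1] = 0.0                                # final bin: spike, so V = V_th
obs[-1, -1] = 1.0

alpha = np.zeros((n_t, grid.size))           # forward pass
alpha[0, np.argmin(np.abs(grid - V_reset))] = 1.0
for t in range(1, n_t):
    alpha[t] = obs[t] * (alpha[t - 1] @ trans(t - 1))
    alpha[t] /= alpha[t].sum()
beta = np.ones((n_t, grid.size))             # backward pass
for t in range(n_t - 2, -1, -1):
    beta[t] = trans(t) @ (obs[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()
post = alpha * beta                          # P(V_t | all spikes), per time bin
post /= post.sum(axis=1, keepdims=True)
```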
Part 3: Back to detailed models
Can we recover detailed biophysical properties?
• Active: membrane channel densities
• Passive: axial resistances, “leakiness” of membranes
• Dynamic: spatiotemporal synaptic input
Conductance-based models
Key point: if we observe the full Vi(t), the cell geometry and channel kinetics are known,
and the current noise is log-concave,
then the loglikelihood of the unknown parameters is concave.
Gaussian noise =⇒ standard nonnegative regression (albeit high-d).
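Under Gaussian noise this is ordinary nonnegative least squares. A sketch with synthetic stand-in traces; in real usage each column of the design matrix would be a channel's driving term m^p h (V − E_rev), precomputed along the observed V(t) from the known kinetics:

```python
import numpy as np
from scipy.optimize import nnls

# Nonnegative-regression sketch: with V(t) observed and kinetics known, each
# unknown channel density multiplies a known current trace, so dV/dt is
# linear in the densities. Traces below are synthetic placeholders for the
# per-channel driving terms evaluated along V(t).
rng = np.random.default_rng(6)
n_t, n_chan = 5000, 4
A = rng.standard_normal((n_t, n_chan))        # per-channel driving terms
g_true = np.array([120.0, 36.0, 0.3, 10.0])   # e.g. Na, K, leak, other (mS/cm^2)
dVdt = A @ g_true + 0.5 * rng.standard_normal(n_t)  # Gaussian current noise

g_hat, _ = nnls(A, dVdt)                      # constrained ML estimate, g >= 0
```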
Estimating channel densities from V(t)
Ahrens, Huys, Paninski, NIPS ’05
[Figure: simulated V(t) and dV/dt vs. summed channel currents over 100 ms; true vs. inferred conductances (mS/cm²) for NaHH, KHH, Leak, NaM, KM, NaS, KAS channels]
Ahrens, Huys, Paninski, NIPS ’05
Estimating non-homogeneous channel densities and
axial resistances from spatiotemporal voltage recordings
Ahrens, Huys, Paninski, COSYNE ’05
Estimating synaptic inputs given V(t)
[Figure: true vs. inferred (MAP) excitatory and inhibitory synaptic spike trains and conductances, with the observed voltage trace (−70 to 20 mV); true vs. inferred (ML) channel conductances (mS/cm²) for HHNa, HHK, Leak, MNa, MK, SNa, SKA, SKDR channels]
Ahrens, Huys, Paninski, NIPS ’05
Collaborators
Theory and numerical methods
— J. Pillow, E. Simoncelli, NYU
— S. Shoham, Princeton
— A. Haith, C. Williams, Edinburgh
— M. Ahrens, Q. Huys, Gatsby
Motor cortex physiology
— M. Fellows, J. Donoghue, Brown
— N. Hatsopoulos, U. Chicago
— B. Townsend, R. Lemon, U.C. London
Retinal physiology
— V. Uzzell, J. Shlens, E.J. Chichilnisky, UCSD
Cortical in vitro physiology
— B. Lau and A. Reyes, NYU
References
Paninski, L. (2003a). Design of experiments via information theory. Advances in Neural Information Processing Systems, 16.
Paninski, L. (2003b). Estimation of entropy and mutual information. Neural Computation, 15:1191–1253.
Paninski, L. (2005a). The most likely voltage path and large deviations approximations for integrate-and-fire neurons. Journal of Computational Neuroscience, in press.
Paninski, L. (2005b). The spike-triggered average of the integrate-and-fire cell driven by Gaussian white noise. Neural Computation, in press.
Paninski, L., Fellows, M., Hatsopoulos, N., and Donoghue, J. (1999). Coding dynamic variables in populations of motor cortex neurons. Society for Neuroscience Abstracts, 25:665.9.
Paninski, L., Lau, B., and Reyes, A. (2003). Noise-driven adaptation: in vitro and mathematical analysis. Neurocomputing, 52:877–883.
Paninski, L., Pillow, J., and Simoncelli, E. (2004a). Comparing integrate-and-fire-like models estimated using intracellular and extracellular data. Neurocomputing, 65:379–385.
Paninski, L., Pillow, J., and Simoncelli, E. (2004b). Maximum likelihood estimation of a stochastic integrate-and-fire neural model. Neural Computation, 16:2533–2561.
Pillow, J., Paninski, L., Shlens, J., Simoncelli, E., and Chichilnisky, E. (2005). Modeling multi-neuronal responses in primate retinal ganglion cells. Comp. Sys. Neur. ’05.
Serruya, M., Hatsopoulos, N., Paninski, L., Fellows, M., and Donoghue, J. (2002). Instant neural control of a movement signal. Nature, 416:141–142.
Shoham, S., Paninski, L., Fellows, M., Hatsopoulos, N., Donoghue, J., and Normann, R. (2005). Optimal decoding for a primary motor cortical brain-computer interface. IEEE Transactions on Biomedical Engineering, 52:1312–1322.
Simoncelli, E., Paninski, L., Pillow, J., and Schwartz, O. (2004). Characterization of neural responses with stochastic stimuli. In Gazzaniga, M., editor, The Cognitive Neurosciences. MIT Press, 3rd edition.