TNI: Computational Neuroscience
Instructors: Peter Latham, Maneesh Sahani, Peter Dayan
TA: Phillipp Hehrmann, [email protected]
Website: http://www.gatsby.ucl.ac.uk/~hehrmann/TN1/
(slides will be on website)
Lectures: Tuesday/Friday, 11:00-1:00.
Review: Friday, 1:00-3:00.
Homework: assigned Friday, due Friday (1 week later).
First homework: 2 weeks later (no class Oct. 12).
What is computational neuroscience?
Our goal: figure out how the brain works.
[figure: a cube of brain tissue, 10 microns on a side]
There are about 10 billion cubes of this size in your brain!
How do we go about making sense of this mess?
David Marr (1945-1980) proposed three levels of analysis:
1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it's actually done by networks of neurons (implementational level)
Example #1: vision.
the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.
the problem (modern version): 2-D image on retina → reconstruction of latent variables.
[drawing: house, sun, tree, "bad artist"]
the algorithm: graphical models.
[figure: graphical model with latent variables x1, x2, x3 at the top, peripheral spikes r1, r2, r3, r4 in the middle, and estimates x̂1, x̂2, x̂3 at the bottom]
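As a toy version of this inference problem: a single binary latent variable x drives a Poisson spike count r, and we compute the posterior over x from one observed count. All numbers here (the firing rates, the prior) are made up for illustration; this is a sketch of the graphical-model idea, not the algorithm from the lecture.

```python
import math

# Hypothetical expected spike counts under each value of the latent variable x.
rates = {0: 2.0, 1: 8.0}
prior = {0: 0.5, 1: 0.5}

def posterior(r):
    """P(x | r) for a single Poisson-distributed spike count r."""
    like = {x: math.exp(-lam) * lam**r / math.factorial(r)
            for x, lam in rates.items()}
    z = sum(like[x] * prior[x] for x in like)
    return {x: like[x] * prior[x] / z for x in like}

post = posterior(7)               # observe 7 spikes
x_hat = max(post, key=post.get)   # the estimate x̂
print(post, x_hat)
```

With 7 observed spikes the high-rate hypothesis dominates, so the estimate x̂ is 1.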
implementation in networks of neurons: no clue.
Example #2: memory.
the problem: recall events, typically based on partial information.
the problem: recall events, typically based on partial information (associative or content-addressable memory).
the algorithm: dynamical systems with fixed points.
[figure: trajectories in activity space (r1, r2, r3) flowing to fixed points]
neural implementation: Hopfield networks.
x_i = sign( ∑_j J_ij x_j )
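A minimal sketch of this update rule in action. The weight rule is an assumption on my part (the slide only gives the update rule): Hebbian weights J_ij = (1/N) ∑_μ ξ_i^μ ξ_j^μ over stored patterns ξ^μ, with asynchronous updates so the dynamics settle into a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store P random ±1 patterns with the (assumed) Hebbian rule.
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0)              # no self-connections

def recall(x, max_sweeps=20):
    """Asynchronous updates x_i = sign(sum_j J_ij x_j) until a fixed point."""
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(N):
            s = 1 if J[i] @ x >= 0 else -1
            if s != x[i]:
                x[i] = s
                changed = True
        if not changed:              # fixed point reached
            break
    return x

# Cue with pattern 0 corrupted in 10 of its 100 bits: partial information in.
cue = patterns[0].copy()
cue[rng.choice(N, size=10, replace=False)] *= -1
restored = recall(cue)
print((restored == patterns[0]).mean())
```

With only 5 patterns in 100 neurons the network is far below capacity, so the dynamics fall back into the stored pattern: content-addressable memory from a dynamical system with fixed points.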
Comment #1:
the problem: easier
the algorithm: harder
neural implementation: harder
often ignored!!!
My favorite example: CPGs (central pattern generators)
[figure: two rhythmic firing-rate traces]
Comment #2:
You need to know a lot of math!!!!!
Comment #3:
This is a good goal, but it’s hard to do in practice.
We shouldn't be afraid to just mess around with experimental observations and equations.
[figure: a neuron (dendrites, soma, axon) and its voltage trace: ~1 ms spikes from -50 mV to +40 mV on a 100 ms timebase]
A classic example: Hodgkin and Huxley.
C dV/dt = -g_L(V - V_L) - g_Na m³h(V - V_Na) - …
dm/dt = …
…
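A sketch of how these equations can be integrated numerically. The parameter values are the conventional squid-axon ones and the forward-Euler scheme is a deliberately simple choice, not anything from the lecture:

```python
import numpy as np

# Standard Hodgkin-Huxley rate functions (squid axon, voltages in mV).
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations; I_ext in uA/cm^2."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3     # uF/cm^2 and mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4           # mV
    V = -65.0
    m = alpha_m(V) / (alpha_m(V) + beta_m(V))  # gates start at steady state
    h = alpha_h(V) / (alpha_h(V) + beta_h(V))
    n = alpha_n(V) / (alpha_n(V) + beta_n(V))
    trace = []
    for _ in range(int(T / dt)):
        I_Na = gNa * m**3 * h * (V - ENa)
        I_K = gK * n**4 * (V - EK)
        I_L = gL * (V - EL)
        V += dt * (I_ext - I_Na - I_K - I_L) / C
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return np.array(trace)

V = simulate()
print(V.max())
```

With a sustained 10 µA/cm² input the model fires repetitively, with spikes overshooting 0 mV, matching the voltage trace on the earlier slide.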
A lot of what we do as computational neuroscientists is turn experimental observations into equations.
The goal here is to understand how networks or single neurons work.
We should always keep in mind that: a) this is less than ideal, b) we’re really after the big picture: how the brain works.
Basic facts about the brain
Your brain
Your cortex unfolded
[figure: the unfolded cortical sheet, ~30 cm across and ~0.5 cm thick, with 6 layers]
neocortex (cognition)
subcortical structures (emotions, reward, homeostasis, much much more)
1 cubic millimeter, ~3×10⁻⁵ oz
1 mm³ of cortex:
50,000 neurons
10,000 connections/neuron (=> 500 million connections)
4 km of axons
1 mm² of a CPU:
1 million transistors
2 connections/transistor (=> 2 million connections)
0.002 km of wire
whole brain (2 kg):
10¹¹ neurons
10¹⁵ connections
8 million km of axons
1 mm2 of a CPU:
1 million transistors2 connections/transistor(=> 2 million connections).002 km of wire
whole CPU:
10⁹ transistors
2×10⁹ connections
2 km of wire
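A quick back-of-envelope check that the per-mm³ and whole-brain numbers above are mutually consistent (pure arithmetic, nothing new):

```python
# Connections per mm^3: 50,000 neurons x 10,000 connections/neuron.
neurons_mm3 = 50_000
conns_per_neuron = 10_000
print(neurons_mm3 * conns_per_neuron)    # 500 million, as quoted

# Whole brain: 10^11 neurons at the same connectivity.
neurons_brain = 10**11
print(neurons_brain * conns_per_neuron)  # 10^15, as quoted
```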
[figure: neuron and voltage trace, labeled: dendrites (input), soma (spike generation), axon (output); ~1 ms spikes from -50 mV to +40 mV on a 100 ms timebase]
[figure: current flow at a synapse]
[figure: neuron j synapses onto neuron i; when neuron j emits a spike, the voltage on neuron i shows an EPSP lasting ~10 ms]
[figure: the same setup, but the spike in neuron j produces an IPSP in neuron i]
[figure: the PSP amplitude is the synaptic weight w_ij]
[figure: the weight w_ij changes with learning]
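A common simplified model of a PSP, purely illustrative (the difference-of-exponentials shape and the time constants are assumptions, not from the slides): the sign of w_ij decides whether the response is an EPSP or an IPSP.

```python
import numpy as np

def psp(t, w, tau_rise=1.0, tau_decay=5.0):
    """Postsynaptic potential at time t (ms) after a presynaptic spike.
    w > 0 gives an EPSP, w < 0 an IPSP; amplitude scales with |w|."""
    return w * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

t = np.arange(0, 10, 0.1)       # 10 ms window, as in the figure
epsp = psp(t, w=+0.5)           # excitatory: transient depolarization
ipsp = psp(t, w=-0.5)           # inhibitory: transient hyperpolarization
print(epsp.max(), ipsp.min())
```

Learning would correspond to changing w; the PSP time course stays the same while its amplitude (and possibly sign, in a model this crude) changes.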
A bigger picture view of the brain
[figure: latent variables x → peripheral spikes r → sensory processing → r̂, a "direct" code for latent variables → emotions/cognition/memory and action selection → r̂′, a "direct" code for motor actions → motor processing → peripheral spikes r′ → motor actions x′; everything between r and r′ is the brain]
[figure sequence: a stimulus evokes spike trains r; final caption: "you are the cutest stick figure ever!"]
Questions:
1. How does the brain re-represent latent variables?
2. How does it manipulate re-represented variables?
3. How does it learn to do both?
Ask at three levels:
1. What are the properties of task x?
2. What are the algorithms?
3. How are they implemented in neural circuits?
Knowing the algorithms is a critical, but often neglected, step!!!
We know the algorithms that the vestibular system uses. We know (sort of) how they're implemented at the neural level.
We know the algorithm for echolocation. We know (mainly) how it's implemented at the neural level.
We know the algorithm for computing x+y. We know (mainly) how it might be implemented in the brain.
We don't know the algorithms for anything else. We don't know how anything else is implemented at the neural level.
This is not a coincidence!!!!!!!!
What we know about the brain
(highly biased)
1. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.
Connectivity. We know (more or less) which area is connected to which. We don't know the wiring diagram at the microscopic level.
2. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley).
Dendrites. Lots of potential for incredibly complex processing. My guess: they make neurons bigger and reduce wiring length.
3. The neural code. We’re pretty sure that information is carried in action potentials. We’re not sure what aspects of action potentials carry the information. The two main candidates:
precise timing
firing rate
Once you get away from periphery, it’s mainly firing rate.
4. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago:
van Vreeswijk and Sompolinsky (Science, 1996)
van Vreeswijk and Sompolinsky (Neural Comp., 1998)
We now understand very well randomly connected networks (harder than you might think), and (I believe) we are on the verge of:
i) understanding networks that have interesting computational properties.
ii) computing the correlational structure in those networks.
5. Learning. We know a lot of facts (LTP, LTD, STDP), but it's not clear which, if any, are relevant.
Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information.
These are promising, but the link to the brain has not been fully established.
A word about learning (remember these numbers!!!):
You have about 10¹⁵ synapses.
If it takes 1 bit of information to set a synapse, you need 10¹⁵ bits to set all of them.
30 years ≈ 10⁹ seconds.
To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second.
Learning in the brain is almost completely unsupervised!!!
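The 100,000 bits/second figure follows directly from the numbers above:

```python
# Back-of-envelope: bits/second needed to set 1/10 of your synapses in 30 years.
synapses = 10**15                            # from the slide
seconds_in_30_years = 30 * 365 * 24 * 3600   # ~9.5e8, i.e. roughly 10^9 s
bits_needed = synapses // 10                 # 1/10 of synapses, 1 bit each
rate = bits_needed / seconds_in_30_years
print(round(rate))                           # on the order of 100,000 bits/s
```

No teacher provides anywhere near that much labeled information, which is the argument for the conclusion above.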
6. Where we know algorithms, we know the neural implementation (sort of): vestibular system, sound localization, echolocation, x+y.
1. What we know: my score (1=low, 10=high).
a. Anatomy. 7
b. Single neurons. 7
c. The neural code. 5
d. Recurrent networks of spiking neurons. 4
e. Learning. 2
Questions: all answers are "we don't know".
1. How does the brain re-represent latent variables? 0.001
2. How does it manipulate re-represented variables? 0.002
3. How does it learn to do both? 0.001
Outline:
1. Basics: single neurons/axons/dendrites/synapses. Latham
2. Language of neurons: neural coding. Sahani
3. What we know about networks (very little). Latham
4. Learning at network and behavioral level. Dayan
Outline for this part of the course (biophysics):
1. What makes a neuron spike.
2. How current propagates in dendrites.
3. How current propagates in axons.
4. How synapses work.
5. Lots and lots of math!!!