
TNI: Computational Neuroscience

Instructors: Peter Latham, Maneesh Sahani, Peter Dayan

TA: Phillipp Hehrmann, hehrmann@gatsby.ucl.ac.uk
Website: http://www.gatsby.ucl.ac.uk/~hehrmann/TN1/

(slides will be on website)

Lectures: Tuesday/Friday, 11:00-1:00.
Review: Friday, 1:00-3:00.

Homework: assigned Friday, due the following Friday (1 week later).
First homework: due 2 weeks later (no class Oct. 12).

What is computational neuroscience?

Our goal: figure out how the brain works.

[figure: a 10-micron cube of neural tissue]

There are about 10 billion cubes of this size in your brain!

How do we go about making sense of this mess?

David Marr (1945-1980) proposed three levels of analysis:

1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it’s actually done by networks of neurons (implementational level)

Example #1: vision.

the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.

Example #1: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

[figure: crude drawing of a house, sun, and tree (“bad artist”)]

Example #1: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

the algorithm: graphical models.

[figure: graphical model. Latent variables x1, x2, x3 generate peripheral spikes r1, r2, r3, r4; from the spikes the brain computes x̂1, x̂2, x̂3, estimates of the latent variables.]
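To make the algorithmic level concrete, here is a minimal sketch (in Python) of inference in a graphical model of exactly this shape: a single latent variable x drives Poisson spike counts r in a small population, and x̂ is read off by maximizing the posterior over a grid. The Gaussian tuning curves, rates, and grid resolution are illustrative assumptions, not anything specified in the course.

import numpy as np

rng = np.random.default_rng(0)
x_true = 0.3                                   # latent variable
centers = np.linspace(-1, 1, 4)                # preferred stimuli of 4 neurons
def rates(x):                                  # tuning curves f_i(x)
    return 20 * np.exp(-(x - centers)**2 / (2 * 0.3**2)) + 1
r = rng.poisson(rates(x_true))                 # observed spike counts r_i

# Posterior over x on a grid (flat prior): p(x|r) ∝ Π_i Poisson(r_i; f_i(x))
grid = np.linspace(-1, 1, 201)
loglik = np.array([np.sum(r * np.log(rates(x)) - rates(x)) for x in grid])
x_hat = grid[np.argmax(loglik)]                # MAP estimate x̂
print(x_true, x_hat)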

Example #1: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

the algorithm: graphical models.

implementation in networks of neurons: no clue.

Example #2: memory.

the problem: recall events, typically based on partial information.

Example #2: memory.

the problem: recall events, typically based on partial information. Associative or content-addressable memory.

the algorithm: dynamical systems with fixed points.

[figure: trajectories in activity space (axes r1, r2, r3) flowing to fixed points]

Example #2: memory.

the problem: recall events, typically based on partial information. Associative or content-addressable memory.

the algorithm: dynamical systems with fixed points.

neural implementation: Hopfield networks.

x_i = sign(∑_j J_ij x_j)
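A toy sketch of why this update rule gives content-addressable memory: store a few patterns with the standard Hebbian outer-product prescription for J_ij, corrupt one of them, and iterate the update until the network falls into the stored fixed point. The network size, number of patterns, and corruption level below are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))    # memories to store

# Hebbian weights: J_ij = (1/N) ∑_μ x_i^μ x_j^μ, no self-coupling
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0)

x = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)   # corrupt 20% of the bits
x[flip] *= -1

for _ in range(10):                            # iterate x_i = sign(∑_j J_ij x_j)
    x = np.where(J @ x >= 0, 1, -1)

print(np.mean(x == patterns[0]))               # overlap with the stored memory (≈ 1)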

Comment #1:

the problem:
the algorithm:
neural implementation:

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

often ignored!!!

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

My favorite example: CPGs (central pattern generators)

[figure: two firing-rate traces (rate vs. time) from a CPG]

Comment #2:

the problem: easier
the algorithm: harder
neural implementation: harder

You need to know a lot of math!!!!!

[figure: the graphical model from Example #1]

[figure: the activity space with fixed points from Example #2]

Comment #3:

the problem: easier
the algorithm: harder
neural implementation: harder

This is a good goal, but it’s hard to do in practice.

We shouldn’t be afraid to just mess around with experimental observations and equations.

[figure: neuron schematic (dendrites, soma, axon) and voltage trace: rest near -50 mV, spike peak near +40 mV, spike width ~1 ms, 100 ms of data]

A classic example: Hodgkin and Huxley.

C dV/dt = -g_L (V - V_L) - g_Na m^3 h (V - V_Na) - ...
dm/dt = ...
...
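A minimal forward-Euler simulation sketch of the full Hodgkin-Huxley model, in Python (standard squid-axon parameters and rate functions; the step current and integration step are illustrative choices):

import numpy as np

# Standard Hodgkin-Huxley parameters (mV, ms, mS/cm^2, uF/cm^2)
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
V_Na, V_K, V_L = 50.0, -77.0, -54.4

def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                      # integration step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting state
I = 10.0                                # step current, uA/cm^2 (illustrative)

for _ in range(int(T / dt)):
    # C dV/dt = -g_L(V-V_L) - g_Na m^3 h (V-V_Na) - g_K n^4 (V-V_K) + I
    dV = (-g_L*(V - V_L) - g_Na*m**3*h*(V - V_Na) - g_K*n**4*(V - V_K) + I) / C
    # gating variables: dx/dt = alpha_x(V)(1 - x) - beta_x(V) x
    m += dt * (alpha_m(V)*(1 - m) - beta_m(V)*m)
    h += dt * (alpha_h(V)*(1 - h) - beta_h(V)*h)
    n += dt * (alpha_n(V)*(1 - n) - beta_n(V)*n)
    V += dt * dV

print(V)   # record V every step to see the ~1 ms spikes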

the problem: easier
the algorithm: harder
neural implementation: harder

A lot of what we do as computational neuroscientists is turn experimental observations into equations.

The goal here is to understand how networks or single neurons work.

We should always keep in mind that:
a) this is less than ideal,
b) we’re really after the big picture: how the brain works.

Basic facts about the brain

Your brain

[figure: whole brain: neocortex (cognition) and subcortical structures (emotions, reward, homeostasis, much much more)]

Your cortex unfolded

[figure: the unfolded cortical sheet: ~30 cm across, ~0.5 cm thick, 6 layers, with a 1 cubic millimeter (~3×10^-5 oz) chunk marked]

1 mm^3 of cortex:
  50,000 neurons
  10,000 connections/neuron (=> 500 million connections)
  4 km of axons

1 mm^2 of a CPU:
  1 million transistors
  2 connections/transistor (=> 2 million connections)
  0.002 km of wire

whole brain (2 kg):
  10^11 neurons
  10^15 connections
  8 million km of axons

whole CPU:
  10^9 transistors
  2×10^9 connections
  2 km of wire

[figure: neuron schematic: dendrites (input), soma (spike generation), axon (output); voltage trace from about -50 mV at rest to +40 mV at the spike peak, spike width ~1 ms, 100 ms of data]

[figure: synapse, with current flow from one neuron into the next, and the resulting voltage trace]

neuron j → neuron i

neuron j emits a spike:

[figure: voltage on neuron i vs. time t: an EPSP, lasting ~10 ms]

neuron j emits a spike:

[figure: voltage on neuron i vs. time t: an IPSP, lasting ~10 ms]

amplitude = w_ij

w_ij changes with learning

[figure: synapse and current flow]
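One common way to turn these pictures into an equation is to model the postsynaptic potential as an alpha function scaled by the weight w_ij; here is a sketch under that assumption (the ~10 ms time constant is taken from the figure, the alpha-function shape is an illustrative choice):

import numpy as np

tau = 10.0                                  # PSP time constant, ms (illustrative)
w_ij = -0.5                                 # negative weight => IPSP
t = np.arange(0, 100, 0.1)                  # ms after the presynaptic spike

# alpha-function PSP: w_ij * (t/tau) * exp(1 - t/tau), peak value w_ij at t = tau
psp = w_ij * (t / tau) * np.exp(1 - t / tau)
print(psp.min())                            # ≈ w_ij, reached at t ≈ tau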

A bigger picture view of the brain

[figure: information-flow diagram. Latent variables x in the world produce peripheral spikes r; sensory processing converts r into r̂, a “direct” code for the latent variables; emotions/cognition/memory and action selection convert r̂ into r̂', a “direct” code for motor actions; motor processing converts r̂' into peripheral spikes r', which produce motor actions x'. Everything between r and r' is the brain. (A stick figure interjects: “you are the cutest stick figure ever!”)]

Questions:
1. How does the brain re-represent latent variables?
2. How does it manipulate re-represented variables?
3. How does it learn to do both?

Ask at three levels:
1. What are the properties of task x?
2. What are the algorithms?
3. How are they implemented in neural circuits?

Knowing the algorithms is a critical, but often neglected, step!!!

We know the algorithms that the vestibular system uses. We know (sort of) how it’s implemented at the neural level.

We know the algorithm for echolocation. We know (mainly) how it’s implemented at the neural level.

We know the algorithm for computing x+y. We know (mainly) how it might be implemented in the brain.

Knowing the algorithms is a critical, but often neglected, step!!!

We don’t know the algorithms for anything else. We don’t know how anything else is implemented at the neural level.

This is not a coincidence!!!!!!!!

What we know about the brain (highly biased)

1. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.

Connectivity. We know (more or less) which area is connected to which. We don’t know the wiring diagram at the microscopic level.


2. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley).

Dendrites. Lots of potential for incredibly complex processing. My guess: they make neurons bigger and reduce wiring length.

3. The neural code. We’re pretty sure that information is carried in action potentials. We’re not sure what aspects of action potentials carry the information. The two main candidates:

precise timing
firing rate

Once you get away from the periphery, it’s mainly firing rate.
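A minimal sketch of what a rate code means in practice: model spikes as a Poisson process whose rate, not precise timing, carries the signal (the rate, bin size, and duration below are illustrative choices):

import numpy as np

rng = np.random.default_rng(2)
rate = 40.0                                # firing rate, spikes/s (illustrative)
dt, T = 0.001, 1.0                         # 1 ms bins, 1 s of data

# Poisson spiking: in each small bin, P(spike) ≈ rate * dt
spikes = rng.random(int(T / dt)) < rate * dt
print(spikes.sum())                        # count ≈ rate * T; the timing is random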

4. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago:

van Vreeswijk and Sompolinsky (Science, 1996)
van Vreeswijk and Sompolinsky (Neural Comp., 1998)

We now understand very well randomly connected networks (harder than you might think), and (I believe) we are on the verge of:

i) understanding networks that have interesting computational properties.
ii) computing the correlational structure in those networks.

5. Learning. We know a lot of facts (LTP, LTD, STDP), but it’s not clear which, if any, are relevant.

Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information.

These are promising, but the link to the brain has not been fully established.
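For a flavor of unsupervised learning rules, here is Oja’s rule, a classic Hebbian rule that extracts the top principal component of its inputs by maximizing output variance. It is an illustration of the genre, not the mutual-information-maximizing algorithms referred to above; the data and learning rate are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
# Toy data: 2-D inputs with most variance along the (1, 1) direction
X = rng.normal(size=(5000, 2)) * np.array([2.0, 0.5])
X = X @ (np.array([[1, 1], [-1, 1]]) / np.sqrt(2))   # rotate by 45 degrees

w = rng.normal(size=2)
eta = 0.005                                 # learning rate (illustrative)
for x in X:
    y = w @ x                               # postsynaptic activity
    w += eta * y * (x - y * w)              # Hebbian term + normalizing decay
print(w / np.linalg.norm(w))                # ≈ ±(1, 1)/√2, the top principal component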


A word about learning (remember these numbers!!!):

You have about 10^15 synapses.

If it takes 1 bit of information to set a synapse, you need 10^15 bits to set all of them.

30 years ≈ 10^9 seconds.

To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second.
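The arithmetic behind that number, spelled out:

$$\frac{10^{15}/10 \ \text{bits}}{10^{9}\ \text{s}} = \frac{10^{14}\ \text{bits}}{10^{9}\ \text{s}} = 10^{5}\ \text{bits/s} = 100{,}000\ \text{bits/s}.$$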

Learning in the brain is almost completely unsupervised!!!

6. Where we know the algorithms, we know the neural implementation (sort of): vestibular system, sound localization, echolocation, x+y.

1. What we know: my score (1 = low, 10 = high).

a. Anatomy: 7
b. Single neurons: 7
c. The neural code: 5
d. Recurrent networks of spiking neurons: 4
e. Learning: 2

Questions: all answers are “we don’t know”.
1. How does the brain re-represent latent variables? 0.001
2. How does it manipulate re-represented variables? 0.002
3. How does it learn to do both? 0.001

Outline:

1. Basics: single neurons/axons/dendrites/synapses. (Latham)
2. Language of neurons: neural coding. (Sahani)
3. What we know about networks (very little). (Latham)
4. Learning at network and behavioral level. (Dayan)

Outline for this part of the course (biophysics):

1. What makes a neuron spike.
2. How current propagates in dendrites.
3. How current propagates in axons.
4. How synapses work.
5. Lots and lots of math!!!
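As a preview of items 2 and 3: the workhorse there is the passive cable equation, which governs how voltage spreads along dendrites and unmyelinated axons. Writing V for the deviation from rest, with membrane time constant τ and space constant λ:

$$\tau \frac{\partial V}{\partial t} = \lambda^{2} \frac{\partial^{2} V}{\partial x^{2}} - V + \text{input}.$$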
