
Page 1: Reading population codes: a neural implementation of ideal observers

Sophie Deneve, Peter Latham, and Alexandre Pouget

Page 2

[Diagram: stimulus (s) -> encode -> neurons -> response (r) -> decode]

Page 3: Tuning curves

• sensory and motor information is often encoded in “tuning curves”

• neurons give a characteristic “bell shaped” response (see the sketch below)
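
A minimal sketch of such a bell-shaped tuning curve, using a circular normal (von Mises-style) form; the amplitude, width, and baseline values are illustrative assumptions, not the paper's:

    import numpy as np

    # Circular normal (von Mises-style) tuning curve; amp, kappa, and
    # baseline below are illustrative values, not taken from the paper.
    def tuning_curve(theta, theta_pref, amp=50.0, kappa=4.0, baseline=5.0):
        """Mean firing rate of a neuron whose preferred direction is theta_pref."""
        return amp * np.exp(kappa * (np.cos(theta - theta_pref) - 1.0)) + baseline

    theta = np.linspace(-np.pi, np.pi, 200)      # directions in radians
    rates = tuning_curve(theta, theta_pref=0.0)  # peaks at the preferred direction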

Page 4: Difficulty of decoding

• noisy neurons give variable responses to the same stimulus

• the brain must estimate the encoded variable from the “noisy hill” of a population response (simulated below)
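
A minimal simulation of the problem, reusing the illustrative tuning-curve form from above; the population size and noise level are assumptions. Two presentations of the same stimulus yield two different noisy hills, which is exactly what makes decoding hard:

    import numpy as np

    rng = np.random.default_rng(0)

    n_neurons = 64
    prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)  # preferred directions

    def mean_response(s, amp=50.0, kappa=4.0, baseline=5.0):
        """Smooth hill of mean activity centered on the stimulus s."""
        return amp * np.exp(kappa * (np.cos(s - prefs) - 1.0)) + baseline

    s = 0.5
    r1 = mean_response(s) + rng.normal(0.0, 5.0, n_neurons)  # first noisy hill
    r2 = mean_response(s) + rng.normal(0.0, 5.0, n_neurons)  # second, different hill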

Page 5: Population vector estimator

• assign each neuron a vector

• vector length is proportional to activity

• vector direction corresponds to the neuron’s preferred direction

• sum the vectors (a sketch follows below)
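
A sketch of the estimator itself; prefs and r1 are the hypothetical preferred directions and noisy hill from the previous sketch:

    import numpy as np

    def population_vector(r, prefs):
        """One vector per neuron: length proportional to activity r[i],
        direction equal to the preferred direction prefs[i]; sum them and
        return the direction of the resultant vector."""
        x = np.sum(r * np.cos(prefs))
        y = np.sum(r * np.sin(prefs))
        return np.arctan2(y, x)

    # e.g. est = population_vector(r1, prefs)  # estimate of the encoded direction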

Page 6: Population vector estimator

• vector summation is equivalent to fitting a cosine function to the population response

• the peak of the fitted cosine is the estimate of direction
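
One way to see the equivalence, assuming the preferred directions $\theta_i$ are evenly spaced around the circle (a standard identity, not spelled out on the slide):

$$\min_{a,\phi} \sum_i \big(r_i - a\cos(\theta_i - \phi)\big)^2, \qquad a\cos(\theta_i - \phi) = \alpha\cos\theta_i + \beta\sin\theta_i .$$

For evenly spaced $\theta_i$, $\sum_i \cos^2\theta_i = \sum_i \sin^2\theta_i = N/2$ and $\sum_i \cos\theta_i \sin\theta_i = 0$, so least squares gives $\alpha = \tfrac{2}{N}\sum_i r_i \cos\theta_i$ and $\beta = \tfrac{2}{N}\sum_i r_i \sin\theta_i$. The fitted peak $\phi = \operatorname{atan2}(\beta, \alpha)$ is therefore exactly the direction of the summed population vector.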

Page 7: How good is an estimator?

• compare the variance of the estimator over repeated presentations to a theoretical lower bound

• the maximum likelihood estimate attains the lowest achievable variance for a given amount of independent noise

[Figure: comparison of the two estimators]

Page 8

[Diagram: stimulus (s) -> encode -> neurons -> response (r) -> decode]

Page 9: Maximum likelihood decoding

• encoding: the responses $\mathbf{r}$ are generated from the stimulus $s$ according to $P(\mathbf{r} \mid s)$

• decoding: the maximum likelihood estimator picks the stimulus that maximizes the likelihood of the observed response, $\hat{s} = \arg\max_s P(\mathbf{r} \mid s)$
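
A minimal grid-search sketch of ML decoding, assuming independent Gaussian noise with fixed variance, in which case maximizing $P(\mathbf{r} \mid s)$ reduces to least-squares template matching; the tuning parameters are illustrative:

    import numpy as np

    def ml_decode(r, prefs, amp=50.0, kappa=4.0, baseline=5.0):
        """Grid-search ML estimate. With independent, fixed-variance Gaussian
        noise, maximizing the likelihood P(r | s) is the same as finding the
        candidate s whose tuning-curve template best fits the noisy hill."""
        grid = np.linspace(-np.pi, np.pi, 1024)
        templates = (amp * np.exp(kappa * (np.cos(grid[:, None] - prefs[None, :]) - 1.0))
                     + baseline)                    # one template row per candidate s
        sse = np.sum((templates - r[None, :]) ** 2, axis=1)
        return grid[np.argmin(sse)]                 # candidate with highest likelihood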

Page 10: Goal: a biological ML estimator

• a recurrent neural network with broadly tuned units

• can achieve the ML estimate when noise is independent of firing rate

• can approximate the ML estimate when noise is activity-dependent

Page 11: General architecture

• units are fully connected and arranged in frequency columns and orientation rows

• the weights implement a 2-D Gaussian filter over the grid (a sketch follows below)

[Figure: 20 x 20 grid of units indexed by preferred frequency and preferred orientation]
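
A sketch of such a weight matrix; only the 20 x 20 grid layout comes from the slide, while the filter widths and the circular wrapping of both axes are assumptions:

    import numpy as np

    n_orient, n_freq = 20, 20     # grid size from the slide
    sigma_o, sigma_f = 2.0, 2.0   # filter widths (illustrative)

    def circ_dist(n):
        """Circular distance between grid indices 0..n-1."""
        d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
        return np.minimum(d, n - d)

    # w[i, k, j, l]: weight from the unit at (orientation j, frequency l) to
    # the unit at (orientation i, frequency k), a 2-D Gaussian in grid distance.
    w_o = np.exp(-circ_dist(n_orient) ** 2 / (2 * sigma_o ** 2))
    w_f = np.exp(-circ_dist(n_freq) ** 2 / (2 * sigma_f ** 2))
    w = w_o[:, None, :, None] * w_f[None, :, None, :]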

Page 12: Input tuning curves

• circular normal functions with some spontaneous activity

• Gaussian noise is added to the inputs (see the sketch below)
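
A sketch of the noisy inputs, shown on a 1-D orientation ring for brevity; the circular normal form is standard, but the parameter values and noise level are assumptions, not the paper's:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 64
    prefs = 2 * np.pi * np.arange(n) / n   # preferred orientations on a ring

    def input_rates(theta, amp=20.0, kappa=8.0, spontaneous=4.0):
        """Circular normal tuning plus spontaneous activity (illustrative values)."""
        return amp * np.exp(kappa * (np.cos(theta - prefs) - 1.0)) + spontaneous

    theta = np.pi / 3
    clean = input_rates(theta)
    noisy = clean + rng.normal(0.0, 2.0, n)   # additive Gaussian noise on the inputs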

Page 13: Unit updates and normalization

• unit activities are convolved with the filter (local excitation)

• responses are then normalized divisively (global inhibition); one update step is sketched below
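
A sketch of one update step, again on a 1-D ring for brevity; the squaring nonlinearity and the constants S and mu are assumptions of this sketch:

    import numpy as np

    n = 64
    idx = np.arange(n)
    d = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                   n - np.abs(idx[:, None] - idx[None, :]))
    w = np.exp(-d ** 2 / (2 * 4.0 ** 2))   # Gaussian filter on a ring (width illustrative)

    def update(o, S=0.1, mu=0.002):
        """One iteration: local excitation through the Gaussian filter, then
        global divisive inhibition; S and mu are illustrative constants."""
        u = w @ o                                   # convolve activities with the filter
        return u ** 2 / (S + mu * np.sum(u ** 2))   # divisive normalization

    # o = noisy.copy()            # e.g. the noisy input from the previous sketch
    # for _ in range(20):
    #     o = update(o)           # activity settles into a smooth hill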

Page 14: Results

• the network converges rapidly

• convergence is strongly dependent on contrast

Page 15: Results

• the response curve is sigmoidal after 3 iterations and becomes a step after 20

[Figure: comparison with the response of an actual neuron]

Page 16: Noise effects

• the width of the input tuning curve is held constant

• the width of the output tuning curve is varied by adjusting the spatial extent of the weights

[Figure panels: flat noise vs. proportional noise]

Page 17: Analysis

Q1: Why does the optimal width depend on the noise?

Q2: Why does the network perform better for flat noise?

[Figure panels: flat noise vs. proportional noise]

Page 18: Analysis

Smallest achievable variance (the Cramér-Rao bound):

$$\sigma^2_{CR} = \frac{1}{I(\Theta)}$$

For Gaussian noise:

$$I(\Theta) = \mathbf{f}'(\Theta)^T R^{-1} \mathbf{f}'(\Theta) + \frac{1}{2}\,\mathrm{Tr}\!\left[ R'(\Theta)\, R^{-1}\, R'(\Theta)\, R^{-1} \right]$$

where $R^{-1}$ is the inverse of the covariance matrix of the noise and $\mathbf{f}'(\Theta)$ is the vector of derivatives of the input tuning curves with respect to $\Theta$.

The trace term is 0 when $R$ is independent of $\Theta$ (flat noise); with proportional noise, part of the information lives in the $\Theta$-dependence of the covariance itself, which a readout of the mean hill cannot exploit.
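
A minimal numerical check of the bound, comparing an assumed flat (constant-variance) noise model with a proportional (variance equal to mean) one; all parameter values are illustrative:

    import numpy as np

    n = 64
    prefs = 2 * np.pi * np.arange(n) / n

    def f(theta, amp=20.0, kappa=8.0, base=4.0):
        """Input tuning curves (same illustrative form as the earlier sketches)."""
        return amp * np.exp(kappa * (np.cos(theta - prefs) - 1.0)) + base

    def fisher(theta, flat=True, sigma2=4.0, eps=1e-4):
        fp = (f(theta + eps) - f(theta - eps)) / (2 * eps)  # tuning-curve derivatives
        if flat:
            R = sigma2 * np.eye(n)               # covariance independent of theta
            return fp @ np.linalg.solve(R, fp)   # trace term vanishes
        R = np.diag(f(theta))                    # variance proportional to the mean rate
        Rp = np.diag(fp)                         # dR/dtheta
        Rinv = np.linalg.inv(R)
        return fp @ Rinv @ fp + 0.5 * np.trace(Rp @ Rinv @ Rp @ Rinv)

    # Cramer-Rao bound on the variance of any unbiased estimator: 1 / I(theta)
    crb_flat = 1.0 / fisher(np.pi / 3, flat=True)
    crb_prop = 1.0 / fisher(np.pi / 3, flat=False)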

Page 19: Summary

• the network gives a good approximation of the optimal tuning curve determined by ML

• the type of noise (flat vs. proportional) affected both the variance and the optimal tuning width