
Page 1:

Information Theory & the Efficient Coding Hypothesis

Jonathan Pillow

Mathematical Tools for Neuroscience (NEU 314), Spring 2016

lecture 19

Page 2:

Information Theory

• Entropy

• Conditional Entropy

• Mutual Information

• Data Processing Inequality

• Efficient Coding Hypothesis (Barlow 1961)

A Mathematical Theory of Communication, Claude Shannon 1948

Page 3:

Entropy

“surprise” of x, -\log p(x), averaged over p(x):

H(x) = -\sum_x p(x) \log_2 p(x)

• average “surprise” of viewing a sample from p(x)
• number of “yes/no” questions needed to identify x (on average)

for a distribution on K bins:
• maximum entropy = log K (achieved by the uniform distribution)
• minimum entropy = 0 (achieved by putting all probability in 1 bin)
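As a quick check of these bounds, here is a minimal Python/NumPy sketch (mine, not from the slides); the example distributions are made up:

import numpy as np

def entropy(p):
    """Entropy in bits of a discrete distribution p (zero bins contribute 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

K = 8
print(entropy(np.ones(K) / K))      # uniform over K bins: log2(K) = 3.0 bits (maximum)
print(entropy([1, 0, 0, 0]))        # all probability in 1 bin: 0 bits (minimum)
print(entropy([0.5, 0.5]))          # fair coin: 1 bit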

Page 4:

Entropy

Page 5:

aside: log-likelihood and entropy

How would we compute a Monte Carlo estimate of this?

model:  x \sim P(x \mid \theta)

entropy H:  H = -E_{P(x \mid \theta)}[\log P(x \mid \theta)]

for i = 1, …, N, draw samples:  x_i \sim P(x \mid \theta)

compute average:  \hat{H} = -\frac{1}{N} \sum_{i=1}^N \log P(x_i \mid \theta)  (the negative log-likelihood divided by N)

• negative log-likelihood / N = Monte Carlo estimate of the entropy!
• maximizing likelihood ⇒ minimizing entropy of P(x|θ)
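For concreteness, a minimal sketch of this recipe (my own, not from the slides), using a Gaussian model P(x|θ) so the Monte Carlo estimate can be checked against the closed-form differential entropy:

import numpy as np

rng = np.random.default_rng(0)

# model P(x | theta): a Gaussian with theta = (mu, sigma)
mu, sigma = 0.0, 2.0

# draw samples x_i ~ P(x | theta), i = 1, ..., N
N = 100_000
x = rng.normal(mu, sigma, size=N)

# average of -log P(x_i | theta)  =  negative log-likelihood / N
log_p = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
H_mc = -np.mean(log_p)

# closed-form differential entropy of a Gaussian: 0.5 * log(2 * pi * e * sigma^2) nats
H_true = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(H_mc, H_true)    # the two agree closely for large N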

Page 6:

Conditional Entropy

…identity of a random variable with distribution P(x). As a simple example, if a fair coin is flipped with p(H) = p(T) = 1/2, the entropy is

H(x) = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1 bit.

A completely biased coin with P(H) = 1 has entropy

H(x) = -1\log 1 - 0\log 0 = 0 bits,

where we have used 0 log 0 = 0 (which is true in the limit: \lim_{p\to 0} p\log p = 0).

2 Differential Entropy

Not a “real” entropy, since it can be negative and a deterministic transformation (e.g., y = ax) can change it. But we can define differential entropy as

H(x) = -\int p(x)\log p(x)\,dx

3 Conditional Entropy

Conditional entropy H(x|y) is the average entropy in x given the value of y:

H(x|y) = -\sum_y p(y)\sum_x p(x|y)\log p(x|y) = -\sum_{x,y} p(x,y)\log p(x|y)

4 Mutual Information

Mutual information is a (symmetric) function of two random variables x and y that quantifies how many bits of information x conveys about y (and vice versa). It can be written as the…

i.e., the entropy of x given some fixed value of y, averaged over p(y)
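A small numerical sketch of the two equivalent sums above (my own, not from the notes), using a made-up 2×2 joint distribution p(x, y):

import numpy as np

# made-up joint distribution p(x, y): rows index x, columns index y
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])

py = pxy.sum(axis=0)                   # marginal p(y)
px_given_y = pxy / py                  # each column is p(x | y)

# H(x|y) = -sum_y p(y) sum_x p(x|y) log p(x|y)
H1 = -np.sum(py * np.sum(px_given_y * np.log2(px_given_y), axis=0))

# equivalently, H(x|y) = -sum_{x,y} p(x,y) log p(x|y)
H2 = -np.sum(pxy * np.log2(px_given_y))

print(H1, H2)                          # both ≈ 0.72 bits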

Page 7:

Conditional Entropy


“On average, how uncertain are you about x if you know y?”

Page 8:

Mutual Information

sum of entropies minus joint entropy:  I(x;y) = H(x) + H(y) - H(x,y)

total entropy in X minus conditional entropy of X given Y:  I(x;y) = H(x) - H(x|y)

total entropy in Y minus conditional entropy of Y given X:  I(x;y) = H(y) - H(y|x)

“How much does X tell me about Y (or vice versa)?”

“How much is your uncertainty about X reduced by knowing Y?”
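These three identities are easy to verify numerically; a short sketch (not from the slides), reusing the same kind of made-up joint table as above:

import numpy as np

def H(p):
    """Entropy in bits of a (possibly joint) distribution given as an array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# made-up joint distribution p(x, y)
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

I1 = H(px) + H(py) - H(pxy)                             # H(x) + H(y) - H(x,y)
I2 = H(px) + np.sum(pxy * np.log2(pxy / py))            # H(x) - H(x|y)
I3 = H(py) + np.sum(pxy * np.log2((pxy.T / px).T))      # H(y) - H(y|x)

print(I1, I2, I3)                                       # all three ≈ 0.28 bits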

Page 9:

Venn diagram of entropy and information

Page 10:

Data Processing Inequality

Suppose X → Y → Z form a Markov chain, that is, Z depends on X only through Y: p(z | x, y) = p(z | y).

Then necessarily:  I(X; Y) ≥ I(X; Z)

• in other words, we can only lose information during processing
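A quick numerical check of this (my own construction, not from the slides): build a Markov chain x → y → z from arbitrary transition tables and confirm that I(x;z) never exceeds I(x;y):

import numpy as np

def mutual_info(pab):
    """I(a;b) in bits from a joint distribution table p(a, b)."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    nz = pab > 0
    return np.sum(pab[nz] * np.log2((pab / (pa * pb))[nz]))

rng = np.random.default_rng(1)
px   = rng.dirichlet(np.ones(4))             # arbitrary p(x)
py_x = rng.dirichlet(np.ones(3), size=4)     # rows are p(y | x)
pz_y = rng.dirichlet(np.ones(5), size=3)     # rows are p(z | y): z depends on x only through y

pxy = px[:, None] * py_x                     # p(x, y)
pxz = pxy @ pz_y                             # p(x, z) = sum_y p(x, y) p(z | y)

print(mutual_info(pxy), mutual_info(pxz))    # I(x;z) <= I(x;y), as the inequality requires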

Page 11:

Efficient Coding Hypothesis:

redundancy:  R = 1 - \frac{I(x;y)}{C},  where I(x;y) is the mutual information between stimulus and response and C is the channel capacity

• goal of nervous system: maximize information about the environment (one of the core “big ideas” in theoretical neuroscience)

Barlow 1961; Atick & Redlich 1990

Page 12:

Efficient Coding Hypothesis: Barlow 1961; Atick & Redlich 1990

redundancy:  R = 1 - \frac{I(x;y)}{C}

channel capacity C:
• upper bound on the mutual information
• determined by physical properties of the encoder

mutual information I(x;y) = H(y) - H(y|x)  (response entropy minus “noise” entropy):
• avg # of yes/no questions you can answer about x given y (“bits”)

• goal of nervous system: maximize information about environment (one of the core “big ideas” in theoretical neuroscience)

Page 13:

Barlow’s original version: Barlow 1961; Atick & Redlich 1990

redundancy:  R = 1 - \frac{I(x;y)}{C}

mutual information:  I(x;y) = H(y) - H(y|x)  (response entropy minus “noise” entropy)

if responses are noiseless, H(y|x) = 0, so I(x;y) = H(y)

Page 14:

Barlow’s original version: Barlow 1961; Atick & Redlich 1990

for a noiseless system the “noise” entropy H(y|x) = 0, so the mutual information reduces to the response entropy:  I(x;y) = H(y)

⇒ the brain should maximize response entropy:
• use the full dynamic range
• decorrelate (“reduce redundancy”; see the sketch below)

• mega impact: a huge number of theory and experimental papers have focused on decorrelation / information-maximizing codes in the brain
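As a toy illustration of the decorrelation idea (my own sketch, not from the slides): for Gaussian-distributed responses the redundancy \sum_i H(y_i) - H(y) depends only on the covariance, and a whitening transform drives it to (near) zero:

import numpy as np

rng = np.random.default_rng(0)

def gaussian_redundancy(Y):
    """sum_i H(y_i) - H(y) in nats for Gaussian data, from the sample covariance."""
    C = np.cov(Y, rowvar=False)
    return 0.5 * (np.sum(np.log(np.diag(C))) - np.linalg.slogdet(C)[1])

# correlated "pixel-like" inputs: the two dimensions are strongly correlated
A = np.array([[1.0, 0.9],
              [0.9, 1.0]])
X = rng.standard_normal((10_000, 2)) @ np.linalg.cholesky(A).T

# whitening transform: decorrelate (and equalize the variance of) the responses
evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
Y = X @ W.T

print(gaussian_redundancy(X))    # > 0: the correlated code is redundant
print(gaussian_redundancy(Y))    # ≈ 0: decorrelation removes the redundancy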

Page 15:

basic intuition

natural image: nearby pixels exhibit strong dependencies

[Figure: scatter plot of pixel i vs. pixel i+1 intensities (0 to 256) for a natural image, alongside the desired neural representation after encoding, plotted as neural response i vs. neural response i+1 (0 to 100).]

Page 16:

Example: single neuron encoding stimuli from a distribution P(x)

stimulus prior

noiseless, discrete encoding

(with constraint on range of y values)

Page 17:

Application Example: single neuron encoding stimuli from a distribution P(x)

Gaussian stimulus prior p(x); noiseless, discrete encoding (with constraint on the range of y values)

[Figure: the encoding curve is the cdf of the stimulus prior, mapping x in (−3, 3) to output level y in (0, 20); the resulting response distribution p(y) is uniform across output levels.]
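A small sketch of this histogram-equalization idea (mine, not from the slides): map each stimulus through the empirical cdf of the prior and quantize into a fixed number of output levels, which makes p(y) approximately uniform and therefore maximizes response entropy under the range constraint:

import numpy as np

rng = np.random.default_rng(0)

n_levels = 20                              # constraint: y can take only 20 output levels
x = rng.normal(0.0, 1.0, size=100_000)     # stimuli drawn from a Gaussian prior p(x)

# noiseless, discrete encoding: pass x through the empirical cdf, then quantize
u = (np.argsort(np.argsort(x)) + 0.5) / x.size    # empirical cdf value of each stimulus
y = np.floor(u * n_levels).astype(int)            # output level in 0, ..., n_levels - 1

p_y = np.bincount(y, minlength=n_levels) / y.size
print(p_y)                                               # ≈ 1/20 in every bin (uniform)
print(-np.sum(p_y * np.log2(p_y)), np.log2(n_levels))    # response entropy ≈ log2(20)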

Page 18:

Laughlin 1981: blowfly light response

[Figure: measured response data plotted against light level, overlaid on the cdf of the light-level distribution.]

• first major validation of Barlow’s theory

Page 19:

summary

• entropy

• negative log-likelihood / N

• conditional entropy

• mutual information

• data processing inequality

• efficient coding hypothesis (Barlow)
  - neurons should “maximize their dynamic range”
  - multiple neurons: marginally independent responses

• direct method for estimating mutual information from data
