Chapter 2 (PowerPoint presentation)

Page 1: Chapter 2

Chapter 2

Page 2: Chapter 2

Outline

• Linear filters

• Visual system (retina, LGN, V1)

• Spatial receptive fields

  – V1

  – LGN, retina

• Temporal receptive fields in V1

  – Direction selectivity

Page 3: Chapter 2

Linear filter model

Given s(t) and r(t), what is D?
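The model in question predicts the firing rate as a background rate plus a convolution of the stimulus with a kernel, r_est(t) = r0 + ∫ D(τ) s(t − τ) dτ. A minimal discrete-time sketch (kernel shape, time step, and background rate are illustrative choices, not values from the chapter):

```python
import numpy as np

# Discrete version of the linear filter model:
#   r_est[n] = r0 + sum_k D[k] * s[n - k] * dt
dt = 0.01                              # time step in seconds (assumed)
t = np.arange(0.0, 2.0, dt)
rng = np.random.default_rng(0)
s = rng.standard_normal(t.size)        # example stimulus s(t)

tau = np.arange(0.0, 0.3, dt)          # kernel support
D = np.exp(-tau / 0.05) * np.sin(2 * np.pi * tau / 0.1)  # example kernel D(tau)

r0 = 20.0                              # background rate in Hz (assumed)
# causal convolution; keep the first len(t) samples so r_est aligns with t
r_est = r0 + np.convolve(s, D, mode="full")[: t.size] * dt
```

Estimating D from recorded s(t) and r(t) is the reverse problem the slide poses; the white-noise approach on the following slides solves it.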

Page 4: Chapter 2
Page 5: Chapter 2

White noise stimulus
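For a white-noise stimulus, the optimal kernel is proportional to the stimulus-response cross-correlation, D(τ) ∝ ⟨r(t) s(t − τ)⟩. A sketch that generates a response from a known kernel and recovers it by reverse correlation (kernel shape and sizes are illustrative):

```python
import numpy as np

dt = 0.01
rng = np.random.default_rng(1)
n = 100_000
s = rng.standard_normal(n)                 # white-noise stimulus, unit variance

taus = np.arange(30)                       # kernel support, in time steps
D_true = np.exp(-taus * dt / 0.05)         # kernel to be recovered

# linear response r[n] = sum_k D_true[k] * s[n - k] * dt
r = np.convolve(s, D_true, mode="full")[:n] * dt

# reverse correlation: D_hat[j] = <r[n] s[n - j]> / (sigma^2 * dt), sigma^2 = 1
D_hat = np.array([np.mean(r[j:] * s[: n - j]) for j in taus]) / dt
```

With 100,000 samples the recovered kernel matches the true one to within a few percent; the estimate sharpens as the recording grows.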

Page 6: Chapter 2

Fourier transform
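The Fourier transform enters because convolution in time is multiplication in frequency, which turns the kernel estimate into a division of spectra. A small numerical check of the convolution theorem (signals are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
s = rng.standard_normal(n)                 # stimulus
D = np.exp(-np.arange(n) / 10.0)           # kernel

# circular convolution via the FFT: r = IFFT( FFT(s) * FFT(D) )
r_fft = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(D)))

# the same circular convolution computed directly in the time domain
r_direct = np.zeros(n)
for k in range(n):
    r_direct += D[k] * np.roll(s, k)       # s shifted by k, wrapping around
```

The two results agree to machine precision, which is the identity the frequency-domain estimate relies on.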

Page 7: Chapter 2

H1 neuron in visual system of blowfly

• A: Stimulus is the velocity profile

• B: Response of the H1 neuron of the fly visual system

• C: The estimate r_est(t) computed with the linear kernel D(τ) (solid line) and the actual firing rate r(t) agree when rates vary slowly

• D(τ) is constructed using a white-noise stimulus

Page 8: Chapter 2

Deviation from linearity

Page 9: Chapter 2
Page 10: Chapter 2

Early visual system: Retina

• Five types of cells:

  – Rods and cones: phototransduction of light into an electrical signal

  – Bipolar cells interact laterally through horizontal cells; local computation without action potentials

  – Retinal ganglion cells fire action potentials and are coupled by amacrine cells. Note:

    • G_1: OFF response

    • G_2: ON response

Page 11: Chapter 2

Pathway from retina via LGN to V1

• Lateral geniculate nucleus (LGN) cells receive input from Retinal ganglion cells from both eyes.

• Both LGNs represent both eyes but different parts of the world

• Neurons in retina, LGN and visual cortex have receptive fields:

– Neurons fire only in response to higher/lower illumination within receptive field

– Neural response depends (indirectly) on illumination outside receptive field

Page 12: Chapter 2

Simple and complex cells

• Cells in retina, LGN, V1 are simple or complex

• Simple cells:

  – Modeled as a linear filter

• Complex cells:

  – Show invariance to spatial position within the receptive field

  – Poorly described by a linear model

Page 13: Chapter 2

Retinotopic map

• Neighboring image points are mapped onto neighboring neurons in V1

• Visual world is centered on fixation point.

• The left/right visual field maps to the right/left V1, respectively

• Distance on the display from the fixation point (eccentricity) is measured in degrees of visual angle, obtained by dividing by the distance to the eye (small-angle approximation)
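The conversion to degrees can be made concrete (function names are mine; any consistent length unit works):

```python
import math

def eccentricity_deg(offset, viewing_distance):
    """Exact visual angle, in degrees, of a point `offset` away from the
    fixation point on a display `viewing_distance` from the eye."""
    return math.degrees(math.atan(offset / viewing_distance))

def eccentricity_deg_small_angle(offset, viewing_distance):
    """Small-angle approximation: divide by the distance (angle in radians),
    then convert to degrees -- the rule quoted on the slide."""
    return math.degrees(offset / viewing_distance)
```

At a viewing distance of about 57.3 cm, 1 cm on the display subtends roughly 1 degree, and the exact and small-angle answers agree to better than 0.01 degree.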

Page 14: Chapter 2

Retinotopic map

Page 15: Chapter 2

Retinotopic map

Page 16: Chapter 2

Visual stimuli

Page 17: Chapter 2

Nyquist Frequency
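Sampling a grating at spacing dx can only represent spatial frequencies up to the Nyquist frequency 1/(2 dx); anything higher folds back to a lower frequency. A sketch of this aliasing (units are arbitrary):

```python
import numpy as np

dx = 1.0
f_nyquist = 1.0 / (2.0 * dx)               # 0.5 cycles per sample spacing

x = np.arange(64) * dx                     # sample positions
f_high = 0.8                               # above the Nyquist frequency
f_folded = 1.0 / dx - f_high               # aliases down to 0.2

g_high = np.cos(2 * np.pi * f_high * x)    # grating above Nyquist
g_folded = np.cos(2 * np.pi * f_folded * x)  # its low-frequency alias
```

At the sample points the two gratings are identical, so the high-frequency stimulus is indistinguishable from its alias.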

Page 18: Chapter 2

Spatial receptive fields

Page 19: Chapter 2

V1 spatial receptive fields

Page 20: Chapter 2

Gabor functions
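A Gabor function is a Gaussian envelope multiplying a cosine grating, the standard model for a V1 simple-cell spatial receptive field. A sketch (parameter values are illustrative, not taken from the chapter):

```python
import numpy as np

def gabor(x, y, sigma_x=1.0, sigma_y=2.0, k=2.0, phi=0.0):
    """Gabor receptive field: Gaussian envelope times a cosine grating,
    D_s(x, y) = exp(-x^2/2sx^2 - y^2/2sy^2) cos(kx - phi) / (2 pi sx sy)."""
    envelope = np.exp(-x**2 / (2 * sigma_x**2) - y**2 / (2 * sigma_y**2))
    return envelope * np.cos(k * x - phi) / (2 * np.pi * sigma_x * sigma_y)

xs, ys = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
D = gabor(xs, ys)      # 101 x 101 map of the receptive field
```

Here sigma_x and sigma_y set the envelope size, k the preferred spatial frequency, and phi the preferred spatial phase; with phi = 0 the field is even-symmetric in x.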

Page 21: Chapter 2

Response to grating

Page 22: Chapter 2

Temporal receptive fields

• Space-time evolution of a cat V1 receptive field

• The ON/OFF boundary changes to an OFF/ON boundary over time

• Extrema locations do not change with time: separable kernel

D(x, y, τ) = D_s(x, y) D_t(τ)
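The separability claim can be illustrated numerically: a separable kernel is an outer product of a spatial and a temporal profile, i.e. a rank-1 matrix, which is why its spatial extrema sit at the same locations for every τ (profiles below are illustrative, and space is reduced to one dimension for brevity):

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 61)
tau = np.linspace(0.0, 0.3, 31)

D_s = np.exp(-x**2 / 2.0) * np.cos(3.0 * x)                 # spatial profile
D_t = np.exp(-tau / 0.05) * np.sin(2 * np.pi * tau / 0.1)   # temporal profile
D = np.outer(D_s, D_t)                                      # D(x, tau), shape (61, 31)

# rank-1 check: a separable kernel has a single nonzero singular value
singular_values = np.linalg.svd(D, compute_uv=False)
```

Each column of D is the same spatial profile rescaled by D_t(τ); when D_t changes sign, the ON/OFF boundary flips to OFF/ON, exactly as the slide describes.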

Page 23: Chapter 2

Temporal receptive fields

Page 24: Chapter 2

Space-time receptive fields

Page 25: Chapter 2

Space-time receptive fields

Page 26: Chapter 2

Space-time receptive fields

Page 27: Chapter 2

Direction selective cells

Page 28: Chapter 2

Complex cells

Page 29: Chapter 2

Example of non-separable receptive fields: LGN X cell

Page 30: Chapter 2

Example of non-separable receptive fields: LGN X cell

Page 31: Chapter 2

Comparison model and data

Page 32: Chapter 2

Constructing V1 receptive fields

• Oriented V1 spatial receptive fields can be constructed from LGN center-surround neurons
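This Hubel-and-Wiesel-style construction can be sketched directly: summing LGN-like difference-of-Gaussians fields whose centers are displaced along a line yields an oriented, V1-like field (all parameters are illustrative):

```python
import numpy as np

def center_surround(x, y, x0, y0, sigma_c=0.3, sigma_s=0.9):
    """LGN-like difference of Gaussians: narrow excitatory center minus
    wide inhibitory surround, centered at (x0, y0)."""
    r2 = (x - x0)**2 + (y - y0)**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

xs, ys = np.meshgrid(np.linspace(-3, 3, 121), np.linspace(-3, 3, 121))
# ON-center subunits displaced along the y-axis -> vertically oriented field
rf = sum(center_surround(xs, ys, 0.0, y0) for y0 in (-1.0, 0.0, 1.0))
```

The summed field has an elongated excitatory region flanked by inhibition, so it responds best to a bar or grating aligned with the row of LGN centers.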

Page 33: Chapter 2
Page 34: Chapter 2

Stochastic neural networks

[Network diagram: 28 × 28 pixel image and 10 label neurons → 500 neurons → 500 neurons → 2000 top-level neurons]

The model learns to generate combinations of labels and images.

To perform recognition we start with a neutral state of the label units and do an up-pass from the image followed by a few iterations of the top-level associative memory.

The top two layers form an associative memory whose energy landscape models the low dimensional manifolds of the digits.

The energy valleys have names

Hinton
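The up-pass and associative-memory iterations described above rest on alternating Gibbs sampling between layers of stochastic binary units. A minimal sketch for a single binary restricted Boltzmann machine with random weights (toy sizes and untrained weights, not Hinton's 500/500/2000 network):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 20, 10               # toy sizes (illustrative)
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
b_visible = np.zeros(n_visible)
b_hidden = np.zeros(n_hidden)

def gibbs_step(v):
    """One alternating Gibbs update: sample the hidden units given the
    visible units, then resample the visible units given the hiddens."""
    p_h = sigmoid(v @ W + b_hidden)
    h = (rng.random(n_hidden) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b_visible)
    v_new = (rng.random(n_visible) < p_v).astype(float)
    return v_new, h

v = (rng.random(n_visible) < 0.5).astype(float)   # random initial state
for _ in range(5):                                # a few iterations, as on the slide
    v, h = gibbs_step(v)
```

In the full model this alternation runs in the top two layers, with the label units clamped when generating samples of a particular digit class.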

Page 35: Chapter 2

Samples generated by letting the associative memory run, with one label clamped, using Gibbs sampling

Hinton

Page 36: Chapter 2

Examples of correctly recognized handwritten digits that the neural network had never seen before

Hinton

Page 37: Chapter 2

How well does it discriminate on MNIST test set with no extra information about geometric distortions?

• Generative model based on RBMs: 1.25%

• Support Vector Machine (Decoste et al.): 1.4%

• Backprop with 1000 hidden units (Platt): ~1.6%

• Backprop with 500 → 300 hidden units: ~1.6%

• K-Nearest Neighbor: ~3.3%

• See LeCun et al. 1998 for more results

• It's better than backprop and much more neurally plausible, because the neurons only need to send one kind of signal and the teacher can be another sensory input.

Hinton

Page 38: Chapter 2

Summary

• Linear filters

  – White-noise stimulus for optimal estimation

• Visual system (retina, LGN, V1)

• Visual stimuli

• V1

  – Spatial receptive fields

  – Temporal receptive fields

  – Space-time receptive fields

  – Non-separable receptive fields, direction selectivity

• LGN and retina

  – Non-separable ON-center OFF-surround cells

  – V1 direction-selective simple cells as sums of LGN cells

Page 39: Chapter 2

Exercise 2.3

• Is based on Kara, Reinagel, Reid (Neuron, 2000)

  – Simultaneous single-unit recordings of retinal ganglion cells, LGN relay cells, and simple cells from primary visual cortex

  – Spike-count variability (Fano factor) less than Poisson, doubling from RGC to LGN and from LGN to cortex

  – Data explained by a Poisson process with a refractory period

  – Figs. 1, 2, 3
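The exercise's central claim — that sub-Poisson spike-count variability is explained by a Poisson process with a refractory period — can be checked in simulation. A sketch (rate, refractory period, and counting window are my illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def spike_counts(rate_hz, t_ref, window_s, n_trials):
    """Spike counts per trial for a Poisson process with an absolute
    refractory period: draw exponential intervals and add t_ref to each."""
    counts = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / rate_hz) + t_ref
            if t > window_s:
                break
            n += 1
        counts[i] = n
    return counts

counts = spike_counts(rate_hz=100.0, t_ref=0.005, window_s=1.0, n_trials=500)
fano = counts.var() / counts.mean()   # Fano factor; = 1 for pure Poisson
```

The refractory period regularizes the spike train, so the simulated Fano factor falls well below the Poisson value of 1, in the direction the recordings show.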