
A Photon Accurate Model Of The Human Eye

Michael F. Deering

Use Graphics Theory To Simulate Vision

Motivation

• Understanding how rendering algorithms, cameras, and display devices interact with the human eye and visual perception.

• Use this to improve (or not improve) rendering algorithms, cameras, and displays.

Graphics/Vision System

[Block diagram: Image Generation → Post Production → Display → Display Photons → Eye → Neural Processing]

Concrete Software Deliverable

A computer program to:

• Simulate, photon by photon, several frames of video display into an anatomically accurate human eye retinal sampling array.

Overview

• Photon counts

• Display device pixel model

• Eye optical model

• Rotation of eye due to “drifts”

• Retinal synthesizer

• Diffraction computation

• Results: rendering video photons into eye

Photons In This Room: 4K Lumens: 10¹⁹ Photons/Sec

[Room diagram with dimensions 14', 17', and 75']

~600 photons per 1/60th sec per pixel per cone
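As a rough check on these numbers, the sketch below converts 4,000 lumens into a photon rate, assuming all of the light is at 555 nm (where 1 lumen corresponds to 1/683 W of radiant power); real projector spectra spread the power across wavelengths, so this is an order-of-magnitude estimate only.

```python
# Order-of-magnitude check: how many photons per second does a ~4000 lumen
# projector emit? Assumes monochromatic light at 555 nm (the peak of the
# photopic luminosity function), where 1 lumen corresponds to 1/683 W.
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 555e-9  # assumed wavelength, m

lumens = 4000.0
watts = lumens / 683.0               # ~5.9 W of radiant power
photon_energy = h * c / wavelength   # ~3.6e-19 J per photon
photons_per_second = watts / photon_energy

print(f"{photons_per_second:.1e} photons/sec")  # ~1.6e19, i.e. on the order of 10^19
```

Dividing that rate over the display's pixels, a 1/60 s interval, and the eye's collection geometry is what brings the per-pixel, per-cone count down to the hundreds quoted above.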

Display Pixel Model

Each pixel color sub-component has (see the sketch after this list):

• Spatial envelope (shape, including fill factor)

• Spectral envelope (color)

• Temporal envelope (time)
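A minimal sketch of what such a three-envelope sub-pixel model could look like in code; the class name, the sampler interface, and the CRT-like example parameters are illustrative assumptions, not the representations used in the actual simulator.

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubPixelModel:
    sample_position: Callable[[], tuple]    # spatial envelope: (dx, dy) within the pixel
    sample_wavelength: Callable[[], float]  # spectral envelope: wavelength in nm
    sample_time: Callable[[], float]        # temporal envelope: time offset within the frame

    def emit_photon(self):
        """Draw one photon: sub-pixel position, wavelength, and sub-frame time."""
        return self.sample_position(), self.sample_wavelength(), self.sample_time()

# Example: a crude CRT-like red sub-pixel -- Gaussian spot, narrow spectral line,
# exponential phosphor decay after the beam passes (all parameters invented).
red_crt = SubPixelModel(
    sample_position=lambda: (random.gauss(0.0, 0.2), random.gauss(0.0, 0.2)),
    sample_wavelength=lambda: random.gauss(611.0, 2.0),
    sample_time=lambda: random.expovariate(1.0 / 0.002),  # ~2 ms decay
)

pos, wavelength, t = red_crt.emit_photon()
```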

Trinitron™ CRT Pixel

Direct View LCD Pixel

3 Chip DLP™ Pixel

1 Chip DLP™ Pixel

1 Chip DLP™ In This Room

Optical Model Of The Eye: Schematic Eyes

• Historically composed of 6 quadric surfaces.

• Real human eyes are quite a bit more complex.

• My model is based on: “Off-axis aberrations of a wide-angle schematic eye model”, Escudero-Sanz & Navarro, 1999.

Eye Model

Rotation Of The Eye Due To “Drift”

• When seeing, the eye is almost always slowly drifting at 6 to 30 minutes of arc per second relative to the point of fixation (interpolation sketch after this list).

• The induced motion blur is important for perception, but rarely modeled.

• (The eye also has tremor, micro-saccades, saccades, pursuit motions, etc.)
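A minimal sketch of how a constant-velocity drift could be interpolated at a photon's sub-frame time; the rate and direction values below are placeholders within the 6 to 30 arcmin/sec range quoted above, not measured eye-movement data.

```python
import math

ARCMIN_TO_RAD = math.pi / (180.0 * 60.0)

def eye_drift_rotation(t, rate_arcmin_per_sec=20.0, direction_deg=35.0):
    """Return the (horizontal, vertical) eye rotation in radians at time t seconds,
    assuming constant-velocity drift away from the fixation point at t = 0."""
    angle = rate_arcmin_per_sec * t * ARCMIN_TO_RAD
    d = math.radians(direction_deg)
    return angle * math.cos(d), angle * math.sin(d)

# A photon arriving 10 ms into the frame sees the eye rotated by ~0.2 arcmin.
dx, dy = eye_drift_rotation(0.010)
```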

Why The Eye Sampling Pattern Matters

Roorda And Williams Image

Synthetic Retina Generation

• Some existing efforts take real retinal images as representative patches then flip and repeat. Others just perturb a triangular lattice.

• I want all 5 million cones:

– New computer model to generate retinas to order (not synthesizing rods yet).

Retina Generation Algorithm

For more details, attend the implementation sketch

“A Human Eye Cone Retinal Synthesizer”

Wednesday 8/3, 10:30 am session, Room 515B (~11:25 am)

Growth Sequence Movie

Growth Movie Zoom

Retinal Zoom Out Movie

3D Fly By Movie

Roorda Blood Vessel

Roorda Versus Synthetic

The Human Eye Versus Simple Optics Theory

All eye optical axes unaligned:

• Fovea is 5 degrees off axis

• Pupil is offset ~0.5 mm

• Lens is tilted (no agreement on amount)

• Rotational center: 13 mm back, 0.5 mm nasal

Eye image surface is spherical

Blur And Diffraction

[Side-by-side comparison: just blur vs. blur and diffraction]

Generating a Diffracted Point Spread Function (DPSF)

• Trace the wavefront of a point source as 16 million rays.

• Repeat for 45 spectral wavelengths.

• Repeat for every retinal patch swept out by one degree of arc in both directions (see the loop skeleton after this list).
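The skeleton below shows only the bookkeeping implied by the three bullets above: tabulating one DPSF per retinal patch and wavelength for later interpolation. The wavefront trace and diffraction integral themselves are stubbed out, and the spectral range, field size, and function names are assumptions rather than the paper's code.

```python
import numpy as np

N_RAYS = 16_000_000                          # rays traced per point-source wavefront
WAVELENGTHS_NM = np.linspace(390, 830, 45)   # 45 wavelengths; the range is assumed
PATCH_DEG = 1.0                              # one DPSF per degree of retinal field

def compute_dpsf(patch_x_deg, patch_y_deg, wavelength_nm, n_rays=N_RAYS):
    """Placeholder for the real computation: trace n_rays of a monochromatic
    point-source wavefront through the schematic-eye optics and accumulate the
    diffracted point spread function over the retinal patch."""
    raise NotImplementedError

def build_dpsf_table(field_radius_deg=10.0):
    """Tabulate DPSFs over retinal position and wavelength for later interpolation
    by the per-photon simulation loop."""
    table = {}
    patch_centers = np.arange(-field_radius_deg, field_radius_deg + PATCH_DEG, PATCH_DEG)
    for px in patch_centers:
        for py in patch_centers:
            for wl in WAVELENGTHS_NM:
                table[(float(px), float(py), float(wl))] = compute_dpsf(px, py, wl)
    return table
```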

Diffracted Point Spread Functions Movie

Putting It All Together

• Generate synthetic retina.

• Compute diffracted point spread functions by tracing wavefronts through optical model.

• Simulate, photon by photon, a video sequence into eye cones.

• Display cone photon counts as colored images.

Simulating Display And Eye

• For each frame of the video sequence:

• For each pixel in each frame:

• For each color primary in each pixel:

• From the color primary intensity, compute the number of photons that enter the eye

• For each simulated photon, generate a sub-pixel position, sub-frame time, and wavelength

Simulating Display And Eye

• From the sub-frame time of the photon, interpolate the eye rotation due to “drift”.

• From the position and wavelength of the photon, interpolate the diffracted point spread function.

• Interpolate and compute the effect of pre-receptoral filters: culls ~80% of photons.

Simulating Display And Eye

• Materialize the photon at a point within the DPSF parameterized by a random value.

• Compute cone hit, cull photons that miss.

• Apply Stiles-Crawford Effect (I), cull photons.

• Compute cone photopigment absorptance; cull photons not absorbed.

• Increment the cone's photon count by 1 (the full per-photon loop is sketched below).
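Pulling the three slides above into one place, here is a condensed sketch of the per-photon loop. The `model` object stands in for the display, optics, retina, and filter components; every method name on it is a placeholder for illustration, not the naming used in the actual program.

```python
import random

def simulate(video_frames, model, cone_counts):
    """Accumulate photon absorptions per cone over a short video sequence."""
    for frame in video_frames:
        for pixel in frame.pixels:
            for primary in pixel.primaries:
                # From the primary's intensity, compute how many photons enter the eye.
                for _ in range(model.photon_count(primary)):
                    # Sub-pixel position, sub-frame time, and wavelength for this photon.
                    pos, t, wavelength = model.sample_photon(primary)

                    # Eye rotation due to "drift", interpolated at the photon's time.
                    rotation = model.drift_rotation(t)

                    # Diffracted point spread function, interpolated by position and wavelength.
                    dpsf = model.lookup_dpsf(pos, wavelength, rotation)

                    # Pre-receptoral filters cull roughly 80% of photons.
                    if random.random() > model.prereceptoral_transmittance(wavelength):
                        continue

                    # Materialize the photon at a point within the DPSF,
                    # parameterized by random values.
                    retinal_point = dpsf.sample(random.random(), random.random())

                    # Cone hit test; photons that miss every cone are culled.
                    cone = model.cone_at(retinal_point)
                    if cone is None:
                        continue

                    # Stiles-Crawford effect (I), then photopigment absorptance.
                    if random.random() > model.stiles_crawford(cone, retinal_point):
                        continue
                    if random.random() > model.absorptance(cone, wavelength):
                        continue

                    cone_counts[cone] += 1
```

In this sketch each cull is a random test against a transmittance or absorptance probability, matching the photon-by-photon style described in the slides rather than integrating average intensities.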

30x30 Pixel Face Input

Retinal Image Results

Lumen Ramp Movie

30x30 Pixel Movie

Result Movie

How To Test Model?

• Test it the same way we test real eyes.

[Eye chart figures at acuity levels 20/27, 20/20, 20/15, 20/12, 20/9]

Sine Frequency Ramp

[Figure labels: 20 cycles, 40 cycles, 80 cycles]

Maximum Drift Movie

Maximum Track Movie

Next Steps

• Continue validation of model and adding features.

• Simulate deeper into the visual system:

– Retinal receptive fields

– Lateral geniculate nucleus

– Simple and complex cells of the visual cortex.

Acknowledgements

• Michael Wahrman for the RenderMan™ rendering of the cone data.

• Julian Gómez and the anonymous SIGGRAPH reviewers for their comments on the paper.
