Major Functional Areas


Page 1: Major Functional Areas

Major Functional Areas

Primary motor: voluntary movement
Primary somatosensory: tactile, pain, pressure, position, temperature, movement
Motor association: coordination of complex movements
Sensory association: processing of multisensory information
Prefrontal: planning, emotion, judgement
Speech center (Broca’s area): speech production and articulation
Wernicke’s area: comprehension of speech
Auditory: hearing
Auditory association: complex auditory processing
Visual: low-level vision
Visual association: higher-level vision

Page 2: Major Functional Areas
Page 3: Major Functional Areas

“Where” and “What” Visual Pathways

Dorsal stream (to posterior parietal): object localization
Ventral stream (to infero-temporal): object identification

Rybak et al, 1998


Page 4: Major Functional Areas

Riesenhuber & Poggio, Nat Neurosci, 1999

The next step…

Develop scene understanding/navigation/orienting mechanisms that can exploit the (very noisy) “rich scanpaths” (i.e., with location and sometimes identification) generated by the model.

Page 5: Major Functional Areas

Extract the “minimal subscene” (i.e., small number of objects and actions) that is relevant to present behavior.

Achieve a representation for it that is robust and stable against noise, world motion, and egomotion.
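To make these terms concrete, here is a minimal sketch, with invented names, of how a noisy “rich scanpath” and a “minimal subscene” might be represented as data structures (an illustration only, not the system’s actual code):

    # Hypothetical data structures for a "rich scanpath" (fixations with a
    # location and an optional, often noisy, identification) and a "minimal
    # subscene" (the few task-relevant objects and actions).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Fixation:
        x: float                      # image coordinates of the attended location
        y: float
        label: Optional[str] = None   # identification, when available (often missing or wrong)
        confidence: float = 0.0       # how much the label can be trusted

    @dataclass
    class RichScanpath:
        fixations: List[Fixation] = field(default_factory=list)

    @dataclass
    class MinimalSubscene:
        objects: List[str] = field(default_factory=list)   # small set of behaviorally relevant objects
        actions: List[str] = field(default_factory=list)   # small set of behaviorally relevant actions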

Page 6: Major Functional Areas

How plausible is that?

3-5 eye movements/sec; over roughly 14 waking hours, that's about 150,000-250,000/day

Only the central 2° of our retinas (the foveas) carry high-resolution information

Attention helps us bring our foveas onto relevant objects.

Page 7: Major Functional Areas
Page 8: Major Functional Areas

Eye Movements

1) Free examination
2) Estimate material circumstances of the family
3) Give the ages of the people
4) Surmise what the family has been doing before the arrival of the “unexpected visitor”
5) Remember the clothes worn by the people
6) Remember the positions of the people and objects
7) Estimate how long the “unexpected visitor” has been away from the family

Yarbus, 1967

Page 9: Major Functional Areas

Extended Scene Perception

• Attention-based analysis: scan the scene with attention, accumulating evidence from detailed local analysis at each attended location (see the sketch after this list).

• Main issues:
- what is the internal representation?
- how detailed is memory?
- do we really have a detailed internal representation at all?
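A minimal sketch of such a scan, assuming placeholder callables for the bottom-up saliency model and for the detailed local analysis (illustrative names only, not a specific published implementation):

    import numpy as np

    def scan_scene(image, compute_saliency, analyze_patch, n_fixations=10, patch=32):
        # compute_saliency and analyze_patch are assumed placeholder callables
        saliency = compute_saliency(image)       # 2D bottom-up saliency map
        evidence = []                            # accumulated (location, local analysis) pairs
        half = patch // 2
        for _ in range(n_fixations):
            y, x = np.unravel_index(np.argmax(saliency), saliency.shape)   # next attended location
            roi = image[max(0, y - half):y + half, max(0, x - half):x + half]
            evidence.append(((x, y), analyze_patch(roi)))                  # detailed local analysis
            saliency[max(0, y - half):y + half, max(0, x - half):x + half] = 0   # inhibition of return
        return evidence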

Page 10: Major Functional Areas

Rybak et al, 1998

Page 11: Major Functional Areas

Algorithm

- At each fixation, extract central edge orientation, as well as a number of “context” edges;

- Transform those low-level features into more invariant “second order” features, represented in a reference frame attached to the central edge;

- Learning: manually select fixation points; store the sequence of second-order features found at each fixation into the “what” memory; also store the vector to the next fixation, based on context points and expressed in the second-order reference frame;
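A loose sketch of this learning phase, with the feature-extraction steps abstracted into assumed placeholder callables (this is not the authors' code):

    def learn_scanpath(image, fixations, extract_edges, to_frame):
        # extract_edges(image, x, y) -> (central edge, context edges); to_frame(central, item)
        # re-expresses an item in the invariant frame of the central edge. Both are placeholders.
        what_memory, where_memory = [], []
        for i, (x, y) in enumerate(fixations):                  # fixation points chosen by hand
            central, context = extract_edges(image, x, y)
            what_memory.append(to_frame(central, context))      # second-order features at this fixation
            if i + 1 < len(fixations):
                nx, ny = fixations[i + 1]
                where_memory.append(to_frame(central, (nx - x, ny - y)))   # shift to the next fixation
        return what_memory, where_memory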

Page 12: Major Functional Areas

Algorithm

• As a result, the sequence of retinal images is stored in the “what” memory, and the corresponding sequence of attentional shifts in the “where” memory.

Page 13: Major Functional Areas

Algorithm

- Search mode: look for an image patch that matches one of the patches stored in the “what” memory;

- Recognition mode: reproduce scanpath stored in memory and determine whether we have a match.
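The two modes could then be sketched as follows, reusing the hypothetical “what”/“where” memories from the learning sketch; again, all helpers are assumed placeholders rather than the published implementation:

    def search_mode(image, candidates, what_memory, featurize, matches):
        # find an image location whose second-order features match a stored "what" entry
        for (x, y) in candidates:
            feats = featurize(image, x, y)
            for idx, stored in enumerate(what_memory):
                if matches(feats, stored):
                    return (x, y), idx              # entry point for recognition mode
        return None, None

    def recognition_mode(image, start, start_idx, what_memory, where_memory,
                         featurize, matches, decode_shift):
        # replay the stored scanpath from the matched entry and check each fixation
        x, y = start
        for k in range(start_idx, len(what_memory)):
            feats = featurize(image, x, y)
            if not matches(feats, what_memory[k]):
                return False                        # a fixation failed to match: no recognition
            if k < len(where_memory):
                dx, dy = decode_shift(feats, where_memory[k])   # stored shift, back in image coordinates
                x, y = x + dx, y + dy               # next fixation predicted by the "where" memory
        return True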

Page 14: Major Functional Areas

Robust to variations in scale, rotation, and illumination, but not 3D pose.

Page 15: Major Functional Areas

Schill et al, JEI, 2001

Page 16: Major Functional Areas

The World as an Outside Memory

Kevin O’Regan, early 90s:

why build a detailed internal representation of the world?

• too complex…
• not enough memory…

… and useless?

The world is the memory. Attention and the eyes are a look-up tool.

Page 17: Major Functional Areas

The “Attention Hypothesis”

No “integrative buffer”

Early processing extracts information up to “proto-object” complexity in a massively parallel manner

Attention is necessary to bind the different proto-objects into complete objects, as well as to bind object and location

Once attention leaves an object, the binding “dissolves.” Not a problem, it can be formed again whenever needed, by shifting attention back to the object.

Only a rather sketchy “virtual representation” is kept in memory, and attention/eye movements are used to gather details as needed

Rensink, 2000
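As a toy illustration only (invented names, not Rensink's implementation): proto-objects are available in parallel, a bound object exists only while attended, and only a coarse trace persists afterwards.

    def parallel_proto_objects(regions):
        # stand-in for early, massively parallel processing up to proto-object complexity
        return {loc: feats for loc, feats in regions.items()}

    def attend(proto_objects, virtual_rep, location):
        bound = {"location": location, "features": proto_objects[location]}   # bind features and location
        virtual_rep[location] = "examined"   # only a sketchy trace is kept in the virtual representation
        return bound                         # the binding "dissolves" once attention leaves; re-attend to rebuild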

Page 18: Major Functional Areas
Page 19: Major Functional Areas
Page 20: Major Functional Areas
Page 21: Major Functional Areas
Page 22: Major Functional Areas
Page 23: Major Functional Areas

Outlook on human vision

Unlikely that we perceive scenes by building a progressive buffer and accumulating detailed evidence into it.

- requires too many resources
- too complex to use

Rather, we may only have an illusion of detailed representation:
- use eyes/attention to get the details as needed
- the world as an outside memory

In addition to attention-based scene analysis, we are able to very rapidly extract the gist & layout of a scene – much faster than we can shift attention around.

This gist/layout must be constructed by fairly simple processes that operate in parallel. It can then be used to prime memory and attention.
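One simple way to realize such a fast, parallel gist computation is to pool coarse statistics of a few feature maps over a fixed grid; the sketch below assumes that design and is not a specific published model:

    import numpy as np

    def gist_vector(feature_maps, grid=4):
        # feature_maps: list of 2D arrays (e.g., intensity, color, orientation channels)
        chunks = []
        for fmap in feature_maps:
            h, w = fmap.shape
            cells = fmap[:h - h % grid, :w - w % grid].reshape(grid, h // grid, grid, w // grid)
            chunks.append(cells.mean(axis=(1, 3)).ravel())   # mean of each grid cell, computed in parallel
        return np.concatenate(chunks)                        # short vector summarizing scene gist/layout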

Page 24: Major Functional Areas

Goal-directed scene understanding

• Goal: develop a vision/language-enabled AI system, architected after the primate brain

• Test: ask the system a question about a video clip that it is watching

e.g., “Who is doing what to whom?”

• Test: implement the system on a mobile robot and give it some instructions

e.g., “Go to the library”

Page 25: Major Functional Areas

Applications include…

- Autonomous vehicles / helper robots

- Warning/helper systems on board human-operated machinery and vehicles

- Human/computer interfaces

- Vision-based aids for the visually impaired

Page 26: Major Functional Areas

Example

• Question: “who is doing what to whom?”

• Answer: “Eric passes, turns around and passes again”

Page 27: Major Functional Areas

General architecture

Page 28: Major Functional Areas

Example

• Question: “who is doing what to whom?”

• Answer: “Eric passes, turns around and passes again”

Page 29: Major Functional Areas

Outlook

• Neuromorphic vision algorithms provide a robust front-end for extended scene analysis

• To be useful, the analysis needs to depend strongly on the task and on behavioral priorities

• Thus the challenge is to develop algorithms that can extract the currently relevant “minimal subscene” from incoming rich scanpaths

• Such algorithms will combine fast parallel computation of scene gist/layout with slower attentional scanning of scene details

Page 30: Major Functional Areas

“Beobot”

Page 31: Major Functional Areas
Page 32: Major Functional Areas
Page 33: Major Functional Areas
Page 34: Major Functional Areas

Eric Pichon
Nathan Mundhenk

Daesu Chung
April Tsui

Tasha Drew

Jen Ng

Ruchi Modi

Yoohee Shin

Reid Hirata

Tony Ventrice

Page 35: Major Functional Areas
Page 36: Major Functional Areas
Page 37: Major Functional Areas
Page 38: Major Functional Areas