
Large-scale neural models for cognition

Chris Eliasmith
Centre for Theoretical Neuroscience

University of Waterloo

Three roads to cognition (on Brainstorm)

• Symbolicism

• Connectionism

• Dynamicism

Aim: unify all three.

Spaun

• Semantic Pointer Architecture Unified Network (Spaun)

• 8 different percept/cognitive/motor tasks

• 2.5 million neurons

• 8 billion connections

• Uses the Neural Engineering Framework (NEF), implemented in Nengo

• Does 8 tasks, including basic perceptual tasks…

And more cognitive tasks...

With no changes to Spaun between tasks...

(The 8 tasks: image recognition, copy drawing, reinforcement learning, counting, serial working memory, question answering, rapid variable creation (RVC), and fluid reasoning on Raven's Progressive Matrices (RPM))

NEF connection weights

Nengo 2.0
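To make the NEF-weights/Nengo step concrete, here is a minimal sketch in Nengo 2.0's Python API (not from the talk; the population sizes and the squaring function are arbitrary choices for illustration). The key NEF idea is that Nengo solves for the connection weights so that one spiking population approximates a chosen function of the value another population represents:

```python
import nengo
import numpy as np

model = nengo.Network(label="NEF demo")
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # time-varying scalar input
    a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking population encoding x
    b = nengo.Ensemble(n_neurons=100, dimensions=1)     # population representing x**2
    nengo.Connection(stim, a)
    # Nengo solves (least squares) for decoders/weights approximating the function
    nengo.Connection(a, b, function=lambda x: x ** 2)
    probe = nengo.Probe(b, synapse=0.01)                # filtered readout of b

with nengo.Simulator(model) as sim:
    sim.run(1.0)  # one second of simulated spiking
```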

Spaun fly-through

Semantic Pointer Architecture

• The SPA uses NEF building blocks for cognitive models

• Four elements described:

• Semantics

• Syntax

• Control

• Learning & memory

Biological Cognition

SPA elements

• Communication protocol:

• Semantic Pointers

• Specific functional claims:

• Motor/perception hierarchies

• Representing structure

• Action selection (basal ganglia, BG)

SPA: Semantic Pointers

• E.g., the pointer would be the activity of the top level of a standard hierarchical visual model for object recognition

• This pointer can then support ‘symbol’ manipulation

• It can also be used to reactivate a full visual representation

Serre et al., 2007 PNAS
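A toy sketch of this compression/reactivation idea, using a linear (PCA-style) compressor as a stand-in for a real visual hierarchy; the data, dimensions, and sizes below are placeholders, not the model from the talk:

```python
import numpy as np

# Hypothetical linear "visual hierarchy": compress a 784-D "image" down to a
# 50-D code (the semantic pointer), then dereference the pointer to
# approximately reactivate the full representation.
rng = np.random.default_rng(0)
images = rng.normal(size=(1000, 784))        # placeholder image data

mean = images.mean(axis=0)
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
encode = vt[:50]                              # 784 -> 50 compression

pointer = encode @ (images[0] - mean)         # top-of-hierarchy activity
reconstruction = encode.T @ pointer + mean    # dereference: 50 -> 784
```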

Surface/deep semantics

• Applied to numbers: a) neuron tuning; b) generic SPs; c) input; d) reconstruction; e) surface semantics

[Figure panels a–e, comparing data and model; credit: Charlie Tang]

Semantic Pointers

• Semantic pointers are compressed, content-based ‘addresses’ to information in association cortices

• ‘Pointer’ because they are used to recall ‘deep’ semantic information (content-based pointer)

• ‘Semantic’ because they themselves define a ‘surface’ semantic space

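A toy sketch of the "surface semantics" claim: semantic pointers as high-dimensional unit vectors whose dot products define a similarity space. All names and numbers here are illustrative, not Spaun's actual vocabulary:

```python
import numpy as np

D = 512
rng = np.random.default_rng(1)

def unit(v):
    return v / np.linalg.norm(v)

cat = unit(rng.normal(size=D))
car = unit(rng.normal(size=D))
dog = unit(unit(rng.normal(size=D)) + 0.6 * cat)  # built to resemble "cat"

print(round(float(dog @ cat), 2))  # ~0.5: related pointers are similar
print(round(float(dog @ car), 2))  # ~0.0: random pointers are nearly orthogonal
```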

Motor compression

• SPs can be used to drive the motor control hierarchy as a dereferencing operation

[Figure: a cube (8 vertices, 6 faces) and its dual octahedron (8 faces, 6 vertices); panels a) and b) show motor control (representation → world) and perception (world → representation) as dual hierarchies, with the dominant information flow marked.]

Cognitive compression

• Rule: “If vowel, then even”

• Classically: implies(vowel, even)

• What is the antecedent?

Examples of structure

• Rules: implies(vowel, even)

• Concepts: dog

• Lists: 5,3,4,0
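A sketch of how such structure can be compressed into a single pointer with circular-convolution binding (HRR-style, as in the SPA). The rule implies(vowel, even) becomes ANTECEDENT⊛VOWEL + CONSEQUENT⊛EVEN; the vector names and dimension are illustrative:

```python
import numpy as np

D = 512
rng = np.random.default_rng(2)
vocab = {name: rng.normal(size=D) / np.sqrt(D)
         for name in ("ANTECEDENT", "CONSEQUENT", "VOWEL", "EVEN")}

def bind(a, b):
    # circular convolution via FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inv(a):
    # approximate inverse for unbinding: reverse all elements but the first
    return np.concatenate((a[:1], a[:0:-1]))

rule = (bind(vocab["ANTECEDENT"], vocab["VOWEL"])
        + bind(vocab["CONSEQUENT"], vocab["EVEN"]))

# "What is the antecedent?": unbind, then clean up against the vocabulary.
guess = bind(rule, inv(vocab["ANTECEDENT"]))
print(max(vocab, key=lambda n: float(guess @ vocab[n])))  # VOWEL
```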

Action selection

• Mb compares the cortical state with known SPs

• Mc maps the selected action to cortical control states
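A hedged sketch of this loop using the legacy nengo.spa interface (the module names, dimensions, and pointer names are illustrative, not Spaun's actual rule set): the basal ganglia computes dot-product utilities against known SPs (the Mb comparison), and the thalamus routes the winning action's effect back to cortex (the Mc mapping):

```python
import nengo
from nengo import spa

model = spa.SPA(label="action selection sketch")
with model:
    model.state = spa.State(dimensions=64)   # cortical state holding an SP
    model.motor = spa.State(dimensions=64)   # cortical control/motor state
    actions = spa.Actions(
        "dot(state, SEE_ONE) --> motor = WRITE_THREE",
        "dot(state, SEE_THREE) --> motor = WRITE_NINE",
    )
    model.bg = spa.BasalGanglia(actions)     # Mb: utilities from SP comparisons
    model.thalamus = spa.Thalamus(model.bg)  # Mc: routes winner to cortex
    model.input = spa.Input(state="SEE_ONE") # fixed percept for the demo
```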


Putting it together: Serial working memory

How does it work?

• Perception: compress (visual hierarchy) → map to a semantic pointer

• Cognition: compress items into a single memory trace → recurrence maintains it → action selection

• Action: decompress → map → decompress (down the motor hierarchy)
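A sketch of the serial-working-memory core of this pipeline (a minimal sketch, reusing the HRR binding above; digit and position vectors are illustrative): each digit pointer is bound to a position pointer and summed into one compressed trace, which recurrence would maintain, and recall unbinds each position in turn:

```python
import numpy as np

D = 512
rng = np.random.default_rng(3)

def sp():
    return rng.normal(size=D) / np.sqrt(D)

def bind(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inv(a):
    return np.concatenate((a[:1], a[:0:-1]))

digits = {d: sp() for d in "5340"}
positions = [sp() for _ in range(4)]

memory = np.zeros(D)                        # one trace holds the whole list
for pos, d in zip(positions, "5340"):
    memory = memory + bind(pos, digits[d])  # recurrence would maintain this

recalled = [max(digits, key=lambda d: float(bind(memory, inv(p)) @ digits[d]))
            for p in positions]
print("".join(recalled))                    # "5340" with high probability
```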

Extending Spaun

• Spaun: largely fixed rules in the BG

• Built as an expert at a fixed set of tasks

• Limited flexibility for new tasks without model changes

• Two ways to learn new tasks:

• Trial and error

• Explicit instruction

Explicit Instruction

• Flexible perception/action mappings

• Very fast “learning” (to support slow learning)

• Applications:

• Service robots

• Team coordination

Task 1

Rule Manipulation

• Encoding rules:

• If vision=1 then write 3

• If vision=3 then write 9…

Rule Manipulation

• Decoding:

• vision = one

• Action to perform: write 3
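A sketch of this encoding/decoding step (continuing the illustrative HRR machinery above; the vector names are hypothetical): both rules are packed into one pointer by binding percepts to responses, and the action to perform is recovered by unbinding the current percept:

```python
import numpy as np

D = 512
rng = np.random.default_rng(4)

def sp():
    return rng.normal(size=D) / np.sqrt(D)

def bind(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inv(a):
    return np.concatenate((a[:1], a[:0:-1]))

ONE, THREE, NINE = sp(), sp(), sp()
rules = bind(ONE, THREE) + bind(THREE, NINE)  # if 1 write 3; if 3 write 9

vision = ONE                                  # current percept: "one"
action = bind(rules, inv(vision))             # decode: what should I write?
for name, v in (("ONE", ONE), ("THREE", THREE), ("NINE", NINE)):
    print(name, round(float(action @ v), 2))  # THREE scores highest
```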

Task 2

Rule manipulation

• Encoding/decoding similar:

• Result is loaded into an “executive memory” that keeps track of the task being performed.

• Updates cortical control state instead of simple perception/action map.
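A toy sketch of such an "executive memory" (hypothetical values; in the NEF this role is played by a recurrently connected ensemble): an integrator loads a decoded task pointer and holds it after the instruction disappears, so later steps can consult which task is in force:

```python
import numpy as np

D = 512
rng = np.random.default_rng(5)
TASK = rng.normal(size=D) / np.sqrt(D)

exec_mem = np.zeros(D)
for t in range(100):
    load = TASK if t < 10 else np.zeros(D)  # instruction visible only briefly
    exec_mem = exec_mem + 0.1 * load        # integration: the state persists
print(round(float(exec_mem @ TASK), 2))     # ~1.0 long after the input ends
```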

Next steps

• Sets of instructions (multiple steps)

• Integrating instruction following into basal ganglia

• Instruct it to perform an entirely novel task

• Parsing instructions from visual stream

Other SPA models

• Unifying concepts (Blouw et al., 2016)

• Human-scale conceptual structures (Crawford et al., 2015)

• Instruction following parsing (Choo et al., 2015)

• Intelligence test model (Rasmussen & Eliasmith, 2014)

• Speech perception and generation (Bekolay, 2016)

• Hierarchical reinforcement learning (Rasmussen, 2014)

• N-back task (Gosmann & Eliasmith, 2015)

• Language parsing (Blouw & Eliasmith, 2015)

• Language sequencing (Kroger et al., 2016)

SPA Unifies

• Perception (Connectionism)

• Statistical categorization, SPs

• Cognition (Symbolicism)

• Working with structure, SPs

• Action (Dynamicism)

• Brain/body dynamics, SPs & control

Further information

Research, papers: http://compneuro.uwaterloo.ca

Nengo, Tutorials, Spaun videos: http://www.nengo.ca

CNRG lab: Terry Stewart, Eric Hunsberger, Brent Komer, Aaron Voelker, Xuan Choo, Sean Aubin, Sugandha Sharma, Mariah Martin-Shein, Peter Blouw, Stacy Gaikovaia, Jan Gosmann, Ivana Kajic, Peter Duggins
