
Page 1: Notes on teaching ACT-R modeling

Notes on teaching ACT-R modeling

Bill Kennedy, George Mason University

September 2011

Page 2: Notes on teaching ACT-R modeling

Why Model?

[Diagram: the modeling cycle — Abstract the Real world into a Model, Run the model to get a Prediction, Experiment on the real world to get Data, then Interpret and Modify]

From Marvin L. Bittinger (2004) Calculus and Its Applications, 8th Ed.

Page 3: Notes on teaching ACT-R modeling

Why Model?
1. Explain (very distinct from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialogue
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple)

Joshua M. Epstein (2008) JASSS

Page 4: Notes on teaching ACT-R modeling

Cognitive Science

• An interdisciplinary field including parts of Anthropology, Artificial Intelligence, Education, Linguistics, Neuroscience, Philosophy, and Psychology

• Goal: understanding the nature of the human mind

4

Page 5: Notes on teaching ACT-R modeling

Goals of AI & Cog Sci

Artificial Intelligence: “scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines” (from AAAI website): demonstrated by functionality

Cognitive Science: understanding the nature of the human mind, as demonstrated by models that match human behavior

5

Page 6: Notes on teaching ACT-R modeling

6

What is it?

[Diagram: Input → Process → Output, enclosed in a System]

Page 7: Notes on teaching ACT-R modeling

System

7

What is it?

Input → Process → Output

Page 8: Notes on teaching ACT-R modeling

8

What is it?

Input → Process → Output

Perception Cognition Action

Page 9: Notes on teaching ACT-R modeling

9

What is it?

Input → Process → Output

Perception Cognition Action

Page 10: Notes on teaching ACT-R modeling

Components

• An architecture is the fixed* part of a system that implements the mind's functions

• Associated with an architecture, a model is the variable* part of the mind's processing

• A system's knowledge representation is on the edge between architectures and models

* with respect to a time scale (from Newell’s 1990 UTC)

10

Page 11: Notes on teaching ACT-R modeling

ACT-R Architecture

11

[Diagram: Perception → { Cognition } → Actuation]

Page 12: Notes on teaching ACT-R modeling

12

ACT-R Architecture Overview

• Procedural core
• Buffers
• Modules
• 2 types of knowledge representation:
  – declarative (facts, "chunks")
  – procedural (deductions, "productions")

Page 13: Notes on teaching ACT-R modeling

13

Productions

• Procedural knowledge as if-then statements

• Basic process is: match, select, fire

• Many may match current status

• Only 1 selected to fire

Page 14: Notes on teaching ACT-R modeling

14

Productions

• Productions use buffers in IF & THEN parts

• IF part checks buffer contents or status

• THEN part changes buffer values or requests buffer/module action

Page 15: Notes on teaching ACT-R modeling

15

Productions

• Useful to translate English to ACT-R

• eg: IF the goal is to count to y AND currently at x, AND x<>y, THEN remember what comes after x.

Page 16: Notes on teaching ACT-R modeling

16

Production Design

• eg 1: IF the goal is to count to y AND currently at x, AND x<>y, THEN remember what comes after x.

• but:
  – this production will always match and fire...
  – another production will deal with the remembered fact
  – it can work with the addition of a "state" variable

Page 17: Notes on teaching ACT-R modeling

17

Production Design

• IF the goal is to count to y AND currently at x, AND x<>y, THEN remember what comes after x.

(p rule-getnext
   =goal>
      isa count
      to =y
      current =x
    - current =y
    - state recalling-next
==>
   +retrieval>
      isa next-fact
      current =x
   =goal>
      state recalling-next)

count chunk type: to <n>, current <m>, state <w>

Page 18: Notes on teaching ACT-R modeling

18

Production Design 2

• eg 1: IF the goal is to count to y AND currently at x, AND x<>y, THEN recall what comes after x.

• eg 2: IF the goal is to count with current = x AND “to” is not x, THEN recall what comes after x

• eg 3: IF the goal is to count with current is x AND “to” is not x, and we remembered y comes after x, THEN update current to y and recall what comes after y

Page 19: Notes on teaching ACT-R modeling

19

Production Design (core)

(P increment
   =goal>
      ISA count-from
      count =num1
    - end =num1
   =retrieval>
      ISA count-order
      first =num1
      second =num2
==>
   =goal>
      count =num2
   +retrieval>
      ISA count-order
      first =num2
   !output! (=num1))

count-from chunk type: end <n> count <m>

count-order chunk type: first <n> second <m>

Page 20: Notes on teaching ACT-R modeling

20

Production Design (start)

(P start
   =goal>
      ISA count-from
      start =num1
      count nil
==>
   =goal>
      count =num1
   +retrieval>
      ISA count-order
      first =num1)

Page 21: Notes on teaching ACT-R modeling

21

Production Design (stop)

(P stop
   =goal>
      ISA count-from
      count =num
      end =num
==>
   -goal>
   !output! (=num))

Page 22: Notes on teaching ACT-R modeling

22

ACT-R & Lisp...

• ACT-R written in Lisp
• ACT-R uses Lisp syntax
• Parts of a model:
  – Lisp code (~)
  – Parameters
  – Initialization of memory (declarative & procedural)
  – Running a model

Page 23: Notes on teaching ACT-R modeling

23

ACT-R & Lisp...syntax

• ; comments
• "(" <function-name> <arguments> ")"

eg: (clear-all) (sgp) <= lists all parameters & settings

(p ... ) <= p function creates productions
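A few more hedged examples of the kind of calls one can type at the ACT-R/Lisp prompt (:v, :esc, and :lf are standard ACT-R parameters; the values shown are only illustrative):

(sgp :v t :esc t :lf 0.5)  ; set verbose tracing, subsymbolic computation, and the latency factor
(sgp :esc)                 ; ask for the current value of a single parameter
(reset)                    ; reinitialize the current model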

Page 24: Notes on teaching ACT-R modeling

24

ACT-R & Lisp...warnings/errors

• Lisp warnings

#|Warning: Creating chunk BUZZ of default type chunk |#

Undefined term, usually insignificant

#|Warning: Invalid chunk definition: (RED ISA CHUNK) names a

chunk which already exists. |#

Some terms defined within ACT-R as chunks (~reserved words)

Page 25: Notes on teaching ACT-R modeling

25

ACT-R & Lisp...warnings/errors

• Lisp / ACT-R error example 1:
> (help)

UNDEFINED-FUNCTION

Error executing command: "(help)":

Error: attempt to call `HELP' which is an undefined function.

Non-existent function call

Page 26: Notes on teaching ACT-R modeling

26

ACT-R & Lisp...warnings/errors

• Lisp /ACT-R error example 2:

Error Reloading:
; loading c:\documents and settings\bill kennedy\desktop\psyc-768-s09\demo2.txt
error reloading model
error: eof encountered on stream
#<file-simple-stream #p"c:\\documents and settings\\bill kennedy\\desktop\\psyc-768-s09\\demo2.txt" closed @ #x20b2159a>

Unbalanced parentheses.

Page 27: Notes on teaching ACT-R modeling

27

ACT-R Model (outline)

; header info
(clear-all)
(define-model <model name>
  (sgp :<parm name> <value> :<parm name> <value> ...)
  (chunk-type <isa name> <att1> <att2> ...)
  (add-dm
    (<name> isa <chunk-type> <attn> <value> <attm> <value> ...)
    (<name> isa <chunk-type> <attn> <value> <attm> <value> ...)
    ...)                         ; end of add-dm
  (p ...)
  (goal-focus <chunk-name>)
)                                ; end of model definition
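Putting the pieces together, here is a minimal sketch of a complete model in the spirit of the counting example on the earlier slides (the chunk names, parameter values, and the particular goal are illustrative, not taken from the slides):

; count-demo -- count from 2 up to 4 by retrieving count-order facts
(clear-all)
(define-model count-demo
  (sgp :esc t :lf 0.05)
  (chunk-type count-order first second)
  (chunk-type count-from start end count)
  (add-dm
    (b ISA count-order first 1 second 2)
    (c ISA count-order first 2 second 3)
    (d ISA count-order first 3 second 4)
    (first-goal ISA count-from start 2 end 4))
  (p start                ; copy the start number into count and ask for the next fact
     =goal>  ISA count-from  start =num1  count nil
   ==>
     =goal>  count =num1
     +retrieval>  ISA count-order  first =num1)
  (p increment            ; say the current number and step to the next one
     =goal>  ISA count-from  count =num1
      - end =num1
     =retrieval>  ISA count-order  first =num1  second =num2
   ==>
     =goal>  count =num2
     +retrieval>  ISA count-order  first =num2
     !output! (=num1))
  (p stop                 ; reached the end value: say it and clear the goal
     =goal>  ISA count-from  count =num  end =num
   ==>
     -goal>
     !output! (=num))
  (goal-focus first-goal)
)                         ; end of model definition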

Page 28: Notes on teaching ACT-R modeling

28

ACT-R Model

(p <production name> =goal>

ISA <chunk-type><att> <value>...

=retrieval> buffer (content)ISA <chunk-type>

?retrieval> buffer (status)state full

==>=goal>

<att> <value>+retrieval> request to a module

ISA <chunk-type><att> <value>...

-goal> explicit clearing of a buffer!output! (text text =variable text =variable)

)

Page 29: Notes on teaching ACT-R modeling

29

Homework: Data Fitting
• From now on the assignments will be compared to human performance
  – Mostly response time
• Correlation and mean deviation
  • Provide a way to compare and judge the models
  • Not the only way!
    – Plausibility
    – Generality
    – Simplicity

• Make sure the model does the right thing before trying to tune it with parameters!

29
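ACT-R reports correlation and mean deviation for these comparisons; the sketch below (plain Lisp written here for illustration, not the ACT-R implementation itself) shows roughly what the two numbers mean:

(defun mean (xs) (/ (reduce #'+ xs) (length xs)))

(defun mean-deviation (model data)
  "Average absolute difference between model and human values."
  (mean (mapcar (lambda (m d) (abs (- m d))) model data)))

(defun correlation (model data)
  "Pearson correlation between model and human values."
  (let* ((mm (mean model)) (md (mean data))
         (dm (mapcar (lambda (x) (- x mm)) model))
         (dd (mapcar (lambda (x) (- x md)) data)))
    (/ (reduce #'+ (mapcar #'* dm dd))
       (sqrt (* (reduce #'+ (mapcar #'* dm dm))
                (reduce #'+ (mapcar #'* dd dd)))))))

A perfect fit gives a correlation of 1.0 and a mean deviation of 0.0; the homework models are judged by how close they come to both.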

Page 30: Notes on teaching ACT-R modeling

30

Subitizing

• Task: A bunch of objects appear on the display, report the number by speaking it

• Model starts with the counting facts from 0-11
• Will need to manage visual attention
  – Make sure the model gets to every item
  – Needs to know when it's done
  – Given 10 finsts with a long duration to start
• Do not have to use that if you do not want to
• Should not need to adjust parameters to get a reasonable fit to the data

30

Page 31: Notes on teaching ACT-R modeling

Solution Model

CORRELATION:    0.980
MEAN DEVIATION: 0.230

Items   Current     Original
  1     0.54 (T)      0.60
  2     0.77 (T)      0.65
  3     1.00 (T)      0.70
  4     1.24 (T)      0.86
  5     1.48 (T)      1.12
  6     1.71 (T)      1.50
  7     1.95 (T)      1.79
  8     2.18 (T)      2.13
  9     2.41 (T)      2.15
 10     2.65 (T)      2.58

Page 32: Notes on teaching ACT-R modeling

Mechanism

• Find, attend, count, repeat
• Linear in the number of items
• Any other solutions?

Page 33: Notes on teaching ACT-R modeling

33

Memory’s Subsymbolic Representation

33

Page 34: Notes on teaching ACT-R modeling

34

Memory’s Subsymbolic Representation

• At symbolic level
  – chunks in DM
  – retrieval process
• When turned on (:esc t),
  – retrieval based on a chunk's "activation"

34

(p add-ones
   =goal>
      isa add-pair
      one-ans busy
      one1 =num1
      one2 =num2
   =retrieval>
      isa addition-fact
      addend1 =num1
      addend2 =num2
      sum =sum
==>
   =goal>
      one-ans =sum
      carry busy
   +retrieval>
      isa addition-fact
      addend1 10
      sum =sum)

Page 35: Notes on teaching ACT-R modeling

35

Memory’s Subsymbolic Representation: Activation

• Activation drives both latency and probability of retrieval

• Activation for chunk i:

Ai = Bi + εi

• Retrieved *if* activation above a threshold (retrieval threshold :rt default 0, often -1)

• Latency calculated from activation

35
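For reference (the slide mentions probability of retrieval but does not give the equation), the standard ACT-R mapping from activation to probability of retrieval is a logistic function of the distance between the activation and the threshold \(\tau\) (:rt), with \(s\) governed by the activation noise:

\[ P(\text{retrieve chunk } i) = \frac{1}{1 + e^{-(A_i - \tau)/s}} \]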

Page 36: Notes on teaching ACT-R modeling

36

Memory’s Subsymbolic Representation: Activation

Activation for chunk i:

Ai = Bi + εi

Bi = base-level activation

εi = noise contribution

36

Page 37: Notes on teaching ACT-R modeling

37

Memory’s Subsymbolic Representation: Base-level Activation

• Base-level activation
  – Depends on two factors of the history of usage of the chunk: recency & frequency
  – represented as the log of the odds of need (Anderson & Schooler, 1991)
  – due to the math representation, can be negative
  – includes every previous use
  – affected most by the most recent use

37

Page 38: Notes on teaching ACT-R modeling

38

Memory's Subsymbolic Representation: Base-level Activation

[Graph: base-level activation B decaying with time since a presentation]

Page 39: Notes on teaching ACT-R modeling

39

Memory’s Subsymbolic Representation: Base-level Activation

Page 40: Notes on teaching ACT-R modeling

40

Memory’s Subsymbolic Representation: Base-level Activation

With use, decays less

Page 41: Notes on teaching ACT-R modeling

41

Memory’s Subsymbolic Representation: Base-level Activation

• Chunk events affecting activation ("event presentations"):
  – chunk creation

– cleared from a buffer and entered into DM

– cleared from a buffer and already in DM

– retrieved from DM (credited when cleared)

41

Page 42: Notes on teaching ACT-R modeling

42

Memory’s Subsymbolic Representation: Base-level Activation

• Base-level activation calculation called “Base-Level Learning”

• Key parameter, :bll
  – the exponent in the formula

– normal value: a half, i.e., 0.5

42
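For reference, the base-level learning equation that :bll controls (d is the decay exponent and t_j the time since the j-th presentation of chunk i):

\[ B_i = \ln\left( \sum_{j=1}^{n} t_j^{-d} \right) \]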

Page 43: Notes on teaching ACT-R modeling

43

Memory’s Subsymbolic Representation: Base-level Activation

• Full representation requires keeping data on every previous use...

• Alternate: “Optimized” Learning

• If previous chunk events are evenly spaced, a simpler formula is possible

• Keeping even 1 previous event is effective

43
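The "optimized" approximation assumes the n presentations are spread roughly evenly over the chunk's lifetime L (time since creation), which removes the need to store every presentation time:

\[ B_i = \ln\!\left( \frac{n}{1-d} \right) - d \ln L \]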

Page 44: Notes on teaching ACT-R modeling

44

Memory’s Subsymbolic Representation: Base-level Activation

44

ACT-R parameter for optimized learning: :ol t or :ol 1

Page 45: Notes on teaching ACT-R modeling

45

Memory’s Subsymbolic Representation: Activation

Activation for chunk i:

Ai = Bi + εi

Bi = base-level activation

εi = noise contribution

45

Page 46: Notes on teaching ACT-R modeling

46

Memory’s Subsymbolic Representation: Activation Noise

• εi = noise contribution

• 2 parts: permanent & instantaneous

• both ACT-R parameters :pas & :ans

• usually only adjust :ans

• :ans setting varies, from 0.1 to 0.7

• noise in the model is sometimes necessary to match the noise of human subjects...

46

Page 47: Notes on teaching ACT-R modeling

47

Memory’s Subsymbolic Representation: Latency(s)

• Activation also affects latency (two ways)

• Latency = F * e^(-A)

A is activation

F is “latency factor” (ACT-R parameter :lf ~0.5)

• The threshold setting affects the latency of a retrieval failure (must wait until the latency of the threshold passes!)

47
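As a worked illustration (values assumed, not from the slide): with :lf set to 0.5, a chunk retrieved at activation \(A = 0\) takes

\[ t = F e^{-A} = 0.5\, e^{0} = 0.5 \text{ s}, \]

while a retrieval failure with a threshold :rt of \(-1\) is not signaled until

\[ t = F e^{-\tau} = 0.5\, e^{1} \approx 1.36 \text{ s}. \]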

Page 48: Notes on teaching ACT-R modeling

48

Memory’s Subsymbolic Representation

• Activation = base-level plus noise

• Base-level depends on the recency & frequency of previous chunk "presentations"

• Retrieval only when activation is above the "retrieval threshold"

• Activation and threshold affect latency

• Many parameters: :esc, :rt, :bll, :ol, :ans

48

Page 49: Notes on teaching ACT-R modeling

Memory II: Other Sources of Activation

• Previously, chunk’s activation over time

• Now, add the effect of context (two types)

49

Page 50: Notes on teaching ACT-R modeling

Other Sources: Spreading Activation & Partial Matching

• Activation (previous): Ai = base-level activation + noise = Bi + εi

• The effect of context (new): Ai = Bi + εi + SA + PM

50

Page 51: Notes on teaching ACT-R modeling

• Learn multiple similar facts, e.g.,

A hippie is in the park
A lawyer is in the cave
A debutante is in the bank
A hippie is in the cave
A lawyer is in the church

51

Spreading Activation

Page 52: Notes on teaching ACT-R modeling

• Tests (seen before Y/N?)

A lawyer is in the park

A hippie is in the cave

52

Spreading Activation

Page 53: Notes on teaching ACT-R modeling

• Response time increases linearly as the number of persons and locations increases, i.e., the "fanning out" of activation

• Foils take longer than targets to decide

53

Spreading Activation

Page 54: Notes on teaching ACT-R modeling

• The context affects retrievals

• Contents of other buffers contribute to retrieval activation calculation for chunks in DM

• Affects response time

54

Spreading Activation

Page 55: Notes on teaching ACT-R modeling

• Consider: several matching chunks in memory
• How to decide which?
• Activation based on base level (recency & frequency) PLUS a small context effect
• Retrieval based on parts of the chunk separates exact matches from non-matches

55

Spreading Activation

Page 56: Notes on teaching ACT-R modeling

Spreading Activation
• Activation (previous):
  Ai = base-level activation + noise = Bi + εi

• Add context: the effect of other buffers' chunks
  Ai = Bi + εi + Σk Σj (Wkj · Sji),  summing over buffers k and slots j

56

Page 57: Notes on teaching ACT-R modeling

Spreading Activation

• add context: effect of other buffers’ chunks

Ai = Bi + εi + Σk Σj (Wkj · Sji),  summing over buffers k and slots j

Wkj is the weighting of slot j in buffer k (normalized)

Sji is the strength of the association between slot j and chunk i

57

Page 58: Notes on teaching ACT-R modeling

Spreading Activation

• add context: effect of other buffers’ chunks

Ai = Bi + εi + Σk Σj (Wkj · Sji),  summing over buffers k and slots j

Wkj is the weighting of slot j in buffer k (normalized)
(default is 1 for the goal buffer, 0 for others)

Sji is the strength of the association between slot j and chunk i
(Sji = 0, or Sji = S − ln(fanj))

58
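A small worked example (the value of S is assumed here; it is set by the :mas parameter): with S = 2, a source chunk that appears in only one fact (fan 1) contributes

\[ S_{ji} = S - \ln(\mathrm{fan}_j) = 2 - \ln 1 = 2, \]

while one that appears in three facts (fan 3) contributes only \(2 - \ln 3 \approx 0.90\) — which is why adding similar facts slows retrieval.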

Page 59: Notes on teaching ACT-R modeling

Spreading Activation

Fan Effect (Anderson 1974)
• Fan effect: the number of associations "fanning out" from a chunk
• Other buffers hold chunks
• A chunk has slots holding other chunks
• How many uses of a chunk affects its Ai

59

Page 60: Notes on teaching ACT-R modeling

60

Spreading Activation: Fan Effect

[Diagram: the Goal buffer holds two sources (Slot 1 and Slot 2) with weights W1 and W2; association strengths S11..S23 spread activation to chunk1, chunk2, and chunk3, which have base levels B1, B2, B3]

A1 = B1 + W1·S11 + W2·S21
A2 = B2 + W1·S12 + W2·S22
A3 = B3 + W1·S13 + W2·S23

Page 61: Notes on teaching ACT-R modeling

• Retrievals based on matching & activation
• Now, other buffers affect retrieval
• But activation is diluted by similar chunks

• Effect: similar but non-matching chunks slow retrievals

61

Spreading Activation: Fan Effect

Page 62: Notes on teaching ACT-R modeling

62

Other Sources: Partial Matching

Page 63: Notes on teaching ACT-R modeling

63

Other Sources: Partial Matching

• Provides ACT-R a mechanism to explain errors of commission, retrieving wrong chunk

• (previous activation mechanism explained errors of omission, Ai < :RT )

Page 64: Notes on teaching ACT-R modeling

Partial Matching

• add context: effect of similar chunks

Ai = Bi + εi + Σk Σj (Wkj · Sji) + Σl (P · Mli),  summing over buffers k, slots j, and retrieval-request slots l

P is the weighting of a slot

Mli is the similarity between the value in slot l of the retrieval request and the corresponding value of chunk i

64

Page 65: Notes on teaching ACT-R modeling

Partial Matching

• add context: effect of similar chunks

Ai = Bi + εi + Σk Σj (Wkj · Sji) + Σl (P · Mli),  summing over buffers k, slots j, and retrieval-request slots l

P is the weighting of slots (all equal)

Mli is the similarity between the value in slot l of the retrieval request and the corresponding value of chunk i

65
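In a model this is typically enabled through the mismatch penalty and explicit similarity settings; a hedged sketch (the chunk names and values are illustrative, not from the slides):

(sgp :mp 2)                           ; turn on partial matching; the penalty P
(set-similarities (three four -0.1)   ; close values: small penalty
                  (three five -0.3))  ; more distant values: larger penalty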

Page 66: Notes on teaching ACT-R modeling

Partial Matching

• The effect is that the model can retrieve a wrong but similar chunk (...IF the chunk hierarchy supports it)

• Retrieval of the wrong chunk supports errors of commission, taking the wrong action rather than no action

66

Page 67: Notes on teaching ACT-R modeling

67

ACT-R Modeling

• ACT-R Model Development

• ACT-R Model Debugging

67

Page 68: Notes on teaching ACT-R modeling

68

ACT-R Model Development1. Plan overall model to work in stages.

2. Start simple then add details to your model.

3. Write simple productions using simple chunks.

4. Run the model (with own trace) frequently to test progress (eg. with every new or changed production).

68

Page 69: Notes on teaching ACT-R modeling

69

ACT-R Model Development

5. Start with productions doing one thing at a time (i.e., reference goal + one buffer) and use multiple productions. Combine later.

6. Use state variables to rigorously control sequencing until model works, then remove as many as possible.

69

Page 70: Notes on teaching ACT-R modeling

70

ACT-R Model Development

7. With each buffer request, consider a production for handling the failure.

8. In using loops, consider preps to start and how to leave loop.

70

Page 71: Notes on teaching ACT-R modeling

71

ACT-R Debugging Process

• Run ACT-R up to the problem...
  – set a time limit
  – change a production to stop at the problem step
• Check "why not" of the expected production
• Check buffers & buffer status
• Check the visicon
• Causes ...

71

Page 72: Notes on teaching ACT-R modeling

72

ACT-R Code Debugging

Stops unexpectedly / expected production not firing:
  – Conditions not met (use "Why not" to identify which)
  – Conditions over-specified with unnecessary variable tests which don't match
  – Logic mismatch among conditions
  – nil will not match =variable
  – ...

72

Page 73: Notes on teaching ACT-R modeling

73

ACT-R Code Debugging

Stops unexpectedly / expected production not firing (continued):
  – Typo in a variable name, i.e., not the same reference
  – Wrong slot referenced in the LHS
  – Time ran out
  – Production not in memory
  – Error on loading (production ignored)
  – Production overwritten by duplicate naming (warning)

73

Page 74: Notes on teaching ACT-R modeling

74

ACT-R Code Debugging

Wrong production firing:
  – The firing production also meets the current conditions
  – Conditions do not meet the expected production's LHS

Production firing repeatedly:
  – LHS not changed by the firing, i.e., still valid

74

Page 75: Notes on teaching ACT-R modeling

75

ACT-R Code Debugging

Buffer unexpectedly empty:
  – Not filled
  – Implicit clearing (on LHS but not RHS)

Buffer with an unexpected chunk:
  – Previous production to fill it didn't fire
  – Sensor handling not as expected
  – Buffer not updated/cleared as expected

75

Page 76: Notes on teaching ACT-R modeling

76

ACT-R Code Debugging

Retrieval unsuccessful:
  – Expected chunk not in memory
  – Retrieval specification unintended
    • overly specific (too many slots specified)
    • unintended chunk type
  – Expected chunk's activation too low
  – Wrong chunk retrieved
    • under-specified (too few slots specified)
    • partial matching effect (intended)

76

Page 77: Notes on teaching ACT-R modeling

77

ACT-R Code Debugging

Timing too slow:
  – Combine productions
  – Driven by retrieval failures and :RT too low

Timing too fast:
  – Driven by retrieval failures and :RT too high

77

Page 78: Notes on teaching ACT-R modeling

Unit 4: Zbrodoff’s Experiment

• alpha arithmetic, eg: A + 2 = C: correct?
• possible addends: 2, 3, or 4
• learning over:
  – stimuli set (24) – repetition (5) – blocks (2) = 192 trials

78

Page 79: Notes on teaching ACT-R modeling

Model Design

• Given model that counts to answer

• Process: read problem, count, answer

• Already creates and saves chunks of answers

• Strategy?

79

Page 80: Notes on teaching ACT-R modeling

Model Design - start

• Basic strategy: learn to retrieve answer

• How: attempt retrieval
  – if successful, answer based on it
  – if it fails, resort to counting (another basic process...)

• After model runs, adjust :RT

80

Page 81: Notes on teaching ACT-R modeling

Model Design – basic process

81

Instance-based action/declarative learning process:

[Flow: Read Problem → Recall Solution; on success (+) → Respond; on failure (−) → Solve Problem → Save Solution → Respond]

Page 82: Notes on teaching ACT-R modeling

Model Coding

82

[Flow: Read Problem → Recall Solution; on success (+) → Respond; on failure (−) → Solve Problem → Save Solution → Respond]

Zbrodoff homework: ~80% given

Page 83: Notes on teaching ACT-R modeling

Parameter Tweaking

• After model runs, adjust retrieval threshold to control when learning occurs vs. problem solving

83

Page 84: Notes on teaching ACT-R modeling

Unit 5: Siegler Experiment

• Experiment is 4-year-olds answering addition problems: 1+1 up to 3+3 (answers from 0-8 & "other")

• Environment starts with the problem stuffed into the imaginal buffer as "text"

• Goal: beat 0.976 / 0.058
• Strategy?

84

Page 85: Notes on teaching ACT-R modeling

Model Development Process

1. get working for one case

2. expand for all cases

3. adjust parameters

85

Page 86: Notes on teaching ACT-R modeling

Model / Coding Issue

• Note: one <> "one" <> "1" <> 1 (a symbol, two strings, and a number)

• Facts stored using different representations
  Number fact: (one ISA number value 1 name "one")
  Addition fact: (f11 ISA plus-fact addend1 one addend2 one sum two)

• Problem input & response are strings, e.g., "one"

86

Page 87: Notes on teaching ACT-R modeling

• Now possible to access chunk names, e.g., =retrieval, like the previous =visual-location

• Or ("old school") add a slot with the value, eg:
  (four ISA number value 4 name "four" ref four)

87

Model / Coding Issue

Page 88: Notes on teaching ACT-R modeling

Model Design - basics

• Given chunk types and functions to set base activation and set similarities

• Basic process: encode, retrieve, respond

88

Page 89: Notes on teaching ACT-R modeling

Parameter Tweaking

• Set base-level activations for successful retrievals throughout a run

• Adjust :ans to get "other" responses

• Set similarities to generate an error distribution that matches human subjects

89
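A hedged sketch of what those three adjustments look like in model code (chunk names and values are illustrative only):

(set-base-levels (f11 10) (f22 10))   ; boost addition facts that should be retrieved reliably
(sgp :ans 0.4)                        ; activation noise, so some retrievals fail and produce "other"
(set-similarities (one two -0.1)      ; graded similarities shape which wrong answers occur
                  (one three -0.3))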

Page 90: Notes on teaching ACT-R modeling

Siegler Experiment Data

        zero   one    two    three  four   five   six    seven  eight  other
1+1     0      0.05   0.86   0      0.02   0      0.02   0      0      0.06
1+2     0      0.04   0.07   0.75   0.04   0      0.02   0      0      0.09
1+3     0      0.02   0      0.1    0.75   0.05   0.01   0.03   0      0.06
2+2     0.02   0      0.04   0.05   0.8    0.04   0      0.05   0      0
2+3     0      0      0.07   0.09   0.25   0.45   0.08   0.01   0.01   0.06
3+3     0.04   0      0      0.05   0.21   0.09   0.48   0      0.02   0.11

90


Page 91: Notes on teaching ACT-R modeling

Siegler Data & Models

[Three response-distribution charts, answers 0-8 and "other":
  Experimental Data
  Perfect Behavior (0.942 / 0.125)
  My Model (0.960 / 0.059)]

91

Page 92: Notes on teaching ACT-R modeling

Procedural Knowledge “Reward”

92

• A reward value is propagated backwards through rule firings and depreciated by time

Page 93: Notes on teaching ACT-R modeling

93

[Diagram: a sequence of production firings — Rule-g, Rule-d, Rule-e, Rule-b, Rule-j, Rule-w]

Procedural Knowledge “Reward”

Page 94: Notes on teaching ACT-R modeling

94

[Same firing-sequence diagram, now with rewards attached: Rule-b receives reward 10 and Rule-w receives reward 15]

• Eg: (spp Rule-b :reward 10)

(spp Rule-w :reward 15)

Procedural Knowledge “Reward”

Page 95: Notes on teaching ACT-R modeling

• Eg: (spp Rule-b :reward 10)

(spp Rule-w :reward 15)

• Rewards are propagated back only as far as the last reward

95

[Same firing-sequence diagram, with rewards 10 and 15]

Procedural Knowledge “Reward”

Page 96: Notes on teaching ACT-R modeling

Why?

96

• Multiple rules' LHS meet the current conditions, and the appropriate strategy is a balance between them.

• Rule selection balance can be learned

• To turn on utility learning: (sgp :ul t)

Page 97: Notes on teaching ACT-R modeling

Learning Utility

97

• Difference Learning Equation:

Ui(n) = Ui(n-1) + α [ Ri(n) − Ui(n-1) ]

Ui(n) is the utility of rule i at its nth firing

Ri(n) is the reward for rule i at its nth firing

α is the "learning rate" (sgp :alpha 0.2)
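A worked illustration (numbers assumed, not from the slide): with α = 0.2, a rule starting at U = 0 that keeps earning a reward of 10 moves toward that reward geometrically:

U(1) = 0 + 0.2 (10 − 0) = 2
U(2) = 2 + 0.2 (10 − 2) = 3.6
U(3) = 3.6 + 0.2 (10 − 3.6) = 4.88   ... approaching 10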

Page 98: Notes on teaching ACT-R modeling

Learning Utility

98

• Difference Learning Equation effect:

Page 99: Notes on teaching ACT-R modeling

Example: Building Sticks Task

99

• Match a given length by a combination of 3 other given lengths

• Two strategies: building up or subtracting (“undershoot”/“overshoot” based on first move)

Page 100: Notes on teaching ACT-R modeling

100

• Searching for combination of A,B, & C that equals target length (green)

• If current is too long, length is subtracted

Example: Building Sticks Task

Page 101: Notes on teaching ACT-R modeling

101

Demo

A=15, B=200, C=41

Goal: 103

Example: Building Sticks Task

Page 102: Notes on teaching ACT-R modeling

102

Demo

A=15, B=200, C=41

Goal: 103 = B – 2C – A

Example: Building Sticks Task

Page 103: Notes on teaching ACT-R modeling

103

Human performance data:

available lengths (a, b, c), goal, and % overshoot:

  a     b     c    Goal   %Overshoot
 15    250    55    125       20
 10    155    22    101       67
 14    200    37    112       20
 22    200    32    114       47
 10    243    37    159       87
 ...

Example: Building Sticks Task

Page 104: Notes on teaching ACT-R modeling

104

• BST model has 27 productions

Example: Building Sticks Task

[Flow: Perceive Problem → Evaluate Current → Choose Strategy → Implement Strategy → ... → Perceive "done"]


Page 106: Notes on teaching ACT-R modeling

106

• BST model has 27 productions

• Key productions decide which length to use based on whether current length is too long (“over”) or too short (“under”)

decide-over, decide-under, force-over, force-under

Example: Building Sticks Task

Page 107: Notes on teaching ACT-R modeling

107

Example: Building Sticks Task

(p decide-under
   =goal>
      isa try-strategy
      state choose-strategy
      strategy nil
   =imaginal>
      isa encoding
      over =over
      under =under
   !eval! (< =under (- =over 25))
==>
   =imaginal>
   =goal>
      state prepare-mouse
      strategy under
   +visual-location>
      isa visual-location
      kind oval
      screen-y 85)

(p force-under
   =goal>
      isa try-strategy
      state choose-strategy
    - strategy under
==>
   =goal>
      state prepare-mouse
      strategy under
   +visual-location>
      isa visual-location
      kind oval
      screen-y 85)

Page 108: Notes on teaching ACT-R modeling

108

• (collect-data 100) corr: 0.803

• Utilities:
                 start    end
  decide-over      13    13.15
  decide-under     13    11.15
  force-over       10    12.15
  force-under      10     6.59

Example: Building Sticks Task

Page 109: Notes on teaching ACT-R modeling

Unit 6 Homework

109

Model design: productions
• start by detecting "choose" in the window
• generate a key-press (heads and tails)
• read the actual result
• note matches and non-matches

Settings:
  :ul t (to turn on utility learning)
  :egs noise parameter
  set initial utilities AND establish rewards for specific productions
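A hedged sketch of those settings (the production names here are hypothetical, only to show the spp syntax):

(sgp :ul t :egs 3)               ; utility learning on, with utility noise
(spp guess-heads :u 5)           ; illustrative initial utilities for the guessing productions
(spp guess-tails :u 5)
(spp note-match :reward 4)       ; reward delivered when the guess matched the result
(spp note-mismatch :reward 0)    ; no reward when it did not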

Page 110: Notes on teaching ACT-R modeling

Unit 6 Homework

110

Model results (collect-data 100): Corr: / Mean Dev.

official answer: 0.991 / 0.012

Page 111: Notes on teaching ACT-R modeling

Unit 6 Homework

111

Model results (collect-data 100): Corr: / Mean Dev.

official answer: 0.991 / 0.012 my best: 0.998 / 0.010

Page 112: Notes on teaching ACT-R modeling

Unit 6 Homework

112

Model results (collect-data 100): Corr: / Mean Dev.

official answer: 0.991 / 0.012 my best: 0.998 / 0.010

another run: 0.990 / 0.030 run 200: 0.990 / 0.024

Page 113: Notes on teaching ACT-R modeling

Learning New Rules

113

• Anderson (1982) suggested a 3 stage learning process:

  – "Cognitive" (problem solving, chunking)
  – "Associative" (retrieval of solutions)
  – "Autonomous" (new procedural knowledge)

Page 114: Notes on teaching ACT-R modeling

Learning New Rules

114

[Graph: counts per 100 problems over 0-2,500 problems for Moves, Created, Retrievals, and Production Firings]

from Kennedy & Trafton (2007) Long-term Learning, CSR

Page 115: Notes on teaching ACT-R modeling

115

• How? compiling rules that fire sequentially

Abstract: Rule AB followed by rule BC ...

Learning New Rules

Page 116: Notes on teaching ACT-R modeling

116

• How? compiling rules that fire sequentially

Abstract: Rule AB followed by rule BC combined into a new rule: AB&C

Learning New Rules

Page 117: Notes on teaching ACT-R modeling

117

(p rule1
   =goal>
      isa goal
      state nil
==>
   =goal>
      state start)

(p rule2
   =goal>
      isa goal
      state start
==>
   =goal>
      add1 zero)

Learning New Rules


Page 119: Notes on teaching ACT-R modeling

119

Learning New Rules

(p rule1
   =goal>
      isa goal
      state nil
==>
   =goal>
      state start)

(p rule2
   =goal>
      isa goal
      state start
==>
   =goal>
      add1 zero)

combine into the new rule (AB&C):

(p rule1+2
   =goal>
      isa goal
      state nil
==>
   =goal>
      state start
      add1 zero)


Page 121: Notes on teaching ACT-R modeling

121

• Is that all?

Realistic example:
• Rule 1 initiates retrieval of a chunk
• Rule 2 harvests the retrieved chunk
• New Rule 1+2 is built with the chunk's information "built in", i.e., no retrieval involved

Learning New Rules

Page 122: Notes on teaching ACT-R modeling

122

Realistic example (specifics):
• Recall the two-column addition exercise

• Add by counting from first number, the second number of times

• Consider a specific case: start with 3 and need to count 2 steps to get sum

Learning New Rules

Page 123: Notes on teaching ACT-R modeling

123

Learning New Rules

(P initialize-add-count
   =goal>
      ISA add
      addend1 =arg1
      addend2 =arg2
      sum nil
==>
   +retrieval>
      ISA order
      first =arg1
   =goal>
      sum =arg1
      count zero)

(P increment-sum
   =goal>
      ISA add
      sum =sum
      count =count
   =retrieval>
      ISA order
      first =sum
      second =new
==>
   =goal>
      sum =new
   +retrieval>
      ISA order
      first =count)

Page 124: Notes on teaching ACT-R modeling

124

Learning New Rules

(P initialize-add-count
   =goal>
      ISA add
      addend1 =arg1
      addend2 =arg2
      sum nil
==>
   +retrieval>
      ISA order
      first =arg1
   =goal>
      sum =arg1
      count zero)

(P increment-sum
   =goal>
      ISA add
      sum =sum
      count =count
   =retrieval>
      ISA order
      first =sum
      second =new
==>
   =goal>
      sum =new
   +retrieval>
      ISA order
      first =count)

Retrieved chunk: ISA order first three second four


Page 127: Notes on teaching ACT-R modeling

127

Learning New Rules

(P initialize-add-count
   =goal>
      ISA add
      addend1 three
      addend2 =arg2
      sum nil
==>
   +retrieval>
      ISA order
      first three
   =goal>
      sum three
      count zero)

(P increment-sum
   =goal>
      ISA add
      sum three
      count =count
   =retrieval>
      ISA order
      first three
      second four
==>
   =goal>
      sum four
   +retrieval>
      ISA order
      first =count)

Retrieved chunk: ISA order first three second four


Page 135: Notes on teaching ACT-R modeling

135

Learning New Rules

(P initialize-add-count
   =goal>
      ISA add
      addend1 three
      addend2 =arg2
      sum nil
==>
   +retrieval>
      ISA order
      first three
   =goal>
      sum three
      count zero)

(P increment-sum
   =goal>
      ISA add
      sum three
      count zero
   =retrieval>
      ISA order
      first three
      second four
==>
   =goal>
      sum four
   +retrieval>
      ISA order
      first zero)


Page 143: Notes on teaching ACT-R modeling

143

Learning New Rules

(P initialize-add-count
   =goal>
      ISA add
      addend1 three
      addend2 =arg2
      sum nil
==>
   +retrieval>
      ISA order
      first three
   =goal>
      sum three
      count zero)

(P increment-sum
   =goal>
      ISA add
      sum three
      count zero
   =retrieval>
      ISA order
      first three
      second four
==>
   =goal>
      sum four
   +retrieval>
      ISA order
      first zero)

These two compile into:

(P initialize+increment
   =goal>
      ISA add
      addend1 three
      addend2 =arg2
      sum nil
==>
   =goal>
      count zero
      sum four
   +retrieval>
      ISA order
      first zero)

Page 144: Notes on teaching ACT-R modeling

144

Learning New Rules

NOTICE:

1. New rule is very specific due to adding the retrieved chunk's information to the two parent rules

Page 145: Notes on teaching ACT-R modeling

145

Learning New Rules

NOTICE:

2. Very simply written rules can become more sophisticated with production compilation (i.e., write simple rules that get the desired behavior and let the system learn to be fast enough to match data...)

Page 146: Notes on teaching ACT-R modeling

146

Learning New Rules

• Utility of new rule?

• Intent: gradual rule use (contrary to Soar philosophy)

Page 147: Notes on teaching ACT-R modeling

147

Learning New Rules

• Previous utility rule:  Ui(n) = Ui(n-1) + α [ Ri(n) − Ui(n-1) ]

• Utility for a new rule:  Ui(n) = Ui(n-1) + α [ U1st-parent(n) − Ui(n-1) ]

• Starts at 0 (default) and is increased with each re-creation until, with noise, it gets selected. Then it gets the reward more directly and can surpass its parent.

Page 148: Notes on teaching ACT-R modeling

Learning New Rules

148

[Graph: counts per 100 problems over 0-2,500 problems for Moves, Created, Retrievals, and Production Firings]

from Kennedy & Trafton (2007) Long-term Learning, CSR

Page 149: Notes on teaching ACT-R modeling

149

Learning New Rules

• Rule compilation conditions:
  – no conflicts

– no “externalities”, i.e., reliance on outside world

– must be turned on: (sgp :epl t)

(default is nil)

Page 150: Notes on teaching ACT-R modeling

150

• Learning new procedural knowledge:
  • 2 sequentially-firing rules combine into a new rule
  • production learning turned on with (sgp :epl t)
  • the new rule's utility is learned with the parent's utility in place of the reward

• Use: learned new rules eventually take over and can be much faster than parents

Review

Page 151: Notes on teaching ACT-R modeling

Unit 7 Homework

151

• Model design?

[Screenshots: the Environment window and the Model Results]

Page 152: Notes on teaching ACT-R modeling

Unit 7 Homework

152

• Given:

[Graph with three curves: Regular, Irregular Correct, Irregular Incorrect]


Page 154: Notes on teaching ACT-R modeling

Unit 7 Homework

154

[Design diagram: Retrieve Verb; Success / Complete; Retrieval Error — with outcomes Regular, Irregular Correct, Irregular Incorrect]

Page 155: Notes on teaching ACT-R modeling

Unit 7 Homework

155

[Design diagram: Retrieve Verb; Success / Complete; Apply as Model; Retrieval Error — with outcomes Regular, Irregular Correct, Irregular Incorrect]

Page 156: Notes on teaching ACT-R modeling

Unit 7 Homework

156

[Design diagram: Retrieve Verb; Retrieve Any Verb; Success / Complete; Apply as Model; Retrieval Error — with outcomes Regular, Irregular Correct, Irregular Incorrect]

Page 157: Notes on teaching ACT-R modeling

Unit 7 Homework

157

[Design diagram: Retrieve Verb; Retrieve Any Verb; Success / Complete; Apply as Model; Ignore Non-model; Retrieval Error — with outcomes Regular, Irregular Correct, Irregular Incorrect]


Page 160: Notes on teaching ACT-R modeling

Unit 7 Homework

160

[Results graphs: 30,000 trials; 10,000 trials; 500 increment]

Page 161: Notes on teaching ACT-R modeling

161

Issues in Cognitive Modeling

Page 162: Notes on teaching ACT-R modeling

162

• Philosophical issues
• Foundational issues
• Architectural issues
• Modeling issues
• Validation issues
• Scope issues

Issues in Cognitive Modeling

Page 163: Notes on teaching ACT-R modeling

163

Philosophical Issues

Page 164: Notes on teaching ACT-R modeling

164

• Cognitive Science (& modeling) vs. Artificial Intelligence

• Sources of knowledge
• Mind-Body problem
• Computational Theory of Mind

Philosophical Issues

Page 165: Notes on teaching ACT-R modeling

165

• Cognitive Science (& modeling) vs Artificial Intelligence
  – similar search for understanding:
    • what is intelligence
    • how the brain produces behavior
  – different purposes:
    • understanding the mind for science's sake
    • applying that understanding to problems

Philosophical Issues

Page 166: Notes on teaching ACT-R modeling

166

• Source of knowledge
  – from experience (Locke, Hume, Mill)
  – innate, instinct, built-in (Descartes, Kant, & Pinker's "Language Instinct")

Philosophical Issues

Page 167: Notes on teaching ACT-R modeling

167

• Mind-body problem
  – mind distinct from body (Dualism)
  – mind an extension of the body (Materialism)
  – reality is in the mind (Idealism)

Philosophical Issues

Page 168: Notes on teaching ACT-R modeling

168

• Computational hypothesis
  – computation as a theory of mind: less than 50 years old
  – John Searle's Chinese Room

Philosophical Issues

Page 169: Notes on teaching ACT-R modeling

169

• Symbol hypothesis
  – symbols vs. Gestalt's holistic, parallel, analog perception of the whole as different from its parts
  – related to the neural network approach to Cognitive Science

Philosophical Issues

Page 170: Notes on teaching ACT-R modeling

170

Foundational Issues

Page 171: Notes on teaching ACT-R modeling

171

• Symbol hypothesis (on the border)
• Levels/bands – what are we studying?
• Where is the foundation?
• Cognitive plausibility

Foundational Issues

Page 172: Notes on teaching ACT-R modeling

Newell's Levels & Bands

Architectural term | t (sec)  | Units              | System         | Band
                   | 10^11-13 | 10^4-10^6 years    |                | Evolutionary
                   | 10^10    | Millennia          |                | Historical
Lifetime           | 10^9     | ~50 years          |                | Historical
                   | 10^8     | Years (Expertise)  |                | Historical
                   | 10^7     | Months (Expertise) |                | Social
Development        | 10^6     | Weeks              |                | Social
                   | 10^5     | Days               |                | Social
                   | 10^4     | Hours              | Task           | Rational
Knowledge acq      | 10^3     | 10 min             | Task           | Rational
                   | 10^2     | Minutes            | Task           | Rational
                   | 10^1     | 10 sec             | Unit task      | Cognitive
Performance        | 10^0     | 1 sec              | Operations     | Cognitive
Temp storage       | 10^-1    | 100 ms             | Deliberate act | Cognitive
Primitive act      | 10^-2    | 10 ms              | Neural net     | Biological
                   | 10^-3    | 1 ms               | Neuron         | Biological
                   | 10^-4    | 100 μs             | Organelle      | Biological

172

Page 173: Notes on teaching ACT-R modeling

173

• Where is the foundation?
  – working from the neuron up
  – from a theory of mind down, or
  – the middle, in both directions

Foundational Issues

Page 174: Notes on teaching ACT-R modeling

174

• Cognitive plausibility
  – common usage
  – formal matching of clever experimental data
  – multi-level justification

Foundational Issues

Page 175: Notes on teaching ACT-R modeling

175

Architectural Issues

Page 176: Notes on teaching ACT-R modeling

176

• Goal of architecture
• Perception-cognition boundary
• Movement-movement boundary
• Symbolic memory representation
• Subsymbolic representation
• Strong assumptions

Architectural Issues

Page 177: Notes on teaching ACT-R modeling

177

• Goal of architecture
  – AI or Cognitive Science
  – Narrow behavioral focus or broad

Architectural Issues

Page 178: Notes on teaching ACT-R modeling

178

• Perception-cognition boundary

Architectural Issues

Page 179: Notes on teaching ACT-R modeling

179

• Perception-cognition boundary

Architectural Issues

Input → Process → Output

Perception Cognition Action

Page 180: Notes on teaching ACT-R modeling

180

• Perception-cognition boundary
  – where's the edge?
  – how much does one affect the other?
  – being studied

Architectural Issues

Page 181: Notes on teaching ACT-R modeling

181

• Movement-movement boundary
  – presumed well defined: thought vs. action
  – "Mirror neurons" challenging the definition
  – not a focus of study
  – (robotics is not the same subject)

Architectural Issues

Page 182: Notes on teaching ACT-R modeling

182

• Symbolic memory representation
  – LTM & STM accepted as different
  – Procedural generally accepted as different
  – Episodic different?
  – Images different?
  – Spatial different?
  – (Feelings different?)

Architectural Issues

Page 183: Notes on teaching ACT-R modeling

183

• Subsymbolic representation
  – Generally accepted as necessary
  – Architectural design impact not settled
  – Neural representation?
  – Functional mathematical representation?
  – Stochastic representation?

Architectural Issues

Page 184: Notes on teaching ACT-R modeling

184

• Strong assumptions
  – disprovable theoretic claims to advance science
  – Newell's UTC stake in the ground
  – Icarus' variation
  – Clarion (& ACT-R) neural representation dependency
  – Modularity and physiological comparisons

Architectural Issues

Page 185: Notes on teaching ACT-R modeling

185

Modeling Issues

Page 186: Notes on teaching ACT-R modeling

186

• Standard methodologies
• Bottom-up vs. top-down
• Chunk size
• Production size
• Parameter standardization
• Other: modeling or architecture issues

Modeling Issues

Page 187: Notes on teaching ACT-R modeling

187

• Standard methodologies
  – Some:
    • rule-based behavior
    • some patterns of rules
      – problem-solving techniques
      – find, attend, harvest
      – prepare, execute
  – Lack of standard methodologies: an indicator of the youth of the field, or something else...

Modeling Issues

Page 188: Notes on teaching ACT-R modeling

188

• Bottom-up vs. top-down
  – Goal driven or environmentally driven?
  – Goal: procedural process retrieval
  – Env: perception, reactive behavior
  – Mixed? How?

Modeling Issues

Page 189: Notes on teaching ACT-R modeling

189

• Chunk size
  – How many slots?
    • Smaller is better
    • When is larger too large, 7 +/- 2? (!)
  – How complicated is a slot?
    • indirect references?
    • variable slot names?
    • un-named slots?

Modeling Issues

Page 190: Notes on teaching ACT-R modeling

190

• Production size
  – How many conditions?
  – How complicated is the logic?
    • and
    • not
    • or?
    • evaluation function?
  – How many actions on the RHS?
  – How complicated are the actions?

Modeling Issues

Page 191: Notes on teaching ACT-R modeling

191

• Parameter standardization (ACT-R)
  • Production firing: 50 ms
  • Retrieval threshold: varies
  • Move attention: 85 ms
  • Imaginal vs. goal buffer: ?
  • others...
  – Other architectures?

Modeling Issues

Page 192: Notes on teaching ACT-R modeling

192

• Other: modeling or architecture issues
  – Memory for goals (are goals different?)
  – Multi-tasking (interleaved by the model or the architecture)
  – Emotion affecting cognition
    • different rules
    • different parameters (eg. RT)
  – Motivation(?)

Modeling Issues

Page 193: Notes on teaching ACT-R modeling

193

Validation Issues

Page 194: Notes on teaching ACT-R modeling

194

• Validation criteria
• Validity vs. reliability
• Acceptable evidence

Validation Issues

Page 195: Notes on teaching ACT-R modeling

195

• Validation criteria
  • Statistical hypothesis testing
  • Goodness of fit
  • Verbal protocols

Validation Issues

Page 196: Notes on teaching ACT-R modeling

196

• Validity vs. reliability
  • Well-defined phenomenon
  • Clearly demonstrated
  • Explanation rational
  • Explanation/model cognitively plausible

Validation Issues

Page 197: Notes on teaching ACT-R modeling

197

• How demonstrated?
  – match a single human on a single experiment
  – match multiple subjects on a single experiment
  – match data on a wide range of behaviors
  – replicated

Validation Issues

Page 198: Notes on teaching ACT-R modeling

198

• Natural Language

• "Numerosity" (sense of quantity, size, etc.)
• Judgment
• Realistic vs. rational behavior
• Social behavior
• Abnormal behavior
• Creativity, art, music
• ...

Scope Issues