
Page 1

Altering the ICARUS Architecture to Model Social Cognition

Pat Langley
Institute for the Study of Learning and Expertise

Award Period 2/1/12–1/31/15

ONR Cognitive Science and Human-Robot Interaction

6.1 Program Review

June 25–28, 2013

Page 2

Critique of the ICARUS Architecture

In previous work (Langley et al., 2009), we developed ICARUS, an architecture that, despite its accomplishments:

• Relies on exhaustive, deductive inference

• Emphasizes physical activities over mental ones

• Cannot represent or reason about others’ mental states

• Has inflexible mechanisms for execution / problem solving

This project aims to address these drawbacks by developing a radically new version of the architecture.

Page 3

Research Objectives

We aim to develop a unified theory of the human cognitive architecture that supports:

• Representing and reasoning about others’ mental states

• Flexible inference and problem solving in this context

• Structural learning that supports these processes

The research project’s significance lies in its potential to:

• Improve accounts of human reasoning and learning

• Support agents/robots that interact effectively with humans

This effort addresses aspects of high-level cognition that have received little attention elsewhere.

Page 4

Recent Accomplishments

During the past year, our team’s accomplishments have included:

• Developing new formalisms for:
  - Beliefs and goals that refer to other agents’ mental states
  - Concepts and skills that involve relations among mental states
• Designing, implementing, and testing an approach to the incremental abduction of explanations
• Adapting and applying this mechanism to:
  - Understanding domain-level plans
  - Understanding stories in which agents reason about others
  - Explaining and judging behavior in moral contexts
• Reimplementing / improving a flexible framework for problem solving that incorporates meta-level control rules

Together, these support our aims to produce a more complete account of human cognitive abilities.

Page 5

Challenge: Plan Understanding

A basic task that involves reasoning about others' mental states is plan understanding, which we can define as:

• Given: A sequence S of actions agent A is observed to carry out;

• Given: Knowledge about concepts and activities, organized hierarchically, that are available to agent A;

• Infer: An explanation, E, in proof lattice form, that accounts for S in terms of A's goals, beliefs, and intentions.

This is analogous to language understanding in that analysis produces a connected account of input.

We distinguish it from plan recognition (Goldman et al., 1999), which assigns observed behavior to some known category.

Page 6

An Illustrative Example

Consider an action sequence from the Monroe County corpus (Blaylock & Allen, 2005):

Truck driver tdriver1 navigates the dump truck dtruck1 to the location brightondump, where a hazard team ht2 climbs into the vehicle. Then tdriver1 navigates dtruck1 to the gas station texaco1, where ht2 loads a generator gen2 into dtruck1…

Given such observations and knowledge about possible goals / activities, we want to infer the latter to explain events.

In this case, we might conclude the driver is collecting people and a power source for some mission.

Page 7

Plan Understanding as Abductive Inference

Our theoretical claims about plan understanding are that it:

• Involves inference about the participating agents’ mental states (beliefs / goals about activities and environment)
• Involves the abductive generation of explanations through the introduction of default assumptions
• Operates in an incremental fashion to process observations that arrive sequentially
• Proceeds in a data-driven manner because understanding arises from observations about agents’ activities

These four assumptions place constraints on our computational account of this important process.

Page 8

A Sample Explanation

get-to(ht2, texaco1)
→ get-to(dtruck1, br-dump)
→ drive-to(tdriver1, dtruck1, br-dump) — at-loc(dtruck1, _) — at-loc(tdriver1, _)
→ navigate-vehicle(tdriver1, dtruck1, br-dump) — person(tdriver1) — vehicle(dtruck1) — can-drive(tdriver1, dtruck1) — at-loc(dtruck1, br-dump) — at-loc(tdriver1, br-dump)
→ get-in(ht2, dtruck1) — not(non-ambulatory(ht2)) — person(ht2)
→ climb-in(ht2, dtruck1) — at-loc(ht2, br-dump) — at-loc(dtruck1, br-dump) — fit-in(ht2, dtruck1) — at-loc(ht2, dtruck1)

→ get-to(dtruck1, texaco1)
→ drive-to(tdriver1, dtruck1, texaco1) — at-loc(dtruck1, br-dump) — at-loc(tdriver1, br-dump)
→ navigate-vehicle(tdriver1, dtruck1, texaco1) — person(tdriver1) — vehicle(dtruck1) — can-drive(tdriver1, dtruck1) — at-loc(dtruck1, texaco1) — at-loc(tdriver1, texaco1)
→ get-out(ht2, dtruck1) — not(non-ambulatory(ht2)) — person(ht2)
→ climb-out(ht2, dtruck1) — at-loc(ht2, dtruck1) — at-loc(dtruck1, texaco1) — at-loc(ht2, texaco1)

Page 9

Representing Plan Knowledge

We represent knowledge about activities in a notation similar to hierarchical task networks. For example:

navigate_vehicle(Driver, Veh, Loc, T_Start, T_End)
  at_location(Veh, VLoc, T_1, T_Start), at_location(Driver, VLoc, T_3, T_Start),
  driver(Driver), vehicle(Veh), can_drive(Driver, Veh, T_9, T_10),
  at_location(Veh, Loc, T_End, T_13), at_location(Driver, Loc, T_End, T_15),
  constraint(before(T_1, T_Start)), constraint(before(T_2, T_Start)),
  constraint(before(T_3, T_Start)), constraint(before(T_4, T_Start)),
  constraint(inside(T_Start, T_End, T_5, T_6)), constraint(before(T_End, T_14)),
  constraint(inside(T_Start, T_End, T_7, T_8)), constraint(before(T_End, T_13)),
  constraint(inside(T_Start, T_End, T_9, T_10)), constraint(before(T_End, T_15)),
  constraint(inside(T_Start, T_End, T_11, T_12)), constraint(before(T_End, T_16)).

This formalism separates conditions, effects, and invariants in terms of temporal constraints on antecedents.
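To give a concrete picture of how a rule like the one above might be held in memory, here is a minimal Python sketch; the class names and the abbreviated encoding are illustrative assumptions for exposition, not UMBRA's actual data structures.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Literal:
    """A predicate applied to arguments; strings starting with '?' are variables."""
    pred: str
    args: Tuple[str, ...]

@dataclass(frozen=True)
class Constraint:
    """A temporal relation over time-point variables, e.g. before(?t1, ?start)."""
    relation: str
    args: Tuple[str, ...]

@dataclass
class Rule:
    """An HTN-style rule: the head holds when the antecedents hold, subject to
    the temporal constraints that distinguish conditions, effects, and invariants."""
    head: Literal
    antecedents: Tuple[Literal, ...]
    constraints: Tuple[Constraint, ...]

# Abbreviated (hypothetical) encoding of the navigate_vehicle rule above.
navigate_vehicle = Rule(
    head=Literal("navigate_vehicle", ("?driver", "?veh", "?loc", "?start", "?end")),
    antecedents=(
        Literal("at_location", ("?veh", "?vloc", "?t1", "?start")),
        Literal("at_location", ("?driver", "?vloc", "?t3", "?start")),
        Literal("driver", ("?driver",)),
        Literal("vehicle", ("?veh",)),
        Literal("can_drive", ("?driver", "?veh", "?t9", "?t10")),
        Literal("at_location", ("?veh", "?loc", "?end", "?t13")),
        Literal("at_location", ("?driver", "?loc", "?end", "?t15")),
    ),
    constraints=(
        Constraint("before", ("?t1", "?start")),
        Constraint("before", ("?t3", "?start")),
        Constraint("inside", ("?start", "?end", "?t9", "?t10")),
        Constraint("before", ("?end", "?t13")),
        Constraint("before", ("?end", "?t15")),
    ),
)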

Page 10

The UMBRA Abduction System

We have developed UMBRA, an abductive inference system that:

• Accepts observations and adds them to working memory
• Incrementally extends an explanation by:
  - Finding rules with antecedents that unify with working memory elements
  - Tentatively completing each rule instance's missing antecedents
  - Selecting the rule instance R with the best evaluation score
  - Adding R’s inferred elements to memory as default assumptions
• Continues until no further observations arrive

This data-driven strategy, sketched in code below, aims to produce a coherent explanation in terms of available knowledge.

UMBRA is similar in spirit to AbRA (Bridewell & Langley, 2011).
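As a rough illustration of the control loop listed above, here is a minimal, self-contained Python sketch. Literals are plain tuples, the scoring function and helper names are assumptions made for exposition, and the whole thing is a sketch of the idea rather than UMBRA's actual implementation.

# Literals are tuples like ("at_loc", "dtruck1", "?where"); strings starting
# with "?" are variables.  A rule is a (head, antecedents) pair.

def unify(a, b, bindings):
    """Return extended bindings if literals a and b unify, else None."""
    if len(a) != len(b) or a[0] != b[0]:
        return None
    bindings = dict(bindings)
    for x, y in zip(a[1:], b[1:]):
        x = bindings.get(x, x)
        y = bindings.get(y, y)
        if x == y:
            continue
        if isinstance(x, str) and x.startswith("?"):
            bindings[x] = y
        elif isinstance(y, str) and y.startswith("?"):
            bindings[y] = x
        else:
            return None
    return bindings

def substitute(literal, bindings):
    return tuple(bindings.get(term, term) for term in literal)

def score(matched, assumed):
    """Prefer instances that explain more and assume less (an assumed stand-in
    for UMBRA's evaluation function, shown only for illustration)."""
    return len(matched) - 0.5 * len(assumed)

def extend_explanation(memory, rules):
    """One abductive step: pick the best rule instance and add its head and
    its missing antecedents to working memory as default assumptions."""
    best = None
    for head, antecedents in rules:
        for seed in memory:                       # anchor the instance on a wm element
            for ante in antecedents:
                bindings = unify(ante, seed, {})
                if bindings is None:
                    continue
                matched, assumed = [], []
                for other in antecedents:
                    inst = substitute(other, bindings)
                    (matched if inst in memory else assumed).append(inst)
                candidate = (score(matched, assumed), substitute(head, bindings), assumed)
                if best is None or candidate[0] > best[0]:
                    best = candidate
    if best is None:
        return False
    _, head, assumed = best
    memory.add(head)
    memory.update(assumed)                        # default assumptions
    return True

def understand(observation_stream, rules, steps_per_observation=3):
    """Incremental, data-driven abduction: process observations as they arrive."""
    memory = set()
    for obs in observation_stream:
        memory.add(obs)
        for _ in range(steps_per_observation):
            if not extend_explanation(memory, rules):
                break
    return memory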

Page 11

Experiments on Plan Understanding

Experiments with UMBRA on the Monroe corpus show that:

• The system can reconstruct much higher-level plan structure
• Even when only a fraction of agent actions are observed
• Incremental abduction is nearly as effective as batch processing

Page 12

Results on Plan Understanding

Precision and recall for each problem on ten ‘batch’ runs.

The former is very high on some tasks but not as good on others.

Differences are due to features of problems in the Monroe domain.

Recall is mediocre for similar reasons.

Page 13

Challenge: Social Understanding in Fables

A more challenging task involves reasoning about plans that take others' mental states into account.

This ability is required to understand Aesop-style fables like:

The Snake, the Lion and the Sheep. The lion is too old to chase down animals. The lion announces he is sick. The sheep, believing he is harmless, follows social convention and visits the lion's cave to pay his respects. The lion kills and devours the sheep. A snake watches these events and understands the deception that occurred.

Explanations of such stories include beliefs and goals about others’ beliefs and goals.

This requires extensions to representations in both working memory and long-term knowledge.

Page 14

Extending Working Memory

UMBRA represents agents’ mental states in terms of embedded structures like:

• belief(fox, has(crow, grapes, 0930, _), 0931, _)
• goal(crow, acquire_edible_food(crow, _, _))
• belief(snake, belief(lion, at_location(lion, river, 0900, _), 0902, _), 0902, _)
• belief(snake, goal(fox, trade_food(crow, grapes, fox, grain, 0940, _), 0930, _), 0933, _)
• goal(lion, belief(sheep, sick(lion, 0900, 2400), 0945, _), 0900, _)

Elements of this sort provide building blocks for explanations of scenarios that involve agents reasoning about others.
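A minimal Python sketch of how such embedded mental-state elements might be built and inspected; the constructors, time-argument handling, and depth measure below are illustrative assumptions rather than UMBRA's actual notation.

# Working-memory elements as nested tuples; the first item is the predicate,
# later items are arguments, and an embedded tuple is an embedded belief or goal.
# Time arguments follow the slide's (start, end) convention, with None for "_".

def belief(agent, content, start, end=None):
    return ("belief", agent, content, start, end)

def goal(agent, content, start=None, end=None):
    return ("goal", agent, content, start, end)

# goal(lion, belief(sheep, sick(lion, 0900, 2400), 0945, _), 0900, _)
lion_deception_goal = goal(
    "lion",
    belief("sheep", ("sick", "lion", "0900", "2400"), "0945"),
    "0900",
)

def nesting_depth(element):
    """How deeply mental states are embedded: 1 for a simple belief or goal,
    2 for a belief/goal about a belief, and so on."""
    if isinstance(element, tuple) and element[0] in ("belief", "goal"):
        return 1 + nesting_depth(element[2])
    return 0

assert nesting_depth(lion_deception_goal) == 2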

Page 15

Extending Knowledge about Activities

UMBRA also requires planning operators that influence others' mental states, such as for communicative actions:

announce_falsehood(Actor, Agent2, Content, START, END)
  neg(dead(Actor, T1, T2)), exists(Actor, T3, T4),
  belief(Actor, neg(Content), T5, T6),
  agent(Actor), agent(Agent2),
  announce_act(Actor, Agent2, Content, T_S, T_END),
  belief(Agent2, Content, T_END, T7),
  belief(Actor, belief(Agent2, Content, T_END, T8), T_END, T9),
  constraint(inside(T_S, T_END, T1, T2)), constraint(before(T_END, T8)),
  constraint(before(T_S, T_END)).

These structures, combined with domain knowledge, support abductive construction of complex social explanations.

Page 16

A Testbed for Social Understanding

We have constructed a domain and test scenarios, based largely on Aesop's fables, with knowledge that includes:

• About 60 distinct skills / operators
  - alternative decompositions
  - many with overlapping conditions
  - only ten percent used in any 'correct' fable explanation
  - about 500 domain-level conditions, excluding constraints
• About 100 distinct domain-level predicates

Most of the six scenarios involve plans that depend on one or more agents reasoning about the mental states of others.

Page 17

Results on Social Understanding

We have tested UMBRA on ‘fable’ scenarios that involve different levels of complexity beyond ‘basic’ plan understanding.

Nested understanding: The primary agent interprets another agent's mental states and/or plan based on observed behavior. Feeling hungry, a crow travels to a barn and acquires grain by opening a jar. A snake watches and understands the crow solving her simple problem.

Deeply nested understanding: The primary agent infers a secondary agent’s inferences about a third agent's mental states. A fox, watching the snake watching the crow, imagines what the snake thinks about the crow's situation.

Inferring mistakes in understanding: The primary agent infers another agent's mistaken beliefs, why they arise, and the true account. A lion is proud of his mane. He passes by a river, sees his reflection, and attacks the ‘other’ lion. An observing snake infers why he takes this action.

Page 18

Results on Social Understanding

Reasoning about opportunism in understanding: The primary agent understands how another agent capitalizes upon another's false beliefs. A hungry crow in possession of some sour grapes trades them to a fox, who assumes they are sweet, in return for delicious grain. A watching snake explains the interaction.

Reasoning about deception in understanding: The primary agent infers that another agent deliberately engenders false beliefs in a third agent in order to achieve some goal. A lion is too old to chase down animals. The lion announces he is sick. The sheep, believing he is harmless, follows social convention and visits the lion's cave to pay his respects. The lion kills and devours the sheep. A snake watches these events and understands the deception that occurred.

UMBRA constructs the desired explanations for each scenario, some of which involve deeply embedded mental models.

Page 19

Complete Structure of a Fable Explanation

Green = condition, Yellow = effect, Orange = invariant, Blue = constraint, Diamond = task / skill

Page 20

Portion of a Fable Explanation

Green = condition, Yellow = effect, Orange = invariant, Blue = constraint, Diamond = task / skill

Page 21

One Element of a Fable Explanation

Green = condition, Yellow = effect, Orange = invariant, Blue = constraint, Diamond = task / skill

Page 22

Challenge: Moral Judgement

An even more challenging cognitive task involves complex moral judgement, which we can specify as:

• Given: A sequence S of observed actions, including the agent(s) A who performed them;
• Given: Knowledge about these and related events, including their relation to moral concepts;
• Infer: An explanation E that accounts for S in terms of this knowledge and A’s beliefs, goals, and intentions; and
• Infer: A moral evaluation of S that takes into account the explanation E.

This task combines plan understanding with evaluation in terms of moral concepts.

Page 23

Claims about Moral Judgement

We maintain that complex moral judgement is a form of social plan understanding in that it:

• Focuses on the mental states of agents who interact in a given scenario;

• Depends on rules that abstract away from domain-specific details and focus on relations among mental states;

• Involves the linking of rule instances into some connected explanation of observed behavior.

However, the process also relies on calculating numeric values on elements that reflect evaluations of behavior.

Page 24

A Sample Moral Explanation

Consider a scenario in which one agent (John) causes another (Kelly) to feel pain by shoving her.

We might infer that John carried out this action deliberately so that Kelly would experience distress.

Page 25

Evaluations of Moral Explanations

We plan to extend UMBRA to support the evaluation of moral explanations by:

• Adding numeric annotations to long-term knowledge structures:
  - A default weight for each conceptual predicate
  - An upward factor for each rule's antecedent
  - A downward factor for each rule's antecedent
• Calculating an evaluation for each element in an explanation by:
  - Multiplying the sum of upward factors by the default value and propagating the result upward to the root(s)
  - Multiplying downward factors by the accrued values at root(s)

We also maintain that top-down influences account for the effect of mitigating factors on judgement scores.
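Since this evaluation scheme is still planned rather than implemented, the following is only a minimal Python sketch of one reading of the bullets above, assuming a tree-shaped explanation; the weight values, the John/Kelly example encoding, and the exact propagation rules are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One element of an explanation: a predicate instance plus the rule
    antecedents (children) that support it; up/down are the per-antecedent
    factors attached to the rule that introduced this element."""
    predicate: str
    up: float = 1.0
    down: float = 1.0
    children: List["Node"] = field(default_factory=list)

# Default weights for conceptual predicates (illustrative values only).
DEFAULT_WEIGHT = {"cause-pain": -0.6, "intend-harm": -0.4, "shove": -0.3}

def upward_pass(node):
    """Accrue a value at each element: its default weight scaled by the sum
    of its children's upward factors, plus what the children pass upward."""
    passed_up = sum(child.up * upward_pass(child) for child in node.children)
    factor = sum(child.up for child in node.children) or 1.0
    node.accrued = DEFAULT_WEIGHT.get(node.predicate, 0.0) * factor + passed_up
    return node.accrued

def downward_pass(node, root_value):
    """Scale each element's judgement by its downward factor and the value
    accrued at the root, so mitigating factors attenuate the overall score."""
    node.judgement = node.down * root_value
    for child in node.children:
        downward_pass(child, root_value)

def evaluate(root):
    root_value = upward_pass(root)
    downward_pass(root, root_value)
    return root_value

# John deliberately shoves Kelly, causing her pain (cf. the earlier scenario).
explanation = Node("cause-pain", children=[
    Node("intend-harm", up=1.4, down=1.0),
    Node("shove", up=1.0, down=0.8),
])
print(round(evaluate(explanation), 2))   # negative score = blameworthy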

Page 26

Problem Solving in ICARUS

The current ICARUS architecture incorporates a distinct module for problem solving that:

• Utilizes means-ends analysis
• Carries out depth-first search
• Interleaves tightly with skill execution
• Cannot reason about others’ mental states

These features do not reflect the character of human problem solving, which is far more flexible.

Our new framework aims to support such flexibility by using meta-level knowledge.

Page 27

Flexible Problem Solving

We have redesigned and reimplemented our meta-level approach to problem solving to support different:

• Search strategies (depth-first, breadth-first, iterative sampling)
• Intention selection strategies (means-ends, forward search)
• Intention application strategies (eager, delayed commitment)
• Failure conditions (depth limited, effort limited, loops)
• Solution conditions (single, multiple, all)

These behaviors are produced by differences among meta-level, domain-independent control rules associated with five modules (a rough sketch of this idea appears below).

Soar (Laird, 2012) takes a similar but finer-grained approach; our framework is closer to that in Prodigy (Minton, 1988).
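A minimal Python sketch of meta-level, domain-independent control rules choosing one strategy per module on each cycle; the rule format, the module keys, and the example conditions are illustrative assumptions, not the system's actual implementation.

# Each problem-solving stage (module) has alternative strategies; meta-level
# control rules pick one based on features of the current meta-level state.

STRATEGIES = {
    "search":                ["depth-first", "breadth-first", "iterative-sampling"],
    "intention-selection":   ["means-ends", "forward-search"],
    "intention-application": ["eager", "delayed-commitment"],
    "failure":               ["depth-limited", "effort-limited", "loop-detected"],
    "solution":              ["single", "multiple", "all"],
}

# A control rule fires when its condition holds of the meta-level state and
# proposes a strategy for one module.  These example rules are assumptions.
CONTROL_RULES = [
    (lambda s: s["depth"] > 20,        "failure",             "depth-limited"),
    (lambda s: s["time-pressure"],     "search",              "depth-first"),
    (lambda s: not s["time-pressure"], "search",              "breadth-first"),
    (lambda s: s["goals-known"],       "intention-selection", "means-ends"),
    (lambda s: not s["goals-known"],   "intention-selection", "forward-search"),
]

def configure(meta_state, defaults=None):
    """Return one strategy per module, letting matching control rules
    override the defaults for this cycle of problem solving."""
    choice = dict(defaults or {m: options[0] for m, options in STRATEGIES.items()})
    for condition, module, strategy in CONTROL_RULES:
        if condition(meta_state) and strategy in STRATEGIES[module]:
            choice[module] = strategy
    return choice

print(configure({"depth": 3, "time-pressure": True, "goals-known": True}))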

Page 28

Organization of Problem Solving

Problem solving occurs in cycles, with meta-level rules determining the system’s behavior at each successive stage.

Page 29

Problem Decompositions

Problems serve as the central organizing structure in our framework.

Down subproblems have the same state as their parents.

Right subproblems have the same goals as their parents.

This organization is the same as that in means-ends problem solving, but we use it to support very different strategies.
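To make the decomposition scheme concrete, here is a minimal Python sketch of problems as (state, goals) pairs with the 'down' and 'right' subproblem relations described above; the class and method names are illustrative assumptions rather than the framework's actual structures.

from dataclasses import dataclass, field
from typing import FrozenSet, List, Optional

@dataclass
class Problem:
    """A problem pairs a current state with a set of unsatisfied goals."""
    state: FrozenSet[str]
    goals: FrozenSet[str]
    parent: Optional["Problem"] = None
    children: List["Problem"] = field(default_factory=list)

    def down(self, subgoals):
        """Down subproblem: same state as the parent, a reduced set of goals."""
        child = Problem(self.state, frozenset(subgoals), parent=self)
        self.children.append(child)
        return child

    def right(self, new_state):
        """Right subproblem: same goals as the parent, a changed state
        (e.g., after applying an intention that achieved a precondition)."""
        child = Problem(frozenset(new_state), self.goals, parent=self)
        self.children.append(child)
        return child

    def solved(self):
        return self.goals <= self.state

root = Problem(frozenset({"A"}), frozenset({"B", "C"}))
sub = root.down({"B"})           # down: shares the parent's state
nxt = root.right({"A", "B"})     # right: shares the parent's goals
print(sub.state == root.state, nxt.goals == root.goals)   # True True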

Page 30

Plans for Future Research

Although we have made substantial progress toward the project goals, we still need to:

• Extend UMBRA to support belief revision when it decides its default assumptions are faulty
• Augment the meta-level problem solver to support execution of plans in the environment
• Integrate UMBRA’s inference mechanism with our approach to flexible problem solving
• Introduce mechanisms for learning structures from explanations
• Carry out experiments to demonstrate these extensions’ benefits

The resulting architecture should offer a more complete account of high-level cognition in humans.

Page 31

Summary Remarks

In this talk, I presented elements of a new cognitive architecture that addresses limitations of ICARUS. This architecture:

• Represents mental states in terms of embedded beliefs / goals
• Incorporates an incremental approach to abductive inference
• Combines these to support plan understanding:
  - Basic explanations of observed physical activities
  - Explanations that involve agents reasoning about other agents
  - Moral judgements that include inferences about agent intentions
• Uses meta-level control to support flexible problem solving

When integrated, these should give us a new version of ICARUS that has substantially greater breadth and flexibility.

Page 32

Publications and Presentations

Langley, P. (2012). The cognitive systems paradigm. Advances in Cognitive Systems, 1, 3-13.

Langley, P. (2012). Intelligent behavior in humans and machines. Advances in Cognitive Systems, 2, 3-12.

MacLellan, C., Langley, P., & Walker, C. (2012). A generative theory of problem solving. Poster Collection / First Annual Conference on Advances in Cognitive Systems, 1-18.

Meadows, B., Langley, P., & Emery, M. (in press). Seeing beyond shadows: Incremental abductive explanation for plan understanding. Proceedings of the AAAI-2013 Workshop on Plan, Activity, and Intent Recognition.

Liu, L., Langley, P., & Meadows, B. (in press). A computational account of complex moral judgement. Proceedings of the Annual Conference of the International Association for Computing and Philosophy.

The Cognitive Systems Paradigm. Presented at AAAI Fall Symposium on Advances in Cognitive Systems, Arlington, VA, November, 2011.

Intelligent Behavior in Humans and Machines. Presented at First Annual Conference on Advances in Cognitive Systems, Palo Alto, CA, December, 2012.

Page 33

Cooperative Development

Our research on this project has benefited from results produced on a number of other efforts:

• Commitments to hierarchical concepts / skills borrowed from the initial ICARUS architecture developed under ONR funding
• Representation of mental states developed jointly with the ONR MURI project at CMU
• Ideas on abductive inference co-developed with W. Bridewell in ONR MURI work at Stanford

These efforts have let us make more rapid progress than would have been possible otherwise.

Page 34

Transition Plan

Our research on computational social cognition has clear uses in virtual agents and human-robot interaction.

In the longer term, we hope to transition our results to applied settings like:

• Virtual medical assistants that interact with field medics to help them provide emergency care
• Cognitive robots that interact with Navy personnel dealing with shipboard problems (e.g., fighting fires)

We hope to take advantage of existing relationships with NRL researchers to increase the chances of successful transitions.

Page 35

Project Budget

The research project’s budget, by federal fiscal year, is:

• FY2012: $118K

• FY2013: $179K

• FY2014: $182K

• FY2015: $60K

No DURIP grants were awarded in relation to this project.

Page 36

End of Presentation