Agent Behaviour and Knowledge Representation. K. V. S. Prasad, Dept. of Computer Science, Chalmers University, 23 March 2012.


Page 1: Agent Behaviour and Knowledge Representation

Agent Behaviour and Knowledge Representation

K. V. S. Prasad, Dept. of Computer Science

Chalmers University, 23 March 2012

Page 2: Agent Behaviour and Knowledge Representation

The two branches of TCS

• Semantics
  – What do you want the system to do? Specify a sort
  – How do you know (what) it does?
  – Build the right system; build the system right
  – Specification (synthetic biology example)
  – Testing
• Complexity
  – How much time/space?
  – Superfast? Program a sort by shuffling

Page 3: Agent Behaviour and Knowledge Representation

Where we are

• You have already seen AI in action
  – Search, natural language, Bayesian inference
  – Hard problems, but already clearly posed
• That is, mostly efficiency (central to AI)
• What about semantics in AI? Specify humans?

Page 4: Agent Behaviour and Knowledge Representation

Semantics for AI

• Today: what problems do we want to solve?
  – Evolve the language(s) in which to pose the problem
• Chapter 2: Intelligent Agents
• Chapter 7: Logical Agents
• Chapter 8: First-Order Logic
• Chapter 9: Inference in First-Order Logic
• Chapter 12: Knowledge Representation
  – Harder to get to grips with
    • But ”softer” than the hard problems above
  – Often tidying up earlier work, seeing commonalities, …

Page 5: Agent Behaviour and Knowledge Representation

The meaning of ”meaning”

• What does ”a tree” mean?
  – Point to a tree, say ”ett träd” (Swedish for ”a tree”), say ”grown-up plant”, …
• Only works if you already know what a tree is, or ”träd”, or ”plant”
• Semantic bootstrapping! (Babies are smart!)

Page 6: Agent Behaviour and Knowledge Representation

What does a tree mean?

• Food, shade, wood, paper, medicine, wildlife, ecology, …
• These meanings are in our heads
  – The tree itself is all of these and more
• The University is split into departments, not the world
  – So we choose how to look at the tree
• So, is any model at all OK?
  – No, it must be right at least in its own terms
  – We find out the hard way if we take a narrow view
    • Then we have to change the model

Page 7: Agent Behaviour and Knowledge Representation

Models and Reality

• The natural sciences model reality
  – Different models for different purposes use different abstractions (physics, chemistry, biology, economics, …)
  – Look for predictive and explanatory power
• Natural language itself is a model of the world
  – Eerily good: ”If we don’t hurry, we’ll miss the train”
  – How did it get that good? Evolution
• Artificial worlds (games)
  – Measure of the model
    • Not correspondence to some other reality: there is none
    • Instead: how interesting this world is
  – Abstractions usually obvious: plastic = wooden chess pieces

Page 8: Agent Behaviour and Knowledge Representation

The book sees AI = the science of agent design

• So define agents
• How to specify performance
  – Describe environments
  – Classify environments
• Basic design skeletons for agents
• Learning (where it fits in)
• All very high level, lots of definitions
• So the goal: to get you into the book
  – Chapter 2 is the framework
  – Boring but needed, like the ”how to use” pages of a dictionary

Page 9: Agent Behaviour and Knowledge Representation

Chapter 2: Intelligent Agents

Notes to textbook and Russell’s slides

Page 10: Agent Behaviour and Knowledge Representation

What are agents?

• AI used to look only at agent components
  – Theorem provers, vision systems, etc.
• Robots are agents
• So are controllers in control theory
• Agents have sensors, actuators and programs
• Ordinary (reactive) programs? No:
  – Limited pre-knowledge of actual situations
  – Rational: takes the best action as it appears at the time, not what actually is best
  – Autonomy
  – Learning

Page 11: Agent Behaviour and Knowledge Representation

Are we agents?

• We seem to fit the description!
• For whom or what are we agents?
  – As animals, agents of our genes! (Dawkins)
  – Higher-level emergent structures allow us to be
    • Agents of ideology (sacrificial actions)
    • Agents of society (monogamy, contraception)
    • Agents of our ”selves” (gratification seeking)

Page 12: Agent Behaviour and Knowledge Representation

Semantics of Sequential Agents

• Agent = sequential program?
  – The old view (CSALL largely lives here)
• Functional view: the input/output table is the specification
• Language view: recognizers for the Chomsky hierarchy
  – Two machines are ”the same” if they recognize the same language
  – Even though one might be smaller or faster
  – One can replace the other
    • Same spec, different implementations
    • The spec is one abstraction, speed/size another

Page 13: Agent Behaviour and Knowledge Representation

Reactive Agents: some features

• Conversation with the environment
  – I don’t know my second question till you answer my first
• So it can’t be a function from the input sequence
• Non-determinism (in ad-hoc notation)
  – a! ((b? e!) or (c? f!)) will always accept b
  – (a! b? e!) or (a! c? f!) won’t always accept b
  – The two machines are ”may-equal” but not ”must-equal”
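
A minimal sketch of the may/must distinction above, assuming Python and an invented toy representation (not any particular process calculus; the names may_accept and must_accept are made up here): each process is modelled as the set of states it could be in after the initial a! output.

```python
# Toy model of the two processes after the initial a! output.
# Each state is the set of inputs the process is ready to accept next.

# a! ((b? e!) or (c? f!)): one state, ready for either b or c
P1_after_a = [{"b", "c"}]

# (a! b? e!) or (a! c? f!): the choice is made at a!, so two possible states
P2_after_a = [{"b"}, {"c"}]

def may_accept(states, action):
    """The process may accept `action` in at least one possible state."""
    return any(action in s for s in states)

def must_accept(states, action):
    """The process accepts `action` in every possible state."""
    return all(action in s for s in states)

for name, proc in [("P1", P1_after_a), ("P2", P2_after_a)]:
    print(name, "may accept b:", may_accept(proc, "b"),
          "| must accept b:", must_accept(proc, "b"))
# P1 may and must accept b; P2 may accept b but need not:
# the two are may-equal but not must-equal.
```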

Page 14: Agent Behaviour and Knowledge Representation

Agents interact with environments

Page 15: Agent Behaviour and Knowledge Representation

Chap 2 from Russell

• Russell’s slides

Page 16: Agent Behaviour and Knowledge Representation

Agents and Environment (S4-6, B34-36)

• The action can depend on the percept history
  – Tabulate the agent function
• Externally, note what the agent does for each percept sequence
  – An infinite table, if the input sequence is unbounded
• Internally, the agent is implemented by a (finite) program
  – The external table is of the agent function
    • It says what the agent actually does
    • A similar table can say what it should do (the specification)
• What do we want the agent to do?
• Is everything an agent? A bridge? A calculator?
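
As a concrete illustration of tabulating the agent function, here is a minimal sketch, assuming Python; the percepts, the table rows and the name table_driven_agent are invented for illustration (a tiny fragment of a two-square vacuum world).

```python
# Table-driven agent: the action is looked up by the entire percept history.
# The table below is a tiny made-up fragment for a two-square vacuum world.

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    (("A", "Clean"), ("B", "Clean")): "Left",
}

percepts = []  # the percept history grows without bound

def table_driven_agent(percept):
    percepts.append(percept)
    # Every distinct history needs its own row, which is why the table
    # is huge or even unbounded, even for small worlds.
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Clean")))   # Right
print(table_driven_agent(("B", "Dirty")))   # Suck
```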

Page 17: Agent Behaviour and Knowledge Representation

Problems with the R&N definition

• Agent behaviour = f(percept history)
  – Assumes the agent’s responses are deterministic
  – If not
    • the percept at n+1 depends on the agent’s response at n
    • so behaviour depends on the percept history intertwined with the agent’s responses
  – Even if deterministic
    • some histories make no sense
      – The agent has moved right, but the percept says ”left”
      – Possible if some external force moved the agent
• Asymmetric between agent and environment

Page 18: Agent Behaviour and Knowledge Representation

Good behaviour: rationality (B36-37)

• What should the agent do?
  – Decide by consequences
    • As judged by the change in the environment
    • By the change in the agent? Sour grapes, etc.
• What should we measure?
  – The amount of dirt cleaned?
    • A clean-then-dump cycle can maximise this
  – A point per clean square per time unit
    • The same average can come from wild swings and from being always mediocre
    • Distinguish them by energy use?
• Design the spec from the point of view of the environment
  – Not from how you think the agent should behave

Page 19: Agent Behaviour and Knowledge Representation

Rationality (B37-38)

• What is the rational action? It depends on
  – What actions the agent can perform
  – The performance measure (to be maximised)
  – The agent’s knowledge
    • Built in
    • The percept (input) sequence to date
• The agent of S6 optimises ”clean squares per time unit”
  – What if moves incur a penalty?
  – Can a clean square become dirty again?

Page 20: Agent Behaviour and Knowledge Representation

Living with the limits of rationality (B38-39, S7)

• The spec can’t demand what will turn out best
  – Unless we are omniscient (crystal balls)
• Hence only the percept history
  – So a rational agent can only maximise the expected outcome
• But in AI
  – Look if you can
  – Learn from what you see (look at the results!)
  – Built-in knowledge = lack of autonomy
    • Needed at the start, but with learning
      – The agent can go beyond its built-in knowledge
      – And succeed in many environments
• Randomisation can be rational: Ethernet

Page 21: Agent Behaviour and Knowledge Representation

Environments (B40-42, S8-11)

• The environment is the task (the spec)
  – To which the agent is the solution
  – The automated taxi is an open-ended problem
  – Real life can be simple (robots on assembly lines)
  – Artificial life can be complex (softbots)
    • ”Show me news items I find interesting”
    • Needs some NLP and ML, and must cope with a dynamic world
    • Many human and artificial agents in this world
• Environments can be agents (process calculus)

Page 22: Agent Behaviour and Knowledge Representation

Why classify environments? (S12-18)

• We design agents for a class of environments
  – An environment simulator produces many environments of a given class
    • To test and evaluate the agent
    • Designing for a single scenario = unconscious cheating
• Classification is often debatable
  – An assembly robot can see many parts wrongly: learn
  – Game players usually know the rules
    • Interesting to see what they do if they don’t

Page 23: Agent Behaviour and Knowledge Representation

Observability of Environments

• Observable? Fully or partially?
  – Those aspects relevant to performance
  – Partial because of noise or unavailable data
    • Is there dirt in other squares? What will the other car do?
  – Observables can be re-read rather than stored
• Known =/= observable
  – You know the rules of solitaire, but can’t observe the face-down cards
  – New phone: you see the buttons, but what do they do?

Page 24: Agent Behaviour and Knowledge Representation

Multi-agent Environments

• Single-agent? Multi-agent?
  – Chess is two-agent, solitaire is one-agent
  – Other vehicles on the road are agents, but not the road itself
  – Agents either cooperate or compete
    • Each tries to maximise its own performance
    • Competing: chess, parking
    • Cooperating: avoiding collisions
• When is the other thing an agent?
  – Always, for careful animals (and superstitious humans?)
  – Another car? The wind?
  – When its behaviour is best seen as responding to me
    • Communication with me

Page 25: Agent Behaviour and Knowledge Representation

(Non)deterministic Environments (B43)

• Deterministic?
  – If the next state (of the environment) depends only on
    • the current state and the action of the agent
• Otherwise stochastic
  – Unexpected events: tires blowing out, etc.
  – Incomplete information might make the environment look stochastic
  – ”Stochastic” implies probabilities (e.g. chemistry)
• Non-deterministic
  – Like stochastic, but with no probabilities
  – Performance requires success for all outcomes
    • Examples from process calculus, sorting, 8 queens

Page 26: Agent Behaviour and Knowledge Representation

Episodic vs. Sequential Environments (B43-44)

• Sequential
  – History matters
  – Chess is sequential
    • This action affects subsequent ones (the environment’s state changes)
• Episodic = actions independent of previous ones
  – Spotting faulty parts on an assembly line (same rules each time)
  – No need to think ahead
  – Soap operas are largely episodic

Page 27: Agent Behaviour and Knowledge Representation

Static vs Dynamic Environments

• Dynamic
  – The environment changes while the agent thinks (driving)
  – Chess is partly so (the clock runs out)
  – Like real time: taking no action counts as a ”pass”
• Semi-dynamic
  – No change in the environment, but the score changes

Page 28: Agent Behaviour and Knowledge Representation

(Un)known Environments (B44)

• Does the agent know the rules, or not?
  – i.e., does it know the outcomes of all actions?
  – If it doesn’t know, the agent has to learn what happens
• Known?
  – Refers to the rules. A face-down card is not observable, but the rules are known.
  – New phone? You don’t know what some buttons do.

Page 29: Agent Behaviour and Knowledge Representation

Continuous Environments

• Discrete vs continuous
  – Can apply to states, time, percepts and actions

Page 30: Agent Behaviour and Knowledge Representation

The structure of agents

• Agent = architecture + program
• In our course we don’t consider
  – Physical actuators (robots)
  – Physical sensors (cameras, vision)
  – So ”architecture = general-purpose computer”
  – So agent = program

Page 31: Agent Behaviour and Knowledge Representation

Agent program skeleton

• function AGENT(INPUT) returns ACTION
    persistent: STATE, STATEFUN, ACTFUN
    STATE := STATEFUN(STATE, INPUT)
    return ACTFUN(STATE, INPUT)

Here STATE keeps track of history. The two FUNs could be just table lookups, but these tables would be huge or even unbounded.

Can we replace the tables by programs?
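
As a concrete illustration, here is a minimal runnable version of the skeleton above, assuming Python; the counting state function and the reporting action function are invented placeholders, not part of the slides.

```python
# Agent skeleton: persistent state updated from each input (percept),
# then an action chosen from the new state and the input.

def make_agent(state, statefun, actfun):
    def agent(percept):
        nonlocal state
        state = statefun(state, percept)   # STATE := STATEFUN(STATE, INPUT)
        return actfun(state, percept)      # return ACTFUN(STATE, INPUT)
    return agent

# Toy instance: STATE counts the percepts seen so far (a crude "history"),
# and the action just reports that count alongside the current percept.
count_percepts = lambda state, percept: state + 1
report = lambda state, percept: f"action #{state} for percept {percept!r}"

agent = make_agent(0, count_percepts, report)
print(agent("dirty"))   # action #1 for percept 'dirty'
print(agent("clean"))   # action #2 for percept 'clean'
```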

Page 32: Agent Behaviour and Knowledge Representation

Tables and Programs

• Log tables and square-root tables (ca 1970)
  – Replaced by programs in the calculator
  – Not always the best:
    • Learn your ”times tables”; don’t add 4 seven times
    • Don’t keep re-deriving (a+b)^2 = a^2 + 2ab + b^2
    • ”Facts at your fingertips” is often the sign of a professional
    • Reflexes faster than thought is more hard vs soft
    • ”Intelligence” = facts, to a shocking extent
      – See the Cyc project (Doug Lenat)

Page 33: Agent Behaviour and Knowledge Representation

Simple Reflex Agents (B48-50, S19-21)

• Reflex agents come from behavioural psychology
  – Stimulus/response
  – Functionalism introduced the state of the agent
• The vacuum cleaner (S20, 21)
  – Depends only on the current location and ”dirty?”
  – A small program, because there is no history
  – Also, if dirty, the action is independent of location
• Also, it only works if the environment is fully observable
  – i.e. the current percept is the only information needed
  – If the vacuum cleaner can’t see where it is, …
    • ”Left” fails if it is already in the Left square
    • Try flipping a coin to decide LEFT/RIGHT; then it doesn’t need to see its location
• Note the LISP code
  – Historically important: John McCarthy, 1959, SAIL, etc.
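
A minimal sketch of the two-square vacuum reflex agent in the spirit of the textbook’s example, written here in Python rather than the LISP mentioned on the slide; the coin-flipping variant for the unobservable-location case is also sketched.

```python
import random

# Simple reflex agent: the action depends only on the current percept,
# which is a (location, status) pair.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"          # action independent of location when dirty
    elif location == "A":
        return "Right"
    else:
        return "Left"

# If the agent cannot see its location (partial observability), "Left"
# can fail; flipping a coin for the move can still work (cf. the slide).
def blind_reflex_agent(percept):
    _, status = percept
    return "Suck" if status == "Dirty" else random.choice(["Left", "Right"])

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("A", "Clean")))   # Right
print(blind_reflex_agent((None, "Clean")))   # Left or Right, at random
```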

Page 34: Agent Behaviour and Knowledge Representation

Reflex agents with state (B50-52, S22-23)

• You see only one page of a book at a time
  – But you can read because you remember
  – History can compensate for some partial observability
  – Drivers (and secret service agents)
    • Can’t see everyone at once, but
    • I remember where they were when I last looked
    • I need to predict (guess) where they would be now
  – ”How the world evolves” and ”effect of actions” in S22
• Hence ”model-based” reflex agents
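
A minimal structural sketch of a model-based reflex agent, assuming Python; the update function and the rules below are invented placeholders standing in for ”how the world evolves” and ”effect of actions”.

```python
# Model-based reflex agent: internal state summarises the unseen parts
# of the world, updated from the last action and the new percept.

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                    # the agent's best guess about the world
        self.last_action = None
        self.update_state = update_state   # "how the world evolves" + "effect of actions"
        self.rules = rules                 # list of (condition, action) pairs

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"

# Tiny usage: remember the dirt status of each square as it is seen.
def update(state, last_action, percept):
    loc, status = percept
    new = dict(state, here=loc)
    new[loc] = status
    return new

rules = [
    (lambda s: s.get(s.get("here")) == "Dirty", "Suck"),
    (lambda s: s.get("here") == "A", "Right"),
    (lambda s: True, "Left"),
]

agent = ModelBasedReflexAgent(update, rules)
print(agent(("A", "Dirty")))   # Suck: the current square is remembered as dirty
print(agent(("A", "Clean")))   # Right: A is now remembered as clean
```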

Page 35: Agent Behaviour and Knowledge Representation

Goal-based agents (B52-53, S24)

• In reflex agents, the current percept selects the action
• If you have a goal
  – Easy if one step gets you there
  – Search and planning if you need a sequence of steps
    • Need to represent what happens if I do ”X”
    • This explicitly represented knowledge can be updated
      – If it’s raining, update the taxi’s brake info (the brakes are less effective)
        » A reflex agent would need many condition-action rules
  – Easy to change the goal; it is not hard-coded as in a reflex agent

Page 36: Agent Behaviour and Knowledge Representation

Utility (B53-54, S25)

• Goals are binary: happy or unhappy
• Utility is more general: how happy is the agent?
  – For conflicting goals, utility specifies the trade-off
  – For several uncertain goals
    • weigh the chances against the importance of each goal
• The performance measure grades sequences of environment states
  – It distinguishes between better and worse routes
  – The agent has a utility function that should match this
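
A minimal sketch of ”weigh the chances against the importance of each goal”, assuming Python; the two routes and their numbers are invented for illustration.

```python
# Expected utility: weigh the chance of each outcome against how much
# the agent values it, then pick the action with the highest expectation.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "fast_route": [(0.7, 10), (0.3, -20)],   # quick, but risks a big delay
    "safe_route": [(1.0, 6)],                # certain, moderate payoff
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, {a: expected_utility(o) for a, o in actions.items()})
# safe_route wins here: 6.0 vs 0.7*10 + 0.3*(-20) = 1.0
```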

Page 37: Agent Behaviour and Knowledge Representation

Learning (B54-57, S26)

• The learner may be simpler to build than the learned agent
• ”Performance element”
  – Takes in percepts, decides on actions
  – Previously this was the whole agent
• The learning element uses feedback from the critic
  – It modifies the performance element to do better
  – Ask ”what kind of PE do I need now?”
  – A percept says ”won”; the critic (the spec) says this is good
• The problem generator suggests experiments
  – Locally suboptimal may be optimal in the long run
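
A minimal structural sketch of this four-part learning agent, assuming Python; the component behaviours here are trivial placeholders to show how the pieces connect, not the textbook’s algorithms.

```python
import random

# Learning agent skeleton: the performance element chooses actions, the
# critic grades outcomes against the performance standard, the learning
# element adjusts the performance element, and the problem generator
# proposes exploratory actions. (All four components are placeholder stubs.)

class LearningAgent:
    def __init__(self):
        self.rules = {"Dirty": "Suck", "Clean": "Right"}  # performance element

    def performance_element(self, percept):
        return self.rules.get(percept, "NoOp")

    def critic(self, percept, reward):
        return reward                         # feedback passed straight through

    def learning_element(self, percept, action, feedback):
        if feedback < 0:                      # the action worked badly:
            self.rules[percept] = "Left"      # crude rule modification

    def problem_generator(self):
        return random.choice(["Left", "Right"])   # suggest an experiment

    def step(self, percept, reward, explore=False):
        action = self.problem_generator() if explore else self.performance_element(percept)
        self.learning_element(percept, action, self.critic(percept, reward))
        return action

agent = LearningAgent()
print(agent.step("Dirty", reward=+1))        # Suck
print(agent.step("Clean", reward=-1))        # Right, then the rule for "Clean" changes
print(agent.performance_element("Clean"))    # Left
```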

Page 38: Agent Behaviour and Knowledge Representation

Learning taxi example

• The PE allows cutting across other cars
  – The critic sees the bad effects
  – The LE makes a new rule, modifying the PE
  – The problem generator says ”try the brakes on a wet road”
• The LE modifies ”how the world evolves” and ”the effects of my actions”
  – Given the brake pressure, how much deceleration?
• Part of the input may directly grade the agent
  – (No) tips from (dis)satisfied customers

Page 39: Agent Behaviour and Knowledge Representation

Knowledge and AI

• To act intelligently, machines need
  – Reasoning and creativity
    • glamorous
  – But also (mostly?) knowledge
    • boring?
• A smart assistant/teacher/expert
  – Has the facts at their fingertips
  – Doesn’t have to be told things
  – Can tell you the relevant facts faster than you can find them with Google and Wikipedia
    • Why do we still go to a doctor or a lawyer?

Page 40: Agent Behaviour and Knowledge Representation

Propositional Logic (Chap 7)

• From Russell’s slides

Page 41: Agent Behaviour and Knowledge Representation

Counter-cultural currents

• The Cyc project
  – To teach a computer millions of facts
  – How to structure this knowledge?
• E. D. Hirsch’s ”Cultural Literacy”
  – A long tradition says let the student discover
    • My children know nothing about the Civil War!
  – No matter, we teach them how to find out about anything
    • So they will learn about it when they want to
    • But they might never want to
      – Then they remain shut out of much debate

Page 42: Agent Behaviour and Knowledge Representation

Machines are not human

• Human memory is leaky
  – Stories are easier to remember than disjointed facts
    • Biology used to be one damn fact after another, perhaps grouped or organised, but with no ”how” or ”why”
    • With evolution, it makes sense
• Machine memory doesn’t leak
  – All structures are equally easy to remember
  – Choose based on speed of lookup
• Studying birds is different from building planes: a lesson for AI

Page 43: Agent Behaviour and Knowledge Representation

What is knowledge?

• Structures through which to filter experience
  – Is that just a bunch of stripes, or is it a tiger?
    • We evolved to get this right!
  – What a pretty red!
    • It’s a traffic light, so stop!
• Inborn
  – Babies smile at two dots on a white sheet (a face!)
  – Babies and birds can count to three
• Learned
  – Put the milk in the fridge or it will spoil
  – 4 * 2 = 8 (there are no 4’s or 2’s in nature, but there are 4 boys and 2 apples; brace, couple, pair, … all mean 2)

Page 44: Agent Behaviour and Knowledge Representation

Stored knowledge

• Why learn the multiplication table?
  – Lookup is faster than re-computation
• Breadth is better than depth for fast search
  – Deep-description vs flat-name address styles
    • 45, 3rd Cross, 2nd Main, Jayanagar IV Block, Bangalore
      – 4532D Jayanagar?
    • Rännvägen 6, Gbg (but most don’t know where this is)
      – So you need a description: how to get there
  – Why do languages have so many words?
    • Because names are faster than descriptions
    • Two words of a title are enough to identify a book

Page 45: Agent Behaviour and Knowledge Representation

Wumpus Axioms in Prop. Logic

• KB = axioms + percept sentences
• Axioms
  – not P(1,1) and not W(1,1)
  – B(1,1) iff P(1,2) or P(2,1) … similarly for B(n,m)
  – S(1,1) iff W(1,2) or W(2,1) … similarly for S(n,m)
  – W(1,1) or W(1,2) or … or W(2,1) or W(2,2) or …
    • There is at least one wumpus
  – not (W(1,1) and W(1,2)) … for all pairs: at most one wumpus (with the disjunction above, exactly one)
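
The same axioms written out in standard propositional notation (a reconstruction of the slide’s shorthand, assuming the textbook’s usual 4x4 board, with P, W, B, S for pit, wumpus, breeze and stench):

```latex
\begin{align*}
& \lnot P_{1,1} \land \lnot W_{1,1}\\
& B_{x,y} \leftrightarrow \big(P_{x,y+1} \lor P_{x,y-1} \lor P_{x+1,y} \lor P_{x-1,y}\big)
  && \text{for each square (out-of-board neighbours omitted)}\\
& S_{x,y} \leftrightarrow \big(W_{x,y+1} \lor W_{x,y-1} \lor W_{x+1,y} \lor W_{x-1,y}\big)\\
& W_{1,1} \lor W_{1,2} \lor \dots \lor W_{4,4}
  && \text{at least one wumpus}\\
& \lnot\big(W_{i,j} \land W_{k,l}\big) \;\text{ for all distinct squares}
  && \text{at most one wumpus}
\end{align*}
```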

Page 46: Agent Behaviour and Knowledge Representation

Wumpus Percepts in Prop. Logic

• Should we add ”Stench” to the KB?
  – No, it depends on when; otherwise we get a contradiction
  – Index all percepts, positions and directions by time
  – L(t,x,y) => (Breeze(t) iff B(x,y)), etc.
  – Actions?
    • L(t,x,y) and East(t) and Forward(t) => L(t+1,x+1,y) and not L(t+1,x,y)
    • For each possible t, x and y, and each action
    • Also state what remains unchanged: the frame problem
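
The same time-indexed sentences in standard notation (a reconstruction; the superscript t is the time step, and the last line is a frame-style axiom of the kind the textbook uses for HaveArrow):

```latex
\begin{align*}
& L^{t}_{x,y} \Rightarrow \big(\mathit{Breeze}^{t} \leftrightarrow B_{x,y}\big)\\
& L^{t}_{x,y} \land \mathit{East}^{t} \land \mathit{Forward}^{t}
  \Rightarrow L^{t+1}_{x+1,y} \land \lnot L^{t+1}_{x,y}\\
& \mathit{HaveArrow}^{t} \land \lnot\mathit{Shoot}^{t} \Rightarrow \mathit{HaveArrow}^{t+1}
  && \text{(what does not change)}
\end{align*}
```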

Page 47: Agent Behaviour and Knowledge Representation

What a mess, but …

• Predicate Logic gets rid of the separate propositions for each value of t, x and y

• Situation Calculus and other planning languages take care of the frame problem

• But you can see that plans can be made by propositional inference. Prove:
  – Init and action axioms => HaveGold and Out

• Programs from proofs!

Page 48: Agent Behaviour and Knowledge Representation

Formal Languages (Chap 8)

• Programming languages (C++, Java, …) are formal
  – They represent computation steps directly
  – Knowledge is ad hoc (in data or algorithms)
  – Deriving facts from other facts? Hard in Java, etc.
  – Partial information is hard. How do you say ”W(2,3) or W(3,2)”?
• Databases = domain-independent storing/reading of a KB
• ”Declarative”
  – The KB and the inference are separate
  – Functional programming: the script and eval are separate

Page 49: Agent Behaviour and Knowledge Representation

First Order Logic (FOL)

• From Russell’s slides, Chap 8

Page 50: Agent Behaviour and Knowledge Representation

Alternative Semantics for FOL

• ”Richard has two brothers, John and Geoff”
  – B(J,R) and B(G,R) and J =/= G and forall x. B(x,R) => (x = J or x = G)
  – What a pain! So people make mistakes.
  – The popular database solution:
    • Unique names assumption
      – J =/= G is assumed, since the names differ
    • Closed world assumption
      – Atomic sentences not stated to be true are assumed false
    • Domain closure
      – There are no domain elements other than those named
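
Under database semantics, the same fact needs only the two atomic sentences; the rest is supplied by the three assumptions (a reconstruction in standard notation):

```latex
\begin{align*}
\text{Full FOL:} \quad & B(J,R) \land B(G,R) \land J \neq G \;\land\;
  \forall x.\; B(x,R) \Rightarrow (x = J \lor x = G)\\
\text{Database semantics:} \quad & B(J,R) \land B(G,R)\\
& \text{unique names: } J \neq G \text{ follows from the distinct names}\\
& \text{closed world: } \lnot B(x,R) \text{ for every other named } x\\
& \text{domain closure: the only individuals are the named ones}
\end{align*}
```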

Page 51: Agent Behaviour and Knowledge Representation

Knowledge Engineering in FOL

• ”Knowledge engineering” = how to build a KB
• ”Upper ontology”
  – General concepts above, specific concepts below
• General ontologies
  – Built by specialists, or by laypeople
  – Study existing databases
  – Study texts

Page 52: Agent Behaviour and Knowledge Representation

Categories and Objects

• We shop for a new copy of R&N’s AI
  – Not a particular copy
• We see what looks like an orange
  – And make predictions about that particular fruit
• How to represent categories?
  – As predicates: Orange(f)
  – Reified as objects: Member(f, Oranges)
    • Allows subcategories, inheritance, …
    • As taxonomies: Linnaeus, Dewey, …
  – Subclass and Member are not the only relations
    • Disjoint categories + exhaustive decomposition = partition

Page 53: Agent Behaviour and Knowledge Representation

Categories (contd.)

• Exhaustive({S, D, F, Nrg, I}, N)
  – Exhaustive(s, N) iff (forall p. p in N iff exists c. c in s and p in c)
  – A person is Nordic iff they are a citizen of some c in s
• Disjoint({Animals, Vegetables})
  – Disjoint(s) iff (forall c =/= c' in s. c and c' are non-intersecting)
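
The same definitions in standard FOL notation (a reconstruction; ExhaustiveDecomposition, Disjoint and Partition are the textbook’s names for these relations):

```latex
\begin{align*}
\mathit{ExhaustiveDecomposition}(s, c) &\Leftrightarrow
  \forall p.\; \big(p \in c \leftrightarrow \exists c'.\; c' \in s \land p \in c'\big)\\
\mathit{Disjoint}(s) &\Leftrightarrow
  \forall c_1, c_2.\; \big(c_1 \in s \land c_2 \in s \land c_1 \neq c_2
  \Rightarrow c_1 \cap c_2 = \varnothing\big)\\
\mathit{Partition}(s, c) &\Leftrightarrow
  \mathit{Disjoint}(s) \land \mathit{ExhaustiveDecomposition}(s, c)
\end{align*}
```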

Page 54: Agent Behaviour and Knowledge Representation

How much vs. How many

• Stuff vs. things
• A category of objects with only intrinsic properties (density, colour, …) is stuff
  – b in Butter and PartOf(p,b) => p in Butter
• If it has any extrinsic properties (weight, length, shape, …), it is a thing
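
The butter example written out in full (a reconstruction: any part of something that is butter is itself butter, which is what makes Butter ”stuff”):

```latex
\forall b, p.\;\; b \in \mathit{Butter} \land \mathit{PartOf}(p, b) \Rightarrow p \in \mathit{Butter}
```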

Page 55: Agent Behaviour and Knowledge Representation

Turing Award Winners in AI

• 1969 Marvin Minsky
• 1971 John McCarthy
• 1975 Allen Newell and Herb Simon
• 1994 Edward Feigenbaum and Raj Reddy
• 2011 Judea Pearl
• Also several others relevant to semantics, logic, etc. See Wikipedia, ”Turing Award”