Do software agents know what they talk about?

Agents and Ontology
dr. Patrick De Causmaecker, Nottingham, March 7-11, 2005

TRANSCRIPT

Page 1: Do software agents know what they talk about?

Do software agents know what they talk about?

Agents and Ontology

dr. Patrick De Causmaecker, Nottingham, March 7-11, 2005

Page 2: Do software agents know what they talk about?


Definition revisited

Autonomy (generally accepted)
Learning (not necessarily, maybe undesirable …)

An agent is a computer system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its design objectives.

Page 3: Do software agents know what they talk about?


[Diagram: an Agent coupled to its Environment through action output and sensor input]

Page 4: Do software agents know what they talk about?


Definition

An agent
Has impact on its environment
Has partial control
Actions may have non-deterministic effects

The agent has a set of possible actions, which may or may not make sense depending on environment parameters.

Page 5: Do software agents know what they talk about?


The fundamental problem

The agent must decide which of its actions are best suited to meet its objectives.

An agent architecture is a software structure for a decision system that functions in an environment.

Page 6: Do software agents know what they talk about?


Example: a control system

A thermostat works according to the rules
Too cold => heating on
Temperature OK => heating off

Distinguish environment, action, impact.
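As an illustration, a minimal sketch of the thermostat rules in Python (the setpoint value and function name are illustrative, not from the slides):

```python
# Minimal sketch of the thermostat agent: it senses one environment
# parameter (the temperature) and maps it to one of two actions.
SETPOINT = 20.0  # illustrative threshold

def thermostat(temperature: float) -> str:
    """Decide an action from the sensed temperature."""
    if temperature < SETPOINT:   # too cold => heating on
        return "heating on"
    return "heating off"         # temperature OK => heating off

print(thermostat(17.5))  # -> heating on
print(thermostat(21.0))  # -> heating off
```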

Page 7: Do software agents know what they talk about?


Example: a control system

The X Windows program xbiff handles email notification.
Xbiff lives in a software environment.
It uses LINUX software functions to arrive at its information (ls to check the mailbox).
It uses LINUX software functions to change its environment (adapt the icon on the desktop).
As an agent it is no more complicated than the thermostat.
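A sketch of the same sense-act loop for an xbiff-style agent, assuming a hypothetical mailbox path and using a file-system check to stand in for the ls call mentioned above:

```python
# Sketch of an xbiff-style agent: sense the software environment
# (mailbox contents) and act on it (adapt the icon). Path and icon
# strings are hypothetical.
import os

MAILBOX = "/var/mail/user"  # hypothetical path

def sense() -> bool:
    """Observe the environment: is there mail waiting?"""
    return os.path.exists(MAILBOX) and os.path.getsize(MAILBOX) > 0

def act(has_mail: bool) -> str:
    """Change the environment: adapt the desktop icon."""
    return "icon: full mailbox" if has_mail else "icon: empty mailbox"

print(act(sense()))
```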

Page 8: Do software agents know what they talk about?


Environments

Access
Deterministic or not
Static or dynamic
Discrete or continuous

Page 9: Do software agents know what they talk about?


Access

The temperature at the north pole of Mars?
Uncertainty, incompleteness of information
But the agent must decide
Better access makes for simpler agents

Page 10: Do software agents know what they talk about?


Deterministic or not

Sometimes the result of an action is not deterministic. This is caused by
Limited impact of the agent
Limited capabilities of the agent
The complexity of the environment

The agent must check the consequences of its actions.

Page 11: Do software agents know what they talk about?


Static/Dynamic

Is the agent the only actor? E.g. software systems, large civil constructions, visitors in an exhibition.
Most systems are dynamic.
The agent must keep collecting data; the state may change during the action or the decision process.
Synchronisation and co-ordination between processes and agents are necessary.

Page 12: Do software agents know what they talk about?


Discrete or continuous

Classify: chess, taxi driving, navigating, word processing, understanding natural language.

Which is more difficult?

Page 13: Do software agents know what they talk about?


Interaction with environment

Originally: functional systems
Compilers
Given a precondition, they realise a postcondition
Top-down design is possible

f : I -> O

Page 14: Do software agents know what they talk about?


Interaction: reactivity

Most programs are reactive
They maintain a relationship with modules and environment, respond to signals
Can react quickly: react and think afterwards (or not)

Reactive agents take local decisions with a global impact.

Page 15: Do software agents know what they talk about?


Intelligent agents

Intelligence is
Responsivity
Proactivity
Social ability

E.g. proactivity: a C program assumes a constant environment
E.g. responsivity: the agent is in the middle, and this is complicated

Page 16: Do software agents know what they talk about?


Agents and Objects

“Objects are actors. They respond in a human-like way to messages…”

Agents are AUTONOMOUS
Objects implement methods that can be CALLED by other objects
Agents DECIDE what to do in response to messages
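The contrast can be made concrete in code; a sketch with illustrative class and method names:

```python
# An object executes whatever method another object calls on it;
# an agent receives a request and DECIDES whether to honour it.
class Door:  # object: does it "for free"
    def open(self) -> str:
        return "door opened"

class DoorAgent:  # agent: autonomous, guards its own objectives
    def __init__(self, authorised: set):
        self.authorised = authorised

    def request_open(self, requester: str) -> str:
        # The agent decides what to do in response to the message.
        if requester in self.authorised:
            return "door opened"
        return "request refused"

print(Door().open())                             # always succeeds
print(DoorAgent({"alice"}).request_open("bob"))  # may be refused
```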

Page 17: Do software agents know what they talk about?


Objects do it for free
Agents do it because they want to

Page 18: Do software agents know what they talk about?


Agents and expert systems

E.g. MYCIN, …
Expert systems are consultants, they do not act
They are in general not proactive
They have no social abilities

Page 19: Do software agents know what they talk about?


Agents as intentional systems

Belief, Desire, Intention
First order: beliefs, … about objects, NOT about beliefs…
Higher order: may model its own beliefs, … or those of other agents

BDI

Page 20: Do software agents know what they talk about?


A simple example

A light switch is an agent that can allow current to pass or not. It will do so if it believes that we want the current to pass, and not if it believes that we do not. We convey our intentions by switching.

There are simpler models of a switch…

Page 21: Do software agents know what they talk about?


Abstract architecture

Environment is a set of states:
E = {e, e', …}
An agent has a set of actions:
Ac = {α, α', …}
A run is a sequence state-action-state-…:
r : e0 --α0--> e1 --α1--> e2 --α2--> … --αu-1--> eu

Page 22: Do software agents know what they talk about?


Abstract architecture

Symbols:
R is the set of runs
R^Ac is the set of runs ending in an action
R^E is the set of runs ending in a state
r, r' are in R

Page 23: Do software agents know what they talk about?


Abstract architecture

The state transformer function:
τ : R^Ac -> P(E)
An action may lead to a set of states
The result depends on the run
τ(r) may be empty

Page 24: Do software agents know what they talk about?


Abstract architecture

Environment:
Env = <E, e0, τ>
E a set of states, e0 an initial state, τ a state transformer function
An agent is a function
Ag : R^E -> Ac
which is deterministic!

R(Ag, Env) is the set of all terminated runs
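A sketch of this abstract architecture in Python, with an illustrative two-state heating environment (the concrete states, actions, and transition choices are assumptions, not from the slides):

```python
# Env = <E, e0, tau>: tau maps a run ending in an action to the set
# of possible next states; the agent Ag deterministically maps a run
# ending in a state to the next action.
import random

E = {"cold", "ok"}              # environment states
Ac = {"heat_on", "heat_off"}    # actions
e0 = "cold"                     # initial state

def tau(run: tuple) -> set:
    """State transformer: possible states after the run's last action."""
    return {"ok"} if run[-1] == "heat_on" else {"cold", "ok"}

def Ag(run: tuple) -> str:
    """Deterministic agent: choose an action from the run so far."""
    return "heat_on" if run[-1] == "cold" else "heat_off"

# Build one run: e0 --a0--> e1 --a1--> e2 ...
run = (e0,)
for _ in range(3):
    run = run + (Ag(run),)          # agent picks an action
    successors = tau(run)
    if not successors:              # tau(r) may be empty: the run ends
        break
    run = run + (random.choice(sorted(successors)),)

print(run)
```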

Page 25: Do software agents know what they talk about?


Abstract architecture

A sequence
(e0, α0, e1, α1, e2, α2, …)
is a run of agent Ag in Env = <E, e0, τ> iff
e0 is the initial state of Env
for u > 0:
eu ∈ τ((e0, α0, …, αu-1))
αu = Ag((e0, α0, …, αu-1, eu))
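The two conditions translate directly into a checking procedure; a sketch (it takes Ag, tau, and e0 as parameters, so it can be used with the illustrative definitions from the previous sketch):

```python
# Check whether seq = (e0, a0, e1, a1, ..., eu) is a run of agent Ag
# in Env = <E, e0, tau>: actions sit at odd indices and must equal
# Ag(prefix); states sit at even indices and must lie in tau(prefix).
def is_run(seq: tuple, Ag, tau, e0) -> bool:
    if not seq or seq[0] != e0:      # must start in the initial state
        return False
    for u in range(1, len(seq)):
        prefix = seq[:u]
        if u % 2 == 1:               # an action alpha_u
            if seq[u] != Ag(prefix):
                return False
        else:                        # a state e_u
            if seq[u] not in tau(prefix):
                return False
    return True
```

With the Ag and tau of the previous sketch, is_run(("cold", "heat_on", "ok"), Ag, tau, "cold") returns True.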

Page 26: Do software agents know what they talk about?


Perception

The action function can be split:
Perception
Action selection

We now call
see the function that allows the agent to observe
action the function modelling the decision process

Page 27: Do software agents know what they talk about?


[Diagram: the Agent now contains see and action subsystems, coupled to the Environment through sensor input and action output]

Page 28: Do software agents know what they talk about?


Perception

We have
see : E -> Per
action : Per* -> Ac
action works on sequences of perceptions.

An agent is a pair: Ag = <see, action>
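A sketch of the split, with an illustrative temperature environment; note that action receives the whole percept sequence, not just the latest percept:

```python
# Ag = <see, action>: see : E -> Per maps states to percepts,
# action : Per* -> Ac maps percept sequences to actions.
def see(state: dict) -> str:
    """Observe only the temperature component of the state."""
    return "too_cold" if state["temp"] < 20.0 else "temp_ok"

def action(percepts: tuple) -> str:
    """Decide from the sequence of percepts observed so far."""
    return "heat_on" if percepts[-1] == "too_cold" else "heat_off"

percepts = ()
for state in ({"temp": 18.0}, {"temp": 21.0}):
    percepts = percepts + (see(state),)
    print(action(percepts))
```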

Page 29: Do software agents know what they talk about?


Perception: an example

Beliefs:
x = ‘The temperature is OK’
y = ‘Gerhard Schröder is chancellor’

Environment:
E = {e1 = {¬x, ¬y}, e2 = {¬x, y}, e3 = {x, ¬y}, e4 = {x, y}}

Thermostat?
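A sketch of this example, assuming the thermostat's see function only distinguishes the temperature belief x, so that states differing only in y are indistinguishable:

```python
# Four states built from the two beliefs; the thermostat's percept
# depends on x alone, so e1 ~ e2 and e3 ~ e4.
e1 = frozenset({"not x", "not y"})
e2 = frozenset({"not x", "y"})
e3 = frozenset({"x", "not y"})
e4 = frozenset({"x", "y"})
E = {e1, e2, e3, e4}

def see(state: frozenset) -> str:
    return "temp_ok" if "x" in state else "temp_not_ok"

# Equivalence classes under e ~ e' iff see(e) = see(e'):
classes = {}
for e in E:
    classes.setdefault(see(e), set()).add(e)
print(len(classes))  # 2: between the extremes |~| = |E| and |~| = 1
```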

Page 30: Do software agents know what they talk about?


Perception

Equivalence of states:
e1 ~ e2 iff see(e1) = see(e2)
|~| = |E| for an agent with strong perception
|~| = 1 for an agent with weak perception

Page 31: Do software agents know what they talk about?


Agents with a state

The past is taken into account through an internal state of the agent:
see : E -> Per
action : I -> Ac
next : I x Per -> I

Action selection is action(next(i, see(e)))
The new state is i' = next(i, see(e))
Environmental impact: e' ∈ τ(r)
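A sketch of a state-based agent; the internal state chosen here (a counter of cold observations) is purely illustrative:

```python
# see : E -> Per, next : I x Per -> I, action : I -> Ac.
def see(e: float) -> str:
    return "too_cold" if e < 20.0 else "temp_ok"

def next_state(i: int, per: str) -> int:
    """Update the internal state from the new percept."""
    return i + 1 if per == "too_cold" else i

def action(i: int) -> str:
    """Select an action from the internal state alone."""
    return "heat_on" if i > 0 else "heat_off"

i = 0                                # initial internal state i0
for e in (18.0, 19.0, 21.0):
    i = next_state(i, see(e))        # i' = next(i, see(e))
    print(action(i))                 # action(next(i, see(e)))
```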

Page 32: Do software agents know what they talk about?


How to tell the agent what to do

Two approaches:
Utility
Predicates

Utility is a performance measure for states.
Predicates contain a specification of the states.

Page 33: Do software agents know what they talk about?


Utility

Let it work purely on states:
u : E -> R
The fitness of an action is judged on
the minimum of available u-values
the average of available u-values
…

The approach is local; agents become myopic.
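A sketch of such a local judgement, with illustrative u-values and action outcomes; the one-step horizon is exactly what makes the agent myopic:

```python
# u : E -> R; an action's fitness is the minimum (pessimistic) or
# the average of the u-values of the states it may lead to.
u = {"cold": 0.0, "ok": 1.0, "hot": 0.2}

outcomes = {                      # possible next states per action
    "heat_on": {"ok", "hot"},
    "heat_off": {"cold", "ok"},
}

def fitness_min(states: set) -> float:
    return min(u[s] for s in states)

def fitness_avg(states: set) -> float:
    return sum(u[s] for s in states) / len(states)

best = max(outcomes, key=lambda a: fitness_min(outcomes[a]))
print(best)  # the agent looks only one step ahead
```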

Page 34: Do software agents know what they talk about?


Utility

Let it work on runs:
u : R -> R
Agents can look forward
E.g. Tileworld (Pollack 1990)

Page 35: Do software agents know what they talk about?


Utilities

May be defined probabilistically, by adding a probability to the state transformation.

A problem is computability within specific time limits. In most cases the optimum cannot be found; one can use heuristics here.
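A sketch of the probabilistic reading: attach a probability to each run an agent can generate and compare agents by expected utility (runs and numbers are illustrative):

```python
# EU(Ag, Env) = sum over runs r of u(r) * P(r | Ag, Env).
runs = [                                  # (run, probability)
    (("cold", "heat_on", "ok"), 0.8),
    (("cold", "heat_on", "cold"), 0.2),
]

def u(run: tuple) -> float:
    """Utility of a run: fraction of its states that are 'ok'."""
    states = run[0::2]                    # states sit at even positions
    return sum(s == "ok" for s in states) / len(states)

expected_utility = sum(u(r) * p for r, p in runs)
print(expected_utility)  # 0.4
```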

Page 36: Do software agents know what they talk about?


Predicates

Utilities are not the most natural way to define a state. What does it mean that the temperature is OK?

Humans think in objectives. Those are statements, or predicates.

Page 37: Do software agents know what they talk about?


Task environments

A pair <Env, Ψ> is called a task environment iff Env is an environment and
Ψ : R -> {0,1}
is a predicate over the runs R
The set of runs satisfying the predicate is R_Ψ
An agent Ag is successful iff
R_Ψ(Ag, Env) = R(Ag, Env), or ∀r ∈ R(Ag, Env) : Ψ(r)
Alternatively: ∃r ∈ R(Ag, Env) : Ψ(r)
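Both notions of success are one line each, given a predicate over runs; a sketch with an illustrative predicate:

```python
# Psi : R -> {0, 1} as a boolean predicate over runs. Pessimistic
# success: ALL runs of the agent satisfy Psi; optimistic: SOME run does.
def psi(run: tuple) -> bool:
    """Illustrative predicate: the run ends in the state 'ok'."""
    return run[-1] == "ok"

runs_of_agent = [
    ("cold", "heat_on", "ok"),
    ("cold", "heat_on", "cold"),
]

print(all(psi(r) for r in runs_of_agent))  # pessimistic: False
print(any(psi(r) for r in runs_of_agent))  # optimistic: True
```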

Page 38: Do software agents know what they talk about?


Task environments

One distinguishes
Achievement tasks
Aim at a certain condition on the environment
Maintenance tasks
Try to avoid a certain condition on the environment
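The two task types correspond to two shapes of run predicate; a sketch with illustrative goal and bad-state sets:

```python
# Achievement task: some state in a goal set G is reached.
# Maintenance task: no state in a bad set B is ever entered.
G = {"ok"}         # condition to achieve
B = {"overheat"}   # condition to avoid

def achieved(run: tuple) -> bool:
    return any(s in G for s in run[0::2])     # states at even positions

def maintained(run: tuple) -> bool:
    return all(s not in B for s in run[0::2])

run = ("cold", "heat_on", "ok")
print(achieved(run), maintained(run))  # True True
```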