Do software agents know what they talk about?
Agents and Ontology
dr. Patrick De Causmaecker, Nottingham, March 7-11, 2005
Nottingham, March 2005
Agents and Ontology [email protected] 2
Definition revisited
- Autonomy (generally accepted)
- Learning (not necessarily, maybe undesirable …)
An agent is a computer system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its design objectives.
[Diagram: the agent-environment loop — sensor input from the environment to the agent, action output from the agent to the environment]
Definition
An agent
- has an impact on its environment
- has partial control
- its actions may have nondeterministic effects
The agent has a set of possible actions, which may make sense depending on environment parameters.
The fundamental problem
The agent must decide which of its actions are best fitted to meet its objectives.
An agent architecture is a software structure for a decision system that functions in an environment.
Example: a control system
A thermostat works according to the rules:
- too cold => heating on
- temperature OK => heating off
Distinguish environment, action, impact.
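The two thermostat rules above can be sketched directly as code. This is a minimal illustration, not part of the original slides; the setpoint value and function names are assumptions.

```python
SETPOINT = 20.0  # assumed target temperature in degrees Celsius

def thermostat(temperature: float) -> str:
    """Map a sensed temperature to an action: the two rules on the slide."""
    if temperature < SETPOINT:   # too cold => heating on
        return "heating on"
    return "heating off"         # temperature OK => heating off

print(thermostat(15.0))  # heating on
print(thermostat(22.0))  # heating off
```

The environment is the room temperature, the action is switching the heating, and the impact is that the temperature eventually changes.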
Example: control system
The X Windows tool xbiff handles email notification:
- xbiff lives in a software environment
- it executes LINUX software functions to obtain its information (ls to check the mailbox)
- it uses LINUX software functions to change its environment (adapting the icon on the desktop)
As an agent it is no more complicated than the thermostat.
Environments
- Access
- Deterministic or not
- Static or dynamic
- Discrete or continuous
Access
- The temperature at the north pole of Mars?
- Uncertainty, incompleteness of information
- But the agent must decide
- Better access makes simpler agents
Deterministic or not
Sometimes the result of an action is not deterministic. This is caused by:
- the limited impact of the agent
- the limited capabilities of the agent
- the complexity of the environment
The agent must check the consequences of its actions.
Static/Dynamic
Is the agent the only actor? E.g. software systems, large civil constructions, visitors in an exhibition.
Most systems are dynamic:
- the agent must keep collecting data; the state may change during the action or the decision process
- synchronisation and co-ordination between processes and agents is necessary
Discrete or continuous
Classify: chess, taxi driving, navigating, word processing, understanding natural language.
Which is more difficult?
Interaction with environment
Originally: functional systems, e.g. compilers.
- Given a precondition, they realise a postcondition
- Top-down design is possible
- f: I -> O
Interaction: reactivity
Most programs are reactive:
- they maintain a relationship with modules and the environment, and respond to signals
- they can react fast: react and think afterwards (or not)
Reactive agents take local decisions with a global impact.
Intelligent agents
Intelligence is:
- responsivity
- proactivity
- social ability
E.g. proactivity: a C program (assumes a constant environment).
E.g. responsivity.
The agent is in the middle; this is complicated.
Agents and Objects
"Objects are actors. They respond in a human-like way to messages…"
- Agents are AUTONOMOUS
- Objects implement methods that can be CALLED by other objects
- Agents DECIDE what to do in response to messages
Objects do it for free; agents do it because they want to.
Agents and expert systems
E.g. Mycin, …
- Expert systems are consultants; they do not act
- They are in general not proactive
- They have no social abilities
Agents as intentional systems
Belief, Desire, Intention (BDI)
- First order: beliefs, … about objects, NOT about beliefs …
- Higher order: may model its own beliefs, … or those of other agents
A simple example
A light switch is an agent that can allow current to pass or not. It will do so if it believes that we want the current to pass, and not if it believes that we do not. We pass on our intentions by switching.
There are simpler models of a switch…
Abstract architecture
The environment is a set of states:
E = {e, e', …}
An agent has a set of actions:
Ac = {α, α', …}
A run is a sequence state-action-state-…:
r: e0 --α0--> e1 --α1--> e2 --α2--> … --α(u-1)--> eu
Abstract architecture
Symbols:
- R is the set of runs
- R^Ac is the set of runs ending in an action
- R^E is the set of runs ending in a state
- r, r' are in R
Abstract architecture
The state transformer function:
τ: R^Ac -> P(E)
- An action may lead to a set of states
- The result depends on the run r
- τ(r) may be empty
Abstract architecture
Environment:
Env = <E, e0, τ>
with E a set of states, e0 an initial state, and τ the state transformer function.
An agent is a function
Ag: R^E -> Ac
which is deterministic!
R(Ag, Env) is the set of all terminated runs.
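The abstract architecture above can be made concrete with a toy environment. This is a sketch under stated assumptions: the two-state heating environment, the action names, and the particular τ and Ag are all illustrative, not part of the slides.

```python
import random

e0 = "cold"  # initial state of the assumed toy environment

def tau(run):
    """State transformer tau: maps a run ending in an action to the set of
    possible next states (the set may be empty, which ends the run)."""
    action = run[-1]
    if action == "heat":
        return {"ok"}              # heating reliably warms the room
    return {"cold", "ok"}          # waiting has a nondeterministic outcome

def ag(run):
    """A deterministic agent Ag: decides from a run ending in a state."""
    state = run[-1]
    return "heat" if state == "cold" else "wait"

def generate_run(steps=3):
    """Interleave agent choices and environment responses into one run."""
    run = [e0]
    for _ in range(steps):
        run.append(ag(tuple(run)))        # agent chooses an action
        successors = tau(tuple(run))
        if not successors:                # tau(r) may be empty
            break
        run.append(random.choice(sorted(successors)))
    return run

print(generate_run())  # e.g. ['cold', 'heat', 'ok', 'wait', 'ok', ...]
```

Note that the environment is nondeterministic (τ returns a set), while the agent itself is a deterministic function of the run so far.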
Abstract architecture
A sequence
(e0, α0, e1, α1, e2, α2, …)
is a run of agent Ag in Env = <E, e0, τ> iff:
- e0 is the initial state of Env and α0 = Ag(e0)
- for u > 0:
  eu ∈ τ((e0, α0, …, αu-1))
  αu = Ag((e0, α0, …, αu-1, eu))
Perception
The agent function can be split into:
- perception
- action selection
We now call see the function that allows the agent to observe, and action the function modelling the decision process.
[Diagram: the agent-environment loop, with the agent split into a see component (sensor input) and an action component (action output)]
Perception
We have:
see: E -> Per
action: Per* -> Ac
action works on sequences of percepts.
An agent is a pair: Ag = <see, action>
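The pair Ag = <see, action> can be sketched as two small functions. The environment states, percept names, and decision rule below are assumptions for illustration only; the point is the signatures see: E -> Per and action: Per* -> Ac.

```python
def see(state):
    """Perception: the agent only observes whether the temperature is OK,
    not the full environment state."""
    return "temp_ok" if state["temp_ok"] else "temp_low"

def action(percepts):
    """Decision: acts on the whole sequence of percepts seen so far.
    Here only the most recent percept happens to matter."""
    return "heat" if percepts[-1] == "temp_low" else "wait"

percepts = []
for env_state in [{"temp_ok": False}, {"temp_ok": True}]:
    percepts.append(see(env_state))
    print(action(tuple(percepts)))  # heat, then wait
```

Because action receives the full percept sequence, a more sophisticated agent could base its decision on history, not just the latest observation.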
Perception: an example
Beliefs:
x = 'The temperature is OK'
y = 'Gerhard Schröder is Chancellor'
Environment:
E = {e1 = {¬x, ¬y}, e2 = {¬x, y}, e3 = {x, ¬y}, e4 = {x, y}}
Thermostat?
Perception
Equivalence of states:
e1 ~ e2 iff see(e1) = see(e2)
- |~| = |E| for an agent with strong perception
- |~| = 1 for an agent with weak perception
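The equivalence classes of ~ can be computed for the four-state example: assuming the thermostat's see observes only the temperature belief x (and is blind to y), states differing only in y collapse into one class. The state encoding below is an illustrative assumption.

```python
# Four states over two propositions x, y; a state is the set of
# propositions that hold in it.
E = {
    "e1": frozenset(),            # neither x nor y
    "e2": frozenset({"y"}),       # y only
    "e3": frozenset({"x"}),       # x only
    "e4": frozenset({"x", "y"}),  # both x and y
}

def see(state):
    """The thermostat perceives only the temperature proposition x."""
    return "x" in state

# Group states by their percept: each group is one equivalence class of ~.
classes = {}
for name, state in E.items():
    classes.setdefault(see(state), []).append(name)

print(sorted(classes.values()))  # [['e1', 'e2'], ['e3', 'e4']]
```

Here |~| = 2: more than the single class of a perceptionless agent, fewer than the |E| = 4 classes of an agent with perfect perception.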
Agents with a state
The past is taken into account through an internal state of the agent:
see: E -> Per
action: I -> Ac
next: I × Per -> I
- Action selection is α = action(next(i, see(e)))
- The new state is i' = next(i, see(e))
- Environmental impact: e' ∈ τ(r)
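The state-based decomposition see/next/action can be sketched as follows. The internal state I is here simply a count of consecutive "too cold" percepts; that choice, like all the names, is an illustrative assumption.

```python
def see(temp):
    """see: E -> Per, observing only whether the temperature is low."""
    return "low" if temp < 20.0 else "ok"

def next_state(i, percept):
    """next: I x Per -> I, folding the new percept into the internal state."""
    return i + 1 if percept == "low" else 0

def action(i):
    """action: I -> Ac, deciding from the internal state alone."""
    return "heat" if i > 0 else "wait"

i = 0
for temp in [15.0, 16.0, 22.0]:
    i = next_state(i, see(temp))  # i' = next(i, see(e))
    print(action(i))              # alpha = action(next(i, see(e)))
```

Unlike the pure see/action agent, this one never re-reads its percept history: everything the past has to say is compressed into i.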
How to tell the agent what to do
Two approaches:
- utility
- predicates
Utility is a performance measure for states.
Predicates contain a specification of the states.
Utility
Let it work purely on states:
u: E -> R
The fitness of an action is judged on:
- the minimum of the available u-values
- the average of the available u-values
- …
The approach is local; agents become myopic.
Utility
Let it work on runs:
u: R -> R
Agents can look forward.
E.g.: Tileworld (Pollack 1990)
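The difference between a state utility and a run utility can be shown on two hand-made runs. The scoring rule (one point per visit to the 'ok' state) and the example runs are assumptions for illustration.

```python
def u(run):
    """Utility over a whole run, u: R -> R: reward every step the run
    spends in the 'ok' state. Runs alternate state, action, state, ..."""
    states = run[0::2]  # the even positions of a run are its states
    return sum(1.0 for s in states if s == "ok")

r1 = ["cold", "heat", "ok", "wait", "ok"]    # agent heated early
r2 = ["cold", "wait", "cold", "wait", "cold"]  # agent never heated

print(u(r1), u(r2))  # 2.0 0.0
```

Because u scores the entire run, an agent maximising it is rewarded for foresight (heating now to be warm later), which a purely state-local utility cannot express.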
Utilities
- May be defined probabilistically, by adding a probability to the state transformation.
- A problem is computability within specific time limits. In most cases the optimum cannot be found; one can use heuristics here.
Predicates
Utilities are not the most natural way to define a state. What does it mean that the temperature is OK?
Humans think in objectives. Those are statements, or predicates.
Task environments
A pair <Env, Ψ> is called a task environment iff Env is an environment and
Ψ: R -> {0, 1}
is a predicate over the runs R.
The set of runs satisfying the predicate is R_Ψ.
An agent Ag is successful iff
R_Ψ(Ag, Env) = R(Ag, Env), or ∀r ∈ R(Ag, Env): Ψ(r)
Alternatively: ∃r ∈ R(Ag, Env): Ψ(r)
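Both notions of success can be checked mechanically once Ψ and the agent's runs are given. The predicate (an achievement-style "reach the 'ok' state") and the listed runs are illustrative assumptions standing in for R(Ag, Env).

```python
def psi(run):
    """Predicate Psi: R -> {0, 1}: does the run ever reach 'ok'?"""
    return 1 if "ok" in run else 0

# Assumed stand-in for R(Ag, Env), the runs the agent can generate.
runs_of_agent = [
    ["cold", "heat", "ok"],
    ["cold", "heat", "ok", "wait", "ok"],
]

# Strong success: every run satisfies Psi (forall r: Psi(r)).
strong_success = all(psi(r) == 1 for r in runs_of_agent)
# Weak success: at least one run satisfies Psi (exists r: Psi(r)).
weak_success = any(psi(r) == 1 for r in runs_of_agent)

print(strong_success, weak_success)  # True True
```

Adding a run that never reaches 'ok' would break strong success while leaving weak success intact, which is exactly the gap between the ∀ and ∃ formulations.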
Task environments
One distinguishes:
- achievement tasks: aim at a certain condition on the environment
- maintenance tasks: try to avoid a certain condition on the environment