
Prospective Logic Agents

Luís Moniz Pereira, Gonçalo Lopes

Broad Outlook

Logic programming (LP) languages and semantics enabling a program to talk about its own evolution through time are already well established.

Broad Outlook

Deliberative agents already have powerful means for extracting inferences, given their knowledge at time t.

How can they take advantage of an evolving logic program’s capabilities?

The idea: to logically model agents capable of making inferences based not only on their current knowledge, but on knowledge that they might expect to have at time t+n.

Broad Outlook

Agents can use abductive reasoning to produce hypothetical scenarios given partial observations from the environment and current knowledge, possibly subject to a set of integrity constraints.

Broad Outlook

The set of possible future scenaria can be exponentially large due to combinatorial explosion, so a means to efficiently prune the search space is required, a priori preferring some predictions over others.

Broad Outlook

Once possible scenaria are generated, a means to prefer among them a posteriori is also required, so that the agent can commit to a final choice of action.

The choices of the agent should of course be dynamically subject to revision, so that it may backtrack on previous assumptions and even change its preferences based on past experience.

Agent Cycle

Language

Let L be a first order language. A domain rule in L is of the form:

A ← L1,...,Lt (t ≥ 0)

An integrity constraint is a rule of the form:

⊥ ← L1,...,Lt (t > 0)

A is a domain atom in L, and L1,...,Lt are domain literals. ⊥ is a domain atom denoting falsity.

Language

A (logic) program P over L is a set of domain rules and integrity constraints, standing for all their ground instances.

Every program P has an associated set of abducibles A ⊆ L.

Abducibles are hypotheses which can be assumed to extend the theory, and therefore they do not appear in any rule heads of P.
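As a minimal illustration (the atoms here are hypothetical, not taken from the slides), consider a program in which rain and sprinkler are the abducibles available to explain a wet lawn:

wet_lawn ← rain
wet_lawn ← sprinkler
observed(wet_lawn)
⊥ ← observed(wet_lawn), not wet_lawn

Since rain and sprinkler appear in no rule head, the constraint can only be satisfied by assuming at least one of them, giving the abductive extensions {rain}, {sprinkler} and {rain, sprinkler}.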

Preferring Abducibles

Abducibles can have preconditions for being considered, represented by the following rule:

consider(A) ← expect(A), not expect_not(A)

These preconditions represent domain-specific knowledge and are used a priori to constrain the generation of abducibles.
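For instance (hypothetical atoms, not from the slides), an agent might expect the hypothesis of drinking from a fountain only when thirsty, and rule it out whenever the fountain is polluted:

expect(drink(fountain)) ← thirsty
expect_not(drink(fountain)) ← polluted(fountain)

Under the consider/1 rule above, drink(fountain) can only be generated as a hypothesis when thirsty holds and polluted(fountain) does not.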

Preferring Abducibles

Preferences between abducible literals can also be expressed via the binary relation 'is more relevant than', written with the operator ◁:

A ◁ B ← L1,...,Lt (t ≥ 0)

This relation has been extended to sets of abducibles in [pla07].
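A small illustration (hypothetical abducibles, not from the slides): when a storm warning is active, staying home can be declared more relevant than going shopping, so that scenaria built on the less relevant hypothesis can be pruned:

stay_home ◁ go_shopping ← storm_warning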

Prospective Logic Agents

The problem of prospection can be stated in this framework as one of finding abductive extensions to the current knowledge theory of the agents which are both:
- relevant under the agent's current goals, and
- preferred extensions w.r.t. the preference rules.

The basic problem can be arbitrarily complicated by introducing belief revision requirements, utility theory, etc.

Goals and Observations

Observations

An observation is expressed as the quaternary relation:

observe(Observer, Reporter, Observation, Value)

Observations can stand for actions, goals or perceptions. A distinction should be made between goals (intentions) and desires.

Goals and Desires

A goal can be represented as an observation from the program to the program itself, which must be proven true.

A desire can be represented as a possibility to fulfill a goal. We represent these by on_observe/4 literals, with a structure analogous to the observe/4 literals.
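For instance, the tornado goal used later in these slides might be posed by the program to itself as (one plausible rendering, not verbatim from the slides):

on_observe(program, program, deal_with_emergency(tornado), true)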

Goals and Desires

At any single moment an agent can have a multitude of desires, but only some of them will actually become intentions.

We represent the evaluation of an intention by the following rules:

G ← try(G)
try(G) ← not try_not(G)
try_not(G) ← not try(G)
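As a minimal worked case (with a hypothetical desire g), these rules yield two stable models: {try(g), g}, where the desire is adopted and pursued as an intention, and {try_not(g)}, where it is not. Each desire thus branches the candidate scenaria, to be filtered by the preference machinery.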

Example: Tornado

Consider a scenario where weather forecasts have been transmitted foretelling the possibility of a tornado.

For emergency prevention, it is necessary to take action beforehand, proactively, so as to increase the chances of success.

Example: Tornado

The following prospective logic program deals with this scenario:

⊥ ← consider(tornado), not deal_with_emergency(tornado)
expect(tornado) ← weather_forecast(tornado)
deal_with_emergency(tornado) ← consider(decide_board_house)
expect(decide_board_house) ← consider(tornado)
⊥ ← decide_board_house, not boards_at_home, not go_buy_boards

Example: Tornado

The weather forecast implies that a tornado is expected, and so the above program actually encodes two possible predictions about the future.

In one of the scenaria the tornado is absent; in the scenario where it is actually confirmed, the decision to board up the house follows as a necessity.

Scenaria generation can trigger goals, which in turn can trigger further scenaria generation.
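Spelling the two predictions out (assuming weather_forecast(tornado) holds): in the scenario where tornado is not abduced, nothing further follows; in the scenario assuming tornado, the first constraint forces deal_with_emergency(tornado), which is only derivable by also abducing decide_board_house, and the final constraint then demands boards_at_home or go_buy_boards.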

Generating Scenaria

Generating Scenaria

Once the set of the agent's active goals is known, possible scenaria can be found by reasoning backwards from the goals into the abducibles under consider/1 literals.

Each abducible represents a choice: it can be assumed either true or false, meaning a combinatorial explosion of possible abducible values in a program.

Generating Scenaria

In practice, the combinations are contained and made tractable by a number of factors. First, we consider only the part of the program relevant for collecting considered abducibles.

A priori preference rules and preconditions also rule out a majority of latent hypotheses, thus pruning the search space efficiently using domain-specific knowledge.

Top-down Consideration, Bottom-up Generation

Considered abducibles are found by reasoning backwards from the goals. However, assuming an abducible as true or false may trigger unforeseen side-effects in the rest of the program.

For this reason, scenario generation is obtained by reasoning forwards from the selected abducibles to find their relevant consequences.

Example: Emergencies

Consider the emergency scenario in the London underground [kowalski06], where smoke is observed, and we want to be able to provide an explanation for this observation.

Smoke can be caused by fire, in which case the possibility of flames should be considered. But smoke could also be caused by tear gas, in case of police intervention.

Example: Emergencies

The tu literal stands for 'true or undefined'.

smoke ← consider(fire)
flames ← consider(fire)
smoke ← consider(tear_gas)
eyes_cringing ← consider(tear_gas)
expect(fire)
expect(tear_gas)

Example: Emergencies

⊥ ← observation(smoke), not smoke
observation(smoke)
⊥ ← fire, tear_gas
⊥ ← flames, not observe(program,user,flames,tu)
⊥ ← eyes_cringing, not observe(program,user,eyes_cringing,tu)
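On this reading, abducing fire forward-derives {smoke, flames}, while abducing tear_gas derives {smoke, eyes_cringing}; both satisfy the observation constraint, and the two final constraints accept a scenario only once the user confirms its side-effect (flames or eyes_cringing) as true or undefined.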

Preferring a posteriori

Quantitative Preferences

Once each scenario's model is known, there are a number of strategies that can be followed for choosing between them.

One possibility is to use a utility theory to assign, in a domain-specific way, a numerical value to each scenario; this value is computed during scenario generation and used as an element of choice a posteriori.
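A minimal sketch of such an assignment (the utility/2 facts and their values are hypothetical, not from the slides), in terms of the earlier tornado example:

utility(go_buy_boards, -10)
utility(decide_board_house, 100)

The value of a scenario is then the sum of the utilities of the literals true in its model, and the agent commits a posteriori to the scenario with the highest total.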

Qualitative Preferences

Numerical assessment of the value of each scenario can be effective in many situations, but there are occasions where a more qualitative expression of preference is desired.

This is the role of the moral theory developed in related work [pereira07], which explores this qualitative preference mechanism in more detail.

Exploiting Oracles

In both the quantitative and qualitative cases, the possibility of acquiring additional information to make a choice is highly advantageous.

Prospective logic agents use the concept of oracles to access additional information from external systems (e.g. sensors, the user, etc.).

Exploiting Oracles

Queries to oracles are represented using the syntax for observations presented previously, in the form:

observe(agent, oracle_name, query, Value) ← oracle, L1,...,Lt (t ≥ 0)

Since oracles can be expensive to query, a principle of parsimony is enforced via the oracle literal, which is used as a toggle to allow or disallow queries to oracles.
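The automated diagnosis example below uses exactly this pattern:

observe(system,gripper,Experiment,Result) ← oracle, test_sensor(Experiment,Result)

The sensor test can only run while the oracle toggle holds, so the expensive experiment is withheld until cheaper strategies fail to single out a scenario.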

Exploiting Oracles

Information obtained from the oracles can have side-effects on the rest of the program as well.

After the oracle step, it may be necessary to relaunch the procedure in order to re-evaluate the simulation conditions.

Consequences of Prospection

Even after all the strategies for choice have been used, more than a single desirable scenario may still remain.

In this case, we may have to iterate the procedure, incorporating additional information until a fixed point is reached.

Additionally, we may branch the simulation to consider a number of different possible scenaria in parallel.

Example: Automated Diagnosis

Consider a robotic gripper immersed in a collaborative assembly-line environment. Commands issued to the gripper by its controller are updated into its evolving knowledge base, as are regular readings from its sensor.

Diagnosis requests from the system are issued to the gripper's prospecting controller, in order to check for abnormal behaviour.

Example: Automated Diagnosis

When the system is confronted with multiple possible diagnoses, requests for experiments can be made to the controller. The gripper has three possible logical states: open, closed, or something intermediate. The available gripper commands are simply open and close.

Example: Automated Diagnosis

open ← request_open, not consider(abnormal(gripper))
open ← sensor(open), not consider(abnormal(sensor))
intermediate ← request_close, manipulating_part, not consider(abnormal(gripper)), not consider(lost_part)
intermediate ← sensor(intermediate), not consider(abnormal(sensor))
closed ← request_close, not manipulating_part, not consider(abnormal(gripper))
closed ← sensor(closed), not consider(abnormal(sensor))
⊥ ← open, intermediate
⊥ ← open, closed
⊥ ← closed, intermediate

Example: Automated Diagnosis

expect(abnormal(gripper))
expect(lost_part) ← manipulating_part
expect(abnormal(sensor))
expect_not(abnormal(sensor)) ← manipulating_part, observe(system,gripper,ok(sensor),true)
observe(system,gripper,Experiment,Result) ← oracle, test_sensor(Experiment,Result)
abnormal(gripper) ◁ abnormal(sensor) ← request_open, not sensor(open), not sensor(closed)
lost_part ◁ abnormal(gripper) ← observe(system,gripper,ok(sensor),true), sensor(closed)
abnormal(gripper) ◁ lost_part ← not (lost_part ◁ abnormal(gripper))

Example: Automated Diagnosis

In this case, there is an available experiment to test whether the sensor is malfunctioning, but resorting to it should be avoided as much as possible, as it implies occupying additional resources of the assembly-line coalition.

Implementation

The presented system has already been implemented using state-of-the-art logic programming frameworks.

XSB Prolog was used for the top-down abductive procedure under the Well-Founded Semantics. Smodels was used for the bottom-up generation of scenaria under the Stable Models semantics.

Prospecting the Future

Both [kowalski06] and [poole00] represent candidate actions by abducibles and use logic programs to derive their possible consequences, to help in deciding between them.

However, they do not derive consequences of abducibles that are not actions, such as observations. Nor do they consider how to determine the value of unknown conditions (e.g. by using an oracle).

Prospecting the Future

Compared with Poole and Kowalski, one of the most interesting features of our approach is the use of Smodels to perform a kind of forward reasoning, deriving the consequences of candidate hypotheses.

These consequences may then lead to a further cycle of abductive exploration, intertwined with preferences for pruning and for directing the search.

Prospecting the Future

A number of additional challenges still need to be addressed, however, for the system to scale up to scenaria of greater complexity.

Branching update sequences need to be extended to handle arbitrary lengths of future lookahead.

Preferences over observations are also desirable, so that agents can best select which oracles to query during prospection.

Prospecting the Future

Prospective agents could use abduction not only to find the means to further their own goals, but also to abduce the goals and intentions of other agents.

Prospection over the past is also of interest, so as to gain the ability to reason counterfactually and thereby improve performance in future tasks.

Bibliography (excerpt)

[pereira07] L. M. Pereira and A. Saptawijaya, Modelling Morality with Prospective Logic, 2007.
[kowalski06] R. Kowalski, The Logical Way to be Artificially Intelligent, 2006.
[poole00] D. Poole, Abducing Through Negation as Failure: Stable models within the independent choice logic, 2000.
[pla07] L. M. Pereira, G. Lopes and P. Dell'Acqua, Pre and Post Preferences over Abductive Models, 2007.