

Maintaining Consistency in a Robot’s Knowledge-Base via Diagnostic Reasoning

Stephan Gspandl, Ingo Pill, Michael Reip, and Gerald Steinbauer *

Institute for Software Technology, Graz University of Technology, Inffeldgasse 16b/II, 8010 Graz, Austria. E-mail: {sgspandl, ipill, mreip, steinbauer}@ist.tugraz.at

Non-deterministic reality is a severe challenge for autonomous robots. Error-prone action outcomes, inaccurate sensor perception and exogenous events easily lead to inconsistencies between the actual situation and the internal knowledge-base encoding a robot’s belief. For viable reasoning in dynamic environments, a robot is thus required to efficiently cope with such inconsistencies and maintain a consistent knowledge-base as the foundation for its decision-making. In this paper, we present a belief management system based on the well-known agent programming language IndiGolog and history-based diagnosis. Extending the language’s default mechanisms, we add a belief management system that is capable of handling several fault types that lead to belief inconsistencies. First experiments in the domain of service robots show the effectiveness of our approach.

1. Introduction

There is increasing interest in targeting autonomous mobile robots at complex tasks in dynamic (non-deterministic) environments. Related target applications range from simple transportation services, visitor guidance in a museum, and autonomous car driving to planetary exploration [31,16,32]. The complexity of such application domains raises the demands regarding the autonomous reasoning capabilities of dependable systems. Appropriate robots have to consider, for instance, a large and dynamic number of entities and objects, including their complex spatial relations. Tasks such as localization, navigation and object recognition, as well as the capabilities required to perceive the environment and interact with it, also gain in complexity in such settings.

* The authors are listed in alphabetical order; Gerald Steinbauer is the corresponding author. This work has been partly funded by the Austrian Science Fund (FWF) under grants P22690 and P22959.

Specifically in such demanding environments, it is of utmost importance to ensure that a robot’s belief about a situation is consistent with reality. Inconsistencies between the real situation and the robot’s belief encoded in the internal knowledge-base might significantly alter the robot’s line of reasoning, affecting quality-of-service. However, maintaining a consistent belief in highly dynamic and non-deterministic environments is a challenging task. Inaccurate sensor perception, exogenous events and error-prone action outcomes easily lead to the aforementioned inconsistencies with reality. Thus, in order to ensure quality-of-service, a robot’s control system requires the capability to effectively deal with such situations.

For a proper discussion of related intricacies, let us consider the example of a delivery robot whose task is to move objects between rooms. Assume as environment an office with four rooms A, B, C, D, a hallway E connecting all the rooms, and three movable objects Calculator, Letter and Folder (see the left of Fig. 1 for an illustration). The task of robot R is to move the Letter to room B and the Folder to room C. Considering the robot’s basic capabilities {goto(room), pickup(object), putdown(object)}, an obvious correct plan π for this task would be the following one: goto(A), pickup(Letter), goto(B), putdown(Letter), goto(D), pickup(Folder), goto(C), and finally putdown(Folder). Now assume that, due to some sensing and action issues, the robot moves to room A instead of room C, and subsequently drops the Folder at the wrong position (see the right of Fig. 1). Obviously, the inconsistency between reality (the robot is in room A) and the robot’s belief (to be in room C) has a serious impact: a robot that is unable to recognize and handle such inconsistencies neither finishes the given task successfully, nor does it continue operation from a sane situation.
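
To fix the notation used in the illustrative sketches throughout this paper, the example domain and the plan π can be written down as plain SWI-Prolog data. All identifiers below are our own illustrative choices, not code from the actual system.

% Example domain of Fig. 1 as Prolog facts (illustrative sketch).
room(a).  room(b).  room(c).  room(d).  room(e).
connects(e, R) :- room(R), R \= e.   % hallway E connects all rooms

object(calculator).  object(letter).  object(folder).

% The plan pi from the text, as a ground action sequence.
plan_pi([goto(a), pickup(letter), goto(b), putdown(letter),
         goto(d), pickup(folder), goto(c), putdown(folder)]).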


Fig. 1. Robot example. Left: the initial situation. Right: erroneous execution of an action (the robot went to the wrong room).

However, if a robot is capable of reasoning about the domain using some background model, it could detect and then address such problems via diagnostic reasoning. Such a background model could contain, for instance, a sentence encoding that if one perceives an object and assumes to be in a specific room, then one can conclude the object to be in that room as well: perceive(O) ∧ at(L) → isat(O, L). Considering our faulty example situation, robot R would conclude right after the first action goto(A) that the Calculator is in room A. When moving to room A instead of room C, it perceives the Calculator again and subsequently concludes that it is in room C (as the robot believes to be in room C). This newly derived knowledge establishes a recognizable conflict with previously collected data that assume the Calculator to be in room A. Recognizing the conflict, the robot can then trigger internal reasoning regarding hypotheses explaining the encountered issue.
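
This line of reasoning can be mirrored in a small, self-contained SWI-Prolog sketch; the predicate names and time points are invented for illustration and do not reflect our system’s internal representation.

% Believed robot positions at two points in time (hypothetical facts).
believed_at(t1, a).        % after the first action goto(A)
believed_at(t2, c).        % the robot believes goto(C) succeeded

% Perceptions made at those points in time.
perceived(t1, calculator).
perceived(t2, calculator).

% Background rule: perceive(O) and at(L) imply isat(O, L).
isat(Object, Room) :-
    perceived(T, Object),
    believed_at(T, Room).

% A conflict arises if the same object is derived to be in two rooms.
conflict(Object, R1, R2) :-
    isat(Object, R1),
    isat(Object, R2),
    R1 \= R2.

The query conflict(calculator, R1, R2) succeeds with R1 = a and R2 = c, which is exactly the kind of inconsistency that should trigger diagnostic reasoning.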

Obviously, there is more than one explanation, as illustrated in Figure 2: (1) the robot drove to the wrong room (our example scenario, as depicted at the left of Figure 2), (2) a sensing fault resulted in a ghost image of the object Calculator (the scenario in the middle), and (3) someone moved the object Calculator from room A to room C (the scenario at the right). A robot’s reasoning mechanism and belief management have to be able to deal with such ambiguity. This requires either an approach to choose the most likely hypothesis (e.g., via ranking or discrimination), or the ability to handle and consider multiple hypotheses simultaneously.

The belief management system proposed in this paper allows us to detect inconsistencies, describe belief ambiguities, and generate multiple hypotheses as well as rank them. While we adopt the most likely hypothesis for immediate operation, we keep track of the alternatives for flexibility in case future data proves the chosen favorite to be wrong. Our reasoning is based on the situation calculus [22,27], and we adapt history-based diagnosis [17] as developed by Iwan for our implementation based on the agent programming language IndiGolog. This paper extends our work presented in [15] and utilizes the formal framework presented in [7].

The remainder of this paper is organized as follows. Section 2 covers related research. In Section 3 we present our belief management system: Sections 3.1 and 3.2 introduce the situation calculus together with IndiGolog, and history-based diagnosis, respectively, while Section 3.3 covers the details of our proposed belief management system. Results from our first experiments can be found in Section 4, followed by our conclusions in Section 5.

2. Related Research

In the following we discuss previous work in three research areas that are relevant to our paper: (1) fault diagnosis, (2) hypothesis discrimination, and (3) dealing with uncertainties in acting and sensing.

Diagnosis, i.e. the detection and localization of faults, is a major research topic. There is a wide range of different systems dealing with various types of faults. In the field of robotics, Liu and Coghill [20] introduced a qualitative model of a planar robot’s kinematics which can be used for diagnosis purposes. Many approaches like [8,14,21,33] specifically address the diagnosis of sensing and/or actuator faults. In the context of autonomic computing (software engineering), some model-based approaches aim at the creation of self-healing and self-adaptive software systems. The authors of [12], for example, propose to maintain architecture models at run-time as a basis for diagnosis and repair. An important question is also how diagnoses can be calculated efficiently on embedded systems. Struss and colleagues proposed knowledge compilation as a solution to this problem in the automotive industry [30]. For complex and dynamic environments, classic diagnosis like consistency-based diagnosis [26] is unfortunately too static. While it focuses on faulty components, for dynamic environments it is more appropriate to focus on correct action (event) sequences rather than to blame a particular component [23]. In [17], Iwan describes a diagnosis approach for robots in such settings that is based on the situation calculus. As we adapt this approach for our belief management system, we will discuss it in more detail in Section 3.2.

Fig. 2. The possible worlds for the robot and their explanations for the given sensor value perceive(Calculator). Wrong navigation on the left. Ghost object in the center. Exogenous event on the right.

Once a set of diagnoses has been found, the correct one has to be isolated, or at least we have to identify the most probable one(s). In [4], de Kleer proposed to select as the next measurement in a diagnosis process the one that provides the most novel information. This way, he expects to iteratively improve the diagnosis quality by ruling out inappropriate diagnoses. Regarding our work, this approach is related to the generation or confirmation of facts in a knowledge base, which has to be an integral part of belief management. In [18] the authors extend this idea. Via planning they aim to derive an action sequence that provides additional knowledge in order to enhance diagnosis quality. Their approach (named pervasive diagnosis) is particularly interesting in the sense that it actively gathers additional knowledge. In [29], Struss likewise argued that in particular cases probe selection, i.e. using test inputs, is not sufficient, so that a more active approach is necessary. McIlraith and Reiter discussed in [24] how tests can be used in the context of consistency-based diagnosis to discriminate between hypotheses. In [1] Alur et al. discuss discrimination strategies in the context of non-deterministic and probabilistic state machines. As states can be interpreted as situations and transitions as actions, their work is also relevant to us.

Handling uncertainty in sensing and acting is another issue relevant to our work. There are several approaches using the situation calculus to deal with this problem. For example, by formalizing a planning problem [11,10], the relevance of unexpected changes for an actual plan can be decided. This gives valuable clues on which facts are relevant for a given task. The authors of [6] employ the situation calculus as well as decision-theoretic planning to control an autonomous robot. Whenever a mismatch between passive sensing and the expected outcome of an action (according to models called markers) is detected, re-planning is initiated. In the context of execution monitoring, the authors of [3] proposed to use the semantics of the environment. This approach is related to ours, but is based on static information. In [9], Fichtner et al. presented a formalization using the fluent calculus to derive explanations for unexpected situations. Their approach employs active sensing to minimize the impact of such events. In the context of planning, the authors of [5] take a somewhat different approach and interleave planning and plan execution in order to reduce the computational effort to find an optimal plan. Weld et al. [34] extended GraphPlan to plan with sensing actions and uncertainty. However, their algorithm does not maintain the consistency of a system’s belief.

3. History-Based Diagnosis and Belief Management for IndiGolog

In this section we present a belief management system that is able to maintain the consistency of a robot’s belief with reality (to the extent that is perceivable). As our system is designed around the situation calculus, IndiGolog and history-based diagnosis (HBD), we will discuss related essential features before explaining the details of our approach in Section 3.3.

3.1. Situation Calculus, Golog and IndiGolog

IndiGolog (Incremental Deterministic Golog) [13] is a logic-based programming and planning language for agents and robots (see http://indigolog.sourceforge.net on how to use IndiGolog). When using IndiGolog, the designer has the flexibility to choose whether computational resources should favor planning or imperative programs. IndiGolog (like its predecessors, such as Golog [19]) is based on Reiter’s variant of the situation calculus [22,27], a second-order language for reasoning about actions and their effects.

In the situation calculus, any progress in the world, for example an (endogenous) activity of the robot or some (exogenous) event triggered by the environment, is encoded by actions. Properties are described by fluents, which are situation-dependent predicates and functions. In contrast to a state, which is composed of all fluent values at a given point in time, a situation (also called history) is completely determined by some initial situation and an action sequence, and is therefore unique within some execution. The special function symbol do maps a current situation and an action to the successor situation, do : action × situation → situation. In order to define the relations between fluents and actions, the user has to specify successor state axioms (SSAs) of the form F(~x, do(α, s)) ≡ ϕ+(α, ~x, s) ∨ (F(~x, s) ∧ ¬ϕ−(α, ~x, s)), with formulas ϕ+/−(α, ~x, s) evaluating F to true or false respectively, e.g. at(Room, do(a, s)) ≡ a = enter(Room) ∨ (at(Room, s) ∧ a ≠ leave(Room)). Executable actions are denoted by the predicate Poss(α(~x), s) ≡ Πα(~x, s), e.g. Poss(pickup(Object), s) ≡ At(Room, s) ∧ IsAt(Object, Room, s) ∧ DoNotCarryAnything(s). The so-called basic action theory Σ [27] models the environment as well as the robot’s capabilities. This theory aggregates SSAs, action precondition axioms, a description of the initial situation, foundational axioms and unique names assumptions.
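
As a concrete illustration, the SSA and the precondition axiom above can be rendered in plain Prolog, encoding a situation as a list of actions (newest first). This is a sketch of the semantics under our own simplifications, not the axiom syntax used by the IndiGolog interpreter.

% Successor state axiom for at/2 over situations as action lists:
% at(Room, do(A, S)) iff A = enter(Room), or at(Room, S) and A \= leave(Room).
at(Room, [A|S]) :-
    (   A = enter(Room)
    ;   at(Room, S), A \= leave(Room)
    ).
at(Room, []) :-
    initially_at(Room).

initially_at(hallway).          % hypothetical initial situation

% Precondition axiom: pickup is possible if the robot and the object are
% in the same room and the gripper is empty.
:- dynamic carries/2.           % no carries/2 facts: the gripper is empty
is_at(letter, office_a, _).     % hypothetical static fact for this sketch

poss(pickup(Object), S) :-
    at(Room, S),
    is_at(Object, Room, S),
    \+ carries(_, S).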

The IndiGolog language is very expressive and provides common imperative control constructs such as loops, conditionals and recursive procedures, but also less standard constructs like non-deterministic choice of actions or parameters. Moreover, it has built-in support for exogenous actions, i.e. changes not triggered by the agent, as well as sensing actions that gather actual information about the world.

Programs in IndiGolog are interpreted in a step-by-step fashion; the interpreter uses transition semantics for this purpose. Non-deterministic options are considered, so that the interpreter will execute a primitive action at the very moment it becomes executable. In order to avoid getting stuck because of initial choices that seem attractive in the first place but only lead to dead ends, a search operator can be used to plan the steps necessary to achieve some goal. The program semantics are defined by the predicates Trans and Final: a program σ’s transition from one configuration 〈σ, s〉 to a legal successor configuration 〈δ, s′〉 is encoded by Trans(σ, s, δ, s′), where s and s′ are situations and δ is a program as well. A program σ’s termination conditions can be defined by Final(σ, s). A complete formal description of the execution semantics can be found in [13]. Program interpretation in IndiGolog is based on a five-step main cycle [13]:

1. Integrate all exogenous events reported in the last execution cycle into the history.

2. Move the current history fragment forward if necessary. Considering only a fragment keeps the history at a processable length.

3. Check whether the program σ may terminate legally in the current situation s using Final.

4. Evolve the program σ a single step using the predicate Trans. A primitive action is executed if applicable and the history changes accordingly.

5. Integrate sensing results obtained in the current cycle into the history.

The integration of exogenous events and sensing results into the history is done by extending the history by a single entry for each occurrence, in the same way endogenous actions are executed (see the predicate above). This cycle is repeated until either the program may terminate legally or there is no further executable legal transition.
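
The five steps can be summarized as a schematic Prolog loop. Every predicate below is a trivial stand-in chosen so that the sketch is executable; none of it is the actual interpreter code from [13].

:- dynamic pending_exogenous/1, pending_sensing/1.

integrate_exogenous(H0, H) :-                 % step 1
    findall(E, pending_exogenous(E), Es),
    retractall(pending_exogenous(_)),
    append(Es, H0, H).

roll_forward(H, H).                           % step 2 (no truncation here)

final([], _).                                 % step 3: empty program ends

trans([A|Rest], H, Rest, [A|H]).              % step 4: run next action

integrate_sensing(H0, H) :-                   % step 5
    findall(R, pending_sensing(R), Rs),
    retractall(pending_sensing(_)),
    append(Rs, H0, H).

main_cycle(Program, H0, H) :-
    integrate_exogenous(H0, H1),
    roll_forward(H1, H2),
    (   final(Program, H2)
    ->  H = H2
    ;   trans(Program, H2, Program1, H3),
        integrate_sensing(H3, H4),
        main_cycle(Program1, H4, H)
    ).

For instance, main_cycle([goto(a), pickup(letter)], [], H) terminates with H = [pickup(letter), goto(a)].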

The interpreter is implemented in SWI-Prolog, which provides easy mechanisms for interaction with many programming languages like Java or C++. It consists of three components in addition to the main cycle component discussed above. The most important component from the perspective of a system such as the one proposed here is the temporal projector. Its main objective is to evaluate formulas in different situations using regression. It further maintains the “current history fragment” (see step 2 of the main cycle). Action execution is part of the environment manager’s duties. It is responsible for communication between devices and the main cycle component. This is done by sending execution commands to the devices on the one hand and returning sensing results on the other. Knowledge about the devices’ capabilities is part of the initialization. Finally, a domain application comprising the domain-specific parts of the basic action theory is necessary to set up a deployable system.

3.2. History-Based Diagnosis

In [17], Iwan describes a diagnosis approach for robots in dynamic environments: history-based diagnosis. It aims at finding alternative action sequences (histories) that explain the occurrence of some observation Φ contradicting the expectations for some situation s, that is, Σ |= ¬Φ(s), where Σ is the basic action theory of the domain and Φ is a closed formula in s.

Based on the situation calculus, alternative histories are generated by two operations: (1) variation of individual actions in the original sequence (e.g. the robot goes to the wrong room), and (2) insertion of exogenous actions (e.g. somebody moves an object from one room to another). While the former relates to erroneous action and sensing executions, the latter addresses events not under a robot’s control.

Sohrabi extended this approach by incorporating the initial situation S0 in the diagnosis problem [28]. Different initial situations allow for conflicting initial beliefs (e.g. object O is in room A, or it is in room B). Using this extension we are able to deal with uncertainty in the initial belief as well. This leads to the following definition of a diagnosis: (H(S0), ~a) is a diagnosis for a system DS = (Σ, OBS[S0, s]) if and only if

Σ ∪ H(S0) |= ∃s. s = do(~a, S0) ∧ executable(s) ∧ OBS[S0, s]

holds. Here, Σ defines the basic action theory, ~a is an action sequence, and OBS[S0, s] is a sentence of the situation calculus in situation s, where s evolved from the initial situation S0. H(S0) models adaptations to the initial state, whereas executable(s) is an abbreviation for the preconditions of every action ai ∈ ~a being fulfilled. This definition of a diagnosis will be used throughout the remainder of the paper.
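
To make the definition concrete, the following sketch tests a single candidate over a toy state representation (a list of fluent literals). The domain encoding is again our own and far simpler than a full basic action theory.

% poss/2 and result/3 encode a toy delivery domain.
poss(goto(_), _).
poss(pickup(O), State) :-
    member(at(R), State), member(isat(O, R), State),
    \+ member(holds(_), State).
poss(putdown(O), State) :-
    member(holds(O), State).

result(goto(R), S0, S) :-
    select(at(_), S0, S1), S = [at(R)|S1].
result(pickup(O), S0, S) :-
    member(at(R), S0), select(isat(O, R), S0, S1), S = [holds(O)|S1].
result(putdown(O), S0, S) :-
    member(at(R), S0), select(holds(O), S0, S1), S = [isat(O, R)|S1].

% executable(S0, Actions, S): Actions can be followed step by step from S0.
executable(S0, [], S0).
executable(S0, [A|As], S) :-
    poss(A, S0), result(A, S0, S1), executable(S1, As, S).

% A candidate (adapted initial state, action sequence) is a diagnosis if the
% sequence is executable and the observation holds in the final situation.
diagnosis(InitialState, Actions, Observation) :-
    executable(InitialState, Actions, Final),
    member(Observation, Final).

For example, diagnosis([at(e), isat(letter, a)], [goto(a), pickup(letter)], holds(letter)) succeeds, while a variant starting with goto(b) already fails the executability test.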

3.3. A Belief Management System for Dynamic Environments

We designed and implemented our belief management system around history-based diagnosis and IndiGolog. For our purpose, we had to adapt the standard features and the interpretation cycle of IndiGolog. We added diagnostic reasoning and related decision procedures, as well as features to handle multiple situation hypotheses (where the belief of our robot is represented by the values of all fluents).

Our concept for handling multiple hypotheses is mainly based on an extended IndiGolog temporal projector that is able to maintain a pool of hypotheses S (of configurable size) as well as a reference to the current favorite hypothesis sf ∈ S. The original steps of the IndiGolog cycle were adapted only in the sense that the temporal projector was changed. We added a new step 1a (executed right after step 1) that is responsible for controlling our belief maintenance system. This concept enables us to provide an efficient and effective belief management system that integrates seamlessly into IndiGolog.

The main stages of the IndiGolog interpreter enhanced with our new step 1a are:

1) Integrate all exogenous events reported in the last execution cycle into the history.

1a) Execute diagnosis step:

i) update all si ∈ S with si ≠ sf,
ii) check the consistency of sf,
iii) perform diagnosis if necessary,
iv) decide on a new sf if necessary.

2) Move the current history fragment forward if necessary.

3) Check whether the program σ may terminate legally in the current situation sf using Final.

4) Evolve the program σ a single step using the predicate Trans.

5) Integrate sensing results obtained in the current cycle into the history.

The first part (i) in step 1a is necessary as the standard cycle only deals with a single history sf, but changes to this history have to be propagated to the alternatives in the hypothesis pool S as well. Thus, all hypotheses are extended by executed and exogenous actions, as well as sensing results.

In part (ii) we verify sf’s consistency with reality. Background knowledge (rules of common sense, domain knowledge) is employed for the purpose of deriving new facts and conflicts from perception results. An example concerning the location of the object Calculator was discussed in the Introduction. Even though perception results can be handled by SSAs in the same way as exogenous or endogenous actions, they have to be taken into special consideration as they do not change the environment. Thus, every fluent update resulting from a sensing action has to be analyzed as to whether the new value range is consistent with the old one. In IndiGolog there are two different types of SSAs for sensing actions: (1) settles reduces the possible range of values (due to functional fluents) for a fluent to one specific value, and (2) rejects discards one fluent value. In the case of settles we check whether the remaining value was possible before; in the case of rejects, whether the rejected value was not the only possible value before the execution of the sensing action.
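
Both checks can be stated directly over a fluent’s set of possible values; the list representation below is invented for this sketch.

% A 'settles' update is consistent if the settled value was possible before.
consistent_settle(PossibleBefore, Settled) :-
    member(Settled, PossibleBefore).

% A 'rejects' update is consistent if the rejected value was not the only
% possible value before the sensing action.
consistent_reject(PossibleBefore, Rejected) :-
    PossibleBefore \= [Rejected].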

Part (iii) triggers a diagnosis process iff sf was found to be inconsistent. We derive diagnoses satisfying the definition in Section 3.2. As a history may comprise endogenous, exogenous and sensing actions, we have to consider each of these options in the generation of diagnoses. Endogenous actions are performed actively, so it is a good guess that the action, or at least some variation of it, is part of the correct hypothesis. The same applies to sensing actions, whereas exogenous actions may occur at any time.

We adapt history-based diagnosis according to Iwan [17] with these ideas in mind, using the predicates Varia(OrigAct, Var, Cond, PV) and Ins(Act, Cond, PV). While Ins states that under condition Cond the action Act is a valid insertion, Varia is used to generate a valid variation Var of some action OrigAct with a preference value PV if condition Cond is met by situation s. The preference value is one of {1, 2, 3}; the higher the value, the less likely the variation. We use a value of 1 for an endogenous action and a value of 2 for sensing actions. We increase this value by one for incomplete initial situations (to be discussed below). This implies that changes to the initial state are equally important to variations of endogenous actions. The preference value of a derived sequence is defined as the sum of the individual PVs. Equal PVs are ranked in order of their occurrence.

Derived action sequences have to be executable, that is, it has to be possible to follow them step-by-step from a given initial state. This implies that the initial assignment of all relevant fluents has to be known, so our system needs to be able to deal with incomplete initial beliefs. We cope with this issue by calculating not only variations that are definitely executable, but also those possibly executable (under the assumption of an incomplete S0). In this case, we assign higher preference values. As the number of derivable situations is not finite in general, pruning is essential to control the size of the hypothesis pool. Thus, whenever the preference value of a (partly) generated situation reaches a predefined threshold, we abandon it.
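
A minimal generator along these lines is sketched below: varia/3 and ins/1 are simplified versions of Varia and Ins (the condition argument is dropped), the insertion cost of 1 is our own assumption, and the budget argument realizes the threshold-based pruning.

% Variations of actions with their preference values (simplified varia/3).
varia(goto(R), goto(R2), 1)     :- room(R2), R2 \= R.  % went to a wrong room
varia(pickup(O), pickup(O2), 1) :- object(O2), O2 \= O.
varia(sense(O), ghost(O), 2).                          % sensing saw a ghost

ins(move(O, R)) :- object(O), room(R).                 % exogenous event

room(a). room(b). room(c).
object(letter). object(folder).

% candidate(History, Variant, Cost, Budget): Variant explains History with
% summed preference value Cost; branches exceeding Budget are pruned.
candidate([], [], 0, _).
candidate([A|As], [A|Vs], C, B) :-            % keep the action unchanged
    candidate(As, Vs, C, B).
candidate([A|As], [V|Vs], C, B) :-            % vary this action
    varia(A, V, P), B1 is B - P, B1 >= 0,
    candidate(As, Vs, C0, B1), C is C0 + P.
candidate(As, [E|Vs], C, B) :-                % insert event (cost 1: assumed)
    B >= 1, ins(E), B1 is B - 1,
    candidate(As, Vs, C0, B1), C is C0 + 1.

For example, candidate([goto(c), sense(letter)], V, C, 2) enumerates, among others, the variant V = [goto(a), sense(letter)] with cost C = 1.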

We can estimate an upper bound for the size of the diagnosis space w.r.t. the number of variations and insertions that happened between two consecutive actions [7].

Theorem 1. Let l be the length of the history σ, n be the number of different possible exogenous events, k be the maximum number of insertions between two actions of σ, and m be the maximum number of variations of an action. Then, the number H of potential diagnosis candidates is

H = ((m + 1) · Σ_{i=0}^{k} n^i)^l.

Proof (Sketch). For every action there are m variations plus the action itself. Every action can be followed by 0 to k insertions, with n^i possible choices for a sequence of i insertions. The whole term is exponential in the history length l.

For a specific number c of faults we can determine a necessary minimum pool size (i.e. the length of the formula) ensuring the completeness of our approach. We can show that w.r.t. a fixed maximum number of changes c to a given history, the required pool size is much smaller than the number H established in Theorem 1.

Theorem 2. Let σ be a history and p be the number of diagnoses in Pool(σ). Let c be the maximum number of all insertions i and variations v to a history σ, and let k, l, m and n be as in Theorem 1. Further, let

τ = Σ_{c′=1}^{c} Σ_{i=0, v=c′−i}^{c′} C(l, v) · m^v · C(i+l−1, i) · n^i,

where C(a, b) denotes the binomial coefficient. If c ≤ k, l then τ is the exact number of possible hypotheses; τ is an upper bound for c > k, l. With p ≥ τ we can guarantee that our approach is complete.


Proof (Sketch). We investigate variations v and insertions i separately, where the product of the corresponding options determines the total number of diagnoses. For each of the C(l, v) possibly faulty action combinations we have m^v instances. Regarding insertions, after adding i elements to σ we get |σ′| = i + |σ| = i + l. As the first element is fixed, similarly as with the variations, we have n^i instances for each of the C(i+l−1, i) combinations. Consequently, we have to sum over all different distributions of variations and insertions and finally sum over all c′ ≤ c. If the pool size is greater than or equal to this maximum number of hypotheses, then obviously all hypotheses can be considered to be in the pool.
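
Both bounds are easy to evaluate numerically. The sketch below uses choose/3 as a helper for binomial coefficients; the parameter values in the usage note are our own.

% choose(N, K, C): binomial coefficient C(N, K).
choose(_, 0, 1) :- !.
choose(N, K, 0) :- K > N, !.
choose(N, K, C) :-
    N1 is N - 1, K1 is K - 1,
    choose(N1, K1, C1),
    C is C1 * N // K.

% H from Theorem 1: H = ((m+1) * sum_{i=0..k} n^i)^l.
h_bound(M, N, K, L, H) :-
    aggregate_all(sum(P), (between(0, K, I), P is N**I), Sum),
    H is ((M + 1) * Sum) ** L.

% tau from Theorem 2.
tau(C, L, M, N, Tau) :-
    aggregate_all(sum(T),
        ( between(1, C, Cp), between(0, Cp, I), V is Cp - I,
          choose(L, V, BV), IL is I + L - 1, choose(IL, I, BI),
          T is BV * M**V * BI * N**I ),
        Tau).

For l = 40, m = 2, n = 3, k = 2 and c = 2, tau(2, 40, 2, 3, Tau) yields a value in the order of 10^4, while h_bound(2, 3, 2, 40, H) yields a number with more than 60 digits.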

In part (iv) we decide on a new sf in case the current one was found to be inconsistent and some alternative consistent histories were found. The preference value derived in part (iii) for each si ∈ S is used to choose the hypothesis with the highest rank (i.e. the minimum preference value) as the new favorite. Execution then continues with step 2.
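
Choosing the new favorite then amounts to a stable sort over preference values; a sketch over PV-History pairs (representation invented):

% keysort/2 is stable, so hypotheses with equal preference values keep
% their order of occurrence, as required above.
select_favorite(Hypotheses, Favorite) :-
    keysort(Hypotheses, [_-Favorite|_]).

For example, select_favorite([2-h1, 1-h2, 1-h3], F) yields F = h2.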

This concept provides an effective belief management system that is able to handle multiple hypotheses and several fault types, and is easily integrated into IndiGolog. First experiments are discussed in the next section.

4. Experimental Analysis

We simulated and analyzed the performance of our belief management system in an office delivery environment. As in the motivating example in Section 1, a robot had to move objects between rooms. The environment’s layout was constructed from an office floor of our department. The resulting map (as depicted in Figure 3) consists of 59 rooms connected via 12 hallway segments.

We simulated a robot that is able to pick up an object, carry a single object, and put the currently grasped object down again. In contrast to the motivating example, the robot can only move from one room to an adjacent one. This way, the number of actions was increased to a more realistic value. The robot is further equipped with a vision sensor perceiving objects within the same room, and a pressure sensor indicating whether it currently carries an object.

The SSAs and action preconditions were implemented by hand in a straightforward fashion. Our background model stated basic rules, for example that an object cannot be in two different rooms simultaneously, and that the robot currently carries some object if the pressure sensor indicates it. We executed a given plan, so that the necessity of planning would not interfere with our experiments. The controller was thus a simple IndiGolog program that searches for the next action to execute in this given sequence. A typical execution trace without sensing results consisted of about 40 steps. The maximum history size parameter was set to 45 elements.

We defined three action fault types that we injected randomly, depending on the fault scenarios described below: the robot might (1) fail to pick up an object, (2) pick up the wrong object, or (3) fail to release an object. These faults were chosen as they frequently occur in reality. A single mission required the robot to deliver three different objects to three arbitrary offices. A mission was accomplished if all objects were at their desired locations at the end. It failed if the interpreter aborted the program, the execution exceeded a given time limit of 2 minutes, or any object was not at its desired location when the robot finished execution. The percentage of successful missions served as performance measure.

In order to cover a wide range of scenarios and evaluate the influence of increasing mission complexity, we defined various fault and sensing scenarios. We considered three fault scenarios F1-F3: F1 defines a probability of 20% for a pick-up action to fetch a wrong object, and of 40% for it to fail entirely. F2 adds a probability of 30% for a put-down action to fail, and in F3 we add a probability of 5% that a sensing action fails, in particular that the robot perceives a ghost object (one not actually there).
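
For illustration, the fault injection for a pick-up action in scenario F1 can be sampled as follows (probabilities taken from the text; predicate and outcome names are invented):

:- use_module(library(random)).

object(letter). object(folder). object(calculator).

% Sample the outcome of a pick-up under fault scenario F1:
% 40% complete failure, 20% wrong object, otherwise nominal.
pickup_outcome(Object, Outcome) :-
    random(X),
    (   X < 0.4  -> Outcome = failed
    ;   X < 0.6  -> wrong_object(Object, O2), Outcome = picked(O2)
    ;   Outcome = picked(Object)
    ).

wrong_object(Object, Other) :-
    findall(O, (object(O), O \= Object), Others),
    random_member(Other, Others).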

In order to resemble real robot systems closely, continuous sensing was implemented for our experiments as a new feature in IndiGolog. Three different sensing rates S1-S3 were used in our tests: S1 refers to the availability of sensor data after every action; with S2, sensor data is available after every second action, and with S3 after every third action. This covers the aspect that sensing might not be available to the robot at high rates due to resource management, e.g. limitations in power or computation time. As a result, the robot is not provided with full perception in scenarios S2 and S3, and might thus miss valuable information on failures.

The contestants of our tests were two robots: a simple baseline agent (Base) and one (BMS) equipped with our belief management system. These two had to prove themselves on 50 different missions, each to be solved 10 times with varying seeds, for every combination of fault scenario and sensing rate.

Fig. 3. The office environment used for our experimental analysis.

Fig. 4. Successfully completed missions with (BMS) and without belief management (Base).

The results of our experiments are reported in Figure 4. From these results, it is obvious that the robot using our belief management system in general significantly outperformed the robot without this extension. Only in the simplest fault scenario (F1) with full information availability (S1) was the simple robot able to solve 100% of its missions, on par with BMS. From the figure, it is also evident that the performance of both robots declined with the rising difficulty of the scenarios (more fault types), but belief management helps to keep performance at a decent level. The results also show that a declining sensing rate has a negative impact on performance as well. This is clearly due to the fact that the robot is supplied with less and less information about the environment which it can use to detect and repair inconsistent situations. As our belief management system allows the robot BMS to keep performance at decent levels for declining sensing rates, the results suggest that our system might be a specifically interesting approach for designers of real robots, as no hardcoded recovery information is necessary.

Fig. 5. Robot handling delivery requests using AR-tags.

For initial tests on real hardware, we used a Pioneer 3 DX (for details on the robot refer to http://www.mobilerobots.com/researchrobots/researchrobots/pioneerp3dx.aspx) equipped with a laser sensor, a camera and a gripper (see Figure 5). Basic services like localization and navigation were provided by ROS [25], an open-source, multi-layer robot operating system. In ROS, participating entities (nodes) are connected in a peer-to-peer topology, and preemptive tasks can be realized by action servers and clients. We created one action server to handle the pick-up and put-down of objects and another one to take care of navigation tasks. A designated node served as communication point to IndiGolog. It received action commands and returned sensing results and task completion notifications to IndiGolog.

The robot’s mission was similar to those in our simulations, i.e. delivering some object (a milk box); we thus used the same program to control the robot. In order to facilitate object recognition, all objects were marked with augmented reality (AR) tags [2], which provide unique identification and orientation. The world-model component could thus register an object’s pose and send a corresponding sensing message to IndiGolog whenever the robot entered a room and detected an AR tag with its camera. In order to evaluate the functionality and effectiveness of our approach, we performed an experiment in which we induced three faults:

First, we stole the object while the robot was grasping it, mimicking an execution error in this action. The robot successfully detected the failure and corrected it with a second, this time successful, attempt to pick up the object.

Second, we snatched the object while it was being transported by the robot and put it back at the grasping position (an exogenous event). After the next sensing (in this case after the first attempt to put the object down), the robot realized that it held no object and that there was no object at the put-down location. It thus concluded that somebody had snatched the object (and that it had consequently tried to put down a non-existing object), so it went back to the pickup position, grasped the object again, and moved to the target position.

Finally, we blocked the gripper when the robot was releasing the object at the target destination (another execution fault). The robot detected the problem, generated the corresponding hypothesis, and managed to put the object down in a second attempt.

5. Conclusions and Future Work

In this paper we presented a belief management system that integrates seamlessly into the agent programming language IndiGolog. Our basic concept was to integrate history-based diagnosis into the interpreter in order to allow for the repair of inconsistent situations in the knowledge base and the maintenance of several situation hypotheses. We will provide our IndiGolog extensions as open source and will contribute them to a broadly used reference implementation (see http://indigolog.sourceforge.net).

First simulations in an office delivery robot scenario showed that a robot equipped with our system is able to handle significantly more erroneous situations than a simple robot. In contrast to classical diagnosis approaches, the situation calculus enables us to also deal with faults that occurred in the past, rather than just faults in the current situation. While there are more active approaches to hypothesis discrimination, as discussed in Section 2, we currently follow a passive approach that uses knowledge gathered during the execution of a plan and does not add actions whose sole purpose is to gather data.

For future work we are interested in moving the focus from the described passive behavior to a more active management of diagnoses, such as hypothesis discrimination via actively performed actions. We are also interested in formally proving the correctness of our approach. While first tests with actual hardware robots showed promising results, further tests will have to demonstrate the robustness of deploying our belief management system on real robots. These tests will go beyond the current basic faults and will feature more complex settings, such as a kitchen. Refining the necessary discrete models to work effectively for complex settings on hardware robots in real-time environments will be one of the related tasks. In this respect, we will also provide a more general evaluation of the structure and quality of our solution, i.e. whether it is applicable to other domains as well. For this purpose, we will investigate the run-time properties of our approach that are important for a broad applicability of our idea.

References

[1] R. Alur, C. Courcoubetis, and M. Yannakakis. Distinguishing tests for nondeterministic and probabilistic machines. In Annual ACM Symposium on Theory of Computing (STOC), pages 363–372, 1995.

[2] R. T. Azuma. A survey of augmented reality. Presence, 6:355–385, 1997.

[3] A. Bouguerra, L. Karlsson, and A. Saffiotti. Handling uncertainty in semantic-knowledge based execution monitoring. In Int. Conference on Intelligent Robots and Systems, 2007.

[4] J. de Kleer. Getting the probabilities right for measurement selection. In International Workshop on Principles of Diagnosis, pages 141–146, 2006.

[5] R. Dearden and C. Boutilier. Integrating planning and execution in stochastic domains. In Conference on Uncertainty in Artificial Intelligence, pages 162–169, 1994.



[6] A. Ferrein, C. Fritz, and G. Lakemeyer. On-line decision-theoretic Golog for unpredictable domains. In International Cognitive Robotics Workshop, 2004.

[7] A. Ferrein, St. Gspandl, I. Pill, M. Reip, and G. Steinbauer. Belief management for high-level robot programs. In Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), Barcelona, Spain, July 2011.

[8] C. Ferrell. Robust agent control of an autonomous robot with many sensors and actuators. Technical Report AITR-1443, MIT Artificial Intelligence Laboratory, 1993.

[9] M. Fichtner, A. Großmann, and M. Thielscher. Intelligent execution monitoring in dynamic environments. Fundamenta Informaticae, 57(2–4):371–392, 2003.

[10] C. Fritz. Monitoring the Generation and Execution of Optimal Plans. PhD thesis, University of Toronto, April 2009.

[11] C. Fritz and S. A. McIlraith. Planning in the face of frequent exogenous events. In International Conference on Automated Planning and Scheduling (online poster), 2008.

[12] D. Garlan and B. Schmerl. Model-based adaptation for self-healing systems. In First Workshop on Self-Healing Systems, pages 27–32, 2002.

[13] G. De Giacomo, Y. Lesperance, H. J. Levesque, and S. Sardina. IndiGolog: A high-level programming language for embedded reasoning agents. In Multi-Agent Programming: Languages, Tools and Applications, pages 31–72. Springer, 2009.

[14] P. Goel, G. Dedeoglu, S. I. Roumeliotis, and G. S. Sukhatme. Fault detection and identification in a mobile robot using multiple model estimation and neural network. In IEEE International Conference on Robotics and Automation, 2000.

[15] St. Gspandl, I. Pill, M. Reip, and G. Steinbauer. Belief management for autonomous robots using history-based diagnosis. In Twenty-Fourth International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE), Syracuse, USA, June 2011.

[16] K. Iagnemma and M. Buehler. Special issue on the DARPA Grand Challenge. Journal of Field Robotics, 23(8-9), 2006.

[17] G. Iwan. History-based diagnosis templates in the framework of the situation calculus. AI Communications, 15(1):31–45, 2002.

[18] L. Kuhn, B. Price, J. de Kleer, M. Do, and R. Zhou. Pervasive diagnosis: the integration of active diagnosis into production plans. In Int. Workshop on Principles of Diagnosis, 2008.

[19] H. J. Levesque, R. Reiter, Y. Lesperance, F. Lin, and R. B. Scherl. GOLOG: A logic programming language for dynamic domains. The Journal of Logic Programming, 31(1-3):59–83, 1997.

[20] H. Liu and G. M. Coghill. Qualitative modeling of kinematic robots. In International Workshop on Qualitative Reasoning, 2004.

[21] M. T. Long, R. R. Murphy, and L. E. Parker. Distributed multi-agent diagnosis and recovery from sensor failures. In IEEE/RSJ Int. Conference on Intelligent Robots and Systems, 2003.

[22] J. McCarthy. Situations, actions and causal laws. Technical report, Stanford University, 1963.

[23] S. McIlraith. Explanatory diagnosis: Conjecturing actions to explain observations. In International Workshop on Principles of Diagnosis, 1997.

[24] S. McIlraith and R. Reiter. On tests for hypothetical reasoning. In Readings in Model-Based Diagnosis, pages 89–96. Morgan Kaufmann, 1992.

[25] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng. ROS: an open-source robot operating system. In ICRA Workshop on Open Source Software, 2009.

[26] R. Reiter. A theory of diagnosis from first principles. Artificial Intelligence, 32(1):57–95, 1987.

[27] R. Reiter. Knowledge in Action. Logical Foundations for Specifying and Implementing Dynamical Systems. MIT Press, 2001.

[28] S. Sohrabi, J. A. Baier, and S. A. McIlraith. Diagnosis as planning revisited. In Int. Conference on the Principles of Knowledge Representation and Reasoning, pages 26–36, 2010.

[29] P. Struss. Testing for discrimination of diagnoses. In Working Papers of the 5th International Workshop on Principles of Diagnosis, 1994.

[30] P. Struss and C. Price. Model-based systems in the automotive industry. AI Magazine, 24(4):17–34, 2004.

[31] S. Thrun, M. Bennewitz, W. Burgard, A. B. Cremers, F. Dellaert, D. Fox, D. Hahnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz. MINERVA: A second-generation museum tour-guide robot. In IEEE International Conference on Robotics and Automation, 1999.

[32] A. Trebi-Ollennu. Special issue on robots on the Red Planet. IEEE Robotics & Automation Magazine, 13(2), 2006.

[33] V. Verma, G. Gordon, R. Simmons, and S. Thrun. Real-time fault diagnosis. IEEE Robotics & Automation Magazine, 11(2):56–66, 2004.

[34] D. S. Weld, C. R. Anderson, and D. E. Smith. Extending Graphplan to handle uncertainty and sensing actions. In Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, pages 897–904, 1998.