mmi.tudelft.nl/pub/koen/atal2000.pdf



Agent Programming with Declarative Goals

Koen V. Hindriks, Frank S. de Boer, Wiebe van der Hoek, and John-Jules Ch. Meyer

April 17, 2000

Abstract

A long and lasting problem in agent research has been to close the gap between agent logics and agent programming frameworks. The main reason for this problem of establishing a link between agent logics and agent programming frameworks is identified and explained by the fact that agent programming frameworks have not incorporated the concept of a declarative goal. Instead, such frameworks have focused mainly on plans or goals-to-do instead of the end goals to be realised, which are also called goals-to-be. In this paper, a new programming language called GOAL is introduced which incorporates such declarative goals. The notion of a commitment strategy - one of the main theoretical insights due to agent logics, which explains the relation between beliefs and goals - is used to construct a computational semantics for GOAL. Finally, a proof theory for proving properties of GOAL agents is presented. The programming logic for GOAL is a temporal logic extended with belief and goal modalities. An example program is proven correct by using this programming logic.

1 Goal-Oriented Agent Programming

In the early days of agent research, an attempt was made to make the concept of agents more precise by means of logical systems. This effort resulted in a number of - mainly - modal logics for the specification of agents which formally defined notions like belief, goal, intention, etc. associated with agents [14, 19, 3, 4]. The relation of these logics with more practical approaches remains unclear, however, to this day. Several efforts to bridge this gap have been attempted. In particular, a number of agent programming languages have been developed to bridge the gap between theory and practice [13, 9].
These languages show a clear family resemblance with one of the first agent programming languages, Agent-0 [17, 6], and also with the language ConGolog [5, 8, 7]. These programming languages define agents in terms of their corresponding beliefs, goals, plans and capabilities. Although they define similar notions as in the logical approaches, there is one notable difference. In logical approaches, a goal is a declarative concept, whereas in the cited programming languages goals are defined as sequences of actions or plans. The terminology used differs from case to case. However, whether they are called commitments (Agent-0), intentions (AgentSpeak [13]), or goals (3APL [10]) makes little difference: all these notions are structures built from actions and therefore similar in nature to plans. With respect to ConGolog, a more traditional computer science perspective is adopted, and the corresponding structures are simply called programs. The PLACA language [18], a successor of AGENT0, also focuses more on extending AGENT0 to a language with complex planning structures (which are not part of the programming language itself!) than on providing a clear theory of declarative goals of agents as part of a programming language, and in this respect is similar to AgentSpeak and 3APL. The type of goal included in these languages may also be called a goal-to-do and provides for a kind of procedural perspective on goals.

In contrast, a declarative perspective on goals in agent languages is still missing. Because of this mismatch it has not been possible so far to use modal logics which include both belief and goal modalities for the specification and verification of programs written in such agent languages, and it has been impossible to close the gap between agent logics and programming frameworks so


far. The value of adding declarative goals to agent programming lies both in the fact that it offers a new abstraction mechanism and in the fact that agent programs with declarative goals more closely approximate the intuitive concept of an intelligent agent. To fully realise the potential of the notion of an intelligent agent, a declarative notion of a goal, therefore, should also be incorporated into agent programming languages. In this paper, we introduce the agent programming language GOAL, which takes the declarative concept of a goal seriously and which provides a concrete proposal to bridge the gap between theory and practice. We offer a complete theory of agent programming in the sense that our theory provides both a programming framework and a programming logic for such agents. In contrast with other attempts [17, 21] to bridge the gap, our programming language and programming logic are related by means of a formal semantics. Only by providing such a formal relation is it possible to make sure that statements proven in the logic concern properties of the agent.

2 The Programming Language GOAL

In this section, we introduce the programming language GOAL (for Goal-Oriented Agent Language). The programming language GOAL is inspired by work in concurrent programming, in particular by the language UNITY designed by Chandy and Misra [2]. The basic idea is that a set of actions which execute in parallel constitutes a program. However, whereas UNITY is a language based on assignment to variables, the language GOAL is an agent-oriented programming language that incorporates more complex notions such as belief, goal, and agent capabilities, which operate on high-level information instead of simple values.

As in most agent programming languages, GOAL agents select actions on the basis of their current mental state. A mental state consists of the beliefs and goals of the agent.
However, in contrast to most agent languages, GOAL incorporates a declarative notion of a goal that is used by the agent to decide what to do. Both the beliefs and the goals are drawn from one and the same logical language, L, with associated consequence relation ⊨. An agent thus keeps two databases, respectively called the belief base and the goal base. The difference between these two databases originates from the different meaning assigned to sentences stored in the belief base and sentences stored in the goal base. To clarify the interaction between beliefs and goals, one of the more important problems that needs to be solved is establishing a meaningful relationship between beliefs and goals. This problem is solved here by imposing a constraint on mental states that is derived from the default commitment strategy that agents use. The notion of a commitment strategy is explained in more detail below. The constraint imposed on mental states requires that an agent does not believe that ψ is the case if it has a goal to achieve ψ, and, moreover, requires ψ to be consistent if ψ is a goal.

Definition 2.1 (mental state)
A mental state of an agent is a pair ⟨σ, γ⟩ where σ ⊆ L are the agent's beliefs and γ ⊆ L are the agent's goals, and σ and γ are such that for any ψ ∈ γ we have:
• ψ is not entailed by the agent's beliefs (σ ⊭ ψ),
• ψ is consistent (⊭ ¬ψ), and
• σ is consistent (σ ⊭ false).

A mental state does not contain a program or plan component in the 'classical' sense. Although both the beliefs and the goals of an agent are drawn from the same logical language, as we will see below, the formal meaning of beliefs and goals is very different. This difference in meaning reflects the different features of the beliefs and the goals of an agent. The declarative goals are best thought of as achievement goals in this paper. That is, these goals describe a goal state that the agent desires to reach. Mainly due to the temporal features of such goals, many properties of beliefs fail for goals.
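As an aside, the constraint of Definition 2.1 is easy to operationalise. The following sketch (ours, not from the paper) checks it for a toy language consisting only of atomic propositions, in which entailment collapses to set membership and every atom is trivially consistent, so only the first clause of the definition has any force:

```python
# Toy check of the mental state constraint (Definition 2.1).
# Assumption (ours): L contains only atoms, so "sigma |= psi" reduces to
# "psi in sigma" and the two consistency clauses hold trivially.

def is_legal_mental_state(sigma: set, gamma: set) -> bool:
    """A goal may not already be entailed by (here: contained in) the beliefs."""
    return all(psi not in sigma for psi in gamma)

print(is_legal_mental_state({"home"}, {"movies"}))  # True
print(is_legal_mental_state({"home"}, {"home"}))    # False: goal already believed
```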
For example, the fact that an agent has the goal to be at home and the goal to be at the movies does not allow the conclusion that this agent also has the conjunctive goal


to be at home and at the movies at the same time. As a consequence, less stringent consistency requirements are imposed on goals than on beliefs. An agent may have the goal to be at home and the goal to be at the movies simultaneously; assuming these two goals cannot consistently be achieved at the same time does not mean that an agent cannot have adopted both in the language GOAL.

In this paper, we assume that the language L used for representing beliefs and goals is a simple propositional language. As a consequence, we do not discuss the use of variables or parameter mechanisms. Our motivation for this assumption is the fact that we want to present our main ideas in their simplest form and do not want to clutter the definitions below with details. Also, more research is needed to extend the programming language with a parameter passing mechanism, and to extend the programming logic for GOAL with first order features.

The language L for representing beliefs and goals is extended to a new language L_M which enables us to formulate conditions on the mental state of an agent. The language L_M consists of so called mental state formulas. A mental state formula is a boolean combination of the basic mental state formulas Bφ, which expresses that φ is believed to be the case, and Gφ, which expresses that φ is a goal of the agent.

Definition 2.2 (mental state formula)
The set of mental state formulas L_M is defined by:
• if φ ∈ L, then Bφ ∈ L_M,
• if φ ∈ L, then Gφ ∈ L_M,
• if φ1, φ2 ∈ L_M, then ¬φ1, φ1 ∧ φ2 ∈ L_M.

The usual abbreviations for the propositional operators ∨, →, and ↔ are used. We write true as an abbreviation for B(p ∨ ¬p) for some p, and false for ¬true.

A third basic concept in GOAL is that of an agent capability. The capabilities of an agent consist of a set of so called basic actions. The effects of executing such a basic action are reflected in the beliefs of the agent, and therefore a basic action is taken to be a belief update on the agent's beliefs. A basic action thus is a mental state transformer.
Two examples of agent capabilities are the actions ins(φ) for inserting φ in the belief base and del(φ) for removing φ from the belief base. Agent capabilities are not supposed to change the goals of an agent, but because of the constraints on mental states they may, as a side effect, modify the current goals. For the purpose of modifying the goals of the agent, two special actions adopt(φ) and drop(φ) are introduced to respectively adopt a new goal or drop some old goals. We write Bcap and use it to denote the set of all belief update capabilities of an agent. Bcap thus does not include the two special actions for goal updating, adopt(φ) and drop(φ). The set of all capabilities is then defined as Cap = Bcap ∪ {adopt(φ), drop(φ) | φ ∈ L}. Individual capabilities are denoted by a.

The set of basic actions or capabilities associated with an agent determines what an agent is able to do. It does not specify when such a capability should be exercised and when performing a basic action is to the agent's advantage. To specify such conditions, the notion of a conditional action is introduced. A conditional action consists of a mental state condition expressed by a mental state formula and a basic action. The mental state condition of a conditional action states the conditions that must hold for the action to be selected. Conditional actions are denoted by the symbol b throughout this paper.

Definition 2.3 (conditional action)
A conditional action is a pair φ → do(a) such that φ ∈ L_M and a ∈ Cap.

Informally, a conditional action φ → do(a) means that if the mental condition φ holds, then the agent may consider doing basic action a. Of course, if the mental state condition holds in the current state, the action a can only be successfully executed if the action is enabled, that is, only if its preconditions hold.

A GOAL agent consists of a specification of an initial mental state and a set of conditional actions.


Definition 2.4 (GOAL agent)
A GOAL agent is a triple ⟨Π, σ0, γ0⟩ where Π is a non-empty set of conditional actions, and ⟨σ0, γ0⟩ is the initial mental state.

2.1 The Operational Semantics of GOAL

One of the key ideas in the semantics of GOAL is to incorporate into the semantics a particular commitment strategy (cf. [15, 3]). The semantics is based on a particularly simple and transparent commitment strategy, called blind commitment. An agent that acts according to a blind commitment strategy drops a goal if and only if it believes that that goal has been achieved. By incorporating this commitment strategy into the semantics of GOAL, a default commitment strategy is built into agents. It is, however, only a default strategy, and a programmer can overwrite this default strategy by means of the drop action. It is not possible, however, to adopt a goal φ in case the agent believes that φ is already achieved.

The semantics of action execution should now be defined in conformance with this basic commitment principle. Recall that the basic capabilities of an agent were interpreted as belief updates. Because of the default commitment strategy, there is a relation between beliefs and goals, however, and we should extend the belief update associated with a capability to a mental state transformer that updates beliefs as well as goals according to the blind commitment strategy. To get started, we thus assume that some specification of the belief update semantics of all capabilities - except for the two special actions adopt and drop, which only update goals - is given. Our task is, then, to construct a mental state transformer semantics from this specification for each action.
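The construction just described - lifting a given belief update function T to a transformer of complete mental states that also drops achieved goals - can be sketched as follows. This is a toy model of ours, not the paper's: belief bases are sets of atoms, a goal is a frozenset of atoms read as a conjunction, partiality is modelled by returning None, and entailment is set inclusion.

```python
# Sketch of lifting a partial belief update T to a mental state transformer M
# under blind commitment. Assumptions (ours): beliefs are sets of atoms, a
# goal is a frozenset of atoms read conjunctively, T returns None when the
# action is not enabled, and entailment is set inclusion.

def lift(T):
    def M(a, state):
        sigma, gamma = state
        new_sigma = T(a, sigma)
        if new_sigma is None:        # T is partial, so M is partial too
            return None
        # blind commitment: drop exactly the goals the new beliefs entail
        new_gamma = {psi for psi in gamma if not psi <= new_sigma}
        return (new_sigma, new_gamma)
    return M

# A hypothetical capability: 'arrive_home' simply inserts the atom 'home'.
def T(action, sigma):
    return sigma | {"home"} if action == "arrive_home" else None

M = lift(T)
state = ({"raining"}, {frozenset({"home"}), frozenset({"movies"})})
print(M("arrive_home", state))
# beliefs gain 'home'; the goal {'home'} is dropped, {'movies'} survives
```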
That is, we must specify how a basic action updates the complete current mental state of an agent, starting with a specification of the belief update associated with the capability only.

From the default blind commitment strategy, we conclude that if a basic action a - different from an adopt or drop action - is executed, then a goal is dropped only if the agent believes that the goal has been accomplished after doing a. The revision of goals thus is based on the beliefs of the agent. The beliefs of an agent represent all the information that is available to an agent to decide whether or not to drop or adopt a goal. So, in case the agent believes that a goal has been achieved by performing some action, then this goal must be removed from the current goals of the agent. Besides the default commitment strategy, only the two special actions adopt and drop can result in a change to the goal base.

The initial specification of the belief updates associated with the capabilities Bcap is formally represented by a partial function T of type Bcap × ℘(L) → ℘(L). T(a, σ) returns the result of updating belief base σ by performing action a. The fact that T is a partial function represents the fact that an action may not be enabled or executable in some belief states. The mental state transformer function M is derived from the semantic function T and also is a partial function. As explained, M(a, ⟨σ, γ⟩) removes any goals from the goal base that have been achieved by doing a. The function M also defines the semantics of the two special actions adopt and drop. An adopt(φ) action adds φ to the goal base if φ is consistent and φ is not believed to be the case. A drop(φ) action removes every goal that entails φ from the goal base. As an example, consider the two extreme cases: drop(false) removes no goals, whereas drop(true) removes all current goals.

Definition 2.5 (mental state transformer M)
Let ⟨σ, γ⟩ be a mental state, and T be a partial function that associates belief updates with agent capabilities.
Then the partial function M is defined by:
• M(a, ⟨σ, γ⟩) = ⟨T(a, σ), γ \ {ψ ∈ γ | T(a, σ) ⊨ ψ}⟩ for a ∈ Bcap, if T(a, σ) is defined,
• M(a, ⟨σ, γ⟩) is undefined for a ∈ Bcap if T(a, σ) is undefined,
• M(drop(φ), ⟨σ, γ⟩) = ⟨σ, γ \ {ψ ∈ γ | ψ ⊨ φ}⟩,
• M(adopt(φ), ⟨σ, γ⟩) = ⟨σ, γ ∪ {φ}⟩ if σ ⊭ φ and ⊭ ¬φ,
• M(adopt(φ), ⟨σ, γ⟩) is undefined if σ ⊨ φ or ⊨ ¬φ.

The semantic function M maps an agent capability and a mental state to a new mental state. The capabilities of an agent are thus interpreted as mental state transformers by M. Although it


is not allowed to adopt a goal φ that is inconsistent - an adopt(false) is not enabled - there is no check on the global consistency of the goal base of an agent built into the semantics. This means that it is allowed to adopt a new goal which is inconsistent with another goal present in the goal base. For example, if the current goal base γ = {p} contains p, it is legal to execute the action adopt(¬p), resulting in a new goal base {p, ¬p}. Although inconsistent goals cannot be achieved at the same time, they may be achieved in some temporal order. Individual goals in the goal base, however, are required to be consistent. Thus, whereas local consistency is required (i.e. individual goals must be consistent), global consistency of the goal base is not required (i.e. γ = {p, ¬p} is a legal goal base).

The second idea incorporated into the semantics concerns the selection of conditional actions. A conditional action φ → do(a) may specify conditions on the beliefs as well as conditions on the goals of an agent. As is usual, conditions on the beliefs are taken as a precondition for action execution: only if the agent's current beliefs entail the belief conditions associated with φ will the agent select a for execution. The goal condition, however, is used in a different way. It is used as a means for the agent to determine whether or not the action will help bring about a particular goal of the agent. In short, the goal condition specifies what the action is good for. This does not mean that the action necessarily establishes the goal immediately, but rather may be taken as an indication that the action is helpful in bringing about a particular state of affairs. To make this discussion more precise, we introduce a formal definition of a formula φ that partially fulfils a goal in a mental state ⟨σ, γ⟩.

Definition 2.6 (φ partially fulfils a goal in a mental state)
Let ⟨σ, γ⟩ be a mental state, and φ ∈ L.
Then:

γ ⇝ φ iff for some ψ ∈ γ: ψ ⊨ φ and σ ⊭ φ.

Informally, the definition of γ ⇝ φ can be paraphrased as follows: the agent needs to establish φ to realise one of its goals in γ, but does not believe that φ is the case. The formal definition of γ ⇝ φ entails that the realisation of φ would bring about at least part of one of the goals in the goal base of the agent. The condition that φ is not entailed by the beliefs of the agent ensures that a goal is not a tautology. Of course, variations on this definition of the semantics of goals are conceivable. For example, one could propose a stronger definition of ⇝ such that φ brings about the complete realisation of a goal in the current goal base instead of just part of such a goal. However, our definition of ⇝ provides for a simple and clear principle for action selection: the action in a conditional action is only executed in case the goal condition associated with that action partially fulfils some goal in the current goal base of the agent.

The semantics of belief conditions Bφ, goal conditions Gφ and mental state formulas is defined in terms of the consequence relation ⊨ and the partially fulfils relation ⇝.

Definition 2.7 (semantics of mental state formulas)
Let ⟨σ, γ⟩ be a mental state.
• ⟨σ, γ⟩ ⊨ Bφ iff σ ⊨ φ,
• ⟨σ, γ⟩ ⊨ Gψ iff γ ⇝ ψ,
• ⟨σ, γ⟩ ⊨ ¬φ iff ⟨σ, γ⟩ ⊭ φ,
• ⟨σ, γ⟩ ⊨ φ1 ∧ φ2 iff ⟨σ, γ⟩ ⊨ φ1 and ⟨σ, γ⟩ ⊨ φ2.

A number of properties of the belief and goal modalities and the relation between these operators are listed in the following lemma. By the necessitation rule, an agent believes all tautologies (Btrue). The first validity below states that the beliefs of an agent are consistent. The belief modality distributes over implication, which is expressed by the second validity. This implies that the beliefs of an agent are closed under logical consequence. The third validity is a consequence of the constraint on mental states and expresses that if an agent believes φ it does not have a goal to


achieve φ. As a consequence, an agent cannot have a goal to achieve a tautology. An agent also does not have inconsistent goals, that is, ¬Gfalse is valid.

The goal modality is a very weak logical operator. For example, the goal modality does not distribute over implication. A counterexample is provided by the goal base γ = {p, p → q}. Even G(φ ∧ (φ → ψ)) → Gψ does not hold, because the agent may believe that ψ is the case even if it has a goal to achieve φ ∧ (φ → ψ). Because of the axiom Bψ → ¬Gψ, we must have ¬Gψ in that case, and we cannot conclude that Gψ. From the fact that Gφ and Gψ hold, it is also not possible to conclude that G(φ ∧ ψ). This reflects the fact that individual goals cannot be added up to a single bigger goal; recall that two individual goals may be inconsistent (Gφ ∧ G¬φ is satisfiable), in which case taking the conjunction would lead to an inconsistent goal. In sum, most of the usual problems that many logical operators for motivational attitudes suffer from do not apply to our G operator (cf. also [12]). Finally, the conditions that allow us to conclude that the agent has a (sub)goal ψ are that the agent has a goal φ that logically entails ψ and that the agent does not believe that ψ is the case. The proof rule below then allows us to conclude that Gψ holds.

Lemma 2.8
• ⊨ φ ⇒ ⊨ Bφ, for φ ∈ L,
• ⊨ ¬Bfalse,
• ⊨ B(φ → ψ) → (Bφ → Bψ),
• ⊨ Bφ → ¬Gφ,
• ⊨ ¬G(true),
• ⊨ ¬G(false),
• ⊭ G(φ → ψ) → (Gφ → Gψ),
• ⊭ G(φ ∧ (φ → ψ)) → Gψ,
• ⊭ (Gφ ∧ Gψ) → G(φ ∧ ψ),
• from Gφ, ¬Bψ and ⊨ φ → ψ, infer Gψ.

Now that we have defined the formal semantics of mental state formulas, we are able to formally define the selection and execution of a conditional action. The selection of an action by an agent depends on the satisfaction conditions of the mental state condition associated with the action in a conditional action. The conditions for action selection thus may express conditions on both the belief and goal base of the agent. The belief conditions associated with the action formulate preconditions on the current belief base of the agent.
Only if the current beliefs of the agent satisfy these conditions may an action be selected. A condition Gφ on the goal base is satisfied if φ is entailed by one of the current goals of the agent (and thus, assuming the programmer did a good job, helps in bringing about one of these goals). The intuition here is that an agent is satisfied with anything bringing about at least (part of) one of its current goals. Note that a condition Gφ can only be satisfied if the agent does not already believe that φ is the case (σ ⊭ φ), which prevents an agent from performing an action without any need to do so.

In the definition below, we assume that the action component Π of an agent ⟨Π, σ0, γ0⟩ is fixed. The execution of an action gives rise to a computation step, formally denoted by the transition relation --b-->, where b is the conditional action executed in the computation step. More than one computation step may be possible in a current state, and the step relation --> thus denotes a possible computation step in a state. A computation step updates the current state and yields the next state of the computation. Note that because M is a partial function, a conditional action can only be successfully executed if both the condition is satisfied and the basic action is enabled.
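Put together, condition checking and enabledness determine exactly which computation steps exist in a state. The following sketch (ours, not the paper's) makes this concrete in a toy model where mental state conditions are Python predicates over a (beliefs, goals) pair of atom sets, and the partial transformer M signals "not enabled" by returning None:

```python
# Sketch of action selection: a conditional action contributes a computation
# step only if its mental state condition holds AND its capability is enabled.
# Assumption (ours): a state is a pair (beliefs, goals) of sets of atoms.

def steps(actions, M, state):
    """All possible computation steps (capability, successor state)."""
    result = []
    for condition, a in actions:       # each pair encodes: condition -> do(a)
        if condition(state) and (successor := M(a, state)) is not None:
            result.append((a, successor))
    return result

def M(a, state):                       # toy transformer for one capability
    sigma, gamma = state
    return (sigma | {"home"}, gamma - {"home"}) if a == "go_home" else None

actions = [
    (lambda s: "home" in s[1], "go_home"),       # G(home) -> do(go_home)
    (lambda s: "movies" in s[1], "go_movies"),   # capability never enabled here
]
print(steps(actions, M, (set(), {"home", "movies"})))
# only ('go_home', ({'home'}, {'movies'})) is a possible step
```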


Definition 2.9 (action selection)
Let ⟨σ, γ⟩ be a mental state and b = φ → do(a) ∈ Π. Then, as a rule, we have:
If
• the mental condition φ holds in ⟨σ, γ⟩, i.e. ⟨σ, γ⟩ ⊨ φ, and
• a is enabled in ⟨σ, γ⟩, i.e. M(a, ⟨σ, γ⟩) is defined,
then ⟨σ, γ⟩ --b--> M(a, ⟨σ, γ⟩) is a possible computation step. The relation --> is the smallest relation closed under this rule.

We say that a capability a ∈ Cap is enabled in a mental state ⟨σ, γ⟩ in case M(a, ⟨σ, γ⟩) is defined. This definition implies that a belief update capability a ∈ Bcap is enabled if T(a, σ) is defined. A conditional action b is enabled in a mental state ⟨σ, γ⟩ if there are σ′, γ′ such that ⟨σ, γ⟩ --b--> ⟨σ′, γ′⟩. Note that if a capability a is not enabled, a conditional action φ → do(a) is also not enabled. The special predicate enabled is introduced to denote that a capability a or conditional action b is enabled (denoted by enabled(a) respectively enabled(b)).

Definition 2.10 (semantics of enabled)
• ⟨σ, γ⟩ ⊨ enabled(a) iff M(a, ⟨σ, γ⟩) is defined, for a ∈ Cap,
• ⟨σ, γ⟩ ⊨ enabled(b) iff there are σ′, γ′ such that ⟨σ, γ⟩ --b--> ⟨σ′, γ′⟩, for conditional actions b = φ → do(a).

The relation between the enabledness of capabilities and conditional actions is stated in the next lemma, together with the fact that drop(φ) is always enabled and a proof rule for deriving enabled(adopt(φ)).

Lemma 2.11
• ⊨ enabled(φ → do(a)) ↔ (φ ∧ enabled(a)),
• ⊨ enabled(drop(φ)),
• ⊨ enabled(adopt(φ)) → ¬Bφ,
• if ⊭ ¬φ, then ⊨ ¬Bφ → enabled(adopt(φ)).

3 A Personal Assistant Example

In this section, we give an example to show how the programming language GOAL can be used to program agents. The example concerns a shopping agent that is able to buy books on the Internet on behalf of the user. The example provides for a simple illustration of how the programming language works. The agent in our example uses a standard procedure for buying a book. It first goes to a bookstore, in our case Amazon.com. At the web site of Amazon.com it searches for a particular book, and if the relevant page with the book details shows up, the agent puts the book in its shopping cart. In case the shopping cart of the agent contains some items, it is allowed to buy the items on behalf of the user. The idea is that the agent adopts a goal to buy a book if the user instructs it to do so.

The set of capabilities of the agent is defined by

Bcap = {goto_website(site), search(book), put_in_shopping_cart(book), pay_cart}

The capability goto_website(site) goes to the selected web page site. In our example, relevant web pages are the home page of the user, the main page of Amazon.com, web pages with information


about books to buy, and a web page that shows the current items in the shopping cart of the agent. The capability search(book) is an action that can be selected at the main page of Amazon.com and selects the web page with information about book. The action put_in_shopping_cart(book) can be selected on the page concerning book and puts book in the cart; a new web page called ContentCart shows up, showing the content of the cart. Finally, in case the cart is not empty, the action pay_cart can be selected to pay for the books in the cart.

In the program text below, we assume that book is a variable referring to the specifics of the book the user wants to buy (in the example, we use variables as a means for abbreviation; variables should be thought of as being instantiated with the relevant arguments in such a way that predicates with variables reduce to propositions). The initial beliefs of the agent are that the current web page is the home page of the user, and that it is not possible to be on two different web pages at the same time. We also assume that the user has provided the agent with the goals to buy The Intentional Stance by Daniel Dennett and Intentions, Plans, and Practical Reason by Michael Bratman.

Π = { B(current_website(homepage(user)) ∨ current_website(ContentCart)) ∧ G(bought(book)) → do(goto_website(Amazon.com)),
      B(current_website(Amazon.com)) ∧ ¬B(in_cart(book)) ∧ G(bought(book)) → do(search(book)),
      B(current_website(book)) ∧ G(bought(book)) → do(put_in_shopping_cart(book)),
      B(in_cart(book)) ∧ G(bought(book)) → do(pay_cart) }

σ0 = { current_website(homepage(user)),
       ∀ s, s′ ((s ≠ s′ ∧ current_website(s)) → ¬current_website(s′)) }

γ0 = { bought(The Intentional Stance) ∧ bought(Intentions, Plans and Practical Reason) }

GOAL Shopping Agent

Some of the details of this program will be discussed in the sequel, when we prove some properties of the program. The agent basically follows the recipe for buying a book outlined above.
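To make the reading of such a rule concrete, here is a toy rendering (ours, not the paper's notation) of how the first conditional action's mental state condition inspects the two databases. Predicates are flattened to strings, and for simplicity the conjunctive goal of γ0 is split into two separate goal entries:

```python
# Toy reading of the shopping agent's first rule (assumption: ours). The rule
# fires when the agent believes it is at the user's home page or at the cart
# page AND still has the goal of having bought the book; the agent may then
# select goto_website(Amazon.com).

def rule_goto_amazon(state, book):
    sigma, gamma = state
    at_start = ("current_website(homepage(user))" in sigma
                or "current_website(ContentCart)" in sigma)
    return at_start and f"bought({book})" in gamma

sigma0 = {"current_website(homepage(user))"}
gamma0 = {"bought(The Intentional Stance)",
          "bought(Intentions, Plans and Practical Reason)"}
print(rule_goto_amazon((sigma0, gamma0), "The Intentional Stance"))  # True
print(rule_goto_amazon((sigma0, gamma0), "Some Other Book"))         # False
```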
For now, however, just note that the program is quite flexible, even though the agent more or less executes a fixed recipe for buying a book. The flexibility results from the agent's knowledge state and the non-determinism of the program. In particular, the ordering in which the actions are performed by the agent - which book to find first, buy a book one at a time or both in the same shopping cart, etc. - is not determined by the program. The scheduling of these actions thus is not fixed by the program, and might be fixed arbitrarily on a particular agent architecture used to run the program.

4 Temporal Logic for GOAL

On top of the language GOAL and its semantics, we now construct a temporal logic to prove properties of GOAL agents. The logic is similar to other temporal logics, but its semantics is derived from the operational semantics for GOAL. Moreover, the logic incorporates the belief and goal modalities used in GOAL agents. First, we introduce the semantics for GOAL agents. Then we discuss basic action theories and in particular the use of Hoare triples for the specification of actions. These Hoare triples play an important role in the programming logic, since it can be shown that temporal properties of agents can be proven by means of proving Hoare triples for actions only. Finally, the language for expressing temporal properties and its semantics is defined, and the fact that certain classes of interesting temporal properties can be reduced to properties of actions, expressed by Hoare triples, is proven.

4.1 Semantics of GOAL Agents

The semantics of GOAL agents is derived directly from the operational semantics and the computation step relation --> as defined in the previous section. The meaning of a GOAL agent consists


of a set of so called traces. A trace is an infinite computation sequence of consecutive mental states interleaved with the actions that are scheduled for execution in each of those mental states. The fact that a conditional action is scheduled for execution in a trace does not mean that it is also enabled in the particular state for which it has been scheduled. In case an action is scheduled but not enabled, the action is simply skipped and the resulting state is the same as the state before.

Definition 4.1 (trace)
A trace s is an infinite sequence s0, b0, s1, b1, s2, … such that si is a mental state, bi is a conditional action, and for every i we have: si --bi--> si+1, or bi is not enabled in si and si = si+1.

An important assumption in the semantics for GOAL is a fairness assumption. Fairness assumptions concern the fair selection of actions during the execution of a program. In our case, we make a weak fairness assumption [11]. A trace is weakly fair if it is not the case that an action is always enabled from some point in time on but is never selected for execution. This weak fairness assumption is built into the semantics by imposing a constraint on traces. By definition, a fair trace is a trace in which each of the actions is scheduled infinitely often. In a fair trace, there always will be a future time point at which an action is scheduled (considered for execution), and by this scheduling policy a fair trace implements the weak fairness assumption.
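One way to see that fair traces exist is to observe that a round-robin scheduler trivially schedules every action infinitely often. The sketch below (ours, not from the paper) generates a finite prefix of such a trace, with idle steps for actions that are scheduled but not enabled:

```python
# Round-robin scheduling yields a weakly fair trace: every action is scheduled
# infinitely often, and scheduling a disabled action gives an idle step.
# Assumption (ours): `step` is the partial step function, returning None when
# the scheduled action is not enabled in the current state.
from itertools import cycle, islice

def trace_prefix(actions, step, s0, length):
    states, s = [s0], s0
    for b in islice(cycle(actions), length):   # round robin over the actions
        nxt = step(b, s)
        s = s if nxt is None else nxt          # idle step: the state repeats
        states.append(s)
    return states

# Hypothetical demo: 'tick' increments a counter until it reaches 2;
# 'noop' is never enabled.
def step(b, s):
    return s + 1 if b == "tick" and s < 2 else None

print(trace_prefix(["tick", "noop"], step, 0, 6))  # [0, 1, 1, 2, 2, 2, 2]
```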
However, note that the fact that an action is scheduled does not mean that the action also is enabled (and therefore, the selection of the action may result in an idle step which does not change the state).

The meaning of a GOAL agent now is defined as the set of fair traces in which the initial state is the initial mental state of the agent and each of the steps in the trace corresponds to the execution of a conditional action or an idle transition.

Definition 4.2 (meaning of a GOAL agent)
The meaning of a GOAL agent ⟨Π, σ0, γ0⟩ is the set of fair traces S such that for s ∈ S we have s0 = ⟨σ0, γ0⟩.

4.2 Hoare Triples

The specification of basic actions provides the basis for the programming logic and, as we will show below, is all we need to prove properties of agents. Because they play such an important role in the proof theory of GOAL, the specification of the basic agent capabilities requires special care. In the proof theory of GOAL, Hoare triples of the form {φ} b {ψ}, where φ and ψ are mental state formulas, are used to specify actions. The use of Hoare triples in a formal treatment of traditional assignments is well understood [1]. Because the agent capabilities of GOAL agents are quite different from assignment actions, however, the traditional predicate transformer semantics is not applicable. GOAL agent capabilities are mental state transformers and, therefore, we require more extensive basic action theories to formally capture the effects of such actions. Hoare triples are used to specify the postconditions and the frame conditions of actions. The postconditions of an action specify the effects of an action, whereas the frame conditions specify what is not changed by the action. Axioms for the predicate enabled specify the preconditions of actions.

The formal semantics of a Hoare triple for conditional actions is derived from the semantics of a GOAL agent and is defined relative to the set of traces SA associated with the GOAL agent A.
A Hoare triple for conditional actions thus expresses a property of an agent and not just a property of an action. The semantics of the basic capabilities are assumed to be fixed, however, and are not defined relative to an agent.

Definition 4.3 (semantics of Hoare triples for basic actions)
A Hoare triple for basic capabilities {φ} a {ψ} means that for all σ, γ:
• ⟨σ, γ⟩ ⊨ φ ∧ enabled(a) ⇒ M(a, ⟨σ, γ⟩) ⊨ ψ, and
• ⟨σ, γ⟩ ⊨ φ ∧ ¬enabled(a) ⇒ ⟨σ, γ⟩ ⊨ ψ.


To explain this definition, note that we made a case distinction between states in which the basic action is enabled and in which it is not enabled. In case the action is enabled, the postcondition ψ of the Hoare triple {φ} a {ψ} should be evaluated in the next state resulting from executing action a. In case the action is not enabled, however, the postcondition should be evaluated in the same state, because a failed attempt to execute action a is interpreted as an idle step in which nothing changes.

Hoare triples for conditional actions are interpreted relative to the set of traces associated with the GOAL agent of which the action is a part. Below, we write φ[si] to denote that a mental state formula φ holds in state si.

Definition 4.4 (semantics of Hoare triples for conditional actions)
Given an agent A, a Hoare triple for conditional actions {φ} b {ψ} (for A) means that for all traces s ∈ SA and i, we have that

(φ[si] ∧ b = bi ∈ s) ⇒ ψ[si+1]

where bi ∈ s means that action bi is taken in state i of trace s.

Of course, there is a relation between the execution of basic actions and that of conditional actions, and therefore there also is a relation between the two types of Hoare triples. The following lemma makes this relation precise.

Lemma 4.5 Let A be a GOAL agent and SA be the meaning of A. Suppose that we have {φ ∧ ψ} a {φ′} and SA ⊨ (φ ∧ ¬ψ) → φ′. Then we also have {φ} ψ → do(a) {φ′}.

Proof: We need to prove that (φ[si] ∧ (ψ → do(a)) = bi ∈ s) ⇒ φ′[si+1]. Therefore, assume φ[si] ∧ (ψ → do(a)) = bi ∈ s. Two cases need to be distinguished: the case that the condition ψ holds in si and the case that it does not hold in si. In the former case, because we have {φ ∧ ψ} a {φ′}, we then know that si+1 ⊨ φ′. In the latter case, the conditional action is not executed and si+1 = si. From ((φ ∧ ¬ψ) → φ′)[si], φ[si] and ¬ψ[si] it then follows that φ′[si+1], since φ′ is a state formula. □

The definition of Hoare triples presented here formalises a total correctness property.
A Hoare triple {φ} b {ψ} ensures that if initially φ holds, then an attempt to execute b results in a successor state and in that state ψ holds. This is different from partial correctness, where no claims about the termination of actions and the existence of successor states are made.

4.3 Basic Action Theories

A basic action theory specifies the effects of the basic capabilities of an agent: when an action is enabled, what the effects of the action are, and what does not change when the action is executed. Therefore, a basic action theory consists of axioms for the predicate enabled for each basic capability, Hoare triples that specify the effects of basic capabilities, and Hoare triples that specify frame axioms associated with these capabilities. Since the belief update capabilities of an agent are not fixed by the language GOAL but are user-defined, the user should specify the axioms and Hoare triples for belief update capabilities. The special actions for goal updating, adopt and drop, are part of GOAL, and a set of axioms and Hoare triples for these actions is specified below.

Because in this paper our concern is not with the specification of basic action theories in particular, but with providing a programming framework for agents in which such specifications can be plugged in, we only provide some example specifications of the capabilities defined in the personal assistant example that we need in the proof of correctness below.

First, we specify a set of axioms for each of our basic actions that state when that action is enabled. Below, we abbreviate the book titles of the example, and write T for The Intentional


Stance and I for Intentions, Plans, and Practical Reason. In the shopping agent example, we then have:

enabled(goto_website(site)) ↔ true,
enabled(search(book)) ↔ B(current_website(Amazon.com)),
enabled(put_in_shopping_cart(book)) ↔ B(current_website(book)),
enabled(pay_cart) ↔ ((B in_cart(T) ∨ B in_cart(I)) ∧ B current_website(ContentCart)).

Second, we list a number of effect axioms that specify the effects of a capability in particular situations defined by the preconditions of the Hoare triple.

• The action goto_website(site) results in moving to the relevant web page:
{true} goto_website(site) {B current_website(site)}

• At Amazon.com, searching for a book results in finding a page with relevant information about the book:
{B current_website(Amazon.com)} search(book) {B current_website(book)}

• On the page with information about a particular book, selecting the action put_in_shopping_cart(book) results in the book being put in the cart; also, a new web page appears on which the contents of the cart are listed:
{B current_website(book)} put_in_shopping_cart(book) {B(in_cart(book) ∧ current_website(ContentCart))}

• In case book is in the cart, and the current web page presents a list of all the books in the cart, the action pay_cart may be selected, resulting in the buying of all listed books:
{B(in_cart(book) ∧ current_website(ContentCart))} pay_cart {¬B in_cart(book) ∧ B(bought(book) ∧ current_website(Amazon.com))}

Finally, we need a number of frame axioms that specify which properties are not changed by each of the capabilities of the agent. For example, both the capabilities goto_website(site) and search(book) do not change any beliefs about in_cart.
Thus we have, e.g.:

{B in_cart(book)} goto_website(site) {B in_cart(book)}
{B in_cart(book)} search(book) {B in_cart(book)}

It will be clear that we need more frame axioms than these two, and some of these will be specified below in the proof of the correctness of the shopping agent.

It is important to realise that the only Hoare triples that need to be specified for agent capabilities are Hoare triples that concern the effects upon the beliefs of the agent. Changes and persistence of (some) goals due to executing actions can be derived with the proof rules and axioms below that are specifically designed to reason about the effects of actions on goals.

A theory of the belief update capabilities and their effects on the beliefs of an agent must be complemented with a theory about the effects of actions upon the goals of an agent. Such a theory should capture both the effects of the default commitment strategy and give a formal specification of the drop and adopt actions.

The default commitment strategy imposes a constraint on the persistence of goals. A goal persists if it is not the case that after doing a the goal is believed to be achieved. Only action drop(φ) is allowed to overrule this constraint. Therefore, in case a ≠ drop(φ), we have that {Gφ} a {Bφ ∨ Gφ}. This Hoare triple precisely captures the default commitment strategy and states that after executing an action the agent either believes it has achieved φ or it still has the
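The enabledness axioms and effect axioms above can be sketched as ordinary mental state transformers. The code below is a hypothetical illustration of ours (not the paper's): belief bases are modelled as sets of ground atoms, `enabled` encodes the enabledness axioms, and `execute` encodes the effect axioms together with the frame assumption that untouched atoms persist.

```python
# Sketch of the shopping agent's capabilities over a belief base of atoms,
# each atom a (predicate, argument) pair. Our modelling, not the paper's.

def enabled(action, beliefs):
    name, arg = action
    if name == "goto_website":
        return True
    if name == "search":
        return ("current_website", "Amazon.com") in beliefs
    if name == "put_in_shopping_cart":
        return ("current_website", arg) in beliefs
    if name == "pay_cart":
        return (("in_cart", "T") in beliefs or ("in_cart", "I") in beliefs) \
            and ("current_website", "ContentCart") in beliefs
    return False

def execute(action, beliefs):
    """Belief update per the effect axioms; other atoms persist (frame)."""
    name, arg = action
    # only one current_website atom at a time (the invariant of Section 5.2)
    b = {atom for atom in beliefs if atom[0] != "current_website"}
    if name in ("goto_website", "search"):
        return b | {("current_website", arg)}
    if name == "put_in_shopping_cart":
        return b | {("in_cart", arg), ("current_website", "ContentCart")}
    if name == "pay_cart":
        in_cart = {x for (p, x) in beliefs if p == "in_cart"}
        b = {atom for atom in b if atom[0] != "in_cart"}
        return b | {("bought", x) for x in in_cart} \
                 | {("current_website", "Amazon.com")}
    return beliefs
```

Running the capability sequence goto_website, search, put_in_shopping_cart, pay_cart from the user's homepage reproduces the postconditions of the axioms above.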


goal φ if φ was a goal initially. A similar Hoare triple can be given for the persistence of the absence of a goal. Formally, we have {¬Gφ} b {¬Bφ ∨ ¬Gφ}. This Hoare triple states that the absence of a goal φ persists, and in case it does not persist the agent does not believe φ (anymore).

The adoption of a goal may be the result of executing an adopt action, of course. However, it may also be the case that an agent believed it achieved φ but after doing b no longer believes this to be the case and adopts φ as a goal again. For example, if the goal base γ = {p ∧ q} and the belief base σ = {p}, then the agent does not have a goal to achieve p because it already believes p to be the case; however, in case an action changes the belief base such that p is no longer believed, the agent has a goal to achieve p (again). This provides for a mechanism similar to that of maintenance goals. We do not need the Hoare triple as an axiom, however, since it is a direct consequence of the fact that Bφ → ¬Gφ. Note that the stronger {¬Gφ} b {¬Gφ} does not hold, even if b ≠ ψ → do(adopt(φ)).

The specification of the special actions drop and adopt involves a number of frame axioms and a number of proof rules. The frame axioms capture the fact that neither of these actions has any effect on the beliefs of an agent:

• {Bφ} adopt(ψ) {Bφ}, {¬Bφ} adopt(ψ) {¬Bφ},
• {Bφ} drop(ψ) {Bφ}, {¬Bφ} drop(ψ) {¬Bφ}.

The proof rules for the actions adopt and drop capture the effects on the goals of an agent. For each action, we list proof rules for the adoption respectively the dropping of goals, and for the persistence of goals. An agent adopts a new goal φ in case the agent does not believe φ and φ is not a contradiction:

  ⊭ ¬φ
  ――――――――――――――――
  {¬Bφ} adopt(φ) {Gφ}

An adopt action does not remove any current goals of the agent. Any existing goals thus persist when adopt is executed. The persistence of the absence of goals is somewhat more complicated in the case of an adopt action.
An adopt(φ) action does not add a new goal ψ in case ψ is not entailed by φ or ψ is believed to be the case:

  {Gψ} adopt(φ) {Gψ}

  ⊭ φ → ψ
  ――――――――――――――――――
  {¬Gψ} adopt(φ) {¬Gψ}

  {Bψ} adopt(φ) {¬Gψ}

A drop action drop(φ) results in the removal of all goals that entail φ. This is captured by the proof rule:

  ⊨ ψ → φ
  ――――――――――――――――
  {Gψ} drop(φ) {¬Gψ}

A drop action drop(φ) never results in the adoption of new goals. The absence of a goal thus persists when a drop action is executed. It is more difficult to formalise the persistence of a goal with respect to a drop action. Since a drop action drop(ψ) removes goals which entail ψ, to conclude that a goal persists after executing the action, we must make sure that the goal does not depend on a goal (is a subgoal of a goal) that is removed by the drop action. In case the conjunction φ ∧ ψ is not a goal, we know this for certain:

  {¬Gφ} drop(ψ) {¬Gφ}        {¬G(φ ∧ ψ) ∧ Gφ} drop(ψ) {Gφ}

The basic action theories for GOAL include a number of proof rules to derive Hoare triples. The Rule for Infeasible Capabilities allows us to derive frame axioms for a capability in case it is not enabled in a particular situation. The Rule for Conditional Actions allows the derivation of Hoare triples for conditional actions from Hoare triples for capabilities; this rule is justified by Lemma 4.5. Finally, there are three rules for combining Hoare triples and for strengthening the precondition and weakening the postcondition.
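The goal dynamics above can be sketched operationally. In the toy implementation below (ours, not the paper's), each goal is a conjunction of atoms represented as a frozenset, so that a goal φ entails ψ exactly when ψ ⊆ φ; the default commitment strategy removes only goals that are believed achieved, drop(φ) removes every goal entailing φ, and adopt(φ) adds φ unless φ is already believed or trivial.

```python
def believes(beliefs, formula):
    """A conjunction of atoms is believed iff all its atoms are believed."""
    return formula <= beliefs

def commit_default(beliefs, goals):
    """Default commitment: only goals believed achieved are removed."""
    return {g for g in goals if not believes(beliefs, g)}

def adopt(beliefs, goals, phi):
    """adopt(phi): add phi unless it is believed or trivially true (empty)."""
    if phi and not believes(beliefs, phi):
        return goals | {phi}
    return goals

def drop(goals, phi):
    """drop(phi): remove every goal that entails phi."""
    return {g for g in goals if not phi <= g}

beliefs = {"p"}
goals = {frozenset({"p", "q"})}
# p∧q is not yet achieved (q is not believed), so it persists:
assert commit_default(beliefs, goals) == goals
# p∧q entails q, so drop(q) removes it:
assert drop(goals, frozenset({"q"})) == set()
# p is already believed, so adopt(p) adds nothing:
assert adopt(beliefs, goals, frozenset({"p"})) == goals
```

If p is later removed from the belief base, commit_default keeps p ∧ q in the goal base, which is the maintenance-goal effect discussed in the text.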


Rule for Infeasible Capabilities:

  φ → ¬enabled(a)
  ―――――――――――――
  {φ} a {φ}

Rule for Conditional Actions:

  {φ ∧ ψ} a {φ′},  (φ ∧ ¬ψ) → φ′
  ――――――――――――――――――――
  {φ} ψ → do(a) {φ′}

Consequence Rule:

  φ′ → φ,  {φ} a {ψ},  ψ → ψ′
  ――――――――――――――――――
  {φ′} a {ψ′}

Conjunction Rule:

  {φ1} b {ψ1},  {φ2} b {ψ2}
  ――――――――――――――――
  {φ1 ∧ φ2} b {ψ1 ∧ ψ2}

Disjunction Rule:

  {φ1} b {ψ},  {φ2} b {ψ}
  ――――――――――――――
  {φ1 ∨ φ2} b {ψ}

4.4 Temporal Logic

On top of the Hoare triples for specifying actions, a temporal logic is used to specify and verify properties of GOAL agents. Two new operators are introduced. The proposition init states that the agent is at the beginning of execution and nothing has happened yet. The second operator until is a weak until operator. φ until ψ means that eventually ψ becomes true and φ is true until ψ becomes true, or ψ never becomes true and φ remains true forever.

Definition 4.6 (language of temporal logic LT based on L)
The temporal logic language LT is inductively defined by:
• init ∈ LT,
• enabled(a), enabled(φ → do(a)) ∈ LT for a ∈ Cap,
• if φ ∈ L, then Bφ, Gφ ∈ LT,
• if φ, ψ ∈ LT, then ¬φ, φ ∧ ψ ∈ LT,
• if φ, ψ ∈ LT, then φ until ψ ∈ LT.

A number of other well-known temporal operators can be defined in terms of the operator until. The always operator □φ is an abbreviation for φ until false, and the eventuality operator ◇φ is defined as ¬□¬φ as usual.

Temporal formulas are evaluated with respect to a trace s and a time point i. State formulas like Bφ, Gψ, enabled(a), etc. are evaluated with respect to mental states.

Definition 4.7 (semantics of temporal formulas)
Let s be a trace and i be a natural number.
• s, i ⊨ init iff i = 0,
• s, i ⊨ enabled(a) iff enabled(a)[si],
• s, i ⊨ enabled(φ → do(a)) iff enabled(φ → do(a))[si],
• s, i ⊨ Bφ iff Bφ[si],
• s, i ⊨ Gφ iff Gφ[si],
• s, i ⊨ ¬φ iff s, i ⊭ φ,
• s, i ⊨ φ ∧ ψ iff s, i ⊨ φ and s, i ⊨ ψ,
• s, i ⊨ φ until ψ iff ∃ j ≥ i (s, j ⊨ ψ ∧ ∀ k (i ≤ k < j ⇒ s, k ⊨ φ)) or ∀ k ≥ i (s, k ⊨ φ).
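Definition 4.7 can be illustrated with a small evaluator. The sketch below (a hypothetical illustration of ours) evaluates the weak until operator on a trace represented as a finite list of states, under the simplifying assumption that the last listed state repeats forever; that assumption makes the infinite quantifiers of the definition decidable.

```python
def until(trace, i, phi, psi):
    """Weak until on a trace whose final state is assumed to repeat forever.

    phi and psi are predicates on states. Returns True iff psi eventually
    holds with phi true up to that point, or phi holds from i onwards.
    """
    for j in range(i, len(trace)):
        if psi(trace[j]):
            return all(phi(trace[k]) for k in range(i, j))
        if not phi(trace[j]):
            return False
    return True  # psi never held; phi held on the whole (constant) tail

def always(trace, i, phi):        # box(phi) = phi until false
    return until(trace, i, phi, lambda s: False)

def eventually(trace, i, phi):    # diamond(phi) = not box(not phi)
    return not always(trace, i, lambda s: not phi(s))

trace = [0, 1, 2, 3, 3]
assert until(trace, 0, lambda s: s < 3, lambda s: s == 3)  # strong disjunct
assert always(trace, 0, lambda s: s <= 3)                  # weak disjunct
assert eventually(trace, 0, lambda s: s == 3)
```

The two return paths of `until` correspond exactly to the two disjuncts of the until clause in Definition 4.7.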

Page 14: mmi.tudelft.nlmmi.tudelft.nl/pub/koen/ATAL2000.pdfmmi.tudelft.nl

We are parti ularly interested in temporal formulas that are valid with respe t to the set oftra es SA asso iated with a GOAL agent A. Temporal formulas valid with respe t to SA expressproperties of the agent A.De�nition 4.8 Let S be a set of tra es.� S j= ' i� 8 s 2 S ; i(s ; i j= '),� j= ' i� S j= ' where S is the set of all tra es.In general, two important types of temporal properties an be distinguished. Temporal prop-erties are divided into liveness and safety properties. Liveness properties on ern the progress thata program makes and express that a (good) state eventually will be rea hed. Safety properties,on the other hand, express that some (bad) state will never be entered. In the rest of this se tion,we dis uss a number of spe i� liveness and safety properties of an agent A = h�A; �0; 0i.We show that ea h of the properties that we dis uss are equivalent to a set of Hoare triples.The importan e of this result is that it shows that temporal properties of agents an be proven byinspe tion of the program text only. The fa t that proofs of agent properties an be onstru tedby inspe tion of the program text means that there is no need to reason about individual tra esof an agent or its operational behaviour. In general, reasoning about the program text is moree onomi al sin e the number of tra es asso iated with a program is exponential in the size of theprogram.The �rst property we dis uss on erns a safety property, and is expressed by the temporalformula ' ! (' until ). Properties in this ontext always refer to agent properties and areevaluated with respe t to the set of tra es asso iated with that agent. Therefore, we an explainthe informal meaning of the property as stating that if ' ever be omes true, then it remains trueuntil be omes true. By de�nition, we write this property as ' unless :' unless df= '! (' until )An important spe ial ase of an unless property is ' unless false, whi h expresses that if' ever be omes true, it will remain true. 
' unless false means that ' is a stable property of theagent. In ase we also have init! ', where init denotes the initial starting point of exe ution, 'is always true and is an invariant of the program.Now we show that unless properties of an agent A = h�; �0; 0i are equivalent to a set ofHoare triples for basi a tions in �. This shows that we an prove unless properties by provinga set of Hoare triples. The proof relies on the fa t that if we an prove that after exe uting anya tion from � either ' persists or be omes true, we an on lude that ' unless .Theorem 4.9 Let A = h�A; �0; 0i. Then:8 b 2 �A(f' ^ : g b f' _ g) i� SA j= ' unless Proof: The proof from right to left is the easiest dire tion in the proof. Suppose SA j=' unless and s ; i j= '. This implies that s ; i j= ' until . In ase we also have s ; i j= ,we are done. So, assume s ; i j= : and a tion b is sele ted in the tra e at state si . From thesemanti s of until we then know that ' _ holds at state si+1, and we immediately obtainf' ^ : g b f' _ g sin e s and i were arbitrarily hosen tra e and time point. To prove theHoare triple for the other a tions in the agent program A, note that when we repla e a tion bwith another a tion from �A in tra e s , the new tra e s 0 is still a valid tra e that is in the setSA. Be ause we have SA j= ' unless , we also have s 0; i j= ' unless and from reasoning byanalogy we obtain the Hoare triple for a tion (and similarly for all other a tions).We prove the left to right ase by ontraposition. Suppose that(�) 8 b 2 �A(f' ^ : g b f' _ g) 14

Page 15: mmi.tudelft.nlmmi.tudelft.nl/pub/koen/ATAL2000.pdfmmi.tudelft.nl

and for some s 2 SA we have s ; i 6j= ' unless . The latter fa t means that we have s ; i j= ' ands ; i 6j= ' until . s ; i 6j= ' until implies that either (i) is never established at some j � ibut we do have :' at some time point k > i or (ii) is established at some time j > i , but inbetween i and any su h j it is not always the ase that ' holds.In the �rst ase (i), let k > i be the smallest k su h that s ; k 6j= '. Then, we have s ; k � 1 j=' ^ : and s ; k j= :' ^ : . In state sk�1, however, either a onditional a tion is performed orno a tion is performed. From (*) we then derive a ontradi tion.In the se ond ase (ii), let k > i be the smallest k su h that s ; k j= . Then we know thatthere is a smallest j su h that i < j < k and s ; j 6j= ' (j 6= i sin e s ; i j= '). This means that wehave s ; j � 1 j= ' ^ : . However, in state sj either a onditional a tion is performed or no a tionis performed. From (*) we then again derive a ontradi tion. 2Liveness properties involve eventualities whi h state that some state will be rea hed startingfrom a parti ular situation. To express a spe ial lass of su h properties, we introdu e the operator' ensures . ' ensures informally means that ' guarantees the realisation of , and is de�nedas: ' ensures df= ' unless ^ ('! � )' ensures thus ensures that is eventually realised starting in a situation in whi h ' holds,and requires that ' holds until is realised. For the lass of ensures properties, we an showthat these properties an be proven by proving a set of Hoare triples. The proof of a ensuresproperty thus an be redu ed to the proof of a set of Hoare triples.Theorem 4.10 Let A = h�A; �0; 0i. Then:8 b 2 �A(f' ^ : g b f' _ g) ^ 9 b 2 �A(f' ^ : g b f g)) SA j= ' ensures Proof: In the proof, we need the weak fairness assumption. Sin e ' ensures is de�nedas ' unless ^ (' ! � ), by theorem 4.9 we only need to prove that SA j= ' ! � giventhat 8 b 2 �A(f' ^ : g b f' _ g) ^ 9 b 2 �A(f' ^ : g b f g). 
Now suppose, to arrive at a ontradi tion, that for some time point i and tra e s 2 SA we have: s ; i j= ' ^ : and assumethat for all later points j > i we have s ; j j= : . In that ase, we know that for all j > i we haves ; j j= ' ^ : (be ause we may assume ' unless ). However, we also know that there is ana tion b that is enabled in a state in whi h ' ^ : holds and transforms this state to a state inwhi h holds. The a tion b thus is always enabled, but apparently never taken. This is forbiddenby weak fairness, and we arrive at a ontradi tion. 2The impli ation in the other dire tion in theorem 4.10 does not hold. A ounterexample isprovided by the program:� = fB(:p ^ q)! do(ins(p));B(:p ^ r)! do(ins(p));Bp ! do(ins(:p ^ q));Bp ! do(ins(:p ^ r))g;�0 = fpg; 0 = ?:whereT (ins(p); f:p ^ qg) = fp ^ qg; T (ins(p); f:p ^ rg) = fp ^ rg;T (ins(:p ^ q); fpg) = T (ins(:p ^ q); ffp ^ qg) = f:p ^ qg;T (ins(:p ^ r); fpg) = T (ins(:p ^ r); fp ^ rg) = f:p ^ rg:15

Page 16: mmi.tudelft.nlmmi.tudelft.nl/pub/koen/ATAL2000.pdfmmi.tudelft.nl

For this program, we have that B:p ensures Bp holds, but we do not have fB:p ^ :Bpg b fBpgfor any b 2 �.Finally, we introdu e a third temporal operator `leads to' 7!. The operator ' 7! di�ers fromensures in that it does not require ' to remain true until is established, and is derived fromthe ensures operator. 7! is de�ned as the transitive, disjun tive losure of ensures .De�nition 4.11 (leads to operator)The leads to operator 7! is de�ned by:' ensures ' 7! ' 7! �; � 7! ' 7! '1 7! ; : : : ; 'n 7! ('1 _ : : : _ 'n )! The meaning of the `leads to' operator is aptured by the following lemma. ' 7! means thatgiven ' ondition will eventually be realised. The proof of the lemma is an easy indu tion onthe de�nition of 7!.Lemma 4.12 ' 7! j= '! � .5 Proving Agents Corre tIn this se tion, we use the programming logi to prove the orre tness of our example shoppingagent. We do not present all the details, but provide enough details to illustrate the use of theprogramming logi . Before we dis uss what it means that an agent program is orre t and providea proof whi h shows that our example agent is orre t, we introdu e some notation. The notationinvolves a number of abbreviations on erning names and propositions in the language of ourexample agent:� Instead of urrent website(sitename) we simply write sitename; e.g., we write Amazon: omand ContentCart instead of urrent website(Amazon: om) and urrent website(ContentCart),� As before, the book titles The Intentional Stan e and Intentions, Plans and Pra ti al Reasonthat the agent intends to buy are abbreviated to T and I respe tively. These onventions an result in formulas like B(T ), whi h means that the agent is at the web page on erningthe book The Intentional Stan e.A simple and intuitive orre tness property, whi h is natural in this ontext and is appli ableto our example agent, states that a GOAL agent is orre t when the agent program realises theinitial goals of the agent. 
For this sub lass of orre tness properties, we may onsider the agentto be �nished upon establishing the initial goals and in that ase the agent ould be terminated.Of ourse, it is also possible to ontinue the exe ution of su h agents. This lass of orre tnessproperties an be expressed by means of temporal formulas like G� ! �:G�. Other orre tnessproperties are on eivable, of ourse, but not all of them an be expressed easily in the temporalproof logi for GOAL.5.1 Corre tness Property of the Shopping AgentFrom the dis ussion above, we on lude that the interesting property to prove for our exampleprogram is the following property:B ond ^ G(bought(T ) ^ bought(I )) 7! B(bought(T ) ^ bought(I ))where B ond is some ondition of the initial beliefs of the agent. More spe i� ally, B ond isde�ned by:B urrent webpage(homepage(user)) ^ :Bin art(T ) ^ :Bin art(I )^B(8 s ; s 0((s 6= s 0 ^ urrent webpage(s)) ! : urrent webpage(s 0)))16

Page 17: mmi.tudelft.nlmmi.tudelft.nl/pub/koen/ATAL2000.pdfmmi.tudelft.nl

The orre tness property states that the goal to buy the books The Intentional Stan e and Inten-tions, Plans and Pra ti al Reason, given some initial onditions on the beliefs of the agent, leadsto buying (or believing to have bought) these books. Note that this property expresses a total orre tness property. It states both that the program behaves as desired and that it will eventuallyrea h the desired goal state. An extra reason for onsidering this property to express orre tnessof our example agent is that the goals involved on e they are a hieved remain true forever (theyare `stable' properties).5.2 Invariants and Frame AxiomsTo be able to prove orre tness, we need a number of frame axioms. There is a lose relation be-tween frame axioms and invariants of a program. This is be ause frame axioms express propertiesthat are not hanged by a tions, and a property that, on e true, remains true whatever a tionis performed is a stable property. In ase su h a property also holds initially, the property is aninvariant of the program. In our example program, there is one invariant that states that it is im-possible to be at two web pages at the same time: inv = B8 s ; s 0((s 6= s 0 ^ urrent webpage(s)) !: urrent webpage(s 0)).To prove that inv is an invariant of the agent, we need frame axioms stating that when invholds before the exe ution of an a tion it still holds after exe uting that a tion. Formally, forea h a 2 Cap, we need: finvg a finvg. These frame axioms need to be spe i�ed by the user, andfor our example agent we assume that they are indeed true. By means of the Consequen e Rule(strengthen the pre ondition of the Hoare triples for apabilities a) and the Rule for ConditionalA tions (instantiate ' and '0 with inv), we then obtain that finvg b finvg for all b 2 �. Bytheorem 4.9, we then know that inv unless false. Be ause we also have that initially inv holdssin e h�0; 0i j= inv , we may on lude that init! Binv ^ inv unless false. 
inv thus is an invariantand holds at all times during the exe ution of the agent. Be ause of this fa t, we do not mentioninv expli itly anymore in the proofs below, but will freely use the property when we need it.A se ond property that is stable is the property status(book):status(book) df= (Bin art(book) ^ Gbought(book)) _ Bbought(book)The fa t that status(book) is stable means that on e a book is in the art and it is a goal to buythe book, it remains in the art and is only removed from the art when it is bought.The proof obligations to prove that status(book) is a stable property, i.e. to prove thatstatus(book) unless false, onsist of supplying proofs for fstatus(book)g b fstatus(book)g for ea h onditional a tion b 2 � of the shopping agent ( f. theorem 4.9). By the Rule for ConditionalA tions, therefore, it is suÆ ient to prove for ea h onditional a tion ! do(a) 2 � thatfstatus(book) ^ g a fstatus(book)g and (status(book) ^ : ) ! status(book). The latter impli- ation is trivial. Moreover, it is lear that to prove the Hoare triples it is suÆ ient to provefstatus(book)g a fstatus(book)g sin e we an strengthen the pre ondition by means of the Con-sequen e Rule. The proof obligations thus redu e to proving fstatus(book)g a fstatus(book)g forea h apability of the shopping agent.Again, we annot prove these Hoare triples without a number of frame axioms. Be ause no apability is allowed to reverse the fa t that a book has been bought, for ea h apability, we anspe ify a frame axiom for the predi ate bought :(1) fBbought(book)g a fBbought(book)gIn ase the book is not yet bought, sele ting a tion pay art may hange the ontents of the artand therefore we �rst treat the other three a tions goto website, sear h, and put in shopping artwhi h are not supposed to hange the ontents of the art. For ea h of the latter three apabilitieswe therefore add the frame axioms:fBin art(book) ^ :Bbought(book)g a fBin art(book) ^ :Bbought(book)g17

Page 18: mmi.tudelft.nlmmi.tudelft.nl/pub/koen/ATAL2000.pdfmmi.tudelft.nl

where a 6= pay art . Note that these frame axioms do not refer to goals but only refer to the beliefsof the agent, in agreement with our laim that only Hoare triples for belief updates need to bespe i�ed by the user. By using the axiom Gbought(book) ! :Bbought(book) and the Consequen eRule, however, we an on lude that:fBin art(book) ^ Gbought(book)g a fBin art(book) ^ :Bbought(book)gBy ombining this with the axiom fGbought(book)g a fBbought(book) _ Gbought(book)g by meansof the Conjun tion Rule and by rewriting the post ondition with the Consequen e Rule, we thenobtain(2) fBin art(book) ^ Gbought(book)g a fBin art(book) ^ Gbought(book)gwhere a 6= pay art . By weakening the post onditions of (1) and (2) by means of the Consequen eRule and ombining the result with the Disjun tion Rule, it is then possible to on lude thatfstatus(book)g a fstatus(book)g for a 6= pay art .As before, in the ase of apability pay art we deal with ea h of the disjun ts of status(book)in turn. The se ond disjun t an be handled as before, but the �rst disjun t is more involved thistime be ause pay art an hange both the ontent of the art and the goal to buy a book if it isenabled. Note that pay art only is enabled in ase BContentCart holds. In ase BContentCartholds and pay art is enabled, from the e�e t axiom for pay art and the Consequen e Rule weobtain(3) fBin art(book) ^ Gbought(book) ^ BContentCartg pay art fBbought(book)gIn ase :BContentCart holds and pay art is not enabled, we use the Rule for Infeasible Capabil-ities to on lude thatfBin art(book) ^ Gbought(book) ^ :BContentCartg(4) pay artfBin art(book) ^ Gbought(book) ^ :BContentCartgBy means of the Consequen e Rule and the Disjun tion Rule, we then an on lude from (1), (3)and (4) that fstatus(book)g pay art fstatus(book)g, and we are done.5.3 Proof OutlineThe main proof steps to prove our agent example orre t are listed next. 
The proof steps below onsists of a number of ensures formulas whi h together prove that the program rea hes its goalin a �nite number of steps.(1) Bhomepage(user) ^ :Bin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I ) ensuresBAmazon: om ^ :Bin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I )(2) BAmazon: om ^ :Bin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I ) ensures[(B(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I ))_(B(I ) ^ Gbought(I ) ^ :Bin art(T ) ^ Gbought(T ))℄(3) B(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I ) ensuresBin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I ) ^ BContentCart(4) Bin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I ) ensuresBAmazon: om ^ :Bin art(I ) ^ Gbought(I ) ^ status(T )(5) B(Amazon: om) ^ :Bin art(I ) ^ Gbought(I ) ^ status(T ) ensuresB(I ) ^ Gbought(I ) ^ status(T )(6) B(I ) ^ Gbought(I ) ^ status(T ) ensuresBin art(I ) ^ Gbought(I ) ^ BContentCart ^ status(T )(7) Bin art(I ) ^ Gbought(I ) ^ BContentCart ^ status(T ) ensuresBbought(T ) ^ Bbought(I ) 18

Page 19: mmi.tudelft.nlmmi.tudelft.nl/pub/koen/ATAL2000.pdfmmi.tudelft.nl

At step 3, the proof is split up into two subproofs, one for ea h of the disjun ts of the disjun tthat is ensured in step 2. The proof for the other disjun t is ompletely analogous. By applyingthe rules for the `leads to' operator the third to seventh step result in:(a) B(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I ) 7!Bbought(T ) ^ Bbought(I )(b) B(I ) ^ Gbought(I ) ^ :Bin art(T ) ^ Gbought(T ) 7!Bbought(T ) ^ Bbought(I )Combining (a) and (b) by the disjun tion rule for the `leads to' operator and by using the transi-tivity of `leads to' we then obtain the desired orre tness result:B ond ^ G(bought(T ) ^ bought(I )) 7! B(bought(T ) ^ bought(I ))with B ond as de�ned previously.Step 1 We now dis uss the �rst proof step in somewhat more detail. The remainder of the proofis left to the reader. The proof of a formula ' ensures requires that we show that every a tionb in the Personal Assistant program satis�es the Hoare triple f' ^ : g b f' _ g and that thereis at least one a tion b0 whi h satis�es the Hoare triple f' ^ : g b0 f g. By inspe tion of theprogram, in our ase the proof obligations turn out to be:fBhomepage(user) ^ :Bin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I )gbfBhomepage(user) ^ :Bin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I )gwhere b is one of the a tionsB(Amazon: om) ^ :B(in art(book)) ^ G(bought(book)) ! do(sear h(book));B(book) ^ G(bought(book)) ! do(put in shopping art(book));B(in art(book)) ^ G(bought(book)) ! do(pay art)gand fBhomepage(user) ^ :Bin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I )gB(homepage(user) _ ContentCart) ^ G(bought(book)) ! 
do(goto website(Amazon: om))fBAmazon: om ^ :Bin art(T ) ^ Gbought(T ) ^ :Bin art(I ) ^ Gbought(I )gThe proofs of the �rst three Hoare triples are derived by using the Rule for Conditional A tions.The key point is noti ing that ea h of the onditions of the onditional a tions involved refers to aweb page di�erent from the web page homepage(user) referred to in the pre ondition of the Hoaretriple. The proof thus onsists of using the fa t that initially Bhomepage(user) and the invariantinv to derive an in onsisten y whi h immediately yield the desired Hoare triples by means of theRule for Conditional A tions.To prove the Hoare triple forB(homepage(user) _ ContentCart) ^ G(bought(book)) ! do(goto website(Amazon: om)) we usethe e�e t axiom (*) for goto website and the frame axiom (**):fBhomepage(user)g(5) goto website(Amazon: om)fBAmazon: omgandf:Bin art(book) ^ :Bbought(book)g(6) goto website(Amazon: om)f:Bin art(book) ^ :Bbought(book)gBy using the axiom fGbought(book)g goto website(Amazon: om) fBbought(book) _ Gbought(book)g,the Conjun tion Rule and the Rule for Conditional A tions it is then not diÆ ult to obtain thedesired on lusion. 19

Page 20: mmi.tudelft.nlmmi.tudelft.nl/pub/koen/ATAL2000.pdfmmi.tudelft.nl

6 Possible Extensions of GOALAlthough the basi features of the language GOAL are quite simple, the programming languageGOAL is already quite powerful and an be used to program real agents. In parti ular, GOAL onlyallows the use of basi a tions. There are, however, several strategies to deal with this restri tion.First of all, if a GOAL agent is proven orre t, any s heduling of the basi a tions that is weaklyfair an be used to exe ute the agent. More spe i� ally, an interesting possibility is to de�ne amapping from GOAL agents to a parti ular agent ar hite ture ( f. also [2℄). As long as the agentar hite ture implements a weakly fair s heduling poli y, on erns like the eÆ ien y or exibilitymay determine the spe i� mapping that is most useful with respe t to available ar hite tures.A se ond strategy on erns the grain of atomi ity that is required. If a oarse-grained atomi ityof basi a tions is feasible for an appli ation, one might onsider taking omplex plans as atomi a tions and instantiate the basi a tions in GOAL with these plans (however, termination of these omplex plans should be guaranteed). Finally, in future resear h the extension of GOAL witha ri her notion of a tion stru ture like for example plans ould be explored. This would makethe programming language more pra ti al. The addition of su h a ri her notion, however, is notstraightforward. At a minimum, more bookkeeping seems to be required to keep tra k of the goalsthat an agent already has hosen a plan for and whi h it is urrently exe uting. This bookkeepingis needed, for example, to prevent the sele tion of more than one plan to a hieve the same goal.Note that this problem was dealt with in GOAL by the immediate and omplete exe ution of asele ted a tion. It is therefore not yet lear how to give a semanti s to a variant of GOAL extendedwith omplex plans. 
The ideal, however, would be to combine the language GOAL, which includes declarative goals, with our previous work on the agent programming language 3APL, which includes planning features, into a single new programming framework.

Apart from introducing more complex action structures, it would also be particularly interesting to extend GOAL with high-level communication primitives. Because both declarative knowledge and declarative goals are present in GOAL, communication primitives could be defined in the spirit of speech act theory [16]. The semantics of, for example, a request primitive could then be formally defined in terms of the knowledge and goals of an agent. Moreover, such a semantics would have a computational interpretation, because both beliefs and goals have a computational interpretation in our framework.

Finally, there are a number of interesting extensions and problems to be investigated in relation to the programming logic. For example, it would be interesting to develop a semantics for the programming logic for GOAL that allows the nesting of the belief and goal operators. In the programming logic, we cannot yet nest knowledge modalities, which would allow an agent to reason about its own knowledge or that of other agents. Moreover, it is not yet possible to combine the belief and goal modalities. It is therefore not possible for an agent to have a goal to obtain knowledge, nor can an agent have explicit rather than implicit knowledge about its own goals or those of other agents. So far, the use of the B and G operators in GOAL serves, first of all, to distinguish between beliefs and goals. Secondly, it enables an agent to express that it does not have a particular belief or goal (consider the difference between ¬Bφ and B¬φ). Another important research issue concerns an extension of the programming framework to incorporate first-order languages and extend the programming logic with quantifiers.
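The request primitive suggested earlier in this section could, for instance, be given a computational reading along the following lines. This is a sketch under our own assumptions, not a semantics from the paper: `Agent` and `request` are hypothetical names, atoms are plain strings, and the condition chosen here mirrors GOAL's constraint that an agent does not pursue what it already believes to hold:

```python
class Agent:
    """A minimal mental state: sets of believed and pursued atoms."""
    def __init__(self, name, beliefs=(), goals=()):
        self.name = name
        self.beliefs = set(beliefs)
        self.goals = set(goals)

def request(sender, receiver, phi):
    """One candidate speech-act semantics for `request`: the receiver
    adopts phi as a goal unless it already believes phi to hold.
    Returns True iff the goal was adopted.

    A fuller semantics would also update the sender's mental state,
    e.g. recording its goal that the receiver adopt phi; the sender
    is unused in this minimal version."""
    if phi in receiver.beliefs:
        return False  # nothing to do: receiver already believes phi
    receiver.goals.add(phi)
    return True
```

For example, a request to achieve something the receiver already believes achieved is declined, while any other request installs a new declarative goal that the receiver's own commitment strategy then governs.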
Finally, more work needs to be done to investigate and classify useful correctness properties of agents. In conclusion, whereas the main aim may be a unified programming framework which includes both declarative goals and planning features, there is still a lot of work to be done to explore and manage the complexities of the language GOAL itself.

7 Conclusion

Although a programming language dedicated to agent programming is not the only viable approach to building agents, we believe it is one of the more practical approaches for developing agents. Several other approaches to the design and implementation of agents have been proposed. One such approach promotes the use of agent logics for the specification of agent systems and aims at a further refinement of such specifications, by means of an associated design methodology for the particular logic in use, into implementations which meet this specification in, for example, an object-oriented programming language like Java. In this approach, there is no requirement that a natural mapping exist relating the end result of this development process (a Java implementation) to the formal specification in the logic. It is, however, not very clear how to implement these ideas for agent logics incorporating both informational and motivational attitudes, and some researchers seem to have concluded from this that the notion of a motivational attitude (like a goal) is less useful than hoped for. Still another approach consists in the construction of agent architectures which 'implement' the different mental concepts. Such an architecture provides a template which can be instantiated with the relevant beliefs, goals, etc. Although this second approach is more practical than the first, our main problem with it is that the architectures proposed so far tend to be quite complex. As a consequence, it is quite difficult to understand what behaviour an instantiated architecture will generate.

For these reasons, our own research concerning intelligent agents has focused on the programming language 3APL, which supports the construction of intelligent agents and reflects in a natural way the intentional concepts used to design agents (in contrast with the approach discussed above, which promotes the use of logic but at the same time suggests that such an intermediate level is not required).

Nevertheless, in previous work the incorporation of declarative goals in agent programming frameworks has, to our knowledge, not been established.
It has been our aim in this paper to show that it is feasible to incorporate declarative goals into a programming framework (and that there is no need to dismiss the concept). Moreover, our semantics is a computational semantics, and it is rather straightforward to implement the language, although this may require some restrictions on the logical reasoning performed by GOAL agents.

In this paper, we provided a complete programming theory. The theory includes a concrete proposal for a programming language and a formal, operational semantics for this language, as well as a corresponding proof theory based on temporal logic. The logic enables reasoning about the dynamics of agents and about the beliefs and goals of an agent at any particular state during its execution. The semantics of the logic is provided by the GOAL program semantics, which guarantees that properties proven in the logic are properties of a GOAL program. By providing such a formal relation between an agent programming language and an agent logic, we were able to bridge the gap between theory and practice. Moreover, a lot of work has already been done in providing practical verification tools for temporal proof theories [20].

Finally, our work shows that the (re)use of ideas and techniques from concurrent programming can be very fruitful. In particular, we have used many ideas from concurrent programming and temporal logics for programs in developing GOAL. It remains fruitful to explore and exploit ideas and techniques from these areas.

References

[1] Gregory R. Andrews. Concurrent Programming: Principles and Practice. The Benjamin/Cummings Publishing Company, 1991.

[2] K. Mani Chandy and Jayadev Misra. Parallel Program Design. Addison-Wesley, 1988.

[3] Philip R. Cohen and Hector J. Levesque. Intention is choice with commitment. Artificial Intelligence, 42:213-261, 1990.

[4] Philip R. Cohen and Hector J. Levesque. Communicative Actions for Artificial Agents. In Proceedings of the International Conference on Multi-Agent Systems. AAAI Press, 1995.

[5] Giuseppe De Giacomo, Yves Lespérance, and Hector Levesque. ConGolog, a Concurrent Programming Language Based on the Situation Calculus. Artificial Intelligence, accepted for publication.


[6] Koen Hindriks, Frank S. de Boer, Wiebe van der Hoek, and John-Jules Meyer. An Operational Semantics for the Single Agent Core of AGENT-0. Technical Report UU-CS-1999-30, Department of Computer Science, Utrecht University, 1999.

[7] Koen Hindriks, Yves Lespérance, and Hector J. Levesque. An Embedding of ConGolog in 3APL. Technical Report UU-CS-2000-13, Department of Computer Science, Utrecht University, 2000.

[8] Koen V. Hindriks, Frank S. de Boer, Wiebe van der Hoek, and John-Jules Ch. Meyer. A Formal Embedding of AgentSpeak(L) in 3APL. In G. Antoniou and J. Slaney, editors, Advanced Topics in Artificial Intelligence (LNAI 1502), pages 155-166. Springer-Verlag, 1998.

[9] Koen V. Hindriks, Frank S. de Boer, Wiebe van der Hoek, and John-Jules Ch. Meyer. Formal Semantics for an Abstract Agent Programming Language. In Munindar P. Singh, Anand Rao, and Michael J. Wooldridge, editors, Intelligent Agents IV (LNAI 1365), pages 215-229. Springer-Verlag, 1998.

[10] Koen V. Hindriks, Frank S. de Boer, Wiebe van der Hoek, and John-Jules Ch. Meyer. Agent Programming in 3APL. Autonomous Agents and Multi-Agent Systems, 2(4):357-401, 1999.

[11] Zohar Manna and Amir Pnueli. The Temporal Logic of Reactive and Concurrent Systems. Springer-Verlag, 1992.

[12] John-Jules Ch. Meyer, Wiebe van der Hoek, and Bernd van Linder. A Logical Approach to the Dynamics of Commitments. Artificial Intelligence, 113:1-40, 1999.

[13] Anand S. Rao. AgentSpeak(L): BDI Agents Speak Out in a Logical Computable Language. In W. van der Velde and J.W. Perram, editors, Agents Breaking Away (LNAI 1038), pages 42-55. Springer-Verlag, 1996.

[14] Anand S. Rao. Decision procedures for propositional linear-time belief-desire-intention logics. In M.J. Wooldridge, J.P. Müller, and M. Tambe, editors, Intelligent Agents II, volume 1037 of LNAI, pages 33-48. Springer, 1996.

[15] Anand S. Rao and Michael P. Georgeff. Intentions and Rational Commitment. Technical Report 8, Australian Artificial Intelligence Institute, Melbourne, Australia, 1990.

[16] John R. Searle. Speech Acts. Cambridge University Press, 1969.

[17] Yoav Shoham. Agent-oriented programming. Artificial Intelligence, 60:51-92, 1993.

[18] Sarah Rebecca Thomas. PLACA, An Agent Oriented Programming Language. PhD thesis, Department of Computer Science, Stanford University, 1993.

[19] Bernd van Linder, Wiebe van der Hoek, and John-Jules Ch. Meyer. Formalising motivational attitudes of agents: On preferences, goals, and commitments. In M.J. Wooldridge, J.P. Müller, and M. Tambe, editors, Intelligent Agents II (LNAI 1037), pages 17-32. Springer-Verlag, 1996.

[20] Tanja Vos. UNITY in Diversity. PhD thesis, Department of Computer Science, Utrecht University, 2000.

[21] Wayne Wobcke. On the Correctness of PRS Agent Programs. In N.R. Jennings and Y. Lespérance, editors, Intelligent Agents VI: Proceedings of the Sixth International Workshop on Agent Theories, Architectures, and Languages (ATAL-99), Lecture Notes in Artificial Intelligence. Springer-Verlag, Berlin, 2000.