
Multi-Agent Systems: An Investigation of the Advantages of Making Organizations Explicit

Andreas Schmidt Jensen

Kongens Lyngby 2010
IMM-M.Sc.-2010-29

Technical University of Denmark
Informatics and Mathematical Modelling
Building 321, DK-2800 Kongens Lyngby, Denmark
Phone +45 45253351, Fax +45
[email protected]

Summary

Whereas classical multi-agent systems place the agent at the center, there has recently been a development towards focusing more on the organization of the system. This allows the designer to focus on what the goals of the system are without considering how the goals should be fulfilled.

This thesis investigates whether the organizational approach has any clear advantages over the classical way of implementing multi-agent systems. The investigation is done by implementing each type of system in the same environment in order to identify the advantages and disadvantages of each approach. The comparison is based on a team-based version of Bomberman, which is simple, yet enables the agents to employ advanced strategies to fulfill their goals.

The investigation centers around the Java-based AgentSpeak interpreter Jason, which allows the designer to create multi-agent systems using a logic programming language similar to Prolog. The organizational model Moise+ is used for designing the organization of one team, and a middleware called J-Moise+ combines Jason and Moise+ into a fully functioning organization-centered multi-agent system.

The systems are compared using a set of criteria that enables us to find advantages and disadvantages of both systems. As with many comparisons, the results show that the use of both types of systems can be justified in different situations.


Resumé

In classical multi-agent systems, the agent is at the center. In recent years, the focus has shifted from the agent to the organization of the system. This lets the system designer focus on the goals of the system without considering how these goals should be fulfilled.

This thesis investigates whether there are clear advantages to taking an organization-centered approach to multi-agent systems compared to the classical approach. The investigation is carried out by implementing both types of systems in the same environment in order to find out which advantages and disadvantages each system has. The comparison is based on a simple, team-based version of Bomberman, which lets the agents follow advanced strategies to achieve their goals.

The investigation builds on the Java-based AgentSpeak interpreter Jason, which lets the system designer develop multi-agent systems using a logic programming language similar to Prolog. The organizational model Moise+ is used to design the organization of one of the systems. This is coupled with Jason using a middleware system called J-Moise+, forming a fully functioning organization-centered multi-agent system.

The systems are compared on a number of criteria that make it possible to find advantages and disadvantages of both systems. As with many other comparisons, the results show that the use of both types of systems can be justified in different situations.


Preface

This thesis was prepared at DTU Informatics at the Technical University of Denmark from January through June 2010 as a part of the requirements for acquiring the M.Sc. degree in engineering.

The goal of the thesis was to investigate two types of multi-agent systems: the classical, agent-centered system and the organization-centered system, in which the organization of the system is explicitly defined. The knowledge gained during this investigation is used to discuss whether there are any clear advantages to making the organization of a multi-agent system explicit.

Kongens Lyngby, June 2010

Andreas Schmidt Jensen


Acknowledgements

I would like to thank my supervisor, Jørgen Villadsen, who has provided help and advice throughout the project period.

Thanks to Jomi F. Hübner and Rafael H. Bordini for comments on the project.


Contents

Summary
Resumé
Preface
Acknowledgements

1 Introduction
  1.1 Previous Work
  1.2 Overview of the Report

I Multi-Agent Systems

2 Introducing Intelligent Agents
  2.1 What is an Intelligent Agent?
  2.2 Deductive Reasoning Agents
  2.3 Practical Reasoning Agents
  2.4 Concluding Remarks

3 Multi-Agent Systems
  3.1 Communication
  3.2 Cooperation
  3.3 Organization-Centered Multi-Agent Systems
  3.4 Designing a Multi-Agent System
  3.5 Applications
  3.6 Concluding Remarks

4 Logic in Multi-Agent Systems
  4.1 Modal Logic
  4.2 Epistemic Logic
  4.3 Deontic Logic
  4.4 Concluding Remarks

5 Jason
  5.1 AgentSpeak
  5.2 Communication
  5.3 Reasoning
  5.4 Agent Architecture
  5.5 Environments
  5.6 Internal Actions
  5.7 Concluding Remarks

6 Moise+
  6.1 Structural Specification
  6.2 Functional Specification
  6.3 Deontic Specification
  6.4 S-Moise+
  6.5 J-Moise+
  6.6 Concluding Remarks

II Comparing ACMAS and OCMAS

7 The Scenario
  7.1 Bomberman
  7.2 Environment
  7.3 Agents
  7.4 Strategy

8 Agent-Centered Multi-Agent System
  8.1 System Overview
  8.2 Agents
  8.3 Pathfinding
  8.4 Pursuing Enemies
  8.5 Concluding Remarks

9 Organization-Centered Multi-Agent System
  9.1 Structural Specification
  9.2 Functional Specification
  9.3 Deontic Relationship
  9.4 J-Moise+
  9.5 Code Maintenance
  9.6 Concluding Remarks

10 Results
  10.1 Agent-Centered Approach
  10.2 Organization-Centered Approach
  10.3 Performance Comparison
  10.4 Using Jason and Moise+
  10.5 OCMAS vs ACMAS: When to Use What?
  10.6 Multi-Agent Programming Contest
  10.7 Concluding Remarks

11 Conclusions
  11.1 Systems
  11.2 Platform
  11.3 Implementation
  11.4 Future Work
  11.5 Conclusive Remarks

A Source

Chapter 1

Introduction

This project is about intelligent agents and the environments in which they function. In computer science this field is more commonly known as multi-agent systems. An intelligent agent is an autonomous entity, situated in an environment in which it is able to sense and act. Being autonomous means that it is able to decide for itself how to solve problems, react to changes and cooperate with other intelligent agents.

Originally, multi-agent systems focused primarily on the agents: what they are able to do and how they choose to do it [3, 32]. Such systems, in which the agent is the central element, are known as agent-centered multi-agent systems (ACMAS). Recently [8, 12, 14, 18] we have seen a development towards an approach more concerned with the overall structure of multi-agent systems and, more specifically, the organization an agent is implicitly a part of. By making the organization explicit, we get what is known as an organization-centered multi-agent system (OCMAS). The organization focuses on what the agents should do, but not how they should do it; this makes it possible to structure the system without specifying any details about the implementation.

I will be comparing an agent-oriented and an organization-oriented multi-agent system implementation of the game Bomberman. However, the nature of an implementation of intelligent agents does not guarantee a certain quality. Therefore, a comparison based on the overall performance of a team of agents may not be adequate; the results may merely be caused by better or worse strategies.

Instead, since the two approaches differ in many ways, it seems more natural to employ other measures of comparison. The comparison of ACMAS and OCMAS will therefore be based on the following measures:

• Structure of the source code

• Development speed

• Performance

• Error handling

• Debugging

• Complexity of the scenario

• Number of intelligent agents

Note that there is a chance that the comparison will partly be a comparison of the tools used to build the systems, since the structure and implementation of a system will highly depend on the languages used. For the ACMAS I will be using the AgentSpeak interpreter Jason, which is an agent-oriented programming language similar to Prolog. The OCMAS will be implemented in J-Moise+, a combination of Jason and the organizational model Moise+. This means that both systems will make use of AgentSpeak, though the OCMAS will have access to a few more Moise+-specific commands.

The results are not meant to be applied strictly; even if an ACMAS seems better by most of the measures, one cannot necessarily conclude that an ACMAS is always the approach to choose for an agent-based software solution. As with most systems, there are different ways to solve problems, and while one may be appropriate in some situations, another approach may be a much more reasonable choice in others.

1.1 Previous Work

There is not much in the literature concerning actual comparisons between the two different aspects of this report, i.e. ACMAS and OCMAS. However, a few studies have been made comparing other aspects of multi-agent systems.


In [1], a performance comparison is made between a MAS with a set of static agents and a MAS with one dynamic agent. The idea is to analyze whether it is better to have a single agent moving between locations or to have one static agent at each location at all times. The article concludes that having one dynamic agent is better in small systems, while static agents perform better in larger systems.

A comparison of the commercial multi-agent system JACK and the academic project 3APL is conducted in [27]. The overall results are that both systems have advantages and disadvantages, but it is clear that since JACK is a commercial tool, it is a quite reasonable choice for actual agent development. 3APL, on the other hand, aims to provide a test platform for research within multi-agent systems. Therefore, the disadvantages of 3APL, such as the lack of an IDE and simple software development principles, are quite understandable.

Finally, [12] introduces the concept of OCMAS and lists some of the drawbacks of the classical ACMAS approach. While it is not a thorough comparison of the two kinds of systems, it does provide useful data for this project.

1.2 Overview of the Report

The report consists of this introduction and two major parts: Multi-Agent Systems and Comparing ACMAS and OCMAS.

Part I (Multi-Agent Systems) provides the reader with the knowledge required for implementing and comparing different types of multi-agent systems. I will discuss different types of agents and the systems in which they reside, and introduce a few tools which can be used for building multi-agent systems.

In chapter 2 I will introduce the notion of a single agent and describe how agents reason. In chapter 3 I put a set of agents together in a multi-agent system and discuss how the agents can cooperate and communicate in order to achieve their goals. In chapter 4 I describe how to formalize the specification of agents using logic. In chapter 5 I introduce Jason, an open-source interpreter for AgentSpeak, and show how multi-agent systems can be built using this system. Finally, in chapter 6 I introduce Moise+, an organizational model for multi-agent systems in which groups of agents and roles can be specified.

In part II (Comparing ACMAS and OCMAS) I describe the scenario in which the systems are implemented, I propose a general strategy which the comparison will be based upon, and I describe the details of the implementations of both systems.


Chapter 7 describes the Bomberman scenario in detail, with emphasis on the fact that it consists of teams of Bombermen instead of single Bombermen. I also go into detail with a general strategy that both systems are supposed to follow. This ensures that performance differences are not a result of differences in strategy. In chapters 8 and 9 I describe the details of the implemented ACMAS and OCMAS respectively. Chapter 10 compares the systems using the measures described above to provide an overview of the pros and cons of both approaches. Finally, I conclude the project in chapter 11.

Part I

Multi-Agent Systems

Chapter 2

Introducing Intelligent Agents

In this chapter I introduce the notion of an intelligent agent. The aim is to give the reader an idea of what the individual agent is able to do, how it interacts with the environment and how it reasons. I will briefly discuss some of the different types of intelligent agents.

I will go into detail with the parts of an agent which are often used to reason, namely its beliefs, desires and intentions. These three concepts are closely related in the sense that beliefs are what the agent knows, desires are what the agent would like to achieve, and intentions are what the agent has chosen to attempt to achieve.

2.1 What is an Intelligent Agent?

An agent is an autonomous entity which proactively attempts to achieve certain goals. The agent is placed in an environment and is able to perceive information about it. Percepts are added to the agent's knowledge base and are used to plan how the agent achieves its goals. Below is the definition of an intelligent agent, as presented in [32]:


Figure 2.1: An intelligent agent and the environment in which it resides (after [32]). [Figure: the agent's sensors receive feedback from the environment as percepts; its actuators perform actions on the environment.]

Definition 2.1 (Intelligent agent) An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its delegated objectives. [32, p. 21] □

Figure 2.1 shows an intelligent agent in its environment. An agent basically consists of a number of sensors and actuators. The sensors perceive the feedback the agent receives from the environment and create percepts which the agent is able to use in its reasoning. The reasoning results in actions which the actuators will perform, thereby potentially changing the environment.

The feedback received from the environment depends on the type of environment. In [25] four classifications of environment properties are suggested:

Accessible versus inaccessible: An accessible environment is an environment in which the agent always has a complete and accurate picture of it, i.e. the agent knows everything about the environment. Most real-world environments are considered inaccessible. One example of this could be the internet.

Deterministic versus non-deterministic: When an agent performs an action in a deterministic environment, it is guaranteed that exactly this action will be performed. In a non-deterministic environment this may not be the case. For instance, an attempt to turn the lights on should in most cases result in the light being turned on, but this is not guaranteed since a light bulb may be malfunctioning.

Static versus dynamic: A static environment is an environment which will only change whenever an agent performs an action that changes it. In a dynamic environment changes can happen even without interactions from an agent. A real-world environment will usually be dynamic.

Discrete versus continuous: In a discrete environment, there is a fixed number of possible actions and types of percepts. While a continuous environment is more realistic, it also highly increases the complexity of the system. Therefore, if it is possible to simulate a discrete environment instead, this will usually result in better agents (since one is able to make them very good at handling a predefined number of known actions, instead of an infinite set of unknown actions).

2.1.1 The capabilities of an agent

There are basically three key capabilities that one would like an agent to have: reactivity, proactiveness and social ability [32].

Reactivity Reactivity is important because it allows the agent to react to changes in the environment. This makes it possible for the agent to revise its plans, making them more likely to succeed.

One should note that changes from the agent's perspective are not necessarily changes in the environment. For instance, in an inaccessible environment, the agent could simply have found a previously undiscovered area. This distinction is important if the agent knows whether the environment is completely static. In that case, there would be no such thing as changes in the environment, so every new percept would in fact be the discovery of new areas.

However, if the environment is dynamic, there is a chance that percepts will eventually be obsolete. Normally, the agent will have an idea of the environment in which it resides and will therefore know which percepts are bound to persist. For instance, if the agent discovers a wall, there is little chance that it will not perceive the same wall the next time it stands near it.

Building a reactive system is not a very hard task, since the system will basically consist of if-then clauses specifying how the agent should react to different states of the environment.
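In AgentSpeak, the language used by Jason (chapter 5), such if-then behavior is written as plans triggered directly by new percepts. A minimal sketch, where the percepts bomb/2, pos/2 and enemy/3 and the goal !flee are hypothetical names chosen for this illustration:

    // React to a bomb appearing on the agent's own square by fleeing.
    +bomb(X, Y) : pos(X, Y)
       <- !flee.

    // React to spotting an enemy by reporting the observation.
    +enemy(E, X, Y) : true
       <- .print("enemy ", E, " spotted at (", X, ",", Y, ")").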

Proactiveness Being proactive enables the agent to actively attempt to achieve its goals. This means that the agent demonstrates a goal-directed behavior in which it takes the initiative to achieve its goals.

This should be compared to a system in which nothing happens without a preceding event triggering it. Such a system will not actively do anything, but rather wait for events to react to (i.e. an entirely reactive system).

Building a proactive system is not hard either. In fact, when building a system using an imperative language, the result will typically be a proactive system: the methods of each class are effectively plans for achieving a goal [32], and in that sense the system will be goal-directed and therefore proactive.

Social ability The social ability of an agent allows it not only to communicate with other agents, but also to behave in a socially accepted manner. This makes it possible for the agent to negotiate and cooperate with other agents in order to fulfill its own goals.

Behaving in a socially accepted manner is necessary because, as I will describe further in chapter 3, agents are autonomous and will not necessarily fulfill requests from other agents. By behaving properly, it will be easier for the agent to cooperate¹.

Reactivity and proactiveness can easily be achieved alone; the difficult task is to build a system which balances the reactive and proactive capabilities of an agent, allowing it to react to changes in the environment while demonstrating a goal-directed behavior at the same time.

2.2 Deductive Reasoning Agents

One type of intelligent agent is known as the deductive reasoning agent. The idea behind deductive reasoning agents is to specify the agents, the environment and its desired behaviors using a symbolic representation in a formal language [32].

¹ Of course, it is up to the developer to decide whether an agent should ignore "trouble-making" agents.


If this symbolic representation is a set of logic formulae, the idea is to conduct theorem proving on these formulae to decide which actions are reasonable to perform.

An agent using deductive reasoning can therefore be compared to a theorem prover. We let the agent have a database of information about the environment. This database is then used when proving theorems in order to decide which actions to perform.

Definition 2.2 Let L be a set of first-order formulae and let D = P(L) be the set of databases, i.e. the set of sets of possible formulae. The internal state of an agent is then one of these sets of formulae; DB refers to a member of D. An agent has a set of deduction rules, ρ, which are the rules of inference for first-order logic. Using ρ and DB, we write DB ⊢ρ ϕ if ϕ can be proved from DB using only the rules of ρ. □

Using definition 2.2 it is then possible to define an action selection function (ASF) [32]:

action : D → Ac,

defining how to select which action to perform. The idea of the function is that if a given formula Do(ϕ), where ϕ is an action, is derivable from DB using ρ (i.e. DB ⊢ρ Do(ϕ)), then that action is currently the best action to perform. The ASF therefore attempts to derive every possible action from DB until a derivable action is found. If none of the possible actions are derivable, the ASF attempts to find an action which, even though it is not the best action to perform, is at least consistent with the rules and the database, meaning that the action is not forbidden in the given state. In other words, the function attempts to show that DB ⊬ρ ¬Do(ϕ) for an action ϕ. If such an action is found, that action is selected. Otherwise, no action is performed. Note that if the environment depends on the agent to perform an action, this would in most cases result in a deadlock, since the agent waits for consistent knowledge, while the environment waits for the agent to perform an action.
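A pseudocode sketch of the selection procedure just described, where Ac denotes the set of available actions and null means that no action is performed:

    function action(DB):
        for each ϕ ∈ Ac:                   // first, look for a prescribed action
            if DB ⊢ρ Do(ϕ) then return ϕ
        for each ϕ ∈ Ac:                   // otherwise, accept any action not forbidden
            if DB ⊬ρ ¬Do(ϕ) then return ϕ
        return null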

The idea is then to specify rules that govern the agent's behavior. These rules should all be of the form

ϕ → ψ,

meaning that if ϕ can be matched against the agent's DB, then ψ can be concluded. In the vacuum cleaner example described in [32] we could then have a cleaning action:

In(x, y) ∧ Dirt(x, y) → Do(suck).


If In(x, y) and Dirt(x, y) can be unified with knowledge from the agent's DB (e.g. if DB = {In(1, 1), Dirt(1, 1)}), then the agent can conclude that it should perform the action suck.
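For comparison with chapter 5, the same behavioral rule could be written as a reactive plan in AgentSpeak. This is only a sketch; in/2, dirt/2 and the action suck are names borrowed from the vacuum cleaner example, not part of an actual implementation:

    // Whenever dirt is perceived on the square the agent occupies, clean it.
    +dirt(X, Y) : in(X, Y)
       <- suck.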

2.3 Practical Reasoning Agents

There is one problem in particular with deductive reasoning: it is not comparable to how human beings reason. It should be clear that even though we do employ logical reasoning in some situations, we will usually also take other things into consideration, like emotions, something an intelligent deductive agent cannot do.

The practical reasoning agent is an attempt to build a reasoning system which not only uses logical reasoning, but takes other things, such as desires, into consideration. Practical reasoning should not be confused with theoretical reasoning, which is directed towards beliefs [32]. Applying modus ponens is theoretical reasoning: if an agent believes that ϕ is true and that ϕ → ψ is true, then it can conclude that ψ is also true. This only affects the agent's beliefs. Practical reasoning, on the other hand, is reasoning directed towards action: if I decide to make a cup of coffee instead of a cup of tea, I apply practical reasoning.

Basically, from a human point of view, practical reasoning (or the act of making a decision) consists of two processes [32]:

Deliberation: Deliberation is the act of deciding what to do. Consider a situation where a person wants something to drink and can choose between tea and coffee. Deciding whether to make a cup of tea or a cup of coffee is deliberation.

Means-Ends Reasoning: This is the act of deciding how to do it. Means-ends reasoning only happens after deliberation. For instance, if the person chooses to make a cup of coffee, he must now decide how to make it. Applying means-ends reasoning results in a plan for how to bring about the chosen state of affairs. For instance, the plan could be to mix boiling water and instant coffee. However, the plan may not succeed: the water boiler could fail, or the person could realize that coffee may not be a good idea, since it is late in the evening and he needs to get up early the next morning.

Of course, this was a simple example, and in most situations an agent will be situated in a much more complex environment in which the choices are not obvious, and it may not even be obvious how to bring about the chosen state of affairs once deliberation has been made. Therefore, the processes of deliberation and means-ends reasoning must be put under time constraints to guarantee that some decision will be made, even though it may not be the best decision in the given state.

Definition 2.3 (Intentions) When an agent has performed deliberation and means-ends reasoning, it has chosen and committed itself to achieving a certain state of affairs. We then say that the agent has the intention of achieving this particular state of affairs. □

However, not everything that an agent wants to do is something that it intends to do at any given time. Some things remain mere desires. The difference between intentions and desires was described by Bratman [5, p. 22]:

For example, my desire to play basketball this afternoon is merely a potential influencer of my conduct this afternoon. It must vie with my other relevant desires – say, my desire to finish writing this paper – before it is settled what I will do. In contrast, once I intend to play basketball this afternoon, the matter is settled: I normally need not continue to weigh the relevant pros and cons. When the afternoon arrives, I will normally just proceed to execute my intention.

What this tells us is that desires, while they are considered during deliberation, will not necessarily be chosen now (if ever) as intentions. One particular property we would like from an agent committed to an intention is that this commitment is persistent, at least until it is clear that the intention can never be achieved. Otherwise, there is a possibility that no goal will ever be completed, since other intentions will take its place. We should also ensure that "intentions constrain future deliberation" [32], since it would not be rational to pursue an intention which conflicts with current commitments.

2.3.1 Agent Control Loop

Agents have beliefs, desires and intentions, and using deliberation and means-ends reasoning makes it possible for the agent to commit to a goal and create a plan for achieving this goal. I will now make it more explicit how the agent performs this reasoning by describing what is known as the agent control loop [32]. In short, the control loop continuously perceives the environment, updates beliefs, decides which intention to achieve and looks for a plan to achieve it. The plan is then executed until it is empty, has succeeded or has become impossible. The plan is also reconsidered often, to ensure that it is still sound.

Implementing Deliberation Deliberation can be done by generating the list of options an agent has at a given time (using its beliefs and current intentions, to ensure that it will not choose conflicting desires) and then committing to bring about the state of affairs of one of these desires, i.e. it will intend to achieve the goal of that desire. Definitions 2.4-2.6 show the signatures of the functions which will be used to perform deliberation [32].

Definition 2.4 (Belief revision) Let Bel be the set of all beliefs, and Per be the set of all percepts. An agent then revises its beliefs using the function

brf : P(Bel) × Per → P(Bel),

i.e. whenever the agent perceives the environment, a new set of beliefs is generated. □

Definition 2.5 (Generating desires) Let Bel be the set of all beliefs, Des be the set of all desires and Int the set of all intentions. An agent then generates its desires (options) using the function

options : P(Bel) × P(Int) → P(Des).

The function takes the agent's current beliefs and intentions, and on the basis of these generates the set of possible options. □

Definition 2.6 (Committing to intentions) Let Bel be the set of all beliefs, Des be the set of all desires and Int the set of all intentions. An agent then chooses an intention to commit to using the function

filter : P(Bel) × P(Des) × P(Int) → P(Int).

The function chooses an intention from the set of competing desires and intentions by choosing what seems to be the best intention to commit to, given the current set of beliefs. □

Using the functions described above, it is possible to implement deliberation, assuming that B is the agent's current beliefs, D is its current desires and I is the set of intentions to which it is currently committed:

perceive ϕ
B ← brf(B, ϕ)
D ← options(B, I)
I ← filter(B, D, I)

When deliberation is done, I contains the (possibly new) intentions that the agent is committed to.

Implementing Means-Ends Reasoning An agent uses means-ends reasoning to achieve its intentions, i.e. it attempts to reach an end (the intention) using its means (the available actions). This is also known as planning. A planner is an algorithm which takes three inputs: the intentions of the agent, the agent's beliefs about the state of the environment, and the actions available to the agent. The planner then returns a plan which, given the current state of the environment, enables the agent to reach a state in which its intentions are achieved.

Definition 2.7 (Creating a plan) Let Bel be the set of all beliefs, Int the set of all intentions and Act the set of all actions. An agent then creates a plan using the function

plan : P(Bel) × P(Int) × P(Act) → Plan.

However, we should notice that even though it seems that a planner will generate plans, nothing in the signature above requires this. The reason is that in many implementations, the approach is instead to build a set of plans in a plan library at design time [3].

This is also the approach taken in this project, as the framework to be used for building multi-agent systems, Jason, works this way. Therefore I will not go into detail with how to generate plans using a planner.
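To give an idea of what a design-time plan library looks like, here is an AgentSpeak sketch of plans for the coffee example from the previous section. The goal !have(coffee) and the beliefs and actions used are hypothetical names, and the plan for the subgoal !boil(water) is omitted:

    // If the water is already boiling, make the coffee directly.
    +!have(coffee) : water(boiling) & have(instant_coffee)
       <- mix(water, instant_coffee).

    // Otherwise, first achieve the subgoal of boiling water, then retry.
    +!have(coffee) : not water(boiling)
       <- !boil(water);
          !have(coffee).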

2.4 Concluding Remarks

In this chapter I have given an overview of the intelligent agent and its environment. I have described two possible types of agents: the deductive reasoning agent and the practical reasoning agent.

The deductive reasoning approach attempts to build agents as theorem provers, while the practical reasoning approach attempts to build agents which reason similarly to human beings.

The reason for describing both of these types in detail is that while the multi-agent framework I am going to use (Jason) implements a practical reasoning agent, the specification of an agent is more easily done using the deductive approach, and such a specification is easily transferred to Jason because of its agent-oriented approach (see chapter 5).

Chapter 3

Multi-Agent Systems

In this chapter I describe how to construct a system of several intelligent agents: a multi-agent system. In such a system, the agents still act autonomously as discussed in the previous chapter, but with one addition: there are now several agents, which may give rise to more complex situations where communication and cooperation could be the key to success.

I will first define what a multi-agent system is. Then I will discuss how agents can communicate and cooperate. I will discuss the possibility of organizing agents in groups towards which the agents have certain responsibilities. I will describe a few methodologies for designing multi-agent systems, and finally discuss a few applications of multi-agent systems.

Definition 3.1 (Multi-Agent System) A multi-agent system is a system comprised of one or more intelligent agents which are able to interact with each other and their environment in order to achieve their goals.

Generally speaking, the system will consist of an environment in which the agents are situated. The agents may then be in various organizational relationships to one another (e.g. one may be leading other agents). Each agent may also have some knowledge of some of the other agents [3].


3.1 Communication

Communication is the key to succeeding in many scenarios for several reasons. Not only does it allow the agents to share knowledge, it also makes cooperation and organization much easier, as we shall see in the following sections.

One may make the assumption that communication in multi-agent systems is analogous to method invocation in object-oriented languages [32]. However, consider the following example, where the object o2 invokes the method send on the object o1: o1.send(msg). Now consider the same example in which we have two agents, i and j, and an action ϕ, and j sends the action to i: j −ϕ→ i.

The main difference here is that i and j are autonomous agents, while the objects are not. Therefore, when i receives the action ϕ, it can choose whether or not to perform it. This is not the case for the object o1: the method send is invoked regardless of whether it is convenient for the object or not.

Of course, it should not only be possible for an agent to request that another agent achieve some goal. It should also be possible for an agent to share knowledge, or to ask for knowledge. An agent could for instance ask another agent whether it is raining at that agent's location.

3.1.1 The Knowledge Query and Manipulation Language

The Knowledge Query and Manipulation Language (KQML) solves this problem. It is an "envelope" format in which it is possible to specify a number of things about a message. The language is not as such concerned with the content of a message; for this, other languages are more appropriate [32]. The language of the content of a message may also be highly dependent on the environment, the current situation and the agents.

A message contains a performative and a list of parameters. Below is an example of such a message:

(tell
  :content (price(apples, 15))
  :receiver agent2
  :language prolog
)


Table 3.1: Some of the available KQML performatives. In this table i and j are agents and ϕ is the content of the message.

  Performative  Meaning
  achieve       i wants j to achieve ϕ, i.e. make ϕ true in the environment.
  ask-one       i wants to know one of the answers to a question ϕ in j's belief base.
  ask-all       i wants to know all of the answers to a question ϕ in j's belief base.
  ask-if        i wants to know whether j knows the answer to a question ϕ, i.e. if the answer is in j's belief base.
  tell          i tells j that ϕ is in i's belief base.
  untell        i tells j that ϕ is not in i's belief base.

In this message the performative is tell and the parameters are content, receiver and language. The message is telling agent2 the price of apples. The language is prolog, and agent2 is therefore assumed to understand this language.
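In Jason (chapter 5), such messages are sent with the pre-defined internal action .send, where KQML's ask-one is written askOne. The following sketch sends the message above and then asks for knowledge in return; the belief price/2 is taken from the example, while the plan context and the query raining/1 are hypothetical names:

    // Tell agent2 the price of apples, then ask whether it is raining there.
    +!inform : price(apples, P)
       <- .send(agent2, tell, price(apples, P));
          .send(agent2, askOne, raining(agent2), Reply);
          .print("reply from agent2: ", Reply).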

Table 3.1 shows some of the available performatives of KQML. These performatives make knowledge sharing possible. The performative achieve is needed for cooperation; however, as mentioned, asking an agent to achieve something does not necessarily mean that the agent is going to do so. For this, the agents will need to negotiate.

3.2 Cooperation

In a single-agent system, the intelligent agent of that system will have clear goals and ways to achieve these goals. However, in situations where a goal cannot be achieved by the agent single-handedly, the agent is stuck. Consider a situation where the agent is supposed to move a box from A to B. If the agent is actually able to carry the box by itself, then the problem can be solved. However, if the box is very heavy, it has no way of doing it alone, leaving the problem unsolved.

This is different in a multi-agent system, where an agent is able to ask for help if it has a task that is impossible or inconvenient¹ to do alone. Consider the example with the heavy box: in a multi-agent system, the agent can now ask another agent to assist in carrying the box, thus solving the problem.

¹ A task could be inconvenient to complete alone if it is certain that it could be completed much faster when cooperating.

Figure 3.1: The three stages of a distributed problem solver (based on [32, p. 154]): (a) problem decomposition, (b) subproblem solution, (c) solution synthesis.

There are of course many other examples of problems where cooperation is preferred or necessary. In the previous example, the two agents needed to work together to solve a single problem; however, in many cases it will be possible to divide problems into subproblems [32].

However, as briefly mentioned, it is not necessarily the case that an agent will cooperate just because another agent asks for help. It is possible to make the benevolence assumption [32, p. 152], which assumes that there is a set of overall system objectives rather than individual agent objectives. In that case an agent will always choose to help a fellow agent, since all agents want to achieve the same objectives.

3.2.1 Decomposing problems

It has been suggested that solving a problem in a multi-agent system can be divided into three stages: problem decomposition, subproblem solution and solution synthesis [32, p. 154-55]. These stages are illustrated in figure 3.1.

Problem decomposition: In the decomposition phase the problem is divided into subproblems, which may then be divided further into smaller subproblems. The idea is to divide the problem into very small problems that are easily solved by single agents using their specific abilities. It may not always be obvious in the beginning how to divide the problem into subproblems. Since different agents have different abilities, they may not be able to solve the same problems; each problem must be of a type that the selected agent can solve.

Subproblem solution: In the subproblem solution phase the agents solve their delegated problems. In this phase there will usually be a lot of knowledge sharing if some agents have knowledge that may help others with their tasks.

Solution synthesis: Finally, in the solution synthesis phase the subproblems are assembled into a complete solution. This may not be trivial if some subproblems overlap or if there are inconsistencies in the solutions (some agents may have incorrect beliefs about the environment, thus drawing false conclusions).

While the phases given above may provide a general overview of how to solve problems in a multi-agent system, they do not consider (1) how to actually share the tasks between the agents, i.e. choosing which agents are most appropriate for a task, and (2) how to share the results when a (sub)problem has been solved.

3.2.1.1 Task sharing

Of course, task sharing may be very easy if we make the benevolence assumption, since in that case agents will always accept the task they are allocated. However, if we cannot assume this, then the agents may need to carry out some negotiation in order to ensure that the tasks will be completed.

The Contract Net Protocol The Contract Net protocol (CNP) is a protocol for assigning tasks to agents in a multi-agent system. The CNP distinguishes between two types of agents: the initiator and the participants. The initiator is an agent which has a task it wants to delegate to another agent. A participant is an agent which has told the initiator that it may be willing to complete a task offered by it. Basically there are three steps in the CNP:

1. Task announcement: The initiator recognizes that it has a problem. It then broadcasts the problem to all participants.


2. Bidding: The participants have the possibility of bidding on the task ifthey believe they are able to complete it.

3. Awarding: Finally, the initiator will award the task to one of the bidding participants.

Usually a bid indicates how qualified the agent believes it is for the task. This may be an indication of how well the hardware of the agent is suited for the task, but also of its plans and knowledge. This makes it very easy for the initiator to decide which bid to accept, since the highest bid will then be the most qualified (assuming that the agents do not lie and are able to precisely estimate the cost of completing a task).

Using the CNP for task sharing also ensures that the task will be delegated to an agent which is willing to complete it. Since only bidding agents are considered, only agents that have expressed interest in the problem can be awarded the task.
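A minimal AgentSpeak sketch of the protocol is given below. All predicate names are hypothetical, and the initiator naively awards the task to the first bidder instead of collecting bids until a deadline and comparing them:

    // Initiator: announce the task to all agents.
    +!delegate(Task)
       <- .broadcast(tell, cfp(Task)).

    // Initiator: award the task to the first bidder.
    +bid(Task, Agent) : not awarded(Task)
       <- +awarded(Task);
          .send(Agent, achieve, Task).

    // Participant: bid on announced tasks we believe we can complete.
    +cfp(Task)[source(I)] : capable(Task)
       <- .my_name(Me);
          .send(I, tell, bid(Task, Me)).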

3.2.1.2 Result sharing

Solving a problem is not very relevant if the solution is not made available to others. Therefore, it is important to consider how to share the results of a problem. This is even more the case in a situation where some subproblems are further decomposed into smaller subproblems; here, other agents may be depending on the results of these subproblems in order to be able to complete their own tasks.

There are two ways of sharing results [32]: one is proactive and happens whenever an agent believes that another agent may need to know a result. The other is reactive in the sense that the agent only shares a result when another agent actively asks for that specific information.

3.2.2 Coordination

If agents i and j have tasks that somehow depend on each other, it is necessary to coordinate the completion of these tasks. In [32] the notion of coordination relationships is described. Basically, a coordination relationship is the relationship between entities which require cooperation to complete their tasks. Furthermore, there is a distinction between positive and negative relationships: positive relationships are relationships in which one or both of the agents can benefit by combining the tasks. Negative relationships, on the other hand, are relationships between tasks which cannot be combined, but cannot be completed at the same time either. This could for instance be two tasks that need to use the same resource: one of the agents will then have to wait for the other agent to complete its task.

Example 3.2 (Negative relationship) An example of a negative coordination relationship could be the use of a network printer. Even if two agents both send a job to the printer at the same time, the jobs will be printed one at a time. Therefore, the agents are implicitly cooperating in order to achieve their goals, namely finishing their print jobs.

Example 3.3 (Positive relationship) Two or more agents enter a room in which the lights are switched off. Immediately, all of the agents intend to turn the light on. However, this is a task which only one of the agents needs to perform in order to complete the task for all of them. This is referred to as an action equality relationship.
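In AgentSpeak, an agent can exploit the action equality relationship of example 3.3 by dropping its own goal once it perceives that the task has already been completed. A sketch, where the percept light(on) and the goal turn_on_light are hypothetical names:

    // If the light comes on while I still desire to turn it on, someone
    // else has completed the task for me: drop my own desire.
    +light(on) : .desire(turn_on_light)
       <- .drop_desire(turn_on_light).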

Usually one assumes that coordination happens at run-time [32], meaning that the agents themselves will be capable of detecting coordination relationships and act accordingly when such relationships are detected.

3.3 Organization-Centered Multi-Agent Systems

So far, when I have been talking about multi-agent systems, I have referred to agent-centered multi-agent systems (ACMAS). While such systems are able to solve certain complex problems efficiently, there may be some drawbacks to designing the system in terms of the agents and not the organization. In fact, most multi-agent systems do not have an explicit organization. In [12] several drawbacks of ACMAS are described, and it is suggested why taking an organization-centered approach may solve some of these problems.

The main concern about ACMAS according to [12] is that the agents are free to communicate with, interact with and use services from every other agent. Moreover, it is "the responsibility of each agent to constrain its accessibility from other agents" [12, p. 216]. This is a problem because there is no guarantee that an agent will actually enforce such constraints.

An attempt to solve these problems is proposed as the organization-centered multi-agent system (OCMAS).


Definition 3.4 An organization-centered multi-agent system is a multi-agent system with an explicit organization. The organization consists of agents that exhibit some kind of behavior. The organization can be partitioned into smaller sub-organizations (groups), and groups are allowed to overlap. Agents have roles which define how they are supposed to behave within the organization. □

There is a distinction between the specification of an organization (OS) and an organizational entity (OE). Whereas the specification can be thought of as the class of the organization, defining the possible roles and tasks, an organizational entity is an instantiation of such a structure (the object, to draw a parallel between OCMAS and object-oriented programming).

3.3.1 The Principles of OCMAS

In order to design and analyze an OCMAS, it is necessary to define a set of principles on how to approach such systems. In [12] three such principles are defined:

Principle 1: "What" – not "how": The organizational level of a multi-agent system should not define how the system works, i.e. how the agents are supposed to act. Instead, it is supposed to define what the system is supposed to do and what norms the agents are supposed to follow.

Principle 2: Not an agent description: The organization should not specify how an agent interprets the organization. Moreover, it is not the responsibility of the organization to define so-called mental issues, i.e. the beliefs, desires and intentions of the agents. Instead, the organization should only provide a description of what is expected of an agent.

Principle 3: The context of interaction: The agents are associated with at least one group within the organization. The organization must ensure that an agent only has knowledge of the structure of its own groups. Furthermore, while the agent is assumed to know all agents belonging to its own groups, this is generally not the case for agents outside of those groups. Therefore, interaction between agents happens more naturally within groups and not between groups.


3.3.2 AGR: An OCMAS Model

I now briefly describe an OCMAS model called the Agent/Group/Role (AGR) model. It consists of three primitives: agent, group and role.

The agent is an intelligent agent situated in an OCMAS. It is associated with one or more groups in which it plays certain roles. As principle 2 tells us, there must be no constraints on the architecture of the agent.

A group is a set of agents which have certain characteristics in common. Following principle 3, agents may only communicate if they belong to the same group. However, since agents may be associated with several groups, it is possible to share knowledge between groups. Furthermore, we talk about a group structure as the abstract group which defines the types of roles that an agent can play in such a group.

Finally, a role represents what function an agent plays in a group. An agent is able to play several roles, and several agents can play the same role.

The AGR model is an example of how an OCMAS can be designed using the three principles given above. I will not go into further detail with this model; instead I will be using the organizational model called Moise+, which uses the same principles and also defines groups and roles, analogous to the AGR model. Moise+ is described in chapter 6.

3.3.3 Social commitment

I have now described the relation between agents and their organization. The next step is to define how the agents can be committed to achieving certain goals with the intention of doing so for another entity, be it another agent or a group. Basically, what is lacking in definition 3.4 is the notion of an obligation. More precisely, when an agent is associated with a group, it should be obliged to achieve certain goals simply because of the association with that group. In other words, "there is no Organization without Obligations" [8].

In [8] Castelfranchi introduces different kinds of commitment: internal, social and collective commitments. The internal commitment (I-Commitment) is the relation between an agent and an action

I-Comm(i, ϕ).

That is, when an agent i intends to perform some action ϕ (to achieve a certain goal), it is committed to performing this action. It is also possible to talk about the internal commitment of a group: in that case, the group has the intention of achieving some goal, and is therefore committed to this. This is also what is usually called collective commitment (C-Commitment).

The social commitment (S-Commitment), on the other hand, should not be understood as an individual commitment shared by more than one agent. Instead it is "the commitment of one agent to another" [8].

Definition 3.5 (S-Commitment) Let i, j be agents and ϕ an intention to be achieved. Then S-Commitment is a relation

S-Comm(i, j, ϕ),

where i is the agent committed to j to achieve ϕ. □

In some parts of the literature, a witness is introduced as a component of the commitment [8]. However, by introducing the notion of an honest agent we can discard the witness:

Honest(i) = S-Comm(i, j, ϕ) → I-Comm(i, ϕ),

i.e. an honest agent will be internally committed to achieving ϕ whenever it is socially committed to another agent to achieve ϕ [8]. In this case there is no need for witnesses to social commitments.

The honest agent should not be confused with the benevolence assumption. The benevolence assumption states that all agents want to achieve the same global objectives; this does not necessarily mean that the agents are honest. Furthermore, an honest agent may not want to achieve the same objectives as other agents.

According to definition 3.5, if i is S-Committed to j, then i is committed to j to achieve ϕ. Surely, if j has ϕ as a goal, then this must imply that j has the goal of i achieving ϕ (i.e. of i intending to achieve ϕ):

S-Comm(i, j, ϕ) → GOAL_j(INT_i(ϕ)),

where GOAL_i(ϕ) means that agent i has the goal of achieving ϕ, while INT_i(ϕ) means that agent i has the intention of achieving ϕ. This means that both agents now have the goal that ϕ is achieved.

Another important point about social commitment is the power given to j the moment i commits to j to achieve some ϕ [8]. This includes controlling that i actively attempts to achieve ϕ, requiring that i does it, and finally complaining if i does not make an attempt to achieve ϕ. This basically means that the agent i "loses some of its autonomy" [9]. This is an important fact, because we cannot make the same assumptions about i anymore: instead of being committed to its own intentions, the agent is committed to another agent's intentions. This also means that i ought to see to it that the intention of j is achieved. Using deontic logic, this can be specified as follows (see chapter 4 for an explanation of the deontic operator O_i):

S-Comm(i, j, ϕ) → O_i ϕ,

which means that if i has a social commitment towards j to achieve ϕ, then i ought to see to it that ϕ is achieved (becomes true).

Committing to a group Social commitment is defined as the commitment of one agent to another. However, instead of being committed to another agent, an agent can be committed to a group. In an organization, social commitment can then be used to describe exactly what an agent is obliged to do because of its role(s). It is then possible to define the relation between a role and the obligations that an agent playing this role should commit to.

Furthermore, because of the definition of an honest agent, if agent i is socially committed to a group, it means that i will have the goal of achieving the intentions of the group. It is therefore safe to assume that by delegating a role to an agent, we ensure that this agent will always fulfill (or at least attempt to fulfill) the obligations of that role.

Overall, S-Commitment can be used to make assumptions about how an agent will behave in an environment in which it is part of an organization, having certain roles. By assuming that the agent is honest, we actually constrain the agent to behave in certain ways, ensuring that our system works as intended.

3.4 Designing a Multi-Agent System

In software engineering, there are many methodologies for modeling and developing complex systems. However, these methodologies are mostly concerned with object-oriented systems, making them unsuitable for designing a multi-agent (or agent-oriented) system (as briefly mentioned in section 3.1).

Instead, quite a few methodologies for agent-oriented analysis and design have emerged. These include Gaia [33], which focuses on the organization of a system²; Prometheus [24], which emphasizes three well-defined main stages for identifying the functionality of the system, the agents and their capabilities; and Tropos [6], which uses an iterative approach to refine a model of the system.

When using a methodology, one should not follow it strictly, but instead use it as a guideline to avoid putting unnecessary constraints on the design (as discussed in [24]). By adopting the relevant concepts of each methodology and ignoring those that are irrelevant for the system of this project, the methodologies should help identify the key parts of the system rather quickly.

I will not go further into detail with the methodologies here; the relevant concepts of the methodologies used in this project will be introduced when designing the system, to avoid explaining the irrelevant parts.

3.5 Applications

I will now briefly discuss some of the possible uses of multi-agent systems in the real world. According to [32], two main groups of applications of agent systems exist:

Distributed systems, where the agents of the system solve problems by distributing the workload among them. The focus is thus on multi-agent systems.

Personal software assistants, which are agents made to assist a user in the use of an application. Here the focus is therefore on individual agents.

3.5.1 Distributed systems

One example of a multi-agent system that many people will have encountered at some point (possibly without knowing it) is MASSIVE (Multiple Agent Simulation System in Virtual Environment). It is a software product able to create millions of agents that act as individuals. Through the use of fuzzy logic, they are able to respond to their individual surroundings. It has been used in many films, such as Lord of the Rings and Avatar, for creating large battle scenes in which most or all participants are computer-generated intelligent agents³.

²A static organization, however; the OCMAS I am considering will consist of a dynamic organization.

³Massive Software: http://www.massivesoftware.com/


Another distributed system is within the area of distributed sensing. Here the idea is to let a system of agents manage a network of distributed sensors. Since the sensors may provide conflicting and partial information, it is up to the agents to cooperate in the gathering of information, for example by letting one agent use the information about a car passing another agent to predict when it will enter this agent's region.

3.5.2 Personal software assistants

A classical personal software assistant is an agent in the electronic commerce business responsible for doing comparisons of products. When a customer wants to buy a specific CD, the agent pursues the goal of finding the store which results in the best deal. One of the problems in such a system is how to compare products. This may not be a problem for goods such as CDs, DVDs and books, but when considering used cars and houses, other factors than price will be relevant. Therefore such an agent should be able to make intelligent guesses about what the customer might want.

3.6 Concluding Remarks

In this chapter I have discussed how to let multiple intelligent agents work together in a multi-agent system. I have discussed how agents can communicate using the agent communication language KQML. Furthermore, I have discussed how the agents must decompose complex problems into simpler subproblems that can be solved by single agents. In this way, the agents can cooperatively solve complex problems.

Usually, multi-agent systems are agent-centered (ACMAS), but I have also discussed organization-centered multi-agent systems (OCMAS), in which the focus is on what the system is supposed to do, as opposed to ACMAS, where the focus is on how the agents are supposed to act. By letting agents socially commit to each other, it is possible to give agents roles in which they are obliged to act in a pre-defined way, and also to ensure that the agents actually follow the rules of the organization and do as they ought to do.


Chapter 4

Logic in Multi-Agent Systems

In this chapter I discuss logical systems that can be used to specify and reason about multi-agent systems. This will give a foundation for specifying the multi-agent systems to be implemented in part II. The focus is on epistemic modal logic, which is logic about knowledge. The reader is assumed to be familiar with classical propositional logic.

I briefly introduce the Kripke semantics and modal logic. Then I proceed into the domain of multi-agent systems by discussing epistemic logic, which is logic concerning knowledge and beliefs. I will also discuss deontic logic, which concerns obligations and permissions, and can be used to specify the obligations agents have towards their associated group(s).

4.1 Modal Logic

In classical propositional logic, one is able to express simple formulae such as "two plus two equals four" and "snow is white". First-order logic extends the propositional language with quantifiers and predicates, allowing for a much more expressive language. One is now able to express formulae such as "the father of Alice is the brother of Bob". However, it is not possible to express the mode of a sentence. For instance, in FOL it is possible to express the formula "Alice is happy", though we cannot express that she is known to be happy, or that she is obliged to be happy. Basically, this is what modal logic is able to do.

Modal logic is classical propositional logic with the addition of a modal operator, which allows us to express the mode of formulae. For instance, it is possible to express that "Alice will be happy" or that "Alice is known to be happy".

The basic modal language was first developed to deal with alethic modalities – modalities concerning necessity and possibility. However, other types of modal logics exist, such as temporal logic, for reasoning about time; epistemic logic, for reasoning about knowledge; and doxastic logic, for reasoning about beliefs [13].

Definition 4.1 (Basic modal language) Let AP be a set of atomic propositions. The set of well-formed formulae of modal logic is given by the following grammar:

ϕ ::= p | ¬ϕ | (ϕ ∧ ψ) | □ϕ | ♦ϕ

where p ∈ AP. ∎

We can define the other usual propositional operators as follows: (ϕ ∨ ψ) = ¬(¬ϕ ∧ ¬ψ) and (ϕ → ψ) = (¬ϕ ∨ ψ). (ϕ ↔ ψ) is a shorthand for (ϕ → ψ) ∧ (ψ → ϕ). Finally we use the special operator ⊤ = (p ∨ ¬p), which is always true, and its negation, ⊥ = ¬⊤, which is always false.

Note that the modal operators can be expressed by each other: ♦ϕ = ¬□¬ϕ and □ϕ = ¬♦¬ϕ. Usually, □ϕ is read as "ϕ is necessary", while ♦ϕ is read as "ϕ is possible".

Example 4.2 (Formulas of modal logic) The following are all well-formed modal logic formulae:

p ∧ (♦q → □p)
□(p ∨ q) → (□p ∨ □q)
♦♦p → ♦p

4.1.1 Semantics for modal logic

Semantics for the basic modal language was developed by Saul Kripke in the 1960s [22]. The basic idea is to interpret formulae over graph-like relational structures [13].

[Figure 4.1: The relational structure of Kripke semantics. (a) shows a Kripke frame with states s, t, u and v; (b) shows a Kripke model over that frame, with valuation {p} at s, {p, q} at t, {p, q} at u and {} at v.]

Definition 4.3 (Kripke frame) A Kripke frame is a pair 〈W, R〉, where W is a non-empty set of states (or possible worlds) and R is a binary accessibility relation on W (R ⊆ W × W) between possible worlds. ∎

Definition 4.4 (Kripke model) A Kripke model (or possible worlds model, relational structure) over a Kripke frame F is a pair M = 〈F, V〉, where V is a valuation function, assigning to every atomic proposition the set of states where it is true (V : AP → P(W)). ∎

Using these definitions it is possible to study the meaning of modal formulae.

Example 4.5 (Kripke frames and models) Figure 4.1(a) shows a Kripke frame. Using definition 4.3, we then have

W = {s, t, u, v}

and

R = {(s, t), (s, v), (v, t), (v, u), (u, s)}

Figure 4.1(b) shows a model over that frame. Using definition 4.4, we have that V(p) = {s, t, u} and V(q) = {t, u}. ∎

Now, given a model M = 〈W, R, V〉, we can define truth of a formula in that model as follows:


M, s |= p     iff s ∈ V(p)
M, s |= ¬ϕ    iff not M, s |= ϕ
M, s |= ϕ ∧ ψ iff M, s |= ϕ and M, s |= ψ
M, s |= □ϕ    iff ∀s′ ∈ W (sRs′ → M, s′ |= ϕ)
M, s |= ♦ϕ    iff ∃s′ ∈ W (sRs′ ∧ M, s′ |= ϕ)

Note that even though other boolean connectives, such as '→', are not defined here, they can still be used, since they are definable from the connectives in the basic modal language. Also note the analogy between '□' and the universal quantifier from first-order logic, and between '♦' and the existential quantifier.

Using the definitions of truth given above, it is now possible to compute the truth of modal formulae. If for a formula ϕ there exists a world w in a Kripke model M such that M, w |= ϕ, then ϕ is satisfied in (M, w). If for a formula ϕ there is a Kripke frame F such that F |= ϕ, then ϕ is valid in this frame. If a modal formula ϕ is valid in every Kripke frame, it is valid, and we write |= ϕ. The formula □(p → q) → (□p → □q) is valid in every Kripke frame and, as I discuss below, is actually an instance of the distribution axiom K of modal logic.

Example 4.6 (Truth of formulae) The following formula is satisfied in state s in the Kripke frame F shown in figure 4.1(a).

F, s |= p → ♦♦□p

The following formulae are satisfied in the Kripke model M shown in figure 4.1(b).

M, s |= p
M, t |= p ∧ q
M, v |= □(p ∧ q)

The following formula is valid in M:

M |= p ∨ □q
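To see why the last two formulae hold, one can unfold the truth definition directly, using the R and V from example 4.5: the successors of v are t and u, and both belong to V(p) and to V(q), so M, v |= □(p ∧ q); and since p holds in s, t and u, while □q holds in the remaining state v (both of its successors t and u are in V(q)), p ∨ □q holds in every state, i.e. M |= p ∨ □q.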

4.1.2 Axiomatic Systems

I now present the corresponding axiomatic system for formalizing modal logic. By adding a set of axioms to the propositional calculus we create systems of modal logic, in which we can reason about formulae. Below are a few of the well-known elementary axioms:

• N: If ϕ is a theorem, then □ϕ is a theorem.

• K: □(ϕ → ψ) → (□ϕ → □ψ)

• T: □ϕ → ϕ

• 4: □ϕ → □□ϕ

• 5: ♦ϕ → □♦ϕ

• D: □ϕ → ♦ϕ

We can combine these axioms into systems. An axiomatic system constrains the Kripke frames belonging to that system, as can be seen from the axioms a system consists of. As an example, consider a system consisting of N, K, T and 4. Such a system only accepts Kripke frames with certain properties: T describes Kripke frames with a reflexive accessibility relation, while 4 describes frames with a transitive accessibility relation. Therefore, frames of this system must be reflexive and transitive.

Axiomatic systems can be used to derive formulae, given a set of axioms and inference rules. Having an axiomatic system for a modal logic, such as epistemic logic, enables one not only to constrain the frames but also to derive formulae. In the case of epistemic logic we can therefore reason about knowledge.

4.2 Epistemic Logic

I briefly mentioned in the previous section that one type of modal logic deals with knowledge, namely epistemic logic. The word "epistemic" comes from the Greek word for knowledge. Basically, epistemic logic allows one to express what different agents know. This is done by reading the modal operator in a specific way: □ϕ means "the agent knows ϕ" and ♦ϕ means "ϕ is consistent with the knowledge of the agent". However, typically one writes 'K' instead of '□' and 'L' instead of '♦'.

As described in [30], epistemic logic is not much concerned with how to justify that something is knowledge:

"The focus of epistemic logic is on reasoning about knowledge, rather than the nature of knowledge" [30, p. 6].

The system S5 is generally the most popular and accepted system for epistemic modal logic. It consists of the axioms K, T (knowledge is truthful), 4 (positive introspection) and 5 (negative introspection). This means that the accessibility relations of epistemic logic are reflexive, transitive and symmetric.

Definition 4.7 (Epistemic language) Let AP be a set of atomic propositions and Ag be a set of agents. The set of well-formed formulae of epistemic logic is given by the following grammar:

ϕ ::= p | ¬ϕ | (ϕ ∧ ψ) | Kiϕ | Liϕ | EGϕ | CGϕ | DGϕ

where p ∈ AP, i ∈ Ag and G ⊆ Ag. ∎

The modal operators intuitively have the following meaning:

• Kiϕ means that agent i knows that ϕ is true.

• Liϕ means that ϕ is consistent with what agent i knows.

• EGϕ means that every agent in the group G knows that ϕ is true.

• CGϕ means that ϕ is common knowledge among the agents in G.

• DGψ means that ψ is distributed knowledge among the agents in G. This is possible if, for instance, Kiϕ and Kj(ϕ → ψ), where i, j ∈ G.

Since Ki and Li are precisely the modal operators described above, they are each other's duals (Liϕ = ¬Ki¬ϕ and vice versa). The three other modal operators operate on groups of agents.

Group knowledge Using the operator EG we are able to express knowledge in a group of agents, i.e. express that there are certain formulae that everybody in a group knows are true. This, however, does not necessarily mean that each agent in the group knows that everybody in the group knows that the formula is true (this is what common knowledge tells us).

The operator is definable from Ki. This can be seen in the following way. Let group G = {i}, i.e. a group with a single agent. Then we have Kiϕ → EGϕ. Now, let group G = {i, j}. Then it must be the case that (Kiϕ ∧ Kjϕ) → EGϕ, i.e. if both agents in the group know that ϕ is true, then everybody in the group knows this. This can be generalized to the following definition of EG:

EGϕ = ⋀i∈G Kiϕ

It is also possible to define its dual, ĒG, by ĒGϕ = ¬EG¬ϕ [30]. The definition of ĒG is then

ĒGϕ = ⋁i∈G Liϕ

i.e. the fact that ϕ is true is consistent with the knowledge of at least one agent in the group.

Common knowledge We say that there is common knowledge about ϕ in a group G whenever everybody in the group knows ϕ, everybody knows that everybody knows ϕ, everybody knows that everybody knows that everybody knows ϕ, and so on ad infinitum.

Having common knowledge is considered good since it allows an agent to reason about the knowledge of other agents in its group. That means that an agent is able to make assumptions on what another agent will do in a given situation and thereby act accordingly. If an agent is uncertain whether another agent knows something, it may not be able to make these assumptions.

We can define the common knowledge operator, CG, in terms of the operator for group knowledge, EG, as follows:

CGϕ = ⋀n≥0 EGⁿϕ

where EGⁿ is an iteration of n EG operators, i.e. EG³ϕ = EGEGEGϕ.

Distributed knowledge The final operator in epistemic logic is the operator for distributed knowledge. Distributed knowledge can be seen as implicit knowledge of a group, in the sense that the knowledge only becomes available if the members of the group make all their knowledge explicit.


4.2.1 Semantics

Kripke semantics is also used for epistemic logic, with the difference that we now have a set of operators for each agent and for each group. This means that the accessibility relation must be changed to conform with this.

Definition 4.8 Let AP be a set of atomic propositions and Ag a set of agents. A Kripke model for epistemic logic is M = 〈W, RAg, V〉, where W is the set of possible worlds, RAg contains an accessibility relation Ra for each a ∈ Ag, and V is the valuation function mapping atomic propositions to the set of states where they are true. ∎

Definition 4.8 tells us that we now have an accessibility relation for each agent rather than just one accessibility relation. This means that while we might have a relation sRit, we might also have tRju, i.e. agent i can reach world t from world s, while agent j is able to reach world u from world t. Note that because epistemic logic uses the axiomatic system S5, we always have sRis; however, this is usually omitted in visual representations. Also, if sRit is the case, then tRis is also the case (in figures, the arrowheads are usually omitted as well).

Intuitively, the accessibility relation of an agent defines worlds that, from the agent's point of view, are indistinguishable. This means that if there is an i-relation from world s to world t, then agent i is not able to distinguish between these worlds. To put it another way, the agent will not know whether the actual world is world s or world t, only that it is one of them.

4.2.2 The muddy children

One frequently used example is the muddy children puzzle [11], which shows us exactly what it means that worlds are indistinguishable, and also gives us an idea about how to reason in epistemic logic.

Example 4.9 (The muddy children) A group of children have been playing outside and are called back into the house by their father. The children gather round him. As one may imagine, some of them have become dirty from the play and in particular: they may have mud on their forehead. Children can only see whether other children are muddy, and not if there is any mud on their own forehead. All this is commonly known, and the children are, obviously, perfect logicians. Father now says: "At least one of you has mud on his or her forehead." And then: "Will those who know whether they are muddy please step forward." If nobody steps forward, father keeps repeating the request. [30, p. 93] ∎


[Figure 4.2: The Muddy Children puzzle. (a) The initial model: the eight possible worlds CCC, CCM, CMC, CMM, MCC, MCM, MMC and MMM, connected by indistinguishability relations for children 1, 2 and 3. (b) The revised model after the father asks the children the first time, containing only the worlds MMC, CMM, MCM and MMM.]

I will now show that, indeed, by repeating the request, the children will eventually know whether they have mud on their foreheads, and they will therefore step forward. Consider 3 children, 1, 2 and 3, where children 1 and 3 have muddy foreheads, while child 2 has a clean forehead. The environment initially consists of 8 possible worlds. I denote a world by the configuration of that world: CMM means that child 1 has a clean forehead, while the two others have muddy foreheads. Letting mi express that child i is muddy, we can state the initial knowledge as follows:

K1m3 ∧ K2(m1 ∧ m3) ∧ K3m1
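Note that at this point no child knows the state of its own forehead; for child 1, for instance, we have ¬K1m1 ∧ ¬K1¬m1.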

Figure 4.2(a) shows the initial environment. When the children learn that at least one has a muddy forehead, the world CCC is no longer possible and can be removed from the environment:

E{1,2,3}(m1 ∨ m2 ∨ m3)

Now the father asks those who know whether they have a muddy forehead to step forward. Let us look at the knowledge of each child, to decide whether they know.

• Child 1: From his perspective there are two possibilities: CCM and MCM (since we have that K1m3). However, since child 3 did not step forward, child 1 can infer that CCM is not a possible world after all¹.

• Child 2: He too has two possible worlds: MMM and MCM. Nothing in this step changes this, since he could see two muddy foreheads.

• Child 3: His possible worlds are MCC and MCM. Analogously to child 1, he can infer that the world MCC is not possible, since in that case child 1 would have stepped forward.

Figure 4.2(b) shows the revised model. When the father repeats his question, children 1 and 3 will know that the real world is MCM. The reason for this is that from the perspectives of child 1 and child 3, the possible world MCM is now distinguishable from every other possible world. Therefore, they can infer that the real world must be exactly this world, and they can step forward. In general, with k muddy children, the muddy children are able to step forward once the father has asked k times.

The example shows that while initially it is impossible to answer the question truthfully, the epistemic Kripke model enables us to reason about the knowledge in this scenario.

¹They all know that at least one child has a muddy forehead; therefore, if child 3 could not see any other muddy foreheads, he would know that his own head was muddy.

4.2.3 Truth of formulae

I have briefly described the Kripke model for epistemic logic and have shown how the indistinguishability relations for the agents can be used to reason about knowledge in an environment containing several agents. We can now formally define truth of formulae in epistemic logic, enabling us to reason about multi-agent systems. Given an epistemic Kripke model M = 〈W, RAg, V〉, truth of a formula is defined recursively as follows:

M, s |= p     iff s ∈ V(p)
M, s |= ¬ϕ    iff not M, s |= ϕ
M, s |= ϕ ∧ ψ iff M, s |= ϕ and M, s |= ψ
M, s |= Kiϕ   iff ∀s′ ∈ W (sRis′ → M, s′ |= ϕ)
M, s |= Liϕ   iff ∃s′ ∈ W (sRis′ ∧ M, s′ |= ϕ)

Truth of formulae concerning groups requires us to define relations for each of the group operators.

• REG = ⋃a∈G Ra

• RDG = ⋂a∈G Ra

• RCG is the transitive closure of REG (i.e. the smallest relation containing REG such that for all states x, y and z, if both (x, y) and (y, z) are in the relation, then so is (x, z)).

[Figure 4.3: Kripke models of two epistemic scenarios. (a) Group knowledge: four worlds s, t, u and v, all satisfying p and pairwise indistinguishable for agents 1 and 2, so that E{1,2}p holds. (b) Distributed knowledge: the worlds s {p}, t {p, p → q} and u {p → q}, with a 1-link between s and t and a 2-link between t and u, so that D{1,2}q holds.]

We are now able to define truth of formulae concerning groups:

M, s |= EGϕ iff ∀s′ ∈ W (sREGs′ → M, s′ |= ϕ)
M, s |= DGϕ iff ∀s′ ∈ W (sRDGs′ → M, s′ |= ϕ)
M, s |= CGϕ iff ∀s′ ∈ W (sRCGs′ → M, s′ |= ϕ)

Figure 4.3 shows different scenarios where we are able to create formulae that express something about the knowledge of the group in that scenario. In figure 4.3(a), the agents 1 and 2 see four worlds which are indistinguishable. In each world, the atomic proposition p is true. Therefore, in that scenario, everybody will know that p is true, i.e. E{1,2}p. Figure 4.3(b) shows a scenario in which we have K1p and K2(p → q). In this case we then have that D{1,2}q, since we can infer q from p ∧ (p → q).
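Unfolding the definitions in figure 4.3(b): from world t, agent 1 considers s and t possible, while agent 2 considers t and u possible, so the intersection RD{1,2} relates t only to itself. Since both p and p → q hold in t, q must hold there as well, and hence M, t |= D{1,2}q.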

4.2.4 Dynamic Epistemic Logic

Dynamic epistemic logic deals with change of knowledge, more specifically what happens when an agent's knowledge changes. This change can happen for several reasons, but dynamic epistemic logic is mostly concerned with change of knowledge in terms of groups. When, for instance, a public announcement is made, the people hearing this announcement will change their knowledge according to it.

I have briefly studied the dynamic aspect of epistemic logic using [30]. One of the key parts of dynamic epistemic logic is the addition of epistemic actions. These are actions which, when performed, will lead the system into a different state, analogously to our understanding of multi-agent systems (chapter 3).

Basically, the language introduces yet another construct in the epistemic language:

[α]ψ,

where α is an epistemic action. The construct is to be read "after the action α is performed, ψ is true". Furthermore, it is possible to specify actions of different types: LBβ², meaning that the group B learns that β holds; (α ; α′), sequential execution (i.e. first α is performed and then α′); etc.

I will not go into further details of the semantics of dynamic epistemic logic; in the specification of multi-agent systems of the type used in this project, there is no need to explicitly deal with this kind of change in knowledge. The reason for this is that while it may be useful to know what happens in the epistemic model when an agent performs an action, being able to reason about it is out of the scope of this project. Instead, the focus is on specifying certain scenarios of the multi-agent system using epistemic logic, and in this way being able to decide which actions to perform in which states.

²Note that I use the L-operator to specify ¬K¬. [30] refers to that operator as K̂, which is why they can use L in this case.

4.2.5 Remarks on reasoning about beliefs

A logic similar to epistemic logic concerns reasoning about beliefs: doxastic logic. The difference between knowing ϕ and believing ϕ is that while ϕ is certainly true in the first case, it may not be so in the latter. More precisely: agents may have beliefs about the environment which are not true. The agent will, however, maintain such a belief until it is no longer reasonable to do so (for instance if the belief leads to a contradiction). One usually uses B instead of K as the modal operator.

This difference means that the knowledge axiom is no longer valid (Biϕ → ϕ is not valid). Instead, one introduces the consistency axiom, D, which states that the agent will not believe a contradiction, or falsum [30]:

¬Bi⊥

This gives us the axiomatic system KD45. In this system, when the agent believes something that is not true, the accessibility relation becomes non-reflexive, since the world the agent believes it is in does not conform with the world it is actually in.

Even though the agents in the system built in this report are going to have beliefs rather than knowledge, I will be specifying the system using epistemic logic instead of doxastic logic. The reason is that for my purpose, it suffices to assume that an agent has knowledge instead of beliefs.

4.3 Deontic Logic

The word deontic comes from the Greek word for "duty". Deontic logic can express what an agent ought to do and what it has permission to do. The modal operator in this case has yet another interpretation: □ϕ means that "ϕ is obligatory for the agent", while ♦ϕ means that "ϕ is permissible for the agent". Typically, we write 'O' instead of '□' and 'P' instead of '♦'. The operators are inter-definable: Pϕ = ¬O¬ϕ.

The system of deontic logic is usually called "standard deontic logic", SDL or simply D. It is obtained by adding two axioms to classical propositional logic [16]:

O(ϕ → ψ) → (Oϕ → Oψ),

which is K with the modal operator substituted by the deontic operator. It states that if it ought to be that ϕ implies ψ, then if ϕ is obligatory, ψ is obligatory. The other axiom is

Oϕ → ¬O¬ϕ,

i.e. if ϕ is obligatory, then ϕ is permitted. The semantics of deontic logic is basically the Kripke semantics of modal logic introduced in the previous sections.

Analogously to epistemic logic, it is possible to express obligations about individual agents. For instance,

ϕ → Oiψ

means that if ϕ is the case, then ψ is obligatory for agent i.


Paradoxes of Deontic Logic Deontic logic is a quite simple logic, and it is therefore not possible to capture all normative reasoning. This leads to a quite comprehensive list of paradoxes that can be expressed in SDL (see [15, 23]). An example is Ross's Paradox:

Oϕ → O(ϕ ∨ ψ),

i.e. if ϕ is obligatory, then ϕ or ψ is obligatory. This can lead to unexpected interpretations. The classical example is: "If John ought to mail the letter, then John ought to mail the letter or burn it", something that is clearly not intended. The problem arises because of the propositional rule of introduction of disjunction: ϕ → (ϕ ∨ ψ). I will not go into further details with the paradoxes of deontic logic, since they are not directly relevant for interpreting, in natural language, the formulae used for specifying organizations in a multi-agent system.

4.4 Concluding Remarks

In this chapter I have introduced modal logic to give a foundation for the two logics needed for specifying agent-centered and organization-centered multi-agent systems: epistemic and deontic logic, respectively.

In chapter 3 I introduced both ACMAS and OCMAS, and this chapter has taken a more formal approach to specifying multi-agent systems. The idea is to use epistemic and deontic logic to specify the systems and then, using the interpreter Jason and the organizational model Moise+, implement these specifications.

While I am not going to reason about knowledge and obligations in this project, the semantics were provided to give a basic understanding of how to interpret formulae in these logics. This should aid when actually specifying the systems.

Chapter 5

Jason

This chapter introduces Jason, "a Java-based interpreter for an extended version of AgentSpeak" (http://jason.sourceforge.net/). I provide an overview of the interpreter by introducing how to program multi-agent systems using it; however, I will not go into details with all parts of the system. The overview should give a foundation for building simple systems using Jason, but also enable the reader to understand the more advanced concepts such as internal actions and the agent architecture. A thorough description of Jason is found in [3].

5.1 AgentSpeak

The language of Jason, AgentSpeak, is a Prolog-like logic programming language. AgentSpeak allows the developer to build a set of plans an agent is able to follow in certain situations, i.e. the idea is to create a plan library for the agent. A plan in AgentSpeak is of the form

+triggering event : context <- body,

and consists of three parts:


Triggering Event: The triggering event describes the situations in which a plan may be applicable for execution. Most commonly, such an event will be either an attempt to pursue a goal (written +!goal) or the addition of a specific percept to the database (i.e. a reaction to a percept, written +percept). It is also possible, for instance, to create a triggering event which specifies what to do when the handling of some triggering event has failed.

Context: The context is, roughly speaking, similar to the antecedent of an implication (even though a plan is written context <- body, it can be interpreted as context → body in propositional logic; a plan can therefore be read "when event happens, if context then body"). This can be used to specify that even though an event has happened which has triggered this plan, it is only applicable if the context can be unified with the knowledge from the agent's database. This makes it possible to implement several plans for one triggering event, allowing the agent to act according to the situation at hand.

Body: The body is related to the context in the sense that it can be considered the consequent of that same implication. The most important part of a body is the possibility to perform actions. It is also possible to create mental notes (i.e. percepts added to the database by the agent itself) of some state of affairs, or to begin pursuing a new goal.

Roughly speaking, if an event matches a trigger, the context is matched with the current state of the agent. If this state is matched, the body is executed. Otherwise the engine continues to match contexts of plans with the same trigger. If no plan is applicable, the event fails. As mentioned, this can be caught by yet another triggering event, of the form '-triggering event'.
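As a small, hedged sketch of such a failure-handling plan (using the cleaning goal from the example below; the wait-and-retry strategy is merely illustrative, not prescribed by Jason):

-!cleaning : true
   <- .wait(500); // pause briefly using the standard internal action .wait
      !cleaning.  // then re-post the goal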

The fact that it is a logic programming language allows one to easily transfer specifications written in logic formulas of a multi-agent system to an implementation written in Jason. For instance, the vacuum cleaner example which I mentioned in chapter 2 can easily be transferred to Jason:

+!cleaning : in(X,Y) & dirt(X,Y) <- do(suck).

The plan is triggered by the goal !cleaning, so if the vacuum cleaner is in a "cleaning state", this triggering event would be applicable. The context specifies that this plan is relevant if the agent currently is somewhere in the environment which is dirty. If the context can be unified with data from the database of the agent, it will process the contents of the body, which in this case means that it will perform the action do(suck). As mentioned, it is possible to have several plans for the same triggering event if those plans have different contexts:

+!cleaning : in(X,Y) & dirt(X+1,Y) <- do(right).

This plan will then be applicable if the agent has perceived dirt in the area to the right of its current area. In that case, it will perform the action do(right).

5.2 Communication

Agents in Jason are able to communicate using the agent communication language KQML, which was introduced in chapter 3. This allows an agent to ask other agents for help to achieve a goal, to inform them of changes in the environment, or simply to ask for information from their knowledge base.

The messaging is implemented as internal actions, but it works naturally together with the AgentSpeak language, making communication very easy to use. Sending a message to the agent 'robot', telling it to achieve the goal of cleaning the floor, can then be done with the internal action

.send(robot, achieve, cleaning).

This gives the robot the goal of cleaning. Note that this does not make it mandatory for the agent to complete this goal. It may have a plan to ignore this goal altogether if it has been requested by a specific agent:

+!cleaning[source(agent1)].

Notice that the plan omits context and body. This is a shorthand way of specifying plans whose context or body is true (i.e. empty).
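Other performatives work in the same way. A hedged sketch (the goal name share_and_ask is illustrative, and the agent robot and the dirt predicate follow the examples above; the exact reply depends on the receiving agent's belief base):

+!share_and_ask : true
   <- .send(robot, tell, dirt(3,4));           // robot acquires the belief dirt(3,4), annotated with its source
      .send(robot, askOne, dirt(X,Y), Reply);  // Reply unifies with one answer from robot's belief base
      .print("robot answered: ", Reply).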

5.3 Reasoning

In chapter 2 I introduced the deductive and the practical reasoning agent. Jason implements the practical reasoning agent and an algorithm analogous to the agent control loop: the reasoning cycle. The reasoning cycle consists of 10 main steps, including perceiving the environment, updating the belief base and receiving communication from other agents.

The interesting thing about the reasoning cycle is that some of the methods used can be changed to allow custom behavior of the agents. One usage is to customize how the agent updates its belief base. This could be in a situation where the agent continuously receives percepts about the light intensity in the environment, light(_). If the agent at any time only needs exactly one such percept (and not "old", possibly outdated percepts), the belief base can be customized to ensure that there only ever is one such percept.
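At the AgentSpeak level a similar effect can be approximated for mental notes using Jason's belief-update operator -+, which removes the old belief before adding the new one (a minimal sketch; the predicate current_light is illustrative, and the exact removal semantics of -+ should be checked in [3]):

+light(I) : true
   <- -+current_light(I). // keep exactly one current_light/1 mental note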

In the following sections I further describe some of the customization that is possible in Jason.

5.4 Agent Architecture

When using Jason to build a multi-agent system, the underlying system functions in a specific way with regards to how the agent sees and interacts with the environment, receives messages from other agents and so on. However, this may be inconvenient in some scenarios, where the user needs to ensure that the agent has a local representation of the environment, or that certain messages are handled in a specific way which may not be easily done using AgentSpeak.

Jason solves this problem by allowing the user to implement an agent architecture for the agents. The agent architecture is a Java class in which it is possible to override the methods used for receiving messages from other agents, perceiving the environment, performing actions and so on. While this can be used to implement more advanced, yet still local, multi-agent systems, it also opens the possibility for systems with a more distributed approach.

One example of this is the annual Multi-Agent Programming Contest [2], in which the contestants implement a team of agents which will compete against other teams in a specific multi-agent scenario. The teams connect to a game server which uses a specific protocol. The implemented multi-agent system must therefore be able to use this protocol for two purposes: (1) to allow the agents to perceive the environment and (2) to forward the actions performed by the agents to the environment. Jason makes it very easy to build a system of agents which not only employs the agent-oriented programming approach, but also is able to log onto the servers and use the protocol developed for the contest.


5.5 Environments

Since the environment in which the agents are situated will usually be very dynamic, Jason allows the user to create his own environment. This makes it possible to define exactly how the environment is designed and also how the agents can interact with it. More specifically, it is possible to define when the agent is supposed to perceive, what it perceives, and how the actions it attempts to perform affect the environment.

This altogether makes it possible to implement environments that simulate real-world environments and in that way simulate how a multi-agent system will work in a real environment.

An environment is implemented in Java by overriding certain methods in which one is able to specify how the environment should work. It is possible to define exactly how actions affect the environment and how the environment is perceived. It is also possible to develop a more strict environment in which the actions of the agents are synchronized. This makes it possible to implement games where every agent is able to perform one action at each time-step. Such environments are called time-stepped environments.

Executing actions Recall that actions are what an agent attempts to perform in an environment in order to change it. Actions may, however, fail. In a Jason environment it is possible to define exactly which actions are available³, and also their probability of succeeding.

The user is therefore able to define certain situations in which some actions may never succeed, or define that certain actions always have a 90% probability of success. It is also possible to define that certain agents are bound to fail more often than others. This makes it possible to define both deterministic and non-deterministic environments.

Updating percepts When an agent is situated in an environment, the agent is able to perceive information from this environment using the sensors available to it. What it is able to perceive is partly determined by the sensors (an agent without a sound sensor will not be able to perceive sounds) and partly determined by the environment (how far the sight range is, and so on).

³This need not be a finite set of actions (yielding a discrete environment), however. It is possible, for instance, to let all real numbers be actions, creating a continuous environment.


In Jason it is possible to define how agents perceive the environment by implementing the method updateAgsPercept. This method will be called by the Jason engine whenever the agents should perceive (determined by the reasoning cycle). This makes it possible to create both accessible and inaccessible environments.

Time-stepped environments In a time-stepped environment, there is furthermore the possibility of defining behaviors of the environment that should happen after all agents have performed their actions. It is also possible to define dynamic behavior of the environment in an asynchronous environment; however, in the time-stepped environment these behaviors can be synchronized with the agents.

Note that it is possible to define all the types of environments described in chapter 2 using Jason.

5.6 Internal Actions

Internal actions allow the user to implement features in an object-oriented way, namely by using Java. This makes it possible to implement and use features which are not easily implemented in a logic programming language. This could be advanced path-finding algorithms or even simple methods that use information otherwise unavailable, such as from the internet.

Since AgentSpeak is a logic programming language, unification is an important principle. This principle has been transferred to the internal actions as well. Therefore, the actions are not comparable to ordinary methods of imperative languages, where a set of parameters is specified and a value is returned. Instead, a number of parameters is required, but not all of them need to be instantiated with values; it is possible to unify some of the variables with the results. A typical example of this is an internal action calculating the shortest path from the agent's current position to some arbitrary position:

+!perform_next_move : in(X1,Y1) &
                      target(X2,Y2) &
                      .path(X1,Y1,X2,Y2,Move)
                   <- do(Move).

Notice that the internal action is part of the context. This means that if there is no path from (X1, Y1) to (X2, Y2) (i.e. Move cannot be unified with a move), then the Jason engine will attempt to find another plan with the same triggering event.

5.7 Concluding Remarks

Jason makes it possible to implement complex multi-agent systems because of its extensibility. The combination of customized agent architectures and environments enables the user to simulate realistic scenarios using software agents. It was shown that all the types of environments described in chapter 2 can be implemented in Jason.

Furthermore, since we can customize the agent architecture, it is possible to implement a local representation of the environment and to execute internal actions which exploit this local representation, for instance to compute the shortest path or to determine possible behavior of enemies.

The use of a logic programming language potentially makes Jason less intuitive to work with, since imperative programming languages are more popular. However, it was shown that the use of a logic programming language makes it possible to transfer deductive agents more or less directly to Jason, making it a quite versatile system.


Chapter 6

Moise+

In this chapter I introduce the Moise+ organizational model and show how to create a structural, functional and deontic specification of an organization. Furthermore, I will describe the so-called middleware, S-Moise+, which is a piece of software that attempts to fill the gap between an organization and its agents. Finally, I will describe the framework I will be using: J-Moise+, which uses S-Moise+ to create a combination of the Moise+ organizational model and the AgentSpeak interpreter, Jason.

Moise+ is an organizational model for multi-agent systems which makes it possible to specify the organization of a MAS structurally, functionally and deontically. The model takes an organization-centered approach, meaning that an organization will exist a priori (created at design-time) and the agents ought to follow it [18].

Figure 6.1 shows the behavior space of the agents in a MAS, where P is the set of all behaviors drawing the MAS's global purposes and E is the set of all possible behaviors in the environment. By adding an organizational structure (as briefly described in chapter 3) we can further constrain the behavior of the agents (represented by the set S in the figure). These constraints come from the organizational structure by letting agents play roles with certain limitations.

The set of possible behaviors is then (E ∩ S), which is now closer to P than the set E.


Figure 6.1: The organization effects on a MAS (from [20]).

To get the agents to always follow the global purposes of the MAS, they need to map behaviors from points within ((E ∩ S) − P) to a point in P [18]. The set of functions F can be used for exactly this purpose. It contains a set of validated global plans which, if the agents use them, will always follow the global purposes of the MAS.

An attempt to combine the organizational structure (i.e. the roles and groups) with the functional dimension (global plans) is Moise (Model of Organization for multI-agent SystEms) [14]. Here we have three levels: (1) the individual level, describing the roles of the agents in the system, (2) the social level, which describes how agents are acquainted and can communicate, and (3) the group level, describing how the agents are associated in groups in which communication can be further constrained (e.g. by letting agents only communicate with other agents in their group).

The Moise+ organizational model considers the structural and functional dimensions as almost independent (which figure 6.1 also shows), but there is a link between them, which the deontic dimension is used to establish. I use the notions of an organizational specification (OS) and organizational entity (OE) as described in chapter 3. In Moise+, the OS is formed by a structural specification (SS), a functional specification (FS) and a deontic specification (DS).


6.1 Structural Specification

Moise+ uses the concepts of roles, role relations and groups in the structural specification of an organization.

Role A role ρ is played by an agent. The agent enters a group to play a role in that group. The role is a set of constraints the agent ought to follow. There are two groups of constraints: constraints in relation to other roles in a group (compatibility) and deontic relations (in terms of what plans the agent ought to follow) [18]. We simplify the specification by allowing roles to inherit from other roles.

Definition 6.1 (Role inheritance) A role ρ′ inherits a role ρ (denoted ρ < ρ′), where ρ ≠ ρ′, if ρ′ receives some properties from ρ; ρ′ is a sub-role, or specialization, of ρ. ∎

The set of all roles is denoted RSS. Furthermore, it is possible to define an abstract role, ρabs, i.e. a role that agents cannot directly play. Such roles are instead played by playing roles that inherit them, i.e. ρabs < ρ. The set of all abstract roles is denoted Rabs, and we have that Rabs ⊂ RSS.

Role relations The definition of a role as given above does not directly constrain the agent's behavior. Instead we can constrain an agent by specifying that it is allowed to communicate only with agents playing specific roles. To do this we create relations between roles which specify how an agent is allowed to communicate with these roles.

Definition 6.2 (Link) Given roles ρs, ρd and a link type t, a link is given by the predicate

link(ρs, ρd, t).

ρs is the link source, ρd is the link destination, and the link type t defines what the source is allowed to do with the destination. We have t ∈ {ACQ, COM, AUT}, where ACQ defines an acquaintance link, COM a communication link and AUT an authority link. ∎

The definition of a link names three link types. These have the following meaning [14]: in an ACQ-link, the agents playing ρs know the agents playing ρd; in a COM-link, the agents playing ρs are allowed to communicate with agents playing ρd; in an AUT-link, the agents playing ρs are allowed to control agents playing ρd. Furthermore, we have the following intuitive understanding of the links:

link(ρs, ρd, AUT) → link(ρs, ρd, COM) → link(ρs, ρd, ACQ),

i.e. if an agent has authority over another agent, it can send messages to that agent, and if an agent can communicate with another agent, it knows that agent. An inherited role will have all the links of the super-role:

(link(ρs, ρd, t) ∧ ρs < ρ′s) → link(ρ′s, ρd, t).

The inheritance is analogous for destination roles. If two roles are not related by a link, the agents of those roles will not be able to perform any kind of interaction.
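As a hypothetical illustration (these roles are not taken from any specification in this project), declaring

link(coach, player, AUT)

would, by the implication above, also give link(coach, player, COM) and link(coach, player, ACQ): a coach may command the players, may message them, and knows them.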

Role compatibility The links constrain what an agent playing a role is allowed to do on a social level. It must also be possible to constrain which roles an agent is allowed to play, depending on what roles it is already playing.

Definition 6.3 (Compatibility constraint) Given roles ρa, ρb we write

ρa ⋈ ρb

exactly when ρa is compatible with ρb, i.e. when an agent is able to play both ρa and ρb at the same time [18]. Furthermore we have that ρa ⋈ ρa and (ρa ⋈ ρb ∧ ρb ⋈ ρc) → (ρa ⋈ ρc), i.e. it is a reflexive and transitive relation. ∎

If it is not specified that two roles are compatible, they are not. If a role has a set of compatible roles, then an inherited role of that role will be compatible with the same roles:

(ρa ⋈ ρb ∧ ρa ≠ ρb ∧ ρa < ρ′a) → ρ′a ⋈ ρb.

Groups We still need to define where roles are played. We know that an agent accepts to play a role when it enters a group, but we need a way to define which roles are available in a group, how they are related, and whether there are any constraints on how many agents can be playing a specific role at the same time.

We distinguish between a group, which is an instantiated group from the OE, and a group specification, i.e. a group specified in the OS [18].

Definition 6.4 (Group specification) A group specification can be represented by the tuple

gspec = 〈R, SG, Lintra, Linter, Cintra, Cinter, np, ng〉,

where

• R is the set of roles available in the group.

• SG is the set of possible sub-groups. If a group is not included in any other group specification, it is called a root group specification.

• Lintra is the set of internal links.

• Linter is the set of external links.

• Cintra is the set of internal compatibilities.

• Cinter is the set of external compatibilities.

• np specifies the role cardinality of the group, i.e. the number of agents that must play each role. For example, np(ρa) = (1, 3) means that the role ρa must be played by at least one and at most three agents. The default value is (0, ∞).

• ng specifies the sub-group cardinality, which is analogous to np.

An internal link between two roles makes it possible for agents in these roles to communicate within the same group. For an external link, the agents do not need to be in the same group to be able to communicate [18].

We can now talk about well-formed groups. The cardinality of roles and sub-groups must be fulfilled for all roles and sub-groups for a group to be well-formed. That is, if we have ng(sg1) = (1, 1), then the group must have exactly one instantiated sub-group from the specification sg1.

Overall, the structural specification defines the structure of the organization: its roles, their relations and the groups. It must consist of a root group specification with a set of roles, (optional) sub-groups and the inheritance relation on R.
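As a hypothetical instance of definition 6.4 (all names are illustrative), a small root group with one leader and up to three members, where the leader has authority over the members and no agent may play both roles, could be specified as

gspec = 〈{leader, member}, ∅, {link(leader, member, AUT)}, ∅, ∅, ∅, np, ng〉

with np(leader) = (1, 1) and np(member) = (0, 3); an instantiated group is then well-formed only while exactly one agent plays leader.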

6.2 Functional Specification

In the functional specification it is possible to define a set of global plans which the agents can follow to achieve certain goals that may lead to the global purpose of the MAS.

[Figure 6.2: Subtrees for the operators used for global plans in the functional specification of Moise+ (after [18]): (a) sequence, (b) choice, (c) parallelism; each panel shows a root goal g0 with sub-goals g1 and g2.]

Basically, one builds a goal decomposition tree, known as a Social Scheme (SCH), where the root is the goal of the SCH and each node is a sub-goal that can be delegated to different agents. In [18], three operators are defined for decomposing a goal into sub-goals:

Sequence Defines a subtree of height 2 in which the root goal is achieved when the leaf goals are achieved in order from left to right. The operator used is ";". For example: g0 = g1; g2, where g0 is achieved when first g1 and then g2 is achieved (see figure 6.2(a)).

Choice Defines a subtree of height 2 in which the root goal is achieved when one of the leaf goals is achieved. The operator used is "|". For example: g0 = g1 | g2, where g0 is achieved when either g1 or g2 is achieved (see figure 6.2(b)).

Parallelism Defines a subtree of height 2 in which the root goal is achieved when all leaf goals are achieved. It differs from a sequence in that the leaf goals can be achieved in parallel. The operator used is "‖". For example: g0 = g1 ‖ g2, where g0 is achieved when g1 and g2 are achieved in parallel (see figure 6.2(c)).

To take into account the fact that actions, and therefore plans, may not always succeed (depending on the environment), it is possible to specify how certain it is that a goal is achieved if its sub-goals are achieved. This is specified by a subscript:

g0 =0.75 g1 | (g2; g3),

i.e. there is a 75% chance that g0 is achieved if one of the sub-goals g1 or g2; g3 is achieved.

Finally, a SCH also specifies missions, which basically are sets of goals that agents commit to by following the mission. For instance, we can create a mission

m1 ↦ {g1, g2},

meaning that any agent committing itself to m1 is automatically committed to the goals g1 and g2. If an agent is committed to several missions, it is possible to specify a preference ordering, so that the agent will prioritize some missions over others [18]. It is now possible to formally define what a Social Scheme consists of.

Definition 6.5 (Social Scheme) A Social Scheme is a tuple

SCH = 〈G, M, P, mo, nm〉

where G is the set of global goals, M is the set of missions, P is the set of plans that builds the goal decomposition tree, mo is a function which specifies the goals a mission consists of, and nm specifies the cardinality of the missions (analogous to the cardinality of roles and groups). ∎

A SCH therefore consists of the goals of the system and how to achieve these goals. We can now put constraints on the agents by letting them play certain roles. This makes sure that they are only associated with agents they need to be associated with. Furthermore, the agents can be constrained by the groups they are in, meaning that some communication can only be done with other group members. It is also possible to define global plans which the agents can follow by committing to missions.
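As a hypothetical worked instance (goal and mission names are illustrative), a scheme whose root goal requires a preparatory goal followed by two goals achieved in parallel could contain the plan

g0 = g1; (g2 ‖ g3)

with missions m1 ↦ {g1, g2} and m2 ↦ {g3}, so that an agent committing to m1 pursues g1 and then g2, while an agent committing to m2 only pursues g3.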

6.3 Deontic Specification

We now proceed by describing the deontic specification, which makes the relation between the structural and functional specifications explicit. Using the deontic specification we can constrain the agents further by specifying which missions an agent ought to follow, and which missions it is allowed to follow, when playing certain roles.

We write obl(ρ, m, tc) when agents playing role ρ are obliged to complete mission m under the time constraint tc. Analogously, we write per(ρ, m, tc) for permissions. Note that even though the syntax differs from the deontic logic in chapter 4, we interpret obligations and permissions in Moise+ the same way.

Permissions and obligations are related by the axiom defined in chapter 4 (Oϕ → ¬O¬ϕ):

obl(ρ, m, tc) → per(ρ, m, tc).



Figure 6.3: The components of S-Moise+ (from [19]).

This intuitively states that if it is obligatory for a role ρ to complete a mission m, then that role is permitted to complete this mission. Furthermore, as with links and compatibility constraints, the deontic relation is inherited by sub-roles:

(obl(ρ, m, tc) ∧ ρ < ρ′) → obl(ρ′, m, tc),

analogously for permissions.
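As a hypothetical worked instance (the role names are illustrative), specifying obl(leader, m1, tc) immediately gives per(leader, m1, tc) by the axiom above, and if leader < leader′ for some specialized role leader′, then obl(leader′, m1, tc) follows by inheritance.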

The deontic specification is then the set F ∩ S, i.e. the organizational functioning constrained by the organizational structure. This also means that an agent will actually prefer behaviors from the set of allowed behaviors (S) which are in the set F ∩ S, because it will be able to force other agents to commit to the same missions [18]. Note, however, that the plans of Moise+ are only the global plans. An agent may very well still have local plans and goals which it may commit to. However, these plans are not specified in Moise+.

6.4 S-Moise+

The Moise+ organizational model gives a foundation for defining and using an organization in multi-agent systems – in other words, for creating an OCMAS. However, the model itself is not directly associated with any multi-agent framework; the intention is that it should be usable with all kinds of multi-agent frameworks.

S-Moise+ is a software implementation of Moise+ which enables arbitrary multi-agent systems to follow an organizational structure [19].

Figure 6.3 gives an overview of the S-Moise+ components, which basically consist of the OrgManager agent and the OrgBox API.

Figure 6.3: The components of S-Moise+ (from [19]).


OrgManager: The OrgManager is an agent which is responsible for changing the organizational entity (OE). It receives requests from each OrgBox concerning changes in the state of the OE (such as role adoption, group creation and so on). The OrgManager will then change the OE accordingly if the change does not bring the system into an inconsistent state [19]. This is typically the case when an agent attempts to adopt a role ρ1 which is not compatible with one of its current roles ρ2, i.e. when ρ1 ⋈ ρ2 does not hold.

It must be noted that certain situations exist where it is acceptable to bring the system into an inconsistent state. This is the case when a new group is created in which certain constraints cannot be fulfilled immediately (such as minimum cardinality). In that case the group will be created, but will not be well-formed.

OrgBox: The OrgBox is an API used by the agents to access the organizational and communication layer. An agent uses its OrgBox to gain information about the current state of the OE. However, because an agent is constrained by its groups and roles, it will only receive information that agents playing those roles are allowed to know. Moreover, the OrgBox ensures that communication can only happen between agents that have established a communication link.

The OrgManager is implemented as an agent, which means that there is nothing restricting an agent from communicating directly with the OrgManager – or any other agents, for that matter. Therefore, it is required that the agents use the OrgBox API to ensure that the organizational constraints are not violated.

6.5 J-Moise+

J-Moise+ is, like S-Moise+, an implementation of the Moise+ organizational model. J-Moise+ is based on Jason, making it a perfect choice for implementing an OCMAS in this project. The basic principles of J-Moise+ are comparable to those of S-Moise+; it consists of both an OrgBox API and an OrgManager agent [21].

To use J-Moise+ in Jason, one must add the OrgManager agent to the project file (*.mas2j). This is done using the following code:

orgManager [osfile="os.xml", gui=yes]
           agentArchClass jmoise.OrgManager;


In the code, “os.xml” is the organizational model which the user creates. It defines the entire organizational structure.

With this code added, the system is ready to use J-Moise+. An agent uses the organization by communicating directly with the OrgManager (this differs a bit from S-Moise+, in which the OrgBox API is more explicitly present). An agent in Jason can communicate with the OrgManager in two ways: either by (1) sending KQML messages (using the .send(...) internal action) or (2) using internal actions defined in J-Moise+.

J-Moise+ exploits the fact that Jason is highly customizable. An agent in J-Moise+ should therefore use a customized agent architecture (specifically jmoise.OrgAgent). This architecture ensures that the agents perceive organizational events when they happen. This makes it possible to easily create plans for reacting to such events.
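
As a small sketch of such a plan (the group/2 belief form and the jmoise.adopt_role action name are assumptions, modeled on the jmoise.set_goal_state action used in chapter 9, and should be checked against the J-Moise+ documentation):

// When the creation of a 'team' group is perceived as an
// organizational event, adopt the explorer role in it.
+group(team, GrId)
    <- jmoise.adopt_role(explorer, GrId).

// The same request expressed as a KQML message to the OrgManager:
// +group(team, GrId)
//     <- .send(orgManager, achieve, adopt_role(explorer, GrId)).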

6.6 Concluding Remarks

The Moise+ organizational model makes it possible to create an organizational structure which can constrain the behavior of agents in a multi-agent system. I have shown how the system can be structured using roles and groups and how the overall purpose of a system can be defined functionally in terms of missions and goals. Finally, I have shown that by relating the structural and functional specifications through a deontic relation, the behavior of the agents is constrained by their roles and missions.

I have introduced two implementations of the model, S-Moise+ and J-Moise+, which use a Moise+ specification to constrain the behavior of agents in a system. The only requirement is that the agents only communicate through the Moise+ system using the OrgBox API. This ensures that links and role compatibilities are not violated.

Part II

Comparing ACMAS and OCMAS

Chapter 7

The Scenario

This chapter will introduce the scenario in which the two systems will be implemented. First I introduce the scenario, which is a simplified version of the game Bomberman. Then I describe how the environment is implemented in Jason. Finally I discuss the architecture of the agents.

7.1 Bomberman

According to Wikipedia, Bomberman is “a strategic, maze-based computer and video game”1. In the original game, Bomberman was a robot trying to escape from a factory. To do so, he had to avoid enemies by strategically placing bombs that would kill them:

Bomberman is a robot engaged in the production of bombs. Like his fellow robots, he had been put to work in an underground compound by evil forces. Bomberman found it to be an unbearably dreary existence. One day, he heard an encouraging rumor. According to the rumor, any robot that could escape the underground compound

1 http://en.wikipedia.org/wiki/Bomberman_(series)


and make it to the surface could become human. Bomberman leaped at the opportunity, but escape proved to be no small task. Alerted to Bomberman’s betrayal, large numbers of the enemy set out in pursuit. Bomberman can rely on bombs of his own production for his defense. Will he ever make it up to the surface? Once there, will he really become human?2

The original game consists of five key elements: bombs, boxes, solid obstacles, exitways and power-up panels. Whereas boxes are destructible, solid obstacles are not. This means that Bomberman will always be able to take cover behind solid obstacles. Most boxes must be destroyed, since they hide both power-ups and exitways. Exitways are what Bomberman must find to be able to complete a level. A power-up can be used to enhance Bomberman’s abilities and bombs. In the beginning his bombs are weak, but by using power-ups he will be able to drop several stronger bombs at a time.

7.1.1 Bomberman and Multi-Agent Systems

In a multi-agent context, the enemies could be considered a team of agents (i.e. a multi-agent system) with the general purpose of stopping Bomberman from escaping. However, to be able to more easily compare different multi-agent systems (namely, ACMAS and OCMAS), this kind of game will not work. The reason for this is that the game consists of only one bomberman, making it impossible to experiment with cooperative aspects of intelligent agents. Instead, I propose an altered version of Bomberman in which two teams attempt to eliminate each other:

Definition 7.1 (Team-based Bomberman) The multi-agent system is similar to Bomberman, however with teams fighting against each other. Each team consists of at least two “bombermen” (or agents). The teams are situated in a maze-like environment consisting of solid obstacles and boxes. An agent can place bombs which at some point will explode. An agent dies when it is hit by an explosion. Explosions will also destroy boxes. A team wins when it has successfully eliminated all players from the other team.

This version of Bomberman consists of some of the same key elements as the original game: a maze, destructible and indestructible obstacles, and bombs. This should allow the agents to employ the strategies intended for the game,

2 Taken from the original Bomberman manual (http://nesworld.com/manuals/bomb.txt, downloaded on February 22, 2010).


while at the same time competing in teams. This creates a new aspect of the game, since a group of agents is potentially able to trap enemies by placing bombs strategically.

Notice that the concepts of exitways and power-ups have been excluded in this version. Exitways have been removed, since the overall goal of “getting to the surface” is no longer relevant. Power-ups are not included to avoid making the overall system too complex: the intention is not to make a perfect implementation of Bomberman, but rather to compare and discuss two different approaches to implementing multi-agent systems. In this case I believe a simple, yet strategically challenging system will be adequate.

7.2 Environment

The environment for the team-based Bomberman is a time-stepped environment, which means that the actions of the agents are synchronized. This has been done to ensure that computational differences in the different systems will not result in advantages for simpler teams. I have defined that one step takes 500 milliseconds, which means that if an agent deliberates for too long, its action will not be taken into account.

The environment is inaccessible in the sense that agents have a limited sight range. This is not generally the case in Bomberman; however, it makes certain aspects more interesting if an agent does not always know where the bombs and enemies are. It is a deterministic environment, so actions are guaranteed to succeed. The environment is also dynamic, since bombs will explode without the agent explicitly performing a “detonate” action. It is discrete, since only a fixed number of percepts and actions are available.

In Jason, the environment is implemented in Java by overriding certain methods of the Jason implementation of a time-stepped environment.

init(String[]) This method is called once, when the environment is initialized. As argument, one can provide such things as the timeout for a time-step (which in this case is set to 500 milliseconds). During initialization, the model (a level) is created and initial percepts are sent to all agents.

executeAction(String, Structure) This method is called when an agent attempts to perform an action. Therefore this is the method one would change if the environment should be non-deterministic. In this scenario, however, which is deterministic, an action is guaranteed to succeed.


stepFinished(int, long, boolean) When a step is finished, the system checks whether the game is finished (i.e. a team is eliminated). If this is not the case, the bombs will perform a single tick. This method also performs collision checks between explosions and other objects (such as boxes and agents).

updateAgsPercept() This method ensures that the agents perceive the environment. It will update the agent’s position, how many bombs it carries etc. Positions of all agents from the same team and positions of enemy players that are within range of any of the agents will be shared among the team.

The method emulates knowledge-sharing between agents by letting an agent perceive a part of the environment if it, or another team-mate, is within range of that part. This increases the overall performance of the system in comparison to letting the agents broadcast everything they perceive.

7.3 Agents

All agents keep an internal view of the environment. This view may not be entirely accurate, since things that happen outside an agent’s sight range will not be updated in the internal view of the environment.

The view is implemented by overriding the method perceive() from the agent architecture. The agent perceives the environment and then uses these percepts to build an internal model. This model uses the same structure as the environment, but contains only partial information. Some percepts are removed from the agent’s knowledge base and exist only within the internal representation of the environment. This is for instance the case for obstacles, while other things, such as the positions of other agents, are kept in the knowledge base to ease access to them within Jason.

This allows the agent to make use of information about the number of bombs, its sight range, current position etc. immediately, while information such as the position of obstacles is only used within internal actions and need not be readily available.
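
This division can be illustrated with a small sketch (the +!step plan is hypothetical, but pos/2 and bombs/1 are the belief names used in chapter 8, and .get_path is one of the project’s internal actions): beliefs are tested directly in the plan context, while obstacle information is consulted only inside the internal action.

+!step(X,Y)
    : pos(AgX,AgY) & bombs(N) & N > 0   // readily available beliefs
    <- .get_path(AgX,AgY,X,Y,Act);      // consults the internal world model
       do(Act).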


7.4 Strategy

The overall goal of this project is to compare ACMAS and OCMAS using different measures. As discussed, I will not simply conclude that one method is better if that team tends to win more; the implementation could simply be superior even though the structure of the code is messy.

Even so, I will still attempt to make both implementations follow somewhat the same strategy. This is done to make sure that if one approach leads to an excessive amount of code, it is not simply a result of an extremely complex strategy, but rather the result of a quite verbose language.

In the following I will describe the overall strategy that both teams will follow. By comparing the performance of the teams applying this strategy to the same maps, bias towards a team being better simply because of a superior implementation should be eliminated.

Team Definition 7.1 specifies that a team consists of two or more agents. Because an agent will have an internal, possibly wrong, view of the environment, it is important to keep updating this view, simply by exploration. For this, it seems natural to have at least one agent responsible for doing so. It may prove to be an advantage to have more than one if the team consists of many agents; however, in most cases one is probably appropriate.

To win the match, a team must eliminate the other team by killing all enemy players. For this, a group of agents must be responsible for finding enemies and placing bombs near them.

Overall this leads to a design having two kinds of agents: an explorer and a destroyer. Having only two kinds of agents may sound too simple; however, the game is quite simple, and the overall goal of the project is to compare approaches, not to build one extremely intelligent system.

Goals The agents on a team have an implicit collective goal of winning the match. By letting the agents pursue goals which will lead to the completion of this goal, the agents will be attempting to fulfill their overall purpose.

A major goal which will lead the agents towards the overall system goal is the goal of killing an enemy. This goal will be pursued by all destroyers in a team. To ensure that the internal view of the system is always as accurate as possible, the explorer(s) will not pursue this goal.


An explorer will instead pursue the goal of exploring the environment. However, since the environment is quite dynamic, the goal is a maintenance goal rather than an achievement goal. The agent should keep pursuing the areas which have been visited the least, making sure that changes in these areas are known.
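
In AgentSpeak such a maintenance-style goal can be approximated by a plan that re-adopts itself (a sketch; least_visited/2 is a hypothetical belief maintained from the internal world view, while !move is the goal implemented in chapter 8):

+!explore
    : least_visited(X,Y)
    <- !move(X,Y);       // visit the least recently seen area
       !explore.         // then re-adopt the goal, so exploration never stops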

Finally, if a destroyer does not know the location of any enemy, it should begin searching for one. This will mostly be the case in the beginning of a match; once the agents have spread out in the map, they are more likely to run into enemies.

Cooperation The agents should be able to cooperate in solving their problems. There are a few situations where cooperation can naturally improve the performance of the agents:

Knowledge-sharing: By sharing the internal view of the map, the agents implicitly cooperate by helping each other explore. This is very important if a team should have any advantage of using an explorer.

Killing enemies: Killing an enemy is hard if it is a job done alone. The enemy will in most situations be able to choose a location providing cover, saving it from explosions. However, if more than one agent is committed to killing one enemy, they may be able to trick it into being hit. Therefore the agents should utilize the possibility of cooperation in this case.

Being stuck: Finally, an agent may be stuck when the game starts if enough boxes have spawned near its location, meaning that it would kill itself when attempting to remove the boxes. The agent will not be able to compete in the game until some other agent has destroyed the boxes. It is crucial that the agents detect such situations and act accordingly, so that all agents are available during a match.

Chapter 8

Agent-Centered Multi-Agent System

In this chapter I will describe the design and implementation of the agent-centered multi-agent system, which is implemented in Jason and is one of the two teams competing in Bomberman. I am using Prometheus [24] as a guideline. I will discuss the results of the specification, design and implementation of the system, but not go into details with every iteration of a phase.

The Prometheus methodology consists of three phases: System Specification, Architectural Design and Detailed Design. The system specification focuses on the functionality of the system: what should the resulting system be able to do, which goals should agents pursue, what scenarios are possible and what can be perceived from the environment. In the architectural design phase, the designer decides upon the agents of the system. It should be specified which agents the system consists of, how they are acquainted and how they interact. Finally, this phase also constructs a system overview, combining the results of the first phases. In the detailed design phase, the capabilities of the agents are designed, i.e. it is decided exactly what each agent is able to do. Using this, a set of plans is developed which tells how an agent is supposed to fulfill its goals.

I will use Prometheus as a guideline for specifying the functionality of the system, which eventually will lead to a decision on which agent types the system


consists of. I will not use Prometheus to decide upon interaction between agents and plan descriptions because of the size of the system. I do not believe the amount of work required for specifying protocols and interaction diagrams will result in a clear increase in the quality of the agent communication. The reason for this is that the system is quite small, and much of the communication will happen implicitly.

Note that some parts of this system will also be available to the OCMAS team. This is primarily internal actions, which are available to any agent in the project. Whenever this is the case I will briefly explain why.

8.1 System Overview

I use the Prometheus methodology as a guideline for designing the team. Refer to definition 7.1 for a description of the game. The first step is to create a system specification in which we identify the goals of the system [24]. The initial goals can be identified from the description of the basic system. Furthermore, we can refine the goals by asking “how can this goal be achieved?”; the answer to such a question will then usually be a list of sub-goals that, when achieved, will achieve the overall goal.

Figure 8.1 shows the initial goals and how they relate (by sub-goal relation). The figure uses three notations for a goal: one when the sub-goals of that goal can be pursued in parallel, one when just one of the sub-goals needs to be achieved, and one when the sub-goals must be completed in a sequence. For example, while the agent can choose how to avoid getting hit by a bomb, it will need to first find an enemy and then move to be able to get to that enemy.

Figure 8.1: Initial goal specification

An important part of the goal identification process is that it is iterative: whenever we realize that certain goals are missing, a new iteration is made in which these goals are added to the system1.

Functionality The goals can be grouped into functionalities, which show how behaviors of the system can be achieved. A functionality is a combination of goals, percepts and actions which are relevant to that behavior. Figure 8.2 gives an overview of the functionality of this system. Rectangles are functionalities, circles are goals, stars are percepts and rectangles extended with a triangle are actions.

Figure 8.2: Functionalities for the system

For example, in order to kill an enemy the agent must conduct the behavior of the functionality “kill enemy”. This means perceiving an enemy, moving to that enemy, placing a bomb and avoiding getting killed by the bomb (part of the “survive” goal).

1 I am not going to go into details with the iterations and which goals were identified in which iteration. This section merely summarizes the process of specifying the multi-agent system.


Percepts & Actions In order to be able to design a fully functioning system, we also need to identify exactly what an agent can expect to perceive and what an agent should be able to do (its actions). In the case of this project the following percepts and actions have been identified:

Percepts:

• Position of known agents
• Obstacles
• Explosions
• Bombs
• Death of agents
• Number of simultaneously placed bombs
• Bomb explosion range

Actions:

• Movement (up, down, left, right)
• Skip (do nothing)
• Place bomb

The identification of percepts and actions is an iterative process, so it will be possible to revisit it, should that prove necessary. Also note that an important part of identifying percepts and actions is to get an overview of what the agent can see and do. This means that some percepts which were initially identified may prove to be too generic and must be described in a more specific way.

Data It is also possible to specify whether an agent uses data from a specific data source. This could for instance be a database or a web server. In this case, each agent will have access to one data source which it keeps updated itself: its local representation of the environment, or local world map (LWM). This map is useful for many functionalities, for example moving along a path, exploring the environment and avoiding bombs. It is possible to split this data source into several data sources with unrelated responsibilities. This results in the data sources Map-data, Available bombs and Agent status. Map-data contains everything about the local world, i.e. obstacles, bombs and explosions. Using Available bombs the agent can look up how many bombs it is currently able to place. Finally, Agent status holds the positions of the agents and whether they are dead2.

2 Note however that since this is a local representation, it may not be accurate. If an agent does not know where another agent is, it will not be able to search the data source for that information.


Scenarios Using the identified goals, functionality, percepts and actions, it is possible to identify a set of possible scenarios of the system, i.e. things that may happen during a run of the system. Usually one creates scenarios which show how the system normally runs, i.e. the expected behavior, but it is also possible to create scenarios specifying how to react when something goes wrong [24]. This could be very relevant in a non-deterministic environment, since actions may not always lead to the expected results.

I now describe the scenarios which I have identified. In [24] it is suggested that one specifies exactly what a scenario consists of. However, here I will only briefly present each scenario.

Position perceived: When the position of the agent is perceived, it should be put into the local representation of the environment. This is important if the environment is non-deterministic, since the agent then cannot trust its actions, but must rely on the information it gets from the environment.

Enemy perceived: If an agent perceives an enemy, it should use this information to create a plan for achieving Kill Enemy.

Ally perceived: When an ally is perceived, this information should be taken into consideration when planning to kill enemies. It should also be saved in the local representation of the environment.

Bomb perceived: If an agent perceives a bomb, it should create a plan for achieving Avoid Bomb.

Explosion perceived: If an explosion is perceived, the agent should make sure not to get hit by walking into it. An explosion stays in the field for a few steps, and the agents must ensure that their paths do not go through an explosion.

Obstacle perceived: When an obstacle is perceived, it should be saved in the local representation of the environment. If the obstacle is destructible, the agent may choose to achieve Destroy Obstacle.

Target acquired: A target is a location to which the agent wants to move. When a target is acquired, the agent should attempt to reach this target using the Movement actions.

The agent is out of bombs: Since the agent is only able to drop a limited number of bombs at one time, it should not attempt to perform the Place Bomb action when it perceives that it currently has no bombs.

The agent is dead: If the agent is dead, it can only perform Skip.


Figure 8.3: The relation between functionality and data sources.

The final part of the system specification is to ensure that the design is consistent. This means that we must ensure that every goal appears in a scenario and a functionality, and that there are functionalities and scenarios for all actions and percepts [24]. This will ensure that the expected behavior (from goals, actions and percepts) is covered by the possible scenarios and functionalities.

If new information becomes available, thereby identifying new types of goals, actions or percepts, a new system specification iteration is made to ensure that the scenarios and functionality cover this as well.

8.2 Agents

In the architectural design phase the focus is on the agents of the system. It is decided which agents should be used, how they will interact and what the overall system structure will be. The agent types can be decided by relating functionalities to data sources. In that way it is possible to determine which functionalities are related, thereby realizing whether some functionality could be implemented in the same agent [24].

In figure 8.3 I have mapped the functionalities to the data sources of the system. Before deciding on agent types, it should be noted that a functionality such as avoiding a bomb or moving along a path is a functionality all agents should be able to use. Furthermore, destroying boxes is a functionality all agents need to have, since they may otherwise end up in situations where they rely too much on


other agents. From this figure there are two types of agents: one for exploration and searching for enemies and one for killing enemies. This conforms with the strategy proposed in chapter 7.

The next thing to decide is the cardinality of each type, i.e. how many agents of each type the system should contain. This is an important decision if some agent types depend on others, since that could create a bottleneck. However, with the current two types of agents this will probably not be a problem. The cardinality can therefore be decided by testing different setups.

I have now decided upon the overall system specification and the agent types of the system. In the following I will describe how the agents cooperate and act in different scenarios. As mentioned, this will not be done using Prometheus; rather, I will take the approach of specifying behavior using epistemic logic by creating an epistemic model to be used for creating plans.

8.2.1 Cooperation

As the strategy suggests, the agents will need to employ certain cooperative skills to be able to succeed. Some cooperation should however happen implicitly. When two agents pursue the goal of killing the same enemy, they should at least be able to avoid putting bombs at the same spots, and instead attempt to trap the enemy. This will generally be possible because of the autonomy of the agents; they should choose paths and bomb locations which seem reasonable, i.e. not place bombs which will potentially hit allies, or go through a path in which a bomb may explode soon.

Some cooperation can, however, not be done implicitly. Consider the situation in figure 8.4. Agent 6 of team 2 is in the unlucky situation of being stuck between a number of boxes without the possibility of placing a bomb to destroy them; it would kill the agent as well.

In such a situation the agent has two options: (1) wait for another agent to autonomously choose to help it or (2) ask for help. Option (1) may be possible, but it seems irrational to wait for another agent to detect the situation by itself. The agent being stuck should therefore always ask for help. This will be done using the contract net protocol (see section 3.2.1.1 about task sharing); a sketch in AgentSpeak is given below.
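
A minimal sketch of the three contract net steps (all plan and belief names here are hypothetical, .dist is an assumed project internal action for computing a distance, and a team of four agents, i.e. three potential helpers, is assumed; .broadcast, .send, .my_name, .findall, .length and .min are standard Jason internal actions):

// Announce: the stuck agent broadcasts the task.
+!ask_for_help
    : pos(X,Y)
    <- .broadcast(achieve, bid_for(free_me(X,Y))).

// Bid: each ally answers with an estimated cost (its distance).
+!bid_for(free_me(X,Y))[source(Stuck)]
    : pos(MyX,MyY) & .my_name(Me)
    <- .dist(MyX,MyY,X,Y,D);
       .send(Stuck, tell, bid(free_me(X,Y), D, Me)).

// Award: when all three bids are in, give the task to the cheapest bidder.
+bid(Task, _, _)
    : .findall(b(D,A), bid(Task,D,A), Bids) & .length(Bids, 3)
    <- .min(Bids, b(_,Best));
       .send(Best, achieve, Task).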

The OCMAS team will also use this strategy for helping stuck team-mates; however, it will be used a bit differently because of the differences between the approaches.


Figure 8.4: The agent is stuck between these boxes and cannot bomb his way out without killing himself.

8.3 Pathfinding

Pathfinding may seem a trivial task at first; however, considering that one of the main features of Bomberman is the destructible boxes, it turns out to be a bit more complex. It seems trivial because one can always choose to avoid boxes by choosing a path around them. However, in a very dense map this may not be possible. Also, in some cases the path around a box may be unnecessarily long. Therefore it must be considered in detail how to compute paths.

I have created an enhanced version of the well-known A* algorithm [25], in which a punishment is calculated for every location in the environment. The main difference of this algorithm compared to A* is that the tentative value of a neighbor of the current location includes a punishment value depending on the objects at the location of that neighbor. This will make the algorithm consider other, perhaps longer, paths which may prove to be safer.
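
Spelled out, and assuming the punishment is simply added to the usual step cost, the tentative value of a neighbor n′ of the current location n becomes

g(n′) = g(n) + 1 + p(n′),   f(n′) = g(n′) + h(n′),

where p(n′) is the punishment for the objects on n′ (0 for a free field). With a punishment of 5 for a box, a box-free path of length L2 is then considered over a path of length L1 passing through one box whenever L2 ≤ L1 + 5, i.e. a detour of up to 5 extra steps still competes; this is exactly the behavior described next.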

For instance, by specifying a punishment of 5 on a field containing a box, the algorithm will consider paths that avoid going through that box and are up to 5 steps longer. This may not seem like a big improvement, but it means that if a single box is blocking a path, then the agent will consider a path which is a little longer. Compared to a situation where it blows up the box and continues, this is usually more efficient, since it would otherwise have to wait for the bomb to explode and the explosion to disappear. However, consider the situation depicted in figure 8.5(a). The agent wants to move from the current position, A, to the target, B, and it would be highly inefficient to compute a path avoiding the boxes. It would be much more efficient to compute a path in which a box must be destroyed. To do this I introduce the notion of an intermediate target.


(a) A path avoiding boxes. (b) A path with an intermediate target, C.

Figure 8.5: Different ways of computing a path from a location A to a target B.

An intermediate target is a target at which a bomb should be placed in order to clear the way to the “real” target. In figure 8.5(b) we have an intermediate target, C. This path is clearly preferred over the other.

Deciding upon a plan We can use epistemic logic to build a model describing the situation. This model can be used to determine a plan to be used by the agents. The agent needs the following predicates from its knowledge base:

• pos(X,Y): The current position of the agent.

• target(X,Y): A final target.

• intermediate(X,Y): A possible intermediate target.

• clear(X,Y): An intermediate target is clear, meaning that the targeted box has been removed, creating a passage.

• bombs(N): The number of bombs currently available. As a shorthand I write bombs(N>0) for (bombs(N) ∧ N > 0).

We can now create an epistemic model for an agent a. Note that in a Bomberman scenario there will generally be an enormous number of epistemic states, since there is a state for each possible position. An agent will have different knowledge every time it is in a state, yielding even more states. We can consider a more abstract and general model in which we have three possible locations: (XA, YA), (XB, YB) and (XC, YC), corresponding to the agent’s location, its target and its intermediate target, respectively. These will be referred to as A, B and C. I write foo(_) to match any value of the predicate foo (i.e. when we are not interested in the value, only in the fact that the predicate exists in the knowledge base).

Figure 8.6: Model of the path-finding problem depicted in figure 8.5.

For each of the three possible locations, a number of possible states exist. Normally, in epistemic logic we talk about indistinguishable states, i.e. states the agent cannot distinguish because of lack of knowledge. In the following, when I talk about indistinguishable states, I include states which, even though they are somewhat different in terms of knowledge, will result in the agent performing the same action. They are therefore indistinguishable to an outsider (i.e. another agent).

Figure 8.6 shows the model for the path-finding problem. The predicates in a state are the predicates which the agent knows (or believes) to be true in that state. That is, if the agent is at the final target, B, and the intermediate target is clear, it will be in the lower right state of the figure. Using this model it is now possible to create a plan for how to decide which path to choose.

Notice the two possible outcomes of moving towards C when the agent is at A and has no bombs (with the intermediate target being blocked). This is because during the move towards C, a bomb may become


available, meaning that the agent will end up in a state where bombs(N>0) holds rather than bombs(0).

Example 8.1 (Plan for path-finding) Given an agent a, GOALa(ϕ) means that a has a goal of achieving ϕ and ACTa(α) means that a performs the action α. It is then possible to specify the rules that govern the agent’s behavior in the form ϕ → ψ, as described in chapter 2 (deductive reasoning).

Ka(pos(B) ∧ target(B)) → . . .   (8.1)
Ka(pos(C) ∧ target(B) ∧ ¬intermediate(_)) → GOALa(pos(B))   (8.2)
Ka(pos(C) ∧ target(B) ∧ intermediate(C) ∧ clear(C)) → GOALa(pos(B))   (8.3)
Ka(pos(C) ∧ target(B) ∧ intermediate(C) ∧ ¬clear(C) ∧ bombs(N>0)) → ACTa(bomb)   (8.4)
Ka(pos(C) ∧ target(B) ∧ intermediate(C) ∧ ¬clear(C) ∧ bombs(0)) → ACTa(skip)   (8.5)
Ka(pos(A) ∧ target(B) ∧ intermediate(C) ∧ ¬clear(C) ∧ bombs(N>0)) → GOALa(pos(C))   (8.6)
Ka(pos(A) ∧ target(B) ∧ intermediate(C) ∧ ¬clear(C) ∧ bombs(0)) → GOALa(pos(C))   (8.7)
Ka(pos(A) ∧ target(B) ∧ ¬intermediate(_)) → GOALa(pos(B))   (8.8)
Ka(pos(A) ∧ target(B) ∧ intermediate(C) ∧ clear(C)) → GOALa(pos(B))   (8.9)

A formula of the form (pos(X) ∧ target(X)) means that the current position can be unified with the target position, i.e. the agent is currently at the desired target.

The action skip is a no-op, meaning that the agent will not perform any action during this step. In the next step, all formulae are considered once again, and if a bomb is now available, the agent will place it at the intermediate target. □

In (8.1), the dots (. . . ) refer to the fact that the agent is now at the desired target; exactly what it will be doing here therefore depends on its current knowledge and tasks. When deciding what to do, the agent should consider these rules in the order they are presented here, i.e. starting at (8.1) and ending at (8.9) (note: whenever a rule matches the knowledge base, the agent should immediately choose that rule).


For instance, if the agent is at position C, which is a cleared intermediate target, the following happens: First the agent considers (8.1), but the current position does not unify with the target. The next step is to consider (8.2), but this is not a match either, since the agent has an intermediate target (intermediate(C)). Now it considers (8.3), and since the antecedent is a perfect match, the agent commits itself to the goal of being at position B.

Implementation We now have a plan for deciding which path to follow in different situations. This plan is easily implemented in AgentSpeak. The implementation looks somewhat different from the formulae, but this is mainly because of the use of internal actions.

• The first statement corresponds to situation (8.4), where an agent has a target (X,Y), has at least one bomb available and is located at an intermediate target. This is the case because the agent’s current position (AgX,AgY) is unified with the intermediate target.

+!move(X,Y)
    : pos(AgX,AgY) &
      .get_intermediate_target(AgX,AgY,AgX,AgY) &
      bombs(N) & N > 0
    <- do(bomb).

• The second situation is quite similar to the first, but in this case there are no bombs available (corresponding to (8.5)). Because plans are tried in order, this plan is only reached when the bomb-available context of the first plan fails.

+!move(X,Y)
    : pos(AgX,AgY) &
      .get_intermediate_target(AgX,AgY,AgX,AgY)
    <- do(skip).

• This statement corresponds to all situations where the agent has an intermediate target which is not clear, and it is currently not yet at that location, i.e. this corresponds to (8.6)–(8.7).

+!move(X,Y)
    : pos(AgX,AgY) &
      .get_intermediate_target(AgX,AgY,TX,TY)
    <- .get_path(AgX,AgY,TX,TY,Act);
       do(Act).

• Finally, the most general statement, which specifies what the agent should do when it has no intermediate target. In this case, it should get the next step towards the final target and perform that step.

+!move(X,Y)
    : pos(AgX,AgY)
    <- .get_path(AgX,AgY,X,Y,Act);
       do(Act).

Notice how the nine statements of the epistemic specification could be compressed into four statements in AgentSpeak. The primary reason for this is the use of internal actions. Instead of having the notion of an intermediate target being “clear”, the internal action simply removes the intermediate target the moment it is clear. In this way, situations such as (8.3) and (8.9) become redundant, as they are caught by (8.2) and (8.8), respectively.

The path-finding algorithm is also used by the OCMAS team, since both teams need to be able to follow a path. The major difference is when the algorithm is called, since the structure of the systems will differ.

8.4 Pursuing Enemies

In order to successfully eliminate enemies, the agents need to be able to decide which enemies to pursue. When an enemy has been chosen, the agent should keep pursuing this enemy until this is no longer possible (the enemy is dead, the agent is dead, it is no longer feasible to pursue exactly this enemy, etc.). In other words, the agent must be able to commit to pursuing an enemy. In the following I write commi(j) when agent i is committed to pursuing agent j.

It must also be ensured that a situation where all agents pursue the same enemy will never occur, since this will never be as efficient as dividing the task of eliminating the other team into subtasks of killing a single enemy, distributed fairly among the agents. However, it should be possible for more than one agent to pursue the same enemy; otherwise the agents would not be able to trap their enemies.

Consider the situation depicted in figure 8.7(a). Agents 16, 17 and 18 have found the enemy 273. However, it has been decided that no more than two agents may pursue the same enemy simultaneously. The problem is then how to ensure this invariant. An agent knows whether it is committed to an enemy, and also how many agents are currently committed to an enemy (denoted comm_n(i,N), where i is the enemy and N is the term unified with the number of currently committed agents).

3 The names of the agents are taken directly from the implemented Jason environment.

Figure 8.7: A scenario where three agents can potentially commit to the enemy; however, only two agents are allowed to commit to the same enemy at once. (a) shows the scenario, while (b) shows the corresponding epistemic model for the commitment.

We can now create an epistemic model of the scenario with focus on the commitment of each agent. This model is shown in figure 8.7(b). Notice that agent 16 cannot distinguish comm17(27) from comm18(27). The reason is that we have K16(comm_n(27, 1)), but also ¬K16(commi(27)) for i ∈ {17, 18}. The reasoning is analogous for the other agents. In the cases where two agents are committed, those agents cannot distinguish states where different allies are helping. The reason is that while they know that another agent is committed as well, they do not know which one.

Using the epistemic model, we can now create a set of formulae which describe how the agents should react to different situations regarding commitment. In the following we have Ag = {16, 17, 18}. Also note that the agents choose to commit in a “first-come, first-served” fashion, i.e. even though another agent may be closer to the enemy, that agent is not guaranteed to be able to commit to the enemy. There are several reasons for this: First of all, that agent may already be committed to another enemy, and secondly, having to weigh pros and cons every time a commitment should happen could make the decision process slower, making the agents vulnerable while they are deciding.

First it should be decided what happens when an agent finds an enemy to which it is able to commit (states 2, 6 and 7 in the epistemic model from the point of


view of agent 16):

K16(comm_n(27, N) ∧ N < 2 ∧ ¬comm16(27)) → comm16(27) ∧ EAg(comm_n(27, N) ∧ N > 0)

What the formula tells us is that when agent 16 knows that at most one other agent is committed to agent 27, the agent will commit to it, and everybody will know that at least one agent is committed to this enemy.

The other major case is when the agent has found an enemy but may not commit to pursuing it (state 1 in the epistemic model):

K16(comm_n(27, 2) ∧ ¬comm16(27)) → search for available enemies

The phrase “search for available enemies” simply points to the idea that the agent should use whatever means it has available to find another enemy to which it can commit. I am not going into details with this in epistemic logic, since the approach will use Jason-specific methods.

Finally, I consider the case where the agent does not know the location of any enemy. This should result in the same as above: a search for available enemies.

¬K16(comm_n(_, _)) ∧ K16(¬comm16(_)) → search for available enemies,

i.e. if the agent does not have knowledge of any commitments at all, it must search for available enemies. The reasoning is as follows: Since allied agents share their knowledge about enemies, it follows that if no agent is committed to any enemy, none of the agents will know the location of any enemy4. The agent should therefore begin a search for enemies.

These formulae make up the plan for choosing when to commit to enemies. By looking at the model, it is easy to see that this is the case. An agent is only allowed to commit to an enemy when at most one other agent is currently committed to that enemy. For agent 16, this is the case in states 2, 6 and 7. In all other cases (disregarding the states where the agent is already committed to the enemy), the antecedent of the first formula can never become true. The agent will therefore always begin searching for other enemies in these cases.

4 It also follows that by substituting K16 with EAg we get a more precise description of the current situation, since all the agents will know that no agent is committed to an enemy.


Implementation A plan for this scenario can easily be implemented in Jason by making a few small adjustments. First of all, it must be ensured that

commi(j) → Ei.team(comm_n(j, N>0)),

where i.team refers to all agents of i’s team. This can be done using the broadcast function of Jason, which sends a KQML message to all agents in the system. To make things a bit simpler, we let the broadcast message include which enemy the agent is committing to, instead of implementing comm_n(j,N). The reasoning above still applies, since it in general does not matter exactly who has committed to a specific enemy.

The number of committed agents can be found using the standard internal action .findall(term, query, result), which can be used to find all committed agents. The result is a list, and the size of that list is the number of committed agents.
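
Put together, the commitment step can be sketched as follows (the literals comm/1 and committed/2 and the plan names are hypothetical, as the thesis does not list the exact identifiers; .my_name, .findall, .length and .broadcast are standard Jason internal actions):

// Commit to enemy E only while fewer than two allies already have.
+!consider(E)
    : not comm(E) &
      .findall(A, committed(A,E), L) & .length(L, N) & N < 2 &
      .my_name(Me)
    <- +comm(E);                          // local record of the commitment
       .broadcast(tell, committed(Me,E)). // now everybody knows E is covered

// Otherwise, look for another enemy to commit to.
+!consider(E) <- !search_for_enemies.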

8.5 Concluding Remarks

This chapter has given an overview of the implementation of an ACMAS. By using the Prometheus methodology as a guideline for the system specification, the overall system goals and functionalities were quickly identified. By mapping functionalities to data sources, two agent types were identified: an explorer and a destroyer.

The combination of logic programming and internal actions written in Java furthermore made it possible to implement more advanced features, such as pathfinding, in an easy way, while making the use of these features in AgentSpeak unproblematic.

By specifying the epistemic model for certain scenarios of the game, it was possible to easily identify the states in these scenarios. These states could then be used to generate a set of plans to be used by the agents.

While the agents may not be following perfect strategies and may not always choose to perform the smartest actions, they are in general able to commit to their missions and cooperate in different situations. Remember that the idea is not to compare two different implementations by their performance only; rather, it is important that the code is well-structured, easy to understand and maintain, and provides satisfactory results reasonably fast.

Chapter 9

Organization-Centered Multi-Agent System

This chapter describes the design and implementation of an organization-centered multi-agent system. It will be implemented in J-Moise+, a combination of Jason and the organizational model Moise+, as described in chapter 6. I will be focusing on the design of the organization, the groups and roles of the agents, and how to implement and make use of such an organization in J-Moise+.

In [12] an OCMAS model, AGR, is defined. The paper also discusses a general methodology for OCMAS. Basically, it is suggested that the overall organizational structure is identified first, and that one can then use the Gaia methodology [33] to identify the roles of the system and relate them to the groups of the general structure.

The Gaia methodology was briefly mentioned earlier in the report. It is a methodology for designing multi-agent systems, albeit an agent-oriented one. A part of it considers the organization of the system by defining roles, their permissions, obligations and interactions. Though Gaia takes a static approach to the organization, deciding upon which roles the system consists of will also apply to the dynamic approach of Moise+.

I will be using the Gaia methodology as a guideline for designing the organization. Gaia will primarily be used for the part of the structural specification



concerning roles and interactions between roles. By identifying the groups, they can then be related to the roles so that the organization is usable in Moise+. Finally, the functional aspect of the system should be identified and related to the structural specification using deontic relations.

This chapter will present the resulting organization along with some of the considerations I have had during the process.

9.1 Structural Specification

The structural specification defines the available roles and groups, and the relations between these. I have defined an abstract role, the bomberman-agent. All other roles inherit this role. Similarly to the types of agents in the ACMAS, there are two basic roles: explorer and destroyer. I have considered making the latter an abstract role with two concrete sub-roles: box-destroyer and enemy-destroyer. However, because most box destruction is done implicitly using A* (which will also be used for these agents), there is no clear advantage in letting an agent be directly committed to destroying boxes. Figure 9.1(a) shows how the roles are related.

The game being quite simple leads to a quite simple design, both role- and group-wise. Therefore the groups are, not surprisingly, comparable to the roles of the system. Figure 9.1(b) shows the groups and their subgroup relations. I have considered letting the attack group have two sub-groups: one for agents knowing the location of an enemy, and one for agents with no such knowledge. This would enable the agents to focus on either attacking the enemy or searching for it. In the end I chose not to do so; this design would lead to agents joining and leaving groups all the time. Instead, as I will describe below, the deontic specification makes this distinction possible within a group.

Figure 9.1: Roles and groups

I will not go into details with all the definitions here; for a detailed view of the



organization I refer to appendix A, which contains details on how to obtain the source code.

9.2 Functional Specification

The functional specification defines a set of goals and missions the agents can commit to. Figure 9.2 gives an overview of the goals of the organization, shown as decomposition trees or SCHs.

Figure 9.2: Goal decomposition trees, or SCHs, for (a) exploring the map, (b) finding enemies and (c) killing an enemy.

The overall goal is to win the match. This is done by killing all enemies. Killing an enemy is done by first finding an enemy and then killing it. A combination of the schemes (b) and (c) thus leads to the following decomposition for finding and killing an enemy:

Find and kill enemy = (Explore | Receive location), Move to enemy, Attack.

By completing this mission for all enemies, the team will win the match.

I have not considered goals with a success degree below 1; this does not mean that a goal will always be successfully completed at some point after an agent has committed to it. It is not simple to decide upon a certainty degree, but it is possible to calculate a value: by letting the agents play their roles and commit


to goals, the success rate is simply

success(g) = (successful commitments to g) / (commitments to g).

This can for instance be used to revise the strategy. If the rate of success is too low (say, below 50%), it may be reasonable to consider revising the strategy.
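
As a made-up worked instance: if the attack goal has been committed to 40 times over a series of matches and satisfied 18 times, then success(attack) = 18/40 = 45%, which is below the 50% threshold and would suggest revising the strategy.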

The structural and functional specifications are related through roles and missions. Therefore the goals of the system must be organized into missions that agents can commit to. This will allow the agents to fulfill a set of goals by completing a mission. Below I have proposed three missions, which should cover all of the goals of the social schemes in figure 9.2.

explore = {explore map, find unexplored area, move to unexplored area}
find enemy = {find enemy, explore, receive location}
kill enemy = {move to enemy, attack, place bomb, avoid bomb}

For example, when an agent commits to the mission explore, it will automatically commit to the goals of finding unexplored areas and moving to those areas.

The relation between the structure and the functions should be quite clear: an attacker would probably follow the mission for killing an enemy unless it does not know the location of such an enemy. In that case it would commit to the mission of finding an enemy.

9.3 Deontic Relationship

The final step is to relate the structural and functional specifications by obligations and permissions, i.e. a deontic relationship. The idea is to force agents to commit to certain missions when they choose to play a role. As described above, the relation is quite clear. However, it remains to be explicitly defined exactly what obligations and permissions a role has.

The fact that roles inherit a deontic relation makes it much easier to decide on how a role is related to the missions. Below is the proposed list of deontic


relations between roles and missions.

obl(explorer, explore, Any)
obl(destroyer, kill enemy, Any)
per(destroyer, find enemy, Any)

Having these deontic relations means that an agent playing the explorer role must commit to the exploration mission. This ensures that the team will always have an agent committed to exploring the environment. I have also considered permitting the explorer to commit to the kill enemy mission, but this does not conform with the general strategy and the other team. It should however be noted that this possibility makes an OCMAS very flexible; several agents can commit to the same missions just by creating a deontic relation.

Agents playing the role destroyer are allowed to search for enemies. They are not obliged to do so, since if they already know the location of an enemy, committing to that enemy would in most situations prove more useful.

The constraints of the destroyer role make it possible to distinguish between the two situations described above: (1) the location of an enemy is known and (2) no enemy location is known. The agent is allowed to search for enemies and will do so when situation (2) arises. This distinction is more elegant than having subgroups, since it allows an agent to stay in a group even though its mission has changed.
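
In AgentSpeak this switch can be sketched roughly as follows (jmoise.commit_mission is an assumed internal-action name, mirroring the jmoise.set_goal_state action used in section 9.4.1, and the enemy_pos/2 belief is hypothetical):

// The obliged kill enemy mission generates a moveToEnemy goal;
// with no known enemy location, fall back on the permitted mission.
+!moveToEnemy[scheme(Sch)]
    : not enemy_pos(_,_)
    <- jmoise.commit_mission(find_enemy, Sch).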

9.4 J-Moise+

To use J-Moise+ we need to add the OrgManager as an agent to the system. The OrgManager will load the organizational specification and use it to ensure that organizational events happen, that the groups are well-formed and so on.

Initially, there are no instantiated groups in the system. Therefore, the first step for the agents will be to create the groups needed in the system. I have simplified this by making one agent responsible for it. That agent creates the team group by asking the OrgManager to do so. When this happens, an organizational event occurs of which all agents in the organization are notified. They will then act accordingly by creating the groups and schemes necessary for fulfilling their purposes.


9.4.1 Committing to a mission

When the agents commit to a mission in a scheme, the OrgManager will generate goal achievement events for the goals that are currently available. For instance, when an explorer commits itself to the mission of exploration, the OrgManager will automatically generate the goal achievement event of finding an unexplored area. When this goal is completed, the OrgManager generates the next goal available in that mission: moving to the unexplored area. In this way it is very easy to follow the plan of a mission, since the goals are automatically generated as they have been specified in the organization.

Basically, the explorer has the following plans:

+!exploreMap[scheme(Sch)]
    <- jmoise.set_goal_state(Sch, exploreMap, satisfied).

+!findUnexploredArea[scheme(Sch)]
    : <context>
    <- <plan to find unexplored area>;
       jmoise.set_goal_state(Sch, findUnexploredArea, satisfied).

+!moveToUnexploredArea
    : <context>
    <- <plan to move to unexplored area>.

+near(_,_)
    <- ?scheme(exploration, Sch);
       jmoise.set_goal_state(Sch, moveToUnexploredArea, satisfied).

Notice that since the organizational specification shows exactly how to explore the map, it is only necessary to create plans for each goal event. When the plan is successfully executed, the agent tells the OrgManager that the goal has been satisfied. It is then the responsibility of the OrgManager to generate the next goal event. Note that moving to an unexplored area is a bit different, since it uses the path-finding algorithm described in the previous chapter. Therefore, the goal is satisfied only when the agent is near the unexplored area.


9.5 Code Maintenance

In the initial solution I proposed a functional specification consisting of two schemes: one for exploration and one for finding and killing enemies. This resulted in a few issues when letting more than one agent commit to a mission of the second scheme. Moise+ allows the programmer to specify the cardinality of agents satisfying a given goal. A cardinality of 1 for the mission concerning finding enemies seems obvious, since the system then only requires a single agent to find an enemy before all agents in the group commit to killing that enemy.

However, the approach for killing an enemy is quite different; first the agent has to move to a desired position, then place a bomb and finally avoid getting hit by the bomb. Letting all agents commit to the same scheme led to a few problems:

Cardinality = 1: In this case, when one agent has moved to its desired position, all agents will begin pursuing the next goal: placing a bomb. This means that even though an agent is not yet at its desired position, it will still place a bomb, which will then in most cases be wasted.

Cardinality = number of agents in scheme: By specifying the cardinality to be equal to the number of agents in the scheme, this situation is avoided. However, in that case a goal is only satisfied when all agents have satisfied it. The system will therefore not let any agent place a bomb before all agents are at their desired positions. This drastically decreases performance, and in most cases the enemy will have moved far away.

Having to rewrite a big part of the organization was however not a huge task. By splitting the existing scheme into two new schemes, most of the existing code could be reused. In a few places it has been necessary to include new organizational code, but otherwise the process of rewriting this part went quite smoothly.

9.6 Concluding Remarks

The organization of a multi-agent system can be made explicit using an OCMAS model. In this chapter I have shown how this can be done by specifying the organization of the Bomberman multi-agent system structurally, functionally and deontically.


The Moise+ model makes it possible to define roles, groups and missions of an organization in a clear format, making it possible for the user to understand it and for agents to be part of the organization and to follow its obligations and permissions.

By combining Moise+ and Jason we get J-Moise+, in which agents implemented in Jason can follow an organization specified in Moise+. The middleware implementation of Moise+ generates goal events for agents committed to certain missions, making it possible to implement exactly how specific subgoals are completed without being concerned with how the subgoals are related.

Chapter 10

Results

In the previous chapters I have documented the work with two approaches for building multi-agent systems, the agent-centered and the organization-centered. Throughout working with both Jason and Moise+ I have gained insight into the various aspects of the AgentSpeak language, of Jason and Moise+, and of the two different approaches in general.

In this chapter I will discuss the resulting systems: how well do the teams follow the proposed strategy, how easily was the strategy implemented, how is the resulting code structured (will it be easy to maintain and extend?) and how fast are the agents (at reacting to changes and so on)? Furthermore, is there any clear relation between the number of agents or the complexity of the system and the advantages of using one approach over another?

Moreover, since I have been using Jason and Moise+, these tools will also be discussed. Since J-Moise+ is based on Jason, I cannot make a direct comparison of the two. Instead I will go into detail with how debugging is performed, the languages used, the documentation of the tools and so on.

Finally, since the focus of each approach is so different from the other (the organization-centered approach focuses on what to do, while the agent-centered approach focuses on how to achieve certain goals), I will also discuss the advantages and disadvantages of each approach.


10.1 Agent-Centered Approach

When implementing the ACMAS I had to build all the features from the ground up. Some of these features, such as pathfinding, are also used in the OCMAS, so only the use (i.e. not the implementation) of these features can be compared.

Having to build everything from the ground up gives a lot of freedom with regard to the structure of the implementation; there are no constraints as to where specific details must be implemented. This has led to a solution where plans for achieving sub-goals and reacting to percepts can be implemented concisely while still doing as intended.

The resulting agents therefore react quite fast to changes in the environment; with short code and only a few precisely defined responsibilities, the agents are easily able to prioritize during a game if it, for instance, is necessary to take cover from a bomb. The agents are also able to cooperate in fulfilling their missions; by putting constraints on the number of agents committed to the same enemy, the agents are forced to commit to enemies to which other agents are also committed.

But the freedom one has with regard to structure has also been the biggest issue during the implementation. It is difficult to ensure that an agent continues working on another sub-goal once the current sub-goal is completed. It is also not always obvious how to make sure that the agent drops the current intention if it is no longer reasonable (for instance, a destroyer which keeps exploring even when it has spotted an enemy). A reason for this is the way debugging works in Jason: it is quite different from many imperative languages and even logic programming languages, since these have well-defined input and output models.
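To make the destroyer example concrete, a minimal sketch of how such an intention could be dropped is shown below, using Jason's standard internal actions .intend and .drop_intention; the percept, goal and plan names are illustrative, not the ones used in the implementation.

   // Sketch: abandon exploration as soon as an enemy is spotted.
   +enemy(X, Y)
       : .intend(explore)            // only react if currently exploring
      <- .drop_intention(explore);   // discard the exploration intention
         !attack(X, Y).              // pursue the enemy instead (illustrative goal)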

Overall, though, I am quite satisfied with the resulting system; it satisfies the proposed strategy, and even though the agents may be quite simple, they are able to cooperate to complete their tasks and use their knowledge to decide how to move through the environment.

10.2 Organization-Centered Approach

Working with an OCMAS is naturally more well-structured than working with an ACMAS, since it consists of two distinct parts: (1) specifying the organization and (2) implementing the details of the organization, where the latter depends on the completion of the first. While this does not automatically result in a more structured program, it does force the user to think more about what, why and how. When specifying the organization the focus is on what the overall goals are. During this phase I realized that this also made me think about why these are the goals, and in that way it allows me to justify the choices I make even before they are implemented. Finally, when the plans are implemented the focus is on how the agents are supposed to complete their goals.

The specification of the organization is created in an XML file and is then referenced in the Jason project. Included in Moise+ is a tool which enables the designer to create an organizational entity using the specification. In this way it is possible to verify that the relations and constraints are working. It would however be less error-prone if it were possible to create the specification in an editor; then it would not be necessary to remember what a parameter is called and what its possible values are.

The code tends to be quite clear, since only sub-goals, and not their relations, need to be implemented. But without being able to study the specification of the organization, it is not possible to see the relation between the goals (as it is handled automatically by the OrgManager). Furthermore, as required by J-Moise+, the code is often quite verbose because of calls to the OrgManager (jmoise.create_group(...) etc.) and the extensive use of annotations (+!goal[scheme(Sch)] etc.). While this in general makes the code quite clear, it can also result in awkward situations where one needs to include a plan for a goal event in which the goal is simply set to be satisfied. Consider the example below:

   +!goal   // available when subgoal is satisfied
      <- jmoise.set_goal_state(goal, satisfied).

   +!subgoal
      <- <complete subgoal>;
         jmoise.set_goal_state(subgoal, satisfied).

In this case, the primary goal is completed when the subgoal is completed, but one still has to explicitly state that the goal is satisfied even though nothing else is done. This is not a serious problem, but in large scenarios with complicated schemes it may result in many “empty” plans.

Since the agents are part of an organization, they are required to do whatever their obligations tell them to do. This makes them very flexible: when a mission has been implemented, it is generally possible to permit several roles to commit to it. However, to know their obligations the agents need access to the specification of the organization. As described, this access is provided through the special agent called the OrgManager. This means that when an agent has satisfied a goal, this information is sent to the OrgManager, which then determines what the next goals are and informs the agent. This can decrease performance in very active environments, if the agent has no goals to pursue while waiting for new information.

This can be seen quite clearly in the implementation of the explorer of the OCMAS team (refer to section 9.4.1 for the plans used by the explorer). When the exploreMap goal is available, it is immediately satisfied. When this happens, the scheme for exploring is finished and a new scheme must be created in order to continue the exploration. Compared to the explorer of the ACMAS, the performance differences are quite clear: in the ACMAS, the agent immediately chooses a new spot to move to, while in the OCMAS, the agent waits for the OrgManager to generate the appropriate goal events.
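The extra round trip can be sketched as follows: every time the exploration scheme finishes, the explorer has to ask for a new one before any new goal events can arrive. The percept form is illustrative, and jmoise.create_scheme is assumed to exist as the scheme counterpart of the jmoise.create_group action used elsewhere.

   // Sketch: restart exploration by creating a fresh scheme whenever the
   // previous one has finished; only then are new goal events generated.
   +scheme_finished(Sch)
       : scheme(exploration, Sch)
      <- jmoise.create_scheme(exploration).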

Limitations As previously described, it is possible to define the cardinality of a role in a group. I have used this to define that the team must have at least one explorer. Naturally, this puts some constraints on the team (after all, that is the point of being able to specify cardinality), meaning that the team will not work correctly when the constraints are not met.

This is completely as expected and will in most cases be fine. In a very small environment in which there really is no need for exploration, this constraint could however limit the possibilities of the team greatly.

Of course, this problem is easily solved if the specification of the organization is available; in that case the cardinality constraints can just be changed. Since this will most often be the case when using Moise+, it is not something that is likely to pose a problem. It is however still problematic that it is necessary to change the specification of the constraints in order to meet small changes in the setup.

The resulting implementation fulfills the proposed strategy; however, the road towards this result has been bumpier than when working with the ACMAS. This will be discussed in more detail below.

10.3 Performance Comparison

The two teams largely follow the proposed strategy, and even though the implementation process has been quite different in each case, the results have been quite satisfying. The performance results are however clearer when letting the teams play against each other.

When a map is initialized there is a noticeable difference in the time spent initializing the agents. Whereas the agents in the ACMAS are able to immediately choose goals and pursue them, the agents in the OCMAS cannot do anything until the OrgManager has been initialized. In most cases this difference is not important, since it only occurs in the initialization phase. However, in fast-paced games such as Bomberman, this difference can have an impact on the results, since the ACMAS can more quickly get an overview of the map and revise its strategies accordingly.

Both implementations are quite scalable in terms of the number of agents. The performance tends to decrease when adding many agents to each team. This is expected, since each agent requires a certain amount of processing power.

That being said, these advantages do not mean that the ACMAS wins every match. In fact, in many cases the game ends in a situation where both teams have one or two players left that are unable to defeat each other. This is because the agents always prioritize surviving, since that is a major part of winning the match. Whenever an agent is within range of a bomb it quickly moves away from it, since that is part of the path-finding algorithm.

Overall, the ACMAS seems to perform a bit better than the OCMAS. Even though both teams fulfill the proposed strategy, the performance differences clearly give the ACMAS an advantage, since it is able to more quickly revise its strategies.

10.4 Using Jason and Moise+

The results of the performance comparison show that the ACMAS performs better than the OCMAS. However, as the discussion shows, there are many criteria to take into consideration. Table 10.1 gives an overview of the differences which were discussed above.

It is important to note that the main disadvantage of Moise+ is the fact that the communication with the OrgManager creates a lot of overhead, which decreases the overall performance of the team. In other scenarios, in which it is not necessary to react quickly to changes, the team implemented in Jason may not have such an advantage.

100 Results

Table 10.1: The overall advantages and disadvantages of Jason and Moise+.

                       Jason                          Moise+
   Code structure      Short and precise code.       Well-structured code. Quite
                       Flow control can be           verbose. Excellent use of
                       difficult.                    annotations.
   Development speed   Sub-problems are quickly      No editor for the
                       implemented. Connecting       organization. Structure
                       sub-problems can be           makes implementation of
                       difficult.                    agents swift.
   Performance         Quick reaction to changes.    Communication with the
                                                     OrgManager creates overhead.
   Error handling      Descriptive messages.         Undescriptive messages.
                                                     Seems to ignore certain
                                                     error handlers.
   Debugging           Step-wise debugging is difficult to use (both tools).

Jason uses AgentSpeak, which is an agent-oriented programming language. As described in chapter 2, such programming languages are perfect for implementing goal-directed and reactive behavior, since one builds a set of plans for how to react to such events. However, AgentSpeak is very similar to Prolog, which is a logic programming language, meaning that it is quite different from imperative programming. This may be inconvenient if the programmer is used to object-oriented programming languages. Once acquainted with the language it is however possible to write quite complex plans in a concise manner. Both the AgentSpeak language and the general features of Jason have been quite extensively documented in [3]. This makes it possible to fully understand and exploit the features of the interpreter.

When working with Jason I found debugging quite hard. This is partly because of the differences in the debugging mechanisms compared to other tools. Often when attempting to debug, the entire system pauses, and then when attempting to perform a stepwise operation through the system, nothing happens. This has led to much trial and error and has overall slowed the development process. Generally, though, the system provides descriptive error messages, and the more acquainted one gets with the system, the easier errors are spotted.


The J-Moise+ extension is built on Jason and uses the Moise+ model, so in general the same observations apply. However, since it is an extension, it allows for more actions, and there are a few more things one needs to be aware of.

Having an organization often leads to a very well-structured result, since the user is required to really think about what the agents are supposed to do. This is even more the case in J-Moise+, since the OrgManager automatically generates goal events, meaning that the user need not consider the relation between the goals. The schemes that can be specified in the functional specification of Moise+ make coordination of tasks very easy. Simply by specifying the cardinality of a goal in a scheme, the user specifies how many agents must complete this goal before it is completed within the scheme. For instance, a goal event can be synchronized by having a sub-goal that all agents must satisfy before the actual goal event is created.

How pleasant a tool is to use also depends on its documentation; if one has to perform trial and error to make the simplest things work, the overall impression will not be great. The Moise+ organizational model has been quite extensively described in [18–20], along with a tutorial on the details of how to use it in [21]. This makes it very easy to understand how the different concepts are related and should be used. Unfortunately, it seems that some of the details have been changed and not included in an updated tutorial, meaning that some problems arise that are not documented. In particular, the deontic specification no longer uses the tag <deontic-specification/>, but instead <normative-specification/>.

Using organizational knowledge When a group or scheme is created, the agents will perceive certain events so that they are able to react accordingly. In order to be able to distinguish between similar events, annotations are added that among other things include which agent created the organizational object.

This can be used to let an agent decide not to join a group if a specific agent has created it, or to commit to a permitted mission only if it is related to a specific group. This is a great use of the Jason annotations, as it is perfectly clear how to use them. Furthermore, because they are annotations, they simply do not get in the way if the programmer chooses not to use them.
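As a hedged sketch, such a plan could look like the following, assuming the creating agent is exposed through an owner annotation on the organizational percept (the annotation, role and agent names are illustrative):

   // Sketch: only adopt the destroyer role in groups created by the leader.
   +group(team, GroupId)[owner(leader)]
      <- jmoise.adopt_role(destroyer, GroupId).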

What Moise+ is lacking in terms of organizational knowledge is the ability for an agent to know whether it is allowed to join a group before it attempts to join it. The reason for this is that if it is not permitted to join a group or play a role, an error event is created. This should be acceptable, but it is not possible for the agent to reason about the error in detail, so it will not know why it could not join the group.

There have also been certain situations where the error handling has not worked as expected. Instead of executing the plans for error handling, the system tends to ignore them completely.

Overall, both tools are quite pleasant to work with once one is acquainted with them. The lack of documentation of some features of J-Moise+ has however made the development process a little slower than it could have been.

10.5 OCMAS vs ACMAS: When to Use What?

The work with the two approaches has led to a discussion of the implementation of each team as well as of the two tools used for building the implementations. Generally speaking, one approach is not better than the other, but given the results above it is clear that there are situations more suited for one approach than for the other.

Figure 10.1 gives an overview of the main results of the comparison. The figure uses two parameters as its basis: the number of agents in the system and the structural complexity. When the system has a high structural complexity, there are greater advantages to dividing the implementation into two distinct parts: what and how. This means that while an organization can be applied to simple systems, such systems will in most cases not benefit much from it.

Notice that the two approaches overlap. The reason is that there will always be situations where it is not clear whether one system has an advantage over the other. The Team-Based Bomberman is an example of such a system. While the results show a bias towards the ACMAS in this case, this is partly due to the fact that the OrgManager creates overhead. If this problem were solved, the OCMAS could perform just as well as the ACMAS.

10.5.1 Personal software assistant

I briefly mentioned the personal software assistant (PSA) in chapter 3. For an agent responsible for assisting an end-user with certain tasks that can be automated, an organization seems inappropriate. The primary reason for this is that the system will generally consist of very few agents (in many cases only one) and the complexity will be low. In such cases there is not much sense in creating an organization, since the system will not benefit from the OS. It would probably consist of a single group with few roles that the agents can play. Building an entire organization for very few agents can in most cases not be justified.

Figure 10.1: An overview of the main results. (The original figure places the example systems discussed in this chapter, the Personal Software Assistant, Distributed Calculations, Paper Review, Team-Based Bomberman and the Multi-Agent Programming Contests of 2006 and 2009, along two axes, few vs. many agents and low vs. high structural complexity, within two overlapping regions marked ACMAS and OCMAS.)

Of course, it is possible that a PSA consists of several agents, each playing different roles with special responsibilities. In that case, depending on the overall responsibility of the PSA and the size of the system, it may be rewarding to specify an organization for the agents to follow. In many cases, however, where only a few agents are part of the organization and their responsibilities are already unambiguous, it may seem a bit far-fetched to create an explicit organization.


10.5.2 Distributed calculations

Consider a system of intelligent agents, each of which has one or more sensors. This could for instance be the “distributed sensing” scenario described in [32]. Here an agent has a clear responsibility of sensing the environment and using its calculations in some well-defined way. The role of an agent is defined by the sensors it can use, i.e. it has a static role. At all times the information it computes will be used for the same purpose.

In such cases there is no need to build an organization, since the missions are very simple and there is no explicit need for coordination. Furthermore, being in a group would not change the behavior of an agent in the system, since its role and responsibilities remain the same.

In other words: even though the system can contain many agents, the structural complexity remains low. Therefore the agents will not benefit much from making the organization explicit.

10.5.3 Paper review process

In [12] an example of the “reviewing process” of papers at a conference is considered. In this example we have one group for the submission of papers and one for evaluation. Being in a certain group then gives an agent certain responsibilities, such as evaluating the papers that are being submitted.

This is a very specific example of the use of an organization, but it can easily be generalized to situations where certain agents depend on results from other agents. By grouping such agents and creating schemes they can commit to, the general structure of the dependence relation is immediate, and the implementation is easily constructed using it.

While the number of agents may vary, it is clear that the structural complexity is higher than in the previous examples. An explicit organization will definitely make the implementation much easier, since responsibilities and acquaintances are well-structured.


10.5.4 Games

Games can be quite different, and naturally there is no definite answer to whether using an ACMAS or an OCMAS would be better. In such situations it is important to realize how complex the game is. If the game consists of one well-defined type of controllable character, an organization is probably not a good choice. However, if the game consists of several different characters, all with different possibilities, it may be reasonable at least to consider whether an organization could be useful.

This seems to indicate that using an organization for a Bomberman game may not be the best choice. In this specific case the ACMAS solution is better than the OCMAS; the agents react faster, can more easily adapt to changes and are in general more robust. However, J-Moise+ is a new system and has several issues, so the results may be biased by this fact. Still, I do feel that an organization can be justified in a game such as Bomberman if we include features from the original game that would make it more complex (e.g. power-ups). An OCMAS will benefit in this case, since it is possible to specify advanced roles and missions in the OS that would otherwise be difficult to implement.

10.6 Multi-Agent Programming Contest

The area of multi-agent systems is quite active, which can be seen from the annual multi-agent programming contest that was briefly mentioned in chapter 5. The primary aim of the competition is to “stimulate research in the area of multi-agent system development and programming” [2]. This is achieved by developing a scenario of a dynamic environment in which cooperation is the key to success. Different multi-agent systems compete in the scenario in a set of games to determine their performance. As illustrated in figure 10.1, the complexity and number of agents in the contest have increased over the years. Therefore, while the implementations would not have benefited much from an explicit organization in the first scenarios, the increased complexity has made this approach a reasonable choice.

The 2009 edition of the contest consisted of eight different multi-agent systems, of which two are of particular interest to this project: Romanfarmers [17], which was implemented using Jason, Moise+ and CArtAgO¹, and JasonDTU² [4], which was implemented in Jason. Since both systems have competed against all of the other systems in the contest – and especially against each other – the performance of these teams is quite relevant to the comparison of Jason and Moise+.

¹Common ARTifact infrastructure for AGents Open environment: a framework which among other things allows agents to cooperate and share knowledge using external functions known as artifacts.

²I was part of the JasonDTU team.

In the final results JasonDTU finished fourth while Romanfarmers finished third. Overall, the organization-centered approach therefore seems to have outperformed the agent-centered one. Of course, several things must be considered in this case: (1) the implementations were done by different teams, which means that the strategies may be quite different, (2) experience with the tools may have influenced the results, and (3) the time spent analyzing the scenario and designing the system will affect the quality of the final system. In the following I will briefly compare the solutions of the two systems. I refer to [2] for a detailed description of the scenario.

1. As previously discussed, one approach may not be better than another simply because it outperforms it in some scenarios; if both systems do not implement exactly the same strategy, the results may very well be due to differences in the strategy instead of differences in the approaches.

Overall, the two strategies are somewhat alike: both teams include the roles of a leader, an explorer and a herder³, though the Romanfarmers include a few more specialized roles. This is a clear advantage of creating an OCMAS because of the distinction between specification and implementation: it is much easier to implement additional roles.

³Note that the roles are implicit in the JasonDTU system.

2. The team behind Romanfarmers includes the creators of both Jason and Moise+. It is therefore reasonable to assume that the team has first-hand experience with the tools and is able to exploit them to the fullest.

The background of the team behind JasonDTU was a course on multi-agent systems, in which a few lectures introduced Jason. The course was concluded with a large project, in which different scenarios were solved using Jason. This means that the team may not have been able to use all features of the framework, though the project work did provide useful experience.

It is important to realize that even though the teams have different backgrounds regarding the tools, we cannot conclude that this is the reason for the performance differences in the competition. However, it is very likely to have influenced the results, and it should therefore be considered as well when comparing the systems.


3. In most cases the result will be better when more time is invested in the problem. Therefore it is also important to take this into account when comparing the performance of the two systems.

The JasonDTU system was created by myself and one other student as an intensive special course lasting three weeks. The primary focus of the course was to design and implement a multi-agent system for the contest, and the system was therefore implemented during those three weeks. While we were quite satisfied with the system, we felt that there was room for improvement which, due to the time constraints, we did not have time to look into.

The Romanfarmers’ system was implemented by researchers, and it is fair to assume that they had other responsibilities during the contest as well. In [17] it is mentioned that the use of artifacts was not included in the implemented team due to lack of time. However, the team behind Romanfarmers participated in the 2008 contest as well and was able to reuse much of their previous implementation.

Overall it seems that time has influenced the quality of both teams. While JasonDTU was able to work focused on the system for three weeks, Romanfarmers was able to reuse parts of their own implementation from the 2008 contest.

What the discussion shows is that while Romanfarmers outperformed JasonDTU in terms of matches won, it is important to take other factors into consideration when comparing systems.

The Romanfarmers had a clear advantage in being able to reuse their previous solution. Combined with the use of Moise+, this made it possible for them to add new roles to their specification and implement only those specific changes in Jason.

However, I believe the most important factor in this case is experience: while JasonDTU focused on the contest for three weeks, much of that time was also spent getting acquainted with Jason and AgentSpeak.

Overall, the main reason for Romanfarmers outperforming JasonDTU was probably that the team was simply better, both in terms of strategy and implementation; however, since experience and time play an important part as well, this does not necessarily mean that the organization-centered approach will always outperform the agent-centered approach in this scenario. The overall complexity of the scenarios in the competition has increased, meaning that it may be easier to implement a better strategy using an OCMAS, simply because the complexity is more easily handled when the structure of the solution has been made explicit.

10.7 Concluding Remarks

In the end, it is hard to say which approach is better, and a decision should be justified by doing some research on the application at hand and the possible tools for creating the system. I have introduced two tools and used these tools to build two teams of agents.

As this chapter documents, there are both advantages and disadvantages to both approaches; however, in a system such as the Team-Based Bomberman game, I believe the most appropriate approach is to use an ACMAS. The complexity of the game and of the agents simply is not high enough for specifying and implementing an organization to yield better results than simply building an agent-centered system.

It was shown that when having a structurally complex system or a system with many agents, it is reasonable to consider making the organization explicit. This allows the designer to focus on what the system goals are without having to build the agents as well.

Performance-wise there are some differences that in certain situations would mean that the only reasonable choice is an ACMAS. These results naturally apply only to the tools at hand, and the reasons for the performance difference are therefore more likely to be found in the tools than in the principles of the systems. The performance of the system implemented in J-Moise+ is not quite as good because of the overhead involved when communicating with the OrgManager. This was quite clear in the Bomberman scenario because of its rapid nature.

Jason is a fully working tool which enables the user to build complex multi-agent systems using the agent-centered approach. Moise+ is still quite new and has its limitations and issues. The overall idea of Moise+ is quite good, and it enables the user to easily specify an organization. Moreover, by using the implementation J-Moise+, the programmer is able to use Jason to actually build organization-centered systems.

Chapter 11

Conclusions

The main goals of this project were to learn about two different approaches to building multi-agent systems, an agent-centered and an organization-centered one, and to use this knowledge to investigate whether there are any advantages to making the organization of a multi-agent system explicit. By using existing platforms that focus on each approach, the idea has been to implement a team of agents for Bomberman, not only to get acquainted with the tools but also to carry out a comparison of the implementations, platforms and approaches.

I have been studying intelligent agents and multi-agent systems in general using [32] as my primary resource. Furthermore, I have referred to [30] for an approach to specifying the knowledge of an agent within a multi-agent system. This was used to specify some of the more complex situations an agent can experience and to effectively convert these into plans usable by the agents.

By taking both the agent-centered and the organization-centered approach to implementing the same strategy, I have gained insights into the advantages and disadvantages of each approach. The focus has been on a single scenario, which means that not all corners of the approaches have been investigated. The results have made several differences between the approaches clear, differences that in some situations make one approach highly advantageous compared to the other. These differences will be summarized in the following sections.


11.1 Systems

The investigation was based on a set of criteria which allows us to compare the two types of systems based not only on performance, but also on the complexity of the scenario, the number of agents, the source code and debugging. This was done to ensure that we would not falsely conclude that one type is better than the other simply based on a better strategy. While some of the criteria mostly concern the tools and implementations, two are of general interest regarding the systems: the complexity of the scenario and the number of agents in the system.

Roughly speaking, the more complex a system is (and the more agents it consists of), the more reasonable it is to build an OCMAS. The reason is that dividing the implementation into two clearly distinct parts (what and how) makes the code more well-structured and easier to maintain. Furthermore, having to think about what the system should do leads to thinking about why it should be done. This helps justify the choices one makes and helps eliminate future error-prone situations.

There will always be exceptions to this rule, and a decision should always be justified by performing some research on the task. Even though a system seems to be complex, it may not be, meaning that the solution may not benefit much from making the organization explicit.

Overall, the agent-centered team seemed to perform better than the organization-centered team. One of the reasons is that its agents are able to adapt more quickly to changes in the environment. The environment is quite fast-paced, and the OCMAS reacts a bit too slowly compared to the ACMAS. The scenario is not very complex and does not consist of that many agents. The results are therefore not surprising, but in a more complex version of Bomberman they could be quite different.

11.2 Platform

Throughout the project I have been using the interpreter Jason, which is based on the language AgentSpeak. The interpreter allows one to build practical reasoning agents using deductive rules in a Prolog-like language. While this could be a problem considering the substantial differences between imperative and logic programming languages, the results of the project have shown that being able to specify plans in a logic language results in very elegant solutions.


The interpreter has been extensively documented in [3], and the possibility of building custom agent architectures and environments has made it a perfect choice for this project.

I have used J-Moise+, which is based on Jason, to be able to build actual organizations that agents in a system can use. It is a quite new platform for building OCMAS, but the overall results have been quite satisfactory. Being able to use AgentSpeak as the primary programming language made the solution quite elegant when combined with the structural benefits of making an organization explicit. Since the platform is quite new, there do exist some issues which make error handling and debugging somewhat difficult.

Overall, Jason is quite versatile; the user is not limited to specific types of multi-agent systems. Even more so is J-Moise+, since it extends the usage of Jason. However, since it is a new platform, it still has some issues which tend to slow the speed of implementation. Once acquainted with the platforms and their minor issues, Jason and J-Moise+ are reasonable choices for building complex multi-agent systems, both agent- and organization-centered.

11.3 Implementation

In the project I have considered two specific tools, and any comparison of implementations will therefore naturally be a result of both the resulting implementation and the tool it was implemented in. For example, a very verbose tool tends to result in verbose implementations. Below I summarize my first-hand experience gained by implementing the two Bomberman teams.

Since J-Moise+ uses Jason, some of the features implemented for the ACMAS (mainly internal actions, such as pathfinding) are also used in the OCMAS. Thus these features cannot be compared implementation-wise.

Agent-centered approach The main advantage of implementing an ACMAS is the freedom one has with regard to structure. This made it possible to implement functionality in a concise way, since there are no structural constraints as to where the functionality must be implemented. However, this advantage also easily makes the code quite messy if the programmer is not careful enough.

The freedom with regard to structure also means that the programmer has to build everything from the ground up: both functionality and structure. This means that the programmer has to decide what an agent should do, how it should do so, and finally how to relate the functionality to goal-directed behavior.

Experience has shown that this tends to be the biggest problem when working with Jason. If one is not extremely careful when implementing how to move from one goal to another, and how to drop an intention if it is no longer reasonable to follow, the agents may perform unintended actions.

Despite some of the issues that have arisen during the work with the ACMAS team, I am quite satisfied with the results. The agents follow a clear strategy and are able to cooperate to complete their tasks.

Organization-centered approach By taking an organization-centered approach, the programmer forces himself to consider the structure of the implementation before actually implementing the system. The reason is that in order to build a system in J-Moise+, one has to first make the organization of that system explicit using Moise+. More precisely: the use of Moise+ forces the user to carefully consider what and how. During the specification of the organization, the focus is on what the agents should do. Only then, when the organization has been specified, will the focus shift to how the agents reach their goals.

Since the structure is specified in the organization, it is only necessary to implement plans for achieving the goals of the system. The programmer does not need to consider what to do when a goal has been achieved; the organization handles this. This makes Moise+ very straightforward to work with, and the code is generally quite clear.

Having clear code and a straightforward approach however comes at a price: plans that are otherwise quite simple become quite verbose. That can actually lead to the opposite effect: making the code less clear.

Moise+ is a new tool and still has some limitations. Working with it is quite straightforward, and I am overall quite satisfied with the resulting system. Though it does not perform quite as well as the ACMAS team, it follows the same strategy, and the agents are able to cooperate in order to meet their goals.


11.4 Future Work

The focus of the project has been on the differences between different types of multi-agent systems and in particular on the use of two specific tools. It was shown that both types of systems are useful in different situations and that there is no definitive answer to when one system is a better choice than another. The main reason is the many factors the programmer must consider when choosing between the agent- and organization-centered approaches. These factors include the quality of the implementation, the actual tools used and the strategy to be implemented.

While I have been able to discuss the differences and make suggestions for which system is most suitable in different situations, it would be interesting to be able to create systems of both types which exhibit the same behavior in most situations. This would make it possible to compare the actual performance difference between the systems. However, since this requires specialized systems, there is a chance that the results would not apply to real-world applications.

11.4.1 Other research areas

During the work with this project I have run into a few aspects of multi-agent systems that I feel could improve the quality of the implementations. These aspects are however too broad to be included in this project, but I will briefly discuss how I believe they can be used to improve the implementations.

Temporal aspects In many dynamic multi-agent scenarios it is possible to reason about when certain events are likely to occur. This is the case when an agent performs an action: it will most likely expect certain feedback from the environment. In many cases some of these events may not happen immediately after an action is executed. In other cases, some events happen regularly and can be predicted.

In the case of Bomberman, there is a certain delay from when a bomb is placed until it explodes. By being able to reason about this delay, the agents might be able to make better choices: instead of choosing the “safe” way around a bomb, the agent can choose to pass a bomb directly if it can conclude that doing so is safe (i.e. if the bomb is not going to explode before it has been passed).
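As a sketch of the idea, such timing knowledge could be expressed as a Prolog-style Jason rule. The literals bomb(X,Y,TicksLeft), pos(MyX,MyY) and steps_to_pass are hypothetical, since the actual environment and path-finding code do not expose the remaining fuse time in this form:

   // Sketch: a bomb cell can be crossed safely if the agent can get past it
   // before the bomb explodes (all literals are hypothetical).
   safe_to_pass(X, Y) :- bomb(X, Y, TicksLeft) &
                         pos(MyX, MyY) &
                         steps_to_pass(MyX, MyY, X, Y, Steps) &
                         Steps < TicksLeft.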

While it is possible to achieve this by implementing the functionality in the agent architecture of Jason, it would be interesting to include such functionality directly in the reasoning mechanism of a multi-agent system. A combination of epistemic logic and branching-time temporal logic is defined in [29]. The resulting logic is called Alternating-time Temporal Epistemic Logic (ATEL) and can be used to express how groups of agents can achieve certain states in the future. The logic of ATEL is likely to be a good starting point for implementing such mechanisms.

Combining deontic and epistemic logic One of the things I realized during the work with the project is that in order to build an OCMAS, one has to combine an existing tool for building ACMAS with an organizational layer, allowing the agents to reason about their organization. Recall that dynamic epistemic logic allows one to create statements such as K_a ϕ → ACT_a(ψ), i.e. if the agent a knows that ϕ is true, then it performs the action ψ. However, there is nothing in this statement that forces the agent to do so. By adding the operators of deontic logic it would instead be possible to say K_a ϕ → O ACT_a(ψ), i.e. if the agent a knows that ϕ is true, then it ought to perform the action ψ.

There are a few examples in the literature of attempts to perform this “merge”. Work by Wiegel, Van den Hoven and Lokhorst has resulted in a combination of deontic, epistemic and action logic called DEAL [28, 31]. DEAL makes it possible to create statements such as the one above. Another interesting project is the Beliefs-Obligations-Intentions-Desires (BOID) architecture [7], in which the classical BDI architecture is extended with obligations.

Planning The behavior of agents in Jason is a result of plans created in AgentSpeak. The plans are created before the system is running, and it is therefore not possible for the agents to develop new plans during a simulation. In other words, the plans are created offline and are static. In chapter 2 I briefly mentioned how to implement means-ends reasoning by choosing a plan using the agent’s beliefs, intentions and available actions. In many systems, such as Jason, plans are created offline, but it is not necessary to do so.

An example of an automated planner is STRIPS (Stanford Research Institute Problem Solver) [25]. In STRIPS, one specifies an initial state, a desired goal state and a set of available actions. Using the actions, the planner will attempt to create a plan for achieving the goal state given the initial state.

It is not hard to see that automated planning applies to multi-agent systems as well. An agent in an environment in which it wants to achieve a certain state is perfectly able to use automated planning for this purpose: the initial state is the agent’s current perspective on the environment, the goal state is the state which the agent wants to achieve, and the actions are the actions available to the agent. An attempt to combine the BDI agent system with hierarchical task network planners has been made by Sardina, de Silva and Padgham [10, 26].

Constraining intentions The organization of a multi-agent system makes it possible for the agents to automatically receive new tasks and goals which they are then supposed to react to. As a result, the agent will intend to complete these goals. But there may be situations where an intention is no longer reasonable to commit to – it could be impossible or just no longer feasible to do so.

In most cases it is the responsibility of the programmer to create a plan that specifies the situations in which an intention is no longer reasonable. This can be very error-prone and tends to lead to situations in which an intention is not dropped even though it should have been.

An idea could be to specify certain states of the environment in which specific intentions are not reasonable to follow. With such globally defined rules or constraints, the system would be able to automatically drop intentions when needed. Note that this leads to a more complicated agent control cycle, possibly resulting in a decrease in performance. Even so, it may result in less error-prone systems in which it is easier to get the agents to behave as intended.
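A rough sketch of how this could at least be emulated with today's Jason primitives is given below; the unreasonable/1 rules and the step percept are illustrative:

   // Sketch: state-dependent rules marking intentions as unreasonable...
   unreasonable(explore) :- enemy(_, _).   // exploring is pointless once an enemy is known

   // ...and a generic plan that drops any such intention when the world changes.
   +step(_)
       : unreasonable(Goal) & .intend(Goal)
      <- .drop_intention(Goal).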

11.5 Conclusive Remarks

The area of multi-agent systems is still somewhat new and is continuously growing. With the addition of the organizational aspects it has been made possible to create even more sophisticated and advanced systems. My comparison has shown that both approaches have advantages and disadvantages and are well-suited for different situations.

There is still much work to be done in the area of organizational multi-agent systems, specifically on the Moise+ organizational model, but also on the principles of OCMAS in general.

The tools available make it possible to implement advanced systems (both ACMAS and OCMAS) which are very useful in both research and practical applications, and there is no doubt that the area will continue to develop even more efficient and intelligent solutions to research problems and real-world applications.


Appendix A

Source

This report does not contain the source code for the implementation, as it would require a large number of pages which would be hard to read and understand. Instead, the source code can be downloaded for further study and testing from http://www.student.dtu.dk/~s052271/msc/.


Bibliography

[1] Tomasz Babczynski, Zofia Kruczkiewicz, and Jan Magott. Performance Comparison of Multi-agent Systems. In Proc. Central and Eastern European Conference on Multiagent Systems (CEEMAS), pages 612–615, 2005.

[2] Tristan M. Behrens, Jürgen Dix, Jomi Hübner, and Michael Köster. Multi-Agent Contest. http://www.multiagentcontest.org/, April 2010.

[3] Rafael H. Bordini, Jomi Fred Hübner, and Michael Wooldridge. Programming Multi-Agent Systems in AgentSpeak using Jason. John Wiley & Sons Ltd, 2007.

[4] Niklas Skamriis Boss, Andreas Schmidt Jensen, and Jørgen Villadsen. Developing Artificial Herders Using Jason. In Jürgen Dix, Michael Fisher, and Peter Novák, editors, Proceedings of the 10th International Workshop on Computational Logic in Multi-Agent Systems, 2009.

[5] Michael E. Bratman. What Is Intention? In Philip R. Cohen, Jerry L. Morgan, and Martha E. Pollack, editors, Intentions in Communication, chapter 2. MIT Press, 1990.

[6] Paolo Bresciani, Anna Perini, Paolo Giorgini, Fausto Giunchiglia, and John Mylopoulos. Tropos: An Agent-Oriented Software Development Methodology. Autonomous Agents and Multi-Agent Systems, 8:203–236, 2004.

[7] Jan Broersen, Mehdi Dastani, Joris Hulstijn, Zisheng Huang, and Leendert van der Torre. The BOID Architecture. In Proceedings of the Fifth International Conference on Autonomous Agents, 2001.

[8] Cristiano Castelfranchi. Commitments: From Individual Intentions to Groups and Organizations. Proceedings of the First International Conference on Multiagent Systems, pages 41–48, 1995.


[9] Lawrence Cavedon and Liz Sonenberg. On Social Commitment, Roles and Preferred Goals. Proceedings of the 3rd International Conference on Multi Agent Systems, 1998.

[10] Lavindra P. de Silva, Sebastian Sardina, and Lin Padgham. First principles planning in BDI systems. In Carles Sierra, Cristiano Castelfranchi, Keith S. Decker, and Jaime Simão Sichman, editors, Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS), volume 2, pages 1001–1008, May 2009.

[11] Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi. Reasoning About Knowledge. MIT Press, 1995.

[12] Jacques Ferber, Olivier Gutknecht, and Fabien Michel. From Agents to Organizations: an Organizational View of Multi-Agent Systems. Agent-Oriented Software Engineering (AOSE) IV, pages 214–230, 2004.

[13] James Garson. Modal Logic. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/archives/win2009/entries/logic-modal/, Winter 2009 Edition.

[14] Mahdi Hannoun, Olivier Boissier, Jaime Simão Sichman, and Claudette Sayettat. MOISE: An Organizational Model for Multi-agent Systems. Proceedings of the International Joint Conference, 7th Ibero-American Conference on AI, 15th Brazilian Symposium on AI, 2000.

[15] Jörg Hansen, Gabriella Pigozzi, and Leendert van der Torre. Ten Philosophical Problems in Deontic Logic. Dagstuhl Seminar Proceedings, 2007.

[16] Risto Hilpinen. Deontic, Epistemic, and Temporal Modal Logics. In Dale Jacquette, editor, A Companion to Philosophical Logic, chapter 31. Blackwell Publishing Ltd., 2006.

[17] Jomi Fred Hübner, Rafael H. Bordini, Gustavo Pacianotto Gouveia, Ricardo Hahn Pereira, Gauthier Picard, Michele Piunti, and Jaime Simão Sichman. Using Jason, Moise+ and CArtAgO to Develop a Team of Cowboys. In Jürgen Dix, Michael Fisher, and Peter Novák, editors, Proceedings of the 10th International Workshop on Computational Logic in Multi-Agent Systems, 2009.

[18] Jomi Fred Hübner, Jaime Simão Sichman, and Olivier Boissier. A Model for the Structural, Functional, and Deontic Specification of Organizations in Multiagent Systems. Proceedings of the 16th Brazilian Symposium on Artificial Intelligence, 2002.


[19] Jomi Fred Hübner, Jaime Simão Sichman, and Olivier Boissier. S-Moise+: A Middleware for Developing Organised Multi-Agent Systems. Proceedings of the International Workshop on Organizations in Multi-Agent Systems, from Organizations to Organization Oriented Programming in MAS, 2005.

[20] Jomi Fred Hübner, Jaime Simão Sichman, and Olivier Boissier. Developing Organised Multi-Agent Systems Using the Moise+ Model: Programming Issues at the System and Agent Levels. International Journal of Agent-Oriented Software Engineering, 2007.

[21] Jomi Fred Hübner, Jaime Simão Sichman, and Olivier Boissier. Moise+ Tutorial. http://moise.sourceforge.net/, 2008.

[22] Saul A. Kripke. Semantical Considerations on Modal Logic. Acta Philosophica Fennica, 16:83–94, 1963.

[23] J.-J. Ch. Meyer, F. P. M. Dignum, and R. J. Wieringa. The Paradoxes of Deontic Logic Revisited: A Computer Science Perspective (Or: Should computer scientists be bothered by the concerns of philosophers?). Technical report, Department of Information and Computing Science, Utrecht University, 1994.

[24] Lin Padgham and Michael Winikoff. Developing Intelligent Agent Systems. John Wiley & Sons Ltd, 2004.

[25] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice-Hall, 2003.

[26] Sebastian Sardina, Lavindra P. de Silva, and Lin Padgham. Hierarchical planning in BDI agent programming languages: A formal approach. In Hideyuki Nakashima, Michael P. Wellman, Gerhard Weiss, and Peter Stone, editors, Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS), pages 1001–1008, May 2006.

[27] Gaston Eduardo Tagni and Dejan Jovanovic. Comparison of Multi-Agent Systems.

[28] Jeroen van den Hoven and Gert-Jan Lokhorst. Deontic Logic and Computer-Supported Computer Ethics. In Metaphilosophy, 2002.

[29] Wiebe van der Hoek and Michael Wooldridge. Cooperation, Knowledge and Time: Alternating-time Temporal Epistemic Logic and its Applications. In Studia Logica, volume 75, pages 125–157, 2003.

[30] Hans van Ditmarsch, Wiebe van der Hoek, and Barteld Kooi. Dynamic Epistemic Logic. Springer, 2008.


[31] Vincent Wiegel, Jeroen van den Hoven, and Gert-Jan Lokhorst. Privacy, deontic epistemic action logic and software agents. In Ethics and Information Technology, 2005.

[32] Michael Wooldridge. An Introduction to MultiAgent Systems. John Wiley & Sons Ltd, 2009.

[33] Michael Wooldridge, Nicholas R. Jennings, and David Kinny. The Gaia Methodology for Agent-Oriented Analysis and Design. Journal of Autonomous Agents and Multi-Agent Systems, 3:285–312, 2000.