
Simulating Sequential Decision-Making Process of Base-Agent Actions in a Multi Agent-Based Economic

Landscape (MABEL) Model

Konstantinos T. Alexandridis1, Bryan C. Pijanowski2, and Zhen Lei3

This paper has not been submitted elsewhere in identical or similar form, nor will it be during the first

three months after its submission to the Publisher.

1 Department of Agricultural Economics, 215 Cook Hall, Michigan State University, East Lansing, Michigan

48824 ([email protected]). To whom all correspondences should occur.

2 Department of Zoology, 203 Natural Science Building, Michigan State University, East Lansing, Michigan

48824 ([email protected])

3 Department of Computer Science and Engineering, 3115 Engineering Building, Michigan State University,

East Lansing, Michigan 48824 ([email protected])


Simulating Sequential Decision-Making Process of Base-Agent Actions in a Multi Agent-Based Economic

Landscape (MABEL) Model†

Konstantinos T. Alexandridis, Bryan C. Pijanowski, and Zhen Lei

Abstract

In this paper, we present the use of sequential decision-making process simulations for

base agents in our multi-agent based economic landscape (MABEL) model. The sequential

decision-making process described here is a data-driven Markov-Decision Problem (MDP)

integrated with stochastic properties. Utility acquisition attributes in our model are generated

for each time step of the simulation. We illustrate the basic components of such a process in

MABEL, with respect to land-use change. We also show how geographic information

systems (GIS), socioeconomic data, a Knowledge-Base, and a market-model are integrated

into MABEL. A Rule-based Maximum Expected Utility acquisition is used as a constraint

optimization problem. The optimal policy of base-agents’ decision making in MABEL is one

that maximizes the differences between expected utility and average expected rewards of

agent actions. Finally, we present a procedural representation of extracting optimal agent

policies from socio-economic data using Belief Networks (BN’s). A sample simulation of


MABEL, as it is coded in the SWARM modeling environment, is presented. We conclude

with a discussion of future work that is planned.

Keywords: Belief Networks; land-use; MABEL; Markov-Decision Process (MDP); multi-

agent systems; Utility-based agents;

Introduction

Agent-based modeling is a form of artificial intelligence simulation in which

autonomous agents interact, communicate, evolve, learn, and make complex decisions within

a real time simulation framework (Holland, 1975). Multi-agent systems present a bottom-up

approach to modeling artificial intelligence of individuals (Kohler and Gumerman, 2000).

Such systems are not developed to simulate a specific task, but are rather designed generally

for a common solution to a problem (Alexandridis and Pijanowski, 2002; Bond and Gasser,

1988; Murch and Johnson, 1999; Parker, et al., 2001). Multi-agent intelligent systems are

constructed to represent and simulate problem-solving situations, where collaborative and

conflict behaviors can co-occur. Indeed, as in real human and natural systems, these types of

interactions exist in our everyday life. The main entity within a multi-agent system is an

intelligent agent, which is a computational entity, designed to achieve its internal goals

through proactive and reactive behavior, autonomy, mobility, learning, cooperation,

communication, and coordination simulations (Augusto, 2001; Brenner, et al., 1998; Conte

and Paolucci, 2001; Edmonds, 2000; 1997; Ferber, 1999; Gimblett, et al., 2002;

Mohammadian, 2000; Padget, 1999; Weiss, 1999).


Multi-agent systems are being used to simulate a variety of real-world behavioral

situations (Holland, 1975). Agent based models have been developed to understand artificial

human societies (e.g., Epstein, et al., 1996), evolution of cooperation in birds (e.g., Axelrod,

1997), the life histories of animals in dynamic landscapes (DeAngelis, et al., 2001) and the

evolution of economic systems (e.g., Holland and Miller, 1991), to name a few. These studies

emphasize the need to carefully pose the behavior in computer programming frameworks that

simulate individual behavior, interactions, relationships and social structures.

The purpose of this paper is to present an overview of our multi-agent based economic

landscape (MABEL) model that simulates agent behavior during land transactions. There are

several aspects of MABEL that are presented here. We first describe the types of base agent

we have developed and how spatial and socioeconomic data are stored, referenced and

updated within a Knowledge-Base. Second, we provide an overview of the core components

of our agent behavior model; namely state space, actions, the transition model and the reward

function. Third, we show how we derive an agent’s beliefs and expectations with respect to

actions and expectations for the next time step. We then show how these are combined into a

dynamic programming utility that is based on a Markov Decision Problem. We present some

output of the MABEL model and then describe some of the future work that is planned.

Base Agents in MABEL

Base agents in MABEL are agents that own land, designated as parcels, on a

landscape, the fundamental simulation environment. By contrast, non-base agents in MABEL

represent computational entities that do not necessarily hold geographic attributes, and thus,

they are not displayed on a GIS map. Examples of non-base agents are policy-makers, local


and regional planners, organizational and institutional agents, etc. Land-use based attributes

are the main drivers of the simulation, and land-use driven acquisition of land in a market

model, represents the basic framework for determining these base agents’ actions. Base agents

in MABEL are of various categories: farmer-agents, resident-agents, forestry-agents, and so

on (Table 1). Nevertheless, the assignment of agent classes, types and categories is indicative:

the MABEL architecture exceeds land-use specific classifications, and can be applied to any

land-use classification derived directly from GIS data acquisition. Hence, the classification

used and prescribed in this paper represents a current employment of the MABEL architecture

which has been used for pilot studies on parcel-based GIS data for several counties and

townships in northern Michigan. Our forestry agents here are less descriptive of true foresters

that might occur in this area, due to possible correlations with other agent types and existence

of multiple land use classes. Initial spatial attributes of these agents are derived from digitized

parcel data and interpretation of land use from aerial photographs (see, Brown, et al., 2000 for

details). The parcel database is stored in a GIS (Figure 1). The GIS is used to provide spatial

attributes for input into MABEL. Each parcel-based GIS block may have nested layers of

information, or geospatial variables: the spatial attributes of a parcel (e.g., shape, area,

perimeter, centroids, and other landscape attributes), as well as location information (land use,

land cover, accessibility, soil type, topography, and other features). The attribute and feature

information is stored as a table in text format for use as input to MABEL.
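To make the data flow concrete, the following minimal sketch reads such a text-format attribute table; the column names are invented examples of the spatial and location attributes listed above, not the actual MABEL schema.

```python
# Hypothetical sketch: parse a tab-delimited parcel attribute table exported
# from the GIS as text. Column names (parcel_id, land_use, area_acres,
# perimeter_m, soil_type) are illustrative, not the real MABEL fields.
import csv
import io

raw = """parcel_id\tland_use\tarea_acres\tperimeter_m\tsoil_type
101\tfarm\t40.0\t1600\tloam
102\tresidential\t0.5\t180\tsand
"""

parcels = list(csv.DictReader(io.StringIO(raw), delimiter="\t"))

# Convert numeric fields before handing the rows to the simulation.
for p in parcels:
    p["area_acres"] = float(p["area_acres"])
    p["perimeter_m"] = float(p["perimeter_m"])

print(len(parcels), parcels[0]["land_use"])  # 2 farm
```

In practice such rows would be joined with the socio-economic records described next to seed each agent's Knowledge-Base entry.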

The geospatial/GIS component of data acquisition in MABEL is coupled with a socio-

economic data attribute component to form a dynamic Knowledge-Base for the base agents.

We use the term Knowledge-Base to reflect the fact that the table is dynamic and a source of

information for intelligent learning (Davis and Lenat, 1982; Pau and Gianotti, 1990; Schmoldt


and Rauscher, 1996). Our use of a Knowledge-Base is consistent with that of Guida and

Tasso (1994) who define a Knowledge-Base System as: “a software system capable of

supporting the explicit representation of knowledge in some specific competence domain and

of exploiting it through appropriate reasoning mechanisms in order to provide high-level

problem-solving performance”. However, socio-economic variables in MABEL are most

often population-specific drivers such as demographic, economic, social, and housing

characteristics. Integration among different parts of the Knowledge-Base is accomplished by

linking all variables through parcel-based and land-use type correlation matrices (Figure 3)

using SPSS based routines. The socio-economic data flows are arranged into two parts, the

first of which contains the raw data used for the simulation, while the second part is a script

code that queries abstract definitions, variable values and assessments on the variables

included in the raw data. In this way, future MABEL outputs can be introduced back to SPSS

for assessment and interpretation using the abstract script routines. Furthermore, our

construction of a dynamic table within the Swarm (Swarm Intelligence Group, 2000)

simulator of MABEL is used by the base agents to acquire information about their
environmental state space. Each row of the dynamic table contains records for each base-

agent participating in the initialization stage such that each row of the dynamic table extends

the variable information for the major components (GIS/spatial, geographic attributes, socio-

economic variables, Bayesian coefficients) within the Knowledge-Base. The final ten

columns of the table are constructed and reserved to contain the agents’ memory, or history,

of the previous ten steps of the simulation (Figure 3). A MABELmodel module, serving as a

simulation environment, is responsible for assigning and synchronizing the dynamic

Knowledge-Base attributes among base agents, and establishing communication paths


between agents and agent categories in such a way that the stream or flow of messages is
also incorporated in the Knowledge-Base as a transition model (Stefansson, 2000).
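The dynamic table described above can be sketched minimally in code; the class and field names below are hypothetical stand-ins for the actual Swarm data structures, but the ten-slot memory mirrors the reserved history columns.

```python
# Minimal sketch (names hypothetical) of one dynamic Knowledge-Base row:
# GIS/spatial attributes, socio-economic variables, and a fixed-length
# memory of the previous ten simulation states.
from collections import deque

class KnowledgeBaseRow:
    HISTORY_LEN = 10  # the final ten columns are reserved for agent memory

    def __init__(self, agent_id, spatial, socio_economic):
        self.agent_id = agent_id
        self.spatial = dict(spatial)                # e.g. area, land use
        self.socio_economic = dict(socio_economic)  # e.g. PUMS-derived vars
        self.history = deque(maxlen=self.HISTORY_LEN)

    def update_state(self, state_label):
        """Record this time step's state; the oldest entry drops off."""
        self.history.append(state_label)

row = KnowledgeBaseRow(1, {"land_use": "farm", "area_acres": 40.0},
                       {"income": 35000})
for step in range(12):
    row.update_state(f"s{step}")
print(list(row.history))  # only the last 10 states are retained
```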

The sequential decision-making process in MABEL is a utility-based framework of

interactions. Base agents aim to optimize their decisions using the Maximum Expected Utility

(MEU) principle, (Glymour, 2001; Joyce, 1992; Lange, 2002; Smithson, 2000) throughout the

sequences of their actions. This decision-making process for each step is stochastic, rather

than deterministic. This is an important characteristic of our MABEL model. With the

deterministic form of an expected utility function, the outcome of an agent’s actions can be

predicted at each time step, because the end state is already estimated and decisions during
each time step are made to reach that ultimate goal. Thus, the simulation occurs regardless of

the accessibility component (Barnden and Srinivas, 1990; Doucet, et al., 2001; Feyock, et al.,

1993; Schwab, 1988; Sichman and Hubner, 1999; Servat, 1998; Smithson, 2000; Vakas-

Doung, 1998; Ward, 2000) of their state space. In such a case, a tree-search algorithm would

be adequate to compute each agent’s actions sequentially all the way to the end of the

simulation. In contrast, a stochastic decision-making process implies that an agent has no way

to specifically predict its next state after any given sequence of future actions (Russell and

Norvig, 1994; Troitzsch, 1999). While MABEL agents can be assumed to present a

deterministic pattern of intentions for their decision-making, the existence of a market-model

(Ballot and Taymaz, 1999; Jager, et al., 2001; Janssen and Jager, 1999; Kerber and Saam,

2001; Kirman and Salmon, 1995; Plantinga and Provencher, 2001; Shubik and Vriend, 1999)

within the simulation generates unpredictability, uncertainty, and variation between expected

and actual outcomes of the agents’ actions in each time-step. Thus, as the agents establish

their intentions using the MEU principle, the final outcome of their actions presents a real-


time utility optimization rule, as opposed to a long-term expected utility, of their actions. In

some sense, this reflects a myopic or selfish behavior rule of the base-agent (Sigmund, 1998).

A final clarification on the nature of the utility-based approach of the agents is needed.

While we are assuming a stochastic decision-making process for the agents, one must not

confuse this with the notion of stochastic utility (Brock and Durlauf, 2000; Gärdenfors and

Sahlin, 1988; Hämäläinen and Ehtamo, 1991; Kuriyama, et al., 2002; Lange, 2002; Li and

Löfgren, 2002; Polasky, et al., 2002; Smithson, 2000; Wakker, et al., 2000). In MABEL, the

utility itself is not stochastic: the accession and calculation of the expected utility within

MABEL is an observed, data-driven process. In artificial intelligence parlance, the
agents' decisions are made within an accessible environment, where the agents' percepts or

sensors will be able to fully identify their current state with each time-step4. This notion

implies that an agent is fully aware of its state before attempting to make its decisions, or

calculating its optimal expected utility. This assumption in MABEL is a direct consequence of

the rational agent assumption (Castro Caldas and Coelho, 1999; Dal Forno and Merlone,

2002; Edmonds, 1999; Macy and Castelfranchi, 1998; Paredes and Martinez, 1998; Roehrl,

1999; Steiner, 1984; Wolozin, 2002). Whilst noise may be encountered in the form of

uncertainty in different phases of the simulation and/or decision-making process of the base-

agents, it is not assumed within the base-agents' own knowledge-base acquisition. In these
terms, special attention has been paid to selecting the appropriate data for the

4 The policy-making framework in MABEL and the policy-maker agents incorporated in it, demonstrate the

opposite spectrum of the accessibility issue: they present a decision-making process in an inaccessible

environment, where the agents’ percepts are not adequate to completely identify their state, and a Partially

Observable Markov Decision Process (POMDP) is assumed.


initialization stage of the simulation. The socio-economic database used for this purpose is the

Public Use Microdata Sample (PUMS), the long-form of the US Census questionnaire for the

five percent of the population (U.S. Bureau of the Census, 1995). These data provide us with

a complete socioeconomic and demographic profile of real individuals.

A Markov Decision Process

During each time step, MABEL agents calculate their expected utility for every

possible action that they can perform, taking into account their state as determined from their

Knowledge-Base. It is possible that a mapping (Barnden and Srinivas, 1990; Doucet, et al.,

2001; Feyock, et al., 1993; Smithson, 2000) from a given state to possible actions can be

made, and from a sequence of states to a sequence of multiple possible actions that can be

performed for each base agent. This mapping of the state-space is called an agent's policy

(Augusto, 2001; Banerji, 1990; Baptiste, et al., 2001; Boden, 1996; Cantoni, 1994;

Cartwright, 2000; Das, et al., 1999; Edmonds, 2000; Fonlupt, et al., 2000; Hirafuji and

Hagan, 2000; Kennedy, et al., 2001; Klugl, 2001; Rouchier, 2001; Scott, 2000; Wagman,

2002). The dynamic Knowledge-Base incorporated into the MABELmodel module contains

such a mapping; it is the agents’ environment history component. A set of transition

probabilities can then be calculated to present all the possible transformations of states for all

actions of the base-agents. The sequential decision-making process representing this transition

from states to actions in MABEL is a Markov Decision Process (MDP): a Markovian
problem that determines optimal agent policies within a stochastic, accessible

environment from a known transition model (Mahadevan, et al., 1997; Russell and Norvig,

1994).


A process is said to be Markov when the assessment of future actions or states is

independent of the past environment history given a set of properties that describe the state-

space environment for the present. According to Russell and Norvig (1994, p. 500), “(…) we

say the Markov property holds if the transition probabilities from any given state depend only

on the state and not on previous history”. In these terms, MABEL base-agents’ utility-based

decisions are Markov: the calculation of their optimal expected utility is based only on the

Knowledge-Base records assigned to a given, single state. Similarly, agents’ decisions affect

the future only through the next time step, and for that reason an update function that

reevaluates their state-space environment is performed at every time-step.

The Markov Decision Process (MDP) for MABEL takes into account a finite, yet
adequately large, set of possible states associated with land use classes and the socio-
economic status of an agent n, denoted as S_i^n. For each agent, the state-space can be
represented as,

S_i^n = \{ (s_{i,k_1}^{LU,n}, s_{i,k_1,l}^{Pums,n}), (s_{i,k_2}^{LU,n}, s_{i,k_2,l}^{Pums,n}), \ldots, (s_{i,k}^{LU,n}, s_{i,k,l}^{Pums,n}) \} = \bigcup_{n=1}^{N} \left( s_{i,k}^{LU,n} \cap s_{i,k,l}^{Pums,n} \right)    (1)

where,

s_{i,k}^{LU,n} : the state corresponding to a given land use class, k, that an agent n acquires on the ith state.

s_{i,k,l}^{Pums,n} : the state corresponding to a given set of socio-economic variables, l, of a dataset correlated to a land use class k, that an agent n acquires on the ith state.

n, N : n is the number of agents participating on the ith state. This number dynamically changes for each time step, as new agents are created by the simulation. The total number of agents is denoted by N, which is the maximum number of agents that exist in the simulation throughout the i steps.

A base agent can perform an action A_i, out of a finite set of possible actions (out of an

action space A) related to its land acquisition. Thus, the set of actions available for an agent n

within each state is,

A_i^n = A^n(s_i)    (2)

where,

A^n(s_i) : the set of actions that an agent n can perform on its ith state (s_i).

For a given MDP in MABEL, we can partition the action space into discrete actions.

Throughout this paper, we will describe the MDP as a market-model decision-making

process, and thus, we have two discrete actions that an agent can perform at each time-step:

A_i = \bigcup_{n=1}^{N} A_i^n = \bigcup_{n=1}^{N} \{ a_i^{buy}, a_i^{sell} \} \approx \{ a_{buy}, a_{sell} \}_i^n    (3)

where,

a_{buy} : the buying-land action that agent n performs; and

a_{sell} : the selling-land action that agent n performs.

We can then construct transition matrices for the state-space and action-space of the

base-agents that represent a “one-step” dynamic of the simulation (Ballot and Taymaz, 1999;

Fliedner, 2001; Haag and Liedl, 2001; Russell and Norvig, 1994), which is the way that

agents transform their states to actions. Each transition matrix corresponds to a unique time

step of the simulation, and it can be constructed using conditional probabilities. A conditional


probability that a base-agent will transform its state s_i to the next one, s_{i+1}, by performing an
action A_i = \{ a_{buy}, a_{sell} \}, will be (Goodman, et al., 1991; Russell and Norvig, 1994; Schwab,
1988),

P_{ss'}^a = P(s' = s_{i+1} \mid s = s_i, A_i = a)    (4)

where,

P_{ss'}^a : a transition probability matrix.
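Equation (4) can be illustrated as one row-stochastic matrix per action; the state labels and probability values below are invented for illustration and are not drawn from the MABEL data.

```python
# Hedged sketch of a per-action transition probability matrix P^a_{ss'}.
# Entry [i, j] of P[a] is P(s' = state j | s = state i, action a).
# States and numbers are illustrative assumptions only.
import numpy as np

states = ["low_welfare", "mid_welfare", "high_welfare"]

P = {
    "buy":  np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.1, 0.3, 0.6]]),
    "sell": np.array([[0.7, 0.2, 0.1],
                      [0.4, 0.4, 0.2],
                      [0.2, 0.4, 0.4]]),
}

# Each row must be a proper probability distribution over next states.
for a, M in P.items():
    assert np.allclose(M.sum(axis=1), 1.0), f"rows of P^{a} must sum to 1"

# Probability of moving from mid_welfare to high_welfare under "buy":
i, j = states.index("mid_welfare"), states.index("high_welfare")
print(P["buy"][i, j])  # 0.3
```

One such matrix per action, per time step, is what the text's "one-step" dynamic amounts to.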

The fact that base-agents perform specific actions (buy and sell), implies that their

next state will be affected by their previous decision. Yet, buying and selling of land, for a

farmer, forester, or a resident base-agent, significantly affects the specific action the agent

performs. For example, a farmer-agent selling its land may improve its socio-economic

status, but at the expense of its available assets in terms of land acreage. In terms of its welfare,

this transaction may improve its available income in the short-term, yet it has serious

consequences for its long-term welfare, and its ability to achieve higher yields and further

farm income in the future. In other words, there is a need to distinguish between actions that
bear positive effects and actions that bear negative effects, so that an agent has
comprehensive knowledge of the consequences of its actions. This is achieved by introducing a
reward function in the simulation that proportionally rewards changes in the welfare of an agent n
resulting from a specific action a. Such a reward function, R, can be denoted as,

R_{ss'}^a = E(r \mid s = s_i, A_i = a, s_{i+1} = s')    (5)

where,

R_{ss'}^a : an expected reward function.


E(\cdot) : an agent's expectation for a given reward r, conditional on an action a that transformed the agent's state s_i to the next one, s_{i+1}. In a sense, the expected reward r is also conditional on the transition probability of that state, P_{ss'}^a.

These four factors, that is, states (S), actions (A), transition model (P), and reward

function (R), determine MABEL base-agents' behavior over time. In other words, each base-
agent has to determine its series of actions as a function f{S, A, P, R}. At the same time,
MABEL agents have as their ultimate goal actual utility optimization. Actual utility

in terms of MABEL base-agents refers to the utility that an agent acquires from performing an

action a that has a direct effect on his/her welfare. In the case of MABEL, an agent's welfare

is defined in terms of available state variables, which are the PUMS socio-economic

variables. Optimizing welfare thus means the agent will attempt to improve his/her social

conditions, such as increased income, property value, social status/indicators, and so on.

Evaluating Base-Agents’ Beliefs and Expectations

When we defined the actual utility of an agent, a distinction was created: namely,
between the actual (or real) utility and the expected utility (EU), as defined
earlier. The calculation of an agent's EU has to take into account any relevant reward
associated with a particular action. A reward, though, cannot be considered an increase in a
person's real welfare, since it does not alter the state variables; it is rather a form of "hidden"
variable, calculated in equation (5), for practical computational reasons. Changes in an agent's
welfare can be considered as the impact of a specific action on the agent's specific state-
variables.


A given sequence of utility estimations for MABEL base-agents uses initial estimates

of the state-variables from the coupled GIS/Socio-economic Knowledge-Base components

maintained by the MABELmodel module. In a goal-driven conceptual framework, such a

utility must incorporate data estimations from observations. Given a set of state variables,

},,,{ 21 liiii vvvV …= , (6)

)(

21

22221

11211

2

1

2

1

L

LMOMM

LL

MM =

=

=

=

nli

ni

ni

liii

liii

ni

i

i

ni

i

i

i

s

ss

vvv

vvvvvv

V

VV

S (7)

where the boldfaced letters of variables indicate row-vectors of values for each variable, v_l, representing the state-variables in equation (1), and each agent's state is the relevant element of the row in equation (7). The state-space will be \Re^{(n \times l \times k) \times i}, where n is the number of agents, l is the number of state-variables, k is the partitions of the sample space corresponding to k land use classes, and i is the number of time-steps in the simulation. For example, the experimental evaluation of MABEL (described below) for several geographic blocks/townships in Michigan begins with a state-space of 100-300 agents (in an area of approximately nine square miles), 150-260 state-variables (excluding various PUMS quality-flag variables), and 15 land use classes. In these terms, for each time step, the minimum size of the sample-space is \Re^{2.25 \times 10^5}.


Using the Kalman filter5 (Enns, 1976; Kalman, 1960; Merwe, et al., 2000; Russell and

Norvig, 1994; Tani, et al., 1992; Welch and Bishop, 2002), we can approximate state

sequences as,

\hat{E}(S_{i+1}) = \sum_{S_i} P(S_{i+1} \mid S_i, A_i^n = s_i^n) \cdot E(S_i)    (8)

and,

E(S_{i+1}) = \lambda \cdot P(V_{i+1} \mid S_{i+1}) \cdot \hat{E}(S_{i+1})    (9)

where, \hat{E}(\cdot) and E(\cdot) refer to an expected and an estimated probability distribution over the sequence of steps respectively, and \lambda is a normalization constant (Russell and Norvig, 1994). Equations (8) and (9) illustrate the “prediction” and “correction” phases of the Kalman filter respectively, and demonstrate the “beliefs” of the agents about their current and future states. But from equation (4), we can see that,

P(S_{i+1} \mid S_i, A_i^n = s_i^n) = P_{ss'}^a    (10)

The first step of estimating the utility attributes for a MABEL base-agent, n, is the

calculation of the probability density of the socioeconomic variables,

P(\mathbf{S}_i^{Pums,n}) = P(\mathbf{S}_i^{Pums,n} = s_{i,l}^{Pums,n})    (11)

where we observe the probability density of the variables in the PUMS dataset, l, that the agent n acquires on the ith step. Since we refer to the initial stage, we can denote i = 0 (t_0).

Similarly, for the geospatial attributes, the probability densities for each land use in the
area to be included in the simulation are

5 Introduced by R.E. Kalman (1960); it has been used widely for directional problems associated with military applications.


P(\mathbf{S}_i^{LU,n}) = P(\mathbf{S}_i^{LU,n} = s_{i,k}^{LU,n})    (12)

Since the state-space of the geospatial variables is a row-vector of the attributes (S instead of s

in equation 1), the vector incorporates all available k land uses.

The conditional probability P(S_i^n \mid s_{i,k}^{LU,n}, s_{i,k,l}^{Pums,n}) provides the framework of the
interactions in the MABELmodel module. Geospatial and socioeconomic attributes can be

interactions in the MABELmodel module. Geospatial and socioeconomic attributes can be

considered as without any direct causal dependency, since they can be regarded as random

variables and that their observations were made independently. Then, we can say that

P(\mathbf{S}_i^n = s_i^n) = \prod_n P(s_{i,k}^{LU,n}, s_{i,k,l}^{Pums,n}) = P(\mathbf{S}_i^{LU,n}) \cdot P(\mathbf{S}_i^{Pums,n})    (13)

Given a set of available actions (see equation 3), the agents can evaluate their beliefs (Russell

and Norvig, 1994, p.511) for the future by constructing a belief network (Brandenburger and

Keisler, 1999; Gammerman, 1995; Heckerman and Breese, 1994; Hunter and Parsons, 1998),

describing how variables affect decisions associated with land use choices.

An evaluation of belief networks provides the basis for the agents’ estimation of their

next state over an array of available actions. Belief Networks (BN’s) (Breese and Heckerman,

1996; Druzdzel, 1996; Gammerman, 1995; Heckerman and Breese, 1994; Heckerman, et al.,

1994; Hunter and Parsons, 1998; Schank and Colby, 1973) are a tool for identifying causal
relationships and generating inference according to Bayesian conditional probabilities. As new
evidence enters a belief network in the form of data or observations, the causal acyclic
structure generated by a BN can predict future states, or infer from future states to update
prior beliefs, in the form of conditional probabilities. We constructed separate belief

networks, using the MSBNx software (Breese and Heckerman, 1996; Heckerman and Breese,


1994; Kadie, et al., 2001), associated with nine discrete land-use classes and socioeconomic

variables from PUMS for MABEL base-agents (examples of acyclic belief network graphs

are shown in Figure 4). These belief networks are introduced here to illustrate how agents'

estimated probabilities (equation 8) can be translated into expected probabilities (equation 9).

For each acyclic belief network, we can derive a probability transition matrix model, P_{ss'}^a
(equation 10), corresponding to each of the n agents participating in the simulation.
Consequently, we can estimate \hat{E}(S_{i+1}) from equation (8), for any given action A_i^n = a, that
alters the land use classes LU_k between two sequential time steps. This process represents a

Bayesian weighted index (Bernardo and Smith, 1994; Carlin and Louis, 2000; Chen, et al.,

2000; Chen, 2001; Christakos, 2000; Congdon, 2001; Cyert and DeGroot, 1987; Doucet, et

al., 2001; Gill, 2002; Ibrahim, et al., 2001; Robert, 2001; West and Harrison, 1997) that can

be produced as a normalized estimate (from equation 9). The factor \lambda normalizes each
estimate to the state variable vectors in equation (7), so that a universally consistent estimator can
be derived for each time step.

Expected Utility Estimates

The optimal policy of an agent (see p. 7) will be its Maximum Expected Utility rule,

(Das, et al., 1999; Lange, 2002; Russell and Norvig, 1994; Wang and Mahadevan, 1999;

Wellman and Doyle, 1991),

MEU_i \approx \arg\max_a \sum_{i+1} E(S_{i+1}) \cdot U_{i+1}^n    (14)

and,

U_i^n = R_{ss'}^a + \max_a \sum_{i+1} E(S_{i+1}) \cdot U_{i+1}^n    (15)


where,

R_a' = \begin{cases} +1, & \forall \{ A_i^n = a_{buy}, s_i^n \} \\ -1, & \forall \{ A_i^n = a_{sell}, s_i^n \} \end{cases}

The calculation of U_i^n and MEU_i expressed as an optimal policy is an iterative, dynamic programming process that is approximated within the

Swarm simulator (Swarm Intelligence Group, v.2.1.1, 2000). For each time-step, the

estimated utility approximates a multi-attribute utility vector of utility-specific elements as a

system of linear equations among variables (Bordley and LiCalzi, 2000; Vernon, 1985;

Wakker, et al., 2000).
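The iterative dynamic-programming computation behind equations (14) and (15) can be sketched as a standard value-iteration loop. The transition matrices, the discount factor, and the iteration horizon below are illustrative assumptions not specified in the text, while the +1/-1 rewards mirror R'_a above.

```python
# Hedged sketch of the iterative update behind eqs. (14)-(15):
# U(s) = max_a [ R'_a + gamma * sum_{s'} P^a_{ss'} U(s') ], with the
# MEU action as the argmax. Matrices and gamma are invented assumptions.
import numpy as np

states = ["low", "mid", "high"]
P = {"buy":  np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]]),
     "sell": np.array([[0.7, 0.2, 0.1], [0.4, 0.4, 0.2], [0.2, 0.4, 0.4]])}
R = {"buy": 1.0, "sell": -1.0}  # R'_a = +1 for a_buy, -1 for a_sell
gamma = 0.9                     # discount factor (an added assumption;
                                # the text does not specify one)

U = np.zeros(len(states))
for _ in range(100):            # iterate toward the fixed point
    U = np.array([max(R[a] + gamma * P[a][s] @ U for a in P)
                  for s in range(len(states))])

# The optimal (MEU) policy picks the maximizing action in each state.
policy = [max(P, key=lambda a: R[a] + gamma * P[a][s] @ U)
          for s in range(len(states))]
print(policy)  # "buy" maximizes expected utility in every toy state here
```

With these invented numbers the +1 reward dominates in every state; in MABEL the market model and data-driven transition matrices would make the choice state-dependent.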

Example MABEL Simulation

To illustrate how the MDP is used in MABEL and to demonstrate how the PUMS
data can be coupled to a GIS in such a simulation, we selected one 3 mi × 3 mi area in Grand

Traverse County, Michigan, located in Long Lake Township where parcel and PUMS data for

1990 were available (see figures 1 and 7).

Figures 5 and 6 show maps of parcels and land use for a MABEL simulation over

three time steps. In each time step, the number of agents and the average area of each parcel
within land uses were saved using screen-grab utilities. Note that the number of agents n, and
in total N, change dynamically.

The tabular summaries included with these figures (see bottom of figures) present the

results for i_0 = t_0, i_5 = t_0 + 5 and i_10 = t_0 + 10. Note that the number of agents at each state (Figure
5), and the average area of base-agents' parcels (Figure 6), is given. A 17.80% relative
increase in the number of agents in the initial five states (s_0 to s_5) is followed by a 17.27%
relative increase in the number of agents in the following five states (s_5 to s_10), while a


cumulative 38.14% increase for the number of base-agents has occurred during the ten steps

of the simulation. On the other hand, a 9.14% relative decrease in the average parcel size in
the first five states was followed by a significant relative decrease in average parcel size of
13.05% during the next five states, while a cumulative 21.00% decrease in average parcel
size occurred during the ten time steps of the simulation. The decrease in average parcel size

is a measure of the significant fragmentation of land use that we can observe on the average

landscape. This has serious consequences for urban sprawl, efficiency of natural resource

management, and agricultural sustainability.

A further calibration of the model to qualitatively and experimentally match state-step
intervals with real time will be required as well. We plan to design a series of sensitivity

analyses and tests to synchronize real time intervals with state-steps of the simulation.

Additional approaches, such as employing a series of Turing tests (Amabile, et al., 1989;

Bynum, et al., 1998; Edmonds, 2000; Garman, 1984; Kurzweil, 1992; Moehring, et al.,

2002), time-series analyses (Griffith, et al., 1999; Kutoyants, 1998; Kutsyy, 2001; Lieshout,

2000; Lowell and Jaton, 1999; Mowrer and Congalton, 2000), and high-low scenario analyses

(Kline, et al., 2001; Nicholls, 1995; Schneider, et al., 2000), are expected to be part of the future research focus for MABEL development.

The Markov Decision Process approach presented here for approximating an optimal base-agent policy for utility acquisition provides a basis for higher-level simulations. A Partially Observable Markov Decision Process (POMDP) (Das, et al., 1999; Littman, et al., 1995; Mahadevan, et al., 1997; Sorensen and Gianola, 2002; Wang and Mahadevan, 1999) can then be applied to the policy-maker agents in a policy-specific framework for decision-making. Policy-makers, unlike base-agents, make their decisions under uncertainty, within a wider horizon of perceptions, and evaluate their decisions at discrete, dynamic epochs rather than over continuous time. Without a base-agents framework, however, estimating policy-makers' sequential decision-making is not possible.
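The base-agent optimization rests on standard MDP machinery. As a generic illustration only (the two toy states, actions, transition probabilities, and rewards are ours, not MABEL's actual state space or utility structure), value iteration computes an optimal policy for a small finite MDP:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Solve a finite MDP with transitions P[s, a, s'] and rewards R[s, a]."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values, greedy policy
        V = V_new

# Toy example: two states ("hold parcel", "offer for sale"), two actions each.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V, policy = value_iteration(P, R)
```

A POMDP solver for the policy-maker agents would replace the fully observed state with a belief distribution over states, but the backup structure is analogous.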

Furthermore, changes in land use are fundamentally generated by individuals, based

on their actions, beliefs, and intentions. Estimating base-level relations between land use

changes and individual decision-making provides a comprehensive indicator for approaching

and evaluating environmental and ecosystem-based changes. Exploring the dynamics of a

coupled land use/socio-economic framework enhances our understanding of interactions

between natural and human systems, and increases our ability to generate viable, sustainable

and optimal solutions to environmental problems.

A series of additional rule-based approaches is also planned for future MABEL research. We plan both to incorporate a computational component of the policy-making framework and to identify a series of policy rules, regulations, and ordinances that apply to our landscape, so that we can more fully simulate real-world land-use change. For example, we are currently developing a series of rules that act as constraints on base-agents' actions, such as parcel size and dimension restrictions for the market model (x/y); various scenarios for minimum parcel lots (e.g., 5, 10, 15 acres); and restrictions imposed by local ordinances and zoning master plans, all of which are landscape-specific for the simulated areas and for the base-agents in MABEL.
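As a sketch of how such a constraint might be expressed (the 5-, 10-, and 15-acre scenarios come from the text; the function name, rule form, and parcel value are hypothetical):

```python
# Hypothetical minimum-parcel-lot rule: reject any base-agent action that
# would create a lot below the zoning minimum for the simulated area.
SQ_M_PER_ACRE = 4046.8564224  # exact conversion, since parcel areas are in m^2

def passes_min_lot_rule(parcel_area_sq_m, min_lot_acres):
    """Return True if the parcel satisfies the minimum-lot-size ordinance."""
    return parcel_area_sq_m >= min_lot_acres * SQ_M_PER_ACRE

# Scenario comparison: one 35,000 m^2 parcel under 5-, 10-, and 15-acre minimums.
parcel = 35000.0
results = {acres: passes_min_lot_rule(parcel, acres) for acres in (5, 10, 15)}
```

In MABEL, such a predicate would act as a filter on the base-agents' action set before the sequential decision-making step, so that ordinance-violating transitions are never offered to the agent.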


Acknowledgements

This work was supported by a grant from the Great Lakes Fisheries Trust and a grant

from NASA’s Land-Cover and Land-Use Change Program (NAG5-6042). We appreciate the

database help provided by Sean Savage and the statistical advice of Emily Silverman; but all

responsibility for errors in the execution of the research lies with the authors. We also thank

Dan Brown and Mike Vaseivich, who were instrumental to the development of the parcel

database used for the example MABEL execution.


References

Alexandridis, K. T., and B. C. Pijanowski. 2002. "Multi Agent-Based Environmental

Landscape (MABEL) - An Artificial Intelligence Simulation Model: Some Early

Assessments". Paper read at AERE/EAERE: 2002 World Congress of Environmental

and Resource Economists, at Monterey, California, June 24-27. pp. 26.

Amabile, T., and Intellimation Inc. 1989. Against all odds: inside statistics. Santa Barbara, CA: Intellimation. 13 videocassettes.

Augusto, J. C. 2001. "The Logical Approach to Temporal Reasoning". Artificial Intelligence

Review 16 (4):301-333.

Axelrod, R. M. 1997. The complexity of cooperation : agent-based models of competition and

collaboration, Princeton studies in complexity. Princeton, N.J.: Princeton University

Press. pp. xiv, 232.

Ballot, G., and E. Taymaz. 1999. "Technological Change, Learning and Macro-Economic

Coordination: An Evolutionary Model". Journal of Artificial Societies and Social

Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 2 (2).

Banerji, R. B. 1990. Formal techniques in artificial intelligence : a sourcebook, Studies in

computer science and artificial intelligence ; 6. Amsterdam ; New York, N.Y.,

U.S.A.: North-Holland; Distributors for the United States and Canada Elsevier Science

Pub. Co. pp. xi, 437.


Baptiste, P., C. Le Pape, and W. Nuijten. 2001. Constraint-based scheduling : applying

constraint programming to scheduling problems, International series in operations

research & management science ; 39. Boston: Kluwer Academic. pp. xii, 198.

Barnden, J. A., and K. Srinivas. 1990. "Overcoming rule-based rigidity and connectionist

limitations through massively-parallel case-based reasoning". NASA contractor report,

no. NASA CR-186963. Las Cruces, N.M. [Washington, DC, Springfield, Va.]:

Computing Research Laboratory New Mexico State University; National Aeronautics

and Space Administration. pp. 1 v.

Bernardo, J. M., and A. F. M. Smith. 1994. Bayesian theory. Chichester, England ; New

York: Wiley. pp. xiv, 586.

Boden, M. A. 1996. Artificial intelligence, Handbook of perception and cognition, 2nd ed.

San Diego: Academic Press. pp. xviii, 376.

Bond, A., and L. Gasser. 1988. Readings in Distributed Artificial Intelligence. San Mateo:

Morgan Kaufman Publishers.

Bordley, R., and M. LiCalzi. 2000. "Decision analysis using targets instead of utility

functions". Decisions in Economics and Finance 23:53-74.

Brandenburger, A., and H. J. Keisler. 1999. "An Impossibility Theorem on Beliefs in Games".

Negotiation, Organization and Markets Research Papers: Harvard Business School.

Social Science Research Network (SSRN) Electronic Paper Collection. pp. 22.

Breese, J. S., and D. Heckerman. 1996. "Topics in Decision-Theoretic Troubleshooting:

Repair and Experiment". Technical Report, no. MSR-TR-96-06. One Microsoft Way,

Redmond WA: Microsoft Research, Advanced Technology Division; Microsoft

Corporation. pp. 16.


Brenner, W., R. Zarnekow, and H. Wittig. 1998. Intelligent software agents : foundations and

applications. Berlin ; New York: Springer. pp. vii, 326.

Brock, W. A., and S. N. Durlauf. Discrete Choice with Social Interactions. Washington D.C.:

Brookings Institute.

Brown, D., B. Pijanowski, and J. Duh. 2000. "Modeling the Relationships between Land Use

and Land Cover on Private Lands in the Upper Midwest". Journal of Environmental

Management 59:247-263.

Bynum, T. W., J. H. Moor, and American Philosophical Association. Committee on

Philosophy and Computers. 1998. The digital phoenix : how computers are changing

philosophy. Oxford ; Malden, MA: Blackwell Publishers. pp. 412.

Cantoni, V. 1994. Human and machine vision : analogies and divergencies, The language of

science. New York: Plenum Press. pp. xviii, 391.

Carlin, B. P., and T. A. Louis. 2000. Bayes and Empirical Bayes methods for data analysis.

2nd ed. Boca Raton: Chapman & Hall/CRC. pp. xvii, 419.

Cartwright, H. M. 2000. Intelligent data analysis in science, Oxford chemistry masters ; 4.

Oxford ; New York: Oxford University Press. pp. xiv, 205.

Castro Caldas, J., and H. Coelho. 1999. "The Origin of Institutions: Socio-Economic

Processes, Choice, Norms and Conventions". Journal of Artificial Societies and Social

Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 2 (2).

Chen, M.-H., Q.-M. Shao, and J. G. Ibrahim. 2000. Monte Carlo methods in Bayesian

computation, Springer series in statistics. New York: Springer. pp. xiii, 386.

Chen, Z. 2001. Data mining and uncertain reasoning : an integrated approach. New York:

Wiley. pp. xv, 370.


Christakos, G. 2000. Modern spatiotemporal geostatistics. Oxford ; New York: Oxford

University Press. pp. xvi, 288.

Congdon, P. 2001. Bayesian statistical modelling. Chichester ; New York: John Wiley. pp. x,

531.

Conte, R., and M. Paolucci. 2001. "Intelligent Social Learning". Journal of Artificial Societies

and Social Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 4 (1).

Cyert, R. M., and M. H. DeGroot. 1987. Bayesian analysis and uncertainty in economic

theory, Rowman & Littlefield probability and statistics series. Totowa, N.J.: Rowman

& Littlefield. pp. xiv, 206.

Dal Forno, A., and U. Merlone. 2002. "A multi-agent simulation platform for modeling

perfectly rational and bounded-rational agents in organizations". Journal of Artificial

Societies and Social Simulation 5 (2).

Das, T., A. Gosavi, S. Mahadevan, et al. 1999. "Solving Semi-Markov Decision Problems

using Average Reward Reinforcement Learning". Management Science (April).

Davis, R., and D. B. Lenat. 1982. Knowledge-based systems in artificial intelligence,

McGraw-Hill advanced computer science series. New York: McGraw-Hill

International Book Co. pp. xxi, 490.

DeAngelis, Donald L., Wolf M. Mooij, M. Philip Nott, and Robert E. Bennetts. 2001.

"Individual-Based Models: Tracking Variability Among Individuals". In Modeling in

Natural Resource Management: Development, Interpretation, and Application, edited

by Tanya M. Shenk and Alan B. Franklin. Washington - Covelo - London: Island

Press. pp. 171-196.


Doucet, A., N. De Freitas, and N. Gordon. 2001. Sequential Monte Carlo methods in practice,

Statistics for engineering and information science. New York: Springer. pp. xxvii,

581.

Druzdzel, M. J. 1996. "Qualitative Verbal Explanations in Bayesian Belief Networks".

Artificial Intelligence and Simulation of Behavior Quarterly 94:43-54.

Edmonds, B. 2000. "The Constructability of Artificial Intelligence". CPM Report, no. 99-53:

Published in Journal of Logic Language and Information, 9:419-424. pp. 7.

———. 1999. "Modelling Bounded Rationality in Agent-based Simulations Using the

Evolution of Mental Models". In Computational techniques for modelling learning in

economics, edited by Thomas Brenner. Boston: Kluwer Academic Publishers. pp.

305-332.

———. 1997. "Modelling Socially Intelligent Agents". CPM Report, no. 97-26: Expanded

version published in Applied Artificial Intelligence, 12:677-699. pp. 13.

Enns, P. G. 1976. Bayesian and Maximum Likelihood Estimation in the Kalman-Bucy Model

with Business Applications. pp. 230.

Epstein, J. M., R. Axtell, 2050 Project., et al. 1996. Growing artificial societies : social

science from the bottom up, Complex adaptive systems. Washington, D.C., Cambridge,

Mass. ;: Brookings Institution Press; MIT Press. pp. xv, 208.

Ferber, J. 1999. Multi-agent systems : an introduction to distributed artificial intelligence.

Harlow, Eng.: Addison-Wesley. pp. xviii, 509.

Feyock, S., S. T. Karamouzis, and United States. National Aeronautics and Space

Administration. Scientific and Technical Information Division. 1993. A path-oriented

matrix-based knowledge representation system, NASA contractor report ; 4539.


Washington, DC: National Aeronautics and Space Administration Office of

Management Scientific and Technical Information Program; pp. 1v.

Fliedner, D. 2001. "Six Levels of Complexity". Journal of Artificial Societies and Social

Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 4 (1).

Fonlupt, C., J.-K. Hao, E. Lutton, et al., eds. 2000. Artificial Evolution. Berlin: Springer-

Verlag.

Gammerman, A. 1995. Probabilistic reasoning and Bayesian belief networks. Henley-on-

Thames: A. Waller in association with Unicom. pp. xix, 271.

Gärdenfors, P., and N.-E. Sahlin. 1988. Decision, probability, and utility : selected readings.

Cambridge; New York: Cambridge University Press. pp. x, 449.

Garman, D. M. 1984. The Theory of Cross-Equation Shrinkage Estimation and an

Application to Michigan SMSA's. Dissertation (Ph.D.) – The University of Michigan

pp. 297.

Gill, J. 2002. Bayesian methods: a social and behavioral sciences approach. Boca Raton,

Fla.: Chapman & Hall/CRC. pp. xx, 459.

Gimblett, H. R., M. T. Richards, and R. M. Itami. 2002. "Simulating Wildland Recreation Use

and Conflicting Spatial Interactions Using Rule-Driven Intelligent Agents". In

Integrating Geographic Information Systems and Agent-Based Modeling Techniques

for Simulating Social and Ecological Processes, edited by Randy H. Gimblett. New

York: Oxford University Press. pp. 211-244.

Glymour, C. N. 2001. The mind's arrows: Bayes nets and graphical causal models in

psychology. Cambridge, Mass.: MIT Press. pp. xv, 222.


Goodman, I. R., H. T. Nguyen, and E. Walker. 1991. Conditional inference and logic for

intelligent systems : a theory of measure-free conditioning. Amsterdam, Netherlands;

New York, N.Y., U.S.A.: North-Holland; Elsevier Science Pub. Co. pp. viii, 288.

Griffith, D. A., L. J. Layne, J. K. Ord, et al. 1999. A casebook for spatial statistical data

analysis : a compilation of analyses of different thematic data sets. New York: Oxford

University Press. pp. xviii, 506.

Guida, G., and C. Tasso. 1994. Design and development of knowledge-based systems : from

life cycle to methodology. Chichester ; New York: John Wiley. pp. xxii, 476.

Haag, G., and P. Liedl. 2001. "Modelling and Simulating Innovation Behaviour within Micro-

Based Correlated Decision Processes". Journal of Artificial Societies and Social

Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 4 (3).

Hämäläinen, R. P., and H. K. Ehtamo. 1991. Dynamic games in economic analysis :

proceedings of the Fourth International Symposium on Differential Games and

Applications, August 9-10, 1990, Helsinki University of Technology, Finland, Lecture

notes in control and information sciences ; 157. Berlin ; New York: Springer-Verlag.

pp. xiii, 311.

Heckerman, D., and J. S. Breese. 1994. "Causal Independence for Probability Assessment and

Inference Using Bayesian Networks". Technical Report, no. MSR-TR-94-08. One

Microsoft Way, Redmond WA: Microsoft Research, Advanced Technology Division;

Microsoft Corporation. pp. 15.

Heckerman, D., J. S. Breese, and K. Rommelse. 1994. "Troubleshooting under Uncertainty".

Technical Report, no. MSR-TR-94-07. One Microsoft Way, Redmond WA: Microsoft

Research, Advanced Technology Division - Microsoft Corporation. pp. 18.


Hirafuji, M., and S. Hagan. 2000. "A global optimization algorithm based on the process of

evolution in complex biological systems". Comput electron agric 29 (1/2):125-134.

Holland, J., and J. Miller. 1991. "Artificial adaptive agents in economic theory". American

Economic Review 81:365--370.

Holland, J. H. 1975. Adaptation in natural and artificial systems : an introductory analysis

with applications to biology, control, and artificial intelligence. Ann Arbor:

University of Michigan Press. pp. viii, 183.

Hunter, A., and S. Parsons. 1998. Applications of uncertainty formalisms. Berlin ; New York:

Springer. pp. viii, 474.

Ibrahim, J. G., M.-H. Chen, and D. Sinha. 2001. Bayesian survival analysis, Springer series

in statistics. New York: Springer. pp. xiv, 479.

Jager, W., R. Popping, and H. Van de Sande. 2001. "Clustering and Fighting in Two-Party

Crowds: Simulating the Approach-Avoidance Conflict". Journal of Artificial Societies

and Social Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 4 (3).

Janssen, M., and W. Jager. 1999. "An Integrated Approach to Simulating Behavioural

Processes: A Case Study of the Lock-in of Consumption Patterns". Journal of

Artificial Societies and Social Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 2 (2).

Joyce, J. M. 1992. The Axiomatic Foundations of Bayesian Decision Theory. pp. 250.

Kadie, C. M., D. Hovel, and E. Horvitz. 2001. "MSBNx: A Component-Centric Toolkit for

Modeling and Inference with Bayesian Networks". Technical Report, no. MSR-TR-

2001-67. One Microsoft Way; Redmond, WA 98052: Microsoft Research, Microsoft

Corporation. pp. 33.


Kalman, R. E. 1960. "A New Approach to Linear Filtering and Prediction Problems".

Transactions of the ASME--Journal of Basic Engineering 82 (D):35-45.

Kennedy, J., R. C. Eberhart, and Y. Shi. 2001. Swarm intelligence, The Morgan Kaufmann

series in evolutionary computation. San Francisco: Morgan Kaufmann Publishers. pp.

xxvii, 512.

Kerber, W., and N. J. Saam. 2001. "Competition as a Test of Hypotheses: Simulation of

Knowledge-Generating Market Processes". Journal of Artificial Societies and Social

Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 4 (3).

Kirman, A. P., and M. Salmon. 1995. Learning and rationality in economics. Oxford, UK ;

Cambridge, Mass., USA: Blackwell. pp. xiii, 394.

Kline, J. D., A. Moses, and R. J. Alig. 2001. "Integrating Urbanization into Landscape-level

Ecological Assessments". Ecosystems 4 (1):3-18.

Klugl, F. 2001. "Swarm Intelligence: From Natural to Artificial Systems". Journal of

Artificial Societies and Social Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 4 (1).

Kohler, T. A., and G. J. Gumerman, eds. 2000. Dynamics in human and primate societies :

agent-based modeling of social and spatial processes, Santa Fe Institute studies in the

sciences of complexity. New York: Oxford University Press. pp. xiii, 398.

Kuriyama, K., K. Takeuchi, A. Kishimoto, et al. 2002. "A Choice Experiment Model For The

Perception Of Environmental Risk: A Joint Estimation Using Stated Preference And

Probability Data". Paper read at 2002 World Congress of Environmental and Resource

Economists, May, 2002, at Monterey, California, June 24 - 27 2002. pp. 12.


Kurzweil, R. 1992. The age of intelligent machines. 1st MIT Press pbk. ed. Cambridge,

Mass.: MIT Press. pp. xiii, 565.

Kutoyants, Y. A. 1998. Statistical inference for spatial Poisson processes. New York:

Springer. pp. vii, 276.

Kutsyy, V. 2001. Modeling and inference for spatial processes with ordinal data. pp. 1 v.

Lange, A. 2002. "Intertemporal Decisions Under Uncertainty: Combining Expected Utility

And Maximin". Paper read at 2002 World Congress of Environmental and Resource

Economists, at Monterey, California, June 24 - 27 2002. pp. 20.

Li, C.-Z., and K.-G. Löfgren. 2002. "On The Choice Of Metrics In Dynamic Welfare

Analysis: Utility Versus Money Measures". Paper read at 2002 World Congress of

Environmental and Resource Economists, at Monterey, California, June 24 - 27 2002.

pp. 21.

Lieshout, M. N. M. 2000. Markov point processes and their applications. London: Imperial

College Press. pp. viii, 175.

Littman, M. L., A. R. Cassandra, and L. P. Kaelbling. 1995. "Efficient dynamic-programming

updates in partially observable Markov decision processes". Brown University

Technical Report, no. CS-95-19. pp. 31.

Lowell, K., and A. Jaton. 1999. Spatial accuracy assessment : land information uncertainty in

natural resources. Chelsea, Mich.: Ann Arbor Press. pp. xiv, 455.

Macy, M. W., and C. Castelfranchi. 1998. "Social Order in Artificial Worlds". Journal of

Artificial Societies and Social Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 1 (1).


Mahadevan, S., N. Khaleeli, and N. Marchalleck. 1997. "Designing Agent Controllers using

Discrete-Event Markov Models". Paper read at AAAI Fall Symposium on Model-

Directed Autonomous Systems, at Nov. 8th-10th, MIT, Cambridge, MA. pp. 8.

Merwe, R. van der, A. Doucet, N. de Freitas, and E. Wan. 2000. "The Unscented Particle

Filter". Technical Report, no. CUED/F-INFENG/TR-380. Cambridge, England:

Cambridge University Engineering Department. pp. 46.

Moehring, R. H., A. S. Schulz, F. Stork, and M. Uetz. 2002. Solving Project Scheduling

Problems by Minimum Cut Computations. Cambridge, MA: Massachusetts Institute

of Technology (MIT), Sloan School of Management. pp. 33.

Mohammadian, M. 2000. Advances in intelligent systems : theory and applications, Frontiers

in artificial intelligence and applications, v. 59. Amsterdam ;: Washington DC : IOS

Press. pp. xii, 390.

Mowrer, H. T., and R. G. Congalton. 2000. Quantifying spatial uncertainty in natural

resources : theory and applications for GIS and remote sensing. Chelsea, Mich.: Ann

Arbor Press. pp. xxiv, 244 , [8] of plates.

Murch, R., and T. Johnson. 1999. Intelligent software agents. Upper Saddle River, N.J.:

Prentice Hall PTR. pp. xiii, 210.

Nicholls, R. J. 1995. "Synthesis of Vulnerability Analysis Studies". Paper read at Proceedings

of WORLD COAST ‘93, Coastal Zone Management Centre, Final: August 18, 1994.

Padget, J. A. 1999. Collaboration between human and artificial societies : coordination and

agent-based distributed computing. Berlin ; New York: Springer. pp. xiv, 300.

Paredes, A. L., and R. O. Martinez. 1998. "The Social Dimension of Economics and

Multiagent Systems". CPM report, no. 98-44: Published in Edmonds B. and


Dautenhahn K. (eds.), Socially situated Intelligence: a workshop held at SAB'98,

Univ. of Zurich Technical Report 73-70. pp. 8.

Parker, D. C. , S. M. Manson, M. A. Janssen, M. Hoffman, and P. Deadman. 2001. "Multi-

Agent Systems for the Simulation of Land-Use and Land-Cover Change: A Review":

Manuscript prepared for Workshop on Agent-Based Modeling for Land Use and

Cover Change.

Pau, L. F., and C. Gianotti. 1990. Economic and financial knowledge-bases processing.

Berlin; New York: Springer-Verlag. pp. xv, 364.

Plantinga, A. J., and W. Provencher. 2001. "Internal Consistency in Models of Optimal

Resource Use Under Uncertainty." Paper read at American Agricultural Economics

Association - 2001 Annual Meeting, at Chicago, Illinois, August 5-8, 2001. pp. 40.

Polasky, S., C. Costello, and C. McAusland. 2002. "Trade, Land-Use, and Biological

Diversity". Preliminary draft. pp. 11.

Robert, C. P. 2001. The Bayesian choice : from decision-theoretic foundations to

computational implementation. 2nd ed, Springer texts in statistics. New York:

Springer. pp. xxii, 604.

Roehrl, A. 1999. "Multi-Agent Rationality". Journal of Artificial Societies and Social

Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 2 (3-4).

Rouchier, J. 2001. "Multi-Agent System: An Introduction to Distributed Artificial

Intelligence". Journal of Artificial Societies and Social Simulation

http://www.soc.surrey.ac.uk/JASSS/JASSS.html 4 (2).

Russell, S. J., and P. Norvig. 1994. Artificial intelligence : a modern approach. Englewood

Cliffs, N.J.: Prentice Hall. pp. xxviii, 932.


Schank, R. C., and K. M. Colby. 1973. Computer models of thought and language, A Series of

books in psychology. San Francisco,: W. H. Freeman. pp. 454.

Schmoldt, D. L., and H. M. Rauscher. 1996. Building knowledge-based systems for natural

resource management. New York: Chapman & Hall. pp. xxi, 386.

Schneider, S. H., W. E. Easterling, and L. O. Mearns. 2000. "Adaptation: Sensitivity to

Natural Variability, Agent Assumptions and Dynamic Climate Changes". Climatic

Change 45 (1):203-221.

Schwab, L. H. 1988. Inference for a Multistate Stochastic Model Based Upon Interval-

Censored Data Paths. pp. 218.

Sichman, J. S., and J. F. Hubner. 1999. "Agent Technology: Foundations, Applications and

Markets". Journal of Artificial Societies and Social Simulation

http://www.soc.surrey.ac.uk/JASSS/JASSS.html 2 (3-4).

Scott, P. D. 2000. "Distributed Artificial Intelligence Meets Machine Learning: Learning in

Multi-Agent Environments". Journal of Artificial Societies and Social Simulation

http://www.soc.surrey.ac.uk/JASSS/JASSS.html 3.

Servat, D. 1998. "Modeling and Simulation of Ecosystems: From Deterministic Models to Discrete-Event Simulation [Modélisation et simulation d'écosystèmes: des modèles déterministes aux simulations à événements discrets]". Journal of Artificial Societies and Social Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 1 (2).

Shubik, M., and N. J. Vriend. 1999. "A Behavioral Approach to a Strategic Market Game". In

Computational techniques for modelling learning in economics, edited by Thomas

Brenner. Boston: Kluwer Academic Publishers. pp. 261-282.


Sigmund, K. 1998. "Complex Adaptive Systems and the Evolution of Reciprocation".

Ecosystems 1 (5):444-448.

Smithson, M. J. 2000. Human Judgment and Imprecise Probabilities. PDF/LaTeX version of a page at the web site of the Imprecise Probabilities Project: http://ippserv.rug.ac.be.

Sorensen, D., and D. Gianola. 2002. Likelihood, Bayesian and MCMC methods in genetics,

Statistics for biology and health. New York: Springer-Verlag.

Stefansson, B. 2000. "Simulating Economic Agents in Swarm". In Economic simulations in

Swarm : agent-based modeling and object oriented programming, edited by Francesco

Luna and Benedikt Stefansson. Boston: Kluwer Academic. pp. 1-61.

Steiner, D. D. 1984. Bayesian Learning and Tests of the Rational Expectations Hypothesis

(Econometrics, Macroeconomics). pp. 238.

Swarm (v.2.1.1). 2000. Swarm Development Group. Available from: www.swarm.org.

Tani, A., H. Murase, M. Kiyota, and N. Honami. 1992. "Growth simulation of alfalfa cuttings

in vitro by Kalman filter neural network". Acta horticulturae 2 (319):671-676.

Troitzsch, K. G. 1999. "Simulation as a Tool to Model Stochastic Processes in Complex

Systems". In Computational techniques for modelling learning in economics, edited

by Thomas Brenner. Boston: Kluwer Academic Publishers. pp. 45-69.

U.S. Bureau of the Census. 1995. 1990 census of population and housing (A′-B′ Samples).

[Computer Laser Optical Discs]. U.S. Dept. of Commerce; Bureau of the Census; Data

User Services Division.

Vakas-Duong, D. 1998. "Connectionist Models of Social Reasoning and Social Behavior". Journal of Artificial Societies and Social Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html 1 (4).


Vernon, D. R. 1985. Two Approaches to Evaluation: A Comparison of Quasi-Experimental

and Bayesian-Maut Designs (Multiple Criteria, Decision-Making, Multiattribute

Utility, Microcomputer, Spreadsheet). pp. 111.

Wagman, M. 2002. Problem-solving processes in humans and computers : theory and

research in psychology and artificial intelligence. Westport, Conn.: Praeger. pp. xvii,

230.

Wakker, P. P., A. M. Stiggelbout, and S. J. T. Jansen. 2000. "Measuring Attribute Utilities

when Attributes Interact". paper. pp. 25.

Wang, G., and S. Mahadevan. 1999. "Hierarchical Optimization of Policy-Coupled Semi-

Markov Decision Processes". Paper read at 16th International Conference on Machine

Learning (ICML '99), at Bled, Slovenia, June 27-30, 1999. pp. 10.

Ward, M. 2000. Virtual organisms : the startling world of artificial life. 1st U.S. ed. New

York: St. Martin's Press. pp. xii, 306.

Weiss, G. 1999. Multiagent systems : a modern approach to distributed artificial intelligence.

Cambridge, Mass.: MIT Press. pp. xxiii, 619.

Welch, G., and G. Bishop. 2002. "An Introduction to the Kalman Filter". Working Paper, no.

TR 95-041. Chapel Hill, NC: Department of Computer Science; University of North

Carolina at Chapel Hill. pp. 16.

Wellman, M. P., and J. Doyle. 1991. "Preferential semantics for goals". Paper read at

Proceedings of the Ninth National Conference on Artificial Intelligence, at Anaheim,

1991. pp. 6 (698-703).

West, M., and J. Harrison. 1997. Bayesian forecasting and dynamic models. 2nd ed, Springer

series in statistics. New York: Springer. pp. xiv, 680.


Wolozin, H. 2002. "The individual in economic analysis: toward psychology of economic

behavior". Journal of Socio-Economics 31 (1):45-57.

Zeigler, B. P. 1976. The hierarchy of systems specifications and the problem of structural

inference. [Ann Arbor, Mich.]: University of Michigan College of Literature Science

and the Arts Computer and Communication Sciences Dept. pp. 22.


Appendix: Tables and Figures.

Table 1: MABEL Agents and their Land-Use Classification (Level-2), for the Michigan Pilot Study.

Agent Categories     Land-Use Classification – Agent Types
Farmers              Row Crop; Non-Row Crop; Pasture; Plantation-Row Visible; Other Agriculture
Residents            High Density Residential; Low Density Residential; Commercial; Industrial
Foresters            Young Forest / Old Field; Mature Forest / Closed; Park; Open Grass; Wetland; Water; Other Undeveloped
Policy-Makers(a)     Highways; Roads; Streets, etc.(b)

Notes:

(a) Policy-Maker Agents in MABEL represent a separate category of agents that operate on different scales of abstraction; thus, they are not base-agents, and their attributes in terms of sequential decision-making are different. The policy-making framework for MABEL is a higher-level-scale problem-solving procedure. (b) Geospatial attributes in MABEL are point processes (not spatially expanded); they represent drivers of change, or static entities.


Figure 1: Parcel-Based Remote Sensing/GIS data Acquisition for MABEL: A case-study of Long-Lake Township, Grand Traverse County, Michigan.


[Figure: flow diagram of the initialization stage. A Geospatial/GIS Component (ArcView) supplies GIS spatial raster data and attribute tables (parcels; land-use type), and a Socio-Economic Component (SPSS) supplies raw socio-economic data, abstract variables, and Bayesian coefficients; input and output flows link both components to the MABEL Simulator (Swarm).]

Figure 2: Knowledge-Base Acquisition in MABEL: Initialization Stage


[Figure: sample records from the dynamic knowledge-base. Each agent carries GIS attributes (FID, Num, Area, Perimeter, primary and secondary land use, Public, Quality), socio-economic attributes (SerialNo, PUMA, RfamInc, RhhInc, Occup, RpIncome, etc.), Bayesian coefficients, land-use type, and an action history.]

Figure 3: Dynamic Knowledge-Base in MABEL: Organization of Agents' State Space


Figure 4: Illustration of Belief Network Construction for MABEL base-agents. The software used for the estimation is Microsoft Belief Networks.


Figure 5: Number of Agents in three sequential states of MABEL simulation (Data for Long Lake Township, Grand Traverse County, Michigan)

Land Use                          s0=t0   s5=t0+5   s10=t0+10
Low Density Residential (111)       4        4          6
High Density Residential (112)    112      133        158
Row Crops (210)                     4        5          5
Non-Row Crops (220)                 6        6         10
Pasture (230)                       4        4          4
Plantation / Row Visible (340)      7        9         10
Young Forest / Old Field (320)     17       18         20
Mature Forest / Closed (330)       82       99        113


Figure 6: Average Area of Agents' Parcels in three sequential states of MABEL simulation (Data for Long Lake Township, Grand Traverse County, Michigan) (area in m2).

Land Use                          s0           s5           s10
Low Density Residential         43992.74     43992.74     29949.35
High Density Residential        33685.65     28582.38     25572.97
Row Crops (210)                123850.2     100800.8      98702.53
Non-Row Crops (220)            143890.3     139113        90966.3
Pasture (230)                  265100.6     265100.6     265100.6
Plantation / Row Visible       134089.2     105235.2      82563.02
Young Forest / Old Field        75710.17     71189.47     63790.77
Mature Forest / Closed         107112.5      88603.94     76031.69


Figure 7: Sequence of Time Steps for a Sample MABEL Simulation in Swarm (Swarm Development Group, 2000): Long Lake Township, Grand Traverse County, Michigan.

(a) s0=t0

(b) s5=t0+5

(c) s10=t0+10