
University of Rennes 1

Accreditation to Direct Research

Field: Computer Science

— Thesis —

Virtual Reality — in virtuo autonomy —

Jacques Tisseau

Defense: December 6, 2001

Herman D. Chairperson

Bourdon F. Reporter

Coiffet P. Reporter

Thalmann D. Reporter

Abgrall J.F. Examiner

Arnaldi B. Examiner


Contents

Contents 3
List of Figures 5

1 Introduction 7
  1.1 An Epistemological Obstacle 7
  1.2 An Original Approach 8

2 Concepts 11
  2.1 Introduction 11
  2.2 Historical Context 11
    2.2.1 Constructing Images 12
    2.2.2 Animating Images 13
    2.2.3 Image Appropriation 15
    2.2.4 From Real Images to Virtual Objects 16
  2.3 The Reality of Philosophers 17
    2.3.1 Sensory/Intelligible Duality 18
    2.3.2 Virtual Versus Actual 19
    2.3.3 Philosophy and Virtual Reality 21
  2.4 The Virtual of Physicists 22
    2.4.1 Virtual Images 22
    2.4.2 Image Perception 25
    2.4.3 Physics and Virtual Reality 26
  2.5 Principle of Autonomy 26
    2.5.1 Operating Models 27
    2.5.2 User Modeling 28
    2.5.3 Autonomizing Models 29
    2.5.4 Autonomy and Virtual Reality 31
  2.6 Conclusion 33

3 Models 35
  3.1 Introduction 35
  3.2 Autonomous Entities 35
    3.2.1 Multi-Agent Approach 36
    3.2.2 Participatory Multi-Agent Simulation 37
    3.2.3 Behavioral Modeling 39
  3.3 Collective Auto-Organization 40
    3.3.1 Simulations in Biology 40
    3.3.2 Coagulation Example 43
    3.3.3 In Vivo, In Vitro, In Virtuo 47
  3.4 Individual Perception 49
    3.4.1 Sensation and Perception 50
    3.4.2 Active Perception 54
    3.4.3 The Shepherd, His Dog and His Herd 55
  3.5 Conclusion 57

4 Tools 61
  4.1 Introduction 61
  4.2 The oRis Language 61
    4.2.1 General View 62
    4.2.2 Objects and Agents 64
    4.2.3 Interactions between Agents 68
  4.3 The oRis Simulator 70
    4.3.1 Execution Flows 71
    4.3.2 Activation Procedures 72
    4.3.3 The Dynamicity of a Multi-Agent System 75
  4.4 Applications 77
    4.4.1 Participatory Multi-Agent Simulations 77
    4.4.2 Application Fields 79
    4.4.3 The AReVi Platform 80
  4.5 Conclusion 81

5 Conclusion 83
  5.1 Summary 83
  5.2 Perspectives 85

Bibliography 87


List of Figures

1.1 Pinocchio Metaphor and the Autonomization of Models 9

2.1 Reality and the Main Philosophical Doctrines 18
2.2 The Four Modes of Existence 20
2.3 The Three Mediations of the Real 22
2.4 Optical System 23
2.5 Real Image (a) and Virtual Image (b) 23
2.6 Virtual Image (a), Real Image (b), and Virtual Object (c) 24
2.7 Illusions of Rectilinear Propagation and Opposite Return 25
2.8 The Three Mediations of Models in Virtual Reality 27
2.9 The Different Levels of Interaction in Virtual Reality 28
2.10 User Status in Scientific Simulation (a), in Interactive Simulation (b) and in Virtual Reality (c) 30
2.11 Autonomy and Virtual Reality 32

3.1 Cell-Agent Design 44
3.2 Interactions of the Coagulation Model 45
3.3 Evolution of the Multi-Agent Simulation of Coagulation 46
3.4 Main Results of the Multi-Agent Coagulation Model 47
3.5 Modeling and Understanding Phenomena 50
3.6 Example of a Cognitive Map 51
3.7 Definition of a Fuzzy Cognitive Map 52
3.8 Flight Behavior 53
3.9 Flight Speed 53
3.10 Perception of Self and Others 56
3.11 Fear and Socialization 57
3.12 The Sheepdog and Its Sheep 58

4.1 Execution of a Simple Program in oRis 63
4.2 oRis Graphics Environment 64
4.3 Localization and Perception in 2D 66
4.4 Splitting an Execution Flow in oRis 71
4.5 Structure of the oRis Virtual Machine 72
4.6 Modifying an Instance 76
4.7 An AReVi Application: Training Platform for French Emergency Services 81
4.8 Metaphor of Ali Baba and Programming Paradigms 82


Chapter 1

Introduction

Created from two words with opposing meanings, the expression virtual realities is absurd. If someone were to talk to you about the color black-white, you would probably think that, because this person was combining a word with its opposite, he was confused. Certainly though, in all accuracy, virtual and reality are not opposites. Virtual, from the Latin virtus (virtue, or force), is that which is in effect in the real, that which has in itself all the basic conditions for its realization; but then, if something contains in itself the conditions of its realization, what does it still lack to be a reality? When looked at from this perspective, the expression is inadequate. [Cadoz 94a]

The English expression virtual reality was coined for the first time in July 1989 during a professional trade show¹ by Jaron Lanier, founder and former CEO of the VPL Research company specializing in immersion devices. He created this expression as part of his company’s marketing and advertising strategy, but never gave it a precise definition.

1.1 An Epistemological Obstacle

The emergence of the notion of virtual reality illustrates the vitality of the interdisciplinary exchanges between computer graphics, computer-aided design, simulation, teleoperations, audiovisual, etc. However, as the philosopher Gaston Bachelard points out in his epistemological study on the Formation of the Scientific Mind [Bachelard 38], any advances made in science and technology must face numerous epistemological obstacles. One of the obstacles that virtual reality will have to overcome is a verbal obstacle (a false explanation obtained from an explanatory word): the name itself is meaningless a priori, yet at the same time, it refers to the intuitive notion of reality, one of the first notions of the human mind.

According to the BBC English Dictionary (HarperCollins Publishers, 1992), virtual "means that something is so nearly true that for most purposes it can be regarded as true"; it also means that something has all the effects and consequences of a particular thing, but is not officially recognized as being that thing. Therefore, virtual reality is a quasi-reality that looks and acts like reality, but is not reality: it is an ersatz, or substitute for reality. In French, the literal translation of the English expression is réalité virtuelle which, as pointed out in the quote at the beginning of this introduction, is an absurd and inadequate expression. The definition according to the Petit Robert (Éditions Le Robert, 1992) is: "Virtuel: [qui] a en soi toutes les conditions essentielles à sa réalisation" (that which has in itself all the conditions essential to its realization). Therefore, réalité virtuelle would be a reality that has in itself all of the basic conditions for its realization; which is really the least one can ask of a reality! So, from English to French, the term virtual reality has become ambiguous. It falls under a rhetorical process called oxymoron, which combines two words that seem incompatible or contradictory (expressions like living dead, vaguely clear and eloquent silence are all oxymora). This type of construction creates an original, catchy expression, which is geared more towards the media than science. Other expressions, such as cyberspace [Gibson 84], artificial reality [Krueger 83], virtual environment [Ellis 91] and virtual world [Holloway 92], were also coined, but a quick Web search² shows that the oxymoron virtual reality remains widely used.

¹ Texpo’89 in San Francisco (USA)

The term’s ambiguity is confusing and creates a real epistemological obstacle to the scientific development of this new discipline. It is up to the scientists and professionals in this field to create a clearer epistemological definition in order to remove any ambiguities and establish its status as a scientific discipline (concepts, models, tools), particularly with regard to areas such as modeling, simulation and animation.

1.2 An Original Approach

This document proposes an original contribution to the clarification of this new disciplinary field of virtual reality.

The word autonomy best characterizes our approach and is the common thread that links all of our research in virtual reality. Every day, we face a reality that is resistant, contradictory and that follows its own rules, not ours; in one word, a reality that is autonomous. Therefore, we think that virtual reality will become independent of its origins if it autonomizes the digital models that it manipulates, in order to populate with autonomous entities the realistic universes that computer-generated digital images already let us see, hear and touch. So, like Pinocchio, the famous Italian puppet, models that become autonomous will take part in creating their virtual worlds (Pinocchio Metaphor³, Figure 1.1). A user, partially freed from controlling his model, will also become autonomous and will take part in this virtual reality as a spectator (the user observes the model: Figure 1.1a), actor (the user tries out the model: Figure 1.1b) and creator (the user modifies the model to adapt it to his needs: Figure 1.1c).

This document has three main focuses: the concepts that guide our work and are the basis of the principle of autonomy (Chapter 2), the models that we developed to observe this principle (Chapter 3), and the tools that we used to implement them (Chapter 4).

In Chapter 2, analyzing the historical context, as well as exploring the real in philosophy and the virtual in physics, brings us to a clearer definition of virtual reality that centers around the principle of autonomy of digital models.

In Chapter 3, we recommend a multi-agent approach for creating virtual reality applications based on autonomous entities. We use significant examples to study the principle of autonomy in terms of collective auto-organization (the example of blood coagulation) and individual perception (the example of a sheepdog).

Figure 1.1: Pinocchio Metaphor and the Autonomization of Models (a. user-spectator, b. user-actor, c. user-creator)

² A simple search using the search engine www.google.com gives the following results: cyber(-)space(s) ≈ 25,540 hits, virtual reality(s) ≈ 19,920, virtual world(s) ≈ 15,030, virtual environment(s) ≈ 3,670, artificial reality(s) ≈ 185. In English the ranking is the same.

³ This metaphor was inspired by Michel Guivarc’h, official representative of the Communauté Urbaine de Brest (Brest Urban Community).

Chapter 4 describes our oRis language — a programming language that uses concurrent active objects, which is dynamically interpreted and has instance-level granularity — and its associated simulator: the tool that helped build the autonomous models in the applications that we had already created.

Finally, in Chapter 5, the conclusion, we summarize our research and outline perspectives for its future directions.


Chapter 2

Concepts

2.1 Introduction

In this chapter, we present the main epistemological concepts that guide our researchwork in virtual reality.

First, we will review the historical conditions surrounding the emergence of virtual reality. We will establish that in the early 90’s — when the term virtual reality appeared — the sensory rendering of computer-generated images had become so realistic that it became necessary to focus more specifically on virtual objects than on their images.

Secondly, an overview of the philosophical notion of reality will allow us to define the conditions that we feel are necessary for the illusion of the real; in other words, perception, experimentation and modification of the real.

Next, we will use the notion of the virtual in physics to explain our conception of a reality that is virtual. Several simple, yet significant, examples borrowed from optics demonstrate the importance of underlying models in understanding, explaining and perceiving phenomena. The virtual will then clearly emerge as a construction in the universe of models.

Finally, by analyzing the user’s status while he is using these models, we will postulate an autonomy principle that will guide us in designing our models (Chapter 3) and creating our tools (Chapter 4).

2.2 Historical Context

Epistemology is not particularly concerned with establishing a chronology of scientific discovery; it concentrates instead on uncovering the conceptual structure of a theory or work, which is not always evident, even to its creator. Epistemology must try to figure out what connects concepts and plays a vital role in the architecture of the structure: recognizing what is an obstacle and what remains unclear and unsettled. To this effect, it should focus on showing exactly how the current state of the science that it studies is a response, critique, or maturation of previous states. [Granger 86]

Historically, the notion of virtual reality emerged at the intersection of different fields such as computer graphics, computer-aided design, simulation, teleoperations, audiovisual, etc. Consequently, virtual reality was developed within a multidisciplinary environment where computer graphics played an influential role because, ever since Sutherland created the first graphic interface (Sketchpad [Sutherland 63]), virtual reality has endeavored to make the computer-generated digital images that it displays on computer screens more and more realistic. Therefore, we will focus on the major developments in computer graphics before 1990 — when the expression virtual reality became popular — by taking a look at image construction, animation and user appropriation.

2.2.1 Constructing Images

From 2D Images to 3D Images

Computer graphics first had to undertake 2D problems such as displaying line segments [Bresenham 65], eliminating hidden lines [Jones 71], determining segment intersections for coloring polygons [Bentley 79], smoothing techniques [Pitteway 80] and triangulating polygons [Hertel 83].

Research departments in the aeronautical and automobile industries quickly understood the benefit of computer-graphics techniques and integrated them as support tools in designing their products, which led to the creation of computer-aided design (CAD). It quickly evolved from its initial wireframe representations, adding parametric curves and surfaces to its systems [Bezier 77]. These patch surfaces were easy to manipulate interactively, and allowed the designer to create free-form shapes. However, when the object’s shape had to follow strict physical constraints, it was better to work with implicit surfaces, which are the isosurfaces of a pre-defined scalar field in space [Blinn 82]. By triangulating these surfaces, physics problems involving plates and shells could then be solved by numerical simulation (finite element methods [Zinkiewicz 71]) and the results displayed realistically [Sheng 92].

Still, surface modeling proved to be inadequate for ensuring the topological consistency of modeled objects. Therefore, volume modeling was developed from two types of representation: boundary representation and volume representation. Boundary representation models an object through a polyhedral surface made up of vertices, faces and edges. This type of representation requires that adjacency relationships between faces, edges and vertices (in other words, the model’s topology) be taken into account. The topology can be maintained through the edges (winged-edges [Baumgart 75], n-G-maps [Lienhardt 89]) or in a face adjacency graph (Face Adjacency Graph [Ansaldi 85]). Volume representation defines an object as a combination of primitive volumes with set operators (CSG: Constructive Solid Geometry [Requicha 80]). The object’s topology is defined by a tree (CSG tree) whose leaves are basic solid shapes and whose nodes are set operations, translations, rotations or other deformations (torsion [Barr 84], free-form deformations [Sederberg 86]) that are applied to underlying nodes. The elimination of hidden faces then replaced the elimination of hidden lines and split into two large categories [Sutherland 74]: visibility was either processed directly in image-space, most often with a depth-buffer (z-buffer [Catmull 74]), or in object-space by partitioning that space (BSP: Binary Space Partitioning [Fuchs 80], quadtree/octree [Samet 84]).
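The CSG principle just described can be sketched as a point-membership test: each leaf classifies a point against a primitive solid, and each internal node combines its children with a set operation. The following Python sketch is illustrative only; the class names are ours, not taken from any CAD system.

```python
# Minimal CSG-tree sketch by point-membership classification.
# Leaves are primitive solids; internal nodes are set operations.

class Sphere:
    def __init__(self, cx, cy, cz, r):
        self.cx, self.cy, self.cz, self.r = cx, cy, cz, r

    def contains(self, x, y, z):
        # Point is inside (or on) the sphere.
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 + (z - self.cz) ** 2 <= self.r ** 2

class Union:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def contains(self, x, y, z):
        return self.a.contains(x, y, z) or self.b.contains(x, y, z)

class Difference:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def contains(self, x, y, z):
        return self.a.contains(x, y, z) and not self.b.contains(x, y, z)

# A hollow ball: a sphere of radius 2 with a sphere of radius 1 carved out.
solid = Difference(Sphere(0, 0, 0, 2), Sphere(0, 0, 0, 1))
```

Evaluating the tree at a point walks it from the root down, exactly as the set-operation nodes of a CSG tree are evaluated against its primitive leaves.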

Thus, wire-frame images became surface images and then volume images, through the use of geometric and topological models that looked more and more like real solid objects.


From Geometric Image to Photometric Image

The visual realism of objects needed more than just precise shapes: they needed color, textures [Blinn 76] and shadows [Williams 78]. Therefore, computer graphics had to figure out how to use light sources to illuminate objects. Light needed to be simulated from its emission to its absorption at the point of observation, while taking into account the reflective surfaces of the scene being illuminated. The first illumination models were empirical [Bouknight 70] and used interpolations of colors [Gouraud 71] or surface normals [Phong 75]. Then came methods based on the physical laws of reflection [Cook 81]. Scenes were illuminated indirectly by shooting rays [Whitted 80], or through a radiosity calculation [Cohen 85]; the two methods could also be combined [Sillion 89]. From the observer’s perspective, in order to improve the quality of the visual rendering, the effects of the atmosphere on light [Klassen 87], as well as the main characteristics of cameras [Kolb 95], including their defects [Thomas 86], needed to be taken into account.
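As a minimal illustration of the empirical models cited above, the classical Phong formula combines a Lambertian diffuse term with a specular lobe reflected about the surface normal. The coefficients below (kd, ks, shininess) are illustrative values of ours, not taken from the original paper.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, to_light, to_eye, kd=0.7, ks=0.3, shininess=32):
    """Empirical Phong shading: diffuse (Lambert) term + specular lobe."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = max(dot(n, l), 0.0)
    # Mirror reflection of the light direction about the normal.
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return kd * diffuse + ks * specular
```

Evaluating this per vertex and interpolating colors gives Gouraud shading; interpolating the normals instead and evaluating per pixel gives Phong shading.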

Hence, starting from representational art, computer-generated images progressively became more realistic by integrating the laws of optics.

2.2.2 Animating Images

From Fixed Image to Animated Image

During the early history of the movies, image sequences were manually generated to produce cartoons. When digital images arrived on the scene, these sequences could be generated automatically. At first, a sequence of digital animation was produced by generating in-between images obtained by interpolating the key images created by the animator [Burtnyk 76]. Interpolation, usually of the cubic polynomial type (spline) [Kochanek 84], concerns key positions of the displayed image [Wolberg 90] or the parameters of the 3D model of the represented object [Steketee 85]. In direct kinematics, time parameters are taken into account during interpolation. But with inverse kinematics, a spacetime path for the components of a kinematic chain can be automatically reconstructed [Girard 85] by giving the positions of the chain’s extremities and evolution constraints. However, these techniques quickly proved to be limited in reproducing certain complex movements, in particular, human movements [Zeltzer 82]. Therefore, capturing real movements from human subjects equipped with electronically-trackable sensors [Calvert 82] or optical sensors (cameras + reflectors) [Ginsberg 83] emerged as an alternative.
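Such in-betweening can be sketched with Catmull-Rom interpolation, one common cubic spline (the Kochanek-Bartels splines cited above generalize it with tension, continuity and bias parameters). An intermediate value between two keys is computed from the four surrounding keys; the key values below are made up for illustration.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic (Catmull-Rom) interpolation between key values p1 and p2;
    p0 and p3 are the neighbouring keys, t in [0, 1]."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

# In-between frames for one animated parameter (e.g. a joint angle):
keys = [0.0, 10.0, 30.0, 40.0]
inbetweens = [catmull_rom(*keys, t / 4) for t in range(5)]
```

The curve passes exactly through the keys p1 and p2 at t = 0 and t = 1, which is what makes this family of splines convenient for key-frame animation.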

Nevertheless, all of these techniques only reproduced effects without any a priori knowledge of their causes. Methods based on dynamic models instead operate on causes in order to deduce effects. Usually, this approach leads to models governed by systems of differential equations with boundary conditions, which then have to be solved in order to make explicit the solution that they implicitly define. Different numerical solution methods were then adapted to animation: direct methods by successive approximation (Runge-Kutta type) worked better for poly-articulated rigid systems [Arnaldi 89], structural discretization of objects by finite elements was used for deformable systems [Gourret 89], and structural and physical discretization of systems by mass-spring networks was used to model a large variety of systems [Luciani 85]. Global problems of elastic [Terzopoulos 87] or plastic deformation [Terzopoulos 88] were dealt with, as well as collision detection between solids [Moore 88]. Particular attention was paid to animating human characters, seen as articulated skeletons and animated by inverse dynamics [Isaacs 87]. In fact, animating virtual actors involves all of these animation techniques [Badler 93] because it concerns skeletons as well as shapes [Chadwick 89], walking [Boulic 90] as well as gripping [Magnenat-Thalmann 88], facial expressions [Platt 81] as well as hair [Rosenblum 91], and even clothes [Weil 86].
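In their simplest form, the mass-spring networks mentioned above reduce to point masses linked by damped springs and integrated step by step. A minimal 1D sketch, using semi-implicit Euler integration (one of the simplest stable schemes); the constants are illustrative, not from any cited system.

```python
def simulate_spring(x0, v0, k=10.0, m=1.0, c=0.5, rest=1.0, dt=0.01, steps=2000):
    """One point mass on a damped spring anchored at the origin, in 1D."""
    x, v = x0, v0
    for _ in range(steps):
        force = -k * (x - rest) - c * v  # Hooke's law + viscous damping
        v += (force / m) * dt            # semi-implicit Euler: velocity first...
        x += v * dt                      # ...then position with the new velocity
    return x

# Released from x = 2 with no velocity, the mass oscillates,
# then settles near the rest length of the spring:
final = simulate_spring(2.0, 0.0)
```

A full network simply applies the same force accumulation over every spring attached to each mass before the integration step; this is the cause-to-effect direction the paragraph above describes.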

Through kinematics and dynamics, animation sequences could be automatically generated so that their movements came closer to real behavior.

From The Pre-Calculated Image to The Real-Time Image

At the beginning of the 20th century the first flight simulators had instruments, but no visibility; computer-generated images introduced vision, first for simulating night flights (points and segments in black and white), then daytime flights (colored surfaces with the elimination of hidden faces) [Schumacker 69]. However, training simulators required response times that corresponded to those of the simulated device in order for pilots to learn its reactive behavior. Realistic rendering methods based on the laws of physics resulted in calculation times that were often too long for flight simulators compared with actual response times. Therefore, during the 60’s and 70’s, special equipment such as the real-time display device¹ was created to accelerate the different rendering processes. These peripheral devices, which generated real-time computer-generated images, were slowly replaced by graphics stations that would both generate and manage data structures on a general-purpose processor and render them on a specialized graphics processor (SGI 4D/240 GTX [Akeley 89]). At the same time, ways were found to reduce the number of faces to be displayed through processes carried out before graphic rendering [Clark 76]. On the one hand, the underlying data structures make it possible to identify, from the observer’s perspective, objects located outside of the observation field (view-frustum culling), poorly oriented faces (backface culling) and concealed objects (occlusion culling), so that these objects are not transmitted to the rendering engine. On the other hand, the farther away an object is located from the observer, the more its geometry can be simplified while maintaining an appearance, or else a topology, that is satisfactory for the observer. This manual or automatic simplification of meshes, which generates discrete or continuous levels of detail, proved to be particularly effective for the digital terrain models widely used in flight simulators [Cosman 81].
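Of the three culling steps above, back-face culling is the simplest to sketch: assuming triangles are listed counter-clockwise when seen from outside, a face whose normal points away from the viewer cannot be visible and is discarded before rendering. This Python sketch is illustrative, not any engine's actual pipeline.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_front_facing(v0, v1, v2, eye):
    """True if the triangle's outward normal points towards the eye."""
    normal = cross(sub(v1, v0), sub(v2, v0))
    return dot(normal, sub(eye, v0)) > 0

def cull_backfaces(triangles, eye):
    # Only front-facing triangles are passed on to the rendering engine.
    return [t for t in triangles if is_front_facing(*t, eye)]
```

For a closed object this typically removes about half of the faces before any per-pixel work is done, which is why it sits so early in the pipeline.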

Thus, thanks to these optimizations and to better-adapted hardware devices, images could be calculated in real time.

¹ The Evans and Sutherland company (www.es.com) created the first commercial display devices connected to a PDP-11 computer. For several million dollars, they allowed you to display several thousand polygons per second. Today, a GeForce 3 PC graphics card (www.nvidia.com) reaches performances in the order of several million polygons per second for several thousand French francs.


2.2.3 Image Appropriation

From Looking at an Image to Living an Image

Calculating an image in real time means that a user can navigate within the virtual world that it represents [Brooks 86a], but it was undoubtedly artistic experiments and teleoperations that led users to start appropriating images.

In the 70’s, interactive digital art allowed users to experiment with situations where digital images evolved through the free-form movements of artists who observed, in real time and on the big screen, how their movements influenced images (VideoPlace [Krueger 77]). With the work of ACROE², computer-aided artistic expression took a new step with multi-sensory synthesis driven by force-feedback control [Cadoz 84]. The peripherals, called retroactive gestural transducers, allow users, for example, to feel the friction of a real bow on a virtual violin string, to hear the sound made by the vibration of the string and to observe the string vibrate on the screen. A physical modeling system of mass-spring virtual objects pilots these peripherals (CORDIS-ANIMA [Cadoz 93]), and allows users to have a completely original artistic experience.

For its part, teleoperations became involved very early in ways to remotely control and pilot robots located in dangerous environments (nuclear, underwater, space) in order to avoid putting human operators at risk. In particular, NASA developed a liquid-crystal head-mounted display (VIVED: Virtual Visual Environment Display [Fisher 86]) coupled with an electronically-trackable sensor [Raab 79] that allowed users to move around in a 3D image observed in stereovision. This was a real improvement over the first head-mounted display, which had cathode ray tubes, was heavier, and was held on top of the head by a mechanical arm fixed to the ceiling [Sutherland 68]. This new display was very quickly combined with a dataglove [Zimmerman 87] and a spatialized sound system based on the laws of acoustics [Wenzel 88]. These new functions enhanced the sensations of the tactile feedback systems [Bliss 70] and the force feedback systems (GROPE [Batter 71, Brooks 90]) that were already familiar to teleoperators. Robotics experts were using expressions such as telesymbiosis [Vertut 85], telepresence [Sheridan 87] and tele-existence [Tachi 89] to describe the feeling of immersion that operators can have: the feeling that they are working directly on a robot, for example, even though they are manipulating it remotely. As it has since the beginning of teleoperations, adding an operator to the loop of a robotized system poses the problem of how to include human factors to improve the ergonomics of workstations and their associated peripherals [Smith 89].

By becoming multimodal, the image allows users to see, hear, touch, and move or deform virtual objects with sensations that are almost real.

² ACROE: Association for the Creation and Research on Expression Tools, Grenoble (www-acroe.imag.com)


From Isolated Image to Shared Image

During the 70’s, ARPANET³, the first long-distance network based on TCP/IP⁴ [Cerf 74], was developed. Given these new possibilities of exchanging and sharing information, the American Defense Department decided to interconnect simulators (tanks, planes, etc.) through the ARPANET network in order to simulate battlefields. So the SIMNET (simulator networking) project, which took place between 1983 and 1990, tested group strategies that were freely executed by dozens of users on the network, not controlled by pre-established scenarios [Miller 95]. The communication protocols established for SIMNET [Pope 87] introduced predictive models of movement (dead reckoning) in order to avoid overloading the network with too many synchronizations, while making sure that there was time coherence between simulators. These protocols were then generalized and standardized for the needs of distributed interactive simulation (DIS: Distributed Interactive Simulation [IEEE 93]).
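The dead-reckoning idea can be sketched as follows: each host extrapolates remote entities from their last reported state, and a sender only broadcasts a new update when its true state drifts beyond a threshold from what the others are extrapolating. The state layout and threshold below are illustrative, not the actual SIMNET/DIS packet format.

```python
def extrapolate(pos, vel, dt):
    """Position predicted dt seconds after the last state update
    (first-order dead reckoning: constant velocity)."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_update(true_pos, last_sent_pos, last_sent_vel, dt, threshold=1.0):
    """A sender re-broadcasts only when the remote hosts' prediction
    of it has drifted too far from its true position."""
    predicted = extrapolate(last_sent_pos, last_sent_vel, dt)
    error = sum((t - p) ** 2 for t, p in zip(true_pos, predicted)) ** 0.5
    return error > threshold
```

An entity flying straight at its reported velocity thus generates no traffic at all; packets are sent only when it maneuvers, which is what keeps the network load and the time coherence between simulators manageable.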

Thus, the computer network and its protocols enabled several users to share images, and then, through these images, to share ideas and experiences.

2.2.4 From Real Images to Virtual Objects

Interdisciplinarity

This brief overview of the historical context from which the concept of virtual reality emerged illustrates the vitality of computer graphics, as also proven by events such as Siggraph⁵ in the United States and Eurographics⁶ or Imagina⁷ in Europe. This new expression and this new field of investigation, which is based upon computer graphics, simulation, computer-aided design, teleoperations, audiovisual, telecommunications, etc., is being formed within an interdisciplinary melting pot.

Virtual reality thus seems like a catch-all field within the engineering sciences. It manipulates images that are interactive, multimodal (3D, sound, tactile, kinesthetic, proprioceptive), realistic, animated in real time, and shared on computer networks. In this sense, it is a natural evolution of computer graphics. During the last ten years, a special effort has been made to integrate all of the characteristics of these images within the same system8, and to optimize and improve all of the techniques needed for these systems. Virtual reality is therefore based on interaction in real time with virtual objects and the feeling of immersion in virtual worlds: it acts as a multisensory [Burdea 93], instrumental [Cadoz 94b] and behavioral [Fuchs 96] interface. Today, the major works in virtual reality give a more precise and especially operational meaning to the expression virtual

3. ARPANET: Advanced Research Projects Agency Network of the American Defense Department.
4. TCP/IP: Transmission Control Protocol/Internet Protocol.
5. Siggraph (www.siggraph.org): 29th edition in 2002.
6. Eurographics (www.eg.org).
7. Imagina (www.imagina.mc): 21st edition in 2002.
8. You will find in [Capin 99] a comparison of a certain number of these platforms.


reality, which can be summed up by the definition established by the Virtual Reality Work Group9: a set of software and hardware tools that can realistically simulate an interaction with virtual objects that are computer models of real objects.

Transdisciplinarity

With this definition, we changed our approach from researching real images in computer graphics to researching visualized virtual objects. The objects are characterized not only by their appearances (their images), but also by their behavior, which falls outside the scope of computer graphics. So, images reveal both behaviors and shapes. Virtual reality is becoming transdisciplinary and is slowly moving away from its origins.

In 1992, D. Zelter had already proposed an evaluation of virtual universes based on three basic notions: the autonomy of the viewed objects, the interaction with these objects, and the feeling of being present in the virtual world [Zelter 92]. A universe is then represented by a point in the AIP (autonomy, interaction, presence) cube. Hence, in this cube, 3D movies have coordinates (0,0,1), a flight simulator (0,1,1), and Zelter would place virtual reality at point (1,1,1). Still, we must point out that, until now, very little research has been done on the problem of virtual object autonomy. We can, nevertheless, note that so-called behavioral animation introduced several autonomous behaviors through particle systems [Reeves 83], flocks of birds [Reynolds 87], shoals of fish [Terzopoulos 94] and, of course, virtual humans [Thalmann 99].

An object's behavior is considered autonomous if the object is able to adapt to unknown changes in its environment: so, it must be equipped with means of perception and action, and with ways to coordinate its perceptions and actions, in order to react realistically to these changes. This notion of autonomy is the root of our problem. With this new perspective, we must return to the notion of virtual reality by exploring the real of philosophers (Section 2.3) and the virtual of physicists (Section 2.4) in order to understand why the autonomy of virtual objects is vital to the realism of the images that they inhabit (Section 2.5).
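Such an autonomous behavior can be reduced to a minimal perception-decision-action loop (a toy sketch, not a model from this document; the one-dimensional world, the obstacle-avoidance rule and all names are invented for illustration):

```python
class AutonomousEntity:
    """An entity that is not scripted in advance: at every step it senses
    whatever the environment currently contains and reacts to it."""

    def __init__(self, position=0.0):
        self.position = position

    def perceive(self, environment):
        # Perception: locate the nearest obstacle, if any.
        return min(environment, key=lambda o: abs(o - self.position), default=None)

    def decide(self, obstacle):
        # Coordination between perception and action: back away from a
        # close obstacle, otherwise keep moving forward.
        if obstacle is not None and abs(obstacle - self.position) < 2.0:
            return -1.0 if obstacle > self.position else 1.0
        return 1.0

    def act(self, direction):
        self.position += direction

    def step(self, environment):
        self.act(self.decide(self.perceive(environment)))

entity = AutonomousEntity()
entity.step(environment=[1.5])   # an obstacle appears ahead: back away
print(entity.position)           # -1.0
entity.step(environment=[])      # the environment changed: move forward
print(entity.position)           # 0.0
```

The point of the sketch is that nothing in the entity presupposes a fixed scenario: the same loop copes with an environment it was never told about, which is what distinguishes autonomy from pre-computed animation.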

2.3 The Reality of Philosophers

Imagine human beings living in an underground cave, which has a mouth open towards the light and reaching all along the cave; here they have been from their childhood, and have their legs and necks chained so that they cannot move, and can only see before them, being prevented by the chains from turning round their heads. [...] And they see only their own shadows, or the shadows of one another, which the fire throws on the opposite wall of the cave? [...] And if they were able to converse with one another, would they not suppose that they were naming what was actually before them? [...] It is without a doubt then, I said, that in their eyes, reality would be nothing else than the shadows of the images.

Plato, The Republic, Book VII, ≈ 375 B.C.

The word real comes from the Latin res, meaning a thing. A reality is a thing that exists, which is defined mainly by its resistance and its recurrence. Reality is realized: it resists and it goes on without end. But what is a thing, and in what sense does it exist? These

9. GT-RV: the French Virtual Reality Group of the National Center of Scientific Research and the French Ministry of Education, Research and Technology (www.inria.fr/epidaure/GT-RV/gt-rv.html).


questions are without a doubt as old as philosophy itself, and this is why philosophers and scientists revisit this intuitive yet complex notion of reality time and time again.

2.3.1 Sensory/Intelligible Duality

Almost 2,500 years ago, Plato, in his famous allegory of the cave, had already placed human beings in a semi-immersive situation in front of a screen (the cave wall in front of them) so that they could only see the projected images of virtual objects (the shadows of the images). This simple thought experiment is no doubt the first virtual reality experience and a prelude to the Reality Centers of today [Gobel 93]. Consequently, from René Descartes'10 I think, therefore I am, to physicist Bernard d'Espagnat's [D'Espagnat 94] Veiled Reality, all philosophical doctrines revolve around two extreme positions (Figure 2.1)11. The first doctrine, objectivism, maintains that objects exist independently of us: there is an objective reality that is independent of the sensory, experimental and intellectual conditions of its appearance. The second doctrine, subjectivism, relates everything to subjective thinking, in such a way that there is no point in discussing the real nature of objects because reality is only an artifact, an idea. This game of opposites between the two perspectives broadens our thinking: everyone finds their own balance between the object and the consciousness that perceives it [Hamburger 95].

[Figure 2.1 arranges the main doctrines along an axis running from subjectivism to objectivism: subjectivism, idealism, rationalism, positivism, empiricism, materialism, objectivism.]

- Idealism (philosophy of the idea) states that reality is not sensory, but ideal. Idealism includes the idealist conceptions of Plato, R. Descartes, E. Kant and G. Berkeley.

- Rationalism (philosophy of deduction) postulates that reason is an organizational faculty of experience; through a priori concepts and principles, it provides the formal framework for all knowledge of phenomena by relying on a logical-deductive approach.

- Positivism (philosophy of experience), empirical rationalism or rational empiricism, is based on the experimental method, emphasizing the importance of the hypothesis, which governs reasoning and controls experience: experimentation becomes deliberate, conscious and methodical. It gives deduction the role of justifying laws after their inductive or intuitive formulation. The pragmatism of this school of thought, started by A. Comte and amended by, among others, G. Bachelard and K. Popper, attracted a large number of scientists of the 19th and 20th centuries (C. Bernard, N. Bohr, ...).

- Empiricism (philosophy of induction) makes experience the only source of knowledge; it justifies induction as the main approach for going from a set of observed facts to the statement of a general law.

- Materialism (philosophy of the material) is the belief in matter over mind. This philosophy, in which there are only physical-chemical explanations, was that of many of the atomists of Antiquity, as well as of scientists of the 19th and 20th centuries.

Figure 2.1: Reality and the Main Philosophical Doctrines

Without getting involved in this debate, let us note that it is through our senses

10. Descartes R., Discourse on Method, Book IV, 1637.
11. Figure 2.1 is freely inspired by the Baraquin et al. Dictionary of Philosophy [Baraquin 95].


and mind that we understand the world. Certainly, the senses do not protect us from illusions, mirages or hallucinations, in the same way that ideas can be arbitrary, disembodied or schizophrenic. Nevertheless, our only way to investigate reality is through the sensory and the intelligible. Already ahead of his time, I. Kant12 introduced the notion of noumenon (the object of understanding, from the intelligible) as opposed to the notion of phenomenon (the object of possible experience that appears in space and time, from the sensory). More recently, Edgar Morin, with his paradigm of complexity, has blurred the line between the sensory and the intelligible. He postulates a dialogical principle that can be defined as the necessary association of antagonisms in order for an organized phenomenon to exist, function and develop. In this way, a sensory/intelligible dialogue controls perception from the sensory receptors to the development of a cognitive representation [Morin 86]. In a similar perspective, physicist Jean-Marc Lévy-Leblond believes that the antinomic pairs that structure thought are irreducible: for example, continuous/discontinuous, determined/random, elemental/compound, finite/infinite, formal/intuitive, global/local, real/fictional, etc. He uses these examples, borrowed from the physical sciences, to demonstrate how these pairs of opposites are inseparable, and how they structure our scientific understanding of nature, and consequently our understanding of reality [Levy-Leblond 96].

Therefore, in a dialogue between the objective and the subjective, reality becomes descriptive, not of the thing in itself, but of the object as it is seen by man through the sensory, experimental and intelligible conditions of its appearance. The boundary between the opposing sides of this dialogue is continuously redrawn through interactions between the world and man. The senses can be deceiving, and the mind imagines actions to improve its understanding of the world; these actions cause our senses to react, which in return keeps our ideas from becoming disembodied. Each person then creates their own reality by continuously switching between the senses and the imagination. Science, in turn, tries to give them a universal representation that is free of individual illusions.

2.3.2 Virtual Versus Actual

While the concept of reality has always haunted philosophers, the concept of virtuality in itself has only been a topic of research for about twenty or thirty years. The word virtual, from the Latin virtus (virtue or strength), describes that which carries within itself all the basic conditions for its actualization. In this sense, the virtual is the opposite of the actual: the here and now, that which is in act and not merely in potency. This meaning of virtual, which dates back to the Middle Ages, is analyzed in detail by Pierre Levy. He distinguishes the real, the possible, the actual and the virtual as four different modes of existence, and opposes the possible to the real and the virtual to the actual (Figure 2.2) [Levy 95]. The possible is an unproven and latent reality, which is realized without any change in its determination or nature, according to a pre-defined possibility. The actual appears here and now as a creative solution to a problem. The virtual is a dynamic configuration of strengths and purposes. Virtualization transforms the initial

12. Kant E., Critique of Pure Reason, I, Book II, Chapter III, 1781.


actuality into a solution to a more general problem.

[Figure 2.2 arranges the four modes of existence between two poles. On the latent pole stand the POSSIBLE (all of the predetermined possibilities) and the VIRTUAL (all of the problems); on the visible pole stand the REAL (persistent and resistant) and the ACTUAL (a specific, here-and-now solution to a problem). Realization leads from the possible to the real, and potentialization back; actualization leads from the virtual to the actual, and virtualization back.]

Figure 2.2: The Four Modes of Existence

To illustrate this contrast between the virtual and the actual, let us examine the case of a company that decides to have all of its employees telecommute. Currently, all of the employees work during the same hours and in the same place. However, with telecommuting, there is no need to have all of the employees physically present in the same place, because the workplace and the employee schedule now depend on the employees themselves. The virtualization of the company is thus a radical modification of its space-time references; employees can work from anywhere in the world, 24 hours a day. This takes place through a kind of dematerialization in order to strengthen the information communications within the company. For the company, this is a sort of deterritorialization where the coordinates of space and time always pose a problem, instead of providing a specific solution. In order to idealize this new company, the mind creates a mental representation based on the metaphor of the actual company. It begins by interpreting the functioning of the virtual company by analogy with that of a typical company; then gradually, through its understanding, it forms a new idea of the notion of a company. The virtual company is not fictitious: even though it is detached from the here and now, it still provides services and advantages!

Today, the adjective virtual has become a noun. We say the virtual as we speak of the


real, and this evolution is accompanied by a shift in meaning that makes the contrast between the virtual and the actual less clear. For epistemologist Gilles-Gaston Granger, the virtual is one of the categories of objective knowledge, along with the possible and the probable, that is essential to the creation of science objects from the actual phenomena of experience. The virtual is a representation of objects and facts that is detached from the conditions of a complete, singular and actual experience [Granger 95]. For Philippe Queau, however, the virtual is realistically and actually present: it is an art that makes imitations that can be used. It responds to the reality of which it is a part [Queau 95]. Between these two extremes, Dominique Noel draws on the semantics of S. Kripke's possible worlds [Kripke 63] in order to arrange possible worlds according to a relationship of accessibility, which says the following: a world M1 is accessible from another world M2 if what is possible relative to M1 is also possible relative to M2. The accessibility relationship R presupposes an initial world M0, which is interpreted as the actual world. The worlds M1, M2, etc., derived from M0 through the relationship R, are then all interpreted as virtual worlds. This provides a formalism for mapping out the main concepts of the virtual [Noel 98].
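This accessibility relationship lends itself to a direct computational reading (a hypothetical sketch: the relation R is assumed to be given explicitly as a finite graph, which the text does not require; the world names follow the text):

```python
def virtual_worlds(R, m0):
    """Collect every world reachable from the actual world m0 through the
    accessibility relation R (a dict: world -> set of accessible worlds).
    In the reading of [Noel 98], each world reached this way is virtual."""
    seen, frontier = set(), [m0]
    while frontier:
        world = frontier.pop()
        for nxt in R.get(world, set()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# M1 is accessible from M0 and M2 from M1; M3 can reach M0 but not the
# other way round, so M3 is not virtual relative to the actual world M0.
R = {"M0": {"M1"}, "M1": {"M2"}, "M3": {"M0"}}
print(sorted(virtual_worlds(R, "M0")))   # ['M1', 'M2']
```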

2.3.3 Philosophy and Virtual Reality

Even though philosophers have been divided over the concept of reality since its origins, these same philosophers all agree that, whatever the status of an object in itself (i.e., whether or not it exists independently of the person who perceives it), there are three types of mediation between the world and man: mediation of the senses (the perceived world), mediation of the action (the experienced world) and mediation of the mind (the imagined world). These three mediations are inseparable and create three perspectives of the same reality (Figure 2.3).

In the first stage of our study, we will demonstrate that, like reality, virtual reality must also allow this triple mediation of the senses, action and mind. In other words, we must be able to respond affirmatively to the three following questions:

1. Is it accessible to our senses?

2. Does it respond to our prompts?

3. Do we have a modifiable representation?

But in what way would such a reality be virtual? The widespread use of the term virtual in all circles of society, from scientists to philosophers, from artists to the general public, changes its definition, which still appears to be a work in progress, and even renews the concept of reality. For our purposes, we have chosen to refer to the physical science notion of the virtual to explain our conception of a reality that is virtual.


[Figure 2.3 places Reality at the center of a triangle whose vertices are the Senses (Perception), the Action (Experiment) and the Mind (Representation).]

- Mediation of the senses enables the perception of reality.

- Mediation of the action enables one to experiment on reality.

- Mediation of the mind forms a mental representation of reality.

Figure 2.3: The Three Mediations of the Real

2.4 The Virtual of Physicists

Geometrical optics is based entirely on the representation of the light rays along which light is supposed to propagate. By drawing reflections and refractions with them, these rays end up being seen as real physical objects. [...] The image of automobile or boat headlight beams, or of those modern, thin laser beams and light-conducting glass fibers, imposes the all-too-vivid feeling of a familiar reality. Nevertheless, are these light rays of theory, these lines without thickness, anything other than beings of reason and constructions, powerful ones, certainly, but purely intellectual? [Levy-Leblond 96]

Physicists call a certain number of concepts virtual: for example, virtual images in optics, virtual work in mechanics, virtual interaction processes in particle physics, etc., as well as many different models whose explanatory and predictive capabilities are widely used to gain more knowledge about the phenomena that they are trying to represent. In fact, this notion of the virtual seems to go hand in hand with the mathematization of the physical sciences that has been taking place since the 17th century. The mathematical formalism used in physics is becoming more and more inaccessible to ordinary language, and is, in fact, putting itself out of reach of intuitive representation. One is then forced to use analogies, images and words which can only be fully understood through calculations. Even though these intermediate notions are not adequate enough to be fully credible, they nevertheless convey the physicist's approach [Omnes 95].

2.4.1 Virtual Images

Geometrical optics deals with light phenomena by studying how light rays travel in transparent environments. It is based on the notion of independent light rays that travel in a straight line in transparent, homogeneous environments and whose behavior at the surface of a diopter or a mirror is described by the laws of Snell-Descartes (see for example [Bertin 78]).


To demonstrate these ideas, let us examine the case of an optical system (S) bounded by an entrance face L1 and an exit face L2, which separate two homogeneous environments, denoted n1 and n2 respectively (Figure 2.4). By definition, if the light travels from left to right, the environment located to the left of face L1 is called the object environment, and the environment to the right of face L2 is called the image environment. Let

[Figure 2.4 shows the optical system (S), bounded by an entrance face L1 and an exit face L2, with the object environment (n1) on the left, the image environment (n2) on the right, and points O and I on the optical axis.]

Figure 2.4: Optical System

O be a punctual light source that sends light rays, also known as incident rays, onto L1. Point O is an object for (S), and if, after having crossed the system (S), the emerging light rays all pass through a same point I, this point I is called the image of O through the system (S). Two different scenarios are then identified:

- The emergent rays actually pass through I: I is said to be the real image of object O; this is the case for a converging lens, as shown in Figure 2.5a;

- Only the extensions of these rays pass through I: I is then said to be the virtual image of O; this is the case for a diverging lens, as shown in Figure 2.5b. A real image can be materialized on a screen placed in the plane where it is located, but the same is not true for a virtual image, which is only a construction of the mind.

Thus, in the case of a real image, if a powerful laser is placed at O, a small piece of paper will burn at I (concentration of energy), but in the case of a virtual image, the paper will not burn at I (no concentration of energy).
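The two scenarios can be checked with the standard thin-lens relation 1/d_o + 1/d_i = 1/f (textbook geometrical optics rather than anything specific to this text; the sign convention and the function are illustrative assumptions: d_o > 0 for a real object, f > 0 for a converging lens, f < 0 for a diverging one):

```python
def image_of(f, d_o):
    """Return (d_i, kind) for a thin lens of focal length f and an object
    at distance d_o. With this convention, d_i > 0 means the emergent rays
    really cross (real image); d_i < 0 means only their extensions do
    (virtual image)."""
    if d_o == f:
        return (float("inf"), "at infinity")
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    return (d_i, "real" if d_i > 0 else "virtual")

# Converging lens (f = +10): the rays really cross, at d_i close to +15.
print(image_of(f=10.0, d_o=30.0))
# Diverging lens (f = -10): only the ray extensions cross, at d_i close to -7.5.
print(image_of(f=-10.0, d_o=30.0))
```

The sign of d_i plays exactly the role of the laser test above: energy only concentrates where d_i is positive.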

[Figure 2.5 shows, on the optical axis, an object O and its image I for a) a converging lens L and b) a diverging lens L.]

Figure 2.5: Real Image (a) and Virtual Image (b)

In a transparent, isotropic environment, which may or may not be homogeneous, the path of light is independent of the direction of travel: if a light ray leaves from point O to go towards point I by following a certain path, another light ray can leave from I and


follow the same path in the opposite direction to go towards O. Thus, the roles of the object and image environments can be interchanged: this property is known as the principle of the reversibility of light. Let us look again at Figure 2.5. In the case of the converging lens, it is easy to position a punctual source at I whose rays converge at O. In the case of the diverging lens depicted in Figure 2.6a, a device must be used. Let us position a punctual source at O′ so that it forms, through an auxiliary converging lens L′, an image that, if L did not exist, would form at I (Figure 2.6b). Because of the presence of L, the rays converge at O (Figure 2.6c): we therefore say that I plays the role of a virtual object for lens L.


Figure 2.6: Virtual Image (a), Real Image (b), and Virtual Object (c)

These notions of virtual images and virtual objects in geometrical optics correspond to geometric constructions: the points where the extensions of light rays converge. They are intermediary concepts, created as such, which cannot be materialized but which facilitate image construction, and for this reason they are widely used in designing optical devices.

The images of virtual reality, then, are not virtual. They are essentially computer-generated images which, like any image, reproduce a reality of the physical world or display a mental abstraction. They materialize on our terminal screens and are, therefore, as defined by opticians, real images. However, they give us virtual objects to look at, and it is the quality of the visual rendering that creates, in part, the illusion of reality.


2.4.2 Image Perception

As in most older disciplines, the fundamentals of geometrical optics are based on concepts derived directly from common sense and acquired through daily experience. The same applies to the notion of the light ray, to rectilinear propagation and to the principle of reversibility. These notions are so instilled in us13, so effective in the normal perception of our environment, that they lead us to make mistakes of interpretation that cause certain illusions and mirages. These illusions are in fact solutions proposed by the brain after simulating the path followed by the light: it travels the path in the opposite direction (principle of reversibility), in a straight line (rectilinear propagation), ignoring diopters and mirrors (no refraction, no reflection), and assuming a homogeneous environment!

[Figure 2.7 sketches three cases, each with an object O and a perceived image I: a) mirror, b) ray bending over the Earth, c) mirage.]

Each morning, we see our image behind the mirror, even though there is nothing behind it except a mental image formed in our brain... which forgot the laws of reflection (Figure 2.7a). In dry, clear weather, we can see an island beyond the horizon. The differences in density with altitude within the atmosphere create a non-homogeneous environment that causes the light rays to bend; if this variation in density is large enough, the curvature imposed on the light rays can effectively make the island appear above the horizon (Figure 2.7b). A mirage is the same type of illusion (Figure 2.7c). The layers of air near the ground can become lighter than the upper layers due to local warming: they then curve the rays traveling towards the ground. The simultaneous observation of an object O in direct vision and of an image I due to the curvature of certain rays can then be interpreted as the presence of a body of water in which the object O is reflected.

Figure 2.7: Illusions of Rectilinear Propagation and of the Reversibility of Light

These illusions are real in the sense that real images form on our retina; and they are virtual in the sense that our interpretation of these real images leads us to assume the existence of virtual objects, which are the results of incomplete geometric constructions (Figure 2.7). The interpretation is the result of a simulated image construction for which the brain, like an inexperienced student, has not verified the conditions of application of the model that it used (here, a model of rectilinear propagation of light in a transparent, homogeneous environment). Geometrical optics provides us with an interpretation of a certain number of optical illusions, but to understand them better we must, like Lewis Carroll's heroine14, go through the mirror to the other side and enter into the model's universe.
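The mirror case can be written down as the incomplete construction it is (an illustrative fragment; the function and the mirror plane x = 0 are invented for the example):

```python
def perceived_image(obj, mirror_x=0.0):
    """Where the brain, ignoring the reflection, places the object: the
    straight-line back-extensions of the reflected rays all converge at
    the reflection of the object through the mirror plane x = mirror_x,
    i.e. behind the mirror, where nothing physical exists."""
    x, y = obj
    return (2.0 * mirror_x - x, y)

print(perceived_image((1.5, 0.8)))   # (-1.5, 0.8): an image behind the mirror
```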

13. Geneticists would say that they are coded, neurophysiologists that they are memorized, electronic technicians that they are wired, computer specialists that they are compiled, etc.

14. Lewis Carroll, Through the Looking Glass, 1872.


2.4.3 Physics and Virtual Reality

Physicists base their models on the observation of phenomena and, in return, they explain the observed phenomena through these models. It is the comparison between the simulation of the model and the experiment on the real that allows this mutual development of the observation and the modeling of a phenomenon. The brain makes this same kind of comparison as it perceives our environment: it compares its own predictions to the information that it takes from the environment. Thus, the senses act both as a source of information and as a way to verify hypotheses; perception problems are caused by unresolved inconsistencies between information and predictions [Berthoz 97].

For physicists, the virtual is a tool for understanding the real; it clearly appears as a construction within the universe of models. This applies, for instance, to the geometric constructions that lead to virtual images in optics, or to the use of virtual displacements in the determination of motion in mechanics. Everything happens as if is indeed the expression most commonly used by specialists when they are trying to explain a phenomenon within their field of expertise. Just like the once upon a time that begins fairy tales, everything happens as if is the traditional preamble that marks the passage into the universe of the scientific model; it clearly puts the reader on guard: what follows only exists and makes sense within the framework of the model.

To continue along our line of thought, we will argue that virtual reality is virtual because it involves the universe of models. A virtual reality is a universe of models where everything happens as if the models were real, because they simultaneously offer the triple mediation of the senses, action and mind (Figure 2.8). The user can thus observe the activity of the models through all of his sensory channels (mediation of the senses), test the reactivity of the models in real time through adapted peripherals (mediation of the action), and shape the pro-activity of the models by modifying them on-line to adapt them to his projects (mediation of the mind). The active participation of the user in this universe of models opens the way to a true experimentation on models, and hence to a better understanding of their complexity.

2.5 Principle of Autonomy

Geppetto took his tools and set out to make his puppet. "What name shall I give him?" he said to himself. "I think I will call him Pinocchio." [...] Having found a name for his puppet, he began to work in good earnest, and he first made his hair, then his forehead, then his eyes. The eyes being finished, imagine his astonishment when Geppetto noticed that they moved and stared right at him. [...] He then took the puppet under the arms and placed him on the floor to teach him to walk. Pinocchio's legs were stiff and he could not move, but Geppetto led him by the hand and showed him how to put one foot in front of the other. When his legs became flexible, Pinocchio began to walk by himself and run about the room; until he went out the door, jumped into the street and ran off.

Carlo Collodi, The Adventures of Pinocchio, 1883

A virtual reality application is made up of a certain number of models (the virtual of the physicists) that users must be able to perceive, experiment on and modify under the conditions of the real (the three mediations of the real of the philosophers). Thus, users can jump into or out of the system's control loop, allowing them to operate models on-line. As for modelers,


[Figure 2.8 places the Model at the center of a triangle whose vertices are the Senses (Perception: activity), the Action (Experiment: reactivity) and the Mind (Representation: pro-activity).]

- The mediation of the senses allows you to perceive the model by observing its activity.

- The mediation of the action allows you to experiment on the model and thus test its reactivity.

- The mediation of the mind creates a mental representation of the model that defines its pro-activity.

Figure 2.8: The Three Mediations of Models in Virtual Reality

they must integrate a user model (an avatar) into their system so that the user can be effectively included and can participate in the development of this universe of models.

2.5.1 Operating Models

The three main modes of operating models are perception, experimentation and modification. Each one corresponds to a different mediation of the real [Tisseau 98b].

The mediation of the senses takes place through model perception: the user observes the activity of the model through all of his sensory channels. The same applies to a spectator in an Omnimax theater who, seated on a moving seat in front of a hemispheric screen in a room equipped with a surround-sound system, really feels as if he were part of the animated film that he is watching, even though he cannot modify its course. Here, the quality of the sensory renderings and their synchronization is essential: this is real-time animation's specialty. The most common definition of animation is to make something move. In the more specific field of animated films, to animate means to give the impression of movement by scrolling through a collection of ordered images (drawings, photographs, computer-generated images, etc.). These images are produced by applying an evolution model to the objects in the represented scene.

The mediation of action involves model experimentation: the user tests the model's reactivity through adapted manipulators. The same applies to the fighter pilot at the controls of a flight simulator: his training focuses mainly on learning the reactive behavior of the plane. Based on the principle of action and reaction, the emphasis here is placed on the quality of real-time rendering, which is what interactive simulation does best. The standard meaning of simulate is to make something that is not real seem as if it is. In the field of science, simulation is an experiment on a model; it allows you to test the quality and internal consistency of the model by comparing its results to those of the experiment on the modeled system. Today, it is used more and more to study complex systems in


which human beings are involved, both to train operators and to study the reactions of users. In these simulations where man is in the loop, the operator develops his own behavioral model, which interacts with the other models.

The mediation of the mind takes place when the user himself modifies the model with an expressiveness equivalent to that of the modeler. The same applies to an operator who partially reconfigures a system while the rest of the system remains operational. In this fast-growing field of interactive prototyping and on-line modeling, it is essential that users be able to intervene and express themselves easily. To obtain this level of expressiveness, the user generally uses the same interface, and especially the same language, as the modeler. The mediation of the mind is thus realized by the mediation of language.

So, the closely related fields of real-time animation, interactive simulation and on-line modeling represent three aspects of the operation of models. The three together allow the triple mediation of the real that is needed in virtual reality and define three levels of interactivity (Figure 2.9).

. Real-time animation corresponds to a zero level of interactivity between the user and the model being run. The user is under the influence of the model because he cannot act on the parameters of the model: he is simply a spectator of the model.

. Interactive simulation corresponds to the first level of interaction because the user can access some of the model's parameters. The user thus plays the role of the actor in the simulation.

. In on-line modeling, the models themselves are the parameters of the system: the interaction reaches the highest level. The user, by himself modifying the model being run, participates in the creation of the model and thus becomes a cre-actor (creator-actor).

[Figure: an interactivity axis running from level 0 (spectator, real-time animation) through level 1 (actor, interactive simulation) to level 2 (cre-actor, on-line modeling), along the mediations of senses, action and mind]

Figure 2.9: The Different Levels of Interaction in Virtual Reality

2.5.2 User Modeling

The user can interact with the image through adapted behavioral interfaces. But what the user is able to observe or do within the universe of models is only what the system controls through device drivers, which are vital intermediaries between man and machine. The user's sensory-motor mediation is thus managed by the system and then modeled within the system in one manner or another. The only real freedom that the


user has is his decision-making options (mediation of the mind), which are restricted by the system's limitations in terms of observation and action.

It is also necessary to make the inclusion of the user explicit by representing him through a specific model, an avatar, within the system. At the least, this avatar is placed in the virtual environment in order to define the perspective needed for the different sensory renderings. It has virtual sensors and robot actuators (vision [Renault 90], sound [Noser 95], and hand grips [Kallmann 99]) to interact with the other models. The data gathered by the avatar's virtual sensors are transmitted to the user in real time by device drivers, while the user's commands are transmitted in the opposite direction to the avatar's robot actuators. It also has methods of communication in order to communicate with the other avatars; these methods strengthen its sensory-motor capabilities and allow it to receive and emit language data. The avatar display may be non-existent (BrickNet [Singh 94]), reduced to a simple 3D primitive that is textured but not structured (MASSIVE [Benford 95]), assimilated to a polyarticulated rigid system (DIVE [Carlsson 93]), or a more realistic representation that includes evolved behaviors such as gestures and facial expressions (VLNET [Capin 97]). This display, when it exists, makes it easier to identify avatars and supports the non-verbal communication between them. Thus, with this explicit user modeling, three major types of interaction can coexist in the universe of digital models:

. model-to-model interactions such as collisions and attachments;

. model-to-avatar interactions that allow sensory-motor mediation between a model and a user;

. avatar-to-avatar interactions that allow avatars to meet in a virtual environment shared by several users (televirtuality [Queau 93a]).
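
An avatar along these lines (virtual sensors, robot actuators, a communication channel) can be sketched in a few lines of code. This is an illustrative sketch: the class and method names are our own assumptions, not taken from BrickNet, MASSIVE, DIVE or VLNET.

```python
import math

class Avatar:
    """Minimal user stand-in placed in the virtual environment."""

    def __init__(self, position):
        self.position = list(position)   # defines the sensory viewpoint
        self.inbox = []                  # language data received from other avatars

    def sense(self, entities, radius):
        """Virtual sensor: return the entities within sensing range."""
        return [e for e in entities
                if math.dist(e["position"], self.position) <= radius]

    def act(self, command):
        """Robot actuator: apply a user command (here, a displacement)."""
        self.position = [p + d for p, d in zip(self.position, command)]

    def tell(self, other, message):
        """Communication channel between avatars."""
        other.inbox.append(message)
```

Feeding `act` from device-driver input and forwarding the results of `sense` back to the sensory peripherals is what would close the man-machine loop described above.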

The user's status in virtual reality is different from his status in the scientific simulation of numerical calculation or in the interactive simulation of training simulators (Figure 2.10). In scientific simulation, the user sets the model's parameters beforehand and interprets the calculation results afterwards. In the case of a scientific visualization system, the progress of the calculations may be observed with virtual reality sensory peripherals [Bryson 96], but the observer remains the model's slave. Scientific simulation systems are model-centered because scientific models aim to create universal representations detached from individual impressions of reality. In contrast, interactive simulation systems are basically user-centered, in order to give the user all of the means needed to control and pilot the model: the model must remain the user's slave. By introducing the notion of avatar, virtual reality places the user at the same conceptual level as the model. The master-slave relationship is thus eliminated in favor of greater autonomy for models and, consequently, greater autonomy for users.

2.5.3 Autonomizing Models

To autonomize a model, you must provide it with ways to perceive and respond to its surrounding environment. It must also be equipped with a decision-making module so that it can adapt its reactions to both external and internal stimuli. We use three lines of thought as a guide to the autonomization of models: autonomy by essence, by necessity and by ignorance.


[Figure: (a) the user parameterizes the model from outside; (b) the user controls and pilots the model; (c) the user, through his avatar, stands among the models]

Figure 2.10: User Status in Scientific Simulation (a), in Interactive Simulation (b) and in Virtual Reality (c)

Autonomy by essence characterizes all living organisms, from the simple cell to the human being. Avatars are not the only models that can perceive and respond to their digital environments: any model that is supposed to represent a living being must have this kind of sensory-motor interface. The notion of animat, for example, concerns artificial animals whose rules of behavior are based on real animal behaviors [Wilson 85]. Like an avatar, an animat is situated in an environment; it has sensors to acquire information about its environment and effectors to react within this environment. However, unlike an avatar, which is controlled by a human being, an animat must be self-controlled in order to coordinate its perceptions and actions [Meyer 91]. Control can be innate (pre-programmed) [Beer 90], but in the animat approach it will more often be acquired, in order to stimulate the genesis of adaptive behaviors for survival in changing environments. Research in this very active field therefore revolves primarily around the study of the learning (epigenesis) [Barto 81], development (ontogeny) [Kodjabachian 98] and evolution (phylogenesis) [Cliff 93] of control architectures [Meyer 94, Guillot 00]15. Animating virtual creatures through these different approaches provides a very clear example of these adaptive behaviors [Sims 94], and modeling virtual actors falls under the same approach [Thalmann 96]. Thus, autonomizing a model associated with an organism allows us to understand the autonomy observed in this organism more fully.

Autonomy by necessity involves the instant recognition of changes in the environment, by both organisms and mechanisms. Physical modeling of mechanisms usually takes place by solving systems of differential equations. Solving these systems requires knowledge of the boundary conditions that restrict movement, but in reality these conditions can change all the time, whether their causes are known or not (interactions, disturbances, changes in the environment). The model must then be able to perceive these changes in order to adapt its behavior during execution. This is all the more true when the user is in the system because, through his avatar, he can make changes that were initially completely undefined. The example of sand flowing through an hourglass can be used to illustrate this concept. The physical simulation of a granular medium such as sand is usually based on micromechanical interactions between solid spheres. Such simulations take several hours of calculation to visualize the flow of sand from one bulb to the other and are therefore unsuited to the constraints of virtual reality [Herrmann 98]. Modeling with larger grains (mesoscopic level), based on point masses connected to each other by appropriate interactions, results in displays that are satisfactory, but not interactive [Luciani 00]. Our approach involves large autonomous grains of sand that individually detect collisions (elastic collisions) and are sensitive to gravity (free fall). It allows us not only to simulate the flow of sand in the hourglass, but also to adapt in real time if the hourglass is turned over or has a hole [Harrouet 00]. Thus, autonomizing any model whatsoever allows it to react to unplanned situations that come up during execution as a result of changes in the environment caused by the activity of other models.

15. From Animals to Animats (Simulation of Adaptive Behavior): biennial conference since 1990 (www.adaptive-behavior.org/conf)
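
The idea behind these autonomous grains can be sketched as follows. This is a deliberately simplified one-dimensional toy, not the implementation of [Harrouet 00]: grains live in a single vertical column and landings are made perfectly inelastic for brevity, whereas the text above mentions elastic collisions.

```python
DT = 0.05   # simulation timestep (s)
G = 9.8     # gravity magnitude (m/s^2)

class Grain:
    """One autonomous grain: it alone senses its supports and reacts."""

    DIAMETER = 1.0

    def __init__(self, y):
        self.y, self.vy = y, 0.0

    def step(self, grains, floor_y):
        # local perception: the highest resting surface below this grain
        support = max([floor_y] +
                      [g.y + self.DIAMETER for g in grains
                       if g is not self and g.y < self.y])
        self.vy -= G * DT                       # free fall under gravity
        self.y = max(self.y + self.vy * DT, support)
        if self.y == support:                   # collision detected locally
            self.vy = 0.0

def settle(grains, floor_y=0.0, steps=200):
    """Let every grain run its own perception/reaction cycle."""
    for _ in range(steps):
        for g in grains:
            g.step(grains, floor_y)
```

Because each grain carries its own sensing and reaction, "turning the hourglass over" amounts to nothing more than re-running the same grains with a different floor; no global flow model needs to be rewritten.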

Autonomy by ignorance reveals our current inability to explain the behavior of complex systems through the reductionist methods of an analytical approach. A complex system is an open system made up of a heterogeneous group of atomic or composite entities. The behavior of the group is the result of the individual behaviors of these entities and of their different interactions in an environment that is itself active. Depending on the school of thought, the behavior of the group is seen either as organized toward a goal (teleological behavior [Le Moigne 77]) or as the result of an auto-organization of the system (emergence [Morin 77]). The lack of overall behavior models for complex systems leads us to distribute control over the systems' components and thus to autonomize the models of these components. The simultaneous evolution of these components enables a better understanding of the behavior of the overall system. Hence, a group of autonomous models interacting within the same space contributes to the research on, and experimentation with, complex systems.

Autonomizing models, whether by essence, necessity or ignorance, plays a part in populating virtual environments with an artificial life that strengthens the impression of reality.

2.5.4 Autonomy and Virtual Reality

The user of a virtual reality system is at once spectator, actor and creator of the universe of digital models with which he interacts. Even better, he participates fully in this universe, where he is represented by an avatar: a digital model that has virtual sensors and robot actuators to perceive and respond to this universe. The true autonomy of the user lies in his capability to coordinate his perceptions and actions, whether at random, simply to wander around in this virtual environment, or by following his own goals. The user is thus placed at the same conceptual level as the digital models that make up this virtual world.

Different types of models — such as particles, mechanisms and organisms — coexist within virtual universes. For the organism models, it is essential that they have ways to perceive, act and decide in order to reproduce as well as possible their ability to decide for themselves. This approach is also needed for mechanisms, which must be able to react to unforeseen changes in their environment. Moreover, our inability to understand systems once they become too complex leads us to decentralize the control of the system's components. The models, whatever they are, must then have means of investigation equal to those of the user and means of decision that match their functions. A sensory


disability (myopia, deafness, anesthesia) causes an overall loss of the subject's autonomy. The same is true for a mobility impairment (sprain, tear, strain) and for a mental impairment (amnesia, absent-mindedness, phobia). So, providing models with ways to perceive, act and make decisions means giving them an autonomy that places them at the same operational level as the user.

Continuing along our line of thought, which began with the philosophers and physicists, we have established, in principle, the autonomization of the digital models that make up a virtual universe. A virtual reality is a universe of interacting autonomous models, within which everything happens as if the models were real because they provide users and other models with a triple mediation of senses, action and language, all at the same time (Figure 2.11). To accept this principle of autonomy is to accept sharing control over the evolution of virtual universes between the human users and the digital models that populate these universes.

[Figure: a network of models and avatars in which every pairwise link carries the triple mediation of senses, action and language]

Figure 2.11: Autonomy and Virtual Reality

This conception of virtual reality thus coincides with Collodi's old dream, as described in the quote at the beginning of this section 2.5, of making his famous puppet an autonomous entity that fills the life of his creator. Geppetto's approach for reaching his goal was the same as the one we have found in virtual reality. He started by identifying him (I will call him Pinocchio), then he worked on his appearance ([he] started by making his hair, then his forehead), and made him sensors and actuators (then the eyes [...]). He then defined his behaviors (Geppetto led him by the hand and showed him how to put one foot in front of the other) in order to make him autonomous (Pinocchio began to walk by himself) and, finally, he could not help but notice that autonomizing a model means that the creator loses control over his model (he jumped into the street and ran off).


2.6 Conclusion

As a result of interdisciplinary work on the computer-generated digital image, virtual reality transcends its origins and has established itself today as a new discipline within the engineering sciences. It involves the specification, design and creation of a participatory and realistic virtual universe.

A virtual universe is a group of autonomous digital models in interaction in which human beings participate as avatars. Creating these universes is based on an autonomy principle according to which digital models:

. are equipped with virtual sensors that enable them to perceive other models,

. have robot actuators to act on the other models,

. have methods of communication to communicate with other models,

. and control their perception-response coordinations through a decision-making module.

An avatar is therefore a digital model whose decision-making abilities are delegated to the human operator it represents. Digital models, situated in space and time, evolve autonomously within the virtual universe, whose overall evolution is the result of their combined evolutions.

Man, a model among models, is spectator, actor and creator of the virtual universe to which he belongs. He communicates with his avatar through a language and various sensory-motor devices that make the triple mediation of senses, action and language possible. The multi-sensory rendering of his environment is that of realistic computer-generated digital images: 3D, sound, touch, kinesthetic, proprioceptive, animated in real time, and shared on computer networks.

Therefore, the epistemological line of thought that we took in Chapter 2 places the concept of autonomy at the center of our research problem in virtual reality. In the following chapters we will ask how to implement these concepts: which models (Chapter 3) and which tools (Chapter 4) should be used to autonomize models?


Chapter 3

Models

3.1 Introduction

In this chapter, we present the main models that we have developed in order to observe the principle of autonomy introduced in the previous chapter.

First, we will adapt the multi-agent approach to the problem of virtual reality by introducing the notion of participatory multi-agent simulation.

Secondly, we will use the serious example of blood coagulation to explain the phenomenon of collective auto-organization that takes place in multi-agent simulations. Through this example, we will defend the idea of in virtuo experimentation, a new type of experimentation that is now possible in biology.

We will then explain how to model perceptive behavior with the help of fuzzy cognitive maps. These maps allow us to define the behavior of an autonomous entity, control its movement, and simulate its movement through the entity itself so that it can anticipate its behavior through active perception.
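
A fuzzy cognitive map, in Kosko's sense, holds an activation degree per concept and a signed weight per causal link; each update propagates the activations through the weights and squashes the result back into [0, 1]. The following is a minimal sketch; the concepts and weights are invented for illustration and are not taken from the maps developed later in the thesis.

```python
import math

def sigmoid(x, k=5.0):
    """Squashing function mapping any influence sum into [0, 1]."""
    return 1.0 / (1.0 + math.exp(-k * x))

def fcm_step(activations, weights):
    """One synchronous update of a fuzzy cognitive map.

    activations: dict concept -> activation degree in [0, 1]
    weights: dict (source, target) -> causal influence in [-1, 1]
    """
    return {concept: sigmoid(sum(w * activations[src]
                                 for (src, tgt), w in weights.items()
                                 if tgt == concept))
            for concept in activations}

# Invented example: a prey entity that flees when it perceives an enemy.
weights = {("enemy close", "fear"): 0.9,    # seeing the enemy raises fear
           ("fear", "flee"): 0.8,           # fear promotes fleeing
           ("fatigue", "flee"): -0.4}       # fatigue inhibits fleeing
state = {"enemy close": 1.0, "fear": 0.0, "flee": 0.0, "fatigue": 0.2}
for _ in range(10):
    state = fcm_step(state, weights)
    # sensor concepts are clamped to the currently perceived values
    state["enemy close"], state["fatigue"] = 1.0, 0.2
```

Concepts with no incoming links would drift toward sigmoid(0) = 0.5, which is why the sensor concepts are clamped to their perceived values after each step.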

These models are thus our intermediary representations between the concepts (Chapter 2) that inspire them and the tools (Chapter 4) that implement them.

3.2 Autonomous Entities

The unit of analysis is therefore the activity of the person in real life. It refers neither to the individual nor to the environment, but to the relationship between the two. The environment does not just modify actions, it is part of the system of action and cognition. [...] A real activity, then, is made up of flexibility and opportunism. From this perspective, and contrary to the theory of problem solving, a human being does not engage in action with a series of rationally pre-specified objectives according to an a priori model of the world. He looks for his information in the world. What becomes normal is the way that he integrates himself to act in an environment that changes and that he can modify, and how he uses and selects available information and resources: social, symbolic and material. [Vacherand-Revel 01]

Despite having become more realistic, virtual universes will lack credibility as long as they are not populated with autonomous entities. The autonomization of entities breaks down into three modes: the sensory-motor mode, the decision-making mode and the execution mode. It is first based upon a sensory-motor autonomy: each entity is provided with sensors and effectors that allow it to gather information about and respond to its environment. It is also based on a decision-making autonomy: each entity makes decisions


according to its own personality (its history, intentions, state and perceptions). Finally, it requires execution autonomy: each entity's execution controller is independent of the other entities' controllers.

In fact, this notion of autonomous entity is similar to the notion of agent in the individual-centered approach of multi-agent systems.

3.2.1 Multi-Agent Approach

The first works on multi-agent systems date back to the 1980s. They were based upon the asynchronism of language interactions between actors in distributed artificial intelligence [Hewitt 77], the individual-centered approach of artificial life [Langton 86], and the autonomy of mobile robots [Brooks 86]. Currently, there are two major trends within this disciplinary field: either attention is focused on the agents as such (intelligent, rational or cognitive agents [Wooldridge 95]), or the focus is on the interactions between agents and on collective aspects (reactive agents and multi-agent systems [Ferber 95]). Thus, we find agents that reason over beliefs, desires and intentions (BDI: Belief-Desire-Intention [Georgeff 87]), more emotional agents, as in some video games (Creatures [Grand 98]), or purely reactive stimulus-response agents, as in insect societies (MANTA [Drogoul 93]). In any case, these systems differ from the symbolic planning models of classic Artificial Intelligence (STRIPS [Fikes 71]) by recognizing that intelligent behavior can emerge from interactions between rather reactive agents placed in an environment that is itself active [Brooks 91].

These works were motivated by the following observation: there are systems in nature, such as insect colonies or the immune system, that are capable of accomplishing complex collective tasks in dynamic environments without external control or central coordination. Research on multi-agent systems thus strives towards two major objectives. The first is to build distributed systems capable of accomplishing complex tasks through cooperation and interaction. The second is to understand and experiment with the mechanisms of collective auto-organization that appear when many autonomous entities interact. In either case, these models favor a local approach in which decisions are made not by a central coordinator familiar with each entity, but by each of the entities individually. These autonomous entities, called agents, only have a partial, and therefore incomplete, view of the virtual universe in which they evolve. Each agent can be considered as an engine continuously following a triple cycle of perception, decision and action:

1. perception: it perceives its immediate environment through specialized sensors;

2. decision: it decides what it must do, taking into account its internal state, its sensor values and its intentions;

3. action: it acts by modifying its internal state and its immediate environment.
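
This triple cycle can be written down directly. The sketch below is purely illustrative: the one-dimensional world and the avoid/wander policy are invented for the example.

```python
class Agent:
    """Minimal reactive agent running the perception-decision-action cycle."""

    def __init__(self, position):
        self.position = position
        self.internal_state = "idle"

    def perceive(self, environment):
        """1. Perception: sense only the immediate neighbourhood."""
        return [e for e in environment if abs(e - self.position) <= 1]

    def decide(self, percepts):
        """2. Decision: combine the percepts with the internal state."""
        return "avoid" if percepts else "wander"

    def act(self, decision):
        """3. Action: modify the internal state and the environment."""
        self.internal_state = decision
        self.position += -1 if decision == "avoid" else 1

    def step(self, environment):
        self.act(self.decide(self.perceive(environment)))
```

Note that `perceive` only ever queries the agent's neighbourhood: this locality is what gives each agent the partial, incomplete view of the universe mentioned above.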

Two major types of applications rely on multi-agent systems: distributed problem-solving and multi-agent simulation. In distributed problem-solving, there


is an overall satisfaction function that allows us to evaluate the solution according to the problem to be solved, while in simulation we assume that there is, at best, a local satisfaction function for each agent [Ferber 97].

Like all modeling, the multi-agent approach we retain simplifies the phenomenon under study. Nevertheless, it largely preserves its complexity by allowing for diverse elements, structures and associated interactions. Today, however, multi-agent modeling lacks the theoretical tools to deduce the behavior of the whole multi-agent system from the individual behavior of the agents (direct problem), or to induce the individual behavior of an agent from the known collective behavior of a multi-agent system (inverse problem). Simulating these models partly makes up for this lack by allowing us to observe structures and behaviors that emerge collectively from local, individual interactions. We retain this multi-agent simulation approach for virtual reality.

3.2.2 Participatory Multi-Agent Simulation

Many design methods for multi-agent systems have already been proposed [Wooldridge 01], but none of them has really established itself yet, as the UML method has for object design and programming [Rumbaugh 99]. We retain here the Vowel approach, which analyzes multi-agent systems from four perspectives: Agents, Environments, Interactions and Organizations (the vowels A, E, I, O), all of equal paradigmatic importance [Demazeau 95]. In virtual reality, we suggest adding the vowel U (forgotten in AEIO), as in User, to include the active participation of the human operator in the simulation (man is in the loop). Virtual reality multi-agent simulations thus become participatory.

A as in Agent

A virtual universe is a multi-model universe in which all types of autonomous entities can coexist, from the passive object to the cognitive agent, with pre-programmed behavior or adaptive, evolutionary behavior. An agent can be atomic or composite, and can itself be the head of another multi-agent system. Consequently, a principle of component heterogeneity must prevail in virtual worlds.

E as in Environment

An agent's environment is made up of the other agents, the users that take part in the simulation (avatars) and the objects that occupy the space and define a certain number of space-time constraints. Agents are thus situated in their environment, a truly open system in which entities can appear and disappear at any time.


I as in Interaction

Diverse entities (objects, agents and users) generate diverse interactions among themselves. We find physical phenomena such as attractions, repulsions, collisions and attachments, as well as exchanges of information among agents and/or avatars through gesture and language transmissions (speech acts or code exchanges to be analyzed). The destruction and creation of objects and/or agents further enrich these different interaction possibilities. Consequently, new properties, unknown for the different entities taken individually, will eventually emerge from the interactions between an agent and its environment.

O as in Organization

A set of social rules, like a driving rule, can structure all of the entities by defining roles and the constraints between these roles. These roles can be assigned to agents or negotiated among them; they can be known in advance or emerge from conflicting or collaborative situations, the results of interactions among agents. So, both predefined and emergent organizations structure multi-agent systems at the organizational level, a level which is at once an aggregate of low-level entities and a high-level entity in its own right.

U as in User

The user can take part in the simulation at any time. He is represented by an avatar-agent whose decisions he controls and whose sensory-motor capabilities he uses. He interfaces with his avatar through different multi-sensory peripherals, and has a language with which to communicate with agents, modify them, create new ones, or destroy them. The structural identity between an agent and an avatar allows the user, at any time, to take the place of an agent by taking control of its decision-making module. During this substitution, the agent can possibly enter a learning mode, through imitation (PerAc [Gaussier 98]) or by example [Del Bimbo 95]. At any time, the user can return control to the agent that he replaced. This principle of substitution between agents and users can be evaluated by a kind of Turing test [Turing 50]: a user interacts with an entity without being able to tell whether it is an agent or another user, and the agents respond to his actions as if he were another agent.

The Vowel approach thus extended (AEIO + U) fully involves the user in multi-agent simulation, hence concurring with participatory design [Schuler 93], which prefers to see users as human actors rather than human factors [Bannon 91]. Such a participatory multi-agent simulation in virtual reality implements different types of models (multi-model) from different fields of expertise (multi-discipline). It is often complex because its overall behavior depends just as much on the behavior of the models themselves as on the interactions between the models. Finally, it must accommodate the free will of the human user who operates the models on-line.


3.2.3 Behavioral Modeling

A model is an artificial representation of a phenomenon or an idea; it is an intermediary step between the intelligible and the sensory, between the phenomenon and its idealization, between the idea and its perception. This representation is based on a system of symbols that have a meaning not just for the designer who composes them, but also for the user who perceives them (a user's perspective on the semantics may differ from the modeler's). A model has multiple functions. For a modeler, the model allows him to imagine, design, plan and improve his representation of the phenomenon or idea that he is trying to model. The model becomes a communication medium to represent, raise awareness of, explain or teach the related concepts. At the other end of the spectrum, the model helps the user to understand the represented phenomenon (or idea); he can also evaluate it and experiment with it by simulation.

Three major categories of digital models are usually identified: descriptive, causal and behavioral [Arnaldi 94]. The descriptive model reproduces the effects of the modeled phenomenon without any a priori knowledge about the causes (geometric surfaces, inverse kinematics, etc.). The causal model describes the causes capable of producing an effect (for example, rigid poly-articulated systems, finite elements, mass-spring networks, etc.). Its execution requires that the solution implicitly contained in the model be made explicit; the calculation time is then longer than that of a descriptive model, for which the solution is given (causal/dynamic versus descriptive/kinematic). The behavioral model imitates how living beings function according to a cycle of perception, decision and action (artificial life, animats, etc.). It involves both the internal and external behaviors of entities [Donikian 94]. Internal behaviors are internal transformations that can lead to perceptible changes on the entity's exterior (e.g. morphology and movement). These internal behaviors are components of the entities and depend little on their environment, unlike external behaviors, which reflect the environment's influence on the entity's behavior. These external behaviors can be purely reactive or driven (stimulus/response), perceptive or emotional (the response to a stimulus depends on an internal emotional state), cognitive or intentional (the response is guided by a goal), adaptive or evolutionary (learning mechanisms allow the response to be adapted over the course of time), or social and collective (the response is constrained by the rules of a society).
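
The descriptive/causal distinction can be made concrete on a free-falling point; this is an illustrative comparison of our own, not an example from [Arnaldi 94]. The descriptive model states the solution directly, while the causal model states only the cause and must integrate it, at a higher computational cost.

```python
def descriptive_position(t):
    """Descriptive (kinematic) model: the solution is given in closed form."""
    return 1.0 - 0.5 * 9.8 * t * t

def causal_position(t, dt=1e-4):
    """Causal (dynamic) model: only the cause is given; the solution
    implicitly contained in the model is made explicit by integration."""
    y, vy = 1.0, 0.0
    for _ in range(int(t / dt)):
        vy -= 9.8 * dt          # the cause: constant gravity
        y += vy * dt
    return y
```

Both models describe the same fall, but the causal one runs thousands of integration steps where the descriptive one evaluates a single expression, which is exactly the dynamic-versus-kinematic cost trade-off mentioned above.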

Section 3.3, which follows, presents a serious example of multi-agent simulation based on reactive behaviors in order to observe the collective auto-organizing properties of such systems. In another direction, Section 3.4 focuses on an emotional type of behavior to highlight the difference between individual sensation and perception.


3.3 Collective Auto-Organization

A protein is more than a translation into another language of one of the books from the gene library. The genes that persist in our cells during our existence contain — are made up of — information. The real tools — the real actors — of cell life are the proteins, which are more or less quickly destroyed and disappear if they are not renewed. Once the cell puts them together, proteins fold up into complex shapes. Their shape determines their capability to attach themselves to other proteins and interact with them. And it is the nature of these interactions that determines their activity — their effect. [...] But trying to attribute an unequivocal activity — an intrinsic property — to a given protein amounts to an illusion. Its activity, effect and longevity depend on its environment, the collectivity of the other proteins around it, the pre-existing ballet into which it will integrate itself. [Ameisen 99]

Faced with the growing complexity of biological models, biologists need new modeling tools. The advent of the computer and of computer science, and in particular of virtual reality, now offers new experimental possibilities through digital simulation and introduces a new type of investigation: the in virtuo experiment. The expression in virtuo (in the virtual) is a neologism created here by analogy with the adverbial phrases of Latin etymology¹ in vivo (in the living) and in vitro (in glass). An in virtuo experiment is thus an experiment conducted in a virtual universe of digital models in which man participates.

3.3.1 Simulations in Biology

Medicine has used human (in vivo) experimentation since its beginnings. In antiquity, this experimentation was influenced by magic and religion. It was the Greek school of Kos that demythologized medical procedure: around 400 B.C., Hippocrates, with his famous oath, established the first rules of ethics by outlining the obligations of a practitioner towards his patients. For a long time, the life sciences remained more or less descriptive and were based on observation, classification and comparison. It was not until the development of physics and chemistry that (in vitro) laboratory examinations became a common practice. In particular, progress in optics and the invention of ever more sophisticated microscopes made it possible to go beyond the limits of direct observation. The Italians Malpighi (1628-1694) and Morgagni (1682-1771) thus became the precursors of histology by demonstrating the importance of studying the correlations between microscopic biological structures and clinical manifestations. And it was undoubtedly the Frenchman Claude Bernard, with his famous Introduction to the Study of Experimental Medicine (1865), who marked the start of truly scientific medicine based on the rationalization of experimental methods.

Analogical simulation, or experimentation on real models, has been used for scientific purposes since the 18th century, with the biomechanical automata of the Frenchman Jacques de Vaucanson (The Flute Player, 1738), and reached its peak in the 20th century, in aeronautics, with wind tunnel tests on scale models of aircraft. In medicine, (in vitro) laboratory examinations are based on the same approach. In hematology laboratories, the in vitro blood samples used during laboratory analysis function as simplified representations of in vivo blood: the rheological constraints, in particular, are very different.

¹ In virtuo is not, however, a Latin expression. Biologists often use the expression in silico to describe computerized calculations; there is, for example, a scientific journal called In Silico Biology (www.bioinfo.de/journals.html). However, in silico makes no reference to the participation of man in the universe of digital models being run; this is why we prefer in virtuo, which, by its common root, reflects the experimental conditions of virtual reality.

Population Dynamics

Digital simulation, or experimentation on virtual models, is usually based on solving a mathematical model made up of partial differential equations of the form ∂x/∂t = f(x, t), where t represents time and x = (x1, x2, . . . , xn) represents the vector of state variables characteristic of the system to be simulated. In molecular biology, the macroscopic behavior of a system is often captured by coupling the kinetic equations of chemical reactions with the thermomechanical equations of molecular flow, which in a first approximation amounts to solving equations of the form ∂x/∂t = D∇²x + g(x), where D is the system's diffusion matrix and g is a non-linear function representing the effects of chemical kinetics. In hematology, this type of approach was applied from 1965 onwards to the kinetics of coagulation [Hemker 65], then developed and broadened to take new data into account [Jones 94] or to demonstrate the influence of flow phenomena [Sorensen 99]. The solutions obtained in this way depend strongly on the boundary conditions of the system, as well as on the values given to the various reaction-time and diffusion parameters. Biological systems are complex systems, and modeling them requires a large number of parameters. As Hopkins and Leipold [Hopkins 96] demonstrated in their critique of the use of differential-equation models in biology, a model known to be invalid because it ignores certain parameters can nevertheless produce good fits to experimental data. To avoid this drawback, some authors encourage the use of statistical methods in order to minimize the influence of the specific values of certain parameters [Mounts 97]. Today, software tools such as GEPASI [Mendes 93] and StochSim [Morton-Firth 98] make it possible to apply this classic approach of solving chemical kinetics through differential equations at the molecular level, while tools such as vCell [Schaff 97] and eCell [Tomita 99] operate at the cell level.
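The reaction-diffusion form above lends itself to a direct numerical sketch. The fragment below integrates a 1-D instance of ∂x/∂t = D∇²x + g(x) by explicit finite differences; the logistic g, the grid and the coefficients are placeholder choices for illustration, not the coagulation kinetics discussed in the text.

```python
# 1-D explicit (forward Euler) integration of dx/dt = D * d2x/dz2 + g(x)
# on a periodic grid. All parameters are illustrative placeholders.
D = 0.1                    # diffusion coefficient
dz, dt = 0.1, 0.01         # grid step and time step (dt < dz**2 / (2*D) for stability)
n = 50                     # number of grid points

def g(x):
    """Placeholder non-linear kinetic term (logistic production)."""
    return x * (1.0 - x)

x = [0.0] * n
x[n // 2] = 1.0            # initial concentration spike in the middle

for _ in range(1000):
    # Discrete Laplacian with periodic boundary conditions.
    lap = [(x[(i - 1) % n] - 2 * x[i] + x[(i + 1) % n]) / dz ** 2
           for i in range(n)]
    x = [x[i] + dt * (D * lap[i] + g(x[i])) for i in range(n)]
```

The explicit scheme illustrates the point made in the text: the outcome depends sensitively on the boundary conditions and on the values chosen for D and the kinetic parameters.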

This type of simulation in cell biology is based on models that represent groups of cells, not individual cells. Consequently, these models explicitly describe the behavior of populations which, we hope, implicitly reflects the individual behavior of cells.

Individual Dynamics

In contrast, the two major methods of digital simulation used in biochemistry, Monte Carlo and molecular dynamics, are based on postulating microscopic theories (molecular structures and interaction potentials) to determine certain macroscopic properties of the system [Gerschel 95]. These methods explore the system's configuration space starting from an initial configuration of n particles, whose evolution is followed by sampling. This sampling is time-dependent in molecular dynamics, and random in Monte Carlo methods. The macroscopic properties are then calculated as averages over all of the configurations (microcanonical or canonical ensembles from statistical mechanics). The number n of particles influences calculation times, usually proportional to n², and limits the number of sampled configurations, in other words, the accuracy of the averages in a Monte Carlo method, or the system's evolution time in a molecular dynamics calculation. These methods require significant computational resources; today, the size of the largest simulated systems is around 10⁶ atoms for real evolution times of less than 10⁻⁶ s [Lavery 98]. In addition, they mainly involve systems close to equilibrium and, because of this, are less suitable for studying systems that are far from equilibrium and fundamentally irreversible, as may be the case with the phenomena of hemostasis.
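The Monte Carlo idea described above can be sketched in a few lines: propose random moves of one particle at a time, accept or reject them with the Metropolis criterion, and average a macroscopic property over the sampled configurations. A toy harmonic potential stands in here for the real molecular interaction potentials mentioned in the text.

```python
import math
import random

random.seed(0)
n = 100          # number of particles
beta = 1.0       # inverse temperature (canonical ensemble)

# Toy configuration: n independent coordinates in a harmonic well,
# U(x) = sum(x_i^2).
x = [0.0] * n
E = sum(xi * xi for xi in x)
samples = []

for _ in range(20000):
    i = random.randrange(n)               # trial move of a single particle
    old = x[i]
    x[i] += random.uniform(-0.5, 0.5)
    dE = x[i] * x[i] - old * old
    if dE > 0.0 and random.random() > math.exp(-beta * dE):
        x[i] = old                        # reject: restore the configuration
    else:
        E += dE                           # accept: update the running energy
    samples.append(E)

# Macroscopic property = average over sampled configurations (after burn-in).
mean_E = sum(samples[5000:]) / len(samples[5000:])
```

With these toy parameters, equipartition suggests mean_E should fluctuate around n/(2β) = 50; note how the cost per step grows with the number of particles, the scaling issue raised above.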

Cellular automata [Von Neumann 66], inspired by biology, offer an even more individualized approach. Conway's Game of Life [Berlekamp 82] popularized cellular automata and Wolfram formalized them [Wolfram 83]. These are dynamic systems in which space and time are discretized. A cellular automaton is made up of a finite set of squares (cells), each one of which can take a finite number of possible states. The states of the squares change synchronously according to local interaction rules. Although they are based on a cellular metaphor, cellular automata are rarely used in cell biology; we can, however, cite the simulation of an immune system by a cellular automaton in order to study the collective behavior of lymphocytes [Stewart 89], the use of a cellular automaton to reproduce immune phenomena within lymphatic ganglions [Celada 92], and the development of a programming environment based on cellular automata to model different cell behaviors [Agarwal 95]. The heavy constraints of space-time discretization, as well as those of action synchronicity, are undoubtedly what inhibits the use of this technique in biology.
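As an illustration of the synchronous, local update rule that defines a cellular automaton, here is a minimal Game of Life on a small toroidal grid; the "blinker" pattern is the simplest period-2 oscillator.

```python
def step(grid):
    """One synchronous update: every new state depends only on the old grid."""
    n, m = len(grid), len(grid[0])

    def neighbours(i, j):
        # Count live cells among the 8 neighbours, with wrap-around (torus).
        return sum(grid[(i + di) % n][(j + dj) % m]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))

    # Conway's local rules: birth on 3 neighbours, survival on 2 or 3.
    return [[1 if neighbours(i, j) == 3
                  or (grid[i][j] == 1 and neighbours(i, j) == 2)
             else 0
             for j in range(m)]
            for i in range(n)]

# A "blinker": three live cells that oscillate with period 2.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
```

Two applications of `step` return the blinker to its initial state, a tiny example of the time-space discretization and synchronicity constraints discussed above.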

Multi-Agent Auto-Organization

To overcome these methodological limitations, we propose modeling biological systems with multi-agent systems. The biological systems to be modeled are usually very complex. This complexity is mainly due to the diversity of their elements, structures and interactions. In the example of coagulation, molecules, cells and complexes make blood an environment that is open (appearance/disappearance and dynamics of elements), heterogeneous (different morphologies and behaviors) and formed from composite entities (organs, cells, proteins) that are mobile, distributed in space and variable in number over time. These elements can be organized into different levels known beforehand (a cell is made up of molecules) or emerging during the simulation through multiple interactions between elements (formation of complexes). The interactions themselves can be of different natures and operate on different space and time scales (cell-cell interactions, cell-protein interactions and protein-protein interactions). There is currently no theory able to formalize this complexity; in fact, there is no formal a priori proof method as there is for highly formalized models. In the absence of formal proof, we must turn to experimenting on the systems as they run, in order to validate them a posteriori: this is what we have set out to do with the multi-agent modeling and simulation of blood coagulation.

As we define it within the framework of virtual reality, multi-agent simulation enables real interaction with the modeled system, as suggested by a recent prospective study on the requirements of cell behavior modeling [Endy 01]. In fact, during the simulation, it is possible to view the coagulation taking place in 3D; this visualization makes it possible to observe the phenomenon as if through a virtual microscope, movable and adjustable at will, and capable of different focus settings. The biologist can therefore focus on the observation of a particular type of behavior and observe the activity of a sub-system, or even the overall activity of the system. At any moment, the biologist can interrupt the coagulation, take a closer look at the elements in front of him and at the interactions taking place, and then restart the coagulation from where he had stopped it. At any moment, he can disrupt the system by modifying a cell property (state, behavior) or by removing or adding elements. He can thus test an active principle (for example, the active principle of heparin in thrombophilia), and more generally an idea, and immediately observe its consequences for the functioning system. Multi-agent simulation puts the biologist at the center of a true virtual laboratory that brings him closer to the methods of experimental science, while at the same time giving him access to digital methods.

In addition to being able to observe the activity of a digital model being run on a computer, the user can also test the reactivity and adaptability of the model in operation, thus benefitting from the behavioral character of digital models. An in virtuo experiment provides an experience that a simple analysis of digital results cannot. Between formal a priori proofs and a posteriori validations, there is room today for a virtual reality experienced by the biologist, one that can go beyond generally accepted ideas to reach ideas that are actually experienced.

3.3.2 Coagulation Example

The analogy between a biological cell and a software agent is almost natural. A cell interacts with its environment through its membrane and its surface receptors. Its nucleus houses the genetic program that determines its behavior; consequently, it can perceive, decide and respond. By extension, the analogy between a multicellular system and a multi-agent system is immediate.

Cell-Agent

Our design of a cell-agent revolves around three main steps. The first step is to choose a 3D geometric shape that represents the cell membrane. The second step is to place the sensors — the cell's receptors — on this virtual membrane (Figure 3.1a). The third step involves defining the cell-agent's own behaviors (Figure 3.1b). A behavior can be defined by an algorithm — the calculation of a mathematical function, the resolution of a system of differential equations, the application of production rules — or by another multi-agent system that describes the inside of the virtual cell. The choice between algorithm and multi-agent system depends on the nature of each process within the cell and on how much is known about it. Thus, we chose an algorithm to model the humoral response [Ballet 97a] and a multi-agent system to simulate the proliferative response of B lymphocytes to anti-CD5 [Ballet 98a], as demonstrated experimentally [Jamin 96]. We have a library of predefined behaviors such as receptor expression on the cell's surface, receptor internalization, cell division (mitosis), programmed cell death (apoptosis) and the generation of molecular messengers (Figure 3.1c). New behaviors can be programmed and integrated into a cell-agent so that it can adapt to a specific behavior of other cells.
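The three design steps can be sketched as a small class in which behaviors are pluggable callables. All names below are illustrative, not those of the authors' actual system.

```python
class CellAgent:
    """Schematic cell-agent: membrane geometry, receptors, behaviors."""

    def __init__(self, position, radius, receptors):
        self.position = position          # centre of the spherical membrane (step 1)
        self.radius = radius
        self.receptors = set(receptors)   # receptor types on the membrane (step 2)
        self.behaviors = []               # pluggable behaviors (step 3)
        self.alive = True

    def add_behavior(self, behavior):
        # A behavior is any callable; it may wrap an algorithm or a whole
        # nested multi-agent system, as the text describes.
        self.behaviors.append(behavior)

    def step(self, neighbours):
        """One perception/decision/action cycle."""
        for behavior in self.behaviors:
            behavior(self, neighbours)


def express_receptor(receptor):
    """Predefined behavior: expression of a receptor on the cell surface."""
    def behavior(cell, neighbours):
        cell.receptors.add(receptor)
    return behavior


def apoptosis_when_isolated(cell, neighbours):
    """Predefined behavior: programmed cell death (toy trigger condition)."""
    if not neighbours:
        cell.alive = False


b = CellAgent(position=(0.0, 0.0, 0.0), radius=1.0, receptors={"CD5"})
b.add_behavior(express_receptor("IgM"))
b.step(neighbours=[])
```

Wrapping behaviors as callables mirrors the design choice in the text: an algorithmic behavior and a nested multi-agent behavior can be attached to a cell-agent through the same interface.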



Designing a cell-agent revolves around three main steps. The first step is to choose a 3D geometric shapethat represents the cell membrane. The second step is to place the sensors — the cell’s receptors — onthis virtual membrane (a). The third step involves defining the cell-agent’s behaviors (b). A behavior canbe defined by an algorithm — calculation of a mathematical function, resolution of a system of differentialequations, application of production rules — or by another multi-agent system that describes the insideof the virtual cell. The choice between algorithm and multi-agent system depends on the nature of eachproblem within the cell and how much is known about the problem. A library of predefined behaviors suchas receptor expression on the cell’s surface, receptor internalization, cell division (mitosis), programmed celldeath (apoptosis) and generation of molecular messengers makes it possible to express a wide variety ofbehaviors (c).

Figure 3.1: Cell-Agent Design

The multi-agent system is then obtained by placing an initial mixture of cells within a virtual environment (i.e. a virtual test tube or virtual organ), in other words, a group of cell-agents that each have a well-defined behavior. Within this universe, the interactions between cell-agents are local: two cell-agents can only interact if each is within the immediate neighborhood of the other and if their receptors are compatible. These interactions take into account the distance between receptors and their relative orientation, as well as their affinity [Smith 97]. At each cycle, the basic behavior of each cell-agent determines its movement, taking into account the interactions it has had within its sphere of activity. If one of its receptors is activated through contact with a receptor of another cell-agent, a more complex behavior can be triggered, provided all of its preconditions are validated.
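A minimal sketch of the local interaction test described above: proximity within a sphere of activity plus receptor compatibility. The compatibility table and agent layout are invented for illustration.

```python
import math

# Hypothetical receptor-compatibility table (invented for illustration).
COMPATIBLE = {("ligand", "receptor"), ("receptor", "ligand")}

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def can_interact(a, b, reach=2.0):
    """Local interaction test: spatial proximity plus receptor compatibility."""
    if distance(a["pos"], b["pos"]) > reach:
        return False
    return any((r, s) in COMPATIBLE
               for r in a["receptors"] for s in b["receptors"])

agents = [
    {"pos": (0.0, 0.0, 0.0), "receptors": {"ligand"}},
    {"pos": (1.0, 0.0, 0.0), "receptors": {"receptor"}},
    {"pos": (9.0, 0.0, 0.0), "receptors": {"receptor"}},
]

# One cycle: collect the pairs of agents allowed to interact.
pairs = [(i, j)
         for i in range(len(agents)) for j in range(i + 1, len(agents))
         if can_interact(agents[i], agents[j])]
```

Only the first two agents form an interacting pair here: the third is outside the sphere of activity, and its receptors are in any case incompatible with its nearest neighbour's.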

Virtual Vein

We have thus created, in collaboration with the Hematology Laboratory of the Brest University Hospital Center, a multi-agent model of a virtual vein [Ballet 00a]. Blood vessels are lined with a layer of endothelial cells, one of whose roles is to prevent the adhesion of platelets. The blood circulates in the vessels carrying cells and proteins that are activated when a vessel is torn, in order to stop the bleeding. The stoppage of bleeding is a local phenomenon that takes place through a cascade of cellular and enzymatic events in a liquid environment made unstable by the blood flow. The phenomena of hemostasis and coagulation involve mobile cells (platelets, red blood cells, granulocytes, monocytes), fixed cells (endothelial cells, subendothelial fibroblasts), procoagulant proteins and anticoagulant proteins. Each of these elements has its own behavior; these different behaviors become auto-organized, which results in the hole being blocked by platelets interwoven with fibrin. Hence, our model takes into account the known characteristics of coagulation: 3 types of cells (platelets, endothelial cells and fibroblasts) and 32 types of proteins are included in 41 interactions (Figure 3.2). Some of these reactions take place on the cell surface, others in the blood.

Exogenous Pathway:
TF + VIIa → VII:TF (Coagulation)
VII + VIIa → VIIa + VIIa (Coagulation)
VII + VII:TF → VIIa + VII:TF (Coagulation)
IX + VII:TF → IXa + VII:TF (Coagulation)
X + VII:TF → Xa + VII:TF (Coagulation)
TFPI + Xa → TFPI:Xa (Inhibition)
TFPI:Xa + VII:TF → 0 (Inhibition)

Endogenous Pathway:
VII + IXa → VIIa + IXa (Coagulation)
IXa + X → IXa + Xa (Coagulation)
II + Xa → IIa + Xa (Coagulation)
XIa + IX → XIa + IXa (Coagulation)
XIa + XI → XIa + XIa (Coagulation)
IIa + XI → IIa + XIa (Retro-activation)
IIa + VIII → IIa + VIIIa (Retro-activation)
IIa + V → IIa + Va (Retro-activation)
Xa + V → Xa + Va (Retro-activation)
Xa + VIII → Xa + VIIIa (Retro-activation)
Xa + Va → Va:Xa (Prothrombinase)
VIIIa + IXa → VIIIa:IXa (Tenase)
VIIIa:IXa + X → VIIIa:IXa + Xa (Tenase)
Va:Xa + II → Va:Xa + IIa (Prothrombinase)

Inhibition:
AT3 + IXa → 0
AT3 + Xa → 0
AT3 + XIa → 0
AT3 + IIa → 0
alpha2M + IIa → 0
TM + IIa → IIi
AT3 + IXa → 0
AT3 + Xa → 0
AT3 + XIa → 0
AT3 + IIa → 0
ProC + IIi → PCa + IIi
PCa + Va → 0
PCa + Va:Xa → Xa
PCa + PS → PCa:S
PCa + PCI → 0
PCa:S + Va → 0
PCa:S + V → PCa:S:V
PCa:S:V + VIIIa:IXa → IXa
PCa:S:V + VIIIa → 0

Activation of Platelets:
P + IIa → Pa + IIa (Coagulation)

Formation of Fibrin:
I + IIa → Ia + IIa (Coagulation)

The coagulation process is initiated on the surface of the fibroblasts: it is the tissue factor (TF) on their surface that initiates the process. The so-called exogenous pathway is activated: Factor VIIa binds to the tissue factor and activates Factors VII, IX and X. The first thrombin molecules, the key factor of the phenomenon, are generated, which launches the endogenous pathway and the actual coagulation cascade. Thrombin retro-activates Factors XI, V and VIII, and most importantly the platelets. On the surface of the platelets thus activated, the tenase and prothrombinase complexes can then form and make their important contribution to the formation of thrombin. Thrombin also converts fibrinogen into fibrin, which in turn binds together the activated platelets over the hole. The blood clot forms. At the same time, the inhibitors come into action: TFPI binds to activated Factor X to inhibit the exogenous pathway (Factor VIIa bound to the tissue factor), while alpha2-macroglobulin and antithrombin 3 inhibit thrombin and the activated Factors X, XI and IX. Protein C is activated by the thrombin-thrombomodulin complex (IIi: inhibited thrombin) bound to the surface of the endothelial cells. Activated Protein C has two courses of action: it directly inhibits activated Factor V and, indirectly, by adhering to Protein S and Factor V, inhibits activated Factor VIII. The PCI (Protein C Inhibitor) blocks the inhibitory action of activated Protein C.

Figure 3.2: Interactions of the Coagulation Model

The wall of the vein is represented by a surface of 400 µm² lined with endothelial cells. In the middle of this virtual wall, a hole exposes 36 fibroblast cells, von Willebrand factors and tissue factors (TF). Platelets, the main coagulation factors (Factors I (fibrinogen), II (prothrombin), V, VII, VIII, IX, X, XI) and the inhibition factors — alpha2-macroglobulin (alpha2M), antithrombin 3 (AT3), TFPI (Tissue Factor Pathway Inhibitor), Protein C (PC), Protein S (PS), thrombomodulin (TM) and the Protein C inhibitor (PCI) — all initially present in plasma, are placed randomly in the virtual vein.

In Virtuo Experiments

Multi-agent simulation can then begin: each agent evolves over time according to its predefined behavior. Figure 3.3 presents six intermediary states of the phenomenon through a 3D viewer. We can thus observe primary hemostasis and plasmatic coagulation in their different phases and, over time, measure how the quantity of each element evolves [Ballet 00b].

The 6 images depict different stages of the evolution of the coagulation simulation; they were captured from different viewing angles of the virtual vessel. Image A shows the geometry of the models at the beginning of the simulation: the yellow spheres represent the endothelial cells. The hole in the vein, in the middle of the endothelial layer, exposes the subendothelial fibroblasts (blue spheres). The coagulation and inhibition factors in the plasma are represented by their Roman numerals, Arabic numerals or initials. The unactivated platelets are black spheres. At the beginning, the exogenous pathway is activated, indicated by Factor VIIa binding to the tissue factor on the fibroblast membrane (red pyramid in Image B). The inhibition of these VIIa:TF complexes by the TFPI-activated Factor X complex is very fast (disappearance of the red pyramids). Images C and D show the intermediary steps of thrombin generation: Factor XI binds to the platelets and forms a complex with Factor IX, which it activates (Image C); the tenase complex (IXa:VIIIa), which also binds to the platelet surface, activates Factor X (Image D). Image E shows a prothrombinase complex (activated Factor V-activated Factor X complex) on the surface of the activated platelets (red spheres), converting prothrombin into thrombin. Finally, Image F shows the adhesion of platelets to the subendothelium and the formation of the blood clot. The generated fibrin molecules are represented by white and orange spheres (Image F).

Figure 3.3: Evolution of the Multi-Agent Simulation of Coagulation

Validation of the model is based on several factors:

. the similarities between thrombin generation curves obtained in virtuo and in vitro,

. the consistency with pathologies: the decrease of thrombin generated in hemophilia (Figure 3.4a) and the increase in thrombophilia,

. the correction of hemophilia through activated Factor VII (Figure 3.4b),

. the correction of hypercoagulability through heparin (Figure 3.4c).

Hence, several thousand entities², representing 3 types of cells and 32 types of proteins involved in 41 types of interactions, auto-organize in a cascade of reactions to fill in a hole that was not planned in advance, and which can be created at any time during the simulation and anywhere in the vein. This significant example reinforces the idea that reactive autonomous entities, through their conflicting and collaborative interactions, can become auto-organized and adopt a group behavior unknown to the individual entities.

Figure a compares the total number of thrombin molecules created during the simulation in the cases of normal coagulation, hemophilia A (absence of Factor VIII) and hemophilia B (absence of Factor IX). We observe weaker coagulation in the cases of hemophilia A and B: hemophilia A generates half the thrombin of a normal coagulation, while hemophilia B generates a third. Figure b compares the thrombin generation curves in the cases of normal coagulation, hemophilia B (absence of Factor IX) and hemophilia B treated by adding activated Factor VII. The difference in coagulation between the treated hemophilia and normal coagulation is no more than 6.5% here. Figure c compares the total number of thrombin molecules created during the simulation in the cases of normal coagulation, the Factor V Leiden mutation, and Factor V Leiden treated with heparin. The Factor V Leiden mutation inhibits the deactivation of activated Factor V by activated Protein C. This genetic disease increases the risk of venous thrombosis: the number of thrombin molecules formed is 4 times larger with the Factor V Leiden mutation than without it. Heparin is an anticoagulant that stimulates the action of antithrombin III and thus corrects the hypercoagulability.

Figure 3.4: Main Results of the Multi-Agent Coagulation Model

3.3.3 In Vivo, In Vitro, In Virtuo

The main qualities of a model — an artificial representation of an object or phenomenon — are its capabilities to describe, suggest, explain, predict and simulate. Simulating a model, or experimenting on it, consists of testing the representation's behavior under the effect of the actions that we can carry out on the model. The results of a simulation then become hypotheses that we try to confirm by designing experiments on the singular prototype that is the real system. Such experiments, rationalized in this way, make up a true in vivo experiment. We distinguish four main types of models (perceptive, formal, analogical, digital) that lead to five major simulation families.

² Specific data are given in the thesis of Pascal Ballet [Ballet 00b].


In Petto

Simulation of a perceptive model corresponds to the in petto intuitions that come from our imagination and from the perception we have of the system being studied. It thus makes it possible to test perceptions of the real system. Unclassified and unreasoned inspirations, associations of ideas and heuristics form mental images that have the power of suggestion. The scientific approach will try to rationalize these first impressions, while artistic creation will turn them into digital or analogical works according to the media used. But it is often the suggestive side of the perceptive model that sparks these creative moments and leads to invention or discovery.

In Abstracto

Simulation of a formal model is based on in abstracto reasoning carried out within the framework of a theory. Reasoning provides predictions that can be tested on the real system. Galle's discovery of Neptune in 1846, based on the theoretical predictions of Adams and Le Verrier, is an illustration of this approach within the framework of two-body perturbation theory in celestial mechanics. Likewise, in particle physics, the discovery in 1983 of the intermediate bosons W+, W− and Z0 had been predicted a few years earlier by the theory of electroweak interactions. Hence, from the extremely large to the extremely small, the predictive character of formal models has proven very fruitful in many scientific fields.

In Vitro

Simulation of an analogical model takes place through in vitro experiments on a sample or scale model constructed by analogy with the real system. The similarities between the model and the system help us to better understand the system being studied. Wind tunnel tests on scale models of aircraft allowed aerodynamicists to better characterize the flow of air around obstacles through the study of the similarity coefficients introduced at the end of the 19th century by Reynolds and Mach. Likewise, in physiology, the analogy of the heart as a pump allowed Harvey (1628) to demonstrate that blood circulation obeys hydraulic laws. Thus, throughout history, the explanatory side of analogical models has been used, with a more or less anthropocentric approach, to make the unknown known.

In Silico

Simulation of a digital model is the execution of a program that is supposed to represent the system to be modeled. In silico calculations provide results that are checked against measurements from the real system. The numerical resolution of systems of mathematical equations corresponds to the most common use of digital modeling. In fact, the analytical determination of solutions often encounters problems that involve the characteristics of the equations to be solved (non-linearity, coupling) as much as the complexity of the boundary conditions and the need to take very different space-time scales into account. Studying the kinetics of chemical reactions, calculating the deformation of a solid under the effect of thermomechanical constraints, or characterizing the electromagnetic radiation of an antenna are classic examples of implementing systems of differential equations on a computer. Consequently, the digital model obtained by discretization of the theoretical model has today become a vital tool for going beyond theoretical limitations, but is still often considered a last resort.

In Virtuo

More recently, the possibility of interacting with a program while it is running has opened the way to true in virtuo experiments on digital models. It is now possible to disturb a model as it runs, to dynamically change the boundary conditions, and to delete or add elements during the simulation. This gives digital models the status of virtual models, infinitely more malleable than the real models used in analogical modeling. Flight simulators and video games were the precursors of virtual reality systems. Such systems become indispensable when direct experimentation is difficult or even impossible, for whatever reason: hostile environment, access difficulties, space-time constraints, budget constraints, ethics, etc. The user is able to go beyond simply observing the digital model's activity while it runs on a computer, and can test the reactivity and adaptability of the model in operation, thus benefitting from the digital model's behavioral character.

The idea of a model representing the real is based on two metaphors, one legal and the other artistic. The legal metaphor of delegation (the elected official represents the people, the papal nuncio represents the pope, the ambassador represents the head of state) suggests the idea of replacement: the model takes the place of reality. The artistic metaphor of realization (the play is performed in public, artistic inspiration is represented by a work of art) proposes the idea of presence: the model is a reality. An in virtuo experiment on a digital model makes it truly present, and thus opens up new fields of exploration, investigation and understanding of the real.

3.4 Individual Perception

From time to time, one runs into a figure of a small part of [such] a network of symbols, in which each symbol is represented by a node with arcs coming in and going out. The lines symbolize, in a certain manner, the links of trigger chains. These figures try to reproduce the intuitive notion of conceptual connectedness. [...] The difficulty is that it is not easy to represent the complex interdependence of a large number of symbols with a few lines connecting dots. The other problem with this type of diagram is that it is wrong to think that a symbol is necessarily either in service or out of service. While this is true of neurons, this bistability does not carry over to symbols. From this point of view, symbols are much more complex than neurons, which is hardly surprising since they are made up of many neurons. The messages exchanged between symbols are more complex than a simple "I am activated", which is about the extent of neuron messages. Each symbol can be stimulated in many different ways, and the type of stimulation determines which other symbols will be activated. [...] Let us imagine that there are node configurations united by connections (which could be in several colors in order to highlight the different types of conceptual connectedness) accurately representing the stimulation mode of symbols by other symbols. [Hofstadter 79]

To become emotional, behaviors must determine their responses, not only accordingto external stimuli, but also according to internal emotions such as fear, satisfaction, love


[Diagram: real phenomena (prototypes, in vivo experiments), perceptual models (the imaginary, in petto intuitions), formal models (theories, in abstracto reasoning), analogical models (scale models, in vitro experiments) and digital models (programs, in virtuo experiments), linked by perception, impressions, formalization, predictions, analogies, similarities, digitization, materialization, implementation, analogical creation, digital creation, modelization and understanding.]

Figure 3.5: Modeling and Understanding Phenomena

or hate. We propose to describe such behaviors through fuzzy cognitive maps, in which these internal states are explicitly represented, making it possible to draw a clear line between perception and sensation.

3.4.1 Sensation and Perception

Fuzzy Cognitive Maps

Cognitive maps come from the work of psychologists, who introduced the concept to describe complex topological-memorization behaviors in rats (cognitive maps [Tolman 48]). The maps were then formalized as directed graphs and used in applied decision theory in economics [Axelrod 76]. Finally, they were combined with fuzzy logic to become fuzzy cognitive maps (FCM: Fuzzy Cognitive Maps [Kosko 86]). The use of these maps was even considered for the overall modeling of a virtual world [Dickerson 94]. We propose here to delocalize fuzzy cognitive maps to the level of each agent, in order to model their autonomous perceptive behavior within a virtual universe.

Just like semantic networks [Sowa 91], cognitive maps are directed graphs where the nodes are the concepts (Ci) and the arcs are the influence links (Lij) between these


concepts (Figure 3.6). An activation degree (ai) is associated with each concept, while the weight Lij of an arc indicates a relationship of inhibition (Lij < 0) or stimulation (Lij > 0) from concept Ci to concept Cj. The dynamics of the map are computed by a normalized matrix product, as specified by the formal definition of fuzzy cognitive maps given in Figure 3.7.

This map is made up of 4 concepts and has 7 arcs. Each concept Ci has one degree of activation ai.

a = (a1, a2, a3, a4)T ,  L =
(  0  +2  −1  −1 )
(  0  −1   0  +1 )
( −1   0   0   0 )
(  0   0  +1   0 )

A zero entry of the link matrix, Lij = 0, indicates the absence of an arc from concept Ci to concept Cj, and a non-null diagonal element Lii ≠ 0 corresponds to an arc from concept Ci to itself.

Figure 3.6: Example of a Cognitive Map

We use fuzzy cognitive maps to specify the behavior of an agent (graph structure) and to control its movement (map dynamics). Thus, a fuzzy cognitive map has sensory concepts, whose activation degrees are obtained by the fuzzification of data coming from the agent's sensors. It has motor concepts, whose activations are defuzzified to be sent to the agent's effectors. The intermediary concepts reflect the internal state of the agent and intervene in the calculation of the map's dynamics. Fuzzification and defuzzification are carried out according to the principles of fuzzy logic [Kosko 92]. Hence, a concept represents a fuzzy subset, and its degree of activation represents the membership degree in that fuzzy subset.
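Although the thesis implements these models in oRis (Chapter 4), the synchronous update of a map's activations can be sketched in Python. This is a minimal sketch of one step in unforced continuous mode, a(t+1) = σ(g(fa(t), L^T a(t))), assuming g(x, y) = x + y; the two-concept example at the end is an illustrative assumption, not a map from the thesis.

```python
# One synchronous update of a fuzzy cognitive map, continuous mode.
import math

def sigma(x, a0=0.5, k=5.0, delta=0.0):
    """Sigmoid normalization onto [-delta, 1] (continuous mode)."""
    return (1.0 + delta) / (1.0 + math.exp(-k * (x - a0))) - delta

def fcm_step(a, L, fa=None):
    """Update all activation degrees: a_i(t+1) = sigma(fa_i + sum_j L_ji a_j)."""
    n = len(a)
    fa = fa or [0.0] * n
    return [sigma(fa[i] + sum(L[j][i] * a[j] for j in range(n)))
            for i in range(n)]

# Illustrative two-concept map: concept 0 stimulates concept 1 (weight +2).
state = fcm_step([1.0, 0.0], [[0.0, 2.0], [0.0, 0.0]])
```

After one step, the stimulated concept's activation is pushed toward 1 while the unstimulated one falls toward the sigmoid's resting value.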

Feelings

As an example, we want to model an agent that perceives its distance from an enemy. According to this distance and its fear, it will decide to flee or to stay. The closer the enemy, the more afraid it is; and in turn, the more afraid it is, the more rapidly it will flee.

We model this flight through the fuzzy cognitive map in Figure 3.8a. This map has four concepts: two sensory concepts (enemy is close and enemy is far), a motor concept (flee) and an internal concept (fear). Three links indicate the influences between the concepts: the proximity of the enemy stimulates fear (enemy is close → fear), fear causes flight (fear → flee), and the enemy moving away inhibits fear (enemy is far → fear). We chose the unforced (fa = 0) continuous mode (V = [0, 1], δ = 0, k = 5). The activation of the sensory concepts (enemy is close and enemy is far) is carried out by fuzzification of the sensed distance to the enemy (Figure 3.8c), while the defuzzification of the desire to flee gives the agent its flight speed (Figure 3.8d).


K denotes one of the rings ℤ or ℝ, δ one of the numbers 0 or 1, and V one of the sets {0, 1}, {−1, 0, 1} or [−δ, 1]. Let (n, t0) ∈ ℕ² and k ∈ ℝ∗+.

A fuzzy cognitive map F is a sextuplet (C, A, L, A, fa, R) where:

1. C = {C1, · · · , Cn} is the set of n concepts forming the nodes of a graph.

2. A ⊂ C × C is the set of arcs (Ci, Cj) directed from Ci to Cj.

3. L : C × C → K, (Ci, Cj) ↦ Lij is a function associating to each pair of concepts (Ci, Cj) a weight Lij, with Lij = 0 if (Ci, Cj) ∉ A, and Lij equal to the weight of the arc directed from Ci to Cj if (Ci, Cj) ∈ A. L(C × C) = (Lij) ∈ K^(n×n) is a matrix of Mn(K); it is the link matrix of the map F, which, to simplify, we write L unless indicated otherwise.

4. A : C → V^ℕ, Ci ↦ ai is a function associating to each concept Ci the sequence of its activation degrees, such that for t ∈ ℕ, ai(t) ∈ V is its activation degree at time t. We write a(t) = [(ai(t)) i∈[[1,n]]]^T for the vector of activations at time t.

5. fa ∈ (ℝⁿ)^ℕ is a sequence of forced-activation vectors such that for i ∈ [[1, n]] and t ≥ t0, fai(t) is the forced activation of concept Ci at time t.

6. R is a recurrence relation on t ≥ t0 between ai(t + 1), ai(t) and fai(t) for i ∈ [[1, n]], defining the dynamics of the map F:

∀i ∈ [[1, n]], ai(t0) = 0 ;  ∀i ∈ [[1, n]], ∀t ≥ t0, ai(t + 1) = σ ∘ g( fai(t), Σ j∈[[1,n]] Lji aj(t) )

where g : ℝ² → ℝ is a function of ℝ² to ℝ, for example g(x, y) = min(x, y), max(x, y) or αx + βy, and where σ : ℝ → V is a function of ℝ to the set of activation degrees V, normalizing the activations as follows:

(a) In continuous mode, V = [−δ, 1]: σ is the sigmoid σ(δ,a0,k), centered at (a0, (1 − δ)/2), with slope k · (1 + δ)/2 at a0 and with limits at ±∞ of 1 and −δ respectively:

σ(δ,a0,k) : ℝ → [−δ, 1],  a ↦ (1 + δ) / (1 + e^(−k(a−a0))) − δ

(b) In binary mode, V = {0, 1}: σ : a ↦ 0 if σ(0,0.5,k)(a) ≤ 0.5, and 1 if σ(0,0.5,k)(a) > 0.5.

(c) In ternary mode, V = {−1, 0, 1}: σ : a ↦ −1 if σ(1,0,k)(a) ≤ −0.5, 0 if −0.5 < σ(1,0,k)(a) ≤ 0.5, and 1 if σ(1,0,k)(a) > 0.5.

Figure 3.7: Definition of a Fuzzy Cognitive Map

Perceptions

We make a distinction between feeling and perception: feeling results from the sensors only; perception is the feeling influenced by the internal state. A fuzzy cognitive map


(a) sensory map, over the concepts C1 = enemy is close, C2 = enemy is far, C3 = fear, C4 = flee, with link matrix:

L =
(  0   0  +1   0 )
(  0   0  −1   0 )
(  0   0   0  +1 )
(  0   0   0   0 )

(b) perception map, over the same concepts, with additional links from fear (+λ to enemy is close, −λ to enemy is far, +γ to itself), with link matrix:

L =
(  0   0  +1   0 )
(  0   0  −1   0 )
( +λ  −λ  +γ  +1 )
(  0   0   0   0 )

(c) fuzzification: degrees of activation of enemy is close and enemy is far as a function of the distance to the enemy (axis graduated at 0, 20, 80 and 100).

(d) defuzzification: flight speed as a linear function of the activation of flee.

The sensory concepts, in dashed lines, are activated by the fuzzification of sensors. The motor concepts, in dotted lines, activate the effectors by defuzzification. In (a), concept C1 = enemy is close stimulates C3 = fear while C2 = enemy is far inhibits it, and fear stimulates C4 = flee: this defines a purely sensory map. In (b), the map is perceptive: fear can be self-maintained (memory) and can even influence feelings (perception). In (c), the fuzzification of the distance to an enemy results in two sensory concepts: enemy is close and enemy is far. In (d), the defuzzification of flee linearly regulates the flight speed.

Figure 3.8: Flight Behavior

makes it possible to model perception thanks to links between internal concepts and sensory concepts. For example, we add three links to the previous flight map (Figure 3.8b). A self-stimulating link (γ ≥ 0) on fear models stress. A stimulating link (λ ≥ 0) from fear to enemy is close and an inhibiting link (−λ ≤ 0) from fear to enemy is far model the phenomenon of paranoia: a given distance to the enemy will seem more or less great depending on the activation of the fear concept. The agent then becomes perceptive, according to its degree of paranoia λ and its degree of stress γ (Figure 3.9).
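The effect of the paranoia and stress links can be sketched numerically. This is a hedged Python sketch of the perception map of Figure 3.8b, assuming g(x, y) = x + y, three cycles, and a linear fuzzification in which enemy is close falls from 1 at 20 m to 0 at 80 m; the exact membership functions and helper names are our assumptions, not the thesis's implementation.

```python
# Perception map of Figure 3.8b: fear feeds back into the sensory concepts.
import math

def sigma(x, a0=0.5, k=5.0):
    # continuous-mode sigmoid with delta = 0 (V = [0, 1])
    return 1.0 / (1.0 + math.exp(-k * (x - a0)))

def flight_speed(distance, fear0, lam, gamma, cycles=3):
    # assumed fuzzification: "close" ramps linearly from 1 (20 m) to 0 (80 m)
    near_s = max(0.0, min(1.0, (80.0 - distance) / 60.0))
    far_s = 1.0 - near_s
    near, far, fear, flee = near_s, far_s, fear0, 0.0
    for _ in range(cycles):
        near, far, fear, flee = (
            sigma(near_s + lam * fear),        # sensor + paranoia link
            sigma(far_s - lam * fear),         # sensor - paranoia link
            sigma(near - far + gamma * fear),  # gamma: self-stimulation (stress)
            sigma(fear),                       # fear drives flee
        )
    return flee                                # linear defuzzification: speed
```

With λ = γ = 0 the agent is purely sensory; with λ = γ = 0.6 and an initial fear, a distant enemy still produces a higher flight speed, reproducing the qualitative effect of Figure 3.9.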

The perception of the distance to an enemy can be influenced by fear: according to the proximity of an enemy and to fear, the dynamics of the map determine a speed, obtained here at the 3rd cycle. In (a), λ = γ = 0: the agent is purely sensory and its perception of the distance to the enemy does not depend on its fear. In (b), λ = γ = 0.6: the agent is perceptive; its perception of the distance to an enemy is modified by its fear.

Figure 3.9: Flight Speed


3.4.2 Active Perception

Feeling of Movement

Neurophysiology associates the feeling of movement with the permanent and coordinated fusion of different sensors, such as myoarticular (or proprioceptive) sensors, vestibular receptors and cutaneous receptors [Lestienne 95]. Among the myoarticular receptors, certain muscle and articular sensors measure the relative movements of body segments in relation to each other; other articular sensors act as force sensors and contribute to the sense of effort. The labyrinthine receptors are situated in the vestibular system of the inner ear. This system is a genuine gravity and inertia center which is sensitive, on the one hand, to the angular accelerations of the head around three perpendicular axes (semicircular canals) and, on the other hand, to the linear accelerations of the head, including that of gravity (otoliths). Cutaneous receptors measure pressure variations and are responsible for the tactile sense (touch).

Peripheral vision is also involved in the feeling of movement because it provides information about relative movements within a visual scene. In certain circumstances, this visual contribution can conflict with data received from the proprioceptive, vestibular and tactile receptors; this conflict can result in postural disturbances that may be accompanied by nausea. This is what can happen when you read a book in a moving vehicle: vision, absorbed in reading, tells the brain that the head is not moving, while the vestibular receptors detect the movement of the vehicle; this contradiction is the main physiological cause of motion sickness.

Internal Simulation of Movement

The feeling of movement thus corresponds to a multi-sensor fusion of different sensory information. But this multi-sensory information is itself combined with signals coming from the brain that drive the motor control of the muscles. According to the neurophysiologist Alain Berthoz, perception is not only an interpretation of sensory messages: it is also an internal simulation of action and an anticipation of the consequences of this simulated action [Berthoz 97]. This is the case when a skier goes down a hill: he goes down the hill mentally, predicting what will happen, at the same time as he is actually going down it, intermittently checking the state of his sensors.

Internal simulation of movement is made possible by a neuronal inhibition mechanism. For vision, the brain is able to imagine eye movements without performing them, thanks to the action of inhibitory neurons that shut down the command circuit of the ocular muscles: by staring at a point in front of you and shifting your attention (a sort of interior gaze, if you will), you really feel as if your gaze travels from one point of the room to another. This virtual movement of the eye was simulated by the brain by activating the same neurons; only the action of the motor neurons was inhibited.

Anticipation of movement is well illustrated by the Kohnstamm illusion, named after the physiologist who was the first to study this phenomenon. When a person is balancing a tray with a bottle on it, the brain adapts to this situation, in which the arm remains motionless through a constant effort of the muscles. If someone were to suddenly


remove the bottle, the tray would spring up all by itself. In fact, the brain continues to apply the force until the muscle sensors signal a lifting movement. If the person carrying the tray took the bottle off himself, the tray would not move: the brain would have anticipated the movement and signaled the muscles to relax in response to the change in the weight of the tray.

The brain can therefore be considered as a biological simulator that makes predictions by using its memory and forming hypotheses from an internal model of the phenomenon. To understand the role of the brain in perception, we need to begin with the goal pursued by the organism, and then study how the brain communicates with the sensors, specifying estimated values according to an internal simulation of the anticipated consequences of the action.

Perception of Self

An agent can also use a fuzzy cognitive map in an imaginary place and simulate a behavior. Thus, with its own maps, it can reach a perception of itself by observing itself in an imaginary domain. If it is familiar with the maps of another agent, it will have a perception of the other and will be able to mimic a behavior or set up a cooperation. In fact, if an agent has a fuzzy cognitive map, it can use it in simulation by forcing the values of the sensory concepts and by making the motor concepts act in an imaginary place, as in a projection of the real world. Such an agent is able to predict its own behavior, or that of an agent deciding according to this map, by running several cycles of the map in its imaginary place or by determining an attractor of the map (a fixed point or a limit cycle in discrete mode). Figure 3.10 illustrates this mechanism with the map of Figure 3.8b run for 20 cycles in fuzzy mode, with an initial stress vector equal to ξ on fear only for t ≤ t0, then null the rest of the time: fa(t ≤ t0) = (0, 0, ξ, 0)T , fa(t > t0) = 0. With the same stress conditions, the binary mode converges towards a fixed point: a(t > tf ) = (0, 1, 0, 0)T or a(t > tf ) = (1, 0, 1, 1)T ; either the enemy is far, the agent is not scared and does not flee, or the enemy is close, the agent is scared and flees.
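The binary-mode convergence can be sketched as an iteration to a fixed point. This Python sketch assumes λ = γ = 1 and no sensory input (the agent simulates the map in its imaginary place, forcing only an initial stress ξ on fear); the (0, 1, 0, 0) attractor mentioned above additionally involves the enemy is far sensor, which this zero-input sketch leaves out.

```python
# Binary-mode run of the perception map (Figure 3.8b) to an attractor.
def binary_attractor(xi, lam=1, gamma=1, max_cycles=20):
    step = lambda x: 1 if x > 0.5 else 0      # binary normalization
    near, far, fear, flee = 0, 0, step(xi), 0  # forced initial stress on fear
    for _ in range(max_cycles):
        previous = (near, far, fear, flee)
        near, far, fear, flee = (
            step(lam * fear),                 # fear stimulates "enemy is close"
            step(-lam * fear),                # fear inhibits "enemy is far"
            step(near - far + gamma * fear),  # stress: self-stimulation loop
            step(fear),                       # fear drives flee
        )
        if (near, far, fear, flee) == previous:
            return previous                   # fixed point reached
    return (near, far, fear, flee)
```

Under stress (ξ = 1) the map locks onto (1, 0, 1, 1): the agent imagines the enemy close, is scared and flees; without stress, this zero-input sketch stays at rest.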

In addition, if an agent is familiar with the cognitive maps of another agent, it can anticipate the behavior of this agent by simulating a scenario, running the cycles of those maps in an imaginary place. This technique opens the door to a real cooperation between agents, each one being able to represent the behavior of the other.

Hence, fuzzy cognitive maps make it possible to specify the perceptive behavior of an agent (graph structure), to control the agent's movement (map dynamics) and to simulate movement internally (simulation within the simulation). We can now put them to work in the framework of an interactive fiction.

3.4.3 The Shepherd, His Dog and His Herd

The majority of works in virtual reality involve the multi-sensory immersion of the user in virtual universes, but these universes, as realistic as they may be, will lack credibility as long as they are not populated with autonomous actors. The first works on modeling virtual actors focused on the physical behavior of avatars [Magnenat-Thalmann 91] and made it possible, for example, to study workstation ergonomics (Jack [Badler 93]). Another


Simulation and fixed point of the flight speed with λ = γ = 0.6. In (a), the mode is continuous and the behavior is obtained at the end of the map's twentieth cycle. In (b), the mode is binary and the activation vector of the map stabilizes on a fixed point: either all of the concepts are at 1 except enemy is far, which is at 0, and the speed is 1; or the opposite, and the speed is 0. Observe that if the agent is scared at the time of the simulation, then whatever the distance to the enemy, it perceives the enemy as if it were really close.

Figure 3.10: Perception of Self and Others

category of works dealt with the problem of interaction between a virtual actor and a human operator: for example, the companion agent (ALIVE [Maes 95]), the training agent (Steve [Rickel 99]) and the animator agent [Noma 00]. Finally, others became interested in the interaction between virtual actors; in such virtual theaters, the scenario can be controlled at a global level (Oz [Bates 92]), depend on behavior scripts (Improv [Perlin 95]) or be based on a game of improvisation (Virtual Theater Project [Hayes-Roth 96]).

We propose here to use fuzzy cognitive maps to characterize believable agent roles in interactive fictions. We will illustrate this through a story about a mountain pasture: once upon a time there was a shepherd, his dog and his herd of sheep . . . This example has already been used as a metaphor of complex collective behavior within a group of mobile robots (RoboShepherd [Schultz 96]), as an example of a watchdog robot herding real geese (Sheepdog Robot [Vaughan 00]), and as an example of improvised scenes (Black Sheep [Klesen 00]).

The shepherd moves around his pasture and can talk to his dog and give him information. He wants to round up his sheep in an area that he chooses according to his needs. By default, he stays seated: the shepherd is an avatar of a human actor who makes all of the decisions in his place.

Each sheep can distinguish an enemy (a dog or a man) from another sheep and from edible grass. It evaluates the distance and the relative direction (left or right) of an agent that it sees in its field of vision. It knows how to identify the closest enemy. It knows how to turn to the left or to the right and to run without exceeding a certain maximum speed. It has an energy reserve that it replenishes by eating and spends by running. By default, it moves straight ahead and ends up wearing itself out. We want the sheep to eat grass (random behavior), to be afraid of dogs and humans when they come too close and, in accordance with the Panurge instinct, to try to socialize. So we chose a main map (Figure 3.11) containing all of the sensory concepts (enemy is close, enemy is far, high energy, low energy), motor concepts (eat, socialize, flee, run) and internal concepts (satisfaction, fear). This map computes the moving speed by defuzzification of the run


concept, and the direction of movement by defuzzification of the three concepts eat, socialize and flee; each activation corresponds to a weight on the relative direction to be followed (to the left or to the right) in order, respectively, to go towards the grass, join another sheep or flee from an enemy.
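This weighted blend of directions can be sketched as follows. The encoding of relative directions as offsets in [−1, 1] (negative = left), the function name, and the simple weighted average are our assumptions for illustration, not the thesis's exact defuzzification scheme.

```python
# Blend the directions toward grass, a fellow sheep and the enemy, weighted
# by the activations of the motor concepts eat, socialize and flee.
def heading(dir_grass, dir_friend, dir_enemy, eat, social, flee):
    total = eat + social + flee
    if total == 0:
        return 0.0                  # default behavior: straight ahead
    # fleeing pushes in the direction opposite the enemy
    return (eat * dir_grass + social * dir_friend - flee * dir_enemy) / total
```

A hungry sheep turns toward the grass; a frightened one turns away from the enemy; with no active concept it keeps going straight, which matches the sheep's default behavior above.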

[Figure: in (a), three fuzzy cognitive maps built on the concepts friend left/right, enemy to the left/middle/right, enemy is close/far, high/low energy, satisfaction, fear, eat, socialize, flee, run, turn left and turn right; in (b), the trajectories of the shepherd and of sheep a, b and c, with dated footprints (1, 5, 9).]

In (a), the fuzzy cognitive maps define the role of a sheep. The two maps at the top determine its direction towards a goal; the one at the bottom gives the sheep its character. In (b), the sheep try to herd together while avoiding the shepherd. The 3 sheep act by leaving dated footprints along their paths.

Figure 3.11: Fear and Socialization

The dog is able to identify humans, sheep, the gathering area and the watchpoint. It distinguishes its shepherd from another man by sight and sound, and knows how to spot, within a group of sheep, the one that is farthest from the area. It knows how to turn to the left and to the right and to run up to a maximum speed. Initially young and full of energy, its behavior consists of running after the sheep, which quickly scatters them (Figure 3.12a). First, the shepherd wants the dog to obey the order not to move, which will lead the sheep to socialize. This is done by giving the dog a map that is sensitive to the shepherd's message and inhibits the dog's desire to run (Figure 3.12b). The dog's behavior is decided by the map, and the dog stays still when its master asks it to (dissemination of the message stop). Then, the shepherd gives the dog a map structured around the concepts associated with the herd's area, for example whether a sheep is outside or inside the area. These concepts make it possible for the dog to bring a sheep back (Figure 3.12c,d,e) and to keep the herd in the area by positioning itself at the watchpoint, in other words, on the perimeter and across from the shepherd. It is remarkable that the S-shaped path of the virtual sheepdog emerging from this in virtuo simulation (Figure 3.12c), which was not explicitly programmed, is an adapted strategy observable in vivo in a real sheepdog herding a group of sheep.

3.5 Conclusion

The models that we develop in virtual reality must follow the principle of autonomy of the virtual objects that make up virtual universes.


[Figure: in (a), the trajectories of the dog and of sheep a, b and c scattering around the shepherd and the watchpoint, with dated footprints (1, 4, 7); in (b), a two-concept map in which the message stop inhibits (−1) the desire to run.]

In (a), the shepherd is immobile. A circular area represents the gathering area. A watchpoint is attached to the area, always located diagonally across from the shepherd: this is where the sheepdog must be when all of the sheep are inside the area. Initially, the behavior of a dog without a fuzzy cognitive map is to run after the sheep, which quickly scatter and leave the gathering area. In (b), the simple map depicts the obedience of the dog to the message stop of its shepherd by inhibiting its desire to run.

[Figure: in (c), the trajectories of the dog bringing the sheep back to the area, with dated footprints (1, 5, 9, 13); in (d), the dog's main map, built on the concepts sheep is close/far, sheep in/out, sheep close to/far from area, watchpoint is close/far, guard, follow slowly, run, stop, return and wait; in (e), the approach map, built on the concepts sheep to the left/right, area to the left/right, turn left and turn right, with weights ±1 and ±2.]

For this in virtuo simulation, the socialization of the sheep is inhibited. In (c), the film of the sheepdog bringing back 3 sheep unfolds with the maps of Figure 3.11 for the sheep and the maps represented in (d) and (e) for the dog. In (d), the main map of the dog depicts the role consisting of bringing a sheep back to the area and maintaining a position near the shepherd when all of the sheep are in the desired area. In (e), the map decides the approach angle to a sheep in order to bring it back to the area: go towards the sheep, but approach it from the side opposite the area.

Figure 3.12: The Sheepdog and Its Sheep

We have chosen to design these virtual worlds as multi-agent systems that consider the human user as a participant, and not simply as an observer. The evolution of these systems gives rise to multi-agent simulations in which the human user fully participates by playing his roles of spectator, actor and creator. Participatory simulation of virtual universes composed of autonomous entities then allows us, first, to study phenomena made complex by the diversity of their components, structures, interactions


and models. Entities are modeled by autonomizing all behavioral models, regardless of whether they are reactive, emotional, intentional, developmental or social. The behavior of the set of these entities is then indicative of the system's self-organization capabilities and is one of the expected results of such simulations. By participating in these simulations, the human operator can, according to a principle of substitution, take control of an entity. Thus, by a sort of digital empathy, he can identify with the entity. In return, the controlled entity can learn from the human operator behaviors that are better adapted to its environment.

The study of blood coagulation through this multi-agent approach implemented several thousand autonomous entities with reactive behaviors. The simulations highlight the self-organizing capability of blood cells and proteins to fill in a hole created by the user-troublemaker, at any time and anywhere in the virtual vein. The user-observer, upon noticing a coagulation abnormality (hemophilia or thrombosis), becomes a user-pharmacist by adding new molecules during the simulation, thus allowing the user-biologist to test new reagents, at a lower cost and in complete safety. Within this kind of virtual laboratory, the biologist conducts real in virtuo experiments that involve genuine experience and are not the mere interpretation of in silico calculations. The biologist thus has a new experimental process for understanding biological phenomena.

Writing an interactive fiction by modeling the behaviors of virtual actors with the help of fuzzy cognitive maps demonstrates the triple role of these maps. First, they allow you to specify a perceptive behavior by defining sensory, motor and emotional concepts as the nodes of a directed graph; the arcs between these nodes translate the excitatory or inhibitory relationships between the associated concepts. They then allow you to control the movement of the actors during execution, thanks to their dynamics. Finally, they open the way to an active perception when they are simulated by the agents themselves. A simulation within the simulation, this active perception, like a kind of inner gaze, takes place through a self-evaluation of the agent's behavior and is an outline of self-perception.

In the end, these autonomous behavior models must be implemented and executed on a computer. We must therefore consider the language used and the associated simulator. This is what we will discuss in Chapter 4.


Chapter 4

Tools

4.1 Introduction

This chapter introduces the generic tools that we developed in order to implement the models presented in Chapter 3. These tools are based on the oRis(1) development platform (language + simulator), which was designed to meet the needs of participatory multi-agent simulation and is the core of AReVi, our distributed virtual reality workshop.

The first part of this chapter focuses on the oRis language: a generic, dynamically interpreted, object-based concurrent programming language with instance granularity.

Next, we will concentrate on the associated simulator, in order to show that it was built so that the activation procedure of the autonomous entities does not introduce a bias into the simulation for which the execution platform alone would be responsible. In this environment, robustness to execution errors will prove to be a particularly significant point, especially since the simulator provides on-line access to the entire language.

Finally, we will situate our oRis platform among participatory multi-agent simulation environments and present its first applications.

4.2 The oRis Language

When saying is doing. [Austin 62]

In order for an operator to easily play the three roles (spectator, actor, creator) while respecting the autonomy of the agents with whom he cohabits, he must have a language with which to interact with the models, modify them, and even create new classes of models and instantiate them. Therefore, while the simulation is running, he has the same power of expression as the creator of the initial model. The language must then be dynamically

(1) The name oRis was freely inspired by the Latin suffix oris, which is found in the plural of words related to language, such as cantatio → song, cantoris → singers (those who sing), or oratio → speech, oratoris → speakers (those who speak). We kept the triple symbolism of language, the plural and the act: a multi-agent language.


interpreted: new code must be able to be introduced (and therefore interpreted) during execution. The user may wish to interact with or modify only one instance of a model. So, the language must have an instance granularity, which requires a naming service and, most importantly, the ability to syntactically distinguish the global context code from the class and instance codes.
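These two properties can be sketched in Python, which is itself dynamically interpreted. The Sheep class and the code string are illustrative assumptions; the point is that source code arriving at run time can be interpreted and bound to a single instance, leaving the class and the other instances untouched.

```python
# Dynamic interpretation + instance granularity, sketched in Python.
import types

class Sheep:
    def behave(self):
        return "graze"

a, b = Sheep(), Sheep()

# New code arriving "during execution", e.g. typed by a user at a console.
src = "def behave(self):\n    return 'flee'\n"
ns = {}
exec(src, ns)                                 # dynamically interpret it
a.behave = types.MethodType(ns["behave"], a)  # rebind on instance a only
```

After the patch, instance a flees while instance b still grazes: the modification has instance granularity, which is exactly what oRis demands of its own syntax.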

It appears, then, that participatory multi-agent simulation needs an implementation language that incorporates, of course, the paradigm of object-based concurrent programming (the interest of this paradigm for multi-agent systems is not new [Gasser 92]), but that also provides additional properties: dynamic interpretation and instance granularity.

4.2.1 General View

oRis is an interpreted, strongly typed, object-oriented language. It has many similarities with the C++ and Java languages, which makes it easier to learn. Just like these general-purpose languages, oRis makes it possible to tackle different application domains; when components developed in these languages are integrated, the logical architecture of the applications remains homogeneous, which facilitates the reuse of third-party components and the extensibility of the oRis platform. Figure 4.1 provides an illustration of a very basic, yet complete, program, which defines a class and launches several processes (start blocks). We can see that the class describes active objects, whose behaviors run at the same time as the other processes initiated in other local contexts.
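The interleaving of start blocks and active objects in Figure 4.1 can be sketched with generator-based cooperative scheduling in Python; each task is a generator and yield plays the role of oRis's yield() call. The names (block_one, run, trace) and the round-robin scheduler are our assumptions for illustration.

```python
# Cooperative interleaving of "start blocks", in the spirit of Figure 4.1.
def block_one(out):
    out.append("---- block start ----")
    yield                          # hand control back to the scheduler
    out.append("---- block end ----")

def block_two(out):
    for _ in range(3):
        out.append("doing something else !")
        yield

def run(tasks):
    # round-robin over all live tasks until every generator is exhausted
    while tasks:
        for t in list(tasks):
            try:
                next(t)
            except StopIteration:
                tasks.remove(t)

trace = []
run([block_one(trace), block_two(trace)])
```

The second block's output appears between the first block's start and end lines, reproducing the interleaved trace shown in Figure 4.1.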

oRis integrates a garbage collector. It is therefore possible to choose which instances are to be destroyed automatically (a decision that can be revoked dynamically); the explicit destruction of an instance by the operator delete always remains possible. Consequently, there is a mechanism that lets you know whether a reference is still valid.
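A Python analogue of this reference-validity mechanism, given as an assumed illustration, is a weak reference: it does not keep its target alive and answers None once the instance has been destroyed.

```python
# Testing whether a reference is still valid, via a weak reference.
import weakref

class Agent:
    pass

agent = Agent()
handle = weakref.ref(agent)        # a handle that can be tested later
valid_before = handle() is not None

del agent                          # explicit destruction (oRis: delete)
valid_after = handle() is not None
```

Under CPython's reference counting the destruction is immediate; other Python implementations may delay the collection, whereas oRis makes the validity test part of the language itself.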

oRis interfaces with the C++ language through a mechanism of libraries loaded on demand. Not only does oRis make it possible to associate functions or methods with an implementation in C++, but it pushes the object approach further by making it possible to directly drive an instance of the interpreted language through an instance described in C++. Although oRis is implemented in C++, it is possible to integrate work done in other languages. A package ensures the interfacing of oRis code with SWI-Prolog(2) code: a function makes it possible to call the Prolog interpreter from oRis, and to call an oRis method from Prolog. oRis was also interfaced with Java in such a way that any Java class, whether it is part of the standard classes or the product of a personal development, can be used directly from oRis. There is no need to bend to the calling conventions of a scripting language, as is often the case when calling from C++.

Many standard packages are provided with oRis. These mainly include services that are completely routine but nevertheless essential, so that the tool can really be used under different conditions. These packages cover topics such as data conversion, mathematics, mutual exclusion semaphores, reflex connections, generic containers, graphical interface components, curve plotting, objects situated in the plane or in space, reflection

(2) SWI-Prolog: www.swi.psy.uva.nl/projects/SWI-Prolog


class MyClass                                  // define the class MyClass
{
  string _txt;                                 // an attribute
  void new(string txt) { _txt=txt; }           // constructor
  void delete(void) {}                         // destructor
  void main(void) { println("I say: ",_txt); } // behavior
}

start                                          // start a process
{                                              // in the application
  println("---- block start ----");
  MyClass i=new MyClass("Hello");              // create active objects
  MyClass j=new MyClass("World");
  println("---- block end ----");
}

start                                          // start another
{                                              // process
  for(int i=0;i<100;i++)
  {
    println("doing something else !");
    yield();
  }
}

Program output:

doing something else !
---- block start ----
---- block end ----
I say: World
doing something else !
I say: Hello
I say: Hello
I say: World
doing something else !
I say: Hello
I say: World
doing something else !
...

Figure 4.1: Execution of a Simple Program in oRis

tools, graphical inspectors, file communication, sockets, IPC, remote control of sessions, interfacing with Java and Prolog, and fuzzy controllers.

Another aspect of oRis is the user's graphical environment. When oRis is launched, it displays by default a graphical console that makes it possible to load, launch and suspend applications. As shown in Figure 4.2, the console (upper left) offers a text editor, one of the many ways of working on-line. The other windows presented here are simple application objects: a view of the 3D world, a curve plotter and inspectors. These objects, like all the others, are directly accessible from the console, but can of course be created and controlled by the program.


Tools

Figure 4.2: oRis Graphics Environment

4.2.2 Objects and Agents

Diverse Entities

oRis agents are characterized by properties (attributes), know-how (methods), declarative knowledge (Prolog clauses), a message box, activities (execution flows) and objectives. The associated mechanisms are of course available: look-up and modification of attributes, method calls, an inference engine, message look-up and delivery, the description and activation mode of execution flows, as well as objective evaluation.3

But a multi-agent system is not made up of agents alone. The agents' environment also contains passive objects (which do nothing as long as nothing interacts with them) and active objects (which act without being called upon and generate events). Agents create, modify and also destroy objects such as messages and knowledge items that, even if they can be designed as agents at a certain level, are implemented at the lowest level as objects.

Likewise, not all interactions between entities take place as language acts interpreted asynchronously by their recipient, or as traces left in the environment. When an agent perceives a characteristic of an object or of another agent (the geographical position of an agent, the content of a message, etc.), an attribute of the object in question is read. Symmetrically, modifying the environment amounts to modifying an attribute. In addition, an agent's behavior can result from an unintentional action. Imagine two people talking in the street who are bumped into by passersby: they each

3Currently, an external Prolog interpreter is responsible for managing declarative knowledge; objective management is being delegated to a new application.


make decisions about the messages they exchange based on their own interpretations, but they endure being jostled by the passersby, even though it is neither their intention nor their decision (when we are bumped into, we have no choice!). As a result, in such an application, it is possible to use entities of different natures:

. objects and active objects (according to UML semantics [Rumbaugh 99]);

. actors as suggested by [Hewitt 73]: an actor is an active entity that plays a role by responding according to a script; its behavior is expressed through message deliveries;

. agents, which we can define as autonomous entities (unlike the actor model, there is no script) whose behavior is based on perceiving the environment and performing actions; the actions may be guided by goals (pro-activity).

Fundamentally, the implementation of these different types of entities rests on the notion of object (a class instance), while concurrency and autonomy rest, at this level, on the notion of active object (an object that has its own execution flows). The pro-activity of agents relies on goals, knowledge, action plans and an inference mechanism. Deductive logic programming responds well to this need; in oRis, these elements can be implemented in Prolog, with which it is interfaced.

Space Environment

Whether because the simulated system has its own geometric dimension, or because a spatial metaphor improves the intelligibility of the model, multi-agent simulation may need agents that are situated (locatable and detectable) in a two- or three-dimensional geometric environment. Introducing physical dimensions into the environment enriches the semantics of fundamental notions of multi-agent systems such as:

. location: it is no longer merely logical (i.e. conditioned by the organization of the multi-agent system), but includes spatial proximity (the distance between two objects) and topological proximity (one object is inside another, one object is attached to another);

. perception: it is no longer based solely on identifying an instance and its attributes, but also on spatial locality and the detection of 2D contours or 3D faces; agents must therefore have appropriate sensors with a given field of perception;

. interactions: we can now speak of notions such as collision, object attachment, etc.

oRis offers 2D and 3D environments in which objects are defined by their position, their orientation and their geometric shape. The functionalities of these objects are very similar whether they are Object2d or Object3d. oRis naturally provides tools that allow the user to immerse himself in the multi-agent system. The user, like any agent, has a local view of the environment, can be perceived by the other agents, and can interact physically with them by using an appropriate device to produce events that they can perceive.

Figure 4.3 illustrates object localization in the plane and the notion of field of perception. The methods of perception based on this principle handle objects situated at points (the origins of their local reference frames). A field of perception is defined by an aperture angle and a range. Within this area, the perceived objects are localized by polar coordinates in the local reference frame of the perceiving entity, so as to respond easily to local


perception queries (on my left, straight ahead). These methods of perception let you choose the type of objects to be perceived and allow two variants: detecting the closest object or all of the visible objects. A ray-shooting method more finely detects the contours or faces of these objects, whose shapes are chosen from a set of primitives. Objects can be hierarchically interconnected so that moving one object moves every object attached to it.

[Figure: objects placed in the global (X-axis, Y-axis) reference frame, each with an orientation and a field of perception]

Figure 4.3: Localization and Perception in 2D
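The local polar localization just described can be sketched as follows. This is an illustrative Python analogue, not oRis code; the function name and its signature are ours. An object is perceived if it lies within the range and aperture of the field, and it is then returned in polar coordinates relative to the perceiving entity's own frame:

```python
import math

def perceive(agent_pos, agent_heading, target_pos, aperture, rng):
    """Return (distance, bearing) of the target in the agent's local polar
    frame if it falls inside the field of perception, else None."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    # bearing relative to the agent's own orientation, normalized to (-pi, pi]
    bearing = math.atan2(dy, dx) - agent_heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    if dist <= rng and abs(bearing) <= aperture / 2:
        return dist, bearing
    return None
```

A positive or negative bearing then directly answers local queries such as "on my left" or "on my right" without any global coordinates.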

To go beyond planar problems, oRis offers a three-dimensional geometric environment. It follows the same principles as the two-dimensional environment, but adds further geometric parameters (six degrees of freedom). The geometric representation (in OpenGL4) of an entity is described as a set of points defining faces, which can be either colored or textured. Basic volumes and transformation primitives are also available; a complex shape is thus constructed by combining other shapes.

Time Environment

Time is also a variable of the environment: it conditions the behavior of agents and enables them to situate their actions and interactions. oRis offers basic functionalities for this.

The time-management modes traditionally used in simulation are based, at the lowest level, either on logical time (event-driven) or on physical time [Fujimoto 98]. For this reason, oRis provides two ways to measure time. The getTime() function measures physical time in milliseconds, to give the user, for example, a feeling of real time. The getClock() function returns the number of execution cycles that have elapsed, which can serve as a logical time. Note that the meaning of this value depends on the multi-tasking mode used and, more generally, on the simulation's application context.
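The distinction between the two time sources can be sketched as follows; this is a Python analogue under our own naming of the helper class and the end_of_cycle() hook, not the oRis implementation:

```python
import time

class Clock:
    """Analogue of oRis's two time sources: getTime() returns elapsed
    physical time in milliseconds, getClock() the number of completed
    execution cycles (logical time)."""
    def __init__(self):
        self._start = time.monotonic()
        self._cycles = 0

    def end_of_cycle(self):
        # called by the scheduler at the end of each execution cycle
        self._cycles += 1

    def getTime(self):
        # physical time elapsed, in milliseconds
        return int((time.monotonic() - self._start) * 1000)

    def getClock(self):
        # logical time, counted in execution cycles
        return self._cycles
```

Physical time keeps flowing whatever the scheduler does, whereas logical time only advances when cycles complete, which is why its meaning depends on the multi-tasking mode.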

4OpenGL: www.opengl.org


User Environment

In order for the user to be more than a mere spectator during the development of a multi-agent system, he needs a language with dynamic properties, so that new portions of code can be introduced at any time and under various circumstances. The easiest way to intervene is to launch new processes that alter the multi-agent system's natural course of development. Incremental construction of the system requires being able to complete the running application by adding new notions to it, in particular new classes. Modifications can also involve isolated instances only. All of these on-line modifications let the user treat the very models he is using as parameters of the application.

For these operations to run smoothly, the language's grammar must make it easy to tell whether a change concerns the overall context of the application, that of a class, or that of an instance. Given this contextual information, the user has an interpreter with a single entry point, which can be fed by a stream whose origin matters little (network, input field, file, automatic generation, etc.). Expressing these changes thus does not force the user to use a specific tool (a graphics inspector, for example), even if such tools are available. On the contrary, it allows the finished application to give access to the most suitable dynamic functionalities.

Our objective of allowing on-line modifications is motivated by the fact that, in virtual reality, life must go on no matter what. Whatever happens, and whatever the user may have done, we want the model to keep running even if errors occur. It is therefore necessary to reduce the potential number of errors in the application as much as possible. This point fully justifies the choice of strong typing: if type checks take place as soon as the code is introduced, inconsistencies can be detected and the offending code rejected before it can cause real errors in the application. Many other errors can still arise during execution, and they must in no way cause the application to quit. To avoid interrupting the entire application in order to debug it, we prefer to interrupt only the activity that produced the error. The other processes then continue to run in parallel while the user takes care of the faulty one.

To make it easier for the user to interact with the application and its components, oRis offers several simple, immediately usable mechanisms related to introspection. Other more routine reflection services are part of a specific package. The first facility concerns the naming of objects and the expression of references. Each created instance automatically receives a unique, human-readable name, made up of the name of its class followed by a period and an integer that distinguishes it from the other instances of the same class (MyClass.5, for example). This name is not just a convenient feature but an integral part of the language's lexical conventions: it is a constant of reference type.
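The naming scheme can be sketched as follows. This is a Python analogue, with a registry of our own invention to make names usable as references, as oRis names are:

```python
import itertools
from collections import defaultdict

class Named:
    """Sketch of oRis instance naming: each instance gets a unique name
    'ClassName.N', and a registry resolves names back to references."""
    _counters = defaultdict(itertools.count)   # one counter per class name
    _registry = {}                             # name -> instance

    def __init__(self):
        cls = type(self).__name__
        self.name = f"{cls}.{next(Named._counters[cls]) + 1}"
        Named._registry[self.name] = self

    @staticmethod
    def resolve(name):
        # a name acts as a constant of reference type
        return Named._registry.get(name)

class MyClass(Named):
    pass
```

In oRis the resolution is built into the lexical conventions themselves; here it has to go through an explicit resolve() call.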

A group of services lets you enumerate all of the existing classes, or the instances of a particular class. It is also possible to ask an object for its class, or to check whether it is an instance of a particular class or of one of its derived classes.


4.2.3 Interactions between Agents

The interactions between an agent and its environment take place through the manipulation of the structural characteristics of the objects that compose it: the attributes of the targeted objects are read and modified, possibly by calling a method (e.g. calling the move() method of an object about to collide with a situated agent). These interactions can also instantiate or destroy objects (by calling the new and delete operators). In any case, the reaction of the environment's objects is imperative and synchronous.

Interactions between agents can be reactive (a collision, for example) or mediated (message exchange). For this reason, the oRis language offers four mechanisms that can be used simultaneously when implementing an agent. These services are provided by the Object class and correspond respectively to synchronous method calls, reflex connections, point-to-point message delivery, and message broadcasting into the universe of agents.

Synchronous Calls

Even though the object model uses the concept of message delivery to describe communication between objects, in practice languages use a synchronous method-call mechanism. Calling a method on an object does not create a new execution flow for the instance in question, but simply redirects the caller's activity into a procedure that manipulates the data of the designated object. In other words, the calling object does all of the computation; the target object only provides its data and operating procedures, but does not actively participate in the process.

The advantage of imperative programming through synchronous calls is that execution is efficient and actions are guaranteed to be sequenced: the code that follows the call can count on the requested operation having completed, since it was carried out by the same execution flow. Some agent behaviors, such as the perception of others, can effectively be programmed this way. Note also that a method call can be placed inside a start{} block and hence be run by an activity flow different from the calling flow.

As specified in UML, the C++ and Java languages provide a way to control access (public, protected or private) to the attributes and methods of object classes. For efficiency reasons, this control takes place at compile time. For agents, however, these semantics are not sufficient. Indeed, two instances of the same class can directly modify each other's private parts, which violates the principle of autonomy if the target is a method that should only be executed under the control of the agent it belongs to.

To address this situation, oRis provides a way to restrict access to methods (or even to functions). Whereas this designates the object on which the current method is invoked, the keyword that designates the object that invoked this method on the object designated by this. By verifying the identity of that at the beginning of a method, it is possible to control access to the corresponding service; we can thus simply ensure that certain services are directly accessible only to the entities concerned. The verifications


can of course follow the standard class-based scheme, but can also rest on modalities specific to the application and on conditions that vary over time. Notice that that is the counterpart of the sender field of an asynchronous message, which allows a receiving agent to apply the same reasoning whatever the communication medium (synchronous method call or asynchronous message delivery). Access to that relies internally on the ability, in oRis, to inspect the execution stack of the running process. We can, for example, allow a method to be invoked only from certain other methods.
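The kind of check performed on that can be sketched as follows. Python has no that keyword, so in this illustrative analogue the caller is passed explicitly; the class, method and exception names are ours:

```python
class AccessError(Exception):
    pass

class Agent:
    """Sketch of oRis-style dynamic access control: a restricted method
    inspects the calling object ('that' in oRis) before serving it."""
    def __init__(self, name, friends=()):
        self.name = name
        self.friends = set(friends)   # who may call the restricted service

    def private_service(self, that):
        # equivalent of checking 'that' at the top of an oRis method
        if that.name not in self.friends:
            raise AccessError(f"{that.name} may not call private_service")
        return f"served {that.name}"
```

The check here is a simple membership test, but as in oRis it could equally depend on the organizational structure or on conditions that vary over time.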

Even though execution may be slower, these dynamic access controls open up several possibilities in multi-agent system programming. Interaction rules can be established within organizational structures and made dynamic. Hence, it is possible to give an agent the means to refuse to execute a service because the requester has no authority over it in the organizational structure, or because the service was requested, even indirectly, by an agent that proved uncooperative during a previous request.

Reflex Connections

oRis provides an event-driven programming facility called reflex connections, which allows an agent to react to the modification of an object attribute, be it one of its own attributes (internal state of the agent, mailbox, etc.) or an attribute of another object (the case of an agent watching an object of the environment or a perceptible characteristic of another agent).

A reflex connection is an object attached to an attribute; each time this attribute is modified, some of the connection's methods are automatically launched. Thus, when an attribute associated with a reflex connection is modified, the connection is activated, which automatically triggers the execution of its before() method just before the modification takes place, and of its after() method just after. Any number of reflex connections can be associated with the same attribute of the same instance.

This mechanism can be compared to a communication method (in a reactive context), since the launching of a process upon the modification of an attribute can be seen as a way of being notified of an event.
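The before()/after() hooks can be sketched in Python by intercepting attribute assignment. This is an illustrative analogue restricted to a single watched attribute; oRis associates connections with any attribute of any instance:

```python
class ReflexConnection:
    """Sketch of an oRis reflex connection: before()/after() run around
    every modification of the watched attribute."""
    def __init__(self):
        self.log = []
    def before(self, obj, old, new):
        self.log.append(("before", old, new))
    def after(self, obj, old, new):
        self.log.append(("after", old, new))

class Watched:
    def __init__(self, value):
        self._connections = []                 # any number per attribute
        object.__setattr__(self, "value", value)
    def connect(self, conn):
        self._connections.append(conn)
    def __setattr__(self, name, new):
        if name == "value":
            old = getattr(self, "value", None)
            for c in self._connections:
                c.before(self, old, new)       # just before the modification
            object.__setattr__(self, name, new)
            for c in self._connections:
                c.after(self, old, new)        # just after the modification
        else:
            object.__setattr__(self, name, new)
```

An agent watching another agent's position, or its own mailbox, would install such a connection and be notified without polling.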

Communication by Messages

In accordance with the object model, which is conceptually based on interaction between instances by message delivery, this mechanism is defined in oRis in the basic Object class. It also uses the services of the basic Message class, both for sending point-to-point messages and for broadcasting them.

To send a point-to-point message, an agent instantiates an object of a class derived from Message (whose only attribute is the reference of the sender, determined through the keyword that). Calling its sendTo() method places it in the mailbox (a FIFO, first-in first-out queue) of the recipient whose reference is passed as argument. In order to process the received message, the recipient must consult its mailbox. The recipient


can be informed that a message has been delivered by a reflex connection that is automatically triggered. It is possible, for example, to relaunch the execution of the message-reading activity, which may have been suspended in order to avoid any active waiting. The type of communication described here is called point-to-point in the sense that the sender sends its message to a recipient that it knows and has explicitly designated. The procedure is asynchronous, since the sender pursues its activities without knowing exactly when its message will be processed by the recipient.
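Point-to-point delivery can be sketched as follows; this is a Python analogue (the class name Peer and the read_message() helper are ours), with the sender filled in explicitly where oRis uses that:

```python
from collections import deque

class Message:
    """Sketch of oRis point-to-point messaging: a message records its sender
    and sendTo() drops it into the recipient's FIFO mailbox."""
    def __init__(self, sender):
        self.sender = sender                 # oRis fills this through 'that'

    def sendTo(self, recipient):
        recipient.mailbox.append(self)       # asynchronous: sender goes on

class Peer:
    def __init__(self, name):
        self.name = name
        self.mailbox = deque()               # FIFO: first in, first out

    def read_message(self):
        # consulted at the recipient's own pace
        return self.mailbox.popleft() if self.mailbox else None
```

The sender returns immediately after sendTo(); nothing says when, or in what activity, the recipient will consult its mailbox.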

Extending the object model, an agent can also send a message simultaneously to objects (agents) that it does not know; upon reception, those objects will launch an action unknown to the sender. The messages used are exactly the same as before; the difference lies in how they are sent and received. This time, the sender calls the broadcast() method of the message so that it is sent to all of the instances that may be concerned. These are determined through the setSensitivity()

method of the Object class. Calling this method takes two arguments, which respectively indicate the type of broadcast message to which the instance becomes sensitive (a class name) and the name of the method to be launched when such a message is received (e.g. setSensitivity("WhoAreYou", "presentMySelf")). When a message is broadcast, the specified method (presentMySelf) is immediately called on each object sensitive to this type of message (the WhoAreYou class). This reflex mechanism suits messages that are treated as events requiring an immediate reaction from the objects that intercept them. Otherwise, messages received by broadcast can also be placed in the recipient's mailbox and processed as if they had been sent point-to-point, which homogenizes the reaction mode of the receiving agent. The same object can be sensitive to several types of messages, and a mechanism akin to the resolution of polymorphic links governs the choice of the reflex methods to be launched. Sensitivity to the different types of messages can evolve over time, in order to change reactions or to become desensitized to certain types of messages.

4.3 The oRis Simulator

Each computer program is a model, created by the mind, of a real or imaginary process. These processes, born from man's experiences and thoughts, are innumerable and complex in their details. At any moment, they may only be partially understood. Only rarely are they modeled to our satisfaction in our computer programs. Although our programs are sets of symbols crafted with care, mosaics of criss-crossed functions, they never stop evolving. We modify them gradually as our perception of the model deepens, broadens and becomes general, until an equilibrium is reached at the boundaries of yet another possible modeling of the problem. The heady joy that accompanies computer programming stems from the continual round trips between the human mind and the computer, from mechanisms expressed as programs and from the explosion of new visions that they bring. If art translates our dreams, computers fulfill them in the guise of programs! [Abelson 85]

Multi-agent simulation requires several models to be executed in parallel. We must therefore make sure that the activation procedure of these autonomous entities does not introduce a bias that would make the execution platform responsible for part of the global state: it is imperative that only the algorithmic procedures described in the agents' behaviors explain the model's global state.


Controlling the scheduling of agent behaviors – implemented in oRis as active objects – is therefore very important to our project. Experience shows that, as a general rule, very little is known or guaranteed about the multi-tasking services provided by the various programming environments. This lack of information leaves some doubt as to the fairness of these procedures and the risks of bias they may incur. Here, we present the scheduling procedure used in oRis, so that the user knows exactly what to expect when he decides to run agents in parallel.

4.3.1 Execution Flows

The first topic to tackle with respect to multi-tasking is the nature of the tasks that we want to execute in parallel. Although they are all managed the same way internally, the oRis language provides three ways to express these tasks, which we will call execution flows.

In oRis, an active object has a main() method that constitutes the entry point of the behavior of the instance in question. When an instance equipped with a main() method is created, this method is immediately ready to be executed, and when its end is reached, it is automatically restarted from the beginning. This is therefore a simple way to implement an agent's autonomous behavior: a multi-agent simulation in oRis instantiates these agents and lets them live.

Another way to start a new parallel process for active objects is to split the execution flow using the start primitive (Figure 4.4). This procedure, which generates several execution flows from a single one, is mainly used to assign several activities to one agent. It is quite conceivable that the main behavior of an agent (its main() method) will spawn other, supplementary activities: it seems reasonable that an agent could, for example, move around while communicating with others.

[Figure: a function or method body { /* code 1 */ start { /* code 2 */ } /* code 3 */ } executes code 1, then the start primitive makes code 2 run in a new flow, in parallel with code 3]

Figure 4.4: Splitting an Execution Flow in oRis
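These first two kinds of activity can be sketched with Python generators, where each yield marks a switch point. This is an illustrative analogue, not the oRis runtime; the class and method names are ours:

```python
class ActiveObject:
    """Sketch of oRis activity flows: main() is written as a generator and is
    automatically restarted when it ends; start() splits off a supplementary
    flow, as the start primitive does."""
    def __init__(self):
        self.flows = [self._relaunch(self.main)]   # the main activity

    def _relaunch(self, gen_func):
        while True:                  # when main() reaches its end...
            yield from gen_func()    # ...it is restarted from the beginning

    def start(self, gen_func):
        # analogue of the start primitive: one more parallel flow
        self.flows.append(gen_func())

class Counter(ActiveObject):
    def __init__(self):
        self.ticks = 0
        super().__init__()

    def main(self):
        self.ticks += 1
        yield                        # hand control back to the scheduler
```

Stepping a flow with next() interleaves this behavior with the others; main() is relaunched transparently each time it completes.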

The two previous types of activities are clearly intended for writing agent behaviors. The third type, introduced here, is intended more specifically for user interventions, although it can also serve in numerous other circumstances. We reuse the keyword start, but this time outside of any code block. It lets you write a block of code whose execution starts right after the interpreter analyzes it. An example of such a block was already given in Figure 4.1. In particular, such a block can be used to create new instances or to begin processes that execute in parallel with all of the application's activities. A start block can be considered as an anonymous


function: it hosts local variables and its code does not involve any instance in particular. It is also possible to turn it into an anonymous method by prefixing it with the name of an instance; the code executed in the start block then concerns this particular object, as if it were one of its methods. Combined with the dynamic properties of the language, this possibility allows the user to launch a process during the application while impersonating a specific instance; that object temporarily becomes the user's avatar.

Whichever of these three methods is used to create execution flows, the scheduler manages them all in the same way. Each execution flow is designated by a unique identifier and can be suspended, relaunched or destroyed. Particular care was taken with errors that may occur during execution. Because errors mainly stem from unforeseen circumstances (division by zero, access to a destroyed instance, etc.), we did not try to provide a catch mechanism for anticipated errors, like try/catch, which signals an error but is not really able to correct it. In oRis, an error during the execution of an activity destroys that flow alone (after displaying the error message and the execution stack). This solution, which only stops the activity involved in the error, makes it possible to design a universe that continues to run even when errors occur locally (life goes on no matter what).

4.3.2 Activation Procedures

The oRis scheduler maintains several execution flows. Figure 4.5 diagrams the data structures used to manage these parallel processes. Each execution flow is represented by a data structure which, in addition to the activity's identifier, contains a context stack and a temporary-values stack. The context stack materializes nested function and method calls. Executable modules are represented by sequences of micro-instructions. Since these are atomic, activity can only be switched between flows between the execution of two micro-instructions. The temporary-values stack is shared by all of the contexts of an execution flow and makes it possible to push the parameters of an executable module before its call, and to retrieve the pushed result when the called module terminates.

[Figure: the oRis virtual machine holds all execution flows; each flow has a stack of contexts (executable module of micro-instructions, local variables, instruction counter) and a temporary-values stack]

Figure 4.5: Structure of the oRis Virtual Machine


Three Multi-Task Modes

We now know that execution flows can be interrupted between two of their micro-instructions. What remains to be explained is what causes one execution flow to be interrupted in favor of another. This leads to three types of multi-tasking (cooperative, preemptive and deeply parallel) among which the user can choose freely in his program.

One possibility is to place the scheduler in cooperative mode. Under these conditions, the active flow never changes spontaneously; switching relies exclusively on explicit instructions in the executed code (the yield() function, the end of main(), waiting for a resource). The scheduler can then interrupt the current process in order to continue the execution of another one. An activity that yields to the others resumes its work once all of the others have done the same. This type of multi-tasking requires that all of the application's activities play along, in other words, that they regularly make way for the others. If one of them monopolizes the scheduler, it freezes the rest of the application.

It is also possible to choose preemptive scheduling by specifying, in milliseconds, the interval at which a switch must take place. Under these conditions, long behaviors can be written without worrying about the minimum interleaving of activities. The programmer no longer needs to insert explicit switches (yield()), since the scheduler makes sure that they happen spontaneously.

An even finer control of switching is possible by specifying a number of micro-instructions as the preemption interval. The notions of parallelism and interleaving are pushed to their limit when this number equals 1, which justifies the term deeply parallel. This mode is significant because it provides a parallelism that goes well beyond what the other two modes offer: even though they interleave activities over time, they are based on code portions that are executed in generally long sequences. With this new scheduling mode we can greatly decrease the length of these sequences, which brings the execution closer to real parallelism (even if it is not actual parallelism). The price of this possibility is that the scheduler can spend more time on switching than on the application's useful code. Notice that this type of multi-tasking (just like the cooperative mode) does not depend on any system clock; it is purely the expression of an internal oRis algorithm, and is consequently perfectly under control.
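Preemption by micro-instruction count can be sketched as follows. In this illustrative Python analogue (function names are ours), each flow is a generator whose items stand for micro-instructions, and the quantum is the preemption interval; quantum=1 corresponds to the deeply parallel mode:

```python
def interleave(flows, quantum):
    """Execute 'quantum' micro-instructions of each flow before switching
    to the next one; returns the resulting execution trace."""
    trace = []
    flows = list(flows)
    while flows:
        for f in flows[:]:
            for _ in range(quantum):
                try:
                    trace.append(next(f))    # one atomic micro-instruction
                except StopIteration:
                    flows.remove(f)          # this flow has terminated
                    break
    return trace

def flow(name, n):
    # a toy flow of n micro-instructions
    for i in range(n):
        yield f"{name}{i}"
```

With quantum=1 the micro-instructions of the flows alternate strictly; with a large quantum, each flow runs in long sequences, as in the other two modes.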

Two Designation Modes

The last topic to discuss regarding the activation process is how to choose which flow to activate after a switch. Introducing priorities would only push the problem back to flows of equal priority, and would be very difficult to interpret in terms of autonomous entities. It seemed better, in fact, to accept that the various entities do not use their time in the same way than to decree that some live more often than others5. To ensure that time is shared equally between the different activities, we introduce the notion of execution cycle, which adds the following property to the system: each execution flow

5Does the hare live more often than the tortoise, or does he simply run faster?


runs exactly once per cycle. If new activities appear during a cycle, they are executed starting from the next cycle, to make sure that cycles terminate. We propose two activation orders: a fixed order and a random one.

The most immediate solution consists of running the activities in an unchanging order, which induces an undesirable precedence relationship, since the activity at rank n−1 is always chosen before the activity at rank n. Thus, if two agents regularly compete for one resource, the agent whose behavior is at rank n−1 systematically takes the resource, to the detriment of the agent at the higher rank n6. Notice that this parasitic priority only holds between activities locally, not globally: indeed, at the microscopic level, the last activity of a cycle precedes the first activity of the following cycle (from this point of view, the change of cycle is not a remarkable date). This priority relationship only exists between activities whose ranks are close in the designation process.

Since simulating parallelism requires that code portions of different flows be executed sequentially, this local priority relationship cannot be completely eliminated. However, by introducing a random order into the designation, we avoid its systematic reproduction from cycle to cycle, since the advantage of one activity over another during one cycle may be reversed during another. We therefore retain the notion of execution cycle, which provides a common logical time preventing any drift in favor of one activity, while the random order eliminates the bias introduced by local precedence relationships.
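The random designation within fair cycles can be sketched as follows; this is an illustrative Python analogue (the function name and its arguments are ours), with activities reduced to callables that each take one turn per cycle:

```python
import random

def run_cycles(activities, n_cycles, rng):
    """Sketch of the oRis execution cycle with random designation: every
    activity runs exactly once per cycle, in an order reshuffled at each
    cycle, so no activity keeps a systematic precedence over another."""
    trace = []
    for _ in range(n_cycles):
        order = list(activities)
        rng.shuffle(order)          # break local precedence relationships
        for act in order:
            trace.append(act())     # each flow gets exactly one turn
    return trace
```

Every cycle contains each activity exactly once (fairness), while the shuffle prevents the rank-based bias of a fixed order from repeating across cycles.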

Besides the three available scheduling modes (cooperative, preemptive and deeply parallel), the user can also choose between a fixed designation order (strongly advised against) and a random one, which eliminates any local precedence relationship that could lead to bias. Notice that the process used is based solely on algorithms and data structures over which we have complete control, explained in detail in [Harrouet 00]. It does not depend on any particular functionality of the underlying system (with the exception of the clock for the preemptive mode).

Regarding concurrent access to shared resources, oRis offers quite standard mutual-exclusion semaphores. In particular, it lets you choose whether unlocking follows the usual queue order or a random order, again to avoid local precedence relationships. Another way to ensure this type of service is to create low-level critical sections (an execute block, similar to start) within which the scheduler is not allowed to perform any switch.

One last detail about parallelism involves the use of system threads to encapsulate blocking system calls. Indeed, the oRis scheduler is seen as a single process from the point of view of the operating system, and is thus likely to be suspended as a whole. When a potentially blocking operation must be carried out, it is launched in a thread whose termination is regularly checked by the oRis scheduler.

6. What would the immunologist from Chapter 3 conclude if the antibodies always took the resource away from the antigen because of a bias in the simulator?


4.3.3 The Dynamicity of a Multi-Agent System

There are many ways to introduce oRis code during execution (composing a character string, typing in an entry window, reading a file, reading from a network stream, etc.), but everything converges towards a single entry point: the parse() function. This function passes its string argument to the interpreter, making the whole language available on-line. Indeed, the same process is used for the initial loading and for on-line actions. It is therefore possible to create agents that, given a learning mechanism, could autonomously make their behavior evolve.
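As a rough Python analogue (using exec on a shared namespace in place of the oRis interpreter; the names are hypothetical):

```python
world = {}   # shared namespace standing in for the running program

def parse(code):
    """Single entry point: hand any source string to the interpreter.
    The same path serves initial loading and on-line modification."""
    exec(code, world)

parse("def greet(): return 'hello'")     # initial loading
parse("def greet(): return 'bonjour'")   # on-line redefinition
assert world["greet"]() == "bonjour"
```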

Launching Processes

The first way to change a program during execution is to launch new processes, which make it possible to create new agents, inspect the model, interact with agents or destroy them. Such lines of code appear in an anonymous block, either start or execute. As we have previously seen, these blocks of code are handed to the scheduler once they are analyzed by the interpreter. A start block begins a process that runs in parallel with the other activities, while an execute block is executed immediately and without interruption. The user can also act as one of the application's objects by specifying that object's name in front of the start or execute block (MyClass.1::execute { ... }). The object then becomes an avatar of the user.
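The distinction between the two block kinds can be sketched in Python, with generators standing in for oRis processes (hypothetical names, not the oRis implementation):

```python
class Scheduler:
    """Cooperative scheduler with the two kinds of anonymous block:
    execute runs immediately and atomically, while start enqueues a
    generator that runs one step per cycle alongside other activities."""
    def __init__(self):
        self.tasks = []
        self.log = []

    def execute(self, fn):
        fn()                            # instantaneous and uninterrupted

    def start(self, gen):
        self.tasks.append(gen)          # runs in parallel, cycle by cycle

    def cycle(self):
        for g in list(self.tasks):
            try:
                next(g)
            except StopIteration:
                self.tasks.remove(g)

sched = Scheduler()
sched.execute(lambda: sched.log.append("now"))

def blink():
    for i in range(2):
        sched.log.append("tick %d" % i)
        yield                           # hand control back to the scheduler

sched.start(blink())
sched.cycle()
sched.cycle()
```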

Changing Functions

Introducing source code makes it possible, at any time, to add new functions (those that did not exist are created) and to modify existing ones (those that existed are replaced). Declaring a function (its prototype) can even replace its definition; in that case, calling the function is forbidden until a new definition is introduced. These modifications also extend to compiled code: one can choose between several implementations provided in C++, or fall back to a definition in oRis.
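The create-or-replace rule, and the way a bare declaration withdraws a definition, can be sketched as follows (a Python analogue with hypothetical names):

```python
class FunctionTable:
    """Function registry: a definition creates or replaces an entry;
    a bare declaration (prototype) withdraws the body, so calling the
    function fails until a new definition is introduced."""
    def __init__(self):
        self.table = {}

    def define(self, name, body):
        self.table[name] = body          # create or replace

    def declare(self, name):
        self.table[name] = None          # prototype only: no body

    def call(self, name, *args):
        body = self.table.get(name)
        if body is None:
            raise RuntimeError(name + " is declared but not defined")
        return body(*args)
```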

Changing Classes

Just as functions can be changed dynamically, oRis can add, complete and modify classes during execution. A class can be added at any time by defining it in a call to the parse() function. When the interpreter encounters a class definition, it may be a completely new class, in which case it is created, or an existing one, in which case it is completed or modified. It is thus possible to add attributes and methods, as well as to rewrite existing methods. Adding methods is very similar to adding functions: a method that did not exist now exists, and a method that existed is replaced. In general, the remarks about multiple definitions and declarations of functions also apply to methods. The situation is, however, a little more delicate, because the effects of new definitions combine with the effects of overdefinitions. A modification to a method of a given class only affects the derived classes if they do not overdefine the method in question. Conversely,


when there is no overdefinition, the modification instantly propagates through the entire hierarchy of derived classes and instances. In the same way, adding an attribute is reflected in the derived classes and instances.
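This propagation rule also holds in other languages with dynamic classes; a Python sketch (hypothetical classes) shows the behavior:

```python
class Shape:
    def draw(self): return "generic"

class Circle(Shape):                 # inherits draw(): no overdefinition
    pass

class Square(Shape):                 # overdefines draw()
    def draw(self): return "square"

Shape.draw = lambda self: "fancy"    # on-line modification of the base class
```

Circle instances instantly follow the change to Shape, while Square keeps its own overdefinition, exactly the rule described above.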

Changing Instances

An even finer-grained operation consists in dynamically specializing the behavior of a single instance; the class then represents only the default behavior of its instances. The same principles as for functions and classes apply: what does not already exist is added, and what exists is replaced. These changes are distinctive because they are applied to entities that already exist, a clear difference from the anonymous classes of Java, which are created and compiled well before any instance exists. Here, we provide a way to adjust the behavior of an instance during its lifetime, to give it a behavior better suited to the case under study.

class Example                             // Definition of the Example class
{
  void new(void)    {}
  void delete(void) {}
  void main(void)                         // Initial main() method
  { println(this," is an Example !"); }
};

execute                                   // Code to be executed
{
  Example ex;
  for(int i=0;i<3;i++) ex=new Example;    // Instantiate three Examples
  string program=
    format("int ",ex,"::i;                // Add an attribute to 'ex'
            void ",ex,"::show(void)       // Add a method to 'ex'
            { print(++i,\" --> \"); }
            void ",ex,"::main(void)       // Overdefine main() of 'ex'
            { show(); Example::main(); }"); // to use what was added
  println(program);                       // Display the composed character string
  parse(program);                         // Interpret the composed character string
}

The string displayed by println(program):

int Example.3::i;                         // Add an attribute to 'ex'
void Example.3::show(void)                // Add a method to 'ex'
{ print(++i," --> "); }
void Example.3::main(void)                // Overdefine main() of 'ex'
{ show(); Example::main(); }

The resulting execution trace:

Example.3 is an Example !
Example.2 is an Example !
Example.1 is an Example !
Example.1 is an Example !
1 --> Example.3 is an Example !
Example.2 is an Example !
Example.1 is an Example !
Example.2 is an Example !
2 --> Example.3 is an Example !

Figure 4.6: Modifying an Instance


Figure 4.6 gives an example of using the parse() function to modify the behavior of an instance. Composing a character string that refers to one of the three instances makes it possible to add an attribute and a method, as well as to overdefine an existing method. The distinction between class and instance is made naturally by the scope resolution operator (::). Such modifications can, of course, be made several times during execution.
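For comparison, a similar per-instance specialization can be sketched in Python (an analogue only, not the oRis mechanism):

```python
import types

class Example:
    def main(self):
        return "is an Example !"

e = Example()

def main(self):                        # specialized main() for 'e' only
    return "1 --> " + Example.main(self)

e.main = types.MethodType(main, e)     # bound to the instance, not the class
```

Other instances of Example keep the default behavior defined by the class.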

The dynamic functionalities presented here require no special programming technique or device: there is no difference between the form of the code written off-line and that of the code created dynamically on-line; the writing rules are the same in every case. This makes it possible, in particular, to develop a behavior on one instance and then apply it to an entire class simply by changing the scope of the code. Object reference constants make it easy to act on any instance. If this lexical form did not exist, it would be necessary to store a reference to each instance in order to act on it at the right moment. In oRis, the user does not have to worry about this kind of detail: if by some means (graphics inspector, pointing in a window, etc.) we display the name of an object, we can reuse it to make the object execute code or to modify it.

4.4 Applications

Virtual worlds must be realized; in other words, one must endeavor to actualize what is virtually present in them, to know the intelligible models that structure them and the ideas that develop them. The "fundamental virtue" of virtual worlds is to have been designed with an end in mind. It is this end that must be realized, actualized, whether the application is industrial, space-related, medical, artistic, or philosophical. The images of the virtual must help us reveal the reality of the virtual, which is of an intelligible order, of an intelligibility proportional to the pursued end, theoretical or practical, utilitarian or contemplative. [Queau 93b]

4.4.1 Participatory Multi-Agent Simulations

The development of software engineering has led to the definition of models, methods and methodologies, made operational by the various tools that implement them. The expression software development environment is, however, far too vague: no universal suite and, a fortiori, no single tool covers all needs. In the more specialized field of multi-agent system development, the same diversity can be found: methods [Iglesias 98], models (for example, a model for coordinating actions such as the contract net [Smith 80]), languages (for example, MetateM [Fisher 94] or ConGolog [De Giacomo 00]), application builders such as ZEUS [Nwana 99], component libraries such as JATLite7, multi-agent system simulators like SMAS8, and toolkits, which are generally class packages offering basic services (agent life cycle, communication, message transfer, etc.) and classes implementing some components of a multi-agent system ([Boissier 99] describes a set of platforms developed by the French research community

7. JATLite: Java Agent Template Lite (java.stanford.edu/java agent/html)
8. SMAS: Simulation of Multiagent Asynchronous Systems (www.hds.utc.fr/~barthes/SMAS)


and [Parunak 99] references other tools of this type).

According to Shoham, an agent-oriented programming system is made up of three basic elements [Shoham 93].

1. A formal language for describing the mental states of agents, which for Shoham are modalities such as beliefs or commitments. A modal logic can be defined for this purpose, as the author does in his Agent-0 language; the internal states of a reactive agent can instead be described as a discrete-event system such as a Petri net [Chevaillier 99a].

2. A language for programming agent behaviors that respects the semantics of the mental states; thus Agent-0 defines an interpretation for updating mental states (based on speech act theory) and for executing commitments. In this framework one could use KQML9 and, if heterogeneity or standardization constraints were imposed, a language respecting the FIPA10 specifications. In the case of a discrete-event system, one would define the semantics of the state machine (for example, an interpretation of a Petri net).

3. An "agentifier" that can convert a neutral component into a programmable agent; Shoham admits to having few ideas on this topic. Let us reformulate this requirement: what is needed is a language for making agents operational, i.e., implementable and executable. We prefer to think of this third element as an implementation language.

The first two elements, which are closely related, are open research fields, and they must be chosen carefully according to the types of multi-agent systems one wishes to develop. Integrating them into a general-purpose platform inevitably leads to two pitfalls: platform instability, since the field is still open, and the diktat of a single model, which is not necessarily appropriate for all situations. For the third element, several choices are possible, conditioned by the type of application considered and by the technological characteristics of the agents' execution environments. The two major types of applications for multi-agent systems, problem solving and simulation, lead to the following typology (after [Nwana 96]).

- For problem solving:

  - embedded agents (e.g. physical agents for process control and interface agents): there are heavy efficiency constraints on program execution and a dependency on the technology of the target system (many applications are written in C or C++);

  - mobile agents: it is better to use a language that can be interpreted on a wide range of operating systems; scripting languages such as Perl are possible;

  - rational agents: logic programming with a language such as Prolog can perfectly meet this type of need.

9. KQML: Knowledge Query and Manipulation Language (www.cs.umbc.edu/kqml)
10. FIPA: Foundation for Intelligent Physical Agents (www.fipa.org)


- For simulation: an environment is needed in which different agent behaviors can execute in parallel and which offers a wide range of components for interfacing with users. The choice tends toward languages executed on a virtual machine that supports parallel programming: Smalltalk-80 in the past and, increasingly, Java (the development of the DIMA platform illustrates this trend [Guessoum 99]).

We are well aware that this classification, like many others, is somewhat arbitrary, that different languages can be applied to different fields (everything can be done in assembler!), and that the same problem admits several solutions. Its only purpose is to explain to the reader the positioning of oRis as a participatory multi-agent simulation platform. oRis was designed to meet both the need for an implementation language for multi-agent systems and the participatory simulation requirements of these systems. The entire oRis environment and language represent more than one hundred thousand lines of code, written mainly in C++ but also in Flex++ and Bison++, and in oRis itself. oRis is completely operational and stable and is already used in many projects.

4.4.2 Application Fields

oRis is used as a teaching tool in several courses given at the Ecole Nationale d'Ingenieurs de Brest (National Graduate Engineering School of Brest): for example, concurrent object programming, multi-agent systems, distributed virtual reality and adaptive control. It is also used by other educational institutions: the Military Schools of Saint-Cyr Coetquidan, the ENSI of Bourges, the ENST de Bretagne (National Graduate Engineering School of Telecommunications, Brittany), the IFSIC at DEUG (Diploma of General University Studies) level at the University of Rennes 1, the DEA (post-Master's degree) of the University of Toulouse (IRIT), the Institute of Technology in Computing of Bordeaux, and a Master's degree at the University of Caen.

oRis is the platform on which the research work of the Laboratoire d'Informatique Industrielle (Laboratory for Industrial Data Processing, LI2) is conducted, which has resulted in the development of many class packages. Among these are packages for coordinating actions according to the Contract Net Protocol, agent distribution [Rodin 99], communication between agents using KQML [Nedelec 00], the use of fuzzy cognitive maps [Maffre 01, Parenthoen 01], the declaration of collective action plans using an executable extension of Allen's temporal logic [De Loor 00], and the definition of agent behaviors as tendencies [Favier 01]. The platform is also used in image processing [Ballet 97b], medical simulation [Ballet 98b] and the simulation of manufacturing systems [Chevaillier 99b].

Other research teams have also used oRis in their work: the CREC laboratory (Saint-Cyr Coetquidan) for simulating battlefields, conflicts and cyber warfare; the Naval Academy's SIGMa team for constructing dynamic digital terrain models from probe points; the GREYC laboratory (University of Caen) for simulating computer networks; the SMAC and GRIC teams of the IRIT (Toulouse); and the UMR CNRS 6553 Ecobiology (University of Rennes) for simulations in ethology.


4.4.3 The AReVi Platform

The Laboratoire d'Informatique Industrielle (Laboratory for Industrial Data Processing) has developed an Atelier de Realite Virtuelle (Virtual Reality Workshop), known as AReVi, a toolkit for creating distributed virtual reality applications. The first three versions of AReVi were built around the Open Inventor graphics library and did not include the multi-agent approach. Version 4 of AReVi was developed with oRis: the kernel of AReVi is none other than oRis, so all of the possibilities described previously are available, extended by C++ code offering functionalities specific to virtual reality [Duval 97, Reignier 98]. The platform offers graphics rendering completely independent of that provided by oRis. Graphics objects are loaded directly from files in VRML2 format11 (animations can be defined and levels of detail managed). Graphics elements such as transparent or animated textures, light sources, lens flares (sunlight on a lens) and particle systems (water streams) are available. AReVi also handles kinematic notions (linear and angular velocities and accelerations), which widens the possibilities for expressing agent behaviors in a three-dimensional environment. For sound, AReVi offers spatialized sound and sound synthesis, as well as voice recognition. The platform manages various peripherals, such as a data glove, a joystick, a steering wheel, localization sensors, a force-feedback arm and a stereoscopic head-mounted display, which widen the options for users to immerse themselves in multi-agent systems.

It is important to note that the multiple tools AReVi offers for the multi-sensory rendering of the virtual world are oRis objects like any others: instances that the application can modify and that can play an active role. It is therefore possible to adapt them to the subject at hand, and even to give them an evolved behavior that meets the application's specific needs. Thus, whatever the application domain, one can always devise a way to customize the rendering tools to make them easier to use in the chosen context. We can, for example, adapt a rendering window so that it automatically degrades the quality of its display when the image refresh rate becomes too low. We can also associate a geographical area with a scene and give it a behavior that integrates entities entering the area and stops displaying entities that leave it.
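Such an adaptive window might be sketched as follows (hypothetical thresholds and quality levels; this is an illustration, not AReVi's actual API):

```python
class AdaptiveWindow:
    """Rendering window that degrades its display quality when the
    measured frame rate falls below a threshold, and restores it when
    there is headroom (thresholds are hypothetical)."""
    LEVELS = ["high", "medium", "low"]

    def __init__(self, min_fps=25.0, max_fps=50.0):
        self.min_fps, self.max_fps = min_fps, max_fps
        self.level = 0                  # start at full quality

    def on_frame(self, fps):
        if fps < self.min_fps and self.level < len(self.LEVELS) - 1:
            self.level += 1             # degrade the display
        elif fps > self.max_fps and self.level > 0:
            self.level -= 1             # restore quality
        return self.LEVELS[self.level]
```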

Regarding the distribution of graphics entities across different machines, the network communication functionalities provided by oRis were used to establish communication between remote entities, together with a dead-reckoning mechanism [Rodin 00]. When an entity is created on one machine, copies with a degraded kinematic behavior are created on the other machines; their geometric and kinematic characteristics are updated when the divergence between the real situation and the copies becomes too large.
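The dead-reckoning idea can be sketched in Python (a one-dimensional analogue with hypothetical names and thresholds, not the [Rodin 00] implementation):

```python
class DeadReckonedCopy:
    """Remote copy of an entity: it extrapolates its position from the
    last received state; the owner sends an update only when the true
    position diverges from the extrapolation by more than `threshold`."""
    def __init__(self, pos, vel, threshold=1.0):
        self.pos, self.vel, self.threshold = pos, vel, threshold
        self.updates = 0

    def extrapolate(self, dt):
        self.pos += self.vel * dt       # degraded kinematic behavior

    def owner_step(self, true_pos, true_vel):
        if abs(true_pos - self.pos) > self.threshold:
            self.pos, self.vel = true_pos, true_vel   # resynchronize
            self.updates += 1

remote = DeadReckonedCopy(pos=0.0, vel=1.0)
t_pos, t_vel = 0.0, 1.0
for _ in range(100):                    # the true entity accelerates
    t_vel += 0.05
    t_pos += t_vel * 0.1
    remote.extrapolate(0.1)
    remote.owner_step(t_pos, t_vel)
```

Only a handful of network updates are needed instead of one per frame.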

To date, AReVi has been used for the interactive prototyping of a stamping cell [Chevaillier 00] and for the development of a training platform for the French Emergency Services [Querrec 01a] (Figure 4.7 [Querrec 01b]).

11. VRML: Virtual Reality Modeling Language (www.vrml.org)


Figure 4.7: An AReVi Application: Training Platform for French Emergency Services

4.5 Conclusion

To choose a language is to choose a way of thinking [Tisseau 98a]. To illustrate this point, one need only turn to the famous tale from the Thousand and One Nights, Ali Baba and the Forty Thieves (the Ali Baba metaphor, Figure 4.8). There are three possible scenarios: procedural programming, object-oriented programming and agent programming.

1. In procedural programming (Figure 4.8a), the cave's door is passive data (Data Sesame) manipulated by the agent AliBaba: it is AliBaba who intends to open the door, and he is the one who knows how to open it (procedure Open: I go to the door, I grab the doorknob, I turn the doorknob, I push the door).

2. In object-oriented programming (Figure 4.8b), the cave door is now an object (Object Sesame) to which the know-how of opening has been delegated (example: an elevator door): it is still AliBaba who intends to open the door, but now the door knows how to do it (the Open method of the Door class). To open the door, AliBaba must send it the right message12 (Open Sesame!: Sesame->Open()). The door then opens according to a master-slave scheme.

3. In agent programming (Figure 4.8c), the cave door is an agent whose goal is to open when it detects a passer-by (example: an airport door equipped with cameras): the door has both the intention and the know-how (Agent Sesame). Whether or not AliBaba intends to go through the door, the door can open if it

12. Ali Baba already knew object-oriented programming! This analogy between sending a message in object-oriented programming and Ali Baba's formula (Open Sesame!) is given in [Ferber 90].


a. Procedural Programming:

execute
{
  Object Sesame  = new Data;
  Agent  AliBaba = new Avatar;
}

AliBaba::execute
{
  Open(Sesame);
}

b. Object-Oriented Programming:

execute
{
  Object Sesame  = new Door;
  Agent  AliBaba = new Avatar;
}

AliBaba::execute
{
  Sesame->open();
}

c. Agent Programming:

execute
{
  Agent Sesame  = new Door;
  Agent AliBaba = new Avatar;
}

void Door::main(void)
{
  if(view(anObject)) open();
  else close();
}

Figure 4.8: Metaphor of Ali Baba and Programming Paradigms

detects him; it may even negotiate its opening. The user AliBaba is thus immersed in the universe of agents through an avatar, which can be detected by the door.

Hence, the choice of language matters when building an application. Agent programming, as we practice it, is by construction the best suited to autonomizing the entities that make up our virtual universes.

This is why we have chosen to make oRis a generic language for implementing multi-agent systems, which makes it possible to write programs based on interacting objects and agents, placed in a spatio-temporal environment and controlled by the user's on-line actions. oRis is also a participatory multi-agent simulation environment, which provides control over agent scheduling and interactive modeling through the dynamicity of the language. oRis is stable and operational13 and has already led to many applications, such as the AReVi virtual reality workshop.

13. oRis can be downloaded for free from Fabrice Harrouet's homepage (www.enib.fr/~harrouet/oris.html). Documentation, examples, course material and the dissertation [Harrouet 00] are also available on this page.


Chapter 5

Conclusion

5.1 Summary

Virtual objects, just like the space in which they appear, are actors and agents. Equipped with memory, they have information-processing functions and an autonomy regulated by their programs. A strange, intermediary artificial life continually moves through virtual worlds. Each entity, each object, each agent can be considered as an expert system that has its own behavior rules and applies or adapts them in response to changes in the environment and to modifications of the rules and metarules that govern the virtual world. [Queau 93b]

For a dozen years, our work has taken place in the field of virtual reality, a new discipline of the engineering sciences concerned with the specification, design and creation of realistic and participatory universes. We could not possibly cover every aspect of a synthesis discipline like virtual reality, so we set out to answer the questions what concepts?, what models? and what tools? for virtual reality. Today, we answer these questions with a principle of autonomy, a multi-agent approach and a participatory multi-agent simulation environment.

What Concepts?

Our epistemological reflection on virtual reality has made the concept of autonomy the core of our research problem. Our approach is based on a principle of autonomy, according to which the autonomization of the digital models that make up virtual universes is essential to the reality of these universes.

Autonomizing models means equipping them with sensory-motor means of communication and with mechanisms for coordinating perceptions and actions. This autonomy by conviction proved to be an autonomy by essence for models of organisms, an autonomy by necessity for models of mechanisms, and an autonomy by ignorance for models of complex systems, which are characterized by a great variety of components, structures and interactions.

The human user is represented within the virtual universe by an avatar, a model among models, through which he controls the coordination of perceptions and actions. The user is connected to his avatar through language and through suitable behavioral interfaces, which provide a triple mediation of the senses, action and language. Consequently, by a sort of digital empathy, he feels present in the virtual universe, whose multi-sensory rendering is that of realistic computer-generated images: 3D, audible, tactile, kinesthetic, proprioceptive, animated in real time, and shared across


computer networks.

Thus, according to the Pinocchio metaphor, the human user, himself more autonomous because he is partially freed from controlling his models, can participate fully in these virtual realities: he goes from being a simple spectator to an actor and even a creator of these evolving virtual worlds.

What Models?

Based on this principle of autonomy, we model virtual universes as multi-agent systems in which human users can participate in the simulation.

We have highlighted the self-organizing properties of these systems of autonomous entities through the significant and original example of blood coagulation. We have thus shown that the participatory multi-agent simulations of virtual reality give today's biologists true virtual laboratories in which they can conduct a new type of experiment, cost-effective and safe: the in virtuo experiment. In virtuo experimentation complements the traditional means of investigation: in vivo and in vitro experiments, and in silico calculations.

To go beyond purely reactive behaviors, we described emotional behaviors through fuzzy cognitive maps. We use these maps both to specify the behavior of an agent and to control its movement. The agent can also evaluate its own behavior by simulating its map itself: this simulation within the simulation is thus an outline of the perception of self that genuine autonomy requires. Implementing these fuzzy cognitive maps makes it possible to portray credible agent roles in interactive fictions.
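For reference, one conventional update rule for a fuzzy cognitive map is a sigmoid-squashed weighted sum of concept activations; here is a minimal Python sketch with a hypothetical two-concept map (the maps of [Maffre 01, Parenthoen 01] are richer than this):

```python
import math

def fcm_step(activations, weights):
    """One update of a fuzzy cognitive map: each concept's new
    activation is a sigmoid-squashed weighted sum of the others'."""
    n = len(activations)
    return [1.0 / (1.0 + math.exp(-sum(weights[j][i] * activations[j]
                                       for j in range(n))))
            for i in range(n)]

w = [[0.0, 0.8],      # hypothetical link: 'fear' excites 'flight'
     [-0.5, 0.0]]     # hypothetical link: 'flight' inhibits 'fear'
a = [0.9, 0.1]        # initial activations of (fear, flight)
for _ in range(20):   # iterate the map toward a stable regime
    a = fcm_step(a, w)
```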

What Tools?

To implement these models, we developed a language, oRis, which favors the autonomy by construction of the entities that make up our virtual universes.

oRis is a generic implementation language for multi-agent systems, which makes it possible to write programs based on interacting objects and agents, placed in a spatio-temporal environment and controlled by the user's on-line actions. oRis is also a participatory multi-agent simulation environment that provides control over agent scheduling and supports interactive modeling through the dynamicity of the language.

We paid careful attention to the associated simulator, to ensure that the process of activating autonomous entities introduces no bias into the simulation for which the execution platform alone would be responsible. In this context, robustness to execution errors proved to be a particularly sensitive point, especially since the simulator makes the entire language available on-line.

oRis is stable and operational: it has already led to many applications, among them our virtual reality workshop AReVi.


5.2 Perspectives

Microscope, telescope: these words evoke important scientific breakthroughs towards the infinitely small and towards the infinitely large. [...] Today, we are confronted with another infinite: the infinitely complex. But this time there is no instrument. Nothing but the mind, an intelligence and a logic ill-equipped before the immense complexity of life and society. [...] We therefore need a new tool. [...] This tool, I shall call the macroscope (macro, great; and skopein, to observe). The macroscope is not a tool like the others. It is a symbolic instrument, made of a collection of methods and techniques borrowed from a wide range of disciplines. Obviously, it is useless to look for it in laboratories or research centers. And yet it is used by many, in very different fields. For the macroscope can be considered the symbol of a new way of seeing, understanding and acting. [De Rosnay 75]

We do not claim to have completely answered the questions what concepts?, what models? and what tools? for virtual reality, which have fueled our research for the last ten years; we intend to pursue this research vigorously, in several directions.

What Tools?

The participatory multi-agent simulation of virtual universes made up of autonomous entities will allow us to study phenomena whose complexity stems from the diversity of the models involved. Virtual reality, which brings numerous models on-line, must therefore be given engineering tools for these models.

The oRis environment is only middleware for virtual reality: certainly reliable and robust, but its level of abstraction is still too low for end users. We must therefore add tools with the highest possible levels of abstraction, to ease the implementation of models, their debugging, their integration into existing universes and their sharing across computer networks. These tools will automatically generate the corresponding oRis code, to be interpreted on-line by the simulator.

What Models?

We have successfully tested the oRis platform on reactive and emotional behaviors. We now need to tackle intentional, social and adaptive behaviors.

We will approach the modeling of intentional behaviors through the notion of objective, which is neither a constraint in the sense of constraint programming nor a goal in the sense of logic programming. In both of those approaches, resolution proceeds under a synchrony hypothesis of negligible execution time: the context is assumed unchanged during the resolution. In reality, however, constraints or goals can change during resolution: time passes, the context changes, and this must be taken into account. The notion of objective thus accounts for the irreversibility of passing time.

Social behaviors will be studied through the notion of organization in multi-agent systems. An organization defines roles and assigns them objectives; the allocation of roles is then negotiated between the autonomous agents that decide to cooperate within the organization.

By participating in multi-agent simulations, the human user can, according to a principle of substitution, take control of an entity and identify with it. The controlled entity can then learn behaviors better adapted to its environment from


the operator. It is this type of learning, by example or by imitation, that we will introduce for modeling evolving behaviors.

What Concepts?

Virtual universes are open and heterogeneous; they are made up of atomic and composite entities that are mobile, distributed in space, and variable in number over time. The components can be structured into organizations, either imposed or emerging from the many interactions between components. The interactions between entities are themselves of different natures and operate on different scales of space and time.

Unfortunately, no formalism today can capture this complexity; only virtual reality makes it possible to experience it. We will need to strengthen the relationship between virtual reality and the theories of complexity in order to make virtual reality a tool for investigating complexity, like the macroscope described in the quotation at the beginning of this section.

The human user becomes part of these virtual worlds through an avatar. The structural identity between agent and avatar allows the user to take an agent's place at any time by taking control of its decision-making module and, conversely, to hand control back at any time to the agent whose place he took. An in virtuo autonomy test could evaluate the quality of the substitution: the test is passed if a user interacting with an entity cannot tell whether he is interacting with an agent or with another user, and if the agents themselves react as though they were interacting with another agent.

This principle of substitution complements our principle of autonomy; its consequences will have to be evaluated on an epistemological level as well as an ethical one.


Bibliography

[Abelson 85] Abelson H., Sussman G.J., Sussman J., Structure and interpretation of computer programs (1985)

[Agarwal 95] Agarwal P., The Cell Programming Language, Artificial Life 2(1):37-77, 1995

[Akeley 89] Akeley K., The Silicon Graphics 4D/240 GTX super-workstation, IEEE Computer Graphics and Applications 9(4):239-246, 1989

[Ameisen 99] Ameisen J.C., La sculpture du vivant : le suicide cellulaire ou la mort creatrice, Editions du Seuil, Paris, 1999

[Ansaldi 85] Ansaldi S., De Floriani L., Falcidieno B., Geometric modeling of solid objects by using a face adjacency graph representation, Proceedings Siggraph'85:131-139, 1985

[Arnaldi 89] Arnaldi B., Dumont G., Hegron G., Dynamics and unification of animation control, The Visual Computer 4(5):22-31, 1989

[Arnaldi 94] Arnaldi B., Modeles physiques pour l'animation, Document d'habilitation a diriger des recherches, Universite de Rennes 1 (France), 1994

[Austin 62] Austin J.L., How to do things with words, Oxford University Press, Oxford, 1962

[Avouris 92] Avouris N.M., Gasser L. (editors), Distributed Artificial Intelligence: theory and praxis, Kluwer, Dordrecht, 1992

[Axelrod 76] Axelrod R., Structure of Decision, Princeton University Press, Princeton, 1976

[Bachelard 38] Bachelard G., La formation de l'esprit scientifique (1938), Vrin, Paris, 1972

[Badler 93] Badler N.I., Phillips C.B., Webber B.L., Simulating humans: computer graphics animation and control, Oxford University Press, New York, 1993


[Ballet 97a] Ballet P., Rodin V., Tisseau J., A multiagent system to model a human secondary immune response, Proceedings IEEE SMC'97:357-362, 1997

[Ballet 97b] Ballet P., Rodin V., Tisseau J., A multiagent system to detect concentric strias, Proceedings SPIE'97 3164:659-666, 1997

[Ballet 98a] Ballet P., Pers J.O., Rodin V., Tisseau J., A multi-agent system to simulate an apoptosis model of B-CD5 cell, Proceedings IEEE SMC'98 2:1-5, 1998

[Ballet 98b] Ballet P., Rodin V., Tisseau J., Interets mutuels des systemes multi-agents et de l'immunologie, Proceedings CID'98:141-153, 1998

[Ballet 00a] Ballet P., Abgrall J.F., Rodin V., Tisseau J., Simulation of thrombin generation during plasmatic coagulation and primary hemostasis, Proceedings IEEE SMC'00 1:131-136, 2000

[Ballet 00b] Ballet P., Interets mutuels des systemes multi-agents et de l'immunologie. Application a l'immunologie, a l'hematologie et au traitement d'images, These de Doctorat, Universite de Bretagne Occidentale, Brest (France), 2000

[Bannon 91] Bannon L.J., From human factors to human actors : the role of psychology and human-computer interaction studies in system design, dans [Greenbaum 91]:25-44, 1991

[Baraquin 95] Baraquin N., Baudart A., Dugue J., Laffitte J., Ribes F., Wilfert J., Dictionnaire de philosophie, Armand Colin, Paris, 1995

[Barr 84] Barr A.H., Global and local deformations of solid primitives, Computer Graphics 18:21-30, 1984

[Barto 81] Barto A.G., Sutton R.S., Landmark learning : an illustration of associative search, Biological Cybernetics 42:1-8, 1981

[Bates 92] Bates J., Virtual reality, art, and entertainment, Presence 1(1):133-138, 1992

[Batter 71] Batter J.J., Brooks F.P., GROPE-1 : A computer display to the sense of feel, Proceedings IFIP'71:759-763, 1971

[Baumgart 75] Baumgart B., A polyhedral representation for computer vision, Proceedings AFIPS'75:589-596, 1975

[Beer 90] Beer R.D., Intelligence as adaptive behavior : an experiment in computational neuroethology, Academic Press, San Diego, 1990

[Benford 95] Benford S., Bowers J., Fahlen L.E., Greenhalgh C., Mariani J., Rodden T., Networked virtual reality and cooperative work, Presence 4(4):364-386, 1995


[Bentley 79] Bentley J.L., Ottmann T.A., Algorithms for reporting and counting geometric intersections, IEEE Transactions on Computers 28(9):643-647, 1979

[Berlekamp 82] Berlekamp E.R., Conway J.H., Guy R.K., Winning ways for your mathematical plays, Academic Press, New York, 1982

[Berthoz 97] Berthoz A., Le sens du mouvement, Editions Odile Jacob, Paris, 1997

[Bertin 78] Bertin M., Faroux J-P., Renault J., Optique geometrique, Bordas, Paris, 1978

[Bezier 77] Bezier P., Essai de definition numerique des courbes et des surfaces experimentales, These d'Etat, Universite Paris 6 (France), 1977

[Blinn 76] Blinn J.F., Newell M.E., Texture and reflection in computer generated images, Communications of the ACM 19(10):542-547, 1976

[Blinn 82] Blinn J.F., A generalization of algebraic surface drawing, ACM Transactions on Graphics 1(3):235-256, 1982

[Bliss 70] Bliss J.C., Katcher M.H., Rogers C.H., Shepard R.P., Optical-to-tactile image conversion for the blind, IEEE Transactions on Man-Machine Systems 11(1):58-65, 1970

[Boissier 99] Boissier O., Guessoum Z., Occello M. (editeurs), Plateformes de developpement de systemes multi-agents, Groupe ASA, Bulletin de l'AFIA n°39, 1999

[Bouknight 70] Bouknight W.J., A procedure for generation of three-dimensional half-toned computer graphics presentations, Communications of the ACM 13(9):527-536, 1970

[Boulic 90] Boulic R., Magnenat-Thalmann N., Thalmann D., A global human walking model with real-time kinematic personification, The Visual Computer 6(6):344-358, 1990

[Bresenham 65] Bresenham J., Algorithm for computer control of a digital plotter, IBM Systems Journal 4:25-30, 1965

[Brooks 86a] Brooks F.P., Walkthrough : a dynamic graphics system for simulating virtual buildings, Proceedings Interactive 3D Graphics'86:9-21, 1986

[Brooks 90] Brooks F.P., Ouh-young M., Batter J.J., Kilpatrick P.J., Project GROPE : haptic displays for scientific visualization, Computer Graphics 24(4):177-185, 1990

[Brooks 86] Brooks R.A., A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation 2(1):14-23, 1986


[Brooks 91] Brooks R.A., Intelligence without representation, Artificial Intelligence 47:139-159, 1991

[Bryson 96] Bryson S., Virtual reality in scientific visualization, Communications of the ACM 39(5):62-71, 1996

[Burdea 93] Burdea G., Coiffet P., La realite virtuelle, Hermes, Paris, 1993

[Burtnyk 76] Burtnyk N., Wein M., Interactive skeleton techniques for enhancing motion dynamics in key frame animation, Communications of the ACM 19(10):564-569, 1976

[Cadoz 84] Cadoz C., Luciani A., Florens J.L., Responsive input devices and sound synthesis by simulation of instrumental mechanisms : the Cordis system, Computer Music Journal 8(3):60-73, 1984

[Cadoz 93] Cadoz C., Luciani A., Florens J.L., CORDIS-ANIMA : a modeling and simulation system for sound and image synthesis - the general formalism, Computer Music Journal 17(1):19-29, 1993

[Cadoz 94a] Cadoz C., Les realites virtuelles, Collection DOMINOS, Flammarion, Paris, 1994

[Cadoz 94b] Cadoz C., Le geste, canal de communication homme/machine : la communication instrumentale, Technique et Science Informatiques 13(1):31-61, 1994

[Calvert 82] Calvert T.W., Chapman J., Patla A.E., Aspects of the kinematic simulation of human movement, IEEE Computer Graphics and Applications 2:41-50, 1982

[Capin 97] Capin T.K., Pandzic I.S., Noser H., Magnenat-Thalmann N., Thalmann D., Virtual human representation and communication in VLNET networked virtual environments, IEEE Computer Graphics and Applications 17(2):42-53, 1997

[Capin 99] Capin T.K., Pandzic I.S., Magnenat-Thalmann N., Thalmann D., Avatars in networked virtual environments, John Wiley, New York, 1999

[Carlsson 93] Carlsson C., Hagsand O., DIVE – A platform for multi-user virtual environments, Computers and Graphics 17(6):663-669, 1993

[Catmull 74] Catmull E., A subdivision algorithm for computer display of curved surfaces, PhD Thesis, University of Utah (USA), 1974

[Celada 92] Celada F., Seiden P.E., A computer model of cellular interactions in the immune system, Immunology Today 13(2):56-62, 1992


[Cerf 74] Cerf V., Kahn R., A protocol for packet network interconnection, IEEE Transactions on Communications 22:637-648, 1974

[Chadwick 89] Chadwick J.E., Haumann D.R., Parent R.E., Layered construction for deformable animated characters, Computer Graphics 23(3):243-252, 1989

[Chevaillier 99a] Chevaillier P., Harrouet F., De Loor P., Application des reseaux de Petri a la modelisation des systemes multi-agents de controle, APII-JESA 33(4):413-437, 1999

[Chevaillier 99b] Chevaillier P., Harrouet F., Reignier P., Tisseau J., oRis : un environnement pour la simulation multi-agents des systemes manufacturiers de production, Proceedings MOSIN'99:225-230, 1999

[Chevaillier 00] Chevaillier P., Harrouet F., Reignier P., Tisseau J., Virtual reality and multi-agent systems for manufacturing system interactive prototyping, International Journal of Design and Innovation Research 2(1):90-101, 2000

[Clark 76] Clark J.H., Hierarchical geometric models for visible surface algorithm, Communications of the ACM 19(10):547-554, 1976

[Cliff 93] Cliff D., Harvey I., Husbands P., Explorations in evolutionary robotics, Adaptive Behavior 2(1):73-110, 1993

[Cohen 85] Cohen M.F., Greenberg D.P., The hemi-cube : a radiosity solution for complex environments, Computer Graphics 19(3):26-35, 1985

[Cohen-Tannoudji 95] Cohen-Tannoudji G. (edite par), Virtualite et realite dans les sciences, Editions Frontieres, Paris, 1995

[Cook 81] Cook R.L., Torrance K.E., A reflectance model for computer graphics, Computer Graphics 15(3):307-316, 1981

[Cosman 81] Cosman M.A., Schumacker R.A., System strategies to optimize CIG image content, Proceedings Image II'81:463-480, 1981

[De Giacomo 00] De Giacomo G., Lesperance Y., Levesque H.J., ConGolog : a concurrent programming language based on situation calculus, Artificial Intelligence 121(1-2):109-169, 2000

[Del Bimbo 95] Del Bimbo A., Vicario E., Specification by example of virtual agents behavior, IEEE Transactions on Visualization and Computer Graphics 1(4):350-360, 1995

[De Loor 00] De Loor P., Chevaillier P., Generation of agent interactions from temporal logic specifications, Proceedings IMACS'00, 2000


[Demazeau 95] Demazeau Y., From interactions to collective behaviour in agent-based systems, Proceedings European Conference on Cognitive Science'95:117-132, 1995

[De Rosnay 75] De Rosnay J., Le macroscope : vers une vision globale, Editions du Seuil, Paris, 1975

[D'Espagnat 94] D'Espagnat B., Le reel voile, Fayard, Paris, 1994

[Dickerson 94] Dickerson J.A., Kosko B., Virtual worlds as fuzzy cognitive maps, Presence 3(2):173-189, 1994

[Donikian 94] Donikian S., Les modeles comportementaux pour la generation du mouvement d'objets dans une scene, Revue internationale de CFAO et d'infographie 9(6):847-871, 1994

[Drogoul 93] Drogoul A., De la simulation multi-agents a la resolution collective de problemes. Une etude de l'emergence de structures d'organisation dans les systemes multi-agents, These de Doctorat, Universite Paris 6 (France), 1993

[Duval 97] Duval T., Morvan S., Reignier P., Harrouet F., Tisseau J., AReVi : une boite a outils 3D pour des applications cooperatives, Revue Calculateurs Paralleles 9(2):239-250, 1997

[Ellis 91] Ellis S.R., Nature and origin of virtual environments : a bibliographic essay, Computing Systems in Engineering 2(4):321-347, 1991

[Endy 01] Endy D., Brent R., Modelling cellular behaviour, Nature 409(18):391-395, 2001

[Favier 01] Favier P.A., De Loor P., Tisseau J., Programming agent with purposes : application to autonomous shooting in virtual environments, Lecture Notes in Computer Science 2197:40-43, 2001

[Ferber 90] Ferber J., Conception et programmation par objets, Hermes, Paris, 1990

[Ferber 95] Ferber J., Les systemes multi-agents : vers une intelligence collective, InterEditions, Paris, 1995

[Ferber 97] Ferber J., Les systemes multi-agents : un apercu general, Technique et Science Informatiques 16(8):979-1012, 1997

[Fikes 71] Fikes R.E., Nilsson N., STRIPS : a new approach to the application of theorem proving to problem solving, Artificial Intelligence 5(2):189-208, 1971

[Fisher 86] Fisher S.S., MacGreevy M., Humphries J., Robinett W., Virtual environment display system, Proceedings Interactive 3D Graphics'86:77-87, 1986


[Fisher 94] Fisher M., A survey of Concurrent Metatem : the language and its applications, Lecture Notes in Artificial Intelligence 827:480-505, 1994

[Fuchs 80] Fuchs H., Kedem Z., Naylor B., On visible surface generation by a priori tree structures, Computer Graphics 14(3):124-133, 1980

[Fuchs 96] Fuchs P., Les interfaces de la realite virtuelle, AJIIMD, Presses de l'Ecole des Mines de Paris, 1996

[Fujimoto 98] Fujimoto R.M., Time management in the High Level Architecture, Simulation 71(6):388-400, 1998

[Gasser 92] Gasser L., Briot J.P., Object-based concurrent programming and distributed artificial intelligence, dans [Avouris 92]:81-107, 1992

[Gaussier 98] Gaussier P., Moga S., Quoy M., Banquet J.P., From perception-action loops to imitation process : a bottom-up approach of learning by imitation, Applied Artificial Intelligence 12:701-727, 1998

[Georgeff 87] Georgeff M.P., Lansky A.L., Reactive reasoning and planning, Proceedings AAAI'87:677-682, 1987

[Gerschel 95] Gerschel A., Liaisons intermoleculaires, InterEditions/CNRS Editions, Paris, 1995

[Gibson 84] Gibson W., Neuromancer, Ace Books, New York, 1984

[Ginsberg 83] Ginsberg C.M., Maxwell D., Graphical marionette, Proceedings ACM SIGGRAPH/SIGART Workshop on Motion'83:172-179, 1983

[Girard 85] Girard M., Maciejewski A.A., Computational modeling for the computer animation of legged figures, Computer Graphics 19(3):39-51, 1985

[Gobel 93] Gobel M., Neugebauer J., The virtual reality demonstration centre, Computers and Graphics 17(6):627-631, 1993

[Gouraud 71] Gouraud H., Continuous shading of curved surfaces, IEEE Transactions on Computers 20(6):623-629, 1971

[Gourret 89] Gourret J.P., Magnenat-Thalmann N., Thalmann D., The use of finite element theory for simulating object and human skin deformations and contacts, Proceedings Eurographics'89:477-487, 1989

[Grand 98] Grand S., Cliff D., Creatures : entertainment software agents with artificial life, Autonomous Agents and Multi-Agent Systems 1(1):39-57, 1998

[Granger 86] Granger G.G., Pour une epistemologie du travail scientifique, dans [Hamburger 86]:111-129, 1986


[Granger 95] Granger G.G., Le probable, le possible et le virtuel, Editions Odile Jacob, Paris, 1995

[Greenbaum 91] Greenbaum J., Kyng M. (editeurs), Design at work : cooperative design of computer systems, Lawrence Erlbaum Associates, Hillsdale, 1991

[Guessoum 99] Guessoum Z., Briot J.P., From active objects to autonomous agents, IEEE Concurrency 7(3):68-76, 1999

[Guillot 00] Guillot A., Meyer J.A., From SAB94 to SAB2000 : What's New, Animat?, Proceedings From Animals to Animats'00 6:1-10, 2000

[Hamburger 86] Hamburger J. (sous la direction de), La philosophie des sciences aujourd'hui, Academie des Sciences, Gauthier-Villars, Paris, 1986

[Hamburger 95] Hamburger J., Concepts de realite, dans "Encyclopædia Universalis" 19:594-597, Encyclopædia Universalis, Paris, 1995

[Harrouet 00] Harrouet F., oRis : s'immerger par le langage pour le prototypage d'univers virtuels a base d'entites autonomes, These de Doctorat, Universite de Bretagne Occidentale, Brest (France), 2000

[Hayes-Roth 96] Hayes-Roth B., van Gent R., Story-making with improvisational puppets and actors, Technical Report KSL-96-05, Knowledge Systems Laboratory, Stanford University, 1996

[Hemker 65] Hemker H.C., Hemker P.W., Loeliger A., Kinetic aspects of the interaction of blood clotting enzymes, Thrombosis et diathesis haemorrhagica 13:155-175, 1965

[Herrmann 98] Herrmann H.J., Luding S., Review article : Modeling granular media on the computer, Continuum Mechanics and Thermodynamics 10(4):189-231, 1998

[Hertel 83] Hertel S., Mehlhorn K., Fast triangulation of simple polygons, Lecture Notes in Computer Science 158:207-218, 1983

[Hewitt 73] Hewitt C., Bishop P., Steiger R., A universal modular actor formalism for artificial intelligence, Proceedings IJCAI'73:235-245, 1973

[Hewitt 77] Hewitt C., Viewing control structures as patterns of message passing, Artificial Intelligence 8(3):323-364, 1977

[Hofstadter 79] Hofstadter D., Godel, Escher, Bach : an eternal golden braid (1979), traduction francaise : InterEditions, Paris, 1985

[Holloway 92] Holloway R., Fuchs H., Robinett W., Virtual-worlds research at the University of North Carolina at Chapel Hill, Computer Graphics'92 15:1-10, 1992


[Hopkins 96] Hopkins J.C., Leipold R.J., On the dangers of adjusting the parameter values of mechanism-based mathematical models, Journal of Theoretical Biology 183:417-427, 1996

[IEEE 93] IEEE 1278, Standard for Information Technology : protocols for Distributed Interactive Simulation applications, IEEE Computer Society Press, 1993

[Iglesias 98] Iglesias C.A., Garijo M., Gonzalez J.C., A survey of agent-oriented methodologies, Proceedings ATAL'98:317-330, 1998

[Isaacs 87] Isaacs P.M., Cohen M.F., Controlling dynamic simulation with kinematic constraints, behavior functions and inverse dynamics, Computer Graphics 21(4):215-224, 1987

[Jamin 96] Jamin C., Lydyard P.M., Le Corre R., Youinou P., Anti-CD5 extends the proliferative response of human CD5+ B cells activated with anti-IgM and IL2, European Journal of Immunology 26:57-62, 1996

[Jones 71] Jones C.B., A new approach to the hidden line problem, Computer Journal 14(3):232-237, 1971

[Jones 94] Jones K.C., Mann K.G., A model for the Tissue Factor Pathway to Thrombin : 2. A Mathematical Simulation, Journal of Biological Chemistry 269(37):23367-23373, 1994

[Kallmann 99] Kallmann M., Thalmann D., A behavioral interface to simulate agent-object interactions in real time, Proceedings Computer Animation'99:138-146, 1999

[Klassen 87] Klassen R., Modelling the effect of the atmosphere on light, ACM Transactions on Graphics 6(3):215-237, 1987

[Klesen 00] Klesen M., Szatkowski J., Lehmann N., The black sheep : interactive improvisation in a 3D virtual world, Proceedings I3'00:77-80, 2000

[Kodjabachian 98] Kodjabachian J., Meyer J.A., Evolution and development of neural controllers for locomotion, gradient-following, and obstacle-avoidance in artificial insects, IEEE Transactions on Neural Networks 9:796-812, 1998

[Kochanek 84] Kochanek D.H., Bartels R.H., Interpolating splines with local tension, continuity, and bias control, Computer Graphics 18(3):124-132, 1984

[Kolb 95] Kolb C., Mitchell D., Hanrahan P., A realistic camera model for computer graphics, Computer Graphics 29(3):317-324, 1995

[Kosko 86] Kosko B., Fuzzy Cognitive Maps, International Journal of Man-Machine Studies 24:65-75, 1986


[Kosko 92] Kosko B., Neural networks and fuzzy systems : a dynamical systems approach to machine intelligence, Prentice-Hall, Englewood Cliffs, 1992

[Kripke 63] Kripke S., Semantic analysis of modal logic, I : normal propositional calculi, Zeitschrift fur mathematische Logik und Grundlagen der Mathematik 9:67-96, 1963

[Krueger 77] Krueger M.W., Responsive environments, Proceedings NCC'77:375-385, 1977

[Krueger 83] Krueger M.W., Artificial Reality, Addison-Wesley, New York, 1983

[Langton 86] Langton C.G., Studying artificial life with cellular automata, Physica D 22:120-149, 1986

[Lavery 98] Lavery R., Simulation moleculaire, macromolecules biologiques et systemes complexes, Actes Entretiens de la Physique 3:1-18, 1998

[Le Moigne 77] Le Moigne J-L., La theorie du systeme general : theorie de la modelisation, Presses Universitaires de France, Paris, 1977

[Lestienne 95] Lestienne F., Equilibration, dans Encyclopædia Universalis 8:597-601, Encyclopædia Universalis, Paris, 1995

[Levy 95] Levy P., Qu'est-ce que le virtuel ?, Editions La Decouverte, Paris, 1995

[Levy-Leblond 96] Levy-Leblond J-M., Aux contraires : l'exercice de la pensee et la pratique de la science, Gallimard, Paris, 1996

[Lienhardt 89] Lienhardt P., Subdivisions of surfaces and generalized maps, Proceedings Eurographics'89:439-452, 1989

[Luciani 85] Luciani A., Un outil informatique de creation d'images animees : modeles d'objets, langage, controle gestuel en temps reel. Le systeme ANIMA, These de Doctorat, INPG Grenoble (France), 1985

[Luciani 00] Luciani A., From granular avalanches to fluid turbulences through oozing pastes : a mesoscopic physically-based particle model, Proceedings Graphicon'00 10:282-289, 2000

[Maes 95] Maes P., Artificial life meets entertainment : interacting with lifelike autonomous agents, Communications of the ACM 38(11):108-114, 1995

[Maffre 01] Maffre E., Tisseau J., Parenthoen M., Virtual agents' self-perception in storytelling, Lecture Notes in Computer Science 2197:155-158, 2001

[Magnenat-Thalmann 88] Magnenat-Thalmann N., Laperriere R., Thalmann D., Joint-Dependent Local Deformations for Hand Animation and Object Grasping, Proceedings Graphics Interface'88:26-33, 1988


[Magnenat-Thalmann 91] Magnenat-Thalmann N., Thalmann D., Complex Models for Animating Synthetic Actors, IEEE Computer Graphics and Applications 11(5):32-44, 1991

[Mendes 93] Mendes P., GEPASI : a software package for modelling the dynamics, steady states and control of biochemical and other systems, Computer Applications in the Biosciences 9(5):563-571, 1993

[Meyer 91] Meyer J.A., Guillot A., Simulation of adaptive behavior in animats : review and prospect, Proceedings From Animals to Animats'91 1:2-14, 1991

[Meyer 94] Meyer J.A., Guillot A., From SAB90 to SAB94 : four years of animat research, Proceedings From Animals to Animats'94 3:2-11, 1994

[Miller 95] Miller D.C., Thorpe J.A., SIMNET : the advent of simulator networking, Proceedings of the IEEE 83(8):1114-1123, 1995

[Minsky 00] Minsky M., The emotion machine : from pain to suffering, Proceedings IUI'00:187-193, 2000

[Moore 88] Moore M., Wilhelms J., Collision detection and response for computer animation, Computer Graphics 22(4):289-298, 1988

[Morin 77] Morin E., La methode, Tome 1 : la nature de la nature, Editions du Seuil, Paris, 1977

[Morin 86] Morin E., La methode, Tome 3 : la connaissance de la connaissance, Editions du Seuil, Paris, 1986

[Morton-Firth 98] Morton-Firth C.J., Bray D., Predicting temporal fluctuations in an intracellular signalling pathway, Journal of Theoretical Biology 192:117-128, 1998

[Mounts 97] Mounts W.M., Liebman M.N., Qualitative modeling of normal blood coagulation and its pathological states using stochastic activity networks, International Journal of Biological Macromolecules 20:265-281, 1997

[Nedelec 00] Nedelec A., Reignier P., Rodin V., Collaborative prototyping in distributed virtual reality using an agent communication language, Proceedings IEEE SMC'00 2:1007-1012, 2000

[Noma 00] Noma T., Zhao L., Badler N., Design of a Virtual Human Presenter, Journal of Computer Graphics and Applications 20(4):79-85, 2000

[Noel 98] Noel D., Virtuel possible, actuel probable (Combien sont les mo(n)des ?), Proceedings GTRV'98:114-120, 1998


[Noser 95] Noser H., Thalmann D., Synthetic vision and audition for digital actors, Proceedings Eurographics'95:325-336, 1995

[Nwana 96] Nwana H.S., Software agents : an overview, Knowledge Engineering Review 11(3):205-244, 1996

[Nwana 99] Nwana H.S., Ndumu D.T., Lee L.C., Collis J.C., ZEUS : a toolkit for building distributed multiagent systems, Applied Artificial Intelligence 13(1-2):129-185, 1999

[Omnes 95] Omnes R., L'objet quantique et la realite, dans [Cohen-Tannoudji 95]:139-168, 1995

[Parenthoen 01] Parenthoen M., Tisseau J., Reignier P., Dory F., Agent's perception and Charactors in virtual worlds : put Fuzzy Cognitive Maps to work, Proceedings VRIC'01:11-18, 2001

[Parunak 99] Parunak H.V.D., Industrial and practical applications of DAI, dans [Weiss 99]:377-421, 1999

[Perlin 95] Perlin K., Goldberg A., Improv : a system for scripting interactive actors in virtual worlds, Computer Graphics 29(3):1-11, 1995

[Perrin 01] Perrin J. (sous la direction de), Conception entre science et art : regards multiples sur la conception, Presses Polytechniques et Universitaires Romandes, Lausanne, 2001

[Phong 75] Phong B.T., Illumination for computer generated pictures, Communications of the ACM 18(8):311-317, 1975

[Pitteway 80] Pitteway M., Watkinson D., Bresenham's algorithm with grey scale, Communications of the ACM 23(11):625-626, 1980

[Platt 81] Platt S., Badler N., Animating facial expressions, Computer Graphics 15(3):245-252, 1981

[Pope 87] Pope A.R., The SIMNET Network and Protocols, Report 6369, BBN Systems and Technologies, 1987

[Queau 93a] Queau Ph., Televirtuality : the merging of telecommunications and virtual reality, Computers and Graphics 17(6):691-693, 1993

[Queau 93b] Queau Ph., Le virtuel, vertus et vertiges, Collection Milieux, Champ Vallon, Seyssel, 1993

[Queau 95] Queau Ph., Le virtuel : un etat du reel, dans [Cohen-Tannoudji 95]:61-93, 1995


[Querrec 01a] Querrec R., Reignier P., Chevaillier P., Humans and autonomous agents interactions in a virtual environment for fire fighting training, Proceedings VRIC'01:57-64, 2001

[Querrec 01b] Querrec R., Chevaillier P., Virtual storytelling for training : an application to fire fighting in industrial environment, Lecture Notes in Computer Science 2197:201-204, 2001

[Raab 79] Raab F.H., Blood E.B., Steiner T.O., Jones R.J., Magnetic position and orientation tracking system, IEEE Transactions on Aerospace and Electronic Systems 15(5):709-718, 1979

[Reeves 83] Reeves W.T., Particle systems : a technique for modelling a class of fuzzy objects, Computer Graphics 17(3):359-376, 1983

[Renault 90] Renault O., Magnenat-Thalmann N., Thalmann D., A vision-based approach to behavioural animation, Journal of Visualization and Computer Animation 1(1):18-21, 1990

[Requicha 80] Requicha A.A.G., Representations for rigid solids : theory, methods and systems, ACM Computing Surveys 12(4):437-464, 1980

[Reignier 98] Reignier P., Harrouet F., Morvan S., Tisseau J., Duval T., AReVi : a virtual reality multiagent platform, Lecture Notes in Artificial Intelligence 1434:229-240, 1998

[Reynolds 87] Reynolds C.W., Flocks, Herds, and Schools : A Distributed Behavioral Model, Computer Graphics 21(4):25-34, 1987

[Rickel 99] Rickel J., Johnson W.L., Animated agents for procedural training in virtual reality : perception, cognition, and motor control, Applied Artificial Intelligence 13:343-382, 1999

[Rodin 99] Rodin V., Nedelec A., oRis : an agent communication language for distributed virtual environment, Proceedings IEEE ROMAN'99:41-46, 1999

[Rodin 00] Rodin V., Nedelec A., A multiagent architecture for distributed virtual environments, Proceedings IEEE SMC'00 2:955-960, 2000

[Rosenblum 91] Rosenblum R.E., Carlson W.E., Tripp E., Simulating the structure and dynamics of human hair : modelling, rendering and animation, Journal of Visualization and Computer Animation 2(4):141-148, 1991

[Rumbaugh 99] Rumbaugh J., Jacobson I., Booch G., The Unified Modeling Language Reference Manual, Addison-Wesley, New York, 1999

[Samet 84] Samet H., The quadtree and related hierarchical data structures, ACM Computing Surveys 6(2):187-260, 1984


[Schaff 97] Schaff J., Fink C., Slepchenko B., Carson J., Loew L., A general computational framework for modeling cellular structure and function, Biophysical Journal 73:1135-1146, 1997

[Schuler 93] Schuler D., Namioka A. (editeurs), Participatory Design : Principles and Practices, Lawrence Erlbaum Associates, Hillsdale, 1993

[Schultz 96] Schultz C., Grefenstette J.J., Adams W.L., RoboShepherd : learning a complex behavior, Proceedings RoboLearn'96:105-113, 1996

[Schumacker 69] Schumacker R., Brand B., Gilliland M., Sharp W., Study for applying computer-generated images to visual simulation, Technical Report AFHRL-TR-69-14, U.S. Air Force Human Resources Lab., 1969

[Sederberg 86] Sederberg T.W., Parry S.R., Free-form deformation of solid geometric models, Computer Graphics 20:151-160, 1986

[Sheng 92] Sheng X., Hirsch B.E., Triangulation of trimmed surfaces in parametric space, Computer Aided Design 24(8):437-444, 1992

[Sheridan 87] Sheridan T.B., Teleoperation, telepresence, and telerobotics : research needs, Proceedings Human Factors in Automated and Robotic Space Systems'87:279-291, 1987

[Shoham 93] Shoham Y., Agent-oriented programming, Artificial Intelligence 60(1):51-92, 1993

[Sillion 89] Sillion F., Puech C., A general two-pass method integrating specular and diffuse reflection, Computer Graphics 23(3):335-344, 1989

[Sims 94] Sims K., Evolving 3D morphology and behavior by competition, Artificial Life 4:28-39, 1994

[Singh 94] Singh G., Serra L., Ping W., Hern N., BrickNet : a software toolkit for network-based virtual worlds, Presence 3(1):19-34, 1994

[Smith 80] Smith R.G., The Contract Net protocol : high level communication and control in a distributed problem solver, IEEE Transactions on Computers 29(12):1104-1113, 1980

[Smith 89] Smith T.J., Smith K.U., The human factors of workstation telepresence, Proceedings Space Operations Automation and Robotics'89:235-250, 1989

[Smith 97] Smith D.J., Forrest S., Ackley D.H., Perelson A.S., Using lazy evaluation to simulate realistic-size repertoires in models of the immune system, Bulletin of Mathematical Biology 60:647-658, 1997


[Sorensen 99] Sorensen E.N., Burgreen G.W., Wagner W.R., Antaki J.F., Computational simulation of platelet deposition and activation : I. Model development and properties, Annals of Biomedical Engineering 27(4):436-448, 1999

[Sowa 91] Sowa J.F., Principles of semantic networks : explorations in the representation of knowledge, Morgan Kaufmann Publisher Inc., San Mateo, 1991

[Steketee 85] Steketee S.N., Badler N.I., Parametric keyframe interpolation incorporating kinetic adjustment and phrasing, Computer Graphics 19(3):255-262, 1985

[Stewart 89] Stewart J., Varela F.J., Exploring the connectivity of the immune network, Immunology Review 110:37-61, 1989

[Sutherland 63] Sutherland I.E., Sketchpad, a man-machine graphical communication system, Proceedings AFIPS'63:329-346, 1963

[Sutherland 68] Sutherland I.E., A head-mounted three-dimensional display, Proceedings AFIPS'68 33(1):757-764, 1968

[Sutherland 74] Sutherland I.E., Sproull R., Schumaker R., A characterization of ten hidden-surface algorithms, Computing Surveys 6(1):1-55, 1974

[Tachi 89] Tachi S., Arai H., Maeda T., Robotic tele-existence, Proceedings NASA Space Telerobotics'89 3:171-180, 1989

[Terzopoulos 87] Terzopoulos D., Platt J., Barr A., Fleischer K., Elastically deformable models, Computer Graphics 21(4):205-214, 1987

[Terzopoulos 88] Terzopoulos D., Fleischer K., Modeling inelastic deformation : viscoelasticity, plasticity, fracture, Computer Graphics 22(4):269-278, 1988

[Terzopoulos 94] Terzopoulos D., Tu X., Grzeszczuk R., Artificial fishes : autonomous locomotion, perception, behavior, and learning in a simulated physical world, Artificial Life 1(4):327-351, 1994

[Thalmann 96] Thalmann D., A new generation of synthetic actors : the interactive perceptive actors, Proceedings Pacific Graphics'96:200-219, 1996

[Thalmann 99] Thalmann D., Noser H., Towards autonomous, perceptive and intelligent virtual actors, Lecture Notes in Artificial Intelligence 1600:457-472, 1999

[Thomas 86] Thomas S.N., Dispersive refraction in ray tracing, The Visual Computer 2(1):3-8, 1986

[Tisseau 98a] Tisseau J., Chevaillier P., Harrouet F., Nedelec A., Des procedures aux agents : application en oRis, Rapport interne du LI2, ENIB, 1998


[Tisseau 98b] Tisseau J., Reignier P., Harrouet F., Exploitation de modeles et RealiteVirtuelle, Actes des 6emes journees du groupe de travail Animation etSimulation 6:13-22, 1998

[Tolman 48] Tolman E.C., Cognitive maps in rats and men, Psychological Review42(55):189-208, 1948

[Tomita 99] Tomita M., Hashimoto K., Takahashi K., Shimizu T.S., Matsuzaki Y.,Miyoshi F., Saito K., Tanida S., Yugi K., Venter J.C., Hutchison C.A.,E-CELL: software environment for whole-cell simulation, Bioinformatics15:72-84, 1999

[Turing 50] Turing A., Computing machinery and intelligence, Mind 59:433-460,1950

[Vacherand-Revel 01] Vacherand-Revel J., Tarpin-Bernard F., David B., Des modeles del’interaction a la conception participative des logiciels interactifs, dans[Perrin 01]:239-256, 2001

[Vaughan 00] Vaughan R., Sumpter N., Henderson J., Frost A., Cameron S., Experiments in automatic flock control, Robotics and Autonomous Systems 31:109-117, 2000

[Vertut 85] Vertut J., Coiffet P., Les robots, Tome 3 : téléopération, Hermès, Paris, 1985

[Von Neumann 66] Von Neumann J., Theory of self-reproducing automata, University of Illinois Press, Urbana, 1966

[Weil 86] Weil J., The synthesis of cloth objects, Computer Graphics 20(4):49-54, 1986

[Weiss 99] Weiss G. (editor), Multiagent systems: a modern approach to distributed artificial intelligence, MIT Press, Cambridge, 1999

[Wenzel 88] Wenzel E.M., Wightman F.L., Foster S.H., Development of a three-dimensional auditory display system, Proceedings CHI'88:557-564, 1988

[Whitted 80] Whitted T., An improved illumination model for shaded display, Communications of the ACM 23(6):343-349, 1980

[Williams 78] Williams L., Casting curved shadows on curved surfaces, Computer Graphics 12(3):270-274, 1978

[Wilson 85] Wilson S.W., Knowledge growth in an artificial animal, Proceedings Genetic Algorithms and their Applications'85:16-23, 1985

[Wolberg 90] Wolberg G., Digital image warping, IEEE Computer Society Press, 1990

[Wolfram 83] Wolfram S., Statistical mechanics of cellular automata, Reviews of Modern Physics 55(3):601-644, 1983

[Wooldridge 95] Wooldridge M., Jennings N.R., Intelligent agents: theory and practice, The Knowledge Engineering Review 10(2):115-152, 1995

[Wooldridge 01] Wooldridge M., Ciancarini P., Agent-oriented software engineering: the state of the art, to appear in Handbook of Software Engineering and Knowledge Engineering, ISBN 981-02-4514-9, World Scientific Publishing Co., River Edge, 2001

[Zeltzer 82] Zeltzer D., Motor control techniques for figure animation, IEEE Computer Graphics and Applications 2(9):53-59, 1982

[Zelter 92] Zeltzer D., Autonomy, interaction and presence, Presence 1(1):127-132, 1992

[Zimmerman 87] Zimmerman T., Lanier J., Blanchard C., Bryson S., A hand gesture interface device, Proceedings CHI'87:189-192, 1987

[Zinkiewicz 71] Zienkiewicz O.C., The finite element method in engineering science, McGraw-Hill, New York, 1971