AAAI-94 Presidential Address

Collaborative Systems
Barbara J. Grosz

This talk was presented at the American Association for Artificial Intelligence's National Conference on Artificial Intelligence, 3 August 1994, in Seattle, Washington.

Copyright © 1996, American Association for Artificial Intelligence. All rights reserved. 0738-4602-1996 / $2.00


The construction of computer systems that are intelligent, collaborative problem-solving partners is an important goal for both the science of AI and its application. From the scientific perspective, the development of theories and mechanisms to enable building collaborative systems presents exciting research challenges across AI subfields. From the applications perspective, the capability to collaborate with users and other systems is essential if large-scale information systems of the future are to assist users in finding the information they need and solving the problems they have. In this address, it is argued that collaboration must be designed into systems from the start; it cannot be patched on. Key features of collaborative activity are described, the scientific base provided by recent AI research is discussed, and several of the research challenges posed by collaboration are presented. It is further argued that research on, and the development of, collaborative systems should itself be a collaborative endeavor—within AI, across subfields of computer science, and with researchers in other fields.

AI has always pushed forward on the frontiers of computer science. Our efforts to understand intelligent behavior and the ways in which it could be embodied in computer systems have led both to a richer scientific understanding of various aspects of intelligence and to the development of smarter computer systems. In his keynote address at AAAI-94, Raj Reddy described several of those advances as well as challenges for the future. The Proceedings of the AAAI Innovative Applications of Artificial Intelligence conferences contain descriptions of many commercial systems that employ AI techniques to provide greater power or flexibility.

For this Presidential address, I have decided to focus on one such frontier area: the understanding of collaborative systems and the development of the foundations—the representations, theories, computational models and processes—needed to construct computer systems that are intelligent collaborative partners in solving their users' problems. In doing so, I follow the precedent set by Allen Newell in his 1980 Presidential Address (Newell 1981, p. 1) of focusing on the state of the science rather than the state of the society. I also follow a more recent precedent, that set by Daniel Bobrow in his 1990 Presidential address (Bobrow 1991, p. 65), namely, examining the issues to be faced in moving beyond what he called the "isolation assumptions" of much of AI to the design and analysis of systems of multiple agents interacting with each other and the world. I concur with his claim that a significant challenge for AI in the 1990s is "to build AI systems that can interact productively with each other, with humans, and with the physical world" (p. 65). I will argue further, however, that there is much to be gained by looking in particular at one kind of group behavior, collaboration.

My reasons for focusing on collaborative systems are two-fold. First, and most important in this setting, the development of the underlying theories and formalizations that are needed to build collaborative systems as well as the construction of such systems raises interesting questions and presents intellectual challenges across AI subfields. Second, the results of these inquiries promise to have significant impact not only on computer science, but also on the general computer-using public.

Why Collaborative Systems?

There is no question that there is a need to make computer systems better at helping us do what we use them to do. It is common parlance that the world has become a global village. The daily news makes clear how close events in any one part of the world are to people in other parts. Use of the Internet has exploded, bringing people closer in another way. In July 1994 we had a fine demonstration of the very best use of this technology: astronomers around the world used the net heavily, collaborating to an unprecedented extent, sharing information about the comet Shoemaker-Levy 9's collision with Jupiter. As they have used the information gathered in those observations to understand better the planet's atmosphere, its inner structure, comets, and the like, they continue to employ the communications capabilities of the net. But the net could surely provide more.

There is frequent mention in the news of the information superhighway and electronic libraries, the development of technology that will rapidly get us the information we need—and, one hopes, not too much of what we don't need. But one also frequently hears, both in the media and from individuals—at least if your friends are like mine—of the frustrations of trying to get "hooked in." In a similar vein, in his invited talk at AAAI-94 addressing key problems of software in the future, Steve Ballmer from Microsoft brought up repeatedly the need to make software more "approachable." An article on the first page of the business section of the July 22, 1994 Boston Globe (Putzel 1994, p. 43) had the headline, "Roadblocks on Highway: Getting on the Internet can be a Real Trip." The article began, "So you want to take a ride on The Superhighway. Pack plenty of patience, and strap in tight. The entry ramp is cratered like the moon." It goes on to say, "The 'Net' offers access to huge storehouses of information … [but] ordinary people … are having a devilish time breaking down the Internet door. It's like a huge private club whose membership is guarded by a hundred secret handshakes."

The thing is, even after you get in the club, there's not much help getting what you really need. AI can play a unique and pivotal role in improving the situation, in making the Superhighway an information superhighway, not just a gigabit superhighway. The work of Etzioni, Maes, Weld and others on softbots and interface agents is an important step in this direction (Etzioni and Weld 1994; Maes 1994; Etzioni 1993).

Limitations of the means currently available for communicating with computer systems are a major stumbling block for many users. Direct manipulation interfaces are often touted as providing the advance that makes computing power accessible to one and all. Now users can just "point and click" to get what they need. They don't have to tell the computer system how to do things, just what to do. But, in reality, this is true only at a shallow level: for many applications, users no longer need to write programs or deal with obtuse command languages. But at the deeper problem-solving or task level, users still must tell the computer system how to do what's needed. Systems don't have a clue about what users are really trying to achieve. They just do a bunch of pretty low-level chores for a user, in service of some larger goal about which the systems know nothing. The system is a tool, no doubt a complex one, but it's a dumb servant rather than a helpful assistant or problem-solving partner. Users say jump, and, if they are lucky, the system figures out how high rather than replying with

usage: jump [-h] height [-v] vigor.

To have systems that help solve our problems without having to be told in laborious, boring detail each step they should take requires freeing users from "having to tell" at a different level. It wouldn't be too far-fetched to say that the mouse frees users from having to tell the system what to do at, or perhaps below, the symbol level, but it fails to provide any assistance at the knowledge level. Mice and menus may be a start, but they're far from the best that we can do to make computers more accessible, helpful, or user friendly.

For systems to collaborate with their users in getting the information they need and solving the problems they have will require the kind of problem solutions and techniques that AI develops. But it also requires that we look beyond individual intelligent systems to groups of intelligent systems that work together.

So much for politics and technology. Let's not lose sight of the fact that collaborative behavior is interesting in its own right, an important part of intelligent behavior. For at least two reasons, we should not wait until we've understood individual behavior before we confront the problems of understanding collaborative behavior. First, as I will show later, the capabilities needed for collaboration cannot be patched on, but must be designed in from the start. Second, I have a hunch that looking at some problems from the perspective of groups of agents collaborating will yield easier and not just different solutions.1


Examples of Collaborative Activity

That collaboration is central to intelligent behavior is clear from the ways in which it pervades daily activity. Figure 1 lists a small sampling of them. They range from the well-coordinated, pre-planned and practiced collaborations of sports teams, dancers, and musical groups to the spontaneous collaborations of people who discover they have a problem best solved together. Scientific collaborations occur on large and small scales and across the sciences and social sciences. The large scale is exemplified by the coordinated efforts of the 400 physicists who participated in finding the top quark as well as by the astronomers observing Shoemaker-Levy 9 collide with Jupiter. Archeologists attempting to produce a coherent explanation of the culture represented by the finds at a dig typically collaborate in somewhat smaller groups of specialists from different subfields.

To help illustrate some of the features of collaborative activity, I will examine in more detail two small-scale collaborations, one in health care and the other in network maintenance. The health-care example involves three people working together toward common goals; the network-maintenance example illustrates a human-computer team effort. Later, I will give an example of a collaborative group consisting solely of machines. Although I will not have time to discuss work on systems that support groups of people collaborating, this is another area to which the results of our research could contribute.

In the health care scenario portrayed in figure 2 a patient arrives at the hospital with three problems affecting his heart and lungs. Three specialists are needed, each providing different expertise needed for curing the patient. But this is not a one doctor, one disease situation. Treating the patient will require teamwork. For example, the cardiologist and pulmonary specialist must agree on a plan of action for reducing the water in the patient's lungs. And the infectious disease and pulmonary specialists must plan together the kind and amount of antibiotic to prescribe.

Notice, however, that this is a team of equals. No single doctor is the manager, telling the others who does what; there's no master-servant relationship here. The doctors need to come to a consensus about what to do and who's going to do it. Each of them has only some of the information needed to devise a plan for action; they will have to plan jointly. In doing so, each doctor will make presumptions about the others' beliefs and capabilities and must decide when to check out these assumptions. Each doctor will be counting on the others to contribute what they know to the solution and to do their share.

The dialogue in figure 3, taken from the work of Lochbaum (1994), illustrates that "a buddy is better than a slave": a system that works with users to solve problems jointly is better than one that takes orders. For example, in response (2), the system doesn't just report that Saturn is down, but goes on to fix the problem; it acts in service of achieving the shared goal it has with the user. In (6), the system again acts to remove a barrier; it does so without requiring an additional push of a mouse button.

As in the health care example, here too different participants have different knowledge about how to proceed as well as about the conditions that pertain. To determine the appropriate course of action, they need to share information. The participants also have different capabilities for acting. The user is counting on the system to do its part in furtherance of the common goal; not to give up when something goes wrong, but to try to address the problem; and to report back on events or properties that are important to the situation and to their problem solving.

Figure 3. Example 2: Network Maintenance (Lochbaum, 1994).
(1) User: We need to repair a connectivity problem between Mars and Saturn. Do a remote ping from Mars to Saturn.
(2) System: I can't. Saturn seems to be down. I'll take care of that first.
(3) <System reboots machine>
(4) System: Okay, Saturn's back up and the remote ping was successful.
(5) User: Good. Verify Mars' IP address for Saturn for me.
(6) System: The entry for Saturn was wrong, but I corrected it.
(7) User: Okay, good. We're done then.
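The contrast is easy to caricature in code. The sketch below is purely illustrative (the function names and the one-line "reboot" are invented; this is not Lochbaum's model): the slave-like agent stops at the first failure, whereas the buddy-like agent treats the failure as a barrier to the shared goal, removes it, and reports back.

# A minimal sketch of "a buddy is better than a slave."

def remote_ping(host_up):
    # Hypothetical primitive: succeeds only if the target host is up.
    return "remote ping successful" if host_up else None

def slave_agent(host_up):
    result = remote_ping(host_up)
    return result or "error: host unreachable"    # gives up; the user must replan

def buddy_agent(host_up):
    report = []
    if not host_up:
        report.append("Saturn seems to be down. I'll take care of that first.")
        host_up = True                             # repair action: reboot the host
        report.append("Okay, Saturn's back up.")
    report.append(remote_ping(host_up))
    return " ".join(report)                        # removes the barrier, then reports

print(slave_agent(host_up=False))   # error: host unreachable
print(buddy_agent(host_up=False))   # repairs the barrier, then completes the task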


Figure 1. Collaborations in Daily Life.
● sports and entertainment:
  ■ soccer, dance, orchestra
● science:
  ■ archeological digs
  ■ high energy physics
● design:
  ■ building a house
  ■ constructing a computer system
● health care

Figure 2. Example 1: Health Care.
● patient problems:
  ■ heart attack with congestive heart failure
  ■ pneumonia
  ■ emphysema
● the team:
  ■ cardiologist
  ■ infectious disease specialist
  ■ pulmonary specialist
● the collaborations:
  ■ diuretics: cardiologist & pulmonary specialist
  ■ antibiotics: pulmonary specialist & infectious disease specialist


The Challenge of Simple Organisms

Our analysis of collaboration should not be restricted to complex systems that require explicit models of beliefs, desires, intentions, plans and the like. Simple organisms also work together. Ant behavior provides a compelling example. We all know—some too well—that ants leave trails for other ants to follow.2 But this is one of the simpler ways in which ants recruit others in their colony to work toward getting food for the group. Another approach is tandem running. In his book Insect Societies, E.O. Wilson (1971) reports that,

When a worker of the little African myrmicine ant Cardiocondyla venustula finds a food particle too large to carry, it returns to the nest and contacts another worker, which it leads from the nest. The interaction follows a stereotyped sequence. First the leader remains perfectly still until touched on the abdomen by the follower ant. Then it runs for a distance of approximately 3 to 10 mm, or about one to several times its own body length, coming again to a complete halt. The follower ant, now in an excited state apparently due to a secretion released by the leader, runs swiftly behind, makes fresh contact, and "drives" the leader forward. (p. 248)

Social insects collaborate on more than obtaining food. Weaver ants, termites, and social wasps build large complex nests that take many worker lifetimes to complete (Wilson 1971, p. 228ff), and many species collaborate on raising the next generation. In describing this behavior, Wilson makes an assertion that we should keep in mind as we attempt to develop intelligent systems. He says,

The individual social insect, in comparison with the individual solitary insect, displays behavior patterns that are neither exceptionally ingenious nor exceptionally complex. The remarkable qualities of social life are mass phenomena that emerge from the meshing of these simple individual patterns by means of communication. In this principle lies the greatest challenge and opportunity of insect sociology. (Wilson 1971, p. 224)

This observation also poses a challenge for us in our efforts to design intelligent systems. In particular, it suggests we consider how designing systems to work together might enable us to design simpler individual systems and still accomplish complex goals. Bajcsy's work on robots that coordinate physical tasks (Kosecka and Bajcsy 1993) is a step in this direction. However, we can also look in simpler settings. A somewhat surprising theoretical result (Bender and Slonim 1994), which I'll discuss in more detail later, shows that two robots are better than one in certain map-learning situations. For example, the lighter portion of the graph fragments in figure 4 illustrate configurations that two agents can learn to distinguish, but one cannot.

One interesting issue raised by these "simple societies" is how much of their behavior is collaborative and not merely coordinated or interactive, a distinction we'll examine soon.




Another is the extent to which collaborative behavior should be hardwired into systems. An important question for us to address is the extent to which collaboration can be achieved with simple techniques and the extent to which it requires reasoning with explicit models of beliefs, intentions, and the like.

Collaboration versus Interaction

Dictionary definitions (Mish 1988) provide a crisp way of seeing the distinction between collaboration and interaction. Whereas interaction entails only acting on someone or something else, collaboration is inherently "with" others; working (labore) jointly with (co). It's the "jointly with" that distinguishes collaboration and that we need to characterize more precisely. To build collaborative systems, we need to identify the capabilities that must be added to individual agents so that they can work with other agents. Later, when I examine what's required to model collaboration, I will argue that collaboration cannot just be patched on, but must be designed in from the start.

To characterize "jointly" will require a consideration of intentions. The importance of intentions to modeling collaboration is illustrated by the clip-art maze in figure 5. From the picture alone, it is not clear whether the mice in this maze are collaborating. The picture doesn't reveal if they have some shared goal or if instead, for example, the top mouse has paid the bottom mouse to be his step-stool. We need to know something about the goals and intentions of these mice to be able to decide whether they are collaborating. Information about intentions is central to determining how they will behave in different circumstances. For example, such information is needed to figure out what the mice will do if something goes wrong: if the top mouse fails to get over the wall, will the bottom mouse help it find another plan, that is, will the mice replan together about how to reach their goal?

Another way to view the distinction between collaboration and interaction is to consider whether we want computer systems to be merely tools, or something more. The contrast among the simple natural language examples in figure 6 clearly illustrates this difference. The assistance Sandy received from Pat in writing the paper was quite different from the pen's contribution. The pen only left marks on paper, whereas Pat produced words, paragraphs, maybe even sections, not to mention ideas.

Pat and the pen form a spectrum. The question is where the computer system sits on this spectrum now. Systems are not as passive as the pen (for example, they will do spelling correction), but they're not nearly as helpful as Pat. They can't help formulate an outline or the approach to take nor identify the important points to be made, nor can they write complete sections. More particularly, we can ask what's needed so that systems will become more like Pat than the pen in what they contribute to our efforts.3

Figure 4. Two Robots Are Better Than One (Bender & Slonim, 1994). Fragments from a strongly connected graph illustrating a setting in which a single robot could get confused.

Figure 5. Collaboration versus Interaction. The two mice trying to get out of this maze are interacting, but are they collaborating?

Figure 6. Tools or Collaborators?
● Sandy wrote the paper with a fountain pen.
● Sandy wrote the paper with Pat.
● Sandy wrote the paper with the computer.

Why Collaborative Systems Now?

Before moving on to look at some of the issues that modeling collaborative systems raises, I want to pause to address the question you might be asking: why is now the time to take on this challenge? There are three main reasons. First, as I mentioned earlier, modeling collaboration presents a variety of challenging research problems across AI subfields. But this would be irrelevant were it not for the second reason: progress in AI over the last decade.

AI is situated to make major advances in collaborative systems because of the substantial scientific base established by research across most of the areas involved in meeting this challenge. For example, work in both the natural language and DAI communities has provided a range of models of collaborative, cooperative, multi-agent plans. The planning community's efforts to cope with uncertainty and the pressures of resource constraints likewise provide a rich source of ideas for coping with some of the modeling problems that arise in the design of collaborative systems. Results from the uncertain and nonmonotonic reasoning communities can be drawn on to address the kinds of ascription of beliefs and intentions that arise in reasoning about what others can do as well as about what they know.

Third, as I discussed at the beginning of this article, there are compelling applications needs. Again, recent progress in AI matters. AI is uniquely poised to provide many of the capabilities that are needed by virtue of the research base I've just sketched and because the kinds of problems we address and models we develop are central to providing them.

Characteristics of Collaboration

In this next section, I will characterize the phenomenon of collaboration more precisely and will describe the main properties of collaborative activity. Then I will examine the key elements that we need to model. Next I will show how these characterizations enable us to distinguish collaboration from other kinds of group activity. In the remainder of the article, I will then turn to look at some of the major research challenges presented by collaboration and the research collaborations we need to form to meet them.

Features of Collaboration

In examining the features of collaborative activity, I'll draw on what I've learned about collaboration in joint work with Candy Sidner and Sarit Kraus.4 If we look at collaborative activity in daily life, we see several features of the participants' knowledge, capabilities, and relationships that affect the kinds of reasoning needed to support the construction of collaborative computer agents. First, as the health care and network examples illustrate, the master-servant relationship, which is currently typical of human-computer interaction, is not appropriate for collaborations. Second, as the examples also make clear, most collaborative situations involve agents who have different beliefs and capabilities. Partial knowledge is the rule, not the exception. As a result, collaborative planning requires an ability to ascribe beliefs and intentions (that is, to guess in a way that you can later retract). Third, multi-agent planning entails collaboration in both planning and acting.

These characteristics have a significant effect on the kinds of reasoning systems we can employ. We need efficient methods for nonmonotonic reasoning about beliefs and intentions of other agents, not just about facts about the world; we can't depend on once-and-for-all planning systems, nor separate planning from execution. Collaborative plans evolve, and systems must be able to interleave planning and acting. Thus, the recent moves within the planning community to cope with a range of resource-bounds issues and the like are crucial to our being able to make progress on modeling collaboration. Similarly, advances in nonmonotonic reasoning are important to handling the kinds of belief ascription needed.
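One way to make "guess in a way that you can later retract" concrete is a store of default assumptions that observations can override. The following sketch is only illustrative (the class and its methods are invented here, and real nonmonotonic reasoners are far more sophisticated):

class AscribedBeliefs:
    """Defeasible store of beliefs ascribed to a partner agent.

    Assumptions are held only until contradicted by observation,
    mirroring the guess-then-retract ascription described above.
    """

    def __init__(self):
        self.assumed = {}    # proposition -> assumed truth value
        self.observed = {}   # proposition -> truth value actually evidenced

    def assume(self, proposition, value=True):
        # Adopt a default; observations always take precedence.
        if proposition not in self.observed:
            self.assumed[proposition] = value

    def observe(self, proposition, value):
        # New evidence: record it and retract any conflicting assumption.
        self.observed[proposition] = value
        if self.assumed.get(proposition) not in (None, value):
            del self.assumed[proposition]   # nonmonotonic retraction

    def holds(self, proposition):
        if proposition in self.observed:
            return self.observed[proposition]
        return self.assumed.get(proposition)   # None means "unknown"

beliefs = AscribedBeliefs()
beliefs.assume("partner-knows-recipe")           # default guess
beliefs.observe("partner-knows-recipe", False)   # contradicted, so retracted
assert beliefs.holds("partner-knows-recipe") is False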


The final feature of collaborative activity that I'll discuss is perhaps the most controversial one: collaborative plans are not simply the sum of individual plans. When Candy Sidner and I first investigated models of collaborative plans in the context of discourse processing (Grosz and Sidner 1990), people argued that if we would just go away and think harder we could find a way to treat them as sums of individual plans. Natural-language dialogue examples just didn't convince the planning or reasoning researchers with whom we spoke. So to convince them we were reduced to that familiar AI standby, the blocks world. The example we used is shown in figure 7. As shown at the top of the figure, the two players are building a tower of blue and red blocks; they are working together, one of them using her supply of blue blocks, the other his supply of red blocks. The question collaboration poses—and much of the rest of this article is aimed at addressing—is what their plan is. As the bottom half of the figure shows, whatever that plan is, it is not just the sum of two individual block-building plans, each with holes in the appropriate spot.

Thus, we must design collaboration into systems from the start. If they don't have mechanisms for working together—some embodiment of those mental attitudes that are needed to support the "jointly with" that characterizes collaboration—then they will not be able to form plans for joint activity. I want to turn now to investigate the mental attitudes that are required.

Intending-That

First, I need to add another kind of intention to the repertoire that planning formalizations in AI have used previously. Kraus and I (Grosz and Kraus 1995) refer to this as the attitude of "intending-that."5 Intending-that enables us to represent the commitment to others that is needed for joint activity. I will not try to define intending-that here, but rather will illustrate its role by considering several examples of collaborative activity.

To begin, I want to examine the types of responsibility that ensue when a professor and student write a paper together. For this example, we'll assume that the student and professor will each write different sections of the paper. The student intends to write his sections of the paper. The professor intends that the student will be able to write his sections. The student will do the planning required for writing his section and the subactions entailed in doing it: looking up references, producing prose and so forth. The professor's intention that the student be able to write his sections obliges her not to ask him to do other things at the same time: no demo crunches when there's a conference paper deadline. In addition, this intention-that may lead her to help the student when she notices doing so would further their joint goal of getting the paper written. However, the professor is not directly responsible for planning and executing the acts in writing the student's sections. And, the converse holds for the sections the professor is going to write: no last minute pleas for letters of recommendation when there's a conference paper deadline.

Meal preparation provides an example that illustrates other aspects of intending-that. Suppose two people—I'll refer to them as Phil and Betsy—agree to make dinner together. They may split up the courses: Phil making the main course, Betsy the appetizer, and the two of them making the dessert (the most important course) together. Phil intends to make the main course and intends that Betsy will be able to make the appetizer. Betsy intends to make the appetizer and that Phil will be able to make the main course. They'll each adjust their menu choices and recipes so that they won't have resource conflicts (such as both needing the same pan at the same time) and they may help each other out. For example, Phil might offer to pick up the ingredients Betsy needs when he goes to the grocery. I'll be using this example later to illustrate various elements of collaboration, but lest you misunderstand, let me say now that I'm not proposing meal-making robots as a primary application. Rather, the meals domain is a useful one for expository purposes since everyone has some intuitions about it. Thus, it requires less preliminary introduction than network management or health care. And, for some strange reason, it's less painful to discuss than paper writing.


Figure 7. Collaboration Is Not Just Sums of Individual Plans. (The figure shows that the joint tower-building plan ≠ the sum of the two individual block-building plans.)


Plans for Collaborative Action

I can now sketch what we need to add to individual plans in order to have plans for group action. I'll only have time here to lay out a piece of the picture and to do so informally, but most of what I say has formalizations, often several competing ones, behind it (see, for example, Grosz and Kraus [1995]; Kinny et al. [1994]; Levesque, Cohen, and Nunes [1990]).

Figure 8 lists the major components of group plans; to provide a base for comparison, the top of the figure lists the main components for individual plans. First, just as an individual agent must know how to do an action, agents in a group must have knowledge of how they're going to do an action. To avoid too many different uses of the word "plan," I'll follow Pollack (1990) and call this knowledge of how to do an action a recipe for the action. In the case of a group plan to do a joint activity, there must be mutual belief of the recipe; agents must agree on how they are going to do the action. For instance, Phil and Betsy must agree that dinner will consist of three courses of a certain sort and concur on who will make each of the courses. Second, just as individual agents must have the ability to perform the constituent actions in an individual plan and must have intentions to perform them, the participants in a group activity must have individual or group plans for each of the constituent actions in their agreed-on recipe.

Plans for group actions include two major constituents that do not have correlates in the individual plan. First, the agents must have a commitment to the group activity; they must all intend-that the group will do the action. For example, both Betsy and Phil need to have intentions that they make dinner. Among other things, these intentions will keep them both working on the dinner until the meal is on the table. Second, the participants must have some commitment to the other agents being able to do their actions. Phil must have an intention that Betsy be able to make the appetizer. This intention will prevent him from using a pan he knows she needs; it might also lead him to offer to buy ingredients for her when he goes grocery shopping.

Figure 8. Plans for Collaborative Action.
● To have an individual plan for an act, need
  ■ knowledge of a recipe
  ■ ability to perform subacts in the recipe
  ■ intentions to do the subacts
● To have a group plan for an act, need
  ■ mutual belief of a recipe
  ■ individual or group plans for the subacts
  ■ intentions that group perform act
  ■ intentions that collaborators succeed
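The components in figure 8 map naturally onto data structures. The encoding below is one rough, illustrative rendering (mine, not the Grosz-Kraus formalization, which defines these attitudes in a logic of belief and intention):

# Data-structure sketch of figure 8's plan components.
from dataclasses import dataclass, field

@dataclass
class IndividualPlan:
    act: str
    recipe: list[str]                    # knowledge of a recipe (its subacts)
    can_perform: dict[str, bool]         # ability to perform each subact
    intends_to: set[str] = field(default_factory=set)   # intentions to do the subacts

@dataclass
class GroupPlan:
    act: str
    agents: list[str]
    mutual_recipe: list[str]             # the mutually believed recipe
    subplans: dict[str, "IndividualPlan | GroupPlan"]    # plans for the subacts
    intend_that_group_acts: set[str] = field(default_factory=set)       # joint commitment
    intend_that_others_succeed: set[str] = field(default_factory=set)   # support for collaborators

dinner = GroupPlan(
    act="make dinner",
    agents=["Phil", "Betsy"],
    mutual_recipe=["main course", "appetizer", "dessert"],
    subplans={},   # filled in as Phil and Betsy adopt individual or joint subplans
    intend_that_group_acts={"Phil", "Betsy"},
    intend_that_others_succeed={"Phil", "Betsy"},
)

Missing from any such static encoding, of course, is the reasoning that establishes mutual belief and keeps these intentions in force as the plan evolves.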

Modeling Challenges

With this sketch of the requisite mental states of the participating agents in hand, I want to turn now to consider what we need to model to be able to build computer systems that can be collaborative partners. In this section, I'll look at three elements that have been identified as central by people working on modeling collaboration: commitment to the joint activity, the process of reaching agreement on a recipe for the group action, and commitment to the constituent actions. As we examine each of these elements, we will see there is a pervasive need for communication and a range of questions concerning the information that needs to be communicated at any given time. In addition, a significant part of the modeling challenge arises from the need for any formalization to provide for systems to cope with a general background situation in which information is incomplete, the world changes, and failures are to be expected.

Commitment to the Joint Activity

Bratman (1987) has argued that intentions play three major roles in rational action: (1) they constrain an agent's choice of what else it can intend; in particular, agents cannot hold conflicting intentions; (2) they provide the context for an agent's replanning when something goes wrong; (3) they guide "means-ends reasoning."


The commitment of participating agents to their joint activity plays analogous roles in collaborative activities. For example, Phil cannot plan to study at the library until 6 p.m. if he's going to provide a main course for dinner at 6 and make dessert with Betsy; that is, Phil cannot hold intentions that conflict with his commitment to the collaborative dinner plan. In addition, if Phil is unable to make the main course he originally planned, then his replanning needs to take into account the rest of the plan he has with Betsy. He must make a main course that will fit with the rest of the menu they've planned. In both cases, agents must weigh intentions rooted in their individual goals and desires with those that arise from the joint activity. Furthermore, Phil's commitment to the dinner-making plan will lead him to undertake means-ends reasoning not only in service of the constituent actions for which he takes responsibility, but also possibly in support of Betsy's actions. For instance, if he offers to help Betsy by buying ingredients for her while he's out shopping, he'll need to reason about how to get those ingredients.
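The first role, filtering out conflicting intentions, can be caricatured with a simple time-interval check. The encoding below is invented for illustration and stands in for the much richer notion of conflict in the formal models:

# Toy intention filter: reject candidates that clash with held commitments.

def overlaps(a, b):
    # True if half-open time intervals a and b intersect.
    return a[0] < b[1] and b[0] < a[1]

def can_adopt(candidate, commitments):
    # An agent cannot hold an intention whose interval conflicts
    # with a commitment it already holds.
    return all(not overlaps(candidate["when"], c["when"]) for c in commitments)

phil = [
    {"act": "make main course", "when": (16, 18)},    # 4-6 p.m.
    {"act": "make dessert with Betsy", "when": (18, 19)},
]
study = {"act": "study at library", "when": (14, 18)}  # until 6 p.m.
print(can_adopt(study, phil))   # False: conflicts with making the main course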

Commitment to a joint activity also engenders certain responsibilities toward the other participants' actions. For instance, agents must be aware of the potential for resource conflicts and take actions to avoid them. Phil cannot intend to use a pot he knows Betsy needs during a particular time span; at a minimum, he must communicate with her about the conflict and negotiate with her about use of this resource.

Communication is needed not only to help decide on resource conflicts, but also when something goes wrong while carrying out a joint activity. We saw an example of both the communications requirement and replanning in the network maintenance example discussed earlier. The system did not give up just because Saturn was down; instead, it both reported the problem and went on to try to fix it.

The commitment to joint activity leads to a need to communicate in other ways as well. For example, agents may not be able to complete a task they've undertaken. If an agent runs into trouble, it needs to assess the impact of its difficulty on the larger plan, and decide what information to communicate to its partners. Although in some cases, an individual agent will be able to repair the plan on its own, in other situations the replanning will involve other group members. In some cases, the group activity may need to be abandoned. When this happens, all the group members need to be notified so no one continues working on a constituent activity that is no longer useful.6

Reaching Consensus on the Recipe

When a group of agents agrees to work together on some joint activity, they also need to agree about how they are going to carry out that activity. Because the agents typically have different recipe libraries (that is, they have different knowledge about how to do an action), they need a means of reconciling their different ideas of how to proceed. For example, if Phil thinks a dinner can consist of a pasta course alone and Betsy thinks no meal is complete without dessert, they need to negotiate on the courses to be included in the dinner they're making together. Furthermore, agents may need to combine their knowledge to come up with a complete recipe; they may need to take pieces from different recipe libraries and assemble them into a new recipe. So, this part of collaboration may also entail learning.

Reaching agreement on a recipe encompasses more than choosing constituent actions. Once agents divide up the constituent tasks (a topic we'll return to shortly), they need to ensure that their individual plans for these subtasks mesh. They'll need to know enough about the individual subplans to be able to ensure the pieces fit together. Betsy needs to make sure the appetizer she plans will go with Phil's main course, and that in preparing it she won't run into resource conflicts.7 It's important to note, however, that agents do not need to know the complete individual plans of their collaborators. How much they do need to know is an open question. The two cookbook recipes in the sidebar (taken from Prince [1981]) illustrate a spectrum of detail. Notice, however, that the longer recipe contains less detail than an agent performing this action must know (for instance, it does not tell how to scrape the pig), and the shorter recipe may contain more detail than a dinner-making partner responsible for dessert needs to know. Some characteristics of the recipe-agreement process are not evident in this simple meals example. In general, agents will not be able to decide on a full recipe in advance. They'll need to plan a little, do some actions, gather some information, and then do more planning. Just like in the case of individual plans, they'll need to be able to modify and adapt various parts of the plan as they work their way along. They'll need to communicate about the adaptations. Typically agents will be working with partial recipes and they'll be interleaving planning and acting.

Recipes: Level of Detail?

From Joy of Cooking:
Roast Suckling Pig. 10 servings.
Preheat oven to 450 degrees. Dress by drawing, scraping, and cleaning: a suckling pig. Fill it with: Onion Dressing . . . It takes 2 1/2 quarts of dressing to stuff a pig of this size. Multiply all your ingredients but not the seasonings . . .
(Rombauer & Becker, 1931)

From Répertoire de la Cuisine:
Cochon de lait Anglaise: Farcir farce à l'anglaise. Rôtir.
(Gringoire & Saulnier, 1914)
[English suckling pig: Stuff with English stuffing. Roast.]

It's evident then that recipe selection entails a significant amount of decision making. One major open issue in modeling collaboration is the kinds of group decision-making processes that are needed, and ways to implement them.
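One candidate policy, drastically simplified, is sketched below: prefer a recipe both agents already know, and otherwise assemble a new one from pieces of both libraries. The policy and names are illustrative only; the address deliberately leaves the choice of group decision-making process open.

# Toy recipe reconciliation between two agents' recipe libraries.

def agree_on_recipe(lib_a, lib_b):
    # Prefer a recipe that is already mutually known.
    shared = [r for r in lib_a if r in lib_b]
    if shared:
        return shared[0]
    # Otherwise combine knowledge: assemble a new recipe from both libraries.
    merged = []
    for recipe in lib_a + lib_b:
        for course in recipe:
            if course not in merged:
                merged.append(course)
    return merged

phil = [["pasta"]]                    # Phil: pasta alone can be dinner
betsy = [["pasta", "dessert"]]        # Betsy: no meal is complete without dessert
print(agree_on_recipe(phil, betsy))   # ['pasta', 'dessert']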

Commitment to Constituent Actions

In addition to deciding what recipe they're going to use, the participants in a collaborative activity need to decide who's going to do what. They have to agree on an assignment of constituent actions to subgroups or individuals. This, again, requires a group decision-making process. It raises the question of whether to have centralized control and a team leader who decides, or a more equal partnership, as in the health care example, that requires negotiation. Different choices vary in flexibility, robustness, and communication costs. Research in the DAI community has addressed the question of task allocation; proposed solutions range from market mechanisms like contract nets (Davis and Smith 1983) to voting mechanisms and negotiation protocols (Rosenschein and Zlotkin 1994). However, most of the techniques proposed to date have assumed individual agents were largely self-interested; there was no shared goal nor commitment to the success of the whole group. Thus, there is a need to determine the effect of these features of collaboration on the design and choice of decision making processes.

To decide how to divvy up the constituent actions needed to complete a task, agents must reason about the capabilities of other agents; for systems to be able to do so will undoubtedly require techniques for belief attribution and nonmonotonic reasoning. Agents also need means of balancing their own previous obligations—from either individual desires or other group plans—with the new obligations that agreeing to perform a constituent action for the group activity would entail. They need to reconcile intentions from private acts with those from group activities. Again, communication needs abound.
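The announce-bid-award mechanics of a contract-net-style allocation can be sketched in a few lines (in the spirit of Davis and Smith [1983]; the bidding rule and agent names here are invented, and a real protocol involves far richer task announcements and, as noted above, typically self-interested bidders):

# Bare-bones contract-net-style task allocation.

def announce_and_award(task, agents):
    # Announce the task; collect bids (None means the agent declines).
    bids = {}
    for name, estimate in agents.items():
        bid = estimate(task)
        if bid is not None:
            bids[name] = bid
    # Award to the lowest-cost bidder, if anyone bid at all.
    return min(bids, key=bids.get) if bids else None

agents = {
    "cardiologist": lambda t: 1 if t == "diuretics" else None,
    "pulmonary":    lambda t: 2 if t in ("diuretics", "antibiotics") else None,
}
print(announce_and_award("diuretics", agents))    # cardiologist (cost 1 beats 2)
print(announce_and_award("antibiotics", agents))  # pulmonary
print(announce_and_award("surgery", agents))      # None: no capable bidder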

Another View of Collaborative Activity

Having completed the short overview of the modeling needs posed by collaboration, I want to turn to look briefly at another characterization of collaborative activity. Doing so will let us examine the extent to which various features of collaboration must be explicitly modeled.

Figure 9 lists Bratman's four criteria for shared cooperative activity (Bratman 1992). Only one of these criteria, commitment to the joint activity, was explicit in the framework I just laid out. The others are implicitly encoded in various constructs I've discussed (Grosz and Kraus 1995). Mutual responsiveness and mutual support can be derived from commitment to joint activity and the specification of what it means for one agent to intend-that another be able to do an action. Meshing subplans can be derived from ways in which agents come to agree on recipes in the context of intentions-that the full group succeed. The point of looking at collaboration from Bratman's perspective is not only good scientific hygiene (it's always good to cover the philosophical bases), but also because we can see that the criteria do not need to be explicit in a formalization.

Figure 9. Shared Cooperative Activity.
● Mutual responsiveness
● Commitment to the joint activity
● Commitment to mutual support
● Meshing subplans

Although explicit models may enable more flexibility, sometimes they're not needed. The map-learning result of Bender and Slonim (1994) I mentioned earlier provides an example of a problem domain in which collaboration can be "built-in" to an algorithm without explicit models of belief and intention. The model that Bender and Slonim examined is that of strongly connected, directed graphs with indistinguishable nodes; the edges out of each node are labeled only with respect to the node: they are numbered, but do not have names that continue along their length. Learning in this setting means being able to output an exact copy of the graph.

Now this model might not seem close to reality, but first-time visitors to Boston will tell you it's a good approximation to the challenge they've faced driving around, trying to figure out what's connected to what, or learning the street map. There are lots of one way streets and no street signs to speak of. Many of the intersections look the same: crowded with buildings, pedestrians, and cars. Learning the graph corresponds to learning all of the streets with no error; that is, the robot can remember and connect the intersections correctly.

The graphs at the bottom of figure 4 are fragments taken from a strongly connected graph that illustrate the kinds of settings in which a single robot could get confused and be unable to learn correctly. In particular, a single robot will not be able to distinguish between the different highlighted configurations. It won't know after three steps whether it's gotten back to the starting node as in the right graph or is headed off somewhere else as in the left one. A pebble might help a robot differentiate the two situations, but if the robot were in the situation portrayed on the left it would lose a pebble dropped at the top left node. In contrast, two robots can learn the graph structure, by using an algorithm that resembles in spirit the behavior of the tandem-running ants Wilson described. The second robot can periodically catch up to the first robot. Specifically, Bender and Slonim show that one robot cannot learn these graphs in polynomial time, even with a constant number of pebbles, and that two robots can learn in polynomial time with no pebbles needed. The main difference is that the second robot can move on its own (thereby catching up) whereas a pebble cannot. The second robot can collaborate, the pebble cannot.

It's interesting to look at the assumptions of the Bender and Slonim algorithm. Their two robots start together. They must be able to recognize one another; they can't be anonymous—they need to know each other. They also need to communicate, for instance by radio or by touching. Collaboration is built into the algorithm. The robots do not explicitly reason or negotiate; instead, the designers (or, in this case, the algorithm writers) directly encode collaboration.
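A toy version of the underlying idea can be simulated directly. The sketch below is not Bender and Slonim's algorithm; it only illustrates, for anonymous directed cycles, why a lone robot's observations are uninformative while a partner serves as a recognizable, movable marker:

# Toy illustration: in a directed cycle whose nodes all look alike, a lone
# robot's observations never change, so cycles of length 3 and 6 are
# indistinguishable to it. A partner left at the start node is recognizable,
# so the walking robot can at least measure the cycle's length.

def lone_robot_observations(cycle_len, steps):
    # Every node looks identical: out-degree 1, no labels to record.
    return [("out-degree", 1)] * steps

def walk_with_partner(cycle_len):
    partner_at = 0                    # the partner waits at the start node
    pos, steps = 0, 0
    while True:
        pos = (pos + 1) % cycle_len   # follow the single outgoing edge
        steps += 1
        if pos == partner_at:         # the partner is recognizable on arrival
            return steps              # cycle length recovered

# Indistinguishable to one robot:
assert lone_robot_observations(3, 6) == lone_robot_observations(6, 6)
# Distinguishable with a partner:
print(walk_with_partner(3), walk_with_partner(6))   # 3 6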

Distinguishing Collaboration from Other Group Activity

We might stop now to ask whether the criteria we've given for collaboration really distinguish it from other group behaviors. Do they rule out anything? I will examine two other types of behavior to show they do. First, I'll look at the contrast between collaborating with someone and contracting to another agent. Second, I'll revisit, this time in more detail, the differences between collaboration and interaction, looking to see how the criteria we've developed distinguish between them. There are other kinds of behavior one might examine; for example, Bratman (1992) suggests we need to rule out coercion: if two people go to New York together as a result of one kidnapping the other, that's hardly collaborative. However, the formalization of individual will is more complex than the properties needed to distinguish contracting and interaction from collaboration; for now, I'll stick with cases we have some chance of understanding formally.

The two painting scenarios given in figure 10 lead to different answers to the question posed at the bottom of the figure. The contrast makes evident a key distinction between contracting and collaborating. In both scenarios, Leslie is going to do the prep work for the paint job. (She's the helper we'd all like around when we're painting.) She'll need to acquire the supplies required and get them to the house. The question is whether there's any obligation for her to help the other painter; for example, while she's at the hardware store getting supplies for her part of the job, does she need to consider picking up the paint? In the contracting scenario in which Sharon has hired Leslie, there's no obligation whatsoever to help the other person; all Leslie is obliged to do is the prep work. In contrast, when Leslie is collaborating with Bill, her commitment to their joint activity and commitment to Bill's being able to do his part along with a recognition that Bill needs the paint from the hardware store oblige her at least to consider picking up the paint. She may in the end decide she cannot help Bill out (for example, if she has gone on her bike and is thus unable to transport the paint), but she must consider the tradeoff between doing so and not. That is, she needs to weigh her obligation to the group activity against other commitments in light of her abilities and make a decision.

Figure 10. Collaborating versus Contracting.
● Bill and Leslie agree to paint their house together.
  ■ Leslie will scrape and prep surface.
  ■ Bill will put on new paint.
● Sharon decides her house needs painting; she hires Leslie to do the prep work.
Question: Is Leslie obliged to pick up the paint when getting her own supplies?

To illustrate how the framework I've presented enables us to distinguish collaboration from interaction, I'll use the driving example in figure 11. Driving in a convoy is a paradigmatic example of joint action by a team, as Levesque, Cohen and Nunes present so clearly in their paper on Teamwork (Levesque, Cohen, and Nunes 1990). In contrast, ordinary driving is not—even if drivers act simultaneously, even if they follow certain traffic laws, even if the laws are those in a motor vehicle code and not just conventions that result from evolutionary choice as in Boston.

The contrast is most clearly seen by considering what happens when something goes wrong. Typically, convoy drivers agree on a recipe they will follow, one that includes checkpoints. They have a set of agreed-upon signals, or CB radios, or car phones, some method of communication they can use to help maintain mutual belief of how they are carrying out their joint activity. If one of them needs help, the others will assist. So we have commitment to the joint activity, an agreed-upon recipe, commitment to the success of others' actions, and a means of communication to establish and maintain the requisite mutual beliefs. In short, we have collaboration.

The scene in Boston is different. There's no common goal, although the drivers may have a goal in common, namely filling the blank space in front of them. (The bicyclist having seen this has wisely fled the scene.) An individual driver has no commitment to the success of actions of other drivers. There is no joint activity so no commitment to one; there's no agreed upon recipe; and there's no commitment to the ability of other agents to successfully carry out their actions. There may be communication between drivers, but it's not in service of any shared goal. In short, there's no collaboration.

Figure 11. Collaborating versus Interacting.
● Driving in a convoy: a collaboration.
● Driving in Boston: highly interactive, but not a collaboration.

Collaborating Computer Systems

I've now characterized the phenomena we need to model and described some of the research challenges they pose for AI. Before turning to research problems, I want to look at the ways in which the issues I've discussed arise in and affect settings in which several computer systems need to coordinate their activities. The Superhighway goal has led people to ask many questions about this setting recently, most of them about handshakes, protocols, and means of charging for services.

I'd like to ask a different kind of question: What capabilities could we provide if systems were to collaborate rather than merely interact? What difference would they make to system performance? To examine these issues, I will focus on the types of systems settings that much research in DAI has considered. Figure 12 illustrates a typical scenario.8 To meet the needs of the user in the situation portrayed in this figure will take the coordinated efforts of three systems, one that has access to NASA images, one that has access to orbit information, and a compute server. Systems B and C will need to collaborate to determine the appropriate coordinates for the picture, and then will need to communicate the results to System A, which can access and display the appropriate pictures.

Figure 12. Example 3: Computer Agents.
● User's need:
  ■ Obtain photographs of Jupiter at anticipated impact sites of Shoemaker-Levy 9
  ■ Wants best grain-size, largest area available
● The team:
  ■ System A: network agent with access to NASA images
  ■ System B: access to information on orbits of Jupiter and Shoemaker-Levy 9
  ■ System C: compute server

The advantages of collaboration, as opposed to mere interaction, again may be illustrated by considering what happens if something goes wrong. For example, if System B discovers network problems and it's collaborating with A, it might communicate this to A to save A wasted effort. If System A can't get to its information source, it needs to report back to the other systems, providing as much information as possible so the whole group can explore alternative recipes. If System C is unable to do a particular calculation, it needs to communicate with the other systems, again providing as much information as possible, so that the whole group can formulate a revised plan; just sending back a simple "can't compute" error message will not suffice. Collaboration can also play a role when no problems arise; for instance, Systems A and B might share rather than compete for network resources.
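One way to picture the difference between a bare error and a collaborative failure report is the following Python sketch. The FailureReport and Collaborator names, the field choices, and the broadcast rule are hypothetical, invented for illustration; the point is only that the report carries enough context (reason, partial results, alternatives) for the group to replan.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class FailureReport:
        # A bare "can't compute" message would carry only `task`;
        # the extra fields let the group explore alternative recipes.
        sender: str
        task: str
        reason: str
        partial_results: Dict[str, float] = field(default_factory=dict)
        alternatives: List[str] = field(default_factory=list)

    class Collaborator:
        def __init__(self, name: str):
            self.name = name
            self.teammates: List["Collaborator"] = []

        def report_failure(self, report: FailureReport) -> None:
            # Commitment to the joint activity: inform the whole group,
            # not just the agent that issued the request.
            for mate in self.teammates:
                mate.receive(report)

        def receive(self, report: FailureReport) -> None:
            print(f"{self.name}: replanning around {report.task} "
                  f"({report.reason}); options: {report.alternatives}")

    a, b, c = Collaborator("A"), Collaborator("B"), Collaborator("C")
    c.teammates = [a, b]
    c.report_failure(FailureReport(
        sender="C", task="compute impact coordinates",
        reason="numeric routine failed",
        alternatives=["coarser approximation", "delegate to System B"]))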

None of this is to say that systems must reason with explicit models of belief, intention, and the like in such settings. Remember the ants. However we implement the ideas of commitment to joint activity and to the success of others, as well as the ability to formulate a joint plan of attack, we need them in our systems.

Research Problems

I want to turn now to ask what problems need to be solved for us to be able to construct systems that can collaborate with each other or with people or both. Figure 13 gives only a small sampling. The problems listed cross all areas of AI. I clearly won't be able to examine them all in detail and so will focus on two: negotiation and intention-conflict resolution. I want to use these problems to illustrate the difference between what we know how to do from work on individual, individually motivated agents and what's needed for collaboration.


COMPUTER AGENTS

● User's need:
  ■ Obtain photographs of Jupiter at anticipated impact sites of Shoemaker-Levy 9
  ■ Wants best grain-size, largest area available

● The team:
  ■ System A: network agent with access to NASA images
  ■ System B: access to information on orbits of Jupiter and Shoemaker-Levy 9
  ■ System C: compute server

Figure 12. Example 3: Computer Agents.

RESEARCH PROBLEM AREAS

● Recipe (plan) construction
● Multi-agent learning
● Agent-action assignment
● Modelling commitment
● Communication requirements, constraints, tradeoffs
● Negotiation
● Intention-conflict resolution

Figure 13. Research Problem Areas.


Negotiation

The term negotiation has been used within the DAI community to refer to many different types of activities. They may all have in common only one thing, the aspect of negotiation that's mentioned prominently in the dictionary; that is, "to confer with another so as to arrive at the settlement of some matter" (Mish 1988). The realization in their systems of "confer" and "arrive at settlement" are among the problems facing designers of systems.

Although some work has been done on systems that support negotiations among people (Sycara 1988), I will consider work that addresses how multiple computer agents can reach consensus or accommodate each other's needs when there are conflicts in their goals or among their resource needs.9 Such systems will need abilities to communicate, resolve conflicts, and coordinate activities. Researchers working on these problems have considered agents with widely varying characteristics; some of the dimensions of agent characteristics are listed in figure 14.

The first dimension listed obviously affects all the others. One might ask under what conditions people or systems move from self-interest to concern with the good of the group, that is, move to benevolence or collaboration. Huberman and his colleagues (Glance and Huberman 1994) have considered factors that contribute to cooperative behavior emerging (see also Axelrod [1984]). They have looked at situations in which people can choose between acting selfishly or cooperating for the common good and have asked how global cooperation can be obtained among individuals confronted with conflicting choices. They examine situations in which the individual rational strategy of weighing costs against benefits leads to all members defecting, no common good, and everyone being less well off. The situation changes, however, if the players know they will repeat the game with the same group. Each individual must consider the repercussions of a decision to cooperate or defect. The issue of expectations then comes to the fore: "Individuals do not simply react to their perceptions of the world; they choose among alternatives based on their plans, goals and beliefs" (Glance and Huberman 1994, p. 78). Their modeling work shows that cooperative behavior arises spontaneously if groups are small and diverse and participants have long-term concerns. If we are going to build systems that will interact on more than one-shot deals, it might be wise to build them with capabilities for collaboration.

The other dimensions listed in the figure allow for variations in how much control is exerted both over the design of individual agents and over the social organization of the agents. A significant question is when to design agents with built-in cooperation mechanisms (Durfee and Lesser 1989), thus making them essentially benevolent. Other dimensions of variability concern whether agents have any history of interactions or memory of one another. Breaking a promise matters much more if your identity is known and you'll meet the person again. How much agents know about one another, and in particular about each other's preferences, also affects negotiation strategies.

Much of the work in this area has drawn on research in game theory, but has extended it to consider questions of how one designs agents to behave in certain ways and what kinds of performance different strategies will deliver. A range of results have been obtained. For instance, Rosenschein and Zlotkin (1994) have specified attributes of problem domains that affect the choice of an appropriate negotiation strategy and have shown that the domain as well as the deal type affects whether or not agents have any incentive to lie.
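Stepping back to the repeated-game effect Glance and Huberman describe: it can be illustrated with a toy simulation. The sketch below is emphatically not their model; the payoff numbers, the horizon_weight parameter, and the decision rule are all invented for illustration. It shows only the qualitative point: agents who weigh future repercussions keep cooperating, while one-shot reasoners defect.

    def play(num_agents: int = 8, rounds: int = 50,
             horizon_weight: float = 0.9) -> float:
        """Toy repeated social dilemma; returns the final cooperation rate.

        horizon_weight stands in for how much agents care about future
        repercussions of defecting (0.0 = pure one-shot reasoning).
        """
        cooperating = [True] * num_agents
        for _ in range(rounds):
            frac = sum(cooperating) / num_agents
            next_choices = []
            for _agent in range(num_agents):
                one_shot_gain = 1.0                      # keep your endowment
                long_term_gain = horizon_weight * 1.5 * frac
                # Cooperate only when expected future reciprocation beats
                # the immediate gain from defecting.
                next_choices.append(long_term_gain >= one_shot_gain)
            cooperating = next_choices
        return sum(cooperating) / num_agents

    print("long-horizon group:", play(horizon_weight=0.9))  # stays at 1.0
    print("one-shot group:", play(horizon_weight=0.0))      # collapses to 0.0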


AGENT CHARACTERISTICS

● Individually motivated or 'benevolent'?
● Central design or different designers?
● Agents anonymous to one another?
● Can you break a promise or deceive?
● Any long-term commitments?
● Do agents have complete information?
● Limits on reasoning power?

Costs: communication, negotiation time

Figure 14. Agent Characteristics.


Kraus, Wilkenfeld, and Zlotkin (1994) have derived strategies that take into account the time spent in negotiating. Alas, as the U.S. Congress has proved repeatedly, merely stalling is a great strategy for keeping the status quo.

In collaborative settings, negotiation has different parameters and raises additional issues. There is a shared goal and typically at least some history: agents collaborate over the full time period required to complete their joint activity. Agents may also collaborate more than once. How does this affect inclinations to tell the truth, intention-reconciliation decisions, and commitment to promises (for example, commitments to completing subtasks)? In reconciling individual needs with those of the group, agents need to weigh the importance to them of the group succeeding against other demands. How might we take this into account? How do individual goals interact with shared objectives? There is some game theory literature (Aumann and Maschler 1995) that deals with repetitive interactions: the success of the "tit for tat" strategy in playing the iterated prisoner's dilemma is one example. But this work too, at least so far as I have been able to determine, focuses on self-motivated agents and does not account for the role of joint commitment to some shared goal in determining choices.
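For readers who have not met it, tit for tat is easy to state in code. The sketch below uses the textbook prisoner's-dilemma payoffs (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for unilateral defection); it is a standard illustration, not drawn from Aumann and Maschler (1995), and note that both strategies here are self-motivated, which is exactly the limitation noted above.

    PAYOFF = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        """Cooperate first; thereafter echo the opponent's previous move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def iterate(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b = [], []          # each player's own past moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_b)  # a sees b's history, and vice versa
            move_b = strategy_b(hist_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(iterate(tit_for_tat, tit_for_tat))    # (30, 30): cooperation sustained
    print(iterate(tit_for_tat, always_defect))  # (9, 14): exploited once, then punished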

Finally, although protocol design can help ensure certain behaviors among computer systems, any negotiation that includes human participants will need to accommodate human preferences and styles. Systems that collaborate with humans may have to cope with the nonrational as well as the rational, as anyone who's ever been at a faculty meeting will know.

Intention-Conflict Resolution

Intention-conflict resolution is an inherent problem given agents with limited resources. Each of the collaborating agents has multiple desires, not all of which can be satisfied. It will have plans for actions it is carrying out to meet some desires, plans that include various intentions. It must ensure that its intentions don't conflict (a property that distinguishes intentions from desires). In considering whether it will do something, an agent must weigh current intentions with potential intentions. There are significant computational resource constraints. As those who have worked on real-time decision making know, all the while the world is changing; agents can't think forever. They must choose what to think about and whether to stop thinking and start acting. That heart-attack patient would surely prefer the doctors to start working on a good plan rather than waiting until they've proved they've got the best plan. As Voltaire would have it, "the best is the enemy of the good." Collaborating agents need to reconcile competing intentions while respecting time constraints.

The planning community has been actively considering this question in the context of a single agent. Three of the major approaches they've taken are to design techniques for decision compilation into finite state machines (Brooks 1991; Kaelbling and Rosenschein 1991), to design deliberation scheduling algorithms (Boddy and Dean 1989; Horvitz 1988), and to develop architectures that include rational heuristic policies for managing deliberation (Pollack and Ringuette 1990).

Decision compilation techniques have been used mostly for simple agents, although as we saw with the ants, simple agents can sometimes do complex things. Deliberation scheduling algorithms have been applied both to traditional object-level algorithms and to anytime algorithms. Both on-line preference computations and off-line ones (compiling decision-theoretic utilities into performance profiles) have been investigated. In contrast with this decision-theoretic scheduling approach, the work on architectures has explored the development of satisficing (heuristic) policies that can be subjected to experimental investigation.
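The anytime idea itself fits in a few lines. The sketch below is a generic illustration under simple assumptions (a refine step that proposes a variant and a quality measure to compare variants, both invented here); the cited deliberation-scheduling work decides when to stop decision-theoretically from performance profiles, rather than taking a fixed deadline as an argument.

    import time

    def anytime_improve(initial_plan, refine, quality, deadline_seconds):
        """Refine a plan until time runs out, then act on the best so far.

        Interruptible at any point: the heart-attack patient gets a good
        plan now instead of a provably best plan later.
        """
        best = initial_plan
        stop_at = time.monotonic() + deadline_seconds
        while time.monotonic() < stop_at:
            candidate = refine(best)
            if quality(candidate) > quality(best):
                best = candidate
        return best

    # Toy usage: 'plans' are numbers and refinement nudges them upward.
    plan = anytime_improve(initial_plan=0.0,
                           refine=lambda p: p + 0.01,
                           quality=lambda p: p,
                           deadline_seconds=0.05)
    print(f"plan quality at deadline: {plan:.2f}")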

These approaches provide general frameworks for managing reasoning in dynamic environments, but none of them has yet been applied to the problem of intention-conflict resolution in multi-agent, collaborative settings. Again, collaboration changes the parameters of the model; it affects the questions we ask and the kinds of answers we need to find. For example, we need methods for balancing individual obligations with group obligations and for weighing the costs of helping other agents against the benefits; remember Leslie's dilemma in the house-painting example. Agents need to be able to determine when it's best to ask others in the group for help. Finally, we need techniques that apply in states of partial knowledge; for example, agents typically will not know all the utility functions a priori.
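Leslie's dilemma suggests the shape such a method might take. The following sketch is only a caricature under invented assumptions (scalar costs and benefits, a my_stake_in_group weight, and a confidence factor standing in for partial knowledge of others' utilities); real intention reconciliation would have to handle much richer structures.

    def should_help(cost_to_me, benefit_to_group, my_stake_in_group,
                    confidence=1.0):
        """Help when the group benefit, discounted by my stake in the
        group's success and my confidence in the estimate, outweighs
        my private cost."""
        expected_gain = confidence * my_stake_in_group * benefit_to_group
        return expected_gain > cost_to_me

    # Picking up the paint: small detour, large benefit to the joint job.
    print(should_help(cost_to_me=1.0, benefit_to_group=5.0,
                      my_stake_in_group=0.5))   # True: the collaborator helps
    # A contractor with no stake in the joint outcome declines.
    print(should_help(cost_to_me=1.0, benefit_to_group=5.0,
                      my_stake_in_group=0.0))   # False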

Research Collaborations

In using the word "systems" in the title, I intended to pun. In this last section, I want to shift to another meaning. "Systems" can refer not only to the systems we build, but also to the systems that those systems are part of, the network in which the users of computer systems (including us, as researchers) and the systems themselves participate. So, I want to shift from collaboration as an object of study to collaboration as something we do.


I hope the need for collaborative efforts across AI research areas is evident from the problems I've discussed. Not only are such efforts needed to solve these problems, but also collaboration provides a good testbed for research in many of the areas I've mentioned: nonmonotonic reasoning, planning, reasoning under uncertainty, and reasoning about beliefs, as well as natural language, vision, and robotics. The 1993 Fall Symposium on Human-Computer Collaboration: Reconciling Theory, Synthesizing Practice (Terveen 1995) brought together researchers from several different areas. As the title suggests, the symposium emphasized those situations in which the collaboration is between a human and a computer. In the future, I hope to see many more symposia and workshops focused on particular technical issues that cross areas.

I want now to mention another kind of collaboration within the field, that of working together to define new directions and to explain to those outside the field what it is all about. In the spring of 1994, following a suggestion of some program managers at ARPA,10 AAAI organized a small workshop to define an agenda for foundational AI research at ARPA. Just prior to AAAI-94, at the suggestion of various National Science Foundation division and program directors, we held a somewhat larger workshop to discuss and delineate the ways in which AI could contribute to the National Information Infrastructure and in particular to the Information Infrastructure Technology and Applications program within it.

Each workshop brought together people from across the spectrum of AI research and applications. The participants were asked to become familiar enough with work outside their own individual research interests to be able to explain it to funders and to justify funding of work on key problems. Participants worked for the common good, even if their individual cost was higher. People had to invest time to learn about areas in which they were not already expert. I was struck by the enthusiasm everyone exhibited, and by the perseverance with which they stuck to the task. I expected our job to be hard, but it turned out to be even harder than I'd thought. We all learned a lot. I hope, and have reason to expect, that the reports (Weld 1995; Grosz and Davis 1994) will benefit the field. Given the current climate for research funding, I expect AAAI will be called on to do more of this, and AAAI will in turn need to call on you, our members, to help.

So, we've come full circle: we need to do what we study.

Let me return now to research issues and consider the need for collaboration with other areas of computer science. A major lesson of the 1980s was that AI could not stand alone. Rather, AI capabilities need to be designed as parts of systems built collaboratively with others. We need to work with people in other areas of computer science, getting them to build the kinds of platforms we need and working together to integrate AI capabilities into the full range of computer systems that they design. This is especially true as we turn to work on large networks of interconnected machines with different architectures. It should be obvious that collaboration with other computer scientists will be needed to build collaborative systems; these systems will use networks and access large databases; they'll depend on high-performance operating systems. The development and testing of experimental systems will surely be a cooperative endeavor.

At the research level as well, there are overlapping interests, questions in common, and benefits to be gained from coordinated research. For example, research in distributed systems asks many questions about coordination and conflict avoidance that are similar to those asked in DAI. Typically, this work has been able to constrain the environments in which it is applied so that simpler solutions are possible. For example, communication protocols can often depend on a small set of simple signals. Two-way cross-fertilization can be expected: AI researchers can examine the extent to which the protocols and solutions developed for distributed systems can be adapted to less restricted environments; conversely, by identifying limitations of these approaches and ways to overcome them, we will propose new solutions that may lead to more powerful distributed computer systems.


The July 1994 issue of the Communications of the ACM was a special issue on intelligent agents. It included articles from a range of computer science fields, including several on AI projects. I was quite pleased to see AI mixed in with all the rest. However, what struck me most was how much the rest of computer science could benefit from knowing what we know how to do. In particular, I was surprised by the (non-AI) articles that still view a user's job as programming, albeit with more and more user-friendly languages. Why not have an intelligent agent that's really intelligent? Why not build an agent that considers what the user is trying to do and what the user needs, rather than demanding to be told how to do what the user needs done, or a system that learns? In her article, Irene Greif from Lotus claims that the "next wave of innovation in work-group computing [will result in] products that encourage collaboration in the application domain" (Greif 1994). If so, the work we've done in various fields (natural-language processing and representation and reasoning, to name a few) could help save development time and effort and make better products. But for this to happen, we in AI have to work on the problems of modeling collaboration.

In this article, I have not discussed the question of computer systems that assist people in collaborating, or groupware. That, of course, is the focus of those people in the field of "computer-supported cooperative work." Much of the work in that field has focused on current applications. The kind of foundational work I've proposed we take up could provide a solid theoretical, computational foundation for work in this area and the basis for a significant increase in the capabilities of such systems.

Finally, many of the problems I've described will benefit from interdisciplinary work. Since the 1970s, various areas of AI research have drawn on work in psychology and linguistics, and we can expect these interdisciplinary efforts to continue. More recently, DAI and planning research has drawn on game theory and decision theory. As we aim to understand collaboration and to build systems that work together in groups, we will need also to consider work in those social sciences that study group behavior (such as anthropology and sociology) and to look into mathematical modeling of group processes.

Conclusion

As the acknowledgments make evident, in preparing the Presidential Address and this article, I have had the help of many people. Our interactions were often collaborations. Each of my collaborators, in this work and in my research, has shown me the benefits of collaboration, and my experience collaborating makes me sure that I'd rather have a computer that collaborated than one that was merely a tool. It's time we generalized John Donne's claim (figure 15). Designing systems to collaborate with us will make a difference to AI; it will make a difference to computer science; and, by enabling qualitatively different kinds of systems to be built, it will make a difference to the general population. At the very least, it will lead to a decrease in the number of times people say "stupid computer." Besides, working on collaboration is fun. I hope more of you will join in the game.

Acknowledgments

Candy Sidner, who was a partner in my earliest explorations of collaboration, and Sarit Kraus, who has been a partner in extending and formalizing this early work, have been the best of research collaborators. This article, and the talk on which it is based, simply would not have been possible without them. Karen Lochbaum applied our model of collaboration to discourse processing and in doing so asked critical questions, pointed out flaws and gaps, and helped improve the formalization. Michael Bender, Andy Kehler, Steven Ketchpel, Christine Nakatani, Dena Weinstein, and the other students in my graduate planning course have taught me a lot by the questions they've asked. Ray Perrault, Martha Pollack, Jane Robinson, Jeff Rosenschein, Stuart Shieber, and Loren Terveen provided helpful comments and advice. The participants in the AAAI/ARPA and AAAI/NSF workshops taught me a lot about current research in areas of AI outside my normal purview. Dan Bobrow and Pat Hayes convinced me I'd survive being President of AAAI and giving a Presidential Address. I thank them for their advice and support. My youngest collaborator, Shira Fischer, prepared the slides for the talk, some of which have become figures in this paper, under tight time constraints. She likes pointing, clicking, and clip art much more than I do. My most longstanding collaborator, Dick Gross, not only joined me in escaping playpens, but also taught me enough about medicine that I could construct a reasonable health care example.


No man is an island, entire of itself . . .

Figure 15. No Man Is an Island, Entire of Itself….


Notes

1. This is just a hunch, one I won't try to prove. However, the fact that a team of three robots from Georgia Tech won the office-cleanup part of the 1994 Robot Competition (Balch et al. 1995) suggests my hunch may be right.

2. A crew tromped through my kitchen and overtook the honey pot while I was preparing this Presidential Address.

3. This example was chosen for its linguistic, not its practical, features. Alas, I don't think paper-writing assistance is an application we should aim for anytime soon.

4. I'm a long-time fan of collaboration. My first research collaboration was while I was still a graduate student: the speech group in the AI Center at SRI functioned as a team. However, my affection for collaboration precedes these research collaborations by several decades. It seems to have taken root when my twin brother and I discovered that if we put our heads together and formulated a joint plan, we could get out of playpens, apartments, and back yards. I better not say anything about the trouble we also got into this way.

5. Various philosophers (Vermazen 1993; Bratman 1992) have also argued for types of intentions other than intending to do an action.

6. Some formalizations of collaboration (Levesque, Cohen, and Nunes 1990) force an agent to communicate when it drops an intention related to the group activity; for example, the formalization explicitly encodes an obligation to communicate if something goes wrong. However, in certain systems settings, it may be more efficient for other agents to detect a collaborator's dropped intention. Blackwell et al. (1994) take this approach in a mobile computing application. The extremely dynamic nature of this joint activity led the system designers to place the burden for maintaining mutual belief about commitment to the joint activity on the host (requiring it to check for the mobile system) rather than on the mobile system (requiring it to report a change).

7. An important open question is how agents can detect resource conflicts given incomplete knowledge about each other's recipes (Grosz and Kraus 1995).

8. I thank Steven Ketchpel for suggesting this example.

9. Another approach to multi-agent coordination bypasses the need for negotiation by designing agents to follow social laws (Shoham and Tennenholtz 1995). This kind of approach entails more effort at design time, but less run-time overhead; however, it requires either centralized control of design or collaborative designers.

10. The Department of Defense Advanced Research Projects Agency; the acronym has recently reverted to DARPA.

References

Aumann, R. J., and Maschler, M. B. 1995. Repeated Games with Incomplete Information. Cambridge, Mass.: MIT Press.

Axelrod, R. M. 1984. The Evolution of Cooperation. New York: Basic Books.

Balch, T.; Boone, G.; Collins, T.; Forbes, H.; MacKenzie, D.; and Santamaría, J. C. 1995. IO, GANYMEDE, and CALLISTO: A Multiagent Robot Trash-Collecting Team. AI Magazine 16(2): 39–52.

Bender, M. A., and Slonim, D. K. 1994. The Power of Team Exploration: Two Robots Can Learn Unlabeled Directed Graphs. In Proceedings of the Thirty-Fifth Annual Symposium on Foundations of Computer Science. Los Alamitos, Calif.: IEEE Computer Society Press.

Blackwell, T.; Chan, K.; Chang, K.; Charuhas, T.; Gwertzman, J.; Karp, B.; Kung, H. T.; Li, W. D.; Lin, D.; Morris, R.; Polansky, R.; Tang, D.; Young, C.; and Zao, J. 1994. Secure Short-Cut Routing for Mobile IP. In Proceedings of the USENIX Summer 1994 Technical Conference, June.

Bobrow, D. 1991. Dimensions of Interaction: AAAI-90 Presidential Address. AI Magazine 12(3): 64–80.

Boddy, M., and Dean, T. 1989. Solving Time-Dependent Planning Problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 979–984. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Bratman, M. 1992. Shared Cooperative Activity. Philosophical Review 101:327–341.

Bratman, M. 1987. Intention, Plans, and Practical Reason. Cambridge, Mass.: Harvard University Press.

Brooks, R. 1991. Intelligence without Representation. Artificial Intelligence 47:139–159.

Davis, R., and Smith, R. 1983. Negotiation as a Metaphor for Distributed Problem Solving. Artificial Intelligence 20:63–109.


Durfee, E. H., and Lesser, V. R. 1989. Negotiating Task Decomposition and Allocation Using Partial Global Planning. In Distributed Artificial Intelligence, Volume 2, eds. L. Gasser and M. N. Huhns, 229–244. San Francisco, Calif.: Morgan Kaufmann.

Etzioni, O. 1993. Intelligence without Robots (A Reply to Brooks). AI Magazine 14(4): 7–13.




Etzioni, O., and Weld, D. 1994. A Softbot-Based Interface to the Internet. Communications of the ACM 37(7): 72–76.

Glance, N. S., and Huberman, B. A. 1994. The Dynamics of Social Dilemmas. Scientific American 270(3): 76–81.

Greif, I. 1994. Desktop Agents in Group-Enabled Products. Communications of the ACM 37(7): 100–105.

Gringoire, T., and Saulnier, L. 1914. Le Répertoire de la Cuisine. Paris: Dupont et Malgat-Guéring, 81.

Grosz, B., and Davis, R., eds. 1994. A Report to ARPA on Twenty-First–Century Intelligent Systems. AI Magazine 15(3): 10–20.

Grosz, B., and Kraus, S. 1995. Collaborative Plans for Complex Group Action. Artificial Intelligence 86(1).

Grosz, B., and Sidner, C. 1990. Plans for Discourse. In Intentions in Communication, 417–444. Cambridge, Mass.: Bradford Books.

Horvitz, E. J. 1988. Reasoning under Varying and Uncertain Resource Constraints. In Proceedings of the Seventh National Conference on Artificial Intelligence, 111–116. Menlo Park, Calif.: American Association for Artificial Intelligence.

Kaelbling, L. P., and Rosenschein, S. J. 1991. Action and Planning in Embedded Agents. Robotics and Autonomous Systems 6(1).

Kinny, D.; Ljungberg, M.; Rao, A. S.; Sonenberg, E. A.; Tidhar, G.; and Werner, E. 1994. Planned Team Activity. In Artificial Social Systems: Lecture Notes in Artificial Intelligence, eds. C. Castelfranchi and E. Werner. Amsterdam: Springer Verlag.

Kosecka, J., and Bajcsy, R. 1993. Cooperation of Visually Guided Behaviors. In Proceedings of the International Conference on Computer Vision (ICCV-93). Los Alamitos, Calif.: IEEE Computer Society Press.

Kraus, S.; Wilkenfeld, J.; and Zlotkin, G. 1994. Multiagent Negotiation under Time Constraints. Artificial Intelligence 75(2): 297–345.

Levesque, H.; Cohen, P.; and Nunes, J. 1990. On Acting Together. In Proceedings of the Eighth National Conference on Artificial Intelligence, 94–99. Menlo Park, Calif.: American Association for Artificial Intelligence.

Lochbaum, K. 1994. Using Collaborative Plans to Model the Intentional Structure of Discourse. Technical Report TR-25-94, Harvard University.

Maes, P. 1994. Agents That Reduce Work and Information Overload. Communications of the ACM 37(7): 31–40.

Mish, F. C., ed. 1988. Webster's Ninth New Collegiate Dictionary. Springfield, Mass.: Merriam-Webster.

Newell, A. 1981. The Knowledge Level: Presidential Address. AI Magazine 2(2): 1–20, 33.

Pollack, M. E. 1990. Plans as Complex Mental Attitudes. In Intentions in Communication, eds. P. R. Cohen, J. L. Morgan, and M. E. Pollack. Cambridge, Mass.: MIT Press.

Pollack, M. E., and Ringuette, M. 1990. Introducing the TILEWORLD: Experimentally Evaluating Agent Architectures. In Proceedings of the Eighth National Conference on Artificial Intelligence, 183–189. Menlo Park, Calif.: American Association for Artificial Intelligence.

Prince, E. F. 1981. Toward a Taxonomy of Given-New Information. In Radical Pragmatics, ed. Peter Cole, 223–255. San Diego, Calif.: Academic.

Putzel, M. 1994. Roadblocks on Highway: Getting on the Internet Can Be a Real Trip. The Boston Globe, 22 July, p. 43.

Rombauer, I. S., and Becker, M. R. 1931. The Joy of Cooking. St. Louis, Mo.: A. C. Clayton Printing Company, 408.

Rosenschein, J., and Zlotkin, G. 1994. Rules of Encounter: Designing Conventions for Automated Negotiation among Computers. Cambridge, Mass.: The MIT Press.

Shoham, Y., and Tennenholtz, M. 1995. On Social Laws for Artificial Agent Societies: Off-Line Design. Artificial Intelligence 73:231–252.

Sycara, K. P. 1988. Resolving Goal Conflicts via Negotiation. In Proceedings of the Seventh National Conference on Artificial Intelligence, 245–250. Menlo Park, Calif.: American Association for Artificial Intelligence.

Terveen, L., ed. 1995. Knowledge-Based Systems (Special Issue on Human-Computer Collaboration) 8(2–3).

Vermazen, B. 1993. Objects of Intention. Philosophical Studies 71:223–265.

Weld, D. 1995. The Role of Intelligent Systems in the National Information Infrastructure. AI Magazine 16(3): 45–64.

Wilson, E. 1971. The Insect Societies. Cambridge, Mass.: Belknap Press/Harvard University Press.

Barbara J. Grosz is Gordon McKay Professor of Computer Science at Harvard University. Professor Grosz pioneered research in computational models of discourse (including models of focusing of attention, reference, and discourse structure). She has also published papers on collaborative planning and on natural-language interfaces to databases. Her current research interests include the development of models of collaborative planning for joint human-computer problem solving, coordinating natural language and graphics for human-computer communication, and determining the effects of discourse structure on intonation. A Fellow of the American Association for the Advancement of Science and the American Association for Artificial Intelligence (AAAI), she is also the Past-President of the AAAI, a Member and former Chair of the Board of Trustees of the International Joint Conferences on Artificial Intelligence, Inc., and a member of the Computer Science and Telecommunications Board of the National Research Council.
