
This article was downloaded by: [Lancaster University Library]
On: 25 April 2013, At: 04:20
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

The Information Society: An International Journal
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/utis20

IBM’s Chess Players: On AI and Its Supplements
Brian P. Bloomfield a & Theo Vurdubakis a
a Centre for the Study of Technology and Organisation, Department of Organisation, Work, and Technology, Lancaster University, Lancaster, United Kingdom
Version of record first published: 13 Mar 2008.

To cite this article: Brian P. Bloomfield & Theo Vurdubakis (2008): IBM’s Chess Players: On AI and Its Supplements, The Information Society: An International Journal, 24:2, 69-82

To link to this article: http://dx.doi.org/10.1080/01972240701883922

PLEASE SCROLL DOWN FOR ARTICLE

Full terms and conditions of use: http://www.tandfonline.com/page/terms-and-conditions

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden.

The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.

The Information Society, 24: 69–82, 2008
Copyright © Taylor & Francis Group, LLC
ISSN: 0197-2243 print / 1087-6537 online
DOI: 10.1080/01972240701883922

IBM’s Chess Players: On AI and Its Supplements

Brian P. Bloomfield and Theo Vurdubakis
Centre for the Study of Technology and Organisation, Department of Organisation, Work, and Technology, Lancaster University, Lancaster, United Kingdom

This article investigates the ways in which the reporting of technological developments in artificial intelligence (AI) can serve as occasions in which Occidental modernity’s cultural antinomies are played out. It takes as its reference point the two chess tournaments (in 1996 and 1997) between the then world champion Gary Kasparov and the IBM dedicated chess computers Deep Blue and Deeper Blue, and shows how these games of chess came to be seen as an arena where fundamental issues pertaining to human identity were contested. The article considers the dominant framing of these encounters in terms of a conflict between two opposed categories—“human” and “machine”—and argues for the essential role of human agency, the human supplement, in the performances of machine intelligence.

Keywords agency, artificial intelligence, computer chess, Deep Blue,human identity

David: Open the pod bay doors, please, Hal. . . Open the pod bay doors, please, Hal. . . Hello, Hal, do you read me? . . . Hello, Hal, do you read me? . . . Do you read me, Hal? . . . Do you read me, Hal? . . . Hello, Hal, do you read me? . . . Hello, Hal, do you read me? . . . Do you read me, Hal?
HAL: Affirmative, Dave, I read you.
David: Open the pod bay doors, Hal.
HAL: I’m sorry, Dave, I’m afraid I can’t do that.
David: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
(2001: A Space Odyssey [dir. Stanley Kubrick, 1968]; screenplay by Stanley Kubrick and Arthur C. Clarke)

Debates over what “intelligent machines” can, or cannot, do are often tinged with unease. This unease is the subject of one of the illustrations in Raymond Kurzweil’s The Age of Spiritual Machines (1999, p. 197). The picture depicts a human figure (representing the “human race”) deep in the throes of an existential crisis as he tries to identify those intellectual attributes that account for the superiority of human over machine “thinking.” His endeavour appears to have been repeatedly stymied by the relentless “march of the machines,”1 with discarded propositions—“only humans can prove important theorems,” “only humans can recognize faces,” “only humans can pick stocks,” etc.—that have fallen like autumn leaves in the winds of technological progress. Only a few such claims are still defiantly pinned to the walls. It is clear that his Canute-like2 hope to hold back the technological tide is as futile as it is misconceived. Nevertheless, his anxiety remains understandable. For a long time Occidental culture defined “thinking” as something that only humans could do, and “intelligence” as something that only humans could possess. Thus the possibility of an artificial intelligence can be perceived as an erosion of what it means to be human, a boundary violation that generates feelings of anxiety and unease (Douglas, 1966; Bloomfield & Vurdubakis, 1997, 2003). As Isaac Asimov notes, “the ultimate machine is an intelligent machine,” and our great fear is “that it will [ultimately] supplant us” (1981, p. 136).3 Kurzweil (1999) offers a timetable for when we might expect that to happen.4 We could therefore name (after Asimov) the cultural unease that attends the notion of an artificial intelligence “displacement anxiety.” Of course, the anxiety experienced by the human subject when faced with evidence of the agency of the object is a frequently recurring theme in Western popular culture, from Mary Shelley’s (1992) Frankenstein to the Terminator (dir. Cameron, 1984, 1991; Mostow, 2004) and Matrix (dir. Wachowski & Wachowski, 1999, 2003) film trilogies. Perhaps its most memorable dramatization is in Stanley Kubrick’s (1968) classic 2001, and in particular the famous scene in which the on-board computer HAL 9000 denies astronaut David Bowman reentry into the spaceship (quoted earlier in this article). The creation thus attempts to displace its creator, evicting him from Paradise.5

Received 18 November 2006; accepted 29 July 2007.
Address correspondence to Brian Bloomfield, Centre for the Study of Technology & Organisation, Department of Organisation, Work & Technology, Lancaster University, Lancaster LA1 4YX, United Kingdom. E-mail: [email protected]


We could say that AI is as much a cultural construct as it is a techno-scientific research program. In fact, it was the former some time before it became the latter. As the celebrations and reflections that marked HAL’s 1997 “birthday” have underscored, our collective understanding of what an “artificial intelligence” might be like owes perhaps as much to science fiction as it does to science (e.g., Stork, 1997).6 Clearly, the media play an important role in this trafficking in representations. The esoteric (Fleck, 1979) character of most technical and scientific work means that the only way it ever reaches a wider public is through the intercession of various representational media (Suchman, 2007). In the course of such mediations, techno-scientific developments become invested with specific cultural meanings and significance by being put in the context of familiar stories. Dorothy Nelkin (1987, p. 2) goes as far as to argue that for most people “the reality of [techno-]science is what they read in the press. They understand science less through direct experience or past education than through the filter of journalistic language and imagery.” However, such “popularization”—if that is the right term—is not, as often claimed, a mere rendering of scientific work into more digestible language. Rather, it constitutes a literary enterprise in its own right, one that uses particular technical and scientific developments as its sources of inspiration (Caro, 1997) while, arguably, also contributing to the shaping of cultural expectations regarding techno-scientific work (Fleck, 1979).

In what follows we attempt a critical examination of some aspects of what we might call the “literary enterprise of AI.” More specifically, we focus on the 1996 and 1997 chess competitions between world champion Gary Kasparov and IBM’s dedicated chess computers Deep Blue and Deeper Blue. These contests are typically viewed as epitomizing the process through which claims about what intelligent machines are, or are not, capable of, find their way—metaphorically speaking—from the wall to the floor of Kurzweil’s distressed man (e.g., Dennett, 1996; Pandolfini, 1997; Cook & Kroker, 1997). In contrast, our aim is to investigate these tournaments as cultural events, as staged occasions for the rehearsal and dramatization of various philosophical and moral conflicts characteristic of Western modernity. Of course, imagery of conflict tends to be a rather overused commodity. Media accounts, in particular, often endeavor to drum up interest by (over)using the language of conflict to frame the otherwise “dry” techno-scientific “facts.” And yet the success of such accounts in attracting readers and viewers (demonstrated by the migration from specialist to general news and media) crucially depends on whether they resonate with their cultural myths, their “deep stories”: for instance, the conflict between creator and creation that forms a central element in the Frankenstein mythos (Turney, 1998). The (re)enactment of such conflicts in current cultural problematizations of artificial intelligence is discussed here in terms of the ongoing discursive (re)constructions of the “human,” the “machine,” and of “intelligence,” the presumed sign of ontological difference between them. After a brief discussion of the historical antecedents, an analysis of the chess games played between the emblematic representatives of the two categories—Kasparov and Deep(er) Blue—is provided as a way of exploring how a game of chess came to be presented as an arena where fundamental issues pertaining to human identity were contested. Finally, the implications of this analysis for what we might call the “public understanding” of AI are discussed.

“LAST MAN STANDING”

The “facts of the matter” are already well known: In February 1996 in Philadelphia, the (then) world chess champion Gary Kasparov played a six-game chess tournament against IBM’s Deep Blue. Kasparov lost the opening game—the first time the world champion had lost to a machine in a game played at the classic rate (of 3 minutes per move)—but then went on to win the tournament by 4 points to 2. The year of HAL’s birthday was also the year when the much anticipated rematch took place in New York. In May 1997 Kasparov played another six games against the second generation of Deep Blue, Deeper Blue. The match opened with a Kasparov win followed by a Deep(er) Blue victory in the second game. Games 3, 4, and 5 were draws, and the computer proceeded to win in the sixth game when Kasparov resigned after only 19 moves following an uncharacteristic blunder. This clinched the tournament with 3.5 points for Deep(er) Blue versus 2.5 for Kasparov. Human mastery of the world of chess, observers proclaimed, had come to an end. The “Information Age,” an IBM executive proclaimed, “has finally begun” (cited in Cook & Kroker, 1997).

Indeed, a look, however cursory, at the media coverage inspired by the two tournaments—“Deep Blue’s victory proves giant step for computerkind” (Harding & Barden in The Guardian, 1997, p. 5); “IBM Chess Machine Beats Humanity’s Champ” (Weber in The New York Times, 1997)7—is enough to show that something more was involved than the testing of the latest chess computer. Rather, in media narratives the tournaments provided a way of articulating broader cultural anxieties and expectations about the human sphere being encroached upon by the machine: “Remember, Neanderthal man once thought he was in control. If machines can master the Sicilian Defence (Scheveningen Variation), none of us is safe” (Moss, 2005, p. 21).

The tournaments were thus opportunities to reflect on Occidental8 modernity’s relationship with its technology, something that has provoked profound debate over a considerable period of time. Indeed, the issues currently

preoccupying the literary exponents and opponents of AI could be seen as prefigured in the Enlightenment fascination with the notion of the automaton and the associated reflections on the nature of the relationship between the natural and artificial, the human and the mechanical (e.g., Schaffer, 1994, 1995; Winner, 1985; Bloomfield & Vurdubakis, 1997). If we were to attempt a brief genealogy of the Asimovian “displacement anxiety” and of the role of the chessboard as a site for acting it out, a good place to start might be Descartes’s Sixth Meditation (1968). As is well known, Descartes attempted to trace out the boundaries of humanity vis-à-vis animals and machines—entities with whom humans may appear to have a number of (for Descartes, superficial) similarities. Well-constructed artificial animals, Descartes argues, could in principle deceive the observer, since animals are in effect little more than elaborate machines—as are human bodies. In contrast, automata constructed in a human form would be easy to expose, since such machines would by definition lack the ability to reason and communicate with other human beings.

By the second half of the 18th century, however, even the powers of human reason itself did not appear immune from the mimetic powers of automata. Thus Julien Offray de la Mettrie (1912, orig. 1747) argued in his L’Homme Machine (Man, a Machine) that the differences identified by Descartes between humans and mechanical beings were mere differences of degree. Thus,

[Jacques] Vaucanson, who needed more skill for making his [automaton] flute player than for making his duck, would have needed still more to make a talking man, a mechanism no longer to be regarded as impossible, especially in the hands of another Prometheus.9

As Simon Schaffer (1994, 1995) has shown, it was Von Kempelen’s mechanical chess-playing Turk that best exemplified the ways in which the ability of automata to simulate human behavior was seen to challenge the ontological boundaries of reason. Throughout the 1780s the Turk toured the capitals of Europe taking on (and usually defeating) all opponents.10 Catherine the Great of Russia and Napoleon are both said to have lost to the all-conquering Turk.11 Commenting on the automaton’s career in 1783, Chrétien de Mechel argued that

The most daring idea that a mechanician has ever ventured to conceive was that of a machine which would imitate in some way . . . the master work of Creation. Von Kempelen has not only had the idea, but he carried it out and his chess-player is, indubitably, the most astonishing automaton that has ever existed. (reproduced in Chapuis & Droz, 1958, p. 364)

The automaton’s 1784 London visit was accompanied by the publication of a tract by Von Kempelen’s associate and biographer, Carl Gottlieb Von Windisch. Entitled Inanimate Reason, it challenged contemporary men of learning to solve the riddle of this mechanical Sphinx by identifying whether and what deception was involved. At the same time, the very possibility of “inanimate reason,” however playfully employed, also hinted that the Cartesian boundaries of humanity were far less secure than had hitherto been imagined.

Over the next three decades the workings of the Turk, now owned and managed by musical engineer and impresario Johann Maelzel (inventor of the metronome), were the subject of both learned treatises and wild flights of fancy. It was during another visit to London that mathematician Robert Willis finally solved the puzzle of the Turk. On the basis of a detailed study of the dimensions of the apparatus he showed that there was enough space in the machine to conceal not a chess-playing dwarf, as many contemporaries had suspected, but a normal-sized human being. The real purpose of the complex mechanism was to produce sufficient noise to cover any sounds made by this hidden player.

Nevertheless, the invention of what we now call the “computer” (the name itself being the usurpation of a title formerly accorded to human calculators) renewed the specter of a mechanism crossing the Cartesian boundary that separated the automaton’s world of mimesis from the human world of agency. Machines, it seemed, might soon come to fulfill the ambition of the modern Prometheus invoked by La Mettrie by also replicating human cognitive functions. Remarking on Charles Babbage’s “analytical engine” (now routinely described as the first design for a digital computer), one eminent commentator, the physicist Sir David Brewster, suggested that:

Great as the power of mechanism is known to be, yet we venture to say that many of the most intelligent of our readers will scarcely admit it to be possible that astronomical and navigational tables can be accurately computed by machinery; that the machine can itself correct the errors which it may commit; and that the results of its calculations, when absolutely free from error, can be printed off, without the aid of human hands, or the operation of human intelligence. All this, however, Mr Babbage’s engine can do. (quoted in Cohen, 1966, p. 113)

Whatever we might mean by AI, it reflects the belief that (perhaps all) “intelligent behaviour can be realised computationally” (McCarthy, 1999). Since the early days of AI research in the 1950s, playing chess has been seen as a particularly accomplished feature of human problem-solving capabilities and as such has provided a suitable challenge for the ingenuity of AI researchers.12 John McCarthy (2002, p. 6) quotes AI researcher Alexander Kronrod to the effect that “Chess is the Drosophila of AI.”13 AI researchers, in other words, study chess for much the same reason geneticists study the fruit fly, as the gateway to greater things: the understanding (and replication) of intelligent

behavior. As Claude Shannon (1950, p. 55) put it in 1949:

Although [Programming a Computer for Playing Chess is] perhaps of no practical importance, the question is of theoretical interest, and it is hoped that a satisfactory solution of this problem will act as a wedge in attacking other problems of a similar nature and of greater significance.

But there is another reason for the role chess has played in the cultural imagery of AI. In particular, chess has long been used to render “intelligent behavior” into a spectacle. As Hamilton (2000) argues, chess contests generate “the effect of spectacular intelligence.” This makes chess an obvious means for demonstrating progress toward the eventual (if vague) goal of an “artificial intelligence.” For instance, in 2001 HAL is seen playing chess and winning, thereby demonstrating the machine’s status as an “artificial intelligence.”14

The game of chess does, of course, carry a heavy load of culturally specific connotations of rational control, mental conquest, and conflict15 (e.g., Figure 1, which accompanied part of The Guardian’s coverage of the 1996 tournament between Kasparov and Deep Blue).

FIG. 1. Kasparov vs. Deep Blue (courtesy of Spike Gerrell).

This image of the chessboard as an intellectual battlefield is, however, by no means unavoidable.16 In Italo Calvino’s (1979) Invisible Cities, for instance, the chessboard is inter alia a site for a meeting of minds between those who come from different worlds: the Mongol conqueror Kubla Khan and Marco Polo the Venetian merchant (see Kallinikos, 1995). It is, nevertheless, the imagery of combat that has tended to haunt discussions of machine prowess in chess. Thus astronaut Frank Poole’s defeat by HAL in 2001 seems symbolically to presage the latter’s murderous rampage in its attempt to take control of the spaceship—to which Poole falls first victim.

As early as 1958, Herbert Simon and Allen Newell had made the prediction that a computer would be world chess champion within a decade—thus representing the quest for AI in the form of a struggle between human being and machine. Clearly that prediction turned out to be overoptimistic, but the notion of chess as a site of contestation quickly became instituted through a series of contests or public demonstrations of which the

Deep(er) Blue triumph over Kasparov is now billed as the climax.17

One of the longest-running surprises in computing is just how difficult it has been to duplicate the ability of top human chess players with the aid of computers. For over thirty years now, computer scientists have been predicting that computers would be able to play better than any human ‘within five to ten years.’ After such a long period of failed predictions, it may seem that the alternative view, preferred by some psychologists and (understandably) chess players, that human judgement would always remain superior to algorithms, may be correct. It is my opinion that only the timescale was wrong, and that the 1990s will start a new era—one in which chess no longer belongs to the arena where human intellect reigns supreme, but is transferred to the growing domain of intellectual tasks performed best by machine. (Beal, 1991, p. vii)

For Herbert Simon it was clear that whatever the official IBM line might be, Deep Blue was a realization of the sort of machine he had anticipated some 30 years previously. Commenting on IBM’s evident preoccupation with the computational aspects or qualities of Deep Blue, Simon (Simon & Munakata, 1997) emphasized its knowledge of chess. During those 30 years, one of the principal opponents of the view that “it’s only a matter of time” had been Hubert Dreyfus, a persistent critic of the claims of what is sometimes referred to as “the strong program” in artificial intelligence research (e.g., Dreyfus, 1972; Dreyfus & Dreyfus, 1992). In a contest staged at MIT in 1968, Dreyfus was himself beaten by a chess program—MacHack.18

“L’Affaire Dreyfus,” as it has come to be known (McCorduck, 1979),19 immediately became a powerful metaphor for the coming victory for the AI research program over its detractors.

More than two centuries after Kempelen’s Turk and Windisch’s provocative tract, we are still, it seems, wrestling with the notion of the thinking machine. At the close of the 20th century, chess, where artificial intelligence, “inanimate reason,” first dared speak its name, was once more providing a focus for Occidental modernity’s ambivalent reflections on its emblematic object, the machine. The vocabulary used in the press coverage of the contests is revealing here. For instance, the IBM computers were frequently compared to monsters: for example, by David Levy in The Guardian (Levy, 1995); by Ray Monk in The Observer (1996); and (reportedly) by Kasparov himself (Tran, 1996b). In what sense do Deep Blue and Deeper Blue qualify as monsters? One obvious answer is that the term alludes to their awesome powers of calculation. Indeed, the topic figured prominently in media accounts of Deep Blue.

The new system will be able to examine 50-100 billion positions during the three minutes allocated to each move. (Levy, 1996, p. 6)

“We’ve got one of the greatest concentrations of computing power ever focused on a single problem working here.” (J. Hoane, IBM scientist, quoted in Tran, 1996a, p. 12)

The calculative powers of Deeper Blue were, if anything, twice as awesome. According to Feng-Hsiung Hsu, IBM scientist, speaking to USA Today:

“It can analyse 200 million chess positions per second, twice as many as last year (i.e., 1996). At times, Deep Blue will evaluate up to 74 ply (moves by each player) in advance. Kasparov and fellow chess masters typically evaluate about 10 moves in advance.” (Kim, 1997)

What is crucial, however, is that such (monstrous) calculative powers are seen as manifestations of what we might call a “cold reason,” that is, reason divorced from the human condition. Mary Douglas (1966) has suggested that “monsters” are to be understood as constituting violations of a morally charged and culturally rooted classificatory order. Machines are commonly conceived as instruments of human will and extensions of human agency (Kallinikos, 1995). In contrast, the framing of the Kasparov–Deep(er) Blue competitions in confrontational terms hinted darkly at the prospect of an autonomous technology, a technology outside, or even contesting, human control.20 Thus, the notion of the “intelligent machine” merges contradictory signifiers into a single identity. A machine, conventionally understood as an object designed for, and dedicated to, the assistance of human actors, is here (re)presented as an independent agent. The representation and perception of Deep(er) Blue as something monstrous is thus an expression of its anomalous status as an entity which appears not to respect classificatory boundaries. In other words, Deep(er) Blue may seem monstrous to the extent that its properties threaten the boundary between subject and object, human agents and machines. Indeed, following his defeat in the second game of the 1997 rematch, Kasparov is on record as describing Deep Blue as “an alien opponent” (quoted in Barden, 1997). We might see the philosophical heat and media interest attached to the “human versus machine” chess matches as a confirmation that what was seen to be at stake is nothing less than human self-understanding. People no longer compete with trains as they did at the dawn of the railway age; it is now taken for granted that machines are faster, stronger, etc. What is disturbing for so many is the notion that the mind, the locus of free will and human choice, can be accounted for in mechanical terms and replicated in a machine, the representative par excellence of standardization and predictability (Kallinikos, 1992).

Against this backdrop, Kasparov appeared to have been anointed by the media as the champion of humanity coming forth to battle the mechanical monster, a title that he in turn appeared in all modesty happy to accept. Clearly Deep or Deeper Blue could beat most human chess players

easily. Indeed, this was why so much hung on the Great Human Hope, Kasparov the champion, who, it has been claimed, felt that it was his “mission to safeguard the chess world from the march of the machine” (Jones, 1994; see also Wright, 1996).

Gary Kasparov . . . sees himself as the “last man standing” in a mission to save chess from being turned into a mathematical formula.21

In fact the implications of the seemingly unstoppable march of computer chess machines were also rehearsed after a previous defeat for Kasparov in 1994. For example, in discussing the success of the relatively cheap computer program Chess Genius 2,22 in a match played at high speed,23 Robertson commented in The Times (1994, p. 19):

“If a few silicon chips can surpass one of the human race’s intellectual champions, what does it say for the rest of the herd? If it is not brainpower that sets us apart from our fellow-mammals, what is it?”

Kasparov himself echoed this argument in the run-up to the 1997 rematch:

“People want to believe that the world champion is somehow protecting the most sensitive area of our self-esteem. Brain superiority is something that keeps us in charge of the planet. And if it is challenged in chess, who knows what will happen?” (interview with USA Today, Kim, 1997)

Echoing Descartes, the suggestion here is that challenges to the boundary between humans and machines—represented by contests between human experts and chess programs—resonate with the boundary between people and animals. Put another way, the concession of esoteric reasoning skills to machines could be read as a pointer to the mechanical basis of intelligence. Therefore it is not simply the human–machine boundary represented by esoteric skills such as chess that is at stake, but the very distinctiveness of the human in the evolutionary order of things.

Of course, another tenable reading of the Kasparov–Deep(er) Blue contest is to emphasize the fact that a computer is after all a human construction and that therefore its achievements might be cause for (human) celebration rather than existential angst. In this version, Deep and Deeper Blue stand out as pinnacles of human ingenuity and creativity. In fact, it is possible to find both gloomy and celebratory views of the contest within the same article (e.g., Time, Krauthammer, 1996). However, in the main, commentaries and media treatments of the contest predominantly employed a language of anxiety rather than celebration. Kasparov’s defeat was claimed to represent a milestone in the zero-sum contest between humanity and machinery: “the defeat of human intelligence and the triumph of digital intelligence” (Cook & Kroker, 1997).

Understandably, IBM was at pains to distance Deep(er) Blue from the Frankenstein complex as portrayed in 2001, the Matrix, or Terminator:

This match is not about competition between people and machines. It is a demonstration of what makes us human beings so different from computers. (emphasis added)24

In addition, IBM’s prematch and postmatch publicity made great play of how Deep(er) Blue’s technology would soon be adapted for use in weather prediction, pharmaceutical research, financial markets, and other applications beneficial to humanity (e.g., Hargrave, 1997; Kim, 1997).

THE RISE OF THE MACHINES?

Who or what deserves the credit for beating Kasparov? Deep Blue is clearly the best candidate. Yes, we may join in congratulating Feng-hsiung Hsu and the IBM team on the success of their handiwork, but in the same spirit we might congratulate Kasparov’s teachers, handlers, and even his parents. But no matter how assiduously they may have trained him, drumming into his head the importance of one strategic principle or another, they didn’t beat Deep Blue in the [1996] series; Kasparov did. (Dennett, 1996)

However prominent the "(hu)man versus machine" billing might have been, there are reasons that make the identification of Kasparov and Deep(er) Blue with those ontological categories less straightforward than Dennett suggests or the press coverage would lead us to expect. These reasons can be glimpsed in the coverage of the less remarked upon aspects of the games. First, we have already mentioned, or alluded to, the fact that facing Kasparov across the chessboard was a succession of IBM scientists who physically moved the pieces as instructed by Deep(er) Blue.25 Further, it is notable that before Game 2 of the first tournament, Deep Blue's handlers misplaced a file instructing the machine how to play the opening moves, thereby leaving the computer "to improvise" (Kim, 1997). On other occasions the operators placed pieces in the "wrong position"—we might say that the operators had not followed what the machine (Deep Blue) had said/instructed. In one such case Kasparov had already made his move by the time the "mistake" was corrected (ibid.). These instances, together with their interpretation as glitches or in terms of operator error, reinforced the notion that (logical) machines do not make errors whereas (fallible) human beings do.

Second, there were accounts of how Deep Blue had broken down during play in the fourth game of the 1996 tournament:

Although the world champion confessed to being exhausted after the game, his opponent is also showing signs of strain and crashed after Kasparov made his apparently strong 25th move. But unlike the computer, Kasparov became visibly agitated and nervously paced the stage while IBM technicians took 15 minutes to repair the fault and re-establish contact with Deep Blue's base. The problem seems to have been stress, with the computer reacting adversely to Kasparov's cautious game. (Barden, 1996a, p. 26)

This commentary can be seen as a typical example of the tendency to anthropomorphize the machine, to project human psychological parameters onto it and thus analyze its "behavior" in such terms. In fact, Kasparov's reputed advantage over his usual (human) opponents due to his very stature in chess was frequently cited as one factor that would not have a bearing on his encounters with Deep Blue. Thus the cold logic of the machine would not be bowed in any way by his reputation. As things turned out, particularly in the second tournament, many commentators observed that this time it was Kasparov who was showing signs of psychological stress. Describing Kasparov's state of mind following the defeat in the second game of the rematch, Khodarkovsky and Shamkovich (1997, p. 203) state that he was "terribly frustrated" by his inability to fathom Deep(er) Blue's play and that in consequence this affected his preparation for the next encounter. It would seem that the pressure really told in the final game:

"The computer was beyond my understanding and I was scared." (Kasparov, quoted in Tran, 1997, p. 1)

We could therefore conclude that a logic that could not be fathomed even by the world's greatest chess champion had indeed won out over frail human psychology. However, if we look a little further and again consider the role of the IBM team, then matters are not so straightforward. To see why, we can usefully explore some of the machinations between the two camps that followed Kasparov's defeat in the second game of the 1997 rematch. After that game, which puzzled many other chess experts as well as the Kasparov team, the latter demanded to see the printouts kept by the IBM team. Kasparov wanted to understand the logic behind some of the moves. This was refused by his opponents. Such was the furor that allegations of cheating began to circulate in the media, suggesting that the IBM team had actually been offering human assistance during the game. Thus one headline ran: "Computer cheats at chess says champion" (The Sunday Times, 1997, p. 1). The allegation of cheating has an intriguing parallel with the case of Von Kempelen's Turk, where it was assumed by many that human agency had to lie behind the automaton's apparent powers at chess—there had to be a trick. In the case of Deep(er) Blue a similar line of reasoning and attribution of skills and agency was in operation: Either the computer must have received human help in the second game of 1997 because its moves were too human for it to be otherwise, or chess had indeed been reduced to calculation and Kasparov was doomed.26

The IBM team offered to make the printouts public, but only after the end of the tournament. The matter went to the Appeals Board and eventually the IBM team offered to pass the printouts of the second match to an independent third party, a computer scientist named Ken Thompson. However, Khodarkovsky and Shamkovich (1997, p. 209) argue that during the third game the printouts had still not been sent to Thompson and that C. J. Tan of the IBM team had even asked whether they were still needed. As they saw it: "Once again C. J. Tan was stonewalling us." This "war of the printouts" (Khodarkovsky & Shamkovich, 1997, p. 202) was adduced to have been a form of psychological warfare against Kasparov. Whether this was intentional or not, it is evident that the role of the IBM team was more than that of mere operators. Indeed, even the nature of the IBM team—the exact complement of personnel—would seem to have been somewhat problematic. More specifically, Khodarkovsky and Shamkovich (1997, p. 204) voice their concerns about the precise role of the two grandmasters (Nick De Firmian and John Fedorowicz) who were secretly brought in as advisors—a factor that further contributed to the pressure on Kasparov: "[it] added fuel to our suspicions that unknowingly, we were facing the efforts of an untold number of grandmasters. Why was their participation kept secret?"

These arguments can be viewed in terms of a struggle over the identity and ontological status of Deep(er) Blue: If Kasparov's opponent was just a machine, then why did its play seem so much "unlike" a machine at times? The converse of the supposition of cheating was the suggestion that Deep(er) Blue could play real chess:

"The scientists say that Deep Blue is only calculating, but it showed signs of intelligence in our second game." (Kasparov, quoted in Barden, 1997, p. 25)

This relates to a third aspect of the blurring of categories in the Kasparov–Deep(er) Blue encounters—namely, the continuity of the machine. It seems that to a certain degree Deep(er) Blue was actually reprogrammed in between games in order to try and prevent Kasparov from exploiting any perceived weaknesses he might have exposed in earlier games.27 Indeed, reflecting on the victory at the end of the 1997 rematch and the improvements to Deep Blue, the IBM team noted that as well as Deep(er) Blue's increased computational power and additional chess knowledge, they had also developed their ability to change parameters between games.28 This lack of transparency concerning the computer's programming in any particular game, and thus the logic behind the moves chosen, was another source of the intense frustration experienced by Kasparov in seeking to identify its Achilles heel.

Fourth, it is useful to focus on the games that were drawn. The rules governing each of the tournaments stated:

The operator may offer a draw, accept a draw or resign on behalf of Deep Blue. This may be done with or without consulting Deep Blue.29

Shortly after the first tournament, in an interview with Scientific American magazine, IBM team member Murray Campbell stated: "The computer doesn't ever accept draws. If we want to, we can accept a draw, but it will never accept a draw. It's in the rules."30 Accordingly, we can examine the matter of how draws were offered (or rejected) and thus how the drawn games were achieved—not in terms of chess play itself (that is, the specific moves which precipitated a draw), but rather as an agreement. For example, the fourth game of the 1996 match was drawn, but only after some interesting twists. Writing in The Guardian, Leonard Barden (1996a, p. 26) relates what happened:

At move 41 he (Kasparov) had the ignominy of having his draw offer declined, but then he fought back with a rook-for-knight sacrifice and accepted a move-50 draw proposal from Deep Blue's operator. (emphasis added)

This was followed by an even more telling incident in the fifth game when, after 23 moves, Kasparov offered a draw. Again, we refer to The Guardian's chess correspondent to describe what happened next:

The machine was not programmed to respond and, after its operator declined the proposal "in the interests of science," it made a series of weak moves and was soon a bishop down. (Barden, 1996b, p. 21, emphasis added)

Kasparov went on to win the game.31

A similar pattern emerges if we examine the draws in the 1997 tournament:

after completing his 48th move . . . Kasparov offered a draw to Deep Blue's Feng-Hsiung Hsu (one of the IBM scientists). Hsu conferred briefly with other members of the team, then accepted.32

What is important about such instances is the way in which they illuminate the nature of the relationship between the human beings involved and the computer. Thus, the offering and acceptance (or rejection) of a draw was a matter of judgment for the human operator and the rest of the IBM team. A fifth but related area concerns the matter of how games are won or lost. For instance, the sixth game of the 1996 match (won by Kasparov) did not result in an actual checkmate:

Deep Blue's operator had little choice but to resign, since all the Black pieces were tied down on the Queen's side, leaving the King virtually defenceless.33

In other words, the "operator" interpreted the position on the chessboard and decided that further play was pointless. Again it was the operator's "human" initiative that was involved in resigning the first game of the 1997 tournament

when the position seemed futile: "on move 45 . . . the Deep Blue team tendered its resignation."34

Finally, and concerning the games that were won, De Firmian (1999, p. vii), U.S. chess champion and one of the grandmasters recruited to Deeper Blue's winning team, provides a significant clue when he states:

Lest history evaluate this epic "Man vs Machine" contest incorrectly, Kasparov played much worse than usual, trying a faulty anti-computer strategy when he would likely have won by normal play. I had a special perspective in this match as I worked with IBM on this project and set Deep Blue's opening moves for its two victories. In these games the computer emerged with a large opening advantage (before it even began to "think"), which put Kasparov in a hole. Chess openings are very difficult for computers unless they simply repeat human moves. Imagination and strategic thinking will always be two strengths humans have over computers. (emphasis added)

Put another way, De Firmian is in essence claiming that it was because of him (as the one who set the opening moves) that Deep(er) Blue was able to beat Kasparov in 1997.

Taken together, these features of the two tournaments indicate the difficulty in sustaining an account that consistently represents the matches as being between two distinct or (pure) categories—human and nonhuman agency. It is possible of course to analyze Kasparov's role in the same terms we have applied to Deep Blue. Accordingly, Kasparov could be viewed as the front man for a team effort—including computers as well as other human beings. On the one hand, Kasparov had a team of advisors, though it would appear that their advice was not always appreciated. For example, Tran (1997) reported in The Guardian that in the postmortem of the 1997 tournament Kasparov noted his poor preparation as well as "bad advice, saying his greatest mistake was to listen to computer specialists." On the other hand, Kasparov has developed his game via a PC chess program called HIARC:

"These days all chess professionals use computers to check their analysis. That's exactly what I do. . . . Today it takes me ten minutes to do what took five hours a few years ago." (Kasparov, interviewed in USA Today, quoted in Kim, 1997)

Indeed, commenting on the preparations for the 1997 rematch, Khodarkovsky35 and Shamkovich (1997, p. 180) report: "Gary and his team took up residence in the Plaza hotel and in his suite were some special companions: three computers!"36 These machines were used in between games to analyze Deep(er) Blue's play and thus inform Kasparov's stratagems for dealing with it.

But the entanglement of human and machine in Kasparov's play can perhaps be best illustrated with reference to the second game of the 1997 rematch tournament, in which Kasparov resigned at the point of his 46th move, apparently seeing his position as hopeless. In the immediate aftermath there was much praise for the superior play of Deeper Blue, but later analysis involving various grandmasters and their own chess computers revealed a more complex situation. As it transpired, Deeper Blue's 46th move was a mistake in that it could have let Kasparov secure a draw by moving to a situation of perpetual check. The fact that Kasparov—uncharacteristically—failed to see this possibility, hence his premature resignation, has been attributed to his presumption that the computational power of Deeper Blue, which involved the ability to look ahead at many layers of possible moves and which underpinned its tactical strength, had analyzed the situation thoroughly (see Khodarkovsky & Shamkovich, 1997, pp. 193–206; King, 1997, p. 73). Thus, we might say that while Kasparov tried to play in a way that would exploit the weaknesses of computer chess—for instance, in terms of strategy and the relative positioning of pieces—he also informed his own decisions by the computer's much trumpeted computational powers and ability to analyze many millions of moves per second; in this instance, we might suggest that his own play was, so to speak, crucially shaped by expectations concerning the "agency of the machine."

It is therefore worth revisiting the argument made by De Firmian (1999) and others to the effect that computer opponents have tended to elicit a different kind of play from the human champion(s). It is not merely that Kasparov, or for that matter his successor(s) in the role, adapt their play to neutralize the perceived strengths and exploit the weaknesses of their machine adversaries. Rather, something more may have been going on. Commenting on the inability of Kasparov and of his successor to the world title, Vladimir Kramnik,37 to manage anything better than draws in their respective (post-Deep Blue) "man versus machine" tournaments against what most commentators consider inferior chess programs, Feng-Hsiung Hsu (2002, p. 275) questions whether

the two computers [were] really playing at Vladimir or Garry's level? The match scores said so. Do I believe that? Yes and no. Vladimir and Garry were playing at the computers' level but the computers were not playing at the level that Vladimir and Garry are capable of.

The "human champions" were playing at the level of their computer opponents rather than vice versa. In particular, the human champions appeared not to be making full use of their main advantage—i.e., chess knowledge. (Recall, for instance, De Firmian's 1999 comments cited earlier, or Kasparov's resignation instead of forcing a draw in the second game of the 1997 tournament—not the reaction one would expect from a grandmaster.) "Why couldn't the top two humans bring their chess knowledge to bear in [those games they lost]?" (Hsu, 2002, p. 276). His hunch is that "they [Kramnik and Kasparov] were overwhelmed by the pressure not to lose [to a computer]" (ibid., emphasis in original). They were, in other words, brought down not by the calculative powers of their opponents but under the weight of their own (self-assumed) symbolic role.

FIG. 2. Vladimir Kramnik vs. Deep Fritz (2006).38

DISCUSSION

As the stubborn persistence of the "(hu)man vs. machine" frame reminds us (e.g., Fig. 2), chess contests have long functioned as allegories for Occidental anxieties concerning the status of the human subject. It is therefore hardly surprising that human chess champions have refused to accept their defeat by computer programs with equanimity and have been unable to resist scratching that particular scab again and again. Immediately after his 1997 defeat, Kasparov demanded a rematch with Deep(er) Blue, this time offering to compete with it for the World Championship (!), a rematch that for whatever reasons never materialized (e.g., Hsu, 2002, pp. 270–271). Deep(er) Blue has now reportedly been dismantled, taking its secrets with it (Hartson, 1997).39 Its short but glorious career, IBM claimed, was but a prelude to the application of its technology in other areas of expertise. Those who had their suspicions regarding its 1997 success will have to leave their questions unanswered. But were they the right questions to start with?

Lucy Suchman (2007, p. 1) has argued that questions of agency in computational artefacts beg another question, namely of how the entities in question come to be framed/configured as human or artificial "prior to our analyses." To answer this question we need to look again at our ways of sorting out what is human from what is not. The late Jacques Derrida (1976) used the term "supplement" as shorthand for that "inessential extra" that needs to be added in order to complete and enhance the presence of something which already claims to be complete and self-sufficient. But clearly, Derrida argues, anything that needs a supplement cannot be already complete on its own. Following this line of argument, the "human" and the "machine" contestants of the Kasparov versus Deep(er) Blue contests can be said to be deeply involved with one another in a relationship of mutual (in)determination and supplementarity. "Human" and "machine" appear as, or are claimed to be, wholly present and self-sufficient, and to need the other merely as an appendage, an afterthought (e.g., see Dennett, 1996, quoted earlier in this article). However, upon more systematic (re)examination, what was described as external and inessential often turns out to be compensating for an essential lack, to be filling in a hole, in the original. In a not dissimilar move, Bruno Latour (1993, p. 78) has argued that while the world is made up of "hybrid" entities, Occidental accounts of agency tend to break such hybrids apart in order to identify what came from the subject (the human) and what came from the object (the technology). In this account, we start with hybrids and try to obtain the "human" and the "technical" out of them rather than vice versa. Agency is seen to reside either in the machine or in the people behind the machine. In Latourian terms, however, both are products of processes of purification. As he notes in A Dialog in Honor of HAL (Latour & Powers, 1998):

The idea of a test matching a naked, isolated intelligent human against an isolated naked automated machine seems to me as unrealistic. . . . Things and people are too much intertwined to be partitioned before the test begins, especially to capture this most heavily equipped of all faculties: intelligence.

Human versus machine contests are thus, according to Latour, little more than shadowboxing. The machines—or for that matter the humans—that battle each other on these occasions are in this view creatures of legend, products of our culture-specific forms of storytelling. If so, what is it that compels us to tell such stories? Furthermore, what is it that incites us to make these stories, as it were, "come true" through elaborate staging(s)?

CONCLUDING REMARKS

Sherry Turkle (1984) has famously described the digital computer as a "Second Self." Such a description carries a heavy symbolic load. Among other things, it evokes an image of the computer as a creature of Occidental40 modernity's claimed "demiurgic ambition to exorcise the natural substance of a thing in order to substitute a synthetic one" (Baudrillard, 1983, pp. 88–89). Here again HAL, the mythological intelligent machine by reference to which the story of the future of AI is often narrated (e.g., Stork, 1997; Vendy and Nofz, 1999; Bloomfield, 2003)—and on whose "birthday" the human dominance of the world of chess is said to have ended—is illustrative. For what is HAL's crime but the Original Sin? Moderns, having created thinking machines in their own image, immediately expect that these machines will—just like they themselves did—attempt to usurp the powers of their creator. Thus Hans Moravec (1988, p. 1) speculates about a "postbiological" future "in which the human race has been swept away . . . usurped by its own artificial progeny." It is perhaps paradoxical but not unexpected that AI, the enterprise that is said to epitomize the workings of reason, is at the same time so heavily mythologized (Turney, 1998).

Anthropologists such as Nancy Munn (1986), Brian Pfaffenberger (1995), and others have drawn attention to the ways in which technological artifacts are more than merely functional objects but may also serve to constitute occasions where the categories and distinctions that give meaning to social life are enacted. In her account of computers as "evocative objects," Turkle (1984, p. 31) echoes this point when she argues that objects "[o]n the lines between categories . . . draw attention to how we have drawn the lines. Sometimes . . . they incite us to reaffirm the lines, sometimes to call them into question, simulating different distinctions." Turkle's main focus is on the social psychology of human–computer interaction. Our own focus in this article has been rather different. We have argued here that to the extent to which computational artefacts do come to play such a role, they do not do so, as it were, "naturally," by virtue of their inherent qualities, but as part of specific social performances. We could therefore "read" human versus "intelligent" machine contests, from Kempelen's Turk to IBM's Deep(er) Blue, as events where some of the central philosophical and moral conflicts of Occidental modernity were/are dramatized: self vs. other; identity vs. difference; free will vs. determinism; subject vs. object; nature vs. artifice, and so forth.

Lucy Suchman (2007) makes a relevant point in relating her encounters with MIT's robots Cog and Kismet, artifacts we might say of the "bottom-up" (i.e., behavioural and embodied) approach to AI as opposed to that of abstract symbol manipulation dating from the 1950s (Bloomfield & Vurdubakis, 1997).41 She notes that the ability of these artifacts to act out the role of what we might call a (potential) "Second Self" was crucially dependent upon specific practices of framing. These "framings" served to locate the various human labors, relationships, and technologies upon which their performance routinely relied outside the picture. In the case of the chess tournaments under discussion here, it is notable that many of the media representations routinely cropped out the IBM operators.42 Our own analysis of Deep(er) Blue's successes and failures has suggested a relational view of "intelligence": that is to say, "intelligence" not as a property located "inside" the machine "itself" but as crucially dependent on the performative capabilities of its "operators." We have therefore endeavored to remain attentive to the "missing supplement," that which is left out by the "[hu]man versus machine" framing that has dominated narrations of the Kasparov–Deep Blue (and subsequent) tournaments. Even this dominant media frame, we have argued, affords glimpses of a less brightly lit "backstage" and of the shifting human–technical assemblages that make possible a "front stage" where the "man versus machine" drama is performed before the eyes of a watching world.

NOTES

1. This was the original title of Warwick's (1997) book In the Mind of the Machine.

2. King Canute's fabled attempt to turn the tide is usually (and unfairly) cited as an example of out-of-control megalomania. In fact the king's purpose was to give a lesson in humility to his sycophantic courtiers. It is said that after that incident Canute never again wore his crown but hung it instead in Winchester Cathedral.

3. In one of Asimov's short stories (That Thou Art Mindful of Him), intelligent robots, designed to serve humanity, have become so advanced that they come to the conclusion that they fit the definition of the "human" much better than their creators and thus need to serve no one but themselves (Asimov, 1976).

4. Kurzweil's framing of the past and future development of AI in anthropomorphic terms which suggests both replication of, and competition with, human beings is far from unique. Rather it is a standard feature of the "literary enterprise of AI." For instance, another technological timeline is offered by researchers at British Telecommunications plc. It includes: "AI Entity gains degree 2013–2017 . . . AI Entity gains PhD 2020s . . . Robots mentally and physically superior to humans 2030s." http://www.btplc.com/Innovation/News/timeline/TechnologyTimeline.pdf (accessed October 30, 2007).

5. For example, see Latour and Powers (1998). AI narratives in both fiction and nonfiction often have recourse to biblical imagery, from Multivac the computer in Asimov's The Last Question (Asimov, 1986), to Wintermute in Gibson's (1984) Neuromancer, and from Moravec's (1988) Mind Children to Warwick's (1998) In the Mind of the Machine (see also Davis, 1999).

6. For a fuller discussion see Bloomfield (2003).

7. This article also appears under the title "Swift and Slashing, Computer Topples Kasparov," New York Times, May 12, 1997. http://query.nytimes.com/gst/fullpage.html?res=9903E5D91039F931A25756C0A961958260 (accessed November 1, 2007).

8. We use the term "Occidental" here to acknowledge the fact that the view of machines—and of "intelligent machines" in particular—in other (say, Japanese or Korean) cultures is often quite different.

9. The notion of a "Modern Prometheus" of course became the theme of Mary Shelley's (1992, orig. 1818) Frankenstein.

10. The apparatus was presented to the public as a puzzle, a challenge to identify the location of the automaton's intelligence. This is how Edgar Allan Poe (1855/1982, pp. 425–426) describes the prelude to a performance in Richmond, Virginia:

"Maelzel now informs the company that he will disclose to their view the mechanism of the machine . . . and throws the cupboard fully open to the inspection of all present. Its whole interior is apparently filled with wheels, pinions, levers, and other machinery, crowded very closely together . . . . He goes now round to the back of the box, and raising the drapery of the figure, opens another door situated precisely in the rear of the one first opened. Holding a lighted candle at this door, and shifting the position of the whole machine repeatedly at the same time, a bright light is thrown entirely through the cupboard, which is now clearly seen to be full, completely full, of machinery."

11. Legends that quickly became part of the mystique of the automaton and were related in stories and plays such as La Czarine (1868) by Adenis and Gastineau (Chapuis & Droz, 1958, p. 365).

12. In fact, the interest in chess and computers predates the birth of AI in 1956. For instance, it was discussed earlier by Wiener (1950), who saw the domain of chess as but a forerunner of applications in other areas requiring complex decision making. But as things have turned out, it isn't the most esoteric areas of problem solving (chess, analytical mathematical problems, etc.) that have proved most recalcitrant to AI work but rather the most mundane features of everyday intelligence, such as common sense knowledge.

13. Thus drawing "an analogy with geneticists' use of that fruit fly."

14. As Stork (n.d.) notes in an essay on the IBM website, in the Arthur C. Clarke (1968) novel of the film, "HAL is programmed to lose 50% of the time—to keep things interesting for the astronauts." Whether it would have that effect is debatable since it would have changed chess from a game of skill to a game of chance. Interestingly, "Kubrick originally filmed the 'chess scene' with a five-in-a-row board game called pentominoes but chose not to use it, believing that viewers would better appreciate the difficulties involved in a chess game" (Campbell, 1997).

15. A bloodless mimesis of the conflict of war, as usefully noted by one of the reviewers for this article.

16. We thank one of the reviewers for this point.

17. For a complete chronology of these "human–machine" encounters up to, and including, the 1996 Kasparov–Deep Blue tournament see Newborn (1997).

18. For its achievement, MacHack became an honorary member of the U.S. Chess Federation (Boden, 1977, p. 353).

19. As with all legendary events, different sources offer alternative dates for his encounter, for instance 1967; 1968 is the date given by Dreyfus (Dreyfus & Dreyfus, 1986).

20. For a discussion of the notion of autonomous technology in political thought see Winner (1985).

21. Malcolm Pein, London Chess Centre, covering Game 5 of the 1997 rematch for the IBM web site. http://researchweb.watson.ibm.com/deepblue/games/game5/html/c.2.html (accessed October 30, 2007).

22. At the time this program was available in the high street at a cost of less than £150 and ran efficiently on a relatively modest machine (Jones, 1994); it was based on Richard Lang's program Pentium Chess Genius (Newborn, 1997).

23. That is, each "player" was allowed only 25 minutes per game to complete all their moves.

24. This statement was contained on the webpage of the official IBM Internet site specially devoted to covering the tournament. www.chess.ibm.com (accessed February 16, 1996).

25. The IBM team of scientists took turns facing Kasparov.

26. And in another intriguing parallel, the 2006 world chess championship between Veselin Topalov and Vladimir Kramnik was marred by the allegation that the latter used over-frequent visits to the toilet in order to secretly consult a chess computer (Barden, 2006).

27. Hsu (2002), of the IBM team, discusses the fixing of bugs within the software in between games. Of course, such attempts to fix the computer program could work to Kasparov's advantage insofar as repairs in one area might merely serve to open up weaknesses elsewhere.

28. http://researchweb.watson.ibm.com/deepblue/meet/html/d.html (accessed October 30, 2007).

29. http://researchweb.watson.ibm.com/deepblue/watch/html/c.8.html (accessed October 30, 2007).

30. www.sciam.com/explorations/042197/chess/042197blueinter.html (accessed October 30, 2007).

31. During an interview with John Horgan (1996) of Scientific American following the 1996 match, one of the IBM team—Joseph Hoane—complained about the "human error" during the draw in game 5, when the "computer's managers" rejected Kasparov's draw offer. http://www.sciam.com/article.cfm?articleID=00058BE2-DD78-1CD9-B4A8809EC588EEDF&sc=I100322 (accessed October 30, 2007).

32. http://www.research.ibm.com/deepblue/home/may06/story 2.html (accessed October 30, 2007).

33. http://www.research.ibm.com/deepblue/watch/html/c.10.6.html (accessed October 30, 2007).

34. http://www.research.ibm.com/deepblue/home/may03/story 3.html (accessed October 30, 2007).

35. Khodarkovsky, a friend of Kasparov, was also a member of his team.

36. Of course "Kasparov" is also a brand of chess-playing machines, of which over 100,000 had been sold by the time of the first Kasparov–Deep Blue confrontation (Krauthammer, 1996).

37. Kramnik donned the mantle of humanity's defender and in 2001 agreed to take on the reigning computer chess champion Deep Fritz, arguing that "It will be hard, but I would like to prove that humans are still worth something" (cited in The Sunday Telegraph, December 23, 2001, p. 14). After a series of inconclusive tournaments Kramnik finally lost to Deep Fritz in 2006 (Fig. 2).

38. Courtesy of the RAG Group, sponsors of the Kramnik–Deep Fritz encounter.

39. Only a weaker version, Deep Blue Junior, was to continue the challenge at chess (Hartson, 1997).

40. See note 6.

41. Developed at MIT's AI Laboratory, Cog is a human-like head and torso built to "approximate the sensory and motor dynamics of a human body" and learn through interaction with humans (http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/overview.html). Also built to explore social interactions, Kismet is a robot developed to emulate interactions with a "human caregiver" through, for instance, "gaze direction, facial expression, body posture, and vocal babbles" in a manner "reminiscent of parent-infant exchanges" (http://www.ai.mit.edu/projects/sociable/overview.html). See also note 4.

42. For example, in the photograph accompanying The Guardian article by Tran (1996b). In some accounts the presence of the IBM operator was very much that of an inessential extra. In an earlier "human versus machine" encounter, on this occasion between Kasparov and Deep Thought, Leithauser (1990), writing in The New York Times, referred to the operator role as "largely secretarial."

REFERENCES

Asimov, I. 1976. The bicentennial man and other stories. New York: Doubleday.
Asimov, I. 1981. Asimov on science fiction. New York: Avon Books.
Asimov, I. 1989. Robot dreams. London: Gollancz.
Barden, L. 1996a. Kasparov crashes computer. The Guardian, February 16:26.
Barden, L. 1996b. A black end for "Blue." The Guardian, February 19:21.
Barden, L. 1997. Kasparov: Deep Blue "is thinking." The Guardian, May 8:25.
Barden, L. 2006. Advantage slips from shaken Kramnik in Toiletgate fallout. The Guardian, October 7.
Baudrillard, J. 1983. Simulations, trans. P. Beitchman. New York: Semiotext(e).
Beal, D. F., ed. 1991. Advances in computer chess 6. London: Ellis Horwood.
Bloomfield, B. P. 2003. Narrating the future of intelligent machines: The role of science fiction in technological anticipation. In Narratives we organize by, eds. B. Czarniawska and P. Gagliardi, pp. 193–212. Amsterdam: John Benjamins.
Bloomfield, B. P., and Vurdubakis, T. 1997. The revenge of the object? Artificial intelligence as a cultural enterprise. Social Analysis 41(1):29–45.
Bloomfield, B. P., and Vurdubakis, T. 2003. "Imitation games": Turing, Menard, Van Meegeren. Ethics and Information Technology 5(1):27–38.
Boden, M. 1977. Artificial intelligence and natural man. Hassocks, Sussex: Harvester Press.
British Telecommunications plc. 2005. 2005 BT technology timeline. www.btplc.com/Innovation/News/timeline/TechnologyTimeline.pdf (accessed March 31, 2007).
Calvino, I. 1979. Invisible cities. London: Picador.
Campbell, M. 1997. An enjoyable game: How Hal plays chess. In Hal's legacy: 2001's Computer as dream and reality, ed. D. Stork. Cambridge, MA: MIT Press. http://mitpress.mit.edu/e-books/Hal/chap5/five1.html (accessed October 29, 2007).
Caro, P. 1997. An overview of some social and cultural problems with science. Paper presented at the Social Science Bridge Workshop, Lisbon, 4–5 April.
Chapuis, A., and Droz, E. 1958. Automata: A historical and technological study, trans. A. Reid. London: B. T. Batsford Ltd.
Clarke, A. C. 1968. 2001: A space odyssey. London: Hutchinson.
Cohen, J. 1966. Human robots in myth and science. London: Allen and Unwin.
Cook, D., and Kroker, A. 1997. The digital prince and the last chess player. CTheory. http://www.ctheory.net/articles.aspx?id=175 (accessed March 31, 2007).


Davis, E. 1999. TechGnosis: Myth, magic, and mysticism in the age of information. London: Serpent's Tail.
De Firmian, N. 1999. Modern chess openings: MCO-14. New York: Random House.
Dennett, D. C. 1996. Did HAL commit murder? http://ase.tufts.edu/cogstud/papers/didhal.htm (accessed March 31, 2007).
Derrida, J. 1976. Of grammatology. Baltimore, MD: Johns Hopkins University Press.
Descartes, R. 1968, orig. 1637 and 1641. Discourse on method and The meditations, trans. F. Sutcliffe. Harmondsworth: Penguin.
Douglas, M. 1966. Purity and danger. London: Routledge.
Dreyfus, H. 1972. What computers can't do. New York: Harper and Row.
Dreyfus, H., and Dreyfus, S. 1986. Mind over machine. New York: The Free Press.
Dreyfus, H. L., and Dreyfus, S. 1992. What artificial experts can and cannot do. AI and Society 6(1):18–26.
Fleck, L. 1979. Genesis and development of a scientific fact. Chicago: University of Chicago Press.
Gibson, W. 1984. Neuromancer. New York: Ace Books.
Hamilton, S. N. 2000. The last chess game: Computers, media events, and the production of spectacular intelligence. Canadian Review of American Studies 30(3):339–360.
Harding, L., and Barden, L. 1997. Deep Blue's victory proves giant step for computerkind. The Guardian, May 12:5.
Hargrave, S. 1997. Deeper Blue's next move is to check cancer drugs. The Sunday Times, Features Section, May 11:14.
Hartson, W. 1997. Deep Blue quits while ahead. The Independent, September 25:10.
Horgan, J. 1996. The Deep Blue team plots its next move. Scientific American, March 8. http://www.sciam.com/article.cfm?articleID=00058BE2-DD78-1CD9-B4A8809EC588EEDF&sc=I100322 (accessed October 30, 2007).
Hsu, Feng-Hsiung. 2002. Behind Deep Blue: Building the computer that defeated the world chess champion. Princeton, NJ: Princeton University Press.
Jones, T. 1994. When the silicon chips were down for Kasparov. The Times, Home News, September 1:1.
Kallinikos, J. 1992. The signification of machines. Scandinavian Journal of Management 8(2):113–132.
Kallinikos, J. 1995. The architecture of the invisible. Organization 2(1):117–140.
Khodarkovsky, M., and Shamkovich, L. 1997. A new era: How Garry Kasparov changed the world of chess. New York: Ballantine Books.
Kim, J. 1997. Upcoming chess match not as simple as "man vs. computer." USA Today, Tech Report, February 9. http://www.usatoday.com:80/life/cyber/tech/cta402.htm (accessed March 24, 1998).
Krauthammer, C. 1996. Deep Blue funk. Time, February 26:48–49.
Kurzweil, R. 1999. The age of spiritual machines: When computers exceed human intelligence. London: Phoenix.
La Mettrie, J. O. de. 1912, orig. 1741. Man a machine. http://www.cscs.umich.edu/∼crshalizi/LaMettrie/Machine/ (accessed October 30, 2007).
Latour, B. 1993. We have never been modern. London: Harvester Wheatsheaf.
Latour, B., and Powers, R. 1998. Two writers facing one Turing Test: A dialog in honor of HAL between Richard Powers and Bruno Latour. Common Knowledge 7(1):177–191. http://www.bruno-latour.fr/poparticles/poparticle/p072.html (accessed October 30, 2007).
Leithauser, B. 1990. Kasparov beats Deep Thought. The New York Times, January 14. http://query.nytimes.com/gst/fullpage.html?res=9C0CE0DE1E3FF937A25752C0A966958260&sec=&spon=&pagewanted=1 (accessed October 30, 2007).
Levy, D. 1995. It's the thought that counts. The Guardian, OnLine Page, June 8:T7.
Levy, D. 1996. The devil in Deep Blue. The Guardian, February 1:6.
McCarthy, J. 1999. Review of "Artificial Intelligence: The Very Idea." http://www-formal.stanford.edu/jmc/reviews/haugeland.html (accessed March 31, 2007).
McCarthy, J. 2002. What is artificial intelligence? http://www-formal.stanford.edu/jmc/ (accessed March 31, 2007).
McCorduck, P. 1979. Machines who think. San Francisco: W. H. Freeman.
Monk, R. 1996. Morons from inner space. The Observer, Review Page, February 18:3.
Moravec, H. 1988. Mind children. Cambridge, MA: Harvard University Press.
Moss, S. 2005. Beaten by a microchip. The Guardian, Guardian Life Pages, June 30:21.
Munn, N. 1986. The fame of Gawa. Cambridge: Cambridge University Press.
Nelkin, D. 1987. Selling science: How the press covers science and technology. New York: W. H. Freeman.
Newborn, M. 1997. Kasparov versus Deep Blue. New York: Springer.
Pandolfini, B. 1997. Kasparov and Deep Blue: The historic chess match between man and machine. New York: Fireside.
Pfaffenberger, B. 1995. Technical ritual: Of yams, canoes, and the delegitimation of technology studies in social anthropology. Paper presented in ESRC Seminar on Technology as Skilled Practice, University of Manchester, January 18.
Poe, E. A. 1982, orig. 1855. Maelzel's chess player. In The Penguin Edgar Allan Poe, pp. 421–439. Harmondsworth: Penguin.
Robertson, I. 1994. Out of Alzheimer's, a testament of courage. The Times, Features Section, September 6:19.
Schaffer, S. 1994. Babbage's intelligence: Calculating engines and the factory system. Critical Inquiry 21:203–227.
Schaffer, S. 1995. Babbage's dancer and Kempelen's Turk. Paper presented at Humans, Animals, Machines, the Fourth Bath Quinquennial Science Studies Workshop, Bath, July 27–21.
Scientific American Explorations. 1996. Deep Blue team plots its next move. www.sciam.com/explorations/042197/chess/042197blueinter.html (accessed March 31, 2007).
Shannon, C. E. 1950. Programming a computer for playing chess. Philosophical Magazine Ser. 7, 41(314):256–275. http://archive.computerhistory.org/projects/chess/related materials/text/2-0%20and%202-1.Programming a computer for playing chess.shannon/2-0%20and%202-1.Programming a computer for playing chess.shannon.062303002.pdf
Shelley, M. W. 1992, orig. 1818. Frankenstein; or the modern Prometheus. Harmondsworth: Penguin.
Simon, H., and Munakata, T. 1997. AI lessons. Communications of the ACM 40(8):23–25.
Simon, H. A., and Newell, A. 1958. Heuristic problem solving: The next advance in operations research. Operations Research 6(1):1–10.
Stork, D. G., ed. 1997. Hal's legacy: 2001's Computer as dream and reality. Cambridge, MA: MIT Press.


Stork, D. n.d. The end of an era, the beginning of another? HAL, Deep Blue and Kasparov. http://researchweb.watson.ibm.com/deepblue/learn/html/e.8.1.html (accessed October 30, 2007).
Suchman, L. 2007. Human–machine reconfigurations. Cambridge: Cambridge University Press.
The Sunday Times. 1997. Computer cheats at chess says champion. May 11:1, 22.
Tran, M. 1996a. Chess champ in Deep Blues. The Guardian, February 12:12.
Tran, M. 1996b. Man draws level with monster. The Guardian, February 13:26.
Tran, M. 1997. Computer leaves Kasparov Blue. The Guardian, May 12:1.
Turkle, S. 1984. The second self: Computers and the human spirit. New York: Simon and Schuster.
Turney, J. 1998. Frankenstein's footsteps. London: Yale University Press.
Vendy, P., and Nofz, M. 1999. HAL's long, long run: Computers and social performance in Stanley Kubrick's 2001. Computers and Society 29(4):8–10.
Warwick, K. 1998. In the mind of the machine. London: Arrow (originally published in 1997 as March of the machines. London: Century Books).
Weber, B. 1997. IBM chess machine beats humanity's champ. The New York Times, CyberTimes, May 12. http://www.rci.rutgers.edu/∼cfs/472 html/Intro/NYT Intro/ChessMatch/IBMChessMachineBeats.html (accessed October 30, 2007).
Wiener, N. 1950. The human use of human beings. New York: Houghton Mifflin.
Winner, L. 1985. Autonomous technology: Technics-out-of-control as a theme in political thought. Cambridge, MA: MIT Press.
Wright, R. 1996. Can machines think? Time, March 25. http://www.time.com/time/magazine/article/0,9171,984304,00.html (accessed October 30, 2007).
