
42 Robert French, 1990. The Turing test is species biased. Failing the Turing test proves nothing about general intelligence, because only humans can pass it. Intelligent nonhuman entities (such as intelligent machines, aliens, etc.) could never pass the test, because they would not be able to answer subcognitive questions. Such questions can only be correctly answered on the basis of a specifically human associative network of representations.

3 Alan Turing, 1950. The question "can machines think?" is too vague. It cannot be decided by asking what these words commonly mean, because that reduces the question of machine intelligence to a statistical survey like the Gallup Poll. Hence, the question "can machines think?" should be replaced by the more precise question of whether machines can pass the Turing test.

This is the Gallup Poll. Do you think that machines can think?


91 Robert Richardson, 1982. The neo–Turing test fails to improve on behaviorism. Block's neo–Turing test is subject to the following dilemma. Either internal states are allowed as part of the definition of intelligence through the notion of a capacity, in which case the test is not purely behavioral as Block claims; or internal states are not allowed to play a role in the definition of intelligence, in which case we have no reason to believe that Block's conception fares any better than classical behaviorism. (See "Super Spartans," Box 86; "Philosophical Behaviorism Is Circular," Box 87.) In either case, the neo–Turing test fails to offer a plausible behavioral conception of intelligence.


If a simulated intelligence passes, is it intelligent?

28 James Moor, 1976. The Turing test does not depend on the assumption that the brain is a machine. If a machine were to pass the Turing test, it wouldn't tell us anything about the relation between brains and machines. It would only show that some machines are capable of thinking, whether those machines are like brains or not.

Is passing the test decisive?

Can the Loebner Prize stimulate the study of intelligence?

12 The Loebner Prize. A $100,000 award, funded by philanthropist Hugh Loebner, is offered for the first software to pass the Turing test in an annual contest started in 1990. Whether a system has passed the test is determined in a competition where computers face off with humans in a special version of the Turing test. Even if entrants fail to pass the Turing test and win the $100,000 prize, an annual award is offered for the program that makes the best showing.

16 Stuart Shieber, 1994a. The Loebner Prize is too far beyond the reach of current technology. Because full-scale simulated conversation is beyond the reach of current technology, contestants in the Loebner competition are forced to exploit trickery to win the annual prize. The AI community should not focus on winning Loebner's "parlor game" version of the Turing test, but should engage in basic research that will incrementally lead the way toward artificially intelligent behavior.


26 Alan Turing, 1950. Perform the test in telepathy-proof rooms. To prevent ESP-related problems, the test would have to be carried out in "telepathy-proof" rooms.

25 Anticipated by Alan Turing, 1950. ESP would confound the Turing test. Extrasensory perception could invalidate the Turing test in a variety of ways. If a competitor had ESP she could "listen in" on the judges and gain an unfair advantage. If a judge had ESP he could easily discern human from machine by clairvoyance. As Turing put it, "With ESP anything may happen."

29 B. Meltzer, 1971. The Turing test misleads AI research. The Turing test motivates AI researchers to accomplish a misleading goal—the general but arbitrary form of intelligence involved in carrying on a normal conversation. This kind of intelligence is arbitrary in that it is relative to particular cultures. AI researchers should focus their efforts on specific intellectual tasks that can be aggregated to form more generally intelligent systems, such as the ability to recognize patterns, solve differential equations, etc.

30 Bennie Shanon, 1989. The Turing test assumes a representationalist theory of mind. By confining itself to teletyped conversations, the Turing test assumes a representational theory of mind, according to which thinking involves computational operations on symbolic representations. But critics have pointed out various phenomena that cannot be accounted for in this framework. Hence, the Turing test begs the question of whether machines can think, because its very procedure assumes that a representational system like a computer is capable of thinking. (Also see the "Can symbolic representations account for human thinking?" arguments on Map 3.)

32 Bennie Shanon, 1989. The Turing test makes an autonomy claim. In assuming a representational theory of mind, the Turing test endorses an autonomy claim to the effect that intelligence can be studied solely in terms of representations and computations, without recourse to the brain, the body, or the world. But there is good reason to think the autonomy claim is false, and that in studying the mind we should also study its total embodied context.

31 Justin Lieber, 1989. Turing was not committed to representationalism. Turing leaves open the question of whether the representational account of intelligence is valid. In fact, in some passages he seems to reject the representational view in favor of connectionism (where he discusses learning machines). Moreover, Turing would have denied that an "autonomous" account of intelligence could be abstracted from the underlying mechanisms of intelligence, for "he was a hopeful mechanist through and through."

27 Michael Apter, 1971. Turing assumes that the brain is a machine. When Turing claims that machines can be taught in much the same way that children are, he assumes the brain is a machine. But whether or not the brain is a machine is part of what's at issue, because if brains were machines then obviously some machines (brains) would be able to think. So, Turing begs the question of whether machines can think—he assumes part of what he's trying to prove.

Other Turing test arguments

The Turing test doesn't assume that!

43 Anticipated by Robert French, 1990. We can make the test fair by excluding subcognitive questions. A machine's inability to answer subcognitive questions (e.g., questions about ranking associative pairings) may show that machines can never match human responses, but we can make the test unbiased by excluding such questions.

Is failing the test decisive?

Does the imitation game determine whether computers can think?

22 Jack Copeland, 1993. Some simulations are duplications. Two kinds of simulation can be distinguished. (1) Some lack essential features of the object simulated; for example, theatrical death lacks essential physiological features of real death. Such simulations are not duplications. (2) A second type captures the essential features of the object simulated despite being produced in a nonstandard way, for example artificial coal. Simulations of this kind are duplications.

21 Anticipated by Jack Copeland, 1993. Simulated intelligence is not real intelligence. Just as simulated diamonds are not real diamonds, a machine that simulates intelligence to pass the Turing test is not really intelligent. In general, a simulated X is not a real X.

24 Lawrence Carleton, 1984. Simulations are duplications if the inputs and outputs are of the right kind. If a simulation uses the same kinds of inputs and outputs as the simulated phenomenon uses, then it is also a duplication. The Chinese Room, for example, uses the same kinds of inputs and outputs (i.e., symbol strings) to model speaking as humans use when they speak; hence it is a duplication as well as a simulation. Computer simulations of fire, digestion, etc., are different. Because they don't use the right kinds of input and output, they are mere simulations. Supported by "The Chinese Room Is More Than A Simulation," Box 55, Map 4.

Simulation of death

Duplication of coal

23 John Searle, 1980b. Simulations are not duplications. A simulation of a fire will not burn. A simulation of metabolism in digestion will not nourish. A simulation of an automobile engine will not get anywhere. Similarly, a simulation of understanding is not actually understanding.

In another room ... the interrogator tries to determine whether X or Y is the woman, based on their responses.

In one room ... a man (X) and a woman (Y) each try to convince an interrogator that he or she is a woman.

Interrogator: Will X please tell me the length of his or her hair?
X: My hair is shingled and the longest strands are about 9 inches long.
Y: I am the woman, don't listen to him!

4 Alan Turing, 1950. The imitation game. Turing's original formulation of his test takes the form of an imitation game, which takes place in 2 stages.

To see whether a machine can think, replace the man (X) in the imitation game with a machine. If the machine can successfully imitate the person, then we say the machine can think.

Part II

33 The Turing test has been passed. Existing AI programs have passed the Turing test for intelligence.

Have any machines passed the test?


Is the test, behaviorally or operationally construed, a legitimate intelligence test?

Is the neo–Turing test a legitimate intelligence test?

Is the test, as a source of inductive evidence, a legitimate intelligence test?

88 Anticipated by Ned Block, 1981. The neo–Turing test conception of intelligence. The neo–Turing test conception of intelligence is a beefed-up version of behaviorism that avoids classical problems associated with behaviorism as well as problems associated with the Turing test. It does not rely on human judges (it merely requires "a sensible sequence of verbal responses to verbal stimuli") and the test avoids problems associated with dispositional analysis by substituting "capacities" for dispositions. Note: Block sets up this position in order to attack it, so that he can show that even a beefed-up version of behaviorism will fall prey to psychologism.

90 Ned Block, 1981. The all-possible-conversations machine. A machine could be built to engage in sensible conversation by searching a database containing all possible lines of conversation in a finite Turing test. The machine would be intelligent according to the neo–Turing test, but because of the way it processes information, it is clearly unintelligent. It merely echoes the intelligence of its programmers. Note: Block simply calls this the "unintelligent machine."


Illustration: programmers coding all the possible exchanges ("Hi," "Hello," "How are you?" "Great!" "Lousy"), with the caption: That machine isn't intelligent. Its intelligence comes from the programmers who coded in all the conversations.
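Block's construction can be made concrete with a minimal lookup-table sketch in Python. The tiny table, the names, and the choice of always returning the first stored reply are illustrative assumptions of this sketch, not Block's actual specification, which would enumerate every sensible conversation of a fixed length.

# A minimal sketch of a machine that "converses" purely by looking up
# pre-stored continuations, as in Box 90. The table below is a hypothetical
# stand-in for a database of all possible lines of conversation.
CONVERSATION_TABLE = {
    (): ["Hi", "Hello"],
    ("Hi",): ["How are you?"],
    ("Hi", "How are you?"): ["Great!", "Lousy"],
}

def reply(history):
    """Return a canned continuation for the conversation so far, if one is stored."""
    options = CONVERSATION_TABLE.get(tuple(history))
    return options[0] if options else "..."

print(reply([]))                      # "Hi"
print(reply(["Hi"]))                  # "How are you?"
print(reply(["Hi", "How are you?"]))  # "Great!"

However large the table, the machine only echoes whatever its programmers stored, which is exactly the point of Block's example.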


38 Ned Block, 1981. Judges may discriminate too well. Overly discerning or chauvinistic judges might fail intelligent machines solely because of their machinelike behavior.


57 The test is too narrow. The Turing test is limited in scope. It narrowly focuses on one ability—the ability to engage in human conversation—and is therefore inadequate as a test of general intelligence. Supported by "One Example Cannot Explain Thinking," Box 8.

Let's go talk to the computer.

13 The Loebner version of the Turing test. This version of the Turing test adds several measures to make it practical.
• Test questions are restricted to predetermined topics (see "Restricted Turing Tests," Box 14).
• Referees watch to make sure judges stick to the topics and ask fair questions without "trickery or guile."
• Prizes are awarded using a novel scoring method. Judges rank the contestants relative to one another, rather than just passing or failing them.
• The highest-scoring computer in each contest is the yearly winner, and if a computer's score is equal to or greater than the average score of a human contestant, then it passes this version of the Turing test and wins the $100,000 prize (a minimal scoring sketch follows this list).
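Read literally, the scoring rule in the last bullet can be sketched in Python as follows. Treating each contestant's "score" as a single number derived from the judges' rankings, and reading "the average score of a human contestant" as the mean over the human controls, are assumptions of this sketch rather than details stated on the map; the contest's exact aggregation method is not specified here.

def loebner_outcome(computer_scores, human_scores):
    """Return (yearly_winner, passes_grand_prize) for hypothetical score dicts."""
    yearly_winner = max(computer_scores, key=computer_scores.get)   # best computer of the year
    human_average = sum(human_scores.values()) / len(human_scores)  # mean human score (assumption)
    passes_grand_prize = computer_scores[yearly_winner] >= human_average
    return yearly_winner, passes_grand_prize

# Hypothetical example: ProgramB wins the annual award but not the $100,000 prize.
print(loebner_outcome({"ProgramA": 3.2, "ProgramB": 4.1},
                      {"Human1": 6.0, "Human2": 5.5}))   # -> ('ProgramB', False)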


77 Alan Turing, 1950; James Moor, 1987. Knowledge of internal processes is unnecessary. Inferences about thinking are not based on knowledge of internal operations. We generally infer that someone thinks just on the basis of outward behavior. "Not even brain scientists examine the brains of their friends before attributing thinking to them" (Moor, 1987, p. 1129).

71 Jack Copeland, 1993. SUPERPARRY. An advanced alien civilization could write a program that, like Colby's PARRY program (see "PARRY," Box 35), would contain ready-made responses to questions. This SUPERPARRY would be better prepared for the test, however, because it would contain responses to all possible questions. SUPERPARRY could therefore pass the Turing test by brute force. But it is clearly not thinking; it is just dumbly searching a long list of conversations (see "The All-Possible-Conversations Machine," Box 90).


110 James Moor, 1978. Competing explanations can be compatible. An explanation of a computer's behavior in terms of its thinking may be compatible with an explanation in terms of its underlying mechanisms. A computer that prints payroll checks can be explained both in terms of its physical mechanisms and its abstract program—these are compatible explanations of the payroll computer's behavior. Similarly, mental and mechanical explanations of a computer that passes the Turing test may be compatible.


be • hav • ior • al dis • po • si • tion: A specification of input-output correlations that can be used to define mental terms. Here is an example of a definition of hunger for chicken, stated in terms of behavioral dispositions.

If I see chicken ... then I eat the chicken.

be • hav • ior • al ca • pac • i • ty: A specification of input-output correlations that includes a specification of internal states (beliefs, desires, etc.). Here is an example of a definition of hunger for chicken, stated in terms of behavioral capacities.

If I see chicken and I believe chicken is edible and I desire chicken ... then I eat the chicken.

I'm dying!!
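The contrast between the two definitions can also be put in code. The function names and boolean parameters in this minimal Python sketch are illustrative assumptions, not part of the map's text; the point is only that the capacity version takes internal states as additional inputs.

def eats_chicken_disposition(sees_chicken: bool) -> bool:
    # Behavioral disposition: the output is fixed by the observable input alone.
    return sees_chicken

def eats_chicken_capacity(sees_chicken: bool, believes_edible: bool, desires_chicken: bool) -> bool:
    # Behavioral capacity: internal states (a belief and a desire) figure in the definition.
    return sees_chicken and believes_edible and desires_chicken

print(eats_chicken_disposition(True))            # True
print(eats_chicken_capacity(True, True, False))  # False: without the desire, no eating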


87 Roderick M. Chisholm, 1957. Philosophical behaviorism is circular. Any attempt to define mental states in terms of behavioral dispositions relies on further mental states. But this is a circular definition of mentality. For example, if the desire to eat chicken is defined as a disposition to eat chicken when chicken is present, then this presupposes a belief that chicken is edible, which presupposes a belief that chicken is nonpoisonous, and so forth.

Diagram: mental states are defined in terms of behavioral dispositions, which are in turn defined in terms of mental states.

Intelligence (or, more accurately, conversational intelligence) is the capacity to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be (p. 18).

Ned Block

83 Anticipated by Ned Block, 1981. The behavioral disposition interpretation. A system is intelligent if it is behaviorally disposed to pass the Turing test. In this interpretation, neither passing the test nor failing the test is conclusive, because intelligence doesn't require passing but only a disposition to pass.


81 Anticipated by Ned Block, 1981. The operationalist interpretation. An intelligent system is operationally defined as one that passes the Turing test. By definition, systems that pass are intelligent; systems that fail are unintelligent.

84 Ned Block, 1981. Human judges are unreliable. Even though the behavioral disposition argument avoids the problems associated with the operational interpretation, it still has the problem that it relies on human judges. Some judges may chauvinistically reject intelligent machines (see "Judges May Discriminate Too Well," Box 38), whereas others might be overly liberal in passing cleverly designed but unintelligent machines (see "Human Judges May Be Fooled Too Easily," Box 52). Supported by "The Neo–Turing Test Conception of Intelligence," Box 88.

80 James Moor, 1976, 1987. The behavioral/operational interpretation is vulnerable to counterexamples. Whether behaviorally or operationally interpreted, the Turing test is vulnerable to cases where unthinking machines pass the test or thinking machines fail it (see the "Is passing the test decisive?" and "Is failing the test decisive?" arguments on this map). But if the Turing test is interpreted as a source of inductive evidence, then such counterexamples can be accommodated. Supported by "The Inductive Evidence Interpretation," Box 108.

72 Anticipated by Jack Copeland, 1993. The repeated sentence maneuver. An interrogator could trick SUPERPARRY by typing the same sentence over and over again. ("How old are you? How old are you? How old are you?")


74 Anticipated by Jack Copeland, 1993. The substitution trick. An interrogator could trick SUPERPARRY by typing, "Let's write *!= in place of 'horse'; now tell me whether a *!= has a tail."


75 Jack Copeland, 1993. The machine is prepared for substitution tricks. Because there are only a finite number of "substitution tricks," SUPERPARRY could be built to handle all of them.

73 Jack Copeland, 1993. The machine is prepared for repeated sentences. The aliens would design the program to anticipate repeated sentences and to reply with "increasingly irritable responses," ultimately terminating altogether.

51 John Barresi, 1987. Merely syntactic machines could pass the Turing test. Purely syntactic machines could be developed that would pass the Turing test. But without a full naturalistic semantics it is doubtful that such machines will ever think in the human sense. A naturalistic semantics would provide a global model of situational context that could allow flexible interaction with the world as well as sustained self-perpetuation. Supported by "The Cyberiad Test," Box 68.

Illustration: a parse tree (Sentence → NP VP) for the sentence "I have long hair."

55 Joseph Rychlak, 1991. The anthropomorphizing objection. Our natural tendency to anthropomorphize machines should caution us against calling them intelligent. We are easily fooled into thinking that a machine that engages in conversation can think (see "The ELIZA Effect," Map 1, Box 106). But passing the Turing test isn't enough to establish intelligent thought. Introspection is also necessary, and the Turing test doesn't reveal it.

48 Charles Karelis, 1986. Nonstandard human controls can mislead a judge. An unintelligent machine could pass the Turing test if it were compared with a stupid, flippant, or in general anomalous control.

6 Jack Copeland, 1993. A man simulating a woman is not a woman. If a man were to win the imitation game, he would only have succeeded in simulating a woman. He would obviously not be a woman. Therefore, the imitation game is not an adequate test.

7 Keith Gunderson, 1964. A box of rocks could pass the toe-stepping game. There is an analogy between the imitation game and a toe-stepping game, in which an interrogator puts his foot through an opening in the wall and tries to determine whether a man or a woman has stepped on his foot. There is a version of the toe-stepping game in which a box of rocks hooked up to an electric eye could win the toe-stepping game by producing the same effect as a human foot stepping on an interrogator's toe. However, a box of rocks cannot duplicate real human toe-stepping. Similarly, the imitation game just shows that a system can produce the outward effects of conversation, not that it can duplicate real human intelligence.

9 John Stevenson, 1976. The imitation game is an all-purpose test. Playing the imitation game is a "second order" ability that presupposes many other abilities. Hence, it is misleading to characterize the ability to pass the imitation game as just one example or property of thinking.

8 Keith Gunderson, 1964. One example cannot explain thinking. The imitation game attempts to explain thinking in terms of one example: a man's ability to imitate a woman. But this is just one of many different examples of mental activity, such as wondering, reflecting, deliberating, writing sonnets, and so forth. Thinking is "all purpose," whereas playing the imitation game uses just one ability.

10 Keith Gunderson, 1964. The vacuum salesman example. If a vacuum-cleaner salesman demonstrated an all-purpose Swish 600 by sucking up dust from a carpet, the customer would be unsatisfied. He or she would want to see it suck up mud, straw, paper, and cat hairs from couches, tight corners, and so forth. One example is not enough to demonstrate the all-purpose Swish 600, just as one example is not enough to demonstrate thinking.


11 James Moor, 1987. The customer could infer that the vacuum was all purpose. The buyer would not have to actually watch the vacuum cleaner do everything in its repertoire to infer that it was a good vacuum. Even if the salesman only demonstrated the vacuum cleaner picking up dust, if the buyer saw that it came with various attachments, he or she could reasonably infer that they worked. Supported by "The Inductive Evidence Interpretation," Box 108.

5 Kenneth Colby, F. Hilf, S. Weber, and H. Kraemer, 1972. The imitation game is flawed. The imitation game is a weak test for several reasons.
• The concept of "womanlikeness" is too vague.
• Control subjects may be inexperienced at deception.
• The test fails to specify what to do when a computer fails to imitate a man imitating a woman. Is the computer a successful imitation of a man failing to imitate a woman? Or is the computer a poor imitation of a man? Or what?

17 Stuart Shieber, 1994a. The Kremer Prize. In 1959, British engineer Henry Kremer established a prize for the first human-powered flight over a half-mile figure-eight course. The Kremer Prize was successful (it was awarded in 1977) for 2 reasons. (1) It had a clear goal: to provide incentive to advance the field of human-powered flight. (2) The task was just beyond the reach of current technology: all the essentials of human-powered flight were available in 1959; they just had to be creatively combined.

15 Stuart Shieber, 1994a. Loebner's version of the Turing test lacks a clear goal. By restricting conversation to limited topics and using referees to ensure that no unfair questions are asked, the "open-ended, free-wheeling" nature of real human conversation is lost. As a result, contestants win by exploiting crafty tricks rather than by simulating intelligence; hence, the goal of understanding human intelligence is lost.

18 Stuart Shieber, 1994a. The da Vinci Prize. An imaginary prize is set up in 1442 (da Vinci's era) for the highest human-powered flight. We imagine that, like the Loebner Prize, a yearly competition was held and a prize was offered each year. At the time the prize was instituted, the requisite technology for human flight (in particular, the airfoil) was beyond the reach of current technology. As a result, competitors won by attaching springs to their shoes and jumping. Twenty-five years later no basic developments in human-powered flight had been achieved, but the trickery of spring-assisted flight had been highly optimized.

Implemented Model: 35 Kenneth Colby, 1981. PARRY. PARRY is a system that passed an extended version of the Turing test. PARRY simulates a paranoid subject by using a natural language parser and an "interpretation-action" module that tracks a range of emotional states and applies a set of paranoia-specific strategies for dealing with shame-induced distress. PARRY was tested against control subjects suffering from clinical paranoia, and the transcripts were judged by professional clinical psychologists.

105 John Barresi, 1987. The all-possible-conversations machine cannot pass the Cyberiad test. The Cyberiad test avoids the problems posed by Block's all-possible-conversations machine because the Cyberiad test tests for more than just linguistic ability. The Cyberiad test tests for survival in a natural environment, and hence linguistic competence is not enough to pass it. Supported by "The Cyberiad Test," Box 68.

106 Robert Richardson, 1982. A brute list searcher cannot flexibly respond to changing contexts. The all-possible-conversations machine could not adapt its speech to novel and changing contexts. A machine that searches a list to produce responses to verbal stimuli might be able to find some response that would make sense in some contexts, but it could not coherently adapt to the contingencies of an ongoing conversation.

107 Charles Karelis, 1986. Bantering zombies can't think. Bantering zombies, using processing mechanisms far superior to those of the all-possible-conversations machine, would still be unintelligent. No matter how much we enrich the information processing of the all-possible-conversations machine, it still can't think unless it's conscious. Supported by "Consciousness Is Essential to Thinking," Box 50.

How many computers does it take to screw in a lightbulb?

101 Anticipated by Ned Block, 1981. What if we are like the all-possible-conversations machine? Suppose humans process information in the same way that the all-possible-conversations machine does. Would that mean humans are unintelligent?

99 Anticipated by Ned Block, 1981. Assuming we are better than the machine is chauvinistic. A system that processes information differently than we do may not be intelligent in our sense, but we may not be intelligent in its sense either. To assume that human intelligence is better than machine intelligence is chauvinistic.


97 Anticipated by Ned Block, 1981. The all-possible-conversations machine redefines intelligence. We normally conceive of intelligence relative to input-output capacity, not relative to internal processing. By stipulating an internal processing condition on intelligence, Block merely redefines the word "intelligent."

Illustration: a dictionary entry for "in • tel • li • gence."

94 Anticipated by Ned Block, 1981. The machine is dated. The machine would not be able to answer questions about current events.

What do you think of the latest events in the Middle East?

92 Anticipated by Ned Block, 1981. All intelligent machines exhibit the intelligence of their designers. There is nothing unusual about the all-possible-conversations machine. It may be said of any intelligent machine that its intelligence is really just that of its designers.

93 Ned Block, 1981. Some machines can exhibit their own intelligence. The all-possible-conversations machine, which dumbly searches a list of conversations, only exhibits the intelligence of its programmers. A machine equipped with general mechanisms for learning, problem solving, etc. would exhibit its own intelligence as well as the intelligence of its designers.

95 Ned Block, 1981. A dated system can still be intelligent. Intelligence does not require knowledge of current events. For example, the all-possible-conversations machine could be programmed to simulate Robinson Crusoe, who is intelligent even though he can't answer questions about recent events.

96 Ned Block, 1981. The machine could be updated. Programmers could periodically update the machine's list of responses.

98 Ned Block, 1981. The machine is all echoes. Intelligence is not being redefined, because it is part of our normal conception of intelligence that input-output capacity can be misleading. Someone who plays grand-master chess by copying another grand master would not be deemed a grand master. Similarly, the all-possible-conversations machine, which merely echoes its programmers, would not normally be considered intelligent.

100 Ned Block, 1981. Intelligence requires "richness" of information processing. The all-possible-conversations machine isn't unintelligent simply because it processes information differently than we do. Its style of information processing is what's at issue. In particular, it lacks the richness of processing capacity that is associated with intelligence.

102 Ned Block, 1981. The machine would be intelligent. Admitting that the all-possible-conversations machine would be intelligent if it were like us is compatible with claiming that it is not in fact intelligent. It's like asking an atheist what he would say if he turned out to be God. He might admit that if he turned out to be God, God would exist, but deny that God in fact exists.

104 Ned Block, 1981. The all-possible-conversations machine is logically possible. The all-possible-conversations machine is logically possible even if the machine is empirically impossible. Because the neo–Turing test makes a claim about the concept of intelligence, it is refuted by the logical possibility of the all-possible-conversations machine.

op • er • a • tion • al • ism: A philosophical position asserting that scientific concepts should be defined in terms of repeatable operations. For example, intelligence can be defined in terms of the ability to pass the Turing test.

85 Gilbert Ryle, 1949. Philosophical (or logical) behaviorism. What we call intelligence is simply patterns of potential and actual behavior. The external behavior of a system is enough to determine whether or not it thinks.

in • tel • li • gence: The ability to pass the Turing test.

70 Jack Copeland, 1993. The black box objection. The Turing test treats the mind as a "black box." Internal processes are left hidden and mysterious; all that matters is input and output. But internal processing should not be ignored, because it is crucial to determining whether or not a system really thinks.


78 Jack Copeland, 1993. Turing in a car crash. Suppose you and Alan Turing are driving to a party and your car crashes. He is gashed up, but there is no blood. Instead, wires and silicon chips stick out of his skin, and you wonder if he is just an elaborate puppet. In such a case you might modify your judgment that Turing is a thinking being, based on this new knowledge of his internal constitution. So, internal processes are relevant to judgments of thinking.

46 Robert French, 1990. Rating games reveal the subcognitive underpinnings of human intelligence. Rating games provide an experimental basis for formulating subcognitive questions that only humans could answer. A rating game asks a subject to judge the strength of an "associative pairing" between 2 terms. For example, in the "neologism rating game" a subject is presented with a new term, like "Flugly," and asked to rate it as a name for various things: (a) a child's favorite teddy bear, (b) the name of a glamorous movie star, or (c) the name of a character in a W. C. Fields movie. Only humans could answer such questions plausibly, on the basis of subcognitive associations.

Illustration: a movie poster for "Paradise Island," starring Narka Flugly.

44 Robert French, 1990. A purely cognitive Turing test is not an intelligence test. We cannot distill a level of purely cognitive questions to make an unbiased test. To exclude all subcognitive questions would be to exclude all questions involving analogy and categorization, which would render the test useless as a test of what we call intelligence.

36 Ned Block, 1981. Failing the test is not decisive. It is possible to fail the Turing test for intelligence and still be an intelligent being.


109 Douglas Stalker, 1978. The inductive interpretation is really an explanatory interpretation. The inductive evidence interpretation assumes that a computer's passing the Turing test should be explained in terms of its thinking. But thinking is not the best explanation of a computer's ability to pass the test. A better explanation is a mechanical one, in terms of the structure of the computer, its program, and its physical environment.

54 John Searle, 1980b. The Chinese Room passes. The Chinese Room thought experiment involves a system that passes the Turing test by speaking Chinese but fails to understand Chinese. Note: Also see Map 4.

56 Joseph Rychlak, 1991. The reverse Turing test. Humans tend to treat machines as fellow humans, making quick and unwarranted ascriptions of intelligence. We say that computers "want to do" things or "will not allow" certain alternatives. This tendency could be tested for using a "reverse Turing test." Instead of testing to see if a machine can trick a human into thinking it is intelligent, we test to see if a machine can avoid being treated as intelligent. We ask: "Can a machine interacting with a human being avoid being anthropomorphized?" (p. 60) Note: Also see "The ELIZA Effect," Map 1, Box 106.

53 James Moor, 1987. A serious judge would not be easily fooled. A critical judge would not lazily interact with a competitor in the Turing test, but would be focused on distinguishing the human from the computer. Such a judge could not easily be fooled.

60 James Moor, 1987. Linguistic evidence is sufficient for good inductive inference. By considering only linguistic behavior, we can make an inductive inference that a system can think. Of course, we may revise this inference as further evidence comes to light, but that does not mean that all such evidence must be gathered before a justified inference can be made. Otherwise, "scientists would never gather enough evidence for any hypothesis" (p. 1128). Supported by "The Inductive Evidence Interpretation," Box 108.

58 Daniel Dennett, 1985. The quick probe assumption. Passing the Turing test implies indefinitely many other abilities, for example the ability to understand jokes, write poetry, discuss world politics, etc. It is a "quick probe" of general intelligence.


37 Robert Abelson, 1968. The extended Turing test. An extended version of the Turing test facilitates its use for scientific purposes. Its innovations are:
• The judges are not informed that a computer is involved in the test at all. They are told simply to compare the performance of 2 entrants with respect to some dimension, for example womanliness, paranoid behavior, etc.
• The computer does not participate on its own, but (unbeknownst to the judges) shares time with a human subject. The 2 are switched off periodically throughout the test.
• The judges score the human/computer pair after every question, so that separate scores can be generated for each. The point of this method is to force judges to focus on the specific dimension being evaluated.

61 Jerry Fodor, 1968. The Turing test only provides partial evidence of intelligence. Passing the Turing test does not guarantee successful simulation of human behavior. Any given Turing test provides only a finite sample of a test taker's behavioral repertoire. Therefore, passing the Turing test only provides partial, inductive evidence of intelligence.

63 Daniel Dennett, 1985. Sense organs are an unnecessary addition to the test. Because the Turing test indirectly tests sense-based capacities (see "The Quick Probe Assumption," Box 58), the addition of sensory devices is unnecessary. It doesn't make the test any more demanding than it already is.

64 Jack Copeland, 1993. Sense organs were not prohibited by Turing. The Turing test does not prohibit machines with sense organs from entering. In fact, Turing thought the best way to prepare a computer for competition would be to give it sensory abilities and an appropriate education.


65 Jack Copeland, 1993. Understanding can be tested without sensory interaction. Understanding can be tested through verbal quizzing, even when sense organs are lacking. Indeed, there are certain kinds of concepts that have to be tested for through verbal quizzing because of their nonperceptual nature (e.g., "square root," "intelligence," and "philosophy").

59 James Moor, 1976, 1987. Narrowness objections play a misleading numbers game. It is misleading to characterize the Turing test as just one test, because the test can be used to conduct many different kinds of tests. The interrogator can test for the ability to tell jokes, speak a foreign language, discuss urban renewal, or for any of a wide range of activities.

Computer, write me a sonnet ...

86 Hilary Putnam, 1975a. Super Spartans. We can imagine a race of Super Spartans who have been trained to suppress any behavioral indication of pain. They exhibit no pain behavior whatsoever, even when they are in excruciating pain. So, philosophical behaviorism fails to give an account of what pain is.

I feel no pain!

OUCH!

79 John Searle, 1980b. Overt behavior does not demonstrate understanding. The Chinese Room argument shows that a system can engage in Chinese-speaking behavior yet fail to understand Chinese. The Chinese Room is a system that could pass the Turing test without thinking in Chinese. Note: Also see Map 4.

82 Ned Block, 1981. The operational interpretation is too rigid. If thinking is operationally defined in terms of the ability to pass the Turing test, then systems that pass are necessarily intelligent and systems that fail are necessarily unintelligent. But this is too rigid. Intelligent machines could fail the test (see the "Is failing the test decisive?" arguments on this map) and unintelligent machines could pass the test (see the "Is passing the test decisive?" arguments on this map). Supported by "The Neo–Turing Test Conception of Intelligence," Box 88.


69 The Turing test provides a behavioral/operational definition of intelligence. The Turing test defines thinking in terms of overt, measurable behavior. It offers a behavioral/operational definition of our ordinary concept of thinking. Disputed by "A Box of Rocks Could Pass the Toe-Stepping Game," Box 7. Note: A similar debate takes place in the "Is passing the test decisive?" arguments on this map, which deal with the question of what can be demonstrated by successful simulation.

Unmapped Territory: Additional behaviorism/operationalism arguments


34 Joseph Weizenbaum, 1976. ELIZA. Written circa 1965, ELIZA emulates a Rogerian psychotherapist by exploiting a few simple strategies. Although written in just 200 lines of BASIC code, ELIZA managed to fool many people, including professional psychotherapists, into believing it was intelligent. (See "The ELIZA Effect," Map 1, Box 106.)

Patient: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here.
Patient: He says I'm depressed much of the time.
ELIZA: I am sorry to hear that you are depressed.

Implemented Model
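The kind of keyword-and-reflection strategy described above can be illustrated with a minimal Python sketch. The two rules, the word list, and the function names below are illustrative assumptions, not Weizenbaum's actual script or code; real ELIZA used a richer script of decomposition and reassembly rules.

import re

# Crude first/second-person swaps so the input can be echoed back to the "patient".
REFLECTIONS = {"my": "your", "me": "you", "i'm": "you are", "i": "you"}

def reflect(text: str) -> str:
    """Swap person-marking words in a lowercased, whitespace-split utterance."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(utterance: str) -> str:
    """Apply one keyword rule ("I'm X" -> sympathy), else echo the reflected input."""
    match = re.search(r"i'?m (.*)", utterance, re.IGNORECASE)
    if match:
        return f"I am sorry to hear that you are {match.group(1)}."
    return reflect(utterance).capitalize() + "."

print(respond("my boyfriend made me come here"))   # Your boyfriend made you come here.
print(respond("I'm depressed much of the time"))   # I am sorry to hear that you are depressed much of the time.

Even this toy version reproduces the flavor of the transcript above, which is part of why such surface strategies can be so persuasive.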


103 Anticipated by Ned Block, 1981. Combinatorial explosion makes the all-possible-conversations machine impossible. Programming the machine to engage in an hour-long Turing test is impossible, because it would give rise to a "combinatorial explosion." The programmer would have to code more strings than there are particles in the universe.
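A rough back-of-the-envelope calculation shows the scale of the explosion. The figures of 100 sensible replies per turn and 60 turns per hour are illustrative assumptions, not numbers from Block's argument, and 10^80 is a common rough estimate of the number of particles in the observable universe.

# Rough sketch of the combinatorial explosion in Box 103 (all numbers are assumptions).
replies_per_turn = 100
turns_per_hour = 60

possible_conversations = replies_per_turn ** turns_per_hour   # 100**60 = 10**120
particles_in_universe = 10 ** 80                               # common rough estimate

print(possible_conversations > particles_in_universe)          # True
print(f"~10^{len(str(possible_conversations)) - 1} conversation strings to store")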

Illustration: a branching tree of possible conversational openings and replies ("Hi," "Hello," "How are you?" "How's it going?" "What's with you?" "What's happening?" "Fine," "Great!" "Lousy," "Swell," "Pretty good," "Not bad," "OK," "Terrible," "Tired," "Ciao").


Can The Turing Test Determine Whether Computers Can Think?

The History and Status of the Debate — Map 2 of 7. An Issue Map™ Publication


Start Here

1 Alan Turing, 1950. Yes, machines can (or will be able to) think. A computational system can possess all important elements of human thinking or understanding.

Alan Turing

I believe that at the end of the century ... one will be able to speak of machines thinking without expecting to be contradicted.

Legend

© 1998 R. E. Horn. All rights reserved. Version 1.0

Focus Box: The lowest-numbered box in each issue area is an introductory focus box. The focus box introduces and summarizes the core dispute of each issue area, sometimes as an assumption and sometimes as a general claim with no particular author.

Arguments with No Authors: Arguments that are not attributable to a particular source (e.g., general philosophical positions, broad concepts, common tests in artificial intelligence) are listed with no accompanying author.

Citations: Complete bibliographic citations can be found in the booklet that accompanies this map.

Methodology: A further discussion of argumentation analysis methodology can be found in the booklet that accompanies this map.

This icon indicates areas of argument that lie on or near the boundaries of the central issue areas mapped on these maps. It marks regions of potential interest for future mapmakers and explorers.

Unmapped Territory

Additional arguments

The arguments on these maps are organized by links that carry a range of meanings:

is interpreted as: A distinctive reconfiguration of an earlier claim.

is disputed by: A charge made against another claim. Examples include: logical negations, counterexamples, attacks on an argument's emphasis, potential dangers an argument might raise, thought experiments, and implemented models.

is supported by: Arguments that uphold or defend another claim. Examples include: supporting evidence, further argumentation, thought experiments, extensions or qualifications, and implemented models.

As articulated by: Where this phrase appears in a box, it identifies a reformulation of another author's argument. The reformulation is different enough from the original author's wording to warrant the use of the tag. This phrase is also used when the original argument is impossible to locate other than in its articulation by a later author (e.g., word of mouth), or to denote a general philosophical position that is given a special articulation by a particular author.

Anticipated by: Where this phrase appears in a box, it identifies a potential attack on a previous argument that is raised by the author so that it can be disputed.

The Issue Mapping™ series is published by MacroVU Press, a division of MacroVU, Inc. MacroVU is a registered trademark of MacroVU, Inc. Issue Map and Issue Mapping are trademarks of Robert E. Horn.

The remaining 6 maps in this Issue Mapping™ series can be ordered with MasterCard, VISA, check, or money order. Order from MacroVU, Inc. by phone (206–780–9612), by fax (206–842–0296), or through the mail (Box 366, 321 High School Rd. NE, Bainbridge Island, WA 98110).

One of 7 in this Issue Mapping™ series—Get the rest!

50 Charles Karelis, 1986. Consciousness is essential to thinking. No matter how complex an entity's behavior is, it's not thinking unless it's conscious. Proposed counterexamples (e.g., problem solving in a dreamless sleep) are just deviant uses of language. Note: For more on consciousness, see Map 6.

Players in the Turing Test

Machine contestant: the machine that is attempting to pass the Turing test.

Human contestant, or control: a human whose conversation is compared with that of the machine. The control is sometimes also called a "confederate" or "player" or "foil."

Interrogator: a person who asks questions of the contestants.

Judge: a person who interprets the conversations between interrogator and contestant. Often it is assumed that an interrogator is at the same time a judge.

Referee: a person who watches to make sure interrogators ask no unfair questions.

con • trol: A comparison case for what is being tested, to ensure that test results are not just artifacts of the test procedure or some other factor. In the Turing test the responses of human controls are compared with those of machines. Without such controls we would never know whether observed conversational ability was a result of actual conversational ability or of some other factor, such as the interrogator's bias, gullibility, etc.

In another room ... a human control answers questions posed by the interrogator.

In one room ... a machine answers questions posed by the interrogator.

2 Alan Turing, 1950. The Turing test is an adequate test of thinking. The Turing test is a test to determine whether a machine can think. If a computer can persuade judges that it is human via teletyped conversation, then it passes the test and is deemed able to think. Notes:
• Shown here is a standard interpretation of Turing's test; many others are possible. Turing's original version of the Turing test takes the form of an imitation game (see "The Imitation Game," Box 4).
• Authors differ in whether they take the Turing test to establish "thinking" or "intelligence."

In a third room ... the interrogator engages in teletyped (computer) conversation with the contestants as judges look on. If a machine can trick the interrogator and the judges into thinking it is a human, then that machine has passed the Turing test.

Diagram: the Turing test setup, showing the judges, the interrogator, the human control, the machine contestant, and the referee across three rooms.

Part I

be • hav • ior • ism: A school of psychology that takes the overt behavior of a system to be the only legitimate basis of research. We can distinguish philosophical (or logical) behaviorism from methodological behaviorism. Philosophical behaviorism is the position that mental states are reducible to sets of input-output correlations (or "behavioral dispositions"). Methodological behaviorism is a research program that restricts itself to the investigation of behavior and its observable causes.

39 Jack Copeland, 1993. Intelligent machines could fail the test. An intelligent machine could fail the Turing test by acting nonhuman, by being psychologically unsophisticated (though intelligent), or simply by being bored with the proceedings.

Whatever ... How old are you?

Ned Block


Jack Copeland

Hugh Loebner


James Moor

Keith Gunderson

14 Restricted Turing tests. A kind of Turing test that restricts conversation to one topic, such as "women's clothes" or "automotives." Because restricted Turing tests are much easier to pass, they provide a feasible short-term goal for AI researchers to work toward.

Today's Topic: DINOSAURS

Diagram: specific intellectual tasks aggregating into AI, contrasted with Turing test proficiency.

au • ton • o • my (of theories): Two theories A and B are autonomous if the phenomena of A can be accounted for without recourse to the conceptual framework of B.

B: Syntax

A: Phonology

41 Jack Copeland, 1993. Nondecisive tests can be useful. It is true that failing the Turing test is not decisive—it is not a definitive litmus test for intelligence. But that doesn't make it a bad test. Many perfectly good tests are nondefinitive but useful. For example, fingerprint tests are nondefinitive (they can be foiled by gloves) but remain useful nevertheless.

40 Jack Copeland, 1993. Some intelligent beings would fail the test. Chimpanzees, dolphins, and prelinguistic infants can think but would fail the Turing test. If a thinking animal could fail the test, then presumably a thinking machine could too.


45 Robert French, 1990. The seagull test is species biased. The seagull test—an imaginary test to determine whether a machine can fly—shows how a Turing-type test is species biased. Judges compare 2 radar screens that track a seagull and a putative flying machine, respectively. A machine passes if the judges can't distinguish it from the seagull on the radar screens. Failing the seagull test tells us nothing about flying in general, because only seagulls (or machines behaving exactly like them) could pass it. Non-seagull flyers (such as airplanes, helicopters, bats, etc.) could never pass the test, because they would produce a distinctive radar signature.

49 Charles Karelis, 1986. A zombie or an unconscious machine could pass the test. A behaviorally sophisticated but unconscious zombie could pass the Turing test, but it would not be thinking. Similarly, a behaviorally sophisticated but unconscious computer might pass the Turing test without thinking.


47 Ned Block, 1981. Passing the test is not decisive. Even if a computer were to pass the Turing test, this would not justify the conclusion that it was thinking intelligently. Note: Because the issue of passing the Turing test is closely tied with the issue of what can be inferred from external behavior, this region contains arguments that are similar to those in the "Is the test, behaviorally or operationally construed, a legitimate intelligence test?" arguments on this map.


62 Peter Carruthers, 1986. The sense organs objection. The Turing test neglects perceptual aspects of intelligence. A machine might pass the test by producing competent verbal behavior but fail to understand how its words relate to the perceptual world. An adequate test would have to provide the machine with some sort of sensory apparatus.

66 Steve Harnad, 1995. The Turing test underdetermines the goal of creating humanlike robots. By focusing on a particular subset of human behavior (symbolic capacity), the Turing test underconstrains the engineering problem of creating robots with human capabilities. Because of this, machines that pass the Turing test will not be easily extensible to machines that have other human capacities.

68 John Barresi, 1987. The Cyberiad test. A race of machine people ("cybers") is judged intelligent if they can do whatever humans can do in a natural environment. They must be able to replace humans in their social roles, perpetuate their species for as long as the human species could, and maintain an evolving language.

67 John Barresi, 1987. You can't fool Mother Nature. Because the Turing test only measures the ability of a machine to trick judges, it cannot test for the flexible adaptations needed to interact in the real world, where Mother Nature is the judge of intelligence. In the real world intelligence is best indicated by an ability to survive through coordinated perception and action. The Turing test neglects such abilities and biases us to think of intelligence in an intellectualist way.

You can't fool me ...

89 Ned Block, 1981. Psychologism: Internal differences matter. Psychologism is the doctrine that whether or not behavior is intelligent depends on the nature of the information processing that produces it. If 2 systems behave in exactly the same way, one might be intelligent while the other is stupid, because of differences in the way each system processes information. Behavior by itself isn't enough; internal differences matter.


108 James Moor, 1976, 1987. The inductive evidence interpretation. The Turing test provides inductive evidence that a system thinks. Such evidence may be weak or strong, but it is never deductively certain. The claim that a system thinks, based on its passing the Turing test, remains subject to revision as further evidence comes to light.

That provides further inductive evidence that they think, but we don't know for sure.

More tests

76 Jack Copeland, 1993. The beefed-up Turing test. The addition of a design criterion to the original Turing test ensures that the system being tested has the right kind of internal design.


So, did I ever tell you about this test I designed?

19 Hugh Loebner, 1994. Mozart's backside. After a coach ride to Vienna in 1781, Mozart complained that his buttocks were so tired that he could hardly ride on. Had a da Vinci Prize advanced the state of spring technology in earlier centuries, Mozart's coach might have boasted a smoother suspension, and his backside might not have been so sore. The Loebner Prize might turn out to be useful in unexpected ways, much as the da Vinci Prize might have.

Joseph Weizenbaum

52 Ned Block, 1981. Human judges may be fooled too easily. Human judges can be fooled by unintelligent but cleverly programmed machines. Supported by "The ELIZA Effect," Map 1, Box 106.

Sure, it can think ...

This court is not fooled.

20 Stuart Shieber, 1994b. The Loebner prize money should not be spent on "useful spin-offs." Whether the Loebner prize money should be spent on generating "useful spin-offs" is dubious. The money is best spent toward advancing artificial intelligence.



Joseph Rychlak

John Searle

Daniel Dennett

Everything in A can be explained without reference to B and vice versa.

PARRY: ... I think he is behind the gangsters who are out to get me.
Interrogator: Did you see things that others don't seem to see?
PARRY: No, I don't. I'm not like the other patient here.
Interrogator: You didn't answer my question.
PARRY: Maybe it wasn't worth answering.


Robert French

What is this? This info-mural is one of seven "argumentation maps" in a series that explores Turing's question: "Can computers think and/or will they ever be able to?" Argumentation mapping is a method that provides:
- a method for portraying major philosophical, political, and pragmatic debates
- a summary of an ongoing, major philosophical debate of the 20th century
- a new way of doing intellectual history.

What does it contain?

Altogether the seven maps:
- summarize over 800 major moves in the debates threaded into claims, rebuttals, and counterrebuttals
- contain 97-130 arguments and rebuttals per map
- cover 70 issue areas in the 7 maps
- include 32 sidebars of history and further background

The argumentation maps:
- arrange debate so that the current stopping point of each debate thread is easily seen
- identify original arguments by over 380 protagonists worldwide over 40 years
- make the current frontier of debate easily identifiable
- provide summaries of eleven major philosophical camps of the protagonists (or schools of thought).

How do I get a printed copy?

You can order artist/researcher signed copies of all seven maps from www.macrovu.com for $500.00 plus shipping and handling.

1998
