
Cognitive Wheels: The Frame Problem of AI

Originally appeared in Hookway, C., ed., Minds, Machines and Evolution (Cambridge: Cambridge University Press, 1984), pp. 129-151.

In 1969, John McCarthy, the mathematician who coined the term "Artificial Intelligence," joined forces with another AI researcher, Patrick Hayes, and coined another term, "the frame problem." It seemed to be a devil of a problem, perhaps even a mortal blow to the hopes of AI. It looked to me like a new philosophical or epistemological problem, certainly not an electronic or computational problem. What is the frame problem? In this essay I thought I was providing interested bystanders with a useful introduction to the problem, as well as an account of its philosophical interest, but some people in AI, including McCarthy and Hayes (who ought to know), thought I was misleading the bystanders. The real frame problem was not, they said, what I was talking about. Others weren't so sure. I myself no longer have any firm convictions about which problems are which, or even about which problems are really hard after all - but something is still obstreperously resisting solution, that's for sure. Happily, there are now three follow-up volumes that pursue these disagreements through fascinating thickets of controversy: The Robot's Dilemma (Pylyshyn, 1987), Reasoning Agents in a Dynamic World (Ford and Hayes, 1991), and The Robot's Dilemma Revisited (Ford and Pylyshyn, 1996). If you read and understand those volumes you will understand just about everything anybody understands about the frame problem. Good luck.


Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT(WAGON, ROOM) would result in the battery being removed from the room. Straightway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 knew that the bomb was on the wagon in the room, but didn't realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.

Back to the drawing board. "The solution is obvious," said the designers. "Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side effects, by deducing these implications from the descriptions it uses in formulating its plans." They called their next model, the robot-deducer, R1D1. They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT(WAGON, ROOM) it began, as designed, to consider the implications of such a course of action. It had just finished deducing that pulling the wagon out of the room would not change the color of the room's walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon - when the bomb exploded.

Back to the drawing board. "We must teach it the difference between relevant implications and irrelevant implications," said the designers, "and teach it to ignore the irrelevant ones." So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in their next model, the robot-relevant-deducer, or R2D1 for short. When they subjected R2D1 to the test that had so unequivocally selected its ancestors for extinction, they were surprised to see it sitting, Hamlet-like, outside the room containing the ticking bomb, the native hue of its resolution sicklied o'er with the pale cast of thought, as Shakespeare (and more recently Fodor) has aptly put it. "Do something!" they yelled at it. "I am," it retorted. "I'm busy ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and . . ." the bomb went off.

All these robots suffer from the frame problem.1 If there is ever to be a robot with the fabled perspicacity and real-time adroitness of R2D2, robot designers must solve the frame problem. It appears at first to be at best an annoying technical embarrassment in robotics, or merely a curious puzzle for the bemusement of people working in Artificial Intelligence (AI). I think, on the contrary, that it is a new, deep epistemological problem - accessible in principle but unnoticed by generations of philosophers - brought to light by the novel methods of AI, and still far from being solved. Many people in AI have come to have a similarly high regard for the seriousness of the frame problem. As one researcher has quipped, "We have given up the goal of designing an intelligent robot, and turned to the task of designing a gun that will destroy any intelligent robot that anyone else designs!"

1. The problem is introduced by John McCarthy and Patrick Hayes in their 1969 paper. The task in which the problem arises was first formulated in McCarthy 1960. I am grateful to Bo Dahlbom, Pat Hayes, John Haugeland, John McCarthy, Bob Moore, and Zenon Pylyshyn for the many hours they have spent trying to make me understand the frame problem. It is not their fault that so much of their instruction has still not taken. I have also benefited greatly from reading an unpublished paper, "Modeling Change - the Frame Problem," by Lars-Erik Janlert, Institute of Information Processing, University of Umeå, Sweden. It is to be hoped that a subsequent version of that paper will soon find its way into print, since it is an invaluable vademecum for any neophyte, in addition to advancing several novel themes. [This hope has been fulfilled: Janlert, 1987. - DCD, 1997]

I will try here to present an elementary, nontechnical, philosophical introduction to the frame problem, and show why it is so interesting. I have no solution to offer, or even any original suggestions for where a solution might lie. It is hard enough, I have discovered, just to say clearly what the frame problem is - and is not. In fact, there is less than perfect agreement in usage within the AI research community. McCarthy and Hayes, who coined the term, use it to refer to a particular, narrowly conceived problem about representation that arises only for certain strategies for dealing with a broader problem about real-time planning systems. Others call this broader problem the frame problem - "the whole pudding," as Hayes has called it (personal correspondence) - and this may not be mere terminological sloppiness. If "solutions" to the narrowly conceived problem have the effect of driving a (deeper) difficulty into some other quarter of the broad problem, we might better reserve the title for this hard-to-corner difficulty. With apologies to McCarthy and Hayes for joining those who would appropriate their term, I am going to attempt an introduction to the whole pudding, calling it the frame problem. I will try in due course to describe the narrower version of the problem, "the frame problem proper" if you like, and show something of its relation to the broader problem.



Since the frame problem, whatever it is, is certainly not solved yet (and may be, in its current guises, insoluble), the ideological foes of AI such as Hubert Dreyfus and John Searle are tempted to compose obituaries for the field, citing the frame problem as the cause of death. In What Computers Can't Do (Dreyfus 1972), Dreyfus sought to show that AI was a fundamentally mistaken method for studying the mind, and in fact many of his somewhat impressionistic complaints about AI models and many of his declared insights into their intrinsic limitations can be seen to hover quite systematically in the neighborhood of the frame problem. Dreyfus never explicitly mentions the frame problem,2 but is it perhaps the smoking pistol he was looking for but didn't quite know how to describe? Yes, I think AI can be seen to be holding a smoking pistol, but at least in its "whole pudding" guise it is everybody's problem, not just a problem for AI, which, like the good guy in many a mystery story, should be credited with a discovery, not accused of a crime.

One does not have to hope for a robot-filled future to be worried by the frame problem. It apparently arises from some very widely held and innocuous-seeming assumptions about the nature of intelligence, the truth of the most undoctrinaire brand of physicalism, and the conviction that it must be possible to explain how we think. (The dualist evades the frame problem - but only because dualism draws the veil of mystery and obfuscation over all the tough how-questions; as we shall see, the problem arises when one takes seriously the task of answering certain how-questions. Dualists inexcusably excuse themselves from the frame problem.) One utterly central - if not defining - feature of an intelligent being is that it can "look before it leaps." Better, it can think before it leaps. Intelligence is (at least partly) a matter of using well what you know - but for what? For improving the fidelity of your expectations about what is going to happen next, for planning, for considering courses of action, for framing further hypotheses with the aim of increasing the knowledge you will use in the future, so that you can preserve yourself, by letting your hypotheses die in your stead (as Sir Karl Popper once put it). The stupid - as opposed to ignorant - being is the one who lights the match to peer into the fuel tank,3 who saws off the limb he is sitting on, who locks his keys in his car and then spends the next hour wondering how on earth to get his family out of the car.

2. Dreyfus mentions McCarthy (1960, pp. 213-214), but the theme of his discussion there is that McCarthy ignores the difference between a physical state description and a situation description, a theme that might be succinctly summarized: a house is not a home. Similarly, he mentions ceteris paribus assumptions (in the Introduction to the Revised Edition, p. 56ff), but only in announcing his allegiance to Wittgenstein's idea that "whenever human behavior is analyzed in terms of rules, these rules must always contain a ceteris paribus condition." But this, even if true, misses the deeper point: the need for something like ceteris paribus assumptions confronts Robinson Crusoe just as ineluctably as it confronts any protagonist who finds himself in a situation involving human culture. The point is not, it seems, restricted to Geisteswissenschaft (as it is usually conceived); the "intelligent" robot on an (otherwise?) uninhabited but hostile planet faces the frame problem as soon as it commences to plan its days.

3. The example is from an important discussion of rationality by Christopher Cherniak, in "Rationality and the Structure of Memory" (1983).

But when we think before we leap, how do we do it? The answer seems obvious: an intelligent being learns from experience, and then uses what it has learned to guide expectations in the future. Hume explained this in terms of habits of expectation, in effect. But how do the habits work? Hume had a hand-waving answer - associationism - to the effect that certain transition paths between ideas grew more likely-to-be-followed as they became well worn, but since it was not Hume's job, surely, to explain in more detail the mechanics of these links, problems about how such paths could be put to good use - and not just turned into an impenetrable maze of untraversable alternatives - were not discovered.

Hume, like virtually all other philosophers and "mentalistic" psychologists, was unable to see the frame problem because he operated at what I call a purely semantic level, or a phenomenological level. At the phenomenological level, all the items in view are individuated by their meanings. Their meanings are, if you like, "given" - but this just means that the theorist helps himself to all the meanings he wants. In this way the semantic relation between one item and the next is typically plain to see, and one just assumes that the items behave as items with those meanings ought to behave. We can bring this out by concocting a Humean account of a bit of learning.

Suppose there are two children, both of whom initially tend to grab cookies from the jar without asking. One child is allowed to do this unmolested but the other is spanked each time she tries. What is the result? The second child learns not to go for the cookies. Why? Because she has had experience of cookie-reaching followed swiftly by spanking. What good does that do? Well, the idea of cookie-reaching becomes connected by a habit path to the idea of spanking, which in turn is connected to the idea of pain . . . so of course the child refrains. Why? Well, that's just the effect of that idea on that sort of circumstance. But why? Well, what else ought the idea of pain to do on such an occasion? Well, it might cause the child to pirouette on her left foot, or recite poetry or blink or recall her fifth birthday. But given what the idea of pain means, any of those effects would be absurd. True; now how can ideas be designed so that their effects are what they ought to be, given what they mean? Designing some internal thing - an idea, let's call it - so that it behaves vis-à-vis its brethren as if it meant cookie or pain is the only way of endowing that thing with that meaning; it couldn't mean a thing if it didn't have those internal behavioral dispositions.

That is the mechanical question the philosophers left to some dimly imagined future researcher. Such a division of labor might have been all right, but it is turning out that most of the truly difficult and deep puzzles of learning and intelligence get kicked downstairs by this move. It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half trick, they explain that it is really quite obvious: the magician doesn't really saw her in half; he simply makes it appear that he does. "But how does he do that?" we ask. "Not our department," say the philosophers - and some of them add, sonorously: "Explanation has to stop somewhere."4

When one operates at the purely phenomenological or semantic level, where does one get one's data, and how does theorizing proceed? The term "phenomenology" has traditionally been associated with an introspective method - an examination of what is presented or given to consciousness. A person's phenomenology just was by definition the contents of his or her consciousness. Although this has been the ideology all along, it has never been the practice. Locke, for instance, may have thought his "historical, plain method" was a method of unbiased self-observation, but in fact it was largely a matter of disguised aprioristic reasoning about what ideas and impressions had to be to do the jobs they "obviously" did.5 The myth that each of us can observe our mental activities has prolonged the illusion that major progress could be made on the theory of thinking by simply reflecting carefully on our own cases. For some time now we have known better: we have conscious access to only the upper surface, as it were, of the multilevel system of information-processing that occurs in us. Nevertheless, the myth still claims its victims.

4. Note that on this unflattering portrayal, the philosophers might still be doing some valuable work; think of the wild goose chases one might avert for some investigator who had rashly concluded that the magician really did saw the lady in half and then miraculously reunite her. People have jumped to such silly conclusions, after all; many philosophers have done so, for instance.

5. See my 1982d, a commentary on Goodman (1982).

So the analogy of the stage magician is particularly apt. One is not likely to make much progress in figuring out how the tricks are done by simply sitting attentively in the audience and watching like a hawk. Too much is going on out of sight. Better to face the fact that one must either rummage around backstage or in the wings, hoping to disrupt the performance in telling ways; or, from one's armchair, think aprioristically about how the tricks must be done, given whatever is manifest about the constraints. The frame problem is then rather like the unsettling but familiar "discovery" that so far as armchair thought can determine, a certain trick we have just observed is flat impossible.

Here is an example of the trick. Making a midnight snack. How is it that I can get myself a midnight snack? What could be simpler? I suspect there is some sliced leftover turkey and mayonnaise in the fridge, and bread in the bread box - and a bottle of beer in the fridge as well. I realize I can put these elements together, so I concoct a childishly simple plan: I'll just go and check out the fridge, get out the requisite materials, and make myself a sandwich, to be washed down with a beer. I'll need a knife, a plate, and a glass for the beer. I forthwith put the plan into action and it works! Big deal.

Now of course I couldn't do this without knowing a good deal - about bread, spreading mayonnaise, opening the fridge, the friction and inertia that will keep the turkey between the bread slices and the bread on the plate as I carry the plate over to the table beside my easy chair. I also need to know about how to get the beer out of the bottle and into the glass.6 Thanks to my previous accumulation of experience in the world, fortunately, I am equipped with all this worldly knowledge. Of course some of the knowledge I need might be innate. For instance, one trivial thing I have to know is that when the beer gets into the glass it is no longer in the bottle, and that if I'm holding the mayonnaise jar in my left hand I cannot also be spreading the mayonnaise with the knife in my left hand. Perhaps these are straightforward implications - instantiations - of some more fundamental things that I was in effect born knowing such as, perhaps, the fact that if something is in one location it isn't also in another, different location; or the fact that two things can't be in the same place at the same time; or the fact that situations change as the result of actions. It is hard to imagine just how one could learn these facts from experience.

6. This knowledge of physics is not what one learns in school, but in one's crib. See Hayes 1978, 1980.

Such utterly banal facts escape our notice as we act and plan, and it is not surprising that philosophers, thinking phenomenologically but introspectively, should have overlooked them. But if one turns one's back on introspection, and just thinks "hetero-phenomenologically"7 about the purely informational demands of the task - what must be known by any entity that can perform this task - these banal bits of knowledge rise to our attention. We can easily satisfy ourselves that no agent that did not in some sense have the benefit of the information (that the beer in the bottle is not in the glass, etc.) could perform such a simple task. It is one of the chief methodological beauties of AI that it makes one be a phenomenologist; one reasons about what the agent must "know" or figure out unconsciously or consciously in order to perform in various ways.

The reason AI forces the banal information to the surface is that the tasks set by AI start at zero: the computer to be programmed to simulate the agent (or the brain of the robot, if we are actually going to operate in the real, nonsimulated world), initially knows nothing at all "about the world." The computer is the fabled tabula rasa on which every required item must somehow be impressed, either by the programmer at the outset or via subsequent "learning" by the system.

We can all agree, today, that there could be no learning at all by an entity that faced the world at birth as a tabula rasa, but the dividing line between what is innate and what develops maturationally and what is actually learned is of less theoretical importance than one might have thought. While some information has to be innate, there is hardly any particular item that must be: an appreciation of modus ponens, perhaps, and the law of the excluded middle, and some sense of causality. And while some things we know must be learned - for example, that Thanksgiving falls on a Thursday, or that refrigerators keep food fresh - many other "very empirical" things could in principle be innately known - for example, that smiles mean happiness, or that unsuspended, unsupported things fall. (There is some evidence, in fact, that there is an innate bias in favor of perceiving things to fall with gravitational acceleration.)8

7. For elaborations of hetero-phenomenology, see Dennett 1978a, chapter 10, "Two Approaches to Mental Images," and Dennett 1982b. See also Dennett 1982a and 1991.

8. Gunnar Johansson has shown that animated films of "falling" objects in which the moving spots drop with the normal acceleration of gravity are unmistakably distinguished by the casual observer from "artificial" motions. I do not know whether infants have been tested to see if they respond selectively to such displays.

Taking advantage of this advance in theoretical understanding (if that is what it is), people in AI can frankly ignore the problem of learning (it seems) and take the shortcut of installing all that an agent has to "know" to solve a problem. After all, if God made Adam as an adult who could presumably solve the midnight snack problem ab initio, AI agent-creators can in principle make an "adult" agent who is equipped with worldly knowledge as if it had laboriously learned all the things it needs to know. This may of course be a dangerous shortcut.

The installation problem is then the problem of installing in one way or another all the information needed by an agent to plan in a changing world. It is a difficult problem because the information must be installed in a usable format. The problem can be broken down initially into the semantic problem and the syntactic problem. The semantic problem - called by Allen Newell the problem at the "knowledge level" (Newell, 1982) - is the problem of just what information (on what topics, to what effect) must be installed. The syntactic problem is what system, format, structure, or mechanism to use to put that information in.9

9. McCarthy and Hayes (1969) draw a different distinction between the "epistemological" and the "heuristic." The difference is that they include the question "In what kind of internal notation is the system's knowledge to be expressed?" in the epistemological problem (see p. 466), dividing off that syntactic (and hence somewhat mechanical) question from the procedural questions of the design of "the mechanism that on the basis of the information solves the problem and decides what to do." One of the prime grounds for controversy about just which problem the frame problem is springs from this attempted division of the issue. For the answer to the syntactic aspects of the epistemological question makes a large difference to the nature of the heuristic problem. After all, if the syntax of the expression of the system's knowledge is sufficiently perverse, then in spite of the accuracy of the representation of that knowledge, the heuristic problem will be impossible. And some have suggested that the heuristic problem would virtually disappear if the world knowledge were felicitously couched in the first place.


The division is clearly seen in the example of the midnight snack problem. I listed a few of the very many humdrum facts one needs to know to solve the snack problem, but I didn't mean to suggest that those facts are stored in me - or in any agent - piecemeal, in the form of a long list of sentences explicitly declaring each of these facts for the benefit of the agent. That is of course one possibility, officially: it is a preposterously extreme version of the "language of thought" theory of mental representation, with each distinguishable "proposition" separately inscribed in the system. No one subscribes to such a view; even an encyclopedia achieves important economies of explicit expression via its organization, and a walking encyclopedia - not a bad caricature of the envisaged AI agent - must use different systemic principles to achieve efficient representation and access. We know trillions of things; we know that mayonnaise doesn't dissolve knives on contact, that a slice of bread is smaller than Mount Everest, that opening the refrigerator doesn't cause a nuclear holocaust in the kitchen.

There must be in us - and in any intelligent agent - some highly efficient, partly generative or productive system of representing - storing for use - all the information needed. Somehow, then, we must store many "facts" at once - where facts are presumed to line up more or less one-to-one with nonsynonymous declarative sentences. Moreover, we cannot realistically hope for what one might call a Spinozistic solution - a small set of axioms and definitions from which all the rest of our knowledge is deducible on demand - since it is clear that there simply are no entailment relations between vast numbers of these facts. (When we rely, as we must, on experience to tell us how the world is, experience tells us things that do not at all follow from what we have heretofore known.)

The demand for an efficient system of information storage is in part a space limitation, since our brains are not all that large, but more importantly it is a time limitation, for stored information that is not reliably accessible for use in the short real-time spans typically available to agents in the world is of no use at all. A creature that can solve any problem given enough time - say a million years - is not in fact intelligent at all. We live in a time-pressured world and must be able to think quickly before we leap. (One doesn't have to view this as an a priori condition on intelligence. One can simply note that we do in fact think quickly, so there is an empirical question about how we manage to do it.)

The task facing the AI researcher appears to be designing a system that can plan by using well-selected elements from its store of knowledge about the world it operates in. "Introspection" on how we plan yields the following description of a process: one envisages a certain situation (often very sketchily); one then imagines performing a certain act in that situation; one then "sees" what the likely outcome of that envisaged act in that situation would be, and evaluates it. What happens backstage, as it were, to permit this "seeing" (and render it as reliable as it is) is utterly inaccessible to introspection.

On relatively rare occasions we all experience such bouts of thought, unfolding in consciousness at the deliberate speed of pondering. These are the occasions in which we are faced with some novel and relatively difficult problem, such as: How can I get the piano upstairs? or Is there any way to electrify the chandelier without cutting through the plaster ceiling? It would be quite odd to find that one had to think that way (consciously and slowly) in order to solve the midnight snack problem. But the suggestion is that even the trivial problems of planning and bodily guidance that are beneath our notice (though in some sense we "face" them) are solved by similar processes. Why? I don't observe myself planning in such situations. This fact suffices to convince the traditional, introspective phenomenologist that no such planning is going on.10 The hetero-phenomenologist, on the other hand, reasons that one way or another information about the objects in the situation, and about the intended effects and side effects of the candidate actions, must be used (considered, attended to, applied, appreciated). Why? Because otherwise the "smart" behavior would be sheer luck or magic. (Do we have any model for how such unconscious information-appreciation might be accomplished? The only model we have so far is conscious, deliberate information-appreciation. Perhaps, AI suggests, this is a good model. If it isn't, we are all utterly in the dark for the time being.)

We assure ourselves of the intelligence of an agent by considering counterfactuals: if I had been told that the turkey was poisoned, or the beer explosive, or the plate dirty, or the knife too fragile to spread mayonnaise, would I have acted as I did? If I were a stupid "automaton" - or like the Sphex wasp who "mindlessly" repeats her stereotyped burrow-checking routine till she drops11 - I might infelicitously "go through the motions" of making a midnight snack oblivious to the recalcitrant features of the environment.12 But in fact, my midnight-snack-making behavior is multifariously sensitive to current and background information about the situation. The only way it could be so sensitive - runs the tacit hetero-phenomenological reasoning - is for it to examine, or test for, the information in question. This information manipulation may be unconscious and swift, and it need not (it better not) consist of hundreds or thousands of seriatim testing procedures, but it must occur somehow, and its benefits must appear in time to help me as I commit myself to action.

10. Such observations also convinced Gilbert Ryle, who was, in an important sense, an introspective phenomenologist (and not a "behaviorist"). See Ryle 1949. One can readily imagine Ryle's attack on AI: "And how many inferences do I perform in the course of preparing my sandwich? What syllogisms convince me that the beer will stay in the glass?" For a further discussion of Ryle's skeptical arguments and their relation to cognitive science, see my (1983b) "Styles of Representation."

11. "When the time comes for egg laying the wasp Sphex builds a burrow for the purpose and seeks out a cricket which she stings in such a way as to paralyze but not kill it. She drags the cricket into her burrow, lays her eggs alongside, closes the burrow, then flies away, never to return. In due course, the eggs hatch and the wasp grubs feed off the paralyzed cricket, which has not decayed, having been kept in the wasp equivalent of a deep freeze. To the human mind, such an elaborately organized and seemingly purposeful routine conveys a convincing flavor of logic and thoughtfulness - until more details are examined. For example, the wasp's routine is to bring the paralyzed cricket to the burrow, leave it on the threshold, go inside to see that all is well, emerge, and then drag the cricket in. If, while the wasp is inside making her preliminary inspection, the cricket is moved a few inches away, the wasp, on emerging from the burrow, will bring the cricket back to the threshold, but not inside, and will then repeat the preparatory procedure of entering the burrow to see that everything is all right. If again the cricket is removed a few inches while the wasp is inside, once again the wasp will move the cricket up to the threshold and re-enter the burrow for a final check. The wasp never thinks of pulling the cricket straight in. On one occasion, this procedure was repeated forty times, always with the same result" (Wooldridge 1963). This vivid example of a familiar phenomenon among insects is discussed by me in Brainstorms, and in Douglas R. Hofstadter 1982.

12. See my 1982a: 58-59, on "Robot Theater."

I may of course have a midnight snack routine, developed over the years, in which case I can partly rely on it to pilot my actions. Such a complicated "habit" would have to be under the control of a mechanism of some complexity, since even a rigid sequence of steps would involve periodic testing to ensure that subgoals had been satisfied. And even if I am an infrequent snacker, I no doubt have routines for mayonnaise-spreading, sandwich-making, and getting something out of the fridge, from which I could compose my somewhat novel activity. Would such ensembles of routines, nicely integrated, suffice to solve the frame problem for me, at least in my more "mindless" endeavors? That is an open question to which I will return below.

It is important in any case to acknowledge at the outset, and remind oneself frequently, that even very intelligent people do make mistakes; we are not only not infallible planners, we are quite prone to overlooking large and retrospectively obvious flaws in our plans. This foible manifests itself in the familiar case of "force of habit" errors (in which our stereotypical routines reveal themselves to be surprisingly insensitive to some portentous environmental changes while surprisingly sensitive to others). The same weakness also appears on occasion in cases where we have consciously deliberated with some care. How often have you embarked on a project of the piano-moving variety - in which you've thought through or even "walked through" the whole operation in advance - only to discover that you must backtrack or abandon the project when some perfectly foreseeable but unforeseen obstacle or unintended side effect loomed? If we smart folk seldom actually paint ourselves into corners, it may not be because we plan ahead so well as that we supplement our sloppy planning powers with a combination of recollected lore (about fools who paint themselves into corners, for instance) and frequent progress checks as we proceed. Even so, we must know enough to call up the right lore at the right time, and to recognize impending problems as such.

To summarize: we have been led by fairly obvious and compelling considerations to the conclusion that an intelligent agent must engage in swift information-sensitive "planning" which has the effect of producing reliable but not foolproof expectations of the effects of its actions. That these expectations are normally in force in intelligent creatures is testified to by the startled reaction they exhibit when their expectations are thwarted. This suggests a graphic way of characterizing the minimal goal that can spawn the frame problem: we want a midnight-snack-making robot to be "surprised" by the trick plate, the unspreadable concrete mayonnaise, the fact that we've glued the beer glass to the shelf. To be surprised you have to have expected something else, and in order to have expected the right something else, you have to have and use a lot of information about the things in the world.13

13. Hubert Dreyfus has pointed out that not expecting x does not imply expecting y (where x ≠ y), so one can be startled by something one didn't expect without its having to be the case that one (unconsciously) expected something else. But this sense of not expecting will not suffice to explain startle. What are the odds against your seeing an Alfa Romeo, a Buick, a Chevrolet, and a Dodge parked in alphabetical order some time or other within the next five hours? Very high, no doubt, all things considered, so I would not expect you to expect this; I also would not expect you to be startled by seeing this unexpected sight - except in the sort of special case where you had reason to expect something else at that time and place. Startle reactions are powerful indicators of cognitive state - a fact long known by the police (and writers of detective novels). Only someone who expected the refrigerator to contain Smith's corpse (say) would be startled (as opposed to mildly interested) to find it to contain the rather unlikely trio: a bottle of vintage Chablis, a can of cat food, and a dishrag.

The central role of expectation has led some to conclude that the frame problem is not a new problem at all, and has nothing particularly to do with planning actions. It is, they think, simply the problem of having good expectations about any future events, whether they are one's own actions, the actions of another agent, or mere happenings of nature. That is the problem of induction - noted by Hume and intensified by Goodman (Goodman 1965), but still not solved to anyone's satisfaction. We know today that the problem is a nasty one indeed. Theories of subjective probability and belief fixation have not been stabilized in reflective equilibrium, so it is fair to say that no one has a good, principled answer to the general question: given that I believe all this (have all this evidence), what ought I to believe as well (about the future, or about unexamined parts of the world)?

The reduction of one unsolved problem to another is some sort of progress, unsatisfying though it may be, but it is not an option in this case. The frame problem is not the problem of induction in disguise. For suppose the problem of induction were solved. Suppose - perhaps miraculously - that our agent has solved all its induction problems or had them solved by fiat; it believes, then, all the right generalizations from its evidence, and associates with all of them the appropriate probabilities and conditional probabilities. This agent, ex hypothesi, believes just what it ought to believe about all empirical matters in its ken, including the probabilities of future events. It might still have a bad case of the frame problem, for that problem concerns how to represent (so it can be used) all that hard-won empirical information - a problem that arises independently of the truth value, probability, warranted assertability, or subjective certainty of any of it. Even if you have excellent knowledge (and not mere belief) about the changing world, how can this knowledge be represented so that it can be efficaciously brought to bear?

Recall poor R1D1, and suppose for the sake of argument that it had perfect empirical knowledge of the probabilities of all the effects of all its actions that would be detectable by it. Thus it believes that with probability 0.7864, executing PULLOUT(WAGON, ROOM) will cause the wagon wheels to make an audible noise; and with probability 0.5, the door to the room will open in rather than out; and with probability 0.999996, there will be no live elephants in the room; and with probability 0.997 the bomb will remain on the wagon when it is moved. How is R1D1 to find this last, relevant needle in its haystack of empirical knowledge? A walking encyclopedia will walk over a cliff, for all its knowledge of cliffs and the effects of gravity, unless it is designed in such a fashion that it can find the right bits of knowledge at the right times, so it can plan its engagements with the real world.
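For readers who want R1D1's predicament in concrete terms, here is a minimal sketch in Python, using the text's hypothetical probabilities; the variable names are illustrative inventions, not anyone's actual system. The point is that nothing in the representation itself marks the relevant entry as relevant:

```python
# A minimal sketch of R1D1's haystack: perfect probabilistic knowledge,
# but no way to pick out the one item that matters. The entries and
# numbers come from the text's hypothetical example.

effects_of_pullout = {
    "wagon wheels make an audible noise": 0.7864,
    "door opens in rather than out": 0.5,
    "no live elephants in the room": 0.999996,
    "bomb remains on the wagon": 0.997,
    # ... plus millions of equally well-attested irrelevancies
}

# Nothing in the data structure flags the fourth entry as the deadly one;
# scanning everything is exactly what R2D1 perished doing.
for effect, p in effects_of_pullout.items():
    print(f"{p:.6f}  {effect}")
```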

The earliest work on planning systems in AI took a deductive approach. Inspired by the development of Robinson's methods of resolution theorem proving, designers hoped to represent all the system's "world knowledge" explicitly as axioms, and use ordinary logic - the predicate calculus - to deduce the effects of actions. Envisaging a certain situation S was modeled by having the system entertain a set of axioms describing the situation. Added to this were background axioms (the so-called frame axioms that give the frame problem its name) which describe general conditions and the general effects of every action type defined for the system. To this set of axioms the system would apply an action - by postulating the occurrence of some action A in situation S - and then deduce the effect of A in S, producing a description of the outcome situation S'. While all this logical deduction looks like nothing at all in our conscious experience, research on the deductive approach could proceed on either or both of two enabling assumptions: the methodological assumption that psychological realism was a gratuitous bonus, not a goal, of "pure" AI, or the substantive (if still vague) assumption that the deductive processes described would somehow model the backstage processes beyond conscious access. In other words, either we don't do our thinking deductively in the predicate calculus but a robot might; or we do (unconsciously) think deductively in the predicate calculus. Quite aside from doubts about its psychological realism, however, the deductive approach has not been made to work - the proof of the pudding for any robot - except for deliberately trivialized cases.

Consider some typical frame axioms associated with the action type: move x onto y.

1. If z ≠ x and I move x onto y, then if z was on w before, then z is on w after.
2. If x is blue before, and I move x onto y, then x is blue after.

Note that (2), about being blue, is just one example of the many boring "no-change" axioms we have to associate with this action type. Worse still, note that a cousin of (2), also about being blue, would have to be associated with every other action type - with pick up x and with give x to y, for instance. One cannot save this mindless repetition by postulating once and for all something like

3. If anything is blue, it stays blue

for that is false, and in particular we will want to leave room for the introduction of such action types as paint x red. Since virtually any aspect of a situation can change under some circumstance, this method requires introducing for each aspect (each predication in the description of S) an axiom to handle whether that aspect changes for each action type.
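To see why this profligacy is combinatorial, consider a minimal sketch of the bookkeeping. The fluents, actions, and effect lists below are hypothetical toy inventions, not any actual planner's vocabulary; the point is only that the number of no-change axioms grows as the product of the two inventories:

```python
# A minimal sketch of the frame-axiom explosion in a deductive planner.
# Every (fluent, action) pair needs its own "no-change" axiom unless
# the action's definition explicitly affects that fluent.

FLUENTS = ["on(x, y)", "color(x, c)", "location(x, l)"]
ACTIONS = ["move(x, y)", "pickup(x)", "give(x, y)", "paint(x, c)"]

# Which fluents each action actually changes (its effect axioms).
EFFECTS = {
    "move(x, y)": {"on(x, y)", "location(x, l)"},
    "pickup(x)":  {"on(x, y)", "location(x, l)"},
    "give(x, y)": {"location(x, l)"},
    "paint(x, c)": {"color(x, c)"},
}

frame_axioms = [
    f"if {fluent} holds before {action}, then {fluent} holds after"
    for action in ACTIONS
    for fluent in FLUENTS
    if fluent not in EFFECTS[action]
]

# The count grows as (fluents) x (actions); with thousands of each, the
# boring no-change axioms swamp the system.
print(len(frame_axioms), "frame axioms for",
      len(FLUENTS), "fluents and", len(ACTIONS), "actions")
```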

This representational profligacy quickly gets out of hand, but for some "toy" problems in AI, the frame problem can be overpowered to some extent by a mixture of the toyness of the environment and brute force. The early version of SHAKEY, the robot at S.R.I., operated in such a simplified and sterile world, with so few aspects it could worry about that it could get away with an exhaustive consideration of frame axioms.14

14. This early feature of SHAKEY was drawn to my attention by Pat Hayes. See also Dreyfus (1972, p. 26). SHAKEY is put to quite a different use in Dennett (1982b).

Attempts to circumvent this explosion of axioms began with the proposal that the system operate on the tacit assumption that nothing changes in a situation but what is explicitly asserted to change in the definition of the applied action (Fikes and Nilsson, 1971). The problem here is that, as Garrett Hardin once noted, you can't do just one thing. This was R1's problem, when it failed to notice that it would pull the bomb out with the wagon. In the explicit representation (a few pages back) of my midnight snack solution, I mentioned carrying the plate over to the table. On this proposal, my model of S' would leave the turkey back in the kitchen, for I didn't explicitly say the turkey would come along with the plate. One can of course patch up the definition of "bring" or "plate" to handle just this problem, but only at the cost of creating others. (Will a few more patches tame the problem? At what point should one abandon patches and seek an altogether new approach? Such are the methodological uncertainties regularly encountered in this field, and of course no one can responsibly claim in advance to have a good rule for dealing with them. Premature counsels of despair or calls for revolution are as clearly to be shunned as the dogged pursuit of hopeless avenues; small wonder the field is contentious.)
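A minimal sketch of how that tacit assumption can be mechanized, and of how it reproduces exactly R1's blind spot. The state facts and action lists here are hypothetical illustrations in the general style of add/delete-list planners, not the actual Fikes-Nilsson (STRIPS) notation:

```python
# A minimal sketch of the tacit assumption that nothing changes but what
# the action's definition explicitly asserts. Names are hypothetical.

state = {"in(battery, room)", "in(bomb, room)", "in(wagon, room)",
         "on(battery, wagon)", "on(bomb, wagon)"}

# PULLOUT(WAGON, ROOM): only the explicitly listed facts change.
delete_list = {"in(wagon, room)", "in(battery, room)"}
add_list = {"in(wagon, hall)", "in(battery, hall)"}

new_state = (state - delete_list) | add_list

# The bomb silently "stays behind" in the model: nothing said it moved,
# so by assumption it didn't - even though it is on the wagon. You
# can't do just one thing.
print("in(bomb, room)" in new_state)  # True, absurdly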

While one cannot get away with the tactic of supposing that one can do just one thing, it remains true that very little of what could (logically) happen in any situation does happen. Is there some way of fallibly marking the likely area of important side effects, and assuming the rest of the situation to stay unchanged? Here is where relevance tests seem like a good idea, and they may well be, but not within the deductive approach. As Minsky notes:

Even if we formulate relevancy restrictions, logistic systems have a problem using them. In any logistic system, all the axioms are necessarily "permissive" - they all help to permit new inferences to be drawn. Each added axiom means more theorems; none can disappear. There simply is no direct way to add information to tell such a system about kinds of conclusions that should not be drawn! . . . If we try to change this by adding axioms about relevancy, we still produce all the unwanted theorems, plus annoying statements about their irrelevancy. (Minsky, 1981, p. 125)

What is needed is a system that genuinely ignores most of what it knows, and operates with a well-chosen portion of its knowledge at any moment. Well-chosen, but not chosen by exhaustive consideration. How, though, can you give a system rules for ignoring - or better, since explicit rule following is not the problem, how can you design a system that reliably ignores what it ought to ignore under a wide variety of different circumstances in a complex action environment?

John McCarthy calls this the qualification problem, and vividly illustrates it via the famous puzzle of the missionaries and the cannibals.

Three missionaries and three cannibals come to a river. A rowboat that seats two is available. If the cannibals ever outnumber the missionaries on either bank of the river, the missionaries will be eaten. How shall they cross the river?

Obviously the puzzler is expected to devise a strategy of rowing the boat back and forth that gets them all across and avoids disaster. . . .

Imagine giving someone the problem, and after he puzzles for a while, he suggests going upstream half a mile and crossing on a bridge. "What bridge?" you say. "No bridge is mentioned in the statement of the problem." And this dunce replies, "Well, they don't say there isn't a bridge." You look at the English and even at the translation of the English into first order logic, and you must admit that "they don't say" there is no bridge. So you modify the problem to exclude bridges and pose it again, and the dunce proposes a helicopter, and after you exclude that, he proposes a winged horse or that the others hang onto the outside of the boat while two row.

You now see that while a dunce, he is an inventive dunce. Despairing of getting him to accept the problem in the proper puzzler's spirit, you tell him the solution. To your further annoyance, he attacks your solution on the grounds that the boat might have a leak or lack oars. After you rectify that omission from the statement of the problem, he suggests that a sea monster may swim up the river and may swallow the boat. Again you are frustrated, and you look for a mode of reasoning that will settle his hash once and for all. (McCarthy, 1980, pp. 29-30)
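For the record, the puzzle taken "in the proper puzzler's spirit" is mechanically solvable. The sketch below is one conventional formalization (a breadth-first search over a three-number state; the encoding is my illustration, not McCarthy's), and it makes his point vividly: bridges, helicopters, leaks, and sea monsters are excluded not by any explicit axiom but by the formalization itself.

```python
from collections import deque

# State: (missionaries on left bank, cannibals on left bank, boat side),
# boat side 1 = left. Everything the inventive dunce proposes simply
# has no representation in this state space.

def safe(m, c):
    # No bank may have its missionaries outnumbered by cannibals.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve(start=(3, 3, 1)):
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # boat seats two
    seen, queue = {start}, deque([(start, [])])
    while queue:
        (m, c, b), path = queue.popleft()
        if (m, c, b) == (0, 0, 0):
            return path
        sign = -1 if b == 1 else 1  # boat carries people away from its bank
        for dm, dc in moves:
            nm, nc, nb = m + sign * dm, c + sign * dc, 1 - b
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) \
                    and (nm, nc, nb) not in seen:
                seen.add((nm, nc, nb))
                queue.append(((nm, nc, nb), path + [(dm, dc)]))

print(solve())  # the classic plan: eleven crossings
```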


What a normal, intelligent human being does in such a situation is to engage in some form of nonmonotonic inference. In a classical, monotonic logical system, adding premises never diminishes what can be proved from the premises. As Minsky noted, the axioms are essentially permissive, and once a theorem is permitted, adding more axioms will never invalidate the proofs of earlier theorems. But when we think about a puzzle or a real-life problem, we can achieve a solution (and even prove that it is a solution, or even the only solution to that problem), and then discover our solution invalidated by the addition of a new element to the posing of the problem; for example, "I forgot to tell you - there are no oars" or "By the way, there's a perfectly good bridge upstream."
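A minimal sketch of that nonmonotonic effect, in the loose spirit of default reasoning rather than any published formalism's actual machinery: a conclusion licensed by default is withdrawn when an "abnormal" premise is added.

```python
# A minimal sketch of nonmonotonic inference. The predicates are
# hypothetical illustrations.

premises = {"boat available", "river to cross"}
abnormalities = set()  # grows as late additions arrive

def can_row_across():
    # Default conclusion: the boat will serve, other things being equal.
    return "boat available" in premises and "no oars" not in abnormalities

print(can_row_across())       # True: the plan is "proved"
abnormalities.add("no oars")  # "I forgot to tell you - there are no oars"
print(can_row_across())       # False: the earlier conclusion is withdrawn
```

In a classical monotonic system the second answer could never differ from the first, since adding premises only ever adds theorems.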

What such late additions show us is that, contrary to our assumption, other things weren't equal. We had been reasoning with the aid of a ceteris paribus assumption, and now our reasoning has just been jeopardized by the discovery that something "abnormal" is the case. (Note, by the way, that the abnormality in question is a much subtler notion than anything anyone has yet squeezed out of probability theory. As McCarthy notes, "The whole situation involving cannibals with the postulated properties cannot be regarded as having a probability, so it is hard to take seriously the conditional probability of a bridge given the hypothesis" [ibid.].)

The beauty of a ceteris paribus clause in a bit of reasoning is that one does not have to say exactly what it means. "What do you mean, 'other things being equal'? Exactly which arrangements of which other things count as being equal?" If one had to answer such a question, invoking the ceteris paribus clause would be pointless, for it is precisely in order to evade that task that one uses it. If one could answer that question, one wouldn't need to invoke the clause in the first place. One way of viewing the frame problem, then, is as the attempt to get a computer to avail itself of this distinctively human style of mental operation. There are several quite different approaches to nonmonotonic inference being pursued in AI today. They have in common only the goal of capturing the human talent for ignoring what should be ignored, while staying alert to relevant recalcitrance when it occurs.

One family of approaches, typified by the work of Marvin Minsky and Roger Schank (Minsky 1981; Schank and Abelson 1977), gets its ignoring-power from the attention-focussing power of stereotypes. The inspiring insight here is the idea that all of life's experiences, for all their variety, boil down to variations on a manageable number of stereotypic themes, paradigmatic scenarios - "frames" in Minsky's terms, "scripts" in Schank's.

An artificial agent with a well-stocked compendium of frames or scripts, appropriately linked to each other and to the impingements of the world via its perceptual organs, would face the world with an elaborate system of what might be called habits of attention and benign tendencies to leap to particular sorts of conclusions in particular sorts of circumstances. It would "automatically" pay attention to certain features in certain environments and assume that certain unexamined normal features of those environments were present. Concomitantly, it would be differentially alert to relevant divergences from the stereotypes it would always be "expecting."
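A minimal sketch of the idea, with hypothetical slot names rather than Minsky's or Schank's actual notation: unexamined slots get their default values, which is where the ignoring-power comes from.

```python
# A minimal sketch of a Minsky-style frame with default slot values.
# The slots and structure are illustrative inventions.

midnight_snack_frame = {
    "bread":      {"default": "in the bread box"},
    "mayonnaise": {"default": "in the fridge"},
    "beer":       {"default": "in the fridge"},
    "plate":      {"default": "clean, in the cupboard"},
}

def expect(frame, slot, observation=None):
    # Use an observation if one was made; otherwise assume the default.
    return observation if observation is not None else frame[slot]["default"]

# Unexamined slots are simply assumed normal - the ignoring-power, and
# also the reason such a system recovers badly when the world turns
# perverse (the glued-down beer glass, the trick plate).
print(expect(midnight_snack_frame, "beer"))                          # default
print(expect(midnight_snack_frame, "beer", "glass glued to shelf"))  # observed
```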

Simulations of fragments of such an agent' s encounters with itsworld reveal that in many situations it behaves quite felicitously andapparently naturally , and it is hard to say, of course, what the limitsof this approach are. But there are strong grounds for skepticism. Mostobviously , while such systems perform credit ably when the world cooperates

with their stereotypes, and even with anticipated variations onthem, when their worlds turn perverse, such systems typically cannotrecover gracefully from the misanalyses they are led into . In fact, theirbehavior in extremis looks for all the world like the preposterouslycounterproductive activities of insects betrayed by their rigid tropismsand other genetically hard-wired behavioral routines .

When these embarrassing misadventures occur, the system designer can improve the design by adding provisions to deal with the particular cases. It is important to note that in these cases, the system does not redesign itself (or learn) but rather must wait for an external designer to select an improved design. This process of redesign recapitulates the process of natural selection in some regards; it favors minimal, piecemeal, ad hoc redesign which is tantamount to a wager on the likelihood of patterns in future events. So in some regards it is faithful to biological themes.15

15. In one important regard, however, it is dramatically unlike the process of natural selection, since the trial, error and selection of the process is far from blind. But a case can be made that the impatient researcher does nothing more than telescope time by such foresighted interventions in the redesign process.

Several different sophisticated attempts to provide the representational framework for this deeper understanding have emerged from the deductive tradition in recent years. Drew McDermott and Jon Doyle have developed a "nonmonotonic logic" (1980), Ray Reiter has a "logic for default reasoning" (1980), and John McCarthy has developed a system of "circumscription," a formalized "rule of conjecture that can be used by a person or program for 'jumping to conclusions'" (1980). None of these is, or is claimed to be, a complete solution to the problem of ceteris paribus reasoning, but they might be components of such a solution. More recently, McDermott (1982) has offered a "temporal logic for reasoning about processes and plans." I will not attempt to assay the formal strengths and weaknesses of these approaches. Instead I will concentrate on another worry. From one point of view, nonmonotonic or default logic, circumscription, and temporal logic all appear to be radical improvements to the mindless and clanking deductive approach, but from a slightly different perspective they appear to be more of the same, and at least as unrealistic as frameworks for psychological models.
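
What these formalisms share can be seen in miniature in the following sketch (again my own toy, in Python; it implements none of the actual machinery of nonmonotonic logic, default logic, or circumscription): a conclusion licensed by a default is withdrawn when a new premise arrives, so the set of conclusions does not grow monotonically with the set of premises, which is what "nonmonotonic" means.

    # Toy nonmonotonic inference: a default conclusion that a new fact defeats.
    # This illustrates the *behavior* the formal systems aim to capture; it is
    # not an implementation of default logic or circumscription.

    def conclusions(facts):
        concluded = set(facts)
        # Default rule: birds fly, unless something "abnormal" is known of them.
        if "bird(tweety)" in facts and "penguin(tweety)" not in facts:
            concluded.add("flies(tweety)")
        return concluded

    print(conclusions({"bird(tweety)"}))
    # -> {'bird(tweety)', 'flies(tweety)'}

    print(conclusions({"bird(tweety)", "penguin(tweety)"}))
    # -> {'bird(tweety)', 'penguin(tweety)'}: adding a premise *removed*
    # a conclusion, which is the nonmonotonic signature.

In ordinary deductive logic, by contrast, adding premises can never subtract conclusions; that monotonicity is exactly what ceteris paribus reasoning must do without.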

They appear in the former guise to be a step toward greater psychological realism, for they take seriously, and attempt to represent, the phenomenologically salient phenomenon of common sense ceteris paribus "jumping to conclusions" reasoning. But do they really succeed in offering any plausible suggestions about how the backstage implementation of that conscious thinking is accomplished in people? Even if on some glorious future day a robot with debugged circumscription methods maneuvered well in a non-toy environment, would there be much likelihood that its constituent processes, described at levels below the phenomenological, would bear informative relations to the unknown lower-level backstage processes in human beings? To bring out better what my worry is, I want to introduce the concept of a cognitive wheel.

We can understand what a cognitive wheel might be by reminding ourselves first about ordinary wheels. Wheels are wonderful, elegant triumphs of technology. The traditional veneration of the mythic inventor of the wheel is entirely justified. But if wheels are so wonderful, why are there no animals with wheels? Why are no wheels to be found (functioning as wheels) in nature? First, the presumption of these questions must be qualified. A few years ago the astonishing discovery was made of several microscopic beasties (some bacteria and some unicellular eukaryotes) that have wheels of sorts. Their propulsive tails, long thought to be flexible flagella, turn out to be more or less rigid corkscrews, which rotate continuously, propelled by microscopic motors of sorts, complete with main bearings.16 Better known, if less interesting for obvious reasons, are tumbleweeds. So it is not quite true that there are no wheels (or wheeliform designs) in nature.

Still, macroscopic wheels (reptilian or mammalian or avian wheels) are not to be found. Why not? They would seem to be wonderful retractable landing gear for some birds, for instance. Once the question is posed, plausible reasons rush in to explain their absence. Most important, probably, are the considerations about the topological properties of the axle/bearing boundary that make the transmission of material or energy across it particularly difficult. How could the life-support traffic arteries of a living system maintain integrity across this boundary? But once that problem is posed, solutions suggest themselves: suppose the living wheel grows to mature form in a nonrotating, nonfunctional form, and is then hardened and sloughed off, like antlers or an outgrown shell, but not completely off: it then rotates freely on a lubricated fixed axle. Possible? It's hard to say. Useful? Also hard to say, especially since such a wheel would have to be freewheeling. This is an interesting speculative exercise, but certainly not one that should inspire us to draw categorical, a priori conclusions. It would be foolhardy to declare wheels biologically impossible, but at the same time we can appreciate that they are at least very distant and unlikely solutions to natural problems of design.

Now a cognitive wheel is simply any design proposal in cognitive theory (at any level from the purest semantic level to the most concrete level of "wiring diagrams" of the neurons) that is profoundly unbiological, however wizardly and elegant it is as a bit of technology. Clearly this is a vaguely defined concept, useful only as a rhetorical abbreviation, as a gesture in the direction of real difficulties to be spelled out carefully. "Beware of postulating cognitive wheels" masquerades as good advice to the cognitive scientist, while courting vacuity as a maxim to follow.17 It occupies the same rhetorical position as the stockbroker's maxim: buy low and sell high. Still, the term is a good theme-fixer for discussion.

16. For more details, and further reflections on the issues discussed here, see Diamond (1983).

17. I was interested to discover that at least one researcher in AI mistook the rhetorical intent of my new term on first hearing; he took "cognitive wheels" to be an accolade. If one thinks of AI as he does, not as a research method in psychology but as a branch of engineering attempting to extend human cognitive powers, then of course cognitive wheels are breakthroughs. The vast and virtually infallible memories of computers would be prime examples; others would be computers' arithmetical virtuosity and invulnerability to boredom and distraction. See Hofstadter (1982) for an insightful discussion of the relation of boredom to the structure of memory and the conditions for creativity.

Many critics of AI have the conviction that any AI system is and must be nothing but a gearbox of cognitive wheels. This could of course turn out to be true, but the usual reason for believing it is based on a misunderstanding of the methodological assumptions of the field. When an AI model of some cognitive phenomenon is proposed, the model is describable at many different levels, from the most global, phenomenological level at which the behavior is described (with some presumptuousness) in ordinary mentalistic terms, down through various levels of implementation all the way to the level of program code, and even further down, to the level of fundamental hardware operations if anyone cares. No one supposes that the model maps onto the processes of psychology and biology all the way down. The claim is only that for some high level or levels of description below the phenomenological level (which merely sets the problem) there is a mapping of model features onto what is being modeled: the cognitive processes in living creatures, human or otherwise. It is understood that all the implementation details below the level of intended modeling will consist, no doubt, of cognitive wheels: bits of unbiological computer activity mimicking the gross effects of cognitive subcomponents by using methods utterly unlike the methods still to be discovered in the brain. Someone who failed to appreciate that a model composed microscopically of cognitive wheels could still achieve a fruitful isomorphism with biological or psychological processes at a higher level of aggregation would suppose there were good a priori reasons for generalized skepticism about AI.

But allowing for the possibility of valuable intermediate levels of modeling is not ensuring their existence. In a particular instance a model might descend directly from a phenomenologically recognizable level of psychological description to a cognitive wheels implementation without shedding any light at all on how we human beings manage to enjoy that phenomenology. I suspect that all current proposals in the field for dealing with the frame problem have that shortcoming. Perhaps one should dismiss the previous sentence as mere autobiography. I find it hard to imagine (for what that is worth) that any of the procedural details of the mechanization of McCarthy's circumscriptions, for instance, would have suitable counterparts in the backstage story yet to be told about how human common-sense reasoning is accomplished. If these procedural details lack "psychological reality" then there is nothing left in the proposal that might model psychological processes except the phenomenological-level description in terms of jumping to conclusions, ignoring and the like, and we already know we do that.

There is an alternative defense of such theoretical explorations, however, and I think it is to be taken seriously. One can claim (and I take McCarthy to claim) that while formalizing common-sense reasoning in his fashion would not tell us anything directly about psychological processes of reasoning, it would clarify, sharpen, systematize the purely semantic-level characterization of the demands on any such implementation, biological or not. Once one has taken the giant step forward of taking information-processing seriously as a real process in space and time, one can then take a small step back and explore the implications of that advance at a very abstract level. Even at this very formal level, the power of circumscription and the other versions of nonmonotonic reasoning remains an open but eminently explorable question.18

Some have thought that the key to a more realistic solution to the frame problem (and indeed, in all likelihood, to any solution at all) must require a complete rethinking of the semantic-level setting, prior to concern with syntactic-level implementation. The more or less standard array of predicates and relations chosen to fill out the predicate-calculus format when representing the "propositions believed" may embody a fundamentally inappropriate parsing of nature for this task. Typically, the interpretation of the formulae in these systems breaks the world down along the familiar lines of objects with properties at times and places. Knowledge of situations and events in the world is represented by what might be called sequences of verbal snapshots. State S, constitutively described by a list of sentences true at time t asserting various n-adic predicates true of various particulars, gives way to state S', a similar list of sentences true at t'. Would it perhaps be better to reconceive of the world of planning in terms of histories and processes?19

18. McDermott (1982, "A Temporal Logic for Reasoning about Processes and Plans," Section 6, "A Sketch of an Implementation") shows strikingly how many new issues are raised once one turns to the question of implementation, and how indirect (but still useful) the purely formal considerations are.

19. Patrick Hayes has been exploring this theme, and a preliminary account can be found in "Naive Physics 1: The Ontology of Liquids" (1978).


Instead of trying to model the capacity to keep track of things in terms of principles for passing through temporal cross-sections of knowledge expressed in terms of terms (names for things, in essence) and predicates, perhaps we could model keeping track of things more directly, and let all the cross-sectional information about what is deemed true moment by moment be merely implicit (and hard to extract, as it is for us) from the format. These are tempting suggestions, but so far as I know they are still in the realm of hand waving.20
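
For concreteness, here is a caricature (my own, in Python, with invented atoms and an invented do_unlock action) of the "verbal snapshot" format just criticized: a state is a set of sentences held true at a time, and every action must hand the entire inventory forward, atom by atom, to produce the next snapshot.

    # A caricature of the "verbal snapshot" format: a state is a set of ground
    # atoms true at a time; an action must produce a whole new snapshot.

    S = {"on(box, table)", "at(robot, door)", "locked(door)"}

    def do_unlock(state):
        """Successor snapshot after unlocking the door."""
        s_next = set(state)            # every unchanged atom is carried over
        s_next.discard("locked(door)") # explicitly, atom by atom: the
        s_next.add("unlocked(door)")   # bookkeeping whose explosion is one
        return s_next                  # face of the frame problem

    print(do_unlock(S))

Nothing in this format keeps track of anything between snapshots; whatever persists must be re-asserted wholesale, which is precisely the feature the histories-and-processes proposal hopes to avoid.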

Another, perhaps related, handwaving theme is that the current difficulties with the frame problem stem from the conceptual scheme engendered by the serial-processing von Neumann architecture of the computers used to date in AI. As large, fast parallel processors are developed, they will bring in their wake huge conceptual innovations which are now of course only dimly imaginable. Since brains are surely massive parallel processors, it is tempting to suppose that the concepts engendered by such new hardware will be more readily adaptable for realistic psychological modeling. But who can say? For the time being, most of the optimistic claims about the powers of parallel processing belong in the same camp with the facile observations often encountered in the work of neuroscientists, who postulate marvelous cognitive powers for various portions of the nervous system without a clue of how they are realized.21

Filling in the details of the gap between the phenomenological magic show and the well-understood powers of small tracts of brain tissue is the immense research task that lies in the future for theorists of every persuasion. But before the problems can be solved they must be encountered, and to encounter the problems one must step resolutely into the gap and ask how-questions. What philosophers (and everyone else) have always known is that people, and no doubt all intelligent agents, can engage in swift, sensitive, risky-but-valuable ceteris paribus reasoning. How do we do it? AI may not yet have a good answer, but at least it has encountered the question.22

20. Oliver Selfridge's unpublished monograph, Tracking and Trailing, promises to push back this frontier, I think, but I have not yet been able to assimilate its messages. [Nor, after many years of cajoling, have I succeeded in persuading Selfridge to publish it. It is still, in 1997, unpublished.] There are also suggestive passages on this topic in Ruth Garrett Millikan's Language, Thought, and Other Biological Categories, Cambridge, MA: Bradford Books / The MIT Press, 1984.

21. To balance the "top-down" theorists' foible of postulating cognitive wheels, there is the "bottom-up" theorists' penchant for discovering wonder tissue. Wonder tissue appears in many locales. J. J. Gibson's (1979) theory of perception, for instance, seems to treat the whole visual system as a hunk of wonder tissue, resonating with marvelous sensitivity to a host of sophisticated "affordances."

22. One of the few philosophical articles I have uncovered that seem to contribute to the thinking about the frame problem, though not in those terms, is Ronald de Sousa's "The Rationality of Emotions" (de Sousa, 1979). In the section entitled "What are Emotions For?" de Sousa suggests, with compelling considerations, that:

the function of emotion is to fill gaps left by [mere wanting plus] "pure reason" in the determination of action and belief. Consider how Iago proceeds to make Othello jealous. His task is essentially to direct Othello's attention, to suggest questions to ask . . . Once attention is thus directed, inferences which, before on the same evidence, would not even have been thought of, are experienced as compelling.

In de Sousa's understanding, "emotions are determinate patterns of salience among objects of attention, lines of inquiry, and inferential strategies" (p. 50) and they are not "reducible" in any way to "articulated propositions." Suggestive as this is, it does not, of course, offer any concrete proposals for how to endow an inner (emotional) state with these interesting powers. Another suggestive (and overlooked) paper is Howard Darmstadter's "Consistency of Belief" (Darmstadter, 1971, pp. 301-310). Darmstadter's exploration of ceteris paribus clauses and the relations that might exist between beliefs as psychological states and sentences believers may utter (or have uttered about them) contains a number of claims that deserve further scrutiny.