


A SUGGESTION ABOUT POPPER'S THREE WORLDS IN THE LIGHT OF ARTIFICIAL INTELLIGENCE

Et cetera, Fall 1985

    Introduction

HAVING ALWAYS ADMIRED Popper and been deeply influenced by some of his ideas (even though I do not agree with all of them), I feel privileged at being invited to contribute to a volume of commentaries on his work. My brief is to indicate the relevance of work in artificial intelligence (henceforth AI) to Popper's philosophy of mind. Materialist philosophers of mind tend to claim that Popper's "World 2" is reducible to "World 1." I shall try to show how AI suggests that "World 2" is reducible to "World 3," and that one of the main explanatory roles Popper attributes to World 2, namely causal mediation between Worlds 1 and 3, is a redundant role. The central claim of this paper can be summed up by the slogan: "Any intelligent ghost must contain a computational machine." This essay does not attempt detailed justification of the theses presented: that will require an extended research program.

1. Popper's Ontology

For the sake of argument I shall provisionally accept Popper's metaphysical theory that there are three worlds. World 1 contains physical things, like atoms, lightning flashes, skyscrapers, eyelashes, and planets. World 2 contains "subjective" mental events, processes, and entities, like pains, acts of deciding, processes of imagining, and mental images. World 3 contains "objective" propositions, theories, problems, proofs, numbers, and the contents which may be common to the thoughts of two or more people. World 3 objects are generally formulated or expressed in some concrete, humanly produced physical object, like a book, an uttered sound, or a pattern of light and shade on a television screen. However, Popper says that the necessary criterion of World 3 entities is not that they are physically formulated, but that they are brought into existence by the human mind. (1) There are details of the theory that are obscure or controversial, but for my purposes such difficulties can be ignored. A fairly up-to-date summary of the main ideas can be found by following up the index entries under "world" in Popper's Unended Quest.

* Aaron Sloman is Professor of Cognitive Science and Artificial Intelligence, in the Cognitive Studies Programme at Sussex University (Brighton, England).

2. What is Artificial Intelligence?

For detailed accounts of work in AI, readers are referred to the works of M. Boden and P. Winston. (2) The former is philosophically more sophisticated, whereas the latter gives more technical details. I have discussed some of the philosophical implications in my The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind. (3) For the present, a very brief summary of AI will have to suffice. Here are the main points:

1) AI is the study of actual and possible mechanisms underlying mental processes of various kinds: perceiving, learning (including concept-formation), inferring, solving problems, deciding, planning, interpreting music or pictures, understanding language, having emotions, etc. The basic assumption is that all such processes are essentially computation. That is, they involve symbol-manipulations of various kinds, such as building, copying, comparing, describing, interpreting, storing, sorting, and searching.

2) Although AI theories about such mechanisms are usually embedded in computer programs, since these theories are too complex for their consequences, strengths, and weaknesses to be explored "by hand," there is no AI commitment to any specific kind of underlying computer. In particular, if a process in Popper's World 2 admitted a rich enough set of internal states, and laws of transition between states, it would suffice as a "computer" in which typical AI programs could be run.
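This substrate-independence can be made concrete with a small sketch. In the Python fragment below (Python is used throughout purely as a modern illustration; the instruction names are invented, not drawn from any system Sloman cites), a "machine" is built out of nothing but software: any substrate that supplies the same states and transition laws could host the same symbolic program.

```python
# An illustrative "machine" made of software: a tiny interpreter whose
# states are symbols and whose transition laws are given by the code below.
# The instruction names ("inc", "double") are invented for this example.

def tiny_vm(program, x):
    """Interpret a list of symbolic instructions over a single register x."""
    for op in program:
        if op == "inc":
            x = x + 1
        elif op == "double":
            x = x * 2
    return x

# The same symbolic program could run on any substrate providing these
# transition laws; here the "hardware" happens to be another program.
result = tiny_vm(["inc", "double", "double"], 1)
```

Nothing in the program above mentions transistors or neurons: the interpreter is itself a layer of "virtual machine" above whatever physically runs it.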

3) AI programs usually bear little resemblance either to the "number-crunching" programs familiar to most scientists and engineers, or to the Turing machines and "effective" procedures (i.e., programs that precisely prescribe all possible computer actions) familiar to logicians and mathematicians. For instance, many AI programs blur the distinction between program and data, since programs may be data for themselves, especially self-modifying programs. Further, the larger programs are often complex, messy, and not necessarily fully intelligible to their designers, so that test runs rather than proofs or theoretical analysis are required for establishing their properties, especially after self-modification in a complex environment. Finally, an AI program normally consists of several AI programs (or layers of "virtual machine") between the physical computer and the highest-level program.
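The program/data blur and self-modification mentioned here can be sketched in a few lines. In this illustrative Python fragment (all names are my own, not a reproduction of any period AI system), the "program" is a rule table held as ordinary data, which the system inspects and extends at run time.

```python
# Illustrative sketch: the "program" is a table of rules held as plain data,
# so the system can inspect and rewrite its own program while running.
# All names are invented for this example.

class SelfModifyingSolver:
    def __init__(self):
        # Each rule is a (condition, action) pair: ordinary data, not fixed code.
        self.rules = [
            (lambda x: x < 0, lambda x: -x),   # negate negative inputs
            (lambda x: True,  lambda x: x),    # default: leave input unchanged
        ]

    def run(self, x):
        # Apply the action of the first rule whose condition matches.
        for condition, action in self.rules:
            if condition(x):
                return action(x)

    def improve(self):
        # Self-modification: the program treats its own rule table as data
        # and installs a new, more specific rule at the front.
        self.rules.insert(0, (lambda x: x % 2 == 0, lambda x: x // 2))

solver = SelfModifyingSolver()
before = solver.run(8)    # only the default rule matches: returns 8
solver.improve()
after = solver.run(8)     # the self-installed rule now matches: returns 4
```

After `improve()` the program's behavior has changed, yet no line of the class was edited by hand: the change happened to data the program holds about itself, which is exactly why test runs, rather than proofs, are often needed to establish such a program's properties.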

4) It is usual to interpret AI programs as constructing and manipulating complex symbols to represent hypotheses, plans, possible states of affairs, goals, problems, criteria of adequacy, etc. The programs go through symbol-manipulations which are best described as inferences, consistency checks, searches, interpretations, etc. In other words, the processes are essentially concerned with World 3 objects, properties, and relations. Because of the complexity already alluded to, the programmer is not usually aware of all the hypotheses, inferences, decisions, etc. involved in a run of the program. Their existence therefore does not depend on their actually being the content of the programmer's thoughts.

5) Existing AI programs, though they constitute major advances in our attempts to explain human mental abilities, are nevertheless pathetically limited in comparison to people and many other animals. In part this is due to limitations of the computers available for such research: their memories are far too small and their computational power inadequate. Recent developments in micro-electronics and distributed processing may alter this. Another source of inadequacies is the piecemeal, amateurish development characteristic of a young subject. Significant progress will require much more integration of theories and methods from psychology, linguistics, philosophy, anthropology, and computer science. (4)

3. Reducing World 2 to World 3

Nobody could sensibly claim that any existing machine has experiences closely analogous to those of people. Certainly there are at present no pains, tingles, thrills of delight, anxieties, loves, hates, despair, etc. But there are the beginnings of visual experiences: quite richly structured internal states in which images are interpreted by machines as first this then that, with attention shifting between different parts and aspects of the scene. There are also machine processes in which references are assigned to English phrases, processes involving exploration of alternative actions, and choices between such alternatives. (5) Some hypotheses are rejected by machine programs, and others accepted as true. Programs which represent goals and use them to generate new subgoals indicate how systems which have their own motives might be built. (6) Above all, there are the beginnings of self-awareness, in programs which can monitor their own performance and modify themselves accordingly. (7)
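The goal-and-subgoal mechanism alluded to can be illustrated with a toy planner. The Python sketch below is mine, not a reproduction of any program Sloman cites; the goal names and the subgoal table are invented for the example.

```python
# A toy sketch of a program that represents goals explicitly and generates
# new subgoals from them. The goal names and table are invented.

def expand(goal):
    """Map a goal to the subgoals needed to achieve it."""
    subgoal_table = {
        "make_tea": ["boil_water", "fetch_cup"],
        "boil_water": ["fill_kettle", "switch_on_kettle"],
    }
    return subgoal_table.get(goal, [])  # primitive goals have no subgoals

def plan(goal):
    """Depth-first expansion of a goal into an ordered list of primitive actions."""
    subgoals = expand(goal)
    if not subgoals:
        return [goal]          # a primitive goal is itself an action
    actions = []
    for subgoal in subgoals:
        actions.extend(plan(subgoal))
    return actions
```

A call such as `plan("make_tea")` expands through the intermediate goal `boil_water` down to primitive actions. The point of the sketch is only that goals here are explicit symbolic objects the system itself manipulates, which is what makes it a hint of how systems with their own motives might be built.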

It is already possible to see in very rough outline how the phenomenology of physical pain could be replicated in a robot by subsystems which monitor parts of the "body" and, upon detecting disturbances and malfunctions, generate new symbols within a special store of motives which control the direction of attention and influence priorities in decision making. The "warnings" produced by such monitors would need to have fairly rich descriptive content, concerning the location, spread, urgency, and nature of the malfunction or injury. This would account for some of the qualitative differences between different sorts of pain. (8)
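The architecture sketched in that paragraph can be put, very schematically, into code. In this Python sketch the field names, threshold, and sensor readings are all invented for illustration: monitors watch parts of a robot "body" and, on detecting a disturbance, deposit richly described warnings into a motive store that orders the system's attention.

```python
# A rough sketch of the pain architecture described above: body monitors
# generate descriptive "warnings" in a motive store, and the most urgent
# warning claims the focus of attention. All names and values are invented.

from dataclasses import dataclass

@dataclass
class Warning:
    location: str     # where the disturbance is
    nature: str       # what kind of malfunction
    spread: float     # how extensive it is, 0..1
    urgency: float    # how strongly it should claim attention

class MotiveStore:
    def __init__(self):
        self.warnings = []

    def deposit(self, warning):
        self.warnings.append(warning)

    def focus_of_attention(self):
        # Attention goes to the most urgent current warning, if any.
        return max(self.warnings, key=lambda w: w.urgency, default=None)

def monitor(sensor_readings, store, threshold=0.5):
    """Turn out-of-range body readings into descriptive warnings."""
    for location, value in sensor_readings.items():
        if value > threshold:
            store.deposit(Warning(location, "overload", spread=value, urgency=value))

store = MotiveStore()
monitor({"elbow_joint": 0.9, "wrist": 0.2, "shoulder": 0.6}, store)
focus = store.focus_of_attention()   # the elbow warning wins attention
```

The qualitative differences between sorts of "pain" would correspond, on this sketch, to differences in the descriptive content of the warnings rather than to any single scalar signal.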

It is possible to see, again in rough outline, that much of the phenomenology of emotions could be replicated in machines which are able to detect the presence or absence of factors which help or hinder the attainment, preservation, or prevention of events or states which fulfill or conflict with the machine's motives. Whether there is an emotion will depend on the extent and nature of the disturbance, within the machine's internal processing, produced by its discovery.

Notice that it is not mere replication of the external behavior of some human being that we need to aim for. A huge condition-action table could do that. The structure of the machine's internal processing is what is important, for instance, when it attempts to determine the potential for alternative lines of development in an indefinitely large and varied set of circumstances.

These sketches could already be amplified in some detail. Nevertheless there are still many unsolved problems, so I do not claim that it is established that much of the structure of our internal or "subjective" experience could be mirrored in processes inside a symbol-manipulating system of the same general character as existing AI programs, though with far greater complexity. (In collaboration with Monica Croucher, a research student at Sussex University, I am exploring some of these ideas in more detail and hope to publish reports later on.)

Would World 2 events, objects, processes, etc. exist in such a symbol-manipulating system? I do not believe that any rational argument can answer this question decisively. Ultimately, it is a question for moral decision, not factual discovery. If a robot were made whose internal design and verbal and non-verbal behavior indicated decisively that computational processes structurally similar to typical human mental processes occurred within it, this would still leave some people saying: "But it is only a machine and so, by definition, ordinary mentalist labels are inapplicable to it." (9)

At that stage, with the machine pleading or "pleading" for friendship, for civil rights, for a good education, for a less uncomfortable elbow-joint, or whatever, we'd be faced with what I can only describe as a moral or political disagreement between those who asserted or denied that it was conscious and suffered. (Similar problems arise with animals, brain-damaged people, etc.)

Popper's position on this issue is unclear to me. I suspect he would not join the society for prevention of cruelty to robots, not because he does not oppose cruelty, but because he appears to be convinced that only an animal brain can provide a basis for World 2. (10)

However, I see no reason to share this conviction, and neither would many people who had grown up with such robots as playmates, nannies, house-servants, etc. Thus for me, and for such people, it would seem morally right to attribute subjective mental states, processes, and events to an "individual" with sufficiently rich and human-like internal computations. Apart from racial, or species, prejudice, we'd have as much reason for doing this as for treating people, cats, monkeys, etc. as conscious.


But as indicated above, the computations in such a machine essentially involve states, processes, events, and objects in World 3. That is, they involve such things as symbols, theories, plans, decisions, refutations, inferences, and such World 3 relations as implication, inconsistency, representation, denotation, validity, and the like.

If, from a certain moral standpoint, the existence of certain sorts of such World 3 phenomena is a sufficient condition for postulating the existence of World 2, then we have a kind of reduction of World 2 to World 3. More generally, much work in AI can be interpreted as an attempt to show that World 2 processes in people are really World 3 processes. This may seem paradoxical to philosophers who normally think of computers and AI in the context of attempting to reduce World 2 to World 1, the physical world of brains and atoms. But such a reduction is of little interest in the light of AI, since, as stated previously, it is relatively unimportant whether AI programs run on a physical computer, a spiritual or World 2 computer (in Popper's terms: the subjective human mind), or a "virtual machine" constituted by software (i.e., more programs) in a physical computer.

Levinson has suggested, in correspondence, that a moral distinction might be made between apparently intelligent artifacts and humans or other animals, since the former are deliberately created and fully understood, whereas the emergence of life and intelligence in the animal world remains at present an unexplained mystery, and therefore a suitable basis for awe and reverence. (I have paraphrased his argument.) My answer is two-fold. First, the "therefore" expresses a debatable moral position. Secondly, and more importantly, successful design of a human-like robot would remove much of the mystery, or at least help to remove it. The remaining task would be to extend existing biological studies to account for the evolution of programs in living things.

4. The Causal Dispensability of World 2

Finally, whatever the nature of the human mind, it is indubitable that AI programs do run in physical computers at present, that physical events (e.g., light entering TV cameras) can cause computational events to occur, and that computational events can cause physical events to occur, such as switching motors on and off in mechanical arms. Hence, within already existing systems we have causal interactions between World 1 and World 3, whereas according to Popper this should require the mediation of World 2. (11) Popper could rescue his claim by granting my main thesis concerning the reduction of World 2 to World 3, and then stating that what is required for causal mediation between World 1 and certain sorts of World 3 objects, such as theories, proofs, problems, etc., is the presence of other kinds of World 3 entities, namely the sorts of symbol manipulation which constitute mental processes!
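The two causal directions in this paragraph can be made explicit in a schematic sense-decide-act loop. The Python sketch below is purely illustrative: the threshold, symbols, and motor commands are invented, but the shape of the chain is the point. A physical quantity becomes a symbol, inference happens over symbols, and a symbol becomes a physical setting, with no World 2 mediator anywhere in between.

```python
# Schematic sense-decide-act loop: World 1 causes World 3 events and
# World 3 events cause World 1 events. All values here are invented.

def read_camera(scene_brightness):
    """World 1 -> World 3: a physical light level becomes a symbolic description."""
    return "bright" if scene_brightness > 0.5 else "dark"

def decide(description):
    """World 3: inference over symbols, not over physics."""
    return "retract_arm" if description == "bright" else "extend_arm"

def drive_motor(command):
    """World 3 -> World 1: a symbol becomes a physical motor setting."""
    return {"retract_arm": -1.0, "extend_arm": +1.0}[command]

# Light entering the camera ends up switching a motor: the physical event
# causes computational events, which cause a further physical event.
torque = drive_motor(decide(read_camera(0.8)))
```

Each step in the chain is a causal link of exactly the kind the paragraph describes: the middle step operates entirely on World 3 objects (symbols and their relations), yet the chain begins and ends in World 1.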

Levinson, in correspondence, has suggested that Popper might argue that since AI programs (and any robots that may one day descend from them) are products of human minds, they depend on World 2 for their existence. World 2 therefore would not be causally dispensable, at least not in the "prime mover" sense. This, however, presupposes that human mental processes are somehow essentially different from the World 3 processes in AI systems. As I pointed out above, this is essentially a moral view. The existence of computational systems with human-like abilities strongly suggests that instead of being mysterious other-worldly entities, human mental states, processes, events, etc. are computational entities on a biological computer, i.e., they are World 3 entities too. However, I do not claim that this has been established. It is a conjecture linked to a flourishing, exciting, and revolutionary research program.

5. Concluding Remarks

I have tried to live up to Popper's recommendation that conjectures should be bold and rich in content. The theses sketched above have plenty of consequences for traditional problems about the relation of mind and body. For instance, they imply that while certain sorts of physical conditions may be (morally) sufficient for postulating the existence of mental processes, the existence of a physical world anything like this one (i.e., the biological world) is not necessary for the existence of mind. There are many more implications of AI research, concerning knowledge, reasoning, and the nature of free decisions, some of which I have begun to explore elsewhere. (12)

Another of Popper's admirable recommendations is that one should expose potential weaknesses of one's theories, instead of merely displaying their strong points. I must therefore end by acknowledging that there are two aspects of the phenomenology of conscious experience which I do not yet see how to account for in terms of internal symbol-manipulations, namely, pleasant physical sensations and finding something funny. Perhaps pleasures in general and physical pleasures in particular can, like pains, be accounted for in terms of conscious and unconscious perceptual processes which interact in a suitably rich way with motivations, decision making, priorities, and the control of attention. (Analysis in terms of excitation of a physical subsystem labelled a "pleasure center" of course explains nothing.) As for finding something funny, I suspect that is essentially connected with being a social animal. If so, a robot with a sense of humor will have to have an awareness of, and a concern with, other sentient beings built deep into its system of beliefs and motivations. But this idea remains to be clarified and tested by detailed exploration.

NOTES AND REFERENCES

1. K.R. Popper, Unended Quest (Fontana/Collins, 1976), p. 186.

2. M. Boden, Artificial Intelligence and Natural Man (Harvester Press and Basic Books, 1977); and P. Winston, Artificial Intelligence (Addison Wesley, 1977).

3. A. Sloman, The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind (Harvester Press and Humanities Press, 1978).

4. It is worth noting that AI theories, although rich in content, since they are capable of explaining widely varying and intricately specified possibilities, often do not meet Popper's criterion for scientific status, since they are mostly at a stage of development where empirical falsification is not possible. As far as I am concerned this merely helps to show the inadequacy of Popper's criterion. The important thing for science is not that theories should have empirically refutable consequences, but that there should be varied and detailed consequences, and that whether something is a consequence of the theory should be objectively decidable. The consequences will not be empirically refutable, for instance, if they are of the form: X can occur or exist. I have argued elsewhere (Ch. 2, 1978) that explaining possibilities is a major function of scientific theories, and not just a hangover from their metaphysical pre-history.

Notice that I am not totally rejecting Popper's criterion. Like him, I claim that the scientific significance of a theory is to be assessed partly in terms of the number, variety, and types of consequences. Popper's mistake was simply to limit the range of admissible types to empirically refutable consequences. He could instead have criticized his "metaphysical" opponents simply by showing that their theories generated too few consequences, or that the inherent vagueness or openness of their theories did not allow an objective decision as to which consequences did or did not follow from their theories. I see no point in striving for a monolithic demarcation between science and non-science. It is not usually fruitful to divide the world into "goodies" and "baddies" when in fact there are many overlapping spectra of merit, with the same individuals often occupying different locations in different spectra. (Although Popper himself does not link his demarcation criterion with an evaluative distinction, it has been so used by many of his followers.)

Thus, the important question is not "Are AI theories scientific or metaphysical, or whatever?" but "What are their specific merits and faults, and how can they be improved?" Their study will prove a rewarding field for philosophers of science, as they are so different from theories in more familiar branches of science.

5. For examples, see Winston and Boden, op. cit.

6. See also M. Boden, Purposive Explanation in Psychology (Harvard University Press, 1972; Harvester Press, 1978).

7. See Boden, 1977, Part V, op. cit.; and G.J. Sussman, A Computational Model of Skill Acquisition (American Elsevier, 1975).

8. D.C. Dennett, Brainstorms: Philosophic Essays on Mind and Psychology (Bradford Books and Harvester Press).

9. E.g., Boden, 1977, op. cit., pp. 418-426.

10. K.R. Popper and J. Eccles, The Self and Its Brain (Springer Verlag, 1978).

11. E.g., Popper, op. cit., p. 185.

12. See also Boden, 1977, op. cit., Chapter 14.