
SEARLE AND FUNCTIONALISM

Is the mind software?

Materialist Theories of the Mind

• The Identity Theory

– Each type of mental property is identical to a certain type of physical property. E.g. pain is just stimulation of the C-fibres.

• Functionalism

– A given mental state (e.g. pain) depends on the software not the hardware. Pain can be realised in many different physical ways.

• Eliminative Materialism

– Traditional mental states, like beliefs and desires, do not exist.


Functionalism and AI

• AI (Artificial intelligence) tries to design computer programs that will perform mental tasks of some kind.

• The whole idea of AI assumes functionalism. Functionalism says that the mind is software, not hardware.

• In his “Chinese Room” paper, Searle attacks functionalism.


Weak AI:

The programmed computer simulates the mind. It is a research tool, for testing psychological explanations. It can perform tasks that require thought in humans.

Strong AI:

The programmed computer is a mind. The computer really understands and is conscious.


• No one questions the possibility of weak AI, but strong AI is controversial.

• Searle focuses on ‘intentionality’ in this paper. But conscious intentionality is probably an even bigger problem for computer programs.

• Intentionality = meaning, significance, “aboutness”, understanding


Intentionality

• Random marks on a piece of paper have no meaning. They are not ‘about’ anything. But the words ‘the moon has no atmosphere’ are about the moon. They have a meaning. They somehow connect with an external state of affairs.

• But the real source of the intentionality is the mind that understands the sentence, and thinks the thought (proposition). Without that, the sentence is just a set of marks on paper.

• Searle doubts that computer programs can have intentionality in this sense.


Do chatbots have intentionality?

• Roger Schank’s program can answer questions about restaurants. It has a “representation” of general knowledge about restaurants. But does it understand the story, the questions, and its own responses?

• (Similar to programs like Elbot, Cleverbot, and iGod today.)

“Functionally equivalent”

• Imagine two black boxes (you can’t see what’s inside).

• Each box has buttons labelled A, B, C, and a red, green and a blue light.

• Suppose you press buttons ABCA on one box, and the green and blue lights turn on. You try the same ABCA on the other box, and the same thing happens.


• N.B. a “black box” description of a system specifies its output (response) for every possible input (stimulus).


“Functionally equivalent”

• So you try other inputs, and get various outputs. But the two boxes always react in the same way as each other.

• When the two boxes are given the same input, they always give the same output.

• In that case they’re functionally equivalent.


• If you open the boxes and look inside, must they be exactly the same inside as well?

• No. E.g. two calculators both give the output ‘4’ for the input ‘2+2=’, and so on, but the calculators might have very different circuitry inside. (The sketch below illustrates this.)
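To make functional equivalence concrete, here is a minimal Python sketch (my own illustration; the button values and the rules for the lights are invented, not taken from the slides). The two “boxes” have different internals but give the same output for every input, which is all that functional equivalence requires.

from itertools import product

def box_a(buttons: str) -> set[str]:
    # One possible circuit: add up button values, then test the total.
    value = sum({"A": 1, "B": 2, "C": 3}[b] for b in buttons)
    lights = set()
    if value >= 5:
        lights.add("green")
    if value % 2 == 1:
        lights.add("blue")
    return lights

def box_b(buttons: str) -> set[str]:
    # Different wiring inside, but the same input-output mapping.
    value = len(buttons) + buttons.count("B") + 2 * buttons.count("C")
    green = {"green"} if value > 4 else set()
    blue = {"blue"} if value & 1 else set()
    return green | blue

# Pressing ABCA turns on the green and blue lights on both boxes ...
assert box_a("ABCA") == box_b("ABCA") == {"green", "blue"}

# ... and every other input gives the same output on both boxes too.
for n in range(1, 5):
    for presses in map("".join, product("ABC", repeat=n)):
        assert box_a(presses) == box_b(presses)

Opening the boxes would reveal two quite different mechanisms, but no test of inputs and outputs can tell them apart.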


Functionalism:

• Functionally equivalent ⇒ mentally equivalent

• I.e. if two systems are functionally equivalent (same outputs for the same inputs) then they’re mentally equivalent (same consciousness, intentionality, etc.)


The Stanford Encyclopedia of Philosophy:

“Functionalism is the doctrine that what makes something a thought, desire, pain (or any other type of mental state) depends not on its internal constitution, but solely on its function, or the role it plays, in the cognitive system of which it is a part. More precisely, functionalist theories take the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states, and behavior.”


Mental states are “black boxes”

It doesn’t matter what’s going on inside. The mental state is whatever it is that turns input experiences and other mental states into behaviour.

Programs and functional equivalence

• Two computers with different architecture can run the same program. (E.g. Mac can run Windows.)

• Computers running the same program are functionally equivalent. (Why is this?)

• If functionalism is true, then mental states are just a matter of the program that is running.


How chatbots work

• Script: general information that the virtual person has (e.g. people won’t usually eat a badly burned hamburger).

• Story: e.g. “the waiter brings a man a burned hamburger and he storms out without paying”.

• Program: some very complicated rules, based on dissecting the sentences, seeing formal (structural) relationships, reordering words, substituting terms, etc.

Program instructions are things like the following (a toy version is sketched below). To any question of the form “Do you like [X]?”:

• reply “Yes, I like [X]” if X is a member of the set {chocolate, money, fast cars, philosophy, …};

• reply “No, I don’t like [X]” if X is a member of {the smell of a wet dog, watching Glee, …};

• reply “I’m not sure, I don’t know what [X] is” if X is on neither list.
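A toy version of that rule, as a hedged Python sketch (the two sets come from the slide; the question pattern and the fallback reply are invented details, and real chatbot programs use far more elaborate pattern matching):

import re

# Purely formal rules: nothing below "knows" what chocolate or a wet dog is.
LIKES = {"chocolate", "money", "fast cars", "philosophy"}
DISLIKES = {"the smell of a wet dog", "watching Glee"}

def reply(question: str) -> str:
    match = re.fullmatch(r"Do you like (.+)\?", question.strip())
    if not match:
        return "Let's talk about something else."  # invented fallback
    x = match.group(1)
    if x in LIKES:
        return f"Yes, I like {x}"
    if x in DISLIKES:
        return f"No, I don't like {x}"
    return f"I'm not sure, I don't know what {x} is"

print(reply("Do you like philosophy?"))     # Yes, I like philosophy
print(reply("Do you like watching Glee?"))  # No, I don't like watching Glee
print(reply("Do you like quarks?"))         # I'm not sure, I don't know what quarks is

The replies are produced entirely by matching and substituting symbols; there is no understanding anywhere in the loop.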


• Questions: e.g. “Did the man eat the hamburger?”

• Responses: e.g. “No, he didn’t eat the hamburger”
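As a similarly hedged sketch of how a script-plus-story program might produce that answer (all of the rules here are invented for illustration, in the spirit of the Schank-style programs the slides describe):

# "Script": general restaurant knowledge; "story": the text to be questioned.
SCRIPT = {"burned": "not eaten"}  # badly burned food usually isn't eaten
STORY = ("the waiter brings a man a burned hamburger "
         "and he storms out without paying")

def answer(question: str) -> str:
    if question == "Did the man eat the hamburger?":
        # The program only checks whether the story contains a word that the
        # script links to "not eaten"; it attaches no meaning to any of it.
        if any(cue in STORY for cue in SCRIPT):
            return "No, he didn't eat the hamburger"
        return "Yes, he ate the hamburger"
    return "I don't know"

print(answer("Did the man eat the hamburger?"))  # No, he didn't eat the hamburger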


• Suppose that the chatbot’s answers to the questions are convincing, as good as those of a real English speaker. Does the computer understand English?

• No. Not a word of it, says Searle.

• Is Searle right about this?

Chinese Room Thought Experiment

• To show this, Searle imagines that he himself does the job of the computer, obeying the chatbot program’s commands.

• Searle is in a room with a ‘script’, a ‘story’, some ‘questions’ and a ‘program’.

• The trick is that the script, story, questions and answers are all in Chinese, a language that Searle doesn’t speak at all. (The program is in English, so Searle understands that.)


• Suppose that the answers to the questions are convincing, as good as those of a real Chinese speaker. Does Searle understand Chinese?

• No. Not a word of it, says Searle. He has no idea what any of the questions, or his answers, mean. He is just cutting and pasting symbols, according to rules.


“1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing.” (p. 351)

• In other words, Searle argues, meaning does not arise from running a program, no matter how sophisticated.


Clarifications

1. What is “understanding” anyway? More than a mere function, says Searle. When a thermostat turns the furnace off, does it think “it is too hot in here”?

– No. (And certainly not with conscious intentionality.)

2. Could a machine think? Yes, says Searle: we are such machines. But not in virtue of the program, he thinks. The hardware of the machine is relevant.


• “‘Yes, but could an artificial, a man-made machine, think?’ Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.” (p. 352)


“OK, but could a digital computer think?”

“If by ‘digital computer’ we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.” (p. 353)


• “‘But could something think, understand, and so on solely by virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?’ This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.” (p. 353)


• Functionalism: If two systems are functionally equivalent (same outputs for the same inputs) then they’re mentally equivalent (same consciousness, intentionality, etc.)

• Searle opposes functionalism, not materialism. A complex system of water pipes, etc. cannot be conscious, even if it executes the right program, Searle thinks.


What are the pipes thinking about?


Going deeper …

“The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn’t confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?” (p. 353)


Concretism?

Is thought intrinsically concrete? Programs, after all, are abstract, and abstract objects don’t feel pain.

Comparison with idealism

• According to Berkeley, ordinary objects like sticks and stones are really ideas.

• But even Berkeley didn’t go so far as to say that we (humans) are ideas! That notion would have been ridiculous. We are minds, active producers of ideas.

• Ideas are passive, inert. They cannot make free choices. They are not conscious. Aren’t abstract objects also passive and inert?

• Now, computer programs are also abstract ideas, at least in the sense of being fully accessible to the mind (and not to the senses).

• Yet functionalism says that consciousness is just a matter of running the right computer program.

• Searle’s intuition seems to be that consciousness depends on the concrete aspects of a system.


Concreteness?

• N.B. The notion of ‘concreteness’ is rather obscure, and is basically the same as the notion of substance or substratum that Berkeley ridiculed.

• A concrete object isn’t just a collection of properties, or a bundle of ideas. There’s some (concrete) substance that has the properties, or instantiates those ideas.


• If computer programs cannot be minds because they are abstract, rather than concrete, then Searle’s position looks precarious, however.

• For material systems are also abstract, in a sense, if they correspond precisely to (abstract) mathematical structures.


Argument for functionalism: “neuron-replacement therapy”

• Suppose you’re starting to have some mental problems, perhaps memory loss, confusion, emotional instability, or a difficulty solving math equations. It’s gradually getting worse. Your GP refers you to a specialist, who says that some of your neurons are breaking down. The best treatment is NRT, or neuron-replacement therapy. This unfortunately cannot undo the existing damage, but will prevent further decline. They identify neurons that are close to failure, remove them, and replace them with digital circuits that are (you guessed it!) functionally equivalent to the old neurons. (Of course the electronic neurons will last indefinitely.) By replacing all the neurons that might fail in the next 5 years, the treatment gives you 5 years with no further mental decline.


• You’re understandably nervous about the procedure, worried that you’ll no longer be fully human, but part machine. The specialist reassures you with this argument:

“There’ll be no loss of function at all, since the replacement neurons are functionally equivalent to the old ones. If you replace part of a system with another part that is functionally equivalent to it, then the whole system is functionally unchanged.”
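The specialist’s reasoning can be pictured with a small Python sketch (my own illustration, with an invented “neuron” interface; it shows only the functional-equivalence step, and does not settle the philosophical question): swap a component for one with the same input-output behaviour, and the behaviour of the whole circuit cannot change.

def biological_neuron(inputs: list[float]) -> float:
    # Fires (outputs 1.0) when the summed input exceeds a threshold.
    return 1.0 if sum(inputs) > 0.5 else 0.0

def digital_neuron(inputs: list[float]) -> float:
    # Different internals (an explicit loop), same input-output mapping.
    total = 0.0
    for signal in inputs:
        total += signal
    return float(total > 0.5)

def verbal_report(neuron, stimulus: list[float]) -> str:
    # A two-stage toy "circuit": the report depends only on the neurons'
    # input-output behaviour, not on what they are made of.
    first = neuron(stimulus)
    second = neuron([first, 0.2])
    return "I feel fine" if second else "I feel funny"

stimulus = [0.3, 0.4]
assert verbal_report(biological_neuron, stimulus) == verbal_report(digital_neuron, stimulus)

The sketch captures behaviour only; whether how things feel could nonetheless differ is exactly the worry raised next.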


• But, you reply, even if my behaviour is the same, under all possible stimuli, might I not feel different?

“Not a chance,” he says. “For if you felt different, you might talk about it, saying things like ‘I feel funny’. But in that case your behaviour is also different, which we know is impossible! So you cannot feel any different either. Don’t worry.”


• Every 5 years you need another round of NRT, until eventually your brain is entirely electronic. But, of course, all is well. You recommend NRT to all your friends.

• This argument seems to establish the view that there cannot be a change of mental state as long as everything is functionally the same.

• Does it?


• Note that this argument assumes materialism as a premise.

• So if it works, it means that every materialist should be a functionalist.


Argument for functionalism: the problem of other minds

• How do I know that other people are conscious, as I am?

• The only evidence I have is their behaviour, in response to different situations.

• If functionalism is false, then of course this wouldn’t be very good evidence at all, so that our belief in other minds would be quite unjustified!


The Turing Test of intelligence

• Suppose we can program a computer so that it is able to hold (apparently) intelligent conversation, just like a human being. Such a machine would, in conversation at least, be functionally equivalent to a human. Now how could you regard the words of such a machine as “meaningless” to it, or claim that “it has no idea what it’s saying”?

• If it overhears such talk, then it will firmly set the matter straight! “You might just as easily think that your own mother lacks intentional states,” the machine protests. “It’s discrimination, plain and simple.”

Further arguments against functionalism

1. The inverted spectrum. A person with an inverted spectrum will be functionally equivalent to us, but have different mental states. (Ned Block)

2. Functional zombies. We can conceive of a being that is functionally equivalent to a human, but which has no consciousness at all.


1. The inverted spectrum

Someone with an inverted spectrum will be functionally equivalent to us, but their mental states will be different.


2. “Functional zombies” are conceivable

A “functional zombie”, as philosophers use the term in this context, is someone who is functionally equivalent to a human and yet “no one is home”. Such zombies are not conscious, any more than electronic calculators are. Hence our conception of consciousness, at any rate, is distinct from any functionally defined state.

• E.g. Ned Block’s “Chinese nation” may be a functional zombie. The population of China is connected together to be functionally equivalent to a human brain. Then, while individual Chinese people will have conscious experiences, the whole system (the “Blockhead”) will conceivably have no experiences at all.


• So having a certain functional role isn’t sufficient for a conscious experience.

• Is it necessary?

• Could a person have conscious experiences that didn’t cause any behaviour?

“… people may have mild, but distinctive, twinges that have no typical causes or characteristic effects” (Stanford Encyclopedia)
