CMSC 671
Fall 2001

Class #11 – Tuesday, October 9


Today’s class

• Philosophy of AI
  – Can we build intelligent machines?
    • If we do, how will we know they’re intelligent?
  – Should we build intelligent machines?
    • If we do, how should we treat them…
    • …and how will they treat us?


Philosophy of AI

Alan M. Turing, “Computing Machinery and Intelligence”

John R. Searle, “Minds, Brains, and Programs”

J. Storrs Hall, “Ethics for Machines”

(supplementary: Russell & Norvig Ch. 27)


Philosophical debates

• What is AI, really?
  – What does an intelligent system look like?
  – Do we need, and can we have, emotions, consciousness, empathy, love?

• Can we ever achieve AI, even in principle?

• How will we know if we’ve done it?

• If we can do it, should we?


Turing test

• Basic test (a minimal harness follows this slide):
  – Interrogator in one room, human in another, system in a third
  – Interrogator asks questions; human and system answer
  – Interrogator tries to guess which is which
  – If the system wins (the interrogator can’t reliably tell it from the human), it has passed the Turing Test
• The system doesn’t have to tell the truth (obviously…)
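
A minimal harness for this protocol, offered as a sketch only: every name below (the callables, the toy canned replies) is illustrative and not from Turing’s paper.

import random

def imitation_game(interrogate, guess_machine, human, machine, rounds=5):
    """One round of the imitation game, with all strategy left to the
    callables passed in.

    human, machine : map a question string to an answer string
    interrogate    : maps the transcript so far to the next question
    guess_machine  : maps the full transcript to a label, "A" or "B"
    """
    # Hide the contestants behind anonymous labels, in random order.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = interrogate(transcript)
        # Both contestants answer; neither is obliged to tell the truth.
        answers = {label: player(question) for label, player in labels.items()}
        transcript.append((question, answers))

    # The system passes (this round) if the interrogator guesses wrong.
    return labels[guess_machine(transcript)] is not machine

# Toy demo: canned replies, a fixed question, and a random guess.
human = lambda q: "I grew up in Baltimore."
machine = lambda q: "So did I, probably."
interrogate = lambda transcript: "Where did you grow up?"
guess = lambda transcript: random.choice(["A", "B"])
print("machine passed:", imitation_game(interrogate, guess, human, machine))

The harness itself is trivial; all of the difficulty lives in the interrogator’s questioning strategy and in the system’s replies.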


Turing test objections

• Objections are basically of two forms:
  – “No computer will ever be able to pass this test”
  – “Even if a computer passed this test, it wouldn’t be intelligent”


“Machines can’t think”

• Theological objections
• “It’s simply not possible, that’s all”
• Arguments from incompleteness theorems
  – But people aren’t complete, are they?
• Machines can’t be conscious or feel emotions
  – Reductionism doesn’t really answer the question: why can’t machines be conscious or feel emotions??
• Machines don’t have Human Quality X
• Machines just do what we tell them to do
  – Maybe people just do what their neurons tell them to do…
• Machines are digital; people are analog


“The Turing test isn’t meaningful”: Chinese Room argument


“The Turing test isn’t meaningful”

• Maybe so, but… if we don’t use the Turing test, what measure should we use?
• Very much an open question…


Ethical concerns: Robot behavior

• How do we want our intelligent systems to behave?

• How can we ensure they do so?

• Asimov’s Three Laws of Robotics (encoded as a precedence ordering in the sketch below):
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
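
The three Laws form a strict precedence ordering, and a lexicographic comparison captures that ordering directly. Below is a minimal sketch (not from Asimov or the readings) that assumes toy boolean flags in place of the genuinely hard judgments, e.g. whether an action really “allows a human to come to harm”:

def law_violations(action):
    """Score an action by which Laws it violates, highest priority first.

    Actions here are toy dicts of boolean effects; deciding whether a
    real action injures a human or allows harm through inaction is the
    hard part, and is simply assumed away by these flags.
    """
    first = action.get("injures_human", False) or action.get("allows_harm", False)
    second = action.get("disobeys_order", False)
    third = action.get("endangers_self", False)
    return (first, second, third)

def choose_action(candidates):
    # Python compares tuples lexicographically, so any First Law
    # violation outweighs all Second and Third Law considerations:
    # exactly the strict precedence the Laws specify.
    return min(candidates, key=law_violations)

# Toy example: self-sacrifice beats letting a human come to harm.
options = [
    {"name": "stand by",     "allows_harm": True},
    {"name": "shield human", "endangers_self": True},
]
print(choose_action(options)["name"])   # -> shield human

The control flow is the easy part; whether the predicates themselves could ever be specified is exactly the question raised on the next slide.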


Ethical concerns: Human behavior

• Is it morally justified to create intelligent systems with these constraints?
  – As a secondary question, would it even be possible to do so?
• Should intelligent systems have free will? Can we prevent them from having free will??
• Will intelligent systems have consciousness? (Strong AI)
  – If they do, will it drive them insane to be constrained by artificial ethics placed on them by humans?
• If intelligent systems develop their own ethics and morality, will we like what they come up with?

