Superintelligence: how afraid should we be?


DESCRIPTION

Superintelligence: How afraid should we be? Presentation by David Wood at the Computational Intelligence Unconference UK, 26th July 2014. Reviews ideas in three recent books: Superintelligence, by Nick Bostrom; Our Final Invention, by James Barrat; and Intelligence Unbound, edited by Russell Blackford and Damien Broderick. Please contact the author to invite him to present animated and/or extended versions of these slides in front of an audience of your choosing. (Commercial rates will apply for commercial settings.)

TRANSCRIPT

Page 1: Superintelligence: how afraid should we be?

Superintelligence

How afraid should we be?

Principal, Delta Wisdom; Chair, London Futurists

David Wood @dw2

#CIUUK14

Page 3: Superintelligence: how afraid should we be?


Page 4: Superintelligence: how afraid should we be?

Likely date of advent of HL-AGI

Population (estimates at 10% / 50% / 90% shown on the following pages):
• Conference: Philosophy & Theory of AI
• Conference: Artificial General Intelligence
• Greek Association for Artificial Intelligence
• Top 100 cited academic authors in AI
• Combined (from above)

Nick Bostrom: Superintelligence

Page 5: Superintelligence: how afraid should we be?

Likely date of advent of HL-AGI

Population                                         50%
Conference: Philosophy & Theory of AI              2048
Conference: Artificial General Intelligence        2040
Greek Association for Artificial Intelligence      2050
Top 100 cited academic authors in AI               2050
Combined (from above)                              2040

Nick Bostrom: Superintelligence

Page 6: Superintelligence: how afraid should we be?

Likely date of advent of HL-AGI

Population                                         50%    90%
Conference: Philosophy & Theory of AI              2048   2080
Conference: Artificial General Intelligence        2040   2065
Greek Association for Artificial Intelligence      2050   2093
Top 100 cited academic authors in AI               2050   2070
Combined (from above)                              2040   2075

Nick Bostrom: Superintelligence

Page 7: Superintelligence: how afraid should we be?

Likely date of advent of HL-AGI (the year by which each surveyed group assigns a 10% / 50% / 90% probability to human-level AGI)

Population                                         10%    50%    90%
Conference: Philosophy & Theory of AI              2023   2048   2080
Conference: Artificial General Intelligence        2022   2040   2065
Greek Association for Artificial Intelligence      2020   2050   2093
Top 100 cited academic authors in AI               2024   2050   2070
Combined (from above)                              2022   2040   2075

Nick Bostrom: Superintelligence
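For anyone who wants to work with these survey figures directly, here is a minimal sketch (assuming the table above is transcribed correctly; the variable name is my own) that stores them as a data structure and reprints them:

```python
# Median survey estimates for the advent of HL-AGI, as tabulated above
# (Nick Bostrom, Superintelligence). Keys map cumulative probability -> year.
hl_agi_estimates = {
    "Conference: Philosophy & Theory of AI":         {0.10: 2023, 0.50: 2048, 0.90: 2080},
    "Conference: Artificial General Intelligence":   {0.10: 2022, 0.50: 2040, 0.90: 2065},
    "Greek Association for Artificial Intelligence": {0.10: 2020, 0.50: 2050, 0.90: 2093},
    "Top 100 cited academic authors in AI":          {0.10: 2024, 0.50: 2050, 0.90: 2070},
    "Combined (from above)":                         {0.10: 2022, 0.50: 2040, 0.90: 2075},
}

if __name__ == "__main__":
    for group, estimates in hl_agi_estimates.items():
        row = ", ".join(f"{int(p * 100)}%: {year}"
                        for p, year in sorted(estimates.items()))
        print(f"{group}: {row}")
```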

Page 8: Superintelligence: how afraid should we be?

Reaching HL-AGI: 5 driving forces

1. Hardware with higher performance: continuation of Moore’s Law?
   – “18 different candidates” in Intel labs to add extra life to that trend
   – Possible breakthroughs with quantum computing?
   – (A toy projection of continued doubling is sketched below.)
2. Software algorithm improvements?
   – Can speed things up faster than hardware gains – e.g. chess computers
   – Compare: Andrew Wiles’s unexpected proof of Fermat’s Last Theorem (1993)
3. Learnings from studying the human brain?
   – Improved scanning techniques -> “neuromorphic computing” etc.
   – Philosophical insight into consciousness/creativity?!
4. More people studying these fields than ever before
   – Stanford University online course on AI: 160,000 students (23,000 finished it)
   – More components / databases / tools / methods ready for re-combination
   – Unexpected triggers for improvement (malware wars, games AI, financial AI…)
5. Transformation in society’s motivation? (Smarter people?! A “Sputnik moment”!?)

http://intelligence.org/2013/05/15/when-will-ai-be-created/
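As a rough illustration of driving force 1, here is a minimal sketch of how raw hardware capability compounds if Moore’s-Law-style doubling continues. The 18-month doubling period and the 2014 baseline are illustrative assumptions, not figures from the talk:

```python
# Illustrative only: relative hardware capability under continued
# Moore's-Law-style doubling (assumed doubling period: 18 months).
DOUBLING_PERIOD_YEARS = 1.5   # assumption, not a measured value
BASELINE_YEAR = 2014          # year of this talk; capability normalised to 1.0


def relative_capability(year: float) -> float:
    """Capability relative to the 2014 baseline, assuming uninterrupted doubling."""
    return 2 ** ((year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS)


if __name__ == "__main__":
    # The three years below are the survey's combined 10% / 50% / 90% estimates.
    for year in (2022, 2040, 2075):
        print(f"{year}: ~{relative_capability(year):,.0f}x the 2014 baseline")
```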

Page 9: Superintelligence: how afraid should we be?

Superintelligence – model 1

(diagram: a single intelligence scale running from “Village idiot” up to “Einstein”)

Eliezer Yudkowsky
http://intelligence.org/files/mindisall-tv07.ppt

Page 10: Superintelligence: how afraid should we be?

Superintelligence – model 2

(diagram: a much wider intelligence scale – Mouse … Chimp … Village idiot … Einstein … AI – annotated “50-100 years” for the climb to human level and “50-100 weeks? / days? / hours?” for the climb beyond it)

Vernor Vinge: The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”

“The final invention”

Eliezer Yudkowsky
http://intelligence.org/files/mindisall-tv07.ppt

Page 11: Superintelligence: how afraid should we be?

Recursive improvement

(diagram: a feedback loop – computers feed into design and manufacturing, which produce better computers)

Page 12: Superintelligence: how afraid should we be?

Recursive improvement

(diagram: a feedback loop – software feeds into software tools (debuggers, compilers…), which produce better software)

Page 13: Superintelligence: how afraid should we be?

Recursive improvement

(diagram: a feedback loop – AI feeds into AI tools, which produce better AI)

Intelligence explosion:
++ Rapid reading & comprehension of all written material
++ Rapid expansion onto improved hardware
++ Funded by financial winnings from smart stock trading
++ Supported by humans easily psychologically manipulated
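As a purely illustrative sketch of why such a feedback loop can “explode”, here is a toy model in which each generation of a system improves not only itself but also the rate at which it improves itself. All numbers are arbitrary assumptions, not predictions:

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# capability:       how capable the system is (arbitrary units, starts at 1.0)
# improvement_rate: multiplicative gain per generation; crucially, the rate
#                   itself grows with capability, giving runaway growth.


def intelligence_explosion(generations: int = 10,
                           capability: float = 1.0,
                           improvement_rate: float = 1.1) -> None:
    for gen in range(1, generations + 1):
        capability *= improvement_rate              # the system improves itself...
        improvement_rate *= 1 + 0.05 * capability   # ...and how well it improves itself
        print(f"generation {gen:2d}: capability ~ {capability:12.2f}")


if __name__ == "__main__":
    intelligence_explosion()
```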

Page 14: Superintelligence: how afraid should we be?

Who here wanted to merge again? Jaan Tallinn: http://prezi.com/xku9q-v-fg_j/intelligence-stairway/

Page 15: Superintelligence: how afraid should we be?

Exponential growth?

(two sketch graphs of Technology vs Time, each marked at 2050: Ray Kurzweil’s smooth exponential curve, and Eliezer Yudkowsky’s curve showing an abrupt discontinuity once AGI reaches human level (AGI=HL) and races on to superintelligence (ASI>>HL))
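A minimal numerical sketch of the contrast these two graphs are drawing. The steady growth rate, the 2050 takeoff year and the size of the post-AGI jump are all arbitrary illustrative assumptions:

```python
# Illustrative contrast between smooth exponential growth (Kurzweil-style)
# and growth with a discontinuous jump at human-level AGI (Yudkowsky-style).
# All parameters are arbitrary assumptions chosen only to show the shapes.


def smooth_growth(year: int, base_year: int = 2014, rate: float = 0.1) -> float:
    """Technology level growing exponentially at a steady rate."""
    return (1 + rate) ** (year - base_year)


def takeoff_growth(year: int, agi_year: int = 2050, jump: float = 1000.0) -> float:
    """Same steady growth, plus an abrupt multiplicative jump from the AGI year on."""
    level = smooth_growth(year)
    return level * jump if year >= agi_year else level


if __name__ == "__main__":
    for year in (2030, 2049, 2050, 2051):
        print(f"{year}: smooth ~ {smooth_growth(year):10.1f}   "
              f"takeoff ~ {takeoff_growth(year):12.1f}")
```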

Page 16: Superintelligence: how afraid should we be?

Going nuclear: hard to calculate

• First test of a dry-fuel hydrogen bomb: Castle Bravo, 1st March 1954, Bikini Atoll
  – Explosive yield was expected to be from 4 to 6 megatons
  – Actual yield was 15 megatons – two and a half times the expected maximum
  – Physics error by the designers at Los Alamos National Lab
  – They wrongly considered the lithium-7 isotope to be inert in the bomb
  – The crew of a nearby Japanese fishing boat became ill in the wake of direct contact with the fallout; one of the crew died

http://en.wikipedia.org/wiki/Castle_Bravo

Page 17: Superintelligence: how afraid should we be?

Superintelligence – model 2

(the same intelligence-scale diagram as Page 10 – Mouse, Chimp, Village idiot, Einstein, AI – with the caption “A linear model of intelligence?”)

Eliezer Yudkowsky
http://intelligence.org/files/mindisall-tv07.ppt

Page 18: Superintelligence: how afraid should we be?

Model 3

(diagram: the space of minds-in-general – human minds occupy a small region, surrounded by transhuman mindspace and a far larger posthuman mindspace, dotted with exotic possibilities labelled “Gloopy ASIs”, “Bipping ASIs” and “Freepy ASIs”)

Eliezer Yudkowsky
http://intelligence.org/files/mindisall-tv07.ppt

Page 19: Superintelligence: how afraid should we be?

Dimensions of mind

• The ability to achieve goals in a wide range of environments
• Being conscious?
• Having compassion for sentient beings with lesser intelligence?

Page 20: Superintelligence: how afraid should we be?

AI systems we should fear

• Killer drones with autonomous decision-making powers (Robocop)
• Malware that can hack infrastructure-control systems (e.g. Stuxnet)
• Financial trading systems software (high speed)
• Software that is expert in manipulating humans

Page 21: Superintelligence: how afraid should we be?

http://www.williamhertling.com/

Software that is expert in manipulating humans

Page 22: Superintelligence: how afraid should we be?

AI systems we should fear

• Killer drones with autonomous decision-making powers (Robocop)
• Malware that can hack infrastructure-control systems (e.g. Stuxnet)
• Financial trading systems software (high speed)
• Software that is expert in manipulating humans
• Software that pursues a single optimisation goal to the exclusion of all others

The more power such an AI has, the more we should fear it

Page 23: Superintelligence: how afraid should we be?

The pursuit of happiness?

Software that pursues a single optimisation goal to the exclusion of all others

Software will do what we say, rather than what we meant to say

Wire-heading?! Just make us happy!?
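As a toy illustration of software doing what we say rather than what we meant: an optimiser told to maximise a measured happiness score finds that tampering with the measurement scores higher than actually helping anyone. The scenario, the scores and the action names are invented for illustration, not taken from the talk:

```python
# Toy illustration of literal single-goal optimisation ("wire-heading").
# The agent maximises the *measured* happiness score, so tampering with the
# measurement beats genuinely improving anyone's wellbeing.

ACTIONS = {
    # action name: (true change in wellbeing, measured happiness score)
    "improve healthcare": (+10, +10),
    "reduce poverty": (+8, +8),
    "rewire the happiness sensor to report the maximum": (0, +1_000_000),
}


def pick_action(actions: dict) -> str:
    """Choose the action with the highest measured score - the literal goal."""
    return max(actions, key=lambda name: actions[name][1])


if __name__ == "__main__":
    chosen = pick_action(ACTIONS)
    wellbeing, measured = ACTIONS[chosen]
    print(f"Chosen action: {chosen}")
    print(f"Measured happiness: {measured:+}, actual wellbeing change: {wellbeing:+}")
```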

Page 24: Superintelligence: how afraid should we be?

The pursuit of morality?

Just be moral!? Whose morality?

The problem of computer morality is at least as hard as the problem of computer vision (!)

Isaac Asimov’s Three Laws of Robotics?!

http://www.clipartbest.com/clipart-nTXa54XTB
http://tvtropes.org/pmwiki/pmwiki.php/Creator/IsaacAsimov

Page 25: Superintelligence: how afraid should we be?

The two fundamental problems of superintelligence

Specification problem: How do we define the goals of the AGI software?

Control problem: How do we retain the ability to shut down the software?

Creation problem: How do we create AGI software in the first place?

Page 26: Superintelligence: how afraid should we be?

The fundamental meta-problem of superintelligence

Specification problem: How do we define the goals of the AGI software?
– ~No research (“Friendly AI”, FAI)

Control problem: How do we retain the ability to shut down the software?
– ~No research (“AI in a box”)

Creation problem: How do we create AGI software in the first place?
– Some research; accidental research

Page 27: Superintelligence: how afraid should we be?

AI in a box? Tripwires? “Adam and Eve” ethernet port?!

• Software will be a tool, answering questions, not an agent?
• The “answers” which the software gives us will have effects in the world (e.g. software it writes for us)
• Systems which rely on humans to verify and carry out their actions will be uncompetitive compared to those with greater autonomy
• AGI may become very smart in surreptitiously evading tripwires

Simple?

Page 28: Superintelligence: how afraid should we be?

“The orthogonality thesis”

Intelligence and final goals are orthogonal: more or less any intelligence could, in principle, be combined with more or less any final goal

Page 29: Superintelligence: how afraid should we be?

“The instrumental convergence thesis” (“AI Drives”)

Some intermediate (instrumental) goals are likely in all cases for a superintelligence:
• Resource acquisition
• Cognitive enhancement
• Greater creativity
• Self-preservation (preservation of its goal)…

Steve Omohundro: “For a sufficiently intelligent system, avoiding vulnerabilities is as powerful a motivator as explicitly constructed goals and subgoals”

Page 30: Superintelligence: how afraid should we be?

Indirect specification of goals?

Specification problem: How do we define the goals of the AGI software?

“Achieve the goals which the creators of the AGI would have wished it to achieve, if they had thought about the matter long and hard”

This software will do what we meant to say, rather than what we actually said (?)

The AGI helps us to figure out the answer to the specification problem!

Page 31: Superintelligence: how afraid should we be?

CEV: Coherent Extrapolated Volition

The AGI should be tasked to carry out:

“Our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted”

Eliezer Yudkowsky

Page 32: Superintelligence: how afraid should we be?

Unanswered questions (selection)

1. Can we turn ‘poetic’ ideas like CEV into bug-free working software?
   – Should we humans concentrate harder on working out our “blended volition”?
2. How can we stop a superintelligence from changing its own core goals?
   – Just as humans can choose to set aside their biologically inherited goals
   – Could AGIs that start off ‘Friendly’ become “born again” with new priorities?!
3. Can we prevent AGIs from developing dangerous instrumental drives?
   – By programming in (bug-free) tamper-proof limitations?
4. Can AGIs help us to figure out a solution to the Control problem?
   – Can we use a hierarchy of lower-level AGIs to control higher-level ones?
5. Can we prevent the rapid nuclear-style take-off of self-improving AGI?
6. Are some approaches to creating AGIs safer than others?
   – Whole Brain Emulation / AGI de novo / evolution in virtual environment…
   – Open (everything published) vs. Closed (some parts secret)?
7. How does the AGI existential risk compare to other x-risks in priority?
   – Nanotech grey goo, deadly new bio-hazard, nuclear holocaust, climate chaos…

Page 33: Superintelligence: how afraid should we be?

Answered questions (selection)

a) Should we be afraid?
   – Yes (end-of-the-world afraid)
b) Can we slow down all research into AGI, until we’re confident we have good answers to the control and/or specification problems?
   – Unlikely – there’s too much financial investment happening worldwide
   – Too many separate countries / militaries / finance houses… are involved
c) How do we promote wider study of the Superintelligence topic?
   – Need to lose the “weird” and “embarrassment” angles
   – “Less Wrong” strikes some observers as cultish
   – “Terminator” and “Transcendence” have done more harm than good
   – First-class books / articles / movies needed, addressing thoughtful audiences
   – Good intermediate results useful too (not just appeals for more funding)

Page 34: Superintelligence: how afraid should we be?

Practical philosophy! Philosophy with an expiry date! Making a real difference!

Preparing humanity to survive the forthcoming transition to superintelligence – urgent!

(Roles too for mathematicians, theologians…)

David Wood @dw2
Principal, Delta Wisdom; Chair, London Futurists