
Page 1: The Technological Singularity - Risks & Opportunities - Monash University

The Technological Singularity 101

 Adam A. Ford

Page 2: The Technological Singularity - Risks & Opportunities - Monash University

AGI - What is it?

Strong AI, or "Artificial General Intelligence" (AGI), is AI that has the ability to perform "general intelligent action".

Distinction: Narrow AI (or Weak AI / Applied AI) is used to accomplish specific problem-solving or reasoning tasks that do not encompass (or in some cases are completely outside of) the full range of human cognitive abilities.

●Narrow: Deep Blue (Chess), Watson (Jeopardy)
●General: Us, Dolphins, Bonobos (with the ability to apply learning in one area to another)

Page 3: The Technological Singularity - Risks & Opportunities - Monash University

Real AGI? Seriously? Yes. This sounds astonishing, but it's becoming increasingly plausible. 100 years ago most scientists were vitalists, and modern physics was only beginning. 50 years ago the DNA double helix (Watson and Crick) had just been discovered, and the first transistor computers had just been built...

If the 21st century sees as much technological and scientific growth as the 20th century did, then it is VERY likely we will see superintelligent AIs.

Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic."

Page 4: The Technological Singularity - Risks & Opportunities - Monash University

AI? Seriously?

 "Reverse Engineering the Brain is Within Reach" - Terry Sejnowski (one of the most respected neuroscientists in the world) - was speaking in Melb last Wednesday! “We’re making steady progress toward Ray Kurzweil’s singularity,” says Justin Rattner, CTO of Intel. See article 'Intel touts progress towards intelligent computers' Dharmendra S Modha - IBM Brain emulation project

Page 5: The Technological Singularity - Risks & Opportunities - Monash University

The Technological Singularity

Vernor Vinge coined the term "Technological Singularity".

Vinge postulated that "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." - Vernor Vinge, 1994*

*No claim to be science! There is no hard evidence for this prediction - but neither is there for the predicted weather in a couple of weeks. ... Are we there yet? Are we there yet? ...

Page 6: The Technological Singularity - Risks & Opportunities - Monash University

The Metaphor implied by the Singularity

●Not a point of infinity, but an event horizon surrounding a black hole
●Not a discrete event
●Difficult to see past the event horizon from the outside (Kurzweil: we may observe "Hawking radiation", and therefore the singularity's aftermath is somewhat predictable)
●Explosive and transformative once the inflection point is reached

Page 7: The Technological Singularity - Risks & Opportunities - Monash University

Accelerating Change

●Kurzweil's graph
●Not a discrete event
●Despite being smooth and continuous, growth is explosive once the curve reaches transformative levels (the "knee of the curve")
●Consider the Internet. When the ARPANET went from 10,000 nodes to 20,000 in one year, and then to 40,000 and then 80,000, it was of interest only to a few thousand scientists. When, ten years later, it went from 10 million nodes to 20 million, and then 40 million and 80 million, the shape of the curve looks identical (especially when viewed on a log plot), but the consequences were profoundly more transformative.
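As a purely illustrative sketch (the node counts are the round numbers from the slide; the rest is invented for this example), the Python snippet below doubles a node count from two very different starting sizes and shows that the year-on-year growth factors - and hence the shape on a log plot - are identical, even though the absolute increments differ a thousandfold.

# Illustrative only: successive doublings look the same on a log scale
# whether they start at 10,000 nodes (ARPANET era) or 10,000,000 (Internet era).
for start in (10_000, 10_000_000):
    nodes = [start * 2 ** year for year in range(4)]    # the 10k -> 20k -> 40k -> 80k pattern
    factors = [later / earlier for earlier, later in zip(nodes, nodes[1:])]
    print(f"start={start:>10,}  nodes={nodes}  growth factors={factors}")
# Both runs print growth factors [2.0, 2.0, 2.0]: identical curves on a log plot,
# but the second run adds a thousand times more nodes in absolute terms.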

Page 8: The Technological Singularity - Risks & Opportunities - Monash University
Page 9: The Technological Singularity - Risks & Opportunities - Monash University

Intelligence Explosion

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." - I. J. Good (1960s)

Page 10: The Technological Singularity - Risks & Opportunities - Monash University

Intelligence Explosion

The purest case: an AI rewriting its own source code.

The key idea (a toy sketch follows below):
●improving intelligence accelerates further improvement
●there is a tipping point - like trying to balance a pen on one end: as soon as it tilts even a little, it quickly falls the rest of the way.

 

[Diagram: Push / Pull]
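As a toy sketch of the pen analogy (the gain value and step count are assumptions made purely for this illustration, not anything from the talk): a perfectly balanced state stays put, while any non-zero tilt is amplified each step until the pen has effectively fallen.

# Illustrative only: an unstable equilibrium, like a pen balanced on its tip.
def tilt_history(tilt, gain=2.0, steps=12):
    history = [tilt]
    for _ in range(steps):
        tilt = min(tilt * gain, 1.0)    # deviation grows each step; 1.0 = fully fallen
        history.append(round(tilt, 4))
    return history

print(tilt_history(0.0))     # perfectly balanced: stays at 0.0 forever
print(tilt_history(0.001))   # the tiniest tilt keeps growing until the pen falls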

Page 11: The Technological Singularity - Risks & Opportunities - Monash University

Evolutionary History of Mind

●Dennett's Tower of Generate and Test
●"Darwinian creatures", which were simply selected by trial and error on the merits of their bodies' ability to survive; their "thought" processes being entirely genetic.
●"Skinnerian creatures", which were also capable of independent action and could therefore enhance their chances of survival by finding the best action (conditioning overcame the purely genetic trial and error of Darwinian creatures); phenotypic plasticity; a bodily tribunal of evolved, though sometimes outdated, wisdom.
●"Popperian creatures", which can play out an action internally in a simulated environment before they perform it in the real environment, and can therefore reduce the chances of negative effects - this allows "our hypotheses to die in our stead" (Popper). (A sketch contrasting this with the Skinnerian strategy follows after this list.)
●"Gregorian creatures", which are tool-enabled - in particular, they master the tool of language. Using tools (e.g. words) permits learning from others: the possession of learned predispositions to carry out particular actions.

Page 12: The Technological Singularity - Risks & Opportunities - Monash University

Evolutionary History of Mind

●To add to Dennett's metaphor: we are building a new floor for the tower.
●"Turingian creatures", which use tool-enabled tool-enablers - in particular, they create autonomous agents to create even greater tools (e.g. minds with further-optimised cognitive abilities) - permitting insightful engineering of increasingly insightful agents, resulting in an Intelligence Explosion.

Page 13: The Technological Singularity - Risks & Opportunities - Monash University

Closing the Loop - Positive Feedback

The positive feedback loop is completed through an agent's ability to intelligently design and recreate its ability to intelligently design and recreate itself!

Bottlenecks? Other entities? The substrate is expandable (unlike the brain); the AI is white-box source code which can be engineered and manipulated (the brain is black-box code with no manual... it can possibly be improved through biotech, but not trivially).

●Better source code leads to better AI*
●The AI optimizes / improves its own source code

*Smarter AI will be even better at improving its source code
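As a toy model of this loop (the update rule and numbers are assumptions made purely for illustration, not a claim about real AI), the snippet below runs "better code -> more capable AI -> even better code" with two different returns on self-improvement: with weak returns, capability creeps up slowly, while once the per-cycle gain is large enough the loop runs away.

# Illustrative only: each cycle, capability is multiplied by a gain that itself
# depends on current capability ("smarter AI is better at improving its code").
def run_loop(gain_per_unit, capability=1.0, cycles=10):
    history = [capability]
    for _ in range(cycles):
        capability *= 1.0 + gain_per_unit * capability
        history.append(round(capability, 2))
    return history

print("weak returns:  ", run_loop(gain_per_unit=0.01))   # slow, steady growth
print("strong returns:", run_loop(gain_per_unit=0.5))    # runaway growth within a few cycles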

Page 14: The Technological Singularity - Risks & Opportunities - Monash University

Closing the Loop - Positive Feedback

Different from:
●evolution: a species falling up the stairs, or 'climbing Mount Improbable' through natural selection - blind, black-box
●farming -> specializations -> usury system -> writing
●Predicting what sort of things are likely to cascade is difficult.

An "insight" is a chunk of knowledge which, if you possess it, decreases the cost of solving a whole range of governed problems.

Recursively rewriting the cognitive level in an insightful way.

Page 15: The Technological Singularity - Risks & Opportunities - Monash University

Closing the Loop - Positive Feedback

Note the difference between evolution nibbling bits off the immediate search neighborhood, versus the human ability to do things in one fell swoop.

Evolution - combing the immediate search space
vs
Human cognition - search-space compression via insight

Our compression of the search space is also responsible for ideas cascading much more easily than adaptations. We actively examine good ideas, looking for neighbours.
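As a purely illustrative sketch (the bit-string target and scoring rule are invented for this example), the snippet below contrasts evolution-style blind local search - flip one bit, keep it only if fitness does not drop - with "insight", modelled here as knowing the structure of the scoring function and jumping straight to the optimum in one fell swoop.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]                      # the hidden optimum

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def blind_local_search(bits, max_steps=10_000):
    """Evolution-style: mutate one bit at a time, keep only non-worsening changes."""
    steps = 0
    while fitness(bits) < len(TARGET) and steps < max_steps:
        candidate = bits.copy()
        candidate[random.randrange(len(bits))] ^= 1     # nibble at the immediate neighbourhood
        if fitness(candidate) >= fitness(bits):
            bits = candidate
        steps += 1
    return steps

def insightful_search():
    """Insight compresses the search: knowing the scoring rule gives the answer at once."""
    return list(TARGET), 1                              # one step, in one fell swoop

print("blind local search steps:", blind_local_search([0] * len(TARGET)))
print("insightful search steps: ", insightful_search()[1])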

Page 16: The Technological Singularity - Risks & Opportunities - Monash University

Sustainability & Complexity

Clear and present danger - environmental sustainability.

We may not be able to dig our own way out with our current level of thinking. It is likely that we underestimate the complexity of the problems we face... though an AGI would have the facility to deal with extreme complexity - especially if the AI had a global understanding of science, and especially if the AI had access to MNT (Molecular Nanotechnology).

Page 17: The Technological Singularity - Risks & Opportunities - Monash University

"The significant problems we face cannot be solved at the same level of thinking we were at when we created them" - Albert Einstein

Sustainability & AI

An AI with a global understanding of science would likely have mastery of MNT - Molecular Nanotechnology:
●Universal assemblers and disassemblers
●Utility fog
●Clean energy
●Perhaps too dangerous for humans to take into their own hands!
 oHacktivism

Page 18: The Technological Singularity - Risks & Opportunities - Monash University

Desperate People do Desperate Things

To quote Jamais Cascio - openthefuture.com

Page 19: The Technological Singularity - Risks & Opportunities - Monash University

AI - When? Don't know.

●Differing estimates. Predictions are just predictions.
●The accumulation and acceleration of singularity-supporting technologies is likely to explode at some point...
●...but our abstractions (at least economic ones) are not a good enough fit to the problem to say exactly when or how.
●The point is that such force multipliers (i.e. discoveries in multiple areas of science and technology) are likely to lead to an intelligence explosion.
 oAn intelligence explosion would have profound effects on all of us.
●Best to prepare as early as possible.

Page 20: The Technological Singularity - Risks & Opportunities - Monash University

AI - Let's get it done correctly

●The impact of the Intelligence Explosion depends entirely on the type of AIs we push past the tipping point.
●Stability of goals in a self-modifying AI - staying friendly to humans.
●How? Coherent Extrapolated Volition - a seed AI to study human nature, then produce the AI which humanity would want, given sufficient time and insight to arrive at a satisfactory answer.
●Have a good idea of what will result from building an AI.

Page 21: The Technological Singularity - Risks & Opportunities - Monash University

Find out More: H+ & SingSum

●WHAT
 oThe Singularity Summit AU - Humanity+
 oA conference on the accelerating growth of technology/science
 oIts impact in the near/medium/long term - social, environmental
●WHEN
 oUsually held in August
 oLast weekend of National Science Week
●WHERE
 ohttp://singularitysummit.com.au / http://hplusconf.com.au
 oMost likely in Melbourne
●WHO
 oYou! - promotion of discussion
 oPreviously: David Chalmers, Ben Goertzel, Alan Hájek, Hugo de Garis, Steve Omohundro, Lawrence Krauss, Aubrey de Grey, Natasha Vita-More, Stelarc

Vid: Vernor Vinge on the Technological Singularity

Page 22: The Technological Singularity - Risks & Opportunities - Monash University

References

●"Kinds of Minds" - Daniel Dennett: http://www.amazon.com/Kinds-Minds-Understanding-Consciousness-Science/dp/0465073514
●Cascades, Cycles, Insight...: http://lesswrong.com/lw/w5/cascades_cycles_insight/
●Recursive Self-Improvement: http://lesswrong.com/lw/we/recursive_selfimprovement/
●Terry Sejnowski: http://www.scientificamerican.com/article.cfm?id=when-build-brains-like-ours
●Nanotech: http://e-drexler.com/
●Justin Rattner - Intel touts progress towards intelligent computers: http://news.cnet.com/8301-1001_3-10023055-92.html#ixzz1G9Iy1cXe