Singularity
Singularity: What is it? Will a positive Singularity lead to a utopia?
What Is It? How can it occur?
The notion of some radical technological shift beyond which the world is unimaginable
Visions of the future
Just the continuation of more and better gadgets and faster computers?
Accelerating Returns: the rate of change, of innovation, is accelerating; even the rate of its acceleration is accelerating.
Event Horizon / Intelligence Explosion: the advent of greater-than-human intelligence
Apocalypticism? A geek religion?
Models of Singularity
Advent of greater-than-human intelligence: AGI, brain emulation, augmenting human intelligence
Accelerating change, which also makes brain emulation and workable AI nearly inevitable
But some say just the change itself will grow beyond our ability to understand or predict
Accelerating Returns
Examination of multiple events and capabilities shows exponential increase
All are driven by (and contribute to) intelligence:
Rate of inventions and innovation
Adoption rates of innovation
Increase in computational ability
Increase in communication ability and use
Time between paradigm shifts
Computational power per unit cost
Accelerating Returns: Core Claim
Technological change feeds on itself and therefore accelerates. Our past or current rate of change is not a good predictor of the future rate of change.
Strong Claim: Technological change follows smooth curves, typically exponential. Therefore we can predict with some precision when various changes will arrive and cross key thresholds, like the creation of AI.
Advocates: Ray Kurzweil, Alvin Toffler, John Smart
Paradigm Shift Time
DNA Sequencing Costs
Mass Use of Inventions
Moore’s Law
Data Mass Storage
ISP Cost-Performance
Random Access Memory
Exponential Growth - Computing
Computational Increase Predictions
Human brain capability (2 * 10^16 cps) for $1000 around 2023
Human brain capability (2 * 10^16 cps) for one cent around 2037
Human race capability (2 * 10^28 cps) for $1000 around 2049
Human race capability (2 * 10^28 cps) for one cent around 2059
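A minimal sketch of the kind of extrapolation behind such dates, anchored on the 2023 figure above. The one-year price-performance doubling time is an illustrative assumption, not a number from this talk; the accelerating-returns claim is that this interval itself keeps shrinking, which is why the talk's dates arrive sooner than a constant-rate extrapolation would suggest.

```python
import math

# Anchor from the slide above: ~2e16 cps for $1000 around 2023.
ANCHOR_YEAR = 2023
ANCHOR_CPS_PER_DOLLAR = 2e16 / 1000

# Assumed constant price-performance doubling time (illustrative only).
DOUBLING_TIME_YEARS = 1.0

def year_reached(target_cps, budget_dollars):
    """Year when budget_dollars buys target_cps, assuming constant doubling."""
    doublings = math.log2((target_cps / budget_dollars) / ANCHOR_CPS_PER_DOLLAR)
    return ANCHOR_YEAR + doublings * DOUBLING_TIME_YEARS

print(round(year_reached(2e16, 0.01)))   # one human brain for one cent: ~2040
print(round(year_reached(2e28, 1000)))   # human-race capability for $1000: ~2063
```

With a fixed one-year doubling time this gives roughly 2040 and 2063; the earlier dates on the slide follow only if the doubling time itself shrinks, which is exactly the strong accelerating-returns claim.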
Computational Acceleration
Blue Gene will have 5% of the computational power needed to match a human brain
In the next five years or so supercomputers will match the computing capacity of the human brain
By about 2030 such a machine will cost around $1.
What happens when you put 1000 of them to work 24x7 on, say, the AGI problem, MNT, or ending aging?
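Rough back-of-the-envelope arithmetic for the figures above, using the 2 * 10^16 cps brain estimate quoted earlier in this talk (a sketch for scale only, not a benchmark):

```python
BRAIN_CPS = 2e16                  # brain estimate quoted earlier in the talk

print(0.05 * BRAIN_CPS)           # "5% of a brain": ~1e15 cps, roughly petaflop scale
print(1000 * BRAIN_CPS)           # 1000 brain-scale machines running 24x7: ~2e19 cps
print(1000 * BRAIN_CPS / 2e28)    # ...still only ~1e-9 of the "human race" figure above
```

Even a thousand brain-scale machines would still be about a billionth of the human-race capability figure, which is why the later dates in the previous slide matter.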
Limits of Human Intelligence
20–500 Hz neuron firing rate
Lately there is evidence that the pattern of firings is the important information unit
Very few aspects of a problem may be held in conscious awareness at one time
Very slow learning rates
Faulty memory
Limited communication ability with other minds
Squishy, deteriorating mind implementation
Poor scalability
Poor transfer of knowledge
Very slow replication
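For scale, a rough comparison of the firing-rate ceiling above with a typical modern CPU clock. The ~3 GHz figure is an assumed illustration, not a number from the talk, and the brain compensates with massive parallelism, which is why the talk compares total cps rather than clock rates.

```python
NEURON_HZ_MAX = 500     # upper end of the 20-500 Hz range quoted above
CPU_CLOCK_HZ = 3e9      # assumed typical modern CPU clock rate, for comparison

print(CPU_CLOCK_HZ / NEURON_HZ_MAX)   # ~6 million: per-element serial speed gap
```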
Past Intelligence Augmentation
Communication, starting with speech
Writing: an early augmentation of human intelligence
Stable cultures respectful of knowledge and encouraging of innovation
Dissemination of learning
Science and Mathematics
Reality-respecting culture
Computation: gathering and doing more with information, ever faster
Device Convergence
Future Intelligence Augmentation
Ubiquitous computing
Exo-cortex
Wearable or embedded computing / communication
Brain-computer interface
Upgrading the brain with technology
Bio-chemical upgrading
Upload
Event Horizon: Core Claim
Soon we will not be the greatest intelligence on the planet
Then the changes in our world will no longer be in our control, or understandable and predictable by us
Strong Claim: To understand what a superintelligence would do, you would have to be that intelligent. Thus the future after that is totally unpredictable.
Advocates: Vernor Vinge
Greater Than Human Intelligence
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
- I. J. Good 1965
Intelligence Explosion
Core Claim: Creating minds significantly smarter than humans closes the loop, creating a positive feedback cycle.
Strong Claim: The positive feedback cycle goes FOOM, each improvement triggering the next. Superintelligence up to the limits of the laws of physics develops rapidly.
Advocates: I.J. Good, Eliezer Yudkowsky
What Does the Singularity Mean for Us?
Short Range
Employment / Work
Cures for all disease, including aging
Near limitless abundance
Economics
Political backlash
Sociological
Impact of Human Equivalent Machine Intelligence
Assume:
Human intelligence without biological limitations and distractions
Can focus and work 24 x 7
Ultimately very cheap to own, and easy to replicate any unit's knowledge
Results:
Almost all human labor, especially intellectual labor, becomes economically superfluous
Progress increases substantially when entire armies of them can be deployed on projects
So economics must change a great deal for all humans to continue to partake in the results
Our view of ourselves must change
Maybe we join the AIs and become uploads
> Human Intelligence Impact
Much that goes on is totally opaque to even augmented humans
Humans, even transhumans, are not in charge; they are second-class citizens
Are they respected? Kept around?
Our history with other species is not encouraging
Are humans irrelevant to AGIs?
Is there an objective ethical argument for treating lesser intelligences well?
Is going extinct to usher in a greater intelligence "OK"?
What Does the Singularity Mean for Us?
Long term: Do we survive?
If we do not, are we happy to have produced wonderful “mind children,” as Moravec suggests?
Are we recognizable as human? Does it matter?
What of those who do not wish to change?
Unlimited future...
Unchecked or unwise recursively improving AI turns the universe to goo
What Could Stop the Singularity?
Turns out to be unnoticeable and no big deal
Harsh conditions slow, halt, or reverse progress
Unexpected limits and difficulties
Harsh Things Happen
Economic Disaster: Some believe this will take a decade or two to clean up
Energy Disaster: Easily a decade to largely replace oil
Existential Risk
Major War
What would Utopia take?
We become super bright, compassionate and enlightened
AGI is Friendly and brings us the best we can possibly have, as quickly as it deems best for us to have it
AGI ends up Friendly because
Its own decision, based on:
We are its progenitors
It finds us curious or amusing
Its ethics lead it to treat lesser intelligences well
Some really bright human hackers gave it unbreakable limits or a top-level utility function
Is the best we can get utopia?
What is utopia? End of all suffering?
Does this mean the end of wanting what you don't, and perhaps can't yet, have?
Does it mean the end of negative feedback or consequences no matter what you do?
Do you instantly become enlightened?
End of material lack?
Beyond lack, do people stop wanting?
Is utopia nothing left to do?
Can humans be happy as second-class citizens, or so totally outclassed as to be beyond comprehension?