"dinosaurs, dodos, humans?" - n. bostrom

Upload: jfargnoli

Post on 04-Apr-2018

215 views

Category:

Documents


0 download

TRANSCRIPT

  • 7/29/2019 "Dinosaurs, Dodos, Humans?" - N. Bostrom

    1/2

Dinosaurs, dodos, humans?

There is, perhaps, a 50% chance that humankind will be annihilated this century, says Nick Bostrom. We should try to prevent this.


Something like 99.9% of all species that ever lived are now extinct. When will our own species join the dinosaurs and the dodos? How could that happen? And what can we do to stave off the end?

An existential risk is one that threatens to annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Since we are still here, we know that no existential disaster has ever occurred. But, lacking experience of such disasters, we have not evolved mechanisms, biologically or culturally, for managing existential risks.

The human species has had long experience with hazards such as dangerous animals, hostile tribes and individuals, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcanic eruptions, earthquakes, droughts, wars, and epidemics of influenza, smallpox, black plague and AIDS. These types of disaster have occurred many times throughout history. Our attitudes towards risk have been shaped by trial and error as we have been trying to cope with such risks. Yet, tragic as those events are to the people immediately affected, they have not determined the long-term fate of our species. Even the worst of those catastrophes were mere ripples on the surface of the great sea of life.

Existential risks are a relatively novel phenomenon. With the exception of a species-destroying comet or asteroid impact (an extremely rare occurrence), there were probably no significant existential risks in human history until the mid-20th century, and certainly none that it was within our power to do anything about.

Perception is reality

The first man-made existential risk may have been the inaugural detonation of the atomic bomb. At the time, there was some concern that the explosion might start a runaway chain reaction by igniting the atmosphere. Although we now know that such an outcome is physically impossible, an existential risk was present then. For there to be a risk it suffices that there is some subjective probability, given the knowledge and understanding available, of an adverse outcome, even if it later turns out that, objectively, there was no possibility of a disaster. If we do not know whether something is objectively risky or not, then it is risky in the subjective sense. The subjective sense is what we must base our decisions on.

The human species has survived natural hazards for hundreds of thousands of years. It is unlikely, then, that such a hazard should strike us down within the next century. By contrast, the activities of our own species are introducing completely novel possibilities for disaster. The most serious existential risks for humanity in the 21st century are of our own making. More specifically, they are related to anticipated technological developments.

Man-made danger

In this century, we may learn to create designer pathogens that combine extremely high virulence with easy transmittability, long incubation time and resistance to medication. Physicists will be colliding elementary particles and creating new kinds of matter in accelerators at increasingly high energies, and unexpected and potentially catastrophic things could happen when they do this. Nuclear war remains a big concern. Although current stockpiles are probably insufficient to cause human extinction even in an all-out war, future arms races might lead to the build-up of much larger arsenals.

Molecular nanotechnology will, in its mature form, give us an unprecedented ability to control the structure of matter and to create powerful new weapons systems. Or what if we built computers more intelligent than humans and with the ability to self-improve? Furthermore, we must take into account the possibility that new kinds of risk may emerge that are as unforeseeable as the hydrogen bomb was in 1905.



Maybe the world will end not with a bang but a whimper. In addition to sudden, cataclysmic disasters, there are various more gradual and subtle ways in which an existential catastrophe could occur. One scenario is a totalitarian world government that oppresses its citizens and bans the use of technology to enhance human capacities, depriving humanity of its potential. Such a government might use ubiquitous surveillance and new kinds of mind-control technologies to prevent reform or rebellion.

Alternatively, evolutionary developments could take us in an undesirable direction, eliminating traits central to human values, such as consciousness, intelligence or, even, playfulness. There is no biological law that evolution always leads to an increase in valuable attributes. While human evolution operates over very long timescales, technologically-assisted evolution could be much faster, and evolution among a population of machine intelligences or human uploads could be extremely rapid. The wrong kind of cultural evolution could lead to stagnation and thorough debasement of human life.

Our planet is just a small part of a potentially much bigger drama. Earth is but a tiny speck in a cosmos that contains billions of suns that are illuminating empty rooms. Every second, more energy and matter go to waste in our galaxy than the human species has used up throughout its existence. If things go well, we will one day learn to use some of these resources to create wonderful new civilizations and novel forms of life.

Invasion of the robotic probes

Some studies suggest that an uncoordinated colonization race could, instead, lead to the evolution of self-replicating robotic colonization probes that would use up these cosmic commons to no other purpose than to make more copies of themselves. An astronomical potential for valuable development would have been lost.

The topic of existential risks has been remarkably neglected, with few systematic studies. Canadian philosopher John Leslie, after reviewing the evidence, concluded that the risk of human extinction within the present century is 50%. Britain's Astronomer Royal, Sir Martin Rees, also puts the risk at 50%. Richard Posner, an American judge and legal scholar, did not give a numerical estimate but concluded in his book on the subject that the risk is serious and substantial. I argued in a paper in 2001 that it would be misguided to set the probability at less than 25%.

Attempts to quantify existential risk inevitably involve a large helping of subjective judgment. And, of course, there may be a publication bias in that those who believe that the risk is larger are more likely to publish books. Nevertheless, everybody who has seriously looked at the issue agrees that the risks are considerable. Even if the probability of extinction were merely 5%, or 1%, it would still be worth taking seriously in view of how much is at stake.
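As a rough illustration of how much is at stake (a back-of-the-envelope sketch; the world population figure of roughly 6.5 billion for 2006 is an approximation not given in the article), even the most conservative estimate above implies an enormous expected loss:

    expected deaths at 1% risk ≥ 0.01 × 6,500,000,000 = 65,000,000

And this counts only the present generation; it ignores the far larger number of potential future lives that an existential catastrophe would also forfeit.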

It is sad that humanity as a whole has not invested even a few million dollars to improve its thinking about how it may best ensure its own survival. Some existential risks are difficult to study in a rigorous way, but we will not know what insights we might develop until we do the research. There are also some sub-species of existential risk that can be measured, such as the risk of a species-destroying meteor or asteroid impact. This particular risk turns out to be very small. A meteor or an asteroid would have to be considerably larger than 1km in diameter to pose an existential risk. Fortunately, such objects hit the Earth less than once in 500,000 years on average.
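To put that rate into per-century terms (a rough conversion of the article's own figure, assuming impacts arrive at a steady average rate):

    P(impact within a given century) < 100 / 500,000 = 0.0002, i.e. under 0.02%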

The magnitude of existential risks is not a fixed quantity: it becomes larger or smaller depending on human action. We can take deliberate steps to reduce many existential risks. For instance, Nasa, America's space agency, is mapping out asteroids larger than 1km in its Spaceguard Survey. If we were to get sufficiently early warning of a looming impact, it might be possible to launch a rocket with a nuclear charge to deflect the juggernaut.

Some of the studies and countermeasures that would reduce existential risk would also be relevant for mitigating lesser hazards. A global catastrophic risk is one that could cause tens or hundreds of millions of deaths even as it falls far short of terminating the human race. The same programme that monitors doomsday asteroids can also detect somewhat smaller objects that pose merely global catastrophic risks. Countermeasures against future designer pathogens might also be useful in combating naturally-occurring pandemics. Research and measures to reduce the chance of catastrophic, runaway global warming would also be useful for understanding and counteracting the much more plausible mid-range scenarios.

A great leader accepts responsibility for the long-term consequences of the policies he or she pursues. With regard to existential risks, the challenge is neither to ignore them nor to indulge in gloomy despondency, but to seek understanding and to take the most cost-effective steps to make the world safer.

CV: Nick Bostrom

Nick Bostrom is director of the Future of Humanity Institute at the University of Oxford. He has a background in physics, philosophy, computational neuroscience and artificial intelligence.

[Photo caption: Large asteroids hit the Earth every 500,000 years]