
We're Underestimating the Risk of Human Extinction

202 points | ca98am79 | 14 years ago | theatlantic.com | reply

111 comments

[+] mseebach|14 years ago|reply
There are so many people who could come into existence in the future if humanity survives this critical period of time - we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions of times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.

It's really an interesting moral discussion - it's like an extension of the classic "sacrifice one person to save five" dilemma, but I really can't agree with his flippant assertion that our moral obligation to the unborn future billions eclipses our obligation to help our contemporaries. And he's not even talking about preserving the planet for future generations; he's merely concerned with them being born in the first place.

Also, his argument has the same shortcoming as the "sacrifice" dilemma: we cannot know for sure that a certain action will have a certain outcome - or that any action taken was actually the cause of the outcome.

[+] lincolnq|14 years ago|reply
"unborn future billions" sounds so... abstract.

Think of Petrov: http://lesswrong.com/lw/8f0/existential_risk/

We effectively credit him with saving the world. If it weren't for him, most of us might not be around. I consider him a hero. From a moral perspective I would rather be him than Bill Gates, who has greatly reduced suffering and disease in the modern world, just because of the impact on the future.

[+] astrofinch|14 years ago|reply
>Also, his argument has the same shortcoming as the "sacrifice" dilemma: we cannot know for sure that a certain action will have a certain outcome - or that any action taken was actually the cause of the outcome.

I don't see how this is a shortcoming.

If I am leading a company, I cannot know for sure that a certain action will have a certain outcome – or that any action taken was actually the cause of a given outcome. That's not going to prevent me from doing my best to build a good product and make a profit.

In the same way, the uncertainty inherent in reducing existential risk shouldn't prevent us from doing our best to reduce it.

[+] angersock|14 years ago|reply
That's the problem of reasoning on timescales of civilizations, yeah? You can't really pay attention to transient effects and still make meaningful policies.

For example, consider how many people died mining coal over the past couple of centuries, or how many natives died when colonizing forces spread diseases to them or outright committed genocide.

Sure, it's awful, but would the world really be a better place if America were limited to some traders on the East coast? Or if England had never really gotten the Industrial Revolution thing figured out?

We can't really reason in human terms when talking about the species writ large.

[+] rudiger|14 years ago|reply
Just like there's a time value of money, there should be a time value of people. The unborn future billions should be discounted to the present value so that we can accurately compare them. What's the discount rate?
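
A rough sketch of that arithmetic in Python (the 3% rate is an assumed placeholder, not something from the article or the comment): under standard exponential discounting, any positive rate makes far-future lives nearly worthless, so Bostrom-style arguments implicitly need a rate close to zero.

    # Present value of one statistical life saved `years` from now,
    # under a constant annual discount rate. Both inputs are assumptions.
    def present_value(future_value, rate, years):
        return future_value / (1 + rate) ** years

    for years in (10, 100, 500):
        print(years, present_value(1.0, rate=0.03, years=years))
    # At 3%, a life 500 years out is "worth" about 4e-7 of a life today;
    # the unborn future billions all but vanish from the ledger.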
[+] AznHisoka|14 years ago|reply
I feel these moral rules go out the window the moment you're faced with a life and death scenario. When we're talking about this theoretically, it's easy to say a billion people in the future are more important, but the moment you realize you could die, you say screw those people, I'm gonna save my life first.
[+] philwelch|14 years ago|reply
> And he's not even talking about preserving the planet for the future generations, he's merely concerned with them being born in the first place.

It's odd to give that much moral weight to the potential future existence of unborn people centuries from now; how does that not favor simply having as many babies as possible?

[+] cletus|14 years ago|reply
It's worth bringing up The Most Important Video You'll Ever Watch [1], in which the lecturer argues that humanity's biggest problem is our inability to understand the exponential function. Watch all 8 parts.

I have come to the conclusion that there simply are too many of us. We can probably sustain our current levels for a century, maybe two, but at some point scarce resources (and their subsequent cost) will have a devastating effect.

Basically, we need to correct our population before nature does.

As much as people point to space as our future, I simply (sadly) do not agree. While there might be plentiful resources in the asteroid belt (and on other bodies), nothing compares to how cheaply we can pull things out of the ground here on Earth. Our society is predicated on cheap, plentiful resources, such that it can't survive them becoming several (or even one?) orders of magnitude more expensive.

As far as interstellar space goes, even if we solve the reaction mass problem and have perfect (100% efficient) conversion of matter to energy, it will still be prohibitive to go to even the nearest stars.

Perhaps the simplest explanation of the Fermi Paradox is that the resources reachable by a starfaring civilization grow only polynomially (the volume of a sphere expanding at the speed of light grows as t^3), while its population grows exponentially. And exponential will ultimately "win". (A toy version of this race is sketched below the footnote.)

[1]: http://www.youtube.com/watch?v=F-QA2rkpBSY
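
A toy version of that race (the 1%/year growth rate is invented for illustration): the volume reachable by light-speed expansion grows as t^3, while population at any fixed positive rate grows as e^(kt), so the exponential eventually overtakes the cube.

    import math

    def sphere(t):
        # resources reachable by light-speed expansion, ~ volume ~ t**3
        return t ** 3

    def population(t, k=0.01):
        # unchecked population at a constant growth rate k (assumed 1%/yr)
        return math.exp(k * t)

    t = 2  # skip t = 1, where both functions are ~1
    while population(t) <= sphere(t):
        t += 1
    print(t)  # ~2,330 years until the exponential wins, with these numbers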

[+] astrofinch|14 years ago|reply
That video is a joke for anyone who has more than a superficial understanding of math. Just because a process has fit an exponential curve does not mean exponential growth is going to continue forever. Past growth patterns are fairly weak evidence of future growth patterns. ("Housing prices always go up!")

That's not to say we shouldn't fear a process that is inherently exponential in nature, like the reproduction of bacteria or a nuclear reaction going supercritical. But if the process only appears from the outside to have been growing exponentially, that's only weak evidence that it will keep growing in an insane fashion.

In any case, it seems that as nations become more developed people stop having kids:

http://www.overcomingbias.com/2010/11/fertility-the-big-prob...
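
To illustrate the curve-fitting point with a toy example (every parameter here is made up): a logistic (sigmoid) curve with carrying capacity K tracks an exponential with the same initial rate almost exactly, right up until the ceiling starts to bind.

    import math

    def exponential(t, x0=1.0, r=0.05):
        return x0 * math.exp(r * t)

    def logistic(t, x0=1.0, r=0.05, K=1000.0):
        # same initial rate r, but growth saturates at carrying capacity K
        return K / (1 + (K / x0 - 1) * math.exp(-r * t))

    for t in (0, 50, 100, 200):
        print(t, round(exponential(t), 1), round(logistic(t), 1))
    # At t=50 the two differ by ~1%; by t=200 the exponential is near
    # 22,000 while the logistic has flattened out just under K = 1000.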

[+] snowwrestler|14 years ago|reply
World population growth is not exponential; in fact, the growth rate has been declining for decades. It seems increasingly possible that the human population will stabilize or even decline this century.
[+] InclinedPlane|14 years ago|reply
Psssst. Want to know a secret?

More humans doesn't mean fewer resources; it means more resources. The Earth isn't "running out" of anything; the idea of a carrying capacity for a technological species is ridiculous.

[+] zmj|14 years ago|reply
> Perhaps the simplest explanation of the Fermi Paradox is that potential growth for a starfaring civilization is geometric (being a sphere ultimately limited by the speed of light) while growth rates are exponential. And exponential will ultimately "win".

I read a paper within the last few months that used simulated expansion into a galaxy to debunk that idea. The fallacy is assuming that growth is continuous and ceases simultaneously across a civilization. The (simulated) reality is that only a tiny percentage of frontier colonies need to survive to prevent extinction and eventually resume growth.

I wish I could find the paper. Anyone know the one I'm talking about?

[+] huherto|14 years ago|reply
Is it an exponential function or a sigmoid?
[+] growt|14 years ago|reply
"... we humans will destroy ourselves. ... Most worrying ... human technology"

no shit! breaking news everybody!

I hate to break it to him but there are 10 million science-fiction books out there dealing with every possible way humans could wipe themselves off the earth. (and maybe just as much real science)

[+] angersock|14 years ago|reply
I don't know why you are getting downvoted...

Maybe people don't read as much speculative fiction as they used to (or books in general, for that matter, but I digress...), but it seems like lately I've run into more than a few people acting surprised by concepts that, frankly, were explored to death in books as long as half a century ago.

[+] rquantz|14 years ago|reply
From the article, Bostrom's simulation argument holds that at least one of the following must be true:

1) Almost all civilizations like ours go extinct before reaching technological maturity.

2) Almost all technologically mature civilizations lose interest in creating ancestor simulations: computer simulations detailed enough that the simulated minds within them would be conscious.

3) We're almost certainly living in a computer simulation.

It seems like there's a fourth possibility here, which is that a full simulation of the universe is not possible. Anyone care to comment on the computability of something like that?

[+] jerf|14 years ago|reply
There's no reason to believe our universe is uncomputable. It may require vast resources, in excess of what our universe possesses (by definition, in some sense), but we have no ability to say that there can exist no other possible universe that may not only possess these resources, but consider our entire universe the moral equivalent of a homework problem running on a toy computer.

Even if our universe is in some sense based on real numbers (in the math sense), there's no apparent reason to believe that arbitrarily accurate simulations couldn't be run of it. (Or, alternatively, our host universe may also have real numbers and be able to use them for simulation purposes.)

[+] Cieplak|14 years ago|reply
Also, it could be that the 'real' civilization is not simulating the entire universe, but rather just the solar system, simulating only the light entering it from outside, substantially reducing the computational complexity of the simulation.

Still, modeling every living being as a sort of cellular automaton and running the simulation would take a lot of quantum computers. What would be the computational complexity of modelling the consciousness of a human being?

[+] Cieplak|14 years ago|reply
Perhaps the "real" civilization could run the simulation slower than real-time. For instance, while a whole hour elapses in the real world, only a half hour might elapse in the simulation. To the person observing the simulation, everyone inside the simulation appears to be going slowly, while time flows normally for the people inside the simulation.

I think this would halve the computation required per unit of real time; the total work is unchanged, just spread over twice as much real time.

[+] kstenerud|14 years ago|reply
4) There just aren't very many posthuman civilizations yet.

The other three assume a crowded universe.

[+] astrofinch|14 years ago|reply
If you buy Bostrom's arguments, you could make a donation to his research group:

http://www.fhi.ox.ac.uk/donate

I still can't believe that "trying to save the human race from destruction" is being allocated such a tiny fraction of the world GDP.

[+] rwallace|14 years ago|reply
The scare quotes are well placed. The problem with Bostrom et al. is not that they are spending money. It's not even the fact that they are wasting their time on imaginary risks. It's that, by trying to bend policy around those imaginary risks, they are increasing our vulnerability to real-life risks.
[+] PaulHoule|14 years ago|reply
uh, the risk of human extinction is 100%

the main question is whether we have 8 months left or 8 years, 80,000 years or 800,000.

[+] Symmetry|14 years ago|reply
I'm hoping we'll make it to 80,000,000,000, but your general point is correct. There are only so many computational cycles that can be extracted from the universe before you run out of places to sink entropy.
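
A crude version of that bound via Landauer's principle, with every input a round-number assumption (and reversible computing could in principle evade it):

    import math

    # Landauer: erasing one bit costs at least k*T*ln(2) joules. Dividing
    # the universe's ordinary mass-energy by that floor gives a rough
    # ceiling on irreversible operations. All figures are order-of-magnitude.
    k = 1.38e-23                 # Boltzmann constant, J/K
    T = 2.7                      # CMB temperature, the coldest free heat sink
    mass = 1e53                  # ordinary matter in observable universe, kg
    energy = mass * (3e8) ** 2   # E = m c^2, joules
    print(f"{energy / (k * T * math.log(2)):.1e}")  # ~3.5e92 bit erasures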
[+] pjscott|14 years ago|reply
Sure, but worrying about "expected time to human extinction" doesn't make for nearly as good a headline.
[+] funkah|14 years ago|reply
Thanks for commenting!
[+] tianshuo|14 years ago|reply
I'm bringing up an equation that needs consideration:

PotentialDamage = N x P1 x P2 x P3 x MaxDamage

where N = world population, P1 = fraction of people with knowledge of and access to dangerous technologies, P2 = fraction with destructive personalities, P3 = fraction who would actually act, and MaxDamage = the number of casualties a single person can cause.

While P1, P2, and P3 are relatively stable, and N grows almost linearly, MaxDamage grows exponentially: weapons accessible to ordinary people have evolved from sticks, to axes, to guns and explosives. At present, the main reason nuclear weapons have not been exploded by terrorists is not technology but scarce materials. For bio-warfare, though, the cost of the technology will shrink exponentially while its impact grows exponentially.

This means that if the trend continues, PotentialDamage will one day exceed our population, and extinction will occur. By estimating these parameters we could even predict a date.
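
To make the shape of the argument concrete, here is the equation with loudly invented numbers plugged in (none of these parameters come from the comment or the article):

    # PotentialDamage = N * P1 * P2 * P3 * MaxDamage, per the model above.
    # Every value is a placeholder; the model's key assumption is that only
    # MaxDamage grows, here doubling once per decade.
    N = 8e9           # world population, held constant (assumed)
    P1 = 1e-4         # fraction with access to dangerous technology (assumed)
    P2 = 1e-3         # fraction with destructive personalities (assumed)
    P3 = 1e-2         # fraction who would actually act (assumed)
    max_damage = 1e3  # casualties one person can cause today (assumed)

    decades = 0
    while N * P1 * P2 * P3 * max_damage < N:
        max_damage *= 2
        decades += 1
    print(decades * 10, "years until PotentialDamage exceeds the population")
    # -> 200 years, under these made-up inputs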

[+] koningrobot|14 years ago|reply
The one thing that's always skipped over in this kind of article is... why? The universe doesn't care whether we're around or not. It sure as hell doesn't affect us whether people are around even a thousand years from now. If you're worried about future generations suffering, then don't create them. That's a lot cheaper than spending fortunes on a technological solution, and the money saved can be used to help those suffering now.
[+] bdunbar|14 years ago|reply
You don't have to be a Velikovsky to appreciate that the solar system is a giant game of billiards, humanity is in a fragile aquarium perched on the eight-ball.

To put it another way: I have fire insurance, but I've never had a fire. I have a gun (or four) but I've never had to shoot anyone.

Prudent folk recognize that life is risk and prepare accordingly.

[+] Tangurena|14 years ago|reply
We're underestimating the risk because we're using Net Present Value to weigh future events. While NPV can be useful in business for judging the profitability of future endeavors, it isn't useful when we're unable to correctly calculate all the other things that get wrapped up in the word "externality".
[+] uvdiv|14 years ago|reply
"Even with nuclear weapons, if you rewind the tape you notice that it turned out that in order to make a nuclear weapon you had to have these very rare raw materials like highly enriched uranium or plutonium, which are very difficult to get [a]. But suppose it had turned out that there was some technological technique that allowed you to make a nuclear weapon by baking sand in a microwave oven or something like that. If it had turned out that way then where would we be now? Presumably once that discovery had been made civilization would have been doomed."

It works both ways; technology could find simpler ways to build superweapons, but it could also lower the entry barrier to complex, resource-intensive projects. Look at the consequences of even mild singularity predictions for robotics and AI -- things that lower industrial "costs" by orders of magnitude, things like 3D-printers, self-replicating machines, cheap and ubiquitous fab robots. Anyone could build incredibly sophisticated machines in their own homes -- including sports cars, jet engines, and giant TV screens, but equally, compact laser enrichment cascades [b] and nuclear weapons.

Perhaps it's lack of knowledge on my part, but I don't see what will stop atomic bombs from being as common as handguns in 30-100 years. Fissile material is ubiquitous [c]; there's no barrier beyond economics and engineering, of exactly the sort near-term AI could unpredictably disrupt.

[a] (Tangential silliness: terrestrial uranium WAS weapons-grade material a few billion years ago -- U-235 decays with a 0.7-billion-year half-life, compared to 4.5 billion years for U-238, so in geologic history the fissile fraction used to be extremely high. Another one for the "anthropic principle" bin: earth's intelligence must have evolved now, and not earlier, because if it had it would have trivially nuked itself...) The decay arithmetic is sketched below, after the links.

[b] Check out [NYT][APS] -- it's actually a mainstream position among arms control experts to consider shutting down US research in laser enrichment, to prevent the knowledge from being developed at all. To me this plan sounds about as airtight as suggesting we shut down the NSA to keep mathematical secrets like RSA from being discovered.

[c] Here's a blogger who goes hiking and brings back uranium ore by the bucket [Willis] for his own experimental hackery. Uranium ore occurs everywhere [Cameco]; even common granite is a low-grade ore containing 5-50 ppm (parts per million) U [AZGS]; and research suggests it's even feasible to extract it in bulk from seawater [NBF].

[NYT] http://www.nytimes.com/2011/08/21/science/earth/21laser.html...

[APS] http://aps.org/units/fps/newsletters/201007/slakey.cfm

[Willis] http://carlwillis.wordpress.com/2008/02/20/uranium-chemistry...

[Cameco] http://www.cameco.com/uranium_101/uranium_science/uranium/

[AZGS] http://repository.azgs.az.gov/uri_gin/azgs/dlio/414

[NBF] http://nextbigfuture.com/2009/09/uranium-from-seawater-on-la...
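
A quick check of the decay arithmetic in footnote [a] (half-lives quoted from memory, so treat the output as approximate):

    # Run the decay clock backwards: the U-235 fraction t billion years ago,
    # from today's ~0.72% U-235 and half-lives of ~0.704 Gyr (U-235) and
    # ~4.468 Gyr (U-238).
    def u235_fraction(t_gyr):
        ratio_now = 0.0072 / 0.9928  # U-235 : U-238 today
        ratio = ratio_now * 2 ** (t_gyr / 0.704 - t_gyr / 4.468)
        return ratio / (1 + ratio)

    for t in (0, 1.7, 4.5):
        print(t, f"{u235_fraction(t):.1%}")
    # ~2.9% at 1.7 Gyr ago (enough to run the natural Oklo reactor) and
    # ~23% at Earth's formation, above the 20% "highly enriched" threshold.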

[+] angersock|14 years ago|reply
"Perhaps it's lack of knowledge on my part, but I don't see what will stop atomic bombs from being as common as handguns in 30-100 years. Fissile material is ubiquitous; there's no barrier but economics and engineering, of exactly the sort near-term AI could unpredictably disrupt."

You guess right--it's your lack of knowledge.

Fissile material in raw form is not useful for making nuclear weapons (beyond the dirty-bomb variety). So far, weapons programs have converged on uranium and plutonium, and while uranium at least is relatively common in the Earth, the processing and dredging required to get a useful amount of it, and then the enrichment required to concentrate the useful isotopes, is nontrivial.

Engineering and fast computers are great and all, and will get you arbitrarily close to the physical limits--but we're there right now, and physics says you aren't getting a centrifuge with meaningful output in your garage.

More troubling is the fact that, frankly, we've had the technology to build sports cars in our homes for decades and decades now. It's called buying a lathe, an arc welder, a mill, a forge. Despite the cheap availability of these things to the general populace, home machining is not widespread--there are damned few practitioners, even with this magical Internet thing telling you how to do all of it--and it will only get rarer as people focus on cat pictures and tweeting.

We aren't worthy of a singularity as a culture.

EDIT:

Thank you for attaching some interesting facts. Allow me to do the same:

The useful fissile uranium isotope (U-235) accounts for about 0.72% of naturally occurring uranium (http://web.ead.anl.gov/uranium/guide/facts/).

Your hiking source claims uranium metal, but points out that it is completely locked up in slag and that the approach seems tractable only at large scales.

Even allowing for that, the amount produced is negligible.

That's just for metal--we aren't even talking about a usable isotope yet.

Take 7/1000 of that result from the blogger, and wave a wand to make it pure enough to use.

Now collect the (at least) several pounds of that needed to make a functioning device. Now do the machining on the rest of the device to make it function correctly (have fun with the beryllium dust, if you go that route). Now do the timing electronics, and the charge shaping (if you go that route).

This.

Is.

Not.

Garage.

Technology.

[+] khafra|14 years ago|reply
Yup. A singleton superintelligent friendly AI is pretty much our only chance, and we have no idea how to build one.
[+] gwillis13|14 years ago|reply
Oh... I do love everyone's opinions on this topic. It's interesting, but I think everyone can agree that "human evolution is a train with no tracks". Thinking there is a formula humans could use to solve its equation is pretty amusing.
[+] alexro|14 years ago|reply
Maybe. On the other hand, which risks do we estimate properly? What if humans are incapable of properly estimating risks, especially risks related to themselves?

It all comes down to the ability to predict the future, and we are really bad at it.

[+] pjscott|14 years ago|reply
Human extinction sounds a lot less plausible since it's never happened before.
[+] shingen|14 years ago|reply
I'm always fascinated by the extremely long term horizon concerns that people bring up in discussions about the future of humanity.

200 to 300 years out, there is no humanity as we know it today. Whatever we are at that juncture, it won't be "human" as we now define it. Our self-directed evolution will have long since taken over, and it's accelerating at an extraordinary clip. You can debate the merits of that, or the details of it, but it's happening either way.

In the next two or three decades we'll have begun to completely take over the genetic alteration / evolution / improvement of our species. Within a few decades after that, we'll be severely altering what we are. Within 150 or 200 years, it'll be very hard for the humans of that era to relate to their ancestors from 2012.

Concerns about asteroids or global warming and so on are moot. We won't be here as a species pumping CO2 into the atmosphere or waiting around helplessly for a rock to crash into the surface. It's not an issue of if; it's just an issue of how long it takes and how many competing models of our self-directed evolution become options.

The sole threat to that future is a super virus / disease, most likely man-made. It's the only thing that could stop our evolution and wipe out our species within that couple-hundred-year time frame. Even nuclear war isn't a threat to species survival: you could detonate thousands of nukes simultaneously and it wouldn't come close to killing us off.

[+] angersock|14 years ago|reply
I really wish that I could share your optimism.

Claims about our "self-directed evolution" seem either to define it so loosely as to include purely social conventions or uses of technology (we keep our information in the twittersphere!), or to assume some magic breakthrough in bioengineering that overnight allows us to start jailbreaking our genome.

The sad, sorry fact of the matter is that in the time it takes to accomplish that, we could blow ourselves up.

Or, we could blow up enough of ourselves to render that future unachievable. The nations capable of carrying out this research, and of supporting the societies that sustain this sort of thing logistically and culturally, are very precariously positioned. As a thought exercise, consider how far the food on your table had to travel, how many things had to work correctly for that to happen, and how fucked you are if they don't (and how tricky it would be to find a replacement, and how many other people would be doing the same).

It would not be unreasonable to believe that a collapse of civilization could set back your scheme several hundred years--for example, consider how difficult it would be to get back to a point where you could use existing machines, much less make new ones. A semiconductor foundry--required for computers, in turn required for any meaningful engineering these days--would be practically unattainable in dire times, even if you find people alive who still knew how to run it!

Or, even more likely, nothing goes wrong, and simple social forces produce stagnation even with advanced bioengineering. Read "The Calorie Man" by Paolo Bacigalupi for a recent take on this. There is no reason to expect that we're going to fare any better.

So, no, this isn't an unreasonable pessimism on the rest of our parts.

[+] ken|14 years ago|reply
> The sole threat to that future is super virus / disease, most likely man-made.

I remember reading somewhere (maybe one of Cliff Stoll's books?) that the author, a computer person, thought that a biological virus was the greatest threat to humanity. He mentioned this to a biologist friend of his, who assured him that the popular doomsday scenario is essentially impossible, and that his greatest fear was a computer virus!

[+] shiny|14 years ago|reply
Tim Ferriss, on the Joe Rogan podcast [1], predicts a pandemic in the near future (around 28:45). Basically, he knows a guy who runs a biotech company who claims that, if they wanted to, they could engineer a virus within six months that could wipe out humanity.

I wouldn't say that it's the sole threat, however. Yudkowsky seems to take the threat of runaway AI quite seriously. These are crucial times.

[1] http://blog.joerogan.net/archives/3531

[+] nollidge|14 years ago|reply
I think you vastly overestimate our ability to understand genetics. The genome (any species' genome) evolved so haphazardly, and with such deep and interconnected causal spiderwebs, that I think it's going to take centuries to understand it well enough to intentionally influence it more than genetic drift and selection currently do.

NINJA EDIT: Regardless, the certainty with which you seem to regard your claims is hardly justified (unless you've got some concrete evidence you can share). And if you're wrong, we'll wish we had been thinking about these things all along.

[+] locopati|14 years ago|reply
On what basis do you make this statement: "Even nuclear war isn't a threat to species survival, you could detonate thousands of nukes simultaneously and it wouldn't come close to killing us off"?
[+] bdunbar|14 years ago|reply
> Concerns about asteroids or global warming and so on are moot. We won't be here as a species pumping CO2 into the atmosphere or waiting around helplessly for a rock to crash into the surface.

Unless a rock the size of Texas hits tomorrow.

Extinction only a generation before it's possible to avoid it.

It would be ironic except there won't be anyone left to appreciate the irony.

[+] hristov|14 years ago|reply
Wow an entire article about human extinction without a single mention of global warming or climate change. The Atlantic are such reliable whores.
[+] pjscott|14 years ago|reply
We could totally survive any climate change that we've ever seen in the geological record. Even if famine claims billions of lives, it wouldn't be a species-ending event. Bostrom is worried about things that have considerably more destructive potential.