arcanus | 9 months ago

> And when I reach the part where the AI, having copied itself all over the Internet and built robot factories, then invents and releases self-replicating nanotechnology that gobbles the surface of the earth in hours or days, a large part of me still screams out that there must be practical bottlenecks that haven’t been entirely accounted for here.

This is the crux of the issue. There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities, leaps that are explained away by treating the 'singularity' as essentially magic. The entire approach is a doomed version of deus ex machina.

It also seems quite telling that the traditional doom scenarios focus on exotic technologies, such as nanotech, rather than existing ones like ICBMs. That's magical thinking as well.

api | 9 months ago

We spent trillions over the past century building doomsday machines -- hydrogen bombs and ICBMs -- literally designed to destroy humanity, as part of the MAD deterrence strategy in the Cold War. That stuff is largely still out there. If anything suddenly kills humanity, it's high on the list of possibilities.

The other huge existential risk is someone intentionally creating a doomsday bug. Think airborne HIV with a long incubation period, or an airborne cancer-causing virus: something that would spread far and wide, cause enough debilitation and death to collapse civilization, and then continue to hang around and kill people post-collapse (with no health care) to the point that the human race is in long-term danger of extinction.

Both of those are plausible enough that the best explanation for why they haven't happened yet is "nobody with the means has been that evil yet."

hollerith | 8 months ago

>There's simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities

Haven't there already been a couple of massive leaps in AI capabilities (AlexNet in 2012, then transformers in 2017)?

Is it not the publicly-stated goal of the leaders of most of the AI labs to make further massive leaps?

Isn't drastic improvement exactly what happens in fields that humanity is just starting to understand?

Wasn't there, for example, a drastic improvement in humanity's ability to manufacture things starting around 1750 (which led to a massive increase in fossil-fuel use, which in turn led to climate change and other adverse effects like "killer smog")?