item 47157396

jerf | 4 days ago

Some of the most reassuring and scariest things you can read are about the incidents that have already occurred where computers said "launch all the nukes" and the humans refused. On the one hand, good news! We have prior art that says humans don't just launch all the nukes just because the computers or procedures say to. Bad news, it's been skin-of-our-teeth multiple times already.

https://www.warhistoryonline.com/cold-war/refused-to-launch-... - This isn't even the incident I was searching for to reference! This one was news to me.

https://en.wikipedia.org/wiki/Stanislav_Petrov#Incident - This is the one I was looking for.


blibble | 4 days ago

> We have prior art that says humans don't just launch all the nukes just because the computers or procedures say to.

previously no-one had spent trillions of dollars trying to convince the world that those computers were "Artificial Intelligence"

Barrin92 | 4 days ago

Of course they did. That's the literal topic of WarGames (1983). You should actually be somewhat reassured that we aren't living in the era of Dr. Strangelove, when there were characters in the military-industrial complex with significantly more insane beliefs about what computer systems and nukes could do.

There was a time when people wanted to dig tunnels with nukes https://en.wikipedia.org/wiki/Project_Plowshare

nine_k | 4 days ago

They had to do with "state-of-the-art radars", "military-grade communication systems", etc.

escapecharacter | 4 days ago

Or "alignment" which means "let's ensure the AIs recommend launching nukes only when it makes sense to, based on our [assumed objective] values"

rurp | 4 days ago

Yeah... the more I learn about nuclear weapon history, the more I discount our society's long-term viability. There have been way too many frighteningly close calls already, and there are probably others that aren't widely known.

It's not just nukes that are concerning either. If we're unable to mitigate such a visceral existential risk, we aren't going to do any better with more subtle vulnerabilities. AI, of course, accelerates some risks and introduces new ones.

This doesn't mean we're doomed or anything, but if I had a magic portal to peer a few hundred years into the future and saw that humans had been obliterated by nukes, runaway AI, some engineered supervirus, runaway climate change, or some other manufactured risk, I would be completely unsurprised.

badRNG | 4 days ago

We shouldn't be the least bit surprised no human has complied so far.

If they had, then we wouldn't be having this conversation. For all we know, there may be a vast multiverse of universes, some with humans, and we would only find ourselves having this conversation in one of the universes where no human pressed the button.

thfuran | 4 days ago

By that logic, it may actually be pretty common for rabbits to swallow the sun. We just haven't seen it happen because we're in the wrong universe and would've died if it happened in ours.

paxys | 4 days ago

> We have prior art that says humans don't just launch all the nukes just because the computers or procedures say to.

This relies on processes being in place to ensure that a human will always make the final decision. What about when that gets taken away?

trehalose | 4 days ago

I find it hard to imagine that the people in a position to kill those processes could ever be that zealously in love with AI, but recent events have given me a tiny bit of doubt.

ge96 | 4 days ago

I briefly went down a "rabbithole" of watching videos about attempts to intercept ballistic missiles and hypersonic glide weapons. Pretty interesting (decoys deployed in space...), but the outcome seemed to be not good: 100% interception can't be guaranteed.

compass_copium | 4 days ago

A missile will always be cheaper than a missile interceptor, and the interceptor will never be a 1:1 kill. Building a missile interceptor system is a good way to get your strategic opponent to build a bigger stockpile.

flr03 | 4 days ago

I hope humans in charge are as wise now as they were then.

phs318u | 4 days ago

Surely that’s the definition of a quixotic hope.