yblu|2 years ago

What I found disheartening was that many of those scientists, especially those in the "nothing to worry about" camp, seemed not to entertain the thought that they could be wrong, considering the scale of the matter, i.e. human extinction. If there's any chance AI poses an existential threat to us, even if it is 0.00000001% (I made that up), shouldn't they be at least a bit more humble? This is uncharted territory, and I find it incredible that many talk like they already have all the answers.

akiselev|2 years ago

Meh. Add it to the pile. The world-ending risks we could be worrying about at this point are piling up, and AI exterminating us is far from the top concern, especially when AI may be critical to solving many of the other problems that are.

Wrong about nuclear proliferation and MAD game theory? Human extinction. Wrong about plasticizers and other endocrine disruptors, leading to a Children of Men scenario? Human extinction. Wrong about the risk of asteroid impact? Human extinction. Climate change? Human extinction. Gain of function zombie virus? Human extinction. Malignant AGI? ehh... whatever, we get it.

It's like the risk of driving: yeah, it's one of the leading causes of death, but what are we going to do, stay inside our suburban bubbles all our lives, too afraid to cross a stroad? Except with AI, this is all still completely theoretical.

whimsicalism|2 years ago

I think almost none of the scenarios you've named, outside of the asteroid and the AGI, would result in complete human extinction. A very bad MAD breakdown could potentially also lead to it, but the research here is legitimately mixed.

yblu|2 years ago

You disagreed with me, but at least you acknowledged there was a risk, even though we could disagree about the odds or the potential impact. Yet folks like Yann LeCun ridiculed anyone who thought there was a risk AI could endanger us or harm our way of life. What do we know about experts who are always confident (usually on TV) about things that haven't happened yet?

onethought|2 years ago

Yes, and none of those (including AI) are even human extinction events.

- Nuclear war: Northern Hemisphere is pretty fucked. But life goes on elsewhere.

- Plasticisers: We have enough science to pretty much do what we like with fertility these days. So it's catastrophic but not extinction.

- Climate change: Life gets hard, but if we can build livable habitats in space, I'm pretty sure we can manage a harsh Earth climate. Not extinction.

- Deadly virus: Wouldn't be the first time, and we're still here.

- Asteroid impact: Again, ALL human life, globally? Somehow birds survived the meteor that killed the dinosaurs; I'm sure we'd find a way.

- Completely made-up evil AI: Well, we'd torch the sky, be turned into batteries, but then be freed by Keanu Reeves... or a time-traveling John Connor. (Sounds like I'm being ridiculous, but ask a stupid question...)

namaria|2 years ago

Humans have a start in time and will have an end. I was born and I will die. I don't know why we're so obsessed with this. We will most definitely cease existing soon on a geological/cosmic timescale. Doesn't matter.

woodruffw|2 years ago

There's a nonzero chance that the celery in my fridge is harboring an existentially virulent and fatal strain of E. coli. At the same time, it would be completely insane for me to autoclave every vegetable that enters my house.

Sensible action here requires sensible numbers: it's not enough to claim existential risk on extraordinarily small odds.
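
To put numbers on that (a back-of-envelope sketch only, plugging in the admittedly made-up 0.00000001% figure from the top comment and a world population rounded to 8 billion):

    # Expected toll at the made-up odds from the top comment.
    # All inputs are illustrative, not estimates of anything real.
    p_extinction = 0.00000001 / 100      # 0.00000001% as a probability = 1e-10
    world_population = 8_000_000_000
    expected_deaths = p_extinction * world_population
    print(expected_deaths)               # 0.8 -- under one statistical life

At those odds the expected loss is smaller than everyday risks we routinely accept, which is the point: the whole argument turns on what the real probability actually is.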

yblu|2 years ago

Okay, maybe I shouldn't have mentioned the worst possible outcome. Let's use the words of Sam Altman: the risk here is "lights out for all of us", and let's just assume that means we would still live, just in darkness. Or whatever plausible bad-case outcome you could imagine. Do you see any negative outcome as possible at all? If you do, would you at least be cautious so that we could avoid such an outcome? That would be the behavior I expect to see in leading AI scientists, and yet...

onethought|2 years ago

> considering the scale of the matter, i.e. human extinction.

There is literally no evidence that this is the scale of the matter. Has AI ever caused anything to go extinct? Where did this hypothesis (and that's all it is) come from? Terminator movies?

It's very frustrating watching experts and the literal founder of LessWrong reacting to pure make-believe. There is no discernible/convincing path from GPT-4 -> human extinction. What am I missing here?

kristiandupont|2 years ago

Nuclear bombs have also never caused anything to go extinct. That's no reason not to be cautious.

The path is pretty clear to me. An AI that can recreate an improved version of itself will cause an intelligence explosion. That is a mathematical tautology, though it could turn out that it would plateau at some point due to physical limitations or whatever. And the situation then becomes: at some point, this AI will be smarter than us. And so, if it decides that we are in the way for one reason or another, it can decide to get rid of us, and we would have as much chance of stopping it as chimpanzees would of stopping us if we decided to kill them off.

We do not, I think, have such a thing at this point, but it doesn't feel far off given the coding capabilities that GPT-4 has.
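
A toy sketch of that explosion-vs-plateau fork (all numbers made up, not a model of any real system): whether compounding self-improvement runs away or levels off depends entirely on how fast the per-generation gains decay.

    # Capability compounds each generation; gain(n) is the fractional
    # improvement the n-th generation achieves over the previous one.
    def run(gain, generations=50):
        capability = 1.0
        for n in range(1, generations + 1):
            capability *= 1.0 + gain(n)
        return capability

    # Constant 10% gain per generation: geometric growth, no plateau.
    explosion = run(lambda n: 0.10)       # ~117x after 50 generations

    # Gains decaying like 1/n^2 (say, physical limits kicking in):
    # the infinite product converges, so capability plateaus (~3.7x).
    plateau = run(lambda n: 1.0 / n**2)

    print(f"constant gains: {explosion:.1f}x, decaying gains: {plateau:.1f}x")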

atq2119|2 years ago

> Has AI ever caused anything to go extinct?

We know from human history that intelligence tends to cause extinctions.

AI just hasn't been around long enough, nor been intelligent enough yet.

Though, if you count corporations as artificial intelligences, as some suggest, then yes, AIs have in fact already contributed to extinctions.

transcriptase|2 years ago

Debatable, since there are plenty of other unavoidable existential threats that are far more likely than the best estimates of AI wiping us out, e.g. supervolcano eruption, massive solar flare, asteroid impact, some novel virus.

At least we can take comfort in the fact that if an AI takes us out, one of the aforementioned will avenge us and destroy the AI too on a long enough time scale.

namaria|2 years ago

I find it striking that we have a rich cultural tradition of claiming we're artificial beings. Maybe we're building a successor lifeform... I've thought about this as a story premise: humans and robots are two stages of a lifecycle. Humans flourish in a planetary ecosystem, then build robots that go on to colonize new systems, where they seed humans because (reason I haven't been able to formulate).