yblu|2 years ago
What I found disheartening was that many of those scientists, especially those in the "nothing to worry about" camp, seemed not to entertain the thought that they could be wrong, considering the scale of the matter, i.e. human extinction. If there's a chance AI poses an existential threat to us, even if it is 0.00000001% (I made that up), shouldn't they at least be a bit more humble? This is an uncharted domain, and I find it incredible that many talk like they already have all the answers.
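To put that intuition in expected-value terms, here's a back-of-the-envelope sketch; both inputs are illustrative assumptions (the probability is the made-up figure above, the population a round approximation):

```python
# Back-of-the-envelope expected-value calculation.
# Both inputs are illustrative assumptions, not estimates.
p_extinction = 1e-10       # the 0.00000001% figure invented above
world_population = 8e9     # roughly 8 billion people

expected_deaths = p_extinction * world_population
print(f"expected deaths: {expected_deaths:.2f}")  # prints 0.80
```

Even at deliberately absurd odds, the stakes term is so large that the product doesn't round to zero, which is the sense in which the scale of the matter should buy some humility.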
akiselev|2 years ago
Wrong about nuclear proliferation and MAD game theory? Human extinction. Wrong about plasticizers and other endocrine disruptors, leading to a Children of Men scenario? Human extinction. Wrong about the risk of asteroid impact? Human extinction. Climate change? Human extinction. Gain of function zombie virus? Human extinction. Malignant AGI? ehh... whatever, we get it.
It's like the risk of driving: yeah it's one of the leading causes of death but what are we going to do, stay inside our suburban bubbles all our lives, too afraid to cross a stroad? Except with AI this is all still completely theoretical.
onethought|2 years ago
- Nuclear war: Northern Hemisphere is pretty fucked. But life goes on elsewhere.
- Plasticisers: We have enough science to pretty much do what we like with fertility these days. So it's catastrophic but not extinction.
- Climate Change: Life gets hard, but we can build livable habitats in space... pretty sure we can manage a harsh earth climate. Not extinction.
- Deadly virus: Wouldn't be the first time, and we're still here.
- Asteroid impact: Again, ALL human life, globally? Somehow birds survived the meteor that killed the dinosaurs; I'm sure we'd find a way.
- Completely made-up evil AI: Well, we'd torch the sky, be turned into batteries, but then be freed by Keanu Reeves... or a time-traveling John Connor. (Sounds like I'm being ridiculous, but ask a stupid question...)
woodruffw|2 years ago
Sensible action here requires sensible numbers: it's not enough to claim existential risk on extraordinary odds.
onethought|2 years ago
There is literally no evidence that this is the scale of the matter. Has AI ever caused anything to go extinct? Where did this hypothesis (and that's all it is) come from? Terminator movies?
It's very frustrating watching experts and the literal founder of lesswrong reacting to pure make-believe. There is no discernible/convincing path from GPT4 -> Human Extinction. What am I missing here?
kristiandupont|2 years ago
The path is pretty clear to me. An AI that can recreate an improved version of itself will cause an intelligence explosion. That is a mathematical tautology, though it could turn out to plateau at some point due to physical limitations or whatever. And the situation then becomes: at some point, this AI will be smarter than us. And so, if it decides that we are in the way for one reason or another, it can get rid of us, and we would have as much chance of stopping it as chimpanzees would of stopping us if we decided to kill them off.
We do not, I think, have such a thing at this point, but it doesn't feel far off given the coding capabilities that GPT4 has.
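The explosion-vs-plateau distinction is really a claim about whether per-generation gains stay constant or decay. A toy sketch (the improvement factors are arbitrary illustrations, not a model of any real system):

```python
# Toy model of recursive self-improvement: capability is multiplied
# by an improvement factor each generation. Constant gains compound
# into an "explosion"; decaying gains converge to a finite plateau.
def simulate(factor_fn, generations=20, capability=1.0):
    for g in range(generations):
        capability *= factor_fn(g)
    return capability

explosive = simulate(lambda g: 1.5)                      # constant 50% gains
plateauing = simulate(lambda g: 1 + 0.5 / (g + 1) ** 2)  # gains shrink quadratically

print(f"constant gains: {explosive:.3g}")    # ~3.33e+03 and still growing
print(f"decaying gains: {plateauing:.3g}")   # levels off near a finite limit
```

Whether real systems look more like the first curve or the second is exactly the open question.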
atq2119|2 years ago
We know from human history that intelligence tends to cause extinctions.
AI just hasn't been around long enough, nor been intelligent enough yet.
Though, if you count corporations as artificial intelligences, as some suggest, then yes, AIs have in fact already contributed to extinctions.
transcriptase|2 years ago
At least we can take comfort in the fact that if an AI takes us out, one of the aforementioned will avenge us and destroy the AI too on a long enough time scale.