
yblu | 2 years ago

Okay, maybe I shouldn't have mentioned the worst possible outcome. Let's use the words of Sam Altman: the risk here is "lights out for all of us", and let's just assume that means we would still live, just in darkness. Or whatever plausible bad-case outcome you can imagine. Do you see that any negative outcome is possible at all? If you do, would you at least be cautious so that we could avoid such an outcome? That is the behavior I would expect from leading AI scientists, and yet...


woodruffw | 2 years ago

All kinds of negative outcomes are possible, at all times. What matters is their probability.

If you (or anyone else) can present a well-structured argument that AI presents, say, a 1-in-100 existential risk to humanity in the next 500 years, then you'll have my attention. Without those kinds of numbers, there are substantially more likely risks that have my attention first.

GivinStatic | 2 years ago

Shouldn't uncharted territory come with a risk multiplier of some kind? Currently it's an estimation at best: maybe 1-in-20, maybe 1-in-a-million, in the next 2 years. The OP's point in this thread still stands, scientists shouldn't be so confident.