yblu|2 years ago
Okay, maybe I shouldn't have mentioned the worst possible outcome. Let's use Sam Altman's words instead: the risk here is "lights out for all of us", and let's just assume that means we would still live, only in darkness. Or substitute whatever plausible bad outcome you can imagine. Do you see that any negative outcome is possible at all? If you do, would you at least be cautious enough that we could avoid such an outcome? That is the behavior I would expect from leading AI scientists, and yet...
woodruffw|2 years ago
If you (or anyone else) can present a well-structured argument that AI presents, say, a 1-in-100 existential risk to humanity in the next 500 years, then you'll have my attention. Without those kinds of numbers, there are substantially more likely risks that have my attention first.
GivinStatic|2 years ago