b1daly | 2 years ago
the basis of this claim seems to be a confusion of logical or deductive reasoning with inductive or observational reasoning
the argument comes down to:
- it’s possible to imagine a super intelligent machine that has properties that will kill everyone (this is an exercise in logical reasoning)
- since it’s possible to imagine it, this means it will come into existence — this is an error because things that exist in the real, physical world do so based on physical processes governed by inductive reasoning
generally, there is a long series of steps between the imagining of some constructed, complex machine (along with its conceptual foundations) and its realization. it requires sustained effort, trial and error, and maintenance: generally a serious fight against entropy to make it function and keep it functioning
the sort of out of control AI imagined by AI doomers is not something we’ve seen before
so we shouldn’t make costly decisions based upon this confusion of reasoning
p-e-w | 2 years ago
Nope. That's not the argument. In fact, it's such a bad take that it reeks of a deliberately constructed strawman.
The actual argument is: Since it's possible to imagine it, and doesn't contradict any known laws of nature or technology, and current development appears to be iterating towards it, it might come into existence, thus it presents a statistical risk.
When I take out tornado insurance, it's not because I know my house will be blown away by a storm – it's because I don't know, but the possibility is there.
Certainty is not required in order to conclude that risk exists. Quite the opposite is true: Risk is a function of uncertainty.
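The insurance analogy is just expected-value reasoning: a low-probability, high-loss event still carries a nonzero expected cost, so paying a bounded premium can be rational without any certainty that the event will occur. A minimal sketch (all numbers are made up for illustration):

```python
def expected_loss(probability: float, loss: float) -> float:
    """Expected annual cost of an uncertain catastrophic event."""
    return probability * loss

# Hypothetical figures: a 0.1% yearly chance of losing a $300,000 house.
p_tornado = 0.001
house_value = 300_000

risk = expected_loss(p_tornado, house_value)

# A premium in the neighborhood of the expected loss is rational even
# though the event itself is unlikely and its occurrence is unknowable.
premium = 400
print(f"expected annual loss: ${risk:.2f}, premium: ${premium}")
```

The point the sketch makes concrete: the decision input is `probability * loss`, not certainty that the loss will happen.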