
mikea1 | 2 years ago

Another explanation is that there are those who considered and thoughtfully weighed the ramifications, but came to a different conclusion. It is unfair to assume a decision process was agnostic to harm or plain ignorant.

For example, perhaps the lesser-evil argument played a role in the decision process: would a world where deep fakes are ubiquitous and well known to the public be better than a world where deep fakes have a potent impact because they are generated seldom and strategically by a handful of (nefarious) state sponsors?


parpfish | 2 years ago

there's also the issue that most of the AI catastrophizing is a pretty clear slippery-slope argument:

if we build ai AND THEN we give it a stupid goal to optimize AND THEN we give it unlimited control over its environment, something bad will happen.

the conclusion is always "building AI is wrong" and not "giving AI unrestricted control of critical systems is wrong".

olddustytrail | 2 years ago

The massive flaw in your argument is your failure to define "we".

Replace the word "we" with "a psychotic group of terrorists" in your post and see how it reads.

mikea1 | 2 years ago

I completely agree that's a valid argument. I just think it is also rational for someone to come to a different conclusion, given identical priors.