(no title)
mikea1 | 2 years ago
For example, perhaps the lesser-evil argument played a role in the decision process: would a world where deep fakes are ubiquitous and well-known by the public be better than a world where deep fakes have a potent impact because they are generated rarely and strategically by a handful of (nefarious) state sponsors?
parpfish | 2 years ago
if we build AI AND THEN we give it a stupid goal to optimize AND THEN we give it unlimited control over its environment, something bad will happen.
the conclusion is always "building AI is wrong" and never "giving AI unrestricted control of critical systems is wrong"
olddustytrail | 2 years ago
Replace the word "we" with "a psychotic group of terrorists" in your post and see how it reads.
mikea1 | 2 years ago