VieEnCode | 1 year ago
In summary, he feels that the focus on sci-fi-style existential risk is a deliberate distraction from the AI industry's current and very real legal and ethical harms: e.g. scraping copyrighted content for training without paying or crediting creators, failing to protect people harmed by tools misused to create deepfake porn, the crashes and deaths attributed to Tesla's self-driving mode, AI resume-screening bots making flawed decisions, etc.
DennisP | 1 year ago
And it seems to me that if the AI industry wanted to distract us from its harms, it would give us optimistic scenarios: "Sure, these are problems, but it will be worth it because AI will give us utopia." That would be an argument for pushing forward with AI.
Instead we're getting "oh, you may think we have problems now, but that's nothing; a few years from now it's going to kill us all." Um, OK, I guess full steam ahead then? If this is a marketing campaign, it's the worst one in history.
jalman | 1 year ago
exe34 | 1 year ago
Nah, it has to appear plausible.
PROMISE_237 | 1 year ago
[deleted]
hoseja | 1 year ago
hifromwork | 1 year ago