(no title)
ADeerAppeared | 1 year ago
The "Mugging" going on is that "AI safety" folks proclaim that AI might have an "extinction risk" or infinite-negative outcome.
And they proclaim that therefore we should be devoting considerable resources (on the scale of billions) to avoiding it, even if the actual chance of this scenario is minimal to astronomically small. "ChatGPT won't kill us now, but in 1000 years it might" kinda shit. For some this ends with "and therefore you need to approve my research funding application"; for others (including Altman) it has mutated into "we must build AGI first, because we're the only people who can do it without destroying the world".
The problem is that this is absurd. They're focussing on a niche scenario whilst ignoring horrific problems caused in the here and now.
"Skynet might happen in Y3K" is no excuse to flood the current internet with AI slop, create a sizeable economic bubble, seek to replace entire economic sectors with outsourced "Virtual" employees, and perhaps most ethically concerning of all: create horrific CSAM torment nexuses where even near-destitute gig economy workers in Kenya walk out of the job.
Yet "AI safety" folks would have you believe so.
tim333 | 1 year ago
SMBC is quite funny on the AI risks, e.g. https://www.smbc-comics.com/comic/signal-2
ADeerAppeared | 1 year ago
It's called absurd because it does not deserve to be humoured with the effort of writing out counterarguments.
> Here's some names in the field. 15/19 think the risk is significant
A list that is largely a pile of clowns and morons, many with direct financial interests in amplifying the "danger"/power of AI.
This is why the doomsday cult is not taken seriously.