ADeerAppeared|1 year ago
It's not like that. It is that. They're playing Pascal's Wager against an imaginary future god.
The most maddening part is that the obvious problem with that reasoning has already been identified within those very circles, where it was dubbed "Pascal's Mugging", yet they're still rambling on about "extinction risk" whilst disregarding the very material, ongoing harms AI causes.
They're all clowns whose opinions are to be immediately discarded.
duvenaud|1 year ago
So I think we might be on the same side on this one.
alephnerd|1 year ago
The same effort could be expended on plenty of other problems that are unsolved.
cyrillite|1 year ago
ADeerAppeared|1 year ago
The "Mugging" going on is that "AI safety" folks proclaim that AI might pose an "extinction risk", i.e. an outcome of effectively infinite negative value.
And they proclaim that therefore, we should be devoting considerable resources (i.e. on the scale of billions) to avoiding that even if the actual chance of this scenario is minimal to astronomically small. "ChatGPT won't kill us now, but in 1000 years it might" kinda shit. For some this ends with "and therefore you need to approve my research funding application", for others (including Altman) it has mutated into "We must build AGI first because we're the only people who can do it without destroying the world".
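The expected-value arithmetic behind that move can be sketched in a few lines. The numbers below are purely illustrative (not anyone's actual estimates): the point is that once you allow an unbounded disutility, even an astronomically small probability makes any finite spend look "rational".

```python
# Illustrative sketch of the Pascal's Mugging structure described above.
# All figures are made up for illustration.

def expected_value_of_prevention(p_doom, disutility, cost_of_prevention):
    """Net EV of paying the prevention cost vs. doing nothing.

    Assumes (generously) that paying the cost averts the outcome entirely.
    A positive result means the expected-value calculus says "pay up".
    """
    ev_do_nothing = p_doom * disutility   # tiny probability, huge negative
    ev_prevent = -cost_of_prevention      # certain, finite loss
    return ev_prevent - ev_do_nothing

# An astronomically small probability...
p = 1e-15
# ...times an unbounded "extinction" disutility...
doom = -1e30
# ...justifies an arbitrarily large spend (billions look cheap):
print(expected_value_of_prevention(p, doom, 1e9) > 0)  # True
```

Cap the disutility at something finite (or discount probabilities below some threshold) and the same arithmetic flips sign, which is exactly the standard response to the mugging.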
The problem is that this is absurd. They're focussing on a niche scenario whilst ignoring horrific problems caused in the here and now.
"Skynet might happen in Y3K" is no excuse to flood the current internet with AI slop, create a sizeable economic bubble, seek to replace entire economic sectors with outsourced "virtual" employees, and, perhaps most ethically concerning of all, create horrific CSAM content-moderation torment nexuses where even near-destitute gig economy workers in Kenya walk off the job.
Yet "AI safety" folks would have you believe it is.