ADeerAppeared | 1 year ago

> It feels like they always anthropomorphize AI as some sort of "God".

It's not like that. It is that. They're playing Pascal's Wager against an imaginary future god.

The most maddening part is that the obvious problem with that reasoning has already been identified within those same circles and dubbed "Pascal's Mugging", yet they keep rambling on about "extinction risk" whilst disregarding the very material ongoing harms AI is causing.

They're all clowns whose opinions are to be immediately discarded.

duvenaud|1 year ago

Which material ongoing issues are we ignoring? The paper is mainly talking about how the mundane problems we're already starting to have could lead to an irrecoverable catastrophe, even without any sudden betrayal or omnipotent AGI.

So I think we might be on the same side on this one.

alephnerd|1 year ago

Yep. And it's annoying. So many cycles are being wasted essentially thinking about bad science fiction.

The same effort could be expended on plenty of other problems that are unsolved.

cyrillite|1 year ago

Can you explain how Pascal's mugging functions with respect to risk rather than reward?

ADeerAppeared|1 year ago

In what sense?

The "Mugging" going on is that "AI safety" folks proclaim that AI might have an "extinction risk" or infinite-negative outcome.

And they proclaim that, therefore, we should devote considerable resources (on the scale of billions) to avoiding it, even if the actual chance of the scenario is anywhere from negligible to astronomically small. "ChatGPT won't kill us now, but in 1000 years it might" kinda shit. For some this ends with "and therefore you need to approve my research funding application"; for others (including Altman) it has mutated into "We must build AGI first, because we're the only people who can do it without destroying the world".
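The "mugging" structure described above is just naive expected-value arithmetic. As a rough sketch (all the numbers below are invented for illustration, not taken from anyone's actual risk estimates), the trick is that the claimed loss can always be inflated faster than its probability can be discounted:

```python
# Illustrative sketch of Pascal's Mugging via naive expected value.
# Every figure here is made up purely for illustration.

def expected_value(outcomes):
    """Sum of probability-weighted payoffs over (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Option A: pay a large, certain cost now (say, billions on averting the threat).
pay_up = [(1.0, -1e9)]

# Option B: refuse, accepting a claimed astronomically small chance
# of an astronomically bad outcome (extinction-scale loss).
refuse = [(1e-12, -1e30), (1 - 1e-12, 0.0)]

# Naive EV says paying is "better" no matter how implausible the threat,
# because the mugger can always name a bigger loss than you can discount.
print(expected_value(pay_up) > expected_value(refuse))  # True
```

Because the claimed loss is unbounded while the certain cost is fixed, this comparison comes out in the mugger's favour for any nonzero probability, which is exactly why the argument pattern is considered broken.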

The problem is that this is absurd. They're focussing on a niche scenario whilst ignoring horrific problems caused in the here and now.

"Skynet might happen in Y3K" is no excuse to flood the current internet with AI slop, create a sizeable economic bubble, seek to replace entire economic sectors with outsourced "Virtual" employees, and perhaps most ethically concerning of all: create horrific CSAM torment nexuses where even near-destitute gig economy workers in Kenya walk out of the job.

Yet "AI safety" folks would have you believe so.