top | item 43901902

thurn | 10 months ago

Which of these statements do you disagree with?

- Superintelligence poses an existential threat to humanity

- Predicting the future is famously difficult

- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence

- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity we should make serious contingency plans.

Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.
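The expected-loss arithmetic behind the asteroid analogy can be sketched in a few lines. The probability and population figures here are illustrative assumptions, not claims from the thread:

```python
# Expected-value sketch for the 1-in-1000 asteroid analogy.
# Both numbers below are assumptions chosen for illustration.
p_impact = 1 / 1000             # hypothetical chance of an extinction-level event
lives_at_stake = 8_000_000_000  # rough current world population

expected_lives_lost = p_impact * lives_at_stake
print(f"{expected_lives_lost:,.0f}")  # 8,000,000 lives lost in expectation
```

Even at 1-in-1000 odds, the expected loss is in the millions, which is the intuition behind treating small-probability existential risks seriously.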


tsimionescu|10 months ago

You could use the exact same argument to argue the opposite. Simply change the first premise to "Superintelligence is the only thing that can save humanity from certain extinction". Using the exact same logic, you'll reach the conclusion that not building superintelligence is a risk no sane person can afford to take.

So, since we've used the exact same reasoning to prove two opposite conclusions, it logically follows that this reasoning is faulty.

throw101010|10 months ago

That’s not how logic works. The GP is applying the precautionary principle: when there’s even a small chance of a catastrophic risk, it makes sense to take precautions, like restricting who can build superintelligent AI, similar to how we restrict access to nuclear technology.

Changing the premise to "superintelligence is the only thing that can save us" doesn’t invalidate the logic of being cautious. It just shifts the debate to which risk is more plausible. The reasoning about managing existential risks remains valid either way; the real question is which scenario is more likely, not whether the risk-based logic is flawed.

Just like with nuclear power, which can be both beneficial and dangerous, we need to be careful in how we develop and control powerful technologies. The recent deregulation by the US administration is an example of us currently doing the opposite.

voidspark|10 months ago

The best we can hope for is that Artificial Super Intelligence treats us kindly as pets, or as wildlife to be preserved, or at least not interfered with.

ASI is to humans as humans are to rats or ants.

geysersam|10 months ago

Isn't the question you're posing basically Pascal's wager?

I think the chance they're going to create a "superintelligence" is extremely small. That said, I'm sure we're going to have a lot of useful intelligence, but nothing general or self-conscious or powerful enough to be threatening for many decades, if ever.

> Predicting the future is famously difficult

That's very true, but that fact unfortunately can never be used to motivate any particular action, because you can always say "what if the real threat comes from a different direction?"

We can come up with hundreds of doomsday scenarios, most don't involve AI. Acting to minimize the risk of every doomsday scenario (no matter how implausible) is doomsday scenario no. 153.

nearbuy|9 months ago

> I think the chance they're going to create a "superintelligence" is extremely small.

I'd say the chance that we never create a superintelligence is extremely small. You either have to believe that the human brain somehow achieved the maximum intelligence possible, or that progress on AI will simply stop for some reason.

Most forecasters on prediction markets are predicting AGI within a decade.

polynomial|10 months ago

Yes, this is literally Pascal's wager / Pascal's mugging.

quietbritishjim|10 months ago

> Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence

I think you realise this is the weak point. You can't rule out the current AI approach leading to superintelligence. But you also can't rule out a rotting banana skin in your bin spontaneously gaining sentience. Does that mean you shouldn't risk throwing away that skin? The idea is so outrageous that you need at least some reason to rule it in. So it goes with current AI approaches.

km144|10 months ago

Isn't the problem precisely that uncertainty though? That we have many data points showing that a rotting banana skin will not spontaneously gain sentience, but we have no clear way to predict the future? And we have no way of knowing the true chance of superintelligence arising from the current path of AI research—the fact that it could be 1-in-100 or 1-in-1e12 or whatever is part of the discussion of uncertainty itself, and people are biased in all sorts of ways to believe that the true risk is somewhere on that continuum.

pembrook|10 months ago

You bring up the example of an extinction-level asteroid hurtling toward Earth. Gee, I wonder if this superintelligence you’re deathly afraid of could help with that?

This extreme risk aversion and focus on negative outcomes is just the result of certain personality types, no amount of rationalizing will change your mind as you fundamentally fear the unknown.

How do you get out of bed every day knowing there’s a chance you could get hit by a bus?

If your tribe invented fire, you’d be the one arguing we can’t use it for fear it might engulf the world. Yes, humans do risk starting wildfires, but it’s near impossible to argue the discovery of fire wasn’t a net good.

yard2010|10 months ago

Since the internet's inception, there have been a few wrong turns taken by the wrong people (and lizards, of course) behind the wheel, leading to the sub-optimal, enshittified™ experience we have today. I think the GP just doesn't want to live through that again.

voidspark|9 months ago

I think of the invention of ASI as introducing a new artificial life form.

The new life form will be to humans, as humans are to chimps, or rats, or ants.

At this point we have lost control of the situation (the planet). We are no longer at the top of the food chain. Fingers crossed it all goes well.

It's an existential gamble. Is the gamble worth taking? No one knows.

OtherShrezzing|10 months ago

> Superintelligence poses an existential threat to humanity

I disagree with at least this one. I don't see any scenario where superintelligence comes into existence but is for some reason limited to a mediocrity that puts it in contention with humans. That equilibrium is very narrow, and there's no good reason to believe machine intelligence would settle there. It's a vanishingly low-chance event, which considerably changes the later 1-in-n part of your comment.

Meneth|10 months ago

So you assume a superintelligence, so powerful it would see humans as we see ants, would not destroy our habitat for resources it could use for itself?

tempfile|10 months ago

> There are almost no statements about the future which I'd assign this level of confidence to.

You have cooked up a straw man that will believe anything as long as it contains a doomsday prediction. You are effectively more than 99.9% confident in doomsday predictions, even if you claim you aren't.