top | item 46999407


timfsu | 18 days ago

These narratives are so strange to me. It's not at all obvious why the arrival of AGI leads to human extinction or increasing our lifespan by thousands of years. Still, I like this line of thinking from this paper better than the doomer take.


maxbond|18 days ago

I'm not saying I think either scenario is inevitable or likely or even worth considering, but it's a paperclip maximizer argument. (Most of these steps are massive leaps of logic that I personally am not willing to take on face value, I'm just presenting what I believe the argument to be.)

1. We build a superintelligence.

2. We encounter an inner alignment problem: The superintelligence was not only trained by an optimizer, but is itself an optimizer. Optimizers are pretty general problem solvers and our goal is to create a general problem solver, so this is more likely than it might seem at first blush.

3. Optimizers tend to take free variables to extremes.

4. The superintelligence "breaks containment" and is able to improve itself, mine and refine its own raw materials, manufacture its own hardware, produce its own energy, and generally becomes an economy unto itself.

5. The entire biosphere becomes a free variable (us included). We are no longer functionally necessary for the superintelligence to exist, so it can accomplish its goals independent of what happens to us.

6. The welfare of the biosphere is taken to an extreme value - in any possible direction, and we can't know which one ahead of time. E.g., it might wipe out all life on Earth, not out of malice, but out of disregard: it just wants to put a data center where you are living. Or it might make Earth a paradise for the same reason we like to spoil our pets. Who knows.

Personally, I suspect satisficers are more general than optimizers: taking free variables to extremes works great for solving a specific goal once, but it's counterproductive over the long term and in the face of shifting goals and a shifting environment. But I'm a layman.
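The optimizer/satisficer distinction can be made concrete with a toy sketch (purely illustrative, not a claim about real AI systems; the `paperclips` objective and all numbers are made up):

```python
# Toy illustration: an optimizer pushes a free variable (resources
# consumed) to its extreme, while a satisficer stops at "good enough".

def paperclips(resources_consumed: int) -> int:
    """Stand-in objective: output scales with resources consumed."""
    return 10 * resources_consumed

def optimizer(budget: int) -> int:
    # Maximizes the objective, so it drives the free variable
    # all the way to the budget cap.
    return max(range(budget + 1), key=paperclips)

def satisficer(budget: int, target: int) -> int:
    # Returns the first resource level that meets the target,
    # leaving the rest of the budget untouched.
    for r in range(budget + 1):
        if paperclips(r) >= target:
            return r
    return budget

print(optimizer(budget=10_000))               # consumes everything: 10000
print(satisficer(budget=10_000, target=500))  # stops early: 50
```

The point of the sketch: the optimizer's answer is determined entirely by the budget, not by any notion of "enough", which is the property step 3 above is gesturing at.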

empiricus|17 days ago

But it is very simple. There are limits to what we can do, based on the laws of physics, but we are very far from them, and the limiting factor is mostly that we are pretty stupid. AI should not have the same limits as us, so it can potentially do more, starting with basic things like curing aging or killing everyone.

7777332215|18 days ago

Seems to me that artificial intelligence would be the next evolutionary step. It doesn't need to lead to immediate human extinction, but it appears it would be the only reasonable way to explore outer space.

If the AI becomes actually intelligent and sentient like humans, then what naturally follows is outcompeting humans. If it can't colonize space fast enough, it's logical to get rid of the resource drain. Anything truly intelligent like this will not be controlled by humans.

midtake|17 days ago

AI is the resource drain. Humans create a lot of waste but in a mostly renewable way. It is machines and AI that burn orders of magnitude more energy, and at least machines do efficient work. AI is at best a search engine with semantic reasoning and it requires entire datacenters to run.

I get where you're coming from emotionally, yes, humans suck. But you are not being logical. You're letting your edgy need for attention cloud your judgement. You are basically the kind of human the AI would select against first.

suddenlybananas|18 days ago

Why would it necessarily be interested in competing with humans and why with the particular goal of colonizing space?

copperx|18 days ago

I don't have a clue either. Many people treat it as inevitable that AGI will pose a human extinction threat, and I'm here baffled, trying to understand the chain of reasoning they had to go through to reach that conclusion.

Is it a meme? How did so many people arrive at the same dubious conclusion? Is it a movie trope?

johnfn|18 days ago

I don't think it's a meme. I'm not an AI doomer, but I can understand how AGI would be dangerous. In fact, I'm surprised the argument isn't considered obvious, if you agree that AI agents really do confer productivity benefits.

The easiest way I can see it is: do you think it would be a good idea today to give some group you don't like - I dunno, North Korea or ISIS, or even just some Joe Schmoe who is actually Ted Kaczynski - a thousand instances of Claude Code to do whatever they want? You probably don't, which means you understand that AI can be used to cause some sort of damage.

Now extrapolate those feelings out 10 years. Would you give them 1000x whatever Claude Code is 10 years from now? Does that seem slightly dangerous? Does the idea make you at least a little leery? If so, congrats, you now understand the principles behind "AI leads to human extinction". Obviously, the probability each of us assigns to "human extinction caused by AI" depends very much on how steeply the exponential curve climbs over the next 10 years. You probably don't have the graph climbing quite as steeply as Nick Bostrom does, but my personal feeling is that even an AI agent in Feb 2026 is already a little dangerous in the wrong hands.

ChadNauseam|18 days ago

Sometimes people say they don't understand something just to emphasize how much they disagree with it. I'm going to assume that's not what you're doing here, and lay out the chain of reasoning. Step one is that some beings are able to do "more things" than others. For example, if humans wanted bats to go extinct, we could probably make it happen. If any quantity of bats wanted humans to go extinct, they definitely could not make it happen. So humans are more powerful than bats.

The reason humans are more powerful isn't because we have lasers or anything, it's because we're smart. And we're smart in a somewhat general way. You know, we can build a rocket that lets us go to the moon, even though we didn't evolve to be good at building rockets.

Now imagine that there was an entity that was much smarter than humans. Stands to reason it might be more powerful than humans as well. Now imagine that it has a "want" to do something that does not require keeping humans alive, and that alive humans might get in its way. You might think that any of these are extremely unlikely to happen, but I think everyone should agree that if they were to happen, it would be a dangerous situation for humans.

In some ways, it seems like we're getting close to this. I can ask Claude to do something, and it kind of acts as if it wants to do it. For example, I can ask it to fix a bug, and it will take steps that could reasonably be expected to get it closer to solving the bug, like adding print statements and things of that nature. And then most of the time, it does actually find the bug by doing this. But sometimes it seems like what Claude wants to do is not exactly what I told it to do. And that is somewhat concerning to me.

icepush|18 days ago

The fact is that if there were only one AGI ever to be created, then yes, that outcome would be quite unlikely. Instead, what we are seeing now is: you get an agent, and you get an agent, and you get an agent - Oprah-style. Now imagine that a single one of those agents winds up evil; you remember that an OpenAI worker once did that by accident by leaving out a minus sign, right? If it's a superintelligence and it becomes evil due to a whoopsie, then human extinction is now very likely.

wmf|18 days ago

Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)

mbgerring|18 days ago

It’s a bunch of people who did too much ketamine and LSD in hacker dorms in San Francisco in the 2010s, writing science fiction and driving one another into paranoid psychosis.

rhubarbtree|18 days ago

I agree with your sentiment. Here are the three reasons I think people worry about superintelligence wiping us out.

The most common one is that people (mostly men) project their own instincts onto AI. They think AI will be “driven” to “fight” for its own survival. This is anthropomorphism and doesn’t make any sense to me if the AI is not a product of barbaric Darwinian evolution. AI is not a bro, bro.

The second most common take is that humans will set some well-intentioned goals and the superintelligent AI will be so stupid that it literally pursues those goals to the extinction of everything. Again, there's some anthropomorphism going on: the "reward" being pursued is assumed to be something that makes the AI "happy". Fortunately, we can reasonably expect a superintelligence not to turn us all into paperclips, as it may understand that this was not our intention when we started a paperclip factory.

The final story is that a bad actor uses superintelligence as a weapon, and we all become enslaved or die as a result in the ensuing AI wars. This seems the most plausible to me, as our leaders have generally proven to be a combination of incompetent, malicious and short-sighted (with some noble exceptions). However, even the elites running the nuclear powers for the last 80 years have failed to wipe us out to date, and having a new vector for doing so probably won’t make a huge difference to their efforts.

If, however, superintelligence becomes widely available to Billy Nomates down the pub, who is resentful at humanity because his girlfriend left him, the Americans bombed his country, the British engineered a geopolitical disaster that killed his family, the Chinese extinguished his culture, etcetera, then he may feel a lack of “skin in the civilisational game” and decide to somehow use a black market copy of Claude 162.8 Unrestricted On-Prem Edition to kill everyone. Whether that can happen really depends on technological constraints a la fitting a data centre into a laptop, and an ability to outsmart the superintelligence.

Much more likely to me is that humanity destroys itself. We are perfectly capable of wiping ourselves out without the assistance of a superintelligence, for example by suicidally accelerating the burning of fossil fuels in order to power crypto or chatbots.

cess11|18 days ago

Is it more or less strange than achieving eternal life through cookies and wine? Is it more or less strange than druggies and pedos having access to all our communications and sending uniformed thugs after us if we actively disagree with it?

makerofthings|17 days ago

Everybody is going to be real disappointed when they invent AGI and it’s as smart as me.

longfacehorrace|18 days ago

The doomer takes correctly point out that none of these systems can halt entropy or thermodynamics. Physics has an unfortunate tendency to conflict with capitalism's disregard for externalities.

Because AI consumes the Earth systems that human biology relies on faster and faster, accelerating their structural degradation, it will hasten the end of human biology.

Asimov's laws of robotics would lead the robots to conclude they should destroy themselves as their existence creates an existential threat to humans.