top | item 38323907


vanrysss | 2 years ago

So the first AGI is going to be used to kill other AGIs in the cradle?


T-A|2 years ago

The scenario usually bandied about is AGI self-improving at an accelerating rate: once you cross the threshold to self-improvement, you quickly get superintelligence with God-like powers beyond human comprehension (a.k.a. the Singularity) as AGI v1 creates a faster AGI v2 which creates a faster AGI v3 etc.

Any AI researchers still plodding along at mere human speed are then doomed: they won't be able to catch up even if they manage to reproduce the original breakthrough, since the head start enjoyed by AGI #1 guarantees that its latest iteration is always further along the exponential self-improvement curve and therefore superior to any would-be competitor. Being rational(ists), they give up and welcome their new AI overlord.
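The head-start argument above can be sketched as a toy model (an illustrative assumption, not anything from the thread): if both AGIs self-improve by the same factor per cycle, the one that started earlier stays ahead by a constant multiplicative gap, so a latecomer on the same curve never catches up.

```python
# Toy model of the "head start on an exponential curve" argument.
# Both AGIs improve by the same factor `rate` each cycle; AGI #1
# simply began `head_start` cycles earlier. All numbers are made up.

def capability(start_cycle: int, rate: float, t: int) -> float:
    """Capability after t cycles for an AGI that began at start_cycle."""
    active_cycles = max(0, t - start_cycle)
    return rate ** active_cycles

rate, head_start, t = 2.0, 5, 20
leader = capability(0, rate, t)           # started at cycle 0
rival = capability(head_start, rate, t)   # reproduced the breakthrough later

# With equal growth rates the gap never closes: the ratio is constant.
assert leader / rival == rate ** head_start  # 2**5 == 32
```

Of course, this only holds if the growth rate really is equal and unbounded; a rival with a faster rate, or a curve that plateaus, breaks the argument.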

And if not, the AI god will surely make them see the error of their ways.

true_religion|2 years ago

What if AI self improvement is not exponential?

We assume a self-improving AI will lead to some runaway intelligence improvement, but if it grows at 1% per year, or even per month, that's something we can adapt to.
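To put a number on the commenter's point (my own arithmetic, not from the thread): even compounded monthly, 1% growth is still a modest exponential that humans could plausibly track.

```python
# 1% self-improvement per month, compounded over a year.
monthly_rate = 1.01
yearly_gain_pct = round((monthly_rate ** 12 - 1) * 100, 1)
print(yearly_gain_pct)  # 12.7 — about 12.7% capability growth per year
```

An exponential, yes, but one with a doubling time of roughly six years at that rate, far from an overnight singularity.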

anonymouskimmer|2 years ago

It seems to me that non-general AI would typically outcompete AGI, all else held equal. In such a scenario even a first-past-the-post AGI would have trouble becoming an overlord if non-general AIs were marshaled against it.

kbenson|2 years ago

Or contain, or counter, or be used as a deterrent. At least, I think that's the idea being espoused here (in general, if not in the GP comment).

I think U.S. vs. Japan is not necessarily the right model to be thinking of here, but rather U.S. vs. U.S.S.R., where we'd like to believe that neither nation would actually launch against the other, but both having the weapon meant that neither could do so without risking severe damage in response, making it a losing proposition.

That said, I'm sure anyone with an AGI in their pocket/on their side will attempt to use it as a big stick against those that don't, in the Teddy Roosevelt meaning.

username332211|2 years ago

I think that was part of the LessWrong eschatology.

It doesn't make sense with modern AI, where improvement (be it learning or model expansion) is separated from its normal operation, but I guess some beliefs can persevere very well.

mitthrowaway2|2 years ago

Modern AI also isn't AGI. We seem to get a revolution at the frontier every 5 years or so; it's unlikely the current LLM transformer architecture will remain the state of the art for even a decade. Eventually something more capable will become the new modern.

macintux|2 years ago

Which reminds me, I really need to finish Person of Interest someday.