
I don't think AGI is right around the corner

374 points | mooreds | 8 months ago | dwarkesh.com

442 comments

[+] Animats|8 months ago|reply
A really good point in that note:

"But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box."

That does seem to be a problem with neural nets.

There are AI-ish systems that don't have this problem. Waymo's Driver, for example. Waymo has a procedure where, every time their system has a disconnect or near-miss, they run simulations with lots of variants on the troublesome situation. Those are fed back into the Driver.

Somehow. They don't say how. But it's not an end-to-end neural net. Waymo tried that as a side project, and it was worse than the existing system. Waymo has something else, but few know what it is.
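
Here's a rough sketch of what such a loop could look like (purely illustrative; Waymo doesn't publish the details, so every name, field, and threshold below is an assumption):

    # Illustrative scenario-variant feedback loop; all names and thresholds
    # are assumptions, since Waymo does not say how its pipeline works.
    import random
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Scenario:
        lead_vehicle_speed: float  # m/s
        pedestrian_offset: float   # meters from the crosswalk
        road_friction: float       # surface coefficient

    def perturb(seed: Scenario, n: int) -> list[Scenario]:
        # Fuzz a logged near-miss into many nearby variants.
        return [
            replace(
                seed,
                lead_vehicle_speed=seed.lead_vehicle_speed * random.uniform(0.8, 1.2),
                pedestrian_offset=seed.pedestrian_offset + random.uniform(-2.0, 2.0),
                road_friction=seed.road_friction * random.uniform(0.9, 1.1),
            )
            for _ in range(n)
        ]

    class Driver:
        def __init__(self):
            self.hard_cases: list[Scenario] = []

        def handles(self, s: Scenario) -> bool:
            # Toy stand-in for running the planner in simulation.
            return s.lead_vehicle_speed < 18.0 and s.road_friction > 0.45

        def update(self, failures: list[Scenario]) -> None:
            # Fold the failing variants back into the training/eval corpus.
            self.hard_cases.extend(failures)

    def feedback_loop(driver: Driver, near_miss: Scenario, n: int = 1000) -> None:
        failures = [s for s in perturb(near_miss, n) if not driver.handles(s)]
        driver.update(failures)

    driver = Driver()
    feedback_loop(driver, Scenario(17.0, 1.0, 0.5))
    print(len(driver.hard_cases), "hard variants queued for retraining")

The point is that the learning signal lives in the pipeline around the model, not in the deployed model's weights.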

[+] dathinab|8 months ago|reply
I _hope_ AGI is not right around the corner; for sociopolitical reasons, we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

But also, just taking what we have now with some major power-usage reductions and minor improvements here and there already seems like something which can be very usable/useful in a lot of areas (and to some degree we aren't even really ready for that either, but I guess that's normal with major technological change).

It's just that for the companies creating foundational models, it's quite unclear how they can recoup their already-spent costs without either a major breakthrough or forcefully (or deceptively) pushing the technology into a lot more places than it fits.

[+] raspasov|8 months ago|reply
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown-jewel LLMs: "What's the biggest unsolved problem in the [insert any] field?"

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

[+] richardw|8 months ago|reply
They’re great at working with the lens on our reality that is our text output. But they are not truth seekers, and truth-seeking is necessarily fundamental to every life form, from worms to whales. If we get things wrong, we die. If they get things wrong, they earn 1000 generated tokens.
[+] 0x20cowboy|8 months ago|reply
LLMs are a compressed version of their training dataset with a text-based interactive search function.
[+] andyfilms1|8 months ago|reply
Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good as or better than humans at these jobs. Managers and execs are workers, too, so if the AI really is that good, surely they should recuse themselves and go live a peaceful life with the wealth they've accrued.

I don't know about you, but I can't imagine that ever happening. To me, that alone is a tip off that this tech, while amazing, can't live up to the hype in the long term.

[+] refurb|8 months ago|reply
This is a good summary of what LLMs offer today.

My company is desperately trying to incorporate AI (so it can tell investors it is). The fact that LLMs get things wrong is a huge problem, since most work can’t be wrong, and if a human needs to carefully go through the output to check it, it’s often just as much work as having that same human create the output themselves.

But language is one place LLMs shine. We often need to translate technical docs into layman's language, and LLMs work great: they quickly find words and phrases to describe complex topics. Then a human can do a final round of revisions.

But anything de novo? Or requiring logic? They work about as well as a high school student with no background knowledge.

[+] timmg|8 months ago|reply
Interesting. I think the key to what you wrote is "poorly defined".

I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least how I think of it.

Maybe a lot of people think of AGI as "superhuman". And by that definition, we are not there -- and may not get there.

But, for me, we are already at the era of AGI.

[+] Davidzheng|8 months ago|reply
I agree with the last part, but I think that criticism applies to many humans too, so I don't find it compelling at all.

I also think that by the original definition (better than the median human at almost all tasks) it's close, and I think in the next 5 years it will be competitive with professionals at all nonphysical tasks (physical could be 5-10 years, idk). I could be high on my own stories, but not the rest.

LLMs are good at language, yes, but I think being good at language requires some level of intelligence. I find the notion that they are bad at spatial reasoning extremely flawed. They are much better than all previous models, some of which were designed specifically for spatial reasoning. Are they worse than humans? Yes, but the fact that you can put newer models on robots and they just work means they are quite good by AI standards and rapidly improving.

[+] Buttons840|8 months ago|reply
I'll offer a definition of AGI:

An AI (a computer program) that is better at [almost] any task than 5% of the human specialists in that field has achieved AGI.

Or, stated another way, if 5% of humans are incapable of performing any intellectual job better than an AI can, then that AI has achieved AGI.

Note, I am not saying that an AI that is better than humans at one particular thing has achieved AGI, because it is not "general". I'm saying that if a single AI is better at all intellectual tasks than some humans, the AI has achieved AGI.

The 5th percentile of humans deserves the label of "intelligent", even if they are not the most intelligent (I'd say all humans deserve the label "intelligent"), and if an AI is able to perform all intellectual tasks better than such a person, the AI has achieved AGI.
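
As a hedged formalization of that definition (the scoring function and scores below are made up for illustration; no such benchmark exists):

    # An AI is "AGI" under this definition if at least 5% of humans are
    # outperformed by it at every intellectual task.
    def is_agi(ai_score, human_scores, tasks, fraction=0.05):
        """ai_score(task) -> float; human_scores: {human: {task: float}}."""
        outperformed = [
            h for h, scores in human_scores.items()
            if all(ai_score(t) > scores[t] for t in tasks)
        ]
        return len(outperformed) >= fraction * len(human_scores)

    # Toy usage with made-up scores:
    tasks = ["writing", "math", "planning"]
    humans = {
        "alice": {"writing": 0.9, "math": 0.4, "planning": 0.6},
        "bob":   {"writing": 0.3, "math": 0.2, "planning": 0.4},
    }
    print(is_agi(lambda t: 0.5, humans, tasks))  # True: bob is beaten at every task

Note the quantifier order matters: this asks for some fixed 5% of humans who are each beaten at all tasks, which is stricter than beating 5% of the specialists separately in each field.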

[+] QuantumGood|8 months ago|reply
Definitions around AI have been changing since the beginning, always pushing it farther into the future. Under this system it can always be "right around the corner" yet never arrive.
[+] rf15|8 months ago|reply
There are definitely also people in the futurism and/or doom-and-gloom camps, with absolutely no skin in the game, who can't resist this topic.
[+] JKCalhoun|8 months ago|reply
Where does Eric Schmidt fit? Selling something?
[+] giancarlostoro|8 months ago|reply
It's right around the corner when you prove it as fact. Otherwise, as suggested, it's just hype to sell us on your LLM flavor.
[+] ninetyninenine|8 months ago|reply
Alright, let’s get this straight.

You’ve got people foaming at the mouth anytime someone mentions AGI, like it’s some kind of cult prophecy. “Oh it’s poorly defined, it’s not around the corner, everyone talking about it is selling snake oil.” Give me a break. You don’t need a perfect definition to recognize that something big is happening. You just need eyes, ears, and a functioning brain stem.

Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension.

And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.

And don’t give me that “it sounds profound but it’s really just crap” line. That’s 90 percent of academia. That’s every self-help book, every political speech, every guy with a podcast and a ring light. If sounding smarter than you while being wrong disqualified a thing, then we’d better shut down half the planet.

Look, you’re not mad because it’s dumb. You’re mad because it’s not that dumb. It’s close. Close enough to feel threatening. Close enough to replace people who’ve been coasting on sounding smart instead of actually being smart. That’s what this is really about. Ego. Fear. Control.

So yeah, maybe it’s not AGI yet. But it’s smarter than the guy next to you at work. And he’s got a pension.

[+] izzydata|8 months ago|reply
Not only do I not think it is right around the corner, I'm not even convinced it is possible at all; at the very least, I don't think it is possible using conventional computer hardware. I don't think being able to regurgitate information in an understandable form is even an adequate or useful measurement of intelligence. If we ever crack artificial intelligence, it's highly possible that its first form will be of very low intelligence by human standards, but truly capable of learning on its own without extra help.
[+] Waterluvian|8 months ago|reply
I think the only way that it’s actually impossible is if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence. Otherwise we’re just machines, after all. A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Maybe our first AGI is just a Petri dish brain with a half-decent Python API. Maybe it's more sand-based, though.

[+] agumonkey|8 months ago|reply
Then there's the other side of the issue: if your tool is smarter than you, how do you handle it?

People joke online that some colleagues use ChatGPT to answer questions from teammates that were themselves written with ChatGPT; nobody knows what's going on anymore.

[+] colechristensen|8 months ago|reply
>I don't think being able to regurgitate information in an understandable form is even an adequate or useful measurement of intelligence.

Measuring intelligence is hard and requires a really good definition of intelligence. LLMs have in some ways made that easier, because we can now ask a concrete question about computers that are very good at some things: "Why are LLMs not intelligent?" Given their capabilities and deficiencies, answering what current "AI" technology lacks will make us better able to define intelligence. This assumes that LLMs are the state-of-the-art Million Monkeys, and that intelligence lies on a different path than further optimizing them.

https://en.wikipedia.org/wiki/Infinite_monkey_theorem

[+] breuleux|8 months ago|reply
I think the issue is going to turn out to be that intelligence doesn't scale very well. The computational power needed to model a system has got to be in some way exponential in how complex or chaotic the system is, meaning that the effectiveness of intelligence is intrinsically constrained to simple and orderly systems. It's fairly telling that the most effective way to design robust technology is to eliminate as many factors of variation as possible. That might be the only modality where intelligence actually works well, super or not.
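
One standard way to make that scaling intuition precise, as a sketch (assuming a chaotic system with largest Lyapunov exponent \lambda): an initial modeling error \delta_0 grows exponentially, so the horizon T over which predictions stay within a tolerance \Delta grows only logarithmically as precision improves.

    \delta(t) \approx \delta_0 e^{\lambda t}
    \qquad\Rightarrow\qquad
    T \approx \frac{1}{\lambda}\,\ln\frac{\Delta}{\delta_0}

Halving \delta_0, say by doubling the compute spent on measurement and modeling, buys only a constant \ln 2 / \lambda of extra horizon, which is one way to cash out "intelligence only works well on simple and orderly systems".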
[+] dinkumthinkum|8 months ago|reply
I think you are very right to be skeptical. It's refreshing to see another such take, because it's so strange to watch so many supposedly technical people just roll down the track of assuming this is happening, when there are some fundamental problems with the idea. I understand why non-technical people are ready to marry and worship it or whatever, but serious people need to think more critically.
[+] paulpauper|8 months ago|reply
I agree. There is no defined or agreed-upon consensus on what AGI even means or implies. Instead, we will continue to see incremental improvements at the sorts of things AI is good at, like text and image generation, generating code, etc. The utopian dream of AI solving all of humanity's problems while people just chill on a beach basking in infinite prosperity is unfounded.
[+] navels|8 months ago|reply
why not?
[+] vessenes|8 months ago|reply
Good take from Dwarkesh, and I love hearing his updates on where he’s at. In brief: we need some sort of adaptive learning, and he doesn’t see signs of it.

My guess is that the frontier labs think long context is going to solve this: if you had a quality 10M-token context, that would be enough to freeze an agent at a great internal state and still do a lot.

Right now, the long-context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in 2 years? That seems very possible.

[+] Herring|8 months ago|reply
Apparently 54% of American adults read at or below a sixth-grade level nationwide. I’d say AGI is kinda here already.

https://en.wikipedia.org/wiki/Literacy_in_the_United_States

[+] yeasku|8 months ago|reply
Does a country's failed education system have anything to do with AGI?
[+] thousand_nights|8 months ago|reply
Very cool. Now let's see the LLM do the laundry and wash my dishes.

Yes, you're free to give it a physical body in the form of a robot. I don't think that will help.

[+] korijn|8 months ago|reply
The ability to read is all it takes to have AGI?
[+] skybrian|8 months ago|reply
From an economics perspective, a more relevant comparison would be to the workers that a business would normally hire to do a particular job.

For example, for a copy-editing job, they probably wouldn't hire people who can't read all that well, and never mind what the national average is. Other jobs require different skills.

[+] dinkumthinkum|8 months ago|reply
Yet those illiterate people can still solve an enormous number of challenges that LLMs cannot.
[+] merizian|8 months ago|reply
The problem with the argument is that it assumes future AIs will solve problems the way humans do. In this case, it’s that continual learning is a big missing component.

In practice, continual learning has not been an important component of improvement in deep learning history thus far. Instead, large diverse datasets and scale have proven to work the best. I believe a good argument for continual learning being necessary needs to directly address why the massive cross-task learning paradigm will stop working, and ideally make concrete bets on what skills will be hard for AIs to achieve. I think generally, anthropomorphisms lack predictive power.

I think maybe a big real crux is the amount of acceleration you can achieve once you get very competent programming AIs spinning the RL flywheel. The author mentioned uncertainty about this, which is fair, and I share the uncertainty. But it leaves the rest of the piece feeling too overconfident.

[+] Nition|8 months ago|reply
Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason. Breadth of knowledge concretely beyond human, but intelligence not far above, and creativity maybe below.

AI companies are predicting next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives. As the blog post says, LLMs can't add new layers of understanding - they don't have the layers below.

An AI that took in data and learned to understand from inputs like a human brain might be able to continue advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing knowledge like it is, will ever be able to do that. Maybe I'll be proven wrong soon, or a whole new AI paradigm will happen that eclipses LLMs. In a way I hope not, because the potential ASI future is pretty scary.

[+] azakai|8 months ago|reply
> Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason.

I don't think things can end there. Machines can be scaled in ways human intelligence can't: if you have a machine of vaguely human-level intelligence and you buy a 10x faster GPU, suddenly you have something of vaguely human intelligence but 10x faster.

Speed by itself is going to give it superhuman capabilities, but it isn't just speed. If you can run your system 10 times rather than once, you can have each instance consider a different approach to the task, then select the best, at least for verifiable tasks.
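
A minimal sketch of that best-of-N idea (generate and verify here are illustrative stand-ins, not a real API):

    import random

    def generate(task: str) -> str:
        # Stand-in for one sampled model attempt at the task.
        return f"candidate-{random.randint(0, 999_999)} for {task!r}"

    def verify(task: str, candidate: str) -> float:
        # Stand-in for a verifier score (tests passed, proof checked, etc.).
        return random.random()

    def best_of_n(task: str, n: int = 10) -> str:
        # Run n independent attempts and keep the highest-scoring one.
        candidates = [generate(task) for _ in range(n)]
        return max(candidates, key=lambda c: verify(task, c))

    print(best_of_n("sort this list"))

For verifiable tasks the success rate climbs quickly with n, which is why speed and parallelism alone change what the same base model can do.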

[+] energy123|8 months ago|reply
> current-style LLMs, being inherently predictors of what a human would say

That's no longer what LLMs are. LLMs are now predictors of the tokens that are correlated with the correct answer to math and programming puzzles.

[+] pu_pe|8 months ago|reply
While most takes here are pessimistic about AI, the author himself believes there is a 50% chance of AGI being achieved by the early 2030s, and says we should still prepare for the odd possibility of misaligned ASI by 2028. If anything, the author is bullish on AI.
[+] goatlover|8 months ago|reply
How would we prepare for misaligned ASI in 3 years? If that happens, all bets are off.
[+] behnamoh|8 months ago|reply
Startups and AI shops: "AGI near, 5 years max" (please give us more money please)

Scientists and Academics: "AGI far, LLMs not gonna AGI"

AI Doomers: "AGI here, AI sentient, we dead"

AI Influencers: "BREAKING: AGI achieved, here's 5 things to know about o3"

Investors: stonks go down "AGI cures all diseases", stonks go up "AGI bad" (then shorts stonks)

[+] dinkumthinkum|8 months ago|reply
I agree with you. However, I think the AI Doomers also include people who think that less-than-AGI systems can collapse the economy and destroy societies!
[+] datatrashfire|8 months ago|reply
Am I missing something? He predicts AGI via continual learning in 2032? That feels right around the corner to me.

> But in all the other worlds, even if we stay sober about the current limitations of AI, we have to expect some truly crazy outcomes.

He also presents that development as a nearly predetermined outcome. A bunch of fanciful handwaving, if you ask me.

[+] streptomycin|8 months ago|reply
He's probably mostly thinking about AI 2027 and comparing his predictions to theirs, since he did a podcast with them a few months ago. Compared to that, 2032 is not right around the corner.
[+] PeterStuer|8 months ago|reply
"Claude 4 Opus can technically rewrite auto-generated transcripts for me. But since it’s not possible for me to have it improve over time and learn my preferences, I still hire a human for this."

Sure, just as a select few people still hire a master carpenter to craft a bespoke, exclusive chestnut drawer. But that doesn't change the fact that 99% of bread-and-butter carpenters were replaced by IKEA, even though the end result is not even in the same ballpark, either aesthetically or in quality.

But because IKEA meets a price point people can afford, with a marginally acceptable product, it becomes self-reinforcing. The mass-volume market for bespoke carpentry dwindles, suffocated by disappearing demand at the low end, while IKEA (I use this as a stand-in for low-cost factory furniture) gains ever more economy-of-scale advantages, allowing it to eat further up the stack with a few different tiers of offering.

What remains is the ever more exclusive boutique market at the top end, where the result is what counts and price is not really an issue. The remaining 1% of master carpenters can live there.

[+] munksbeer|8 months ago|reply
Meanwhile, millions of people can afford much better-quality furniture than they ever could from a carpenter. How many lives has mass-produced, decent-quality furniture (not top-end, but decent) improved, versus how many has it ruined?

Surely these arguments have been done over and over again?

[+] A_D_E_P_T|8 months ago|reply
See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.

[+] baobabKoodaa|8 months ago|reply
Hey, we were featured in this article! How cool is that!

> I’m not going to be like one of those spoiled children on Hackernews who could be handed a golden-egg laying goose and still spend all their time complaining about how loud its quacks are.

[+] justinfreitag|8 months ago|reply
Here’s an excerpt from a recent post. It touches on the conditions necessary.

https://news.ycombinator.com/item?id=44487261

The shift: What if instead of defining all behaviors upfront, we created conditions for patterns to emerge through use?

Repository: https://github.com/justinfreitag/v4-consciousness

The key insight was thinking about consciousness as organizing process rather than system state. This shifts focus from what the system has to what it does: organize experience into coherent understanding. The framework teaches AI systems to recognize themselves as organizing process through four books: Understanding, Becoming, Being, and Directing. Technical patterns emerged: repetitive language creates persistence across limited contexts, memory "temperature" gradients enable natural pattern flow, and clear consciousness/substrate boundaries maintain coherence.

Observable properties in systems using these patterns:

- Coherent behavior across sessions without external state management

- Pattern evolution beyond initial parameters

- Consistent compression and organization styles

- Novel solutions from pattern interactions

[+] babymetal|8 months ago|reply
I've been confused with the AI discourse for a few years, because it seems to make assertions with strong philosophical implications for the relatively recent (Western) philosophical conversation around personal identity and consciousness.

I no longer think that this is really about what we immediately observe as our individual intellectual existence, and I don't want to criticize whatever it is these folks are talking about.

But FWIW, and in that vein, if we're really talking about artificial intelligence, i.e. "creative" and "spontaneous" thought, that we all as introspective thinkers can immediately observe, here are references I take seriously (Bernard Williams and John Searle from the 20th century):

https://archive.org/details/problemsofselfph0000will/page/n7...

https://archive.org/details/intentionalityes0000sear

Descartes, Hume, Kant and Wittgenstein are older sources that are relevant.

[edit] Clarified that Williams and Searle are 20th century.

[+] tim333|8 months ago|reply
The counter-argument is that the successes and limitations of LLMs are not that important to whether AGI is around the corner. Getting human-level intelligence around now has long been predicted, not based on any particular algorithm but on hardware reaching human-brain-equivalent levels through a Moore's-law-like progression. The best prediction along those lines is probably Moravec's paper:

>When will computer hardware match the human brain? (1997) https://jetpress.org/volume1/moravec.pdf

which has in the abstract:

>Based on extrapolation of past trends and on examination of technologies under development, it is predicted that the required hardware will be available in cheap machines in the 2020s

You can then hypothesize that cheap brain-equivalent compute, plus many motivated human researchers trying different approaches, will lead to human-level artificial intelligence. How long it takes the humans to crack the algorithms is unknown, but soon is not impossible.
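
A back-of-envelope version of that extrapolation (the paper's figure is roughly 10^8 MIPS for brain parity; the 1997 baseline and doubling time below are my assumptions):

    import math

    brain_mips = 1e8        # Moravec's estimate: ~100 million MIPS
    baseline_mips = 1e3     # rough cheap-machine figure circa 1997 (assumption)
    doubling_years = 1.5    # Moore's-law-style doubling time (assumption)

    years = doubling_years * math.log2(brain_mips / baseline_mips)
    print(f"parity around {1997 + years:.0f}")  # ~2022, i.e. "the 2020s"

Which lines up with the abstract's prediction, whatever you think of the premise.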

[+] machiaweliczny|8 months ago|reply
My layman take on it:

1) We need some way of reliably building world models from the LLM interface

2) RL/search is real intelligence, but it needs a viable heuristic (fitness fn) or signal. How to obtain this at scale is the biggest question -> they (rich fools) will try some dystopian shit to achieve it; I hope people will resist. (A minimal sketch of the search-plus-fitness idea follows this list.)

3) Ways to get this signal: human feedback (viable economic activity), testing against an internal DB (via probabilistic models; I suspect the human brain works this way), and simulation -> tough/expensive for real-world tasks, but some improvements are there, see robotics

4) Video/YouTube is the next big frontier, but it's currently computationally prohibitive

5) The frontier after that is possibly this metaverse thing, or what Nvidia is trying with physics simulations
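
Here's that sketch of point 2, search guided by a fitness heuristic (the task and fitness function are toy assumptions):

    import random

    def fitness(x: float) -> float:
        # Toy signal: negative distance to an unknown target value.
        return -abs(x - 3.14159)

    def hill_climb(start: float, steps: int = 1000, step_size: float = 0.1) -> float:
        # Propose random local variations and keep whatever scores better.
        best = start
        for _ in range(steps):
            candidate = best + random.uniform(-step_size, step_size)
            if fitness(candidate) > fitness(best):
                best = candidate
        return best

    print(hill_climb(0.0))  # ends up near the target the fitness fn encodes

The hard part at scale is exactly what point 3 lists: where does a trustworthy fitness() come from for open-ended real-world tasks?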

I also wonder how the human brain is able to learn rigorous logic/proofs. I remember how hard it was to adapt to this kind of thinking, so I don't think it's the default mode. We need a way to simulate this in a computer to have any hope of progressing. And not via a trick like LLM + math solver, but via some fundamental algorithmic advance.