top | item 45405182


Inufu | 5 months ago

Author here.

The argument is not that growth will keep compounding exponentially forever (obviously that is physically impossible), but rather that:

- given a sustained history of growth along a very predictable trajectory, the highest-likelihood short-term scenario is continued growth along the same trajectory. Sample a random point on an S-curve and look slightly to the right: what's the most common direction for the curve to continue?

- exponential progress is very hard to visualize: a system may appear to make hardly any progress while it is far from human capabilities, then move from just below to far above human level very quickly
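The first point can be illustrated with a toy simulation (a logistic curve with arbitrary parameters, chosen purely for illustration, not fitted to any real capability data): sample random points on the curve and look slightly to the right, and the curve continues upward at every one of them.

```python
import math
import random

def logistic(t, midpoint=0.0, steepness=1.0):
    """Standard logistic (S-curve), rising from 0 toward 1."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

random.seed(0)
samples = [random.uniform(-10, 10) for _ in range(10_000)]

# At every sampled point, the curve slightly to the right is higher:
# an S-curve is monotonically increasing, so "looking slightly ahead"
# always shows continued growth, even when saturation is near.
rising = sum(logistic(t + 0.5) > logistic(t) for t in samples)
print(rising / len(samples))  # → 1.0
```

Of course, as replies below point out, "still rising" says nothing about how much further it rises.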


hnlmorg|5 months ago

My point is that the limits of LLMs will be hit long before they start to take on human capabilities.

The problem isn’t that exponential growth is hard to visualise. The problem is that LLMs, as advanced and useful as the technique is, aren’t suited to AGI and thus will never get us even remotely to the stage of AGI.

The human-like capabilities are really just smoke and mirrors.

It’s like when people anthropomorphise their car: “she’s being temperamental today”. Except we know the car isn’t intelligent and it’s just a mechanical problem. Whereas it’s in the AI firms’ best interest to upsell the human-like characteristics of LLMs, because that’s how they get VC money. And as we know, building and running models isn’t cheap.

tim333|5 months ago

>the limits of LLMs will be hit long before they start to take on human capabilities

Against that you have things like DeepMind getting gold in the International Collegiate Programming Contest the other week, including solving one problem where "none of the human teams, including the top performers from universities in Russia, China and Japan, got it right": https://www.theguardian.com/technology/2025/sep/17/google-de...

There's kind of a contradiction in claiming they are nowhere near human capabilities while they are also beating humans in various competitions.

tim333|5 months ago

There is no particular reason why AI has to stick to language models, though. Indeed, if you want human-like thinking you pretty much have to go beyond language, since we do other things besides language, if you see what I mean. A recent example: "Google DeepMind unveils its first “thinking” robotics AI" https://arstechnica.com/google/2025/09/google-deepmind-unvei...

rmwaite|5 months ago

My problem with takes like this is that they presume a level of understanding of intelligence in general that we simply do not have. We do not understand consciousness at all, much less the kind of consciousness that exhibits human intelligence. How are we to know the exact conditions that produce human-like intelligence? You’re assuming there isn’t some emergent phenomenon that LLMs could very well give rise to, but have not yet.

fjdjshsh|5 months ago

>the limits of LLMs will be hit long before they start to take on human capabilities.

Why do you think this? The rest of the comment just rephrases this point ("LLMs aren't suited for AGI") without providing any argument for it.

PeterStuer|5 months ago

AI services are going hybrid, or soon will, just as we have seen in search: thousands of dedicated subsystems handling niches behind a single unified UI element or API call.

adammarples|5 months ago

The most common part of the S-curve, by far, is the flat bit before and the flat bit after. We just don't graph those because they're boring. Besides which, there is no reason at all to assume this process will follow that shape. It seems like guesswork backed up by hand-waving.
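The "mostly flat" observation checks out numerically on a toy logistic curve (parameters and window arbitrary, chosen only to illustrate the shape): over a wide window, the near-flat tails dwarf the steep middle.

```python
import math

def logistic_slope(t, steepness=1.0):
    """Derivative of the standard logistic curve."""
    s = 1.0 / (1.0 + math.exp(-steepness * t))
    return steepness * s * (1.0 - s)

max_slope = logistic_slope(0.0)                  # steepest at the midpoint
window = [t / 100 for t in range(-5000, 5001)]   # t in [-50, 50]

# Count how much of the window is "flat": slope under 5% of the peak.
flat = sum(logistic_slope(t) < 0.05 * max_slope for t in window)
print(flat / len(window))  # ≈ 0.91: most of the curve is the boring flat part
```

Widen the window and the flat fraction only grows, which is the point: a uniformly sampled spot on the full curve is usually on a plateau, not the steep part.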

tempfile|5 months ago

Very much handwaving. The question is not meaningful at all without knowing the parameters of the S-curve. It's like saying "I flipped a coin and saw heads. What's the most likely next flip?"

YeGoblynQueenne|5 months ago

So it's an argument impossible to counter because it's based on a hypothesis that is impossible to falsify: it predicts that there will either be a bit of progress, or a lot of progress, soon. Well, duh.

bawolff|5 months ago

That feels like you're moving the goal posts a bit.

Exponential growth over the short term is very uninteresting. Exponential growth is exciting when it can compound.

E.g. if I offered you an investment returning 500% per year, compounded daily, that's amazing. If the fine print says the rate only holds for the very near term (say a week), the total gain works out to roughly 10%, nothing like what the headline rate suggests.
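The compounding arithmetic for that example, using the rate and duration stated above, is a one-liner:

```python
# 500%/year nominal rate, compounded daily, held for only 7 days.
annual_rate = 5.0
daily_rate = annual_rate / 365

one_week_growth = (1 + daily_rate) ** 7 - 1
print(f"{one_week_growth:.1%}")  # → 10.0%
```

A spectacular rate with a short runway buys very little, which is why the duration of the exponential matters more than its slope.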

Inufu|5 months ago

Well, growth has already followed this exponential for 5+ years (on the METR eval), and we are at the point where models come very close to matching human expert capabilities in many domains. Only one or two more years of growth would put us well beyond that point.
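For scale: METR's March 2025 study reported the 50%-success task horizon doubling roughly every seven months. Extrapolating that doubling for two more years (a big assumption, as the skeptics above note) gives:

```python
# METR reported the 50%-success task horizon doubling roughly every
# 7 months; this is a naive extrapolation of that trend, not a law.
doubling_months = 7
years = 2

growth = 2 ** (years * 12 / doubling_months)
print(f"{growth:.1f}x")  # → 10.8x
```

So two more on-trend years would make the achievable task horizon roughly an order of magnitude longer, which is the sense in which "a little extra growth" is economically profound.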

Personally I think we'll see way more growth than that, but to see profound impacts on our economy you only need to believe the much more conservative assumption of a little extra growth along the same trend.