top | item 46943049

Eliezer | 21 days ago

Every time somebody writes an article like this without any dates and without saying which model they used, my guess is that they've simply failed to internalize the idea that "AI" is a moving target; nor understood that they saw a capability level from a fleeting moment of time, rather than an Eternal Verity about the Forever Limits of AI.

iLoveOncall | 21 days ago

Funnily enough, we have had these comments with every single model release: "Oh yeah, I agree Claude 3 was not good, but now with Claude 3.5 I can vibe-code anything."

Rinse and repeat with every model since.

There also ARE intrinsic limits to LLMs; I'm not sure why you deny them.

Eliezer | 20 days ago

There are intrinsic limits to vanilla transformer stacks. Nobody knows where they are. We don't know how unvanilla Opus 4.6 or GPT 5.3 are. We don't know what's in development or which new ideas will pan out. But it will still probably be called an "LLM".

josefrichter | 21 days ago

Exactly. Basically, their thoughts are invalidated quicker than they can hit the publish button.