arkh|11 months ago
Because LLMs are Markov chains on steroids. They're useful, for sure. But they won't suddenly start to create a better version of themselves (for whatever "better" means) or start pushing the boundaries of the machines they're running on.

Or maybe I'm wrong, and the current "vibe coding" push is in fact LLMs getting "coders" to compile a distributed AI. Or multiple small agents whose goal is to get a lot of hardware delivered somewhere it can be assembled into a new, better monolithic AI.
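For what it's worth, the "Markov chain on steroids" analogy is easy to make concrete: a Markov chain predicts the next token purely from the previous k tokens, which is structurally the same outer loop an LLM's sampler runs, just with a lookup table in place of a learned neural conditional distribution. A toy sketch (function names are mine, not from any library):

```python
import random
from collections import defaultdict, Counter

def train(tokens, k=2):
    """Count next-token frequencies for every k-token context."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - k):
        context = tuple(tokens[i:i + k])
        model[context][tokens[i + k]] += 1
    return model

def generate(model, context, length=10, seed=0):
    """Sample tokens one at a time -- the same loop an LLM decoder runs."""
    rng = random.Random(seed)
    out = list(context)
    k = len(context)
    for _ in range(length):
        counter = model.get(tuple(out[-k:]))
        if not counter:  # unseen context: a table can't generalize, a net can
            break
        toks, weights = zip(*counter.items())
        out.append(rng.choices(toks, weights=weights)[0])
    return out
```

The whole difference is inside `model.get`: the LLM replaces the frequency table with a network conditioning on thousands of tokens, so it generalizes to unseen contexts - but the "predict the next token, append, repeat" loop is identical.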
rollcat|11 months ago
"By design," LLMs lack: initiative, emotion, creativity, curiosity, opinions, beliefs, self-reflection, or even logical reasoning. All they can do is predict the next token - which is still an extremely powerful building block on its own, but nothing like the above.
tomjakubowski|11 months ago
You've made a reasonable argument that LLMs cannot, on their own, be an implementation of GAI. But GP's claim was stronger: that LLMs won't even be a component (or "building block") of the first GAI.
symbolicAGI|11 months ago
One might reasonably ask a frontier model how to generate the source code for an agent-based system that exhibits examples of initiative, emotion, creativity, curiosity, opinions, beliefs, self-reflection, or even logical reasoning.