top | item 43388698

low_common | 11 months ago

> IMO it is not the case. And I'd go farther in thinking LLM won't even be a component of AGI if we get there.

And why do you think that?

arkh | 11 months ago

Because LLMs are Markov chains on steroids. They're useful, for sure. But they won't suddenly start creating a better (for whatever "better" is) version of themselves, or start pushing the boundaries of the machines they're running on.

Or maybe I'm wrong, and the current "vibe coding" push is in fact LLMs getting "coders" to compile a distributed AI. Or multiple small agents whose goal is to get a lot of hardware delivered somewhere it can be assembled into a new, better monolithic AI.
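
The "Markov chain" analogy can be made concrete with a toy first-order model, where the next word is drawn only from counts of what followed the current word. This is a sketch for illustration (the corpus and function names are invented here); a real LLM conditions on a long learned context rather than a single preceding word, which is the "on steroids" part.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Count what follows each word: a first-order Markov chain."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Walk the chain: each next word depends only on the current word."""
    random.seed(seed)  # fixed seed so the walk is reproducible
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: no observed successor
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_bigram_model(corpus), "the"))
```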

rollcat | 11 months ago

"By design" LLMs lack: initiative, emotion, creativity, curiosity, opinions, beliefs, self-reflection, or even logical reasoning. All they can do is predict the next token - which is still an extremely powerful building block on its own, but nothing like the above.

tomjakubowski | 11 months ago

You've made a reasonable argument that LLMs cannot on their own be an implementation of AGI. But GP's claim was stronger: that LLMs won't even be a component (or "building block") of the first AGI.

symbolicAGI | 11 months ago

One might reasonably ask a frontier model how to generate the source code for an agent-based system that exhibits examples of initiative, emotion, creativity, curiosity, opinions, beliefs, self-reflection, or even logical reasoning.