kbrkbr | 3 months ago
LLMs have their name for a reason: they model human language (output given an input) from human text (and other artifacts).
And now the idea seems to be that if we do more of it, or make the model even larger, it will stop being a model of human language generation? Or that human language generation is all there is to AGI?
I wish someone could explain the claim to me...
Gerardo1 | 3 months ago
And because there's something in the human mind that reacts very strongly to being talked to, and because LLMs are specifically good at mimicking plausible human speech patterns, ChatGPT really, really hooked a lot of people (including said VC/private-money people).
hackinthebochs | 3 months ago
It's not that language generation is all there is to AGI, but that to sufficiently model text about the wide range of human experiences, we need to model those experiences. LLMs model the world to varying degrees, and perhaps, in the limit of unbounded training data, they can model the human perspective within it as well. [1]
[1] https://x.com/karpathy/status/1582807367988654081