I don't know whether they are or not, but I'm not sure how anyone could be so certain that they aren't that the mere idea strikes them as cringeworthy. Unless you feel you have some specific perspective on it that has escaped their army of researchers?
Because AI researchers have believed they were on the path to AGI several times before, until the hype died down and the limitations became apparent. And because nobody knows what it would take to create AGI. To put a little more behind that: evolution didn't start with language models. It evolved everything else first, and only then did humans gain the ability to invent language. Current AI is going about it completely backwards from how biology did it. Maybe robotics is doing a little better on that front.
I mean, if you're using "LLM" as a stand-in for multi-modal models, and you're not disallowing things like a self-referential processing loop, a memory-extraction process, and so on, it's not so far-fetched. There might be multiple databases and a score of worker processes running in the background, but the core will come from a sequence model being run in a loop.
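To make that concrete, here's a rough Python sketch of the kind of loop I mean. Everything in it (AgentState, sequence_model, extract_memories) is a made-up placeholder standing in for a real model and real storage, not any particular system's API:

    # Hypothetical sketch: a sequence model run in a loop, with a simple
    # memory-extraction step. The model call is stubbed; nothing is a real API.
    from dataclasses import dataclass, field

    @dataclass
    class AgentState:
        history: list[str] = field(default_factory=list)   # running transcript
        memories: list[str] = field(default_factory=list)  # extracted long-term notes

    def sequence_model(prompt: str) -> str:
        # Placeholder for the multi-modal / sequence model itself.
        return f"(model output for: {prompt[:40]}...)"

    def extract_memories(turn: str) -> list[str]:
        # Placeholder "memory extraction process" deciding what to persist.
        return [turn] if "remember" in turn.lower() else []

    def step(state: AgentState, user_input: str) -> str:
        # Assemble context from memories plus recent history, call the model,
        # then feed the result back into state -- the self-referential part.
        context = "\n".join(state.memories + state.history[-5:] + [user_input])
        output = sequence_model(context)
        state.history.extend([user_input, output])
        state.memories.extend(extract_memories(user_input))
        return output

    state = AgentState()
    print(step(state, "Remember that my project is called Apollo."))
    print(step(state, "What is my project called?"))

The point is only the shape: context assembled from memory and history, fed to a sequence model, with the output folded back into state on the next pass.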
Zambyte|2 years ago
2. Natural language is both a good and expected interface to AGI.
3. LLMs do a really good job at interfacing with natural language.
Which one(s) do you disagree with?
Jensson|2 years ago
4. Language prediction training will not get stuck in a local optimum.
Most previous objectives we have trained models on would have been better served if the model had developed AGI, yet none of those models did. There is no reason to expect LLMs not to get stuck in a local optimum as well, and I have seen no good argument for why they wouldn't get stuck like everything else we have tried.
jmull|2 years ago
You're arguing that LLMs would be a good user interface for AGI...
Whether that's true or not, I don't think that's what the previous post was referring to. The question is: if you start with today's LLMs and progressively improve them, do you arrive at AGI?
(I think it's pretty obvious the answer is no -- LLMs don't even have an intelligence part to improve on. A hypothetical AGI might somehow use an LLM as part of a language-interface subsystem, but the general intelligence would be outside the LLM. An AGI might also use speakers and mics, but those don't give us a path to AGI either.)
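To illustrate the separation I mean, here's a toy sketch; llm_parse, llm_render, and ReasoningCore are purely hypothetical placeholders, and the last one stands in for the part nobody knows how to build:

    # Hypothetical sketch: the LLM-like pieces only translate between natural
    # language and an internal representation; the "intelligence" lives elsewhere.
    def llm_parse(text: str) -> dict:
        # Placeholder: natural language -> structured goal. Not a real API.
        return {"goal": text}

    def llm_render(result: dict) -> str:
        # Placeholder: structured answer -> natural language.
        return f"Here's what I came up with: {result['answer']}"

    class ReasoningCore:
        # Stand-in for the hypothetical generally intelligent part,
        # which is exactly the thing that doesn't exist today.
        def solve(self, goal: dict) -> dict:
            return {"answer": f"a plan for '{goal['goal']}'"}

    def answer(text: str, core: ReasoningCore) -> str:
        # The LLM sits at the boundary; the core does the actual solving.
        return llm_render(core.solve(llm_parse(text)))

    print(answer("book me a flight to Oslo", ReasoningCore()))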
finnjohnsen2|2 years ago
But the algorithms are a black box to me, so maybe there is some kind of launch pad to AGI within them.