top | item 39612141

lagt_t | 2 years ago

Every time they say LLMs are the path to AGI, I cringe a little.

Zambyte | 2 years ago

1. AGI needs an interface to be useful.

2. Natural language is both a good and expected interface to AGI.

3. LLMs do a really good job at interfacing with natural language.

Which one(s) do you disagree with?

Jensson | 2 years ago

I think he disagrees with 4:

4. Language prediction training will not get stuck in a local optimum.

Most previous things we trained on would have been better served if the model had developed AGI, but it didn't. There is no reason to expect LLMs not to get stuck in a local optimum as well, and I have seen no good argument for why they wouldn't get stuck like everything else we have tried.

BriggyDwiggs42 | 2 years ago

The underlying premise that LLMs are capable of fully generalizing to a human level across most domains, I assume?

jmull | 2 years ago

You're arguing that LLMs would be a good user interface for AGI...

Whether that's true or not, I don't think that's what the previous post was referring to. The question is, if you start with today's LLMs and progressively improve them, do you arrive at AGI?

(I think it's pretty obvious the answer is no -- LLMs don't even have an intelligence part to improve on. A hypothetical AGI might somehow use an LLM as part of a language interface subsystem, but the general intelligence would be outside the LLM. An AGI might also use speakers and mics but those don't give us a path to AGI either.)

MrScruff | 2 years ago

I don’t know whether they are or not, but I’m not sure how anyone could be so certain that they’re not that they’d find the mere idea cringeworthy. Unless you feel you have some specific perspective on it that’s escaped their army of researchers?

goatlover | 2 years ago

Because AI researchers have believed they were on the path to AGI several times before, until the hype died down and the limitations became apparent. And because nobody knows what it would take to create AGI. To put a little more behind that: evolution didn't start with language models. It evolved everything else first, until humans had the ability to invent language. Current AI is going about it completely backwards from how biology did it. Maybe robotics is doing a little better on that front.

CuriouslyC | 2 years ago

I mean, if you're using "LLM" as a stand-in for multi-modal models, and you're not disallowing things like a self-referential processing loop, a memory extraction process, etc., it's not so far-fetched. There might be multiple databases and a score of worker processes running in the background, but the core will come from a sequence model being run in a loop.
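The loop-plus-memory architecture described here can be sketched in a few lines. This is a toy illustration, not any real system's API: `run_model`, `extract_memories`, and `AgentLoop` are all hypothetical names, and the "model" is a stub standing in for a call to an actual sequence model.

```python
def run_model(prompt: str) -> str:
    """Stand-in for a sequence model call; a real system would query an LLM here."""
    return f"response to: {prompt}"

def extract_memories(text: str) -> list[str]:
    """Toy memory extraction: keep sentences that look like facts to remember."""
    return [s.strip() for s in text.split(".") if "remember" in s.lower()]

class AgentLoop:
    def __init__(self):
        self.memory: list[str] = []  # stand-in for a memory database

    def step(self, user_input: str) -> str:
        # Fold recent memories back into the prompt -- the self-referential part.
        context = " | ".join(self.memory[-3:])
        output = run_model(f"[memory: {context}] {user_input}")
        # Memory extraction; a real system might run this as a background worker.
        self.memory.extend(extract_memories(user_input))
        return output

loop = AgentLoop()
loop.step("Remember that my name is Ada.")
print(loop.step("What is my name?"))  # prompt now carries the stored memory
```

The point of the sketch is only that the control flow is ordinary plumbing around the model: the open question in the thread is whether the sequence model at the core can supply the intelligence.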

finnjohnsen2 | 2 years ago

Yeah, the idea that computers can truly think by mimicking our language really well doesn't make sense to me.

But the algorithms are a black box to me, so maybe there is some kind of launch pad to AGI within them.