
abra0 | 2 years ago

> rightfully so

How the hell can people be so confident about this? You describe two smart people reasonably disagreeing about a complicated topic.


jumploops | 2 years ago

The LLMs of today are just multidimensional mirrors that contain humanity's knowledge. They don't advance that knowledge; they just regurgitate it, remix it, and expose patterns. We train them. They are very convincing, and show that the Turing test may be flawed.

Given that AGI means the ability to perform "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.

Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.

Therefore something more than an LLM is needed to reach AGI; what that is, we don't yet know!

dboreham | 2 years ago

Prediction: there isn't a difference. The apparent difference is a manifestation of the human brain's delusions about how human brains work. The Turing test is a beautiful proof of this phenomenon: such-and-such a thing is impossibly hard, achievable only via the magic capabilities of human brains... oops, no, actually it's easily achievable now, so we'd better redefine our test. This cycle will continue until the singularity. Disclosure: I've long been skeptical about AI, but the writing is on the wall now.

mrangle | 2 years ago

I agree with your premise.

You're right: I haven't seen evidence of LLMs producing novel patterns that are genuinely creative.

They can find and remix patterns where pre-existing rules and maps detail where those patterns are and how to use them (e.g., grammar, phonics, or an index). But they can't expose new patterns whatsoever. At least public-facing LLMs can't. They can't abstract.

I think that this is an important distinction when speaking of AI pattern finding, as the language tends to imply AGI behavior.

But abstraction (as perhaps the actual marker of AGI) is so different from what they can do now that it essentially seems to be futurism whose footpath hasn't yet been found, let alone traversed.

When they can find novel patterns across concepts that previously seemed unconnected, then they will be onto something. When "AI" begins to see the hidden mirrors, so to speak.

FeepingCreature | 2 years ago

If LLMs can copy the symbolic behaviors that let humans generate new knowledge, they'll get there.

satvikpendem | 2 years ago

> ...they just regurgitate it, remix it, and expose patterns

Who cares? Sometimes the remixing of such patterns is what leads to new insights in us humans. It is dumb to think that remixing has no material benefit, especially when it clearly does.

bitcharmer | 2 years ago

> They are very convincing, and show that the Turing test may be flawed

The only thing flawed here is this statement. Are you even familiar with the premise of the Turing test?

smilekzs | 2 years ago

Maybe "rightfully so" meant "it is totally within Sam's right to claim that LLMs aren't sufficient for AGI"?