top | item 35782546


Mike_12345 | 2 years ago

> Part of my whole point is that LLMs are to NLPs as rockets are to airplanes.

Yes, it is deep learning applied to NLP, and it makes the old designs obsolete.

> the data is text

It is not randomly generated text. There are patterns in that text. It was trained to model the semantics or "meaning" in the text. There is a structure in the text which the machine has recognized.

It automatically learned a model of many concepts without any of those concepts being explicitly programmed into it. That's the entire point of machine learning.

> But not adequately, IMO.

It is adequate for some things and not adequate for other things.

It seems that all you are saying is that GPT is not AGI and doesn't have human-level understanding and reasoning. No one disagrees with that.



PaulDavisThe1st | 2 years ago

> It seems that all you are saying is that GPT is not AGI and doesn't have human level of understanding and reasoning. No one disagrees with that.

No, I'm saying that LLMs perform what is essentially an amazing, really hard to understand party trick that happens to be so good it is actually useful. They may (surprisingly) be indicative of how a lot of actual human speech production happens, or they may not, and if so, that's really valuable to discover. That this is surprising represents a bit of a failure of our imagination, but it also leaves untouched the parts of speech production that don't use this sort of process.

Like actual reasoning.

Mike_12345 | 2 years ago

Here's a prompt: "If there is a bob outside, and a bib inside, and the bib cannot see the bob, can the bob see the bib?"

Answer:

"If the bib cannot see the bob due to being inside, it is likely that the bob, which is outside, also cannot see the bib. The reason for the bib not being able to see the bob is presumably due to some kind of barrier, such as a wall or a window, which would likely obstruct the view for both the bib and the bob."

Prompt:

"What type of reasoning did you just apply to solve that task?"

Answer:

"In solving the task, I applied abductive reasoning. Abductive reasoning involves inferring the most likely explanation for a set of observations or statements. In this case, I inferred that a barrier (e.g., a wall or window) is the most likely explanation for why the bib cannot see the bob, and thus, the bob likely cannot see the bib either."

ChatGPT solves the problem by a mechanical reasoning process, not a "party trick". It's a machine, not a person at a party. Surely you don't believe that it has seen that exact sequence of questions and replies in its training set and is simply regurgitating. You can argue about whether it's actually using "abductive reasoning" or not, but that is beside the point: it involves some mechanical reasoning process over an interpretation of the prompt. It's not simple regurgitation.

AlphaZero learned to play Go starting with nothing but the rules of the game. What is it regurgitating there?

Mike_12345 | 2 years ago

Alright, so deep learning, the state of the art of AI, is a "party trick". AlphaZero is likewise a party trick. No "true" reasoning involved.

> Like actual reasoning.

You're relying on intuition and personal beliefs about what constitutes "true" reasoning instead of formal, rigorous mathematical definitions of reasoning. The general concept of reasoning includes, by definition, what the language models are doing when they solve natural language understanding tasks.

It just sounds like a No True Scotsman fallacy.