top | item 38322566

robbrown451|2 years ago

I agree with Hinton, although a lot hinges on your definition of "understand."

I think to best wrap your head around this stuff, you should look to the commonalities of LLMs, image generators, and even things like Alpha Zero and how it learned to play Go.

Alpha Zero is kind of the extreme in terms of not imitating anything that humans have done. It learns to play the game simply by playing itself -- and what they found is that there isn't really a limit to how good it can get. There may be some theoretical limit of a "perfect" Go player, or maybe not, but it will continue to converge towards perfection by continuing to train. And it can go far beyond what the best human Go player can ever do, even though very smart humans have spent their lifetimes deeply studying the game and Alpha Zero had to learn everything from scratch.

One other thing to take into consideration is that to play the game of Go you can't just think of the next move. You have to think far forward in the game -- even though technically all it's doing is picking the next move, it is doing so using a model that has obviously looked forward more than just one move. And that model is obviously very sophisticated; if you are going to say that it doesn't understand the game of Go, I would argue that you have a very oddly restricted definition of the word "understand," and one that isn't particularly useful.

Likewise, with large language models: while on the surface they may just be predicting the next word one after another, to do so effectively they have to be planning ahead. As Hinton says, there is no real limit to how sophisticated they can get. When training, a model is never going to be 100% accurate in predicting text it hasn't trained on, but it can continue to get closer and closer to 100% the more it trains. And the closer it gets, the more sophisticated a model it needs. In the sense that Alpha Zero needs to "understand" the game of Go to play effectively, the large language model needs to understand "the world" to get better at predicting.
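As a heavily simplified sketch of the "predicting the next word one after another" loop described above: the bigram table below is made up for illustration and stands in for a trained network, which would condition on the entire preceding context rather than just the last word.

```python
# Toy autoregressive next-token generation. The hard-coded bigram
# table is a stand-in for a trained model that maps context to a
# probability distribution over the next token.

bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "<end>": 0.1},
    "down": {"<end>": 1.0},
}

def generate(start, max_len=10):
    tokens = [start]
    while tokens[-1] in bigram_probs and len(tokens) < max_len:
        dist = bigram_probs[tokens[-1]]
        # Greedy decoding: pick the single most probable next token.
        next_tok = max(dist, key=dist.get)
        if next_tok == "<end>":
            break
        tokens.append(next_tok)
    return " ".join(tokens)

print(generate("the"))  # → the cat sat down
```

Real LLMs usually sample from the distribution rather than always taking the argmax, but the one-token-at-a-time structure is the same.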

lsy|2 years ago

The difference is that "the world" is not exhaustible in the same way as Go is. While it's surely true that the number of possible overall Go game states is extremely large, the game itself is trivially representable as a set of legal moves and rules. The "world model" of the Go board is actually just already exhaustive and finite, and the computer's work in playing against itself is to generate more varied data within that model rather than to develop that model itself. We know that when Alpha Zero plays a game against itself it is valuable data because it is a legitimate game which most likely represents a new situation it hasn't seen before and thus expands its capacity.

For an LLM, this is not even close to being the case. The sum of all human artifacts ever made (or yet to be made) doesn't exhaust the description of a rock in your front yard, let alone the world in all its varied possibility. And we certainly haven't figured out a "model" which would let a computer generate new and valid data that expands its understanding of the world beyond its inputs, so self-training is a non-starter for LLMs. What the LLM is "understanding", and what it is reinforced to "understand" is not the world but the format of texts, and while it may get very good at understanding the format of texts, that isn't equivalent to an understanding of the world.

famouswaffles|2 years ago

>The sum of all human artifacts ever made (or yet to be made) doesn't exhaust the description of a rock in your front yard, let alone the world in all its varied possibility.

No human or creature we know of has a "true" world model, so this is irrelevant. You don't experience the "real world". You experience a tiny slice of it, a few senses that are further slimmed down and even fabricated in parts.

To the bird who can intuitively sense and use electromagnetic waves for motion and guidance, your model of the world is fundamentally incomplete.

There is a projection of the world in text. Moreover, training on additional modalities is trivial for a transformer. That's all that matters.

pizza|2 years ago

Kant would like a word with you about your point on whether people themselves understand the world and not just the format of their perceptions... :)

I think if you're going to be strict about this, you have to defend against the point of view that the same 'ding an sich' problem applies to both LLMs and people. You'd also have to explain, if you had a limit sequence of KL divergences (one between a person's model and the world, one between an LLM's model and its texts), what it is about the way a person's divergence approaches 0, in some sense implying that their world model is becoming the same as the distribution of the world, that could only apply to people.

It seems possible to me that there is a great deal of lurking anthropocentrism that humanity is going to start noticing more and more in ourselves in the coming years, probably in both the direction of AI and the direction of other animals as we start to understand both better.

tazjin|2 years ago

The world on our plane of existence absolutely is exhaustible, just on a much, much larger scale. Doesn't mean that the process is fundamentally different, and for the human perspective there might be diminishing returns.

kubiton|2 years ago

What if we are just the result of a ml network with a model of the world?

wbillingsley|2 years ago

LLMs are very good at uncovering the mathematical relationships between words, many layers deep. Calling that understanding is a claim about what understanding is. But because we know how the LLMs we're talking about at the moment are trained, that claim seems to have more problems:

LLMs do not directly model the world; they train on and model what people write about the world. It is an AI model of a computed gestalt human model of the world, rather than a model of the world directly. If you ask it a question, it tells you what it models someone else (a gestalt of human writing) is most likely to say. That in turn is strengthened if user interaction accepts it, and corrected only if someone tells it something different.

If we were to define that as what "understanding" is, we would equivalently be saying that a human bullshit artist would have expert understanding if only they produced more believable bullshit. (They also just "try to sound like an expert".)

Likewise, I'm not convinced that we can measure its understanding just by identifying inaccuracies or measuring the difference between its answers and expert answers - there would be no difference between bluffing your way through the interview (relying on your interviewer's limitations in how they interrogate you) and acing it.

There seems to be a fundamental difference in levels of indirection. Where we "map the territory", LLMs "map the maps of the territory".

It can be an arbitrarily good approximation, and practically very useful, but it's a strong ontological step to say one thing "is" another just because it can be used like it.

robbrown451|2 years ago

"LLMs do not directly model the world; they train on and model what people write about the world"

This is true. But human brains don't directly model the world either, they form an internal model based on what comes in through their senses. Humans have the advantage of being more "multi-modal," but that doesn't mean that they get more information or better information.

Much of my "modeling of the world" comes from the fact that I've read a lot of text. But of course I haven't read even a tiny fraction of what GPT4 has.

That said, LLMs can already train on images, as GPT4-V does, and image generators do this as well; it's just a matter of time before the two are fully integrated. Later we'll see a lot more training on video and sound, all integrated into a single model.

voitvodder|2 years ago

We could anthropomorphize any textbook too and claim it has human-level understanding of the subject. We could then claim the second edition of the textbook understands the subject better than the first. Anyone who claims the LLM "understands" is doing exactly this. What makes the LLM case more absurd is that the LLM will actually tell you it doesn't understand anything, while a book remains silent; yet people want to pretend we are living in the Matrix and the LLM is alive.

Most arguments then descend into confusing the human knowledge embedded in a textbook with the human agency to apply the embedded knowledge. Software that extracts the knowledge from all textbooks has nothing to do with the human agency to use that knowledge.

I love chatGPT4 and had signed up in the first few hours it was released, but I actually canceled my subscription yesterday. Partly because of the bullshit with the company these past few days, but also because it had become a waste of time for me over the past few months. I learned so much this year, but I hit a wall: to make any progress I need to read the textbooks on the subjects I'm interested in, just like I had to this time last year before chatGPT.

We also shouldn't forget that children anthropomorphize toys and dolls quite naturally. It is entirely natural to anthropomorphize a LLM and especially when it is designed to pretend it is typing back a response like a human would. It is not bullshitting you though when it pretends to type back a response about how it doesn't actually understand what it is writing.

SkiFire13|2 years ago

> One other thing to take into consideration, is that to play the game of Go you can't just think of the next move. You have to think far forward in the game -- even though technically all it's doing is picking the next move, it is doing so using a model that has obviously looked forward more than just one move.

It doesn't necessarily have to look ahead. Since Go is a deterministic game there is always a best move (or moves that are better than others), and hence a function that maps the state of the game to the best move. We just don't have a way to compute this function, but it exists. And that function doesn't need the concept of lookahead; that's just an intuitive way of how one could find some of its values. Likewise, ML algorithms don't necessarily need lookahead; they can just try to approximate that function with enough precision by exploiting patterns in it. And that's why we can still craft puzzles that some AIs can't solve but humans can, by exploiting edge cases in that function that the ML algorithm didn't notice but that are solvable with understanding of the game.
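The existence of such a state-to-best-move function can be made concrete at toy scale. This sketch solves tic-tac-toe (standing in for Go, which is of course far too large for this) by exhaustive search; once such a function is tabulated, querying it involves no lookahead at all.

```python
# Build the state -> (value, best move) function for tic-tac-toe by
# exhaustive search. Boards are 9-char strings of 'x', 'o', '.'.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def best(board, player):
    """Return (value, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0, None
    other = "o" if player == "x" else "x"
    results = []
    for m in moves:
        child = board[:m] + player + board[m+1:]
        v, _ = best(child, other)  # opponent's value; negate for us
        results.append((-v, m))
    return max(results, key=lambda t: t[0])

value, move = best("xx.oo....", "x")
print(value, move)  # → 1 2 (x wins by playing square 2, completing the top row)
```

The search is how we *compute* the function's values, but the resulting table is just a mapping from state to move, which is the point being made above.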

The thing is though, does this really matter if eventually we won't be able to notice the difference?

bytefactory|2 years ago

> It doesn't necessarily have to look ahead. Since Go is a deterministic game there is always a best move

Is there really a difference between the two? If a certain move shapes the opponent's remaining possible moves into a smaller subset, hasn't AlphaGo "looked ahead"? In other words, when humans strategize and predict what happens in the real world, aren't they doing the same thing?

I suppose you could argue that humans also include additional world models in their planning, but it's not clear to me that these models are missing and impossible for machine learning models to generate during training.

xcv123|2 years ago

> Since Go is a deterministic game there is always a best move

The rules of the game are deterministic, but you may be going a step too far with that claim.

Is the game deterministic when your opponent is non-deterministic?

Is there an optimal move for any board state given that various opponents have varying strategies? What may be the best move against one opponent may not be the best move against another opponent.

jon_richards|2 years ago

> to play the game of Go you can't just think of the next move. You have to think far forward in the game -- even though technically all it's doing is picking the next move, it is doing so using a model that has obviously looked forward more than just one move.

While I imagine AlphaGo does some brute force and some tree exploration, I think the main "intelligent" component of AlphaGo is the ability to recognize a "good" game state from a "bad" game state based on that moment in time, not any future plans or possibilities. That pattern recognition is all it has once its planning algorithm has reached the leaves of the trees. Correct me if I'm wrong, but I doubt AlphaGo has a neural net evaluating an entire tree of moves all at once to discover meta strategies like "the opponent is focusing on this area" or "the opponent is on the back foot."

You can therefore imagine a pattern recognition algorithm so good that it is able to pick a move by only looking 1 move into the future, based solely on local stone densities and structures. Just play wherever improves the board state the most. It does not even need to "understand" that a game is being played.
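A minimal sketch of that idea on tic-tac-toe, with a crude hand-written evaluation standing in for a trained value network: score each candidate position statically, with no search at all, and play whichever move scores best.

```python
# One-ply greedy move selection: evaluate each resulting position
# with a static scoring function and pick the best. No lookahead.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def evaluate(board, player):
    """Static score: +10 per completed line, +1 per line with two of
    ours and none of the opponent's (a crude stand-in for a value net)."""
    other = "o" if player == "x" else "x"
    score = 0
    for line in LINES:
        cells = [board[i] for i in line]
        if cells.count(player) == 3:
            score += 10
        elif cells.count(player) == 2 and cells.count(other) == 0:
            score += 1
    return score

def greedy_move(board, player):
    moves = [i for i, c in enumerate(board) if c == "."]
    return max(moves, key=lambda m: evaluate(board[:m] + player + board[m+1:], player))

print(greedy_move("xx.oo....", "x"))  # → 2 (completing the top row scores highest)
```

The quality of play is entirely determined by how good the evaluation is, which is exactly the comment's point: a sufficiently good pattern recognizer needs to look only one move ahead.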

> while on the surface, they may be just predicting the next word one after another, to do so effectively they have to be planning ahead.

So I don't think this statement is necessarily true. "Understanding" is a major achievement, but I don't think it requires planning. A computer can understand that 2+2=4 or where to play in tic-tac-toe without any "planning".

That said, there's probably not much special about the concept of planning either. If it's just simulating a tree of future possibilities and pruning it based on evaluation, then many algorithms have already achieved that.

theGnuMe|2 years ago

The "meta" here is just the probability distribution of stone densities. The only way it can process those is by monte Carlo simulation. The DNN (trained by reinforcement learning) evaluates the simulations and outputs the top move(s).

klodolph|2 years ago

> As Hinton says, there is no real limit to how sophisticated they can get.

There’s no limit to how sophisticated a model can get, but,

1. That’s a property shared with many architectures, and not really that interesting,

2. There are limits to the specific ways that we train models,

3. We care about the relative improvement that these models deliver, for a given investment of time and money.

From a mathematical perspective, you can just kind of keep multiplying the size of your model, and you can prove that it can represent arbitrary complicated structures (like, internal mental models of the world). That doesn’t mean that your training methods will produce those complicated structures.

With Go, I can see how the model itself can be used to generate new, useful training data. How such a technique could be applied to LLMs is less clear, and its benefits are more dubious.

Jensson|2 years ago

A big difference between a game like Go and writing text is that text is single player. I can write out the entire text, look at it and see where I made mistakes on the whole and edit those. I can't go back in a game of Go and change one of my moves that turned out to be a mistake.

So trying to make an AI that solves the entire problem before writing the first letter will likely not result in a good solution, while also making it compute way too much, since it solves the entire problem for every token generated. That is the kind of AI we know how to train, so for now that is what we have to live with, but it isn't the kind of AI that would be efficient or smart.

bytefactory|2 years ago

This doesn't seem like a major difference, since LLMs are also choosing the most likely token from a probability distribution, which is why they respond a token at a time. They can't "write out" the entire text at once, which is why fascinating methods like "think step by step" work at all.

Someone|2 years ago

> There may be some theoretical limit of a "perfect" Go player, or maybe not, but it will continue to converge towards perfection by continuing to train

I don’t think that’s a given. AlphaZero may have found an extremely high local optimum that isn’t the global optimum.

When playing only against itself, it won’t be able to get out of that local optimum, and when getting closer and closer to it, it may even ‘forget’ how to play against players that make moves AlphaGo never would make, and that may be sufficient for a human to beat it. (Something like that happened with computer chess in the early years, where players would figure out which board positions computers were bad at and try to get such positions on the board.)

I think you have to keep letting it play against other good players (human or computer) that play differently to have it keep improving, and even then, there’s no guarantee it will find a global optimum.

theGnuMe|2 years ago

AlphaZero runs Monte Carlo tree search, so it has a next-move "planning" simulator. This computes the probability that specific moves up to some depth lead to a win.
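The flavor of that simulation can be sketched with flat Monte Carlo on tic-tac-toe: estimate each candidate move's win probability by random playouts and pick the best. (Real AlphaZero uses full tree search guided by a policy/value network, not uniform random playouts; this is just the simplest version of the idea.)

```python
# Flat Monte Carlo move selection: for each legal move, run random
# playouts to the end of the game and pick the move with the highest
# estimated win rate.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout(board, player, rng):
    """Play uniformly random moves to the end; return the winner or None."""
    while True:
        w = winner(board)
        if w or "." not in board:
            return w
        m = rng.choice([i for i, c in enumerate(board) if c == "."])
        board = board[:m] + player + board[m+1:]
        player = "o" if player == "x" else "x"

def mc_move(board, player, n_playouts=200, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    other = "o" if player == "x" else "x"
    def win_rate(m):
        child = board[:m] + player + board[m+1:]
        wins = sum(rollout(child, other, rng) == player for _ in range(n_playouts))
        return wins / n_playouts
    return max((i for i, c in enumerate(board) if c == "."), key=win_rate)

print(mc_move("xx.oo....", "x"))  # → 2 (the immediate win has estimated win rate 1.0)
```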

LLMs do not have a "planning" module or simulator. There is no way the LLM can plan.

Could we build a planning system into an LLM? Possibly, and probably, but that is still open research. LeCun is trying to figure out how to train them effectively. But even an LLM with a planning system does not make it AGI.

Some will argue that iteratively feeding the output embedding back into the input will retain the context, but even in those cases it rapidly diverges, or as we say, "hallucinates". This still happens even with large input context windows. So there is still no planning here, and no world model or understanding.

eviks|2 years ago

The issue with the Alpha Zero analogy is that those are extremely constrained conditions, so it can't be generalized to something infinitely more complicated like speech.

And

> When training, it is never going to be 100% accurate in predicting text it hasn't trained on, but it can continue to get closer and closer to 100% the more it trains.

For example, it could reach 25% accuracy and have a mathematical limit of 26%, so "forever getting closer to 100% with time" would still result in a waste of even infinite resources.

user_named|2 years ago

It's not planning ahead; it is looking at the probabilities of the tokens altogether rather than one by one.

anothernewdude|2 years ago

> You have to think far forward in the game -

I disagree. You can think in terms of a system that doesn't involve predictions at all, but has the same or similar enough outcome.

So an action network just learns patterns. Just like a chess player can learn what positions look good without thinking ahead.

huytersd|2 years ago

Next word generation is one way to put it. The key point here is we have no idea what’s happening in the black box that is the neural network. It could be forming very strong connections between concepts in there, with multi-tiered abstractions.

theGnuMe|2 years ago

It is certainly not abstracting things.

gilbetron|2 years ago

If LLMs are just glorified autocompletion, then humans are too!

notjoemama|2 years ago

> I would argue that you have a very, oddly restricted definition of the word, understand, and one that isn't particularly useful.

Is it just me or does this read like “here is my assumption about what you said, and now here is my passive aggressive judgement about that assumption”? If you’re not certain about what they mean by the word “understand”, I bet you could ask and they might explain it. Just a suggestion.

SpicyLemonZest|2 years ago

I've asked that question in the past and I've never gotten an answer. Some people sidestep the question by describing something or other that they're confident isn't understanding; others just decline to engage entirely, asserting that the idea is too ridiculous to take seriously. In my experience, people with a clear idea of what they mean by the word "understand" are comfortable saying that ML models understand things.

greenthrow|2 years ago

This is absolute nonsense. The game of Go is a grid and two colors of pieces. "The world" here is literally everything.