robbrown451|2 years ago
I think to best wrap your head around this stuff, you should look to the commonalities of LLMs, image generators, and even things like AlphaZero and how it learned to play Go.
AlphaZero is kind of the extreme in terms of not imitating anything that humans have done. It learns to play the game simply by playing itself -- and what they found is that there isn't really a limit to how good it can get. There may be some theoretical limit of a "perfect" Go player, or maybe not, but it will continue to converge towards perfection by continuing to train. And it can go far beyond what the best human Go player can ever do, even though very smart humans have spent their lifetimes deeply studying the game while AlphaZero had to learn everything from scratch.
One other thing to take into consideration is that to play the game of Go you can't just think of the next move. You have to think far forward in the game -- even though technically all it's doing is picking the next move, it is doing so using a model that has obviously looked forward more than just one move. And that model is obviously very sophisticated; if you are going to say that it doesn't understand the game of Go, I would argue that you have an oddly restricted definition of the word "understand", and one that isn't particularly useful.
Likewise with large language models: while on the surface they may just be predicting the next word one after another, to do so effectively they have to be planning ahead. As Hinton says, there is no real limit to how sophisticated they can get. When training, a model is never going to be 100% accurate in predicting text it hasn't trained on, but it can continue to get closer and closer to 100% the more it trains. And the closer it gets, the more sophisticated a model it needs. In the sense that AlphaZero needs to "understand" the game of Go to play effectively, the large language model needs to understand "the world" to get better at predicting.
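To make the "getting closer to 100%" framing concrete, here is a toy sketch of the training signal being described: the model is scored on the probability it assigns to the actual next token (cross-entropy). The vocabularies and probabilities below are made up for illustration; a real LLM does this over tens of thousands of tokens at once.

```python
# A hedged sketch of next-token prediction loss: -log p(actual next token).
# Loss falls as predictions sharpen, but never reaches zero unless p = 1.
import math

def next_token_loss(predicted_probs: dict, actual_token: str) -> float:
    """Cross-entropy contribution of one prediction: -log p(actual token)."""
    return -math.log(predicted_probs[actual_token])

# A vague model vs. a sharper one, predicting the word after "the cat sat on the":
vague = {"mat": 0.25, "rug": 0.25, "sofa": 0.25, "roof": 0.25}
sharp = {"mat": 0.70, "rug": 0.15, "sofa": 0.10, "roof": 0.05}

print(next_token_loss(vague, "mat"))  # ~1.386
print(next_token_loss(sharp, "mat"))  # ~0.357 -- lower loss, still not 0
```

The claim in the comment is that driving this loss ever lower on diverse, unseen text eventually forces the model to encode something about the world the text describes.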
lsy|2 years ago
For an LLM, this is not even close to being the case. The sum of all human artifacts ever made (or yet to be made) doesn't exhaust the description of a rock in your front yard, let alone the world in all its varied possibility. And we certainly haven't figured out a "model" which would let a computer generate new and valid data that expands its understanding of the world beyond its inputs, so self-training is a non-starter for LLMs. What the LLM is "understanding", and what it is reinforced to "understand" is not the world but the format of texts, and while it may get very good at understanding the format of texts, that isn't equivalent to an understanding of the world.
famouswaffles|2 years ago
No human or creature we know of has a "true" world model, so this is irrelevant. You don't experience the "real world". You experience a tiny slice of it: a few senses, further slimmed down and even fabricated in parts.
To the bird that can intuitively sense and use magnetic fields for motion and guidance, your model of the world is fundamentally incomplete.
There is a projection of the world in text. Moreover training on additional modalities is trivial for a transformer. That's all that matters.
pizza|2 years ago
I think if you're going to be strict about this, you have to defend against the point of view that the same 'Ding an sich' problem applies to both LLMs and people. And if you had a limit sequence of KL divergences -- one from a person's view of the world, one from an LLM's view of texts -- you would also have to say what it is about how a person comes to grasp reality better (their KL divergence approaching 0, in some sense implying their world model is becoming the same as the distribution of the world) that can only apply to people.
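For readers unfamiliar with the term: KL divergence measures how far a model distribution Q is from a "true" distribution P, and it is 0 exactly when the two match. A minimal sketch (the distributions here are illustrative, not data):

```python
# D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)), in nats.
import math

def kl_divergence(p, q):
    """KL divergence from model q to true distribution p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

world = [0.5, 0.3, 0.2]     # the "true" distribution
rough = [0.34, 0.33, 0.33]  # a poor model
close = [0.49, 0.31, 0.20]  # a better model

print(kl_divergence(world, rough))  # larger
print(kl_divergence(world, close))  # smaller
print(kl_divergence(world, world))  # 0.0 -- the model matches the distribution
```

The comment's point is that "KL divergence approaching 0" is a model-agnostic notion of grasping a distribution better; nothing in the formula singles out people.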
It seems possible to me that there is probably a great deal of lurking anthropocentrism that humanity is going to start noticing more and more in ourselves in the coming years, in both the direction of AI and the direction of other animals, as we start to understand both better.
wbillingsley|2 years ago
LLMs do not directly model the world; they train on and model what people write about the world. It is an AI model of a gestalt human model of the world, rather than a model of the world directly. If you ask it a question, it tells you what it models someone else (a gestalt of human writing) would most likely say. That in turn is strengthened if user interaction accepts it, and corrected only if someone tells it something different.
If we were to define that as what "understanding" is, we would equivalently be saying that a human bullshit artist would have expert understanding if only they produced more believable bullshit. (They also just "try to sound like an expert".)
Likewise, I'm not convinced that we can measure its understanding just by identifying inaccuracies or measuring the difference between its answers and expert answers: there would be no difference between bluffing your way through the interview (relying on your interviewer's limitations in how they interrogate you) and acing the interview.
There seems to be a fundamental difference in levels of indirection. Where we "map the territory", LLMs "map the maps of the territory".
It can be an arbitrarily good approximation, and practically very useful, but it's a strong ontological step to say one thing "is" another just because it can be used like it.
robbrown451|2 years ago
This is true. But human brains don't directly model the world either, they form an internal model based on what comes in through their senses. Humans have the advantage of being more "multi-modal," but that doesn't mean that they get more information or better information.
Much of my "modeling of the world" comes from the fact that I've read a lot of text. But of course I haven't read even a tiny fraction of what GPT4 has.
That said, LLMs can already train on images, as GPT4-V does. And the image generators as well do this, it's just a matter of time before the two are fully integrated. Later we'll see a lot more training on video and sound, and it all being integrated into a single model.
voitvodder|2 years ago
Most arguments then descend into confusing the human knowledge embedded in a textbook with the human agency to apply the embedded knowledge. Software that extracts the knowledge from all textbooks has nothing to do with the human agency to use that knowledge.
I love ChatGPT-4 and had signed up in the first few hours it was released, but I actually canceled my subscription yesterday. Partly because of the bullshit with the company these past few days, but also because it had just become a waste of time for me these past few months. I learned so much this year, but I hit a wall: to make any progress I need to read the textbooks on the subjects I am interested in, just like I had to this time last year before ChatGPT.
We also shouldn't forget that children anthropomorphize toys and dolls quite naturally. It is entirely natural to anthropomorphize an LLM, especially when it is designed to pretend it is typing back a response like a human would. It is not bullshitting you, though, when it types back a response saying it doesn't actually understand what it is writing.
SkiFire13|2 years ago
It doesn't necessarily have to look ahead. Since Go is a deterministic game there is always a best move (or moves that are better than others), and hence a function from the state of the game to the best move. We just don't have a way to compute this function, but it exists. And that function doesn't need the concept of lookahead; that's just an intuitive way one could find some of its values. Likewise, ML algorithms don't necessarily need lookahead: they can just approximate that function with enough precision by exploiting patterns in it. And that's why we can still craft puzzles that some AIs can't solve but humans can, by exploiting edge cases in that function that the ML algorithm didn't pick up on but that are solvable with an understanding of the game.
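This point can be made concrete with a much smaller deterministic game than Go. A hedged sketch using Nim (take 1-3 stones per turn; taking the last stone wins): the best-move function computed by a full tree search turns out to equal a simple pattern in the state, with no lookahead at all.

```python
# Two ways to compute the same "best move" function for Nim.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win (take 1-3 stones, last stone wins)."""
    return stones > 0 and any(not wins(stones - t) for t in (1, 2, 3) if t <= stones)

def move_by_lookahead(stones: int) -> int:
    """Pick the next move by searching the whole game tree."""
    for t in (1, 2, 3):
        if t <= stones and not wins(stones - t):
            return t
    return 1  # every move loses; take anything

def move_by_pattern(stones: int) -> int:
    """Pick the next move from the state alone -- no search, just a pattern."""
    return (stones % 4) or 1  # leave the opponent a multiple of 4

# The two agree everywhere: lookahead was only one way to find the function.
assert all(move_by_lookahead(s) == move_by_pattern(s) for s in range(1, 100))
```

For Go no such tidy closed-form pattern is known, but the comment's argument is that a learned approximation of the function plays the same role as `move_by_pattern` here.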
The thing is though, does this really matter if eventually we won't be able to notice the difference?
bytefactory|2 years ago
Is there really a difference between the two? If a certain move shapes the opponent's remaining possible moves into a smaller subset, hasn't AlphaGo "looked ahead"? In other words, when humans strategize and predict what happens in the real world, aren't they doing the same thing?
I suppose you could argue that humans also include additional world models in their planning, but it's not clear to me that these models are missing and impossible for machine learning models to generate during training.
xcv123|2 years ago
The rules of the game are deterministic, but you may be going a step too far with that claim.
Is the game deterministic when your opponent is non-deterministic?
Is there an optimal move for any board state given that various opponents have varying strategies? What may be the best move against one opponent may not be the best move against another opponent.
jon_richards|2 years ago
While I imagine AlphaGo does some brute force and some tree exploration, I think the main "intelligent" component of AlphaGo is the ability to recognize a "good" game state from a "bad" game state based on that moment in time, not any future plans or possibilities. That pattern recognition is all it has once its planning algorithm has reached the leaves of the trees. Correct me if I'm wrong, but I doubt AlphaGo has a neural net evaluating an entire tree of moves all at once to discover meta-strategies like "the opponent focusing on this area" or "the opponent feeling on the back foot."
You can therefore imagine a pattern recognition algorithm so good that it is able to pick a move by only looking 1 move into the future, based solely on local stone densities and structures. Just play wherever improves the board state the most. It does not even need to "understand" that a game is being played.
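That "1-ply" idea can be sketched in a few lines: given some evaluation function over positions, just play whichever legal move yields the best-looking next position, with no deeper search. Everything here (the toy "game" and the stand-in value function) is made up for illustration; imagine a trained value network in place of `value`.

```python
def greedy_policy(state, legal_moves, apply_move, evaluate):
    """Pick the move whose immediate resulting position scores highest."""
    return max(legal_moves(state), key=lambda m: evaluate(apply_move(state, m)))

# Toy usage: positions are numbers, moves shift them, and the stand-in
# evaluation function prefers positions near 10.
state = 7
legal = lambda s: [-1, +1, +2]
step = lambda s, m: s + m
value = lambda s: -abs(s - 10)

print(greedy_policy(state, legal, step, value))  # -> 2 (7 + 2 = 9 is closest to 10)
```

Nothing in `greedy_policy` "knows" a game is being played; all the apparent foresight would live inside how good `evaluate` is.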
> while on the surface, they may be just predicting the next word one after another, to do so effectively they have to be planning ahead.
So I don't think this statement is necessarily true. "Understanding" is a major achievement, but I don't think it requires planning. A computer can understand that 2+2=4 or where to play in tic-tac-toe without any "planning".
That said, there's probably not much special about the concept of planning either. If it's just simulating a tree of future possibilities and pruning it based on evaluation, then many algorithms have already achieved that.
klodolph|2 years ago
There’s no limit to how sophisticated a model can get, but,
1. That’s a property shared with many architectures, and not really that interesting,
2. There are limits to the specific ways that we train models,
3. We care about the relative improvement that these models deliver, for a given investment of time and money.
From a mathematical perspective, you can just kind of keep multiplying the size of your model, and you can prove that it can represent arbitrary complicated structures (like, internal mental models of the world). That doesn’t mean that your training methods will produce those complicated structures.
With Go, I can see how the model itself can be used to generate new, useful training data. How such a technique could be applied to LLMs is less clear, and its benefits are more dubious.
Jensson|2 years ago
So trying to make an AI that solves the entire problem before writing the first letter will likely not result in a good solution, and it also computes far too much, since it re-solves the entire problem for every token generated. That is the kind of AI we know how to train, so for now that is what we have to live with, but it isn't the kind of AI that would be efficient or smart.
Someone|2 years ago
I don’t think that’s a given. AlphaZero may have found an extremely high local optimum that isn’t the global optimum.
When playing only against itself, it won’t be able to get out of that local optimum, and when getting closer and closer to it, it may even ‘forget’ how to play against players who make moves that AlphaZero never would. That may be sufficient for a human to beat it (something like that happened with computer chess in the early years, where players would figure out which board positions computers were bad at and try to get such positions on the board).
I think you have to keep letting it play against other good players (human or computer) that play differently to have it keep improving, and even then, there’s no guarantee it will find a global optimum.
theGnuMe|2 years ago
LLMs do not have a "planning" module or simulator. There is no way the LLM can plan.
Could you build a planning system into an LLM? Possibly, and probably, but that is still open research. LeCun is trying to figure out how to train them effectively. But even an LLM with a planning system does not make it AGI.
Some will argue that iteratively feeding the output embedding back into the input will retain the context, but even in those cases it rapidly diverges, or as we say "hallucinates"; this still happens even with large input context windows. So there is still no planning here, and no world model or understanding.
eviks|2 years ago
And
> When training, it is never going to be 100% accurate in predicting text it hasn't trained on, but it can continue to get closer and closer to 100% the more it trains.
For example, it can reach 25% accuracy while having a mathematical limit of 26%, so "forever getting closer to 100% with time" would still mean wasting even infinite resources.
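The arithmetic of that objection in two lines (using the same made-up 25%/26% numbers): a sequence can be strictly increasing forever yet capped far below 100%.

```python
# Toy accuracy curve: strictly increasing in n, but bounded above by 26%.
def accuracy(n: int) -> float:
    """Accuracy after n training steps: approaches 0.26 from below, never 1.0."""
    return 0.26 - 0.01 / n

print(accuracy(1))          # 0.25
print(accuracy(1_000_000))  # still below 0.26, no matter how large n gets
```

"Getting closer with more training" only helps if the limit being approached is actually near 100%, which is exactly what the parent comment assumes without argument.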
icy_deadposts|2 years ago
> it will continue to converge towards perfection
Then someone discovered a flaw that made it repeatedly beatable by relative amateurs, in a way that no human player would be.
https://www.vice.com/en/article/v7v5xb/a-human-amateur-beat-...
anothernewdude|2 years ago
I disagree. You can think in terms of a system that doesn't involve predictions at all, but has the same or similar enough outcome.
An action network just learns patterns, just like a chess player can learn what positions look good without thinking ahead.
notjoemama|2 years ago
Is it just me or does this read like “here is my assumption about what you said, and now here is my passive aggressive judgement about that assumption”? If you’re not certain about what they mean by the word “understand”, I bet you could ask and they might explain it. Just a suggestion.