I think this paper is talking about reinforcement learning as part of (or after) the main training; the model then does inference as normal.
They might have done that for o1, but the bigger change is the "runtime train of thought": once the model receives the prompt, and before giving a definitive answer, it "thinks" in words and readjusts at runtime.
At least that's my understanding of the two approaches, and if that's true, then they're not similar.
AFAIK, OpenAI has been doing reinforcement learning since the first version of ChatGPT, for all subsequent models; that's why you can leave feedback in the UI in the first place.
I found the paper a tad difficult to understand because it spends a lot of time circling around the main thesis instead of describing it directly. So, to the best of my understanding:
We want to improve LLMs' ability to give correct answers to hard problems. One theory is that we can do that by training a "self-correcting" behavior into the models, where they can take a wrong answer as input and improve it into a better/correct answer.
This has been explored previously, trying to train this behavior using various reinforcement learning techniques where the reward is based on how good the "corrected" answer is. So far it hasn't worked well, and the trained behavior doesn't generalize.
The thesis of the paper is that this is because, when the model is presented with a training example of `Answer 1, Reasoning, Corrected Answer` and a signal of "make Corrected Answer better", it actually has _two_ perfectly viable ways to do that. One is to improve `Reasoning, Corrected Answer`, which would yield a higher reward and is what we want. The other, just as valid, solution is to simply improve `Answer 1` and have `Corrected Answer` = `Answer 1`.
The latter is what existing research has shown happens, and why attempts to train the desired behavior have so far failed. The models just try to improve their answers, not their correcting behavior. This paper's solution is to change the training regimen slightly to encourage the former approach, and thus, hopefully, get the model to actually learn the desired behavior of correcting previous answers.
This is done in two stages of training. In the first stage, the model is forced (by a KL-divergence loss) to keep its first answers the same, while being rewarded for improving the second answer. This keeps the model's distribution of initial answers unchanged, avoiding the later issue where the model doesn't see as many "wrong" answers because wrong answers were trained out of it, while still initializing the "self-correcting" behavior into the model.
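If I understand the Stage 1 setup, the objective roughly trades off reward on the second answer against drift in the first. A minimal sketch of that trade-off, where the function name, the sample-based KL estimate, and the `beta` weighting are my guesses rather than anything from the paper:

```python
def stage1_loss(logp_first_new, logp_first_ref, reward_second, beta=0.25):
    """Sketch of a Stage-1 objective: reward the corrected (second)
    answer, but penalize drift in the first-answer distribution.

    logp_first_new / logp_first_ref: per-token log-probs of the first
    answer under the current policy and the frozen reference policy.
    """
    # Sample-based KL estimate: keeps Answer 1 close to the base model
    kl_first = sum(n - r for n, r in zip(logp_first_new, logp_first_ref)) / len(logp_first_new)
    # Minimizing this maximizes reward on Answer 2 while pinning Answer 1
    return -reward_second + beta * kl_first
```

With zero drift the loss is just the negated reward; any divergence on the first answer makes the loss worse, which is the mechanism keeping the initial-answer distribution intact.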
In the second stage the model is free to change the first answer, but they tweak the reward function to give higher rewards for "flips" (where answer 1 was bad but answer 2 was good). So in this second stage it can use both strategies, improving its first answer or improving its self-correction, but it gets more reward for the latter. This seems to be a refinement pass on the model, improving things overall while keeping the self-correcting behavior intact.
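To make the Stage 2 shaping concrete, here's a rough sketch of a reward with a "flip" bonus. The constants, and the penalty for good-to-bad regressions, are my own guesses, not the paper's:

```python
def stage2_reward(correct1, correct2, base=1.0, bonus=0.5):
    """Sketch of a Stage-2 reward favoring self-correction 'flips'.

    correct1 / correct2: whether attempt 1 / attempt 2 is correct.
    """
    r = base * correct2          # main signal: is the final answer right?
    if correct2 and not correct1:
        r += bonus               # extra credit for a bad -> good flip
    if correct1 and not correct2:
        r -= bonus               # discourage good -> bad regressions
    return r
```

Under this shaping, a bad-then-corrected trajectory (1.5) outscores a right-the-first-time one (1.0), which is exactly the pressure toward learning to correct rather than just improving Answer 1.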
Anyway, blah blah blah, metrics showing the technique working better and generalizing better.
Seems reasonable to me. I'd be a bit worried about the model, in Stage 2, learning to write _worse_ answers for Answer 1 so it can maximize the reward for flipping answers. So you'd need some kind of balancing to ensure Answer 1 doesn't get worse. Not sure if that's in their reward function or not, or if it's even a valid concern in practice.
Circling around the idea in a response describes what I see in a lot of LLM output quite well. I haven't tried o1 myself, but it does seem to fix that problem.
LLMs have no direct recollection of the qualia of their own training. That is at least a major way I self-correct: if I'm about to talk about something I know, I'll try to figure out how/why I know that thing, and in so doing gauge whether I actually know it, whether I'm hallucinating, or whether I actually heard it from a less-than-reliable source, etc.
I don't think LLMs can self-correct without remembering their own training in some way.
So you’re saying the solution is to prefix each training batch with a description of a sensory experience ("You read the following in a Paris cafe in 1997. While you read, you have an excellent baguette, some boiled eggs, and over-roasted coffee. The woman one table over is wearing a beautiful blue hat.") and then post-train the final model into recalling the setting where it read any piece of text, or failing to recall any experience when presented with text it didn’t read?
(If someone tries this and it works, I’m quitting my phd and going back to camp counseling)
Sort of like this? It does help: Source-Aware Training Enables Knowledge Attribution in Language Models (https://arxiv.org/abs/2404.01019)
From the abstract:
> ... To give LLMs such ability, we explore source-aware training -- a recipe that involves (i) training the LLM to associate unique source document identifiers with the knowledge in each document, followed by (ii) an instruction-tuning stage to teach the LLM to cite a supporting pretraining source when prompted.
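As a rough illustration of step (i) of that recipe, the idea amounts to injecting a unique document identifier into each pretraining example so the model can associate knowledge with its source. The exact tagging format below is my guess, not the paper's:

```python
def add_source_ids(docs):
    """Tag each pretraining document with a unique source identifier
    so the model can later learn to cite it (format is illustrative)."""
    tagged = []
    for i, text in enumerate(docs):
        doc_id = f"<doc-{i:06d}>"
        # Repeat the ID around the text so the association survives chunking
        tagged.append(f"{doc_id} {text} {doc_id}")
    return tagged
```

Step (ii) would then instruction-tune the model to emit the matching `<doc-…>` identifier when asked for a source.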
I think you're overweighting the value of that in day-to-day use. As folks accumulate knowledge, a common pattern (especially for trivia-like data not embedded in a framework) is "I have no idea why I'd know this, but the answer is X".
But even if it's embedded in a framework, say CS, the qualia fade into the background as time passes. E.g., like everybody in CS, I'm pretty much able to quote the O() performance characteristics of a sizeable number of algorithms off the bat. If you ask me where I learned it for any specific algorithm, that's long receded into the past.
When humans self-correct, the normal process isn't "gauging whether you know the thing", or the even more impressive feat of recalling whether you heard it from a "less than reliable source". There's a fuzzy sense of "I don't fully understand this", and self-correction means re-verifying the info from a trusted source.
So, no, I don't think the qualia matter for recall as much as you think.
Spoiler: You're never going to get rid of hallucinations in the autoregressive, next token prediction paradigm (aka LeCun's Law).
The issue here is people trying to use language models as deterministic problem solvers, rather than for what they actually excel at (semi-creative text generation).
Is LeCun's Law even a thing? Searching for it doesn't yield many results, except an HN comment where it has a different definition. I guess it could be from some obscure paper, but with how poorly it's documented, it seems weird to bring it up in this context.
Does anyone here know if anyone has tried something like feeding the perplexity of previous tokens back into the model, so that it has a way of knowing when it's going off the rails? Maybe it could be trained to respond less confidently in those cases, reducing its tendency to hallucinate.
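As a rough sketch of what the signal could look like, assuming access to the per-token log-probabilities the decoder already produces (everything here is hypothetical, not an existing technique in any particular library):

```python
import math

def running_perplexity(token_logprobs, window=20):
    """Sliding-window perplexity over the tokens generated so far,
    as a 'going off the rails' signal that could be fed back to the
    model (e.g. as an extra input feature during decoding)."""
    signals = []
    for t in range(len(token_logprobs)):
        recent = token_logprobs[max(0, t - window + 1): t + 1]
        ppl = math.exp(-sum(recent) / len(recent))
        signals.append(ppl)
    return signals
```

A spike in this signal marks a run of low-confidence tokens; the open question is whether conditioning on it would actually teach the model to hedge rather than just giving it another input to ignore.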
One way I explain it to people: Imagine a corporation that only has a PR department. Extremely good at generating press releases and answering reporter questions. But without the rest of the company, the output text isn't constrained by anything meaningful.
In an alternate universe, one where people understood this, people would be using LLMs for nothing serious, but a whole lot of fun little art projects.
If you're talking about label bias then you don't need to solve label bias to 'solve' hallucinations when the model has already learnt internally when it's bullshitting or going off the rails.
I hate that the AI pundits have succeeded in popularizing the notion of "hallucination", anthropomorphizing these balls of statistics into something that seems like it's actually in some sort of deep thought process akin to a person's mind.
No, it's not "hallucinating". It's not lying, or making things up, or anything like that either. It's spitting out data according to what triggers the underlying weights. If this were a regular JSON API endpoint, you wouldn't say the API is hallucinating, you'd say "This API is shit" because it's broken.
> I hate that the AI pundits have succeeded in popularizing the notion of "hallucination", anthropomorphizing these balls of statistics into something that seems like it's actually in some sort of deep thought process akin to a person's mind.
I'd argue the opposite: people think a person's mind is in "deep thought" when it's actually just a ball of statistics.
The right word is "confabulation", which is when we fill in missing information but may not be aware that we are doing it.
We all confabulate to some degree, as any neural system must, since no training data is stored perfectly.
Human "hallucinations" in contrast, are a particular kind of breakdown in our sensory feedback loops. Which is not a process LLMs even have.
Hallucinations occur when our internal sensory feedback loops overpower actual sensory input, resulting in a stream of false sensory experience/signals being generated and processed. The false running experience might still incorporate some actual sensory information or not.
When we dream, we are hallucinating - our sensory experience loop running free of our actual senses - to a productive purpose.
The reason our senses have feedback is so that we can use our interpretation of sensory input as cues to make interpreting the next moment's input easier. But it's important that our running interpretation can reset when new input significantly diverges from our expectations, so it can quickly reorient.
(Not only is it important to revert to a raw interpretation of input, to ensure our running interpretation keeps up with actual context changes and corrects misinterpretations, but such resets signal that something novel or unexpected has happened, and so likely trigger learning.)
So "hallucinations" was an unfortunate and misleading choice of terminology.
I've got bad news for you – that term was used in deep learning research well before LLMs came on the scene. It has nothing to do with pundits trying to popularize anything or trying to justify LLMs' shortcomings, it was just a label researchers gave to a phenomenon they were trying to study.
A couple of papers that use it in this way prior to LLMs:
- 2021: The Curious Case of Hallucinations in Neural Machine Translation (https://arxiv.org/abs/2104.06683)
- 2019: Identifying Fluently Inadequate Output in Neural and Statistical Machine Translation (https://aclanthology.org/W19-6623/)
Maybe an evolutionary / structuralist lens is helpful here: terms that rapidly diffuse through discourse are those that people like most, and most people like to anthropomorphize, so "hallucination" has come to take on a new meaning, and we all (to different degrees) know what it is referring to.
Yeah it's simply model error. All models from Linear Regression to LLMs have error. I guess because this type of error is in the form of deceptively reasonable human language, it gets a different moniker. It's also notably harder to quantify so it might warrant a different name.
elcomet|1 year ago
I don't see any mention of weight release unfortunately.
kick_in_the_dor|1 year ago
Isn't improving "Answer 1" the whole point?
Your write-up makes it sound like "Answer 1" is an input, but isn't it an output from the LLM?
triclops200|1 year ago
See also: https://www.sciencedirect.com/science/article/pii/S157106452... o1's training regime is described by the "strange particle" model in this formulation
seydor|1 year ago
you only need to solve fusion correctly once
bongodongobob|1 year ago
Sees space shuttle "pff, it's just a pile of engineering."
textlapse|1 year ago
Sure it’s sorting through garbage more elegantly but it’s still garbage at the end of the day.
I was hoping the RL-like approach replaced the transformers-like approach or something but that’s a pipe dream.