Does RL Incentivize Reasoning in LLMs Beyond the Base Model?

84 points | leodriesch | 10 months ago | limit-of-rlvr.github.io

38 comments

spwa4|10 months ago

I don't like papers that ask a question in the title, so here's the answer:

"RL boosts sampling efficiency but reduces the reasoning capacity boundary."

Perhaps better to put it like this: given one or a few attempts, RL-trained models beat non-RL models. Given many attempts, non-RL models come up with better answers.

sitkack|10 months ago

My gut feeling when using DeepSeek is that its performance is a lot smoother; the responses feel more robust and less brittle.

cma|10 months ago

I'm pretty sure RL causes catastrophic forgetting of the base knowledge, and that's why o3 hallucinates so much more.

If you mess around with trained weights you're going to delete some base knowledge, at least the knowledge that is outside of the tasks you RL on.

yorwba|10 months ago

They write "We manually inspect CoT validity to ensure correct answers stem from valid reasoning, not lucky guesses." but the example answer they show at the end only gets the correct number due to two errors canceling out. The model calculates 195 + 367 + 562 + 900 and gets 1924 instead of 2024, and also turns -437 - 2*234 into -805 instead of -905, but in total 1924 - 805 = 2024 - 905 = 1119, and from there the remaining steps are correct again.

It would be interesting to know how much of the sampling efficiency improvement from reinforcement learning is due to being better at basic arithmetic (something which could also be achieved by giving the model access to a calculator tool) and how much is due to choosing the correct approach for solving the problem more often.
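The cancellation described above is easy to check mechanically; a quick sketch using only the numbers quoted in the comment:

```python
# The two errors cancel exactly: the sum is 100 too low,
# and the negative term is 100 too high.
correct_sum = 195 + 367 + 562 + 900   # 2024 (the model wrote 1924)
model_sum = 1924
correct_neg = -437 - 2 * 234          # -905 (the model wrote -805)
model_neg = -805

# Both paths land on the same intermediate value, 1119:
print(correct_sum + correct_neg)      # 1119
print(model_sum + model_neg)          # 1119
```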

nialv7|10 months ago

> we uncover that RL-trained models excel at low k (e.g., pass@1) but are consistently outperformed by base models at high k (e.g., pass@256).

This is a weak argument. I think I get what they're trying to say, but let's take this to the extreme, say pass@10^10^100. Just like a group of monkeys could write Shakespeare given enough time, a completely random model could probably outperform an RL-trained model at pass@10^10^100. Would we then say the random model can reason too?

Of course the correct reasoning trace will be in the base model's distribution, just like any other well-formed, coherent paragraph. Kind of makes me think, maybe sampling efficiency _is_ intelligence?

Certhas|10 months ago

If this were just the effect you mention, though, you would not expect the base model to surpass the RL model. Plus, their k values are much smaller than that.

I think it's a very interesting and meaningful study.

seertaak|10 months ago

The authors of the paper address this argument in the Q&A section.

iceman_w|10 months ago

RL constrains the space of possible output token sequences to what is likely to lead to the correct answer. So we are inherently making a trade-off to reduce variance. A non-RL model will have higher variance, so given enough attempts, it will come up with some correct answers that an RL model can't.

KTibow|10 months ago

I'm a bit skeptical of this until it's proven that they're getting the right answers in the right ways. It could be that base models are just more random and when given 200 guesses out of 1000 possible answers tend to distribute them more evenly, bringing up the pass@k number.

energy123|10 months ago

They should try again with higher temperature on the RL model to introduce more variance.

macleginn|10 months ago

‘Crucially, all correct solutions from RL-trained models already exist in the base model's distribution, proving RLVR enhances sampling efficiency, not reasoning capacity, while inadvertently shrinking the solution space.’ — wouldn't any kind of RL fail to converge, or even to progress at all, if the solutions weren't already in the base model's distribution? The way training is set up, the models absolutely need to be able to find right solutions in a reasonable time; otherwise there wouldn't be any training signal.

psb217|10 months ago

That depends a bit on the length of the RL training and the distribution of problems you're training on. You're correct that RL won't get any "traction" (via positive rewards) on problems where good behavior isn't already in the model's behavior distribution.

However, if you're training on many problems, it's possible in principle that if you have traction on _any_ of the problems, then the learning signal you get from success on those problems will have a positive effect on the model's behavior on other problems. I.e., the learning that you do on problems where the model is already producing positive-reward behavior will nudge the model towards producing positive-reward behavior on problems where it wasn't previously doing so.

mountainriver|10 months ago

I felt like this was already known, right? My understanding was always that the base model has all the paths and RL learns to navigate them.

imtringued|10 months ago

>Our key finding is that all reasoning paths in the RLVR model are already present in the base model.

This is a really good observation. It means that you don't need to RL the full model. You merely need to RL a few LoRAs or maybe a small Mamba model appended to the final layer.
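The low-rank idea this comment gestures at can be sketched in plain NumPy (all names and sizes here are made up for illustration): the base weight W stays frozen, and only a rank-r delta A @ B would receive RL gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                          # toy hidden size and LoRA rank

W = rng.normal(size=(d, d))          # frozen base weight (not updated by RL)
A = rng.normal(size=(d, r)) * 0.01   # trainable low-rank factor
B = np.zeros((r, d))                 # B starts at zero, so the delta is zero

def forward(x):
    # Base path plus low-rank correction; only A and B would be trained.
    return x @ W + x @ A @ B

x = rng.normal(size=(1, d))
# With B = 0 the adapted model reproduces the base model exactly:
print(np.allclose(forward(x), x @ W))  # True
```

The delta A @ B has rank at most r, so the RL update is confined to a small subspace of the full weight matrix, which is the point of the comment's suggestion.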

ismepornnahi|10 months ago

Interesting, has anyone already experimented with this?

imenani|10 months ago

They fix the temperature at T=0.6 for all k and all models, even though their own Figure 10 shows that the RL model benefits from higher temperatures. I would buy the overall claim much more if they swept the temperature parameter for each k and model as they did in the Codex paper [1].

[1] https://arxiv.org/abs/2107.03374
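For context, the pass@k numbers debated throughout this thread are usually computed with the unbiased estimator from that Codex paper; a minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper:
    1 - C(n-c, k) / C(n, k), given n samples of which c are correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With a fixed per-sample success rate, larger k drives pass@k toward 1,
# which is why base models can catch up at high k:
print(pass_at_k(256, 8, 1))    # 0.03125
print(pass_at_k(256, 8, 256))  # 1.0
```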

Der_Einzige|10 months ago

This 100% tracks with my experience.

Also, fun stuff many don't know: if you run a regular model's chat template with a reasoning-tuned model, it can go back to acting like the base model, with no "thinking" process.

"Reasoning" models are not any better than non reasoning models. It's a parlor trick, and benchmarks which claimed it wasn't are bad.

NitpickLawyer|10 months ago

> If you run a regular model's chat template with a reasoning-tuned model, it can go back to acting like the base model, with no "thinking" process.

Well, of course. They've been fine-tuned with specific chat templates; remove those and the fine-tune doesn't take precedence anymore. That's expected behaviour, I'd say.

> "Reasoning" models are not any better than non reasoning models. It's a parlor trick, and benchmarks which claimed it wasn't are bad.

All of them? Including the closed ones, never public? I highly doubt that.

kk58|10 months ago

Reasoning models aren't really reasoners; it's basically neural style transfer, where you force the model's decoder to emit tokens in a style that looks like deductive reasoning.

whatshisface|10 months ago

If you don't know the answer to a problem, you're not going to be able to repeat sampling until it is correct. Random strings will saturate all benchmarks at k=infinity if tested this way.