top | item 39125646

Self-rewarding-lm-PyTorch: Self-Rewarding Language Model from MetaAI

145 points | swyx | 2 years ago | github.com

30 comments

[+] starbugs|2 years ago|reply
Sorry if this is a dumb question, but how does this ensure that the training process doesn't go in the wrong direction because of error accumulation?

Maybe I didn't understand something fundamental here. (Not an LLM expert.)

[+] huac|2 years ago|reply
I don't think it does. And there is a pretty big risk that you end up picking up on some quirk ("bias") of your reward model that doesn't reflect reality -- GPT4 preferring longer answers is one such commonly observed bias. AFAIK there is not a great theoretical basis for why we can avoid mode collapse, except empirically the models are good enough to survive some bootstrapping.
[+] candiodari|2 years ago|reply
It doesn't.

I would like to add that there are plenty of examples of the same thing happening in humans, some in math (e.g. geometry), playing out over >1000 years and dozens of generations.

That said, for both humans and this kind of LLM, it does appear to improve performance, certainly in the near term.

[+] potatoman22|2 years ago|reply
"Fine-tuning Llama 2 70B on three iterations of our approach yields a model that outperforms many existing systems on the AlpacaEval 2.0 leaderboard, including Claude 2, Gemini Pro, and GPT-4 0613."

Cool and impressive. I'm curious if this training method will become more common.

[+] lhl|2 years ago|reply
A new 7B model, Snorkel-Mistral-PairRM-DPO, using a similar self-rewarding pipeline was just released:

* Announcement: https://twitter.com/billyuchenlin/status/1749975138307825933

* Model Card: https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO

* Response Re-Ranker: https://huggingface.co/llm-blender/PairRM

"We would also like to acknowledge contemporary work published independently on arXiv on 2024-01-18 by Meta & NYU (Yuan, et al) in a paper called Self-Rewarding Language Models, which proposes a similar general approach for creating alignment pairs from a larger set of candidate responses, but using the LLM as the reward model. While this may work for general-purpose models, our experience has shown that task-specific reward models guided by SMEs are necessary for most enterprise applications of LLMs for specific use cases, which is why we focus on the use of external reward models."
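A pairwise re-ranker like PairRM compares candidate responses head-to-head rather than scoring each one in isolation. A minimal round-robin sketch of that idea (the `prefers` heuristic here is a toy stand-in, not PairRM's actual API):

```python
from itertools import combinations

# Toy pairwise judge: returns True if `a` is preferred over `b`.
# A trained re-ranker like PairRM plays this role in practice; here a
# trivial heuristic (prefer the shorter answer) stands in.
def prefers(a: str, b: str) -> bool:
    return len(a) < len(b)

def rerank(candidates):
    # Round-robin: count pairwise wins for each candidate, then sort
    # so the candidate with the most wins comes first.
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        winner = a if prefers(a, b) else b
        wins[winner] += 1
    return sorted(candidates, key=lambda c: wins[c], reverse=True)

best = rerank(["a long-winded answer", "short", "medium answer"])[0]
```

The top-ranked response can then be used as the "chosen" side of a DPO preference pair, which is what the Snorkel pipeline does with PairRM in place of the LLM-as-judge.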

[+] frogamel|2 years ago|reply
I might be misunderstanding something here, but what complexity is resolved by making this a framework? Isn't this just:

1. Train model like normal

2. Evaluate model using self

3. Use eval results for DPO finetune
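The three steps above can be sketched as a short loop. All names here are illustrative, not from the repo; the model and judge are faked with stubs so the shape of the pipeline is visible:

```python
import random

# Hypothetical stand-ins for the real model and judge.
def generate_responses(model, prompt, n=4):
    # Step 2a: sample several candidate responses per prompt
    return [f"{prompt}-response-{i}" for i in range(n)]

def self_judge(model, prompt, response):
    # Step 2b: the SAME model scores its own output (LLM-as-a-Judge),
    # faked here with a deterministic pseudo-score in [0, 10]
    return random.Random(hash((prompt, response)) & 0xFFFF).uniform(0, 10)

def build_preference_pairs(model, prompts):
    # Step 3 input: highest-scored response becomes "chosen",
    # lowest-scored becomes "rejected"; these pairs feed DPO finetuning.
    pairs = []
    for p in prompts:
        scored = [(self_judge(model, p, r), r) for r in generate_responses(model, p)]
        scored.sort(reverse=True)
        pairs.append({"prompt": p, "chosen": scored[0][1], "rejected": scored[-1][1]})
    return pairs

pairs = build_preference_pairs(model=None, prompts=["p1", "p2"])
```

The paper then repeats this whole cycle for several iterations, with each DPO-finetuned model generating and judging the next round's data.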

[+] lucidrains|2 years ago|reply
No, you aren't wrong. For ML people, it is quite simple and hopefully the final code reflects that

The aim is really to give a good base for follow up research / modifications, which I think there will be many for this paper

[+] lucidrains|2 years ago|reply
hey, appreciate the interest! repo is not done yet, but probably will be around month's end
[+] dannyw|2 years ago|reply
Hey lucidrains! Epicmafia was so much fun in its glory days :)
[+] choppaface|2 years ago|reply
did you get training compute from HF or thru a16z e.g. andromeda or some private cluster?
[+] greatpostman|2 years ago|reply
Meanwhile google still hasn’t released anything substantial
[+] code51|2 years ago|reply
A singular focus on AlpacaEval feels a bit limiting for validating the gains.

What's the evidence here that this is not just a kind of leaderboard hacking for LLMs?

[+] nmitchko|2 years ago|reply
Great work, will try this tonight.

Only question, why do you name variables with the λ symbol?

[+] lucidrains|2 years ago|reply
just to better match the math equations in the SPIN paper
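For context: Python 3 identifiers may contain Greek letters, so hyperparameters can mirror a paper's notation directly. A small illustrative sketch (the names and values below are made up, not taken from the repo):

```python
# Greek letters are ordinary Python identifiers, so code can match
# the symbols used in the paper's equations.
λ = 0.1   # hypothetical regularization weight
β = 0.5   # hypothetical DPO-style inverse temperature

def toy_loss(margin: float) -> float:
    # purely illustrative combination of the two symbols
    return λ * margin / β
```

The tradeoff is readability: editors and keyboards handle ASCII names like `lmbda` more easily, but the unicode names map one-to-one onto the math.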