MikeTheGreat|2 months ago
Genuine question: How is it possible for OpenAI to NOT successfully pre-train a model?
I understand it's very difficult, but they've already done this successfully, and they have a ton of incredibly skilled, knowledgeable, and well-paid employees.
I get that there's some randomness involved but it seems like they should be able to (at a minimum) just re-run the pre-training from 2024, yes?
Maybe the process is more ad-hoc (and less reproducible?) than I'm assuming? Is the newer data causing problems for the process that worked in 2024?
Any thoughts or ideas are appreciated, and apologies again if this was asked already!
nodja|2 months ago
The same way everyone else fails at it.
Change some hyperparameters to match the new hardware (more params), maybe implement the latest improvements from papers after validating them in a smaller model run. Start training the big boy; loss looks good. Two months and millions of dollars later the loss plateaus, you do the whole SFT/RL shebang, run benchmarks.
It's not much better than the previous model, just very tiny improvements. Oops.
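A toy sketch of the failure mode described above: you only discover the plateau after the compute is already spent. Everything here (the fake loss curve, the plateau threshold) is illustrative, not anyone's actual training setup.

```python
# Hypothetical sketch of a pretraining run that plateaus partway through.
# Loss values and thresholds are made up for illustration.

def plateaued(losses, window=3, min_improvement=0.01):
    """True if loss improved by less than min_improvement over the last `window` steps."""
    if len(losses) < window + 1:
        return False
    return losses[-window - 1] - losses[-1] < min_improvement

def run_experiment(loss_curve):
    """Walk a pre-recorded (fake) loss curve until it plateaus; return final loss."""
    seen = []
    for loss in loss_curve:
        seen.append(loss)
        if plateaued(seen):
            break  # by now the money is already spent
    return seen[-1]

# Loss looks great early on, then flattens well above where you hoped to land.
final = run_experiment([4.0, 3.0, 2.5, 2.3, 2.28, 2.28, 2.28, 2.28])
```

The point of the sketch is that "loss looks good" early on tells you very little; the plateau check only fires after months of the run have already happened.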
MikeTheGreat|2 months ago
I can totally see how they're able to pre-train models no problem, but are having trouble with the "noticeably better" part.
Thanks!
A company's ML researchers are constantly improving model architecture. When it's time to train the next model, the "best" architecture is totally different from the last one. So you have to train from scratch (mostly... you can keep some small stuff like the embeddings).
The implication here is that they screwed up bigly on the model architecture, and the end result was significantly worse than the mid-2024 model, so they didn't deploy it.
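A minimal sketch of the "keep some small stuff like the embeddings" idea: when the new architecture is initialized from scratch, you can still carry over the few parameter tensors whose names and shapes match the old checkpoint. Plain dicts of lists stand in for real weight tensors here; the parameter names and the `transfer_matching` helper are hypothetical.

```python
# Hypothetical partial-checkpoint transfer: copy only parameters (e.g. embeddings)
# whose name and shape agree between the old checkpoint and the new architecture.

def transfer_matching(old_state, new_state, keep_prefixes=("embed.",)):
    """Copy params from old_state into new_state where name and shape agree."""
    carried = []
    for name, weights in old_state.items():
        if not name.startswith(keep_prefixes):
            continue  # everything else retrains from scratch
        if name in new_state and len(new_state[name]) == len(weights):
            new_state[name] = weights
            carried.append(name)
    return carried

old = {"embed.tokens": [0.1, 0.2, 0.3], "attn.q": [1.0, 2.0]}   # old checkpoint
new = {"embed.tokens": [0.0, 0.0, 0.0], "mlp.gate": [0.0]}      # new architecture
carried = transfer_matching(old, new)
# carried == ["embed.tokens"]; attn/mlp weights start fresh
```

In a real framework this is the same shape-matching logic behind partial state-dict loading; the design choice is that embeddings are tied to the tokenizer rather than the architecture, so they survive an architecture change when little else does.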
MikeTheGreat|2 months ago
I guess "start the next software version from the current one (or something pretty close)" is such a baseline assumption of mine that it didn't occur to me that they'd basically be starting over each time.
Thanks for posting this!