item 46486771

lblume | 1 month ago

It has often been claimed, and even shown, that training LLMs on their own outputs degrades quality over time. Still, I find it likely that in well-measurable domains, RLVR-driven improvements will outweigh any "slop"-induced loss of capability when training new models.
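The degradation the comment refers to is often called "model collapse." A minimal toy sketch of the dynamic (not an LLM, just an illustrative assumption): treat a "model" as a fitted Gaussian, train each generation on samples from the previous generation's fit, and watch the estimated spread shrink across generations.

```python
import random
import statistics

def collapse_demo(generations=500, n=50, seed=0):
    """Toy model-collapse loop: each generation's 'model' is just
    (mean, stdev) fitted to samples drawn from the previous fit."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation-0 "model"
    history = [sigma]
    for _ in range(generations):
        # "Generate training data" from the current model...
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        # ...then "train" the next model on it (MLE fit).
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"stdev: gen 0 = {hist[0]:.3f}, final gen = {hist[-1]:.6f}")
```

With a finite sample per generation, the fitted spread drifts downward, so later generations cover less and less of the original distribution; this is the self-training failure mode the comment contrasts with RLVR, where a verifiable reward signal is external to the model's own outputs.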
