item 44064638

somebodythere | 9 months ago

My guess is that they did RLVR (reinforcement learning with verifiable rewards) post-training for SWE tasks, and a smaller model can undergo more RL steps for the same compute budget.
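A back-of-envelope sketch of the compute argument, using the common "6ND" approximation for training FLOPs (6 × parameters × tokens per update). All numbers here (budget, token counts, model sizes) are illustrative assumptions, not from the comment:

```python
def rl_steps(budget_flops: float, params: float, tokens_per_step: float) -> float:
    # Rough cost of one RL step (rollouts + gradient update),
    # using the ~6ND training-FLOPs approximation.
    flops_per_step = 6 * params * tokens_per_step
    return budget_flops // flops_per_step

BUDGET = 1e23        # fixed post-training compute budget (assumed)
TOKENS = 1_000_000   # tokens processed per RL step (assumed)

small = rl_steps(BUDGET, 7e9, TOKENS)    # hypothetical 7B model
large = rl_steps(BUDGET, 70e9, TOKENS)   # hypothetical 70B model
print(small, large)  # the 7B model gets ~10x as many RL steps
```

Under this crude model, step count scales inversely with parameter count, which is the comment's point: at a fixed budget, the smaller model simply gets more RL iterations.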
