top | item 47088884


redman25 | 9 days ago

Many older models are still better at "creative" tasks because newer models have been optimized for code and reasoning benchmarks. Pre-training is what gives a model its creativity, and layering SFT and RL on top tends to remove some of it in exchange for instruction following.

No comments yet.