roborovskis | 7 days ago
From my perspective, pretraining is pretty clearly not 'distilling', since the goal is not to replicate the pretraining data but to generalize from it. But what these companies are doing is clearly 'distilling', in that they want their models to closely emulate Claude's behavior.