rockinghigh | 2 months ago

They add new data to the existing base model via continuous pre-training. You save on pre-training proper (the next-token prediction task) but still have to re-run the mid- and post-training stages: context-length extension, supervised fine-tuning, reinforcement learning, safety alignment ...
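
Roughly, that first step looks like resuming the causal-LM objective from an existing checkpoint. A minimal sketch with Hugging Face Transformers; the checkpoint name, dataset name, and hyperparameters here are illustrative assumptions, not anyone's actual recipe:

    # Continued pre-training sketch: resume next-token prediction on new data
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "meta-llama/Llama-3.1-8B"     # assumed base checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    new_data = load_dataset("my-org/new-corpus", split="train")  # hypothetical corpus

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=2048)

    tokenized = new_data.map(tokenize, batched=True,
                             remove_columns=new_data.column_names)

    # mlm=False -> plain next-token prediction, same objective as pre-training
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

    args = TrainingArguments(
        output_dir="ckpt-continued",
        per_device_train_batch_size=4,
        learning_rate=1e-5,              # low LR is typical when resuming
        num_train_epochs=1,
    )
    Trainer(model=model, args=args, train_dataset=tokenized,
            data_collator=collator).train()

The mid- and post-training stages (context extension, SFT, RL, safety) would then run on top of the resulting checkpoint.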

astrange | 2 months ago

Continuous pretraining has issues because the model starts forgetting the older stuff (catastrophic forgetting). There is some research into other approaches.
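
One line of work is replay: mix a slice of the original pre-training corpus back into the new run so old knowledge keeps getting rehearsed. A minimal sketch with the Hugging Face datasets library; the dataset names and the 30% replay ratio are illustrative assumptions:

    # Replay sketch: interleave new data with the original pre-training mix
    from datasets import interleave_datasets, load_dataset

    new_data = load_dataset("my-org/new-corpus", split="train", streaming=True)
    old_data = load_dataset("my-org/original-pretrain-mix", split="train",
                            streaming=True)

    # ~70% new examples, ~30% replayed old examples, sampled per step
    mixed = interleave_datasets([new_data, old_data],
                                probabilities=[0.7, 0.3], seed=42)

The mixed stream would then feed the same continued pre-training loop as above; other approaches in the literature include regularization-based methods and training adapters rather than the full model.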