item 46320991

tootyskooty | 2 months ago
See the pretraining section of prerelease_notes.md: https://github.com/DGoettlich/history-llms/blob/main/ranke-4...

pests | 2 months ago
I was curious: they train a 1900 base model, then fine-tune it to the exact cutoff year.

"To keep training expenses down, we train one checkpoint on data up to 1900, then continuously pretrain further checkpoints on 20B tokens of data 1900-${cutoff}."
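The branching scheme quoted above (one shared base checkpoint on pre-1900 data, then separate continued-pretraining runs on data from 1900 up to each cutoff) can be sketched as a corpus-partitioning step. This is a minimal illustrative sketch, not the project's actual code; the function name, the example cutoff years, and the document structure are all assumptions.

```python
# Hypothetical sketch of the data split behind the quoted scheme:
# one base corpus (everything before 1900) feeds a single shared checkpoint,
# and each cutoff year gets its own slice of 1900..cutoff data for
# continued pretraining. Cutoff years here are illustrative, not from the repo.

def partition_corpus(docs, base_cutoff=1900, cutoffs=(1920, 1950, 1980)):
    """Split docs into the shared base corpus and per-cutoff branch corpora."""
    base = [d for d in docs if d["year"] < base_cutoff]
    branches = {
        c: [d for d in docs if base_cutoff <= d["year"] < c]
        for c in cutoffs
    }
    return base, branches

# Toy corpus: one document per decade, 1800-1990.
docs = [{"year": y, "text": f"doc-{y}"} for y in range(1800, 2000, 10)]
base, branches = partition_corpus(docs)
# base -> trains the single shared checkpoint;
# branches[c] -> continued pretraining for the checkpoint with cutoff c.
```

The point of the split, as the quote notes, is cost: the expensive pre-1900 run happens once, and each dated checkpoint only pays for the (much smaller) 1900-to-cutoff slice.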