item 40083570

gliched_robot | 1 year ago

Disagree on Nvidia, most folks fine-tune models. Proof: there are about 20k models on Hugging Face derived from Llama 2, all of them trained on Nvidia GPUs.


eggdaft | 1 year ago

Fine-tuning can take a fraction of the resources required for training, so I think the original point stands.
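A rough back-of-the-envelope sketch of that "fraction of the resources" claim, using the common 6 · N · D approximation for training FLOPs. The token counts below are illustrative assumptions, not figures from the thread:

```python
def train_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the 6 * N * D rule of thumb."""
    return 6.0 * params * tokens

# Llama 2 70B pretraining: ~70B params over ~2T tokens (publicly stated).
pretrain = train_flops(70e9, 2e12)

# A hypothetical full fine-tune of the same model on ~1B tokens
# (an assumed, generous dataset size for a fine-tune).
finetune = train_flops(70e9, 1e9)

print(f"pretraining FLOPs: {pretrain:.2e}")
print(f"fine-tune FLOPs:   {finetune:.2e}")
print(f"ratio: {finetune / pretrain:.2%}")
```

Even this full-parameter fine-tune is three orders of magnitude cheaper than pretraining, and parameter-efficient methods like LoRA reduce memory and compute further still.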

nightski | 1 year ago

Maybe in isolation, when considering only a single fine-tune. But if you look at it in aggregate, I am not so sure.