top | item 37864214

agnokapathetic | 2 years ago

> "In some specific tasks, a finetuned 7B llama can work as well as GPT3.5."

"some" is doing a lot of heavy lifting here.

Also: don't discount the labor cost of curating a fine-tuning dataset and running a fine-tuning training run, even if the hardware is cheap.
