schopra909 | 3 months ago
For us it’s classifiers that we train for very specific domains.
You’d think it’d be better to just finetune a smaller non-LLM model, but empirically we find the LLM finetunes (like 7B) perform better.
moffkalast | 3 months ago