Have you successfully fine-tuned an LLM for anything useful?
12 points | johhns4 | 1 year ago
Smaller models that are cheap to host, sure, but in which cases does fine-tuning a larger model (and hosting it) really shine, as opposed to just using RAG with a closed-source API?
Perhaps it makes sense if it is serving a huge customer base and the tone of voice needs to be different, but the question is how much work it takes to train and whether it is worth it.
I'm not against fine-tuning, but I'm curious what the actual use cases are, where it makes economic sense, and how successful people and organisations have been with it.
muzani | 1 year ago
I'm convinced this is the route to doing poetry: find a favorite poet and actually fine-tune on samples of their writing, rather than telling the model "write poetry in the style of Robert E Howard".
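A minimal sketch of what the data prep for that could look like, assuming the common chat-style JSONL fine-tuning format (the prompts and poem excerpts below are placeholders, not real training data):

```python
import json

# Hypothetical (prompt, poem) pairs; in practice you'd pull real
# samples of the poet's work and pair them with matching prompts.
poems = [
    ("Write a poem about the sea.", "Dark tides rise where no ships sail..."),
    ("Write a poem about winter.", "The frost-bound hills lie silent..."),
]

def to_jsonl(pairs):
    """Convert (prompt, poem) pairs into chat-format JSONL records,
    one JSON object per line."""
    lines = []
    for prompt, poem in pairs:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": poem},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

training_data = to_jsonl(poems)
```

You'd then hand the resulting file to whatever fine-tuning pipeline you're using; the point is just that the supervision signal is the poet's actual text, not an instruction about style.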