top | item 45925322

Ask HN: Anyone Successfully Fine-Tuning LLMs?

3 points | Mythli | 3 months ago

Have any of you successfully fine-tuned an LLM?

I have made several attempts at simple use cases, and the result was always extremely poor generalization.

Any experience, guides, or examples would be valuable.

2 comments


mpcsb | 3 months ago

I have done my share of fine-tuning on open-source LLMs (e.g. Llama), and I'm surprised you're seeing such poor generalization.

I assume you're using standard techniques like LoRA/QLoRA, which might leave room for issues with your data. Can you share more details on the format of your data points? E.g. Q/A pairs, free text, ...
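For readers unfamiliar with the technique mentioned here: the core idea of LoRA can be sketched in a few lines of NumPy. The dimensions, rank, and variable names below are illustrative, not from the thread or any specific library.

```python
import numpy as np

# LoRA sketch: instead of updating the full pretrained weight matrix W,
# train a low-rank update B @ A with rank r much smaller than the layer dims.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4          # illustrative sizes

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized

def forward(x):
    # Adapted layer: W x + B (A x). With B = 0 at init, the adapter
    # contributes nothing, so behavior matches the pretrained model.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
assert np.allclose(forward(x), W @ x)  # identical to pretrained output at init

# Trainable parameter count drops from d_out*d_in to r*(d_out + d_in):
full, lora = d_out * d_in, r * (d_out + d_in)
print(full, lora)  # → 4096 512
```

The zero-init of B is the standard trick: fine-tuning starts exactly at the pretrained model and only drifts as B and A are trained, while W stays frozen.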

bob1029 | 3 months ago

I've tried it a few times without much success. I think it takes more data and discipline than most are prepared for.

RAG is a lot easier to reason about and much cheaper to iterate on.
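The retrieval step of RAG really can be reasoned about in a few lines. The toy corpus, bag-of-words scoring, and function names below are made up for illustration; a real system would use embedding vectors and a vector store instead.

```python
from collections import Counter
import math

# Toy RAG retrieval: score documents against a query with bag-of-words
# cosine similarity, then paste the best chunk into the prompt.
docs = [
    "LoRA adds low-rank adapter matrices to frozen weights.",
    "RAG retrieves relevant chunks and puts them in the prompt.",
    "Tokenizers split text into subword units.",
]

def vec(text):
    # Naive whitespace tokenization into term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query, k=1):
    # Return the k highest-scoring documents for the query.
    q = vec(query)
    return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

context = retrieve("how does RAG put chunks in the prompt?")[0]
prompt = f"Context: {context}\nQuestion: how does RAG work?\nAnswer:"
print(context)
```

Because each piece (chunking, scoring, prompt assembly) is an ordinary function you can inspect, iterating on a RAG pipeline is mostly debugging retrieval quality rather than re-running training jobs.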