top | item 45633395

oli5679 | 4 months ago

The OpenAI fine-tuning API is pretty good - you need to label an evaluation benchmark anyway to systematically iterate on prompts and context, and it often produces good results if you give it 50-100 examples, either beating frontier models or allowing a far cheaper and faster model to catch up.

It requires no local GPUs, just creating a JSONL file and posting it to OpenAI.

https://platform.openai.com/docs/guides/model-optimization
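A minimal sketch of that flow: build the chat-format JSONL training file locally, then upload it and launch a job via the official openai Python SDK. The sentiment examples and the model snapshot name below are placeholders, not from the comment.

```python
import json

# Fine-tuning data is JSONL: one {"messages": [...]} object per line,
# each ending with the assistant reply you want the model to learn.
# These two examples are hypothetical placeholders.
examples = [
    {"messages": [
        {"role": "system", "content": "Classify the sentiment as positive or negative."},
        {"role": "user", "content": "I love this product."},
        {"role": "assistant", "content": "positive"},
    ]},
    {"messages": [
        {"role": "system", "content": "Classify the sentiment as positive or negative."},
        {"role": "user", "content": "Terrible experience, would not buy again."},
        {"role": "assistant", "content": "negative"},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Uploading the file and starting the job requires an OPENAI_API_KEY
# and network access; sketched here with the openai SDK:
#
#   from openai import OpenAI
#   client = OpenAI()
#   uploaded = client.files.create(file=open("train.jsonl", "rb"),
#                                  purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(
#       training_file=uploaded.id,
#       model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable snapshot
#   )
```

With only 50-100 examples, the local JSONL step is the whole data pipeline; the rest is two API calls and waiting for the job to finish.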


deaux | 4 months ago

They don't offer it for the GPT-5 series, so much of the time fine-tuning Gemini 2.5 Flash is a better deal.