
The case for the return of fine-tuning

167 points | nanark | 4 months ago | welovesota.com

81 comments


deepsquirrelnet|4 months ago

I go back and forth on this. A year ago I was optimistic, and I have had one case where RL fine-tuning a model made sense. But while there are pockets of that, there is a clash with existing industry skills. I work with a lot of machine learning engineers and data scientists, and here's what I observe.

- many, if not most, MLEs who got started after LLMs do not generally know anything about machine learning. For lack of clearer industry titles, they are really AI developers or AI devops

- machine learning as a trade is moving toward the same fate as data engineering and analytics. Big companies only want people using platform tools. Some AI products, even in cloud platforms like Azure, don't even give you the evaluation metrics that would be required to properly build ML solutions. Few people seem to have an issue with it.

- fine tuning, especially RL, is packed with nuance and details… lots to monitor, a lot of training signals that need interpretation and data refinement. It’s a much bigger gap than training simpler ML models, which people are also not doing/learning very often.

- The limited number of good use cases means people are not learning those skills from more senior engineers.

- companies have gotten stingy with sme-time and labeling

What confidence do companies have in supporting these solutions in the future? How long will you be around and who will take up the mantle after you leave?

AutoML never really panned out, so I'm less confident that platforming RL will go any better. The unfortunate reality is that companies are almost always willing to pay more for inferior products because it scales. Industry "skills" are mostly experience with proprietary platform products. Sure, they might list "pytorch" as a required skill, but 99% of the time there's hardly anyone at the company who has spent any meaningful time with it. Worse, you can't use it, because it would be too hard to support.

daemonologist|4 months ago

Labels are so essential - even if you're not training anything, being able to quickly and objectively test your system is hugely beneficial - but it's a constant struggle to get them. In the unlikely event you can get budget and priority for an SME to do the work, communicating your requirements to them (the need to apply very consistent rules and make few errors) is difficult and the resulting labels tend to be messy.

More than once I've just done labeling "on my own time" - I don't know the subject as well but I have some idea what makes the neurons happy, and it saves a lot of waiting around.
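As a side note on label quality: one cheap way to check whether two labeling passes (say, an SME's and your own) are applying consistent rules is chance-corrected agreement such as Cohen's kappa. A self-contained sketch with made-up labels:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # chance agreement from each annotator's marginal label frequencies
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# SME's labels vs. my own pass over the same 10 items (invented data)
sme = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos"]
me  = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "pos"]
kappa = cohens_kappa(sme, me)  # well below the raw 80% agreement suggests
```

A low kappa despite decent raw agreement is a quick signal that the labeling rules aren't being applied consistently.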

I've found tuning large models to be consistently difficult to justify. The last few years it seems like you're better off waiting six months for a better foundation model. However, we have a lot of cases where big models are just too expensive and there it can definitely be worthwhile to purpose-train something small.

hommes-r|4 months ago

My personal opinion is that true engineering, which revolves around turning complex theory into working practice, has seen a decline in grace. Why spend a lot of time trying to master the art of engineering if you can ride the wave of engineering services and get away with it?

In true hacker spirit, I don't think trying to train a model on a wonky GPU is something that needs an ROI for the individual engineer. It's something they do because they yearn to acquire knowledge.

sdenton4|4 months ago

Eventually someone will make a killing on doing actual outcome measurements instead of just trusting the LLMs, Michael Lewis will write a popular book about it, and the cycle will begin anew...

XenophileJKO|4 months ago

I'm also seeing teams who expected big gains from fine tuning get incremental or moderate gains. Then they put it in production and regret the action as SOTA marches quickly.

I have avoided fine tuning because the models are currently improving at a rate that exceeds big corporate product development velocity.

simonw|4 months ago

I ran a survey on Twitter over the past few days asking for successful case studies that produced economically valuable results from fine-tuning LLMs.

I ask a version of this every six months or so, and usually the results are quite disappointing.

This time I had more credible replies than I have had in the past.

Here's my thread with highlights: https://twitter.com/simonw/status/1979254349235925084

And in a thread viewer for people who aren't signed into Twitter: https://twitter-thread.com/t/1979254349235925084

Some of the most impressive:

Datadog got <500ms latency for their natural language querying feature, https://twitter.com/_brimtown/status/1979669362232463704 and https://docs.datadoghq.com/logs/explorer/search/

Vercel run custom fine-tuned models on v0 for Next.js generation: https://vercel.com/blog/v0-composite-model-family

Shopify have a fine-tuned vision LLM for analyzing product photos: https://shopify.engineering/leveraging-multimodal-llms

donkeyboy|4 months ago

Finetuning is pretty much necessary for regression tasks. Also useful for classification since you can get the direct probabilities in case you want to do some thresholding.
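The thresholding idea can be sketched in plain Python: turn a classifier's raw logits into probabilities with a softmax, and abstain when the top probability falls below a cutoff. The logits and labels here are made up for illustration.

```python
import math

def softmax(logits):
    """Convert raw class logits into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels, threshold=0.9):
    """Return a label only when the model is confident enough;
    otherwise abstain (e.g. route to a human or a bigger model)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] >= threshold:
        return labels[best], probs[best]
    return None, probs[best]

label, p = classify([4.1, 0.3, -1.2], ["spam", "ham", "other"])
```

This is exactly the kind of knob you don't get from a chat completion's text output, which is the argument for having the probabilities directly.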

daxfohl|4 months ago

I imagine it's pretty bad risk to reward ratio for most companies. Especially when just tossing some stuff into your system prompt is an option.

CaptainOfCoit|4 months ago

If you have ideas for use cases where fine-tuning can make a big difference, but don't have the time/resources to try them out and want to see if they'd work, feel free to share them. I'm currently creating a bunch of examples of this and could use some inspiration; I only have 3 real/confirmed use cases as of right now.

leobg|4 months ago

Many ppl think fine-tuning an LLM on domain knowledge means feeding it chunked text of, say, psychology books. That is, of course, the wrong application if your goal is for the model to become an expert psychologist. You want the behavior of applying psychology, but you are training the behavior of writing about it. TL;DR, many fine-tuning fails are due to wrong dataset curation. On the other hand, if you get the dataset right, you can get a 7B model to outperform a 180B one.
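To make the curation point concrete, here is a sketch of the two kinds of training records in a typical chat fine-tuning format. Both records are invented for illustration; the point is the contrast between training the writing-about behavior and the applying behavior.

```python
# Wrong: teaches the model to *write like* a psychology textbook.
knowledge_record = {
    "messages": [
        {"role": "user", "content": "Continue the text:"},
        {"role": "assistant",
         "content": "Cognitive dissonance refers to the discomfort felt "
                    "when holding two conflicting beliefs..."},
    ]
}

# Right: teaches the model to *apply* psychology in conversation.
behavior_record = {
    "messages": [
        {"role": "system", "content": "You are a supportive counselor."},
        {"role": "user",
         "content": "I keep procrastinating and then hating myself for it."},
        {"role": "assistant",
         "content": "It sounds like there's a painful gap between what you "
                    "value and what you're doing. Let's pick one small task "
                    "you've been avoiding and look at what happens right "
                    "before you put it off..."},
    ]
}
```

Both are "psychology data", but only the second shapes the behavior you actually want at inference time.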

meander_water|4 months ago

A couple of examples I have seen recently which make me agree with OP:

- PaddleOCR, a 0.9B model that reaches SOTA accuracy across text, tables, formulas, charts & handwriting. [0]

- 3B and 8B models which perform HTML-to-JSON extraction at GPT-5-level accuracy at 40-80x lower cost, with faster inference. [1]

I think it makes sense to fine tune when you're optimizing for a specific task.

[0] https://huggingface.co/papers/2510.14528

[1] https://www.reddit.com/r/LocalLLaMA/comments/1o8m0ti/we_buil...

soVeryTired|4 months ago

Have you used PaddleOCR? I'm surprised they're claiming SOTA without comparing against Amazon Textract or Azure doc intelligence (LayoutLM v3 under the hood, as far as I know).

I've played around with doc recognition quite a bit, and as far as I can tell those two are best-in-class.

alansaber|4 months ago

This comes back to the SLM vs LLM debate (sizes in relative terms), where an SLM can be optimised for a specific task, and out-perform an LLM. But it's not worth it (time, effort) for most tasks unless 1. they are very sensitive to precision or 2. it is ultra-high volume.

gdiamos|4 months ago

Just coming out of founding one of the first LLM fine tuning startups - Lamini - I disagree

Our thesis was that fine tuning would be easier than deep learning for users to adopt because it was starting from a very capable base LLM rather than starting from scratch

However, our main finding with over 20 deployments was that LLM fine tuning is no easier to use than deep learning

The current market situation is that ML engineers who are good enough at deep learning to master fine tuning can found their own AI startup or join Anthropic/OpenAI. They are underpaid building LLM solutions. Expert teams building Claude, GPT, and Qwen will out compete most users who try fine tuning on their own.

RAG, prompt engineering, inference time compute, agents, memory, and SLMs are much easier to use and go very far for most new solutions

bjornsing|4 months ago

Will Anthropic/OpenAI really hire anyone who can fine-tune an LLM?

echelon|4 months ago

What models did you try to fine-tune? Were the models at the time even good enough to fine-tune? Did they suffer from catastrophic forgetting?

We have a lot of more capable open source models now. And my guess is that if you designed models specifically for being fine tuned, they could escape many of the last generation pitfalls.

Companies would love to own their own models instead of renting from a company that seeks to replace them.

matusp|4 months ago

Fine-tuning is a good technique to have in a toolbox, but in reality, it is feasible only in some use cases. On one hand, many NLP tasks are already easy enough for LLMs to reach near-perfect accuracy, so fine-tuning is not needed. On the other hand, really complex tasks are really difficult to fine-tune, and clean data collection might be pretty expensive. Fine-tuning can help with the use cases somewhere in the middle: not too simple, not too complex, feasible for data collection, etc.

coldtea|4 months ago

>Fine-tuning is a good technique to have in a toolbox, but in reality, it is feasible only in some use cases.

Yes, 100s of thousands of them

libraryofbabel|4 months ago

What would you say is an example of one of those “middle” tasks it can help with?

melpomene|4 months ago

This website loads at impressive speeds (from Europe)! Rarely seen anything more snappy. Dynamic loading of content as you scroll, small compressed images without looking like it (webp). Well crafted!

hshdhdhehd|4 months ago

Magic of a CDN? Plus avoiding JS probably. Haven't checked source though.

stefanwebb|4 months ago

Here's a blog post I wrote last week on the same topic: https://blog.oumi.ai/p/small-fine-tuned-models-are-all-you

I discuss a large-scale empirical study of fine-tuning 7B models to outperform GPT-4 called "LoRA Land", and give some arguments in the discussion section making the case for the return of fine-tuning, i.e. what has changed in the past 6 months

daxfohl|4 months ago

Could you use LoRA adapters to free up your context with all the stuff that normally has to go into it? Coding standards and fuzzy preferences like "prefer short names" or "prefer functional style", reference materials, MCP definitions, etc.?

For training data, I was thinking you could just put all the stuff into context, then give it some prompts, and see how the responses differ from the baseline without that context. You could feed that into the fine-tuner either as the raw prompt paired with the output from the full-context model, or as something like input="refactor {output from base model}", output="{output from full-context model}".

My understanding is that LoRA are composable, so in theory MCPs could be deployed as LoRA adapters. Then toggling on and off would not require any context changes. You just enable or disable the LoRA adapter in the model itself. Seems like this would help with context poisoning too.
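The data-generation step described above could be sketched like this, with `full_context_fn` standing in for a call to the base model (a hypothetical stub here, not a real API): run the model with the standards/preferences in context, then record prompt-answer pairs that omit the context, so the fine-tune learns to behave as if the context were present.

```python
def make_distillation_records(prompts, full_context_fn, context):
    """Build SFT pairs whose targets come from the model run *with* context.

    full_context_fn is a placeholder for the base-model call; the returned
    records pair the bare prompt with the context-informed answer.
    """
    records = []
    for prompt in prompts:
        target = full_context_fn(context + "\n\n" + prompt)
        records.append({"messages": [
            {"role": "user", "content": prompt},        # context omitted
            {"role": "assistant", "content": target},   # context-informed answer
        ]})
    return records

# demo with a stub standing in for the full-context model call
demo = make_distillation_records(
    ["refactor: rename variable x"],
    lambda p: f"(answer conditioned on: {p[:20]}...)",
    "STYLE GUIDE: prefer short names; prefer functional style",
)
```

The LoRA adapter trained on such pairs would then, in principle, carry the coding standards without spending context tokens on them each request.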

funfunfunction|4 months ago

Creator of inference.net / schematron here.

There is growing emphasis on efficiency as more companies adopt and scale with LLMs in their products.

Developers might be fine paying GPT-5-Super-AGI-Thinking-Max prices to use the very best models in Cursors, but (despite what some may think about Silicon Valley), businesses do care about efficiency.

And if you can fine-tune an 8b-parameter Llama model on GPT-5 data in < 48 hours and save $100k/mo, you're going to take that opportunity.
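The economics behind that kind of decision are easy to sketch. All prices and volumes below are made-up placeholders, not real GPT-5 or Llama pricing:

```python
# Hypothetical per-million-token prices and monthly volume.
frontier_price = 10.00   # $ per 1M tokens, large hosted model (placeholder)
small_price = 0.30       # $ per 1M tokens, self-hosted 8B model (placeholder)
monthly_tokens = 12_000  # millions of tokens per month (placeholder)
finetune_cost = 5_000    # one-off training cost in $ (placeholder)

monthly_savings = (frontier_price - small_price) * monthly_tokens
months_to_break_even = finetune_cost / monthly_savings
```

With numbers anywhere in this ballpark, the one-off training cost pays for itself in well under a month, which is why high-volume workloads are where distillation to a small model keeps coming up.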

qrios|4 months ago

> Finally, companies may have reached the ceiling of what can be achieved with prompting alone. Some want models that know their vocabulary, their tone, their taxonomy, and their compliance rules.

Together with speed and cost, this is from my point of view the only "case" for the return of fine-tuning here. And even this can be managed by context management.

With growing context sizes, first RAG replaced fine-tuning, and later even RAG was replaced by just good-enough prompt preparation for more and more usage patterns.

Sure, speed and costs are important drivers. But like with FPGAs vs. CPUs or GPUs, the development costs and delivery time for high-performance solutions eliminate the benefit most of the time.

oli5679|4 months ago

The OpenAI fine-tuning API is pretty good - you need to label an evaluation benchmark anyway to systematically iterate on prompts and context, and it often creates good results if you give it 50-100 examples, either beating frontier models or allowing a far cheaper and faster model to catch up.

It requires no local GPUs, just creating a JSONL file and posting it to OpenAI.

https://platform.openai.com/docs/guides/model-optimization
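The file-preparation step is just chat-format records, one JSON object per line. A minimal sketch (the support-ticket examples are invented):

```python
import json

# Labeled examples in the chat fine-tuning format: each line is one
# conversation ending in the assistant output you want the model to learn.
examples = [
    {"messages": [
        {"role": "system", "content": "Classify the ticket: billing, bug, or other."},
        {"role": "user", "content": "I was charged twice this month."},
        {"role": "assistant", "content": "billing"},
    ]},
    {"messages": [
        {"role": "system", "content": "Classify the ticket: billing, bug, or other."},
        {"role": "user", "content": "The export button crashes the app."},
        {"role": "assistant", "content": "bug"},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

From there it's uploading the file and creating a fine-tuning job via the API, per the guide linked above.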

deaux|4 months ago

They don't offer it for the GPT-5 series, so much of the time fine-tuning Gemini 2.5 Flash is a better deal.

aininja|4 months ago

2026 will be the year of specialized SLMs... enterprises want more IP ownership/control, lower costs, and higher quality than the slow and expensive generic models that were not optimized for their use cases can offer.

leblancfg|4 months ago

Fine tuning was never really hard to do locally if you had the hardware. What I'd like to read in an article like this is more detail on why it's making a comeback.

Curious to hear others’ thoughts on this

AYBABTME|4 months ago

Which minimum hardware spec would qualify as making this not really hard to do locally?

lorenzohess|4 months ago

And here I am thinking we'd be discussing the teleological argument.

psadri|4 months ago

Lots of caveats here in the following statement: if your application is not fully leaning in to frontier model capabilities, you are probably building a previous generation product.

_ea1k|4 months ago

Return? Did it run away?

I don't think anyone thought fine tuning was dead.

marcosdumay|4 months ago

There were many comments claiming that from around the end of 2023 to shortly before ChatGPT 5 was launched.

The main claim was that new models were much better than anything you could get your hands on to fine tune.

IMO, intuitively that never made sense. But I never tested it either.

spacecadet|4 months ago

For some of us fine-tuning is a constant activity...

CuriouslyC|4 months ago

Fine tuning by pretraining over a RL tuned model is dumb AF. RL task tuning works quite well.

HarHarVeryFunny|4 months ago

You may have no choice in how the model you are fine tuning was trained, and may have no interest in verticals it was RL tuned for.

In any case, platforms like tinker.ai support both SFT and RL.