I think the point the author misses is that many applications of fine-tuning are to get a model to do a single task. This is what I have done in my current role at my company.
We’ve fine-tuned open-weight models for knowledge injection, among other things, and gotten a model that’s better than OpenAI models at exactly one hyper-specific task for our use case, which is hardware verification. Or we’ve fine-tuned the OpenAI models, gotten significantly better OpenAI models at this task, and then only used them for this task.
The point is that a network of hyper-specific fine-tuned models is how a lot of stuff is implemented. So I disagree from direct experience with the premise that fine-tuning is a waste of time because it is destructive.
I don’t care if I “damage” Llama so that it can’t write poetry, give me advice on cooking, or translate to German. In this instance I’m only ever going to prompt it with: “Does this design implement the AXA protocol? <list of ports and parameters>”
It looked to me like the author did know that. The title only says "Fine-tuning", but immediately in the article he talks about Fine-tuning for knowledge injection, in order to "ensure that their systems were always updated with new information".
Fine-tuning to help it not make the stupid mistake that it makes 10% of the time no matter what instructions you give it is a completely different use case.
Cost, latency, and performance are huge reasons why my company chooses to fine-tune models. We start with a base model for a task, and as our traffic grows, we tune a smaller model, resulting in huge performance and cost savings.
The author makes it explicit that they're talking about finetuning "for Knowledge Injection". They give a quote that claims finetuning is still useful for things like following a specific style, formatting, etc. The title they chose could have been a bit more specific and less aphoristic.
Where finetuning makes less sense is doing it merely to get a model up to date with, e.g., changes in some library, or to teach it a new library it did not know, or, even worse, your codebase. I think this is what OP is talking about.
Let me preface by saying I'm not skeptical about your answer or think you're full of crap. Can you give me an example or two about a single task that you fine-tune for? Just trying to familiarize myself with more AI engineering tasks.
What is your (company's) motivation behind using non-deterministic tools for "verification" instead of actually verifying designs using formal methods?
Could you give any rough details? I'm in this world, and have only experienced rigid/deterministic bounds for hardware, ideally based on "guaranteed by design" based models. The need for determinism has always prevented AI from being a part of it.
Interestingly, the author mentions LoRA as a "special" way of fine-tuning that is not destructive. Have you considered it, or did you opt for more direct fine-tuning?
In this case, for doing specific tasks, it makes much more sense to optimize the prompts and the whole flow with DSPy, instead of just fine tuning for each task.
You do you, and if it works i’m not going to argue with your results, but for others, finetuning is the wrong tool for knowledge injection over a well-designed RAG pipeline.
Finetuning is good for, like you said, doing things a particular way, but that's not the same thing as being good at knowledge injection and shouldn't be considered as such.
It’s also much easier to prevent a RAG pipeline from generating hallucinated responses. You cannot finetune that out of a model.
> Adapter Modules and LoRA (Low-Rank Adaptation) insert new knowledge through specialized, isolated subnetworks, leaving existing neurons untouched. This is best for stuff like formatting, specific chains, etc- all of which don’t require a complete neural network update.
This highlights to me that the author doesn't know what they're talking about. LoRA does exactly the same thing as normal fine-tuning, it's just a trick to make it faster and/or be able to do it on lower end hardware. LoRA doesn't add "isolated subnetworks" - LoRA parameters are added to the original weights!
Here's the equation for the forward pass from the original paper[1]:
h = W_{0} * x + ∆W * x = W_{0} * x + B * A * x
where "W_{0}" are the original weights and "B" and "A" (which give us "∆W" after they're multiplied) are the LoRA adapter. And if you've been paying attention it should also be obvious that, mathematically, you can merge your LoRA adapter into the original weights (by doing "W = W_{0} + ∆W"), which most people do, or you could even create a LoRA adapter from a fully fine-tuned model by calculating "W - W_{0}" to get ∆W and then doing SVD to recover B and A.
If you know what you're doing, anything you can do with LoRA you can also do with full fine-tuning, but better. It might be true that it's somewhat harder to "damage" a model by doing LoRA (because the parameter updates are fundamentally low-rank, due to the LoRA adapters being low-rank), but that's a skill issue and not a fundamental property.
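For the curious, both the merge and the SVD-recovery tricks above are a few lines of numpy (the shapes and rank here are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 2                # weight shape d x k, LoRA rank r << min(d, k)
W0 = rng.normal(size=(d, k))     # frozen original weights
B = rng.normal(size=(d, r))      # LoRA factors; ∆W = B @ A has rank <= r
A = rng.normal(size=(r, k))
x = rng.normal(size=(k,))

# Forward pass with the adapter kept separate: h = W0*x + B*A*x
h_adapter = W0 @ x + B @ (A @ x)

# Merge the adapter into the weights: W = W0 + ∆W
W = W0 + B @ A
assert np.allclose(W @ x, h_adapter)      # identical, up to float rounding

# Recover a rank-r adapter from a "fully fine-tuned" W: SVD of W - W0
U, S, Vt = np.linalg.svd(W - W0)
B_rec, A_rec = U[:, :r] * S[:r], Vt[:r, :]
assert np.allclose(B_rec @ A_rec, W - W0)  # exact here, since ∆W has rank r
```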
> LoRA does exactly the same thing as normal fine-tuning
You wrote "exactly", so I'm going to say "no". To clarify what I mean: LoRA seeks to accomplish a similar goal as "vanilla" fine-tuning, but with a different method (freezing the existing model weights while adding adapter matrices that get added to the original). LoRA isn't exactly the same mathematically either; it is a low-rank approximation (as you know).
> LoRA doesn't add "isolated subnetworks"
If you think charitably, the author is right. LoRA weights are isolated in the sense that they are separate from the base model. See e.g. https://www.vellum.ai/blog/how-we-reduced-cost-of-a-fine-tun... "The end result is we now have a small adapter that can be added to the base model to achieve high performance on the target task. Swapping only the LoRA weights instead of all parameters allows cheaper switching between tasks. Multiple customized models can be created on one GPU and swapped in and out easily."
> you can merge your LoRA adapter into the original weights (by doing "W = W_{0} + ∆W") which most people do
Yes, one can do that. But on what basis do you say that "most people do"? Without having collected a sample of usage myself, I would just say this: there are many good reasons to not merge (e.g. see link above): less storage space if you have multiple adapters, easier to swap. On the other hand, if the extra adapter slows inference unacceptably, then don't.
> This highlights to me that the author doesn't know what they're talking about.
It seems to me you are being some combination of: uncharitable, overlooking another valid way of reading the text, being too quick to judge.
> that's a skill issue and not a fundamental property
This made me laugh.
You seem like you may know something I've been curious about.
I'm a shader author these days, haven't been a data scientist for a while, so it's going to distort my vocab.
Say you've got a trained neural network living in a 512x512 structured buffer. It's doing great, but you get a new video card with more memory, so you can afford to migrate it to a 1024x1024. Is the state-of-the-art way to retrain with the same data but bigger initial parameters, or are there other methods that smear the old weights over a larger space to get a leg up? Does anything like this accelerate training time?
... can you up sample a language model like you can lowres anime profile pictures? I wonder what the made up words would be like.
This is a pretty awful take. Everyone understands they are modifying the weights - that is the point. It’s not like these models were released with all of the weights perfectly accounted for and changing them in any way ruins them. The awesome thing about fine-tuning is that the weights are malleable and you have a great base to start from.
Also, the basic premise that knowledge injection is a bad use-case seems flawed? There are countless open models released by Google that completely fly in the face of this. MedGemma is just Gemma 3 4B fine-tuned on a ton of medical datasets, and it's measurably better than stock Gemma within the medical domain. Maybe it lost some ability to answer trivia about Minecraft in the process, but isn't that kinda implied by "fine-tuning" something? You're making it purpose-built for a specific domain.
Medgemma gets its domain expertise from pre-training on medical datasets, not finetuning. It’s pretty uncharitable to call the post an awful take if you’re going to get that wrong.
> It’s not like these models were released with all of the weights perfectly accounted for and changing them in any way ruins them.
So more imperfect is better?
Of course the model’s parameters leave a many-billion-element vector path for improvement. But what circuitous path is that, which it didn’t already find?
You can’t find it by definition if you don’t include all the original data with the tuning data. You have radically changed the optimization surface with no contribution from the previous data at all.
The one use case that makes sense is sacrificing functionality to get better at a narrow problem.
A man who burns his own house down may understand what he is doing and do it intentionally - but without any further information he still appears to be wasting his time and doing something stupid. There isn't any contradiction between something being a waste of time and people doing it on purpose - indeed, the point of the article is to get some people to change what they are purposefully doing.
He's proposing alternatives he thinks are superior. He might well be right, too - I don't have a horse in the race, but LoRA seems like a more satisfying approach for getting a result than retraining the model, and giving LLMs tools seems to be proving more effective as well.
Clickbait headline. "Fine-tuning LLMs for knowledge injection is a waste of time" is true, but IDK who's trying to do that. Fine-tuning is great for changing model behavior (i.e. the zillions of uncensored models on Hugging Face are much more willing to respond to... dodgy... prompts than any amount of RAG is gonna get you), and RAG is great for knowledge injection.
Also... "LoRA" as a replacement for finetuning??? LoRA is a kind of finetuning! In the research community it's actually referred to as "parameter efficient finetuning." You're changing a smaller number of weights, but you're still changing them.
They provide no references other than self-referencing blogs. It was also suspicious to read about loss from changing neural-network weights with zero mention of quantization. Unfortunately, most of the content in this one was taken from the author's own previous work.
RAG is getting some backlash and this reads as a backlash of the backlash. I hope things settle down soon but many techfluencers put all their eggs in RAG and used it to gatekeep AI.
It was the best option at one point. They're still a great option if you want an override (e.g. categorization or dialects), but they're not precise.
Changes that happened:
1. LLMs got a lot cheaper but fine tuning didn't. Fine tuning was a way to cut down on prompts and make them 0 shot (not require examples)
2. Context windows became bigger. Fine tuning was great when the model was expected to respond with a sentence.
3. The two things above made RAG viable.
4. Training got better on released models, to the point where 0 shots worked fine. Fine tuning ends up overriding these things that were scoring nearly full points on benchmarks.
Yeah, as soon as I read that I felt like the author was living in a very different context from mine. It's never even occurred to me that fine-tuning could be an effective method for injecting new knowledge.
If anything, I expect fine-tuning to destroy knowledge (and reasoning), which hopefully (if you did your fine-tuning right) is not relevant to the particular context you are fine-tuning for.
To be fair there are lots of Facebook, Instagram, and Youtube cargo cultists telling people to fine-tune on their documents for some reason. This got to be so common in 2024 that I think it was part of the pressure behind Gigabyte branding their hardware around it.
I think it is a very common misconception (by consumers or businesses trying to use LLMs) that fine tuning can be used to inject new knowledge. I'm not sure many of the fine-tuning platforms do much to disabuse people of this notion.
There is no real difference between fine-tuning with and without a LoRA. If you give me a model with a LoRA adapter, I can give you an updated model without the extra LoRA params that is functionally identical.
Fitting a LoRA changes potentially useful information the same way that fine-tuning the whole model does. It's just that the LoRA restricts the expressiveness of the weight update so that it is compactly encoded.
I saw this and immediately relived the last two years of the journey. I think some of the mental models that helped me might help the community too.
What people expect from finetuning is knowledge addition. You want to keep the styling[1] of the original model and just add new knowledge points that would help your task. In-context learning is one example of how this works well - though even there, if the context is out of distribution, a model does not "understand" it and will produce guesswork.
When it comes to LoRA or PEFT or adapters, it's about style transfer. If you focus on a specific style of content, you will see gains, but the model won't learn new knowledge that wasn't already in the original training data, and it will forget previously learned styles depending on context. When you do full finetuning (or SFT with no frozen parameters), it alters all the parameters and results in a gain of new knowledge at the cost of previous knowledge (and will give you some gibberish if you ask about topics outside of the domain). This is called catastrophic forgetting. Hence, yes, full finetuning works - it is just an imperfect solution like all the others. Recently, with reinforcement learning, there has been talk of continual learning - which is where Richard Sutton's latest paper also lands - but that's at the research level.
Having said all that, if you start with the wrong mental model for Finetuning, you would be disappointed with the results.
The problem to solve is adding new knowledge while preserving the original pretrained intelligence. Still a work in progress, but we published a paper last year on one way it could be done. Here is the link: https://arxiv.org/abs/2409.17171 (it also has experimental results for all the different approaches).
[1]: Styling here means the style learned by the model in SFT. Eg: Bullets, lists, bolding out different headings etc. all of that makes the content readable. The understanding of how to present the answer to a specific question.
I think of it as trying to encourage the LLM to want to give answers from a particular part of the phase space. You can do it by fine tuning it to be more likely to return values from there, or you can prompt it to get into that part of the phase space. Either works, but fiddling around with prompts doesn't require all that much MLops or compute power.
That said, fine tuning small models because you have to power through vast amounts of data where a larger model might be cost ineffective -- that's completely sensible, and not really mentioned in the article.
> That said, fine tuning small models because you have to power through vast amounts of data where a larger model might be cost ineffective -- that's completely sensible, and not really mentioned in the article.
...which I thought was arguably the most popular use case for fine tuning these days.
Wasn't there that thing about how large LLMs are essentially compression algorithms (https://arxiv.org/pdf/2309.10668)? Maybe that's where this article is coming from: the idea that finetuning "adds" data to the set of data that compresses well. But that indeed doesn't work unless you mix the finetuning data in with the original training corpus of the base model. I think the article is wrong in saying it "replaces" the data, though - it's true that finetuning without keeping the original training corpus increases loss on the original data, but "large" in LLM really is large, and current models are not trained to saturation, so there is plenty of room to fit in finetuning if you do it right.
Not sure what you mean by “not trained to saturation”. Also I agree with the article, in the literature, the phenomenon to which the article refers is known as “catastrophic forgetting”. Because no one has specific knowledge about which weights contribute to model performance, by updating the weights via fine-tuning, you are modifying the model such that future performance will change in ways that are not understood. Also I may be showing my age a bit here, but I always thought “fine-tuning” was performing additional training on the output network (traditionally a fully-connected net), but leaving the initial portion (the “encoder”) weights unchanged - allowing the model to capture features the way it always has, but updating the way it generates outputs based on the discovered features.
While the author makes some good points (along with some non-factual assertions), I wonder why he decided to have this counter-productive and factually wrong clickbait title.
Fine-tuning (and LoRA IS fine-tuning) may not be cost-effective for most organizations for knowledge updates, but it excels in driving behavior in task specific ways, for alignment, for enforcing structured output (usually way more accurately than prompting), tool and function use, and depending on the type of knowledge, if it is highly specific, niche, long tail type of knowledge, it can even make smaller models beat bigger models, like the case with MedGemma.
Obviously there are going to be narrow tasks where fine tuning makes sense. But using leading models for agents is a completely different mindset and approach.
Because I have been working on replacing multiple humans handling complex business processes mostly end-to-end (with human in the loop somehow in there).
I find that I need the very best models to be able to handle a lot of instructions and make the best decisions about tool selection. And overall I just need the most intelligence possible to make fewer weird errors or misinterpretations of the instructions or situations/data.
I can see how fine tuning would help for some issues like some report formatting. But that output comes at the end of the whole process. And I can address formatting issues almost instantly by either just using a smarter model that follows instructions better, or adding a reminder instruction, or creating a simpler subtask. Sometimes the subtask can run on a cheaper model.
So it's kind of like the difference between building a traditional manufacturing line with very specific robot arms, tooling, and conveyor belts, versus plugging in just a few different humanoid robots with assembly manuals and access to more general-purpose tools on their belts. You used to always have to build the full traditional line. In many cases that doesn't necessarily make sense anymore.
> Instead, use modular methods like retrieval-augmented generation, adapters, or prompt-engineering — these techniques inject new information without damaging the underlying model’s carefully built ecosystem.
So obviously this is what most of us are already doing, I would venture. But there's a pretty big "missing middle" here. RAG/better prompts serve to provide LLMs with the context they need for a specific task, but are heavily limited by context windows. I know they've been growing quite a bit, but from my usage it still seems that things further back in the window get forgotten about pretty regularly.
Fine tuning was always the pitch for the solution to that. By baking the "context" you need directly into the LLM. Very few people or companies are actually doing this though, because it's expensive and you end up with an outdated model by the time you're done...if you even have the data you need to do it in the first place.
So where we're left is basically without options for systems that need more proprietary knowledge than we can reasonably fit into the context window.
I wonder if there's anyone out there attempting to do some sort of "context compression". An intermediary step that takes our natural language RAG/prompts/context and compresses it into a data format that the LLM can understand (vectors of some sort?) but are a fraction of the tokens that the natural language version would take.
edit: After I wrote this I fed it into ChatGPT and asked if there were techniques I was missing. It introduced me to LoRA (which I suppose is the "adapters" mentioned in the OP), and now I have a whole new rabbithole to climb down. AI is pretty cool sometimes.
For medical applications, across several generations of models, we see fine-tuned models outperform base models of similar size. However, newer/bigger general base models outperform smaller fine-tuned models.
Also, as others have pointed out, supervised fine-tuning can be quite useful for teaching how to perform specific tasks. I agree with the author that RAG generally is more suited for injecting additional knowledge.
It would be very interesting to fine tune a model for a narrow task, while tracking its performance on every original training sample from the pre-tuning baseline.
I expect it would greatly help characterize what was lost, at the expense of a great deal of extra computation. But with enough experiments might shed some more general light.
I suspect the smaller the tuning dataset, the faster and worse the overwriting will be, since the new optimization surface will be so much simpler to navigate than the much bigger dataset's optimization surface.
Then a question might be, what percentage of the original training data, randomly retained, might slow general degradation.
Fine tuning isn’t for everything but certainly makes it easy to build models for special purposes, eg metadata extraction. Happy to lose some capability in another domain for that, eg Pokémon. The headline is a bit too general.
I love how people say things like this with complete disregard for research.
Most LLM research involves fine tuning models, and we do amazing things with it. R1 is a fine tune, but I guess that’s bad?
Our company adds knowledge with fine tuning all the time. It’s usually a matter of skill not some fundamental limit. You need to either use LoRA or use a large batch size and mix the previous training data in.
All we are doing is forcing deep representations. This isn’t a binary “fine tuning good/bad” it’s a spectrum of how deep and robust you make the representations
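As a rough illustration of the "mix the previous training data in" part, here is a toy replay-style batching sketch (the function name, ratios, and data are made up for illustration, not from any particular framework):

```python
import random

def build_mixed_batches(task_data, replay_data, batch_size=8, replay_frac=0.25, seed=0):
    """Interleave new task examples with a sample of the original training
    distribution ("replay"), a common way to blunt catastrophic forgetting."""
    rng = random.Random(seed)
    n_replay = int(batch_size * replay_frac)   # e.g. 2 of every 8 examples
    n_task = batch_size - n_replay
    batches = []
    for i in range(0, len(task_data), n_task):
        batch = task_data[i:i + n_task] + rng.sample(replay_data, n_replay)
        rng.shuffle(batch)
        batches.append(batch)
    return batches

# Toy data: "task" items are small ints, "replay" items are >= 1000
batches = build_mixed_batches(list(range(60)), list(range(1000, 1100)))
assert all(sum(x >= 1000 for x in b) == 2 for b in batches)
```

In real fine-tuning runs the items would be tokenized examples rather than ints, but the idea is the same: every batch keeps some pressure from the original data distribution.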
I feel that the effects of fine-tuning are often short-term, and sometimes it can end up overwriting what the model has already learned, making it less intelligent in the process.
I lean more towards using adaptive methods, optimizing prompts, and leveraging more efficient ways to handle tasks. This feels more practical and resource-efficient than blindly fine-tuning.
We should focus on finding ways to maximize the potential of existing models without damaging their current capabilities, rather than just relying on fine-tuning.
Before the post-ChatGPT boom, we used to talk of "catastrophic forgetting"...
Make sure the new training dataset is "large" by augmenting it with general data (see it as a sample of the original dataset), use PEFT techniques (freezing weights => less risks), use regularization (elastic weight consolidation).
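The elastic weight consolidation idea can be sketched in a few lines (a toy numpy version with made-up numbers; real implementations estimate the Fisher term from gradients on the original task's loss):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic penalty that discourages moving parameters the original
    task cared about (high Fisher information)."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([1.0, 2.0, 3.0])   # weights after original training
fisher     = np.array([10.0, 0.1, 0.1])  # first weight is "important"
theta      = np.array([1.5, 2.5, 3.5])   # same drift of 0.5 on each weight

# The important weight dominates: 0.5 * (10 + 0.1 + 0.1) * 0.25 = 1.275
penalty = ewc_penalty(theta, theta_star, fisher)
```

Adding this penalty to the fine-tuning loss lets unimportant weights move freely while anchoring the ones the base model relies on.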
Fine-tuning is fine, but it will be more expensive than you thought and should be led by more experienced ML engineers. You probably don't need to fine-tune models anyway.
Correct me if I am wrong, but I thought the point of fine-tuning was to get precise returns. We make it hyper specific to the task at hand.
Sure, we can get 90% of the way there without fine-tuning, but most of these models are vast.
I would argue that it potentially MAY be a waste of time right out the gate.
RAG and fine-tuning suit different business scenarios. For directional, persistent knowledge - such as adaptations for the power and energy sectors and other fields - fine-tuning can bring better performance;
RAG is more suited to temporary, variable situations.
In addition, LoRA is also a fine-tuning technique, and it is described as such in its paper.
I don’t know if fine tuning works. But if it doesn’t, then are we assuming the underlying weights are optimal? At what point do we determine that a network is properly “trained” and any subsequent training is “fine tuning”.
I am under the impression that fine tuning is expensive (could anyone put a number on that?) and that each time a new model is released you have to fine tune it again, paying full price every time.
Seriously, most fine-tuning now is done with LoRA adapters. They are much faster and more reliable. In my lab, I don't know anybody who is trying to do any kind of thorough fine-tuning...
This post is hilarious. People like this author are the ones vetting start-ups? Please. The idea that alignment leads to a degradation in model utility is hardly news.
But let’s be clear: fine-tuning an LLM to specialize in a task isn’t just about minimizing utility loss. It’s about trade-offs. You have to weigh what you gain against what you lose.
Fine-tuning is an excellent way to reliably bake domain-specific data into a model; there are plenty of coding finetunes on Hugging Face that outperform foundation models at, say, coding, without significant loss in other domains.
"But this logic breaks down for advanced models, and badly so. At high performance, fine-tuning isn’t merely adding new data — it’s overwriting existing knowledge. Every neuron updated risks losing information that’s already intricately woven into the network. In short: neurons are valuable, finite resources. Updating them isn’t a costless act; it’s a dangerous trade-off that threatens the delicate ecosystem of an advanced model."
Mainly including this article to spark discussion—I agree with some of this and not with all of it. But it is an interesting take.
The whole point of base models is to be general purpose, and fine tuned models to be tuned for specific tasks using a base model.
[1] -- https://arxiv.org/pdf/2106.09685
xpe|8 months ago
You wrote exactly so I'm going to say "no". To clarify what I mean: LoRA seeks to accomplish a similar goal as "vanilla" fine-tuning but with a different method (freezing existing model weights while adding adapter matrices that get added to the original). LoRA isn't exactly the same mathematically either; it is a low-rank approximation (as you know).
> LoRA doesn't add "isolated subnetworks"
If you think charitably, the author is right. LoRA weights are isolated in the sense that they are separate from the base model. See e.g. https://www.vellum.ai/blog/how-we-reduced-cost-of-a-fine-tun... "The end result is we now have a small adapter that can be added to the base model to achieve high performance on the target task. Swapping only the LoRA weights instead of all parameters allows cheaper switching between tasks. Multiple customized models can be created on one GPU and swapped in and out easily."
> you can merge your LoRA adapter into the original weights (by doing "W = W_{0} + ∆W") which most people do
Yes, one can do that. But on what basis do you say that "most people do"? Without having collected a sample of usage myself, I would just say this: there are many good reasons to not merge (e.g. see link above): less storage space if you have multiple adapters, easier to swap. On the other hand, if the extra adapter slows inference unacceptably, then don't.
> This highlights to me that the author doesn't know what they're talking about.
It seems to me you are being some combination of: uncharitable, overlooking another valid way of reading the text, being too quick to judge.
MrLeap|8 months ago
This made me laugh.
You seem like you may know something I've been curious about.
I'm a shader author these days, haven't been a data scientist for a while, so it's going to distort my vocab.
Say you've got a trained neural network living in a 512x512 structured buffer. It's doing great, but you get a new video card with more memory so you can afford to migrate it to a 1024x1024. Is the state of the art way to retrain with the same data but bigger initial parameters, or are there other methods that smear the old weights over a larger space to get a leg up? Anything like this accelerate training time?
... can you up sample a language model like you can lowres anime profile pictures? I wonder what the made up words would be like.
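There is a known function-preserving trick along these lines: Net2Net-style widening (duplicate randomly chosen hidden units, then split their outgoing weights), which gives the bigger network a warm start instead of retraining from scratch. A minimal numpy sketch, with made-up layer shapes:

```python
import numpy as np

def net2wider(W_in, W_out, new_width, rng):
    """Net2WiderNet-style widening: duplicate randomly chosen hidden units,
    then divide their outgoing weights by the duplication count so the
    widened layer computes exactly the same function."""
    old_width = W_in.shape[0]                   # W_in: (hidden, in), W_out: (out, hidden)
    extra = rng.integers(0, old_width, size=new_width - old_width)
    mapping = np.concatenate([np.arange(old_width), extra])

    W_in_new = W_in[mapping]                    # copy incoming rows for duplicated units
    counts = np.bincount(mapping, minlength=old_width)[mapping]
    W_out_new = W_out[:, mapping] / counts      # split each outgoing weight among copies
    return W_in_new, W_out_new

rng = np.random.default_rng(1)
W_in = rng.standard_normal((4, 3))              # old hidden layer: 4 units
W_out = rng.standard_normal((2, 4))
W_in2, W_out2 = net2wider(W_in, W_out, 8, rng)  # widen to 8 units

x = rng.standard_normal(3)
# Function-preserving (also holds with a ReLU between the layers):
assert np.allclose(W_out @ (W_in @ x), W_out2 @ (W_in2 @ x))
```

In practice you would then keep training; the warm start mainly saves the early epochs. (This is the Net2Net idea from Chen et al., 2015; I'm not sure it's state of the art for LLM-scale models, where growth schedules over both width and depth also get used.)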
kamranjon|8 months ago
Also, the basic premise that knowledge injection is a bad use case seems flawed. There are countless open models released by Google that completely fly in the face of this. MedGemma is just Gemma 3 4B fine-tuned on a ton of medical datasets, and it's measurably better than stock Gemma within the medical domain. Maybe it lost some ability to answer trivia about Minecraft in the process, but isn't that kinda implied by "fine-tuning" something? You're making it purpose-built for a specific domain.
laborcontract|8 months ago
Nevermark|8 months ago
So more imperfect is better?
Of course the model’s parameters leave a many billions of elements vector path for improvement. But what circuitous path is that, which it didn’t already find?
By definition, you can't find it if you don't include all the original data alongside the tuning data. You have radically changed the optimization surface, with no contribution from the previous data at all.
The one use case that makes sense is sacrificing functionality to get better at a narrow problem.
You are correct about that.
roenxi|8 months ago
He's proposing alternatives he thinks are superior. He might well be right, too; I don't have a horse in the race, but LoRA seems like a more satisfying approach for getting a result than retraining the model, and giving LLMs tools seems to be proving more effective as well.
reissbaker|8 months ago
Also... "LoRA" as a replacement for finetuning??? LoRA is a kind of finetuning! In the research community it's actually referred to as "parameter efficient finetuning." You're changing a smaller number of weights, but you're still changing them.
kittikitti|8 months ago
RAG is getting some backlash and this reads as a backlash of the backlash. I hope things settle down soon but many techfluencers put all their eggs in RAG and used it to gatekeep AI.
qeternity|8 months ago
Have people who say this ever actually done it? It works. It works pretty well.
I have no clue why this bad advice is so routinely parroted.
unknown|8 months ago
[deleted]
fibrahim|8 months ago
[deleted]
muzani|8 months ago
Changes that happened:
1. LLMs got a lot cheaper, but fine-tuning didn't. Fine-tuning was a way to cut down on prompts and make them 0-shot (not requiring examples).
2. Context windows became bigger. Fine-tuning was great back when a model was expected to respond with a sentence.
3. The two things above made RAG viable.
4. Training got better on released models, to the point where 0-shot worked fine. Fine-tuning ends up overriding the very things that were scoring nearly full points on benchmarks.
simonw|8 months ago
Is that true though? I don't think I've seen a vendor selling that as a benefit of fine-tuning.
cbsmith|8 months ago
If anything, I expect fine-tuning to destroy knowledge (and reasoning), which hopefully (if you did your fine-tuning right) is not relevant to the particular context you are fine-tuning for.
bird0861|8 months ago
zkoch|8 months ago
robrenaud|8 months ago
Fitting a LoRA changes potentially useful information in the same way that fine-tuning the whole model does. It's just that the LoRA restricts the expressiveness of the weight update so that it is compactly encoded.
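A toy illustration of that restriction (dimensions made up): a full fine-tune can move W anywhere in parameter space, while the LoRA update ΔW = B @ A is confined to a rank-r subspace and stored in far fewer numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4

# A full fine-tune could move W anywhere in R^{d x d}...
full_update = rng.standard_normal((d, d))

# ...while a LoRA update is constrained to rank <= r,
# stored in 2*d*r numbers instead of d*d.
B = rng.standard_normal((d, r))
A = rng.standard_normal((r, d))
lora_update = B @ A

assert np.linalg.matrix_rank(full_update) == d
assert np.linalg.matrix_rank(lora_update) <= r
print(f"full: {d*d} params, LoRA: {2*d*r} params")  # full: 4096 params, LoRA: 512
```

Both updates still overwrite whatever the touched directions previously encoded; the low rank only bounds how much of weight space the change can occupy.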
ankit219|8 months ago
What people expect from finetuning is knowledge addition. You want to keep the styling[1] of the original model and just add new knowledge points that would help your task. In-context learning is one example of how this works well; though even there, if the context is out of distribution, a model does not "understand" it and will produce guesswork.
When it comes to LoRA or PEFT or adapters, it's about style transfer. If you focus on a specific style of content, you will see gains; it's just that the model won't learn new knowledge that wasn't already in the original training data, and it will forget previously learned styles depending on context. When you do full finetuning (or SFT with no frozen parameters), it alters all the parameters, resulting in a gain of new knowledge at the cost of previous knowledge (and it will give you gibberish if you ask about topics outside the domain). This is called catastrophic forgetting. Hence, yes, full finetuning works; it is just an imperfect solution, like all the others. Recently, with reinforcement learning, there has been talk of continual learning, which is where Richard Sutton's latest paper also lands, but that's at the research level.
Having said all that, if you start with the wrong mental model for finetuning, you will be disappointed with the results.
The problem to solve is adding new knowledge while preserving the original pretrained intelligence. Still a work in progress, but we published a paper last year on one way it could be done. Here is the link: https://arxiv.org/abs/2409.17171 (it also has results for experiments with all the different approaches).
[1]: Styling here means the style learned by the model in SFT, e.g. bullets, lists, bolding different headings, etc.; all of that makes the content readable. The understanding of how to present the answer to a specific question.
solresol|8 months ago
That said, fine tuning small models because you have to power through vast amounts of data where a larger model might be cost ineffective -- that's completely sensible, and not really mentioned in the article.
lyu07282|8 months ago
Mostly referred to as model distillation, but I give the author the benefit of the doubt that they didn't mean that.
cbsmith|8 months ago
...which I thought was arguably the most popular use case for fine tuning these days.
Mathnerd314|8 months ago
sota_pop|8 months ago
elzbardico|8 months ago
While the author makes some good points (along with some non-factual assertions), I wonder why he decided to have this counter-productive and factually wrong clickbait title.
Fine-tuning (and LoRA IS fine-tuning) may not be cost-effective for most organizations for knowledge updates, but it excels at driving behavior in task-specific ways: alignment, enforcing structured output (usually far more accurately than prompting), and tool and function use. And depending on the type of knowledge, if it is highly specific, niche, long-tail knowledge, it can even make smaller models beat bigger ones, as in the case of MedGemma.
ilaksh|8 months ago
Because I have been working on replacing multiple humans handling complex business processes mostly end-to-end (with human in the loop somehow in there).
I find that I need the very best models to be able to handle a lot of instructions and make the best decisions about tool selection. And overall I just need the most intelligence possible to make fewer weird errors or misinterpretations of the instructions or situations/data.
I can see how fine tuning would help for some issues like some report formatting. But that output comes at the end of the whole process. And I can address formatting issues almost instantly by either just using a smarter model that follows instructions better, or adding a reminder instruction, or creating a simpler subtask. Sometimes the subtask can run on a cheaper model.
So it's kind of like the difference between building a traditional manufacturing line with very specific robot arms, tooling, and conveyor belts, versus plugging in just a few different humanoid robots with assembly manuals and access to more general-purpose tools on their belts. You used to always have to build the full traditional line. In many cases that doesn't necessarily make sense anymore.
rco8786|8 months ago
So obviously this is what most of us are already doing, I would venture. But there's a pretty big "missing middle" here. RAG/better prompts serve to provide LLMs with the context they need for a specific task, but are heavily limited by context windows. I know they've been growing quite a bit, but from my usage it still seems that things further back in the window get forgotten about pretty regularly.
Fine tuning was always the pitch for the solution to that. By baking the "context" you need directly into the LLM. Very few people or companies are actually doing this though, because it's expensive and you end up with an outdated model by the time you're done...if you even have the data you need to do it in the first place.
So where we're left is basically without options for systems that need more proprietary knowledge than we can reasonably fit into the context window.
I wonder if there's anyone out there attempting to do some sort of "context compression". An intermediary step that takes our natural language RAG/prompts/context and compresses it into a data format that the LLM can understand (vectors of some sort?) but are a fraction of the tokens that the natural language version would take.
edit: After I wrote this I fed it into ChatGPT and asked if there were techniques I was missing. It introduced me to LoRA (which I suppose is the "adapters" mentioned in the OP), and now I have a whole new rabbit hole to climb down. AI is pretty cool sometimes.
adultSwim|8 months ago
Also, as others have pointed out, supervised fine-tuning can be quite useful for teaching how to perform specific tasks. I agree with the author that RAG generally is more suited for injecting additional knowledge.
gdiamos|8 months ago
"SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT SELECT ..."
me_vinayakakv|8 months ago
I've hit this with gemini-2.0-flash, and changing the prompt ever so slightly seems to make things work, only for it to break on other input.
Nevermark|8 months ago
I expect it would greatly help characterize what was lost, at the expense of a great deal of extra computation. But with enough experiments might shed some more general light.
I suspect the smaller the tuning dataset, the faster and worse the overwriting will be, since the new optimization surface will be so much simpler to navigate than the much bigger dataset's optimization surface.
Then a question might be: what percentage of the original training data, randomly retained, might slow the general degradation?
mehulashah|8 months ago
mountainriver|8 months ago
Most LLM research involves fine tuning models, and we do amazing things with it. R1 is a fine tune, but I guess that’s bad?
Our company adds knowledge with fine-tuning all the time. It's usually a matter of skill, not some fundamental limit. You need to either use LoRA or use a large batch size and mix the previous training data in.
All we are doing is forcing deep representations. This isn’t a binary “fine tuning good/bad” it’s a spectrum of how deep and robust you make the representations
Kiyo-Lynn|8 months ago
arbfay|8 months ago
Make sure the new training dataset is "large" by augmenting it with general data (think of it as a sample of the original dataset), use PEFT techniques (freezing weights => less risk), and use regularization (elastic weight consolidation).
Fine-tuning is fine, but it will be more expensive than you thought and should be led by more experienced ML engineers. You probably don't need to fine-tune models anyway.
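For the elastic-weight-consolidation part, the idea is to add a quadratic penalty to the task loss, L = L_task + (lam/2) * sum_i F_i (theta_i - theta*_i)^2, where F is the diagonal Fisher information estimated on the old data. A minimal sketch with made-up numbers:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Elastic Weight Consolidation: a quadratic pull toward the
    pre-fine-tuning weights theta_star, weighted by each parameter's
    (diagonal) Fisher information, so parameters that mattered for the
    old task are harder to move."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0, 0.5])   # weights before fine-tuning
fisher     = np.array([10.0, 0.1, 1.0])   # importance estimated on the old task
theta      = np.array([1.1, -1.0, 0.5])   # weights during fine-tuning

# Moving the high-Fisher parameter by 0.1 costs as much as moving the
# low-Fisher parameter by 1.0:
assert np.isclose(ewc_penalty(theta, theta_star, fisher), 0.1)
```

During training you would add this penalty to the task loss at every step; lam trades plasticity on the new task against retention of the old one.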
ZacWil|8 months ago
mapinxue|8 months ago
RAG is more oriented to temporary and variable situations.
In addition, LoRA is also a fine-tuning technique, and this is stated in their paper.
a_c|8 months ago
varsketiz|8 months ago
clauderoux|8 months ago
unknown|8 months ago
[deleted]
Havoc|8 months ago
titaniumrain|8 months ago
But let’s be clear: fine-tuning an LLM to specialize in a task isn’t just about minimizing utility loss. It’s about trade-offs. You have to weigh what you gain against what you lose.
iamnotagenius|8 months ago
j-wang|8 months ago
Mainly including this article to spark discussion—I agree with some of this and not with all of it. But it is an interesting take.
unknown|8 months ago
[deleted]