When fine-tuning an LLM you can use the LoRA technique to make fine-tuning faster. LoRA trains only a small set of extra parameters: a low-rank update to each weight matrix (analogous to keeping only the n largest singular values of an SVD). The size of that update is set by the rank: the smaller the rank, the faster the fine-tuning, but if the rank is too small, quality suffers. So you want to pick the optimal rank, and this paper describes a technique for finding that rank more easily.
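As a rough sketch of the mechanism (illustrative names and shapes, not the paper's code): LoRA freezes the pretrained weight W and learns two small factors A and B whose product is the rank-r update, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Forward pass with a LoRA adapter of rank r = A.shape[0].

    W: frozen pretrained weight, shape (d_out, d_in)
    A: trainable down-projection, shape (r, d_in)
    B: trainable up-projection, shape (d_out, r)
    Only A and B receive gradient updates during fine-tuning.
    """
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))  # B starts at zero, so the adapter is a no-op at init
x = rng.normal(size=(1, d_in))

# At initialization the adapted model matches the base model exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

The rank/speed trade-off the comment describes falls out of the shapes: halving r halves the trainable parameter count, but also halves the expressiveness of the update.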
Would you say the following understanding is correct?:
- You can fine-tune a model regardless of whether it has been quantized (as in the 4-bit versions of models made to fit in consumer-grade RAM sizes) or not.
- You can fine-tune any model on any hardware, provided it fits into RAM. That means the 30B llama-derived models, in their 4-bit quantized version with a 19.5 GB VRAM requirement, can be fine-tuned on consumer-grade GPUs with 24 GB of VRAM (like the RTX 3090 and 4090).
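Some back-of-the-envelope arithmetic for the second point (rough; it covers only the frozen base weights and ignores activations, the KV cache, and optimizer state for the adapters):

```python
def weight_memory_gb(n_params_billion, bits_per_param):
    """Memory needed for the base model weights alone, in GB."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# 30B parameters at 4 bits is ~15 GB for the weights themselves; the
# 19.5 GB figure quoted above presumably includes cache and overhead.
print(weight_memory_gb(30, 4))   # 15.0
print(weight_memory_gb(30, 16))  # 60.0 -- fp16 would be far beyond 24 GB
```

This is why the quantized-base-plus-small-adapter setup is attractive: the trainable LoRA factors add comparatively little on top of the frozen weights.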
Kudos to the authors for providing the code (https://github.com/huawei-noah/KD-NLP/tree/main/DyLoRA) and the RoBERTa example. Considering the current state of the OSS LLM community, I'm guessing someone is already porting it to Llama and GPT-style models.
I'm unsure of the value of dynamically reducing the rank of the LoRA matrix at inference time, given that most of the parameter count presumably comes from the original weights rather than the LoRA diff.
But nonetheless, training time improvements look interesting.
e: Oh I see, the training time improvement is compared to a grid search over the LoRA rank. Not for a single run.
I am not convinced that you shouldn't just train on the highest possible rank that you can with your compute budget. If you can train a DynLoRA with rank 8, why not just train a LoRA with that rank?
So this can tune a model 7X faster than LoRA, which was already a massive speed boost? Curious to see what this will do to the LLaMA-derivative community in particular.
It's very different, but hard to distill in a comment. They use a new regularization technique to basically create a LoRA with dynamically adjustable rank.
huevosabio|2 years ago
Maybe if the "optimal rank" of LoRA applies to any adaptation and you're interested in training multiple adaptations for different use cases?
whimsicalism|2 years ago
I am not convinced that the "best rank" is not just the highest possible with your compute budget, personally.
vladf|2 years ago
It seems like they use a fixed-distribution controller for training. It’d be nice to see why it’s worth deviating from the original RL paradigm.