item 35514228

DyLoRA: Parameter Efficient Tuning of Pre-Trained Models

115 points | mparrett | 2 years ago | arxiv.org

36 comments


fancyfredbot|2 years ago

When fine-tuning an LLM you can use the LoRA technique to make the fine-tuning faster. LoRA freezes the pre-trained weights and trains a low-rank update to each weight matrix: the update is factored into two thin matrices, so it acts like a low-rank approximation of the weight change (for a fixed matrix, the best rank-n approximation would keep the n largest singular values of its SVD). The size of the update is determined by the rank: the smaller the rank, the faster the fine-tuning. However, if you make the rank too small then quality will suffer, so you want to pick the optimal rank. This paper describes a technique which can be used to find the optimal rank more easily.
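A minimal sketch of the idea (my own illustration with made-up layer sizes, not the paper's code): the pre-trained weight stays frozen and only two small factors are trained.

```python
import numpy as np

# Minimal LoRA forward-pass sketch. The frozen weight W has d_in*d_out
# parameters; the trainable update B@A has only r*(d_in + d_out).
d_in, d_out, r = 768, 768, 8    # hypothetical layer sizes and rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable "down" projection
B = np.zeros((d_out, r))                    # trainable "up" projection (init 0)
alpha = 16                                  # scaling hyperparameter

def lora_forward(x):
    # Effective weight is W + (alpha/r) * B @ A, applied without ever
    # materializing the full d_out x d_in update matrix.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted model initially matches the
# frozen one exactly.
assert np.allclose(lora_forward(x), W @ x)
```

With these sizes the adapter trains 12,288 parameters instead of 589,824, which is where the speed and memory savings come from.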

FrostKiwi|2 years ago

Fascinating progress.

Would you say the following understanding is correct?:

- You can fine-tune a model regardless of whether it has been quantized (as in the 4-bit versions of models made to fit in consumer-grade RAM sizes) or not.

- You can fine-tune any model on any hardware, provided it fits into RAM. That means that the 30B LLaMA-derived models, in their 4-bit quantized version with a 19.5 GB VRAM requirement, can be fine-tuned on consumer-grade GPUs with 24 GB of VRAM (like the RTX 3090 and 4090).

whimsicalism|2 years ago

I'm unsure of the value of dynamically reducing the rank of the LoRA matrix at inference time, given that most of the parameter count probably comes from the original weights rather than the LoRA diff.

But nonetheless, training time improvements look interesting.

Edit: Oh I see, the training-time improvement is compared to a grid search over the LoRA rank, not to a single run.

I am not convinced that you shouldn't just train at the highest rank your compute budget allows. If you can train a DyLoRA with rank 8, why not just train a LoRA with that rank?

huevosabio|2 years ago

Yeah, this is interesting, but I can't see the immediate value (not that there isn't any).

Maybe if the "optimal rank" of LoRA applies to any adaptation and you're interested in training multiple adaptations for different use cases?

vladf|2 years ago

The optimal rank could differ across layers.

turnsout|2 years ago

So this can tune a model 7X faster than LoRA, which was already a massive speed boost? Curious to see what this will do to the LLaMA-derivative community in particular.

whimsicalism|2 years ago

7x faster compared to grid-search LoRA for best rank.

Personally, I am still not convinced that the "best rank" isn't just the highest rank your compute budget allows.

vladf|2 years ago

How does this technique differ from the supernet optimization for one-shot NAS? https://proceedings.mlr.press/v80/bender18a.html

It seems like they use a fixed-distribution controller for training. It’d be nice to see why it’s worth deviating from the original RL paradigm.

whimsicalism|2 years ago

It's very different, but hard to distill in a comment. They use a new regularization technique to basically create a LoRA with dynamically adjustable rank.
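My reading of that mechanism, as a rough sketch (illustrative sizes, not the authors' code): during training a rank b is sampled each step and only the leading b components of the factors are used, so lower ranks nest inside higher ones and one trained adapter can be truncated to any rank at inference time.

```python
import numpy as np

# DyLoRA-style rank sampling sketch: train with a randomly truncated
# low-rank update so the adapter works at every rank up to r_max.
d_in, d_out, r_max = 64, 64, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))        # frozen pre-trained weight
A = rng.standard_normal((r_max, d_in)) * 0.01 # trainable factors at max rank
B = rng.standard_normal((d_out, r_max)) * 0.01

def truncated_forward(x, b):
    # Use only the first b columns of B and the first b rows of A.
    return W @ x + B[:, :b] @ (A[:b, :] @ x)

# During training, b would be sampled anew at each step, e.g.:
b = rng.integers(1, r_max + 1)
y = truncated_forward(rng.standard_normal(d_in), b)

# At inference you pick any rank <= r_max from the same trained adapter,
# instead of grid-searching over separately trained LoRAs.
x = rng.standard_normal(d_in)
full = truncated_forward(x, r_max)
small = truncated_forward(x, 1)
```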