top | item 35289319

stephanheijl | 2 years ago

To be more exact, LoRA adds two matrices `A` and `B` to any layer that contains trainable weights. The original weights `W_0` have shape `d × k` and are frozen. Matrix `A` has shape `d × <rank>` (`rank` is configurable) and matrix `B` has shape `<rank> × k`. `A` and `B` are then multiplied and the product is added to `W_0` to get the altered weights. The benefit here is that the extra matrices are small compared to `W_0`, which means fewer parameters need to be optimized, so less optimizer state needs to be kept in memory.
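A minimal NumPy sketch of this setup, using the shape convention from the comment above (`A` is `d × rank`, `B` is `rank × k`; all dimension values are made up for illustration). Initializing `B` to zero so the update starts at zero follows the spirit of the LoRA paper:

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration.
d, k, rank = 512, 512, 8

rng = np.random.default_rng(0)

W_0 = rng.standard_normal((d, k))          # frozen pretrained weights, d x k
A = rng.standard_normal((d, rank)) * 0.01  # trainable, d x rank
B = np.zeros((rank, k))                    # trainable, rank x k (zero init: A @ B starts at 0)

# Effective weights used in the forward pass: W_0 stays frozen,
# only A and B would receive gradient updates during fine-tuning.
W_eff = W_0 + A @ B

# The update A @ B has rank at most `rank`, far below min(d, k).
print(np.linalg.matrix_rank(A @ B) <= rank)  # True
```

Only `A` and `B` (roughly `rank * (d + k)` numbers) are trained; `W_0` (`d * k` numbers) never changes.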

twic|2 years ago

Ah, so the resulting model contains both the large matrix of original weights, and also the two small matrices of alterations? But this is smaller than the alternative of a model which contains the large matrix of original weights, and an equally large matrix of alterations.

Why is fine-tuning done with separate alterations, rather than by mutating the original weights?

arugulum|2 years ago

> Why is fine-tuning done with separate alterations, rather than by mutating the original weights?

The goal of most parameter-efficient methods is to store one gold copy of the original model, and learn minor modifications/additions to the model. The easiest way to think about this is in some kind of deployment setting, where you have 1 capable model and you learn different sets of LoRA weights for different tasks and applications.

The original intent of parameter-efficient methods is to reduce the amount of storage space needed for models (do you really want to keep a whole additional copy of LLaMA for each different task?). A secondary benefit is that because you are fine-tuning a smaller number of parameters, the optimizer states (can take up to 2x the size of your model) are also heavily shrunk, which makes it more economical (memory-wise) to (parameter-efficient) fine-tune your model.
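A back-of-the-envelope version of that storage argument, with made-up but plausible numbers (hidden size, rank, and layer count are all assumptions, not taken from any particular model):

```python
# Illustrative storage comparison: full per-task copy vs. per-task LoRA adapter.
d = k = 4096      # assumed hidden size of one weight matrix
rank = 8          # assumed LoRA rank
n_layers = 32     # assumed number of adapted weight matrices

full_copy = d * k * n_layers                   # full fine-tuned weights per task
lora_copy = (d * rank + rank * k) * n_layers   # just A and B per task

print(full_copy // lora_copy)  # 256
```

So under these assumptions each extra task costs roughly 1/256th of a full model copy, which is why you keep one gold copy plus many small adapters.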

stu2b50|2 years ago

> But this is smaller than the alternative of a model which contains the large matrix of original weights, and an equally large matrix of alterations.

It's actually larger. If you just have two equally large matrices of the same dimensions, one original and one of "alterations"... then you can just add them together.

> Why is fine-tuning done with separate alterations, rather than by mutating the original weights?

Then you'd have to compute and store optimizer updates for the whole network, which is very expensive when the model has 7b, 65b, 165b parameters. The intent is to make that cheaper by only computing gradient updates for a low-rank representation of the change in the weight matrix during training.
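A toy manual-backprop sketch of that point, for a single linear layer with the `d × rank` / `rank × k` factorization from earlier in the thread (dimensions and the sum-loss are arbitrary choices for illustration). The optimizer only ever needs gradients and state for the small factors:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, rank, batch = 64, 64, 4, 8

W_0 = rng.standard_normal((d, k))          # frozen: no optimizer state kept
A = rng.standard_normal((d, rank)) * 0.01  # trainable
B = np.zeros((rank, k))                    # trainable

x = rng.standard_normal((batch, d))
y = x @ (W_0 + A @ B)                      # forward pass with adapted weights

# Toy loss L = sum(y), so dL/dy is all ones.
g_y = np.ones_like(y)
g_W = x.T @ g_y                            # gradient w.r.t. the effective weight matrix

# Only the small factors get optimizer updates; no state is stored for W_0.
g_A = g_W @ B.T                            # d x rank
g_B = A.T @ g_W                            # rank x k
print(g_A.size + g_B.size, "trainable grads vs", W_0.size, "frozen weights")
```

Here the optimizer tracks 512 numbers instead of 4096; at LLaMA scale that same ratio is what makes fine-tuning fit in memory.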

TuringTest|2 years ago

It's larger, but there are fewer parameters to train for your specific use case, since you are training only the small matrices while the original weights remain unaltered.

seydor|2 years ago

Can rank decomposition be used to reduce the original weight matrices as well? Or are they assumed to be compressed already?

metanonsense|2 years ago

Those fully trained networks are usually considered full-rank. At least that is what they say in the LoRA paper.

grph123dot|2 years ago

Your explanation is crystal clear. I suppose it works well in practice, but is there any reason it works that well?

stu2b50|2 years ago

Per the original paper, it's been found empirically that neural network weights often have low intrinsic rank. It follows, then, that the change in the weights as you train also has low intrinsic rank, which means that you should be able to represent it with a lower-rank matrix.