mobicham | 1 year ago

LoRA training should benefit from the same speed-up: the 1-bit base weights are frozen, so all you need for both the forward and the backward pass is a binary matmul, and you can optionally cast the result afterwards to get more stable gradients.
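
A minimal sketch of what that could look like in PyTorch, assuming sign-binarized frozen weights with a per-output-channel scale. The binary matmul is emulated in float here (a real kernel would operate on packed bits), and the class and buffer names are just for illustration, not any particular library's API:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LoRABinaryLinear(nn.Module):
        def __init__(self, in_features, out_features, rank=8, alpha=16):
            super().__init__()
            # Frozen 1-bit base weights: +/-1 entries plus a float per-channel
            # scale. Registered as buffers, so they never receive gradients.
            self.register_buffer("w_bin", torch.sign(torch.randn(out_features, in_features)))
            self.register_buffer("scale", torch.ones(out_features, 1))
            # Trainable low-rank adapters: the only tensors that get gradients.
            self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
            self.scaling = alpha / rank

        def forward(self, x):
            # Base path: binary matmul (emulated in float here), scaled and
            # cast to the activation dtype for more stable gradients.
            base = F.linear(x, (self.w_bin * self.scale).to(x.dtype))
            # Adapter path: two small float matmuls.
            return base + self.scaling * F.linear(F.linear(x, self.lora_a), self.lora_b)

    layer = LoRABinaryLinear(1024, 1024)
    y = layer(torch.randn(2, 1024))  # grads flow only to lora_a / lora_b

In the backward pass the input gradient is again just a binary matmul against the transposed 1-bit weights, and only lora_a / lora_b accumulate gradients, which is why the inference speed-up should carry over to training.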

