keonix | 2 years ago

> with or without finetuning?

With, but it's still bonkers that it works so well

>Also is there a practical motivation for creating them?

You could get in-between model sizes (like 20B instead of 13B or 34B). Before better quantization this was useful for inference if you were unlucky with your VRAM size, but now I see it being useful only for training, because you can't train on quants.
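
For anyone curious what an "in-between size" looks like in practice, here is a minimal sketch of the layer-stacking ("frankenmerge") idea, assuming a LLaMA-style model whose decoder blocks live in model.model.layers (as in transformers' LlamaForCausalLM). The model ID and the particular 30/20/10 layer split are illustrative choices, not a recipe from the thread:

```python
import copy

import torch.nn as nn
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")
layers = base.model.layers  # nn.ModuleList of 40 decoder blocks in the 13B model

# Repeat the middle 20 blocks once: 40 -> 60 blocks, roughly 20B params,
# landing between the stock 13B and 34B sizes. The split is arbitrary here.
order = list(range(0, 30)) + list(range(10, 30)) + list(range(30, 40))

base.model.layers = nn.ModuleList(copy.deepcopy(layers[i]) for i in order)
base.config.num_hidden_layers = len(order)

# Unlike a quantized model, the stacked model keeps full-precision weights,
# so you can continue fine-tuning it - which is the training use case above.
```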

ShamelessC | 2 years ago

> With, but it's still bonkers that it works so well

Ehhhh…