
cbo100 | 1 year ago

I get the right answer on the 8B model too.

Could it be the quantized version that's failing?


ein0p | 1 year ago

My models are both 4-bit. But yeah, that could be it - small models are much worse at tolerating quantization. That's why people use LoRA to recover some of the accuracy, even if they don't need domain adaptation.
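
For reference, here is a minimal sketch of that LoRA-on-a-quantized-base recipe using Hugging Face transformers, peft, and bitsandbytes (the model name and hyperparameters are illustrative, not from this thread): the 4-bit base weights stay frozen, and only small full-precision adapter matrices are trained, which is how the fine-tune can claw back some of the accuracy lost to quantization.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with 4-bit (NF4) quantized weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # illustrative; any causal LM works
    quantization_config=bnb_config,
)

# Attach LoRA adapters to the attention projections. The quantized base
# weights are frozen; only the low-rank A/B matrices receive gradients.
lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # illustrative target layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the model
```

From here you would fine-tune on your original (or any representative) data; since the trainable parameter count is tiny, this is feasible even on a single consumer GPU.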