top | item 45751661

docandrew | 4 months ago

Hype aside, if you can get an answer to a computing problem, with error bars, in significantly less time for workloads where precision just isn’t that important (such as LLMs), this could be a game changer.

alyxya | 4 months ago

Precision actually matters a decent amount in LLMs. Quantization is applied strategically, in the places where it will minimize performance degradation, and models are robust enough that some loss in quality still yields a good model. I’m skeptical how well this would turn out, though it’s probably always possible to compensate for precision loss with a sufficiently larger model.
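A minimal sketch of what "strategic quantization" costs in precision: symmetric int8 quantization of a weight matrix, the scheme commonly used for LLM weights. The matrix here is synthetic random data, not from any real model; the point is that the round-trip error is bounded by half a quantization step.

```python
import numpy as np

# Synthetic "weights" standing in for one layer of a model.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 256)).astype(np.float32)

# Symmetric per-tensor int8 quantization: map [-max, max] onto [-127, 127].
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dq = w_q.astype(np.float32) * scale  # dequantized weights

# Round-to-nearest bounds the error by half a step (scale / 2).
max_err = float(np.abs(w - w_dq).max())
print(max_err <= scale / 2 + 1e-8)
```

Whether that per-weight error degrades the model depends on where it lands, which is why quantization is applied selectively rather than uniformly.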

fastball | 4 months ago

LLMs are inherently probabilistic. Things like ReLU throw out a ton of data deliberately.
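The point about ReLU can be made concrete: it maps every negative activation to zero, irreversibly discarding the sign and magnitude of roughly half the inputs, and networks work well anyway. A toy illustration (values are arbitrary):

```python
import numpy as np

# ReLU: max(x, 0). All negative activations collapse to 0,
# so their original values can never be recovered downstream.
x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
relu = np.maximum(x, 0.0)

discarded = int((x < 0).sum())  # inputs whose information was thrown away
print(relu, discarded)
```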