nshm | 3 years ago

Do you have the numbers? I suspect it is way worse. The original llama.cpp authors never measured any numbers either.

ddren|3 years ago

The Python implementation[1] ran some tests using the same quantization algorithm as llama.cpp (4-bit RTN).

1: https://github.com/qwopqwop200/GPTQ-for-LLaMa

nshm|3 years ago

Great, thanks a lot.

So we have numbers on PTB: original perplexity 8.79, quantized 9.68, already 10% worse. And the PPL is reported per token, I suppose? Because word-level PPL for PTB should be around 20, not less than 10.

Any numbers on more complex tasks then, like QA?
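The token-level vs. word-level PPL point above comes down to the normalizer: the total negative log-likelihood of the corpus is fixed, so dividing by more units (tokens instead of words) gives a smaller exponent. A minimal sketch, where the tokens-per-word ratio is an assumed illustrative value, not a measured one:

```python
import math

def perplexity(logprobs):
    # Perplexity = exp of the average negative log-likelihood per unit.
    return math.exp(-sum(logprobs) / len(logprobs))

# Example: a per-token log-prob of -2.2701 everywhere gives PPL ~9.68,
# matching the quantized PTB number quoted in the thread.
ppl_token = perplexity([-2.2701] * 1000)

# Converting token-level PPL to word-level PPL: the total NLL is
# unchanged, only the denominator differs, so
#   ppl_word = ppl_token ** (n_tokens / n_words)
# The 1.3 tokens-per-word ratio below is an assumption for illustration.
n_tokens, n_words = 1.3e6, 1.0e6
ppl_word = ppl_token ** (n_tokens / n_words)
```

With a ratio around 1.3, a token-level PPL under 10 corresponds to a word-level PPL around 19, which is consistent with the "around 20, not less than 10" expectation above.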

sottol|3 years ago

They're using GPTQ -- here you go: https://arxiv.org/abs/2210.17323 . The authors benchmarked two model families over a wide range of parameter counts.

ddren|3 years ago

llama.cpp is using RTN at the moment.