item 41610109

mrfinn | 1 year ago

It's kinda funny how nowadays an AI model with 8 billion parameters counts as "small". Especially when just two years back entire racks were needed to run something with far worse performance.


atemerev|1 year ago

IDK, 8B-class quantized models run pretty fast on commodity laptops with CPU-only inference. Thanks to the people who figured out quantization and reimplemented everything in C++ instead of academic-grade Python.
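The core trick behind those CPU-only runs is shrinking each weight from a 32-bit float to an 8-bit (or smaller) integer plus a scale factor. A minimal sketch of symmetric int8 quantization, with illustrative function names that don't come from any particular library:

```python
# Hypothetical sketch of symmetric int8 weight quantization: store each
# weight as an int8 plus one shared float scale, cutting memory 4x vs
# float32 at the cost of a small, bounded rounding error.

def quantize_int8(weights):
    """Map float weights to int8 values plus a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.45, 0.03]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Per-weight rounding error is bounded by half the scale step.
print(max(abs(a - w) for a, w in zip(approx, weights)) <= scale / 2)
```

Real runtimes go further (per-block scales, 4-bit formats), but the memory and bandwidth savings that make an 8B model fit on a laptop come from exactly this idea.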

actualwitch|1 year ago

A solid chunk of Python is just wrappers around C/C++, most tensor frameworks included.
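The wrapping can be as thin as a foreign-function call. A minimal illustration using the standard-library ctypes module, calling straight into the C runtime the interpreter is already linked against (POSIX behaviour; on Windows you'd load msvcrt instead):

```python
import ctypes

# Load the C library already linked into the process (dlopen(NULL) on
# POSIX systems) and call its strlen() directly from Python.
libc = ctypes.CDLL(None)
libc.strlen.restype = ctypes.c_size_t
libc.strlen.argtypes = [ctypes.c_char_p]

print(libc.strlen(b"tensor"))  # → 6
```

NumPy, PyTorch, and friends do the same thing at much larger scale: the Python layer mostly arranges pointers and shapes, then hands the hot loops to compiled C/C++ kernels.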