DrPhish | 6 months ago

It's also easy to run the 120b on CPU if you have the resources. I had it running on my home LLM CPU inference box in about as long as it took to download the GGUFs, git pull, and rebuild llama-server. It ran at 40 t/s with zero effort and 50 t/s after brief tweaking. It's just too bad that even the 120b isn't really worth running compared to the other models that are out there.
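
Concretely, the whole process is roughly this (a sketch, not exact commands; the HF repo and file names are illustrative, and the thread/context flags should be tuned to the box):

    # fetch the GGUF weights (repo and filename here are illustrative)
    huggingface-cli download ggml-org/gpt-oss-120b-GGUF --local-dir ./models
    # update and rebuild llama.cpp, which includes llama-server
    git pull
    cmake -B build
    cmake --build build --config Release -j
    # serve it; -t (threads) and -c (context) depend on the hardware
    ./build/bin/llama-server -m ./models/gpt-oss-120b.gguf -t 64 -c 8192 --port 8080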

It really is amazing what ggerganov and the llama.cpp team have done to democratize LLMs for individuals who can't afford a massive GPU farm worth more than the average annual salary.

wkat4242 | 6 months ago

What hardware do you have? 50 t/s is really impressive for CPU.

DrPhish | 6 months ago

2xEPYC Genoa w/768GB of DDR5-4800 and an A5000 24GB card. I built it in January 2024 for about $6k and have thoroughly enjoyed running every new model as it gets released. Some of the best money I’ve ever spent.
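
Back-of-envelope on why that configuration gets there (assuming generation is memory-bandwidth bound, and that this is gpt-oss-120b with ~5.1B active parameters at its native ~4-bit MoE format):

    DDR5-4800: 4.8 GT/s x 8 bytes   = 38.4 GB/s per channel
    Genoa: 12 channels x 38.4 GB/s  = ~461 GB/s per socket
    2 sockets                       = ~922 GB/s aggregate peak
    ~5.1B active params @ ~4.25 bpw = ~2.7 GB read per token
    ~922 GB/s / ~2.7 GB per token   = ~340 t/s theoretical ceiling

40-50 t/s is a believable fraction of that ceiling once NUMA placement, KV-cache reads and compute overhead are accounted for.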

SirMaster | 6 months ago

I'm getting 20 tokens/sec on the 120B model with a 5060 Ti 16GB and a regular desktop Ryzen 7 7800X3D with 64GB of DDR5-6000.
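
That kind of hybrid number usually comes from llama.cpp's partial offload: attention and shared weights on the GPU, MoE expert tensors left in system RAM. A sketch (model path illustrative; --n-cpu-moe is in recent llama.cpp builds, and older builds need --override-tensor regexes for the same effect):

    # put all layers on the 16GB card except N layers' MoE experts,
    # which stay in system RAM; tune N until the VRAM fits
    ./build/bin/llama-server -m ./models/gpt-oss-120b.gguf \
        -ngl 99 --n-cpu-moe 30 -c 8192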

exe34 | 6 months ago

I imagine the GGUF is quantised stuff?

DrPhish | 6 months ago

No, I’m running the unquantized 120b.
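
(For anyone checking their own download: the per-tensor formats inside a GGUF can be inspected with the gguf Python package's dump tool; the invocation below is a sketch and the model path is illustrative.)

    pip install gguf
    # prints metadata plus each tensor's dtype (F32/F16/MXFP4/Q8_0/...)
    gguf-dump ./models/gpt-oss-120b.gguf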