top | item 47064279

nyrikki | 11 days ago

It is crazy to me that it is that slow. 4-bit quants don't lose much with Qwen3 Coder Next, and unsloth/Qwen3-Coder-Next-UD-Q4_K_XL gets 32 tps on a 3090 (24 GB) in a VM, with a 256k context size, under llama.cpp.
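For reference, a setup along those lines can be sketched as a single llama-server invocation. The model path and port here are assumptions, not from the comment; `-c 262144` is the 256k context mentioned above, and `-ngl 99` offloads all layers that fit onto the 24 GB card:

```shell
# Sketch: serve the Q4_K_XL quant with llama.cpp's llama-server.
# File path and port are assumptions; adjust to your download location.
llama-server \
  -m ./Qwen3-Coder-Next-UD-Q4_K_XL.gguf \
  -c 262144 \
  -ngl 99 \
  --port 8080
```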

Same with unsloth/gpt-oss-120b-GGUF:F16, which gets 25 tps, while gpt-oss-20b gets 195 tps!

The advantage, IMHO, is that you can boot off the APU and pass the GPU through to a VM, giving you nicer, safer VMs for agents, all while still using DDR4.
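The boot-on-APU, GPU-in-VM split usually comes down to binding the discrete card to vfio-pci on the host. A minimal sketch, assuming a Debian-style initramfs; the PCI IDs shown (10de:2204 for a 3090 and 10de:1aef for its audio function) are examples, so check your own with `lspci -nn`:

```shell
# Reserve the discrete GPU (and its HDMI audio function) for vfio-pci
# so the host driver never claims it and a VM can take it over.
echo "options vfio-pci ids=10de:2204,10de:1aef" | \
  sudo tee /etc/modprobe.d/vfio.conf

# Load vfio-pci early, then rebuild the initramfs so it wins the race
# against the regular GPU driver at boot.
echo "vfio-pci" | sudo tee /etc/modules-load.d/vfio-pci.conf
sudo update-initramfs -u
```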

lambda | 11 days ago

Yeah, this is an AMD laptop integrated GPU, not a discrete NVIDIA GPU on a desktop. Also, I haven't really done much to try tweaking performance, this is just the first setup I've gotten that works.

nyrikki | 11 days ago

The memory bandwidth of the laptop CPU is better suited to fine-tuning, but MoE models really do work well for inference.

I won't use a public model for my secret sauce; there's no reason to hand it to the foundation-model providers.

Even an old 1080 Ti works well for FIM (fill-in-the-middle) completions in IDEs.
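FIM works by sending the editor text before and after the cursor as a structured prompt and splicing the model's output in at the cursor. A sketch of assembling one, assuming the Qwen-coder-style sentinel tokens; other model families (CodeLlama, StarCoder, DeepSeek) use different token names, so check the tokenizer config of whatever GGUF you serve:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt from the text around the cursor.

    Uses Qwen-coder-style FIM sentinel tokens (an assumption here, not
    universal); the model's completion goes where <|fim_middle|> ends.
    """
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

# Example: the cursor sits after "return " inside the function body.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```

The IDE plugin then posts this prompt to the local server (llama.cpp exposes an `/infill` endpoint for exactly this pattern) and inserts the returned text at the cursor.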

IMHO the setup above works well for boilerplate, while even the SOTA models fail on the domain-specific portions.

While I lucked out and foresaw the huge price increases, you can still find some good deals. Old gaming computers work pretty well, especially if you have Claude Code locally churning on the boring parts while you work on the hard parts.