top | item 44835536

Mkengin | 6 months ago

Thank you for testing, I will test GPT-OSS for my use case as well. In case you're interested: I have 8 GB VRAM and 32 GB RAM and get around 21 tokens/s with tensor offloading, so I'd assume your setup should be even faster than mine with these optimizations. I use the IQ4_KSS quant (by ubergarm on HF) with ik_llama.cpp, launched with this command:

$env:LLAMA_SET_ROWS = "1"; ./llama-server -c 140000 -m D:\ik_llama.cpp\build\bin\Release\models\Qwen3-Coder-30B-A3B-Instruct-IQ4_KSS.gguf -ngl 999 --flash-attn -ctk q8_0 -ctv q8_0 -ot "blk\.(19|2[0-9]|3[0-9]|4[0-7])\.ffn_.*_exps\.=CPU" --temp 0.7 --top-p 0.8 --top-k 20 --repeat_penalty 1.05 --threads 8

In my case I offload layers 19-47; maybe you would only have to offload 37-47, i.e. "blk\.(3[7-9]|4[0-7])\.ffn_.*_exps\.=CPU"
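For anyone unsure what those `-ot` regexes actually select: they match tensor names in the GGUF file, so the expert FFN tensors of the listed layer numbers get pinned to CPU. A quick sketch of which layers each pattern catches (the tensor names here are illustrative examples following llama.cpp's `blk.N.*` naming scheme, not read from a real model file):

```python
import re

# Hypothetical expert-FFN tensor names in llama.cpp's "blk.N...." style,
# one per layer, for a model with 48 layers (0..47).
tensors = [f"blk.{i}.ffn_up_exps.weight" for i in range(48)]

# Pattern from the comment above: offload layers 19-47 to CPU.
wide = re.compile(r"blk\.(19|2[0-9]|3[0-9]|4[0-7])\.ffn_.*_exps\.")
wide_hits = [t for t in tensors if wide.match(t)]
print(len(wide_hits))  # 29 layers: 19 through 47

# Narrower variant suggested for a machine with more VRAM: layers 37-47 only.
narrow = re.compile(r"blk\.(3[7-9]|4[0-7])\.ffn_.*_exps\.")
narrow_hits = [t for t in tensors if narrow.match(t)]
print(len(narrow_hits))  # 11 layers: 37 through 47
```

The fewer layers the regex matches, the more expert tensors stay on the GPU, which is why shrinking the range tends to raise tokens/s when VRAM allows.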


magicalhippo | 6 months ago

Yeah, I think I could get better performance out of both by tweaking, but so far ease of use has won out.