WingNews | item 45792491

IceWreck | 3 months ago

llama.cpp supports offloading some of the experts in a MoE model to the CPU. The results are very good: even weaker GPUs can run larger models at reasonable speeds.

See the `--n-cpu-moe` option in https://github.com/ggml-org/llama.cpp/blob/master/tools/serv...
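For context, an invocation might look roughly like the sketch below. The model filename and layer counts are placeholders, and the exact flag names should be checked against the linked server docs for your llama.cpp version:

```shell
# Offload all layers to the GPU, but keep the MoE expert tensors
# of the first 20 layers in system RAM. The dense attention weights
# stay on the GPU; only the large expert weights, which dominate
# VRAM use in MoE models, are served from the CPU side.
./llama-server \
  -m ./models/some-moe-model.gguf \
  --n-gpu-layers 99 \
  --n-cpu-moe 20

# Or keep every expert tensor on the CPU:
#   ./llama-server -m ./models/some-moe-model.gguf --n-gpu-layers 99 --cpu-moe
```

The design works because in a MoE layer only a few experts are active per token, so the per-token CPU traffic is a small slice of the total expert weights, while the always-active shared weights benefit from staying in VRAM.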



powered by hn/api // news.ycombinator.com