top | item 45792491 (no title)

IceWreck | 3 months ago

LlamaCPP supports offloading some of the experts in a MoE model to CPU. The results are very good: even weaker GPUs can run larger models at reasonable speeds.

--n-cpu-moe in https://github.com/ggml-org/llama.cpp/blob/master/tools/serv...

No comments yet.
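A minimal sketch of how the flag mentioned above is typically combined with full GPU offload: `-ngl` pushes all layers to the GPU, while `--n-cpu-moe N` keeps the MoE expert weights of the first N layers in system RAM. The model path and the value 20 here are hypothetical placeholders, not taken from the thread; in practice you tune N until the remaining weights fit in VRAM.

```shell
#!/bin/sh
# Sketch (hypothetical model path and layer count):
# offload everything to the GPU, but keep the expert weights
# of the first 20 layers on the CPU to save VRAM.
./llama-server \
  -m models/some-moe-model.gguf \
  -ngl 99 \
  --n-cpu-moe 20
```

Since the expert weights dominate a MoE model's size but only a few experts are active per token, keeping them on the CPU trades a modest speed hit for a large VRAM saving.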