top | item 44852806


xvv | 6 months ago

As of today, what is the best local model that can be run on a system with 32gb of ram and 24gb of vram?



fwystup | 6 months ago

Qwen3-Coder-30B-A3B-Instruct-FP8 is a good choice ('qwen3-coder:30b' if you use ollama). I have also had good experiences with Devstral (https://mistral.ai/news/devstral), built in a collaboration between Mistral AI and All Hands AI.

ethan_smith | 6 months ago

DeepSeek Coder 33B or Llama 3 70B with GGUF quantization (Q4_K_M) would be optimal for your specs, with Mistral Large 2 providing the best balance of performance and resource usage.
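A quick back-of-the-envelope check on whether those models fit the stated 24 GB VRAM / 32 GB RAM budget: quantized GGUF size is roughly parameters times bits per weight divided by 8. The ~4.85 bits/weight figure for Q4_K_M below is an approximation, not an official spec value, and real deployments also need room for the KV cache and runtime overhead.

```python
def gguf_size_gb(n_params_b: float, bits_per_weight: float = 4.85) -> float:
    """Approximate quantized model size in GB for n_params_b billion parameters.

    4.85 bits/weight is a rough average for Q4_K_M mixed quantization.
    """
    return n_params_b * 1e9 * bits_per_weight / 8 / 1e9

for name, params_b in [("DeepSeek Coder 33B", 33), ("Llama 3 70B", 70)]:
    print(f"{name} @ Q4_K_M: ~{gguf_size_gb(params_b):.1f} GB")
# A 33B model lands around 20 GB (fits in 24 GB VRAM with little headroom);
# a 70B model lands around 42 GB, so it would need partial CPU offload
# into the 32 GB of system RAM.
```

By this estimate the 33B model is the only one of the two that runs fully on the GPU; the 70B option works only with layer offloading at reduced speed.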

v5v3 | 6 months ago

Start with a Qwen model of a size that fits in the VRAM.