Qwen3-Coder-30B-A3B-Instruct-FP8 is a good choice ('qwen3-coder:30b' if you use ollama). I have also had good experiences with Devstral (https://mistral.ai/news/devstral), built in collaboration between Mistral AI and All Hands AI.
DeepSeek Coder 33B or Llama 3 70B with GGUF quantization (Q4_K_M) would be optimal for your specs, with Mistral Large 2 providing the best balance of performance and resource usage.
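For a rough sense of whether those quantized models fit a given machine, here is a back-of-envelope sketch. It assumes Q4_K_M averages about 4.85 bits per weight (an approximation; the exact figure varies by model architecture) and ignores KV cache and runtime overhead:

```python
# Approximate on-disk / in-memory size of a GGUF model at Q4_K_M.
# 4.85 bits/weight is an assumed average for Q4_K_M, not an exact spec.
def q4_k_m_size_gb(params_billion, bits_per_weight=4.85):
    # params * bits-per-weight, converted from bits to gigabytes
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"33B at Q4_K_M: ~{q4_k_m_size_gb(33):.1f} GB")  # ~20.0 GB
print(f"70B at Q4_K_M: ~{q4_k_m_size_gb(70):.1f} GB")  # ~42.4 GB
```

So the 33B fits comfortably in 24 GB of VRAM, while the 70B needs partial CPU offload on anything smaller than ~48 GB.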
fwystup|6 months ago
ethan_smith|6 months ago
v5v3|6 months ago