I've tried Qwen3.5-35B-A3B locally on our H100s, and it was horrible. Far more hallucinations than anything else. The big Qwen coder would take 4 GPUs, which is a bit too much. Trying GPT-oss-120b tomorrow.
It's not just Qwen; GLM-4.7-Flash also arrived recently in the same roughly 30B-A3B range. Seems to me there's no shortage of competition for good old GPT-OSS 20B (not just Qwen3.5-35B and GLM-4.7-Flash, but also Qwen3(-Coder)-30B and Granite 4 Small).
rurban|1 hour ago
For now we are happy with GitHub Copilot, free for open source, with Opus 4.6. That's by far the best for opencode. For images, Qwen rules.
beAroundHere|6 days ago
Especially since Qwen3.5-35B-A3 looks great for cheaper GPUs: a quantized version of it should fit in under 32 GB of RAM.
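The <32 GB figure checks out with a quick back-of-the-envelope: at 4-bit quantization, 35B parameters come to roughly 17.5 GB of weights. A minimal sketch (my assumptions, not from the thread: 4-bit weights plus ~10% runtime overhead for KV cache and buffers):

```python
def quant_memory_gb(params_b: float, bits_per_param: float,
                    overhead: float = 0.10) -> float:
    """Rough resident-memory estimate in GB for a weight-quantized model.

    params_b: parameter count in billions; overhead covers KV cache,
    activations, and runtime buffers (assumed ~10%).
    """
    weight_bytes = params_b * 1e9 * bits_per_param / 8
    return weight_bytes * (1 + overhead) / 1e9

print(quant_memory_gb(35, 4))  # 4-bit: ~19 GB, comfortably under 32 GB
print(quant_memory_gb(35, 8))  # 8-bit: ~38 GB, would not fit
```

Real footprints vary with the quant scheme (e.g. GGUF Q4_K_M is slightly above 4 bits per weight) and with context length, but the 4-bit case stays well inside 32 GB.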
ColonelPhantom|6 days ago