top | item 46687271


primaprashant | 1 month ago

You should check out Devstral 2 Small [1]. It's 24B and scores 68.0% on SWE-bench Verified.

[1]: https://mistral.ai/news/devstral-2-vibe-cli

Palmik | 1 month ago

To be clear, GLM 4.7 Flash is an MoE with 30B total params but <4B active params, while Devstral Small is 24B dense (all params active, all the time). GLM 4.7 Flash is much cheaper inference-wise.
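The active-vs-total distinction can be made concrete with the common rule of thumb that decoder inference costs roughly 2 FLOPs per active parameter per generated token. A back-of-envelope sketch (the ~4B active figure is taken from the "<4B" above, so treat it as an upper bound, not an official number):

```python
def flops_per_token(active_params: float) -> float:
    """Approximate forward-pass FLOPs per token: ~2 * active params.

    Rule of thumb for transformer decoding; ignores attention/KV-cache
    overhead, which grows with context length.
    """
    return 2 * active_params

# Devstral 2 Small: 24B dense, so all 24B params are active per token.
dense_24b = flops_per_token(24e9)
# GLM 4.7 Flash: MoE, only ~4B of the 30B total params active per token.
moe_4b = flops_per_token(4e9)

print(f"dense 24B: {dense_24b:.1e} FLOPs/token")
print(f"MoE ~4B active: {moe_4b:.1e} FLOPs/token")
print(f"ratio: {dense_24b / moe_4b:.0f}x")  # ~6x more compute per token for the dense model
```

Memory is a different story: the MoE still has to hold all 30B params in VRAM, so it's cheaper per token to run, not smaller to load.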

dajonker | 1 month ago

I don't know whether it just doesn't work well in GGUF / llama.cpp + OpenCode, but I can't get anything useful out of Devstral 2 24B running locally. Probably a skill issue on my end, but I'm not very impressed. Benchmarks are nice, but they don't always translate to real-life usefulness.