NicoJuicy | 1 month ago
Ollama with qwen3 and starcoder2 is OK.
I'd recommend experimenting with the following models at the moment (e.g. with "open-webui"):

- gpt-oss:20b (fast)
- nemotron-3-nano:30b (good general purpose)
They don't compare to the large LLMs yet, though.