top | item 45602313

mchiang | 4 months ago

Qwen3-coder:30b is in the blog post. This is one that most users will be able to run locally.

We are in this together! Hoping for more models to come from the labs in varying sizes that will fit on devices.

bigyabai | 4 months ago

I'm looking forward to future ollama releases that might attempt parity with the cloud offerings. I've since moved on to the Ollama compatibility API on KoboldCPP, since their inference server doesn't impose any such limits.
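(For context, an Ollama-style server is typically driven with a JSON POST to `/api/generate`. A minimal sketch using only the Python standard library; the host/port, model tag, and prompt below are assumptions, and KoboldCPP's emulation of this endpoint may differ in detail:)

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for an Ollama-style /api/generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(base_url: str, model: str, prompt: str) -> str:
    """POST to an Ollama-compatible server and return the generated text."""
    data = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        base_url + "/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full text in the "response" field.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Assumed defaults: stock Ollama listens on 11434; KoboldCPP commonly on 5001.
    print(generate("http://localhost:11434", "qwen3-coder:30b",
                   "Write a one-line docstring for a quicksort function."))
```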

mchiang | 4 months ago

I am super hopeful! Hardware is improving, inference costs will continue to decrease, models will only improve...

Balinares | 4 months ago

How does Qwen3-Coder:30B compare to Instruct-2507 as a coding-agent backend? I was under the impression that Instruct was intended to supersede Coder.

hephaes7us | 4 months ago

In this case, it's not about whether it fits on my physical hardware or not. It's about what seems like an arbitrary restriction designed to start pushing users to their cloud offering.