
dgdosen | 11 days ago

Is it me, or will this just speed up the timeline where a 'good enough' open model (Qwen? Deepseek? - I'm sure the Chinese will see value in undermining OpenAI/Anthropic/Google) combined with good enough/cheap hardware (10x inference improvement in an M7 MacBook Air?) makes running something like opencode locally a no-brainer?


ac29|11 days ago

The good enough alternative models are here or will be soon, depending on your definition of good enough. MiniMax-M2.5 looks really competitive and it's a tenth of the cost of Sonnet-4.6 (they also have subscriptions).

Running locally is going to require a lot of memory, compute, and energy for the foreseeable future which makes it really hard to compete with ~$20/mo subscriptions.

irishcoffee|11 days ago

People running models locally has always been the scare for the samas of the world. "Wait, I don't need you to generate these responses for me? I can get the same results myself?"

trillic|11 days ago

He can't buy all the RAM

kevstev|11 days ago

Personally I am already there: I go to Qwen and Deepseek locally via ollama for my dumb questions and small tasks, and only go to Claude if they fail. I do this partially because I am just so tired of everything I do over a network being logged, tracked, mined, and monetized, and also partially because I would like my end state to be using all local tools, at least for personal stuff.
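
The "local first, hosted fallback" routing described above can be sketched in a few lines against Ollama's default HTTP endpoint (`localhost:11434/api/generate`). The model name and the hosted fallback are placeholders, not anything the commenter specified:

```python
# Minimal sketch of a local-first router: try a local Ollama model,
# fall back to a hosted one (e.g. Claude via its API) only on failure.
# The model name "qwen2.5-coder" is an assumption for illustration.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def ask_local(prompt, model="qwen2.5-coder"):
    """Query a local Ollama model; raises if the server is unreachable."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

def ask(prompt, local=ask_local, hosted=None):
    """Try the local model first; only fall back to a hosted model if it fails."""
    try:
        return local(prompt)
    except Exception:
        if hosted is None:
            raise
        return hosted(prompt)
```

The fallback is injected as a callable, so the hosted path stays optional and nothing leaves the machine unless the local model actually fails.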