Is it me, or will this just speed up the timeline where a 'good enough' open model (Qwen? DeepSeek? - I'm sure the Chinese labs will see value in undermining OpenAI/Anthropic/Google) combined with good-enough, cheap hardware (a 10x inference improvement in an M7 MacBook Air?) makes running something like opencode locally a no-brainer?
ac29|11 days ago
Running locally is going to require a lot of memory, compute, and energy for the foreseeable future, which makes it really hard to compete with ~$20/mo subscriptions.
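A rough back-of-envelope sketch of that comparison (every figure below is an assumption for illustration, not a real quote):

```python
# Illustrative amortization of local hardware vs. a hosted subscription.
# All numbers are assumptions, not measured or quoted prices.

HARDWARE_COST = 4000          # assumed: high-memory machine capable of running a large local model (USD)
HARDWARE_LIFETIME_YEARS = 4   # assumed useful life before replacement
ENERGY_COST_PER_YEAR = 100    # assumed electricity for regular inference use (USD/yr)

SUBSCRIPTION_PER_MONTH = 20   # the ~$20/mo figure mentioned above

local_per_year = HARDWARE_COST / HARDWARE_LIFETIME_YEARS + ENERGY_COST_PER_YEAR
subscription_per_year = SUBSCRIPTION_PER_MONTH * 12

print(f"Local (amortized):  ~${local_per_year:.0f}/yr")
print(f"Subscription:       ~${subscription_per_year:.0f}/yr")
# Under these assumptions, local comes out roughly 4-5x the subscription price,
# before accounting for any gap in model quality or convenience.
```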