top | item 45614211

Arisaka1 | 4 months ago

>and if you’re training an agent for this specific task anyway, you’re effectively locking yourself to that specific LLM in perpetuity rather than a replaceable or promotable worker.

That's ONE of the long games currently being played, and is arguably their fallback strategy: the equivalent of vendor lock-in, but for LLM providers.

stego-tech | 4 months ago

From my IT POV, that’s what this is all about. It’s why none of these major players produce locally-executable LLMs (Mistral, Llama, and DeepSeek being notable exceptions), it’s why their interfaces are predominantly chat-based (to reduce personal skills growth and increase dependency on the chatbot), it’s why they keep churning out new services like Skills and Agents and “Research”, etc.

If any of these outfits truly cared about making AI accessible and beneficial to everyone, then all of them would be busting hump to distill models to run better on a wider variety of hardware, create specialized niches that collaborate with rather than seek to replace humans, and promote sovereignty over the AI models rather than perpetual licensing and dependency.

No, not one of these companies actually gives a shit about improving humanity. They're all following the YC playbook: try everything, rent but never own, lock in customers, and hope you get that one lucrative bite that allows for an exit strategy of some sort, all while promoting the hell out of yourself as the panacea to a problem.

simonw | 4 months ago

"It’s why none of these major players produce locally-executable LLMs (Mistral, Llama, and DeepSeek being notable exceptions)"

OpenAI have gpt-oss-20b and 120b. Google have the Gemma 3 models. At this point the only significant AI lab that doesn't provide a locally executable model is Anthropic!