crazylogger | 3 months ago
My theory is this:
- we know from benchmarks that open-weight models like Deepseek R1 and Kimi K2 are not far behind SOTA GPT/Claude models in capability
- open-weight API pricing (e.g. on openrouter) is roughly 1/10~1/5 that of GPT/Claude
- users can more or less choose to hook their agent CLI/IDEs to either closed or open models
If these points are true, then the only reason people primarily stay on Claude Code & Codex subscription plans is that those plans are subsidized by at least 5~10x. When confronted with the true costs, users will quickly switch to the lowest-cost inference vendor, and we get perfect competition + zero margin for all vendors.
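The subsidy inference above can be sketched with a quick back-of-envelope calculation. All numbers below are hypothetical placeholders chosen to illustrate the logic, not actual vendor pricing or usage figures:

```python
# Back-of-envelope sketch of the subsidy argument.
# Every number here is a hypothetical placeholder, not real pricing.

open_weight_price = 1.0               # hypothetical $ per 1M tokens via an open-weight API
closed_price = open_weight_price * 7  # closed API priced at ~5-10x; pick 7x as a midpoint

monthly_tokens = 100_000_000  # hypothetical heavy agent user: 100M tokens/month
flat_plan_price = 100.0       # hypothetical flat-rate subscription, $/month

# What the same usage would cost at the closed vendor's metered API rate
true_cost_closed = closed_price * monthly_tokens / 1_000_000

# How far below metered cost the flat plan is priced
implied_subsidy = true_cost_closed / flat_plan_price

print(f"true closed-API cost: ${true_cost_closed:.0f}/month")
print(f"implied subsidy vs flat plan: {implied_subsidy:.1f}x")
```

With these placeholder inputs the flat plan covers roughly 7x its sticker price in metered inference, which is the "subsidized by at least 5~10x" claim restated as arithmetic; the real ratio depends entirely on actual per-token prices and per-user consumption.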
wahnfrieden | 3 months ago