woeirua | 15 days ago
>The usefulness of AI agents is dominated by how few mistakes they make, not by their raw speed. Buying 6x the speed at the cost of 20% more mistakes is a bad bargain, because most of the user’s time is spent handling mistakes instead of waiting for the model.
That might be true today, but I think the OpenAI-Cerebras partnership will ultimately lead to a paradigm shift: it will become possible to scale these chips up to the point where a model like the full Codex-5.3 can run on them, and then you'll have a super fast model that makes relatively few errors. A Codex-5.3 model running at these speeds would be more than sufficient to actually start replacing customer-facing jobs.
olivermuty | 15 days ago
The world will get much more interesting when real bespoke hardware built for actual LLM usage comes to market: silicon of the SIMD flavour (or other variants), but using DRAM so you can pack memory more tightly.
croes | 15 days ago
If not, then updates to the current models will become harder and harder.