> The idea is to have a chip with SRAM large enough to fit the entire model, so inference can happen entirely in-memory. [...] So how much internal memory does the latest Cerebras chip have? 44GB. This puts OpenAI in kind of an awkward position. 44GB is enough to fit a small model (~20B params at fp16, ~40B params at int8 quantization), but clearly not enough to fit GPT-5.3-Codex.
You don't really need to fit the entire model on a single chip. Just as with GPUs, you can shard the model across multiple chips. Of course when you have a long pipeline of chips that each token needs to pass through, that decreases the end-to-end tokens per second correspondingly.
So the size of GPT-5.3-Codex-Spark isn't limited by the memory of a single Cerebras chip, but by the number of such chips that you can chain together and still hit the 1000 tokens per second target. Given that Cerebras offers models much larger than 40B at faster speeds (https://www.cerebras.ai/pricing#exploration), GPT-5.3-Codex-Spark is likely closer to GLM 4.7 in size (≈355B total parameters, 32B active).
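The sizing arithmetic behind these estimates is simple enough to sanity-check. A weights-only sketch (it ignores KV cache and activations, which also compete for memory):

```python
# Weights-only memory estimate: params x bytes-per-param.
# Ignores KV cache and activations, which also compete for memory.

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    # 1e9 params/billion cancels 1e9 bytes/GB, so this is just a product.
    return params_billion * bytes_per_param

print(weights_gb(20, 2))    # ~20B at fp16  -> 40.0 GB (fits in 44GB)
print(weights_gb(40, 1))    # ~40B at int8  -> 40.0 GB (fits in 44GB)
print(weights_gb(355, 1))   # GLM-4.7-scale at int8 -> 355.0 GB (needs sharding)
```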
Sharding the model is really slow. The point of building a wafer-scale chip is that on-chip memory bandwidth is far higher than what you would get even from chiplets with an interposer or other high-bandwidth connection, let alone from going off-chip. You're giving up your whole advantage, especially since Cerebras clearly isn't trying to maximize total throughput per watt - Groq, TPUs, and even the latest NVIDIA solutions are preferable there.
> Of course when you have a long pipeline of chips that each token needs to pass through, that decreases the end-to-end tokens per second correspondingly.
No, it only increases the latency, and does not affect the throughput.
> So the size of GPT-5.3-Codex-Spark isn't limited by the memory of a single Cerebras chip, but the number of such chips that you can chain together and still hit the 1000 tokens per second target.
Chaining chips does not decrease token throughput. In theory, you could run models of any size on Cerebras chips. See for example Groq's (not to be confused with Grok) chips, which only have 230 MB SRAM, yet manage to run Kimi K2.
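A toy model makes the latency/throughput distinction concrete. The sketch below assumes the pipeline is kept full (many concurrent requests or microbatches); a single autoregressive stream would still feel the full per-token latency, which is where the two views above differ:

```python
# Pipeline parallelism in caricature: `depth` chips, each spending
# `stage_time` seconds per token. Kept full, the pipeline's throughput is
# set by one stage's time alone; only per-token latency grows with depth.

def pipeline_stats(depth: int, stage_time: float) -> tuple[float, float]:
    latency = depth * stage_time      # one token's trip through every chip
    throughput = 1.0 / stage_time     # tokens/s leaving the last chip
    return latency, throughput

for depth in (1, 4, 16):
    lat, tps = pipeline_stats(depth, 0.001)
    print(f"{depth:2d} chips: {lat * 1000:4.0f} ms latency, {tps:.0f} tok/s")
```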
People are misunderstanding Anthropic's fast mode because of how they chose to name it. The hints all point to a specific thing they did. The setup is costlier, but it's also smarter and better on tougher problems, which is unheard of for something sold on speed. This paper fits perfectly: https://arxiv.org/abs/2510.01123
The setup is parallel distill and refine. You start with parallel trajectories instead of one, then distill from them, and refine that to get to an answer. Instead of taking all trajectories to completion, they distill early and refine, so it gives outputs fast and yet smarter.
- the paper came out in Nov 2025
- three months is a good research-to-production pipeline
- one of the authors is at Anthropic
- this approach will definitely burn more tokens than a usual simple run
- > Anthropic explicitly warns that time to first token might still be slow (or even slower)
Contrary to what people are saying, speculative decoding won't make it smarter or make any difference here. Batching could make it faster, but then it wouldn't be as costly.
Gemini Deep Think and gpt-5.2-pro use the same underlying parallel test-time compute, but they take each trajectory to completion before distilling and refining for the user.
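The parallel distill-and-refine loop described above can be sketched in miniature. This is a hedged illustration of the pattern from the linked paper, not a claim about Anthropic's actual stack; `generate` is a hypothetical stand-in for a real model call:

```python
# Sketch of "parallel distill and refine": launch several trajectories,
# distill from the partial drafts early, then refine once into an answer.
# `generate` is a deterministic stand-in for an LLM call.
import random

def generate(prompt: str, seed: int) -> str:
    rng = random.Random(seed)
    return f"partial-answer-{rng.randint(0, 9)} for {prompt!r}"

def parallel_distill_refine(prompt: str, n_parallel: int = 4) -> str:
    # 1. Several trajectories instead of one (concurrently, in practice).
    drafts = [generate(prompt, seed=i) for i in range(n_parallel)]
    # 2. Distill early from the partial trajectories, instead of running
    #    each to completion as best-of-n schemes do.
    distilled = generate("summarize: " + " | ".join(drafts), seed=0)
    # 3. One refinement pass over the distilled summary yields the answer.
    return generate("refine: " + distilled, seed=1)

print(parallel_distill_refine("why is the sky blue?"))
```

Note the token cost: n parallel drafts plus a distill and a refine pass, consistent with the "burns more tokens" bullet above.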
> Fast mode is not a different model. It uses the same Opus 4.6 with a different API configuration that prioritizes speed over cost efficiency. You get identical quality and capabilities, just faster responses.
Yes, this article is full of misunderstandings. The main explanation of the bottleneck is wrong: it's the model weights that dominate memory bandwidth (which is why batching multiple requests into a single pass increases total throughput). If copying user tokens were the bottleneck, batching wouldn't achieve any speedup.
When an author is confused about something so elementary, I can’t trust anything else they write.
One other thing I'd assume Anthropic is doing is routing all fast requests to the latest-gen hardware. They most certainly have a diverse fleet of inference hardware (TPUs, GPUs of different generations), and fast will be only served by whatever is fastest, whereas the general inference workload will be more spread out.
This was my assumption - GB200 memory bandwidth is about 2.4x that of the H100, so personally I think that's all it is. It doesn't really make sense otherwise: yes, there are tricks to get faster time to first token, but not really for the same model in throughput terms (speculative decoding etc., but they already use that).
I'm happy to be wrong but I don't think it's batching improvements.
>The usefulness of AI agents is dominated by how few mistakes they make, not by their raw speed. Buying 6x the speed at the cost of 20% more mistakes is a bad bargain, because most of the user’s time is spent handling mistakes instead of waiting for the model6.
That might be true today. I think the OpenAI-Cerebras partnership will ultimately lead to a paradigm shift, because it will be possible to scale these chips up to the point where a model like the full Codex-5.3 can run on them - and then you'll have a super fast model that makes relatively few errors. A Codex-5.3 model running at these speeds is more than sufficient to actually start replacing customer-facing jobs.
At ~40GB per chip and a rumoured 5 to 7 TB for the proprietary flagships, you are looking at several megawatts to run one single model instance. Cerebras is insanely power hungry. It is funny how they are essentially a parallel happenstance - chips built for other compute workloads turning out to work for LLMs - much like gaming processors accidentally being good for LLMs.
The world will be much more interesting when real bespoke hardware built for actual LLM usage comes to market. This means silicon of the SIMD flavour or other variants, but using DRAM so you can pack more tightly.
If the author is right, OpenAI has room for improvement: they can further tune the fast models for correctness on certain tasks, while Anthropic is left with scaling vertically. Of course, it is likely that over time both approaches will converge as the companies understand the problem space better and learn what trade-offs are worth making.
My personal take is that they will need a big model to plan, break down tasks, and schedule them to specialized smaller models, while a good-enough model handles real-time interactions with the user - but that is the naive take, and many other things might be shaping these decisions.
> A good analogy is a bus system. If you had zero batching for passengers - if, whenever someone got on a bus, the bus departed immediately - commutes would be much faster for the people who managed to get on a bus.
A good analogy? I wonder... how do buses work at your place? Do they wait to be at least half-full before departing? I used to do that in the Simutrans game!
Where I'm from, buses usually depart on schedule, whether you get on the bus or not...
[Edit:] Otherwise an insightful article, I guess.
Interesting theory. So how does ChatGPT begin responding instantly, as soon as I send the message? Shouldn't it need to wait for the batch to fill? Or do they have so much traffic that this happens in a few ms?
(I think they might also be filling the message onto a GPU while you're typing over a websocket or something, but I'm not sure.)
> So how much internal memory does the latest Cerebras chip have? 44GB. This puts OpenAI in kind of an awkward position. 44GB is enough to fit a small model (~20B params at fp16, ~40B params at int8 quantization), but clearly not enough to fit GPT-5.3-Codex. That’s why they’re offering a brand new model, and why the Spark model has a bit of “small model smell” to it: it’s a smaller distil of the much larger GPT-5.3-Codex model.
This doesn't make sense.
1. Nvidia already sells e.g. the H100 with 80GB memory, so having 44GB isn't an advance, let alone a differentiator.
2. As I suspect anyone that's played with open weights models will attest, there's no way that 5.3-Codex-Spark is getting close to top-level performance and being sold in this way while being <44GB. Yes it's weaker and for sure it's probably a distil and smaller, but not by ~two orders of magnitude as suggested.
You’re mixing up HBM and SRAM - which is an understandable confusion.
NVIDIA chips use HBM (High Bandwidth Memory) which is a form of DRAM - each bit is stored using a capacitor that has to be read and refreshed.
Most chips have caches on them built out of SRAM - a feedback loop of transistors that store each bit.
The big differences are in access time, power and density: SRAM is ~100 times faster than DRAM but DRAM uses much less power per gigabyte, and DRAM chips are much smaller per gigabyte of stored data.
Most processors have a few MB of SRAM as caches. Cerebras is kind of insane in that they’ve built one massive wafer-scale chip with a comparative ocean of SRAM (44GB).
In theory that gives them a big performance advantage over HBM-based chips.
As with any chip design though, it really isn’t that simple.
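To put numbers on that "big performance advantage": per-token decode time for a memory-bound model is roughly weight bytes divided by bandwidth. The figures below use quoted peaks (H100 HBM ~3.35 TB/s; Cerebras quotes ~21 PB/s of aggregate on-chip SRAM bandwidth for the WSE-3), so treat them as ceilings rather than achieved rates, and the model size as a stand-in:

```python
# Streaming ceiling for single-stream decode: bandwidth / weight bytes.
# Peak-spec numbers; real utilization is far lower on both architectures.

def ceiling_tok_per_s(weight_bytes: float, bandwidth_bytes_s: float) -> float:
    return bandwidth_bytes_s / weight_bytes

MODEL = 40e9  # hypothetical 40B-param int8 model: 40 GB of weights

print(f"HBM  (~3.35 TB/s): {ceiling_tok_per_s(MODEL, 3.35e12):>12,.0f} tok/s")
print(f"SRAM (~21 PB/s):   {ceiling_tok_per_s(MODEL, 21e15):>12,.0f} tok/s")
```

The four-orders-of-magnitude gap in the ceiling is the whole pitch; the caveat in the comment above is that chaining chips, power, and utilization eat much of it back.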
I think being faster probably is important but it brings a bunch of challenges:
- the split pricing model makes it hard to tune model architecture for faster inference as you need to support fast and cheap versions.
- the faster the model is, the more it becomes a problem that they don’t ’understand’ time – they sit idle waiting for big compilations or they issue tools sequentially when they ought to have issued them in parallel.
I don’t really get the bus analogy. It seems like it massively increases latency, but as soon as you’re “on the bus” throughput is normal? When in reality (if I understand correctly) opus-fast is just giving you a bigger portion of the batch, increasing throughput with little effect on latency? (I’m assuming Anthropic gets enough volume that these batches fill up pretty much instantly.)
It is worth noting that consumers are completely and totally incapable of detecting quality degradation with any accuracy. Which is a given since the models are already effectively random, but there is a strong bent to hallucinate degradations. Having done frontend work for an AI startup, complaints of degrading the model were by far the most common, despite the fact that not only did our model not change, users could easily verify that it didn't change because we expose seeds. A significant portion of complainers continue to complain about model degradation even when shown they could regenerate from the same seed+input and get the exact same output. Humans, at scale, are essentially incapable of comprehending the concept of randomness.
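The seed-reproducibility check described above can be shown in miniature: with the same seed and input, a pseudorandom sampler is bit-for-bit repeatable, so "the model got worse" is directly falsifiable.

```python
# Same seed + same input -> identical sampled output, every time.
import random

def sample_tokens(seed: int, n: int = 5) -> list[int]:
    rng = random.Random(seed)
    return [rng.randrange(50_000) for _ in range(n)]   # fake token ids

assert sample_tokens(42) == sample_tokens(42)   # identical: no "degradation"
assert sample_tokens(42) != sample_tokens(43)   # different seed, different output
```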
I like how the article is interspersed with corrections based on this discussion. Sometimes I wondered if there are more corrections than the original material. But in any case it is very convenient. No need to read the discussion here.
In my view, if we aim to speed up progress toward the next generation, diffusion-model-based generation is a very promising direction.
That said, there are likely many hurdles to tackle, because the output is not a sequence; it is produced in parallel.
Author is clearly confused about the Anthropic case. The request rate at these generation endpoints is so high that the current batching delay is effectively negligible.
Lol, without any evidence this is just vaporblog. It could just be reduced precision for whatever model either of them runs, not necessarily a distillation or a smaller model - or even a combination. At this point in time most frontier models are MoEs, and getting absurd speeds out of 1-20B active experts is trivial regardless of batch size.
Cerebras has been serving their own inference users for some time. Not unreasonable to deploy a turnkey product as-is to start a partnership and then iterate from there?
Very interesting. OAI's releases since their router all seem focused on cost cutting and efficiency, while Anthropic is mostly going the opposite direction: spending their budget on overhyping their models in the media and releasing neo-hipster (aka normie) ads about taste and about how they won't do ads. The first red flag - besides every time Dario speaks - was the popup events with shitty caps, overhyped by all the AI influencers.
It seems OAI was forced by investors to shift quickly to making money. Anthropic seem to have more time? Might be hard for OAI to keep the pace while focusing on cost
kristjansson|14 days ago
This fact really should have given the author pause. It’s hard to take any of his claims seriously in light of it.
ankit219|14 days ago
The writer has not heard of continuous batching; this is no longer an issue, and it is part of what makes Claude Code that affordable. https://huggingface.co/blog/continuous_batching
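Continuous batching, as described in the linked post, can be sketched as a toy scheduler: requests join the in-flight batch at token boundaries and leave as soon as they finish, so nobody waits for a "batch to fill". Purely illustrative, not any serving engine's actual code:

```python
# Toy continuous-batching loop: admit waiting requests into free batch
# slots at every decode step; finished requests free their slot immediately.
from collections import deque

def continuous_batching(requests, max_batch=4):
    # requests: list of (request_id, tokens_to_generate)
    queue = deque(requests)
    active = {}              # request_id -> tokens remaining
    steps = []
    while queue or active:
        while queue and len(active) < max_batch:   # admit at token granularity
            rid, n = queue.popleft()
            active[rid] = n
        steps.append(sorted(active))               # one decode step for the batch
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                del active[rid]                    # slot freed immediately
    return steps

print(continuous_batching([("a", 2), ("b", 1), ("c", 3)], max_batch=2))
```

Note how "c" slips into the batch the moment "b" finishes, rather than waiting for "a" to complete as static batching would require.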
dist-epoch|15 days ago
The real reason why batching increases latency is multi-faceted and more complex to explain.
croes|14 days ago
If not, then updates to the current models will become harder and harder.
aurareturn|15 days ago
The whole reason Cerebras can run inference at thousands of tokens per second is that it hosts the entire model in SRAM.
There are two possible scenarios for Codex Spark:
1. OpenAI designed a model to fit in exactly 44GB.
2. OpenAI designed a model that requires Cerebras to chain multiple wafer chips together, i.e. an 88GB, 132GB, or 176GB model or more.
Both options require the entire model to fit inside SRAM.
yencabulator|12 days ago
Cerebras can handle weights larger than on-chip memory:
https://www.cerebras.ai/blog/announcing-the-cerebras-archite...
Der_Einzige|15 days ago
Another possible explanation is speculative decoding, where you trade unused GPU memory for speed (via a drafting model).
But my money is on the exact two mechanisms the OP proposes.
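Speculative decoding, for reference, looks roughly like this: a cheap draft model proposes several tokens, the big model verifies them in one parallel pass, and every accepted draft token saves a full forward pass. The `draft`/`verify` rules below are toy stand-ins, not any real model:

```python
# Speculative decoding in caricature: draft k tokens cheaply, verify them
# with the target model in one pass, keep the accepted prefix.

def draft(prefix: list[int], k: int) -> list[int]:
    return [(prefix[-1] + i + 1) % 100 for i in range(k)]   # toy proposal rule

def verify(prefix: list[int], proposal: list[int]) -> list[int]:
    # Toy "target model": accepts even tokens, rejects at the first odd one.
    accepted = []
    for t in proposal:
        if t % 2:
            break
        accepted.append(t)
    return accepted

def speculative_step(prefix: list[int], k: int = 4) -> list[int]:
    accepted = verify(prefix, draft(prefix, k))
    # Even on full rejection the target still emits one token, so each
    # step makes progress; acceptances are where the speedup comes from.
    return prefix + (accepted if accepted else [(prefix[-1] + 1) % 100])
```

The trade the comment mentions is visible here: the draft model occupies extra memory, and the win depends entirely on its acceptance rate.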
EdNutting|15 days ago
Seems like nonsense to me.
bob1029|15 days ago
OpenAI and Cerebras have been working together at some level for nearly a decade.