ElijahLynn | 17 days ago

Wow, I wish we could post pictures to HN. That chip is HUGE!!!!

The WSE-3 is the largest AI chip ever built, measuring 46,255 mm² and containing 4 trillion transistors. It delivers 125 petaflops of AI compute through 900,000 AI-optimized cores — 19× more transistors and 28× more compute than the NVIDIA B200.

From https://www.cerebras.ai/chip:

https://cdn.sanity.io/images/e4qjo92p/production/78c94c67be9...

https://cdn.sanity.io/images/e4qjo92p/production/f552d23b565...
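
As a sanity check on those ratios (the B200 numbers below are derived purely from the quoted 19× and 28× figures, not independently sourced):

  # Back-of-the-envelope check of the quoted WSE-3 vs. B200 ratios.
  wse3_transistors = 4e12   # 4 trillion
  wse3_petaflops = 125

  # Implied B200 figures, derived from the 19x / 28x claims above
  print(f"Implied B200 transistors: {wse3_transistors / 19 / 1e9:.0f}B")  # ~211B
  print(f"Implied B200 compute:     {wse3_petaflops / 28:.1f} PFLOPS")    # ~4.5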

dotancohen|17 days ago

  > 46,255 mm²
To be clear: that's the thousandths separator, not the Nordic decimal. It's the size of a cat, not the size of a thumbnail.

ash_091|16 days ago

*thousands, not thousandths, right?

The correct number is forty-six thousand, two hundred and fifty-five square mm.

Sharparam|16 days ago

This is why space is the only acceptable thousands/grouping separator (a non-breaking space when possible). Avoids any confusion.
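
For what it's worth, that's a one-liner in most languages; a minimal Python sketch (U+202F is the narrow no-break space):

  def group_with_nbsp(n: int) -> str:
      """Format n with U+202F (narrow no-break space) as the grouping
      separator, sidestepping the comma/period locale ambiguity."""
      return f"{n:,}".replace(",", "\u202f")

  print(group_with_nbsp(46255))  # '46 255' (that gap is U+202F)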

shwetanshu21|17 days ago

Thanks, I was actually wondering how someone would even manage to make that big a chip.

codyb|17 days ago

Wow, I'm staggered, thanks for sharing

I was under the impression that top-of-the-line chips often failed to be manufactured perfectly to spec, and that those with, say, a core that was a bit under spec, or which were missing a core, would be downclocked or otherwise binned and sold as the next chip down the line.

Is that not a thing anymore? Or would a chip like this maybe be so specialized that you'd use, say, a previous generation's transistor width and thus have more certainty of a successful run?

Or does a chip this size just naturally hover around 900,000 cores, and that's not always the exact count?

20 kW! Wow! 900,000 cores. 125 petaflops of compute. Very neat.

fulafel|17 days ago

Designing to tolerate the defects is well trodden territory. You just expect some rate of defects and have a way of disabling failing blocks.
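
As a rough illustration of the arithmetic (the defect density here is a made-up illustrative number, not a TSMC or Cerebras figure):

  import math

  # Toy Poisson yield model: with d random defects per cm^2, a block of
  # area A is defect-free with probability exp(-d * A).
  d = 0.1                   # defects/cm^2 (illustrative)
  wafer_cm2 = 46255 / 100   # WSE-3 area, ~462.6 cm^2
  cores = 900_000
  core_cm2 = wafer_cm2 / cores

  expected_bad = cores * (1 - math.exp(-d * core_cm2))
  print(f"Expected defective cores: ~{expected_bad:.0f} of {cores:,}")   # ~46
  print(f"P(entire wafer defect-free): {math.exp(-d * wafer_cm2):.1e}")  # ~8e-21

So tolerating a few dozen dead cores out of 900,000 is easy, while a monolithic die of the full wafer area would essentially never yield.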

graboy|16 days ago

IIRC, a lot of design went into making it so that you can disable parts of this chip selectively.

carter2099|14 days ago

I sent this to someone I know who's knowledgeable about this type of thing; here's what he had to say. Sharing because I thought it was interesting:

Pretty cool tech; the silicon is very advanced. That said, this is how every wafer comes out of the fab. This process just doesn't dice out individual chips, but instead adds interconnects. I doubt they have 100% yield; they probably just don't connect that die. This type of setup is one of the reasons Apple's M-series chips are so effective: their CPU/GPU/RAM are all on one die, directly interconnected, instead of going through some motherboard-based connector. I think Apple doesn't have them all go through the same process, so those are connected via a different process, but it's the same laid-on-silicon direct connection. This solves the problem data centers tend to have of tons of latency in the connections between processors. This is also similar to AMD's Infinity Fabric in their Zen architecture. It's cool how all of these technologies build on one another.

It's also all reliant on fab from TSMC, who did the heavy lifting in making the process a reality.

elorant|17 days ago

There have been discussions about this chip here in the past. Maybe not that particular one, but previous versions of it. The whole server, if I remember correctly, eats some 20 kW of power.

zozbot234|17 days ago

A first-gen Oxide Computer rack puts out a max of 15 kW, and they manage to do that with air cooling. The liquid-cooled AI racks being used today for training and inference workloads almost certainly have far higher power output than that.

(Bringing liquid cooling to the racks is likely one of the biggest challenges with this whole new HPC/AI datacenter infrastructure, so the fact that an air-cooled rack can just sit in almost any ordinary facility is a non-trivial advantage.)

dyauspitr|17 days ago

That’s wild. That’s like running 15 indoor heaters at the same time.

neya|17 days ago

20 kW? Wow. That's a lot of power. Is that figure per hour?
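
(An aside on the units: kW is already a rate, so there's no "per hour" to apply; power times time gives energy in kWh.)

  # kW measures power (a rate); kWh measures energy (an amount).
  power_kw = 20
  for hours in (1, 24, 24 * 30):
      print(f"{hours:>4} h at {power_kw} kW -> {power_kw * hours:,} kWh")
  #    1 h at 20 kW -> 20 kWh
  #   24 h at 20 kW -> 480 kWh
  #  720 h at 20 kW -> 14,400 kWh (roughly a month)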

hugh-avherald|17 days ago

Maybe I'm silly, but why is this relevant to GPT-5.3-Codex-Spark?

tonyarkles|17 days ago

It’s the chip they’re apparently running the model on.

> Codex-Spark runs on Cerebras' Wafer Scale Engine 3, a purpose-built AI accelerator for high-speed inference giving Codex a latency-first serving tier. We partnered with Cerebras to add this low-latency path to the same production serving stack as the rest of our fleet, so it works seamlessly across Codex and sets us up to support future models.

https://www.cerebras.ai/chip

thunderbird120|17 days ago

That's what it's running on. It's optimized for very high throughput using Cerebras' hardware, which is uniquely capable of running LLMs at very, very high speeds.

lanthissa|16 days ago

For Cerebras, can we call them chips? You're no longer breaking up the wafer; we should call them slabs.

amelius|16 days ago

They're still slices of a silicon ingot.

Just like potato chips are slices from a potato.

DeathArrow|17 days ago

> Wow, I wish we could post pictures to HN. That chip is HUGE!!!!

Using a wafer-sized chip doesn't sound great from a cost perspective when compared to using many smaller chips for inference. Yield will be much lower and prices higher.

Nevertheless, the actual price might not be very high if Cerebras doesn't apply an Nvidia-level tax.

energy123|17 days ago

> Yield will be much lower and prices higher.

That's an intentional trade-off in the name of latency. We're going to see a further bifurcation in inference use-cases in the next 12 months. I'm expecting this distinction to become prominent:

(A) Massively parallel (optimize for token/$)

(B) Serial low latency (optimize for token/s).

Users will switch between A and B depending on need.

Examples of (A):

- "Search this 1M line codebase for DRY violations subject to $spec."

Examples of (B):

- "Diagnose this one specific bug."

- "Apply this diff".

(B) is used in funnels to unblock (A). (A) is optimized for cost and bandwidth, (B) is optimized for latency.
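
A toy sketch of what that routing might look like client-side (the tier labels and the pick_tier policy are hypothetical, not any real API):

  from dataclasses import dataclass

  @dataclass
  class Task:
      prompt: str
      interactive: bool       # is a human waiting on the result?
      est_input_tokens: int

  def pick_tier(task: Task) -> str:
      """Hypothetical policy: small interactive tasks go to the
      low-latency tier (B), large batch scans to the cheap tier (A)."""
      if task.interactive and task.est_input_tokens < 50_000:
          return "B: serial low latency (token/s)"
      return "A: massively parallel (token/$)"

  print(pick_tier(Task("Apply this diff", True, 2_000)))
  print(pick_tier(Task("Find DRY violations per $spec", False, 900_000)))

A real policy would also weigh queue depth and pricing, but the split maps directly onto the A/B distinction above.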

magicalhippo|17 days ago

As I understand it, the chip consists of a huge number of processing units with a mesh network between them, so to speak, and it can tolerate disabling a number of units by routing around them.

Speed will suffer, but it's not like a stuck pixel on an 8k display rendering the whole panel useless (to consumers).
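
A toy version of that route-around idea on a small 2D mesh (purely illustrative; nothing Cerebras-specific):

  from collections import deque

  N = 5
  dead = {(0, 1), (1, 1), (2, 1), (3, 1)}   # an illustrative wall of defects

  def route(src, dst):
      """BFS the shortest hop path on an N x N mesh, avoiding dead cores."""
      prev, queue = {src: None}, deque([src])
      while queue:
          cur = queue.popleft()
          if cur == dst:
              path = [cur]
              while prev[path[-1]] is not None:
                  path.append(prev[path[-1]])
              return path[::-1]
          x, y = cur
          for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
              if all(0 <= c < N for c in nxt) and nxt not in dead and nxt not in prev:
                  prev[nxt] = cur
                  queue.append(nxt)
      return None   # mesh partitioned: dst unreachable

  print(route((0, 0), (0, 2)))   # 10 hops instead of 2, detouring around the wall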

kumarvvr|17 days ago

Is this actually more beneficial than, say, having a bunch of smaller ones communicating on a bus? Apart from space constraints, that is.

zamadatix|17 days ago

It's a single wafer, not a single compute core. A familiar equivalent might be putting 192 cores in a single Epyc CPU (or, to be more technically accurate, the group of cores in a single CCD) rather than trying to interconnect 192 separate single-core CPUs externally with each other.

santaboom|17 days ago

Yes, bandwidth within a chip is much higher than on a bus.

larodi|17 days ago

Is all of it one chip? It seems like a wafer with several, at least?

txyx303|17 days ago

Those are scribe lines, where you would usually cut out the chips, which is why it resembles multiple chips. However, they work with TSMC to etch across them.

kreelman|17 days ago

Wooshka.

I hope they've got good heat sinks... and I hope they're plugged into renewable energy feeds...

thrance|17 days ago

Fresh water and gas turbines, I'm afraid...

King-Aaron|17 days ago

Nope! It's gas turbines.

xnx|16 days ago

Bigger != Better