item 47110614


the__alchemist | 7 days ago

I believe this is a CPU/GPU vs ASIC comparison, rather than CPU vs GPU. They have always(ish) coexisted, being optimized for different things: ASICs have cost/speed/power advantages, but the design is more difficult than writing a computer program, and you can't reprogram them.

Generally, you use an ASIC to perform a specific task. In this case, I think the takeaway is that the LLM functionality here is performance-sensitive and has enough utility as-is to justify an ASIC.


RobotToaster | 7 days ago

It reminds me of the switch from GPUs to ASICs in bitcoin mining. I've been expecting this to happen.

yunohn | 7 days ago

But the BTC mining algorithm has not changed and will not change. That’s the only reason ASICs make at least a bit of sense for crypto.

The idea of AI as static weights is already challenged by the frequent model updates we see, and it may become a relic entirely once a new architecture is found.

hkt | 7 days ago

Heh, I said this exact thing in another thread the other day. Nice to see I wasn't the only one thinking it.

GTP | 7 days ago

The middle ground here would be an FPGA, but I believe you would need a very expensive one to implement an LLM on it.

dogma1138 | 7 days ago

FPGAs would be less efficient than GPUs.

FPGAs don’t scale; if they did, all GPUs would’ve been replaced by FPGAs for graphics a long time ago.

You use an FPGA when spinning a custom ASIC doesn’t make financial sense and a generic processor such as a CPU or GPU is overkill.

Arguably the middle ground here is TPUs, which take the most efficient parts of a “GPU” for these workloads but still rely on memory access in every step of the computation.
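The memory-access point can be made concrete with a back-of-envelope arithmetic-intensity sketch (illustrative numbers of my own, not from the thread; `arithmetic_intensity` and the accelerator figures below are assumptions for the sake of the example): batch-1 LLM decoding is dominated by matrix-vector products, whose FLOPs-per-byte ratio sits far below what any GPU-, FPGA-, or TPU-class accelerator can feed from memory, so every architecture bottlenecks on memory bandwidth.

```python
# Illustrative calculation: why batch-1 LLM decoding is memory-bound on any
# accelerator. Names and hardware numbers here are hypothetical examples.

def arithmetic_intensity(d_model: int, bytes_per_weight: int = 2) -> float:
    """FLOPs per byte moved for one matrix-vector product (batch size 1).

    A d x d weight matrix costs 2*d*d FLOPs (one multiply and one add per
    element) and must be streamed from memory once per token:
    d*d*bytes_per_weight bytes (2 bytes/weight for fp16).
    """
    flops = 2 * d_model * d_model
    bytes_moved = d_model * d_model * bytes_per_weight
    return flops / bytes_moved

# Hypothetical accelerator roofline: ~1000 TFLOP/s fp16 compute and ~3 TB/s
# memory bandwidth give a balance point of ~333 FLOPs/byte; below that,
# the chip waits on memory rather than arithmetic.
balance_point = 1000e12 / 3e12

print(arithmetic_intensity(4096))  # 1.0 FLOP/byte for fp16 weights
print(balance_point)               # decoding sits hundreds of times below it
```

The intensity is independent of model size, which is why the gap can't be closed by a bigger chip, only by more bandwidth or by batching many requests to reuse each streamed weight.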