jchonphoenix | 2 years ago
The media keeps missing the real lock-in Nvidia has: CUDA. It's not the hardware. It's the ability for someone to use it painlessly.
ugh123|2 years ago
- Google search (vs previous entrenched search engines in the early '00s)
- Adsense/doubleclick (vs early ad networks at the time)
- Gmail (vs aol, hotmail, etc)
- Android (vs iOS, palm, etc)
- Chrome (vs all other browsers)
Sure, I'm picking the obvious winners, but these are all market leaders now (Android by global share), where the earlier incumbents were big, but not Google-big.
Even if Google's use of TPUs is purely self-serving, it will have a noticeable effect on their ability to scale their consumer AI usage at diminishing costs. Their ability to scale AI inference to meet "Google scale" demand, and to do it cheaply (at least by industry standards), will make them formidable in the "AI race". This is why Altman/Microsoft and others are investing heavily in AI chips.
But I don't think their TPU will be only self-serving; rather, they'll scale its use through GCP for enterprise customers to run AI. Microsoft is already tapping their enterprise customers for this new "product". But those kinds of customers will care more about cost than anything else.
The long-term game here is a cost game, and Google is very, very good at that and has a headstart on the chip side.
dekhn|2 years ago
The TPU hardware is great in a lot of ways and it allowed google to move quickly in ML research and product deployments, but I don't think it was ever a money-maker for cloud.
amelius|2 years ago
Really? What if someone writes a new back-end to PyTorch, TensorFlow and perhaps a few other popular libraries? Then will CUDA still matter that much?
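Mechanically, the back-end idea is just op dispatch: the framework routes each tensor op to whichever registered back-end owns the device. A toy sketch in plain Python (hypothetical names throughout, not PyTorch's actual registration API):

```python
# Hypothetical sketch, not any real framework's API: each device type
# registers a table of kernels, and framework-level ops dispatch into it.
_BACKENDS = {}

def register_backend(device, ops):
    """Register an op table for a device type (e.g. 'cuda', 'tpu')."""
    _BACKENDS[device] = ops

def matmul(a, b, device):
    """Framework-level op: route to the registered back-end's kernel."""
    return _BACKENDS[device]["matmul"](a, b)

def _naive_matmul(a, b):
    # Stand-in "kernel": plain-Python matrix multiply on lists of lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

register_backend("cuda", {"matmul": _naive_matmul})
register_backend("tpu", {"matmul": _naive_matmul})  # drop-in replacement
```

The mechanism is the easy part; the catch is coverage and performance. The table has to be filled with fast, correct kernels for thousands of ops, which is where non-CUDA back-ends tend to struggle.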
p1esk|2 years ago
If that was easy to do surely AMD would have done it by now? After many years of trying?
Mehdi2277|2 years ago
Additionally, many operations that run on GPU are simply unsupported on TPU. Sparse tensors have pretty limited support, and there are plenty of models that will crash on TPU and require refactoring, sometimes a heavy, thousands-of-lines refactoring.
edit: PyTorch is even worse. PyTorch does not implement efficient TPU device data loading and generally has poor performance, nowhere comparable to TensorFlow/JAX numbers. I'm unaware of any PyTorch benchmark where TPU actually wins. For TensorFlow/JAX, if you can get it running and your model suits TPU assumptions (so a basic CNN), then yes, it can be cost-effective. For PyTorch, even simple cases tend to lose.
htrp|2 years ago
Unless you physically work next to the TPU hardware team, the PyTorch support for TPUs is pretty brittle.
refulgentis|2 years ago
cf. https://news.ycombinator.com/item?id=39149854