bri3d | 12 days ago

The lack of CUDA support on AMD is absolutely not that AMD "couldn't" (although I certainly won't deny that their software has generally been lacking), it's clearly a strategic decision.

Supporting CUDA on AMD would only build a bigger moat for NVidia; there's no reason to cede the entire GPU programming environment to a competitor. And indeed, this was a good gamble: as time goes on, CUDA has become less and less essential or relevant.

Also, if you want a practical path towards drop-in replacing CUDA, you want ZLUDA; this project is interesting and kind of cool but the limitation to a C subset and no replacement libraries (BLAS, DNN, etc.) makes it not particularly useful in comparison.

imtringued|11 days ago

They've already ceded the entire GPU programming environment to their competitor. CUDA is as relevant as it always has been.

The primary competitors are Google's TPUs, which are programmed using JAX, and Cerebras, which has an unrivaled hardware advantage.

If you insist on a hobbyist-accessible underdog, you'd go with Tenstorrent, not AMD. AMD is only interesting if you've already been buying Blackwells by the pallet and you're okay with building your own inference engine in-house for a handful of models.

enlyth|12 days ago

Even disregarding CUDA, NVidia has had like 80% of the gaming market for years without any signs of this budging any time soon.

When it comes to GPUs, AMD just has the vibe of a company that basically shrugged and gave up. It's a shame because some competition would be amazing in this environment.

cebert|12 days ago

What about PlayStation and Xbox? They use AMD graphics and are a substantial user base.

fdefitte|12 days ago

Agreed on ZLUDA being the practical choice. This project is more impressive as a "build a GPU compiler from scratch" exercise than as something you'd actually use for ML workloads. The custom instruction encoding without LLVM is genuinely cool though, even if the C subset limitation makes it a non-starter for most real CUDA codebases.

tgtweak|11 days ago

ZLUDA doesn't have full coverage though, which means only a subset of CUDA codebases can be ported successfully - they've focused on 80/20 coverage for core math.

Specifically:

Partial coverage: cuBLAS (limited scope), cuBLASLt (limited scope), cuDNN (limited scope), cuFFT, cuSPARSE, NVML (very limited scope)

Notably missing: cuSPARSELt, cuSOLVER, cuRAND, cuTENSOR, NPP, nvJPEG, nvCOMP, NCCL, OptiX
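As a toy illustration of what partial coverage means for porting (this is a sketch of the general idea, not ZLUDA's actual mechanism, and the entry-point names are just examples): a drop-in layer re-implements some library entry points, and any codebase touching an uncovered one fails at that call.

```python
# Toy model of subset API coverage in a drop-in replacement layer.
# The shim resolves only the entry points it has re-implemented;
# calls into anything else are exactly where a port breaks.
# Symbol names are illustrative, not ZLUDA's real implementation.

IMPLEMENTED = {
    "cublasSgemm": lambda *args: "handled by replacement backend",
    "cufftExecC2C": lambda *args: "handled by replacement backend",
}

def dispatch(symbol, *args):
    """Resolve a CUDA-library entry point against the shim's coverage."""
    try:
        return IMPLEMENTED[symbol](*args)
    except KeyError:
        raise NotImplementedError(f"{symbol} is not covered by this shim")

print(dispatch("cublasSgemm"))     # covered: the call goes through
try:
    dispatch("curandGenerate")     # uncovered: this is where porting stops
except NotImplementedError as e:
    print(e)
```

The practical consequence is binary per codebase: an application that only touches the covered subset ports cleanly, while one stray call into cuRAND or cuSOLVER makes the whole port fail.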

I'd estimate it's around 20% of CUDA library coverage.