item 42793283

juliangoldsmith | 1 year ago

AMD's hardware might be compelling if it had good software support, but it doesn't. CUDA already breaks regularly when I try to use TensorFlow on NVIDIA hardware. Running a poorly implemented clone of CUDA, where even getting PyTorch running is a small miracle, is going to be a hard sell.

All AMD had to do was support open standards. They could have added OpenCL/SYCL/Vulkan Compute backends to TensorFlow and PyTorch and covered 80% of ML use cases. Instead of differentiating themselves with actual working software, they decided to become an inferior copy of NVIDIA.

I recently switched from TensorFlow to tinygrad for personal projects and haven't looked back. Performance is similar to TensorFlow with JIT [0]. The difference is that instead of spending 5 hours fixing things whenever NVIDIA's proprietary kernel modules update or I need a new box, it actually Just Works when I do "pip install tinygrad".

0: https://cprimozic.net/notes/posts/machine-learning-benchmark...
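For reference, here's roughly what "Just Works" looks like: a minimal sketch assuming a recent tinygrad installed via pip (the fallback branch is just so the snippet also runs where tinygrad isn't available).

```python
# Minimal tinygrad usage sketch: gradient of sum(x^2) w.r.t. x is 2x.
# Assumes a recent tinygrad ("pip install tinygrad"); the except branch
# is an analytic fallback for machines without it installed.
try:
    from tinygrad import Tensor
    x = Tensor([1.0, 2.0, 3.0], requires_grad=True)
    (x * x).sum().backward()
    grad = x.grad.numpy().tolist()
except ImportError:
    grad = [2.0 * v for v in [1.0, 2.0, 3.0]]  # analytic gradient, 2x
print(grad)  # [2.0, 4.0, 6.0]
```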


latchkey | 1 year ago

> AMD's hardware might be compelling if it had good software support, but it doesn't. CUDA already breaks regularly when I try to use TensorFlow on NVIDIA hardware.

So it is all shit, but tinygrad saves the day?

juliangoldsmith | 1 year ago

It works out of the box without jumping through any hoops, and the fact that it has an OpenCL backend means it can run on a wide variety of hardware.

I don't know of any other autograd libraries with a non-CUDA backend, but I'd be interested to learn about them.
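The core of what these libraries provide is backend-independent. A conceptual sketch (not tinygrad's actual code) of a scalar reverse-mode autograd engine, the machinery that sits above whatever backend (CUDA, OpenCL, CPU) executes the ops:

```python
# Conceptual sketch of scalar reverse-mode autograd (micrograd-style),
# illustrating what an autograd library does above its compute backend.
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None  # leaf nodes propagate nothing

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = backward_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = backward_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(3.0), Value(4.0)
z = x * y + x          # z = xy + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```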