Running Windows is perfectly fine; the major libraries for GPU-accelerated autodiff and neural networks (cuDNN with PyTorch or TensorFlow) have great support nowadays. It's AMD GPUs that remain essentially useless, as of 2019. If you want to get into the game, I'd recommend buying a middle-of-the-road NVIDIA GPU like the RTX 2060. For toying with autodiff and basic CNNs, a CPU works just fine, by the way...
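(Editor's note: to illustrate the point that toying with autodiff needs no GPU at all, here is a minimal, hypothetical reverse-mode autodiff sketch in pure Python. The `Var` class and its API are invented for illustration; real libraries like PyTorch implement the same idea with far more machinery.)

```python
# Minimal reverse-mode autodiff sketch (illustrative only).
# Each Var records its value plus, for derived values, the list of
# (parent, local_derivative) pairs produced by the op that made it.
class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self.parents = []  # [(parent Var, local derivative), ...]

    def __mul__(self, other):
        out = Var(self.value * other.value)
        # d(a*b)/da = b, d(a*b)/db = a
        out.parents = [(self, other.value), (other, self.value)]
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        # d(a+b)/da = d(a+b)/db = 1
        out.parents = [(self, 1.0), (other, 1.0)]
        return out

    def backward(self, seed=1.0):
        # Accumulate the incoming gradient, then push it to parents
        # via the chain rule.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = x * x + x        # y = x^2 + x, so dy/dx = 2x + 1 = 7 at x = 3
y.backward()
print(x.grad)        # 7.0
```

Twenty-odd lines on a laptop CPU is enough to experiment with the core mechanism; the GPU only matters once the tensors get big.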
abrichr|6 years ago
This appears to finally be starting to change. See:
https://github.com/RadeonOpenCompute/ROCm
https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/
mrguyorama|6 years ago
I guess more important question... Whyyyyyyyyyyyyy
dragandj|6 years ago
Despite all the talk about autodiff this or that, the stuff that matters is implemented by hand by Nvidia and Intel engineers and then high level libraries build on top. AMD is simply lagging in providing low-level C libraries and GPU kernels for that.
For example, let me chip in with the libraries I develop, in Clojure, no less. They support BOTH Nvidia GPU AND AMD GPU backends. Most of the stuff is equally good on AMD GPU and Nvidia GPU. With less fuss than in Julia and Python, I'd argue.
Check out Neanderthal, for example: https://neanderthal.uncomplicate.org
Top performance on Intel CPU, Nvidia GPU, AND AMD GPU, from Clojure, with no overhead, faster than NumPy etc. You can even mix all three in the same thread with the same code.
Lots of tutorials are available at https://dragan.rocks
I'm writing two books about that:
Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure [1]
and
Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, CUDA, OpenCL, MKL, Java, and Clojure [2]
Drafts are available right now at https://aiprobook.com
[1] https://aiprobook.com/deep-learning-for-programmers [2] https://aiprobook.com/numerical-linear-algebra-for-programme...
jawilson2|6 years ago