item 38655647


oelang | 2 years ago

They have improved their software significantly in the last year, but there is a movement, broader than AMD, that wants to get rid of CUDA.

The entire industry is motivated to break the Nvidia monopoly. The cloud providers, various startups, and established players like Intel are building their own AI solutions. At the same time, CUDA is rarely used directly; typically you go through a higher-level (Python) API that can target any low-level backend like CUDA, PTX, or ROCm.

What AMD is lacking right now is decent ROCm support for their consumer cards on all platforms. Right now, if you don't have one of these MI cards or an RX 7900 and you're not running Linux, you're not going to have a nice time. I believe the reason for this is that they have two different architectures: CDNA (the MI cards) and RDNA (the consumer hardware).


bornfreddy | 2 years ago

> Right now, if you don't have one of these MI cards or an RX 7900 and you're not running Linux, you're not going to have a nice time.

Are you saying that an RX 7900 + Linux = happy path for ML? This is news to me; can you tell us more?

I would love to escape CUDA and the high prices of Nvidia GPUs.

pbalcer | 2 years ago

That's what I have (an RX 7900 XT on Arch), and ROCm with PyTorch has been reasonably stable so far. Certainly more than good enough for my experimentation. PyTorch itself has official ROCm support, and things are pretty much plug & play.
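For what it's worth, the "plug & play" part comes down to ROCm builds of PyTorch reusing the existing `torch.cuda` API surface: on an AMD card, `torch.cuda.is_available()` returns True and the device is still named `"cuda"`, so standard device-selection code needs no AMD-specific branch. A minimal sketch, assuming a ROCm (or CUDA, or CPU-only) build of PyTorch is installed:

```python
# Sketch of device-agnostic PyTorch code. On a ROCm build, AMD GPUs are
# reported through torch.cuda.*, so the same code runs unchanged on
# Nvidia (CUDA) and AMD (ROCm) hardware, falling back to CPU otherwise.
import torch

def pick_device() -> torch.device:
    # No AMD-specific branch needed: ROCm masquerades as "cuda" here.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
x = torch.randn(256, 256, device=device)
y = x @ x.T  # dispatches to cuBLAS, rocBLAS, or CPU BLAS as appropriate
print(f"running on {device.type}")
```

The flip side is that this reuse of the CUDA namespace is exactly why support gaps on consumer RDNA cards are so jarring: the code looks portable, but whether it actually runs depends on which GPUs the ROCm build supports.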