I was able to use ROCm recently with Pytorch and after pulling some hair it worked quite well. The Radeon GPU I had on hand was a bit old and underpowered (RDNA2) and it only supported matmul on fp64, but for the job I needed done I saw a 200x increase in it/s over CPU despite the need to cast everywhere, and that made me super happy.
Best of all is that I simply set the device to `torch.device('cuda')` rather than openCL, which does wonders for compatibility and to keep code simple.
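A minimal sketch of that pattern (the helper name is made up; the guarded import just lets the snippet fall back to CPU on machines without PyTorch; on a ROCm build, `torch.cuda.is_available()` reports the AMD GPU exactly as it would an NVIDIA one):

```python
import importlib.util

def pick_device() -> str:
    """Return 'cuda' when a GPU is visible, else 'cpu'.

    On ROCm builds of PyTorch the AMD GPU is exposed through the same
    'cuda' device type, so this exact code runs unchanged on both vendors.
    """
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # PyTorch not installed at all
    import torch
    return "cuda" if torch.cuda.is_available() else "cpu"

# Typical use: tensors and models are then moved with .to(pick_device()).
```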
Protip: Use the official ROCM Pytorch base docker image [0]. The AMD setup is so finicky and dependent on specific versions of sdk/drivers/libraries and it will be much harder to make work if you try to install them separately.
[0]: https://rocm.docs.amd.com/en/latest/how_to/pytorch_install/p...
Sigh. It's great that these container images exist to give people an easy on-ramp, but they definitely don't work for every use case (especially once you're in embedded where space matters and you might not be online to pull multi-gb updates from some registry).
So it's important that vendors don't feel let off the hook to provide sane packaging just because there's an option to use a kitchen-sink container image they rebuild every day from source.
CUDA is the only reason I have an Nvidia card, but if more projects start migrating to a more agnostic environment, I'll be really grateful.
Running Nvidia in Linux isn't as much fun. Fedora and Debian can be incredibly reliable systems, but when you add an Nvidia card, I feel like I am back in Windows Vista with kernel crashes from time to time.
My Arch system would occasionally boot to a black screen. When this happened, no amount of tinkering could get it back. I had to reinstall the whole OS.
Turns out it was a conflict between nvidia drivers and my (10 year old) Intel integrated GPU. But once I switched to an AMD card, everything works flawlessly.
Ubuntu based systems barely worked at all. Incredibly unstable and would occasionally corrupt the output and barf colors and fragments of the desktop all over my screens.
AMD on arch has been an absolute delight. It just. Works. It's more stable than nvidia on windows.
For a lot of reasons-- but mainly Linux drivers-- I've totally sworn off nvidia cards. AMD just works better for me.
> CUDA is the only reason I have an Nvidia card, but if more projects start migrating to a more agnostic environment, I'll be really grateful.
What AMD really needs is to have 100% feature parity with CUDA without changing a single line of code. Maybe for this to happen it needs to add hardware features or something (I see people saying that CUDA as an API is very tailored to the capabilities of nvidia GPUs), I don't know.
If AMD relies on people changing their code to make it portable, it already lost.
I see these complaints from time to time and I never understand them.
I've literally been running nvidia on linux since the TNT2 days and have _never_ had this sort of issue. That's across many drivers and many cards over the many many years.
I use a rolling distro (OpenSUSE Tumbleweed) and have had zero issues with my NVIDIA card despite it pulling the kernel and driver updates as they get released. The driver repo is maintained by NVIDIA itself, which is amazing.
I often have issues booting to the installer or first boot after install with an NVidia GPU.
Pop_OS, Fedora and OpenSUSE work out of the box. Those are all Wayland, I believe. Debian/Ubuntu distros are a bad time; I think they’re still X11. It’s ironic because X11 is supposed to be the more stable display server.
Those problems might just be GNOME-related at this point. I've been daily-driving two different Nvidia cards for ~3 years now (1050 Ti then 3070 Ti) and Wayland has felt pretty stable for the past 12 months. The worst problem I experienced in that time was Electron and Java apps drawing incorrectly in XWayland, but both of those are fixed upstream.
I'm definitely not against better hardware support for AI, but I think your problems are more GNOME's fault than Nvidia's. KDE's Wayland session is almost flawless on Nvidia nowadays.
Nvidia on Linux is more like running Windows 95 from the gulag, and you're covered in ticks. I absolutely detest Nvidia because of the Linux hell they've created.
It isn't hobbyists who are making sure that PyTorch and other frameworks run well on these chips. It's teams of engineers at NVIDIA, AMD, Intel, etc., doing this as their primary assigned jobs, in exchange for money from employers who pay those salaries because they want to sell chips into the enormous demand for running PyTorch faster.
Hobbyist and open-source are definitely not synonyms.
CUDA is the result of years of NVIDIA supporting the ecosystem. Some people like to complain because they bought hardware that was cheaper but can't use it for what they want; when you buy NVIDIA, you aren't buying only the hardware, but also the insane amount of work they have put into the ecosystem. The same goes for Intel: MKL and scikit-learn-intelex aren't free to develop.
AMD has the hardware but the support for HPC is non-existent outside of the joke that is BLIS and AOCL.
I really wish for more competitors to enter the market in HPC, but AMD has a shitload of work to do.
> AMD has the hardware but the support for HPC is non-existent outside of the joke that is BLIS and AOCL.
You are probably two years behind the state of the art. The world's largest supercomputer, OLCF's Frontier, runs AMD CPUs and GPUs. It's emphatically using ROCm, not just BLIS and AOCL. See for example: https://docs.olcf.ornl.gov/systems/frontier_user_guide.html
That's hardly non-existent support for HPC.
Exactly. NVIDIA's core focus on AI, way before it was cool, has led to this advantageous position. For AMD, just being a price-friendly competitor to Intel and Nvidia was the motto.
Yeah, that's a pretty shortsighted take of things. Do you really believe that Nvidia hasn't taken steps to make sure their moat is as wide as possible?
There is only limited empirical evidence of AMD closing the gap that NVidia has created in science or ML software. Even considering PyTorch alone, the engineering effort to maintain specialized ROCm solutions alongside CUDA ones is not trivial (think FlashAttention, or any customization that optimizes your own model). If your GPUs only need to run a simple ML workflow nonstop for a few years, maybe there exist corner cases where the finances make sense. It is hard for AMD now to close the gap across the scientific/industrial software base of CUDA. NVidia feels like a software company for the hardware they produce; luckily they make their money from hardware and thus don't lock down the software libraries.
(Edited “no” to limited empirical evidence after a fellow user mentioned El Capitan.)
ROCm has HIP (1) which is a compatibility layer to run CUDA code on AMD GPUs. In theory, you only have to adjust #includes, and everything should just work, but as usual, reality is different.
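As a toy illustration of what AMD's hipify tools do mechanically (this string rewrite is a deliberately simplified sketch; the real `hipify-perl`/`hipify-clang` handle far more than renaming):

```python
# Toy sketch of HIP's porting story: the CUDA runtime API maps almost
# one-to-one onto HIP equivalents, so much of a "port" is renaming.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
}

def toy_hipify(source: str) -> str:
    """Rename CUDA runtime calls in a source string to their HIP names."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source
```

In practice this is where "reality is different" bites: anything touching warp intrinsics, libraries like cuBLAS/cuDNN, or inline PTX needs real porting work, not a rename.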
Newer backends for AI frameworks like OpenXLA and OpenAI Triton directly generate GPU native code using MLIR and LLVM, they do not use CUDA apart from some glue code to actually load the code onto the GPU and get the data there. Both already support ROCm, but from what I've read the support is not as mature yet compared to NVIDIA.
1: https://github.com/ROCm-Developer-Tools/HIP
I think the article claiming "PyTorch has dropped the drawbridge on the CUDA moat" is way over-optimistic. Yes, PyTorch is widely used by researchers and by users to quickly iterate over various ways to use the models, but when it comes to inference there are huge gains to be had by going a different route. Llama.cpp has shown 10x speedups on my hardware, for example (32GB of GPU RAM + 32GB of CPU RAM), for models like falcon-40b-instruct; for much smaller models on the CPU (under 10B) I saw up to 3x speedup just by switching to ONNX and OpenVINO.
Apple has shown us in practice the benefits of CPU/GPU memory sharing; will AMD be able to follow in their footsteps? The article claims AMD has a design with up to 192GB of shared RAM. Apple is already shipping a design with the same amount of RAM (if you can afford it). I wish them success, but I believe they need to aim higher than just matching Apple at some unspecified future date.
It depends on the domain. Increasingly people's interfaces to this stuff are the higher level libraries like tensorflow, pytorch, numpy/cupy, and to a lesser degree accelerated processing libraries such as opencv, PCL, suitesparse, ceres-solver, and friends.
If you can add hardware support to a major library and improve on the packaging and deployment front while also undercutting on price, that's the moat gone overnight. CUDA itself only matters in terms of lock-in if you're calling CUDA's own functions.
Does AMD have a solution for forward device compatibility (like PTX for NVidia)?
Last time I looked into ROCm (two years ago?), you seemed to have to compile stuff explicitly for the architecture you were using, so if a new card came out, you couldn't use it without a recompile.
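That matches how the toolchain works: ROCm kernels are compiled ahead of time for specific "gfx" ISA targets. A hypothetical helper that merely assembles such a compile command (the `--offload-arch` flag is real; the file name and targets here are made up for illustration):

```python
def hipcc_command(source_file: str, gfx_targets: list[str]) -> list[str]:
    # Each --offload-arch bakes one GPU ISA into the resulting binary;
    # a card whose gfx target isn't in the list can't run it without a
    # recompile, which is the forward-compatibility gap vs. NVIDIA's PTX.
    cmd = ["hipcc", source_file]
    for target in gfx_targets:
        cmd.append(f"--offload-arch={target}")
    return cmd

# e.g. hipcc_command("kernel.hip", ["gfx906", "gfx90a"])
```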
Not natively, but AdaptiveCpp (previously hipSYCL, then OpenSYCL) has a single-source, single-compiler-pass model, where they basically store LLVM IR as an intermediate representation.
https://github.com/AdaptiveCpp/AdaptiveCpp/blob/develop/doc/...

The performance penalty was within a few percent, at least according to the paper (figures 9 and 10): https://cdrdv2-public.intel.com/786536/Heidelberg_IWOCL__SYC...
> Crossing the CUDA moat for AMD GPUs may be as easy as using PyTorch.
Nvidia has spent a huge amount of work making code run smoothly and fast. AMD has to work hard to catch up. ROCm code is slower, has more bugs, doesn't have enough features, and has compatibility issues between cards.
> Nvidia has spent a huge amount of work making code run smoothly and fast.
Well, let's say "smoother" rather than "smoothly".
> ROCm code is slower
On physically-comparable hardware? Possible, but that's not an easy claim to make, certainly not as expansively as you have. References?
> has more bugs
Possible, but - NVIDIA keeps their bug database secret. I'm guessing you're concluding this from anecdotal experience? That's fair enough, but then - say so.
> ROCm ... doesn't have enough features
Likely, since AMD has spent less in that department (and had less to spend, I guess); plus, and no less importantly, it tried to go along with the OpenCL initiative, as specified by the Khronos consortium, while NVIDIA has sort of "betrayed" the initiative by investing in its vendor-locked, incompatible ecosystem and letting their OpenCL support decay in some respects.
> they have compatibility issues between cards.

Such as?
Everyone knows that CUDA is a core competency of Nvidia and they have stuck to it for years and years refining it, fixing bugs, and making the experience smoother on Nvidia hardware.
On the other hand, AMD has not had the same level of commitment. They used to sing the praises of OpenCL. And then there is ROCm. Tomorrow, it might be something else.
Thus, Nvidia CUDA will get a lot more attention and tuning from even the portability layers because they know that their investment in it will reap dividends even years from now, whereas their investment in AMD might be obsolete in a few years.
In addition, even if there is theoretical support, getting specific driver support and working around driver bugs is likely to be more of a pain with AMD.
This is what people complain about, but at the same time there aren't enough cards, so the people with AMD cards want to use them. So they fix the bugs, or report them to AMD so they can fix them, and it gets better. Then more people use them and submit patches and bug reports, and it gets better.
At some point the old complaints are no longer valid.
People complain about Nvidia being anticompetitive with CUDA, but I don't really see it. They saw a gap in the standards for on-GPU compute and put tons of effort into a proprietary alternative. They tied CUDA to their own hardware, which sorta makes technical sense given the optimizations involved, but it's their choice anyway. They still support the open standards, but many prefer CUDA and will pay the Nvidia premium for it because it's actually nicer. They also don't have CPU marketshare to tie things to.
Good for them. We can hope the open side catches up either by improving their standards, or adding more layers like this article describes.
CUDA was released in 2007 and the development of it started even earlier - possibly even in the 90s. Back then nobody else cared about GPU compute. OpenCL came out 2 years after that.
And the question for most that remains once AMD catches up: will the duopoly result in lower prices to a reasonable level for hobbyists or bootstrapped startups, or will AMD just gouge like NVidia?
I think in this case the changes needed to make AMD useful will open the market to other players as well (e.g. Intel).
PyTorch is already walking down this path and while CUDA-based performance is significantly better, that is changing and of course an area of continued focus.
It's not that people don't like Nvidia, rather it's just that there is a lot of hardware out there that can technically perform competitively, but the work needs to be done to bring it into the circle.
AMD prices will go up because of the newfound ability to gouge for AI/ML/GPGPU workloads. Nvidia's will likely go down, but I don't expect it will be by much. The market demand is high, so the equilibrium price will also be high. Supply isn't at pandemic / crypto-rush lows, but the supply of cards useful for CUDA/ROCm still is.
When AMD caught up to Intel in CPUs, prices went down (at least compared to when Intel had a complete monopoly). The same was true when AMD gaming cards were more competitive. Chip manufacturers have shown themselves willing to both raise prices when they can and lower them when they must.
Demand will push AMD prices up by a couple hundred bucks and Nvidia cards down by a couple hundred bucks. A hobbyist customer will be neither better nor worse off.
In general I think it will lower prices, though certainly not as much as if there were 4+ players on the market, where it's hard to anticipate your rivals. A 2-body system is pretty straightforward, a 3-body system can be stable for a while with some restrictions, and a 4-body problem is really damn hard...
>There is also a version of PyTorch that uses AMD ROCm, an open-source software stack for AMD GPU programming. Crossing the CUDA moat for AMD GPUs may be as easy as using PyTorch.
Unfortunately, since the AMD firmware doesn't reliably do what it's supposed to, those ROCm calls often don't either. That's if your AMD card is even still supported by ROCm: the AMD RX 580 I bought in 2021 (the great GPU shortage) had its ROCm support dropped in 2022 (4 years of support total).
The only reliable interface in my experience has been OpenCL.
When coding with Vulkan, for graphics or compute (the latter is the relevant one here), you need CPU code (written in C++, Rust, etc.), then data serialized as bytes, then shaders which run on the graphics card. This 3-step process creates friction, much in the same way backend/serialization/frontend does in web dev: duplication of work, type checking not crossing the bridge, the shader language being limited, etc.
My understanding is CUDA's main strength is avoiding this. Do you agree? Is that why it's such a big deal? Ie, why this article was written, since you could always do compute shaders on AMD etc using Vulkan.
NVidia hardware/CUDA stack is great, but I also love to see competition from AMD, George Hotz’s Tiny Corp, etc.
Off topic, but I am also looking with great interest at Apple Silicon SOCs with large internal RAM. The internal bandwidth also keeps getting better which is important for running trained LLMs.
Back on topic: I don’t own any current Intel computers but using Colab and services like Lambda Labs GPU VPSs is simple and flexible. A few people here mentioned if AMD can’t handle 100% of their workload they will stick with Intel and NVidia - understandable position, but there are workarounds.
Don’t agree at all. PyTorch is one library - yes, it’s important that it supports AMD GPUs but it’s not enough.
The ROCm libraries just aren’t good enough currently. The documentation is poor. AMD need to heavily invest in their software ecosystem around it, because library authors need decent support to adopt it. If you need to be a Facebook sized organisation to write an AMD and CUDA compatible library then the barrier to entry is too high.
Disagree that the Rocm libraries are poor. Their integration with everything else is poor because everything else is so highly Nvidia centric, and AMD can't just write to the same API because it's copyright Nvidia (see Oracle's Java case).
The adoption of CUDA has been such a coup for Nvidia; it's going to take some time to dismantle it.
I don't understand the author's argument (if there is one) - pytorch has existed for ages. AMD's Instinct MI* range has existed for years now. If these are the key ingredients why has it not already happened?
I'm lazy, so it's 99% for me. I don't even mess with AMD CPUs; I know they're not exactly the same instruction set as Intel, and more importantly they work with a different (and less mainstream) set of mobos, so I don't want em. If AMD manages to pull more customers their way, that's great, it just means lower Intel premium for me.
If the AI hype persists the CUDA moat will be less relevant in ~2 yrs.
Historically HPC was simply not sufficiently interesting (in commercial sense) for people to throw serious resources in the direction of making it a mass market capability.
NVIDIA first capitalized on the niche crypto industry (which faded) and was then well positioned to jump into the AI hype. The question is how much of the hype will become real business.
The critical factor for the post-CUDA world is not any circumstantial moat but who will be making money servicing stable, long term computing needs. I.e., who will be buying this hardware not with speculative hot money but with cashflow from clients that regularly use and pay for a HPC-type application.
These actors will be the long term buyers of commercially relevant HPC and they will have quite a bit of influence on this market.
When I try to install rocm-ml-sdk on Arch linux it'll tell me the total installed size would be about 18GB.
What can possibly explain this much bloat for what should essentially be a library on top of a graphics driver as well as some tools (compiler, profiler etc.)?
A couple hundred MB I could understand if they come with graphical apps and demos, but not this.
ROCm is great. We were able to get run and finetune LLMs on AMD Instincts with parity to NVIDIA A100s - and built an SDK that’s as easy to use as HuggingFace or easier (Lamini). Or at the very least, our designer is able to finetune/train the latest LLMs on them like Llama 2 - 70B and Mistral 7B with ease. The ROCm library isn’t as easy to use as CUDA because as another poster said, the ecosystem was built around CUDA. For example, it’s even called “.cuda()” in PyTorch to put a model on a GPU, when in reality you’d use it for an AMD GPU too.
Nope. PyTorch is not enough; you have to do some C++ occasionally (as the code there can be optimized radically, as we see in llama.cpp and the like). ROCm is unusable compared to CUDA (4x more code for the same problem).
I don't understand why everyone neglects good, usable and performant lower-level APIs. ROCm is fast, low-level, but much much harder to use than CUDA, and the market seems to agree.
I know a lot of people don’t like George, and I dislike plenty of people who are doing the right thing (including, by some measures, sama and siebel while they were pushing YC forward).
But not admitting the tinygrad project is the best Rebel Alliance on this is just a matter of letting vibe overcome results.
As a former ETH miner I learned the hard way that saving a few bucks on hardware may not be worth operational issues.
I had a miner running with Nvidia cards and a miner running with AMD cards. One of them had massive maintenance demand and the other did not. I will not state which brand was better, imho.
Currently I estimate that running miners and running gpu servers has similar operational requirements and finally at scale similar financial considerations.
So, whatever is cheapest to operate in terms of time expenditure, hw cost, energy use,… will be used the most.
P.S.: I ran the mining operation not to earn money but mainly out of curiosity. And it was a small-scale business powered by a PV system and an attached heat pump.
I ran 150,000+ AMD cards for mining ETH. Once I fully automated all the vbios installs and individual card tuning, it ran beautifully. Took a lot of work to get there though!
Fact is that every single GPU chip is a snowflake. No two operate the same.
On my PC workstation (Debian Testing) I have absolutely no problems running NVIDIA PNY Quadro P2200, which I'm going to upgrade with PNY Quadro RTX 4000 soon. I'd love to make a switch for AMD Radeon, but the very short (and shrinking) list of ROCm supported cards makes this move highly improbable for the not-so-nearest future.
This article doesn’t address the real challenge [in my mind].
Framework support is one thing, but what about the million standalone CUDA kernels that have been written, especially common in research. Nobody wants to spend time re-writing/porting those, especially when they probably don’t understand the low-level details in the first place.
Not to mention, what is the plan for comprehensive framework support? I’ve experienced the pain of porting models to different hardware architectures where various ops are unsupported. Is it realistic to get full coverage of e.g., PyTorch?
Someone could reimplement CUDA for AMD hardware. That would be legal because copying APIs for compatibility purposes is not copyright infringement. (See Google LLC v. Oracle America, Inc., 593 U.S. ___ (2021)).
AMD is unlikely to do this, however, because it would commodify their own products under their competitor’s API.
A third party could do it though. It may make sense as an open source project.
I suspect that AMD will use their improved compatibility with the leading ML stack for data center deals. Presumably by offering steep discounts over NVIDIA’s GPUs. This might help them to break into the market.
Individual ML practitioners will probably not be tempted to switch to AMD cards anytime soon. Whatever the price difference is: it will hardly offset the time that is subsequently sunk into working around remaining issues resulting from a non-CUDA (and less mature) stack underneath PyTorch.
Is there any reason OpenCL is not the standard in implementations like PyTorch? Similar performance, open standard, runs everywhere - what's the downside?
The downsides are that it can't express a bunch of stuff CUDA or OpenMP can, plus Nvidia's OpenCL implementation is worse than their CUDA one. So OpenCL is great if you want a lower-performance way of writing a subset of the programs you want to write.
AMD playing catch-up is a good thing: their SW solution is intended to run on any HW, and with HIP being basically line-for-line compatible with CUDA it makes porting very easy. They did it with FSR, and they are doing it with ROCm. Hopefully it takes off, as it's a more open ecosystem for the industry. Necessity is the mother of invention and all that.
1. Since PyTorch has grown very popular, and there's an AMD backend for that, one can switch GPU vendors when doing Generative AI work.
2. Like NVIDIA's Grace+Hopper CPU-GPU combo, AMD is/will be offering "Instinct MI300A", which improves performance over having the GPU across a PCIe bus from a regular CPU.
What's wrong with CUDA? I avoided it for years because it's proprietary, but about a year ago I started using it because all the alternatives (OpenGL/Vulkan compute, OpenCL, WebGPU, ...) couldn't quite do what I wanted, and it turned out to be a game changer. Nothing comes close to it. Now I'm hooked, because there simply isn't an alternative that's as easy to use, yet powerful and fast.
I wish there was an open alternative, but NVIDIA did several things right that others, especially Khronos, do not: The UX is top-notch. It makes the common cases easy yet still fast, and from there you can optimize to your hearts content. Khronos, however, usually completely over-engineers things and makes the common case hard and cumbersome with massive entry barriers.
Not happening. WGSL wants to support the lowest common denominator, so it'll always mainly be a 5-year-old mobile-phone API. Also, if you want to beat CUDA, you'll need some functionality that's completely missing in compute shaders, especially WGSL, like pointers and pointer casting (and that GLSL buffer reference extension is the worst emulation of that feature I've ever seen).
If you can field a competitively priced consumer card that can run llama fast then you're already halfway there because then the ecosystem takes off. Especially since nvidia is being really stingy with their vram amounts.
H100 & datacenter is a separate battle certainly, but on mindshare I think some deft moves from AMD will get them there quite fast once they pull their finger out their A and actually try sorting out the driver stack.
Man oh man, where did we go wrong that CUDA is the more compatible option over OpenCL?
Now what I'd like to see is real benchmarks for compute power. Might even get a few startups to compete in this new area.
NVIDIA's moat is the years of work built by the OSS community, big corporations, and research institutes. They spend all their time building for CUDA, and a lot of implicit designs are derived from CUDA's characteristics. That will be the main challenge.
nabla9|2 years ago
Nvidia has spent huge amount of work to make code run smoothly and fast. AMD has to work hard to catch up. ROCm code is slower , has more bugs, don't have enough features and they have compatibility issues between cards.
latchkey|2 years ago
einpoklum|2 years ago
Well, let's say "smoother" rather than "smoothly".
> ROCm code is slower
On physically-comparable hardware? Possible, but that's not an easy claim to make, certainly not as expansively as you have. References?
> has more bugs
Possible, but - NVIDIA keeps their bug database secret. I'm guessing you're concluding this from anecdotal experience? That's fair enough, but then - say so.
> ROCm ... don't have enough features and
Likely. while AMD has both spent less in that department (and had less to spend I guess); plus, and no less importantly - it tried to go along with the OpenCL initiative, as specified by the Khronos consortium, while NVIDIA has sort of "betrayed" the initiative by investing in it's vendor-locked, incompatible ecosystem and letting their OpenCL support decay in some respects.
> they have compatibility issues between cards.
such as?
RcouF1uZ4gsC|2 years ago
Everyone knows that CUDA is a core competency of Nvidia and they have stuck to it for years and years refining it, fixing bugs, and making the experience smoother on Nvidia hardware.
On the other hand, AMD has not had the same level of commitment. They used to sing the praises of OpenCL. And then there is ROCm. Tomorrow, it might be something else.
Thus, Nvidia CUDA will get a lot more attention and tuning from even the portability layers because they know that their investment in it will reap dividends even years from now, whereas their investment in AMD might be obsolete in a few years.
In addition, even if there is theoretical support, getting specific driver support and working around driver bugs is likely to be more of a pain with AMD.
AnthonyMouse|2 years ago
At some point the old complaints are no longer valid.
hot_gril|2 years ago
Good for them. We can hope the open side catches up either by improving their standards, or adding more layers like this article describes.
zirgs|2 years ago
binarymax|2 years ago
quitit|2 years ago
PyTorch is already walking down this path and while CUDA-based performance is significantly better, that is changing and of course an area of continued focus.
It's not that people don't like Nvidia, rather it's just that there is a lot of hardware out there that can technically perform competitively, but the work needs to be done to bring it into the circle.
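A minimal sketch of why PyTorch code stays portable across vendors (pure Python, with the availability checks passed in as flags rather than calling torch directly): ROCm builds of PyTorch expose AMD GPUs through the same "cuda" device string, so the common branch covers both vendors unmodified.

```python
def pick_device(cuda_available: bool, mps_available: bool = False) -> str:
    # ROCm builds of PyTorch report HIP devices through the "cuda" backend,
    # so this one branch covers both NVIDIA and AMD cards.
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple Silicon fallback
    return "cpu"

# In real code the flags would come from torch.cuda.is_available() and
# torch.backends.mps.is_available().
device = pick_device(cuda_available=True)
```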
klysm|2 years ago
adamsvystun|2 years ago
evanjrowley|2 years ago
johngossman|2 years ago
rafaelmn|2 years ago
rdsubhas|2 years ago
stjohnswarts|2 years ago
wil421|2 years ago
superkuh|2 years ago
Unfortunately, since the AMD firmware doesn't reliably do what it's supposed to, those ROCm calls often don't either. That's if your AMD card is even still supported by ROCm: the AMD RX 580 I bought in 2021 (the great GPU shortage) had its ROCm support dropped in 2022 (4 years support total).
The only reliable interface in my experience has been via opencl.
htrp|2 years ago
zucker42|2 years ago
65a|2 years ago
the__alchemist|2 years ago
My understanding is CUDA's main strength is avoiding this. Do you agree? Is that why it's such a big deal? Ie, why this article was written, since you could always do compute shaders on AMD etc using Vulkan.
mark_l_watson|2 years ago
Off topic, but I am also looking with great interest at Apple Silicon SOCs with large internal RAM. The internal bandwidth also keeps getting better which is important for running trained LLMs.
Back on topic: I don’t own any current Intel computers but using Colab and services like Lambda Labs GPU VPSs is simple and flexible. A few people here mentioned if AMD can’t handle 100% of their workload they will stick with Intel and NVidia - understandable position, but there are workarounds.
physicsguy|2 years ago
The ROCm libraries just aren’t good enough currently. The documentation is poor. AMD need to heavily invest in their software ecosystem around it, because library authors need decent support to adopt it. If you need to be a Facebook sized organisation to write an AMD and CUDA compatible library then the barrier to entry is too high.
weebull|2 years ago
The adoption of CUDA has been such a coup for Nvidia, it's going to take some time to dismantle it.
alecco|2 years ago
ris|2 years ago
fluxem|2 years ago
hot_gril|2 years ago
nologic01|2 years ago
Historically HPC was simply not sufficiently interesting (in commercial sense) for people to throw serious resources in the direction of making it a mass market capability.
NVIDIA first capitalized on the niche crypto industry (which faded) and was then well positioned to jump into the AI hype. The question is how much of the hype will become real business.
The critical factor for the post-CUDA world is not any circumstantial moat but who will be making money servicing stable, long term computing needs. I.e., who will be buying this hardware not with speculative hot money but with cashflow from clients that regularly use and pay for a HPC-type application.
These actors will be the long term buyers of commercially relevant HPC and they will have quite a bit of influence on this market.
ddtaylor|2 years ago
ginko|2 years ago
What can possibly explain this much bloat for what should essentially be a library on top of a graphics driver as well as some tools (compiler, profiler etc.)? A couple hundred MB I could understand if they come with graphical apps and demos, but not this..
sharonzhou|2 years ago
atemerev|2 years ago
I don't understand why everyone neglects good, usable and performant lower-level APIs. ROCm is fast, low-level, but much much harder to use than CUDA, and the market seems to agree.
voz_|2 years ago
whywhywhywhy|2 years ago
freedomben|2 years ago
benreesman|2 years ago
But not admitting the tinygrad project is the best Rebel Alliance on this is just a matter of letting vibe overcome results.
frnkng|2 years ago
I had a miner running with Nvidia cards and a miner running with AMD cards. One of them had massive maintenance demand and the other did not. I will not state which brand was better imho.
Currently I estimate that running miners and running gpu servers has similar operational requirements and finally at scale similar financial considerations.
So, whatever is cheapest to operate in terms of time expenditure, hw cost, energy use,… will be used the most.
P.s.: I ran the mining operation not to earn money but mainly out of curiosity. And it was a small scale business powered by a PV system and an attached heat pump.
latchkey|2 years ago
Fact is that every single GPU chip is a snowflake. No two operate the same.
pjmlp|2 years ago
ElectronBadger|2 years ago
upbeat_general|2 years ago
Framework support is one thing, but what about the million standalone CUDA kernels that have been written, especially common in research. Nobody wants to spend time re-writing/porting those, especially when they probably don’t understand the low-level details in the first place.
Not to mention, what is the plan for comprehensive framework support? I’ve experienced the pain of porting models to different hardware architectures where various ops are unsupported. Is it realistic to get full coverage of e.g., PyTorch?
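The usual stopgap for incomplete op coverage is a CPU fallback: run the op on the accelerator, and if the backend has no kernel for it, retry on CPU at the cost of a device round-trip. A toy sketch with hypothetical names (real frameworks expose similar escape hatches, e.g. environment-variable-controlled fallbacks):

```python
class UnsupportedOpError(RuntimeError):
    """Hypothetical: raised by a backend lacking a kernel for the requested op."""

def run_with_fallback(op, x, device="gpu"):
    # Try the accelerator first; fall back to CPU if the op is unsupported.
    # This keeps models running on a partially-supported backend, but the
    # device round-trip can erase the speedup for that op.
    try:
        return op(x, device=device)
    except UnsupportedOpError:
        return op(x, device="cpu")
```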
bdowling|2 years ago
AMD is unlikely to do this, however, because it would commodify their own products under their competitor’s API.
A third party could do it though. It may make sense as an open source project.
blueboo|2 years ago
hankman86|2 years ago
Individual ML practitioners will probably not be tempted to switch to AMD cards anytime soon. Whatever the price difference is: it will hardly offset the time that is subsequently sunk into working around remaining issues resulting from a non-CUDA (and less mature) stack underneath PyTorch.
falconroar|2 years ago
LoganDark|2 years ago
JonChesterfield|2 years ago
tails4e|2 years ago
tormeh|2 years ago
einpoklum|2 years ago
1. Since PyTorch has grown very popular, and there's an AMD backend for that, one can switch GPU vendors when doing Generative AI work.
2. Like NVIDIA's Grace+Hopper CPU-GPU combo, AMD is/will be offering "Instinct MI300A", which improves performance over having the GPU across a PCIe bus from a regular CPU.
ur-whale|2 years ago
I really wish they would, and properly, as in: fully open solution to match CUDA.
CUDA is a cancer on the industry.
mschuetz|2 years ago
I wish there was an open alternative, but NVIDIA did several things right that others, especially Khronos, do not: The UX is top-notch. It makes the common cases easy yet still fast, and from there you can optimize to your heart's content. Khronos, however, usually completely over-engineers things and makes the common case hard and cumbersome with massive entry barriers.
raggi|2 years ago
mschuetz|2 years ago
jeffreygoesto|2 years ago
jiggawatts|2 years ago
arcanus|2 years ago
spandextwins|2 years ago
cantaloupe|2 years ago
tpmx|2 years ago
Zetobal|2 years ago
Havoc|2 years ago
Late certainly, too late I don't think so.
If you can field a competitively priced consumer card that can run llama fast then you're already halfway there because then the ecosystem takes off. Especially since nvidia is being really stingy with their vram amounts.
H100 & datacenter is a separate battle certainly, but on mindshare I think some deft moves from AMD will get them there quite fast once they pull their finger out their A and actually try sorting out the driver stack.
Andrew018|2 years ago
[deleted]
KingLancelot|2 years ago
[deleted]