Blender 3.0 takes support for AMD GPUs to the next level

317 points | WithinReason | 4 years ago | gpuopen.com

116 comments

[+] mastax|4 years ago|reply
It's good that AMD is providing support for some of their customers since OpenCL is being removed in 3.0. I hope that this is just the start of far greater investment into their software ecosystem. It was understandable that AMD invested less in drivers and support software when they were almost bankrupt but that excuse is drying up. There are rumors that future AMD GPU architectures will be quite good and it would be a shame for that advantage to be wasted by a lack of software support.
[+] AnthonyMouse|4 years ago|reply
They should really get together with Intel on an open standard, now that Intel is doing real GPUs. They both have a shared interest in people not being stuck on proprietary Nvidia interfaces.
[+] xbmcuser|4 years ago|reply
AMD didn't have the money to spend on software the way Nvidia and Intel did. Now that they do, they seem to be investing heavily in software. The Xilinx acquisition should also help with that.
[+] jjoonathan|4 years ago|reply
Glad to hear it. We're going on 10 years since Blender notionally got OpenCL support, I bought an AMD GPU on that basis, found out that the "OpenCL support" was so thoroughly dismal that it was slower than my CPU, sold my AMD card, ate the ebay fees, bought an NVidia card, ate the green tax, and got on with my work.

I had to repeat that lesson a few years later with deep learning code before it stuck.

Hopefully this time is different. I'm cautiously optimistic... but I'll let someone else be the guinea pig.

[+] moffkalast|4 years ago|reply
This is mostly why I stopped buying anything but Nvidia GPUs years ago; in 99% of software, people just support CUDA and call it a day, it seems.
[+] philovivero|4 years ago|reply
I wish I could be that guinea pig, but looks like this is a Windows-only thing, and I'm on a Mac.
[+] KennyBlanken|4 years ago|reply
Is it a 'green tax' when NVIDIA's product is superior in almost every way? There are only two negatives to NVIDIA products right now: poor macOS support, and Linux drivers that aren't quite as good as AMD's.

Everything compute uses NVIDIA/OptiX.

AMD's Windows drivers suck. I had no end of problems with games crashing and micro-stuttering. I used DDU to remove the old drivers, ran Tron, all sorts of troubleshooting steps. Switching to an NVIDIA card fixed it. Can't remember the last time a game crashed on me.

AMD's GPUs use significantly more power than their class peers. The RX 580 is a 185W-TDP card; NVIDIA's 1060 is 120W and outperforms it. The 1070 is a 150W card and handily outperforms the 580. The 1070 Ti still comes in (slightly) under the RX 580's TDP (by 5W), and wipes the floor with it.

The 5xxx-series cards were also garbage. AMD had no answer to DLSS or RTX - and the RTX cards were a huge leap in compute performance in particular. The cards crashed constantly because of driver problems, and people also discovered that the "creators" with pre-release cards had based their performance numbers on benchmark runs that lasted just long enough to finish before the cards overheated. NVIDIA cards performed better if you planned on gaming for more than five minutes at a time.

[+] my123|4 years ago|reply
Note that this is only supported on RDNA2 onwards.

It is _enabled_ on 1st gen RDNA, but with no "official support" from AMD. (as such, no SLAs or any guarantees really)

On prior generations, it's not enabled at all. (Vega, Polaris)

[+] volta83|4 years ago|reply
NVIDIA, Intel and AMD sit on the ISO C++ and ISO Fortran standards committees. They voted GPU support into the C++ 2017 and Fortran 2018 ISO standards. NVIDIA implemented this YEARS ago; Intel and AMD, almost five years after these standards were voted in, have not even announced a plan for implementing GPU support for the ISO standard languages on their GPUs.

Instead, Intel's and AMD's strategy is to push applications to "rewrite CUDA with HIP/oneAPI", which are not real standards and are no more portable than CUDA. The people claiming that these are "open" probably also think that OpenACC is an open standard, even though it mostly only runs on NVIDIA GPUs.

If you care about portable GPU code, unfortunately, NVIDIA is the only vendor that actually delivers.

You can take your standard conforming C++ code _today_, not a single line of CUDA, OpenMP, OpenACC, etc. and the nvidia compiler compiles it and runs it on GPUs and multi-core CPUs. You can also take your normal Python NumPy code and run it on NVIDIA GPUs "as is" without changes.

It doesn't get any more portable than that.
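To make the NumPy part of that claim concrete, here's a minimal sketch. It assumes CuPy (or NVIDIA's cuNumeric), which mirror the NumPy API, as the GPU backend; neither library is named in the comment above, so treat the swap-one-import pattern as illustrative:

```python
import numpy as np

# `xp` is the array module: NumPy on the CPU. On a machine with an
# NVIDIA GPU and CuPy installed, `import cupy as xp` runs the very
# same code on the GPU, since CuPy mirrors the NumPy API.
xp = np

def normalize(a):
    # Plain array math: identical source for CPU and GPU backends.
    return (a - xp.mean(a)) / xp.std(a)

out = normalize(xp.arange(10.0))
assert abs(float(xp.mean(out))) < 1e-12      # zero-centered
assert abs(float(xp.std(out)) - 1.0) < 1e-12  # unit variance
```

The key point is that `normalize` contains no backend-specific code; the backend is chosen entirely by which module is bound to `xp`.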

I don't understand how so many people in these threads who apparently care about portability are happy about AMD and Intel rewriting open-source applications in their proprietary, vendor-specific technologies, instead of being angry that they push those technologies rather than implementing GPU support for the ISO standard languages, as NVIDIA does.

[+] einpoklum|4 years ago|reply
> If you care about portable GPU code, unfortunately, NVIDIA is the only vendor that actually delivers.

Absolutely false. First, GPU code is inherently unportable in terms of performance. Switch to another vendor's GPU, or to another microarchitecture, and you may need to rewrite your entire kernel.

But even other than that: NVIDIA promotes its own proprietary and mostly-closed-source ecosystem, named CUDA, and has actively hampered the adoption and development of OpenCL, which was supposed to be the open standard. Not that OpenCL was not "betrayed" by others, but still.

> They voted GPU support into the C++ 2017 and Fortran 2018 ISO standards

There is no GPU support in the C++ standard. If you mean something like the `std::execution::par` policy for standard library algorithms, that's rather limited in scope and not really GPU-specific.

> While NVIDIA implemented this YEARS ago

Again, I'm not quite sure what you mean. Are you talking about how you can't write GPU kernels in C++17? Or about `std::execution::par` support in standard library implementations?

> You can take your standard conforming C++ code _today_, not a single line of CUDA, OpenMP, OpenACC, etc. and the nvidia compiler compiles it and runs it on GPUs and multi-core CPUs.

This doesn't make sense, and it's also not true. On a GPU, you would not use a lot of the standard library to get things done; that would either be exceedingly slow or just not work. Also, some language features (like exceptions) don't work on GPUs, and for good reason; hence anything in the standard library which relies on exceptions for error handling also doesn't work as-is - which is perfectly ok.

> You can also take your normal Python NumPy code and run it on NVIDIA GPUs "as is" without changes.

This is the only part of what you've written which is mostly-true (and I haven't fully verified even this since I don't do Python work).

[+] toolz|4 years ago|reply
Just playing devil's advocate here, but if only one vendor is conforming to these standards, could it be that the standards are heavily biased toward that vendor's implementation?
[+] nspattak|4 years ago|reply
On top of that, the only supported cards are the Radeon RX 6x00 series, as if the 10 miners who own these cards around the world even care about writing GPGPU code :)
[+] ur-whale|4 years ago|reply
> This removed OpenCL™ support for rendering on AMD GPUs for technical and performance reasons.

So, that's it, OpenCL is out the door, no more standardized way to write GPU compute code, I guess.

[+] chmod775|4 years ago|reply
> Luckily, AMD has an open-source solution for developers just for that. HIP (Heterogeneous-computing Interface for Portability) is a C++ runtime API and kernel language that allows developers to create portable applications for AMD and NVIDIA® GPUs from a single source code. This allows the Blender Cycles developers to write one set of rendering kernels and run them across multiple devices. The other advantage is that the tools with HIP allow easy migration from existing CUDA® code to something more generic.
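To illustrate the "easy migration from existing CUDA code" point: AMD's hipify tools automate this by, in the simple cases, renaming CUDA runtime calls to their HIP equivalents. Here's a toy sketch of that idea (real hipify handles vastly more; the three mappings below are illustrative only):

```python
import re

# A few of the 1:1 CUDA-runtime -> HIP-runtime renames that the
# hipify translators apply (illustrative subset, not the full table).
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
}

def hipify(source: str) -> str:
    # Replace whole identifiers only, so e.g. `cudaMallocHost` is not
    # mangled by the plain `cudaMalloc` rule.
    for cuda, hip in CUDA_TO_HIP.items():
        source = re.sub(rf"\b{cuda}\b", hip, source)
    return source

print(hipify("cudaMalloc(&p, n); cudaMemcpy(p, q, n, kind); cudaFree(p);"))
# → hipMalloc(&p, n); hipMemcpy(p, q, n, kind); hipFree(p);
```

This is why HIP is pitched at projects with existing CUDA kernels, like Cycles: the kernel language is deliberately close enough to CUDA that most of the port is mechanical.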
[+] Const-me|4 years ago|reply
In my experience, on Windows, DirectCompute shaders work OK on AMD too. For optimal performance they require optimizations for specific GPUs, though.
[+] floatboth|4 years ago|reply
Vulkan! Compute! Shaders!

I hate compute-specific APIs so much. Use Vulkan!

[+] esistgut|4 years ago|reply
Maybe ROCm support for Navi hardware is finally going to be released.
[+] lvl100|4 years ago|reply
AMD software support is pretty bad for a company of its size. They’re trying to do the minimum. It’s almost as if they don’t want to compete with Nvidia due to collusion.
[+] shmerl|4 years ago|reply
Why HIP and not for example Vulkan?

UPDATE:

Looks like this is more about adapting existing CUDA code to work with AMD, so it makes sense as the path of least resistance, I suppose.

[+] my123|4 years ago|reply
> Looks like this is more about adapting existing CUDA code to work with AMD. So it makes sense as the least resistance approach I suppose.

It isn't about that. Vulkan isn't flexible enough to make that happen; it would need true pointer support first, for example. GLSL/HLSL is also a much worse programming language than modern C++.

As described by Brecht Van Lommel (Blender and Cycles developer):

Vulkan has limitations in how you can write kernels, in practice you can’t currently use pointers for example. But also, GPU vendors will recommend certain platforms for writing production renderers, provide support around that, and various renderers will use it. Choosing a different platform means you will hit more bugs and limitations, have slower or no access to certain features, are not likely to see library support like OSL, etc.

Our strategy for Cycles X is to rely on the GPU vendors to support us and provide APIs that meet our requirements. We want to support as many GPUs as possible, but not at any cost.

[+] Ono-Sendai|4 years ago|reply
How lame, it would have been much nicer to just keep OpenCL functioning well and supported.
[+] dry_soup|4 years ago|reply
I don't think that was an option. Kernel size limitations prevented Blender from ever supporting OpenCL well. Not to mention Apple dropping official support for it years ago.
[+] de6u99er|4 years ago|reply
Looks like AMD's open source strategy is starting to pay off.
[+] blihp|4 years ago|reply
Not really on the compute side yet. This is their 2nd attempt at supporting Blender as the first (an OpenCL renderer for Blender) was deemed unmaintainable and deprecated shortly after it was released. All indications are AMD is doing much better on the video driver side (which is where the good news is), but on the compute side they appear to have a long way to go still.
[+] smoldesu|4 years ago|reply
I wouldn't go that far. Nvidia and even Intel have had hardware-accelerated features in Blender (RTX-acceleration and Optix, respectively), so this is more about getting them back on the same level. I do really like AMD GPUs though, I'd be interested to see how this performs on their APUs with the Vega graphics.
[+] EFruit|4 years ago|reply
Two major questions come to mind:

1. What will the impact be for ROCm's Navi support on Linux?

2. Does this mean they're getting more confident at handling simultaneous display AND compute with ROCm?

[+] my123|4 years ago|reply
> 1. What will the impact be for ROCm's Navi support on Linux?

You will see that supported officially in ROCm 5.0, the next ROCm release. Note that ROCm 5.0 drops support for first generation Vega GPUs. (Radeon Instinct MI25)

The only consumer card to have shipped with second gen Vega is the Radeon VII.

[+] pram|4 years ago|reply
What does this mean for ProRender? AMDs Blender strategy is kinda confusing.
[+] bsavery|4 years ago|reply
Hi there, I work there. ProRender is still very important, as it has hardware-accelerated ray tracing, which Cycles does not yet have on AMD.

Furthermore, we're focusing our ProRender efforts going forward on USD: https://github.com/GPUOpen-LibrariesAndSDKs/BlenderUSDHydraA... This adds USD workflows to Blender and other apps, while using ProRender as the rendering solution within USD.

[+] theHIDninja|4 years ago|reply
I cannot understand Blender's inanely arcane interface

Every tutorial out there seems to be for a drastically different version where half the shit no longer works in the current one without delving into more and more hidden menus

Every feature is its own labyrinth of menus, every model downloaded seems to be customized to a unique version with nothing similar to another

It is massively disheartening to try to comprehend video tutorials when they're pressing keyboard shortcuts at 20k wpm without explaining them and scouring over every detail trying to understand what they're doing turning an hour long tutorial into a day of confusion

I made a shitty donut and don't even know how I did, that's it

[+] hrnnnnnn|4 years ago|reply
It's a powerful tool and it takes time and effort to learn. I was in the same boat as you when I started, but after ~3 years of using Blender more or less daily I find the UI very intuitive and powerful.

One of the nice things about having a very active community is that there's always new tutorials being made, often at the beginner level, on the very latest versions.

[+] boyadjian|4 years ago|reply
Wow, being an AMD fan, I've been waiting for this for a long time. I can't wait to try it with my RX 5700.
[+] PostThisTooFast|4 years ago|reply
Most open-source graphics software is crippled by dogshit UI.

So far Blender is not. I'm impressed.

[+] ldehaan|4 years ago|reply
When AMD can compete with Nvidia in this space, they'll both be the same price. Just buy Nvidia.
[+] AnthonyMouse|4 years ago|reply
They might both be the same price, but the price will be lower, because that's how competition works. Which only happens if you support the underdog.
[+] mupuff1234|4 years ago|reply
That's not what happened when AMD was competitive in the CPU market vs Intel.

If AMD succeeds, at the very least there will be a long period where they undercut prices in order to gain market share.