There was an article on here a couple of months back that was an intro to Blender from a geek/Vim perspective. I felt a bit inspired and downloaded it to have a play. It's an absolutely brilliant application - I highly recommend giving it a try.
Having learned blender first is likely why Vim was so appealing to me.
Blender's UI is so much better this way. The idea that the functionality of any professional software tool should be immediately visible is just silly. Photoshop, GIMP, Autocad, 3DS Max, Maya, etc. all follow that philosophy, and just end up with way too many buttons and menus for anyone to want to sift through. Blender shows functionality only where it is needed in the most beautiful modular system I have ever used.
Blender currently has the best UI I have ever experienced in a desktop app. Their design philosophy is highly modular. But have you tried an older version from the Blender UI dark ages? Things weren't always so intuitive and everything was a big unstructured mess.
I also highly recommend trying out the Pie Menu addon in the settings. It's more or less becoming an official part of the Blender UI flow.
Blender is an alien in the world of software, but I agree -- it's brilliant. I've never stopped using it since I tried it 15 years ago. It took a long time to get productive, but it was definitely a good time investment.
It's also very dev friendly. And exploring its file format makes for very good nerdy times.
Interesting post and analogy. Maybe it's time for me to look at Blender again too! I think the last time was somewhere around 2007-2009. In 2010 I found http://www.vim3d.com/ and have kind of been hoping it would take off since that seems like an interface I might enjoy more, but I never devoted any time to learning it.
"OpenCL works fine on NVIDIA cards, but performance is reasonably slower (up to 2x slowdown) compared to CUDA, so it doesn't really worth using OpenCL on NVIDIA cards at this moment."
I wonder if that's intentional on NVIDIA's part.
Does it mention which version of OpenCL they're using? I'm looking forward to hearing news about v2.x and SPIR-V.
> I wonder if that's intentional on NVIDIA's part.
I think that's a reasonable guess. NVIDIA only supports OpenCL 1.2 (and it took them about 6 years to get there from 1.0, while other vendors were at 2.x).
> I wonder if that's intentional on NVIDIA's part.
I highly doubt it; a normal resource-allocation conflict would suffice as an explanation.
NVIDIA has everything to gain by being ahead on each and every metric. Purposefully hobbling OpenCL would eventually come out, and in the meantime it would make people decide to buy a non-NVIDIA product.
I think the reason is more that NVIDIA has been able to tailor CUDA to a much greater degree to match the architecture of their cards, and by extension anybody who writes for CUDA automatically benefits from that. OpenCL is more general, but that also immediately implies it will be less efficient. And it doesn't take much in the way of missing optimizations (a single memory-fetch penalty would do) to get a 2x penalty. GPU programming is much less forgiving of subtle mistakes than regular programming, because each one immediately gets multiplied by a very large factor rather than costing a single missed cycle.
A nice example of the reverse: ATI cards dominated the bitcoin GPU mining scene because of their ability to do a particular operation in one clock tick instead of two on NVIDIA cards.
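The operation in question was reportedly the 32-bit rotate that SHA-256 leans on: AMD hardware of that era had a single instruction for it, while NVIDIA cards needed two shifts and an OR. A pure-Python sketch of that rotate as SHA-256 uses it (illustrative only; the function names are mine):

```python
MASK32 = 0xFFFFFFFF

def rotr32(x: int, n: int) -> int:
    """Right-rotate a 32-bit word by n bits, written the way hardware
    without a native rotate has to do it: two shifts plus an OR."""
    return ((x >> n) | (x << (32 - n))) & MASK32

def big_sigma0(x: int) -> int:
    """One of SHA-256's sigma functions: three rotates per call, which
    is why a per-rotate instruction-count difference showed up so
    clearly in mining throughput."""
    return rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22)
```

A miner evaluates functions like this millions of times per second, so saving one instruction per rotate compounds directly into hash rate.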
We used to have both an OpenCL and CUDA implementation in earlier versions. The CUDA version was slightly faster (10-15%?) on Nvidia cards, but not worth the effort of maintaining both implementations.
I don't think it has anything to do with the driver itself. At least in my experience, the custom code you write performs more or less the same across both CUDA and OpenCL. The issues arise when you are using the toolkit libraries (like BLAS and FFT): cuBLAS and cuFFT are much more heavily tuned for NVIDIA hardware than any OpenCL versions available at the moment.
I have rarely observed that much difference between CUDA and OpenCL. I have, however, noticed that CUDA (at least by default) is more aggressive about picking faster but less accurate instructions for transcendental functions like inverse square root. You can ask for the same kind of stuff by passing extra OpenCL compiler flags, but it seems that CUDA optimises more aggressively by default (maybe analogous to the kind of stuff -ffast-math does).
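As a rough CPU-side analogue of that speed-for-accuracy trade, here is the classic fast inverse-square-root bit trick next to the exact computation. To be clear, this is not what CUDA emits (the GPU compilers choose between hardware instructions); it's just a sketch of the same shape of tradeoff that -ffast-math-style defaults make:

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x): the famous bit-level initial guess plus a
    single Newton-Raphson step. Fast, but only accurate to ~0.2%."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5F3759DF - (i >> 1)          # magic-constant initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)  # one Newton-Raphson refinement

def exact_inv_sqrt(x: float) -> float:
    return x ** -0.5
```

The approximation stays within a fraction of a percent of the exact result, which is fine for graphics but surprising for code that expects IEEE-accurate transcendentals by default.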
NVIDIA supports CL1.2 and it's intentionally slow so people will use CUDA.
Plain CL 2.0 shouldn't bring performance improvements unless the kernels used by Blender use things like work-group functions. Also, CL 2.0 will require people to specify whether a work-group is uniform; otherwise they must assume it's not uniform, and that's taxing in some cases.
SPIR-V on the other hand should be interesting to see.
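For context on the uniformity point: a work-group is "uniform" when the local size divides the global size evenly; OpenCL 2.0 added non-uniform work-groups where the trailing group may be smaller. The arithmetic below (plain Python, not an OpenCL API) shows why the non-uniform path is taxing -- the kernel can no longer treat the local size as a constant or skip bounds checks:

```python
def work_group_sizes(global_size: int, local_size: int) -> list:
    """Split an NDRange into work-group sizes, allowing a smaller final
    group the way OpenCL 2.0's non-uniform work-groups do."""
    return [min(local_size, global_size - start)
            for start in range(0, global_size, local_size)]

# Uniform case: every group has the same size, so get_local_size() is
# effectively constant and per-item bounds checks can be compiled away.
uniform = work_group_sizes(128, 32)   # four groups of 32

# Non-uniform case: the last group is ragged, so every work-item needs
# the general (slower) path unless the host promises uniformity.
ragged = work_group_sizes(100, 32)    # three groups of 32, one of 4
```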
Note that this is a wiki article, and that section in particular hasn't been updated in quite a while. Other parts of the article are talking about recent significant speedups in the OpenCL renderer so it makes me wonder how much they've closed the gap with CUDA on NVIDIA hardware.
I would test myself but don't have an NVIDIA card handy.
Blender's progress has been astounding. I remember a while back they announced the OpenCL implementation was being put on hold for an undetermined amount of time due to limitations with AMD cards. This really is an exciting announcement. It's great to see it on HN too.
Blender is probably one of the most successful OSS projects since Linux. Five years ago, you wouldn't be caught dead using it over Maya. Now it's vice versa.
I just wish they'd put some more effort into letting the renderer be used standalone --- I'm not particularly interested in the modeller, but I would really like a renderer with a C++ API that would allow me to play with procedural volumetric effects.
Technically you can use it standalone, but it's not a great experience. There are no binaries, it's painful to build, and the only way to interact with it is via an undocumented XML format; and I haven't found any way to do truly procedural volumes or geometry (i.e. via a density function callback specified by the user).
Right now I have an extremely patched version of Povray, but I'd love to switch to something faster.
The performance of OpenCL has generally been fine for me, particularly on AMD GPUs, but I have to say I think CUDA is a lot simpler to work with.
OpenCL is one of those things that I never felt fully comfortable working with, but I felt productive in CUDA after a week or two. Granted, I learned them in that order, so it's possible that CUDA got an unfair head-start, but I stand by my initial thesis.
I tried picking up OpenCL then CUDA as well and had a similar experience.
CUDA simply feels less hacked together to me. It's like working with a well-thought-out and documented code base (CUDA) vs. a library with little documentation and contradictory syntax and formats (OpenCL).
IMO that's why AMD is losing the "deep learning" battle, it's just not easy to develop using OpenCL. At least, not as easy as it should be.
I don't quite get this page; isn't it more accurate to say "AMD on par with NVIDIA"? It seems for AMD they use OpenCL, and for NVIDIA CUDA; but you can run OpenCL on NVIDIA too (1.2 only, apart from experimental, partial 2.0 support in the very latest drivers, but still).
I mean, there are numerical libraries that run 2x as fast on NVIDIA compared to their most optimized OpenCL implementations, because they use "GPU assembly" specific to NVIDIA cards; how does that fit the "OpenCL on par with CUDA" claim? It depends on what effort is spent optimizing for a certain platform, not on what API is used...
I'm working in OpenCL myself, but it's frustrating that I'll never get as much performance as I would when using OpenCL, even when I'm using GTX GPUs myself.
Blender is a really fun program to use, and there are tons of tutorials and information on YouTube etc.
I think education could make more use of 3D programs like this to help with algorithm visualization.
I made this video about a worker in tech with Blender.
The worker is slaving away at his terminal; he is writing code that creates the "feed" of apps/entertainment/media/etc. for the insatiable appetite of society (represented by the somewhat-similar-to-a-hungry-hippo character in the depths).
Who are these other two glowing beings? What do they represent? My friends have tried to guess some explanations, but I'll let each audience member decide for themselves.
Would it be a completely stupid idea to write a CUDA-based OpenCL back-end? I.e., an OpenCL-to-CUDA translator, so you can program your kernels in one single language but still get the benefit of the NVIDIA CUDA compiler?
Or are their machine models so different that that is an unreasonable thing to even try?
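The kernel languages themselves map fairly mechanically; the hard part is the runtime API and the CUDA-only extras. A toy token-level translation sketch in Python (the mapping table and names here are mine, purely illustrative -- a real translator needs a full compiler front-end, not regexes):

```python
import re

# Illustrative-only mapping of a few OpenCL kernel constructs to their
# CUDA spellings. This only demonstrates how close the two dialects are
# at the surface level; it is nowhere near a working translator.
TOKEN_MAP = [
    (r'\b__kernel\b', 'extern "C" __global__'),
    (r'\b__global\b\s*', ''),  # CUDA pointer args are global by default
    (r'\bget_global_id\(0\)', 'blockIdx.x * blockDim.x + threadIdx.x'),
    (r'\bbarrier\(CLK_LOCAL_MEM_FENCE\)', '__syncthreads()'),
    (r'\b__local\b', '__shared__'),
]

def opencl_to_cuda(src: str) -> str:
    """Apply the token substitutions in order to an OpenCL kernel string."""
    for pattern, repl in TOKEN_MAP:
        src = re.sub(pattern, repl, src)
    return src

kernel = "__kernel void add(__global float* a) { a[get_global_id(0)] += 1.0f; }"
```

Everything beyond this common core (textures, dynamic parallelism, the driver APIs) is where the lowest-common-denominator problem the sibling comment describes kicks in.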
It's likely better to just write/use libs which abstract the CUDA/OpenCL level away for generic GPU-type tasks. Unless you have really strict performance requirements or are writing a very specific piece of GPU code, in which case I can't imagine an OpenCL-to-CUDA translator would handle that case well either.
They both have very similar ideas at the core of the APIs; CUDA then adds a bunch of stuff on top for ease of use and performance, which in some cases is tightly coupled to the hardware. It would be a lowest-common-denominator situation for a lot of those CUDA features.
As the other poster said, you need Clover, and then you can use clinfo [1] to check you have everything installed and working.
I can't say for sure what the latest state of Blender is; somebody on Freenode #radeon mentioned a few months ago that Blender was failing to compile its OpenCL kernel, while before that somebody mentioned it as working, but quite a bit slower than with the proprietary driver. I suggest trying it yourself and reporting any bugs you encounter as blocking [2].
The 3D artists using GPU production rendering require at least four top-of-the-line cards for their workstations. My workstation crams five 980 Tis in the case (two off the board with PCIe risers).
Joining the community using this approach not only requires the hardware and the ability to build it, but also new rendering software and a lot of time to learn a new approach/mindset/workflow. The best software available is crucial: it is worth every penny to invest in the best rendering software when entering this environment. Right now there are three that matter, and none of the GPU-specific rendering engines support OpenCL. The one exception is V-Ray, the last-gen maverick of rendering engines. V-Ray's future in GPU rendering could be bright if the new companies don't entirely outpace them in GPU development. Either way, everyone actually using this solution in the real world is investing all of their time, money, and energy into Nvidia right now.
The devs at Redshift, my chosen renderer, insist OpenCL is not even close to having what they need.
Pseudo-realtime feedback could actually advance the craft to a new era and Nvidia is carrying the entire ecosystem.
Just out of curiosity (it's not my field), what are those 3 renderers that matter right now?
Somehow I read your comment as saying V-Ray is not among them, being last-gen. I remember that years ago there was a lot of buzz around Arnold (and it was justified to some extent AFAIR, at least judging by the opinions of the pleased 3D crowd), but maybe it's last-gen too now? Many years ago there was Brazil, but quick googling shows it's only for Rhino now? I hadn't heard about Redshift till now, though.
aidos | 9 years ago:
Edit: here's the post: https://news.ycombinator.com/item?id=13379597
robbies | 9 years ago:
While I don't think NVIDIA ever outright stated OpenCL was on the back burner, their support clearly waned, which devs noticed (https://streamcomputing.eu/blog/2012-09-10/nvidias-industry-...).
As for why...well, why should NVIDIA participate? They don't have anything to gain when they already have CUDA dominating the industry.
Ono-Sendai | 9 years ago:
See https://www.indigorenderer.com/benchmark-results for some benchmarks. All GPU results are using OpenCL.
shawn-butler | 9 years ago:
There is an open-source implementation to dig into: https://github.com/Xilinx/triSYCL
tmbsundar | 9 years ago:
I like FreeBSD a lot, but for this reason I had to switch back to Debian/Kubuntu.
jacquesm | 9 years ago:
Didn't you mean CUDA?
dharma1 | 9 years ago:
https://github.com/cuda-on-cl/cuda-on-cl
throwblender | 9 years ago:
https://vimeo.com/98728314
k_sze | 9 years ago:
I think the actual page that talks about Cycles rendering performance improvements in version 2.79 is this wiki page: https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.7...
But that page doesn't seem to mention any comparison between OpenCL and CUDA.
I think something is still missing to make a concrete link between "OpenCL on par with CUDA" and version 2.79 specifically.
vedranm | 9 years ago:
[1] https://github.com/Oblomov/clinfo
[2] https://bugs.freedesktop.org/show_bug.cgi?id=99553