Considering that each kernel / kernel size is usually custom tuned on NVIDIA, I'd say no. I've worked in this field at several different companies, and there are likely thousands of hand-tuned variations of a simple GEMM kernel. Each one required an engineer to look at it specifically, even if they're all variations on a common theme.
As far as I know (and again, I work in the field of AI compilers), we're still a ways off from complete end-to-end generation of highly optimized kernels. If you want it to go fast, you need to write it by hand [1], and then test and validate.
Moreover, chip makers are constantly adding new features (Tensor Cores in NVIDIA for example), so the compiler is always playing catch up and at some point an engineer has to sit down (likely a team of them) and think 'what's the best way to exploit this hardware functionality for software performance?'. Then they have to test and validate that, and then either write a kernel, or attempt to put that know-how into a compiler.
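To give a flavor of what "exploiting the hardware" looks like at the source level, here's a rough sketch of the minimal Tensor Core path through CUDA's wmma API: one warp computing a single 16x16 fp16 tile on a Volta-or-newer part. This is illustrative only, not something you'd ship:

    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    // One warp computes a single 16x16x16 fp16 GEMM tile on Tensor Cores.
    __global__ void wmma_tile(const half *a, const half *b, float *c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::fill_fragment(acc, 0.0f);
        wmma::load_matrix_sync(a_frag, a, 16);      // leading dimension 16
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(acc, a_frag, b_frag, acc);   // acc = a*b + acc, on Tensor Cores
        wmma::store_matrix_sync(c, acc, 16, wmma::mem_row_major);
    }

None of that falls out of a compiler for free: someone had to pick the fragment shapes, layouts and epilogue, and the real library kernels layer tiling, pipelining and async copies on top of it before any of that know-how can be pushed down into a compiler.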
Multiply this by the number of kernels in a typical suite, and... yeah.
And that was my point about herculean effort on modern chips. Assembly language isn't just the old 'Add register 1 and 2 and dump in R3' anymore. It's 'Use this instruction to access memory in this way, so that it's in a compatible format for the next instruction' and 'oh yeah, make sure your memory synchronization primitives are such that the whole thing is coherent'. Good luck!
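The coherence half of that alone is easy to get wrong. A toy CUDA sketch (a hypothetical kernel, launched with 256 threads per block): each thread writes one slot of a shared tile and then reads a different thread's slot, which is only legal because of the barrier in between:

    __global__ void reverse_block(const float *in, float *out) {
        __shared__ float tile[256];
        int i = threadIdx.x;
        tile[i] = in[blockIdx.x * 256 + i];   // stage this thread's element
        __syncthreads();                      // barrier: without it, reading
                                              // tile[255 - i] races with the writes
        out[blockIdx.x * 256 + i] = tile[255 - i];
    }

And that's the friendly, high-level version; at the assembly level you're also choosing load/store widths, operand layouts and fence scopes by hand.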
Even going one step up into a higher-level language, you have to know how the kernel gets compiled to make it worthwhile. Again, it is trivial to write a correct OpenCL matrix multiply, but that's never going to be the highest performance. You have to know the hardware intimately. This is where having the software co-designed with the hardware is very important. Basically, every AI chipmaker of any importance does this, including startups like Groq and Cerebras.
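Concretely, the trivially-correct version is a few lines (sketched here in CUDA rather than OpenCL, but the two look nearly identical). It's correct for any sizes, and a tuned library kernel will leave it for dead because every load goes to global memory and nothing is tiled, vectorized or pipelined:

    // One thread per output element, straight out of a textbook.
    __global__ void naive_gemm(const float *A, const float *B, float *C,
                               int M, int N, int K) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < M && col < N) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += A[row * K + k] * B[k * N + col];
            C[row * N + col] = acc;
        }
    }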
[1] A lot of kernels share basic patterns, so it's not as hard as it sounds, but it definitely requires engineering effort to get the design right.
> Considering that each kernel / kernel size is usually custom tuned on NVIDIA, I'd say no. I've worked in this field at several different companies, and there are likely thousands of hand-tuned variations of a simple GEMM kernel. Each one required an engineer to look at it specifically, even if they're all variations on a common theme.
Lol that's absolutely not true. What you're describing is literally impossible for any company that has more than one product family on the market, since each product has different scratch sizes, numbers of vector registers, data types supported/emulated, etc.
Outside of trade show demos, kernels are codegened. What is true is there are recurring "themes/patterns" that are handled by engineers for a class of products. Lately this is flash attention...
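To make "codegened" concrete, the usual shape of it is: engineers write the theme once, with the hardware-dependent knobs left as parameters, and a generator or autotuner stamps out the variants per chip. A rough sketch of the idea (hypothetical names, and assuming the matrix dimensions divide the tile size):

    // The hand-written "theme": a shared-memory tiled GEMM with the tile size
    // as a parameter. Assumes M, N, K are multiples of TILE for brevity.
    template <typename T, int TILE>
    __global__ void gemm_tiled(const T *A, const T *B, T *C, int M, int N, int K) {
        __shared__ T As[TILE][TILE];
        __shared__ T Bs[TILE][TILE];
        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        T acc = 0;
        for (int t = 0; t < K; t += TILE) {
            As[threadIdx.y][threadIdx.x] = A[row * K + t + threadIdx.x];
            Bs[threadIdx.y][threadIdx.x] = B[(t + threadIdx.y) * N + col];
            __syncthreads();
            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();
        }
        C[row * N + col] = acc;
    }

    // The "codegen" part: a tool picks and instantiates the configs for each
    // target based on its scratch size, register budget, supported dtypes, etc.
    template __global__ void gemm_tiled<float, 16>(const float*, const float*, float*, int, int, int);
    template __global__ void gemm_tiled<float, 32>(const float*, const float*, float*, int, int, int);

The per-product differences live in the instantiation step, not in thousands of hand-written source files.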
> Again, it is trivial to write a correct OpenCL matrix multiply, but that's never going to be the highest performance.
I guess you work at AMD. The reason AMD ships a whole bunch of binary kernels is not because someone tuned/designed each one but because AMD doesn't have a PTX/SASS equivalent. So each kernel has to be compiled at build time for each device (it's also why they can't have LTS support for architectures).
1000 engineers don’t automatically crank out 50x more code than 20 engineers. But GP is just saying there are a lot of subcomponents involved that each need major engineering effort dedicated to them.
I see it less as an engineering problem and more as a market problem. AMD stuff has existed, it’s the market that doesn’t see a point in it, and at this point, even feature parity or CUDA compatibility for that matter won’t make a huge dent. People will just keep using what they know and are recommended.
It’s more amazing to me that NVDA is so intensely inflated by this LLM hype wave. I find it genuinely scary to think about what’s going to happen when 95+% of AI slopware startups fold. Nvidia won’t be the only company financially impacted. Our entire economy runs on fads.