noxa | 4 years ago

RE cooperative matrix: Apple has functional units (AMX/ANE) that could hopefully be exposed in MSL via something shaped like cooperative matrix, and I'm pretty sure it'd be fantastic. Everything today is locked behind CoreML and Accelerate, and those are poor targets for modern compiler-based approaches :( On the Vulkan side there have been rumblings of a vendor-agnostic extension for cooperative matrix, with support from major vendors - at which point I'm hoping that leads to Apple wanting to show off their own HW features.
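For anyone unfamiliar, the core of a cooperative matrix op (e.g. coopMatMulAdd in the GLSL extensions) is that all threads of a subgroup jointly compute a small tile-level D = A*B + C in one instruction. A plain-Python sketch of just that semantics - illustration only, no Vulkan/Metal API involved, and the real thing operates on fixed tile shapes like 16x16x16 on the GPU:

```python
def coop_mat_mul_add(A, B, C):
    """D = A * B + C on a small tile: the semantics a whole subgroup
    computes cooperatively in a single cooperative-matrix instruction."""
    m, k = len(A), len(A[0])
    n = len(B[0])
    # Start from the accumulator C, then add the A*B product.
    D = [[C[i][j] for j in range(n)] for i in range(m)]
    for i in range(m):
        for j in range(n):
            for p in range(k):
                D[i][j] += A[i][p] * B[p][j]
    return D

# Tiny 2x2 check of the semantics
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [0, 1]]
print(coop_mat_mul_add(A, B, C))  # [[20, 22], [43, 51]]
```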

RE MoltenVK: we were surprised how robust it is nowadays - we're definitely going to build out our own Metal backend for our abstraction layer, but we wanted to see what we could hit with the zero-code option, and it's proven very useful for that! A MoltenVK build is on the order of ~12MB (last I looked) while the entire IREE runtime is 50-100KB, so it's a hard pill to swallow, all other issues (security/memory consumption/startup time/etc.) notwithstanding :)

AFAIK the memoryless storage is only for textures and mostly useful in render passes - Vulkan has this via VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT. In a way, what we do in our compiler is fuse dispatches such that intermediates never hit memory at all, so it hasn't been something we've needed yet. For heavy compute workloads more control over virtual memory would be useful, though, à la https://developer.nvidia.com/blog/introducing-low-level-gpu-... - some of the biggest performance hurdles in highly dynamic ML are around memory management (being able to zero-copy resize buffers when data-dependent shapes change would be killer).
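To make the fusion point concrete, here's a plain-Python stand-in for two GPU dispatches (not IREE's actual implementation): unfused, each dispatch round-trips through a full intermediate buffer; fused, the intermediate value never touches memory at all:

```python
def unfused(xs):
    tmp = [x * 2 for x in xs]      # dispatch 1 writes an intermediate buffer
    return [t + 1 for t in tmp]    # dispatch 2 reads it back from memory

def fused(xs):
    # One dispatch: the intermediate stays "in registers" per element.
    return [x * 2 + 1 for x in xs]

data = [1, 2, 3]
assert unfused(data) == fused(data) == [3, 5, 7]
```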

// IREE dev

discuss

order

raphlinus | 4 years ago

Great, thanks. That answers my questions. I'll read up on the lazily-allocated bit; I wasn't aware that it provided functionality similar to dispatchThreadsPerTile [1], but perhaps I'm misunderstanding something. I'm excited about that as a way to stitch 2D graphics rendering operations together without having to hit main memory, though from your explanation I can see that functionality might not be very useful for AI workloads.

Amen on more control over dynamic memory access patterns. It's something I'm struggling with too, and I have a feeling that whatever solution I come up with is going to be a compromise.

Keep up the good work, these are exciting times!

[1]: https://developer.apple.com/documentation/metal/mtlrendercom...

noxa | 4 years ago

Oh nice, I hadn't seen dispatchThreadsPerTile! I don't believe that's possible today in Vulkan :(

Our goal (though still WIP) is to have the interaction between user applications and our compiled code happen at the command buffer boundary: you would submit some work, pass in a VkSemaphore/MTLSharedEvent/cuEvent/futex/etc., we would use that when submitting our own work, and then we'd pass you back a VkSemaphore/etc. that you can continue chaining with. That's one level of granularity coarser than mid-pass interleaving, but hopefully everything still pipelines properly with no host/device synchronization required. There will be programs this doesn't work well with (heavily data-dependent stuff), but at least making it work turns it into an optimization problem vs. today's representation problem!
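The handoff protocol is simple to sketch on the host side. Here's a minimal Python analogue with threading.Event standing in for VkSemaphore/MTLSharedEvent (names and shape are illustrative, not the actual IREE API): the runtime waits on the caller's event, runs its work, and signals a new event the caller can keep chaining on - no blocking host-side synchronization in the app's submission path.

```python
import threading

def submit_runtime_work(wait_event, work):
    """Chain runtime work after the app's event; return an event to chain on."""
    signal_event = threading.Event()
    def run():
        wait_event.wait()   # wait for the app's preceding work to "complete"
        work()              # the runtime's own submissions would go here
        signal_event.set()  # the app chains its next submission on this
    threading.Thread(target=run).start()
    return signal_event

# Usage: the app signals when its work is done, then chains on ours.
results = []
app_done = threading.Event()
runtime_done = submit_runtime_work(app_done, lambda: results.append("ran"))
app_done.set()
runtime_done.wait(timeout=5)
print(results)  # ['ran']
```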