noxa | 4 years ago
RE MoltenVK: we were surprised how robust it is nowadays - we're definitely going to build out our own Metal backend for our abstraction layer, but we wanted to see what we could hit with the zero-code option, and it's proven very useful for that! A MoltenVK build is on the order of 12MB (last I looked) while the entire IREE runtime is 50-100KB, so it's a hard pill to swallow, all other issues (security/memory consumption/startup time/etc) notwithstanding :)
AFAIK the memoryless storage is only for textures and mostly useful in render passes - Vulkan has this via VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT. In a way, what we do in our compiler is fuse dispatches such that they never hit memory at all, so it hasn't been something we've needed yet. For heavy compute workloads more control over virtual memory would be useful, though, à la https://developer.nvidia.com/blog/introducing-low-level-gpu-... - some of the biggest performance hurdles in highly dynamic ML come down to memory management (being able to zero-copy resize buffers when data-dependent shapes change would be killer).
// IREE dev
raphlinus | 4 years ago
Amen on more control over dynamic memory access patterns. It's something I'm struggling with too, and I have a feeling that whatever solution I come up with is going to be a compromise.
Keep up the good work, these are exciting times!
noxa | 4 years ago
Our goal (though still WIP) is to have the interaction between user applications and our compiled code happen at the command buffer boundary - you would submit some work, pass in a VkSemaphore/MTLSharedEvent/cuEvent/futex/etc, we would use that when submitting our own work, and then we'd pass you back a VkSemaphore/etc you can continue chaining with. So one level of granularity coarser than mid-pass interleaving but still hopefully all pipelined properly with no host/device synchronization required. There will be programs that this doesn't work well with (heavily data-dependent stuff) but at least making it work turns it into an optimization problem vs today's representation problem!