Mernit | 1 year ago
A more general issue is that the workloads that tend to run on GPUs are much bigger than a standard Lambda-sized workload (think a 20 Gi image with a smorgasbord of ML libraries). I've spent time working around this problem and wrote a bit about it here: https://www.beam.cloud/blog/serverless-platform-guide
akdev1l | 1 year ago
You can do this with SR-IOV-enabled hardware.
https://docs.nvidia.com/networking/display/mlnxofedv581011/s...
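For context on what SR-IOV setup looks like in practice, here is a minimal sysfs configuration sketch. The PCI address is hypothetical (substitute your own device), and the NVIDIA docs linked above cover the Mellanox-specific details; this only shows the generic Linux kernel interface.

```shell
# Assumes an SR-IOV-capable NIC at the hypothetical PCI address
# 0000:3b:00.0 (replace with your own, found via lspci).

# Check how many virtual functions (VFs) the device supports:
cat /sys/bus/pci/devices/0000:3b:00.0/sriov_totalvfs

# Expose 4 VFs; each appears as its own PCI device that can be
# passed through to a VM or container:
echo 4 > /sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs

# Confirm the VFs are now visible:
lspci | grep -i "Virtual Function"
```

Requires root and kernel/firmware support for SR-IOV; on Mellanox hardware the VF count must also be enabled in the NIC firmware.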