pqn | 3 years ago
As to the tech, we have APIs closely resembling common deep learning frameworks, so once you add our Python/C++ client locally, a small code change lets you start using GPUs remotely. We can also handle arbitrary stateful CUDA code for more complex use cases. On the server side, you can deploy our work scheduler inside your own VPC, and we take over orchestration for you as well.
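A minimal sketch of what such a drop-in client might look like. The `RemoteDevice` class, `get_device` function, and the `"remote:gpu"` spec string are all invented for illustration; the actual product's API isn't shown in the comment, only the claim that switching from local to remote GPUs requires a small code change.

```python
# Hypothetical illustration of the "small code change" pattern described
# above. None of these names come from the actual product.

class RemoteDevice:
    """Stands in for a framework device handle, but targets a pooled GPU."""
    def __init__(self, spec: str):
        # A spec like "remote:gpu" instead of the usual "cuda:0".
        self.spec = spec

def get_device(spec: str) -> RemoteDevice:
    # A real client would presumably open a connection to the work
    # scheduler running inside the user's VPC; this stub just records
    # the spec to show where the one-line change happens.
    return RemoteDevice(spec)

# Before: device = get_device("cuda:0")      # local GPU
# After:  device = get_device("remote:gpu")  # remote, pooled GPU
device = get_device("remote:gpu")
print(device.spec)  # -> remote:gpu
```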
Our customers are currently confidential, but it's safe to say we've seen a 5-10x decrease in cloud costs (or, equivalently, the ability to fit 5-10x larger workloads within a given GPU quota). It really depends on the utilization of your current workload.
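The utilization dependence can be made concrete with a back-of-envelope calculation. The numbers below are illustrative assumptions, not figures from the comment: if dedicated GPUs sit mostly idle, pooling them behind a scheduler lets the same work bill far fewer GPU-hours.

```python
# Illustrative only: the savings multiple is roughly the ratio of
# achieved utilization after pooling to utilization before it.
current_utilization = 0.15  # assumed: dedicated GPUs busy 15% of the time
pooled_utilization = 0.75   # assumed: scheduler keeps shared GPUs 75% busy

cost_reduction = pooled_utilization / current_utilization
print(f"{cost_reduction:.0f}x")  # -> 5x fewer GPU-hours for the same work
```

With already well-utilized GPUs (say 60%+) the multiple shrinks toward 1x, which is consistent with the comment's caveat that results depend on the current workload.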