hacker_9 | 2 years ago
Firstly, it'll be a universal low-level software layer that can run on top of any cloud hardware, and which can then be developed against to enable maximal cloud reach when training models.
This sort of virtualisation might be considered slower than direct access, but they've also created a DSL on top of Python, which appears to let the compiler make smarter decisions about how to allocate memory and compute during training. So the two together presumably produce a speedup worthy of the hype.
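To make the idea concrete, here's a minimal sketch of how a Python-embedded DSL like that could work. This is purely illustrative, not their actual API: the trick such DSLs typically use is that decorated functions run on symbolic tensors, so instead of executing eagerly they record an op graph the compiler can analyse for things like operator fusion and memory planning.

```python
class Tensor:
    """Symbolic tensor: records the op that produced it, not a value."""
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

    def __add__(self, other):
        return Tensor("add", (self, other))

    def __mul__(self, other):
        return Tensor("mul", (self, other))

def trace(fn):
    """Call fn with symbolic inputs to capture its op graph."""
    def wrapper(*arg_names):
        args = [Tensor(f"input:{n}") for n in arg_names]
        return fn(*args)
    return wrapper

def ops(t):
    """Flatten the captured graph into post-order op names."""
    out = []
    for i in t.inputs:
        out.extend(ops(i))
    out.append(t.op)
    return out

@trace
def fused_multiply_add(a, b, c):
    # The compiler sees mul followed by add and could fuse them
    # into one kernel, avoiding a round trip through memory.
    return a * b + c

graph = fused_multiply_add("a", "b", "c")
print(ops(graph))  # ['input:a', 'input:b', 'mul', 'input:c', 'add']
```

With the whole graph in hand up front, the compiler can schedule compute and reuse buffers across ops, which is where the claimed speedup over naive eager execution would come from.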
Kudos to them if they deliver on their promise.