(no title)
yukIttEft | 2 years ago
Considering it's the latter, and given that PyTorch takes care of providing optimized backends for various hardware, how big of a moat is CUDA then, really?
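To make the "optimized backends" point concrete, here is a minimal sketch of the general pattern a framework uses to stay device-agnostic: a registry of kernels keyed by backend, with a portable fallback. This is purely illustrative and not PyTorch's actual dispatcher; all names here (`register_kernel`, `matmul_cpu`, etc.) are made up for the example.

```python
# Illustrative sketch only -- NOT PyTorch's real dispatcher. It mimics the
# idea that framework code picks an optimized kernel per device/backend
# while user code stays device-agnostic.
from typing import Callable, Dict

# Hypothetical registry mapping a device string to a matmul kernel.
_KERNELS: Dict[str, Callable] = {}

def register_kernel(device: str):
    """Decorator registering an implementation for a device backend."""
    def wrap(fn: Callable) -> Callable:
        _KERNELS[device] = fn
        return fn
    return wrap

@register_kernel("cpu")
def matmul_cpu(a, b):
    # Naive reference implementation used as the portable fallback.
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

@register_kernel("cuda")
def matmul_cuda(a, b):
    # Stand-in for a cuBLAS-backed kernel; here it just defers to the CPU path.
    return matmul_cpu(a, b)

def matmul(a, b, device: str = "cpu"):
    # Framework-style dispatch: callers never name the kernel directly.
    kernel = _KERNELS.get(device, _KERNELS["cpu"])
    return kernel(a, b)
```

The moat question then becomes: the dispatch plumbing above is easy; what lives *behind* the `"cuda"` entry is not.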
david-gpu | 2 years ago
My opinion, based on what I saw those wizards do, is that reproducing the feature set and efficiency of cuDNN/cuBLAS is deeply nontrivial.
In other words, it goes something like this:
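One way to see why this is nontrivial: writing a *correct* GEMM is a weekend exercise; matching cuBLAS performance is not. Below is a sketch (my own illustration, not from the thread) of a cache-blocked matmul, which is only the first of many optimizations real BLAS kernels layer on.

```python
# Sketch, not production code: a cache-blocked matmul in pure Python.
# Getting the answer right is the easy part; cuBLAS-level performance
# also needs vectorization, tensor cores, software pipelining, and
# autotuned tile sizes -- which is where the engineering moat lives.

def matmul_blocked(a, b, tile=2):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    # Iterate over tiles so each inner block of a, b, and c stays hot
    # in cache instead of streaming the whole matrices every pass.
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        s = c[i][j]
                        for p in range(p0, min(p0 + tile, k)):
                            s += a[i][p] * b[p][j]
                        c[i][j] = s
    return c
```

Each further step (picking tile sizes per architecture, using wider loads, overlapping memory traffic with compute) is a separate tuning problem per GPU generation, which is what makes replicating the whole library so hard.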