top | item 39670837

yukIttEft | 2 years ago

I'm wondering how AI scientists work these days. Do they really hack CUDA kernels, or do they plug models together with high-level toolkits like PyTorch?

Assuming it's the latter, and given that PyTorch takes care of providing optimized backends for various hardware, how big of a moat is CUDA then, really?

david-gpu | 2 years ago

PyTorch relies heavily on the extensive libraries of high-performance kernels provided by NVidia, such as cuDNN.

In other words, it goes something like this:

    Application
    PyTorch (and similar)
    cuDNN (and similar)
    CUDA (and similar)
    NVidia GPU
My opinion, based on what I saw those wizards do, is that reproducing the feature set and efficiency of cuDNN/cuBLAS is deeply nontrivial.
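The layering above can be sketched in plain Python. Every class and method name here is hypothetical, invented purely for illustration; the point is only that the high-level framework dispatches the actual math down to a vendor kernel library, which is where the hard optimization work lives:

```python
# Hypothetical sketch of the stack described above. None of these names
# are real PyTorch or cuDNN APIs.

class FakeCudnn:
    """Stands in for a vendor kernel library (cuDNN/cuBLAS layer)."""

    def matmul(self, a, b):
        # In reality this would launch a heavily hand-tuned CUDA kernel;
        # here it's a naive triple loop over Python lists.
        rows, inner, cols = len(a), len(b), len(b[0])
        out = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for k in range(inner):
                for j in range(cols):
                    out[i][j] += a[i][k] * b[k][j]
        return out


class FakeFramework:
    """Stands in for PyTorch: a high-level API over a pluggable backend."""

    def __init__(self, backend):
        self.backend = backend

    def matmul(self, a, b):
        # The framework layer adds bookkeeping (autograd, device
        # placement, ...) but delegates the actual math to the backend.
        return self.backend.matmul(a, b)


# Application layer: the user never touches the kernels directly.
fw = FakeFramework(FakeCudnn())
print(fw.matmul([[1.0, 2.0]], [[3.0], [4.0]]))  # [[11.0]]
```

Swapping `FakeCudnn` for another backend is exactly the seam where "CUDA alternatives" would have to plug in, and matching the vendor kernels' performance at that layer is the hard part.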