
zdyn5 | 2 years ago

Is software really that important on the inference side, assuming all the key ops are supported by the compiler? Once the model is quantized and frozen, deployment to alternative chips, while somewhat cumbersome, hasn't been too challenging, at least in my experience deploying to Qualcomm NPUs (trained on NVIDIA).


p1esk | 2 years ago

Let me put it this way: if there's even the slightest issue with my PyTorch code (training or inference) running on a non-NVIDIA chip, it will be an automatic no from me. More than that: if I simply suspect there will be any issues, I won't even try it, regardless of any promised speedups.

Whoever wants to sell me their chip had better do an amazing demo of flawless software integration.

tester756 | 2 years ago

What an approach.

It is very simple math: if the savings on hardware/compute are greater than the cost of the adjustments, then it is probably worth it.

So if you prefer to avoid spending, e.g., one month on adjusting and testing just to keep using hardware that is, e.g., 1.x times more expensive, then it is your loss in the long run.
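The break-even reasoning above can be sketched as a tiny calculation. Everything here is hypothetical: the function name and all the numbers are made up purely to make the trade-off concrete, not taken from the thread.

```python
def worth_switching(monthly_hw_cost: float,
                    price_factor: float,
                    adjustment_cost: float,
                    horizon_months: int) -> bool:
    """Return True if moving off the pricier hardware pays for itself.

    monthly_hw_cost: what you currently pay per month on the expensive chip.
    price_factor:    how many times more expensive the current hardware is
                     (the "1.x" in the comment), e.g. 1.5.
    adjustment_cost: one-off engineering cost of porting and testing.
    horizon_months:  how long you expect to keep running the workload.
    """
    # Monthly saving if the alternative costs monthly_hw_cost / price_factor.
    monthly_saving = monthly_hw_cost * (1 - 1 / price_factor)
    return monthly_saving * horizon_months > adjustment_cost


# Illustrative numbers: $50k/month compute, hardware 1.5x more expensive,
# one engineer-month (~$20k) of porting work, 12-month horizon.
print(worth_switching(50_000, 1.5, 20_000, 12))
```

With these made-up numbers the saving (~$200k over a year) dwarfs the porting cost, which is the commenter's point; with a short horizon or a small price gap the inequality flips, which is p1esk's.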