top | item 39593503

Tistel | 2 years ago

People should check out Google's JAX. Work in a high-level language and run anywhere. Nvidia should just be commodity hardware if people avoid vendor lock-in.
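A minimal sketch of that portability argument (assuming jax is installed): the same Python function is traced once and XLA compiles it for whichever backend the machine has, with no device-specific code.

```python
import jax
import jax.numpy as jnp

# The same high-level code runs unchanged on CPU, GPU, or TPU:
# jax.jit traces the Python function and XLA compiles it for
# whichever backend is present.
@jax.jit
def predict(w, x):
    return jnp.tanh(x @ w)

x = jnp.ones((4, 3))
w = jnp.zeros((3, 2))
out = predict(w, x)
print(jax.devices())   # whichever backend JAX found, e.g. [CpuDevice(id=0)]
print(out.shape)       # (4, 2)
```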

nerpderp82 | 2 years ago

Shimming CUDA is a waste of effort that only reinforces Nvidia's market dominance. Targeting higher-level interfaces (Jax, Taichi, ArrayFire, etc.) is imho a better strategy. We have already seen systems like llama.cpp and their ilk support alternative backends for training and inference.
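For instance, llama.cpp selects its backend with a CMake switch while the model code above the ggml layer stays identical; flag names here are from recent versions and have changed over time, so treat them as illustrative.

```shell
# Build llama.cpp against different backends; everything above the
# ggml layer is the same in each case. (Flag names vary by version.)
cmake -B build -DGGML_CUDA=ON    # Nvidia CUDA
cmake -B build -DGGML_HIP=ON     # AMD ROCm
cmake -B build -DGGML_VULKAN=ON  # vendor-neutral Vulkan
cmake -B build -DGGML_METAL=ON   # Apple Metal
cmake --build build
```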

Now that the vast majority of compute cycles are centered on a handful of model architectures, implementing those specific architectures on whatever bespoke hardware isn't difficult.
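To illustrate how small that surface is, here is a hedged sketch of scaled dot-product attention, the core op those architectures share: a backend that handles matmul, softmax, and a few elementwise ops covers most of it.

```python
import jax
import jax.numpy as jnp

def attention(q, k, v):
    # Scaled dot-product attention: essentially two matmuls and a
    # softmax. Hardware that runs these primitives fast covers the
    # bulk of transformer workloads.
    scores = (q @ k.T) / jnp.sqrt(jnp.float32(q.shape[-1]))
    weights = jax.nn.softmax(scores, axis=-1)
    return weights @ v

q = k = v = jnp.ones((8, 16))
out = attention(q, k, v)
print(out.shape)  # (8, 16)
```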

Target specific applications, not the whole complex library/language layer.

fisf | 2 years ago

That's fine and dandy until you realize that Jax only has a limited number of backends; e.g. ROCm support is still experimental.

Somebody has to build those optimized backends -- it's not just a matter of people picking the wrong stack.

nerpderp82 | 2 years ago

I just looked at Jax and XLA; it is odd to me that they aren't targeting SPIR-V directly.
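For context on what the stack does emit (assuming a recent jax where the ahead-of-time `lower()` API is available): a jitted function lowers to StableHLO (HLO text in older versions), and XLA's own backends then compile that down, e.g. via LLVM to PTX on Nvidia, which is roughly where a SPIR-V target would have to slot in.

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * 2.0

# Inspect the IR JAX hands to the compiler: StableHLO, not SPIR-V.
# XLA backends lower this further to vendor-specific code themselves.
ir = jax.jit(f).lower(jnp.ones(3)).as_text()
print(ir.splitlines()[0])  # the module header of the lowered program
```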