top | item 39622645

lyapunova | 2 years ago

To be honest, most researchers in applied ML in the Bay Area say the opposite: if you're trying to be nimble and prototype, use PyTorch. If you're trying to gain some optimizations as you near deployment, rewrite in JAX.
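The deployment-time optimization the parent alludes to is mostly JAX's `jit`, which traces a function once and compiles it with XLA. A minimal sketch (the toy `forward` layer and its shapes are illustrative, not from the thread):

```python
import jax
import jax.numpy as jnp

# Toy inference step: one dense layer with ReLU.
def forward(params, x):
    w, b = params
    return jax.nn.relu(x @ w + b)

# jax.jit traces forward once and compiles it with XLA;
# subsequent calls with the same shapes reuse the compiled code.
forward_jit = jax.jit(forward)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (4, 3))
b = jnp.zeros(3)
x = jnp.ones((2, 4))

out = forward_jit((w, b), x)
print(out.shape)
```

The trade-off is the one the thread describes: tracing imposes constraints (static shapes, pure functions) that make iteration slower than eager PyTorch, but the compiled artifact is fast and portable across XLA backends.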

pama | 2 years ago

Interesting perspective about possible JAX optimizations. Assuming these models are trained and deployed on non-TPU hardware, are there any real advantages to using JAX for deployment on GPU? I'd have assumed that inference is a largely solved optimization problem for large transformer-based models (with any low-hanging fruit from custom CUDA code already picked) and that the details are shifting toward infrastructure trade-offs and availability of efficient GPUs. But I may be out of the loop with the latest gossip. Or do you simply mean that there may exist cases where TPU inference makes sense financially and using JAX makes a difference?

axpy906 | 2 years ago

Interesting. I've never heard that. I could see that argument going both ways, as PyTorch has the larger ecosystem and shows up most often in published research.

plumeria | 2 years ago

Where does TensorFlow stand in this?

rockinghigh | 2 years ago

TensorFlow has been falling behind since they stopped caring about backward compatibility. PyTorch is the leading framework. JAX is getting some traction at Google and was used to train Gemini.

axpy906 | 2 years ago

Somewhere next to Theano, MXNet, or Caffe.