top | item 30049270

kanaffa12345 | 4 years ago

>missing the point

facts

1. this is a thread about cpython. jax is as relevant to users of cpython as CUDA or OpenCL or whatever. jax can do absolutely nothing for e.g. django.

2. for all intents and purposes all numerical code always runs in a lower-level implementation (C++, CUDA, XLA, whatever). so from that perspective, jax is just a convenient way to get from numerical python (i.e., loops and muls and adds) to kernels.

discuss
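point 2 above can be sketched concretely. a minimal illustration, assuming jax is installed (the function and names here are made up for the example):

```python
import jax
import jax.numpy as jnp

# plain "numerical python": just muls and adds over arrays
def affine(x, w, b):
    return x * w + b

# jit traces the python function and hands the trace to XLA,
# which produces the actual kernel that does the work
fast_affine = jax.jit(affine)

x = jnp.arange(4.0)            # [0., 1., 2., 3.]
out = fast_affine(x, 2.0, 1.0)  # runs the XLA-compiled kernel
```

the python code is only a description of the computation; XLA is what executes it.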

erwincoumans | 4 years ago

I didn't claim Jax can accelerate Django; it all depends. A lot of our Python code is/was running partly in cpython and partly in extension modules such as Numpy.

There are many ways to achieve faster Python execution. One is a faster cpython implementation; another is moving cpu-intensive parts of the code to extension modules (such as Numpy); yet another is to jit compile Python (and Numpy) code to run on accelerators.
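the extension-module route mentioned here is the classic numpy pattern. a small sketch (function names are invented for the example):

```python
import numpy as np

# cpu-intensive work in pure cpython: every multiply and add
# goes through the bytecode interpreter
def sum_squares_py(xs):
    total = 0.0
    for x in xs:
        total += x * x
    return total

# the same work pushed into numpy's C extension module:
# one call, the loop runs in compiled code
def sum_squares_np(xs):
    xs = np.asarray(xs, dtype=np.float64)
    return float(np.dot(xs, xs))
```

both compute the same thing; only where the loop runs differs.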

kanaffa12345 | 4 years ago

>jit compile Python

given who you are (googling your name) i'm surprised that you would say this. jax does not jit compile python in any sense of the word `Python`. jax is a tracing mechanism for a very particular set of "programs" specified using python; i put programs in quotes because it's not like you could even use it to trace through `if __name__ == "__main__"`, since it doesn't know (or care) anything about python namespaces. it's right there in the first sentence of the description:

>JAX is Autograd and XLA

autograd for tracing and building the tape (wengert list) and xla for the backend (i.e., actual kernels). there is no sense in which jax will ever play a role in something like faster hash tables or more efficient loads/stores or virtual function calls.
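the tape being described is directly inspectable. a small sketch, assuming jax is installed (`f` is an invented example function):

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x

# the trace jax records is visible as a jaxpr -- effectively the
# wengert list that the autograd half builds
jaxpr = jax.make_jaxpr(f)(1.0)

# grad differentiates by walking that recorded tape;
# the XLA half supplies the kernels that actually execute it
df = jax.grad(f)
```

printing `jaxpr` shows a flat list of primitive ops (`sin`, `mul`), not python: that list is all jax ever sees of your program.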

in fact it doesn't even jit in the conventional sense of jit, since no machine code gets generated anew based on code paths taken at runtime (it simply picks different kernels and such that have already been compiled). not that i fault you for this substitution, since everyone in ML does this (pytorch claims to jit as well).
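one way to observe the trace-once-then-reuse behavior being described: ordinary python side effects inside a jitted function fire only while tracing, not on every call. a sketch, assuming jax is installed (the counter is an invented device for the example):

```python
import jax
import jax.numpy as jnp

trace_count = {"n": 0}

@jax.jit
def f(x):
    # plain python runs at trace time only, not per call
    trace_count["n"] += 1
    return jnp.sin(x) * 2.0

f(jnp.float32(1.0))  # first call with this shape/dtype: traced
f(jnp.float32(2.0))  # same signature: cached executable reused, no re-trace
```

the python function body runs once per input signature; after that, only the compiled artifact runs.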