top | item 26840200

Joky | 4 years ago

Python isn't really driving the compute-intensive part of ML: whether it's JAX, PyTorch, or TensorFlow, the heavy code is mostly native. Convolutions are implemented by hand in highly optimized libraries (Intel MKL-DNN, Nvidia cuDNN), and the Python glue is really just a light "dispatcher".
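A minimal sketch of that dispatcher pattern, using NumPy as a stand-in (its matmul hands the work to a compiled BLAS, much like the frameworks hand convolutions to cuDNN or MKL-DNN):

```python
import numpy as np

# The Python call below is only a thin dispatcher: np.matmul forwards
# the actual arithmetic to a compiled BLAS routine (e.g. OpenBLAS or
# MKL). Python never loops over the matrix elements itself.
a = np.ones((256, 256), dtype=np.float32)
b = np.ones((256, 256), dtype=np.float32)
c = np.matmul(a, b)  # compute happens in native code, not Python
```

The same shape applies to a framework's `conv2d`: the Python function does little more than argument checking before calling into the native kernel.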

A lot of it is also asynchronous for performance: the Python code just enqueues more work onto a queue, which some native C++ code processes. For TensorFlow, the Python code traces an entire computation graph that is stored as a protobuf and then executed by a native C++ stack, potentially remotely/distributed. Serving ML with TensorFlow does not involve any Python code in many scenarios.
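A toy analogue of that enqueue-and-run-elsewhere pattern (not the actual TensorFlow internals): the "Python side" only pushes work descriptions onto a queue, and a worker thread standing in for the C++ runtime executes them later.

```python
import queue
import threading

work_q = queue.Queue()
results = []

def native_runtime():
    # Stand-in for the native executor: drains the queue and runs each op.
    while True:
        op = work_q.get()
        if op is None:        # shutdown sentinel
            break
        results.append(op())  # "execute the kernel"

worker = threading.Thread(target=native_runtime)
worker.start()

# The Python caller returns immediately after enqueueing;
# execution is deferred to the worker.
work_q.put(lambda: 2 + 2)
work_q.put(lambda: 3 * 3)
work_q.put(None)
worker.join()
```

In the real frameworks the queue and executor live in C++, so the GIL and Python's interpreter overhead stay off the hot path.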

Python is still quite useful for scientists to quickly glue everything together, to describe their datasets, or, when they collect results, to produce graphs and other data analyses.
