el_oni | 1 year ago
But Python doesn't scale vertically very well. A language like Elixir can grow to fit the size of the box it's on, making use of all the cores, and then scale horizontally as well without much additional ceremony, using distributed Elixir/Erlang.
Elixir getting a good story around web dev (Phoenix and LiveView), and more recently around ML, is going to increase its adoption, but it's not going to happen overnight. Maybe tomorrow's CTOs will take the leap.
greenavocado | 1 year ago
Strong disagree; this is mostly a skill issue. I have written C++ modules for Python with pybind11 to speed up some code significantly, but ended up reverting to pure Python once I learned how to move memory around efficiently. Numpy is very good at what it does, and if you really have custom code that needs to go faster, you can run it externally through something like pybind11. If you are writing ultra-low-latency code, then you're right.

You can make Python really fast if you are hyper-aware of how memory is managed; I recommend tracemalloc. For instance, instead of pickling numpy arrays to send them to child processes, you can use shared memory and mutexes to define a common buffer that can be represented as a numpy array and shared between parent and child processes. Massive performance win right there, and most people never realize Python is capable of such things.
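A minimal sketch of that shared-memory trick, using only the stdlib's multiprocessing.shared_memory (Python 3.8+) and numpy; the worker function, the array size, and the doubling step are just illustrative:

    import numpy as np
    from multiprocessing import Process, Lock
    from multiprocessing.shared_memory import SharedMemory

    def worker(shm_name, shape, dtype, lock):
        # Attach to the existing block: no pickling, no copy of the data.
        shm = SharedMemory(name=shm_name)
        arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
        with lock:        # mutex guarding the shared buffer
            arr *= 2      # mutate in place; the parent sees the change
        shm.close()

    if __name__ == "__main__":
        data = np.arange(1_000_000, dtype=np.float64)
        shm = SharedMemory(create=True, size=data.nbytes)
        shared = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
        shared[:] = data  # copy in once; children then view the same memory

        lock = Lock()
        p = Process(target=worker,
                    args=(shm.name, data.shape, data.dtype, lock))
        p.start()
        p.join()

        print(shared[:5])  # [0. 2. 4. 6. 8.] -- the child's writes are visible
        shm.close()
        shm.unlink()       # release the block once everyone is done

The point of the design is that the array crosses the process boundary exactly once (the initial copy into the shared block); after that, parent and children operate on views of the same memory instead of serializing the whole array for every send.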
el_oni | 1 year ago
Sure. Writing C++ that utilises your resources effectively, then writing bindings so you can use it from Python, is great. But with Elixir, if I've got 8 cores and 8 processes, those 8 processes just run in parallel.
If I want raw CPU speed I can write something in Rust, C, C++, or Zig and then call it while still using Elixir semantics.
Not to mention that with Nx you can write Elixir code that compiles to run on the GPU, without writing any bindings.
h0l0cube | 1 year ago
I think the point with Elixir and Nx is that any difficulty in parallel performance is abstracted away.