jarpineh | 1 year ago
It wasn't mentioned in the article, but there's an older blog post on fly.io [1] about Livebook, GPUs, and their FLAME serverless pattern [2]. Since there seems to be some common ground between these companies, I'm now hoping Pythonx support is coming to the FLAME-enabled Erlang VM. I'm just going off the blog posts, and am probably using the wrong terminology here.
For Python's GIL problem mentioned in the article, I wonder if they have experimented with free threading [3].
[1] https://fly.io/blog/ai-gpu-clusters-from-your-laptop-liveboo...
[2] https://fly.io/blog/rethinking-serverless-with-flame/
[3] https://docs.python.org/3/howto/free-threading-python.html
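Not from the article, just an illustration: one way to check whether the running interpreter is a free-threaded (PEP 703) build is the `Py_GIL_DISABLED` build flag exposed through `sysconfig`:

```python
import sysconfig

# Py_GIL_DISABLED is 1 on free-threaded builds of CPython 3.13+,
# and 0 or None on conventional GIL builds.
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print("free-threaded build" if free_threaded else "GIL build")
```

On a free-threaded build, CPU-bound threads can actually run in parallel, which is exactly the case the article's GIL discussion is about.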
lawik | 1 year ago
Chris Grainger, who pushed for the value of Python in Livebook, has given at least two talks about the power and value of FLAME.
And of course Chris McCord (creator of Phoenix and FLAME) works at Fly and collaborates closely with Dashbit who do Livebook and all that.
These are some of the benefits of a cohesive ecosystem. Something I enjoy a lot in Elixir. All these efforts are aligned. There is nothing weird going on, no special work you need to do.
solid_fuel | 1 year ago
I'll add: FLAME is probably a great addition to Pythonx. While a NIF can crash the node it runs on, FLAME calls are executed on other nodes by default, so a crash would only hard-crash processes on that same remote node (FLAME lets you group calls, so a single FLAME node can have many executing on it at any time).
Errors bubble back up to the calling process (and crash it by default but can be handled explicitly), so managing and retrying failed calls is easy.
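Outside Elixir, the bubble-up-then-retry pattern described above can be sketched in Python. This is not FLAME's actual API; `run_remote` and `flaky` are hypothetical stand-ins for a remote invocation and a failure-prone workload:

```python
def run_remote(fn, *args, retries=2):
    """Call fn; re-raise to the caller after `retries` failed attempts.

    Mimics the FLAME-style behavior where an error on the remote node
    bubbles back up to the calling process unless handled explicitly.
    """
    for attempt in range(retries + 1):
        try:
            return fn(*args)
        except Exception:
            if attempt == retries:
                raise  # bubble the error up to the caller

calls = {"n": 0}

def flaky():
    # Hypothetical workload: fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("node crashed")
    return "ok"

print(run_remote(flaky))  # succeeds on the third attempt, prints "ok"
```

Because the failure surfaces as an ordinary exception in the caller, retry policy stays in one place instead of being scattered across the remote workers.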