I've also been doing the fastai course, where you learn Gradio and pytorch.
Python has such a messy library story. I'm not a Python developer, and coming into this ecosystem and trying to make things work with pip, conda, Docker, etc. is a mess.
I like Gradio, and built a few small apps, but it is still messy compared to Bumblebee.
Livebook + Bumblebee is magical. I'm productive in an instant, and the opportunity to build with Elixir and Phoenix makes this so exciting.
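To give a concrete sense of that "productive in an instant" claim, here is a rough sketch of a typical Livebook + Bumblebee flow. The model repo and prompt are just examples, and option names can vary between Bumblebee versions:

```elixir
# Assumes {:bumblebee, ...} and {:exla, ...} are in the notebook's deps.
{:ok, model_info} = Bumblebee.load_model({:hf, "gpt2"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "gpt2"})

# Wrap the model and tokenizer into a serving, then run it on a prompt.
serving = Bumblebee.Text.generation(model_info, tokenizer, max_new_tokens: 20)
Nx.Serving.run(serving, "Elixir and machine learning")
```

That is essentially the whole program: no environment wrangling, and the same serving can later be dropped into a Phoenix app.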
I agree on Python libraries. There are probably contextual/philosophical reasons why the dependency management works the way it does, but I also don't use it a ton so it's not obvious to me. RubyGems with Bundler and Node with npm/Yarn have pretty much just worked for me. pip and pipenv just feel clunkier somehow, and there seems to be more tweaking to get things working.
Hi everyone, glad to be back here with the official announcement!
It is late here but feel free to drop questions and I will answer them when I am up. Meanwhile I hope you will enjoy the content we put out: the announcements, example apps, and sample notebooks!
Congrats on the launch, this is fantastic. I love seeing these releases. They bring me so much joy & excitement.
Can you provide some thoughts on the benefits of doing ML on Elixir vs. Python? Is the benefit the language semantics? Is it much easier to get distributed work in Elixir ML vs Python ML? Are the tools better/smoother? Are there ML ops improvements? Perhaps there’s a blog post I missed :)
What's the motivation for the name? I get "numbat" to "numerical", but not immediately seeing any connection for bumblebee.
Maybe a more serious question: is anyone using Elixir ML in production? I'm absolutely gobsmacked at the quantity and quality of development effort that's gone into it (and use Livebook daily, though not for ML stuff). It's clearly a major focus for the team. I'm wondering if it's ready for production adoption, and if so, if anyone has used it "in anger" yet.
If I have a Hugging Face model that I've fine-tuned, can I load it using Bumblebee? I fine-tuned ConvNeXt and changed it into a multi-label classifier and saved it as a PyTorch model. It works great, but being able to use it in Livebook instead of a Jupyter notebook would be fantastic.
I think I'd have to convert the format, but what then?
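If the architecture is one Bumblebee supports (ConvNeXt is), no manual format conversion should be needed: export the checkpoint with `save_pretrained()` in Python so you get a directory containing `config.json` and `pytorch_model.bin`, then point Bumblebee at it with a `{:local, path}` repository. A sketch, with a hypothetical path:

```elixir
# Directory produced by save_pretrained(); Bumblebee reads the PyTorch
# params file directly.
repo = {:local, "/models/convnext-multilabel"}

{:ok, model_info} = Bumblebee.load_model(repo)
{:ok, featurizer} = Bumblebee.load_featurizer(repo)

serving = Bumblebee.Vision.image_classification(model_info, featurizer)
```

One caveat for the multi-label case: the image-classification serving ranks classes as if they were mutually exclusive, so for independent per-label scores you may need to call the model directly (e.g. via `Axon.predict/4`) and apply a sigmoid to the logits yourself.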
Alternatively to EXLA/Torchx, any thoughts on supporting an ML compiler frontend like Google IREE/MLIR by generating StableHLO or LinAlg? This could pave the way towards supporting multiple hardware targets (Vulkan-based, RVV-based, SME-based etc.) with minimal effort from the framework.
I’ve been following LiveView and Elixir (and Livebook as a result) and the people working on it are incredible! Inspired me to start learning Elixir on the side a few weeks ago after admiring from afar.
Elixir is quirky and the functional aspects especially take getting used to (e.g. the learning guides say you should rarely write for loops), but the runtime and some of these amazing tools really show the potential. I don’t know if I’ll ever use it professionally (I hope so, honestly), but I think I’ll learn something that’ll make me better regardless.
LiveView has such a unique and refreshing take on web dev (I say having never written any significant amount of HTML) that seems to make it behave much more like a local application. And Livebook builds on that to be a next-gen alternative to Jupyter notebooks. I can totally imagine Livebook expanding into a more general-purpose interactive playground type tool: good for internal tools, dashboards, and development experiments… almost a web UI alternative to a REPL. I feel like all that’s missing is being able to inject/attach to an existing VM.
That's basically how my team have been increasingly using it. Simply connect Livebook to a locally running Phoenix project and you have a Livebook REPL into your server. When you're dealing with complex data, pulling from different sources and have to build up a bunch of context before you iterate on a function it's super useful to be able to break up that code into chunks, take form inputs[0] along the way and document any quirks. We keep a bunch of livebooks committed in the repo to help debug and iterate on the more complex parts of our codebase.
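The "form inputs along the way" pattern described above looks roughly like this; the app module and function are hypothetical names standing in for code on the connected Phoenix node:

```elixir
# Cell 1: render an input widget in the notebook.
user_id = Kino.Input.text("User ID")

# Cell 2 (run after filling in the form above): read the value and use it
# against the connected application's code.
id = Kino.Input.read(user_id)
MyApp.Accounts.get_user!(id)
```

Because the notebook runtime is attached to the running node, the second cell can call any public function in the app, with the form value threaded in.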
This is pretty amazing. Are the models more intelligible in Elixir than they are in Python? I was under the impression that there are lots of separate pieces that are all configurable on the Python side, but that organisation seemed quite messy (when I looked at the Stable Diffusion code, anyway). Will have a look at the Elixir shortly. Running these models is cool; is there the ability to train new models too?
You can train models with Axon (the neural network library) but it is not yet fully integrated into Bumblebee. We will start exploring those topics next and hope to provide an equally seamless experience.
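For a flavor of what standalone Axon training looks like today, here is a minimal sketch that trains a tiny MLP on random data (`Nx.random_uniform/1` is the pre-`Nx.Random` API; newer Nx versions use explicit random keys):

```elixir
# A tiny two-layer network, just to show the training-loop shape.
model =
  Axon.input("features", shape: {nil, 4})
  |> Axon.dense(8, activation: :relu)
  |> Axon.dense(1, activation: :sigmoid)

# An infinite stream of {input, target} batches of random data.
data =
  Stream.repeatedly(fn ->
    x = Nx.random_uniform({16, 4})
    y = Nx.round(Nx.random_uniform({16, 1}))
    {x, y}
  end)

# Run the training loop for 100 iterations; returns learned parameters.
params =
  model
  |> Axon.Loop.trainer(:binary_cross_entropy, :adam)
  |> Axon.Loop.run(data, %{}, iterations: 100)
```

The returned `params` can then be used with `Axon.predict/4` for inference.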
I am guessing no, otherwise that would have been showcased. I think most of the Python implementations are actually C/C++ implementations, which would be hard to beat, or could easily be patched to be faster.
Awesome release, I might want to try this in my Phoenix project. Are there any major disadvantages to doing this vs. bringing the Python ecosystem into the mix, aside from, obviously, missing out on tooling?
I think perhaps the bigger issue is that you will be blazing a new trail, which means fewer resources if you get stuck. BUT you can join the #machine-learning channel of the Erlang Ecosystem Foundation or ask around in the Elixir Forum!
I gave a presentation on the fundamentals of Livebook at the LIVE Workshop @ SPLASH 2022 just this week. The video is not officially out yet, but you can check the pre-recorded version [0] (Google Drive link).
I've trained models using Jupyter and Livebook (though only smaller toy models [1]) so I can deposit my 2 cents here. Small disclaimer that I started with Jupyter, so in some sense my mental model was biased towards Jupyter.
I think the biggest difference that'll trip you up coming from Jupyter is that Livebook enforces linear execution. You can't arbitrarily run cells in any order like you can in Jupyter - if you change an earlier cell all the subsequent cells have to be run in order. The only deviation from this is branches which allow you to capture the state at a certain point and create a new flow from there on. There's a section in [1] that explains how branching works and how you can use it when training models.
The other difference is that if you do something that crashes in a cell, you'll lose the state of the entire branch and have to rerun from the beginning of the branch. Iirc if you stop a long running cell, that forces a rerun as well. That can also be painful when running training loops that run for a while, but there are some pretty neat workarounds you can do using Kino. Using those workarounds does break the reproducibility guarantees though.
Personally while building NN models I find that I prefer the Jupyter execution model because for NNs, rerunning cells can be really time-consuming. Being able to quickly change some variables and run a cell out of order helps while I'm exploring/experimenting.
Two things I love about Livebook though are 1) the file format makes version control super easy and 2) Kino allows for real interactivity in the notebook in a way that's much harder to do in Jupyter. So in Livebook you can easily create live updating charts, images etc that show training progress or have other kinds of interactivity.
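The live-chart point is easy to see with `Kino.VegaLite`: render an empty chart once, then push points into it from the training loop. In this self-contained sketch the loss is faked with a random number:

```elixir
alias VegaLite, as: Vl

# Render an empty line chart that we can stream points into.
chart =
  Vl.new(width: 400, height: 200)
  |> Vl.mark(:line)
  |> Vl.encode_field(:x, "step", type: :quantitative)
  |> Vl.encode_field(:y, "loss", type: :quantitative)
  |> Kino.VegaLite.new()
  |> Kino.render()

# In a real notebook this push would sit inside the training loop,
# e.g. in an Axon.Loop event handler.
for step <- 1..100 do
  Kino.VegaLite.push(chart, %{step: step, loss: :rand.uniform()})
  Process.sleep(25)
end
```

The chart updates in place as points arrive, which is the kind of interactivity that takes noticeably more plumbing in Jupyter.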
If you're interested to see what my model training workflow looks like with Livebook (and I have no idea if it's the best workflow!), check out the examples below [1][2]. Overall I'd say it definitely works well, you just have to shift your mental model a bit if you're coming from Jupyter. If I were doing something where rerunning cells wasn't expensive I would probably prefer the Livebook model.
Not to distract from the awesomeness of this too much, but: what would it take to do the same type of thing but in Ruby? Being able to easily build rails apps around ML stuff would be absolutely awesome for us boring rails guys vs building python APIs for rails apps to talk to…
The amount of work varies because there are many different design decisions you can make.
We have been working on this (Nx + Axon + Livebook + Bumblebee) for almost 2 years. One person full-time (part-time for the first year) and 3 part-time. But we have taken a foundational approach: instead of providing bindings to high-level libraries, we chose to build the whole foundation in Elixir and only leverage 3rd-party libraries at the compiler level. The stack looks roughly like this: Bumblebee → Axon → Nx → pluggable compilers (such as EXLA or Torchx).
Everything is done in Elixir, except for the (pluggable) Compilers layer, which can be written in anything. The ones we use by default are from Google/Facebook and are written in C++.
This design gives us agency and resiliency: we have control over how to evolve the whole stack and we can swap between different compilers depending how state of the art evolves. We are more expressive too, as we are not limited by design decisions done in libraries written in a separate language.
But you could skip ahead and provide direct bindings to libraries such as torch + torch.nn. This cuts corners but also effectively ties you to a Python library.
However, it is important to note that embedding a neural network model inside your Phoenix app, as done in the video, is only really practical in a language that can fully leverage concurrency and effectively run both CPU/IO workflows in the same OS process. I am not following the progress of concurrent Ruby, but my rough understanding is that it is not there yet. So in practice you would need to deploy your machine learning models as a separate service anyway.
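Concretely, "embedding the model inside your Phoenix app" usually means putting an `Nx.Serving` into the supervision tree, so concurrent web requests are batched onto the accelerator automatically. A sketch; names like `MyApp.Serving` and `build_serving/0` are illustrative:

```elixir
# In MyApp.Application.start/2: the serving runs alongside the web endpoint.
children = [
  {Nx.Serving,
   serving: build_serving(),
   name: MyApp.Serving,
   batch_size: 8,
   batch_timeout: 100},
  MyAppWeb.Endpoint
]

# From any controller or LiveView process; concurrent callers are
# transparently batched into a single model invocation.
Nx.Serving.batched_run(MyApp.Serving, input)
```

This is the concurrency point above in practice: request handling and model execution share one OS process, with the BEAM scheduling around the numerical work.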
It builds on top of Nx (Numerical Elixir), which allows models to be compiled for CPU or GPU. Done this way, the model does not have to run by executing BEAM bytecode. Additionally, LiveView and Livebook enable the equivalent of Jupyter notebooks. All of this has been happening over the past couple of years.
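That compilation story is what `defn` provides: you write tensor code once, and a pluggable compiler such as EXLA JIT-compiles it for the target device. A minimal sketch:

```elixir
import Nx.Defn

# With Nx.Defn.global_default_options(compiler: EXLA) set, this function
# is JIT-compiled for CPU or GPU rather than interpreted as BEAM bytecode.
defn softmax(t) do
  # Subtract the max for numerical stability before exponentiating.
  exp = Nx.exp(t - Nx.reduce_max(t))
  exp / Nx.sum(exp)
end

softmax(Nx.tensor([1.0, 2.0, 3.0]))
```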
The advantage here is data being fed into the model can use frameworks like Broadway (streaming ingestion) or Membrane (realtime media streaming), or Nerves (BEAM at edge / IoT), or as a controller for unreliable agents.
Digging into the source of its dependencies a bit, it looks like it uses NIFs (Native Implemented Functions, i.e. native code loaded directly into the VM) to bind into your choice of a couple of different native backends, the big ones being Torch and Google's XLA compiler.
So performance shouldn't be an issue at all, when compared to similar "high-level language calling low-level language for heavy tasks" situations, like Tensorflow or PyTorch. It does, however, come at the cost of sacrificing the extreme stability BEAM applications typically enjoy: misbehavior by the backend can cause hangs or even crashes of the BEAM node they're running in.
This is really amazing, and I think it will make wiring some of these models together much easier. Elixir is making great strides toward being a fantastic choice for ML.
xrd | 3 years ago
The ergonomics of Bumblebee are so perfect.
I'm blown away.
AlphaWeaver | 3 years ago
How easy would it be to support OpenAI's new Whisper transcription model in Bumblebee?
afhammad | 3 years ago
0: https://hexdocs.pm/kino/Kino.html
zusoomro | 3 years ago
If you mean connecting to a running node, that’s already possible! Here’s an example on fly.io’s infra: https://fly.io/docs/elixir/advanced-guides/connect-livebook-...
xrd | 3 years ago
https://podcast.thinkingelixir.com/102
thomasfortes | 3 years ago
https://github.com/livebook-dev/vega_lite
josevalim | 3 years ago
0: https://drive.google.com/file/d/1Mw_NEER4VzA1qhFIq6WH9PYLLbN...
hanrelan | 3 years ago
[1] https://github.com/elixir-nx/axon/blob/main/notebooks/genera... [2] https://github.com/elixir-nx/axon/blob/main/notebooks/genera...