Launch HN: Moonglow (YC S24) – Serverless Jupyter Notebooks
112 points | tmychow | 1 year ago
With Moonglow, you can start and stop pre-configured remote cloud machines from within VSCode, and those servers appear to VSCode as normal Jupyter kernels that you can connect your notebook to.
We built this because we learned from talking to data scientists and ML researchers that scaling up experiments is hard. Most researchers like to start in a Jupyter notebook, but they rapidly hit a wall when they need to scale up to more powerful compute resources. To do so, they need to spin up a remote machine and then also start a Jupyter server on it so they can access it from their laptop. To avoid wasting compute resources, they might end up setting this up and tearing it down multiple times a day.
When Trevor used to do ML research at Stanford, he faced this exact problem: often, he needed to move between cloud providers to find GPU availability. This meant he was constantly clicking through various cloud compute UIs, as well as copying both notebook files and data across different providers over and over again.
Our goal with Moonglow is to make it easy to transfer your dev environment from your local machine to your cloud GPU. If you’ve used Google Colab, you’ve seen how easy it is to switch from a CPU to a GPU - we want to bring that experience to VSCode and Cursor.
If you’re curious, here’s some background on how it works. You can model a local Jupyter setup as having three parts: a frontend (the notebook itself), a server, and an underlying kernel. The frontend is where you enter code, the kernel is what actually executes it, and the server in the middle is responsible for spinning up and restarting kernels. Moonglow is a rewrite of this middle server part: where an ordinary Jupyter server would just start and stop local kernels, we’ve added extra orchestration that provisions a machine from your cloud, starts a kernel on it, then sets up a tunnel between you and that kernel.
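The three-part split can be sketched with a toy, stdlib-only stand-in for that middle "server" layer. This is purely an illustration of the architecture, not Moonglow's actual code: a real Jupyter server launches long-lived ipykernel processes and speaks ZeroMQ, and Moonglow's rewrite would provision a cloud machine before starting the kernel.

```python
# Toy, stdlib-only sketch of Jupyter's middle "server" layer.
# Illustrative only: a real Jupyter server launches ipykernel processes
# and speaks ZeroMQ; Moonglow's version provisions a cloud machine first.
import subprocess
import sys

class ToyKernelServer:
    """Owns kernel lifecycle (start/stop) and relays code to the kernel."""

    def __init__(self):
        self.running = False

    def start_kernel(self):
        # A real server would spawn a long-lived kernel process here;
        # Moonglow's rewrite provisions a cloud machine at this step.
        self.running = True

    def execute(self, code: str) -> str:
        # Relay a "cell" to the kernel. Here each cell runs in a fresh
        # interpreter; a real kernel keeps state between cells.
        assert self.running, "kernel not started"
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=30,
        )
        return result.stdout

    def shutdown(self):
        self.running = False

server = ToyKernelServer()
server.start_kernel()
print(server.execute("print(2 + 2)"))  # prints "4"
server.shutdown()
```

The point of the sketch is that the frontend never talks to the kernel directly; everything goes through the server object, which is exactly the seam Moonglow replaces.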
In the demo video (https://www.youtube.com/watch?v=Bf-xTsDT5FQ), you can see Trevor demonstrate how he uses Moonglow to train a ResNet to 94% accuracy on the CIFAR-10 classification benchmark in 1m16s of wall clock time. (In fact, it only takes 5 seconds of H100 time; the rest of it is all setup.)
On privacy: we tunnel your code and notebook output through our servers. We don’t store or log this data if you are bringing your own compute provider. However, we do monitor it if you are using compute provided by us, to make sure that what you are running doesn’t break our compute vendor’s terms of service.
We currently aren’t charging individuals for Moonglow. When we do, we plan to charge individuals a reasonable per-seat price, and we have a business plan for teams with more requirements.
Right now, we support Runpod and AWS. We’ll add support for GCP and Azure soon, too. (If you’d like to use us to connect to your own AWS account, please email me at trevor@moonglow.ai.)
For today’s launch on HN only, you can get a free API key at https://app.moonglow.ai/hn-launch. You don’t need to sign in and you don’t need to bring your own compute; we’ll let you run it on servers we provide. This API key will give you enough credit to run Moonglow with an A40 for an hour.
If you're already signed in, you won't see the free-credits page, but free credits will have been added to your account automatically.
We’re still very early, and there are a lot of features we’d still like to add, but we’d love to get your feedback on this. We look forward to your comments!
randomcatuser|1 year ago
What do you think of this compared with running a Jupyter server on Modal? (I think Modal is slightly harder, ie, you run a terminal command, but curious!) https://modal.com/docs/guide/notebooks
tmychow|1 year ago
If you are trying to run an entire notebook on a remote machine by starting a Jupyter server with Modal, then the workflow is not that different from other clouds (e.g. you can start an EC2 instance and run a Jupyter server there). For that, Moonglow still makes things easier by letting you stay in your IDE and avoid juggling Jupyter server URLs.
Also, you might need to use a specific cloud, e.g. if you have cloud credits, sensitive data that needs to stay on that cloud, or just expensive egress fees. One of Moonglow's strengths is that you can do your work in that cloud, rather than having to move stuff around.
AnotherGoodName|1 year ago
I suspect that’s just a matter of time, right?
dinobones|1 year ago
For a lot of ML/AI workloads and tasks, Python is just a binding for underlying C/C++.
It's already a nightmare to try to reproduce any ML/AI paper, pip breaks 3 times, incompatible peer deps, some obscure library emits an obscure CLANG error that means I need to brew install some libwhatever, etc...
I don't think the WebAssembly toolchain is quite ready for plug and play "pip install" time yet. I hope it eventually will be though.
bblcla|1 year ago
Hopefully someday you'll have 8 H100s on your Macbook, but I think we're still a long way away from that.
AnotherGoodName|1 year ago
Yay, I really can have serverless notebooks! Not just an easy-to-manage server environment, but literally a static HTML file that can be passed around and runs the full notebook environment. It’s weird it was ever done any other way.
bblcla|1 year ago
The big difference is that Google Colab runs in your web browser, whereas Moonglow lets you connect to compute in the VSCode/Cursor notebook interface. We've found a lot of people really like the code-completion in VSCode/Cursor and want to be able to access it while writing notebook code.
Colab only lets you connect to compute provided by Google. For instance, even Colab Pro doesn't offer H100s, whereas you can get that pretty easily on Runpod.
bblcla|1 year ago
We don't yet transfer the Python environment on the self-serve options, though for customers on AWS we'll help them create and maintain images with the packages they need.
I do have some ideas for making it easy to transfer environments over - it would probably involve letting people specify a requirements.txt and some apt dependencies and then automatically creating/deploying containers around that. Your idea of actually just detecting what's installed locally is pretty neat too, though.
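That requirements.txt-plus-apt idea could be sketched roughly like this. Everything here is hypothetical (the `build_dockerfile` helper and the base image are illustrative, not an existing Moonglow feature): the point is just that a declared environment can be turned into a container definition mechanically.

```python
# Hypothetical sketch of the environment-transfer idea: turn a
# requirements.txt plus a list of apt packages into a Dockerfile.
# build_dockerfile and the base image are illustrative assumptions,
# not Moonglow's actual implementation.
def build_dockerfile(requirements_path: str, apt_packages: list[str]) -> str:
    lines = [
        "FROM python:3.11-slim",
        # System-level dependencies the user declared.
        "RUN apt-get update && apt-get install -y " + " ".join(apt_packages),
        # Python-level dependencies from the declared requirements file.
        f"COPY {requirements_path} /tmp/requirements.txt",
        "RUN pip install -r /tmp/requirements.txt",
    ]
    return "\n".join(lines)

print(build_dockerfile("requirements.txt", ["git", "libgl1"]))
```

A real version would also need to pin versions and rebuild only when the declared dependencies change, but the mechanical core is this simple.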
fluxode|1 year ago
1. How is this different from Syncthing and similar solutions? Syncthing is free, open-source, cloud-agnostic, and easy to use to accomplish what seems to be the same task as Moonglow.
2. What is serverless about this? It's not clear from the pitch above.
bblcla|1 year ago
2. I think the "serverless" here is actually pretty literal: you don't have to think about or spin up a Jupyter server. We normally describe it as "run local Jupyter notebooks on your own cloud compute," which I think might be a little clearer.
aresant|1 year ago
This is brilliant and "obvious" in a good way along those lines, congrats on the launch!
nobarpgp|1 year ago
Curious, are the SSH keys stored on Moonglow's internal servers?
bblcla|1 year ago
There are no SSH keys to store: we start a tunnel from the remote machine and connect to that.
daft_pink|1 year ago
I really like their Jupyter REPL format, because it separates the cells with Python comments, so it’s much easier to deploy your code when you’re done versus a notebook.
bblcla|1 year ago
One nice thing about our VSCode extension is that it's not just a remote kernel - our extension also lets you see what kernels you have and other details, so we'd need to write something like it for Zed. We probably wouldn't do this unless there's a lot of demand.
By the way, VSCode also supports the `# %%` REPL spec, and Moonglow does work with it (though we haven't optimized for it as much as we've optimized for notebooks).
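For reference, the `# %%` percent format is just a plain `.py` file where comments mark cell boundaries, so it runs as an ordinary script while VSCode's interactive window treats each marker as a cell:

```python
# %% [markdown]
# A plain .py file in the "# %%" percent format: VSCode (and tools like
# Jupytext) treat each "# %%" marker as a cell boundary, but the file
# still runs top to bottom as an ordinary script.

# %%
import math
radius = 2.0

# %%
area = math.pi * radius ** 2
print(f"{area:.2f}")  # prints "12.57"
```

Because it's an ordinary script, deployment is just running the file; there's no notebook-to-script conversion step.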
bblcla|1 year ago
One thing I've found while working in the ML space is that ML researchers have to deal with a lot of systems cruft. I think that in the limit, ML researchers basically only care about having a few things set up well:
- secrets and environment management
- making sure their dependencies are installed
- efficient access to their data
- quick access to their code
- using expensive compute efficiently
But to get all this set up for their research they need to wade through a ton of documentation about git, bash, docker containers, mountpoints, availability zones, cluster management and other low-level systems topics.
I think there's space for something like Replit or Vercel for ML researchers, and Moonglow is a (very early!) attempt at creating something like it.
bblcla|1 year ago
However, looking at its replacement here (https://docs.databricks.com/en/dev-tools/bundles/index.html), I think we're trying to solve the same problems at different levels. My guess is Databricks is the right solution for big teams that need well-defined staging/prod/dev environments. We're targeting smaller teams that might be doing more of their own devops or are still at the "using a bash script to run notebooks remotely" stage.
bblcla|1 year ago
Moonglow abstracts over this, so you don't need to think about the server connection details at all. We're aiming for an experience where it feels like you've moved your notebook from local to cloud compute while staying in your editor.