borenstein|1 month ago
Thank you, good question! My original implementation was actually a bunch of manifests on my own microk8s cluster. I found that this meant a lot of ad-hoc adjustments with every little tweak. (Ironic, given the whole "pets vs. cattle" thing.) So I started testing the changes in a VM.
Then I was talking to a security engineer at my company, who pointed out that a VM would make him feel better about the whole thing anyway. And it occurred to me: if I packaged it as a VM, then I'd get both isolation and determinism. It would be easier to install and easier to debug.
So that's why I decided to go with a Vagrant-based installation. The obvious downside is that it's harder now to integrate it with external systems or to use the full power of whatever environment you deploy it in.
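For readers unfamiliar with Vagrant, the kind of setup described above looks roughly like this. This is a minimal sketch, not yolo-cage's actual configuration; the box name, mount point, and excludes are illustrative placeholders:

```ruby
# Minimal sketch of a Vagrantfile with an rsync-synced working directory.
# Box name and mount point are placeholders, not yolo-cage's real config.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  # One-way rsync from host to guest at `vagrant up`/`vagrant reload`;
  # push later changes with `vagrant rsync` (or watch with `vagrant rsync-auto`)
  config.vm.synced_folder ".", "/home/vagrant/work", type: "rsync",
    rsync__exclude: [".git/", "node_modules/"]
end
```

Note that an rsync-type synced folder is one-way (host to guest), which is relevant to the "where does the true copy live" question below.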
fnoef|1 month ago
I peeked at the Vagrantfile, and I noticed that you rsync the working directory into the VM. I have two more questions.
1. Is it safe to assume that I am expected to develop inside the VM? How do you run an IDE/vim, as well as Claude Code, while the true copy of the code lives in the VM?
2. What does yolo-cage provide on top of just running a VM? I mean, there is a lot of code in the GitHub repo. Is this the glue code to prepare the VM? Is this just QOL scripts to run/attach to the VM?
m-hodges|1 month ago
See: A field guide to sandboxes for AI¹ on the threat models.
> I want to be direct: containers are not a sufficient security boundary for hostile code. They can be hardened, and that matters. But they still share the host kernel. The failure modes I see most often are misconfiguration and kernel/runtime bugs — plus a third one that shows up in AI systems: policy leakage.

¹ https://www.luiscardoso.dev/blog/sandboxes-for-ai
ajb|1 month ago
I find using Docker containers more complex: you need a Dockerfile instead of a regular install script, they tend to be very minimal and lack typical Linux debugging tools for the agent, and they lose state when stopped.
dist-epoch|1 month ago
Instead I'm using LXC containers in a VM, which are containers that look and feel like a VM.
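For context, the system-container workflow being described looks roughly like this with LXD's CLI. This is a sketch, not the commenter's actual setup: the container name and image are placeholders, and it assumes LXD is installed and initialized inside the VM:

```shell
# Sketch of an LXD system-container workflow; names are placeholders.
lxc launch ubuntu:22.04 agent-box   # full distro with systemd, not a stripped-down image
lxc exec agent-box -- bash          # get a shell; install debugging tools with apt as usual
lxc stop agent-box                  # filesystem state persists across stop/start,
lxc start agent-box                 # unlike a typical ephemeral `docker run`
```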