adammiribyan | 4 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Not yet. Current design is run code, return result. Adding virtio-net to forks is on the roadmap. What's your use case that needs it?
adammiribyan | 5 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Glad to see the approach validated at scale! I hadn't seen your blog posts until they were linked here -- going to dig into the userfaultfd path. Would love to chat if you're open to it.
adammiribyan | 5 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Great writeup, bookmarked. The fault storm point is interesting -- our forks are short-lived (execute and discard) so the working set is small, but for longer-running sandboxes that would absolutely be a problem.
adammiribyan | 5 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Both. The engine is open source. You can self-host it on any Linux box with KVM. There's also a live API you can hit right now (curl example in the README). Building the managed service for teams that don't want to run their own infra.
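For flavor, here is the rough shape such a "fork and execute" call could take. Everything below is illustrative only -- the real endpoint, auth, and field names are in the project's README, and `build_exec_request`, `template`, and `timeout_ms` are hypothetical names, not the actual API:

```python
import json

def build_exec_request(code: str, template: str = "python") -> dict:
    """Build a hypothetical JSON body for 'fork a VM from this template, run this code'."""
    return {
        "template": template,    # pre-baked snapshot to fork from
        "code": code,            # code to execute inside the fork
        "timeout_ms": 5000,      # discard the fork if it runs too long
    }

body = json.dumps(build_exec_request("print(2 + 2)"))
print(body)
```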
adammiribyan | 6 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
1 vCPU per fork currently. Multi-vCPU is doable (per-vCPU state restore in a loop) but would multiply fork time.
On Firecracker version: tested with v1.12, but the vmstate parser auto-detects offsets rather than hardcoding them, so it should work across versions.
adammiribyan | 6 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
On tail latency: KVM VM creation is 99.5% of the fork cost -- create_vm, create_irq_chip, create_vcpu, and restoring CPU state. The CoW mmap is ~4 microseconds regardless of load. P99 at 1000 concurrent is 1.3ms. The mmap CoW page faults during execution are handled transparently by the host kernel and don't contribute to fork latency.
On snapshot staleness: yes, forks inherit all internal state including RNG seeds. For dependency updates you rebuild the template (~15s). No incremental update - full re-snapshot, similar to rebuilding a Docker image.
On the memory number: 265KB is the fork overhead before any code runs. Under real workloads we measured 3.5MB for a trivial print(), ~27MB for numpy operations. But 93% of pages stay shared across forks via CoW. We measured 100 VMs each running numpy sharing 2.4GB of read-only pages with only 1.75MB private per VM. So the real comparison to E2B's ~128MB is more like 3-27MB depending on workload, with most of the runtime memory shared.
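The sharing mechanism described above can be demonstrated in miniature on an ordinary file (a sketch assuming Linux/POSIX mmap semantics, with a temp file standing in for the guest-memory snapshot): a MAP_PRIVATE mapping shares physical pages with the backing file until a page is written, at which point the kernel copies just that page for the writer.

```python
import mmap
import os
import tempfile

# Write a 4-page "snapshot" file filled with 'A' bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"A" * mmap.PAGESIZE * 4)
    path = f.name

# MAP_PRIVATE = copy-on-write: the mapping is writable even though the
# fd is read-only, and writes never reach the backing file.
fd = os.open(path, os.O_RDONLY)
fork_view = mmap.mmap(fd, 0, flags=mmap.MAP_PRIVATE,
                      prot=mmap.PROT_READ | mmap.PROT_WRITE)
os.close(fd)

fork_view[0:4] = b"BBBB"   # CoW fault: only this one page gets copied

with open(path, "rb") as f:
    original = f.read(4)   # backing file is untouched

print(fork_view[0:4], original)  # the "fork" sees BBBB, the file still has AAAA
os.unlink(path)
```

The untouched three pages remain shared with the page cache; that is the per-page version of the 93%-shared figure above.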
adammiribyan | 6 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Exactly -- they skip the OS, we make it free to clone.
adammiribyan | 6 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Agreed, cross-node is the hard next step. For now single-node density gets you surprisingly far. 1000 concurrent sandboxes on one $50 box. When we need multi-node, userfaultfd with remote page fetch is the likely path.
adammiribyan | 6 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Good callout. We seed entropy before snapshot to unblock getrandom(), but forks still share CSPRNG state. The proper fix per Firecracker’s docs is RNDADDENTROPY + RNDRESEEDCRNG after each fork, plus reseeding userspace PRNGs like numpy separately. On the roadmap.
https://github.com/firecracker-microvm/firecracker/blob/main...
adammiribyan | 7 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
CRIU is great for save/restore. The nice thing about CoW forking is it's cheap branching, not just checkpointing. You can clone a running state thousands of times at a few hundred KB each.
adammiribyan | 7 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
The API in the readme is live right now -- you can curl it. Plan is multi-region, custom templates with your own dependencies, and usage-based pricing. Email in my profile if you want early access.
adammiribyan | 7 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Thanks! Yes, there's going to be a managed version.
adammiribyan | 7 days ago | on: Show HN: Sub-millisecond VM sandboxes using CoW memory forking
Fair question. The fork engine itself is general purpose -- you could use it for anything that needs fast isolated execution. We say 'AI agents' because that's where the demand is right now. Every agent framework (LangChain, CrewAI, OpenAI Assistants) needs sandboxed code execution as a tool call, and the existing options (E2B, Daytona, Modal) all boot or restore a VM/container per execution. At sub-millisecond fork times, you can do things that aren't practical with 100-200ms startup: speculative parallel execution (fork 10 VMs, try 10 approaches, keep the best), treating code execution like a function call instead of an infrastructure decision, etc.
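A toy version of the speculative pattern, with plain OS processes standing in for VM forks (the search problem and all names here are made up for illustration): launch several candidate approaches at once, score each result, keep the best, discard the rest.

```python
from concurrent.futures import ProcessPoolExecutor

def candidate(step: float) -> tuple[float, float]:
    """One 'approach': a crude grid search for the minimum of (x - 3)^2."""
    best_x, best_y = 0.0, float("inf")
    x = 0.0
    while x <= 10.0:
        y = (x - 3.0) ** 2
        if y < best_y:
            best_x, best_y = x, y
        x += step
    return best_x, best_y

if __name__ == "__main__":
    steps = [1.0, 0.5, 0.25, 0.1]            # one "fork" per approach
    with ProcessPoolExecutor(max_workers=len(steps)) as pool:
        results = list(pool.map(candidate, steps))
    best = min(results, key=lambda r: r[1])  # keep the best, drop the rest
    print(best)
```

With millisecond-scale process startup this is already cheap; the claim above is that sub-millisecond VM forks make the same pattern viable with full isolation per candidate.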
adammiribyan | 1 month ago | on: Launch HN: AgentMail (YC S25) – An API that gives agents their own email inboxes
adammiribyan | 4 months ago | on: Kratos - Cloud native Auth0 open-source alternative (self-hosted)
Does OpenAI use Ory? I thought they’re using Auth0.
adammiribyan | 2 years ago | on: Is Cloudflare down?
Their home page shows “Sorry, you have been blocked” and the status page is not even responding.