I’m experiencing a similar issue hosting an MCP server on Cloud Run with scale-to-zero for cost optimization. As far as I know, Cloud Functions (2nd gen) and Cloud Run are both container-based, and they tend to have noticeable startup times.
In contrast, AWS Lambdas, which run on Firecracker, have sub-second startup latency, often just a few hundred milliseconds.
Is there anything comparable on GCP that achieves similar low latency cold starts?
I'm a huge GCP fan, but Cloud Run wouldn't fit our use case because of its routing and ephemeral nature. I think you'd have to build something yourself using GKE + gVisor.
Thanks for sharing. Makes a lot of sense that removing that routing layer would improve e2e latency.
We had a similar bottleneck building out our sandbox routing layer, where we were doing a lookup to a centralized db to route the query. We found that even with a fast KV store, that lookup still added too much overhead.
We moved to encoding the routing logic (like region, cluster ID, etc.) directly into the subdomain/hostname. This let us drop the db read entirely on the hot path and rely on Anycast + latency-based DNS to route the user to the correct regional gateway instantly. Also, if you ever find yourselves outgrowing standard HTTP proxies for those long-lived agent sessions, I highly recommend looking at Pingora. It gave us far more control over connection lifecycles than NGINX.
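A minimal sketch of what that hostname encoding can look like — the naming scheme (`<region>-<cluster>.sandbox.example.com`) is invented here for illustration:

```python
# Hypothetical sketch: encode routing info in the hostname so the edge
# gateway can route without any database lookup on the hot path.
# The label format "<region>-<cluster>" is an assumption, not the
# commenter's actual scheme.

def parse_routing(hostname: str) -> dict:
    """Extract region and cluster ID from a routing-encoded hostname."""
    label = hostname.split(".", 1)[0]      # e.g. "use1-c12"
    region, cluster = label.split("-", 1)  # e.g. ("use1", "c12")
    return {"region": region, "cluster": cluster}

print(parse_routing("use1-c12.sandbox.example.com"))
# {'region': 'use1', 'cluster': 'c12'}
```

Since the gateway only parses a string it already has, the per-request cost is effectively zero; DNS does the geographic placement.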
On the compute side, sandbox pooling is cool but might kill your unit economics, especially if at some point each tenant has a different image. Have you looked into memory snapshots? That way you only pay storage costs, not for full idle VMs.
Why is there all of a sudden an explosion of sandbox-related posts and tools? LLMs and agents have always needed sandboxes… did the collective consciousness just decide all at once that this mattered and was the area to focus on?
I think sandboxes are having their moment because it's become undeniable that coding agents are useful, and that they're more useful if you run them in YOLO mode rather than having to approve everything they want to do.
Coding agents are still a relatively new category to most people. Claude Code dates back to February last year, and it took a while for the general engineering public to understand why that format - coding LLMs that can execute and iterate on the code they are writing - was such a big deal.
As a result the demand for good sandboxing options is skyrocketing.
It also takes a while for new solutions to spin up - if someone realized sandboxes were a good commercial idea back in September last year the products they built may only just be ready for people to start trying out today.
Particularly an explosion of SaaS sandboxes... why should I pay a subscription for some remote sandbox with paltry compute power, which I need a constant internet connection to access? I have this brilliant processor in my own laptop I want to use that I have already paid for, I don't want to use someone else's!
Great write-up on the evolution of your architecture. The progression from 200ms → 14ms is impressive.
The lesson about "delete code to improve performance" resonates. I've been down similar paths where adding middleware/routing layers seemed like good abstractions, but they ended up being the performance bottleneck.
A few thoughts on this approach:
1. Warm pools are brilliant but expensive - how are you handling the economics? With multi-region pools, you're essentially paying for idle capacity across multiple data centers. I'm curious how you balance pool size vs. cold start probability.
2. Fly's replay mechanism is clever, but that initial bounce still adds latency. Have you considered using GeoDNS to route users to the correct regional endpoint from the start? Though I imagine the caching makes this a non-issue after the first request.
3. For the JWT approach - are you rotating these tokens per-session? Just thinking about the security implications if someone intercepts the token.
The 79ms → 14ms improvement is night and day for developer experience. Latency under 20ms feels instant to humans, so you've hit that sweet spot.
1. The pools are very shallow: two machines per pool. While it's certainly possible for three tasks to be requested in the same region within 30 seconds, we handle that by falling back to the next closest region if a pool is empty. This is uncommon, though.
2. I haven't considered it, but yeah, the caching seems to work great for us.
3. The tokens are generated per-task, so if you are worried about your token getting leaked, you can just delete the task!
This is a problem that doesn't need to exist. Just run stuff locally on your dev machine with 12 cores and 32Gi of memory. What the hell has happened to need an entire computing cluster and all the network infrastructure between just to write software?
With so many apps in need of these sandboxes, I wonder if a browser plugin could be built that provisions a sandbox on the user's computer — a type of infra that could be utilized by different providers. The security implications are a little tough, but the attack surface could likely be reduced with the right practices.
So they used edge servers? How is this novel or insightful?
This article reads like a thinly veiled ad. Certainly not the best way to start a technical blog. If you didn't have the technical insight to know that physics is a factor in latency, why should I trust you with the problems your product actually solves?
Interesting. It seems to me that client side prediction and lag compensation (aka the basics for games in similar situations) would have been a viable alternative.
While I can see that working well for echoing keystrokes in a terminal, I'm not sure how it would work when you actually enter commands into the terminal. Same for opening files in the IDE.
When Covid hit I wasn’t the only one working remotely at my company, but I was the only one working remotely in North America, and apparently the only one trying to Work Smarter. By then there were a handful of feature toggles I had implemented that I quickly set to always on in development, but chief among them was that gzip service calls were a net loss in AWS but very very handy while working from home.
I also had switched a head of line service call that was, for reasons I never sorted out, costing us 30ms TTFB per request for basically fifty bytes of data, to use a long poll in Consul because the data was only meant to be changed at most once every half hour and in practice twice a week. So that latency was hidden in dev sandbox except for startup time, where we had several consul keys being fetched in parallel and applied in order, so one more was hardly noticeable.
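The Consul long-poll trick works via blocking queries: you pass back the last index you saw, and the request hangs until the key changes or a wait timeout elapses. A small sketch (the Consul agent address and key name are illustrative):

```python
# Sketch of Consul's blocking-query pattern: instead of paying a fetch
# per request, you long-poll the key and only wake when it changes.
# The key name "config/flags" is invented for this example.

def blocking_query_url(consul: str, key: str, index: int, wait: str = "5m") -> str:
    """Build a Consul KV blocking-query URL. The request blocks until
    the key's ModifyIndex exceeds `index` or `wait` elapses; the new
    index comes back in the X-Consul-Index response header."""
    return f"{consul}/v1/kv/{key}?index={index}&wait={wait}"

url = blocking_query_url("http://127.0.0.1:8500", "config/flags", 42)
print(url)
# http://127.0.0.1:8500/v1/kv/config/flags?index=42&wait=5m
# Then e.g. urllib.request.urlopen(url), read X-Consul-Index, and loop
# with the new index as the next `index` argument.
```

For data that changes twice a week, one parked connection replaces thousands of head-of-line fetches.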
The nasty one though was that Artifactory didn’t compress its REST responses, and when you have a CI/CD pipeline that’s been running for six years with half a hundred devs that response is huge because npm is teh dumb. So our poor UI lead kept having npm install timeout and the UI team’s answer for “my environment isn’t working” started with clearing your downloaded deps and starting over.
They finally fixed it after we (and presumably half of the rest of their customers) complained but I was on the back 9 of migrating our entire deployment pipeline to docker and so I had nginx config fairly fresh in my brain and I set them up a forward proxy to do compression termination. It still blew up once a week but that was better than him spending half his day praying to the gods of chaos.
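The compression-terminating proxy in that story can be approximated with a config like the one below — hostnames, ports, and sizes here are invented, not the actual setup:

```nginx
# Hypothetical sketch: nginx in front of an upstream (e.g. Artifactory)
# that serves uncompressed responses, adding gzip on the way out.
server {
    listen 8081;
    location / {
        proxy_pass http://artifactory.internal:8081;
        proxy_set_header Accept-Encoding "";  # ask upstream for plain bodies
        gzip on;
        gzip_proxied any;              # compress proxied responses too
        gzip_types application/json;   # npm registry metadata is JSON
        gzip_min_length 1024;          # skip tiny responses
    }
}
```

Pointing npm at the proxy instead of the upstream shrinks the huge metadata responses enough to stop the install timeouts.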
One of the most dangerous ideologies is "all good things come to those who wait" or that waiting is a virtue. Applied by people working at all the levels of a system for years and years it leads to steps that could be 30ms taking 30s.
I have! It's pretty interesting and handles a lot of the problems discussed here, but it's a little young for us. For one thing, it doesn't have Fly Replay, so we'd have to build a separate proxy again.
If we were starting from 0, I would definitely try it. My favorite thing about it is the progressive checkpointing- you can snapshot file system deltas and store them at s3 prices. Cool stuff!