item 47011287

killbot_2000 | 17 days ago

Interesting approach. I've been going the opposite direction - building a local orchestration platform where 70+ agents share resources on my own machine. The isolation problem you mention is real. I've found that for many dev tasks, local-first avoids the latency and cost of cloud VMs, though GPU workloads are a different story. Curious how you handle agent state persistence across VM sessions?

lawrencechen | 16 days ago

I also personally prefer running agents locally instead of in the cloud. For some reason it feels easier to steer Claude Code when it's running in my terminal than to steer something in the cloud. Maybe part of it is the latency of typing into ssh'd TUIs, and that's something a GUI could solve... but I still feel more at home with Claude Code/codex in the CLI than with something like Claude Code Web/Codex Cloud. Part of it is likely reliability and "time to first visible token from the AI." But local has its own tradeoffs: conflicting ports and other shared resources, lag (maybe it's time to upgrade my M1 Max 64gb...), and slightly more network latency on LLM calls.

Curious to hear more about your local orchestration platform: how did you solve resource sharing (mainly ports for web stuff, tbh)? Or is it more intra-task than inter-task parallelism?
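One common answer to the port question (a sketch of a standard technique, not necessarily what either commenter's platform does): let the OS hand out an ephemeral port by binding to port 0, then pass the assigned port to each agent's dev server so concurrent agents never fight over a hardcoded port.

```python
# Sketch: avoid port conflicts between many local agents by asking the OS
# for a free ephemeral port instead of hardcoding one per agent.
import socket

def get_free_port() -> int:
    """Bind to port 0 so the OS assigns an unused port, then return it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

# Each agent would call this before launching its web server, e.g.
# `npm run dev -- --port {get_free_port()}` (hypothetical invocation).
```

There is a small race window between releasing the probe socket and the agent's server binding the port, so orchestrators that need stronger guarantees keep a central port registry instead.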