horsawlarway | 5 days ago
For the same reasons we go to such lengths to make dev environments identical with tooling like Docker, and work hard to keep environments like staging and production consistent.
Viewing the state of things from the user's own context is far more valuable than a "fog of war" minimal view you can't fully trust.
> Usually because it inherited the developer's shell, env, and network. That's the actual problem. Not the file format.
I'd argue this is folly. The actual problem is that the LLM behind the agent is running on someone else's computer, with zero accountability beyond the flimsy promise of legal contracts (in the best case, when those contracts are backed by well-funded legal departments at large businesses).
This whole category of problems goes out of scope if the model is owned by you (or your company) and run on hardware owned by you (or your company).
If you want to fix things - argue for local.
zahlman | 5 days ago
And on the flip side, a remote model isn't creating risk in and of itself. That comes from the agent harness being permitted to make network and filesystem calls. Even the most evil possible version of ChatGPT isn't going to exfiltrate anything except by somehow social-engineering you into volunteering the information.
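That distinction can be made concrete. A minimal sketch of the idea, assuming a hypothetical harness that routes every tool call through a policy gate before executing it (the workspace path, allowlist, and function names here are all illustrative, not any real framework's API):

```python
import fnmatch
from pathlib import Path

# Hypothetical policy gate: the model only emits tool-call requests as text;
# the harness decides whether to act on them. Restricting the harness, not
# the model, is what contains the risk.

WORKSPACE = Path("/home/user/project").resolve()  # assumed sandbox root
ALLOWED_HOSTS = ["api.openai.com"]                # model endpoint only

def allow_file_access(requested: str) -> bool:
    """Permit file access only inside the workspace directory."""
    target = (WORKSPACE / requested).resolve()
    return target.is_relative_to(WORKSPACE)  # blocks ../ and absolute escapes

def allow_network(host: str) -> bool:
    """Permit outbound connections only to an explicit allowlist."""
    return any(fnmatch.fnmatch(host, pat) for pat in ALLOWED_HOSTS)
```

With a gate like this, even a model that requests `../../etc/passwd` or a connection to an attacker's host gets refused by the harness; the model itself never held the capability.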
selridge | 5 days ago
It's why people are hooking Open Claw up to everything and letting it rip: putting it in a sandbox, inside a VM, inside a jail is like getting a brand-new smartphone and switching it to Airplane Mode first thing.