Am I missing something? Why is everyone talking about sandboxes when it comes to OpenClaw?
To me it's like giving your dog a stack of important documents, then being worried he might eat them, so you put the dog in a crate, together with the documents.
I thought the whole problem with that idea was that, for the agent to be useful, you have to connect it to your calendar, your e-mail provider, and other services so it can do stuff on your behalf, which is exactly what lets it create chaos and destruction.
And now, what, having inference done by Nvidia directly makes it better? Does their hardware prevent an AI from deleting all my emails?
What makes it even better is that these dogs are like Malinois. If they want to get into something, they will; people have had their entire network compromised by bots they left running overnight, and any important information like account logins and so on runs the risk of being misused.
It's one thing to sandbox, maybe give the bot a temporary, limited $100 card or account to go perform a specific task, but there's no coherent mind underlying these agents.
Depending on how the chain of thought / reasoning goes, or what text it gets exposed to on the internet, it could tap into spy novels, hacker fanfic, erotic fiction, or some weird reddit rabbithole and go completely off the rails in ways that you'll never be able to guard against, audit, or account for.
Claw bots seem to be a weird sort of alternate reality RPG more than a useful tool, so far. If you limit it to verifiable tasks, it might be safer, but I keep seeing people rave about "leaving it on overnight and waking up to a finished project" and so on. Well sure, but it could also hack your home network, delete your family pictures folder, log into your bank account and wire all your money to shrimp charities.
Might be wise to wait on safer iterations of these products, I think.
The fully autonomous agentic ecosystem makes me feel a little crazy, like all common sense has escaped. It feels like a lot of engineering effort is being expended to harden the engine room on the Titanic against flooding. It's going to look really secure... buried in debris at the bottom of the ocean.
When a state sponsored threat actor discovers a zero day prompt injection attack, it will not matter how isolated your *Claw is, because like any other assistant, they are only useful when they have access to your life. The access is the glaring threat surface that cannot be remediated — not the software or the server it's running on.
This is the computing equivalent of practicing free love in the late 80's without a condom. It looks really fun from a distance and it's probably really fun in the moment, but y'all are out of your minds.
your CPU, your OS, the firmware on your motherboard chips, ethernet, wifi, HDDs (btw, did you know your SIM card runs a JVM?), your browser, all the networking equipment in between, BGP, and all the root certs, and I'm just scratching the surface. The ballpark is on another planet.
I’m still not sure why there’s this general idea that people care about security/privacy. For critical systems, sure. But over the last decade, we’ve seen that an average person will always choose fun and convenience over security.
Even the analogy to free love is interesting, because sex in itself during that era was fun. Frankly it’s the same nowadays as well; we’ve just figured out a way around most of the diseases.
Eh… Titanic did flood in the engine rooms so… might work?
That humor aside: I think it’s about risk tolerance, and you configure accordingly.
You lock it down as much as you need to still do the things you want, and look for good outcomes, and shut it down if things get too risky.
You practice free love, but with protection. Probably still fun?
Big difference between running a bot with fairly narrow scopes inside a network available via secure chat that compounds its usefulness over time, and granting full admin with all your logins and a bank account. Lots of usefulness in the middle.
I found this part interesting: "Inference requests from the agent never leave the sandbox directly. OpenShell intercepts every call and routes it to the NVIDIA cloud provider."
Seems like they are doing this to become the default compute provider for the easiest way to set up OpenClaw. If it works out, it could drive a decent amount of consumer inference revenue their way
Secure installation isn't the main problem with OpenClaw. This project doesn't seem to be solving a real problem. Of course the real problem is giving an LLM access to everything and hoping for the best.
NemoClaw is mostly a trojan horse of sorts to get corporate OpenClaw users quickly ported over to Nvidia's inference cloud.
It's a neat piece of architecture - the OpenShell piece that does the security sandboxing. Gives a lot more granular control over exec and network egress calls. Docker doesn't provide this out of the box.
But NemoClaw is pre-configured to intercept all OpenClaw LLM requests and proxy them to Nvidia's inference cloud. That's kinda the whole point of them releasing it.
It can be modified to allow other providers, but at the time of launch there was no mention of how to do this in their docs. Kind of a brilliant marketing move on their part.
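For what it's worth, the kind of control described above can be pictured as a per-call policy check on every exec and egress request. A toy sketch, assuming nothing about NemoClaw's actual API (all names, rules, and hosts here are made up):

```python
# Illustrative per-call filtering in the spirit of what OpenShell is
# described as doing: check every network egress and exec request
# against an explicit policy before letting it run.
import fnmatch
import shlex
from urllib.parse import urlparse

POLICY = {
    "allowed_hosts": ["*.nvidia.com"],         # egress allowlist (hypothetical)
    "denied_binaries": {"rm", "curl", "ssh"},  # exec denylist (hypothetical)
}

def egress_allowed(url: str) -> bool:
    """Permit outbound requests only to hosts matching the allowlist."""
    host = urlparse(url).hostname or ""
    return any(fnmatch.fnmatch(host, pat) for pat in POLICY["allowed_hosts"])

def exec_allowed(cmdline: str) -> bool:
    """Block commands whose binary is on the denylist."""
    argv = shlex.split(cmdline)
    return bool(argv) and argv[0] not in POLICY["denied_binaries"]
```

The point of the sketch is the granularity: decisions are made per call, not per container, which is what plain Docker doesn't give you out of the box.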
> the OpenShell piece that does the security sandboxing. Gives a lot more granular control over exec and network egress calls. Docker doesn't provide this out of the box.
It’s impressive someone early in their career shipped this. There seems to be a stark increase in high-quality AI/data projects from early-career engineers lately and I'm super curious what’s driving that (and honestly speaking: a little jealous).
Sometimes experience (or, more so, the wisdom you've accumulated over a long career) creates mental blocks and preconceptions about the risks and problems you foresee, which makes it harder to approach big scary problems, precisely because you're able to anticipate all of the challenges you're likely to hit.
Compare that to a smart engineer who doesn't have that wisdom: those people might have an easier time jumping in to difficult problems without the mental burden of knowing all of the problems upfront.
The most meaningful technical advances I've personally seen always started out as "let's just do it, it will only take a weekend", and then 2 years later you find yourself with a finished product. (If you knew it would take 2 years from the start, you might never have bothered.) Naivety isn't always a bad thing.
There are four "people" that contribute (https://github.com/NVIDIA/NemoClaw/graphs/contributors), and judging by the git commits and the GitHub authors, none of them seem to be novices at programming. What made you write what you wrote here?
Should be obvious that it's tools like Claude Code. If you are a junior dev not experienced in delivering entire products, but with good ideas, you have incredible leverage now...
If you started your career more than ~2-3 years ago, you were trained on a completely different game. Clear abstractions, ownership, careful iteration, all that. That muscle memory is actively hindering you, preventing you from succeeding.
The people coming up now don't have that baggage. They never internalized "write the code yourself" as the default. They think in terms of spawning systems, letting things run, checking outcomes. It's way closer to managing a process than engineering in the traditional sense. And yeah, that shows up in what gets shipped. A 21-year-old will brute force 20 directions in parallel with agents and just pick what works. Someone more "experienced" will spend that same time trying to design the "right" approach up front. By the time they're done thinking, the other person has already iterated past them.
What's kind of unsettling is how basically all of these "senior instincts" are now liabilities. Caring about perfect structure, being allergic to randomness, needing to understand every layer before moving forward, etc., used to be strengths. Now they just slow you down.
You can already feel the split forming. Younger builders are comfortable letting systems do things they don't fully understand. Senior engineers keep trying to pull everything back into something legible and controlled, kneecapping themselves. That gap is not small.
What I'm seeing in my circle of founders and CEOs is that they're slowly laying off these older devs (cutoff age is around 24yrs) and replacing them with fresh, young talent, better suited for this new agentic era. From their reports the velocity gains are insane; and it compounds. Basically, these older folks are still doing polynomial thinking in an exponential landscape. They are dinosaurs slated for extinction.
Neurons that fire together, wire together. Your brain optimizes for your environment over time. As we get older, our brains are running in a more optimized way than when we're younger. That's why older hunters are more effective than younger hunters. They're finely tuned for their environment. It's an evolutionary advantage. But it also means that they're not firing in "novel" ways as much as the "kids". "Kids" are more creative, I think, because their brains are still adapting, exploring novelty; neuron connections aren't as deeply tied together yet.
This is also maybe one of the biggest pitfalls as our society gets "older", with more old people and fewer "kids". We need kids to force us to do things differently.
For me (a non-early career dev) these projects terrify me. People build stuff that just seems like an enormous liability, relying on tools mostly controlled and gatekept by someone else. My intuition tells me something is off. I could be wrong about it all, but one thing I've learned over the years is that ignoring my intuition typically doesn't end well!
NemoClaw solves the sandbox problem. The spend problem is still open.
OpenShell can block network egress to suspicious-exchange.io, but it doesn’t know that your agent is about to spend $500 there, or that it has already spent $450 today on other endpoints.
I built Dreamline for this: on-chain spend governance that sits alongside NemoClaw. Before any payment, the agent calls /proxy/pay. The blacklist lives in a BNB Chain smart contract, independent of both NemoClaw and Dreamline servers.
PR on NemoClaw repo: github.com/NVIDIA/NemoClaw/pull/923
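Stripped of the chain-specific parts, the idea here is stateful accounting at the payment boundary, which a network filter alone can't do. A toy sketch, purely illustrative and not Dreamline's or NemoClaw's actual design:

```python
# Minimal spend governance: an egress filter sees endpoints, but only
# a stateful ledger can enforce "no more than $X per day". Hypothetical
# names throughout; the caps and blacklist are examples.
class SpendGovernor:
    def __init__(self, daily_cap_usd: float, blacklist: set[str]):
        self.daily_cap = daily_cap_usd
        self.blacklist = blacklist
        self.spent_today = 0.0

    def authorize(self, merchant: str, amount_usd: float) -> tuple[bool, str]:
        """Approve or refuse a payment, committing to the ledger only on approval."""
        if merchant in self.blacklist:
            return False, "merchant blacklisted"
        if self.spent_today + amount_usd > self.daily_cap:
            return False, "daily cap exceeded"
        self.spent_today += amount_usd
        return True, "approved"
```

With a $500 daily cap and $450 already spent, a further $500 request gets refused even when the merchant isn't blacklisted, which is exactly the case an egress-only filter misses.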
I'm still extremely skeptical on Claws as a genre, and especially more skeptical of a claw that's always reporting home. What's the use case for a closed claw?
Much as I love using Claude or whatever to help me write some code, it's under some level of oversight, with me as the human checking that stuff hasn't been changed in some weird way. As we all know by now, this can be 1. just weird, because the AI slept funny and suddenly decided to do Thing It Has Been Doing Consistently A Totally Different Way Today, or 2. weird because it's plain wrong and a terrible implementation of whatever it was you asked for.
It seems blindingly, blindingly obvious to me that EVEN IF I had the MOST TRUSTED secretary that had been with me for 10 years, I'd STILL want to have some input into the content they were interacting with and pushing out into the world with my name on.
The entire "claw" thing seems to be some bizarre "finger in ears, pretend it's all fine" thing where people just haven't thought in the slightest about what is actually going on here. It's incredibly obvious to me that giving unfettered access to your email or calendar or mobile or whatever is a security disaster, no matter what "security context" you pretend it's wrapped up in. A proxy email account is still sending email on your behalf, a proxy calendar is still organising things on your calendar. The irony is that for this thing to be useful, it's got to be ...useful - which means it has at some level to have pretty full access to your stuff.
And... that's a hard no from me, at least right now given what we all know about the state of current agents.
Plus... I'm just not sure of the upside. Am I seriously that busy that I need something to "organise my day" for me? Not really.
If you look at the commit history, they started work on this the Saturday before announcement, so about 2 days. There are references to design docs so it was in the works for some amount of time, but the implementation was from scratch (unless they falsified the timestamps for some reason).
The main risk, in my humble opinion, is not your claw going rogue and texting your ex, posting inappropriate photos on your LinkedIn, mining bitcoin, or refusing to open the pod bay doors.
The main risks, in my view, are prompt injections, the confused deputy problem, and honest mistakes, like not knowing what it can share in public vs. in private.
So it needs to be protected from itself, like you wouldn't give a toddler scissors and let them run around the house trying to give your dog a haircut.
In my view, it's about making sure it won't accidentally do things it shouldn't: sending env vars to a DNS server in base64, opening a reverse shell tunnel, falling for obvious phishing emails, or following instructions on rogue websites that ask it to run "something | sh". (Half of the useful tools unfortunately ask you to just run `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/somecooltool/install.sh)"` or `curl -fsSL https://somecoolcompany.ai/install.sh | bash`. Not naming anyone, cough cough brew, cough cough Claude Code, cough cough *NemoClaw* specifically.)
A smart model can inspect the file first, but a smart attacker will serve one version at first, then another from a request from the same IP...
For these, I think something on the kernel level is the best, e.g. something like https://nono.sh
NemoClaw might be good for isolating your own host machine from OpenClaw, but if you want that, I'd go with NanoClaw: dockerized by default, with a fraction of the lines of code, so you can actually peer review it. Just my 2 cents.
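For what it's worth, the specific failure modes listed above (pipe-to-shell installers, base64-over-DNS exfiltration, reverse shells) are at least partly pattern-matchable before execution. A crude illustrative screen, with made-up patterns, and trivially bypassable exactly the way the two-faced server trick shows:

```python
# Crude pre-execution screen for the failure modes mentioned above.
# Heuristics like these are defense in depth, not a fix.
import re

SUSPICIOUS_PATTERNS = [
    r"curl\s[^|]*\|\s*(ba|z)?sh",          # pipe-to-shell installers
    r"\$\(\s*curl",                        # bash -c "$(curl ...)" style
    r"base64\b.*\b(dig|nslookup|host)\b",  # base64 payloads fed to DNS tools
    r"/dev/tcp/",                          # classic bash reverse-shell idiom
]

def looks_dangerous(command: str) -> bool:
    """True if the command matches any known-bad shape."""
    return any(re.search(p, command) for p in SUSPICIOUS_PATTERNS)
```

A kernel-level enforcer can block the syscall even when the string slips past; a screen like this only catches the obvious cases.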
I think the more useful tool would be an LLM prompt proxy/firewall that puts meaningful boundaries in place to prevent both exfiltration of sensitive data and instructions that can be destructive. Using the same context loop for your conversational/coding workflow makes the task at hand and the security of that task very hard to differentiate.
Sending POST or DELETE requests? Risky. Sending context back to a cloud LLM with credentials and private information? Risky. Running rm commands, or commands that can remove things? Risky. Running scripts that contain commands that can remove things? Risky.
I don't know how we've landed on 4 options for controls and are happy with this: "ask me for everything", "allow read only", "allow writes" and "allow everything".
Seems like what we need is more granular and context-aware controls rather than yet another box to put openclaw in with zero additional changes.
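One shape such finer-grained controls could take is a rule table keyed on (channel, action) that defaults to asking, instead of four global modes. A hypothetical sketch, not any real tool's API:

```python
# Per-action rules instead of one global permission mode. Unknown
# (channel, action) pairs fall back to "ask", so new capabilities
# are never silently allowed. All rules here are examples.
RULES = {
    ("http", "GET"): "allow",
    ("http", "POST"): "ask",
    ("http", "DELETE"): "deny",
    ("fs", "read"): "allow",
    ("fs", "delete"): "deny",
    ("shell", "rm"): "deny",
}

def decide(channel: str, action: str) -> str:
    """Return 'allow', 'deny', or 'ask' for a proposed agent action."""
    return RULES.get((channel, action), "ask")
```

Context-awareness would mean the rules can also see the arguments and the task at hand, but even this static table is already more granular than read-only/writes/everything.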
Gotta say, I feel kind of sad for the people who feel the need for these claw things.
Are they so busy with their lives that they need an assistant, or do they waste their lives speaking to it like it is a human, and then doomscrolling on some addictive site instead of attending to their lives in the real world?
It is sad. The psychosis has trickled down from the exec level, so people really want these tools to work, yet these tools are so bad that people in this thread are recommending you create a second email account so your openclaw can suggest events to you without being able to delete them.
It's like having to hire a second maid to watch your maid that steals constantly instead of vacuuming yourself in 10 mins.
It’s not a need - it’s a fun new thing - fun to see what’s possible and how it helps.
OpenClaw is not easy to set up or user friendly for most (BlueBubbles and Claw had an annoying bug recently) - but the way I have seen it work well requires an up front time investment and then interest compounds RAPIDLY to help manage things and be more productive.
My guess is maybe you’ve never had an assistant or tried a Claw instance? I’ve never had a human assistant but man I’ve had folks that took silly things off my plate and it’s worth it.
I kind of hope NemoClaw uptake and Spark usage pushes ARM into the spotlight for LLM development, making it the primary release target rather than x86.
This could be the opening we need to wrangle a truly opensource-first ecosystem away from Microsoft and Apple.
The permission scope debate always ends up in the same place. Lock it down too much and it's useless, loosen it up and you're back to square one. And the boundary keeps moving as the agent gets more capable anyway.
What nobody's really talking about is the moment of action itself. Not whether the agent has bash access but whether this specific call should run given what it's actually trying to do right now. That's a completely different problem and nobody's really solved it.
Can someone who has set up a lobster and given it access to their life please tell me how it's going? Or how it has gone? I've been waiting and watching, and I'm yet to be convinced this is a good idea! But maybe I'm missing something? Looking for a new perspective here, not a jab at your setup. It's just that, for all the media buzz and the general consensus among tech people I know, it sounds like a security nightmare.
Agents should be treated like users. Would you give another user who you know is a bit careless write access to your important documents and data? Probably not. I get that people don't want to be bothered by "can I do this and that?" requests, but there's no other real solution.
burningChrome | 9 days ago:
I think your analogy is still accurate; I'm just wondering when the AIDS, drug overdose, and addiction phase of AI will finally hit.
hypfer | 9 days ago:
Isn't that a nice perspective.
raincole | 9 days ago:
Most people don't seriously worry that they'll be targeted by a state-sponsored actor.
Plus, most people already expose their lives to the cloud (in the form of social media, iCloud, Google Drive, Windows BitLocker keys, etc.).
cat-turner | 9 days ago:
Google is just going to do its own version and win again. Everyone uses Google.
ex-aws-dude | 9 days ago:
After that I eat an NVIDIA sandwich from my NVIDIA fridge and drive my NVIDIA car to the NVIDIA store NVIDIA NVIDIA NVIDIA
kristophph | 8 days ago:
I think the experimental Docker AI Sandboxes do this as well: https://docs.docker.com/ai/sandboxes/. Plus free choice of inference model.
jjmarr | 9 days ago:
Now that, as a junior, I can spin up a team of AIs and delegate, I can tackle a bunch of senior-level tasks if I'm good at coordination.
lelanthran | 9 days ago:
Hang on, what's impressive about this?
aprdm | 9 days ago:
I use those tools to make my life easier/faster.