I rushed out nono.sh (the opposite of yolo!) in response to this and its already negated a few gateway attacks.
It uses kernel-level security primitives (Landlock on Linux, Seatbelt on macOS) to create sandboxes where unauthorized operations are structurally impossible. API keys are also stored in apples secure enclave (or the kernel keyring in linux) , and injected at run time and zeroized from memory after use. There is also some blocking of destructive actions (rm -rf ~/)
its as simple to run as: nono run --profile openclaw -- openclaw gateway
You can also use it to sandbox things like npm install:
nono run --allow node_modules
--allow-file package.json package.lock npm install pkg
Its early in, there will be bugs! PR's welcome and all that!
I'm curious, outside of AI enthusiasts have people found value with using Clawdbot, and if so, what are they doing with it? From my perspective it seems like the people legitimately busy enough that they actually need an AI assistant are also people with enough responsibilities that they have to be very careful about letting something act on their behalf with minimal supervision. It seems like that sort of person could probably afford to hire an administrative assistant anyway (a trustworthy one), or if it's for work they probably already have one.
On the other hand, the people most inclined to hand over access to everything to this bot also strike me as people without a lot to lose? I don't want to make an unfair characterization or anything, it just strikes me that handing over the keys to your entire life/identity is a lot more palatable if you don't have much to lose anyway?
From my perspective, not everybody is busy but they are using AI to remove the load from them.
You might think: But that is great right??
I had a chat with a friend also in IT, ChatGPT and alike is the one doing all the "brain part and execution" in most cases.
Entire workflows are done by AI tools, he just presses a button in some cases.
People forget that our brain needs stimulation, if you don't use it, you forget things and it gets dumber.
Watch the next generation of engineers that are very good at using AI but are unable to do troubleshooting on their own.
Look at what happened with ChatGPT4 -> 5, companies workflows worldwide stopped working setting companies back by months.
Do you wanna a real world example???
Watch people who spent their entire lives within an university getting all sort of qualification but never really touched the real deal unable to do anything.
Sure, there are the smarter ones who would put things to the test and found awesome job, but many are jobless because all they did is "press a button", they are just like the AI enthusiasts, remove such tools and they can no longer work.
The whole premise of this thing seems to be that it has access to your email, web browser, messaging, and so on. That's what makes it, in theory, useful.
The prompt injection possibilities are incredibly obvious... the entire world has write access to your agent.
I'm working in AI, but I'd have made this anyway: Molty is my language learning accountability buddy. It crawls the web with a sandboxed subagent to find me interesting stuff to read in French and Japanese. It makes Anki flashcards for me. And it wraps it up by quizzing me on the day's reading in the evening.
All this is running on a cheap VPS, where the worst it has access to is the LLM and Discord API keys and AnkiWeb login.
Moltbot is a security nightmare, especially it's premise (tap into all your data sources) and the rapid uptake by inexperienced users makes it especially attractive for criminal networks.
Things like this are why I don't use AI agents like moltbot/openclaw. Security is just out the window with these things. It's like the last 50 years never happened.
No need to look back 50 years, people already forgot 2021 crypto security lapses that collectively cost billions. Or maybe the target audience here just doesn't care.
It's not perfect but it does have a few opt-in security features: running all tools in a docker container with minimal mounts, requiring approvals for exec commands, specifying tools on an agent by agent basis so that the web agent can't see files and the files agent can't see the web, etc.
That said, I still don't trust it and have it quarantined in a VPS. It's still surprisingly useful even though it doesn't have access to anything that I value. Tell it to do something and it'll find a way!
What I would have expected is prompt injection or other methods to get the agent to do something its user doesn't want it to, not regular "classical" attacks.
At least currently, I don't think we have good ways of preventing the former, but the latter should be possible to avoid.
Apart from the actual exploit, it is intriguing to see how a security researcher can leverage an AI tool to give them an asymmetric advantage to the actual developers of the code. Devs are pretty focused on their own subsystem and it would take serendipity or a ton of experience to be able to spot such patterns.
Thinking about this more .. given all the AI generated code being put into production these days (I routinely see posts of anthropic and others boast how much code is being written by AI). I can see it being much, much harder to review all the code being written by AIs. It makes a lot of sense to use an AI system to find vulnerabilities that humans don't have time to catch.
what worries me here is that the entire personal AI agent product category is built on the premise of “connect me to all your data + give me execution.” At that point, the question isn’t “did they patch this RCE,” it’s more about what does a secure autonomous agent deployment even look like when its main feature is broad authority over all of someone's connected data?
Is the only real answer sandboxing + zero trust + treating agents as hostile by default? Or is this category fundamentally incompatible with least privilege?
do people even care about security anymore? I'll bet many consumers wouldn't even think twice about just giving full access to this thing (or any other flavor of the month AI agent product)
You sound like the confident techie character in a Michael Crichton novel pronouncing "We've thought of everything there's no way for the demon to escape" shortly before the demon escapes.
What I find really amazing is that the same ones who kept saying that cars were/are wasteful and that kept making fun of cryptocurrencies and complaining about the high energy usage to mine Bitcoin are now head first spending $$$ on the most energy intensive endeavour the human race ever invented: AI.
I mean: there are literally people spending $200 and more per month to have their personal, a bit schizophrenic, assistant engage moreover in conspicuous consumption for them.
Now as to my take on it: I think energy, when it comes to 8 billion humans, is basically infinite so I think it's only a matter of converting enough of that energy that either is or reaches our planet into a usable form. So I don't mind energy consumption.
But it'd be nice if could we at least have those who use AI not being hypocrites and stop criticizing Bitcoin mining and ICE cars? (by ICE I mean "Internal Combustion Engine" in case you thought I was talking about other kind of cars)
From now on you're only allowed to criticize ICE cars and Bitcoin mining if you don't use AI.
decodebytes|28 days ago
It uses kernel-level security primitives (Landlock on Linux, Seatbelt on macOS) to create sandboxes where unauthorized operations are structurally impossible. API keys are also stored in apples secure enclave (or the kernel keyring in linux) , and injected at run time and zeroized from memory after use. There is also some blocking of destructive actions (rm -rf ~/)
its as simple to run as: nono run --profile openclaw -- openclaw gateway
You can also use it to sandbox things like npm install:
nono run --allow node_modules --allow-file package.json package.lock npm install pkg
Its early in, there will be bugs! PR's welcome and all that!
https://nono.sh
stijnveken|28 days ago
hedgehog|28 days ago
krackers|28 days ago
Wuzado|28 days ago
overgard|28 days ago
On the other hand, the people most inclined to hand over access to everything to this bot also strike me as people without a lot to lose? I don't want to make an unfair characterization or anything, it just strikes me that handing over the keys to your entire life/identity is a lot more palatable if you don't have much to lose anyway?
Am I missing something?
h4kunamata|28 days ago
You might think: But that is great right??
I had a chat with a friend also in IT, ChatGPT and alike is the one doing all the "brain part and execution" in most cases. Entire workflows are done by AI tools, he just presses a button in some cases.
People forget that our brain needs stimulation, if you don't use it, you forget things and it gets dumber. Watch the next generation of engineers that are very good at using AI but are unable to do troubleshooting on their own.
Look at what happened with ChatGPT4 -> 5, companies workflows worldwide stopped working setting companies back by months.
Do you wanna a real world example???
Watch people who spent their entire lives within an university getting all sort of qualification but never really touched the real deal unable to do anything.
Sure, there are the smarter ones who would put things to the test and found awesome job, but many are jobless because all they did is "press a button", they are just like the AI enthusiasts, remove such tools and they can no longer work.
lxgr|28 days ago
mh2266|28 days ago
The prompt injection possibilities are incredibly obvious... the entire world has write access to your agent.
???????
amelius|28 days ago
jondwillis|28 days ago
Trufa|28 days ago
It’s definitely not it it’s final form but it’s showing potential.
voxgen|28 days ago
All this is running on a cheap VPS, where the worst it has access to is the LLM and Discord API keys and AnkiWeb login.
mentalgear|28 days ago
g947o|28 days ago
chrisjj|28 days ago
avaer|28 days ago
jungfty|28 days ago
[deleted]
ethin|28 days ago
avaer|28 days ago
voxgen|28 days ago
That said, I still don't trust it and have it quarantined in a VPS. It's still surprisingly useful even though it doesn't have access to anything that I value. Tell it to do something and it'll find a way!
charcircuit|28 days ago
dotancohen|28 days ago
lxgr|28 days ago
At least currently, I don't think we have good ways of preventing the former, but the latter should be possible to avoid.
brutus1213|28 days ago
Thinking about this more .. given all the AI generated code being put into production these days (I routinely see posts of anthropic and others boast how much code is being written by AI). I can see it being much, much harder to review all the code being written by AIs. It makes a lot of sense to use an AI system to find vulnerabilities that humans don't have time to catch.
mortsnort|28 days ago
bmit|28 days ago
vulnwrecker5000|28 days ago
Is the only real answer sandboxing + zero trust + treating agents as hostile by default? Or is this category fundamentally incompatible with least privilege?
yikes
mh2266|28 days ago
no, they documented it
https://docs.openclaw.ai/gateway/security#node-execution-sys...
chrisjj|28 days ago
/i
unknown|28 days ago
[deleted]
bmit|28 days ago
lxgr|28 days ago
Also, if you think about it, billions of people aren't running Moltbot at all.
ejcho|28 days ago
nsm100|28 days ago
avaer|28 days ago
lxgr|28 days ago
clawsyndicate|28 days ago
[deleted]
hughw|28 days ago
chrisjj|28 days ago
electroglyph|28 days ago
TacticalCoder|28 days ago
I mean: there are literally people spending $200 and more per month to have their personal, a bit schizophrenic, assistant engage moreover in conspicuous consumption for them.
Now as to my take on it: I think energy, when it comes to 8 billion humans, is basically infinite so I think it's only a matter of converting enough of that energy that either is or reaches our planet into a usable form. So I don't mind energy consumption.
But it'd be nice if could we at least have those who use AI not being hypocrites and stop criticizing Bitcoin mining and ICE cars? (by ICE I mean "Internal Combustion Engine" in case you thought I was talking about other kind of cars)
From now on you're only allowed to criticize ICE cars and Bitcoin mining if you don't use AI.