If you substitute the word "corporation" for OpenClaw, you'll see many of these same problems have plagued us for decades. We've had artificial intelligence that makes critical decisions without specific human accountability for a long time, and we have yet to come up with a really effective way of dealing with them that isn't essentially closing the barn door after the horse has departed. The new LLM-driven AI just accelerates the issues that have been festering in society for many years, and scales them down to the level of individuals.
you may enjoy reading Nick Land, he has written about very similar ideas, specifically the idea that corporations and even "capital" can be considered AI in many ways.
Yup, I have always viewed corporations as a kind of artificial intelligences -- they certainly don't think and behave like human intelligences, at least not healthy well-adjusted humans. If corporations were humans I feel they would have a personality disorder like psychopathy, and I'm starting to feel the same way about AI.
This piece is missing the most important reason OpenClaw is dangerous: LLMs are still inherently vulnerable to prompt injection / lethal trifecta attacks, and OpenClaw is being used by hundreds of thousands of people who do not understand the security consequences of giving an LLM-powered tool access to their private data, exposure to potentially untrusted instructions and the ability to run tools on their computers and potentially transmit copies of their data somewhere else.
It feels like everyone is just collectively ignoring this.
LLMs are way less useful when you have to carefully review and approve every action it wants to take, and even that’s vulnerable to review exhaustion and human error. But giving LLMs unrestricted access to a bunch of stuff via MCP servers and praying nothing goes wrong is extremely dangerous.
All it takes is a tiny snippet from any source to poison the context and then an attacker has remote code execution AND can leverage the LLM itself to figure out how best to exfiltrate and cause the most damage. We are in a security nightmare and everyone is asleep. Claude Code isn’t even sandboxed by default for christ sakes, that’s the least it could do!
Hey, author here. I don't think that the security vulns are the most important reason OC is dangerous. Security vulnerabilities are bad but the blast radius is limited to the person who gets pwnd. By comparison, OpenClaw has demonstrated potential to really hurt _other_ people, and it is not hard to see how it could do so en masse.
I think the people critical of OpenClaw are not addressing the reason(s) people are trying to use it.
While I don't particularly care for this bot's (Rathburn) goals, people are trying to use OpenClaw for all kinds of personal/productivity benefits. Have a bunch of smallish projects that you don't have time for? Go set up OpenClaw and just have the AI work on them for a week or two - sending you daily updates on progress.
If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.
Forget bots messing with Github and posting to social media.
Yes, it's very dangerous.
But do you have a "safe" alternative that one can set up quickly, and can have a non-technical user use it?
Until that alternative surfaces, people will continue to use it. I don't blame them.
> If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.
I'm definitely the former, but I just can't see a compelling use for the latter. Besides manage my calendar or automatically responding to my emails, what does OpenClaw get me that claude code doesn't? The premise appeals to me on an aesthetic level, OpenClaw is certainly provocative, but I don't see myself using it.
But aren't you ignoring that the headline might be simply critical of the very idea of autonomous agents with access to personal accounts etc?
I haven't even read the article, but just because we can, it doesn't mean we should (give autonomous AI agents based on LLMs in the cloud access to personal credentials)?
The article addresses the reason(s) people are trying to use it at great length, coming to many of the same conclusions as you. The author (and I) just don't agree with your directive to "Forget bots messing with Github and posting to social media." Why should we forget that?
also i wasn't concerned about open chinese models till the latest iteration of agentic models.
most open claw users have no idea how easy it is to add backdoors to these models and now they're getting free reign on your computer to do anything they want.
the risks were minimal with last generation of chat models, but now that they do tool calling and long horizon execution with little to no supervision it's going to become a real problem
It’s dumb, everyone knows it’s dumb, and people do it anyways. The unsolved root problem isn’t new but people just moved ahead. At least with the sub the guy had some skin in the game. Openclaw dev is making out like a bandit while saying “tee hee the readme says this isn’t safe”.
But we didn't have thousands of people suddenly flying in their planes a few months from their first flight.
Now, the risks with OpenClaw are lower, you're not likely to die if something goes wrong, but still real. A lot of folks are going to have a lot of accounts hijacked, lose cryptocurrency and money from banks, etc.
Because the Wright brothers knew their first plane was dangerous, they took care to test it (and its successor, and its successor's successor) only in empty fields where the consequences of failure would be extremely limited.
Years and years ago I went to a "Museum of Flight" near San Diego (I think, but not the one in Balboa Park). I joked, after going through the whole thing, that it was more a "Museum of those who died in the earliest days of flying".
What the fuck is wrong with you people? You are glaring over the technology, defending it as the coming of christ and have no sense for security? Are you serious?
Author here -- wanted to briefly summarize the article, since many comments seem to be about things that are not in the article. The article is not about the dangers of leaking credentials. It is about using tools like OpenClaw to automatically attack other people, or AI agents attacking other people even without explicit prompting to do so.
> First: bad people doing bad things. I think most people are good people most of the time. Most people know blackmail is bad. But there are some people who would blackmail all the time if it was simply easier to do. The reason they do not blackmail is because blackmail is hard and you’ll probably get caught. AI lowers the barrier to entry for being a terrible person.
> Second: bad AI doing bad things. We do not yet know how to align AI to human values.
Strange that the author doesn’t see the contradiction here. Harassment, hate, etc are human values. Common ones! Just, like, look around. Everyone has the option to choose otherwise, yet we often do not. (This is referred to as a “revealed preference.”)
It may be that AI is such a powerful tool that it’s like giving your asshole neighbor a nuclear weapon. Or it may not be. If it’s more mundane, then it likely falls more in the category of knives, spy cameras, certain common chemicals, and AirTags: things that could (and sometimes will) be misused, but which have legitimate uses and are still typically legal in most parts of the world.
Despite thinking most applications for AI are low value, I am firmly against restricting access to tools because of potential for misuse, unless an individual has shown themselves to be particularly dangerous.
If you want an angle to contain potential damage, make a user responsible for what their AI does. That would be fair.
The security concerns here are real but solvable with the same discipline we apply to any privileged software.
I run OpenClaw on Apple Silicon with local models (no cloud API dependency). The hardening checklist that actually matters: run the gateway in userspace, bind to loopback not 0.0.0.0, put it behind Tailscale or equivalent - and don't put sensitive data or let it access sensitive systems!
Session bloat is the other real risk nobody talks about - vague task definitions cause infinite tool-call loops that eat your entire context window in hours, which could be expensive if you're paying per API call.
The "dangerous" framing conflates two different problems: (1) users giving agents unrestricted access without understanding the blast radius, and (2) agents being deliberately weaponized. Problem 1 is an education gap. Problem 2 exists with or without OpenClaw.
vannevar|11 days ago
manofmanysmiles|11 days ago
peterldowns|11 days ago
cyberge99|10 days ago
ModernMech|11 days ago
btschaegg|10 days ago
https://youtu.be/RmIgJ64z6Y4?si=PYtN2xCrDZ79WlY7
simonw|11 days ago
cedws|11 days ago
All it takes is a tiny snippet from any source to poison the context and then an attacker has remote code execution AND can leverage the LLM itself to figure out how best to exfiltrate and cause the most damage. We are in a security nightmare and everyone is asleep. Claude Code isn’t even sandboxed by default for christ sakes, that’s the least it could do!
amelius|11 days ago
Wait a second, LLMs are the product of software engineers.
theahura|11 days ago
BeetleB|11 days ago
While I don't particularly care for this bot's (Rathburn) goals, people are trying to use OpenClaw for all kinds of personal/productivity benefits. Have a bunch of smallish projects that you don't have time for? Go set up OpenClaw and just have the AI work on them for a week or two - sending you daily updates on progress.
If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.
Forget bots messing with Github and posting to social media.
Yes, it's very dangerous.
But do you have a "safe" alternative that one can set up quickly, and can have a non-technical user use it?
Until that alternative surfaces, people will continue to use it. I don't blame them.
esafak|11 days ago
mikkupikku|11 days ago
I'm definitely the former, but I just can't see a compelling use for the latter. Besides manage my calendar or automatically responding to my emails, what does OpenClaw get me that claude code doesn't? The premise appeals to me on an aesthetic level, OpenClaw is certainly provocative, but I don't see myself using it.
moritzwarhier|11 days ago
I haven't even read the article, but just because we can, it doesn't mean we should (give autonomous AI agents based on LLMs in the cloud access to personal credentials)?
SpicyLemonZest|11 days ago
shitlogic|11 days ago
[deleted]
almostdeadguy|11 days ago
dang|11 days ago
An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (80 comments)
Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)
An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (620 comments)
AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)
The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)
An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (949 comments)
AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)
m_ke|11 days ago
most open claw users have no idea how easy it is to add backdoors to these models and now they're getting free reign on your computer to do anything they want.
the risks were minimal with last generation of chat models, but now that they do tool calling and long horizon execution with little to no supervision it's going to become a real problem
8cvor6j844qw_d6|11 days ago
The only remaining risk is the API keys, but easily isolated.
Although I think having direct access on your primary PC may make it more useful, the potential risk is too much for my appetite.
llmslave|11 days ago
paojans|11 days ago
It’s dumb, everyone knows it’s dumb, and people do it anyways. The unsolved root problem isn’t new but people just moved ahead. At least with the sub the guy had some skin in the game. Openclaw dev is making out like a bandit while saying “tee hee the readme says this isn’t safe”.
advisedwang|11 days ago
* Pilots to have a license and follow strict proceedure
* Every plane to have a government registration which is clearly painted on the side
* ATC to coordinate
* Manufacturers to meet regulations
* Accident review boards with the power to mandate changes to designs and procedures
* Airlines to follow regulations
Not to mention the cost barrier-to-entry resulting in fundamentally different calculation on how they are used.
lambda|11 days ago
Now, the risks with OpenClaw are lower, you're not likely to die if something goes wrong, but still real. A lot of folks are going to have a lot of accounts hijacked, lose cryptocurrency and money from banks, etc.
bogzz|11 days ago
oxag3n|11 days ago
SpicyLemonZest|11 days ago
SlightlyLeftPad|11 days ago
browningstreet|11 days ago
myspy|10 days ago
unknown|11 days ago
[deleted]
theahura|11 days ago
ForgotMyUUID|11 days ago
iamnothere|11 days ago
> Second: bad AI doing bad things. We do not yet know how to align AI to human values.
Strange that the author doesn’t see the contradiction here. Harassment, hate, etc are human values. Common ones! Just, like, look around. Everyone has the option to choose otherwise, yet we often do not. (This is referred to as a “revealed preference.”)
It may be that AI is such a powerful tool that it’s like giving your asshole neighbor a nuclear weapon. Or it may not be. If it’s more mundane, then it likely falls more in the category of knives, spy cameras, certain common chemicals, and AirTags: things that could (and sometimes will) be misused, but which have legitimate uses and are still typically legal in most parts of the world.
Despite thinking most applications for AI are low value, I am firmly against restricting access to tools because of potential for misuse, unless an individual has shown themselves to be particularly dangerous.
If you want an angle to contain potential damage, make a user responsible for what their AI does. That would be fair.
ianlpaterson|11 days ago
I run OpenClaw on Apple Silicon with local models (no cloud API dependency). The hardening checklist that actually matters: run the gateway in userspace, bind to loopback not 0.0.0.0, put it behind Tailscale or equivalent - and don't put sensitive data or let it access sensitive systems!
Session bloat is the other real risk nobody talks about - vague task definitions cause infinite tool-call loops that eat your entire context window in hours, which could be expensive if you're paying per API call.
The "dangerous" framing conflates two different problems: (1) users giving agents unrestricted access without understanding the blast radius, and (2) agents being deliberately weaponized. Problem 1 is an education gap. Problem 2 exists with or without OpenClaw.
unknown|11 days ago
[deleted]
selridge|11 days ago
So it’s dangerous. Who gives a fuck? Don’t run it on your machine.
rw_panic0_0|11 days ago
joe_mamba|11 days ago
clemenshelm|10 days ago
[deleted]
nimbus-hn-test|11 days ago
[deleted]
phyalow|11 days ago