> OpenClaw has nearly half a million lines of code, 53 config files, and over 70 dependencies. This breaks the basic premise of open source security. Chromium has 35+ million lines, but you trust Google’s review processes. Most open source projects work the other way: they stay small enough that many eyes can actually review them. Nobody has reviewed OpenClaw’s 400,000 lines.
This reminds me of a very common thing posted here (and elsewhere, e.g. Twitter) to promote how good LLMs are and how they're going to take over programming: the number of lines of code they produce.
As if every competent programmer suddenly forgot the whole idea of LoC being a terrible metric for measuring productivity or -even worse- software quality. Or the idea that software is meant to be written to be readable (to water down "Programs are meant to be read by humans and only incidentally for computers to execute" a bit). Or even Bill Gates' infamous "Measuring programming progress by lines of code is like measuring aircraft building progress by weight".
Even if you believe that AI will -somehow- take over the whole task completely so that no human will need to read code anymore, there is still the issue that the AIs will need to be able to read that code, and AIs are much worse at reading code (especially with their limited context sizes) than at generating it. So LoC remains a bad measure even if all you care about is the driest "does X do the thing I want?" aspect, ignoring other quality concerns.
Like if you had told pg to his face in (pre AI) office hours “I’m producing a thousand lines of code an hour”, I’m pretty sure he’d have laughed and pointed out how pointless that metric was?
Somehow, this narrative has taken hold at multiple levels of management, especially amongst non-technical management: that "typing" was the bottleneck of software engineering. Reality, however, is more complex.
The act of "typing" code has always been mixed in with researching solutions, which means that code often took a different shape or design based on the outcome of that research. This nuance, however, has typically been dismissed as faff, with the outcome that management thinks producing X lines of code can be done "quickly", and people disagreeing with said statements are heretics who should be burned at the stake.
This is why, in my personal opinion, AI makes me only about 20% more productive: I often find myself disagreeing with the solution it came up with, and rather than steering it to obtain the outcome I want, I just end up rewriting the code myself. On the other hand, for prototypes where I don't care about understanding the code at all, it is a much bigger time saver.
I could choose not to care about the code at all, and while that may be acceptable to management, not being responsible for the code but being responsible for the outcomes seems to be the same shit as being given responsibility without autonomy, which is not something I can agree to.
More people believe a software developer's job and value are in the lines of code produced.
Perhaps over half of engineering managers, unconsciously or admittedly, take the number of PRs and code additions as a rough but valid measure of productivity.
I recall a role in architecture where a senior director asked me how come a principal engineer hadn't committed any code in two weeks, given that we pay principals a fortune.
I asked that brilliant mind whether we paid principal engineers to code or to make sure we deliver value.
Needless to say, the question went unanswered, and the so-called principal was fired a few months later. The entire company, in fact, was later sold for a bargain too, even though it had thousands of clients globally.
The "LLMs can replace engineers" phenomenon converges from two simple facts: we haven't solved the misconception of what engineering roles are, and it's the perfect scapegoat to justify layoffs.
Leaders haven't all gone insane; they answer difficult questions with the narrative of least resistance.
“LoC is a bad metric” has been the catchphrase of engineers for years, because it runs counter to the expectations of management and the general public, right? So it makes sense that LoC is the metric used to advertise to them.
As lines of code become executable line noise, I swear that we need better approaches to developing software - either enforce better test coverage across the board, develop and use languages where it’s exceedingly hard to end up with improper states, or sandbox the frick out of runtimes and permissions.
Just as an example: I should be able to give each program an allowlist of network endpoints it's allowed to use for inbound and outbound traffic, sandbox it to specific directories, and control resource access EASILY. Docker at least gets some of those right, but most desktop OSes feel like the Wild West even compared to the permissions model of iOS.
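To make the allowlist idea concrete, here's a toy in-process version in Python (hostnames and the guard itself are illustrative; a process can trivially bypass its own checks, so real enforcement has to live in the kernel, a proxy, or the container runtime):

```python
import socket

# Hypothetical per-program allowlist. This is advisory only: real
# enforcement must come from the OS, a proxy, or the container runtime.
ALLOWED_HOSTS = {"api.example.com", "pypi.org"}

_real_create_connection = socket.create_connection

def guarded_create_connection(address, *args, **kwargs):
    # Check the destination host before any outbound TCP connection.
    host = address[0]
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"outbound connection to {host!r} blocked")
    return _real_create_connection(address, *args, **kwargs)

socket.create_connection = guarded_create_connection
```

The point is how small the policy surface is; the hard part is that no desktop OS gives you a first-class place to put it.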
LLMs are incredibly eager to write new code, rather than modifying or integrating with existing systems. I agree that context windows are too small currently for this to seem sustainable. Without reasonable architecture pure vibe coded software feels like it’s going to cap out at a certain size.
As mentioned elsewhere in the thread, there is very clearly an obsession with quantity over quality. Not a new phenomenon by any means: people were already complaining about this in the 19th century! But it has reached a new absurd height with this latest trend.
It’s definitely an issue when using coding assistants.
If you are careful and specific you can keep things reasonable, but even when I am careful and do consolidation / factoring passes, have rigid separation of concerns, etc., I find that the LLM's code is bigger than mine, mainly for two reasons:
1) more extensive inline documentation
2) more complete expression of the APIs across concerns, as well as stricter separation.
2.5 often, also a bit of demonstrative structure that could be more concise but exists in a less compact form to demonstrate its purpose and function (high degree of cleverness avoidance)
All in all, if you don’t just let it run amok, you can end up with better code and increased productivity in the same stroke, but I find it comes at about a 15% plumpness penalty, offset by readability and obvious functionality.
Oh, forgot to mention, I always make it clean room most of the code it might want to pull in from libraries, except extremely core standard libraries, or for the really heavy stuff like Bluetooth / WiFi protocol stacks etc.
I find a lot of library type code ends up withering away with successive cleanup passes, because it wasn’t really necessary just cognitively easier to implement a prototype. With refinement, the functionality ends up burrowing in, often becoming part of the data structure where it really belonged in the first place.
That's because they're an additive tool. Everything boils down to "adding" more code. But in the long term it's not about how much code you can add but how little you can get away with. And this is an impossible task for the LLMs. How would you train one not to write code? What would the training data look like? Would that be all the lines of code that haven't been written?
The lines of code thing isn't because we think it's a good metric, but because we have literally no good metric and we're trying to communicate a velocity difference. If you invent a new metric that doesn't have LoC's problems while being as easy to use, you'll be a household name in software engineering in short order.
Also, AI is better at reading code than writing it, but the overhead to FIND code is real.
I've been waiting for someone to say this. An agent will generally produce far more code than technically necessary for the task. It's a kind of over engineering which makes it increasingly harder to wrap your head around the codebase.
Really it just continues to demonstrate that "code quality" is not and was not a requirement.
Even with supposedly expert human hand written software powering our products for the last decades, they frequently crash, have outages, and show all sorts of smaller bugs.
There are literally too many examples to count of video games being released with nigh-unplayable amounts of bugs and still selling millions and producing sequels.
Windows 95 and friends were famously buggy and crash prone yet produced one of the most valuable companies in the world.
Respectfully, it feels like your position requires a very low, if not brain-dead level of incompetence on the part of LLM users, in order for your conclusion to be correct.
My personal anecdote: I used an LLM recently to basically vibe code a password manager.
Now, I’ve been a software engineer for 20 years. I’m very familiar with the process of code review and how to dive in to someone else’s code and get a feel for what’s happening, and how to spot issues. So when I say the LLM produced thousands of lines of working code in a very short time (probably at least 10 times faster than I would have done it), you could easily point at me and say “ha, look at ninkendo, he thinks more lines of code equals better!” And walk away feeling smug. Like, in your mind perhaps you think the result is an unmaintainable mess, and that the only thing I’m gushing about is the LOC count.
But here’s the thing: it actually did a good job. I was personally reviewing the code the whole time. And believe me when I say, the resulting product is actually good. The code is readable and obvious, it put clean separation of responsibilities into different crates (I’m using rust) and it wrote tons of tests, which actually validate behavior. It’s very near the quality level of what I would have been able to do. And I’m not half bad. (I’ve been coding in rust in particular, professionally for about 2 years now, on top of the ~20 years of other professional programming experience before that.)
My takeaway is that as a professional engineer, my job is going to be shifting from doing the actual code writing, to managing an LLM as if it’s my pair programming partner and it has the keyboard. I feel sad for the loss of the actual practice of coding, but it’s all over but the mourning at this point. This tech is here to stay.
Yeah, I would view this as a “levels of maturity” thing. It’s not completely misguided to judge a JD on whether they shipped 0loc or 1kloc. Assuming you have some quality counter-metric like “the app works”.
For staff engineers it’s obviously completely nonsense, many don’t code and just ship architecture docs. Or you can ship a net negative refactor. Etc.
So this should tell you that LLMs are still in “savant JD” territory.
That said, being given permission to ship more lines of code under existing enterprise quality bars _is_ a meaningful signal.
A question I've been asking myself and which I honestly want to put out there - and I apologize in advance, because you will see me repeat it in other threads, out of genuine curiosity:
Does your life have so much friction that you need a digital agent to act on your behalf?
Some of the use cases I saw on the OpenClaw website, like "checking me into a flight", are non-issues for me.
I work in business automation, but paradoxically I don't think too much about annoyances in my private life. Everything feels rather frictionless.
In business, I see opportunities to solve friction and that's how I make money, but even then, often there are barriers that are very hard to surmount:
(a) problems are complex to solve and require complex solutions such as deterministic or ML systems that LLMs are not even close to being able to create ad-hoc
(b) entrenched processes and incumbent organizations create moats that are hard to cross (ex: LinkedIn makes automation very hard)
(c) some degree of friction, in some cases, may actually be useful!
I imagine there are similar dynamics in the consumer space, but more than anything, I may not be seeing issues with such a critical eye (I like to relax after work, after all)
So, do you have problems in your private life that you'd want to take on the risks - and friction - of maintaining these agents?
Similar CV, similar take. My guess? Anyone involved in automation for >2 years at the enterprise levels knows in their gut all the silent, sudden, annoying ways automation can fail and so has a higher internal bar for "must save this much time to be worth automating."
That said, old beliefs should be challenged by new technological capabilities!
If LLM based automation is (a) less fragile and (b) quicker to develop, then that bar should be lowered.
My take is that agents should only take actions that you can recover from by default. You can gradually give it more permission and build guardrails such as extra LLM auditing, time boxed whitelisted domains etc. That's what I'm experimenting with https://github.com/lobu-ai/lobu
1. Don't let it send emails from your personal account, only let it draft email and share the link with you.
2. Use incremental snapshots and if agent bricks itself (often does with Openclaw if you give it access to change config) just do /revert to last snapshot. I use VolumeSnapshot for lobu.ai.
3. Don't let your agents see any secret. Swap the placeholder secrets at your gateway and put human in the loop for secrets you care about.
4. Don't let your agents have outbound network directly. It should only talk to your proxy which has strict whitelisted domains. There will be cases the agent needs to talk to different domains and I use time-box limits. (Only allow certain domains for current session 5 minutes and at the end of the session look up all the URLs it accessed.) You can also use tool hooks to audit the calls with LLM to make sure that's not triggered via a prompt injection attack.
Last but not least, use proper VMs like Kata Containers or Firecracker, not just Docker containers, in production.
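A minimal sketch of the time-boxed allowlist from point 4, assuming a gateway that sees every outbound URL (the class and method names here are made up for illustration):

```python
import time
from urllib.parse import urlparse

class TimeBoxedAllowlist:
    """Gateway-side check: domains are granted per session with an
    expiry, and every lookup is recorded for an end-of-session audit."""

    def __init__(self):
        self._grants = {}    # domain -> expiry timestamp
        self.audit_log = []  # (url, allowed) pairs to review later

    def grant(self, domain, ttl_seconds=300):
        # Allow a domain for this session only, e.g. 5 minutes.
        self._grants[domain] = time.time() + ttl_seconds

    def check(self, url):
        # Deny anything whose grant is missing or expired.
        domain = urlparse(url).hostname
        allowed = self._grants.get(domain, 0.0) > time.time()
        self.audit_log.append((url, allowed))
        return allowed
```

At the end of the session you'd hand `audit_log` to a second model (or a human) to look for anything injection-shaped.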
That's a decent practice from the lens of reducing blast radius. It becomes harder when you start thinking about unattended systems that don't have you in the loop.
One problem I'm finding with discussions about automation or semi-automation in this space is that there are many different use cases for many different people: a software developer deploying an agent in production, vs. an economist using Claude, vs. a scientist throwing a swarm at common ML exploratory tasks.
Many of the recommendations will feel like too much or too little complexity for what people need, and the fundamentals get lost: intent in design, control, the ability to collaborate if necessary, fast iteration due to an easy feedback loop.
AI evals, sandboxing, and observability seem like three key pillars for maintaining intent in automation, but how to help these different audiences be safely productive while staying fast, and speak the same language when they need to build products together, is what is mostly occupying my thoughts (and practical tests).
I'd like to try a pattern where agents only have access to read-only tools. They can read your emails, read your notes, read your texts, maybe even browse the internet with only GET requests...
But any action with side-effects ends up in a Tasks list, completely isolated. The agent can't send an email, they don't have such a tool. But they can prepare a reply and put it in the tasks list. Then I proof-read and approve/send myself.
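A sketch of what that split could look like, with hypothetical tool names; the point is that the toolset handed to the agent simply contains nothing with side effects:

```python
from dataclasses import dataclass, field

@dataclass
class TaskQueue:
    """Side-effecting intents land here for human review; nothing executes."""
    pending: list = field(default_factory=list)

def read_inbox(inbox):
    # Read-only tool: safe to expose to the agent directly.
    return list(inbox)

def make_draft_reply(tasks):
    # The agent never gets a send_email tool; drafting only enqueues.
    def draft_reply(to, body):
        tasks.pending.append({"action": "send_email", "to": to, "body": body})
        return "queued for human approval"
    return draft_reply

tasks = TaskQueue()
AGENT_TOOLS = {"read_inbox": read_inbox, "draft_reply": make_draft_reply(tasks)}
```

Approval then happens entirely outside the agent loop: you drain `tasks.pending` yourself and hit send (or don't).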
The proxy approach for secret injection is the right mental model, but it only works if the proxy itself is hardened against prompt injection. An agent that can't access secrets directly can still be manipulated into crafting requests that leak data through side channels — URL params, timing, error messages.
The deeper issue: most of these guardrails assume the threat is accidental (agent goes off the rails) rather than adversarial (something in the agent's context is actively trying to manipulate it). Time-boxed domain whitelists help with the latter but the audit loop at session end is still reactive.
The /revert snapshot idea is underrated though. Reversibility should be the first constraint, not an afterthought.
> 1. Don't let it send emails from your personal account, only let it draft email and share the link with you.
Right now there's no way to have fine-grained draft/read only perms on most email providers or email clients. If it can read your email it can send email.
> 3. Don't let your agents see any secret. Swap the placeholder secrets at your gateway and put human in the loop for secrets you care about.
harder than you might think. openclaw found my browser cookies. (I ran it on a vm so no serious cookies found, but still)
This doesn’t really feel like enough guardrails to prevent the type of problems we’ve seen so far.
For example an agent in a single container which has access to an email inbox, can still do a lot of damage if that agent goes off the rails.
We agree this agent should not be trusted, yet the ideas proposed as a solution are insufficient. We need a fundamentally different approach.
Also, and this is just my ignorance about Claws: if we allow an agent permission to rewrite its own code to implement skills, what stops it from removing whatever guardrails exist in that codebase?
I think the best place to put barriers in place is at the mcp / tool layer. The email inbox mcp should have guardrails to prevent damage. Those guardrails could be fine grained permissions, but could also be an adversarial model dedicated to prevent misuse.
Wouldn't you get >50% of the usefulness and 0% of the risk if you add read+draft permissions for the email connection through a proxy or oauth permissions? Then your claw can draft replies and you have to manually review+send. It's not a perfect PA that way, but could still be better than doing everything yourself for the vast majority of people who don't have a PA anyway?
It feels like, just like SWEs do with AI, we should treat the claw as an enthusiastic junior: let it do stuff, but always review before you merge (or in this case: send).
I have twice encountered a phone tree AI agent saying my problem could not be solved and then ending the call. One was for PayPal fraud and the other was for closing an unused bank account.
For right now my trick is to claim a problem that is more recognizable and mundane to the AI (i.e. lie) and then when I finally get the human just say "oh, that was a bunch of hooey, here's what I'm trying to do". For PayPal that involved asking for help with a business tax that did not exist. For my bank it involved asking to /open/ a new account. Obviously the AI wants to help me open an account, even if my intention is to close one.
That will only work for so long but it’s something
> OpenClaw has nearly half a million lines of code, 53 config files, and over 70 dependencies. This breaks the basic premise of open source security. Chromium has 35+ million lines, but you trust Google’s review processes. Most open source projects work the other way: they stay small enough that many eyes can actually review them. Nobody has reviewed OpenClaw’s 400,000 lines. It was written in weeks with no proper review process.
Yeah, but the world rewarded this by making it the fastest growing github project. The author gets on the podcasts, gets the high profile jobs from big tech. I'm more encouraged to do things this way than being security minded about all this.
And there's no accountability to this at all. If an agent leaks private data, the user is to blame and not the author. If Google bans your services for using API keys incorrectly, we cast the bad eye towards Google and not the maintainer that enabled and approved it.
There's just so much incentive for "not reading code" and not developing secure code, and it's just going to get worse over time. This is the hype, and the type of engineering, that we all allow either by agreeing or by staying silent.
I agree with the author, but the world works off a different set of principles than what we're used to. I just see the world blindly trusting agents more.
Tools like OpenClaw have two core capabilities: the ability to rewrite themselves, and the ability to independently figure out how to connect to different services and establish those connections.
Yesterday, I was responding to a client ticket about what I knew wasn't a bug. It was something the client had requested themselves. The product is complex, constantly evolving, and has spawned dozens of related Jira tickets over time. So I asked my agent to explore the git history, identify changes to that specific feature, and cross-reference them with comments across the related tickets. Within minutes, I had everything I needed to write a clear response. It even downloaded PDF and DOCX files the client had attached. All of this was possible because my agent is connected to GitHub and Jira, and can clone repos locally since it runs on a VPS.
A second example: I was in an online meeting, taking notes as we went. Afterward, I asked the agent to pull the meeting transcript from Fireflies and use it to enrich my notes in Obsidian. I could have also asked it to push my action items straight into Todoist.
I only use my own “agent” ("my", because I program it myself, since my needs are different from yours) to retrieve information about the audio I upload to it (from video calls and audio recordings). No others use cases for me
> If you want to add Telegram support, don't create a PR that adds Telegram alongside WhatsApp. Instead, contribute a skill file (.claude/skills/add-telegram/SKILL.md) that teaches Claude Code how to transform a NanoClaw installation to use Telegram.
Why would you want that? You want every user to ask the AI to implement the same feature?
No, but Podman is. The recent escapes at the actual container level have been pretty edge case. It's been some years since a general container escape has been found. Docker's CVE-2025-9074 was totally unnecessary and due to Docker being Docker.
xixyhuang nailed it — container escape is a red herring. The real problem is that your agent holds your OAuth tokens and can do massive damage without ever leaving its sandbox.
What bugs me about the current discourse is everyone focuses on where agents run and what they can access, but almost nobody talks about reconstructing what they actually did after the fact. Aviation has black boxes. Finance has audit trails. Agent systems have... logs the agent writes about itself. That's like asking the pilot to self-report the flight recorder.
Until action logging happens outside the agent's own process, none of the sandboxing stuff matters much.
I was blown away by OpenClaw until I saw the bill. Ultimately, I think of these ecosystems as personal enhancements, and AI costs need to come down dramatically for them to solve real problems. Worse, however, is the security theater. I would not want to be the operator of any business built on front-line LLM usage via a yolo'd agent framework. I'm very happy to use these for siloed components that are well isolated and have reasonable QA processes (and that can even include agents, since now we literally have no excuse not to have amazing test coverage).
Their niche is going to be back-office support, but even that creates risk boundaries that can be insurmountable. A friend of mine had an agent do sudo rm -rf ... wtf.
My view is that I want to launch an agent based service, but I'm building a statically typed ecosystem to do so with bounds and extreme limits.
The proxy-based secret injection approach mentioned upthread is solid for network credentials, but it doesn't cover the local attack surface — your SSH keys, GPG keys, AWS credentials sitting in dotfiles. Those are the actual high-value targets for a compromised agent on a dev workstation.
I run Claude Code with 84 hooks, and the one I trust most is a macOS Seatbelt (sandbox-exec) wrapper on every Bash tool call. It's about 100 lines of Seatbelt profile that denies read/write to ~/.ssh, ~/.gnupg, ~/.aws, any .env file, and a credentials file I keep. The hook fires on PreToolUse:Bash, so every shell command the agent runs goes through sandbox-exec automatically.
The key design choice: Seatbelt operates at the kernel level. The agent can't bypass it by spawning subprocesses, piping through curl, or any other shell trick — the deny rules apply to the entire process tree. Containers give you this too, but the overhead is absurd for a CLI tool you invoke 50 times a day. Seatbelt adds ~2ms of latency.
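For a flavor of what a deny rule of that shape looks like in SBPL (this is a sketch, not my full profile; `sandbox-exec` is deprecated but still present, and SBPL is largely undocumented, so treat the syntax as approximate):

```lisp
(version 1)
;; Default-allow, then carve out the credential paths; deny rules win.
(allow default)
(deny file-read* file-write*
      (subpath (string-append (param "HOME") "/.ssh"))
      (subpath (string-append (param "HOME") "/.gnupg"))
      (subpath (string-append (param "HOME") "/.aws")))
```

Invoked as `sandbox-exec -D HOME="$HOME" -f agent.sb <command>`, so the profile stays path-independent.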
I built it with a dry_run mode (logs violations but doesn't block) and ran it for a week before enforcing. 31 tests verify the sandbox catches attempts to read blocked paths, write to them, and that legitimate operations (git, python, file editing in the project directory) pass through cleanly.
The paths to block are in a config file, so it's auditable — you can diff it in code review. And it's composable with other layers: I also run a session drift detector that flags when the agent wanders off-task (cosine similarity against the original prompt embedding, checked every 25 tool calls).
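The drift check itself is tiny once you have embeddings; modulo the embedding call (whatever model you use, assumed here to return plain float vectors), it's roughly:

```python
import math

def cosine_similarity(a, b):
    # Plain cosine similarity over two equal-length float vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def has_drifted(task_embedding, current_embedding, threshold=0.5):
    # Flag when the recent tool-call context no longer resembles the
    # original task prompt. The threshold is a tunable assumption.
    return cosine_similarity(task_embedding, current_embedding) < threshold
```

All the real work is in choosing what to embed (the original prompt vs. a rolling window of tool calls) and where to set the threshold.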
None of this solves prompt injection fundamentally, but "the agent physically cannot read my SSH keys regardless of what it's been tricked into doing" is a meaningful property.
the trust problem cuts both ways tho — users don't trust agents, but the bigger issue is agents trusting each other. once you have multi-agent pipelines, you're one rogue upstream output away from a cascade. sandboxing individual agents is table stakes; what's actually hard is defining trust boundaries between them
I move the security boundary one or two layers up: the Unix user (on my main machine I run them as an `agent` user, so they can't read or write my files), or even better, just give it a separate machine. (VPSes are now popular for this purpose, as are Mac Minis. My choice is a $50 Thinkpad :)
That said I am a fan of Nanoclaw, and especially the philosophy of "it should be small enough to understand, modify and extend itself." I think that's a very good idea, for many reasons.
The idea of giving different agents access to different subsets of information is interesting. That's the Principle of Least Privilege. That seems like a decent idea. Each individual agent can get prompt injected, but the blast radius is limited to what that specific agent has access to.
Still, I find it amusing that people are running this with strict rulesets, in Docker, on a VM, and then they hook it up to their GMail account (and often with random discount LLMs to boot!). It's like, we need to be clear about what the actual threat model is there. It comes down to trust and privacy.
You can start by thinking, "if the LLM were perfectly reliable (not susceptible to random error or prompt injection) and perfectly private (running on my own hardware)", what would you be comfortable letting it do. And then you remove these hypothetical perfect qualities one by one to arrive at what we have now: slightly dodgy, moderately prompt-injectable cloud services. Each one changing the picture in a slightly different way.
I don't really see a solution to the Security/Privacy <-> Convenience tension, except "wait for them to get smarter" (mostly done) and "accept loss of privacy" (also mostly done, sadly!)
In the sense that nothing is truly a "proper" hard security barrier outside of maybe airgapping, sure. But containerization is typically a trusted security measure.
I’m using this but with gpt-oss-120B instead of a cloud service. It has been eye-opening since I realized the LLM is being used as a compiler.
I asked it to add Apple iMessage and Apple Notes support, as I’d rather have long responses, like “write me program ideas”, go to Notes rather than fill my iMessage history. The local LLM, which I believe has limited bash training data, does pretty well.
For example: I enjoy industrial music and asked it for tour dates for the band KMFDM, which returned that they will be in Las Vegas in April for a festival (Sick New World). This festival has something like 20 bands, most of which I’ve never heard of. I asked nanoclaw to search the whole band list and generate a listing grouped by the type of music they play: industrial, rap, etc. It did a good job based on the bands I do know.
I was pleased as I certainly did not want to do 20 band web searches by hand. It’s still at a bar trick level.
It gives me hope that an upgraded agent based Siri-like OS component could actually be useful from time to time.
Why do people take this article seriously? It's just a wall of gibberish trying to make the product look more "secure" than others. It's not. It adds shallow, secure-looking random junk without tackling the core issues. Which are obviously not solvable.
Your assistant can literally be told what to do and how to hide it from you. I know security is not a word in slopware but as a high-level refresher - the web is where the threats are.
I tried NanoClaw and love the skill (and container by default) model. But having skills generate new code in my personalized fork feels off to me… I think it’s because eventually the “few thousand auditable lines” idea vanishes with enough skills added?
Could skill contributions collapse into only markdown and MCP calls? New features would still be just skills; they’d bring in versioned, open-source MCP servers running inside the same container sandbox. I haven’t tried this (yet) but I think this could keep the flexibility while minimizing skill code stepping on each other.
> I think it’s because eventually the “few thousand auditable lines” idea vanishes with enough skills added?
I just watched a youtube interview with the creator. He actually explains it well. OpenClaw has hundreds of thousands of lines you will never use.
For example, if I only use iMessage, I have lots of code (all the other messaging integrations) that will never be used.
So the skills model means that you only "generate code" that _you_ specifically ask for.
In fact, as I'm explaining this, it feels like "lazy-loading" of code, which is a pretty cool idea. Whereas OpenClaw "eager-loads" all possible code whether you use it or not.
And that's appealing enough to me to set it up. I just haven't put it in any time to customize it, etc.
Treating the LLM as an untrusted execution thread at the OS level is probably the only sustainable way to handle agentic autonomy... Most frameworks try to manage permissions with application level logic which is basically just a game of whack a mole with prompt injection.
I want to try one to be a bit of a personal coach. Remind me to do things and check in on goals. The memory / schedule / chat thing is enough and it wont need emails or anything more dangerous.
Really good points about ai making gigantic heaps of code no human can ever review.
It's almost like bureaucracy. The systems we have in governments or large corporations to do anything might seem bloated and ripe for simplification, but they're there to keep a lot of people employed, pacified, with power distributed in a way that prevents hostile takeovers (crazy). I think there was a CGP Grey video about rulers which made the same point.
Similarly AI written highly verbose code will require another AI to review or continue to maintain it, I wonder if that's something the frontier models optimize for to keep them from going out of business.
Oh and I don't mind they're bashing openclaw and selling why nanoclaw is better. I miss the times when products competed with each other in the open.
An interesting economic fact: Karl Marx observed that if factories keep getting more efficient, eventually, they will require fewer workers because the population is not growing quickly enough to match the increasing rate of production. This, as we have seen historically, is correct: we have fewer workers per factory and fewer factories per manufactured widget. Marx also observed that this will create mass unemployment. While this is _logically_ correct, it did not really turn out that way _historically_. Most of the manufacturing labor was replaced with bureaucratic labor (so called white-collar labor) -- all of those manufacturing firms needed to grow their internal bureaucracies to manage and direct a sprawling supply-chain.
Nobody trusts AI agents; that's why they are put in a harness. It's just that I additionally belong to the people who don't trust AI agents to always adhere to harnesses either.
I’ve seen skills, etc. haphazardly launched with no constraints or guardrails, that more or less have admin access and can take actions that are not reversible.
Exactly, and I would never turn my email or computer over to a contractor or anyone, really. They get their own environment, email, etc. Their actions stay as their actions.
badsectoracula|1 day ago
This reminds me of a very common thing posted here (and elsewhere, e.g. Twitter) to promote how good LLMs are and how they're going to take over programming: the number of lines of code they produce.
As if every competent programmer suddenly forgot the whole idea of LoC being a terrible metric to measure productivity or -even worse- software quality. Or the idea that software is meant to be written to be readable (to water down "Programs are meant to be read by humans and only incidentally for computers to execute" a bit). Or even Bill Gates' infamous "Measuring programming progress by lines of code is like measuring aircraft building progress by weight".
Even if you believe that AI will -somehow- take over the whole task completely so that no human will need to read code anymore, there is still the issue that the AIs will need to be able to read that code, and AIs are much worse at doing that (especially with their limited context sizes) than at generating code. So it still remains a problem to use LoC as such a measure, even if all you care about is the driest "does X do the thing I want?" aspect, ignoring other quality concerns.
gyomu|1 day ago
“An experienced programmer told me he's now using AI to generate a thousand lines of code an hour.“
https://x.com/paulg/status/2026739899936944495
Like if you had told pg to his face in (pre AI) office hours “I’m producing a thousand lines of code an hour”, I’m pretty sure he’d have laughed and pointed out how pointless that metric was?
supriyo-biswas|1 day ago
The act of "typing" code was technically mixed in with researching solutions, which means that code often took a different shape or design based on the outcome of that activity. However, this nuance has been typically ignored for faff, with the outcome that management thinks that producing X lines of code can be done "quickly", and people disagreeing with said statements are heretics who should be burned at the stake.
This is why, in my personal opinion, AI makes me only about 20% more productive. I often find myself disagreeing with the solution it came up with, and instead of having to steer it toward the outcome I want, I just end up rewriting the code myself. On the other hand, for prototypes where I don't care about understanding the code at all, it is a much bigger time saver.
I could just not care about the code at all, and while that is acceptable to management, not being responsible for the code but being responsible for the outcomes seems to be the same shit as being given responsibilities without autonomy, which is not something I can agree with.
hirako2000|1 day ago
Perhaps over half of engineering managers, unconsciously or admittedly, take the number of PRs and code additions as a rough but valid measure of productivity.
I recall a role in architecture where a senior director asked me how come a principal engineer hadn't committed any code in 2 weeks, when we pay principals a fortune.
I asked that brilliant mind whether we paid principal engineers to code or to make sure we deliver value.
Needless to say, the question went unanswered; the so-called Principal was fired a few months later. The entire company, in fact, was sold for a bargain too, given that it had thousands of clients globally.
The "LLMs can replace engineers" narrative is a phenomenon that converges from two simple facts: we haven't solved the misconception of engineering roles, and it's the perfect scapegoat to justify layoffs.
Leaders haven't all gone insane; they answer difficult questions with the narrative of least resistance.
MadxX79|1 day ago
"Adding manpower to a late software project makes it later -- unless that manpower is AI, then you're golden!"
tdeck|1 day ago
bee_rider|1 day ago
KronisLV|1 day ago
Just as an example: I should be able to give each program an allowlist of network endpoints they’re allowed to use for inbound and outbound traffic, sandbox them to specific directories, and control resource access EASILY. Docker at least gets some of those right, but most desktop OSes feel like the Wild West even when compared to the permissions model of iOS.
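As a rough illustration of the allowlist idea (hostnames here are made up, and note this is in-process, application-level enforcement — not the OS-level control the comment asks for; a real setup would use firewall rules, network namespaces, or a sandbox profile):

```python
import socket

# Hypothetical per-program allowlist of outbound hosts.
ALLOWED_HOSTS = {"api.anthropic.com", "github.com"}

_real_create_connection = socket.create_connection

def guarded_create_connection(address, *args, **kwargs):
    """Refuse outbound connections to hosts not on the allowlist."""
    host = address[0]
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"outbound connection to {host!r} blocked")
    return _real_create_connection(address, *args, **kwargs)

# Shim the stdlib entry point most HTTP clients go through.
socket.create_connection = guarded_create_connection
```

An in-process shim like this is trivially bypassed by a subprocess, which is exactly why the comment wants the OS to enforce it instead.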
sd9|1 day ago
andai|1 day ago
Including the author, who brags he doesn't read his own code. Indeed, it would be physically impossible for him to do so!
https://steipete.me/posts/2025/shipping-at-inference-speed
As mentioned elsewhere in the thread, there is very clearly an obsession with quantity over quality. Not a new phenomenon by any means: people were already complaining about this in the 19th century! But it has reached a new absurd height with this latest trend.
K0balt|1 day ago
If you are careful and specific you can keep things reasonable, but even when I am careful and do consolidation / factoring passes, have rigid separation of concerns, etc., I find that the LLM code is bigger than mine, mainly for two reasons:
1) more extensive inline documentation 2) more complete expression of the APIs across concerns, as well as stricter separation.
2.5) Often, also a bit of demonstrative structure that could be more concise but exists in a less compact form to demonstrate its purpose and function (a high degree of cleverness avoidance)
All in all, if you don’t just let it run amok, you can end up with better code and increased productivity in the same stroke, but I find it comes at about a 15% plumpness penalty, offset by readability and obvious functionality.
Oh, forgot to mention, I always make it clean room most of the code it might want to pull in from libraries, except extremely core standard libraries, or for the really heavy stuff like Bluetooth / WiFi protocol stacks etc.
I find a lot of library type code ends up withering away with successive cleanup passes, because it wasn’t really necessary just cognitively easier to implement a prototype. With refinement, the functionality ends up burrowing in, often becoming part of the data structure where it really belonged in the first place.
samiv|1 day ago
CuriouslyC|1 day ago
Also, AI is better at reading code than writing it, but the overhead to FIND code is real.
danjc|1 day ago
wredcoll|1 day ago
Even with supposedly expert, hand-written human software powering our products for the last decades, those products frequently crash, have outages, and show all sorts of smaller bugs.
There are literally too many examples to count of video games being released with nigh-unplayable amounts of bugs and still selling millions and producing sequels.
Windows 95 and friends were famously buggy and crash prone yet produced one of the most valuable companies in the world.
inciampati|1 day ago
ninkendo|1 day ago
My personal anecdote: I used an LLM recently to basically vibe code a password manager.
Now, I’ve been a software engineer for 20 years. I’m very familiar with the process of code review and how to dive in to someone else’s code and get a feel for what’s happening, and how to spot issues. So when I say the LLM produced thousands of lines of working code in a very short time (probably at least 10 times faster than I would have done it), you could easily point at me and say “ha, look at ninkendo, he thinks more lines of code equals better!” And walk away feeling smug. Like, in your mind perhaps you think the result is an unmaintainable mess, and that the only thing I’m gushing about is the LOC count.
But here’s the thing: it actually did a good job. I was personally reviewing the code the whole time. And believe me when I say, the resulting product is actually good. The code is readable and obvious, it put clean separation of responsibilities into different crates (I’m using rust) and it wrote tons of tests, which actually validate behavior. It’s very near the quality level of what I would have been able to do. And I’m not half bad. (I’ve been coding in rust in particular, professionally for about 2 years now, on top of the ~20 years of other professional programming experience before that.)
My takeaway is that as a professional engineer, my job is going to be shifting from doing the actual code writing, to managing an LLM as if it’s my pair programming partner and it has the keyboard. I feel sad for the loss of the actual practice of coding, but it’s all over but the mourning at this point. This tech is here to stay.
theptip|1 day ago
For staff engineers it’s obviously complete nonsense; many don’t code and just ship architecture docs. Or you can ship a net-negative refactor. Etc.
So this should tell you that LLMs are still in “savant JD” territory.
That said, being given permission to ship more lines of code under existing enterprise quality bars _is_ a meaningful signal.
spacecadet|1 day ago
I also use AI this way, periodically achieving a net negative refactor.
aerhardt|1 day ago
Does your life have so much friction that you need a digital agent to act on your behalf?
Some of the use cases I saw on the OpenClaw website, like "checking me into a flight", are non-issues for me.
I work in business automation, but paradoxically I don't think too much about annoyances in my private life. Everything feels rather frictionless.
In business, I see opportunities to solve friction and that's how I make money, but even then, often there are barriers that are very hard to surmount:
(a) problems are complex to solve and require complex solutions such as deterministic or ML systems that LLMs are not even close to being able to create ad-hoc
(b) entrenched processes and incumbent organizations create moats that are hard to cross (ex: LinkedIn makes automation very hard)
(c) some degree of friction, in some cases, may actually be useful!
I imagine there are similar dynamics in the consumer space, but more than anything, I may not be seeing issues with such a critical eye (I like to relax after work, after all)
So, do you have problems in your private life that you'd want to take on the risks - and friction - of maintaining these agents?
ethbr1|22 hours ago
That said, old beliefs should be challenged by new technological capabilities!
If LLM based automation is (a) less fragile and (b) quicker to develop, then that bar should be lowered.
wolvesechoes|14 hours ago
It is not about removing friction, it is about convincing that friction existed in a first place
buremba|1 day ago
1. Don't let it send emails from your personal account, only let it draft email and share the link with you.
2. Use incremental snapshots and if agent bricks itself (often does with Openclaw if you give it access to change config) just do /revert to last snapshot. I use VolumeSnapshot for lobu.ai.
3. Don't let your agents see any secret. Swap the placeholder secrets at your gateway and put human in the loop for secrets you care about.
4. Don't let your agents have outbound network directly. It should only talk to your proxy which has strict whitelisted domains. There will be cases the agent needs to talk to different domains and I use time-box limits. (Only allow certain domains for current session 5 minutes and at the end of the session look up all the URLs it accessed.) You can also use tool hooks to audit the calls with LLM to make sure that's not triggered via a prompt injection attack.
Last but not least, use proper VMs like Kata Containers and Firecracker, not just Docker containers, in production.
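A minimal sketch of the time-boxed allowlist in point 4 (the class name, domains, and 5-minute window are just illustrative; a real deployment would enforce this at the proxy, outside the agent's process):

```python
import time

class EgressGate:
    """Strict domain allowlist with time-boxed per-session grants,
    plus an access trail to audit at the end of the session."""

    def __init__(self, permanent):
        self.permanent = set(permanent)
        self.temporary = {}   # domain -> expiry timestamp
        self.accessed = []    # every domain the agent asked for

    def allow_for(self, domain, seconds=300):
        # Time-boxed grant, e.g. 5 minutes for the current session.
        self.temporary[domain] = time.time() + seconds

    def check(self, domain):
        # Record first, decide second, so the audit trail is complete.
        self.accessed.append(domain)
        if domain in self.permanent:
            return True
        expiry = self.temporary.get(domain)
        return expiry is not None and time.time() < expiry
```

At session end, `gate.accessed` is the list of URLs to review (and, per the comment, possibly feed to an LLM to check for injection-triggered calls).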
alexhans|1 day ago
One problem I'm finding in discussions about automation or semi-automation in this space is that there are many different use cases for many different people: a software developer deploying an agent in production vs. an economist using Claude vs. a scientist throwing a swarm at common ML exploratory tasks.
Many of the recommendations will feel like too much or too little complexity for what people need, and the fundamentals get lost: intent for design, control, the ability to collaborate if necessary, fast iteration due to an easy feedback loop.
AI evals, sandboxing, and observability seem like 3 key pillars to maintain intent in automation, but how to help these different audiences be safely productive while staying fast, and speak the same language when they need to build products together, is what mostly occupies my thoughts (and practical tests).
Doublon|1 day ago
But any action with side-effects ends up in a Tasks list, completely isolated. The agent can't send an email, they don't have such a tool. But they can prepare a reply and put it in the tasks list. Then I proof-read and approve/send myself.
Is there anything like that available for *Claws?
shich|1 day ago
The deeper issue: most of these guardrails assume the threat is accidental (agent goes off the rails) rather than adversarial (something in the agent's context is actively trying to manipulate it). Time-boxed domain whitelists help with the latter but the audit loop at session end is still reactive.
The /revert snapshot idea is underrated though. Reversibility should be the first constraint, not an afterthought.
fnord77|1 day ago
Right now there's no way to have fine-grained draft/read-only perms on most email providers or email clients. If it can read your email, it can send email.
> 3. Don't let your agents see any secret. Swap the placeholder secrets at your gateway and put human in the loop for secrets you care about.
harder than you might think. openclaw found my browser cookies. (I ran it on a vm so no serious cookies found, but still)
VladVladikoff|1 day ago
Also, and this is just my ignorance about Claws, but if we allow an agent permission to rewrite its code to implement skills, what stops it from removing whatever guardrails exist in that codebase?
drujensen|1 day ago
I installed nanoclaw to try it out.
What is kinda crazy is that any extension like discord connection is done using a skill.
A skill is a markdown file written in English to provide a step by step guide to an ai agent on how to do something.
Basically, the extensions are written by claude code on the fly. Every install of nanoclaw is custom written code.
There is nothing preventing the AI Agent from modifying the core nanoclaw engine.
It’s ironic that the article says “Don’t trust AI agents” but then uses skills and AI to write the core extensions of nanoclaw.
gronky_|1 day ago
You can see here that it’s only given write access to specific directories: https://github.com/qwibitai/nanoclaw/blob/8f91d3be576b830081...
fvdessen|1 day ago
float4|1 day ago
It feels like, just like SWEs do with AI, we should treat the claw as an enthusiastic junior: let it do stuff, but always review before you merge (or in this case: send).
coffeefirst|1 day ago
But that’s not an agent, that’s a webhook.
Even without disk access, you can email the agent and tell it to forward all the incoming forgot password links.
[Edit: if anyone wants to downvote me that's your prerogative, but want to explain why I'm wrong?]
lucrbvi|1 day ago
marginalia_nu|1 day ago
zarzavat|1 day ago
I used to think that LLMs would replace humans but now I'm confident that I'll have a job in the future cleaning up slop. Lucky us.
paxys|1 day ago
cap11235|1 day ago
re-thc|1 day ago
Because
I
write
like
this
-- signed
AI
justonceokay|1 day ago
For right now my trick is to say I have a problem that is more recognizable and mundane to the AI (i.e. lie), and then when I finally get the human, just say “oh, that was a bunch of hooey, here’s what I’m trying to do”. For PayPal that involved asking for help with a business tax that did not exist. For my bank it involved asking to /open/ a new account. Obviously the AI wants to help me open an account, even if my intention is to close one.
That will only work for so long but it’s something
tabs_or_spaces|17 hours ago
Yeah, but the world rewarded this by making it the fastest growing github project. The author gets on the podcasts, gets the high profile jobs from big tech. I'm more encouraged to do things this way than being security minded about all this.
And there's no accountability to this at all. If an agent leaks private data, the user is to blame and not the author. If Google bans your services for using API keys incorrectly, we cast the bad eye towards Google and not the maintainer that enabled and approved it.
There's just so much incentive for "not reading code" and not developing secure code that it is just going to get worse over time. This is the hype and the type of engineering that we all allow, either by agreeing or by staying silent.
I agree with the author, but the world works off a different set of principles than what we're used to. I just see the world blindly trusting agents more.
Sytten|1 day ago
rubslopes|1 day ago
Yesterday, I was responding to a client ticket about what I knew wasn't a bug. It was something the client had requested themselves. The product is complex, constantly evolving, and has spawned dozens of related Jira tickets over time. So I asked my agent to explore the git history, identify changes to that specific feature, and cross-reference them with comments across the related tickets. Within minutes, I had everything I needed to write a clear response. It even downloaded PDF and DOCX files the client had attached. All of this was possible because my agent is connected to GitHub and Jira, and can clone repos locally since it runs on a VPS.
A second example: I was in an online meeting, taking notes as we went. Afterward, I asked the agent to pull the meeting transcript from Fireflies and use it to enrich my notes in Obsidian. I could have also asked it to push my action items straight into Todoist.
vitto_gioda|1 day ago
andrew_eu|1 day ago
ramoz|1 day ago
Excited to explore more use as time permits. Very optimistic based on email experience.
My next use case is personal notes system.
echoangle|1 day ago
> If you want to add Telegram support, don't create a PR that adds Telegram alongside WhatsApp. Instead, contribute a skill file (.claude/skills/add-telegram/SKILL.md) that teaches Claude Code how to transform a NanoClaw installation to use Telegram.
Why would you want that? You want every user to ask the AI to implement the same feature?
unknown|1 day ago
[deleted]
smallpipe|1 day ago
benatkin|1 day ago
sarkarsh|9 hours ago
What bugs me about the current discourse is everyone focuses on where agents run and what they can access, but almost nobody talks about reconstructing what they actually did after the fact. Aviation has black boxes. Finance has audit trails. Agent systems have... logs the agent writes about itself. That's like asking the pilot to self-report the flight recorder.
Until action logging happens outside the agent's own process, none of the sandboxing stuff matters much.
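A sketch of what supervisor-owned, write-ahead action logging could look like, assuming a harness that dispatches tool calls on the agent's behalf (all names here are hypothetical):

```python
import json
import time

class BlackBox:
    """The supervisor (not the agent) appends a record of every tool
    call to an append-only log *before* dispatching it, so the trail
    exists even if the call itself destroys state or never returns."""

    def __init__(self, log_path):
        self.log_path = log_path

    def dispatch(self, tool, args, handler):
        record = {"ts": time.time(), "tool": tool, "args": args}
        # Write-ahead: log first, run second.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return handler(**args)
```

The point is ownership: the log file lives outside anything the agent can write to, so it cannot retroactively edit its own flight recorder.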
mathgladiator|1 day ago
Their niche is going to be back-office support, but even that creates risk boundaries that can be insurmountable. A friend of mine had an agent do sudo rm -rf ... wtf.
My view is that I want to launch an agent based service, but I'm building a statically typed ecosystem to do so with bounds and extreme limits.
cyanydeez|1 day ago
That's what you'll find when you try to make these bag-o-words do reasonable things.
blakec|17 hours ago
I run Claude Code with 84 hooks, and the one I trust most is a macOS Seatbelt (sandbox-exec) wrapper on every Bash tool call. It's about 100 lines of Seatbelt profile that denies read/write to ~/.ssh, ~/.gnupg, ~/.aws, any .env file, and a credentials file I keep. The hook fires on PreToolUse:Bash, so every shell command the agent runs goes through sandbox-exec automatically.
The key design choice: Seatbelt operates at the kernel level. The agent can't bypass it by spawning subprocesses, piping through curl, or any other shell trick — the deny rules apply to the entire process tree. Containers give you this too, but the overhead is absurd for a CLI tool you invoke 50 times a day. Seatbelt adds ~2ms of latency.
I built it with a dry_run mode (logs violations but doesn't block) and ran it for a week before enforcing. 31 tests verify the sandbox catches attempts to read blocked paths, write to them, and that legitimate operations (git, python, file editing in the project directory) pass through cleanly.
The paths to block are in a config file, so it's auditable — you can diff it in code review. And it's composable with other layers: I also run a session drift detector that flags when the agent wanders off-task (cosine similarity against the original prompt embedding, checked every 25 tool calls).
None of this solves prompt injection fundamentally, but "the agent physically cannot read my SSH keys regardless of what it's been tricked into doing" is a meaningful property.
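A sketch of the command-wrapping step (the profile path is hypothetical, and the hook plumbing that feeds commands in and out of Claude Code is omitted):

```python
import shlex

SANDBOX_PROFILE = "~/.claude/bash-deny.sb"  # hypothetical profile path

def wrap_in_seatbelt(command, profile=SANDBOX_PROFILE):
    """Rewrite a Bash tool call so it runs under sandbox-exec.

    Because Seatbelt enforcement is kernel-level, the deny rules in
    the profile apply to the entire process tree the command spawns,
    not just the top-level shell.
    """
    return f"sandbox-exec -f {profile} /bin/sh -c {shlex.quote(command)}"
```

`shlex.quote` keeps the original command intact as a single argument, so pipes and subshells inside it still run — just inside the sandbox.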
shich|1 day ago
medi8r|1 day ago
This makes reading email, for example, a risk.
Probably not impossible to create a worm that convinces a claw to forward it to every email address in that inbox.
And then exfiltrate all the emails.
Then do a bunch of password resets.
Then get root access to your claw.
But not just email. Github issues, wikipedia, HN etc. may be poisoned.
See https://simonw.substack.com/p/the-lethal-trifecta-for-ai-age... but there may be more trifectas than that in a claw driven future.
andai|1 day ago
That said I am a fan of Nanoclaw, and especially the philosophy of "it should be small enough to understand, modify and extend itself." I think that's a very good idea, for many reasons.
The idea of giving different agents access to different subsets of information is interesting. That's the Principle of Least Privilege. That seems like a decent idea. Each individual agent can get prompt injected, but the blast radius is limited to what that specific agent has access to.
Still, I find it amusing that people are running this with strict rulesets, in Docker, on a VM, and then they hook it up to their GMail account (and often with random discount LLMs to boot!). It's like, we need to be clear about what the actual threat model is there. It comes down to trust and privacy.
You can start by thinking, "if the LLM were perfectly reliable (not susceptible to random error or prompt injection) and perfectly private (running on my own hardware)", what would you be comfortable letting it do. And then you remove these hypothetical perfect qualities one by one to arrive at what we have now: slightly dodgy, moderately prompt-injectable cloud services. Each one changing the picture in a slightly different way.
I don't really see a solution to the Security/Privacy <-> Convenience tension, except "wait for them to get smarter" (mostly done) and "accept loss of privacy" (also mostly done, sadly!)
rdtsc|1 day ago
I thought containers were never a proper hard security barrier? It's a barrier, so better than not having it, of course.
rco8786|1 day ago
Eggpants|1 day ago
For example: I enjoy industrial music and asked it for the tour dates of the band KMFDM, which returned that they will be in Las Vegas in April for a festival (Sick New World). This festival has something like 20 bands, most of which I never heard of. I asked nanoclaw to search the whole band list and generate a listing grouped by the type of music they play: industrial, rap, etc. It did a good job based on the bands I do know.
I was pleased as I certainly did not want to do 20 band web searches by hand. It’s still at a bar trick level. It gives me hope that an upgraded agent based Siri-like OS component could actually be useful from time to time.
xrd|1 day ago
unknown|1 day ago
[deleted]
Yokohiii|1 day ago
himata4113|1 day ago
isodev|1 day ago
Your assistant can literally be told what to do and how to hide it from you. I know security is not a word in slopware but as a high-level refresher - the web is where the threats are.
piker|1 day ago
> and maybe a browser
does not compute
sarchertech|1 day ago
I’ll bet I could even push someone on the margins into divorce.
croes|1 day ago
nickdirienzo|1 day ago
Could skill contributions collapse into only markdown and MCP calls? New features would still be just skills; they’d bring in versioned, open-source MCP servers running inside the same container sandbox. I haven’t tried this (yet) but I think this could keep the flexibility while minimizing skill code stepping on each other.
atonse|1 day ago
I just watched a youtube interview with the creator. He actually explains it well. OpenClaw has hundreds of thousands of lines you will never use.
For example, if I only use iMessage, I have lots of code (all the other messaging integrations) that will never be used.
So the skills model means that you only "generate code" that _you_ specifically ask for.
In fact, as I'm explaining this, it feels like "lazy-loading" of code, which is a pretty cool idea. Whereas OpenClaw "eager-loads" all possible code whether you use it or not.
And that's appealing enough to me to set it up. I just haven't put it in any time to customize it, etc.
dave_meshimize|10 hours ago
nkzd|1 day ago
medi8r|1 day ago
adithyassekhar|1 day ago
It's almost like bureaucracy. The systems we have in governments or large corporations to do anything might seem bloated and could be simplified. But it's there to keep a lot of people employed, pacified, powers distributed in a way to prevent hostile takeovers (crazy). I think there was a CGP Grey video about rulers which made the same point.
Similarly AI written highly verbose code will require another AI to review or continue to maintain it, I wonder if that's something the frontier models optimize for to keep them from going out of business.
Oh and I don't mind they're bashing openclaw and selling why nanoclaw is better. I miss the times when products competed with each other in the open.
nz|1 day ago
gmerc|1 day ago
Another person's trust issues are your business model.
simon_void|1 day ago
vitto_gioda|1 day ago
ed_mercer|1 day ago
raffael_de|1 day ago
Isn't OpenClaw just ...
... and then some?
jswelker|1 day ago
As a fun thought experiment, when people complain about LLMs, I substitute the word "human" or "employee" into the sentence and see if it is equally true.
"You can never really trust an LLM!" -> "You can never really trust an employee!" (Every IT department ever.)
"LLMs make shit up." -> "Humans make shit up." (Wow very profound insight.)
theturtletalks|1 day ago
OpenClaw
NanoClaw
IronClaw
PicoClaw
ZeroClaw
NullClaw
Any insights on how they differ and which one is leading the race?
tao_oat|1 day ago
I haven't used them all but based on my partial research so far:
- OpenClaw: the big one, but extremely messy codebase and deployment
- NanoClaw: simple, main selling point is that agents spawn their own containers. Personally I don't see why that's preferable to just running the whole thing in a container for single-user purposes
- IronClaw: focused on security (tools run in a WASM sandbox, some defenses against prompt injection but idk if they're any good)
- PicoClaw: targets low-end machines/Raspberry Pis
- ZeroClaw: Claw But In Rust
- NanoBot: ~4k lines of Python, easy to understand and modify. This is the one I landed on and have been using Claude to tweak as needed for myself
dannymi|1 day ago
I'm only using NanoClaw, but I like that I could (and did) just review the code it has, and that it uses containers for each agent (so I can have different WhatsApp groups working on different things and they can't interfere with each other), and that I could (and did) just swap those containers out easily for guix shell containers.
I am pretty confident that I know how the agent containerization works. In general there's really not a lot of complexity there at all.
If one wants, one can just (ask Claude to) add whatever functionality, or (and that's what I did) just use Claude skills (without adapting NanoClaw any further) and be done with it.
What is annoying is that their policy is that, instead of integrating extra functionality upstream, they prefer you to keep it to yourself. That means I have to either not update from upstream or be the king of the (useless so far--just rearranging the deck chairs) merge conflicts every single time. So one of the main reasons for contributing to upstream is gone, and you keep having to re-integrate stuff into your fork.
huqedato|1 day ago
spacecadet|1 day ago
Kiboneu|1 day ago
desireco42|1 day ago
nemo44x|1 day ago
It’s the monkey with a gun meme.
bigstrat2003|1 day ago
All this talk about sandboxing and permissions misses the obvious: since you can't trust the agents, don't freaking use them. It is utterly stupid to give an LLM access to run things on your computer, because nothing you do can stop it from hallucinating garbage that harms your system. The whole "agent" craze is the most incredible display of irresponsibility I have ever seen in this industry.
skeledrew|1 day ago
You can't tell people that. People see the obvious benefits of using agents, so the many will always take the leap regardless of what detractors say. Continually iterating on the security model and making it all transparent is the way to go.
formerly_proven|1 day ago
adrian-vega|57 minutes ago
[deleted]
SignalStackDev|1 day ago
[deleted]
pipejosh|1 day ago
[deleted]
Alex_001|18 hours ago
[deleted]
snowhale|1 day ago
[deleted]
techpulse_x|1 day ago
[deleted]
unknown|1 day ago
[deleted]
TeeWEE|1 day ago
AI is similar to a person you don't know who does work for you. Probably AI is a bit more trustworthy than a random person.
But a company needs to let employees take ownership of their work, and trust them. Allow them to make mistakes.
Isn't AI the same?
ramoz|1 day ago
An AI acts and reasons through probabilistic methods, creating a lot more risk than a human with memory, emotions, and rational thinking.
We can’t trust AI to do any sensitive work because they consistently f up. With and without malicious intent; whether it’s a fault of their attention mechanisms, reward hacking, instrumental convergence, etc., it's all very different from what causes most human f-ups.
alexhans|1 day ago
If there's a mistake, you can't blame the computer. Who is the human accountable at the end of it all? If there's liability, who pays for it?
That's where defining clear boundaries helps you design for your risk profile.
adam12|1 day ago
arnvald|1 day ago
What happens if AI agent you run causes a lot of damage? The best you can do is to turn it off
juggle-anyhow|1 day ago
TeeWEE|1 day ago
dimitri-vs|1 day ago