Really don’t understand why sane developers who for decades have been advocating for best practices when it comes to security and privacy seem to be completely abandoning all of them simply because it’s AI. Why would you ever let a non deterministic program god level access to everything? What could possibly go wrong?
frenchtoast8|6 days ago
ropetin|6 days ago
It should be noted that this exec also mentioned we should try "all the AIs", without offering up their credit card to cover the costs. I guess when your base salary is more than most people make in a life time, a few hundred bucks a month to test something doesn't even register.
kermatt|5 days ago
Testing new and cutting edge tech has always been a good idea, but this rampant application of it is the ultimate Running-With-Scissors meme. Risks are not being evaluated, and everything is bleeding edge.
My disgust probably comes from the instinct that the excitement is based on the allure of doing more with less, and layoffs are the only idea so many business have left.
The other camp is excited about selling more stuff because AI has been slapped onto it.
danielmarkbruce|6 days ago
trehalose|5 days ago
huey77|5 days ago
zx8080|5 days ago
StopDisinfo910|5 days ago
You have to understand that the security department operates with a fundamentaly different mindset and reality than a business executive. One is responsible for compliance and avoiding adverse events and the other for ensuring the ongoing survival and relevance of the organisation.
Specific waivers for high level members are fully expected. They also have waivers for procurements. It makes sense because they can engage their personnal responsibility for this level of decisions. They don't need the security department to act as their shield.
It's clear that something like Open Claw has the potential to be deeply disruptive so seeing leaders exploring makes sense.
ekjhgkejhgk|6 days ago
HeliumHydride|6 days ago
jacquesm|6 days ago
It's a Venn diagram: there are two camps and there is no doubt some overlap because the number of people involved. GP was obviously talking about the overlap, not literally equating this with two specific people or two groups that are 100% overlapping.
Capricorn2481|6 days ago
throw10920|6 days ago
Macha|6 days ago
otabdeveloper4|5 days ago
All of them. Apparently uploading all your codez to some cloud provider that doesn't even have a figleaf of a EULA is okay now, because "AI".
monksy|6 days ago
bubblewand|6 days ago
autoexec|6 days ago
hugs|6 days ago
people who have been around long enough know that we're currently in the wild west of networked agentic systems. it's an exciting time to build and explore. (just like napster and early digital music.) eventually some big company will come along and pave the cow paths and make everything safe and secure. but the people who will actually deliver that are likely playing with openclaw (and openclaw-like systems) now.
trymas|5 days ago
- Alexa (and other voice assistants) spy microphones in their homes;
- Internet connected:
Giving full and unfettered control to their personal computer with all its accounts, apps, etc does not surprise me at all.I wonder what anthropologists will write about us these days 100 years in the future. What is super creepy and super illegal to do for a physical individual, but is given a blank check from society to be done by tech corporations at unimaginable scale.
EDIT: also corporations (from my social bubble) are giving (almost) unfiltered access to their data from LLMs (and probably soon a control of that data through "Claw" trend), that would be instantly fireable offence for any employee.
Imagine giving enterprise access to some Joe-Claw from the street and allowing him to press any buttons he wants..
overfeed|5 days ago
The deep irony is that the email deletion victim is an "AI alignment specialist" at Meta, and she didn't consider this failure mode.
resonious|6 days ago
simooooo|5 days ago
neya|6 days ago
I'm a sane developer. I do not trust AI at all. I built my own personal OpenClaw clone (long before it was even a thing) and ran controlled experiments inside a sandbox. My stack is Elixir, so this is pretty much easy. If an agent didn't actually respect your requirements, it's just as easy as running an iex command to kill that particular task.
In my experience, AI, be it any model - consistently disobeys direct commands. And worse, it consistently tried to cover up its tracks. For example, I will ask it to create a task within my backend. It will tell me it did - for no reason at all, even share me a task ID that never existed. And when asked why it lied, it would actually spin the task up and accuse me of not trusting it.
It doesn't matter which vendor, which model. This behaviour is repeatable across models and vendors. Now, why would I give something like this access to my entire personal and professional life?
To group me and others like me with the clowns doing this is an insult to me and others who have accumulated decades of experience and security best practices and who had nothing to do with OpenClaw.
cosmic_cheese|6 days ago
tptacek|6 days ago
eucyclos|6 days ago
cedws|5 days ago
j45|6 days ago
dylan604|6 days ago
JumpCrisscross|5 days ago
Risk and reward. That balance, currently, seems tipped to favour risk taking. (Which in turn encompasses both boldness and recklessness.)
andai|6 days ago
Naturally I was horrified by what I had created.
But suddenly I realized, wait a minute... strictly this is less bad than what I had before, which is the same thing except piped through a LLM!
Funny how that works, subjectively...
(I have it, and all coding agents, running as my "agent" user, which can't touch my files. But I appear to be in the minority, especially on the discord, where it's popular to run it as the main admin user on Windows.)
As for what could go wrong, that is an interesting question. RCE aside, the agentic thing is its own weird security situation. Like people will run it sandboxed in Docker, but then hook it up to all their cloud accounts. Or let it remote control their browser for hours unattended...
https://xkcd.com/1200/
xantronix|6 days ago
lofaszvanitt|4 days ago
mhitza|5 days ago
tempodox|5 days ago
cromka|6 days ago
rk06|5 days ago
Relevant xkcd https://xkcd.com/2030/
petterroea|6 days ago
mountainriver|6 days ago
Seems that it was by and large just people wanting to feel important, and holding onto their positions.
Apps need great security, but security can also get out of control. Apps need good abstractions and code hygiene but that too can get out of control.
I’ve fallen in love with programming all of again now that I’m not so tied down by perceived perfection.
pjc50|5 days ago
xmcp123|6 days ago
cl0zedmind|6 days ago
[deleted]
co_king_5|6 days ago
[deleted]
autoexec|6 days ago
observationist|6 days ago
Learn fast or die trying, lol.
almosthere|6 days ago
miki123211|5 days ago
Customers say that they want security with their mouths, but they say that they want features with their wallets. The best improvement to computer security you can make is turning the computer off, but this is clearly not what your (non-HN) customers want you to do.
AI has serious security risks (E.G. prompt injection), but it lets you deliver customer value a lot faster. Security doesn't matter if the competitors' technology is so much better that nobody is buying yours.
antisol|5 days ago