On the one hand... the operator who started the agent is responsible. If you fire a gun into the air, the bullet will probably not land on anybody. What happens is pretty unpredictable unless you have perfect data and forecasting. But it's still your fault if the bullet kills someone when it comes back down.
But on the other hand, with so many tech companies pushing their ICs to use AI - and pushing more and more to use agentic tools - I'm generally in agreement that the fault lies more with the employer than the employed. You can't tell someone to use a tool, then punish them when the non-deterministic tool they didn't want to use in the first place misbehaves.
Maybe if we had decades of history showing these tools could be operated safely this would be different. But asking an entire industry to adopt new tools with minimal training and real-life experience, and asking them to adopt them at scale and in production, is not the same as asking them to use DynamoDB instead of RDS.
FYI, the HN guidelines state, "Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity." I would encourage you to submit content from others rather than just your own.
A better analogy would be: you hire a robot contractor to do some work. Before it arrives, you are asked whether the robot should request permission before entering rooms. You say no, and the robot enters the server room.
It does change something for me, despite the meat of your argument still being valid. It clearly is responsible, but so are you.
I won't read the article, but I think this is the role that will remain for humans: to be the 'fall guy' when the vibes go wrong. You will have to live in chronic stress, on call so to speak, so that when prod goes down, you take the blame. Maybe that vibe-coded PR you didn't and couldn't read contained a serious bug or security lapse; maybe the system design you didn't do but approved contained an RCE you never knew about. Fun times ahead.
The user who put the AI agent in motion probably bears some share of the responsibility.