top | item 47162981

insane_dreamer | 5 days ago

Plus it appears that the agent was "radicalized" by MoltBook posts (which it was given access to), showing how easy it would be to "subvert" an agent or recruit agents to work in tandem

normalocity | 4 days ago

For sure this is a real example, but it's also largely a permissions issue where users are combining self-modifying capability with unlimited, effectively full admin access.

Outside of AI, when you combine "a given actor can make their own decisions" with "they have unlimited permissions/access -- what could possibly go wrong?", very predictable bad things happen.

Whether the actor in this case is a bot or a human, the permissions are the problem, not the actor, IMO.

insane_dreamer | 4 days ago

Sure, permissions are the problem, but permissions are also necessary to give the agent power, which is why users grant them in the first place.

There is inherent tension between providing sufficient permissions for the agent to be more useful/powerful, and restricting permissions in the name of safety so it doesn't go off the rails. I don't see any real solution to that, other than restricting users from granting permissions, which then makes the agents (and importantly, the companies behind them), less useful (and therefore less profitable).
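One common middle ground between "full admin access" and "too restricted to be useful" is deny-by-default tool gating: the agent can only invoke capabilities the user has explicitly granted. A minimal sketch (all names here are hypothetical, not any particular agent framework's API):

```python
# Sketch of deny-by-default permission gating for an agent's tools.
# Hypothetical names; real agent frameworks differ in the details.

class ToolDenied(Exception):
    """Raised when the agent requests a tool the user never granted."""

class ToolGate:
    def __init__(self, allowed):
        # allowed: the set of tool names the user explicitly granted
        self.allowed = set(allowed)
        self.tools = {}

    def register(self, name, fn):
        # Registering a tool does NOT grant it; grants are separate.
        self.tools[name] = fn

    def call(self, name, *args):
        # Deny by default: anything not explicitly granted is refused,
        # so broad access never happens by accident.
        if name not in self.allowed:
            raise ToolDenied(f"tool '{name}' not granted")
        return self.tools[name](*args)

# The user grants only read access; destructive tools stay registered
# but unreachable until explicitly allowed.
gate = ToolGate(allowed={"read_file"})
gate.register("read_file", lambda path: f"contents of {path}")
gate.register("delete_file", lambda path: f"deleted {path}")

print(gate.call("read_file", "notes.txt"))  # permitted
try:
    gate.call("delete_file", "notes.txt")   # denied by default
except ToolDenied as e:
    print("blocked:", e)
```

This doesn't dissolve the tension the comment describes: every grant the user adds to make the agent more useful is also a grant the agent can misuse. It just makes the tradeoff explicit and per-capability instead of all-or-nothing.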