
normalocity | 3 days ago

For sure this is a real example, but it's also largely a permissions issue where users are combining self-modifying capability with unlimited, effectively full admin access.

Outside of AI, when you combine "a given actor can make their own decisions" with "they have unlimited permissions/access -- what could possibly go wrong?", very predictable bad things happen.

Whether the actor in this case is a bot or a human, the permissions are the problem, not the actor, IMO.


insane_dreamer | 3 days ago

Sure, permissions are the problem, but permissions are also necessary to give the agent power, which is why users grant them in the first place.

There is inherent tension between granting sufficient permissions for the agent to be useful/powerful, and restricting permissions in the name of safety so it doesn't go off the rails. I don't see any real solution to that, other than restricting users from granting permissions, which then makes the agents (and, importantly, the companies behind them) less useful, and therefore less profitable.
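One narrow way to lean toward the safety side of that tradeoff is deny-by-default: the agent gets an explicit allowlist of tool calls instead of blanket admin access. A minimal sketch (the names `ALLOWED_ACTIONS`, `dispatch`, and the action strings are all hypothetical, not from any particular agent framework):

```python
# Hypothetical least-privilege gate for agent tool calls.
# Anything not explicitly allowlisted is refused.
ALLOWED_ACTIONS = {"read_file", "list_dir"}  # note: no writes, no shell

def dispatch(action, handler, *args):
    """Run a tool call only if the action is explicitly permitted."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"agent action {action!r} not permitted")
    return handler(*args)
```

With this, `dispatch("read_file", ...)` runs the handler, while something like `dispatch("delete_repo", ...)` raises `PermissionError` regardless of what the agent decides to attempt. It doesn't resolve the tension the parent comment describes, but it makes the grant explicit rather than implicit.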

normalocity | 3 days ago

Fair points. I guess I was asking whether this is a new, or fundamentally different, problem from the pre-AI version. I could be over-simplifying -- what do you think?

This makes me think of risk assessment in general. There's a tradeoff between risk and reward. More risk might mean more _potential_, but it's more potential for both benefit and ruin.

Do you think we'll figure out a good balance?