insane_dreamer | 3 days ago

Sure, permissions are the problem, but permissions are also necessary to give the agent power, which is why users grant them in the first place.

There is inherent tension between granting sufficient permissions for the agent to be more useful/powerful, and restricting permissions in the name of safety so it doesn't go off the rails. I don't see any real solution to that, other than restricting users from granting permissions, which then makes the agents (and, importantly, the companies behind them) less useful (and therefore less profitable).

normalocity | 3 days ago

Fair points. I guess I was asking whether this is a new, or fundamentally different, problem from the pre-AI one. I could be over-simplifying -- what do you think?

This makes me think of risk assessment in general. There's a tradeoff between risk and reward. More risk might mean more _potential_, but it's more potential for both benefit and ruin.

Do you think we'll figure out a good balance?