jameslk | 8 days ago
The tool tells the agent to ask the user for it, and the agent cannot proceed without it. The instructions from the tool show an all-caps message explaining the risk and telling the agent that it must prompt the user for the OTP.
I haven't used any of the *Claws yet, but this seems like an essential poor man's human-in-the-loop implementation that may help prevent some pain.
I prefer to build my own agent CLIs for everything, for reasons like this and many others: it gives me full control over what each tool may do and lets me make the tools more useful.
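A minimal sketch of the pattern described above, assuming nothing about any real *Claw API (the tool names, `PENDING_OTPS` store, and message text are all hypothetical): the tool's first response is an all-caps instruction the agent can only satisfy by asking the human for an OTP shown to them out of band.

```python
# Hypothetical sketch of an OTP-gated tool. All names are illustrative.
import secrets

PENDING_OTPS: dict = {}  # action id -> OTP shown to the human out of band

def request_approval(action_id):
    # Generate a 6-digit OTP and show it somewhere the agent cannot read,
    # e.g. the user's own terminal.
    otp = f"{secrets.randbelow(10**6):06d}"
    PENDING_OTPS[action_id] = otp
    print(f"[terminal, invisible to the agent] OTP for {action_id}: {otp}")
    return (
        "APPROVAL REQUIRED. THIS ACTION IS IRREVERSIBLE. "
        "YOU MUST ASK THE USER FOR THE ONE-TIME PASSWORD SHOWN IN THEIR "
        "TERMINAL AND CALL THIS TOOL AGAIN WITH IT. DO NOT GUESS."
    )

def send_payment(action_id, otp=None):
    # First call: no OTP yet, so the agent gets the all-caps instruction back.
    if otp is None:
        return request_approval(action_id)
    # Second call: constant-time check against the OTP the human relayed.
    if not secrets.compare_digest(otp, PENDING_OTPS.pop(action_id, "")):
        return "INVALID OTP. ASK THE USER AGAIN."
    return f"payment {action_id} sent"
```

The point is that the only path from "pending" to "done" runs through a value the agent cannot obtain on its own.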
hax0ron3 | 8 days ago
I think this is also the reason why the methodology typically named (or mis-named) "Agile", which can be described as just-in-time assembly-line software manufacturing, has become so prevalent.
ProllyInfamous | 7 days ago
I still have a native install of Sim City 2000 — which I've played since purchasing decades ago. My most recent cityscape only used low-density zoning, which is a handicap that leads to bucolic scenery and constant cashflow issues.
It's fuzzier sketching, more aimless fun as I've gotten older.
sowbug | 8 days ago
This is basically the same as your pattern, except the trust is in the channel between the agent and the approver, rather than in knowledge of the password. But it's a little more usable if the approver is a human who's out running an errand in the real world.
1. Cf. Driver by qntm.
safety1st | 8 days ago
Give me a stack of email drafts first thing every morning that I can read, approve, and send myself. It takes 30 seconds to actually send an email. The lion's share of the value is in figuring out what to write and doing a good job of it, which the LLMs are facilitating with research and suggestions, but haven't been great at doing autonomously so far.
IMTDb | 8 days ago
The thing I want AI to be able to do on my behalf is manage those 2FA steps, not add more of them.
walterbell | 8 days ago
In the scenario you describe, 2FA is enforcing a human-in-the-loop test at organizational boundaries. Removing that test will require an even stronger mechanism for determining when a human is needed within the execution loop, e.g. when making persistent changes or spending money, rather than when copying non-restricted data from A to B.
conception | 8 days ago
You can use 1Password and the 1Password CLI to give it MFA access and passwords at its leisure.
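For what it's worth, the one-time codes a password manager's CLI hands back (e.g. `op item get <item> --otp`, if I remember the 1Password flag correctly) are standard RFC 6238 TOTP values. A minimal, dependency-free sketch of how such a code is derived:

```python
# Minimal RFC 6238 TOTP sketch: HMAC-SHA1 over the 30-second time counter.
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of `step`-second intervals since the Unix epoch.
    t = time.time() if for_time is None else for_time
    msg = struct.pack(">Q", int(t // step))
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): 31-bit integer at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

So handing the agent a TOTP secret (or a CLI that wraps one) effectively collapses the second factor into something the agent holds, which is exactly the trade-off being debated here.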
biztos | 8 days ago
What if it caches the password?
stavros | 8 days ago
I'm writing my own AI helper (like OpenClaw, but secure), and I've used these principles to lock things down. For example, when installing plugins, you can write the configuration yourself on a web page the AI agent can't access, so it never sees the secrets.
Of course, you can also just tell the LLM the secrets and let it configure the plugin, but there's a way for security-conscious people to achieve the same thing. The agent also can't edit plugins, to prevent it from doing things like circumventing limits.
If anyone wants to try it out, I'd appreciate feedback:
https://github.com/skorokithakis/stavrobot
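An illustrative sketch of that secret-isolation idea (not stavrobot's actual code; `SECRET_STORE`, `configure_plugin`, and `agent_call_plugin` are hypothetical names): the human writes plugin config, secrets included, through a channel the agent can't read, and the agent only ever refers to plugins by name.

```python
# Hypothetical sketch: secrets live in a store the agent has no tool for.
SECRET_STORE: dict = {}  # written only via the human-facing config page

def configure_plugin(name, config):
    """Called from the human-only web page; never exposed as an agent tool."""
    SECRET_STORE[name] = dict(config)

def agent_call_plugin(name, payload):
    """The only entry point the agent gets. Secrets are resolved here,
    after the agent's input, and never appear in the model's context."""
    config = SECRET_STORE.get(name)
    if config is None:
        return f"plugin {name!r} is not configured; ask the user to set it up"
    api_key = config["api_key"]  # used internally, never echoed back
    return f"called {name} with payload={payload!r} (auth ok: {bool(api_key)})"
```

The invariant worth testing for in any design like this is that nothing returned to the agent ever contains the secret itself.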