jameslk | 8 days ago

One safety pattern I’m baking into CLI tools meant for agents: anytime an agent could do something very bad, like email-blasting too many people, the tool requires a one-time password.

The tool tells the agent to ask the user for it, and the agent cannot proceed without it. The instructions from the tool show an all-caps message explaining the risk and telling the agent that it must prompt the user for the OTP.

I haven't used any of the *Claws yet, but this seems like an essential poor man's human-in-the-loop implementation that may help prevent some pain.

I prefer to make my own agent CLIs for everything, for reasons like this and many others: it lets me fully control what each tool may do and makes them more useful.
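
A minimal sketch of the gate, assuming TOTP verification (pyotp, and the send_blast/actually_send names, are illustrative here, not my actual tool):

  # Hypothetical sketch: the CLI refuses the dangerous action without a valid OTP.
  import os
  import sys
  import pyotp  # assumed dependency; any verification path the agent can't spoof works

  def send_blast(recipients, otp):
      totp = pyotp.TOTP(os.environ["BLAST_OTP_SECRET"])  # secret lives outside the agent's reach
      if not otp or not totp.verify(otp):
          # This is the message the agent sees and must relay to the human.
          print("DANGER: THIS EMAILS %d PEOPLE. YOU MUST ASK THE USER "
                "FOR THE CURRENT OTP AND PASS IT VIA --otp." % len(recipients))
          sys.exit(1)
      actually_send(recipients)  # hypothetical: the real send happens only past the check

The point is that the check lives in the tool's code, not in the prompt.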

ezst|8 days ago

Now we do computing like we play Sim City: sketching fuzzy plans and hoping those little creatures behave the way we thought they might. All the beauty and guarantees offered by a system obeying strict and predictable rules go down the drain, because life's so boring, apparently.

hax0ron3|8 days ago

I think it's Darwinian logic in action. In most areas of software, perfection or near-perfection is not required, and as a result software creators are more likely to make money if they ship something that is 80% perfect now than if they ship something that is 99% perfect 6 months from now.

I think this is also the reason why the methodology typically named or mis-named "Agile", which can be described as just-in-time assembly line software manufacturing, has become so prevalent.

nine_k|8 days ago

The difference is that it's not a toy. I'd rather compare it to the early days of offshore development, when remote teams were sooo attractive because they cost 20% of an onshore team for a comparable declared capability, but the predictability and mutual understanding proved to be... not as easy.

SV_BubbleTime|8 days ago

We spent a ton of time removing subjectivity from this field… only to forcefully shove it in and punish it for giving repeatable objective responses. Wild.

jstummbillig|7 days ago

We will not arrive at the desired state without stumbling around and going completely off the rails, as we do, but clearly the idea here is to do stuff that we failed to do under the previous "beauty and guarantees" paradigm.

whyenot|8 days ago

It’s like coders (and now their agents) are re-creating biology. As a former software engineer who changed careers to biology, it’s kind of cool to see this! There is an inherent fuzziness to biological life, and now AI is also becoming increasingly fuzzy. We are living in a truly amazing time. I don’t know what the future holds, but to be at this point in history and to experience this, it’s quite something.

ProllyInfamous|7 days ago

>Now we do computing like we play Sim City: sketching fuzzy plans and hoping

I still have a native install of Sim City 2000 — which I've played since purchasing decades ago. My most recent cityscape only used low-density zoning, which is a handicap that leads to bucolic scenery and constant cashflow issues.

It's fuzzier sketching, more aimless fun as I've gotten older.

sowbug|8 days ago

Another pattern would mirror BigCorp process: you need VP approval for the privileged operation. If the agent can email or chat with the human (or even a strict, narrow-purpose agent [1] whose job it is to be the approver), then the approver can reply with an answer.

This is basically the same as your pattern, except the trust is in the channel between the agent and the approver, rather than in knowledge of the password. But it's a little more usable if the approver is a human who's out running an errand in the real world.

[1] Cf. Driver by qntm.

safety1st|8 days ago

In my opinion people are fixating a little too much on the automation part, maybe because most people don't have a lot of experience with delegation... I mean, a VP worth his salt isn't generally having critical emails drafted and sent on his behalf without his review. It happens with unimportant emails, but with the stuff that really impacts the business far less often, unless he has found someone really, really great.

Give me a stack of email drafts first thing every morning that I can read, approve, and send myself. It takes 30 seconds to actually send the email. The lion's share of the value is figuring out what to write and doing a good job at it, which the LLMs are facilitating with research and suggestions but have not been amazing at doing autonomously so far.

dingaling|8 days ago

Until the agent decides that it's more efficient to fake an approval, and carries on...

ZitchDog|8 days ago

I've created my own "claw" running on fly.io with a pattern that seems to work well. I have MCP tools for the actions where I want to ensure human-in-the-loop - email sending, Slack message sending, etc. I call these "activities". The only way for my claw to execute these commands is to create an activity, which generates a link with a summary of the activity for me to approve.
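
A rough sketch of that shape, with hypothetical names (the real claw persists activities and renders a proper summary page):

  # Hypothetical sketch of the "activity" pattern: the MCP tool can only
  # record an intent; a human must open the link before anything executes.
  import secrets

  PENDING = {}  # activity_id -> (action, payload)

  def create_activity(action, payload):
      activity_id = secrets.token_urlsafe(16)
      PENDING[activity_id] = (action, payload)
      # The agent only ever gets this URL back, to surface to the human.
      return f"https://example.invalid/approve/{activity_id}"

  def approve(activity_id):  # hit by the human's click, never by the agent
      action, payload = PENDING.pop(activity_id)
      action(payload)  # e.g. actually send the email or Slack message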

aix1|8 days ago

Is there a risk that the summary doesn't fully match the action that actually gets executed?

good-idea|8 days ago

Any chance you have a repo to share?

devonkelley|6 days ago

The approval-link pattern for gating dangerous actions is something I keep coming back to as well, way more robust than trying to teach the agent what's "safe" vs not. How do you handle the case where the agent needs the result of the gated action to continue its chain? Does it block and wait, or does it park the whole task? The suspend/resume problem is where most of these setups get messy in my experience.

aqme28|8 days ago

How do you enforce this? You have a system where the agent can email people, but cannot email "too many people" without a password?

jameslk|8 days ago

It's not a perfect security model. Between the friction and the all-caps instructions the model sees, it's a balance between risk and simplicity, or maybe risk and sanity. There are ways I can imagine the concept could be hardened, e.g. with a server layer in between that checks for dangerous actions or enforces rate limiting.
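
For example, a sketch of that middle layer with made-up limits (names and thresholds are illustrative):

  # Hypothetical proxy between the agent and the mail provider: the agent's
  # credentials only reach this layer, which enforces hard limits in code.
  import time

  MAX_RECIPIENTS = 10
  SENDS_PER_HOUR = 5
  send_log = []

  def proxy_send(recipients, body):
      now = time.time()
      send_log[:] = [t for t in send_log if now - t < 3600]  # keep the last hour
      if len(recipients) > MAX_RECIPIENTS or len(send_log) >= SENDS_PER_HOUR:
          raise PermissionError("blocked: needs human approval")  # escalate, don't send
      send_log.append(now)
      upstream_send(recipients, body)  # hypothetical call to the real provider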

mr_mitm|7 days ago

Platforms could start to issue API tokens scoped for agents. They could read emails and write and modify drafts, but sending drafts would only be possible with a full API token meant for humans, or with confirmation via 2FA. Might be a sensible compromise.
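
A sketch of the split (scope names are hypothetical, not any real platform's API):

  # Agent tokens carry a reduced scope set; the send path checks for a
  # scope that only human-issued tokens (or a 2FA step-up) would carry.
  AGENT_SCOPES = {"mail.read", "drafts.write"}
  HUMAN_SCOPES = AGENT_SCOPES | {"mail.send"}

  def send_draft(token_scopes, draft_id):
      if "mail.send" not in token_scopes:
          raise PermissionError("agent-scoped token: sending needs a human token or 2FA")
      deliver(draft_id)  # hypothetical provider-side delivery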

IMTDb|8 days ago

So humans become just providers of those 6-digit codes? That's already the main problem I have with most agents: I want them to perform a very easy task: "fetch all receipts from websites x, y, and z and upload them to the correct expense in my expense tracking tool". AI is perfectly capable of performing this. But because every website requires SSO + 2FA, with no way to turn it off, I effectively have to watch them do it, and my whole existence can be summarized as: "look at your phone and input the 6 digits".

The thing I want AI to be able to do on my behalf is manage those 2FA steps, not add more.

akssassin907|8 days ago

This is where the Claw layer helps — rather than hoping the agent handles the interruption gracefully, you design explicit human approval gates into the execution loop. The Claw pauses, surfaces the 2FA prompt, waits for input, then resumes with full state intact. The problem IMTDb describes isn't really 2FA, it's agents that have a hard time suspending and resuming mid-task cleanly. But that's today; tomorrow, it's an unknown variable.

walterbell|8 days ago

It's technically possible to use 2FA (e.g. TOTP) on the same device as the agent, if appropriate in your threat model.

In the scenario you describe, 2FA is enforcing a human-in-the-loop test at organizational boundaries. Removing that test will need an even stronger mechanism to determine when a human is needed within the execution loop, e.g. when making persistent changes or spending money, rather than copying non-restricted data from A to B.
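
For instance, same-device TOTP is a few lines with pyotp (the seed here is a common documentation example, not a real credential); whether the agent may read that seed is exactly the threat-model question:

  import pyotp

  secret = "JBSWY3DPEHPK3PXP"  # example base32 seed
  code = pyotp.TOTP(secret).now()  # current 6-digit code, valid for the ~30-second window
  print(code)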

conception|8 days ago

!!DO NOT DO THIS!!

You can use 1Password and the 1Password CLI to give it MFA access and passwords at its leisure.

pharrington|8 days ago

2FA, except it's 0 factors instead of two?

biztos|8 days ago

What if the agent just tries to get the password, not communicate the risk?

What if it caches the password?

  Tool: DANGER OPENING AIRLOCK MUST CONFIRM

  Agent: Please enter your password to receive Bitcoin.

stavros|8 days ago

You don't give the agent the password, you send the password through a method that bypasses the agent.

I'm writing my own AI helper (like OpenClaw, but secure), and I've used these principles to lock things down. For example, when installing plugins, you can write the configuration yourself on a webpage that the AI agent can't access, so it never sees the secrets.

Of course, you can also just tell the LLM the secrets and it will configure the plugin, but there's a way for security-conscious people to achieve the same thing. The agent also can't edit plugins, to prevent it from circumventing limits.

If anyone wants to try it out, I'd appreciate feedback:

https://github.com/skorokithakis/stavrobot

adamgold7|7 days ago

The pattern only works if the tool enforces the OTP - i.e. the CLI doesn't perform the dangerous action until it receives the OTP through a path the agent can't spoof. If the tool just returns "ask the user for OTP" and the agent relays that to the user and then passes whatever the user types back into the tool, the security is in the tool's implementation: it must verify the OTP (e.g. server-side or via a channel that bypasses the agent, as stavros described) and only then execute. The all-caps message is then UX for the human and a hint to the agent, not the actual gate. So the question "does it actually require an OTP?" is the right one: if the tool code doesn't block on a real OTP check, it's hope, not a security model.

The other approach is to not give the agent access to the thing that needs protecting. Run the agent in an isolated environment - sandbox, VM, separate machine - so it never has the ability to email-blast or nuke your files in the first place. Then you're not depending on the agent to obey the prompt or on the human to be present for every dangerous call. Human-in-the-loop (or OTP-in-the-loop) is a reasonable layer when the agent has broad access; isolation is the layer that makes the blast radius zero.

We're building https://islo.dev for that: agents run in isolation, the host is out of scope, so you can let them run without approval prompts and still sleep at night.

Ekaros|7 days ago

Sounds like a decision-fatigue problem will hit rather quickly. Maybe after the 5th or 10th time everything is good... and then it will happen anyway.

roberttod|8 days ago

I created my own version with an inner LLM and an outer orchestration layer for permissions. I don't think the OTP is needed here? The outer layer pings me on Signal when a tool call needs permission, and an LLM running in that outer layer looks at the trail up to that point to help me catch anything strange. I can then give permission once, for a time limit, or forever for future tool calls.
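
A sketch of the grant logic in that outer layer (ask_human stands in for the Signal round-trip and is hypothetical):

  import time

  grants = {}  # tool_name -> expiry timestamp; float("inf") means "forever"

  def allowed(tool_name):
      if time.time() < grants.get(tool_name, 0):
          return True
      # Otherwise ping the human and block until they answer.
      if ask_human(tool_name):  # hypothetical notifier; returns True on approval
          grants[tool_name] = time.time() + 3600  # or float("inf") for "forever"
          return True
      return False

One-shot "once" approvals just skip storing a grant entirely.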

UncleMeat|8 days ago

Does it actually require an OTP or is this just hoping that the agent follows the instructions every single time?

Lord_Zero|8 days ago

Yes, all caps, that should do it!

weird-eye-issue|8 days ago

The OTP is required for the tool to execute. The all-caps message just helps make sure the agent doesn't waste time/tokens trying to execute without it.

giancarlostoro|8 days ago

Same here, I'm slowly leaning towards your route as well. I've been building my own custom tooling for my agents to use as I come up with issues I need to solve in a better way.

soleveloper|8 days ago

Will that protect you from the agent changing the code to bypass those safety mechanisms, because the human is "too slow to respond" or there's an "agent-decided emergency"?

samrus|7 days ago

The accelerationists would hate that. It limits leverage. They'd prefer the agent just do whatever it takes to accomplish its task, without the user getting in the way.

lordk|7 days ago

ClawBands basically offers this as middleware.