top | item 47043829


ryanrasti | 13 days ago

This is a really good question because it hits on the fundamental issue: LLMs are useful because they can't be statically modeled.

The answer is to constrain effects, not intent. You can define capabilities where agent behavior is constrained within reasonable limits (e.g., can't post private email to #general on Slack without consent).

The next layer is UX/feedback: the system can compile additional policy as the user requests it (e.g., only this specific sender's emails can be posted to #general).
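A minimal sketch of the two layers above, assuming a hypothetical egress-check API (none of these names are from a real library): a default-deny policy for private-to-public flows, plus user-approved exceptions compiled in over time.

```python
# Illustrative sketch: constrain effects at egress, not agent intent.
# All names (check_egress, approve_flow, etc.) are hypothetical.

PRIVATE_SOURCES = {"email"}          # data classes treated as sensitive
PUBLIC_SINKS = {"slack:#general"}    # egress points that need consent

# Exceptions the user has explicitly approved, e.g.
# "emails from this sender may go to #general".
approved_flows = set()


def approve_flow(source_detail: str, sink: str) -> None:
    """User-driven policy compilation: record an approved exception."""
    approved_flows.add((source_detail, sink))


def check_egress(source: str, source_detail: str, sink: str) -> bool:
    """Allow the call unless it is an unapproved private->public flow."""
    if source in PRIVATE_SOURCES and sink in PUBLIC_SINKS:
        return (source_detail, sink) in approved_flows
    return True


# Blocked by default:
assert not check_egress("email", "alice@example.com", "slack:#general")
# After the user approves this specific sender, the same flow passes:
approve_flow("alice@example.com", "slack:#general")
assert check_egress("email", "alice@example.com", "slack:#general")
```

The point of the sketch is that the check sits outside the agent loop: the model can want whatever it wants, but the flow is evaluated at the tool boundary.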


botusaurus|13 days ago

But how do you check that an email is being sent to #general? Agents are very creative at escaping/encoding; they could even paraphrase the email in their own words.

Decades ago, secure OSes tracked the provenance of every byte (clean/dirty) to detect leaks, but that's hard if you want your agent to be useful.

ryanrasti|13 days ago

> Decades ago, secure OSes tracked the provenance of every byte (clean/dirty) to detect leaks, but that's hard if you want your agent to be useful.

Yeah, you're hitting on the core tradeoff between correctness and usefulness.

The key differences here:

1. We're not tracking at the byte level but at the tool-call/capability level (e.g., read emails), and enforcing at egress (e.g., send emails).

2. The agent can slowly learn approved patterns from user behavior/common exceptions to the strict policy. You can be strict at the start and grant more autonomy for known-safe flows over time.

gostsamo|13 days ago

You can restrict the email-send tool so that the to/cc/bcc addresses must come from a hardcoded list, and an agent-independent channel should be the one that adds items to it. It's basically the same for other tools: you cannot rewire the LLM, but you can enumerate and restrict the boundaries it works through.

Exfiltrating info through GET requests won't be 100% stopped, but it will be hampered.
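The allowlist idea above can be sketched as follows, assuming a hypothetical tool wrapper; the key design choice is that the function that grows the list is never exposed to the agent, only to a human-run channel (e.g., an admin CLI).

```python
# Illustrative sketch: the only send primitive the agent sees checks a
# hardcoded recipient allowlist. Names are invented for the example.

ALLOWED_RECIPIENTS = {"boss@example.com"}  # baked into the tool


def admin_add_recipient(addr: str) -> None:
    """Called only from outside the agent loop (human-run CLI/UI)."""
    ALLOWED_RECIPIENTS.add(addr)


def send_email_tool(to: str, body: str) -> bool:
    """The send tool registered with the agent."""
    if to not in ALLOWED_RECIPIENTS:
        return False  # the agent cannot invent new recipients
    # ... actually send the email here ...
    return True


assert send_email_tool("attacker@evil.example", "secrets") is False
assert send_email_tool("boss@example.com", "weekly report") is True
```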

ATechGuy|13 days ago

TBH, this looks like an LLM-assisted response.

zmmmmm|13 days ago

and then the next:

> you're hitting on the core tradeoff between correctness and usefulness

The question is whether it's a completely unsupervised bot or there's a human in the loop. I kind of hope a human is not in the loop, given it's such a caricature of LLM writing.