wingmanjd|3 months ago
- A) Process untrustworthy input
- B) Have access to private data
- C) Be able to change external state or communicate externally
It's not bullet-proof, but it has helped me communicate to management that these tools carry inherent risk when they hit all three categories above (and, imho, any combination of them).
[EDIT] added "or communicate externally" to option C.
[1] https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa... [2] https://ai.meta.com/blog/practical-ai-agent-security/
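The three-category check above can be sketched as a tiny policy gate. This is just an illustration of the idea, not any real framework's API; the `AgentCapabilities` type and flag names are my own invention:

```python
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    # Hypothetical capability flags for an LLM-based tool.
    processes_untrusted_input: bool   # (A) e.g. reads web pages, emails
    accesses_private_data: bool       # (B) e.g. internal docs, credentials
    has_external_side_effects: bool   # (C) changes state or communicates out

def lethal_trifecta(caps: AgentCapabilities) -> bool:
    """True when all three risk categories are present at once."""
    return (caps.processes_untrusted_input
            and caps.accesses_private_data
            and caps.has_external_side_effects)

browser_agent = AgentCapabilities(True, True, True)
print(lethal_trifecta(browser_agent))  # prints True
```

Dropping any one flag to False makes the gate pass, which mirrors the advice that removing one leg of the trifecta is the mitigation.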
btown|3 months ago
You might say "well, I'm running the output through a watchdog LLM before displaying to the user, and that watchdog doesn't have private data access and checks for anything nefarious."
But the problem is that the moment someone figures out how to prompt-inject a quine-like thing into a private-data-accessing system, such that it outputs another prompt injection, now you've got both (A) and (B) in your system as a whole.
Depending on your problem domain, you can mitigate this: if you're doing a classification problem and validate your outputs that way, there's not much opportunity for exfiltration (though perhaps some might see that as a challenge). But plaintext outputs are difficult to guard against.
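The classification mitigation above can be sketched as a strict output validator that rejects anything outside a fixed label set; the label names and function are assumptions for illustration:

```python
# Hypothetical fixed label set for a classification task.
ALLOWED_LABELS = {"spam", "not_spam", "unsure"}

def validate_classification(raw_llm_output: str) -> str:
    """Accept only an exact label; reject everything else.

    Because the output channel can only carry one of N fixed tokens,
    an injected prompt has almost no bandwidth to exfiltrate data."""
    label = raw_llm_output.strip().lower()
    if label not in ALLOWED_LABELS:
        raise ValueError(f"unexpected model output: {raw_llm_output!r}")
    return label

print(validate_classification("Spam\n"))  # prints spam
```

A free-text response smuggling secrets would fail the membership check, which is why plaintext outputs are so much harder to guard.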
quuxplusone|3 months ago
Are you just worried about social engineering — that is, if the attacker can make the LLM say "to complete registration, please paste the following hex code into evil.example.com:", then a large number of human users will just do that? I mean, you'd probably be right, but if that's "all" you mean, it'd be helpful to say so explicitly.
otabdeveloper4|3 months ago
It's this decade's version of "they trust me, dumb fucks".
ArcHound|3 months ago
It's a great start, but not nearly enough.
EDIT: right, when we bundle state changes with external comms, we do have all three. I missed that too.
malisper|3 months ago
> Gemini exfiltrates the data via the browser subagent: Gemini invokes a browser subagent per the prompt injection, instructing the subagent to open the dangerous URL that contains the user's credentials.
Opening that attacker-controlled URL fulfills the requirement of being able to change external state (or communicate externally).
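One common mitigation for the exfiltration path described above is to deny subagent navigation to any host outside an allowlist. A minimal sketch, assuming a fixed set of permitted hosts (the hosts and function name are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the browser subagent may open.
ALLOWED_HOSTS = {"docs.example.com", "api.example.com"}

def may_navigate(url: str) -> bool:
    """Allow only http(s) URLs whose exact host is allowlisted, so an
    injected prompt can't send credentials to an attacker's server
    via the URL path or query parameters."""
    parsed = urlparse(url)
    return parsed.scheme in {"http", "https"} and parsed.hostname in ALLOWED_HOSTS

print(may_navigate("https://docs.example.com/page"))       # prints True
print(may_navigate("https://evil.example.net/?c=secret"))  # prints False
```

This doesn't stop exfiltration to an allowlisted host the attacker controls content on, but it removes the generic "open any dangerous URL" capability quoted above.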