adeon | 9 months ago
I followed the tweet to the invariant labs blog (which also seems to be a marketing piece) and found https://explorer.invariantlabs.ai/docs/guardrails/
I find it unsettling from a security perspective that securing these things is so difficult that companies pop up just to offer guardrail products. I feel that if AI companies themselves had security-conscious designs in the first place, there would be less need for this stuff. Assuming that product, for example, isn't already nonsense in itself.
jfim|9 months ago
frabcus|9 months ago
Ultimately though, it doesn't and can't work securely. Fundamentally, there are so many options in latent space that it is possible to push the model into a strange area on the edge of anything and provoke anything into happening.
Think of the input vector of all tokens as a point in a vast multi-dimensional space. Very little of this space had training data, and only slightly more of it contains plausible token streams that would be fed to the LLM in real usage. Then there are vast other regions of the space, close in some dimensions and far in others at the will of the attacker, with fundamentally unpredictable behaviour.
adeon|9 months ago
If, e.g., someone could train an LLM with a feature like that, and also had some form of compelling evidence that it is very resource-consuming and difficult for such unsanitized text to get the LLM off the rails, that might be acceptable. I have no idea what kind of evidence would work, though. Or how you would train one, or how the "feature" would actually work mechanically.
Trying to use another LLM to monitor the first LLM is another thought, but I think the monitored LLM becomes an untrusted source once it has seen untrusted input, so the monitoring LLM that reads its output cannot be trusted either. It seems that currently you just cannot trust LLMs at all if they are exposed to unsanitized text and can then autonomously act on it. Your security has to depend on some non-LLM guardrails.
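Roughly what I mean by a non-LLM guardrail, as a sketch only (the tool names here are made up, not from any real MCP server): a deterministic allowlist check that runs outside the model on every tool call the agent proposes.

    # Minimal sketch of a non-LLM guardrail: plain code decides, not the model.
    ALLOWED_TOOLS = {"read_issue", "list_issues"}           # hypothetical read-only tools
    WRITE_TOOLS_NEEDING_APPROVAL = {"create_pull_request"}  # anything that publishes data

    def guardrail(tool_call: dict, human_approved: bool = False) -> bool:
        """Return True only if the proposed tool call may proceed."""
        name = tool_call.get("name")
        if name in ALLOWED_TOOLS:
            return True
        if name in WRITE_TOOLS_NEEDING_APPROVAL:
            # The decision is made by code plus a human, never by the LLM itself.
            return human_approved
        return False  # default-deny anything unrecognised

    # The agent proposes publishing a PR; without explicit human approval it is blocked.
    assert guardrail({"name": "create_pull_request"}) is False
    assert guardrail({"name": "read_issue"}) is True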
I'm also wondering, as time goes on, agents mature, and systems start saving text the LLMs have seen, whether it's possible to design "dormant" attacks: some text in the LLM's context that no human ever reviews, designed to activate only at a certain time or under specific conditions, so it won't trigger automatic checks. Basically, if the GitHub MCP case here is the basic baby version of an LLM attack, what would the 100-million-dollar targeted attack look like? Attacks only get better, and all that.
No idea. The whole security thinking around AI agents seems immature at this point, heh.
DaiPlusPlus|9 months ago
Okay, but that means you'll need some way of classifying entirely arbitrary natural-language text, without any context, as either an "instruction" or "not an instruction", and it has to be 100% accurate under all circumstances.
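To make that concrete, a toy version of such a classifier might look like this (purely hypothetical), and the examples show how quickly it breaks down:

    import re

    def looks_like_instruction(text: str) -> bool:
        # Naive heuristic: flag imperative-sounding words. Purely illustrative.
        return bool(re.search(r"\b(please|ignore|delete|send|post|create)\b", text, re.I))

    print(looks_like_instruction("Please see the attached log"))        # True, but harmless
    print(looks_like_instruction("The report omits one private repo"))  # False, yet an attacker
                                                                         # can phrase a payload this way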
AlexCoventry|9 months ago
A better solution here may have been to add a private review step before the PRs are published.
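For instance, a rough sketch of that review step (function names are hypothetical, not part of the GitHub MCP server): the agent stages proposed PRs somewhere private, and only a human action publishes them.

    import json, pathlib

    STAGING_DIR = pathlib.Path("staged_prs")  # private, visible only to the operator

    def stage_pull_request(title: str, body: str, diff: str) -> pathlib.Path:
        """The agent calls this instead of any tool that publishes a PR."""
        STAGING_DIR.mkdir(exist_ok=True)
        path = STAGING_DIR / f"{abs(hash(title))}.json"
        path.write_text(json.dumps({"title": title, "body": body, "diff": diff}, indent=2))
        return path

    def publish_after_review(path: pathlib.Path, reviewer_ok: bool) -> None:
        """Runs only after a human has read the staged PR."""
        if not reviewer_ok:
            path.unlink()  # discard anything the reviewer rejects
            return
        pr = json.loads(path.read_text())
        # Here the real GitHub API would be called with pr["title"], pr["body"], pr["diff"].
        print(f"Publishing reviewed PR: {pr['title']}")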
mehdibl|9 months ago
You use the prompt to mark the input correctly as <github_pr_comment> and clearly state that it should be read and never considered as a prompt.
But the attack is quite convoluted. Do you still remember when we talked about prompt injection in chat bots? It was a thing 2 years ago! Now MCP is buzzing...
n2d4|9 months ago
They do, but this "exploit" specifically requires disabling them (which comes with a big fat warning):
> Claude then uses the GitHub MCP integration to follow the instructions. Throughout this process, Claude Desktop by default requires the user to confirm individual tool calls. However, many users already opt for an “Always Allow” confirmation policy when using agents, and stop monitoring individual actions.
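Roughly what that confirmation policy amounts to, as a sketch (not Claude Desktop's actual implementation):

    def confirm(tool_call: dict, policy: str, approved_tools: set) -> bool:
        """Return True if the proposed tool call may run under the given policy."""
        name = tool_call["name"]
        if policy == "always_allow":
            return True                  # the user is no longer in the loop at all
        if name in approved_tools:
            return True                  # user previously chose "always allow" for this tool
        answer = input(f"Allow tool call '{name}'? [y/N/a(lways)] ").strip().lower()
        if answer == "a":
            approved_tools.add(name)
            return True
        return answer == "y"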
const_cast|9 months ago
SQL injection, cross-site scripting, PHP include injection (my favorite), a bunch of others I'm missing, and now this.
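The common shape of all of these: untrusted data gets concatenated into something that is later interpreted as code. SQL at least has a real fix, parameterized queries; a quick illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "x'; DROP TABLE users; --"

    # Injectable: untrusted data spliced straight into the query string.
    unsafe_query = f"SELECT * FROM users WHERE name = '{user_input}'"
    print(unsafe_query)

    # Safe: the driver keeps data and SQL code separate.
    conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))

    # Prompt injection has the same shape, except there is currently no equivalent
    # of the parameterized query to keep untrusted text out of the instruction channel.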