top | item 46623831

(no title)

schmichael | 1 month ago

I'm unconvinced we're as powerless as LLM companies want you to believe.

A key problem here seems to be that domain based outbound network restrictions are insufficient. There's no reason outbound connections couldn't be forced through a local MITM proxy to also enforce binding to a single Anthropic account.

It's just that restricting by domain is easy, so that's all they do. Another option would be per-account domains, but that's also harder.

So while malicious prompt injections may continue to plague LLMs for some time, I think the containerization world still has a lot more to offer in terms of preventing these sorts of attacks. It's hard work, and sadly much of it isn't portable between OSes, but we've spent the past decade+ building sophisticated containerization tools to safely run untrusted processes like agents.

discuss

order

NitpickLawyer|1 month ago

> as powerless as LLM companies want you to believe.

This is coming from first principles, it has nothing to do with any company. This is how LLMs currently work.

Again, you're trying to think about blacklisting/whitelisting, but that also doesn't work, not just in practice, but in a pure theoretical sense. You can have whatever "perfect" ACL-based solution, but if you want useful work with "outside" data, then this exploit is still possible.

This has been shown to work on github. If your LLM touches github issues, it can leak (exfil via github since it has access) any data that it has access to.

schmichael|1 month ago

Fair, I forget how broadly users are willing to give agents permissions. It seems like common sense to me that users disallow writes outside of sandboxes by agents but obviously I am not the norm.

mbreese|1 month ago

I don’t think it is the LLM companies want anyone to believe they are powerless. I think the LLM companies would prefer it if you didn’t think this was a problem at all. Why else would we stay to see Agents for non-coding work start to get advertised? How can that possibly be secured in the current state?

I do think that you’re right though in that containerized sandboxing might offer a model for more protected work. I’m not sure how much protection you can get with a container without also some kind of firewall in place for the container, but that would be a good start.

I do think it’s worthwhile to try to get agentic workflows to work in more contexts than just coding. My hesitation is with the current security state. But, I think it is something that I’m confident can be overcome - I’m just cautious. Trusted execution environments are tough to get right.

heliumtera|1 month ago

>without also some kind of firewall in place for the container

In the article example, an Anthropic endpoint was the only reachable domain. Anthropic Claude platform literally was the exfiltration agent. No firewall would solve this. But a simple mechanism that would tie the agent to an account, like the parent commenter suggested, would be an easy fix. Prompt Injection cannot by definition be eliminated, but this particular problem could be avoided if they were not vibing so hard and bragging about it

rafram|1 month ago

Containerization can probably prevent zero-click exfiltration, but one-click is still trivial. For example, the skill could have Claude tell the user to click a link that submits the data to an attacker-controlled server. Most users would fall for "An unknown error occurred. Click to retry."

The fundamental issue of prompt injection just isn't solvable with current LLM technology.

alienbaby|1 month ago

It's not about being unconvinced, it is a mathematical truth. The control and data streams are both in the prompt and there is no way to definitively isolate one from another.