top | item 46624868

(no title)

wcoenen | 1 month ago

> you cannot prevent prompt injection

I wonder if might be possible by introducing a concept of "authority". Tokens are mapped to vectors in an embedding space, so one of the dimensions of that space could be reserved to represent authority.

For the system prompt, the authority value could be clamped to maximum (+1). For text directly from the user or files with important instructions, the authority value could be clamped to a slightly lower value, or maybe 0 because the model needs to be balance being helpful against refusing requests from a malicious user. For random untrusted text (e.g. downloaded from the internet by the agent), it would be set to the minimum value (-1).

The model could then be trained to fully respect or completely ignore instructions, based on the "authority" of the text. Presumably it could learn to do the right thing with enough examples.

discuss

order

jcgl|1 month ago

The model only sees a stream of tokens, right? So how do you signal a change in authority (i.e. mark the transition between system and user prompt)? Because a stream of tokens inherently has no out-of-band signaling mechanism, you have to encode changes of authority in-band. And since the user can enter whatever they like in that band...

But maybe someone with a deeper understanding can describe how I'm wrong.

wcoenen|1 month ago

When LLMs process tokens, each token is first converted to an embedding vector. (This token to vectors mapping is learned during training.)

Since a token itself carries no information about whether it has "authority" or not, I'm proposing to inject this information in a reserved number in that embedding vector. This needs to be done both during post-training and inference. Think of it as adding color or flavor to a token, so that it is always very clear to the LLM what comes from the system prompt, what comes from the user, and what is random data.

bandrami|1 month ago

You'd need to run one model per authority ring with some kind of harness. That rapidly becomes incredibly expensive from a hardware standpoint (particularly since realistically these guys would make the harness itself an agent on a model).

immibis|1 month ago

You just add an authority vector to each token vector. You probably have to train the model some more so it understands the authority vector.

NitpickLawyer|1 month ago

> I wonder if might be possible by introducing a concept of "authority".

This is what oAI are doing. System prompt is "ring0" and in some cases you as an API caller can't even set it, then there's "dev prompt" that is what we used to call system prompt, then there's "user prompt". They do train the models to follow this prompt hierarchy. But it's never full-proof. These are "mitigations", not solving the underlying problem.

tempaccsoz5|1 month ago

This still wouldn't be perfect of course - AIML101 tells me that if you get an ML model to perfectly respect a single signal you overfit and lose your generalisation. But it would still be a hell of a lot better than the current YOLO attitude the big labs have (where "you" is replaced with "your users")