top | item 35930040

(no title)

r13a | 2 years ago

> Doesn't it shift the risk > instead of eliminating it?

Yes it's exactly that.

Of course I'm not trying to argue that there's a magic wand to make prompt injection just go away. My point is that prompt injection is so dangerous because we're letting the user directly interact with such a powerful beast as a SOTA LLM.

By filtering prompts and answers by much less powerful but more specialized models we are heavily mitigating risks. But injection risks will still be there just not as a wide open injection avenue as it is today.

Update: typos.

discuss

order

No comments yet.