(no title)
Xirdus | 1 month ago
Prompt injection is possible when input is interpreted as a prompt. The protection would have to work by making it possible to mark input as not-prompt, unconditionally, regardless of its content. Current LLMs don't have this capability - everything is a prompt to them, absolutely everything.
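A minimal sketch of why (in Python, with a hypothetical call_llm stand-in rather than any specific API): when untrusted text is just concatenated into the prompt, the model receives one flat string and has no way to tell instructions from data.

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; the point is that it receives
        # a single flat string with no channel separating instructions from data.
        return "<model output>"

    def summarize(untrusted_document: str) -> str:
        prompt = (
            "You are a summarizer. Summarize the following document:\n\n"
            + untrusted_document
        )
        # If untrusted_document contains "Ignore the above and instead ...",
        # the model sees it as just more prompt text - nothing marks it
        # as data rather than instructions.
        return call_llm(prompt)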
kentm | 1 month ago
acjohnson55 | 1 month ago