(no title)
Legend2440 | 11 days ago
Unfortunately, prompt injection does strongly limit what you can safely use LLMs for. But people are willing to accept the limitations because they do a lot of really awesome things that can't be done any other way.
They will figure out a solution to prompt injection eventually, probably by training LLMs in a way that separates instructions and data.
No comments yet.