The issue is if you want to prevent your LLM from actually doing anything other than responding to text prompts with text output, then you have to give it permissions to do those things.
No-one is particularly concerned about prompt injection for pure chatbots (although they can still trick users into doing risky things). The main issue is with agents, who by definition perform operations on behalf of users, typically with similar roles to the users, by necessity.
antonvs|1 month ago
No-one is particularly concerned about prompt injection for pure chatbots (although they can still trick users into doing risky things). The main issue is with agents, who by definition perform operations on behalf of users, typically with similar roles to the users, by necessity.