Giving different permissions levels to different email senders would be very challenging to implement reliably with LLMs. With an AI assistant like this, the typical implementation would be to feed it the current instruction, history of interactions, content of recent emails, etc, and ask it what command to run to best achieve the most recent instruction. You could try to ask the LLM to say which email the command originates from, but if there's a prompt injection, the LLM can be tricked in to lying about that. Any permissions details need to be implemented outside the LLM, but that pretty much means that each email would need to be handled in its own isolated LLM instance, which means that it's impossible to implement features like summarizing all recent emails.
williamcotton|2 years ago
All of the data that came from remote commands, such as the body of a newly created Todo item, should still be considered tainted and and treated in a similar manner.
These are the exact same security issues for any case of remote API calls with arbitrary execution.
alangpierce|2 years ago