I think you can't see the forest for the trees.
The issue is not a process isolation, it’s pretty trivial to solve in a lot of ways.
The actual problem is LLMs proneness to the prompt injection. The second you give an agent ability to consume the info from the outside world - like reading emails; you expose yourself to this ginormous security vulnerability.
I genuinely don’t understand how people able to sleep at night knowing anyone can trick the magic process with access to their digital lives to do absolutely anything.
No comments yet.