(no title)
snailmailman | 25 days ago
It’s not a critical flaw in the entirety of the LLM ecosystem that now the computers themselves can be tricked into doing things by asking in just the right way. Anything in the context might be a prompt injection attack, and there isn’t really any reliable solution to that but let’s hook everything up to it, and also give it the tools to do anything and everything.
There is still a long way to go to securing these. Apple is, I think wisely, staying out of this arena until it’s solved, or at least less of a complete mess.
mastermage|25 days ago
vimda|25 days ago
nilamo|24 days ago
Maybe, just maybe, this thing that was, until recently, just research papers, is not actually a finished product right now? Incredibly hot take, I know.
AlexandrB|24 days ago