(no title)
lbeurerkellner | 9 months ago
Your caution is wise; however, in my experience, large parts of the ecosystem do not follow such practices. The report is an educational resource, raising awareness that LLMs can indeed be hijacked to do anything if they have the tokens and access to untrusted data.
The solution: dynamically restrict what your agent can and cannot do with that token. That's precisely the approach we've been working on for a while now [1].
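A minimal sketch of what "dynamically restrict what your agent can do with that token" could look like: a per-session policy is checked before the real token is ever attached to a request, so a hijacked agent can only act within the allowlist. All names here (Policy, call_github) are hypothetical, not the commenter's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    # Allowed (method, repo) pairs for this agent session.
    allowed: set = field(default_factory=set)

    def permits(self, method: str, repo: str) -> bool:
        return (method, repo) in self.allowed

def call_github(policy: Policy, method: str, repo: str) -> str:
    """Attach the powerful token only if the session policy allows the call."""
    if not policy.permits(method, repo):
        raise PermissionError(f"{method} on {repo} blocked by session policy")
    # ... here the request would actually be made with the real token ...
    return f"{method} {repo}: ok"

# Example session: the agent may push/pull one repo and nothing else.
policy = Policy(allowed={("push", "me/myrepo"), ("pull", "me/myrepo")})
```

The point is that the restriction lives outside the LLM: even fully hijacked, the agent cannot widen its own policy.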
ljm | 9 months ago
It's one of those things where a token creation wizard would come in really handy.
flakeoil | 9 months ago
I think I have to go full offline soon.
TeMPOraL | 9 months ago
Fine-grained access forces people to solve a tough riddle that may not actually have a solution. E.g. I don't believe there's a token configuration in GitHub that corresponds to "I want to allow pushing to and pulling from my repos, but only my repos, and not those of any of the organizations I'm a member of; in fact, I want to be sure you can't even enumerate those organizations with that token". If there is one, I'd be happy to learn — I can't figure out how to build it out of the checkboxes GitHub gives me, and honestly, when I need to mint a token, solving riddles like this is the last thing I need.
Getting LLMs to translate what the user wants into the correct configuration might be the simplest solution that's fully general.
spacebanana7 | 9 months ago
Conceivably, prompt injection could be leveraged to make LLMs give bad advice. Almost like social engineering.