(no title)
splittydev | 4 months ago
If you have absolutely no idea what you're doing, well, then it doesn't really matter in the end, does it? You're never gonna recognize any security vulnerabilities (as has happened many times with LLM-assisted "no-code" platforms and without any actual malicious intent), and you're going to deploy unsafe code either way.
tcdent|4 months ago
Having access to open models is great, and even if their capabilities are somewhat lower than the closed-source SoTA models, and we should be aware of the differences in behavior.
thayne|4 months ago
the keyword here is "more". The big models might not be quite as susceptible to them, but they are still susceptible. If you expect these attacks to be fully handled, then maybe you should change your expectations.
unknown|4 months ago
[deleted]
BoiledCabbage|4 months ago
Well this is wrong. And it's exactly this type of thinking why people will get absolutely burned by this.
First off the fact they chose obvious exploits for explanatory purposes doesn't mean this attack only supports obvious exploits...
And to your second point of "review the code before you deploy to prod", the second attack did not involve deploying any code to prod. It involved an LLM reading a reddit comment or github comment and immediately executing.
People not taking security seriously and waving it off as trivial is what's gonna make this such a terrible problem.
thayne|4 months ago
right, so you shouldn't give the LLM access to execute arbitrary commands without review.