OakNinja | 1 month ago
PromptArmor demonstrated a similar attack(1) on Google's Antigravity, which is also in beta. Since then, Google has added a secure mode(2).
These are still beta tools. When they mature, I'd argue they will probably be safer out of the box than the status quo, where a whole lot of users blindly copy-paste code from the internet and add random dependencies without proper due diligence. These tools might actually help users act more securely.
I'm honestly more worried about all the other problems these tools create. Vibe-coded problems scale fast. And businesses still haven't understood that code is not an asset, it's a liability. Ideally, you solve your business problems with zero lines of code. Code is not expensive to write, it's expensive to maintain.
(1) https://www.promptarmor.com/resources/google-antigravity-exf... (2) https://antigravity.google/docs/secure-mode
iLoveOncall|1 month ago
You don't even need to give it access to Internet to have issues. The training data is untrusted.
It's a guarantee that bad actors are spreading compromised code to infect the training data of future models.
acessoproibido|1 month ago
Security shouldn't be viewed in absolutes (either you're secure or you aren't) but in degrees. LLMs can be used securely just the same as everything else; nothing is ever perfectly secure.
strken|1 month ago
It's a good message for software engineers, who have the context to understand when to take on that liability anyway, but it can lead other job functions into being too trigger-happy on solutions that cause all the same problems with none of the mitigating factors of code.
Eufrat|1 month ago
Too much of this argument rests on the speculative claim that these are just "beta tools".
cyanydeez|1 month ago
They can't. Why? Because the smartest bear is smarter than the dumbest human.
So these AIs are supposed to interface with humans and use non-deterministic language.
That vector will always be exploitable, unless you're talking about AI that no human controls.
OakNinja|1 month ago
The non-deterministic nature of an LLM can also be used to catch a lot of attacks. I often use LLMs to look through code, libraries, etc. for security issues, vulnerabilities, and other problems as a second pair of eyes.
With that said, I agree with you. Anything can be exploited, and LLMs are no exception.
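The "second pair of eyes" workflow above can be sketched as a small helper that wraps a snippet in a focused security-review prompt. This is a minimal sketch, not OakNinja's actual setup: the prompt wording, the model name, and the commented-out OpenAI-style client call are all assumptions; any chat-completion API would slot in the same way.

```python
# Hypothetical sketch of an LLM-based security review helper.
# Only the prompt construction is shown; the API call is commented out
# because it requires an account/key and a specific provider.

def build_review_prompt(code: str) -> str:
    """Wrap a code snippet in a focused security-review instruction."""
    return (
        "Act as a security reviewer. List any vulnerabilities "
        "(injection, unsafe deserialization, hard-coded secrets) in the "
        "following snippet, or reply 'no issues found':\n\n" + code
    )

# The actual call might look like this (assumed provider/model, not from the thread):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": build_review_prompt(snippet)}],
# )
# print(resp.choices[0].message.content)
```

Because the model's output is non-deterministic, running the same review a few times and diffing the findings is one cheap way to surface issues a single pass might miss.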