rpicard|6 months ago
It seems very short sighted.
I think of it more like self driving cars. I expect the error rate to quickly become lower than humans.
Maybe in a couple of years we’ll consider it irresponsible not to write security and safety critical code with frontier LLMs.
xnorswap|6 months ago
Very quickly he went straight to, "Fuck it, the LLM can execute anything, anywhere, anytime, full YOLO".
Part of that is his risk appetite, but it's also partly because anything else is just really frustrating.
Someone who doesn't themselves code isn't going to understand what they're being asked to allow or deny anyway.
To the pure vibe-coder, who not only doesn't read the code but couldn't read it if they tried, there's no difference between "Can I execute grep -e foo */*.ts" and "Can I execute rm -rf /".
Both are meaningless to them. How do you communicate real risk? Asking vibe-coders to understand the commands isn't going to cut it.
So people just full allow all and pray.
That's a security nightmare. It's back to a default-allow permissive environment, something we haven't really seen in mass-use, general-purpose, internet-connected devices since Windows 98.
The wider PC industry has gotten very good at UX: most people never need to think about how their computer works, and it still hides most of the security trappings from them while keeping things secure.
Meanwhile the AI/LLM side is so rough it basically forces the layperson to open a huge hole they don't understand to make it work.
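One way to make those permission prompts meaningful to a non-coder is to classify a command into a coarse risk bucket before asking, rather than showing raw shell text. A minimal, purely illustrative sketch (the bucket names, command lists, and labels here are assumptions, not any real agent's implementation):

```python
# Hypothetical sketch: map a shell command to a coarse risk label that a
# non-programmer can act on. The categories below are illustrative only.
import shlex

READ_ONLY = {"grep", "ls", "cat", "find", "head", "tail", "wc"}
DESTRUCTIVE = {"rm", "mv", "dd", "mkfs", "chmod", "chown"}
NETWORK = {"curl", "wget", "ssh", "scp", "nc"}

def risk_level(command: str) -> str:
    """Return a coarse risk label for a shell command string."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return "unparseable: deny"
    if not tokens:
        return "empty: deny"
    prog = tokens[0]
    if prog in DESTRUCTIVE:
        return "destructive: ask, with a plain-language warning"
    if prog in NETWORK:
        return "network: ask"
    if prog in READ_ONLY:
        return "read-only: allow"
    return "unknown: ask"

print(risk_level("grep -e foo */*.ts"))  # read-only: allow
print(risk_level("rm -rf /"))            # destructive: ask, with a plain-language warning
```

Real agents would need to handle pipes, subshells, and flags (e.g. `find -delete`), which is exactly why a naive allowlist isn't enough and people fall back to allow-all.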
tootubular|6 months ago
voidUpdate|6 months ago
bpt3|6 months ago
Today, LLMs make development faster, not better.
And I'd be willing to bet a lot of money they won't be significantly better than a competent human in the next decade, let alone the next couple years. See self-driving cars as an example that supports my position, not yours.
anonzzzies|6 months ago
furyofantares|6 months ago
You don't have to use them this way. It's just extremely tempting and addictive.
You can choose to talk to them about code rather than features, using them to develop better code at a normal speed instead of worse code faster. But that's hard work.
philipp-gayret|6 months ago
kriops|6 months ago
Analogous to the way I think of self-driving cars is the way I think of fusion: perpetually a few years away from a 'real' breakthrough.
There is currently no reason to believe that LLMs cannot acquire the ability to write secure code in the most prevalent use cases. However, this is contingent upon the availability of appropriate tooling, likely a Rust-like compiler. Furthermore, there's no reason to think that LLMs will become useful tools for validating the security of applications at either the model or implementation level—though they can be useful for detecting quick wins.
lxgr|6 months ago
rpicard|6 months ago
andrepd|6 months ago
kingstnap|6 months ago
It's optimistic, but maybe once we start training them on "remove the middle" instead, it could help make code better.
tptacek|6 months ago
rpicard|6 months ago
I might also be hyper sensitive to the cynicism. It tends to bug me more than it probably should.
croes|6 months ago
[deleted]
croes|6 months ago
Self-driving cars may be better than the average driver but worse than the top drivers.
For security code it’s the same.
lxgr|6 months ago