It's definitely both. A bunch of 20-year-olds were let loose to be "super efficient." So, to be efficient, they used LLMs to implement what should be a major government-oversight webpage. Even after the fix, the page is a few half-baked partial document excerpts plus a few sentences saying, "look how great we are!" It's embarrassing.
> At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.
Not my experience at all. Every LLM produces lots of trivial SQLi/XSS/other-injection vulnerabilities. Worse, they seem to completely omit authorization business logic, error handling, and logging even when prompted to include them.
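To make the SQLi point concrete, the failure mode is almost always the same shape (a minimal sketch using node-postgres; the table and function names are invented):

    import { Pool } from 'pg';

    const pool = new Pool();

    // What LLMs tend to emit: user input spliced straight into the SQL string.
    async function findUserUnsafe(name: string) {
      // name = "x' OR '1'='1" returns every row in the table
      return pool.query(`SELECT * FROM users WHERE name = '${name}'`);
    }

    // The fix is one line: a parameterized query, so input is never parsed as SQL.
    async function findUser(name: string) {
      return pool.query('SELECT * FROM users WHERE name = $1', [name]);
    }

The parameterized version has been the standard idiom for decades, which is what makes it so grating when generated code skips it.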
monocasa|1 year ago
That's currently how I model my usage of LLMs in code. A smart veeeery junior engineer that needs to be kept on a veeeeery short leash.
Maxatar|1 year ago
Maybe they used Grok ;P
zamalek|1 year ago
Smells like getting a backdoor in early.
AirMax98|1 year ago
0 - https://blog.arcjet.com/next-js-server-action-security/
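For anyone skimming, the point of the linked post (going by its title) is server action security. The underlying Next.js behavior is real regardless: a server action compiles to a network-reachable POST endpoint, so any caller can invoke it directly, and the authorization check has to live inside the action itself. Roughly (a sketch only; getSession is a made-up stand-in for whatever session helper the app uses):

    'use server';

    import { getSession } from './session';

    // The framework will invoke this for ANY client that posts to it,
    // bypassing whatever checks the page that renders the form performs.
    export async function deleteDocument(docId: string) {
      const session = await getSession();
      if (!session || session.role !== 'admin') {
        throw new Error('Unauthorized');
      }
      // ...perform the delete only after the check passes
    }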