danielvf | 9 months ago
AI spam is bad. We've also never had a valid report from an LLM (that we could tell).
People using them will take any explanation of why a bug report is not valid, any question, or any request for clarification, and run it back through the same confused LLM. The second pass generates even deeper nonsense.
It's making responding with anything but "closed as spam" not worth the time.
I believe that one day there will be great code-examining security tools. But people believe in their hearts that that day is today, and that they are riding the backs of fire-breathing hack dragons. It's the people that concern me. They cannot tell the difference between truth and garbage.
phs318u | 9 months ago
Suffice it to say, this statement is an accurate assessment of the current state of many more domains than merely software security.
immibis | 9 months ago
Seb-C | 9 months ago
As for programming, I think that we will simply continue to have incrementally better tools based on sane and appropriate technologies, as we have had forever.
What I'm sure about is that no such tool can come out of anything based on natural language, because it's simply the worst possible interface to interact with a computer.
cratermoon | 9 months ago
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
VladVladikoff | 9 months ago
rwmj | 9 months ago
holuponemoment | 9 months ago
datatrashfire | 9 months ago
Based on the current state, what makes you think this is a given?
ASalazarMX | 9 months ago
michaelcampbell | 9 months ago
I honestly think that in this context, they don't care: they put in essentially zero effort on the minuscule chance that you'll pay out something.
It's the same reason we have spam. The return rates are near zero, but so is the effort.