item 43908744

danielvf | 9 months ago

I handle reports for a one million dollar bug bounty program.

AI spam is bad. We've also never had a valid report from an LLM (that we could tell).

People using them will take any explanation of why a bug report is not valid, any questions, or any requests for clarification, and run them back through the same confused LLM. The second pass through generates even deeper nonsense.

It's making any response beyond "closed as spam" not worth the time.

I believe that one day there will be great code examining security tools. But people believe in their hearts that that day is today, and that they are riding the backs of fire breathing hack dragons. It's the people that concern me. They cannot tell the difference between truth and garbage.


phs318u|9 months ago

>It's the people that concern me. They cannot tell the difference between truth and garbage.

Suffice to say, this statement is an accurate assessment of the current state of many more domains than merely software security.

immibis|9 months ago

This has been going on for years, since before AI - they say we live in a "post-truth society". The generation and non-immediate-rejection of AI slop reports could be another manifestation of post-truth rather than a cause of it.

Seb-C|9 months ago

> I believe that one day there will be great code examining security tools.

As with programming in general, I think we will simply continue to have incrementally better tools based on sane and appropriate technologies, as we always have.

What I'm sure about is that no such tool can come out of anything based on natural language, because it's simply the worst possible interface to interact with a computer.

VladVladikoff|9 months ago

This sounds more like an influx of scammers than security researchers leaning too hard on AI tools. The main problem is the bounty structure. And I don't think this influx of low-quality reports will go away, or even get any less aggressive, as long as there is money to attract the scammers. Perhaps these bug bounty programs need to develop an automatic pass/fail tester of all submitted bug code, to ensure the reporter really found a bug, before the report is submitted to the vendor.
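One way such a pass/fail gate might look, as a rough sketch: run the submitted proof-of-concept in an isolated subprocess and only forward the report if the PoC demonstrably triggers the bug. Everything here is hypothetical - the `BUG-REPRODUCED` marker, the function name, and the assumption that the target emits such a marker only when the vulnerability is actually exercised; a real gate would also need heavy sandboxing (container, seccomp, no network).

```python
import subprocess
import sys

def poc_triggers_bug(poc_path: str, timeout_s: int = 30) -> bool:
    """Run a submitted proof-of-concept script and check for an
    agreed-upon success marker on stdout. Hypothetical sketch:
    a production gate would run this inside a locked-down sandbox."""
    try:
        result = subprocess.run(
            [sys.executable, poc_path],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        # A PoC that hangs fails the gate rather than blocking the queue.
        return False
    # Assumption: the instrumented target prints this marker only when
    # the reported vulnerability was genuinely triggered.
    return "BUG-REPRODUCED" in result.stdout
```

A report whose PoC fails this check would be bounced automatically, before a human triager ever sees it.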

rwmj|9 months ago

It's unfortunately widespread. We don't offer bug bounties, but we still get obviously LLM-generated "security reports" which are just nonsense and waste our time. I think the motivation may be trying to get credit for contributing to open source projects.

holuponemoment|9 months ago

Simply charge a fee to submit a report. At 1% of the payout for low bounties it's perfectly reasonable. Maybe progressively scale that rate down a bit as the bounty goes up. Even then, for a $50k bounty you know is correct, it's only $500.
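The arithmetic behind this is a flat fraction of the bounty. A minimal sketch, assuming the 1% rate from the comment (the function name and any tiered scale-down are hypothetical, not part of any real program):

```python
def submission_fee(bounty_usd: float, rate: float = 0.01) -> float:
    """Fee to submit a report: a flat fraction of the bounty.

    The 1% default matches the comment's example; a progressive
    scale-down for larger bounties would lower `rate` by tier.
    """
    return round(bounty_usd * rate, 2)

# The comment's example: 1% of a $50,000 bounty is $500.
print(submission_fee(50_000))  # → 500.0
```

A confident reporter loses 1% of a payout they expect to win; a spammer burns $500 per garbage submission.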

datatrashfire|9 months ago

> I believe that one day there will be great code examining security tools.

Based on the current state, what makes you think this is a given?

ASalazarMX|9 months ago

The improvement history of tools besides LLMs, I suspect. First we had syntax highlighting, and we marveled. Now we have fuzzers and sandboxed malware analysis; who knows what the future will bring?

michaelcampbell|9 months ago

> They cannot tell the difference between truth and garbage.

I honestly think that in this context, they don't care - they put in essentially zero effort on the minuscule chance that you'll pay out something.

It's the same reason we have spam. The return rates are near zero, but so is the effort.