(no title)
rando77 | 1 month ago
If done in an auditably unlogged environment (with limited output to the company, e.g. just an "escalate" signal) it might also encourage people to share vulns they are worried about putting online.
Does that make sense from your experience?
[1] https://github.com/eb4890/echoresponse/blob/main/design.md
varenc | 1 month ago
The 2nd order effects of this, when reporters expect an LLM to be validating their report, may get tricky. But ultimately if it's only passing a "likely warrants investigation" signal and has very few false negatives, it sounds useful.
With trust and security though, I still feel like some human needs to be ultimately responsible for closing each bad report as "invalid" and never purely relying on the LLM. But it sounds useful for elevating valid high severity reports and assisting the human ultimately responsible.
It does feel like a hard product to build from scratch, though, but easy for existing bug bounty systems to add.
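To make the division of responsibility concrete, here's a minimal sketch of that triage flow. Everything here is assumed for illustration: `llm_severity_score` stands in for a real LLM call (a keyword heuristic so it runs offline), and the threshold is deliberately low so false negatives stay rare. The key property is that the model can only ever say "escalate" or "human_review", never "invalid".

```python
from dataclasses import dataclass

@dataclass
class Report:
    report_id: str
    text: str

def llm_severity_score(report: Report) -> float:
    """Stand-in for an LLM call; a keyword heuristic so the sketch runs offline."""
    signals = ("rce", "sql injection", "auth bypass", "xss")
    hits = sum(kw in report.text.lower() for kw in signals)
    return min(1.0, 0.4 * hits)

def triage(report: Report, threshold: float = 0.3) -> str:
    # Low threshold on purpose: false negatives are the costly failure,
    # so anything remotely plausible gets the "escalate" signal.
    # The model never returns "invalid"; closing a report stays with a human.
    if llm_severity_score(report) >= threshold:
        return "escalate"
    return "human_review"

print(triage(Report("r1", "Unauthenticated RCE in the file upload handler")))
print(triage(Report("r2", "Typo on the about page")))
```

The point of the two-way output is that the LLM only adds work for humans (by surfacing likely-valid reports), never removes it (by closing reports on its own), which matches the "some human is ultimately responsible" constraint above.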