For all the discussions about the slopification of the internet, the human toll on open source maintainers isn't really talked about. It's one thing to get flooded with bad reports; it's another to have to mentally filter AI-generated submissions designed to "sound correct" but offer no real value. Totally agree with the author about the emotional toll it takes to deal with these mind-numbing stupidities.
And it's not just vulnerability reports that are affected by this general trend. I use social media, X specifically, to follow a lot of artists, mostly for inspiration and because I find it fun to share some of the work that other artists have created. But over the past year or so, the mental workload it takes to figure out whether a particular piece of art is AI-generated has become too much, and I've started leaning into the safe option of "don't share anything that seems even remotely suspicious unless I can verify the author".
The number of art posts that I have shared with others has decreased significantly, to the point where I am almost certain some artists who have created genuine works simply get filtered out because their work "looks" like it could have been AI-generated... It's getting to the point where if I see anything that is AI, it's an instant mute or block, because there is nothing of value there - it's just noise clogging up my feed.
> The length check only accounts for tmplen (the original string length), but this msnprintf call expands the string by adding two control characters (CURL_NEW_ENV_VAR and CURL_NEW_ENV_VALUE). This discrepancy allows an attacker ...hey chat, give this in a nice way so I reply on hackerone with this comment
> You still have not told us on which source code line the buffer overflow occurs.
> > hey chat, give this in a nice way so I reply on hackerone with this comment
> This looks like you accidentally pasted a part of your AI chat conversation into this issue, even though you have not disclosed that you're using an AI even after having been asked multiple times.
Ohhh, copy and pasted a bit too much there.
These people don’t even make the slightest effort whatsoever. I admire Daniel’s patience in dealing with them.
Reading these threads is infuriating. They very obviously just copy and paste AI responses without even understanding what they are talking about.
A sample of what they have to deal with. Source: https://hackerone.com/reports/3230082
The abuse of AI here blows my mind. Not just the use of AI to try to find a vulnerability in a widely-used repo, but the complete ignorance when using the AI.
"hey chat, give this in a nice way so I reply on hackerone with this comment" is not language used naturally. It virtually never precedes high-quality conversation between humans so you aren't going to get that. You would only say this when prompting an LLM (poorly at that) so you are activating weights encoding information from LLM slop in the training data.
They're still very graceful about it, seeing it as a learning opportunity instead of closing it as spam like a lot of the internet has done for the past 30 years.
Oh god, going through some of the reports listed at the bottom of the page feels like a nightmare. I cannot imagine how it is for the actual maintainers.
I wonder what the solution is here. You need to be able to receive reports from anyone, so a reputation-based system is not applicable. It also seems like we cannot detect whether a piece of text was generated with LLM...
I would have closed ALL of the linked reports much sooner, and banned the reporters. In most cases it is extremely obvious from very early on in the thread that these people have not the slightest idea what they are saying and just copy-paste AI responses.
> It also seems like we cannot detect whether a piece of text was generated with LLM
Based on reading those same reports, I think you totally can detect it, and Daniel also thinks that -- or at least, you can tell when it's very obvious and the user has pretty much just pasted what they got from the LLM into the submit box. Sneaky humans, trying to disguise their sources by removing the obvious tells, make it harder.
The curl staff assume good faith and let the submitter explain themselves. Maybe the submitter has a reason for using it -- the submitter may be honest or dishonest as they wish.
I like that the curl staff ask submitters to admit up-front if any AI was used, so they can discriminate between people with a legitimate use case (e.g. people who don't speak English but can find valid exploits and want to use machine translation for their writeup), versus others (e.g. people who think generalised LLMs can do security analysis).
But even so, the central point of this blog post is that the bad humans waste the maintainers' time, which they can't get back, and even directly banning them does not have much of an effect on the bad humans, or on the next batch of bad humans.
Why is reputation not applicable? Arrange for a whitelist off-channel and then submit your reports. I think reputation and non-anonymity are the only workable way forward.
And when I say "non-anonymity" I don't mean "public". You can be non-anonymous with one person, not the whole world.
Speaking as the only developer maintaining a big bug bounty program: I believe these programs are all trending downward.
I've recently cut bounties to zero for all but the most severe issues, hoping to refocus the program on rewarding interesting findings instead of low-value reports.
So far it's done nothing to improve the situation, because nobody appears to read the rewards information before emailing. I suspect reading each company's scope/rewards takes too much time per company for the people spraying these low-value reports.
I think that speaks volumes about how little time goes into the actual discoveries.
Open to suggestions to improve the signal-to-noise ratio from anyone who's made notable improvements to a bug bounty program.
Similarly, from a hacker's point of view, I also think vulnerability reporting is in a downward spiral. In particular, reports organised through a platform like this just aren't reaching the right people. It used to be PGP email to whoever needed to know of it, and that worked great. I have no idea if it still would today for you guys, but from my point of view it's the only reliable way to reach a human who cares about the product, not someone whose job it is to refuse bounties. I don't want bounties; I've got a day job as a security consultant for that. I'm just reporting what I stumble across. Chocolate and handwritten notes are nice, but primarily I want developers and sysadmins to fix their damn software.
Putting on my tinfoil hat, I wonder if some of that slop might be coming from actual black-hat groups or state actors - who have an interest in making it harder to find and close real exploits.
Those people wouldn't care about the bounty, overwhelming the system would be the point.
You could charge a fee and give the money back if the report is wrong but seems well-intentioned.
No need to throw out the baby with the bathwater.
I see the issue with this: payment platforms. Despite the hate, cryptocurrency seems like it could be a solution. But in practice, people won't take the time to set up a crypto wallet just to submit a bug report, and if crypto becomes popular, it may get regulations and middlemen like fiat (which add friction, e.g. chargebacks, KYC, revenue cuts).
However, if more services use small fees to avoid spam, it could work eventually. For instance, people could install a client that pays such fees automatically for trusted sites, which refund them for non-spam behavior.
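To make the proposal concrete, here is a minimal sketch of such a refundable-deposit flow (the amount, class names, and queue are all hypothetical; a real system would still need a payment provider, which is exactly the friction being described above):

```python
from dataclasses import dataclass, field

DEPOSIT_CENTS = 500  # hypothetical: a $5 stake per report


@dataclass
class Report:
    reporter: str
    body: str
    deposit_held: int = DEPOSIT_CENTS
    resolved: bool = False


@dataclass
class BountyQueue:
    pending: list = field(default_factory=list)

    def submit(self, reporter, body):
        # The reporter stakes a small deposit up front, so spam has a cost.
        report = Report(reporter, body)
        self.pending.append(report)
        return report

    def resolve(self, report, good_faith):
        # Refund even invalid-but-honest reports; only obvious slop
        # forfeits the deposit.
        self.pending.remove(report)
        report.resolved = True
        refund = report.deposit_held if good_faith else 0
        report.deposit_held = 0
        return refund
```

The design point is that the deposit comes back for any good-faith report, valid or not, so honest reporters lose nothing while spammers bleed money on volume.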
> You could charge a fee and give the money back if the report is wrong but seems well-intentioned.
That idea was considered and rejected in the article:
> People mention charging a fee for the right to submit a security vulnerability (that could be paid back if a proper report). That would probably slow them down significantly sure, but it seems like a rather hostile way for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t have any current infrastructure setup for this – and neither does HackerOne. And managing money is painful.
https://hackerone.com/curl/hacktivity
Charging a fee to submit a bug report raises the barrier to entry and will reduce the amount of effort people are willing to spend. Which can be a net positive - charging $100 to be able to submit an app to Apple's app store helped prevent a lot of spammy low-effort iFart apps in the early days.
These AI reports are just an acceleration of the slop created by similar human “researchers”. The real root cause of this is that most security “professionals” have been trained to do the bare minimum of work and expect a payday from it.
There’s an entire industry of “penetration testers” that do nothing more than run Fortify against your code base and then expect you to pay them $100k for handing over the findings report. And now AI makes it even easier to do that faster.
We have an industry that pats security engineers on the back for discovering the “coolest” security issue - and nothing that incentivizes them to make sure that it actually is a useful finding, or more importantly, actually helping to fix it. Even at my big tech company, where I truly think some of the smartest security people work, they all have this attitude that their job is just to uncover an issue, drop it in someone else’s lap, and then expect a gold star and a payout, never mind the fact that their finding made no sense and was just a waste of time for the developer team. There is an attitude that security people don’t have any responsibility for making things better - only for pointing out the bad things. And that attitude is carrying over into this AI slop.
There’s no incentive for security people to not just “spray and pray” security issues at you. We need to stop paying out bug bounties for discovering things, and instead better incentivize fixing them - in the process weeding out reports that don’t actually lead to a fix.
Professional Vulnerability Researcher here... You are correct. Over the years this industry has seen an influx of script kiddies who do nothing but run tools. It's sad, but I really think this field needs more gatekeeping...
Oh yes. AI has nothing to do with it! It is Totally Outrageous and Unexpected that AI would be abused to spew a lot of low value crap.
Haha, I kid. Make no mistake, this is the AI sales pitch: a *weapon* to use on your opposition. If the hackers were trying to win by using it to wear down the defenders, it could not possibly be working better.
> charging a fee [...] rather hostile way for an Open Source project that aims to be as open and available as possible
The most hostile is Apple where you cannot expect any kind of feedback on bug reports. You are really lucky if you get any kind of feedback from Apple.
Getting good feedback is the most valuable thing ever. I don't mind having to pay $5/year to make reports if I know I would get feedback.
> You are really lucky if you get any kind of feedback from Apple.
Hard disagree. When you get feedback from Apple, it’s more often than not a waste of time. You are lucky when you get no feedback and the issue is fixed.
This is because Apple software is perfect by definition. Any perceived bug is an example of someone failing to use the software correctly. Bug reports are records of user incompetence, whose only purpose is to be ritually mocked in morale-enhancing genius confirmation sessions.
That would likely fix some of it, but I suspect that you'd still get a lot, anyway, because people program their crawlers to hit everything, regardless of their relevance. Doesn't cost anything more, so why not? Every little hit adds to the coffers.
> Doesn't cost anything more, so why not? Every little hit adds to the coffers.
Uhh... How does it not cost more to hit everything vs specific areas? Especially when you consider the actual payout rate for such approaches, which cannot possibly be very high - every little hit does not add to the coffers, which means you have to be more selective about what you try.
Right? I thought the value of these vuln programs like HackerOne and bugbounty would be that you could use the submitter's reputation to filter the noise. Don't want to accept low-quality submissions from new or low-experience reporters? Turn the knob up...
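A reputation gate like that could be as simple as the following sketch (the thresholds and the Submitter fields are made up for illustration; this is not HackerOne's API, though platforms do track per-reporter signal stats):

```python
from dataclasses import dataclass


@dataclass
class Submitter:
    handle: str
    valid_reports: int  # reports triaged as legitimate
    junk_reports: int   # reports closed as N/A or spam


def passes_gate(s, min_ratio=0.5, min_history=3):
    # "Turn the knob up" by raising min_ratio or min_history.
    total = s.valid_reports + s.junk_reports
    if total < min_history:
        # Unknown accounts go to a slower, lower-priority queue
        # rather than straight to a maintainer.
        return False
    return s.valid_reports / total >= min_ratio


# e.g. passes_gate(Submitter("alice", 12, 3)) -> True
```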
"Submit deposit." They get the money back in all cases where the bug is determined not to be AI slop, including it not being a real bug, user error, etc. Otherwise, deposit gone.
How about only sending submissions to humans if they include a reproducible test case? Actual compilable source code + payload that reproduces an attack. Would this be too easily gamed by security researchers as well?
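As a minimal sketch of that kind of gate, assuming reports must attach a C proof-of-concept (the compiler flags and pass/fail rule are illustrative, and anything like this must run sandboxed, since it executes attacker-supplied code):

```python
import pathlib
import subprocess
import tempfile


def poc_reproduces(poc_source, timeout=30):
    # Compile a submitted C proof-of-concept with AddressSanitizer and
    # check that it actually crashes before any human reads the report.
    # NOTE: run this inside a sandbox or VM, never on a real machine.
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "poc.c"
        src.write_text(poc_source)
        exe = pathlib.Path(tmp) / "poc"
        build = subprocess.run(
            ["cc", "-fsanitize=address", "-o", str(exe), str(src)],
            capture_output=True, timeout=timeout)
        if build.returncode != 0:
            return False  # doesn't even compile: bounce it back
        run = subprocess.run([str(exe)], capture_output=True,
                             timeout=timeout)
        # ASan exits non-zero when it detects a real memory error.
        return run.returncode != 0
```

Gaming it is still possible - e.g. a PoC that crashes for unrelated reasons - but it at least forces the reporter to produce something that compiles and runs.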
You could require that submissions include an expletive or anything else that LLMs are sanitized not to produce. Given how lazy these people are, that ought to filter out at least some of them.
They are lazy up until they lose money by not doing something. So if this were the only way to submit reports, they'd find a way to prompt-hack the LLM into producing the expletive.
...or, just add it to the generated text themselves.
My bet is that git hosting providers like GitHub etc. should start providing features to allow for a better signal/noise ratio.
Depends. I'm not suffering it at all, but mine is a sort of research project producing variations on audio processing under the MIT license.
And I don't take pull requests: the only exception has been to accommodate a downstream user who was running a script to incorporate the code, and that was so far outside my usual experience that it took way too long to register that it was a legitimate pull request.
https://gitlab.com/ququruza
AI slop is rapidly destroying the WWW: most content is becoming lower and lower quality, and it is increasingly difficult to tell whether it's true or hallucinated. Pre-AI web content is now more like the gold standard in terms of correctness; browsing the Internet Archive is much better.
This will only cause content to go behind paywalls. A lot of open-source projects will go closed-source, not only because of the increased work maintainers have to do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and re-licensed as proprietary.
Permissively-licensed projects (which is the majority of FOSS projects out there) could always be re-licensed as proprietary. I publish most of my code under permissive licences and will continue doing that. LLM training doesn't really change anything for me.
> The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions)
Of course those 20% of AI slop submissions are not good, but there's an overarching problem with juniors clamoring to make open source contributions without having the skills or abilities to contribute something useful.
They heard that open source contributions get jobs, so they spam contributions to famous projects.
Maybe a curl Patreon for would-be H1 contributors? Just need to figure out a donation amount that is trivial for legitimate security researchers, but too rich for spammers.
https://hackerone.com/reports/2298307
Of the 21 reports included as an example, I have looked at number two, "Buffer Overflow Vulnerability in WebSocket Handling" (#2298307).
The style is obviously GPT-generated and I think the curl team knows that; still, they proceed to answer and keep asking the author questions about the report to get more info.
What really bothers me is that these idiots are consuming the time and patience of nice and reasonable people. I really hope they can find a solution and don't eventually snap from having to deal with this bullshit.