Crazy how he doubled down by just pasting badger's answer into Chat and submitting the (hilariously obvious AI) reply:
> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.
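For reference, a reproducer that actually exercised libcurl's cookie parser would have to look roughly like the sketch below (my own illustration, not anything from the report; the oversized value is arbitrary and the program demonstrates no bug, it just pushes a Set-Cookie line through the cookie engine via CURLOPT_COOKIELIST):

    /* Sketch only: drive libcurl's cookie parser directly. Build with -lcurl. */
    #include <stdio.h>
    #include <string.h>
    #include <curl/curl.h>

    int main(void) {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        /* Build a Set-Cookie line with an arbitrarily oversized value. */
        char value[4001];
        memset(value, 'A', sizeof(value) - 1);
        value[sizeof(value) - 1] = '\0';

        char line[4100];
        snprintf(line, sizeof(line), "Set-Cookie: big=%s; domain=example.com", value);

        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");   /* enable the cookie engine */
        curl_easy_setopt(curl, CURLOPT_COOKIELIST, line); /* hand the line to the cookie parser */

        /* Dump whatever the parser actually stored. */
        struct curl_slist *cookies = NULL;
        curl_easy_getinfo(curl, CURLINFO_COOKIELIST, &cookies);
        for (struct curl_slist *c = cookies; c; c = c->next)
            printf("%s\n", c->data);

        curl_slist_free_all(cookies);
        curl_easy_cleanup(curl);
        return 0;
    }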
Is it that crazy? He's doing exactly what the AI boosters have told him to do.
Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.
In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.
Quite a few people using AI are using it not only to do analysis, but to do translation for them as well; many people leaping onto this technology don't have English as a fluent language, so they can't evaluate the output of the AI for sensibility or "not sounding like AI."
(It's a noise issue, but I find it hard to blame them; not their fault they got born in a part of the world where you don't get autoconfig'd with English and as a result they're on the back-foot for interacting with most of the open source world).
I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research) but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this pops up.
If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.
I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.
You're doing it wrong: you should just feed other people's AI-generated responses into your own AI tools and let the tool answer for you! The loop is then closed, no human time is wasted, and the only effect is the energy wasted running the AI tools. It's the perfect business model for turning energy into money.
Wasting other people's time is even a net positive, meaning jobs won't be lost, since some human still needs to make sense of the AI-generated rubbish.
This is not unique to AI tools. I've seen it with new expense tools that are great for accounting but terrible to use, or a contract-review process that makes things easier for legal, or an infosec review of a SaaS tool that everyone and their uncle already uses. It's always natural to push all the work off to someone else, because it feels like you saved time.
Yeah when reviewing code nowadays once I'm 5-10 comments in and it becomes obvious it was AI generated, I say to go fix it and that I'll review it after. The time waste is insane.
Over time, I've gotten a feel for what kind of content is AI-generated (e.g., images, text, and especially code...), and this text screams "AI" from top to bottom. I think badger responded very professionally; I'd be interested to see Linus Torvalds' reaction in such a situation :D
Start charging users to submit a vulnerability report.
It doesn't matter whether it's made by AI or a human: spammers operate by cheaply overproducing and externalizing their work onto you to validate their shit. And it works, because by virtue of large numbers they do sometimes deliver value. But they are a net negative for society. Their model stops working if they have to pay for the time they wasted.
Even a deposit works well (and doesn't have to be large). Someone who has actually found a serious bug in cURL will probably pay a $2-5 deposit to report it (especially given the high probability of a payout).
It's all in aid of some streetsweeper being able to add "contributor to X, Y, Z projects!" to their GitHub résumé. Before LLMs were a thing I also received worthless spelling-incorrection pull requests with the same aim.
This is why I refuse to interact with people who use AI. You have to invest orders of magnitude more time to review their hallucinated garbage than they used to generate it. I’m not going to waste my time talking to a computer.
This is essentially what teachers are dealing with every day, across the majority of their students, for every subject where it's even remotely possible to use AI.
Education as a profession will have to change. Homework is pointless. Verbal presentations will have to become the new norm, or all written answers must be in the confines of the classroom... with pen and paper. Etc...
Recently a customer pasted a complete ChatGPT chat into the support system with “it doesn’t work” as the subject. I kindly declined.
I’ve also received tickets where the code snippets contained API calls that I never added to the API. A real “am I crazy” situation where I started to doubt I added it and had to double check.
On top of that you get “may I get a refund” emails but expanded to four paragraphs by our friend Chat. It’s getting kinda ridiculous.
Overall it’s been a huge additional time drain.
I think it may be time to update the “what’s included in support” section of my software’s license agreement.
Lord, did anyone else click through and read the actual attached "POC"? It's (for now) hilariously obviously doing nothing interesting at all, but my blood runs cold at AI potentially being able to generate more plausible-looking POC code in the future to waste even more dev time...
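For anyone who didn't click through: it's the genre of "PoC" that looks roughly like the reconstruction below (my own illustration, not the actual attachment). It smashes a buffer it declared itself and never makes a single curl_* call, so it proves nothing about libcurl's cookie handling.

    /* Illustrative reconstruction of the genre, NOT the actual attachment:
       a self-contained overflow of the program's own stack buffer, with no
       libcurl involvement whatsoever. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char cookie[64];
        char oversized[1024];

        memset(oversized, 'A', sizeof(oversized) - 1);
        oversized[sizeof(oversized) - 1] = '\0';

        strcpy(cookie, oversized);   /* overflows our own buffer: undefined behavior */
        printf("first bytes: %.8s...\n", cookie);
        return 0;
    }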
I wonder what's going on in the minds of these people.
I would just be terribly embarrassed and not be able to look at myself in the mirror if I did shit like this.
> batuhanilgarr posted a comment (6 days ago) Thanks for the quick review. You’re right ...
On one hand, it's sort of surprising that they double down: copy the maintainer's response into the LLM prompt, paste back whatever it generates, and hope for the best. But of course it shouldn't be surprising. This isn't just a mistake, it's deliberate lying and manipulation.
Pity HN doesn't support all of those green checkboxes and bold bullet points. Every time I see those in supposedly human-generated documents and pull requests, I laugh.
This LLM-emboldened, mass Dunning-Kruger schizophrenia has gone from hilarious to sad to simply invoking disgust. This isn't even an earnest, altruistic effort but some insecure fever dream of finally being acknowledged as a "genius" of some sort. The worst I've seen of this is some random redditor claiming to have _the_ authoritative version of a theory of everything and spamming it in every theoretical-physics-adjacent subreddit; he claims to have a PhD but is anonymous, doesn't represent any research group or institution, and the spam has no citations.
I.e.: they're farming out the work to OSS volunteers now, not even sure if the fucking thing works, and eating up OSS maintainers' time.
(I mean I guess it has to mean that if we are able to spot them so easily)
Me: "yes, as a matter of fact I am"
Interviewer: "Whats 14x27"
Me: "49"
Interviewer: "that's not even close"
me: "yeah, but it was fast"
“Is this your card?”
“No, but damn close, you’re the man I seek”
I spend a lot of time doing cleanup for a predecessor who took shortcuts.
Granted, I'm agreeing; I'm just saying the methods and the volume may have changed.
I suppose there's a reason why kids are usually banned from using calculators during their first years of school when they're learning basic math.
The “fix” was setting completely fictitious properties. Someone had plugged the GitHub issue into ChatGPT and it spat out an untested answer.
What’s even the point…
I'm wondering (sadly) if this is a kind of defense-prodding phishing, similar to the XZ Utils hack; curl is a pretty fundamental utility.
Similar to 419 scams, it tests the gullibility of the team, its response time, its workload, etc.
We have an AI DDoS problem here, which may need a completely new pathway for PRs or something. Maybe Nostr-based, so PRs can be validated in a web of trust?
What an absolute shambles of an industry we have ended up with.
I don't even... You just have to laugh at this I guess.
Even if it's not AI, there are probably many unskilled developers who submit bogus bug reports, even unknowingly.
LLMs produce so much text, including code, and most of it is not needed.