item 45330378

You did this with an AI and you do not understand what you're doing here

1178 points| redbell | 5 months ago |hackerone.com

542 comments

[+] dansmith1919|5 months ago|reply
Crazy how he doubled down by just pasting badger's answer into Chat and submitting the (hilariously obvious AI) reply:

> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.

[+] Sharlin|5 months ago|reply
Unfortunately that seems to be the norm now – people literally reduce themselves to a copy-paste mechanism.
[+] dragontamer|5 months ago|reply
This might be some kind of asshole tech-guy trying to make the claim that "this AI creates pull requests that are accepted into well-regarded OSS projects".

I.e.: they're now farming out the work to OSS volunteers, not even sure if the fucking thing works, and eating up OSS maintainers' time.

[+] rapidaneurism|5 months ago|reply
I wonder if there was a human in the loop to begin with. I hope the future of CVEs is not agents opening accounts and posting 'bugs'.
[+] l5870uoo9y|5 months ago|reply
This reads as an AI-generated response as well, with the "thanks", "you're right", flawless grammar, and plenty of technical references.
[+] ToucanLoucan|5 months ago|reply
Is it that crazy? He's doing exactly what the AI boosters have told him to do.

Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.

In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.

[+] pizlonator|5 months ago|reply
Wait so are we now saying that these AIs are failing the Turing test?

(I mean I guess it has to mean that if we are able to spot them so easily)

[+] Havoc|5 months ago|reply
Makes me wonder whether the submitter even speaks English.
[+] shadowgovt|5 months ago|reply
Quite a few people using AI are using it not only to do analysis, but to do translation for them as well; many people leaping onto this technology don't have English as a fluent language, so they can't evaluate the output of the AI for sensibility or "not sounding like AI."

(It's a noise issue, but I find it hard to blame them; not their fault they got born in a part of the world where you don't get autoconfig'd with English and as a result they're on the back-foot for interacting with most of the open source world).

[+] dansmith1919|5 months ago|reply
At some point they told ChatGPT to put emojis everywhere, which is also a dead giveaway on the original report that it's AI. They're the new em dash.
[+] rpigab|5 months ago|reply
"I heard you were extremely quick at math"

Me: "yes, as a matter of fact I am"

Interviewer: "Whats 14x27"

Me: "49"

Interviewer: "that's not even close"

me: "yeah, but it was fast"

[+] jtwaleson|5 months ago|reply
There should be a language that uses "Almost-In-Time" compilation. If it runs out of time, it just gives a random answer.
[+] donohoe|5 months ago|reply

  function getRandomNumber() {
    return 4
  }
[+] nenenejej|5 months ago|reply
The lowest-latency responses in my load tests are the ones where something went wrong!
[+] misnome|5 months ago|reply
I wonder where the balance of “Actual time saved for me” vs “Everyone else's time wasted” lies in this technological “revolution”.
[+] simsla|5 months ago|reply
Agreed.

I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research) but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this pops up.

If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.

I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.

[+] stahorn|5 months ago|reply
You're doing it wrong: You should just feed other peoples AI-generated responses into your own AI tools and let the tool answer for you! The loop is then closed, no human time wasted, and the only effect is wasted energy to run the AI tools. It's the perfect business model to turn energy into money.
[+] miroljub|5 months ago|reply
Wasting time for others is a net positive, meaning jobs won't be lost, since some human individual still needs to make sense out of AI generated rubbish.
[+] sanex|5 months ago|reply
This is not unique to AI tools. I've seen it with new expense tools that are great for accounting but terrible to use, with contract review processes that make things easier for legal, or with infosec reviews of a SaaS tool that everyone and their uncle already uses. It's always natural to push all the work off to someone else, because it feels like you saved time.
[+] iLoveOncall|5 months ago|reply
Yeah when reviewing code nowadays once I'm 5-10 comments in and it becomes obvious it was AI generated, I say to go fix it and that I'll review it after. The time waste is insane.
[+] zaik|5 months ago|reply
How much time did they save if they didn't find any vulnerability? They just wasted someone's time and nothing else.
[+] duxup|5 months ago|reply
Arguably that's been a part of coding for a long time ...

I spend a lot of time doing cleanup for a predecessor who took shortcuts.

Granted I'm agreeing, just saying the methods / volume maybe changed.

[+] ttyyzz|5 months ago|reply
Over time, I've gotten a feel for what kind of content is AI-generated (e.g., images, text, and especially code...), and this text screams "AI" from top to bottom. I think badger responded very professionally; I'd be interested to see Linus Torvalds' reaction in such a situation :D
[+] joz1-k|5 months ago|reply
We will see more problems related to the attitude: "I know AI, and therefore I'm smarter than trilobites who coded this before the AI boom."

I suppose there's a reason why kids are usually banned from using calculators during their first years of school when they're learning basic math.

[+] hermannj314|5 months ago|reply
Start charging users to submit a vulnerability report.

It doesn't matter if it was made by AI or a human: spammers operate by cheaply overproducing and externalizing their work onto you to validate their shit. And it works because sometimes they do deliver value by virtue of large numbers. But they are a net negative for society. Their model stops working if they have to pay for the time they wasted.

[+] sealeck|5 months ago|reply
Even a deposit works well (and doesn't have to be large). Someone who has actually found a serious bug in cURL will probably pay a $2-5 deposit to report it (especially given the high probability of a payout).
[+] GalaxyNova|5 months ago|reply
This is a horrible idea. If you want to discourage people from submitting reports, then this is how you do it.
[+] scosman|5 months ago|reply
Spent 15 minutes the other day testing a patch I received that claimed to fix a bug (Linux UI bug, not my forte).

The “fix” was setting completely fictitious properties. Someone had plugged the GitHub issue into ChatGPT and spat out an untested answer.

What’s even the point…

[+] thenickdude|5 months ago|reply
It's all in aid of some streetsweeper being able to add "contributor to X, Y, Z projects!" to their GitHub résumé. Before LLMs were a thing I also received worthless spelling-incorrection pull requests with the same aim.
[+] vultour|5 months ago|reply
This is why I refuse to interact with people who use AI. You have to invest orders of magnitude more time to review their hallucinated garbage than they used to generate it. I’m not going to waste my time talking to a computer.
[+] dboreham|5 months ago|reply
Ultimately it's always about someone somewhere getting a bigger boat.
[+] alexisread|5 months ago|reply
> The reporter was banned and now it looks like he has removed his account.

I'm wondering (sadly) if this is a kind of defense-prodding phishing similar to the XZ Utils hack; curl is a pretty fundamental utility.

Similar to 419 scams, it tests the gullibility, response time/workload of the team, etc.

We have an AI DDoS problem here, which may need a completely new pathway for PRs or something. Maybe Nostr based so PRs can be validated in a WOT?

[+] jmuguy|5 months ago|reply
This is essentially what teachers are dealing with every day, across the majority of their students, for every subject where it's even remotely possible to use AI.
[+] mock-possum|5 months ago|reply
Why not deal with it the same way teachers have always dealt with students breaking the rules?
[+] jiggawatts|5 months ago|reply
Education as a profession will have to change. Homework is pointless. Verbal presentations will have to become the new norm, or all written answers must be in the confines of the classroom... with pen and paper. Etc...
[+] rsynnott|5 months ago|reply
This must be _absolutely exhausting_.
[+] rikschennink|5 months ago|reply
Recently a customer pasted a complete ChatGPT chat in the support system and then wrote “it doesn’t work” as subject. I kindly declined.

I’ve also received tickets where the code snippets contained API calls that I never added to the API. A real “am I crazy” situation where I started to doubt I added it and had to double check.

On top of that you get “may I get a refund” emails but expanded to four paragraphs by our friend Chat. It’s getting kinda ridiculous.

Overall it’s been a huge additional time drain.

I think it may be time to update the “what's included in support” section of my software's license agreement.

[+] keyle|5 months ago|reply
Resume hit piece, <failed/>.

What an absolute shambles of an industry we have ended up with.

[+] spacecow|5 months ago|reply
Lord, did anyone else click through and read the actual attached "POC"? It's (for now) hilariously obviously doing nothing interesting at all, but my blood runs cold at AI potentially being able to generate more plausible-looking POC code in the future to waste even more dev time...
[+] rdtsc|5 months ago|reply
I wonder what's going on in the minds of these people.

I would just be terribly embarrassed and not be able to look at myself in the mirror if I did shit like this.

> batuhanilgarr posted a comment (6 days ago) Thanks for the quick review. You’re right ...

On one hand, it's sort of surprising that they double down, copy and paste the response to the llm prompt, paste back that response and hope for the best. But, of course it shouldn't be surprising. This is not just a mistake, it's deliberate lying and manipulating.

[+] panstromek|5 months ago|reply
> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug.

I don't even... You just have to laugh at this I guess.

[+] raffraffraff|5 months ago|reply
> Verification Status: CONFIRMED

Pity HN doesn't support all of those green checkboxes and bold bullet points. Every time I see these in supposedly human-generated documents and pull requests, I laugh.

[+] dimaor|5 months ago|reply
Maybe submitters should pay a dollar to submit bugs, which they'd get refunded when the bug is confirmed?

Even if it's not AI, there are probably many unskilled developers who submit bogus bug reports, even unknowingly.

[+] spicyusername|5 months ago|reply
The amount of text alone in the original post was a giveaway.

LLMs produce so much text, including code, and most of it is not needed.

[+] redbell|5 months ago|reply
[+] mrsvanwinkle|5 months ago|reply
this LLM-emboldened, mass Dunning-Kruger schizophrenia has gone from hilarious to sad to simply invoking disgust. It isn't even an earnest, altruistic effort, but some insecure fever dream of finally being acknowledged as a "genius" of some sort. The worst I've seen of this is a random redditor claiming to have _the_ authoritative theory of everything and spamming it in every theoretical-physics-adjacent subreddit; he claims to have a PhD, but is anonymous, doesn't represent any research group/institution, and the spam has no citations.