top | item 45331135

dansmith1919 | 5 months ago

Crazy how he doubled down by just pasting badger's answer into Chat and submitting the (hilariously obvious AI) reply:

> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.

Sharlin|5 months ago

Unfortunately that seems to be the norm now – people literally reduce themselves to a copy-paste mechanism.

f4stjack|5 months ago

To be honest, I do not understand this new norm. A few months ago I applied for an internal position. I was an NGO IT worker, deployed twice to emergency response operations, knew the policies & operations, and had good relations with users and coworkers.

The interview went well. I was honest. When asked what my weakness was regarding this position, I said that I am a good analyst, but that writing new exploits is beyond my expertise. The role doesn't list this as a requirement, so I thought it was a good answer.

I was not selected. Instead they selected another guy, then booted him off after 2 months due to his excessive (and incorrect, as in the linked report) use of LLMs, and did not open the position again.

So in addition to wasting the hirers' time, these nice people block other people's progress as well. But as long as hirers expect wunderkinds crawling out of the woods, applicants will try to fake it and win in the short term.

This needs to end, but I don't see any progress towards it. It is especially painful as I am seeking a job at the moment, and I keep thinking these fakers are muddying the waters. It feels like no one cares about your attitude, like how genuinely you want to work. I am an old techie, and the world I came from valued this over technical aptitude, for you can teach and learn technical information, but character is another thing. This gets lost in our brave-new-cyberpunk-without-the-cool-gadgets era, I believe.

jackdawed|5 months ago

I once had a conversation with a potential co-founder who literally told me he was pasting my responses into AI to try to catch up.

Then a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS into email where it was very clear he was using AI.

These are CEOs who have raised $1M+ pre-seed.

goalieca|5 months ago

Just try challenging and mentoring people on not using it, because it's incapable of the job and wastes all our time, when the mandate from on high is to use more of it.

pravj|5 months ago

This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.

Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.

This sounds similar to a few patterns I noted:

- The average length of documents and emails has increased.

- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs (and it's not just to fix the grammar).

- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]

account42|5 months ago

I like the term "echoborg" for those people: https://en.wikipedia.org/wiki/Echoborg

> An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).

I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.

lm28469|5 months ago

I've seen more than one post on Reddit answered with a screenshot of the ChatGPT mobile app, including the OP's question and the LLM's answer.

Imagine the amount of energy and compute power used...

BHSPitMonkey|5 months ago

For all we know, there's no human in the loop here. Could just be an agent configured with tools to spin up and operate Hacker One accounts in a continuous loop.

tptacek|5 months ago

This has been a norm on Hacker One for over a decade.

silverliver|5 months ago

Ha! We've become the robots!

balamatom|5 months ago

We're that for genes, if you trust positivist materialism. (Recently it's also been forced to permit the existence of memes.)

If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard? Because you're mad at them for being shit at the craft you've lovingly honed? They don't really know why they're there in the first place.

If one sets a different bar with one's expectations of people, one ought to at least clearly make the case for what exactly it is. And even then the bots have made it quite clear that such things are largely matters of personal conviction, and as such are not permitted much resonance.

dragontamer|5 months ago

This might be some kind of asshole tech guy trying to make the claim that "this AI creates pull requests that are accepted into well-regarded OSS projects".

I.e.: they're farming the work out to OSS volunteers, not even sure if the fucking thing works, and eating up OSS maintainers' time.

rapidaneurism|5 months ago

I wonder if there was a human in the loop to begin with. I hope the future of CVEs is not agents opening accounts and posting 'bugs'.

zaphodias|5 months ago

I don't think there are humans involved. I've now seen countless PRs to some repos I maintain that claim to fix non-existent bugs, or just fix typos. One that I got recently didn't even correctly balance the parentheses in the code, ugh.

I call this technique: "sprAI and prAI".

pjc50|5 months ago

The future of everything with a text entry box is AIs shoveling plausible looking nonsense into it. This will result in a rise of paranoia, pre-verification hoops, Cloudflare like agent-blocking, and communities "going dark" or closed to new entrants who have not been verified in person somewhere.

(The CVE system has been under strain for Linux: https://www.heise.de/en/news/Linux-Criticism-reasons-and-con... )

stronglikedan|5 months ago

Don't need a human until someone is ready to pay a bounty!

l5870uoo9y|5 months ago

This reads as an AI-generated response as well, with the "thanks", the "you're right", the flawless grammar, and the plenty of technical references.

gryfft|5 months ago

I think you might be onto something-- perhaps something from the first sentence of the post to which you are replying.

SoKamil|5 months ago

Faking grammar mistakes is the new meta of proving that you wrote something yourself.

Or passing generated content off as real.

ToucanLoucan|5 months ago

Is it that crazy? He's doing exactly what the AI boosters have told him to do.

Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.

In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.

elzbardico|5 months ago

I have a long-running interest in NLP; LLMs basically solved, or almost solved, a lot of NLP problems.

The usefulness of LLMs for me, in the end, is their ability to execute classic NLP tasks, so I can incorporate calls to them in programs to do useful things with natural language that would be hard to do otherwise.

But a lot of the time, people try to make LLMs do things that they can only simulate doing, or do by analogy. And this is where things start getting hairy: when people start believing LLMs can do things they really can't.

Ask an LLM to extract features from a bunch of natural language inputs, and it will probably do a pretty good job in most domains, as long as you're not doing anything exotic or novel enough to be insufficiently represented in the training data. It will output nice JSON with nice values for those features, and it will be mostly correct. That is great for aggregate use, but a bit riskier if you depend on the LLM's evaluation of individual instances.

But then people ignore this and start asking in their prompts for the LLM to add confidence scores to its output. Well, LLMs CAN'T TRULY EVALUATE the fitness of their output against any imaginable criteria, at least not with the kind of precision a numeric score implies. They absolutely can't do it by themselves, even if they sometimes seem able to. If you need to trust the output, you'd better have some external mechanism to validate it.
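A minimal sketch of such an external validation mechanism, in Python. The field names and allowed values here are hypothetical, and the LLM call itself is assumed to have happened upstream; the point is that the check lives outside the model, rather than trusting a self-reported confidence score:

```python
import json

# Hypothetical allow-list schema for the extracted features; in practice
# it would mirror whatever fields the prompt asks the model to emit.
ALLOWED = {
    "sentiment": {"positive", "negative", "neutral"},
    "urgency": {"low", "medium", "high"},
}

def validate_extraction(raw: str) -> dict:
    """External check on an LLM's JSON output: parse it, require every
    expected field, and reject values outside the allowed sets."""
    data = json.loads(raw)  # raises a ValueError subclass on non-JSON output
    for field, allowed in ALLOWED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if data[field] not in allowed:
            raise ValueError(f"unexpected value for {field}: {data[field]!r}")
    return data
```

Rejected outputs can then be retried or routed to a human, which is the kind of per-instance guarantee the model's own "confidence" field cannot give you.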

rpcope1|5 months ago

So basically a hundred billion dollar industry for just spam and fraud. Truly amazing technological progress.

pizlonator|5 months ago

Wait so are we now saying that these AIs are failing the Turing test?

(I mean I guess it has to mean that if we are able to spot them so easily)

blharr|5 months ago

You don't spot the ones you don't spot

Havoc|5 months ago

Makes me wonder whether the submitter even speaks English.

t0lo|5 months ago

AI's other acronym...

mda|5 months ago

Probably yes, but not as smooth and eloquent as the AI they use.

unmole|5 months ago

The username sounds Turkish. Make what you will of it.

shadowgovt|5 months ago

Quite a few people using AI are using it not only to do analysis, but to do translation for them as well; many people leaping onto this technology don't have English as a fluent language, so they can't evaluate the output of the AI for sensibility or "not sounding like AI."

(It's a noise issue, but I find it hard to blame them; it's not their fault they were born in a part of the world where you don't get autoconfigured with English, and as a result they're on the back foot when interacting with most of the open source world.)

dansmith1919|5 months ago

At some point they told ChatGPT to put emojis everywhere, which is also a dead giveaway on the original report that it's AI. They're the new em dash.

rasz|5 months ago

You don't even have to instruct it to use emojis; it does it on its own. A printf with an emoji in it is an instant red flag.

jcul|5 months ago

It loves to put emojis in print statements, it's usually a red flag for me that something is written by AI.

listic|5 months ago

What was it with em dash?

badgersnake|5 months ago

Some people actually do that on Github too. Absolute psychopaths.

lumost|5 months ago

Was this all actually an agent? I could see someone claiming that a security research LLM should always report issues immediately, from an ethics standpoint (and in turn acquire more human-generated accuracy labels).

To be clear, I personally disagree with AI experiments that leverage humans/businesses without their knowledge. Regardless of the research area.

BoredPositron|5 months ago

It's an n8n bot without user input. If you Google the username you'll find a GitHub full of agent stuff.

listic|5 months ago

Who was likely to start it and for what purpose?

belter|5 months ago

Crazy how the current 400-billion-dollar AI bubble is based on this being feasible...

koolba|5 months ago

The rationale is that the AI companies are selling the shovels to both generate this pile as well as the ones we'll need to clean it up.

pjc50|5 months ago

And on externalizing costs - the actual humans who have to respond to bad vulnerability report spam.

Lerc|5 months ago

I felt like it was more likely to be a complete absence of a human in the loop.

jonplackett|5 months ago

Do you think it's a person doing it? When I saw that reply I thought maybe it's a bot doing the whole thing!

dolmen|5 months ago

I think we are now beyond just copy-pasting. I guess we are in the era where this shit is fully automated.

ainiriand|5 months ago

Is this for internet points?

filcuk|5 months ago

If it's an individual, it could be as simple as portfolio cred ('look, I found and helped fix a security flaw in this program that's on millions of devices ')

zzzeek|5 months ago

why assume someone is copy-pasting and didn't just build a bot to "report bugs everywhere" ?

chinathrow|5 months ago

The '—' gave it away. No one types this character on purpose.

jaymzcampbell|5 months ago

I really loved how easy macOS made these (option+hyphen for en, with shift for em), so I used to use them all the time. I'm a bit miffed that good typography is now an AI smell.

sevg|5 months ago

Just because you don’t, doesn’t mean other people don’t. Plenty of real humans use emdash. You probably don’t realise that on some platforms it’s easy to type an emdash.

kstrauser|5 months ago

And where did you suppose AIs learned this, if not from us?

Turns out lots of us use dashes — and semicolons! And the word “the”! — and we’re not going to stop just because others don’t like punctuation.

ceejayoz|5 months ago

The AI is trained on human input. It uses the dash because humans did.

ulimn|5 months ago

Or at least not anymore, since this became the number one sign of whether a text was written with AI. Which is a bit sad, imo.

yreg|5 months ago

I do all the time, but might have to stop. Same with `…`.

vagrantJin|5 months ago

That got a giggle out of me. Not entirely relevant, but AI tends to be overzealous in its use of emojis and punctuation, in a way people almost never are (too cumbersome on desktop, where the majority of typing work is done).

_fizz_buzz_|5 months ago

I started using hyphens a few years ago. But now I had to stop, because AI ruined it :(

viridian|5 months ago

Academia certainly does, although, humorously, we also have professors making the same proclamation you do while using en or em dashes in their syllabi.

johnisgood|5 months ago

Keep in mind that now that people know what to pay attention to (em dashes, emojis, etc.), they will instruct the LLM not to use them, so yeah.

easton|5 months ago

Two dashes on the Mac or iOS do it unless you explicitly disable it, I think.

Balinares|5 months ago

I absolutely bloody do -- though more commonly as a double dash when not at the keyboard -- and I'm so mad it was cargo-culted into the slop machines as a superficial signifier of literacy.