
angst | 5 months ago

> There is an increasing crowd of people who ask a large language model to "find a problem in curl, make it sound terrible", then send the result, which is never correct, to the project, thinking that they are somehow helping.

Our worst nightmares are coming true indeed...


bgwalter|5 months ago

The problem is that open source maintainers rarely react, because most projects are captured by some big tech employees. Independent authors like Stenberg are the exception.

If the rebellious spirit of the 1990s and early 2000s still existed, open source could sink "AI" code laundromats within a month. But since 2010 everyone is falling over themselves to please big tech. Big Tech now rewards the cowards with layoffs and intimidation.

Most developers do not understand that power balances in corporations work on a primal level. If you show fear/submission, managers will treat you like a beta dog. That is all they understand.

timeon|5 months ago

This is getting more common. I've seen CVEs posted to several open source projects that included made-up APIs.

blahgeek|5 months ago

The worst nightmare would be the maintainers, in turn, using a large language model to review or apply these patches.

szszrk|5 months ago

I already have some processes at work that are reviewed only by AI, which means we are advised to use another AI to fill out the data more quickly.

It's nothing critical, but still both scary and hilarious at the same time. Shit on the input, shit on the output - nothing new, just fancier tools.

Asimov's vision of history so tangled and noisy that no one really knows what is truth and what is legend is happening in front of our own eyes. It didn't need millennia, just a few years of AI companies abusing our knowledge that was available to anyone for free.

the_biot|5 months ago

Not to one-up you, but my worst nightmare is an open source project where all the maintainers are LLM copy-pasters, with little clue to be had otherwise.

And it's already happened, of course. A project I saw mentioned here on HN a while back seemed interesting, and it was exactly that kind of disaster. They started off as a fork of another project, so had a working codebase. But the project lead is a grade-A asshole who gets off on being grumpy to people, and considers any ideas not his to be ridiculous. Their kernel guy is an actual moron; his output is either (clearly) LLM output or just idiocies. Even the side contributors are 100% chatbot pasters.

signa11|5 months ago

and then have another one duke it out with the first one to reject the patch. that would be a nice llm-vs-llm, prompt-fight-prompt :o)