
Generative AI "Slop Generators" are unsuitable for use [ ]

58 points | aleksjess | 7 months ago | asahilinux.org

89 comments


pjc50|7 months ago

The IP point: this hasn't yet been fully resolved, and I expect companies to continue to fudge it by sticking "if disney return false" in their image generators. My own employer has a stance of "no AI generated code in production use" (but allowed for testing and infrastructure which will not be distributed).
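The "if disney return false" quip describes a real pattern: bolting a naive blocklist onto a generator rather than resolving the underlying IP question. A minimal sketch of what such a filter amounts to (the term list and function name are hypothetical, purely illustrative, and show why this approach is trivially evaded):

```python
# Hypothetical, naive prompt blocklist of the "if disney return false"
# variety -- an illustration of the fudge, not a real guardrail.
BLOCKED_TERMS = {"disney", "mickey mouse", "pixar"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing any blocked term (case-insensitive)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(is_prompt_allowed("a castle at sunset"))         # True
print(is_prompt_allowed("Mickey Mouse eating pizza"))  # False
print(is_prompt_allowed("m1ckey m0use"))               # True -- trivially evaded
```

Substring matching catches the obvious cases and nothing else, which is why this kind of patch fudges the IP question rather than answering it.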

I'm surprised they didn't make a point which is especially painful for open source projects: AI might reduce coding effort, but it increases review effort, and maintainers generally spend the majority of their time on review anyway. Making it easier to generate bad pull requests is seen as well-poisoning.

am17an|7 months ago

In the future there will be safe havens where LLM-generated code has not been merged. It will be marketed as “hand-crafted” by Romanian programmers or something like that, akin to Swiss watches. It will be extremely high quality, but too expensive to mass-produce.

captn3m0|7 months ago

The submission title should be “Asahi Linux Generative AI Policy”

aleksjess|7 months ago

I mean, I used the page title and an excerpt from the text which mostly summarizes it. If you're right, I'll change the title.

mpalmer|7 months ago

No it shouldn't.

TrackerFF|7 months ago

I am by no means an AI/ML evangelist, but these types of posts just come off as something written by modern day luddites.

terminalbraid|7 months ago

Luddites in what sense? There's the lazy, gross stereotype that's been perpetuated with negative connotation, and then there's the actual movement, which was not anti-technology on its face but was against how that technology was used to suppress labor.

https://en.wikipedia.org/wiki/Luddite

tim333|7 months ago

The issue

>FOSS projects like Asahi Linux cannot afford costly intellectual property lawsuits in US courts

seems quite practical and non-Luddite-ish.

simianwords|7 months ago

Environmental aspect is listed as one of the reasons which makes me think this is more ideological than practical.

mschuster91|7 months ago

First of all, it isn't wrong. The power consumption of AI training and inference is massive.

Second, that page is obviously meant to shed light on AI issues from a lot of different viewpoints; it would have been a serious omission not to mention environmental concerns.

ezst|7 months ago

How are environmental aspects of LLMs "non-practical"?

mathiaspoint|7 months ago

I'm generally against the environmental movements and I think part of the problem is that the utility markets aren't elastic enough (or at all) to appropriately charge these companies for what they're doing.

But the huge amount of fresh water going to waste for cooling makes me very uncomfortable. In an ideal world it really should go the other way where the heat from the DC is used to desalinate salt water.

insane_dreamer|7 months ago

> more ideological than practical

how so?

the impact of increased energy consumption, from non-zero-emission sources, on global warming is a highly practical matter based on established science

AI is causing, and will continue to cause, a significant increase in energy consumption (still largely powered by fossil fuels) at a time when it's well established that we're supposed to be reducing emissions (Paris Accord etc.)

meindnoch|7 months ago

So far I was averse to Asahi Linux, mostly due to the furry anime girl roleplay thing, but this is something I can stand by!

openmarkand|7 months ago

Am I missing something? I've been following the project since the beginning and just checked the wiki, the website, and all of the documentation, and haven't found anything related to anime.

Fade_Dance|7 months ago

>Slop Generator

Using that term throughout the article makes it hard to take seriously. I know nothing of this project, but right off the bat it seems like one with little credibility, just because of the tone used throughout.

There's no need to turn it into a full-on tirade against this set of technologies either. Is this an appropriate place to complain about Reddit comments?

Ironically, the author could well benefit from running this slop through an LLM to make it more professional.

doesnt_know|7 months ago

Interesting that you could have posted about any of the points being made in the link, but you chose this one.

Personally I think the term is well deserved and am glad it continues to be popularised.

tonypapousek|7 months ago

> I know nothing of this project

It's an impressive one, to say the least. It's worth taking a closer look and weighing the excellence created by the human mind before completely dismissing the article's arguments.

> Ironically, the author could well benefit from running this slop through an llm to make it more professional.

True, that would effectively strip out all the heart and soul from the prose.

RUnconcerned|7 months ago

> Ironically, the author could well benefit from running this slop through an llm to make it more professional.

Have you considered that it is not the intent of the author to appear professional? That running it through the Slop Generator would obfuscate their intent to be snarky towards those who outsource all their thinking to Slop Generators?

washadjeffmad|7 months ago

That mindset is itself part of the problem. A healthy sign of free speech is poor taste.

skerit|7 months ago

I agree. They have some good reasons why AI-generated code should not be used in this project, but the page just devolves into an "all AI is bad" rant, and the constant use of "slop generator" just makes me think of all the people who used to write "Micro$oft" (well, I did too, when I was twelve).

rimbo789|7 months ago

[deleted]

electricboots|7 months ago

I think for a project like Asahi, and frankly any project with a reasonable blast radius, software or otherwise, the article is on point.

The author's lack of professionalism is a reasonable counter to the completely unhinged mainstream takes on AI/LLMs that we hear daily.

I think the Reddit example provides useful, generally relatable context that would otherwise be missing for the average reader.

My opinions are not to detract from the use of the tech or engineers working in the space, but motivated by a disgust for the hype.

josefritzishere|7 months ago

I really have to agree with this take. Most of what AI spits out is hot trash. This is why one study found AI tools made developers code 19% slower. https://arstechnica.com/ai/2025/07/study-finds-ai-tools-made...

queenkjuul|7 months ago

I've been genuinely trying to incorporate Claude Code into my workflow, but it just ends up wasting my time and I write everything myself anyway. Its output is occasionally _useful_, but it's absolutely never _good_.

serf|7 months ago

Have whatever opinion you want; we're going to rapidly run into a situation where certain organizations are outrun by others that have more liberal policies about this kind of work.

Call it slop all you want, doesn't change the unreasonable effectiveness that some individuals seem to have with such systems.

satisfice|7 months ago

How do you know it is effective? Many of us think it isn’t. Prove us wrong, if you can.

A problem with the slop coding movement is that its proponents are happy living with wishful thinking.

cadamsdotcom|7 months ago

Agreed on the IP point. Strongly agreed on not wanting to go to court in the current US political climate.

On the rest, especially the "confidently incorrect" argument... Not. so. much.

Firstly, models are stochastic parrots, but that truth is irrelevant because they're useful stochastic parrots.

Second, hallucinations and confidently incorrect outputs may yet be a thing of the past, so we should keep an open mind. It's possible that mechanistic interpretability research (a fancy term for "understanding what the model is thinking as it produces your response") will allow the UI to warn the user that uncertainty was detected in the model's response.
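One simple proxy for "uncertainty was detected" (assuming access to the model's next-token probabilities; this is a toy illustration, not any specific interpretability method) is the Shannon entropy of the predicted distribution. A confident model concentrates probability mass on one token; an uncertain one spreads it out:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Confident prediction: nearly all mass on one token -> low entropy (~0.24 bits).
confident = [0.97, 0.01, 0.01, 0.01]

# Uncertain prediction: mass spread evenly across four tokens -> 2.0 bits.
uncertain = [0.25, 0.25, 0.25, 0.25]

print(token_entropy(confident))
print(token_entropy(uncertain))  # -> 2.0
```

A UI could flag spans of the response where this value stays high, though real uncertainty-detection research goes well beyond per-token entropy.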

Unfortunately none of that matters because the IP point is a blocker. Bummer.

rootnod3|7 months ago

It seems apparent in this thread that any remote criticism of AI results in downvotes.

Yes, the article might have some wording issues, but for an operating system project to choose to not allow AI written code for a product that is inherently in need of good security, and rather opt for “think before you write and fully understand what you are doing”, I don’t think that their choice is invalid.

I wouldn’t want to get into a plane where half the core systems were written by a hallucinating AI.

harringdev|7 months ago

This is very hard to take seriously, it feels ideologically driven.

The introduction where they claim LLMs are useless for software engineering is just incorrect. They are useful for many software engineering tasks. I do think that vibe coding is rubbish however, and more junior SWEs very regularly misuse LLMs to produce nonsense code.

The only substantive point is that the LLM may regurgitate pieces of proprietary training data, although it seems unlikely that it would be incorporated wholesale into the codebase in such a way that it matters or opens them up to liability.

I do question whether LLMs would even be useful for such a niche project, but I think this should be left up to developers to figure out how it complements their workflows rather than ruling out all uses of LLMs.

EDIT: I want to point out that I think the Asahi Linux project is a jewel of engineering and is extremely impressive.

roxolotl|7 months ago

Why is something ideologically driven inherently not worth taking seriously? I appreciate that their position may be more challenging to change, but people don’t necessarily arrive at ideologies for unserious reasons, nor do ideologies have unserious results.

WithinReason|7 months ago

> This is fundamentally the same mathematics that is used to predict the weather.

And yet, weather prediction works. Therefore, LLMs work?

pfoof|7 months ago

I can agree with all but the last point.

There’s no proof nor counter-proof that the human brain doesn’t work like that.

krige|7 months ago

You can't prove a negative. Prove that it does instead.

sandworm101|7 months ago

> "These resources are better used on quite literally anything else."

What shocks me most is that we have found something less useful than bitcoin mining. Remember all the articles about the environmental impact of bitcoin? That is peanuts compared to what the world's largest companies are building to power the next LLM.

RUnconcerned|7 months ago

While I agree that the value of LLMs is wildly overstated, I disagree that it is less useful than bitcoin mining, which is entirely useless. At least LLMs can produce usable output.