The IP point: this hasn't yet been fully resolved, and I expect companies to continue to fudge it by sticking "if disney return false" in their image generators. My own employer has a stance of "no AI generated code in production use" (but allowed for testing and infrastructure which will not be distributed).
I'm surprised they didn't make a point which is especially painful for Open Source projects: AI might reduce coding effort, but it increases review effort, and maintainers already spend the majority of their time on review anyway. Making it easier to generate bad pull requests is seen as well-poisoning.
In the future there will be safe havens where LLM-generated code has not been merged. It will be marketed as “hand-crafted” by Romanian programmers or something like that, akin to Swiss watches. It will be extremely high quality, but too expensive to mass produce.
Luddites in what sense? Because there's the lazy, gross stereotype that's been perpetuated with negative connotation, and then there's the actual movement, which was not anti-technology at face value but opposed to how that technology was used to suppress labor.
First of all, it isn't wrong. The power consumption of AI training and inference is massive.
Second, that page is obviously meant to shed light on AI issues from a lot of different viewpoints; it would have been a serious omission not to mention environmental concerns.
I'm generally against the environmental movements and I think part of the problem is that the utility markets aren't elastic enough (or at all) to appropriately charge these companies for what they're doing.
But the huge amount of fresh water going to waste for cooling makes me very uncomfortable. In an ideal world it really should go the other way where the heat from the DC is used to desalinate salt water.
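A back-of-the-envelope sketch of that "ideal world" idea, assuming multi-effect distillation (MED) driven by data-center waste heat. Every figure here is an illustrative assumption, not measured data:

```python
# Rough estimate: how much seawater a data center's waste heat could
# desalinate via multi-effect distillation (MED).
# All numbers below are illustrative assumptions.

LATENT_HEAT_KJ_PER_KG = 2260   # latent heat of vaporization of water, kJ/kg
GOR = 10                       # gained output ratio: kg of distillate per
                               # kg-equivalent of driving steam (typical MED: 8-12)
DC_WASTE_HEAT_MW = 100         # assumed waste heat output of a large DC

heat_kj_per_s = DC_WASTE_HEAT_MW * 1000            # MW -> kJ/s
water_kg_per_s = heat_kj_per_s / LATENT_HEAT_KJ_PER_KG * GOR
water_m3_per_day = water_kg_per_s * 86400 / 1000   # 1 m^3 of water ~ 1000 kg

print(f"~{water_m3_per_day:,.0f} m^3/day of fresh water")
```

The big caveat: MED wants heat at roughly 70 °C or more, while typical air-cooled DC exhaust sits around 30–40 °C, so this only pencils out with hot liquid cooling or heat pumps in between.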
the impact of increased energy consumption, from non-zero-emission sources, on global warming is a highly practical matter based on established science
AI is/will cause a significant increase in energy consumption (still largely powered by fossil fuels) at a time when it's well established that we're supposed to be reducing emissions (Paris Accord etc.)
Am I missing something? I've been following the project since the beginning and just checked the wiki, the website, and all over the documentation, and I haven't found anything relevant to anime.
Using that term throughout the article makes it hard to take seriously. I know nothing of this project, but right off the bat it comes across as having little credibility just because of the tone.
There's no need to turn it into a full-on tirade against this set of technologies either. Is this an appropriate place to complain about Reddit comments?
Ironically, the author could well benefit from running this slop through an llm to make it more professional.
It's an impressive one, to say the least. It's worth taking a closer look and weighing the excellence created by the human mind before completely dismissing the article's arguments.
> Ironically, the author could well benefit from running this slop through an llm to make it more professional.
True, that would effectively strip out all the heart and soul from the prose.
> Ironically, the author could well benefit from running this slop through an llm to make it more professional.
Have you considered that it is not the intent of the author to appear professional? That running it through the Slop Generator would obfuscate their intent to be snarky towards those who outsource all their thinking to Slop Generators?
I agree. They have some good reasons why AI-generated code should not be used in this project, but the page just devolves into "all AI is bad," and the constant use of "slop generator" makes me think of all the people who used to write "Micro$oft". (Well, I did too, when I was twelve.)
I've been genuinely trying to incorporate Claude Code into my workflow but it just ends up wasting my time and i write everything myself anyway. Its output is occasionally _useful_ but it's absolutely never _good_
Have whatever opinion you want; we're going to rapidly run into issues where certain organizations are outrun by others that have more liberal policies about this kind of work.
Call it slop all you want, doesn't change the unreasonable effectiveness that some individuals seem to have with such systems.
Agreed on the IP point. Strongly agreed on not wanting to go to court in the current US political climate.
On the rest, especially the confidently incorrect argument.. Not. so. much.
Firstly, models are stochastic parrots, but that truth is irrelevant because they're useful stochastic parrots.
Second, hallucinations and confidently incorrect outputs may yet be a thing of the past, so we should keep an open mind. It's possible that mechanistic interpretability research (a fancy term for "understanding what the model is thinking as it produces your response") will allow the UI to warn the user that uncertainty was detected in the model's response.
Unfortunately none of that matters because the IP point is a blocker. Bummer.
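The "warn the user when uncertainty is detected" idea can be sketched as a toy heuristic, assuming an API that returns per-token log-probabilities. The threshold, the example values, and the premise that low token probability tracks being confidently incorrect are all assumptions here:

```python
import math

def flag_uncertain_tokens(tokens, logprobs, threshold=0.5):
    """Return (token, probability) pairs whose probability falls below threshold.

    A toy heuristic: low per-token probability stands in for 'the model
    was unsure here'. Real uncertainty estimation (e.g. interpretability
    probes) is far more involved than this.
    """
    flagged = []
    for tok, lp in zip(tokens, logprobs):
        prob = math.exp(lp)          # log-probability -> probability
        if prob < threshold:
            flagged.append((tok, round(prob, 3)))
    return flagged

# Hypothetical response with made-up per-token logprobs:
tokens = ["The", "capital", "of", "Australia", "is", "Sydney"]
logprobs = [-0.01, -0.05, -0.02, -0.10, -0.03, -1.60]

print(flag_uncertain_tokens(tokens, logprobs))  # → [('Sydney', 0.202)]
```

A UI could then highlight the flagged span and warn the user before they trust it.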
It seems apparent in this thread that any remote criticism of AI results in downvotes.
Yes, the article might have some wording issues, but for an operating system project to choose to not allow AI written code for a product that is inherently in need of good security, and rather opt for “think before you write and fully understand what you are doing”, I don’t think that their choice is invalid.
I wouldn’t wanna get into a plane where half the core systems are written by hallucinating AI.
This is very hard to take seriously, it feels ideologically driven.
The introduction, where they claim LLMs are useless for software engineering, is just incorrect. They are useful for many software engineering tasks. I do think vibe coding is rubbish, however, and more junior SWEs very regularly misuse LLMs to produce nonsense code.
The only substantive point is that the LLM may regurgitate pieces of proprietary training data, although it seems unlikely that it would be incorporated wholesale into the codebase in such a way that it matters or opens them up to liability.
I do question whether LLMs would even be useful for such a niche project -- but I think this should be left up to developers to figure out how it complements their workflows rather than ruling out all uses of LLMs.
EDIT: I want to point out that I think the Asahi Linux project is a jewel of engineering and is extremely impressive.
Why is something ideologically driven inherently not worth taking seriously? I appreciate maybe their understanding will be more challenging to change but people don’t necessarily arrive at ideologies for unserious reasons nor do ideologies have unserious results.
> "These resources are better used on quite literally anything else."
What shocks me most is that we have found something less useful than bitcoin mining. Remember all the articles about the environmental impact of bitcoin? That is peanuts compared to what the world's largest companies are building to power the next LLM.
While I agree that the value of LLMs is wildly overstated, I disagree that it is less useful than bitcoin mining, which is entirely useless. At least LLMs can produce usable output.
pjc50|7 months ago
am17an|7 months ago
captn3m0|7 months ago
aleksjess|7 months ago
mpalmer|7 months ago
TrackerFF|7 months ago
terminalbraid|7 months ago
https://en.wikipedia.org/wiki/Luddite
tim333|7 months ago
>FOSS projects like Asahi Linux cannot afford costly intellectual property lawsuits in US courts
seems quite practical and non-Luddite-ish.
satisfice|7 months ago
simianwords|7 months ago
mschuster91|7 months ago
ezst|7 months ago
mathiaspoint|7 months ago
insane_dreamer|7 months ago
how so?
meindnoch|7 months ago
openmarkand|7 months ago
Fade_Dance|7 months ago
doesnt_know|7 months ago
Personally I think the term is well deserved and am glad it continues to be popularised.
tonypapousek|7 months ago
RUnconcerned|7 months ago
washadjeffmad|7 months ago
skerit|7 months ago
nurettin|7 months ago
https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
Reading the first two vulnerability reports makes it very clear.
rimbo789|7 months ago
[deleted]
electricboots|7 months ago
The author's lack of professionalism is a reasonable counter to the completely unhinged mainstream takes on AI/LLMs that we hear daily.
I think the Reddit example provides useful, generally relatable context, otherwise missing to the average reader.
My opinions are not to detract from the use of the tech or engineers working in the space, but motivated by a disgust for the hype.
josefritzishere|7 months ago
queenkjuul|7 months ago
serf|7 months ago
satisfice|7 months ago
A problem with the slop coding movement is that they are happy living with wishful thinking.
cadamsdotcom|7 months ago
rootnod3|7 months ago
harringdev|7 months ago
roxolotl|7 months ago
WithinReason|7 months ago
And yet, weather prediction works. Therefore, LLMs work?
pfoof|7 months ago
There’s no proof nor counter-proof that the human brain doesn’t work like that.
krige|7 months ago
sandworm101|7 months ago
RUnconcerned|7 months ago