item 47320661

Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

409 points | pjmlp | 19 days ago | gitlab.redox-os.org

465 comments

[+] ptnpzwqd|19 days ago|reply
I think this is a reasonable decision (although maybe increasingly insufficient).

It doesn't really matter what your stance on AI is, the problem is the increased review burden on OSS maintainers.

In the past, the code itself was a sort of proof of effort - you would need to invest some time and effort in your PRs, otherwise they would be easily dismissed at a glance. That is no longer the case, as LLMs can quickly generate PRs that might look superficially correct. Effort can still have been put into those PRs, but there is no way to tell without spending time reviewing in more detail.

Policies like this help decrease that review burden by outright rejecting what can be identified as LLM-generated code at a glance. That probably covers a fair bit today, but it might get harder over time, so I suspect eventually we will see a shift towards more trust-based models, where you cannot submit PRs unless you have been approved in advance somehow.

Even if we assume LLMs would consistently generate good enough quality code, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.

[+] stabbles|19 days ago|reply
For well-intended open source contributions using GenAI, my current rules of thumb are:

* Prefer an issue over a PR (after iterating on the issue, either you or the maintainer can use it as a prompt)

* Only open a PR if the review effort is less than the implementation effort.

Whether the latter is feasible depends on the project, but in one of the projects I'm involved in it's fairly obvious: it's a package manager where the work is typically verifying dependencies and constraints; links to upstream commits etc are a great shortcut for reviewers.

[+] darkwater|19 days ago|reply
The problem was already there with lazy bug reports and inflammatory feature requests; now there is lazy (or inflammatory) code to accompany them. But there were also well-written bug reports with no code attached, due to lack of time or skills, that can now potentially become useful PRs if handled with diligence, engineering knowledge, and good faith.
[+] adjfasn47573|19 days ago|reply
> Even if we assume LLMs would consistently generate good enough quality code, code submitted by someone untrusted would still need detailed review for many reasons

Wait but under that assumption - LLMs being good enough - wouldn't the maintainer also be able to leverage LLMs to speed up the review?

Often feels to me like the current state of the argument is missing something.

[+] andrewchambers|19 days ago|reply
Isn't the obvious solution to not accept drive by changes?
[+] ketzu|19 days ago|reply
> Even if we assume LLMs would consistently generate good enough quality code, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.

Wouldn't an agent run by a maintainer require the same scrutiny? An agent is imo "someone else" and not a trusted maintainer.

[+] NitpickLawyer|19 days ago|reply
Project maintainers will always have the right to decide how to maintain their projects, and "owe" nothing to anyone.

That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd say is reasonable. Others have already commented that it's likely unenforceable, but I'd also say it's unreasonable for the sake of utility. It leaves things on the table at a time when they really shouldn't. Things like documentation tracking, regression tracking, security, and feature parity can all be enhanced with carefully orchestrated assistance. To simply ban this is ... a choice, I guess. But it's not reasonable, in my book. It's like saying we won't use CI/CD because it's automated; we're purely manual here.

I think a lot of projects will find ways to adapt. Create good guidelines, help the community to use the best tools for the best tasks, and use automation wherever it makes sense.

At the end of the day, slop is slop. You can always refuse to even look at something if you don't like the presentation. Or if the code is a mess. Or if it doesn't follow conventions. Or if a PR is +203323 lines, and so on. But attaching "LLMs aka AI" to the reasoning only invites drama; if anything, it makes the effort of distinguishing good content from good-looking content even harder. In the long run it won't be viable. If there's a good way to optimise a piece of code, it won't matter where that optimisation came from, as long as it can be proven it's good.

tl;dr: focus on better verification instead of better identification; prove that a change is good instead of focusing on where it came from; test, learn, and adapt. Dogma was never good.

[+] amelius|19 days ago|reply
> It doesn't really matter what your stance on AI is, the problem is the increased review burden on OSS maintainers.

But the maintainers can use AI too, for their reviewing.

[+] eyk19|19 days ago|reply
I feel like the pattern here is donate compute, not code. If agents are writing most of the software anyway, why deal with the overhead of reviewing other people's PRs? You're basically reviewing someone else's agent output when you could just run your own.

Maintainers could just accept feature requests, point their own agents at them using donated compute, and skip the whole review dance. You get code that actually matches the project's style and conventions, and nobody has to spend time cleaning up after a stranger's slightly-off take on how things should work.

[+] eatonphil|19 days ago|reply
If you're curious to see what everyone else is doing, I did a survey of over 100 major source-available projects, and four of them banned AI-assisted commits (NetBSD, GIMP, Zig, and qemu).

On the other hand, projects where you can easily find AI-assisted commits include Linux, curl, io_uring, MariaDB, DuckDB, Elasticsearch, and so on. Of the 112 projects surveyed, 70 already had AI-assisted commits.

https://theconsensus.dev/p/2026/03/02/source-available-proje...

[+] lukaslalinsky|19 days ago|reply
I think we will soon be in an interesting situation where project maintainers use LLMs because they truly are useful in many cases, but ban contributors for doing so, because they can't review how well the user guided the LLM.
[+] konschubert|19 days ago|reply
The bottlenecks today are:

* understanding the problem

* modelling a solution that is consistent with the existing modelling/architecture of the software and moves modelling and architecture in the right direction

* verifying that the implementation of the solution is not introducing accidental complexity

These are the things LLMs can't do well yet. That's where contributions will be most appreciated. Producing code won't be it, maintainers have their own LLM subscriptions.

[+] mixedbit|19 days ago|reply
If the author of a PR just generated code with an LLM, the GitHub PR becomes an incredibly inefficient interface between a repository owner and the LLM. A much better use of the owner's time would be to interact with the LLM directly, instead of responding to an LLM-generated PR, waiting for updates, responding again, etc.
[+] bandrami|19 days ago|reply
And in general a lot more people want to use LLMs to generate things than want to consume the things LLMs generate. Some of the more bullish people should think harder about this pretty clear trend.
[+] throwaway2037|19 days ago|reply

    > any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed
Note the word "clearly". Weirdly, as a native English speaker, I find this term makes the policy less strict. What about submarine LLM submissions?

I have no beef with Redox OS. I wish them well. This feels like the newest form of OSS virtue signaling.

[+] layer8|19 days ago|reply
> What about submarine LLM submissions?

That would constitute an attempt to circumvent their policy, with the consequence of being banned from the project. In other words, it makes not clearly labeling any LLM use a bannable offense.

[+] eesmith|19 days ago|reply
As a native English speaker I read this as two parts. If it's obvious, the response is immediate and not up for debate. If it's not obvious then it falls in the second part - "any attempt to bypass this policy will result in a ban from the project".

A submarine submission, if discovered, will result in a ban.

Using the phrase "virtue signalling" long ago became meaningless other than to indicate one's views in a culture war. 10 years ago David Shariatmadari wrote "The very act of accusing someone of virtue signalling is an act of virtue signalling in itself", https://www.theguardian.com/commentisfree/2016/jan/20/virtue... .

[+] oytis|19 days ago|reply
Don't ask don't tell looks like a reasonable policy. If no one can tell that your code was written by an LLM and you claim authorship, then whether you have actually written it is a matter of your conscience.
[+] BlackLotus89|19 days ago|reply
I read that as benefit of the doubt, which is a reasonable stance.
[+] khalic|19 days ago|reply
The LLM ban is unenforceable, they must know this. Is it to scare off the most obvious stuff and have a way to kick people off easily in case of incomplete evidence?
[+] BlackFly|19 days ago|reply
It is enforceable; I think you mean to say that it cannot be prevented, since people can attempt to hide their usage. Most rules and laws are like that: you proscribe some behavior, but that doesn't prevent people from doing it. Therefore you typically also need to define punishments:

> This policy is not open to discussion, any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed, and any attempt to bypass this policy will result in a ban from the project.

[+] ptnpzwqd|19 days ago|reply
I suspect this is for now just a rough filter to remove the lowest effort PRs. It likely will not be enough for long, though, so I suspect we will see default deny policies soon enough, and various different approaches to screening potential contributors.
[+] bonesss|19 days ago|reply
Any sufficiently advanced LLM-slop will be indistinguishable from regular human-slop. But that’s what they are after.

This heuristic lets the project flag problematic slop with minimal investment, avoiding the cost of reviewing low-quality, low-effort, high-volume contributions, which should be near ideal.

Much like banning pornography on an artistic photo site, the perfect application of the rule at the borderline is far less important than the filtering power "I know it when I see it" provides in the standard case. Plus, smut peddlers aren't likely to set an OpenClaw bot-agent swarm loose arguing the point with you for days, then post blogs and Medium articles attacking you personally for "discrimination".

[+] buzzardbait|19 days ago|reply
Probably just an attempt to stop low effort LLM copy pasta.
[+] scuff3d|18 days ago|reply
Speed limits are unenforceable. You'll never catch everyone speeding so why even bother trying.
[+] _zagj|19 days ago|reply
> The LLM ban is unenforceable

Just require that the CLA/Certificate of Origin statement be printed out, signed, and mailed with an envelope and stamp, where besides attesting that they appropriately license their contributions ((A)GPL, BSD, MIT, or whatever) and have the authority to do so, that they also attest that they haven't used any LLMs for their contributions. This will strongly deter direct LLM usage. Indirect usage, where people whip up LLM-generated PoCs that they then rewrite, will still probably go on, and go on without detection, but that's less objectionable morally (and legally) than trying to directly commit LLM code.

As an aside, I've noticed a huge drop off in license literacy amongst developers, as well as respect for the license choices of other developers/projects. I can't tell if LLMs caused this, but there's a noticeable difference from the way things were 10 years ago.

[+] yla92|19 days ago|reply
Zig has taken a similar no-LLM stance:

https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy

[+] pmarreck|19 days ago|reply
Yep, that's why my forks of all their libraries with bugs fixed, such as https://github.com/pmarreck/zigimg/commit/52c4b9a557d38fe1e1... will never ever go back upstream, just because an LLM did it. Lame, but oh well, their loss. Also, this is dumb because anyone who wants fixes like this will have to find a fork like mine that has them, which is an increased maintenance burden.
[+] dakolli|19 days ago|reply
If you rely on LLMs, you're simply not going to make it. The person who showed their work on the math test is, 9 times out of 10, doing better in life than the person who only knew how to use a calculator. Now how do we think things are going to turn out for the person who doesn't even think they need to learn how to use a calculator?

Just like when people started losing their ability to navigate without a GPS/Maps app, you will lose your ability to write solid code, solve problems, hell maybe even read well.

I want my brain to be strong in old age, and I actually love to write code unlike 99% in software apparently (like why did you people even start doing this career.. makes no sense to me).

I'm going to keep writing the code myself! Stop paying billionaires for their thinking machines; it's not going to work out well for you.

[+] 0xbadcafebee|18 days ago|reply
Dangerous that all these projects keep going MIT. We wouldn't have an open source community if it weren't for protections against modification without sharing. Almost all software today would be proprietary, as it was before.
[+] okanat|18 days ago|reply
No. People shared code because they wanted to. Open standards are great tools against emerging monopolies, so the losing side used them. IBM lost the OS/2 vs NT war; they propped up Linux. Intel wanted a second option to Microsoft in the server space. AMD wants to gain developers against Nvidia's CUDA monopoly. That's the reason they contribute. Even Linux's own leadership decided against extra freedoms for users; they rejected GPLv3 to keep company contributions coming. That's also why LLVM gets the first implementations of certain optimizations and architectures while being permissively licensed.

Quite a bit of the Linux userspace is already permissively licensed. Nobody has built a full-fledged open source alternative yet, because it is hard to build an ecosystem and hard to test thousands of different pieces of hardware. None of that would happen without well-paid engineers contributing.

[+] tkel|19 days ago|reply
Glad to see they are applying some rigor. I've started removing AI-heavy projects from my dependency tree.
[+] butILoveLife|19 days ago|reply
Are you and Redox just going to fall behind? Projects that used to take months now take days or hours.

It seems well intentioned, but lots of bad ideas are like this.

I was told by my customer they didn't need my help because Claude Code did the program they wanted me to quote. I sheepishly said, 'I can send an intern to work in-house if you don't want to spend internal resources on it.'

I can't really imagine what kind of code will be done by hand anymore... Even military level stuff can run large local models.

[+] stuaxo|19 days ago|reply
We need LLMs that have a certificate of origin.

For instance a GPL LLM trained only on GPL code where the source data is all known, and the output is all GPL.

It could be done with a distributed effort.

[+] hparadiz|19 days ago|reply
I am 100% certain that code that Redox OS relies on in upstream already has LLM code in it.
[+] inder1|18 days ago|reply
the skills that protect against displacement long-term are exactly what vibe coding erodes. an engineer who built with AI but never developed the instincts to spot its mistakes has a gap they don't know they have. this maintainer problem is a preview: when you can't tell the difference between a PR from someone who understood the code and one from someone who just prompted into it, the verification burden doesn't disappear. it shifts to whoever has enough skill to catch the errors.
[+] munk-a|18 days ago|reply
A long list of contribution PRs is seen as resume currency in the modern world. A way to game that system is to autogenerate a whole bunch of PRs and hope some of them are accepted to buff your resume. Our issue is that we've been impressed by the volume of PRs and not their quality. The correction is that we should start caring about the volume of rejected PRs and the quality of accepted ones (like reviewing merge discussions, since they're a close corollary to what can be expected during an internal PR). As long as the volume of PRs is seen as a positive indicator, people will try to maximize that number.

This is made more complex by the fact that the most senior members of organizations tend to be irrationally AI-positive - so it's difficult for the hiring layer to push back on a candidate for over-reliance on tools, even if they fail to demonstrate core skills that those tools can't supplement. The discussion has become too political[1] in most organizations, and that's going to be difficult to overcome.

1. In the classic intra-organizational meaning of politics - not the modern national meaning.

[+] sbcorvus|18 days ago|reply
I understand the knee-jerk reaction to restrict LLMs, but that feels like a failing prospect. They're going to be doing an incredible amount of heavy lifting on code generation, so why would you intentionally cut out what will likely be 90% or more of potential contributions? Wouldn't it be better to come up with a system that tags the type of contributor, i.e. human vs. AI? What about building an agentic architecture that reduces your review burden? Just a thought.
[+] ajstars|18 days ago|reply
The interesting tension here is that "no LLM-generated code" is easy to state but hard to enforce - a developer who uses an LLM to understand a concept and then writes the code themselves is indistinguishable from one who didn't. The policy probably works as a cultural signal more than a technical guarantee, which might be exactly what they want.
[+] The-Ludwig|19 days ago|reply
Hm, wondering how to enforce this rule. Rules without any means of enforcement can put honest people at a disadvantage.
[+] jacquesm|19 days ago|reply
Hiring managers could help here: if you feel like someone's open source contributions are important for your hiring decision, make it plain that you only count them if the person is a core contributor. Drive-by contributions should not count for anything, even if accepted.
[+] qsera|19 days ago|reply
I think clients who care about getting good software will eventually require that LLMs are not directly used during the development.

I think one way to frame the use of LLMs is to compare a dynamically typed language with a functional, statically typed one. Functional programming languages with static typing make it harder to implement a solution without understanding the problem and developing an intuition for it.

But programming languages with dynamic typing will let you create (partial) solutions with a lesser understanding of the problem.

LLMs make it even easier to implement even more partial solutions while understanding even less of the problem (actually, zero understanding is required).

If I am a client who wants reliable software, then I want a competent programmer to

1. actually understand the problem,

2. and then come up with a solution.

The first part is really important for me. Using an LLM means I cannot count on 1 being done, so I would not want the contractor to use LLMs.

[+] witx|15 days ago|reply
Good that someone is taking a stance against slopware.
[+] scotty79|19 days ago|reply
I see a lot of oss forks in the future where people just fork to fix their issues with LLMs without going through maintainers. Or even doing full LLM rewrites of smaller stuff.