> Users posting unreviewed LLM-generated content without any admission will be immediately banned without recourse.
That discussion doesn’t track the change: the discussion is around unreviewed content and is quite nuanced, but the change is actually far stricter, extending to any use of an LLM, reviewed or not.
There's definitely a "too big to fail" thing going on here given how many billion/trillion dollar companies around the world now have 18+ months of AI-assisted code in their shipped products.
Products directly generated by a generative model are not copyrightable in the US, and are therefore in the public domain. I'm not a lawyer, but I think the cases and commentary have been pretty clear: if you make a significant human contribution (arrangement, modification, etc.), the result can be copyrighted.
> Has the situation changed on AI code legally speaking?
lol,
l m a o,
essentially people who use LLMs have been betting that courts will rule in their favour, because shit would hit the fan if it didn't.
The courts, however, have consistently ruled against AI-generated content. It's really only a matter of time until either the bubble bursts, or legislation happens that pops the bubble. Some people here might hope otherwise, of course, depending on how reliant they are on the hallucinating LSD-ridden mechanical turks.
> Does this mean Copilot tab complete is banned too? What about asking an LLM for advice and then writing all the code yourself?
You're brushing up against some of the reasons why I am pretty sure policies like this will be futile. They may not diminish in popularity but they will be largely unenforceable. They may serve as an excuse for rejecting poor quality code or code that doesn't fit the existing conventions/patterns but did maintainers need a new reason to reject those PRs?
How does one show that no assistive technologies below some threshold were used?
I'm pretty sure the point is that anything clearly generated will result in an instant ban. That seems rather fair: you want contributors who only submit code they can fully understand and reason about.
Not sure about this project in particular, but many more popular projects (curl comes to mind) have adopted similar policies not out of spite but because they'd get submerged by slop.
Sure, a smart guy with a tool can do so much more, but an idiot with a tool can ruin it for everyone.
I am wondering why you are posting this link and asking this question of the HN community, instead of asking the project directly for more details.
It does look like your intent is to stir up some turmoil over the project's position, and not to contribute constructively to the project.
chezmoi is a great tool, and I admire this project taking a strong stand. However I can’t help but feel that policies like this are essentially unenforceable as stated: there’s no way to prove an LLM wasn’t used to generate code. In many cases it may be obvious, but not all.
Some people post vulnerability disclosures or pull requests that are obviously fake and LLM-generated. One example: https://hackerone.com/reports/2298307
These people are collaborating in bad faith and basically just wasting project time and resources. I think banning them is very legitimate and useful. It does not matter if you manage to "catch" exactly 100% of all such cases or not.
I don't think rules like that are meant to be 100% perfectly enforced. It's essentially a policy you can point to when banning somebody, and it shifts the locus of disagreement: if you get banned for alleged AI use, you have to argue that you didn't use AI. It doesn't matter to the project if you were helpful and kind; the policy is no AI.
> Isn't it more reasonable to explain in excruciating detail what kind of contributions you will allow?
No, it's not. You can read the rule as "If it's obvious enough your code has been LLM-generated, you will get banned" if you feel like the conciseness of the current rule makes you uneasy about using Copilot.
Besides, I suspect in the maintainer's case, banning unreviewed LLM contributions is effectively congruent to banning all LLM contributions.
If you think the rule is unfair towards LLMs because they can do so much good, feel free to open a good, clean, useful PR clearly stating how you used the LLM to generate code.
This sounds limiting. I compare LLM generated content to autocomplete.
When autocomplete shows you options, you can choose any of them blindly and things will obviously fail, but you can also pick the right method to call and continue your contribution.
When it comes to LLM-generated content, it's better to provide guidelines for contribution rather than banning it. For example:
* if you want to generate any doc use our llms_doc_writing.txt
* for coding use our llms_coding.txt
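Neither of those files exists in the chezmoi repo; purely as a hypothetical sketch of the commenter's suggestion, such a guidelines file might look like:

```text
# llms_coding.txt (hypothetical example, not a real chezmoi file)
- Run the full test suite and make sure it passes before opening a PR.
- Match the existing code style; do not reformat unrelated lines.
- Disclose in the PR description which parts were LLM-generated and how
  you reviewed them.
- Keep PRs small: one logical change per PR.
```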
This false equivalence with autocomplete is a red herring. You can't just dismiss the very real problems of slop coding the maintainers were forced to deal with[1] by comparing it with something tangentially related that has none of the problems.
I'm minimally exposed to vibecoding, but I'm already finding it immensely useful. That said, one thing I don't want to do is touch that autogenerated code; I hardly even open it in an editor.
Anyone feeling the same? That they're not for humans to see?
Yes, and that means that it should never be used for anything connected to the internet or where there is a human cost if it is wrong.
Vibe coding is great for local tools where security isn't a concern and where it is easy for the user to verify correctness. It is when people want to do that professionally, on software that actually needs to work, that it becomes a massive ethical problem.
It seems clear to me that this isn't a well-thought-out policy, but more of a tantrum by yet another developer angry about the industry changing out from under them. Sadly, it won't help; it'll just hasten this project's death.
What's interesting is the change in the policy. Old policy:
> If you use an LLM (Large Language Model, like ChatGPT, Claude, Gemini, GitHub Copilot, or Llama) to make a contribution then you must say so in your contribution and you must carefully review your contribution for correctness before sharing it. If you share un-reviewed LLM-generated content then you will be immediately banned.
...and the new one:
> If you use an LLM (Large Language Model, like ChatGPT, Claude, Gemini, GitHub Copilot, or Llama) to make any kind of contribution then you will immediately be banned without recourse.
Looking at twpayne's discussion about the LLM policy[1], it seems like he got fed up with people not following those instructions:
> I stumbled across an LLM-generated podcast about chezmoi today. It was bland, impersonal, dull, and un-insightful, just like every LLM-generated contribution so far.
> I will update chezmoi's contribution guide for LLM-generated content to say simply "no LLM-generated content is allowed and if you submit anything that looks even slightly LLM-generated then you will be immediately banned."
Even more yikes. They found a third-party LLM-generated podcast and made the policy even harsher because of it? What happens when they continue to run into more LLM-generated content out in the wild?
Interestingly, this is exactly the sort of behavior people have been losing their minds about lately with regards to Codes of Conduct.
This is dumb. LLMs are a tool, and a very useful one. Bad PRs should always be rejected no matter the source, but banning a tool because some people can't use it is not what engineering is about.
mambo_giro|4 months ago
And corresponding discussion: https://github.com/twpayne/chezmoi/discussions/4010
bryanlarsen|4 months ago
squigz|4 months ago
Yikes. If maintainers want to ban people for wasting their time, that's great, but considering how paranoid people have gotten about whether something is from an LLM or not, this seems heavy-handed. There needs to be some kind of recourse. How many legitimate-but-simply-wrong contributors will be banned due to policies like this?
btown|4 months ago
As it stands, a potential contributor couldn’t even use basic tab completion for even a single line of code. That’s… certainly a choice, and one that makes me less confident in the project’s ability to retain reliable human contributors than would otherwise be the case.
koakuma-chan|4 months ago
Maintainer sounds angry
Luker88|4 months ago
Am I now assured that the copyright is mine if the code is generated by AI? Worldwide? (or at least North America-EU wide)?
Do projects still risk becoming public domain if they are all AI generated?
Does anyone know of companies that have received *direct lawyer* clearance on this, or are we still at the stage "run and break, we'll fix later"?
Maybe having a clear policy like this might be a defense in case this actually becomes a problem in court.
simonw|4 months ago
Several of the big LLM vendors offer a "copyright shield" policy to their paying customers, which effectively means that their legal teams will step in to fight for you if someone makes a copyright claim against you.
Some examples:
OpenAI (search for "output indemnity"): https://openai.com/policies/service-terms/
Google Gemini: https://cloud.google.com/blog/products/ai-machine-learning/p...
Microsoft: https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot...
Anthropic: https://www.anthropic.com/news/expanded-legal-protections-ap...
Cohere: https://cohere.com/blog/cohere-intellectual-property
Luker88|4 months ago
https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
--
* lots of opinions of many different parties
* quote: "No court has recognized copyright in material created by non-humans". The open question then becomes how much AI involvement counts as mere influence, and how modifications are treated
* Courts have recognized that using AI as reference and then doing all the work by yourself is copyrightable
* AI can not be considered "Joint work"
* No amount of prompt engineering counts.
* Notable: in case of a hand-drawn picture modified by AI, copyright was assigned exclusively to the originally human hand-drawn parts.
Notable international section:
* Korea allows copyright only on human modifications.
* Japan: case-by-case
* China allows copyright
* EU has no court case yet, only comments. Most of the world is in various levels of "don't really know"
After 40 pages of "people have different opinions, can't really tell", the conclusion section says "existing legal doctrines are adequate", but it explicitly excludes content produced through prompt engineering alone from copyrightability.
sofixa|4 months ago
Not only is the answer to that no, you have no guarantee that it isn't someone else's copyright. The EU AI Act states that AI providers have to make sure that the output of AI isn't infringing on the source copyright, but I wouldn't trust any one of them bar Mistral to actually do that.
JustFinishedBSG|4 months ago
I think the position has shifted to "let's pretend this problem doesn't exist because the AI market is too big to fail"
freejazz|4 months ago
Certainly not in the US
daveguy|4 months ago
Long story short, you can't prevent anyone from using AI slop in any way they want. You would have to keep the slop as a trade secret if you want it to remain intellectual property.
Jan 29th 2025 clarification from US Copyright Office: https://www.copyright.gov/newsnet/2025/1060.html
fao_|4 months ago
pkilgore|4 months ago
bryanlarsen|4 months ago
The final policy posted contradicts this statement.
singiamtel|4 months ago
Valodim|4 months ago
> Any contribution of any LLM-generated content
I read this as "LLM-generated contributions" are not welcome, not "any contribution that used LLMs in any way".
More generally, this is clearly a rule to point to in order to end discussions with low-effort, net-negative contributors. I doubt it's going to be a problem for actually valuable contributions.
baby_souffle|4 months ago
Lalabadie|4 months ago
pkilgore|4 months ago
odie5533|4 months ago
qsort|4 months ago
polonbike|4 months ago
jolux|4 months ago
colonwqbang|4 months ago
delusional|4 months ago
pkilgore|4 months ago
alt187|4 months ago
willahmad|4 months ago
JoshTriplett|4 months ago
Coding guidelines generally are, by design.
> * if you want to generate any doc use our llms_doc_writing.txt
That's exactly what the project is providing here. The guidelines for how to use LLMs for this project are "don't".
You say "generally better to", but that depends on what you're trying to achieve. Your suggestion is better if you want to change how people use LLMs, the project's is better if the project is trying to change whether people use LLMs.
etiennebausson|4 months ago
roguecoder|4 months ago
soraminazuki|4 months ago
[1]: https://github.com/twpayne/chezmoi/discussions/4010
WhitneyLand|4 months ago
Then suppose that before submitting, I ask an LLM to review my code and it proposes a few changed lines that are more efficient. Should I then:
- Leave my less efficient code unchanged?
- Try to rewrite what was suggested in a way that’s not too similar to what the LLM suggested?
muli_d|4 months ago
Unreviewed is a key word here.
pkilgore|4 months ago
johnisgood|4 months ago
numpad0|4 months ago
roguecoder|4 months ago
deepanwadhwa|4 months ago
senordevnyc|4 months ago
roguecoder|4 months ago
Personally it's like looking at a ransom note made up of letters cut out of magazines & having people tell me how beautiful the handwriting is.
rufo|4 months ago
[1]: https://github.com/twpayne/chezmoi/discussions/4010#discussi...
squigz|4 months ago
luckydata|4 months ago
pkilgore|4 months ago