Every time I encounter these kinds of policies, I can't help but wonder how they would be enforced. The people considerate enough to abide by them are the ones who would have cared about code quality anyway, so for them the policy is a moot point. OTOH, the people who recklessly spam "contributions" generated by LLMs, by their very nature, are highly unlikely to respect these policies. To me it's like telling bullies not to bully. By the way, I'm in no way against these kinds of policies: I've seen what happened to curl, and I think it's fully within their rights to outright ban any usage of LLMs. I'm just concerned about the enforceability of these policies.
userbinator|5 months ago
joecool1029|5 months ago
One of the parties that decided on Gentoo's policy effectively said the same thing. If I get what you're really asking... the reality is, there's no way for them to know if an LLM tool was used internally; it's an honor system. But enforcement is just banning the contributor if they become a problem. They've banned or otherwise restricted other contributors in the past for being disruptive or spamming low-quality contributions.
It's worded the way it is because most of the parties understand this isn't going away and might get revisited eventually. At least one of them hardline opposes LLM contributions in any form and probably won't change their mind.
puilp0502|5 months ago
To add a bit more context: when I was writing the original comment, I was mainly thinking of first-time contributors who don't have any track record, and how the policy would work against them.
totallymike|5 months ago
h4ny|5 months ago
There is nothing inherently different about these policies that makes them more or less difficult to enforce than other kinds of policies.
yifanl|5 months ago
If a patch turns out to be incorrectly called out, well, that sucks, but I submit that patches were being refused before LLMs came to be.
cleartext412|5 months ago
bgwalter|5 months ago
Several projects have rejected "AI" policies using your argument, even though those same projects have contributor agreements or similar.
This inconsistency suggests that the cheating argument, when applied only to "AI" contributions, is a pretext, and that these projects feel compelled to use or promote "AI" for other reasons.
fuoqi|5 months ago
Of course, if someone has used an LLM during development as a helper tool and done the necessary work of properly reviewing and fixing the generated code, then its use can be borderline impossible to detect, but such PRs are much less problematic.
WD-42|5 months ago
CJefferson|5 months ago
sensanaty|5 months ago
To me the point is that I want to see effort from a person asking me to review their PR. If it's obvious LLM-generated bullshit, I outright ignore it. If they put in the time and effort to mold the LLM output so that it's high quality and they actually understand what they're putting in the PR (meaning they probably replaced 99% of the output), then good, that's the point.
mctt|5 months ago
[generated by ChatGPT] Source: https://news.ycombinator.com/item?id=45217858