> Whether it was generated by human or AI is irrelevant.
No, some projects take fundamental issue with AI, be it ethical or copyright-related, or because it raises doubts over whether people even understand the code they're submitting, whether it will be maintainable long term, or whether it will even work.
There was some drama around that with GZDoom: https://arstechnica.com/gaming/2025/10/civil-war-gzdoom-fan-... (although that was a particularly messy case where the code broke things because the dev couldn't even test it, and he merged it anyway; so probably governance problems in the project as well)
But the bottom line is that some projects will disallow AI on a principled basis: they care not just about the quality of the code but that it was written by an actual person. Whether it's possible to just not care about that and sneak stuff in regardless (e.g. using autocomplete, or vibe coding a prototype and then making it your own to some degree), or whether it's possible to use it like any other development tool, is another story.
Edit: to clarify my personal stance, I'm largely in the "code is code" camp - either it meets some standard, or it doesn't. It's a bit like with art - whether you prefer something with soul or mindless slop, unfortunately for some the reckoning is that the purse holders often really do not care.
> No, some projects take fundamental issues with AI, be it ethical, copyright related, or raising doubts over whether people even understand the code they're submitting and whether it'll be maintainable long term or even work.
These issues are no different for normal submissions.
You are responsible for taking ownership and having sorted out copyright. Through prior knowledge, you may accidentally write something identical to pre-existing, copyrighted code. Or steal it straight off Stack Overflow. The same applies to an LLM - at least GitHub Copilot has a feature to detect literal duplicates.
You are responsible for ensuring the code you submit makes sense and is maintainable, and the reviewer will question this. Many submit hand-written, unmaintainable garbage. This is not an LLM specific issue.
Ethics is another thing, but I don't agree with any of the proposed issues. Learning from the works of others is an extremely human thing, and I don't see a problem being created by the fact that the experience was contained in an intermediate box.
The real problem is that there are a lot of extremely lazy individuals thinking that they are now developers because they can make ChatGPT/Claude write them a PR, and throw a tantrum over how it's discriminating against them to disallow the work on the basis that they don't understand it.
That is: The problem is people, as it always has been. Not LLMs.
Yes, absolutely relevant, especially in this software's case. There is no requirement for mass amounts of boilerplate code here, just supposedly smart and correct cryptography and as little code as possible to do the job right. So if someone is using AI, that is a huge red flag.
An obvious sign that something is going horribly wrong in this project.
In fact, I think this kind of news is enough to attract a huge influx of international hackers all targeting this package now, if they weren't already. They will be looking closely at the supply chain, phishing the hell out of the developers, and attempting physical intrusions where they can. It's a hint that the developers might be stressed and making poor decisions, with a huge payoff for infiltrating.
I would agree. IMHO, KeePassXC should, however, lay out their review standards better, so that security-relevant code can actually be reviewed. I am a happy KeePassXC user on multiple devices. However, trying to use and extend it in various settings, I still do not understand their complete threat model, which makes it very difficult to understand the impact of many of the extensions it provides, be it quick unlocking or the API connection to browsers that can be used by arbitrary clients.
People get confused talking about AI. For some reason they skip the fact that a human prompted the LLM for the generated output. One could almost think AI is an agent all on its own.
>Whether it was generated by human or AI is irrelevant.
No. These systems are still so mind-bogglingly bad at anything that involves manual memory management and pointers that even entertaining the idea of using them for something as critical as a non-trivial, large C++ codebase, for a password manager no less, is nuts. It displays a lack of concern for security and a propensity for shortcuts, and I don't want to touch anything by people who even remotely consider this appropriate.