akdev1l | 18 days ago
Now that is no longer true. Someone can spend a few minutes generating a nonsense change and push it for review, and I will have to spend a non-trivial amount of time just to know it's nonsense.
This problem is already impacting projects like curl, which recently closed its bug bounty program because of low-effort AI-generated PRs.
saghm | 18 days ago
> Now that is no longer true. Someone can spend a few minutes generating a nonsense change and push it for review, and I will have to spend a non-trivial amount of time just to know it's nonsense.
The problem sounds basically the same to me, honestly. If someone submits code that I can't understand and asks me to review it, the onus is on them to explain it. In the past, maybe they could; if they can't now, the review is blocked until they figure out how to deal with that. If that's not what's happening, it sounds more like a process or organizational problem that the presence or absence of tooling can't fix.
> This problem is already impacting projects like curl, which recently closed its bug bounty program because of low-effort AI-generated PRs.
External contributions are a bit of a different problem IMO. I'd argue that open source maintainers have never had any obligation to accept or review external PRs, though. Low-effort PRs can be closed immediately with no explanation, and that's fine. It's also totally possible and acceptable to limit PRs to only people explicitly listed as contributors. I've even seen projects hosted on their own git infrastructure that don't allow signing up through the web UI, so you can only view everything in the browser (and of course clone the repo, which doesn't require credentials on public git servers anyway).
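The "limit PRs to listed contributors" policy above can be sketched as a small triage helper. This is a hypothetical sketch, not any real forge's API: `ALLOWED_AUTHORS` and the shape of the `pr` dict are assumptions for illustration.

```python
# Hypothetical triage policy: PRs from authors not on an explicit allowlist
# are closed immediately with no review owed; listed contributors enter the
# normal review queue. The allowlist and PR dict shape are made up here.

ALLOWED_AUTHORS = {"alice", "bob"}

def triage(pr: dict) -> str:
    """Return the action to take for an incoming PR."""
    if pr["author"] not in ALLOWED_AUTHORS:
        return "close"   # external PR: close without explanation
    return "review"      # explicit contributor: proceed to review

print(triage({"author": "mallory", "title": "Fix everything"}))  # close
print(triage({"author": "alice", "title": "Update docs"}))       # review
```

The point of the design is that the expensive step (human review) is only ever reached by authors the maintainers have already chosen to trust; everything else is rejected in constant time.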
I guess my overall point is that the changes are more social than technical, and that this isn't the first time there's been a large social shift in how development works (and it likely won't be the last). I think viewing it through the lens of "before good, after bad" is reductive, because it implies the current changes are so large that everything beforehand was similar enough to gloss over what had already been changing over time. I'm not convinced that the social and technical differences in how programming was done between 43 years ago (when the author says they started programming) and the dawn of LLM coding assistants were obviously smaller than the changes AI coding tools have introduced, but that isn't reflected in the level of cynicism in most of these discussions.
akdev1l | 16 days ago
Yes, in the past you could check: "oh, this doesn't have a backtrace or any steps to reproduce, close as won't-fix."
Now you cannot do that. The "low-effort" submission could be a 500+ line code change with accompanying documentation and 300 lines of prose describing the "problem," along with "backtraces" showing the issue.
Except the fix is nonsense, but you have to read 500+ lines to know that. The documentation doesn't match the changes, but you have to read it to know that. The backtraces literally contain made-up functions, but once again you need to look closely to verify.
And if the thing isn't immediately obviously AI-generated, you'll end up asking questions that get forwarded to some AI, and you end up playing broken telephone.
All of this has literally happened to curl across different issues.
abustamam | 18 days ago
I can't speak to open source orgs like curl, but at least at the office, the company should invest time in educating engineers on how to use AI in a way that doesn't waste everyone's time. That could mean introducing domain-specific skills, rules that ensure TDD is followed, generated ADRs, work logs, etc.
I found that when I started implementing workflows like this, there was less slop, and if anyone wanted to know "why did we do it like X," we could point to the ADR and show what assumptions were made. If an assumption was fundamentally wrong, we could tell the agent to fix the assumption and fix the issue (and of course leave a paper trail).
Engineers who waste other engineers' time reviewing slop PRs should just be fired. AI is no excuse to start producing bad code. The engineer should still be responsible for the code they ship.
saghm | 18 days ago
Yeah, this is the unfortunate truth about what's going on here, in my opinion. The underlying problem is that some workplaces just have bad culture or processes that don't do enough to prevent (or even actively encourage) being a bad teammate. AI isn't going to solve that, but it's also not really the cause, and at the end of the day, you're going to have problems at a place like that regardless of whether AI is being used.