robertheadley | 3 months ago
If you want AI code merged, make it small so it's an easy review.
That being said, I completely understand being unwilling to merge AI code at all.
joelreymont | 3 months ago
Consider my other PR against the Zig compiler [1]... I was careful to make it small and properly document it, but there's a strict anti-AI policy for Zig and they closed the PR.
Why?
Is it not small? Not carefully documented? Is there no value in it?
I'm not complaining or arguing for justice. I'm genuinely interested in how people think in this instance. If the sausage looks good and tastes great, and was made observing the proper health standards, do you still care how the sausage was made?!
[1] https://github.com/joelreymont/zig/pull/1 [2] https://ziggit.dev/t/bug-wrong-segment-ordering-for-macos-us...
losvedir | 3 months ago
It takes time to review these things, and when you haven't shown yourself to be acting responsibly, there's no reason to give you the benefit of the doubt and spend time even checking if the damn alleged bug is real. It doesn't even link to an existing issue, which I'm pretty sure would exist for something as basic as this.
How do you know it's an issue? I think you're letting the always confident LLM trick you into thinking it's doing something real and useful.
gexla | 3 months ago
Because structurally it's a flag for being highly likely to waste extremely scarce time. It's sort of like avoiding bad neighborhoods, not because everyone there is bad, but because there is enough bad that it's not worth bothering with.
What sticks out for me in these cases is that the AI sticks out like a sore thumb. Go ahead and use AI, but it's as if the low-effort nature of AI sets users on a course of low effort throughout the whole cycle of whatever they're trying to accomplish.
The AI shouldn't look like AI. The proposed contributions shouldn't stand out from the norm. This includes the entire process, not just the provided code. It's just a bad aesthetic, and for most people it screams "low effort."
xign | 3 months ago
Honestly, I really suggest reading up on what self-reflection means. I read through your various PRs, and the fact that you can't even answer why a random author name shows up in your PR means the code can't be trusted. It's not just about attribution (although that's important). It's that it's such a simple thing that you can't even reason through.
You may claim you have written loads of tests, but that means literally nothing. How do you know they are testing the important parts? Also you haven't demonstrated that it "works" other than in the simplest use cases.
whilenot-dev | 3 months ago
Are you leaving the third-party aspect out of your question on purpose?
Not GP but for me, it pretty much boils down to the comment from Mason[0]: "If I wanted an LLM to generate [...] unreviewed code [...], I could do it myself."
To put it bluntly, everybody can generate code via LLMs, and writing code isn't what defines the dominant work of an existing project anymore, as the write/verify balance shifts to become verify-heavy. Who's better equipped to verify generated code than the maintainers themselves?
Instead of prompting LLMs for a feature, one could request the desired feature from the maintainers in the issue tracker and let them decide whether they want to generate the code via LLMs or not, discuss strategies etc. Whether the maintainers will use their time for reviews should remain their choice, and their choice only - anyone besides the maintainers should have no say in this.
There's also the cultural problem where review efforts are un- or underrepresented in any contemporary VCS, and the amount of merged code grants higher authority over a repository than any time spent doing reviews or verification (the Linux kernel might be an exception here?). We might need to rethink that approach moving forward.
[0]: https://discourse.julialang.org/t/ai-generated-enhancements-...
heavyset_go | 3 months ago
Because AI code cannot be copyrighted. It is not anyone's IP. That matters when you're creating IP.
edit: Assuming this is a real person I'm responding to, and this isn't just a marketing gimmick, having seen the trail you've left on the internet over the past few weeks, it strikes me as mania, possibly chatbot-induced. I don't know what I can say that could help, so I'm dropping out of this conversation and wish you the best.
weare138 | 3 months ago