top | item 46042732

lambda_foo | 3 months ago

Pretty much. I guess it’s open source, but it’s not in the spirit of open-source contribution.

Plus, it puts the burden of reviewing the AI slop onto the project maintainers, and the future maintenance is not the submitter’s problem. So you’ve generated lots of code using AI: nice work, that’s faster for you but slower for everyone else around you.

skeledrew|3 months ago

Another consideration here that hits both sides at once is that the project has few maintainers. So while pushing generated code onto them for review could be a great burden, it also seems a great burden to get new features done in the first place. So it boils down to a choice: deal with generated code for feature X, or go without feature X for a long time, if ever.

swiftcoder|3 months ago

> or not having X feature for a long time, if ever

Given that the feature is already quite far into development (i.e., the implementation that the LLM copied), it doesn't seem like that is the case here.

dudinax|3 months ago

With the understanding that generated code for X may never be mergeable, given the limited resources.

gexla|3 months ago

Their issue seemed to be the process. They're set up for a certain flow, and jamming that flow breaks it. It wouldn't matter whether it was AI or a sudden surge of interested developers. So it's not a question of accepting or rejecting AI-generated code, but rather of changing the process. That in itself is time-consuming and carries potential risk.