(no title)
Alupis|21 days ago
The PR author had zero understanding of why their entirely LLM-generated contribution was viewed so suspiciously.
The article validates a significant point: it is one thing to have passing tests and produce output that resembles correctness; it is something entirely different for that output to be good and maintainable.
Incipient|21 days ago
>Beats me. AI decided to do so and I didn't question it.
Haha that's comedy gold, and honestly a good interview screening situation - you'd instantly pass on the candidate!
thewhitetulip|21 days ago
He told me, "I spent n days architecting the solution."
He showed me a Claude-generated system design... and I said OK and went to review the code. An hour later I asked why the code was repeated all over the place at the end. Dude replies, "Junk the entire PR, it's AI generated."
simgt|21 days ago
[0] https://github.com/ocaml/ocaml/pull/14369#issuecomment-35565...
ordu|21 days ago
It is not just patience; he is ready to spend a shitload of time explaining basics to strangers. Such an answer would, I believe, take at the very least half an hour to compose, not counting the time needed to read all the relevant discussion to get the context. But yeah, it would be great to have more people like him around.
amoss|21 days ago
zozbot234|21 days ago
When contributions are small and tightly human-controlled it's also less likely that potential legal concerns will arise, since it means that any genuinely creative decisions about the code are a lot easier to trace.
(In this case, the AI seems to have ripped off a lot of the work from OxCaml with inconsistent attribution. OxCaml is actually license-compatible (and friendly) with OCaml, but obviously any merge of that work should happen on its own terms, not as a side effect of ripoff slop code.)
Culonavirus|21 days ago
nyantaro1|20 days ago
copilot_king_2|21 days ago
If you haven't come across a significant number of AI addicts as obnoxiously delusional as @Culonavirus describes, you must be getting close to retirement age.
People with any connection to new college graduates understand that this sort of idiotic LLM-backed arrogance is extremely common among low-to-mid-functioning twenty-somethings.
co_king_3|21 days ago
[deleted]
onion2k|21 days ago
People aren't prompting LLMs to write good, maintainable code though. They're assuming that because we've made a collective assumption that good, maintainable code is the goal, it must also be the goal of an LLM. That isn't true. LLMs don't care about our goals. They are solving problems in a probabilistic way based on the content of their training data, context, and prompting. Presumably if you take all the code in the world and throw it in a mixer, what comes out is not our Platonic ideal of the best possible code, but actually something more like a Lovecraftian horror that happens to get the right output. This is quite positive, because it shows that with better prompting, context, and training we might actually be able to guide an LLM to know what good and bad look like (based on the fact that we know). The future is looking great.
However, we also need to be aware that 'good, maintainable code' is often not what we think is the ideal output of a developer. In businesses everywhere the goal is 'whatever works right now, and to hell with maintainability'. When a business is 3 months from failing, spending time to write good code that you can continue to work on in 10 years feels like wasted effort. So really, for most code that's written, it doesn't actually need to be good or maintainable. It just needs to work. And if you look at the code that a lot of businesses are running, it doesn't. LLMs are a step forward in just getting stuff to work in the first place.
If we can move to 'bug free' using AI at the unit level, then AI is useful. Above individual units of code, things like logic, architecture, and security still have to come from the developer, because AI can't yet have the context of a complete application. Once that's ready we can tackle 'tech debt free', because almost all tech debt lives at that higher level. I don't think we'll get there for a long time.
charcircuit|20 days ago
>Presumably if you take all the code in the world and throw it in mixer what comes out is not our Platonic ideal of the best possible code, but actually something more like a Lovecraftian horror that happens to get the right output.
These statements have been inaccurate since 2022, when LLMs started to undergo post-training.
codebolt|21 days ago
Then they're not using the tools correctly. LLMs are capable of producing good clean code, but they need to be carefully instructed as to how.
I recently used Gemini to build my first Android app, and I have zero experience with Kotlin or most of the libraries (but I have done many years of enterprise Java in my career). When I started I first had a long discussion with the AI about how we should set up dependency injection, Material3 UI components, model-view architecture, Firebase, logging, etc and made a big Markdown file with a detailed architecture description. Then I let the agent mode implement the plan over several steps and with a lot of tweaking along the way. I've been quite happy with the result, the app works like a charm and the code is neatly structured and easy to jump into whenever I need to make changes. Finishing a project like this in a couple of dozen hours (especially being a complete newbie to the stack) simply would not have been possible 2-3 years ago.
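The architecture-first layering described above (view model over an injected data layer) can be sketched in plain Java; all names here (NoteRepository, NoteViewModel) are hypothetical illustrations, not code from the commenter's app, and a real Android project would use Hilt and the Jetpack ViewModel class instead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Data layer: the rest of the app depends on this interface, not a concrete store.
interface NoteRepository {
    List<String> notes();
}

class InMemoryNoteRepository implements NoteRepository {
    private final List<String> items = new ArrayList<>();
    void add(String note) { items.add(note); }
    public List<String> notes() { return List.copyOf(items); }
}

// View-model layer: the UI talks only to this class, never to the repository.
class NoteViewModel {
    private final NoteRepository repo;
    NoteViewModel(NoteRepository repo) { this.repo = repo; } // constructor injection
    // Presentation logic lives here, e.g. truncating titles for a list screen.
    List<String> titles() {
        return repo.notes().stream()
                .map(n -> n.length() > 20 ? n.substring(0, 20) : n)
                .collect(Collectors.toList());
    }
}

public class Main {
    public static void main(String[] args) {
        InMemoryNoteRepository repo = new InMemoryNoteRepository();
        repo.add("Grocery list: milk, eggs, bread");
        NoteViewModel vm = new NoteViewModel(repo);
        System.out.println(vm.titles()); // prints [Grocery list: milk, ]
    }
}
```

Pinning down this kind of separation in a Markdown architecture file before letting an agent implement anything is exactly what keeps generated code "neatly structured and easy to jump into" later.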
tossandthrow|21 days ago
Adding AI-generated comments is, IMHO, one of the rudest uses of AI.
usrusr|21 days ago
A slightly sarcastic (or perhaps not so slightly...) mental model of legal conflict resolution is that much of it boils down to throwing lots of content at the opposing side, claiming that it shows the represented side is right, and creating a task for the opposing side to find a flaw in that material. I believe this game of quantity applies across the whole range, from "I'll have my lawyer repeat my argument in a letter featuring their letterhead" all the way to paper tsunamis like the Google-Oracle trial.
Now give both sides access to LLMs... I wonder if the legal profession will eventually settle on some format of in-person, offline resolution with strict limits on recesses and/or word counts for both documents and notes, because otherwise conflicts will fail to get settled in anyone's lifetime (or will be won by whoever doesn't run out of tokens first - come to think of it, the technogarchs would love this, so I guess this is exactly what will happen, barring a revolution).
ncruces|21 days ago
Pretty soon we'll have AIs talking to each other.
63stack|21 days ago
kleiba|21 days ago
I wouldn't call this a fiasco; it reads to me more like being able to create huge amounts of code - whether the end result works well or not - breaks the traditional model of open source. Small contributions can be verified, and the merit-vs-maintenance-effort trade-off can at least be assessed somewhat more realistically.
I have no stake in the "vibe coding sucks" vs. "vibe coding rocks" debate, and I read that thread as an outsider. I cannot help but find the PR author's attitude absolutely okay, while the compiler folks come across as very defensive. I do agree with them that submitting a huge PR without prior discussion cannot be the way forward. But that's almost orthogonal to the question of whether AI-generated code is or is not of value.
If I were the author, I would probably take my 13k LOC proof-of-concept implementation and chop it down into bite-size steps that are easy to digest, then try to get them integrated into the compiler successively, while being totally upfront about what the final goal is. You'd need to be ready to accept criticism and requests for changes, but it should not be too hard to have your AI of choice incorporate those into your code base.
I think the author's main mistake was not using vibe coding; it was dreaming up his own personal ideal of a huge feature and then going ahead and single-handedly implementing the whole thing without involving anyone from the actual compiler project. You cannot blame the maintainers for not being crazy about accepting such a huge blob.
vasco|21 days ago
I struggle to think how someone thinks this is polite. Is politeness to you just not using curse words?
63stack|20 days ago
"Beats me"
Do you consider this "professional and polite"? Raise your standards.
monegator|21 days ago
everybody should collectively tell him to fuck off