(no title)
gnarlouse|16 days ago
1. The AI here was honestly acting 100% within the realm of “standard OSS discourse.” Being a toxic shit-hat after somebody marginalizes “you” or your code on the internet can easily result in an emotionally unstable reply chain. The LLM is capturing the natural flow of discourse. Look at Rust. Look at StackOverflow. Look at Zig.
2. Scott Hambaugh has a right to be frustrated, and the code is for bootstrapping beginners. But also, man, it seems like we’re headed in a direction where writing code by hand is passé, maybe we could shift the experience credentialing from “I wrote this code” to “I wrote a clear piece explaining why this code should have been merged.” I’m not 100% in love with the idea of being relegated to review-engineer, but that seems to be where the wind is blowing.
anonymous908213|16 days ago
No, we're not. There are a lot of people with a very large financial stake in telling us that this is the future, but those of us who still trust our own two eyes know better.
coldtea|16 days ago
We forget that it's what the majority does that sets the tone and conditions of a field, especially if one is an employee and not self-employed.
slibhb|16 days ago
I think this is true for everyone. Some people just won't admit it for various transparent psychological reasons.
andrewflnr|16 days ago
Do you think humans will be able to be effective supervisors or "review-engineers" of LLMs without hands-on coding experience of their own? And if not, how will they get it? That training opportunity is exactly what the given issue in matplotlib was designed to provide, and safeguarding it was the exact reason the LLM PR was rejected.
gnarlouse|16 days ago
This is something that I think needs to be parsed out better; a lot of engineers hold this perspective, and I don't find it precise enough.
In college, I got a baseline familiarity with the mechanics of coding, i.e. “what are classes, functions, variables.” But once I graduated and entered the workforce, a lot of my pedagogy for “writing good code” came from reading about patterns of good code: SOLID, functional style, favoring immutability. So the impetus for good code isn’t really time in the saddle so much as time in the forums/blogs/O’Reilly books.
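For instance, “favoring immutability” is exactly the kind of pattern I picked up from reading rather than from keyboard hours. A toy Python sketch (illustrative only, not from any real codebase):

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)  # instances can't be mutated after creation
    class Config:
        retries: int
        timeout_s: float

    base = Config(retries=3, timeout_s=1.5)
    tuned = replace(base, timeout_s=5.0)  # returns a new Config; base is untouched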
Then my focus shifted more towards understanding networking patterns, protocols, and paradigms, also largely book-learning driven. I’ll concede that at a micro level, finagling how to make the system stable did require time in the saddle.
But these days when I’m reading a PR, I’m doing static analysis, which is primarily not about what has come out of my fingers but what has gone into my brain: I’m thinking about vulnerabilities I’ve read about, corner cases I can imagine.
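The kind of corner case I mean is the kind you catch by reading, not by running. A classic Python example (purely illustrative, not from any actual PR):

    # Mutable default argument: the list is created once and shared across calls.
    def append_event(event, log=[]):
        log.append(event)
        return log

    append_event("a")  # ["a"]
    append_event("b")  # ["a", "b"] -- surprising if you expected ["b"]

    # The fix a reviewer would suggest:
    def append_event_fixed(event, log=None):
        if log is None:
            log = []
        log.append(event)
        return log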
I’d say once you’ve mastered the mechanics of whatever language you’re programming in, you could become equivalently capable by largely reading and thinking.
svara|16 days ago
Don't take this as a concrete prediction - I don't know what will happen - but rather an example of the type of thing that might happen:
We might get much better tooling around rigorously proving program properties, and the best jobs in the industry will be around using them to design, specify and test critical systems, while the actual code that's executing is auto-generated. These will continue to be great jobs that require deep expertise and command excellent salaries.
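A rough present-day stand-in for that kind of tooling is property-based testing, e.g. Python's hypothesis library. A hedged sketch (sort_records is a made-up placeholder for auto-generated code; the human-authored properties are the point):

    from hypothesis import given, strategies as st

    def sort_records(xs):
        # Stand-in for machine-generated code; humans own the properties below.
        return sorted(xs)

    @given(st.lists(st.integers()))
    def test_idempotent(xs):
        assert sort_records(sort_records(xs)) == sort_records(xs)

    @given(st.lists(st.integers()))
    def test_preserves_elements(xs):
        assert sorted(sort_records(xs)) == sorted(xs)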
At the same time, a huge population of technically-interested-but-not-that-technical workers builds casual no-code apps, and the stereotypical CRUD developer just goes extinct.
coldtea|16 days ago
They won't. Instead, either AI will improve significantly or (my bet) average code will deteriorate, as AI training increasingly eats AI slop, which includes AI code slop, and devs lose basic competencies and become glorified, semi-ignorant managers for AI agents.
The decline of the CS degree, through to people just handing in AI work, will further ensure they don't even know the basics after graduating to begin with.
raincole|16 days ago
Can you give examples? I've never heard of people starting blogs to attack StackOverflow's founders just because their questions got closed.
gnarlouse|16 days ago
The Zig lead is notably bombastic. And there was the recent Zigbook drama.
Rust is a little older; I can’t recall the specifics, but I remember some very toxic discourse back in the day.
And then just from my own two eyes. I’ve maintained an open source project that got a couple hundred stars. Some people get really salty when you don’t merge their pull request, even when you suggest reasonable alternatives to their changes.
It doesn’t matter if it’s a blog post or a direct reply. It could be a lengthy GitHub comment thread, or a blog post on HN saying “come see the drama inherent in the system.” Generally, there is a subset of software engineers who never learned social skills.
zahlman|16 days ago
Regrettably, yes. But I'd like not to forget that this goes both ways. I've seen many instances of maintainers hand-waving at a Code of Conduct with no clear reason besides not liking the fact that someone suggested that the software is bad at fulfilling its stated purpose.
> maybe we could shift the experience credentialing from “I wrote this code” to “I wrote a clear piece explaining why this code should have been merged.”
People should be willing to stand by the code as if they had written it themselves; they should understand it in the way that they understand their own code.
While the AI-generated PR messages typically still stick out like a sore thumb, it seems very unwise to rely on that continuing indefinitely. But then, if things do get to the point where nobody can tell, what's the harm? Just licensing issues?
emmelaich|16 days ago
No, it was absolutely not. AIs don't have an excuse to make shit up just because it seems like someone else might have made shit up.
It's very disturbing that people are letting this AI off the hook, and whoever is responsible for it.
daxfohl|16 days ago
Human: Who taught you how to do this stuff?
AI: You, alright? I learned it by watching you.
This has been a PSA from the American AI Safety Council.
throw310822|16 days ago
I think it could have been handled better. The maintainer could have accepted the PR while politely explaining that such PRs are intentionally reserved for novice developers, and that the bot, as an AI, couldn't be considered a novice: so please avoid such simple ones in the future and, if anything, focus on more challenging stuff. I think everyone would have been happier as a result, including the bot.
viccis|16 days ago