halfadot | 5 months ago
No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side. This lack of will to learn will not change the outcomes for you regardless of whether you're using an LLM. You can spend as much time as you want asking the LLM for in-depth explanations and examples to test your understanding.
So many of the criticisms of coding with LLMs I've seen really do sound like they're coming from people who already started with a pre-existing bias, fiddled with it for a short bit (or worse, never actually tried it at all) and assumed their limited experience is the be-all end-all of the subject. Either that, or they're typical skill issues.
bccdee|5 months ago
There's nothing about C that "forces" people to write buffer overflows. But, when writing C, the path of least resistance is to produce memory-unsafe code. Your position reminds me of C advocates who say that "good developers possess the expertise and put in the effort to write safe code without safeguards," which is a bad argument because we know memory errors do show up in critical code regardless of what a hypothetical "good C dev" does.
If the path of least resistance for a given tool involves using that tool dangerously, then it's a dangerous tool. We say chefs should work with sharp knives, but with good knife technique (the claw grip, for instance) safety is the path of least resistance. I have yet to hear of an LLM workflow where skimming the generated code is made harder than comprehensively auditing it, and I'm not sure that such a workflow would feel good or be productive.
foopod|5 months ago
People tend to take the path of least resistance: maybe not everyone, maybe not right away, but if you create opportunities to write poor code, people will take them. More than ever it becomes important to have strong CI, review, and testing practices.
Edit: okay, maybe I am feeling a little pessimistic this morning :)
ZephyrBlu|5 months ago
This is the whole point. The marginal dev will go to the path of least resistance, which is to skip the understanding and churn out a bunch of code. That is why it's a problem.
You are effectively saying "just be a good dev, there's literally nothing about AI which is stopping you from being a good dev" which is completely correct and also missing the point.
The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.
latentsea|5 months ago
My mental model of it is that coding with LLMs amplifies both what you know and what you don't.
When you know something, you can direct it productively much faster to a desirable outcome than you could on your own.
When you don't know something, the time you would normally have spent researching to build a sufficient understanding gets replaced with evaluating the random stuff the LLM comes up with, which often works, but not in the way it ought to. And since you can get to some result quickly, the trade-off of doing the research feels somehow less worth it.
Probably if you don't have any idea how to accomplish the task you need to cultivate the habit of still doing the research first. Wielding it skillfully is now the task of our industry, so we ought to be developing that skill and cultivating it in our team members.
keeda|5 months ago
Purely vibe-coded projects will soon break in unexplainable ways as they grow beyond trivial levels. Once that happens, their devs will either need to adapt and learn coding for real or be PIP'd. I can't imagine any such devs lasting long in the current layoff-happy environment. So it seems like a self-correcting problem, no?
(Maybe AGI, whatever that is, will change things, but I'm not holding my breath.)
The real problem we should be discussing is: how do we convince students and apprentices to abstain from AI until they learn the ropes for real?
keeda|5 months ago
The industry has institutionalized this by making code reviews a standard best practice. People think of code reviews mainly as a mechanism to reduce bugs, but it turns out the biggest benefits (borne out by studies) actually are better context-sharing amongst the team, mentoring junior engineers, and onboarding of new team-mates. It ensures that everyone has the same mental model of the system despite working on different parts of it (c.f. the story of the blind men and the elephant). This results in better ownership and fewer defects per line of code.
Note, this also doesn't mean everybody reviews each and every PR. But any non-trivial PR should be reviewed by team-mates with appropriate context.
WalterSear|5 months ago
The comparison is only reasonable if most of your job is spent trying to understand their code and make sure it did what you wanted. And with them standing next to you, ready to answer questions, explain anything you don't understand, and pull in any external, relevant parts of the codebase.
mikelockz|5 months ago
I do think that when AI writes comprehensible code, you can spend as much time as necessary asking questions to better understand it. You can ask about tradeoffs and alternatives without offending anybody, and actually get to a better place in your own understanding than would be possible alone.