encomiast | 1 day ago
It seems like that is the open question. The article suggests that people don't maintain this ability:
"The AI group scored 17% lower on conceptual understanding, debugging, and code reading. The largest gap was in debugging, the exact skill you need to catch what AI gets wrong. One hour of passive AI-assisted work produced measurable skill erosion."
From my own (anecdotal) experience, I am seeing a lot more cases of what I call developer bullshit, where developers can't even talk coherently about the work they are vibe-coding. Management doesn't notice this, since it's all techno-babble to them and sounds fancy, but other developers do.
tisdadd | 1 day ago
Edit: I once had a situation where, about once a month, another developer would ask me about workplace setup. I mentioned it to someone and was told that maybe they were the English speaker of the group. Upon further investigation, that seemed to be the case.
logicprog | 1 day ago
I think it's also extremely worth pointing out that when you break down the AI-using group by how they actually used AI, those who had the AI both provide code and afterwards summarize the concepts and what it did actually scored among the highest. The same goes for those who used the AI to ask questions about the code after it was generated. Which seems to indicate to me that as long as you have the AI explain and summarize what it did after each batch of edits, and you also use it to explore and explain existing code bases, you're not going to see this problem.
I'm so extremely tired of people like you who want to engage in this moral panic completely misinterpreting these studies.
dangus | 1 day ago
The embarrassment is understandable. It feels wrong, because in many ways it is wrong.
The only way I’ve had this feel any better is by using it on a non-critical internal tool. I can confidently say “I didn’t write any of this code, because it’s a quality-of-life tool that only lives on developer machines and is not required at any point in our workflow.”
I also agree with the article that, unless computer science departments maintain some pretty strict discipline, this idea of a seniority collapse could be very real.
Will we need those senior engineers if AI keeps getting better? I don’t know. Maybe one day the AI systems are going to just be trusted to be able to untangle complex architectural problems.
If it wasn’t for leaded gasoline, rudimentary cancer treatment, and a good section of my modern video game catalog, I might be wishing I was born earlier.