_shadi|9 months ago
I may not have been clear in my original reply: I don't have this problem when using an LLM myself. I sometimes notice it when reviewing code written by new joiners with the help of an LLM. The code quality is usually fine unless I want to be pedantic, but sometimes the agent makes newcomers dig themselves deeper into the wrong approach, whereas if they had asked a human coworker they would probably have noticed from the start that the solution was heading the wrong way. That touches on what the original article is about. I don't know if that counts as incompetence acceleration, but used wrongly, or without clear direction, an LLM can produce something that works yet has monstrous, unneeded complexity.
viraptor|9 months ago