pajtai | 1 day ago
We've always had the problem that it's easier to understand code while you're writing it than to understand code you've already written. This is why, in the pre-AI era, Joel Spolsky wrote: "It's harder to read code than to write it."
Verdex|1 day ago
The worst code bases I have to deal with have either no philosophy or a dozen competing and incompatible philosophies.
The best are (obviously) written in my battle tested and ultra refined philosophy developed over the last ~25 years.
But I'm perfectly happy to be working in code bases written even with philosophies that I violently disagree with. Just as long as the singular (or at least compatible) philosophy has a certain maturity and consistency to it.
senko|1 day ago
I didn't remember every line but I still had a very good grasp of how and why it's put together.
(edit: and no, I don't have some extra good memory)
SoftTalker|1 day ago
Other times, I can make a small change to something that doesn't require much time, and once it's tested and committed, I quickly lose any memory of even having done it.
seba_dos1|1 day ago
The hard part is to gain familiarity with the project's coding style and high level structure (the "intuition" of where to expect what you're looking for) and this is something that comes back to you with relative ease if you had already put that effort in the past - like a song you used to have memorized in the past, but couldn't recall it now after all these years until you heard the first verse somewhere. And of course, memorizing songs you wrote yourself is much easier, it just kinda happens on its own.
softwaredoug|1 day ago
I don’t know if this becomes prod code, but I often feel the need to create like a Jupyter notebook to create a solution step by step to ensure I understand.
Of course I don’t need to understand most silly things in my codebase. But some things I need to reason about carefully.
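The notebook-style workflow described above can be sketched roughly like this: build the solution in small pieces and sanity-check each one before composing them, so understanding keeps pace with the code. (All names and data here are invented for illustration, not from any real codebase.)

```python
# Step 1: parse a raw record into fields, and verify it immediately.
def parse_record(line):
    name, value = line.split(",")
    return name.strip(), int(value)

assert parse_record("cpu, 42") == ("cpu", 42)  # sanity-check step 1

# Step 2: aggregate parsed records, building on the verified step 1.
def total_by_name(lines):
    totals = {}
    for line in lines:
        name, value = parse_record(line)
        totals[name] = totals.get(name, 0) + value
    return totals

sample = ["cpu, 42", "mem, 10", "cpu, 8"]
assert total_by_name(sample) == {"cpu": 50, "mem": 10}  # sanity-check step 2
```

Each intermediate assertion doubles as a record of what was understood at that step, which is the point of doing it in a notebook rather than all at once.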
Vexs|1 day ago
With LLM-first coding, this experience is lost.
Retric|1 day ago
AI tools don’t prevent people from understanding the code they produce (it wouldn’t actually take that much time), but there’s a natural tendency to avoid hard work. Of course, AI code is generally terrible, making the process even more painful, but you were just looking at the context that created it, so you have a leg up.
zeroonetwothree|1 day ago
Meanwhile some stuff Claude wrote for me last week I barely remember what it even did at a high level.
bikelang|1 day ago
I absolutely feel the cognitive debt with our codebase at work now. It’s not so much that we are churning out features faster with ai (although that is certainly happening) - but we are tackling much more complex work that previously we would have said No to.
iainctduncan|1 day ago
Anyone pretending gen-ai code is understood as well as pre-gen-ai, handwritten code is totally kidding themselves.
Now, whether the trade off is still worth it is debatable, but that's a different question.
bogzz|1 day ago
The hope being that if the feature were to be kept or the demo fleshed out, developers would need to shape and refactor the project as per newly discovered requirements, or start from scratch having hopefully learnt from the agentic rush.
To me, it always boils down to LLMs being probabilistic models: they can do more of what has been done thousands of times, but also exhibit emergent reasoning-like properties that sometimes allow them to combine patterns. It's not actual reasoning, it's a facsimile of reasoning. The bigger the models and the better the RLHF and fine-tuning, the more useful they become, but my intuition is that LLMs will always asymptotically approach actual reasoning without being able to get there.
So the notion of no-human-brain-in-the-loop programming is, to me, a fool's errand. I obviously hope I am right here, but we'll see. Ultimately you need accountability, and for accountability you need human understanding. Trying to move fast without waiting for comprehension to catch up (which would most likely produce alternate, better approaches to the problem at hand) increases entropy and pushes problems further down the road.
Thanemate|1 day ago
For example, handwritten code also tended to be reviewed manually by every other member of the team, so the probability of someone recalling it was higher than for, say, LLM-generated code that was also LLM-reviewed.
AIorNot|1 day ago
AI spec docs and documentation also have this problem.
empath75|1 day ago
Claude often makes a hash of our legacy code, and then I go look at what we had there before it started and think “I don’t even know what I was thinking; why is this even here?”