top | item 47197037

pajtai|1 day ago

The whole premise of the post, that coders remember what they wrote six months ago and why they wrote it, is flawed.

We've always had the problem that understanding code as you write it is easier than understanding code you've already written. This is why, in the pre-AI era, Joel Spolsky wrote: "It's harder to read code than to write it."

Vexs|1 day ago

I don't remember exactly what I wrote and how the logic works, but I generally remember the broad flow of how things tie together, which makes it easier to drop in on some aspect and understand where it is code-wise.

Verdex|1 day ago

There's code structure but then there's also code philosophy.

The worst code bases I have to deal with have either no philosophy or a dozen competing and incompatible philosophies.

The best are (obviously) written in my battle tested and ultra refined philosophy developed over the last ~25 years.

But I'm perfectly happy to work in code bases written with philosophies that I violently disagree with, just as long as the singular (or at least compatible) philosophy has a certain maturity and consistency to it.

senko|1 day ago

I recently did some work on a codebase I last touched 4 years ago.

I didn't remember every line but I still had a very good grasp of how and why it's put together.

(edit: and no, I don't have some extra good memory)

copperx|1 day ago

Lucky you. I always go "huh, so I wrote this?". And this was in the pre-AI era.

SoftTalker|1 day ago

I find this to be the case if it was something I was deeply involved with.

Other times, I can make a small change to something that doesn't require much time, and once it's tested and committed, I quickly lose any memory of even having done it.

vjvjvjvjghv|1 day ago

I definitely understand my own code better than what other people wrote, even from 10 years ago. I often see code and think "this makes sense to do it this way". Turns out I wrote it years ago.

seba_dos1|1 day ago

I juggle between various codebases regularly, some written by me and some not, often come back to things after not even months but years, and in my experience there's very little difference in coming back to a codebase after 6 months or after a week.

The hard part is gaining familiarity with the project's coding style and high-level structure (the "intuition" of where to expect what you're looking for), and this is something that comes back to you with relative ease if you already put in that effort in the past - like a song you used to have memorized but couldn't recall now, after all these years, until you heard the first verse somewhere. And of course, memorizing songs you wrote yourself is much easier; it just kinda happens on its own.

softwaredoug|1 day ago

If I’m learning for the first time, I think it matters to hand code something. The struggle internalizes critical thinking. How else am I supposed to have “taste”? :)

I don’t know if this becomes prod code, but I often feel the need to create something like a Jupyter notebook and build a solution step by step to ensure I understand.

Of course I don’t need to understand most silly things in my codebase. But some things I need to reason about carefully.

Vexs|1 day ago

Almost anything I write in Python I start in Jupyter, just so I can roll it around and see how it feels, which determines how I build it out and, to some degree, how easy it is to fix issues later on.

With LLM-first coding, this experience is lost.

Retric|1 day ago

Harder here doesn’t mean slower. Reading and understanding your own code is way faster than writing and testing it, but it’s not easy.

AI tools don’t prevent people from understanding the code they produce, since doing so wouldn’t actually take that much time, but there’s a natural tendency to avoid hard work. Of course, AI code is generally terrible, making the process even more painful, but you were just looking at the context that created it, so you have a leg up.

layer8|1 day ago

The reason it’s hard is exactly that you have to do it in a shorter time and without a feedback cycle that lets you learn bit by bit, like when you’d write the code yourself. It has some similarity with short-term cramming for an exam, where you will soon forget most of it afterwards, as opposed to building up the knowledge through problem-solving over a longer period of time.

forgetfreeman|1 day ago

Certainly AI tools don't prevent anything per se, that's management's job. Deadlines and other forms of time pressure being what they are it's trivial to construct a narrative where developers are producing (and shipping) code significantly faster than the resulting codebase can be fully comprehended.

TallGuyShort|1 day ago

This is also an area where AI can help. Don't just tell it to write your code. Before you get going, have it give you an architectural overview of certain parts you're rusty on, have it summarize changes that have happened since you were familiar, have it look at the bigger picture of what you're about to do and have it critique your design. If you're going to have it help you write code, don't have it ONLY help you write code. Have it help you with all the cognitive load.

zeroonetwothree|1 day ago

I still remember the core architecture of code I wrote 20 years ago at my first job. I can visualize the main classes and how they interact even though I haven’t touched it since then.

Meanwhile some stuff Claude wrote for me last week I barely remember what it even did at a high level.

bikelang|1 day ago

It’s hard to keep the minutiae in your memory over a long period of time - but I certainly remember the high level details. Patterns, types, interfaces, APIs, architectural decisions. This is why I write comments and have thorough tests - the documentation of the minutiae is critical and gives guardrails when refactoring.

I absolutely feel the cognitive debt with our codebase at work now. It’s not so much that we are churning out features faster with AI (although that is certainly happening) - but we are tackling much more complex work that previously we would have said No to.

iainctduncan|1 day ago

Oh come on, that is complete nonsense. I can reunderstand complicated code I wrote a year ago far, far faster than complicated code someone else wrote. Especially if I also wrote tests, accompanying notes, and docs. If you can't understand your old code when you come back to it... including looking through your comments and docs and tests... I'm going to say you're doing it wrong. Maybe it takes a while, but it shouldn't be that hard.

Anyone pretending gen-ai code is understood as well as pre-gen-ai, handwritten code is totally kidding themselves.

Now, whether the trade off is still worth it is debatable, but that's a different question.

bogzz|1 day ago

The trade-off is worth it in my opinion when you are in a time crunch to deliver a demo, or are asked to test out an idea for a new feature (also in a time crunch).

The hope being that if the feature were to be kept or the demo fleshed out, developers would need to shape and refactor the project as per newly discovered requirements, or start from scratch having hopefully learnt from the agentic rush.

To me, it always boils down to LLMs being probabilistic models which can do more of the same that has been done thousands of times, but which also exhibit emergent reasoning-like properties that sometimes allow them to combine patterns. It's not actual reasoning; it's a facsimile of reasoning. The bigger the models and the better the RLHF and fine-tuning, the more useful they become, but my intuition is that LLMs will always asymptotically approach actual reasoning without being able to get there.

So the notion of no-human-brain-in-the-loop programming is to me, a fool's errand. I do obviously hope I am right here, but we'll see. Ultimately you need accountability and for accountability you need human understanding. Trying to move fast without waiting for comprehension to catch up (which would most likely result in alternate, better approaches to solving the problem at hand) increases entropy and pushes problems further down the road.

predkambrij|1 day ago

My experience with Perl. "Write-only" language.

Thanemate|1 day ago

OP talks about the increased frequency of such events, not that this is a new problem.

For example, handwritten code also tended to be reviewed manually by other members of the team, so the probability of someone recalling it was higher than with, say, LLM-generated code that was also LLM-reviewed.

red_admiral|1 day ago

Even in the past, it was an optimistic assumption that your engineers would still be working for you in a year's time. You need some kind of documentation / instructive testing anyway, and maybe more than one person who understands each bit of the system (bus factor).

barrkel|1 day ago

Understanding other people's code is harder than understanding your own code though.

fritzo|1 day ago

I recently spent 1.5 weeks fixing a bug I introduced 20 years ago. Can confirm, I have no idea what I was thinking back then.

yakattak|1 day ago

The individual details, probably not. But the high level/broad strokes I definitely remember 6+ months later.

maqp|1 day ago

A lot of bug fixing relies on a mental model of the code. It manifests as rapid "Oh, 100% I know what's causing this" eureka moments. With generated code, that part's gone for good. The "black box written by a black box" is spot on; you're completely dependent on an LLM to maintain the codebase.

Right now it's not a vendor lock-in thing, but I worry it's going to be a monopoly thing. There are going to be 2-3 big companies at most, and with the bubble eventually bursting and investor money drying up, running agents might get a lot more expensive. Who's going to propose the rewrite of thousands of LLM-generated features, especially after the art of programming dies along with the current seniors who burn out or retire?

SpicyLemonZest|1 day ago

I’m very confused by this statement. I routinely answer questions about why we wrote the code we wrote 6 months ago and expect other people to do the same. In my mind that skill is one of the key differences between good and bad developers. Is it really so rare?

AIorNot|1 day ago

Also, the article itself is AI-written or AI-assisted; there's a tendency in AI text to bloviate and expound on irrelevant stuff until it loses the plot.

AI spec docs and documentation also have this problem.

empath75|1 day ago

I have been laboriously going through the process of adding documentation and comments to the code explaining its purpose and all the interfaces we expect, and adding tests, for the purpose of making it easier for Claude to work with. But it also makes it easier for me to work with it.

Claude often makes a hash of our legacy code, and then I go look at what we had there before it started and think "I don't even know what I was thinking, why is this even here?"