top | item 47197552

jasode | 2 days ago

Not to disagree with anything the article talks about but to add some perspective...

The complaint about "code nobody understands" because of accumulating cognitive debt also happened with hand-written code. E.g. some stories:

- from https://devblogs.microsoft.com/oldnewthing/20121218-00/?p=58... :

  > Two of us tried to debug the program to figure out what was going on, but given that this was code written several years earlier by an outside company, and that nobody at Microsoft ever understood how the code worked (much less still understood it), and that most of the code was completely uncommented, we simply couldn’t figure out why the collision detector was not working. Heck, we couldn’t even find the collision detector! We had several million lines of code still to port, so we couldn’t afford to spend days studying the code trying to figure out what obscure floating point rounding error was causing collision detection to fail. We just made the executive decision right there to drop Pinball from the product.

- and another about the Oracle RDBMS codebase from https://news.ycombinator.com/item?id=18442941

(That HN thread is long, and there are more top-level comments about other ball-of-spaghetti projects besides Oracle.)

bootsmann | 2 days ago

This underlines the OP's argument, no? The argument presented is that situations where nobody knows how and why a piece of code was written will happen more often, and arise faster, with AI.

layer8 | 2 days ago

Indeed, it’ll just produce legacy code faster. We’d need AI to be much better at reliably maintaining code quality, architecture, and feature-rationale documentation than the average developer on the average software project. And that may be indistinguishable from AGI.

the_arun | 2 days ago

Probably we need to start saving prompts in version control. Prompts could serve as context for both humans and machines.
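A minimal sketch of what that could look like (the file names, layout, and commit-message convention here are hypothetical, not an established standard): commit the prompt that motivated a change alongside the code it produced, and link it from the commit message so the history ties prompt to diff.

```shell
# Hypothetical convention: a prompts/ directory versioned next to the code.
git init -q demo && cd demo
mkdir -p prompts src

# The prompt records the goal AND the rationale (the "why").
cat > prompts/0001-add-retry-logic.md <<'EOF'
Goal: add retry with exponential backoff to the HTTP client.
Why: upstream API returns transient 503s during deploys.
EOF

echo "# retry logic would live here" > src/http_client.py

# The commit message points back at the prompt that motivated the change.
git add .
git -c user.name="demo" -c user.email="demo@example.com" \
    commit -q -m "Add retry logic (see prompts/0001-add-retry-logic.md)"
```

The point of the layout is that `git log --follow src/http_client.py` plus the referenced prompt file recovers the intent behind each change, without relying on the LLM conversation itself surviving.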

abustamam | 2 days ago

I've been doing a version of this in a side project. Instead of saving the prompt directly, I have a roadmap. When implementing features, I tell it to brainstorm an implementation from the roadmap. When fixing a bug, I tell it to brainstorm fixes from the roadmap. There's some back and forth, and then it writes a slice that is committed. Then I look it over, verify scope, and it makes a plan (also committed). Then it generates work logs as it codes.

My prompts are literally "brainstorm next slice" or "brainstorm how to fix this bug" or "talk me through the trade-offs of approach A vs. B", so those prompts aren't meaningful on their own.

It's quite effective, but I'm a team of one.

layer8 | 2 days ago

I wonder how scalable that is. After the twentieth feature has been added, how much connection will the conversation about the first feature still have to the current code? And you’ll need a larger and larger context for the LLM to grok the history; or you’ll have to have it rewrite the history in shorter form, but that runs into the same failure modes that explain why we can’t just have it maintain complete documentation (obviating the need to keep a history) in the first place.

lurkshark | 2 days ago

I agree with this; it’s partly why I like spec-driven-development tooling. That said, what I’ve found is that I often don’t include enough of the “why” in my prompt artifacts. The “what” and “how” are pretty well covered, but sometimes I find myself looking back at them thinking, “Why did I do this?” I’ve started including it, but it sometimes feels weird, because I wonder, “Why would the LLM ‘care’ about this story?”

abustamam | 2 days ago

"when I wrote the code, only me and God understood it. Now, only God understands it."

(attributed to Martin Fowler but I can't find any solid evidence)