
Karellen | 1 year ago

> During the discussion, Hu said that even if product builders rely heavily on AI, one skill they would have to be good at is reading the code and finding bugs.

Naturally, that brings to mind the classic:

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

-- Brian Kernighan, The Elements of Programming Style

And also:

> Programs must be written for people to read, and only incidentally for machines to execute.

-- Abelson & Sussman, Structure and Interpretation of Computer Programs
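Kernighan's warning is easy to see in a toy sketch (mine, not from the article): both functions below count the words that appear more than once in a piece of text, but only the plainly written one gives you named intermediate values to inspect when it misbehaves.

```python
def dup_words_clever(text):
    # "As clever as you can be": dense, re-splits the text per word,
    # and leaves nothing to step through in a debugger.
    return len({w for w in text.split() if text.split().count(w) > 1})

def dup_words_plain(text):
    # Written to be read: each step has a name you can inspect
    # with a debugger or a print statement.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    duplicates = [word for word, n in counts.items() if n > 1]
    return len(duplicates)

sample = "the quick brown fox jumps over the lazy dog the fox"
assert dup_words_clever(sample) == dup_words_plain(sample) == 2
```

Both are correct today; only the second leaves its author room to debug it tomorrow.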

reverendsteveii|1 year ago

> Programs must be written for people to read, and only incidentally for machines to execute.

-- Abelson & Sussman, Structure and Interpretation of Computer Programs

I've been living by this my entire career without ever having read the book; I just snagged it. I really do believe that our primary job is to write code that people can understand, and I justify it with a two-axis analysis: code is either correct or incorrect, and comprehensible or incomprehensible.

Code that is correct doesn't require any attention and isn't worth considering. All the code we care about is therefore incorrect, and it's either comprehensible (and therefore fixable) or it isn't. Given that, and the understanding that code rot and AC changes cause all code to become incorrect over time, my primary job is to write code that other developers can understand, so that when it becomes incorrect they can do something about it.

sunrunner|1 year ago

> Code that is correct doesn't require any attention and isn't worth considering.

Until the requirements of the system (of any kind) change and now the code isn't 'correct' for the new requirements and needs updating.

Evil_Saint|1 year ago

AC changes? Can you expand that acronym?

tengbretson|1 year ago

> Programs must be written for people to read, and only incidentally for machines to execute.

-- Abelson & Sussman, Structure and Interpretation of Computer Programs

I think a lot about this whenever I hear blanket statements about software performance. A program should be optimized for its performance on whichever piece of hardware it executes on with the highest cost / hour. 95+% of the time that piece of hardware is yours or your coworker's brain.
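A hypothetical illustration of that trade-off (my sketch, not the commenter's): both functions below decide whether `n` is a power of two. The bit-trick version is kinder to the CPU; the step-by-step version is kinder to the brain that has to maintain it.

```python
def is_power_of_two_fast(n):
    # Classic bit trick: a power of two has exactly one bit set,
    # so clearing the lowest set bit (n & (n - 1)) yields zero.
    return n > 0 and (n & (n - 1)) == 0

def is_power_of_two_clear(n):
    # Written for the reader: keep halving; a power of two
    # reduces to exactly 1 with no odd remainders on the way.
    if n <= 0:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1
```

Which one is "optimized" depends on whether the expensive hardware in the loop is the machine or the maintainer.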

74847575639|1 year ago

Sounds like you're just trying to justify poor software development practices. You can have code that is both performant and readable. Programmers once had to write software for machines 100 times weaker than current machines, yet they had no issue creating software more complex than anything we'll ever build.

nadis|1 year ago

This stood out to me as well, along with this quote:

"Let's say a startup with 95% AI-generated code goes out [in the market], and a year or two out, they have 100 million users on that product, does it fall over or not? The first versions of reasoning models are not good at debugging. So you have to go in depth of what's happening with the product," he suggested.

Prosammer|1 year ago

> So if you're as clever as you can be when you write it, how will you ever debug it?

LLMs do not write very "clever" code by default. Unless you continuously prompt them to make it more "clever", they tend to write lots and lots of simple code, rather than "clever" code that reduces redundancy, improves performance, etc.

What I am curious about is if these slop-filled codebases will be a problem or not in the future - traditionally it's been bad practice to have duplicate code everywhere, but with LLMs it feels like it matters less, as long as the code is simple and readable.

evil-olive|1 year ago

> traditionally it's been bad practice to have duplicate code everywhere, but with LLMs it feels like it matters less, as long as the code is simple and readable.

duplicate code is a bad practice "traditionally" because it means if you have a bug, you have to fix it in N spots (each of which may have drifted to be slightly different) instead of just 1.

how do LLMs improve that? if you have a bug (which everyone seems to agree happens more often with LLM-generated code) you'll still need to fix it in N spots. being able to feed those N instances into the LLM and ask it to fix the bug maybe speeds the process up a little, but it doesn't solve the underlying problem.

when I got into the industry in the 2000s, saving costs by outsourcing to India was the hype cycle of the day. would you have the same opinion that duplicate code doesn't matter, because you can just pay cheap outsourced engineers to make those N redundant bugfixes?
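A toy sketch (my example, not from the thread) of the drift being described: three pasted copies of the same discount rule, one of which has quietly diverged, versus a single helper where a bug fix lands exactly once.

```python
# Three hand-pasted copies of "10% off over 100". A "fix everywhere"
# pass must first notice that copy B is no longer identical.
def price_with_discount_a(price):
    return price * 0.9 if price > 100 else price

def price_with_discount_b(price):
    return price * 0.9 if price >= 100 else price  # drifted: >= vs >

def price_with_discount_c(price):
    return price * 0.9 if price > 100 else price

# After extracting one helper, a bug fix lands in exactly one place.
def price_with_discount(price, threshold=100, rate=0.9):
    return price * rate if price > threshold else price
```

An LLM can be asked to patch all N copies, but it can't make the copies stop drifting; only deduplication does that.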

Fade_Dance|1 year ago

I'd imagine AI refactoring meta layers will continue to develop and grow in importance with time. Right now the focus is on generating reams of new code. As time passes this will inevitably shift to more of a focus on maintaining the code.

I would expect there to be innovation in this arena as well, beyond auto-generating code comments for future maintainers.