Karellen|1 year ago
Naturally, that brings to mind the classic:
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
-- Brian Kernighan, The Elements of Programming Style
And also:
> Programs must be written for people to read, and only incidentally for machines to execute.
-- Abelson & Sussman, Structure and Interpretation of Computer Programs
reverendsteveii|1 year ago
> -- Abelson & Sussman, Structure and Interpretation of Computer Programs
I've been living by this my entire career without ever having read this book. I just snagged it. I really do believe that our primary job is to write code that people can understand, and I justify it with a two-axis analysis: code is either correct or incorrect, and comprehensible or incomprehensible. Code that is correct doesn't require any attention, so it isn't worth considering. All the code we care about is therefore incorrect, and it's either comprehensible (and therefore fixable) or it's not. Given that, and the understanding that code rot and AC changes cause all code to become incorrect over time, my primary job is to write code that other developers can understand, so that when it becomes incorrect they can do something about it.
sunrunner|1 year ago
Until the requirements of the system change (in any way), and now the code isn't 'correct' for the new requirements and needs updating.
tengbretson|1 year ago
> -- Abelson & Sussman, Structure and Interpretation of Computer Programs
I think a lot about this whenever I hear blanket statements about software performance. A program should be optimized for its performance on whichever piece of hardware executes it at the highest cost per hour. More than 95% of the time, that piece of hardware is your brain or your coworker's.
nadis|1 year ago
"“Let’s say a startup with 95% AI-generated code goes out [in the market], and a year or two out, they have 100 million users on that product, does it fall over or not? The first versions of reasoning models are not good at debugging. So you have to go in depth of what’s happening with the product,” he suggested."
Prosammer|1 year ago
LLMs do not write very "clever" code by default. Unless you prompt them continuously to make it more "clever", they tend to write lots and lots of simple code rather than "clever" code that reduces redundancy, improves performance, etc.
What I'm curious about is whether these slop-filled codebases will be a problem in the future - traditionally it's been bad practice to have duplicate code everywhere, but with LLMs it feels like it matters less, as long as the code is simple and readable.
evil-olive|1 year ago
duplicate code is a bad practice "traditionally" because it means if you have a bug, you have to fix it in N spots (each of which may have drifted to be slightly different) instead of just 1.
how do LLMs improve that? if you have a bug (which everyone seems to agree happens more often with LLM-generated code) you'll still need to fix it in N spots. being able to feed those N instances into the LLM and ask it to fix the bug maybe speeds the process up a little, but it doesn't solve the underlying problem.
when I got into the industry in the 2000s, saving costs by outsourcing to India was the hype cycle of the day. would you have the same opinion that duplicate code doesn't matter, because you can just pay cheap outsourced engineers to make those N redundant bugfixes?
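The fix-it-in-N-spots point can be sketched concretely. A toy Python example (all names invented for illustration):

```python
# Before: the same normalization logic pasted into two call sites.
# If the logic turns out to be buggy (say, it should also handle a
# leading "+" country prefix), the fix has to be found and made in
# both places -- and the copies may already have drifted apart.
def normalize_billing_phone(raw):
    return "".join(ch for ch in raw if ch.isdigit())

def normalize_shipping_phone(raw):
    return "".join(ch for ch in raw if ch.isdigit())

# After: one shared helper. A bug fix here reaches every caller at once.
def normalize_phone(raw):
    """Keep only digits; a single place to patch when requirements change."""
    return "".join(ch for ch in raw if ch.isdigit())
```

An LLM can be asked to apply the same fix to every copy, but the consolidated version removes the need to locate the copies in the first place.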
Fade_Dance|1 year ago
I would expect there to be innovation in this arena as well, beyond auto-generating code comments for future maintainers.