item 43689847

ExxKA | 10 months ago

For one, I have started using the tools that are available right now, to increase my productivity and build intuition for the new capabilities.

I think the original author is on to something about how the structure of our codebases will change, and therefore how our preferred frameworks will change as well.

The frameworks we use today assume that the codebase is DRY[1] and that a human will verify its workings. A human writes a single test for a single component, verifies that the test functions correctly, and then leaves it in for successive runs to prove there has been no regression in code quality.
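The workflow described above can be sketched in a few lines. This is a minimal, hypothetical example (`slugify` is not from the original comment): a human writes one test for one component, verifies it once, and leaves it in the suite as a regression guard.

```python
def slugify(title: str) -> str:
    """Hypothetical component: turn a title into a URL-safe slug."""
    return "-".join(title.lower().split())


def test_slugify():
    # Written once by a human who verified the behaviour by hand;
    # kept in the suite so successive runs catch regressions.
    assert slugify("Hello World") == "hello-world"


test_slugify()
```

The value is not in the single run but in the accumulation: each verified-once test keeps guarding the component on every future change.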

But as the author points out, that doesn't lend itself to truly parallel programming. When one programmer/agent changes a central component that another programmer/agent also changed in the same release cycle, merge conflicts arise, grow into architectural conflicts, and grow into team conflicts.

I see how accepting more duplication can lead to more parallelization. I mostly cared about keeping code DRY because it's hell to refactor a codebase with 5 implementations of the same thing. But if I am just instructing an LLM to make the change, I don't care how many files it has to visit - it's still the same single instruction from me.
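A tiny sketch of that trade-off, with hypothetical names: two files each keep their own copy of a helper instead of sharing one. Two agents can now edit the two files in parallel with no merge conflict on a shared module; the cost is that one human instruction ("change the date format everywhere") must be fanned out to every copy.

```python
# invoice.py -- owned by agent A
def format_date_invoice(year: int, month: int, day: int) -> str:
    # Deliberately duplicated rather than imported from a shared module,
    # so edits here never conflict with edits in receipt.py.
    return f"{year:04d}-{month:02d}-{day:02d}"


# receipt.py -- owned by agent B
def format_date_receipt(year: int, month: int, day: int) -> str:
    # Same logic, duplicated on purpose (WET rather than DRY).
    return f"{year:04d}-{month:02d}-{day:02d}"
```

The duplication only stays safe as long as something - a test, or the instructing human - checks that the copies haven't drifted apart.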

So I will think long and hard about how tooling needs to improve, and how frameworks need to change, to be part of this new paradigm. Just as Object Oriented Programming optimised for human logic rather than CPU cycles, the era of LLMs will optimise for testability and parallelisation rather than GPU cycles or DRY paradigms.

1. https://en.wikipedia.org/wiki/Don%27t_repeat_yourself


franktankbank | 10 months ago

DRY is also about coherence, not just refactorability. If you have multiple different behaviors for the "same" thing, your customers are going to think you suck.