Writing everything (generally, new features) twice has turned out to be a really good strategy for me, but it doesn't sit well with bizdev or project managers and tends to be perceived as unnecessary slowness.
But if you plow through a feature and get it "working," you'll still spend much of that work cleaning up the logic and refactoring during your first pass. What rewriting allows you to do is crystallize the logic flow you developed the first time and start cherry-picking in a more linear fashion to meet the blueprint. It also tends to reduce the urge (/ need) for larger-scale refactorings later on.
A project manager or bizdev person writes, rewrites, and rewrites again the documents they produce, do they not? Or do they write the perfect document on the first go?
> it doesn't sit well with bizdev or project managers
To be fair, it makes everything twice as expensive. Managers are always going to reflexively push back against that, even if the new feature covers that cost and more.
> Write everything (generally, new features) twice has turned out to be really good strategy for me, but it doesn't sit well with bizdev or project managers and tends to be perceived as unnecessary slowness.
Silo-isation compounds this. If the maintenance costs are borne by another team, or if any rework will be funded out of a different project, the managers are not going to care about quality beyond the basic "signed off by UAT".
I spent some time doing consulting with an engineering manager who would keep requesting different (correct) implementations of the same functionality until he had seen enough, and then he'd pick one. This did lead to some high-quality software for what needed to be a high-reliability product.
I should probably mention that I was doing consulting engineering here because no employees would work for the guy...
> "gun to your head, you have to finish in 24 hours, what do you do?"
PSA: if you are a project manager / owner or in some other similar position, you do not get to ask this. This is a personal educational exercise, not a way to get stuff done faster.
100%, this should never be an excuse to push for a faster outcome. I have to admit, though, that as a personal mental exercise this has saved me countless hours from going down the rabbit hole of over-engineering. Some problems just need a simple solution, sometimes even without any changes to code.
"gun to your head" is maybe not appropriate for work, but the exercise is good for cutting to the core of a task when necessary. It's really the same question as what is the minimum viable product.
Good code, in my opinion, is written by appropriate selection of suitably contained abstractions. The problem with this, and the article does try to talk about it, is that to select appropriate abstractions, you need to know the "entire" thing. Which is to say, you need to have knowledge of something that isn't there yet.
In other engineering disciplines, like civil or architecture, this problem is solved by using a good blueprinting paradigm like CAD layouts, but I find a distinct lack of this in software[1]. Ergo this advice, which is a rephrasing of "know first and build later". But it is equally easy to lose oneself in what's called analysis paralysis, i.e. getting stuck finding the best design instead of implementing a modest one. In the end, this is what experience brings to the table, I suppose: balance.
[1]closest I can think of are various design diagrams like the class diagrams etc.
Very interesting suggestions, all worth trying. Having a very capable coworker can help here, because they can show you what can be done in a short amount of time. Specifically, I've noticed that some devs get "winded" by a change and want to take a break before moving on; others simply continue. This ability can be improved with practice, both within and across contexts.

Doing things quickly is valuable for many intrinsic reasons that are often overlooked because we decry the poor extrinsic reasons. As with car repair, the odds that you forget how to reassemble the car scale with the time the repair takes. Similarly, if you can execute a feature in a day (especially a complex one that requires changes to many parts of a repo, and/or more than one repo), this is much less risky than taking many days or weeks.

(To get there requires that you have firm command of your toolset, in the same way a mechanic understands his tools or a musician understands her instrument. It also requires that externalities be systematically smooth - I'm thinking particularly of a reliable, locally repeatable, fast CI/CD process.)
(The calculus here is a little different when you are doing something truly novel, as long periods of downtime are required for your brain to understand how the solution and the boundary conditions affect each other. But for creating variations of a known solution to known boundary conditions, speed is essential.)
There's an enhancement in some software I use/maintain that I wrote once and lost (the PC I wrote it on went kaput, and I was working offline, so I also had no backup). It was an entire weekend of coding where I got very much in the zone and happily coded.

After I lost that piece of code I never could summon the will to write it again. Whenever I try to start that specific enhancement I get distracted and can't focus, because I also can't remember the approach I took to get it working, and I'm too lazy to figure out again how it was done. It's been two years now.
That's a good point. Particularly good pieces of work are hard to rewrite.
I remember rewriting some piece of infrastructure once when I moved to another job, but I failed to summon the energy to rewrite it a second time at another job.
> If, after a few days, you can't actually implement the feature, think of what groundwork, infrastructure, or refactoring would need to be done to enable it. Use this method to implement that, then come back to the feature
Really good, this is key. Building a 'vocabulary' of tools and sticking to it will keep your velocity high. Many big tech companies lose momentum because they don't.
I really like the footnote that indirectly says that sometimes you just need to spin up a background thread to figure something out. It resonates heavily with my experience, to the point where I feel like a lot of the value my experience brings is identifying this class of problems faster. You stumble onto one, recognize it's the "think about it passively" type, and move on to other things in the meantime. It would be easy to bang your head on it and get nowhere; sometimes you just need to let it sit for a bit.
Dan Abramov talks about WET (write everything twice) [1] as generally a good approach, primarily because you often don’t know the right abstraction up front, and a wrong abstraction is way worse than a copy/paste.
He has some good visuals that illustrate how incorrectly dependent and impossible to unwind wrong abstractions can become.
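A hypothetical sketch of the trap being described (the names and functions here are illustrative, not from the talk): two similar functions get merged into one "shared" helper, and then each new caller adds a flag until the abstraction is load-bearing and impossible to unwind.

```python
# Illustrative only: how a "deduplicated" helper accretes flags over time.
# Every new caller's special case becomes another parameter, and soon no one
# can change the helper without auditing every call site.
def format_price(amount, currency="USD", *, with_tax=False, tax_rate=0.0,
                 as_cents=False, negative_in_parens=False):
    value = amount * (1 + tax_rate) if with_tax else amount
    if as_cents:
        return f"{round(value * 100)} cents"
    if negative_in_parens and amount < 0:
        return f"({currency} {abs(value):.2f})"
    return f"{currency} {value:.2f}"

# The duplication-tolerant alternative: two small functions that share some
# code by copy/paste, each trivially editable without breaking the other.
def format_receipt_price(amount, tax_rate):
    return f"USD {amount * (1 + tax_rate):.2f}"

def format_ledger_price(amount):
    return f"(USD {abs(amount):.2f})" if amount < 0 else f"USD {amount:.2f}"
```

The copy/paste version is longer on the page but cheaper to change: when receipts and ledgers inevitably diverge, neither function drags the other along.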
I'd say "write everything three times", because it usually takes three versions to get it right: the first is under-engineered, the second is over-engineered, and the third is hopefully just-right-engineered.
I remember seeing somewhere a popular list of the top 10 algorithms used in systems, and it's kinda depressing to realize that the most recent algorithm on the list, the skip list, was invented roughly 30 years ago, and every single one of them is taught in an introductory data structures course. That is, we most likely do not need to study the internals of these algorithms, nor implement them in production. Long ago, smart and selfless engineers already encapsulated them into well-abstracted and highly optimized libraries and frameworks.
Of course, there are exceptions. ClickHouse implemented dozens of variations of hash tables just to squeeze out as much performance as possible. The algorithms used in ClickHouse came from many recent papers that are heavy and deep on math, which few people can even understand. That said, that's the exception rather than the norm.
Don't get me wrong. Having a stable list of algorithms is arguably a hallmark of modern civilization and everyone benefits from it. It's just that I started studying CS in the early 2000s, and at that time we still studied Knuth because knowing algorithms in-depth was still a core advantage to ordinary programmers like me.
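As a small illustration of that point (a sketch using the Python standard library, not something from the comment itself): textbook algorithms like binary search and heap-based top-k selection are already packaged, so most application code just calls them.

```python
import bisect
import heapq

# Binary search: O(log n) lookup in a sorted list, no hand-rolled loop.
sorted_ids = [3, 7, 11, 19, 42]
pos = bisect.bisect_left(sorted_ids, 19)
assert sorted_ids[pos] == 19

# Heap / priority queue: the classic "top-k" pattern in a single call.
latencies = [120, 5, 87, 3, 250, 42]
worst_three = heapq.nlargest(3, latencies)
assert worst_three == [250, 120, 87]
```

Knowing *that* these exist, and their complexity characteristics, matters far more day-to-day than being able to re-derive their internals.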
> start over each day
This reminds me of "spaced repetition" in learning theory. Drilling the same problem from scratch is a great way to get better at iterating through your rolodex of mental models, but so many people prioritize breadth because they think it is the only way to generalize to new problems.
I usually won't rewrite the whole thing twice, but I will rewrite parts of it multiple times. At the very least, the second time around I'll format things and add comments to make them easier to understand. Code should be written for comprehension.
> Another heuristic I've used is to ask someone to come up with a solution to a problem. Maybe they say it'll take 4 weeks to implement. Then I say "gun to your head, you have to finish in 24 hours, what do you do?"
Pretend to be capable of doing this, and in the short moment where the other person is not attentive, get the gun and kill him/her. This satisfies the stated criteria:
> The purpose here is to break their frame and their anchoring bias. If you've just said something will take a month, doing it in a day must require a radically different solution.
> The purpose of the thought experiment isn't to generate the real solution.
:-)
---
Lesson learned from this: if you can't solve the problem that the manager asks you for, a solution is to kill the manager (of course you should plan this murder carefully so that you don't become a suspect).
"You have 24 hours" and "write everything twice"... they go hand in hand, don't they? You're definitely going to rewrite it if you slap code out there.
I like the "gun to the head" heuristic, but I would probably rephrase it to something like "if you only had 24 hours to solve this or the world would come to an end".
My blogging engine [1] is almost 25 years old now. Have I rewritten it? If by "rewritten" you mean "from scratch", then no, I haven't. It has, however, seen several serious reworkings and refactorings over the years (the last great one was the removal of all global variables [2] a few years ago). Starting over would have been just too much work.
Sorry, are you saying unit testing is dumb? Not that you'd be the first to say such a thing, but I've never really understood this if people find them valuable. 100% test coverage is one thing, but having some interdependent functions that do one small thing is a perfect use case for unit tests.
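A minimal sketch of that use case (the functions and names here are hypothetical): two small functions where one builds on the other, each with its own focused test, so a failure points straight at the root cause.

```python
# Two small, interdependent functions: the second builds on the first.
def normalize(name):
    """Trim whitespace and lowercase a display name."""
    return name.strip().lower()

def make_slug(name):
    """Build a URL slug on top of normalize()."""
    return normalize(name).replace(" ", "-")

# One focused unit test per function (pytest style). If make_slug breaks
# because normalize changed behavior, test_normalize fails too and points
# at the root cause directly.
def test_normalize():
    assert normalize("  Hello World ") == "hello world"

def test_make_slug():
    assert make_slug("  Hello World ") == "hello-world"
```

The payoff isn't coverage percentage; it's that the dependency between the two functions is pinned down, so a change to one can't silently shift the other.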
probably_wrong|1 year ago
"I set aside the slides for the pointless CEO presentation tomorrow and work exclusively on this."
"No, you can't cancel on the CEO. Let's say you have two guns to your head and 24 hours, what do you do?"
"I take lots of coffee, skip sleeping tonight, cancel the group status meeting for Wednesday and focus on these two things."
"If you do that we'll look bad in front of the whole group. Let's say you have three guns to your head..."
physicles|1 year ago
> "for each desired change, make the change easy (warning: this may be hard), then make the easy change"
(earliest source I could find is @KentBeck on X)
I love the idea of that vocabulary of tools and libraries, too. I strongly resist attempts to add to or complicate it unnecessarily.
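A hypothetical before/after sketch of that quote (the export example is illustrative, not from Kent Beck): first a refactor that may be hard but changes no behavior, then the desired change becomes trivial.

```python
# Before: every format lives in one if/else chain, so adding a format means
# editing (and risking) the whole chain. The change we want is hard here.
def export_hard(rows, fmt):
    if fmt == "csv":
        return "\n".join(",".join(map(str, r)) for r in rows)
    elif fmt == "tsv":
        return "\n".join("\t".join(map(str, r)) for r in rows)
    raise ValueError(fmt)

# Step 1 (may be hard): refactor to a registry. Behavior is unchanged.
FORMATTERS = {
    "csv": lambda rows: "\n".join(",".join(map(str, r)) for r in rows),
    "tsv": lambda rows: "\n".join("\t".join(map(str, r)) for r in rows),
}

def export(rows, fmt):
    return FORMATTERS[fmt](rows)

# Step 2 (easy): the desired change is now a one-line registration.
FORMATTERS["pipe"] = lambda rows: "\n".join("|".join(map(str, r)) for r in rows)
```

The refactor itself delivers nothing user-visible, which is exactly why it's worth calling out as a deliberate first step rather than skipping.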
halfcat|1 year ago
[1] https://youtu.be/17KCHwOwgms
DrScientist|1 year ago
1. First, write down a bunch of ideas for how I might tackle the problem - including lists of stuff I might need to find out.
2. Look at ways to break the task down into chunks that are complete-able in a session.
3. Implement, in a way that the code is always 'working' at the end of a session.
4. Always do a brain dump into a comment/readme at the end of the session - to make it easy to get going again.
mgaunard|1 year ago
What you should be worried about is the code that hasn't been rewritten in ten years.
spc476|1 year ago
[1] https://github.com/spc476/mod_blog
[2] As therapy for stuff going on at work.
snapcaster|1 year ago
>What you should be worried about is the code that hasn't been rewritten in ten years.
Why would I worry? It's been running for 10 years without significant changes. Isn't that a sign it's more or less accomplishing its purpose?
gavmor|1 year ago
> A spike is a product development method originating from extreme programming that uses the simplest possible program to explore potential solutions.
In my career, I have often spiked a solution, thrown it away, and then written a test to drive out a worthy implementation.
0. https://en.wikipedia.org/wiki/Spike_(software_development)
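A hypothetical sketch of that rhythm (the parsing example is illustrative): the spike answers "can this work at all?", gets deleted, and a test then pins down what the implementation you keep must actually do.

```python
# Spike (throwaway): a quick proof that parsing "key=value;key2=v2" strings
# is feasible at all. It ignored every edge case and was deleted afterwards:
#
#     def spike_parse(s):
#         return dict(p.split("=") for p in s.split(";"))

def parse_pairs(s):
    """Parse 'key=value' pairs separated by ';', tolerating blanks and spaces."""
    result = {}
    for part in s.split(";"):
        if not part.strip():
            continue  # skip empty segments, e.g. the middle of "a=1;;b=2"
        key, _, value = part.partition("=")
        result[key.strip()] = value.strip()
    return result

# The test written after the spike, driving out the keeper implementation.
def test_parse_pairs():
    assert parse_pairs("a=1;b=2") == {"a": "1", "b": "2"}
    assert parse_pairs(" a = 1 ;; b=2 ") == {"a": "1", "b": "2"}
```

The spike's value was the knowledge it produced, not its code; the test is what carries that knowledge forward into the real implementation.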