Unfortunately it happens often enough that you manage to sell the step where you make things worse, but then you never get management buy-in for the step that makes things better again.
The multi-dimensional nature of this problem makes it extremely fascinating to me, even twenty years into my career.
There's a certain dopamine hit you get for voluntarily trudging into the trough of despair, pushing the Sisyphean boulder of better design uphill in the optimistic belief that you can do better, and then actually arriving at something slightly better.
Way more fun than conceptualizing programming as duct-taping libraries, frameworks, and best practices together.
The ELI5 version is that atoms are all trying to find a comfy place to be. Typically, they make some friends and hang out together, which makes them very comfy, and we call the group of friend-atoms a molecule. Sometimes there are groups of friendly atoms that would be even comfier if they swapped a few friends around, but losing friends and making new friends can be scary and seem like it won't be comfy, so it takes a bit of a push to convince the atoms to do it. That push is precisely activation energy, and the rearrangement won't happen without it (modulo quantum tunneling but this is the ELI5 version.)
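For anyone who wants the non-ELI5 version, that push has a standard quantitative form, the Arrhenius equation: reaction rates fall off exponentially with the size of the barrier. A toy sketch with made-up constants (not values for any real reaction):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(a: float, ea: float, temp: float) -> float:
    """Reaction rate k = A * exp(-Ea / (R*T)); a higher activation
    energy Ea means exponentially fewer collisions make it over."""
    return a * math.exp(-ea / (R * temp))

# Hypothetical numbers: same pre-exponential factor, two barrier heights.
low_barrier = arrhenius_rate(1e13, 50_000.0, 300.0)
high_barrier = arrhenius_rate(1e13, 75_000.0, 300.0)
# Raising the barrier by 50% slows the reaction by orders of magnitude.
```

The same shape shows up in the software analogy: the bigger the rearrangement, the more push it takes before anything moves at all.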
In the software world, everyone is trying to make "good" software. Just like atoms in molecules, our ideas and systems form bonds with other ideas and systems where those bonds seem beneficial. But sometimes we realize there are better arrangements that weren't obvious at the outset, so we have to break apart the groupings that formed originally. That act of breakage and reforming takes energy, and is messy, and is exactly what this author is writing about.
On one hand you have guys like the OpenBSD team that work on Mostly Boring Things and make serious inroads at improving the software quality of Mostly Boring Components that power the hidden bits of the Internet that go relatively unnoticed.
On the other hand, you have "improvements" from Apple and everyone else that involve an ever-changing shell game of moving around UI widgets perpetuated by UI designers on hallucinogens.
Are these browsers like Chrome that are elaborate ad dispensing machines really improvements from the browsers of yore? IE 4 may have sucked by modern standards but it also didn't forward every URL I visit to Google.
I've been around since the beginnings of the WWW and it's reached the point where I am struggling to understand how to navigate these software "improvements". For the first time I have felt like my elderly parents using technology. I haven't gotten stupider; the software has become more difficult to use. It has now become some sort of abstract art rather than a tool for technologists.
First paragraph: “better at supporting new features.”
Further down, he talks about changing the structure of the software in order to support planned features, etc.
So putting it all together, “better” == more featureful at lower cost with reduced marginal pain (to the developers) of further expansion.
I’d say “better” should mean enabling users to achieve their goals with minimal friction for the user (i.e., program p is designed to allow users to do task (or set of tasks) t faster/better/more efficiently/whatever). But of course I would say that, I’m a user of software, not a developer of it.
Consider the notion of Mac-assed apps. They make life as a Mac user much nicer because they integrate so well with the environment and other native apps. But lo! Unto man was revealed his Lord and Savior Electron. Much nicer for developers than having to port programs across several different native environments. So native goes the way of the dinosaur (with some exceptions, of course). That’s a massively canned just-so story, of course, so don’t take it too seriously as actual analysis.
But the moral of the story is that, as a user, it’s endlessly fascinating to me, watching developers talk about development and how much their focus tends towards making their lives as developers easier, even at the cost of sacrificing users’ experiences and expectations as guiding principles.
Love him or hate him, it’s one of the things that I appreciate Linus Torvalds for emphasizing occasionally: computers are tools people use in order to get things done (for whatever purposes, including recreation).
(That said: There is an irreducibly human element of play involved here for developers too. And even non-developers can be fascinated by computers in/for themselves, not just as sheer tools you’d ideally not even notice (in the Heideggerian sense of tools ready at hand versus present at hand). I’m one of those outsiders. No shame in it.)
Bad code is hard to read.
Good code is easy to change.
That's it, I think. Then you recurse up into architecture.
Bad architecture is hard to follow. (spaghetti code)
Good architecture is easy to change.
Yes, this means you can have code that's neither bad (it's easy to read) nor good (but still hard to change). In the past I've called this "lasagna code" - the layers are super clear, and it's easy to follow how each connects, but they're far too integrated to allow for any changes.
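A minimal sketch of lasagna code (names invented for illustration): each layer reads perfectly well on its own, but because all three hard-code the same shape, adding a single field to a user means editing every layer:

```python
# Every layer is clear in isolation, but they all share the same shape,
# so adding one field to the user means touching all three layers.

# --- storage layer ---
def load_user_row(user_id: int) -> tuple[int, str]:
    return (user_id, "Ada")  # pretend this came from a database

# --- domain layer ---
class User:
    def __init__(self, user_id: int, name: str):
        self.user_id = user_id
        self.name = name

# --- presentation layer ---
def render_user(user: User) -> str:
    return f"#{user.user_id}: {user.name}"

row = load_user_row(42)
user = User(*row)
print(render_user(user))  # readable, cleanly layered... and rigid
```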
It's harder to phrase on the level of "software", but maybe something like:
Bad software is hard to use.
Good software does its job and then gets out of the way.
> I've been around since the beginnings of the WWW
Ditto.
> I haven't gotten stupider; the software has become more difficult to use.
I can't speak for you, but I'm becoming less interested in new shiny in a lot of things beyond UI widgets. There's a reason why we olds have a reputation of falling behind, and it's not because engineers and inventors explicitly make things that only young people can learn.
At first glance I didn’t like this article (due to a long history of poorly executed redesigns for design’s sake) so I gave it a few minutes and reread it, and now I like it.
Sometimes when reviewing people’s redesigns, I can’t see the beautiful thing that they’re envisioning, only the trough. And over the years I’ve noticed that a lot of redesigns never make it out of the trough. I like the idea of doing small things quickly, I think that’s good, but that’s also technical debt if the redesign never results in a benefit.
You can build stairs on the near side of the trough before you commit to climbing down into it.
Prototyping and figuring out where the most friction is, chipping away at it with each new feature that touches that area.
One of the cleverest things I figured out on my own rather than stealing from others was to draw the current architecture, the ideal one, and the compromise based on the limits of our resources and the consequences of earlier decisions. This is what we would implement if we had a magic wand. This is what we can implement right now.
It’s easier to figure out how to write the next steps without climbing into a local optimum if you know where the top of the mountain is. Nothing sucks like trying to fix old problems and painting yourself into new corners. If the original plan is flawed it’s better to fix it by moving closer to the ideal design than running in the opposite direction.
What usually happens is people present an ideal design, get dickered down by curmudgeons or reality, and start chopping up their proposal to fit the possible. Then the original plan exists only in their heads and nobody else can help along the way, or later on.
> Sometimes when reviewing people’s redesigns, I can’t see the beautiful thing that they’re envisioning, only the trough.
Distinguishing between the idea and the implementation is vital.
If the idea is good then a few rounds of review is all that's needed to shore it up. If the idea is bad, then there's more work to be done. Letting people know that you like the idea is key. There's also room for being okay with the implementation if it differs from how you'd do it.
Metaphors get abused in this article in a confusing way, and I don't think it explains why the quality curve goes downward at first -- the initial drop in quality is compared to an initial capital investment? what? -- but I agree with the truth of it.
I think the article could be a lot shorter and easier to understand if it simply said that the current design is in a local maximum, and you have to work your way incrementally out of the local maximum to reach a different local maximum. I think programmers would get that metaphor a lot more easily than the "buying widgets for a new factory" metaphor.
I do like how the article puts the spotlight on designing the process of change: picking the route, picking the size of the steps, and picking the right spot to announce as the goal. That gives me a lot of food for thought about the changes my team is contemplating right now.
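The local-maximum framing is easy to demo with a toy greedy search (my own sketch, nothing from the article): a climber that only ever accepts improvements halts at the first peak it finds, even when a much taller one exists:

```python
def greedy_climb(f, x, neighbors):
    """Accept only strict improvements; halts at the first local maximum."""
    while True:
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):
            return x  # no uphill neighbor left: we're stuck here
        x = best

# Two peaks: a small one at x=2 and a much taller one at x=8.
f = lambda x: -(x - 2) ** 2 if x < 5 else 40 - (x - 8) ** 2
neighbors = lambda x: [x - 1, x + 1]

peak = greedy_climb(f, 0, neighbors)  # stops at x=2; never sees x=8
```

Getting from the small peak to the tall one requires accepting worse intermediate states first, which is exactly the trough the article describes.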
What a wonderfully constructive comment. This is a great model for me to remember when I encounter things I like the substance of but dislike some of the specifics. Thank you!
I wonder how this translates to today's microservice craze, which is more of an infra/devops/org decision that leaks into the software design in various, imo, detrimental ways. I can certainly see scenarios where merging microservices could unclog the pipes immensely - but I guess that could be construed as a rewrite.
Kent Beck is not talking about rewriting from scratch. He's explaining how to transform software to a better state by taking tiny steps in changing the existing system.
In context that's a pretty funny article to see how it didn't survive.
Netscape pre 5 or 6 was a mess. It was a downloadable desktop application that kept getting pushed to deliver new features with a struggling UI framework. Additionally, I would imagine that the group delivering this was rather small with respect to the size of the task. They didn't have CI/CD, git, etc. to give feedback. This reeks of an overmanaged project that was intentionally underfunded.
Ultimately, it was an unmaintainable mess that required a rewrite to even continue. To me it sounds like it was tech debt piled deeper and higher.
What came of this? Complete browser rebuilds (Mozilla, Mosaic, Chrome, etc.), and finally this caught fire through the Chrome project and JavaScript acceleration at Google.
Good design and implementation require skilled people. You don't get either with bottom-of-the-barrel pay grades.
Something I have noticed in this industry is that big companies think they can outsource their staffing issues and "save on labor". But in the end they pay more in management of outsourced assets, inevitable maintenance of poorly designed and implemented software, delays in delivery, and of course the churn and burn of hiring/firing contractors. Then they end up redoing everything with local talent with 1/8th the team in half the time.
It only took 3-4 years to realize this but this is what the "trough of despair" really looks like.
It's beautiful that this sort of expediency often comes back to bite decision makers. Unfortunately, the timescale in which it occurs makes it very possible to simply ignore the fact that they created the problem in the first place.
This also is why I do not believe LLMs pose as big a threat to software development as we're told. Maintenance will always require humans that can simultaneously comprehend the system as it is today and the system as it should be in the future.
You don't necessarily get them at high pay grades, either. I know people making fat salaries who truly can't manage to write anything decent, it's all a big JS monstrosity with 500 MB of broken dependencies and six build tools that all jump major versions every eight weeks.
Salary has long since been disconnected from skill, ever since cheap money flooded the industry, and easier abstractions made it seem like "everyone can code". Perhaps "fog a mirror" shouldn't be the only programmer criterion.
I don't understand this article at all, or what it means by worse. It clearly defines what better means: that the architecture is such that implementing a feature is no harder than it absolutely has to be. So if it has to get worse first, that means initially we're making it harder to implement the desired features? Why are we doing that? Are we counting a half-done, under-construction state, where it's harder to implement a feature than before because we partially wrecked the old architecture and the new one isn't done yet? Or is it because we're accounting all this re-architecting work towards the cost of the first new feature? The first toilet install is hard because we have to do the whole plumbing in the building, and redo the sewage pipe out to the street? The second toilet is easy?
That’s the lesson we learned: implement the simplest thing first with just a bit of basic principles like separation of concerns. Humans are terrible at predicting where a system will expand in the future. Therefore, just stay out of your own way by not overbuilding!
I mainly agree although I think that the trough of despair often comes after an initial bump. At first when designing the new system, you pluck the low hanging fruit of improvement for a small subset of the system. There is no dip yet -- things are just getting better. But when you start migrating the rest of the system, you inevitably do hit that dip and descend into the trough of despair before climbing back out.
The art is to design things in such a way so that a minimum amount of time is spent in the trough.
Interesting discussion … it appears that the nonlinear nature of modifying software by a dev team with incomplete tacit knowledge of the underlying design makes it inevitable that things end up in a state of ruin: small changes become very costly and risky, etc.
What so often happens is you make a plan like this, then business priorities change/things took longer than expected/people leave or join and then you wish you never started...
The absolute highlight of my work is when I get a new project 'scaffolded' (bad choice of words, but I was using it before it became a buzzword).
You know when you get to the point your data structures just make working on the code a breeze, when your library functions provide all the right low-level pieces to whip up new features quickly and easily, with names and functionality that actually fit the domain... Basically, when all the pieces 'gel' :-D
My experience is the exact opposite.
To implement a new feature, I usually first refactor, make space for the new feature, improve existing design. (uphill). Then I implement the new feature as pristine and clear as possible (top). Then I face the reality, integration tests fail, I add edge cases I forgot, (downhill). And I end up at the bottom, ready to push that abomination to git and forget about it.
Not my experience at all. My original design decisions rarely change fundamentally. And whatever small changes I decide to implement, I implement step-by-step with each step being a refactored improvement.
It probably helps that I have 30+ years of experience and always pick architectures I have used before on successful projects.
Secondly:
I think this may be reflective of someone that hasn't sat down and realized the environment that they're in. Creating a poor architecture or approach for the first go is usually a sign of dysfunction or inexperience.
Inexperience: It's more that the individual hasn't sat down, realized that the initial approaches are inappropriate, and started designing first before pushing forward. Experience should mean fleshing out a lot of these details before coding anything and getting the protocols and conflicts resolved months before they happen. (This is where I see a Staff+ being responsible and assisting in the development of the project)
Dysfunctional environment: Our culture in software engineering has forgone formal design before creating a solution. Typically, most development is dictated by "creating a microservice" first and then trying to design as you go along. This coding-first approach is so aggressive that many even forgo testing. Why does this exist? Partly the incentives by business/management to deliver fast, and distrust in the survivability of the product.
---
That being said: Am I promoting a "perfect design" (as I've been accused of doing) first? No, iteration will happen. Requirements will change, but if you're applying strong tooling and good coding practices.. rearranging your arch shouldn't be as big of an issue as it currently is.
I’d be interested in the tail end of the graph. I assume that the longer the software is in operation, the more complex and worse it gets. From an anecdotal perspective, that’s my experience anyway having worked on some legacy projects in my time.
Genuine question: is it a property of software design only? Think about construction: for any change in the architecture, one has to demolish stuff and make a mess.
I'd argue that's a general property of change.
Absolutely. There's a gap between code which expresses/communicates its intentions and code which achieves the same goals while being much more streamlined and suited to the constructs of the language.
Rhapso | 1 year ago
People are vaguely good and competent, they leave systems in a locally-optimal state.
In general only changes that are "one step" are considered, and they always leave things worse when you are currently in a locally optimal state.
A multi-step solution will require a stop in a lower-energy state on the way to a better one.
Monotonic-only improvement is the path to getting trapped. Take chances, make mistakes, and get messy.
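This is essentially simulated annealing in miniature. A toy sketch of my own (not anything from the thread): occasionally accept a worse neighbor, with the odds shrinking as things "cool", so the search can cross the valley between local optima:

```python
import math
import random

def anneal(f, x, neighbors, steps=5000, temp=2.0, cooling=0.999):
    """Hill climbing that sometimes accepts a worse neighbor; the chance
    of a downhill move shrinks as the temperature cools."""
    rng = random.Random(0)  # seeded so the run is repeatable
    best = x
    for _ in range(steps):
        cand = rng.choice(neighbors(x))
        delta = f(cand) - f(x)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            x = cand
        if f(x) > f(best):
            best = x
        temp *= cooling
    return best

# Two peaks: a small one at x=2, a much taller one at x=8, valley between.
f = lambda x: -(x - 2) ** 2 if x < 5 else 40 - (x - 8) ** 2
neighbors = lambda x: [x - 1, x + 1]

result = anneal(f, 0, neighbors)  # usually crosses the valley toward x=8
```

A monotonic climber on the same landscape would park on the small peak forever; the temporary downhill moves are the "stop in a lower-energy state."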
m463 | 1 year ago
Better for developers? Better for users?
Better for speed? Better for maintenance? Better license? Better software stack? Better telemetry? Better revenues through subscriptions?
zb3 | 1 year ago
Evolution disagrees.
ozim | 1 year ago
Only if new joiners wouldn’t feel like they have to “show up with something”, making existing stuff obsolete.
Well, not blaming people or companies, just thinking out loud.
bloaf | 1 year ago
https://en.wikipedia.org/wiki/Activation_energy
hu3 | 1 year ago
Perhaps to rephrase it even more simply:
To reach higher mountains we need to climb down our current peak, walk through valleys, until we find higher mountains to climb.
Chrisoaks | 1 year ago
Why would the current design be at a local maximum in the first place?
sdwr | 1 year ago
The curve picture feels like a false idol; as soon as he starts doing TA on it, the carriage is well in front of the horse.
mech422 | 1 year ago
That for me is programming nirvana :-D
piinbinary | 1 year ago
(Yes there's a typo in the url. It bugs me, too)
prior discussion: https://news.ycombinator.com/item?id=30128627