Good write-up. I've been observing this phenomenon for years and I think that one of the main causes is poor cost estimation by developers (or, often, no cost estimation).
The new project lead asks management for permission to rewrite the project. It's a 10 man-year task, so management declines.
At that point he decides to make the changes gradually, completely ignoring the fact that it's still a 10 man-year task while, realistically, his team can spend two man-months a year on refactoring. Thus the rewrite will be accomplished in 60 years.
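The back-of-the-envelope arithmetic is worth making explicit (the 10 man-year and 2 man-month figures are the hypothetical ones from the scenario above):

```python
# Hypothetical figures from the scenario above.
rewrite_cost_man_months = 10 * 12   # a 10 man-year rewrite, in man-months
refactor_budget_per_year = 2        # man-months the team can actually spare per year

years_to_finish = rewrite_cost_man_months / refactor_budget_per_year
print(years_to_finish)  # 60.0 -- the "gradual" rewrite outlives every project lead
```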
The same pattern repeats with the next project lead and so on.
There's that wonderful point in a project where you can point--point directly at!--a fix to a fix that makes a legacy thing work. I love that point. That's the point I start looking for new work and resenting my forerunners.
"second system" syndrome is very real, but nobody in management ever seems to appreciate "Look, technology has progressed in the last five years since this was written...we can do more with less, pleeeeaaaase let us fix this.".
Nobody looks forward to working with legacy code. The business logic is usually wrong, the business itself has changed, the original developers are gone, the market talent composition has changed, etc.
I'd honestly rather see a business embrace the ablative nature of code and instead build itself with the assumption that, every 2-3 years, chunks will be rewritten. This is an argument in favor of services and better documentation and tests, by the way.
EDIT:
And even worse is when the manager/CTO/whatever admits that the code is going to need rewriting--but not yet. Because that means a) you're going to waste effort temporarily fixing something you know will be thrown out, and b) they're tacitly admitting that they are too scared of change to embrace it and too short-sighted to figure out how to balance business requirements with technical debt.
I've been burned by that multiple times.
A man-month of unpaid overtime for rewrites, not a nano-second for legacy support!
I think projects displaying the pattern often have a root problem that the write-up didn't discuss: an initial application with requirements created by non-technical folks, implemented by the equivalent of outsourced workers, students, interns, or the guy they hired without knowing how to hire a developer.
The initial developer doesn't understand, know, or follow basic CS knowledge or design trade-offs and constraints, but will try to create a reasonable system using whatever has press at the time - Rails, PHP + Zend, etc.
Unfortunately they will studiously follow weird aspects of tutorials they googled while missing the design implications of their choices, leading to monstrosities like a product display page that requires hundreds of recursively issued SQL queries, an entire application where 90% of the code is in one file, database tables composed in the most awful ways, and more.
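A minimal sketch of that query-explosion (the classic N+1 problem), using an in-memory SQLite database; table and column names here are invented for illustration. The naive page issues one query for the product list plus one more query per product, where a single JOIN would do:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT,
                           category_id INTEGER REFERENCES categories(id));
    INSERT INTO categories VALUES (1, 'Books'), (2, 'Tools');
    INSERT INTO products VALUES (1, 'SICP', 1), (2, 'Hammer', 2), (3, 'TAOCP', 1);
""")

# The "monstrosity": one query to list products, then one more per row (N + 1 total).
def display_page_n_plus_one():
    rows = conn.execute(
        "SELECT id, name, category_id FROM products ORDER BY id").fetchall()
    page = []
    for pid, name, cat_id in rows:  # N extra round-trips, one per product
        cat = conn.execute("SELECT name FROM categories WHERE id = ?",
                           (cat_id,)).fetchone()[0]
        page.append((name, cat))
    return page

# The same page with a single JOIN: 1 query instead of N + 1.
def display_page_join():
    return conn.execute("""
        SELECT p.name, c.name FROM products p
        JOIN categories c ON c.id = p.category_id
        ORDER BY p.id
    """).fetchall()

assert display_page_n_plus_one() == display_page_join()
```

With three products the difference is invisible; with a catalog of thousands, the first version makes thousands of round-trips per page view.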
It's this first layer that causes everything that comes after. The root of the problem isn't that trying something new is right or wrong. It's that the original was done so badly nobody can reasonably continue down the same path and everyone who comes after can't figure out how to cope.
Of course it's typically exacerbated by deadlines. When you can add a new dependency and meet your deadline or try to do things the inscrutably bad way they were done previously and miss it (let alone have any ability to test it), it's hardly surprising that people choose to add what they know.
When confronted with something intractable of course you throw your favorite spaghetti at the wall. And of course there's not any sort of order to what sticks.
So I think the lesson is more about the critical importance of doing things remotely right to begin with.
It's not "CS knowledge" that causes tech fashion to change over time. CS knowledge hasn't actually changed very much in the last 30 years. The people who designed SQL knew all about the advantages and tradeoffs of "NoSQL" key-value stores. The object-relational impedance mismatch of ORM systems has been studied for decades.
If anything changes it's the engineering "best practices", like unit testing methodologies. But mostly it's just fashion, and who has the biggest platforms and marketing budgets for pushing their framework.
Software development as a field sucks, no one really knows what they're doing. We're all just figuring it out as we go along.
As he says in the article: "We think that the current shiny best way is the end of history; that it will never be superseded or seen to be suspect with hindsight."
I think that's just one kind of initial nucleation point for the future crystals of failure.
Looking at the 10-year-old PHP codebase we have at work, the real cause is the organizational fluid around the project, a lack of accountability/ownership.
Nobody on the business side can confidently say what ought to be current practice, so the system becomes--via circular logic--its own reference implementation.
Since no end-user can independently state "How We Do Things Here" except by saying "Whatever The System Already Does", that channels the software's future evolutionary paths: You cannot make "common sense" fixes, because the bug might be a feature.
Throw away the first prototype version of a piece of software and rewrite v1.0 from scratch so you have a solid base for future development (for any application that is expected to have a long life - think years, not months), even if the prototype is awesome for the users and works business-wise.
And always write a prototype/MVP first, and plan to throw it away, even if you end up using it in production and it ends up crucial for your business. Yes, you should try to build the MVP in weeks, not months, and you should already have its successor done when you're out of beta... but regardless, the pancake rule will apply: the first one is always awful, and if you try to stack the next ones on top of it, the whole pancake tower will crumble. So throw away the first pancake and start from the second!
Stop blaming non-developers for the problems created by developers.
It's the result of developers who don't have the maturity to consider the long-term risk of their decisions. That has nothing to do with the business leaders.
I freelance which means I see a lot of shit. In any given week I can touch/work with 10+ tech stacks across both Windows and *nix. When you do that on a regular basis it gives you a different perspective on things.
In particular, you write/design code in the same vein as the code around it. It doesn't matter what your personal values as a developer are. Consistency is more important than that. I have been known to pull people to the side and ask them "I'm about to do X, how do you normally do X in codebase Y? I want to do it exactly the way you do".
If you're going to change the way something like a DAL accesses its data, you'd better either do all of it or do none of it, for exactly the reasons the article stated.
That is purely a technical decision, and the blame for that lies with every single developer who chose to introduce a new tech into the stack for no other reason than sensibilities.
I 95% agree with this article, but think that 5% is an important disagreement.
Background: I spent most of the last decade writing, maintaining, and managing a large enterprise app that had to evolve from its .NET 1.1 foundations to the present day. Our code base looked almost precisely like the one talked about in that article, and with the very real problems mentioned there.
But...
1. Sometimes the productivity wins of a new approach are serious enough that it's worth taking the consistency hit. I'd argue that ORMs, for instance, really are worth it compared to the quasi-code-gen ADO.NET layer. Switching to those midway through the project introduced inconsistency, but all future development done with the ORM was much faster (and more reliable) than the stuff done with the old approach.
(And yes, this is a dangerous argument, because it's easy to convince yourself that it's true all the time, if you want an excuse to use a new technology. But if applied skeptically, it really is true sometimes.)
2. Developer morale is a thing. "You can only use technology invented in 2003, none of this fancy .NET 2.0 stuff, and haha LINQ, are you kidding me?" is going to crush people's spirits, whereas "sure, let's try this new module out using ASP.NET MVC" will introduce an inconsistency into the codebase, but will keep developers more engaged and learning new things (and who knows, maybe this experiment will prove to be one of those ones that gives you a big productivity win).
You have to be careful with that, because you can't literally try out every crazy new thing, but you also can't ignore everything new in the name of consistency and expect anyone to want to work on the project.
> you also can't ignore everything new in the name of consistency and expect anyone to want to work on the project.
The strategy will cause a variant of the COBOL effect. You'll either have to pay developers obscene wages to put up with your consistent but antiquated codebase, or try to find developers who are willing to put up with it for normal wages. The latter probably aren't the people you want on your team.
I think this is ultimately caused by developers not quite knowing where and how to place their ambition. It's easy to look at the big legacy app and think, "I can fix that!" Then they roll up their sleeves and go to work, management be damned. Management can't see or understand what the developer does, and they're paying him for his expertise, so they give him the benefit of the doubt and let him do what he thinks he needs to do.
I documented my own answer to this situation here (https://news.ycombinator.com/item?id=8768524). Basically, don't treat organizational problems as technical ones. They're paying you to do a job; everything else you do is basically an ambition. The first rule of ambition is to do no harm.
If you're going to refactor, don't refactor in the direction of new. Refactor in the direction of coherency. Do things that you can finish, or that at least don't matter that much if you don't. Like adding tests. Nothing wrong with that.
If you want to fix the legacy app, you need to make less ambitious changes, because you need to be able to finish them. Instead of changing the data types, refactor the initialization code. The initialization code runs once, data manipulation code is basically convention and you need to change it everywhere at once or not at all. Refactor it to the point where you can change it everywhere at once and then do that. Or get really good with a text editor and macro the changes.
Giles Bowkett has a great book on this: Unfuck A Monorail For Great Justice (http://gilesbowkett.blogspot.com/2013/03/new-ebook-unfuck-mo...). It's Rails-specific but the ideas work everywhere. Write code that understands code.
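"Write code that understands code" can start as small as a throwaway script that mechanically rewrites a convention everywhere at once, so the codebase never sits half-converted. A regex-based sketch (the old and new call names, `get_conn()` and `db.connection()`, are invented for illustration; a real codemod would usually use a parser rather than regexes):

```python
import re
from pathlib import Path

# Hypothetical convention change: every call site moves from the legacy
# helper get_conn() to the new accessor db.connection().
OLD = re.compile(r"\bget_conn\(\)")
NEW = "db.connection()"

def convert(source: str) -> str:
    """Rewrite one file's source text."""
    return OLD.sub(NEW, source)

def convert_tree(root: Path) -> int:
    """Rewrite every .py file under root in one pass; return files changed."""
    changed = 0
    for path in root.rglob("*.py"):
        old = path.read_text()
        new = convert(old)
        if new != old:
            path.write_text(new)
            changed += 1
    return changed

print(convert("cursor = get_conn().cursor()"))
```

The point is the "everywhere at once" property: after one run, there is exactly one convention in the tree, instead of a new layer coexisting with the old one.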
But understand that there are probably better things to do with your time. You can pick out pieces of the legacy app and try to understand them well enough to rubber-duck debug them without having the source code in front of you. Once you have that then really, you don't need to be a hero. You're already a hero because you know the system better than anyone. You can focus on other things. Maybe use your new-found credibility to build some consensus among your peers and managers. Then when you make your case to replace the app, it won't just be you.
I feel like this is very sound advice, but I'm not sure I can really accept it fully.
I'm a fairly green developer (3 years professionally) working on a C# application that suffers from design schizophrenia as detailed in the article. I'm more or less the sole developer assigned to it now.
Although I am new, I feel very strongly about my 'pride as an engineer' - applying sound design, knowledge, and rigor to the best of my abilities. So it discourages me greatly when I hear a lot of "Uhh, that was a long time ago...", "We had tough deadlines...", "Well, developer 'X' and me disagreed about that..." from the former maintainers of the code.
I've taken to a rash practice of just tearing out tightly coupled, untestable, duplicated, and poorly designed code by the roots. I feel like I am somehow passing a 'holier-than-thou' judgment on the work of my predecessors. But due to its tight coupling, I can't have much confidence in even seemingly simple changes, since they always seem to come back with a wagon of defects. The new code I write surely isn't perfect and has its own defects that come back to me, but I feel much better about the code in many ways, and I am always open to criticism and discussion of my design choices. If anyone would ever give them serious study...
I have this gut feeling that if I leave the code I touch messy, I'm not doing my job, and the next poor soul to come along behind me will lose weekends trying to clean up an even deeper mess. I feel judgmental even voicing these thoughts.
> If you're going to refactor, don't refactor in the direction of new. Refactor in the direction of coherency.
1000x this. And making things coherent will make it a lot easier to put the new hotness in, too. Add the new hotness in only when you can do it in a coherent way. If you can't do it in a coherent way, you might need to refactor until you can get there.
You might never get there in a legacy codebase, but if everyone took this attitude from the start, you might be less likely to end up there.
You can put new things in and change architecture, but you always have to do so coherently -- which does mean you can put in fewer new things, and sometimes not the new things you want to put in as soon as you want to put them in.
And oh, yes, so very much: "don't treat organizational problems as technical ones." Without understanding the organizational dysfunction at the root of your technical issues (it's almost always there), you're doomed. Of course, then you've got to figure out what to do about the organizational problems (or how to operate successfully in their shadow), which is not what you find the most fun part of your job, but that's how it is.
While the example shows five different people bringing in four different ways of doing things, I was sad to recognize a lot of my own work in that story. I've been the sole developer on a smallish web site for five years.
I've personally switched patterns over the years when adding features, always telling myself that I'll go back and refactor the earlier work.
I've seen this happen so many times, it feels like the norm. I guess it's inevitable, though; it's more interesting to work on something "new" and "cool" than to adjust your mindset to years-old design decisions.
Would be interesting to know whether there is a relationship between years of programming experience and how likely you are to inject the latest "cool" design pattern or framework into code. Personally I have become far more cautious about this stuff. I'd rather adapt what is already present than subject anyone else to an abstract monstrosity I just cooked up after reading the latest Hacker News post on the latest hype.
I think it's more a matter of programmers, as they mature and learn, overly disparaging old code and idioms. It's not the new orthodoxy...
I've seen it with RPC calls (hand-coded HTTP -> Hessian -> Protobufs), serialization formats (XML -> JSON -> Protobufs/BSON), ad nauseam.
I think this kind of change is the natural state of multi-year development efforts. On a 10-15 year old code base, you'll see all sorts of things like this cropping up, and it's incredibly hard to steer clear of them.
What are the alternatives? Wholesale replacement of all uses? Wasteful in resources! Not introducing new technologies? Offends the sensibilities of the devs...
The flipside to this is the older technologies start to become end-of-lifed or unsupported by the vendor (like .NET Remoting or .NET 1.1). In that case, there seems to be some level of continual need to push the application forward, but of course we want to avoid the kind of inconsistent layering described here. It's a tough problem.
The easy answer of "regular rewrites every 5-7 years" doesn't sit well with paying customers or management.
The best project I've ever worked on was like this. I don't think it's an anti-pattern; if anything I'd see it as a positive sign, as it indicates a project that's under continued development and willing to investigate new technology.
Just as it's vital to accept that business requirements will change during a project, and structure your methodology to accommodate this, the same is true of technical considerations. And the same solution applies, viz. Agile: break up any "redesign" into small steps that can deliver concrete benefits inside a two-week period. Don't treat the "transitional architecture" as a temporary thing that doesn't need to be good, doesn't need documenting etc.; like with a big database migration, you make sure that at every stage it's something that you can develop under, and you know what you'd do if you had to abandon the plan at any stage.
If you try to keep a codebase on 2003 tech forever, you're storing up trouble for later; devs will become harder to find and more expensive to hire, and eventually you'll be forced into a Big Rewrite, and we know how those turn out. Like with deployment, better to embrace Continuous Re-Architecting. If it hurts, you're not doing it often enough.
> Look at Stack-Overflow ... they also use only static methods for performance, we should do the same ... Gordy had dismissed Ina’s love of unit testing. It’s hard to unit test code written mostly with static methods.
I also happen to like static methods quite a bit, precisely because I thought they were EASY to unit test: a static method, which is essentially just a function in the enclosing class's namespace, is pure input -> output. No state.
Am I digging my own code grave here? What am I missing?
Unfortunately some Java developers don't distinguish between stateful and stateless static methods and just say all static methods are bad. (But note that if the method does I/O it's stateful; you generally want I/O calls to be replaceable.)
The problem is that any code which calls a static method is hard to test. Because the thing that is static about static methods - the reason they are called 'static' methods - is that they are static-bound. Code which calls a static method, will be compiled with a hard, static dependency on that static method. You can't mock it, stub it, or dependency inject it. It's... static.
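Sketched in Python for brevity (in Java or C# the situation is worse still, because the call site is compile-time bound and can't even be monkeypatched): the problem isn't the static method itself, it's the caller's hard-wired reference to it. Passing the collaborator in gives the test a seam. All names here are invented for illustration:

```python
# A pure, stateless static-style function: trivially testable on its own.
def vat(amount: float) -> float:
    return round(amount * 0.20, 2)

# A collaborator that does I/O -- the kind of "static method" you need to replace.
def real_payment_gateway(amount: float) -> str:
    raise RuntimeError("talks to the network in production")

# Hard-wired caller: every test of this function drags in real_payment_gateway.
def checkout_hardwired(amount: float) -> str:
    return real_payment_gateway(amount + vat(amount))  # untestable offline

# Injected caller: the collaborator is a parameter, so a test can pass a fake.
def checkout(amount: float, gateway=real_payment_gateway) -> str:
    return gateway(amount + vat(amount))

charged = []
def fake_gateway(amount: float) -> str:
    charged.append(amount)
    return "ok"

assert checkout(100.0, gateway=fake_gateway) == "ok"
assert charged == [120.0]  # 100.0 + 20% VAT reached the (fake) gateway
```

So the parent comment's intuition is right for pure functions like `vat`; the trouble starts when a stateful or I/O-performing static method gets called directly from everywhere.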
This is seen on the infrastructure side at any non-startup company. Let me give you an example from ACL management.
1. First they get devices from one vendor. They set up ACLs a certain way: hardcoded IP addresses/ranges, hardcoded ports.
2. Then come devices from another vendor. This adds another layer.
3. Another vendor arrives with an ACL automation tool and tells the team to create objects/references for IP addresses/ranges/ports. Some guys follow it; some others don't.
4. You end up with a big mess of things created by many vendors, many software automation tools, manual additions, etc.
One day, some guy changes an object/reference; all of a sudden, production outage: add another layer of manual edits on top.
That's why they preach "if it aint broke, dont fix it, but add another layer on top of it".
When working with a poorly-written or unstable legacy app, there is a middle ground approach. Rather than rewrite the whole thing (often impossible), or introduce a new framework in the app, you can draw a circle around a core subsystem and rewrite just that component as a microservice. This frees you from the (often) monolithic app, gives you the freedom to pick an entirely different language, if you want, and leaves you with a part of the app that any new developer could completely wrap their head around.
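That "draw a circle around a core subsystem" move is sometimes called the strangler-fig pattern: a thin facade routes the extracted operations to the new service and everything else to the legacy code, so callers never notice the cutover. A toy sketch of the routing idea, with entirely hypothetical subsystem names:

```python
# Stand-ins for functions inside the legacy monolith.
def legacy_invoice(order_id: int) -> str:
    return f"legacy-invoice-{order_id}"

def legacy_shipping(order_id: int) -> str:
    return f"legacy-shipping-{order_id}"

# The one subsystem we circled and rewrote as a separate service
# (in reality this would be an HTTP/RPC client, possibly in another language).
def new_invoice_service(order_id: int) -> str:
    return f"new-invoice-{order_id}"

# Facade: callers keep a single entry point while the routing changes underneath.
ROUTES = {
    "invoice": new_invoice_service,   # cut over to the rewrite
    "shipping": legacy_shipping,      # still served by the monolith
}

def handle(operation: str, order_id: int) -> str:
    return ROUTES[operation](order_id)

assert handle("invoice", 7) == "new-invoice-7"
assert handle("shipping", 7) == "legacy-shipping-7"
```

Each rewritten subsystem flips one route; the legacy app shrinks one circle at a time instead of being replaced wholesale.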
A more realistic way to fight this is to make Lava layer programs less harmful by keeping documentation and testing efforts consistent throughout the project (and backfilling where previous developers didn't).
The constant narrative in the example is developers having no idea why and where things are built a certain way. With more information and verification, gradual changes and refactors can be more manageable.
Avoiding Lava layer applications is not feasible, so calling it an anti-pattern is silly.
I've seen incremental changes go really well. But in those cases there were some important forces at play. First, there was always at least one person advocating for the change; they could make improvements over time, educate other engineers, and react to feedback. Second, changes won out on merit: engineers used new abstractions because they were better, not because they were new. Sometimes abstractions didn't work, and they were expunged from the codebase.
I have experienced this personally in a couple of companies I worked for - Just like mentioned in the article, they had high staff turnover. What happens is that nobody truly understands the codebase and it requires many more employees to maintain. To make matters worse, it's also a self-perpetuating cycle. It's more a management issue than an engineering issue.
Hmm, this has been true for some projects I've worked on. When there is a stable dev lead for many, many years and he teaches others his approach, it's perfect. People tend to keep things that work well. Just don't hire those who want change. :)
There isn't, and will likely never be, any strategy to prevent the conditions the author discusses. Software is biological in nature and, to take it further, subject to the same laws as everything else, in particular entropy. There is no "Maxwell's Demon" approach to writing software. There are best practices and mitigation strategies, but those are incompatible with the business of writing software.
Further, and I think this should be more and more clear to people, _any_ retained unencrypted data is _compromised_ data.
Unfortunately, our society is on the way to being completely dependent upon centralized systems and institutions which are unaware of their limitations, and in fact we live in an age where centralization is considered a virtue, when it is instead a significant liability.
Not mutually exclusive with what you mentioned, but it can exacerbate the situation, or leave a codebase a mess even for an adept programmer.
[+] [-] nnq|11 years ago|reply
Throw away the first prototype version of a software and rewire v1.0 from scratch that you can use as a solid base for future development (for any application that is expected to have a long life - think years, not months) even if it's awesome for the users and works business-wise.
And always write a prototype/MVP first, and plan to throw it away, even if you end up using it in production and it ends up crucial for you business. Yes, you should try and build the MVP in weeks, not months, and you should already have its successor done when you're out of beta... but regardless of this, the pancakes rule will apply: the first one is always awful, and if try to stack the next ones on top of it, the whole pancakes tower will crumble. So throw away the first pancake and start from the second!
[+] [-] mreiland|11 years ago|reply
It's the result of developer who don't have the maturity to consider the long term risk of their decisions. That has nothing to do with the business leaders.
I freelance which means I see a lot of shit. In any given week I can touch/work with 10+ tech stacks across both Windows and *nix. When you do that on a regular basis it gives you a different perspective on things.
In particular, you write/design code in the same vein as the code around it. It doesn't matter what your personal values as a developer are. Consistency is more important than that. I have been known to pull people to the side and ask them "I'm about to do X, how do you normally do X in codebase Y? I want to do it exactly the way you do".
If you're going to change the way something like a DAL accesses its data you better be able to either do all of it, or you do none of it for exactly the reasons the article stated.
That is purely a technical decision, and the blame for that lies with every single developer who chose to introduce a new tech into the stack for no other reason than sensibilities.
[+] [-] mkozlows|11 years ago|reply
Background: I spent most of the last decade writing, maintaining, and managing a large enterprise app that had to evolve from its .NET 1.1 foundations to the present day. Our code base looked almost precisely like the one talked about in that article, and with the very real problems mentioned there.
But...
1. Sometimes the productivity wins of a new approach are serious enough that it's worth taking the consistency hit. I'd argue that ORMs, for instance, really are worth it compared to the quasi-code-gen ADO.NET layer. Switching to those midway through the project introduced inconsistency, but all future development done with the ORM was much faster (and more reliable) than the stuff done with the old approach.
(And yes, this is a dangerous argument, because it's easy to convince yourself that it's true all the time, if you want an excuse to use a new technology. But if applied skeptically, it really is true sometimes.)
2. Developer morale is a thing. "You can only use technology invented in 2003, none of this fancy .NET 2.0 stuff, and haha LINQ, are you kidding me?" is going to crush people's spirits, whereas "sure, let's try this new module out using ASP.NET MVC" will introduce an inconsistency into the codebase, but will keep developers more engaged and learning new things (and who knows, maybe this experiment will prove to be one of those ones that gives you a big productivity win).
You have to be careful with that, because you can't literally try out every crazy new thing, but you also can't ignore everything new in the name of consistency and expect anyone to want to work on the project.
[+] [-] kevan|11 years ago|reply
The strategy will cause a variant of the Cobol effect. You'll either have to pay developers obscene wages to put up with your consistent but antiquated codebase, or try to find developers who are willing to put up with it for normal wages. The latter probably aren't the people you want on your team.
[+] [-] rumcajz|11 years ago|reply
[+] [-] vinceguidry|11 years ago|reply
I documented my own answer to this situation here. Basically, don't treat organizational problems as technical ones. They're paying you to do a job, everything else you do is basically an ambition. First rule of ambition is to do no harm.
https://news.ycombinator.com/item?id=8768524
If you're going to refactor, don't refactor in the direction of new. Refactor in the direction of coherency. Do things that you can finish, or, that at least, don't matter that much if you don't. Like adding tests. Nothing wrong with that.
If you want to fix the legacy app, you need to make less ambitious changes, because you need to be able to finish them. Instead of changing the data types, refactor the initialization code. The initialization code runs once, data manipulation code is basically convention and you need to change it everywhere at once or not at all. Refactor it to the point where you can change it everywhere at once and then do that. Or get really good with a text editor and macro the changes.
Giles Bowkett has a great book on it. Unfuck A Monorail For Great Justice. It's Rails-specific but the ideas can work everywhere. Write code that understands code.
http://gilesbowkett.blogspot.com/2013/03/new-ebook-unfuck-mo...
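The "write code that understands code" idea can be sketched mechanically: before macro-editing a convention change everywhere at once, enumerate every call site with the language's own parser instead of grepping. A minimal Python sketch — the `legacy_parse` helper and the embedded snippet are hypothetical stand-ins for real source files:

```python
import ast

# Hypothetical legacy snippet; in practice you would read real source files.
SOURCE = """\
def load():
    raw = legacy_parse("a.cfg")
    other = legacy_parse("b.cfg")
    return raw, other
"""

def find_calls(source: str, func_name: str):
    """Return (line, column) for every direct call to func_name."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == func_name):
            hits.append((node.lineno, node.col_offset))
    return sorted(hits)

# Two call sites, so the convention change has exactly two places to touch.
print(find_calls(SOURCE, "legacy_parse"))
```

Once the list of call sites is exhaustive, "change it everywhere at once" stops being a leap of faith and becomes a checklist.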
But understand that there are probably better things to do with your time. You can pick out pieces of the legacy app and try to understand them well enough to rubber-duck debug them without having the source code in front of you. Once you have that then really, you don't need to be a hero. You're already a hero because you know the system better than anyone. You can focus on other things. Maybe use your new-found credibility to build some consensus among your peers and managers. Then when you make your case to replace the app, it won't just be you.
algorithmsRcool | 11 years ago
I'm a fairly green developer (3 years professionally) working on a C# application that suffers from design schizophrenia as detailed in the article. I'm more or less the sole developer assigned to it now.
Although I am new, I feel very strongly about my 'pride as an engineer' in my application of sound design, knowledge, and rigor to the best of my abilities. So it discourages me greatly when I hear a lot of "Uhh, that was a long time ago...", "We had tough deadlines...", "Well, developer 'X' and I disagreed about that..." from the former maintainers of the code.
I've taken to the rash practice of just tearing out tightly coupled, untestable, duplicated, and poorly designed code by the roots. I feel like I am somehow passing a 'holier-than-thou' judgment on the work of my predecessors. But due to its tight coupling I can't have much confidence in even seemingly simple changes, since they always seem to come back with a wagon of defects. The new code I write surely isn't perfect and has its own defects that come back to me, but I feel much better about the code in many ways, and I am always open to criticism and discussion of my design choices. If anyone would ever give them serious study...
I have this gut feeling that if I leave the code I touch messy, I'm not doing my job, and the next poor soul to come behind me will lose weekends trying to clean up an even deeper mess. I feel judgmental even voicing these thoughts.
jrochkind1 | 11 years ago
1000x times this. And making things coherent will make it a lot easier to put the new hotness in too. Add in the new hotness only when you can do it in a coherent way. If you can't do it in a coherent way, you might need to refactor until you can get there.
You might never get there in a legacy codebase, but if everyone took this attitude from the start, you might be less likely to end up there.
You can put new things in and change architecture, but you always have to do so coherently -- which does mean you can put in fewer new things, and sometimes not the new things you want to put in as soon as you want to put them in.
And oh, yes, so very much: "don't treat organizational problems as technical ones." Without understanding the organizational dysfunction at the root of your technical issues (it's almost always there), you're doomed. Of course, then you've got to figure out what to do about the organizational problems (or how to operate successfully in their shadow), which is probably not the most fun part of your job, but that's how it is.
_asummers | 11 years ago
This can be generalized: refactor the code to the point where what you need to do next is easy to do.
function_seven | 11 years ago
I've personally switched patterns over the years when adding features, always telling myself that I'll go back and refactor the earlier work.
jamesu | 11 years ago
Would be interesting to know if there is a relationship between years of experience in programming and how likely you are going to inject the latest "cool" design pattern / framework into code. Personally I have become far more cautious about this stuff. I'd rather adapt what is already present than subject anyone else to an abstract monstrosity I just cooked up after reading the latest hacker news post on the latest hype.
acveilleux | 11 years ago
I've seen it with RPC calls (hand-coded HTTP -> Hessian -> Protobufs), serialization languages (XML -> JSON -> Protobufs/BSON), ad nauseam.
I think this kind of change is the natural state of multi-year development efforts. On a 10-15-year-old code base, you'll see all sorts of things like that cropping up, and it's incredibly hard to steer clear of them.
What are the alternatives? Wholesale replacement of all uses? Wasteful in resources! Not introducing new technologies? Offends the sensibilities of the devs...
rumcajz | 11 years ago
That makes them unable to solve the domain problems. They just follow the spec written by a business analyst.
With no interesting problems to solve they get bored and look for "new" and "cool" technologies to make their work more palatable.
steven777400 | 11 years ago
The easy answer of "regular rewrites every 5-7 years" doesn't sit well with paying customers or management.
lmm | 11 years ago
Just as it's vital to accept that business requirements will change during a project, and structure your methodology to accommodate this, the same is true of technical considerations. And the same solution applies, viz. Agile: break up any "redesign" into small steps that can deliver concrete benefits inside a two-week period. Don't treat the "transitional architecture" as a temporary thing that doesn't need to be good, doesn't need documenting etc.; like with a big database migration, you make sure that at every stage it's something that you can develop under, and you know what you'd do if you had to abandon the plan at any stage.
If you try to keep a codebase on 2003 tech forever, you're storing up trouble for later; devs will become harder to find and more expensive to hire, and eventually you'll be forced into a Big Rewrite, and we know how those turn out. Like with deployment, better to embrace Continuous Re-Architecting. If it hurts, you're not doing it often enough.
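The "big database migration" comparison suggests a concrete shape for those small, abandonable steps: the expand/contract pattern, where each stage ships on its own and the system works if you stop at any of them. A sketch in Python, with a hypothetical user record standing in for a real table during a column rename:

```python
def full_name(user: dict) -> str:
    """Transitional reader for a column rename (expand/contract).

    Stage 1 (expand): new writes fill 'full_name' alongside the old fields.
    Stage 2 (migrate): a backfill job copies old rows into 'full_name'.
    Stage 3 (contract): once every row has 'full_name', delete this
    fallback and drop the old columns.
    Abandoning the plan after any stage still leaves a working system.
    """
    if "full_name" in user:
        return user["full_name"]
    return f"{user['first_name']} {user['last_name']}"

# Both row shapes work during the transition.
print(full_name({"full_name": "Ada Lovelace"}))
print(full_name({"first_name": "Ada", "last_name": "Lovelace"}))
```

The fallback branch is the "transitional architecture": it's documented, tested, and deliberately cheap to delete when the contract stage lands.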
bglazer | 11 years ago
> Look at Stack-Overflow ... they also use only static methods for performance, we should do the same ... Gordy had dismissed Ina’s love of unit testing. It’s hard to unit test code written mostly with static methods.
I also happen to like static methods quite a bit, precisely because I thought they were EASY to unit test: a static method is essentially just a function in the enclosing object's namespace. Input -> output. No state.
Am I digging my own code grave here? What am I missing?
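What that picture usually misses is transitive state: a genuinely pure static method is trivial to test, but a static method that reaches into a cache, a singleton, or another static call gives a test nothing to substitute, which is the objection to static-heavy code like the Stack Overflow style mentioned above. A Python sketch of the distinction (all names hypothetical):

```python
# Easy to test: a pure static-style function; output depends only on input.
def sales_tax(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

# Hard to test: the dependency is hardwired inside the call.
_RATE_CACHE = {"CA": 0.0725}  # imagine this is populated by a network call

def total_hardwired(amount: float, region: str) -> float:
    # A test must populate the real cache; nothing can be substituted.
    return amount + sales_tax(amount, _RATE_CACHE[region])

# Testable variant: the dependency is a parameter with the real default.
def total(amount: float, region: str, rates=None) -> float:
    rates = _RATE_CACHE if rates is None else rates
    return amount + sales_tax(amount, rates[region])

# A test can now inject its own rates without touching global state.
print(total(100.0, "XX", rates={"XX": 0.1}))  # 110.0
```

So the grave isn't the static method itself; it's static methods that quietly depend on other statics or shared state, because those can't be faked out one seam at a time.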
raincom | 11 years ago
1. First they get devices from one vendor. They set up ACLs in a certain way: hardcoded IP addresses/ranges, hardcoded ports.
2. Then come devices from another vendor. This adds another layer.
3. Another vendor comes with an ACL automation tool and tells the team to create objects/references for IP addresses/ranges/ports. Some guys follow it; some others don't.
4. You can see a big mess of things created by many vendors, many software automation tools, manual additions, etc.
One day, some guy changes an object/reference; all of a sudden, production outage: add another layer of manual edits on top.
That's why they preach "if it ain't broke, don't fix it, but add another layer on top of it".
oconnore | 11 years ago
The constant narrative in the example is developers having no idea why or where things were built a certain way. With more information and verification, gradual changes and refactors become much more manageable.
Avoiding lava layers entirely is not feasible, so calling them an anti-pattern is silly.
kuni-toko-tachi | 11 years ago
Further, and I think this should be more and more clear to people, _any_ retained unencrypted data is _compromised_ data.
Unfortunately, our society is on the way to being completely dependent upon centralized systems and institutions which are unaware of their limitations, and in fact we live in an age where centralization is considered a virtue, when it is instead a significant liability.
jrochkind1 | 11 years ago