"What if it changes?" is a reasonable question to ask. But every time you do you are walking a tightrope. My rule of thumb is that we look at what is in use TODAY, and then write a decent abstraction around that. If something is used once, ignore any abstractions. If it's used twice, just copy it, it's better. If it's used 3 or more times, look at writing an abstraction that suits us TODAY not for the future. Bonus points if the abstraction allows us to extend easily in the future, but nothing should be justified with a "what if".
The reason a lot of Java or C# code is written with all these abstractions is because it aids unit testing. But I've come to love just doing integration testing. I still use unit testing to test complex logic, but things like "does this struct mapper work correctly" are ignored, we'll find out from our integration tests. If our integration tests work, we've fulfilled our part of the contract, that's all we care about. Focus on writing them and making them fast and easy to run. It's virtually no different to unit tests but just 10x easier to maintain.
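A minimal sketch of the idea, with entirely hypothetical names: rather than unit-testing each internal mapper, one fast test drives the whole path end to end.

```python
# Integration-style test: exercise the whole path end to end instead of
# unit-testing every internal mapper. All names here are invented.

def parse_order(raw: str) -> dict:
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order: dict, unit_price: float) -> float:
    return order["qty"] * unit_price

def import_order(raw: str, unit_price: float) -> float:
    # The struct-mapping details inside are covered implicitly.
    return price_order(parse_order(raw), unit_price)

def test_import_order_end_to_end():
    # One test covers parsing, mapping and pricing together.
    assert import_order("widget, 3", 2.5) == 7.5

test_import_order_end_to_end()
```

If the end-to-end assertion passes, the mapper worked; if the mapper breaks, this test fails, without a dedicated test per internal function.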
> If something is used once, ignore any abstractions. If it's used twice, just copy it, it's better. If it's used 3 or more times, look at writing an abstraction...
That is a good rule of thumb, and I often follow it too. But it does take some discernment to recognize cases where something would benefit from an abstraction or some common code, even if it is only used twice.
I used to work for a company that imported airspace data from the FAA (the US Federal Aviation Administration) and other sources. The FAA has two main kinds of airspace: Class Airspace and Special Use Airspace.
The data files that describe these are rather complex, but about 90% of the format is common between the two. In particular, the geographical data is the same, and that's what takes the most code to process.
I noticed that each of these importers was about 3000 lines of C++ code and close to 1000 lines of protobuf (protocol buffer) definitions. As you may guess, about 90% of the code and protobufs were the same between the two.
It seemed clear that one was written first, and then copied and pasted and edited here and there to make the second. So when a bug had to be fixed, it had to be fixed both places.
There wasn't any good path toward refactoring this code to reduce the duplication. Most of the C++ code referenced the protobufs directly, and even if most of the data in one had the same names as in the other, you couldn't just interchange or combine them.
When I asked the author about this code duplication, they cited the same principle of "copy for two, refactor for three" that you and I approve of.
But this was a case where it was spectacularly misapplied.
> If something is used once, ignore any abstractions.
This is terrible advice.
According to this, a classic program that loads data from a file, processes it, then writes the results to another file should be a single giant main() that mixes input parsing, computation and output formatting. Assuming file formats don't change, all of those would be used only once. CS 101 style. :D
The primary reason for building abstractions is not removing redundancy (DRY) nor allowing big changes, but making things simpler to reason about.
It is way simpler to analyze a program that separates input parsing from processing from output formatting. Such separation is valuable even if you don't plan to ever change the data formats. Flexibility is just an added bonus.
If the implementation complexity (the "how") is a lot higher than the interface (the "what") then hiding such complexity behind an abstraction is likely a good idea, regardless of the number of uses or different implementations.
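The separation described above can be sketched as a toy program (names invented for illustration): each stage has a narrow interface, even though each is "used once".

```python
# Toy illustration of separating parsing, processing and formatting.

def parse(text: str) -> list[int]:
    # Input parsing: text -> numbers.
    return [int(line) for line in text.splitlines() if line.strip()]

def process(numbers: list[int]) -> int:
    # Computation: here, just a sum.
    return sum(numbers)

def format_output(total: int) -> str:
    # Output formatting: number -> report line.
    return f"total: {total}"

def main(text: str) -> str:
    # The "giant main()" shrinks to a readable three-stage pipeline.
    return format_output(process(parse(text)))

print(main("1\n2\n3\n"))  # -> total: 6
```

Each function can be read, reasoned about, and tested in isolation, even though none of them is reused anywhere.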
I would add, though, that in my experience you can often identify parts of a design that are more likely to change than others (for example, due to “known unknowns”).
I’ve used microservices to solve this problem in the past. Write a service that does what you know today, and rewrite it tomorrow when you know more. The first step helps you identify the interfaces, the second step lets you improve the logic.
In my experience this approach gives you a good trade off between minimal abstraction and maximum flexibility.
(Of course lots of people pooh-pooh microservices as adding a bunch of complexity, but that hasn’t been my experience at all - quite the opposite in fact)
> If something is used once, ignore any abstractions. If it's used twice, just copy it, it's better. If it's used 3 or more times, look at writing an abstraction
I refactor for the second time. I don't like chasing bugs in multiple places.
My rule of thumb is that there are only three quantities in the software development industry: 0, 1 and infinity. If I have more than 1 of something, I support (a reasonable approximation of) infinite quantities of that something.
Agreed, except avoid the term "abstraction". When one starts to talk about abstractions, one stops thinking.
The right word is "generalization", and that's what you are actually doing: you start with a down-to-earth, "solve the problem you've got!" approach, and then when something similar comes up you generalize your first solution.
Perhaps part of the problem is that in OO, inheritance usually promotes the opposite: you have a base class and then you specialize it. So the base class has to be "abstract" from day one, especially if you are a true follower of the Open-Closed Principle. I don't know about others, but for me abstractions are not divine revelations. I can only build an abstraction from a collection of cases that exhibit similarities. Abstracting from one real case and imaginary cases is more like "fabulation" than "abstraction".
The opposite cult is "plan to throw one away", except more than just one. Not very eco-friendly, some might say; it does not look good at all when you are used to spending days writing abstractions, writing implementations, debugging them, and testing them. That's a hassle, but at least once you are done, you can comfort yourself with the idea that you can just extend it... hopefully. Provided the new feature (that your salesman just sold without asking if you could do it, pretending they thought your product did that already) is "compatible" with your design.
The one thing people may not know is how much faster, smaller and better the simpler design is. Simple is not that easy in unexpected ways. In my experience, "future proofing" and other habitual ways of doing things can be deeply embedded in your brain. You have to hunt them down. Simplifying feels to me like playing Tetris: a new simplification idea falls down, which removes two lines, and then you can remove one more line with the next simplification, etc.
Java in particular is missing certain language features necessary for easily changing code functionality. This leads to abstractions being written into the code up front so that functionality can be added later if needed.
A specific example is getters and setters for class variables. If another class directly accesses a variable, you have to change both classes to replace direct access with methods that do additional work. In other languages (Python specifically), you can change the callee so that direct access gets delegated to specific functions, and the caller doesn't have to care about that refactor.
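In Python this refactor is invisible to callers, because a plain attribute can later be replaced with a property that runs extra code (illustrative class, not from the original):

```python
class Account:
    def __init__(self) -> None:
        self._balance = 0

    @property
    def balance(self) -> int:
        # Extra work can be added here later; callers never notice.
        return self._balance

    @balance.setter
    def balance(self, value: int) -> None:
        # Validation added after the fact, with no caller changes.
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

a = Account()
a.balance = 10       # still reads like direct attribute access
print(a.balance)     # -> 10
```

The caller's syntax never changes, so there is no need to pre-emptively write getters and setters "just in case".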
> If something is used once, ignore any abstractions. If it's used twice, just copy it, it's better.
That is just as bad as a general rule as "What if it ever changes, we need to abstract over it!". As always: it depends. If the abstraction to build is very simple, like making a magic number a named variable that is threaded through some function calls, making things more readable at the same time, then I would rather do that than copy the code and risk introducing bugs in the future by only updating one place. If the abstraction requires me to introduce 2 new design patterns to the code, which are only used in this one case ... well, yes, I would rather make a new function or object or class or what have you. Or I would think about my overall design and try to find a better one.
Generally, if one finds oneself in a situation where one seems to be nudged towards duplicating anything, one should think about the general approach to the problem and whether the approach and the design of the solution are good. One should ask oneself: Why is it that I cannot reuse part of my program? Why do I have to make a copy to implement this feature? What is the changing aspect inside the copy? These questions will often lead to a better design, which might avoid further abstraction for the feature in question and might reflect reality better, or even more simply.
This is similar in a way to starting to program inside configuration files (only possible in some formats). Generally it should not be done and a declarative description of the configuration should be found, on top of which a program can make appropriate decisions.
> If something is used once, ignore any abstractions. If it's used twice, just copy it, it's better. If it's used 3 or more times, look at writing an abstraction...
As others have said, this is a good rule of thumb in many cases because finding good abstractions is hard and so we often achieve code re-use through bad abstractions.
But really good abstractions add clarity to the code.
And thus, a good abstraction may be worth using even when there are only two instances of something, or just one.
If an abstraction causes a loss of clarity, developers should consider whether they can structure it better.
When I'm asked "what if it changes?", I usually answer with something like "we'll solve it when, and if, it happens". I'm a fan of solving the task at hand, not more, not less. If I know for sure that we're going to add feature X in a future version, sure, I'll prepare my code for its addition in advance. But if I don't know for certain whether something will happen, I act as if it won't. It's fine to refactor your code as the problem it solves evolves. You can't predict the future, and if you try, you'll have to be able to deal with mispredictions too.
Not to mention most unit tests are utterly useless in reality and test things we know to be true (1 + 1 -level nonsense), not real edge cases.
The logic that usually gets ignored in unit tests is the logic that actually needs to be tested, but it gets skipped because it is too difficult and might involve a few trips to the database, which makes it tricky (in some scenarios you need valid data to get a valid test result, but you cannot just grab a copy of production data to run some tests).
And then there is the problem of testing-related code, packages and artifacts being deployed to production, which is really gross in my mind and bloats everything further.
A team I've worked on resorted to building actual endpoints to trigger test code that lives alongside the other normal code (basically not a testing framework), so that they could trigger tests and "prove the system works" by testing against production data at runtime.
"Copy code if it's used twice" is terrible advice. You are creating a landmine for future maintainers of your code (often yourself, of course). Someone will inevitably change only one of the two versions at some point in the future and then you're going to have to rely on tests to catch the issue - except that your tests will probably also reflect the duplication and you'll also forget to change the 2nd test.
The only possible justification for duplicating code would be that creating an appropriate abstraction is harder. Given that there are generally economies of testing when you factor out common code, that's usually just not true.
It is important to look carefully into the functional context where the abstraction is used.
If you are working in, for example, System Integration, Data Integration, ETL and so on, not using a canonical format from the beginning will get you into almost combinatorial growth in the number of mappings between sources and targets (every source mapped directly to every target).
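The arithmetic behind this: with N sources and M targets, direct pairwise mappings need N * M converters, while a canonical intermediate format needs only N + M. A toy sketch with invented formats:

```python
# Toy sketch: every source maps into one canonical record, and every
# target maps out of it. All formats here are invented for illustration.

def from_csv_row(row: list[str]) -> dict:
    return {"id": row[0], "amount": float(row[1])}

def from_legacy(rec: dict) -> dict:
    # Legacy system stores amounts in cents.
    return {"id": rec["ID"], "amount": rec["AMT"] / 100}

def to_json_obj(canonical: dict) -> dict:
    return {"identifier": canonical["id"], "value": canonical["amount"]}

def to_report_line(canonical: dict) -> str:
    return f'{canonical["id"]}: {canonical["amount"]:.2f}'

# 2 sources + 2 targets = 4 mappings. Pairwise, 10 sources and 10
# targets would need 100 converters; via the canonical format, only 20.
print(to_report_line(from_legacy({"ID": "a1", "AMT": 250})))  # -> a1: 2.50
```

Adding a new source or target then means writing one new mapping, not one per counterpart.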
I think the test pyramid still has legs. Write both.
I do agree a lot of abstractions in C#/Java seem to be testing-implementation stuff leaking into the abstraction layer. A lot of inversion of control in these languages seems to exist purely to allow unit testing, which is kind of crazy.
Personally I prefer the "write everything in as functional a style as possible, then you'll need less IoC/DI". This can be done in C# and Java too, especially the modern versions.
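One way this looks in practice (a hypothetical sketch, not from the thread): pass the dependency as a plain function argument instead of injecting an interface through a container.

```python
from datetime import datetime
from typing import Callable

# Instead of an injected Clock interface, take the clock as a function.
def greeting(now: Callable[[], datetime]) -> str:
    return "good morning" if now().hour < 12 else "good afternoon"

# Production code passes the real clock...
print(greeting(datetime.now))

# ...and tests pass a stub, with no DI container or mocking framework.
assert greeting(lambda: datetime(2024, 1, 1, 9, 0)) == "good morning"
assert greeting(lambda: datetime(2024, 1, 1, 15, 0)) == "good afternoon"
```

The same pattern works in modern C# and Java with lambdas and functional interfaces; the "seam" for testing is just a parameter.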
I was handed a little online customer-service chat application at a previous job. It had been written by someone I'd put at a similar skill level to mine, but with different personality traits. One of his traits was to code to the spec and not consider "what if it changes".
This online chat had two functionalities: chat with a worker, and leave a message for a worker, with suggestions as to what to look at in response. No connection between these two functionalities was specified, and so my friend had written it without one; it was difficult, without doing a full rewrite, to get state information from one part of the application to another (this was written in jQuery).
Anyway, 6+ months down the line it got respecified: now it needed shared state between the two parts of the application, which meant either a significant rewrite or hacks, so hacks were chosen. Ugly hacks, but they worked (I think ugly hacks were definitely the correct choice here, because the chat application was almost completely scrapped a year later in favor of a bot).
After I was done I asked, "But why write it like that? It was specified that no state was needed between the two parts." "Yeah, but it should be obvious that that was going to change: they would keep wanting to add functionality to it and probably share state between the two communication channels."
tldr: there are some potential changes that seem more likely than others, and the architecture should take those potential changes into consideration.
Yep, I'm dealing with a code base right now that's like this. Devs do this for job security. Making the thing incomprehensible to others guarantees you can't be fired, right? It also guarantees a promotion right up to team lead or dev manager.
Another note: it's not about what the best SOLID design is, it's about what that original dev thinks is the best SOLID design. SOLID in itself is loose enough that you can have designs that vary massively but are still technically SOLID.
I haven't seen anyone else point out: One of the ways you can often identify really experienced programmers is by them being able to pretty accurately separate thing-that-probably-will-change from things-that-probably-won't.
Finding a good way to balance YAGNI with this-will-definitely-change-in-a-few-months-because-it-always-does is incredibly hard, and I've really appreciated working with engineers who make that prediction correctly.
I'd say it goes even beyond that into being able to separate the hypothetical future problems that will be a minor irritation from the ones that will wreck your whole month.
As you get to learn more failure modes for software, you start to realize you can't plug all the holes in the dyke, not even if you had absolute control of every hand on your team. You can stop four problems, or you can hedge against twenty. The problem is that hedging is way harder to explain to people. It's how we ended up with unsatisfying concepts like 'code smells'. It's not evil, it might not even be that bad, but... don't do it anyway.
I think there is also some very low-hanging fruit that doesn't take more effort but makes something way more future-proof. Experienced engineers can identify it, and most of this low-hanging fruit seems to be DB-design related.
> Finding a good way to balance YAGNI with this-will-definitely-change-in-a-few-months-because-it-always-does is incredibly hard, and I've really appreciated working with engineers who make that prediction correctly.
Is this a generalizable skill? To me, it feels like changes are more often driven by shifting business requirements and ephemeral victories of internal political battles rather than sound technical reasons.
Since OP works at Amazon, he might be familiar with the mental model of "one-way and two-way doors". A one-way door is a decision that is impossible or very difficult to undo or change. A two-way door is easy to change. The idea is to spend most of your energy on one-way-door decisions and little on two-way doors. This acts as a remedy to things like bikeshedding. If something is easy to change, just go ahead and do it!
The relevance here is that we can apply this concept in reverse. If we make something easy to change, it is close to a two-way door. Hence we reduce the time we need to spend on its design, consideration, etc.
Personally I like to write code that is more on the flexible side, to increase my optionality. I can then iterate faster, throwing things at the wall and changing my mind as needed. Of course this flexibility doesn't come for free. Overengineering and the cost of carrying are real, so apply your best judgement.
"One and two-way doors" is a nice way to phrase it. I use a similar heuristic to figure out where to spend more effort designing things upfront. For most web apps, the one-way door is usually going to be the database schema - data migrations are trickier to do once you have to deal with real data.
The other big class of "one-way" decisions are with regards to code that live in environments you don't control, e.g. mobile and desktop apps.
One tip I would offer is when building something new, you should try and delay making one-way decisions as long as you can, until you have a clearer picture of how things should work.
This is probably a bit of an aside to the implied problem at hand but.. considering this from the React side of my company's codebase, I wish this question was at least kept softly in mind when designing components. I frequently encounter components that were very clearly created as "one-shot"s that are then unfortunately extended by piling on more props and conditional behavior by the next developers who need something like the current component but ever so slightly different.
Often the solution initially would have been to separate out the presentation side of the component from the behavioral wrapper that chooses what data needs to be shown / what actions are performed by interactions. By the time the component arrives at my lap (because I too need something same same but different), however, it has become a monstrosity that can take a long time to disentangle via ADD (anger driven development).
I think asking oneself a simple question such as "how would someone make this search box work with a different data source?" would probably result in components that are decomposed into simpler, smaller parts that allow for much easier reuse and adaptation.
On the flip side, I'm also of the belief that the second developer to touch a component is necessarily better equipped to answer that question, so the onus should probably be on them to make the proper generalizing changes.
(I'm still trying to figure out how to write a document that expresses this idea more concretely to my coworkers because it often isn't quite this simple..)
> On the flip side, I'm also of the belief that the second developer to touch a component is necessarily better equipped to answer that question, so the onus should probably be on them to make the proper generalizing changes.
This.
Given a long enough timeline, pretty much all abstractions fail. People are too timid to replace them when they do, and given enough churn, the code gets out of hand.
I think there are some quick wins that don't take too much dev time but make components reusable, or at least easy to make reusable in the future.
- Like you've already said, prefer splitting components into purely presentational components ("dumb components") and components with some logic ("smart components")
- If you are using design-system components, make sure to pass on their props, e.g. if you are using Chakra, pass BoxProps to <Box>.
- Try to split out logic into hooks. These can be very specific to the current use case.
These aren't hard things to do, but they let you quickly create a generic component by changing only the smart component or hooks, or reuse only the dumb components with another specific hook if the use case is a bit different.
Yes! This is such a great summary of the pains of front-end, although I'm sure these pains transcend any one arm of software development. What laypeople may see as a simple search box is probably a monstrosity of spaghetti code that was written by a single person under a surprise deadline and interfaces with 3 different APIs (recent suggestions, search-on-type quick results, full search...). And don't forget the many different states (focus, active, disabled, loading) necessary to consider and to build styling around. Everything is tightly coupled and semantically horrific, because that's what the designers and managers demand for this year's particular flavor of the product's design, under whatever deadline has been sprung on those involved in actually building the thing.
I think the interactive nature of these things and years of 'training' that the average computer user has endured to expect how a search box behaves tends to hide the hidden complexity lurking in UI elements everywhere. Another great example is a <select> element.
Most abstractions will accept one layer of ugly hacks for situations they were never meant to deal with. I'd recommend waiting until the second layer of hacks starts to form, then refactoring with what you've learned, since that second layer of hacks is when things start to really fall apart.
Good code rarely needs to change because it's complete. It's meant to be built on top of, rather than modified for every new consumer. Think standard libraries. There is no reason for the linked list module to ever change unless it's for bug fixes or performance improvements.
Business logic needs to change all the time, because businesses are always changing. This is why we separate it out cleanly, so it can change easily.
Know what type of code you're writing so you can plan and design appropriately.
I am mostly a C++ developer, but I have been on some Java projects recently, and I am a bit shocked by the "what if it changes?" culture. Lots of abstraction, weak coupling, design patterns, etc. It looks like a cult of the Gang of Four.
Of course, it is not exclusive to Java, and these patterns are here for a reason, but I have a feeling that Java developers tend to overdo it.
I think the most important mind shift is from "let's make this extendable by plugins/scriptable so we can modify it while it's live" to "if requirements change, let's just change the source code and redeploy".
I also disagree with the SOLID principles. KISS is more important than adding extra code and sacrificing performance to allow extension without touching the original source files. Unless your goal is explicitly that.
You're trying to write the simplest, most straight-forward encoding of the solution. If you can avoid duplication and make the code read well, you're golden.
My only criticism of this piece, is that it's so dry and well articulated, some might not realize it's satire.
There is some conversation here about the number of instances something has to happen before you should abstract it, which is a handy rule of thumb. You should also consider the tradeoff of complexity: sometimes, even if you have 5, 10, or 15 snippets of code that are almost the same, you still don't need to abstract them, because the differences are not complex to manage, but an abstraction would be.
There’s a lot of value to being able to command-click a function or method call and jump to one single definition. This substantially reduces friction when reading/understanding code or change sets.
One of the best things about languages with a dynamic runtime, such as JavaScript (and TypeScript, which compiles to it), is that you can avoid the interface/implementation duality while still being able to mock or test code, by using test-only runtime mutation of class instances or modules.
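Python supports the same trick through runtime patching. A minimal sketch using the standard library's unittest.mock (the class and method names here are invented for illustration):

```python
from unittest.mock import patch

# A class whose real method would hit the network. (Invented example.)
class PaymentGateway:
    def charge(self, amount: float) -> str:
        raise RuntimeError("would call a real payment service")

def checkout(gateway: PaymentGateway, amount: float) -> str:
    return gateway.charge(amount)

# No interface/implementation split needed: patch the method at runtime,
# only for the duration of the test block.
with patch.object(PaymentGateway, "charge", return_value="ok"):
    result = checkout(PaymentGateway(), 9.99)

print(result)  # -> ok
```

Once the `with` block exits, the original method is restored automatically, so the mutation never leaks outside the test.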
I've worked with people who would prefer a complex function to generate a list of properties from a data source over a much simpler hardcoded list, on the basis that if a new option is added it's easier. They used this pattern for things like asset classes, which admittedly did change about once every 5 years. It made me sad.
I think "what if it changes?" can also be used to argue for more concrete, simpler code that is less DRY and therefore can be rewritten or deleted with greater ease.
I wouldn't refer to overly generic code as easier to change.
> Never let anyone explore the answer to your "what if it changes?" question. The impact of such a change is irrelevant! For the question to retain its power, fear must live in the imagination. The change's impact must be unstated, horrible, and so fraught with Lovecraftian terrors that only your approach can defend against it.
I see this in a lot of contexts at work, not just defensive coding. Red tape that slows us down a lot, supposedly protecting us against unimaginably horrible things which no one can seem to articulate under questioning.
I mean, look how red that tape is! It must be protecting us against something pretty dangerous. Red means danger, right?
Say you’re piloting a sailboat downwind and there’s an obstacle between you and your destination. You will need to plot a course to circumvent the obstacle. You have a choice to turn the boat toward the wind (edit: not through the wind, just toward it) to pass the obstacle on one side, or away from the wind to pass on the other side. One of these two options retains optionality at little or no cost, and that’s likely the better choice. Likewise in software, look for zero-cost opportunities to retain optionality (e.g. free ways to defer decisions till later).
This is a bit sophisticated to teach junior developers, so we just teach them “consider the implications of your design decisions with regard to supporting future needs, including those not yet known.” Yes, you can certainly over-index on this dimension, but that doesn’t make it a useless or necessarily harmful consideration. (Not implying the article disagrees with this; it does appear to be satire)
The idea should be that it probably will change, and to prepare for that you need to write it in such a way that if it does change, you have to update the code in only one or two places, because you haven't scattered knowledge of that detail all over the code.
I don't like this author's approach. Always ask the question, but never add complexity because you assume you know the answer. If you have "multiple layers of indirection" that are "unused" because of paranoia that you might have to refactor your code, then you're doing it wrong. Write code such that you never have to be afraid to refactor it. That's not the same as writing code that never has to be refactored. In fact, refactor often and liberally. And if things break when you refactor, then that code is bad code. But don't add worthless layers of indirection because you write code that can't be tested, is full of side effects, and requires entire components to be rewritten from scratch.
The top commenter says good code rarely needs to be changed. I think that's foolish. Good code is constantly changed, because it's good enough that it can be.
My personal bugaboo is worrying about whether the size of `int` will change. It's not going to change. It's 32 bits. 25 years ago was the end of 16 bits. 25 years ago.
Even if you do want to target real mode DOS, good luck getting your modern app to fit in 64K. Heck, just upper-casing a Unicode string takes 640K.
[+] [-] Philip-J-Fry|3 years ago|reply
The reason a lot of Java or C# code is written with all these abstractions is because it aids unit testing. But I've come to love just doing integration testing. I still use unit testing to test complex logic, but things like "does this struct mapper work correctly" are ignored, we'll find out from our integration tests. If our integration tests work, we've fulfilled our part of the contract, that's all we care about. Focus on writing them and making them fast and easy to run. It's virtually no different to unit tests but just 10x easier to maintain.
[+] [-] Stratoscope|3 years ago|reply
That is a good rule of thumb, and I often follow it too. But it does take some discernment to recognize cases where something would benefit from an abstraction or some common code, even if it is only used twice.
I used to work for a company that imported airspace data from the FAA (the US Federal Aviation Administration) and other sources. The FAA has two main kinds of airspace: Class Airspace and Special Use Airspace.
The data files that describe these are rather complex, but about 90% of the format is common between the two. In particular, the geographical data is the same, and that's what takes the most code to process.
I noticed that each of these importers was about 3000 lines of C++ code and close to 1000 lines of protobuf (protocol buffer) definitions. As you may guess, about 90% of the code and protobufs were the same between the two.
It seemed clear that one was written first, and then copied and pasted and edited here and there to make the second. So when a bug had to be fixed, it had to be fixed both places.
There wasn't any good path toward refactoring this code to reduce the duplication. Most of the C++ code referenced the protobufs directly, and even if most of the data in one had the same names as in the other, you couldn't just interchange or combine them.
When I asked the author about this code duplication, they cited the same principle of "copy for two, refactor for three" that you and I approve of.
But this was a case where it was spectacularly misapplied.
[+] [-] pkolaczk|3 years ago|reply
This is a terrible advice. According to this, a classic program that loads data from a file, processes it, then writes the results to another file should be a single giant main() that mixes input parsing, computation and output formatting. Assuming file formats don't change, all of those would be used only once. CS 101 style. :D
The primary reason for building abstractions is not removing redundancy (DRY) nor allowing big changes, but making things simpler to reason about.
It is way simpler to analyze a program that separates input parsing from processing from output formatting. Such separation is valuable even if you don't plan to ever change the data formats. Flexibility is just added bonus.
If the implementation complexity (the "how") is a lot higher than the interface (the "what") then hiding such complexity behind an abstraction is likely a good idea, regardless of the number of uses or different implementations.
[+] [-] doctor_eval|3 years ago|reply
I would add, though, that in my experience you can often identity parts of a design that are more likely to change than others (for example, due to “known unknowns”).
I’ve used microservices to solve this problem in the past. Write a service that does what you know today, and rewrite it tomorrow when you know more. The first step helps you identify the interfaces, the second step lets you improve the logic.
In my experience this approach gives you a good trade off between minimal abstraction and maximum flexibility.
(Of course lots of people pooh-pooh microservices as adding a bunch of complexity, but that hasn’t been my experience at all - quite the opposite in fact)
[+] [-] dotancohen|3 years ago|reply
My rule of thumb is that there are only three quantities in the software development industry: 0, 1 and infinity. If I have more than 1 of something, I support (a reasonable approximation of) infinite quantities of that something.
[+] [-] astrobe_|3 years ago|reply
The right word is "generalization", and that's what you are actually doing: you start with a down-to-earth, "solve the problem you've got!" approach, and then when something similar comes up you generalize your first solution.
Perhaps part of the problem is that in OO, inheritance usually promotes the opposite: you have a base class and then you specialize it. So the base class has to be "abstract" from day one, especially if you are a true follower of the Open-Closed Principle. I don't know about others, but for me abstractions are not divine revelations. I can only build an abstraction from a collection of cases that exhibit similarities. Abstracting from one real case and imaginary cases is more like "fabulation" than "abstraction".
The opposite cult is "plan to throw one away", except more than just one. Not very eco-friendly, some might say; it does not look good at all when you are used to spending days writing abstractions, writing implementations, debugging them, and testing them. That's a hassle, but at least once you are done, you can comfort yourself with the idea that you can just extend it... Hopefully. Provided the new feature (that your salesman just sold without asking if you could do it, pretending they thought your product did that already) is "compatible" with your design.
The one thing people may not know is how much faster, smaller and better the simpler design is. Simple is not that easy in unexpected ways. In my experience, "future proofing" and other habitual ways of doing things can be deeply embedded in your brain. You have to hunt them down. Simplifying feels to me like playing Tetris: a new simplification idea falls down, which removes two lines, and then you can remove one more line with the next simplification, etc.
[+] [-] ThrustVectoring|3 years ago|reply
A specific example is getters and setters for class variables. If another class directly accesses a variable, you have to change both classes to replace direct access with methods that do additional work. In other languages (Python specifically), you can change the callee so that direct access gets delegated to specific functions, and the caller doesn't have to care about that refactor.
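A minimal sketch of that Python mechanism (class and attribute names invented for illustration): callers start with direct attribute access, and a later refactor routes that same access through a property without any change on the caller's side.

```python
class Account:
    def __init__(self, balance):
        # Started life as a plain attribute: callers wrote acct.balance directly.
        self._balance = balance

    @property
    def balance(self):
        # Later refactor: direct access now runs through this getter, so we
        # can add logging or caching without touching any caller.
        return self._balance

    @balance.setter
    def balance(self, value):
        # Assignment is delegated here too, so validation can be added late.
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value


acct = Account(100)
acct.balance = 50      # caller code unchanged; the setter now validates
print(acct.balance)    # → 50
```

In Java or C#, making this change after the fact means editing every caller from `obj.balance` to `obj.getBalance()`; here the caller syntax never changes.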
[+] [-] zelphirkalt|3 years ago|reply
That is just as bad as a general rule as "What if it ever changes? We need to abstract over it!". As always: it depends. If the abstraction to build is very simple, like making a magic number a named variable that is threaded through some function calls, making things more readable at the same time, then I would rather do that than copy the code and risk introducing bugs in the future by only updating one place. If the abstraction requires me to introduce 2 new design patterns to the code, which are only used in this one case... well, yes, then I would rather make a new function or object or class or what have you. Or I would think about my overall design and try to find a better one.
Generally, if one finds oneself in a situation where one seems to be nudged towards duplicating anything, one should think about the general approach to the problem and whether the approach and the design of the solution are good. One should ask oneself: Why is it that I cannot reuse part of my program? Why do I have to make a copy to implement this feature? What is the changing aspect inside the copy? These questions will often lead to a better design, which might avoid further abstraction for the feature in question and might reflect reality better, or even more simply.
This is similar in a way to starting to program inside configuration files (only possible in some formats). Generally it should not be done and a declarative description of the configuration should be found, on top of which a program can make appropriate decisions.
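The cheap-abstraction case mentioned above (a magic number turned into a named value that is threaded through a call) might look like this; the names and the retry scenario are invented for illustration:

```python
# Hypothetical example: a retry limit that previously appeared as a bare "3"
# in two separate call sites.
MAX_RETRIES = 3  # one named value instead of scattered magic numbers

def fetch_with_retries(fetch, url, max_retries=MAX_RETRIES):
    """Thread the value through as a parameter so callers can still override it."""
    last_error = None
    for _ in range(max_retries):
        try:
            return fetch(url)
        except ConnectionError as e:
            last_error = e
    raise last_error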
[+] [-] konschubert|3 years ago|reply
As others have said, this is a good rule of thumb in many cases because finding good abstractions is hard and so we often achieve code re-use through bad abstractions.
But really good abstractions add clarity to the code.
And thus, a good abstraction may be worth using even when there are only two instances of something, or just one.
If an abstraction causes a loss of clarity, developers should try to think if they can structure it better.
EDIT: The comment below gives a good example of how a good abstraction adds clarity, while a bad abstraction takes it away: https://news.ycombinator.com/item?id=31476408
[+] [-] grishka|3 years ago|reply
[+] [-] BobbyJo|3 years ago|reply
If you write an integration test, and it fails, what's broken?
[+] [-] BatteryMountain|3 years ago|reply
The logic that usually gets ignored in unit tests is exactly the logic that needs to be tested, but it is skipped because it is too difficult and might involve a few trips to the database, which makes it tricky (in some scenarios you need valid data to get a valid test result, but you cannot just grab a copy of production data to run some tests).
And then there is the problem of testing-related code, packages and artifacts being deployed to production, which is really gross in my mind and bloats everything further.
A team I've worked on resorted to building actual endpoints to trigger test code that lives alongside other normal code (basically not a testing framework), so that they could trigger tests and "prove the system works" by testing against production data at runtime.
[+] [-] urban_winter|3 years ago|reply
The only possible justification for duplicating code would be that creating an appropriate abstraction is harder. Given that there are generally economies of testing when you factor out common code, that's usually just not true.
"Duplication is evil" is a more reliable mantra.
[+] [-] belter|3 years ago|reply
If you are looking, for example, at System Integration, Data Integration, ETL and so on, not using a canonical format from the beginning will get you into the kind of almost-exponential growth in mappings between sources and targets.
https://www.bmc.com/blogs/canonical-data-model/
https://www.enterpriseintegrationpatterns.com/CanonicalDataM...
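A sketch of the point (all formats and field names invented): with N source formats and M targets, direct mappings need N×M converters, while a canonical intermediate form needs only N+M adapters.

```python
# Each source parses to one canonical dict; each target renders from it.
def parse_csv_row(row):          # source adapter 1
    name, qty = row.split(",")
    return {"name": name, "qty": int(qty)}   # canonical form

def parse_kv(text):              # source adapter 2
    fields = dict(p.split("=") for p in text.split(";"))
    return {"name": fields["name"], "qty": int(fields["qty"])}

def to_json_like(rec):           # target adapter 1
    return '{"name": "%s", "qty": %d}' % (rec["name"], rec["qty"])

def to_tsv(rec):                 # target adapter 2
    return "%s\t%d" % (rec["name"], rec["qty"])

# 2 sources + 2 targets = 4 adapters, and any source reaches any target
# through the canonical record. With direct mappings the count is N*M,
# so the saving grows as sources and targets are added.
rec = parse_csv_row("widget,5")
print(to_tsv(rec))   # → "widget\t5"
```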
[+] [-] davedx|3 years ago|reply
I do agree that a lot of abstractions in C#/Java seem to be testing implementation details leaking into the abstraction layer. A lot of inversion of control in these languages seems to exist purely to allow unit testing, which is kind of crazy.
Personally I prefer the "write everything in as functional a style as possible, then you'll need less IoC/DI". This can be done in C# and Java too, especially the modern versions.
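One way to read that, sketched in Python rather than C#/Java (the clock example is invented): pass the dependency as a plain function argument, and the test needs no interface, container, or mocking framework.

```python
from datetime import datetime

# Instead of injecting an IClock interface via an IoC container,
# take the "dependency" as an ordinary function parameter with a default.
def greeting(now=datetime.now):
    hour = now().hour
    return "good morning" if hour < 12 else "good afternoon"

# Production call sites just use the default:
#   greeting()
# A test passes a stub; no DI framework needed:
fixed = lambda: datetime(2024, 1, 1, 9, 0)
print(greeting(now=fixed))   # → good morning
```

The "interface" here is just the function signature, which is all the flexibility the test actually required.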
[+] [-] _carbyau_|3 years ago|reply
Once is an incident. Deal with it.
Twice is a co-incident. Deal with it. But keep an eye out for it...
Third time? Ok, this needs properly sorting out.
[+] [-] bryanrasmussen|3 years ago|reply
This online chat had two functionalities: chat with a worker, and leave a message for a worker with suggestions as to what to look at in response. No connection between these two functionalities was specified, so my friend had written it without one, and it was difficult, short of a full rewrite, to get state information from one part of the application to the other (this was written in jQuery).
Anyway, 6+ months down the line it got respecified: now it needed shared state between the two parts of the application, which meant either a significant rewrite or hacks, so hacks were chosen. Ugly hacks, but they worked (I think the ugly hacks were definitely the correct choice here, because the chat application was almost completely scrapped a year later in favor of a bot).
After I was done I said, "but why write it like that? It was specified that no state was needed between the two parts." "Yeah, but it should be obvious that was going to change; they would keep wanting to add functionality to it and probably share state between the two communication channels."
tldr: there are some potential changes that seem more likely than others, and the architecture should take those potential changes into consideration.
[+] [-] rr808|3 years ago|reply
[+] [-] civilized|3 years ago|reply
You never know when you might need to change the implementation of how the "Fuzz" string is returned, so you need a FuzzStringReturner.
And you never know when you might need multiple different ways of returning "Fuzz", so you need a FuzzStringReturnerFactory.
And for SOLID it's important to separate concerns, so you want your FuzzStringReturnerFactory separate from your FuzzStringPrinterFactory.
And that barely scratches the surface of what you need!
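Spelled out (tongue firmly in cheek; every name here is invented by the parent comments' satire):

```python
from abc import ABC, abstractmethod

# The ceremony being poked fun at, in miniature.
class StringReturner(ABC):
    @abstractmethod
    def return_string(self) -> str: ...

class FuzzStringReturner(StringReturner):
    def return_string(self) -> str:
        return "Fuzz"

class FuzzStringReturnerFactory:
    """You never know when you might need a different way to make one."""
    def create(self) -> StringReturner:
        return FuzzStringReturner()

# Three types and two indirections later:
print(FuzzStringReturnerFactory().create().return_string())  # → Fuzz
# ...versus the code all of this replaces:
print("Fuzz")  # → Fuzz
```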
[+] [-] Dave3of5|3 years ago|reply
Another note: it's not what the best SOLID design is, it's what the original dev thinks the best SOLID design is. SOLID itself is loose enough that you can have designs that vary massively but are still technically SOLID.
[+] [-] Spivak|3 years ago|reply
[+] [-] isoprophlex|3 years ago|reply
[+] [-] chunkyks|3 years ago|reply
Finding a good way to balance YAGNI with this-will-definitely-change-in-a-few-months-because-it-always-does is incredibly hard, and I've really appreciated working with engineers who make that prediction correctly.
[+] [-] hinkley|3 years ago|reply
As you get to learn more failure modes for software, you start to realize you can't plug all the holes in the dyke, not if you had absolute control of every hand on your team. You can stop four problems or you can hedge against twenty. The problem is that hedging is way harder to explain to people. It's how we ended up with unsatisfying concepts like 'code smells'. It's not evil, it might not even be that bad, but... don't do it anyway.
[+] [-] creakingstairs|3 years ago|reply
[+] [-] JeremyNT|3 years ago|reply
Is this a generalizable skill? To me, it feels like changes are more often driven by shifting business requirements and ephemeral victories of internal political battles rather than sound technical reasons.
[+] [-] angarg12|3 years ago|reply
The relevance here is that we can apply this concept in reverse. If we make something easy to change, it is close to a two-way door. Hence we reduce the time we need to spend on its design, consideration, etc.
Personally I like to write code that is more on the flexible side to increase my optionality. I can then iterate faster, throwing things at the wall and changing my mind as needed. Of course this flexibility doesn't come for free. Overengineering and the cost of carrying are real, so apply your best judgement.
[+] [-] yen223|3 years ago|reply
The other big class of "one-way" decisions concerns code that lives in environments you don't control, e.g. mobile and desktop apps.
One tip I would offer is when building something new, you should try and delay making one-way decisions as long as you can, until you have a clearer picture of how things should work.
[+] [-] dgb23|3 years ago|reply
[+] [-] thatswrong0|3 years ago|reply
Often the solution initially would have been to separate out the presentation side of the component from the behavioral wrapper that chooses what data needs to be shown / what actions are performed by interactions. By the time the component arrives at my lap (because I too need something same same but different), however, it has become a monstrosity that can take a long time to disentangle via ADD (anger driven development).
I think asking oneself a simple question such as "how would someone make this search box work with a different data source?" would probably result in components that are decomposed into simpler, smaller parts that allow for much easier reuse and adaptation.
On the flip side, I'm also of the belief that the second developer to touch a component is necessarily better equipped to answer that question, so the onus should probably be on them to make the proper generalizing changes.
(I'm still trying to figure out how to write a document that expresses this idea more concretely to my coworkers because it often isn't quite this simple..)
¯\_(ツ)_/¯
[+] [-] bcrosby95|3 years ago|reply
This.
Given a long enough timeline, pretty much all abstractions fail. People are too timid to replace them when they do, and given enough churn, the code gets out of hand.
[+] [-] creakingstairs|3 years ago|reply
- Like you've already said, prefer splitting components into purely presentational components ("dumb components") and components with some logic ("smart components")
- If you are using design components, make sure to pass on those props. e.g. if you are using Chakra, pass BoxProps to <Box>.
- try to split out logic into hooks. This can be very specific to the current use case.
These aren't hard things to do, but they let you quickly create a generic component by only changing the smart component/hooks, or reuse only the dumb components with another specific hook if the use case is a bit different.
[+] [-] wildrhythms|3 years ago|reply
I think the interactive nature of these things and years of 'training' that the average computer user has endured to expect how a search box behaves tends to hide the hidden complexity lurking in UI elements everywhere. Another great example is a <select> element.
[+] [-] jtolmar|3 years ago|reply
[+] [-] aprdm|3 years ago|reply
[+] [-] jackblemming|3 years ago|reply
Business logic needs to change all the time, because businesses are always changing. This is why we separate it out cleanly, so it can change easily.
Know what type of code you're writing so you can plan and design appropriately.
[+] [-] GuB-42|3 years ago|reply
I am mostly a C++ developer, but I have been on some Java projects recently, and I am a bit shocked by the "what if it changes?" culture. Lots of abstraction, weak coupling, design patterns, etc... It looks like a cult of the Gang of Four.
Of course, it is not exclusive to Java, and these patterns are here for a reason, but I have a feeling that Java developers tend to overdo it.
[+] [-] Benjammer|3 years ago|reply
[+] [-] strictfp|3 years ago|reply
I also disagree with the SOLID principles. KISS is more important than adding extra code and sacrificing performance to allow extension without touching the original source files. Unless your goal is explicitly that.
You're trying to write the simplest, most straight-forward encoding of the solution. If you can avoid duplication and make the code read well, you're golden.
[+] [-] ehnto|3 years ago|reply
There is some conversation here about the number of instances something has to happen before you should abstract it, which is a handy rule of thumb. You should also consider the tradeoff of complexity, sometimes even if you have 5, 10, 15 snippets of code that are almost the same you still don't need to abstract it, because the differences are not complex to manage, but an abstraction would be.
[+] [-] jitl|3 years ago|reply
One of the best things about languages with a dynamic runtime, like TypeScript, is that you can avoid the interface/implementation duality while still being able to mock or test code, by using test-only runtime mutation of class instances or modules.
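The same runtime-mutation trick sketched in Python, another language where instances and modules are mutable at test time (the payment scenario and names are invented):

```python
class PaymentClient:
    def charge(self, amount):
        # Stand-in for a real network call we don't want tests to make.
        raise RuntimeError("would hit the network")

def checkout(client, amount):
    # No interface type required; any object with .charge() will do.
    return client.charge(amount)

# Test-only mutation: overwrite the method on the instance, with no
# interface/implementation split and no mocking framework.
client = PaymentClient()
client.charge = lambda amount: {"ok": True, "amount": amount}
print(checkout(client, 42))   # {'ok': True, 'amount': 42}
```

In Java or C# the equivalent usually forces an `IPaymentClient` interface into production code purely so tests can substitute it.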
[+] [-] onion2k|3 years ago|reply
[+] [-] lhnz|3 years ago|reply
I wouldn't refer to overly generic code as easier to change.
[+] [-] civilized|3 years ago|reply
I see this in a lot of contexts at work, not just defensive coding. Red tape that slows us down a lot, supposedly protecting us against unimaginably horrible things which no one can seem to articulate under questioning.
I mean, look how red that tape is! It must be protecting us against something pretty dangerous. Red means danger, right?
[+] [-] gkop|3 years ago|reply
This is a bit sophisticated to teach junior developers, so we just teach them “consider the implications of your design decisions with regard to supporting future needs, including those not yet known.” Yes, you can certainly over-index on this dimension, but that doesn’t make it a useless or necessarily harmful consideration. (Not implying the article disagrees with this; it does appear to be satire)
[+] [-] skybrian|3 years ago|reply
Often the best way to design for change is to make it easy to edit the code, test it, commit, and deploy, but not everything is a web app.
[1] https://martinfowler.com/bliki/Yagni.html
[+] [-] weatherlight|3 years ago|reply
In a business environment, write code that is meant to be extended by people other than you.
It's not mindless tyranny, it's good design.
[+] [-] not2b|3 years ago|reply
[+] [-] c3534l|3 years ago|reply
The top commenter says good code rarely needs to be changed. I think that's foolish. Good code is constantly changed, because it's good enough that it can be.
[+] [-] WalterBright|3 years ago|reply
Even if you do want to target real mode DOS, good luck getting your modern app to fit in 64K. Heck, just upper-casing a Unicode string takes 640K.