No, this is how everyone incompetent designs systems
Layers of generic APIs end up 1000x more complex than they would be if they were just coupled to the layer above
Changing requirements means tunneling data through many layers
Layers are generic, which means either you tightly couple your APIs for the above-layer's use case, or your API will limit the performance of your system
Everyone who thinks they can design systems does it this way, then they end up managing a system that runs 10x slower than it should + complaining about managers changing requirements 'at the last minute'
The point of abstraction is to limit blast radius of requirement changes.
Someone decides to rename a field in an API? You don't need to change your database schema and the hundreds of microservices on top of it. You just change the DTO object and keep the old name everywhere else. Maybe you'll migrate the old name some day, but you can do it in small steps.
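The DTO trick described above can be sketched in a few lines (the field names and the `UserDto` class are invented for illustration): the wire format keeps the old name while everything inside the service uses the new one, so the rename's blast radius is one mapping.

```python
from dataclasses import dataclass

# Hypothetical DTO: suppose the API renamed "username" to "login", but the
# wire format and downstream consumers still say "username".
# Only this mapping has to change.
@dataclass
class UserDto:
    login: str  # new name, used everywhere inside our service

    @classmethod
    def from_wire(cls, payload: dict) -> "UserDto":
        # Accept both names during the transition, preferring the old one.
        return cls(login=payload.get("username", payload.get("login", "")))

    def to_wire(self) -> dict:
        # Keep emitting the old name until every consumer has migrated.
        return {"username": self.login}

dto = UserDto.from_wire({"username": "alice"})
assert dto.login == "alice"
assert dto.to_wire() == {"username": "alice"}
```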
If your layer merely repeats another layer, why is it a layer in the first place? The point of a layer is to introduce abstraction and indirection. There's a cost and there's a gain.
Every problem can be solved by introducing another layer of indirection. Except the problem of having too many layers of indirection.
I've experienced the same. It's difficult for frontend and backend to communicate because there's a "translation layer" in between. Shipping a new feature is 100x harder than it needs to be because everything has to be translated between two different paradigms.
I feel like systems design is a bit like the Anna Karenina quote: all good software is alike, but every bad system is bad in its own way.
The Gary Bernhardt talk "Boundaries" shows an end result that is very close to The Onion Architecture presented here. And Onion is of course very close to the also popular Clean Architecture and Hexagonal Architecture. Which in the end are very close to applications built using the principles that cjohnson318 mentioned: "have well-defined interfaces, and pass simple data through them".
This is all very close to some of the principles Bertrand Meyer teaches. For example, having different modules that make decisions and different modules that perform actions. Which is close to Event Sourcing and CQRS. Which once again is close to BASIC having SUBs and FUNs.
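That decisions-vs-actions split can be sketched as a toy "functional core, imperative shell" example (all names here are invented): a pure function decides, and a thin imperative wrapper performs the action.

```python
# Functional core: a pure function that only makes a decision.
def decide_discount(total: float, is_member: bool) -> float:
    """Return the discount rate; no IO, trivially testable."""
    if is_member and total >= 100:
        return 0.10
    if total >= 250:
        return 0.05
    return 0.0

# Imperative shell: performs the action the decision calls for.
def charge(total: float, is_member: bool, gateway) -> float:
    rate = decide_discount(total, is_member)   # decision module
    amount = round(total * (1 - rate), 2)
    gateway.charge(amount)                     # action module (side effect)
    return amount

class FakeGateway:
    def __init__(self):
        self.charged = []
    def charge(self, amount):
        self.charged.append(amount)

gw = FakeGateway()
assert charge(200.0, True, gw) == 180.0
assert gw.charged == [180.0]
```

The decision function can be unit-tested exhaustively with no mocks; only the thin shell touches the outside world.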
Sure, under a microscope you will have different terminologies, and even apply different techniques and patterns, but the principles in the end are very similar. You might not have anti-corruption layers anywhere, as the sibling commenter mentioned, but that's missing the forest for the trees: the end goal and end result are virtually the same, even if the implementation is different.
In the end happy families have different socioeconomic backgrounds, different ethnicities and religions, but they're still alike. It's the bad ones that have lots of special cases and exceptions everywhere in their design or whatever it is.
No, you do not need to design systems like that. Just because there is a chance that something might change (domain logic, a library, a REST API) does not mean you need to create anti-corruption layers everywhere. They limit problems during the (possible but not certain) change, but they make code less readable, less performant, and harder to test.
Yes and no. Onion is similar to IO-less Rust or "Functional core, imperative shell" in that it goes one step further than inversion of control/monadic effects and removes all control/effects from the inner layers.
You get some benefits like being able to write very straightforward business logic, but in return you:
- have to constantly fight entropy, because every day you'll have to implement another corner case that costs 2 points if you violate the layer isolation and 10 points if you re-engineer the layers to preserve it
- have to constantly repeat yourself and create helpers, because your API layer objects, your domain layer objects, your DB layer objects all look very similar to each other.
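That repetition looks roughly like this (a deliberately minimal sketch; all class names are invented): three near-identical shapes of the same data, one per layer, plus the mapping helpers between them.

```python
from dataclasses import dataclass

# Three near-identical shapes of the same data, one per layer.
@dataclass
class UserRequest:      # API layer
    name: str
    email: str

@dataclass
class User:             # domain layer
    name: str
    email: str

@dataclass
class UserRow:          # DB layer
    name: str
    email: str

# ...and the mapping helpers you end up writing between them.
def to_domain(req: UserRequest) -> User:
    return User(name=req.name, email=req.email)

def to_row(user: User) -> UserRow:
    return UserRow(name=user.name, email=user.email)

row = to_row(to_domain(UserRequest("alice", "a@example.com")))
assert row == UserRow("alice", "a@example.com")
```

Every new field now touches three classes and two mappers, which is exactly the maintenance tax being described.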
Sometimes a transaction script (in Fowler's terminology) with basic DI scaffolding is easier both to write and maintain, especially when the domain isn't rocket science.
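For contrast, a transaction script with minimal DI might look like the sketch below (the repository, mailer, and use case are all hypothetical): one procedure per use case, with its dependencies passed in rather than hidden behind layers.

```python
# Transaction script: one procedure per use case, dependencies injected.
def register_user(name: str, email: str, repo, mailer) -> None:
    if "@" not in email:
        raise ValueError("invalid email")
    repo.save({"name": name, "email": email})  # straight to storage
    mailer.send(email, "Welcome!")             # side effect, no extra layers

# Basic DI scaffolding: swap these for real implementations in production,
# or fakes in tests.
class InMemoryRepo:
    def __init__(self):
        self.rows = []
    def save(self, row):
        self.rows.append(row)

class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, subject):
        self.sent.append((to, subject))

repo, mailer = InMemoryRepo(), FakeMailer()
register_user("alice", "a@example.com", repo, mailer)
assert repo.rows == [{"name": "alice", "email": "a@example.com"}]
assert mailer.sent == [("a@example.com", "Welcome!")]
```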
It was, 10-20 years ago. Today nobody competent does it because it doesn't scale. A lot of the good parts of onion layering still exist in more modern architectures, especially in languages still tied to the original academic OOP principles, like Java or C#, where you're likely to see interfaces for every class implementation. But as time has moved forward it's not really necessary to organize your functions inside classes, even if you're still doing heavy OOP. So today you're more likely to see the good parts of onion layering built into how you might do domain-based architecture. Even if you're doing models, services and so on, you build them for a specific business domain, where they live fully isolated from any other domain. That goes against things like DRY, but if you've ever worked on something that actually needed to scale, or something that lived for a long time, you'll know that the only real principle you have to care about is YAGNI, and that you should never, ever build abstractions until you actually need them.
Part of the reason onion still exists is that academia is still teaching what it did almost 30 years ago, that a lot of engineers were taught 30 years ago, and that a lot of code bases are simply old. The primary issue with onion layering is that it just doesn't scale, both in terms of actual compute and in terms of maintenance. That being said, a lot of the ideas and principles in onion layering are excellent and, as I mentioned, still in use even in more modern architectures. You'll likely even see parts of onion layering in things like micro-services, and I guess you could even argue that some micro-service architectures are a modern form of onion layering.
How competently a system is designed, however, often shows in how few abstractions are present. Not because abstractions are an inherent evil, but because any complexity you add is something you'll have to pay for later. Not when you create it, not a year later, but in five years, when 20 different people have touched the same lines of code a hundred times, you're very likely going to be up to your neck in technical debt. Which is why you really shouldn't try to be clever until you absolutely need to.
tomck|1 year ago
vbezhenar|1 year ago
redman25|1 year ago
whstl|1 year ago
Sankozi|1 year ago
Always analyze and decide if it is worth it.
orthoxerox|1 year ago
Quothling|1 year ago
usrbinbash|1 year ago
This should be put on the cover of every programming textbook, in very bold, very red, letters.
Right next to: "Measure before you optimize" and "Code is read 1000x more often than it is written"
efnx|1 year ago
barbariangrunge|1 year ago