tomck | 1 year ago
Layers of generic APIs end up 1000x more complex than they would need to be if they were just coupled to the layer above.
Changing requirements means tunneling data through many layers.
Layers are generic, which means you either tightly couple your API to the above layer's use case, or your API limits the performance of your system.
Everyone who thinks they can design systems does it this way, then ends up managing a system that runs 10x slower than it should + complaining about managers changing requirements 'at the last minute'.
vbezhenar | 1 year ago
The point of abstraction is to limit the blast radius of requirement changes.
Someone decides to rename a field in an API? You don't need to change your database schema and the 100500 microservices on top of it. You just change the DTO and keep the old name in the other places. Maybe you'll migrate the old name some day, but you can do it in small steps.
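As a minimal sketch of that rename scenario in Java (the `fullName` rename, the `UserDto` class, and the `Map` payload are all invented for illustration): the upstream API changes a field name, and only the DTO's mapping line changes.

```java
import java.util.Map;

// Hypothetical sketch: the upstream API renamed "name" to "fullName".
// Only this mapping changes; every downstream caller still reads the
// old field, and can migrate in small steps later.
class UserDto {
    final String name; // old field name, kept stable for callers

    UserDto(Map<String, String> upstreamPayload) {
        this.name = upstreamPayload.get("fullName"); // the one line that changed
    }
}
```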
If your layer repeats another layer, why is it a layer in the first place? The point of a layer is to introduce abstraction and indirection. There's cost and there's gain.
Every problem can be solved by introducing another layer of indirection. Except the problem of having too many layers of indirection.
tomck | 1 year ago
So let's say you have some 'User' ORM entity for a food app. Each user has a favourite food and food preferences. You have a function `List<User> getListOfUsersWithFoodPreferences(FoodPreference preference)` which queries another service for users with a given food preference.
The `User` entity has `String getName()` and `String getFavouriteFood()` methods, cool.
Some other team builds some UI on top of that, which takes a list of users and displays their names and their favourite food.
Another team in your org uses the same API call to get a list of users with the same food prefs as you, so they loop over all your food prefs + call the function multiple times.
Amazing, we've layered the system and reused it twice!
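A minimal sketch of that shared layer (the method and getter names come from the comments above; the enum values, the in-memory data, and the filtering are invented stand-ins for the remote service):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

enum FoodPreference { VEGAN, VEGETARIAN, HALAL }

class User {
    private final String name;
    private final String favouriteFood;
    private final Set<FoodPreference> preferences;

    User(String name, String favouriteFood, Set<FoodPreference> preferences) {
        this.name = name;
        this.favouriteFood = favouriteFood;
        this.preferences = preferences;
    }
    String getName() { return name; }
    String getFavouriteFood() { return favouriteFood; }
    Set<FoodPreference> getPreferences() { return preferences; }
}

class UserService {
    // In-memory stand-in for the remote user service.
    private final List<User> allUsers = List.of(
        new User("ada", "ramen", Set.of(FoodPreference.VEGAN)),
        new User("kay", "pho",   Set.of(FoodPreference.HALAL)));

    // Both teams call this: the UI reads getFavouriteFood(), while the
    // other team only reads getName() but pays for the food lookup anyway.
    List<User> getListOfUsersWithFoodPreferences(FoodPreference preference) {
        return allUsers.stream()
            .filter(u -> u.getPreferences().contains(preference))
            .collect(Collectors.toList());
    }
}
```

Both consumers see one generic entry point, which is exactly what makes the later schema change bite.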
Now, the database needs to change, because users can have multiple favourite foods, so the database gets restructured and favourite foods are now more expensive to query - they're not just in the same table row anymore.
As a result, `getListOfUsersWithFoodPreferences` runs a bit slower, because the favourite food query is more expensive.
This is fine for the UI, but the other team using this function to loop over all your food prefs now has their system running 4x slower! They didn't even need the users' favourite food!
If we're lucky, that team gets time to investigate the performance regression, and we end up with another function, `getListOfUsersWithFoodPreferencesWithoutFavouriteFoods`. Nice.
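A sketch of that second entry point (only the long method name is from the comment; the thin `UserName` view and the stubbed data are invented, and the preference filter is elided):

```java
import java.util.List;
import java.util.stream.Collectors;

class UserQueries {
    // Thin view for callers that only ever needed names.
    record UserName(String name) {}

    // Stand-in for the query result; preference filtering elided.
    private final List<String> matchingNames = List.of("ada", "kay");

    // Second entry point: same filter as before, but the query never
    // touches the now-expensive favourite-foods table.
    List<UserName> getListOfUsersWithFoodPreferencesWithoutFavouriteFoods() {
        return matchingNames.stream()
            .map(UserName::new)
            .collect(Collectors.toList());
    }
}
```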
The onion layer limited the 'blast radius' of the DB change, but only in the API - the performance of the layer changed, and that broke another team.
usrbinbash | 1 year ago
No, the point of abstraction is to make things easier to handle.
At least that is the original meaning of the term, before the OOP ideology got its hands on it. A biology textbook talks about organs before it talks about tissues before it talks about cells before it talks about enzymes. That is the meaning of abstraction: Simple interface to a complex implementation.
In OOP-World however, "abstraction", for some reason, denotes something MORE COMPLEX than the things that are abstracted. It's a kind of logic-flow-routing-layer between the actually useful components that implement the actual business logic.
And such middleware is perfectly fine ... as long as it is required. Usually it isn't, which is where YAGNI comes from.
Now, pointless abstractions are bad enough. But things get REALLY bad when we drag things that should sit together in the same component, kicking and screaming, into yet another abstraction, so that we can maybe, someday, but realistically never, do something like rename or add a field to a component. Because now we don't even have useful components any more; we have abstractions which make up components, and seeing where a component starts and ends becomes a non-trivial task.
In theory this all seems amazing, sure. It's flexible, it's OOP, it is correct according to all kinds of books written by very smart people.
In reality however, these abstractions introduce a cost, and I am not even talking about performance here, I am talking about readability and maintainability. And as it turns out, in the majority of use cases these costs far outweigh any gains from applying this methodology. Again: there is a reason YAGNI became a thing.
As someone who had the dubious pleasure of bringing several legacy Java services into the 21st century: what following these principles dogmatically usually results in is a huge, bloated, unreadable codebase, where business functionality is nearly impossible to locate, and so are the types that actually represent business objects. Because things that could be handled in 2 functions and a struct that are tightly coupled (which is okay, because they represent one unit of business logic anyway) are instead spread out between 24 different types in as many files. And not only does this make the code slow and needlessly hard to maintain, it also makes it brittle. Because when I change the wrong Base-Type, the whole oh-so-very-elegant pile of abstractions suddenly comes crashing down like a house of cards.
When "where does X happen" stops being answerable with a simple `grep` over the codebase, things have taken a wrong turn.
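What "2 functions and a struct" can look like in Java (a hypothetical invoice example, not from the thread): one small class holding one unit of business logic, trivially greppable.

```java
import java.math.BigDecimal;

// Hypothetical sketch of a tightly coupled unit of business logic:
// the data and the only two operations on it live in one place, so
// a grep for "Invoice" finds everything.
class Invoice {
    final String customerId;
    final BigDecimal net;

    Invoice(String customerId, BigDecimal net) {
        this.customerId = customerId;
        this.net = net;
    }

    // Function 1: the business rule itself.
    BigDecimal totalWithVat(BigDecimal vatRate) {
        return net.add(net.multiply(vatRate));
    }

    // Function 2: the only serialization this unit needs.
    String toCsvRow() {
        return customerId + "," + net;
    }
}
```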
seanhunter | 1 year ago
The problem is that in many (most?) systems there's no way the abstraction can possibly do this, because the abstraction that looked like a perfect fit for requirement set 1 can't know what the requirements in set 2 look like. So in my experience, what ends up happening is that people put all sorts of abstractions all over the place that seem like a good idea, and when requirement sets #2, #3, etc. come along, you end up having to change all the actual code to meet the requirements, plus all of the abstraction layers which no longer fit.
To choose a couple of many examples from my personal experience:
- One place I worked had a system the author thought was very elegant, which used virtual functions to do everything. "When we need to extend it we can just add a new set of classes which implement this interface and it will Just Work." Except when the new requirements came in, we needed to dispatch based on the type of two things, not just one. Although you can do this sort of thing in Lisp and Haskell, you can't in C++, which is what we were using. So the whole abstraction edifice cost us extra to build in the first place, performance while in use, and extra to tear down and rewrite when the actual requirements changed.
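The same wall exists in Java, so here is the problem sketched there (the `Shape`/`Collider` names are invented): virtual dispatch picks a method by the runtime type of one receiver, so behaviour that depends on two runtime types degenerates into manual case analysis.

```java
interface Shape {}
class Circle implements Shape {}
class Square implements Shape {}

class Collider {
    // Virtual dispatch would pick a method by the runtime type of ONE
    // receiver; the (a, b) pair needs manual case analysis, so every
    // new Shape means editing this method, not just adding a class.
    static String collide(Shape a, Shape b) {
        if (a instanceof Circle && b instanceof Circle) return "circle/circle";
        if (a instanceof Circle && b instanceof Square) return "circle/square";
        if (a instanceof Square && b instanceof Circle) return "square/circle";
        return "square/square";
    }
}
```

Lisp's CLOS multimethods dispatch on all argument types at once, which is what "you can do this type of thing in lisp" refers to; in C++ and Java the "just add a class" promise breaks here.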
- One place I worked allowed people to extend the system by implementing a particular java interface to make plugins. Client went nuts developing 300+ of these. When the requirements changed it was clear we needed to change this interface in a way a straight automated refactor just couldn't achieve. Cue me having to rewrite 300+ plugins from InterfaceWhichIsDefinitelyNeverGoingToChangeA format to InterfaceWhichIsHonestlyISwearThisTimeAbsolutelyNeverGoingToChangeB format. I was really happy with all the time this abstraction was saving me while doing so.
Most of the time, abstraction doesn't save you time. It may spare you cognitive overload by making certain parts of the system simpler to reason about, and that can be a valid reason to do it. But multiple layers are almost never worth it, and the idea that you can somehow see the future and know the right abstraction to prevent future pain is delusional, unless the problem space is really, really well known and understood, which is almost never the case in my experience.
redman25 | 1 year ago