ferd|3 months ago
That is: an instance of a subclass calls a method defined on a parent class, which in turn may call a method that's been overridden by the subclass (or even another sub-subclass in the hierarchy) and that one in turn may call another parent method, and so on. It can easily become a pinball of calls around the hierarchy.
Add to that the fact that "objects" have state, and each class in the hierarchy may add more state, and modify state declared on parents. Perfect combinatory explosion of state and control-flow complexity.
I've seen this scenario way too many times in projects, and the worst thing is: many developers think it's fine... and are even proud of navigating such a mess. Heck, many popular "frameworks" encourage this.
Basically: every time you modify a class, you must review the inner implementation of all other classes in the hierarchy, and call paths to ensure your change is safe. That's a horrendous way to write software, against the most basic principles of modularity and low coupling.
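That "pinball" control flow is easy to reproduce. A minimal Python sketch (hypothetical classes, purely to show the bouncing): one call to render() crosses the class boundary three times, and the parent's format_rows silently depends on state that only the subclass creates:

```python
class Report:
    def render(self):
        # control starts here in the parent...
        return self.header() + self.body()

    def header(self):
        return "REPORT\n"

    def body(self):
        # ...dispatches down to whatever subclass overrides this...
        return "(empty)\n"

    def format_rows(self):
        # ...and bounces back up here, reading state the child created
        return "\n".join(self.rows) + "\n"

class SalesReport(Report):
    def body(self):
        self.rows = ["north: 10", "south: 20"]  # state added by the subclass
        return self.format_rows()               # back up into the parent

print(SalesReport().render())
```

To verify that render() in the parent is safe, you must read SalesReport.body too, because without it format_rows would crash on a missing attribute.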
DeathArrow|3 months ago
It's a much more pleasurable and easier way to work, for me at least.
Trying to follow the flow through a gazillion objects with state changing everywhere is a nightmare and I'd rather not return to that.
OrderlyTiamat|3 months ago
This is why hierarchies should have limited depth. I'd argue some amount of "co-recursion" is to be expected: after all, the point of the child class is to reuse the parent's logic while overriding some of it.
But if the lineage goes too deep, it becomes hard to follow.
> every time you modify a class, you must review the inner implementation of all other classes in the hierarchy, and call paths to ensure your change is safe.
I'd say this is a fact of life for all pieces of code which are reused more than once. This is another reason why low coupling high cohesion is so important: if the parent method does one thing and does it well, when it needs to be changed, it probably needs to be changed for all child classes. If not, then the question arises why they're all using that same piece of code, and if this refactor shouldn't include breaking that apart into separate methods.
This problem also becomes less pressing if the test pyramid is followed properly, because that parent method should be tested in the integration tests too.
ferd|3 months ago
That's the point: You can reuse code without paying that price of inheritance. You DON'T have to expect co-recursion or shared state just for "code-reuse".
And this, I think, is the key point: behavior inheritance is NOT a good technique for code-reuse... Type-inheritance, however, IS good for abstraction, for defining boundaries, to enable polymorphism.
> I'd say this is a fact of life for all pieces of code which are reused more than once
But you want to minimize that complexity. If you call a pure function, you know it only depends on its arguments... done. If you call a method on a mutable object, you have to read its implementation line-by-line, and you have to navigate a web of possibly polymorphic calls which may even modify shared state.
> This is another reason why low coupling high cohesion is so important
Exactly. I would phrase it the other way around though: "... low coupling high cohesion is so important..." and that's precisely the reason why using inheritance of implementation for code-reuse is often a bad idea.
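A minimal sketch of that distinction (hypothetical names, just to illustrate): the same formatting logic reused by two callers through a plain pure function, with no hierarchy and no shared state to audit:

```python
# Code reuse without inheritance: the shared logic is a pure function,
# and each caller passes in exactly the data it needs.
def format_rows(rows):
    return "\n".join(rows) + "\n"

def render_sales_report(rows):
    return "REPORT\n" + format_rows(rows)

def render_inventory_report(rows):
    return "INVENTORY\n" + format_rows(rows)

print(render_sales_report(["north: 10", "south: 20"]))
```

Changing format_rows still affects both callers, but reviewing the change means reading one function and its call sites, not walking a class hierarchy.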
wseqyrku|3 months ago
What if you are actually dealing with state and control-flow complexity? I'm curious what the "ideal" way to do this would be, in your view. I am trying to implement a navigation system, stripped of interface design and all the application logic, and even at this level it can get pretty complicated.
ferd|3 months ago
Closer to the "ideal": declarative approaches, pure functions, data-oriented pipelines, logic programming.
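One way that style could look for the navigation case (a hypothetical sketch, not a full design): the navigation state is plain immutable data, and transitions are pure functions over it, so control flow is just function composition:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class NavState:
    stack: tuple  # screens currently on the navigation stack

def push(state, screen):
    # Pure function: returns a new state, never mutates the old one
    return replace(state, stack=state.stack + (screen,))

def pop(state):
    # Popping an empty stack is a no-op rather than an error
    return replace(state, stack=state.stack[:-1]) if state.stack else state

s = NavState(stack=("home",))
s = push(s, "settings")
s = pop(s)
print(s.stack)  # ('home',)
```

Because every transition only depends on its arguments, any sequence of navigation events can be replayed or tested in isolation.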
franga2000|3 months ago
On the flip side, if the author didn't want to let me do that, I really appreciate having the ability to do it anyway, even if it means tighter coupling for that one part.
movpasd|3 months ago
With interface-inheritance, each method provides two interfaces but only one single possible usage pattern: it is called by client code and implemented by a subclass.
With implementation-inheritance, suddenly, you have any of the following possibilities for how a given method is meant to be used:
(a) called by client code, implemented by subclass (as with interface-inheritance)
(b) called by client code, implemented by superclass (e.g.: template method)
(c) called by subclass, implemented by superclass (e.g.: utility methods)
(d) called by superclass, implemented by subclass (e.g.: template's helper methods)
And these cases inevitably bleed into each other. For example, default methods mix (a) and (b), and mixins frequently combine (c) and (b).
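A small mixin shows how the patterns interleave in practice (hypothetical example): to_json is pattern (b), called by client code but defined in the superclass, while the self.data() call it makes is pattern (d), resolved by the subclass:

```python
import json

class JsonMixin:
    def to_json(self):
        # Relies on a data() method this class does not define
        return json.dumps(self.data())

class Point(JsonMixin):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def data(self):
        # The "helper method" the mixin implicitly requires
        return {"x": self.x, "y": self.y}

print(Point(1, 2).to_json())  # {"x": 1, "y": 2}
```

Nothing in JsonMixin's signature documents that subclasses must supply data(); forgetting it only fails at runtime, when to_json is called.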
Because of the added complexity, you have to carefully design the relationship between the superclass, the subclass, and the client code, making sure to correctly identify which methods should have what visibility (if your language even allows for that level of granularity!). You must carefully document which methods are intended for overriding and which are intended for use by whom.
But the code structure itself in no way documents that complexity. (If we want to talk SOLID, it flies in the face of the Interface Segregation Principle). All these relationships get implicitly crammed into one class that might be better expressed explicitly. Split out the subclassing interface from the superclass and inject it so it can be delegated to -- that's basically what implementation-inheritance is syntactic sugar for anyway and now the complexity can be seen clearly laid out (and maybe mitigated with refactoring).
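That refactor can be sketched in Python (hypothetical names): the hook the subclass used to provide becomes an explicit collaborator that is injected and delegated to:

```python
class BodySource:
    """The former "subclassing interface", now split out explicitly."""
    def body(self):
        raise NotImplementedError

class SalesBody(BodySource):
    def body(self):
        return "north: 10\nsouth: 20\n"

class Report:
    def __init__(self, source: BodySource):
        self.source = source  # delegation instead of inheritance

    def render(self):
        return "REPORT\n" + self.source.body()

print(Report(SalesBody()).render())
```

The relationship between Report and its extension point is now visible in Report's constructor instead of being implicit in a class hierarchy, and each side can be tested and reused independently.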
There is a trade-off in verbosity, to be sure, especially at the call site where you might have to explicitly compose objects. But considering the system complexity as a whole, I think implementation-inheritance is rarely worth it when composition and a tiny factory function provide the same external benefit without the headache.
These are powerful tools, if used with discipline. But especially in application code interfaces change often and are rarely well-documented. It seems inevitable that if the tool is made available, it will eventually be used to get around some design problem that would have required a more in-depth refactor otherwise -- a refactor more costly in the short-term but resulting in more maintainable code.