I think that composition is absolutely better than inheritance except for one thing: boilerplate. The issue is that boilerplate is kind of important.
You don't want to litter your code with "f150.ford.car.vehicle.object.move(50, 50)". You can and should re-implement "move" so that you only have to call "f150.move(50, 50)", but that still requires boilerplate, just in the "F150" class.
Often you have a class containing all of the functionality of another class, plus a bit more. You can always use composition, but this comes up so often that you end up writing a lot of boilerplate.
You could develop some other "syntax sugar" to replace inheritance. Maybe Haskell's type-classes are better (although they also kind of use inheritance, since there are subclasses). But chances are you'll go back to something like inheritance, because it's very useful very often.
COM solves this with delegation, where objects implement only the methods they care about and delegate everything else to the aggregated type, which provides the full interface.
However, depending on which stack one is using (VB 6, .NET, MFC, ATL, WRL, WinRT), the amount of boilerplate to deal with the runtime differs.
This doesn't inherently have anything to do with inheritance. Delegation is the compositional solution to this problem, and some languages do have built-in sugar for it.
It usually looks something like:
class F150(@delegate private val underlying: Car) { ... }
class F150(private val underlying: Car) : Car by underlying { ... }
// etc
With it, your F150 can say it implements the "movable" interface just by stating which field it contains that implements it, and then you can run "f150.move" (see https://kotlinlang.org/docs/delegation.html for Kotlin's version of this).
I'd like languages to have some kind of "delegate" functionality, where you can just delegate names to point to nested names without screwing around with ownership - it would just act like a symlink. The scope of that action is limited and clear (and easy for your IDE to understand), and it's explicit that the subclass is still the "owner" of that property, which makes the whole thing a lot easier to navigate.
E.g. something like:
class MyClass:
    def __init__(self, member_class):
        self.member_class = member_class
    # Delegate one member
    delegate move member_class.position.move
    # Delegate all members
    delegate * member_class.position.*
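Real Python has no delegate statement, but the "delegate all members" case can be approximated with __getattr__, which fires only when normal attribute lookup fails. A minimal sketch, using made-up Position/F150 names to continue the example above:

```python
class Position:
    def __init__(self):
        self.x, self.y = 0, 0

    def move(self, dx, dy):
        self.x += dx
        self.y += dy

class F150:
    def __init__(self):
        self.position = Position()

    def __getattr__(self, name):
        # Only called when normal lookup fails:
        # forward the unknown name to the wrapped object.
        return getattr(self.position, name)

f150 = F150()
f150.move(50, 50)   # resolved via __getattr__, no inheritance involved
```

Delegating a single name, as in the "delegate move" line, would just be an ordinary one-line wrapper method; the boilerplate complaint is that you need one per forwarded method.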
I'd like to point out that the article isn't disagreeing with you. It's saying inheritance is a dangerous interface for other users of your code (across packages is their terminology). So, if you write a library, maybe don't design it around extending classes. This is a much milder stance than the title implies, and seems pretty reasonable to me.
That's not general [implementation] inheritance, it's just delegation. The problematic, non-compositional feature of implementation inheritance is something known as open recursion, viz. the fact that every call to an overridden method like .move(...) - importantly, even a call that's merely a private implementation detail of some code in the base class - goes through an implied dispatch step that introduces a dependency on the actual derived class instance that the base-class method is being called on. This creates the well-known "fragile base class" problem since method calls to these possibly-overridden methods are relying on fragile, unstated invariants that might be broken in the derived classes, or altered in future versions of the base class.
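The open-recursion trap is easy to reproduce. In this sketch (hypothetical EventLog/CountingLog classes, in Python for brevity), the base class's record_many happens to be implemented in terms of self.record — an internal detail the subclass author can't see — so the subclass's count is silently wrong:

```python
class EventLog:
    def __init__(self):
        self.entries = []

    def record(self, entry):
        self.entries.append(entry)

    def record_many(self, entries):
        # Private implementation detail: built on self.record(),
        # which dispatches through the *derived* class.
        for e in entries:
            self.record(e)

class CountingLog(EventLog):
    def __init__(self):
        super().__init__()
        self.count = 0

    def record(self, entry):
        self.count += 1
        super().record(entry)

    def record_many(self, entries):
        self.count += len(entries)
        super().record_many(entries)

log = CountingLog()
log.record_many(["a", "b"])
# log.count is now 4, not 2: the base method dispatched back into the
# overridden record(), double-counting every entry.
```

If the base class later rewrites record_many to append directly (a seemingly safe internal change), the count breaks in the opposite direction instead — exactly the fragile-base-class problem described above.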
For the same reason, I'm not so absolutist about DRY. Having the most elegant codebase also often means the codebase that's hardest to work on, and it's often better to clean things up afterwards once you know how things will be structured.
Go solves this problem with embedding. If a type is embedded inside a struct and has its own methods, those methods are implicitly available on the new struct.
I really enjoyed the article above, which I read many years ago (before Rust 1.0!) which discusses how Golang and Rust handle polymorphism and code-reuse without classic object inheritance. My current thinking is that software objects are a general-purpose tool, but classic object inheritance should rarely be used as it is a solution to a narrow problem—classes should be "final" by default, and if not the inheritance pattern should be completely designed up front.
Java had the misfortune to be designed at a time when OOP was the new craze and the design decision to force all code into an object hierarchy has not held up well. I'd rather use languages designed either before or after Java, where you can use objects when they are appropriate and ignore them when they aren't.
I mostly agree, but there's one place where it does make a lot of sense to keep the hierarchy open: exceptions. The ability to raise a specific error and catch it in a generic handler is very useful.
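As a quick illustration of that raise-specific/catch-generic pattern (a hypothetical StorageError family, sketched in Python):

```python
class StorageError(Exception):
    """Generic family of errors for one subsystem."""

class DiskFullError(StorageError):
    pass

def save(blob):
    # Some failing operation deep in the stack.
    raise DiskFullError("no space left on device")

try:
    save(b"payload")
except StorageError as err:
    # The generic handler still sees the specific subclass.
    caught = type(err).__name__
```

The open hierarchy is what lets callers choose their own granularity: handle DiskFullError specially, or sweep up everything under StorageError.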
Arguably, subtyping in an OO language should either be signatures/interfaces only, or you should go full-blown multiple inheritance for everything, as with the Fortress language.
I (happily) write a lot of OOP code, "inheritance is bad, use composition" is such a trite and unhelpful dogma that gets in the way of any actual discussion about where inheritance is useful.
IMO, the case where inheritance makes the most sense is when you have a set of objects polymorphically answering some question, usually with a simple answer.
class Subset
end

class Whole < Subset
  def of(items)
    items
  end
end

class Range < Subset
  def initialize(from:, to:)
    @from = from
    @to = to
  end

  def of(items)
    items[@from..@to]
  end
end

You can then pass around a Subset object anywhere (aka dependency injection) and push conditionals up the stack as far as possible. Simply saying "inheritance is bad" gets nobody anywhere.
In most languages Subset would be called a trait or interface, rather than general inheritance. You've picked an example with no fields or overridden methods, so it's impossible for it to demonstrate the shortcomings of inheritance.
OP title said "general inheritance is bad", not "generally, inheritance is bad".
And the text supports that. The "general inheritance" the author describes is not the one you've just used.
And I'm hijacking your post, sorry, but I really agree with the author on the "incidental inheritance" point. This is the worst. I lost a month to a bug caused by this kind of inheritance (a Jenkins package that tried to be cute and interfered with a Cloudbees class). I won't take a Java gig ever again. Not worth the brain damage.
The thing I don't like about passing objects around is that the state inside the object is opaque, and debugging it can be extremely frustrating, especially in something like Ruby, where some people are way too liberal with magic for my taste. My personal preference is to see immutable data structures being passed around through reasonably named functions, and that is usually good enough for me.
In such a language (e.g. Ruby), you will need test suites where languages with (strong) types use the type system to prove some level of correctness.
I used to be a fan of dynamically typed languages (Ruby), but I've changed; I prefer strongly typed languages now for anything more than quick throwaway scripts.
That approach gives me headaches to think about. Why not just have polymorphic functions?
fn subset(superset, start, end){
// superset is type inferred as long as it supports the [] operator
// logic to collect superset[start] to superset[end] into an array and return it
}
with uniform function call syntax:
[1,2,3,4,5,6].subset(1,4) == [2,3,4,5]
If you really want to reuse a subset range, you can use lambdas/closures, or in this case a simple wrapper
// in some code
fn subset1to4(superset){
return subset(superset,1,4)
}
array.subset1to4()
anotherArray.subset1to4()
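For what it's worth, this is close to what Python gives you out of the box: slicing is the polymorphic "subset" operation, and functools.partial plays the role of the simple wrapper. A sketch keeping the same inclusive-end semantics as the pseudocode above:

```python
from functools import partial

def subset(superset, start, end):
    # Anything sliceable works: list, str, tuple, ...
    # start..end is inclusive, matching the pseudocode above.
    return superset[start:end + 1]

# The "reusable wrapper" without defining a named function by hand.
subset1to4 = partial(subset, start=1, end=4)

result = subset1to4([1, 2, 3, 4, 5, 6])   # → [2, 3, 4, 5]
```

No class, no interface, no inheritance; the "type inference" the pseudocode asks for is just duck typing on the slice operator.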
The term "general inheritance" was not familiar to me as "inheritance across package structures". However, my OOP design intuition feels pretty good about that idea.
This quickly devolves into the inheritance vs. composition argument which isn't where I thought the Author wanted to go (but then sort of ended up going there). I agree with other commenters that it's an overstated idea. Inheritance is ridiculously useful in the right design structure, as is Composition. They both have a place. (Incidentally, bad Inheritance design usually looks very ugly very fast - bad Composition is often less glaring).
I find that years of designing in OOP has led me to build designs that have a goal of preventing me from making future mistakes and correctly consider implications of my code.
I find that my most immediate designs tend me towards Abstract Classes and Interfaces. While I usually get credit for "programming to the Interface" for this, that's not what usually led me there.
I like abstract methods. They (i.e. the compiler will) FORCE me to think about something if I ever decide to create another subclass of the Abstract class. The Author points out the "forget to call super" bug which is particularly nefarious and I avoid it at all costs. I can do that by providing a final concrete method which calls the abstract method. Let the subclasses implement that and never worry about super.
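That "final concrete method calling an abstract method" shape is the template-method pattern. A small Python sketch with made-up Step/PrintStep names (Python can't enforce final, so run() is final only by convention):

```python
from abc import ABC, abstractmethod

class Step(ABC):
    def run(self):
        # The one public entry point. Subclasses never override run(),
        # so they can never "forget to call super" on the setup work.
        self._setup()
        return self._execute()

    def _setup(self):
        self.ready = True

    @abstractmethod
    def _execute(self):
        ...

class PrintStep(Step):
    def _execute(self):
        return "ran with ready=%s" % self.ready

result = PrintStep().run()   # → "ran with ready=True"
```

The compiler-style forcing still works: instantiating a subclass that forgot to implement _execute raises TypeError.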
Anyway - governing inheritance across package hierarchies seems like a reasonable guideline. As for Inheritance vs. Composition, I don't favor either. When designing a class structure, I just make my best guess (as we'd all do) and find the structure quickly evolves on its own. Usually, this ends up in a blend of shallow Inheritance trees with logical composition. There are always multiple Class Structures that will work - my goal is to find a reasonable one of those.
> Inheritance is ridiculously useful in the right design structure
I’ve made very little use of inheritance since I turned my back on C++/Java a decade and change ago. Can you give some examples where you feel inheritance wins out over composition?
Inheritance is flawed, in Java, mainly because it is the only organizing principle offered, so gets shoehorned into all kinds of problems where it is a poor fit.
Inheritance is just the right thing once in a while, but Java coders are obliged to apply it well beyond its useful range.
Just because it exists doesn’t mean it has to be used beyond its intended domain. One is entirely free to create flat “hierarchies” in Java. But I agree that in hindsight, final classes as a default would be better.
Fortunately nowadays, records and sealed classes remedy this for the most part in java.
Make no mistake, designing classes to support inheritance is much harder than just declaring everything final, and in many scenarios there is no good reason to do so
Isn't the "use it properly"-argument pretty much the same arguments as those saying that real C developers don't need the safeties offered by rust, they just need to use C or C++ properly?
The whole idea of language design (in my opinion) is to reduce the opportunities for mistakes, without getting in the way (thus reducing productivity). The biggest problem with Java and C# is that they are deceptively simple. Anyone can get off the ground, and the path of least resistance initially is the path of maximum pain in the end. That's the path of making large classes, lots of mutable state, long inheritance chains and so on. The languages aren't forcing anyone to use these antipatterns, but neither are they guiding the hand of the newcomer not to do that.
I _hate_ smart asses that do "everything final by default". It's all fun and giggles until I can't mock some stupid class in some stupid library that I have no choice but to use, just because someone is high on "inheritance is bad" hype. Instead of normal mocking/stubbing I now have to use stuff like PowerMock, which does bytecode hacking, just so I can have a test.
How about you stop making decisions for me and let _me_ decide whether I want to inherit your class or not.
The final keyword is one of those places where Java shows its age to me. I agree with the overall point that inheritance is flawed, but I cannot bring myself to conclude that the use of final is the answer to the problem.
Simple example, String is final in Java. It is also immutable, and that is (mostly) irrelevant. Lots of string fields on inbound requests have validations, a simple one would be a field that contains a fixed length string. So obviously you validate that at the ingress before passing it down. Now, the question arises, should the core library be defensive and re-validate the string? Why not simply capture the subtype, TenCharacterString and parameterize methods with that?
Modern languages get this right. Subtyping is not inheritance. Inheritance is not subtyping. I should be able to subtype at zero cost, I don't need inheritance to do that, [and encapsulation is definitely not subtyping].
But Java doesn't have that. You mark something as final and you lose the ability to subtype just to eliminate the possibility of inheritance. On the other hand, to be fair to the argument against final, the real answer to my complaint is a proper type aliasing support.
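For contrast, here's what that subtype-at-the-ingress idea looks like in a language whose string type isn't final — a Python sketch with the same made-up TenCharacterString:

```python
class TenCharacterString(str):
    def __new__(cls, value):
        # Validate once, at the boundary. Downstream code that asks for a
        # TenCharacterString never needs to re-validate the length.
        if len(value) != 10:
            raise ValueError("expected exactly 10 characters")
        return super().__new__(cls, value)

code = TenCharacterString("ABCDEFGHIJ")
# It still *is* a str, so every API that takes a str accepts it unchanged.
```

In Java the equivalent must wrap instead, giving up String's interface in the process, precisely because String is final.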
The "extends" keyword gives it away that inheritance in Java was not positioned to support the restrictive cases you have in mind (Square : Rectangle, NegativeNumber : Number, TenCharacterString : String).
What issues do you see with wrapping? TenCharacterString eg. could use char[] as its backing store and implement CharSequence if you want to get it to speak a common language with String.
The author is onto something but I’m afraid it’s not explained very well. I think he’s mostly right though.
While reading it I was reminded of a design/implementation style I’ve run across several times over the years which is to find an existing class that does something similar to what you want. Then, subclass it and override methods until you get the behavior you want. And you’re done!
This leads directly to the Fragile Base Class problem. I think it also violates the Open/Closed principle. When subclassing occurs across components that are released independently (e.g., a library and an application), it either leads to continual breakage at each release, or ossification of the library. The latter happened to Java’s Swing. It got to the point where it was difficult to fix any bugs, because any “fix” would end up breaking some subclass that relied on the old behavior.
(See also Hyrum’s Law, which is more general than subclassing & inheritance.)
I have to agree with you. On the other hand, closed hierarchies can be an elegant solution to certain problems. Eg. sealed classes (and basically their single-class counterparts, final classes) avoid the mentioned problems in Java’s parlance.
One exception comes to mind though — Java’s SAMs, or in the general case, classes that more or less only wrap around a few methods intended to be overridden/implemented with clear requirements (but maybe this use case also should be restricted to interfaces?)
But the default should be to add an explicit open instead of defaulting to non-final.
I feel the flaw is in building ontologies. Static ones, at that.
There is great value in reducing type errors at runtime. It's hilariously ironic that one of the main tools we reach for seems intent on just moving them to design time.
Notably, not compile time. Design. Most failures from mistakes in ontology stall the problem out before release. (Obviously, ymmv.)
The real world rarely sticks with nice hierarchies. Variations of set theory are more powerful, but would generally require merging RDBMS with IDEs, which does deserve more R&D. Using code to manage complex sets is limiting; query languages do it more smoothly because that's what they were intended for.
I've experimented myself with "table oriented programming", but don't have time to explore all the leads I uncover and rework the problem areas. Maybe when I retire?
For example, modern CRUD stacks are really just "event handling databases" done poorly. An RDBMS would be better at managing the gazillion event snippets, if it could "talk to" the compiler properly.
The "do everything in code" mantra of the web era is a mistake. Databases are better at managing complex relationships and masses of field/UI attributes, code better at non-collection-oriented algorithms. We should use the right tool for the job. "Data annotations" in Java and C# look like JCL's mutant stepdaughter. If that's the pinnacle of CRUD, then slap me silly.
I got the same feeling reading this article that I got from reading _Design Patterns_: that the author(s) present some useful techniques to deal with the shortcomings of Java, but that most of their recommendations seem like kludgy fixes to problems that are absent or at least much less severe in better-designed languages.
I also note that, while the author does make some useful points about how to program more defensively, especially in the face of unexpected modifications to super/sub classes written by other programmers, one is inevitably beholden to at least a certain extent on the trustworthiness of code that one depends upon. (Even languages like LambdaMoo that start from the assumption that a program consists of code written by multiple mutually-untrusting programmers cannot entirely protect each against malicious subterfuge by the others.) I therefore question the value of the kind of 'hardening' the author recommends, especially when it might have unfortunate consequences on extensibility and testability.
On the "how to program more defensively" point, the author's techniques are not meant to protect against malice (you are right that against malice there is nothing to be done), but instead to protect against footguns, where innocent and reasonable changes in a module's internals might unknowingly break another module.
I believe the author's arguments are quite valid, inheritance breaks the concept of a "black box" in Object Oriented Design. Once you inherit from a class, all that class internals become an "unadvertised signature", nothing is a black box that can be transparently changed anymore, any internal change may break a subclass.
Thank you, interesting read, haven’t heard of it before.
Though I feel it would be dishonest to “blame it” on inheritance rather than on concurrency itself, when we don’t have any good solution to general concurrency as far as I know. We can only deal with it reasonably well by heavily restricting the domain space to begin with (eg. immutables, no globals/sharing).
It might be more productive if you were to outright disagree with the statement instead of simply noting a claim that isn't strongly substantiated. If you have experiences that demonstrate to you that this claim is weak I'm sure everyone else would be interested in them (I know I would be!).
I mean, I'm not sure where this 'fear' even came from?
At its core, inheritance is a special case of composition anyway (looked at from the other perspective, it's syntactic sugar over either static or dynamic delegation), so it can't really be "faster".
At any rate, there's no abstraction so powerful it can prevent a programmer from making it slow.
Language Design:
Let's start with the admission that there's a lot of cargo culting in language design. Evidence is hard to come by, and the best evidence is other languages that succeed with different choices. I remember the flame wars about how multiple inheritance would cause a language to fall apart. When will languages adopt the idea that encapsulation is not sacrosanct, since the theoretical issue with backdooring (like _method in Python, or package-level public in Go^) is merely that someone might abuse it? How about making testing a first-class concept and lumping it in there, or just using the shortcuts that have been working.
What is inheritance?
Inheritance (for any given language) is a language-supported type of class composition (https://youtu.be/eEBOvqMfPoI?t=2874), as a closure. A class is a function, and once you understand this, it opens up possibilities in how you design and test. This has nothing to do with performance, which is a non sequitur. Is Rust less performant than Java because of how it does composition? No. Perhaps there's something in the JVM that makes mixins difficult to optimize for, but that would require some evidence (there's no general branch prediction in the JVM, last I checked) and is, ultimately, at the feet of the JVM implementation. Have a look at Go and Rust.
Naturally, because a specific kind of inheritance is a language feature, it gets overused and a language, (like Java) for backward compatibility's sake, overuses it in designing new features. Looking at other languages like Javascript, Lua, Erlang, PHP, Ruby, Rust, etc. saner heads have prevailed and even Java has resorted to using "Aspects" ...which are runtime traits for additional types of composition^^.
Regarding the rest of the article...
His arguments for using final include: 1. Someone may forget to use super() and that's bad because what I want to happen trumps what they want to happen. 2. People can't subclass my class across package boundaries, because I don't know why handwaved JPMS (then covered in Should Inheritance Across Package Boundaries Ever be Used?).
His reasoning is not compelling, in the least. I can say, without hesitation, that 'final' is harmful. Adding final to a class is such a violation of the concept of reusable software, I'm surprised the FSF doesn't boycott languages. In C++ there are performance benefits. In Java it's just to put up a roadblock. This was never a good idea and makes testing impossible in some cases (where final classes are injected into final classes). This is purely because of Java's design as a language, not because of some demonstrably helpful concept, implemented poorly^^.
^If you are a language designer, always allow backdooring of accessors for testing, at the very least. This conviction that they must always apply to protect developers from each other is misplaced and has hurt the reliability of software, badly.
^^Spring has a form of tacked-on composition, which is both ugly from a conceptual standpoint and problematic from a testing standpoint. Java always seems 20+ years behind.
Object oriented programming gets a horrible rap on the basis of inheritance alone, and it's no wonder. Outside of limited domains, such as GUI programming, object inheritance makes little sense. Computer science students are right to question their introductory classes on inheritance when they teach contrived examples of dogs barking and cats meowing as an example of Mammal.makeSound() inheritance.
It's almost as if we're shoehorning in a code dispatch framework as a major language feature, except that framework sucks and we're stuck building with it. The best strategy working in languages with inheritance is to avoid it.
Duck typing or traits are better ways to represent polymorphic behaviors. We've known this for over a decade now.
Here's hoping that no new languages come with object inheritance as a concept. It's deader than NULL and shouldn't be resurrected.
> Duck typing or traits are better ways to represent polymorphic behaviors. We've known this for over a decade now.
Any objective measure of that? Because there is a catch, we can’t do what doctors can. There are no double-blind tests for language design. All we have is empirical studies and based on that, OOP languages do objectively much better. So if anything, your exceptional claim require exceptional evidence.
But otherwise I agree that these Animal hierarchies are just dumb and many definitely overuse inheritance.
armchairhacker|4 years ago
You don't want to litter your code with "f150.ford.car.vehicle.object.move(50, 50)". You can and should re-implement "move" so that you only have to call "f150.move(50, 50)", but that still requires boilerplate, just in the "F150" class.
Often you have class containing all of the functionality of another class, except a bit more functionality. You can always use composition but this happens so often you're creating a lot of boilerplate.
You could develop some other "syntax sugar" to replace inheritance. Maybe Haskell's type-classes are better (although they also kind of use inheritance, since there are subclasses). But chances are you'll go back to something like inheritance, because it's very useful very often.
pjmlp|4 years ago
However, depending on which stack one is using (VB 6, .NET, MFC, ATL, WRL, WinRT), the amount of boilerplate to deal with the runtime differs.
PhineasRex|4 years ago
tm-guimaraes|4 years ago
https://kotlinlang.org/docs/delegation.html
With it, you F150 can say it implements the "movable" interface, just buy stating which field it contains that implements it, and the you can run "f150.move"
MillenialMan|4 years ago
E.g. something like:
Then: etc.caseymarquis|4 years ago
Edit: Totally with you on boilerplate though. +1.
zozbot234|4 years ago
dgb23|4 years ago
[0] https://en.wikipedia.org/wiki/Macro_(computer_science)
rsj_hn|4 years ago
esarbe|4 years ago
https://docs.scala-lang.org/scala3/reference/other-new-featu...
w-j-w|4 years ago
nickm12|4 years ago
I really enjoyed the article above, which I read many years ago (before Rust 1.0!) which discusses how Golang and Rust handle polymorphism and code-reuse without classic object inheritance. My current thinking is that software objects are a general-purpose tool, but classic object inheritance should rarely be used as it is a solution to a narrow problem—classes should be "final" by default, and if not the inheritance pattern should be completely designed up front.
Java had the misfortune to be designed at a time when OOP was the new craze and the design decision to force all code into an object hierarchy has not held up well. I'd rather use languages designed either before or after Java, where you can use objects when they are appropriate and ignore them when they aren't.
baq|4 years ago
naasking|4 years ago
nauticacom|4 years ago
IMO, the case where inheritance makes the most sense is when you have a set of objects polymorphically answering some question, usually with a simple answer.
which is used as such: You can then pass around a Subset object anywhere (aka dependency injection) and push conditionals up the stack as far as possible.Simply saying "inheritance is bad" gets nobody anywhere.
BorisTheBrave|4 years ago
orwin|4 years ago
And the text support that. The "general inheritance" the author describe is not the one you've just used.
And i'm hijacking your post, sorry, but i really agree with the author with the "incidental inheritance" point. This is the worst. I lost a month to a bug caused by this kind of inheritance (Jenkins package that tried to be cute and interfered with a cloudbees class). I won't take a java gig ever again. Not worth the brain damage.
devoutsalsa|4 years ago
cies|4 years ago
In such a language (e.g. Ruby), you will need test suites where languages with (strong) types use the type system to prove some level of correctness.
I used to be a fan of dyn typed langs (Ruby), but I've changed, I prefer strongly typed langs now for anything more than quick throw away scripts.
bruce343434|4 years ago
unknown|4 years ago
[deleted]
zinxq|4 years ago
This quickly devolves into the inheritance vs. composition argument which isn't where I thought the Author wanted to go (but then sort of ended up going there). I agree with other commenters that it's an overstated idea. Inheritance is ridiculously useful in the right design structure, as is Composition. They both have a place. (Incidentally, bad Inheritance design usually looks very ugly very fast - bad Composition is often less glaring).
I find that years of designing in OOP has led me to build designs that have a goal of preventing me from making future mistakes and correctly consider implications of my code.
I find that my most immediate designs tend me towards Abstract Classes and Interfaces. While I usually get credit for "programming to the Interface" for this, that's not what usually led me there.
I like abstract methods. They (i.e. the compiler will) FORCE me to think about something if I ever decide to create another subclass of the Abstract class. The Author points out the "forget to call super" bug which is particularly nefarious and I avoid it at all costs. I can do that by providing a final concrete method which calls the abstract method. Let the subclasses implement that and never worry about super.
Anyway - governing inheritance across package hierarchies seems like a reasonable guideline. As for Inheritance vs. Composition, I don't favor either. When designing a class structure, I just make my best guess (as we'd all do) and find the structure quickly evolves on it's own. Usually, this ends up in a blend of shallow Inheritance trees with logical composition. There's always multiple Class Structures that will work - my goal is to find a reasonable one of those.
josephg|4 years ago
I’ve made very little use of inheritance since I turned my back on C++/Java a decade and change ago. Can you give some examples where you feel inheritance wins out over composition?
ncmncm|4 years ago
Inheritance is just the right thing once in a while, but Java coders are obliged to apply it well beyond its useful range.
kaba0|4 years ago
Fortunately nowadays, records and sealed classes remedy this for the most part in java.
ivanche|4 years ago
AmericanBlarney|4 years ago
By extension then, because it's possible to misuse Java/any programming language/computers/electricity/etc., you should never use it.
stevenalowe|4 years ago
Make no mistake, designing classes to support inheritance is much harder than just declaring everything final, and in many scenarios there is no good reason to do so
alkonaut|4 years ago
The whole idea of language design (in my opinion) is to reduce the opportunities for mistakes, without getting in the way (thus reducing productivity). The biggest problem with Java and C# is that they are deceiptively simple. Anyone can get off the ground and the path of least resistance initially is the path of maximum pain in the end. That's the path of making large classes, lots of mutable state, long inheritance chains and so on. The languages aren't forcing anyone to use these antipatterns, but neither are they guiding the hand of the newcomer not to do that.
p2t2p|4 years ago
How about you stop making decisions for me and let _me_ decide whether I want to inherit your class or not.
BlackFly|4 years ago
Simple example, String is final in Java. It is also immutable, and that is (mostly) irrelevant. Lots of string fields on inbound requests have validations, a simple one would be a field that contains a fixed length string. So obviously you validate that at the ingress before passing it down. Now, the question arises, should the core library be defensive and re-validate the string? Why not simply capture the subtype, TenCharacterString and parameterize methods with that?
Modern languages get this right. Subtyping is not inheritance. Inheritance is not subtyping. I should be able to subtype at zero cost, I don't need inheritance to do that, [and encapsulation is definitely not subtyping].
But Java doesn't have that. You mark something as final and you lose the ability to subtype just to eliminate the possibility of inheritance. On the other hand, to be fair to the argument against final, the real answer to my complaint is a proper type aliasing support.
rzzzt|4 years ago
What issues do you see with wrapping? TenCharacterString eg. could use char[] as its backing store and implement CharSequence if you want to get it to speak a common language with String.
smarks|4 years ago
While reading it I was reminded of a design/implementation style I’ve run across several times over the years which is to find an existing class that does something similar to what you want. Then, subclass it and override methods until you get the behavior you want. And you’re done!
This leads directly to the Fragile Base Class problem. I think it also violates the Open/Closed principle. When subclassing occurs across components that are released independently (e.g., a library and an application), it either leads to continual breakage at each release, or ossification of the library. The latter happened to Java’s Swing. It got to the point where it was difficult to fix any bugs, because any “fix” would end up breaking some subclass that relied on the old behavior.
(See also Hyrum’s Law, which is more general than subclassing & inheritance.)
kaba0|4 years ago
One exception comes to mind though — Java’s SAMs, or in the general case, classes that more or less only wrap around a few methods intended to be overridden/implemented with clear requirements (but maybe this use case also should be restricted to interfaces?) But the default should be to add an explicit open instead of defaulting to non-final.
taeric|4 years ago
There is great value in reducing type errors at runtime. It is hilariously ironic that one of the main tools we reach for seems intent on just moving them to design time.
Notably, not compile time: design time. Most failures caused by mistakes in the ontology stall the project out before release.
(Obviously, ymmv.)
tabtab|4 years ago
I've experimented myself with "table oriented programming", but don't have time to explore all the leads I uncover and rework the problem areas. Maybe when I retire?
For example, modern CRUD stacks are really just "event handling databases" done poorly. An RDBMS would be better at managing the gazillion event snippets, if it could "talk to" the compiler properly.
The "do everything in code" mantra of the web era is a mistake. Databases are better at managing complex relationships and masses of field/UI attributes, code better at non-collection-oriented algorithms. We should use the right tool for the job. "Data annotations" in Java and C# look like JCL's mutant stepdaughter. If that's the pinnacle of CRUD, then slap me silly.
CRConrad|4 years ago
Is that you, Bryce?
cpcallen|4 years ago
I also note that, while the author does make some useful points about how to program more defensively, especially in the face of unexpected modifications to super/sub classes written by other programmers, one is inevitably beholden, at least to a certain extent, to the trustworthiness of code that one depends upon. (Even languages like LambdaMoo that start from the assumption that a program consists of code written by multiple mutually-untrusting programmers cannot entirely protect each against malicious subterfuge by the others.) I therefore question the value of the kind of 'hardening' the author recommends, especially when it might have unfortunate consequences on extensibility and testability.
SkeuomorphicBee|4 years ago
I believe the author's arguments are quite valid: inheritance breaks the concept of a "black box" in Object Oriented Design. Once you inherit from a class, all of that class's internals become an "unadvertised signature"; nothing is a black box that can be transparently changed anymore, and any internal change may break a subclass.
jqpabc123|4 years ago
Same stuff in a new way.
menotyou|4 years ago
How about considering whether OOP might be a stupid idea in the first place?
kaba0|4 years ago
Though I feel it would be dishonest to “blame it” on inheritance rather than on concurrency itself, when we don’t have any good solution to general concurrency as far as I know. We can only deal with it reasonably well by heavily restricting the domain space to begin with (e.g. immutables, no globals/sharing).
stormbrew|4 years ago
At its core, inheritance is a special case of composition anyway (looked at from the other perspective, it's syntactic sugar over either static or dynamic delegation), so it can't really be "faster".
At any rate, there's no abstraction so powerful it can prevent a programmer from making it slow.
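The "syntactic sugar over delegation" point can be made concrete by hand-writing in Java what Kotlin's `by` clause generates (Movable/Car/F150 are the thread's hypothetical names; the method bodies are illustrative):

```java
// Polymorphism via composition plus forwarding: F150 *is* a Movable
// to callers, but the behavior lives entirely in the composed Car.
interface Movable {
    void move(int dx, int dy);
}

class Car implements Movable {
    int x, y;
    @Override public void move(int dx, int dy) { x += dx; y += dy; }
}

class F150 implements Movable {
    private final Car underlying = new Car();

    // This forwarding method is exactly the boilerplate the thread
    // complains about: one stub per delegated member, which language
    // sugar (Kotlin's "by", etc.) can generate for you.
    @Override public void move(int dx, int dy) { underlying.move(dx, dy); }

    int x() { return underlying.x; }
    int y() { return underlying.y; }
}
```

Semantically this is the same dispatch a subclass would give you, which is why inheritance can be seen as sugar over delegation rather than something intrinsically faster.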
Supermancho|4 years ago
What is inheritance? Inheritance (for any given language) is a language-supported type of class composition (https://youtu.be/eEBOvqMfPoI?t=2874), as a closure. A class is a function, and once you understand this, it opens up possibilities in how you design and test. This has nothing to do with performance, which is a non sequitur. Is Rust less performant than Java because of how it does composition? No. Perhaps there's something in the JVM that makes mixins difficult to optimize for, but that would require some evidence (there's no general branch prediction in the JVM, last I checked) and is, ultimately, at the feet of the JVM implementation. Have a look at Go and Rust.
Naturally, because a specific kind of inheritance is a language feature, it gets overused, and a language (like Java), for backward compatibility's sake, overuses it in designing new features. Looking at other languages like JavaScript, Lua, Erlang, PHP, Ruby, Rust, etc., saner heads have prevailed, and even Java has resorted to using "Aspects"... which are runtime traits for additional types of composition^^.
Regarding the rest of the article... His arguments for using final include: 1. Someone may forget to use super(), and that's bad because what I want to happen trumps what they want to happen. 2. People can't subclass my class across package boundaries, justified by some handwaving about JPMS (then covered in "Should Inheritance Across Package Boundaries Ever Be Used?"). His reasoning is not compelling in the least. I can say, without hesitation, that 'final' is harmful. Adding final to a class is such a violation of the concept of reusable software that I'm surprised the FSF doesn't boycott languages. In C++ there are performance benefits. In Java it's just there to put up a roadblock. This was never a good idea, and it makes testing impossible in some cases (where final classes are injected into final classes). This is purely because of Java's design as a language, not because of some demonstrably helpful concept implemented poorly^.
^If you are a language designer, always allow backdooring of accessors for testing, at the very least. This conviction that they must always apply to protect developers from each other is misplaced and has hurt the reliability of software, badly.
^^Spring has a form of tacked-on composition, which is both ugly from a conceptual standpoint and problematic from a testing standpoint. Java always seems 20+ years behind.
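The testability complaint about final can be sketched directly (PaymentGateway/Charger/Checkout are illustrative names invented here, not from any real library): a final collaborator cannot be replaced with a test double by subclassing, so the test seam has to be an interface instead.

```java
// A functional interface gives us a seam the final class cannot.
interface Charger {
    boolean charge(int cents);
}

// A final class: callers cannot subclass it to stub out the real work.
final class PaymentGateway implements Charger {
    @Override public boolean charge(int cents) {
        // imagine a real network call here
        return true;
    }
}

// class FakeGateway extends PaymentGateway {}  // won't compile: cannot inherit from final class

// The compositional workaround: depend on the interface, not the final class.
class Checkout {
    private final Charger charger;
    Checkout(Charger charger) { this.charger = charger; }
    boolean buy(int cents) { return charger.charge(cents); }
}
```

Production code injects the real PaymentGateway; a test injects a lambda. The workaround exists, but it is exactly the kind of ceremony that wouldn't be needed if final allowed a testing backdoor.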
echelon|4 years ago
Object-oriented programming gets a horrible rap on the basis of inheritance alone, and it's no wonder. Outside of limited domains, such as GUI programming, object inheritance makes little sense. Computer science students are right to question their introductory classes on inheritance when they teach contrived examples of dogs barking and cats meowing as an illustration of Mammal.makeSound() inheritance.
It's almost as if we're shoehorning in a code-dispatch framework as a major language feature, except that framework sucks and we're stuck building with it. The best strategy when working in languages with inheritance is to avoid it.
Duck typing or traits are better ways to represent polymorphic behaviors. We've known this for over a decade now.
Here's hoping that no new languages come with object inheritance as a concept. It's deader than NULL and shouldn't be resurrected.
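For contrast with the Mammal.makeSound() strawman, here is the same polymorphism done with an interface instead of a base class (Sound/Dog/Cat are illustrative names riffing on the thread's example): no shared implementation to inherit, just a shared contract, which is closer to the traits this comment advocates.

```java
import java.util.List;

// Polymorphic dispatch without an inheritance hierarchy: each type
// implements the contract directly, and nothing inherits behavior.
interface Sound {
    String makeSound();
}

class Dog implements Sound {
    @Override public String makeSound() { return "woof"; }
}

class Cat implements Sound {
    @Override public String makeSound() { return "meow"; }
}
```

Callers iterate over a List<Sound> and get the right behavior per element, with no fragile base class anywhere in the design.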
kaba0|4 years ago
Any objective measure of that? Because there is a catch: we can't do what doctors can. There are no double-blind tests for language design. All we have are empirical studies, and based on those, OOP languages do objectively much better. So if anything, your exceptional claim requires exceptional evidence.
But otherwise I agree that these Animal hierarchies are just dumb and many definitely overuse inheritance.
MaxBarraclough|4 years ago
The trend is in the opposite direction.
TypeScript enables JavaScript programmers to benefit from static typing, and is seeing widespread use. Python now has type-hints.