gruseom|14 years ago

That's a platitude which everybody can and does say about everything.
But I think one can make a case that there is something wrong with OO in general. The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general. So we need to ask: when is OO a win? For simpler problems it doesn't matter, because anything would work. The real question is: what are the hard problems that OO makes significantly easier? I don't think anyone has answered that.
I suspect it's that OO is good when the problem involves some well-defined system that exists objectively outside the program - for example, a physical system. One can look at that outside thing and ask, "what are its components?" and represent each with a class. The objective reality takes care of the hardest part of OO, which is knowing what the classes should be. (Whereas most of the time that just takes our first hard problem - what should the system do? - and makes it even harder.) As you make mistakes and have to change your classes, you can interrogate the outside system to find out what the mistakes are, and the new classes are likely to be refinements of the old ones rather than wholly incompatible.
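To make the "classes mirror an outside system" point concrete, here is a minimal sketch (all names invented for illustration): when the domain is a physical system, the class boundaries suggest themselves, one per real component, and refining the model tends to refine rather than replace the design.

```python
import math

class Wheel:
    """One class per physical component: a wheel of a given radius."""
    def __init__(self, radius_m: float):
        self.radius_m = radius_m

    def circumference(self) -> float:
        return 2 * math.pi * self.radius_m

class Car:
    """The class boundaries mirror the physical parts, so a mistake in the
    model can be checked against the real system it represents."""
    def __init__(self, wheels: list[Wheel]):
        self.wheels = wheels

    def distance_per_revolution(self) -> float:
        # All wheels assumed identical for this sketch.
        return self.wheels[0].circumference()

car = Car([Wheel(0.3)] * 4)
print(round(car.distance_per_revolution(), 3))  # 1.885
```

The point is not the arithmetic but that "what are the classes?" was answered by the physical system, not by the programmer's taxonomy-inventing skills.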
This answer boils down to saying that OO's sweet spot is right where it originated: simulation. But that's something of a niche, not the general-purpose complexity-tackling paradigm it was sold as. (There's an interview on YouTube of Steve Jobs in 1995 or so saying that OO means programmers can build software out of pre-existing components and that this makes for at least one order of magnitude more productivity - that "at least" being a marvelous Jobsian touch.)
The reason OO endures as a general-purpose paradigm is inertia. Several generations of programmers have been taught it as the way to program -- which it is not, except that thinking makes it so. How did it get to become so standard? Story of the software industry to date: software is hard, so we come up with a theory of how we would like it to work, do that, and filter out the conflicting evidence.
Jare|14 years ago

> The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general.
Wrong. Even the worst Enterprisey mess of Java classes and interfaces that you can find today is probably better than most of the spaghetti-code, global-state-ridden wild west that existed in the golden days of "procedural" programming.
If you consider that software is composed of Code and Data, then OOP was the first programming model that offered a solid, practical and efficient approach to the organization of data, code and the relationship between the two. That resulted in programs that, given their size and amount of features, were generally easier to understand and change.
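A hypothetical before/after sketch of the point above (names invented): the same feature written with free-floating global state versus with the data and the code that may touch it bound together in a class.

```python
# Procedural, global-state style: any code anywhere can corrupt `balance`.
balance = 0

def deposit(amount):
    global balance
    balance += amount

# OOP style: the data and the code that governs it travel together,
# and the invariant (positive deposits only) is enforced in one place.
class Account:
    def __init__(self):
        self._balance = 0

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self) -> int:
        return self._balance

acct = Account()
acct.deposit(50)
print(acct.balance)  # 50
```

The organizational win is locality: to audit how `_balance` can change, you read one class instead of grepping the whole program for a global.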
That doesn't mean OOP was perfect, or that it couldn't be misused; it was never a silver bullet. With the latest generation of software developers trained from the ground up with at least some idea that code and data need to be organized and structured properly, it's time to leave many of the practices and patterns of "pure" OOP behind and evolve into something better. In particular, functional programming has finally become practical in the mainstream, with most languages offering efficient support for developing with functional patterns.
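As a small illustration of the "functional in the mainstream" claim (the data is invented): most current languages now support first-class functions and declarative pipelines in place of mutating loops.

```python
from functools import reduce

orders = [120, 45, 300, 80]

# Declarative pipeline: filter then fold, no mutable accumulator variable
# exposed to the rest of the program.
total_large = reduce(lambda acc, x: acc + x,
                     filter(lambda x: x >= 100, orders),
                     0)
print(total_large)  # 420
```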
ceol|14 years ago

> The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general.
How so? I find OOP code to be much easier to understand and change.
smsm42|14 years ago

The thing that is wrong with OO in general is that it is not a silver bullet. I disagree that OO is useful only for simulation - I've done virtually no simulation work and I've found OO useful in many contexts. Maybe if I used functional style or hypertyped dynamic agile mumbojumbo style or whatever is the fashion du jour, I would save some time, but OO worked fine for me and allowed me to do what I needed efficiently. Would I use it everywhere? Definitely no. Does it have a prominent place in my tool belt? Definitely yes. I reject the false dichotomy between "OO is Da Thing and should be used for everything" and "OO is useless except in a very narrow niche". My experience is that OO is very useful for reducing the complexity of projects of considerable size with high reuse potential and defined, though not completely static, interfaces.
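A minimal sketch (all names hypothetical) of the "defined, though not completely static, interfaces" point: callers depend on a small abstract interface, so implementations can be swapped or extended without rewriting the callers.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The defined interface: small, explicit, but open to new implementations."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

class MemoryStorage(Storage):
    """One interchangeable implementation; a DiskStorage could be added
    later without touching any caller."""
    def __init__(self):
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]

def archive(store: Storage) -> str:
    # This caller is written against the interface, not the implementation.
    store.save("greeting", "hello")
    return store.load("greeting")

print(archive(MemoryStorage()))  # hello
```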
rbarooah|14 years ago

Another reason it endures is the large collection of mature and easy-to-use libraries available. Of course this is something of a chicken-and-egg problem, but a very present one.