I'm always confused when people criticize OOP, because most of the time the criticisms they use are just plain bad programming. Or they're using hyperbole that isn't really useful.
You want to know why OOP is useful and common? Because it's easy. Easy things allow for faster development and faster onboarding of devs. Humans need mental models of things, and OOP very explicitly gives them one.
But like most easy things, OOP still works well enough even when it's done badly. That's not a bad thing. Working software beats non-working software.
We shouldn't be telling people "stop using OOP! It's only for idiots!". OOP will be here forever. We should be teaching people how to do OOP right, what the pitfalls are that lead to bad design, and how to take badly designed OOP and fix it.
> I'm always confused when people criticize OOP, because most of the time the criticisms they use are just plain bad programming
Every paradigm can implement any type of program needed, in principle. The point of criticising a paradigm is to argue to what extent that paradigm encourages that bad programming you mention. If OOP naturally frames problems in a way that leads most people to bad solutions for common problems, that's a legitimate criticism.
Being the overwhelmingly dominant paradigm of the nineties and early aughts doesn't mean it is going to be around, in a significant way, forever. It got there largely because it was better for large-scale programming than simple procedural code, and because it was bolted on, in a very backwards-compatible way, to the overwhelmingly dominant procedural language of the immediately preceding period.
> You want to know why OOP is useful and common?
Path dependence; its superiority for large products over what was dominant just before; the dominance of C before OOP became common; and the near-simultaneous introduction of C++ and Objective-C, which framed the debate about how to move beyond C. (C++ won, which is why not only is OOP dominant, but static class-based OOP on the C++ model, both structurally and syntactically, is dominant within OOP.)
Yes, OOP is easy. At least the layers of sedimentary cruft were comparatively easy to write. The parts that actually do something are not easier in the slightest. It's still procedural code that must move data. (Usually that code is much less straightforward in OOP, because it has to deal with so much boilerplate that stands in the way of accessing the actual data.)
It's maybe confusing because you are presupposing OOP as the default. Humans tend to observe some situation and synthesize stories to explain it, even when it comes to observing their own actions.
I don't mean to unfairly paraphrase, but to transpose what you wrote so it shows the stories you are telling yourself:
- usually people criticising OOP are "bad programmers"
- OOP is easy, alternatives are hard
- Even if OOP model is a bad fit, it's still a model and the extra devs needed to solve its bad fit can easily be chained to the oars
- If OOP makes trouble, it was because it was "done badly"
- Most easy things work well enough even if "done badly" (that is just not true)
- If OOP can be cobbled together to work at all, it is justified since that justifies anything
If you consider a simpler proposition like, "this can be done in a few hundred or thousand lines of C99 instead of latest C++ and boost", I think there is just no valid reason to bring OOP into it.
> You want to know why OOP is useful and common? Because it's easy.
With as much proof as you have given, let me offer you a counterargument: it is common because academia loves OOP. It's easy to teach, it's easy to test. It is most decidedly not easy and very often not useful.
My favorite example of how everything falls apart in due time is the color of a car. That's it, right? A car has a color. A Porsche Panamera might be a "carbon grey metallic" and it's stunning, but that's still just one color. Aye, up until the Mini Cooper tells you they need two colors. The world doesn't fit the OOP straitjacket. Your programming course does, but the real world doesn't, and when it doesn't, pain follows.
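To make the failure mode concrete, here is a minimal hypothetical sketch (the class names, field names, and colours are invented for illustration, not from any real codebase):

```python
from dataclasses import dataclass, field

# The "obvious" model bakes the one-colour assumption into the type:
@dataclass
class Car:
    model: str
    color: str  # fine for a single-colour Panamera...

# ...until a two-tone car shows up and the field, every caller, and
# everything serialised from it has to change shape:
@dataclass
class TwoToneReadyCar:
    model: str
    colors: list = field(default_factory=list)

panamera = TwoToneReadyCar("Panamera", ["carbon grey metallic"])
mini = TwoToneReadyCar("Mini Cooper", ["pepper white", "black"])
```

Every consumer written against the single `color` field (UI, persistence, search filters) has to be revisited once the assumption breaks.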
Sometimes there is just no right way to use OOP. Sure, OOP is a hammer that makes everything look like a nail, but for some problems it really just isn't the right approach at all.
> You want to know why OOP is useful and common? Because it's easy.
Try rewriting that without adjectives. Engineers are very convincible, but they need evidence. Personally, I don't find OOP to be particularly useful or easy.
Check this out... "I became a much better programmer when I started writing functional code. Now I tend to write OOP code in a functional way, but learning how to do that was not easy, and I am not sure most would find it useful." By saying that, I'm basically taking a position that's the opposite of yours, and while it's true to me, it provides no evidence to help you evaluate my side of the conversation.
There are lots of cases where forcing yourself to stick to OOP styles of programming is much less easy. You could look at various comparisons of the line counts of equivalent programs in C# and F# as some basic examples.
I don't mind having classes that can inherit available in a language, but languages like Java and C# that force you to use only those things lead to a lot of unnecessary boilerplate. You can work around this with static classes full of static functions that you can treat as 'modules' full of free functions. Which is what I do a lot :D
But I wish I didn't have to bother with that workaround.
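A sketch of that workaround, translated into Python for illustration (`MathUtils` and its functions are invented; in Java or C# this would be a static class of static methods):

```python
# A class used purely as a namespace/module: never instantiated,
# just a bag of free functions.
class MathUtils:
    @staticmethod
    def clamp(x, lo, hi):
        # Pin x into the closed interval [lo, hi].
        return max(lo, min(hi, x))

    @staticmethod
    def lerp(a, b, t):
        # Linear interpolation between a and b.
        return a + (b - a) * t

clamped = MathUtils.clamp(5, 0, 3)
```

In Python the idiomatic form would be plain module-level functions; the class wrapper is exactly the ceremony the comment is complaining about.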
I read this quickly to see if this was the piece that should convince me.
It was not. It is, IMO, a collection of strawmen.
People have abused OOP? Yes.
But
- citing FizzBuzz Enterprise Edition (which is really funny even for us Java/.Net developers because it is so horribly wrong)
or writing this
- Because OOP requires scattering everything across many, many tiny encapsulated objects, the number of references to these objects explodes as well. OOP requires passing long lists of arguments everywhere or holding references to related objects directly to shortcut it.
again IMO, demonstrate that the author never really understood OOP.
What probably is true however is that a lot of people should unlearn the OOP they learned in school.
>- Because OOP requires scattering everything across many, many tiny encapsulated objects, the number of references to these objects explodes as well. OOP requires passing long lists of arguments everywhere or holding references to related objects directly to shortcut it.
> again IMO, demonstrate that the author never really understood OOP.
That's a "No true Scotsman/OOP" fallacy. The OOP that the author "never understood" is what we see ALL the time in enterprise and startup code.
There could be a better OOP (e.g. Alan Kay's definition of it), but that's not what people are taught or practice.
Yes, people have abused OOP, and every other programming paradigm and tool as well. It's probably a necessary aspect of learning something new: do something contrived, atrocious, and trivial in order to understand the concepts.
The OOP naysayers seem to often knock the textbook examples, but there are a lot of creative things that can be done with OOP that get around the criticisms.
> again IMO, demonstrate that the author never really understood OOP.
I find arguments like this legitimately fascinating. Obviously some investment into learning common concepts is required. At the same time, if a topic is too complex for most to "truly understand", then it isn't useful.
How do we know the difference? What standards do we hold other coders to, and what expectations do we hold ourselves to?
I'd love to hear if there is much research on the topic. It is easy to find opinion articles, hard to find data.
Maybe you should go back and read it slowly, as it was a good argument.
> again IMO, demonstrate that the author never really understood OOP.
This is what they always say...strange how OOP is the one paradigm no one ever seems to understand, no matter how much is written about it. Seems to me like there isn't actually anything to understand.
> The vast majority of essential code is not operating on just one object – it is actually implementing cross-cutting concerns. Example: when class Player hits() a class Monster, where exactly do we modify data? Monster's hp has to decrease by Player's attackPower, Player's xps increase by Monster's level if Monster got killed. Does it happen in Player.hits(Monster m) or Monster.isHitBy(Player p). What if there's a class Weapon involved? Do we pass it as an argument to isHitBy or does Player has a currentWeapon() getter?
Not saying this is the right way to do things but, if you're actually going to go 100% OO on something:
Player.hits(Monster) returns Hit
Hit takes a Player and a Monster in constructor, and has with(Weapon)
You end up with:
Player.hits(Monster).with(Weapon)
Then Hit reaches into said objects and deals with changing the HP, and XP. You then have encapsulated the details in the actual Hit itself, which seems correct.
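For what it's worth, that sketch could look like this in Python (a hypothetical illustration: `with` is renamed `with_` because it's a Python keyword, and the damage/XP formulas are invented):

```python
class Weapon:
    def __init__(self, damage):
        self.damage = damage

class Monster:
    def __init__(self, hp, level):
        self.hp = hp
        self.level = level

    @property
    def dead(self):
        return self.hp <= 0

class Player:
    def __init__(self):
        self.xp = 0

    def hits(self, monster):
        return Hit(self, monster)

class Hit:
    """Encapsulates the cross-cutting hit logic in one object."""
    def __init__(self, player, monster):
        self.player = player
        self.monster = monster

    def with_(self, weapon):
        self.monster.hp -= weapon.damage
        if self.monster.dead:
            self.player.xp += self.monster.level
        return self

p = Player()
m = Monster(hp=5, level=3)
p.hits(m).with_(Weapon(damage=10))
```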
While this rant rang a bell at the time, I've always found it too easy. Java had non-public classes, anonymous classes and import static back then.
Nowadays, Javaland has stolen lambda and var from Scala, moving away from a real kingdom of nouns (partially: you still need those pesky functional interfaces).
That was great. I read it for the first time. Similar scenarios happen in so many other fields. Some bad idea takes hold. Then schools teach it. Then more people invest time learning it, so that they cannot admit it is bad, and it spreads like wildfire and becomes sacred...
I've never liked classical OOP much, but multiple dispatch is a lovely paradigm. One doesn't define _classes_ per se, but rather just plain old boring structs.
mutable struct Player    # mutable, so hit() can update xp
    xp::Int
end

mutable struct Monster   # mutable, so hit() can update hp
    hp::Int
end

function hit(p::Player, m::Monster)
    p.xp += 10
    m.hp -= 20
end
The nicest thing is how one doesn't need inheritance to "add a method" to an object. One just defines my_function(s::String) to be whatever, and it doesn't interfere with anyone else's code.
A lot of these initial points I don't think are relevant -- you can model your data in objects, data structures are complex because business needs are complex, data models wind up having implicit graph dependencies as well...
BUT, "cross-cutting concerns" is where I think the main valid argument is. In my experience, OOP is just way too restrictive of a model, by forcing you to shoehorn data and code into an object hierarchy that doesn't reflect the meaning of what's going on.
So I totally agree with the conclusion: just store your data in "dumb" arrays with hash tables or database tables with indices... be extremely rigorous about defining possible states... and then organize your functions themselves clearly with whatever means are at your disposal (files, folders, packages, namespaces, prefixes, arrays, or even data-free objects -- it all depends on what your language does/n't support).
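As a rough illustration of that conclusion (names and numbers invented), the Player/Monster example from elsewhere in the thread becomes flat tables plus free functions:

```python
# State lives in "dumb" tables: id -> row, like a database.
players = {1: {"xp": 0}}
monsters = {7: {"hp": 30, "level": 3}}

def hit(players, monsters, player_id, monster_id, damage):
    # All the cross-cutting logic sits in one plain function.
    monster = monsters[monster_id]
    monster["hp"] -= damage
    if monster["hp"] <= 0:
        players[player_id]["xp"] += monster["level"]

hit(players, monsters, 1, 7, damage=35)
```

Nothing is hidden behind accessors, so "where does the data change?" has a boring answer: wherever a function mutates the table.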
Nope. Right there at the beginning is where the author goes off track.
Computation itself is the most important aspect of computing. Code and data are just complexity to manage.
> Do I have a Customer? It goes into class Customer. Do I have a rendering context? It goes into class RenderingContext.
I whole heartedly agree with this. The naive approach to domain modelling is to classify the primitives of a domain into classes and stop there. In actuality, the processor of those primitives is likely what your class should be, and those primitives ought to be methodless data structures.
I.e., OrderFulfiller instead of Customer and Part classes.
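A hedged sketch of that shape (all names invented): the records carry no behaviour, and the processor carries all of it.

```python
from dataclasses import dataclass

# Methodless data structures:
@dataclass
class Customer:
    name: str
    balance: int

@dataclass
class Part:
    sku: str
    price: int

# The processor is the class; the domain primitives are just data.
class OrderFulfiller:
    def fulfill(self, customer, part):
        if customer.balance < part.price:
            return False
        customer.balance -= part.price
        return True

alice = Customer("Alice", 100)
widget = Part("W-1", 60)
fulfilled = OrderFulfiller().fulfill(alice, widget)
```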
>Computation itself is the most important aspect of computing. Code and data are just complexity to manage.
Of course you're gonna write some computation, else there would be no program. That's not the point here.
First, the author doesn't mean "data" as in what comes in; they mean the data structures of a program.
Second, for the purposes of designing a program (and its computation part) data structures are a better guiding principle than objects. That's the argument being made.
"I whole heartedly agree with this. The naive approach to domain modelling is to classify the primitives of a domain into classes and stop there. In actuality, the processor of those primitives is likely what your class should be, and those primitives ought to be methodless data structures."
This is what I did not have the ability to articulate as well in an earlier comment. As far as I understand the parent, the takeaway is that OOP often goes astray when the developer is unable to identify that a given need can be handled by generics/a parent class and instead instantiates their own class.
The problem with OOP is that it became so ubiquitous. Everything had to be OO, millions of hours spent trying to fit everything inside absurd taxonomies.
There are good bits in OO. You can find articles about how parameterizing large `switch` statements into objects can lead to obvious improvements.
My only conclusion is to bet on biodiversity. I learned so much from relational (db) logic, functional programming, logic programming, stack languages (Forth threaded code), etc. As soon as your brain senses something is useless, discard it and find another cool trick/theorem to grok.
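The switch-to-objects refactoring mentioned above ("replace conditional with polymorphism") can be sketched like this, with invented shape classes:

```python
import math

# Each former switch arm becomes an object with a common method:
class Circle:
    def __init__(self, r):
        self.r = r

    def area(self):
        return math.pi * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# The old `switch kind { case "circle": ...; case "square": ... }`
# collapses into one polymorphic call site:
def total_area(shapes):
    return sum(s.area() for s in shapes)

total = total_area([Circle(1), Square(2)])
```

Adding a new shape now means adding one class, not editing every switch in the codebase.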
I think for enterprise type software OOP works well. It easily allows us to re-use code and solve common problems once in a parent class and have that solution easily propagated to child classes.
However, when developing a video game, I ran into quite a few OO design conundrums that were, IMO, the hardest programming problems of my career. I started looking into data-driven design, and while I never changed my code to implement it, it looked like it might have been easier for the video game. I do not know for sure. But I do know that getting OO right in the video game I was implementing was daunting. Maybe I was doing it wrong. The one issue we kept running into was how to design it so that the flow of dependencies ran in one direction; that is to say, classes should not require references to classes that were higher up the food chain, and vice versa. It sucked when you realized that your Bullet class required a reference to the BattleField class, when the BattleField object was not being passed down through all the intermediate objects that separated the two. I would be willing to say that it could have been poor design, or rather not realizing that dependency early enough in the process to deal with it. But many things we did not know until the requirement or change came up. Then it was programming somersaults to deal with it. Eventually we got better at re-arranging things as they came up; basically, we got used to changing a lot of the design at the drop of a hat.
I do not know if data-driven design would have helped, but it did sound like it was worth a shot. I must admit, though, I do remember a data-driven program I worked on, and it bothered me how much data had to be passed around that was not relevant to the method/class using it. And a lot of data got lumped together out of convenience.
This article seems to fit the template: Here are some abstract reasons why paradigm X is bad, and here is a class of problems that have a more straightforward solution in paradigm Y, therefore paradigm Y is better than X.
The real message here is that if you have a problem that nicely maps onto a relational database then use the database-like approach instead of OOP.
In my domain, I work on algorithms for a very specialized class of graphs whose structure and manipulation must obey a diverse set of constraints. I have tried implementing the core ideas using multiple popular paradigms but so far I did not find anything better than OOP.
Bashing imperative + structured + OOP is valid if you have a viable alternative. That viable alternative is proper namespacing, modularity and functional programming.
If your alternative is another form of spaghetti your problem is not OOP. Your problem is the way you build abstractions.
If your procedures and functions, the foundation of your program, are poorly thought out, then you've laid a shitty foundation for everything that follows.
1 is pretty obvious, and for a programming language this means communication between a person and a computer.
2 might not be as clear, but if you know that people cannot count in languages that have no numbers, it becomes obvious. There is a tribe whose language only has words for 0, 1 and many, and guess what: they can't tell the difference between 7 and 8.
Now back to programming languages and their communication between computer and programmer: the old programming languages were very close to the computer. As languages evolved, they became more 'human': easier for us to read and write.
OOP in that sense leans very close to concepts of normal humans. Objects, things objects can do, objects have separate responsibilities, etc. It's easy for a human to have such a model inside his head, because we already do this every day.
Now as a programmer, most of the things that I need to do is make a representation of the real world into a program. Since the real world is made up of things that do stuff, it's easy to model it in such a concept.
Arguments against OOP almost always come from a theoretical or academic background.
But in the real world, with real companies, real problems to solve and real programmers, OOP is used. Because it lends itself really well for representing the real world in a computer model.
EDIT: not saying that anyone that doesn't use OOP isn't a real programmer. But those people are more into the algorithmic or mathematical problem space, not a problem space where a real-world concept needs to be modeled. Most software is like the latter, and therefore most programs are OO. Is it the best solution for everything? Definitely not.
> OOP programs tend to only grow and never shrink because OOP encourages it.
Most long-lived programs tend to grow, because people add new features to them. This isn't something unique to OOP.
As for growth of OOP programs in particular - does no one ever refactor anything? Shrinking OOP code through refactoring is a daily occurrence at almost every job I've ever had.
>The vast majority of essential code is not operating on just one object – it is actually implementing cross-cutting concerns. Example: when class Player hits() a class Monster, where exactly do we modify data? Monster's hp has to decrease by Player's attackPower, Player's xps increase by Monster's level if Monster got killed. Does it happen in Player.hits(Monster m) or Monster.isHitBy(Player p). What if there's a class Weapon involved? Do we pass it as an argument to isHitBy or does Player has a currentWeapon() getter?
As an indie dev, this is something I struggled with early on. My solution so far has been to choose the most obvious place where all those things should happen and just do it there (in this case it would be on the Player; in less obvious cases it gets fuzzier). What does the non-OOP solution to this problem look like?
> when class Player hits() a class Monster, where exactly do we modify data? Monster's hp has to decrease by Player's attackPower, Player's xps increase by Monster's level if Monster got killed. Does it happen in Player.hits(Monster m) or Monster.isHitBy(Player p). What if there's a class Weapon involved? Do we pass it as an argument to isHitBy or does Player has a currentWeapon() getter?
I don't see how this is a problem unless you think programming objects correspond with physical objects.
My first thought is make a separate Swing class representing the player swinging their weapon. There's probably an even better way to do it but this gets around the issues mentioned above.
class Player {
    fun takeSwing() {
        swing = new Swing(
            this.currentWeapon,
            this.location.offset(this.direction)
        )
        if swing.killedMonster {
            this.xp += swing.monster.level
        }
    }
}

class Swing {
    fun new(weapon, location) {
        this.weapon = weapon
        this.monster = findMonster(location)
        if this.monster != null { this.monster.takeHit(this) }
        this.killedMonster = this.monster != null && this.monster.dead
    }
    fun damage() {
        return this.weapon.baseDamage + this.weapon.bonusDamage
    }
}

class Monster {
    fun takeHit(swing) {
        this.hp -= swing.damage() - this.defence
        if this.hp <= 0 { this.die() }
    }
}
> What does the non-OOP solution for this problem look like?
function hit(player, weapon, monster) {
    monster.hp -= weapon.damage
    if (monster.hp <= 0) {
        player.xp += monster.level
    }
}
This is a major problem with single dispatch, a popular OOP implementation choice, but not the only one. Common Lisp has multiple dispatch built in (CLOS generic functions), and C++ can emulate it (e.g. with the visitor pattern), which is a generally accepted solution to this problem.
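A hand-rolled sketch of the idea in Python (all names invented; real code would reach for CLOS-style generics or a library such as multipledispatch): the handler is chosen from the runtime types of both arguments, so the logic for a Player hitting a Monster need not live inside either class.

```python
# Dispatch table keyed on the types of *both* arguments.
handlers = {}

def when(*types):
    def register(fn):
        handlers[types] = fn
        return fn
    return register

def hit(attacker, target):
    # Multiple dispatch: look up by (type, type), not just the receiver.
    return handlers[type(attacker), type(target)](attacker, target)

class Player:
    def __init__(self):
        self.xp = 0

class Monster:
    def __init__(self):
        self.hp = 30

@when(Player, Monster)
def player_hits_monster(p, m):
    m.hp -= 20
    return m.hp

p, m = Player(), Monster()
remaining = hit(p, m)
```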
It seems like a lot of the challenges you're facing have to do with object-oriented design not being a good fit for your problem. Some problems really do organise well into independent actors; maybe the one you're working on isn't one of them?
OOP is not a silver bullet but for some classes of problems it's the best tool available.
That's the reason why all good rich GUI frameworks are OOP-based, including the HTML DOM we use on the web. GPU APIs and OS kernel APIs are OOP-based as well.
Don't know why people keep insisting on this. OOP is a model. Functional is a model. Data-oriented is a model. We as programmers just have to use them in the most effective way possible to architect a solution to a problem. No one of those models can substitute for any other; they complement each other. There is bad OOP and effective OOP, just as there is bad functional and effective functional. The models are never bad by themselves. The programmers are.
Any programmer who bashes OOP in favor of any other model is just making a fool of himself and exposing his naivety.
There's something very important here. It also tends to be overstated. As a former OOP guy, I struggle to explain what's going on to people who don't see it yet.
Perhaps beginning with praise might work best. OOA (object-oriented analysis) as a group analysis tool is probably one of the most powerful things to come out of computer science in the past 50 years. Oddly enough, nobody does it much. Many of the problems this author brings up with OOP actually work out for the best in OOA.
It's not all bad. But there are problems with where we are. Big problems. We need to understand them.
> At its core, every software is about manipulating data to achieve a certain goal
> This part is very important, so I will repeat. goal -> data architecture -> code.
Wait, what? No. That isn't what you just said. You said my goal was my goal and manipulating the data was the way to achieve that goal.
Take Shopify. They had a goal: Make a ton of money by running ecommerce stores.
They used OOP. They IPO'd and they're doing great.
You can argue all you want about how they would have done better if they'd done some other programming style, but the reality is that almost every startup that I see win in fields like Shopify's (where there are a ton of different concerns with their own, disparate implementation specificities[0]) do so with OOP codebases.[1]
In large corps like Google non-OOP with typed languages like Go might work great. Streams of data and all that. But for startups it's too slow. OOP is agile because you get some data and you can ask it "what can you do?" and you can trick functional or logical programming languages into kinda doing that too, but they do it poorly.
[0] Even wording this in a non-OO way was a stupid waste of time. I could have just said "different models and methods" and 99% of the people here would have nodded and the 1% would have quibbled.
[1] Some startups like WhatsApp are a bit of an exception, but even YouTube used Python.
Sometimes the best way to decide what to do is an OOP-like 100,000-line legal code. Sometimes the best way is something short and sweet yet possibly not entirely clear, like the Ten Commandments. When the original designers chose correctly, everything will work quickly and reliably. When they didn't, you'll suffer for a long time. Given that, you'll spend almost all of your wall-clock time suffering and complaining about the designers' selection being wrong, with the side effect that most of your suffering will be due to poorly implemented examples of the dominant paradigm. "AKA OOP SUX"
In summary, given all of the above, two statements are simultaneously true: OOP works, AND you'll spend almost all of your mental effort on OOP not working. Generally, OOP being inappropriately hyper-dominant at this time means that non-OOP solutions will utterly master some very low-hanging fruit for first movers who abandon OOP.
I've seen some truly horrific object-relational mappers trying to connect OOP to inherently functional software APIs, hardware device interfaces, "chronological engineering" in general, and persistent data stores. It's not surprising that, if you're working in those areas, abandoning OOP can lead to massive success.
For a very good example of when this is indeed the case, watch (or read!) this amazing talk by kyren on game ECS: https://kyren.github.io/2018/09/14/rustconf-talk.html
This drove me nuts too - it's the exact opposite of what OO design suggests to do. See: https://en.wikipedia.org/wiki/Law_of_Demeter
[+] [-] coldtea|7 years ago|reply
again IMO, demonstrate that the author never really understood OOP.
That's a "No true Scotchman/OOP" fallacy. The OOP that the author "never understood" is what we see ALL the time in enterprise and startup code.
There could be a better OOP (e.g. Alan Kay's definition of it), but that's not what people are taught or practice.
[+] [-] itronitron|7 years ago|reply
The OOP naysayers seem to often knock the textbook examples, but there are a lot of creative things that can be done with OOP that get around the criticisms.
[+] [-] ergothus|7 years ago|reply
I find arguments like this (legitimately fascinating). Obviously an amount of investment into learning common concepts is required. At the same time, if a topic is too complex for most to "truly understand" than it isnt useful.
How do we know the difference? What standards do we hold other coders to, and what expectations do we hold ourselves to?
I'd love to hear if there is much research on the topic. It is easy to find opinion articles, hard to find data.
bvrmn|7 years ago
Can you provide references for proper OOP?
totemizer|7 years ago
tree_of_item|7 years ago
> again IMO, demonstrate that the author never really understood OOP.
This is what they always say...strange how OOP is the one paradigm no one ever seems to understand, no matter how much is written about it. Seems to me like there isn't actually anything to understand.
jryan49|7 years ago
Not saying this is the right way to do things but, if you're actually going to go 100% OO on something:
Player.hits(Monster) returns Hit
Hit takes a Player and a Monster in constructor, and has with(Weapon)
You end up with:
Player.hits(Monster).with(Weapon)
Then Hit reaches into said objects and deals with changing the HP and XP. You have then encapsulated the details in the actual Hit itself, which seems correct.
It does read kind of nicely IMO...
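A minimal sketch of that fluent shape in Java (the damage and XP numbers are invented; the point is only where the bookkeeping lives):

```java
// Hypothetical sketch: Hit encapsulates the HP/XP bookkeeping, so the
// call site reads player.hits(monster).with(weapon).
class Weapon {
    final int damage;
    Weapon(int damage) { this.damage = damage; }
}

class Monster {
    int hp = 30;
}

class Hit {
    private final Player player;
    private final Monster monster;
    Hit(Player player, Monster monster) {
        this.player = player;
        this.monster = monster;
    }

    void with(Weapon weapon) {
        monster.hp -= weapon.damage;          // Hit owns the HP change...
        if (monster.hp <= 0) player.xp += 10; // ...and the XP award
    }
}

class Player {
    int xp = 0;
    Hit hits(Monster monster) { return new Hit(this, monster); }
}
```

Neither Player nor Monster knows how damage resolution works; only Hit does.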
hliyan|7 years ago
_old_dude_|7 years ago
While this rant rang a bell at the time, I've always found it too easy. Java had non-public classes, anonymous classes and static imports back then.
Nowadays, Javaland has stolen lambdas and var from Scala, moving away from a real kingdom of nouns (partially: you still need those pesky functional interfaces).
nyc111|7 years ago
ced|7 years ago
crazygringo|7 years ago
BUT, "cross-cutting concerns" is where I think the main valid argument is. In my experience, OOP is just way too restrictive of a model, by forcing you to shoehorn data and code into an object hierarchy that doesn't reflect the meaning of what's going on.
So I totally agree with the conclusion: just store your data in "dumb" arrays with hash tables or database tables with indices... be extremely rigorous about defining possible states... and then organize your functions themselves clearly with whatever means are at your disposal (files, folders, packages, namespaces, prefixes, arrays, or even data-free objects -- it all depends on what your language does or doesn't support).
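As a rough Java sketch of that style (record and function names are mine; requires Java 16+ for records), the data stays in plain records and the "index" is just a map built by a free function:

```java
import java.util.*;

// Plain, method-free data...
record Order(int id, int customerId, double total) {}

// ...and free-standing functions grouped in a namespace-like class.
final class OrderQueries {
    private OrderQueries() {}

    // Build a database-style index: customer id -> that customer's orders.
    static Map<Integer, List<Order>> byCustomer(List<Order> orders) {
        Map<Integer, List<Order>> index = new HashMap<>();
        for (Order o : orders) {
            index.computeIfAbsent(o.customerId(), k -> new ArrayList<>()).add(o);
        }
        return index;
    }
}
```

Nothing here is hidden inside an object hierarchy; any function that needs a different access path just builds a different index.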
gerbilly|7 years ago
Maybe, but if it's cross cutting concerns that bother you, just combine OOP with an aspect oriented programming library.
corebit|7 years ago
Nope. Right there at the beginning is where the author goes off track.
Computation itself is the most important aspect of computing. Code and data are just complexity to manage.
> Do I have a Customer? It goes into class Customer. Do I have a rendering context? It goes into class RenderingContext.
I wholeheartedly agree with this. The naive approach to domain modelling is to classify the primitives of a domain into classes and stop there. In actuality, the processor of those primitives is likely what your class should be, and those primitives ought to be methodless data structures.
I.e., an OrderFulfiller class instead of Customer and Part classes.
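Sketched in Java (the names and the affordability rule are invented to illustrate the shape, not taken from the comment):

```java
// Methodless data carriers; records are a convenient way to get them.
record Customer(String name, double balance) {}
record Part(String sku, double price) {}

// The processor of those primitives carries the behaviour.
class OrderFulfiller {
    boolean canFulfill(Customer customer, Part part) {
        return customer.balance() >= part.price();
    }
}
```

The domain nouns stay as dumb data; the verb of the domain (fulfilling orders) gets the class.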
coldtea|7 years ago
Of course you're gonna write some computation, else there would be no program. That's not the point here.
First, author doesn't mean "data" as in what comes in, it means the data structures of a program.
Second, for the purposes of designing a program (and its computation part) data structures are a better guiding principle than objects. That's the argument being made.
coolaliasbro|7 years ago
This is what I did not have the ability to articulate as well in an earlier comment. As far as I understand the parent, the takeaway is that OOP often goes astray when the developer fails to see that a given need can be handled by generics or a parent class, and instead writes their own class.
agumonkey|7 years ago
There are good bits in OO. You can find articles about how parameterizing large `switch` statements into objects can lead to obvious improvements.
My only conclusion is to bet on biodiversity~. I learned so much from relational (db) logic, functional programming, logic programming, stack languages (Forth threaded code), etc. As soon as your brain senses something useless, discard it and find another cool trick/theorem to grok.
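The switch-into-objects move mentioned above looks roughly like this in Java (a toy operation table; all names are invented):

```java
import java.util.Map;

// Replace a growing switch over operator tokens with a table of small
// strategy objects (here, lambdas implementing one interface).
interface Op {
    int apply(int a, int b);
}

class Calculator {
    private static final Map<String, Op> OPS = Map.of(
        "+", (a, b) -> a + b,
        "-", (a, b) -> a - b,
        "*", (a, b) -> a * b
    );

    int eval(String op, int a, int b) {
        Op found = OPS.get(op);
        if (found == null) throw new IllegalArgumentException("unknown op: " + op);
        return found.apply(a, b);
    }
}
```

Adding an operation becomes adding a table entry instead of editing a switch in every place that dispatches on the token.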
jmartrican|7 years ago
However, when developing a video game, I ran into quite a few OO design conundrums that were, IMO, the hardest programming problems of my career. I started looking into data-driven design, and while I never changed my code to implement it, it looked like it might have been easier for the video game. I do not know for sure. But I do know that getting OO right in the game I was building was daunting. Maybe I was doing it wrong.
The one issue we kept running into was how to design it so that the flow of dependencies went in one direction. That is to say, classes should not require references to classes higher up the food chain, and vice versa. It sucked when you realized that your Bullet class required a reference to the BattleField class when the BattleField object was not being passed down through all the intermediate objects separating the two. I am willing to say that it could have been poor design, or rather not spotting that dependency early enough in the process to deal with it. But many things we did not know until the requirement or change came up. Then it was programming somersaults to deal with it. Eventually we got better at re-arranging things as they came up; basically we got used to changing a lot of the design at the drop of a hat.
I do not know if data-driven design would have helped, but it did sound worth a shot. I must admit, though, I do remember a data-driven program I worked on, and it bothered me how much data had to be passed around that was not relevant to the method/class using it. And a lot of data got lumped together out of convenience.
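For what it's worth, one common way to break that upward Bullet-to-BattleField reference is to hand the Bullet a callback instead. This is only a sketch with invented names, not what the original codebase did:

```java
import java.util.function.Consumer;

// Bullet never sees BattleField: it only knows that someone wants to
// hear about impacts.
class Bullet {
    private final Consumer<Bullet> onImpact;
    Bullet(Consumer<Bullet> onImpact) { this.onImpact = onImpact; }

    void impact() { onImpact.accept(this); }
}

class BattleField {
    int recordedImpacts = 0;

    // BattleField wires itself in at spawn time, so no intermediate
    // object has to thread a BattleField reference down the chain.
    Bullet spawnBullet() {
        return new Bullet(bullet -> recordedImpacts++);
    }
}
```

The dependency arrow now points only one way: BattleField knows about Bullet, never the reverse.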
jabajabadu|7 years ago
The real message here is that if you have a problem that maps nicely onto a relational database, use the database-like approach instead of OOP.
In my domain, I work on algorithms for a very specialized class of graphs whose structure and manipulation must obey a diverse set of constraints. I have tried implementing the core ideas using multiple popular paradigms but so far I did not find anything better than OOP.
partycoder|7 years ago
If your alternative is another form of spaghetti, your problem is not OOP. Your problem is the way you build abstractions.
If your procedures and functions, the foundation of your program, are poorly thought out, then you've laid a shitty foundation for everything that follows.
koonsolo|7 years ago
1. communication
2. representation of concepts
1 is pretty obvious, and for a programming language this means communication between a person and a computer.
2 might not be as clear, but if you know that people cannot count in languages that have no numbers, it becomes obvious. There is a tribe whose language only has words for 0, 1 and many, and guess what: they can't tell the difference between 7 and 8.
Now back to programming languages and their communication between computer and programmer: the old programming languages were very close to the computer. As languages evolved, they became more 'human', easier for us to read and write.
OOP in that sense leans very close to how normal humans think: objects, things objects can do, objects with separate responsibilities, etc. It's easy for a human to hold such a model in their head, because we already do this every day.
Now as a programmer, most of what I need to do is represent the real world in a program. Since the real world is made up of things that do stuff, it's easy to model it with such a concept.
Most arguments against OOP always come from either a theoretical or academic background.
But in the real world, with real companies, real problems to solve and real programmers, OOP is used. Because it lends itself really well for representing the real world in a computer model.
EDIT: not saying that anyone who doesn't use OOP isn't a real programmer. But those people are more into the algorithmic or mathematical problem space, not a problem space where a real-world concept needs to be modeled. Most software is the latter, and therefore most programs are OO. Is it the best solution for everything? Definitely not.
alasdair_|7 years ago
Most long-lived programs tend to grow, because people add new features to them. This isn't something unique to OOP.
As for growth of OOP programs in particular - does no one ever refactor anything? Shrinking OOP code through refactoring is a daily occurrence at almost every job I've ever had.
adnzzzzZ|7 years ago
As an indie dev this is something that I struggled with early on and my solution so far has been to choose the most obvious place where all those things should happen and just do it there (in this case it would be on the Player, in other less obvious cases it gets more fuzzy). What does the non-OOP solution for this problem look like?
geowwy|7 years ago
I don't see how this is a problem unless you think programming objects must correspond to physical objects.
My first thought is to make a separate Swing class representing the player swinging their weapon. There's probably an even better way to do it, but this gets around the issues mentioned above.
geocar|7 years ago
It seems like a lot of the challenges you're facing have to do with object-oriented design not being a good fit for your problem. Some problems really do organise well into independent actors; maybe the one you're working on isn't one of them?
jdub|7 years ago
https://www.youtube.com/watch?v=aKLntZcp27M
unknown|7 years ago
[deleted]
krapht|7 years ago
lispm|7 years ago
http://www.oocities.org/tablizer/top.htm
antialias|7 years ago
https://www.hillelwayne.com/post/decision-tables/
dredmorbius|7 years ago
Const-me|7 years ago
That's the reason why all good rich GUI frameworks are OOP-based, including the HTML DOM we use on the web. GPU APIs and OS kernel APIs are OOP-based as well.
mercer|7 years ago
rafaelvasco|7 years ago
DanielBMarkham|7 years ago
Perhaps beginning with praise might work best. OOA (object-oriented analysis) as a group analysis tool is probably one of the most powerful things to come out of computer science in the past 50 years. Oddly enough, nobody does it much. Many of the problems this author brings up with OOP actually work for the best in OOA.
It's not all bad. But there are problems with where we are. Big problems. We need to understand them.
3pt14159|7 years ago
> This part is very important, so I will repeat. goal -> data architecture -> code.
Wait, what? No. That isn't what you just said. You said my goal was my goal and manipulating the data was the way to achieve that goal.
Take Shopify. They had a goal: Make a ton of money by running ecommerce stores.
They used OOP. They IPO'd and they're doing great.
You can argue all you want about how they would have done better if they'd done some other programming style, but the reality is that almost every startup that I see win in fields like Shopify's (where there are a ton of different concerns with their own, disparate implementation specificities[0]) do so with OOP codebases.[1]
In large corps like Google, non-OOP with typed languages like Go might work great. Streams of data and all that. But for startups it's too slow. OOP is agile because you get some data and you can ask it "what can you do?", and you can trick functional or logic programming languages into kinda doing that too, but they do it poorly.
[0] Even wording this in a non-OO way was a stupid waste of time. I could have just said "different models and methods" and 99% of the people here would have nodded and the 1% would have quibbled.
[1] Some startups like WhatsApp are a bit of an exception, but even YouTube used Python.
VLM|7 years ago
In summary, given all of the above, two statements are true at once: OOP works, AND you'll spend almost all of your mental effort on the parts where it doesn't. Generally, with OOP being inappropriately hyper-dominant at this time, non-OOP solutions will pick off some very low-hanging fruit for the first movers who abandon OOP.
I've seen some truly horrific object-relational mappers trying to connect OOP to inherently functional software APIs, hardware device interfaces, "chronological engineering" in general, and persistent data stores. Not surprising, then, that if you're working in those areas, abandoning OOP will lead to massive success.