My general take (and w/ the caveat that every system is different) is as follows:
- procedural code to enter into the system (and perhaps that's all you need)
- object oriented code for domain modeling
- functional code for data structure transformations & some light custom control flow implementation (but not too much)
I like the imperative shell, functional core pattern quite a bit, and focusing on data structures is great advice as well. The anti-OO trend in the industry has been richly earned by the OO architecture astronauts[1], but the idea of gathering a data structure and the operations on that data structure in a single place, with data hiding, is a good one, particularly for domain modeling.
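The three-layer split above might look roughly like this minimal sketch (all names invented for illustration): a pure functional core doing the data transformation, wrapped in a thin imperative shell that handles I/O.

```python
# Sketch of "imperative shell, functional core". The core is pure:
# data in, data out, no side effects.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int  # cents

def apply_deposit(account: Account, amount: int) -> Account:
    """Pure core: returns a new Account, never mutates the input."""
    if amount <= 0:
        raise ValueError("deposit must be positive")
    return replace(account, balance=account.balance + amount)

def main() -> None:
    """Imperative shell: does I/O at the edges, delegates logic to the core."""
    acct = Account(owner="alice", balance=1000)
    acct = apply_deposit(acct, 250)
    print(acct.balance)  # the side effect lives only here

if __name__ == "__main__":
    main()
```

The core is trivially testable because it never touches the outside world.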
In general I think we are maturing as an industry, recognizing that various approaches have their strengths and weaknesses and a good software engineer can mix and match them when building a successful software project.
There is no silver bullet. If only someone had told us that years ago!
OO code for domain modeling might be, to date, the single greatest source of disillusionment in my career.
There are absolutely use cases where it works very well. GUI toolkits come to mind. But for general line-of-business domain modeling, I keep noticing two big mismatches between the OO paradigm and the problem at hand. First and foremost, allowing subtyping into your business domain model is a trap. The problem is that your business rules are subject to change, you likely have limited or even no control over how they change, and the people who do get to make those decisions don't know and don't care about the Liskov Substitution Principle. In short, using one of the headline features of OOP for business domain modeling exposes you to outsize risk of being forced to start doing it wrong, regardless of your intentions or skill level. (Incidentally, this phenomenon is just a specific example of premature abstraction being the root of all evil.)
And then, second, dynamic dispatch makes it harder for newcomers to figure out the business logic by reading the code. It creates a bit of a catch-22 situation where figuring out which methods will run when - an essential part of understanding how the code behaves - almost requires already knowing how the code works. Not literally, of course, but reading unfamiliar code that uses dynamic dispatch is an advanced skill, and nobody enjoys it. Granted, this problem can be mitigated with documentation, but that solution is unsatisfying. Just using procedural code and banging out whatever boilerplate you need to get things working with static dispatch creates less additional work than writing and maintaining satisfying documentation for an object-oriented codebase, and comes with the added advantage that it cannot fall out of sync with what the code actually does.
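A tiny sketch of the readability difference (an invented example): with dynamic dispatch, the behavior at the call site depends on subclasses defined elsewhere, while the procedural version keeps every branch visible where it runs.

```python
# Dynamically dispatched version: to know what discount() can do, the
# reader must go find every subclass of Customer in the codebase.

class Customer:
    def discount(self) -> float:
        return 0.0

class GoldCustomer(Customer):
    def discount(self) -> float:
        return 0.1

def total_oo(customer: Customer, price: float) -> float:
    # Which discount() runs? Depends on the runtime type, defined elsewhere.
    return price * (1 - customer.discount())

# Procedural/static version: all branches visible at the call site.
def total_procedural(tier: str, price: float) -> float:
    discount = 0.1 if tier == "gold" else 0.0
    return price * (1 - discount)
```

Both compute the same result; the difference is purely in how much of the program you must hold in your head to predict it.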
Incidentally, Donald Knuth made a similar observation in his interview in the book Coders at Work. He expressed dissatisfaction with OOP on the grounds that, for the purposes of maintainability, he found code reuse to be less valuable than modifiability and readability.
If we're talking about IT (information processing in general), then the domain model is just data representing facts and should probably be treated as that, and not some metaphorical simulation of the world.
I've come up with a pretty useful test for when to apply OO:
When you need to model a _computational unit_[0] in terms of _operational semantics_, then use OO.
[0] Decidedly _not_ a simulation of a metaphor for the "real world".
---
Examples:
A resizable buffer: You want operations like adding, removing, preemptively resizing, etc. on a buffer. When you use one, it's useless to think about the internal bookkeeping represented in its data structure.
A database object: It wraps a driver, a connection pool etc. From the outside you want to configure it at the start, then you want to interact with it via operations.
An HTTP server: You send messages to it via HTTP methods. You don't care about its internal state, only about your current representation of it (HATEOAS) and what you can do with it.
A memory allocator: The name gives away that you can _do_ things with it. You first choose the allocator that fits your needs, but then you _operate_ on it via alloc/free etc.
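The first of these computational units, sketched in Python (details invented): the caller thinks in operations, and the capacity bookkeeping never escapes the object.

```python
class GrowableBuffer:
    """Operations (append/pop/reserve) are the interface; the
    capacity-doubling bookkeeping stays hidden inside."""

    def __init__(self) -> None:
        self._storage = [None] * 4  # internal representation
        self._size = 0

    def reserve(self, capacity: int) -> None:
        """Preemptively grow the backing storage."""
        if capacity > len(self._storage):
            new = [None] * capacity
            new[:self._size] = self._storage[:self._size]
            self._storage = new

    def append(self, value) -> None:
        if self._size == len(self._storage):
            self.reserve(2 * len(self._storage))
        self._storage[self._size] = value
        self._size += 1

    def pop(self):
        self._size -= 1
        value = self._storage[self._size]
        self._storage[self._size] = None
        return value

    def __len__(self) -> int:
        return self._size
```

Nothing about `_storage` or the doubling policy matters to a caller, which is exactly the point of treating it as an object.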
---
Some of us wince when we hear "OO", because it has been an overused paradigm. Some advocates of OO have been telling us that it is somehow total (similar to FP advocates) and people have been pushing back on this for a while now.
When applied to information processing especially, it becomes ridiculous, complex and distracting. I call this "Kindergarten OO": you write code as if you were explaining the problem to a child via metaphors.
Computational objects however arise naturally and are very obvious. I don't care if those are encoded as classes, with closures or if we syntactically pretend as if they aren't objects. They are still objects.
> but the idea of gathering a data structure and the operations on that data structure in a single place, with data hiding, is a good one, particularly for domain modeling.
One can do this in a module without OOP.
The idea of mixing data and behaviour/state (OOP), instead of keeping data structures and the functions that transform them separate (functional), is IMO the biggest mistake of OOP, together with using inheritance.
I believe making part of the program data instead of code (and thus free of bugs) is such a big advantage. Lisp was already talking about this. Mixing data with behaviour, without a clear delimitation, creates a tightly coupled implementation full of implicit assumptions. Outside the class things look clean, but inside they ossify and grow in complexity. Pure functions with data in, data out are such a big improvement in clarity when possible.
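A minimal illustration of the data-in, data-out style (all names made up): the domain data is a plain structure, and the behaviour lives in pure functions that never mutate it.

```python
# Plain data: no methods, no hidden state, inspectable and serializable.
order = {"items": [{"price": 300, "qty": 2}, {"price": 50, "qty": 1}]}

def order_total(order: dict) -> int:
    """Data in, data out: trivially testable, no implicit assumptions."""
    return sum(item["price"] * item["qty"] for item in order["items"])

def with_item(order: dict, item: dict) -> dict:
    """Returns a new order instead of mutating the input."""
    return {**order, "items": [*order["items"], item]}
```

Because the functions are pure, any caller can reason about them locally, without knowing what else holds a reference to the order.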
I hadn't thought about it explicitly like this before, and I think I agree. My more nebulous thought process was something like:
1. Try to solve the problem purely functionally.
2. If that failed because of a data issue, model the data with objects and simple operations in an OOP style, or with a well-thought-out collection of arrays (game devs' answer to OOP causing memory and caching problems, though the end result is similar to an OOP thought process).
3. If that can't happen because some external restriction is imposed, use the minimal amount of procedural logic to solve the problem and round off as many sharp corners as is practical, until it is unlikely anyone on the team gets cut.
Logically that is very close to an inversion of the thought process and ordering of operations you suggested. But I think we would each recognize the other's attempts to pick a design paradigm in code.
Now I want to think about this more. Is there some underlying principle here? Where do domain-specific languages fit in? Do other paradigms fit in? What are the bounds of this pattern, and where does this process fail?
Look, I'm going to catch flak for this but at the end of the day the main problem is that Java and C++, the most popular OOP languages, are just bad programming languages.
There are OOP languages out there, most of them older than Java and C++, that actually provide a much better set of knobs and handles for writing sane OO programs.
Java is finally getting a bit better thanks to a lot of market pressure and good ideas from Kotlin. C++ will probably be a mess forever.
That's the strategy I always take when designing a system. Funny that I never thought about it explicitly before; I think most PHP developers will relate to it as well.
- Procedural single point entrance into the system (network -> public/index.php, cli -> bin/console)
- OOP core for business logic, heavily borrowed (copied) from Java OOP model
> These things might be good architectures, they will certainly benefit the developers that use them, but they are not, I repeat, not, a good substitute for the messiah riding his white ass into Jerusalem, or world peace. No, Microsoft, computers are not suddenly going to start reading our minds and doing what we want automatically just because everyone in the world has to have a Passport account.
When you write a procedure that has to maintain an internal state between calls, changing it into a class makes sense. As for the name, you change the verb (write) into a noun (writer), and you now have a name for the class.
C# will silently create hidden closure classes for you when you use lambdas or yield.
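In Python, the same verb-to-noun move might look like this (a hypothetical example): a `write` procedure that must remember its position between calls becomes a `Writer`.

```python
class LineWriter:
    """A procedure that kept state between calls, turned into a class.
    The verb "write" becomes the noun "Writer"."""

    def __init__(self) -> None:
        self._lines: list[str] = []
        self._count = 0  # state maintained between calls

    def write(self, text: str) -> None:
        self._count += 1
        self._lines.append(f"{self._count}: {text}")

    def result(self) -> str:
        return "\n".join(self._lines)
```

The class is justified here precisely because the state has to survive between calls; a stateless version would be better off as a plain function.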
James Gosling, who I'd consider the father of one of the most popular OO languages gave this advice:
"You should avoid implementation inheritance whenever possible"
My early days of Java were largely spent building unmaintainable inheritance trees into my code and then regretting it. This quote gave me comfort that it really wasn't that good an idea.
Although I agree with the recommendations, I cringe at the definition of abstraction. In a sane world, abstraction doesn't mean defining classes so much as it means identifying important unifying concepts. DRYing your code by moving a method to a common base class isn't abstraction in any important way, it's just adding a level of indirection. In fact, I'd argue that this example is the opposite of abstraction: it's concretion. Now every subclass relies implicitly on a particular implementation of that shared method. Not that doing this is never useful, but it's a mistake to call it abstraction when it's nothing of the sort. No wonder people complain that their abstractions leak.
I'm currently dealing with a codebase that does this to a ridiculous extent. Like, literally, every change affects the entire project because everything is made of base-classes mixed in weird ways. Every concrete object inherits multiple base classes and no individual behavior. Imagine something like this:
class Book extends ShelfableItem, Pagable, Authored, Readable, BaseBook {}
If by abstraction you mean identifying unifying concepts, then I can't understand how you reasoned yourself into thinking that identifying a common method and sharing it between multiple classes by means of a superclass is not abstraction. You have identified a commonality - the common code, the common method. By your own definition, that's abstraction.
I came to say more-or-less the same thing. The author is making some valid points, but the moral is that premature reification of abstract concepts may be harmful, especially if there is something vague about them.
I had to mull over this for a while, but I think I agree - abstractions at a conceptual level are much more powerful than object-level "compression".
Concepts/domain model/whatever tend to change over time though (at least in the business world, maybe not so much tooling etc). I think that's another source of leaky abstractions - things that conceptually made sense together at one point grow apart, and now you're left with common code that is deeply integrated but doesn't quite fit any more.
Gall’s law: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”
Your theory of premature architecture reinforces Gall’s law.
This is from the book Systemantics: How systems work and especially how they fail (1977).
Anyone who wants to do a deep dive into understanding effective abstractions, I highly recommend SICP. The full book[0] and lectures[1] are available online for free. You don't have to know Scheme to follow along.
I like to start with a fairly unambitious bit of procedural code and gradually introduce abstractions when it starts to get complicated or repetitious.
Straight code becomes functions, occasionally a group of functions cry out to become a class.
In C++ this is a huge effort to do - change hurts more there. In python it's much less painful and I end up with a program that is some imperfect composite of object oriented with functions. Next week when I want it to do more I move it further down the road to structure.
I also like keeping side effects in their own ghettos and extracting everything else out of those if possible but I'm not a big functional programming person - it's just about testing. Testing things with side effects is a pain.
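A sketch of that separation (invented names): the transformation is a pure function tested with plain lists, while the side effect is confined to a thin wrapper that isn't worth unit-testing at all.

```python
def summarize(lines: list[str]) -> str:
    """Pure: easy to test by passing in ordinary lists."""
    nonempty = [ln.strip() for ln in lines if ln.strip()]
    return f"{len(nonempty)} nonempty lines"

def summarize_file(path: str) -> str:
    """Impure wrapper: all I/O isolated here, in its own 'ghetto'."""
    with open(path) as f:
        return summarize(f.readlines())
```

Tests exercise `summarize` directly; `summarize_file` is thin enough that there is almost nothing in it to get wrong.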
I find that JS/TS also lends itself towards this in terms of Node/Deno/Bun usage for apps. You can have a file/module that simply exports a function, a collection of functions, a class, etc. It's easy to keep it simple and then combine with a mix of procedural, functional and oo concepts as best fits the use case.
Yes, I'm doing that a lot, too. I'm often astounded how hostile some languages are to later changes - e.g. java always feels resistant to change, while dotnet and especially python are more amenable. I.e. I totally transformed a program from function to oo in python without much sweat - would have been a total pain in dotnet or java
Wonderful write-up. One way I really try to avoid premature abstractions is co-locating code wherever it is used. If you don't try to put everything in a shared lib or utils, you keep the surface area small. Putting a function in the same file (or same folder) makes it clear that it is meant to be used in that folder only. You have to be diligent about your imports though, and be sure you don't have some crazy relative paths. Then, if you find yourself with that same function in lots of places in your code, you might have stumbled upon an abstraction. That's when you put it in the shared lib or utils folder. But maybe not; maybe that abstraction should stay in a nested folder because it is specifically used for a subset of problems. Again, that's to avoid over-abstracting. If you are only using it for 3 use cases that are all within the same parent folder path (just different sub-folders), then only co-locate it as far up in the file tree as is absolutely necessary for keeping the import simple. It requires due diligence, but the compartmentalization of the folder structure feels elegant in its simplicity.
I'm not "formally" trained in software engineering and am primarily self-taught. This area, in particular, has been confusing to me over the years, especially after consuming so many contradictory blog posts.
I tried to model DDD in a recent Golang project and am mostly unhappy with the result. Perhaps in my eagerness, I fell into the trap of premature abstraction, but there's not anything in particular that I can point to that might lead to that conclusion. It just feels like the overall code base is more challenging to read and has a ton of indirection. The feeling is made worse when I consider how much extra time I spent trying to be authentic in implementing it.
Now, I'm starting a new project, and I'm left in this uncertain state, not knowing what to do. Is there a fine balance? How is it achieved? I appreciate the author's attempt at bringing clarity, but I honestly walked away feeling even more confident that I don't understand how this all works out.
The one point of DDD is that you make a dictionary of all of the domain terms and get everybody to accept them.
You can cut those 4 pages and throw the rest of the book away. But it is one of the best books on software engineering, and only gets better once you do that.
> I'm not "formally" trained in software engineering and am primarily self-taught
Welcome to the club and I wouldn't be too worried about it (but definitely read and learn what others have figured out).
Software design and development is still an unsolved problem. The industry has not collectively found a foundational set of standard practices that apply across the board other than some of the most basic (e.g. organization is good).
You can tell that it's not solved by the relentless flow of industry trends that become the new "best practice" until some years later when we figure out "well, that approach has these pros and these cons and tends to fit with these types of problems, but definitely not a silver bullet, let's try the next thing"
Regarding your specific issue on your new project: just be pragmatic, get it working and learn from your decisions, it's all just a collection of pros and cons and the analysis of pro vs con changes depending on the angle you look at it (e.g. short term vs long term, slow changing environment vs fast changing environment, cost to value ratio, etc., etc., etc.)
I think there's plenty of good advice in this post, though the OP doesn't talk as much about the evils of premature abstraction as one might like. Still, they do talk about how to avoid it using reasonable programming guidelines.
In the talk about data structures, I was reminded of Fred Brooks' quote from MMM: "Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious." Several people have translated it for a modern audience as something like "Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious."
Several years ago I was happy to work with several people with an interest in philosophical history. We whiled away the hours thinking about whether these quotes represented something of the tension between Heraclitus (you cannot step into the same river twice) and Plato (everything is form and substance). So... I think the observation about the alternating utility of form and function is an old one.
As for Heraclitus vs. Plato, I think the lesson I’m trying to teach is to not pick a side until you understand each position’s implications and which of those might be more beneficial to the problem at hand ;)
A well chosen abstraction dramatically simplifies code and understanding, and a poorly chosen abstraction has the opposite effect.
When building a system some choices about abstractions can be made early, before the problem domain is fully understood. Sometimes they stand the test of time, other times they need to be reworked. Being aware of this and mindful of the importance of good abstractions is key to good system design.
In all seriousness though, you do hit a great point. The moment you stop being embarrassed about your mistakes and set your ego aside, is the moment that you can truly start learning from those same mistakes. At some point it even becomes the only way you can move forward, unless you want to stay boxed inside a niche of expertise defined by your own self-set boundaries.
> /// BAD: This mutates `input`, which may be unexpected by the caller.
> [...]
> /// GOOD: `input` is preserved and a new object is created for the output.
Neither of these is good or bad without knowing the context and understanding the tradeoffs. In particular, sometimes you want to mutate an existing object instead of duplicating it, especially if it's a big object that takes a while to duplicate.
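Side by side, the tradeoff might look like this (a hypothetical normalization step); neither version is universally "good" or "bad":

```python
def normalize_in_place(values: list[float]) -> None:
    """Mutates: no allocation, but the caller's list changes under them."""
    total = sum(values)
    for i, v in enumerate(values):
        values[i] = v / total

def normalized(values: list[float]) -> list[float]:
    """Pure: the input is preserved, at the cost of copying the list."""
    total = sum(values)
    return [v / total for v in values]
```

For a three-element list the copy is free; for a structure with millions of entries in a hot loop, the in-place version can be the only acceptable option.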
> Post-Architecture is a method of defining architecture incrementally, rather than designing it upfront
For anyone else wondering what it means.
I'm going to be honest, almost all architecture I've seen out in the wild has followed a more incremental approach. But then again everywhere I've worked hasn't separated the architecture/coding roles.
If you work with C# or Java in a lot of places, such as Banking in particular, you'll definitely see a lot of up-front architecture and excess abstractions early on.
>> Often, an abstraction doesn’t truly hide the data structures underneath, but it is bound by the limitations of the initial data structure(s) used to implement it. You want to refactor and use a new data structure? Chances are you need a new abstraction.
There's no greater joy in life than jumping through an abstract object, an object interface, and a factory method only to find out that the factory only services one object.
In my experience, I've always found the devil to be in the [late] details.
I have learned (the hard way), that, no matter how far I go down the rabbithole, in architecture, I am never a match for Reality.
I. Just. Can't. Plan. For. Everything.
I've learned to embrace the suck, so to speak. I admit that I don't know how things will turn out, once I get into the bush, so I try to design flexible architectures.
Flexibility sometimes comes as abstractions; forming natural "pivot points," but it can also come from leaving some stuff to be "dealt with later," and segmenting the architecture well enough to allow these to be truly autonomous. That can mean a modular architecture, with whitebox "APIs," between components.
People are correct in saying that OO can lead to insane codeballs, but I find that this is usually because someone thought they had something that would work for everything, and slammed into some devilish details, down the road, and didn't want to undo the work they already did.
I have learned to be willing to throw out weeks worth of work, if I determine the architecture was a bad bet. It happens a lot less, these days, than it used to. Hurts like hell, but I've found that it is often beneficial to do stuff I don't want to do. A couple of years ago, I threw out an almost-complete app, because it was too far off the beam. The rewrite works great, and has been shipping since January.
Anyway, I have my experience, and it often seems to be very different from that of others. I tend to be the one going back into my code, months, or years, after I wrote it, so I've learned to leave stuff that I like; not what someone else says it should be like, or that lets me tick off a square in Buzzword Bingo.
My stuff works, ships, and lasts (sometimes, for decades).
I know HN doesn't like quibbles about site design, but I'm literally having difficulty reading the article due to the font size being forced to be at least 1.3vw. Zooming out doesn't decrease the font size! Downvote if this is boring, but (a) I've never seen a site that did that before, so it's just notable from a "Daily WTF" kind of perspective, and (b) just in case the submitter is on HN: it's actually preventing me from reading the content (without changing it in DevTools anyway).
I would love a language that has this gradual evolutional abstracting as a core concern. That makes it easy. Where you can start from simplest imperative code and easily abstract it as the need for this arises.
For example, a language that requires a "this." or "self." prefix is not such a language, because you can't easily turn a script or a function into a method of some object.
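For illustration, here is the friction being described, in Python (invented names): promoting a free function to a method forces every reference to the state to be rewritten through `self`.

```python
# Before: a plain function over module-level state.
counter = 0

def bump(amount: int) -> int:
    global counter
    counter += amount
    return counter

# After: the same logic as a method. The body can't be moved verbatim;
# every access to the state has to change shape to `self.counter`.
class Counter:
    def __init__(self) -> None:
        self.counter = 0

    def bump(self, amount: int) -> int:
        self.counter += amount
        return self.counter
```

The logic is identical, but the mechanical rewrite is exactly the friction that makes "evolve it into an object later" more painful than it needs to be.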
> I would love a language that has this gradual evolutional abstracting as a core concern. That makes it easy. Where you can start from simplest imperative code and easily abstract it as the need for this arises.
This is about how I write Clojure.
I start out with some code that does the thing I want: either effectful code that "does the thing", or functions from data to data.
After a while, I feel like I'm missing a domain operation or two. At that point I've got an idea about what kind of abstraction I'm missing.
Rafael Dittwald describes the process of looking for domain operations and domain entities nicely here:
Here’s the scenario: hotshot intern comes in, calls a meeting to use generics so things can be done “easier”. He does a good job presenting it and its value, and the dumbass team lead OKs it. Fast forward one week: everyone complains about how much of a pain it is to use.
I used to read and write a lot of Scala and a bit of Haskell code which claims to be very similar to what I think you're mentioning here. There people also start with defining the domain in interfaces (algebras, eDSLs) and data types.
In the end it's still the same indirection and abstraction as in any other Java or Go codebase, and it prevents the developer from easily accessing the actual logic of the program.
The discussion of procedural code doesn't make sense to me, because it seems to mix together some orthogonal concepts.
Procedural is not the opposite of object-oriented (nor is it particularly contrasting); idiomatic OOP is procedural to a large degree. Effective functional programming happens when you ditch the procedural approach in favour of a more declarative approach.
Though I agree with the point about not creating objects/instances where a pure function will get the job done, I disagree with the general stance against OOP. I think OOP is absolutely essential to simplicity. FP tends to lead to too many indirections, with data traversing too many conceptual boundaries. FP (if used dogmatically) tends to encourage low cohesion. I want high cohesion and loose coupling. Some degree of co-location of state and logic is important, since that affects the cohesion and coupling of my modules.
The key to good OOP is to aim to only pass simple primitives or simple cloned objects as arguments to methods/functions. 'Spooky action at a distance' is really the only major issue with 'OOP', and it can easily be solved by simple pass-by-value function signatures. So really, it's not a fundamental issue with OOP itself. OOP doesn't demand pass by reference. Alan Kay emphasized messaging, which leans more toward 'pass by value': a message is information, not an object. We shouldn't throw out the baby with the bathwater.
When I catch a taxi, do I have to provide the taxi driver with a jerrycan full of petrol and a steering wheel? No. I just give the taxi driver the message of where I want to go. The taxi driver is responsible for the state of his car. I give him a message, not objects.
If I have to give a taxi driver a jerrycan full of petrol, that's the definition of a leaky abstraction... Possibly literally in this case.
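The taxi metaphor, roughly, as code (everything here is invented for illustration): the caller passes a simple value as a message, and the object owns and hides its own state.

```python
class Taxi:
    def __init__(self, fuel_litres: float) -> None:
        self._fuel = fuel_litres  # the driver's responsibility, not yours

    def drive_to(self, destination: str) -> str:
        """The message is just a value; no caller-owned objects (jerrycans,
        steering wheels) are handed in."""
        self._fuel -= 1.0  # internal bookkeeping stays internal
        return f"arrived at {destination}"
```

The caller never sees `_fuel`; the only coupling is the value passed in and the value returned.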
That said, I generally agree with this article. That's why I tend to write everything in 1 file at the beginning and wait for the file size to become a problem before breaking things up.
There are many different ways to slice things up and if you don't have a complete understanding of your business domain and possible future requirement changes, there is no way you will come up with the best abstractions and it's going to cost you dearly in the medium and long term.
A lot of developers throw their arms up and say stuff like "We cannot anticipate future requirement changes"... Well of course, not on day 1 of your new system!!! You shouldn't be creating complex abstractions from the beginning when you haven't fully absorbed the problem domain. You're locking yourself into anti-patterns and lots of busy-work by forcing yourself to constantly re-imagine your flawed original vision. It's easier to come up with a good vision for the future if you do it from scratch without conceptual baggage. Otherwise, you're just seeding bias into the project. Once you have absorbed it, you will see, you CAN predict many possible requirement changes. It will impact your architecture positively.
Coming up with good abstractions is really difficult. It's not about intelligence because even top engineers working in big tech struggle with it. Most of the working code we see is spaghetti.
Thanks! I would just like to clarify I’m actually not opposed to OOP at all, and at several points I tell people it’s fine to go in that direction for the problems where you need it. I do try to warn against it as a go-to solution before you’ve understood what are the problems that actually need fixing, which it sounds like we’re pretty aligned on.
Indeed if you pass by value/use immutability where feasible, you already avoid most of the issues I’m warning against, so it sounds like you found a sensible way to apply it while avoiding the pitfalls.
> If I have to give a taxi driver a jerrycan full of petrol, that's the definition of a leaky abstraction... Possibly literally in this case.
> The real problem with the class Foo above is that it is utterly and entirely unnecessary
I see this sentiment a _lot_ in anti-OO rants, and the problem is that the ranter is missing the point of OO _entirely_. Hard to fault them, since missing the point of OO entirely is pretty common but... if you're creating classes as dumb-data wrappers and reflexively creating getters and setters for all of your private variables then yes what you're doing _is_ utterly and entirely unnecessary, but you're not doing object-oriented design at all. The idea, all the way back to the creation of OO, was to expose actions and hide data. If you're adding a lot of syntax just to turn around and expose your data, you're just doing procedural programming with a redundant syntax.
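A sketch of that contrast (hypothetical classes): the first adds syntax without hiding anything, while the second exposes an action and keeps its representation private.

```python
class FooWrapper:
    """Dumb data wrapper: procedural programming with redundant syntax.
    The getters/setters expose exactly what they pretend to hide."""

    def __init__(self, value: int) -> None:
        self._value = value

    def get_value(self) -> int:
        return self._value

    def set_value(self, value: int) -> None:
        self._value = value


class RateLimiter:
    """Exposes an action; the counter and the limit never leak out."""

    def __init__(self, limit: int) -> None:
        self._limit = limit
        self._used = 0

    def try_acquire(self) -> bool:
        if self._used >= self._limit:
            return False
        self._used += 1
        return True
```

A caller of `RateLimiter` cannot even ask how many slots are left; it can only attempt the action, which is what "expose actions, hide data" means in practice.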
As someone who does a lot of Python, TS, dotnet and Java - I disagree. The problem with dotnet and Java is that everything is an object. And in many cases I don't need an object at all; it could be a static class - but honestly, the Python concept of a module fits a lot better. It's a grouping of functions in a module, not a class holding functions.
I've been a software engineer professionally for 25 years, and have been coding more like 30-35. There is a fundamental principle here that I agree with, and it surrounds a code smell that Martin Fowler termed "Speculative Generality" in his book "Refactoring."
Speculative Generality is when you don't know what will have to change in the future and so you abstract literally everything and make as many things "generic" as you possibly can in the chance that one of those generic abstractions may prove useful. The result is a confusing mess of unnecessary abstractions that adds complexity.
However, yet again I find myself staring at a reactionary post. If developers get themselves into trouble through speculative generality, then the answer is clearly "Primitive Obsession" (another code smell identified in "Refactoring") right?
Primitive Obsession is the polar opposite of abstraction. It dispenses with the introduction of high-level APIs that make working with code intuitive, and instead insists on working with native primitive types directly. Primitive Obsession often comes from a well meaning initiative to not "abstract prematurely." Why create a "Money" class when you can just store your currency figure in an integer? Why create a "PersonName" class when you can just pass strings around? If you're working in a language that supports classes and functions, why create a class to group common logical operations around a single data structure when you can instead introduce functions even if they take more parameters and could potentially lead to other problems such as "Shotgun Surgery."
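A minimal sketch of what the small abstraction buys (names invented): a bare integer can silently be added to any other integer, while a tiny Money type can enforce its own invariants.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    cents: int
    currency: str

    def __add__(self, other: "Money") -> "Money":
        # The invariant a bare int can't express: don't mix currencies.
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.cents + other.cents, self.currency)
```

With bare ints, `usd_cents + eur_cents` compiles and runs happily; the bug surfaces much later, somewhere far from its cause.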
This is not to say that the author is wrong or that one should embrace "premature abstraction." Only that I see a lot of reactionary thinking in software engineering. Most paradigms that we have today were developed in order to solve a very real problem around complexity at the time. Without understanding what that complexity was, historically, you are doomed to repeat the mistakes that the thinkers at the time were trying to address.
And of course, those iterations introduced new problems. Premature Abstraction IS a "foot gun." What software engineers need to remember is that the point of Design Patterns, the point of Abstractions, the point of High-Level languages and API design is to SIMPLIFY.
One term we hear a lot, that I have been on the war path against for the past decade or two is "over engineering." As engineers, part of our jobs is to find the simplest solution to a given problem. If, in your inappropriate use of a given design pattern or abstraction, you end up making something unnecessarily complicated, you did not "over engineer" it. You engaged in BAD engineering.
When it comes to abstractions, like anything else, the key is to gain the experience needed to understand a) why abstractions are useful and b) when abstractions can introduce complexity, and then apply that to a prediction of what will likely benefit from abstraction because it will be very difficult to change later.
All software changes. That's the nature of software and why software exists in the first place. Change is the strength of software but also a source of complexity. The challenge of writing code comes from change management. Being able to identify which areas of your code are going to be very difficult to change later, and to find strategies for facilitating that change.
Premature Abstraction throws abstractions at everything, even things that are unlikely to change, without the recognition that doing so makes the code more complex not less. Primitive Obsession says "we can always abstract this later if we need to" when in some situations, that will prove impossible(ex: integrating with and coupling to a 3rd party vendor; a form of "vendor lock-in" through code that is often seen).
Fine blog post overall, but the author fell into premature generalization themselves in declaring that little Foo class bad. The example is entirely too context-free for me to say anything negative about it at all. Depending on the context, a tiny class like that could be completely sensible or utterly unnecessary.
recursivedoubts|1 year ago
My general take (and w/ the caveat that every system is different) is as follows:
- procedural code to enter into the system (and perhaps that's all you need)
- object oriented code for domain modeling
- functional code for data structure transformations & some light custom control flow implementation (but not too much)
I like the imperative shell, functional core pattern quite a bit, and focusing on data structures is great advice as well. The anti-OO trend in the industry has been richly earned by the OO architecture astronauts[1], but the idea of gathering a data structure and the operations on that data structure in a single place, with data hiding, is a good one, particularly for domain modeling.
In general I think we are maturing as an industry, recognizing that various approaches have their strengths and weaknesses and a good software engineer can mix and match them when building a successful software project.
There is no silver bullet. If only someone had told us that years ago!
[1] - https://www.joelonsoftware.com/2001/04/21/dont-let-architect...
bunderbunder|1 year ago
OO code for domain modeling might be, to date, the single greatest source of disillusionment in my career.
There are absolutely use cases where it works very well. GUI toolkits come to mind. But for general line-of-business domain modeling, I keep noticing two big mismatches between the OO paradigm and the problem at hand. First and foremost, allowing subtyping into your business domain model is a trap. The problem is that your business rules are subject to change, you likely have limited or even no control over how they change, and the people who do get to make those decisions don't know and don't care about the Liskov Substitution Principle. In short, using one of the headline features of OOP for business domain modeling exposes you to outsize risk of being forced to start doing it wrong, regardless of your intentions or skill level. (Incidentally, this phenomenon is just a specific example of premature abstraction being the root of all evil.)
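To make the subtyping trap concrete, here is a minimal sketch (all names are invented for illustration, not taken from the article): a hierarchy that looks clean on day one, until the business hands down rules that cut across the subtype boundary.

```python
# Hypothetical day-one model: premium customers get a discount.
class Customer:
    def discount(self) -> float:
        return 0.0

class PremiumCustomer(Customer):
    def discount(self) -> float:
        return 0.10

# Then the business decides: 10% off, except in Germany, and only if the
# cart is over 50. Those rules don't respect the subtype boundary, so the
# hierarchy becomes a place to fight the rules rather than express them.
# A flat rule function over plain data tends to bend more gracefully:
def discount_for(tier: str, cart_total: float, country: str) -> float:
    if tier == "premium" and cart_total > 50 and country != "DE":
        return 0.10
    return 0.0
```

The point is not that the function is prettier; it's that the next arbitrary rule change is a one-line edit instead of a hierarchy redesign.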
And then, second, dynamic dispatch makes it harder for newcomers to figure out the business logic by reading the code. It creates a bit of a catch-22 situation where figuring out which methods will run when - an essential part of understanding how the code behaves - almost requires already knowing how the code works. Not literally, of course, but reading unfamiliar code that uses dynamic dispatch is an advanced skill, and nobody enjoys it. Admittedly, this problem can be mitigated with documentation. But that solution is unsatisfying. Just using procedural code and banging out whatever boilerplate you need to get things working with static dispatch creates less additional work than writing and maintaining satisfying documentation for an object-oriented codebase, and comes with the added advantage that it cannot fall out of sync with what the code actually does.
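A tiny sketch of the readability difference (illustrative names only):

```python
class Handler:
    def handle(self, msg):
        raise NotImplementedError

def process(handler: Handler, msg):
    # Which handle() runs? It depends on the runtime type of `handler`,
    # which the reader cannot see from this call site.
    return handler.handle(msg)

# The statically dispatched version is more boilerplate, but the control
# flow is spelled out right where it happens:
def handle_order(msg):
    return ("order", msg)

def handle_refund(msg):
    return ("refund", msg)

def process_static(kind: str, msg):
    if kind == "order":
        return handle_order(msg)
    return handle_refund(msg)
```

In the second version a newcomer can answer "what runs when?" with a text search instead of a mental model of the whole class hierarchy.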
Incidentally, Donald Knuth made a similar observation in his interview in the book Coders at Work. He expressed dissatisfaction with OOP on the grounds that, for the purposes of maintainability, he found code reuse to be less valuable than modifiability and readability.
dgb23|1 year ago
I'm not sure about that.
If we're talking about IT (information processing in general), then the domain model is just data representing facts and should probably be treated as that, and not some metaphorical simulation of the world.
I've come up with a pretty useful test for when to apply OO:
When you need to model a _computational unit_[0] in terms of _operational semantics_, then use OO.
[0] Decidedly _not_ a simulation of a metaphor for the "real world".
---
Examples:
A resizable buffer: You want operations like adding, removing, preemptively resizing etc. on a buffer. It's useless to think of the internal bookkeeping of a buffer that is represented in its data structure when you use it.
A database object: It wraps a driver, a connection pool etc. From the outside you want to configure it at the start, then you want to interact with it via operations.
An HTTP server: You send messages to it via HTTP methods; you don't care about its internal state, only about your current representation of it (HATEOAS) and what you can do with it.
A memory allocator: The name gives away that you can _do_ things with it. You first choose the allocator that fits your needs, but then you _operate_ on it via alloc/free etc.
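The buffer example above can be sketched as a small computational unit - callers see operations, never the bookkeeping (a hypothetical minimal version, not from any particular library):

```python
class ResizableBuffer:
    """A computational unit: used via operations, internals stay hidden."""

    def __init__(self, capacity: int = 4):
        self._items = []
        self._capacity = capacity

    def add(self, item):
        if len(self._items) >= self._capacity:
            self._capacity *= 2   # internal bookkeeping, invisible to callers
        self._items.append(item)

    def remove(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)
```

Whether this is encoded as a class, a closure, or a struct-plus-functions is incidental; the operational interface is what makes it an object in the useful sense.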
---
Some of us wince when we hear "OO", because it has been an overused paradigm. Some advocates of OO have been telling us that it is somehow total (similar to FP advocates) and people have been pushing back on this for a while now.
When applied to information processing especially, it becomes ridiculous, complex and distracting. I call this "Kindergarten OO": you write code as if you were explaining the problem to a child via metaphors.
Computational objects however arise naturally and are very obvious. I don't care if those are encoded as classes, with closures or if we syntactically pretend as if they aren't objects. They are still objects.
epgui|1 year ago
You will never be able to unsee the complexity inherent to OOP.
mejutoco|1 year ago
One can do this in a module without OOP.
The idea of mixing data and behaviour/state (OOP), instead of keeping data structures and the functions transforming them separate (functional), is IMO the biggest mistake of OOP, together with using inheritance.
I believe making part of the program data instead of code (and thus free of bugs) is such a big advantage; Lisp was already talking about this. Mixing data with behaviour without a clear delimitation creates a tightly-coupled implementation full of implicit assumptions. Outside the class things are clean, but inside they ossify and grow in complexity. Pure functions with data in, data out are such a big improvement in clarity when possible.
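A minimal sketch of the data-in, data-out style (names are illustrative):

```python
def apply_raise(employee: dict, pct: float) -> dict:
    # Pure function: plain dict in, new plain dict out, no hidden state.
    return {**employee, "salary": employee["salary"] * (1 + pct)}

alice = {"name": "Alice", "salary": 100_000}
promoted = apply_raise(alice, 0.10)
# `alice` is untouched; both versions of the record can be inspected freely,
# printed, serialized, or diffed, with no implicit assumptions hiding inside
# a class.
```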
sqeaky|1 year ago
1. Try to solve the problem purely functionally.
2. If that failed because of a data issue, model the data with objects and simple operations in an OOP style, or as a well-thought-out collection of arrays (the game devs' answer to OOP causing memory and caching problems, though the end result is a similar thought process to OOP).
3. If that can't happen because some external restriction is imposed, use the minimal amount of procedural logic to solve the problem, and round off as many sharp corners as is practical until it is unlikely anyone on the team gets cut.
Logically that is very close to inversion of thought process and ordering of operations to what you suggested. But I think we would recognize each others attempts to pick a design paradigm in code.
Now I want to think about this more. Is there some underlying principle here? Where do domain-specific languages fit in? Do other paradigms fit in? What are the bounds of this pattern, and where does this process fail?
voidhorse|1 year ago
There are OOP languages out there, most of them older than Java and C++, that actually provide a much better set of knobs and handles for writing sane OO programs.
Java is finally getting a bit better thanks to a lot of market pressure and good ideas from Kotlin. C++ will probably be a mess forever.
kingofthehill98|1 year ago
- Procedural single point entrance into the system (network -> public/index.php, cli -> bin/console)
- OOP core for business logic, heavily borrowed (copied) from Java OOP model
- Functional code inside classes/methods whenever possible (callables/closures, array_map/filter/reduce/walk, illuminate/collections, etc.)
gwbas1c|1 year ago
Priceless.
Dwedit|1 year ago
C# will silently create hidden closure classes for you when you use lambdas or yield.
NomDePlum|1 year ago
"You should avoid implementation inheritance whenever possible"
My early days of Java were largely spent building unmaintainable inheritance trees into my code and then regretting it. This quote gave me comfort that it wasn't really that good an idea.
A decent discussion of inheritance vs. composition can also be found here: https://en.m.wikipedia.org/wiki/Composition_over_inheritance
bubblyworld|1 year ago
Concepts/domain model/whatever tend to change over time though (at least in the business world, maybe not so much tooling etc). I think that's another source of leaky abstractions - things that conceptually made sense together at one point grow apart, and now you're left with common code that is deeply integrated but doesn't quite fit any more.
jaynate|1 year ago
Your theory of premature architecture reinforces Gall’s law.
This is from the book Systemantics: How systems work and especially how they fail (1977).
https://en.wikipedia.org/wiki/Systemantics
Zambyte|1 year ago
[0] https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
[1] https://ocw.mit.edu/courses/6-001-structure-and-interpretati...
morkalork|1 year ago
Is a hilarious understatement. No, you don't need to know it when you start the book but when you finish, you certainly will.
t43562|1 year ago
Straight code becomes functions, occasionally a group of functions cry out to become a class.
In C++ this is a huge effort - change hurts more there. In Python it's much less painful, and I end up with a program that is some imperfect composite of object-oriented code and functions. Next week, when I want it to do more, I move it further down the road to structure.
I also like keeping side effects in their own ghettos and extracting everything else out of those if possible but I'm not a big functional programming person - it's just about testing. Testing things with side effects is a pain.
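The "side effects in their own ghettos" idea can be sketched like this (hypothetical names): the pure core gets exhaustive unit tests for free, and the thin I/O wrapper is the only part that ever needs a real file.

```python
# Pure core: trivially testable, no filesystem in sight.
def summarize(lines: list[str]) -> str:
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

# Side-effect "ghetto": thin and boring, the only part that touches disk.
def summarize_file(path: str) -> str:
    with open(path) as f:
        return summarize(f.read().splitlines())
```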
aliasxneo|1 year ago
I tried to model DDD in a recent Golang project and am mostly unhappy with the result. Perhaps in my eagerness, I fell into the trap of premature abstraction, but there's not anything in particular that I can point to that might lead to that conclusion. It just feels like the overall code base is more challenging to read and has a ton of indirection. The feeling is made worse when I consider how much extra time I spent trying to be authentic in implementing it.
Now, I'm starting a new project, and I'm left in this uncertain state, not knowing what to do. Is there a fine balance? How is it achieved? I appreciate the author's attempt at bringing clarity, but I honestly walked away feeling even more confident that I don't understand how this all works out.
marcosdumay|1 year ago
You can cut those 4 pages and throw the rest of the book away. But it is one of the best books on software engineering, and only gets better once you do that.
RaftPeople|1 year ago
Welcome to the club and I wouldn't be too worried about it (but definitely read and learn what others have figured out).
Software design and development is still an unsolved problem. The industry has not collectively found a foundational set of standard practices that apply across the board other than some of the most basic (e.g. organization is good).
You can tell that it's not solved by the relentless flow of industry trends that become the new "best practice" until some years later when we figure out "well, that approach has these pros and these cons and tends to fit with these types of problems, but definitely not a silver bullet, let's try the next thing"
Regarding your specific issue on your new project: just be pragmatic, get it working and learn from your decisions, it's all just a collection of pros and cons and the analysis of pro vs con changes depending on the angle you look at it (e.g. short term vs long term, slow changing environment vs fast changing environment, cost to value ratio, etc., etc., etc.)
OhMeadhbh|1 year ago
In the talk about data structures, I was reminded of Fred Brooks' quote from MMM: "Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious." Several people have translated it for a modern audience as something like "Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious."
Several years ago I was happy to work with several people with an interest in philosophical history. We whiled away the hours thinking about whether these quotes represented something of the tension between Heraclitus ("You cannot step into the same river twice") and Plato (everything is form and substance). So... I think the observation about the alternating utility of form and function is an old one.
arendjr|1 year ago
As for Heraclitus vs. Plato, I think the lesson I’m trying to teach is to not pick a side until you understand each position’s implications and which of those might be more beneficial to the problem at hand ;)
resters|1 year ago
When building a system some choices about abstractions can be made early, before the problem domain is fully understood. Sometimes they stand the test of time, other times they need to be reworked. Being aware of this and mindful of the importance of good abstractions is key to good system design.
BobbyTables2|1 year ago
Fortunately, there are non-invasive therapies that can reduce the frequency of occurrence.
arendjr|1 year ago
In all seriousness though, you do hit a great point. The moment you stop being embarrassed about your mistakes and set your ego aside, is the moment that you can truly start learning from those same mistakes. At some point it even becomes the only way you can move forward, unless you want to stay boxed inside a niche of expertise defined by your own self-set boundaries.
yellowapple|1 year ago
> [...]
> /// GOOD: `input` is preserved and a new object is created for the output.
Neither of these is good or bad without knowing their context and understanding their tradeoffs. In particular, sometimes you want to mutate an existing object instead of duplicating it, especially if it's a big object that takes a while to duplicate.
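The tradeoff in one small example: both forms below are legitimate, and the right one depends on whether any caller still needs the original and how expensive the copy is.

```python
big = list(range(1_000_000, 0, -1))   # a large, worst-case-ordered list

# Copying preserves the input but pays for a full duplicate:
sorted_copy = sorted(big)             # `big` is untouched here

# Mutating in place avoids the allocation, if nobody needs the original:
big.sort()                            # `big` itself is now sorted
```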
aswerty|1 year ago
For anyone else wondering what it means.
I'm going to be honest, almost all architecture I've seen out in the wild has followed a more incremental approach. But then again everywhere I've worked hasn't separated the architecture/coding roles.
phkahler|1 year ago
Data structures are abstractions :-)
ChrisMarshallNY|1 year ago
I have learned (the hard way), that, no matter how far I go down the rabbithole, in architecture, I am never a match for Reality.
I. Just. Can't. Plan. For. Everything.
I've learned to embrace the suck, so to speak. I admit that I don't know how things will turn out, once I get into the bush, so I try to design flexible architectures.
Flexibility sometimes comes as abstractions; forming natural "pivot points," but it can also come from leaving some stuff to be "dealt with later," and segmenting the architecture well enough to allow these to be truly autonomous. That can mean a modular architecture, with whitebox "APIs," between components.
People are correct in saying that OO can lead to insane codeballs, but I find that this is usually because someone thought they had something that would work for everything, and slammed into some devilish details, down the road, and didn't want to undo the work they already did.
I have learned to be willing to throw out weeks worth of work, if I determine the architecture was a bad bet. It happens a lot less, these days, than it used to. Hurts like hell, but I've found that it is often beneficial to do stuff I don't want to do. A couple of years ago, I threw out an almost-complete app, because it was too far off the beam. The rewrite works great, and has been shipping since January.
Anyway, I have my experience, and it often seems to be very different from that of others. I tend to be the one going back into my code, months, or years, after I wrote it, so I've learned to leave stuff that I like; not what someone else says it should be like, or that lets me tick off a square in Buzzword Bingo.
My stuff works, ships, and lasts (sometimes, for decades).
scotty79|1 year ago
For example, a language that requires a "this." or "self." prefix is not such a language, because you can't easily turn a script or a function into a method of some object.
teodorlu|1 year ago
This is about how I write Clojure.
I start out with some code that does the thing I want - either effectful code that "does the thing", or functions from data to data.
After a while, I feel like I'm missing a domain operation or two. At that point I've got an idea about what kind of abstraction I'm missing.
Rafael Dittwald describes the process of looking for domain operations and domain entities nicely here:
https://youtu.be/vK1DazRK_a0
agentultra|1 year ago
There are other design “paradigms,” such as denotational design. You start with and build from abstractions.
vilunov|1 year ago
In the end it's still the same indirection and abstraction as in any other Java or Go codebase, and it prevents the developer from easily accessing the actual logic of the program.
epgui|1 year ago
Procedural is not the opposite of object-oriented (nor is it particularly contrasting); idiomatic OOP is procedural to a large degree. Effective functional programming happens when you ditch the procedural approach in favour of a more declarative approach.
cryptica|1 year ago
The key to good OOP is to aim to pass only simple primitives or simple cloned objects as arguments to methods/functions. 'Spooky action at a distance' is really the only major issue with 'OOP', and it can easily be solved by simple pass-by-value function signatures. So really, it's not a fundamental issue with OOP itself; OOP doesn't demand pass-by-reference. Alan Kay emphasized messaging, which leans more toward 'pass by value': a message is information, not an object. We shouldn't throw out the baby with the bathwater.
When I catch a taxi, do I have to provide the taxi driver with a jerrycan full of petrol and a steering wheel? No. I just give the taxi driver the message of where I want to go. The taxi driver is responsible for the state of his car. I give him a message, not objects.
If I have to give a taxi driver a jerrycan full of petrol, that's the definition of a leaky abstraction... Possibly literally in this case.
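To make the taxi analogy concrete, a sketch (all names invented here): the rider hands over an immutable message, and the driver's state stays the driver's problem.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RideRequest:
    """A message: plain information, nothing for the receiver to mutate."""
    destination: str

class TaxiDriver:
    def __init__(self):
        self._fuel = 40.0              # the driver's own state, not the rider's

    def drive(self, request: RideRequest) -> str:
        self._fuel -= 5.0              # internal concern, hidden from the caller
        return f"arrived at {request.destination}"
```

The rider never sees the fuel level; no jerrycan changes hands.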
That said, I generally agree with this article. That's why I tend to write everything in 1 file at the beginning and wait for the file size to become a problem before breaking things up.
There are many different ways to slice things up and if you don't have a complete understanding of your business domain and possible future requirement changes, there is no way you will come up with the best abstractions and it's going to cost you dearly in the medium and long term.
A lot of developers throw their arms up and say stuff like "We cannot anticipate future requirement changes"... Well of course, not on day 1 of your new system!!! You shouldn't be creating complex abstractions from the beginning when you haven't fully absorbed the problem domain. You're locking yourself into anti-patterns and lots of busy-work by forcing yourself to constantly re-imagine your flawed original vision. It's easier to come up with a good vision for the future if you do it from scratch without conceptual baggage. Otherwise, you're just seeding bias into the project. Once you have absorbed it, you will see, you CAN predict many possible requirement changes. It will impact your architecture positively.
Coming up with good abstractions is really difficult. It's not about intelligence because even top engineers working in big tech struggle with it. Most of the working code we see is spaghetti.
arendjr|1 year ago
Indeed if you pass by value/use immutability where feasible, you already avoid most of the issues I’m warning against, so it sounds like you found a sensible way to apply it while avoiding the pitfalls.
> If I have to give a taxi driver a jerrycan full of petrol, that's the definition of a leaky abstraction... Possibly literally in this case.
:D
commandlinefan|1 year ago
I see this sentiment a _lot_ in anti-OO rants, and the problem is that the ranter is missing the point of OO _entirely_. Hard to fault them, since missing the point of OO entirely is pretty common, but... if you're creating classes as dumb-data wrappers and reflexively creating getters and setters for all of your private variables, then yes, what you're doing _is_ utterly and entirely unnecessary - but you're not doing object-oriented design at all. The idea, all the way back to the creation of OO, was to expose actions and hide data. If you're adding a lot of syntax just to turn around and expose your data, you're just doing procedural programming with a redundant syntax.
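The contrast between "expose actions, hide data" and the anemic getter/setter style, as a minimal sketch (hypothetical names):

```python
# Anemic wrapper: the getters/setters just re-expose the data, so this is
# procedural programming with extra syntax.
class AccountData:
    def __init__(self):
        self._balance = 0

    def get_balance(self):
        return self._balance

    def set_balance(self, balance):
        self._balance = balance

# Action-oriented: the class exposes what you can *do*, and the invariant
# (no overdrafts) lives in exactly one place.
class Account:
    def __init__(self):
        self._balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
```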
gspencley|1 year ago
Speculative Generality is when you don't know what will have to change in the future, and so you abstract literally everything and make as many things "generic" as you possibly can, on the chance that one of those generic abstractions may prove useful. The result is a confusing mess of unnecessary abstractions that adds complexity.
However, yet again I find myself staring at a reactionary post. If developers get themselves into trouble through speculative generality, then the answer is clearly "Primitive Obsession" (another code smell identified in "Refactoring") right?
Primitive Obsession is the polar opposite of abstraction. It dispenses with the introduction of high-level APIs that make working with code intuitive, and instead insists on working with native primitive types directly. Primitive Obsession often comes from a well-meaning initiative to not "abstract prematurely." Why create a "Money" class when you can just store your currency figure in an integer? Why create a "PersonName" class when you can just pass strings around? If you're working in a language that supports classes and functions, why create a class to group common logical operations around a single data structure when you can instead introduce functions, even if they take more parameters and could potentially lead to other problems such as "Shotgun Surgery"?
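The classic Money example, sketched minimally (an illustration of the smell, not anyone's production design): with bare ints, nothing stops you from adding dollars to euros; a tiny type makes that a hard error.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    amount_cents: int   # integer cents sidesteps float rounding issues
    currency: str

    def __add__(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount_cents + other.amount_cents, self.currency)

# With raw ints, `price_usd + shipping_eur` compiles and silently lies.
# With Money, the mistake fails loudly at the point of the bug.
```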
This is not to say that the author is wrong or that one should embrace "premature abstraction." Only that I see a lot of reactionary thinking in software engineering. Most paradigms that we have today were developed in order to solve a very real problem around complexity at the time. Without understanding what that complexity was, historically, you are doomed to repeat the mistakes that the thinkers at the time were trying to address.
And of course, those iterations introduced new problems. Premature Abstraction IS a "foot gun." What software engineers need to remember is that the point of Design Patterns, the point of Abstractions, the point of High-Level languages and API design is to SIMPLIFY.
One term we hear a lot, and that I have been on the warpath against for the past decade or two, is "over engineering." As engineers, part of our job is to find the simplest solution to a given problem. If, in your inappropriate use of a given design pattern or abstraction, you end up making something unnecessarily complicated, you did not "over engineer" it. You engaged in BAD engineering.
When it comes to abstractions, like anything else, the key is to gain the experience needed to understand a) why abstractions are useful and b) when abstractions can introduce complexity, and then to apply that to a prediction of what will likely benefit from abstraction because it will be very difficult to change later.
All software changes. That's the nature of software and why software exists in the first place. Change is the strength of software but also a source of complexity. The challenge of writing code comes from change management: being able to identify which areas of your code are going to be very difficult to change later, and finding strategies for facilitating that change.
Premature Abstraction throws abstractions at everything, even things that are unlikely to change, without the recognition that doing so makes the code more complex, not less. Primitive Obsession says "we can always abstract this later if we need to" when in some situations that will prove impossible (e.g. integrating with, and coupling to, a third-party vendor - a form of "vendor lock-in" through code that is often seen).
/stream-of-consciousness-thoughts-on-article
jimmaswell|1 year ago