item 25406798

Why software ends up complex

188 points | kiyanwang | 5 years ago | alexgaynor.net

168 comments

[+] recursivedoubts|5 years ago|reply
This is one vector for complexity, to be sure. Saying "no" to a feature that is unnecessary, foists a lot of complexity on a system, or has a low power to weight ratio is one of the best skills a senior developer can develop.

One of my favorite real world examples is method overloading in Java. It's not a particularly useful feature (especially given alternative, less complicated features like default parameter values), interacts poorly with other language features (e.g. varargs) and ends up making all sorts of things far more complex than necessary: bytecode method invocation now needs to encode the entire type signature of a method, method resolution during compilation requires complex scoring, etc. The JVM language I worked on probably had about 10% of its total complexity dedicated to dealing with this "feature" of dubious value to end users.
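To make the resolution-scoring point concrete, here's a small sketch (class and method names invented) of how the JLS's phased overload resolution interacts with widening, boxing, and varargs:

```java
public class OverloadDemo {
    static String f(long x)    { return "long"; }    // applicable via widening
    static String f(Integer x) { return "Integer"; } // applicable via boxing
    static String f(int... xs) { return "varargs"; } // applicable via varargs

    public static void main(String[] args) {
        // The compiler scores candidates in three phases: strict invocation
        // (widening allowed, no boxing), then boxing, then varargs.
        // f(1) matches f(long) in phase 1, so the other two never compete.
        System.out.println(f(1)); // prints "long"
    }
}
```

Every call site forces the compiler (and the reader) to run this scoring in their head, which is the complexity tax the comment describes.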

Another vector, more dangerous for senior developers, is thinking that abstraction will necessarily work when dealing with complexity. I have seen OSGi projects achieve negative complexity savings, while chewing up decades of senior man-years, for example.

[+] ozim|5 years ago|reply
Well, what I've mostly experienced in my years in the field is that developers, whether senior or not, feel obliged to create abstract solutions.

Somehow people feel that if they don't build a generic solution for the problem at hand, they've failed.

In reality the opposite is often true: when people try to make a generic solution, they fail to make something simple, quick, and easy for others to understand. Let alone the idea that abstraction will make the system flexible and easier to change in the future; they don't know the future, and there always comes a plot twist that does not fit into the "perfect architecture". So I agree with the idea that abstraction is not always the best response to a complex system. Sometimes copy, paste, and change is the better approach.

[+] monoideism|5 years ago|reply
> Another vector, more dangerous for senior developers, is thinking that abstraction will necessarily work when dealing with complexity.

I'm pretty good at fighting off features that add too much complexity, but the abstraction trap has gotten me more than once. Usually, a moderate amount of abstraction works great. I've even done well with some really clever abstractions.

Abstraction can be seductive, because it can have a big payoff in reducing complexity. So it's often hard to draw the line, particularly when working in a language with a type of abstraction I've not worked much with before.

Often the danger point comes when you understand how to use an abstraction competently, but you don't yet have the experience needed to be an expert at it.

[+] nradov|5 years ago|reply
As a Java end user I'm really glad that method overloading exists. The two largest libraries I ever built would have been huge messes without overloading. But I take your point that method overloading might be a net negative for the Java platform as a whole.
[+] SkyPuncher|5 years ago|reply
> This is one vector for complexity, to be sure. Saying "no" to a feature that is unnecessary, foists a lot of complexity on a system, or has a low power to weight ratio is one of the best skills a senior developer can develop.

I don't consider myself to be an exceptional developer, but this alone has advanced my career much faster than if I were purely technically competent. Ultimately, this is a sense of business understanding. The more senior/ranking you are at a company, the more important it is for you to have this tuned well.

It can be really, really hard to say no at first, but over time the people asking you to build things adapt. Features become smaller, use cases become stronger, and teams generally operate happier. It's much better to build one really strong feature and fill in the gaps with small enhancements than it is to build everything. Eventually, you might build "everything", but you certainly don't need it now. If your product can't exist without "everything", you don't have a strong enough business proposition.

----

Note: "No" doesn't mean literally "I'm/we're not building this". It can mean two things:

* Forcing a priority. This is the easiest way to say no, and people won't even notice it. Force a priority for your next sprint. Build a bunch of stuff in a sprint. Force a priority for another sprint. Almost inevitably, new features will be prioritized over the unimportant leftovers. On a 9 month project, I have a 3 month backlog of things that simply became less of a priority. We may build them, but there's a good chance nobody is missing them. Even if we build half of them, that still puts my team 1.5 months ahead. For a full year, that's almost like getting 2 additional months of build time.

* Suggesting an easier alternative. Designers have good hearts and intentions, but don't always know how technically difficult something will be. I'm very aggressive about proposing 80/20 features - i.e., we can accomplish almost the same thing in a much cheaper way. Do this on 1 to 3 features a sprint and suddenly you're churning out noticeably more value.

[+] userbinator|5 years ago|reply
> I have seen OSGi projects achieve negative complexity savings, while chewing up decades of senior man-years, for example.

I'm not surprised; that and a lot of the Java "culture" in general seems to revolve around creating solutions to problems which are either self-inflicted or don't actually exist in practice, ultimately being oriented towards extracting the most (personal) profit for a given task. In other words: why make simple solutions when more complex ones will allow developers to spend more time and thus be paid more for the solution? When questioned, they can always point to an answer laced with popular buzzwords like "maintainability", "reusability", "extensibility", etc.

[+] OskarS|5 years ago|reply
I always found it surprising that Java implemented method overloading, but not operator overloading for arithmetic/logical operators. It's such a useful feature for a lot of types and really cleans up code, and the only real reason it's hard to do is because it relies on method overloading. But once you have that, why not just sugar "a + b" into "a.__add(b)" (or whatever).

You don't have to go all C++ insane with it and allow overloading of everything, but just being able to do arithmetic with non-primitive types would be very nice.
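A hand-written version of that desugaring might look like this (the `__add` name is the parent's hypothetical sugar target, not real Java; the `Vec2` type is invented for illustration):

```java
// Sketch: if the compiler rewrote `a + b` as `a.__add(b)`, a user type
// would only need to supply that method. Today we must call it explicitly.
final class Vec2 {
    final double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }

    // What `a + b` would desugar to under the proposed sugar.
    Vec2 __add(Vec2 other) { return new Vec2(x + other.x, y + other.y); }
}

public class SugarDemo {
    public static void main(String[] args) {
        Vec2 sum = new Vec2(1, 2).__add(new Vec2(3, 4)); // i.e. `a + b`
        System.out.println(sum.x + "," + sum.y); // prints "4.0,6.0"
    }
}
```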

[+] karmakaze|5 years ago|reply
Optionals help, but what's really missing is union types for arguments.
[+] bertr4nd|5 years ago|reply
What is method scoring? I’ve never heard that term in VM/compilers and my Google-fu is failing me.
[+] taeric|5 years ago|reply
Your example is funny. Default parameters are far more complicated than method overloading.
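For context, the usual Java workaround for missing default parameters is itself built on overloading (telescoping overloads), which is part of why the two features are entangled. A minimal sketch, with invented names:

```java
public class Greeter {
    // Overload standing in for a default value of `greeting = "Hello"`.
    static String greet(String name) { return greet(name, "Hello"); }

    static String greet(String name, String greeting) {
        return greeting + ", " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greet("Ada"));          // prints "Hello, Ada!"
        System.out.println(greet("Ada", "Howdy")); // prints "Howdy, Ada!"
    }
}
```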

That said, I fully agree on the OSGi point. Makes me worried about a lot of the new features I hear are on the way. :(

[+] dimmke|5 years ago|reply
Taking on the responsibility of pushing back hard on poorly conceived new features is one of the important hidden skills of being an effective software developer. Programmers who just do whatever they get told like marching ants end up shooting an organization in the foot long term.

You have to develop an opinion of the thing you're building/maintaining and what it should be able to do and not do. You can't count on project managers to do that.

The trick to doing this effectively is to find out the problem the feature is actually trying to solve and provide a better solution.

Usually the request is from end users of the software: they have identified the problem (we need to do xyz) and prescribed a solution (put the ability to do xyz in a modal on this page). But if you can look at what other software has done, do a UX review, and find a way to add a feature that solves their problem in a way that makes sense in the larger context of the software, they won't have a problem with it, since it solves their problem and the codebase takes less of a hit.

Unfortunately, it's a lot easier to just add the modal without complaint.

[+] alkonaut|5 years ago|reply
> Programmers who just do whatever they get told like marching ants end up shooting an organization in the foot long term.

This. This is why you want senior devs too. You want people who can stand up to poorly conceived features. The most important job is to say “no”, or “how about X instead?”. I get furious when I see senior colleagues defend horrible design decisions with “it was what was specified”. Your job as a developer is to take the spec from the product owner and suggest one that actually fits the system in terms of maintainability, performance etc. Blindly implementing a specification is a horrible idea.

[+] elb2020|5 years ago|reply
> You can't count on project managers to do that.

This is one of my pet peeves when it comes to software development. I _really_ think that software development project managers ought to be able to spot the difference between a good architectural decision and a bad one, a good design decision and a bad one, a well-implemented function and a badly implemented one. My heart sinks, as a software development professional, at having to work for project managers who, in many cases, would be hard pressed to explain what a byte is. It's just so wrong.

It's like working for a newspaper editor who does not know how to read or write. It does not mean that you cannot produce a newspaper, but it depends upon the workers stepping in and making all the strategic technical decisions behind the project manager's back. As an engineer you can live with it for some time, but eventually it ends up feeling fake, like a masquerade.

I'm much more in favor of hands-on leadership types like Microsoft's Dave Cutler, with the technical skills to actually lead, and not just superficially 'manage'.

[+] officehero|5 years ago|reply
> Taking on the responsibility of pushing back hard on poorly conceived new features is one of the important hidden skills of being an effective software developer.

Example from my current project: 1. Inherit a half-finished large software system with lots of features. 2. It contains bugs and is impossible to effectively maintain/develop with the allocated manpower. 3. Management still wants all features. 4. Be brave and don't work on anything except essentials until they're sorted out. Lie to management if you have to, e.g. that you found serious bugs that must be fixed (which is kind of true, but they wouldn't understand).

[+] gfxgirl|5 years ago|reply
I've also seen the opposite: leads who push for minimal features to the point that, IMO, the product would fail.

I don't know what good examples would be. Maybe a word processor without support for bold and italics. Maybe a compiler with no error messages. Maybe an email client with no threading.

Does a word processor need every feature of Word? No. But does it need more than notepad? Yes!

Basically, you get one chance to make a good first impression. If the customers and press label your product a certain way, it will take a ton of effort to overcome the inertia of the anchor you've given them.

[+] speedgoose|5 years ago|reply
It's also faster to just add the modal. When you are asked to do xyz ASAP because it has been sold to a customer and should have been deployed a week ago, you don't feel the need to do a UX review.
[+] hypertele-Xii|5 years ago|reply
What you describe is a lack of a designer/architect in the loop. Devs are supposed to implement what is requested, as requested. Designers and architects are supposed to figure out what to request of the devs, based on the customers' needs. And this indeed entails figuring out the customers' actual problem, rather than parroting their solutions (which they are almost always unqualified to design).
[+] tacitusarc|5 years ago|reply
Within a problem space, there are two kinds of complexity: inherent complexity, and accidental complexity. This article is about accidental complexity.

There is, as far as I can tell, an enormous amount of accidental complexity in software. Far more than there is inherent complexity. From my personal experience, this largely arises when no time has been taken to fully understand the problem space, and the first potential solution is the one used.

In that case, the solution will be discovered as partially deficient in some manner, and more code will simply be tacked on to address the newfound shortcomings. I'm not referring here to later expansion of the feature set or addressing corner cases, either. I'm referring to code that was not constructed to appropriately model the desired behavior, and thus instances of branching logic must be embedded within the system all over the place, or perhaps some class hierarchy is injected, and reflection is used in an attempt to make the poor design decisions function.

I don't think adding features makes software more complex, unless those features are somehow non-systemic; that is, there is no way to add them into the existing representation of available behaviors. Perhaps an example would be a set of workflows a user can navigate, and adding a new workflow simply entails the construction of that workflow and making it available via the addition to some list. That would be a systemic feature. On the other hand if the entirety of the behaviors embedded within the workflow were instead presented as commands or buttons or various options that needed to be scattered throughout the application, that would be a non-systemic addition, and introduce accidental complexity.
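The "systemic" workflow idea above could be sketched like this (all names are illustrative): adding a workflow is a single registration against the existing representation, not branches scattered through the application:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class WorkflowRegistry {
    private final Map<String, UnaryOperator<String>> workflows = new LinkedHashMap<>();

    // A systemic addition: one call here, no other code changes required.
    public void register(String name, UnaryOperator<String> workflow) {
        workflows.put(name, workflow);
    }

    public String run(String name, String input) {
        UnaryOperator<String> w = workflows.get(name);
        if (w == null) throw new IllegalArgumentException("unknown workflow: " + name);
        return w.apply(input);
    }

    public static void main(String[] args) {
        WorkflowRegistry registry = new WorkflowRegistry();
        registry.register("shout", s -> s.toUpperCase() + "!");
        System.out.println(registry.run("shout", "hello")); // prints "HELLO!"
    }
}
```

The non-systemic alternative would be an `if`/`else` on the workflow name at every place the application needs to know about it.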

[+] pydry|5 years ago|reply
One thing I've noticed about building software is that the most appropriate contours of the problem space often only become clear with hindsight.

Even if you start off with the best intentions about not putting in too many features it won't always help.

This is why the second mover can also have an advantage in some areas. If they recognize the appropriate contours they can avoid the crufty features and more directly and effectively tackle the main problem.

[+] amw-zero|5 years ago|reply
While there is accidental complexity, we cannot measure what is or isn't accidental. So I think the statement that the majority of complexity is accidental is completely made up. I also think it's wrong.

The majority of complexity in software is unavoidable. Accidental complexity just makes it even worse.

[+] jameskilton|5 years ago|reply
I feel that a lot of people misunderstand "complexity" vs "complicated". There's nothing wrong with complex. It's the nature of life that things are complex. Complicated though is almost always a negative. Complex code is fine, it's probably solving a real problem. Complicated code is not, it's just hard to work with.
[+] ChrisMarshallNY|5 years ago|reply
My experience is that "complicated" vs. "complex," as you define them, changes depending on who is looking at the code.

If someone has a philosophical aversion to something like abstraction, then they will label it "complicated," but I use abstraction, all the time, to insert low-cost pivot points in a design. I just did it this morning, as I was developing a design to aggregate search results from multiple servers. My abstraction will afford myself, or other future developers, the ability to add more data sources in the future, in a low-risk fashion.

I also design frameworks in "layers," that often implement "philosophical realms," as opposed to practical ones. Think OSI layers.

That can mean that adding a new command to the REST API, for example, may require that I implement the actual leverage in a very low-level layer, then add support in subsequent layers to pass the command through.

That can add complexity, and quality problems. When I do something like that, I need to test carefully. The reason I do that is so that, at an indeterminate point in the future, I can "swap out" entire layers and replace them with newer tech. If I don't think that will be important, then I may want to rethink my design.

That is the philosophy behind the OSI layers. They allow drastically different (and interchangeable) implementation at each layer, with clear interface points, above and below.

[+] bryik|5 years ago|reply
This is a good point. I once worked on a system that checked that projects met various legal standards and rules before allowing changes to be saved. This system was complex because the rules were complex; the only way to make it simpler would have been to convince the government to make the rules simpler.
[+] PsylentKnight|5 years ago|reply
I agree with your premise but I don't think you're using the correct terms here. "Complex" and "complicated" are synonyms as far as I can tell.

What you're describing sounds like "essential complexity" vs. "accidental complexity." See "No Silver Bullet."

Sorry, this is pedantic, but using the incorrect terms adds accidental complexity to a topic that is already essentially complex. ;)

[+] recursivedoubts|5 years ago|reply
I agree with your comment. However, a good tool for controlling complexity is deciding what your system is going to do. As I said in a sibling comment, consider method overloading in Java: this is a real-world feature, not uncommon in other languages. There are arguments for and against it (I am against it).

The implementation of it may be amazing code, but nonetheless it makes the Java compiler and runtime far more complicated than they would be if the feature were omitted.

So, again, I agree with you, but I also agree with the article's point that choosing features carefully is an important tool in controlling complexity.

[+] bluetwo|5 years ago|reply
"Complexity is a crutch" - I'm told Neil Gaiman said this. Not 100% sure, but I totally agree.
[+] Blikkentrekker|5 years ago|reply
I'd say that the only reason that software seems too complex, rather than as complex as it needs to be, is that every programmer thinks he can rewrite it in a simpler way, but when he's done, it's as complex as that which he rewrote.

I've seen it happen so many times, and I've done it. It's the very same principle that leads to almost every construction project running behind schedule — a man simply underestimates the complexity of nigh every task he endeavors to complete.

[+] nchelluri|5 years ago|reply
I see your points, and I see the merit of "rewrite syndrome", and lean strongly towards automated-test backed refactoring, and all in all I disagree with your thesis.

Sometimes, software patches and new features get tacked on and tacked on, and the system loses all semblance of cohesion or integrity. Thinking of the system as a whole, iterating with the confidence brought by tests of some sort, one can begin to detangle all the unnecessary intermixing and duplicate work and begin to make the system sensible.

[+] convolvatron|5 years ago|reply
I completely disagree. Certainly, in a standard software organization, the chances are pretty good that a rewritten version will be just as broken as the last, but in a new and different way.

But I've taken several projects in the hundreds of thousands of lines and translated them into projects with equivalent functionality and one to two orders of magnitude less source code.

That's not an argument for rewriting in all circumstances - I just think at least half of most mature software is 'junk DNA': useless boilerplate, unused paths, poor abstractions, etc.

[+] RivieraKid|5 years ago|reply
Depends. In my experience, I've never regretted a rewrite, and I've always ended up with better and simpler code.

It can be very frustrating to modify low quality and ugly code so I feel much better after a rewrite.

[+] justapassenger|5 years ago|reply
The biggest underlying reason why software ends up complex is that the real-world domain in which the software operates is also complex.
[+] beaconstudios|5 years ago|reply
Here's my approach. I think many feature requests fall under the X/Y problem.

- view a new feature request as a new user capability

- extend the model that the software implements, to encompass that capability - regardless of how the feature was envisioned in the requester's head.

- extend the software to match the new model. This may require refactoring, as the model may have had to undergo shifts to encompass the new capability.

For example:

I have a car. I model the car as four wheels, an engine, a chassis, and a lever. The engine drives the wheels, the wheels support the chassis, the chassis contains the engine. A lever in the chassis sets the engine in motion. It's a simple model and is capable of 1. sitting still and 2. moving forwards and backwards. This is all the capabilities we've needed so far.

A user requests a new feature where the wheels are instead mecanum wheels (https://en.wikipedia.org/wiki/Mecanum_wheel).

The default industry response is to either implement the change as requested, or reject it. I propose that the correct move instead is to ask the user WHY they want mecanum wheels. They reveal that they want the car to move in 2 dimensions, rather than one. From that understanding you can extend the model of the car to encompass the feature - you may add the mecanum wheels and a mechanism to control them, you may add a steering wheel and rack-and-pinion, you may do something completely different - totally depending on how and why the user wants 2D movement (depending on further questioning, i.e. the "5 whys"). But you are working to the capability, not the feature. By extending the model, you can then change the software to match this new model.

I think as software engineers we have a tendency to forget the model and focus only on the code. A request for mecanum wheels becomes a question of how to change the software to encompass that feature. But we must always remember the existence of the model, and the user's relationship to it.
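One way to code that capability-first idea (interfaces and names invented for illustration): the car depends on a movement capability, and mecanum wheels become just one implementation of it.

```java
// The capability the user actually wants: 2-D movement.
interface Drivetrain {
    double[] move(double[] position, double dx, double dy);
}

// One implementation of the capability; swapping it never touches Car.
class MecanumDrive implements Drivetrain {
    public double[] move(double[] position, double dx, double dy) {
        return new double[]{position[0] + dx, position[1] + dy};
    }
}

// The original one-dimensional drivetrain, kept as another implementation.
class FixedWheels implements Drivetrain {
    public double[] move(double[] position, double dx, double dy) {
        return new double[]{position[0] + dx, position[1]}; // ignores sideways motion
    }
}

public class Car {
    private final Drivetrain drivetrain;
    private double[] position = {0, 0};

    public Car(Drivetrain drivetrain) { this.drivetrain = drivetrain; }

    public void move(double dx, double dy) {
        position = drivetrain.move(position, dx, dy);
    }

    public double[] position() { return position; }

    public static void main(String[] args) {
        Car car = new Car(new MecanumDrive());
        car.move(1, 2);
        System.out.println(car.position()[0] + "," + car.position()[1]); // prints "1.0,2.0"
    }
}
```

The model grew by one concept (a drivetrain capability), rather than by scattering wheel-specific commands through the application.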

[+] hotcrossbunny|5 years ago|reply
In my humble opinion, a lot of projects go quietly bad when they experience some sort of new requirement whose architectural impact gets underestimated by project management. At such times, senior devs have either already moved on, or have their eye off the ball, such that new features get incorporated without the necessary architectural support. These inflection points can themselves introduce complexity, but often become the gateway for all sorts of subsequent small things that explode in size. In short, don't miss architecture moments.
[+] senko|5 years ago|reply
In this regard, software is quite similar to law - more so than to other STEM disciplines like mechanical or civil engineering or math.

A body of law is adapted and extended through many years, by various groups having different priorities, and rarely does someone dive in and refactor it to be simpler (while achieving the same "business goals").

In software, at least we have an option of creating automated tests to verify correctness or internal consistency, and using stricter languages to avoid ambiguities.

In light of this, we (as an industry) are actually doing quite well!

[+] taeric|5 years ago|reply
My favorite point on how software ends up complex is that our industry somehow thinks we are unique in this.

Everything winds up complex. Just look at how many ingredients go into simple store-bought cookies. Look into the entire supply chain around your flour that lets your homemade cookies be simple.

[+] AnonC|5 years ago|reply
> This means that supporters can always point to concrete benefits to their specific use cases, while detractors claim far more abstract drawbacks.

I don’t agree with this. If you have a salesperson/accountant who grew into being CEO and is just a bean counter who doesn’t (want to) understand the real costs of training new developers on increased complexity in a system or being able to maintain a complex system over decades, this could possibly be true. But in that case, your engineering manager isn’t a real engineering manager either, and is just another bean counter in disguise.

I get that senior management may not always (want to) understand the nuances in building, maintaining and supporting a complex system, but it’s not abstract. It costs real money that all these types can feel the pinch of.

The real reason why software is complex or can get complex is because the underlying domain and its requirements and constraints are complex, combined with layers of complexity added on the technical side to enable certain things (perhaps easy configurability, scalability, reliability, etc.). There are self-inflicted wounds too, where complexity is prematurely added. But that’s not the full story in all cases.

[+] goblin89|5 years ago|reply
Software can be viewed as a function.

A function is designed to be called in a certain way for certain output. If the required use of the function changes, the function's signature needs to support more arguments, and its behavior increases in complexity accordingly.

To avoid that, it is important to understand that a changed use case calls for what is effectively another function (possibly more than one). It may not be immediately feasible to rewrite and switch all callers over due to limited resources and lack of control over aforementioned callers, but conceptually it is another beast now—and existing implementation should move in that direction or be put on a deprecation schedule, rather than keep widening input and output spaces in a futile attempt to try to be everything at once.

Software is very similar in this regard, except it is much more tempting (and often significantly easier) to “append” features rather than rethink the fundamentals as time goes by and users with different needs get on board.
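The widening-vs-splitting distinction above might look like this in code (a contrived sketch with invented names): the first function keeps absorbing flags for every new caller, while the second pair admits that a new use case is effectively a new function:

```java
public class Render {
    // Widening: every new caller adds a parameter, and every existing
    // caller must now reason about the enlarged input space.
    static String render(String text, boolean asHtml, boolean escape) {
        if (asHtml) {
            String body = escape ? text.replace("<", "&lt;") : text;
            return "<p>" + body + "</p>";
        }
        return text;
    }

    // Splitting: each use case is its own function with a narrow contract.
    static String renderPlain(String text) { return text; }

    static String renderHtml(String text) {
        return "<p>" + text.replace("<", "&lt;") + "</p>";
    }

    public static void main(String[] args) {
        System.out.println(render("a<b", true, true)); // prints "<p>a&lt;b</p>"
        System.out.println(renderHtml("a<b"));         // same result, clearer caller
    }
}
```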

This is why, I believe, rigorously assessing[0] the scope and the intended audience of a piece of software at early stages (and constantly reassessing them afterwards) can go a long way against unchecked rise in complexity over time. As can prudently abstracting architecture pieces away into separate self-contained, focused pieces, which can be recombined in new ways when context inevitably calls for what essentially is a different piece of software—instead of having to rewrite everything or shoehorn new features to support use cases that were not originally envisioned (but have to be supported for business reasons).

[0] By having frank discussions and asking “why” many, many times.

[+] wslh|5 years ago|reply
I don't think most software development should end up complex, since most software development in the world is redundant.

The problem is that we don't have the right high-level abstractions, low-level robustness, and groups focusing on this problem to build software faster. It is a matter of time; there will be a silver bullet for most software needs.

There is also some conflict of interest: imagine if Microsoft gave you frameworks to build/integrate large software projects with a few "parameters". There would be fewer developers and customers for them.

[+] karmakaze|5 years ago|reply
The far greater source of complexity is the extra complexity that results from features being analyzed and implemented sequentially. Each one may have been economical in isolation, but the whole is not. This is why a rewrite often seems so attractive: analyze the entire known scope and use a smaller set of mechanisms to do the same, often allowing additional changes to fit more easily. Painting yourself into a corner is either short-sightedness (technical/vertical, voluntary/oblivious), a tech-debt choice, or something that didn't turn out as well as expected despite best efforts, whether for technical reasons or because of unexpected changes. It also comes from trying too hard and generalizing too early. So there are many reasons, and pragmatically balancing them is given far too little weight.
[+] carapace|5 years ago|reply
On the one hand, it's really hard to program well. I think anyone who can solve a sudoku puzzle can learn to program, but only people with freakish skill or determination can learn to program really well.

On the other hand, those folks who can program really well are often what I call "complexity junkies": programming is their sudoku, it's fun and exciting. It helps that you can get paid well to do it.

So you get things like Haskell and Rust.

[+] mirekrusin|5 years ago|reply
There are complex things and there are complicated things. You want to make complex things easier to understand by refining abstractions, and eliminate complicated things altogether. Constant refactoring and the taboo word, intelligence, are the key here. It doesn't mean you have to reshuffle everything every week. It means spending maybe 5% - 20% of time on small things to improve code based on current knowledge. This has to be pushed by developers, because the semi-agile methodologies most companies use have built-in forces that move those tasks out of sprints.
[+] k__|5 years ago|reply
I had the impression the main problem was missing cognitive flexibility of developers.

Some people just do things the way they always have. They don't bother to understand what the people before them did, and instead bend everything to their will.

[+] throwaway201103|5 years ago|reply
Another twist on this is that some people are passive-aggressively lazy. They may understand that what they are being asked to do has longer-term negative consequences, but they don't want to invest the effort in trying to change any minds. They just do what they are asked, and when things get complicated they can say "I did what you asked for; you didn't ask me to simplify anything."

This is seen more in rigidly hierarchical organizations where debate and ideas from the lower ranks tend to be quashed. Think of the boss who says some variant of "you're paid to do what I ask, not to talk about it".

[+] SulfurHexaFluri|5 years ago|reply
This downplays the difficulty of "bothering" to understand things. Most developers work on applications of insane complexity that they couldn't ever hope to understand. Almost every change is done in some amount of ignorance of the surroundings.
[+] nchelluri|5 years ago|reply
I agree wholeheartedly on the Brooks' "conceptual integrity" thing. I really enjoyed his book The Design of Design.

It takes effort to maintain cohesion and sound architecture, but it pays off for future development.

[+] LC_ALL|5 years ago|reply
Building features on top of other features is often zero cost. Code becomes a many layered cake consumed by the end user. In the web development stack the simplest of features like text on a screen is the achievement of decades of technological progress. Text may be localized, shaped with HarfBuzz, run through libicu's BIDI algorithm, encoded with a nontrivial encoding, wrapped in markup language, nested inside of multiple layers of network headers and corresponding metadata, sent over the wire as a series of 0s and 1s, and then painstakingly unpacked in reverse order.

This is clearly complicated and clearly works. Many different actors operating quasi-independently. You can imagine the difficulty when one actor in a time crunch tries to design a similarly complicated cake stitched together with parts homemade, parts open sourced and parts paid for.

[+] lbblack|5 years ago|reply
I think too many HNers here have an incomplete picture of what makes something complex. There is genius in simplicity. Complexity can best be understood (for me) as simultaneous interactions of many simple things under the same roof, which we then consider, in totality, an object of reality.

Those simple things can differ in an infinite variety of ways, from which complexity can be derived. Personally, if I've jumped into a codebase that is messy, cluttered, and unorganized, it's immediately noticeable that the original project developers had a shallow and/or narrow strategy for how they wanted to design their system.

Start with simple and useful mechanisms, which are the building blocks for whatever problem you are trying to solve. Complexity and abstraction can then be extrapolated from a simple yet brilliant foundation. I don't know how that isn't common sense.