top | item 45069135

codingwagie | 6 months ago

I think this works in simple domains. After working in big tech for a while, I am still shocked by the required complexity. Even the simplest business problem may take a year to solve, and constantly break due to the astounding number of edge cases and scale.

Anyone proclaiming simplicity just hasn't worked at scale. Even rewrites that can draw on a decade-old code base for inspiration often fail due to the sheer number of things to consider.

A classic, Chesterton's Fence:

"There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”"

sodapopcan|6 months ago

This is the classic misunderstanding where software engineers can't seem to communicate well with each other.

We can even just look at the title here: Do the simplest thing POSSIBLE.

You can't escape complexity when a problem is complex. You could certainly still complicate it even more than necessary, though. Nowhere in this article is it saying you can avoid complexity altogether, but that many of us tend to over-complicate problems for no good reason.

lll-o-lll|6 months ago

> We can even just look at the title here: Do the simplest thing POSSIBLE.

I think the nuance here is that “the simplest thing possible” is not always the “best solution”. As an example, it is possible to solve very many business or operational problems with a simple service sitting in front of a database. At scale, you can continue to operate, but the amount of man-hours going into keeping the lights on can grow exponentially. Is the simplest thing possible still the DB?

Complexity is more than just the code or the infrastructure; it needs to run the entire gamut of the solution. That includes looking at the incidental complexity that goes into scaling, operating, maintaining, and migrating (if a temporary ‘too simple but fast to get going’ stack was chosen).

Measure twice, cut once. Understand what you are trying to build, and work out a way to get there in stages that provide business value at each step. Easier said than done.

Edit: Replies seem to be getting hung up over the “DB” reference. This is meant to be a hypothetical where the reader infers a scenario of a technology that “can solve all problems, but is not necessarily the best solution”. Substitute for “writing files to the file system” if you prefer.

jbreckmckye|6 months ago

I think you're accidentally committing a motte-and-bailey fallacy here.

The article makes an ambitious, risky claim (make things simpler than you think they need to be), then retreats on pushback to a much safer claim (the all-encompassing "simplest thing possible").

The statement ultimately becomes meaningless because any interrogation can get waved away with "well I didn't mean as simple as that."

But nobody ever thinks their solution is more complex than necessary. The hard part is deciding what is necessary, not whether we should be complex.

hammock|6 months ago

Yes. I like to distinguish between “complex” (by nature) and “complicated” (by design)

imgabe|6 months ago

A complex system that works is always found to have evolved from a simple system that worked.

You can keep on doing the simplest thing possible and arrive at something very complex, but the key is that each step should be simple. Then you are solving a real problem that you are currently experiencing, not introducing unnecessary complexity to solve a hypothetical problem you imagine you might experience.

MetaWhirledPeas|6 months ago

Exactly.

And to address something the GP said:

> I am still shocked by the required complexity

Some of this complexity becomes required through earlier bad decisions, where the simplest thing that could possibly work wasn't chosen. Simplicity up front can reduce complexity down the line.

motorest|6 months ago

> We can even just look at the title here: Do the simplest thing POSSIBLE.

I think you're focusing on weasel words to avoid addressing the actual problem raised by the OP, which is the elephant in the room.

Your limited understanding of the problem domain doesn't mean the problem has a simple or even simpler solution. It just means you failed to understand the needs and tradeoffs that led to complexity. Unwittingly, this misunderstanding originates even more complexity.

Listen, there are many types of complexity. Among which there is complexity intrinsic to the problem domain, but there is also accidental complexity that's needlessly created by tradeoffs and failures in analysis and even execution.

If you replace an existing solution with a solution which you believe is simpler, odds are you will have to scramble to address the impacts of all tradeoffs and oversights in your analysis. Addressing those represents complexity as well, complexity created by your solution.

Imagine a web service that has autoscaling rules based on request rates and computational limits. You might look at the request patterns and say that this is far too complex: you can just manually scale the system with enough room to handle your average load, and when required you can just click a button and rescale it to meet demand. Awesome work, you simplified your system.

Except your system, like all web services, experiences seasonal request patterns. Now you have schedules and meetings and even incidents that wake up your team in the middle of the night. Your pager fires because a feature was released and you didn't scale the service to accommodate the new peak load. So now your simple system requires a fair degree of hand-holding to work with any semblance of reliability.

Is this not a form of complexity as well? Yes, yes it is. You didn't eliminate complexity; you only shifted it to another place. You saw complexity in the autoscaling rules and believed you eliminated it by replacing it with manual scaling, but you only ended up moving that complexity somewhere else. Why? Because it's intrinsic to the problem domain, and requiring more manual work to tackle it introduces more accidental complexity than what is required to address the issue.
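
The leak that scenario describes can be sketched in a few lines. This is a toy model with made-up numbers (the load curve, replica capacity, and 20% headroom are all assumptions for illustration): a fleet provisioned once for the average load is underwater for a chunk of every day, while a naive threshold autoscaler is not.

```python
import math

CAPACITY_PER_REPLICA = 250  # requests/sec one replica handles (assumed)

def seasonal_load(hour):
    """Toy diurnal request rate: peaks mid-day, dips at night."""
    return 1000 + 800 * math.sin(math.pi * hour / 12)

def autoscaled_replicas(load):
    """Naive autoscaler: track the current load with ~20% headroom."""
    return max(1, math.ceil(load * 1.2 / CAPACITY_PER_REPLICA))

# "Simplified" system: provisioned once, sized for the average load.
avg_load = sum(seasonal_load(h) for h in range(24)) / 24
manual_replicas = math.ceil(avg_load * 1.2 / CAPACITY_PER_REPLICA)

# Hours where the fixed fleet can't keep up but the autoscaler can.
overloaded_hours = [h for h in range(24)
                    if seasonal_load(h) > manual_replicas * CAPACITY_PER_REPLICA]

print(f"fixed fleet: {manual_replicas} replicas, "
      f"overloaded {len(overloaded_hours)}h/day")
```

The scaling logic didn't disappear; it moved into calendars, pagers, and a button someone has to press at the right time.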

sgjohnson|6 months ago

The title is not “Do the simplest thing POSSIBLE”. It’s do the “Simplest thing that could POSSIBLY work”.

There’s a HUGE difference between the simplest thing possible, and the simplest thing that could possibly work.

The simplest thing that could possibly work conveniently lets you forget about the scale. The simplest thing possible does not.

flohofwoe|6 months ago

The key is 'required complexity'.

This is different from adding pointless complexity that doesn't help solve the problem but exists only because it is established 'best practice' or 'because Google does it that way'. I've seen that many more times than complex software where the complexity is actually required. And such needlessly complex software is usually a source of countless day-to-day problems (if it makes its way out the door in the first place), while the 'simplistic' counterpart usually just hums along in the background without anybody noticing; and if there's a problem, it's easy to fix because the codebase is simple and easy to understand by anybody looking into it. Of course, after 20 years of such changes the originally simple code base may also grow into a messy hairball, but at least it's still doing its thing.

fuzzfactor|6 months ago

Yes, I would say do the simplest thing and it could possibly work.

If it doesn't, go from there whether you need to find an alternative or add another layer of complexity.

I think when complexity does build, it can snowball until a crew comes along and finds more than could be addressed in a year or two. People have to be realistic that there's more than one way to address it. One option is a project to identify and curtail existing excess complexity; another approach is to reduce the rate of additional complexity; maybe take it to the next level and completely inhibit any further excess, or any additional complexity of any kind at all. Ideally, all of the above.

Things are so seldom ideal, and these are professionals ;)

No matter what, the most pressing requirement is to earnestly begin mastering the existing complexity to a pretty good extent before it could possibly be addressed by a neophyte. That's addressing it right there. Bull's-eye in fact.

Establishing a good amount of familiarity could take months to years in such a situation. By that point a fairly accurate awareness of the actual degree of complexity/debt can be determined, but you can't be doing nothing this whole time. So you have most likely added at least some debt yourself, hopefully not a lot, but now you've got a better handle on how it compares to what is already there. If you're really sharp, the only complexity you add is not excess at all but completely essential, and you make sure of it.

Now if you keep carrying on you may find you have added some complexity that may be excess yourself, and by this time you may be in a situation where that excess is "insignificant" compared to what is already there, or even compared to what one misguided colleague or team might be routinely erecting on their own. You may even conclude that the only possible eventual outcome is to topple completely.

What you do about it is your own decision as much as it can be, and that's most often the way it is bound to always increase in most organizations, and never come down. So that's the most common way it's been addressed so far, as can be seen.

prerok|6 months ago

You are not wrong, but the source of the problem may not be the domain but poor software design.

If the software base is full of gotchas and unintended side effects, then the source of the problem is unclean separation of concerns and tight coupling. Of course, at some point refactoring becomes an almost insurmountable task, and if the culture of the company does not change, more crap will be added before even one of your refactorings lands.

Believe me, it's possible to solve complex problems by clean separation of concerns and composability of simple components. It's very hard to do well, though, so lots of programmers don't even try. That's where you need strict ownership of seniors (who must also subscribe to this point of view).

thwarted|6 months ago

> then the source of the problem is in unclean separation of concerns and tight coupling

Sometimes the problem is in the edges, the way the separate concerns interact, not in the nodes. This may arise, for example, where an operation or interaction between components isn't idempotent because the need for it to be never came up.

motorest|6 months ago

> If the software base is full of gotchas and unintended side-effects then the source of the problem is in unclean separation of concerns and tight coupling.

Do you know how you get such a system? When you start with a simple system and instead of redesigning it to reflect the complexity you just keep the simple system working while extending it to shoehorn the features it needs to meet the requirements.

We get this all the time, especially when junior developers join a team. Inexperienced developers are the first ones complaining about how things are too complex for what they do. More often than not, that just reflects opinionated approaches to problem domains they have yet to understand. Because all problems are simple once you ignore all constraints and requirements.

trey-jones|6 months ago

The classic comeback: every time I mention simplicity to a particular team member of mine, this is what he says. Complexity is unavoidable, yes. But if you don't fight it tooth and nail, spend more time than you want trying to simplify the solution, and get second opinions (more minds on difficult problems are better!), then you will increase complexity more than you needed to. This is just a different form of technical debt: you will pay the price in maintenance later.

Maksadbek|6 months ago

Exactly! If you don't try to keep it simple, especially in big tech, things get way too complex. I think choosing the simplest solution in big tech is orders of magnitude more important than in simple domains.

anymouse123456|6 months ago

Okay, I'll bite.

> Anyone proclaiming simplicity just hasn't worked at scale

I've worked in startups and large tech organizations over decades and indeed, there are definitely some problems in those places that are hard.

That said, in my opinion, the majority of technical solutions were over engineered and mostly waste.

Much simpler, more reliable, more efficient solutions were available, but inappropriately dismissed.

My team was able to demonstrate this by producing a much simpler system, deploying it and delivering it to many millions of people, every day.

Chesterton's fence is great in some contexts, especially politics, but the vast majority of software is so poorly made, it rarely applies IMO.

whstl|6 months ago

Hard agree.

I also worked at some quite large organizations, with quite large services that easily took 10x to 50x the time a smaller org would need to ship.

Most of the time people were mistaking complexity caused by bad decisions (tech or otherwise) with "domain complexity" and "edge cases" and refusing to acknowledge that things are now harder because of those decisions. Just changing the point of view makes it simple again, but then you run into internal politics.

With microservices especially, the irony was that it was mostly the decisions justified as being done to "save time in the future" that ended up generating the most amount of future work, and in a few cases even problems around compliance and data sovereignty.

ozim|6 months ago

Problem is that you can’t create a system in vacuum.

Mostly it is not like a movie where you hand pick the team for the job.

Usually you have to play the cards you're dealt, so you take whatever your team is comfortable building.

Which in the end means dealing with emotions, people's ambitions, and wishes.

I have seen stuff gold-plated just because one vocal person was making a fuss. I have seen good ideas blocked just because someone wanted to feel important. I have seen teams who wanted to "do proper engineering" but thought over-engineering was the proper way, and that anything less than gold plating makes them look like amateurs.

gozzoo|6 months ago

So, case by case then?

dondraper36|6 months ago

The author is a staff engineer at GitHub. I doubt they haven't worked at scale.

pinoy420|6 months ago

[deleted]

monkeyelite|6 months ago

I have worked at scale - I have found countless examples of people not believing in simple solutions which eventually prevail and replace the big-complex thing.

Complexity is a learned engineering approach - it takes practice to learn to do it another way. So if all you see is complex solutions how would you learn otherwise?

motorest|6 months ago

> I have worked at scale - I have found countless examples of people not believing in simple solutions which eventually prevail and replace the big-complex thing.

I have worked at scale. I have found examples where simple solutions prevail due to inertia and an inability or unwillingness to acknowledge that the simple solution failed to adequately address the requirements. The accidental complexity created by those simple solutions is downplayed, because acknowledging it would require reevaluating the simple solution; thus runbooks and manual operations and maintenance are required as part of your daily work, because that's how the system is. And changing it would be too costly.

Let's not fool ourselves.

PaulRobinson|6 months ago

I remember reviewing some code of an engineer I was managing at a FAANG. Noticed an edge case. Pointed out I thought if/when that hit, it was going to cause an alarm that would page on-call. He suggested it might be OK to ship because it was "about a one in a million chance of being hit". The service involved did 500,000 TPS. "So, just 30 times a minute, then?"
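
The back-of-the-envelope math in that exchange is worth making explicit, since "one in a million" sounds rare until you multiply it by the request rate:

```python
tps = 500_000        # requests per second, from the anecdote
one_in = 1_000_000   # "one in a million" edge case, per request

hits_per_minute = tps * 60 / one_in
print(hits_per_minute)  # 30.0
```

Thirty pages a minute, from an event the author thought was safe to ignore.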

And you're right about the amount of engineering that goes into solving problems. One service adjacent to my patch was more than a decade old. Was on a low TPS but critical path for a key business problem. Had not been touched in years. Hadn't caused a single page in that decade, just trudged along, really solidly well engineered service. Somebody suggested we re-write it in a modern architecture and language (it was a kind of mini-monolith in a now unfashionable language). Engineering managers and principals all vetoed that, thank goodness - would have been 5+ years of pain for zero upside.

jajko|6 months ago

I am deep in one such corporate complexity, yet I constantly see an ocean of things that could have been done in a much simpler and more robust way.

Simple stuff has tons of long-term advantages and benefits: it's easy to ramp up new folks on it, compared to some over-abstracted, hypercomplex system built because some lead dev wanted to try new shiny stuff for their CV or out of boredom. It's easy to debug, migrate, evolve, and just generally maintain, something pure devs often don't care much about until they become more senior.

Complex optimizations are for sure required for extreme performance or massive public web but that's not the bulk of global IT work done out there.

ricardobeat|6 months ago

At least half the time, the complexity comes from the system itself, echoes of the organizational structure, infrastructure, and not the requirements or problem domain; so this advice will/should be valid more often than not.

malux85|6 months ago

I was one of the original engineers of DFP at Google and we built the systems that send billions of ads to billions of users a day.

The complexity comes from the fact that at scale, the state space of any problem domain is thoroughly (maybe totally) explored very rapidly.

That's a way bigger problem than system complexity, and pretty much any system complexity is usually the result of edge cases that need to be solved, rather than bad architecture, infrastructure, or organisational issues. Those problems are only significant at smaller, inexperienced companies; by the time you are post-scale (if the company survives that long), state-space exploration in implementation (features, security, non-stop operations) is where the complexity is.

codingwagie|6 months ago

Right, but you can't expect perfect implementation; as the complexity of the business needs grows, so does the accidental complexity.

makeitdouble|6 months ago

> the organizational structure, infrastructure

Those are things that matter and can't be brushed away though.

What Conway's law describes is also an optimization: the software comes to match the shape it can be developed and maintained with the fewest frictions.

Same for infra: complexity induced by it shouldn't be simplified away unless you also simplify/abstract the infra first.

mattmcknight|6 months ago

This is where John Gall's Systemantics comes into play, “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system."

Obviously a bit hyperbolic, but matches my experience.

thinkharderdev|6 months ago

I agree with the saying as such, but I think it's actually a counterpoint to the "do the simplest thing that could possibly work" idea. When building a system initially, you want to do the simplest thing that could possibly work, given some appropriate definition of "working". Ideally, as the system's requirements evolve, you should refactor to address the complexity by adding abstractions, making things horizontally scalable, etc. But for any given change, the "simplest thing that could possibly work" is usually something along the lines of "we'll just add another if-statement" or "we'll just add another parameter to the API call". Before you know it you have an incomprehensible API with 250 parameters which interact in complex ways, and a rat's nest of spaghetti code serving it.

I prefer the way Einstein said it (or at least I've heard it attributed to him, not sure if he actually said it): "Make things as simple as possible, but no simpler".

damnever|5 months ago

Those who’ve worked at scale know simplicity is brutally hard — but those who stop pushing for it altogether have failed their responsibility.

jimbokun|6 months ago

When the domain is complex, it's even MORE important that the individual components be simple with clean interfaces between them. If everything is too intertwined, you lose the ability to make changes or add new functionality without accidentally breaking something else.

As for Chesterton's Fence, you have the causality backwards. You should not build a fence or gate before you have a need for it. However, when you encounter an existing fence or gate, assume there must have been a very good reason for building it in the first place.

daxfohl|6 months ago

Though in my previous job, a huge amount of complexity was due to failed, abandoned, or incomplete attempts to refactor/improve systems, and I frequently wondered, if such things had been disallowed, how much simpler the systems we inherited would have been.

This isn't to say you should never try to refactor or improve things, but make sure that it's going to work for 100% of your use cases, that you're budgeted to finish what you start, and that it can be done iteratively with the result of each step being an improvement on the previous.

fijiaarone|6 months ago

The problem isn't refactoring; it's that it was failed, abandoned, or incomplete.

And that's usually because the person or small group that began the refactor wasn't given the time and resources to do it. Uninterested or unknowledgeable people hijacked and over-complicated the process, and others blocked it from happening. What would have taken the initial team a few weeks to complete successfully, with a little help and cooperation from others, and had they not been pulled in ten different directions to fight other fires, instead dragged on for months. After expending tons of time and money on people mucking it up instead of fixing it, the refactor got abandoned, a million dollars was wasted, and the system as a whole was worse than it was before.

rednafi|6 months ago

Every refactor attempt starts with the intention of 100% coverage.

No one can predict how efficacious that attempt will be from the get-go. Eventually, often people find out that their assumptions were too naive or they don’t have enough budget to push it to completion.

Successful refactoring attempts start small and don’t try to change the universe in a single pass.

patmcc|6 months ago

The problem with this is no one can agree about what "at scale" means.

Like yes, everyone knows that if you want to index the whole internet and have tens of thousands of searches a second there are unique challenges and you need some crazy complexity. But if you have a system that has 10 transactions a second...you probably don't. The simple thing will probably work just fine. And the vast majority of systems will never get that busy.

Computers are fast now! One powerful server (with a second powerful server, just in case) can do a lot.

hansvm|6 months ago

Yeah, we do 100k ML inferences per second. It's not a single server, but the architecture isn't much more complicated than that.

With today's computers, indexing the entire internet and serving 100k QPS also isn't really that demanding architecturally. The vast majority of current implementation complexity exists for reasons other than necessity.

rednafi|6 months ago

Yep, vertical scaling goes a long way. But the bottleneck for scale isn't compute; it's resiliency and availability.

So although a single server goes a long way, to hit that sweet 99.999% SLA people scale horizontally way before hitting the maximum compute capacity of a single machine. HA makes everything much more difficult to operate and reason about.
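
For context on why "five nines" pushes teams toward HA so early, the downtime budget is tiny. A quick check (ignoring leap years):

```python
sla = 0.99999                      # "five nines" availability target
minutes_per_year = 365 * 24 * 60   # 525,600 minutes in a year
downtime_budget = minutes_per_year * (1 - sla)
print(f"{downtime_budget:.2f} minutes of downtime per year")  # roughly 5.26
```

A single box can barely take a kernel patch and a reboot inside that budget, which is why the replication and failover machinery shows up long before compute runs out.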

MangoToupe|6 months ago

This could also point to the solution of cutting down the complexity of "big tech". So much of that complexity isn't necessary because it solves problems, it just keeps people employed.

mdaniel|6 months ago

This is a horrifically cynical take and I wish it would stop. I doubt very seriously there is any meaningfully sized collection of engineers who introduce things "just to keep themselves employed," to say nothing of having to now advance that perspective into a full blown conspiracy because code review is also a thing

What is far more likely is the proverbial "JS framework problem:" gah, this technology that I read about (or encounter) is too complex, I just want 1/10th that I understand from casually reading about it, so we should replace it with this simple thing. Oh, right, plus this one other thing that solves a problem. Oh, plus this other thing that solves this other problem. Gah, this thing is too complex!

mhitza|6 months ago

> Anyone proclaiming simplicity just hasn't worked at scale.

Most projects don't operate at scale. And before "at scale", simple, rewritable code will always evolve better, because it's less dense, and less spread out.

There is indeed a balance between the simplest code, and the gradual abstractions needed to maintain code.

I worked with startups, small and medium-sized businesses, and a larger US airline. Engineering complexity is through the roof when it doesn't have to be; it didn't need to be on any of the projects I've seen and worked on.

Now if you're an engineer in some mega corp, things could be very different, but you're talking about the 1% there. If not less.

bdangubic|6 months ago

every complex domain and "at scale" is just a bunch of simple things in disguise… our industry is just terrible in general at breaking things down. We sort of know this, so we came up with shit things like "microservices", but spend sufficient time in the industry (almost three decades for me) and you won't find a single place with a microservices architecture that you haven't wished was a monolith :) We are just terrible at this… there is no complex domain; it is just a good excuse we use to justify things

zhouzhao|6 months ago

Oh boy, this is the best example of "I have been doing it the same way for 30 years" I have ever seen on the world wide web.

analog31|6 months ago

If it's a legacy system, then it lives at the edges. The edges are everything.

I wish I could remember or find the proof, but in a multi-dimensional space, as the number of dimensions rises, the highest probability is for points to be located near the edges of the space, with the limit being that they can be treated as if they all live at the edges. This is true for real systems too: the users have found all of the limits but avoid working past them.

The system that optimally accommodates all of the edges at once is the old system.

CuriouslyC|6 months ago

You don't need a complicated proof. Just assume a distribution in some very high number of dimensions, with samples having randomly generated values from the distribution for each dimension. If you have ~300 dimensions, then statistically at least one dimension will be ~3 SD from the mean, i.e. "on the edge," and as long as any one dimension is close to an edge, we define a point as being "near the edge."

It's not really meaningful, though; at high dimensions you want to consider centrality metrics instead.
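
The rough estimate above can be checked analytically, assuming each coordinate is an independent standard normal:

```python
import math

def p_any_beyond(dims, sigmas=3.0):
    """P(at least one of `dims` independent N(0,1) coordinates lies
    more than `sigmas` standard deviations from the mean)."""
    p_one = math.erfc(sigmas / math.sqrt(2))  # P(|Z| > sigmas) for one coordinate
    return 1 - (1 - p_one) ** dims

# For a single coordinate the 3-sigma tail is rare (~0.3%); with ~300
# dimensions, more often than not some coordinate is "on the edge".
print(round(p_any_beyond(300), 3))  # roughly 0.56
```

So at 300 dimensions a typical point is "near the edge" over half the time, which is the legacy-system intuition: almost every real configuration exercises some limit.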

kloop|6 months ago

You're not wrong, but I'm also constantly surprised at places where devs will inject complexity.

A former project comes to mind that had a codec system for serializing objects, built on Scala implicits. It involved a significant amount of internal machinery just to avoid writing 5 toString methods, and it made it so that changing imports could break significant parts of the project in crazy ways.

It's possible nobody at the beginning of the project knew they would only have 5 of these objects (if they had 5 at the beginning, how many would they have later?), but I think that comes back to the article's point. There are often significantly simpler solutions that have fewer layers of indirection, and will work better. You shouldn't reach for complexity until you need it.

breadwinner|6 months ago

The point is to not overengineer. This is not about ignoring scale, or not considering edge cases. Don't engineer for scale that you don't even know is necessary if that complicates the code. Do the simplest thing that meets the current requirements, but write the code in such a way that more features, scale etc. can be added without disrupting dependencies.

See also: Google engineering practices: https://google.github.io/eng-practices/review/reviewer/looki...

And also: https://goomics.net/316

xelxebar|6 months ago

> I think this works in simple domains.

Business incentives are aligned around incremental delivery, not around efficient encoding of the target domain. The latter generally requires deep architectural iteration, meaning multiple complete system overhauls and/or rewrites, which by now are even vilified as a trope.

Mostly, though, I think there is just availability bias here. The simple, solid systems operating at scale and handled by a 3-person team are hard to notice over the noise that naturally arises from a 1,000-person suborganization churning on the same problem. Naturally, more devs will only experience the latter, and due to network effects, funding is also easier to come by.

naasking|6 months ago

> Even the simplest business problem may take a year to solve, and constantly break due to the astounding number of edge cases and scale.

Is this really because the single problem is inherently difficult, or because you're trying to solve more than one problem (scope creep) due to a fear of losing revenue? I think a lot of complexity stems from trying to group disparate problems as if they can have a single solution. If you're willing to live with a smaller customer base, then simple solutions are everywhere.

If you want simple solutions and a large customer base, that probably requires R&D.

3036e4|6 months ago

Much of this begins with the customers. If they were better at identifying their real needs and specifying the simplest possible tools they need, we would not have to deliver bizarrely complex, does-everything, bloated monster solutions, and they could have much more stable, and cheaper, software.

Of course marketing and sales working hard to convince customers that they need more of everything, all the time, doesn't help.

afro88|6 months ago

> I think this works in simple domains

You're not wrong. So many engineers operating in simple domains, on MVPs that don't have scale yet, even on internal tools, introduce so much complexity thinking they're making smart moves.

Product people can be similar, in their own way. Spending lots of time making onboarding perfect when the feature could do less, cater for 95% of use cases, and need no onboarding at all.

vasco|6 months ago

Consider this: everyone, at whatever skill level they're at, benefits from applying simplicity to their designs. Also, everyone at any skill level will tend to think the work they are doing is deep enough to require complexity once it reaches the border of their own intelligence.

I don't know if you only have genius friends, but I can tell you many stories of things people thought warranted complexity that I thought didn't. So whatever you consider hard enough to warrant complexity, just know there's a smarter guy out there thinking you're spinning your wheels.

Also, it's an impossible conversation to have without specific examples. Anyone can come and make a handwavy case for always simplifying, and someone else can make a case for necessary complexity, but without specific examples neither can be proven wrong.

isaacremuant|6 months ago

> Even the simplest business problem may take a year to solve, and constantly break due to the astounding number of edge cases and scale.

You're doing it wrong. More likely than not.

> Anyone proclaiming simplicity just hasnt worked at scale. Even rewrites that have a decade old code base to be inspired from, often fail due to the sheer amount of things to consider.

Or, you're just used to excusing complexity because your environment rewards complexity and "big things".

Simple is not necessarily easy. Actually, simple can be way harder to think of and push for, because people are so used to complexity.

Yes, massive scale and operations may make things harder, but seeking simplicity is still the right choice, and "working in big tech" is not a particularly hard or rare credential on HN. Try an actual argument instead of an appeal to your own authority.

bytefish|6 months ago

For a lot of problems it’s a good idea to talk to customers and stakeholders, and make the complexity very transparent.

Maybe some of the edge cases only apply to 2% of the customers? Could these customers move to a standard process? And what’s the cost of implementing, testing, integrating and maintaining these customer-specific solutions?

This has actually been the best solution for me to reduce complexity in my software, by talking to customers and business analysts… and making the complexity very transparent by assigning figures to it.

mikeryan|6 months ago

I had an engineering boss who used this as a mantra (he is now an SVP of engineering at Spotify; we worked together at Comcast).

I think the unspoken part here is “let’s start with…”

It doesn’t mean you won’t have to “do all the things” so much as let’s start with too little so we don’t waste time doing things we end up not needing.

Once you aggregate all the simple things you may end up with a complex behemoth but hopefully you didn’t spend too much time on fruitless paths getting there.

mpweiher|6 months ago

Yet, many times a lot of that scale and complexity is accidental.

Case in point: when I joined the BBC I was tasked with "fixing" the sports statistics platform. The existing system consisted of several dozen distinct programs instantiated into well over a hundred processes and running on around a dozen machines.

I DTSSTCPW/YAGNIed the heck out of that thing, and the result was a single JAR running on a single machine that ran around 100-1000 times faster and was more than 100 times more reliable. It was also about an order of magnitude less code, while having more features and being easier to maintain and expand.

https://link.springer.com/chapter/10.1007/978-1-4614-9299-3_...

And yeah, I was also extremely wary of tearing that thing down, because I couldn't actually understand the existing system. Nobody could. Took me over half a year to overcome that hesitancy.

Eschew Clever Rules -- Joe Condon, Bell Labs (via "Bumper Sticker Computer Science", in Programming Pearls)

https://tildesites.bowdoin.edu/~ltoma/teaching/cs340/spring0...

ehnto|6 months ago

> Even rewrites that have a decade old code base to be inspired from, often fail due to the sheer amount of things to consider.

The amount of knowledge required to generate the codebase in the first place, which is now missing for the rewrite, is the elephant in the room for rewrites. That's a decade of decision-making, business rules changing, knowledge leaving when people depart, etc.

Much like your example, if you think all the information is in the codebase then you should go away and start talking to the business stakeholders until you understand the scope of what you don't currently know.

zaphirplane|6 months ago

Accidental complexity is a thing, YAGNI is a thing, tech-debt-caused complexity is a thing, "I'm a foo programmer, let me write bar code like it's foo" is a thing. I don't know if all of it is high-quality, needed complexity.

rufus_foreman|6 months ago

>> Even rewrites that have a decade old code base to be inspired from, often fail due to the sheer amount of things to consider

A rewrite of a decade old code base is not the simplest thing that could possibly work.

tim333|6 months ago

A very complex domain is medical records. The UK has managed to blow billions on custom systems that didn't work. The simplest thing that could have worked was maybe just to download an open-source version of VistA (https://en.wikipedia.org/wiki/VistA). It probably would have worked better.

javier2|6 months ago

First of all, I don't disagree. Just wanted to add that "the simple thing" is often not the obvious thing to do, and only becomes apparent after working on it for a while. Oftentimes, when you dive into a set of adjacent functionality, you discover that it barely even works, and does not actually do nearly all the things you thought it did.

hammock|6 months ago

Yes. The simple thing is not necessarily the obvious thing or the most immediately salient thing. First explore the problem-solution space thoroughly, THEN choose the simple thing.

greymalik|6 months ago

> Anyone proclaiming simplicity just hasnt worked at scale.

The author of the article is a staff engineer at GitHub.

etse|6 months ago

There is a lot of sentiment in these comments about still needing to scale. I wonder how many need to do this in a pre-PMF stage vs. a growth stage? The trade-off is faster growth if your PMF bet wins, and lost time if your bet goes south.

jlg23|6 months ago

> Even the simplest business problem may take a year to solve, and constantly break due to the astounding number of edge cases and scale.

edge case (n): Requirement discovered after the requirements gathering phase.

mytailorisrich|6 months ago

Sometimes, often even, complexity and edge cases are symptoms that the problem is not fully understood and that the solution is not optimal.

jaynate|6 months ago

Wow, Chesterton’s fence parable could apply in so many places (not the least of which, politics).

pickdig|6 months ago

Staying kinda anonymous saying this... oftentimes, for most programmers, the road is a pretty simple one, yet the fence or gate is a tolling station for some private interest. So yeah, if possible, just quit arguing and try to destroy it.

bryanrasmussen|6 months ago

>Anyone proclaiming simplicity just hasnt worked at scale.

or they haven't worked in fields that are heavily regulated, or internationally.

This is why the DOGE guys were all like, hey, there are a bunch of people over 100 years old getting Social Security!! WTF!? Someone with a wider range of experience would think, hmm, I bet there is some reason for this that we need to figure out, instead of jumping right to "this must be fraud!!"

fuckaj|6 months ago

[deleted]