top | item 23449152

Microservices Considered Harmful (2019)

187 points | maattdd | 5 years ago | blog.matthieud.me

176 comments

[+] ceronman|5 years ago|reply
I once read a quote that was something like "You are only an expert in a given technology when you know when not to use it". Unfortunately I forgot the exact quote and the author. (If anyone knows please let me know).

This is such a nice quote that says a lot about what it means to be an experienced (senior) software engineer. Our field is such a dynamic one! New tools and techniques appear all the time.

It's easy to fall into the trap of thinking that newer tools are better. I think this is because in some areas of tech this is almost always true (e.g. hardware). But in software, new tools and techniques are rarely strictly better; instead they just provide a different set of trade-offs. Progress doesn't follow a linear pattern, it's more like a jagged line slowly trending upwards.

We think we are experienced because we know how to use new tools, but in reality we are only more experienced when we understand the trade-offs and learn when the tools are really useful and when they are not.

A senior engineer knows when not to use microservices, when not to use SQL, when not to use static typing, when not to use React, when not to use Kubernetes, etc.

Junior engineers don't know these trade-offs; they end up using sophisticated hammers for their screws. It doesn't mean that those hammers are bad or useless, they were just misused.

[+] userbinator|5 years ago|reply
Progress doesn't follow a linear pattern, it's more like a jagged line slowly trending upwards.

Recently (as in the past few years), it feels more like it's not trending upwards anymore, just jumping around an equilibrium point and maybe even slowly declining.

Junior engineers don't know these trade-offs; they end up using sophisticated hammers for their screws.

They also end up making hammer factory factory factory factories.

(http://web.archive.org/web/20150106004241/http://discuss.joe...)

[+] mightybyte|5 years ago|reply
"You are only an expert in a given technology when you know when not to use it"

This is such a great quote. I would also love to know the origin.

I'll also bite on the "when not to use static typing" bit. Not using static typing is a bit of a misnomer, because you can use a statically typed language and just use `String` (or `Bytes`, `Object`, `Value`, or whatever the equivalent is in your language). The question is really whether to use one of these catch-all structures or a more structured domain-specific type. And the answer is: when you don't need all of the structure, don't want to force the whole thing to validate, etc. For example, maybe you have JSON getting read into some complex customer data structure. If you only need a single field out of the structure, and haven't already parsed it into the customer data structure for some other reason, it might be best to just reach in and grab the field you need. You can think of it as the principle of least context, but in a data-parsing scenario.
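As a rough sketch of that choice (Python; the `Customer` shape and the payload are made up for illustration), compare parsing the whole payload into a domain type with just reaching in for one field:

```python
import json
from dataclasses import dataclass

# Full domain type: constructing it forces the whole payload to be present.
@dataclass
class Customer:
    id: int
    name: str
    email: str

payload = '{"id": 42, "name": "Ada", "email": "ada@example.com"}'

# Structured route: every field must exist, or construction fails loudly.
customer = Customer(**json.loads(payload))

# Catch-all route: reach in and grab the one field you actually need.
email = json.loads(payload)["email"]

assert customer.email == email
```

The structured route fails if any field is missing; the catch-all route only cares about the one key it touches, which is the principle-of-least-context trade-off described above.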

[+] macspoofing|5 years ago|reply
>when not to use static typing

I'll bite. When should static typing not be used?

(Note: I agree with your general point)

[+] dpeck|5 years ago|reply
In my experience, microservices grew to prominence not because of their technical merit, but because they allowed engineering "leadership" to avoid making decisions. Every developer or group of developers could create their own fiefdom, and management didn't have to worry about fostering consensus or team efforts; all that had to be agreed on was service contracts.

We end up with way too many developers on a given product and an explosion of systems that are barely architected, but thankfully the VP of engineering didn't have to worry about actually understanding anything about the technology and could do the bare minimum of people management.

Individual minor wins, collectively massive loss.

* There are reasons for microservices at big scales, but if everyone still fits in the same room/auditorium for an all-hands, I would seriously doubt that they're needed.

[+] riffraff|5 years ago|reply
That is classic Conway's law: system design follows an organisation's communication structure.

I would argue it's not a bad thing per se: reducing coordination can speed up some processes and reduce overhead.

But I agree there is a threshold below which it doesn't make much sense.

[+] pjmlp|5 years ago|reply
And the worst part is that it just gets rebooted every couple of years.

Anyone doing distributed computing long enough has been at this via SUN RPC, CORBA, DCOM, XML RPC, SOAP, REST, gRPC, <place your favourite on year XYZW>.

[+] rowanG077|5 years ago|reply
You make it sound like having separate teams is bad. It's the best. The larger the team, the slower it moves.
[+] staticassertion|5 years ago|reply
> However, your codebase has now to deal with network and multiple processes.

Here's the thing I see repeatedly called out as a negative, but it's a positive!

Processes and networks are amazing abstractions. They force you to not share memory on a single system, they encourage you to focus on how you communicate, they give you scale and failure isolation, and force you to handle the fact that a called subroutine might fail because it's across a network.

> If your codebase has failed to achieve modularity with tools such as functions and packages, it will not magically succeed by adding network layers and binary boundaries inside it

Functions allow shared state, they don't isolate errors. Processes over networks do. That's a massive difference.

If you read up on the fundamental papers regarding software reliability, this is something that's brought up ad nauseam.

> (this might be the reason why the video game industry is still safe from this whole microservice trend).

Performance is more complex than this. For a video game system, latency might be the dominant criterion. For a data processing service it might be throughput, or the ability to scale up and down. For many, microservices have the performance characteristics that they need, because many tasks are not latency sensitive, or the latency-sensitive part can be handled separately.

> I would argue that by having to anticipate the traffic for each microservice specifically, we will face more problem because one part of the app can't compensate for another one.

I would argue that if you're manually scaling things then you're doing it wrong. Your whole system should grow and shrink as needed.
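To illustrate the earlier point that a network boundary makes failure explicit (a minimal Python sketch; the URL and the fallback policy are hypothetical):

```python
import socket
import urllib.request
import urllib.error

def call_remote(url, timeout=2.0, fallback=None):
    """Call another service over HTTP; failure is an expected outcome,
    not something that silently unwinds the stack."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode()
    except (urllib.error.URLError, socket.timeout):
        # The network made the failure mode explicit: we must pick a
        # policy here, e.g. retry, degrade, or propagate.
        return fallback

# A call to a service that doesn't exist degrades instead of crashing.
result = call_remote("http://localhost:1/health", fallback="unavailable")
```

An in-process function call lets you forget this error path entirely; a remote call forces you to choose a policy at the call site.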

[+] amelius|5 years ago|reply
> They force you to not share memory on a single system, they encourage you to focus on how you communicate, they give you scale and failure isolation, and force you to handle the fact that a called subroutine might fail because it's across a network.

The problem: distributed systems are hard to get right. Better stay away from them unless you really need them, AND you have the time/resources to implement them correctly. The benefits of microservices are a bad excuse, most of the time.

[+] twic|5 years ago|reply
> If you read up on the fundamental papers regarding software reliability, this is something that's brought up ad nauseam.

I don't believe there are any papers that show that adding network hops to an application makes it more reliable. I would be extremely interested in any references you could provide.

[+] mixedCase|5 years ago|reply
> and force you to handle the fact that a called subroutine might fail because it's across a network.

That just adds one failure mode to the list of failure modes people ignore due to the happy-path development that languages with "unchecked exceptions as default error handling" encourage.

> Functions allow shared state, they don't isolate errors. Processes over networks do. That's a massive difference.

Except not, because "just dump that on a database/kv-store" is an all-too-common workaround chosen as an easy way out. This problem is instead tackled by purely functional languages such as Haskell, and by Rust's borrow checker, and only up to a certain degree, at which point it's back in the hands of the programmer's experience; though they do help a ton.

[+] DSMan195276|5 years ago|reply
> Functions allow shared state, they don't isolate errors. Processes over networks do. That's a massive difference.
>
> If you read up on the fundamental papers regarding software reliability, this is something that's brought up ad nauseam.

I think you missed the point - just using separate processes does not guarantee that errors and state are separated between services; there are lots of ways to get 'around' that. What if two services talk to the same database/service? What if there are weird co-dependencies and odd workarounds for shared state? What if one service failing means the entire thing grinds to a halt or data is screwed up?

Now that said, yes, if you use good development practices and have a good architecture microservices can work quite well, but if you were capable of that you probably wouldn't have created a non-microservice ball of mud. And if you're currently unable to fix your existing ball of mud, attempting to make it distributed is likely going to result in you adding more distributed mud instead. In other words, the problem here isn't really a technical one, it's a process one. And using their current failing processes to make microservices is just going to make worse mud, because they haven't yet figured out how to deal with their existing mud.

[+] mic47|5 years ago|reply
> Functions allow shared state, they don't isolate errors.

So why not use Haskell (or another pure language)? It's pure, so functions don't share state. And you don't have to replace function calls with network calls.

[+] 0xbadcafebee|5 years ago|reply
> They force you to not share memory on a single system,

So instead people use a single SQL database between 20 microservices.

> give you scale and failure isolation,

Only if you configure and test them properly; and they actually tend to increase failures and make it harder to isolate their origin (hello, distributed tracing).

> force you to handle the fact that a called subroutine might fail because it's across a network

They don't force that at all. It's good when people do handle that, but often they don't.

> I would argue that if you're manually scaling things then you're doing it wrong

And I would argue that if people are given a default choice of doing the wrong thing, they will do that wrong thing, until someone forces them not to.

Microservices allow people to make hundreds of bad decisions because nobody else can tell if they're making bad decisions unless they audit all of the code. Usually the only people auditing code are the people writing it, and usually they have no special incentive to make the best decisions, they just need to ship a feature.

[+] mpfundstein|5 years ago|reply
> Processes and networks are amazing abstractions. They force you to not share memory on a single system, they encourage you to focus on how you communicate, they give you scale and failure isolation, and force you to handle the fact that a called subroutine might fail because it's across a network.

Spot on! It's so much easier to enforce separation of concerns if there is an actual physical barrier. It's just too easy to slip a bad decision through peer review.

So I totally agree.

[+] pjmlp|5 years ago|reply
Plenty of languages have allowed writing modular code, all the way back to the early '80s.

The developers that aren't able to write modular code are just going to write spaghetti network code, with the added complexity of distributed computing.

[+] dpenguin|5 years ago|reply
I agree 99.9% of products do not need a microservice architecture because:

1. They will never see scaling to the extent that you need to isolate services

2. They don't have zero-downtime requirements

3. They don't have enough feature velocity to warrant breaking up a monolith

4. They can be maintained by a smaller team

I also agree that the way to build new software is to build a monolith and when it becomes really necessary, introduce new smaller services that take away functionality from the monolith little by little.

Microservices do have a good use case, even for smaller teams, where functionality is independent of the existing service. Think of something like the LinkedIn front end making calls directly to multiple (micro)services in the backend: one that returns your contacts, one that shows people similar to you, one that shows who viewed your profile, one that shows job ads, etc. None of these is central to the functionality of the site, and you don't want to introduce delay by having one service compute and send all the data back to the front end. You don't want a failure in one to cause the page to break, etc.
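That fan-out can be sketched roughly like so (Python; the widget fetchers are hypothetical stand-ins for independent backend services):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-widget fetchers; in reality each would be an HTTP
# call to a separate, independently deployed backend service.
def fetch_contacts():
    return ["alice", "bob"]

def fetch_profile_viewers():
    raise RuntimeError("viewers service is down")

def fetch_job_ads():
    return ["compiler engineer"]

def render_page(widgets):
    # Fetch every widget in parallel; a failure in one only blanks
    # that widget instead of taking the whole page down.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in widgets.items()}
    page = {}
    for name, future in futures.items():
        try:
            page[name] = future.result(timeout=1.0)
        except Exception:
            page[name] = None  # degraded widget, page still renders
    return page

page = render_page({
    "contacts": fetch_contacts,
    "viewers": fetch_profile_viewers,
    "jobs": fetch_job_ads,
})
```

Here the viewers widget fails, but the page still renders with the other two populated.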

Unfortunately, as with much new tech, junior engineers chase the shiniest objects, and senior engineers fail to guide junior devs or foresee these issues. Part of the problem is that there is so much tech junk out there on Medium or the next cool blog platform that anyone can read, learn to regurgitate, and sound like an expert, so it's hard to distinguish between junior and senior engineers anymore. So if leaders are not hands-on, they might end up making decisions based on whoever sounds like an expert, and the results will be seen a few years later. But hey, every damn company has the same problem at this point.. so it's "normal".

[+] Sherl|5 years ago|reply
At least the frontend and backend need to be decoupled in almost any development going forward. I work with several legacy apps where we use Python requests just to collect data. It's a huge pain when the HTTPS certificate expires, when they change something in a validation header, and when they deploy a new 'field'. Most CRUD applications do need a place where you can collect all the data after the backend processes all the business logic, without touching the frontend.

Almost the entire RPA industry revolves around the idea of supporting this legacy-app problem: scraping content and not worrying about it breaking.

[+] cameronbrown|5 years ago|reply
Microservices were never about code architecture, they were an organisational pattern to enable teams to own different services. Most "microservices" don't actually look micro to those implementing them, because it's really just "a lot of services".

For my personal projects, I just have a frontend service (HTTP server) and a backend service (API server). Anything more is overkill.

[+] bjt|5 years ago|reply
I came here to make a similar point. I see two big benefits to microservices, neither of which is spoken to by the article:

1. Using small-ish (I hate the word "micro"), domain-bounded services leads engineers to think more carefully about their boundaries, abstractions, and interfaces than when you're in a monolith. It reduces the temptation to cheat there.

2. Conway's law is real. If you don't think very deliberately about your domain boundaries as you code, then you'll end up with service boundaries that reflect your org structure. This creates a lot of pain when the business needs to grow or pivot. Smaller, domain-bounded services give you more flexibility to evolve your team structure as your business grows and changes, without needing to rewrite the world.

I'm a big fan of the "Monolith First" approach described by Martin Fowler. Start with the monolith while you're small. Carve off pieces into separate services as you grow and need to divide responsibilities.

A multi service architecture works best if you think about each service as a component in a domain-driven or "Clean Architecture" model. Each service should live either in the outer ring where you interface with the outside world or the inner ring where you do business logic. Avoid the temptation to have services that span both. And dependencies should only point inward.

Carving pieces off a monolith is easier if the monolith is built along Clean Architecture lines as well, but in my experience, the full stack frameworks that people reach for when starting out (e.g Rails, Django) don't lead you down a cleanly de-couple-able path.

https://en.wikipedia.org/wiki/Conway%27s_law

https://www.thoughtworks.com/radar/techniques/inverse-conway...

https://martinfowler.com/bliki/MonolithFirst.html

https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-a...

[+] benhoyt|5 years ago|reply
Agreed. Or as I've heard it said, "microservices solve a people problem, not a technical one". This is certainly how they were pushed at my current workplace -- it was all about "two-pizza teams being able to develop and deploy independently".

Out of interest, what does the "frontend service" do in your setup? For my personal projects I generally just go for a single server/service for simplicity.

[+] throw_m239339|5 years ago|reply
People here keep repeating that statement, yet the people actually implementing microservices keep believing it is a solution to an architectural problem. If the majority keeps implementing microservices for the wrong reasons, then who's right? What you describe is just classic SOA. Then what are "microservices" exactly?
[+] tomlagier|5 years ago|reply
I think that pattern scales really well up to a medium-sized startup.

I stick those two (webserver/static content and API) plus a database in a docker-compose file, and put all three plus a frontend in a single repo. That feels like the sweet spot of "separate but integrated" for my work.
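A minimal sketch of that layout (service names, images, and ports are illustrative, not taken from the comment):

```yaml
# docker-compose.yml - one repo, three separate-but-integrated services
services:
  web:                  # webserver / static content, fronting the API
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - api
  api:                  # the application/API server, built from this repo
    build: ./api
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:                   # the database, private to the compose network
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```

Only `web` publishes a port; `api` and `db` stay on the internal compose network, which keeps the boundary between the pieces explicit without any orchestration machinery.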

[+] z3t4|5 years ago|reply
Take away the word "micro" from microservices. It's just a buzzword. Now you have just services. You can have one service that handles email, chat, payroll, and the website, or you can break them up into independent services. Ask yourself: does it make sense to have two different services to handle x and y? Just don't break something up because of some buzzword mantra. Maybe the public website is the bottleneck in your monolith; then it might be a good idea to put it on its own server with its own scaling strategy so that it doesn't bog down the other parts of the system.
[+] marcinzm|5 years ago|reply
>If your codebase has failed to achieve modularity with tools such as functions and packages, it will not magically succeed by adding network layers and binary boundaries inside it

This is assuming you're converting an existing non-modular monolithic service to micro services. If you're starting from scratch or converting a modular monolithic service then this point is moot. It says nothing about the advantages or disadvantages of maintaining a modular code base with monoliths or microservices which is what people are actually talking about.

[+] mic47|5 years ago|reply
If you are starting from scratch (again), you can make good monolith too, since you already know a lot about the problem you are solving.
[+] drdaeman|5 years ago|reply
Data-API-over-HTTP spaghetti is surely a bad way to do microservices (some accidental exclusions apply [1]). And if you have to do cross-service transactions, or jump back and forth through the logs tracing an event as it hops across a myriad of microservices, it means that either your boundaries are wrong or you're doing something architecturally weird. It's probably a distributed monolith, with in-process function invocations replaced by network API calls - something worse than a monolith.

At my current place we have a monolith and are trying to get services right by modelling them as a sort of events pipeline. This is what we're using as a foundation, and I believe it addresses a lot of the raised pain points: http://docs.eventide-project.org/core-concepts/services/ (full disclosure: I'm not personally affiliated with this project at all, but a coworker of mine is).

___

[1] At one of my previous jobs, I've had success with factoring out all payment-related code into a separate service, unifying various provider APIs. Given that this wasn't a "true" service but a multiplexer/adapter in front of other APIs, it worked fine. Certainly no worse than all the third-party services out there, and I believe they're generally considered okay.

[+] bhntr3|5 years ago|reply
Microservices are the actor model (erlang or akka) except they require lots of devops work, being on call for x services every night, and a container management system like kubernetes to be manageable.

Actors are a simple solution to the same problems microservices solve and have existed since the 1970s. Actor implementations address the problem foundationally by making hot deployment, fault tolerance, message passing, and scaling fundamental to both the language and VM. This is the layer at which the problem should be solved but it rules out a lot of languages or tools we are used to.
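As a toy illustration of the parallel (Python; a real actor runtime like Erlang's adds supervision, hot code loading, and distribution on top), an actor is just private state plus a mailbox drained by a single thread:

```python
import queue
import threading

class Actor:
    """Minimal actor: private state, a mailbox, and one thread
    draining it. No shared memory; the only way in is a message."""
    def __init__(self, handler, state):
        self._state = state
        self._handler = handler
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:  # poison pill shuts the actor down
                break
            self._state = self._handler(self._state, msg)

# A counter actor: messages are increments, state is the running total.
counter = Actor(handler=lambda total, n: total + n, state=0)
for n in (1, 2, 3):
    counter.send(n)
counter.send(None)            # stop
counter._thread.join(timeout=1.0)
```

All communication goes through the mailbox; nothing else can touch `_state`, which is the same isolation property microservices buy with a network boundary.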

So, in my opinion, microservices are a symptom of an abusive relationship with languages and tools that don't love us, grow with us or care about what we care about.

But I also think they're pretty much the same thing as EJBs which makes Kubernetes Google JBoss.

[+] plandis|5 years ago|reply
> A recent blog post by neobank Monzo explains that they have reached the crazy amount of 1500 microservices (a ratio of ~10 microservices per engineer)

That’s wild. Microservices are mostly beneficial organizationally — a small team can own a service and be able to communicate with the services of other small teams.

If anything, I think a 10:1 software-engineers-to-services ratio is probably not far off from the ideal.

[+] axlee|5 years ago|reply
> That’s wild. Microservices are mostly beneficial organizationally — a small team can own a service and be able to communicate with the services of other small teams.

And a cross-concern fix that a dev used to be able to apply by himself in a day now has to go through 5 teams, 5 programming languages, 5 kanban boards, and 5 QA runs to reach production. I never understood the appeal of teams "owning" services. In my dream world, every engineer can and should be allowed to intervene in as many layers/slices of the code as his mental capacity and understanding allow. Artificial - and sometimes bureaucratic - boundaries are hurtful.

To me, it's the result of mid-to-senior software engineers not being ready to let go of their little turfs as the company grows, so they build an organizational wall around their code and call it a service. It has nothing to do with computer science or good engineering. It is pure Conway's law.

[+] michaelbuckbee|5 years ago|reply
When I hear things like that, all I can think is that I must have a wildly different idea of what a "service" is, if it can be broken down into such a small chunk of functionality.

Are there examples of the size of these individual services? What are they doing?

[+] twic|5 years ago|reply
If you have ten people working full-time on it, it is not a microservice, it is just a service.

I think the discussion about microservices has suffered more than anyone realises from a lack of shared understanding about what a microservice actually is.

[+] snowoutside|5 years ago|reply
I think 10:1 is a bit much. If services are well-scoped, 3 engineers/service can be an effective ratio.
[+] ts0000|5 years ago|reply
While I agree with the notion of treating microservices with caution, I found the article a bit too shallow, barely supporting its claim. The second "fallacy" especially reads like a draft, and the article ends abruptly.
[+] Gollapalli|5 years ago|reply
Microservices have some inherent advantages, mainly that you can manage, modify, and deploy one service at a time, without taking down/redeploying the rest of your application(s). This is arguably the big thing that is missing from monoliths. It's hard to change only a single API endpoint in a monolith, but easier to make a change across the entire monolith when you have to change something about how the whole system works.

The best compromise that I've come up with would be something that can keep your entire app in one place, but allow individual portions of it to be hot-swapped in the running application, and that is built to be run in a distributed, horizontally scalable fashion. In addition, there's a lot to be said for the old way of putting business logic in stored procedures, despite the poor abstraction capabilities of SQL relative to something like Lisp. With modern distributed databases, we can conceivably run stored procedures written in something like Clojure, keeping code close to the database - or rather, data close to the code - allowing hot-swapping, modification, introspection, replay of events, and all other manner of things, all while managing the whole thing like a monolith, with a single application, configuration, etc. to deploy, and a more manageable and obvious attack surface to secure.

This is my solution, called Dataworks, if anyone's interested: https://github.com/acgollapalli/dataworks#business-case

(Some of those things like introspection and replay-of-events are in the road map, but the core aspects of hotswapping and modification of code-in-db work.)

[+] fffernan|5 years ago|reply
For the most part, this level of microservice solves the following problem: a new engineering leader comes in. The new leader wants to rewrite the entire thing because "it sucks". The business doesn't have resources for a rewrite (for the nth time). The new leader and the business compromise by creating a microservice. Rinse and repeat. Cloud/container/VM tech has really allowed this pattern to work. The art of taking over an existing codebase and keeping it going at low cost and low overhead is gone. Nobody's promo packet is filled with sustainment work. One microservice per promotion. Etc., etc.
[+] MarkMarine|5 years ago|reply
This misses one of the main reasons microservices are nice: it's much easier to change code that isn't woven throughout a code base. Microservices make the boundary between units explicit and force API design at those boundaries. Yes, you can properly design these abstractions without a service boundary, but the forcing function makes it required.
[+] jayd16|5 years ago|reply
Hammer considered harmful. Cannot secure screws, says blog.
[+] je42|5 years ago|reply
I think he is skipping a couple of points.

For example, the deployment aspect:

- monolith: a single deployable unit

- microservices: multiple independently deployable units

Multiple teams on a monolith:

- you have to coordinate releases and rollbacks

- the code base grows, and dependencies creep in between modules that shouldn't depend on each other (unless you have a good code review culture)

- deployment gets slower and slower over time

- db migrations also need to be coordinated across multiple teams

These problems go away when you go microservices. Of course you get other problems.

My point is, in the discussion of microservices vs monolith you need to consider a whole bunch of dimensions to figure out what is the best fit for your org.

[+] scarmig|5 years ago|reply
Start with a monolith for your core business logic. Rely on external services for things outside your value prop, be it persistent storage or email or something else. Keep on building and growing until the monolith is causing teams to regularly step on each other's toes. By that I mean, continue well past teams needing to coordinate or needing dedicated headcount to handle coordination to the point where coordination is impossible. When that point approaches, allow new functionality to be built in microservices and gradually break off pieces of the monolith when necessary.
[+] jb_gericke|5 years ago|reply
Microservices aren't a panacea by any means, but like any tool, they provide certain advantages when dealing with certain use-cases.

One thing the article fails to mention is the boatload of tooling out there to address the failings of, and complement, microservices architecture, of which Kubernetes is only one piece.

Sure, they come with their own levels of complexity, but deploying K8s today is orders of magnitude simpler than it was 4 years ago. The same will hold true for similar tooling in the general microservices/container-orchestration domain, such as service mesh (it's a lot simpler to get up and running with Istio or Linkerd than it was 18 months ago), distributed tracing (Jaeger/OpenTelemetry), and general observability.

I'd also point out that microservices can provide benefits beyond just independent scaling and independent deployment of services. In theory they also allow for faster velocity in adding new services, all dependent on following effective DDD when scaffolding them. They let different teams in a large org design, build, and own their own service ecosystem, with APIs as contracts between their services and upstream/downstream consumers. And new team members coming onboard should, in theory, be able to get familiar with the tighter/leaner codebase of a microservice, as opposed to wading through thousands of lines of a monolith's code to find and understand the parts relevant to their jobs.

[+] SiNiquity|5 years ago|reply
In my experience, the benefits of microservices are primarily better delineated responsibilities and narrower scope, and secondary benefits tend to fall out from these. There are downsides, but the "harmful" effects do not reflect my experience. I fully grant more things on a network invite lower availability / higher latency, but I contend that you already need to handle these issues. Microservices do not tend to grossly exacerbate the problem (in my experience anyway).

The other callout is that clean APIs over a network can just be clean APIs internally. This is true in theory but hardly in practice from what I've seen. Microservices tend to create boundaries that are more strictly enforced. The code, data, and resources are inaccessible except through what is exposed via public APIs. There is real friction to exposing additional data or models from one service and then consuming it in another service, even if both services are owned by the same team (and more so if a different team is involved). At least in my experience, spaghetti was still primarily the domain of the internal code rather than the service APIs.

There's also a number of benefits as far as non-technical management of microservices. Knowledge transfer is easier since again, the scope is narrower and the service does less. This is a great benefit as people rotate in and out of the team, and also simplifies shifting the service to another team if it becomes clear the service better aligns with another team's responsibilities.

[+] stillbourne|5 years ago|reply
Microservices are middleware. That's the way I treat them, anyway. I build them as the glue between the backend and frontend. They handle things like authentication, business logic, data aggregation, caching, persistence, and generally act as an API gateway. I really only ever use microservices to handle cross-cutting concerns that are not directly implemented by the backend but have a frontend requirement. The only way that is harmful is if you write bad code. Bad code is always harmful.